Compare commits

44 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 8c76a8c7dc | |
| | 0780db55e1 | |
| | 1def53b7fe | |
| | f9c98a377d | |
| | 93bf3e8a1f | |
| | d202f3539b | |
| | 12e73d4898 | |
| | 449dd7cc0b | |
| | c0e87abaee | |
| | c8485776fe | |
| | aa3e2d0fe6 | |
| | 98c64f9d5f | |
| | 7d81c17cca | |
| | 652d396a81 | |
| | 1d83c493af | |
| | cf35cbe59e | |
| | 9221c08418 | |
| | 48d43c14b1 | |
| | 776efa74a4 | |
| | b14e83f499 | |
| | a9b6b65238 | |
| | a036b7f122 | |
| | 0bccf23db3 | |
| | 0cbd594512 | |
| | efe93a5f57 | |
| | 3fda66b85b | |
| | ddfb6707b4 | |
| | a69f7a9531 | |
| | d583aa43ca | |
| | 3abb573142 | |
| | d556dada9f | |
| | ce7d49484f | |
| | e4acd18429 | |
| | c2d4784810 | |
| | 76bea6c577 | |
| | 3ff0b0b2c4 | |
| | a1c7dc17ce | |
| | 24723b2f10 | |
| | f998e9e949 | |
| | 73661f7d1f | |
| | b5d4db07d1 | |
| | c6a022132b | |
| | 195c0ccf8a | |
| | b09a86c0c1 | |
.gitignore (vendored): 1 line changed

```
@@ -214,3 +214,4 @@ git_issues.md
todo_executor.md
protect-all-except-feature.sh
manage-collab.sh
publish.sh
```
CHANGELOG.md: 107 lines changed

@@ -1,5 +1,112 @@

# Changelog

## [0.3.746] November 29, 2024

### Major Features

1. Enhanced Docker Support (Nov 29, 2024)
   - Improved GPU support in Docker images.
   - Dockerfile refactored for better platform-specific installations.
   - Introduced new Docker commands for different platforms:
     - `basic-amd64`, `all-amd64`, `gpu-amd64` for AMD64.
     - `basic-arm64`, `all-arm64`, `gpu-arm64` for ARM64.

### Infrastructure & Documentation

- Enhanced README.md to improve user guidance and installation instructions.
- Added installation instructions for Playwright setup in the README.
- Created and updated examples in `docs/examples/quickstart_async.py` to be more useful and user-friendly.
- Updated `requirements.txt` with a new `pydantic` dependency.
- Bumped version number in `crawl4ai/__version__.py` to 0.3.746.

### Breaking Changes

- Streamlined application structure:
  - Removed static pages and related code from `main.py`, which might affect existing deployments relying on static content.

### Development Updates

- Developed the `post_install` method in `crawl4ai/install.py` to streamline post-installation setup tasks.
- Refined migration processes in `crawl4ai/migrations.py` with enhanced logging for better error visibility.
- Updated `docker-compose.yml` to support local and hub services for different architectures, enhancing build and deploy capabilities.
- Refactored example test cases in `docs/examples/docker_example.py` to facilitate comprehensive testing.

### README.md

Updated README with new Docker commands and setup instructions. Enhanced installation instructions and guidance.

### crawl4ai/install.py

Added post-install script functionality. Introduced the `post_install` method for automation of post-installation tasks.

### crawl4ai/migrations.py

Improved migration logging. Refined migration processes and added better logging.

### docker-compose.yml

Refactored docker-compose for better service management. Updated to define services for different platforms and versions.

### requirements.txt

Updated dependencies. Added `pydantic` to the requirements file.

### crawl4ai/__version__.py

Updated version number. Bumped version number to 0.3.746.

### docs/examples/quickstart_async.py

Enhanced example scripts. Uncommented example usage in the async guide for user functionality.

### main.py

Refactored code to improve maintainability. Streamlined app structure by removing static pages code.

## [0.3.743] November 27, 2024

Enhance features and documentation

- Updated version to 0.3.743
- Improved ManagedBrowser configuration with dynamic host/port
- Implemented fast HTML formatting in the web crawler
- Enhanced markdown generation with a new generator class
- Improved sanitization and utility functions
- Added contributor details and pull request acknowledgments
- Updated documentation for clearer usage scenarios
- Adjusted tests to reflect class name changes

### CONTRIBUTORS.md

Added new contributors and pull request details. Updated community contributions and acknowledged pull requests.

### crawl4ai/__version__.py

Version update. Bumped version to 0.3.743.

### crawl4ai/async_crawler_strategy.py

Improved ManagedBrowser configuration. Enhanced browser initialization with configurable host and debugging port; improved hook execution.
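A minimal usage sketch of the new configuration follows, based on the `ManagedBrowser.__init__` signature shown in the `crawl4ai/async_crawler_strategy.py` diff later in this compare; the host and port values are placeholders.

```python
# Sketch only: argument names come from the ManagedBrowser signature in this compare;
# host and debugging_port are the newly configurable parameters (defaults "localhost" / 9222).
from crawl4ai.async_crawler_strategy import ManagedBrowser

browser = ManagedBrowser(
    browser_type="chromium",
    headless=True,
    host="127.0.0.1",      # placeholder host
    debugging_port=9223,   # placeholder port
)
```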
### crawl4ai/async_webcrawler.py

Optimized HTML processing. Implemented `fast_format_html` for optimized HTML formatting; applied when `prettiify` is enabled.
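A minimal sketch of how the new flag is consumed, assuming a crawler set up as in the README quick start below; the parameter name `prettiify` is spelled exactly as it appears in the `async_webcrawler.py` diff at the end of this compare.

```python
# Sketch only: prettiify=True (spelling as in the source) routes
# result.cleaned_html through fast_format_html before it is returned.
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            prettiify=True,
        )
        print(result.cleaned_html[:500])

asyncio.run(main())
```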
### crawl4ai/content_scraping_strategy.py

Enhanced markdown generation strategy. Updated to use DefaultMarkdownGenerator and improved markdown generation with a filters option.

### crawl4ai/markdown_generation_strategy.py

Refactored markdown generation class. Renamed DefaultMarkdownGenerationStrategy to DefaultMarkdownGenerator; added content filter handling.
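A minimal sketch of the renamed class in use, with the import paths and constructor arguments taken from the README example later in this compare.

```python
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
from crawl4ai.content_filter_strategy import BM25ContentFilter

# DefaultMarkdownGenerator is the new name of DefaultMarkdownGenerationStrategy;
# it can be passed to crawler.arun(markdown_generator=...) as in the README example.
md_generator = DefaultMarkdownGenerator(
    content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
)
```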
### crawl4ai/utils.py

Enhanced utility functions. Improved input sanitization and enhanced the HTML formatting method.

### docs/md_v2/advanced/hooks-auth.md

Improved documentation for hooks. Updated code examples to include cookies in crawler strategy initialization.
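A minimal sketch of the hook pattern these updates describe, assuming the `set_hook` API used in the README examples; the hook receives the browser `context` keyword that `execute_hook` now forwards (see the `async_crawler_strategy.py` diff below), and the cookie values are placeholders.

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def before_goto(page, context=None, **kwargs):
    # The context keyword is forwarded by execute_hook(*args, **kwargs).
    if context is not None:
        # Placeholder cookie; replace with real authentication values.
        await context.add_cookies([
            {"name": "session_id", "value": "example-token", "url": "https://example.com"}
        ])
    return page

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        crawler.crawler_strategy.set_hook("before_goto", before_goto)
        result = await crawler.arun(url="https://example.com")
        print(result.success)

asyncio.run(main())
```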
### tests/async/test_markdown_genertor.py

Refactored tests to match the class renaming. Updated tests to use the renamed DefaultMarkdownGenerator class.

## [0.3.74] November 17, 2024

This changelog details the updates and changes introduced in Crawl4AI version 0.3.74. It's designed to inform developers about new features, modifications to existing components, removals, and other important information.
@@ -10,11 +10,21 @@ We would like to thank the following people for their contributions to Crawl4AI:

## Community Contributors

- [aadityakanjolia4](https://github.com/aadityakanjolia4) - Fix for `CustomHTML2Text` is not defined.
- [FractalMind](https://github.com/FractalMind) - Created the first official Docker Hub image and fixed Dockerfile errors
- [ketonkss4](https://github.com/ketonkss4) - Identified Selenium's new capabilities, helping reduce dependencies
- [jonymusky](https://github.com/jonymusky) - JavaScript execution documentation, and wait_for
- [datehoer](https://github.com/datehoer) - Add browser proxy support

## Pull Requests

- [dvschuyl](https://github.com/dvschuyl) - AsyncPlaywrightCrawlerStrategy page-evaluate context destroyed by navigation [#304](https://github.com/unclecode/crawl4ai/pull/304)
- [nelzomal](https://github.com/nelzomal) - Enhance development installation instructions [#286](https://github.com/unclecode/crawl4ai/pull/286)
- [HamzaFarhan](https://github.com/HamzaFarhan) - Handled the cases where markdown_with_citations, references_markdown, and filtered_html might not be defined [#293](https://github.com/unclecode/crawl4ai/pull/293)
- [NanmiCoder](https://github.com/NanmiCoder) - fix: crawler strategy exception handling and fixes [#271](https://github.com/unclecode/crawl4ai/pull/271)
- [paulokuong](https://github.com/paulokuong) - fix: RAWL4_AI_BASE_DIRECTORY should be Path object instead of string [#298](https://github.com/unclecode/crawl4ai/pull/298)

## Other Contributors

- [Gokhan](https://github.com/gkhngyk)
Dockerfile: 25 lines changed

```dockerfile
@@ -1,6 +1,9 @@
# syntax=docker/dockerfile:1.4

# Build arguments
ARG TARGETPLATFORM
ARG BUILDPLATFORM

# Other build arguments
ARG PYTHON_VERSION=3.10

# Base stage with system dependencies

@@ -63,13 +66,13 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
    && rm -rf /var/lib/apt/lists/*

# GPU support if enabled and architecture is supported
RUN if [ "$ENABLE_GPU" = "true" ] && [ "$(dpkg --print-architecture)" != "arm64" ] ; then \
    apt-get update && apt-get install -y --no-install-recommends \
    nvidia-cuda-toolkit \
    && rm -rf /var/lib/apt/lists/* ; \
else \
    echo "Skipping NVIDIA CUDA Toolkit installation (unsupported architecture or GPU disabled)"; \
fi
RUN if [ "$ENABLE_GPU" = "true" ] && [ "$TARGETPLATFORM" = "linux/amd64" ] ; then \
    apt-get update && apt-get install -y --no-install-recommends \
    nvidia-cuda-toolkit \
    && rm -rf /var/lib/apt/lists/* ; \
else \
    echo "Skipping NVIDIA CUDA Toolkit installation (unsupported platform or GPU disabled)"; \
fi

# Create and set working directory
WORKDIR /app

@@ -120,7 +123,11 @@ RUN pip install --no-cache-dir \
RUN mkdocs build

# Install Playwright and browsers
RUN playwright install
RUN if [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
    playwright install chromium; \
    elif [ "$TARGETPLATFORM" = "linux/arm64" ]; then \
    playwright install chromium; \
    fi

# Expose port
EXPOSE 8000 11235 9222 8080
```
README.md: 765 lines changed
@@ -1,4 +1,4 @@
|
||||
# 🔥🕷️ Crawl4AI: LLM Friendly Web Crawler & Scraper
|
||||
# 🔥🕷️ Crawl4AI: Crawl Smarter, Faster, Freely. For AI.
|
||||
|
||||
<a href="https://trendshift.io/repositories/11716" target="_blank"><img src="https://trendshift.io/api/badge/repositories/11716" alt="unclecode%2Fcrawl4ai | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
|
||||
|
||||
@@ -9,22 +9,115 @@
|
||||
[](https://github.com/unclecode/crawl4ai/pulls)
|
||||
[](https://github.com/unclecode/crawl4ai/blob/main/LICENSE)
|
||||
|
||||
Crawl4AI simplifies asynchronous web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. 🆓🌐
|
||||
Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant community. It delivers blazing-fast, AI-ready web crawling tailored for LLMs, AI agents, and data pipelines. Open source, flexible, and built for real-time performance, Crawl4AI empowers developers with unmatched speed, precision, and deployment ease.
|
||||
|
||||
## New in 0.3.74 ✨
|
||||
[✨ Check out latest update v0.3.745](#-recent-updates)
|
||||
|
||||
## 🧐 Why Crawl4AI?

1. **Built for LLMs**: Creates smart, concise Markdown optimized for RAG and fine-tuning applications.
2. **Lightning Fast**: Delivers results 6x faster with real-time, cost-efficient performance.
3. **Flexible Browser Control**: Offers session management, proxies, and custom hooks for seamless data access.
4. **Heuristic Intelligence**: Uses advanced algorithms for efficient extraction, reducing reliance on costly models.
5. **Open Source & Deployable**: Fully open-source with no API keys—ready for Docker and cloud integration.
6. **Thriving Community**: Actively maintained by a vibrant community and the #1 trending GitHub repository.
## 🚀 Quick Start

1. Install Crawl4AI:

```bash
pip install crawl4ai
crawl4ai-setup  # Setup the browser
```

2. Run a simple web crawl:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CacheMode

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(url="https://www.nbcnews.com/business")
        # Soon this will change to result.markdown
        print(result.markdown_v2.raw_markdown)

if __name__ == "__main__":
    asyncio.run(main())
```
|
||||
## ✨ Features
|
||||
|
||||
<details>
|
||||
<summary>📝 <strong>Markdown Generation</strong></summary>
|
||||
|
||||
- 🧹 **Clean Markdown**: Generates clean, structured Markdown with accurate formatting.
|
||||
- 🎯 **Fit Markdown**: Heuristic-based filtering to remove noise and irrelevant parts for AI-friendly processing.
|
||||
- 🔗 **Citations and References**: Converts page links into a numbered reference list with clean citations.
|
||||
- 🛠️ **Custom Strategies**: Users can create their own Markdown generation strategies tailored to specific needs.
|
||||
- 📚 **BM25 Algorithm**: Employs BM25-based filtering for extracting core information and removing irrelevant content.
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>📊 <strong>Structured Data Extraction</strong></summary>
|
||||
|
||||
- 🤖 **LLM-Driven Extraction**: Supports all LLMs (open-source and proprietary) for structured data extraction.
|
||||
- 🧱 **Chunking Strategies**: Implements chunking (topic-based, regex, sentence-level) for targeted content processing.
|
||||
- 🌌 **Cosine Similarity**: Find relevant content chunks based on user queries for semantic extraction.
|
||||
- 🔎 **CSS-Based Extraction**: Fast schema-based data extraction using XPath and CSS selectors.
|
||||
- 🔧 **Schema Definition**: Define custom schemas for extracting structured JSON from repetitive patterns.
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>🌐 <strong>Browser Integration</strong></summary>
|
||||
|
||||
- 🖥️ **Managed Browser**: Use user-owned browsers with full control, avoiding bot detection.
|
||||
- 🔄 **Remote Browser Control**: Connect to Chrome Developer Tools Protocol for remote, large-scale data extraction.
|
||||
- 🔒 **Session Management**: Preserve browser states and reuse them for multi-step crawling.
|
||||
- 🧩 **Proxy Support**: Seamlessly connect to proxies with authentication for secure access.
|
||||
- ⚙️ **Full Browser Control**: Modify headers, cookies, user agents, and more for tailored crawling setups.
|
||||
- 🌍 **Multi-Browser Support**: Compatible with Chromium, Firefox, and WebKit.
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>🔎 <strong>Crawling & Scraping</strong></summary>
|
||||
|
||||
- 🖼️ **Media Support**: Extract images, audio, videos, and responsive image formats like `srcset` and `picture`.
|
||||
- 🚀 **Dynamic Crawling**: Execute JS, with async or sync waiting, for dynamic content extraction.
|
||||
- 📸 **Screenshots**: Capture page screenshots during crawling for debugging or analysis.
|
||||
- 📂 **Raw Data Crawling**: Directly process raw HTML (`raw:`) or local files (`file://`).
|
||||
- 🔗 **Comprehensive Link Extraction**: Extracts internal, external links, and embedded iframe content.
|
||||
- 🛠️ **Customizable Hooks**: Define hooks at every step to customize crawling behavior.
|
||||
- 💾 **Caching**: Cache data for improved speed and to avoid redundant fetches.
|
||||
- 📄 **Metadata Extraction**: Retrieve structured metadata from web pages.
|
||||
- 📡 **IFrame Content Extraction**: Seamless extraction from embedded iframe content.
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>🚀 <strong>Deployment</strong></summary>
|
||||
|
||||
- 🐳 **Dockerized Setup**: Optimized Docker image with API server for easy deployment.
|
||||
- 🔄 **API Gateway**: One-click deployment with secure token authentication for API-based workflows.
|
||||
- 🌐 **Scalable Architecture**: Designed for mass-scale production and optimized server performance.
|
||||
- ⚙️ **DigitalOcean Deployment**: Ready-to-deploy configurations for DigitalOcean and similar platforms.
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>🎯 <strong>Additional Features</strong></summary>
|
||||
|
||||
- 🕶️ **Stealth Mode**: Avoid bot detection by mimicking real users.
|
||||
- 🏷️ **Tag-Based Content Extraction**: Refine crawling based on custom tags, headers, or metadata.
|
||||
- 🔗 **Link Analysis**: Extract and analyze all links for detailed data exploration.
|
||||
- 🛡️ **Error Handling**: Robust error management for seamless execution.
|
||||
- 🔐 **CORS & Static Serving**: Supports filesystem-based caching and cross-origin requests.
|
||||
- 📖 **Clear Documentation**: Simplified and updated guides for onboarding and advanced usage.
|
||||
- 🙌 **Community Recognition**: Acknowledges contributors and pull requests for transparency.
|
||||
|
||||
</details>
|
||||
|
||||
- 🚀 **Blazing Fast Scraping**: Significantly improved scraping speed.
|
||||
- 📥 **Download Manager**: Integrated file crawling, downloading, and tracking within `CrawlResult`.
|
||||
- 📝 **Markdown Strategy**: Flexible system for custom markdown generation and formats.
|
||||
- 🔗 **LLM-Friendly Citations**: Auto-converts links to numbered citations with reference lists.
|
||||
- 🔎 **Markdown Filter**: BM25-based content extraction for cleaner, relevant markdown.
|
||||
- 🖼️ **Image Extraction**: Supports `srcset`, `picture`, and responsive image formats.
|
||||
- 🗂️ **Local/Raw HTML**: Crawl `file://` paths and raw HTML (`raw:`) directly.
|
||||
- 🤖 **Browser Control**: Custom browser setups with stealth integration to bypass bots.
|
||||
- ☁️ **API & Cache Boost**: CORS, static serving, and enhanced filesystem-based caching.
|
||||
- 🐳 **API Gateway**: Run as an API service with secure token authentication.
|
||||
- 🛠️ **Database Upgrades**: Optimized for larger content sets with faster caching.
|
||||
- 🐛 **Bug Fixes**: Resolved browser context issues, memory leaks, and improved error handling.
|
||||
|
||||
|
||||
## Try it Now!
|
||||
@@ -33,53 +126,27 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc
|
||||
|
||||
✨ Visit our [Documentation Website](https://crawl4ai.com/mkdocs/)
|
||||
|
||||
## Features ✨
|
||||
|
||||
- 🆓 Completely free and open-source
|
||||
- 🚀 Blazing fast performance, outperforming many paid services
|
||||
- 🤖 LLM-friendly output formats (JSON, cleaned HTML, markdown)
|
||||
- 🌐 Multi-browser support (Chromium, Firefox, WebKit)
|
||||
- 🌍 Supports crawling multiple URLs simultaneously
|
||||
- 🎨 Extracts and returns all media tags (Images, Audio, and Video)
|
||||
- 🔗 Extracts all external and internal links
|
||||
- 📚 Extracts metadata from the page
|
||||
- 🔄 Custom hooks for authentication, headers, and page modifications
|
||||
- 🕵️ User-agent customization
|
||||
- 🖼️ Takes screenshots of pages with enhanced error handling
|
||||
- 📜 Executes multiple custom JavaScripts before crawling
|
||||
- 📊 Generates structured output without LLM using JsonCssExtractionStrategy
|
||||
- 📚 Various chunking strategies: topic-based, regex, sentence, and more
|
||||
- 🧠 Advanced extraction strategies: cosine clustering, LLM, and more
|
||||
- 🎯 CSS selector support for precise data extraction
|
||||
- 📝 Passes instructions/keywords to refine extraction
|
||||
- 🔒 Proxy support with authentication for enhanced access
|
||||
- 🔄 Session management for complex multi-page crawling
|
||||
- 🌐 Asynchronous architecture for improved performance
|
||||
- 🖼️ Improved image processing with lazy-loading detection
|
||||
- 🕰️ Enhanced handling of delayed content loading
|
||||
- 🔑 Custom headers support for LLM interactions
|
||||
- 🖼️ iframe content extraction for comprehensive analysis
|
||||
- ⏱️ Flexible timeout and delayed content retrieval options
|
||||
|
||||
## Installation 🛠️
|
||||
|
||||
Crawl4AI offers flexible installation options to suit various use cases. You can install it as a Python package or use Docker.
|
||||
|
||||
### Using pip 🐍
|
||||
<details>
|
||||
<summary>🐍 <strong>Using pip</strong></summary>
|
||||
|
||||
Choose the installation option that best fits your needs:
|
||||
|
||||
#### Basic Installation
|
||||
### Basic Installation
|
||||
|
||||
For basic web crawling and scraping tasks:
|
||||
|
||||
```bash
|
||||
pip install crawl4ai
|
||||
crawl4ai-setup # Setup the browser
|
||||
```
|
||||
|
||||
By default, this will install the asynchronous version of Crawl4AI, using Playwright for web crawling.
|
||||
|
||||
👉 Note: When you install Crawl4AI, the setup script should automatically install and set up Playwright. However, if you encounter any Playwright-related errors, you can manually install it using one of these methods:
|
||||
👉 **Note**: When you install Crawl4AI, the `crawl4ai-setup` command should automatically install and set up Playwright. However, if you encounter any Playwright-related errors, you can manually install it using one of these methods:
|
||||
|
||||
1. Through the command line:
|
||||
|
||||
@@ -95,25 +162,42 @@ By default, this will install the asynchronous version of Crawl4AI, using Playwr
|
||||
|
||||
This second method has proven to be more reliable in some cases.
|
||||
|
||||
#### Installation with Synchronous Version
|
||||
---
|
||||
|
||||
If you need the synchronous version using Selenium:
|
||||
### Installation with Synchronous Version
|
||||
|
||||
The sync version is deprecated and will be removed in future versions. If you need the synchronous version using Selenium:
|
||||
|
||||
```bash
|
||||
pip install crawl4ai[sync]
|
||||
```
|
||||
|
||||
#### Development Installation
|
||||
---
|
||||
|
||||
### Development Installation
|
||||
|
||||
For contributors who plan to modify the source code:
|
||||
|
||||
```bash
|
||||
git clone https://github.com/unclecode/crawl4ai.git
|
||||
cd crawl4ai
|
||||
pip install -e .
|
||||
pip install -e . # Basic installation in editable mode
|
||||
```
|
||||
|
||||
## One-Click Deployment 🚀
|
||||
Install optional features:
|
||||
|
||||
```bash
|
||||
pip install -e ".[torch]" # With PyTorch features
|
||||
pip install -e ".[transformer]" # With Transformer features
|
||||
pip install -e ".[cosine]" # With cosine similarity features
|
||||
pip install -e ".[sync]" # With synchronous crawling (Selenium)
|
||||
pip install -e ".[all]" # Install all optional features
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>🚀 <strong>One-Click Deployment</strong></summary>
|
||||
|
||||
Deploy your own instance of Crawl4AI with one click:
|
||||
|
||||
@@ -124,54 +208,191 @@ Deploy your own instance of Crawl4AI with one click:
|
||||
The deploy will:
|
||||
- Set up a Docker container with Crawl4AI
|
||||
- Configure Playwright and all dependencies
|
||||
- Start the FastAPI server on port 11235
|
||||
- Start the FastAPI server on port `11235`
|
||||
- Set up health checks and auto-deployment
|
||||
|
||||
### Using Docker 🐳
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>🐳 <strong>Using Docker</strong></summary>
|
||||
|
||||
Crawl4AI is available as Docker images for easy deployment. You can either pull directly from Docker Hub (recommended) or build from the repository.
|
||||
|
||||
#### Option 1: Docker Hub (Recommended)
|
||||
---
|
||||
|
||||
<details>
|
||||
<summary>🐳 <strong>Option 1: Docker Hub (Recommended)</strong></summary>
|
||||
|
||||
Choose the appropriate image based on your platform and needs:
|
||||
|
||||
### For AMD64 (Regular Linux/Windows):
|
||||
```bash
|
||||
# Pull and run from Docker Hub (choose one):
|
||||
docker pull unclecode/crawl4ai:basic # Basic crawling features
|
||||
docker pull unclecode/crawl4ai:all # Full installation (ML, LLM support)
|
||||
docker pull unclecode/crawl4ai:gpu # GPU-enabled version
|
||||
# Basic version (recommended)
|
||||
docker pull unclecode/crawl4ai:basic-amd64
|
||||
docker run -p 11235:11235 unclecode/crawl4ai:basic-amd64
|
||||
|
||||
# Run the container
|
||||
docker run -p 11235:11235 unclecode/crawl4ai:basic # Replace 'basic' with your chosen version
|
||||
# Full ML/LLM support
|
||||
docker pull unclecode/crawl4ai:all-amd64
|
||||
docker run -p 11235:11235 unclecode/crawl4ai:all-amd64
|
||||
|
||||
# In case you want to set platform to arm64
|
||||
docker run --platform linux/arm64 -p 11235:11235 unclecode/crawl4ai:basic
|
||||
|
||||
# In case to allocate more shared memory for the container
|
||||
docker run --shm-size=2gb -p 11235:11235 unclecode/crawl4ai:basic
|
||||
# With GPU support
|
||||
docker pull unclecode/crawl4ai:gpu-amd64
|
||||
docker run -p 11235:11235 unclecode/crawl4ai:gpu-amd64
|
||||
```
|
||||
|
||||
#### Option 2: Build from Repository
|
||||
### For ARM64 (M1/M2 Macs, ARM servers):
|
||||
```bash
|
||||
# Basic version (recommended)
|
||||
docker pull unclecode/crawl4ai:basic-arm64
|
||||
docker run -p 11235:11235 unclecode/crawl4ai:basic-arm64
|
||||
|
||||
# Full ML/LLM support
|
||||
docker pull unclecode/crawl4ai:all-arm64
|
||||
docker run -p 11235:11235 unclecode/crawl4ai:all-arm64
|
||||
|
||||
# With GPU support
|
||||
docker pull unclecode/crawl4ai:gpu-arm64
|
||||
docker run -p 11235:11235 unclecode/crawl4ai:gpu-arm64
|
||||
```
|
||||
|
||||
Need more memory? Add `--shm-size`:
|
||||
```bash
|
||||
docker run --shm-size=2gb -p 11235:11235 unclecode/crawl4ai:basic-amd64
|
||||
```
|
||||
|
||||
Test the installation:
|
||||
```bash
|
||||
curl http://localhost:11235/health
|
||||
```
|
||||
|
||||
### For Raspberry Pi (32-bit) (coming soon):
|
||||
```bash
|
||||
# Pull and run basic version (recommended for Raspberry Pi)
|
||||
docker pull unclecode/crawl4ai:basic-armv7
|
||||
docker run -p 11235:11235 unclecode/crawl4ai:basic-armv7
|
||||
|
||||
# With increased shared memory if needed
|
||||
docker run --shm-size=2gb -p 11235:11235 unclecode/crawl4ai:basic-armv7
|
||||
```
|
||||
|
||||
Note: Due to hardware constraints, only the basic version is recommended for Raspberry Pi.
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>🐳 <strong>Option 2: Build from Repository</strong></summary>
|
||||
|
||||
Build the image locally based on your platform:
|
||||
|
||||
```bash
|
||||
# Clone the repository
|
||||
git clone https://github.com/unclecode/crawl4ai.git
|
||||
cd crawl4ai
|
||||
|
||||
# Build the image
|
||||
docker build -t crawl4ai:local \
|
||||
--build-arg INSTALL_TYPE=basic \ # Options: basic, all
|
||||
# For AMD64 (Regular Linux/Windows)
|
||||
docker build --platform linux/amd64 \
|
||||
--tag crawl4ai:local \
|
||||
--build-arg INSTALL_TYPE=basic \
|
||||
.
|
||||
|
||||
# In case you want to set platform to arm64
|
||||
docker build -t crawl4ai:local \
|
||||
--build-arg INSTALL_TYPE=basic \ # Options: basic, all
|
||||
--platform linux/arm64 \
|
||||
# For ARM64 (M1/M2 Macs, ARM servers)
|
||||
docker build --platform linux/arm64 \
|
||||
--tag crawl4ai:local \
|
||||
--build-arg INSTALL_TYPE=basic \
|
||||
.
|
||||
|
||||
# Run your local build
|
||||
docker run -p 11235:11235 crawl4ai:local
|
||||
```
|
||||
|
||||
Quick test (works for both options):
|
||||
Build options:
|
||||
- INSTALL_TYPE=basic (default): Basic crawling features
|
||||
- INSTALL_TYPE=all: Full ML/LLM support
|
||||
- ENABLE_GPU=true: Add GPU support
|
||||
|
||||
Example with all options:
|
||||
```bash
|
||||
docker build --platform linux/amd64 \
|
||||
--tag crawl4ai:local \
|
||||
--build-arg INSTALL_TYPE=all \
|
||||
--build-arg ENABLE_GPU=true \
|
||||
.
|
||||
```
|
||||
|
||||
Run your local build:
|
||||
```bash
|
||||
# Regular run
|
||||
docker run -p 11235:11235 crawl4ai:local
|
||||
|
||||
# With increased shared memory
|
||||
docker run --shm-size=2gb -p 11235:11235 crawl4ai:local
|
||||
```
|
||||
|
||||
Test the installation:
|
||||
```bash
|
||||
curl http://localhost:11235/health
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>🐳 <strong>Option 3: Using Docker Compose</strong></summary>
|
||||
|
||||
Docker Compose provides a more structured way to run Crawl4AI, especially when dealing with environment variables and multiple configurations.
|
||||
|
||||
```bash
|
||||
# Clone the repository
|
||||
git clone https://github.com/unclecode/crawl4ai.git
|
||||
cd crawl4ai
|
||||
```
|
||||
|
||||
### For AMD64 (Regular Linux/Windows):
|
||||
```bash
|
||||
# Build and run locally
|
||||
docker-compose --profile local-amd64 up
|
||||
|
||||
# Run from Docker Hub
|
||||
VERSION=basic docker-compose --profile hub-amd64 up # Basic version
|
||||
VERSION=all docker-compose --profile hub-amd64 up # Full ML/LLM support
|
||||
VERSION=gpu docker-compose --profile hub-amd64 up # GPU support
|
||||
```
|
||||
|
||||
### For ARM64 (M1/M2 Macs, ARM servers):
|
||||
```bash
|
||||
# Build and run locally
|
||||
docker-compose --profile local-arm64 up
|
||||
|
||||
# Run from Docker Hub
|
||||
VERSION=basic docker-compose --profile hub-arm64 up # Basic version
|
||||
VERSION=all docker-compose --profile hub-arm64 up # Full ML/LLM support
|
||||
VERSION=gpu docker-compose --profile hub-arm64 up # GPU support
|
||||
```
|
||||
|
||||
Environment variables (optional):
|
||||
```bash
|
||||
# Create a .env file
|
||||
CRAWL4AI_API_TOKEN=your_token
|
||||
OPENAI_API_KEY=your_openai_key
|
||||
CLAUDE_API_KEY=your_claude_key
|
||||
```
|
||||
|
||||
The compose file includes:
|
||||
- Memory management (4GB limit, 1GB reserved)
|
||||
- Shared memory volume for browser support
|
||||
- Health checks
|
||||
- Auto-restart policy
|
||||
- All necessary port mappings
|
||||
|
||||
Test the installation:
|
||||
```bash
|
||||
curl http://localhost:11235/health
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
---
|
||||
|
||||
### Quick Test
|
||||
|
||||
Run a quick test (works for both Docker options):
|
||||
|
||||
```python
|
||||
import requests
|
||||
|
||||
@@ -182,149 +403,140 @@ response = requests.post(
|
||||
)
|
||||
task_id = response.json()["task_id"]
|
||||
|
||||
# Get results
|
||||
# Continue polling until the task is complete (status="completed")
|
||||
result = requests.get(f"http://localhost:11235/task/{task_id}")
|
||||
```
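The request body above is elided by the diff hunk. Below is a hedged sketch of the full round trip; the `/crawl` endpoint path, the `urls`/`priority` payload keys, and the polling interval are assumptions based on the Docker deployment guide rather than this diff.

```python
# Hedged sketch: submit a crawl job to the Dockerized API and poll until it completes.
# The endpoint path and payload keys are assumptions, not taken from this compare.
import time
import requests

response = requests.post(
    "http://localhost:11235/crawl",
    json={"urls": "https://www.nbcnews.com/business", "priority": 10},
)
task_id = response.json()["task_id"]

# Continue polling until the task is complete (status="completed")
while True:
    result = requests.get(f"http://localhost:11235/task/{task_id}")
    if result.json().get("status") == "completed":
        break
    time.sleep(1)

print(result.json())
```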
|
||||
|
||||
For advanced configuration, environment variables, and usage examples, see our [Docker Deployment Guide](https://crawl4ai.com/mkdocs/basic/docker-deployment/).
|
||||
For more examples, see our [Docker Examples](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_example.py). For advanced configuration, environment variables, and usage examples, see our [Docker Deployment Guide](https://crawl4ai.com/mkdocs/basic/docker-deployment/).
|
||||
|
||||
</details>
|
||||
|
||||
|
||||
## Quick Start 🚀
|
||||
## 🔬 Advanced Usage Examples 🔬
|
||||
|
||||
You can check the project structure in the [docs/examples](https://github.com/unclecode/crawl4ai/tree/main/docs/examples) directory. There you can find a variety of examples; some popular ones are shared here.
|
||||
|
||||
<details>
|
||||
<summary>📝 <strong>Heuristic Markdown Generation with Clean and Fit Markdown</strong></summary>
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
from crawl4ai import AsyncWebCrawler
|
||||
from crawl4ai import AsyncWebCrawler, CacheMode
|
||||
from crawl4ai.content_filter_strategy import BM25ContentFilter
|
||||
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
|
||||
|
||||
async def main():
|
||||
async with AsyncWebCrawler(verbose=True) as crawler:
|
||||
result = await crawler.arun(url="https://www.nbcnews.com/business")
|
||||
print(result.markdown)
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
## Advanced Usage 🔬
|
||||
|
||||
### Executing JavaScript and Using CSS Selectors
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
from crawl4ai import AsyncWebCrawler
|
||||
|
||||
async def main():
|
||||
async with AsyncWebCrawler(verbose=True) as crawler:
|
||||
js_code = ["const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"]
|
||||
async with AsyncWebCrawler(
|
||||
headless=True,
|
||||
verbose=True,
|
||||
) as crawler:
|
||||
result = await crawler.arun(
|
||||
url="https://www.nbcnews.com/business",
|
||||
js_code=js_code,
|
||||
css_selector=".wide-tease-item__description",
|
||||
bypass_cache=True
|
||||
url="https://docs.micronaut.io/4.7.6/guide/",
|
||||
cache_mode=CacheMode.ENABLED,
|
||||
markdown_generator=DefaultMarkdownGenerator(
|
||||
content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
|
||||
),
|
||||
)
|
||||
print(result.extracted_content)
|
||||
print(len(result.markdown))
|
||||
print(len(result.fit_markdown))
|
||||
print(len(result.markdown_v2.fit_markdown))
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
### Using a Proxy
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>🖥️ <strong>Executing JavaScript & Extracting Structured Data without LLMs</strong></summary>
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
from crawl4ai import AsyncWebCrawler
|
||||
|
||||
async def main():
|
||||
async with AsyncWebCrawler(verbose=True, proxy="http://127.0.0.1:7890") as crawler:
|
||||
result = await crawler.arun(
|
||||
url="https://www.nbcnews.com/business",
|
||||
bypass_cache=True
|
||||
)
|
||||
print(result.markdown)
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
### Extracting Structured Data without LLM
|
||||
|
||||
The `JsonCssExtractionStrategy` allows for precise extraction of structured data from web pages using CSS selectors.
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
import json
|
||||
from crawl4ai import AsyncWebCrawler
|
||||
from crawl4ai import AsyncWebCrawler, CacheMode
|
||||
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
|
||||
import json
|
||||
|
||||
async def extract_news_teasers():
|
||||
async def main():
|
||||
schema = {
|
||||
"name": "News Teaser Extractor",
|
||||
"baseSelector": ".wide-tease-item__wrapper",
|
||||
"fields": [
|
||||
{
|
||||
"name": "category",
|
||||
"selector": ".unibrow span[data-testid='unibrow-text']",
|
||||
"type": "text",
|
||||
},
|
||||
{
|
||||
"name": "headline",
|
||||
"selector": ".wide-tease-item__headline",
|
||||
"type": "text",
|
||||
},
|
||||
{
|
||||
"name": "summary",
|
||||
"selector": ".wide-tease-item__description",
|
||||
"type": "text",
|
||||
},
|
||||
{
|
||||
"name": "time",
|
||||
"selector": "[data-testid='wide-tease-date']",
|
||||
"type": "text",
|
||||
},
|
||||
{
|
||||
"name": "image",
|
||||
"type": "nested",
|
||||
"selector": "picture.teasePicture img",
|
||||
"fields": [
|
||||
{"name": "src", "type": "attribute", "attribute": "src"},
|
||||
{"name": "alt", "type": "attribute", "attribute": "alt"},
|
||||
],
|
||||
},
|
||||
{
|
||||
"name": "link",
|
||||
"selector": "a[href]",
|
||||
"type": "attribute",
|
||||
"attribute": "href",
|
||||
},
|
||||
],
|
||||
}
|
||||
"name": "KidoCode Courses",
|
||||
"baseSelector": "section.charge-methodology .w-tab-content > div",
|
||||
"fields": [
|
||||
{
|
||||
"name": "section_title",
|
||||
"selector": "h3.heading-50",
|
||||
"type": "text",
|
||||
},
|
||||
{
|
||||
"name": "section_description",
|
||||
"selector": ".charge-content",
|
||||
"type": "text",
|
||||
},
|
||||
{
|
||||
"name": "course_name",
|
||||
"selector": ".text-block-93",
|
||||
"type": "text",
|
||||
},
|
||||
{
|
||||
"name": "course_description",
|
||||
"selector": ".course-content-text",
|
||||
"type": "text",
|
||||
},
|
||||
{
|
||||
"name": "course_icon",
|
||||
"selector": ".image-92",
|
||||
"type": "attribute",
|
||||
"attribute": "src"
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)
|
||||
|
||||
async with AsyncWebCrawler(verbose=True) as crawler:
|
||||
async with AsyncWebCrawler(
|
||||
headless=False,
|
||||
verbose=True
|
||||
) as crawler:
|
||||
|
||||
# Create the JavaScript that handles clicking multiple times
|
||||
js_click_tabs = """
|
||||
(async () => {
|
||||
const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div");
|
||||
|
||||
for(let tab of tabs) {
|
||||
// scroll to the tab
|
||||
tab.scrollIntoView();
|
||||
tab.click();
|
||||
// Wait for content to load and animations to complete
|
||||
await new Promise(r => setTimeout(r, 500));
|
||||
}
|
||||
})();
|
||||
"""
|
||||
|
||||
result = await crawler.arun(
|
||||
url="https://www.nbcnews.com/business",
|
||||
extraction_strategy=extraction_strategy,
|
||||
bypass_cache=True,
|
||||
url="https://www.kidocode.com/degrees/technology",
|
||||
extraction_strategy=JsonCssExtractionStrategy(schema, verbose=True),
|
||||
js_code=[js_click_tabs],
|
||||
cache_mode=CacheMode.BYPASS
|
||||
)
|
||||
|
||||
assert result.success, "Failed to crawl the page"
|
||||
companies = json.loads(result.extracted_content)
|
||||
print(f"Successfully extracted {len(companies)} companies")
|
||||
print(json.dumps(companies[0], indent=2))
|
||||
|
||||
news_teasers = json.loads(result.extracted_content)
|
||||
print(f"Successfully extracted {len(news_teasers)} news teasers")
|
||||
print(json.dumps(news_teasers[0], indent=2))
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(extract_news_teasers())
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
For more advanced usage examples, check out our [Examples](https://crawl4ai.com/mkdocs/extraction/css-advanced/) section in the documentation.
|
||||
</details>
|
||||
|
||||
### Extracting Structured Data with OpenAI
|
||||
<details>
|
||||
<summary>📚 <strong>Extracting Structured Data with LLMs</strong></summary>
|
||||
|
||||
```python
|
||||
import os
|
||||
import asyncio
|
||||
from crawl4ai import AsyncWebCrawler
|
||||
from crawl4ai import AsyncWebCrawler, CacheMode
|
||||
from crawl4ai.extraction_strategy import LLMExtractionStrategy
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
@@ -339,6 +551,8 @@ async def main():
|
||||
url='https://openai.com/api/pricing/',
|
||||
word_count_threshold=1,
|
||||
extraction_strategy=LLMExtractionStrategy(
|
||||
# Here you can use any provider that Litellm library supports, for instance: ollama/qwen2
|
||||
# provider="ollama/qwen2", api_token="no-token",
|
||||
provider="openai/gpt-4o", api_token=os.getenv('OPENAI_API_KEY'),
|
||||
schema=OpenAIModelFee.schema(),
|
||||
extraction_type="schema",
|
||||
@@ -346,7 +560,7 @@ async def main():
|
||||
Do not miss any models in the entire content. One extracted model JSON format should look like this:
|
||||
{"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}."""
|
||||
),
|
||||
bypass_cache=True,
|
||||
cache_mode=CacheMode.BYPASS,
|
||||
)
|
||||
print(result.extracted_content)
|
||||
|
||||
@@ -354,143 +568,98 @@ if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
### Session Management and Dynamic Content Crawling
|
||||
</details>
|
||||
|
||||
Crawl4AI excels at handling complex scenarios, such as crawling multiple pages with dynamic content loaded via JavaScript. Here's an example of crawling GitHub commits across multiple pages:
|
||||
<details>
|
||||
<summary>🤖 <strong>Using Your Own Browser with a Custom User Profile</strong></summary>
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
import re
|
||||
from bs4 import BeautifulSoup
|
||||
import os, sys
|
||||
from pathlib import Path
|
||||
import asyncio, time
|
||||
from crawl4ai import AsyncWebCrawler
|
||||
|
||||
async def crawl_typescript_commits():
|
||||
first_commit = ""
|
||||
async def on_execution_started(page):
|
||||
nonlocal first_commit
|
||||
try:
|
||||
while True:
|
||||
await page.wait_for_selector('li.Box-sc-g0xbh4-0 h4')
|
||||
commit = await page.query_selector('li.Box-sc-g0xbh4-0 h4')
|
||||
commit = await commit.evaluate('(element) => element.textContent')
|
||||
commit = re.sub(r'\s+', '', commit)
|
||||
if commit and commit != first_commit:
|
||||
first_commit = commit
|
||||
break
|
||||
await asyncio.sleep(0.5)
|
||||
except Exception as e:
|
||||
print(f"Warning: New content didn't appear after JavaScript execution: {e}")
|
||||
async def test_news_crawl():
|
||||
# Create a persistent user data directory
|
||||
user_data_dir = os.path.join(Path.home(), ".crawl4ai", "browser_profile")
|
||||
os.makedirs(user_data_dir, exist_ok=True)
|
||||
|
||||
async with AsyncWebCrawler(verbose=True) as crawler:
|
||||
crawler.crawler_strategy.set_hook('on_execution_started', on_execution_started)
|
||||
|
||||
url = "https://github.com/microsoft/TypeScript/commits/main"
|
||||
session_id = "typescript_commits_session"
|
||||
all_commits = []
|
||||
|
||||
js_next_page = """
|
||||
const button = document.querySelector('a[data-testid="pagination-next-button"]');
|
||||
if (button) button.click();
|
||||
"""
|
||||
|
||||
for page in range(3): # Crawl 3 pages
|
||||
result = await crawler.arun(
|
||||
url=url,
|
||||
session_id=session_id,
|
||||
css_selector="li.Box-sc-g0xbh4-0",
|
||||
js=js_next_page if page > 0 else None,
|
||||
bypass_cache=True,
|
||||
js_only=page > 0
|
||||
)
|
||||
|
||||
assert result.success, f"Failed to crawl page {page + 1}"
|
||||
|
||||
soup = BeautifulSoup(result.cleaned_html, 'html.parser')
|
||||
commits = soup.select("li")
|
||||
all_commits.extend(commits)
|
||||
|
||||
print(f"Page {page + 1}: Found {len(commits)} commits")
|
||||
|
||||
await crawler.crawler_strategy.kill_session(session_id)
|
||||
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(crawl_typescript_commits())
|
||||
async with AsyncWebCrawler(
|
||||
verbose=True,
|
||||
headless=True,
|
||||
user_data_dir=user_data_dir,
|
||||
use_persistent_context=True,
|
||||
headers={
|
||||
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
|
||||
"Accept-Language": "en-US,en;q=0.5",
|
||||
"Accept-Encoding": "gzip, deflate, br",
|
||||
"DNT": "1",
|
||||
"Connection": "keep-alive",
|
||||
"Upgrade-Insecure-Requests": "1",
|
||||
"Sec-Fetch-Dest": "document",
|
||||
"Sec-Fetch-Mode": "navigate",
|
||||
"Sec-Fetch-Site": "none",
|
||||
"Sec-Fetch-User": "?1",
|
||||
"Cache-Control": "max-age=0",
|
||||
}
|
||||
) as crawler:
|
||||
url = "ADDRESS_OF_A_CHALLENGING_WEBSITE"
|
||||
|
||||
result = await crawler.arun(
|
||||
url,
|
||||
cache_mode=CacheMode.BYPASS,
|
||||
magic=True,
|
||||
)
|
||||
|
||||
print(f"Successfully crawled {url}")
|
||||
print(f"Content length: {len(result.markdown)}")
|
||||
```
|
||||
|
||||
This example demonstrates Crawl4AI's ability to handle complex scenarios where content is loaded asynchronously. It crawls multiple pages of GitHub commits, executing JavaScript to load new content and using custom hooks to ensure data is loaded before proceeding.
|
||||
|
||||
For more advanced usage examples, check out our [Examples](https://crawl4ai.com/mkdocs/tutorial/episode_12_Session-Based_Crawling_for_Dynamic_Websites/) section in the documentation.
|
||||
</details>
|
||||
|
||||
|
||||
## Speed Comparison 🚀
|
||||
## ✨ Recent Updates
|
||||
|
||||
Crawl4AI is designed with speed as a primary focus. Our goal is to provide the fastest possible response with high-quality data extraction, minimizing abstractions between the data and the user.
|
||||
- 🚀 **Improved ManagedBrowser Configuration**: Dynamic host and port support for more flexible browser management.
|
||||
- 📝 **Enhanced Markdown Generation**: New generator class for better formatting and customization.
|
||||
- ⚡ **Fast HTML Formatting**: Significantly optimized HTML formatting in the web crawler.
|
||||
- 🛠️ **Utility & Sanitization Upgrades**: Improved sanitization and expanded utility functions for streamlined workflows.
|
||||
- 👥 **Acknowledgments**: Added contributor details and pull request acknowledgments for better transparency.
|
||||
|
||||
We've conducted a speed comparison between Crawl4AI and Firecrawl, a paid service. The results demonstrate Crawl4AI's superior performance:
|
||||
|
||||
```bash
|
||||
Firecrawl:
|
||||
Time taken: 7.02 seconds
|
||||
Content length: 42074 characters
|
||||
Images found: 49
|
||||
|
||||
Crawl4AI (simple crawl):
|
||||
Time taken: 1.60 seconds
|
||||
Content length: 18238 characters
|
||||
Images found: 49
|
||||
|
||||
Crawl4AI (with JavaScript execution):
|
||||
Time taken: 4.64 seconds
|
||||
Content length: 40869 characters
|
||||
Images found: 89
|
||||
```
|
||||
|
||||
As you can see, Crawl4AI outperforms Firecrawl significantly:
|
||||
|
||||
- Simple crawl: Crawl4AI is over 4 times faster than Firecrawl.
|
||||
- With JavaScript execution: Even when executing JavaScript to load more content (doubling the number of images found), Crawl4AI is still faster than Firecrawl's simple crawl.
|
||||
|
||||
You can find the full comparison code in our repository at `docs/examples/crawl4ai_vs_firecrawl.py`.
|
||||
|
||||
## Documentation 📚
|
||||
## 📖 Documentation & Roadmap
|
||||
|
||||
For detailed documentation, including installation instructions, advanced features, and API reference, visit our [Documentation Website](https://crawl4ai.com/mkdocs/).
|
||||
|
||||
## Crawl4AI Roadmap 🗺️
|
||||
Moreover, to check our development plans and upcoming features, see our [Roadmap](https://github.com/unclecode/crawl4ai/blob/main/ROADMAP.md).
|
||||
|
||||
For detailed information on our development plans and upcoming features, check out our [Roadmap](https://github.com/unclecode/crawl4ai/blob/main/ROADMAP.md).
|
||||
<details>
|
||||
<summary>📈 <strong>Development TODOs</strong></summary>
|
||||
|
||||
### Advanced Crawling Systems 🔧
|
||||
- [x] 0. Graph Crawler: Smart website traversal using graph search algorithms for comprehensive nested page extraction
|
||||
- [ ] 1. Question-Based Crawler: Natural language driven web discovery and content extraction
|
||||
- [ ] 2. Knowledge-Optimal Crawler: Smart crawling that maximizes knowledge while minimizing data extraction
|
||||
- [ ] 3. Agentic Crawler: Autonomous system for complex multi-step crawling operations
|
||||
|
||||
### Specialized Features 🛠️
|
||||
- [ ] 4. Automated Schema Generator: Convert natural language to extraction schemas
|
||||
- [ ] 5. Domain-Specific Scrapers: Pre-configured extractors for common platforms (academic, e-commerce)
|
||||
- [ ] 6. Web Embedding Index: Semantic search infrastructure for crawled content
|
||||
|
||||
### Development Tools 🔨
|
||||
- [ ] 7. Interactive Playground: Web UI for testing, comparing strategies with AI assistance
|
||||
- [ ] 8. Performance Monitor: Real-time insights into crawler operations
|
||||
- [ ] 9. Cloud Integration: One-click deployment solutions across cloud providers
|
||||
|
||||
### Community & Growth 🌱
|
||||
- [ ] 10. Sponsorship Program: Structured support system with tiered benefits
|
||||
- [ ] 11. Educational Content: "How to Crawl" video series and interactive tutorials
|
||||
|
||||
## Contributing 🤝
|
||||
</details>
|
||||
|
||||
## 🤝 Contributing
|
||||
|
||||
We welcome contributions from the open-source community. Check out our [contribution guidelines](https://github.com/unclecode/crawl4ai/blob/main/CONTRIBUTING.md) for more information.
|
||||
|
||||
## License 📄
|
||||
## 📄 License
|
||||
|
||||
Crawl4AI is released under the [Apache 2.0 License](https://github.com/unclecode/crawl4ai/blob/main/LICENSE).
|
||||
|
||||
## Contact 📧
|
||||
## 📧 Contact
|
||||
|
||||
For questions, suggestions, or feedback, feel free to reach out:
|
||||
|
||||
@@ -500,32 +669,32 @@ For questions, suggestions, or feedback, feel free to reach out:
|
||||
|
||||
Happy Crawling! 🕸️🚀
|
||||
|
||||
## 🗾 Mission
|
||||
|
||||
# Mission
|
||||
Our mission is to unlock the value of personal and enterprise data by transforming digital footprints into structured, tradeable assets. Crawl4AI empowers individuals and organizations with open-source tools to extract and structure data, fostering a shared data economy.
|
||||
|
||||
Our mission is to unlock the untapped potential of personal and enterprise data in the digital age. In today's world, individuals and organizations generate vast amounts of valuable digital footprints, yet this data remains largely uncapitalized as a true asset.
|
||||
We envision a future where AI is powered by real human knowledge, ensuring data creators directly benefit from their contributions. By democratizing data and enabling ethical sharing, we are laying the foundation for authentic AI advancement.
|
||||
|
||||
Our open-source solution empowers developers and innovators to build tools for data extraction and structuring, laying the foundation for a new era of data ownership. By transforming personal and enterprise data into structured, tradeable assets, we're creating opportunities for individuals to capitalize on their digital footprints and for organizations to unlock the value of their collective knowledge.
|
||||
<details>
|
||||
<summary>🔑 <strong>Key Opportunities</strong></summary>
|
||||
|
||||
- **Data Capitalization**: Transform digital footprints into measurable, valuable assets.
|
||||
- **Authentic AI Data**: Provide AI systems with real human insights.
|
||||
- **Shared Economy**: Create a fair data marketplace that benefits data creators.
|
||||
|
||||
This democratization of data represents the first step toward a shared data economy, where willing participation in data sharing drives AI advancement while ensuring the benefits flow back to data creators. Through this approach, we're building a future where AI development is powered by authentic human knowledge rather than synthetic alternatives.
|
||||
</details>
|
||||
|
||||

|
||||
<details>
|
||||
<summary>🚀 <strong>Development Pathway</strong></summary>
|
||||
|
||||
For a detailed exploration of our vision, opportunities, and pathway forward, please see our [full mission statement](./MISSION.md).
|
||||
1. **Open-Source Tools**: Community-driven platforms for transparent data extraction.
|
||||
2. **Digital Asset Structuring**: Tools to organize and value digital knowledge.
|
||||
3. **Ethical Data Marketplace**: A secure, fair platform for exchanging structured data.
|
||||
|
||||
## Key Opportunities
|
||||
For more details, see our [full mission statement](./MISSION.md).
|
||||
</details>
|
||||
|
||||
- **Data Capitalization**: Transform digital footprints into valuable assets that can appear on personal and enterprise balance sheets
|
||||
- **Authentic Data**: Unlock the vast reservoir of real human insights and knowledge for AI advancement
|
||||
- **Shared Economy**: Create new value streams where data creators directly benefit from their contributions
|
||||
|
||||
## Development Pathway
|
||||
|
||||
1. **Open-Source Foundation**: Building transparent, community-driven data extraction tools
|
||||
2. **Data Capitalization Platform**: Creating tools to structure and value digital assets
|
||||
3. **Shared Data Marketplace**: Establishing an economic platform for ethical data exchange
|
||||
|
||||
For a detailed exploration of our vision, challenges, and solutions, please see our [full mission statement](./MISSION.md).
|
||||
|
||||
|
||||
## Star History
|
||||
|
||||
```python
@@ -4,7 +4,6 @@ from .async_webcrawler import AsyncWebCrawler, CacheMode
from .models import CrawlResult
from .__version__ import __version__
# __version__ = "0.3.73"

__all__ = [
    "AsyncWebCrawler",
```
```diff
@@ -1,2 +1,2 @@
 # crawl4ai/_version.py
-__version__ = "0.3.742"
+__version__ = "0.3.746"
```
@@ -15,7 +15,7 @@ import hashlib
|
||||
import json
|
||||
import uuid
|
||||
from .models import AsyncCrawlResponse
|
||||
|
||||
from .utils import create_box_message
|
||||
from playwright_stealth import StealthConfig, stealth_async
|
||||
|
||||
stealth_config = StealthConfig(
|
||||
@@ -35,13 +35,14 @@ stealth_config = StealthConfig(
|
||||
|
||||
|
||||
class ManagedBrowser:
|
||||
def __init__(self, browser_type: str = "chromium", user_data_dir: Optional[str] = None, headless: bool = False, logger = None):
|
||||
def __init__(self, browser_type: str = "chromium", user_data_dir: Optional[str] = None, headless: bool = False, logger = None, host: str = "localhost", debugging_port: int = 9222):
|
||||
self.browser_type = browser_type
|
||||
self.user_data_dir = user_data_dir
|
||||
self.headless = headless
|
||||
self.browser_process = None
|
||||
self.temp_dir = None
|
||||
self.debugging_port = 9222
|
||||
self.debugging_port = debugging_port
|
||||
self.host = host
|
||||
self.logger = logger
|
||||
self.shutting_down = False
|
||||
|
||||
@@ -70,7 +71,7 @@ class ManagedBrowser:
|
||||
# Monitor browser process output for errors
|
||||
asyncio.create_task(self._monitor_browser_process())
|
||||
await asyncio.sleep(2) # Give browser time to start
|
||||
return f"http://localhost:{self.debugging_port}"
|
||||
return f"http://{self.host}:{self.debugging_port}"
|
||||
except Exception as e:
|
||||
await self.cleanup()
|
||||
raise Exception(f"Failed to start browser: {e}")
|
||||
@@ -320,10 +321,10 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
|
||||
"--disable-infobars",
|
||||
"--window-position=0,0",
|
||||
"--ignore-certificate-errors",
|
||||
"--ignore-certificate-errors-spki-list",
|
||||
"--ignore-certificate-errors-spki-list"
|
||||
]
|
||||
}
|
||||
|
||||
|
||||
# Add channel if specified (try Chrome first)
|
||||
if self.chrome_channel:
|
||||
browser_args["channel"] = self.chrome_channel
|
||||
@@ -416,13 +417,13 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
|
||||
else:
|
||||
raise ValueError(f"Invalid hook type: {hook_type}")
|
||||
|
||||
async def execute_hook(self, hook_type: str, *args):
|
||||
async def execute_hook(self, hook_type: str, *args, **kwargs):
|
||||
hook = self.hooks.get(hook_type)
|
||||
if hook:
|
||||
if asyncio.iscoroutinefunction(hook):
|
||||
return await hook(*args)
|
||||
return await hook(*args, **kwargs)
|
||||
else:
|
||||
return hook(*args)
|
||||
return hook(*args, **kwargs)
|
||||
return args[0] if args else None
|
||||
|
||||
def update_user_agent(self, user_agent: str):
|
||||
@@ -642,6 +643,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
|
||||
session_id = kwargs.get("session_id")
|
||||
|
||||
# Handle page creation differently for managed browser
|
||||
context = None
|
||||
if self.use_managed_browser:
|
||||
if session_id:
|
||||
# Reuse existing session if available
|
||||
@@ -760,20 +762,23 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
                 return response

         if not kwargs.get("js_only", False):
-            await self.execute_hook('before_goto', page)
+            await self.execute_hook('before_goto', page, context = context)

-            response = await page.goto(
-                url,
-                # wait_until=kwargs.get("wait_until", ["domcontentloaded", "networkidle"]),
-                wait_until=kwargs.get("wait_until", "domcontentloaded"),
-                timeout=kwargs.get("page_timeout", 60000)
-            )
+            try:
+                response = await page.goto(
+                    url,
+                    # wait_until=kwargs.get("wait_until", ["domcontentloaded", "networkidle"]),
+                    wait_until=kwargs.get("wait_until", "domcontentloaded"),
+                    timeout=kwargs.get("page_timeout", 60000),
+                )
+            except Error as e:
+                raise RuntimeError(f"Failed on navigating ACS-GOTO :\n{str(e)}")

             # response = await page.goto("about:blank")
             # await page.evaluate(f"window.location.href = '{url}'")

-            await self.execute_hook('after_goto', page)
+            await self.execute_hook('after_goto', page, context = context)

         # Get status code and headers
         status_code = response.status
@@ -838,7 +843,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
             # await page.wait_for_timeout(100)

             # Check for on execution event
-            await self.execute_hook('on_execution_started', page)
+            await self.execute_hook('on_execution_started', page, context = context)

         if kwargs.get("simulate_user", False) or kwargs.get("magic", False):
             # Simulate user interactions
@@ -915,7 +920,11 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
                 });
             }
             """
-            await page.evaluate(update_image_dimensions_js)
+            try:
+                await page.wait_for_load_state()
+                await page.evaluate(update_image_dimensions_js)
+            except Exception as e:
+                raise RuntimeError(f"Error updating image dimensions ACS-UPDATE_IMAGE_DIMENSIONS_JS: {str(e)}")

             # Wait a bit for any onload events to complete
             await page.wait_for_timeout(100)
@@ -924,7 +933,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
             if kwargs.get("process_iframes", False):
                 page = await self.process_iframes(page)

-            await self.execute_hook('before_retrieve_html', page)
+            await self.execute_hook('before_retrieve_html', page, context = context)
             # Check if delay_before_return_html is set then wait for that time
             delay_before_return_html = kwargs.get("delay_before_return_html")
             if delay_before_return_html:
@@ -935,7 +944,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
                 await self.remove_overlay_elements(page)

             html = await page.content()
-            await self.execute_hook('before_return_html', page, html)
+            await self.execute_hook('before_return_html', page, html, context = context)

             # Check if kwargs has screenshot=True then take screenshot
             screenshot_data = None
@@ -25,8 +25,11 @@ from .config import (
 from .utils import (
     sanitize_input_encode,
     InvalidCSSSelectorError,
-    format_html
+    format_html,
+    fast_format_html,
+    create_box_message
 )

 from urllib.parse import urlparse
 import random
 from .__version__ import __version__ as crawl4ai_version
@@ -325,15 +328,15 @@ class AsyncWebCrawler:
                 if not hasattr(e, "msg"):
                     e.msg = str(e)
                 # print(f"{Fore.RED}{self.tag_format('ERROR')} {self.log_icons['ERROR']} Failed to crawl {cache_context.display_url[:URL_LOG_SHORTEN_LENGTH]}... | {e.msg}{Style.RESET_ALL}")

                 self.logger.error_status(
                     url=cache_context.display_url,
-                    error=e.msg,
+                    error=create_box_message(e.msg, type = "error"),
                     tag="ERROR"
                 )
                 return CrawlResult(
                     url=url,
                     html="",
                     markdown=f"[ERROR] 🚫 arun(): Failed to crawl {cache_context.display_url}, error: {e.msg}",
                     success=False,
                     error_message=e.msg
                 )
@@ -534,16 +537,17 @@ class AsyncWebCrawler:
                     "timing": time.perf_counter() - t1
                 }
             )

             screenshot = None if not screenshot else screenshot

+            if kwargs.get("prettiify", False):
+                cleaned_html = fast_format_html(cleaned_html)

             return CrawlResult(
                 url=url,
                 html=html,
-                cleaned_html=format_html(cleaned_html),
+                cleaned_html=cleaned_html,
                 markdown_v2=markdown_v2,
                 markdown=markdown,
                 fit_markdown=fit_markdown,
@@ -10,7 +10,7 @@ from urllib.parse import urljoin
 from requests.exceptions import InvalidSchema
 # from .content_cleaning_strategy import ContentCleaningStrategy
 from .content_filter_strategy import RelevantContentFilter, BM25ContentFilter#, HeuristicContentFilter
-from .markdown_generation_strategy import MarkdownGenerationStrategy, DefaultMarkdownGenerationStrategy
+from .markdown_generation_strategy import MarkdownGenerationStrategy, DefaultMarkdownGenerator
 from .models import MarkdownGenerationResult
 from .utils import (
     sanitize_input_encode,
@@ -105,21 +105,28 @@ class WebScrapingStrategy(ContentScrapingStrategy):
         Returns:
             Dict containing markdown content in various formats
         """
-        markdown_generator: Optional[MarkdownGenerationStrategy] = kwargs.get('markdown_generator', DefaultMarkdownGenerationStrategy())
+        markdown_generator: Optional[MarkdownGenerationStrategy] = kwargs.get('markdown_generator', DefaultMarkdownGenerator())

         if markdown_generator:
+            try:
+                if kwargs.get('fit_markdown', False) and not markdown_generator.content_filter:
+                    markdown_generator.content_filter = BM25ContentFilter(
+                        user_query=kwargs.get('fit_markdown_user_query', None),
+                        bm25_threshold=kwargs.get('fit_markdown_bm25_threshold', 1.0)
+                    )

                 markdown_result: MarkdownGenerationResult = markdown_generator.generate_markdown(
                     cleaned_html=cleaned_html,
                     base_url=url,
-                    html2text_options=kwargs.get('html2text', {}),
-                    content_filter=kwargs.get('content_filter', None)
+                    html2text_options=kwargs.get('html2text', {})
                 )

+                help_message = """"""

                 return {
                     'markdown': markdown_result.raw_markdown,
-                    'fit_markdown': markdown_result.fit_markdown or "Set flag 'fit_markdown' to True to get cleaned HTML content.",
-                    'fit_html': markdown_result.fit_html or "Set flag 'fit_markdown' to True to get cleaned HTML content.",
+                    'fit_markdown': markdown_result.fit_markdown,
+                    'fit_html': markdown_result.fit_html,
                     'markdown_v2': markdown_result
                 }
+            except Exception as e:
crawl4ai/install.py (new file, 44 lines)
@@ -0,0 +1,44 @@
+import subprocess
+import sys
+import asyncio
+from .async_logger import AsyncLogger, LogLevel
+
+# Initialize logger
+logger = AsyncLogger(log_level=LogLevel.DEBUG, verbose=True)
+
+def post_install():
+    """Run all post-installation tasks"""
+    logger.info("Running post-installation setup...", tag="INIT")
+    install_playwright()
+    run_migration()
+    logger.success("Post-installation setup completed!", tag="COMPLETE")
+
+def install_playwright():
+    logger.info("Installing Playwright browsers...", tag="INIT")
+    try:
+        subprocess.check_call([sys.executable, "-m", "playwright", "install"])
+        logger.success("Playwright installation completed successfully.", tag="COMPLETE")
+    except subprocess.CalledProcessError as e:
+        logger.error(f"Error during Playwright installation: {e}", tag="ERROR")
+        logger.warning(
+            "Please run 'python -m playwright install' manually after the installation."
+        )
+    except Exception as e:
+        logger.error(f"Unexpected error during Playwright installation: {e}", tag="ERROR")
+        logger.warning(
+            "Please run 'python -m playwright install' manually after the installation."
+        )
+
+def run_migration():
+    """Initialize database during installation"""
+    try:
+        logger.info("Starting database initialization...", tag="INIT")
+        from crawl4ai.async_database import async_db_manager
+
+        asyncio.run(async_db_manager.initialize())
+        logger.success("Database initialization completed successfully.", tag="COMPLETE")
+    except ImportError:
+        logger.warning("Database module not found. Will initialize on first use.")
+    except Exception as e:
+        logger.warning(f"Database initialization failed: {e}")
+        logger.warning("Database will be initialized on first use")
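The same routine is also registered as a `crawl4ai-setup` console script in the setup.py changes further down, so it can be triggered after installation; a minimal sketch of calling it directly:

```python
# Hypothetical manual invocation of the new post-install helper
from crawl4ai.install import post_install

if __name__ == "__main__":
    post_install()  # installs Playwright browsers, then initializes the database
```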
@@ -11,6 +11,8 @@ LINK_PATTERN = re.compile(r'!?\[([^\]]+)\]\(([^)]+?)(?:\s+"([^"]*)")?\)')

 class MarkdownGenerationStrategy(ABC):
     """Abstract base class for markdown generation strategies."""
+    def __init__(self, content_filter: Optional[RelevantContentFilter] = None):
+        self.content_filter = content_filter

     @abstractmethod
     def generate_markdown(self,
@@ -23,8 +25,10 @@ class MarkdownGenerationStrategy(ABC):
         """Generate markdown from cleaned HTML."""
         pass

-class DefaultMarkdownGenerationStrategy(MarkdownGenerationStrategy):
+class DefaultMarkdownGenerator(MarkdownGenerationStrategy):
     """Default implementation of markdown generation strategy."""
+    def __init__(self, content_filter: Optional[RelevantContentFilter] = None):
+        super().__init__(content_filter)

     def convert_links_to_citations(self, markdown: str, base_url: str = "") -> Tuple[str, str]:
         link_map = {}
@@ -84,14 +88,18 @@ class DefaultMarkdownGenerationStrategy(MarkdownGenerationStrategy):
         raw_markdown = raw_markdown.replace(' ```', '```')

         # Convert links to citations
         markdown_with_citations: str = ""
         references_markdown: str = ""
         if citations:
             markdown_with_citations, references_markdown = self.convert_links_to_citations(
                 raw_markdown, base_url
             )

         # Generate fit markdown if content filter is provided
-        fit_markdown: Optional[str] = None
-        if content_filter:
+        fit_markdown: Optional[str] = ""
+        filtered_html: Optional[str] = ""
+        if content_filter or self.content_filter:
+            content_filter = content_filter or self.content_filter
             filtered_html = content_filter.filter_content(cleaned_html)
             filtered_html = '\n'.join('<div>{}</div>'.format(s) for s in filtered_html)
             fit_markdown = h.handle(filtered_html)
@@ -101,7 +109,7 @@ class DefaultMarkdownGenerationStrategy(MarkdownGenerationStrategy):
             markdown_with_citations=markdown_with_citations,
             references_markdown=references_markdown,
             fit_markdown=fit_markdown,
-            fit_html=filtered_html
+            fit_html=filtered_html,
         )

 def fast_urljoin(base: str, url: str) -> str:
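With the generator renamed to `DefaultMarkdownGenerator` and able to carry its own content filter, fit markdown no longer depends on filter kwargs reaching `generate_markdown`. A minimal sketch of that wiring, mirroring the quickstart changes further down (the URL is only a placeholder):

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CacheMode
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
from crawl4ai.content_filter_strategy import BM25ContentFilter

async def main():
    # The generator carries the filter, so fit_markdown/fit_html get populated
    generator = DefaultMarkdownGenerator(
        content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
    )
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://example.com",
            markdown_generator=generator,
            cache_mode=CacheMode.BYPASS,
        )
        print(result.markdown_v2.fit_markdown[:300])

if __name__ == "__main__":
    asyncio.run(main())
```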
@@ -9,9 +9,13 @@ import aiofiles
 import shutil
 import time
 from datetime import datetime
+from .async_logger import AsyncLogger, LogLevel

-logging.basicConfig(level=logging.INFO)
-logger = logging.getLogger(__name__)
+# Initialize logger
+logger = AsyncLogger(log_level=LogLevel.DEBUG, verbose=True)

+# logging.basicConfig(level=logging.INFO)
+# logger = logging.getLogger(__name__)

 class DatabaseMigration:
     def __init__(self, db_path: str):
@@ -55,7 +59,8 @@ class DatabaseMigration:

     async def migrate_database(self):
         """Migrate existing database to file-based storage"""
-        logger.info("Starting database migration...")
+        # logger.info("Starting database migration...")
+        logger.info("Starting database migration...", tag="INIT")

         try:
             async with aiosqlite.connect(self.db_path) as db:
@@ -91,19 +96,25 @@ class DatabaseMigration:

                     migrated_count += 1
                     if migrated_count % 100 == 0:
-                        logger.info(f"Migrated {migrated_count} records...")
+                        logger.info(f"Migrated {migrated_count} records...", tag="INIT")

                 await db.commit()
-                logger.info(f"Migration completed. {migrated_count} records processed.")
+                logger.success(f"Migration completed. {migrated_count} records processed.", tag="COMPLETE")

         except Exception as e:
-            logger.error(f"Migration failed: {e}")
-            raise
+            # logger.error(f"Migration failed: {e}")
+            logger.error(
+                message="Migration failed: {error}",
+                tag="ERROR",
+                params={"error": str(e)}
+            )
+            raise e

 async def backup_database(db_path: str) -> str:
     """Create backup of existing database"""
     if not os.path.exists(db_path):
-        logger.info("No existing database found. Skipping backup.")
+        logger.info("No existing database found. Skipping backup.", tag="INIT")
         return None

     # Create backup with timestamp
@@ -116,11 +127,16 @@ async def backup_database(db_path: str) -> str:

         # Create backup
         shutil.copy2(db_path, backup_path)
-        logger.info(f"Database backup created at: {backup_path}")
+        logger.info(f"Database backup created at: {backup_path}", tag="COMPLETE")
         return backup_path
     except Exception as e:
-        logger.error(f"Backup failed: {e}")
-        raise
+        # logger.error(f"Backup failed: {e}")
+        logger.error(
+            message="Migration failed: {error}",
+            tag="ERROR",
+            params={"error": str(e)}
+        )
+        raise e

 async def run_migration(db_path: Optional[str] = None):
     """Run database migration"""
@@ -128,7 +144,7 @@ async def run_migration(db_path: Optional[str] = None):
         db_path = os.path.join(Path.home(), ".crawl4ai", "crawl4ai.db")

     if not os.path.exists(db_path):
-        logger.info("No existing database found. Skipping migration.")
+        logger.info("No existing database found. Skipping migration.", tag="INIT")
         return

     # Create backup first
@@ -17,7 +17,8 @@ from requests.exceptions import InvalidSchema
 import hashlib
 from typing import Optional, Tuple, Dict, Any
 import xxhash

 from colorama import Fore, Style, init
+import textwrap

 from .html2text import HTML2Text
 class CustomHTML2Text(HTML2Text):
@@ -103,12 +104,67 @@ class CustomHTML2Text(HTML2Text):
             self.preserved_content.append(data)
             return
         super().handle_data(data, entity_char)


 class InvalidCSSSelectorError(Exception):
     pass

+def create_box_message(
+    message: str,
+    type: str = "info",
+    width: int = 80,
+    add_newlines: bool = True,
+    double_line: bool = False
+) -> str:
+    init()
+
+    # Define border and text colors for different types
+    styles = {
+        "warning": (Fore.YELLOW, Fore.LIGHTYELLOW_EX, "⚠"),
+        "info": (Fore.BLUE, Fore.LIGHTBLUE_EX, "ℹ"),
+        "success": (Fore.GREEN, Fore.LIGHTGREEN_EX, "✓"),
+        "error": (Fore.RED, Fore.LIGHTRED_EX, "×"),
+    }
+
+    border_color, text_color, prefix = styles.get(type.lower(), styles["info"])
+
+    # Define box characters based on line style
+    box_chars = {
+        "single": ("─", "│", "┌", "┐", "└", "┘"),
+        "double": ("═", "║", "╔", "╗", "╚", "╝")
+    }
+    line_style = "double" if double_line else "single"
+    h_line, v_line, tl, tr, bl, br = box_chars[line_style]
+
+    # Process lines with lighter text color
+    formatted_lines = []
+    raw_lines = message.split('\n')
+
+    if raw_lines:
+        first_line = f"{prefix} {raw_lines[0].strip()}"
+        wrapped_first = textwrap.fill(first_line, width=width-4)
+        formatted_lines.extend(wrapped_first.split('\n'))
+
+        for line in raw_lines[1:]:
+            if line.strip():
+                wrapped = textwrap.fill(f" {line.strip()}", width=width-4)
+                formatted_lines.extend(wrapped.split('\n'))
+            else:
+                formatted_lines.append("")
+
+    # Create the box with colored borders and lighter text
+    horizontal_line = h_line * (width - 1)
+    box = [
+        f"{border_color}{tl}{horizontal_line}{tr}",
+        *[f"{border_color}{v_line}{text_color} {line:<{width-2}}{border_color}{v_line}" for line in formatted_lines],
+        f"{border_color}{bl}{horizontal_line}{br}{Style.RESET_ALL}"
+    ]
+
+    result = "\n".join(box)
+    if add_newlines:
+        result = f"\n{result}\n"
+
+    return result

 def calculate_semaphore_count():
     cpu_count = os.cpu_count()
     memory_gb = get_system_memory() / (1024 ** 3)  # Convert to GB
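Since the new helper is self-contained, it can be tried directly once these changes land; a small usage sketch:

```python
from crawl4ai.utils import create_box_message

# Renders a red-bordered box with an "×" prefix, wrapped to 60 columns
print(create_box_message(
    "Failed to crawl https://example.com\nTimeout after 60s",
    type="error",
    width=60,
))
```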
@@ -233,12 +289,17 @@ def sanitize_html(html):
 def sanitize_input_encode(text: str) -> str:
     """Sanitize input to handle potential encoding issues."""
-    try:
-        # Attempt to encode and decode as UTF-8 to handle potential encoding issues
-        return text.encode('utf-8', errors='ignore').decode('utf-8')
-    except UnicodeEncodeError as e:
-        print(f"Warning: Encoding issue detected. Some characters may be lost. Error: {e}")
-        # Fall back to ASCII if UTF-8 fails
-        return text.encode('ascii', errors='ignore').decode('ascii')
+    try:
+        if not text:
+            return ''
+        # Attempt to encode and decode as UTF-8 to handle potential encoding issues
+        return text.encode('utf-8', errors='ignore').decode('utf-8')
+    except UnicodeEncodeError as e:
+        print(f"Warning: Encoding issue detected. Some characters may be lost. Error: {e}")
+        # Fall back to ASCII if UTF-8 fails
+        return text.encode('ascii', errors='ignore').decode('ascii')
+    except Exception as e:
+        raise ValueError(f"Error sanitizing input: {str(e)}") from e

 def escape_json_string(s):
     """
@@ -1079,9 +1140,54 @@ def wrap_text(draw, text, font, max_width):
     return '\n'.join(lines)

 def format_html(html_string):
-    soup = BeautifulSoup(html_string, 'html.parser')
+    soup = BeautifulSoup(html_string, 'lxml.parser')
     return soup.prettify()

+def fast_format_html(html_string):
+    """
+    A fast HTML formatter that uses string operations instead of parsing.
+
+    Args:
+        html_string (str): The HTML string to format
+
+    Returns:
+        str: The formatted HTML string
+    """
+    # Initialize variables
+    indent = 0
+    indent_str = "  "  # Two spaces for indentation
+    formatted = []
+    in_content = False
+
+    # Split by < and > to separate tags and content
+    parts = html_string.replace('>', '>\n').replace('<', '\n<').split('\n')
+
+    for part in parts:
+        if not part.strip():
+            continue
+
+        # Handle closing tags
+        if part.startswith('</'):
+            indent -= 1
+            formatted.append(indent_str * indent + part)
+
+        # Handle self-closing tags
+        elif part.startswith('<') and part.endswith('/>'):
+            formatted.append(indent_str * indent + part)
+
+        # Handle opening tags
+        elif part.startswith('<'):
+            formatted.append(indent_str * indent + part)
+            indent += 1
+
+        # Handle content between tags
+        else:
+            content = part.strip()
+            if content:
+                formatted.append(indent_str * indent + content)
+
+    return '\n'.join(formatted)

 def normalize_url(href, base_url):
     """Normalize URLs to ensure consistent format"""
     from urllib.parse import urljoin, urlparse
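A quick way to see what the new string-based formatter produces compared with the BeautifulSoup-backed `format_html`; the input below is just an illustrative snippet:

```python
from crawl4ai.utils import fast_format_html

snippet = "<div><p>Hello</p><img src='a.png'/></div>"
print(fast_format_html(snippet))
# <div>
#   <p>
#     Hello
#   </p>
#   <img src='a.png'/>
# </div>
```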
@@ -1,27 +0,0 @@
-services:
-  crawl4ai:
-    image: unclecode/crawl4ai:basic  # Pull image from Docker Hub
-    ports:
-      - "11235:11235"  # FastAPI server
-      - "8000:8000"  # Alternative port
-      - "9222:9222"  # Browser debugging
-      - "8080:8080"  # Additional port
-    environment:
-      - CRAWL4AI_API_TOKEN=${CRAWL4AI_API_TOKEN:-}  # Optional API token
-      - OPENAI_API_KEY=${OPENAI_API_KEY:-}  # Optional OpenAI API key
-      - CLAUDE_API_KEY=${CLAUDE_API_KEY:-}  # Optional Claude API key
-    volumes:
-      - /dev/shm:/dev/shm  # Shared memory for browser operations
-    deploy:
-      resources:
-        limits:
-          memory: 4G
-        reservations:
-          memory: 1G
-    restart: unless-stopped
-    healthcheck:
-      test: ["CMD", "curl", "-f", "http://localhost:11235/health"]
-      interval: 30s
-      timeout: 10s
-      retries: 3
-      start_period: 40s
@@ -1,33 +0,0 @@
-services:
-  crawl4ai:
-    build:
-      context: .
-      dockerfile: Dockerfile
-      args:
-        PYTHON_VERSION: 3.10
-        INSTALL_TYPE: all
-        ENABLE_GPU: false
-    ports:
-      - "11235:11235"  # FastAPI server
-      - "8000:8000"  # Alternative port
-      - "9222:9222"  # Browser debugging
-      - "8080:8080"  # Additional port
-    environment:
-      - CRAWL4AI_API_TOKEN=${CRAWL4AI_API_TOKEN:-}  # Optional API token
-      - OPENAI_API_KEY=${OPENAI_API_KEY:-}  # Optional OpenAI API key
-      - CLAUDE_API_KEY=${CLAUDE_API_KEY:-}  # Optional Claude API key
-    volumes:
-      - /dev/shm:/dev/shm  # Shared memory for browser operations
-    deploy:
-      resources:
-        limits:
-          memory: 4G
-        reservations:
-          memory: 1G
-    restart: unless-stopped
-    healthcheck:
-      test: ["CMD", "curl", "-f", "http://localhost:11235/health"]
-      interval: 30s
-      timeout: 10s
-      retries: 3
-      start_period: 40s
@@ -1,5 +1,6 @@
 services:
-  crawl4ai:
+  # Local build services for different platforms
+  crawl4ai-amd64:
     build:
       context: .
       dockerfile: Dockerfile
@@ -7,35 +8,39 @@ services:
         PYTHON_VERSION: "3.10"
         INSTALL_TYPE: ${INSTALL_TYPE:-basic}
         ENABLE_GPU: false
-    profiles: ["local"]
-    ports:
-      - "11235:11235"
-      - "8000:8000"
-      - "9222:9222"
-      - "8080:8080"
-    environment:
-      - CRAWL4AI_API_TOKEN=${CRAWL4AI_API_TOKEN:-}
-      - OPENAI_API_KEY=${OPENAI_API_KEY:-}
-      - CLAUDE_API_KEY=${CLAUDE_API_KEY:-}
-    volumes:
-      - /dev/shm:/dev/shm
-    deploy:
-      resources:
-        limits:
-          memory: 4G
-        reservations:
-          memory: 1G
-    restart: unless-stopped
-    healthcheck:
-      test: ["CMD", "curl", "-f", "http://localhost:11235/health"]
-      interval: 30s
-      timeout: 10s
-      retries: 3
-      start_period: 40s
+    platforms:
+      - linux/amd64
+    profiles: ["local-amd64"]
+    extends: &base-config
+      file: docker-compose.yml
+      service: base-config

-  crawl4ai-hub:
-    image: unclecode/crawl4ai:basic
-    profiles: ["hub"]
+  crawl4ai-arm64:
+    build:
+      context: .
+      dockerfile: Dockerfile
+      args:
+        PYTHON_VERSION: "3.10"
+        INSTALL_TYPE: ${INSTALL_TYPE:-basic}
+        ENABLE_GPU: false
+    platforms:
+      - linux/arm64
+    profiles: ["local-arm64"]
+    extends: *base-config
+
+  # Hub services for different platforms and versions
+  crawl4ai-hub-amd64:
+    image: unclecode/crawl4ai:${VERSION:-basic}-amd64
+    profiles: ["hub-amd64"]
+    extends: *base-config
+
+  crawl4ai-hub-arm64:
+    image: unclecode/crawl4ai:${VERSION:-basic}-arm64
+    profiles: ["hub-arm64"]
+    extends: *base-config
+
+  # Base configuration to be extended
+  base-config:
+    ports:
+      - "11235:11235"
+      - "8000:8000"
@@ -59,4 +64,4 @@ services:
       interval: 30s
       timeout: 10s
       retries: 3
-      start_period: 40s
+      start_period: 40s
@@ -78,20 +78,20 @@ def test_docker_deployment(version="basic"):
     time.sleep(5)

     # Test cases based on version
-    # test_basic_crawl(tester)
-    # test_basic_crawl(tester)
-    # test_basic_crawl_sync(tester)
     test_basic_crawl_direct(tester)
+    test_basic_crawl(tester)
+    test_basic_crawl(tester)
+    test_basic_crawl_sync(tester)

-    # if version in ["full", "transformer"]:
-    #     test_cosine_extraction(tester)
+    if version in ["full", "transformer"]:
+        test_cosine_extraction(tester)

-    # test_js_execution(tester)
-    # test_css_selector(tester)
-    # test_structured_extraction(tester)
-    # test_llm_extraction(tester)
-    # test_llm_with_ollama(tester)
-    # test_screenshot(tester)
+    test_js_execution(tester)
+    test_css_selector(tester)
+    test_structured_extraction(tester)
+    test_llm_extraction(tester)
+    test_llm_with_ollama(tester)
+    test_screenshot(tester)


 def test_basic_crawl(tester: Crawl4AiTester):
@@ -13,7 +13,9 @@ import re
 from typing import Dict, List
 from bs4 import BeautifulSoup
 from pydantic import BaseModel, Field
-from crawl4ai import AsyncWebCrawler
+from crawl4ai import AsyncWebCrawler, CacheMode
+from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
+from crawl4ai.content_filter_strategy import BM25ContentFilter
 from crawl4ai.extraction_strategy import (
     JsonCssExtractionStrategy,
     LLMExtractionStrategy,
@@ -30,7 +32,7 @@ print("Website: https://crawl4ai.com")
 async def simple_crawl():
     print("\n--- Basic Usage ---")
     async with AsyncWebCrawler(verbose=True) as crawler:
-        result = await crawler.arun(url="https://www.nbcnews.com/business")
+        result = await crawler.arun(url="https://www.nbcnews.com/business", cache_mode= CacheMode.BYPASS)
         print(result.markdown[:500])  # Print first 500 characters

 async def simple_example_with_running_js_code():
@@ -51,7 +53,7 @@ async def simple_example_with_running_js_code():
             url="https://www.nbcnews.com/business",
             js_code=js_code,
             # wait_for=wait_for,
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
         )
         print(result.markdown[:500])  # Print first 500 characters

@@ -61,7 +63,7 @@ async def simple_example_with_css_selector():
         result = await crawler.arun(
             url="https://www.nbcnews.com/business",
             css_selector=".wide-tease-item__description",
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
         )
         print(result.markdown[:500])  # Print first 500 characters

@@ -74,16 +76,17 @@ async def use_proxy():
     async with AsyncWebCrawler(verbose=True, proxy="http://your-proxy-url:port") as crawler:
         result = await crawler.arun(
             url="https://www.nbcnews.com/business",
-            bypass_cache=True
+            cache_mode= CacheMode.BYPASS
         )
-        print(result.markdown[:500])  # Print first 500 characters
+        if result.success:
+            print(result.markdown[:500])  # Print first 500 characters

 async def capture_and_save_screenshot(url: str, output_path: str):
     async with AsyncWebCrawler(verbose=True) as crawler:
         result = await crawler.arun(
             url=url,
             screenshot=True,
-            bypass_cache=True
+            cache_mode= CacheMode.BYPASS
         )

         if result.success and result.screenshot:
@@ -132,48 +135,75 @@ async def extract_structured_data_using_llm(provider: str, api_token: str = None
             {"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}.""",
             extra_args=extra_args
         ),
-        bypass_cache=True,
+        cache_mode=CacheMode.BYPASS,
     )
     print(result.extracted_content)

 async def extract_structured_data_using_css_extractor():
     print("\n--- Using JsonCssExtractionStrategy for Fast Structured Output ---")
     schema = {
-        "name": "Coinbase Crypto Prices",
-        "baseSelector": ".cds-tableRow-t45thuk",
-        "fields": [
-            {
-                "name": "crypto",
-                "selector": "td:nth-child(1) h2",
-                "type": "text",
-            },
-            {
-                "name": "symbol",
-                "selector": "td:nth-child(1) p",
-                "type": "text",
-            },
-            {
-                "name": "price",
-                "selector": "td:nth-child(2)",
-                "type": "text",
+        "name": "KidoCode Courses",
+        "baseSelector": "section.charge-methodology .w-tab-content > div",
+        "fields": [
+            {
+                "name": "section_title",
+                "selector": "h3.heading-50",
+                "type": "text",
+            },
+            {
+                "name": "section_description",
+                "selector": ".charge-content",
+                "type": "text",
+            },
+            {
+                "name": "course_name",
+                "selector": ".text-block-93",
+                "type": "text",
+            },
+            {
+                "name": "course_description",
+                "selector": ".course-content-text",
+                "type": "text",
+            },
+            {
+                "name": "course_icon",
+                "selector": ".image-92",
+                "type": "attribute",
+                "attribute": "src"
+            }
+        ]
+    }
+
+    async with AsyncWebCrawler(
+        headless=True,
+        verbose=True
+    ) as crawler:
+
+        # Create the JavaScript that handles clicking multiple times
+        js_click_tabs = """
+        (async () => {
+            const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div");
+
+            for(let tab of tabs) {
+                // scroll to the tab
+                tab.scrollIntoView();
+                tab.click();
+                // Wait for content to load and animations to complete
+                await new Promise(r => setTimeout(r, 500));
+            }
-        ],
-    }
+        })();
+        """

-    extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)
-
-    async with AsyncWebCrawler(verbose=True) as crawler:
         result = await crawler.arun(
-            url="https://www.coinbase.com/explore",
-            extraction_strategy=extraction_strategy,
-            bypass_cache=True,
+            url="https://www.kidocode.com/degrees/technology",
+            extraction_strategy=JsonCssExtractionStrategy(schema, verbose=True),
+            js_code=[js_click_tabs],
+            cache_mode=CacheMode.BYPASS
        )

         assert result.success, "Failed to crawl the page"

-        news_teasers = json.loads(result.extracted_content)
-        print(f"Successfully extracted {len(news_teasers)} news teasers")
-        print(json.dumps(news_teasers[0], indent=2))
+        companies = json.loads(result.extracted_content)
+        print(f"Successfully extracted {len(companies)} companies")
+        print(json.dumps(companies[0], indent=2))

 # Advanced Session-Based Crawling with Dynamic Content 🔄
 async def crawl_dynamic_content_pages_method_1():
@@ -213,7 +243,7 @@ async def crawl_dynamic_content_pages_method_1():
                 session_id=session_id,
                 css_selector="li.Box-sc-g0xbh4-0",
                 js=js_next_page if page > 0 else None,
-                bypass_cache=True,
+                cache_mode=CacheMode.BYPASS,
                 js_only=page > 0,
                 headless=False,
             )
@@ -282,7 +312,7 @@ async def crawl_dynamic_content_pages_method_2():
                 extraction_strategy=extraction_strategy,
                 js_code=js_next_page_and_wait if page > 0 else None,
                 js_only=page > 0,
-                bypass_cache=True,
+                cache_mode=CacheMode.BYPASS,
                 headless=False,
             )

@@ -343,7 +373,7 @@ async def crawl_dynamic_content_pages_method_3():
                 js_code=js_next_page if page > 0 else None,
                 wait_for=wait_for if page > 0 else None,
                 js_only=page > 0,
-                bypass_cache=True,
+                cache_mode=CacheMode.BYPASS,
                 headless=False,
             )

@@ -361,21 +391,21 @@ async def crawl_custom_browser_type():
     # Use Firefox
     start = time.time()
     async with AsyncWebCrawler(browser_type="firefox", verbose=True, headless = True) as crawler:
-        result = await crawler.arun(url="https://www.example.com", bypass_cache=True)
+        result = await crawler.arun(url="https://www.example.com", cache_mode= CacheMode.BYPASS)
         print(result.markdown[:500])
         print("Time taken: ", time.time() - start)

     # Use WebKit
     start = time.time()
     async with AsyncWebCrawler(browser_type="webkit", verbose=True, headless = True) as crawler:
-        result = await crawler.arun(url="https://www.example.com", bypass_cache=True)
+        result = await crawler.arun(url="https://www.example.com", cache_mode= CacheMode.BYPASS)
         print(result.markdown[:500])
         print("Time taken: ", time.time() - start)

     # Use Chromium (default)
     start = time.time()
     async with AsyncWebCrawler(verbose=True, headless = True) as crawler:
-        result = await crawler.arun(url="https://www.example.com", bypass_cache=True)
+        result = await crawler.arun(url="https://www.example.com", cache_mode= CacheMode.BYPASS)
         print(result.markdown[:500])
         print("Time taken: ", time.time() - start)

@@ -384,7 +414,7 @@ async def crawl_with_user_simultion():
         url = "YOUR-URL-HERE"
         result = await crawler.arun(
             url=url,
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
             magic = True, # Automatically detects and removes overlays, popups, and other elements that block content
             # simulate_user = True,# Causes a series of random mouse movements and clicks to simulate user interaction
             # override_navigator = True # Overrides the navigator object to make it look like a real user
@@ -408,7 +438,7 @@ async def speed_comparison():
             params={'formats': ['markdown', 'html']}
         )
         end = time.time()
-        print("Firecrawl (simulated):")
+        print("Firecrawl:")
         print(f"Time taken: {end - start:.2f} seconds")
         print(f"Content length: {len(scrape_status['markdown'])} characters")
         print(f"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}")
@@ -420,7 +450,7 @@ async def speed_comparison():
         result = await crawler.arun(
             url="https://www.nbcnews.com/business",
             word_count_threshold=0,
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
             verbose=False,
         )
         end = time.time()
@@ -430,6 +460,25 @@ async def speed_comparison():
         print(f"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}")
         print()

+        # Crawl4AI with advanced content filtering
+        start = time.time()
+        result = await crawler.arun(
+            url="https://www.nbcnews.com/business",
+            word_count_threshold=0,
+            markdown_generator=DefaultMarkdownGenerator(
+                content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
+            ),
+            cache_mode=CacheMode.BYPASS,
+            verbose=False,
+        )
+        end = time.time()
+        print("Crawl4AI (Markdown Plus):")
+        print(f"Time taken: {end - start:.2f} seconds")
+        print(f"Content length: {len(result.markdown_v2.raw_markdown)} characters")
+        print(f"Fit Markdown: {len(result.markdown_v2.fit_markdown)} characters")
+        print(f"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}")
+        print()
+
         # Crawl4AI with JavaScript execution
         start = time.time()
         result = await crawler.arun(
@@ -438,13 +487,17 @@ async def speed_comparison():
                 "const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"
             ],
             word_count_threshold=0,
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
+            markdown_generator=DefaultMarkdownGenerator(
+                content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
+            ),
             verbose=False,
         )
         end = time.time()
         print("Crawl4AI (with JavaScript execution):")
         print(f"Time taken: {end - start:.2f} seconds")
         print(f"Content length: {len(result.markdown)} characters")
+        print(f"Fit Markdown: {len(result.markdown_v2.fit_markdown)} characters")
         print(f"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}")

     print("\nNote on Speed Comparison:")
@@ -483,7 +536,7 @@ async def generate_knowledge_graph():
         url = "https://paulgraham.com/love.html"
         result = await crawler.arun(
             url=url,
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
             extraction_strategy=extraction_strategy,
             # magic=True
         )
@@ -496,7 +549,7 @@ async def fit_markdown_remove_overlay():
         url = "https://janineintheworld.com/places-to-visit-in-central-mexico"
         result = await crawler.arun(
             url=url,
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
             word_count_threshold = 10,
             remove_overlay_elements=True,
             screenshot = True
@@ -512,25 +565,25 @@ async def main():
     await simple_crawl()
     await simple_example_with_running_js_code()
     await simple_example_with_css_selector()
-    await use_proxy()
+    # await use_proxy()
     await capture_and_save_screenshot("https://www.example.com", os.path.join(__location__, "tmp/example_screenshot.jpg"))
     await extract_structured_data_using_css_extractor()

     # LLM extraction examples
-    await extract_structured_data_using_llm()
-    await extract_structured_data_using_llm("huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct", os.getenv("HUGGINGFACE_API_KEY"))
+    # await extract_structured_data_using_llm()
+    # await extract_structured_data_using_llm("huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct", os.getenv("HUGGINGFACE_API_KEY"))
     # await extract_structured_data_using_llm("ollama/llama3.2")
     await extract_structured_data_using_llm("openai/gpt-4o", os.getenv("OPENAI_API_KEY"))
+    await extract_structured_data_using_llm("ollama/llama3.2")

     # You always can pass custom headers to the extraction strategy
-    custom_headers = {
-        "Authorization": "Bearer your-custom-token",
-        "X-Custom-Header": "Some-Value"
-    }
-    await extract_structured_data_using_llm(extra_headers=custom_headers)
+    # custom_headers = {
+    #     "Authorization": "Bearer your-custom-token",
+    #     "X-Custom-Header": "Some-Value"
+    # }
+    # await extract_structured_data_using_llm(extra_headers=custom_headers)

-    # await crawl_dynamic_content_pages_method_1()
-    # await crawl_dynamic_content_pages_method_2()
+    await crawl_dynamic_content_pages_method_1()
+    await crawl_dynamic_content_pages_method_2()
     await crawl_dynamic_content_pages_method_3()

     await crawl_custom_browser_type()
@@ -18,7 +18,7 @@ Let's see how we can customize the AsyncWebCrawler using hooks! In this example,
 import asyncio
 from crawl4ai import AsyncWebCrawler
 from crawl4ai.async_crawler_strategy import AsyncPlaywrightCrawlerStrategy
-from playwright.async_api import Page, Browser
+from playwright.async_api import Page, Browser, BrowserContext

 async def on_browser_created(browser: Browser):
     print("[HOOK] on_browser_created")
@@ -71,7 +71,11 @@ from crawl4ai.async_crawler_strategy import AsyncPlaywrightCrawlerStrategy
 async def main():
     print("\n🔗 Using Crawler Hooks: Let's see how we can customize the AsyncWebCrawler using hooks!")

-    crawler_strategy = AsyncPlaywrightCrawlerStrategy(verbose=True)
+    initial_cookies = [
+        {"name": "sessionId", "value": "abc123", "domain": ".example.com"},
+        {"name": "userId", "value": "12345", "domain": ".example.com"}
+    ]
+    crawler_strategy = AsyncPlaywrightCrawlerStrategy(verbose=True, cookies=initial_cookies)
     crawler_strategy.set_hook('on_browser_created', on_browser_created)
     crawler_strategy.set_hook('before_goto', before_goto)
     crawler_strategy.set_hook('after_goto', after_goto)
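Assuming the strategy forwards the browser context into hooks as in the `execute_hook` changes earlier in this diff, a hook in this example could also inspect that context, for instance to confirm the initial cookies were applied; a sketch:

```python
from playwright.async_api import Page, BrowserContext

async def after_goto(page: Page, context: BrowserContext = None, **kwargs):
    print(f"[HOOK] after_goto - loaded {page.url}")
    if context:
        cookies = await context.cookies()
        print(f"[HOOK] {len(cookies)} cookies visible in the browser context")
    return page
```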
main.py (4 lines changed)
@@ -340,9 +340,6 @@ app.add_middleware(
     allow_headers=["*"],  # Allows all headers
 )

-# Mount the pages directory as a static directory
-app.mount("/pages", StaticFiles(directory=__location__ + "/pages"), name="pages")
-
 # API token security
 security = HTTPBearer()
 CRAWL4AI_API_TOKEN = os.getenv("CRAWL4AI_API_TOKEN") or "test_api_code"
@@ -364,7 +361,6 @@ if os.path.exists(__location__ + "/site"):
     app.mount("/mkdocs", StaticFiles(directory="site", html=True), name="mkdocs")

 site_templates = Jinja2Templates(directory=__location__ + "/site")
-templates = Jinja2Templates(directory=__location__ + "/pages")

 crawler_service = CrawlerService()
@@ -1,16 +1,16 @@
 aiosqlite~=0.20
 html2text~=2024.2
 lxml~=5.3
-litellm~=1.48
+litellm>=1.53.1
 numpy>=1.26.0,<3
 pillow~=10.4
-playwright>=1.47,<1.48
+playwright>=1.49.0
 python-dotenv~=1.0
 requests~=2.26
 beautifulsoup4~=4.12
-tf-playwright-stealth~=1.0
+tf-playwright-stealth>=1.1.0
 xxhash~=3.4
 rank-bm25~=0.2
-aiofiles~=24.0
+aiofiles>=24.1.0
 colorama~=0.4
-snowballstemmer~=2.2
+snowballstemmer~=2.2
+pydantic>=2.10
setup.py (76 lines changed)
@@ -1,18 +1,22 @@
 from setuptools import setup, find_packages
 from setuptools.command.install import install
 import os
 from pathlib import Path
 import shutil
 import subprocess
 import sys
 import asyncio


 # Create the .crawl4ai folder in the user's home directory if it doesn't exist
 # If the folder already exists, remove the cache folder
-crawl4ai_folder = os.getenv("CRAWL4_AI_BASE_DIRECTORY", Path.home()) / ".crawl4ai"
+base_dir = os.getenv("CRAWL4_AI_BASE_DIRECTORY")
+crawl4ai_folder = Path(base_dir) if base_dir else Path.home()
+crawl4ai_folder = crawl4ai_folder / ".crawl4ai"
 cache_folder = crawl4ai_folder / "cache"
-content_folders = ['html_content', 'cleaned_html', 'markdown_content',
-                   'extracted_content', 'screenshots']
+content_folders = [
+    "html_content",
+    "cleaned_html",
+    "markdown_content",
+    "extracted_content",
+    "screenshots",
+]

 # Clean up old cache if exists
 if cache_folder.exists():
@@ -28,7 +32,7 @@ for folder in content_folders:
 __location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
 with open(os.path.join(__location__, "requirements.txt")) as f:
     requirements = f.read().splitlines()


 with open("crawl4ai/__version__.py") as f:
     for line in f:
         if line.startswith("__version__"):
@@ -37,42 +41,11 @@ with open("crawl4ai/__version__.py") as f:

 # Define requirements
 default_requirements = requirements
-torch_requirements = ["torch", "nltk", "scikit-learn"]
+torch_requirements = ["torch", "nltk", "scikit-learn"]
 transformer_requirements = ["transformers", "tokenizers"]
-cosine_similarity_requirements = ["torch", "transformers", "nltk" ]
+cosine_similarity_requirements = ["torch", "transformers", "nltk"]
 sync_requirements = ["selenium"]

-def install_playwright():
-    print("Installing Playwright browsers...")
-    try:
-        subprocess.check_call([sys.executable, "-m", "playwright", "install"])
-        print("Playwright installation completed successfully.")
-    except subprocess.CalledProcessError as e:
-        print(f"Error during Playwright installation: {e}")
-        print("Please run 'python -m playwright install' manually after the installation.")
-    except Exception as e:
-        print(f"Unexpected error during Playwright installation: {e}")
-        print("Please run 'python -m playwright install' manually after the installation.")
-
-def run_migration():
-    """Initialize database during installation"""
-    try:
-        print("Starting database initialization...")
-        from crawl4ai.async_database import async_db_manager
-        asyncio.run(async_db_manager.initialize())
-        print("Database initialization completed successfully.")
-    except ImportError:
-        print("Warning: Database module not found. Will initialize on first use.")
-    except Exception as e:
-        print(f"Warning: Database initialization failed: {e}")
-        print("Database will be initialized on first use")
-
-class PostInstallCommand(install):
-    def run(self):
-        install.run(self)
-        install_playwright()
-        # run_migration()

 setup(
     name="Crawl4AI",
     version=version,
@@ -84,18 +57,24 @@ setup(
     author_email="unclecode@kidocode.com",
     license="MIT",
     packages=find_packages(),
-    install_requires=default_requirements + ["playwright", "aiofiles"],  # Added aiofiles
+    install_requires=default_requirements
+    + ["playwright", "aiofiles"],  # Added aiofiles
     extras_require={
         "torch": torch_requirements,
         "transformer": transformer_requirements,
         "cosine": cosine_similarity_requirements,
         "sync": sync_requirements,
-        "all": default_requirements + torch_requirements + transformer_requirements + cosine_similarity_requirements + sync_requirements,
+        "all": default_requirements
+        + torch_requirements
+        + transformer_requirements
+        + cosine_similarity_requirements
+        + sync_requirements,
     },
     entry_points={
-        'console_scripts': [
-            'crawl4ai-download-models=crawl4ai.model_loader:main',
-            'crawl4ai-migrate=crawl4ai.migrations:main',  # Added migration command
+        "console_scripts": [
+            "crawl4ai-download-models=crawl4ai.model_loader:main",
+            "crawl4ai-migrate=crawl4ai.migrations:main",
+            'crawl4ai-setup=crawl4ai.install:post_install',
         ],
     },
     classifiers=[
@@ -109,7 +88,4 @@ setup(
         "Programming Language :: Python :: 3.10",
     ],
     python_requires=">=3.7",
-    cmdclass={
-        'install': PostInstallCommand,
-    },
-)
+)
@@ -11,7 +11,7 @@ import asyncio
 import os
 import time
 from typing import Dict, Any
-from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerationStrategy
+from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

 # Get current directory
 __location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
@@ -41,7 +41,7 @@ def test_basic_markdown_conversion():
     with open(__location__ + "/data/wikipedia.html", "r") as f:
         cleaned_html = f.read()

-    generator = DefaultMarkdownGenerationStrategy()
+    generator = DefaultMarkdownGenerator()

     start_time = time.perf_counter()
     result = generator.generate_markdown(
@@ -70,7 +70,7 @@ def test_relative_links():
     Also an [image](/images/test.png) and another [page](/wiki/Banana).
     """

-    generator = DefaultMarkdownGenerationStrategy()
+    generator = DefaultMarkdownGenerator()
     result = generator.generate_markdown(
         cleaned_html=markdown,
         base_url="https://en.wikipedia.org"
@@ -86,7 +86,7 @@ def test_duplicate_links():
     Here's a [link](/test) and another [link](/test) and a [different link](/other).
     """

-    generator = DefaultMarkdownGenerationStrategy()
+    generator = DefaultMarkdownGenerator()
     result = generator.generate_markdown(
         cleaned_html=markdown,
         base_url="https://example.com"
@@ -102,7 +102,7 @@ def test_link_descriptions():
     Here's a [link with title](/test "Test Title") and a [link with description](/other) to test.
     """

-    generator = DefaultMarkdownGenerationStrategy()
+    generator = DefaultMarkdownGenerator()
     result = generator.generate_markdown(
         cleaned_html=markdown,
         base_url="https://example.com"
@@ -120,7 +120,7 @@ def test_performance_large_document():
     iterations = 5
     times = []

-    generator = DefaultMarkdownGenerationStrategy()
+    generator = DefaultMarkdownGenerator()

     for i in range(iterations):
         start_time = time.perf_counter()
@@ -144,7 +144,7 @@ def test_image_links():
     And a regular [link](/page).
     """

-    generator = DefaultMarkdownGenerationStrategy()
+    generator = DefaultMarkdownGenerator()
     result = generator.generate_markdown(
         cleaned_html=markdown,
         base_url="https://example.com"