* fix(docker-api): migrate to modern datetime library API
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
* Fix examples in README.md
* feat(docker): add user-provided hooks support to Docker API
Implements comprehensive hooks functionality allowing users to provide custom Python
functions as strings that execute at specific points in the crawling pipeline.
Key Features:
- Support for all 8 crawl4ai hook points:
• on_browser_created: Initialize browser settings
• on_page_context_created: Configure page context
• before_goto: Pre-navigation setup
• after_goto: Post-navigation processing
• on_user_agent_updated: User agent modification handling
• on_execution_started: Crawl execution initialization
• before_retrieve_html: Pre-extraction processing
• before_return_html: Final HTML processing
Implementation Details:
- Created UserHookManager for validation, compilation, and safe execution
- Added IsolatedHookWrapper for error isolation and timeout protection
- AST-based validation ensures code structure correctness
- Sandboxed execution with restricted builtins for security
- Configurable timeout (1-120 seconds) prevents infinite loops
- Comprehensive error handling ensures hooks don't crash main process
- Execution tracking with detailed statistics and logging
API Changes:
- Added HookConfig schema with code and timeout fields
- Extended CrawlRequest with optional hooks parameter
- Added /hooks/info endpoint for hook discovery
- Updated /crawl and /crawl/stream endpoints to support hooks
Safety Features:
- Malformed hooks return clear validation errors
- Hook errors are isolated and reported without stopping crawl
- Execution statistics track success/failure/timeout rates
- All hook results are JSON-serializable
Testing:
- Comprehensive test suite covering all 8 hooks
- Error handling and timeout scenarios validated
- Authentication, performance, and content extraction examples
- 100% success rate in production testing
Documentation:
- Added extensive hooks section to docker-deployment.md
- Security warnings about user-provided code risks
- Real-world examples using httpbin.org, GitHub, BBC
- Best practices and troubleshooting guide
ref #1377
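For illustration, a minimal sketch of what a `/crawl` request with a hook might look like, built only from the pieces named above (hook names, a HookConfig with `code` and `timeout`, and the `hooks` field on CrawlRequest). The exact payload nesting is an assumption; `/hooks/info` and docker-deployment.md are the authoritative references.
```python
import requests

# Hook source is sent as a string; "timeout" is the per-hook limit in seconds (1-120).
before_goto_src = """
async def before_goto(page, context, url, **kwargs):
    await page.set_extra_http_headers({"X-Example": "1"})
    return page
"""

# Assumption: hooks maps hook name -> {code, timeout}; verify against the real schema.
payload = {
    "urls": ["https://httpbin.org/html"],
    "hooks": {
        "before_goto": {"code": before_goto_src, "timeout": 30},
    },
}
resp = requests.post("http://localhost:11235/crawl", json=payload)
print(resp.status_code)
```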
* fix(deep-crawl): BestFirst priority inversion; remove pre-scoring truncation. ref #1253
Use negative scores in PQ to visit high-score URLs first and drop link cap prior to scoring; add test for ordering.
* docs: Update URL seeding examples to use proper async context managers
- Wrap all AsyncUrlSeeder usage with async context managers
- Update URL seeding adventure example to use "sitemap+cc" source, focus on course posts, and add stream=True parameter to fix runtime error
* fix(crawler): Removed an incorrect reference to the browser_config variable #1310
* docs: update Docker instructions to use the latest release tag
* fix(docker): Fix LLM API key handling for multi-provider support
Previously, the system incorrectly used OPENAI_API_KEY for all LLM providers
due to a hardcoded api_key_env fallback in config.yml. This caused authentication
errors when using non-OpenAI providers like Gemini.
Changes:
- Remove api_key_env from config.yml to let litellm handle provider-specific env vars
- Simplify get_llm_api_key() to return None, allowing litellm to auto-detect keys
- Update validate_llm_provider() to trust litellm's built-in key detection
- Update documentation to reflect the new automatic key handling
The fix leverages litellm's existing capability to automatically find the correct
environment variable for each provider (OPENAI_API_KEY, GEMINI_API_TOKEN, etc.)
without manual configuration.
ref #1291
* docs: update adaptive crawler docs and cache defaults; remove deprecated examples (#1330)
- Replace BaseStrategy with CrawlStrategy in custom strategy examples (DomainSpecificStrategy, HybridStrategy)
- Remove “Custom Link Scoring” and “Caching Strategy” sections no longer aligned with current library
- Revise memory pruning example to use adaptive.get_relevant_content and index-based retention of top 500 docs
- Correct Quickstart note: default cache mode is CacheMode.BYPASS; instruct enabling with CacheMode.ENABLED
* fix(utils): Improve URL normalization by avoiding quote/unquote to preserve '+' signs. ref #1332
* feat: Add comprehensive website to API example with frontend
This commit adds a complete web scraping API example that demonstrates how to get structured data from any website and use it like an API, using the crawl4ai library with a minimalist frontend interface.
Core Functionality
- AI-powered web scraping with plain English queries
- Dual scraping approaches: Schema-based (faster) and LLM-based (flexible)
- Intelligent schema caching for improved performance
- Custom LLM model support with API key management
- Automatic duplicate request prevention
Modern Frontend Interface
- Minimalist black-and-white design inspired by modern web apps
- Responsive layout with smooth animations and transitions
- Three main pages: Scrape Data, Models Management, API Request History
- Real-time results display with JSON formatting
- Copy-to-clipboard functionality for extracted data
- Toast notifications for user feedback
- Auto-scroll to results when scraping starts
Model Management System
- Web-based model configuration interface
- Support for any LLM provider (OpenAI, Gemini, Anthropic, etc.)
- Simplified configuration requiring only provider and API token
- Add, list, and delete model configurations
- Secure storage of API keys in local JSON files
API Request History
- Automatic saving of all API requests and responses
- Display of request history with URL, query, and cURL commands
- Duplicate prevention (same URL + query combinations)
- Request deletion functionality
- Clean, simplified display focusing on essential information
Technical Implementation
Backend (FastAPI)
- RESTful API with comprehensive endpoints
- Pydantic models for request/response validation
- Async web scraping with crawl4ai library
- Error handling with detailed error messages
- File-based storage for models and request history
Frontend (Vanilla JS/CSS/HTML)
- No framework dependencies - pure HTML, CSS, JavaScript
- Modern CSS Grid and Flexbox layouts
- Custom dropdown styling with SVG arrows
- Responsive design for mobile and desktop
- Smooth scrolling and animations
Core Library Integration
- WebScraperAgent class for orchestration
- ModelConfig class for LLM configuration management
- Schema generation and caching system
- LLM extraction strategy support
- Browser configuration with headless mode
* fix(dependencies): add cssselect to project dependencies
Fixes bug reported in issue #1405
[Bug]: Excluded selector (excluded_selector) doesn't work
This commit reintroduces the cssselect library, which was removed by PR https://github.com/unclecode/crawl4ai/pull/1368 (merged via 437395e490).
Integration tested against the 0.7.4 Docker container. Reintroducing the cssselect package eliminated the errors seen in the logs and restored the excluded_selector functionality.
Refs: #1405
* fix(docker): resolve filter serialization and JSON encoding errors in deep crawl strategy (ref #1419)
- Fix URLPatternFilter serialization by preventing private __slots__ from being serialized as constructor params
- Add public attributes to URLPatternFilter to store original constructor parameters for proper serialization
- Handle property descriptors in CrawlResult.model_dump() to prevent JSON serialization errors
- Ensure filter chains work correctly with Docker client and REST API
The issue occurred because:
1. Private implementation details (_simple_suffixes, etc.) were being serialized and passed as constructor arguments during deserialization
2. Property descriptors were being included in the serialized output, causing "Object of type property is not JSON serializable" errors
Changes:
- async_configs.py: Comment out __slots__ serialization logic (lines 100-109)
- filters.py: Add patterns, use_glob, reverse to URLPatternFilter __slots__ and store as public attributes
- models.py: Convert property descriptors to strings in model_dump() instead of including them directly
* fix(logger): ensure logger is a Logger instance in crawling strategies. ref #1437
* feat(docker): Add temperature and base_url parameters for LLM configuration. ref #1035
Implement hierarchical configuration for LLM parameters with support for:
- Temperature control (0.0-2.0) to adjust response creativity
- Custom base_url for proxy servers and alternative endpoints
- 4-tier priority: request params > provider env > global env > defaults
Add helper functions in utils.py, update API schemas and handlers,
support environment variables (LLM_TEMPERATURE, OPENAI_TEMPERATURE, etc.),
and provide comprehensive documentation with examples.
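As a rough illustration of the environment-variable tiers, using only the variable names mentioned in this entry (values are placeholders); per-request parameters still take precedence over both:
```python
import os

# Hedged sketch of the environment-variable configuration described above.
os.environ["LLM_TEMPERATURE"] = "0.2"      # global default for all providers
os.environ["OPENAI_TEMPERATURE"] = "0.7"   # provider-specific override
```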
* feat(docker): improve docker error handling
- Return comprehensive error messages along with status codes for internal API errors.
- Fix fit_html property serialization issue in both /crawl and /crawl/stream endpoints
- Add sanitization to ensure fit_html is always JSON-serializable (string or None)
- Add comprehensive error handling test suite.
* #1375: refactor(proxy): Deprecate 'proxy' parameter in BrowserConfig and enhance proxy string parsing
- Updated ProxyConfig.from_string to support multiple proxy formats, including URLs with credentials.
- Deprecated the 'proxy' parameter in BrowserConfig, replacing it with 'proxy_config' for better flexibility.
- Added warnings for deprecated usage and clarified behavior when both parameters are provided.
- Updated documentation and tests to reflect changes in proxy configuration handling.
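A hedged sketch of the new usage, assuming ProxyConfig is exported from the top-level package (the export path is an assumption; `from_string` and `proxy_config` are named above):
```python
from crawl4ai import BrowserConfig, ProxyConfig  # ProxyConfig export path is an assumption

# Parse a credentialed proxy URL, one of the formats this change adds support for
proxy = ProxyConfig.from_string("http://user:pass@proxy.example.com:8080")

# 'proxy_config' replaces the deprecated 'proxy' parameter
browser_config = BrowserConfig(proxy_config=proxy)
```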
* Remove deprecated test for 'proxy' parameter in BrowserConfig and update .gitignore to include test_scripts directory.
* feat: add preserve_https_for_internal_links flag to maintain HTTPS during crawling. Ref #1410
Added a new `preserve_https_for_internal_links` configuration flag that preserves the original HTTPS scheme for same-domain links even when the server redirects to HTTP.
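A minimal sketch of enabling the flag, assuming it is a CrawlerRunConfig option (the commit does not name the config class):
```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

# Assumption: the flag lives on CrawlerRunConfig; same-domain links keep their
# original https:// scheme even if the server redirects to http://.
config = CrawlerRunConfig(preserve_https_for_internal_links=True)

async def crawl():
    async with AsyncWebCrawler() as crawler:
        return await crawler.arun("https://example.com", config=config)
```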
* feat: update documentation for preserve_https_for_internal_links. ref #1410
* fix: drop Python 3.9 support and require Python >=3.10.
The library no longer supports Python 3.9, so all references to Python 3.9 have been removed.
The following changes have been made:
- pyproject.toml: set requires-python to ">=3.10"; remove 3.9 classifier
- setup.py: set python_requires to ">=3.10"; remove 3.9 classifier
- docs: update Python version mentions
- deploy/docker/c4ai-doc-context.md: options -> 3.10, 3.11, 3.12, 3.13
* issue #1329 refactor(crawler): move unwanted properties to CrawlerRunConfig class
* fix(auth): fixed Docker JWT authentication. ref #1442
* remove: delete unused yoyo snapshot subproject
* fix: raise error on last attempt failure in perform_completion_with_backoff. ref #989
* Commit without API
* fix: update option labels in request builder for clarity
* fix: allow custom LLM providers for adaptive crawler embedding config. ref: #1291
- Change embedding_llm_config from Dict to Union[LLMConfig, Dict] for type safety
- Add backward-compatible conversion property _embedding_llm_config_dict
- Replace all hardcoded OpenAI embedding configs with configurable options
- Fix LLMConfig object attribute access in query expansion logic
- Add comprehensive example demonstrating multiple provider configurations
- Update documentation with both LLMConfig object and dictionary usage patterns
Users can now specify any LLM provider for query expansion in embedding strategy:
- New: embedding_llm_config=LLMConfig(provider='anthropic/claude-3', api_token='key')
- Old: embedding_llm_config={'provider': 'openai/gpt-4', 'api_token': 'key'} (still works)
* refactor(BrowserConfig): change deprecation warning for 'proxy' parameter to UserWarning
* feat(StealthAdapter): fix stealth features for Playwright integration. ref #1481
* #1505 fix(api): update config handling to only set base config if not provided by user
* fix(docker-deployment): replace console.log with print for metadata extraction
* Release v0.7.5: The Update
- Updated version to 0.7.5
- Added comprehensive demo and release notes
- Updated documentation
* refactor(release): remove memory management section for cleaner documentation. ref #1443
* feat(docs): add brand book and page copy functionality
- Add comprehensive brand book with color system, typography, components
- Add page copy dropdown with markdown copy/view functionality
- Update mkdocs.yml with new assets and branding navigation
- Use terminal-style ASCII icons and condensed menu design
* Update .gitignore to add local scripts folder
* fix: remove this import as it causes python to treat "json" as a variable in the except block
* fix: always return a list, even if we catch an exception
* feat(marketplace): Add Crawl4AI marketplace with secure configuration
- Implement marketplace frontend and admin dashboard
- Add FastAPI backend with environment-based configuration
- Use .env file for secrets management
- Include data generation scripts
- Add proper CORS configuration
- Remove hardcoded password from admin login
- Update gitignore for security
* fix(marketplace): Update URLs to use /marketplace path and relative API endpoints
- Change API_BASE to relative '/api' for production
- Move marketplace to /marketplace instead of /marketplace/frontend
- Update MkDocs navigation
- Fix logo path in marketplace index
* fix(docs): hide copy menu on non-markdown pages
* feat(marketplace): add sponsor logo uploads
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
* feat(docs): add chatgpt quick link to page actions
* fix(marketplace): align admin api with backend endpoints
* fix(marketplace): isolate api under marketplace prefix
* fix(marketplace): resolve app detail page routing and styling issues
- Fixed JavaScript errors from missing HTML elements (install-code, usage-code, integration-code)
- Added missing CSS classes for tabs, overview layout, sidebar, and integration content
- Fixed tab navigation to display horizontally in single line
- Added proper padding to tab content sections (removed from container, added to content)
- Fixed tab selector from .nav-tab to .tab-btn to match HTML structure
- Added sidebar styling with stats grid and metadata display
- Improved responsive design with mobile-friendly tab scrolling
- Fixed code block positioning for copy buttons
- Removed margin from first headings to prevent extra spacing
- Added null checks for DOM elements in JavaScript to prevent errors
These changes resolve the routing issue where clicking on apps caused page redirects,
and fix the broken layout where CSS was not properly applied to the app detail page.
* fix(marketplace): prevent hero image overflow and secondary card stretching
- Fixed hero image to 200px height with min/max constraints
- Added object-fit: cover to hero-image img elements
- Changed secondary-featured align-items from stretch to flex-start
- Fixed secondary-card height to 118px (no flex: 1 stretching)
- Updated responsive grid layouts for wider screens
- Added flex: 1 to hero-content for better content distribution
These changes ensure a rigid, predictable layout that prevents:
1. Large images from pushing text content down
2. Single secondary cards from stretching to fill entire height
* feat: Add hooks utility for function-based hooks with Docker client integration. ref #1377
Add hooks_to_string() utility function that converts Python function objects
to string representations for the Docker API, enabling developers to write hooks
as regular Python functions instead of strings.
Core Changes:
- New hooks_to_string() utility in crawl4ai/utils.py using inspect.getsource()
- Docker client now accepts both function objects and strings for hooks
- Automatic detection and conversion in Crawl4aiDockerClient._prepare_request()
- New hooks and hooks_timeout parameters in client.crawl() method
Documentation:
- Docker client examples with function-based hooks (docs/examples/docker_client_hooks_example.py)
- Updated main Docker deployment guide with comprehensive hooks section
- Added unit tests for hooks utility (tests/docker/test_hooks_utility.py)
* fix(docs): clarify Docker Hooks System with function-based API in README
* docs: Add demonstration files for v0.7.5 release, showcasing the new Docker Hooks System and all other features.
* docs: Update 0.7.5 video walkthrough
* docs: add complete SDK reference documentation
Add comprehensive single-page SDK reference combining:
- Installation & Setup
- Quick Start
- Core API (AsyncWebCrawler, arun, arun_many, CrawlResult)
- Configuration (BrowserConfig, CrawlerConfig, Parameters)
- Crawling Patterns
- Content Processing (Markdown, Fit Markdown, Selection, Interaction, Link & Media)
- Extraction Strategies (LLM and No-LLM)
- Advanced Features (Session Management, Hooks & Auth)
Generated using scripts/generate_sdk_docs.py in ultra-dense mode
optimized for AI assistant consumption.
Stats: 23K words, 185 code blocks, 220KB
* feat: add AI assistant skill package for Crawl4AI
- Create comprehensive skill package for AI coding assistants
- Include complete SDK reference (23K words, v0.7.4)
- Add three extraction scripts (basic, batch, pipeline)
- Implement version tracking in skill and scripts
- Add prominent download section on homepage
- Place skill in docs/assets for web distribution
The skill enables AI assistants like Claude, Cursor, and Windsurf
to effectively use Crawl4AI with optimized workflows for markdown
generation and data extraction.
* fix: remove non-existent wiki link and clarify skill usage instructions
* fix: update Crawl4AI skill with corrected parameters and examples
- Fixed CrawlerConfig → CrawlerRunConfig throughout
- Fixed parameter names (timeout → page_timeout, store_html removed)
- Fixed schema format (selector → baseSelector)
- Corrected proxy configuration (in BrowserConfig, not CrawlerRunConfig)
- Fixed fit_markdown usage with content filters
- Added comprehensive references to docs/examples/ directory
- Created safe packaging script to avoid root directory pollution
- All scripts tested and verified working
* fix: thoroughly verify and fix all Crawl4AI skill examples
- Cross-checked every section against actual docs
- Fixed BM25ContentFilter parameters (user_query, bm25_threshold)
- Removed incorrect wait_for selector from basic example
- Added comprehensive test suite (4 test files)
- All examples now tested and verified working
- Tests validate: basic crawling, markdown generation, data extraction, advanced patterns
- Package size: 76.6 KB (includes tests for future validation)
* feat(ci): split release pipeline and add Docker caching
- Split release.yml into PyPI/GitHub release and Docker workflows
- Add GitHub Actions cache for Docker builds (10-15x faster rebuilds)
- Implement dual-trigger for docker-release.yml (auto + manual)
- Add comprehensive workflow documentation in .github/workflows/docs/
- Backup original workflow as release.yml.backup
* feat: add webhook notifications for crawl job completion
Implements webhook support for the crawl job API to eliminate polling requirements.
Changes:
- Added WebhookConfig and WebhookPayload schemas to schemas.py
- Created webhook.py with WebhookDeliveryService class
- Integrated webhook notifications in api.py handle_crawl_job
- Updated job.py CrawlJobPayload to accept webhook_config
- Added webhook configuration section to config.yml
- Included comprehensive usage examples in WEBHOOK_EXAMPLES.md
Features:
- Webhook notifications on job completion (success/failure)
- Configurable data inclusion in webhook payload
- Custom webhook headers support
- Global default webhook URL configuration
- Exponential backoff retry logic (5 attempts: 1s, 2s, 4s, 8s, 16s)
- 30-second timeout per webhook call
Usage:
POST /crawl/job with optional webhook_config:
- webhook_url: URL to receive notifications
- webhook_data_in_payload: include full results (default: false)
- webhook_headers: custom headers for authentication
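A hedged sketch of such a request using the fields listed above (the surrounding payload shape is an assumption; WEBHOOK_EXAMPLES.md has the authoritative examples):
```python
import requests

payload = {
    "urls": ["https://example.com"],  # assumption: the usual /crawl/job payload
    "webhook_config": {
        "webhook_url": "https://my-service.example.com/crawl-complete",
        "webhook_data_in_payload": False,  # default: fetch results separately
        "webhook_headers": {"Authorization": "Bearer <token>"},
    },
}
resp = requests.post("http://localhost:11235/crawl/job", json=payload)
print(resp.json())  # job id to correlate with the webhook notification
```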
Generated with Claude Code https://claude.com/claude-code
Co-Authored-By: Claude <noreply@anthropic.com>
* docs: add webhook documentation to Docker README
Added comprehensive webhook section to README.md including:
- Overview of asynchronous job queue with webhooks
- Benefits and use cases
- Quick start examples
- Webhook authentication
- Global webhook configuration
- Job status polling alternative
Updated table of contents and summary to include webhook feature.
Maintains consistent tone and style with rest of README.
Generated with Claude Code https://claude.com/claude-code
Co-Authored-By: Claude <noreply@anthropic.com>
* docs: add webhook example for Docker deployment
Added docker_webhook_example.py demonstrating:
- Submitting crawl jobs with webhook configuration
- Flask-based webhook receiver implementation
- Three usage patterns:
1. Webhook notification only (fetch data separately)
2. Webhook with full data in payload
3. Traditional polling approach for comparison
Includes comprehensive comments explaining:
- Webhook payload structure
- Authentication headers setup
- Error handling
- Production deployment tips
Example is fully functional and ready to run with Flask installed.
Generated with Claude Code https://claude.com/claude-code
Co-Authored-By: Claude <noreply@anthropic.com>
* test: add webhook implementation validation tests
Added comprehensive test suite to validate webhook implementation:
- Module import verification
- WebhookDeliveryService initialization
- Pydantic model validation (WebhookConfig)
- Payload construction logic
- Exponential backoff calculation
- API integration checks
All tests pass (6/6), confirming implementation is correct.
Generated with Claude Code https://claude.com/claude-code
Co-Authored-By: Claude <noreply@anthropic.com>
* test: add comprehensive webhook feature test script
Added end-to-end test script that automates webhook feature testing:
Script Features (test_webhook_feature.sh):
- Automatic branch switching and dependency installation
- Redis and server startup/shutdown management
- Webhook receiver implementation
- Integration test for webhook notifications
- Comprehensive cleanup and error handling
- Returns to original branch after completion
Test Flow:
1. Fetch and checkout webhook feature branch
2. Activate venv and install dependencies
3. Start Redis and Crawl4AI server
4. Submit crawl job with webhook config
5. Verify webhook delivery and payload
6. Clean up all processes and return to original branch
Documentation:
- WEBHOOK_TEST_README.md with usage instructions
- Troubleshooting guide
- Exit codes and safety features
Usage: ./tests/test_webhook_feature.sh
Generated with Claude Code https://claude.com/claude-code
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: properly serialize Pydantic HttpUrl in webhook config
Use model_dump(mode='json') instead of deprecated dict() method to ensure
Pydantic special types (HttpUrl, UUID, etc.) are properly serialized to
JSON-compatible native Python types.
This fixes webhook delivery failures caused by HttpUrl objects remaining
as Pydantic types in the webhook_config dict, which caused JSON
serialization errors and httpx request failures.
Also update mcp requirement to >=1.18.0 for compatibility.
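Illustrative sketch of the difference; the model below is a simplified stand-in, not the real schema:
```python
from pydantic import BaseModel, HttpUrl

class WebhookConfig(BaseModel):  # simplified stand-in for illustration
    webhook_url: HttpUrl

cfg = WebhookConfig(webhook_url="https://receiver.example.com/hook")
cfg.dict()                    # deprecated; can leave HttpUrl instances in the dict
cfg.model_dump(mode="json")   # serializes HttpUrl (and UUID, etc.) to plain strings
```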
* feat: add webhook support for /llm/job endpoint
Add comprehensive webhook notification support for the /llm/job endpoint,
following the same pattern as the existing /crawl/job implementation.
Changes:
- Add webhook_config field to LlmJobPayload model (job.py)
- Implement webhook notifications in process_llm_extraction() with 4
notification points: success, provider validation failure, extraction
failure, and general exceptions (api.py)
- Store webhook_config in Redis task data for job tracking
- Initialize WebhookDeliveryService with exponential backoff retry logic
Documentation:
- Add Example 6 to WEBHOOK_EXAMPLES.md showing LLM extraction with webhooks
- Update Flask webhook handler to support both crawl and llm_extraction tasks
- Add TypeScript client examples for LLM jobs
- Add comprehensive examples to docker_webhook_example.py with schema support
- Clarify data structure differences between webhook and API responses
Testing:
- Add test_llm_webhook_feature.py with 7 validation tests (all passing)
- Verify pattern consistency with /crawl/job implementation
- Add implementation guide (WEBHOOK_LLM_JOB_IMPLEMENTATION.md)
* fix: remove duplicate comma in webhook_config parameter
* fix: update Crawl4AI Docker container port from 11234 to 11235
* Release v0.7.6: The 0.7.6 Update
- Updated version to 0.7.6
- Added comprehensive demo and release notes
- Updated all documentation
- Updated the version in Dockerfile to 0.7.6
---------
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Nezar Ali <abu5sohaib@gmail.com>
Co-authored-by: Soham Kukreti <kukretisoham@gmail.com>
Co-authored-by: James T. Wood <jamesthomaswood@gmail.com>
Co-authored-by: AHMET YILMAZ <tawfik@kidocode.com>
Co-authored-by: nafeqq-1306 <nafiquee@yahoo.com>
Co-authored-by: unclecode <unclecode@kidocode.com>
Co-authored-by: Martin Sjöborg <martin.sjoborg@quartr.se>
Co-authored-by: Martin Sjöborg <martin@sjoborg.org>
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
# 🚀🤖 Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper.

<div align="center">

<a href="https://trendshift.io/repositories/11716" target="_blank"><img src="https://trendshift.io/api/badge/repositories/11716" alt="unclecode%2Fcrawl4ai | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>

[GitHub Stars](https://github.com/unclecode/crawl4ai/stargazers)
[GitHub Forks](https://github.com/unclecode/crawl4ai/network/members)

[PyPI Version](https://badge.fury.io/py/crawl4ai)
[crawl4ai on PyPI](https://pypi.org/project/crawl4ai/)
[Downloads](https://pepy.tech/project/crawl4ai)
[GitHub Sponsors](https://github.com/sponsors/unclecode)

<p align="center">
    <a href="https://x.com/crawl4ai">
        <img src="https://img.shields.io/badge/Follow%20on%20X-000000?style=for-the-badge&logo=x&logoColor=white" alt="Follow on X" />
    </a>
    <a href="https://www.linkedin.com/company/crawl4ai">
        <img src="https://img.shields.io/badge/Follow%20on%20LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white" alt="Follow on LinkedIn" />
    </a>
    <a href="https://discord.gg/jP8KfhDhyN">
        <img src="https://img.shields.io/badge/Join%20our%20Discord-5865F2?style=for-the-badge&logo=discord&logoColor=white" alt="Join our Discord" />
    </a>
</p>
</div>

Crawl4AI turns the web into clean, LLM-ready Markdown for RAG, agents, and data pipelines. Fast, controllable, and battle-tested by a 50k+ star community.

[✨ Check out the latest update, v0.7.6](#-recent-updates)

✨ **New in v0.7.6**: Complete webhook infrastructure for the Docker Job Queue API! Real-time notifications for both `/crawl/job` and `/llm/job` endpoints with exponential backoff retry, custom headers, and flexible delivery modes. No more polling! [Release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.6.md)

✨ Recent v0.7.5: Docker Hooks System with a function-based API for pipeline customization, enhanced LLM integration with custom providers, HTTPS preservation, and multiple community-reported bug fixes. [Release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.5.md)

✨ Previous v0.7.4: Revolutionary LLM table extraction with intelligent chunking, enhanced concurrency fixes, a memory management refactor, and critical stability improvements. [Release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.4.md)

<details>
<summary>🤓 <strong>My Personal Story</strong></summary>

I grew up on an Amstrad, thanks to my dad, and never stopped building. In grad school I specialized in NLP and built crawlers for research. That’s where I learned how much extraction matters.

In 2023, I needed web-to-Markdown. The “open source” option wanted an account, API token, and $16, and still under-delivered. I went turbo anger mode, built Crawl4AI in days, and it went viral. Now it’s the most-starred crawler on GitHub.

I made it open source for **availability**: anyone can use it without a gate. Now I’m building the platform for **affordability**: anyone can run serious crawls without breaking the bank. If that resonates, join in, send feedback, or just crawl something amazing.

</details>

<details>
<summary>Why developers pick Crawl4AI</summary>

- **LLM ready output**, smart Markdown with headings, tables, code, citation hints
- **Fast in practice**, async browser pool, caching, minimal hops
- **Full control**, sessions, proxies, cookies, user scripts, hooks
- **Adaptive intelligence**, learns site patterns, explores only what matters
- **Deploy anywhere**, zero keys, CLI and Docker, cloud friendly

</details>

## 🚀 Quick Start

1. Install Crawl4AI:

```bash
# Install the package
pip install -U crawl4ai

# For pre-release versions
pip install crawl4ai --pre

# Run post-installation setup
crawl4ai-setup

# Verify your installation
crawl4ai-doctor
```

If you encounter any browser-related issues, you can install them manually:

```bash
python -m playwright install --with-deps chromium
```

2. Run a simple web crawl with Python:

```python
import asyncio
from crawl4ai import *

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
        )
        print(result.markdown)

if __name__ == "__main__":
    asyncio.run(main())
```

3. Or use the new command-line interface:

```bash
# Basic crawl with markdown output
crwl https://www.nbcnews.com/business -o markdown

# Deep crawl with BFS strategy, max 10 pages
crwl https://docs.crawl4ai.com --deep-crawl bfs --max-pages 10

# Use LLM extraction with a specific question
crwl https://www.example.com/products -q "Extract all product prices"
```

## 💖 Support Crawl4AI

> 🎉 **Sponsorship Program Now Open!** After powering 51K+ developers and 1 year of growth, Crawl4AI is launching dedicated support for **startups** and **enterprises**. Be among the first 50 **Founding Sponsors** for permanent recognition in our Hall of Fame.

Crawl4AI is the #1 trending open-source web crawler on GitHub. Your support keeps it independent, innovative, and free for the community — while giving you direct access to premium benefits.

<div align="center">

[Sponsor Crawl4AI](https://github.com/sponsors/unclecode)

</div>

### 🤝 Sponsorship Tiers

- **🌱 Believer ($5/mo)** — Join the movement for data democratization
- **🚀 Builder ($50/mo)** — Priority support & early access to features
- **💼 Growing Team ($500/mo)** — Bi-weekly syncs & optimization help
- **🏢 Data Infrastructure Partner ($2000/mo)** — Full partnership with dedicated support

*Custom arrangements available - see [SPONSORS.md](SPONSORS.md) for details & contact*

**Why sponsor?**
No rate-limited APIs. No lock-in. Build and own your data pipeline with direct guidance from the creator of Crawl4AI.

[See All Tiers & Benefits →](https://github.com/sponsors/unclecode)

## ✨ Features

<details>
<summary>📝 <strong>Markdown Generation</strong></summary>

- 🧹 **Clean Markdown**: Generates clean, structured Markdown with accurate formatting.
- 🎯 **Fit Markdown**: Heuristic-based filtering to remove noise and irrelevant parts for AI-friendly processing.
- 🔗 **Citations and References**: Converts page links into a numbered reference list with clean citations.
- 🛠️ **Custom Strategies**: Users can create their own Markdown generation strategies tailored to specific needs.
- 📚 **BM25 Algorithm**: Employs BM25-based filtering for extracting core information and removing irrelevant content.

</details>

<details>
<summary>📊 <strong>Structured Data Extraction</strong></summary>

- 🤖 **LLM-Driven Extraction**: Supports all LLMs (open-source and proprietary) for structured data extraction.
- 🧱 **Chunking Strategies**: Implements chunking (topic-based, regex, sentence-level) for targeted content processing.
- 🌌 **Cosine Similarity**: Find relevant content chunks based on user queries for semantic extraction.
- 🔎 **CSS-Based Extraction**: Fast schema-based data extraction using XPath and CSS selectors.
- 🔧 **Schema Definition**: Define custom schemas for extracting structured JSON from repetitive patterns.

</details>

<details>
<summary>🌐 <strong>Browser Integration</strong></summary>

- 🖥️ **Managed Browser**: Use user-owned browsers with full control, avoiding bot detection.
- 🔄 **Remote Browser Control**: Connect to the Chrome Developer Tools Protocol for remote, large-scale data extraction.
- 👤 **Browser Profiler**: Create and manage persistent profiles with saved authentication states, cookies, and settings.
- 🔒 **Session Management**: Preserve browser states and reuse them for multi-step crawling.
- 🧩 **Proxy Support**: Seamlessly connect to proxies with authentication for secure access.
- ⚙️ **Full Browser Control**: Modify headers, cookies, user agents, and more for tailored crawling setups.
- 🌍 **Multi-Browser Support**: Compatible with Chromium, Firefox, and WebKit.
- 📐 **Dynamic Viewport Adjustment**: Automatically adjusts the browser viewport to match page content, ensuring complete rendering and capturing of all elements.

</details>

<details>
<summary>🔎 <strong>Crawling & Scraping</strong></summary>

- 🖼️ **Media Support**: Extract images, audio, videos, and responsive image formats like `srcset` and `picture`.
- 🚀 **Dynamic Crawling**: Execute JS and wait for async or sync conditions for dynamic content extraction.
- 📸 **Screenshots**: Capture page screenshots during crawling for debugging or analysis.
- 📂 **Raw Data Crawling**: Directly process raw HTML (`raw:`) or local files (`file://`).
- 🔗 **Comprehensive Link Extraction**: Extracts internal and external links, plus embedded iframe content.
- 🛠️ **Customizable Hooks**: Define hooks at every step to customize crawling behavior (supports both string and function-based APIs).
- 💾 **Caching**: Cache data for improved speed and to avoid redundant fetches.
- 📄 **Metadata Extraction**: Retrieve structured metadata from web pages.
- 📡 **IFrame Content Extraction**: Seamless extraction from embedded iframe content.
- 🕵️ **Lazy Load Handling**: Waits for images to fully load, ensuring no content is missed due to lazy loading.
- 🔄 **Full-Page Scanning**: Simulates scrolling to load and capture all dynamic content, perfect for infinite scroll pages.

</details>

<details>
<summary>🚀 <strong>Deployment</strong></summary>

- 🐳 **Dockerized Setup**: Optimized Docker image with a FastAPI server for easy deployment.
- 🔑 **Secure Authentication**: Built-in JWT token authentication for API security.
- 🔄 **API Gateway**: One-click deployment with secure token authentication for API-based workflows.
- 🌐 **Scalable Architecture**: Designed for mass-scale production and optimized server performance.
- ☁️ **Cloud Deployment**: Ready-to-deploy configurations for major cloud platforms.

</details>

<details>
<summary>🎯 <strong>Additional Features</strong></summary>

- 🕶️ **Stealth Mode**: Avoid bot detection by mimicking real users.
- 🏷️ **Tag-Based Content Extraction**: Refine crawling based on custom tags, headers, or metadata.
- 🔗 **Link Analysis**: Extract and analyze all links for detailed data exploration.
- 🛡️ **Error Handling**: Robust error management for seamless execution.
- 🔐 **CORS & Static Serving**: Supports filesystem-based caching and cross-origin requests.
- 📖 **Clear Documentation**: Simplified and updated guides for onboarding and advanced usage.
- 🙌 **Community Recognition**: Acknowledges contributors and pull requests for transparency.

</details>

## Try it Now!

✨ Play around with this [Colab notebook](https://colab.research.google.com/drive/1SgRPrByQLzjRfwoRNq1wSGE9nYY_EE8C?usp=sharing)

✨ Visit our [Documentation Website](https://docs.crawl4ai.com/)

## Installation 🛠️

Crawl4AI offers flexible installation options to suit various use cases. You can install it as a Python package or use Docker.

<details>
<summary>🐍 <strong>Using pip</strong></summary>

Choose the installation option that best fits your needs:

### Basic Installation

For basic web crawling and scraping tasks:

```bash
pip install crawl4ai
crawl4ai-setup   # Setup the browser
```

By default, this will install the asynchronous version of Crawl4AI, using Playwright for web crawling.

👉 **Note**: When you install Crawl4AI, the `crawl4ai-setup` should automatically install and set up Playwright. However, if you encounter any Playwright-related errors, you can manually install it using one of these methods:

1. Through the command line:

   ```bash
   playwright install
   ```

2. If the above doesn't work, try this more specific command:

   ```bash
   python -m playwright install chromium
   ```

This second method has proven to be more reliable in some cases.

---

### Installation with Synchronous Version

The sync version is deprecated and will be removed in future versions. If you need the synchronous version using Selenium:

```bash
pip install crawl4ai[sync]
```

---

### Development Installation

For contributors who plan to modify the source code:

```bash
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai
pip install -e .                 # Basic installation in editable mode
```

Install optional features:

```bash
pip install -e ".[torch]"        # With PyTorch features
pip install -e ".[transformer]"  # With Transformer features
pip install -e ".[cosine]"       # With cosine similarity features
pip install -e ".[sync]"         # With synchronous crawling (Selenium)
pip install -e ".[all]"          # Install all optional features
```

</details>

<details>
<summary>🐳 <strong>Docker Deployment</strong></summary>

> 🚀 **Now Available!** Our completely redesigned Docker implementation is here! This new solution makes deployment more efficient and seamless than ever.

### New Docker Features

The new Docker implementation includes:
- **Browser pooling** with page pre-warming for faster response times
- **Interactive playground** to test and generate request code
- **MCP integration** for direct connection to AI tools like Claude Code
- **Comprehensive API endpoints** including HTML extraction, screenshots, PDF generation, and JavaScript execution
- **Multi-architecture support** with automatic detection (AMD64/ARM64)
- **Optimized resources** with improved memory management

### Getting Started

```bash
# Pull and run the latest release
docker pull unclecode/crawl4ai:latest
docker run -d -p 11235:11235 --name crawl4ai --shm-size=1g unclecode/crawl4ai:latest

# Visit the playground at http://localhost:11235/playground
```

### Quick Test

Run a quick test (works for both Docker options):

```python
import requests

# Submit a crawl job
response = requests.post(
    "http://localhost:11235/crawl",
    json={"urls": ["https://example.com"], "priority": 10}
)
if response.status_code == 200:
    print("Crawl job submitted successfully.")

if "results" in response.json():
    results = response.json()["results"]
    print("Crawl job completed. Results:")
    for result in results:
        print(result)
else:
    task_id = response.json()["task_id"]
    print(f"Crawl job submitted. Task ID: {task_id}")
    result = requests.get(f"http://localhost:11235/task/{task_id}")
```

For more examples, see our [Docker Examples](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_example.py). For advanced configuration, environment variables, and usage examples, see our [Docker Deployment Guide](https://docs.crawl4ai.com/basic/docker-deployment/).

</details>

---

## 🔬 Advanced Usage Examples 🔬

You can check the project structure in the directory [docs/examples](https://github.com/unclecode/crawl4ai/tree/main/docs/examples). Over there, you can find a variety of examples; here, some popular examples are shared.

<details>
<summary>📝 <strong>Heuristic Markdown Generation with Clean and Fit Markdown</strong></summary>

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
from crawl4ai.content_filter_strategy import PruningContentFilter, BM25ContentFilter
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

async def main():
    browser_config = BrowserConfig(
        headless=True,
        verbose=True,
    )
    run_config = CrawlerRunConfig(
        cache_mode=CacheMode.ENABLED,
        markdown_generator=DefaultMarkdownGenerator(
            content_filter=PruningContentFilter(threshold=0.48, threshold_type="fixed", min_word_threshold=0)
        ),
        # markdown_generator=DefaultMarkdownGenerator(
        #     content_filter=BM25ContentFilter(user_query="WHEN_WE_FOCUS_BASED_ON_A_USER_QUERY", bm25_threshold=1.0)
        # ),
    )

    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun(
            url="https://docs.micronaut.io/4.9.9/guide/",
            config=run_config
        )
        print(len(result.markdown.raw_markdown))
        print(len(result.markdown.fit_markdown))

if __name__ == "__main__":
    asyncio.run(main())
```

</details>

<details>
<summary>🖥️ <strong>Executing JavaScript & Extract Structured Data without LLMs</strong></summary>

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
from crawl4ai import JsonCssExtractionStrategy
import json

async def main():
    schema = {
        "name": "KidoCode Courses",
        "baseSelector": "section.charge-methodology .w-tab-content > div",
        "fields": [
            {
                "name": "section_title",
                "selector": "h3.heading-50",
                "type": "text",
            },
            {
                "name": "section_description",
                "selector": ".charge-content",
                "type": "text",
            },
            {
                "name": "course_name",
                "selector": ".text-block-93",
                "type": "text",
            },
            {
                "name": "course_description",
                "selector": ".course-content-text",
                "type": "text",
            },
            {
                "name": "course_icon",
                "selector": ".image-92",
                "type": "attribute",
                "attribute": "src"
            }
        ]
    }

    extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)

    browser_config = BrowserConfig(
        headless=False,
        verbose=True
    )
    run_config = CrawlerRunConfig(
        extraction_strategy=extraction_strategy,
        js_code=["""(async () => {const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div");for(let tab of tabs) {tab.scrollIntoView();tab.click();await new Promise(r => setTimeout(r, 500));}})();"""],
        cache_mode=CacheMode.BYPASS
    )

    async with AsyncWebCrawler(config=browser_config) as crawler:

        result = await crawler.arun(
            url="https://www.kidocode.com/degrees/technology",
            config=run_config
        )

        companies = json.loads(result.extracted_content)
        print(f"Successfully extracted {len(companies)} companies")
        print(json.dumps(companies[0], indent=2))

if __name__ == "__main__":
    asyncio.run(main())
```

</details>

<details>
<summary>📚 <strong>Extracting Structured Data with LLMs</strong></summary>

```python
import os
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode, LLMConfig
from crawl4ai import LLMExtractionStrategy
from pydantic import BaseModel, Field

class OpenAIModelFee(BaseModel):
    model_name: str = Field(..., description="Name of the OpenAI model.")
    input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
    output_fee: str = Field(..., description="Fee for output token for the OpenAI model.")

async def main():
    browser_config = BrowserConfig(verbose=True)
    run_config = CrawlerRunConfig(
        word_count_threshold=1,
        extraction_strategy=LLMExtractionStrategy(
            # Here you can use any provider that the LiteLLM library supports, for instance: ollama/qwen2
            # provider="ollama/qwen2", api_token="no-token",
            llm_config=LLMConfig(provider="openai/gpt-4o", api_token=os.getenv('OPENAI_API_KEY')),
            schema=OpenAIModelFee.schema(),
            extraction_type="schema",
            instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
            Do not miss any models in the entire content. One extracted model JSON format should look like this:
            {"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}."""
        ),
        cache_mode=CacheMode.BYPASS,
    )

    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun(
            url='https://openai.com/api/pricing/',
            config=run_config
        )
        print(result.extracted_content)

if __name__ == "__main__":
    asyncio.run(main())
```

</details>

<details>
<summary>🤖 <strong>Using Your own Browser with Custom User Profile</strong></summary>

```python
import os, sys
from pathlib import Path
import asyncio, time
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

async def test_news_crawl():
    # Create a persistent user data directory
    user_data_dir = os.path.join(Path.home(), ".crawl4ai", "browser_profile")
    os.makedirs(user_data_dir, exist_ok=True)

    browser_config = BrowserConfig(
        verbose=True,
        headless=True,
        user_data_dir=user_data_dir,
        use_persistent_context=True,
    )
    run_config = CrawlerRunConfig(
        cache_mode=CacheMode.BYPASS
    )

    async with AsyncWebCrawler(config=browser_config) as crawler:
        url = "ADDRESS_OF_A_CHALLENGING_WEBSITE"

        result = await crawler.arun(
            url,
            config=run_config,
            magic=True,
        )

        print(f"Successfully crawled {url}")
        print(f"Content length: {len(result.markdown)}")
```

</details>

## ✨ Recent Updates
|
||
|
||
<details>
|
||
<summary><strong>Version 0.7.5 Release Highlights - The Docker Hooks & Security Update</strong></summary>
|
||
|
||
- **🔧 Docker Hooks System**: Complete pipeline customization with user-provided Python functions at 8 key points
|
||
- **✨ Function-Based Hooks API (NEW)**: Write hooks as regular Python functions with full IDE support:
|
||
```python
|
||
from crawl4ai import hooks_to_string
|
||
from crawl4ai.docker_client import Crawl4aiDockerClient
|
||
|
||
# Define hooks as regular Python functions
|
||
async def on_page_context_created(page, context, **kwargs):
|
||
"""Block images to speed up crawling"""
|
||
await context.route("**/*.{png,jpg,jpeg,gif,webp}", lambda route: route.abort())
|
||
await page.set_viewport_size({"width": 1920, "height": 1080})
|
||
return page
|
||
|
||
async def before_goto(page, context, url, **kwargs):
|
||
"""Add custom headers"""
|
||
await page.set_extra_http_headers({'X-Crawl4AI': 'v0.7.5'})
|
||
return page
|
||
|
||
# Option 1: Use hooks_to_string() utility for REST API
|
||
hooks_code = hooks_to_string({
|
||
"on_page_context_created": on_page_context_created,
|
||
"before_goto": before_goto
|
||
})
|
||
|
||
# Option 2: Docker client with automatic conversion (Recommended)
|
||
client = Crawl4aiDockerClient(base_url="http://localhost:11235")
|
||
results = await client.crawl(
|
||
urls=["https://httpbin.org/html"],
|
||
hooks={
|
||
"on_page_context_created": on_page_context_created,
|
||
"before_goto": before_goto
|
||
}
|
||
)
|
||
# ✓ Full IDE support, type checking, and reusability!
|
||
```
|
||
|
||
- **🤖 Enhanced LLM Integration**: Custom providers with temperature control and base_url configuration
|
||
- **🔒 HTTPS Preservation**: Secure internal link handling with `preserve_https_for_internal_links=True`
|
||
- **🐍 Python 3.10+ Support**: Modern language features and enhanced performance
|
||
- **🛠️ Bug Fixes**: Resolved multiple community-reported issues including URL processing, JWT authentication, and proxy configuration
|
||
|
||
[Full v0.7.5 Release Notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.5.md)
|
||
|
||
</details>
|
||
|
||
<details>
|
||
<summary><strong>Version 0.7.4 Release Highlights - The Intelligent Table Extraction & Performance Update</strong></summary>
|
||
|
||
- **🚀 LLMTableExtraction**: Revolutionary table extraction with intelligent chunking for massive tables:
|
||
```python
|
||
from crawl4ai import LLMTableExtraction, LLMConfig
|
||
|
||
# Configure intelligent table extraction
|
||
table_strategy = LLMTableExtraction(
|
||
llm_config=LLMConfig(provider="openai/gpt-4.1-mini"),
|
||
enable_chunking=True, # Handle massive tables
|
||
chunk_token_threshold=5000, # Smart chunking threshold
|
||
overlap_threshold=100, # Maintain context between chunks
|
||
extraction_type="structured" # Get structured data output
|
||
)
|
||
|
||
config = CrawlerRunConfig(table_extraction_strategy=table_strategy)
|
||
result = await crawler.arun("https://complex-tables-site.com", config=config)
|
||
|
||
# Tables are automatically chunked, processed, and merged
|
||
for table in result.tables:
|
||
print(f"Extracted table: {len(table['data'])} rows")
|
||
```
|
||
|
||
- **⚡ Dispatcher Bug Fix**: Fixed sequential processing bottleneck in arun_many for fast-completing tasks
|
||
- **🧹 Memory Management Refactor**: Consolidated memory utilities into main utils module for cleaner architecture
|
||
- **🔧 Browser Manager Fixes**: Resolved race conditions in concurrent page creation with thread-safe locking
|
||
- **🔗 Advanced URL Processing**: Better handling of raw:// URLs and base tag link resolution
|
||
- **🛡️ Enhanced Proxy Support**: Flexible proxy configuration supporting both dict and string formats
|
||
|
||
[Full v0.7.4 Release Notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.4.md)
|
||
|
||
</details>
|
||
|
||
<details>
|
||
<summary><strong>Version 0.7.3 Release Highlights - The Multi-Config Intelligence Update</strong></summary>
|
||
|
||
- **🕵️ Undetected Browser Support**: Bypass sophisticated bot detection systems:
|
||
```python
|
||
from crawl4ai import AsyncWebCrawler, BrowserConfig
|
||
|
||
browser_config = BrowserConfig(
|
||
browser_type="undetected", # Use undetected Chrome
|
||
headless=True, # Can run headless with stealth
|
||
extra_args=[
|
||
"--disable-blink-features=AutomationControlled",
|
||
"--disable-web-security"
|
||
]
|
||
)
|
||
|
||
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||
result = await crawler.arun("https://protected-site.com")
|
||
# Successfully bypass Cloudflare, Akamai, and custom bot detection
|
||
```
|
||
|
||
- **🎨 Multi-URL Configuration**: Different strategies for different URL patterns in one batch:
|
||
```python
|
||
from crawl4ai import CrawlerRunConfig, MatchMode
|
||
|
||
configs = [
|
||
# Documentation sites - aggressive caching
|
||
CrawlerRunConfig(
|
||
url_matcher=["*docs*", "*documentation*"],
|
||
cache_mode="write",
|
||
markdown_generator_options={"include_links": True}
|
||
),
|
||
|
||
# News/blog sites - fresh content
|
||
CrawlerRunConfig(
|
||
url_matcher=lambda url: 'blog' in url or 'news' in url,
|
||
cache_mode="bypass"
|
||
),
|
||
|
||
# Fallback for everything else
|
||
CrawlerRunConfig()
|
||
]
|
||
|
||
results = await crawler.arun_many(urls, config=configs)
|
||
# Each URL gets the perfect configuration automatically
|
||
```
|
||
|
||
- **🧠 Memory Monitoring**: Track and optimize memory usage during crawling:
|
||
```python
|
||
from crawl4ai.memory_utils import MemoryMonitor
|
||
|
||
monitor = MemoryMonitor()
|
||
monitor.start_monitoring()
|
||
|
||
results = await crawler.arun_many(large_url_list)
|
||
|
||
report = monitor.get_report()
|
||
print(f"Peak memory: {report['peak_mb']:.1f} MB")
|
||
print(f"Efficiency: {report['efficiency']:.1f}%")
|
||
# Get optimization recommendations
|
||
```
|
||
|
||
- **📊 Enhanced Table Extraction**: Direct DataFrame conversion from web tables:
|
||
```python
|
||
result = await crawler.arun("https://site-with-tables.com")
|
||
|
||
# New way - direct table access
|
||
if result.tables:
|
||
import pandas as pd
|
||
for table in result.tables:
|
||
df = pd.DataFrame(table['data'])
|
||
print(f"Table: {df.shape[0]} rows × {df.shape[1]} columns")
|
||
```
|
||
|
||
- **💰 GitHub Sponsors**: 4-tier sponsorship system for project sustainability
|
||
- **🐳 Docker LLM Flexibility**: Configure providers via environment variables
|
||
|
||
[Full v0.7.3 Release Notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.3.md)
|
||
|
||
</details>
|
||
|
||
<details>
|
||
<summary><strong>Version 0.7.0 Release Highlights - The Adaptive Intelligence Update</strong></summary>
|
||
|
||
- **🧠 Adaptive Crawling**: Your crawler now learns and adapts to website patterns automatically:
|
||
```python
|
||
config = AdaptiveConfig(
|
||
confidence_threshold=0.7, # Min confidence to stop crawling
|
||
max_depth=5, # Maximum crawl depth
|
||
max_pages=20, # Maximum number of pages to crawl
|
||
strategy="statistical"
|
||
)
|
||
|
||
async with AsyncWebCrawler() as crawler:
|
||
adaptive_crawler = AdaptiveCrawler(crawler, config)
|
||
state = await adaptive_crawler.digest(
|
||
start_url="https://news.example.com",
|
||
query="latest news content"
|
||
)
|
||
# Crawler learns patterns and improves extraction over time
|
||
```
|
||
|
||
- **🌊 Virtual Scroll Support**: Complete content extraction from infinite scroll pages:
|
||
```python
|
||
scroll_config = VirtualScrollConfig(
|
||
container_selector="[data-testid='feed']",
|
||
scroll_count=20,
|
||
scroll_by="container_height",
|
||
wait_after_scroll=1.0
|
||
)
|
||
|
||
result = await crawler.arun(url, config=CrawlerRunConfig(
|
||
virtual_scroll_config=scroll_config
|
||
))
|
||
```
|
||
|
||
- **🔗 Intelligent Link Analysis**: 3-layer scoring system for smart link prioritization:
```python
from crawl4ai import CrawlerRunConfig, LinkPreviewConfig

link_config = LinkPreviewConfig(
    query="machine learning tutorials",
    score_threshold=0.3,
    concurrent_requests=10
)

result = await crawler.arun(url, config=CrawlerRunConfig(
    link_preview_config=link_config,
    score_links=True
))
# Links ranked by relevance and quality
```

- **🎣 Async URL Seeder**: Discover thousands of URLs in seconds:
```python
from crawl4ai import AsyncUrlSeeder, SeedingConfig

seeder = AsyncUrlSeeder(SeedingConfig(
    source="sitemap+cc",
    pattern="*/blog/*",
    query="python tutorials",
    score_threshold=0.4
))

urls = await seeder.discover("https://example.com")
```

- **⚡ Performance Boost**: Up to 3x faster with optimized resource handling and memory efficiency

Read the full details in our [0.7.0 Release Notes](https://docs.crawl4ai.com/blog/release-v0.7.0) or check the [CHANGELOG](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md).

</details>

## Version Numbering in Crawl4AI

Crawl4AI follows standard Python version numbering conventions (PEP 440) to help users understand the stability and features of each release.

<details>
<summary>📈 <strong>Version Numbers Explained</strong></summary>

Our version numbers follow this pattern: `MAJOR.MINOR.PATCH` (e.g., 0.4.3)

#### Pre-release Versions
We use different suffixes to indicate development stages:

- `dev` (0.4.3dev1): Development versions, unstable
- `a` (0.4.3a1): Alpha releases, experimental features
- `b` (0.4.3b1): Beta releases, feature complete but needs testing
- `rc` (0.4.3rc1): Release candidates, potential final version
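
To check programmatically where an installed build sits on this scale, here is a minimal sketch using the standard `packaging` library (this is not a Crawl4AI API, and it assumes `packaging` is available, as it is in most pip-based environments):

```python
# Hedged example: parse the installed version and report its PEP 440 stage.
from importlib.metadata import version
from packaging.version import Version

v = Version(version("crawl4ai"))
print(v, "(pre-release)" if v.is_prerelease else "(stable)")
print(v.pre)  # e.g. ('b', 1) for a beta build, None for a final release
```

For a build like `0.4.3b1` this reports a pre-release marker of `('b', 1)`; `.dev` builds also count as pre-releases.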
#### Installation
- Regular installation (stable version):
```bash
pip install -U crawl4ai
```

- Install pre-release versions:
```bash
pip install crawl4ai --pre
```

- Install specific version:
```bash
pip install crawl4ai==0.4.3b1
```

#### Why Pre-releases?
We use pre-releases to:
- Test new features in real-world scenarios
- Gather feedback before final releases
- Ensure stability for production users
- Allow early adopters to try new features

For production environments, we recommend using the stable version. For testing new features, you can opt in to pre-releases using the `--pre` flag.

</details>

## 📖 Documentation & Roadmap

> 🚨 **Documentation Update Alert**: We're undertaking a major documentation overhaul next week to reflect recent updates and improvements. Stay tuned for a more comprehensive and up-to-date guide!

For current documentation, including installation instructions, advanced features, and API reference, visit our [Documentation Website](https://docs.crawl4ai.com/).

To check our development plans and upcoming features, visit our [Roadmap](https://github.com/unclecode/crawl4ai/blob/main/ROADMAP.md).

<details>
<summary>📈 <strong>Development TODOs</strong></summary>

- [x] 0. Graph Crawler: Smart website traversal using graph search algorithms for comprehensive nested page extraction
- [x] 1. Question-Based Crawler: Natural language driven web discovery and content extraction
- [x] 2. Knowledge-Optimal Crawler: Smart crawling that maximizes knowledge while minimizing data extraction
- [x] 3. Agentic Crawler: Autonomous system for complex multi-step crawling operations
- [x] 4. Automated Schema Generator: Convert natural language to extraction schemas
- [x] 5. Domain-Specific Scrapers: Pre-configured extractors for common platforms (academic, e-commerce)
- [x] 6. Web Embedding Index: Semantic search infrastructure for crawled content
- [x] 7. Interactive Playground: Web UI for testing, comparing strategies with AI assistance
- [x] 8. Performance Monitor: Real-time insights into crawler operations
- [ ] 9. Cloud Integration: One-click deployment solutions across cloud providers
- [x] 10. Sponsorship Program: Structured support system with tiered benefits
- [ ] 11. Educational Content: "How to Crawl" video series and interactive tutorials

</details>

## 🤝 Contributing

We welcome contributions from the open-source community. Check out our [contribution guidelines](https://github.com/unclecode/crawl4ai/blob/main/CONTRIBUTORS.md) for more information.
## 📄 License & Attribution

This project is licensed under the Apache License 2.0; attribution is recommended via the badges below. See the [Apache 2.0 License](https://github.com/unclecode/crawl4ai/blob/main/LICENSE) file for details.

### Attribution Requirements
When using Crawl4AI, you must include one of the following attribution methods:

<details>
<summary>📈 <strong>1. Badge Attribution (Recommended)</strong></summary>
Add one of these badges to your README, documentation, or website:

| Theme | Badge |
|-------|-------|
| **Disco Theme (Animated)** | <a href="https://github.com/unclecode/crawl4ai"><img src="./docs/assets/powered-by-disco.svg" alt="Powered by Crawl4AI" width="200"/></a> |
| **Night Theme (Dark with Neon)** | <a href="https://github.com/unclecode/crawl4ai"><img src="./docs/assets/powered-by-night.svg" alt="Powered by Crawl4AI" width="200"/></a> |
| **Dark Theme (Classic)** | <a href="https://github.com/unclecode/crawl4ai"><img src="./docs/assets/powered-by-dark.svg" alt="Powered by Crawl4AI" width="200"/></a> |
| **Light Theme (Classic)** | <a href="https://github.com/unclecode/crawl4ai"><img src="./docs/assets/powered-by-light.svg" alt="Powered by Crawl4AI" width="200"/></a> |

HTML code for adding the badges:
```html
<!-- Disco Theme (Animated) -->
<a href="https://github.com/unclecode/crawl4ai">
<img src="https://raw.githubusercontent.com/unclecode/crawl4ai/main/docs/assets/powered-by-disco.svg" alt="Powered by Crawl4AI" width="200"/>
</a>

<!-- Night Theme (Dark with Neon) -->
<a href="https://github.com/unclecode/crawl4ai">
<img src="https://raw.githubusercontent.com/unclecode/crawl4ai/main/docs/assets/powered-by-night.svg" alt="Powered by Crawl4AI" width="200"/>
</a>

<!-- Dark Theme (Classic) -->
<a href="https://github.com/unclecode/crawl4ai">
<img src="https://raw.githubusercontent.com/unclecode/crawl4ai/main/docs/assets/powered-by-dark.svg" alt="Powered by Crawl4AI" width="200"/>
</a>

<!-- Light Theme (Classic) -->
<a href="https://github.com/unclecode/crawl4ai">
<img src="https://raw.githubusercontent.com/unclecode/crawl4ai/main/docs/assets/powered-by-light.svg" alt="Powered by Crawl4AI" width="200"/>
</a>

<!-- Simple Shield Badge -->
<a href="https://github.com/unclecode/crawl4ai">
<img src="https://img.shields.io/badge/Powered%20by-Crawl4AI-blue?style=flat-square" alt="Powered by Crawl4AI"/>
</a>
```
</details>

<details>
<summary>📖 <strong>2. Text Attribution</strong></summary>
Add this line to your documentation:
```
This project uses Crawl4AI (https://github.com/unclecode/crawl4ai) for web data extraction.
```
</details>

## 📚 Citation

If you use Crawl4AI in your research or project, please cite:
```bibtex
@software{crawl4ai2024,
  author = {UncleCode},
  title = {Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub Repository},
  howpublished = {\url{https://github.com/unclecode/crawl4ai}},
  commit = {Please use the commit hash you're working with}
}
```

Text citation format:
```
UncleCode. (2024). Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper [Computer software].
GitHub. https://github.com/unclecode/crawl4ai
```

## 📧 Contact

For questions, suggestions, or feedback, feel free to reach out:

- GitHub: [unclecode](https://github.com/unclecode)
- Twitter: [@unclecode](https://twitter.com/unclecode)
- Website: [crawl4ai.com](https://crawl4ai.com)

Happy Crawling! 🕸️🚀

## 🗾 Mission
Our mission is to unlock the value of personal and enterprise data by transforming digital footprints into structured, tradeable assets. Crawl4AI empowers individuals and organizations with open-source tools to extract and structure data, fostering a shared data economy.

We envision a future where AI is powered by real human knowledge, ensuring data creators directly benefit from their contributions. By democratizing data and enabling ethical sharing, we are laying the foundation for authentic AI advancement.

<details>
<summary>🔑 <strong>Key Opportunities</strong></summary>

- **Data Capitalization**: Transform digital footprints into measurable, valuable assets.
- **Authentic AI Data**: Provide AI systems with real human insights.
- **Shared Economy**: Create a fair data marketplace that benefits data creators.

</details>

<details>
<summary>🚀 <strong>Development Pathway</strong></summary>

1. **Open-Source Tools**: Community-driven platforms for transparent data extraction.
2. **Digital Asset Structuring**: Tools to organize and value digital knowledge.
3. **Ethical Data Marketplace**: A secure, fair platform for exchanging structured data.

For more details, see our [full mission statement](./MISSION.md).
</details>

## 🌟 Current Sponsors

### 🏢 Enterprise Sponsors & Partners

Our enterprise sponsors and technology partners help scale Crawl4AI to power production-grade data pipelines.

| Company | About | Sponsorship Tier |
|------|------|----------------------------|
| <a href="https://dashboard.capsolver.com/passport/register?inviteCode=ESVSECTX5Q23" target="_blank"><picture><source width="120" media="(prefers-color-scheme: dark)" srcset="https://docs.crawl4ai.com/uploads/sponsors/20251013045338_72a71fa4ee4d2f40.png"><source width="120" media="(prefers-color-scheme: light)" srcset="https://www.capsolver.com/assets/images/logo-text.png"><img alt="Capsolver" src="https://www.capsolver.com/assets/images/logo-text.png"></picture></a> | AI-powered Captcha solving service. Supports all major Captcha types, including reCAPTCHA, Cloudflare, and more | 🥈 Silver |
| <a href="https://kipo.ai" target="_blank"><img src="https://docs.crawl4ai.com/uploads/sponsors/20251013045751_2d54f57f117c651e.png" alt="DataSync" width="120"/></a> | Helps engineers and buyers find, compare, and source electronic & industrial parts in seconds, with specs, pricing, lead times & alternatives.| 🥇 Gold |
| <a href="https://www.kidocode.com/" target="_blank"><img src="https://docs.crawl4ai.com/uploads/sponsors/20251013045045_bb8dace3f0440d65.svg" alt="Kidocode" width="120"/><p align="center">KidoCode</p></a> | Kidocode is a hybrid technology and entrepreneurship school for kids aged 5–18, offering both online and on-campus education. | 🥇 Gold |
| <a href="https://www.alephnull.sg/" target="_blank"><img src="https://docs.crawl4ai.com/uploads/sponsors/20251013050323_a9e8e8c4c3650421.svg" alt="Aleph null" width="120"/></a> | Singapore-based Aleph Null is Asia’s leading edtech hub, dedicated to student-centric, AI-driven education—empowering learners with the tools to thrive in a fast-changing world. | 🥇 Gold |
### 🧑‍🤝‍🧑 Individual Sponsors

A heartfelt thanks to our individual supporters! Every contribution helps us keep our open-source mission alive and thriving!

<p align="left">
<a href="https://github.com/hafezparast"><img src="https://avatars.githubusercontent.com/u/14273305?s=60&v=4" style="border-radius:50%;" width="64px;"/></a>
<a href="https://github.com/ntohidi"><img src="https://avatars.githubusercontent.com/u/17140097?s=60&v=4" style="border-radius:50%;"width="64px;"/></a>
<a href="https://github.com/Sjoeborg"><img src="https://avatars.githubusercontent.com/u/17451310?s=60&v=4" style="border-radius:50%;"width="64px;"/></a>
<a href="https://github.com/romek-rozen"><img src="https://avatars.githubusercontent.com/u/30595969?s=60&v=4" style="border-radius:50%;"width="64px;"/></a>
<a href="https://github.com/Kourosh-Kiyani"><img src="https://avatars.githubusercontent.com/u/34105600?s=60&v=4" style="border-radius:50%;"width="64px;"/></a>
<a href="https://github.com/Etherdrake"><img src="https://avatars.githubusercontent.com/u/67021215?s=60&v=4" style="border-radius:50%;"width="64px;"/></a>
<a href="https://github.com/shaman247"><img src="https://avatars.githubusercontent.com/u/211010067?s=60&v=4" style="border-radius:50%;"width="64px;"/></a>
<a href="https://github.com/work-flow-manager"><img src="https://avatars.githubusercontent.com/u/217665461?s=60&v=4" style="border-radius:50%;"width="64px;"/></a>
</p>

> Want to join them? [Sponsor Crawl4AI →](https://github.com/sponsors/unclecode)
## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=unclecode/crawl4ai&type=Date)](https://star-history.com/#unclecode/crawl4ai&Date)