Compare commits


1 Commit

Author: ntohidi · SHA1: 88a9fbbb7e · Date: 2025-08-11 18:16:57 +08:00

fix(deep-crawl): BestFirst priority inversion; remove pre-scoring truncation. ref #1253

Use negative scores in PQ to visit high-score URLs first and drop link cap prior to scoring; add test for ordering.
34 changed files with 732 additions and 4575 deletions
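The negative-score trick in this commit is easy to verify in isolation. Below is a minimal sketch with illustrative names, not crawl4ai's internals: Python's `heapq` is a min-heap, so pushing raw scores pops the lowest-scoring URL first, which is the inversion the commit fixes; negating the score restores best-first order.

```python
import heapq

# Hypothetical frontier, not crawl4ai's internal data structure.
frontier = []
for url, score in [("https://a.example", 0.9), ("https://b.example", 0.2)]:
    # heapq is a min-heap: pushing +score would pop the LOWEST score first
    # (the priority inversion). Pushing -score pops the highest score first.
    heapq.heappush(frontier, (-score, url))

neg_score, url = heapq.heappop(frontier)
assert url == "https://a.example"  # highest-scoring URL is visited first
```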

.github/FUNDING.yml

@@ -1,7 +0,0 @@
# These are supported funding model platforms
# GitHub Sponsors
github: unclecode
# Custom links for enterprise inquiries (uncomment when ready)
# custom: ["https://crawl4ai.com/enterprise"]

.gitignore

@@ -1,11 +1,6 @@
# Scripts folder (private tools)
.scripts/
# Local development CLI (private)
local_dev.py
dev
DEV_CLI_README.md
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
@@ -275,7 +270,4 @@ docs/**/data
.codecat/
docs/apps/linkdin/debug*/
docs/apps/linkdin/samples/insights/*
# Production checklist (local, not for version control)
PRODUCTION_CHECKLIST.md
docs/apps/linkdin/samples/insights/*

CHANGELOG.md

@@ -5,76 +5,6 @@ All notable changes to Crawl4AI will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.7.3] - 2025-08-09
### Added
- **🕵️ Undetected Browser Support**: New browser adapter pattern with stealth capabilities
- `browser_adapter.py` with undetected Chrome integration
- Bypass sophisticated bot detection systems (Cloudflare, Akamai, custom solutions)
- Support for headless stealth mode with anti-detection techniques
- Human-like behavior simulation with random mouse movements and scrolling
- Comprehensive examples for anti-bot strategies and stealth crawling
- Full documentation guide for undetected browser usage
- **🎨 Multi-URL Configuration System**: URL-specific crawler configurations for batch processing
- Different crawling strategies for different URL patterns in a single batch
- Support for string patterns with wildcards (`"*.pdf"`, `"*/blog/*"`)
- Lambda function matchers for complex URL logic
- Mixed matchers combining strings and functions with AND/OR logic
- Fallback configuration support when no patterns match
- First-match-wins configuration selection with optional fallback
- **🧠 Memory Monitoring & Optimization**: Comprehensive memory usage tracking
- New `memory_utils.py` module for memory monitoring and optimization
- Real-time memory usage tracking during crawl sessions
- Memory leak detection and reporting
- Performance optimization recommendations
- Peak memory usage analysis and efficiency metrics
- Automatic cleanup suggestions for memory-intensive operations
- **📊 Enhanced Table Extraction**: Improved table access and DataFrame conversion
- Direct `result.tables` interface replacing generic `result.media` approach
- Instant pandas DataFrame conversion with `pd.DataFrame(table['data'])`
- Enhanced table detection algorithms for better accuracy
- Table metadata including source XPath and headers
- Improved table structure preservation during extraction
- **💰 GitHub Sponsors Integration**: 4-tier sponsorship system
- Supporter ($5/month): Community support + early feature previews
- Professional ($25/month): Priority support + beta access
- Business ($100/month): Direct consultation + custom integrations
- Enterprise ($500/month): Dedicated support + feature development
- Custom arrangement options for larger organizations
- **🐳 Docker LLM Provider Flexibility**: Environment-based LLM configuration
- `LLM_PROVIDER` environment variable support for dynamic provider switching
- `.llm.env` file support for secure configuration management
- Per-request provider override capabilities in API endpoints
- Support for OpenAI, Groq, and other providers without rebuilding images
- Enhanced Docker documentation with deployment examples
### Fixed
- **URL Matcher Fallback**: Resolved edge cases in URL pattern matching logic
- **Memory Management**: Fixed memory leaks in long-running crawl sessions
- **Sitemap Processing**: Improved redirect handling in sitemap fetching
- **Table Extraction**: Enhanced table detection and extraction accuracy
- **Error Handling**: Better error messages and recovery from network failures
### Changed
- **Architecture Refactoring**: Major cleanup and optimization
- Moved 2,450+ lines from main `async_crawler_strategy.py` to backup
- Cleaner separation of concerns in crawler architecture
- Better maintainability and code organization
- Preserved backward compatibility while improving performance
### Documentation
- **Comprehensive Examples**: Added real-world URLs and practical use cases
- **API Documentation**: Complete CrawlResult field documentation with all available fields
- **Migration Guides**: Updated table extraction patterns from `result.media` to `result.tables`
- **Undetected Browser Guide**: Full documentation for stealth mode and anti-bot strategies
- **Multi-Config Examples**: Detailed examples for URL-specific configurations
- **Docker Deployment**: Enhanced Docker documentation with LLM provider configuration
## [0.7.x] - 2025-06-29
### Added


@@ -1,809 +0,0 @@
# 🚀🤖 Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper.
<div align="center">
<a href="https://trendshift.io/repositories/11716" target="_blank"><img src="https://trendshift.io/api/badge/repositories/11716" alt="unclecode%2Fcrawl4ai | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
[![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers)
[![GitHub Forks](https://img.shields.io/github/forks/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/network/members)
[![PyPI version](https://badge.fury.io/py/crawl4ai.svg)](https://badge.fury.io/py/crawl4ai)
[![Python Version](https://img.shields.io/pypi/pyversions/crawl4ai)](https://pypi.org/project/crawl4ai/)
[![Downloads](https://static.pepy.tech/badge/crawl4ai/month)](https://pepy.tech/project/crawl4ai)
[![GitHub Sponsors](https://img.shields.io/github/sponsors/unclecode?style=flat&logo=GitHub-Sponsors&label=Sponsors&color=pink)](https://github.com/sponsors/unclecode)
<p align="center">
<a href="https://x.com/crawl4ai">
<img src="https://img.shields.io/badge/Follow%20on%20X-000000?style=for-the-badge&logo=x&logoColor=white" alt="Follow on X" />
</a>
<a href="https://www.linkedin.com/company/crawl4ai">
<img src="https://img.shields.io/badge/Follow%20on%20LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white" alt="Follow on LinkedIn" />
</a>
<a href="https://discord.gg/jP8KfhDhyN">
<img src="https://img.shields.io/badge/Join%20our%20Discord-5865F2?style=for-the-badge&logo=discord&logoColor=white" alt="Join our Discord" />
</a>
</p>
</div>
Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant community. It delivers blazing-fast, AI-ready web crawling tailored for LLMs, AI agents, and data pipelines. Open source, flexible, and built for real-time performance, Crawl4AI empowers developers with unmatched speed, precision, and deployment ease.
[✨ Check out latest update v0.7.0](#-recent-updates)
🎉 **Version 0.7.0 is now available!** The Adaptive Intelligence Update introduces groundbreaking features: Adaptive Crawling that learns website patterns, Virtual Scroll support for infinite pages, intelligent Link Preview with 3-layer scoring, Async URL Seeder for massive discovery, and significant performance improvements. [Read the release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.0.md)
<details>
<summary>🤓 <strong>My Personal Story</strong></summary>
My journey with computers started in childhood when my dad, a computer scientist, introduced me to an Amstrad computer. Those early days sparked a fascination with technology, leading me to pursue computer science and specialize in NLP during my postgraduate studies. It was during this time that I first delved into web crawling, building tools to help researchers organize papers and extract information from publications, a challenging yet rewarding experience that honed my skills in data extraction.
Fast forward to 2023, I was working on a tool for a project and needed a crawler to convert a webpage into markdown. While exploring solutions, I found one that claimed to be open-source but required creating an account and generating an API token. Worse, it turned out to be a SaaS model charging $16, and its quality didn't meet my standards. Frustrated, I realized this was a deeper problem. That frustration turned into turbo anger mode, and I decided to build my own solution. In just a few days, I created Crawl4AI. To my surprise, it went viral, earning thousands of GitHub stars and resonating with a global community.
I made Crawl4AI open-source for two reasons. First, it's my way of giving back to the open-source community that has supported me throughout my career. Second, I believe data should be accessible to everyone, not locked behind paywalls or monopolized by a few. Open access to data lays the foundation for the democratization of AI, a vision where individuals can train their own models and take ownership of their information. This library is the first step in a larger journey to create the best open-source data extraction and generation tool the world has ever seen, built collaboratively by a passionate community.
Thank you to everyone who has supported this project, used it, and shared feedback. Your encouragement motivates me to dream even bigger. Join us, file issues, submit PRs, or spread the word. Together, we can build a tool that truly empowers people to access their own data and reshape the future of AI.
</details>
## 🧐 Why Crawl4AI?
1. **Built for LLMs**: Creates smart, concise Markdown optimized for RAG and fine-tuning applications.
2. **Lightning Fast**: Delivers results faster with real-time, cost-efficient performance.
3. **Flexible Browser Control**: Offers session management, proxies, and custom hooks for seamless data access.
4. **Heuristic Intelligence**: Uses advanced algorithms for efficient extraction, reducing reliance on costly models.
5. **Open Source & Deployable**: Fully open-source with no API keys—ready for Docker and cloud integration.
6. **Thriving Community**: Actively maintained by a vibrant community and the #1 trending GitHub repository.
## 🚀 Quick Start
1. Install Crawl4AI:
```bash
# Install the package
pip install -U crawl4ai
# For pre-release versions
pip install crawl4ai --pre
# Run post-installation setup
crawl4ai-setup
# Verify your installation
crawl4ai-doctor
```
If you encounter any browser-related issues, you can install them manually:
```bash
python -m playwright install --with-deps chromium
```
2. Run a simple web crawl with Python:
```python
import asyncio
from crawl4ai import *

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
        )
        print(result.markdown)

if __name__ == "__main__":
    asyncio.run(main())
```
3. Or use the new command-line interface:
```bash
# Basic crawl with markdown output
crwl https://www.nbcnews.com/business -o markdown
# Deep crawl with BFS strategy, max 10 pages
crwl https://docs.crawl4ai.com --deep-crawl bfs --max-pages 10
# Use LLM extraction with a specific question
crwl https://www.example.com/products -q "Extract all product prices"
```
## ✨ Features
<details>
<summary>📝 <strong>Markdown Generation</strong></summary>
- 🧹 **Clean Markdown**: Generates clean, structured Markdown with accurate formatting.
- 🎯 **Fit Markdown**: Heuristic-based filtering to remove noise and irrelevant parts for AI-friendly processing.
- 🔗 **Citations and References**: Converts page links into a numbered reference list with clean citations.
- 🛠️ **Custom Strategies**: Users can create their own Markdown generation strategies tailored to specific needs.
- 📚 **BM25 Algorithm**: Employs BM25-based filtering for extracting core information and removing irrelevant content.
</details>
<details>
<summary>📊 <strong>Structured Data Extraction</strong></summary>
- 🤖 **LLM-Driven Extraction**: Supports all LLMs (open-source and proprietary) for structured data extraction.
- 🧱 **Chunking Strategies**: Implements chunking (topic-based, regex, sentence-level) for targeted content processing.
- 🌌 **Cosine Similarity**: Find relevant content chunks based on user queries for semantic extraction.
- 🔎 **CSS-Based Extraction**: Fast schema-based data extraction using XPath and CSS selectors.
- 🔧 **Schema Definition**: Define custom schemas for extracting structured JSON from repetitive patterns.
</details>
<details>
<summary>🌐 <strong>Browser Integration</strong></summary>
- 🖥️ **Managed Browser**: Use user-owned browsers with full control, avoiding bot detection.
- 🔄 **Remote Browser Control**: Connect to Chrome Developer Tools Protocol for remote, large-scale data extraction.
- 👤 **Browser Profiler**: Create and manage persistent profiles with saved authentication states, cookies, and settings.
- 🔒 **Session Management**: Preserve browser states and reuse them for multi-step crawling.
- 🧩 **Proxy Support**: Seamlessly connect to proxies with authentication for secure access.
- ⚙️ **Full Browser Control**: Modify headers, cookies, user agents, and more for tailored crawling setups.
- 🌍 **Multi-Browser Support**: Compatible with Chromium, Firefox, and WebKit (see the sketch after this list).
- 📐 **Dynamic Viewport Adjustment**: Automatically adjusts the browser viewport to match page content, ensuring complete rendering and capturing of all elements.
</details>
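A short sketch tying these browser controls together. The values are placeholders, and the exact option set may vary by version:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig

# Illustrative values only; swap in your own proxy and user agent.
browser_config = BrowserConfig(
    browser_type="firefox",       # chromium / firefox / webkit
    headless=True,
    proxy="http://user:pass@proxy.example:8080",   # hypothetical proxy
    user_agent="Mozilla/5.0 (compatible; MyCrawler/1.0)",
)

async def main():
    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun("https://example.com")
        print(result.markdown[:500])

if __name__ == "__main__":
    asyncio.run(main())
```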
<details>
<summary>🔎 <strong>Crawling & Scraping</strong></summary>
- 🖼️ **Media Support**: Extract images, audio, videos, and responsive image formats like `srcset` and `picture`.
- 🚀 **Dynamic Crawling**: Execute JS and wait for async or sync operations to extract dynamic content.
- 📸 **Screenshots**: Capture page screenshots during crawling for debugging or analysis.
- 📂 **Raw Data Crawling**: Directly process raw HTML (`raw:`) or local files (`file://`).
- 🔗 **Comprehensive Link Extraction**: Extracts internal, external links, and embedded iframe content.
- 🛠️ **Customizable Hooks**: Define hooks at every step to customize crawling behavior.
- 💾 **Caching**: Cache data for improved speed and to avoid redundant fetches.
- 📄 **Metadata Extraction**: Retrieve structured metadata from web pages.
- 📡 **IFrame Content Extraction**: Seamless extraction from embedded iframe content.
- 🕵️ **Lazy Load Handling**: Waits for images to fully load, ensuring no content is missed due to lazy loading.
- 🔄 **Full-Page Scanning**: Simulates scrolling to load and capture all dynamic content, perfect for infinite scroll pages.
</details>
<details>
<summary>🚀 <strong>Deployment</strong></summary>
- 🐳 **Dockerized Setup**: Optimized Docker image with FastAPI server for easy deployment.
- 🔑 **Secure Authentication**: Built-in JWT token authentication for API security.
- 🔄 **API Gateway**: One-click deployment with secure token authentication for API-based workflows.
- 🌐 **Scalable Architecture**: Designed for mass-scale production and optimized server performance.
- ☁️ **Cloud Deployment**: Ready-to-deploy configurations for major cloud platforms.
</details>
<details>
<summary>🎯 <strong>Additional Features</strong></summary>
- 🕶️ **Stealth Mode**: Avoid bot detection by mimicking real users.
- 🏷️ **Tag-Based Content Extraction**: Refine crawling based on custom tags, headers, or metadata.
- 🔗 **Link Analysis**: Extract and analyze all links for detailed data exploration.
- 🛡️ **Error Handling**: Robust error management for seamless execution.
- 🔐 **CORS & Static Serving**: Supports filesystem-based caching and cross-origin requests.
- 📖 **Clear Documentation**: Simplified and updated guides for onboarding and advanced usage.
- 🙌 **Community Recognition**: Acknowledges contributors and pull requests for transparency.
</details>
## Try it Now!
✨ Play around with this [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SgRPrByQLzjRfwoRNq1wSGE9nYY_EE8C?usp=sharing)
✨ Visit our [Documentation Website](https://docs.crawl4ai.com/)
## Installation 🛠️
Crawl4AI offers flexible installation options to suit various use cases. You can install it as a Python package or use Docker.
<details>
<summary>🐍 <strong>Using pip</strong></summary>
Choose the installation option that best fits your needs:
### Basic Installation
For basic web crawling and scraping tasks:
```bash
pip install crawl4ai
crawl4ai-setup # Setup the browser
```
By default, this will install the asynchronous version of Crawl4AI, using Playwright for web crawling.
👉 **Note**: When you install Crawl4AI, the `crawl4ai-setup` command should automatically install and set up Playwright. However, if you encounter any Playwright-related errors, you can manually install it using one of these methods:
1. Through the command line:
```bash
playwright install
```
2. If the above doesn't work, try this more specific command:
```bash
python -m playwright install chromium
```
This second method has proven to be more reliable in some cases.
---
### Installation with Synchronous Version
The sync version is deprecated and will be removed in future versions. If you need the synchronous version using Selenium:
```bash
pip install crawl4ai[sync]
```
---
### Development Installation
For contributors who plan to modify the source code:
```bash
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai
pip install -e . # Basic installation in editable mode
```
Install optional features:
```bash
pip install -e ".[torch]" # With PyTorch features
pip install -e ".[transformer]" # With Transformer features
pip install -e ".[cosine]" # With cosine similarity features
pip install -e ".[sync]" # With synchronous crawling (Selenium)
pip install -e ".[all]" # Install all optional features
```
</details>
<details>
<summary>🐳 <strong>Docker Deployment</strong></summary>
> 🚀 **Now Available!** Our completely redesigned Docker implementation is here! This new solution makes deployment more efficient and seamless than ever.
### New Docker Features
The new Docker implementation includes:
- **Browser pooling** with page pre-warming for faster response times
- **Interactive playground** to test and generate request code
- **MCP integration** for direct connection to AI tools like Claude Code
- **Comprehensive API endpoints** including HTML extraction, screenshots, PDF generation, and JavaScript execution
- **Multi-architecture support** with automatic detection (AMD64/ARM64)
- **Optimized resources** with improved memory management
### Getting Started
```bash
# Pull and run the latest release candidate
docker pull unclecode/crawl4ai:0.7.0
docker run -d -p 11235:11235 --name crawl4ai --shm-size=1g unclecode/crawl4ai:0.7.0
# Visit the playground at http://localhost:11235/playground
```
For complete documentation, see our [Docker Deployment Guide](https://docs.crawl4ai.com/core/docker-deployment/).
</details>
---
### Quick Test
Run a quick test (works for both Docker options):
```python
import requests

# Submit a crawl job
response = requests.post(
    "http://localhost:11235/crawl",
    json={"urls": ["https://example.com"], "priority": 10}
)
if response.status_code == 200:
    print("Crawl job submitted successfully.")

if "results" in response.json():
    results = response.json()["results"]
    print("Crawl job completed. Results:")
    for result in results:
        print(result)
else:
    task_id = response.json()["task_id"]
    print(f"Crawl job submitted. Task ID: {task_id}")
    result = requests.get(f"http://localhost:11235/task/{task_id}")
```
For more examples, see our [Docker Examples](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_example.py). For advanced configuration, environment variables, and usage examples, see our [Docker Deployment Guide](https://docs.crawl4ai.com/basic/docker-deployment/).
</details>
## 🔬 Advanced Usage Examples 🔬
You can find a variety of examples in the [docs/examples](https://github.com/unclecode/crawl4ai/tree/main/docs/examples) directory; some popular examples are shared here.
<details>
<summary>📝 <strong>Heuristic Markdown Generation with Clean and Fit Markdown</strong></summary>
```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
from crawl4ai.content_filter_strategy import PruningContentFilter, BM25ContentFilter
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

async def main():
    browser_config = BrowserConfig(
        headless=True,
        verbose=True,
    )
    run_config = CrawlerRunConfig(
        cache_mode=CacheMode.ENABLED,
        markdown_generator=DefaultMarkdownGenerator(
            content_filter=PruningContentFilter(threshold=0.48, threshold_type="fixed", min_word_threshold=0)
        ),
        # markdown_generator=DefaultMarkdownGenerator(
        #     content_filter=BM25ContentFilter(user_query="WHEN_WE_FOCUS_BASED_ON_A_USER_QUERY", bm25_threshold=1.0)
        # ),
    )
    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun(
            url="https://docs.micronaut.io/4.7.6/guide/",
            config=run_config
        )
        print(len(result.markdown.raw_markdown))
        print(len(result.markdown.fit_markdown))

if __name__ == "__main__":
    asyncio.run(main())
```
</details>
<details>
<summary>🖥️ <strong>Executing JavaScript & Extracting Structured Data without LLMs</strong></summary>
```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
from crawl4ai import JsonCssExtractionStrategy
import json

async def main():
    schema = {
        "name": "KidoCode Courses",
        "baseSelector": "section.charge-methodology .w-tab-content > div",
        "fields": [
            {
                "name": "section_title",
                "selector": "h3.heading-50",
                "type": "text",
            },
            {
                "name": "section_description",
                "selector": ".charge-content",
                "type": "text",
            },
            {
                "name": "course_name",
                "selector": ".text-block-93",
                "type": "text",
            },
            {
                "name": "course_description",
                "selector": ".course-content-text",
                "type": "text",
            },
            {
                "name": "course_icon",
                "selector": ".image-92",
                "type": "attribute",
                "attribute": "src"
            }
        ]
    }

    extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)

    browser_config = BrowserConfig(
        headless=False,
        verbose=True
    )
    run_config = CrawlerRunConfig(
        extraction_strategy=extraction_strategy,
        js_code=["""(async () => {const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div");for(let tab of tabs) {tab.scrollIntoView();tab.click();await new Promise(r => setTimeout(r, 500));}})();"""],
        cache_mode=CacheMode.BYPASS
    )

    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun(
            url="https://www.kidocode.com/degrees/technology",
            config=run_config
        )
        companies = json.loads(result.extracted_content)
        print(f"Successfully extracted {len(companies)} companies")
        print(json.dumps(companies[0], indent=2))

if __name__ == "__main__":
    asyncio.run(main())
```
</details>
<details>
<summary>📚 <strong>Extracting Structured Data with LLMs</strong></summary>
```python
import os
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode, LLMConfig
from crawl4ai import LLMExtractionStrategy
from pydantic import BaseModel, Field

class OpenAIModelFee(BaseModel):
    model_name: str = Field(..., description="Name of the OpenAI model.")
    input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
    output_fee: str = Field(..., description="Fee for output token for the OpenAI model.")

async def main():
    browser_config = BrowserConfig(verbose=True)
    run_config = CrawlerRunConfig(
        word_count_threshold=1,
        extraction_strategy=LLMExtractionStrategy(
            # Here you can use any provider that the LiteLLM library supports, for instance: ollama/qwen2
            # provider="ollama/qwen2", api_token="no-token",
            llm_config=LLMConfig(provider="openai/gpt-4o", api_token=os.getenv('OPENAI_API_KEY')),
            schema=OpenAIModelFee.schema(),
            extraction_type="schema",
            instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
            Do not miss any models in the entire content. One extracted model JSON format should look like this:
            {"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}."""
        ),
        cache_mode=CacheMode.BYPASS,
    )
    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun(
            url='https://openai.com/api/pricing/',
            config=run_config
        )
        print(result.extracted_content)

if __name__ == "__main__":
    asyncio.run(main())
```
</details>
<details>
<summary>🤖 <strong>Using Your own Browser with Custom User Profile</strong></summary>
```python
import os, sys
from pathlib import Path
import asyncio, time
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

async def test_news_crawl():
    # Create a persistent user data directory
    user_data_dir = os.path.join(Path.home(), ".crawl4ai", "browser_profile")
    os.makedirs(user_data_dir, exist_ok=True)

    browser_config = BrowserConfig(
        verbose=True,
        headless=True,
        user_data_dir=user_data_dir,
        use_persistent_context=True,
    )
    run_config = CrawlerRunConfig(
        cache_mode=CacheMode.BYPASS
    )

    async with AsyncWebCrawler(config=browser_config) as crawler:
        url = "ADDRESS_OF_A_CHALLENGING_WEBSITE"
        result = await crawler.arun(
            url,
            config=run_config,
            magic=True,
        )
        print(f"Successfully crawled {url}")
        print(f"Content length: {len(result.markdown)}")
```
</details>
## ✨ Recent Updates
### Version 0.7.0 Release Highlights - The Adaptive Intelligence Update
- **🧠 Adaptive Crawling**: Your crawler now learns and adapts to website patterns automatically:
```python
config = AdaptiveConfig(
    confidence_threshold=0.7,    # Min confidence to stop crawling
    max_depth=5,                 # Maximum crawl depth
    max_pages=20,                # Maximum number of pages to crawl
    strategy="statistical"
)

async with AsyncWebCrawler() as crawler:
    adaptive_crawler = AdaptiveCrawler(crawler, config)
    state = await adaptive_crawler.digest(
        start_url="https://news.example.com",
        query="latest news content"
    )
    # Crawler learns patterns and improves extraction over time
```
- **🌊 Virtual Scroll Support**: Complete content extraction from infinite scroll pages:
```python
scroll_config = VirtualScrollConfig(
    container_selector="[data-testid='feed']",
    scroll_count=20,
    scroll_by="container_height",
    wait_after_scroll=1.0
)

result = await crawler.arun(url, config=CrawlerRunConfig(
    virtual_scroll_config=scroll_config
))
```
- **🔗 Intelligent Link Analysis**: 3-layer scoring system for smart link prioritization:
```python
link_config = LinkPreviewConfig(
    query="machine learning tutorials",
    score_threshold=0.3,
    concurrent_requests=10
)

result = await crawler.arun(url, config=CrawlerRunConfig(
    link_preview_config=link_config,
    score_links=True
))
# Links ranked by relevance and quality
```
- **🎣 Async URL Seeder**: Discover thousands of URLs in seconds:
```python
seeder = AsyncUrlSeeder(SeedingConfig(
    source="sitemap+cc",
    pattern="*/blog/*",
    query="python tutorials",
    score_threshold=0.4
))
urls = await seeder.discover("https://example.com")
```
- **⚡ Performance Boost**: Up to 3x faster with optimized resource handling and memory efficiency
Read the full details in our [0.7.0 Release Notes](https://docs.crawl4ai.com/blog/release-v0.7.0) or check the [CHANGELOG](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md).
## Version Numbering in Crawl4AI
Crawl4AI follows standard Python version numbering conventions (PEP 440) to help users understand the stability and features of each release.
### Version Numbers Explained
Our version numbers follow this pattern: `MAJOR.MINOR.PATCH` (e.g., 0.4.3)
#### Pre-release Versions
We use different suffixes to indicate development stages:
- `dev` (0.4.3dev1): Development versions, unstable
- `a` (0.4.3a1): Alpha releases, experimental features
- `b` (0.4.3b1): Beta releases, feature complete but needs testing
- `rc` (0.4.3rc1): Release candidates, potential final version
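As a quick sanity check of this ordering, the `packaging` library (which pip uses to compare versions) confirms how the suffixes sort. A small sketch, assuming `packaging` is installed:

```python
from packaging.version import Version

# PEP 440 pre-release ordering: dev < alpha < beta < rc < final release.
assert Version("0.4.3.dev1") < Version("0.4.3a1")
assert Version("0.4.3a1") < Version("0.4.3b1")
assert Version("0.4.3b1") < Version("0.4.3rc1")
assert Version("0.4.3rc1") < Version("0.4.3")
print("PEP 440 ordering confirmed")
```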
#### Installation
- Regular installation (stable version):
```bash
pip install -U crawl4ai
```
- Install pre-release versions:
```bash
pip install crawl4ai --pre
```
- Install specific version:
```bash
pip install crawl4ai==0.4.3b1
```
#### Why Pre-releases?
We use pre-releases to:
- Test new features in real-world scenarios
- Gather feedback before final releases
- Ensure stability for production users
- Allow early adopters to try new features
For production environments, we recommend using the stable version. For testing new features, you can opt in to pre-releases using the `--pre` flag.
## 📖 Documentation & Roadmap
> 🚨 **Documentation Update Alert**: We're undertaking a major documentation overhaul next week to reflect recent updates and improvements. Stay tuned for a more comprehensive and up-to-date guide!
For current documentation, including installation instructions, advanced features, and API reference, visit our [Documentation Website](https://docs.crawl4ai.com/).
To check our development plans and upcoming features, visit our [Roadmap](https://github.com/unclecode/crawl4ai/blob/main/ROADMAP.md).
<details>
<summary>📈 <strong>Development TODOs</strong></summary>
- [x] 0. Graph Crawler: Smart website traversal using graph search algorithms for comprehensive nested page extraction
- [ ] 1. Question-Based Crawler: Natural language driven web discovery and content extraction
- [ ] 2. Knowledge-Optimal Crawler: Smart crawling that maximizes knowledge while minimizing data extraction
- [ ] 3. Agentic Crawler: Autonomous system for complex multi-step crawling operations
- [ ] 4. Automated Schema Generator: Convert natural language to extraction schemas
- [ ] 5. Domain-Specific Scrapers: Pre-configured extractors for common platforms (academic, e-commerce)
- [ ] 6. Web Embedding Index: Semantic search infrastructure for crawled content
- [ ] 7. Interactive Playground: Web UI for testing, comparing strategies with AI assistance
- [ ] 8. Performance Monitor: Real-time insights into crawler operations
- [ ] 9. Cloud Integration: One-click deployment solutions across cloud providers
- [ ] 10. Sponsorship Program: Structured support system with tiered benefits
- [ ] 11. Educational Content: "How to Crawl" video series and interactive tutorials
</details>
## 🤝 Contributing
We welcome contributions from the open-source community. Check out our [contribution guidelines](https://github.com/unclecode/crawl4ai/blob/main/CONTRIBUTORS.md) for more information.
## 📄 License & Attribution
This project is licensed under the Apache License 2.0; attribution is recommended via the badges below. See the [Apache 2.0 License](https://github.com/unclecode/crawl4ai/blob/main/LICENSE) file for details.
### Attribution Requirements
When using Crawl4AI, you must include one of the following attribution methods:
#### 1. Badge Attribution (Recommended)
Add one of these badges to your README, documentation, or website:
| Theme | Badge |
|-------|-------|
| **Disco Theme (Animated)** | <a href="https://github.com/unclecode/crawl4ai"><img src="./docs/assets/powered-by-disco.svg" alt="Powered by Crawl4AI" width="200"/></a> |
| **Night Theme (Dark with Neon)** | <a href="https://github.com/unclecode/crawl4ai"><img src="./docs/assets/powered-by-night.svg" alt="Powered by Crawl4AI" width="200"/></a> |
| **Dark Theme (Classic)** | <a href="https://github.com/unclecode/crawl4ai"><img src="./docs/assets/powered-by-dark.svg" alt="Powered by Crawl4AI" width="200"/></a> |
| **Light Theme (Classic)** | <a href="https://github.com/unclecode/crawl4ai"><img src="./docs/assets/powered-by-light.svg" alt="Powered by Crawl4AI" width="200"/></a> |
HTML code for adding the badges:
```html
<!-- Disco Theme (Animated) -->
<a href="https://github.com/unclecode/crawl4ai">
  <img src="https://raw.githubusercontent.com/unclecode/crawl4ai/main/docs/assets/powered-by-disco.svg" alt="Powered by Crawl4AI" width="200"/>
</a>

<!-- Night Theme (Dark with Neon) -->
<a href="https://github.com/unclecode/crawl4ai">
  <img src="https://raw.githubusercontent.com/unclecode/crawl4ai/main/docs/assets/powered-by-night.svg" alt="Powered by Crawl4AI" width="200"/>
</a>

<!-- Dark Theme (Classic) -->
<a href="https://github.com/unclecode/crawl4ai">
  <img src="https://raw.githubusercontent.com/unclecode/crawl4ai/main/docs/assets/powered-by-dark.svg" alt="Powered by Crawl4AI" width="200"/>
</a>

<!-- Light Theme (Classic) -->
<a href="https://github.com/unclecode/crawl4ai">
  <img src="https://raw.githubusercontent.com/unclecode/crawl4ai/main/docs/assets/powered-by-light.svg" alt="Powered by Crawl4AI" width="200"/>
</a>

<!-- Simple Shield Badge -->
<a href="https://github.com/unclecode/crawl4ai">
  <img src="https://img.shields.io/badge/Powered%20by-Crawl4AI-blue?style=flat-square" alt="Powered by Crawl4AI"/>
</a>
```
#### 2. Text Attribution
Add this line to your documentation:
```
This project uses Crawl4AI (https://github.com/unclecode/crawl4ai) for web data extraction.
```
## 📚 Citation
If you use Crawl4AI in your research or project, please cite:
```bibtex
@software{crawl4ai2024,
  author = {UncleCode},
  title = {Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub Repository},
  howpublished = {\url{https://github.com/unclecode/crawl4ai}},
  commit = {Please use the commit hash you're working with}
}
```
Text citation format:
```
UncleCode. (2024). Crawl4AI: Open-source LLM Friendly Web Crawler & Scraper [Computer software].
GitHub. https://github.com/unclecode/crawl4ai
```
## 📧 Contact
For questions, suggestions, or feedback, feel free to reach out:
- GitHub: [unclecode](https://github.com/unclecode)
- Twitter: [@unclecode](https://twitter.com/unclecode)
- Website: [crawl4ai.com](https://crawl4ai.com)
Happy Crawling! 🕸️🚀
## 💖 Support Crawl4AI
> 🎉 **Sponsorship Program Just Launched!** Be among the first 50 **Founding Sponsors** and get permanent recognition in our Hall of Fame!
Crawl4AI is the #1 trending open-source web crawler with 51K+ stars. Your support ensures we stay independent, innovative, and free forever.
<div align="center">
[![Become a Sponsor](https://img.shields.io/badge/Become%20a%20Sponsor-pink?style=for-the-badge&logo=github-sponsors&logoColor=white)](https://github.com/sponsors/unclecode)
[![Current Sponsors](https://img.shields.io/github/sponsors/unclecode?style=for-the-badge&logo=github&label=Current%20Sponsors&color=green)](https://github.com/sponsors/unclecode)
</div>
### 🤝 Sponsorship Tiers
- **🌱 Believer ($5/mo)**: Join the movement for data democratization
- **🚀 Builder ($50/mo)**: Get priority support and early feature access
- **💼 Growing Team ($500/mo)**: Bi-weekly syncs and optimization help
- **🏢 Data Infrastructure Partner ($2000/mo)**: Full partnership with dedicated support
**Why sponsor?** Every tier includes real benefits. No more rate-limited APIs. Own your data pipeline. Build data sovereignty together.
[View All Tiers & Benefits →](https://github.com/sponsors/unclecode)
### 🏆 Our Sponsors
#### 👑 Founding Sponsors (First 50)
*Be part of history - [Become a Founding Sponsor](https://github.com/sponsors/unclecode)*
<!-- Founding sponsors will be permanently recognized here -->
#### Current Sponsors
Thank you to all our sponsors who make this project possible!
<!-- Sponsors will be automatically added here -->
## 🗾 Mission
Our mission is to unlock the value of personal and enterprise data by transforming digital footprints into structured, tradeable assets. Crawl4AI empowers individuals and organizations with open-source tools to extract and structure data, fostering a shared data economy.
We envision a future where AI is powered by real human knowledge, ensuring data creators directly benefit from their contributions. By democratizing data and enabling ethical sharing, we are laying the foundation for authentic AI advancement.
<details>
<summary>🔑 <strong>Key Opportunities</strong></summary>
- **Data Capitalization**: Transform digital footprints into measurable, valuable assets.
- **Authentic AI Data**: Provide AI systems with real human insights.
- **Shared Economy**: Create a fair data marketplace that benefits data creators.
</details>
<details>
<summary>🚀 <strong>Development Pathway</strong></summary>
1. **Open-Source Tools**: Community-driven platforms for transparent data extraction.
2. **Digital Asset Structuring**: Tools to organize and value digital knowledge.
3. **Ethical Data Marketplace**: A secure, fair platform for exchanging structured data.
For more details, see our [full mission statement](./MISSION.md).
</details>
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=unclecode/crawl4ai&type=Date)](https://star-history.com/#unclecode/crawl4ai&Date)

README.md

@@ -10,7 +10,6 @@
[![PyPI version](https://badge.fury.io/py/crawl4ai.svg)](https://badge.fury.io/py/crawl4ai)
[![Python Version](https://img.shields.io/pypi/pyversions/crawl4ai)](https://pypi.org/project/crawl4ai/)
[![Downloads](https://static.pepy.tech/badge/crawl4ai/month)](https://pepy.tech/project/crawl4ai)
[![GitHub Sponsors](https://img.shields.io/github/sponsors/unclecode?style=flat&logo=GitHub-Sponsors&label=Sponsors&color=pink)](https://github.com/sponsors/unclecode)
<p align="center">
<a href="https://x.com/crawl4ai">
@@ -25,33 +24,32 @@
</p>
</div>
Crawl4AI turns the web into clean, LLM ready Markdown for RAG, agents, and data pipelines. Fast, controllable, battle tested by a 50k+ star community.
Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant community. It delivers blazing-fast, AI-ready web crawling tailored for LLMs, AI agents, and data pipelines. Open source, flexible, and built for real-time performance, Crawl4AI empowers developers with unmatched speed, precision, and deployment ease.
[✨ Check out latest update v0.7.3](#-recent-updates)
[✨ Check out latest update v0.7.0](#-recent-updates)
✨ New in v0.7.3: Undetected Browser Support, Multi-URL Configurations, Memory Monitoring, Enhanced Table Extraction, GitHub Sponsors. [Release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.3.md)
🎉 **Version 0.7.0 is now available!** The Adaptive Intelligence Update introduces groundbreaking features: Adaptive Crawling that learns website patterns, Virtual Scroll support for infinite pages, intelligent Link Preview with 3-layer scoring, Async URL Seeder for massive discovery, and significant performance improvements. [Read the release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.0.md)
<details>
<summary>🤓 <strong>My Personal Story</strong></summary>
I grew up on an Amstrad, thanks to my dad, and never stopped building. In grad school I specialized in NLP and built crawlers for research. That's where I learned how much extraction matters.
My journey with computers started in childhood when my dad, a computer scientist, introduced me to an Amstrad computer. Those early days sparked a fascination with technology, leading me to pursue computer science and specialize in NLP during my postgraduate studies. It was during this time that I first delved into web crawling, building tools to help researchers organize papers and extract information from publications, a challenging yet rewarding experience that honed my skills in data extraction.
In 2023, I needed web-to-Markdown. The “open source” option wanted an account, API token, and $16, and still under-delivered. I went turbo anger mode, built Crawl4AI in days, and it went viral. Now it's the most-starred crawler on GitHub.
Fast forward to 2023, I was working on a tool for a project and needed a crawler to convert a webpage into markdown. While exploring solutions, I found one that claimed to be open-source but required creating an account and generating an API token. Worse, it turned out to be a SaaS model charging $16, and its quality didn't meet my standards. Frustrated, I realized this was a deeper problem. That frustration turned into turbo anger mode, and I decided to build my own solution. In just a few days, I created Crawl4AI. To my surprise, it went viral, earning thousands of GitHub stars and resonating with a global community.
I made it open source for **availability**: anyone can use it without a gate. Now I'm building the platform for **affordability**: anyone can run serious crawls without breaking the bank. If that resonates, join in, send feedback, or just crawl something amazing.
I made Crawl4AI open-source for two reasons. First, it's my way of giving back to the open-source community that has supported me throughout my career. Second, I believe data should be accessible to everyone, not locked behind paywalls or monopolized by a few. Open access to data lays the foundation for the democratization of AI, a vision where individuals can train their own models and take ownership of their information. This library is the first step in a larger journey to create the best open-source data extraction and generation tool the world has ever seen, built collaboratively by a passionate community.
Thank you to everyone who has supported this project, used it, and shared feedback. Your encouragement motivates me to dream even bigger. Join us, file issues, submit PRs, or spread the word. Together, we can build a tool that truly empowers people to access their own data and reshape the future of AI.
</details>
## 🧐 Why Crawl4AI?
<details>
<summary>Why developers pick Crawl4AI</summary>
- **LLM ready output**, smart Markdown with headings, tables, code, citation hints
- **Fast in practice**, async browser pool, caching, minimal hops
- **Full control**, sessions, proxies, cookies, user scripts, hooks
- **Adaptive intelligence**, learns site patterns, explores only what matters
- **Deploy anywhere**, zero keys, CLI and Docker, cloud friendly
</details>
1. **Built for LLMs**: Creates smart, concise Markdown optimized for RAG and fine-tuning applications.
2. **Lightning Fast**: Delivers results 6x faster with real-time, cost-efficient performance.
3. **Flexible Browser Control**: Offers session management, proxies, and custom hooks for seamless data access.
4. **Heuristic Intelligence**: Uses advanced algorithms for efficient extraction, reducing reliance on costly models.
5. **Open Source & Deployable**: Fully open-source with no API keys—ready for Docker and cloud integration.
6. **Thriving Community**: Actively maintained by a vibrant community and the #1 trending GitHub repository.
## 🚀 Quick Start
@@ -103,33 +101,6 @@ crwl https://docs.crawl4ai.com --deep-crawl bfs --max-pages 10
crwl https://www.example.com/products -q "Extract all product prices"
```
## 💖 Support Crawl4AI
> 🎉 **Sponsorship Program Now Open!** After powering 51K+ developers and 1 year of growth, Crawl4AI is launching dedicated support for **startups** and **enterprises**. Be among the first 50 **Founding Sponsors** for permanent recognition in our Hall of Fame.
Crawl4AI is the #1 trending open-source web crawler on GitHub. Your support keeps it independent, innovative, and free for the community — while giving you direct access to premium benefits.
<div align="">
[![Become a Sponsor](https://img.shields.io/badge/Become%20a%20Sponsor-pink?style=for-the-badge&logo=github-sponsors&logoColor=white)](https://github.com/sponsors/unclecode)
[![Current Sponsors](https://img.shields.io/github/sponsors/unclecode?style=for-the-badge&logo=github&label=Current%20Sponsors&color=green)](https://github.com/sponsors/unclecode)
</div>
### 🤝 Sponsorship Tiers
- **🌱 Believer ($5/mo)** — Join the movement for data democratization
- **🚀 Builder ($50/mo)** — Priority support & early access to features
- **💼 Growing Team ($500/mo)** — Bi-weekly syncs & optimization help
- **🏢 Data Infrastructure Partner ($2000/mo)** — Full partnership with dedicated support
*Custom arrangements available - see [SPONSORS.md](SPONSORS.md) for details & contact*
**Why sponsor?**
No rate-limited APIs. No lock-in. Build and own your data pipeline with direct guidance from the creator of Crawl4AI.
[See All Tiers & Benefits →](https://github.com/sponsors/unclecode)
## ✨ Features
<details>
@@ -309,6 +280,12 @@ docker run -d -p 11235:11235 --name crawl4ai --shm-size=1g unclecode/crawl4ai:0.
# Visit the playground at http://localhost:11235/playground
```
For complete documentation, see our [Docker Deployment Guide](https://docs.crawl4ai.com/core/docker-deployment/).
</details>
---
### Quick Test
Run a quick test (works for both Docker options):
@@ -339,11 +316,10 @@ For more examples, see our [Docker Examples](https://github.com/unclecode/crawl4
</details>
---
## 🔬 Advanced Usage Examples 🔬
You can check the project structure in the directory [docs/examples](https://github.com/unclecode/crawl4ai/tree/main/docs/examples). Over there, you can find a variety of examples; here, some popular examples are shared.
You can check the project structure in the directory [https://github.com/unclecode/crawl4ai/docs/examples](docs/examples). Over there, you can find a variety of examples; here, some popular examples are shared.
<details>
<summary>📝 <strong>Heuristic Markdown Generation with Clean and Fit Markdown</strong></summary>
@@ -502,7 +478,7 @@ if __name__ == "__main__":
</details>
<details>
<summary>🤖 <strong>Using Your own Browser with Custom User Profile</strong></summary>
<summary>🤖 <strong>Using You own Browser with Custom User Profile</strong></summary>
```python
import os, sys
@@ -542,89 +518,7 @@ async def test_news_crawl():
## ✨ Recent Updates
<details>
<summary><strong>Version 0.7.3 Release Highlights - The Multi-Config Intelligence Update</strong></summary>
- **🕵️ Undetected Browser Support**: Bypass sophisticated bot detection systems:
```python
from crawl4ai import AsyncWebCrawler, BrowserConfig

browser_config = BrowserConfig(
    browser_type="undetected",  # Use undetected Chrome
    headless=True,              # Can run headless with stealth
    extra_args=[
        "--disable-blink-features=AutomationControlled",
        "--disable-web-security"
    ]
)

async with AsyncWebCrawler(config=browser_config) as crawler:
    result = await crawler.arun("https://protected-site.com")
    # Successfully bypass Cloudflare, Akamai, and custom bot detection
```
- **🎨 Multi-URL Configuration**: Different strategies for different URL patterns in one batch:
```python
from crawl4ai import CrawlerRunConfig, MatchMode

configs = [
    # Documentation sites - aggressive caching
    CrawlerRunConfig(
        url_matcher=["*docs*", "*documentation*"],
        cache_mode="write",
        markdown_generator_options={"include_links": True}
    ),
    # News/blog sites - fresh content
    CrawlerRunConfig(
        url_matcher=lambda url: 'blog' in url or 'news' in url,
        cache_mode="bypass"
    ),
    # Fallback for everything else
    CrawlerRunConfig()
]

results = await crawler.arun_many(urls, config=configs)
# Each URL gets the perfect configuration automatically
```
- **🧠 Memory Monitoring**: Track and optimize memory usage during crawling:
```python
from crawl4ai.memory_utils import MemoryMonitor
monitor = MemoryMonitor()
monitor.start_monitoring()
results = await crawler.arun_many(large_url_list)
report = monitor.get_report()
print(f"Peak memory: {report['peak_mb']:.1f} MB")
print(f"Efficiency: {report['efficiency']:.1f}%")
# Get optimization recommendations
```
- **📊 Enhanced Table Extraction**: Direct DataFrame conversion from web tables:
```python
result = await crawler.arun("https://site-with-tables.com")

# New way - direct table access
if result.tables:
    import pandas as pd
    for table in result.tables:
        df = pd.DataFrame(table['data'])
        print(f"Table: {df.shape[0]} rows × {df.shape[1]} columns")
```
- **💰 GitHub Sponsors**: 4-tier sponsorship system for project sustainability
- **🐳 Docker LLM Flexibility**: Configure providers via environment variables
[Full v0.7.3 Release Notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.3.md)
</details>
<details>
<summary><strong>Version 0.7.0 Release Highlights - The Adaptive Intelligence Update</strong></summary>
### Version 0.7.0 Release Highlights - The Adaptive Intelligence Update
- **🧠 Adaptive Crawling**: Your crawler now learns and adapts to website patterns automatically:
```python
@@ -689,14 +583,97 @@ async def test_news_crawl():
Read the full details in our [0.7.0 Release Notes](https://docs.crawl4ai.com/blog/release-v0.7.0) or check the [CHANGELOG](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md).
</details>
### Previous Version: 0.6.0 Release Highlights
- **🌎 World-aware Crawling**: Set geolocation, language, and timezone for authentic locale-specific content:
```python
crun_cfg = CrawlerRunConfig(
    url="https://browserleaks.com/geo",   # test page that shows your location
    locale="en-US",                       # Accept-Language & UI locale
    timezone_id="America/Los_Angeles",    # JS Date()/Intl timezone
    geolocation=GeolocationConfig(        # override GPS coords
        latitude=34.0522,
        longitude=-118.2437,
        accuracy=10.0,
    )
)
```
- **📊 Table-to-DataFrame Extraction**: Extract HTML tables directly to CSV or pandas DataFrames:
```python
crawler = AsyncWebCrawler(config=browser_config)
await crawler.start()

try:
    # Set up scraping parameters
    crawl_config = CrawlerRunConfig(
        table_score_threshold=8,  # Strict table detection
    )

    # Execute market data extraction
    results: List[CrawlResult] = await crawler.arun(
        url="https://coinmarketcap.com/?page=1", config=crawl_config
    )

    # Process results
    raw_df = pd.DataFrame()
    for result in results:
        if result.success and result.tables:
            raw_df = pd.DataFrame(
                result.tables[0]["rows"],
                columns=result.tables[0]["headers"],
            )
            break
    print(raw_df.head())

finally:
    await crawler.close()
```
- **🚀 Browser Pooling**: Pages launch hot with pre-warmed browser instances for lower latency and memory usage
- **🕸️ Network and Console Capture**: Full traffic logs and MHTML snapshots for debugging:
```python
crawler_config = CrawlerRunConfig(
    capture_network=True,
    capture_console=True,
    mhtml=True
)
```
- **🔌 MCP Integration**: Connect to AI tools like Claude Code through the Model Context Protocol
```bash
# Add Crawl4AI to Claude Code
claude mcp add --transport sse c4ai-sse http://localhost:11235/mcp/sse
```
- 🖥️ **Interactive Playground**: Test configurations and generate API requests with the built-in web interface at `http://localhost:11235/playground`
- **🐳 Revamped Docker Deployment**: Streamlined multi-architecture Docker image with improved resource efficiency
- **📱 Multi-stage Build System**: Optimized Dockerfile with platform-specific performance enhancements
### Previous Version: 0.5.0 Major Release Highlights
- **🚀 Deep Crawling System**: Explore websites beyond initial URLs with BFS, DFS, and BestFirst strategies (see the sketch after this list)
- **⚡ Memory-Adaptive Dispatcher**: Dynamically adjusts concurrency based on system memory
- **🔄 Multiple Crawling Strategies**: Browser-based and lightweight HTTP-only crawlers
- **💻 Command-Line Interface**: New `crwl` CLI provides convenient terminal access
- **👤 Browser Profiler**: Create and manage persistent browser profiles
- **🧠 Crawl4AI Coding Assistant**: AI-powered coding assistant
- **🏎️ LXML Scraping Mode**: Fast HTML parsing using the `lxml` library
- **🌐 Proxy Rotation**: Built-in support for proxy switching
- **🤖 LLM Content Filter**: Intelligent markdown generation using LLMs
- **📄 PDF Processing**: Extract text, images, and metadata from PDF files
Read the full details in our [0.5.0 Release Notes](https://docs.crawl4ai.com/blog/releases/0.5.0.html).
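The Deep Crawling System pairs naturally with the BestFirst fix in this compare. Here is a sketch using the documented deep-crawl API; import paths follow recent docs and may differ slightly between versions, and the keywords and URL are placeholders:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.deep_crawling import BestFirstCrawlingStrategy
from crawl4ai.deep_crawling.scorers import KeywordRelevanceScorer

async def main():
    strategy = BestFirstCrawlingStrategy(
        max_depth=2,
        max_pages=10,  # cap applied while crawling, not by truncating links pre-score
        url_scorer=KeywordRelevanceScorer(keywords=["crawler", "async", "markdown"]),
    )
    config = CrawlerRunConfig(deep_crawl_strategy=strategy)
    async with AsyncWebCrawler() as crawler:
        # In non-streaming mode, arun returns the crawled pages as a list.
        results = await crawler.arun("https://docs.crawl4ai.com", config=config)
        for page in results:  # highest-scoring pages are crawled first
            print(page.url)

asyncio.run(main())
```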
## Version Numbering in Crawl4AI
Crawl4AI follows standard Python version numbering conventions (PEP 440) to help users understand the stability and features of each release.
<details>
<summary>📈 <strong>Version Numbers Explained</strong></summary>
### Version Numbers Explained
Our version numbers follow this pattern: `MAJOR.MINOR.PATCH` (e.g., 0.4.3)
@@ -733,8 +710,6 @@ We use pre-releases to:
For production environments, we recommend using the stable version. For testing new features, you can opt-in to pre-releases using the `--pre` flag.
</details>
## 📖 Documentation & Roadmap
> 🚨 **Documentation Update Alert**: We're undertaking a major documentation overhaul next week to reflect recent updates and improvements. Stay tuned for a more comprehensive and up-to-date guide!
@@ -747,16 +722,16 @@ To check our development plans and upcoming features, visit our [Roadmap](https:
<summary>📈 <strong>Development TODOs</strong></summary>
- [x] 0. Graph Crawler: Smart website traversal using graph search algorithms for comprehensive nested page extraction
- [x] 1. Question-Based Crawler: Natural language driven web discovery and content extraction
- [x] 2. Knowledge-Optimal Crawler: Smart crawling that maximizes knowledge while minimizing data extraction
- [x] 3. Agentic Crawler: Autonomous system for complex multi-step crawling operations
- [x] 4. Automated Schema Generator: Convert natural language to extraction schemas
- [x] 5. Domain-Specific Scrapers: Pre-configured extractors for common platforms (academic, e-commerce)
- [x] 6. Web Embedding Index: Semantic search infrastructure for crawled content
- [x] 7. Interactive Playground: Web UI for testing, comparing strategies with AI assistance
- [x] 8. Performance Monitor: Real-time insights into crawler operations
- [ ] 1. Question-Based Crawler: Natural language driven web discovery and content extraction
- [ ] 2. Knowledge-Optimal Crawler: Smart crawling that maximizes knowledge while minimizing data extraction
- [ ] 3. Agentic Crawler: Autonomous system for complex multi-step crawling operations
- [ ] 4. Automated Schema Generator: Convert natural language to extraction schemas
- [ ] 5. Domain-Specific Scrapers: Pre-configured extractors for common platforms (academic, e-commerce)
- [ ] 6. Web Embedding Index: Semantic search infrastructure for crawled content
- [ ] 7. Interactive Playground: Web UI for testing, comparing strategies with AI assistance
- [ ] 8. Performance Monitor: Real-time insights into crawler operations
- [ ] 9. Cloud Integration: One-click deployment solutions across cloud providers
- [x] 10. Sponsorship Program: Structured support system with tiered benefits
- [ ] 10. Sponsorship Program: Structured support system with tiered benefits
- [ ] 11. Educational Content: "How to Crawl" video series and interactive tutorials
</details>
@@ -771,13 +746,12 @@ Here's the updated license section:
## 📄 License & Attribution
This project is licensed under the Apache License 2.0; attribution is recommended via the badges below.
This project is licensed under the Apache License 2.0 with a required attribution clause. See the [Apache 2.0 License](https://github.com/unclecode/crawl4ai/blob/main/LICENSE) file for details.
### Attribution Requirements
When using Crawl4AI, you must include one of the following attribution methods:
<details>
<summary>📈 <strong>1. Badge Attribution (Recommended)</strong></summary>
#### 1. Badge Attribution (Recommended)
Add one of these badges to your README, documentation, or website:
| Theme | Badge |
@@ -816,15 +790,11 @@ HTML code for adding the badges:
</a>
```
</details>
<details>
<summary>📖 <strong>2. Text Attribution</strong></summary>
#### 2. Text Attribution
Add this line to your documentation:
```
This project uses Crawl4AI (https://github.com/unclecode/crawl4ai) for web data extraction.
```
</details>
## 📚 Citation

SPONSORS.md

@@ -1,65 +0,0 @@
# 💖 Sponsors & Supporters
Thank you to everyone supporting Crawl4AI! Your sponsorship helps keep this project open-source and actively maintained.
## 👑 Founding Sponsors
*The first 50 sponsors who believed in our vision - permanently recognized*
<!-- Founding sponsors will be listed here with special recognition -->
🎉 **Become a Founding Sponsor!** Only [X/50] spots remaining! [Join now →](https://github.com/sponsors/unclecode)
---
## 🏢 Data Infrastructure Partners ($2000/month)
*These organizations are building their data sovereignty with Crawl4AI at the core*
<!-- Data Infrastructure Partners will be listed here -->
*Be the first Data Infrastructure Partner! [Join us →](https://github.com/sponsors/unclecode)*
---
## 💼 Growing Teams ($500/month)
*Teams scaling their data extraction with Crawl4AI*
<!-- Growing Teams will be listed here -->
*Your team could be here! [Become a sponsor →](https://github.com/sponsors/unclecode)*
---
## 🚀 Builders ($50/month)
*Developers and entrepreneurs building with Crawl4AI*
<!-- Builders will be listed here -->
*Join the builders! [Start sponsoring →](https://github.com/sponsors/unclecode)*
---
## 🌱 Believers ($5/month)
*The community supporting data democratization*
<!-- Believers will be listed here -->
*Thank you to all our community believers!*
---
## 🤝 Want to Sponsor?
Crawl4AI is the #1 trending open-source web crawler. We're building the future of data extraction - where organizations own their data pipelines instead of relying on rate-limited APIs.
### Available Sponsorship Tiers:
- **🌱 Believer** ($5/mo) - Support the movement
- **🚀 Builder** ($50/mo) - Priority support & early access
- **💼 Growing Team** ($500/mo) - Bi-weekly syncs & optimization
- **🏢 Data Infrastructure Partner** ($2000/mo) - Full partnership & dedicated support
[View all tiers and benefits →](https://github.com/sponsors/unclecode)
### Enterprise & Custom Partnerships
Building data extraction at scale? Need dedicated support or infrastructure? Let's talk about a custom partnership.
📧 Contact: [hello@crawl4ai.com](mailto:hello@crawl4ai.com) | 📅 [Schedule a call](https://calendar.app.google/rEpvi2UBgUQjWHfJ9)
---
*This list is updated regularly. Sponsors at $50+ tiers can submit their logos via [hello@crawl4ai.com](mailto:hello@crawl4ai.com)*

View File

@@ -1,7 +1,7 @@
# crawl4ai/__version__.py
# This is the version that will be used for stable releases
__version__ = "0.7.3"
__version__ = "0.7.2"
# For nightly builds, this gets set during build process
__nightly_version__ = None

View File

@@ -448,10 +448,6 @@ class BrowserConfig:
self.chrome_channel = ""
self.proxy = proxy
self.proxy_config = proxy_config
if isinstance(self.proxy_config, dict):
self.proxy_config = ProxyConfig.from_dict(self.proxy_config)
if isinstance(self.proxy_config, str):
self.proxy_config = ProxyConfig.from_string(self.proxy_config)
self.viewport_width = viewport_width
@@ -1163,11 +1159,6 @@ class CrawlerRunConfig():
self.parser_type = parser_type
self.scraping_strategy = scraping_strategy or LXMLWebScrapingStrategy()
self.proxy_config = proxy_config
if isinstance(proxy_config, dict):
self.proxy_config = ProxyConfig.from_dict(proxy_config)
if isinstance(proxy_config, str):
self.proxy_config = ProxyConfig.from_string(proxy_config)
self.proxy_rotation_strategy = proxy_rotation_strategy
# Browser Location and Identity Parameters
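For context, the `ProxyConfig.from_dict` / `ProxyConfig.from_string` coercion shown in this hunk lets callers hand `proxy_config` over in either shape. A minimal sketch of both call shapes (the dict keys and string format are assumptions, not confirmed by this diff):

```python
from crawl4ai import BrowserConfig

# Dict form, coerced via ProxyConfig.from_dict (keys are assumed):
cfg = BrowserConfig(proxy_config={"server": "http://proxy.example.com:8080",
                                  "username": "user", "password": "pass"})

# String form, coerced via ProxyConfig.from_string (format is assumed):
cfg2 = BrowserConfig(proxy_config="http://user:pass@proxy.example.com:8080")
```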

View File

@@ -407,34 +407,32 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
t.cancel()
raise exc
# If memory pressure is low, greedily fill all available slots
if not self.memory_pressure_mode:
slots = self.max_session_permit - len(active_tasks)
while slots > 0:
try:
# Use get_nowait() to immediately get tasks without blocking
priority, (url, task_id, retry_count, enqueue_time) = self.task_queue.get_nowait()
# Create and start the task
task = asyncio.create_task(
self.crawl_url(url, config, task_id, retry_count)
# If memory pressure is low, start new tasks
if not self.memory_pressure_mode and len(active_tasks) < self.max_session_permit:
try:
# Try to get a task with timeout to avoid blocking indefinitely
priority, (url, task_id, retry_count, enqueue_time) = await asyncio.wait_for(
self.task_queue.get(), timeout=0.1
)
# Create and start the task
task = asyncio.create_task(
self.crawl_url(url, config, task_id, retry_count)
)
active_tasks.append(task)
# Update waiting time in monitor
if self.monitor:
wait_time = time.time() - enqueue_time
self.monitor.update_task(
task_id,
wait_time=wait_time,
status=CrawlStatus.IN_PROGRESS
)
active_tasks.append(task)
# Update waiting time in monitor
if self.monitor:
wait_time = time.time() - enqueue_time
self.monitor.update_task(
task_id,
wait_time=wait_time,
status=CrawlStatus.IN_PROGRESS
)
slots -= 1
except asyncio.QueueEmpty:
# No more tasks in queue, exit the loop
break
except asyncio.TimeoutError:
# No tasks in queue, that's fine
pass
# Wait for completion even if queue is starved
if active_tasks:
@@ -561,34 +559,32 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
for t in active_tasks:
t.cancel()
raise exc
# If memory pressure is low, greedily fill all available slots
if not self.memory_pressure_mode:
slots = self.max_session_permit - len(active_tasks)
while slots > 0:
try:
# Use get_nowait() to immediately get tasks without blocking
priority, (url, task_id, retry_count, enqueue_time) = self.task_queue.get_nowait()
# Create and start the task
task = asyncio.create_task(
self.crawl_url(url, config, task_id, retry_count)
# If memory pressure is low, start new tasks
if not self.memory_pressure_mode and len(active_tasks) < self.max_session_permit:
try:
# Try to get a task with timeout
priority, (url, task_id, retry_count, enqueue_time) = await asyncio.wait_for(
self.task_queue.get(), timeout=0.1
)
# Create and start the task
task = asyncio.create_task(
self.crawl_url(url, config, task_id, retry_count)
)
active_tasks.append(task)
# Update waiting time in monitor
if self.monitor:
wait_time = time.time() - enqueue_time
self.monitor.update_task(
task_id,
wait_time=wait_time,
status=CrawlStatus.IN_PROGRESS
)
active_tasks.append(task)
# Update waiting time in monitor
if self.monitor:
wait_time = time.time() - enqueue_time
self.monitor.update_task(
task_id,
wait_time=wait_time,
status=CrawlStatus.IN_PROGRESS
)
slots -= 1
except asyncio.QueueEmpty:
# No more tasks in queue, exit the loop
break
except asyncio.TimeoutError:
# No tasks in queue, that's fine
pass
# Process completed tasks and yield results
if active_tasks:
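The two hunks above swap a one-task-per-iteration `asyncio.wait_for(queue.get(), timeout=0.1)` for a greedy fill that drains the queue with `get_nowait()`. The pattern, reduced to a standalone sketch:

```python
import asyncio

async def fill_slots(task_queue: asyncio.PriorityQueue, active_tasks: list,
                     max_permits: int, start_task):
    # Drain the queue without blocking until every permit is used or the
    # queue is empty; get_nowait() raises QueueEmpty instead of awaiting.
    slots = max_permits - len(active_tasks)
    while slots > 0:
        try:
            priority, payload = task_queue.get_nowait()
        except asyncio.QueueEmpty:
            break
        active_tasks.append(asyncio.create_task(start_task(payload)))
        slots -= 1
```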

View File

@@ -608,11 +608,6 @@ class BrowserManager:
self.contexts_by_config = {}
self._contexts_lock = asyncio.Lock()
# Serialize context.new_page() across concurrent tasks to avoid races
# when using a shared persistent context (context.pages may be empty
# for all racers). Prevents 'Target page/context closed' errors.
self._page_lock = asyncio.Lock()
# Stealth-related attributes
self._stealth_instance = None
self._stealth_cm = None
@@ -1032,26 +1027,13 @@ class BrowserManager:
context = await self.create_browser_context(crawlerRunConfig)
ctx = self.default_context # default context, one window only
ctx = await clone_runtime_state(context, ctx, crawlerRunConfig, self.config)
# Avoid concurrent new_page on shared persistent context
# See GH-1198: context.pages can be empty under races
async with self._page_lock:
page = await ctx.new_page()
page = await ctx.new_page()
else:
context = self.default_context
pages = context.pages
page = next((p for p in pages if p.url == crawlerRunConfig.url), None)
if not page:
if pages:
page = pages[0]
else:
# Double-check under lock to avoid TOCTOU and ensure only
# one task calls new_page when pages=[] concurrently
async with self._page_lock:
pages = context.pages
if pages:
page = pages[0]
else:
page = await context.new_page()
page = context.pages[0] # await context.new_page()
else:
# Otherwise, check if we have an existing context for this config
config_signature = self._make_config_signature(crawlerRunConfig)
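The `_page_lock` usage in this diff is double-checked locking: test outside the lock for the fast path, then re-test inside it so only one racer creates a page. Stripped of the surrounding class, the pattern looks like this sketch:

```python
import asyncio

_page_lock = asyncio.Lock()

async def get_or_create_page(context):
    # Fast path: reuse an existing page without taking the lock.
    if context.pages:
        return context.pages[0]
    # Re-check under the lock so only one racing task calls new_page()
    # when pages is empty (the TOCTOU window noted in the comments above).
    async with _page_lock:
        if context.pages:
            return context.pages[0]
        return await context.new_page()
```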

View File

@@ -65,213 +65,6 @@ class BrowserProfiler:
self.builtin_config_file = os.path.join(self.builtin_browser_dir, "browser_config.json")
os.makedirs(self.builtin_browser_dir, exist_ok=True)
def _is_windows(self) -> bool:
"""Check if running on Windows platform."""
return sys.platform.startswith('win') or sys.platform == 'cygwin'
def _is_macos(self) -> bool:
"""Check if running on macOS platform."""
return sys.platform == 'darwin'
def _is_linux(self) -> bool:
"""Check if running on Linux platform."""
return sys.platform.startswith('linux')
def _get_quit_message(self, tag: str) -> str:
"""Get appropriate quit message based on context."""
if tag == "PROFILE":
return "Closing browser and saving profile..."
elif tag == "CDP":
return "Closing browser..."
else:
return "Closing browser..."
async def _listen_windows(self, user_done_event, check_browser_process, tag: str):
"""Windows-specific keyboard listener using msvcrt."""
try:
import msvcrt
except ImportError:
raise ImportError("msvcrt module not available on this platform")
while True:
try:
# Check for keyboard input
if msvcrt.kbhit():
raw = msvcrt.getch()
# Handle Unicode decoding more robustly
key = None
try:
key = raw.decode("utf-8")
except UnicodeDecodeError:
try:
# Try different encodings
key = raw.decode("latin1")
except UnicodeDecodeError:
# Skip if we can't decode
continue
# Validate key
if not key or len(key) != 1:
continue
# Check for printable characters only
if not key.isprintable():
continue
# Check for quit command
if key.lower() == "q":
self.logger.info(
self._get_quit_message(tag),
tag=tag,
base_color=LogColor.GREEN
)
user_done_event.set()
return
# Check if browser process ended
if await check_browser_process():
return
# Small delay to prevent busy waiting
await asyncio.sleep(0.1)
except Exception as e:
self.logger.warning(f"Error in Windows keyboard listener: {e}", tag=tag)
# Continue trying instead of failing completely
await asyncio.sleep(0.1)
continue
async def _listen_unix(self, user_done_event: asyncio.Event, check_browser_process, tag: str):
"""Unix/Linux/macOS keyboard listener using termios and select."""
try:
import termios
import tty
import select
except ImportError:
raise ImportError("termios/tty/select modules not available on this platform")
# Get stdin file descriptor
try:
fd = sys.stdin.fileno()
except (AttributeError, OSError):
raise ImportError("stdin is not a terminal")
# Save original terminal settings
old_settings = None
try:
old_settings = termios.tcgetattr(fd)
except termios.error as e:
raise ImportError(f"Cannot get terminal attributes: {e}")
try:
# Switch to non-canonical mode (cbreak mode)
tty.setcbreak(fd)
while True:
try:
# Use select to check if input is available (non-blocking)
# Timeout of 0.5 seconds to periodically check browser process
readable, _, _ = select.select([sys.stdin], [], [], 0.5)
if readable:
# Read one character
key = sys.stdin.read(1)
if key and key.lower() == "q":
self.logger.info(
self._get_quit_message(tag),
tag=tag,
base_color=LogColor.GREEN
)
user_done_event.set()
return
# Check if browser process ended
if await check_browser_process():
return
# Small delay to prevent busy waiting
await asyncio.sleep(0.1)
except (KeyboardInterrupt, EOFError):
# Handle Ctrl+C or EOF gracefully
self.logger.info("Keyboard interrupt received", tag=tag)
user_done_event.set()
return
except Exception as e:
self.logger.warning(f"Error in Unix keyboard listener: {e}", tag=tag)
await asyncio.sleep(0.1)
continue
finally:
# Always restore terminal settings
if old_settings is not None:
try:
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
except Exception as e:
self.logger.error(f"Failed to restore terminal settings: {e}", tag=tag)
async def _listen_fallback(self, user_done_event: asyncio.Event, check_browser_process, tag: str):
"""Fallback keyboard listener using simple input() method."""
self.logger.info("Using fallback input mode. Type 'q' and press Enter to quit.", tag=tag)
# Run input in a separate thread to avoid blocking
import threading
import queue
input_queue = queue.Queue()
def input_thread():
"""Thread function to handle input."""
try:
while not user_done_event.is_set():
try:
# Use input() with a prompt
user_input = input("Press 'q' + Enter to quit: ").strip().lower()
input_queue.put(user_input)
if user_input == 'q':
break
except (EOFError, KeyboardInterrupt):
input_queue.put('q')
break
except Exception as e:
self.logger.warning(f"Error in input thread: {e}", tag=tag)
break
except Exception as e:
self.logger.error(f"Input thread failed: {e}", tag=tag)
# Start input thread
thread = threading.Thread(target=input_thread, daemon=True)
thread.start()
try:
while not user_done_event.is_set():
# Check for user input
try:
user_input = input_queue.get_nowait()
if user_input == 'q':
self.logger.info(
self._get_quit_message(tag),
tag=tag,
base_color=LogColor.GREEN
)
user_done_event.set()
return
except queue.Empty:
pass
# Check if browser process ended
if await check_browser_process():
return
# Small delay
await asyncio.sleep(0.5)
except Exception as e:
self.logger.error(f"Fallback listener failed: {e}", tag=tag)
user_done_event.set()
async def create_profile(self,
profile_name: Optional[str] = None,
browser_config: Optional[BrowserConfig] = None) -> Optional[str]:
@@ -387,38 +180,42 @@ class BrowserProfiler:
# Run keyboard input loop in a separate task
async def listen_for_quit_command():
"""Cross-platform keyboard listener that waits for 'q' key press."""
import termios
import tty
import select
# First output the prompt
self.logger.info(
"Press {segment} when you've finished using the browser...",
tag="PROFILE",
params={"segment": "'q'"}, colors={"segment": LogColor.YELLOW},
base_color=LogColor.CYAN
)
async def check_browser_process():
"""Check if browser process is still running."""
if (
managed_browser.browser_process
and managed_browser.browser_process.poll() is not None
):
self.logger.info(
"Browser already closed. Ending input listener.", tag="PROFILE"
)
user_done_event.set()
return True
return False
# Try platform-specific implementations with fallback
self.logger.info("Press 'q' when you've finished using the browser...", tag="PROFILE")
# Save original terminal settings
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
if self._is_windows():
await self._listen_windows(user_done_event, check_browser_process, "PROFILE")
else:
await self._listen_unix(user_done_event, check_browser_process, "PROFILE")
except Exception as e:
self.logger.warning(f"Platform-specific keyboard listener failed: {e}", tag="PROFILE")
self.logger.info("Falling back to simple input mode...", tag="PROFILE")
await self._listen_fallback(user_done_event, check_browser_process, "PROFILE")
# Switch to non-canonical mode (no line buffering)
tty.setcbreak(fd)
while True:
# Check if input is available (non-blocking)
readable, _, _ = select.select([sys.stdin], [], [], 0.5)
if readable:
key = sys.stdin.read(1)
if key.lower() == 'q':
self.logger.info("Closing browser and saving profile...", tag="PROFILE", base_color=LogColor.GREEN)
user_done_event.set()
return
# Check if the browser process has already exited
if managed_browser.browser_process and managed_browser.browser_process.poll() is not None:
self.logger.info("Browser already closed. Ending input listener.", tag="PROFILE")
user_done_event.set()
return
await asyncio.sleep(0.1)
finally:
# Restore terminal settings
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
try:
from playwright.async_api import async_playwright
@@ -885,33 +682,42 @@ class BrowserProfiler:
# Run keyboard input loop in a separate task
async def listen_for_quit_command():
"""Cross-platform keyboard listener that waits for 'q' key press."""
import termios
import tty
import select
# First output the prompt
self.logger.info(
"Press {segment} to stop the browser and exit...",
tag="CDP",
params={"segment": "'q'"}, colors={"segment": LogColor.YELLOW},
base_color=LogColor.CYAN
)
async def check_browser_process():
"""Check if browser process is still running."""
if managed_browser.browser_process and managed_browser.browser_process.poll() is not None:
self.logger.info("Browser already closed. Ending input listener.", tag="CDP")
user_done_event.set()
return True
return False
# Try platform-specific implementations with fallback
self.logger.info("Press 'q' to stop the browser and exit...", tag="CDP")
# Save original terminal settings
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
if self._is_windows():
await self._listen_windows(user_done_event, check_browser_process, "CDP")
else:
await self._listen_unix(user_done_event, check_browser_process, "CDP")
except Exception as e:
self.logger.warning(f"Platform-specific keyboard listener failed: {e}", tag="CDP")
self.logger.info("Falling back to simple input mode...", tag="CDP")
await self._listen_fallback(user_done_event, check_browser_process, "CDP")
# Switch to non-canonical mode (no line buffering)
tty.setcbreak(fd)
while True:
# Check if input is available (non-blocking)
readable, _, _ = select.select([sys.stdin], [], [], 0.5)
if readable:
key = sys.stdin.read(1)
if key.lower() == 'q':
self.logger.info("Closing browser...", tag="CDP")
user_done_event.set()
return
# Check if the browser process has already exited
if managed_browser.browser_process and managed_browser.browser_process.poll() is not None:
self.logger.info("Browser already closed. Ending input listener.", tag="CDP")
user_done_event.set()
return
await asyncio.sleep(0.1)
finally:
# Restore terminal settings
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
# Function to retrieve and display CDP JSON config
async def get_cdp_json(port):

View File

@@ -242,16 +242,6 @@ class LXMLWebScrapingStrategy(ContentScrapingStrategy):
exclude_domains = set(kwargs.get("exclude_domains", []))
# Process links
try:
base_element = element.xpath("//head/base[@href]")
if base_element:
base_href = base_element[0].get("href", "").strip()
if base_href:
url = base_href
except Exception as e:
self._log("error", f"Error extracting base URL: {str(e)}", "SCRAPE")
pass
for link in element.xpath(".//a[@href]"):
href = link.get("href", "").strip()
if not href:

View File

@@ -116,11 +116,6 @@ class BestFirstCrawlingStrategy(DeepCrawlStrategy):
valid_links.append(base_url)
# If we have more valid links than capacity, limit them
if len(valid_links) > remaining_capacity:
valid_links = valid_links[:remaining_capacity]
self.logger.info(f"Limiting to {remaining_capacity} URLs due to max_pages limit")
# Record the new depths and add to next_links
for url in valid_links:
depths[url] = new_depth
@@ -140,7 +135,8 @@ class BestFirstCrawlingStrategy(DeepCrawlStrategy):
"""
queue: asyncio.PriorityQueue = asyncio.PriorityQueue()
# Push the initial URL with its negated score (0 if no scorer) and depth 0.
await queue.put((0, 0, start_url, None))
initial_score = self.url_scorer.score(start_url) if self.url_scorer else 0
await queue.put((-initial_score, 0, start_url, None))
visited: Set[str] = set()
depths: Dict[str, int] = {start_url: 0}
@@ -187,7 +183,7 @@ class BestFirstCrawlingStrategy(DeepCrawlStrategy):
result.metadata = result.metadata or {}
result.metadata["depth"] = depth
result.metadata["parent_url"] = parent_url
result.metadata["score"] = score
result.metadata["score"] = -score
# Count only successful crawls toward max_pages limit
if result.success:
@@ -208,7 +204,7 @@ class BestFirstCrawlingStrategy(DeepCrawlStrategy):
for new_url, new_parent in new_links:
new_depth = depths.get(new_url, depth + 1)
new_score = self.url_scorer.score(new_url) if self.url_scorer else 0
await queue.put((new_score, new_depth, new_url, new_parent))
await queue.put((-new_score, new_depth, new_url, new_parent))
# End of crawl.
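The inversion works because `asyncio.PriorityQueue` is a min-heap: the smallest tuple is popped first, so pushing `-score` makes the highest-scored URL come out first. A runnable sketch:

```python
import asyncio

async def demo():
    q: asyncio.PriorityQueue = asyncio.PriorityQueue()
    for url, score in [("low", 0.2), ("high", 0.9), ("mid", 0.5)]:
        await q.put((-score, url))  # negate: min-heap becomes max-first
    while not q.empty():
        neg_score, url = await q.get()
        print(url, -neg_score)  # high 0.9, mid 0.5, low 0.2

asyncio.run(demo())
```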

View File

@@ -38,14 +38,7 @@ class BFSDeepCrawlStrategy(DeepCrawlStrategy):
self.include_external = include_external
self.score_threshold = score_threshold
self.max_pages = max_pages
# Type check for logger
if isinstance(logger, dict):
logging.getLogger(__name__).warning(
"BFSDeepCrawlStrategy received a dict as logger; falling back to default logger."
)
self.logger = logging.getLogger(__name__)
else:
self.logger = logger or logging.getLogger(__name__)
self.logger = logger or logging.getLogger(__name__)
self.stats = TraversalStats(start_time=datetime.now())
self._cancel_event = asyncio.Event()
self._pages_crawled = 0

View File

@@ -30,7 +30,7 @@ class Crawl4aiDockerClient:
def __init__(
self,
base_url: str = "http://localhost:8000",
timeout: float = 600.0, # Increased to 10 minutes for crawling operations
timeout: float = 30.0,
verify_ssl: bool = True,
verbose: bool = True,
log_file: Optional[str] = None
@@ -113,12 +113,21 @@ class Crawl4aiDockerClient:
self.logger.info(f"Crawling {len(urls)} URLs {'(streaming)' if is_streaming else ''}", tag="CRAWL")
if is_streaming:
# For streaming, we need to return the async generator properly
# The caller should be able to do: async for result in await client.crawl(...)
async def streaming_wrapper():
async for result in self._stream_crawl_results(data):
yield result
return streaming_wrapper()
async def stream_results() -> AsyncGenerator[CrawlResult, None]:
async with self._http_client.stream("POST", f"{self.base_url}/crawl/stream", json=data) as response:
response.raise_for_status()
async for line in response.aiter_lines():
if line.strip():
result = json.loads(line)
if "error" in result:
self.logger.error_status(url=result.get("url", "unknown"), error=result["error"])
continue
self.logger.url_status(url=result.get("url", "unknown"), success=True, timing=result.get("timing", 0.0))
if result.get("status") == "completed":
continue
else:
yield CrawlResult(**result)
return stream_results()
response = await self._request("POST", "/crawl", json=data)
result_data = response.json()
@@ -129,35 +138,6 @@ class Crawl4aiDockerClient:
self.logger.success(f"Crawl completed with {len(results)} results", tag="CRAWL")
return results[0] if len(results) == 1 else results
async def _stream_crawl_results(self, data: Dict[str, Any]) -> AsyncGenerator[CrawlResult, None]:
"""Internal method to handle streaming crawl results."""
async with self._http_client.stream("POST", f"{self.base_url}/crawl/stream", json=data) as response:
response.raise_for_status()
async for line in response.aiter_lines():
if line.strip():
try:
result = json.loads(line)
if "error" in result:
self.logger.error_status(url=result.get("url", "unknown"), error=result["error"])
continue
# Check if this is a crawl result (has required fields)
if "url" in result and "success" in result:
self.logger.url_status(url=result.get("url", "unknown"), success=result.get("success", False), timing=result.get("timing", 0.0))
# Create CrawlResult object properly
crawl_result = CrawlResult(**result)
yield crawl_result
# Skip status-only messages
elif result.get("status") == "completed":
continue
except json.JSONDecodeError as e:
self.logger.error(f"Failed to parse streaming response: {e}", tag="STREAM")
continue
except Exception as e:
self.logger.error(f"Error processing streaming result: {e}", tag="STREAM")
continue
async def get_schema(self) -> Dict[str, Any]:
"""Retrieve configuration schemas."""
response = await self._request("GET", "/schema")
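Per the in-code comment in `crawl()` above, the streaming wrapper exists so callers can iterate the awaited result directly. A hypothetical call shape (exact parameters are not shown in this diff):

```python
# Hypothetical usage; client construction and streaming config assumed set up:
results_stream = await client.crawl(urls)
async for result in results_stream:
    print(result.url, result.success)
```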

View File

@@ -1,36 +1,4 @@
"""
Crawl4AI Models Module
This module contains Pydantic models used throughout the Crawl4AI library.
Key Features:
- ORJSONModel: Base model with ORJSON serialization support
- DeprecatedPropertiesMixin: Global system for handling deprecated properties
- CrawlResult: Main result model with backward compatibility support
Deprecated Properties System:
The DeprecatedPropertiesMixin provides a global way to handle deprecated properties
across all models. Instead of manually excluding deprecated properties in each
model_dump() call, you can simply override the get_deprecated_properties() method:
Example:
class MyModel(ORJSONModel):
name: str
old_field: Optional[str] = None
def get_deprecated_properties(self) -> set[str]:
return {'old_field', 'another_deprecated_field'}
@property
def old_field(self):
raise AttributeError("old_field is deprecated, use name instead")
The system automatically excludes these properties from serialization, preventing
property objects from appearing in JSON output.
"""
from pydantic import BaseModel, ConfigDict,HttpUrl, PrivateAttr, Field
from pydantic import BaseModel, HttpUrl, PrivateAttr, Field
from typing import List, Dict, Optional, Callable, Awaitable, Union, Any
from typing import AsyncGenerator
from typing import Generic, TypeVar
@@ -40,7 +8,7 @@ from .ssl_certificate import SSLCertificate
from datetime import datetime
from datetime import timedelta
import orjson
###############################
# Dispatcher Models
###############################
@@ -123,122 +91,7 @@ class TokenUsage:
completion_tokens_details: Optional[dict] = None
prompt_tokens_details: Optional[dict] = None
def orjson_default(obj):
# Handle datetime (if not already handled by orjson)
if isinstance(obj, datetime):
return obj.isoformat()
# Handle property objects (convert to string or something else)
if isinstance(obj, property):
return str(obj)
# Last resort: convert to string
return str(obj)
class DeprecatedPropertiesMixin:
"""
Mixin to handle deprecated properties in Pydantic models.
Classes that inherit from this mixin can define deprecated properties
that will be automatically excluded from serialization.
Usage:
1. Override the get_deprecated_properties() method to return a set of deprecated property names
2. The model_dump method will automatically exclude these properties
Example:
class MyModel(ORJSONModel):
def get_deprecated_properties(self) -> set[str]:
return {'old_field', 'legacy_property'}
name: str
old_field: Optional[str] = None # Field definition
@property
def old_field(self): # Property that overrides the field
raise AttributeError("old_field is deprecated, use name instead")
"""
def get_deprecated_properties(self) -> set[str]:
"""
Get deprecated property names for this model.
Override this method in subclasses to define deprecated properties.
Returns:
set[str]: Set of deprecated property names
"""
return set()
@classmethod
def get_all_deprecated_properties(cls) -> set[str]:
"""
Get all deprecated properties from this class and all parent classes.
Returns:
set[str]: Set of all deprecated property names
"""
deprecated_props = set()
# Create an instance to call the instance method
try:
# Try to create a dummy instance to get deprecated properties
dummy_instance = cls.__new__(cls)
deprecated_props.update(dummy_instance.get_deprecated_properties())
except Exception:
# If we can't create an instance, check for class-level definitions
pass
# Also check parent classes
for klass in cls.__mro__:
if hasattr(klass, 'get_deprecated_properties') and klass != DeprecatedPropertiesMixin:
try:
dummy_instance = klass.__new__(klass)
deprecated_props.update(dummy_instance.get_deprecated_properties())
except Exception:
pass
return deprecated_props
def model_dump(self, *args, **kwargs):
"""
Override model_dump to automatically exclude deprecated properties.
This method:
1. Gets the existing exclude set from kwargs
2. Adds all deprecated properties defined in get_deprecated_properties()
3. Calls the parent model_dump with the updated exclude set
"""
# Get the default exclude set, or create empty set if None
exclude = kwargs.get('exclude', set())
if exclude is None:
exclude = set()
elif not isinstance(exclude, set):
exclude = set(exclude) if exclude else set()
# Add deprecated properties for this instance
exclude.update(self.get_deprecated_properties())
kwargs['exclude'] = exclude
return super().model_dump(*args, **kwargs)
class ORJSONModel(DeprecatedPropertiesMixin, BaseModel):
model_config = ConfigDict(
ser_json_timedelta="iso8601", # Optional: format timedelta
ser_json_bytes="utf8", # Optional: bytes → UTF-8 string
)
def model_dump_json(self, **kwargs) -> bytes:
"""Custom JSON serialization using orjson"""
return orjson.dumps(self.model_dump(**kwargs), default=orjson_default)
@classmethod
def model_validate_json(cls, json_data: Union[str, bytes], **kwargs):
"""Custom JSON deserialization using orjson"""
if isinstance(json_data, str):
json_data = json_data.encode()
return cls.model_validate(orjson.loads(json_data), **kwargs)
class UrlModel(ORJSONModel):
class UrlModel(BaseModel):
url: HttpUrl
forced: bool = False
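A self-contained sketch of the mixin pattern documented above, trimmed to its essentials (the real class also walks the MRO for parent-class deprecations):

```python
from pydantic import BaseModel

class DeprecatedPropertiesMixin:
    def get_deprecated_properties(self) -> set:
        return set()

    def model_dump(self, *args, **kwargs):
        # Merge deprecated names into any caller-supplied exclude set.
        exclude = set(kwargs.get("exclude") or set())
        exclude.update(self.get_deprecated_properties())
        kwargs["exclude"] = exclude
        return super().model_dump(*args, **kwargs)

class Article(DeprecatedPropertiesMixin, BaseModel):
    title: str
    old_title: str | None = None

    def get_deprecated_properties(self) -> set:
        return {"old_title"}

print(Article(title="Hello").model_dump())  # {'title': 'Hello'}
```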
@@ -255,7 +108,7 @@ class TraversalStats:
total_depth_reached: int = 0
current_depth: int = 0
class DispatchResult(ORJSONModel):
class DispatchResult(BaseModel):
task_id: str
memory_usage: float
peak_memory: float
@@ -263,7 +116,7 @@ class DispatchResult(ORJSONModel):
end_time: Union[datetime, float]
error_message: str = ""
class MarkdownGenerationResult(ORJSONModel):
class MarkdownGenerationResult(BaseModel):
raw_markdown: str
markdown_with_citations: str
references_markdown: str
@@ -273,7 +126,7 @@ class MarkdownGenerationResult(ORJSONModel):
def __str__(self):
return self.raw_markdown
class CrawlResult(ORJSONModel):
class CrawlResult(BaseModel):
url: str
html: str
fit_html: Optional[str] = None
@@ -303,10 +156,6 @@ class CrawlResult(ORJSONModel):
class Config:
arbitrary_types_allowed = True
def get_deprecated_properties(self) -> set[str]:
"""Define deprecated properties that should be excluded from serialization."""
return {'fit_html', 'fit_markdown', 'markdown_v2'}
# NOTE: The StringCompatibleMarkdown class, custom __init__ method, property getters/setters,
# and model_dump override all exist to support a smooth transition from markdown as a string
# to markdown as a MarkdownGenerationResult object, while maintaining backward compatibility.
@@ -396,16 +245,14 @@ class CrawlResult(ORJSONModel):
1. PrivateAttr fields are excluded from serialization by default
2. We need to maintain backward compatibility by including the 'markdown' field
in the serialized output
3. Uses the DeprecatedPropertiesMixin to automatically exclude deprecated properties
3. We're transitioning from 'markdown_v2' to enhancing 'markdown' to hold
the same type of data
Future developers: This method ensures that the markdown content is properly
serialized despite being stored in a private attribute. The deprecated properties
are automatically handled by the mixin.
serialized despite being stored in a private attribute. If the serialization
requirements change, this is where you would update the logic.
"""
# Use the parent class method which handles deprecated properties automatically
result = super().model_dump(*args, **kwargs)
# Add the markdown content if it exists
if self._markdown is not None:
result["markdown"] = self._markdown.model_dump()
return result
@@ -460,7 +307,7 @@ RunManyReturn = Union[
# 1. Replace the private attribute and property with a standard field
# 2. Update any serialization logic that might depend on the current behavior
class AsyncCrawlResponse(ORJSONModel):
class AsyncCrawlResponse(BaseModel):
html: str
response_headers: Dict[str, str]
js_execution_result: Optional[Dict[str, Any]] = None
@@ -481,7 +328,7 @@ class AsyncCrawlResponse(ORJSONModel):
###############################
# Scraping Models
###############################
class MediaItem(ORJSONModel):
class MediaItem(BaseModel):
src: Optional[str] = ""
data: Optional[str] = ""
alt: Optional[str] = ""
@@ -493,7 +340,7 @@ class MediaItem(ORJSONModel):
width: Optional[int] = None
class Link(ORJSONModel):
class Link(BaseModel):
href: Optional[str] = ""
text: Optional[str] = ""
title: Optional[str] = ""
@@ -506,7 +353,7 @@ class Link(ORJSONModel):
total_score: Optional[float] = None # Combined score from intrinsic and contextual scores
class Media(ORJSONModel):
class Media(BaseModel):
images: List[MediaItem] = []
videos: List[
MediaItem
@@ -517,12 +364,12 @@ class Media(ORJSONModel):
tables: List[Dict] = [] # Table data extracted from HTML tables
class Links(ORJSONModel):
class Links(BaseModel):
internal: List[Link] = []
external: List[Link] = []
class ScrapingResult(ORJSONModel):
class ScrapingResult(BaseModel):
cleaned_html: str
success: bool
media: Media = Media()

View File

@@ -1,6 +1,5 @@
import os
import json
import orjson
import asyncio
from typing import List, Tuple, Dict
from functools import partial
@@ -66,7 +65,7 @@ async def handle_llm_qa(
) -> str:
"""Process QA using LLM with crawled content as context."""
try:
if not url.startswith(('http://', 'https://')) and not url.startswith(("raw:", "raw://")):
if not url.startswith(('http://', 'https://')):
url = 'https://' + url
# Extract base URL by finding last '?q=' occurrence
last_q_index = url.rfind('?q=')
@@ -192,7 +191,7 @@ async def handle_markdown_request(
detail=error_msg
)
decoded_url = unquote(url)
if not decoded_url.startswith(('http://', 'https://')) and not decoded_url.startswith(("raw:", "raw://")):
if not decoded_url.startswith(('http://', 'https://')):
decoded_url = 'https://' + decoded_url
if filter_type == FilterType.RAW:
@@ -329,7 +328,7 @@ async def create_new_task(
) -> JSONResponse:
"""Create and initialize a new task."""
decoded_url = unquote(input_path)
if not decoded_url.startswith(('http://', 'https://')) and not decoded_url.startswith(("raw:", "raw://")):
if not decoded_url.startswith(('http://', 'https://')):
decoded_url = 'https://' + decoded_url
from datetime import datetime
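The same scheme check now appears in three handlers; a hypothetical helper (not present in this diff) would capture the rule once:

```python
def ensure_scheme(url: str) -> str:
    # raw: / raw:// inputs carry inline HTML and must not be rewritten.
    if url.startswith(("http://", "https://", "raw:", "raw://")):
        return url
    return "https://" + url
```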
@@ -385,60 +384,27 @@ def create_task_response(task: dict, task_id: str, base_url: str) -> dict:
async def stream_results(crawler: AsyncWebCrawler, results_gen: AsyncGenerator) -> AsyncGenerator[bytes, None]:
"""Stream results with heartbeats and completion markers."""
import orjson
from datetime import datetime
import inspect
def orjson_default(obj):
# Handle datetime (if not already handled by orjson)
if isinstance(obj, datetime):
return obj.isoformat()
# Handle property objects (convert to string or something else)
if isinstance(obj, property):
return str(obj)
# Last resort: convert to string
return str(obj)
import json
from utils import datetime_handler
try:
logger.info(f"Starting streaming with results_gen type: {type(results_gen)}")
logger.info(f"Is results_gen async generator: {inspect.isasyncgen(results_gen)}")
# Check if results_gen is actually an async generator vs another type
if inspect.isasyncgen(results_gen):
logger.info("Processing as async generator")
async for result in results_gen:
try:
logger.info(f"Processing streaming result of type: {type(result)}")
# Check if this result is actually a CrawlResult
if hasattr(result, 'model_dump_json'):
server_memory_mb = _get_memory_mb()
result_json = result.model_dump_json()
result_dict = orjson.loads(result_json)
result_dict['server_memory_mb'] = server_memory_mb
if result_dict.get('pdf') is not None:
result_dict['pdf'] = b64encode(result_dict['pdf']).decode('utf-8')
logger.info(f"Streaming result for {result_dict.get('url', 'unknown')}")
data = orjson.dumps(result_dict, default=orjson_default).decode('utf-8') + "\n"
yield data.encode('utf-8')
else:
logger.error(f"Result doesn't have model_dump_json method: {type(result)}")
error_response = {"error": f"Invalid result type: {type(result)}", "url": "unknown"}
yield (orjson.dumps(error_response).decode('utf-8') + "\n").encode('utf-8')
except Exception as e:
logger.error(f"Serialization error: {e}")
logger.error(f"Result type was: {type(result)}")
error_response = {"error": str(e), "url": getattr(result, 'url', 'unknown')}
yield (orjson.dumps(error_response).decode('utf-8') + "\n").encode('utf-8')
else:
logger.error(f"results_gen is not an async generator: {type(results_gen)}")
error_response = {"error": f"Invalid results_gen type: {type(results_gen)}"}
yield (orjson.dumps(error_response).decode('utf-8') + "\n").encode('utf-8')
async for result in results_gen:
try:
server_memory_mb = _get_memory_mb()
result_dict = result.model_dump()
result_dict['server_memory_mb'] = server_memory_mb
# If PDF exists, encode it to base64
if result_dict.get('pdf') is not None:
result_dict['pdf'] = b64encode(result_dict['pdf']).decode('utf-8')
logger.info(f"Streaming result for {result_dict.get('url', 'unknown')}")
data = json.dumps(result_dict, default=datetime_handler) + "\n"
yield data.encode('utf-8')
except Exception as e:
logger.error(f"Serialization error: {e}")
error_response = {"error": str(e), "url": getattr(result, 'url', 'unknown')}
yield (json.dumps(error_response) + "\n").encode('utf-8')
yield orjson.dumps({"status": "completed"}).decode('utf-8').encode('utf-8')
yield json.dumps({"status": "completed"}).encode('utf-8')
except asyncio.CancelledError:
logger.warning("Client disconnected during streaming")
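Both serializer variants hand objects they cannot encode to a `default` hook. The orjson side in miniature:

```python
import orjson
from datetime import datetime

def orjson_default(obj):
    if isinstance(obj, datetime):  # usually handled natively by orjson
        return obj.isoformat()
    return str(obj)                # last resort, mirroring the handler above

line = orjson.dumps({"url": "https://example.com", "ts": datetime.now()},
                    default=orjson_default) + b"\n"  # NDJSON framing
```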
@@ -462,7 +428,7 @@ async def handle_crawl_request(
peak_mem_mb = start_mem_mb
try:
urls = [('https://' + url) if not url.startswith(('http://', 'https://')) and not url.startswith(("raw:", "raw://")) else url for url in urls]
urls = [('https://' + url) if not url.startswith(('http://', 'https://')) else url for url in urls]
browser_config = BrowserConfig.load(browser_config)
crawler_config = CrawlerRunConfig.load(crawler_config)
@@ -506,9 +472,7 @@ async def handle_crawl_request(
# Process results to handle PDF bytes
processed_results = []
for result in results:
# Use ORJSON serialization to handle property objects properly
result_json = result.model_dump_json()
result_dict = orjson.loads(result_json)
result_dict = result.model_dump()
# If PDF exists, encode it to base64
if result_dict.get('pdf') is not None:
result_dict['pdf'] = b64encode(result_dict['pdf']).decode('utf-8')
@@ -558,19 +522,8 @@ async def handle_stream_crawl_request(
browser_config.verbose = False
crawler_config = CrawlerRunConfig.load(crawler_config)
crawler_config.scraping_strategy = LXMLWebScrapingStrategy()
# Don't force stream=True here - let the deep crawl strategy control its own streaming behavior
crawler_config.stream = True
# Apply global base config (this was missing!)
base_config = config["crawler"]["base_config"]
for key, value in base_config.items():
if hasattr(crawler_config, key):
print(f"[DEBUG] Applying base_config: {key} = {value}")
setattr(crawler_config, key, value)
print(f"[DEBUG] Deep crawl strategy: {type(crawler_config.deep_crawl_strategy).__name__ if crawler_config.deep_crawl_strategy else 'None'}")
print(f"[DEBUG] Stream mode: {crawler_config.stream}")
print(f"[DEBUG] Simulate user: {getattr(crawler_config, 'simulate_user', 'Not set')}")
dispatcher = MemoryAdaptiveDispatcher(
memory_threshold_percent=config["crawler"]["memory_threshold_percent"],
rate_limiter=RateLimiter(
@@ -584,58 +537,11 @@ async def handle_stream_crawl_request(
# crawler = AsyncWebCrawler(config=browser_config)
# await crawler.start()
# Use correct method based on URL count (same as regular endpoint)
if len(urls) == 1:
# For single URL, use arun to get CrawlResult, then wrap in async generator
single_result_container = await crawler.arun(
url=urls[0],
config=crawler_config,
dispatcher=dispatcher
)
async def single_result_generator():
# Handle CrawlResultContainer - extract the actual results
if hasattr(single_result_container, '_results'):
# It's a CrawlResultContainer - iterate over the internal results
for result in single_result_container._results:
# Check if the result is an async generator (from deep crawl)
if hasattr(result, '__aiter__'):
async for sub_result in result:
yield sub_result
else:
yield result
elif hasattr(single_result_container, '__aiter__'):
# It's an async generator (from streaming deep crawl)
async for result in single_result_container:
yield result
elif hasattr(single_result_container, '__iter__') and not hasattr(single_result_container, 'url'):
# It's iterable but not a CrawlResult itself
for result in single_result_container:
# Check if each result is an async generator
if hasattr(result, '__aiter__'):
async for sub_result in result:
yield sub_result
else:
yield result
else:
# It's a single CrawlResult
yield single_result_container
results_gen = single_result_generator()
else:
# For multiple URLs, use arun_many
results_gen = await crawler.arun_many(
urls=urls,
config=crawler_config,
dispatcher=dispatcher
)
# If results_gen is a list (e.g., from deep crawl), convert to async generator
if isinstance(results_gen, list):
async def convert_list_to_generator():
for result in results_gen:
yield result
results_gen = convert_list_to_generator()
results_gen = await crawler.arun_many(
urls=urls,
config=crawler_config,
dispatcher=dispatcher
)
return crawler, results_gen
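The longer branch above normalizes whatever `arun`/`arun_many` hands back into one async-iterable interface. The core trick as a small helper:

```python
async def as_async_gen(results):
    # Accept either an async generator (streaming deep crawl) or a plain
    # list/iterable of results, and expose a single async interface.
    if hasattr(results, "__aiter__"):
        async for r in results:
            yield r
    else:
        for r in results:
            yield r
```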

View File

@@ -7,16 +7,13 @@ Crawl4AI FastAPI entrypoint
"""
# ── stdlib & 3rd-party imports ──────────────────────────────
from datetime import datetime
import orjson
from crawler_pool import get_crawler, close_all, janitor
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
from auth import create_access_token, get_token_dependency, TokenRequest
from pydantic import BaseModel
from typing import Optional, List, Dict
from fastapi import Request, Depends
from fastapi.responses import FileResponse, ORJSONResponse
from fastapi.responses import FileResponse
import base64
import re
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
@@ -35,8 +32,6 @@ from schemas import (
JSEndpointRequest,
)
# Use the proper serialization functions from async_configs
from crawl4ai.async_configs import to_serializable_dict
from utils import (
FilterType, load_config, setup_logging, verify_email_domain
)
@@ -117,26 +112,11 @@ async def lifespan(_: FastAPI):
app.state.janitor.cancel()
await close_all()
def orjson_default(obj):
# Handle datetime (if not already handled by orjson)
if isinstance(obj, datetime):
return obj.isoformat()
# Handle property objects (convert to string or something else)
if isinstance(obj, property):
return str(obj)
# Last resort: convert to string
return str(obj)
def orjson_dumps(v, *, default):
return orjson.dumps(v, default=orjson_default).decode()
# ───────────────────── FastAPI instance ──────────────────────
app = FastAPI(
title=config["app"]["title"],
version=config["app"]["version"],
lifespan=lifespan,
default_response_class=ORJSONResponse
)
# ── static playground ──────────────────────────────────────
@@ -257,9 +237,9 @@ async def get_markdown(
body: MarkdownRequest,
_td: Dict = Depends(token_dep),
):
if not body.url.startswith(("http://", "https://")) and not body.url.startswith(("raw:", "raw://")):
if not body.url.startswith(("http://", "https://")):
raise HTTPException(
400, "Invalid URL format. Must start with http://, https://, or for raw HTML (raw:, raw://)")
400, "URL must be absolute and start with http/https")
markdown = await handle_markdown_request(
body.url, body.f, body.q, body.c, config, body.provider
)
@@ -421,7 +401,7 @@ async def llm_endpoint(
):
if not q:
raise HTTPException(400, "Query parameter 'q' is required")
if not url.startswith(("http://", "https://")) and not url.startswith(("raw:", "raw://")):
if not url.startswith(("http://", "https://")):
url = "https://" + url
answer = await handle_llm_qa(url, q, config)
return JSONResponse({"answer": answer})
@@ -455,20 +435,15 @@ async def crawl(
"""
Crawl a list of URLs and return the results as JSON.
"""
try:
if not crawl_request.urls:
raise HTTPException(400, "At least one URL required")
res = await handle_crawl_request(
urls=crawl_request.urls,
browser_config=crawl_request.browser_config,
crawler_config=crawl_request.crawler_config,
config=config,
)
# handle_crawl_request returns a dictionary, so we can pass it directly to ORJSONResponse
return ORJSONResponse(res)
except Exception as e:
print(f"Error occurred: {e}")
return ORJSONResponse({"error": str(e)}, status_code=500)
if not crawl_request.urls:
raise HTTPException(400, "At least one URL required")
res = await handle_crawl_request(
urls=crawl_request.urls,
browser_config=crawl_request.browser_config,
crawler_config=crawl_request.crawler_config,
config=config,
)
return JSONResponse(res)
@app.post("/crawl/stream")

View File

@@ -1,350 +0,0 @@
# 🚀 Crawl4AI v0.7.3: The Multi-Config Intelligence Update
*August 6, 2025 • 5 min read*
---
Today I'm releasing Crawl4AI v0.7.3—the Multi-Config Intelligence Update. This release brings smarter URL-specific configurations, flexible Docker deployments, important bug fixes, and documentation improvements that make Crawl4AI more robust and production-ready.
## 🎯 What's New at a Glance
- **🕵️ Undetected Browser Support**: Stealth mode for bypassing bot detection systems
- **🎨 Multi-URL Configurations**: Different crawling strategies for different URL patterns in a single batch
- **🐳 Flexible Docker LLM Providers**: Configure LLM providers via environment variables
- **🧠 Memory Monitoring**: Enhanced memory usage tracking and optimization tools
- **📊 Enhanced Table Extraction**: Improved table access and DataFrame conversion
- **💰 GitHub Sponsors**: 4-tier sponsorship system with custom arrangements
- **🔧 Bug Fixes**: Resolved several critical issues for better stability
- **📚 Documentation Updates**: Clearer examples and improved API documentation
## 🎨 Multi-URL Configurations: One Size Doesn't Fit All
**The Problem:** You're crawling a mix of documentation sites, blogs, and API endpoints. Each needs different handling—caching for docs, fresh content for news, structured extraction for APIs. Previously, you'd run separate crawls or write complex conditional logic.
**My Solution:** I implemented URL-specific configurations that let you define different strategies for different URL patterns in a single crawl batch. First match wins, with optional fallback support.
### Technical Implementation
```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, MatchMode
# Define specialized configs for different content types
configs = [
# Documentation sites - aggressive caching, include links
CrawlerRunConfig(
url_matcher=["*docs*", "*documentation*"],
cache_mode="write",
markdown_generator_options={"include_links": True}
),
# News/blog sites - fresh content, scroll for lazy loading
CrawlerRunConfig(
url_matcher=lambda url: 'blog' in url or 'news' in url,
cache_mode="bypass",
js_code="window.scrollTo(0, document.body.scrollHeight/2);"
),
# API endpoints - structured extraction
CrawlerRunConfig(
url_matcher=["*.json", "*api*"],
extraction_strategy=LLMExtractionStrategy(
provider="openai/gpt-4o-mini",
extraction_type="structured"
)
),
# Default fallback for everything else
CrawlerRunConfig() # No url_matcher = matches everything
]
# Crawl multiple URLs with appropriate configs
async with AsyncWebCrawler() as crawler:
results = await crawler.arun_many(
urls=[
"https://docs.python.org/3/", # → Uses documentation config
"https://blog.python.org/", # → Uses blog config
"https://api.github.com/users", # → Uses API config
"https://example.com/" # → Uses default config
],
config=configs
)
```
**Matching Capabilities:**
- **String Patterns**: Wildcards like `"*.pdf"`, `"*/blog/*"`
- **Function Matchers**: Lambda functions for complex logic
- **Mixed Matchers**: Combine strings and functions with AND/OR logic (see the sketch after this list)
- **Fallback Support**: Default config when nothing matches
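A sketch of a mixed matcher; the `match_mode` parameter is an assumption inferred from the `MatchMode` import above, not confirmed by this post:

```python
from crawl4ai import CrawlerRunConfig, MatchMode

# Match PDF links under /blog/ only: a string pattern AND a function matcher.
pdf_blog_config = CrawlerRunConfig(
    url_matcher=["*/blog/*", lambda url: url.lower().endswith(".pdf")],
    match_mode=MatchMode.AND,  # assumption: every matcher must pass
)
```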
**Expected Real-World Impact:**
- **Mixed Content Sites**: Handle blogs, docs, and downloads in one crawl
- **Multi-Domain Crawling**: Different strategies per domain without separate runs
- **Reduced Complexity**: No more if/else forests in your extraction code
- **Better Performance**: Each URL gets exactly the processing it needs
## 🕵️ Undetected Browser Support: Stealth Mode Activated
**The Problem:** Modern websites employ sophisticated bot detection systems. Cloudflare, Akamai, and custom solutions block automated crawlers, limiting access to valuable content.
**My Solution:** I implemented undetected browser support with a flexible adapter pattern. Now Crawl4AI can bypass most bot detection systems using stealth techniques.
### Technical Implementation
```python
from crawl4ai import AsyncWebCrawler, BrowserConfig
# Enable undetected mode for stealth crawling
browser_config = BrowserConfig(
browser_type="undetected", # Use undetected Chrome
headless=True, # Can run headless with stealth
extra_args=[
"--disable-blink-features=AutomationControlled",
"--disable-web-security",
"--disable-features=VizDisplayCompositor"
]
)
async with AsyncWebCrawler(config=browser_config) as crawler:
# This will bypass most bot detection systems
result = await crawler.arun("https://protected-site.com")
if result.success:
print("✅ Successfully bypassed bot detection!")
print(f"Content length: {len(result.markdown)}")
```
**Advanced Anti-Bot Strategies:**
```python
# Combine multiple stealth techniques
from crawl4ai import CrawlerRunConfig
config = CrawlerRunConfig(
# Random user agents and headers
headers={
"Accept-Language": "en-US,en;q=0.9",
"Accept-Encoding": "gzip, deflate, br",
"DNT": "1"
},
# Human-like behavior simulation
js_code="""
// Random mouse movements
const simulateHuman = () => {
const event = new MouseEvent('mousemove', {
clientX: Math.random() * window.innerWidth,
clientY: Math.random() * window.innerHeight
});
document.dispatchEvent(event);
};
setInterval(simulateHuman, 100 + Math.random() * 200);
// Random scrolling
const randomScroll = () => {
const scrollY = Math.random() * (document.body.scrollHeight - window.innerHeight);
window.scrollTo(0, scrollY);
};
setTimeout(randomScroll, 500 + Math.random() * 1000);
""",
# Delay to appear more human
delay_before_return_html=2.0
)
result = await crawler.arun("https://bot-protected-site.com", config=config)
```
**Expected Real-World Impact:**
- **Enterprise Scraping**: Access previously blocked corporate sites and databases
- **Market Research**: Gather data from competitor sites with protection
- **Price Monitoring**: Track e-commerce sites that block automated access
- **Content Aggregation**: Collect news and social media despite anti-bot measures
- **Compliance Testing**: Verify your own site's bot protection effectiveness
## 🧠 Memory Monitoring & Optimization
**The Problem:** Long-running crawl sessions consume excessive memory, especially when processing large batches or heavy JavaScript sites.
**My Solution:** Built comprehensive memory monitoring and optimization utilities that track usage patterns and provide actionable insights.
### Memory Tracking Implementation
```python
from crawl4ai.memory_utils import MemoryMonitor, get_memory_info
# Monitor memory during crawling
monitor = MemoryMonitor()
async with AsyncWebCrawler() as crawler:
# Start monitoring
monitor.start_monitoring()
# Perform memory-intensive operations
results = await crawler.arun_many([
"https://heavy-js-site.com",
"https://large-images-site.com",
"https://dynamic-content-site.com"
])
# Get detailed memory report
memory_report = monitor.get_report()
print(f"Peak memory usage: {memory_report['peak_mb']:.1f} MB")
print(f"Memory efficiency: {memory_report['efficiency']:.1f}%")
# Automatic cleanup suggestions
if memory_report['peak_mb'] > 1000: # > 1GB
print("💡 Consider batch size optimization")
print("💡 Enable aggressive garbage collection")
```
**Expected Real-World Impact:**
- **Production Stability**: Prevent memory-related crashes in long-running services
- **Cost Optimization**: Right-size server resources based on actual usage
- **Performance Tuning**: Identify memory bottlenecks and optimization opportunities
- **Scalability Planning**: Understand memory patterns for horizontal scaling
## 📊 Enhanced Table Extraction
**The Problem:** Table data was accessed through the generic `result.media` interface, making DataFrame conversion cumbersome and unclear.
**My Solution:** Dedicated `result.tables` interface with direct DataFrame conversion and improved detection algorithms.
### New Table Access Pattern
```python
# Old way (deprecated)
# tables_data = result.media.get('tables', [])
# New way (v0.7.3+)
result = await crawler.arun("https://site-with-tables.com")
# Direct table access
if result.tables:
print(f"Found {len(result.tables)} tables")
# Convert to pandas DataFrame instantly
import pandas as pd
for i, table in enumerate(result.tables):
df = pd.DataFrame(table['data'])
print(f"Table {i}: {df.shape[0]} rows × {df.shape[1]} columns")
print(df.head())
# Table metadata
print(f"Source: {table.get('source_xpath', 'Unknown')}")
print(f"Headers: {table.get('headers', [])}")
```
**Expected Real-World Impact:**
- **Data Analysis**: Faster transition from web data to analysis-ready DataFrames
- **ETL Pipelines**: Cleaner integration with data processing workflows
- **Reporting**: Simplified table extraction for automated reporting systems
## 💰 Community Support: GitHub Sponsors
I've launched GitHub Sponsors to ensure Crawl4AI's continued development and support our growing community.
**Sponsorship Tiers:**
- **🌱 Supporter ($5/month)**: Community support + early feature previews
- **🚀 Professional ($25/month)**: Priority support + beta access
- **🏢 Business ($100/month)**: Direct consultation + custom integrations
- **🏛️ Enterprise ($500/month)**: Dedicated support + feature development
**Why Sponsor?**
- Ensure continuous development and maintenance
- Get priority support and feature requests
- Access to premium documentation and examples
- Direct line to the development team
[**Become a Sponsor →**](https://github.com/sponsors/unclecode)
## 🐳 Docker: Flexible LLM Provider Configuration
**The Problem:** Hardcoded LLM providers in Docker deployments. Want to switch from OpenAI to Groq? Rebuild and redeploy. Testing different models? Multiple Docker images.
**My Solution:** Configure LLM providers via environment variables. Switch providers without touching code or rebuilding images.
### Deployment Flexibility
```bash
# Option 1: Direct environment variables
docker run -d \
-e LLM_PROVIDER="groq/llama-3.2-3b-preview" \
-e GROQ_API_KEY="your-key" \
-p 11235:11235 \
unclecode/crawl4ai:latest
# Option 2: Using .llm.env file (recommended for production)
# Create .llm.env file:
# LLM_PROVIDER=openai/gpt-4o-mini
# OPENAI_API_KEY=your-openai-key
# GROQ_API_KEY=your-groq-key
docker run -d \
--env-file .llm.env \
-p 11235:11235 \
unclecode/crawl4ai:latest
```
Override per request when needed:
```python
# Use default provider from .llm.env
response = requests.post("http://localhost:11235/crawl", json={
"url": "https://example.com",
"extraction_strategy": {"type": "llm"}
})
# Override to use different provider for this specific request
response = requests.post("http://localhost:11235/crawl", json={
"url": "https://complex-page.com",
"extraction_strategy": {
"type": "llm",
"provider": "openai/gpt-4" # Override default
}
})
```
**Expected Real-World Impact:**
- **Cost Optimization**: Use cheaper models for simple tasks, premium for complex
- **A/B Testing**: Compare provider performance without deployment changes
- **Fallback Strategies**: Switch providers on-the-fly during outages
- **Development Flexibility**: Test locally with one provider, deploy with another
- **Secure Configuration**: Keep API keys in `.llm.env` file, not in commands
## 🔧 Bug Fixes & Improvements
This release includes several important bug fixes that improve stability and reliability:
- **URL Matcher Fallback**: Fixed edge cases in URL pattern matching logic
- **Memory Management**: Resolved memory leaks in long-running crawl sessions
- **Sitemap Processing**: Fixed redirect handling in sitemap fetching
- **Table Extraction**: Improved table detection and extraction accuracy
- **Error Handling**: Better error messages and recovery from network failures
## 📚 Documentation Enhancements
Based on community feedback, we've updated:
- Clearer examples for multi-URL configuration
- Improved CrawlResult documentation with all available fields
- Fixed typos and inconsistencies across documentation
- Added real-world URLs in examples for better understanding
- New comprehensive demo showcasing all v0.7.3 features
## 🙏 Acknowledgments
Thanks to our contributors and the entire community for feedback and bug reports.
## 📚 Resources
- [Full Documentation](https://docs.crawl4ai.com)
- [GitHub Repository](https://github.com/unclecode/crawl4ai)
- [Discord Community](https://discord.gg/crawl4ai)
- [Feature Demo](https://github.com/unclecode/crawl4ai/blob/main/docs/releases_review/demo_v0.7.3.py)
---
*Crawl4AI continues to evolve with your needs. This release makes it smarter, more flexible, and more stable. Try the new multi-config feature and flexible Docker deployment—they're game changers!*
**Happy Crawling! 🕷️**
*- The Crawl4AI Team*

View File

@@ -8,20 +8,26 @@ from typing import Dict, Any
class Crawl4AiTester:
def __init__(self, base_url: str = "http://localhost:11235"):
def __init__(self, base_url: str = "http://localhost:11235", api_token: str = None):
self.base_url = base_url
self.api_token = (
api_token or os.getenv("CRAWL4AI_API_TOKEN") or "test_api_code"
) # Check environment variable as fallback
self.headers = (
{"Authorization": f"Bearer {self.api_token}"} if self.api_token else {}
)
def submit_and_wait(
self, request_data: Dict[str, Any], timeout: int = 300
) -> Dict[str, Any]:
# Submit crawl job using async endpoint
# Submit crawl job
response = requests.post(
f"{self.base_url}/crawl/job", json=request_data
f"{self.base_url}/crawl", json=request_data, headers=self.headers
)
response.raise_for_status()
job_response = response.json()
task_id = job_response["task_id"]
print(f"Submitted job with task_id: {task_id}")
if response.status_code == 403:
raise Exception("API token is invalid or missing")
task_id = response.json()["task_id"]
print(f"Task ID: {task_id}")
# Poll for result
start_time = time.time()
@@ -32,9 +38,8 @@ class Crawl4AiTester:
)
result = requests.get(
f"{self.base_url}/crawl/job/{task_id}"
f"{self.base_url}/task/{task_id}", headers=self.headers
)
result.raise_for_status()
status = result.json()
if status["status"] == "failed":
@@ -47,10 +52,10 @@ class Crawl4AiTester:
time.sleep(2)
def submit_sync(self, request_data: Dict[str, Any]) -> Dict[str, Any]:
# Use synchronous crawl endpoint
response = requests.post(
f"{self.base_url}/crawl",
f"{self.base_url}/crawl_sync",
json=request_data,
headers=self.headers,
timeout=60,
)
if response.status_code == 408:
@@ -58,9 +63,20 @@ class Crawl4AiTester:
response.raise_for_status()
return response.json()
def crawl_direct(self, request_data: Dict[str, Any]) -> Dict[str, Any]:
"""Directly crawl without using task queue"""
response = requests.post(
f"{self.base_url}/crawl_direct", json=request_data, headers=self.headers
)
response.raise_for_status()
return response.json()
def test_docker_deployment(version="basic"):
tester = Crawl4AiTester(
base_url="http://localhost:11235",
# base_url="https://api.crawl4ai.com" # just for example
# api_token="test" # just for example
)
print(f"Testing Crawl4AI Docker {version} version")
@@ -79,8 +95,11 @@ def test_docker_deployment(version="basic"):
time.sleep(5)
# Test cases based on version
test_basic_crawl_direct(tester)
test_basic_crawl(tester)
test_basic_crawl(tester)
test_basic_crawl_sync(tester)
if version in ["full", "transformer"]:
test_cosine_extraction(tester)
@@ -93,129 +112,115 @@ def test_docker_deployment(version="basic"):
def test_basic_crawl(tester: Crawl4AiTester):
print("\n=== Testing Basic Crawl (Async) ===")
print("\n=== Testing Basic Crawl ===")
request = {
"urls": ["https://www.nbcnews.com/business"],
"browser_config": {},
"crawler_config": {}
"urls": "https://www.nbcnews.com/business",
"priority": 10,
"session_id": "test",
}
result = tester.submit_and_wait(request)
print(f"Basic crawl result count: {len(result['result']['results'])}")
print(f"Basic crawl result length: {len(result['result']['markdown'])}")
assert result["result"]["success"]
assert len(result["result"]["results"]) > 0
assert len(result["result"]["results"][0]["markdown"]) > 0
assert len(result["result"]["markdown"]) > 0
def test_basic_crawl_sync(tester: Crawl4AiTester):
print("\n=== Testing Basic Crawl (Sync) ===")
request = {
"urls": ["https://www.nbcnews.com/business"],
"browser_config": {},
"crawler_config": {}
"urls": "https://www.nbcnews.com/business",
"priority": 10,
"session_id": "test",
}
result = tester.submit_sync(request)
print(f"Basic crawl result count: {len(result['results'])}")
assert result["success"]
assert len(result["results"]) > 0
assert len(result["results"][0]["markdown"]) > 0
print(f"Basic crawl result length: {len(result['result']['markdown'])}")
assert result["status"] == "completed"
assert result["result"]["success"]
assert len(result["result"]["markdown"]) > 0
def test_basic_crawl_direct(tester: Crawl4AiTester):
print("\n=== Testing Basic Crawl (Direct) ===")
request = {
"urls": "https://www.nbcnews.com/business",
"priority": 10,
# "session_id": "test"
"cache_mode": "bypass", # or "enabled", "disabled", "read_only", "write_only"
}
result = tester.crawl_direct(request)
print(f"Basic crawl result length: {len(result['result']['markdown'])}")
assert result["result"]["success"]
assert len(result["result"]["markdown"]) > 0
def test_js_execution(tester: Crawl4AiTester):
print("\n=== Testing JS Execution ===")
request = {
"urls": ["https://www.nbcnews.com/business"],
"browser_config": {"headless": True},
"crawler_config": {
"js_code": [
"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); if(loadMoreButton) loadMoreButton.click();"
],
"wait_for": "wide-tease-item__wrapper df flex-column flex-row-m flex-nowrap-m enable-new-sports-feed-mobile-design(10)"
}
"urls": "https://www.nbcnews.com/business",
"priority": 8,
"js_code": [
"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"
],
"wait_for": "article.tease-card:nth-child(10)",
"crawler_params": {"headless": True},
}
result = tester.submit_and_wait(request)
print(f"JS execution result count: {len(result['result']['results'])}")
print(f"JS execution result length: {len(result['result']['markdown'])}")
assert result["result"]["success"]
def test_css_selector(tester: Crawl4AiTester):
print("\n=== Testing CSS Selector ===")
request = {
"urls": ["https://www.nbcnews.com/business"],
"browser_config": {"headless": True},
"crawler_config": {
"css_selector": ".wide-tease-item__description",
"word_count_threshold": 10
}
"urls": "https://www.nbcnews.com/business",
"priority": 7,
"css_selector": ".wide-tease-item__description",
"crawler_params": {"headless": True},
"extra": {"word_count_threshold": 10},
}
result = tester.submit_and_wait(request)
print(f"CSS selector result count: {len(result['result']['results'])}")
print(f"CSS selector result length: {len(result['result']['markdown'])}")
assert result["result"]["success"]
def test_structured_extraction(tester: Crawl4AiTester):
print("\n=== Testing Structured Extraction ===")
schema = {
"name": "Cryptocurrency Prices",
"baseSelector": "table[data-testid=\"prices-table\"] tbody tr",
"name": "Coinbase Crypto Prices",
"baseSelector": ".cds-tableRow-t45thuk",
"fields": [
{
"name": "asset_name",
"selector": "td:nth-child(2) p.cds-headline-h4steop",
"type": "text"
"name": "crypto",
"selector": "td:nth-child(1) h2",
"type": "text",
},
{
"name": "asset_symbol",
"selector": "td:nth-child(2) p.cds-label2-l1sm09ec",
"type": "text"
},
{
"name": "asset_image_url",
"selector": "td:nth-child(2) img[alt=\"Asset Symbol\"]",
"type": "attribute",
"attribute": "src"
},
{
"name": "asset_url",
"selector": "td:nth-child(2) a[aria-label^=\"Asset page for\"]",
"type": "attribute",
"attribute": "href"
"name": "symbol",
"selector": "td:nth-child(1) p",
"type": "text",
},
{
"name": "price",
"selector": "td:nth-child(3) div.cds-typographyResets-t6muwls.cds-body-bwup3gq",
"type": "text"
"selector": "td:nth-child(2)",
"type": "text",
},
{
"name": "change",
"selector": "td:nth-child(7) p.cds-body-bwup3gq",
"type": "text"
}
]
],
}
request = {
"urls": ["https://www.coinbase.com/explore"],
"browser_config": {},
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"extraction_strategy": {
"type": "JsonCssExtractionStrategy",
"params": {"schema": schema}
}
}
}
"urls": "https://www.coinbase.com/explore",
"priority": 9,
"extraction_config": {"type": "json_css", "params": {"schema": schema}},
}
result = tester.submit_and_wait(request)
extracted = json.loads(result["result"]["results"][0]["extracted_content"])
extracted = json.loads(result["result"]["extracted_content"])
print(f"Extracted {len(extracted)} items")
if extracted:
print("Sample item:", json.dumps(extracted[0], indent=2))
print("Sample item:", json.dumps(extracted[0], indent=2))
assert result["result"]["success"]
assert len(extracted) > 0
@@ -225,54 +230,43 @@ def test_llm_extraction(tester: Crawl4AiTester):
schema = {
"type": "object",
"properties": {
"asset_name": {
"model_name": {
"type": "string",
"description": "Name of the asset.",
"description": "Name of the OpenAI model.",
},
"price": {
"input_fee": {
"type": "string",
"description": "Price of the asset.",
"description": "Fee for input token for the OpenAI model.",
},
"change": {
"output_fee": {
"type": "string",
"description": "Change in price of the asset.",
"description": "Fee for output token for the OpenAI model.",
},
},
"required": ["asset_name", "price", "change"],
"required": ["model_name", "input_fee", "output_fee"],
}
request = {
"urls": ["https://www.coinbase.com/en-in/explore"],
"browser_config": {},
"crawler_config": {
"type": "CrawlerRunConfig",
"urls": "https://openai.com/api/pricing",
"priority": 8,
"extraction_config": {
"type": "llm",
"params": {
"extraction_strategy": {
"type": "LLMExtractionStrategy",
"params": {
"llm_config": {
"type": "LLMConfig",
"params": {
"provider": "gemini/gemini-2.0-flash-exp",
"api_token": os.getenv("GEMINI_API_KEY")
}
},
"schema": schema,
"extraction_type": "schema",
"instruction": "From the crawled content, extract asset names along with their prices and change in price.",
}
},
"word_count_threshold": 1
}
}
"provider": "openai/gpt-4o-mini",
"api_token": os.getenv("OPENAI_API_KEY"),
"schema": schema,
"extraction_type": "schema",
"instruction": """From the crawled content, extract all mentioned model names along with their fees for input and output tokens.""",
},
},
"crawler_params": {"word_count_threshold": 1},
}
try:
result = tester.submit_and_wait(request)
extracted = json.loads(result["result"]["results"][0]["extracted_content"])
print(f"Extracted {len(extracted)} asset pricing entries")
if extracted:
print("Sample entry:", json.dumps(extracted[0], indent=2))
extracted = json.loads(result["result"]["extracted_content"])
print(f"Extracted {len(extracted)} model pricing entries")
print("Sample entry:", json.dumps(extracted[0], indent=2))
assert result["result"]["success"]
except Exception as e:
print(f"LLM extraction test failed (might be due to missing API key): {str(e)}")
@@ -280,16 +274,6 @@ def test_llm_extraction(tester: Crawl4AiTester):
def test_llm_with_ollama(tester: Crawl4AiTester):
print("\n=== Testing LLM with Ollama ===")
# Check if Ollama is accessible first
try:
ollama_response = requests.get("http://localhost:11434/api/tags", timeout=5)
ollama_response.raise_for_status()
print("Ollama is accessible")
except:
print("Ollama is not accessible, skipping test")
return
schema = {
"type": "object",
"properties": {
@@ -310,33 +294,24 @@ def test_llm_with_ollama(tester: Crawl4AiTester):
}
request = {
"urls": ["https://www.nbcnews.com/business"],
"browser_config": {"verbose": True},
"crawler_config": {
"type": "CrawlerRunConfig",
"urls": "https://www.nbcnews.com/business",
"priority": 8,
"extraction_config": {
"type": "llm",
"params": {
"extraction_strategy": {
"type": "LLMExtractionStrategy",
"params": {
"llm_config": {
"type": "LLMConfig",
"params": {
"provider": "ollama/llama3.2:latest",
}
},
"schema": schema,
"extraction_type": "schema",
"instruction": "Extract the main article information including title, summary, and main topics.",
}
},
"word_count_threshold": 1
}
}
"provider": "ollama/llama2",
"schema": schema,
"extraction_type": "schema",
"instruction": "Extract the main article information including title, summary, and main topics.",
},
},
"extra": {"word_count_threshold": 1},
"crawler_params": {"verbose": True},
}
try:
result = tester.submit_and_wait(request)
extracted = json.loads(result["result"]["results"][0]["extracted_content"])
extracted = json.loads(result["result"]["extracted_content"])
print("Extracted content:", json.dumps(extracted, indent=2))
assert result["result"]["success"]
except Exception as e:
@@ -346,30 +321,24 @@ def test_llm_with_ollama(tester: Crawl4AiTester):
def test_cosine_extraction(tester: Crawl4AiTester):
print("\n=== Testing Cosine Extraction ===")
request = {
"urls": ["https://www.nbcnews.com/business"],
"browser_config": {},
"crawler_config": {
"type": "CrawlerRunConfig",
"urls": "https://www.nbcnews.com/business",
"priority": 8,
"extraction_config": {
"type": "cosine",
"params": {
"extraction_strategy": {
"type": "CosineStrategy",
"params": {
"semantic_filter": "business finance economy",
"word_count_threshold": 10,
"max_dist": 0.2,
"top_k": 3,
}
}
}
}
"semantic_filter": "business finance economy",
"word_count_threshold": 10,
"max_dist": 0.2,
"top_k": 3,
},
},
}
try:
result = tester.submit_and_wait(request)
extracted = json.loads(result["result"]["results"][0]["extracted_content"])
extracted = json.loads(result["result"]["extracted_content"])
print(f"Extracted {len(extracted)} text clusters")
if extracted:
print("First cluster tags:", extracted[0]["tags"])
print("First cluster tags:", extracted[0]["tags"])
assert result["result"]["success"]
except Exception as e:
print(f"Cosine extraction test failed: {str(e)}")
@@ -378,25 +347,20 @@ def test_cosine_extraction(tester: Crawl4AiTester):
def test_screenshot(tester: Crawl4AiTester):
print("\n=== Testing Screenshot ===")
request = {
"urls": ["https://www.nbcnews.com/business"],
"browser_config": {"headless": True},
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"screenshot": True
}
}
"urls": "https://www.nbcnews.com/business",
"priority": 5,
"screenshot": True,
"crawler_params": {"headless": True},
}
result = tester.submit_and_wait(request)
screenshot_data = result["result"]["results"][0]["screenshot"]
print("Screenshot captured:", bool(screenshot_data))
print("Screenshot captured:", bool(result["result"]["screenshot"]))
if screenshot_data:
if result["result"]["screenshot"]:
# Save screenshot
screenshot_bytes = base64.b64decode(screenshot_data)
screenshot_data = base64.b64decode(result["result"]["screenshot"])
with open("test_screenshot.jpg", "wb") as f:
f.write(screenshot_bytes)
f.write(screenshot_data)
print("Screenshot saved as test_screenshot.jpg")
assert result["result"]["success"]
@@ -404,4 +368,5 @@ def test_screenshot(tester: Crawl4AiTester):
if __name__ == "__main__":
version = sys.argv[1] if len(sys.argv) > 1 else "basic"
# version = "full"
test_docker_deployment(version)

View File

@@ -20,30 +20,24 @@ Ever wondered why your AI coding assistant struggles with your library despite c
## Latest Release
### [Crawl4AI v0.7.3 The Multi-Config Intelligence Update](releases/0.7.3.md)
*August 6, 2025*
### [Crawl4AI v0.7.0 The Adaptive Intelligence Update](releases/0.7.0.md)
*January 28, 2025*
Crawl4AI v0.7.3 brings smarter URL-specific configurations, flexible Docker deployments, and critical stability improvements. Configure different crawling strategies for different URL patterns in a single batch—perfect for mixed content sites with docs, blogs, and APIs.
Crawl4AI v0.7.0 introduces groundbreaking intelligence features that transform how crawlers understand and adapt to websites. This release brings Adaptive Crawling that learns website patterns, Virtual Scroll support for infinite pages, intelligent Link Preview with 3-layer scoring, and the powerful Async URL Seeder for massive URL discovery.
Key highlights:
- **Multi-URL Configurations**: Different strategies for different URL patterns in one crawl
- **Flexible Docker LLM Providers**: Configure providers via environment variables
- **Bug Fixes**: Critical stability improvements for production deployments
- **Documentation Updates**: Clearer examples and improved API documentation
- **Adaptive Crawling**: Crawlers that learn and adapt to website structures automatically
- **Virtual Scroll Support**: Complete content extraction from modern infinite scroll pages
- **Link Preview**: 3-layer scoring system for intelligent link prioritization
- **Async URL Seeder**: Discover thousands of URLs in seconds with smart filtering
- **Performance Boost**: Up to 3x faster with optimized resource handling
[Read full release notes →](releases/0.7.3.md)
[Read full release notes →](releases/0.7.0.md)
---
## Previous Releases
### [Crawl4AI v0.7.0 The Adaptive Intelligence Update](releases/0.7.0.md)
*January 28, 2025*
Introduced groundbreaking intelligence features including Adaptive Crawling, Virtual Scroll support, intelligent Link Preview, and the Async URL Seeder for massive URL discovery.
[Read release notes →](releases/0.7.0.md)
### [Crawl4AI v0.6.0 World-Aware Crawling, Pre-Warmed Browsers, and the MCP API](releases/0.6.0.md)
*December 23, 2024*

View File

@@ -1,170 +0,0 @@
# 🚀 Crawl4AI v0.7.3: The Multi-Config Intelligence Update
*August 6, 2025 • 5 min read*
---
Today I'm releasing Crawl4AI v0.7.3—the Multi-Config Intelligence Update. This release brings smarter URL-specific configurations, flexible Docker deployments, important bug fixes, and documentation improvements that make Crawl4AI more robust and production-ready.
## 🎯 What's New at a Glance
- **Multi-URL Configurations**: Different crawling strategies for different URL patterns in a single batch
- **Flexible Docker LLM Providers**: Configure LLM providers via environment variables
- **Bug Fixes**: Resolved several critical issues for better stability
- **Documentation Updates**: Clearer examples and improved API documentation
## 🎨 Multi-URL Configurations: One Size Doesn't Fit All
**The Problem:** You're crawling a mix of documentation sites, blogs, and API endpoints. Each needs different handling—caching for docs, fresh content for news, structured extraction for APIs. Previously, you'd run separate crawls or write complex conditional logic.
**My Solution:** I implemented URL-specific configurations that let you define different strategies for different URL patterns in a single crawl batch. First match wins, with optional fallback support.
### Technical Implementation
```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, MatchMode
# Define specialized configs for different content types
configs = [
# Documentation sites - aggressive caching, include links
CrawlerRunConfig(
url_matcher=["*docs*", "*documentation*"],
cache_mode="write",
markdown_generator_options={"include_links": True}
),
# News/blog sites - fresh content, scroll for lazy loading
CrawlerRunConfig(
url_matcher=lambda url: 'blog' in url or 'news' in url,
cache_mode="bypass",
js_code="window.scrollTo(0, document.body.scrollHeight/2);"
),
# API endpoints - structured extraction
CrawlerRunConfig(
url_matcher=["*.json", "*api*"],
extraction_strategy=LLMExtractionStrategy(
provider="openai/gpt-4o-mini",
extraction_type="structured"
)
),
# Default fallback for everything else
CrawlerRunConfig() # No url_matcher = matches everything
]
# Crawl multiple URLs with appropriate configs
async with AsyncWebCrawler() as crawler:
results = await crawler.arun_many(
urls=[
"https://docs.python.org/3/", # → Uses documentation config
"https://blog.python.org/", # → Uses blog config
"https://api.github.com/users", # → Uses API config
"https://example.com/" # → Uses default config
],
config=configs
)
```
**Matching Capabilities:**
- **String Patterns**: Wildcards like `"*.pdf"`, `"*/blog/*"`
- **Function Matchers**: Lambda functions for complex logic
- **Mixed Matchers**: Combine strings and functions with AND/OR logic (see the sketch after this list)
- **Fallback Support**: Default config when nothing matches
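As a sketch of mixed matching (`MatchMode` is imported in the example above, but the `match_mode` parameter name and AND semantics here are assumptions for illustration, not verified API):

```python
from crawl4ai import CrawlerRunConfig, MatchMode

# Hypothetical combined matcher: the config should apply only when BOTH
# conditions hold, i.e. the URL matches "*.pdf" AND contains "/reports/".
report_pdf_config = CrawlerRunConfig(
    url_matcher=["*.pdf", lambda url: "/reports/" in url],
    match_mode=MatchMode.AND,  # assumed parameter; AND = all matchers must pass
)
```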
**Expected Real-World Impact:**
- **Mixed Content Sites**: Handle blogs, docs, and downloads in one crawl
- **Multi-Domain Crawling**: Different strategies per domain without separate runs
- **Reduced Complexity**: No more if/else forests in your extraction code
- **Better Performance**: Each URL gets exactly the processing it needs
## 🐳 Docker: Flexible LLM Provider Configuration
**The Problem:** Hardcoded LLM providers in Docker deployments. Want to switch from OpenAI to Groq? Rebuild and redeploy. Testing different models? Multiple Docker images.
**My Solution:** Configure LLM providers via environment variables. Switch providers without touching code or rebuilding images.
### Deployment Flexibility
```bash
# Option 1: Direct environment variables
docker run -d \
-e LLM_PROVIDER="groq/llama-3.2-3b-preview" \
-e GROQ_API_KEY="your-key" \
-p 11235:11235 \
unclecode/crawl4ai:latest
# Option 2: Using .llm.env file (recommended for production)
# Create .llm.env file:
# LLM_PROVIDER=openai/gpt-4o-mini
# OPENAI_API_KEY=your-openai-key
# GROQ_API_KEY=your-groq-key
docker run -d \
--env-file .llm.env \
-p 11235:11235 \
unclecode/crawl4ai:latest
```
Override per request when needed:
```python
# Use default provider from .llm.env
response = requests.post("http://localhost:11235/crawl", json={
"url": "https://example.com",
"extraction_strategy": {"type": "llm"}
})
# Override to use different provider for this specific request
response = requests.post("http://localhost:11235/crawl", json={
"url": "https://complex-page.com",
"extraction_strategy": {
"type": "llm",
"provider": "openai/gpt-4" # Override default
}
})
```
**Expected Real-World Impact:**
- **Cost Optimization**: Use cheaper models for simple tasks, premium for complex
- **A/B Testing**: Compare provider performance without deployment changes
- **Fallback Strategies**: Switch providers on-the-fly during outages
- **Development Flexibility**: Test locally with one provider, deploy with another
- **Secure Configuration**: Keep API keys in `.llm.env` file, not in commands
## 🔧 Bug Fixes & Improvements
This release includes several important bug fixes that improve stability and reliability:
- **URL Matcher Fallback**: Fixed edge cases in URL pattern matching logic
- **Memory Management**: Resolved memory leaks in long-running crawl sessions
- **Sitemap Processing**: Fixed redirect handling in sitemap fetching
- **Table Extraction**: Improved table detection and extraction accuracy
- **Error Handling**: Better error messages and recovery from network failures
## 📚 Documentation Enhancements
Based on community feedback, we've updated:
- Clearer examples for multi-URL configuration
- Improved CrawlResult documentation with all available fields
- Fixed typos and inconsistencies across documentation
- Added real-world URLs in examples for better understanding
- New comprehensive demo showcasing all v0.7.3 features
## 🙏 Acknowledgments
Thanks to our contributors and the entire community for feedback and bug reports.
## 📚 Resources
- [Full Documentation](https://docs.crawl4ai.com)
- [GitHub Repository](https://github.com/unclecode/crawl4ai)
- [Discord Community](https://discord.gg/crawl4ai)
- [Feature Demo](https://github.com/unclecode/crawl4ai/blob/main/docs/releases_review/demo_v0.7.3.py)
---
*Crawl4AI continues to evolve with your needs. This release makes it smarter, more flexible, and more stable. Try the new multi-config feature and flexible Docker deployment—they're game changers!*
**Happy Crawling! 🕷️**
*- The Crawl4AI Team*

View File

@@ -58,15 +58,15 @@ Pull and run images directly from Docker Hub without building locally.
#### 1. Pull the Image
Our latest release is `0.7.3`. Images are built with multi-arch manifests, so Docker automatically pulls the correct version for your system.
Our latest release candidate is `0.7.0-r1`. Images are built with multi-arch manifests, so Docker automatically pulls the correct version for your system.
> 💡 **Note**: The `latest` tag points to the stable `0.7.3` version.
> ⚠️ **Important Note**: The `latest` tag currently points to the stable `0.6.0` version. After testing and validation, `0.7.0` (without -r1) will be released and `latest` will be updated. For now, please use `0.7.0-r1` to test the new features.
```bash
# Pull the latest version
docker pull unclecode/crawl4ai:0.7.3
# Pull the release candidate (for testing new features)
docker pull unclecode/crawl4ai:0.7.0-r1
# Or pull using the latest tag
# Or pull the current stable version (0.6.0)
docker pull unclecode/crawl4ai:latest
```
@@ -126,7 +126,7 @@ docker stop crawl4ai && docker rm crawl4ai
#### Docker Hub Versioning Explained
* **Image Name:** `unclecode/crawl4ai`
* **Tag Format:** `LIBRARY_VERSION[-SUFFIX]` (e.g., `0.7.3`)
* **Tag Format:** `LIBRARY_VERSION[-SUFFIX]` (e.g., `0.7.0-r1`)
* `LIBRARY_VERSION`: The semantic version of the core `crawl4ai` Python library
* `SUFFIX`: Optional tag for release candidates (``) and revisions (`r1`)
* **`latest` Tag:** Points to the most recent stable version

View File

@@ -91,17 +91,6 @@ async def test_css_selector_extraction():
assert result.markdown
assert all(heading in result.markdown for heading in ["#", "##", "###"])
@pytest.mark.asyncio
async def test_base_tag_link_extraction():
async with AsyncWebCrawler(verbose=True) as crawler:
url = "https://sohamkukreti.github.io/portfolio"
result = await crawler.arun(url=url)
assert result.success
assert result.links
assert isinstance(result.links, dict)
assert "internal" in result.links
assert "external" in result.links
assert any("github.com" in x["href"] for x in result.links["external"])
# Entry point for debugging
if __name__ == "__main__":

View File

@@ -10,13 +10,11 @@ import sys
import uuid
import shutil
from crawl4ai import BrowserProfiler
from crawl4ai.browser_manager import BrowserManager
# Add the project root to Python path if running directly
if __name__ == "__main__":
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../..')))
from crawl4ai.browser import BrowserManager, BrowserProfileManager
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.async_logger import AsyncLogger
@@ -27,7 +25,7 @@ async def test_profile_creation():
"""Test creating and managing browser profiles."""
logger.info("Testing profile creation and management", tag="TEST")
profile_manager = BrowserProfiler(logger=logger)
profile_manager = BrowserProfileManager(logger=logger)
try:
# List existing profiles
@@ -85,7 +83,7 @@ async def test_profile_with_browser():
"""Test using a profile with a browser."""
logger.info("Testing using a profile with a browser", tag="TEST")
profile_manager = BrowserProfiler(logger=logger)
profile_manager = BrowserProfileManager(logger=logger)
test_profile_name = f"test-browser-profile-{uuid.uuid4().hex[:8]}"
profile_path = None
@@ -103,8 +101,6 @@ async def test_profile_with_browser():
# Now use this profile with a browser
browser_config = BrowserConfig(
user_data_dir=profile_path,
use_managed_browser=True,
use_persistent_context=True,
headless=True
)

View File

@@ -168,7 +168,7 @@ class SimpleApiTester:
print("\n=== CORE APIs ===")
test_url = "https://example.com"
test_raw_html_url = "raw://<html><body><h1>Hello, World!</h1></body></html>"
# Test markdown endpoint
md_payload = {
"url": test_url,
@@ -180,17 +180,6 @@ class SimpleApiTester:
# print(result['data'].get('markdown', ''))
self.print_result(result)
# Test markdown endpoint with raw HTML
raw_md_payload = {
"url": test_raw_html_url,
"f": "fit",
"q": "test query",
"c": "0"
}
result = self.test_post_endpoint("/md", raw_md_payload)
self.print_result(result)
# Test HTML endpoint
html_payload = {"url": test_url}
result = self.test_post_endpoint("/html", html_payload)
@@ -226,15 +215,6 @@ class SimpleApiTester:
result = self.test_post_endpoint("/crawl", crawl_payload)
self.print_result(result)
# Test crawl endpoint with raw HTML
crawl_payload = {
"urls": [test_raw_html_url],
"browser_config": {},
"crawler_config": {}
}
result = self.test_post_endpoint("/crawl", crawl_payload)
self.print_result(result)
# Test config dump
config_payload = {"code": "CrawlerRunConfig()"}
result = self.test_post_endpoint("/config/dump", config_payload)

View File

@@ -74,7 +74,7 @@ async def test_direct_api():
# Make direct API call
async with httpx.AsyncClient() as client:
response = await client.post(
"http://localhost:11235/crawl",
"http://localhost:8000/crawl",
json=request_data,
timeout=300
)
@@ -100,24 +100,13 @@ async def test_direct_api():
async with httpx.AsyncClient() as client:
response = await client.post(
"http://localhost:11235/crawl",
"http://localhost:8000/crawl",
json=request_data
)
assert response.status_code == 200
result = response.json()
print("Structured extraction result:", result["success"])
# Test 3: Raw HTML
request_data["urls"] = ["raw://<html><body><h1>Hello, World!</h1><a href='https://example.com'>Example</a></body></html>"]
async with httpx.AsyncClient() as client:
response = await client.post(
"http://localhost:11235/crawl",
json=request_data
)
assert response.status_code == 200
result = response.json()
print("Raw HTML result:", result["success"])
# Test 3: Get schema
# async with httpx.AsyncClient() as client:
# response = await client.get("http://localhost:8000/schema")
@@ -129,7 +118,7 @@ async def test_with_client():
"""Test using the Crawl4AI Docker client SDK"""
print("\n=== Testing Client SDK ===")
async with Crawl4aiDockerClient(base_url="http://localhost:11235", verbose=True) as client:
async with Crawl4aiDockerClient(verbose=True) as client:
# Test 1: Basic crawl
browser_config = BrowserConfig(headless=True)
crawler_config = CrawlerRunConfig(

View File

@@ -6,22 +6,28 @@ import base64
import os
from typing import Dict, Any
class Crawl4AiTester:
def __init__(self, base_url: str = "http://localhost:11235"):
self.base_url = base_url
class Crawl4AiTester:
def __init__(self, base_url: str = "http://localhost:11235", api_token: str = None):
self.base_url = base_url
self.api_token = api_token or os.getenv(
"CRAWL4AI_API_TOKEN"
) # Check environment variable as fallback
self.headers = (
{"Authorization": f"Bearer {self.api_token}"} if self.api_token else {}
)
def submit_and_wait(
self, request_data: Dict[str, Any], timeout: int = 300
) -> Dict[str, Any]:
# Submit crawl job using async endpoint
# Submit crawl job
response = requests.post(
f"{self.base_url}/crawl/job", json=request_data
f"{self.base_url}/crawl", json=request_data, headers=self.headers
)
response.raise_for_status()
job_response = response.json()
task_id = job_response["task_id"]
print(f"Submitted job with task_id: {task_id}")
if response.status_code == 403:
raise Exception("API token is invalid or missing")
task_id = response.json()["task_id"]
print(f"Task ID: {task_id}")
# Poll for result
start_time = time.time()
@@ -32,9 +38,8 @@ class Crawl4AiTester:
)
result = requests.get(
f"{self.base_url}/crawl/job/{task_id}"
f"{self.base_url}/task/{task_id}", headers=self.headers
)
result.raise_for_status()
status = result.json()
if status["status"] == "failed":
@@ -47,10 +52,10 @@ class Crawl4AiTester:
time.sleep(2)
def submit_sync(self, request_data: Dict[str, Any]) -> Dict[str, Any]:
# Use synchronous crawl endpoint
response = requests.post(
f"{self.base_url}/crawl",
f"{self.base_url}/crawl_sync",
json=request_data,
headers=self.headers,
timeout=60,
)
if response.status_code == 408:
@@ -61,8 +66,9 @@ class Crawl4AiTester:
def test_docker_deployment(version="basic"):
tester = Crawl4AiTester(
base_url="http://localhost:11235",
#base_url="https://crawl4ai-sby74.ondigitalocean.app",
# base_url="http://localhost:11235" ,
base_url="https://crawl4ai-sby74.ondigitalocean.app",
api_token="test",
)
print(f"Testing Crawl4AI Docker {version} version")
@@ -82,60 +88,63 @@ def test_docker_deployment(version="basic"):
# Test cases based on version
test_basic_crawl(tester)
test_basic_crawl(tester)
test_basic_crawl_sync(tester)
if version in ["full", "transformer"]:
test_cosine_extraction(tester)
# if version in ["full", "transformer"]:
# test_cosine_extraction(tester)
test_js_execution(tester)
test_css_selector(tester)
test_structured_extraction(tester)
test_llm_extraction(tester)
test_llm_with_ollama(tester)
test_screenshot(tester)
# test_js_execution(tester)
# test_css_selector(tester)
# test_structured_extraction(tester)
# test_llm_extraction(tester)
# test_llm_with_ollama(tester)
# test_screenshot(tester)
def test_basic_crawl(tester: Crawl4AiTester):
print("\n=== Testing Basic Crawl (Async) ===")
print("\n=== Testing Basic Crawl ===")
request = {
"urls": ["https://www.nbcnews.com/business"],
"priority": 10,
"session_id": "test",
}
result = tester.submit_and_wait(request)
print(f"Basic crawl result count: {len(result['result']['results'])}")
print(f"Basic crawl result length: {len(result['result']['markdown'])}")
assert result["result"]["success"]
assert len(result["result"]["results"]) > 0
assert len(result["result"]["results"][0]["markdown"]) > 0
assert len(result["result"]["markdown"]) > 0
def test_basic_crawl_sync(tester: Crawl4AiTester):
print("\n=== Testing Basic Crawl (Sync) ===")
request = {
"urls": ["https://www.nbcnews.com/business"],
"priority": 10,
"session_id": "test",
}
result = tester.submit_sync(request)
print(f"Basic crawl result count: {len(result['results'])}")
assert result["success"]
assert len(result["results"]) > 0
assert len(result["results"][0]["markdown"]) > 0
print(f"Basic crawl result length: {len(result['result']['markdown'])}")
assert result["status"] == "completed"
assert result["result"]["success"]
assert len(result["result"]["markdown"]) > 0
def test_js_execution(tester: Crawl4AiTester):
print("\n=== Testing JS Execution ===")
request = {
"urls": ["https://www.nbcnews.com/business"],
"browser_config": {"headless": True},
"crawler_config": {
"js_code": [
"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); if(loadMoreButton) loadMoreButton.click();"
],
"wait_for": "wide-tease-item__wrapper df flex-column flex-row-m flex-nowrap-m enable-new-sports-feed-mobile-design(10)"
}
"priority": 8,
"js_code": [
"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"
],
"wait_for": "article.tease-card:nth-child(10)",
"crawler_params": {"headless": True},
}
result = tester.submit_and_wait(request)
print(f"JS execution result count: {len(result['result']['results'])}")
print(f"JS execution result length: {len(result['result']['markdown'])}")
assert result["result"]["success"]
@@ -143,78 +152,51 @@ def test_css_selector(tester: Crawl4AiTester):
print("\n=== Testing CSS Selector ===")
request = {
"urls": ["https://www.nbcnews.com/business"],
"browser_config": {"headless": True},
"crawler_config": {
"css_selector": ".wide-tease-item__description",
"word_count_threshold": 10
}
"priority": 7,
"css_selector": ".wide-tease-item__description",
"crawler_params": {"headless": True},
"extra": {"word_count_threshold": 10},
}
result = tester.submit_and_wait(request)
print(f"CSS selector result count: {len(result['result']['results'])}")
print(f"CSS selector result length: {len(result['result']['markdown'])}")
assert result["result"]["success"]
def test_structured_extraction(tester: Crawl4AiTester):
print("\n=== Testing Structured Extraction ===")
schema = {
"name": "Cryptocurrency Prices",
"baseSelector": "table[data-testid=\"prices-table\"] tbody tr",
"fields": [
{
"name": "asset_name",
"selector": "td:nth-child(2) p.cds-headline-h4steop",
"type": "text"
},
{
"name": "asset_symbol",
"selector": "td:nth-child(2) p.cds-label2-l1sm09ec",
"type": "text"
},
{
"name": "asset_image_url",
"selector": "td:nth-child(2) img[alt=\"Asset Symbol\"]",
"type": "attribute",
"attribute": "src"
},
{
"name": "asset_url",
"selector": "td:nth-child(2) a[aria-label^=\"Asset page for\"]",
"type": "attribute",
"attribute": "href"
},
{
"name": "price",
"selector": "td:nth-child(3) div.cds-typographyResets-t6muwls.cds-body-bwup3gq",
"type": "text"
},
{
"name": "change",
"selector": "td:nth-child(7) p.cds-body-bwup3gq",
"type": "text"
"name": "Coinbase Crypto Prices",
"baseSelector": ".cds-tableRow-t45thuk",
"fields": [
{
"name": "crypto",
"selector": "td:nth-child(1) h2",
"type": "text",
},
{
"name": "symbol",
"selector": "td:nth-child(1) p",
"type": "text",
},
{
"name": "price",
"selector": "td:nth-child(2)",
"type": "text",
},
],
}
]
}
request = {
"urls": ["https://www.coinbase.com/explore"],
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"extraction_strategy": {
"type": "JsonCssExtractionStrategy",
"params": {"schema": schema}
}
}
}
"priority": 9,
"extraction_config": {"type": "json_css", "params": {"schema": schema}},
}
result = tester.submit_and_wait(request)
extracted = json.loads(result["result"]["results"][0]["extracted_content"])
extracted = json.loads(result["result"]["extracted_content"])
print(f"Extracted {len(extracted)} items")
if extracted:
print("Sample item:", json.dumps(extracted[0], indent=2))
print("Sample item:", json.dumps(extracted[0], indent=2))
assert result["result"]["success"]
assert len(extracted) > 0
@@ -224,54 +206,43 @@ def test_llm_extraction(tester: Crawl4AiTester):
schema = {
"type": "object",
"properties": {
"asset_name": {
"model_name": {
"type": "string",
"description": "Name of the asset.",
"description": "Name of the OpenAI model.",
},
"price": {
"input_fee": {
"type": "string",
"description": "Price of the asset.",
"description": "Fee for input token for the OpenAI model.",
},
"change": {
"output_fee": {
"type": "string",
"description": "Change in price of the asset.",
"description": "Fee for output token for the OpenAI model.",
},
},
"required": ["asset_name", "price", "change"],
"required": ["model_name", "input_fee", "output_fee"],
}
request = {
"urls": ["https://www.coinbase.com/en-in/explore"],
"browser_config": {},
"crawler_config": {
"type": "CrawlerRunConfig",
"urls": ["https://openai.com/api/pricing"],
"priority": 8,
"extraction_config": {
"type": "llm",
"params": {
"extraction_strategy": {
"type": "LLMExtractionStrategy",
"params": {
"llm_config": {
"type": "LLMConfig",
"params": {
"provider": "gemini/gemini-2.5-flash",
"api_token": os.getenv("GEMINI_API_KEY")
}
},
"schema": schema,
"extraction_type": "schema",
"instruction": "From the crawled content tioned asset names along with their prices and change in price.",
}
},
"word_count_threshold": 1
}
}
"provider": "openai/gpt-4o-mini",
"api_token": os.getenv("OPENAI_API_KEY"),
"schema": schema,
"extraction_type": "schema",
"instruction": """From the crawled content, extract all mentioned model names along with their fees for input and output tokens.""",
},
},
"crawler_params": {"word_count_threshold": 1},
}
try:
result = tester.submit_and_wait(request)
extracted = json.loads(result["result"]["results"][0]["extracted_content"])
extracted = json.loads(result["result"]["extracted_content"])
print(f"Extracted {len(extracted)} model pricing entries")
if extracted:
print("Sample entry:", json.dumps(extracted[0], indent=2))
print("Sample entry:", json.dumps(extracted[0], indent=2))
assert result["result"]["success"]
except Exception as e:
print(f"LLM extraction test failed (might be due to missing API key): {str(e)}")
@@ -300,32 +271,23 @@ def test_llm_with_ollama(tester: Crawl4AiTester):
request = {
"urls": ["https://www.nbcnews.com/business"],
"browser_config": {"verbose": True},
"crawler_config": {
"type": "CrawlerRunConfig",
"priority": 8,
"extraction_config": {
"type": "llm",
"params": {
"extraction_strategy": {
"type": "LLMExtractionStrategy",
"params": {
"llm_config": {
"type": "LLMConfig",
"params": {
"provider": "ollama/llama3.2:latest",
}
},
"schema": schema,
"extraction_type": "schema",
"instruction": "Extract the main article information including title, summary, and main topics.",
}
},
"word_count_threshold": 1
}
}
"provider": "ollama/llama2",
"schema": schema,
"extraction_type": "schema",
"instruction": "Extract the main article information including title, summary, and main topics.",
},
},
"extra": {"word_count_threshold": 1},
"crawler_params": {"verbose": True},
}
try:
result = tester.submit_and_wait(request)
extracted = json.loads(result["result"]["results"][0]["extracted_content"])
extracted = json.loads(result["result"]["extracted_content"])
print("Extracted content:", json.dumps(extracted, indent=2))
assert result["result"]["success"]
except Exception as e:
@@ -336,29 +298,23 @@ def test_cosine_extraction(tester: Crawl4AiTester):
print("\n=== Testing Cosine Extraction ===")
request = {
"urls": ["https://www.nbcnews.com/business"],
"browser_config": {},
"crawler_config": {
"type": "CrawlerRunConfig",
"priority": 8,
"extraction_config": {
"type": "cosine",
"params": {
"extraction_strategy": {
"type": "CosineStrategy",
"params": {
"semantic_filter": "business finance economy",
"word_count_threshold": 10,
"max_dist": 0.2,
"top_k": 3,
}
}
}
}
"semantic_filter": "business finance economy",
"word_count_threshold": 10,
"max_dist": 0.2,
"top_k": 3,
},
},
}
try:
result = tester.submit_and_wait(request)
extracted = json.loads(result["result"]["results"][0]["extracted_content"])
extracted = json.loads(result["result"]["extracted_content"])
print(f"Extracted {len(extracted)} text clusters")
if extracted:
print("First cluster tags:", extracted[0]["tags"])
print("First cluster tags:", extracted[0]["tags"])
assert result["result"]["success"]
except Exception as e:
print(f"Cosine extraction test failed: {str(e)}")
@@ -368,24 +324,19 @@ def test_screenshot(tester: Crawl4AiTester):
print("\n=== Testing Screenshot ===")
request = {
"urls": ["https://www.nbcnews.com/business"],
"browser_config": {"headless": True},
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"screenshot": True
}
}
"priority": 5,
"screenshot": True,
"crawler_params": {"headless": True},
}
result = tester.submit_and_wait(request)
screenshot_data = result["result"]["results"][0]["screenshot"]
print("Screenshot captured:", bool(screenshot_data))
print("Screenshot captured:", bool(result["result"]["screenshot"]))
if screenshot_data:
if result["result"]["screenshot"]:
# Save screenshot
screenshot_bytes = base64.b64decode(screenshot_data)
screenshot_data = base64.b64decode(result["result"]["screenshot"])
with open("test_screenshot.jpg", "wb") as f:
f.write(screenshot_bytes)
f.write(screenshot_data)
print("Screenshot saved as test_screenshot.jpg")
assert result["result"]["success"]

View File

@@ -0,0 +1,117 @@
#!/usr/bin/env python3
"""
Simple test to verify BestFirstCrawlingStrategy fixes.
This test crawls a real website and shows that:
1. Higher-scoring pages are crawled first (priority queue fix)
2. Links are scored before truncation (link discovery fix)
"""
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
from crawl4ai.deep_crawling import BestFirstCrawlingStrategy
from crawl4ai.deep_crawling.scorers import KeywordRelevanceScorer
async def test_best_first_strategy():
"""Test BestFirstCrawlingStrategy with keyword scoring"""
print("=" * 70)
print("Testing BestFirstCrawlingStrategy with Real URL")
print("=" * 70)
print("\nThis test will:")
print("1. Crawl Python.org documentation")
print("2. Score pages based on keywords: 'tutorial', 'guide', 'reference'")
print("3. Show that higher-scoring pages are crawled first")
print("-" * 70)
# Create a keyword scorer that prioritizes tutorial/guide pages
scorer = KeywordRelevanceScorer(
keywords=["tutorial", "guide", "reference", "documentation"],
weight=1.0,
case_sensitive=False
)
# Create the strategy with scoring
strategy = BestFirstCrawlingStrategy(
max_depth=2, # Crawl 2 levels deep
max_pages=10, # Limit to 10 pages total
url_scorer=scorer, # Use keyword scoring
include_external=False # Only internal links
)
# Configure browser and crawler
browser_config = BrowserConfig(
headless=True, # Run in background
verbose=False # Reduce output noise
)
crawler_config = CrawlerRunConfig(
deep_crawl_strategy=strategy,
verbose=False
)
print("\nStarting crawl of https://docs.python.org/3/")
print("Looking for pages with keywords: tutorial, guide, reference, documentation")
print("-" * 70)
crawled_urls = []
async with AsyncWebCrawler(config=browser_config) as crawler:
# Crawl and collect results
results = await crawler.arun(
url="https://docs.python.org/3/",
config=crawler_config
)
# Process results
if isinstance(results, list):
for result in results:
score = result.metadata.get('score', 0) if result.metadata else 0
depth = result.metadata.get('depth', 0) if result.metadata else 0
crawled_urls.append({
'url': result.url,
'score': score,
'depth': depth,
'success': result.success
})
print("\n" + "=" * 70)
print("CRAWL RESULTS (in order of crawling)")
print("=" * 70)
for i, item in enumerate(crawled_urls, 1):
status = "" if item['success'] else ""
# Highlight high-scoring pages
if item['score'] > 0.5:
print(f"{i:2}. [{status}] Score: {item['score']:.2f} | Depth: {item['depth']} | {item['url']}")
print(f" ^ HIGH SCORE - Contains keywords!")
else:
print(f"{i:2}. [{status}] Score: {item['score']:.2f} | Depth: {item['depth']} | {item['url']}")
print("\n" + "=" * 70)
print("ANALYSIS")
print("=" * 70)
# Check if higher scores appear early in the crawl
scores = [item['score'] for item in crawled_urls[1:]] # Skip initial URL
high_score_indices = [i for i, s in enumerate(scores) if s > 0.3]
if high_score_indices and high_score_indices[0] < len(scores) / 2:
print("✅ SUCCESS: Higher-scoring pages (with keywords) were crawled early!")
print(" This confirms the priority queue fix is working.")
else:
print("⚠️ Check the crawl order above - higher scores should appear early")
# Show score distribution
print(f"\nScore Statistics:")
print(f" - Total pages crawled: {len(crawled_urls)}")
print(f" - Average score: {sum(item['score'] for item in crawled_urls) / len(crawled_urls):.2f}")
print(f" - Max score: {max(item['score'] for item in crawled_urls):.2f}")
print(f" - Pages with keywords: {sum(1 for item in crawled_urls if item['score'] > 0.3)}")
print("\n" + "=" * 70)
print("TEST COMPLETE")
print("=" * 70)
if __name__ == "__main__":
print("\n🔍 BestFirstCrawlingStrategy Simple Test\n")
asyncio.run(test_best_first_strategy())

View File

@@ -1,43 +0,0 @@
import asyncio
import os
from crawl4ai.async_webcrawler import AsyncWebCrawler
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig, CacheMode
# Simple concurrency test for persistent context page creation
# Usage: python scripts/test_persistent_context.py
URLS = [
# "https://example.com",
"https://httpbin.org/html",
"https://www.python.org/",
"https://www.rust-lang.org/",
]
async def main():
profile_dir = os.path.join(os.path.expanduser("~"), ".crawl4ai", "profiles", "test-persistent-profile")
os.makedirs(profile_dir, exist_ok=True)
browser_config = BrowserConfig(
browser_type="chromium",
headless=True,
use_persistent_context=True,
user_data_dir=profile_dir,
use_managed_browser=True,
verbose=True,
)
run_cfg = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
stream=False,
verbose=True,
)
async with AsyncWebCrawler(config=browser_config) as crawler:
results = await crawler.arun_many(URLS, config=run_cfg)
for r in results:
print(r.url, r.success, len(r.markdown.raw_markdown) if r.markdown else 0)
# r = await crawler.arun(url=URLS[0], config=run_cfg)
# print(r.url, r.success, len(r.markdown.raw_markdown) if r.markdown else 0)
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -1,55 +0,0 @@
import sys
import pytest
import asyncio
from unittest.mock import patch, MagicMock
from crawl4ai.browser_profiler import BrowserProfiler
@pytest.mark.asyncio
@pytest.mark.skipif(sys.platform != "win32", reason="Windows-specific msvcrt test")
async def test_keyboard_input_handling():
# Mock sequence of keystrokes: arrow key followed by 'q'
mock_keys = [b'\x00K', b'q']
mock_kbhit = MagicMock(side_effect=[True, True, False])
mock_getch = MagicMock(side_effect=mock_keys)
with patch('msvcrt.kbhit', mock_kbhit), patch('msvcrt.getch', mock_getch):
# profiler = BrowserProfiler()
user_done_event = asyncio.Event()
# Create a local async function to simulate the keyboard input handling
async def test_listen_for_quit_command():
if sys.platform == "win32":
while True:
try:
if mock_kbhit():
raw = mock_getch()
try:
key = raw.decode("utf-8")
except UnicodeDecodeError:
continue
if len(key) != 1 or not key.isprintable():
continue
if key.lower() == "q":
user_done_event.set()
return
await asyncio.sleep(0.1)
except Exception as e:
continue
# Run the listener
listener_task = asyncio.create_task(test_listen_for_quit_command())
# Wait for the event to be set
try:
await asyncio.wait_for(user_done_event.wait(), timeout=1.0)
assert user_done_event.is_set()
finally:
if not listener_task.done():
listener_task.cancel()
try:
await listener_task
except asyncio.CancelledError:
pass

View File

@@ -1,582 +0,0 @@
"""
Comprehensive test suite for ProxyConfig in different forms:
1. String form (ip:port:username:password)
2. Dict form (dictionary with keys)
3. Object form (ProxyConfig instance)
4. Environment variable form (from env vars)
Tests cover all possible scenarios and edge cases using pytest.
"""
import asyncio
import os
import pytest
import tempfile
from unittest.mock import patch
from crawl4ai import AsyncWebCrawler, BrowserConfig
from crawl4ai.async_configs import CrawlerRunConfig, ProxyConfig
from crawl4ai.cache_context import CacheMode
class TestProxyConfig:
"""Comprehensive test suite for ProxyConfig functionality."""
# Test data for different scenarios
# get free proxy server from webshare.io https://www.webshare.io/?referral_code=3sqog0y1fvsl
TEST_PROXY_DATA = {
"server": "",
"username": "",
"password": "",
"ip": ""
}
def setup_method(self):
"""Setup for each test method."""
self.test_url = "https://httpbin.org/ip" # Use httpbin for testing
# ==================== OBJECT FORM TESTS ====================
def test_proxy_config_object_creation_basic(self):
"""Test basic ProxyConfig object creation."""
proxy = ProxyConfig(server="127.0.0.1:8080")
assert proxy.server == "127.0.0.1:8080"
assert proxy.username is None
assert proxy.password is None
assert proxy.ip == "127.0.0.1" # Should auto-extract IP
def test_proxy_config_object_creation_full(self):
"""Test ProxyConfig object creation with all parameters."""
proxy = ProxyConfig(
server=f"http://{self.TEST_PROXY_DATA['server']}",
username=self.TEST_PROXY_DATA['username'],
password=self.TEST_PROXY_DATA['password'],
ip=self.TEST_PROXY_DATA['ip']
)
assert proxy.server == f"http://{self.TEST_PROXY_DATA['server']}"
assert proxy.username == self.TEST_PROXY_DATA['username']
assert proxy.password == self.TEST_PROXY_DATA['password']
assert proxy.ip == self.TEST_PROXY_DATA['ip']
def test_proxy_config_object_ip_extraction(self):
"""Test automatic IP extraction from server URL."""
test_cases = [
("http://192.168.1.1:8080", "192.168.1.1"),
("https://10.0.0.1:3128", "10.0.0.1"),
("192.168.1.100:8080", "192.168.1.100"),
("proxy.example.com:8080", "proxy.example.com"),
]
for server, expected_ip in test_cases:
proxy = ProxyConfig(server=server)
assert proxy.ip == expected_ip, f"Failed for server: {server}"
def test_proxy_config_object_invalid_server(self):
"""Test ProxyConfig with invalid server formats."""
# Should not raise exception but may not extract IP properly
proxy = ProxyConfig(server="invalid-format")
assert proxy.server == "invalid-format"
# IP extraction might fail but object should still be created
# ==================== DICT FORM TESTS ====================
def test_proxy_config_from_dict_basic(self):
"""Test creating ProxyConfig from basic dictionary."""
proxy_dict = {"server": "127.0.0.1:8080"}
proxy = ProxyConfig.from_dict(proxy_dict)
assert proxy.server == "127.0.0.1:8080"
assert proxy.username is None
assert proxy.password is None
def test_proxy_config_from_dict_full(self):
"""Test creating ProxyConfig from complete dictionary."""
proxy_dict = {
"server": f"http://{self.TEST_PROXY_DATA['server']}",
"username": self.TEST_PROXY_DATA['username'],
"password": self.TEST_PROXY_DATA['password'],
"ip": self.TEST_PROXY_DATA['ip']
}
proxy = ProxyConfig.from_dict(proxy_dict)
assert proxy.server == proxy_dict["server"]
assert proxy.username == proxy_dict["username"]
assert proxy.password == proxy_dict["password"]
assert proxy.ip == proxy_dict["ip"]
def test_proxy_config_from_dict_missing_keys(self):
"""Test creating ProxyConfig from dictionary with missing keys."""
proxy_dict = {"server": "127.0.0.1:8080", "username": "user"}
proxy = ProxyConfig.from_dict(proxy_dict)
assert proxy.server == "127.0.0.1:8080"
assert proxy.username == "user"
assert proxy.password is None
assert proxy.ip == "127.0.0.1" # Should auto-extract
def test_proxy_config_from_dict_empty(self):
"""Test creating ProxyConfig from empty dictionary."""
proxy_dict = {}
proxy = ProxyConfig.from_dict(proxy_dict)
assert proxy.server is None
assert proxy.username is None
assert proxy.password is None
assert proxy.ip is None
def test_proxy_config_from_dict_none_values(self):
"""Test creating ProxyConfig from dictionary with None values."""
proxy_dict = {
"server": "127.0.0.1:8080",
"username": None,
"password": None,
"ip": None
}
proxy = ProxyConfig.from_dict(proxy_dict)
assert proxy.server == "127.0.0.1:8080"
assert proxy.username is None
assert proxy.password is None
assert proxy.ip == "127.0.0.1" # Should auto-extract despite None
# ==================== STRING FORM TESTS ====================
def test_proxy_config_from_string_full_format(self):
"""Test creating ProxyConfig from full string format (ip:port:username:password)."""
proxy_str = f"{self.TEST_PROXY_DATA['ip']}:6114:{self.TEST_PROXY_DATA['username']}:{self.TEST_PROXY_DATA['password']}"
proxy = ProxyConfig.from_string(proxy_str)
assert proxy.server == f"http://{self.TEST_PROXY_DATA['ip']}:6114"
assert proxy.username == self.TEST_PROXY_DATA['username']
assert proxy.password == self.TEST_PROXY_DATA['password']
assert proxy.ip == self.TEST_PROXY_DATA['ip']
def test_proxy_config_from_string_ip_port_only(self):
"""Test creating ProxyConfig from string with only ip:port."""
proxy_str = "192.168.1.1:8080"
proxy = ProxyConfig.from_string(proxy_str)
assert proxy.server == "http://192.168.1.1:8080"
assert proxy.username is None
assert proxy.password is None
assert proxy.ip == "192.168.1.1"
def test_proxy_config_from_string_invalid_format(self):
"""Test creating ProxyConfig from invalid string formats."""
invalid_formats = [
"invalid",
"ip:port:user", # Missing password (3 parts)
"ip:port:user:pass:extra", # Too many parts (5 parts)
"",
"::", # Empty parts but 3 total (invalid)
"::::", # Empty parts but 5 total (invalid)
]
for proxy_str in invalid_formats:
with pytest.raises(ValueError, match="Invalid proxy string format"):
ProxyConfig.from_string(proxy_str)
def test_proxy_config_from_string_edge_cases_that_work(self):
"""Test string formats that should work but might be edge cases."""
# These cases actually work as valid formats
edge_cases = [
(":", "http://:", ""), # ip:port format with empty values
(":::", "http://:", ""), # ip:port:user:pass format with empty values
]
for proxy_str, expected_server, expected_ip in edge_cases:
proxy = ProxyConfig.from_string(proxy_str)
assert proxy.server == expected_server
assert proxy.ip == expected_ip
def test_proxy_config_from_string_edge_cases(self):
"""Test string parsing edge cases."""
# Test with different port numbers
proxy_str = "10.0.0.1:3128:user:pass"
proxy = ProxyConfig.from_string(proxy_str)
assert proxy.server == "http://10.0.0.1:3128"
# Test with special characters in credentials
proxy_str = "10.0.0.1:8080:user@domain:pass:word"
with pytest.raises(ValueError): # Should fail due to extra colon in password
ProxyConfig.from_string(proxy_str)
# ==================== ENVIRONMENT VARIABLE TESTS ====================
def test_proxy_config_from_env_single_proxy(self):
"""Test loading single proxy from environment variable."""
proxy_str = f"{self.TEST_PROXY_DATA['ip']}:6114:{self.TEST_PROXY_DATA['username']}:{self.TEST_PROXY_DATA['password']}"
with patch.dict(os.environ, {'TEST_PROXIES': proxy_str}):
proxies = ProxyConfig.from_env('TEST_PROXIES')
assert len(proxies) == 1
proxy = proxies[0]
assert proxy.ip == self.TEST_PROXY_DATA['ip']
assert proxy.username == self.TEST_PROXY_DATA['username']
assert proxy.password == self.TEST_PROXY_DATA['password']
def test_proxy_config_from_env_multiple_proxies(self):
"""Test loading multiple proxies from environment variable."""
proxy_list = [
"192.168.1.1:8080:user1:pass1",
"192.168.1.2:8080:user2:pass2",
"10.0.0.1:3128" # No auth
]
proxy_str = ",".join(proxy_list)
with patch.dict(os.environ, {'TEST_PROXIES': proxy_str}):
proxies = ProxyConfig.from_env('TEST_PROXIES')
assert len(proxies) == 3
# Check first proxy
assert proxies[0].ip == "192.168.1.1"
assert proxies[0].username == "user1"
assert proxies[0].password == "pass1"
# Check second proxy
assert proxies[1].ip == "192.168.1.2"
assert proxies[1].username == "user2"
assert proxies[1].password == "pass2"
# Check third proxy (no auth)
assert proxies[2].ip == "10.0.0.1"
assert proxies[2].username is None
assert proxies[2].password is None
def test_proxy_config_from_env_empty_var(self):
"""Test loading from empty environment variable."""
with patch.dict(os.environ, {'TEST_PROXIES': ''}):
proxies = ProxyConfig.from_env('TEST_PROXIES')
assert len(proxies) == 0
def test_proxy_config_from_env_missing_var(self):
"""Test loading from missing environment variable."""
# Ensure the env var doesn't exist
with patch.dict(os.environ, {}, clear=True):
proxies = ProxyConfig.from_env('NON_EXISTENT_VAR')
assert len(proxies) == 0
def test_proxy_config_from_env_with_empty_entries(self):
"""Test loading proxies with empty entries in the list."""
proxy_str = "192.168.1.1:8080:user:pass,,10.0.0.1:3128,"
with patch.dict(os.environ, {'TEST_PROXIES': proxy_str}):
proxies = ProxyConfig.from_env('TEST_PROXIES')
assert len(proxies) == 2 # Empty entries should be skipped
assert proxies[0].ip == "192.168.1.1"
assert proxies[1].ip == "10.0.0.1"
def test_proxy_config_from_env_with_invalid_entries(self):
"""Test loading proxies with some invalid entries."""
proxy_str = "192.168.1.1:8080:user:pass,invalid_proxy,10.0.0.1:3128"
with patch.dict(os.environ, {'TEST_PROXIES': proxy_str}):
# Should handle errors gracefully and return valid proxies
proxies = ProxyConfig.from_env('TEST_PROXIES')
# Depending on implementation, might return partial list or empty
# This tests error handling
assert isinstance(proxies, list)
# ==================== SERIALIZATION TESTS ====================
def test_proxy_config_to_dict(self):
"""Test converting ProxyConfig to dictionary."""
proxy = ProxyConfig(
server=f"http://{self.TEST_PROXY_DATA['server']}",
username=self.TEST_PROXY_DATA['username'],
password=self.TEST_PROXY_DATA['password'],
ip=self.TEST_PROXY_DATA['ip']
)
result_dict = proxy.to_dict()
expected = {
"server": f"http://{self.TEST_PROXY_DATA['server']}",
"username": self.TEST_PROXY_DATA['username'],
"password": self.TEST_PROXY_DATA['password'],
"ip": self.TEST_PROXY_DATA['ip']
}
assert result_dict == expected
def test_proxy_config_clone(self):
"""Test cloning ProxyConfig with modifications."""
original = ProxyConfig(
server="http://127.0.0.1:8080",
username="user",
password="pass"
)
# Clone with modifications
cloned = original.clone(username="new_user", password="new_pass")
# Original should be unchanged
assert original.username == "user"
assert original.password == "pass"
# Clone should have new values
assert cloned.username == "new_user"
assert cloned.password == "new_pass"
assert cloned.server == original.server # Unchanged value
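# Hedged sketch of the clone() semantics the test above relies on: copy
# every field, then apply keyword overrides, leaving the original untouched.
# Illustrative, not the shipped implementation.
def _reference_clone(config, **overrides):
    data = config.to_dict()
    data.update(overrides)
    return ProxyConfig.from_dict(data)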
def test_proxy_config_roundtrip_serialization(self):
"""Test that ProxyConfig can be serialized and deserialized without loss."""
original = ProxyConfig(
server=f"http://{self.TEST_PROXY_DATA['server']}",
username=self.TEST_PROXY_DATA['username'],
password=self.TEST_PROXY_DATA['password'],
ip=self.TEST_PROXY_DATA['ip']
)
# Serialize to dict and back
serialized = original.to_dict()
deserialized = ProxyConfig.from_dict(serialized)
assert deserialized.server == original.server
assert deserialized.username == original.username
assert deserialized.password == original.password
assert deserialized.ip == original.ip
# ==================== INTEGRATION TESTS ====================
@pytest.mark.asyncio
async def test_crawler_with_proxy_config_object(self):
"""Test AsyncWebCrawler with ProxyConfig object."""
proxy_config = ProxyConfig(
server=f"http://{self.TEST_PROXY_DATA['server']}",
username=self.TEST_PROXY_DATA['username'],
password=self.TEST_PROXY_DATA['password']
)
browser_config = BrowserConfig(headless=True)
# Test that the crawler accepts the ProxyConfig object without errors
async with AsyncWebCrawler(config=browser_config) as crawler:
try:
# Note: This might fail due to actual proxy connection, but should not fail due to config issues
result = await crawler.arun(
url=self.test_url,
config=CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
proxy_config=proxy_config,
page_timeout=10000 # Short timeout for testing
)
)
# If we get here, proxy config was accepted
assert result is not None
except Exception as e:
# We expect connection errors with test proxies, but not config errors
error_msg = str(e).lower()
assert "attribute" not in error_msg, f"Config error: {e}"
assert "proxy_config" not in error_msg, f"Proxy config error: {e}"
@pytest.mark.asyncio
async def test_crawler_with_proxy_config_dict(self):
"""Test AsyncWebCrawler with ProxyConfig from dictionary."""
proxy_dict = {
"server": f"http://{self.TEST_PROXY_DATA['server']}",
"username": self.TEST_PROXY_DATA['username'],
"password": self.TEST_PROXY_DATA['password']
}
proxy_config = ProxyConfig.from_dict(proxy_dict)
browser_config = BrowserConfig(headless=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
try:
result = await crawler.arun(
url=self.test_url,
config=CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
proxy_config=proxy_config,
page_timeout=10000
)
)
assert result is not None
except Exception as e:
error_msg = str(e).lower()
assert "attribute" not in error_msg, f"Config error: {e}"
@pytest.mark.asyncio
async def test_crawler_with_proxy_config_from_string(self):
"""Test AsyncWebCrawler with ProxyConfig from string."""
proxy_str = f"{self.TEST_PROXY_DATA['ip']}:6114:{self.TEST_PROXY_DATA['username']}:{self.TEST_PROXY_DATA['password']}"
proxy_config = ProxyConfig.from_string(proxy_str)
browser_config = BrowserConfig(headless=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
try:
result = await crawler.arun(
url=self.test_url,
config=CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
proxy_config=proxy_config,
page_timeout=10000
)
)
assert result is not None
except Exception as e:
error_msg = str(e).lower()
assert "attribute" not in error_msg, f"Config error: {e}"
# ==================== EDGE CASES AND ERROR HANDLING ====================
def test_proxy_config_with_none_server(self):
"""Test ProxyConfig behavior with None server."""
proxy = ProxyConfig(server=None)
assert proxy.server is None
assert proxy.ip is None # Should not crash
def test_proxy_config_with_empty_string_server(self):
"""Test ProxyConfig behavior with empty string server."""
proxy = ProxyConfig(server="")
assert proxy.server == ""
assert proxy.ip is None or proxy.ip == ""
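# Hedged sketch of how ip is presumably derived when it is not passed
# explicitly: strip any scheme from server, then take the host portion.
# None and empty servers must not crash (the two tests above). The helper
# name is hypothetical and this is illustrative only.
def _reference_extract_ip(server):
    if not server:
        return None
    host = server.split("://")[-1].split(":")[0]
    return host or None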
def test_proxy_config_special_characters_in_credentials(self):
"""Test ProxyConfig with special characters in username/password."""
special_chars_tests = [
("user@domain.com", "pass!@#$%"),
("user_123", "p@ssw0rd"),
("user-test", "pass-word"),
]
for username, password in special_chars_tests:
proxy = ProxyConfig(
server="http://127.0.0.1:8080",
username=username,
password=password
)
assert proxy.username == username
assert proxy.password == password
def test_proxy_config_unicode_handling(self):
"""Test ProxyConfig with unicode characters."""
proxy = ProxyConfig(
server="http://127.0.0.1:8080",
username="ユーザー", # Japanese characters
password="пароль" # Cyrillic characters
)
assert proxy.username == "ユーザー"
assert proxy.password == "пароль"
# ==================== PERFORMANCE TESTS ====================
def test_proxy_config_creation_performance(self):
"""Test that ProxyConfig creation is reasonably fast."""
import time
start_time = time.time()
for i in range(1000):
proxy = ProxyConfig(
server=f"http://192.168.1.{i % 255}:8080",
username=f"user{i}",
password=f"pass{i}"
)
end_time = time.time()
# Should be able to create 1000 configs in less than 1 second
assert (end_time - start_time) < 1.0
def test_proxy_config_from_env_performance(self):
"""Test that loading many proxies from env is reasonably fast."""
import time
# Create a large list of proxy strings
proxy_list = [f"192.168.1.{i}:8080:user{i}:pass{i}" for i in range(100)]
proxy_str = ",".join(proxy_list)
with patch.dict(os.environ, {'PERF_TEST_PROXIES': proxy_str}):
start_time = time.time()
proxies = ProxyConfig.from_env('PERF_TEST_PROXIES')
end_time = time.time()
assert len(proxies) == 100
# Should be able to parse 100 proxies in less than 1 second
assert (end_time - start_time) < 1.0
# ==================== STANDALONE TEST FUNCTIONS ====================
@pytest.mark.asyncio
async def test_dict_proxy():
"""Original test function for dict proxy - kept for backward compatibility."""
proxy_config = {
"server": "23.95.150.145:6114",
"username": "cfyswbwn",
"password": "1gs266hoqysi"
}
proxy_config_obj = ProxyConfig.from_dict(proxy_config)
browser_config = BrowserConfig(headless=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
try:
result = await crawler.arun(url="https://httpbin.org/ip", config=CrawlerRunConfig(
stream=False,
cache_mode=CacheMode.BYPASS,
proxy_config=proxy_config_obj,
page_timeout=10000
))
print("Dict proxy test passed!")
print(result.markdown[:200] if result and result.markdown else "No result")
except Exception as e:
print(f"Dict proxy test error (expected): {e}")
@pytest.mark.asyncio
async def test_string_proxy():
"""Test function for string proxy format."""
proxy_str = "23.95.150.145:6114:cfyswbwn:1gs266hoqysi"
proxy_config_obj = ProxyConfig.from_string(proxy_str)
browser_config = BrowserConfig(headless=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
try:
result = await crawler.arun(url="https://httpbin.org/ip", config=CrawlerRunConfig(
stream=False,
cache_mode=CacheMode.BYPASS,
proxy_config=proxy_config_obj,
page_timeout=10000
))
print("String proxy test passed!")
print(result.markdown[:200] if result and result.markdown else "No result")
except Exception as e:
print(f"String proxy test error (expected): {e}")
@pytest.mark.asyncio
async def test_env_proxy():
"""Test function for environment variable proxy."""
# Set the environment variable directly (note: this mutates os.environ for
# the whole process, which is acceptable for a standalone __main__ run)
os.environ['TEST_PROXIES'] = "23.95.150.145:6114:cfyswbwn:1gs266hoqysi"
proxies = ProxyConfig.from_env('TEST_PROXIES')
if proxies:
proxy_config_obj = proxies[0] # Use first proxy
browser_config = BrowserConfig(headless=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
try:
result = await crawler.arun(url="https://httpbin.org/ip", config=CrawlerRunConfig(
stream=False,
cache_mode=CacheMode.BYPASS,
proxy_config=proxy_config_obj,
page_timeout=10000
))
print("Environment proxy test passed!")
print(result.markdown[:200] if result and result.markdown else "No result")
except Exception as e:
print(f"Environment proxy test error (expected): {e}")
else:
print("No proxies loaded from environment")
if __name__ == "__main__":
print("Running comprehensive ProxyConfig tests...")
print("=" * 50)
# Run the standalone test functions
print("\n1. Testing dict proxy format...")
asyncio.run(test_dict_proxy())
print("\n2. Testing string proxy format...")
asyncio.run(test_string_proxy())
print("\n3. Testing environment variable proxy format...")
asyncio.run(test_env_proxy())
print("\n" + "=" * 50)
print("To run the full pytest suite, use: pytest " + __file__)
print("=" * 50)

File diff suppressed because it is too large