Release v0.8.0: Crash Recovery, Prefetch Mode & Security Fixes (#1712)
* Fix: use correct URL variable for raw HTML extraction (#1116)
  - Prevents full HTML content from being passed as the URL to extraction strategies
  - Added unit tests to verify raw HTML and regular URL processing
* Fix #1181: preserve whitespace in code blocks during HTML scraping
  - The remove_empty_elements_fast() method was removing whitespace-only span elements inside <pre> and <code> tags, causing import statements like "import torch" to become "importtorch". Now skips elements inside code blocks where whitespace is significant.
* Refactor Pydantic model configuration to use ConfigDict for arbitrary types
* Fix EmbeddingStrategy: uncomment response handling for the variations and clean up mock data (ref #1621)
* Fix permission issues with .cache/url_seeder and other runtime cache dirs (ref #1638)
* Fix: ensure BrowserConfig.to_dict serializes proxy_config
* Feat: make LLM backoff configurable end-to-end
  - Extend LLMConfig with backoff delay/attempt/factor fields and thread them through LLMExtractionStrategy, LLMContentFilter, table extraction, and the Docker API handlers
  - Expose the backoff knobs on perform_completion_with_backoff/aperform_completion_with_backoff and document them in the md_v2 guides
* Reproduced the AttributeError from #1642
* Pass the timeout parameter to the Docker client request
* Added missing deep crawling objects to __init__
* Generalized the query in ContentRelevanceFilter to accept a str or a list
* Import modules from enhanceable deserialization
* Parameterized tests
* Fix: capture the current page URL to reflect JavaScript navigation, and add a test for delayed redirects (ref #1268)
* Refactor: replace PyPDF2 with pypdf across the codebase (ref #1412)
* Add browser_context_id and target_id parameters to BrowserConfig
  - Enables Crawl4AI to connect to pre-created CDP browser contexts, which is essential for cloud browser services that pre-create isolated contexts
  - Updates from_kwargs() and to_dict(), modifies BrowserManager.start() to use an existing context when provided, adds a _get_page_by_target_id() helper, updates get_page() to handle pre-existing targets, and adds a test for browser_context_id
  - Cloud services can now create isolated CDP contexts before Crawl4AI connects, pass the context/target IDs to BrowserConfig, and have Crawl4AI reuse existing contexts instead of creating new ones
* Add cdp_cleanup_on_close flag to prevent memory leaks in cloud/server scenarios
* Fix: add cdp_cleanup_on_close to from_kwargs
* Fix: find context by target_id for concurrent CDP connections
* Fix: use target_id to find the correct page in get_page
* Fix: use CDP to find the context by browserContextId for concurrent sessions
* Revert the context-matching attempts - Playwright cannot see CDP-created contexts
* Add create_isolated_context flag for concurrent CDP crawls
  - When True, forces creation of a new browser context instead of reusing the default context - essential for concurrent crawls on the same browser to prevent navigation conflicts
* Add context caching to the create_isolated_context branch
  - Uses the contexts_by_config cache (same as non-CDP mode) to reuse contexts for multiple URLs with the same config, while still creating a new page per crawl for navigation isolation; benefits batch/deep crawls
* Add init_scripts support to BrowserConfig for pre-page-load JS injection
  - Injects JavaScript that runs before any page loads - useful for stealth evasions (canvas/audio fingerprinting, userAgentData)
  - Adds an init_scripts parameter (list of JS strings), applies it in setup_context() via context.add_init_script(), and updates from_kwargs()/to_dict() for serialization
* Fix CDP connection handling: support WS URLs and proper cleanup (browser_manager.py)
  - _verify_cdp_ready() now supports multiple URL formats: WebSocket URLs (ws://, wss://) skip HTTP verification since Playwright handles them directly, and HTTP URLs with query params are parsed with urlparse to preserve the query string. This fixes the naive f"{cdp_url}/json/version" that broke WS URLs and query params.
  - close() now cleans up properly when cdp_cleanup_on_close=True: closes all sessions (pages) and contexts, calls browser.close() to release the connection (it does not terminate the browser), waits 1 second for the CDP connection to fully release, and stops the Playwright instance to prevent memory leaks
  - This enables connecting to specific browsers via WS URL and reusing the same browser across sequential connections with no user-side wait (the internal 1s delay handles it). Added tests/browser/test_cdp_cleanup_reuse.py with comprehensive tests.
* Update gitignore
* Some debugging for caching
* Add _generate_screenshot_from_html for raw: and file:// URLs
  - Implements the missing method that was being called but never defined. raw: and file:// URLs can now generate screenshots by loading HTML into a browser page via page.set_content(), taking the screenshot with the existing take_screenshot() method, and cleaning up the page afterward. This lets cached HTML be rendered with screenshots in crawl4ai-cloud.
* Add PDF and MHTML support for raw: and file:// URLs
  - Replaces _generate_screenshot_from_html with _generate_media_from_html, which handles screenshot, PDF, and MHTML in one browser session; updates the raw: and file:// URL handlers so cached HTML can generate all media types
* Add crash recovery for deep crawl strategies
  - Adds optional resume_state and on_state_change parameters to all deep crawl strategies (BFS, DFS, Best-First) for cloud-deployment crash recovery: resume_state resumes from a saved checkpoint, on_state_change is an async callback fired after each URL for real-time state persistence to external storage (Redis, DB, etc.), and export_state() returns the last captured state. Zero overhead when the features are disabled (None defaults).
  - State includes visited URLs, the pending queue/stack, depths, and the pages_crawled count; all state is JSON-serializable
* Fix: HTTP strategy raw: URL parsing truncated at the # character
  - AsyncHTTPCrawlerStrategy.crawl() used urlparse() to extract content from raw: URLs, so HTML with CSS color codes like #eee was truncated because # is treated as a URL fragment delimiter. Before: raw:body{background:#eee} -> parsed.path = 'body{background:'. After: raw:body{background:#eee} -> raw_content = 'body{background:#eee'. The fix strips the raw: or raw:// prefix directly instead of using urlparse, matching how the browser strategy handles it.
* Add base_url parameter to CrawlerRunConfig for raw HTML processing
  - When processing raw: HTML (e.g., from cache), the URL parameter is meaningless for markdown link resolution; base_url can be set explicitly to provide proper URL-resolution context
  - Adds base_url to CrawlerRunConfig.__init__ and from_kwargs, and updates aprocess_html to use it for markdown generation
  - Usage: config = CrawlerRunConfig(base_url='https://example.com'); result = await crawler.arun(url='raw:{html}', config=config)
* Add prefetch mode for two-phase deep crawling
  - Adds a `prefetch` parameter to CrawlerRunConfig, a quick_extract_links() function for fast link extraction, a short-circuit in aprocess_html() for prefetch mode, and 42 tests (unit, integration, regression)
  - 🤖 Generated with [Claude Code](https://claude.com/claude-code), Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* Updates on proxy rotation and proxy configuration
* Add proxy support to the HTTP crawler strategy
* Add browser pipeline support for raw:/file:// URLs (fixes #310)
  - Adds a process_in_browser parameter to CrawlerRunConfig; routes raw:/file:// URLs through _crawl_web() when browser operations are needed; uses page.set_content() instead of goto() for local content; fixes cookie handling for non-HTTP URLs in browser_manager; auto-detects browser requirements (js_code, wait_for, screenshot, etc.); and keeps the fast path for raw:/file:// without browser params
* Add smart TTL cache for the sitemap URL seeder
  - Adds cache_ttl_hours and validate_sitemap_lastmod params to SeedingConfig; introduces a new JSON cache format with metadata (version, created_at, lastmod, url_count); validates the cache by TTL expiry and sitemap lastmod comparison; auto-migrates from the old .jsonl to the new .json format; and fixes a bug where an incomplete cache was used indefinitely
* Update the URL seeder docs with the smart TTL cache parameters
  - Adds cache_ttl_hours and validate_sitemap_lastmod to the parameter table, documents smart TTL cache validation with examples, adds cache-related troubleshooting entries, and updates the key-features summary
* Add MEMORY.md to gitignore
* Docs: add a multi-sample schema generation section
  - Explains how to pass multiple HTML samples to generate_schema() for stable selectors that work across pages with varying DOM structures: the problem (fragile nth-child selectors), a solution with a code example, key points for multi-sample queries, and a comparison table of fragile vs. stable selectors
* Fix critical RCE and LFI vulnerabilities in the Docker API deployment
  Security fixes for vulnerabilities reported by ProjectDiscovery:
  1. Remote Code Execution via hooks (CVE pending): remove __import__ from allowed_builtins in hook_manager.py to prevent arbitrary module imports (os, subprocess, etc.); hooks are now disabled by default via the CRAWL4AI_HOOKS_ENABLED env var
  2. Local File Inclusion via file:// URLs (CVE pending): add URL scheme validation to /execute_js, /screenshot, /pdf, and /html; block file://, javascript:, data:, and other dangerous schemes; allow only http://, https://, and raw: (where appropriate)
  3. Security hardening: default CRAWL4AI_HOOKS_ENABLED=false (opt-in for hooks), security warning comments in config.yml, and a validate_url_scheme() helper for consistent validation
  Testing: unit tests (test_security_fixes.py, 16 tests) and integration tests (run_security_tests.py) against a live server
  Affected endpoints: POST /crawl and /crawl/stream (hooks disabled by default); POST /execute_js, /screenshot, /pdf, and /html (URL validation added)
  Breaking changes: hooks require CRAWL4AI_HOOKS_ENABLED=true to function, and file:// URLs no longer work on API endpoints (use the library directly)
* Enhance the authentication flow by implementing JWT token retrieval and adding authorization headers to API requests
* Add release notes for v0.7.9, detailing breaking changes, security fixes, new features, bug fixes, and documentation updates
* Add release notes for v0.8.0, detailing breaking changes, security fixes, new features, bug fixes, and documentation updates
  - Documentation for the v0.8.0 release: SECURITY.md (security policy and vulnerability-reporting guidelines), RELEASE_NOTES_v0.8.0.md (comprehensive release notes), migration/v0.8.0-upgrade-guide.md (step-by-step migration guide), security/GHSA-DRAFT-RCE-LFI.md (GitHub security advisory drafts), and an updated CHANGELOG.md
  - Breaking changes documented: Docker API hooks disabled by default (CRAWL4AI_HOOKS_ENABLED), and file:// URLs blocked on Docker API endpoints
  - Security fixes credited to Neo by ProjectDiscovery
* Add documentation examples for deep crawl crash recovery and prefetch mode
* Release v0.8.0: updated the version to 0.8.0, added a comprehensive demo and release notes, and updated all documentation
* Update the security researcher acknowledgment with a hyperlink for Neo by ProjectDiscovery
* Add an async agenerate_schema method for schema generation
  - Extracts prompt building into a shared _build_schema_prompt() method, adds agenerate_schema() using aperform_completion_with_backoff, and refactors generate_schema() to use the shared prompt builder; fixes Gemini/Vertex AI compatibility in async contexts (FastAPI)
* Fix: enable litellm.drop_params for O-series/GPT-5 model compatibility
  - O-series (o1, o3) and GPT-5 models only support temperature=1; setting litellm.drop_params=True auto-drops unsupported parameters instead of throwing UnsupportedParamsError. Fixes the temperature=0.01 error for these models in LLM extraction.

---------

Co-authored-by: rbushria <rbushri@gmail.com>
Co-authored-by: AHMET YILMAZ <tawfik@kidocode.com>
Co-authored-by: Soham Kukreti <kukretisoham@gmail.com>
Co-authored-by: Chris Murphy <chris.murphy@klaviyo.com>
Co-authored-by: unclecode <unclecode@kidocode.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
@@ -4,11 +4,13 @@ One of Crawl4AI's most powerful features is its ability to perform **configurabl
In this tutorial, you'll learn:

1. How to set up a **Basic Deep Crawler** with BFS strategy
2. Understanding the difference between **streamed and non-streamed** output
3. Implementing **filters and scorers** to target specific content
4. Creating **advanced filtering chains** for sophisticated crawls
5. Using **BestFirstCrawling** for intelligent exploration prioritization
6. **Crash recovery** for long-running production crawls
7. **Prefetch mode** for fast URL discovery

> **Prerequisites**
> - You’ve completed or read [AsyncWebCrawler Basics](../core/simple-crawling.md) to understand how to run a simple crawl.
@@ -485,7 +487,249 @@ This is especially useful for security-conscious crawling or when dealing with s

---

## 10. Crash Recovery for Long-Running Crawls

For production deployments, especially in cloud environments where instances can be terminated unexpectedly, Crawl4AI provides built-in crash recovery support for all deep crawl strategies.

### 10.1 Enabling State Persistence

All deep crawl strategies (BFS, DFS, Best-First) support two optional parameters:

- **`resume_state`**: Pass a previously saved state to resume from a checkpoint
- **`on_state_change`**: Async callback fired after each URL is processed

```python
from crawl4ai.deep_crawling import BFSDeepCrawlStrategy
import json

# Callback to save state after each URL
# (assumes an async Redis client named `redis` is already in scope)
async def save_state_to_redis(state: dict):
    await redis.set("crawl_state", json.dumps(state))

strategy = BFSDeepCrawlStrategy(
    max_depth=3,
    on_state_change=save_state_to_redis,  # Called after each URL
)
```

### 10.2 State Structure

The state dictionary is JSON-serializable and contains:

```python
{
    "strategy_type": "bfs",                            # or "dfs", "best_first"
    "visited": ["url1", "url2", ...],                  # Already crawled URLs
    "pending": [{"url": "...", "parent_url": "..."}],  # Queue/stack
    "depths": {"url1": 0, "url2": 1},                  # Depth tracking
    "pages_crawled": 42                                # Counter
}
```
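
Because the state is plain JSON, any storage backend works. As a minimal sketch, here is file-based checkpointing; the `CHECKPOINT_PATH` name and helper functions are ours for illustration, not part of the crawl4ai API:

```python
import json
from pathlib import Path

CHECKPOINT_PATH = Path("crawl_checkpoint.json")  # hypothetical local checkpoint file

async def save_state_to_disk(state: dict):
    # Write to a temp file first, then replace, so a crash mid-write
    # never leaves a half-written checkpoint behind
    tmp = CHECKPOINT_PATH.with_suffix(".tmp")
    tmp.write_text(json.dumps(state))
    tmp.replace(CHECKPOINT_PATH)

def load_state_from_disk() -> dict | None:
    if CHECKPOINT_PATH.exists():
        return json.loads(CHECKPOINT_PATH.read_text())
    return None
```

Pass `save_state_to_disk` as `on_state_change`, and feed `load_state_from_disk()` into `resume_state` on restart, exactly as in the Redis-based examples below.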

### 10.3 Resuming from a Checkpoint

```python
import json

from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.deep_crawling import BFSDeepCrawlStrategy

# Load saved state (e.g., from Redis, database, or file)
saved_state = json.loads(await redis.get("crawl_state"))

# Resume crawling from where we left off
strategy = BFSDeepCrawlStrategy(
    max_depth=3,
    resume_state=saved_state,             # Continue from checkpoint
    on_state_change=save_state_to_redis,  # Keep saving progress
)

config = CrawlerRunConfig(deep_crawl_strategy=strategy)

async with AsyncWebCrawler() as crawler:
    # Will skip already-visited URLs and continue from pending queue
    results = await crawler.arun(start_url, config=config)
```

### 10.4 Manual State Export

You can export the last captured state using `export_state()`. Note that this requires `on_state_change` to be set, since the state is captured in the callback:

```python
import json

captured_state = None

async def capture_state(state: dict):
    global captured_state
    captured_state = state

strategy = BFSDeepCrawlStrategy(
    max_depth=2,
    on_state_change=capture_state,  # Required for state capture
)
config = CrawlerRunConfig(deep_crawl_strategy=strategy)

async with AsyncWebCrawler() as crawler:
    results = await crawler.arun(start_url, config=config)

# Get the last captured state
state = strategy.export_state()
if state:
    # Save to your preferred storage
    with open("crawl_checkpoint.json", "w") as f:
        json.dump(state, f)
```

### 10.5 Complete Example: Redis-Based Recovery

```python
import asyncio
import json
import redis.asyncio as redis

from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.deep_crawling import BFSDeepCrawlStrategy

REDIS_KEY = "crawl4ai:crawl_state"

async def main():
    redis_client = redis.Redis(host='localhost', port=6379, db=0)

    # Check for existing state
    saved_state = None
    existing = await redis_client.get(REDIS_KEY)
    if existing:
        saved_state = json.loads(existing)
        print(f"Resuming from checkpoint: {saved_state['pages_crawled']} pages already crawled")

    # State persistence callback
    async def persist_state(state: dict):
        await redis_client.set(REDIS_KEY, json.dumps(state))

    # Create strategy with recovery support
    strategy = BFSDeepCrawlStrategy(
        max_depth=3,
        max_pages=100,
        resume_state=saved_state,
        on_state_change=persist_state,
    )

    config = CrawlerRunConfig(deep_crawl_strategy=strategy, stream=True)

    try:
        async with AsyncWebCrawler() as crawler:
            async for result in await crawler.arun("https://example.com", config=config):
                print(f"Crawled: {result.url}")
    except Exception as e:
        print(f"Crawl interrupted: {e}")
        print("State saved - restart to resume")
    finally:
        await redis_client.close()

if __name__ == "__main__":
    asyncio.run(main())
```

### 10.6 Zero Overhead

When `resume_state=None` and `on_state_change=None` (the defaults), there is no performance impact. State tracking only activates when you enable these features.

---

## 11. Prefetch Mode for Fast URL Discovery

When you need to quickly discover URLs without full page processing, use **prefetch mode**. This is ideal for two-phase crawling where you first map the site, then selectively process specific pages.

### 11.1 Enabling Prefetch Mode

```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

config = CrawlerRunConfig(prefetch=True)

async with AsyncWebCrawler() as crawler:
    result = await crawler.arun("https://example.com", config=config)

    # Result contains only HTML and links - no markdown, no extraction
    print(f"Found {len(result.links['internal'])} internal links")
    print(f"Found {len(result.links['external'])} external links")
```

### 11.2 What Gets Skipped

Prefetch mode uses a fast path that bypasses heavy processing:

| Processing Step | Normal Mode | Prefetch Mode |
|----------------|-------------|---------------|
| Fetch HTML | ✅ | ✅ |
| Extract links | ✅ | ✅ (fast `quick_extract_links()`) |
| Generate markdown | ✅ | ❌ Skipped |
| Content scraping | ✅ | ❌ Skipped |
| Media extraction | ✅ | ❌ Skipped |
| LLM extraction | ✅ | ❌ Skipped |

### 11.3 Performance Benefit

- **Normal mode**: Full pipeline (~2-5 seconds per page)
- **Prefetch mode**: HTML + links only (~200-500ms per page)

This makes prefetch mode **5-10x faster** for URL discovery.
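
The exact numbers depend on the target site, so it is worth timing the two modes yourself. A minimal sketch, assuming you bypass the cache so both runs actually fetch:

```python
import asyncio
import time

from crawl4ai import AsyncWebCrawler, CacheMode, CrawlerRunConfig

async def compare_modes(url: str):
    configs = {
        "prefetch": CrawlerRunConfig(prefetch=True, cache_mode=CacheMode.BYPASS),
        "normal": CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
    }
    async with AsyncWebCrawler() as crawler:
        for label, config in configs.items():
            start = time.perf_counter()
            await crawler.arun(url, config=config)
            print(f"{label}: {time.perf_counter() - start:.2f}s")

asyncio.run(compare_modes("https://example.com"))
```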

### 11.4 Two-Phase Crawling Pattern

The most common use case is two-phase crawling:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def two_phase_crawl(start_url: str):
    async with AsyncWebCrawler() as crawler:
        # ═══════════════════════════════════════════════
        # Phase 1: Fast discovery (prefetch mode)
        # ═══════════════════════════════════════════════
        prefetch_config = CrawlerRunConfig(prefetch=True)
        discovery = await crawler.arun(start_url, config=prefetch_config)

        all_urls = [link["href"] for link in discovery.links.get("internal", [])]
        print(f"Discovered {len(all_urls)} URLs")

        # Filter to URLs you care about
        blog_urls = [url for url in all_urls if "/blog/" in url]
        print(f"Found {len(blog_urls)} blog posts to process")

        # ═══════════════════════════════════════════════
        # Phase 2: Full processing on selected URLs only
        # ═══════════════════════════════════════════════
        full_config = CrawlerRunConfig(
            # Your normal extraction settings
            word_count_threshold=100,
            remove_overlay_elements=True,
        )

        results = []
        for url in blog_urls:
            result = await crawler.arun(url, config=full_config)
            if result.success:
                results.append(result)
                print(f"Processed: {url}")

        return results

if __name__ == "__main__":
    results = asyncio.run(two_phase_crawl("https://example.com"))
    print(f"Fully processed {len(results)} pages")
```

### 11.5 Use Cases

- **Site mapping**: Quickly discover all URLs before deciding what to process
- **Link validation**: Check which pages exist without heavy processing (see the sketch below)
- **Selective deep crawl**: Prefetch to find URLs, filter by pattern, then full crawl
- **Crawl planning**: Estimate crawl size before committing resources
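
For the link-validation case, a minimal sketch, assuming your `CrawlResult` exposes `success` and `status_code` (field names may vary by version):

```python
import asyncio

from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def validate_links(urls: list[str]) -> dict[str, bool]:
    """Check which pages respond, using prefetch to skip heavy processing."""
    config = CrawlerRunConfig(prefetch=True)
    alive: dict[str, bool] = {}
    async with AsyncWebCrawler() as crawler:
        for url in urls:
            result = await crawler.arun(url, config=config)
            # Treat a successful crawl with a non-error status as alive
            alive[url] = bool(result.success and (result.status_code or 0) < 400)
    return alive

print(asyncio.run(validate_links(["https://example.com", "https://example.com/missing"])))
```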

---

## 12. Summary & Next Steps

In this **Deep Crawling with Crawl4AI** tutorial, you learned to:
@@ -495,5 +739,7 @@ In this **Deep Crawling with Crawl4AI** tutorial, you learned to:

- Use scorers to prioritize the most relevant pages
- Limit crawls with `max_pages` and `score_threshold` parameters
- Build a complete advanced crawler with combined techniques
- **Implement crash recovery** with `resume_state` and `on_state_change` for production deployments
- **Use prefetch mode** for fast URL discovery and two-phase crawling

With these tools, you can efficiently extract structured data from websites at scale, focusing precisely on the content you need for your specific use case.
@@ -67,13 +67,13 @@ Pull and run images directly from Docker Hub without building locally.

#### 1. Pull the Image

Our latest release is `0.8.0`. Images are built with multi-arch manifests, so Docker automatically pulls the correct version for your system.

> 💡 **Note**: The `latest` tag points to the stable `0.8.0` version.

```bash
# Pull the latest version
docker pull unclecode/crawl4ai:0.8.0

# Or pull using the latest tag
docker pull unclecode/crawl4ai:latest
```
@@ -145,7 +145,7 @@ docker stop crawl4ai && docker rm crawl4ai

#### Docker Hub Versioning Explained

* **Image Name:** `unclecode/crawl4ai`
* **Tag Format:** `LIBRARY_VERSION[-SUFFIX]` (e.g., `0.8.0`)
    * `LIBRARY_VERSION`: The semantic version of the core `crawl4ai` Python library
    * `SUFFIX`: Optional tag for release candidates (``) and revisions (`r1`)
* **`latest` Tag:** Points to the most recent stable version
@@ -255,6 +255,8 @@ The `SeedingConfig` object is your control panel. Here's everything you can conf

| `scoring_method` | str | None | Scoring method (currently "bm25") |
| `score_threshold` | float | None | Minimum score to include URL |
| `filter_nonsense_urls` | bool | True | Filter out utility URLs (robots.txt, etc.) |
| `cache_ttl_hours` | int | 24 | Hours before sitemap cache expires (0 = no TTL) |
| `validate_sitemap_lastmod` | bool | True | Check sitemap's lastmod and refetch if newer |

#### Pattern Matching Examples
@@ -968,10 +970,49 @@ config = SeedingConfig(

The seeder automatically caches results to speed up repeated operations:

- **Common Crawl cache**: `~/.crawl4ai/seeder_cache/[index]_[domain]_[hash].jsonl`
- **Sitemap cache**: `~/.crawl4ai/seeder_cache/sitemap_[domain]_[hash].json`
- **HEAD data cache**: `~/.cache/url_seeder/head/[hash].json`

Cache expires after 7 days by default. Use `force=True` to refresh.

#### Smart TTL Cache for Sitemaps

Sitemap caches now include intelligent validation:

```python
from crawl4ai import SeedingConfig

# Default: 24-hour TTL with lastmod validation
config = SeedingConfig(
    source="sitemap",
    cache_ttl_hours=24,             # Cache expires after 24 hours
    validate_sitemap_lastmod=True,  # Also check if sitemap was updated
)

# Aggressive caching (1 week, no lastmod check)
config = SeedingConfig(
    source="sitemap",
    cache_ttl_hours=168,             # 7 days
    validate_sitemap_lastmod=False,  # Trust TTL only
)

# Always validate (no TTL, only lastmod)
config = SeedingConfig(
    source="sitemap",
    cache_ttl_hours=0,              # Disable TTL
    validate_sitemap_lastmod=True,  # Refetch if sitemap has newer lastmod
)

# Always fresh (bypass cache completely)
config = SeedingConfig(
    source="sitemap",
    force=True,  # Ignore all caching
)
```

**Cache validation priority:**

1. `force=True` → Always refetch
2. Cache doesn't exist → Fetch fresh
3. `validate_sitemap_lastmod=True` and sitemap has newer `<lastmod>` → Refetch
4. `cache_ttl_hours > 0` and cache is older than TTL → Refetch
5. Cache corrupted → Refetch (automatic recovery)
6. Otherwise → Use cache
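
A minimal sketch of that decision order in plain Python; the `should_refetch()` helper and its arguments are ours for illustration, not part of the crawl4ai API:

```python
import time
from typing import Optional

def should_refetch(
    force: bool,
    cache_exists: bool,
    cache_created_at: float,           # epoch seconds when the cache was written
    cache_ttl_hours: int,
    validate_sitemap_lastmod: bool,
    cached_lastmod: Optional[float],   # lastmod recorded in the cache metadata
    sitemap_lastmod: Optional[float],  # lastmod reported by the live sitemap
    cache_valid: bool,                 # False if the cache file failed to parse
) -> bool:
    if force:
        return True   # 1. force=True always refetches
    if not cache_exists:
        return True   # 2. nothing cached yet
    if (validate_sitemap_lastmod and sitemap_lastmod is not None
            and cached_lastmod is not None and sitemap_lastmod > cached_lastmod):
        return True   # 3. sitemap changed since we cached it
    if cache_ttl_hours > 0 and time.time() - cache_created_at > cache_ttl_hours * 3600:
        return True   # 4. TTL expired
    if not cache_valid:
        return True   # 5. corrupted cache triggers automatic recovery
    return False      # 6. otherwise, use the cache
```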

### Pattern Matching Strategies
@@ -1060,6 +1101,9 @@ config = SeedingConfig(

| Rate limit errors | Reduce `hits_per_sec` and `concurrency` |
| Memory issues with large sites | Use `max_urls` to limit results, reduce `concurrency` |
| Connection not closed | Use context manager or call `await seeder.close()` |
| Stale/outdated URLs | Set `cache_ttl_hours=0` or use `force=True` |
| Cache not updating | Check `validate_sitemap_lastmod=True`, or use `force=True` |
| Incomplete URL list | Delete cache file and refetch, or use `force=True` |

### Performance Benchmarks
@@ -1119,6 +1163,7 @@ config = SeedingConfig(

3. **Context Manager Support**: Automatic cleanup with `async with` statement
4. **URL-Based Scoring**: Smart filtering even without head extraction
5. **Smart URL Filtering**: Automatically excludes utility/nonsense URLs
6. **Smart TTL Cache**: Sitemap caches with TTL expiry and lastmod validation
7. **Automatic Cache Recovery**: Corrupted or incomplete caches are automatically refreshed

Now go forth and seed intelligently!