* fix(docker-api): migrate to modern datetime library API
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
* Fix examples in README.md
* feat(docker): add user-provided hooks support to Docker API
Implements comprehensive hooks functionality allowing users to provide custom Python
functions as strings that execute at specific points in the crawling pipeline.
Key Features:
- Support for all 8 crawl4ai hook points:
• on_browser_created: Initialize browser settings
• on_page_context_created: Configure page context
• before_goto: Pre-navigation setup
• after_goto: Post-navigation processing
• on_user_agent_updated: User agent modification handling
• on_execution_started: Crawl execution initialization
• before_retrieve_html: Pre-extraction processing
• before_return_html: Final HTML processing
Implementation Details:
- Created UserHookManager for validation, compilation, and safe execution
- Added IsolatedHookWrapper for error isolation and timeout protection
- AST-based validation ensures code structure correctness
- Sandboxed execution with restricted builtins for security
- Configurable timeout (1-120 seconds) prevents infinite loops
- Comprehensive error handling ensures hooks don't crash main process
- Execution tracking with detailed statistics and logging
API Changes:
- Added HookConfig schema with code and timeout fields
- Extended CrawlRequest with optional hooks parameter
- Added /hooks/info endpoint for hook discovery
- Updated /crawl and /crawl/stream endpoints to support hooks
Safety Features:
- Malformed hooks return clear validation errors
- Hook errors are isolated and reported without stopping crawl
- Execution statistics track success/failure/timeout rates
- All hook results are JSON-serializable
Testing:
- Comprehensive test suite covering all 8 hooks
- Error handling and timeout scenarios validated
- Authentication, performance, and content extraction examples
- 100% success rate in production testing
Documentation:
- Added extensive hooks section to docker-deployment.md
- Security warnings about user-provided code risks
- Real-world examples using httpbin.org, GitHub, BBC
- Best practices and troubleshooting guide
ref #1377
* fix(deep-crawl): BestFirst priority inversion; remove pre-scoring truncation. ref #1253
Use negative scores in PQ to visit high-score URLs first and drop link cap prior to scoring; add test for ordering.
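For context, the core idea of the fix, sketched with Python's heapq (illustrative only, not the library's actual code):
import heapq

# heapq is a min-heap, so pushing negated scores makes the
# highest-scoring URL come out first.
frontier = []
for url, score in [("https://a.example", 0.9), ("https://b.example", 0.2)]:
    heapq.heappush(frontier, (-score, url))   # negate so max score pops first

while frontier:
    neg_score, url = heapq.heappop(frontier)
    print(url, -neg_score)   # visits the 0.9-scored URL before the 0.2-scored one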
* docs: Update URL seeding examples to use proper async context managers
- Wrap all AsyncUrlSeeder usage with async context managers
- Update URL seeding adventure example to use "sitemap+cc" source, focus on course posts, and add stream=True parameter to fix runtime error
* fix(crawler): Removed the incorrect reference in browser_config variable #1310
* docs: update Docker instructions to use the latest release tag
* fix(docker): Fix LLM API key handling for multi-provider support
Previously, the system incorrectly used OPENAI_API_KEY for all LLM providers
due to a hardcoded api_key_env fallback in config.yml. This caused authentication
errors when using non-OpenAI providers like Gemini.
Changes:
- Remove api_key_env from config.yml to let litellm handle provider-specific env vars
- Simplify get_llm_api_key() to return None, allowing litellm to auto-detect keys
- Update validate_llm_provider() to trust litellm's built-in key detection
- Update documentation to reflect the new automatic key handling
The fix leverages litellm's existing capability to automatically find the correct
environment variable for each provider (OPENAI_API_KEY, GEMINI_API_KEY, etc.)
without manual configuration.
ref #1291
* docs: update adaptive crawler docs and cache defaults; remove deprecated examples (#1330)
- Replace BaseStrategy with CrawlStrategy in custom strategy examples (DomainSpecificStrategy, HybridStrategy)
- Remove “Custom Link Scoring” and “Caching Strategy” sections no longer aligned with current library
- Revise memory pruning example to use adaptive.get_relevant_content and index-based retention of top 500 docs
- Correct Quickstart note: default cache mode is CacheMode.BYPASS; instruct enabling with CacheMode.ENABLED
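For reference, a minimal sketch of the enabling pattern the note describes (assuming crawl4ai's public CacheMode enum and cache_mode option):
from crawl4ai import AsyncWebCrawler, CacheMode, CrawlerRunConfig

config = CrawlerRunConfig(cache_mode=CacheMode.ENABLED)  # opt back into caching

async def cached_crawl():
    async with AsyncWebCrawler() as crawler:
        # Subsequent runs with the same URL can be served from cache.
        return await crawler.arun("https://example.com", config=config)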
* fix(utils): Improve URL normalization by avoiding quote/unquote to preserve '+' signs. ref #1332
* feat: Add comprehensive website to API example with frontend
This commit adds a complete web-scraping API example that demonstrates how to extract structured data from any website and expose it like an API, using the crawl4ai library together with a minimalist frontend interface.
Core Functionality
- AI-powered web scraping with plain English queries
- Dual scraping approaches: Schema-based (faster) and LLM-based (flexible)
- Intelligent schema caching for improved performance
- Custom LLM model support with API key management
- Automatic duplicate request prevention
Modern Frontend Interface
- Minimalist black-and-white design inspired by modern web apps
- Responsive layout with smooth animations and transitions
- Three main pages: Scrape Data, Models Management, API Request History
- Real-time results display with JSON formatting
- Copy-to-clipboard functionality for extracted data
- Toast notifications for user feedback
- Auto-scroll to results when scraping starts
Model Management System
- Web-based model configuration interface
- Support for any LLM provider (OpenAI, Gemini, Anthropic, etc.)
- Simplified configuration requiring only provider and API token
- Add, list, and delete model configurations
- Secure storage of API keys in local JSON files
API Request History
- Automatic saving of all API requests and responses
- Display of request history with URL, query, and cURL commands
- Duplicate prevention (same URL + query combinations)
- Request deletion functionality
- Clean, simplified display focusing on essential information
Technical Implementation
Backend (FastAPI)
- RESTful API with comprehensive endpoints
- Pydantic models for request/response validation
- Async web scraping with crawl4ai library
- Error handling with detailed error messages
- File-based storage for models and request history
Frontend (Vanilla JS/CSS/HTML)
- No framework dependencies - pure HTML, CSS, JavaScript
- Modern CSS Grid and Flexbox layouts
- Custom dropdown styling with SVG arrows
- Responsive design for mobile and desktop
- Smooth scrolling and animations
Core Library Integration
- WebScraperAgent class for orchestration
- ModelConfig class for LLM configuration management
- Schema generation and caching system
- LLM extraction strategy support
- Browser configuration with headless mode
* fix(dependencies): add cssselect to project dependencies
Fixes bug reported in issue #1405
[Bug]: Excluded selector (excluded_selector) doesn't work
This commit reintroduces the cssselect library, which was removed by PR https://github.com/unclecode/crawl4ai/pull/1368 (merged as 437395e490).
Integration tested against the 0.7.4 Docker container. Reintroducing the cssselect package eliminated the errors seen in the logs and restored the excluded_selector functionality.
Refs: #1405
* fix(docker): resolve filter serialization and JSON encoding errors in deep crawl strategy (ref #1419)
- Fix URLPatternFilter serialization by preventing private __slots__ from being serialized as constructor params
- Add public attributes to URLPatternFilter to store original constructor parameters for proper serialization
- Handle property descriptors in CrawlResult.model_dump() to prevent JSON serialization errors
- Ensure filter chains work correctly with Docker client and REST API
The issue occurred because:
1. Private implementation details (_simple_suffixes, etc.) were being serialized and passed as constructor arguments during deserialization
2. Property descriptors were being included in the serialized output, causing "Object of type property is not JSON serializable" errors
Changes:
- async_configs.py: Comment out __slots__ serialization logic (lines 100-109)
- filters.py: Add patterns, use_glob, reverse to URLPatternFilter __slots__ and store as public attributes
- models.py: Convert property descriptors to strings in model_dump() instead of including them directly
* fix(logger): ensure logger is a Logger instance in crawling strategies. ref #1437
* feat(docker): Add temperature and base_url parameters for LLM configuration. ref #1035
Implement hierarchical configuration for LLM parameters with support for:
- Temperature control (0.0-2.0) to adjust response creativity
- Custom base_url for proxy servers and alternative endpoints
- 4-tier priority: request params > provider env > global env > defaults
Add helper functions in utils.py, update API schemas and handlers,
support environment variables (LLM_TEMPERATURE, OPENAI_TEMPERATURE, etc.),
and provide comprehensive documentation with examples.
* feat(docker): improve docker error handling
- Return comprehensive error messages along with status codes for internal API errors.
- Fix fit_html property serialization issue in both /crawl and /crawl/stream endpoints
- Add sanitization to ensure fit_html is always JSON-serializable (string or None)
- Add comprehensive error handling test suite.
* #1375 : refactor(proxy) Deprecate 'proxy' parameter in BrowserConfig and enhance proxy string parsing
- Updated ProxyConfig.from_string to support multiple proxy formats, including URLs with credentials.
- Deprecated the 'proxy' parameter in BrowserConfig, replacing it with 'proxy_config' for better flexibility.
- Added warnings for deprecated usage and clarified behavior when both parameters are provided.
- Updated documentation and tests to reflect changes in proxy configuration handling.
* Remove deprecated test for 'proxy' parameter in BrowserConfig and update .gitignore to include test_scripts directory.
* feat: add preserve_https_for_internal_links flag to maintain HTTPS during crawling. Ref #1410
Added a new `preserve_https_for_internal_links` configuration flag that preserves the original HTTPS scheme for same-domain links even when the server redirects to HTTP.
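A minimal usage sketch (assuming the flag lives on CrawlerRunConfig like other crawl-time options):
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

config = CrawlerRunConfig(preserve_https_for_internal_links=True)

async def crawl():
    async with AsyncWebCrawler() as crawler:
        # Same-domain links keep their original https:// scheme in the result
        # even if the server redirected the page to http://.
        return await crawler.arun("https://example.com", config=config)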
* feat: update documentation for preserve_https_for_internal_links. ref #1410
* fix: drop Python 3.9 support and require Python >=3.10.
The library no longer supports Python 3.9, so all references to Python 3.9 have been dropped.
The following changes have been made:
- pyproject.toml: set requires-python to ">=3.10"; remove 3.9 classifier
- setup.py: set python_requires to ">=3.10"; remove 3.9 classifier
- docs: update Python version mentions
- deploy/docker/c4ai-doc-context.md: options -> 3.10, 3.11, 3.12, 3.13
* issue #1329 refactor(crawler): move unwanted properties to CrawlerRunConfig class
* fix(auth): fixed Docker JWT authentication. ref #1442
* remove: delete unused yoyo snapshot subproject
* fix: raise error on last attempt failure in perform_completion_with_backoff. ref #989
* Commit without API
* fix: update option labels in request builder for clarity
* fix: allow custom LLM providers for adaptive crawler embedding config. ref: #1291
- Change embedding_llm_config from Dict to Union[LLMConfig, Dict] for type safety
- Add backward-compatible conversion property _embedding_llm_config_dict
- Replace all hardcoded OpenAI embedding configs with configurable options
- Fix LLMConfig object attribute access in query expansion logic
- Add comprehensive example demonstrating multiple provider configurations
- Update documentation with both LLMConfig object and dictionary usage patterns
Users can now specify any LLM provider for query expansion in embedding strategy:
- New: embedding_llm_config=LLMConfig(provider='anthropic/claude-3', api_token='key')
- Old: embedding_llm_config={'provider': 'openai/gpt-4', 'api_token': 'key'} (still works)
* refactor(BrowserConfig): change deprecation warning for 'proxy' parameter to UserWarning
* feat(StealthAdapter): fix stealth features for Playwright integration. ref #1481
* #1505 fix(api): update config handling to only set base config if not provided by user
* fix(docker-deployment): replace console.log with print for metadata extraction
* Release v0.7.5: The Update
- Updated version to 0.7.5
- Added comprehensive demo and release notes
- Updated documentation
* refactor(release): remove memory management section for cleaner documentation. ref #1443
* feat(docs): add brand book and page copy functionality
- Add comprehensive brand book with color system, typography, components
- Add page copy dropdown with markdown copy/view functionality
- Update mkdocs.yml with new assets and branding navigation
- Use terminal-style ASCII icons and condensed menu design
* Update gitignore add local scripts folder
* fix: remove this import as it causes python to treat "json" as a variable in the except block
* fix: always return a list, even if we catch an exception
* feat(marketplace): Add Crawl4AI marketplace with secure configuration
- Implement marketplace frontend and admin dashboard
- Add FastAPI backend with environment-based configuration
- Use .env file for secrets management
- Include data generation scripts
- Add proper CORS configuration
- Remove hardcoded password from admin login
- Update gitignore for security
* fix(marketplace): Update URLs to use /marketplace path and relative API endpoints
- Change API_BASE to relative '/api' for production
- Move marketplace to /marketplace instead of /marketplace/frontend
- Update MkDocs navigation
- Fix logo path in marketplace index
* fix(docs): hide copy menu on non-markdown pages
* feat(marketplace): add sponsor logo uploads
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
* feat(docs): add chatgpt quick link to page actions
* fix(marketplace): align admin api with backend endpoints
* fix(marketplace): isolate api under marketplace prefix
* fix(marketplace): resolve app detail page routing and styling issues
- Fixed JavaScript errors from missing HTML elements (install-code, usage-code, integration-code)
- Added missing CSS classes for tabs, overview layout, sidebar, and integration content
- Fixed tab navigation to display horizontally in single line
- Added proper padding to tab content sections (removed from container, added to content)
- Fixed tab selector from .nav-tab to .tab-btn to match HTML structure
- Added sidebar styling with stats grid and metadata display
- Improved responsive design with mobile-friendly tab scrolling
- Fixed code block positioning for copy buttons
- Removed margin from first headings to prevent extra spacing
- Added null checks for DOM elements in JavaScript to prevent errors
These changes resolve the routing issue where clicking on apps caused page redirects,
and fix the broken layout where CSS was not properly applied to the app detail page.
* fix(marketplace): prevent hero image overflow and secondary card stretching
- Fixed hero image to 200px height with min/max constraints
- Added object-fit: cover to hero-image img elements
- Changed secondary-featured align-items from stretch to flex-start
- Fixed secondary-card height to 118px (no flex: 1 stretching)
- Updated responsive grid layouts for wider screens
- Added flex: 1 to hero-content for better content distribution
These changes ensure a rigid, predictable layout that prevents:
1. Large images from pushing text content down
2. Single secondary cards from stretching to fill entire height
* feat: Add hooks utility for function-based hooks with Docker client integration. ref #1377
Add hooks_to_string() utility function that converts Python function objects
to string representations for the Docker API, enabling developers to write hooks
as regular Python functions instead of strings.
Core Changes:
- New hooks_to_string() utility in crawl4ai/utils.py using inspect.getsource()
- Docker client now accepts both function objects and strings for hooks
- Automatic detection and conversion in Crawl4aiDockerClient._prepare_request()
- New hooks and hooks_timeout parameters in client.crawl() method
Documentation:
- Docker client examples with function-based hooks (docs/examples/docker_client_hooks_example.py)
- Updated main Docker deployment guide with comprehensive hooks section
- Added unit tests for hooks utility (tests/docker/test_hooks_utility.py)
* feat: Add hooks utility for function-based hooks with Docker client integration. ref #1377
Add hooks_to_string() utility function that converts Python function objects
to string representations for the Docker API, enabling developers to write hooks
as regular Python functions instead of strings.
Core Changes:
- New hooks_to_string() utility in crawl4ai/utils.py using inspect.getsource()
- Docker client now accepts both function objects and strings for hooks
- Automatic detection and conversion in Crawl4aiDockerClient._prepare_request()
- New hooks and hooks_timeout parameters in client.crawl() method
Documentation:
- Docker client examples with function-based hooks (docs/examples/docker_client_hooks_example.py)
- Updated main Docker deployment guide with comprehensive hooks section
- Added unit tests for hooks utility (tests/docker/test_hooks_utility.py)
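A rough sketch of the function-based flow described above (the client import path and hook signature are assumptions; hooks and hooks_timeout are the parameters named in this change):
import asyncio
from crawl4ai.docker_client import Crawl4aiDockerClient  # assumed import path

async def before_goto(page, context, url, **kwargs):
    # Runs right before navigation; sets a header purely as an example.
    await page.set_extra_http_headers({"X-Example": "1"})

async def main():
    async with Crawl4aiDockerClient(base_url="http://localhost:11235") as client:
        # Function objects are converted to source strings internally
        # (via hooks_to_string) before being sent to the Docker API.
        results = await client.crawl(
            ["https://example.com"],
            hooks={"before_goto": before_goto},
            hooks_timeout=30,  # seconds
        )

asyncio.run(main())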
* fix(docs): clarify Docker Hooks System with function-based API in README
* docs: Add demonstration files for v0.7.5 release, showcasing the new Docker Hooks System and all other features.
* docs: Update 0.7.5 video walkthrough
* docs: add complete SDK reference documentation
Add comprehensive single-page SDK reference combining:
- Installation & Setup
- Quick Start
- Core API (AsyncWebCrawler, arun, arun_many, CrawlResult)
- Configuration (BrowserConfig, CrawlerConfig, Parameters)
- Crawling Patterns
- Content Processing (Markdown, Fit Markdown, Selection, Interaction, Link & Media)
- Extraction Strategies (LLM and No-LLM)
- Advanced Features (Session Management, Hooks & Auth)
Generated using scripts/generate_sdk_docs.py in ultra-dense mode
optimized for AI assistant consumption.
Stats: 23K words, 185 code blocks, 220KB
* feat: add AI assistant skill package for Crawl4AI
- Create comprehensive skill package for AI coding assistants
- Include complete SDK reference (23K words, v0.7.4)
- Add three extraction scripts (basic, batch, pipeline)
- Implement version tracking in skill and scripts
- Add prominent download section on homepage
- Place skill in docs/assets for web distribution
The skill enables AI assistants like Claude, Cursor, and Windsurf
to effectively use Crawl4AI with optimized workflows for markdown
generation and data extraction.
* fix: remove non-existent wiki link and clarify skill usage instructions
* fix: update Crawl4AI skill with corrected parameters and examples
- Fixed CrawlerConfig → CrawlerRunConfig throughout
- Fixed parameter names (timeout → page_timeout, store_html removed)
- Fixed schema format (selector → baseSelector)
- Corrected proxy configuration (in BrowserConfig, not CrawlerRunConfig)
- Fixed fit_markdown usage with content filters
- Added comprehensive references to docs/examples/ directory
- Created safe packaging script to avoid root directory pollution
- All scripts tested and verified working
* fix: thoroughly verify and fix all Crawl4AI skill examples
- Cross-checked every section against actual docs
- Fixed BM25ContentFilter parameters (user_query, bm25_threshold)
- Removed incorrect wait_for selector from basic example
- Added comprehensive test suite (4 test files)
- All examples now tested and verified working
- Tests validate: basic crawling, markdown generation, data extraction, advanced patterns
- Package size: 76.6 KB (includes tests for future validation)
* feat(ci): split release pipeline and add Docker caching
- Split release.yml into PyPI/GitHub release and Docker workflows
- Add GitHub Actions cache for Docker builds (10-15x faster rebuilds)
- Implement dual-trigger for docker-release.yml (auto + manual)
- Add comprehensive workflow documentation in .github/workflows/docs/
- Backup original workflow as release.yml.backup
* feat: add webhook notifications for crawl job completion
Implements webhook support for the crawl job API to eliminate polling requirements.
Changes:
- Added WebhookConfig and WebhookPayload schemas to schemas.py
- Created webhook.py with WebhookDeliveryService class
- Integrated webhook notifications in api.py handle_crawl_job
- Updated job.py CrawlJobPayload to accept webhook_config
- Added webhook configuration section to config.yml
- Included comprehensive usage examples in WEBHOOK_EXAMPLES.md
Features:
- Webhook notifications on job completion (success/failure)
- Configurable data inclusion in webhook payload
- Custom webhook headers support
- Global default webhook URL configuration
- Exponential backoff retry logic (5 attempts: 1s, 2s, 4s, 8s, 16s)
- 30-second timeout per webhook call
Usage:
POST /crawl/job with optional webhook_config:
- webhook_url: URL to receive notifications
- webhook_data_in_payload: include full results (default: false)
- webhook_headers: custom headers for authentication
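A minimal sketch of such a request (host, port, and values are illustrative; payload fields other than webhook_config follow the existing /crawl/job schema):
import requests

payload = {
    "urls": ["https://example.com"],
    "webhook_config": {
        "webhook_url": "https://myapp.example.com/hooks/crawl-done",
        "webhook_data_in_payload": False,   # fetch full results separately
        "webhook_headers": {"Authorization": "Bearer <token>"},
    },
}
resp = requests.post("http://localhost:11235/crawl/job", json=payload, timeout=30)
print(resp.json())  # returns a task id to correlate with the webhook call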
Generated with Claude Code https://claude.com/claude-code
Co-Authored-By: Claude <noreply@anthropic.com>
* docs: add webhook documentation to Docker README
Added comprehensive webhook section to README.md including:
- Overview of asynchronous job queue with webhooks
- Benefits and use cases
- Quick start examples
- Webhook authentication
- Global webhook configuration
- Job status polling alternative
Updated table of contents and summary to include webhook feature.
Maintains consistent tone and style with rest of README.
Generated with Claude Code https://claude.com/claude-code
Co-Authored-By: Claude <noreply@anthropic.com>
* docs: add webhook example for Docker deployment
Added docker_webhook_example.py demonstrating:
- Submitting crawl jobs with webhook configuration
- Flask-based webhook receiver implementation
- Three usage patterns:
1. Webhook notification only (fetch data separately)
2. Webhook with full data in payload
3. Traditional polling approach for comparison
Includes comprehensive comments explaining:
- Webhook payload structure
- Authentication headers setup
- Error handling
- Production deployment tips
Example is fully functional and ready to run with Flask installed.
Generated with Claude Code https://claude.com/claude-code
Co-Authored-By: Claude <noreply@anthropic.com>
* test: add webhook implementation validation tests
Added comprehensive test suite to validate webhook implementation:
- Module import verification
- WebhookDeliveryService initialization
- Pydantic model validation (WebhookConfig)
- Payload construction logic
- Exponential backoff calculation
- API integration checks
All tests pass (6/6), confirming implementation is correct.
Generated with Claude Code https://claude.com/claude-code
Co-Authored-By: Claude <noreply@anthropic.com>
* test: add comprehensive webhook feature test script
Added end-to-end test script that automates webhook feature testing:
Script Features (test_webhook_feature.sh):
- Automatic branch switching and dependency installation
- Redis and server startup/shutdown management
- Webhook receiver implementation
- Integration test for webhook notifications
- Comprehensive cleanup and error handling
- Returns to original branch after completion
Test Flow:
1. Fetch and checkout webhook feature branch
2. Activate venv and install dependencies
3. Start Redis and Crawl4AI server
4. Submit crawl job with webhook config
5. Verify webhook delivery and payload
6. Clean up all processes and return to original branch
Documentation:
- WEBHOOK_TEST_README.md with usage instructions
- Troubleshooting guide
- Exit codes and safety features
Usage: ./tests/test_webhook_feature.sh
Generated with Claude Code https://claude.com/claude-code
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: properly serialize Pydantic HttpUrl in webhook config
Use model_dump(mode='json') instead of deprecated dict() method to ensure
Pydantic special types (HttpUrl, UUID, etc.) are properly serialized to
JSON-compatible native Python types.
This fixes webhook delivery failures caused by HttpUrl objects remaining
as Pydantic types in the webhook_config dict, which caused JSON
serialization errors and httpx request failures.
Also update mcp requirement to >=1.18.0 for compatibility.
* feat: add webhook support for /llm/job endpoint
Add comprehensive webhook notification support for the /llm/job endpoint,
following the same pattern as the existing /crawl/job implementation.
Changes:
- Add webhook_config field to LlmJobPayload model (job.py)
- Implement webhook notifications in process_llm_extraction() with 4
notification points: success, provider validation failure, extraction
failure, and general exceptions (api.py)
- Store webhook_config in Redis task data for job tracking
- Initialize WebhookDeliveryService with exponential backoff retry logic
Documentation:
- Add Example 6 to WEBHOOK_EXAMPLES.md showing LLM extraction with webhooks
- Update Flask webhook handler to support both crawl and llm_extraction tasks
- Add TypeScript client examples for LLM jobs
- Add comprehensive examples to docker_webhook_example.py with schema support
- Clarify data structure differences between webhook and API responses
Testing:
- Add test_llm_webhook_feature.py with 7 validation tests (all passing)
- Verify pattern consistency with /crawl/job implementation
- Add implementation guide (WEBHOOK_LLM_JOB_IMPLEMENTATION.md)
* fix: remove duplicate comma in webhook_config parameter
* fix: update Crawl4AI Docker container port from 11234 to 11235
* Release v0.7.6: The 0.7.6 Update
- Updated version to 0.7.6
- Added comprehensive demo and release notes
- Updated all documentation
- Updated the version in Dockerfile to 0.7.6
---------
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
Co-authored-by: Nezar Ali <abu5sohaib@gmail.com>
Co-authored-by: Soham Kukreti <kukretisoham@gmail.com>
Co-authored-by: James T. Wood <jamesthomaswood@gmail.com>
Co-authored-by: AHMET YILMAZ <tawfik@kidocode.com>
Co-authored-by: nafeqq-1306 <nafiquee@yahoo.com>
Co-authored-by: unclecode <unclecode@kidocode.com>
Co-authored-by: Martin Sjöborg <martin.sjoborg@quartr.se>
Co-authored-by: Martin Sjöborg <martin@sjoborg.org>
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
URL Seeding: The Smart Way to Crawl at Scale
Why URL Seeding?
Web crawling comes in different flavors, each with its own strengths. Let's understand when to use URL seeding versus deep crawling.
Deep Crawling: Real-Time Discovery
Deep crawling is perfect when you need:
- Fresh, real-time data - discovering pages as they're created
- Dynamic exploration - following links based on content
- Selective extraction - stopping when you find what you need
# Deep crawling example: Explore a website dynamically
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.deep_crawling import BFSDeepCrawlStrategy

async def deep_crawl_example():
    # Configure a 2-level deep crawl
    config = CrawlerRunConfig(
        deep_crawl_strategy=BFSDeepCrawlStrategy(
            max_depth=2,              # Crawl 2 levels deep
            include_external=False,   # Stay within domain
            max_pages=50              # Limit for efficiency
        ),
        verbose=True
    )
    async with AsyncWebCrawler() as crawler:
        # Start crawling and follow links dynamically
        results = await crawler.arun("https://example.com", config=config)
        print(f"Discovered and crawled {len(results)} pages")
        for result in results[:3]:
            print(f"Found: {result.url} at depth {result.metadata.get('depth', 0)}")

asyncio.run(deep_crawl_example())
URL Seeding: Bulk Discovery
URL seeding shines when you want:
- Comprehensive coverage - get thousands of URLs in seconds
- Bulk processing - filter before crawling
- Resource efficiency - know exactly what you'll crawl
# URL seeding example: Analyze all documentation
from crawl4ai import AsyncUrlSeeder, SeedingConfig

config = SeedingConfig(
    source="sitemap",
    extract_head=True,
    pattern="*/docs/*"
)

# Get ALL documentation URLs instantly (context manager handles cleanup)
async with AsyncUrlSeeder() as seeder:
    urls = await seeder.urls("example.com", config)
# 1000+ URLs discovered in seconds!
The Trade-offs
| Aspect | Deep Crawling | URL Seeding |
|---|---|---|
| Coverage | Discovers pages dynamically | Gets most existing URLs instantly |
| Freshness | Finds brand new pages | May miss very recent pages |
| Speed | Slower, page by page | Extremely fast bulk discovery |
| Resource Usage | Higher - crawls to discover | Lower - discovers then crawls |
| Control | Can stop mid-process | Pre-filters before crawling |
When to Use Each
Choose Deep Crawling when:
- You need the absolute latest content
- You're searching for specific information
- The site structure is unknown or dynamic
- You want to stop as soon as you find what you need
Choose URL Seeding when:
- You need to analyze large portions of a site
- You want to filter URLs before crawling
- You're doing comparative analysis
- You need to optimize resource usage
The magic happens when you understand both approaches and choose the right tool for your task. Sometimes, you might even combine them - use URL seeding for bulk discovery, then deep crawl specific sections for the latest updates.
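For instance, a minimal sketch of that combined pattern (domains, patterns, and limits are illustrative):
import asyncio
from crawl4ai import AsyncUrlSeeder, AsyncWebCrawler, SeedingConfig, CrawlerRunConfig
from crawl4ai.deep_crawling import BFSDeepCrawlStrategy

async def combined():
    # Bulk-discover candidate sections first...
    async with AsyncUrlSeeder() as seeder:
        seed_cfg = SeedingConfig(source="sitemap", pattern="*/docs/*", max_urls=20)
        seeds = await seeder.urls("example.com", seed_cfg)

    # ...then deep crawl only those sections for the freshest pages.
    crawl_cfg = CrawlerRunConfig(
        deep_crawl_strategy=BFSDeepCrawlStrategy(max_depth=1, max_pages=10)
    )
    async with AsyncWebCrawler() as crawler:
        for seed in seeds[:3]:
            results = await crawler.arun(seed["url"], config=crawl_cfg)
            print(seed["url"], "->", len(results), "pages")

asyncio.run(combined())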
Your First URL Seeding Adventure
Let's see the magic in action. We'll discover course pages from Real Python, filter for tutorials, and crawl only those pages.
import asyncio
from crawl4ai import AsyncUrlSeeder, AsyncWebCrawler, SeedingConfig, CrawlerRunConfig

async def smart_blog_crawler():
    # Step 1: Create our URL discoverer
    seeder = AsyncUrlSeeder()

    # Step 2: Configure discovery - let's find all course pages
    config = SeedingConfig(
        source="sitemap+cc",      # Use the sitemap plus Common Crawl
        pattern="*/courses/*",    # Only course-related pages
        extract_head=True,        # Get page metadata
        max_urls=100              # Limit for this example
    )

    # Step 3: Discover URLs from Real Python
    print("🔍 Discovering course posts...")
    urls = await seeder.urls("realpython.com", config)
    print(f"✅ Found {len(urls)} course posts")

    # Step 4: Filter for Python tutorials (using metadata!)
    tutorials = [
        url for url in urls
        if url["status"] == "valid" and
        any(keyword in str(url["head_data"]).lower()
            for keyword in ["tutorial", "guide", "how to"])
    ]
    print(f"📚 Filtered to {len(tutorials)} tutorials")

    # Step 5: Show what we found
    print("\n🎯 Found these tutorials:")
    for tutorial in tutorials[:5]:  # First 5
        title = tutorial["head_data"].get("title", "No title")
        print(f"  - {title}")
        print(f"    {tutorial['url']}")

    # Step 6: Now crawl ONLY these relevant pages
    print("\n🚀 Crawling tutorials...")
    async with AsyncWebCrawler() as crawler:
        config = CrawlerRunConfig(
            only_text=True,
            word_count_threshold=300,  # Only substantial articles
            stream=True
        )
        # Extract URLs and crawl them
        tutorial_urls = [t["url"] for t in tutorials[:10]]
        results = await crawler.arun_many(tutorial_urls, config=config)
        successful = 0
        async for result in results:
            if result.success:
                successful += 1
                print(f"✓ Crawled: {result.url[:60]}...")
    print(f"\n✨ Successfully crawled {successful} tutorials!")

# Run it!
asyncio.run(smart_blog_crawler())
What just happened?
- We discovered all course URLs from the sitemap and Common Crawl
- We filtered using metadata (no crawling needed!)
- We crawled only the relevant tutorials
- We saved tons of time and bandwidth
This is the power of URL seeding - you see everything before you crawl anything.
Understanding the URL Seeder
Now that you've seen the magic, let's understand how it works.
Basic Usage
Creating a URL seeder is simple:
from crawl4ai import AsyncUrlSeeder, SeedingConfig

# Method 1: Manual cleanup
seeder = AsyncUrlSeeder()
try:
    config = SeedingConfig(source="sitemap")
    urls = await seeder.urls("example.com", config)
finally:
    await seeder.close()

# Method 2: Context manager (recommended)
async with AsyncUrlSeeder() as seeder:
    config = SeedingConfig(source="sitemap")
    urls = await seeder.urls("example.com", config)
# Automatically cleaned up on exit
The seeder can discover URLs from two powerful sources, used individually or together:
1. Sitemaps (Fastest)
# Discover from sitemap
config = SeedingConfig(source="sitemap")
urls = await seeder.urls("example.com", config)
Sitemaps are XML files that websites create specifically to list all their URLs. It's like getting a menu at a restaurant - everything is listed upfront.
Sitemap Index Support: For large websites like TechCrunch that use sitemap indexes (a sitemap of sitemaps), the seeder automatically detects and processes all sub-sitemaps in parallel:
<!-- Example sitemap index -->
<sitemapindex>
<sitemap>
<loc>https://techcrunch.com/sitemap-1.xml</loc>
</sitemap>
<sitemap>
<loc>https://techcrunch.com/sitemap-2.xml</loc>
</sitemap>
<!-- ... more sitemaps ... -->
</sitemapindex>
The seeder handles this transparently - you'll get all URLs from all sub-sitemaps automatically!
2. Common Crawl (Most Comprehensive)
# Discover from Common Crawl
config = SeedingConfig(source="cc")
urls = await seeder.urls("example.com", config)
Common Crawl is a massive public dataset that regularly crawls the entire web. It's like having access to a pre-built index of the internet.
3. Both Sources (Maximum Coverage)
# Use both sources
config = SeedingConfig(source="sitemap+cc")
urls = await seeder.urls("example.com", config)
Configuration Magic: SeedingConfig
The SeedingConfig object is your control panel. Here's everything you can configure:
| Parameter | Type | Default | Description |
|---|---|---|---|
| source | str | "sitemap+cc" | URL source: "cc" (Common Crawl), "sitemap", or "sitemap+cc" |
| pattern | str | "*" | URL pattern filter (e.g., "*/blog/*", "*.html") |
| extract_head | bool | False | Extract metadata from page <head> |
| live_check | bool | False | Verify URLs are accessible |
| max_urls | int | -1 | Maximum URLs to return (-1 = unlimited) |
| concurrency | int | 10 | Parallel workers for fetching |
| hits_per_sec | int | 5 | Rate limit for requests |
| force | bool | False | Bypass cache, fetch fresh data |
| verbose | bool | False | Show detailed progress |
| query | str | None | Search query for BM25 scoring |
| scoring_method | str | None | Scoring method (currently "bm25") |
| score_threshold | float | None | Minimum score to include URL |
| filter_nonsense_urls | bool | True | Filter out utility URLs (robots.txt, etc.) |
Pattern Matching Examples
# Match all blog posts
config = SeedingConfig(pattern="*/blog/*")
# Match only HTML files
config = SeedingConfig(pattern="*.html")
# Match product pages
config = SeedingConfig(pattern="*/product/*")
# Match everything except admin pages
config = SeedingConfig(pattern="*")
# Then filter: urls = [u for u in urls if "/admin/" not in u["url"]]
URL Validation: Live Checking
Sometimes you need to know if URLs are actually accessible. That's where live checking comes in:
config = SeedingConfig(
    source="sitemap",
    live_check=True,   # Verify each URL is accessible
    concurrency=20     # Check 20 URLs in parallel
)

async with AsyncUrlSeeder() as seeder:
    urls = await seeder.urls("example.com", config)

# Now you can filter by status
live_urls = [u for u in urls if u["status"] == "valid"]
dead_urls = [u for u in urls if u["status"] == "not_valid"]
print(f"Live URLs: {len(live_urls)}")
print(f"Dead URLs: {len(dead_urls)}")
When to use live checking:
- Before a large crawling operation
- When working with older sitemaps
- When data freshness is critical
When to skip it:
- Quick explorations
- When you trust the source
- When speed is more important than accuracy
The Power of Metadata: Head Extraction
This is where URL seeding gets really powerful. Instead of crawling entire pages, you can extract just the metadata:
config = SeedingConfig(
    extract_head=True  # Extract metadata from <head> section
)

async with AsyncUrlSeeder() as seeder:
    urls = await seeder.urls("example.com", config)

# Now each URL has rich metadata
for url in urls[:3]:
    print(f"\nURL: {url['url']}")
    print(f"Title: {url['head_data'].get('title')}")
    meta = url['head_data'].get('meta', {})
    print(f"Description: {meta.get('description')}")
    print(f"Keywords: {meta.get('keywords')}")
    # Even Open Graph data!
    print(f"OG Image: {meta.get('og:image')}")
What Can We Extract?
The head extraction gives you a treasure trove of information:
# Example of extracted head_data
{
"title": "10 Python Tips for Beginners",
"charset": "utf-8",
"lang": "en",
"meta": {
"description": "Learn essential Python tips...",
"keywords": "python, programming, tutorial",
"author": "Jane Developer",
"viewport": "width=device-width, initial-scale=1",
# Open Graph tags
"og:title": "10 Python Tips for Beginners",
"og:description": "Essential Python tips for new programmers",
"og:image": "https://example.com/python-tips.jpg",
"og:type": "article",
# Twitter Card tags
"twitter:card": "summary_large_image",
"twitter:title": "10 Python Tips",
# Dublin Core metadata
"dc.creator": "Jane Developer",
"dc.date": "2024-01-15"
},
"link": {
"canonical": [{"href": "https://example.com/blog/python-tips"}],
"alternate": [{"href": "/feed.xml", "type": "application/rss+xml"}]
},
"jsonld": [
{
"@type": "Article",
"headline": "10 Python Tips for Beginners",
"datePublished": "2024-01-15",
"author": {"@type": "Person", "name": "Jane Developer"}
}
]
}
This metadata is gold for filtering! You can find exactly what you need without crawling a single page.
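For example, a quick sketch that keeps only article pages whose description mentions Python, using nothing but the head_data structure shown above:
# Filter on metadata alone - no pages are crawled here.
articles = [
    u for u in urls
    if u["head_data"].get("meta", {}).get("og:type") == "article"
    and "python" in u["head_data"].get("meta", {}).get("description", "").lower()
]
print(f"{len(articles)} candidate articles selected without crawling a single page")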
Smart URL-Based Filtering (No Head Extraction)
When extract_head=False but you still provide a query, the seeder uses intelligent URL-based scoring:
# Fast filtering based on URL structure alone
config = SeedingConfig(
source="sitemap",
extract_head=False, # Don't fetch page metadata
query="python tutorial async",
scoring_method="bm25",
score_threshold=0.3
)
async with AsyncUrlSeeder() as seeder:
urls = await seeder.urls("example.com", config)
# URLs are scored based on:
# 1. Domain parts matching (e.g., 'python' in python.example.com)
# 2. Path segments (e.g., '/tutorials/python-async/')
# 3. Query parameters (e.g., '?topic=python')
# 4. Fuzzy matching using character n-grams
# Example URL scoring:
# https://example.com/tutorials/python/async-guide.html - High score
# https://example.com/blog/javascript-tips.html - Low score
This approach is much faster than head extraction while still providing intelligent filtering!
Understanding Results
Each URL in the results has this structure:
{
"url": "https://example.com/blog/python-tips.html",
"status": "valid", # "valid", "not_valid", or "unknown"
"head_data": { # Only if extract_head=True
"title": "Page Title",
"meta": {...},
"link": {...},
"jsonld": [...]
},
"relevance_score": 0.85 # Only if using BM25 scoring
}
Let's see a real example:
config = SeedingConfig(
    source="sitemap",
    extract_head=True,
    live_check=True
)

async with AsyncUrlSeeder() as seeder:
    urls = await seeder.urls("blog.example.com", config)

# Analyze the results
for url in urls[:5]:
    print(f"\n{'='*60}")
    print(f"URL: {url['url']}")
    print(f"Status: {url['status']}")
    if url['head_data']:
        data = url['head_data']
        print(f"Title: {data.get('title', 'No title')}")

        # Check content type
        meta = data.get('meta', {})
        content_type = meta.get('og:type', 'unknown')
        print(f"Content Type: {content_type}")

        # Publication date
        pub_date = None
        for jsonld in data.get('jsonld', []):
            if isinstance(jsonld, dict):
                pub_date = jsonld.get('datePublished')
                if pub_date:
                    break
        if pub_date:
            print(f"Published: {pub_date}")

        # Word count (if available)
        word_count = meta.get('word_count')
        if word_count:
            print(f"Word Count: {word_count}")
Smart Filtering with BM25 Scoring
Now for the really cool part - intelligent filtering based on relevance!
Introduction to Relevance Scoring
BM25 is a ranking algorithm that scores how relevant a document is to a search query. With URL seeding, we can score URLs based on their metadata before crawling them.
Think of it like this:
- Traditional way: Read every book in the library to find ones about Python
- Smart way: Check the titles and descriptions, score them, read only the most relevant
Query-Based Discovery
Here's how to use BM25 scoring:
config = SeedingConfig(
    source="sitemap",
    extract_head=True,              # Required for scoring
    query="python async tutorial",  # What we're looking for
    scoring_method="bm25",          # Use BM25 algorithm
    score_threshold=0.3             # Minimum relevance score
)

async with AsyncUrlSeeder() as seeder:
    urls = await seeder.urls("realpython.com", config)

# Results are automatically sorted by relevance!
for url in urls[:5]:
    print(f"Score: {url['relevance_score']:.2f} - {url['url']}")
    print(f"  Title: {url['head_data']['title']}")
Real Examples
Finding Documentation Pages
# Find API documentation
config = SeedingConfig(
source="sitemap",
extract_head=True,
query="API reference documentation endpoints",
scoring_method="bm25",
score_threshold=0.5,
max_urls=20
)
async with AsyncUrlSeeder() as seeder:
urls = await seeder.urls("docs.example.com", config)
# The highest scoring URLs will be API docs!
Discovering Product Pages
# Find specific products
config = SeedingConfig(
source="sitemap+cc", # Use both sources
extract_head=True,
query="wireless headphones noise canceling",
scoring_method="bm25",
score_threshold=0.4,
pattern="*/product/*" # Combine with pattern matching
)
async with AsyncUrlSeeder() as seeder:
urls = await seeder.urls("shop.example.com", config)
# Filter further by price (from metadata)
affordable = [
u for u in urls
if float(u['head_data'].get('meta', {}).get('product:price', '0')) < 200
]
Filtering News Articles
# Find recent news about AI
config = SeedingConfig(
source="sitemap",
extract_head=True,
query="artificial intelligence machine learning breakthrough",
scoring_method="bm25",
score_threshold=0.35
)
async with AsyncUrlSeeder() as seeder:
urls = await seeder.urls("technews.com", config)
# Filter by date
from datetime import datetime, timedelta
recent = []
cutoff = datetime.now() - timedelta(days=7)
for url in urls:
# Check JSON-LD for publication date
for jsonld in url['head_data'].get('jsonld', []):
if 'datePublished' in jsonld:
pub_date = datetime.fromisoformat(jsonld['datePublished'].replace('Z', '+00:00'))
if pub_date > cutoff:
recent.append(url)
break
Complex Query Patterns
# Multi-concept queries
queries = [
"python async await concurrency tutorial",
"data science pandas numpy visualization",
"web scraping beautifulsoup selenium automation",
"machine learning tensorflow keras deep learning"
]
all_tutorials = []
for query in queries:
config = SeedingConfig(
source="sitemap",
extract_head=True,
query=query,
scoring_method="bm25",
score_threshold=0.4,
max_urls=10 # Top 10 per topic
)
async with AsyncUrlSeeder() as seeder:
urls = await seeder.urls("learning-platform.com", config)
all_tutorials.extend(urls)
# Remove duplicates while preserving order
seen = set()
unique_tutorials = []
for url in all_tutorials:
if url['url'] not in seen:
seen.add(url['url'])
unique_tutorials.append(url)
print(f"Found {len(unique_tutorials)} unique tutorials across all topics")
Scaling Up: Multiple Domains
When you need to discover URLs across multiple websites, URL seeding really shines.
The many_urls Method
# Discover URLs from multiple domains in parallel
domains = ["site1.com", "site2.com", "site3.com"]

config = SeedingConfig(
    source="sitemap",
    extract_head=True,
    query="python tutorial",
    scoring_method="bm25",
    score_threshold=0.3
)

# Returns a dictionary: {domain: [urls]}
async with AsyncUrlSeeder() as seeder:
    results = await seeder.many_urls(domains, config)

# Process results
for domain, urls in results.items():
    print(f"\n{domain}: Found {len(urls)} relevant URLs")
    if urls:
        top = urls[0]  # Highest scoring
        print(f"  Top result: {top['url']}")
        print(f"  Score: {top['relevance_score']:.2f}")
Cross-Domain Examples
Competitor Analysis
# Analyze content strategies across competitors
competitors = [
"competitor1.com",
"competitor2.com",
"competitor3.com"
]
config = SeedingConfig(
source="sitemap",
extract_head=True,
pattern="*/blog/*",
max_urls=100
)
async with AsyncUrlSeeder() as seeder:
results = await seeder.many_urls(competitors, config)
# Analyze content types
for domain, urls in results.items():
content_types = {}
for url in urls:
# Extract content type from metadata
og_type = url['head_data'].get('meta', {}).get('og:type', 'unknown')
content_types[og_type] = content_types.get(og_type, 0) + 1
print(f"\n{domain} content distribution:")
for ctype, count in sorted(content_types.items(), key=lambda x: x[1], reverse=True):
print(f" {ctype}: {count}")
Industry Research
# Research Python tutorials across educational sites
educational_sites = [
"realpython.com",
"pythontutorial.net",
"learnpython.org",
"python.org"
]
config = SeedingConfig(
source="sitemap",
extract_head=True,
query="beginner python tutorial basics",
scoring_method="bm25",
score_threshold=0.3,
max_urls=20 # Per site
)
async with AsyncUrlSeeder() as seeder:
results = await seeder.many_urls(educational_sites, config)
# Find the best beginner tutorials
all_tutorials = []
for domain, urls in results.items():
for url in urls:
url['domain'] = domain # Add domain info
all_tutorials.append(url)
# Sort by relevance across all domains
all_tutorials.sort(key=lambda x: x['relevance_score'], reverse=True)
print("Top 10 Python tutorials for beginners across all sites:")
for i, tutorial in enumerate(all_tutorials[:10], 1):
print(f"{i}. [{tutorial['relevance_score']:.2f}] {tutorial['head_data']['title']}")
print(f" {tutorial['url']}")
print(f" From: {tutorial['domain']}")
Multi-Site Monitoring
# Monitor news about your company across multiple sources
news_sites = [
"techcrunch.com",
"theverge.com",
"wired.com",
"arstechnica.com"
]
company_name = "YourCompany"
config = SeedingConfig(
source="cc", # Common Crawl for recent content
extract_head=True,
query=f"{company_name} announcement news",
scoring_method="bm25",
score_threshold=0.5, # High threshold for relevance
max_urls=10
)
async with AsyncUrlSeeder() as seeder:
results = await seeder.many_urls(news_sites, config)
# Collect all mentions
mentions = []
for domain, urls in results.items():
mentions.extend(urls)
if mentions:
print(f"Found {len(mentions)} mentions of {company_name}:")
for mention in mentions:
print(f"\n- {mention['head_data']['title']}")
print(f" {mention['url']}")
print(f" Score: {mention['relevance_score']:.2f}")
else:
print(f"No recent mentions of {company_name} found")
Advanced Integration Patterns
Let's put everything together in a real-world example.
Building a Research Assistant
Here's a complete example that discovers, scores, filters, and crawls intelligently:
import asyncio
from datetime import datetime
from crawl4ai import AsyncUrlSeeder, AsyncWebCrawler, SeedingConfig, CrawlerRunConfig
class ResearchAssistant:
def __init__(self):
self.seeder = None
async def __aenter__(self):
self.seeder = AsyncUrlSeeder()
await self.seeder.__aenter__()
return self
async def __aexit__(self, exc_type, exc_val, exc_tb):
if self.seeder:
await self.seeder.__aexit__(exc_type, exc_val, exc_tb)
async def research_topic(self, topic, domains, max_articles=20):
"""Research a topic across multiple domains."""
print(f"🔬 Researching '{topic}' across {len(domains)} domains...")
# Step 1: Discover relevant URLs
config = SeedingConfig(
source="sitemap+cc", # Maximum coverage
extract_head=True, # Get metadata
query=topic, # Research topic
scoring_method="bm25", # Smart scoring
score_threshold=0.4, # Quality threshold
max_urls=10, # Per domain
concurrency=20, # Fast discovery
verbose=True
)
# Discover across all domains
discoveries = await self.seeder.many_urls(domains, config)
# Step 2: Collect and rank all articles
all_articles = []
for domain, urls in discoveries.items():
for url in urls:
url['domain'] = domain
all_articles.append(url)
# Sort by relevance
all_articles.sort(key=lambda x: x['relevance_score'], reverse=True)
# Take top articles
top_articles = all_articles[:max_articles]
print(f"\n📊 Found {len(all_articles)} relevant articles")
print(f"📌 Selected top {len(top_articles)} for deep analysis")
# Step 3: Show what we're about to crawl
print("\n🎯 Articles to analyze:")
for i, article in enumerate(top_articles[:5], 1):
print(f"\n{i}. {article['head_data']['title']}")
print(f" Score: {article['relevance_score']:.2f}")
print(f" Source: {article['domain']}")
print(f" URL: {article['url'][:60]}...")
# Step 4: Crawl the selected articles
print(f"\n🚀 Deep crawling {len(top_articles)} articles...")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
only_text=True,
word_count_threshold=200, # Substantial content only
stream=True
)
# Extract URLs and crawl all articles
article_urls = [article['url'] for article in top_articles]
results = []
crawl_results = await crawler.arun_many(article_urls, config=config)
async for result in crawl_results:
if result.success:
results.append({
'url': result.url,
'title': result.metadata.get('title', 'No title'),
'content': result.markdown.raw_markdown,
'domain': next(a['domain'] for a in top_articles if a['url'] == result.url),
'score': next(a['relevance_score'] for a in top_articles if a['url'] == result.url)
})
print(f"✓ Crawled: {result.url[:60]}...")
# Step 5: Analyze and summarize
print(f"\n📝 Analysis complete! Crawled {len(results)} articles")
return self.create_research_summary(topic, results)
def create_research_summary(self, topic, articles):
"""Create a research summary from crawled articles."""
summary = {
'topic': topic,
'timestamp': datetime.now().isoformat(),
'total_articles': len(articles),
'sources': {}
}
# Group by domain
for article in articles:
domain = article['domain']
if domain not in summary['sources']:
summary['sources'][domain] = []
summary['sources'][domain].append({
'title': article['title'],
'url': article['url'],
'score': article['score'],
'excerpt': article['content'][:500] + '...' if len(article['content']) > 500 else article['content']
})
return summary
# Use the research assistant
async def main():
async with ResearchAssistant() as assistant:
# Research Python async programming across multiple sources
topic = "python asyncio best practices performance optimization"
domains = [
"realpython.com",
"python.org",
"stackoverflow.com",
"medium.com"
]
summary = await assistant.research_topic(topic, domains, max_articles=15)
# Display results
print("\n" + "="*60)
print("RESEARCH SUMMARY")
print("="*60)
print(f"Topic: {summary['topic']}")
print(f"Date: {summary['timestamp']}")
print(f"Total Articles Analyzed: {summary['total_articles']}")
print("\nKey Findings by Source:")
for domain, articles in summary['sources'].items():
print(f"\n📚 {domain} ({len(articles)} articles)")
for article in articles[:2]: # Top 2 per domain
print(f"\n Title: {article['title']}")
print(f" Relevance: {article['score']:.2f}")
print(f" Preview: {article['excerpt'][:200]}...")
asyncio.run(main())
Performance Optimization Tips
- Use caching wisely
# First run - populate cache
config = SeedingConfig(source="sitemap", extract_head=True, force=True)
urls = await seeder.urls("example.com", config)
# Subsequent runs - use cache (much faster)
config = SeedingConfig(source="sitemap", extract_head=True, force=False)
urls = await seeder.urls("example.com", config)
- Optimize concurrency
# For many small requests (like HEAD checks)
config = SeedingConfig(concurrency=50, hits_per_sec=20)
# For fewer large requests (like full head extraction)
config = SeedingConfig(concurrency=10, hits_per_sec=5)
- Stream large result sets
# When crawling many URLs
async with AsyncWebCrawler() as crawler:
# Assuming urls is a list of URL strings
crawl_results = await crawler.arun_many(urls, config=config)
# Process as they arrive
async for result in crawl_results:
process_immediately(result) # Don't wait for all
- Memory protection for large domains
The seeder uses bounded queues to prevent memory issues when processing domains with millions of URLs:
# Safe for domains with 1M+ URLs
config = SeedingConfig(
source="cc+sitemap",
concurrency=50, # Queue size adapts to concurrency
max_urls=100000 # Process in batches if needed
)
# The seeder automatically manages memory by:
# - Using bounded queues (prevents RAM spikes)
# - Applying backpressure when queue is full
# - Processing URLs as they're discovered
Best Practices & Tips
Cache Management
The seeder automatically caches results to speed up repeated operations:
- Common Crawl cache: ~/.crawl4ai/seeder_cache/[index]_[domain]_[hash].jsonl
- Sitemap cache: ~/.crawl4ai/seeder_cache/sitemap_[domain]_[hash].jsonl
- HEAD data cache: ~/.cache/url_seeder/head/[hash].json
Cache expires after 7 days by default. Use force=True to refresh.
Pattern Matching Strategies
# Be specific when possible
good_pattern = "*/blog/2024/*.html" # Specific
bad_pattern = "*" # Too broad
# Combine patterns with metadata filtering
config = SeedingConfig(
pattern="*/articles/*",
extract_head=True
)
urls = await seeder.urls("news.com", config)
# Further filter by publish date, author, category, etc.
recent = [u for u in urls if is_recent(u['head_data'])]
Rate Limiting Considerations
# Be respectful of servers
config = SeedingConfig(
hits_per_sec=10, # Max 10 requests per second
concurrency=20 # But use 20 workers
)
# For your own servers
config = SeedingConfig(
hits_per_sec=None, # No limit
concurrency=100 # Go fast
)
Quick Reference
Common Patterns
# Blog post discovery
config = SeedingConfig(
source="sitemap",
pattern="*/blog/*",
extract_head=True,
query="your topic",
scoring_method="bm25"
)
# E-commerce product discovery
config = SeedingConfig(
source="sitemap+cc",
pattern="*/product/*",
extract_head=True,
live_check=True
)
# Documentation search
config = SeedingConfig(
source="sitemap",
pattern="*/docs/*",
extract_head=True,
query="API reference",
scoring_method="bm25",
score_threshold=0.5
)
# News monitoring
config = SeedingConfig(
source="cc",
extract_head=True,
query="company name",
scoring_method="bm25",
max_urls=50
)
Troubleshooting Guide
| Issue | Solution |
|---|---|
| No URLs found | Try source="cc+sitemap", check domain spelling |
| Slow discovery | Reduce concurrency, add hits_per_sec limit |
| Missing metadata | Ensure extract_head=True |
| Low relevance scores | Refine query, lower score_threshold |
| Rate limit errors | Reduce hits_per_sec and concurrency |
| Memory issues with large sites | Use max_urls to limit results, reduce concurrency |
| Connection not closed | Use context manager or call await seeder.close() |
Performance Benchmarks
Typical performance on a standard connection:
- Sitemap discovery: 100-1,000 URLs/second
- Common Crawl discovery: 50-500 URLs/second
- HEAD checking: 10-50 URLs/second
- Head extraction: 5-20 URLs/second
- BM25 scoring: 10,000+ URLs/second
Conclusion
URL seeding transforms web crawling from a blind expedition into a surgical strike. By discovering and analyzing URLs before crawling, you can:
- Save hours of crawling time
- Reduce bandwidth usage by 90%+
- Find exactly what you need
- Scale across multiple domains effortlessly
Whether you're building a research tool, monitoring competitors, or creating a content aggregator, URL seeding gives you the intelligence to crawl smarter, not harder.
Smart URL Filtering
The seeder automatically filters out nonsense URLs that aren't useful for content crawling:
# Enabled by default
config = SeedingConfig(
source="sitemap",
filter_nonsense_urls=True # Default: True
)
# URLs that get filtered:
# - robots.txt, sitemap.xml, ads.txt
# - API endpoints (/api/, /v1/, .json)
# - Media files (.jpg, .mp4, .pdf)
# - Archives (.zip, .tar.gz)
# - Source code (.js, .css)
# - Admin/login pages
# - And many more...
To disable filtering (not recommended):
config = SeedingConfig(
source="sitemap",
filter_nonsense_urls=False # Include ALL URLs
)
Key Features Summary
- Parallel Sitemap Index Processing: Automatically detects and processes sitemap indexes in parallel
- Memory Protection: Bounded queues prevent RAM issues with large domains (1M+ URLs)
- Context Manager Support: Automatic cleanup with the async with statement
- URL-Based Scoring: Smart filtering even without head extraction
- Smart URL Filtering: Automatically excludes utility/nonsense URLs
- Dual Caching: Separate caches for URL lists and metadata
Now go forth and seed intelligently! 🌱🚀