Files
crawl4ai/tests/test_arun_many.py
ntohidi a03e68fa2f feat: Add URL-specific crawler configurations for multi-URL crawling
Implement dynamic configuration selection based on URL patterns to optimize crawling for different content types. This feature enables users to apply different crawling strategies (PDF extraction, content filtering, JavaScript execution) based on URL matching patterns.

Key additions:
- Add url_matcher and match_mode parameters to CrawlerRunConfig
- Implement is_match() method supporting string patterns, functions, and mixed lists
- Add MatchMode enum for OR/AND logic when combining multiple matchers
- Update AsyncWebCrawler.arun_many() to accept List[CrawlerRunConfig]
- Add select_config() method to dispatchers for runtime config selection
- First matching config wins, with fallback to default (see the sketch after this list)
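
For illustration, the new call shape could look like this (a minimal sketch; the keyword names come from this commit, and a config without a url_matcher is assumed to act as the catch-all default):

import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    pdf_config = CrawlerRunConfig(url_matcher="*.pdf")      # selected for PDF URLs
    blog_config = CrawlerRunConfig(url_matcher="*/blog/*")  # selected for blog paths
    default_config = CrawlerRunConfig()                     # assumed catch-all fallback

    async with AsyncWebCrawler() as crawler:
        # The dispatcher's select_config() picks the first config whose
        # matcher accepts each URL; unmatched URLs fall back to the default.
        results = await crawler.arun_many(
            urls=["https://example.com/doc.pdf", "https://example.com/blog/post"],
            config=[pdf_config, blog_config, default_config],
        )
        for r in results:
            print(r.url, "->", r.status_code)

asyncio.run(main())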

Pattern matching supports:
- Glob-style strings: *.pdf, */blog/*, *api*
- Lambda functions: lambda url: 'github.com' in url
- Mixed patterns with AND/OR logic for complex matching (example below)
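
A quick sketch of the three matcher forms (assuming is_match() is exposed on CrawlerRunConfig and MatchMode is importable from the package root):

from crawl4ai import CrawlerRunConfig, MatchMode  # MatchMode import path assumed

# Glob-style string pattern
pdf_cfg = CrawlerRunConfig(url_matcher="*.pdf")
print(pdf_cfg.is_match("https://example.com/report.pdf"))  # True

# Callable matcher
gh_cfg = CrawlerRunConfig(url_matcher=lambda url: "github.com" in url)
print(gh_cfg.is_match("https://github.com/unclecode/crawl4ai"))  # True

# Mixed list combined with AND: every matcher must pass
api_cfg = CrawlerRunConfig(
    url_matcher=["*api*", lambda url: url.startswith("https://")],
    match_mode=MatchMode.AND,
)
print(api_cfg.is_match("https://api.example.com/v1"))  # True
print(api_cfg.is_match("http://api.example.com/v1"))   # False: not HTTPS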

This enables optimal per-URL configuration:
- PDFs: Use PDFContentScrapingStrategy without JavaScript
- Blogs: Apply content filtering to reduce noise
- APIs: Skip JavaScript, use JSON extraction
- Dynamic sites: Execute only necessary JavaScript (see the config sketch below)
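
One way to express that recipe (a sketch; the matcher strings, the PruningContentFilter/DefaultMarkdownGenerator pairing, and the js_code snippet are illustrative choices, not part of this commit):

from crawl4ai import CrawlerRunConfig
from crawl4ai.content_filter_strategy import PruningContentFilter
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
from crawl4ai.processors.pdf import PDFContentScrapingStrategy

# PDFs: dedicated PDF scraping, no js_code supplied
pdf_config = CrawlerRunConfig(
    url_matcher="*.pdf",
    scraping_strategy=PDFContentScrapingStrategy(),
)

# Blogs: prune boilerplate before markdown generation
blog_config = CrawlerRunConfig(
    url_matcher="*/blog/*",
    markdown_generator=DefaultMarkdownGenerator(
        content_filter=PruningContentFilter()
    ),
)

# Dynamic sites: run only the JavaScript the page needs
dynamic_config = CrawlerRunConfig(
    url_matcher=lambda url: "app." in url,  # hypothetical predicate
    js_code="window.scrollTo(0, document.body.scrollHeight);",
)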

Breaking changes: None - fully backward compatible
2025-08-02 19:10:36 +08:00

"""
Test example for multiple crawler configs feature
"""
import asyncio
import sys
from pathlib import Path
# Add parent directory to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent))
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode
from crawl4ai.processors.pdf import PDFContentScrapingStrategy
async def test_run_many():
default_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
# scraping_strategy=PDFContentScrapingStrategy()
)
test_urls = [
# "https://blog.python.org/", # Blog URL
"https://www.python.org/", # Generic HTTPS page
"https://www.kidocode.com/", # Generic HTTPS page
"https://www.example.com/", # Generic HTTPS page
# "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf",
]
async with AsyncWebCrawler() as crawler:
# Single config - traditional usage still works
print("Test 1: Single config (backwards compatible)")
result = await crawler.arun_many(
urls=test_urls[:2],
config=default_config
)
print(f"Crawled {len(result)} URLs with single config\n")
for item in result:
print(f" {item.url} -> {item.status_code}")
if __name__ == "__main__":
asyncio.run(test_run_many())