execution, causing URLs to be processed sequentially instead of in parallel.
Changes:
- Added aperform_completion_with_backoff() using litellm.acompletion for async LLM calls
- Implemented arun() method in ExtractionStrategy base class with thread pool fallback
- Created async arun() and aextract() methods in LLMExtractionStrategy using asyncio.gather
- Updated AsyncWebCrawler.arun() to detect and use a strategy's arun() when available
- Added comprehensive test suite to verify parallel execution
Impact:
- LLM extraction now runs truly in parallel across multiple URLs
- Significant performance improvement for multi-URL crawls with LLM strategies
- Backward compatible - existing extraction strategies continue to work
- No breaking changes to public API
Technical details:
- Uses litellm.acompletion for non-blocking LLM calls
- Leverages asyncio.gather for concurrent chunk processing
- Maintains backward compatibility via asyncio.to_thread fallback
- Works seamlessly with MemoryAdaptiveDispatcher and other dispatchers
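A minimal sketch of that pattern, assuming simplified signatures (the real aperform_completion_with_backoff() and ExtractionStrategy.arun() take more parameters):

```python
import asyncio
import litellm

async def aperform_completion_with_backoff(provider, messages, retries=3):
    # Non-blocking LLM call via litellm.acompletion, retried with
    # exponential backoff on transient failures.
    for attempt in range(retries):
        try:
            return await litellm.acompletion(model=provider, messages=messages)
        except Exception:
            if attempt == retries - 1:
                raise
            await asyncio.sleep(2 ** attempt)

class ExtractionStrategy:
    def extract(self, url, chunk):
        raise NotImplementedError

    async def arun(self, url, chunks):
        # Base-class fallback: run the sync extract() in worker threads so
        # legacy strategies never block the event loop; async strategies
        # (e.g. LLMExtractionStrategy) override this with native coroutines.
        return await asyncio.gather(
            *(asyncio.to_thread(self.extract, url, chunk) for chunk in chunks)
        )
```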
- Add lightweight security test to verify version requirements
- Add comprehensive integration test for crawl4ai functionality
- Tests verify pyOpenSSL >= 25.3.0 and cryptography >= 45.0.7
- All tests passing: security vulnerability is resolved
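A sketch of such a lightweight check, assuming pytest-style tests and the `packaging` library; the actual test names may differ:

```python
from importlib.metadata import version
from packaging.version import Version

def test_security_version_pins():
    # Fails if the installed packages predate the patched releases.
    assert Version(version("pyOpenSSL")) >= Version("25.3.0")
    assert Version(version("cryptography")) >= Version("45.0.7")
```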
Related to #1545
Added end-to-end test script that automates webhook feature testing:
Script Features (test_webhook_feature.sh):
- Automatic branch switching and dependency installation
- Redis and server startup/shutdown management
- Webhook receiver implementation
- Integration test for webhook notifications
- Comprehensive cleanup and error handling
- Returns to original branch after completion
Test Flow:
1. Fetch and checkout webhook feature branch
2. Activate venv and install dependencies
3. Start Redis and Crawl4AI server
4. Submit crawl job with webhook config (see the sketch after this list)
5. Verify webhook delivery and payload
6. Clean up all processes and return to original branch
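A hypothetical client-side view of steps 4-5; the /crawl/job path matches the job API referenced elsewhere in this log, but the webhook payload shape and field names are assumptions:

```python
import requests

# Step 4: submit a crawl job that asks the server to notify our receiver.
job = requests.post(
    "http://localhost:11235/crawl/job",
    json={
        "urls": ["https://example.com"],
        "webhook": {"url": "http://localhost:9000/hook"},
    },
).json()

# Step 5: the receiver should then get a POST carrying the task id and
# final status; the script asserts on that delivery and payload.
print("submitted task:", job.get("task_id"))
```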
Documentation:
- WEBHOOK_TEST_README.md with usage instructions
- Troubleshooting guide
- Exit codes and safety features
Usage: ./tests/test_webhook_feature.sh
Add hooks_to_string() utility function that converts Python function objects
to string representations for the Docker API, enabling developers to write hooks
as regular Python functions instead of strings.
Core Changes:
- New hooks_to_string() utility in crawl4ai/utils.py using inspect.getsource() (sketched below)
- Docker client now accepts both function objects and strings for hooks
- Automatic detection and conversion in Crawl4aiDockerClient._prepare_request()
- New hooks and hooks_timeout parameters in client.crawl() method
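A minimal sketch of the conversion idea; the actual signature in crawl4ai/utils.py may differ:

```python
import inspect

def hooks_to_string(hooks):
    # Function objects become their source text via inspect.getsource();
    # hooks already given as strings pass through unchanged.
    return {
        name: inspect.getsource(hook) if callable(hook) else hook
        for name, hook in hooks.items()
    }

async def on_page_context_created(page, context, **kwargs):
    await page.set_extra_http_headers({"X-Test": "1"})

payload_hooks = hooks_to_string({"on_page_context_created": on_page_context_created})
```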
Documentation:
- Docker client examples with function-based hooks (docs/examples/docker_client_hooks_example.py)
- Updated main Docker deployment guide with comprehensive hooks section
- Added unit tests for hooks utility (tests/docker/test_hooks_utility.py)
- Fix critical bug where overlay removal JS function was injected but never called
- Change remove_overlay_elements() to properly execute the injected async function
- Wrap JS execution in async to handle the async overlay removal logic
- Add test_remove_overlay_elements() test case to verify functionality works
- Ensure overlay elements (cookie banners, popups, modals) are actually removed
The remove_overlay_elements feature now works as intended:
- Before: Function definition injected but never executed (silent failure)
- After: Function injected and called, successfully removing overlay elements
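Illustrated with Playwright's page.evaluate() and a heavily simplified script (the real injected helper is much longer):

```python
async def demo(page):
    # Before: the helper was only *defined*, never invoked (silent failure).
    await page.evaluate("async function removeOverlays() { /* hide banners */ }")

    # After: define and immediately call it inside an async IIFE, awaiting
    # completion so overlays are gone before content is captured.
    await page.evaluate(
        "(async () => {"
        "  async function removeOverlays() { /* hide banners */ }"
        "  await removeOverlays();"
        "})()"
    )
```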
Fix: Wrong URL variable used for extraction of raw HTML
- Prevents full HTML content from being passed as the URL to extraction strategies
- Added unit tests covering both raw HTML and regular URL processing
Added a new `preserve_https_for_internal_links` configuration flag that preserves the original HTTPS scheme for same-domain links even when the server redirects to HTTP.
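A hedged usage example; the flag name comes from this entry, while its placement on CrawlerRunConfig is an assumption:

```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

config = CrawlerRunConfig(preserve_https_for_internal_links=True)

async def collect_links(url):
    async with AsyncWebCrawler() as crawler:
        # Internal links keep their original https:// scheme even when the
        # server redirects the page itself to http://.
        result = await crawler.arun(url, config=config)
        return result.links
```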
- Updated ProxyConfig.from_string to support multiple proxy formats, including URLs with credentials.
- Deprecated the 'proxy' parameter in BrowserConfig, replacing it with 'proxy_config' for better flexibility.
- Added warnings for deprecated usage and clarified behavior when both parameters are provided.
- Updated documentation and tests to reflect changes in proxy configuration handling.
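A sketch of the new style next to the deprecated one; the credentialed-URL string is one of the formats this entry describes:

```python
from crawl4ai import BrowserConfig, ProxyConfig  # import paths assumed

# from_string now accepts several formats, including URLs with credentials.
proxy = ProxyConfig.from_string("http://user:pass@127.0.0.1:8080")

# New style: pass the parsed config explicitly.
browser_cfg = BrowserConfig(proxy_config=proxy)

# Deprecated: BrowserConfig(proxy="...") still works but warns; supplying
# both parameters triggers the clarified precedence behavior.
```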
- Return comprehensive error messages along with status codes for API internal errors.
- Fix fit_html property serialization issue in both /crawl and /crawl/stream endpoints
- Add sanitization to ensure fit_html is always JSON-serializable (string or None)
- Add comprehensive error handling test suite.
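The sanitization amounts to coercing the value before JSON encoding; a sketch with a hypothetical helper name:

```python
def sanitize_fit_html(value):
    # fit_html must reach the JSON encoder as a string or None, never as
    # a property object or any other type.
    return value if value is None or isinstance(value, str) else str(value)
```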
Implement hierarchical configuration for LLM parameters with support for:
- Temperature control (0.0-2.0) to adjust response creativity
- Custom base_url for proxy servers and alternative endpoints
- 4-tier priority: request params > provider env > global env > defaults
Add helper functions in utils.py, update API schemas and handlers,
support environment variables (LLM_TEMPERATURE, OPENAI_TEMPERATURE, etc.),
and provide comprehensive documentation with examples.
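A sketch of the 4-tier resolution order; the helper name and the fallback default are assumptions, while the env var names come from this entry:

```python
import os

def resolve_temperature(request_value=None, provider="openai"):
    if request_value is not None:                         # 1. request params
        return float(request_value)
    value = os.getenv(f"{provider.upper()}_TEMPERATURE")  # 2. provider env
    if value is not None:
        return float(value)
    value = os.getenv("LLM_TEMPERATURE")                  # 3. global env
    if value is not None:
        return float(value)
    return 0.7                                            # 4. default (assumed)
```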
- Fix URLPatternFilter serialization by preventing private __slots__ from being serialized as constructor params
- Add public attributes to URLPatternFilter to store original constructor parameters for proper serialization
- Handle property descriptors in CrawlResult.model_dump() to prevent JSON serialization errors
- Ensure filter chains work correctly with Docker client and REST API
The issue occurred because:
1. Private implementation details (_simple_suffixes, etc.) were being serialized and passed as constructor arguments during deserialization
2. Property descriptors were being included in the serialized output, causing "Object of type property is not JSON serializable" errors
Changes:
- async_configs.py: Comment out __slots__ serialization logic (lines 100-109)
- filters.py: Add patterns, use_glob, reverse to URLPatternFilter __slots__ and store them as public attributes (sketched below)
- models.py: Convert property descriptors to strings in model_dump() instead of including them directly
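A condensed sketch of the filters.py change (the real class keeps additional slots and matching logic):

```python
class URLPatternFilter:
    # Public attributes mirror the constructor params so serialization can
    # round-trip them; underscore-prefixed slots stay implementation-only.
    __slots__ = ("patterns", "use_glob", "reverse", "_simple_suffixes")

    def __init__(self, patterns, use_glob=True, reverse=False):
        self.patterns = patterns
        self.use_glob = use_glob
        self.reverse = reverse
        self._simple_suffixes = None  # derived cache, never serialized
```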
BREAKING CHANGE: Table extraction now uses Strategy Design Pattern
This commit introduces a new, strategy-based approach to table extraction in Crawl4AI:
✨ NEW FEATURES:
- LLMTableExtraction: AI-powered extraction for complex HTML tables with rowspan/colspan
- Smart Chunking: Automatically splits massive tables into optimal chunks at row boundaries
- Parallel Processing: Processes multiple chunks simultaneously for blazing-fast extraction
- Intelligent Merging: Seamlessly combines chunk results into complete tables
- Header Preservation: Each chunk maintains context with original headers
- Auto-retry Logic: Built-in resilience with configurable retry attempts
🏗️ ARCHITECTURE:
- Strategy Design Pattern for pluggable table extraction strategies
- ThreadPoolExecutor for concurrent chunk processing
- Token-based chunking with configurable thresholds
- Handles tables without headers gracefully
⚡ PERFORMANCE:
- Process 1000+ row tables without timeout
- Parallel processing with up to 5 concurrent chunks
- Smart token estimation prevents LLM context overflow
- Optimized for providers like Groq for massive tables
🔧 CONFIGURATION:
- enable_chunking: Auto-handle large tables (default: True)
- chunk_token_threshold: When to split (default: 3000 tokens)
- min_rows_per_chunk: Meaningful chunk sizes (default: 10)
- max_parallel_chunks: Concurrent processing (default: 5)
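A hedged usage sketch; the import path, LLMConfig wiring, and the table_extraction parameter name are assumptions based on this changelog:

```python
from crawl4ai import CrawlerRunConfig, LLMConfig
from crawl4ai.table_extraction import LLMTableExtraction  # path assumed

strategy = LLMTableExtraction(
    llm_config=LLMConfig(provider="groq/llama-3.3-70b-versatile"),  # assumed
    enable_chunking=True,          # split oversized tables at row boundaries
    chunk_token_threshold=3000,    # token budget before splitting kicks in
    min_rows_per_chunk=10,         # avoid degenerate tiny chunks
    max_parallel_chunks=5,         # concurrent chunk requests per table
)
config = CrawlerRunConfig(table_extraction=strategy)  # wiring assumed
```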
📚 BACKWARD COMPATIBILITY:
- Existing code continues to work unchanged
- DefaultTableExtraction remains the default strategy
- Progressive enhancement approach
This covers everything from simple tables to massive, complex data grids with merged cells and nested structures, while the chunking remains completely transparent to users and keeps even very large extractions scalable.
- Add raw HTML URL validation alongside http/https checks
- Fix URL preprocessing logic to handle raw: and raw:// prefixes
- Update error message and add comprehensive test cases
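A sketch of the prefix handling; the helper name is hypothetical, and checking the longer prefix first matters:

```python
def split_raw_url(url):
    # Both spellings mark inline HTML rather than a network resource.
    # Check "raw://" before "raw:" so it is not mis-stripped as "raw:"
    # plus a leading "//".
    if url.startswith("raw://"):
        return url[len("raw://"):]
    if url.startswith("raw:"):
        return url[len("raw:"):]
    return None  # not raw HTML; validate as http/https instead
```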
- Remove deprecated API token authentication from all Docker examples
- Fix async job endpoints: /crawl -> /crawl/job for submission, /task/{id} -> /crawl/job/{id} for polling
- Fix sync endpoint: /crawl_sync -> /crawl (synchronous)
- Remove non-existent /crawl_direct endpoint
- Update request format to use new structure with browser_config and crawler_config
- Fix response handling for both async and sync calls
- Update extraction strategy format to use proper nested structure
- Add Ollama connectivity check before running tests
- Update test schemas and selectors for current website structures
This makes the Docker examples work out-of-the-box with the current API structure.
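A hedged end-to-end example of the corrected async endpoints; the port and response field names (task_id, status) are assumptions:

```python
import time
import requests

BASE = "http://localhost:11235"  # default server port, assumed here

# Submit: POST /crawl/job with the nested config structure.
task = requests.post(f"{BASE}/crawl/job", json={
    "urls": ["https://example.com"],
    "browser_config": {"type": "BrowserConfig", "params": {"headless": True}},
    "crawler_config": {"type": "CrawlerRunConfig", "params": {}},
}).json()

# Poll: GET /crawl/job/{task_id} until the job settles.
while True:
    status = requests.get(f"{BASE}/crawl/job/{task['task_id']}").json()
    if status.get("status") in ("completed", "failed"):
        break
    time.sleep(1)
```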
- Extract base href from <head><base> tag using XPath in _process_element method
- Use base URL as the primary URL for link normalization when present
- Add error handling with logging for malformed or problematic base tags
- Maintain backward compatibility when no base tag is present
- Add a test verifying base tag extraction.
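A minimal lxml-based sketch of the resolution logic (the real code lives in _process_element and logs failures):

```python
from urllib.parse import urljoin
from lxml import html as lxml_html

def resolve_base(page_url, page_html):
    tree = lxml_html.fromstring(page_html)
    try:
        hrefs = tree.xpath("//head/base/@href")
        if hrefs and hrefs[0].strip():
            # <base href> wins for link normalization when present.
            return urljoin(page_url, hrefs[0].strip())
    except Exception:
        pass  # malformed base tag: real code logs this before falling back
    return page_url

doc = "<html><head><base href='/docs/'></head><body></body></html>"
print(resolve_base("https://a.com/page", doc))  # -> https://a.com/docs/
```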
- Support LLM_PROVIDER env var to override default provider (openai/gpt-4o-mini)
- Add optional 'provider' parameter to API endpoints for per-request overrides
- Implement provider validation to ensure API keys exist
- Update documentation and examples with new configuration options
Closes the need to hardcode providers in config.yml
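A sketch of the resolution and validation order; the helper name and the provider-to-key mapping are assumptions:

```python
import os

DEFAULT_PROVIDER = "openai/gpt-4o-mini"

def resolve_provider(request_provider=None):
    # Priority: per-request override > LLM_PROVIDER env var > default.
    provider = request_provider or os.getenv("LLM_PROVIDER", DEFAULT_PROVIDER)
    # Validation sketch: reject providers whose API key is absent.
    if provider.startswith("openai/") and not os.getenv("OPENAI_API_KEY"):
        raise ValueError(f"No API key configured for provider {provider!r}")
    return provider
```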
refactor: consolidate WebScrapingStrategy to use LXML implementation only
BREAKING CHANGE: None - full backward compatibility maintained
This commit simplifies the content scraping architecture by removing the
redundant BeautifulSoup-based WebScrapingStrategy implementation and making
it an alias for LXMLWebScrapingStrategy.
Changes:
- Remove ~1000 lines of BeautifulSoup-based WebScrapingStrategy code
- Make WebScrapingStrategy an alias for LXMLWebScrapingStrategy (sketched below)
- Update LXMLWebScrapingStrategy to inherit directly from ContentScrapingStrategy
- Add required methods (scrap, ascrap, process_element, _log) to LXMLWebScrapingStrategy
- Maintain 100% backward compatibility - existing code continues to work
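A condensed sketch of the alias arrangement; the real classes carry full implementations:

```python
from abc import ABC, abstractmethod

class ContentScrapingStrategy(ABC):
    @abstractmethod
    def scrap(self, url: str, html: str, **params): ...

class LXMLWebScrapingStrategy(ContentScrapingStrategy):
    def scrap(self, url: str, html: str, **params):
        ...  # lxml-based parsing, 10-20x faster on large documents

# Backward compatibility: the old name is now a plain alias, so existing
# imports, subclasses, and isinstance checks keep working.
WebScrapingStrategy = LXMLWebScrapingStrategy
```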
Code changes:
- crawl4ai/content_scraping_strategy.py: Remove WebScrapingStrategy class, add alias
- crawl4ai/async_configs.py: Remove WebScrapingStrategy from imports
- crawl4ai/__init__.py: Update imports to show alias relationship
- crawl4ai/types.py: Update type definitions
- crawl4ai/legacy/web_crawler.py: Update import to use alias
- tests/async/test_content_scraper_strategy.py: Update to use LXMLWebScrapingStrategy
- docs/examples/scraping_strategies_performance.py: Update to use single strategy
Documentation updates:
- docs/md_v2/core/content-selection.md: Update scraping modes section
- docs/md_v2/migration/webscraping-strategy-migration.md: Add migration guide
- CHANGELOG.md: Document the refactoring under [Unreleased]
Benefits:
- 10-20x faster HTML parsing for large documents
- Reduced memory usage and simplified codebase
- Consistent parsing behavior
- No migration required for existing users
All existing code using WebScrapingStrategy continues to work without
modification, while benefiting from LXML's superior performance.