Compare commits

...

33 Commits

Author SHA1 Message Date
ntohidi
159207b86f feat(docker): Add temperature and base_url parameters for LLM configuration. ref #1035
Implement hierarchical configuration for LLM parameters with support for:
  - Temperature control (0.0-2.0) to adjust response creativity
  - Custom base_url for proxy servers and alternative endpoints
  - 4-tier priority: request params > provider env > global env > defaults

  Add helper functions in utils.py, update API schemas and handlers,
  support environment variables (LLM_TEMPERATURE, OPENAI_TEMPERATURE, etc.),
  and provide comprehensive documentation with examples.
2025-08-26 16:44:07 +08:00
ntohidi
40ab287c90 fix(utils): Improve URL normalization by avoiding quote/unquote to preserve '+' signs. ref #1332 2025-08-22 12:05:21 +08:00
ntohidi
90af453506 Merge branch 'develop' of https://github.com/unclecode/crawl4ai into develop 2025-08-21 14:10:01 +08:00
Nasrin
8bb0e68cce Merge pull request #1422 from unclecode/fix/docker-llmEnvFile
fix(docker): Fix LLM API key handling for multi-provider support
2025-08-21 14:05:06 +08:00
ntohidi
95051020f4 fix(docker): Fix LLM API key handling for multi-provider support
Previously, the system incorrectly used OPENAI_API_KEY for all LLM providers
due to a hardcoded api_key_env fallback in config.yml. This caused authentication
errors when using non-OpenAI providers like Gemini.

Changes:
- Remove api_key_env from config.yml to let litellm handle provider-specific env vars
- Simplify get_llm_api_key() to return None, allowing litellm to auto-detect keys
- Update validate_llm_provider() to trust litellm's built-in key detection
- Update documentation to reflect the new automatic key handling

The fix leverages litellm's existing capability to automatically find the correct
environment variable for each provider (OPENAI_API_KEY, GEMINI_API_TOKEN, etc.)
without manual configuration.

ref #1291
2025-08-21 14:01:04 +08:00
ntohidi
69961cf40b Merge branch 'develop' of https://github.com/unclecode/crawl4ai into develop 2025-08-20 16:56:19 +08:00
Nasrin
ef174a4c7a Merge pull request #1104 from emmanuel-ferdman/main
fix(docker-api): migrate to modern datetime library API
2025-08-20 10:57:39 +08:00
Nasrin
f4206d6ba1 Merge pull request #1369 from NezarAli/main
Fix examples in README.md
2025-08-18 14:22:54 +08:00
ntohidi
9447054a65 docs: update Docker instructions to use the latest release tag 2025-08-18 14:20:05 +08:00
Nasrin
dad7c51481 Merge pull request #1398 from unclecode/fix/update-url-seeding-docs
Update URL seeding examples to use proper async context managers
2025-08-18 13:00:26 +08:00
ntohidi
f4a432829e fix(crawler): Removed the incorrect reference in browser_config variable #1310 2025-08-18 10:59:14 +08:00
UncleCode
e651e045c4 Release v0.7.4: Merge release branch
- Merge release/v0.7.4 into main
- Version: 0.7.4
- Ready for tag and publication
2025-08-17 19:46:48 +08:00
UncleCode
5398acc7d2 docs: add v0.7.4 release blog post and update documentation
- Add comprehensive v0.7.4 release blog post with LLMTableExtraction feature highlight
- Update blog index to feature v0.7.4 as latest release
- Update README.md to showcase v0.7.4 features alongside v0.7.3
- Accurately describe dispatcher fix as bug fix rather than major enhancement
- Include practical code examples for new LLMTableExtraction capabilities
2025-08-17 19:45:23 +08:00
UncleCode
22c7932ba3 chore(version): update version to 0.7.4 2025-08-17 19:22:23 +08:00
UncleCode
2ab0bf27c2 refactor(utils): move memory utilities to utils and update imports 2025-08-17 19:14:55 +08:00
ntohidi
d30dc9fdc1 fix(http-crawler): bring back HTTP crawler strategy 2025-08-16 09:27:23 +08:00
ntohidi
e6044e6053 Merge branch 'develop' of https://github.com/unclecode/crawl4ai into develop 2025-08-15 19:44:06 +08:00
ntohidi
a50e47adad Merge branch 'feature/table-extraction-strategies' into develop 2025-08-15 19:41:37 +08:00
ntohidi
ada7441bd1 refactor: Update LLMTableExtraction examples and tests 2025-08-15 19:11:26 +08:00
ntohidi
9f7fee91a9 feat: 🚀 Introduce revolutionary LLMTableExtraction with intelligent chunking for massive tables
BREAKING CHANGE: Table extraction now uses Strategy Design Pattern

This epic commit introduces a game-changing approach to table extraction in Crawl4AI:

 NEW FEATURES:
- LLMTableExtraction: AI-powered extraction for complex HTML tables with rowspan/colspan
- Smart Chunking: Automatically splits massive tables into optimal chunks at row boundaries
- Parallel Processing: Processes multiple chunks simultaneously for blazing-fast extraction
- Intelligent Merging: Seamlessly combines chunk results into complete tables
- Header Preservation: Each chunk maintains context with original headers
- Auto-retry Logic: Built-in resilience with configurable retry attempts

🏗️ ARCHITECTURE:
- Strategy Design Pattern for pluggable table extraction strategies
- ThreadPoolExecutor for concurrent chunk processing
- Token-based chunking with configurable thresholds
- Handles tables without headers gracefully

 PERFORMANCE:
- Process 1000+ row tables without timeout
- Parallel processing with up to 5 concurrent chunks
- Smart token estimation prevents LLM context overflow
- Optimized for providers like Groq for massive tables

🔧 CONFIGURATION:
- enable_chunking: Auto-handle large tables (default: True)
- chunk_token_threshold: When to split (default: 3000 tokens)
- min_rows_per_chunk: Meaningful chunk sizes (default: 10)
- max_parallel_chunks: Concurrent processing (default: 5)

📚 BACKWARD COMPATIBILITY:
- Existing code continues to work unchanged
- DefaultTableExtraction remains the default strategy
- Progressive enhancement approach

This is the future of web table extraction - handling everything from simple tables to massive, complex data grids with merged cells and nested structures. The chunking is completely transparent to users while providing unprecedented scalability.
2025-08-15 19:11:26 +08:00
AHMET YILMAZ
7f48655cf1 feat(browser-profiler): implement cross-platform keyboard listeners and improve quit handling 2025-08-15 19:11:26 +08:00
prokopis3
1417a67e90 chore(profile-test): fix filename typo ( test_crteate_profile.py → test_create_profile.py )
- Rename file to correct spelling
- No content changes
2025-08-15 19:11:26 +08:00
prokopis3
19398d33ef fix(browser_profiler): improve keyboard input handling
- fix handling of special keys in Windows msvcrt implementation
- Guard against UnicodeDecodeError from multi-byte key sequences
- Filter out non-printable characters and control sequences
- Add error handling to prevent coroutine crashes
- Add unit test to verify keyboard input handling

Key changes:
- Safe UTF-8 decoding with try/except for special keys
- Skip non-printable and multi-byte character sequences
- Add broad exception handling in keyboard listener

Test runs on Windows only due to msvcrt dependency.
2025-08-15 19:11:26 +08:00
prokopis3
263d362daa fix(browser_profiler): cross-platform 'q' to quit
This commit introduces platform-specific handling for the 'q' key press to quit the browser profiler, ensuring compatibility with both Windows and Unix-like systems. It also adds a check to see if the browser process has already exited, terminating the input listener if so.

- Implemented `msvcrt` for Windows to capture keyboard input without requiring a newline.
- Retained `termios`, `tty`, and `select` for Unix-like systems.
- Added a check for browser process termination to gracefully exit the input listener.
- Updated logger messages to use colored output for better user experience.
2025-08-15 19:11:26 +08:00
ntohidi
bac92a47e4 refactor: Update LLMTableExtraction examples and tests 2025-08-15 18:47:31 +08:00
ntohidi
a51545c883 feat: 🚀 Introduce revolutionary LLMTableExtraction with intelligent chunking for massive tables
BREAKING CHANGE: Table extraction now uses Strategy Design Pattern

This epic commit introduces a game-changing approach to table extraction in Crawl4AI:

 NEW FEATURES:
- LLMTableExtraction: AI-powered extraction for complex HTML tables with rowspan/colspan
- Smart Chunking: Automatically splits massive tables into optimal chunks at row boundaries
- Parallel Processing: Processes multiple chunks simultaneously for blazing-fast extraction
- Intelligent Merging: Seamlessly combines chunk results into complete tables
- Header Preservation: Each chunk maintains context with original headers
- Auto-retry Logic: Built-in resilience with configurable retry attempts

🏗️ ARCHITECTURE:
- Strategy Design Pattern for pluggable table extraction strategies
- ThreadPoolExecutor for concurrent chunk processing
- Token-based chunking with configurable thresholds
- Handles tables without headers gracefully

 PERFORMANCE:
- Process 1000+ row tables without timeout
- Parallel processing with up to 5 concurrent chunks
- Smart token estimation prevents LLM context overflow
- Optimized for providers like Groq for massive tables

🔧 CONFIGURATION:
- enable_chunking: Auto-handle large tables (default: True)
- chunk_token_threshold: When to split (default: 3000 tokens)
- min_rows_per_chunk: Meaningful chunk sizes (default: 10)
- max_parallel_chunks: Concurrent processing (default: 5)

📚 BACKWARD COMPATIBILITY:
- Existing code continues to work unchanged
- DefaultTableExtraction remains the default strategy
- Progressive enhancement approach

This is the future of web table extraction - handling everything from simple tables to massive, complex data grids with merged cells and nested structures. The chunking is completely transparent to users while providing unprecedented scalability.
2025-08-14 18:21:24 +08:00
Soham Kukreti
ecbe5ffb84 docs: Update URL seeding examples to use proper async context managers
- Wrap all AsyncUrlSeeder usage with async context managers
- Update URL seeding adventure example to use "sitemap+cc" source, focus on course posts, and add stream=True parameter to fix runtime error
2025-08-13 18:16:46 +05:30
Nasrin
926e41aab8 Merge pull request #1378 from unclecode/fix/exit_with_q
Cross Platform fix for browser profiler
2025-08-13 14:16:47 +08:00
AHMET YILMAZ
b61b2ee676 feat(browser-profiler): implement cross-platform keyboard listeners and improve quit handling 2025-08-08 11:18:34 +08:00
Nezar Ali
7a8190ecb6 Fix examples in README.md 2025-08-06 11:58:29 +03:00
Nasrin
64f37792a7 Merge pull request #1170 from prokopis3/fix/create-profile
fix(browser_profiler): cross-platform 'q' to quit - create profile
2025-08-06 16:29:14 +08:00
Emmanuel Ferdman
8e3c411a3e Merge branch 'main' into main 2025-07-29 14:05:35 +03:00
Emmanuel Ferdman
1e1c887a2f fix(docker-api): migrate to modern datetime library API
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2025-05-13 00:04:58 -07:00
30 changed files with 4770 additions and 418 deletions

View File

@@ -27,9 +27,11 @@
Crawl4AI turns the web into clean, LLM ready Markdown for RAG, agents, and data pipelines. Fast, controllable, battle tested by a 50k+ star community.
[✨ Check out latest update v0.7.3](#-recent-updates)
[✨ Check out latest update v0.7.4](#-recent-updates)
✨ New in v0.7.3: Undetected Browser Support, Multi-URL Configurations, Memory Monitoring, Enhanced Table Extraction, GitHub Sponsors. [Release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.3.md)
✨ New in v0.7.4: Revolutionary LLM Table Extraction with intelligent chunking, enhanced concurrency fixes, memory management refactor, and critical stability improvements. [Release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.4.md)
✨ Recent v0.7.3: Undetected Browser Support, Multi-URL Configurations, Memory Monitoring, Enhanced Table Extraction, GitHub Sponsors. [Release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.3.md)
<details>
<summary>🤓 <strong>My Personal Story</strong></summary>
@@ -302,9 +304,9 @@ The new Docker implementation includes:
### Getting Started
```bash
# Pull and run the latest release candidate
docker pull unclecode/crawl4ai:0.7.0
docker run -d -p 11235:11235 --name crawl4ai --shm-size=1g unclecode/crawl4ai:0.7.0
# Pull and run the latest release
docker pull unclecode/crawl4ai:latest
docker run -d -p 11235:11235 --name crawl4ai --shm-size=1g unclecode/crawl4ai:latest
# Visit the playground at http://localhost:11235/playground
```
@@ -371,7 +373,7 @@ async def main():
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://docs.micronaut.io/4.7.6/guide/",
url="https://docs.micronaut.io/4.9.9/guide/",
config=run_config
)
print(len(result.markdown.raw_markdown))
@@ -423,7 +425,7 @@ async def main():
"type": "attribute",
"attribute": "src"
}
}
]
}
extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)
@@ -542,6 +544,40 @@ async def test_news_crawl():
## ✨ Recent Updates
<details>
<summary><strong>Version 0.7.4 Release Highlights - The Intelligent Table Extraction & Performance Update</strong></summary>
- **🚀 LLMTableExtraction**: Revolutionary table extraction with intelligent chunking for massive tables:
```python
from crawl4ai import LLMTableExtraction, LLMConfig
# Configure intelligent table extraction
table_strategy = LLMTableExtraction(
llm_config=LLMConfig(provider="openai/gpt-4.1-mini"),
enable_chunking=True, # Handle massive tables
chunk_token_threshold=5000, # Smart chunking threshold
overlap_threshold=100, # Maintain context between chunks
extraction_type="structured" # Get structured data output
)
config = CrawlerRunConfig(table_extraction_strategy=table_strategy)
result = await crawler.arun("https://complex-tables-site.com", config=config)
# Tables are automatically chunked, processed, and merged
for table in result.tables:
print(f"Extracted table: {len(table['data'])} rows")
```
- **⚡ Dispatcher Bug Fix**: Fixed sequential processing bottleneck in arun_many for fast-completing tasks
- **🧹 Memory Management Refactor**: Consolidated memory utilities into main utils module for cleaner architecture
- **🔧 Browser Manager Fixes**: Resolved race conditions in concurrent page creation with thread-safe locking
- **🔗 Advanced URL Processing**: Better handling of raw:// URLs and base tag link resolution
- **🛡️ Enhanced Proxy Support**: Flexible proxy configuration supporting both dict and string formats
[Full v0.7.4 Release Notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.4.md)
</details>
<details>
<summary><strong>Version 0.7.3 Release Highlights - The Multi-Config Intelligence Update</strong></summary>

View File

@@ -29,6 +29,12 @@ from .extraction_strategy import (
)
from .chunking_strategy import ChunkingStrategy, RegexChunking
from .markdown_generation_strategy import DefaultMarkdownGenerator
from .table_extraction import (
TableExtractionStrategy,
DefaultTableExtraction,
NoTableExtraction,
LLMTableExtraction,
)
from .content_filter_strategy import (
PruningContentFilter,
BM25ContentFilter,
@@ -156,6 +162,9 @@ __all__ = [
"ChunkingStrategy",
"RegexChunking",
"DefaultMarkdownGenerator",
"TableExtractionStrategy",
"DefaultTableExtraction",
"NoTableExtraction",
"RelevantContentFilter",
"PruningContentFilter",
"BM25ContentFilter",

View File

@@ -1,7 +1,7 @@
# crawl4ai/__version__.py
# This is the version that will be used for stable releases
__version__ = "0.7.3"
__version__ = "0.7.4"
# For nightly builds, this gets set during build process
__nightly_version__ = None

View File

@@ -20,6 +20,7 @@ from .chunking_strategy import ChunkingStrategy, RegexChunking
from .markdown_generation_strategy import MarkdownGenerationStrategy, DefaultMarkdownGenerator
from .content_scraping_strategy import ContentScrapingStrategy, LXMLWebScrapingStrategy
from .deep_crawling import DeepCrawlStrategy
from .table_extraction import TableExtractionStrategy, DefaultTableExtraction
from .cache_context import CacheMode
from .proxy_strategy import ProxyRotationStrategy
@@ -982,6 +983,8 @@ class CrawlerRunConfig():
Default: False.
table_score_threshold (int): Minimum score threshold for processing a table.
Default: 7.
table_extraction (TableExtractionStrategy): Strategy to use for table extraction.
Default: DefaultTableExtraction with table_score_threshold.
# Virtual Scroll Parameters
virtual_scroll_config (VirtualScrollConfig or dict or None): Configuration for handling virtual scroll containers.
@@ -1108,6 +1111,7 @@ class CrawlerRunConfig():
image_description_min_word_threshold: int = IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD,
image_score_threshold: int = IMAGE_SCORE_THRESHOLD,
table_score_threshold: int = 7,
table_extraction: TableExtractionStrategy = None,
exclude_external_images: bool = False,
exclude_all_images: bool = False,
# Link and Domain Handling Parameters
@@ -1224,6 +1228,12 @@ class CrawlerRunConfig():
self.exclude_external_images = exclude_external_images
self.exclude_all_images = exclude_all_images
self.table_score_threshold = table_score_threshold
# Table extraction strategy (default to DefaultTableExtraction if not specified)
if table_extraction is None:
self.table_extraction = DefaultTableExtraction(table_score_threshold=table_score_threshold)
else:
self.table_extraction = table_extraction
# Link and Domain Handling Parameters
self.exclude_social_media_domains = (
@@ -1495,6 +1505,7 @@ class CrawlerRunConfig():
"image_score_threshold", IMAGE_SCORE_THRESHOLD
),
table_score_threshold=kwargs.get("table_score_threshold", 7),
table_extraction=kwargs.get("table_extraction", None),
exclude_all_images=kwargs.get("exclude_all_images", False),
exclude_external_images=kwargs.get("exclude_external_images", False),
# Link and Domain Handling Parameters
@@ -1603,6 +1614,7 @@ class CrawlerRunConfig():
"image_description_min_word_threshold": self.image_description_min_word_threshold,
"image_score_threshold": self.image_score_threshold,
"table_score_threshold": self.table_score_threshold,
"table_extraction": self.table_extraction,
"exclude_all_images": self.exclude_all_images,
"exclude_external_images": self.exclude_external_images,
"exclude_social_media_domains": self.exclude_social_media_domains,

View File

@@ -824,7 +824,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
except Error:
visibility_info = await self.check_visibility(page)
if self.browser_config.config.verbose:
if self.browser_config.verbose:
self.logger.debug(
message="Body visibility info: {info}",
tag="DEBUG",

View File

@@ -2129,3 +2129,265 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
return True # Default to scrolling if check fails
####################################################################################################
# HTTP Crawler Strategy
####################################################################################################
class HTTPCrawlerError(Exception):
"""Base error class for HTTP crawler specific exceptions"""
pass
class ConnectionTimeoutError(HTTPCrawlerError):
"""Raised when connection timeout occurs"""
pass
class HTTPStatusError(HTTPCrawlerError):
"""Raised for unexpected status codes"""
def __init__(self, status_code: int, message: str):
self.status_code = status_code
super().__init__(f"HTTP {status_code}: {message}")
class AsyncHTTPCrawlerStrategy(AsyncCrawlerStrategy):
"""
Fast, lightweight HTTP-only crawler strategy optimized for memory efficiency.
"""
__slots__ = ('logger', 'max_connections', 'dns_cache_ttl', 'chunk_size', '_session', 'hooks', 'browser_config')
DEFAULT_TIMEOUT: Final[int] = 30
DEFAULT_CHUNK_SIZE: Final[int] = 64 * 1024
DEFAULT_MAX_CONNECTIONS: Final[int] = min(32, (os.cpu_count() or 1) * 4)
DEFAULT_DNS_CACHE_TTL: Final[int] = 300
VALID_SCHEMES: Final = frozenset({'http', 'https', 'file', 'raw'})
_BASE_HEADERS: Final = MappingProxyType({
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
'Accept-Language': 'en-US,en;q=0.5',
'Accept-Encoding': 'gzip, deflate, br',
'Connection': 'keep-alive',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
})
def __init__(
self,
browser_config: Optional[HTTPCrawlerConfig] = None,
logger: Optional[AsyncLogger] = None,
max_connections: int = DEFAULT_MAX_CONNECTIONS,
dns_cache_ttl: int = DEFAULT_DNS_CACHE_TTL,
chunk_size: int = DEFAULT_CHUNK_SIZE
):
"""Initialize the HTTP crawler with config"""
self.browser_config = browser_config or HTTPCrawlerConfig()
self.logger = logger
self.max_connections = max_connections
self.dns_cache_ttl = dns_cache_ttl
self.chunk_size = chunk_size
self._session: Optional[aiohttp.ClientSession] = None
self.hooks = {
k: partial(self._execute_hook, k)
for k in ('before_request', 'after_request', 'on_error')
}
# Set default hooks
self.set_hook('before_request', lambda *args, **kwargs: None)
self.set_hook('after_request', lambda *args, **kwargs: None)
self.set_hook('on_error', lambda *args, **kwargs: None)
async def __aenter__(self) -> AsyncHTTPCrawlerStrategy:
await self.start()
return self
async def __aexit__(self, exc_type, exc_val, exc_tb) -> None:
await self.close()
@contextlib.asynccontextmanager
async def _session_context(self):
try:
if not self._session:
await self.start()
yield self._session
finally:
pass
def set_hook(self, hook_type: str, hook_func: Callable) -> None:
if hook_type in self.hooks:
self.hooks[hook_type] = partial(self._execute_hook, hook_type, hook_func)
else:
raise ValueError(f"Invalid hook type: {hook_type}")
async def _execute_hook(
self,
hook_type: str,
hook_func: Callable,
*args: Any,
**kwargs: Any
) -> Any:
if asyncio.iscoroutinefunction(hook_func):
return await hook_func(*args, **kwargs)
return hook_func(*args, **kwargs)
async def start(self) -> None:
if not self._session:
connector = aiohttp.TCPConnector(
limit=self.max_connections,
ttl_dns_cache=self.dns_cache_ttl,
use_dns_cache=True,
force_close=False
)
self._session = aiohttp.ClientSession(
headers=dict(self._BASE_HEADERS),
connector=connector,
timeout=ClientTimeout(total=self.DEFAULT_TIMEOUT)
)
async def close(self) -> None:
if self._session and not self._session.closed:
try:
await asyncio.wait_for(self._session.close(), timeout=5.0)
except asyncio.TimeoutError:
if self.logger:
self.logger.warning(
message="Session cleanup timed out",
tag="CLEANUP"
)
finally:
self._session = None
async def _stream_file(self, path: str) -> AsyncGenerator[memoryview, None]:
async with aiofiles.open(path, mode='rb') as f:
while chunk := await f.read(self.chunk_size):
yield memoryview(chunk)
async def _handle_file(self, path: str) -> AsyncCrawlResponse:
if not os.path.exists(path):
raise FileNotFoundError(f"Local file not found: {path}")
chunks = []
async for chunk in self._stream_file(path):
chunks.append(chunk.tobytes().decode('utf-8', errors='replace'))
return AsyncCrawlResponse(
html=''.join(chunks),
response_headers={},
status_code=200
)
async def _handle_raw(self, content: str) -> AsyncCrawlResponse:
return AsyncCrawlResponse(
html=content,
response_headers={},
status_code=200
)
async def _handle_http(
self,
url: str,
config: CrawlerRunConfig
) -> AsyncCrawlResponse:
async with self._session_context() as session:
timeout = ClientTimeout(
total=config.page_timeout or self.DEFAULT_TIMEOUT,
connect=10,
sock_read=30
)
headers = dict(self._BASE_HEADERS)
if self.browser_config.headers:
headers.update(self.browser_config.headers)
request_kwargs = {
'timeout': timeout,
'allow_redirects': self.browser_config.follow_redirects,
'ssl': self.browser_config.verify_ssl,
'headers': headers
}
if self.browser_config.method == "POST":
if self.browser_config.data:
request_kwargs['data'] = self.browser_config.data
if self.browser_config.json:
request_kwargs['json'] = self.browser_config.json
await self.hooks['before_request'](url, request_kwargs)
try:
async with session.request(self.browser_config.method, url, **request_kwargs) as response:
content = memoryview(await response.read())
if not (200 <= response.status < 300):
raise HTTPStatusError(
response.status,
f"Unexpected status code for {url}"
)
encoding = response.charset
if not encoding:
encoding = chardet.detect(content.tobytes())['encoding'] or 'utf-8'
result = AsyncCrawlResponse(
html=content.tobytes().decode(encoding, errors='replace'),
response_headers=dict(response.headers),
status_code=response.status,
redirected_url=str(response.url)
)
await self.hooks['after_request'](result)
return result
except aiohttp.ServerTimeoutError as e:
await self.hooks['on_error'](e)
raise ConnectionTimeoutError(f"Request timed out: {str(e)}")
except aiohttp.ClientConnectorError as e:
await self.hooks['on_error'](e)
raise ConnectionError(f"Connection failed: {str(e)}")
except aiohttp.ClientError as e:
await self.hooks['on_error'](e)
raise HTTPCrawlerError(f"HTTP client error: {str(e)}")
except asyncio.exceptions.TimeoutError as e:
await self.hooks['on_error'](e)
raise ConnectionTimeoutError(f"Request timed out: {str(e)}")
except Exception as e:
await self.hooks['on_error'](e)
raise HTTPCrawlerError(f"HTTP request failed: {str(e)}")
async def crawl(
self,
url: str,
config: Optional[CrawlerRunConfig] = None,
**kwargs
) -> AsyncCrawlResponse:
config = config or CrawlerRunConfig.from_kwargs(kwargs)
parsed = urlparse(url)
scheme = parsed.scheme.rstrip('/')
if scheme not in self.VALID_SCHEMES:
raise ValueError(f"Unsupported URL scheme: {scheme}")
try:
if scheme == 'file':
return await self._handle_file(parsed.path)
elif scheme == 'raw':
return await self._handle_raw(parsed.path)
else: # http or https
return await self._handle_http(url, config)
except Exception as e:
if self.logger:
self.logger.error(
message="Crawl failed: {error}",
tag="CRAWL",
params={"error": str(e), "url": url}
)
raise

View File

@@ -22,7 +22,7 @@ from urllib.parse import urlparse
import random
from abc import ABC, abstractmethod
from .memory_utils import get_true_memory_usage_percent
from .utils import get_true_memory_usage_percent
class RateLimiter:

View File

@@ -586,117 +586,6 @@ class LXMLWebScrapingStrategy(ContentScrapingStrategy):
return root
def is_data_table(self, table: etree.Element, **kwargs) -> bool:
score = 0
# Check for thead and tbody
has_thead = len(table.xpath(".//thead")) > 0
has_tbody = len(table.xpath(".//tbody")) > 0
if has_thead:
score += 2
if has_tbody:
score += 1
# Check for th elements
th_count = len(table.xpath(".//th"))
if th_count > 0:
score += 2
if has_thead or table.xpath(".//tr[1]/th"):
score += 1
# Check for nested tables
if len(table.xpath(".//table")) > 0:
score -= 3
# Role attribute check
role = table.get("role", "").lower()
if role in {"presentation", "none"}:
score -= 3
# Column consistency
rows = table.xpath(".//tr")
if not rows:
return False
col_counts = [len(row.xpath(".//td|.//th")) for row in rows]
avg_cols = sum(col_counts) / len(col_counts)
variance = sum((c - avg_cols)**2 for c in col_counts) / len(col_counts)
if variance < 1:
score += 2
# Caption and summary
if table.xpath(".//caption"):
score += 2
if table.get("summary"):
score += 1
# Text density
total_text = sum(len(''.join(cell.itertext()).strip()) for row in rows for cell in row.xpath(".//td|.//th"))
total_tags = sum(1 for _ in table.iterdescendants())
text_ratio = total_text / (total_tags + 1e-5)
if text_ratio > 20:
score += 3
elif text_ratio > 10:
score += 2
# Data attributes
data_attrs = sum(1 for attr in table.attrib if attr.startswith('data-'))
score += data_attrs * 0.5
# Size check
if avg_cols >= 2 and len(rows) >= 2:
score += 2
threshold = kwargs.get("table_score_threshold", 7)
return score >= threshold
def extract_table_data(self, table: etree.Element) -> dict:
caption = table.xpath(".//caption/text()")
caption = caption[0].strip() if caption else ""
summary = table.get("summary", "").strip()
# Extract headers with colspan handling
headers = []
thead_rows = table.xpath(".//thead/tr")
if thead_rows:
header_cells = thead_rows[0].xpath(".//th")
for cell in header_cells:
text = cell.text_content().strip()
colspan = int(cell.get("colspan", 1))
headers.extend([text] * colspan)
else:
first_row = table.xpath(".//tr[1]")
if first_row:
for cell in first_row[0].xpath(".//th|.//td"):
text = cell.text_content().strip()
colspan = int(cell.get("colspan", 1))
headers.extend([text] * colspan)
# Extract rows with colspan handling
rows = []
for row in table.xpath(".//tr[not(ancestor::thead)]"):
row_data = []
for cell in row.xpath(".//td"):
text = cell.text_content().strip()
colspan = int(cell.get("colspan", 1))
row_data.extend([text] * colspan)
if row_data:
rows.append(row_data)
# Align rows with headers
max_columns = len(headers) if headers else (max(len(row) for row in rows) if rows else 0)
aligned_rows = []
for row in rows:
aligned = row[:max_columns] + [''] * (max_columns - len(row))
aligned_rows.append(aligned)
if not headers:
headers = [f"Column {i+1}" for i in range(max_columns)]
return {
"headers": headers,
"rows": aligned_rows,
"caption": caption,
"summary": summary,
}
def _scrap(
self,
@@ -839,12 +728,16 @@ class LXMLWebScrapingStrategy(ContentScrapingStrategy):
**kwargs,
)
# Extract tables using the table extraction strategy if provided
if 'table' not in excluded_tags:
tables = body.xpath(".//table")
for table in tables:
if self.is_data_table(table, **kwargs):
table_data = self.extract_table_data(table)
media["tables"].append(table_data)
table_extraction = kwargs.get('table_extraction')
if table_extraction:
# Pass logger to the strategy if it doesn't have one
if not table_extraction.logger:
table_extraction.logger = self.logger
# Extract tables using the strategy
extracted_tables = table_extraction.extract_tables(body, **kwargs)
media["tables"].extend(extracted_tables)
# Handle only_text option
if kwargs.get("only_text", False):

View File

@@ -1,79 +0,0 @@
import psutil
import platform
import subprocess
from typing import Tuple
def get_true_available_memory_gb() -> float:
"""Get truly available memory including inactive pages (cross-platform)"""
vm = psutil.virtual_memory()
if platform.system() == 'Darwin': # macOS
# On macOS, we need to include inactive memory too
try:
# Use vm_stat to get accurate values
result = subprocess.run(['vm_stat'], capture_output=True, text=True)
lines = result.stdout.split('\n')
page_size = 16384 # macOS page size
pages = {}
for line in lines:
if 'Pages free:' in line:
pages['free'] = int(line.split()[-1].rstrip('.'))
elif 'Pages inactive:' in line:
pages['inactive'] = int(line.split()[-1].rstrip('.'))
elif 'Pages speculative:' in line:
pages['speculative'] = int(line.split()[-1].rstrip('.'))
elif 'Pages purgeable:' in line:
pages['purgeable'] = int(line.split()[-1].rstrip('.'))
# Calculate total available (free + inactive + speculative + purgeable)
total_available_pages = (
pages.get('free', 0) +
pages.get('inactive', 0) +
pages.get('speculative', 0) +
pages.get('purgeable', 0)
)
available_gb = (total_available_pages * page_size) / (1024**3)
return available_gb
except:
# Fallback to psutil
return vm.available / (1024**3)
else:
# For Windows and Linux, psutil.available is accurate
return vm.available / (1024**3)
def get_true_memory_usage_percent() -> float:
"""
Get memory usage percentage that accounts for platform differences.
Returns:
float: Memory usage percentage (0-100)
"""
vm = psutil.virtual_memory()
total_gb = vm.total / (1024**3)
available_gb = get_true_available_memory_gb()
# Calculate used percentage based on truly available memory
used_percent = 100.0 * (total_gb - available_gb) / total_gb
# Ensure it's within valid range
return max(0.0, min(100.0, used_percent))
def get_memory_stats() -> Tuple[float, float, float]:
"""
Get comprehensive memory statistics.
Returns:
Tuple[float, float, float]: (used_percent, available_gb, total_gb)
"""
vm = psutil.virtual_memory()
total_gb = vm.total / (1024**3)
available_gb = get_true_available_memory_gb()
used_percent = get_true_memory_usage_percent()
return used_percent, available_gb, total_gb

1396
crawl4ai/table_extraction.py Normal file

File diff suppressed because it is too large Load Diff

View File

@@ -16,7 +16,7 @@ from .config import MIN_WORD_THRESHOLD, IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD, IM
import httpx
from socket import gaierror
from pathlib import Path
from typing import Dict, Any, List, Optional, Callable
from typing import Dict, Any, List, Optional, Callable, Generator, Tuple, Iterable
from urllib.parse import urljoin
import requests
from requests.exceptions import InvalidSchema
@@ -40,8 +40,7 @@ from typing import Sequence
from itertools import chain
from collections import deque
from typing import Generator, Iterable
import psutil
import numpy as np
from urllib.parse import (
@@ -2185,8 +2184,10 @@ def normalize_url(
netloc = parsed.netloc.lower()
# ── path ──
# Strip duplicate slashes and trailing “/” (except root)
path = quote(unquote(parsed.path))
# Strip duplicate slashes and trailing "/" (except root)
# IMPORTANT: Don't use quote(unquote()) as it mangles + signs in URLs
# The path from urlparse is already properly encoded
path = parsed.path
if path.endswith('/') and path != '/':
path = path.rstrip('/')
@@ -3414,3 +3415,79 @@ def cosine_distance(vec1: np.ndarray, vec2: np.ndarray) -> float:
"""Calculate cosine distance (1 - similarity) between two vectors"""
return 1 - cosine_similarity(vec1, vec2)
# Memory utilities
def get_true_available_memory_gb() -> float:
"""Get truly available memory including inactive pages (cross-platform)"""
vm = psutil.virtual_memory()
if platform.system() == 'Darwin': # macOS
# On macOS, we need to include inactive memory too
try:
# Use vm_stat to get accurate values
result = subprocess.run(['vm_stat'], capture_output=True, text=True)
lines = result.stdout.split('\n')
page_size = 16384 # macOS page size
pages = {}
for line in lines:
if 'Pages free:' in line:
pages['free'] = int(line.split()[-1].rstrip('.'))
elif 'Pages inactive:' in line:
pages['inactive'] = int(line.split()[-1].rstrip('.'))
elif 'Pages speculative:' in line:
pages['speculative'] = int(line.split()[-1].rstrip('.'))
elif 'Pages purgeable:' in line:
pages['purgeable'] = int(line.split()[-1].rstrip('.'))
# Calculate total available (free + inactive + speculative + purgeable)
total_available_pages = (
pages.get('free', 0) +
pages.get('inactive', 0) +
pages.get('speculative', 0) +
pages.get('purgeable', 0)
)
available_gb = (total_available_pages * page_size) / (1024**3)
return available_gb
except:
# Fallback to psutil
return vm.available / (1024**3)
else:
# For Windows and Linux, psutil.available is accurate
return vm.available / (1024**3)
def get_true_memory_usage_percent() -> float:
"""
Get memory usage percentage that accounts for platform differences.
Returns:
float: Memory usage percentage (0-100)
"""
vm = psutil.virtual_memory()
total_gb = vm.total / (1024**3)
available_gb = get_true_available_memory_gb()
# Calculate used percentage based on truly available memory
used_percent = 100.0 * (total_gb - available_gb) / total_gb
# Ensure it's within valid range
return max(0.0, min(100.0, used_percent))
def get_memory_stats() -> Tuple[float, float, float]:
"""
Get comprehensive memory statistics.
Returns:
Tuple[float, float, float]: (used_percent, available_gb, total_gb)
"""
vm = psutil.virtual_memory()
total_gb = vm.total / (1024**3)
available_gb = get_true_available_memory_gb()
used_percent = get_true_memory_usage_percent()
return used_percent, available_gb, total_gb

View File

@@ -10,4 +10,23 @@ GEMINI_API_TOKEN=your_gemini_key_here
# Optional: Override the default LLM provider
# Examples: "openai/gpt-4", "anthropic/claude-3-opus", "deepseek/chat", etc.
# If not set, uses the provider specified in config.yml (default: openai/gpt-4o-mini)
# LLM_PROVIDER=anthropic/claude-3-opus
# LLM_PROVIDER=anthropic/claude-3-opus
# Optional: Global LLM temperature setting (0.0-2.0)
# Controls randomness in responses. Lower = more focused, Higher = more creative
# LLM_TEMPERATURE=0.7
# Optional: Global custom API base URL
# Use this to point to custom endpoints or proxy servers
# LLM_BASE_URL=https://api.custom.com/v1
# Optional: Provider-specific temperature overrides
# These take precedence over the global LLM_TEMPERATURE
# OPENAI_TEMPERATURE=0.5
# ANTHROPIC_TEMPERATURE=0.3
# GROQ_TEMPERATURE=0.8
# Optional: Provider-specific base URL overrides
# Use for provider-specific proxy endpoints
# OPENAI_BASE_URL=https://custom-openai.company.com/v1
# GROQ_BASE_URL=https://custom-groq.company.com/v1

View File

@@ -692,8 +692,7 @@ app:
# Default LLM Configuration
llm:
provider: "openai/gpt-4o-mini" # Can be overridden by LLM_PROVIDER env var
api_key_env: "OPENAI_API_KEY"
# api_key: sk-... # If you pass the API key directly then api_key_env will be ignored
# api_key: sk-... # If you pass the API key directly (not recommended)
# Redis Configuration (Used by internal Redis server managed by supervisord)
redis:

View File

@@ -4,7 +4,7 @@ import asyncio
from typing import List, Tuple, Dict
from functools import partial
from uuid import uuid4
from datetime import datetime
from datetime import datetime, timezone
from base64 import b64encode
import logging
@@ -42,7 +42,9 @@ from utils import (
should_cleanup_task,
decode_redis_hash,
get_llm_api_key,
validate_llm_provider
validate_llm_provider,
get_llm_temperature,
get_llm_base_url
)
import psutil, time
@@ -96,7 +98,9 @@ async def handle_llm_qa(
response = perform_completion_with_backoff(
provider=config["llm"]["provider"],
prompt_with_variables=prompt,
api_token=get_llm_api_key(config)
api_token=get_llm_api_key(config), # Returns None to let litellm handle it
temperature=get_llm_temperature(config),
base_url=get_llm_base_url(config)
)
return response.choices[0].message.content
@@ -115,7 +119,9 @@ async def process_llm_extraction(
instruction: str,
schema: Optional[str] = None,
cache: str = "0",
provider: Optional[str] = None
provider: Optional[str] = None,
temperature: Optional[float] = None,
base_url: Optional[str] = None
) -> None:
"""Process LLM extraction in background."""
try:
@@ -127,11 +133,13 @@ async def process_llm_extraction(
"error": error_msg
})
return
api_key = get_llm_api_key(config, provider)
api_key = get_llm_api_key(config, provider) # Returns None to let litellm handle it
llm_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(
provider=provider or config["llm"]["provider"],
api_token=api_key
api_token=api_key,
temperature=temperature or get_llm_temperature(config, provider),
base_url=base_url or get_llm_base_url(config, provider)
),
instruction=instruction,
schema=json.loads(schema) if schema else None,
@@ -178,7 +186,9 @@ async def handle_markdown_request(
query: Optional[str] = None,
cache: str = "0",
config: Optional[dict] = None,
provider: Optional[str] = None
provider: Optional[str] = None,
temperature: Optional[float] = None,
base_url: Optional[str] = None
) -> str:
"""Handle markdown generation requests."""
try:
@@ -203,7 +213,9 @@ async def handle_markdown_request(
FilterType.LLM: LLMContentFilter(
llm_config=LLMConfig(
provider=provider or config["llm"]["provider"],
api_token=get_llm_api_key(config, provider),
api_token=get_llm_api_key(config, provider), # Returns None to let litellm handle it
temperature=temperature or get_llm_temperature(config, provider),
base_url=base_url or get_llm_base_url(config, provider)
),
instruction=query or "Extract main content"
)
@@ -248,7 +260,9 @@ async def handle_llm_request(
schema: Optional[str] = None,
cache: str = "0",
config: Optional[dict] = None,
provider: Optional[str] = None
provider: Optional[str] = None,
temperature: Optional[float] = None,
api_base_url: Optional[str] = None
) -> JSONResponse:
"""Handle LLM extraction requests."""
base_url = get_base_url(request)
@@ -279,7 +293,9 @@ async def handle_llm_request(
cache,
base_url,
config,
provider
provider,
temperature,
api_base_url
)
except Exception as e:
@@ -324,7 +340,9 @@ async def create_new_task(
cache: str,
base_url: str,
config: dict,
provider: Optional[str] = None
provider: Optional[str] = None,
temperature: Optional[float] = None,
api_base_url: Optional[str] = None
) -> JSONResponse:
"""Create and initialize a new task."""
decoded_url = unquote(input_path)
@@ -349,7 +367,9 @@ async def create_new_task(
query,
schema,
cache,
provider
provider,
temperature,
api_base_url
)
return JSONResponse({
@@ -576,7 +596,7 @@ async def handle_crawl_job(
task_id = f"crawl_{uuid4().hex[:8]}"
await redis.hset(f"task:{task_id}", mapping={
"status": TaskStatus.PROCESSING, # <-- keep enum values consistent
"created_at": datetime.utcnow().isoformat(),
"created_at": datetime.now(timezone.utc).replace(tzinfo=None).isoformat(),
"url": json.dumps(urls), # store list as JSON string
"result": "",
"error": "",

View File

@@ -11,8 +11,7 @@ app:
# Default LLM Configuration
llm:
provider: "openai/gpt-4o-mini"
api_key_env: "OPENAI_API_KEY"
# api_key: sk-... # If you pass the API key directly then api_key_env will be ignored
# api_key: sk-... # If you pass the API key directly (not recommended)
# Redis Configuration
redis:

View File

@@ -37,6 +37,8 @@ class LlmJobPayload(BaseModel):
schema: Optional[str] = None
cache: bool = False
provider: Optional[str] = None
temperature: Optional[float] = None
base_url: Optional[str] = None
class CrawlJobPayload(BaseModel):
@@ -63,6 +65,8 @@ async def llm_job_enqueue(
cache=payload.cache,
config=_config,
provider=payload.provider,
temperature=payload.temperature,
api_base_url=payload.base_url,
)
@@ -72,7 +76,7 @@ async def llm_job_status(
task_id: str,
_td: Dict = Depends(lambda: _token_dep())
):
return await handle_task_status(_redis, task_id)
return await handle_task_status(_redis, task_id, base_url=str(request.base_url))
# ---------- CRAWL job -------------------------------------------------------

View File

@@ -16,6 +16,8 @@ class MarkdownRequest(BaseModel):
q: Optional[str] = Field(None, description="Query string used by BM25/LLM filters")
c: Optional[str] = Field("0", description="Cachebust / revision counter")
provider: Optional[str] = Field(None, description="LLM provider override (e.g., 'anthropic/claude-3-opus')")
temperature: Optional[float] = Field(None, description="LLM temperature override (0.0-2.0)")
base_url: Optional[str] = Field(None, description="LLM API base URL override")
class RawCode(BaseModel):

View File

@@ -241,7 +241,8 @@ async def get_markdown(
raise HTTPException(
400, "Invalid URL format. Must start with http://, https://, or for raw HTML (raw:, raw://)")
markdown = await handle_markdown_request(
body.url, body.f, body.q, body.c, config, body.provider
body.url, body.f, body.q, body.c, config, body.provider,
body.temperature, body.base_url
)
return JSONResponse({
"url": body.url,

View File

@@ -71,7 +71,7 @@ def decode_redis_hash(hash_data: Dict[bytes, bytes]) -> Dict[str, str]:
def get_llm_api_key(config: Dict, provider: Optional[str] = None) -> str:
def get_llm_api_key(config: Dict, provider: Optional[str] = None) -> Optional[str]:
"""Get the appropriate API key based on the LLM provider.
Args:
@@ -79,19 +79,14 @@ def get_llm_api_key(config: Dict, provider: Optional[str] = None) -> str:
provider: Optional provider override (e.g., "openai/gpt-4")
Returns:
The API key for the provider, or empty string if not found
The API key if directly configured, otherwise None to let litellm handle it
"""
# Use provided provider or fall back to config
if not provider:
provider = config["llm"]["provider"]
# Check if direct API key is configured
# Check if direct API key is configured (for backward compatibility)
if "api_key" in config["llm"]:
return config["llm"]["api_key"]
# Fall back to the configured api_key_env if no match
return os.environ.get(config["llm"].get("api_key_env", ""), "")
# Return None - litellm will automatically find the right environment variable
return None
def validate_llm_provider(config: Dict, provider: Optional[str] = None) -> tuple[bool, str]:
@@ -104,19 +99,78 @@ def validate_llm_provider(config: Dict, provider: Optional[str] = None) -> tuple
Returns:
Tuple of (is_valid, error_message)
"""
# Use provided provider or fall back to config
if not provider:
provider = config["llm"]["provider"]
# Get the API key for this provider
api_key = get_llm_api_key(config, provider)
if not api_key:
return False, f"No API key found for provider '{provider}'. Please set the appropriate environment variable."
# If a direct API key is configured, validation passes
if "api_key" in config["llm"]:
return True, ""
# Otherwise, trust that litellm will find the appropriate environment variable
# We can't easily validate this without reimplementing litellm's logic
return True, ""
def get_llm_temperature(config: Dict, provider: Optional[str] = None) -> Optional[float]:
"""Get temperature setting based on the LLM provider.
Priority order:
1. Provider-specific environment variable (e.g., OPENAI_TEMPERATURE)
2. Global LLM_TEMPERATURE environment variable
3. None (to use litellm/provider defaults)
Args:
config: The application configuration dictionary
provider: Optional provider override (e.g., "openai/gpt-4")
Returns:
The temperature setting if configured, otherwise None
"""
# Check provider-specific temperature first
if provider:
provider_name = provider.split('/')[0].upper()
provider_temp = os.environ.get(f"{provider_name}_TEMPERATURE")
if provider_temp:
try:
return float(provider_temp)
except ValueError:
logging.warning(f"Invalid temperature value for {provider_name}: {provider_temp}")
# Check global LLM_TEMPERATURE
global_temp = os.environ.get("LLM_TEMPERATURE")
if global_temp:
try:
return float(global_temp)
except ValueError:
logging.warning(f"Invalid global temperature value: {global_temp}")
# Return None to use litellm/provider defaults
return None
def get_llm_base_url(config: Dict, provider: Optional[str] = None) -> Optional[str]:
"""Get base URL setting based on the LLM provider.
Priority order:
1. Provider-specific environment variable (e.g., OPENAI_BASE_URL)
2. Global LLM_BASE_URL environment variable
3. None (to use default endpoints)
Args:
config: The application configuration dictionary
provider: Optional provider override (e.g., "openai/gpt-4")
Returns:
The base URL if configured, otherwise None
"""
# Check provider-specific base URL first
if provider:
provider_name = provider.split('/')[0].upper()
provider_url = os.environ.get(f"{provider_name}_BASE_URL")
if provider_url:
return provider_url
# Check global LLM_BASE_URL
return os.environ.get("LLM_BASE_URL")
def verify_email_domain(email: str) -> bool:
try:
domain = email.split('@')[1]

305
docs/blog/release-v0.7.4.md Normal file
View File

@@ -0,0 +1,305 @@
# 🚀 Crawl4AI v0.7.4: The Intelligent Table Extraction & Performance Update
*August 17, 2025 • 6 min read*
---
Today I'm releasing Crawl4AI v0.7.4—the Intelligent Table Extraction & Performance Update. This release introduces revolutionary LLM-powered table extraction with intelligent chunking, significant performance improvements for concurrent crawling, enhanced browser management, and critical stability fixes that make Crawl4AI more robust for production workloads.
## 🎯 What's New at a Glance
- **🚀 LLMTableExtraction**: Revolutionary table extraction with intelligent chunking for massive tables
- **⚡ Enhanced Concurrency**: True concurrency improvements for fast-completing tasks in batch operations
- **🧹 Memory Management Refactor**: Streamlined memory utilities and better resource management
- **🔧 Browser Manager Fixes**: Resolved race conditions in concurrent page creation
- **⌨️ Cross-Platform Browser Profiler**: Improved keyboard handling and quit mechanisms
- **🔗 Advanced URL Processing**: Better handling of raw URLs and base tag link resolution
- **🛡️ Enhanced Proxy Support**: Flexible proxy configuration with dict and string formats
- **🐳 Docker Improvements**: Better API handling and raw HTML support
## 🚀 LLMTableExtraction: Revolutionary Table Processing
**The Problem:** Complex tables with rowspan, colspan, nested structures, or massive datasets that traditional HTML parsing can't handle effectively. Large tables that exceed token limits crash extraction processes.
**My Solution:** I developed LLMTableExtraction—an intelligent table extraction strategy that uses Large Language Models with automatic chunking to handle tables of any size and complexity.
### Technical Implementation
```python
from crawl4ai import (
AsyncWebCrawler,
CrawlerRunConfig,
LLMConfig,
LLMTableExtraction,
CacheMode
)
# Configure LLM for table extraction
llm_config = LLMConfig(
provider="openai/gpt-4.1-mini",
api_token="env:OPENAI_API_KEY",
temperature=0.1, # Low temperature for consistency
max_tokens=32000
)
# Create intelligent table extraction strategy
table_strategy = LLMTableExtraction(
llm_config=llm_config,
verbose=True,
max_tries=2,
enable_chunking=True, # Handle massive tables
chunk_token_threshold=5000, # Smart chunking threshold
overlap_threshold=100, # Maintain context between chunks
extraction_type="structured" # Get structured data output
)
# Apply to crawler configuration
config = CrawlerRunConfig(
table_extraction_strategy=table_strategy,
cache_mode=CacheMode.BYPASS
)
async with AsyncWebCrawler() as crawler:
# Extract complex tables with intelligence
result = await crawler.arun(
"https://en.wikipedia.org/wiki/List_of_countries_by_GDP",
config=config
)
# Access extracted tables directly
for i, table in enumerate(result.tables):
print(f"Table {i}: {len(table['data'])} rows × {len(table['headers'])} columns")
# Convert to pandas DataFrame instantly
import pandas as pd
df = pd.DataFrame(table['data'], columns=table['headers'])
print(df.head())
```
**Intelligent Chunking for Massive Tables:**
```python
# Handle tables that exceed token limits
large_table_strategy = LLMTableExtraction(
llm_config=llm_config,
enable_chunking=True,
chunk_token_threshold=3000, # Conservative threshold
overlap_threshold=150, # Preserve context
max_concurrent_chunks=3, # Parallel processing
merge_strategy="intelligent" # Smart chunk merging
)
# Process Wikipedia comparison tables, financial reports, etc.
config = CrawlerRunConfig(
table_extraction_strategy=large_table_strategy,
# Target specific table containers
css_selector="div.wikitable, table.sortable",
delay_before_return_html=2.0
)
result = await crawler.arun(
"https://en.wikipedia.org/wiki/Comparison_of_operating_systems",
config=config
)
# Tables are automatically chunked, processed, and merged
print(f"Extracted {len(result.tables)} complex tables")
for table in result.tables:
print(f"Merged table: {len(table['data'])} total rows")
```
**Advanced Features:**
- **Intelligent Chunking**: Automatically splits massive tables while preserving structure
- **Context Preservation**: Overlapping chunks maintain column relationships
- **Parallel Processing**: Concurrent chunk processing for speed
- **Smart Merging**: Reconstructs complete tables from processed chunks
- **Complex Structure Support**: Handles rowspan, colspan, nested tables
- **Metadata Extraction**: Captures table context, captions, and relationships
**Expected Real-World Impact:**
- **Financial Analysis**: Extract complex earnings tables and financial statements
- **Research & Academia**: Process large datasets from Wikipedia, research papers
- **E-commerce**: Handle product comparison tables with complex layouts
- **Government Data**: Extract census data, statistical tables from official sources
- **Competitive Intelligence**: Process competitor pricing and feature tables
## ⚡ Enhanced Concurrency: True Performance Gains
**The Problem:** The `arun_many()` method wasn't achieving true concurrency for fast-completing tasks, leading to sequential processing bottlenecks in batch operations.
**My Solution:** I implemented true concurrency improvements in the dispatcher that enable genuine parallel processing for fast-completing tasks.
### Performance Optimization
```python
# Before v0.7.4: Sequential-like behavior for fast tasks
# After v0.7.4: True concurrency
async with AsyncWebCrawler() as crawler:
# These will now run with true concurrency
urls = [
"https://httpbin.org/delay/1",
"https://httpbin.org/delay/1",
"https://httpbin.org/delay/1",
"https://httpbin.org/delay/1"
]
# Processes in truly parallel fashion
results = await crawler.arun_many(urls)
# Performance improvement: ~4x faster for fast-completing tasks
print(f"Processed {len(results)} URLs with true concurrency")
```
**Expected Real-World Impact:**
- **API Crawling**: 3-4x faster processing of REST endpoints and API documentation
- **Batch URL Processing**: Significant speedup for large URL lists
- **Monitoring Systems**: Faster health checks and status page monitoring
- **Data Aggregation**: Improved performance for real-time data collection
## 🧹 Memory Management Refactor: Cleaner Architecture
**The Problem:** Memory utilities were scattered and difficult to maintain, with potential import conflicts and unclear organization.
**My Solution:** I consolidated all memory-related utilities into the main `utils.py` module, creating a cleaner, more maintainable architecture.
### Improved Memory Handling
```python
# All memory utilities now consolidated
from crawl4ai.utils import get_true_memory_usage_percent, MemoryMonitor
# Enhanced memory monitoring
monitor = MemoryMonitor()
monitor.start_monitoring()
async with AsyncWebCrawler() as crawler:
# Memory-efficient batch processing
results = await crawler.arun_many(large_url_list)
# Get accurate memory metrics
memory_usage = get_true_memory_usage_percent()
memory_report = monitor.get_report()
print(f"Memory efficiency: {memory_report['efficiency']:.1f}%")
print(f"Peak usage: {memory_report['peak_mb']:.1f} MB")
```
**Expected Real-World Impact:**
- **Production Stability**: More reliable memory tracking and management
- **Code Maintainability**: Cleaner architecture for easier debugging
- **Import Clarity**: Resolved potential conflicts and import issues
- **Developer Experience**: Simpler API for memory monitoring
## 🔧 Critical Stability Fixes
### Browser Manager Race Condition Resolution
**The Problem:** Concurrent page creation in persistent browser contexts caused "Target page/context closed" errors during high-concurrency operations.
**My Solution:** Implemented thread-safe page creation with proper locking mechanisms.
```python
# Fixed: Safe concurrent page creation
browser_config = BrowserConfig(
browser_type="chromium",
use_persistent_context=True, # Now thread-safe
max_concurrent_sessions=10 # Safely handle concurrent requests
)
async with AsyncWebCrawler(config=browser_config) as crawler:
# These concurrent operations are now stable
tasks = [crawler.arun(url) for url in url_list]
results = await asyncio.gather(*tasks) # No more race conditions
```
### Enhanced Browser Profiler
**The Problem:** Inconsistent keyboard handling across platforms and unreliable quit mechanisms.
**My Solution:** Cross-platform keyboard listeners with improved quit handling.
### Advanced URL Processing
**The Problem:** Raw URL formats (`raw://` and `raw:`) weren't properly handled, and base tag link resolution was incomplete.
**My Solution:** Enhanced URL preprocessing and base tag support.
```python
# Now properly handles all URL formats
urls = [
"https://example.com",
"raw://static-html-content",
"raw:file://local-file.html"
]
# Base tag links are now correctly resolved
config = CrawlerRunConfig(
include_links=True, # Links properly resolved with base tags
resolve_absolute_urls=True
)
```
## 🛡️ Enhanced Proxy Configuration
**The Problem:** Proxy configuration only accepted specific formats, limiting flexibility.
**My Solution:** Enhanced ProxyConfig to support both dictionary and string formats.
```python
# Multiple proxy configuration formats now supported
from crawl4ai import BrowserConfig, ProxyConfig
# String format
proxy_config = ProxyConfig("http://proxy.example.com:8080")
# Dictionary format
proxy_config = ProxyConfig({
"server": "http://proxy.example.com:8080",
"username": "user",
"password": "pass"
})
# Use with crawler
browser_config = BrowserConfig(proxy_config=proxy_config)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun("https://httpbin.org/ip")
```
## 🐳 Docker & Infrastructure Improvements
This release includes several Docker and infrastructure improvements:
- **Better API Token Handling**: Improved Docker example scripts with correct endpoints
- **Raw HTML Support**: Enhanced Docker API to handle raw HTML content properly
- **Documentation Updates**: Comprehensive Docker deployment examples
- **Test Coverage**: Expanded test suite with better coverage
## 📚 Documentation & Examples
Enhanced documentation includes:
- **LLM Table Extraction Guide**: Comprehensive examples and best practices
- **Migration Documentation**: Updated patterns for new table extraction methods
- **Docker Deployment**: Complete deployment guide with examples
- **Performance Optimization**: Guidelines for concurrent crawling
## 🙏 Acknowledgments
Thanks to our contributors and community for feedback, bug reports, and feature requests that made this release possible.
## 📚 Resources
- [Full Documentation](https://docs.crawl4ai.com)
- [GitHub Repository](https://github.com/unclecode/crawl4ai)
- [Discord Community](https://discord.gg/crawl4ai)
- [LLM Table Extraction Examples](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/llm_table_extraction_example.py)
---
*Crawl4AI v0.7.4 delivers intelligent table extraction and significant performance improvements. The new LLMTableExtraction strategy handles complex tables that were previously impossible to process, while concurrency improvements make batch operations 3-4x faster. Try the intelligent table extraction—it's a game changer for data extraction workflows!*
**Happy Crawling! 🕷️**
*- The Crawl4AI Team*

View File

@@ -0,0 +1,356 @@
#!/usr/bin/env python3
"""
Example demonstrating LLM-based table extraction in Crawl4AI.
This example shows how to use the LLMTableExtraction strategy to extract
complex tables from web pages, including handling rowspan, colspan, and nested tables.
"""
import os
import sys
# Get the grandparent directory
grandparent_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
sys.path.append(grandparent_dir)
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
import asyncio
from crawl4ai import (
AsyncWebCrawler,
CrawlerRunConfig,
LLMConfig,
LLMTableExtraction,
CacheMode
)
import pandas as pd
# Example 1: Basic LLM Table Extraction
async def basic_llm_extraction():
"""Extract tables using LLM with default settings."""
print("\n=== Example 1: Basic LLM Table Extraction ===")
# Configure LLM (using OpenAI GPT-4o-mini for cost efficiency)
llm_config = LLMConfig(
provider="openai/gpt-4.1-mini",
api_token="env:OPENAI_API_KEY", # Uses environment variable
temperature=0.1, # Low temperature for consistency
max_tokens=32000
)
# Create LLM table extraction strategy
table_strategy = LLMTableExtraction(
llm_config=llm_config,
verbose=True,
# css_selector="div.mw-content-ltr",
max_tries=2,
enable_chunking=True,
chunk_token_threshold=5000, # Lower threshold to force chunking
min_rows_per_chunk=10,
max_parallel_chunks=3
)
# Configure crawler with the strategy
config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
table_extraction=table_strategy
)
async with AsyncWebCrawler() as crawler:
# Extract tables from a Wikipedia page
result = await crawler.arun(
url="https://en.wikipedia.org/wiki/List_of_chemical_elements",
config=config
)
if result.success:
print(f"✓ Found {len(result.tables)} tables")
# Display first table
if result.tables:
first_table = result.tables[0]
print(f"\nFirst table:")
print(f" Headers: {first_table['headers'][:5]}...")
print(f" Rows: {len(first_table['rows'])}")
# Convert to pandas DataFrame
df = pd.DataFrame(
first_table['rows'],
columns=first_table['headers']
)
print(f"\nDataFrame shape: {df.shape}")
print(df.head())
else:
print(f"✗ Extraction failed: {result.error}")
# Example 2: Focused Extraction with CSS Selector
async def focused_extraction():
"""Extract tables from specific page sections using CSS selectors."""
print("\n=== Example 2: Focused Extraction with CSS Selector ===")
# HTML with multiple tables
test_html = """
<html>
<body>
<div class="sidebar">
<table role="presentation">
<tr><td>Navigation</td></tr>
</table>
</div>
<div class="main-content">
<table id="data-table">
<caption>Quarterly Sales Report</caption>
<thead>
<tr>
<th rowspan="2">Product</th>
<th colspan="3">Q1 2024</th>
</tr>
<tr>
<th>Jan</th>
<th>Feb</th>
<th>Mar</th>
</tr>
</thead>
<tbody>
<tr>
<td>Widget A</td>
<td>100</td>
<td>120</td>
<td>140</td>
</tr>
<tr>
<td>Widget B</td>
<td>200</td>
<td>180</td>
<td>220</td>
</tr>
</tbody>
</table>
</div>
</body>
</html>
"""
llm_config = LLMConfig(
provider="openai/gpt-4.1-mini",
api_token="env:OPENAI_API_KEY"
)
# Focus only on main content area
table_strategy = LLMTableExtraction(
llm_config=llm_config,
css_selector=".main-content", # Only extract from main content
verbose=True
)
config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
table_extraction=table_strategy
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url=f"raw:{test_html}",
config=config
)
if result.success and result.tables:
table = result.tables[0]
print(f"✓ Extracted table: {table.get('caption', 'No caption')}")
print(f" Headers: {table['headers']}")
print(f" Metadata: {table['metadata']}")
# The LLM should have handled the rowspan/colspan correctly
print("\nProcessed data (rowspan/colspan handled):")
for i, row in enumerate(table['rows']):
print(f" Row {i+1}: {row}")
# Example 3: Comparing with Default Extraction
async def compare_strategies():
"""Compare LLM extraction with default extraction on complex tables."""
print("\n=== Example 3: Comparing LLM vs Default Extraction ===")
# Complex table with nested structure
complex_html = """
<html>
<body>
<table>
<tr>
<th rowspan="3">Category</th>
<th colspan="2">2023</th>
<th colspan="2">2024</th>
</tr>
<tr>
<th>H1</th>
<th>H2</th>
<th>H1</th>
<th>H2</th>
</tr>
<tr>
<td colspan="4">All values in millions</td>
</tr>
<tr>
<td>Revenue</td>
<td>100</td>
<td>120</td>
<td>130</td>
<td>145</td>
</tr>
<tr>
<td>Profit</td>
<td>20</td>
<td>25</td>
<td>28</td>
<td>32</td>
</tr>
</table>
</body>
</html>
"""
async with AsyncWebCrawler() as crawler:
# Test with default extraction
from crawl4ai import DefaultTableExtraction
default_strategy = DefaultTableExtraction(
table_score_threshold=3,
verbose=True
)
config_default = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
table_extraction=default_strategy
)
result_default = await crawler.arun(
url=f"raw:{complex_html}",
config=config_default
)
# Test with LLM extraction
llm_strategy = LLMTableExtraction(
llm_config=LLMConfig(
provider="openai/gpt-4.1-mini",
api_token="env:OPENAI_API_KEY"
),
verbose=True
)
config_llm = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
table_extraction=llm_strategy
)
result_llm = await crawler.arun(
url=f"raw:{complex_html}",
config=config_llm
)
# Compare results
print("\nDefault Extraction:")
if result_default.tables:
table = result_default.tables[0]
print(f" Headers: {table.get('headers', [])}")
print(f" Rows: {len(table.get('rows', []))}")
for i, row in enumerate(table.get('rows', [])[:3]):
print(f" Row {i+1}: {row}")
print("\nLLM Extraction (handles complex structure better):")
if result_llm.tables:
table = result_llm.tables[0]
print(f" Headers: {table.get('headers', [])}")
print(f" Rows: {len(table.get('rows', []))}")
for i, row in enumerate(table.get('rows', [])):
print(f" Row {i+1}: {row}")
print(f" Metadata: {table.get('metadata', {})}")
# Example 4: Batch Processing Multiple Pages
async def batch_extraction():
"""Extract tables from multiple pages efficiently."""
print("\n=== Example 4: Batch Table Extraction ===")
urls = [
"https://www.worldometers.info/geography/alphabetical-list-of-countries/",
# "https://en.wikipedia.org/wiki/List_of_chemical_elements",
]
llm_config = LLMConfig(
provider="openai/gpt-4.1-mini",
api_token="env:OPENAI_API_KEY",
temperature=0.1,
max_tokens=1500
)
table_strategy = LLMTableExtraction(
llm_config=llm_config,
css_selector="div.datatable-container", # Wikipedia data tables
verbose=False,
enable_chunking=True,
chunk_token_threshold=5000, # Lower threshold to force chunking
min_rows_per_chunk=10,
max_parallel_chunks=3
)
config = CrawlerRunConfig(
table_extraction=table_strategy,
cache_mode=CacheMode.BYPASS
)
all_tables = []
async with AsyncWebCrawler() as crawler:
for url in urls:
print(f"\nProcessing: {url.split('/')[-1][:50]}...")
result = await crawler.arun(url=url, config=config)
if result.success and result.tables:
print(f" ✓ Found {len(result.tables)} tables")
# Store first table from each page
if result.tables:
all_tables.append({
'url': url,
'table': result.tables[0]
})
# Summary
print(f"\n=== Summary ===")
print(f"Extracted {len(all_tables)} tables from {len(urls)} pages")
for item in all_tables:
table = item['table']
print(f"\nFrom {item['url'].split('/')[-1][:30]}:")
print(f" Columns: {len(table['headers'])}")
print(f" Rows: {len(table['rows'])}")
async def main():
"""Run all examples."""
print("=" * 60)
print("LLM TABLE EXTRACTION EXAMPLES")
print("=" * 60)
# Run examples (comment out ones you don't want to run)
# Basic extraction
await basic_llm_extraction()
# # Focused extraction with CSS
# await focused_extraction()
# # Compare strategies
# await compare_strategies()
# # Batch processing
# await batch_extraction()
print("\n" + "=" * 60)
print("ALL EXAMPLES COMPLETED")
print("=" * 60)
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,276 @@
"""
Example: Using Table Extraction Strategies in Crawl4AI
This example demonstrates how to use different table extraction strategies
to extract tables from web pages.
"""
import asyncio
import pandas as pd
from crawl4ai import (
AsyncWebCrawler,
CrawlerRunConfig,
CacheMode,
DefaultTableExtraction,
NoTableExtraction,
TableExtractionStrategy
)
from typing import Dict, List, Any
async def example_default_extraction():
"""Example 1: Using default table extraction (automatic)."""
print("\n" + "="*50)
print("Example 1: Default Table Extraction")
print("="*50)
async with AsyncWebCrawler() as crawler:
# No need to specify table_extraction - uses DefaultTableExtraction automatically
config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
table_score_threshold=7 # Adjust sensitivity (default: 7)
)
result = await crawler.arun(
"https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)",
config=config
)
if result.success and result.tables:
print(f"Found {len(result.tables)} tables")
# Convert first table to pandas DataFrame
if result.tables:
first_table = result.tables[0]
df = pd.DataFrame(
first_table['rows'],
columns=first_table['headers'] if first_table['headers'] else None
)
print(f"\nFirst table preview:")
print(df.head())
print(f"Shape: {df.shape}")
async def example_custom_configuration():
"""Example 2: Custom table extraction configuration."""
print("\n" + "="*50)
print("Example 2: Custom Table Configuration")
print("="*50)
async with AsyncWebCrawler() as crawler:
# Create custom extraction strategy with specific settings
table_strategy = DefaultTableExtraction(
table_score_threshold=5, # Lower threshold for more permissive detection
min_rows=3, # Only extract tables with at least 3 rows
min_cols=2, # Only extract tables with at least 2 columns
verbose=True
)
config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
table_extraction=table_strategy,
# Target specific tables using CSS selector
css_selector="div.main-content"
)
result = await crawler.arun(
"https://example.com/data",
config=config
)
if result.success:
print(f"Found {len(result.tables)} tables matching criteria")
for i, table in enumerate(result.tables):
print(f"\nTable {i+1}:")
print(f" Caption: {table.get('caption', 'No caption')}")
print(f" Size: {table['metadata']['row_count']} rows × {table['metadata']['column_count']} columns")
print(f" Has headers: {table['metadata']['has_headers']}")
async def example_disable_extraction():
"""Example 3: Disable table extraction when not needed."""
print("\n" + "="*50)
print("Example 3: Disable Table Extraction")
print("="*50)
async with AsyncWebCrawler() as crawler:
# Use NoTableExtraction to skip table processing entirely
config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
table_extraction=NoTableExtraction() # No tables will be extracted
)
result = await crawler.arun(
"https://example.com",
config=config
)
if result.success:
print(f"Tables extracted: {len(result.tables)} (should be 0)")
print("Table extraction disabled - better performance for non-table content")
class FinancialTableExtraction(TableExtractionStrategy):
"""
Custom strategy for extracting financial tables with specific requirements.
"""
def __init__(self, currency_symbols=None, **kwargs):
super().__init__(**kwargs)
self.currency_symbols = currency_symbols or ['$', '', '£', '¥']
def extract_tables(self, element, **kwargs):
"""Extract only tables that appear to contain financial data."""
tables_data = []
for table in element.xpath(".//table"):
# Check if table contains currency symbols
table_text = ''.join(table.itertext())
has_currency = any(symbol in table_text for symbol in self.currency_symbols)
if not has_currency:
continue
# Extract using base logic (could reuse DefaultTableExtraction logic)
headers = []
rows = []
# Extract headers
for th in table.xpath(".//thead//th | .//tr[1]//th"):
headers.append(th.text_content().strip())
# Extract rows
for tr in table.xpath(".//tbody//tr | .//tr[position()>1]"):
row = []
for td in tr.xpath(".//td"):
cell_text = td.text_content().strip()
# Clean currency values
for symbol in self.currency_symbols:
cell_text = cell_text.replace(symbol, '')
row.append(cell_text)
if row:
rows.append(row)
if headers or rows:
tables_data.append({
"headers": headers,
"rows": rows,
"caption": table.xpath(".//caption/text()")[0] if table.xpath(".//caption") else "",
"summary": table.get("summary", ""),
"metadata": {
"type": "financial",
"has_currency": True,
"row_count": len(rows),
"column_count": len(headers) if headers else len(rows[0]) if rows else 0
}
})
return tables_data
async def example_custom_strategy():
"""Example 4: Custom table extraction strategy."""
print("\n" + "="*50)
print("Example 4: Custom Financial Table Strategy")
print("="*50)
async with AsyncWebCrawler() as crawler:
# Use custom strategy for financial tables
config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
table_extraction=FinancialTableExtraction(
currency_symbols=['$', ''],
verbose=True
)
)
result = await crawler.arun(
"https://finance.yahoo.com/",
config=config
)
if result.success:
print(f"Found {len(result.tables)} financial tables")
for table in result.tables:
if table['metadata'].get('type') == 'financial':
print(f" ✓ Financial table with {table['metadata']['row_count']} rows")
async def example_combined_extraction():
"""Example 5: Combine table extraction with other strategies."""
print("\n" + "="*50)
print("Example 5: Combined Extraction Strategies")
print("="*50)
from crawl4ai import LLMExtractionStrategy, LLMConfig
async with AsyncWebCrawler() as crawler:
# Define schema for structured extraction
schema = {
"type": "object",
"properties": {
"page_title": {"type": "string"},
"main_topic": {"type": "string"},
"key_figures": {
"type": "array",
"items": {"type": "string"}
}
}
}
config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
# Table extraction
table_extraction=DefaultTableExtraction(
table_score_threshold=6,
min_rows=2
),
# LLM extraction for structured data
extraction_strategy=LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai"),
schema=schema
)
)
result = await crawler.arun(
"https://en.wikipedia.org/wiki/Economy_of_the_United_States",
config=config
)
if result.success:
print(f"Tables found: {len(result.tables)}")
# Tables are in result.tables
if result.tables:
print(f"First table has {len(result.tables[0]['rows'])} rows")
# Structured data is in result.extracted_content
if result.extracted_content:
import json
structured_data = json.loads(result.extracted_content)
print(f"Page title: {structured_data.get('page_title', 'N/A')}")
print(f"Main topic: {structured_data.get('main_topic', 'N/A')}")
async def main():
"""Run all examples."""
print("\n" + "="*60)
print("CRAWL4AI TABLE EXTRACTION EXAMPLES")
print("="*60)
# Run examples
await example_default_extraction()
await example_custom_configuration()
await example_disable_extraction()
await example_custom_strategy()
# await example_combined_extraction() # Requires OpenAI API key
print("\n" + "="*60)
print("EXAMPLES COMPLETED")
print("="*60)
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -20,136 +20,22 @@ Ever wondered why your AI coding assistant struggles with your library despite c
## Latest Release
### [Crawl4AI v0.7.3 The Multi-Config Intelligence Update](releases/0.7.3.md)
*August 6, 2025*
### [Crawl4AI v0.7.4 The Intelligent Table Extraction & Performance Update](../blog/release-v0.7.4.md)
*August 17, 2025*
Crawl4AI v0.7.3 brings smarter URL-specific configurations, flexible Docker deployments, and critical stability improvements. Configure different crawling strategies for different URL patterns in a single batch—perfect for mixed content sites with docs, blogs, and APIs.
Crawl4AI v0.7.4 introduces revolutionary LLM-powered table extraction with intelligent chunking, performance improvements for concurrent crawling, enhanced browser management, and critical stability fixes that make Crawl4AI more robust for production workloads.
Key highlights:
- **Multi-URL Configurations**: Different strategies for different URL patterns in one crawl
- **Flexible Docker LLM Providers**: Configure providers via environment variables
- **Bug Fixes**: Critical stability improvements for production deployments
- **Documentation Updates**: Clearer examples and improved API documentation
- **🚀 LLMTableExtraction**: Revolutionary table extraction with intelligent chunking for massive tables
- **⚡ Dispatcher Bug Fix**: Fixed sequential processing issue in arun_many for fast-completing tasks
- **🧹 Memory Management Refactor**: Streamlined memory utilities and better resource management
- **🔧 Browser Manager Fixes**: Resolved race conditions in concurrent page creation
- **🔗 Advanced URL Processing**: Better handling of raw URLs and base tag link resolution
[Read full release notes →](releases/0.7.3.md)
[Read full release notes →](../blog/release-v0.7.4.md)
---
## Previous Releases
### [Crawl4AI v0.7.0 The Adaptive Intelligence Update](releases/0.7.0.md)
*January 28, 2025*
Introduced groundbreaking intelligence features including Adaptive Crawling, Virtual Scroll support, intelligent Link Preview, and the Async URL Seeder for massive URL discovery.
[Read release notes →](releases/0.7.0.md)
### [Crawl4AI v0.6.0 World-Aware Crawling, Pre-Warmed Browsers, and the MCP API](releases/0.6.0.md)
*December 23, 2024*
Crawl4AI v0.6.0 brought major architectural upgrades including world-aware crawling (set geolocation, locale, and timezone), real-time traffic capture, and a memory-efficient crawler pool with pre-warmed pages.
The Docker server now exposes a full-featured MCP socket + SSE interface, supports streaming, and comes with a new Playground UI. Plus, table extraction is now native, and the new stress-test framework supports crawling 1,000+ URLs.
Other key changes:
* Native support for `result.media["tables"]` to export DataFrames
* Full network + console logs and MHTML snapshot per crawl
* Browser pooling and pre-warming for faster cold starts
* New streaming endpoints via MCP API and Playground
* Robots.txt support, proxy rotation, and improved session handling
* Deprecated old markdown names, legacy modules cleaned up
* Massive repo cleanup: ~36K insertions, ~5K deletions across 121 files
[Read full release notes →](releases/0.6.0.md)
---
### [Crawl4AI v0.5.0: Deep Crawling, Scalability, and a New CLI!](releases/0.5.0.md)
My dear friends and crawlers, there you go, this is the release of Crawl4AI v0.5.0! This release brings a wealth of new features, performance improvements, and a more streamlined developer experience. Here's a breakdown of what's new:
**Major New Features:**
* **Deep Crawling:** Explore entire websites with configurable strategies (BFS, DFS, Best-First). Define custom filters and URL scoring for targeted crawls.
* **Memory-Adaptive Dispatcher:** Handle large-scale crawls with ease! Our new dispatcher dynamically adjusts concurrency based on available memory and includes built-in rate limiting.
* **Multiple Crawler Strategies:** Choose between the full-featured Playwright browser-based crawler or a new, *much* faster HTTP-only crawler for simpler tasks.
* **Docker Deployment:** Deploy Crawl4AI as a scalable, self-contained service with built-in API endpoints and optional JWT authentication.
* **Command-Line Interface (CLI):** Interact with Crawl4AI directly from your terminal. Crawl, configure, and extract data with simple commands.
* **LLM Configuration (`LLMConfig`):** A new, unified way to configure LLM providers (OpenAI, Anthropic, Ollama, etc.) for extraction, filtering, and schema generation. Simplifies API key management and switching between models.
**Minor Updates & Improvements:**
* **LXML Scraping Mode:** Faster HTML parsing with `LXMLWebScrapingStrategy`.
* **Proxy Rotation:** Added `ProxyRotationStrategy` with a `RoundRobinProxyStrategy` implementation.
* **PDF Processing:** Extract text, images, and metadata from PDF files.
* **URL Redirection Tracking:** Automatically follows and records redirects.
* **Robots.txt Compliance:** Optionally respect website crawling rules.
* **LLM-Powered Schema Generation:** Automatically create extraction schemas using an LLM.
* **`LLMContentFilter`:** Generate high-quality, focused markdown using an LLM.
* **Improved Error Handling & Stability:** Numerous bug fixes and performance enhancements.
* **Enhanced Documentation:** Updated guides and examples.
**Breaking Changes & Migration:**
This release includes several breaking changes to improve the library's structure and consistency. Here's what you need to know:
* **`arun_many()` Behavior:** Now uses the `MemoryAdaptiveDispatcher` by default. The return type depends on the `stream` parameter in `CrawlerRunConfig`. Adjust code that relied on unbounded concurrency.
* **`max_depth` Location:** Moved to `CrawlerRunConfig` and now controls *crawl depth*.
* **Deep Crawling Imports:** Import `DeepCrawlStrategy` and related classes from `crawl4ai.deep_crawling`.
* **`BrowserContext` API:** Updated; the old `get_context` method is deprecated.
* **Optional Model Fields:** Many data model fields are now optional. Handle potential `None` values.
* **`ScrapingMode` Enum:** Replaced with strategy pattern (`WebScrapingStrategy`, `LXMLWebScrapingStrategy`).
* **`content_filter` Parameter:** Removed from `CrawlerRunConfig`. Use extraction strategies or markdown generators with filters.
* **Removed Functionality:** The synchronous `WebCrawler`, the old CLI, and docs management tools have been removed.
* **Docker:** Significant changes to deployment. See the [Docker documentation](../deploy/docker/README.md).
* **`ssl_certificate.json`:** This file has been removed.
* **Config**: FastFilterChain has been replaced with FilterChain
* **Deep-Crawl**: DeepCrawlStrategy.arun now returns Union[CrawlResultT, List[CrawlResultT], AsyncGenerator[CrawlResultT, None]]
* **Proxy**: Removed synchronous WebCrawler support and related rate limiting configurations
* **LLM Parameters:** Use the new `LLMConfig` object instead of passing `provider`, `api_token`, `base_url`, and `api_base` directly to `LLMExtractionStrategy` and `LLMContentFilter`.
**In short:** Update imports, adjust `arun_many()` usage, check for optional fields, and review the Docker deployment guide.
## License Change
Crawl4AI v0.5.0 updates the license to Apache 2.0 *with a required attribution clause*. This means you are free to use, modify, and distribute Crawl4AI (even commercially), but you *must* clearly attribute the project in any public use or distribution. See the updated `LICENSE` file for the full legal text and specific requirements.
**Get Started:**
* **Installation:** `pip install "crawl4ai[all]"` (or use the Docker image)
* **Documentation:** [https://docs.crawl4ai.com](https://docs.crawl4ai.com)
* **GitHub:** [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)
I'm very excited to see what you build with Crawl4AI v0.5.0!
---
### [0.4.2 - Configurable Crawlers, Session Management, and Smarter Screenshots](releases/0.4.2.md)
*December 12, 2024*
The 0.4.2 update brings massive improvements to configuration, making crawlers and browsers easier to manage with dedicated objects. You can now import/export local storage for seamless session management. Plus, long-page screenshots are faster and cleaner, and full-page PDF exports are now possible. Check out all the new features to make your crawling experience even smoother.
[Read full release notes →](releases/0.4.2.md)
---
### [0.4.1 - Smarter Crawling with Lazy-Load Handling, Text-Only Mode, and More](releases/0.4.1.md)
*December 8, 2024*
This release brings major improvements to handling lazy-loaded images, a blazing-fast Text-Only Mode, full-page scanning for infinite scrolls, dynamic viewport adjustments, and session reuse for efficient crawling. If you're looking to improve speed, reliability, or handle dynamic content with ease, this update has you covered.
[Read full release notes →](releases/0.4.1.md)
---
### [0.4.0 - Major Content Filtering Update](releases/0.4.0.md)
*December 1, 2024*
Introduced significant improvements to content filtering, multi-threaded environment handling, and user-agent generation. This release features the new PruningContentFilter, enhanced thread safety, and improved test coverage.
[Read full release notes →](releases/0.4.0.md)
## Project History
Curious about how Crawl4AI has evolved? Check out our [complete changelog](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md) for a detailed history of all versions and updates.

View File

@@ -89,6 +89,16 @@ ANTHROPIC_API_KEY=your-anthropic-key
# TOGETHER_API_KEY=your-together-key
# MISTRAL_API_KEY=your-mistral-key
# GEMINI_API_TOKEN=your-gemini-token
# Optional: Global LLM settings
# LLM_PROVIDER=openai/gpt-4o-mini
# LLM_TEMPERATURE=0.7
# LLM_BASE_URL=https://api.custom.com/v1
# Optional: Provider-specific overrides
# OPENAI_TEMPERATURE=0.5
# OPENAI_BASE_URL=https://custom-openai.com/v1
# ANTHROPIC_TEMPERATURE=0.3
EOL
```
> 🔑 **Note**: Keep your API keys secure! Never commit `.llm.env` to version control.
@@ -156,27 +166,43 @@ cp deploy/docker/.llm.env.example .llm.env
**Flexible LLM Provider Configuration:**
The Docker setup now supports flexible LLM provider configuration through three methods:
The Docker setup now supports flexible LLM provider configuration through a hierarchical system:
1. **Environment Variable** (Highest Priority): Set `LLM_PROVIDER` to override the default
```bash
export LLM_PROVIDER="anthropic/claude-3-opus"
# Or in your .llm.env file:
# LLM_PROVIDER=anthropic/claude-3-opus
```
2. **API Request Parameter**: Specify provider per request
1. **API Request Parameters** (Highest Priority): Specify per request
```json
{
"url": "https://example.com",
"f": "llm",
"provider": "groq/mixtral-8x7b"
"provider": "groq/mixtral-8x7b",
"temperature": 0.7,
"base_url": "https://api.custom.com/v1"
}
```
3. **Config File Default**: Falls back to `config.yml` (default: `openai/gpt-4o-mini`)
2. **Provider-Specific Environment Variables**: Override for specific providers
```bash
# In your .llm.env file:
OPENAI_TEMPERATURE=0.5
OPENAI_BASE_URL=https://custom-openai.com/v1
ANTHROPIC_TEMPERATURE=0.3
```
The system automatically selects the appropriate API key based on the configured `api_key_env` in the config file.
3. **Global Environment Variables**: Set defaults for all providers
```bash
# In your .llm.env file:
LLM_PROVIDER=anthropic/claude-3-opus
LLM_TEMPERATURE=0.7
LLM_BASE_URL=https://api.proxy.com/v1
```
4. **Config File Default**: Falls back to `config.yml` (default: `openai/gpt-4o-mini`)
The system automatically selects the appropriate API key based on the provider. LiteLLM handles finding the correct environment variable for each provider (e.g., OPENAI_API_KEY for OpenAI, GEMINI_API_TOKEN for Google Gemini, etc.).
**Supported LLM Parameters:**
- `provider`: LLM provider and model (e.g., "openai/gpt-4", "anthropic/claude-3-opus")
- `temperature`: Controls randomness (0.0-2.0, lower = more focused, higher = more creative)
- `base_url`: Custom API endpoint for proxy servers or alternative endpoints
#### 3. Build and Run with Compose
@@ -555,6 +581,101 @@ Crucially, when sending configurations directly via JSON, they **must** follow t
**LLM Extraction Strategy** *(Keep example, ensure schema uses type/value wrapper)*
*(Keep Deep Crawler Example)*
### LLM Configuration Examples
The Docker API supports dynamic LLM configuration through multiple levels:
#### Temperature Control
Temperature affects the randomness of LLM responses (0.0 = deterministic, 2.0 = very creative):
```python
import requests
# Low temperature for factual extraction
response = requests.post(
"http://localhost:11235/md",
json={
"url": "https://example.com",
"f": "llm",
"q": "Extract all dates and numbers from this page",
"temperature": 0.2 # Very focused, deterministic
}
)
# High temperature for creative tasks
response = requests.post(
"http://localhost:11235/md",
json={
"url": "https://example.com",
"f": "llm",
"q": "Write a creative summary of this content",
"temperature": 1.2 # More creative, varied responses
}
)
```
#### Custom API Endpoints
Use custom base URLs for proxy servers or alternative API endpoints:
```python
# Using a local LLM server
response = requests.post(
"http://localhost:11235/md",
json={
"url": "https://example.com",
"f": "llm",
"q": "Extract key information",
"provider": "ollama/llama2",
"base_url": "http://localhost:11434/v1"
}
)
```
#### Dynamic Provider Selection
Switch between providers based on task requirements:
```python
async def smart_extraction(url: str, content_type: str):
"""Select provider and temperature based on content type"""
configs = {
"technical": {
"provider": "openai/gpt-4",
"temperature": 0.3,
"query": "Extract technical specifications and code examples"
},
"creative": {
"provider": "anthropic/claude-3-opus",
"temperature": 0.9,
"query": "Create an engaging narrative summary"
},
"quick": {
"provider": "groq/mixtral-8x7b",
"temperature": 0.5,
"query": "Quick summary in bullet points"
}
}
config = configs.get(content_type, configs["quick"])
response = await httpx.post(
"http://localhost:11235/md",
json={
"url": url,
"f": "llm",
"q": config["query"],
"provider": config["provider"],
"temperature": config["temperature"]
}
)
return response.json()
```
### REST API Examples
Update URLs to use port `11235`.
@@ -693,8 +814,8 @@ app:
# Default LLM Configuration
llm:
provider: "openai/gpt-4o-mini" # Can be overridden by LLM_PROVIDER env var
api_key_env: "OPENAI_API_KEY"
# api_key: sk-... # If you pass the API key directly then api_key_env will be ignored
# api_key: sk-... # If you pass the API key directly (not recommended)
# temperature and base_url are controlled via environment variables or request parameters
# Redis Configuration (Used by internal Redis server managed by supervisord)
redis:

View File

@@ -0,0 +1,807 @@
# Table Extraction Strategies
## Overview
**New in v0.7.3+**: Table extraction now follows the **Strategy Design Pattern**, providing unprecedented flexibility and power for handling different table structures. Don't worry - **your existing code still works!** We maintain full backward compatibility while offering new capabilities.
### What's Changed?
- **Architecture**: Table extraction now uses pluggable strategies
- **Backward Compatible**: Your existing code with `table_score_threshold` continues to work
- **More Power**: Choose from multiple strategies or create your own
- **Same Default Behavior**: By default, uses `DefaultTableExtraction` (same as before)
### Key Points
**Old code still works** - No breaking changes
**Same default behavior** - Uses the proven extraction algorithm
**New capabilities** - Add LLM extraction or custom strategies when needed
**Strategy pattern** - Clean, extensible architecture
## Quick Start
### The Simplest Way (Works Like Before)
If you're already using Crawl4AI, nothing changes:
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
async def extract_tables():
async with AsyncWebCrawler() as crawler:
# This works exactly like before - uses DefaultTableExtraction internally
result = await crawler.arun("https://example.com/data")
# Tables are automatically extracted and available in result.tables
for table in result.tables:
print(f"Table with {len(table['rows'])} rows and {len(table['headers'])} columns")
print(f"Headers: {table['headers']}")
print(f"First row: {table['rows'][0] if table['rows'] else 'No data'}")
asyncio.run(extract_tables())
```
### Using the Old Configuration (Still Supported)
Your existing code with `table_score_threshold` continues to work:
```python
# This old approach STILL WORKS - we maintain backward compatibility
config = CrawlerRunConfig(
table_score_threshold=7 # Internally creates DefaultTableExtraction(table_score_threshold=7)
)
result = await crawler.arun(url, config)
```
## Table Extraction Strategies
### Understanding the Strategy Pattern
The strategy pattern allows you to choose different table extraction algorithms at runtime. Think of it as having different tools in a toolbox - you pick the right one for the job:
- **No explicit strategy?** → Uses `DefaultTableExtraction` automatically (same as v0.7.2 and earlier)
- **Need complex table handling?** → Choose `LLMTableExtraction` (costs money, use sparingly)
- **Want to disable tables?** → Use `NoTableExtraction`
- **Have special requirements?** → Create a custom strategy
### Available Strategies
| Strategy | Description | Use Case | Cost | When to Use |
|----------|-------------|----------|------|-------------|
| `DefaultTableExtraction` | **RECOMMENDED**: Same algorithm as before v0.7.3 | General purpose (default) | Free | **Use this first - handles 95% of cases** |
| `LLMTableExtraction` | AI-powered extraction for complex tables | Tables with complex rowspan/colspan | **$$$ Per API call** | Only when DefaultTableExtraction fails |
| `NoTableExtraction` | Disables table extraction | When tables aren't needed | Free | For text-only extraction |
| Custom strategies | User-defined extraction logic | Specialized requirements | Free | Domain-specific needs |
> **⚠️ CRITICAL COST WARNING for LLMTableExtraction**:
>
> **DO NOT USE `LLMTableExtraction` UNLESS ABSOLUTELY NECESSARY!**
>
> - **Always try `DefaultTableExtraction` first** - It's free and handles most tables perfectly
> - LLM extraction **costs money** with every API call
> - For large tables (100+ rows), LLM extraction can be **very slow**
> - **For large tables**: If you must use LLM, choose fast providers:
> - ✅ **Groq** (fastest inference)
> - ✅ **Cerebras** (optimized for speed)
> - ⚠️ Avoid: OpenAI, Anthropic for large tables (slower)
>
> **🚧 WORK IN PROGRESS**:
> We are actively developing an **advanced non-LLM algorithm** that will handle complex table structures (rowspan, colspan, nested tables) for **FREE**. This will replace the need for costly LLM extraction in most cases. Coming soon!
### DefaultTableExtraction
The default strategy uses a sophisticated scoring system to identify data tables:
```python
from crawl4ai import DefaultTableExtraction, CrawlerRunConfig
# Customize the default extraction
table_strategy = DefaultTableExtraction(
table_score_threshold=7, # Scoring threshold (default: 7)
min_rows=2, # Minimum rows required
min_cols=2, # Minimum columns required
verbose=True # Enable detailed logging
)
config = CrawlerRunConfig(
table_extraction=table_strategy
)
```
#### Scoring System
The scoring system evaluates multiple factors:
| Factor | Score Impact | Description |
|--------|--------------|-------------|
| Has `<thead>` | +2 | Semantic table structure |
| Has `<tbody>` | +1 | Organized table body |
| Has `<th>` elements | +2 | Header cells present |
| Headers in correct position | +1 | Proper semantic structure |
| Consistent column count | +2 | Regular data structure |
| Has caption | +2 | Descriptive caption |
| Has summary | +1 | Summary attribute |
| High text density | +2 to +3 | Content-rich cells |
| Data attributes | +0.5 each | Data-* attributes |
| Nested tables | -3 | Often indicates layout |
| Role="presentation" | -3 | Explicitly non-data |
| Too few rows | -2 | Insufficient data |
### LLMTableExtraction (Use Sparingly!)
**⚠️ WARNING**: Only use this when `DefaultTableExtraction` fails with complex tables!
LLMTableExtraction uses AI to understand complex table structures that traditional parsers struggle with. It automatically handles large tables through intelligent chunking and parallel processing:
```python
from crawl4ai import LLMTableExtraction, LLMConfig, CrawlerRunConfig
# Configure LLM (costs money per call!)
llm_config = LLMConfig(
provider="groq/llama-3.3-70b-versatile", # Fast provider for large tables
api_token="your_api_key",
temperature=0.1
)
# Create LLM extraction strategy with smart chunking
table_strategy = LLMTableExtraction(
llm_config=llm_config,
max_tries=3, # Retry up to 3 times if extraction fails
css_selector="table", # Optional: focus on specific tables
enable_chunking=True, # Automatically chunk large tables (default: True)
chunk_token_threshold=3000, # Split tables larger than this (default: 3000 tokens)
min_rows_per_chunk=10, # Minimum rows per chunk (default: 10)
max_parallel_chunks=5, # Process up to 5 chunks in parallel (default: 5)
verbose=True
)
config = CrawlerRunConfig(
table_extraction=table_strategy
)
result = await crawler.arun(url, config)
```
#### When to Use LLMTableExtraction
**Use ONLY when**:
- Tables have complex merged cells (rowspan/colspan) that break DefaultTableExtraction
- Nested tables that need semantic understanding
- Tables with irregular structures
- You've tried DefaultTableExtraction and it failed
**Never use when**:
- DefaultTableExtraction works (99% of cases)
- Tables are simple or well-structured
- You're processing many pages (costs add up!)
- Tables have 100+ rows (very slow)
#### How Smart Chunking Works
LLMTableExtraction automatically handles large tables through intelligent chunking:
1. **Automatic Detection**: Tables exceeding the token threshold are automatically split
2. **Smart Splitting**: Chunks are created at row boundaries, preserving table structure
3. **Header Preservation**: Each chunk includes the original headers for context
4. **Parallel Processing**: Multiple chunks are processed simultaneously for speed
5. **Intelligent Merging**: Results are merged back into a single, complete table
**Chunking Parameters**:
- `enable_chunking` (default: `True`): Automatically handle large tables
- `chunk_token_threshold` (default: `3000`): When to split tables
- `min_rows_per_chunk` (default: `10`): Ensures meaningful chunk sizes
- `max_parallel_chunks` (default: `5`): Concurrent processing for speed
The chunking is completely transparent - you get the same output format whether the table was processed in one piece or multiple chunks.
#### Performance Optimization for LLMTableExtraction
**Provider Recommendations by Table Size**:
| Table Size | Recommended Providers | Why |
|------------|----------------------|-----|
| Small (<50 rows) | Any provider | Fast enough |
| Medium (50-200 rows) | Groq, Cerebras | Optimized inference |
| Large (200+ rows) | **Groq** (best), Cerebras | Fastest inference + automatic chunking |
| Very Large (500+ rows) | Groq with chunking | Parallel processing keeps it fast |
### NoTableExtraction
Disable table extraction for better performance when tables aren't needed:
```python
from crawl4ai import NoTableExtraction, CrawlerRunConfig
config = CrawlerRunConfig(
table_extraction=NoTableExtraction()
)
# Tables won't be extracted, improving performance
result = await crawler.arun(url, config)
assert len(result.tables) == 0
```
## Extracted Table Structure
Each extracted table contains:
```python
{
"headers": ["Column 1", "Column 2", ...], # Column headers
"rows": [ # Data rows
["Row 1 Col 1", "Row 1 Col 2", ...],
["Row 2 Col 1", "Row 2 Col 2", ...],
],
"caption": "Table Caption", # If present
"summary": "Table Summary", # If present
"metadata": {
"row_count": 10, # Number of rows
"column_count": 3, # Number of columns
"has_headers": True, # Headers detected
"has_caption": True, # Caption exists
"has_summary": False, # Summary exists
"id": "data-table-1", # Table ID if present
"class": "financial-data" # Table class if present
}
}
```
## Configuration Options
### Basic Configuration
```python
config = CrawlerRunConfig(
# Table extraction settings
table_score_threshold=7, # Default threshold (backward compatible)
table_extraction=strategy, # Optional: custom strategy
# Filter what to process
css_selector="main", # Focus on specific area
excluded_tags=["nav", "aside"] # Exclude page sections
)
```
### Advanced Configuration
```python
from crawl4ai import DefaultTableExtraction, CrawlerRunConfig
# Fine-tuned extraction
strategy = DefaultTableExtraction(
table_score_threshold=5, # Lower = more permissive
min_rows=3, # Require at least 3 rows
min_cols=2, # Require at least 2 columns
verbose=True # Detailed logging
)
config = CrawlerRunConfig(
table_extraction=strategy,
css_selector="article.content", # Target specific content
exclude_domains=["ads.com"], # Exclude ad domains
cache_mode=CacheMode.BYPASS # Fresh extraction
)
```
## Working with Extracted Tables
### Convert to Pandas DataFrame
```python
import pandas as pd
async def tables_to_dataframes(url):
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url)
dataframes = []
for table_data in result.tables:
# Create DataFrame
if table_data['headers']:
df = pd.DataFrame(
table_data['rows'],
columns=table_data['headers']
)
else:
df = pd.DataFrame(table_data['rows'])
# Add metadata as DataFrame attributes
df.attrs['caption'] = table_data.get('caption', '')
df.attrs['metadata'] = table_data.get('metadata', {})
dataframes.append(df)
return dataframes
```
### Filter Tables by Criteria
```python
async def extract_large_tables(url):
async with AsyncWebCrawler() as crawler:
# Configure minimum size requirements
strategy = DefaultTableExtraction(
min_rows=10,
min_cols=3,
table_score_threshold=6
)
config = CrawlerRunConfig(
table_extraction=strategy
)
result = await crawler.arun(url, config)
# Further filter results
large_tables = [
table for table in result.tables
if table['metadata']['row_count'] > 10
and table['metadata']['column_count'] > 3
]
return large_tables
```
### Export Tables to Different Formats
```python
import json
import csv
async def export_tables(url):
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url)
for i, table in enumerate(result.tables):
# Export as JSON
with open(f'table_{i}.json', 'w') as f:
json.dump(table, f, indent=2)
# Export as CSV
with open(f'table_{i}.csv', 'w', newline='') as f:
writer = csv.writer(f)
if table['headers']:
writer.writerow(table['headers'])
writer.writerows(table['rows'])
# Export as Markdown
with open(f'table_{i}.md', 'w') as f:
# Write headers
if table['headers']:
f.write('| ' + ' | '.join(table['headers']) + ' |\n')
f.write('|' + '---|' * len(table['headers']) + '\n')
# Write rows
for row in table['rows']:
f.write('| ' + ' | '.join(str(cell) for cell in row) + ' |\n')
```
## Creating Custom Strategies
Extend `TableExtractionStrategy` to create custom extraction logic:
### Example: Financial Table Extractor
```python
from crawl4ai import TableExtractionStrategy
from typing import List, Dict, Any
import re
class FinancialTableExtractor(TableExtractionStrategy):
"""Extract tables containing financial data."""
def __init__(self, currency_symbols=None, require_numbers=True, **kwargs):
super().__init__(**kwargs)
self.currency_symbols = currency_symbols or ['$', '', '£', '¥']
self.require_numbers = require_numbers
self.number_pattern = re.compile(r'\d+[,.]?\d*')
def extract_tables(self, element, **kwargs):
tables_data = []
for table in element.xpath(".//table"):
# Check if table contains financial indicators
table_text = ''.join(table.itertext())
# Must contain currency symbols
has_currency = any(sym in table_text for sym in self.currency_symbols)
if not has_currency:
continue
# Must contain numbers if required
if self.require_numbers:
numbers = self.number_pattern.findall(table_text)
if len(numbers) < 3: # Arbitrary minimum
continue
# Extract the table data
table_data = self._extract_financial_data(table)
if table_data:
tables_data.append(table_data)
return tables_data
def _extract_financial_data(self, table):
"""Extract and clean financial data from table."""
headers = []
rows = []
# Extract headers
for th in table.xpath(".//thead//th | .//tr[1]//th"):
headers.append(th.text_content().strip())
# Extract and clean rows
for tr in table.xpath(".//tbody//tr | .//tr[position()>1]"):
row = []
for td in tr.xpath(".//td"):
text = td.text_content().strip()
# Clean currency formatting
text = re.sub(r'[$€£¥,]', '', text)
row.append(text)
if row:
rows.append(row)
return {
"headers": headers,
"rows": rows,
"caption": self._get_caption(table),
"summary": table.get("summary", ""),
"metadata": {
"type": "financial",
"row_count": len(rows),
"column_count": len(headers) or len(rows[0]) if rows else 0
}
}
def _get_caption(self, table):
caption = table.xpath(".//caption/text()")
return caption[0].strip() if caption else ""
# Usage
strategy = FinancialTableExtractor(
currency_symbols=['$', 'EUR'],
require_numbers=True
)
config = CrawlerRunConfig(
table_extraction=strategy
)
```
### Example: Specific Table Extractor
```python
class SpecificTableExtractor(TableExtractionStrategy):
"""Extract only tables matching specific criteria."""
def __init__(self,
required_headers=None,
id_pattern=None,
class_pattern=None,
**kwargs):
super().__init__(**kwargs)
self.required_headers = required_headers or []
self.id_pattern = id_pattern
self.class_pattern = class_pattern
def extract_tables(self, element, **kwargs):
tables_data = []
for table in element.xpath(".//table"):
# Check ID pattern
if self.id_pattern:
table_id = table.get('id', '')
if not re.match(self.id_pattern, table_id):
continue
# Check class pattern
if self.class_pattern:
table_class = table.get('class', '')
if not re.match(self.class_pattern, table_class):
continue
# Extract headers to check requirements
headers = self._extract_headers(table)
# Check if required headers are present
if self.required_headers:
if not all(req in headers for req in self.required_headers):
continue
# Extract full table data
table_data = self._extract_table_data(table, headers)
tables_data.append(table_data)
return tables_data
```
## Combining with Other Strategies
Table extraction works seamlessly with other Crawl4AI strategies:
```python
from crawl4ai import (
AsyncWebCrawler,
CrawlerRunConfig,
DefaultTableExtraction,
LLMExtractionStrategy,
JsonCssExtractionStrategy
)
async def combined_extraction(url):
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
# Table extraction
table_extraction=DefaultTableExtraction(
table_score_threshold=6,
min_rows=2
),
# CSS-based extraction for specific elements
extraction_strategy=JsonCssExtractionStrategy({
"title": "h1",
"summary": "p.summary",
"date": "time"
}),
# Focus on main content
css_selector="main.content"
)
result = await crawler.arun(url, config)
# Access different extraction results
tables = result.tables # Table data
structured = json.loads(result.extracted_content) # CSS extraction
return {
"tables": tables,
"structured_data": structured,
"markdown": result.markdown
}
```
## Performance Considerations
### Optimization Tips
1. **Disable when not needed**: Use `NoTableExtraction` if tables aren't required
2. **Target specific areas**: Use `css_selector` to limit processing scope
3. **Set minimum thresholds**: Filter out small/irrelevant tables early
4. **Cache results**: Use appropriate cache modes for repeated extractions
```python
# Optimized configuration for large pages
config = CrawlerRunConfig(
# Only process main content area
css_selector="article.main-content",
# Exclude navigation and sidebars
excluded_tags=["nav", "aside", "footer"],
# Higher threshold for stricter filtering
table_extraction=DefaultTableExtraction(
table_score_threshold=8,
min_rows=5,
min_cols=3
),
# Enable caching for repeated access
cache_mode=CacheMode.ENABLED
)
```
## Migration Guide
### Important: Your Code Still Works!
**No changes required!** The transition to the strategy pattern is **fully backward compatible**.
### How It Works Internally
#### v0.7.2 and Earlier
```python
# Old way - directly passing table_score_threshold
config = CrawlerRunConfig(
table_score_threshold=7
)
# Internally: No strategy pattern, direct implementation
```
#### v0.7.3+ (Current)
```python
# Old way STILL WORKS - we handle it internally
config = CrawlerRunConfig(
table_score_threshold=7
)
# Internally: Automatically creates DefaultTableExtraction(table_score_threshold=7)
```
### Taking Advantage of New Features
While your old code works, you can now use the strategy pattern for more control:
```python
# Option 1: Keep using the old way (perfectly fine!)
config = CrawlerRunConfig(
table_score_threshold=7 # Still supported
)
# Option 2: Use the new strategy pattern (more flexibility)
from crawl4ai import DefaultTableExtraction
strategy = DefaultTableExtraction(
table_score_threshold=7,
min_rows=2, # New capability!
min_cols=2 # New capability!
)
config = CrawlerRunConfig(
table_extraction=strategy
)
# Option 3: Use advanced strategies when needed
from crawl4ai import LLMTableExtraction, LLMConfig
# Only for complex tables that DefaultTableExtraction can't handle
# Automatically handles large tables with smart chunking
llm_strategy = LLMTableExtraction(
llm_config=LLMConfig(
provider="groq/llama-3.3-70b-versatile",
api_token="your_key"
),
max_tries=3,
enable_chunking=True, # Automatically chunk large tables
chunk_token_threshold=3000, # Chunk when exceeding 3000 tokens
max_parallel_chunks=5 # Process up to 5 chunks in parallel
)
config = CrawlerRunConfig(
table_extraction=llm_strategy # Advanced extraction with automatic chunking
)
```
### Summary
-**No breaking changes** - Old code works as-is
-**Same defaults** - DefaultTableExtraction is automatically used
-**Gradual adoption** - Use new features when you need them
-**Full compatibility** - result.tables structure unchanged
## Best Practices
### 1. Choose the Right Strategy (Cost-Conscious Approach)
**Decision Flow**:
```
1. Do you need tables?
→ No: Use NoTableExtraction
→ Yes: Continue to #2
2. Try DefaultTableExtraction first (FREE)
→ Works? Done! ✅
→ Fails? Continue to #3
3. Is the table critical and complex?
→ No: Accept DefaultTableExtraction results
→ Yes: Continue to #4
4. Use LLMTableExtraction (COSTS MONEY)
→ Small table (<50 rows): Any LLM provider
→ Large table (50+ rows): Use Groq or Cerebras
→ Very large (500+ rows): Reconsider - maybe chunk the page
```
**Strategy Selection Guide**:
- **DefaultTableExtraction**: Use for 99% of cases - it's free and effective
- **LLMTableExtraction**: Only for complex tables with merged cells that break DefaultTableExtraction
- **NoTableExtraction**: When you only need text/markdown content
- **Custom Strategy**: For specialized requirements (financial, scientific, etc.)
### 2. Validate Extracted Data
```python
def validate_table(table):
"""Validate table data quality."""
# Check structure
if not table.get('rows'):
return False
# Check consistency
if table.get('headers'):
expected_cols = len(table['headers'])
for row in table['rows']:
if len(row) != expected_cols:
return False
# Check minimum content
total_cells = sum(len(row) for row in table['rows'])
non_empty = sum(1 for row in table['rows']
for cell in row if cell.strip())
if non_empty / total_cells < 0.5: # Less than 50% non-empty
return False
return True
# Filter valid tables
valid_tables = [t for t in result.tables if validate_table(t)]
```
### 3. Handle Edge Cases
```python
async def robust_table_extraction(url):
"""Extract tables with error handling."""
async with AsyncWebCrawler() as crawler:
try:
config = CrawlerRunConfig(
table_extraction=DefaultTableExtraction(
table_score_threshold=6,
verbose=True
)
)
result = await crawler.arun(url, config)
if not result.success:
print(f"Crawl failed: {result.error}")
return []
# Process tables safely
processed_tables = []
for table in result.tables:
try:
# Validate and process
if validate_table(table):
processed_tables.append(table)
except Exception as e:
print(f"Error processing table: {e}")
continue
return processed_tables
except Exception as e:
print(f"Extraction error: {e}")
return []
```
## Troubleshooting
### Common Issues and Solutions
| Issue | Cause | Solution |
|-------|-------|----------|
| No tables extracted | Score too high | Lower `table_score_threshold` |
| Layout tables included | Score too low | Increase `table_score_threshold` |
| Missing tables | CSS selector too specific | Broaden or remove `css_selector` |
| Incomplete data | Complex table structure | Create custom strategy |
| Performance issues | Processing entire page | Use `css_selector` to limit scope |
### Debug Logging
Enable verbose logging to understand extraction decisions:
```python
import logging
# Configure logging
logging.basicConfig(level=logging.DEBUG)
# Enable verbose mode in strategy
strategy = DefaultTableExtraction(
table_score_threshold=7,
verbose=True # Detailed extraction logs
)
config = CrawlerRunConfig(
table_extraction=strategy,
verbose=True # General crawler logs
)
```
## See Also
- [Extraction Strategies](extraction-strategies.md) - Overview of all extraction strategies
- [Content Selection](content-selection.md) - Using CSS selectors and filters
- [Performance Optimization](../optimization/performance-tuning.md) - Speed up extraction
- [Examples](../examples/table_extraction_example.py) - Complete working examples

View File

@@ -102,16 +102,16 @@ async def smart_blog_crawler():
# Step 2: Configure discovery - let's find all blog posts
config = SeedingConfig(
source="sitemap", # Use the website's sitemap
pattern="*/blog/*.html", # Only blog posts
source="sitemap+cc", # Use the website's sitemap+cc
pattern="*/courses/*", # Only courses related posts
extract_head=True, # Get page metadata
max_urls=100 # Limit for this example
)
# Step 3: Discover URLs from the Python blog
print("🔍 Discovering blog posts...")
print("🔍 Discovering course posts...")
urls = await seeder.urls("realpython.com", config)
print(f"✅ Found {len(urls)} blog posts")
print(f"✅ Found {len(urls)} course posts")
# Step 4: Filter for Python tutorials (using metadata!)
tutorials = [
@@ -134,7 +134,8 @@ async def smart_blog_crawler():
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
only_text=True,
word_count_threshold=300 # Only substantial articles
word_count_threshold=300, # Only substantial articles
stream=True
)
# Extract URLs and crawl them
@@ -155,7 +156,7 @@ asyncio.run(smart_blog_crawler())
**What just happened?**
1. We discovered all blog URLs from the sitemap
1. We discovered all blog URLs from the sitemap+cc
2. We filtered using metadata (no crawling needed!)
3. We crawled only the relevant tutorials
4. We saved tons of time and bandwidth
@@ -282,8 +283,8 @@ config = SeedingConfig(
live_check=True, # Verify each URL is accessible
concurrency=20 # Check 20 URLs in parallel
)
urls = await seeder.urls("example.com", config)
async with AsyncUrlSeeder() as seeder:
urls = await seeder.urls("example.com", config)
# Now you can filter by status
live_urls = [u for u in urls if u["status"] == "valid"]
@@ -311,8 +312,8 @@ This is where URL seeding gets really powerful. Instead of crawling entire pages
config = SeedingConfig(
extract_head=True # Extract metadata from <head> section
)
urls = await seeder.urls("example.com", config)
async with AsyncUrlSeeder() as seeder:
urls = await seeder.urls("example.com", config)
# Now each URL has rich metadata
for url in urls[:3]:
@@ -387,8 +388,8 @@ config = SeedingConfig(
scoring_method="bm25",
score_threshold=0.3
)
urls = await seeder.urls("example.com", config)
async with AsyncUrlSeeder() as seeder:
urls = await seeder.urls("example.com", config)
# URLs are scored based on:
# 1. Domain parts matching (e.g., 'python' in python.example.com)
@@ -429,8 +430,8 @@ config = SeedingConfig(
extract_head=True,
live_check=True
)
urls = await seeder.urls("blog.example.com", config)
async with AsyncUrlSeeder() as seeder:
urls = await seeder.urls("blog.example.com", config)
# Analyze the results
for url in urls[:5]:
@@ -488,8 +489,8 @@ config = SeedingConfig(
scoring_method="bm25", # Use BM25 algorithm
score_threshold=0.3 # Minimum relevance score
)
urls = await seeder.urls("realpython.com", config)
async with AsyncUrlSeeder() as seeder:
urls = await seeder.urls("realpython.com", config)
# Results are automatically sorted by relevance!
for url in urls[:5]:
@@ -511,8 +512,8 @@ config = SeedingConfig(
score_threshold=0.5,
max_urls=20
)
urls = await seeder.urls("docs.example.com", config)
async with AsyncUrlSeeder() as seeder:
urls = await seeder.urls("docs.example.com", config)
# The highest scoring URLs will be API docs!
```
@@ -529,8 +530,8 @@ config = SeedingConfig(
score_threshold=0.4,
pattern="*/product/*" # Combine with pattern matching
)
urls = await seeder.urls("shop.example.com", config)
async with AsyncUrlSeeder() as seeder:
urls = await seeder.urls("shop.example.com", config)
# Filter further by price (from metadata)
affordable = [
@@ -550,8 +551,8 @@ config = SeedingConfig(
scoring_method="bm25",
score_threshold=0.35
)
urls = await seeder.urls("technews.com", config)
async with AsyncUrlSeeder() as seeder:
urls = await seeder.urls("technews.com", config)
# Filter by date
from datetime import datetime, timedelta
@@ -591,8 +592,8 @@ for query in queries:
score_threshold=0.4,
max_urls=10 # Top 10 per topic
)
urls = await seeder.urls("learning-platform.com", config)
async with AsyncUrlSeeder() as seeder:
urls = await seeder.urls("learning-platform.com", config)
all_tutorials.extend(urls)
# Remove duplicates while preserving order
@@ -625,7 +626,8 @@ config = SeedingConfig(
)
# Returns a dictionary: {domain: [urls]}
results = await seeder.many_urls(domains, config)
async with AsyncUrlSeeder() as seeder:
results = await seeder.many_urls(domains, config)
# Process results
for domain, urls in results.items():
@@ -654,8 +656,8 @@ config = SeedingConfig(
pattern="*/blog/*",
max_urls=100
)
results = await seeder.many_urls(competitors, config)
async with AsyncUrlSeeder() as seeder:
results = await seeder.many_urls(competitors, config)
# Analyze content types
for domain, urls in results.items():
@@ -690,8 +692,8 @@ config = SeedingConfig(
score_threshold=0.3,
max_urls=20 # Per site
)
results = await seeder.many_urls(educational_sites, config)
async with AsyncUrlSeeder() as seeder:
results = await seeder.many_urls(educational_sites, config)
# Find the best beginner tutorials
all_tutorials = []
@@ -731,8 +733,8 @@ config = SeedingConfig(
score_threshold=0.5, # High threshold for relevance
max_urls=10
)
results = await seeder.many_urls(news_sites, config)
async with AsyncUrlSeeder() as seeder:
results = await seeder.many_urls(news_sites, config)
# Collect all mentions
mentions = []

View File

@@ -0,0 +1,376 @@
# Migration Guide: Table Extraction v0.7.3
## Overview
Version 0.7.3 introduces the **Table Extraction Strategy Pattern**, providing a more flexible and extensible approach to table extraction while maintaining full backward compatibility.
## What's New
### Strategy Pattern Implementation
Table extraction now follows the same strategy pattern used throughout Crawl4AI:
- **Consistent Architecture**: Aligns with extraction, chunking, and markdown strategies
- **Extensibility**: Easy to create custom table extraction strategies
- **Better Separation**: Table logic moved from content scraping to dedicated module
- **Full Control**: Fine-grained control over table detection and extraction
### New Classes
```python
from crawl4ai import (
TableExtractionStrategy, # Abstract base class
DefaultTableExtraction, # Current implementation (default)
NoTableExtraction # Explicitly disable extraction
)
```
## Backward Compatibility
**✅ All existing code continues to work without changes.**
### No Changes Required
If your code looks like this, it will continue to work:
```python
# This still works exactly the same
config = CrawlerRunConfig(
table_score_threshold=7
)
result = await crawler.arun(url, config)
tables = result.tables # Same structure, same data
```
### What Happens Behind the Scenes
When you don't specify a `table_extraction` strategy:
1. `CrawlerRunConfig` automatically creates `DefaultTableExtraction`
2. It uses your `table_score_threshold` parameter
3. Tables are extracted exactly as before
4. Results appear in `result.tables` with the same structure
## New Capabilities
### 1. Explicit Strategy Configuration
You can now explicitly configure table extraction:
```python
# New: Explicit control
strategy = DefaultTableExtraction(
table_score_threshold=7,
min_rows=2, # New: minimum row filter
min_cols=2, # New: minimum column filter
verbose=True # New: detailed logging
)
config = CrawlerRunConfig(
table_extraction=strategy
)
```
### 2. Disable Table Extraction
Improve performance when tables aren't needed:
```python
# New: Skip table extraction entirely
config = CrawlerRunConfig(
table_extraction=NoTableExtraction()
)
# No CPU cycles spent on table detection/extraction
```
### 3. Custom Extraction Strategies
Create specialized extractors:
```python
class MyTableExtractor(TableExtractionStrategy):
def extract_tables(self, element, **kwargs):
# Custom extraction logic
return custom_tables
config = CrawlerRunConfig(
table_extraction=MyTableExtractor()
)
```
## Migration Scenarios
### Scenario 1: Basic Usage (No Changes Needed)
**Before (v0.7.2):**
```python
config = CrawlerRunConfig()
result = await crawler.arun(url, config)
for table in result.tables:
print(table['headers'])
```
**After (v0.7.3):**
```python
# Exactly the same - no changes required
config = CrawlerRunConfig()
result = await crawler.arun(url, config)
for table in result.tables:
print(table['headers'])
```
### Scenario 2: Custom Threshold (No Changes Needed)
**Before (v0.7.2):**
```python
config = CrawlerRunConfig(
table_score_threshold=5
)
```
**After (v0.7.3):**
```python
# Still works the same
config = CrawlerRunConfig(
table_score_threshold=5
)
# Or use new explicit approach for more control
strategy = DefaultTableExtraction(
table_score_threshold=5,
min_rows=2 # Additional filtering
)
config = CrawlerRunConfig(
table_extraction=strategy
)
```
### Scenario 3: Advanced Filtering (New Feature)
**Before (v0.7.2):**
```python
# Had to filter after extraction
config = CrawlerRunConfig(
table_score_threshold=5
)
result = await crawler.arun(url, config)
# Manual filtering
large_tables = [
t for t in result.tables
if len(t['rows']) >= 5 and len(t['headers']) >= 3
]
```
**After (v0.7.3):**
```python
# Filter during extraction (more efficient)
strategy = DefaultTableExtraction(
table_score_threshold=5,
min_rows=5,
min_cols=3
)
config = CrawlerRunConfig(
table_extraction=strategy
)
result = await crawler.arun(url, config)
# result.tables already filtered
```
## Code Organization Changes
### Module Structure
**Before (v0.7.2):**
```
crawl4ai/
content_scraping_strategy.py
- LXMLWebScrapingStrategy
- is_data_table() # Table detection
- extract_table_data() # Table extraction
```
**After (v0.7.3):**
```
crawl4ai/
content_scraping_strategy.py
- LXMLWebScrapingStrategy
# Table methods removed, uses strategy
table_extraction.py (NEW)
- TableExtractionStrategy # Base class
- DefaultTableExtraction # Moved logic here
- NoTableExtraction # New option
```
### Import Changes
**New imports available (optional):**
```python
# These are now available but not required for existing code
from crawl4ai import (
TableExtractionStrategy,
DefaultTableExtraction,
NoTableExtraction
)
```
## Performance Implications
### No Performance Impact
For existing code, performance remains identical:
- Same extraction logic
- Same scoring algorithm
- Same processing time
### Performance Improvements Available
New options for better performance:
```python
# Skip tables entirely (faster)
config = CrawlerRunConfig(
table_extraction=NoTableExtraction()
)
# Process only specific areas (faster)
config = CrawlerRunConfig(
css_selector="main.content",
table_extraction=DefaultTableExtraction(
min_rows=5, # Skip small tables
min_cols=3
)
)
```
## Testing Your Migration
### Verification Script
Run this to verify your extraction still works:
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
async def verify_extraction():
url = "your_url_here"
async with AsyncWebCrawler() as crawler:
# Test 1: Old approach
config_old = CrawlerRunConfig(
table_score_threshold=7
)
result_old = await crawler.arun(url, config_old)
# Test 2: New explicit approach
from crawl4ai import DefaultTableExtraction
config_new = CrawlerRunConfig(
table_extraction=DefaultTableExtraction(
table_score_threshold=7
)
)
result_new = await crawler.arun(url, config_new)
# Compare results
assert len(result_old.tables) == len(result_new.tables)
print(f"✓ Both approaches extracted {len(result_old.tables)} tables")
# Verify structure
for old, new in zip(result_old.tables, result_new.tables):
assert old['headers'] == new['headers']
assert old['rows'] == new['rows']
print("✓ Table content identical")
asyncio.run(verify_extraction())
```
## Deprecation Notes
### No Deprecations
- All existing parameters continue to work
- `table_score_threshold` in `CrawlerRunConfig` is still supported
- No breaking changes
### Internal Changes (Transparent to Users)
- `LXMLWebScrapingStrategy.is_data_table()` - Moved to `DefaultTableExtraction`
- `LXMLWebScrapingStrategy.extract_table_data()` - Moved to `DefaultTableExtraction`
These methods were internal and not part of the public API.
## Benefits of Upgrading
While not required, using the new pattern provides:
1. **Better Control**: Filter tables during extraction, not after
2. **Performance Options**: Skip extraction when not needed
3. **Extensibility**: Create custom extractors for specific needs
4. **Consistency**: Same pattern as other Crawl4AI strategies
5. **Future-Proof**: Ready for upcoming advanced strategies
## Troubleshooting
### Issue: Different Number of Tables
**Cause**: Threshold or filtering differences
**Solution**:
```python
# Ensure same threshold
strategy = DefaultTableExtraction(
table_score_threshold=7, # Match your old setting
min_rows=0, # No filtering (default)
min_cols=0 # No filtering (default)
)
```
### Issue: Import Errors
**Cause**: Using new classes without importing
**Solution**:
```python
# Add imports if using new features
from crawl4ai import (
DefaultTableExtraction,
NoTableExtraction,
TableExtractionStrategy
)
```
### Issue: Custom Strategy Not Working
**Cause**: Incorrect method signature
**Solution**:
```python
class CustomExtractor(TableExtractionStrategy):
def extract_tables(self, element, **kwargs): # Correct signature
# Not: extract_tables(self, html)
# Not: extract(self, element)
return tables_list
```
## Getting Help
If you encounter issues:
1. Check your `table_score_threshold` matches previous settings
2. Verify imports if using new classes
3. Enable verbose logging: `DefaultTableExtraction(verbose=True)`
4. Review the [Table Extraction Documentation](../core/table_extraction.md)
5. Check [examples](../examples/table_extraction_example.py)
## Summary
-**Full backward compatibility** - No code changes required
-**Same results** - Identical extraction behavior by default
-**New options** - Additional control when needed
-**Better architecture** - Consistent with Crawl4AI patterns
-**Ready for future** - Foundation for advanced strategies
The migration to v0.7.3 is seamless with no required changes while providing new capabilities for those who need them.

349
tests/docker/test_llm_params.py Executable file
View File

@@ -0,0 +1,349 @@
#!/usr/bin/env python3
"""
Test script for LLM temperature and base_url parameters in Crawl4AI Docker API.
This demonstrates the new hierarchical configuration system:
1. Request-level parameters (highest priority)
2. Provider-specific environment variables
3. Global environment variables
4. System defaults (lowest priority)
"""
import asyncio
import httpx
import json
import os
from rich.console import Console
from rich.panel import Panel
from rich.syntax import Syntax
from rich.table import Table
console = Console()
# Configuration
BASE_URL = "http://localhost:11235" # Docker API endpoint
TEST_URL = "https://httpbin.org/html" # Simple test page
# --- Helper Functions ---
async def check_server_health(client: httpx.AsyncClient) -> bool:
"""Check if the server is healthy."""
console.print("[bold cyan]Checking server health...[/]", end="")
try:
response = await client.get("/health", timeout=10.0)
response.raise_for_status()
console.print(" [bold green]✓ Server is healthy![/]")
return True
except Exception as e:
console.print(f"\n[bold red]✗ Server health check failed: {e}[/]")
console.print(f"Is the server running at {BASE_URL}?")
return False
def print_request(endpoint: str, payload: dict, title: str = "Request"):
"""Pretty print the request."""
syntax = Syntax(json.dumps(payload, indent=2), "json", theme="monokai")
console.print(Panel.fit(
f"[cyan]POST {endpoint}[/cyan]\n{syntax}",
title=f"[bold blue]{title}[/]",
border_style="blue"
))
def print_response(response: dict, title: str = "Response"):
"""Pretty print relevant parts of the response."""
# Extract only the relevant parts
relevant = {}
if "markdown" in response:
relevant["markdown"] = response["markdown"][:200] + "..." if len(response.get("markdown", "")) > 200 else response.get("markdown", "")
if "success" in response:
relevant["success"] = response["success"]
if "url" in response:
relevant["url"] = response["url"]
if "filter" in response:
relevant["filter"] = response["filter"]
console.print(Panel.fit(
Syntax(json.dumps(relevant, indent=2), "json", theme="monokai"),
title=f"[bold green]{title}[/]",
border_style="green"
))
# --- Test Functions ---
async def test_default_no_params(client: httpx.AsyncClient):
"""Test 1: No temperature or base_url specified - uses defaults"""
console.rule("[bold yellow]Test 1: Default Configuration (No Parameters)[/]")
payload = {
"url": TEST_URL,
"f": "llm",
"q": "What is the main heading of this page? Answer in exactly 5 words."
}
print_request("/md", payload, "Request without temperature/base_url")
try:
response = await client.post("/md", json=payload, timeout=30.0)
response.raise_for_status()
data = response.json()
print_response(data, "Response (using system defaults)")
console.print("[dim]→ This used system defaults or environment variables if set[/]")
except Exception as e:
console.print(f"[red]Error: {e}[/]")
async def test_request_temperature(client: httpx.AsyncClient):
"""Test 2: Request-level temperature (highest priority)"""
console.rule("[bold yellow]Test 2: Request-Level Temperature[/]")
# Test with low temperature (more focused)
payload_low = {
"url": TEST_URL,
"f": "llm",
"q": "What is the main heading? Be creative and poetic.",
"temperature": 0.1 # Very low - should be less creative
}
print_request("/md", payload_low, "Low Temperature (0.1)")
try:
response = await client.post("/md", json=payload_low, timeout=30.0)
response.raise_for_status()
data_low = response.json()
print_response(data_low, "Response with Low Temperature")
console.print("[dim]→ Low temperature (0.1) should produce focused, less creative output[/]")
except Exception as e:
console.print(f"[red]Error: {e}[/]")
console.print()
# Test with high temperature (more creative)
payload_high = {
"url": TEST_URL,
"f": "llm",
"q": "What is the main heading? Be creative and poetic.",
"temperature": 1.5 # High - should be more creative
}
print_request("/md", payload_high, "High Temperature (1.5)")
try:
response = await client.post("/md", json=payload_high, timeout=30.0)
response.raise_for_status()
data_high = response.json()
print_response(data_high, "Response with High Temperature")
console.print("[dim]→ High temperature (1.5) should produce more creative, varied output[/]")
except Exception as e:
console.print(f"[red]Error: {e}[/]")
async def test_provider_override(client: httpx.AsyncClient):
"""Test 3: Provider override with temperature"""
console.rule("[bold yellow]Test 3: Provider Override with Temperature[/]")
provider = "gemini/gemini-2.5-flash-lite"
payload = {
"url": TEST_URL,
"f": "llm",
"q": "Summarize this page in one sentence.",
"provider": provider, # Explicitly set provider
"temperature": 0.7
}
print_request("/md", payload, "Provider + Temperature Override")
try:
response = await client.post("/md", json=payload, timeout=30.0)
response.raise_for_status()
data = response.json()
print_response(data, "Response with Provider Override")
console.print(f"[dim]→ This explicitly uses {provider} with temperature 0.7[/]")
except Exception as e:
console.print(f"[red]Error: {e}[/]")
async def test_base_url_custom(client: httpx.AsyncClient):
"""Test 4: Custom base_url (will fail unless you have a custom endpoint)"""
console.rule("[bold yellow]Test 4: Custom Base URL (Demo Only)[/]")
payload = {
"url": TEST_URL,
"f": "llm",
"q": "What is this page about?",
"base_url": "https://api.custom-endpoint.com/v1", # Custom endpoint
"temperature": 0.5
}
print_request("/md", payload, "Custom Base URL Request")
console.print("[yellow]Note: This will fail unless you have a custom endpoint set up[/]")
try:
response = await client.post("/md", json=payload, timeout=10.0)
response.raise_for_status()
data = response.json()
print_response(data, "Response from Custom Endpoint")
except httpx.HTTPStatusError as e:
console.print(f"[yellow]Expected failure (no custom endpoint): Status {e.response.status_code}[/]")
except Exception as e:
console.print(f"[yellow]Expected error: {e}[/]")
async def test_llm_job_endpoint(client: httpx.AsyncClient):
"""Test 5: Test the /llm/job endpoint with temperature and base_url"""
console.rule("[bold yellow]Test 5: LLM Job Endpoint with Parameters[/]")
payload = {
"url": TEST_URL,
"q": "Extract the main title and any key information",
"temperature": 0.3,
# "base_url": "https://api.openai.com/v1" # Optional
}
print_request("/llm/job", payload, "LLM Job with Temperature")
try:
# Submit the job
response = await client.post("/llm/job", json=payload, timeout=30.0)
response.raise_for_status()
job_data = response.json()
if "task_id" in job_data:
task_id = job_data["task_id"]
console.print(f"[green]Job created with task_id: {task_id}[/]")
# Poll for result (simplified - in production use proper polling)
await asyncio.sleep(3)
status_response = await client.get(f"/llm/job/{task_id}")
status_data = status_response.json()
if status_data.get("status") == "completed":
console.print("[green]Job completed successfully![/]")
if "result" in status_data:
console.print(Panel.fit(
Syntax(json.dumps(status_data["result"], indent=2), "json", theme="monokai"),
title="Extraction Result",
border_style="green"
))
else:
console.print(f"[yellow]Job status: {status_data.get('status', 'unknown')}[/]")
else:
console.print(f"[red]Unexpected response: {job_data}[/]")
except Exception as e:
console.print(f"[red]Error: {e}[/]")
async def test_llm_endpoint(client: httpx.AsyncClient):
"""
Quick QA round-trip with /llm.
Asks a trivial question against SIMPLE_URL just to show wiring.
"""
import time
import urllib.parse
page_url = "https://kidocode.com"
question = "What is the title of this page?"
enc = urllib.parse.quote_plus(page_url, safe="")
console.print(f"GET /llm/{enc}?q={question}")
try:
t0 = time.time()
resp = await client.get(f"/llm/{enc}", params={"q": question})
dt = time.time() - t0
console.print(
f"Response Status: [bold {'green' if resp.is_success else 'red'}]{resp.status_code}[/] (took {dt:.2f}s)")
resp.raise_for_status()
answer = resp.json().get("answer", "")
console.print(Panel(answer or "No answer returned",
title="LLM answer", border_style="magenta", expand=False))
except Exception as e:
console.print(f"[bold red]Error hitting /llm:[/] {e}")
async def show_environment_info():
"""Display current environment configuration"""
console.rule("[bold cyan]Current Environment Configuration[/]")
table = Table(title="LLM Environment Variables", show_header=True, header_style="bold magenta")
table.add_column("Variable", style="cyan", width=30)
table.add_column("Value", style="yellow")
table.add_column("Description", style="dim")
env_vars = [
("LLM_PROVIDER", "Global default provider"),
("LLM_TEMPERATURE", "Global default temperature"),
("LLM_BASE_URL", "Global custom API endpoint"),
("OPENAI_API_KEY", "OpenAI API key"),
("OPENAI_TEMPERATURE", "OpenAI-specific temperature"),
("OPENAI_BASE_URL", "OpenAI-specific endpoint"),
("ANTHROPIC_API_KEY", "Anthropic API key"),
("ANTHROPIC_TEMPERATURE", "Anthropic-specific temperature"),
("GROQ_API_KEY", "Groq API key"),
("GROQ_TEMPERATURE", "Groq-specific temperature"),
]
for var, desc in env_vars:
value = os.environ.get(var, "[not set]")
if "API_KEY" in var and value != "[not set]":
# Mask API keys for security
value = value[:10] + "..." if len(value) > 10 else "***"
table.add_row(var, value, desc)
console.print(table)
console.print()
# --- Main Test Runner ---
async def main():
"""Run all tests"""
console.print(Panel.fit(
"[bold cyan]Crawl4AI LLM Parameters Test Suite[/]\n" +
"Testing temperature and base_url configuration hierarchy",
border_style="cyan"
))
# Show current environment
# await show_environment_info()
# Create HTTP client
async with httpx.AsyncClient(base_url=BASE_URL, timeout=60.0) as client:
# Check server health
if not await check_server_health(client):
console.print("[red]Server is not available. Please ensure the Docker container is running.[/]")
return
# Run tests
tests = [
("Default Configuration", test_default_no_params),
("Request Temperature", test_request_temperature),
("Provider Override", test_provider_override),
("Custom Base URL", test_base_url_custom),
("LLM Job Endpoint", test_llm_job_endpoint),
("LLM Endpoint", test_llm_endpoint),
]
for i, (name, test_func) in enumerate(tests, 1):
if i > 1:
console.print() # Add spacing between tests
try:
await test_func(client)
except Exception as e:
console.print(f"[red]Test '{name}' failed with error: {e}[/]")
console.print_exception(show_locals=False)
console.rule("[bold green]All Tests Complete![/]", style="green")
# Summary
console.print("\n[bold cyan]Configuration Hierarchy Summary:[/]")
console.print("1. [yellow]Request parameters[/] - Highest priority (temperature, base_url in API call)")
console.print("2. [yellow]Provider-specific env[/] - e.g., OPENAI_TEMPERATURE, GROQ_BASE_URL")
console.print("3. [yellow]Global env variables[/] - LLM_TEMPERATURE, LLM_BASE_URL")
console.print("4. [yellow]System defaults[/] - Lowest priority (provider/litellm defaults)")
console.print()
if __name__ == "__main__":
try:
asyncio.run(main())
except KeyboardInterrupt:
console.print("\n[yellow]Tests interrupted by user.[/]")
except Exception as e:
console.print(f"\n[bold red]An error occurred:[/]")
console.print_exception(show_locals=False)

View File

@@ -0,0 +1,170 @@
#!/usr/bin/env python3
"""
Test LLMTableExtraction with controlled HTML
"""
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
import asyncio
from crawl4ai import (
AsyncWebCrawler,
CrawlerRunConfig,
LLMConfig,
LLMTableExtraction,
DefaultTableExtraction,
CacheMode
)
async def test_controlled_html():
"""Test with controlled HTML content."""
print("\n" + "=" * 60)
print("LLM TABLE EXTRACTION TEST")
print("=" * 60)
url = "https://en.wikipedia.org/wiki/List_of_chemical_elements"
# url = "https://en.wikipedia.org/wiki/List_of_prime_ministers_of_India"
# Configure LLM
llm_config = LLMConfig(
# provider="openai/gpt-4.1-mini",
# api_token=os.getenv("OPENAI_API_KEY"),
provider="groq/llama-3.3-70b-versatile",
api_token="GROQ_API_TOKEN",
temperature=0.1,
max_tokens=32000
)
print("\n1. Testing LLMTableExtraction:")
# Create LLM extraction strategy
llm_strategy = LLMTableExtraction(
llm_config=llm_config,
verbose=True,
# css_selector="div.w3-example"
css_selector="div.mw-content-ltr",
# css_selector="table.wikitable",
max_tries=2,
enable_chunking=True,
chunk_token_threshold=5000, # Lower threshold to force chunking
min_rows_per_chunk=10,
max_parallel_chunks=3
)
config_llm = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
table_extraction=llm_strategy
)
async with AsyncWebCrawler() as crawler:
# Test with LLM extraction
result_llm = await crawler.arun(
# url=f"raw:{test_html}",
url=url,
config=config_llm
)
if result_llm.success:
print(f"\n ✓ LLM Extraction: Found {len(result_llm.tables)} table(s)")
for i, table in enumerate(result_llm.tables, 1):
print(f"\n Table {i}:")
print(f" - Caption: {table.get('caption', 'No caption')}")
print(f" - Headers: {table['headers']}")
print(f" - Rows: {len(table['rows'])}")
# Show how colspan/rowspan were handled
print(f" - Sample rows:")
for j, row in enumerate(table['rows'][:2], 1):
print(f" Row {j}: {row}")
metadata = table.get('metadata', {})
print(f" - Metadata:")
print(f" • Has merged cells: {metadata.get('has_merged_cells', False)}")
print(f" • Table type: {metadata.get('table_type', 'unknown')}")
# # Compare with default extraction
# print("\n2. Comparing with DefaultTableExtraction:")
# default_strategy = DefaultTableExtraction(
# table_score_threshold=3,
# verbose=False
# )
# config_default = CrawlerRunConfig(
# cache_mode=CacheMode.BYPASS,
# table_extraction=default_strategy
# )
# result_default = await crawler.arun(
# # url=f"raw:{test_html}",
# url=url,
# config=config_default
# )
# if result_default.success:
# print(f" ✓ Default Extraction: Found {len(result_default.tables)} table(s)")
# # Compare handling of complex structures
# print("\n3. Comparison Summary:")
# print(f" LLM found: {len(result_llm.tables)} tables")
# print(f" Default found: {len(result_default.tables)} tables")
# if result_llm.tables and result_default.tables:
# llm_first = result_llm.tables[0]
# default_first = result_default.tables[0]
# print(f"\n First table comparison:")
# print(f" LLM headers: {len(llm_first['headers'])} columns")
# print(f" Default headers: {len(default_first['headers'])} columns")
# # Check if LLM better handled the complex structure
# if llm_first.get('metadata', {}).get('has_merged_cells'):
# print(" ✓ LLM correctly identified merged cells")
# # Test pandas compatibility
# try:
# import pandas as pd
# print("\n4. Testing Pandas compatibility:")
# # Create DataFrame from LLM extraction
# df_llm = pd.DataFrame(
# llm_first['rows'],
# columns=llm_first['headers']
# )
# print(f" ✓ LLM table -> DataFrame: Shape {df_llm.shape}")
# # Create DataFrame from default extraction
# df_default = pd.DataFrame(
# default_first['rows'],
# columns=default_first['headers']
# )
# print(f" ✓ Default table -> DataFrame: Shape {df_default.shape}")
# print("\n LLM DataFrame preview:")
# print(df_llm.head(2).to_string())
# except ImportError:
# print("\n4. Pandas not installed, skipping DataFrame test")
print("\n✅ Test completed successfully!")
async def main():
"""Run the test."""
# Check for API key
if not os.getenv("OPENAI_API_KEY"):
print("⚠️ OPENAI_API_KEY not set. Please set it to test LLM extraction.")
print(" You can set it with: export OPENAI_API_KEY='your-key-here'")
return
await test_controlled_html()
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -4,7 +4,7 @@
import psutil
import platform
import time
from crawl4ai.memory_utils import get_true_memory_usage_percent, get_memory_stats, get_true_available_memory_gb
from crawl4ai.utils import get_true_memory_usage_percent, get_memory_stats, get_true_available_memory_gb
def test_memory_calculation():