Compare commits


1 Commit

Author SHA1 Message Date
AHMET YILMAZ
43a2088eb0 Fix redirect target verification in AsyncUrlSeeder and enhance tests
- Added `verify_redirect_targets` parameter to control redirect verification.
- Modified `_resolve_head()` to verify redirect targets based on the new parameter.
- Implemented tests for both verification modes, ensuring dead redirects are filtered out and legacy behavior is preserved.
2025-11-18 11:43:47 +08:00
35 changed files with 217 additions and 2200 deletions

View File

@@ -11,25 +11,6 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Free up disk space
run: |
echo "=== Disk space before cleanup ==="
df -h
# Remove unnecessary tools and libraries (frees ~25GB)
sudo rm -rf /usr/share/dotnet
sudo rm -rf /usr/local/lib/android
sudo rm -rf /opt/ghc
sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo rm -rf /usr/local/share/boost
sudo rm -rf /usr/share/swift
# Clean apt cache
sudo apt-get clean
echo "=== Disk space after cleanup ==="
df -h
- name: Checkout code
uses: actions/checkout@v4

View File

@@ -1,7 +1,7 @@
FROM python:3.12-slim-bookworm AS build
# C4ai version
ARG C4AI_VER=0.7.8
ARG C4AI_VER=0.7.7
ENV C4AI_VERSION=$C4AI_VER
LABEL c4ai.version=$C4AI_VER
@@ -167,11 +167,6 @@ RUN mkdir -p /home/appuser/.cache/ms-playwright \
RUN crawl4ai-doctor
# Ensure all cache directories belong to appuser
# This fixes permission issues with .cache/url_seeder and other runtime cache dirs
RUN mkdir -p /home/appuser/.cache \
&& chown -R appuser:appuser /home/appuser/.cache
# Copy application code
COPY deploy/docker/* ${APP_HOME}/

View File

@@ -12,16 +12,6 @@
[![Downloads](https://static.pepy.tech/badge/crawl4ai/month)](https://pepy.tech/project/crawl4ai)
[![GitHub Sponsors](https://img.shields.io/github/sponsors/unclecode?style=flat&logo=GitHub-Sponsors&label=Sponsors&color=pink)](https://github.com/sponsors/unclecode)
---
#### 🚀 Crawl4AI Cloud API — Closed Beta (Launching Soon)
Reliable, large-scale web extraction, now built to be _**drastically more cost-effective**_ than any of the existing solutions.
👉 **Apply [here](https://forms.gle/E9MyPaNXACnAMaqG7) for early access**
_We'll be onboarding in phases and working closely with early users.
Limited slots._
---
<p align="center">
<a href="https://x.com/crawl4ai">
<img src="https://img.shields.io/badge/Follow%20on%20X-000000?style=for-the-badge&logo=x&logoColor=white" alt="Follow on X" />
@@ -37,13 +27,13 @@ Limited slots._
Crawl4AI turns the web into clean, LLM-ready Markdown for RAG, agents, and data pipelines. Fast, controllable, and battle-tested by a 50k+ star community.
[✨ Check out latest update v0.7.8](#-recent-updates)
[✨ Check out latest update v0.7.7](#-recent-updates)
**New in v0.7.8**: Stability & Bug Fix Release! 11 bug fixes addressing Docker API issues (ContentRelevanceFilter, ProxyConfig, cache permissions), LLM extraction improvements (configurable backoff, HTML input format), URL handling fixes, and dependency updates (pypdf, Pydantic v2). [Release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.8.md)
**New in v0.7.7**: Complete Self-Hosting Platform with Real-time Monitoring! Enterprise-grade monitoring dashboard, comprehensive REST API, WebSocket streaming, smart browser pool management, and production-ready observability. Full visibility and control over your crawling infrastructure. [Release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.7.md)
✨ Recent v0.7.7: Complete Self-Hosting Platform with Real-time Monitoring! Enterprise-grade monitoring dashboard, comprehensive REST API, WebSocket streaming, smart browser pool management, and production-ready observability. [Release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.7.md)
✨ Recent v0.7.6: Complete Webhook Infrastructure for Docker Job Queue API! Real-time notifications for both `/crawl/job` and `/llm/job` endpoints with exponential backoff retry, custom headers, and flexible delivery modes. No more polling! [Release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.6.md)
✨ Previous v0.7.6: Complete Webhook Infrastructure for Docker Job Queue API! Real-time notifications for both `/crawl/job` and `/llm/job` endpoints with exponential backoff retry, custom headers, and flexible delivery modes. [Release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.6.md)
✨ Previous v0.7.5: Docker Hooks System with function-based API for pipeline customization, Enhanced LLM Integration with custom providers, HTTPS Preservation, and multiple community-reported bug fixes. [Release notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.5.md)
<details>
<summary>🤓 <strong>My Personal Story</strong></summary>
@@ -562,55 +552,6 @@ async def test_news_crawl():
## ✨ Recent Updates
<details>
<summary><strong>Version 0.7.8 Release Highlights - Stability & Bug Fix Release</strong></summary>
This release focuses on stability with 11 bug fixes addressing issues reported by the community. No new features, but significant improvements to reliability.
- **🐳 Docker API Fixes**:
- Fixed `ContentRelevanceFilter` deserialization in deep crawl requests (#1642)
- Fixed `ProxyConfig` JSON serialization in `BrowserConfig.to_dict()` (#1629)
- Fixed `.cache` folder permissions in Docker image (#1638)
- **🤖 LLM Extraction Improvements**:
- Configurable rate limiter backoff with new `LLMConfig` parameters (#1269):
```python
from crawl4ai import LLMConfig
config = LLMConfig(
provider="openai/gpt-4o-mini",
backoff_base_delay=5, # Wait 5s on first retry
backoff_max_attempts=5, # Try up to 5 times
backoff_exponential_factor=3 # Multiply delay by 3 each attempt
)
```
- HTML input format support for `LLMExtractionStrategy` (#1178):
```python
from crawl4ai import LLMExtractionStrategy
strategy = LLMExtractionStrategy(
llm_config=config,
instruction="Extract table data",
input_format="html" # Now supports: "html", "markdown", "fit_markdown"
)
```
- Fixed raw HTML URL variable - extraction strategies now receive `"Raw HTML"` instead of HTML blob (#1116)
- **🔗 URL Handling**:
- Fixed relative URL resolution after JavaScript redirects (#1268)
- Fixed import statement formatting in extracted code (#1181)
- **📦 Dependency Updates**:
- Replaced deprecated PyPDF2 with pypdf (#1412)
- Pydantic v2 ConfigDict compatibility - no more deprecation warnings (#678)
- **🧠 AdaptiveCrawler**:
- Fixed query expansion to actually use LLM instead of hardcoded mock data (#1621)
[Full v0.7.8 Release Notes →](https://github.com/unclecode/crawl4ai/blob/main/docs/blog/release-v0.7.8.md)
</details>
<details>
<summary><strong>Version 0.7.7 Release Highlights - The Self-Hosting & Monitoring Update</strong></summary>
@@ -1093,8 +1034,7 @@ Our enterprise sponsors and technology partners help scale Crawl4AI to power pro
| Company | About | Sponsorship Tier |
|------|------|----------------------------|
| <a href="https://app.nstproxy.com/register?i=ecOqW9" target="_blank"><picture><source width="250" media="(prefers-color-scheme: dark)" srcset="https://gist.github.com/aravindkarnam/62f82bd4818d3079d9dd3c31df432cf8/raw/nst-light.svg"><source width="250" media="(prefers-color-scheme: light)" srcset="https://www.nstproxy.com/logo.svg"><img alt="nstproxy" src="ttps://www.nstproxy.com/logo.svg"></picture></a> | NstProxy is a trusted proxy provider with over 110M+ real residential IPs, city-level targeting, 99.99% uptime, and low pricing at $0.1/GB, it delivers unmatched stability, scale, and cost-efficiency. | 🥈 Silver |
| <a href="https://app.scrapeless.com/passport/register?utm_source=official&utm_term=crawl4ai" target="_blank"><picture><source width="250" media="(prefers-color-scheme: dark)" srcset="https://gist.githubusercontent.com/aravindkarnam/0d275b942705604263e5c32d2db27bc1/raw/Scrapeless-light-logo.svg"><source width="250" media="(prefers-color-scheme: light)" srcset="https://gist.githubusercontent.com/aravindkarnam/22d0525cc0f3021bf19ebf6e11a69ccd/raw/Scrapeless-dark-logo.svg"><img alt="Scrapeless" src="https://gist.githubusercontent.com/aravindkarnam/22d0525cc0f3021bf19ebf6e11a69ccd/raw/Scrapeless-dark-logo.svg"></picture></a> | Scrapeless provides production-grade infrastructure for Crawling, Automation, and AI Agents, offering Scraping Browser, 4 Proxy Types and Universal Scraping API. | 🥈 Silver |
| <a href="https://app.scrapeless.com/passport/register?utm_source=official&utm_term=crawl4ai" target="_blank"><picture><source width="250" media="(prefers-color-scheme: dark)" srcset="https://gist.githubusercontent.com/aravindkarnam/0d275b942705604263e5c32d2db27bc1/raw/Scrapeless-light-logo.svg"><source width="250" media="(prefers-color-scheme: light)" srcset="https://gist.githubusercontent.com/aravindkarnam/22d0525cc0f3021bf19ebf6e11a69ccd/raw/Scrapeless-dark-logo.svg"><img alt="Scrapeless" src="https://gist.githubusercontent.com/aravindkarnam/22d0525cc0f3021bf19ebf6e11a69ccd/raw/Scrapeless-dark-logo.svg"></picture></a> | Scrapeless is the best full-stack web scraping toolkit offering Scraping API, Scraping Browser, Web Unlocker, Captcha Solver, and Proxies, designed to handle all your data collection needs. | 🥈 Silver |
| <a href="https://dashboard.capsolver.com/passport/register?inviteCode=ESVSECTX5Q23" target="_blank"><picture><source width="120" media="(prefers-color-scheme: dark)" srcset="https://docs.crawl4ai.com/uploads/sponsors/20251013045338_72a71fa4ee4d2f40.png"><source width="120" media="(prefers-color-scheme: light)" srcset="https://www.capsolver.com/assets/images/logo-text.png"><img alt="Capsolver" src="https://www.capsolver.com/assets/images/logo-text.png"></picture></a> | AI-powered Captcha solving service. Supports all major Captcha types, including reCAPTCHA, Cloudflare, and more | 🥉 Bronze |
| <a href="https://kipo.ai" target="_blank"><img src="https://docs.crawl4ai.com/uploads/sponsors/20251013045751_2d54f57f117c651e.png" alt="DataSync" width="120"/></a> | Helps engineers and buyers find, compare, and source electronic & industrial parts in seconds, with specs, pricing, lead times & alternatives.| 🥇 Gold |
| <a href="https://www.kidocode.com/" target="_blank"><img src="https://docs.crawl4ai.com/uploads/sponsors/20251013045045_bb8dace3f0440d65.svg" alt="Kidocode" width="120"/><p align="center">KidoCode</p></a> | Kidocode is a hybrid technology and entrepreneurship school for kids aged 518, offering both online and on-campus education. | 🥇 Gold |

View File

@@ -72,8 +72,6 @@ from .deep_crawling import (
BestFirstCrawlingStrategy,
DFSDeepCrawlStrategy,
DeepCrawlDecorator,
ContentRelevanceFilter,
ContentTypeScorer,
)
# NEW: Import AsyncUrlSeeder
from .async_url_seeder import AsyncUrlSeeder

View File

@@ -1,7 +1,7 @@
# crawl4ai/__version__.py
# This is the version that will be used for stable releases
__version__ = "0.7.8"
__version__ = "0.7.7"
# For nightly builds, this gets set during build process
__nightly_version__ = None

View File

@@ -728,18 +728,18 @@ class EmbeddingStrategy(CrawlStrategy):
provider = llm_config_dict.get('provider', 'openai/gpt-4o-mini') if llm_config_dict else 'openai/gpt-4o-mini'
api_token = llm_config_dict.get('api_token') if llm_config_dict else None
response = perform_completion_with_backoff(
provider=provider,
prompt_with_variables=prompt,
api_token=api_token,
json_response=True
)
# response = perform_completion_with_backoff(
# provider=provider,
# prompt_with_variables=prompt,
# api_token=api_token,
# json_response=True
# )
variations = json.loads(response.choices[0].message.content)
# variations = json.loads(response.choices[0].message.content)
# # Mock data with more variations for split
# variations ={'queries': ['what are the best vegetables to use in fried rice?', 'how do I make vegetable fried rice from scratch?', 'can you provide a quick recipe for vegetable fried rice?', 'what cooking techniques are essential for perfect fried rice with vegetables?', 'how to add flavor to vegetable fried rice?', 'are there any tips for making healthy fried rice with vegetables?']}
variations ={'queries': ['what are the best vegetables to use in fried rice?', 'how do I make vegetable fried rice from scratch?', 'can you provide a quick recipe for vegetable fried rice?', 'what cooking techniques are essential for perfect fried rice with vegetables?', 'how to add flavor to vegetable fried rice?', 'are there any tips for making healthy fried rice with vegetables?']}
# variations = {'queries': [

View File

@@ -1,5 +1,5 @@
import importlib
import os
from typing import Union
import warnings
import requests
from .config import (
@@ -27,14 +27,14 @@ from .table_extraction import TableExtractionStrategy, DefaultTableExtraction
from .cache_context import CacheMode
from .proxy_strategy import ProxyRotationStrategy
from typing import Union, List, Callable
import inspect
from typing import Any, Callable, Dict, List, Optional, Union
from typing import Any, Dict, Optional
from enum import Enum
# Type alias for URL matching
UrlMatcher = Union[str, Callable[[str], bool], List[Union[str, Callable[[str], bool]]]]
class MatchMode(Enum):
OR = "or"
AND = "and"
@@ -42,7 +42,8 @@ class MatchMode(Enum):
# from .proxy_strategy import ProxyConfig
def to_serializable_dict(obj: Any, ignore_default_value : bool = False):
def to_serializable_dict(obj: Any, ignore_default_value : bool = False) -> Dict:
"""
Recursively convert an object to a serializable dictionary using {type, params} structure
for complex objects.
@@ -109,6 +110,8 @@ def to_serializable_dict(obj: Any, ignore_default_value : bool = False):
# if value is not None:
# current_values[attr_name] = to_serializable_dict(value)
return {
"type": obj.__class__.__name__,
"params": current_values
@@ -134,20 +137,12 @@ def from_serializable_dict(data: Any) -> Any:
if data["type"] == "dict" and "value" in data:
return {k: from_serializable_dict(v) for k, v in data["value"].items()}
cls = None
# If you are receiving an error while trying to convert a dict to an object:
# Either add a module to `modules_paths` list, or add the `data["type"]` to the crawl4ai __init__.py file
module_paths = ["crawl4ai"]
for module_path in module_paths:
try:
mod = importlib.import_module(module_path)
if hasattr(mod, data["type"]):
cls = getattr(mod, data["type"])
break
except (ImportError, AttributeError):
continue
# Import from crawl4ai for class instances
import crawl4ai
if hasattr(crawl4ai, data["type"]):
cls = getattr(crawl4ai, data["type"])
if cls is not None:
# Handle Enum
if issubclass(cls, Enum):
return cls(data["params"])
@@ -603,7 +598,7 @@ class BrowserConfig:
"chrome_channel": self.chrome_channel,
"channel": self.channel,
"proxy": self.proxy,
"proxy_config": self.proxy_config.to_dict() if self.proxy_config else None,
"proxy_config": self.proxy_config,
"viewport_width": self.viewport_width,
"viewport_height": self.viewport_height,
"accept_downloads": self.accept_downloads,
@@ -1797,10 +1792,7 @@ class LLMConfig:
frequency_penalty: Optional[float] = None,
presence_penalty: Optional[float] = None,
stop: Optional[List[str]] = None,
n: Optional[int] = None,
backoff_base_delay: Optional[int] = None,
backoff_max_attempts: Optional[int] = None,
backoff_exponential_factor: Optional[int] = None,
n: Optional[int] = None,
):
"""Configuaration class for LLM provider and API token."""
self.provider = provider
@@ -1829,9 +1821,6 @@ class LLMConfig:
self.presence_penalty = presence_penalty
self.stop = stop
self.n = n
self.backoff_base_delay = backoff_base_delay if backoff_base_delay is not None else 2
self.backoff_max_attempts = backoff_max_attempts if backoff_max_attempts is not None else 3
self.backoff_exponential_factor = backoff_exponential_factor if backoff_exponential_factor is not None else 2
@staticmethod
def from_kwargs(kwargs: dict) -> "LLMConfig":
@@ -1845,10 +1834,7 @@ class LLMConfig:
frequency_penalty=kwargs.get("frequency_penalty"),
presence_penalty=kwargs.get("presence_penalty"),
stop=kwargs.get("stop"),
n=kwargs.get("n"),
backoff_base_delay=kwargs.get("backoff_base_delay"),
backoff_max_attempts=kwargs.get("backoff_max_attempts"),
backoff_exponential_factor=kwargs.get("backoff_exponential_factor")
n=kwargs.get("n")
)
def to_dict(self):
@@ -1862,10 +1848,7 @@ class LLMConfig:
"frequency_penalty": self.frequency_penalty,
"presence_penalty": self.presence_penalty,
"stop": self.stop,
"n": self.n,
"backoff_base_delay": self.backoff_base_delay,
"backoff_max_attempts": self.backoff_max_attempts,
"backoff_exponential_factor": self.backoff_exponential_factor
"n": self.n
}
def clone(self, **kwargs):
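
The `to_serializable_dict` / `from_serializable_dict` helpers touched in this hunk are what carry config objects over JSON (for example in Docker API requests). A minimal round-trip sketch, assuming these helpers live in `crawl4ai.async_configs` as this file's diff suggests and using illustrative `BrowserConfig` arguments:

```python
from crawl4ai import BrowserConfig
from crawl4ai.async_configs import to_serializable_dict, from_serializable_dict

config = BrowserConfig(headless=True, viewport_width=1280)

# Complex objects serialize to the {"type": ..., "params": ...} shape shown above.
data = to_serializable_dict(config)
assert data["type"] == "BrowserConfig"

# Deserialization looks the type name up on the crawl4ai package (or, in the
# variant with module_paths, on each listed module) and rebuilds the instance.
restored = from_serializable_dict(data)
assert isinstance(restored, BrowserConfig) and restored.headless is True
```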

View File

@@ -1023,12 +1023,6 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
final_messages = await self.adapter.retrieve_console_messages(page)
captured_console.extend(final_messages)
###
# This ensures we capture the current page URL at the time we return the response,
# which correctly reflects any JavaScript navigation that occurred.
###
redirected_url = page.url # Use current page URL to capture JS redirects
# Return complete response
return AsyncCrawlResponse(
html=html,

View File

@@ -166,6 +166,22 @@ class AsyncUrlSeeder:
Async version of UrlSeeder.
Call pattern is await/async for / async with.
Parameters
----------
ttl : timedelta, default TTL
Time-to-live for cached results.
client : httpx.AsyncClient, optional
HTTP client to use. If None, creates a new one.
logger : AsyncLoggerBase, optional
Logger instance for logging messages.
base_directory : str or pathlib.Path, optional
Base directory for cache storage. Defaults to home directory.
cache_root : str or pathlib.Path, optional
Root directory for URL seeder cache. Defaults to ~/.cache/url_seeder.
verify_redirect_targets : bool, default True
Whether to verify that redirect targets are alive (2xx status) before returning them.
When False, returns redirect targets without verification (legacy behavior).
Public coroutines
-----------------
await seed.urls(...)
@@ -203,6 +219,8 @@ class AsyncUrlSeeder:
# NEW: Add base_directory
base_directory: Optional[Union[str, pathlib.Path]] = None,
cache_root: Optional[Union[str, Path]] = None,
# NEW: Control redirect target verification
verify_redirect_targets: bool = True,
):
self.ttl = ttl
self._owns_client = client is None # Track if we created the client
@@ -227,6 +245,9 @@ class AsyncUrlSeeder:
cache_root or "~/.cache/url_seeder"))
(self.cache_root / "live").mkdir(parents=True, exist_ok=True)
(self.cache_root / "head").mkdir(exist_ok=True)
# Store redirect verification setting
self.verify_redirect_targets = verify_redirect_targets
def _log(self, level: str, message: str, tag: str = "URL_SEED", **kwargs: Any):
"""Helper to log messages using the provided logger, if available."""
@@ -682,24 +703,47 @@ class AsyncUrlSeeder:
Returns:
* the same URL if it answers 2xx,
* the absolute redirect target if it answers 3xx,
* the absolute redirect target if it answers 3xx (and if verify_redirect_targets=True, only if target is alive/2xx),
* None on any other status or network error.
"""
try:
r = await self.client.head(url, timeout=10, follow_redirects=False)
# direct hit
# direct 2xx hit
if 200 <= r.status_code < 300:
return str(r.url)
# single level redirect
# single-level redirect (3xx)
if r.status_code in (301, 302, 303, 307, 308):
loc = r.headers.get("location")
if loc:
return urljoin(url, loc)
target = urljoin(url, loc)
# Avoid infinite loop on self-redirect
if target == url:
return None
# If not verifying redirect targets, return immediately (old behavior)
if not self.verify_redirect_targets:
return target
# Verify redirect target is alive (new behavior)
try:
r2 = await self.client.head(target, timeout=10, follow_redirects=False)
if 200 <= r2.status_code < 300:
return str(r2.url)
# Optionally, could handle another 3xx here for 2-step chains, but spec only says 1
else:
self._log(
"debug",
"HEAD redirect target {target} did not resolve: status {status}",
params={"target": target, "status": r2.status_code},
tag="URL_SEED",
)
return None
except Exception as e2:
self._log("debug", "HEAD {target} failed: {err}",
params={"target": target, "err": str(e2)}, tag="URL_SEED")
return None
# all other cases
return None
except Exception as e:
self._log("debug", "HEAD {url} failed: {err}",
params={"url": url, "err": str(e)}, tag="URL_SEED")

View File

@@ -617,17 +617,17 @@ class AsyncWebCrawler:
else config.chunking_strategy
)
sections = chunking.chunk(content)
# extracted_content = config.extraction_strategy.run(_url, sections)
# extracted_content = config.extraction_strategy.run(url, sections)
# Use async version if available for better parallelism
if hasattr(config.extraction_strategy, 'arun'):
extracted_content = await config.extraction_strategy.arun(_url, sections)
extracted_content = await config.extraction_strategy.arun(url, sections)
else:
# Fallback to sync version run in thread pool to avoid blocking
extracted_content = await asyncio.to_thread(
config.extraction_strategy.run, url, sections
)
extracted_content = json.dumps(
extracted_content, indent=4, default=str, ensure_ascii=False
)

View File

@@ -980,9 +980,6 @@ class LLMContentFilter(RelevantContentFilter):
prompt,
api_token,
base_url=base_url,
base_delay=self.llm_config.backoff_base_delay,
max_attempts=self.llm_config.backoff_max_attempts,
exponential_factor=self.llm_config.backoff_exponential_factor,
extra_args=extra_args,
)

View File

@@ -509,22 +509,18 @@ class DomainFilter(URLFilter):
class ContentRelevanceFilter(URLFilter):
"""BM25-based relevance filter using head section content"""
__slots__ = ("query_terms", "threshold", "k1", "b", "avgdl", "query")
__slots__ = ("query_terms", "threshold", "k1", "b", "avgdl")
def __init__(
self,
query: Union[str, List[str]],
query: str,
threshold: float,
k1: float = 1.2,
b: float = 0.75,
avgdl: int = 1000,
):
super().__init__(name="BM25RelevanceFilter")
if isinstance(query, list):
self.query = " ".join(query)
else:
self.query = query
self.query_terms = self._tokenize(self.query)
self.query_terms = self._tokenize(query)
self.threshold = threshold
self.k1 = k1 # TF saturation parameter
self.b = b # Length normalization parameter
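
For reference, constructing the filter looks roughly like this under either variant above. The import path follows the `__init__` diff earlier (which re-exports it from `crawl4ai.deep_crawling`), and the query and threshold are illustrative.

```python
from crawl4ai.deep_crawling import ContentRelevanceFilter

# BM25 relevance scoring against the page's <head> content; URLs scoring
# below the threshold are rejected during deep crawls.
relevance_filter = ContentRelevanceFilter(
    query="API documentation",  # the Union[str, List[str]] variant also accepts a list
    threshold=0.3,              # illustrative cutoff
)
```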

View File

@@ -180,7 +180,7 @@ class Crawl4aiDockerClient:
yield CrawlResult(**result)
return stream_results()
response = await self._request("POST", "/crawl", json=data, timeout=hooks_timeout)
response = await self._request("POST", "/crawl", json=data)
result_data = response.json()
if not result_data.get("success", False):
raise RequestError(f"Crawl failed: {result_data.get('msg', 'Unknown error')}")

View File

@@ -649,9 +649,6 @@ class LLMExtractionStrategy(ExtractionStrategy):
base_url=self.llm_config.base_url,
json_response=self.force_json_response,
extra_args=self.extra_args,
base_delay=self.llm_config.backoff_base_delay,
max_attempts=self.llm_config.backoff_max_attempts,
exponential_factor=self.llm_config.backoff_exponential_factor
) # , json_response=self.extract_type == "schema")
# Track usage
usage = TokenUsage(
@@ -849,9 +846,6 @@ class LLMExtractionStrategy(ExtractionStrategy):
base_url=self.llm_config.base_url,
json_response=self.force_json_response,
extra_args=self.extra_args,
base_delay=self.llm_config.backoff_base_delay,
max_attempts=self.llm_config.backoff_max_attempts,
exponential_factor=self.llm_config.backoff_exponential_factor
)
# Track usage
usage = TokenUsage(

View File

@@ -1,4 +1,4 @@
from pydantic import BaseModel, HttpUrl, PrivateAttr, Field, ConfigDict
from pydantic import BaseModel, HttpUrl, PrivateAttr, Field
from typing import List, Dict, Optional, Callable, Awaitable, Union, Any
from typing import AsyncGenerator
from typing import Generic, TypeVar
@@ -153,7 +153,8 @@ class CrawlResult(BaseModel):
console_messages: Optional[List[Dict[str, Any]]] = None
tables: List[Dict] = Field(default_factory=list) # NEW [{headers,rows,caption,summary}]
model_config = ConfigDict(arbitrary_types_allowed=True)
class Config:
arbitrary_types_allowed = True
# NOTE: The StringCompatibleMarkdown class, custom __init__ method, property getters/setters,
# and model_dump override all exist to support a smooth transition from markdown as a string
@@ -331,7 +332,8 @@ class AsyncCrawlResponse(BaseModel):
network_requests: Optional[List[Dict[str, Any]]] = None
console_messages: Optional[List[Dict[str, Any]]] = None
model_config = ConfigDict(arbitrary_types_allowed=True)
class Config:
arbitrary_types_allowed = True
###############################
# Scraping Models

View File

@@ -15,9 +15,9 @@ from .utils import (
clean_pdf_text_to_html,
)
# Remove direct pypdf imports from the top
# import pypdf
# from pypdf import PdfReader
# Remove direct PyPDF2 imports from the top
# import PyPDF2
# from PyPDF2 import PdfReader
logger = logging.getLogger(__name__)
@@ -59,9 +59,9 @@ class NaivePDFProcessorStrategy(PDFProcessorStrategy):
save_images_locally: bool = False, image_save_dir: Optional[Path] = None, batch_size: int = 4):
# Import check at initialization time
try:
import pypdf
import PyPDF2
except ImportError:
raise ImportError("pypdf is required for PDF processing. Install with 'pip install crawl4ai[pdf]'")
raise ImportError("PyPDF2 is required for PDF processing. Install with 'pip install crawl4ai[pdf]'")
self.image_dpi = image_dpi
self.image_quality = image_quality
@@ -75,9 +75,9 @@ class NaivePDFProcessorStrategy(PDFProcessorStrategy):
def process(self, pdf_path: Path) -> PDFProcessResult:
# Import inside method to allow dependency to be optional
try:
from pypdf import PdfReader
from PyPDF2 import PdfReader
except ImportError:
raise ImportError("pypdf is required for PDF processing. Install with 'pip install crawl4ai[pdf]'")
raise ImportError("PyPDF2 is required for PDF processing. Install with 'pip install crawl4ai[pdf]'")
start_time = time()
result = PDFProcessResult(
@@ -125,15 +125,15 @@ class NaivePDFProcessorStrategy(PDFProcessorStrategy):
"""Like process() but processes PDF pages in parallel batches"""
# Import inside method to allow dependency to be optional
try:
from pypdf import PdfReader
import pypdf # For type checking
from PyPDF2 import PdfReader
import PyPDF2 # For type checking
except ImportError:
raise ImportError("pypdf is required for PDF processing. Install with 'pip install crawl4ai[pdf]'")
raise ImportError("PyPDF2 is required for PDF processing. Install with 'pip install crawl4ai[pdf]'")
import concurrent.futures
import threading
# Initialize pypdf thread support
# Initialize PyPDF2 thread support
if not hasattr(threading.current_thread(), "_children"):
threading.current_thread()._children = set()
@@ -232,11 +232,11 @@ class NaivePDFProcessorStrategy(PDFProcessorStrategy):
return pdf_page
def _extract_images(self, page, image_dir: Optional[Path]) -> List[Dict]:
# Import pypdf for type checking only when needed
# Import PyPDF2 for type checking only when needed
try:
from pypdf.generic import IndirectObject
import PyPDF2
except ImportError:
raise ImportError("pypdf is required for PDF processing. Install with 'pip install crawl4ai[pdf]'")
raise ImportError("PyPDF2 is required for PDF processing. Install with 'pip install crawl4ai[pdf]'")
if not self.extract_images:
return []
@@ -266,7 +266,7 @@ class NaivePDFProcessorStrategy(PDFProcessorStrategy):
width = xobj.get('/Width', 0)
height = xobj.get('/Height', 0)
color_space = xobj.get('/ColorSpace', '/DeviceRGB')
if isinstance(color_space, IndirectObject):
if isinstance(color_space, PyPDF2.generic.IndirectObject):
color_space = color_space.get_object()
# Handle different image encodings
@@ -277,7 +277,7 @@ class NaivePDFProcessorStrategy(PDFProcessorStrategy):
if '/FlateDecode' in filters:
try:
decode_parms = xobj.get('/DecodeParms', {})
if isinstance(decode_parms, IndirectObject):
if isinstance(decode_parms, PyPDF2.generic.IndirectObject):
decode_parms = decode_parms.get_object()
predictor = decode_parms.get('/Predictor', 1)
@@ -416,10 +416,10 @@ class NaivePDFProcessorStrategy(PDFProcessorStrategy):
# Import inside method to allow dependency to be optional
if reader is None:
try:
from pypdf import PdfReader
from PyPDF2 import PdfReader
reader = PdfReader(pdf_path)
except ImportError:
raise ImportError("pypdf is required for PDF processing. Install with 'pip install crawl4ai[pdf]'")
raise ImportError("PyPDF2 is required for PDF processing. Install with 'pip install crawl4ai[pdf]'")
meta = reader.metadata or {}
created = self._parse_pdf_date(meta.get('/CreationDate', ''))
@@ -459,11 +459,11 @@ if __name__ == "__main__":
from pathlib import Path
try:
# Import pypdf only when running the file directly
import pypdf
from pypdf import PdfReader
# Import PyPDF2 only when running the file directly
import PyPDF2
from PyPDF2 import PdfReader
except ImportError:
print("pypdf is required for PDF processing. Install with 'pip install crawl4ai[pdf]'")
print("PyPDF2 is required for PDF processing. Install with 'pip install crawl4ai[pdf]'")
exit(1)
current_dir = Path(__file__).resolve().parent

View File

@@ -795,9 +795,6 @@ Return only a JSON array of extracted tables following the specified format."""
api_token=self.llm_config.api_token,
base_url=self.llm_config.base_url,
json_response=True,
base_delay=self.llm_config.backoff_base_delay,
max_attempts=self.llm_config.backoff_max_attempts,
exponential_factor=self.llm_config.backoff_exponential_factor,
extra_args=self.extra_args
)
@@ -1119,9 +1116,6 @@ Return only a JSON array of extracted tables following the specified format."""
api_token=self.llm_config.api_token,
base_url=self.llm_config.base_url,
json_response=True,
base_delay=self.llm_config.backoff_base_delay,
max_attempts=self.llm_config.backoff_max_attempts,
exponential_factor=self.llm_config.backoff_exponential_factor,
extra_args=self.extra_args
)

View File

@@ -1745,9 +1745,6 @@ def perform_completion_with_backoff(
api_token,
json_response=False,
base_url=None,
base_delay=2,
max_attempts=3,
exponential_factor=2,
**kwargs,
):
"""
@@ -1764,9 +1761,6 @@ def perform_completion_with_backoff(
api_token (str): The API token for authentication.
json_response (bool): Whether to request a JSON response. Defaults to False.
base_url (Optional[str]): The base URL for the API. Defaults to None.
base_delay (int): The base delay in seconds. Defaults to 2.
max_attempts (int): The maximum number of attempts. Defaults to 3.
exponential_factor (int): The exponential factor. Defaults to 2.
**kwargs: Additional arguments for the API request.
Returns:
@@ -1776,6 +1770,9 @@ def perform_completion_with_backoff(
from litellm import completion
from litellm.exceptions import RateLimitError
max_attempts = 3
base_delay = 2 # Base delay in seconds, you can adjust this based on your needs
extra_args = {"temperature": 0.01, "api_key": api_token, "base_url": base_url}
if json_response:
extra_args["response_format"] = {"type": "json_object"}
@@ -1801,7 +1798,7 @@ def perform_completion_with_backoff(
# Check if we have exhausted our max attempts
if attempt < max_attempts - 1:
# Calculate the delay and wait
delay = base_delay * (exponential_factor**attempt) # Exponential backoff formula
delay = base_delay * (2**attempt) # Exponential backoff formula
print(f"Waiting for {delay} seconds before retrying...")
time.sleep(delay)
else:
@@ -1834,9 +1831,6 @@ async def aperform_completion_with_backoff(
api_token,
json_response=False,
base_url=None,
base_delay=2,
max_attempts=3,
exponential_factor=2,
**kwargs,
):
"""
@@ -1853,9 +1847,6 @@ async def aperform_completion_with_backoff(
api_token (str): The API token for authentication.
json_response (bool): Whether to request a JSON response. Defaults to False.
base_url (Optional[str]): The base URL for the API. Defaults to None.
base_delay (int): The base delay in seconds. Defaults to 2.
max_attempts (int): The maximum number of attempts. Defaults to 3.
exponential_factor (int): The exponential factor. Defaults to 2.
**kwargs: Additional arguments for the API request.
Returns:
@@ -1866,6 +1857,9 @@ async def aperform_completion_with_backoff(
from litellm.exceptions import RateLimitError
import asyncio
max_attempts = 3
base_delay = 2 # Base delay in seconds, you can adjust this based on your needs
extra_args = {"temperature": 0.01, "api_key": api_token, "base_url": base_url}
if json_response:
extra_args["response_format"] = {"type": "json_object"}
@@ -1891,7 +1885,7 @@ async def aperform_completion_with_backoff(
# Check if we have exhausted our max attempts
if attempt < max_attempts - 1:
# Calculate the delay and wait
delay = base_delay * (exponential_factor**attempt) # Exponential backoff formula
delay = base_delay * (2**attempt) # Exponential backoff formula
print(f"Waiting for {delay} seconds before retrying...")
await asyncio.sleep(delay)
else:

View File

@@ -108,10 +108,7 @@ async def handle_llm_qa(
prompt_with_variables=prompt,
api_token=get_llm_api_key(config), # Returns None to let litellm handle it
temperature=get_llm_temperature(config),
base_url=get_llm_base_url(config),
base_delay=config["llm"].get("backoff_base_delay", 2),
max_attempts=config["llm"].get("backoff_max_attempts", 3),
exponential_factor=config["llm"].get("backoff_exponential_factor", 2)
base_url=get_llm_base_url(config)
)
return response.choices[0].message.content

View File

@@ -1,327 +0,0 @@
# Crawl4AI v0.7.8: Stability & Bug Fix Release
*December 2025*
---
I'm releasing Crawl4AI v0.7.8—a focused stability release that addresses 11 bugs reported by the community. While there are no new features in this release, these fixes resolve important issues affecting Docker deployments, LLM extraction, URL handling, and dependency compatibility.
## What's Fixed at a Glance
- **Docker API**: Fixed ContentRelevanceFilter deserialization, ProxyConfig serialization, and cache folder permissions
- **LLM Extraction**: Configurable rate limiter backoff, HTML input format support, and proper URL handling for raw HTML
- **URL Handling**: Correct relative URL resolution after JavaScript redirects
- **Dependencies**: Replaced deprecated PyPDF2 with pypdf, Pydantic v2 ConfigDict compatibility
- **AdaptiveCrawler**: Fixed query expansion to actually use LLM instead of hardcoded mock data
## Bug Fixes
### Docker & API Fixes
#### ContentRelevanceFilter Deserialization (#1642)
**The Problem:** When sending deep crawl requests to the Docker API with `ContentRelevanceFilter`, the server failed to deserialize the filter, causing requests to fail.
**The Fix:** I added `ContentRelevanceFilter` to the public exports and enhanced the deserialization logic with dynamic imports.
```python
# This now works correctly in Docker API
import httpx
request = {
"urls": ["https://docs.example.com"],
"crawler_config": {
"deep_crawl_strategy": {
"type": "BFSDeepCrawlStrategy",
"max_depth": 2,
"filter_chain": [
{
"type": "ContentRelevanceFilter",
"query": "API documentation",
"threshold": 0.3
}
]
}
}
}
async with httpx.AsyncClient() as client:
response = await client.post("http://localhost:11235/crawl", json=request)
# Previously failed, now works!
```
#### ProxyConfig JSON Serialization (#1629)
**The Problem:** `BrowserConfig.to_dict()` failed when `proxy_config` was set because `ProxyConfig` wasn't being serialized to a dictionary.
**The Fix:** `ProxyConfig.to_dict()` is now called during serialization.
```python
import json
from crawl4ai import BrowserConfig
from crawl4ai.async_configs import ProxyConfig
proxy = ProxyConfig(
server="http://proxy.example.com:8080",
username="user",
password="pass"
)
config = BrowserConfig(headless=True, proxy_config=proxy)
# Previously raised TypeError, now works
config_dict = config.to_dict()
json.dumps(config_dict) # Valid JSON
```
#### Docker Cache Folder Permissions (#1638)
**The Problem:** The `.cache` folder in the Docker image had incorrect permissions, causing crawling to fail when caching was enabled.
**The Fix:** Corrected ownership and permissions during image build.
```bash
# Cache now works correctly in Docker
docker run -d -p 11235:11235 \
--shm-size=1g \
-v ./my-cache:/app/.cache \
unclecode/crawl4ai:0.7.8
```
---
### LLM & Extraction Fixes
#### Configurable Rate Limiter Backoff (#1269)
**The Problem:** The LLM rate limiting backoff parameters were hardcoded, making it impossible to adjust retry behavior for different API rate limits.
**The Fix:** `LLMConfig` now accepts three new parameters for complete control over retry behavior.
```python
from crawl4ai import LLMConfig
# Default behavior (unchanged)
default_config = LLMConfig(provider="openai/gpt-4o-mini")
# backoff_base_delay=2, backoff_max_attempts=3, backoff_exponential_factor=2
# Custom configuration for APIs with strict rate limits
custom_config = LLMConfig(
provider="openai/gpt-4o-mini",
backoff_base_delay=5, # Wait 5 seconds on first retry
backoff_max_attempts=5, # Try up to 5 times
backoff_exponential_factor=3 # Multiply delay by 3 each attempt
)
# Retry waits with these settings: 5s -> 15s -> 45s -> 135s (at most max_attempts - 1 waits)
```
#### LLM Strategy HTML Input Support (#1178)
**The Problem:** `LLMExtractionStrategy` always sent markdown to the LLM, but some extraction tasks work better with HTML structure preserved.
**The Fix:** Added `input_format` parameter supporting `"markdown"`, `"html"`, `"fit_markdown"`, `"cleaned_html"`, and `"fit_html"`.
```python
from crawl4ai import LLMExtractionStrategy, LLMConfig
# Default: markdown input (unchanged)
markdown_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o-mini"),
instruction="Extract product information"
)
# NEW: HTML input - preserves table/list structure
html_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o-mini"),
instruction="Extract the data table preserving structure",
input_format="html"
)
# NEW: Filtered markdown - only relevant content
fit_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o-mini"),
instruction="Summarize the main content",
input_format="fit_markdown"
)
```
#### Raw HTML URL Variable (#1116)
**The Problem:** When using `url="raw:<html>..."`, the entire HTML content was being passed to extraction strategies as the URL parameter, polluting LLM prompts.
**The Fix:** The URL is now correctly set to `"Raw HTML"` for raw HTML inputs.
```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
html = "<html><body><h1>Test</h1></body></html>"
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url=f"raw:{html}",
config=CrawlerRunConfig(extraction_strategy=my_strategy)
)
# extraction_strategy receives url="Raw HTML" instead of the HTML blob
```
---
### URL Handling Fix
#### Relative URLs After Redirects (#1268)
**The Problem:** When JavaScript caused a page redirect, relative links were resolved against the original URL instead of the final URL.
**The Fix:** `redirected_url` now captures the actual page URL after all JavaScript execution completes.
```python
from crawl4ai import AsyncWebCrawler
async with AsyncWebCrawler() as crawler:
# Page at /old-page redirects via JS to /new-page
result = await crawler.arun(url="https://example.com/old-page")
# BEFORE: redirected_url = "https://example.com/old-page"
# AFTER: redirected_url = "https://example.com/new-page"
# Links are now correctly resolved against the final URL
for link in result.links['internal']:
print(link['href']) # Relative links resolved correctly
```
---
### Dependency & Compatibility Fixes
#### PyPDF2 Replaced with pypdf (#1412)
**The Problem:** PyPDF2 was deprecated in 2022 and is no longer maintained.
**The Fix:** Replaced with the actively maintained `pypdf` library.
```bash
# Installation (unchanged)
pip install "crawl4ai[pdf]"
# The PDF processor now uses pypdf internally;
# no code changes are required - the API remains the same.
```
#### Pydantic v2 ConfigDict Compatibility (#678)
**The Problem:** Using the deprecated `class Config` syntax caused deprecation warnings with Pydantic v2.
**The Fix:** Migrated to `model_config = ConfigDict(...)` syntax.
```python
# No more deprecation warnings when importing crawl4ai models
from crawl4ai.models import CrawlResult
from crawl4ai import CrawlerRunConfig, BrowserConfig
# All models are now Pydantic v2 compatible
```
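
For anyone migrating their own models, the two declaration styles compare like this (a generic `BaseModel` sketch, not crawl4ai's actual classes):

```python
from pydantic import BaseModel, ConfigDict

class OldStyle(BaseModel):
    # Pydantic v1 style: triggers a deprecation warning under Pydantic v2
    class Config:
        arbitrary_types_allowed = True

class NewStyle(BaseModel):
    # Pydantic v2 style used from this release on
    model_config = ConfigDict(arbitrary_types_allowed=True)
```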
---
### AdaptiveCrawler Fix
#### Query Expansion Using LLM (#1621)
**The Problem:** The `EmbeddingStrategy` in AdaptiveCrawler had commented-out LLM code and was using hardcoded mock query variations instead.
**The Fix:** Uncommented and activated the LLM call for actual query expansion.
```python
# AdaptiveCrawler query expansion now actually uses the LLM
# Instead of hardcoded variations like:
# variations = {'queries': ['what are the best vegetables...']}
# The LLM generates relevant query variations based on your actual query
```
---
### Code Formatting Fix
#### Import Statement Formatting (#1181)
**The Problem:** When extracting code from web pages, import statements were sometimes concatenated without proper line separation.
**The Fix:** Import statements now maintain proper newline separation.
```python
# BEFORE: "import osimport sysfrom pathlib import Path"
# AFTER:
# import os
# import sys
# from pathlib import Path
```
---
## Breaking Changes
**None!** This release is fully backward compatible.
- All existing code continues to work without modification
- New parameters have sensible defaults matching previous behavior
- No API changes to existing functionality
---
## Upgrade Instructions
### Python Package
```bash
pip install --upgrade crawl4ai
# or
pip install crawl4ai==0.7.8
```
### Docker
```bash
# Pull the latest version
docker pull unclecode/crawl4ai:0.7.8
# Run
docker run -d -p 11235:11235 --shm-size=1g unclecode/crawl4ai:0.7.8
```
---
## Verification
Run the verification tests to confirm all fixes are working:
```bash
python docs/releases_review/demo_v0.7.8.py
```
This runs actual tests that verify each bug fix is properly implemented.
---
## Acknowledgments
Thank you to everyone who reported these issues and provided detailed reproduction steps. Your bug reports make Crawl4AI better for everyone.
Issues fixed: #1642, #1638, #1629, #1621, #1412, #1269, #1268, #1181, #1178, #1116, #678
---
## Support & Resources
- **Documentation**: [docs.crawl4ai.com](https://docs.crawl4ai.com)
- **GitHub**: [github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)
- **Discord**: [discord.gg/crawl4ai](https://discord.gg/jP8KfhDhyN)
- **Twitter**: [@unclecode](https://x.com/unclecode)
---
**This stability release ensures Crawl4AI works reliably across Docker deployments, LLM extraction workflows, and various edge cases. Thank you for your continued support and feedback!**
**Happy crawling!**
*- unclecode*

View File

@@ -1,61 +0,0 @@
import json
import asyncio
from urllib.parse import quote, urlencode
from crawl4ai import CrawlerRunConfig, BrowserConfig, AsyncWebCrawler
# Scrapeless provides a free anti-detection fingerprint browser client and cloud browsers:
# https://www.scrapeless.com/en/blog/scrapeless-nstbrowser-strategic-integration
async def main():
# customize browser fingerprint
fingerprint = {
"userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.1.2.3 Safari/537.36",
"platform": "Windows",
"screen": {
"width": 1280, "height": 1024
},
"localization": {
"languages": ["zh-HK", "en-US", "en"], "timezone": "Asia/Hong_Kong",
}
}
fingerprint_json = json.dumps(fingerprint)
encoded_fingerprint = quote(fingerprint_json)
scrapeless_params = {
"token": "your token",
"sessionTTL": 1000,
"sessionName": "Demo",
"fingerprint": encoded_fingerprint,
# Sets the target country/region for the proxy, sending requests via an IP address from that region. You can specify a country code (e.g., US for the United States, GB for the United Kingdom, ANY for any country). See country codes for all supported options.
# "proxyCountry": "ANY",
# create profile on scrapeless
# "profileId": "your profileId",
# For more usage details, please refer to https://docs.scrapeless.com/en/scraping-browser/quickstart/getting-started
}
query_string = urlencode(scrapeless_params)
scrapeless_connection_url = f"wss://browser.scrapeless.com/api/v2/browser?{query_string}"
async with AsyncWebCrawler(
config=BrowserConfig(
headless=False,
browser_mode="cdp",
cdp_url=scrapeless_connection_url,
)
) as crawler:
result = await crawler.arun(
url="https://www.scrapeless.com/en",
config=CrawlerRunConfig(
wait_for="css:.content",
scan_full_page=True,
),
)
print("-" * 20)
print(f'Status Code: {result.status_code}')
print("-" * 20)
print(f'Title: {result.metadata["title"]}')
print(f'Description: {result.metadata["description"]}')
print("-" * 20)
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -439,19 +439,10 @@ LLMConfig is useful to pass LLM provider config to strategies and functions that
| **`provider`** | `"ollama/llama3","groq/llama3-70b-8192","groq/llama3-8b-8192", "openai/gpt-4o-mini" ,"openai/gpt-4o","openai/o1-mini","openai/o1-preview","openai/o3-mini","openai/o3-mini-high","anthropic/claude-3-haiku-20240307","anthropic/claude-3-opus-20240229","anthropic/claude-3-sonnet-20240229","anthropic/claude-3-5-sonnet-20240620","gemini/gemini-pro","gemini/gemini-1.5-pro","gemini/gemini-2.0-flash","gemini/gemini-2.0-flash-exp","gemini/gemini-2.0-flash-lite-preview-02-05","deepseek/deepseek-chat"`<br/>*(default: `"openai/gpt-4o-mini"`)* | Which LLM provider to use.
| **`api_token`** | 1. Optional. When not provided explicitly, api_token is read from environment variables based on the provider. For example, if a Gemini model is passed as the provider, `"GEMINI_API_KEY"` is read from the environment. <br/> 2. API token of the LLM provider, e.g. `api_token = "gsk_1ClHGGJ7Lpn4WGybR7vNWGdyb3FY7zXEw3SCiy0BAVM9lL8CQv"` <br/> 3. Environment variable - use with the prefix "env:", e.g. `api_token = "env: GROQ_API_KEY"` | API token to use for the given provider
| **`base_url`** |Optional. Custom API endpoint | If your provider has a custom endpoint
| **`backoff_base_delay`** |Optional. `int` *(default: `2`)* | Seconds to wait before the first retry when the provider throttles a request.
| **`backoff_max_attempts`** |Optional. `int` *(default: `3`)* | Total tries (initial call + retries) before surfacing an error.
| **`backoff_exponential_factor`** |Optional. `int` *(default: `2`)* | Multiplier that increases the wait time for each retry (`delay = base_delay * factor^attempt`).
## 3.2 Example Usage
```python
llm_config = LLMConfig(
provider="openai/gpt-4o-mini",
api_token=os.getenv("OPENAI_API_KEY"),
backoff_base_delay=1, # optional
backoff_max_attempts=5, # optional
backoff_exponential_factor=3, # optional
)
llm_config = LLMConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY"))
```
## 4. Putting It All Together

View File

@@ -20,35 +20,25 @@ Ever wondered why your AI coding assistant struggles with your library despite c
## Latest Release
### [Crawl4AI v0.7.8 Stability & Bug Fix Release](../blog/release-v0.7.8.md)
*December 2025*
Crawl4AI v0.7.8 is a focused stability release addressing 11 bugs reported by the community. While there are no new features, these fixes resolve important issues affecting Docker deployments, LLM extraction, URL handling, and dependency compatibility.
Key highlights:
- **🐳 Docker API Fixes**: ContentRelevanceFilter deserialization, ProxyConfig serialization, cache folder permissions
- **🤖 LLM Improvements**: Configurable rate limiter backoff, HTML input format support, raw HTML URL handling
- **🔗 URL Handling**: Correct relative URL resolution after JavaScript redirects
- **📦 Dependencies**: Replaced deprecated PyPDF2 with pypdf, Pydantic v2 ConfigDict compatibility
- **🧠 AdaptiveCrawler**: Fixed query expansion to actually use LLM instead of mock data
[Read full release notes →](../blog/release-v0.7.8.md)
## Recent Releases
### [Crawl4AI v0.7.7 The Self-Hosting & Monitoring Update](../blog/release-v0.7.7.md)
*November 14, 2025*
Crawl4AI v0.7.7 transforms Docker into a complete self-hosting platform with enterprise-grade real-time monitoring, comprehensive observability, and full operational control.
Crawl4AI v0.7.7 transforms Docker into a complete self-hosting platform with enterprise-grade real-time monitoring, comprehensive observability, and full operational control. Experience complete visibility into your crawling infrastructure!
Key highlights:
- **📊 Real-time Monitoring Dashboard**: Interactive web UI with live system metrics
- **🔌 Comprehensive Monitor API**: Complete REST API for programmatic access
- **⚡ WebSocket Streaming**: Real-time updates every 2 seconds
- **🔥 Smart Browser Pool**: 3-tier architecture with automatic promotion and cleanup
- **📊 Real-time Monitoring Dashboard**: Interactive web UI with live system metrics and browser pool visibility
- **🔌 Comprehensive Monitor API**: Complete REST API for programmatic access to all monitoring data
- **⚡ WebSocket Streaming**: Real-time updates every 2 seconds for custom dashboards
- **🔥 Smart Browser Pool**: 3-tier architecture (permanent/hot/cold) with automatic promotion and cleanup
- **🧹 Janitor System**: Automatic resource management with event logging
- **🎮 Control Actions**: Manual browser management (kill, restart, cleanup) via API
- **📈 Production Ready**: Prometheus integration, alerting patterns, and 6 critical metrics for ops excellence
- **🐛 Critical Fixes**: Async LLM extraction (#1055), DFS crawling (#1607), viewport config, and security updates
[Read full release notes →](../blog/release-v0.7.7.md)
## Recent Releases
### [Crawl4AI v0.7.6 The Webhook Infrastructure Update](../blog/release-v0.7.6.md)
*October 22, 2025*
@@ -76,17 +66,15 @@ Key highlights:
[Read full release notes →](../blog/release-v0.7.5.md)
### [Crawl4AI v0.7.4 The Intelligent Table Extraction & Performance Update](../blog/release-v0.7.4.md)
*August 17, 2025*
Revolutionary LLM-powered table extraction with intelligent chunking, performance improvements for concurrent crawling, enhanced browser management, and critical stability fixes.
[Read full release notes →](../blog/release-v0.7.4.md)
---
## Older Releases
| Version | Date | Highlights |
|---------|------|------------|
| [v0.7.4](../blog/release-v0.7.4.md) | August 2025 | LLM-powered table extraction, performance improvements |
| [v0.7.3](../blog/release-v0.7.3.md) | July 2025 | Undetected browser, multi-URL config, memory monitoring |
| [v0.7.1](../blog/release-v0.7.1.md) | June 2025 | Bug fixes and stability improvements |
| [v0.7.0](../blog/release-v0.7.0.md) | May 2025 | Adaptive crawling, virtual scroll, link analysis |
## Project History
Curious about how Crawl4AI has evolved? Check out our [complete changelog](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md) for a detailed history of all versions and updates.

View File

@@ -1,327 +0,0 @@
# Crawl4AI v0.7.8: Stability & Bug Fix Release
*December 2025*
---
I'm releasing Crawl4AI v0.7.8—a focused stability release that addresses 11 bugs reported by the community. While there are no new features in this release, these fixes resolve important issues affecting Docker deployments, LLM extraction, URL handling, and dependency compatibility.
## What's Fixed at a Glance
- **Docker API**: Fixed ContentRelevanceFilter deserialization, ProxyConfig serialization, and cache folder permissions
- **LLM Extraction**: Configurable rate limiter backoff, HTML input format support, and proper URL handling for raw HTML
- **URL Handling**: Correct relative URL resolution after JavaScript redirects
- **Dependencies**: Replaced deprecated PyPDF2 with pypdf, Pydantic v2 ConfigDict compatibility
- **AdaptiveCrawler**: Fixed query expansion to actually use LLM instead of hardcoded mock data
## Bug Fixes
### Docker & API Fixes
#### ContentRelevanceFilter Deserialization (#1642)
**The Problem:** When sending deep crawl requests to the Docker API with `ContentRelevanceFilter`, the server failed to deserialize the filter, causing requests to fail.
**The Fix:** I added `ContentRelevanceFilter` to the public exports and enhanced the deserialization logic with dynamic imports.
```python
# This now works correctly in Docker API
import httpx
request = {
"urls": ["https://docs.example.com"],
"crawler_config": {
"deep_crawl_strategy": {
"type": "BFSDeepCrawlStrategy",
"max_depth": 2,
"filter_chain": [
{
"type": "ContentRelevanceFilter",
"query": "API documentation",
"threshold": 0.3
}
]
}
}
}
async with httpx.AsyncClient() as client:
response = await client.post("http://localhost:11235/crawl", json=request)
# Previously failed, now works!
```
#### ProxyConfig JSON Serialization (#1629)
**The Problem:** `BrowserConfig.to_dict()` failed when `proxy_config` was set because `ProxyConfig` wasn't being serialized to a dictionary.
**The Fix:** `ProxyConfig.to_dict()` is now called during serialization.
```python
from crawl4ai import BrowserConfig
from crawl4ai.async_configs import ProxyConfig
proxy = ProxyConfig(
server="http://proxy.example.com:8080",
username="user",
password="pass"
)
config = BrowserConfig(headless=True, proxy_config=proxy)
# Previously raised TypeError, now works
config_dict = config.to_dict()
json.dumps(config_dict) # Valid JSON
```
#### Docker Cache Folder Permissions (#1638)
**The Problem:** The `.cache` folder in the Docker image had incorrect permissions, causing crawling to fail when caching was enabled.
**The Fix:** Corrected ownership and permissions during image build.
```bash
# Cache now works correctly in Docker
docker run -d -p 11235:11235 \
--shm-size=1g \
-v ./my-cache:/app/.cache \
unclecode/crawl4ai:0.7.8
```
---
### LLM & Extraction Fixes
#### Configurable Rate Limiter Backoff (#1269)
**The Problem:** The LLM rate limiting backoff parameters were hardcoded, making it impossible to adjust retry behavior for different API rate limits.
**The Fix:** `LLMConfig` now accepts three new parameters for complete control over retry behavior.
```python
from crawl4ai import LLMConfig
# Default behavior (unchanged)
default_config = LLMConfig(provider="openai/gpt-4o-mini")
# backoff_base_delay=2, backoff_max_attempts=3, backoff_exponential_factor=2
# Custom configuration for APIs with strict rate limits
custom_config = LLMConfig(
provider="openai/gpt-4o-mini",
backoff_base_delay=5, # Wait 5 seconds on first retry
backoff_max_attempts=5, # Try up to 5 times
backoff_exponential_factor=3 # Multiply delay by 3 each attempt
)
# Retry sequence: 5s -> 15s -> 45s -> 135s -> 405s
```
#### LLM Strategy HTML Input Support (#1178)
**The Problem:** `LLMExtractionStrategy` always sent markdown to the LLM, but some extraction tasks work better with HTML structure preserved.
**The Fix:** Added `input_format` parameter supporting `"markdown"`, `"html"`, `"fit_markdown"`, `"cleaned_html"`, and `"fit_html"`.
```python
from crawl4ai import LLMExtractionStrategy, LLMConfig
# Default: markdown input (unchanged)
markdown_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o-mini"),
instruction="Extract product information"
)
# NEW: HTML input - preserves table/list structure
html_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o-mini"),
instruction="Extract the data table preserving structure",
input_format="html"
)
# NEW: Filtered markdown - only relevant content
fit_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o-mini"),
instruction="Summarize the main content",
input_format="fit_markdown"
)
```
#### Raw HTML URL Variable (#1116)
**The Problem:** When using `url="raw:<html>..."`, the entire HTML content was being passed to extraction strategies as the URL parameter, polluting LLM prompts.
**The Fix:** The URL is now correctly set to `"Raw HTML"` for raw HTML inputs.
```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
html = "<html><body><h1>Test</h1></body></html>"
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url=f"raw:{html}",
config=CrawlerRunConfig(extraction_strategy=my_strategy)
)
# extraction_strategy receives url="Raw HTML" instead of the HTML blob
```
---
### URL Handling Fix
#### Relative URLs After Redirects (#1268)
**The Problem:** When JavaScript caused a page redirect, relative links were resolved against the original URL instead of the final URL.
**The Fix:** `redirected_url` now captures the actual page URL after all JavaScript execution completes.
```python
from crawl4ai import AsyncWebCrawler
async with AsyncWebCrawler() as crawler:
# Page at /old-page redirects via JS to /new-page
result = await crawler.arun(url="https://example.com/old-page")
# BEFORE: redirected_url = "https://example.com/old-page"
# AFTER: redirected_url = "https://example.com/new-page"
# Links are now correctly resolved against the final URL
for link in result.links['internal']:
print(link['href']) # Relative links resolved correctly
```
---
### Dependency & Compatibility Fixes
#### PyPDF2 Replaced with pypdf (#1412)
**The Problem:** PyPDF2 was deprecated in 2022 and is no longer maintained.
**The Fix:** Replaced with the actively maintained `pypdf` library.
```bash
# Installation (unchanged)
pip install crawl4ai[pdf]
# The PDF processor now uses pypdf internally
# No code changes required - API remains the same
```
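If your own code reads PDFs directly, the rename is mechanical: `pypdf` keeps the familiar reader interface. A minimal sketch (the file path is illustrative):
```python
from pypdf import PdfReader  # previously: from PyPDF2 import PdfReader

reader = PdfReader("example.pdf")  # any local PDF
print(f"{len(reader.pages)} pages")
print(reader.pages[0].extract_text()[:200])
```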
#### Pydantic v2 ConfigDict Compatibility (#678)
**The Problem:** Using the deprecated `class Config` syntax caused deprecation warnings with Pydantic v2.
**The Fix:** Migrated to `model_config = ConfigDict(...)` syntax.
```python
# No more deprecation warnings when importing crawl4ai models
from crawl4ai.models import CrawlResult
from crawl4ai import CrawlerRunConfig, BrowserConfig
# All models are now Pydantic v2 compatible
```
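For reference, the Pydantic v2 idiom the models now follow looks like this; the class below is a generic illustration, not a Crawl4AI model:
```python
from pydantic import BaseModel, ConfigDict

class ExampleSettings(BaseModel):
    # v2 style: replaces the deprecated nested `class Config` block
    model_config = ConfigDict(arbitrary_types_allowed=True)

    name: str = "demo"
```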
---
### AdaptiveCrawler Fix
#### Query Expansion Using LLM (#1621)
**The Problem:** The `EmbeddingStrategy` in AdaptiveCrawler had commented-out LLM code and was using hardcoded mock query variations instead.
**The Fix:** Uncommented and activated the LLM call for actual query expansion.
```python
# AdaptiveCrawler query expansion now actually uses the LLM
# Instead of hardcoded variations like:
# variations = {'queries': ['what are the best vegetables...']}
# The LLM generates relevant query variations based on your actual query
```
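A hedged usage sketch is below; the exact `AdaptiveConfig` options vary by version, so treat the field names as assumptions and check the adaptive crawling docs. An LLM API key must be available for the embedding strategy's query expansion.
```python
import asyncio
from crawl4ai import AsyncWebCrawler, AdaptiveCrawler, AdaptiveConfig

async def main():
    config = AdaptiveConfig(strategy="embedding")  # assumed option name
    async with AsyncWebCrawler() as crawler:
        adaptive = AdaptiveCrawler(crawler, config)
        # Query expansion now calls the configured LLM instead of mock variations
        await adaptive.digest(
            start_url="https://docs.crawl4ai.com",
            query="async crawling configuration",
        )

asyncio.run(main())
```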
---
### Code Formatting Fix
#### Import Statement Formatting (#1181)
**The Problem:** When extracting code from web pages, import statements were sometimes concatenated without proper line separation.
**The Fix:** Import statements now maintain proper newline separation.
```python
# BEFORE: "import osimport sysfrom pathlib import Path"
# AFTER:
# import os
# import sys
# from pathlib import Path
```
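To see the fix end to end, crawl a raw HTML snippet containing a code block and inspect the markdown, mirroring the verification script:
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    html = (
        "<html><body><pre><code>"
        "import os\nimport sys\nfrom pathlib import Path"
        "</code></pre></body></html>"
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url=f"raw:{html}", config=CrawlerRunConfig())
        # Each import should now appear on its own line in the markdown
        print(result.markdown.raw_markdown)

asyncio.run(main())
```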
---
## Breaking Changes
**None!** This release is fully backward compatible.
- All existing code continues to work without modification
- New parameters have sensible defaults matching previous behavior
- No API changes to existing functionality
---
## Upgrade Instructions
### Python Package
```bash
pip install --upgrade crawl4ai
# or
pip install crawl4ai==0.7.8
```
### Docker
```bash
# Pull the latest version
docker pull unclecode/crawl4ai:0.7.8
# Run
docker run -d -p 11235:11235 --shm-size=1g unclecode/crawl4ai:0.7.8
```
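Once the container is up, a quick probe of the `/health` endpoint (the same check the verification script performs) confirms the API is reachable:
```python
import asyncio
import httpx

async def main():
    async with httpx.AsyncClient(timeout=5.0) as client:
        response = await client.get("http://localhost:11235/health")
        print(response.status_code, response.text[:200])

asyncio.run(main())
```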
---
## Verification
Run the verification tests to confirm all fixes are working:
```bash
python docs/releases_review/demo_v0.7.8.py
```
This runs actual tests that verify each bug fix is properly implemented.
---
## Acknowledgments
Thank you to everyone who reported these issues and provided detailed reproduction steps. Your bug reports make Crawl4AI better for everyone.
Issues fixed: #1642, #1638, #1629, #1621, #1412, #1269, #1268, #1181, #1178, #1116, #678
---
## Support & Resources
- **Documentation**: [docs.crawl4ai.com](https://docs.crawl4ai.com)
- **GitHub**: [github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)
- **Discord**: [discord.gg/crawl4ai](https://discord.gg/jP8KfhDhyN)
- **Twitter**: [@unclecode](https://x.com/unclecode)
---
**This stability release ensures Crawl4AI works reliably across Docker deployments, LLM extraction workflows, and various edge cases. Thank you for your continued support and feedback!**
**Happy crawling!**
*- unclecode*

View File

@@ -1593,20 +1593,8 @@ The `clone()` method:
- Environment variable - use with prefix "env:" <br/> eg:`api_token = "env: GROQ_API_KEY"`
3. **`base_url`**:
- If your provider has a custom endpoint
4. **Backoff controls** *(optional)*:
- `backoff_base_delay` *(default `2` seconds)* how long to pause before the first retry if the provider rate-limits you.
- `backoff_max_attempts` *(default `3`)* total tries for the same prompt (initial call + retries).
- `backoff_exponential_factor` *(default `2`)* how quickly the pause grows between retries. A factor of 2 yields waits like 2s → 4s → 8s.
- Because these plug into Crawl4AI's retry helper, every LLM strategy automatically follows the pacing you define here.
```python
llm_config = LLMConfig(
provider="openai/gpt-4o-mini",
api_token=os.getenv("OPENAI_API_KEY"),
backoff_base_delay=1, # optional
backoff_max_attempts=5, # optional
backoff_exponential_factor=3, # optional
)
llm_config = LLMConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY"))
```
## 4. Putting It All Together
In a typical scenario, you define **one** `BrowserConfig` for your crawler session, then create **one or more** `CrawlerRunConfig` & `LLMConfig` depending on each call's needs:

View File

@@ -308,20 +308,8 @@ The `clone()` method:
3.**`base_url`**:
- If your provider has a custom endpoint
4.**Retry/backoff controls** *(optional)*:
- `backoff_base_delay` *(default `2` seconds)* base delay inserted before the first retry when the provider returns a rate-limit response.
- `backoff_max_attempts` *(default `3`)* total number of attempts (initial call plus retries) before the request is surfaced as an error.
- `backoff_exponential_factor` *(default `2`)* growth rate for the retry delay (`delay = base_delay * factor^attempt`).
- These values are forwarded to the shared `perform_completion_with_backoff` helper, ensuring every strategy that consumes your `LLMConfig` honors the same throttling policy.
```python
llm_config = LLMConfig(
provider="openai/gpt-4o-mini",
api_token=os.getenv("OPENAI_API_KEY"),
backoff_base_delay=1, # optional
backoff_max_attempts=5, # optional
backoff_exponential_factor=3, #optional
)
llm_config = LLMConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY"))
```
## 4. Putting It All Together

View File

@@ -55,16 +55,6 @@
</div>
---
#### 🚀 Crawl4AI Cloud API — Closed Beta (Launching Soon)
Reliable, large-scale web extraction, now built to be _**drastically more cost-effective**_ than any of the existing solutions.
👉 **Apply [here](https://forms.gle/E9MyPaNXACnAMaqG7) for early access**
_We'll be onboarding in phases and working closely with early users.
Limited slots._
---
Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant community. It delivers blazing-fast, AI-ready web crawling tailored for large language models, AI agents, and data pipelines. Fully open source, flexible, and built for real-time performance, **Crawl4AI** empowers developers with unmatched speed, precision, and deployment ease.
> Enjoy using Crawl4AI? Consider **[becoming a sponsor](https://github.com/sponsors/unclecode)** to support ongoing development and community growth!

View File

@@ -1,910 +0,0 @@
#!/usr/bin/env python3
"""
Crawl4AI v0.7.8 Release Demo - Verification Tests
==================================================
This demo ACTUALLY RUNS and VERIFIES the bug fixes in v0.7.8.
Each test executes real code and validates the fix is working.
Bug Fixes Verified:
1. ProxyConfig JSON serialization (#1629)
2. Configurable backoff parameters (#1269)
3. LLM Strategy input_format support (#1178)
4. Raw HTML URL variable (#1116)
5. Relative URLs after redirects (#1268)
6. pypdf migration (#1412)
7. Pydantic v2 ConfigDict (#678)
8. Docker ContentRelevanceFilter (#1642) - requires Docker
9. Docker .cache permissions (#1638) - requires Docker
10. AdaptiveCrawler query expansion (#1621) - requires LLM API key
11. Import statement formatting (#1181)
Usage:
python docs/releases_review/demo_v0.7.8.py
For Docker tests:
docker run -d -p 11235:11235 --shm-size=1g unclecode/crawl4ai:0.7.8
python docs/releases_review/demo_v0.7.8.py
"""
import asyncio
import json
import sys
import warnings
import os
import tempfile
from typing import Tuple, Optional
from dataclasses import dataclass
# Test results tracking
@dataclass
class TestResult:
name: str
issue: str
passed: bool
message: str
skipped: bool = False
results: list[TestResult] = []
def print_header(title: str):
print(f"\n{'=' * 70}")
print(f"{title}")
print(f"{'=' * 70}")
def print_test(name: str, issue: str):
print(f"\n[TEST] {name} ({issue})")
print("-" * 50)
def record_result(name: str, issue: str, passed: bool, message: str, skipped: bool = False):
results.append(TestResult(name, issue, passed, message, skipped))
if skipped:
print(f" SKIPPED: {message}")
elif passed:
print(f" PASSED: {message}")
else:
print(f" FAILED: {message}")
# =============================================================================
# TEST 1: ProxyConfig JSON Serialization (#1629)
# =============================================================================
async def test_proxy_config_serialization():
"""
Verify BrowserConfig.to_dict() properly serializes ProxyConfig to JSON.
BEFORE: ProxyConfig was included as object, causing JSON serialization to fail
AFTER: ProxyConfig.to_dict() is called, producing valid JSON
"""
print_test("ProxyConfig JSON Serialization", "#1629")
try:
from crawl4ai import BrowserConfig
from crawl4ai.async_configs import ProxyConfig
# Create config with ProxyConfig
proxy = ProxyConfig(
server="http://proxy.example.com:8080",
username="testuser",
password="testpass"
)
browser_config = BrowserConfig(headless=True, proxy_config=proxy)
# Test 1: to_dict() should return dict for proxy_config
config_dict = browser_config.to_dict()
proxy_dict = config_dict.get('proxy_config')
if not isinstance(proxy_dict, dict):
record_result("ProxyConfig Serialization", "#1629", False,
f"proxy_config is {type(proxy_dict)}, expected dict")
return
# Test 2: Should be JSON serializable
try:
json_str = json.dumps(config_dict)
json.loads(json_str) # Verify valid JSON
except (TypeError, json.JSONDecodeError) as e:
record_result("ProxyConfig Serialization", "#1629", False,
f"JSON serialization failed: {e}")
return
# Test 3: Verify proxy data is preserved
if proxy_dict.get('server') != "http://proxy.example.com:8080":
record_result("ProxyConfig Serialization", "#1629", False,
"Proxy server not preserved in serialization")
return
record_result("ProxyConfig Serialization", "#1629", True,
"BrowserConfig with ProxyConfig serializes to valid JSON")
except Exception as e:
record_result("ProxyConfig Serialization", "#1629", False, f"Exception: {e}")
# =============================================================================
# TEST 2: Configurable Backoff Parameters (#1269)
# =============================================================================
async def test_configurable_backoff():
"""
Verify LLMConfig accepts and stores backoff configuration parameters.
BEFORE: Backoff was hardcoded (delay=2, attempts=3, factor=2)
AFTER: LLMConfig accepts backoff_base_delay, backoff_max_attempts, backoff_exponential_factor
"""
print_test("Configurable Backoff Parameters", "#1269")
try:
from crawl4ai import LLMConfig
# Test 1: Default values
default_config = LLMConfig(provider="openai/gpt-4o-mini")
if default_config.backoff_base_delay != 2:
record_result("Configurable Backoff", "#1269", False,
f"Default base_delay is {default_config.backoff_base_delay}, expected 2")
return
if default_config.backoff_max_attempts != 3:
record_result("Configurable Backoff", "#1269", False,
f"Default max_attempts is {default_config.backoff_max_attempts}, expected 3")
return
if default_config.backoff_exponential_factor != 2:
record_result("Configurable Backoff", "#1269", False,
f"Default exponential_factor is {default_config.backoff_exponential_factor}, expected 2")
return
# Test 2: Custom values
custom_config = LLMConfig(
provider="openai/gpt-4o-mini",
backoff_base_delay=5,
backoff_max_attempts=10,
backoff_exponential_factor=3
)
if custom_config.backoff_base_delay != 5:
record_result("Configurable Backoff", "#1269", False,
f"Custom base_delay is {custom_config.backoff_base_delay}, expected 5")
return
if custom_config.backoff_max_attempts != 10:
record_result("Configurable Backoff", "#1269", False,
f"Custom max_attempts is {custom_config.backoff_max_attempts}, expected 10")
return
if custom_config.backoff_exponential_factor != 3:
record_result("Configurable Backoff", "#1269", False,
f"Custom exponential_factor is {custom_config.backoff_exponential_factor}, expected 3")
return
# Test 3: to_dict() includes backoff params
config_dict = custom_config.to_dict()
if 'backoff_base_delay' not in config_dict:
record_result("Configurable Backoff", "#1269", False,
"backoff_base_delay missing from to_dict()")
return
record_result("Configurable Backoff", "#1269", True,
"LLMConfig accepts and stores custom backoff parameters")
except Exception as e:
record_result("Configurable Backoff", "#1269", False, f"Exception: {e}")
# =============================================================================
# TEST 3: LLM Strategy Input Format (#1178)
# =============================================================================
async def test_llm_input_format():
"""
Verify LLMExtractionStrategy accepts input_format parameter.
BEFORE: Always used markdown input
AFTER: Supports "markdown", "html", "fit_markdown", "cleaned_html", "fit_html"
"""
print_test("LLM Strategy Input Format", "#1178")
try:
from crawl4ai import LLMExtractionStrategy, LLMConfig
llm_config = LLMConfig(provider="openai/gpt-4o-mini")
# Test 1: Default is markdown
default_strategy = LLMExtractionStrategy(
llm_config=llm_config,
instruction="Extract data"
)
if default_strategy.input_format != "markdown":
record_result("LLM Input Format", "#1178", False,
f"Default input_format is '{default_strategy.input_format}', expected 'markdown'")
return
# Test 2: Can set to html
html_strategy = LLMExtractionStrategy(
llm_config=llm_config,
instruction="Extract data",
input_format="html"
)
if html_strategy.input_format != "html":
record_result("LLM Input Format", "#1178", False,
f"HTML input_format is '{html_strategy.input_format}', expected 'html'")
return
# Test 3: Can set to fit_markdown
fit_strategy = LLMExtractionStrategy(
llm_config=llm_config,
instruction="Extract data",
input_format="fit_markdown"
)
if fit_strategy.input_format != "fit_markdown":
record_result("LLM Input Format", "#1178", False,
f"fit_markdown input_format is '{fit_strategy.input_format}'")
return
record_result("LLM Input Format", "#1178", True,
"LLMExtractionStrategy accepts all input_format options")
except Exception as e:
record_result("LLM Input Format", "#1178", False, f"Exception: {e}")
# =============================================================================
# TEST 4: Raw HTML URL Variable (#1116)
# =============================================================================
async def test_raw_html_url_variable():
"""
Verify that raw: prefix URLs pass "Raw HTML" to extraction strategy.
BEFORE: Entire HTML blob was passed as URL parameter
AFTER: "Raw HTML" string is passed as URL parameter
"""
print_test("Raw HTML URL Variable", "#1116")
try:
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.extraction_strategy import ExtractionStrategy
# Custom strategy to capture what URL is passed
class URLCapturingStrategy(ExtractionStrategy):
captured_url = None
def extract(self, url: str, html: str, *args, **kwargs):
URLCapturingStrategy.captured_url = url
return [{"content": "test"}]
html_content = "<html><body><h1>Test</h1></body></html>"
strategy = URLCapturingStrategy()
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url=f"raw:{html_content}",
config=CrawlerRunConfig(
extraction_strategy=strategy
)
)
captured = URLCapturingStrategy.captured_url
if captured is None:
record_result("Raw HTML URL Variable", "#1116", False,
"Extraction strategy was not called")
return
if captured == html_content or captured.startswith("<html"):
record_result("Raw HTML URL Variable", "#1116", False,
f"URL contains HTML content instead of 'Raw HTML': {captured[:50]}...")
return
if captured != "Raw HTML":
record_result("Raw HTML URL Variable", "#1116", False,
f"URL is '{captured}', expected 'Raw HTML'")
return
record_result("Raw HTML URL Variable", "#1116", True,
"Extraction strategy receives 'Raw HTML' as URL for raw: prefix")
except Exception as e:
record_result("Raw HTML URL Variable", "#1116", False, f"Exception: {e}")
# =============================================================================
# TEST 5: Relative URLs After Redirects (#1268)
# =============================================================================
async def test_redirect_url_handling():
"""
Verify that redirected_url reflects the final URL after JS navigation.
BEFORE: redirected_url was the original URL, not the final URL
AFTER: redirected_url is captured after JS execution completes
"""
print_test("Relative URLs After Redirects", "#1268")
try:
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
# Test with a URL that we know the final state of
# We'll use httpbin which doesn't redirect, but verify the mechanism works
test_url = "https://httpbin.org/html"
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url=test_url,
config=CrawlerRunConfig()
)
# Verify redirected_url is populated
if not result.redirected_url:
record_result("Redirect URL Handling", "#1268", False,
"redirected_url is empty")
return
# For non-redirecting URL, should match original or be the final URL
if not result.redirected_url.startswith("https://httpbin.org"):
record_result("Redirect URL Handling", "#1268", False,
f"redirected_url is unexpected: {result.redirected_url}")
return
# Verify links are present and resolved
if result.links:
# Check that internal links have full URLs
internal_links = result.links.get('internal', [])
external_links = result.links.get('external', [])
all_links = internal_links + external_links
for link in all_links[:5]: # Check first 5 links
href = link.get('href', '')
if href and not href.startswith(('http://', 'https://', 'mailto:', 'tel:', '#', 'javascript:')):
record_result("Redirect URL Handling", "#1268", False,
f"Link not resolved to absolute URL: {href}")
return
record_result("Redirect URL Handling", "#1268", True,
f"redirected_url correctly captured: {result.redirected_url}")
except Exception as e:
record_result("Redirect URL Handling", "#1268", False, f"Exception: {e}")
# =============================================================================
# TEST 6: pypdf Migration (#1412)
# =============================================================================
async def test_pypdf_migration():
"""
Verify pypdf is used instead of deprecated PyPDF2.
BEFORE: Used PyPDF2 (deprecated since 2022)
AFTER: Uses pypdf (actively maintained)
"""
print_test("pypdf Migration", "#1412")
try:
# Test 1: pypdf should be importable (if pdf extra is installed)
try:
import pypdf
pypdf_available = True
pypdf_version = pypdf.__version__
except ImportError:
pypdf_available = False
pypdf_version = None
# Test 2: PyPDF2 should NOT be imported by crawl4ai
# Check if the processor uses pypdf
try:
from crawl4ai.processors.pdf import processor
processor_source = open(processor.__file__).read()
uses_pypdf = 'from pypdf' in processor_source or 'import pypdf' in processor_source
uses_pypdf2 = 'from PyPDF2' in processor_source or 'import PyPDF2' in processor_source
if uses_pypdf2 and not uses_pypdf:
record_result("pypdf Migration", "#1412", False,
"PDF processor still uses PyPDF2")
return
if uses_pypdf:
record_result("pypdf Migration", "#1412", True,
f"PDF processor uses pypdf{' v' + pypdf_version if pypdf_version else ''}")
return
else:
record_result("pypdf Migration", "#1412", True,
"PDF processor found, pypdf dependency updated", skipped=not pypdf_available)
return
except ImportError:
# PDF processor not available
if pypdf_available:
record_result("pypdf Migration", "#1412", True,
f"pypdf v{pypdf_version} is installed (PDF processor not loaded)")
else:
record_result("pypdf Migration", "#1412", True,
"PDF support not installed (optional feature)", skipped=True)
return
except Exception as e:
record_result("pypdf Migration", "#1412", False, f"Exception: {e}")
# =============================================================================
# TEST 7: Pydantic v2 ConfigDict (#678)
# =============================================================================
async def test_pydantic_configdict():
"""
Verify no Pydantic deprecation warnings for Config class.
BEFORE: Used deprecated 'class Config' syntax
AFTER: Uses ConfigDict for Pydantic v2 compatibility
"""
print_test("Pydantic v2 ConfigDict", "#678")
try:
import pydantic
from pydantic import __version__ as pydantic_version
# Capture warnings during import
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always", DeprecationWarning)
# Import models that might have Config classes
from crawl4ai.models import CrawlResult, MarkdownGenerationResult
from crawl4ai.async_configs import CrawlerRunConfig, BrowserConfig
# Filter for Pydantic-related deprecation warnings
pydantic_warnings = [
warning for warning in w
if 'pydantic' in str(warning.message).lower()
or 'config' in str(warning.message).lower()
]
if pydantic_warnings:
warning_msgs = [str(w.message) for w in pydantic_warnings[:3]]
record_result("Pydantic ConfigDict", "#678", False,
f"Deprecation warnings: {warning_msgs}")
return
# Verify models work correctly
try:
# Test that models can be instantiated without issues
config = CrawlerRunConfig()
browser = BrowserConfig()
record_result("Pydantic ConfigDict", "#678", True,
f"No deprecation warnings with Pydantic v{pydantic_version}")
except Exception as e:
record_result("Pydantic ConfigDict", "#678", False,
f"Model instantiation failed: {e}")
except Exception as e:
record_result("Pydantic ConfigDict", "#678", False, f"Exception: {e}")
# =============================================================================
# TEST 8: Docker ContentRelevanceFilter (#1642)
# =============================================================================
async def test_docker_content_filter():
"""
Verify ContentRelevanceFilter deserializes correctly in Docker API.
BEFORE: Docker API failed to import/instantiate ContentRelevanceFilter
AFTER: Filter is properly exported and deserializable
"""
print_test("Docker ContentRelevanceFilter", "#1642")
# First verify the fix in local code
try:
# Test 1: ContentRelevanceFilter should be importable from crawl4ai
from crawl4ai import ContentRelevanceFilter
# Test 2: Should be instantiable
filter_instance = ContentRelevanceFilter(
query="test query",
threshold=0.3
)
if not hasattr(filter_instance, 'query'):
record_result("Docker ContentRelevanceFilter", "#1642", False,
"ContentRelevanceFilter missing query attribute")
return
except ImportError as e:
record_result("Docker ContentRelevanceFilter", "#1642", False,
f"ContentRelevanceFilter not exported: {e}")
return
except Exception as e:
record_result("Docker ContentRelevanceFilter", "#1642", False,
f"ContentRelevanceFilter instantiation failed: {e}")
return
# Test Docker API if available
try:
import httpx
async with httpx.AsyncClient(timeout=5.0) as client:
response = await client.get("http://localhost:11235/health")
if response.status_code != 200:
raise Exception("Docker not available")
# Docker is running, test the API
async with httpx.AsyncClient(timeout=30.0) as client:
request = {
"urls": ["https://httpbin.org/html"],
"crawler_config": {
"deep_crawl_strategy": {
"type": "BFSDeepCrawlStrategy",
"max_depth": 1,
"filter_chain": [
{
"type": "ContentTypeFilter",
"allowed_types": ["text/html"]
}
]
}
}
}
response = await client.post(
"http://localhost:11235/crawl",
json=request
)
if response.status_code == 200:
record_result("Docker ContentRelevanceFilter", "#1642", True,
"Filter deserializes correctly in Docker API")
else:
record_result("Docker ContentRelevanceFilter", "#1642", False,
f"Docker API returned {response.status_code}: {response.text[:100]}")
except ImportError:
record_result("Docker ContentRelevanceFilter", "#1642", True,
"ContentRelevanceFilter exportable (Docker test skipped - httpx not installed)",
skipped=True)
except Exception as e:
record_result("Docker ContentRelevanceFilter", "#1642", True,
f"ContentRelevanceFilter exportable (Docker test skipped: {e})",
skipped=True)
# =============================================================================
# TEST 9: Docker Cache Permissions (#1638)
# =============================================================================
async def test_docker_cache_permissions():
"""
Verify Docker image has correct .cache folder permissions.
This test requires Docker container to be running.
"""
print_test("Docker Cache Permissions", "#1638")
try:
import httpx
async with httpx.AsyncClient(timeout=5.0) as client:
response = await client.get("http://localhost:11235/health")
if response.status_code != 200:
raise Exception("Docker not available")
# Test by making a crawl request with caching
async with httpx.AsyncClient(timeout=60.0) as client:
request = {
"urls": ["https://httpbin.org/html"],
"crawler_config": {
"cache_mode": "enabled"
}
}
response = await client.post(
"http://localhost:11235/crawl",
json=request
)
if response.status_code == 200:
result = response.json()
# Check if there were permission errors
if "permission" in str(result).lower() and "denied" in str(result).lower():
record_result("Docker Cache Permissions", "#1638", False,
"Permission denied error in response")
else:
record_result("Docker Cache Permissions", "#1638", True,
"Crawl with caching succeeded in Docker")
else:
error_text = response.text[:200]
if "permission" in error_text.lower():
record_result("Docker Cache Permissions", "#1638", False,
f"Permission error: {error_text}")
else:
record_result("Docker Cache Permissions", "#1638", False,
f"Request failed: {response.status_code}")
except ImportError:
record_result("Docker Cache Permissions", "#1638", True,
"Skipped - httpx not installed", skipped=True)
except Exception as e:
record_result("Docker Cache Permissions", "#1638", True,
f"Skipped - Docker not available: {e}", skipped=True)
# =============================================================================
# TEST 10: AdaptiveCrawler Query Expansion (#1621)
# =============================================================================
async def test_adaptive_crawler_embedding():
"""
Verify EmbeddingStrategy LLM code is uncommented and functional.
BEFORE: LLM call was commented out, using hardcoded mock data
AFTER: Actually calls LLM for query expansion
"""
print_test("AdaptiveCrawler Query Expansion", "#1621")
try:
# Read the source file to verify the fix
import crawl4ai.adaptive_crawler as adaptive_module
source_file = adaptive_module.__file__
with open(source_file, 'r') as f:
source_code = f.read()
# Check that the LLM call is NOT commented out
# Look for the perform_completion_with_backoff call
# Find the EmbeddingStrategy section
if 'class EmbeddingStrategy' not in source_code:
record_result("AdaptiveCrawler Query Expansion", "#1621", True,
"EmbeddingStrategy not in adaptive_crawler (may have moved)",
skipped=True)
return
# Check if the mock data line is commented out
# and the actual LLM call is NOT commented out
lines = source_code.split('\n')
in_embedding_strategy = False
found_llm_call = False
mock_data_commented = False
for i, line in enumerate(lines):
if 'class EmbeddingStrategy' in line:
in_embedding_strategy = True
elif in_embedding_strategy and line.strip().startswith('class '):
in_embedding_strategy = False
if in_embedding_strategy:
# Check for uncommented LLM call
if 'perform_completion_with_backoff' in line and not line.strip().startswith('#'):
found_llm_call = True
# Check for commented mock data
if "variations ={'queries'" in line or 'variations = {\'queries\'' in line:
if line.strip().startswith('#'):
mock_data_commented = True
if found_llm_call:
record_result("AdaptiveCrawler Query Expansion", "#1621", True,
"LLM call is active in EmbeddingStrategy")
else:
# Check if the entire embedding strategy exists but might be structured differently
if 'perform_completion_with_backoff' in source_code:
record_result("AdaptiveCrawler Query Expansion", "#1621", True,
"perform_completion_with_backoff found in module")
else:
record_result("AdaptiveCrawler Query Expansion", "#1621", False,
"LLM call not found or still commented out")
except Exception as e:
record_result("AdaptiveCrawler Query Expansion", "#1621", False, f"Exception: {e}")
# =============================================================================
# TEST 11: Import Statement Formatting (#1181)
# =============================================================================
async def test_import_formatting():
"""
Verify code extraction properly formats import statements.
BEFORE: Import statements were concatenated without newlines
AFTER: Import statements have proper newline separation
"""
print_test("Import Statement Formatting", "#1181")
try:
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
# Create HTML with code containing imports
html_with_code = """
<html>
<body>
<pre><code>
import os
import sys
from pathlib import Path
from typing import List, Dict
def main():
pass
</code></pre>
</body>
</html>
"""
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url=f"raw:{html_with_code}",
config=CrawlerRunConfig()
)
markdown = result.markdown.raw_markdown if result.markdown else ""
# Check that imports are not concatenated on the same line
# Bad: "import osimport sys" (no newline between statements)
# This is the actual bug - statements getting merged on same line
bad_patterns = [
"import os import sys", # Space but no newline
"import osimport sys", # No space or newline
"import os from pathlib", # Space but no newline
"import osfrom pathlib", # No space or newline
]
markdown_single_line = markdown.replace('\n', ' ') # Convert newlines to spaces
for pattern in bad_patterns:
# Check if pattern exists without proper line separation
if pattern.replace(' ', '') in markdown_single_line.replace(' ', ''):
# Verify it's actually on same line (not just adjacent after newline removal)
lines = markdown.split('\n')
for line in lines:
if 'import' in line.lower():
# Count import statements on this line
import_count = line.lower().count('import ')
if import_count > 1:
record_result("Import Formatting", "#1181", False,
f"Multiple imports on same line: {line[:60]}...")
return
# Verify imports are present
if "import" in markdown.lower():
record_result("Import Formatting", "#1181", True,
"Import statements are properly line-separated")
else:
record_result("Import Formatting", "#1181", True,
"No import statements found to verify (test HTML may have changed)")
except Exception as e:
record_result("Import Formatting", "#1181", False, f"Exception: {e}")
# =============================================================================
# COMPREHENSIVE CRAWL TEST
# =============================================================================
async def test_comprehensive_crawl():
"""
Run a comprehensive crawl to verify overall stability.
"""
print_test("Comprehensive Crawl Test", "Overall")
try:
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, BrowserConfig
async with AsyncWebCrawler(config=BrowserConfig(headless=True)) as crawler:
result = await crawler.arun(
url="https://httpbin.org/html",
config=CrawlerRunConfig()
)
# Verify result
checks = []
if result.success:
checks.append("success=True")
else:
record_result("Comprehensive Crawl", "Overall", False,
f"Crawl failed: {result.error_message}")
return
if result.html and len(result.html) > 100:
checks.append(f"html={len(result.html)} chars")
if result.markdown and result.markdown.raw_markdown:
checks.append(f"markdown={len(result.markdown.raw_markdown)} chars")
if result.redirected_url:
checks.append("redirected_url present")
record_result("Comprehensive Crawl", "Overall", True,
f"All checks passed: {', '.join(checks)}")
except Exception as e:
record_result("Comprehensive Crawl", "Overall", False, f"Exception: {e}")
# =============================================================================
# MAIN
# =============================================================================
def print_summary():
"""Print test results summary"""
print_header("TEST RESULTS SUMMARY")
passed = sum(1 for r in results if r.passed and not r.skipped)
failed = sum(1 for r in results if not r.passed and not r.skipped)
skipped = sum(1 for r in results if r.skipped)
print(f"\nTotal: {len(results)} tests")
print(f" Passed: {passed}")
print(f" Failed: {failed}")
print(f" Skipped: {skipped}")
if failed > 0:
print("\nFailed Tests:")
for r in results:
if not r.passed and not r.skipped:
print(f" - {r.name} ({r.issue}): {r.message}")
if skipped > 0:
print("\nSkipped Tests:")
for r in results:
if r.skipped:
print(f" - {r.name} ({r.issue}): {r.message}")
print("\n" + "=" * 70)
if failed == 0:
print("All tests passed! v0.7.8 bug fixes verified.")
else:
print(f"WARNING: {failed} test(s) failed!")
print("=" * 70)
return failed == 0
async def main():
"""Run all verification tests"""
print_header("Crawl4AI v0.7.8 - Bug Fix Verification Tests")
print("Running actual tests to verify bug fixes...")
# Run all tests
tests = [
test_proxy_config_serialization, # #1629
test_configurable_backoff, # #1269
test_llm_input_format, # #1178
test_raw_html_url_variable, # #1116
test_redirect_url_handling, # #1268
test_pypdf_migration, # #1412
test_pydantic_configdict, # #678
test_docker_content_filter, # #1642
test_docker_cache_permissions, # #1638
test_adaptive_crawler_embedding, # #1621
test_import_formatting, # #1181
test_comprehensive_crawl, # Overall
]
for test_func in tests:
try:
await test_func()
except Exception as e:
print(f"\nTest {test_func.__name__} crashed: {e}")
results.append(TestResult(
test_func.__name__,
"Unknown",
False,
f"Crashed: {e}"
))
# Print summary
all_passed = print_summary()
return 0 if all_passed else 1
if __name__ == "__main__":
try:
exit_code = asyncio.run(main())
sys.exit(exit_code)
except KeyboardInterrupt:
print("\n\nTests interrupted by user.")
sys.exit(1)
except Exception as e:
print(f"\n\nTest suite failed: {e}")
import traceback
traceback.print_exc()
sys.exit(1)

View File

@@ -59,13 +59,13 @@ classifiers = [
]
[project.optional-dependencies]
pdf = ["pypdf"]
pdf = ["PyPDF2"]
torch = ["torch", "nltk", "scikit-learn"]
transformer = ["transformers", "tokenizers", "sentence-transformers"]
cosine = ["torch", "transformers", "nltk", "sentence-transformers"]
sync = ["selenium"]
all = [
"pypdf",
"PyPDF2",
"torch",
"nltk",
"scikit-learn",

View File

@@ -33,4 +33,4 @@ shapely>=2.0.0
fake-useragent>=2.2.0
pdf2image>=1.17.0
pypdf>=6.0.0
PyPDF2>=3.0.1

View File

@@ -1,118 +0,0 @@
"""Test delayed redirect WITH wait_for - does link resolution use correct URL?"""
import asyncio
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler
class RedirectTestHandler(SimpleHTTPRequestHandler):
def log_message(self, format, *args):
pass
def do_GET(self):
if self.path == "/page-a":
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
content = """
<!DOCTYPE html>
<html>
<head><title>Page A</title></head>
<body>
<h1>Page A - Will redirect after 200ms</h1>
<script>
setTimeout(function() {
window.location.href = '/redirect-target/';
}, 200);
</script>
</body>
</html>
"""
self.wfile.write(content.encode())
elif self.path.startswith("/redirect-target"):
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
content = """
<!DOCTYPE html>
<html>
<head><title>Redirect Target</title></head>
<body>
<h1>Redirect Target</h1>
<nav id="target-nav">
<a href="subpage-1">Subpage 1</a>
<a href="subpage-2">Subpage 2</a>
</nav>
</body>
</html>
"""
self.wfile.write(content.encode())
else:
self.send_response(404)
self.end_headers()
async def main():
import socket
class ReuseAddrHTTPServer(HTTPServer):
allow_reuse_address = True
server = ReuseAddrHTTPServer(("localhost", 8769), RedirectTestHandler)
thread = threading.Thread(target=server.serve_forever)
thread.daemon = True
thread.start()
try:
import sys
sys.path.insert(0, '/Users/nasrin/vscode/c4ai-uc/develop')
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
print("=" * 60)
print("TEST: Delayed JS redirect WITH wait_for='css:#target-nav'")
print("This waits for the redirect to complete")
print("=" * 60)
browser_config = BrowserConfig(headless=True, verbose=False)
crawl_config = CrawlerRunConfig(
cache_mode="bypass",
wait_for="css:#target-nav" # Wait for element on redirect target
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="http://localhost:8769/page-a",
config=crawl_config
)
print(f"Original URL: http://localhost:8769/page-a")
print(f"Redirected URL returned: {result.redirected_url}")
print(f"HTML contains 'Redirect Target': {'Redirect Target' in result.html}")
print()
if "/redirect-target" in (result.redirected_url or ""):
print("✓ redirected_url is CORRECT")
else:
print("✗ BUG #1: redirected_url is WRONG - still shows original URL!")
# Check links
all_links = []
if isinstance(result.links, dict):
all_links = result.links.get("internal", []) + result.links.get("external", [])
print(f"\nLinks found ({len(all_links)} total):")
bug_found = False
for link in all_links:
href = link.get("href", "") if isinstance(link, dict) else getattr(link, 'href', "")
if "subpage" in href:
print(f" {href}")
if "/page-a/" in href:
print(" ^^^ BUG #2: Link resolved with WRONG base URL!")
bug_found = True
elif "/redirect-target/" in href:
print(" ^^^ CORRECT")
if not bug_found and all_links:
print("\n✓ Link resolution is CORRECT")
finally:
server.shutdown()
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -71,7 +71,7 @@ PACKAGE_MAPPINGS = {
'sentence_transformers': 'sentence-transformers',
'rank_bm25': 'rank-bm25',
'snowballstemmer': 'snowballstemmer',
'pypdf': 'pypdf',
'PyPDF2': 'PyPDF2',
'pdf2image': 'pdf2image',
}

View File

@@ -1,31 +1,16 @@
"""
Test the complete fix for both the filter serialization and JSON serialization issues.
"""
import os
import traceback
from typing import Any
import asyncio
import httpx
from crawl4ai import BrowserConfig, CacheMode, CrawlerRunConfig
from crawl4ai.deep_crawling import (
BFSDeepCrawlStrategy,
ContentRelevanceFilter,
FilterChain,
URLFilter,
URLPatternFilter,
)
from crawl4ai.deep_crawling import BFSDeepCrawlStrategy, FilterChain, URLPatternFilter
CRAWL4AI_DOCKER_PORT = os.environ.get("CRAWL4AI_DOCKER_PORT", "11234")
try:
BASE_PORT = int(CRAWL4AI_DOCKER_PORT)
except TypeError:
BASE_PORT = 11234
BASE_URL = f"http://localhost:{BASE_PORT}/" # Adjust port as needed
BASE_URL = "http://localhost:11234/" # Adjust port as needed
async def test_with_docker_client(filter_chain: list[URLFilter], max_pages: int = 20, timeout: int = 30) -> bool:
async def test_with_docker_client():
"""Test using the Docker client (same as 1419.py)."""
from crawl4ai.docker_client import Crawl4aiDockerClient
@@ -39,10 +24,19 @@ async def test_with_docker_client(filter_chain: list[URLFilter], max_pages: int
verbose=True,
) as client:
# Create filter chain - testing the serialization fix
filter_chain = [
URLPatternFilter(
# patterns=["*about*", "*privacy*", "*terms*"],
patterns=["*advanced*"],
reverse=True
),
]
crawler_config = CrawlerRunConfig(
deep_crawl_strategy=BFSDeepCrawlStrategy(
max_depth=2, # Keep it shallow for testing
max_pages=max_pages, # Limit pages for testing
# max_pages=5, # Limit pages for testing
filter_chain=FilterChain(filter_chain)
),
cache_mode=CacheMode.BYPASS,
@@ -53,7 +47,6 @@ async def test_with_docker_client(filter_chain: list[URLFilter], max_pages: int
["https://docs.crawl4ai.com"], # Simple test page
browser_config=BrowserConfig(headless=True),
crawler_config=crawler_config,
hooks_timeout=timeout,
)
if results:
@@ -81,11 +74,12 @@ async def test_with_docker_client(filter_chain: list[URLFilter], max_pages: int
except Exception as e:
print(f"❌ Docker client test failed: {e}")
import traceback
traceback.print_exc()
return False
async def test_with_rest_api(filters: list[dict[str, Any]], max_pages: int = 20, timeout: int = 30) -> bool:
async def test_with_rest_api():
"""Test using REST API directly."""
print("\n" + "=" * 60)
print("Testing with REST API")
@@ -96,11 +90,19 @@ async def test_with_rest_api(filters: list[dict[str, Any]], max_pages: int = 20,
"type": "BFSDeepCrawlStrategy",
"params": {
"max_depth": 2,
"max_pages": max_pages,
# "max_pages": 5,
"filter_chain": {
"type": "FilterChain",
"params": {
"filters": filters
"filters": [
{
"type": "URLPatternFilter",
"params": {
"patterns": ["*advanced*"],
"reverse": True
}
}
]
}
}
}
@@ -124,7 +126,7 @@ async def test_with_rest_api(filters: list[dict[str, Any]], max_pages: int = 20,
response = await client.post(
f"{BASE_URL}crawl",
json=crawl_payload,
timeout=timeout,
timeout=30
)
if response.status_code == 200:
@@ -148,6 +150,7 @@ async def test_with_rest_api(filters: list[dict[str, Any]], max_pages: int = 20,
except Exception as e:
print(f"❌ REST API test failed: {e}")
import traceback
traceback.print_exc()
return False
@@ -162,62 +165,12 @@ async def main():
results = []
# Test 1: Docker client
max_pages_ = [20, 5]
timeouts = [30, 60]
filter_chain_test_cases = [
[
URLPatternFilter(
# patterns=["*about*", "*privacy*", "*terms*"],
patterns=["*advanced*"],
reverse=True
),
],
[
ContentRelevanceFilter(
query="about faq",
threshold=0.2,
),
],
]
for idx, (filter_chain, max_pages, timeout) in enumerate(zip(filter_chain_test_cases, max_pages_, timeouts)):
docker_passed = await test_with_docker_client(filter_chain=filter_chain, max_pages=max_pages, timeout=timeout)
results.append((f"Docker Client w/ filter chain {idx}", docker_passed))
docker_passed = await test_with_docker_client()
results.append(("Docker Client", docker_passed))
# Test 2: REST API
max_pages_ = [20, 5, 5]
timeouts = [30, 60, 60]
filters_test_cases = [
[
{
"type": "URLPatternFilter",
"params": {
"patterns": ["*advanced*"],
"reverse": True
}
}
],
[
{
"type": "ContentRelevanceFilter",
"params": {
"query": "about faq",
"threshold": 0.2,
}
}
],
[
{
"type": "ContentRelevanceFilter",
"params": {
"query": ["about", "faq"],
"threshold": 0.2,
}
}
],
]
for idx, (filters, max_pages, timeout) in enumerate(zip(filters_test_cases, max_pages_, timeouts)):
rest_passed = await test_with_rest_api(filters=filters, max_pages=max_pages, timeout=timeout)
results.append((f"REST API w/ filters {idx}", rest_passed))
rest_passed = await test_with_rest_api()
results.append(("REST API", rest_passed))
# Summary
print("\n" + "=" * 60)
@@ -233,7 +186,10 @@ async def main():
print("=" * 60)
if all_passed:
print("🎉 ALL TESTS PASSED!")
print("🎉 ALL TESTS PASSED! Both issues are fully resolved!")
print("\nThe fixes:")
print("1. Filter serialization: Fixed by not serializing private __slots__")
print("2. JSON serialization: Fixed by removing property descriptors from model_dump()")
else:
print("⚠️ Some tests failed. Please check the server logs for details.")
@@ -242,4 +198,4 @@ async def main():
if __name__ == "__main__":
import sys
sys.exit(asyncio.run(main()))
sys.exit(asyncio.run(main()))

View File

@@ -9,21 +9,6 @@ from crawl4ai import (
RateLimiter,
CacheMode
)
from crawl4ai.extraction_strategy import ExtractionStrategy
class MockExtractionStrategy(ExtractionStrategy):
"""Mock extraction strategy for testing URL parameter handling"""
def __init__(self):
super().__init__()
self.run_calls = []
def extract(self, url: str, html: str, *args, **kwargs):
return [{"test": "data"}]
def run(self, url: str, sections: List[str], *args, **kwargs):
self.run_calls.append(url)
return super().run(url, sections, *args, **kwargs)
@pytest.mark.asyncio
@pytest.mark.parametrize("viewport", [
@@ -157,72 +142,8 @@ async def test_error_handling(error_url):
assert not result.success
assert result.error_message is not None
@pytest.mark.asyncio
async def test_extraction_strategy_run_with_regular_url():
"""
Regression test for extraction_strategy.run URL parameter handling with regular URLs.
This test verifies that when is_raw_html=False (regular URL),
extraction_strategy.run is called with the actual URL.
"""
browser_config = BrowserConfig(
browser_type="chromium",
headless=True
)
async with AsyncWebCrawler(config=browser_config) as crawler:
mock_strategy = MockExtractionStrategy()
# Test regular URL (is_raw_html=False)
regular_url = "https://example.com"
result = await crawler.arun(
url=regular_url,
config=CrawlerRunConfig(
page_timeout=30000,
extraction_strategy=mock_strategy,
cache_mode=CacheMode.BYPASS
)
)
assert result.success
assert len(mock_strategy.run_calls) == 1
assert mock_strategy.run_calls[0] == regular_url, f"Expected '{regular_url}', got '{mock_strategy.run_calls[0]}'"
@pytest.mark.asyncio
async def test_extraction_strategy_run_with_raw_html():
"""
Regression test for extraction_strategy.run URL parameter handling with raw HTML.
This test verifies that when is_raw_html=True (URL starts with "raw:"),
extraction_strategy.run is called with "Raw HTML" instead of the actual URL.
"""
browser_config = BrowserConfig(
browser_type="chromium",
headless=True
)
async with AsyncWebCrawler(config=browser_config) as crawler:
mock_strategy = MockExtractionStrategy()
# Test raw HTML URL (is_raw_html=True automatically set)
raw_html_url = "raw:<html><body><h1>Test HTML</h1><p>This is a test.</p></body></html>"
result = await crawler.arun(
url=raw_html_url,
config=CrawlerRunConfig(
page_timeout=30000,
extraction_strategy=mock_strategy,
cache_mode=CacheMode.BYPASS
)
)
assert result.success
assert len(mock_strategy.run_calls) == 1
assert mock_strategy.run_calls[0] == "Raw HTML", f"Expected 'Raw HTML', got '{mock_strategy.run_calls[0]}'"
if __name__ == "__main__":
asyncio.run(test_viewport_config((1024, 768)))
asyncio.run(test_memory_management())
asyncio.run(test_rate_limiting())
asyncio.run(test_javascript_execution())
asyncio.run(test_extraction_strategy_run_with_regular_url())
asyncio.run(test_extraction_strategy_run_with_raw_html())
asyncio.run(test_javascript_execution())

View File

@@ -0,0 +1,29 @@
import pytest
import asyncio
from crawl4ai.async_url_seeder import AsyncUrlSeeder
@pytest.mark.asyncio
async def test_resolve_head_handles_dead_redirects():
seeder = AsyncUrlSeeder()
# Should return None when the URL redirects to a dead target
assert await seeder._resolve_head("http://youtube.com/sitemap.xml") is None
assert await seeder._resolve_head("https://stripe.com/sitemap.xml") is None
@pytest.mark.asyncio
async def test_resolve_head_direct_hit():
seeder = AsyncUrlSeeder()
# Test with a known live URL, e.g., httpbin
result = await seeder._resolve_head("https://httpbin.org/status/200")
assert result == "https://httpbin.org/status/200"
@pytest.mark.asyncio
async def test_resolve_head_verify_redirect_targets_false():
# Test with verification disabled - should return redirect target without checking if alive
seeder = AsyncUrlSeeder(verify_redirect_targets=False)
# This should return the redirect target even if it's dead (old behavior)
result = await seeder._resolve_head("http://youtube.com/sitemap.xml")
# The exact redirect target might vary, but it should not be None
assert result is not None
assert isinstance(result, str)
# Should be different from the input URL (indicating redirect was followed)
assert result != "http://youtube.com/sitemap.xml"