crawl4ai/tests/cache_validation/test_end_to_end.py
Nasrin f6f7f1b551 Release v0.8.0: Crash Recovery, Prefetch Mode & Security Fixes (#1712)
* Fix: Use correct URL variable for raw HTML extraction (#1116)

- Prevents full HTML content from being passed as URL to extraction strategies
- Added unit tests to verify raw HTML and regular URL processing

Fix: Wrong URL variable used for extraction of raw HTML

* Fix #1181: Preserve whitespace in code blocks during HTML scraping

  The remove_empty_elements_fast() method was removing whitespace-only
  span elements inside <pre> and <code> tags, causing import statements
  like "import torch" to become "importtorch". Now skips elements inside
  code blocks where whitespace is significant.

* Refactor Pydantic model configuration to use ConfigDict for arbitrary types

* Fix EmbeddingStrategy: Uncomment response handling for the variations and clean up mock data. ref #1621

* Fix: permission issues with .cache/url_seeder and other runtime cache dirs. ref #1638

* fix: ensure BrowserConfig.to_dict serializes proxy_config

* feat: make LLM backoff configurable end-to-end

- extend LLMConfig with backoff delay/attempt/factor fields and thread them
  through LLMExtractionStrategy, LLMContentFilter, table extraction, and
  Docker API handlers
- expose the backoff parameter knobs on perform_completion_with_backoff/aperform_completion_with_backoff
  and document them in the md_v2 guides
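
A minimal sketch of how the new knobs might be used (the exact field names below are assumptions, not the shipped API):

  from crawl4ai import LLMConfig, LLMExtractionStrategy

  llm_config = LLMConfig(
      provider="openai/gpt-4o-mini",
      backoff_base_delay=1.0,    # assumed name: initial retry delay in seconds
      backoff_max_attempts=5,    # assumed name: maximum retry attempts
      backoff_factor=2.0,        # assumed name: exponential multiplier
  )
  strategy = LLMExtractionStrategy(llm_config=llm_config)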

* reproduced AttributeError from #1642

* pass timeout parameter to docker client request

* added missing deep crawling objects to init

* generalized query in ContentRelevanceFilter to be a str or list

* import modules from enhanceable deserialization

* parameterized tests

* Fix: capture current page URL to reflect JavaScript navigation and add test for delayed redirects. ref #1268

* refactor: replace PyPDF2 with pypdf across the codebase. ref #1412

* Add browser_context_id and target_id parameters to BrowserConfig

Enable Crawl4AI to connect to pre-created CDP browser contexts, which is
essential for cloud browser services that pre-create isolated contexts.

Changes:
- Add browser_context_id and target_id parameters to BrowserConfig
- Update from_kwargs() and to_dict() methods
- Modify BrowserManager.start() to use existing context when provided
- Add _get_page_by_target_id() helper method
- Update get_page() to handle pre-existing targets
- Add test for browser_context_id functionality

This enables cloud services to:
1. Create isolated CDP contexts before Crawl4AI connects
2. Pass context/target IDs to BrowserConfig
3. Have Crawl4AI reuse existing contexts instead of creating new ones
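
Illustrative connection to a pre-created context (the endpoint and IDs are placeholders):

  browser_config = BrowserConfig(
      cdp_url="ws://browser-service:9222/devtools/browser/<uuid>",  # placeholder endpoint
      browser_context_id="<context-id-created-by-the-service>",
      target_id="<target-id-created-by-the-service>",
  )
  async with AsyncWebCrawler(config=browser_config) as crawler:
      result = await crawler.arun("https://example.com")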

* Add cdp_cleanup_on_close flag to prevent memory leaks in cloud/server scenarios

* Fix: add cdp_cleanup_on_close to from_kwargs

* Fix: find context by target_id for concurrent CDP connections

* Fix: use target_id to find correct page in get_page

* Fix: use CDP to find context by browserContextId for concurrent sessions

* Revert context matching attempts - Playwright cannot see CDP-created contexts

* Add create_isolated_context flag for concurrent CDP crawls

When True, forces creation of a new browser context instead of reusing
the default context. Essential for concurrent crawls on the same browser
to prevent navigation conflicts.

* Add context caching to create_isolated_context branch

Uses contexts_by_config cache (same as non-CDP mode) to reuse contexts
for multiple URLs with same config. Still creates new page per crawl
for navigation isolation. Benefits batch/deep crawls.
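
Illustrative concurrent use of a single browser (sketch; only create_isolated_context comes from this change):

  import asyncio

  browser_config = BrowserConfig(
      cdp_url="ws://localhost:9222/devtools/browser/<uuid>",  # placeholder endpoint
      create_isolated_context=True,  # each crawl gets its own context instead of the shared default
  )
  async with AsyncWebCrawler(config=browser_config) as crawler:
      results = await asyncio.gather(
          crawler.arun("https://example.com/a"),
          crawler.arun("https://example.com/b"),
      )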

* Add init_scripts support to BrowserConfig for pre-page-load JS injection

This adds the ability to inject JavaScript that runs before any page loads,
useful for stealth evasions (canvas/audio fingerprinting, userAgentData).

- Add init_scripts parameter to BrowserConfig (list of JS strings)
- Apply init_scripts in setup_context() via context.add_init_script()
- Update from_kwargs() and to_dict() for serialization
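
Sketch of the new parameter (the script body is just an example):

  stealth_js = "Object.defineProperty(navigator, 'webdriver', { get: () => undefined });"
  browser_config = BrowserConfig(
      headless=True,
      init_scripts=[stealth_js],  # each entry is passed to context.add_init_script() before navigation
  )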

* Fix CDP connection handling: support WS URLs and proper cleanup

Changes to browser_manager.py:

1. _verify_cdp_ready(): Support multiple URL formats
   - WebSocket URLs (ws://, wss://): Skip HTTP verification, Playwright handles directly
   - HTTP URLs with query params: Properly parse with urlparse to preserve query string
   - Fixes issue where naive f"{cdp_url}/json/version" broke WS URLs and query params

2. close(): Proper cleanup when cdp_cleanup_on_close=True
   - Close all sessions (pages)
   - Close all contexts
   - Call browser.close() to disconnect (doesn't terminate browser, just releases connection)
   - Wait 1 second for CDP connection to fully release
   - Stop Playwright instance to prevent memory leaks

This enables:
- Connecting to specific browsers via WS URL
- Reusing the same browser with multiple sequential connections
- No user wait needed between connections (internal 1s delay handles it)

Added tests/browser/test_cdp_cleanup_reuse.py with comprehensive tests.
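
Roughly, the URL handling described in point 1 (simplified sketch, not the shipped code):

  from urllib.parse import urlparse

  def needs_http_verification(cdp_url: str) -> bool:
      # ws:// and wss:// endpoints are handed straight to Playwright
      return urlparse(cdp_url).scheme not in ("ws", "wss")

  def version_endpoint(cdp_url: str) -> str:
      # append /json/version to the path while preserving any query string
      parts = urlparse(cdp_url)
      return parts._replace(path=parts.path.rstrip("/") + "/json/version").geturl()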

* Update gitignore

* Some debugging for caching

* Add _generate_screenshot_from_html for raw: and file:// URLs

Implements the missing method that was being called but never defined.
Now raw: and file:// URLs can generate screenshots by:
1. Loading HTML into a browser page via page.set_content()
2. Taking screenshot using existing take_screenshot() method
3. Cleaning up the page afterward

This enables cached HTML to be rendered with screenshots in crawl4ai-cloud.
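
Conceptually, the flow looks like this (simplified sketch using Playwright primitives; `context` and `html` are assumed to be in scope, this is not the actual method body):

  page = await context.new_page()
  try:
      await page.set_content(html, wait_until="load")   # render the raw/cached HTML
      screenshot_bytes = await page.screenshot(full_page=True)
  finally:
      await page.close()                                # clean up the temporary page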

* Add PDF and MHTML support for raw: and file:// URLs

- Replace _generate_screenshot_from_html with _generate_media_from_html
- New method handles screenshot, PDF, and MHTML in one browser session
- Update raw: and file:// URL handlers to use new method
- Enables cached HTML to generate all media types

* Add crash recovery for deep crawl strategies

Add optional resume_state and on_state_change parameters to all deep
crawl strategies (BFS, DFS, Best-First) for cloud deployment crash
recovery.

Features:
- resume_state: Pass saved state to resume from checkpoint
- on_state_change: Async callback fired after each URL for real-time
  state persistence to external storage (Redis, DB, etc.)
- export_state(): Get last captured state manually
- Zero overhead when features are disabled (None defaults)

State includes visited URLs, pending queue/stack, depths, and
pages_crawled count. All state is JSON-serializable.
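
Illustrative wiring (the in-memory store stands in for Redis/DB; parameter names are from this change):

  import json
  from crawl4ai.deep_crawling import BFSDeepCrawlStrategy

  store = {}  # stand-in for Redis, a database, etc.

  async def persist(state: dict) -> None:
      store["crawl:state"] = json.dumps(state)  # state is JSON-serializable

  saved = store.get("crawl:state")
  strategy = BFSDeepCrawlStrategy(
      max_depth=2,
      resume_state=json.loads(saved) if saved else None,  # resume from checkpoint if present
      on_state_change=persist,                            # fired after each crawled URL
  )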

* Fix: HTTP strategy raw: URL parsing truncates at # character

The AsyncHTTPCrawlerStrategy.crawl() method used urlparse() to extract
content from raw: URLs. This caused HTML with CSS color codes like #eee
to be truncated because # is treated as a URL fragment delimiter.

Before: raw:body{background:#eee} -> parsed.path = 'body{background:'
After:  raw:body{background:#eee} -> raw_content = 'body{background:#eee}'

Fix: Strip the raw: or raw:// prefix directly instead of using urlparse,
matching how the browser strategy handles it.
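
In essence, the new handling (simplified sketch):

  if url.startswith("raw://"):
      raw_content = url[len("raw://"):]
  elif url.startswith("raw:"):
      raw_content = url[len("raw:"):]
  # no urlparse() call, so '#' inside inline CSS/JS survives intact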

* Add base_url parameter to CrawlerRunConfig for raw HTML processing

When processing raw: HTML (e.g., from cache), the URL parameter is meaningless
for markdown link resolution. This adds a base_url parameter that can be set
explicitly to provide proper URL resolution context.

Changes:
- Add base_url parameter to CrawlerRunConfig.__init__
- Add base_url to CrawlerRunConfig.from_kwargs
- Update aprocess_html to use base_url for markdown generation

Usage:
  config = CrawlerRunConfig(base_url='https://example.com')
  result = await crawler.arun(url=f'raw:{html}', config=config)

* Add prefetch mode for two-phase deep crawling

- Add `prefetch` parameter to CrawlerRunConfig
- Add `quick_extract_links()` function for fast link extraction
- Add short-circuit in aprocess_html() for prefetch mode
- Add 42 tests (unit, integration, regression)
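
Sketch of the intended two-phase pattern (only the prefetch parameter and quick_extract_links name come from this change; the surrounding flow is illustrative):

  # Phase 1: cheap link discovery, skips full processing
  result = await crawler.arun("https://example.com", config=CrawlerRunConfig(prefetch=True))
  internal_links = [link["href"] for link in result.links.get("internal", [])]

  # Phase 2: full crawl of the selected subset
  for href in internal_links[:10]:
      await crawler.arun(href, config=CrawlerRunConfig())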

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Updates on proxy rotation and proxy configuration

* Add proxy support to HTTP crawler strategy

* Add browser pipeline support for raw:/file:// URLs

- Add process_in_browser parameter to CrawlerRunConfig
- Route raw:/file:// URLs through _crawl_web() when browser operations needed
- Use page.set_content() instead of goto() for local content
- Fix cookie handling for non-HTTP URLs in browser_manager
- Auto-detect browser requirements: js_code, wait_for, screenshot, etc.
- Maintain fast path for raw:/file:// without browser params

Fixes #310
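
Illustrative usage (the raw HTML is a placeholder):

  config = CrawlerRunConfig(
      process_in_browser=True,   # force the browser pipeline even for raw:/file:// input
      screenshot=True,           # js_code / wait_for / screenshot also auto-enable it
  )
  result = await crawler.arun("raw:<html><body>Hello</body></html>", config=config)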

* Add smart TTL cache for sitemap URL seeder

- Add cache_ttl_hours and validate_sitemap_lastmod params to SeedingConfig
- New JSON cache format with metadata (version, created_at, lastmod, url_count)
- Cache validation by TTL expiry and sitemap lastmod comparison
- Auto-migration from old .jsonl to new .json format
- Fixes bug where incomplete cache was used indefinitely
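
Illustrative configuration (values are examples; the two new parameters are from this change):

  from crawl4ai import AsyncUrlSeeder, SeedingConfig

  seeding_config = SeedingConfig(
      source="sitemap",
      cache_ttl_hours=24,             # re-fetch the sitemap once the cache is older than 24h
      validate_sitemap_lastmod=True,  # also invalidate when the sitemap's lastmod changes
  )
  seeder = AsyncUrlSeeder()
  urls = await seeder.urls("example.com", seeding_config)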

* Update URL seeder docs with smart TTL cache parameters

- Add cache_ttl_hours and validate_sitemap_lastmod to parameter table
- Document smart TTL cache validation with examples
- Add cache-related troubleshooting entries
- Update key features summary

* Add MEMORY.md to gitignore

* Docs: Add multi-sample schema generation section

Add documentation explaining how to pass multiple HTML samples
to generate_schema() for stable selectors that work across pages
with varying DOM structures.

Includes:
- Problem explanation (fragile nth-child selectors)
- Solution with code example
- Key points for multi-sample queries
- Comparison table of fragile vs stable selectors
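
Hedged sketch of the pattern (passing a list of samples is an assumption here; see the generate_schema docs for the exact form):

  from crawl4ai import JsonCssExtractionStrategy, LLMConfig

  samples = [html_from_page_1, html_from_page_2, html_from_page_3]
  schema = JsonCssExtractionStrategy.generate_schema(
      html=samples,  # multiple samples -> selectors that hold across DOM variations
      query="Extract the product name and price",
      llm_config=LLMConfig(provider="openai/gpt-4o-mini"),
  )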

* Fix critical RCE and LFI vulnerabilities in Docker API deployment

Security fixes for vulnerabilities reported by ProjectDiscovery:

1. Remote Code Execution via Hooks (CVE pending)
   - Remove __import__ from allowed_builtins in hook_manager.py
   - Prevents arbitrary module imports (os, subprocess, etc.)
   - Hooks now disabled by default via CRAWL4AI_HOOKS_ENABLED env var

2. Local File Inclusion via file:// URLs (CVE pending)
   - Add URL scheme validation to /execute_js, /screenshot, /pdf, /html
   - Block file://, javascript:, data: and other dangerous schemes
   - Only allow http://, https://, and raw: (where appropriate)

3. Security hardening
   - Add CRAWL4AI_HOOKS_ENABLED=false as default (opt-in for hooks)
   - Add security warning comments in config.yml
   - Add validate_url_scheme() helper for consistent validation
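
Roughly, the shared helper's behavior (simplified sketch, not the shipped code):

   from urllib.parse import urlparse

   ALLOWED_SCHEMES = {"http", "https"}

   def validate_url_scheme(url: str, allow_raw: bool = False) -> None:
       if allow_raw and url.startswith("raw:"):
           return
       if urlparse(url).scheme.lower() not in ALLOWED_SCHEMES:
           raise ValueError(f"URL scheme not allowed: {url!r}")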

Testing:
   - Add unit tests (test_security_fixes.py) - 16 tests
   - Add integration tests (run_security_tests.py) for live server

Affected endpoints:
   - POST /crawl (hooks disabled by default)
   - POST /crawl/stream (hooks disabled by default)
   - POST /execute_js (URL validation added)
   - POST /screenshot (URL validation added)
   - POST /pdf (URL validation added)
   - POST /html (URL validation added)

Breaking changes:
   - Hooks require CRAWL4AI_HOOKS_ENABLED=true to function
   - file:// URLs no longer work on API endpoints (use library directly)

* Enhance authentication flow by implementing JWT token retrieval and adding authorization headers to API requests

* Add release notes for v0.7.9, detailing breaking changes, security fixes, new features, bug fixes, and documentation updates

* Add release notes for v0.8.0, detailing breaking changes, security fixes, new features, bug fixes, and documentation updates

Documentation for v0.8.0 release:

- SECURITY.md: Security policy and vulnerability reporting guidelines
- RELEASE_NOTES_v0.8.0.md: Comprehensive release notes
- migration/v0.8.0-upgrade-guide.md: Step-by-step migration guide
- security/GHSA-DRAFT-RCE-LFI.md: GitHub security advisory drafts
- CHANGELOG.md: Updated with v0.8.0 changes

Breaking changes documented:
- Docker API hooks disabled by default (CRAWL4AI_HOOKS_ENABLED)
- file:// URLs blocked on Docker API endpoints

Security fixes credited to Neo by ProjectDiscovery

* Add examples for deep crawl crash recovery and prefetch mode in documentation

* Release v0.8.0: The v0.8.0 Update

- Updated version to 0.8.0
- Added comprehensive demo and release notes
- Updated all documentation

* Update security researcher acknowledgment with a hyperlink for Neo by ProjectDiscovery

* Add async agenerate_schema method for schema generation

- Extract prompt building to shared _build_schema_prompt() method
- Add agenerate_schema() async version using aperform_completion_with_backoff
- Refactor generate_schema() to use shared prompt builder
- Fixes Gemini/Vertex AI compatibility in async contexts (FastAPI)
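
Illustrative async usage, e.g. inside a FastAPI handler (arguments mirror generate_schema and are assumptions beyond the method name):

  schema = await JsonCssExtractionStrategy.agenerate_schema(
      html=sample_html,
      query="Extract the article title and author",
      llm_config=LLMConfig(provider="gemini/gemini-1.5-pro"),
  )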

* Fix: Enable litellm.drop_params for O-series/GPT-5 model compatibility

O-series (o1, o3) and GPT-5 models only support temperature=1.
Setting litellm.drop_params=True auto-drops unsupported parameters
instead of throwing UnsupportedParamsError.

Fixes temperature=0.01 error for these models in LLM extraction.
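
For reference, the flag itself is a one-liner on litellm's global config:

  import litellm
  litellm.drop_params = True  # silently drop params a model doesn't support (e.g. temperature for o1/o3)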

---------

Co-authored-by: rbushria <rbushri@gmail.com>
Co-authored-by: AHMET YILMAZ <tawfik@kidocode.com>
Co-authored-by: Soham Kukreti <kukretisoham@gmail.com>
Co-authored-by: Chris Murphy <chris.murphy@klaviyo.com>
Co-authored-by: unclecode <unclecode@kidocode.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-17 14:19:15 +01:00

450 lines
20 KiB
Python

"""
End-to-end tests for Smart Cache validation.
Tests the full flow:
1. Fresh crawl (browser launch) - SLOW
2. Cached crawl without validation (check_cache_freshness=False) - FAST
3. Cached crawl with validation (check_cache_freshness=True) - FAST (304/fingerprint)
Verifies all layers:
- Database storage of etag, last_modified, head_fingerprint, cached_at
- Cache validation logic
- HTTP conditional requests (304 Not Modified)
- Performance improvements
"""
import pytest
import time
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
from crawl4ai.async_database import async_db_manager

class TestEndToEndCacheValidation:
    """End-to-end tests for the complete cache validation flow."""

    @pytest.mark.asyncio
    async def test_full_cache_flow_docs_python(self):
        """
        Test complete cache flow with docs.python.org:
        1. Fresh crawl (slow - browser) - using BYPASS to force fresh
        2. Cache hit without validation (fast)
        3. Cache hit with validation (fast - 304)
        """
        url = "https://docs.python.org/3/"
        browser_config = BrowserConfig(headless=True, verbose=False)

        # ========== CRAWL 1: Fresh crawl (force with WRITE_ONLY to skip cache read) ==========
        config1 = CrawlerRunConfig(
            cache_mode=CacheMode.WRITE_ONLY,  # Skip reading, write new data
            check_cache_freshness=False,
        )
        async with AsyncWebCrawler(config=browser_config) as crawler:
            start1 = time.perf_counter()
            result1 = await crawler.arun(url, config=config1)
            time1 = time.perf_counter() - start1

            assert result1.success, f"First crawl failed: {result1.error_message}"
            # WRITE_ONLY means we did a fresh crawl and wrote to cache
            assert result1.cache_status == "miss", f"Expected 'miss', got '{result1.cache_status}'"
            print(f"\n[CRAWL 1] Fresh crawl: {time1:.2f}s (cache_status: {result1.cache_status})")

        # Verify data is stored in database
        metadata = await async_db_manager.aget_cache_metadata(url)
        assert metadata is not None, "Metadata should be stored in database"
        assert metadata.get("etag") or metadata.get("last_modified"), "Should have ETag or Last-Modified"
        print(f" - Stored ETag: {metadata.get('etag', 'N/A')[:30]}...")
        print(f" - Stored Last-Modified: {metadata.get('last_modified', 'N/A')}")
        print(f" - Stored head_fingerprint: {metadata.get('head_fingerprint', 'N/A')}")
        print(f" - Stored cached_at: {metadata.get('cached_at', 'N/A')}")

        # ========== CRAWL 2: Cache hit WITHOUT validation ==========
        config2 = CrawlerRunConfig(
            cache_mode=CacheMode.ENABLED,
            check_cache_freshness=False,  # Skip validation - pure cache hit
        )
        async with AsyncWebCrawler(config=browser_config) as crawler:
            start2 = time.perf_counter()
            result2 = await crawler.arun(url, config=config2)
            time2 = time.perf_counter() - start2

            assert result2.success, f"Second crawl failed: {result2.error_message}"
            assert result2.cache_status == "hit", f"Expected 'hit', got '{result2.cache_status}'"
            print(f"\n[CRAWL 2] Cache hit (no validation): {time2:.2f}s (cache_status: {result2.cache_status})")
            print(f" - Speedup: {time1/time2:.1f}x faster than fresh crawl")
            # Should be MUCH faster - no browser, no HTTP request
            assert time2 < time1 / 2, f"Cache hit should be at least 2x faster (was {time1/time2:.1f}x)"

        # ========== CRAWL 3: Cache hit WITH validation (304) ==========
        config3 = CrawlerRunConfig(
            cache_mode=CacheMode.ENABLED,
            check_cache_freshness=True,  # Validate cache freshness
        )
        async with AsyncWebCrawler(config=browser_config) as crawler:
            start3 = time.perf_counter()
            result3 = await crawler.arun(url, config=config3)
            time3 = time.perf_counter() - start3

            assert result3.success, f"Third crawl failed: {result3.error_message}"
            # Should be "hit_validated" (304) or "hit_fallback" (error during validation)
            assert result3.cache_status in ["hit_validated", "hit_fallback"], \
                f"Expected validated cache hit, got '{result3.cache_status}'"
            print(f"\n[CRAWL 3] Cache hit (with validation): {time3:.2f}s (cache_status: {result3.cache_status})")
            print(f" - Speedup: {time1/time3:.1f}x faster than fresh crawl")
            # Should still be fast - just a HEAD request, no browser
            assert time3 < time1 / 2, f"Validated cache hit should be faster than fresh crawl"

        # ========== SUMMARY ==========
        print(f"\n{'='*60}")
        print(f"PERFORMANCE SUMMARY for {url}")
        print(f"{'='*60}")
        print(f" Fresh crawl (browser): {time1:.2f}s")
        print(f" Cache hit (no validation): {time2:.2f}s ({time1/time2:.1f}x faster)")
        print(f" Cache hit (with validation): {time3:.2f}s ({time1/time3:.1f}x faster)")
        print(f"{'='*60}")
    @pytest.mark.asyncio
    async def test_full_cache_flow_crawl4ai_docs(self):
        """Test with docs.crawl4ai.com."""
        url = "https://docs.crawl4ai.com/"
        browser_config = BrowserConfig(headless=True, verbose=False)

        # Fresh crawl - use WRITE_ONLY to ensure we get fresh data
        config1 = CrawlerRunConfig(cache_mode=CacheMode.WRITE_ONLY, check_cache_freshness=False)
        async with AsyncWebCrawler(config=browser_config) as crawler:
            start1 = time.perf_counter()
            result1 = await crawler.arun(url, config=config1)
            time1 = time.perf_counter() - start1
            assert result1.success
            assert result1.cache_status == "miss"
            print(f"\n[docs.crawl4ai.com] Fresh: {time1:.2f}s")

        # Cache hit with validation
        config2 = CrawlerRunConfig(cache_mode=CacheMode.ENABLED, check_cache_freshness=True)
        async with AsyncWebCrawler(config=browser_config) as crawler:
            start2 = time.perf_counter()
            result2 = await crawler.arun(url, config=config2)
            time2 = time.perf_counter() - start2
            assert result2.success
            assert result2.cache_status in ["hit_validated", "hit_fallback"]
            print(f"[docs.crawl4ai.com] Validated: {time2:.2f}s ({time1/time2:.1f}x faster)")
    @pytest.mark.asyncio
    async def test_verify_database_storage(self):
        """Verify all validation metadata is properly stored in database."""
        url = "https://docs.python.org/3/library/asyncio.html"
        browser_config = BrowserConfig(headless=True, verbose=False)
        config = CrawlerRunConfig(cache_mode=CacheMode.ENABLED, check_cache_freshness=False)

        async with AsyncWebCrawler(config=browser_config) as crawler:
            result = await crawler.arun(url, config=config)
            assert result.success

        # Verify all fields in database
        metadata = await async_db_manager.aget_cache_metadata(url)
        assert metadata is not None, "Metadata must be stored"
        assert "url" in metadata
        assert "etag" in metadata
        assert "last_modified" in metadata
        assert "head_fingerprint" in metadata
        assert "cached_at" in metadata
        assert "response_headers" in metadata

        print(f"\nDatabase storage verification for {url}:")
        print(f" - etag: {metadata['etag'][:40] if metadata['etag'] else 'None'}...")
        print(f" - last_modified: {metadata['last_modified']}")
        print(f" - head_fingerprint: {metadata['head_fingerprint']}")
        print(f" - cached_at: {metadata['cached_at']}")
        print(f" - response_headers keys: {list(metadata['response_headers'].keys())[:5]}...")

        # At least one validation field should be populated
        has_validation_data = (
            metadata["etag"] or
            metadata["last_modified"] or
            metadata["head_fingerprint"]
        )
        assert has_validation_data, "Should have at least one validation field"
    @pytest.mark.asyncio
    async def test_head_fingerprint_stored_and_used(self):
        """Verify head fingerprint is computed, stored, and used for validation."""
        url = "https://example.com/"
        browser_config = BrowserConfig(headless=True, verbose=False)

        # Fresh crawl
        config1 = CrawlerRunConfig(cache_mode=CacheMode.ENABLED, check_cache_freshness=False)
        async with AsyncWebCrawler(config=browser_config) as crawler:
            result1 = await crawler.arun(url, config=config1)
            assert result1.success
            assert result1.head_fingerprint, "head_fingerprint should be set on CrawlResult"

        # Verify in database
        metadata = await async_db_manager.aget_cache_metadata(url)
        assert metadata["head_fingerprint"], "head_fingerprint should be stored in database"
        assert metadata["head_fingerprint"] == result1.head_fingerprint
        print(f"\nHead fingerprint for {url}:")
        print(f" - CrawlResult.head_fingerprint: {result1.head_fingerprint}")
        print(f" - Database head_fingerprint: {metadata['head_fingerprint']}")

        # Validate using fingerprint
        config2 = CrawlerRunConfig(cache_mode=CacheMode.ENABLED, check_cache_freshness=True)
        async with AsyncWebCrawler(config=browser_config) as crawler:
            result2 = await crawler.arun(url, config=config2)
            assert result2.success
            assert result2.cache_status in ["hit_validated", "hit_fallback"]
            print(f" - Validation result: {result2.cache_status}")

class TestCacheValidationPerformance:
    """Performance benchmarks for cache validation."""

    @pytest.mark.asyncio
    async def test_multiple_urls_performance(self):
        """Test cache performance across multiple URLs."""
        urls = [
            "https://docs.python.org/3/",
            "https://docs.python.org/3/library/asyncio.html",
            "https://en.wikipedia.org/wiki/Python_(programming_language)",
        ]
        browser_config = BrowserConfig(headless=True, verbose=False)
        fresh_times = []
        cached_times = []

        print(f"\n{'='*70}")
        print("MULTI-URL PERFORMANCE TEST")
        print(f"{'='*70}")

        # Fresh crawls - use WRITE_ONLY to force fresh crawl
        for url in urls:
            config = CrawlerRunConfig(cache_mode=CacheMode.WRITE_ONLY, check_cache_freshness=False)
            async with AsyncWebCrawler(config=browser_config) as crawler:
                start = time.perf_counter()
                result = await crawler.arun(url, config=config)
                elapsed = time.perf_counter() - start
                fresh_times.append(elapsed)
                print(f"Fresh: {url[:50]:50} {elapsed:.2f}s ({result.cache_status})")

        # Cached crawls with validation
        for url in urls:
            config = CrawlerRunConfig(cache_mode=CacheMode.ENABLED, check_cache_freshness=True)
            async with AsyncWebCrawler(config=browser_config) as crawler:
                start = time.perf_counter()
                result = await crawler.arun(url, config=config)
                elapsed = time.perf_counter() - start
                cached_times.append(elapsed)
                print(f"Cached: {url[:50]:50} {elapsed:.2f}s ({result.cache_status})")

        avg_fresh = sum(fresh_times) / len(fresh_times)
        avg_cached = sum(cached_times) / len(cached_times)
        total_fresh = sum(fresh_times)
        total_cached = sum(cached_times)

        print(f"\n{'='*70}")
        print(f"RESULTS:")
        print(f" Total fresh crawl time: {total_fresh:.2f}s")
        print(f" Total cached time: {total_cached:.2f}s")
        print(f" Average speedup: {avg_fresh/avg_cached:.1f}x")
        print(f" Time saved: {total_fresh - total_cached:.2f}s")
        print(f"{'='*70}")

        # Cached should be significantly faster
        assert avg_cached < avg_fresh / 2, "Cached crawls should be at least 2x faster"
    @pytest.mark.asyncio
    async def test_repeated_access_same_url(self):
        """Test repeated access to the same URL shows consistent cache hits."""
        url = "https://docs.python.org/3/"
        num_accesses = 5
        browser_config = BrowserConfig(headless=True, verbose=False)

        print(f"\n{'='*60}")
        print(f"REPEATED ACCESS TEST: {url}")
        print(f"{'='*60}")

        # First access - fresh crawl
        config = CrawlerRunConfig(cache_mode=CacheMode.ENABLED, check_cache_freshness=False)
        async with AsyncWebCrawler(config=browser_config) as crawler:
            start = time.perf_counter()
            result = await crawler.arun(url, config=config)
            fresh_time = time.perf_counter() - start
            print(f"Access 1 (fresh): {fresh_time:.2f}s - {result.cache_status}")

        # Repeated accesses - should all be cache hits
        cached_times = []
        for i in range(2, num_accesses + 1):
            config = CrawlerRunConfig(cache_mode=CacheMode.ENABLED, check_cache_freshness=True)
            async with AsyncWebCrawler(config=browser_config) as crawler:
                start = time.perf_counter()
                result = await crawler.arun(url, config=config)
                elapsed = time.perf_counter() - start
                cached_times.append(elapsed)
                print(f"Access {i} (cached): {elapsed:.2f}s - {result.cache_status}")
                assert result.cache_status in ["hit", "hit_validated", "hit_fallback"]

        avg_cached = sum(cached_times) / len(cached_times)
        print(f"\nAverage cached time: {avg_cached:.2f}s")
        print(f"Speedup over fresh: {fresh_time/avg_cached:.1f}x")

class TestCacheValidationModes:
    """Test different cache modes and their behavior."""

    @pytest.mark.asyncio
    async def test_cache_bypass_always_fresh(self):
        """CacheMode.BYPASS should always do fresh crawl."""
        # Use a unique URL path to avoid cache from other tests
        url = "https://example.com/test-bypass"
        browser_config = BrowserConfig(headless=True, verbose=False)

        # First crawl with WRITE_ONLY to populate cache (always fresh)
        config1 = CrawlerRunConfig(cache_mode=CacheMode.WRITE_ONLY, check_cache_freshness=False)
        async with AsyncWebCrawler(config=browser_config) as crawler:
            result1 = await crawler.arun(url, config=config1)
            assert result1.cache_status == "miss"

        # Second crawl with BYPASS - should NOT use cache
        config2 = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, check_cache_freshness=False)
        async with AsyncWebCrawler(config=browser_config) as crawler:
            result2 = await crawler.arun(url, config=config2)
            # BYPASS mode means no cache interaction
            assert result2.cache_status is None or result2.cache_status == "miss"
            print(f"\nCacheMode.BYPASS result: {result2.cache_status}")
    @pytest.mark.asyncio
    async def test_validation_disabled_uses_cache_directly(self):
        """With check_cache_freshness=False, should use cache without HTTP validation."""
        url = "https://docs.python.org/3/tutorial/"
        browser_config = BrowserConfig(headless=True, verbose=False)

        # Fresh crawl - use WRITE_ONLY to force fresh
        config1 = CrawlerRunConfig(cache_mode=CacheMode.WRITE_ONLY, check_cache_freshness=False)
        async with AsyncWebCrawler(config=browser_config) as crawler:
            result1 = await crawler.arun(url, config=config1)
            assert result1.cache_status == "miss"

        # Cached with validation DISABLED - should be "hit" (not "hit_validated")
        config2 = CrawlerRunConfig(cache_mode=CacheMode.ENABLED, check_cache_freshness=False)
        async with AsyncWebCrawler(config=browser_config) as crawler:
            start = time.perf_counter()
            result2 = await crawler.arun(url, config=config2)
            elapsed = time.perf_counter() - start
            assert result2.cache_status == "hit", f"Expected 'hit', got '{result2.cache_status}'"
            print(f"\nValidation disabled: {elapsed:.3f}s (cache_status: {result2.cache_status})")
            # Should be very fast - no HTTP request at all
            assert elapsed < 1.0, "Cache hit without validation should be < 1 second"
    @pytest.mark.asyncio
    async def test_validation_enabled_checks_freshness(self):
        """With check_cache_freshness=True, should validate before using cache."""
        url = "https://docs.python.org/3/reference/"
        browser_config = BrowserConfig(headless=True, verbose=False)

        # Fresh crawl
        config1 = CrawlerRunConfig(cache_mode=CacheMode.ENABLED, check_cache_freshness=False)
        async with AsyncWebCrawler(config=browser_config) as crawler:
            result1 = await crawler.arun(url, config=config1)

        # Cached with validation ENABLED - should be "hit_validated"
        config2 = CrawlerRunConfig(cache_mode=CacheMode.ENABLED, check_cache_freshness=True)
        async with AsyncWebCrawler(config=browser_config) as crawler:
            start = time.perf_counter()
            result2 = await crawler.arun(url, config=config2)
            elapsed = time.perf_counter() - start
            assert result2.cache_status in ["hit_validated", "hit_fallback"]
            print(f"\nValidation enabled: {elapsed:.3f}s (cache_status: {result2.cache_status})")

class TestCacheValidationResponseHeaders:
    """Test that response headers are properly stored and retrieved."""

    @pytest.mark.asyncio
    async def test_response_headers_stored(self):
        """Verify response headers including ETag and Last-Modified are stored."""
        url = "https://docs.python.org/3/"
        browser_config = BrowserConfig(headless=True, verbose=False)
        config = CrawlerRunConfig(cache_mode=CacheMode.ENABLED, check_cache_freshness=False)

        async with AsyncWebCrawler(config=browser_config) as crawler:
            result = await crawler.arun(url, config=config)
            assert result.success
            assert result.response_headers is not None

        # Check that cache-relevant headers are captured
        headers = result.response_headers
        print(f"\nResponse headers for {url}:")

        # Look for ETag (case-insensitive)
        etag = headers.get("etag") or headers.get("ETag")
        print(f" - ETag: {etag}")

        # Look for Last-Modified
        last_modified = headers.get("last-modified") or headers.get("Last-Modified")
        print(f" - Last-Modified: {last_modified}")

        # Look for Cache-Control
        cache_control = headers.get("cache-control") or headers.get("Cache-Control")
        print(f" - Cache-Control: {cache_control}")

        # At least one should be present for docs.python.org
        assert etag or last_modified, "Should have ETag or Last-Modified header"
    @pytest.mark.asyncio
    async def test_headers_used_for_validation(self):
        """Verify stored headers are used for conditional requests."""
        url = "https://docs.crawl4ai.com/"
        browser_config = BrowserConfig(headless=True, verbose=False)

        # Fresh crawl to store headers
        config1 = CrawlerRunConfig(cache_mode=CacheMode.ENABLED, check_cache_freshness=False)
        async with AsyncWebCrawler(config=browser_config) as crawler:
            result1 = await crawler.arun(url, config=config1)

        # Get stored metadata
        metadata = await async_db_manager.aget_cache_metadata(url)
        stored_etag = metadata.get("etag")
        stored_last_modified = metadata.get("last_modified")
        print(f"\nStored validation data for {url}:")
        print(f" - etag: {stored_etag}")
        print(f" - last_modified: {stored_last_modified}")

        # Validate - should use stored headers
        config2 = CrawlerRunConfig(cache_mode=CacheMode.ENABLED, check_cache_freshness=True)
        async with AsyncWebCrawler(config=browser_config) as crawler:
            result2 = await crawler.arun(url, config=config2)
            # Should get validated hit (304 response)
            assert result2.cache_status in ["hit_validated", "hit_fallback"]
            print(f" - Validation result: {result2.cache_status}")