Release v0.8.0: Crash Recovery, Prefetch Mode & Security Fixes (#1712)

* Fix: Use correct URL variable for raw HTML extraction (#1116)

- Prevents full HTML content from being passed as the URL to extraction strategies
- Added unit tests covering raw HTML and regular URL processing

* Fix #1181: Preserve whitespace in code blocks during HTML scraping

  The remove_empty_elements_fast() method was removing whitespace-only
  span elements inside <pre> and <code> tags, causing import statements
  like "import torch" to become "importtorch". Now skips elements inside
  code blocks where whitespace is significant.
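
The guard amounts to an ancestor check before removing "empty" elements. A minimal sketch (not the actual `remove_empty_elements_fast()` body):

```python
from lxml import html

WHITESPACE_SIGNIFICANT = {"pre", "code"}

def removable(el) -> bool:
    # Whitespace-only elements may be dropped only outside <pre>/<code>
    if el.text and el.text.strip():
        return False  # has real text: never "empty"
    return not any(a.tag in WHITESPACE_SIGNIFICANT for a in el.iterancestors())

doc = html.fromstring(
    "<pre><code><span>import</span><span> </span><span>torch</span></code></pre>"
)
spans = list(doc.iter("span"))
assert removable(spans[1]) is False  # the whitespace-only span survives
```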

* Refactor Pydantic model configuration to use ConfigDict for arbitrary types

* Fix EmbeddingStrategy: Uncomment response handling for the variations and clean up mock data. ref #1621

* Fix: permission issues with .cache/url_seeder and other runtime cache dirs. ref #1638

* fix: ensure BrowserConfig.to_dict serializes proxy_config

* feat: make LLM backoff configurable end-to-end

- extend LLMConfig with backoff delay/attempt/factor fields and thread them
  through LLMExtractionStrategy, LLMContentFilter, table extraction, and
  Docker API handlers
- expose the backoff parameter knobs on perform_completion_with_backoff/aperform_completion_with_backoff
  and document them in the md_v2 guides
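
A hedged sketch of the configuration surface; the field names below are illustrative (the commit only says "backoff delay/attempt/factor fields"), so check `LLMConfig` for the exact spelling:

```python
from crawl4ai import LLMConfig, LLMExtractionStrategy

llm_cfg = LLMConfig(
    provider="openai/gpt-4o-mini",
    backoff_base_delay=2.0,   # hypothetical name: initial retry delay in seconds
    backoff_max_attempts=5,   # hypothetical name: retry budget
    backoff_factor=2.0,       # hypothetical name: exponential multiplier
)
strategy = LLMExtractionStrategy(llm_config=llm_cfg, instruction="Extract pricing info")
```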

* reproduced AttributeError from #1642

* pass timeout parameter to docker client request

* added missing deep crawling objects to init

* generalized query in ContentRelevanceFilter to be a str or list

* import modules from enhanceable deserialization

* parameterized tests

* Fix: capture current page URL to reflect JavaScript navigation and add test for delayed redirects. ref #1268

* refactor: replace PyPDF2 with pypdf across the codebase. ref #1412

* Add browser_context_id and target_id parameters to BrowserConfig

Enable Crawl4AI to connect to pre-created CDP browser contexts, which is
essential for cloud browser services that pre-create isolated contexts.

Changes:
- Add browser_context_id and target_id parameters to BrowserConfig
- Update from_kwargs() and to_dict() methods
- Modify BrowserManager.start() to use existing context when provided
- Add _get_page_by_target_id() helper method
- Update get_page() to handle pre-existing targets
- Add test for browser_context_id functionality

This enables cloud services to:
1. Create isolated CDP contexts before Crawl4AI connects
2. Pass context/target IDs to BrowserConfig
3. Have Crawl4AI reuse existing contexts instead of creating new ones
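
For example, a cloud service that pre-creates an isolated context via CDP might connect like this (endpoint and IDs are placeholders):

```python
from crawl4ai import AsyncWebCrawler, BrowserConfig

browser_cfg = BrowserConfig(
    cdp_url="http://browser-pool.internal:9222",            # placeholder endpoint
    browser_context_id="CTX-FROM-Target.createBrowserContext",
    target_id="OPTIONAL-PAGE-TARGET-ID",                     # attach to an existing page
)

async def crawl():
    async with AsyncWebCrawler(config=browser_cfg) as crawler:
        return await crawler.arun("https://example.com")
```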

* Add cdp_cleanup_on_close flag to prevent memory leaks in cloud/server scenarios

* Fix: add cdp_cleanup_on_close to from_kwargs

* Fix: find context by target_id for concurrent CDP connections

* Fix: use target_id to find correct page in get_page

* Fix: use CDP to find context by browserContextId for concurrent sessions

* Revert context matching attempts - Playwright cannot see CDP-created contexts

* Add create_isolated_context flag for concurrent CDP crawls

When True, forces creation of a new browser context instead of reusing
the default context. Essential for concurrent crawls on the same browser
to prevent navigation conflicts.

* Add context caching to create_isolated_context branch

Uses contexts_by_config cache (same as non-CDP mode) to reuse contexts
for multiple URLs with same config. Still creates new page per crawl
for navigation isolation. Benefits batch/deep crawls.
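
A hedged sketch of concurrent crawls against one CDP browser, each in its own context (assuming `create_isolated_context` lives on `BrowserConfig` alongside the other CDP parameters added above):

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig

browser_cfg = BrowserConfig(
    cdp_url="ws://localhost:9222/devtools/browser/PLACEHOLDER-ID",
    create_isolated_context=True,  # new context per connection, no navigation conflicts
)

async def crawl(url: str):
    async with AsyncWebCrawler(config=browser_cfg) as crawler:
        return await crawler.arun(url)

async def main():
    await asyncio.gather(crawl("https://a.example"), crawl("https://b.example"))
```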

* Add init_scripts support to BrowserConfig for pre-page-load JS injection

This adds the ability to inject JavaScript that runs before any page loads,
useful for stealth evasions (canvas/audio fingerprinting, userAgentData).

- Add init_scripts parameter to BrowserConfig (list of JS strings)
- Apply init_scripts in setup_context() via context.add_init_script()
- Update from_kwargs() and to_dict() for serialization
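
Example, grounded in the parameters above:

```python
from crawl4ai import BrowserConfig

# Runs in every new page before any site script, via context.add_init_script()
stealth = "Object.defineProperty(navigator, 'webdriver', { get: () => undefined });"

browser_cfg = BrowserConfig(init_scripts=[stealth])
```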

* Fix CDP connection handling: support WS URLs and proper cleanup

Changes to browser_manager.py:

1. _verify_cdp_ready(): Support multiple URL formats
   - WebSocket URLs (ws://, wss://): Skip HTTP verification, Playwright handles directly
   - HTTP URLs with query params: Properly parse with urlparse to preserve query string
   - Fixes issue where naive f"{cdp_url}/json/version" broke WS URLs and query params

2. close(): Proper cleanup when cdp_cleanup_on_close=True
   - Close all sessions (pages)
   - Close all contexts
   - Call browser.close() to disconnect (doesn't terminate browser, just releases connection)
   - Wait 1 second for CDP connection to fully release
   - Stop Playwright instance to prevent memory leaks

This enables:
- Connecting to specific browsers via WS URL
- Reusing the same browser with multiple sequential connections
- No user wait needed between connections (internal 1s delay handles it)

Added tests/browser/test_cdp_cleanup_reuse.py with comprehensive tests.
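
A sketch of the scheme-aware handling described above (the real logic lives in `_verify_cdp_ready()`):

```python
from urllib.parse import urlparse, urlunparse

def cdp_version_url(cdp_url: str):
    parsed = urlparse(cdp_url)
    if parsed.scheme in ("ws", "wss"):
        return None  # WebSocket endpoint: skip the HTTP check; Playwright connects directly
    # Preserve any query string instead of naively appending to the full URL
    return urlunparse(parsed._replace(path=parsed.path.rstrip("/") + "/json/version"))
```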

* Update gitignore

* Some debugging for caching

* Add _generate_screenshot_from_html for raw: and file:// URLs

Implements the missing method that was being called but never defined.
Now raw: and file:// URLs can generate screenshots by:
1. Loading HTML into a browser page via page.set_content()
2. Taking screenshot using existing take_screenshot() method
3. Cleaning up the page afterward

This enables cached HTML to be rendered with screenshots in crawl4ai-cloud.

* Add PDF and MHTML support for raw: and file:// URLs

- Replace _generate_screenshot_from_html with _generate_media_from_html
- New method handles screenshot, PDF, and MHTML in one browser session
- Update raw: and file:// URL handlers to use new method
- Enables cached HTML to generate all media types
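
For context, the underlying Playwright flow looks roughly like this (a standalone sketch, not the internal `_generate_media_from_html` signature):

```python
from playwright.async_api import async_playwright

async def media_from_html(html: str):
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        await page.set_content(html, wait_until="load")  # no navigation for raw:/file://
        screenshot = await page.screenshot(full_page=True)
        pdf = await page.pdf()  # Chromium-only
        cdp = await page.context.new_cdp_session(page)
        mhtml = (await cdp.send("Page.captureSnapshot", {"format": "mhtml"}))["data"]
        await browser.close()
        return screenshot, pdf, mhtml
```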

* Add crash recovery for deep crawl strategies

Add optional resume_state and on_state_change parameters to all deep
crawl strategies (BFS, DFS, Best-First) for cloud deployment crash
recovery.

Features:
- resume_state: Pass saved state to resume from checkpoint
- on_state_change: Async callback fired after each URL for real-time
  state persistence to external storage (Redis, DB, etc.)
- export_state(): Get last captured state manually
- Zero overhead when features are disabled (None defaults)

State includes visited URLs, pending queue/stack, depths, and
pages_crawled count. All state is JSON-serializable.
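
A sketch of file-based checkpointing (Redis or a DB slots in the same way; the `BFSDeepCrawlStrategy` import path follows the deep-crawl docs):

```python
import json
import os
from crawl4ai.deep_crawling import BFSDeepCrawlStrategy

STATE_FILE = "crawl_state.json"

async def persist(state: dict) -> None:
    # Fired after every crawled URL; state is JSON-serializable by design
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

resume = None
if os.path.exists(STATE_FILE):
    with open(STATE_FILE) as f:
        resume = json.load(f)  # pick up where the crashed run left off

strategy = BFSDeepCrawlStrategy(
    max_depth=2,
    resume_state=resume,       # None on a fresh run: zero overhead
    on_state_change=persist,   # real-time persistence hook
)
```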

* Fix: HTTP strategy raw: URL parsing truncates at # character

The AsyncHTTPCrawlerStrategy.crawl() method used urlparse() to extract
content from raw: URLs. This caused HTML with CSS color codes like #eee
to be truncated because # is treated as a URL fragment delimiter.

Before: raw:body{background:#eee} -> parsed.path = 'body{background:'
After:  raw:body{background:#eee} -> raw_content = 'body{background:#eee}'

Fix: Strip the raw: or raw:// prefix directly instead of using urlparse,
matching how the browser strategy handles it.
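
The fix amounts to prefix stripping rather than URL parsing, roughly:

```python
def extract_raw_content(url: str) -> str:
    # Strip the prefix directly; urlparse() treats "#" as a fragment delimiter
    if url.startswith("raw://"):
        return url[len("raw://"):]
    return url[len("raw:"):]

assert extract_raw_content("raw:body{background:#eee}") == "body{background:#eee}"
```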

* Add base_url parameter to CrawlerRunConfig for raw HTML processing

When processing raw: HTML (e.g., from cache), the URL parameter is meaningless
for markdown link resolution. This adds a base_url parameter that can be set
explicitly to provide proper URL resolution context.

Changes:
- Add base_url parameter to CrawlerRunConfig.__init__
- Add base_url to CrawlerRunConfig.from_kwargs
- Update aprocess_html to use base_url for markdown generation

Usage:
  config = CrawlerRunConfig(base_url='https://example.com')
  result = await crawler.arun(url=f'raw:{html}', config=config)

* Add prefetch mode for two-phase deep crawling

- Add `prefetch` parameter to CrawlerRunConfig
- Add `quick_extract_links()` function for fast link extraction
- Add short-circuit in aprocess_html() for prefetch mode
- Add 42 tests (unit, integration, regression)
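
A hedged two-phase sketch (assuming prefetch results still populate `result.links` in the usual internal/external shape):

```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def two_phase(seed: str):
    async with AsyncWebCrawler() as crawler:
        # Phase 1: links only; aprocess_html() short-circuits before markdown/extraction
        prefetch = await crawler.arun(seed, config=CrawlerRunConfig(prefetch=True))
        internal = [link["href"] for link in prefetch.links.get("internal", [])]

        # Phase 2: full processing only for the URLs worth keeping
        for url in internal[:5]:
            result = await crawler.arun(url, config=CrawlerRunConfig())
            print(url, len(result.markdown.raw_markdown))
```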

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Updates on proxy rotation and proxy configuration

* Add proxy support to HTTP crawler strategy

* Add browser pipeline support for raw:/file:// URLs

- Add process_in_browser parameter to CrawlerRunConfig
- Route raw:/file:// URLs through _crawl_web() when browser operations needed
- Use page.set_content() instead of goto() for local content
- Fix cookie handling for non-HTTP URLs in browser_manager
- Auto-detect browser requirements: js_code, wait_for, screenshot, etc.
- Maintain fast path for raw:/file:// without browser params

Fixes #310
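
Usage sketch (hedged: `process_in_browser` is the parameter this commit adds; browser-only options like `screenshot` are also auto-detected):

```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

config = CrawlerRunConfig(
    process_in_browser=True,  # force the browser pipeline for raw:/file:// input
    screenshot=True,
)

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("raw:<html><body><h1>Hi</h1></body></html>", config=config)
        assert result.screenshot is not None
```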

* Add smart TTL cache for sitemap URL seeder

- Add cache_ttl_hours and validate_sitemap_lastmod params to SeedingConfig
- New JSON cache format with metadata (version, created_at, lastmod, url_count)
- Cache validation by TTL expiry and sitemap lastmod comparison
- Auto-migration from old .jsonl to new .json format
- Fixes bug where incomplete cache was used indefinitely
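
In practice (a sketch using the documented seeder API):

```python
from crawl4ai import AsyncUrlSeeder, SeedingConfig

async def seed():
    config = SeedingConfig(
        source="sitemap",
        cache_ttl_hours=24,             # re-fetch once the cache is a day old
        validate_sitemap_lastmod=True,  # invalidate early if the sitemap's lastmod is newer
    )
    seeder = AsyncUrlSeeder()
    return await seeder.urls("example.com", config)
```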

* Update URL seeder docs with smart TTL cache parameters

- Add cache_ttl_hours and validate_sitemap_lastmod to parameter table
- Document smart TTL cache validation with examples
- Add cache-related troubleshooting entries
- Update key features summary

* Add MEMORY.md to gitignore

* Docs: Add multi-sample schema generation section

Add documentation explaining how to pass multiple HTML samples
to generate_schema() for stable selectors that work across pages
with varying DOM structures.

Includes:
- Problem explanation (fragile nth-child selectors)
- Solution with code example
- Key points for multi-sample queries
- Comparison table of fragile vs stable selectors
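
Roughly, per the new docs section (hedged: the exact accepted form for multiple samples may differ, so check `generate_schema()`'s signature):

```python
from crawl4ai import JsonCssExtractionStrategy, LLMConfig

samples = ["<html>...page 1...</html>", "<html>...page 2...</html>"]  # varying DOMs

schema = JsonCssExtractionStrategy.generate_schema(
    html="\n\n".join(samples),  # give the LLM several samples so it sees the variation
    query="Extract product name and price; prefer class-based selectors over nth-child",
    llm_config=LLMConfig(provider="openai/gpt-4o-mini"),
)
```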

* Fix critical RCE and LFI vulnerabilities in Docker API deployment

Security fixes for vulnerabilities reported by ProjectDiscovery:

1. Remote Code Execution via Hooks (CVE pending)
   - Remove __import__ from allowed_builtins in hook_manager.py
   - Prevents arbitrary module imports (os, subprocess, etc.)
   - Hooks now disabled by default via CRAWL4AI_HOOKS_ENABLED env var

2. Local File Inclusion via file:// URLs (CVE pending)
   - Add URL scheme validation to /execute_js, /screenshot, /pdf, /html
   - Block file://, javascript:, data: and other dangerous schemes
   - Only allow http://, https://, and raw: (where appropriate)

3. Security hardening
   - Add CRAWL4AI_HOOKS_ENABLED=false as default (opt-in for hooks)
   - Add security warning comments in config.yml
   - Add validate_url_scheme() helper for consistent validation

Testing:
   - Add unit tests (test_security_fixes.py) - 16 tests
   - Add integration tests (run_security_tests.py) for live server

Affected endpoints:
   - POST /crawl (hooks disabled by default)
   - POST /crawl/stream (hooks disabled by default)
   - POST /execute_js (URL validation added)
   - POST /screenshot (URL validation added)
   - POST /pdf (URL validation added)
   - POST /html (URL validation added)

Breaking changes:
   - Hooks require CRAWL4AI_HOOKS_ENABLED=true to function
   - file:// URLs no longer work on API endpoints (use library directly)
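
A sketch of the helper the commit describes (the real `validate_url_scheme()` lives in the Docker API code):

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def validate_url_scheme(url: str, allow_raw: bool = False) -> None:
    """Reject file://, javascript:, data:, and other dangerous schemes."""
    if allow_raw and url.startswith("raw:"):
        return  # raw: HTML is permitted only where appropriate
    scheme = urlparse(url).scheme.lower()
    if scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"Blocked URL scheme: {scheme!r}")
```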

* Enhance authentication flow by implementing JWT token retrieval and adding authorization headers to API requests

* Add release notes for v0.7.9, detailing breaking changes, security fixes, new features, bug fixes, and documentation updates

* Add release notes for v0.8.0, detailing breaking changes, security fixes, new features, bug fixes, and documentation updates

Documentation for v0.8.0 release:

- SECURITY.md: Security policy and vulnerability reporting guidelines
- RELEASE_NOTES_v0.8.0.md: Comprehensive release notes
- migration/v0.8.0-upgrade-guide.md: Step-by-step migration guide
- security/GHSA-DRAFT-RCE-LFI.md: GitHub security advisory drafts
- CHANGELOG.md: Updated with v0.8.0 changes

Breaking changes documented:
- Docker API hooks disabled by default (CRAWL4AI_HOOKS_ENABLED)
- file:// URLs blocked on Docker API endpoints

Security fixes credited to Neo by ProjectDiscovery

* Add examples for deep crawl crash recovery and prefetch mode in documentation

* Release v0.8.0

- Updated version to 0.8.0
- Added comprehensive demo and release notes
- Updated all documentation

* Update security researcher acknowledgment with a hyperlink for Neo by ProjectDiscovery

* Add async agenerate_schema method for schema generation

- Extract prompt building to shared _build_schema_prompt() method
- Add agenerate_schema() async version using aperform_completion_with_backoff
- Refactor generate_schema() to use shared prompt builder
- Fixes Gemini/Vertex AI compatibility in async contexts (FastAPI)
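
Hedged usage sketch (kwargs assumed to mirror `generate_schema()`):

```python
from crawl4ai import JsonCssExtractionStrategy, LLMConfig

async def build_schema(sample_html: str) -> dict:
    # Safe inside a running event loop (e.g. a FastAPI handler), unlike the sync version
    return await JsonCssExtractionStrategy.agenerate_schema(
        html=sample_html,
        query="Extract article title and author",
        llm_config=LLMConfig(provider="gemini/gemini-1.5-pro"),
    )
```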

* Fix: Enable litellm.drop_params for O-series/GPT-5 model compatibility

O-series (o1, o3) and GPT-5 models only support temperature=1.
Setting litellm.drop_params=True auto-drops unsupported parameters
instead of throwing UnsupportedParamsError.

Fixes temperature=0.01 error for these models in LLM extraction.
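
The change is effectively this one-time switch (litellm's documented flag):

```python
import litellm

litellm.drop_params = True  # silently drop params a model rejects, e.g. temperature for o1/o3/GPT-5
```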

---------

Co-authored-by: rbushria <rbushri@gmail.com>
Co-authored-by: AHMET YILMAZ <tawfik@kidocode.com>
Co-authored-by: Soham Kukreti <kukretisoham@gmail.com>
Co-authored-by: Chris Murphy <chris.murphy@klaviyo.com>
Co-authored-by: unclecode <unclecode@kidocode.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Committed by Nasrin via GitHub on 2026-01-17 (commit f6f7f1b551, parent c85f56b085): 58 changed files with 11,942 additions and 2,411 deletions.

One of the new files, the sticky proxy session test suite (569 lines), follows:

"""
Comprehensive test suite for Sticky Proxy Sessions functionality.
Tests cover:
1. Basic sticky session - same proxy for same session_id
2. Different sessions get different proxies
3. Session release
4. TTL expiration
5. Thread safety / concurrent access
6. Integration tests with AsyncWebCrawler
"""

import asyncio
import os

import pytest

from crawl4ai import AsyncWebCrawler, BrowserConfig
from crawl4ai.async_configs import CrawlerRunConfig, ProxyConfig
from crawl4ai.proxy_strategy import RoundRobinProxyStrategy
from crawl4ai.cache_context import CacheMode


class TestRoundRobinProxyStrategySession:
"""Test suite for RoundRobinProxyStrategy session methods."""
def setup_method(self):
"""Setup for each test method."""
self.proxies = [
ProxyConfig(server=f"http://proxy{i}.test:8080")
for i in range(5)
]
# ==================== BASIC STICKY SESSION TESTS ====================
@pytest.mark.asyncio
async def test_sticky_session_same_proxy(self):
"""Verify same proxy is returned for same session_id."""
strategy = RoundRobinProxyStrategy(self.proxies)
# First call - acquires proxy
proxy1 = await strategy.get_proxy_for_session("session-1")
# Second call - should return same proxy
proxy2 = await strategy.get_proxy_for_session("session-1")
# Third call - should return same proxy
proxy3 = await strategy.get_proxy_for_session("session-1")
assert proxy1 is not None
assert proxy1.server == proxy2.server == proxy3.server
@pytest.mark.asyncio
async def test_different_sessions_different_proxies(self):
"""Verify different session_ids can get different proxies."""
strategy = RoundRobinProxyStrategy(self.proxies)
proxy_a = await strategy.get_proxy_for_session("session-a")
proxy_b = await strategy.get_proxy_for_session("session-b")
proxy_c = await strategy.get_proxy_for_session("session-c")
# All should be different (round-robin)
servers = {proxy_a.server, proxy_b.server, proxy_c.server}
assert len(servers) == 3
@pytest.mark.asyncio
async def test_sticky_session_with_regular_rotation(self):
"""Verify sticky sessions don't interfere with regular rotation."""
strategy = RoundRobinProxyStrategy(self.proxies)
# Acquire a sticky session
session_proxy = await strategy.get_proxy_for_session("sticky-session")
# Regular rotation should continue independently
regular_proxy1 = await strategy.get_next_proxy()
regular_proxy2 = await strategy.get_next_proxy()
# Sticky session should still return same proxy
session_proxy_again = await strategy.get_proxy_for_session("sticky-session")
assert session_proxy.server == session_proxy_again.server
# Regular proxies should rotate
assert regular_proxy1.server != regular_proxy2.server
# ==================== SESSION RELEASE TESTS ====================
@pytest.mark.asyncio
async def test_session_release(self):
"""Verify session can be released and reacquired."""
strategy = RoundRobinProxyStrategy(self.proxies)
# Acquire session
proxy1 = await strategy.get_proxy_for_session("session-1")
assert strategy.get_session_proxy("session-1") is not None
# Release session
await strategy.release_session("session-1")
assert strategy.get_session_proxy("session-1") is None
# Reacquire - should get a new proxy (next in round-robin)
proxy2 = await strategy.get_proxy_for_session("session-1")
assert proxy2 is not None
# After release, next call gets the next proxy in rotation
# (not necessarily the same as before)
@pytest.mark.asyncio
async def test_release_nonexistent_session(self):
"""Verify releasing non-existent session doesn't raise error."""
strategy = RoundRobinProxyStrategy(self.proxies)
# Should not raise
await strategy.release_session("nonexistent-session")
@pytest.mark.asyncio
async def test_release_twice(self):
"""Verify releasing session twice doesn't raise error."""
strategy = RoundRobinProxyStrategy(self.proxies)
await strategy.get_proxy_for_session("session-1")
await strategy.release_session("session-1")
await strategy.release_session("session-1") # Should not raise
# ==================== GET SESSION PROXY TESTS ====================
@pytest.mark.asyncio
async def test_get_session_proxy_existing(self):
"""Verify get_session_proxy returns proxy for existing session."""
strategy = RoundRobinProxyStrategy(self.proxies)
acquired = await strategy.get_proxy_for_session("session-1")
retrieved = strategy.get_session_proxy("session-1")
assert retrieved is not None
assert acquired.server == retrieved.server
def test_get_session_proxy_nonexistent(self):
"""Verify get_session_proxy returns None for non-existent session."""
strategy = RoundRobinProxyStrategy(self.proxies)
result = strategy.get_session_proxy("nonexistent-session")
assert result is None
# ==================== TTL EXPIRATION TESTS ====================
@pytest.mark.asyncio
async def test_session_ttl_not_expired(self):
"""Verify session returns same proxy when TTL not expired."""
strategy = RoundRobinProxyStrategy(self.proxies)
# Acquire with 10 second TTL
proxy1 = await strategy.get_proxy_for_session("session-1", ttl=10)
# Immediately request again - should return same proxy
proxy2 = await strategy.get_proxy_for_session("session-1", ttl=10)
assert proxy1.server == proxy2.server
@pytest.mark.asyncio
async def test_session_ttl_expired(self):
"""Verify new proxy acquired after TTL expires."""
strategy = RoundRobinProxyStrategy(self.proxies)
# Acquire with 1 second TTL
proxy1 = await strategy.get_proxy_for_session("session-1", ttl=1)
# Wait for TTL to expire
await asyncio.sleep(1.1)
# Request again - should get new proxy due to expiration
proxy2 = await strategy.get_proxy_for_session("session-1", ttl=1)
# May or may not be same server depending on round-robin state,
# but session should have been recreated
assert proxy2 is not None
@pytest.mark.asyncio
async def test_get_session_proxy_ttl_expired(self):
"""Verify get_session_proxy returns None after TTL expires."""
strategy = RoundRobinProxyStrategy(self.proxies)
await strategy.get_proxy_for_session("session-1", ttl=1)
# Wait for expiration
await asyncio.sleep(1.1)
# Should return None for expired session
result = strategy.get_session_proxy("session-1")
assert result is None
@pytest.mark.asyncio
async def test_cleanup_expired_sessions(self):
"""Verify cleanup_expired_sessions removes expired sessions."""
strategy = RoundRobinProxyStrategy(self.proxies)
# Create sessions with short TTL
await strategy.get_proxy_for_session("short-ttl-1", ttl=1)
await strategy.get_proxy_for_session("short-ttl-2", ttl=1)
# Create session without TTL (should not be cleaned up)
await strategy.get_proxy_for_session("no-ttl")
# Wait for TTL to expire
await asyncio.sleep(1.1)
# Cleanup
removed = await strategy.cleanup_expired_sessions()
assert removed == 2
assert strategy.get_session_proxy("short-ttl-1") is None
assert strategy.get_session_proxy("short-ttl-2") is None
assert strategy.get_session_proxy("no-ttl") is not None
# ==================== GET ACTIVE SESSIONS TESTS ====================
@pytest.mark.asyncio
async def test_get_active_sessions(self):
"""Verify get_active_sessions returns all active sessions."""
strategy = RoundRobinProxyStrategy(self.proxies)
await strategy.get_proxy_for_session("session-a")
await strategy.get_proxy_for_session("session-b")
await strategy.get_proxy_for_session("session-c")
active = strategy.get_active_sessions()
assert len(active) == 3
assert "session-a" in active
assert "session-b" in active
assert "session-c" in active
@pytest.mark.asyncio
async def test_get_active_sessions_excludes_expired(self):
"""Verify get_active_sessions excludes expired sessions."""
strategy = RoundRobinProxyStrategy(self.proxies)
await strategy.get_proxy_for_session("short-ttl", ttl=1)
await strategy.get_proxy_for_session("no-ttl")
# Before expiration
active = strategy.get_active_sessions()
assert len(active) == 2
# Wait for TTL to expire
await asyncio.sleep(1.1)
# After expiration
active = strategy.get_active_sessions()
assert len(active) == 1
assert "no-ttl" in active
assert "short-ttl" not in active
# ==================== THREAD SAFETY TESTS ====================
@pytest.mark.asyncio
async def test_concurrent_session_access(self):
"""Verify thread-safe access to sessions."""
strategy = RoundRobinProxyStrategy(self.proxies)
async def acquire_session(session_id: str):
proxy = await strategy.get_proxy_for_session(session_id)
await asyncio.sleep(0.01) # Simulate work
return proxy.server
# Acquire same session from multiple coroutines
results = await asyncio.gather(*[
acquire_session("shared-session") for _ in range(10)
])
# All should get same proxy
assert len(set(results)) == 1
@pytest.mark.asyncio
async def test_concurrent_different_sessions(self):
"""Verify concurrent acquisition of different sessions works correctly."""
strategy = RoundRobinProxyStrategy(self.proxies)
async def acquire_session(session_id: str):
proxy = await strategy.get_proxy_for_session(session_id)
await asyncio.sleep(0.01)
return (session_id, proxy.server)
# Acquire different sessions concurrently
results = await asyncio.gather(*[
acquire_session(f"session-{i}") for i in range(5)
])
# Each session should have a consistent proxy
session_proxies = dict(results)
assert len(session_proxies) == 5
# Verify each session still returns same proxy
for session_id, expected_server in session_proxies.items():
actual = await strategy.get_proxy_for_session(session_id)
assert actual.server == expected_server
@pytest.mark.asyncio
async def test_concurrent_session_acquire_and_release(self):
"""Verify concurrent acquire and release operations work correctly."""
strategy = RoundRobinProxyStrategy(self.proxies)
async def acquire_and_release(session_id: str):
proxy = await strategy.get_proxy_for_session(session_id)
await asyncio.sleep(0.01)
await strategy.release_session(session_id)
return proxy.server
# Run multiple acquire/release cycles concurrently
await asyncio.gather(*[
acquire_and_release(f"session-{i}") for i in range(10)
])
# All sessions should be released
active = strategy.get_active_sessions()
assert len(active) == 0
# ==================== EMPTY PROXY POOL TESTS ====================
@pytest.mark.asyncio
async def test_empty_proxy_pool_session(self):
"""Verify behavior with empty proxy pool."""
strategy = RoundRobinProxyStrategy() # No proxies
result = await strategy.get_proxy_for_session("session-1")
assert result is None
@pytest.mark.asyncio
async def test_add_proxies_after_session(self):
"""Verify adding proxies after session creation works."""
strategy = RoundRobinProxyStrategy()
# No proxies initially
result1 = await strategy.get_proxy_for_session("session-1")
assert result1 is None
# Add proxies
strategy.add_proxies(self.proxies)
# Now should work
result2 = await strategy.get_proxy_for_session("session-2")
assert result2 is not None


class TestCrawlerRunConfigSession:
"""Test CrawlerRunConfig with sticky session parameters."""
def test_config_has_session_fields(self):
"""Verify CrawlerRunConfig has sticky session fields."""
config = CrawlerRunConfig(
proxy_session_id="test-session",
proxy_session_ttl=300,
proxy_session_auto_release=True
)
assert config.proxy_session_id == "test-session"
assert config.proxy_session_ttl == 300
assert config.proxy_session_auto_release is True
def test_config_session_defaults(self):
"""Verify default values for session fields."""
config = CrawlerRunConfig()
assert config.proxy_session_id is None
assert config.proxy_session_ttl is None
assert config.proxy_session_auto_release is False


class TestCrawlerStickySessionIntegration:
"""Integration tests for AsyncWebCrawler with sticky sessions."""
def setup_method(self):
"""Setup for each test method."""
self.proxies = [
ProxyConfig(server=f"http://proxy{i}.test:8080")
for i in range(3)
]
self.test_url = "https://httpbin.org/ip"
@pytest.mark.asyncio
async def test_crawler_sticky_session_without_proxy(self):
"""Test that crawler works when proxy_session_id set but no strategy."""
browser_config = BrowserConfig(headless=True)
config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
proxy_session_id="test-session",
page_timeout=15000
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url=self.test_url, config=config)
# Should work without errors (no proxy strategy means no proxy)
assert result is not None
@pytest.mark.asyncio
async def test_crawler_sticky_session_basic(self):
"""Test basic sticky session with crawler."""
strategy = RoundRobinProxyStrategy(self.proxies)
config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
proxy_rotation_strategy=strategy,
proxy_session_id="integration-test",
page_timeout=10000
)
browser_config = BrowserConfig(headless=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
# First request
try:
result1 = await crawler.arun(url=self.test_url, config=config)
except Exception:
pass # Proxy connection may fail, but session should be tracked
# Verify session was created
session_proxy = strategy.get_session_proxy("integration-test")
assert session_proxy is not None
# Cleanup
await strategy.release_session("integration-test")
@pytest.mark.asyncio
async def test_crawler_rotating_vs_sticky(self):
"""Compare rotating behavior vs sticky session behavior."""
strategy = RoundRobinProxyStrategy(self.proxies)
# Config WITHOUT sticky session - should rotate
rotating_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
proxy_rotation_strategy=strategy,
page_timeout=5000
)
# Config WITH sticky session - should use same proxy
sticky_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
proxy_rotation_strategy=strategy,
proxy_session_id="sticky-test",
page_timeout=5000
)
browser_config = BrowserConfig(headless=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
# Track proxy configs used
rotating_proxies = []
sticky_proxies = []
# Try rotating requests (may fail due to test proxies, but config should be set)
for _ in range(3):
try:
await crawler.arun(url=self.test_url, config=rotating_config)
except Exception:
pass
rotating_proxies.append(rotating_config.proxy_config.server if rotating_config.proxy_config else None)
# Try sticky requests
for _ in range(3):
try:
await crawler.arun(url=self.test_url, config=sticky_config)
except Exception:
pass
sticky_proxies.append(sticky_config.proxy_config.server if sticky_config.proxy_config else None)
# Rotating should have different proxies (or cycle through them)
# Sticky should have same proxy for all requests
if all(sticky_proxies):
assert len(set(sticky_proxies)) == 1, "Sticky session should use same proxy"
await strategy.release_session("sticky-test")


class TestStickySessionRealWorld:
    """Real-world scenario tests for sticky sessions.

    Note: These tests require actual proxy servers to verify IP consistency.
    They are marked to be skipped if no proxy is configured.
    """
@pytest.mark.asyncio
@pytest.mark.skipif(
not os.environ.get('TEST_PROXY_1'),
reason="Requires TEST_PROXY_1 environment variable"
)
async def test_verify_ip_consistency(self):
"""Verify that sticky session actually uses same IP.
This test requires real proxies set in environment variables:
TEST_PROXY_1=ip:port:user:pass
TEST_PROXY_2=ip:port:user:pass
"""
import re
# Load proxies from environment
proxy_strs = [
os.environ.get('TEST_PROXY_1', ''),
os.environ.get('TEST_PROXY_2', '')
]
proxies = [ProxyConfig.from_string(p) for p in proxy_strs if p]
if len(proxies) < 2:
pytest.skip("Need at least 2 proxies for this test")
strategy = RoundRobinProxyStrategy(proxies)
# Config WITH sticky session
config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
proxy_rotation_strategy=strategy,
proxy_session_id="ip-verify-session",
page_timeout=30000
)
browser_config = BrowserConfig(headless=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
ips = []
for i in range(3):
result = await crawler.arun(
url="https://httpbin.org/ip",
config=config
)
if result and result.success and result.html:
# Extract IP from response
ip_match = re.search(r'"origin":\s*"([^"]+)"', result.html)
if ip_match:
ips.append(ip_match.group(1))
await strategy.release_session("ip-verify-session")
# All IPs should be same for sticky session
if len(ips) >= 2:
assert len(set(ips)) == 1, f"Expected same IP, got: {ips}"


# ==================== STANDALONE TEST FUNCTIONS ====================
@pytest.mark.asyncio
async def test_sticky_session_simple():
"""Simple test for sticky session functionality."""
proxies = [
ProxyConfig(server=f"http://proxy{i}.test:8080")
for i in range(3)
]
strategy = RoundRobinProxyStrategy(proxies)
# Same session should return same proxy
p1 = await strategy.get_proxy_for_session("test")
p2 = await strategy.get_proxy_for_session("test")
p3 = await strategy.get_proxy_for_session("test")
assert p1.server == p2.server == p3.server
print(f"Sticky session works! All requests use: {p1.server}")
# Cleanup
await strategy.release_session("test")


if __name__ == "__main__":
print("Running Sticky Session tests...")
print("=" * 50)
asyncio.run(test_sticky_session_simple())
print("\n" + "=" * 50)
print("To run the full pytest suite, use: pytest " + __file__)
print("=" * 50)