Compare commits
4 Commits
docs-proxy
...
copilot/mo
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
c1c5dfc49b | ||
|
|
2507720cc7 | ||
|
|
7037021496 | ||
|
|
7c751837ef |
214
CHANGES_CDP_CONCURRENCY.md
Normal file
214
CHANGES_CDP_CONCURRENCY.md
Normal file
@@ -0,0 +1,214 @@
|
|||||||
|
# CDP Browser Concurrency Fixes and Improvements
|
||||||
|
|
||||||
|
## Overview
|
||||||
|
|
||||||
|
This document describes the changes made to fix concurrency issues with CDP (Chrome DevTools Protocol) browsers when using `arun_many` and improve overall browser management.
|
||||||
|
|
||||||
|
## Problems Addressed
|
||||||
|
|
||||||
|
1. **Race Conditions in Page Creation**: When using managed CDP browsers with concurrent `arun_many` calls, the code attempted to reuse existing pages from `context.pages`, leading to race conditions and "Target page/context closed" errors.
|
||||||
|
|
||||||
|
2. **Proxy Configuration Issues**: Proxy credentials were incorrectly embedded in the `--proxy-server` URL, which doesn't work properly with CDP browsers.
|
||||||
|
|
||||||
|
3. **Insufficient Startup Checks**: Browser process startup checks were minimal and didn't catch early failures effectively.
|
||||||
|
|
||||||
|
4. **Unclear Logging**: Logging messages lacked structure and context, making debugging difficult.
|
||||||
|
|
||||||
|
5. **Duplicate Browser Arguments**: Browser launch arguments could contain duplicates despite deduplication attempts.
|
||||||
|
|
||||||
|
## Solutions Implemented
|
||||||
|
|
||||||
|
### 1. Always Create New Pages in Managed Browser Mode
|
||||||
|
|
||||||
|
**File**: `crawl4ai/browser_manager.py` (lines 1106-1113)
|
||||||
|
|
||||||
|
**Change**: Modified `get_page()` method to always create new pages instead of attempting to reuse existing ones for managed browsers without `storage_state`.
|
||||||
|
|
||||||
|
**Before**:
|
||||||
|
```python
|
||||||
|
context = self.default_context
|
||||||
|
pages = context.pages
|
||||||
|
page = next((p for p in pages if p.url == crawlerRunConfig.url), None)
|
||||||
|
if not page:
|
||||||
|
if pages:
|
||||||
|
page = pages[0]
|
||||||
|
else:
|
||||||
|
# Create new page only if none exist
|
||||||
|
async with self._page_lock:
|
||||||
|
page = await context.new_page()
|
||||||
|
```
|
||||||
|
|
||||||
|
**After**:
|
||||||
|
```python
|
||||||
|
context = self.default_context
|
||||||
|
# Always create new pages instead of reusing existing ones
|
||||||
|
# This prevents race conditions in concurrent scenarios (arun_many with CDP)
|
||||||
|
# Serialize page creation to avoid 'Target page/context closed' errors
|
||||||
|
async with self._page_lock:
|
||||||
|
page = await context.new_page()
|
||||||
|
await self._apply_stealth_to_page(page)
|
||||||
|
```
|
||||||
|
|
||||||
|
**Benefits**:
|
||||||
|
- Eliminates race conditions when multiple tasks call `arun_many` concurrently
|
||||||
|
- Each request gets a fresh, independent page
|
||||||
|
- Page lock serializes creation to prevent TOCTOU (Time-of-check to time-of-use) issues
|
||||||
|
|
||||||
|
### 2. Fixed Proxy Flag Formatting
|
||||||
|
|
||||||
|
**File**: `crawl4ai/browser_manager.py` (lines 103-109)
|
||||||
|
|
||||||
|
**Change**: Removed credentials from proxy URL as they should be handled via separate authentication mechanisms in CDP.
|
||||||
|
|
||||||
|
**Before**:
|
||||||
|
```python
|
||||||
|
elif config.proxy_config:
|
||||||
|
creds = ""
|
||||||
|
if config.proxy_config.username and config.proxy_config.password:
|
||||||
|
creds = f"{config.proxy_config.username}:{config.proxy_config.password}@"
|
||||||
|
flags.append(f"--proxy-server={creds}{config.proxy_config.server}")
|
||||||
|
```
|
||||||
|
|
||||||
|
**After**:
|
||||||
|
```python
|
||||||
|
elif config.proxy_config:
|
||||||
|
# Note: For CDP/managed browsers, proxy credentials should be handled
|
||||||
|
# via authentication, not in the URL. Only pass the server address.
|
||||||
|
flags.append(f"--proxy-server={config.proxy_config.server}")
|
||||||
|
```
|
||||||
|
|
||||||
|
### 3. Enhanced Startup Checks
|
||||||
|
|
||||||
|
**File**: `crawl4ai/browser_manager.py` (lines 298-336)
|
||||||
|
|
||||||
|
**Changes**:
|
||||||
|
- Multiple check intervals (0.1s, 0.2s, 0.3s) to catch early failures
|
||||||
|
- Capture and log stdout/stderr on failure (limited to 200 chars)
|
||||||
|
- Raise `RuntimeError` with detailed diagnostics on startup failure
|
||||||
|
- Log process PID on successful startup in verbose mode
|
||||||
|
|
||||||
|
**Benefits**:
|
||||||
|
- Catches browser crashes during startup
|
||||||
|
- Provides detailed diagnostic information for debugging
|
||||||
|
- Fails fast with clear error messages
|
||||||
|
|
||||||
|
### 4. Improved Logging
|
||||||
|
|
||||||
|
**File**: `crawl4ai/browser_manager.py` (lines 218-291)
|
||||||
|
|
||||||
|
**Changes**:
|
||||||
|
- Structured logging with proper parameter substitution
|
||||||
|
- Log browser type, port, and headless status at launch
|
||||||
|
- Format and log full command with proper shell escaping
|
||||||
|
- Better error messages with context
|
||||||
|
- Consistent use of logger with null checks
|
||||||
|
|
||||||
|
**Example**:
|
||||||
|
```python
|
||||||
|
if self.logger and self.browser_config.verbose:
|
||||||
|
self.logger.debug(
|
||||||
|
"Launching browser: {browser_type} | Port: {port} | Headless: {headless}",
|
||||||
|
tag="BROWSER",
|
||||||
|
params={
|
||||||
|
"browser_type": self.browser_type,
|
||||||
|
"port": self.debugging_port,
|
||||||
|
"headless": self.headless
|
||||||
|
}
|
||||||
|
)
|
||||||
|
```
|
||||||
|
|
||||||
|
### 5. Deduplicate Browser Launch Arguments
|
||||||
|
|
||||||
|
**File**: `crawl4ai/browser_manager.py` (lines 424-425)
|
||||||
|
|
||||||
|
**Change**: Added explicit deduplication after merging all flags.
|
||||||
|
|
||||||
|
```python
|
||||||
|
# merge common launch flags
|
||||||
|
flags.extend(self.build_browser_flags(self.browser_config))
|
||||||
|
# Deduplicate flags - use dict.fromkeys to preserve order while removing duplicates
|
||||||
|
flags = list(dict.fromkeys(flags))
|
||||||
|
```
|
||||||
|
|
||||||
|
### 6. Import Refactoring
|
||||||
|
|
||||||
|
**Files**: `crawl4ai/browser_manager.py`, `crawl4ai/browser_profiler.py`, `tests/browser/test_cdp_concurrency.py`
|
||||||
|
|
||||||
|
**Changes**: Organized all imports according to PEP 8:
|
||||||
|
1. Standard library imports (alphabetized)
|
||||||
|
2. Third-party imports (alphabetized)
|
||||||
|
3. Local imports (alphabetized)
|
||||||
|
|
||||||
|
**Benefits**:
|
||||||
|
- Improved code readability
|
||||||
|
- Easier to spot missing or unused imports
|
||||||
|
- Consistent style across the codebase
|
||||||
|
|
||||||
|
## Testing
|
||||||
|
|
||||||
|
### New Test Suite
|
||||||
|
|
||||||
|
**File**: `tests/browser/test_cdp_concurrency.py`
|
||||||
|
|
||||||
|
Comprehensive test suite with 8 tests covering:
|
||||||
|
|
||||||
|
1. **Basic Concurrent arun_many**: Validates multiple URLs can be crawled concurrently
|
||||||
|
2. **Sequential arun_many Calls**: Ensures multiple sequential batches work correctly
|
||||||
|
3. **Stress Test**: Multiple concurrent `arun_many` calls to test page lock effectiveness
|
||||||
|
4. **Page Isolation**: Verifies pages are truly independent
|
||||||
|
5. **Different Configurations**: Tests with varying viewport sizes and configs
|
||||||
|
6. **Error Handling**: Ensures errors in one request don't affect others
|
||||||
|
7. **Large Batches**: Scalability test with 10+ URLs
|
||||||
|
8. **Smoke Test Script**: Standalone script for quick validation
|
||||||
|
|
||||||
|
### Running Tests
|
||||||
|
|
||||||
|
**With pytest** (if available):
|
||||||
|
```bash
|
||||||
|
cd /path/to/crawl4ai
|
||||||
|
pytest tests/browser/test_cdp_concurrency.py -v
|
||||||
|
```
|
||||||
|
|
||||||
|
**Standalone smoke test**:
|
||||||
|
```bash
|
||||||
|
cd /path/to/crawl4ai
|
||||||
|
python3 tests/browser/smoke_test_cdp.py
|
||||||
|
```
|
||||||
|
|
||||||
|
## Migration Guide
|
||||||
|
|
||||||
|
### For Users
|
||||||
|
|
||||||
|
No breaking changes. Existing code will continue to work, but with better reliability in concurrent scenarios.
|
||||||
|
|
||||||
|
### For Contributors
|
||||||
|
|
||||||
|
When working with managed browsers:
|
||||||
|
1. Always use the page lock when creating pages in shared contexts
|
||||||
|
2. Prefer creating new pages over reusing existing ones for concurrent operations
|
||||||
|
3. Use structured logging with parameter substitution
|
||||||
|
4. Follow PEP 8 import organization
|
||||||
|
|
||||||
|
## Performance Impact
|
||||||
|
|
||||||
|
- **Positive**: Eliminates race conditions and crashes in concurrent scenarios
|
||||||
|
- **Neutral**: Page creation overhead is negligible compared to page navigation
|
||||||
|
- **Consideration**: More pages may be created, but they are properly closed after use
|
||||||
|
|
||||||
|
## Backward Compatibility
|
||||||
|
|
||||||
|
All changes are backward compatible. Session-based page reuse still works as before when `session_id` is provided.
|
||||||
|
|
||||||
|
## Related Issues
|
||||||
|
|
||||||
|
- Fixes race conditions in concurrent `arun_many` calls with CDP browsers
|
||||||
|
- Addresses "Target page/context closed" errors
|
||||||
|
- Improves browser startup reliability
|
||||||
|
|
||||||
|
## Future Improvements
|
||||||
|
|
||||||
|
Consider:
|
||||||
|
1. Configurable page pooling with proper lifecycle management
|
||||||
|
2. More granular locks for different contexts
|
||||||
|
3. Metrics for page creation/reuse patterns
|
||||||
|
4. Connection pooling for CDP connections
|
||||||
@@ -617,17 +617,7 @@ class AsyncWebCrawler:
|
|||||||
else config.chunking_strategy
|
else config.chunking_strategy
|
||||||
)
|
)
|
||||||
sections = chunking.chunk(content)
|
sections = chunking.chunk(content)
|
||||||
# extracted_content = config.extraction_strategy.run(url, sections)
|
extracted_content = config.extraction_strategy.run(url, sections)
|
||||||
|
|
||||||
# Use async version if available for better parallelism
|
|
||||||
if hasattr(config.extraction_strategy, 'arun'):
|
|
||||||
extracted_content = await config.extraction_strategy.arun(url, sections)
|
|
||||||
else:
|
|
||||||
# Fallback to sync version run in thread pool to avoid blocking
|
|
||||||
extracted_content = await asyncio.to_thread(
|
|
||||||
config.extraction_strategy.run, url, sections
|
|
||||||
)
|
|
||||||
|
|
||||||
extracted_content = json.dumps(
|
extracted_content = json.dumps(
|
||||||
extracted_content, indent=4, default=str, ensure_ascii=False
|
extracted_content, indent=4, default=str, ensure_ascii=False
|
||||||
)
|
)
|
||||||
|
|||||||
@@ -1,21 +1,26 @@
|
|||||||
|
# Standard library imports
|
||||||
import asyncio
|
import asyncio
|
||||||
import time
|
import hashlib
|
||||||
from typing import List, Optional
|
|
||||||
import os
|
import os
|
||||||
import sys
|
import shlex
|
||||||
import shutil
|
import shutil
|
||||||
import tempfile
|
|
||||||
import psutil
|
|
||||||
import signal
|
import signal
|
||||||
import subprocess
|
import subprocess
|
||||||
import shlex
|
import sys
|
||||||
from playwright.async_api import BrowserContext
|
import tempfile
|
||||||
import hashlib
|
import time
|
||||||
from .js_snippet import load_js_script
|
|
||||||
from .config import DOWNLOAD_PAGE_TIMEOUT
|
|
||||||
from .async_configs import BrowserConfig, CrawlerRunConfig
|
|
||||||
from .utils import get_chromium_path
|
|
||||||
import warnings
|
import warnings
|
||||||
|
from typing import List, Optional
|
||||||
|
|
||||||
|
# Third-party imports
|
||||||
|
import psutil
|
||||||
|
from playwright.async_api import BrowserContext
|
||||||
|
|
||||||
|
# Local imports
|
||||||
|
from .async_configs import BrowserConfig, CrawlerRunConfig
|
||||||
|
from .config import DOWNLOAD_PAGE_TIMEOUT
|
||||||
|
from .js_snippet import load_js_script
|
||||||
|
from .utils import get_chromium_path
|
||||||
|
|
||||||
|
|
||||||
BROWSER_DISABLE_OPTIONS = [
|
BROWSER_DISABLE_OPTIONS = [
|
||||||
@@ -104,10 +109,9 @@ class ManagedBrowser:
|
|||||||
if config.proxy:
|
if config.proxy:
|
||||||
flags.append(f"--proxy-server={config.proxy}")
|
flags.append(f"--proxy-server={config.proxy}")
|
||||||
elif config.proxy_config:
|
elif config.proxy_config:
|
||||||
creds = ""
|
# Note: For CDP/managed browsers, proxy credentials should be handled
|
||||||
if config.proxy_config.username and config.proxy_config.password:
|
# via authentication, not in the URL. Only pass the server address.
|
||||||
creds = f"{config.proxy_config.username}:{config.proxy_config.password}@"
|
flags.append(f"--proxy-server={config.proxy_config.server}")
|
||||||
flags.append(f"--proxy-server={creds}{config.proxy_config.server}")
|
|
||||||
# dedupe
|
# dedupe
|
||||||
return list(dict.fromkeys(flags))
|
return list(dict.fromkeys(flags))
|
||||||
|
|
||||||
@@ -219,11 +223,27 @@ class ManagedBrowser:
|
|||||||
os.remove(fp)
|
os.remove(fp)
|
||||||
except Exception as _e:
|
except Exception as _e:
|
||||||
# non-fatal — we'll try to start anyway, but log what happened
|
# non-fatal — we'll try to start anyway, but log what happened
|
||||||
self.logger.warning(f"pre-launch cleanup failed: {_e}", tag="BROWSER")
|
if self.logger:
|
||||||
|
self.logger.warning(
|
||||||
|
"Pre-launch cleanup failed: {error} | Will attempt to start browser anyway",
|
||||||
|
tag="BROWSER",
|
||||||
|
params={"error": str(_e)}
|
||||||
|
)
|
||||||
|
|
||||||
# Start browser process
|
# Start browser process
|
||||||
try:
|
try:
|
||||||
|
# Log browser launch intent
|
||||||
|
if self.logger and self.browser_config.verbose:
|
||||||
|
self.logger.debug(
|
||||||
|
"Launching browser: {browser_type} | Port: {port} | Headless: {headless}",
|
||||||
|
tag="BROWSER",
|
||||||
|
params={
|
||||||
|
"browser_type": self.browser_type,
|
||||||
|
"port": self.debugging_port,
|
||||||
|
"headless": self.headless
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
# Use DETACHED_PROCESS flag on Windows to fully detach the process
|
# Use DETACHED_PROCESS flag on Windows to fully detach the process
|
||||||
# On Unix, we'll use preexec_fn=os.setpgrp to start the process in a new process group
|
# On Unix, we'll use preexec_fn=os.setpgrp to start the process in a new process group
|
||||||
if sys.platform == "win32":
|
if sys.platform == "win32":
|
||||||
@@ -241,19 +261,36 @@ class ManagedBrowser:
|
|||||||
preexec_fn=os.setpgrp # Start in a new process group
|
preexec_fn=os.setpgrp # Start in a new process group
|
||||||
)
|
)
|
||||||
|
|
||||||
# If verbose is True print args used to run the process
|
# Log full command if verbose logging is enabled
|
||||||
if self.logger and self.browser_config.verbose:
|
if self.logger and self.browser_config.verbose:
|
||||||
|
# Format args for better readability - escape and join
|
||||||
|
formatted_args = ' '.join(shlex.quote(str(arg)) for arg in args)
|
||||||
self.logger.debug(
|
self.logger.debug(
|
||||||
f"Starting browser with args: {' '.join(args)}",
|
"Browser launch command: {command}",
|
||||||
tag="BROWSER"
|
tag="BROWSER",
|
||||||
)
|
params={"command": formatted_args}
|
||||||
|
)
|
||||||
|
|
||||||
# We'll monitor for a short time to make sure it starts properly, but won't keep monitoring
|
# Perform startup health checks
|
||||||
await asyncio.sleep(0.5) # Give browser time to start
|
await asyncio.sleep(0.5) # Initial delay for process startup
|
||||||
await self._initial_startup_check()
|
await self._initial_startup_check()
|
||||||
await asyncio.sleep(2) # Give browser time to start
|
await asyncio.sleep(2) # Additional time for browser initialization
|
||||||
return f"http://{self.host}:{self.debugging_port}"
|
|
||||||
|
cdp_url = f"http://{self.host}:{self.debugging_port}"
|
||||||
|
if self.logger:
|
||||||
|
self.logger.info(
|
||||||
|
"Browser started successfully | CDP URL: {cdp_url}",
|
||||||
|
tag="BROWSER",
|
||||||
|
params={"cdp_url": cdp_url}
|
||||||
|
)
|
||||||
|
return cdp_url
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
|
if self.logger:
|
||||||
|
self.logger.error(
|
||||||
|
"Failed to start browser: {error}",
|
||||||
|
tag="BROWSER",
|
||||||
|
params={"error": str(e)}
|
||||||
|
)
|
||||||
await self.cleanup()
|
await self.cleanup()
|
||||||
raise Exception(f"Failed to start browser: {e}")
|
raise Exception(f"Failed to start browser: {e}")
|
||||||
|
|
||||||
@@ -266,23 +303,41 @@ class ManagedBrowser:
|
|||||||
return
|
return
|
||||||
|
|
||||||
# Check that process started without immediate termination
|
# Check that process started without immediate termination
|
||||||
await asyncio.sleep(0.5)
|
# Perform multiple checks with increasing delays to catch early failures
|
||||||
if self.browser_process.poll() is not None:
|
check_intervals = [0.1, 0.2, 0.3] # Total 0.6s
|
||||||
# Process already terminated
|
|
||||||
stdout, stderr = b"", b""
|
for delay in check_intervals:
|
||||||
try:
|
await asyncio.sleep(delay)
|
||||||
stdout, stderr = self.browser_process.communicate(timeout=0.5)
|
if self.browser_process.poll() is not None:
|
||||||
except subprocess.TimeoutExpired:
|
# Process already terminated - capture output for debugging
|
||||||
pass
|
stdout, stderr = b"", b""
|
||||||
|
try:
|
||||||
|
stdout, stderr = self.browser_process.communicate(timeout=0.5)
|
||||||
|
except subprocess.TimeoutExpired:
|
||||||
|
pass
|
||||||
|
|
||||||
|
error_msg = "Browser process terminated during startup"
|
||||||
|
if stderr:
|
||||||
|
error_msg += f" | STDERR: {stderr.decode()[:200]}" # Limit output length
|
||||||
|
if stdout:
|
||||||
|
error_msg += f" | STDOUT: {stdout.decode()[:200]}"
|
||||||
|
|
||||||
|
self.logger.error(
|
||||||
|
message="{error_msg} | Exit code: {code}",
|
||||||
|
tag="BROWSER",
|
||||||
|
params={
|
||||||
|
"error_msg": error_msg,
|
||||||
|
"code": self.browser_process.returncode,
|
||||||
|
},
|
||||||
|
)
|
||||||
|
raise RuntimeError(f"Browser failed to start: {error_msg}")
|
||||||
|
|
||||||
self.logger.error(
|
# Process is still running after checks - log success
|
||||||
message="Browser process terminated during startup | Code: {code} | STDOUT: {stdout} | STDERR: {stderr}",
|
if self.logger and self.browser_config.verbose:
|
||||||
tag="ERROR",
|
self.logger.debug(
|
||||||
params={
|
"Browser process startup check passed | PID: {pid}",
|
||||||
"code": self.browser_process.returncode,
|
tag="BROWSER",
|
||||||
"stdout": stdout.decode() if stdout else "",
|
params={"pid": self.browser_process.pid}
|
||||||
"stderr": stderr.decode() if stderr else "",
|
|
||||||
},
|
|
||||||
)
|
)
|
||||||
|
|
||||||
async def _monitor_browser_process(self):
|
async def _monitor_browser_process(self):
|
||||||
@@ -369,11 +424,10 @@ class ManagedBrowser:
|
|||||||
]
|
]
|
||||||
if self.headless:
|
if self.headless:
|
||||||
flags.append("--headless=new")
|
flags.append("--headless=new")
|
||||||
# Add viewport flag if specified in config
|
|
||||||
if self.browser_config.viewport_height and self.browser_config.viewport_width:
|
|
||||||
flags.append(f"--window-size={self.browser_config.viewport_width},{self.browser_config.viewport_height}")
|
|
||||||
# merge common launch flags
|
# merge common launch flags
|
||||||
flags.extend(self.build_browser_flags(self.browser_config))
|
flags.extend(self.build_browser_flags(self.browser_config))
|
||||||
|
# Deduplicate flags - use dict.fromkeys to preserve order while removing duplicates
|
||||||
|
flags = list(dict.fromkeys(flags))
|
||||||
elif self.browser_type == "firefox":
|
elif self.browser_type == "firefox":
|
||||||
flags = [
|
flags = [
|
||||||
"--remote-debugging-port",
|
"--remote-debugging-port",
|
||||||
@@ -1051,21 +1105,12 @@ class BrowserManager:
|
|||||||
await self._apply_stealth_to_page(page)
|
await self._apply_stealth_to_page(page)
|
||||||
else:
|
else:
|
||||||
context = self.default_context
|
context = self.default_context
|
||||||
pages = context.pages
|
# Always create new pages instead of reusing existing ones
|
||||||
page = next((p for p in pages if p.url == crawlerRunConfig.url), None)
|
# This prevents race conditions in concurrent scenarios (arun_many with CDP)
|
||||||
if not page:
|
# Serialize page creation to avoid 'Target page/context closed' errors
|
||||||
if pages:
|
async with self._page_lock:
|
||||||
page = pages[0]
|
page = await context.new_page()
|
||||||
else:
|
await self._apply_stealth_to_page(page)
|
||||||
# Double-check under lock to avoid TOCTOU and ensure only
|
|
||||||
# one task calls new_page when pages=[] concurrently
|
|
||||||
async with self._page_lock:
|
|
||||||
pages = context.pages
|
|
||||||
if pages:
|
|
||||||
page = pages[0]
|
|
||||||
else:
|
|
||||||
page = await context.new_page()
|
|
||||||
await self._apply_stealth_to_page(page)
|
|
||||||
else:
|
else:
|
||||||
# Otherwise, check if we have an existing context for this config
|
# Otherwise, check if we have an existing context for this config
|
||||||
config_signature = self._make_config_signature(crawlerRunConfig)
|
config_signature = self._make_config_signature(crawlerRunConfig)
|
||||||
|
|||||||
@@ -5,22 +5,26 @@ This module provides a dedicated class for managing browser profiles
|
|||||||
that can be used for identity-based crawling with Crawl4AI.
|
that can be used for identity-based crawling with Crawl4AI.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
import os
|
# Standard library imports
|
||||||
import asyncio
|
import asyncio
|
||||||
import signal
|
|
||||||
import sys
|
|
||||||
import datetime
|
import datetime
|
||||||
import uuid
|
|
||||||
import shutil
|
|
||||||
import json
|
import json
|
||||||
|
import os
|
||||||
|
import shutil
|
||||||
|
import signal
|
||||||
import subprocess
|
import subprocess
|
||||||
|
import sys
|
||||||
import time
|
import time
|
||||||
from typing import List, Dict, Optional, Any
|
import uuid
|
||||||
|
from typing import Any, Dict, List, Optional
|
||||||
|
|
||||||
|
# Third-party imports
|
||||||
from rich.console import Console
|
from rich.console import Console
|
||||||
|
|
||||||
|
# Local imports
|
||||||
from .async_configs import BrowserConfig
|
from .async_configs import BrowserConfig
|
||||||
from .browser_manager import ManagedBrowser
|
|
||||||
from .async_logger import AsyncLogger, AsyncLoggerBase, LogColor
|
from .async_logger import AsyncLogger, AsyncLoggerBase, LogColor
|
||||||
|
from .browser_manager import ManagedBrowser
|
||||||
from .utils import get_home_folder
|
from .utils import get_home_folder
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
@@ -94,20 +94,6 @@ class ExtractionStrategy(ABC):
|
|||||||
extracted_content.extend(future.result())
|
extracted_content.extend(future.result())
|
||||||
return extracted_content
|
return extracted_content
|
||||||
|
|
||||||
async def arun(self, url: str, sections: List[str], *q, **kwargs) -> List[Dict[str, Any]]:
|
|
||||||
"""
|
|
||||||
Async version: Process sections of text in parallel using asyncio.
|
|
||||||
|
|
||||||
Default implementation runs the sync version in a thread pool.
|
|
||||||
Subclasses can override this for true async processing.
|
|
||||||
|
|
||||||
:param url: The URL of the webpage.
|
|
||||||
:param sections: List of sections (strings) to process.
|
|
||||||
:return: A list of processed JSON blocks.
|
|
||||||
"""
|
|
||||||
import asyncio
|
|
||||||
return await asyncio.to_thread(self.run, url, sections, *q, **kwargs)
|
|
||||||
|
|
||||||
|
|
||||||
class NoExtractionStrategy(ExtractionStrategy):
|
class NoExtractionStrategy(ExtractionStrategy):
|
||||||
"""
|
"""
|
||||||
@@ -794,177 +780,6 @@ class LLMExtractionStrategy(ExtractionStrategy):
|
|||||||
|
|
||||||
return extracted_content
|
return extracted_content
|
||||||
|
|
||||||
async def aextract(self, url: str, ix: int, html: str) -> List[Dict[str, Any]]:
|
|
||||||
"""
|
|
||||||
Async version: Extract meaningful blocks or chunks from the given HTML using an LLM.
|
|
||||||
|
|
||||||
How it works:
|
|
||||||
1. Construct a prompt with variables.
|
|
||||||
2. Make an async request to the LLM using the prompt.
|
|
||||||
3. Parse the response and extract blocks or chunks.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
url: The URL of the webpage.
|
|
||||||
ix: Index of the block.
|
|
||||||
html: The HTML content of the webpage.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
A list of extracted blocks or chunks.
|
|
||||||
"""
|
|
||||||
from .utils import aperform_completion_with_backoff
|
|
||||||
|
|
||||||
if self.verbose:
|
|
||||||
print(f"[LOG] Call LLM for {url} - block index: {ix}")
|
|
||||||
|
|
||||||
variable_values = {
|
|
||||||
"URL": url,
|
|
||||||
"HTML": escape_json_string(sanitize_html(html)),
|
|
||||||
}
|
|
||||||
|
|
||||||
prompt_with_variables = PROMPT_EXTRACT_BLOCKS
|
|
||||||
if self.instruction:
|
|
||||||
variable_values["REQUEST"] = self.instruction
|
|
||||||
prompt_with_variables = PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION
|
|
||||||
|
|
||||||
if self.extract_type == "schema" and self.schema:
|
|
||||||
variable_values["SCHEMA"] = json.dumps(self.schema, indent=2)
|
|
||||||
prompt_with_variables = PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION
|
|
||||||
|
|
||||||
if self.extract_type == "schema" and not self.schema:
|
|
||||||
prompt_with_variables = PROMPT_EXTRACT_INFERRED_SCHEMA
|
|
||||||
|
|
||||||
for variable in variable_values:
|
|
||||||
prompt_with_variables = prompt_with_variables.replace(
|
|
||||||
"{" + variable + "}", variable_values[variable]
|
|
||||||
)
|
|
||||||
|
|
||||||
try:
|
|
||||||
response = await aperform_completion_with_backoff(
|
|
||||||
self.llm_config.provider,
|
|
||||||
prompt_with_variables,
|
|
||||||
self.llm_config.api_token,
|
|
||||||
base_url=self.llm_config.base_url,
|
|
||||||
json_response=self.force_json_response,
|
|
||||||
extra_args=self.extra_args,
|
|
||||||
)
|
|
||||||
# Track usage
|
|
||||||
usage = TokenUsage(
|
|
||||||
completion_tokens=response.usage.completion_tokens,
|
|
||||||
prompt_tokens=response.usage.prompt_tokens,
|
|
||||||
total_tokens=response.usage.total_tokens,
|
|
||||||
completion_tokens_details=response.usage.completion_tokens_details.__dict__
|
|
||||||
if response.usage.completion_tokens_details
|
|
||||||
else {},
|
|
||||||
prompt_tokens_details=response.usage.prompt_tokens_details.__dict__
|
|
||||||
if response.usage.prompt_tokens_details
|
|
||||||
else {},
|
|
||||||
)
|
|
||||||
self.usages.append(usage)
|
|
||||||
|
|
||||||
# Update totals
|
|
||||||
self.total_usage.completion_tokens += usage.completion_tokens
|
|
||||||
self.total_usage.prompt_tokens += usage.prompt_tokens
|
|
||||||
self.total_usage.total_tokens += usage.total_tokens
|
|
||||||
|
|
||||||
try:
|
|
||||||
content = response.choices[0].message.content
|
|
||||||
blocks = None
|
|
||||||
|
|
||||||
if self.force_json_response:
|
|
||||||
blocks = json.loads(content)
|
|
||||||
if isinstance(blocks, dict):
|
|
||||||
if len(blocks) == 1 and isinstance(list(blocks.values())[0], list):
|
|
||||||
blocks = list(blocks.values())[0]
|
|
||||||
else:
|
|
||||||
blocks = [blocks]
|
|
||||||
elif isinstance(blocks, list):
|
|
||||||
blocks = blocks
|
|
||||||
else:
|
|
||||||
blocks = extract_xml_data(["blocks"], content)["blocks"]
|
|
||||||
blocks = json.loads(blocks)
|
|
||||||
|
|
||||||
for block in blocks:
|
|
||||||
block["error"] = False
|
|
||||||
except Exception:
|
|
||||||
parsed, unparsed = split_and_parse_json_objects(
|
|
||||||
response.choices[0].message.content
|
|
||||||
)
|
|
||||||
blocks = parsed
|
|
||||||
if unparsed:
|
|
||||||
blocks.append(
|
|
||||||
{"index": 0, "error": True, "tags": ["error"], "content": unparsed}
|
|
||||||
)
|
|
||||||
|
|
||||||
if self.verbose:
|
|
||||||
print(
|
|
||||||
"[LOG] Extracted",
|
|
||||||
len(blocks),
|
|
||||||
"blocks from URL:",
|
|
||||||
url,
|
|
||||||
"block index:",
|
|
||||||
ix,
|
|
||||||
)
|
|
||||||
return blocks
|
|
||||||
except Exception as e:
|
|
||||||
if self.verbose:
|
|
||||||
print(f"[LOG] Error in LLM extraction: {e}")
|
|
||||||
return [
|
|
||||||
{
|
|
||||||
"index": ix,
|
|
||||||
"error": True,
|
|
||||||
"tags": ["error"],
|
|
||||||
"content": str(e),
|
|
||||||
}
|
|
||||||
]
|
|
||||||
|
|
||||||
async def arun(self, url: str, sections: List[str]) -> List[Dict[str, Any]]:
|
|
||||||
"""
|
|
||||||
Async version: Process sections with true parallelism using asyncio.gather.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
url: The URL of the webpage.
|
|
||||||
sections: List of sections (strings) to process.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
A list of extracted blocks or chunks.
|
|
||||||
"""
|
|
||||||
import asyncio
|
|
||||||
|
|
||||||
merged_sections = self._merge(
|
|
||||||
sections,
|
|
||||||
self.chunk_token_threshold,
|
|
||||||
overlap=int(self.chunk_token_threshold * self.overlap_rate),
|
|
||||||
)
|
|
||||||
|
|
||||||
extracted_content = []
|
|
||||||
|
|
||||||
# Create tasks for all sections to run in parallel
|
|
||||||
tasks = [
|
|
||||||
self.aextract(url, ix, sanitize_input_encode(section))
|
|
||||||
for ix, section in enumerate(merged_sections)
|
|
||||||
]
|
|
||||||
|
|
||||||
# Execute all tasks concurrently
|
|
||||||
results = await asyncio.gather(*tasks, return_exceptions=True)
|
|
||||||
|
|
||||||
# Process results
|
|
||||||
for result in results:
|
|
||||||
if isinstance(result, Exception):
|
|
||||||
if self.verbose:
|
|
||||||
print(f"Error in async extraction: {result}")
|
|
||||||
extracted_content.append(
|
|
||||||
{
|
|
||||||
"index": 0,
|
|
||||||
"error": True,
|
|
||||||
"tags": ["error"],
|
|
||||||
"content": str(result),
|
|
||||||
}
|
|
||||||
)
|
|
||||||
else:
|
|
||||||
extracted_content.extend(result)
|
|
||||||
|
|
||||||
return extracted_content
|
|
||||||
|
|
||||||
def show_usage(self) -> None:
|
def show_usage(self) -> None:
|
||||||
"""Print a detailed token usage report showing total and per-request usage."""
|
"""Print a detailed token usage report showing total and per-request usage."""
|
||||||
print("\n=== Token Usage Summary ===")
|
print("\n=== Token Usage Summary ===")
|
||||||
|
|||||||
@@ -1825,82 +1825,6 @@ def perform_completion_with_backoff(
|
|||||||
# ]
|
# ]
|
||||||
|
|
||||||
|
|
||||||
async def aperform_completion_with_backoff(
|
|
||||||
provider,
|
|
||||||
prompt_with_variables,
|
|
||||||
api_token,
|
|
||||||
json_response=False,
|
|
||||||
base_url=None,
|
|
||||||
**kwargs,
|
|
||||||
):
|
|
||||||
"""
|
|
||||||
Async version: Perform an API completion request with exponential backoff.
|
|
||||||
|
|
||||||
How it works:
|
|
||||||
1. Sends an async completion request to the API.
|
|
||||||
2. Retries on rate-limit errors with exponential delays (async).
|
|
||||||
3. Returns the API response or an error after all retries.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
provider (str): The name of the API provider.
|
|
||||||
prompt_with_variables (str): The input prompt for the completion request.
|
|
||||||
api_token (str): The API token for authentication.
|
|
||||||
json_response (bool): Whether to request a JSON response. Defaults to False.
|
|
||||||
base_url (Optional[str]): The base URL for the API. Defaults to None.
|
|
||||||
**kwargs: Additional arguments for the API request.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
dict: The API response or an error message after all retries.
|
|
||||||
"""
|
|
||||||
|
|
||||||
from litellm import acompletion
|
|
||||||
from litellm.exceptions import RateLimitError
|
|
||||||
import asyncio
|
|
||||||
|
|
||||||
max_attempts = 3
|
|
||||||
base_delay = 2 # Base delay in seconds, you can adjust this based on your needs
|
|
||||||
|
|
||||||
extra_args = {"temperature": 0.01, "api_key": api_token, "base_url": base_url}
|
|
||||||
if json_response:
|
|
||||||
extra_args["response_format"] = {"type": "json_object"}
|
|
||||||
|
|
||||||
if kwargs.get("extra_args"):
|
|
||||||
extra_args.update(kwargs["extra_args"])
|
|
||||||
|
|
||||||
for attempt in range(max_attempts):
|
|
||||||
try:
|
|
||||||
response = await acompletion(
|
|
||||||
model=provider,
|
|
||||||
messages=[{"role": "user", "content": prompt_with_variables}],
|
|
||||||
**extra_args,
|
|
||||||
)
|
|
||||||
return response # Return the successful response
|
|
||||||
except RateLimitError as e:
|
|
||||||
print("Rate limit error:", str(e))
|
|
||||||
|
|
||||||
if attempt == max_attempts - 1:
|
|
||||||
# Last attempt failed, raise the error.
|
|
||||||
raise
|
|
||||||
|
|
||||||
# Check if we have exhausted our max attempts
|
|
||||||
if attempt < max_attempts - 1:
|
|
||||||
# Calculate the delay and wait
|
|
||||||
delay = base_delay * (2**attempt) # Exponential backoff formula
|
|
||||||
print(f"Waiting for {delay} seconds before retrying...")
|
|
||||||
await asyncio.sleep(delay)
|
|
||||||
else:
|
|
||||||
# Return an error response after exhausting all retries
|
|
||||||
return [
|
|
||||||
{
|
|
||||||
"index": 0,
|
|
||||||
"tags": ["error"],
|
|
||||||
"content": ["Rate limit error. Please try again later."],
|
|
||||||
}
|
|
||||||
]
|
|
||||||
except Exception as e:
|
|
||||||
raise e # Raise any other exceptions immediately
|
|
||||||
|
|
||||||
|
|
||||||
def extract_blocks(url, html, provider=DEFAULT_PROVIDER, api_token=None, base_url=None):
|
def extract_blocks(url, html, provider=DEFAULT_PROVIDER, api_token=None, base_url=None):
|
||||||
"""
|
"""
|
||||||
Extract content blocks from website HTML using an AI provider.
|
Extract content blocks from website HTML using an AI provider.
|
||||||
|
|||||||
@@ -1,304 +1,98 @@
|
|||||||
# Proxy & Security
|
# Proxy
|
||||||
|
|
||||||
This guide covers proxy configuration and security features in Crawl4AI, including SSL certificate analysis and proxy rotation strategies.
|
|
||||||
|
|
||||||
## Understanding Proxy Configuration
|
|
||||||
|
|
||||||
Crawl4AI recommends configuring proxies per request through `CrawlerRunConfig.proxy_config`. This gives you precise control, enables rotation strategies, and keeps examples simple enough to copy, paste, and run.
|
|
||||||
|
|
||||||
## Basic Proxy Setup
|
## Basic Proxy Setup
|
||||||
|
|
||||||
Configure proxies that apply to each crawl operation:
|
Simple proxy configuration with `BrowserConfig`:
|
||||||
|
|
||||||
```python
|
```python
|
||||||
import asyncio
|
from crawl4ai.async_configs import BrowserConfig
|
||||||
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, ProxyConfig
|
|
||||||
|
|
||||||
run_config = CrawlerRunConfig(proxy_config=ProxyConfig(server="http://proxy.example.com:8080"))
|
# Using HTTP proxy
|
||||||
# run_config = CrawlerRunConfig(proxy_config={"server": "http://proxy.example.com:8080"})
|
browser_config = BrowserConfig(proxy_config={"server": "http://proxy.example.com:8080"})
|
||||||
# run_config = CrawlerRunConfig(proxy_config="http://proxy.example.com:8080")
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
|
result = await crawler.arun(url="https://example.com")
|
||||||
|
|
||||||
|
# Using SOCKS proxy
|
||||||
async def main():
|
browser_config = BrowserConfig(proxy_config={"server": "socks5://proxy.example.com:1080"})
|
||||||
browser_config = BrowserConfig()
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
async with AsyncWebCrawler(config=browser_config) as crawler:
|
result = await crawler.arun(url="https://example.com")
|
||||||
result = await crawler.arun(url="https://example.com", config=run_config)
|
|
||||||
print(f"Success: {result.success} -> {result.url}")
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
asyncio.run(main())
|
|
||||||
```
|
```
|
||||||
|
|
||||||
!!! note "Why request-level?"
|
## Authenticated Proxy
|
||||||
`CrawlerRunConfig.proxy_config` keeps each request self-contained, so swapping proxies or rotation strategies is just a matter of building a new run configuration.
|
|
||||||
|
|
||||||
## Supported Proxy Formats
|
Use an authenticated proxy with `BrowserConfig`:
|
||||||
|
|
||||||
The `ProxyConfig.from_string()` method supports multiple formats:
|
|
||||||
|
|
||||||
```python
|
```python
|
||||||
from crawl4ai import ProxyConfig
|
from crawl4ai.async_configs import BrowserConfig
|
||||||
|
|
||||||
# HTTP proxy with authentication
|
browser_config = BrowserConfig(proxy_config={
|
||||||
proxy1 = ProxyConfig.from_string("http://user:pass@192.168.1.1:8080")
|
"server": "http://[host]:[port]",
|
||||||
|
"username": "[username]",
|
||||||
# HTTPS proxy
|
"password": "[password]",
|
||||||
proxy2 = ProxyConfig.from_string("https://proxy.example.com:8080")
|
})
|
||||||
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
# SOCKS5 proxy
|
result = await crawler.arun(url="https://example.com")
|
||||||
proxy3 = ProxyConfig.from_string("socks5://proxy.example.com:1080")
|
|
||||||
|
|
||||||
# Simple IP:port format
|
|
||||||
proxy4 = ProxyConfig.from_string("192.168.1.1:8080")
|
|
||||||
|
|
||||||
# IP:port:user:pass format
|
|
||||||
proxy5 = ProxyConfig.from_string("192.168.1.1:8080:user:pass")
|
|
||||||
```
|
```
|
||||||
|
|
||||||
## Authenticated Proxies
|
|
||||||
|
|
||||||
For proxies requiring authentication:
|
## Rotating Proxies
|
||||||
|
|
||||||
|
Example using a proxy rotation service dynamically:
|
||||||
|
|
||||||
```python
|
```python
|
||||||
import asyncio
|
|
||||||
from crawl4ai import AsyncWebCrawler,BrowserConfig, CrawlerRunConfig, ProxyConfig
|
|
||||||
|
|
||||||
run_config = CrawlerRunConfig(
|
|
||||||
proxy_config=ProxyConfig(
|
|
||||||
server="http://proxy.example.com:8080",
|
|
||||||
username="your_username",
|
|
||||||
password="your_password",
|
|
||||||
)
|
|
||||||
)
|
|
||||||
# Or dictionary style:
|
|
||||||
# run_config = CrawlerRunConfig(proxy_config={
|
|
||||||
# "server": "http://proxy.example.com:8080",
|
|
||||||
# "username": "your_username",
|
|
||||||
# "password": "your_password",
|
|
||||||
# })
|
|
||||||
|
|
||||||
|
|
||||||
async def main():
|
|
||||||
browser_config = BrowserConfig()
|
|
||||||
async with AsyncWebCrawler(config=browser_config) as crawler:
|
|
||||||
result = await crawler.arun(url="https://example.com", config=run_config)
|
|
||||||
print(f"Success: {result.success} -> {result.url}")
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
asyncio.run(main())
|
|
||||||
```
|
|
||||||
|
|
||||||
## Environment Variable Configuration
|
|
||||||
|
|
||||||
Load proxies from environment variables for easy configuration:
|
|
||||||
|
|
||||||
```python
|
|
||||||
import os
|
|
||||||
from crawl4ai import ProxyConfig, CrawlerRunConfig
|
|
||||||
|
|
||||||
# Set environment variable
|
|
||||||
os.environ["PROXIES"] = "ip1:port1:user1:pass1,ip2:port2:user2:pass2,ip3:port3"
|
|
||||||
|
|
||||||
# Load all proxies
|
|
||||||
proxies = ProxyConfig.from_env()
|
|
||||||
print(f"Loaded {len(proxies)} proxies")
|
|
||||||
|
|
||||||
# Use first proxy
|
|
||||||
if proxies:
|
|
||||||
run_config = CrawlerRunConfig(proxy_config=proxies[0])
|
|
||||||
```
|
|
||||||
|
|
||||||
## Rotating Proxies
|
|
||||||
|
|
||||||
Crawl4AI supports automatic proxy rotation to distribute requests across multiple proxy servers. Rotation is applied per request using a rotation strategy on `CrawlerRunConfig`.
|
|
||||||
|
|
||||||
### Proxy Rotation (recommended)
|
|
||||||
```python
|
|
||||||
import asyncio
|
|
||||||
import re
|
import re
|
||||||
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode, ProxyConfig
|
from crawl4ai import (
|
||||||
from crawl4ai.proxy_strategy import RoundRobinProxyStrategy
|
AsyncWebCrawler,
|
||||||
|
BrowserConfig,
|
||||||
|
CrawlerRunConfig,
|
||||||
|
CacheMode,
|
||||||
|
RoundRobinProxyStrategy,
|
||||||
|
)
|
||||||
|
import asyncio
|
||||||
|
from crawl4ai import ProxyConfig
|
||||||
async def main():
|
async def main():
|
||||||
# Load proxies from environment
|
# Load proxies and create rotation strategy
|
||||||
proxies = ProxyConfig.from_env()
|
proxies = ProxyConfig.from_env()
|
||||||
|
#eg: export PROXIES="ip1:port1:username1:password1,ip2:port2:username2:password2"
|
||||||
if not proxies:
|
if not proxies:
|
||||||
print("No proxies found! Set PROXIES environment variable.")
|
print("No proxies found in environment. Set PROXIES env variable!")
|
||||||
return
|
return
|
||||||
|
|
||||||
# Create rotation strategy
|
|
||||||
proxy_strategy = RoundRobinProxyStrategy(proxies)
|
proxy_strategy = RoundRobinProxyStrategy(proxies)
|
||||||
|
|
||||||
# Configure per-request with proxy rotation
|
# Create configs
|
||||||
browser_config = BrowserConfig(headless=True, verbose=False)
|
browser_config = BrowserConfig(headless=True, verbose=False)
|
||||||
run_config = CrawlerRunConfig(
|
run_config = CrawlerRunConfig(
|
||||||
cache_mode=CacheMode.BYPASS,
|
cache_mode=CacheMode.BYPASS,
|
||||||
proxy_rotation_strategy=proxy_strategy,
|
proxy_rotation_strategy=proxy_strategy
|
||||||
)
|
)
|
||||||
|
|
||||||
async with AsyncWebCrawler(config=browser_config) as crawler:
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
urls = ["https://httpbin.org/ip"] * (len(proxies) * 2) # Test each proxy twice
|
urls = ["https://httpbin.org/ip"] * (len(proxies) * 2) # Test each proxy twice
|
||||||
|
|
||||||
print(f"🚀 Testing {len(proxies)} proxies with rotation...")
|
print("\n📈 Initializing crawler with proxy rotation...")
|
||||||
results = await crawler.arun_many(urls=urls, config=run_config)
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
|
print("\n🚀 Starting batch crawl with proxy rotation...")
|
||||||
|
results = await crawler.arun_many(
|
||||||
|
urls=urls,
|
||||||
|
config=run_config
|
||||||
|
)
|
||||||
|
for result in results:
|
||||||
|
if result.success:
|
||||||
|
ip_match = re.search(r'(?:[0-9]{1,3}\.){3}[0-9]{1,3}', result.html)
|
||||||
|
current_proxy = run_config.proxy_config if run_config.proxy_config else None
|
||||||
|
|
||||||
for i, result in enumerate(results):
|
if current_proxy and ip_match:
|
||||||
if result.success:
|
print(f"URL {result.url}")
|
||||||
# Extract IP from response
|
print(f"Proxy {current_proxy.server} -> Response IP: {ip_match.group(0)}")
|
||||||
ip_match = re.search(r'(?:[0-9]{1,3}\.){3}[0-9]{1,3}', result.html)
|
verified = ip_match.group(0) == current_proxy.ip
|
||||||
if ip_match:
|
if verified:
|
||||||
detected_ip = ip_match.group(0)
|
print(f"✅ Proxy working! IP matches: {current_proxy.ip}")
|
||||||
proxy_index = i % len(proxies)
|
else:
|
||||||
expected_ip = proxies[proxy_index].ip
|
print("❌ Proxy failed or IP mismatch!")
|
||||||
|
print("---")
|
||||||
|
|
||||||
print(f"✅ Request {i+1}: Proxy {proxy_index+1} -> IP {detected_ip}")
|
asyncio.run(main())
|
||||||
if detected_ip == expected_ip:
|
|
||||||
print(" 🎯 IP matches proxy configuration")
|
|
||||||
else:
|
|
||||||
print(f" ⚠️ IP mismatch (expected {expected_ip})")
|
|
||||||
else:
|
|
||||||
print(f"❌ Request {i+1}: Could not extract IP from response")
|
|
||||||
else:
|
|
||||||
print(f"❌ Request {i+1}: Failed - {result.error_message}")
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
asyncio.run(main())
|
|
||||||
```
|
```
|
||||||
|
|
||||||
## SSL Certificate Analysis
|
|
||||||
|
|
||||||
Combine proxy usage with SSL certificate inspection for enhanced security analysis. SSL certificate fetching is configured per request via `CrawlerRunConfig`.
|
|
||||||
|
|
||||||
### Per-Request SSL Certificate Analysis
|
|
||||||
```python
|
|
||||||
import asyncio
|
|
||||||
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
|
|
||||||
|
|
||||||
run_config = CrawlerRunConfig(
|
|
||||||
proxy_config={
|
|
||||||
"server": "http://proxy.example.com:8080",
|
|
||||||
"username": "user",
|
|
||||||
"password": "pass",
|
|
||||||
},
|
|
||||||
fetch_ssl_certificate=True, # Enable SSL certificate analysis for this request
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
async def main():
|
|
||||||
browser_config = BrowserConfig()
|
|
||||||
async with AsyncWebCrawler(config=browser_config) as crawler:
|
|
||||||
result = await crawler.arun(url="https://example.com", config=run_config)
|
|
||||||
|
|
||||||
if result.success:
|
|
||||||
print(f"✅ Crawled via proxy: {result.url}")
|
|
||||||
|
|
||||||
# Analyze SSL certificate
|
|
||||||
if result.ssl_certificate:
|
|
||||||
cert = result.ssl_certificate
|
|
||||||
print("🔒 SSL Certificate Info:")
|
|
||||||
print(f" Issuer: {cert.issuer}")
|
|
||||||
print(f" Subject: {cert.subject}")
|
|
||||||
print(f" Valid until: {cert.valid_until}")
|
|
||||||
print(f" Fingerprint: {cert.fingerprint}")
|
|
||||||
|
|
||||||
# Export certificate
|
|
||||||
cert.to_json("certificate.json")
|
|
||||||
print("💾 Certificate exported to certificate.json")
|
|
||||||
else:
|
|
||||||
print("⚠️ No SSL certificate information available")
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
asyncio.run(main())
|
|
||||||
```
|
|
||||||
|
|
||||||
## Security Best Practices
|
|
||||||
|
|
||||||
### 1. Proxy Rotation for Anonymity
|
|
||||||
```python
|
|
||||||
from crawl4ai import CrawlerRunConfig, ProxyConfig
|
|
||||||
from crawl4ai.proxy_strategy import RoundRobinProxyStrategy
|
|
||||||
|
|
||||||
# Use multiple proxies to avoid IP blocking
|
|
||||||
proxies = ProxyConfig.from_env("PROXIES")
|
|
||||||
strategy = RoundRobinProxyStrategy(proxies)
|
|
||||||
|
|
||||||
# Configure rotation per request (recommended)
|
|
||||||
run_config = CrawlerRunConfig(proxy_rotation_strategy=strategy)
|
|
||||||
|
|
||||||
# For a fixed proxy across all requests, just reuse the same run_config instance
|
|
||||||
static_run_config = run_config
|
|
||||||
```
|
|
||||||
|
|
||||||
### 2. SSL Certificate Verification
|
|
||||||
```python
|
|
||||||
from crawl4ai import CrawlerRunConfig
|
|
||||||
|
|
||||||
# Always verify SSL certificates when possible
|
|
||||||
# Per-request (affects specific requests)
|
|
||||||
run_config = CrawlerRunConfig(fetch_ssl_certificate=True)
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3. Environment Variable Security
|
|
||||||
```bash
|
|
||||||
# Use environment variables for sensitive proxy credentials
|
|
||||||
# Avoid hardcoding usernames/passwords in code
|
|
||||||
export PROXIES="ip1:port1:user1:pass1,ip2:port2:user2:pass2"
|
|
||||||
```
|
|
||||||
|
|
||||||
### 4. SOCKS5 for Enhanced Security
|
|
||||||
```python
|
|
||||||
from crawl4ai import CrawlerRunConfig
|
|
||||||
|
|
||||||
# Prefer SOCKS5 proxies for better protocol support
|
|
||||||
run_config = CrawlerRunConfig(proxy_config="socks5://proxy.example.com:1080")
|
|
||||||
```
|
|
||||||
|
|
||||||
## Migration from Deprecated `proxy` Parameter
|
|
||||||
|
|
||||||
!!! warning "Deprecation Notice"
|
|
||||||
The legacy `proxy` argument on `BrowserConfig` is deprecated. Configure proxies through `CrawlerRunConfig.proxy_config` so each request fully describes its network settings.
|
|
||||||
|
|
||||||
```python
|
|
||||||
# Old (deprecated) approach
|
|
||||||
# from crawl4ai import BrowserConfig
|
|
||||||
# browser_config = BrowserConfig(proxy="http://proxy.example.com:8080")
|
|
||||||
|
|
||||||
# New (preferred) approach
|
|
||||||
from crawl4ai import CrawlerRunConfig
|
|
||||||
run_config = CrawlerRunConfig(proxy_config="http://proxy.example.com:8080")
|
|
||||||
```
|
|
||||||
|
|
||||||
### Safe Logging of Proxies
|
|
||||||
```python
|
|
||||||
from crawl4ai import ProxyConfig
|
|
||||||
|
|
||||||
def safe_proxy_repr(proxy: ProxyConfig):
|
|
||||||
if getattr(proxy, "username", None):
|
|
||||||
return f"{proxy.server} (auth: ****)"
|
|
||||||
return proxy.server
|
|
||||||
```
|
|
||||||
|
|
||||||
## Troubleshooting
|
|
||||||
|
|
||||||
### Common Issues
|
|
||||||
|
|
||||||
???+ question "Proxy connection failed"
|
|
||||||
- Verify the proxy server is reachable from your network.
|
|
||||||
- Double-check authentication credentials.
|
|
||||||
- Ensure the protocol matches (`http`, `https`, or `socks5`).
|
|
||||||
|
|
||||||
???+ question "SSL certificate errors"
|
|
||||||
- Some proxies break SSL inspection; switch proxies if you see repeated failures.
|
|
||||||
- Consider temporarily disabling certificate fetching to isolate the issue.
|
|
||||||
|
|
||||||
???+ question "Environment variables not loading"
|
|
||||||
- Confirm `PROXIES` (or your custom env var) is set before running the script.
|
|
||||||
- Check formatting: `ip:port:user:pass,ip:port:user:pass`.
|
|
||||||
|
|
||||||
???+ question "Proxy rotation not working"
|
|
||||||
- Ensure `ProxyConfig.from_env()` actually loaded entries (`len(proxies) > 0`).
|
|
||||||
- Attach `proxy_rotation_strategy` to `CrawlerRunConfig`.
|
|
||||||
- Validate the proxy definitions you pass into the strategy.
|
|
||||||
|
|||||||
165
tests/browser/smoke_test_cdp.py
Executable file
165
tests/browser/smoke_test_cdp.py
Executable file
@@ -0,0 +1,165 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
"""
|
||||||
|
Simple smoke test for CDP concurrency fixes.
|
||||||
|
This can be run without pytest to quickly validate the changes.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import asyncio
|
||||||
|
import sys
|
||||||
|
import os
|
||||||
|
|
||||||
|
# Add the project root to Python path
|
||||||
|
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../..')))
|
||||||
|
|
||||||
|
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
|
||||||
|
|
||||||
|
|
||||||
|
async def test_basic_cdp():
|
||||||
|
"""Basic test that CDP browser works"""
|
||||||
|
print("Test 1: Basic CDP browser test...")
|
||||||
|
|
||||||
|
browser_config = BrowserConfig(
|
||||||
|
use_managed_browser=True,
|
||||||
|
headless=True,
|
||||||
|
verbose=False
|
||||||
|
)
|
||||||
|
|
||||||
|
try:
|
||||||
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
|
result = await crawler.arun(
|
||||||
|
url="https://example.com",
|
||||||
|
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
|
||||||
|
)
|
||||||
|
assert result.success, f"Failed: {result.error_message}"
|
||||||
|
assert len(result.html) > 0, "Empty HTML"
|
||||||
|
print(" ✓ Basic CDP test passed")
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
print(f" ✗ Basic CDP test failed: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
async def test_arun_many_cdp():
|
||||||
|
"""Test arun_many with CDP browser - the key concurrency fix"""
|
||||||
|
print("\nTest 2: arun_many with CDP browser...")
|
||||||
|
|
||||||
|
browser_config = BrowserConfig(
|
||||||
|
use_managed_browser=True,
|
||||||
|
headless=True,
|
||||||
|
verbose=False
|
||||||
|
)
|
||||||
|
|
||||||
|
urls = [
|
||||||
|
"https://example.com",
|
||||||
|
"https://httpbin.org/html",
|
||||||
|
"https://www.example.org",
|
||||||
|
]
|
||||||
|
|
||||||
|
try:
|
||||||
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
|
results = await crawler.arun_many(
|
||||||
|
urls=urls,
|
||||||
|
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
|
||||||
|
)
|
||||||
|
|
||||||
|
assert len(results) == len(urls), f"Expected {len(urls)} results, got {len(results)}"
|
||||||
|
|
||||||
|
success_count = sum(1 for r in results if r.success)
|
||||||
|
print(f" ✓ Crawled {success_count}/{len(urls)} URLs successfully")
|
||||||
|
|
||||||
|
if success_count >= len(urls) * 0.8: # Allow 20% failure for network issues
|
||||||
|
print(" ✓ arun_many CDP test passed")
|
||||||
|
return True
|
||||||
|
else:
|
||||||
|
print(f" ✗ Too many failures: {len(urls) - success_count}/{len(urls)}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
print(f" ✗ arun_many CDP test failed: {e}")
|
||||||
|
import traceback
|
||||||
|
traceback.print_exc()
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
async def test_concurrent_arun_many():
|
||||||
|
"""Test concurrent arun_many calls - stress test for page lock"""
|
||||||
|
print("\nTest 3: Concurrent arun_many calls...")
|
||||||
|
|
||||||
|
browser_config = BrowserConfig(
|
||||||
|
use_managed_browser=True,
|
||||||
|
headless=True,
|
||||||
|
verbose=False
|
||||||
|
)
|
||||||
|
|
||||||
|
try:
|
||||||
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
|
# Run two arun_many calls concurrently
|
||||||
|
task1 = crawler.arun_many(
|
||||||
|
urls=["https://example.com", "https://httpbin.org/html"],
|
||||||
|
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
|
||||||
|
)
|
||||||
|
|
||||||
|
task2 = crawler.arun_many(
|
||||||
|
urls=["https://www.example.org", "https://example.com"],
|
||||||
|
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
|
||||||
|
)
|
||||||
|
|
||||||
|
results1, results2 = await asyncio.gather(task1, task2, return_exceptions=True)
|
||||||
|
|
||||||
|
# Check for exceptions
|
||||||
|
if isinstance(results1, Exception):
|
||||||
|
print(f" ✗ Task 1 raised exception: {results1}")
|
||||||
|
return False
|
||||||
|
if isinstance(results2, Exception):
|
||||||
|
print(f" ✗ Task 2 raised exception: {results2}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
total_success = sum(1 for r in results1 if r.success) + sum(1 for r in results2 if r.success)
|
||||||
|
total_requests = len(results1) + len(results2)
|
||||||
|
|
||||||
|
print(f" ✓ {total_success}/{total_requests} concurrent requests succeeded")
|
||||||
|
|
||||||
|
if total_success >= total_requests * 0.7: # Allow 30% failure for concurrent stress
|
||||||
|
print(" ✓ Concurrent arun_many test passed")
|
||||||
|
return True
|
||||||
|
else:
|
||||||
|
print(f" ✗ Too many concurrent failures")
|
||||||
|
return False
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
print(f" ✗ Concurrent test failed: {e}")
|
||||||
|
import traceback
|
||||||
|
traceback.print_exc()
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
async def main():
|
||||||
|
"""Run all smoke tests"""
|
||||||
|
print("=" * 60)
|
||||||
|
print("CDP Concurrency Smoke Tests")
|
||||||
|
print("=" * 60)
|
||||||
|
|
||||||
|
results = []
|
||||||
|
|
||||||
|
# Run tests sequentially
|
||||||
|
results.append(await test_basic_cdp())
|
||||||
|
results.append(await test_arun_many_cdp())
|
||||||
|
results.append(await test_concurrent_arun_many())
|
||||||
|
|
||||||
|
print("\n" + "=" * 60)
|
||||||
|
passed = sum(results)
|
||||||
|
total = len(results)
|
||||||
|
|
||||||
|
if passed == total:
|
||||||
|
print(f"✓ All {total} smoke tests passed!")
|
||||||
|
print("=" * 60)
|
||||||
|
return 0
|
||||||
|
else:
|
||||||
|
print(f"✗ {total - passed}/{total} smoke tests failed")
|
||||||
|
print("=" * 60)
|
||||||
|
return 1
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
exit_code = asyncio.run(main())
|
||||||
|
sys.exit(exit_code)
|
||||||
282
tests/browser/test_cdp_concurrency.py
Normal file
282
tests/browser/test_cdp_concurrency.py
Normal file
@@ -0,0 +1,282 @@
|
|||||||
|
"""
|
||||||
|
Test CDP browser concurrency with arun_many.
|
||||||
|
|
||||||
|
This test suite validates that the fixes for concurrent page creation
|
||||||
|
in managed browsers (CDP mode) work correctly, particularly:
|
||||||
|
1. Always creating new pages instead of reusing
|
||||||
|
2. Page lock serialization prevents race conditions
|
||||||
|
3. Multiple concurrent arun_many calls work correctly
|
||||||
|
"""
|
||||||
|
|
||||||
|
# Standard library imports
|
||||||
|
import asyncio
|
||||||
|
import os
|
||||||
|
import sys
|
||||||
|
|
||||||
|
# Third-party imports
|
||||||
|
import pytest
|
||||||
|
|
||||||
|
# Add the project root to Python path
|
||||||
|
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../..')))
|
||||||
|
|
||||||
|
# Local imports
|
||||||
|
from crawl4ai import AsyncWebCrawler, BrowserConfig, CacheMode, CrawlerRunConfig
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_cdp_concurrent_arun_many_basic():
|
||||||
|
"""
|
||||||
|
Test basic concurrent arun_many with CDP browser.
|
||||||
|
This tests the fix for always creating new pages.
|
||||||
|
"""
|
||||||
|
browser_config = BrowserConfig(
|
||||||
|
use_managed_browser=True,
|
||||||
|
headless=True,
|
||||||
|
verbose=False
|
||||||
|
)
|
||||||
|
|
||||||
|
urls = [
|
||||||
|
"https://example.com",
|
||||||
|
"https://www.python.org",
|
||||||
|
"https://httpbin.org/html",
|
||||||
|
]
|
||||||
|
|
||||||
|
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
|
||||||
|
|
||||||
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
|
# Run arun_many - should create new pages for each URL
|
||||||
|
results = await crawler.arun_many(urls=urls, config=config)
|
||||||
|
|
||||||
|
# Verify all URLs were crawled successfully
|
||||||
|
assert len(results) == len(urls), f"Expected {len(urls)} results, got {len(results)}"
|
||||||
|
|
||||||
|
for i, result in enumerate(results):
|
||||||
|
assert result is not None, f"Result {i} is None"
|
||||||
|
assert result.success, f"Result {i} failed: {result.error_message}"
|
||||||
|
assert result.status_code == 200, f"Result {i} has status {result.status_code}"
|
||||||
|
assert len(result.html) > 0, f"Result {i} has empty HTML"
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_cdp_multiple_sequential_arun_many():
|
||||||
|
"""
|
||||||
|
Test multiple sequential arun_many calls with CDP browser.
|
||||||
|
Each call should work correctly without interference.
|
||||||
|
"""
|
||||||
|
browser_config = BrowserConfig(
|
||||||
|
use_managed_browser=True,
|
||||||
|
headless=True,
|
||||||
|
verbose=False
|
||||||
|
)
|
||||||
|
|
||||||
|
urls_batch1 = [
|
||||||
|
"https://example.com",
|
||||||
|
"https://httpbin.org/html",
|
||||||
|
]
|
||||||
|
|
||||||
|
urls_batch2 = [
|
||||||
|
"https://www.python.org",
|
||||||
|
"https://example.org",
|
||||||
|
]
|
||||||
|
|
||||||
|
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
|
||||||
|
|
||||||
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
|
# First batch
|
||||||
|
results1 = await crawler.arun_many(urls=urls_batch1, config=config)
|
||||||
|
assert len(results1) == len(urls_batch1)
|
||||||
|
for result in results1:
|
||||||
|
assert result.success, f"First batch failed: {result.error_message}"
|
||||||
|
|
||||||
|
# Second batch - should work without issues
|
||||||
|
results2 = await crawler.arun_many(urls=urls_batch2, config=config)
|
||||||
|
assert len(results2) == len(urls_batch2)
|
||||||
|
for result in results2:
|
||||||
|
assert result.success, f"Second batch failed: {result.error_message}"
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_cdp_concurrent_arun_many_stress():
|
||||||
|
"""
|
||||||
|
Stress test: Multiple concurrent arun_many calls with CDP browser.
|
||||||
|
This is the key test for the concurrency fix - ensures page lock works.
|
||||||
|
"""
|
||||||
|
browser_config = BrowserConfig(
|
||||||
|
use_managed_browser=True,
|
||||||
|
headless=True,
|
||||||
|
verbose=False
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create multiple batches of URLs
|
||||||
|
num_batches = 3
|
||||||
|
urls_per_batch = 3
|
||||||
|
|
||||||
|
batches = [
|
||||||
|
[f"https://httpbin.org/delay/{i}?batch={batch}"
|
||||||
|
for i in range(urls_per_batch)]
|
||||||
|
for batch in range(num_batches)
|
||||||
|
]
|
||||||
|
|
||||||
|
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
|
||||||
|
|
||||||
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
|
# Run multiple arun_many calls concurrently
|
||||||
|
tasks = [
|
||||||
|
crawler.arun_many(urls=batch, config=config)
|
||||||
|
for batch in batches
|
||||||
|
]
|
||||||
|
|
||||||
|
# Execute all batches in parallel
|
||||||
|
all_results = await asyncio.gather(*tasks, return_exceptions=True)
|
||||||
|
|
||||||
|
# Verify no exceptions occurred
|
||||||
|
for i, results in enumerate(all_results):
|
||||||
|
assert not isinstance(results, Exception), f"Batch {i} raised exception: {results}"
|
||||||
|
assert len(results) == urls_per_batch, f"Batch {i}: expected {urls_per_batch} results, got {len(results)}"
|
||||||
|
|
||||||
|
# Verify each result
|
||||||
|
for j, result in enumerate(results):
|
||||||
|
assert result is not None, f"Batch {i}, result {j} is None"
|
||||||
|
# Some may fail due to network/timing, but should not crash
|
||||||
|
if result.success:
|
||||||
|
assert len(result.html) > 0, f"Batch {i}, result {j} has empty HTML"
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_cdp_page_isolation():
|
||||||
|
"""
|
||||||
|
Test that pages are properly isolated - changes to one don't affect another.
|
||||||
|
This validates that we're creating truly independent pages.
|
||||||
|
"""
|
||||||
|
browser_config = BrowserConfig(
|
||||||
|
use_managed_browser=True,
|
||||||
|
headless=True,
|
||||||
|
verbose=False
|
||||||
|
)
|
||||||
|
|
||||||
|
url = "https://example.com"
|
||||||
|
|
||||||
|
# Use different JS codes to verify isolation
|
||||||
|
config1 = CrawlerRunConfig(
|
||||||
|
cache_mode=CacheMode.BYPASS,
|
||||||
|
js_code="document.body.setAttribute('data-test', 'page1');"
|
||||||
|
)
|
||||||
|
|
||||||
|
config2 = CrawlerRunConfig(
|
||||||
|
cache_mode=CacheMode.BYPASS,
|
||||||
|
js_code="document.body.setAttribute('data-test', 'page2');"
|
||||||
|
)
|
||||||
|
|
||||||
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
|
# Run both configs concurrently
|
||||||
|
results = await crawler.arun_many(
|
||||||
|
urls=[url, url],
|
||||||
|
configs=[config1, config2]
|
||||||
|
)
|
||||||
|
|
||||||
|
assert len(results) == 2
|
||||||
|
assert results[0].success and results[1].success
|
||||||
|
|
||||||
|
# Both should succeed with their own modifications
|
||||||
|
# (We can't directly check the data-test attribute, but success indicates isolation)
|
||||||
|
assert 'Example Domain' in results[0].html
|
||||||
|
assert 'Example Domain' in results[1].html
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_cdp_with_different_viewport_sizes():
|
||||||
|
"""
|
||||||
|
Test concurrent crawling with different viewport configurations.
|
||||||
|
Ensures context/page creation handles different configs correctly.
|
||||||
|
"""
|
||||||
|
browser_config = BrowserConfig(
|
||||||
|
use_managed_browser=True,
|
||||||
|
headless=True,
|
||||||
|
verbose=False
|
||||||
|
)
|
||||||
|
|
||||||
|
url = "https://example.com"
|
||||||
|
|
||||||
|
# Different viewport sizes (though in CDP mode these may be limited)
|
||||||
|
configs = [
|
||||||
|
CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
|
||||||
|
CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
|
||||||
|
CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
|
||||||
|
]
|
||||||
|
|
||||||
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
|
results = await crawler.arun_many(
|
||||||
|
urls=[url] * len(configs),
|
||||||
|
configs=configs
|
||||||
|
)
|
||||||
|
|
||||||
|
assert len(results) == len(configs)
|
||||||
|
for i, result in enumerate(results):
|
||||||
|
assert result.success, f"Config {i} failed: {result.error_message}"
|
||||||
|
assert len(result.html) > 0
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_cdp_error_handling_concurrent():
|
||||||
|
"""
|
||||||
|
Test that errors in one concurrent request don't affect others.
|
||||||
|
This ensures proper isolation and error handling.
|
||||||
|
"""
|
||||||
|
browser_config = BrowserConfig(
|
||||||
|
use_managed_browser=True,
|
||||||
|
headless=True,
|
||||||
|
verbose=False
|
||||||
|
)
|
||||||
|
|
||||||
|
urls = [
|
||||||
|
"https://example.com", # Valid
|
||||||
|
"https://this-domain-definitely-does-not-exist-12345.com", # Invalid
|
||||||
|
"https://httpbin.org/html", # Valid
|
||||||
|
]
|
||||||
|
|
||||||
|
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
|
||||||
|
|
||||||
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
|
results = await crawler.arun_many(urls=urls, config=config)
|
||||||
|
|
||||||
|
assert len(results) == len(urls)
|
||||||
|
|
||||||
|
# First and third should succeed
|
||||||
|
assert results[0].success, "First URL should succeed"
|
||||||
|
assert results[2].success, "Third URL should succeed"
|
||||||
|
|
||||||
|
# Second may fail (invalid domain)
|
||||||
|
# But its failure shouldn't affect the others
|
||||||
|
|
||||||
|
|
||||||
|
@pytest.mark.asyncio
|
||||||
|
async def test_cdp_large_batch():
|
||||||
|
"""
|
||||||
|
Test handling a larger batch of URLs to ensure scalability.
|
||||||
|
"""
|
||||||
|
browser_config = BrowserConfig(
|
||||||
|
use_managed_browser=True,
|
||||||
|
headless=True,
|
||||||
|
verbose=False
|
||||||
|
)
|
||||||
|
|
||||||
|
# Create 10 URLs
|
||||||
|
num_urls = 10
|
||||||
|
urls = [f"https://httpbin.org/delay/0?id={i}" for i in range(num_urls)]
|
||||||
|
|
||||||
|
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
|
||||||
|
|
||||||
|
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||||
|
results = await crawler.arun_many(urls=urls, config=config)
|
||||||
|
|
||||||
|
assert len(results) == num_urls
|
||||||
|
|
||||||
|
# Count successes
|
||||||
|
successes = sum(1 for r in results if r.success)
|
||||||
|
# Allow some failures due to network issues, but most should succeed
|
||||||
|
assert successes >= num_urls * 0.8, f"Only {successes}/{num_urls} succeeded"
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
# Run tests with pytest
|
||||||
|
pytest.main([__file__, "-v", "-s"])
|
||||||
@@ -1,220 +0,0 @@
|
|||||||
"""
|
|
||||||
Final verification test for Issue #1055 fix
|
|
||||||
|
|
||||||
This test demonstrates that LLM extraction now runs in parallel
|
|
||||||
when using arun_many with multiple URLs.
|
|
||||||
"""
|
|
||||||
|
|
||||||
import os
|
|
||||||
import sys
|
|
||||||
import time
|
|
||||||
import asyncio
|
|
||||||
|
|
||||||
grandparent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
|
|
||||||
sys.path.append(grandparent_dir)
|
|
||||||
|
|
||||||
from crawl4ai import (
|
|
||||||
AsyncWebCrawler,
|
|
||||||
BrowserConfig,
|
|
||||||
CrawlerRunConfig,
|
|
||||||
CacheMode,
|
|
||||||
LLMExtractionStrategy,
|
|
||||||
LLMConfig,
|
|
||||||
)
|
|
||||||
|
|
||||||
from pydantic import BaseModel
|
|
||||||
|
|
||||||
|
|
||||||
class SimpleData(BaseModel):
|
|
||||||
title: str
|
|
||||||
summary: str
|
|
||||||
|
|
||||||
|
|
||||||
def print_section(title):
|
|
||||||
print("\n" + "=" * 80)
|
|
||||||
print(title)
|
|
||||||
print("=" * 80 + "\n")
|
|
||||||
|
|
||||||
|
|
||||||
async def test_without_llm():
|
|
||||||
"""Baseline: Test crawling without LLM extraction"""
|
|
||||||
print_section("TEST 1: Crawling WITHOUT LLM Extraction")
|
|
||||||
|
|
||||||
config = CrawlerRunConfig(
|
|
||||||
cache_mode=CacheMode.BYPASS,
|
|
||||||
)
|
|
||||||
|
|
||||||
browser_config = BrowserConfig(headless=True, verbose=False)
|
|
||||||
|
|
||||||
urls = [
|
|
||||||
"https://www.example.com",
|
|
||||||
"https://www.iana.org",
|
|
||||||
"https://www.wikipedia.org",
|
|
||||||
]
|
|
||||||
|
|
||||||
print(f"Crawling {len(urls)} URLs without LLM extraction...")
|
|
||||||
print("Expected: Fast and parallel\n")
|
|
||||||
|
|
||||||
start_time = time.time()
|
|
||||||
|
|
||||||
async with AsyncWebCrawler(config=browser_config) as crawler:
|
|
||||||
results = await crawler.arun_many(urls=urls, config=config)
|
|
||||||
|
|
||||||
duration = time.time() - start_time
|
|
||||||
|
|
||||||
print(f"\n✅ Completed in {duration:.2f}s")
|
|
||||||
print(f" Successful: {sum(1 for r in results if r.success)}/{len(urls)}")
|
|
||||||
print(f" Average: {duration/len(urls):.2f}s per URL")
|
|
||||||
|
|
||||||
return duration
|
|
||||||
|
|
||||||
|
|
||||||
async def test_with_llm_before_fix():
|
|
||||||
"""Demonstrate the problem: Sequential execution with LLM"""
|
|
||||||
print_section("TEST 2: What Issue #1055 Reported (LLM Sequential Behavior)")
|
|
||||||
|
|
||||||
print("The issue reported that with LLM extraction, URLs would crawl")
|
|
||||||
print("one after another instead of in parallel.")
|
|
||||||
print("\nWithout our fix, this would show:")
|
|
||||||
print(" - URL 1 fetches → extracts → completes")
|
|
||||||
print(" - URL 2 fetches → extracts → completes")
|
|
||||||
print(" - URL 3 fetches → extracts → completes")
|
|
||||||
print("\nTotal time would be approximately sum of all individual times.")
|
|
||||||
|
|
||||||
|
|
||||||
async def test_with_llm_after_fix():
|
|
||||||
"""Demonstrate the fix: Parallel execution with LLM"""
|
|
||||||
print_section("TEST 3: After Fix - LLM Extraction in Parallel")
|
|
||||||
|
|
||||||
config = CrawlerRunConfig(
|
|
||||||
cache_mode=CacheMode.BYPASS,
|
|
||||||
extraction_strategy=LLMExtractionStrategy(
|
|
||||||
llm_config=LLMConfig(provider="openai/gpt-4o-mini"),
|
|
||||||
schema=SimpleData.model_json_schema(),
|
|
||||||
extraction_type="schema",
|
|
||||||
instruction="Extract title and summary",
|
|
||||||
)
|
|
||||||
)
|
|
||||||
|
|
||||||
browser_config = BrowserConfig(headless=True, verbose=False)
|
|
||||||
|
|
||||||
urls = [
|
|
||||||
"https://www.example.com",
|
|
||||||
"https://www.iana.org",
|
|
||||||
"https://www.wikipedia.org",
|
|
||||||
]
|
|
||||||
|
|
||||||
print(f"Crawling {len(urls)} URLs WITH LLM extraction...")
|
|
||||||
print("Expected: Parallel execution with our fix\n")
|
|
||||||
|
|
||||||
completion_times = {}
|
|
||||||
start_time = time.time()
|
|
||||||
|
|
||||||
async with AsyncWebCrawler(config=browser_config) as crawler:
|
|
||||||
results = await crawler.arun_many(urls=urls, config=config)
|
|
||||||
for result in results:
|
|
||||||
elapsed = time.time() - start_time
|
|
||||||
completion_times[result.url] = elapsed
|
|
||||||
print(f" [{elapsed:5.2f}s] ✓ {result.url[:50]}")
|
|
||||||
|
|
||||||
duration = time.time() - start_time
|
|
||||||
|
|
||||||
print(f"\n✅ Total time: {duration:.2f}s")
|
|
||||||
print(f" Successful: {sum(1 for url in urls if url in completion_times)}/{len(urls)}")
|
|
||||||
|
|
||||||
# Analyze parallelism
|
|
||||||
times = list(completion_times.values())
|
|
||||||
if len(times) >= 2:
|
|
||||||
# If parallel, completion times should be staggered, not evenly spaced
|
|
||||||
time_diffs = [times[i+1] - times[i] for i in range(len(times)-1)]
|
|
||||||
avg_diff = sum(time_diffs) / len(time_diffs)
|
|
||||||
|
|
||||||
print(f"\nParallelism Analysis:")
|
|
||||||
print(f" Completion time differences: {[f'{d:.2f}s' for d in time_diffs]}")
|
|
||||||
print(f" Average difference: {avg_diff:.2f}s")
|
|
||||||
|
|
||||||
# In parallel mode, some tasks complete close together
|
|
||||||
# In sequential mode, they're evenly spaced (avg ~2-3s apart)
|
|
||||||
if avg_diff < duration / len(urls):
|
|
||||||
print(f" ✅ PARALLEL: Tasks completed with overlapping execution")
|
|
||||||
else:
|
|
||||||
print(f" ⚠️ SEQUENTIAL: Tasks completed one after another")
|
|
||||||
|
|
||||||
return duration
|
|
||||||
|
|
||||||
|
|
||||||
async def test_multiple_arun_calls():
|
|
||||||
"""Test multiple individual arun() calls in parallel"""
|
|
||||||
print_section("TEST 4: Multiple arun() Calls with asyncio.gather")
|
|
||||||
|
|
||||||
config = CrawlerRunConfig(
|
|
||||||
cache_mode=CacheMode.BYPASS,
|
|
||||||
extraction_strategy=LLMExtractionStrategy(
|
|
||||||
llm_config=LLMConfig(provider="openai/gpt-4o-mini"),
|
|
||||||
schema=SimpleData.model_json_schema(),
|
|
||||||
extraction_type="schema",
|
|
||||||
instruction="Extract title and summary",
|
|
||||||
)
|
|
||||||
)
|
|
||||||
|
|
||||||
browser_config = BrowserConfig(headless=True, verbose=False)
|
|
||||||
|
|
||||||
urls = [
|
|
||||||
"https://www.example.com",
|
|
||||||
"https://www.iana.org",
|
|
||||||
"https://www.wikipedia.org",
|
|
||||||
]
|
|
||||||
|
|
||||||
print(f"Running {len(urls)} arun() calls with asyncio.gather()...")
|
|
||||||
print("Expected: True parallel execution\n")
|
|
||||||
|
|
||||||
start_time = time.time()
|
|
||||||
|
|
||||||
async with AsyncWebCrawler(config=browser_config) as crawler:
|
|
||||||
tasks = [crawler.arun(url, config=config) for url in urls]
|
|
||||||
results = await asyncio.gather(*tasks)
|
|
||||||
|
|
||||||
duration = time.time() - start_time
|
|
||||||
|
|
||||||
print(f"\n✅ Completed in {duration:.2f}s")
|
|
||||||
print(f" Successful: {sum(1 for r in results if r.success)}/{len(urls)}")
|
|
||||||
print(f" This proves the async LLM extraction works correctly")
|
|
||||||
|
|
||||||
return duration
|
|
||||||
|
|
||||||
|
|
||||||
async def main():
|
|
||||||
print("\n" + "🚀" * 40)
|
|
||||||
print("ISSUE #1055 FIX VERIFICATION")
|
|
||||||
print("Testing: Sequential → Parallel LLM Extraction")
|
|
||||||
print("🚀" * 40)
|
|
||||||
|
|
||||||
# Run tests
|
|
||||||
await test_without_llm()
|
|
||||||
|
|
||||||
await test_with_llm_before_fix()
|
|
||||||
|
|
||||||
time_with_llm = await test_with_llm_after_fix()
|
|
||||||
|
|
||||||
time_gather = await test_multiple_arun_calls()
|
|
||||||
|
|
||||||
# Final summary
|
|
||||||
print_section("FINAL VERDICT")
|
|
||||||
|
|
||||||
print("✅ Fix Verified!")
|
|
||||||
print("\nWhat changed:")
|
|
||||||
print(" • Created aperform_completion_with_backoff() using litellm.acompletion")
|
|
||||||
print(" • Added arun() method to ExtractionStrategy base class")
|
|
||||||
print(" • Implemented parallel arun() in LLMExtractionStrategy")
|
|
||||||
print(" • Updated AsyncWebCrawler to use arun() when available")
|
|
||||||
print("\nResult:")
|
|
||||||
print(" • LLM extraction now runs in parallel across multiple URLs")
|
|
||||||
print(" • Backward compatible - existing strategies still work")
|
|
||||||
print(" • No breaking changes to the API")
|
|
||||||
print("\n✨ Issue #1055 is RESOLVED!")
|
|
||||||
|
|
||||||
print("\n" + "=" * 80 + "\n")
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
asyncio.run(main())
|
|
||||||
Reference in New Issue
Block a user