Compare commits


1 Commit

Author: ntohidi
SHA1: ca100c6518
Date: 2025-10-22 13:46:54 +02:00
Message: Release v0.7.6: The 0.7.6 Update
- Updated version to 0.7.6
- Added comprehensive demo and release notes
- Updated all documentation
- Updated the version in the Dockerfile to 0.7.6
27 changed files with 706 additions and 2429 deletions

View File

@@ -1,214 +0,0 @@
# CDP Browser Concurrency Fixes and Improvements
## Overview
This document describes the changes made to fix concurrency issues with CDP (Chrome DevTools Protocol) browsers when using `arun_many` and improve overall browser management.
## Problems Addressed
1. **Race Conditions in Page Creation**: When using managed CDP browsers with concurrent `arun_many` calls, the code attempted to reuse existing pages from `context.pages`, leading to race conditions and "Target page/context closed" errors.
2. **Proxy Configuration Issues**: Proxy credentials were incorrectly embedded in the `--proxy-server` URL, which doesn't work properly with CDP browsers.
3. **Insufficient Startup Checks**: Browser process startup checks were minimal and didn't catch early failures effectively.
4. **Unclear Logging**: Logging messages lacked structure and context, making debugging difficult.
5. **Duplicate Browser Arguments**: Browser launch arguments could contain duplicates despite deduplication attempts.
## Solutions Implemented
### 1. Always Create New Pages in Managed Browser Mode
**File**: `crawl4ai/browser_manager.py` (lines 1106-1113)
**Change**: Modified `get_page()` method to always create new pages instead of attempting to reuse existing ones for managed browsers without `storage_state`.
**Before**:
```python
context = self.default_context
pages = context.pages
page = next((p for p in pages if p.url == crawlerRunConfig.url), None)
if not page:
if pages:
page = pages[0]
else:
# Create new page only if none exist
async with self._page_lock:
page = await context.new_page()
```
**After**:
```python
context = self.default_context
# Always create new pages instead of reusing existing ones
# This prevents race conditions in concurrent scenarios (arun_many with CDP)
# Serialize page creation to avoid 'Target page/context closed' errors
async with self._page_lock:
page = await context.new_page()
await self._apply_stealth_to_page(page)
```
**Benefits**:
- Eliminates race conditions when multiple tasks call `arun_many` concurrently
- Each request gets a fresh, independent page
- Page lock serializes creation to prevent TOCTOU (Time-of-check to time-of-use) issues
### 2. Fixed Proxy Flag Formatting
**File**: `crawl4ai/browser_manager.py` (lines 103-109)
**Change**: Removed credentials from proxy URL as they should be handled via separate authentication mechanisms in CDP.
**Before**:
```python
elif config.proxy_config:
creds = ""
if config.proxy_config.username and config.proxy_config.password:
creds = f"{config.proxy_config.username}:{config.proxy_config.password}@"
flags.append(f"--proxy-server={creds}{config.proxy_config.server}")
```
**After**:
```python
elif config.proxy_config:
# Note: For CDP/managed browsers, proxy credentials should be handled
# via authentication, not in the URL. Only pass the server address.
flags.append(f"--proxy-server={config.proxy_config.server}")
```
### 3. Enhanced Startup Checks
**File**: `crawl4ai/browser_manager.py` (lines 298-336)
**Changes**:
- Multiple check intervals (0.1s, 0.2s, 0.3s) to catch early failures (see the sketch below)
- Capture and log stdout/stderr on failure (limited to 200 chars)
- Raise `RuntimeError` with detailed diagnostics on startup failure
- Log process PID on successful startup in verbose mode
**Benefits**:
- Catches browser crashes during startup
- Provides detailed diagnostic information for debugging
- Fails fast with clear error messages
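The check pattern is roughly the following; this is a condensed, illustrative sketch (the function name and error handling are simplified), not the exact code in `browser_manager.py`:
```python
import asyncio
import subprocess

async def initial_startup_check(proc: subprocess.Popen) -> None:
    """Illustrative sketch: poll the browser process at increasing delays
    and fail fast with captured output if it exits during startup."""
    for delay in (0.1, 0.2, 0.3):  # staggered checks, 0.6s total
        await asyncio.sleep(delay)
        if proc.poll() is not None:  # process already terminated
            stdout, stderr = b"", b""
            try:
                stdout, stderr = proc.communicate(timeout=0.5)
            except subprocess.TimeoutExpired:
                pass
            detail = (stderr or stdout or b"").decode(errors="replace")[:200]
            raise RuntimeError(
                f"Browser failed to start (exit code {proc.returncode}): {detail}"
            )
```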
### 4. Improved Logging
**File**: `crawl4ai/browser_manager.py` (lines 218-291)
**Changes**:
- Structured logging with proper parameter substitution
- Log browser type, port, and headless status at launch
- Format and log full command with proper shell escaping
- Better error messages with context
- Consistent use of logger with null checks
**Example**:
```python
if self.logger and self.browser_config.verbose:
self.logger.debug(
"Launching browser: {browser_type} | Port: {port} | Headless: {headless}",
tag="BROWSER",
params={
"browser_type": self.browser_type,
"port": self.debugging_port,
"headless": self.headless
}
)
```
### 5. Deduplicate Browser Launch Arguments
**File**: `crawl4ai/browser_manager.py` (lines 424-425)
**Change**: Added explicit deduplication after merging all flags.
```python
# merge common launch flags
flags.extend(self.build_browser_flags(self.browser_config))
# Deduplicate flags - use dict.fromkeys to preserve order while removing duplicates
flags = list(dict.fromkeys(flags))
```
### 6. Import Refactoring
**Files**: `crawl4ai/browser_manager.py`, `crawl4ai/browser_profiler.py`, `tests/browser/test_cdp_concurrency.py`
**Changes**: Organized all imports according to PEP 8:
1. Standard library imports (alphabetized)
2. Third-party imports (alphabetized)
3. Local imports (alphabetized)
**Benefits**:
- Improved code readability
- Easier to spot missing or unused imports
- Consistent style across the codebase
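For reference, the target grouping looks like this (a condensed illustration using a few of the modules involved, not the full import list):
```python
# Standard library imports
import asyncio
import os
import subprocess

# Third-party imports
import psutil
from playwright.async_api import BrowserContext

# Local imports
from .async_configs import BrowserConfig, CrawlerRunConfig
from .utils import get_chromium_path
```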
## Testing
### New Test Suite
**File**: `tests/browser/test_cdp_concurrency.py`
Comprehensive test suite with 8 tests covering:
1. **Basic Concurrent arun_many**: Validates multiple URLs can be crawled concurrently
2. **Sequential arun_many Calls**: Ensures multiple sequential batches work correctly
3. **Stress Test**: Multiple concurrent `arun_many` calls to test page lock effectiveness
4. **Page Isolation**: Verifies pages are truly independent
5. **Different Configurations**: Tests with varying viewport sizes and configs
6. **Error Handling**: Ensures errors in one request don't affect others
7. **Large Batches**: Scalability test with 10+ URLs
8. **Smoke Test Script**: Standalone script for quick validation
### Running Tests
**With pytest** (if available):
```bash
cd /path/to/crawl4ai
pytest tests/browser/test_cdp_concurrency.py -v
```
**Standalone smoke test**:
```bash
cd /path/to/crawl4ai
python3 tests/browser/smoke_test_cdp.py
```
## Migration Guide
### For Users
No breaking changes. Existing code will continue to work, with improved reliability in concurrent scenarios.
### For Contributors
When working with managed browsers:
1. Always use the page lock when creating pages in shared contexts (see the sketch after this list)
2. Prefer creating new pages over reusing existing ones for concurrent operations
3. Use structured logging with parameter substitution
4. Follow PEP 8 import organization
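As an illustration of guideline 1, here is a minimal sketch of the page-lock pattern (assuming a Playwright `BrowserContext`; the lock mirrors the `_page_lock` used in `BrowserManager`):
```python
import asyncio

class SharedContextPages:
    """Illustrative sketch: serialize page creation on a shared context."""

    def __init__(self, context):
        self.context = context           # a Playwright BrowserContext
        self._page_lock = asyncio.Lock()

    async def new_isolated_page(self):
        # Creating pages under the lock keeps concurrent tasks from racing
        # on the shared CDP context ("Target page/context closed" errors).
        async with self._page_lock:
            return await self.context.new_page()
```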
## Performance Impact
- **Positive**: Eliminates race conditions and crashes in concurrent scenarios
- **Neutral**: Page creation overhead is negligible compared to page navigation
- **Consideration**: More pages may be created, but they are properly closed after use
## Backward Compatibility
All changes are backward compatible. Session-based page reuse still works as before when `session_id` is provided.
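For example, a sketch of session-based reuse (assuming the `session_id` parameter on `CrawlerRunConfig`, as used elsewhere in the Crawl4AI docs):
```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig

async def main():
    browser_config = BrowserConfig(use_managed_browser=True, headless=True)
    # Calls sharing a session_id reuse the same page/tab
    # instead of getting a fresh one per request
    config = CrawlerRunConfig(session_id="login-flow")
    async with AsyncWebCrawler(config=browser_config) as crawler:
        await crawler.arun(url="https://example.com/login", config=config)
        await crawler.arun(url="https://example.com/account", config=config)

asyncio.run(main())
```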
## Related Issues
- Fixes race conditions in concurrent `arun_many` calls with CDP browsers
- Addresses "Target page/context closed" errors
- Improves browser startup reliability
## Future Improvements
Consider:
1. Configurable page pooling with proper lifecycle management
2. More granular locks for different contexts
3. Metrics for page creation/reuse patterns
4. Connection pooling for CDP connections

View File

@@ -1383,10 +1383,9 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
try:
await self.adapter.evaluate(page,
f"""
(async () => {{
(() => {{
try {{
const removeOverlays = {remove_overlays_js};
await removeOverlays();
{remove_overlays_js}
return {{ success: true }};
}} catch (error) {{
return {{

View File

@@ -1,26 +1,21 @@
# Standard library imports
import asyncio
import hashlib
import time
from typing import List, Optional
import os
import shlex
import sys
import shutil
import tempfile
import psutil
import signal
import subprocess
import sys
import tempfile
import time
import warnings
from typing import List, Optional
# Third-party imports
import psutil
import shlex
from playwright.async_api import BrowserContext
# Local imports
from .async_configs import BrowserConfig, CrawlerRunConfig
from .config import DOWNLOAD_PAGE_TIMEOUT
import hashlib
from .js_snippet import load_js_script
from .config import DOWNLOAD_PAGE_TIMEOUT
from .async_configs import BrowserConfig, CrawlerRunConfig
from .utils import get_chromium_path
import warnings
BROWSER_DISABLE_OPTIONS = [
@@ -109,9 +104,10 @@ class ManagedBrowser:
if config.proxy:
flags.append(f"--proxy-server={config.proxy}")
elif config.proxy_config:
# Note: For CDP/managed browsers, proxy credentials should be handled
# via authentication, not in the URL. Only pass the server address.
flags.append(f"--proxy-server={config.proxy_config.server}")
creds = ""
if config.proxy_config.username and config.proxy_config.password:
creds = f"{config.proxy_config.username}:{config.proxy_config.password}@"
flags.append(f"--proxy-server={creds}{config.proxy_config.server}")
# dedupe
return list(dict.fromkeys(flags))
@@ -223,27 +219,11 @@ class ManagedBrowser:
os.remove(fp)
except Exception as _e:
# non-fatal — we'll try to start anyway, but log what happened
if self.logger:
self.logger.warning(
"Pre-launch cleanup failed: {error} | Will attempt to start browser anyway",
tag="BROWSER",
params={"error": str(_e)}
)
self.logger.warning(f"pre-launch cleanup failed: {_e}", tag="BROWSER")
# Start browser process
try:
# Log browser launch intent
if self.logger and self.browser_config.verbose:
self.logger.debug(
"Launching browser: {browser_type} | Port: {port} | Headless: {headless}",
tag="BROWSER",
params={
"browser_type": self.browser_type,
"port": self.debugging_port,
"headless": self.headless
}
)
# Use DETACHED_PROCESS flag on Windows to fully detach the process
# On Unix, we'll use preexec_fn=os.setpgrp to start the process in a new process group
if sys.platform == "win32":
@@ -261,36 +241,19 @@ class ManagedBrowser:
preexec_fn=os.setpgrp # Start in a new process group
)
# Log full command if verbose logging is enabled
# If verbose is True print args used to run the process
if self.logger and self.browser_config.verbose:
# Format args for better readability - escape and join
formatted_args = ' '.join(shlex.quote(str(arg)) for arg in args)
self.logger.debug(
"Browser launch command: {command}",
tag="BROWSER",
params={"command": formatted_args}
)
f"Starting browser with args: {' '.join(args)}",
tag="BROWSER"
)
# Perform startup health checks
await asyncio.sleep(0.5) # Initial delay for process startup
# We'll monitor for a short time to make sure it starts properly, but won't keep monitoring
await asyncio.sleep(0.5) # Give browser time to start
await self._initial_startup_check()
await asyncio.sleep(2) # Additional time for browser initialization
cdp_url = f"http://{self.host}:{self.debugging_port}"
if self.logger:
self.logger.info(
"Browser started successfully | CDP URL: {cdp_url}",
tag="BROWSER",
params={"cdp_url": cdp_url}
)
return cdp_url
await asyncio.sleep(2) # Give browser time to start
return f"http://{self.host}:{self.debugging_port}"
except Exception as e:
if self.logger:
self.logger.error(
"Failed to start browser: {error}",
tag="BROWSER",
params={"error": str(e)}
)
await self.cleanup()
raise Exception(f"Failed to start browser: {e}")
@@ -303,41 +266,23 @@ class ManagedBrowser:
return
# Check that process started without immediate termination
# Perform multiple checks with increasing delays to catch early failures
check_intervals = [0.1, 0.2, 0.3] # Total 0.6s
for delay in check_intervals:
await asyncio.sleep(delay)
if self.browser_process.poll() is not None:
# Process already terminated - capture output for debugging
stdout, stderr = b"", b""
try:
stdout, stderr = self.browser_process.communicate(timeout=0.5)
except subprocess.TimeoutExpired:
pass
error_msg = "Browser process terminated during startup"
if stderr:
error_msg += f" | STDERR: {stderr.decode()[:200]}" # Limit output length
if stdout:
error_msg += f" | STDOUT: {stdout.decode()[:200]}"
self.logger.error(
message="{error_msg} | Exit code: {code}",
tag="BROWSER",
params={
"error_msg": error_msg,
"code": self.browser_process.returncode,
},
)
raise RuntimeError(f"Browser failed to start: {error_msg}")
await asyncio.sleep(0.5)
if self.browser_process.poll() is not None:
# Process already terminated
stdout, stderr = b"", b""
try:
stdout, stderr = self.browser_process.communicate(timeout=0.5)
except subprocess.TimeoutExpired:
pass
# Process is still running after checks - log success
if self.logger and self.browser_config.verbose:
self.logger.debug(
"Browser process startup check passed | PID: {pid}",
tag="BROWSER",
params={"pid": self.browser_process.pid}
self.logger.error(
message="Browser process terminated during startup | Code: {code} | STDOUT: {stdout} | STDERR: {stderr}",
tag="ERROR",
params={
"code": self.browser_process.returncode,
"stdout": stdout.decode() if stdout else "",
"stderr": stderr.decode() if stderr else "",
},
)
async def _monitor_browser_process(self):
@@ -426,8 +371,6 @@ class ManagedBrowser:
flags.append("--headless=new")
# merge common launch flags
flags.extend(self.build_browser_flags(self.browser_config))
# Deduplicate flags - use dict.fromkeys to preserve order while removing duplicates
flags = list(dict.fromkeys(flags))
elif self.browser_type == "firefox":
flags = [
"--remote-debugging-port",
@@ -1105,12 +1048,21 @@ class BrowserManager:
await self._apply_stealth_to_page(page)
else:
context = self.default_context
# Always create new pages instead of reusing existing ones
# This prevents race conditions in concurrent scenarios (arun_many with CDP)
# Serialize page creation to avoid 'Target page/context closed' errors
async with self._page_lock:
page = await context.new_page()
await self._apply_stealth_to_page(page)
pages = context.pages
page = next((p for p in pages if p.url == crawlerRunConfig.url), None)
if not page:
if pages:
page = pages[0]
else:
# Double-check under lock to avoid TOCTOU and ensure only
# one task calls new_page when pages=[] concurrently
async with self._page_lock:
pages = context.pages
if pages:
page = pages[0]
else:
page = await context.new_page()
await self._apply_stealth_to_page(page)
else:
# Otherwise, check if we have an existing context for this config
config_signature = self._make_config_signature(crawlerRunConfig)

View File

@@ -5,26 +5,22 @@ This module provides a dedicated class for managing browser profiles
that can be used for identity-based crawling with Crawl4AI.
"""
# Standard library imports
import asyncio
import datetime
import json
import os
import shutil
import asyncio
import signal
import subprocess
import sys
import time
import datetime
import uuid
from typing import Any, Dict, List, Optional
# Third-party imports
import shutil
import json
import subprocess
import time
from typing import List, Dict, Optional, Any
from rich.console import Console
# Local imports
from .async_configs import BrowserConfig
from .async_logger import AsyncLogger, AsyncLoggerBase, LogColor
from .browser_manager import ManagedBrowser
from .async_logger import AsyncLogger, AsyncLoggerBase, LogColor
from .utils import get_home_folder

View File

@@ -785,54 +785,6 @@ curl http://localhost:11235/crawl/job/crawl_xyz
The response includes `status` field: `"processing"`, `"completed"`, or `"failed"`.
#### LLM Extraction Jobs with Webhooks
The same webhook system works for LLM extraction jobs via `/llm/job`:
```bash
# Submit LLM extraction job with webhook
curl -X POST http://localhost:11235/llm/job \
-H "Content-Type: application/json" \
-d '{
"url": "https://example.com/article",
"q": "Extract the article title, author, and main points",
"provider": "openai/gpt-4o-mini",
"webhook_config": {
"webhook_url": "https://myapp.com/webhooks/llm-complete",
"webhook_data_in_payload": true,
"webhook_headers": {
"X-Webhook-Secret": "your-secret-token"
}
}
}'
# Response: {"task_id": "llm_1234567890"}
```
**Your webhook receives:**
```json
{
"task_id": "llm_1234567890",
"task_type": "llm_extraction",
"status": "completed",
"timestamp": "2025-10-22T12:30:00.000000+00:00",
"urls": ["https://example.com/article"],
"data": {
"extracted_content": {
"title": "Understanding Web Scraping",
"author": "John Doe",
"main_points": ["Point 1", "Point 2", "Point 3"]
}
}
}
```
**Key Differences for LLM Jobs:**
- Task type is `"llm_extraction"` instead of `"crawl"`
- Extracted data is in `data.extracted_content`
- Single URL only (not an array)
- Supports schema-based extraction with `schema` parameter (see the example below)
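As an illustration of the `schema` parameter, a minimal Python sketch (the schema fields here are made up; the schema is passed as a JSON-encoded string, matching the `/llm/job` examples elsewhere in these docs):
```python
import json
import requests

schema = {"title": "string", "author": "string", "main_points": ["string"]}

response = requests.post(
    "http://localhost:11235/llm/job",
    json={
        "url": "https://example.com/article",
        "q": "Extract the article title, author, and main points",
        # schema is sent as a JSON-encoded string
        "schema": json.dumps(schema),
        "webhook_config": {
            "webhook_url": "https://myapp.com/webhooks/llm-complete",
            "webhook_data_in_payload": True,
        },
    },
    timeout=30,
)
print(response.json()["task_id"])
```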
> 💡 **Pro tip**: See [WEBHOOK_EXAMPLES.md](./WEBHOOK_EXAMPLES.md) for detailed examples including TypeScript client code, Flask webhook handlers, and failure handling.
---

View File

@@ -6,16 +6,15 @@ x-base-config: &base-config
- "11235:11235" # Gunicorn port
env_file:
- .llm.env # API keys (create from .llm.env.example)
# Uncomment to set default environment variables (will overwrite .llm.env)
# environment:
# - OPENAI_API_KEY=${OPENAI_API_KEY:-}
# - DEEPSEEK_API_KEY=${DEEPSEEK_API_KEY:-}
# - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
# - GROQ_API_KEY=${GROQ_API_KEY:-}
# - TOGETHER_API_KEY=${TOGETHER_API_KEY:-}
# - MISTRAL_API_KEY=${MISTRAL_API_KEY:-}
# - GEMINI_API_KEY=${GEMINI_API_KEY:-}
# - LLM_PROVIDER=${LLM_PROVIDER:-} # Optional: Override default provider (e.g., "anthropic/claude-3-opus")
environment:
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- DEEPSEEK_API_KEY=${DEEPSEEK_API_KEY:-}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- GROQ_API_KEY=${GROQ_API_KEY:-}
- TOGETHER_API_KEY=${TOGETHER_API_KEY:-}
- MISTRAL_API_KEY=${MISTRAL_API_KEY:-}
- GEMINI_API_TOKEN=${GEMINI_API_TOKEN:-}
- LLM_PROVIDER=${LLM_PROVIDER:-} # Optional: Override default provider (e.g., "anthropic/claude-3-opus")
volumes:
- /dev/shm:/dev/shm # Chromium performance
deploy:

View File

@@ -18,7 +18,7 @@ A comprehensive web-based tutorial for learning and experimenting with C4A-Scrip
2. **Install Dependencies**
```bash
pip install -r requirements.txt
pip install flask
```
3. **Launch the Server**
@@ -28,7 +28,7 @@ A comprehensive web-based tutorial for learning and experimenting with C4A-Scrip
4. **Open in Browser**
```
http://localhost:8000
http://localhost:8080
```
**🌐 Try Online**: [Live Demo](https://docs.crawl4ai.com/c4a-script/demo)
@@ -325,7 +325,7 @@ Powers the recording functionality:
### Configuration
```python
# server.py configuration
PORT = 8000
PORT = 8080
DEBUG = True
THREADED = True
```
@@ -343,9 +343,9 @@ THREADED = True
**Port Already in Use**
```bash
# Kill existing process
lsof -ti:8000 | xargs kill -9
lsof -ti:8080 | xargs kill -9
# Or use different port
python server.py --port 8001
python server.py --port 8081
```
**Blockly Not Loading**

View File

@@ -216,7 +216,7 @@ def get_examples():
'name': 'Handle Cookie Banner',
'description': 'Accept cookies and close newsletter popup',
'script': '''# Handle cookie banner and newsletter
GO http://127.0.0.1:8000/playground/
GO http://127.0.0.1:8080/playground/
WAIT `body` 2
IF (EXISTS `.cookie-banner`) THEN CLICK `.accept`
IF (EXISTS `.newsletter-popup`) THEN CLICK `.close`'''

File diff suppressed because it is too large

View File

@@ -82,42 +82,6 @@ If you installed Crawl4AI (which installs Playwright under the hood), you alread
---
### Creating a Profile Using the Crawl4AI CLI (Easiest)
If you prefer a guided, interactive setup, use the built-in CLI to create and manage persistent browser profiles.
1. Launch the profile manager:
```bash
crwl profiles
```
2. Choose "Create new profile" and enter a profile name. A Chromium window opens so you can log in to sites and configure settings. When finished, return to the terminal and press `q` to save the profile.
3. Profiles are saved under `~/.crawl4ai/profiles/<profile_name>` (for example: `/home/<you>/.crawl4ai/profiles/test_profile_1`) along with a `storage_state.json` for cookies and session data.
4. Optionally, choose "List profiles" in the CLI to view available profiles and their paths.
5. Use the saved path with `BrowserConfig.user_data_dir`:
```python
from crawl4ai import AsyncWebCrawler, BrowserConfig
profile_path = "/home/<you>/.crawl4ai/profiles/test_profile_1"
browser_config = BrowserConfig(
headless=True,
use_managed_browser=True,
user_data_dir=profile_path,
browser_type="chromium",
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url="https://example.com/private")
```
The CLI also supports listing and deleting profiles, and even testing a crawl directly from the menu.
---
## 3. Using Managed Browsers in Crawl4AI
Once you have a data directory with your session data, pass it to **`BrowserConfig`**:

View File

@@ -18,7 +18,7 @@ A comprehensive web-based tutorial for learning and experimenting with C4A-Scrip
2. **Install Dependencies**
```bash
pip install -r requirements.txt
pip install flask
```
3. **Launch the Server**
@@ -28,7 +28,7 @@ A comprehensive web-based tutorial for learning and experimenting with C4A-Scrip
4. **Open in Browser**
```
http://localhost:8000
http://localhost:8080
```
**🌐 Try Online**: [Live Demo](https://docs.crawl4ai.com/c4a-script/demo)
@@ -325,7 +325,7 @@ Powers the recording functionality:
### Configuration
```python
# server.py configuration
PORT = 8000
PORT = 8080
DEBUG = True
THREADED = True
```
@@ -343,9 +343,9 @@ THREADED = True
**Port Already in Use**
```bash
# Kill existing process
lsof -ti:8000 | xargs kill -9
lsof -ti:8080 | xargs kill -9
# Or use different port
python server.py --port 8001
python server.py --port 8081
```
**Blockly Not Loading**

View File

@@ -216,7 +216,7 @@ def get_examples():
'name': 'Handle Cookie Banner',
'description': 'Accept cookies and close newsletter popup',
'script': '''# Handle cookie banner and newsletter
GO http://127.0.0.1:8000/playground/
GO http://127.0.0.1:8080/playground/
WAIT `body` 2
IF (EXISTS `.cookie-banner`) THEN CLICK `.accept`
IF (EXISTS `.newsletter-popup`) THEN CLICK `.close`'''
@@ -283,7 +283,7 @@ WAIT `.success-message` 5'''
return jsonify(examples)
if __name__ == '__main__':
port = int(os.environ.get('PORT', 8000))
port = int(os.environ.get('PORT', 8080))
print(f"""
╔══════════════════════════════════════════════════════════╗
║ C4A-Script Interactive Tutorial Server ║

View File

@@ -69,12 +69,12 @@ The tutorial includes a Flask-based web interface with:
cd docs/examples/c4a_script/tutorial/
# Install dependencies
pip install -r requirements.txt
pip install flask
# Launch the tutorial server
python server.py
python app.py
# Open http://localhost:8000 in your browser
# Open http://localhost:5000 in your browser
```
## Core Concepts
@@ -111,8 +111,8 @@ CLICK `.submit-btn`
# By attribute
CLICK `button[type="submit"]`
# By accessible attributes
CLICK `button[aria-label="Search"][title="Search"]`
# By text content
CLICK `button:contains("Sign In")`
# Complex selectors
CLICK `.form-container input[name="email"]`

View File

@@ -27,14 +27,6 @@
- [Hook Response Information](#hook-response-information)
- [Error Handling](#error-handling)
- [Hooks Utility: Function-Based Approach (Python)](#hooks-utility-function-based-approach-python)
- [Job Queue & Webhook API](#job-queue-webhook-api)
- [Why Use the Job Queue API?](#why-use-the-job-queue-api)
- [Available Endpoints](#available-endpoints)
- [Webhook Configuration](#webhook-configuration)
- [Usage Examples](#usage-examples)
- [Webhook Best Practices](#webhook-best-practices)
- [Use Cases](#use-cases)
- [Troubleshooting](#troubleshooting)
- [Dockerfile Parameters](#dockerfile-parameters)
- [Using the API](#using-the-api)
- [Playground Interface](#playground-interface)
@@ -1118,464 +1110,6 @@ if __name__ == "__main__":
---
## Job Queue & Webhook API
The Docker deployment includes a powerful asynchronous job queue system with webhook support for both crawling and LLM extraction tasks. Instead of waiting for long-running operations to complete, submit jobs and receive real-time notifications via webhooks when they finish.
### Why Use the Job Queue API?
**Traditional Synchronous API (`/crawl`):**
- Client waits for entire crawl to complete
- Timeout issues with long-running crawls
- Resource blocking during execution
- Constant polling required for status updates
**Asynchronous Job Queue API (`/crawl/job`, `/llm/job`):**
- ✅ Submit job and continue immediately
- ✅ No timeout concerns for long operations
- ✅ Real-time webhook notifications on completion
- ✅ Better resource utilization
- ✅ Perfect for batch processing
- ✅ Ideal for microservice architectures
### Available Endpoints
#### 1. Crawl Job Endpoint
```
POST /crawl/job
```
Submit an asynchronous crawl job with optional webhook notification.
**Request Body:**
```json
{
"urls": ["https://example.com"],
"cache_mode": "bypass",
"extraction_strategy": {
"type": "JsonCssExtractionStrategy",
"schema": {
"title": "h1",
"content": ".article-body"
}
},
"webhook_config": {
"webhook_url": "https://your-app.com/webhook/crawl-complete",
"webhook_data_in_payload": true,
"webhook_headers": {
"X-Webhook-Secret": "your-secret-token",
"X-Custom-Header": "value"
}
}
}
```
**Response:**
```json
{
"task_id": "crawl_1698765432",
"message": "Crawl job submitted"
}
```
#### 2. LLM Extraction Job Endpoint
```
POST /llm/job
```
Submit an asynchronous LLM extraction job with optional webhook notification.
**Request Body:**
```json
{
"url": "https://example.com/article",
"q": "Extract the article title, author, publication date, and main points",
"provider": "openai/gpt-4o-mini",
"schema": "{\"title\": \"string\", \"author\": \"string\", \"date\": \"string\", \"points\": [\"string\"]}",
"cache": false,
"webhook_config": {
"webhook_url": "https://your-app.com/webhook/llm-complete",
"webhook_data_in_payload": true,
"webhook_headers": {
"X-Webhook-Secret": "your-secret-token"
}
}
}
```
**Response:**
```json
{
"task_id": "llm_1698765432",
"message": "LLM job submitted"
}
```
#### 3. Job Status Endpoint
```
GET /job/{task_id}
```
Check the status and retrieve results of a submitted job.
**Response (In Progress):**
```json
{
"task_id": "crawl_1698765432",
"status": "processing",
"message": "Job is being processed"
}
```
**Response (Completed):**
```json
{
"task_id": "crawl_1698765432",
"status": "completed",
"result": {
"markdown": "# Page Title\n\nContent...",
"extracted_content": {...},
"links": {...}
}
}
```
### Webhook Configuration
Webhooks provide real-time notifications when your jobs complete, eliminating the need for constant polling.
#### Webhook Config Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `webhook_url` | string | Yes | Your HTTP(S) endpoint to receive notifications |
| `webhook_data_in_payload` | boolean | No | Include full result data in webhook payload (default: false) |
| `webhook_headers` | object | No | Custom headers for authentication/identification |
#### Webhook Payload Format
**Success Notification (Crawl Job):**
```json
{
"task_id": "crawl_1698765432",
"task_type": "crawl",
"status": "completed",
"timestamp": "2025-10-22T12:30:00.000000+00:00",
"urls": ["https://example.com"],
"data": {
"markdown": "# Page content...",
"extracted_content": {...},
"links": {...}
}
}
```
**Success Notification (LLM Job):**
```json
{
"task_id": "llm_1698765432",
"task_type": "llm_extraction",
"status": "completed",
"timestamp": "2025-10-22T12:30:00.000000+00:00",
"urls": ["https://example.com/article"],
"data": {
"extracted_content": {
"title": "Understanding Web Scraping",
"author": "John Doe",
"date": "2025-10-22",
"points": ["Point 1", "Point 2"]
}
}
}
```
**Failure Notification:**
```json
{
"task_id": "crawl_1698765432",
"task_type": "crawl",
"status": "failed",
"timestamp": "2025-10-22T12:30:00.000000+00:00",
"urls": ["https://example.com"],
"error": "Connection timeout after 30 seconds"
}
```
#### Webhook Delivery & Retry
- **Delivery Method:** HTTP POST to your `webhook_url`
- **Content-Type:** `application/json`
- **Retry Policy:** Exponential backoff with 5 attempts (sketched below)
- Attempt 1: Immediate
- Attempt 2: 1 second delay
- Attempt 3: 2 seconds delay
- Attempt 4: 4 seconds delay
- Attempt 5: 8 seconds delay
- **Success Status Codes:** 200-299
- **Custom Headers:** Your `webhook_headers` are included in every request
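For reference, a minimal sketch of that retry schedule from the sender's side (illustrative only, not the server's actual delivery code):
```python
import time
import requests

def deliver_webhook(url, payload, headers=None):
    """Illustrative: 5 attempts with 0, 1, 2, 4, 8 second delays,
    treating any 2xx response as success (per the policy above)."""
    for delay in (0, 1, 2, 4, 8):
        time.sleep(delay)
        try:
            resp = requests.post(url, json=payload, headers=headers, timeout=10)
            if 200 <= resp.status_code < 300:
                return True
        except requests.RequestException:
            pass  # network error: fall through to the next attempt
    return False
```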
### Usage Examples
#### Example 1: Python with Webhook Handler (Flask)
```python
from flask import Flask, request, jsonify
import requests
app = Flask(__name__)
# Webhook handler
@app.route('/webhook/crawl-complete', methods=['POST'])
def handle_crawl_webhook():
payload = request.json
if payload['status'] == 'completed':
print(f"✅ Job {payload['task_id']} completed!")
print(f"Task type: {payload['task_type']}")
# Access the crawl results
if 'data' in payload:
markdown = payload['data'].get('markdown', '')
extracted = payload['data'].get('extracted_content', {})
print(f"Extracted {len(markdown)} characters")
print(f"Structured data: {extracted}")
else:
print(f"❌ Job {payload['task_id']} failed: {payload.get('error')}")
return jsonify({"status": "received"}), 200
# Submit a crawl job with webhook
def submit_crawl_job():
response = requests.post(
"http://localhost:11235/crawl/job",
json={
"urls": ["https://example.com"],
"extraction_strategy": {
"type": "JsonCssExtractionStrategy",
"schema": {
"name": "Example Schema",
"baseSelector": "body",
"fields": [
{"name": "title", "selector": "h1", "type": "text"},
{"name": "description", "selector": "meta[name='description']", "type": "attribute", "attribute": "content"}
]
}
},
"webhook_config": {
"webhook_url": "https://your-app.com/webhook/crawl-complete",
"webhook_data_in_payload": True,
"webhook_headers": {
"X-Webhook-Secret": "your-secret-token"
}
}
}
)
task_id = response.json()['task_id']
print(f"Job submitted: {task_id}")
return task_id
if __name__ == '__main__':
app.run(port=5000)
```
#### Example 2: LLM Extraction with Webhooks
```python
import requests
def submit_llm_job_with_webhook():
response = requests.post(
"http://localhost:11235/llm/job",
json={
"url": "https://example.com/article",
"q": "Extract the article title, author, and main points",
"provider": "openai/gpt-4o-mini",
"webhook_config": {
"webhook_url": "https://your-app.com/webhook/llm-complete",
"webhook_data_in_payload": True,
"webhook_headers": {
"X-Webhook-Secret": "your-secret-token"
}
}
}
)
task_id = response.json()['task_id']
print(f"LLM job submitted: {task_id}")
return task_id
# Webhook handler for LLM jobs
@app.route('/webhook/llm-complete', methods=['POST'])
def handle_llm_webhook():
payload = request.json
if payload['status'] == 'completed':
extracted = payload['data']['extracted_content']
print(f"✅ LLM extraction completed!")
print(f"Results: {extracted}")
else:
print(f"❌ LLM extraction failed: {payload.get('error')}")
return jsonify({"status": "received"}), 200
```
#### Example 3: Without Webhooks (Polling)
If you don't use webhooks, you can poll for results:
```python
import requests
import time
# Submit job
response = requests.post(
"http://localhost:11235/crawl/job",
json={"urls": ["https://example.com"]}
)
task_id = response.json()['task_id']
# Poll for results
while True:
result = requests.get(f"http://localhost:11235/job/{task_id}")
data = result.json()
if data['status'] == 'completed':
print("Job completed!")
print(data['result'])
break
elif data['status'] == 'failed':
print(f"Job failed: {data.get('error')}")
break
print("Still processing...")
time.sleep(2)
```
#### Example 4: Global Webhook Configuration
Set a default webhook URL in your `config.yml` to avoid repeating it in every request:
```yaml
# config.yml
api:
crawler:
# ... other settings ...
webhook:
default_url: "https://your-app.com/webhook/default"
default_headers:
X-Webhook-Secret: "your-secret-token"
```
Then submit jobs without webhook config:
```python
# Uses the global webhook configuration
response = requests.post(
"http://localhost:11235/crawl/job",
json={"urls": ["https://example.com"]}
)
```
### Webhook Best Practices
1. **Authentication:** Always use custom headers for webhook authentication
```json
"webhook_headers": {
"X-Webhook-Secret": "your-secret-token"
}
```
2. **Idempotency:** Design your webhook handler to be idempotent (safe to receive duplicate notifications)
3. **Fast Response:** Return HTTP 200 quickly; process data asynchronously if needed
```python
@app.route('/webhook', methods=['POST'])
def webhook():
payload = request.json
# Queue for background processing
queue.enqueue(process_webhook, payload)
return jsonify({"status": "received"}), 200
```
4. **Error Handling:** Handle both success and failure notifications
```python
if payload['status'] == 'completed':
    ...  # process success
elif payload['status'] == 'failed':
    ...  # log error, retry, or alert
```
5. **Validation:** Verify webhook authenticity using custom headers
```python
secret = request.headers.get('X-Webhook-Secret')
if secret != os.environ['EXPECTED_SECRET']:
return jsonify({"error": "Unauthorized"}), 401
```
6. **Logging:** Log webhook deliveries for debugging
```python
logger.info(f"Webhook received: {payload['task_id']} - {payload['status']}")
```
### Use Cases
**1. Batch Processing**
Submit hundreds of URLs and get notified as each completes:
```python
urls = ["https://site1.com", "https://site2.com", ...]
for url in urls:
submit_crawl_job(url, webhook_url="https://app.com/webhook")
```
**2. Microservice Integration**
Integrate with event-driven architectures:
```python
# Service A submits job
task_id = submit_crawl_job(url)
# Service B receives webhook and triggers next step
@app.route('/webhook')
def webhook():
process_result(request.json)
trigger_next_service()
return "OK", 200
```
**3. Long-Running Extractions**
Handle complex LLM extractions without timeouts:
```python
submit_llm_job(
url="https://long-article.com",
q="Comprehensive summary with key points and analysis",
webhook_url="https://app.com/webhook/llm"
)
```
### Troubleshooting
**Webhook not receiving notifications?**
- Check your webhook URL is publicly accessible
- Verify firewall/security group settings
- Use webhook testing tools like webhook.site for debugging
- Check server logs for delivery attempts
- Ensure your handler returns 200-299 status code
**Job stuck in processing?**
- Check Redis connection: `docker logs <container_name> | grep redis`
- Verify worker processes: `docker exec <container_name> ps aux | grep worker`
- Check server logs: `docker logs <container_name>`
**Need to cancel a job?**
Jobs are processed asynchronously. If you need to cancel:
- Delete the task from Redis (requires Redis CLI access)
- Or implement a cancellation endpoint in your webhook handler
---
## Dockerfile Parameters
You can customize the image build process using build arguments (`--build-arg`). These are typically used via `docker buildx build` or within the `docker-compose.yml` file.

View File

@@ -57,7 +57,7 @@
Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant community. It delivers blazing-fast, AI-ready web crawling tailored for large language models, AI agents, and data pipelines. Fully open source, flexible, and built for real-time performance, **Crawl4AI** empowers developers with unmatched speed, precision, and deployment ease.
> Enjoy using Crawl4AI? Consider **[becoming a sponsor](https://github.com/sponsors/unclecode)** to support ongoing development and community growth!
> **Note**: If you're looking for the old documentation, you can access it [here](https://old.docs.crawl4ai.com).
## 🆕 AI Assistant Skill Now Available!

View File

@@ -529,19 +529,8 @@ class AdminDashboard {
</label>
</div>
<div class="form-group full-width">
<label>Long Description (Markdown - Overview tab)</label>
<textarea id="form-long-description" rows="10" placeholder="Enter detailed description with markdown formatting...">${app?.long_description || ''}</textarea>
<small>Markdown support: **bold**, *italic*, [links](url), # headers, code blocks, lists</small>
</div>
<div class="form-group full-width">
<label>Integration Guide (Markdown - Integration tab)</label>
<textarea id="form-integration" rows="20" placeholder="Enter integration guide with installation, examples, and code snippets using markdown...">${app?.integration_guide || ''}</textarea>
<small>Single markdown field with installation, examples, and complete guide. Code blocks get auto copy buttons.</small>
</div>
<div class="form-group full-width">
<label>Documentation (Markdown - Documentation tab)</label>
<textarea id="form-documentation" rows="20" placeholder="Enter documentation with API reference, examples, and best practices using markdown...">${app?.documentation || ''}</textarea>
<small>Full documentation with API reference, examples, best practices, etc.</small>
<label>Integration Guide</label>
<textarea id="form-integration" rows="10">${app?.integration_guide || ''}</textarea>
</div>
</div>
`;
@@ -723,9 +712,7 @@ class AdminDashboard {
data.contact_email = document.getElementById('form-email').value;
data.featured = document.getElementById('form-featured').checked ? 1 : 0;
data.sponsored = document.getElementById('form-sponsored').checked ? 1 : 0;
data.long_description = document.getElementById('form-long-description').value;
data.integration_guide = document.getElementById('form-integration').value;
data.documentation = document.getElementById('form-documentation').value;
} else if (type === 'articles') {
data.title = document.getElementById('form-title').value;
data.slug = this.generateSlug(data.title);

View File

@@ -278,12 +278,12 @@
}
.tab-content {
display: none !important;
display: none;
padding: 2rem;
}
.tab-content.active {
display: block !important;
display: block;
}
/* Overview Layout */
@@ -510,31 +510,6 @@
line-height: 1.5;
}
/* Markdown rendered code blocks */
.integration-content pre,
.docs-content pre {
background: var(--bg-dark);
border: 1px solid var(--border-color);
margin: 1rem 0;
padding: 1rem;
padding-top: 2.5rem; /* Space for copy button */
overflow-x: auto;
position: relative;
max-height: none; /* Remove any height restrictions */
height: auto; /* Allow content to expand */
}
.integration-content pre code,
.docs-content pre code {
background: transparent;
padding: 0;
color: var(--text-secondary);
font-size: 0.875rem;
line-height: 1.5;
white-space: pre; /* Preserve whitespace and line breaks */
display: block;
}
/* Feature Grid */
.feature-grid {
display: grid;

View File

@@ -73,14 +73,27 @@
<div class="tabs">
<button class="tab-btn active" data-tab="overview">Overview</button>
<button class="tab-btn" data-tab="integration">Integration</button>
<!-- <button class="tab-btn" data-tab="docs">Documentation</button>
<button class="tab-btn" data-tab="support">Support</button> -->
<button class="tab-btn" data-tab="docs">Documentation</button>
<button class="tab-btn" data-tab="support">Support</button>
</div>
<section id="overview-tab" class="tab-content active">
<div class="overview-columns">
<div class="overview-main">
<h2>Overview</h2>
<div id="app-overview">Overview content goes here.</div>
<h3>Key Features</h3>
<ul id="app-features" class="features-list">
<li>Feature 1</li>
<li>Feature 2</li>
<li>Feature 3</li>
</ul>
<h3>Use Cases</h3>
<div id="app-use-cases" class="use-cases">
<p>Describe how this app can help your workflow.</p>
</div>
</div>
<aside class="sidebar">
@@ -129,16 +142,37 @@
</section>
<section id="integration-tab" class="tab-content">
<div class="integration-content" id="app-integration">
<div class="integration-content">
<h2>Integration Guide</h2>
<h3>Installation</h3>
<div class="code-block">
<pre><code id="install-code"># Installation instructions will appear here</code></pre>
</div>
<h3>Basic Usage</h3>
<div class="code-block">
<pre><code id="usage-code"># Usage example will appear here</code></pre>
</div>
<h3>Complete Integration Example</h3>
<div class="code-block">
<button class="copy-btn" id="copy-integration">Copy</button>
<pre><code id="integration-code"># Complete integration guide will appear here</code></pre>
</div>
</div>
</section>
<!-- <section id="docs-tab" class="tab-content">
<div class="docs-content" id="app-docs">
<section id="docs-tab" class="tab-content">
<div class="docs-content">
<h2>Documentation</h2>
<div id="app-docs" class="doc-sections">
<p>Documentation coming soon.</p>
</div>
</div>
</section> -->
</section>
<!-- <section id="support-tab" class="tab-content">
<section id="support-tab" class="tab-content">
<div class="docs-content">
<h2>Support</h2>
<div class="support-grid">
@@ -156,7 +190,7 @@
</div>
</div>
</div>
</section> -->
</section>
</div>
</main>

View File

@@ -112,7 +112,7 @@ class AppDetailPage {
}
// Contact
document.getElementById('app-contact') && (document.getElementById('app-contact').textContent = this.appData.contact_email || 'Not available');
document.getElementById('app-contact').textContent = this.appData.contact_email || 'Not available';
// Sidebar info
document.getElementById('sidebar-downloads').textContent = this.formatNumber(this.appData.downloads || 0);
@@ -123,134 +123,146 @@ class AppDetailPage {
document.getElementById('sidebar-pricing').textContent = this.appData.pricing || 'Free';
document.getElementById('sidebar-contact').textContent = this.appData.contact_email || 'contact@example.com';
// Render tab contents from database fields
this.renderTabContents();
// Integration guide
this.renderIntegrationGuide();
}
renderTabContents() {
// Overview tab - use long_description from database
const overviewDiv = document.getElementById('app-overview');
if (overviewDiv) {
if (this.appData.long_description) {
overviewDiv.innerHTML = this.renderMarkdown(this.appData.long_description);
} else {
overviewDiv.innerHTML = `<p>${this.appData.description || 'No overview available.'}</p>`;
renderIntegrationGuide() {
// Installation code
const installCode = document.getElementById('install-code');
if (installCode) {
if (this.appData.type === 'Open Source' && this.appData.github_url) {
installCode.textContent = `# Clone from GitHub
git clone ${this.appData.github_url}
# Install dependencies
pip install -r requirements.txt`;
} else if (this.appData.name.toLowerCase().includes('api')) {
installCode.textContent = `# Install via pip
pip install ${this.appData.slug}
# Or install from source
pip install git+${this.appData.github_url || 'https://github.com/example/repo'}`;
}
}
// Integration tab - use integration_guide field from database
const integrationDiv = document.getElementById('app-integration');
if (integrationDiv) {
if (this.appData.integration_guide) {
integrationDiv.innerHTML = this.renderMarkdown(this.appData.integration_guide);
// Add copy buttons to all code blocks
this.addCopyButtonsToCodeBlocks(integrationDiv);
} else {
integrationDiv.innerHTML = '<p>Integration guide not yet available. Please check the official website for details.</p>';
// Usage code - customize based on category
const usageCode = document.getElementById('usage-code');
if (usageCode) {
if (this.appData.category === 'Browser Automation') {
usageCode.textContent = `from crawl4ai import AsyncWebCrawler
from ${this.appData.slug.replace(/-/g, '_')} import ${this.appData.name.replace(/\s+/g, '')}
async def main():
# Initialize ${this.appData.name}
automation = ${this.appData.name.replace(/\s+/g, '')}()
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
browser_config=automation.config,
wait_for="css:body"
)
print(result.markdown)`;
} else if (this.appData.category === 'Proxy Services') {
usageCode.textContent = `from crawl4ai import AsyncWebCrawler
import ${this.appData.slug.replace(/-/g, '_')}
# Configure proxy
proxy_config = {
"server": "${this.appData.website_url || 'https://proxy.example.com'}",
"username": "your_username",
"password": "your_password"
}
async with AsyncWebCrawler(proxy=proxy_config) as crawler:
result = await crawler.arun(
url="https://example.com",
bypass_cache=True
)
print(result.status_code)`;
} else if (this.appData.category === 'LLM Integration') {
usageCode.textContent = `from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy
# Configure LLM extraction
strategy = LLMExtractionStrategy(
provider="${this.appData.name.toLowerCase().includes('gpt') ? 'openai' : 'anthropic'}",
api_key="your-api-key",
model="${this.appData.name.toLowerCase().includes('gpt') ? 'gpt-4' : 'claude-3'}",
instruction="Extract structured data"
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
extraction_strategy=strategy
)
print(result.extracted_content)`;
}
}
// Documentation tab - use documentation field from database
const docsDiv = document.getElementById('app-docs');
if (docsDiv) {
if (this.appData.documentation) {
docsDiv.innerHTML = this.renderMarkdown(this.appData.documentation);
// Add copy buttons to all code blocks
this.addCopyButtonsToCodeBlocks(docsDiv);
} else {
docsDiv.innerHTML = '<p>Documentation coming soon.</p>';
}
// Integration example
const integrationCode = document.getElementById('integration-code');
if (integrationCode) {
integrationCode.textContent = this.appData.integration_guide ||
`# Complete ${this.appData.name} Integration Example
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
import json
async def crawl_with_${this.appData.slug.replace(/-/g, '_')}():
"""
Complete example showing how to use ${this.appData.name}
with Crawl4AI for production web scraping
"""
# Define extraction schema
schema = {
"name": "ProductList",
"baseSelector": "div.product",
"fields": [
{"name": "title", "selector": "h2", "type": "text"},
{"name": "price", "selector": ".price", "type": "text"},
{"name": "image", "selector": "img", "type": "attribute", "attribute": "src"},
{"name": "link", "selector": "a", "type": "attribute", "attribute": "href"}
]
}
# Initialize crawler with ${this.appData.name}
async with AsyncWebCrawler(
browser_type="chromium",
headless=True,
verbose=True
) as crawler:
# Crawl with extraction
result = await crawler.arun(
url="https://example.com/products",
extraction_strategy=JsonCssExtractionStrategy(schema),
cache_mode="bypass",
wait_for="css:.product",
screenshot=True
)
# Process results
if result.success:
products = json.loads(result.extracted_content)
print(f"Found {len(products)} products")
for product in products[:5]:
print(f"- {product['title']}: {product['price']}")
return products
# Run the crawler
if __name__ == "__main__":
import asyncio
asyncio.run(crawl_with_${this.appData.slug.replace(/-/g, '_')}())`;
}
}
addCopyButtonsToCodeBlocks(container) {
// Find all code blocks and add copy buttons
const codeBlocks = container.querySelectorAll('pre code');
codeBlocks.forEach(codeBlock => {
const pre = codeBlock.parentElement;
// Skip if already has a copy button
if (pre.querySelector('.copy-btn')) return;
// Create copy button
const copyBtn = document.createElement('button');
copyBtn.className = 'copy-btn';
copyBtn.textContent = 'Copy';
copyBtn.onclick = () => {
navigator.clipboard.writeText(codeBlock.textContent).then(() => {
copyBtn.textContent = '✓ Copied!';
setTimeout(() => {
copyBtn.textContent = 'Copy';
}, 2000);
});
};
// Add button to pre element
pre.style.position = 'relative';
pre.insertBefore(copyBtn, codeBlock);
});
}
renderMarkdown(text) {
if (!text) return '';
// Store code blocks temporarily to protect them from processing
const codeBlocks = [];
let processed = text.replace(/```(\w+)?\n([\s\S]*?)```/g, (match, lang, code) => {
const placeholder = `___CODE_BLOCK_${codeBlocks.length}___`;
codeBlocks.push(`<pre><code class="language-${lang || ''}">${this.escapeHtml(code)}</code></pre>`);
return placeholder;
});
// Store inline code temporarily
const inlineCodes = [];
processed = processed.replace(/`([^`]+)`/g, (match, code) => {
const placeholder = `___INLINE_CODE_${inlineCodes.length}___`;
inlineCodes.push(`<code>${this.escapeHtml(code)}</code>`);
return placeholder;
});
// Now process the rest of the markdown
processed = processed
// Headers
.replace(/^### (.*$)/gim, '<h3>$1</h3>')
.replace(/^## (.*$)/gim, '<h2>$1</h2>')
.replace(/^# (.*$)/gim, '<h1>$1</h1>')
// Bold
.replace(/\*\*(.*?)\*\*/g, '<strong>$1</strong>')
// Italic
.replace(/\*(.*?)\*/g, '<em>$1</em>')
// Links
.replace(/\[([^\]]+)\]\(([^)]+)\)/g, '<a href="$2" target="_blank">$1</a>')
// Line breaks
.replace(/\n\n/g, '</p><p>')
.replace(/\n/g, '<br>')
// Lists
.replace(/^\* (.*)$/gim, '<li>$1</li>')
.replace(/^- (.*)$/gim, '<li>$1</li>')
// Wrap in paragraphs
.replace(/^(?!<[h|p|pre|ul|ol|li])/gim, '<p>')
.replace(/(?<![>])$/gim, '</p>');
// Restore inline code
inlineCodes.forEach((code, i) => {
processed = processed.replace(`___INLINE_CODE_${i}___`, code);
});
// Restore code blocks
codeBlocks.forEach((block, i) => {
processed = processed.replace(`___CODE_BLOCK_${i}___`, block);
});
return processed;
}
escapeHtml(text) {
const div = document.createElement('div');
div.textContent = text;
return div.innerHTML;
}
formatNumber(num) {
if (num >= 1000000) {
return (num / 1000000).toFixed(1) + 'M';
@@ -263,27 +275,45 @@ class AppDetailPage {
setupEventListeners() {
// Tab switching
const tabs = document.querySelectorAll('.tab-btn');
tabs.forEach(tab => {
tab.addEventListener('click', () => {
// Update active tab button
// Update active tab
tabs.forEach(t => t.classList.remove('active'));
tab.classList.add('active');
// Show corresponding content
const tabName = tab.dataset.tab;
// Hide all tab contents
const allTabContents = document.querySelectorAll('.tab-content');
allTabContents.forEach(content => {
document.querySelectorAll('.tab-content').forEach(content => {
content.classList.remove('active');
});
document.getElementById(`${tabName}-tab`).classList.add('active');
});
});
// Show the selected tab content
const targetTab = document.getElementById(`${tabName}-tab`);
if (targetTab) {
targetTab.classList.add('active');
}
// Copy integration code
document.getElementById('copy-integration').addEventListener('click', () => {
const code = document.getElementById('integration-code').textContent;
navigator.clipboard.writeText(code).then(() => {
const btn = document.getElementById('copy-integration');
const originalText = btn.innerHTML;
btn.innerHTML = '<span>✓</span> Copied!';
setTimeout(() => {
btn.innerHTML = originalText;
}, 2000);
});
});
// Copy code buttons
document.querySelectorAll('.copy-btn').forEach(btn => {
btn.addEventListener('click', (e) => {
const codeBlock = e.target.closest('.code-block');
const code = codeBlock.querySelector('code').textContent;
navigator.clipboard.writeText(code).then(() => {
btn.textContent = 'Copied!';
setTimeout(() => {
btn.textContent = 'Copy';
}, 2000);
});
});
});
}

View File

@@ -471,17 +471,13 @@ async def delete_sponsor(sponsor_id: int):
app.include_router(router)
# Version info
VERSION = "1.1.0"
BUILD_DATE = "2025-10-26"
@app.get("/")
async def root():
"""API info"""
return {
"name": "Crawl4AI Marketplace API",
"version": VERSION,
"build_date": BUILD_DATE,
"version": "1.0.0",
"endpoints": [
"/marketplace/api/apps",
"/marketplace/api/articles",

View File

@@ -31,7 +31,7 @@ dependencies = [
"rank-bm25~=0.2",
"snowballstemmer~=2.2",
"pydantic>=2.10",
"pyOpenSSL>=25.3.0",
"pyOpenSSL>=24.3.0",
"psutil>=6.1.1",
"PyYAML>=6.0",
"nltk>=3.9.1",

View File

@@ -19,7 +19,7 @@ rank-bm25~=0.2
colorama~=0.4
snowballstemmer~=2.2
pydantic>=2.10
pyOpenSSL>=25.3.0
pyOpenSSL>=24.3.0
psutil>=6.1.1
PyYAML>=6.0
nltk>=3.9.1

View File

@@ -1,165 +0,0 @@
#!/usr/bin/env python3
"""
Simple smoke test for CDP concurrency fixes.
This can be run without pytest to quickly validate the changes.
"""
import asyncio
import sys
import os
# Add the project root to Python path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../..')))
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
async def test_basic_cdp():
"""Basic test that CDP browser works"""
print("Test 1: Basic CDP browser test...")
browser_config = BrowserConfig(
use_managed_browser=True,
headless=True,
verbose=False
)
try:
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://example.com",
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
)
assert result.success, f"Failed: {result.error_message}"
assert len(result.html) > 0, "Empty HTML"
print(" ✓ Basic CDP test passed")
return True
except Exception as e:
print(f" ✗ Basic CDP test failed: {e}")
return False
async def test_arun_many_cdp():
"""Test arun_many with CDP browser - the key concurrency fix"""
print("\nTest 2: arun_many with CDP browser...")
browser_config = BrowserConfig(
use_managed_browser=True,
headless=True,
verbose=False
)
urls = [
"https://example.com",
"https://httpbin.org/html",
"https://www.example.org",
]
try:
async with AsyncWebCrawler(config=browser_config) as crawler:
results = await crawler.arun_many(
urls=urls,
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
)
assert len(results) == len(urls), f"Expected {len(urls)} results, got {len(results)}"
success_count = sum(1 for r in results if r.success)
print(f" ✓ Crawled {success_count}/{len(urls)} URLs successfully")
if success_count >= len(urls) * 0.8: # Allow 20% failure for network issues
print(" ✓ arun_many CDP test passed")
return True
else:
print(f" ✗ Too many failures: {len(urls) - success_count}/{len(urls)}")
return False
except Exception as e:
print(f" ✗ arun_many CDP test failed: {e}")
import traceback
traceback.print_exc()
return False
async def test_concurrent_arun_many():
"""Test concurrent arun_many calls - stress test for page lock"""
print("\nTest 3: Concurrent arun_many calls...")
browser_config = BrowserConfig(
use_managed_browser=True,
headless=True,
verbose=False
)
try:
async with AsyncWebCrawler(config=browser_config) as crawler:
# Run two arun_many calls concurrently
task1 = crawler.arun_many(
urls=["https://example.com", "https://httpbin.org/html"],
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
)
task2 = crawler.arun_many(
urls=["https://www.example.org", "https://example.com"],
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
)
results1, results2 = await asyncio.gather(task1, task2, return_exceptions=True)
# Check for exceptions
if isinstance(results1, Exception):
print(f" ✗ Task 1 raised exception: {results1}")
return False
if isinstance(results2, Exception):
print(f" ✗ Task 2 raised exception: {results2}")
return False
total_success = sum(1 for r in results1 if r.success) + sum(1 for r in results2 if r.success)
total_requests = len(results1) + len(results2)
print(f"{total_success}/{total_requests} concurrent requests succeeded")
if total_success >= total_requests * 0.7: # Allow 30% failure for concurrent stress
print(" ✓ Concurrent arun_many test passed")
return True
else:
print(f" ✗ Too many concurrent failures")
return False
except Exception as e:
print(f" ✗ Concurrent test failed: {e}")
import traceback
traceback.print_exc()
return False
async def main():
"""Run all smoke tests"""
print("=" * 60)
print("CDP Concurrency Smoke Tests")
print("=" * 60)
results = []
# Run tests sequentially
results.append(await test_basic_cdp())
results.append(await test_arun_many_cdp())
results.append(await test_concurrent_arun_many())
print("\n" + "=" * 60)
passed = sum(results)
total = len(results)
if passed == total:
print(f"✓ All {total} smoke tests passed!")
print("=" * 60)
return 0
else:
print(f"{total - passed}/{total} smoke tests failed")
print("=" * 60)
return 1
if __name__ == "__main__":
exit_code = asyncio.run(main())
sys.exit(exit_code)

View File

@@ -1,282 +0,0 @@
"""
Test CDP browser concurrency with arun_many.
This test suite validates that the fixes for concurrent page creation
in managed browsers (CDP mode) work correctly, particularly:
1. Always creating new pages instead of reusing
2. Page lock serialization prevents race conditions
3. Multiple concurrent arun_many calls work correctly
"""
# Standard library imports
import asyncio
import os
import sys
# Third-party imports
import pytest
# Add the project root to Python path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../..')))
# Local imports
from crawl4ai import AsyncWebCrawler, BrowserConfig, CacheMode, CrawlerRunConfig
@pytest.mark.asyncio
async def test_cdp_concurrent_arun_many_basic():
"""
Test basic concurrent arun_many with CDP browser.
This tests the fix for always creating new pages.
"""
browser_config = BrowserConfig(
use_managed_browser=True,
headless=True,
verbose=False
)
urls = [
"https://example.com",
"https://www.python.org",
"https://httpbin.org/html",
]
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
async with AsyncWebCrawler(config=browser_config) as crawler:
# Run arun_many - should create new pages for each URL
results = await crawler.arun_many(urls=urls, config=config)
# Verify all URLs were crawled successfully
assert len(results) == len(urls), f"Expected {len(urls)} results, got {len(results)}"
for i, result in enumerate(results):
assert result is not None, f"Result {i} is None"
assert result.success, f"Result {i} failed: {result.error_message}"
assert result.status_code == 200, f"Result {i} has status {result.status_code}"
assert len(result.html) > 0, f"Result {i} has empty HTML"
@pytest.mark.asyncio
async def test_cdp_multiple_sequential_arun_many():
"""
Test multiple sequential arun_many calls with CDP browser.
Each call should work correctly without interference.
"""
browser_config = BrowserConfig(
use_managed_browser=True,
headless=True,
verbose=False
)
urls_batch1 = [
"https://example.com",
"https://httpbin.org/html",
]
urls_batch2 = [
"https://www.python.org",
"https://example.org",
]
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
async with AsyncWebCrawler(config=browser_config) as crawler:
# First batch
results1 = await crawler.arun_many(urls=urls_batch1, config=config)
assert len(results1) == len(urls_batch1)
for result in results1:
assert result.success, f"First batch failed: {result.error_message}"
# Second batch - should work without issues
results2 = await crawler.arun_many(urls=urls_batch2, config=config)
assert len(results2) == len(urls_batch2)
for result in results2:
assert result.success, f"Second batch failed: {result.error_message}"
@pytest.mark.asyncio
async def test_cdp_concurrent_arun_many_stress():
"""
Stress test: Multiple concurrent arun_many calls with CDP browser.
This is the key test for the concurrency fix - ensures page lock works.
"""
browser_config = BrowserConfig(
use_managed_browser=True,
headless=True,
verbose=False
)
# Create multiple batches of URLs
num_batches = 3
urls_per_batch = 3
batches = [
[f"https://httpbin.org/delay/{i}?batch={batch}"
for i in range(urls_per_batch)]
for batch in range(num_batches)
]
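# e.g. batch 0 expands to https://httpbin.org/delay/0?batch=0 ... /delay/2?batch=0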
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
async with AsyncWebCrawler(config=browser_config) as crawler:
# Run multiple arun_many calls concurrently
tasks = [
crawler.arun_many(urls=batch, config=config)
for batch in batches
]
# Execute all batches in parallel
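# (return_exceptions=True keeps a single failing batch from aborting the rest)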
all_results = await asyncio.gather(*tasks, return_exceptions=True)
# Verify no exceptions occurred
for i, results in enumerate(all_results):
assert not isinstance(results, Exception), f"Batch {i} raised exception: {results}"
assert len(results) == urls_per_batch, f"Batch {i}: expected {urls_per_batch} results, got {len(results)}"
# Verify each result
for j, result in enumerate(results):
assert result is not None, f"Batch {i}, result {j} is None"
# Some may fail due to network/timing, but should not crash
if result.success:
assert len(result.html) > 0, f"Batch {i}, result {j} has empty HTML"
@pytest.mark.asyncio
async def test_cdp_page_isolation():
"""
Test that pages are properly isolated - changes to one don't affect another.
This validates that we're creating truly independent pages.
"""
browser_config = BrowserConfig(
use_managed_browser=True,
headless=True,
verbose=False
)
url = "https://example.com"
# Use different JS codes to verify isolation
config1 = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
js_code="document.body.setAttribute('data-test', 'page1');"
)
config2 = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
js_code="document.body.setAttribute('data-test', 'page2');"
)
async with AsyncWebCrawler(config=browser_config) as crawler:
# Run both configs concurrently
results = await crawler.arun_many(
urls=[url, url],
configs=[config1, config2]
)
assert len(results) == 2
assert results[0].success and results[1].success
# Both should succeed with their own modifications
# (We can't directly check the data-test attribute, but success indicates isolation)
assert 'Example Domain' in results[0].html
assert 'Example Domain' in results[1].html
@pytest.mark.asyncio
async def test_cdp_with_different_viewport_sizes():
"""
Test concurrent crawling with different viewport configurations.
Ensures context/page creation handles different configs correctly.
"""
browser_config = BrowserConfig(
use_managed_browser=True,
headless=True,
verbose=False
)
url = "https://example.com"
# Different viewport sizes (though in CDP mode these may be limited)
configs = [
CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
]
async with AsyncWebCrawler(config=browser_config) as crawler:
results = await crawler.arun_many(
urls=[url] * len(configs),
configs=configs
)
assert len(results) == len(configs)
for i, result in enumerate(results):
assert result.success, f"Config {i} failed: {result.error_message}"
assert len(result.html) > 0
@pytest.mark.asyncio
async def test_cdp_error_handling_concurrent():
"""
Test that errors in one concurrent request don't affect others.
This ensures proper isolation and error handling.
"""
browser_config = BrowserConfig(
use_managed_browser=True,
headless=True,
verbose=False
)
urls = [
"https://example.com", # Valid
"https://this-domain-definitely-does-not-exist-12345.com", # Invalid
"https://httpbin.org/html", # Valid
]
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
async with AsyncWebCrawler(config=browser_config) as crawler:
results = await crawler.arun_many(urls=urls, config=config)
assert len(results) == len(urls)
# First and third should succeed
assert results[0].success, "First URL should succeed"
assert results[2].success, "Third URL should succeed"
# Second may fail (invalid domain)
# But its failure shouldn't affect the others
@pytest.mark.asyncio
async def test_cdp_large_batch():
"""
Test handling a larger batch of URLs to ensure scalability.
"""
browser_config = BrowserConfig(
use_managed_browser=True,
headless=True,
verbose=False
)
# Create 10 URLs
num_urls = 10
urls = [f"https://httpbin.org/delay/0?id={i}" for i in range(num_urls)]
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
async with AsyncWebCrawler(config=browser_config) as crawler:
results = await crawler.arun_many(urls=urls, config=config)
assert len(results) == num_urls
# Count successes
successes = sum(1 for r in results if r.success)
# Allow some failures due to network issues, but most should succeed
assert successes >= num_urls * 0.8, f"Only {successes}/{num_urls} succeeded"
if __name__ == "__main__":
# Run tests with pytest
pytest.main([__file__, "-v", "-s"])

View File

@@ -364,19 +364,5 @@ async def test_network_error_handling():
async with AsyncPlaywrightCrawlerStrategy() as strategy:
await strategy.crawl("https://invalid.example.com", config)
@pytest.mark.asyncio
async def test_remove_overlay_elements(crawler_strategy):
config = CrawlerRunConfig(
remove_overlay_elements=True,
delay_before_return_html=5,
)
response = await crawler_strategy.crawl(
"https://www2.hm.com/en_us/index.html",
config
)
assert response.status_code == 200
assert "Accept all cookies" not in response.html
if __name__ == "__main__":
pytest.main([__file__, "-v"])

View File

@@ -1,168 +0,0 @@
"""
Lightweight test to verify pyOpenSSL security fix (Issue #1545).
This test verifies the security requirements are met:
1. pyOpenSSL >= 25.3.0 is installed
2. cryptography >= 45.0.7 is installed (above vulnerable range)
3. SSL/TLS functionality works correctly
This test can run without full crawl4ai dependencies installed.
"""
import sys
from packaging import version
def test_package_versions():
"""Test that package versions meet security requirements."""
print("=" * 70)
print("TEST: Package Version Security Requirements (Issue #1545)")
print("=" * 70)
all_passed = True
# Test pyOpenSSL version
try:
import OpenSSL
pyopenssl_version = OpenSSL.__version__
print(f"\n✓ pyOpenSSL is installed: {pyopenssl_version}")
if version.parse(pyopenssl_version) >= version.parse("25.3.0"):
print(f" ✓ PASS: pyOpenSSL {pyopenssl_version} >= 25.3.0 (required)")
else:
print(f" ✗ FAIL: pyOpenSSL {pyopenssl_version} < 25.3.0 (required)")
all_passed = False
except ImportError as e:
print(f"\n✗ FAIL: pyOpenSSL not installed - {e}")
all_passed = False
# Test cryptography version
try:
import cryptography
crypto_version = cryptography.__version__
print(f"\n✓ cryptography is installed: {crypto_version}")
# The vulnerable range is >=37.0.0 & <43.0.1
# We need >= 45.0.7 to be safe
if version.parse(crypto_version) >= version.parse("45.0.7"):
print(f" ✓ PASS: cryptography {crypto_version} >= 45.0.7 (secure)")
print(f" ✓ NOT in vulnerable range (37.0.0 to 43.0.0)")
elif version.parse(crypto_version) >= version.parse("37.0.0") and version.parse(crypto_version) < version.parse("43.0.1"):
print(f" ✗ FAIL: cryptography {crypto_version} is VULNERABLE")
print(f" ✗ Version is in vulnerable range (>=37.0.0 & <43.0.1)")
all_passed = False
else:
print(f" ⚠ WARNING: cryptography {crypto_version} < 45.0.7")
print(f" ⚠ May not meet security requirements")
except ImportError as e:
print(f"\n✗ FAIL: cryptography not installed - {e}")
all_passed = False
return all_passed
def test_ssl_basic_functionality():
"""Test that SSL/TLS basic functionality works."""
print("\n" + "=" * 70)
print("TEST: SSL/TLS Basic Functionality")
print("=" * 70)
try:
import OpenSSL.SSL
# Create a basic SSL context to verify functionality
context = OpenSSL.SSL.Context(OpenSSL.SSL.TLSv1_2_METHOD)
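# Constructing a context only proves the OpenSSL bindings import and initialise;
# it does not open any network connection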
print("\n✓ SSL Context created successfully")
print(" ✓ PASS: SSL/TLS functionality is working")
return True
except Exception as e:
print(f"\n✗ FAIL: SSL functionality test failed - {e}")
return False
def test_pyopenssl_crypto_integration():
"""Test that pyOpenSSL and cryptography integration works."""
print("\n" + "=" * 70)
print("TEST: pyOpenSSL <-> cryptography Integration")
print("=" * 70)
try:
from OpenSSL import crypto
# Generate a simple key pair to test integration
key = crypto.PKey()
key.generate_key(crypto.TYPE_RSA, 2048)
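# pyOpenSSL's crypto module is implemented on top of the cryptography package,
# so a successful key generation exercises both libraries together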
print("\n✓ Generated RSA key pair successfully")
print(" ✓ PASS: pyOpenSSL and cryptography are properly integrated")
return True
except Exception as e:
print(f"\n✗ FAIL: Integration test failed - {e}")
import traceback
traceback.print_exc()
return False
def main():
"""Run all security tests."""
print("\n")
print("" + "=" * 68 + "")
print("║ pyOpenSSL Security Fix Verification - Issue #1545 ║")
print("" + "=" * 68 + "")
print("\nVerifying that the pyOpenSSL update resolves the security vulnerability")
print("in the cryptography package (CVE: versions >=37.0.0 & <43.0.1)\n")
results = []
# Test 1: Package versions
results.append(("Package Versions", test_package_versions()))
# Test 2: SSL functionality
results.append(("SSL Functionality", test_ssl_basic_functionality()))
# Test 3: Integration
results.append(("pyOpenSSL-crypto Integration", test_pyopenssl_crypto_integration()))
# Summary
print("\n" + "=" * 70)
print("TEST SUMMARY")
print("=" * 70)
all_passed = True
for test_name, passed in results:
status = "✓ PASS" if passed else "✗ FAIL"
print(f"{status}: {test_name}")
all_passed = all_passed and passed
print("=" * 70)
if all_passed:
print("\n✓✓✓ ALL TESTS PASSED ✓✓✓")
print("✓ Security vulnerability is resolved")
print("✓ pyOpenSSL >= 25.3.0 is working correctly")
print("✓ cryptography >= 45.0.7 (not vulnerable)")
print("\nThe dependency update is safe to merge.\n")
return True
else:
print("\n✗✗✗ SOME TESTS FAILED ✗✗✗")
print("✗ Security requirements not met")
print("\nDo NOT merge until all tests pass.\n")
return False
if __name__ == "__main__":
try:
success = main()
sys.exit(0 if success else 1)
except KeyboardInterrupt:
print("\n\nTest interrupted by user")
sys.exit(1)
except Exception as e:
print(f"\n✗ Unexpected error: {e}")
import traceback
traceback.print_exc()
sys.exit(1)

View File

@@ -1,184 +0,0 @@
"""
Test script to verify pyOpenSSL update doesn't break crawl4ai functionality.
This test verifies:
1. pyOpenSSL and cryptography versions are correct and secure
2. Basic crawling functionality still works
3. HTTPS/SSL connections work properly
4. Stealth mode integration works (uses playwright-stealth internally)
Issue: #1545 - Security vulnerability in cryptography package
Fix: Updated pyOpenSSL from >=24.3.0 to >=25.3.0
Expected: cryptography package should be >=45.0.7 (above vulnerable range)
"""
import asyncio
import sys
from packaging import version
def check_versions():
"""Verify pyOpenSSL and cryptography versions meet security requirements."""
print("=" * 60)
print("STEP 1: Checking Package Versions")
print("=" * 60)
try:
import OpenSSL
pyopenssl_version = OpenSSL.__version__
print(f"✓ pyOpenSSL version: {pyopenssl_version}")
# Check pyOpenSSL >= 25.3.0
if version.parse(pyopenssl_version) >= version.parse("25.3.0"):
print(f" ✓ Version check passed: {pyopenssl_version} >= 25.3.0")
else:
print(f" ✗ Version check FAILED: {pyopenssl_version} < 25.3.0")
return False
except ImportError as e:
print(f"✗ Failed to import pyOpenSSL: {e}")
return False
try:
import cryptography
crypto_version = cryptography.__version__
print(f"✓ cryptography version: {crypto_version}")
# Check cryptography >= 45.0.7 (above vulnerable range)
if version.parse(crypto_version) >= version.parse("45.0.7"):
print(f" ✓ Security check passed: {crypto_version} >= 45.0.7 (not vulnerable)")
else:
print(f" ✗ Security check FAILED: {crypto_version} < 45.0.7 (potentially vulnerable)")
return False
except ImportError as e:
print(f"✗ Failed to import cryptography: {e}")
return False
print("\n✓ All version checks passed!\n")
return True
async def test_basic_crawl():
"""Test basic crawling functionality with HTTPS site."""
print("=" * 60)
print("STEP 2: Testing Basic HTTPS Crawling")
print("=" * 60)
try:
from crawl4ai import AsyncWebCrawler
async with AsyncWebCrawler(verbose=True) as crawler:
# Test with a simple HTTPS site (requires SSL/TLS)
print("Crawling example.com (HTTPS)...")
result = await crawler.arun(
url="https://www.example.com",
bypass_cache=True
)
if result.success:
print(f"✓ Crawl successful!")
print(f" - Status code: {result.status_code}")
print(f" - Content length: {len(result.html)} bytes")
print(f" - SSL/TLS connection: ✓ Working")
return True
else:
print(f"✗ Crawl failed: {result.error_message}")
return False
except Exception as e:
print(f"✗ Test failed with error: {e}")
import traceback
traceback.print_exc()
return False
async def test_stealth_mode():
"""Test stealth mode functionality (depends on playwright-stealth)."""
print("\n" + "=" * 60)
print("STEP 3: Testing Stealth Mode Integration")
print("=" * 60)
try:
from crawl4ai import AsyncWebCrawler, BrowserConfig
# Create browser config with stealth mode
browser_config = BrowserConfig(
headless=True,
verbose=False
)
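# Note: no stealth flag is set explicitly here; the crawl relies on the library's
# defaults. If the installed version exposes a toggle (e.g. enable_stealth=True,
# name assumed rather than verified), it could be set to make the intent explicit.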
async with AsyncWebCrawler(config=browser_config, verbose=True) as crawler:
print("Crawling with stealth mode enabled...")
result = await crawler.arun(
url="https://www.example.com",
bypass_cache=True
)
if result.success:
print(f"✓ Stealth crawl successful!")
print(f" - Stealth mode: ✓ Working")
return True
else:
print(f"✗ Stealth crawl failed: {result.error_message}")
return False
except Exception as e:
print(f"✗ Stealth test failed with error: {e}")
import traceback
traceback.print_exc()
return False
async def main():
"""Run all tests."""
print("\n")
print("" + "=" * 58 + "")
print("║ pyOpenSSL Security Update Verification Test (Issue #1545) ║")
print("" + "=" * 58 + "")
print("\n")
# Step 1: Check versions
versions_ok = check_versions()
if not versions_ok:
print("\n✗ FAILED: Version requirements not met")
return False
# Step 2: Test basic crawling
crawl_ok = await test_basic_crawl()
if not crawl_ok:
print("\n✗ FAILED: Basic crawling test failed")
return False
# Step 3: Test stealth mode
stealth_ok = await test_stealth_mode()
if not stealth_ok:
print("\n✗ FAILED: Stealth mode test failed")
return False
# All tests passed
print("\n" + "=" * 60)
print("FINAL RESULT")
print("=" * 60)
print("✓ All tests passed successfully!")
print("✓ pyOpenSSL update is working correctly")
print("✓ No breaking changes detected")
print("✓ Security vulnerability resolved")
print("=" * 60)
print("\n")
return True
if __name__ == "__main__":
try:
success = asyncio.run(main())
sys.exit(0 if success else 1)
except KeyboardInterrupt:
print("\n\nTest interrupted by user")
sys.exit(1)
except Exception as e:
print(f"\n✗ Unexpected error: {e}")
import traceback
traceback.print_exc()
sys.exit(1)