feat(core): Release v0.3.73 with Browser Takeover and Docker Support
Major changes:
- Add browser takeover feature using CDP for authentic browsing
- Implement Docker support with full API server documentation
- Enhance Markdown with tag preservation system
- Improve parallel crawling performance

This release focuses on authenticity and scalability, introducing the ability to use users' own browsers while providing containerized deployment options. Breaking changes include modified browser handling and API response structure. See CHANGELOG.md for the detailed migration guide.
docs/md_v2/api/parameters.md (new file, 35 lines)
@@ -0,0 +1,35 @@
# Parameter Reference Table

| File Name | Parameter Name | Code Usage | Strategy/Class | Description |
|-----------|---------------|------------|----------------|-------------|
| async_crawler_strategy.py | user_agent | `kwargs.get("user_agent")` | AsyncPlaywrightCrawlerStrategy | User agent string for browser identification |
| async_crawler_strategy.py | proxy | `kwargs.get("proxy")` | AsyncPlaywrightCrawlerStrategy | Proxy server configuration for network requests |
| async_crawler_strategy.py | proxy_config | `kwargs.get("proxy_config")` | AsyncPlaywrightCrawlerStrategy | Detailed proxy configuration including auth |
| async_crawler_strategy.py | headless | `kwargs.get("headless", True)` | AsyncPlaywrightCrawlerStrategy | Whether to run browser in headless mode |
| async_crawler_strategy.py | browser_type | `kwargs.get("browser_type", "chromium")` | AsyncPlaywrightCrawlerStrategy | Type of browser to use (chromium/firefox/webkit) |
| async_crawler_strategy.py | headers | `kwargs.get("headers", {})` | AsyncPlaywrightCrawlerStrategy | Custom HTTP headers for requests |
| async_crawler_strategy.py | verbose | `kwargs.get("verbose", False)` | AsyncPlaywrightCrawlerStrategy | Enable detailed logging output |
| async_crawler_strategy.py | sleep_on_close | `kwargs.get("sleep_on_close", False)` | AsyncPlaywrightCrawlerStrategy | Add delay before closing browser |
| async_crawler_strategy.py | use_managed_browser | `kwargs.get("use_managed_browser", False)` | AsyncPlaywrightCrawlerStrategy | Use managed browser instance |
| async_crawler_strategy.py | user_data_dir | `kwargs.get("user_data_dir", None)` | AsyncPlaywrightCrawlerStrategy | Custom directory for browser profile data |
| async_crawler_strategy.py | session_id | `kwargs.get("session_id")` | AsyncPlaywrightCrawlerStrategy | Unique identifier for browser session |
| async_crawler_strategy.py | override_navigator | `kwargs.get("override_navigator", False)` | AsyncPlaywrightCrawlerStrategy | Override browser navigator properties |
| async_crawler_strategy.py | simulate_user | `kwargs.get("simulate_user", False)` | AsyncPlaywrightCrawlerStrategy | Simulate human-like behavior |
| async_crawler_strategy.py | magic | `kwargs.get("magic", False)` | AsyncPlaywrightCrawlerStrategy | Enable advanced anti-detection features |
| async_crawler_strategy.py | log_console | `kwargs.get("log_console", False)` | AsyncPlaywrightCrawlerStrategy | Log browser console messages |
| async_crawler_strategy.py | js_only | `kwargs.get("js_only", False)` | AsyncPlaywrightCrawlerStrategy | Only execute JavaScript without page load |
| async_crawler_strategy.py | page_timeout | `kwargs.get("page_timeout", 60000)` | AsyncPlaywrightCrawlerStrategy | Timeout for page load in milliseconds |
| async_crawler_strategy.py | ignore_body_visibility | `kwargs.get("ignore_body_visibility", True)` | AsyncPlaywrightCrawlerStrategy | Process page even if body is hidden |
| async_crawler_strategy.py | js_code | `kwargs.get("js_code", kwargs.get("js", self.js_code))` | AsyncPlaywrightCrawlerStrategy | Custom JavaScript code to execute |
| async_crawler_strategy.py | wait_for | `kwargs.get("wait_for")` | AsyncPlaywrightCrawlerStrategy | Wait for specific element/condition |
| async_crawler_strategy.py | process_iframes | `kwargs.get("process_iframes", False)` | AsyncPlaywrightCrawlerStrategy | Extract content from iframes |
| async_crawler_strategy.py | delay_before_return_html | `kwargs.get("delay_before_return_html")` | AsyncPlaywrightCrawlerStrategy | Additional delay before returning HTML |
| async_crawler_strategy.py | remove_overlay_elements | `kwargs.get("remove_overlay_elements", False)` | AsyncPlaywrightCrawlerStrategy | Remove pop-ups and overlay elements |
| async_crawler_strategy.py | screenshot | `kwargs.get("screenshot")` | AsyncPlaywrightCrawlerStrategy | Take page screenshot |
| async_crawler_strategy.py | screenshot_wait_for | `kwargs.get("screenshot_wait_for")` | AsyncPlaywrightCrawlerStrategy | Wait before taking screenshot |
| async_crawler_strategy.py | semaphore_count | `kwargs.get("semaphore_count", 5)` | AsyncPlaywrightCrawlerStrategy | Concurrent request limit |
| async_webcrawler.py | verbose | `kwargs.get("verbose", False)` | AsyncWebCrawler | Enable detailed logging |
| async_webcrawler.py | warmup | `kwargs.get("warmup", True)` | AsyncWebCrawler | Initialize crawler with warmup request |
| async_webcrawler.py | session_id | `kwargs.get("session_id", None)` | AsyncWebCrawler | Session identifier for browser reuse |
| async_webcrawler.py | only_text | `kwargs.get("only_text", False)` | AsyncWebCrawler | Extract only text content |
| async_webcrawler.py | bypass_cache | `kwargs.get("bypass_cache", False)` | AsyncWebCrawler | Skip cache and force fresh crawl |
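Since every entry above is read via `kwargs.get(...)`, these parameters can be passed as plain keyword arguments through `AsyncWebCrawler` and its `arun()` call. A minimal sketch (the parameter values and URL are illustrative, not defaults you must use):

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    # Browser-level parameters are consumed by AsyncPlaywrightCrawlerStrategy
    async with AsyncWebCrawler(
        headless=True,
        browser_type="chromium",
        verbose=True,
    ) as crawler:
        # Per-crawl parameters are forwarded as kwargs to the strategy
        result = await crawler.arun(
            url="https://example.com",
            page_timeout=60000,
            remove_overlay_elements=True,
            bypass_cache=True,
        )
        print(result.markdown[:300])

asyncio.run(main())
```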
docs/md_v2/basic/docker-deploymeny.md (new file, 459 lines)
@@ -0,0 +1,459 @@
# Docker Deployment

Crawl4AI provides official Docker images for easy deployment and scalability. This guide covers installation, configuration, and usage of Crawl4AI in Docker environments.

## Quick Start 🚀

Pull and run the basic version:

```bash
docker pull unclecode/crawl4ai:basic
docker run -p 11235:11235 unclecode/crawl4ai:basic
```

Test the deployment:

```python
import requests

# Test health endpoint
health = requests.get("http://localhost:11235/health")
print("Health check:", health.json())

# Test basic crawl
response = requests.post(
    "http://localhost:11235/crawl",
    json={
        "urls": "https://www.nbcnews.com/business",
        "priority": 10
    }
)
task_id = response.json()["task_id"]
print("Task ID:", task_id)
```
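Crawls run asynchronously, so the submitted task must be polled for completion. A minimal follow-up, using the `/task` endpoint described under API Reference below (this mirrors the fuller tester in the Testing section):

```python
import time

# Poll the task endpoint until the crawl finishes
while True:
    status = requests.get(f"http://localhost:11235/task/{task_id}").json()
    if status["status"] == "completed":
        break
    time.sleep(2)

print("Markdown length:", len(status["result"]["markdown"]))
```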
## Available Images 🏷️

- `unclecode/crawl4ai:basic` - Basic web crawling capabilities
- `unclecode/crawl4ai:all` - Full installation with all features
- `unclecode/crawl4ai:gpu` - GPU-enabled version for ML features

## Configuration Options 🔧

### Environment Variables

```bash
docker run -p 11235:11235 \
    -e MAX_CONCURRENT_TASKS=5 \
    -e OPENAI_API_KEY=your_key \
    unclecode/crawl4ai:all
```

### Volume Mounting

Mount a directory for persistent data:

```bash
docker run -p 11235:11235 \
    -v $(pwd)/data:/app/data \
    unclecode/crawl4ai:all
```

### Resource Limits

Control container resources:

```bash
docker run -p 11235:11235 \
    --memory=4g \
    --cpus=2 \
    unclecode/crawl4ai:all
```

## Usage Examples 📝

### Basic Crawling

```python
request = {
    "urls": "https://www.nbcnews.com/business",
    "priority": 10
}

response = requests.post("http://localhost:11235/crawl", json=request)
task_id = response.json()["task_id"]

# Get results
result = requests.get(f"http://localhost:11235/task/{task_id}")
```

### Structured Data Extraction

```python
schema = {
    "name": "Crypto Prices",
    "baseSelector": ".cds-tableRow-t45thuk",
    "fields": [
        {
            "name": "crypto",
            "selector": "td:nth-child(1) h2",
            "type": "text",
        },
        {
            "name": "price",
            "selector": "td:nth-child(2)",
            "type": "text",
        }
    ],
}

request = {
    "urls": "https://www.coinbase.com/explore",
    "extraction_config": {
        "type": "json_css",
        "params": {"schema": schema}
    }
}
```
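The request above only defines the job; submitting it and reading the extracted rows works the same way as the basic crawl. A hedged sketch, assuming the completed task exposes the extraction output as a JSON string under `extracted_content` (the exact field may vary between versions):

```python
import json

response = requests.post("http://localhost:11235/crawl", json=request)
task_id = response.json()["task_id"]

# ...poll /task/{task_id} until status == "completed" (see Quick Start)...
status = requests.get(f"http://localhost:11235/task/{task_id}").json()
rows = json.loads(status["result"]["extracted_content"])  # assumed field name
print(rows[:3])  # first few {"crypto": ..., "price": ...} records
```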
### Dynamic Content Handling

```python
request = {
    "urls": "https://www.nbcnews.com/business",
    "js_code": [
        "const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"
    ],
    "wait_for": "article.tease-card:nth-child(10)"
}
```

### AI-Powered Extraction (Full Version)

```python
request = {
    "urls": "https://www.nbcnews.com/business",
    "extraction_config": {
        "type": "cosine",
        "params": {
            "semantic_filter": "business finance economy",
            "word_count_threshold": 10,
            "max_dist": 0.2,
            "top_k": 3
        }
    }
}
```

## Platform-Specific Instructions 💻

### macOS

```bash
docker pull unclecode/crawl4ai:basic
docker run -p 11235:11235 unclecode/crawl4ai:basic
```

### Ubuntu

```bash
# Basic version
docker pull unclecode/crawl4ai:basic
docker run -p 11235:11235 unclecode/crawl4ai:basic

# With GPU support
docker pull unclecode/crawl4ai:gpu
docker run --gpus all -p 11235:11235 unclecode/crawl4ai:gpu
```

### Windows (PowerShell)

```powershell
docker pull unclecode/crawl4ai:basic
docker run -p 11235:11235 unclecode/crawl4ai:basic
```

## Testing 🧪

Save this as `test_docker.py`:

```python
import requests
import time

class Crawl4AiTester:
    def __init__(self, base_url: str = "http://localhost:11235"):
        self.base_url = base_url

    def submit_and_wait(self, request_data: dict, timeout: int = 300) -> dict:
        # Submit crawl job
        response = requests.post(f"{self.base_url}/crawl", json=request_data)
        task_id = response.json()["task_id"]
        print(f"Task ID: {task_id}")

        # Poll for result
        start_time = time.time()
        while True:
            if time.time() - start_time > timeout:
                raise TimeoutError(f"Task {task_id} timeout")

            result = requests.get(f"{self.base_url}/task/{task_id}")
            status = result.json()

            if status["status"] == "completed":
                return status

            time.sleep(2)

def test_deployment():
    tester = Crawl4AiTester()

    # Test basic crawl
    request = {
        "urls": "https://www.nbcnews.com/business",
        "priority": 10
    }

    result = tester.submit_and_wait(request)
    print("Basic crawl successful!")
    print(f"Content length: {len(result['result']['markdown'])}")

if __name__ == "__main__":
    test_deployment()
```
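Run it against a live container:

```bash
python test_docker.py
```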
## Advanced Configuration ⚙️

### Crawler Parameters

The `crawler_params` field allows you to configure the browser instance and crawling behavior. Here are key parameters you can use:

```python
request = {
    "urls": "https://example.com",
    "crawler_params": {
        # Browser Configuration
        "headless": True,                  # Run in headless mode
        "browser_type": "chromium",        # chromium/firefox/webkit
        "user_agent": "custom-agent",      # Custom user agent
        "proxy": "http://proxy:8080",      # Proxy configuration

        # Performance & Behavior
        "page_timeout": 30000,             # Page load timeout (ms)
        "verbose": True,                   # Enable detailed logging
        "semaphore_count": 5,              # Concurrent request limit

        # Anti-Detection Features
        "simulate_user": True,             # Simulate human behavior
        "magic": True,                     # Advanced anti-detection
        "override_navigator": True,        # Override navigator properties

        # Session Management
        "user_data_dir": "./browser-data", # Browser profile location
        "use_managed_browser": True,       # Use persistent browser
    }
}
```

### Extra Parameters

The `extra` field allows passing additional parameters directly to the crawler's `arun` function:

```python
request = {
    "urls": "https://example.com",
    "extra": {
        "word_count_threshold": 10,        # Min words per block
        "only_text": True,                 # Extract only text
        "bypass_cache": True,              # Force fresh crawl
        "process_iframes": True,           # Include iframe content
    }
}
```

### Complete Examples

1. **Advanced News Crawling**

```python
request = {
    "urls": "https://www.nbcnews.com/business",
    "crawler_params": {
        "headless": True,
        "page_timeout": 30000,
        "remove_overlay_elements": True    # Remove popups
    },
    "extra": {
        "word_count_threshold": 50,        # Longer content blocks
        "bypass_cache": True               # Fresh content
    },
    "css_selector": ".article-body"
}
```

2. **Anti-Detection Configuration**

```python
request = {
    "urls": "https://example.com",
    "crawler_params": {
        "simulate_user": True,
        "magic": True,
        "override_navigator": True,
        "user_agent": "Mozilla/5.0 ...",
        "headers": {
            "Accept-Language": "en-US,en;q=0.9"
        }
    }
}
```

3. **LLM Extraction with Custom Parameters**

```python
request = {
    "urls": "https://openai.com/pricing",
    "extraction_config": {
        "type": "llm",
        "params": {
            "provider": "openai/gpt-4",
            "schema": pricing_schema      # a JSON schema dict you define for the fields to extract
        }
    },
    "crawler_params": {
        "verbose": True,
        "page_timeout": 60000
    },
    "extra": {
        "word_count_threshold": 1,
        "only_text": True
    }
}
```

4. **Session-Based Dynamic Content**

```python
request = {
    "urls": "https://example.com",
    "crawler_params": {
        "session_id": "dynamic_session",
        "headless": False,
        "page_timeout": 60000
    },
    "js_code": ["window.scrollTo(0, document.body.scrollHeight);"],
    "wait_for": "js:() => document.querySelectorAll('.item').length > 10",
    "extra": {
        "delay_before_return_html": 2.0
    }
}
```

5. **Screenshot with Custom Timing**

```python
request = {
    "urls": "https://example.com",
    "screenshot": True,
    "crawler_params": {
        "headless": True,
        "screenshot_wait_for": ".main-content"
    },
    "extra": {
        "delay_before_return_html": 3.0
    }
}
```
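When this request is submitted and the task completes, the screenshot comes back inside the task result. A hedged sketch for saving it, assuming the server returns the image base64-encoded under a `screenshot` key (this may differ between versions):

```python
import base64

response = requests.post("http://localhost:11235/crawl", json=request)
task_id = response.json()["task_id"]

# ...poll /task/{task_id} until status == "completed" (see Testing above)...
status = requests.get(f"http://localhost:11235/task/{task_id}").json()

with open("page.png", "wb") as f:
    f.write(base64.b64decode(status["result"]["screenshot"]))  # assumed field name
```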
### Parameter Reference Table

| Category | Parameter | Type | Description |
|----------|-----------|------|-------------|
| Browser | headless | bool | Run browser in headless mode |
| Browser | browser_type | str | Browser engine selection |
| Browser | user_agent | str | Custom user agent string |
| Network | proxy | str | Proxy server URL |
| Network | headers | dict | Custom HTTP headers |
| Timing | page_timeout | int | Page load timeout (ms) |
| Timing | delay_before_return_html | float | Wait before capture |
| Anti-Detection | simulate_user | bool | Human behavior simulation |
| Anti-Detection | magic | bool | Advanced protection |
| Session | session_id | str | Browser session ID |
| Session | user_data_dir | str | Profile directory |
| Content | word_count_threshold | int | Minimum words per block |
| Content | only_text | bool | Text-only extraction |
| Content | process_iframes | bool | Include iframe content |
| Debug | verbose | bool | Detailed logging |
| Debug | log_console | bool | Browser console logs |
## Troubleshooting 🔍

### Common Issues

1. **Connection Refused**
   ```
   Error: Connection refused at localhost:11235
   ```
   Solution: Ensure the container is running and ports are properly mapped (see the quick checks after this list).

2. **Resource Limits**
   ```
   Error: No available slots
   ```
   Solution: Increase MAX_CONCURRENT_TASKS or container resources.

3. **GPU Access**
   ```
   Error: GPU not found
   ```
   Solution: Ensure proper NVIDIA drivers are installed and use the `--gpus all` flag.
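For the connection-refused case, two quick checks (standard Docker and curl commands; the container name is whatever `docker ps` reports):

```bash
# Is the container running, and is port 11235 mapped?
docker ps

# Does the API answer locally?
curl http://localhost:11235/health
```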
### Debug Mode

Access the container for debugging:

```bash
docker run -it --entrypoint /bin/bash unclecode/crawl4ai:all
```

View container logs:

```bash
docker logs [container_id]
```

## Best Practices 🌟

1. **Resource Management**
   - Set appropriate memory and CPU limits
   - Monitor resource usage via the health endpoint
   - Use the basic version for simple crawling tasks

2. **Scaling**
   - Use multiple containers for high load (a minimal Compose sketch follows this list)
   - Implement proper load balancing
   - Monitor performance metrics

3. **Security**
   - Use environment variables for sensitive data
   - Implement proper network isolation
   - Apply security updates regularly
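For the scaling point above, a minimal `docker-compose.yml` sketch, assuming the standard Compose v2 CLI and that replicas sit behind whatever load balancer you already run (image tag, task count, and memory limit are illustrative):

```yaml
# docker-compose.yml (scale with: docker compose up -d --scale crawl4ai=3)
services:
  crawl4ai:
    image: unclecode/crawl4ai:basic
    ports:
      - "11235"            # container port only: Docker assigns a free host port per replica
    environment:
      - MAX_CONCURRENT_TASKS=5
    mem_limit: 4g
```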
## API Reference 📚

### Health Check

```http
GET /health
```

### Submit Crawl Task

```http
POST /crawl
Content-Type: application/json

{
    "urls": "string or array",
    "extraction_config": {
        "type": "basic|llm|cosine|json_css",
        "params": {}
    },
    "priority": 1-10,
    "ttl": 3600
}
```

### Get Task Status

```http
GET /task/{task_id}
```
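For orientation, a completed task payload has roughly the following shape, trimmed to the fields the examples above rely on (exact fields vary with the request options and version):

```json
{
  "status": "completed",
  "result": {
    "markdown": "...crawled content as markdown...",
    "extracted_content": "...JSON string when an extraction strategy ran..."
  }
}
```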
For more details, visit the [official documentation](https://crawl4ai.com/mkdocs/).
@@ -72,7 +72,7 @@ Our documentation is organized into several sections:
 ### Advanced Features
 - [Magic Mode](advanced/magic-mode.md)
 - [Session Management](advanced/session-management.md)
-- [Hooks & Authentication](advanced/hooks.md)
+- [Hooks & Authentication](advanced/hooks-auth.md)
 - [Proxy & Security](advanced/proxy-security.md)
 - [Content Processing](advanced/content-processing.md)