Merge branch 'fix/docker' into develop

13  .gitignore  (vendored)
```diff
@@ -271,6 +271,8 @@ continue_config.json
 CLAUDE_MONITOR.md
 CLAUDE.md
+
+.claude/
 
 tests/**/test_site
 tests/**/reports
 tests/**/benchmark_reports
@@ -282,3 +284,14 @@ docs/apps/linkdin/debug*/
 docs/apps/linkdin/samples/insights/*
 
 scripts/
+
+
+# Databse files
+*.sqlite3
+*.sqlite3-journal
+*.db-journal
+*.db-wal
+*.db-shm
+*.db
+*.rdb
+*.ldb
```
1149  deploy/docker/ARCHITECTURE.md  (new file)
File diff suppressed because it is too large

241  deploy/docker/STRESS_TEST_PIPELINE.md  (new file)
@@ -0,0 +1,241 @@
# Crawl4AI Docker Memory & Pool Optimization - Implementation Log

## Critical Issues Identified

### Memory Management
- **Host vs Container**: `psutil.virtual_memory()` reported host memory, not container limits
- **Browser Pooling**: No pool reuse - every endpoint created new browsers
- **Warmup Waste**: Permanent browser sat idle with mismatched config signature
- **Idle Cleanup**: 30min TTL too long, janitor ran every 60s
- **Endpoint Inconsistency**: 75% of endpoints bypassed pool (`/md`, `/html`, `/screenshot`, `/pdf`, `/execute_js`, `/llm`)

### Pool Design Flaws
- **Config Mismatch**: Permanent browser used `config.yml` args, endpoints used empty `BrowserConfig()`
- **Logging Level**: Pool hit markers at DEBUG, invisible with INFO logging

## Implementation Changes

### 1. Container-Aware Memory Detection (`utils.py`)

```python
def get_container_memory_percent() -> float:
    # Try cgroup v2 → v1 → fallback to psutil
    # Reads /sys/fs/cgroup/memory.{current,max} OR memory/memory.{usage,limit}_in_bytes
    ...
```
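The comment-only stub above can be fleshed out. The following is a hedged reconstruction, not the project's actual `utils.py`, assuming only the cgroup file paths named in the comments:

```python
# Sketch of container-aware memory detection: cgroup v2, then v1, then psutil.
# Illustrative only; the real utils.py may differ.
from pathlib import Path

def get_container_memory_percent() -> float:
    """Return memory usage as a percentage of the container limit (0-100)."""
    # cgroup v2 (unified hierarchy)
    try:
        current = int(Path("/sys/fs/cgroup/memory.current").read_text())
        limit_raw = Path("/sys/fs/cgroup/memory.max").read_text().strip()
        if limit_raw != "max":  # "max" means no limit was set
            return current / int(limit_raw) * 100.0
    except (OSError, ValueError):
        pass
    # cgroup v1 (legacy hierarchy)
    try:
        usage = int(Path("/sys/fs/cgroup/memory/memory.usage_in_bytes").read_text())
        limit = int(Path("/sys/fs/cgroup/memory/memory.limit_in_bytes").read_text())
        if limit < 1 << 60:  # absurdly large limit means "unlimited"
            return usage / limit * 100.0
    except (OSError, ValueError):
        pass
    # Fallback: host-level metric (the original, container-blind behaviour)
    try:
        import psutil
        return float(psutil.virtual_memory().percent)
    except ImportError:
        return 0.0
```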
### 2. Smart Browser Pool (`crawler_pool.py`)

**3-Tier System:**
- **PERMANENT**: Always-ready default browser (never cleaned)
- **HOT_POOL**: Configs used 3+ times (longer TTL)
- **COLD_POOL**: New/rare configs (short TTL)

**Key Functions:**
- `get_crawler(cfg)`: Check permanent → hot → cold → create new
- `init_permanent(cfg)`: Initialize permanent at startup
- `janitor()`: Adaptive cleanup (10s/30s/60s intervals based on memory)
- `_sig(cfg)`: SHA1 hash of config dict for pool keys

**Logging Fix**: Changed `logger.debug()` → `logger.info()` for pool hits
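The tier walk described above can be modeled with plain dicts. This is a toy stand-in for the real pool (strings in place of browsers, no locking, no TTLs); it reproduces the per-config hit sequence observed in Test 5 below:

```python
# Toy model of the 3-tier lookup with cold → hot promotion at 3 uses.
# Names mirror crawler_pool.py but this is illustrative, not project code.
HOT_POOL, COLD_POOL, USAGE_COUNT = {}, {}, {}
PERMANENT_SIG = "default"

def get_tier(sig: str) -> str:
    """Return which tier serves this request, promoting cold → hot at 3 uses."""
    USAGE_COUNT[sig] = USAGE_COUNT.get(sig, 0) + 1
    if sig == PERMANENT_SIG:
        return "permanent"
    if sig in HOT_POOL:
        return "hot"
    if sig in COLD_POOL:
        if USAGE_COUNT[sig] >= 3:
            HOT_POOL[sig] = COLD_POOL.pop(sig)  # promotion
            return "promoted"
        return "cold"
    COLD_POOL[sig] = object()  # stand-in for a freshly started browser
    return "new"

hits = [get_tier("viewport-1080") for _ in range(5)]
# One config seen 5 times: create, cold hit, promotion on the 3rd use, then hot hits
```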
### 3. Endpoint Unification

**Helper Function** (`server.py`):

```python
def get_default_browser_config() -> BrowserConfig:
    return BrowserConfig(
        extra_args=config["crawler"]["browser"].get("extra_args", []),
        **config["crawler"]["browser"].get("kwargs", {}),
    )
```

**Migrated Endpoints:**
- `/html`, `/screenshot`, `/pdf`, `/execute_js` → use `get_default_browser_config()`
- `handle_llm_qa()`, `handle_markdown_request()` → same

**Result**: All endpoints now hit permanent browser pool

### 4. Config Updates (`config.yml`)
- `idle_ttl_sec: 1800` → `300` (30min → 5min base TTL)
- `port: 11234` → `11235` (fixed mismatch with Gunicorn)

### 5. Lifespan Fix (`server.py`)

```python
await init_permanent(BrowserConfig(
    extra_args=config["crawler"]["browser"].get("extra_args", []),
    **config["crawler"]["browser"].get("kwargs", {}),
))
```

Permanent browser now matches endpoint config signatures
## Test Results

### Test 1: Basic Health
- 10 requests to `/health`
- **Result**: 100% success, avg 3ms latency
- **Baseline**: Container starts in ~5s, 270 MB idle

### Test 2: Memory Monitoring
- 20 requests with Docker stats tracking
- **Result**: 100% success, no memory leak (-0.2 MB delta)
- **Baseline**: 269.7 MB container overhead

### Test 3: Pool Validation
- 30 requests to `/html` endpoint
- **Result**: **100% permanent browser hits**, 0 new browsers created
- **Memory**: 287 MB baseline → 396 MB active (+109 MB)
- **Latency**: Avg 4s (includes network to httpbin.org)

### Test 4: Concurrent Load
- Light (10) → Medium (50) → Heavy (100) concurrent
- **Total**: 320 requests
- **Result**: 100% success, **320/320 permanent hits**, 0 new browsers
- **Memory**: 269 MB → peak 1533 MB → final 993 MB
- **Latency**: P99 at 100 concurrent = 34s (expected with single browser)

### Test 5: Pool Stress (Mixed Configs)
- 20 requests with 4 different viewport configs
- **Result**: 4 new browsers, 4 cold hits, **4 promotions to hot**, 8 hot hits
- **Reuse Rate**: 60% (12 pool hits / 20 requests)
- **Memory**: 270 MB → 928 MB peak (+658 MB = ~165 MB per browser)
- **Proves**: Cold → hot promotion at 3 uses working perfectly

### Test 6: Multi-Endpoint
- 10 requests each: `/html`, `/screenshot`, `/pdf`, `/crawl`
- **Result**: 100% success across all 4 endpoints
- **Latency**: 5-8s avg (PDF slowest at 7.2s)

### Test 7: Cleanup Verification
- 20 requests (load spike) → 90s idle
- **Memory**: 269 MB → peak 1107 MB → final 780 MB
- **Recovery**: 327 MB (39%) - partial cleanup
- **Note**: Hot pool browsers persist (by design), janitor working correctly
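As a quick consistency check on Test 5's per-browser figure, using only the numbers reported above:

```python
# Recomputing the "~165 MB per browser" estimate from Test 5's reported figures.
baseline_mb, peak_mb, new_browsers = 270, 928, 4
delta_mb = peak_mb - baseline_mb          # memory growth under the mixed-config load
per_browser_mb = delta_mb / new_browsers  # 658 / 4 = 164.5 MB, the "~165 MB" above
```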
## Performance Metrics

| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Pool Reuse | 0% | 100% (default config) | ∞ |
| Memory Leak | Unknown | 0 MB/cycle | Stable |
| Browser Reuse | No | Yes | ~3-5s saved per request |
| Idle Memory | 500-700 MB × N | 270-400 MB | 10x reduction |
| Concurrent Capacity | ~20 | 100+ | 5x |
## Key Learnings

1. **Config Signature Matching**: Permanent browser MUST match endpoint default config exactly (SHA1 hash)
2. **Logging Levels**: Pool diagnostics need INFO level, not DEBUG
3. **Memory in Docker**: Must read cgroup files, not host metrics
4. **Janitor Timing**: 60s interval adequate, but TTLs should be short (5min) for cold pool
5. **Hot Promotion**: 3-use threshold works well for production patterns
6. **Memory Per Browser**: ~150-200 MB per Chromium instance with headless + text_mode

## Test Infrastructure

**Location**: `deploy/docker/tests/`
**Dependencies**: `httpx`, `docker` (Python SDK)
**Pattern**: Sequential build - each test adds one capability

**Files**:
- `test_1_basic.py`: Health check + container lifecycle
- `test_2_memory.py`: + Docker stats monitoring
- `test_3_pool.py`: + Log analysis for pool markers
- `test_4_concurrent.py`: + asyncio.Semaphore for concurrency control
- `test_5_pool_stress.py`: + Config variants (viewports)
- `test_6_multi_endpoint.py`: + Multiple endpoint testing
- `test_7_cleanup.py`: + Time-series memory tracking for janitor

**Run Pattern**:

```bash
cd deploy/docker/tests
pip install -r requirements.txt
# Rebuild after code changes:
cd /path/to/repo && docker buildx build -t crawl4ai-local:latest --load .
# Run test:
python test_N_name.py
```
## Architecture Decisions

**Why Permanent Browser?**
- 90% of requests use default config → single browser serves most traffic
- Eliminates 3-5s startup overhead per request

**Why 3-Tier Pool?**
- Permanent: Zero cost for common case
- Hot: Amortized cost for frequent variants
- Cold: Lazy allocation for rare configs

**Why Adaptive Janitor?**
- Memory pressure triggers aggressive cleanup
- Low memory allows longer TTLs for better reuse

**Why Not Close After Each Request?**
- Browser startup: 3-5s overhead
- Pool reuse: <100ms overhead
- Net: 30-50x faster
## Future Optimizations

1. **Request Queuing**: When at capacity, queue instead of reject
2. **Pre-warming**: Predict common configs, pre-create browsers
3. **Metrics Export**: Prometheus metrics for pool efficiency
4. **Config Normalization**: Group similar viewports (e.g., 1920±50 → 1920)
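Item 4 could be sketched as a snap-to-nearest helper. `COMMON_WIDTHS` and the 50 px tolerance here are assumptions for illustration, not project code:

```python
# Hypothetical viewport normalization: snap near-identical widths to a canonical
# value so they share one pool signature. Thresholds are assumptions.
COMMON_WIDTHS = (1280, 1366, 1440, 1920, 2560)

def normalize_width(width: int, tolerance: int = 50) -> int:
    """Snap to the nearest common width when within tolerance, else keep as-is."""
    nearest = min(COMMON_WIDTHS, key=lambda w: abs(w - width))
    return nearest if abs(nearest - width) <= tolerance else width
```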
## Critical Code Paths

**Browser Acquisition** (`crawler_pool.py:34-78`):

```
get_crawler(cfg) →
    _sig(cfg) →
    if sig == DEFAULT_CONFIG_SIG → PERMANENT
    elif sig in HOT_POOL → HOT_POOL[sig]
    elif sig in COLD_POOL → promote if count >= 3
    else → create new in COLD_POOL
```

**Janitor Loop** (`crawler_pool.py:107-146`):

```
while True:
    mem% = get_container_memory_percent()
    if mem% > 80: interval=10s, cold_ttl=30s
    elif mem% > 60: interval=30s, cold_ttl=60s
    else: interval=60s, cold_ttl=300s
    sleep(interval)
    close idle browsers (COLD then HOT)
```
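The pressure tiers in that loop can be captured as a pure function, which makes the schedule easy to unit-test. This helper is hypothetical; it simply mirrors the pseudocode:

```python
# Pure-function form of the janitor's adaptive schedule (illustrative).
def janitor_schedule(mem_pct: float):
    """Return (sleep_interval_s, cold_ttl_s) for a given memory percentage."""
    if mem_pct > 80:
        return 10, 30    # high pressure: sweep often, evict cold browsers fast
    if mem_pct > 60:
        return 30, 60    # moderate pressure
    return 60, 300       # relaxed: base 5-minute TTL
```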
**Endpoint Pattern** (`server.py` example):

```python
@app.post("/html")
async def generate_html(...):
    from crawler_pool import get_crawler
    crawler = await get_crawler(get_default_browser_config())
    results = await crawler.arun(url=body.url, config=cfg)  # cfg: the request's CrawlerRunConfig
    # No crawler.close() - returned to pool
```
## Debugging Tips

**Check Pool Activity**:

```bash
docker logs crawl4ai-test | grep -E "(🔥|♨️|❄️|🆕|⬆️)"
```

**Verify Config Signature**:

```python
from crawl4ai import BrowserConfig
import json, hashlib

cfg = BrowserConfig(...)
sig = hashlib.sha1(json.dumps(cfg.to_dict(), sort_keys=True).encode()).hexdigest()
print(sig[:8])  # Compare with logs
```
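The same check can be run without crawl4ai installed by hashing a plain dict; the `cfg_dict` below is hypothetical, standing in for what `BrowserConfig.to_dict()` would return:

```python
# Standalone signature check: key order must not change the hash,
# which is why sort_keys=True matters for pool lookups.
import json, hashlib

cfg_dict = {"headless": True, "viewport_width": 1920}  # hypothetical config dict
payload = json.dumps(cfg_dict, sort_keys=True, separators=(",", ":"))
sig = hashlib.sha1(payload.encode()).hexdigest()

# Same dict with keys in a different order yields the same signature:
alt = json.dumps({"viewport_width": 1920, "headless": True},
                 sort_keys=True, separators=(",", ":"))
assert hashlib.sha1(alt.encode()).hexdigest() == sig
```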
**Monitor Memory**:

```bash
docker stats crawl4ai-test
```

## Known Limitations

- **Mac Docker Stats**: CPU metrics unreliable, memory works
- **PDF Generation**: Slowest endpoint (~7s), no optimization yet
- **Hot Pool Persistence**: May hold memory longer than needed (trade-off for performance)
- **Janitor Lag**: Up to 60s before cleanup triggers in low-memory scenarios
```diff
@@ -67,6 +67,7 @@ async def handle_llm_qa(
     config: dict
 ) -> str:
     """Process QA using LLM with crawled content as context."""
+    from crawler_pool import get_crawler
     try:
         if not url.startswith(('http://', 'https://')) and not url.startswith(("raw:", "raw://")):
             url = 'https://' + url
@@ -75,15 +76,21 @@ async def handle_llm_qa(
         if last_q_index != -1:
             url = url[:last_q_index]
 
-        # Get markdown content
-        async with AsyncWebCrawler() as crawler:
-            result = await crawler.arun(url)
-            if not result.success:
-                raise HTTPException(
-                    status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
-                    detail=result.error_message
-                )
-            content = result.markdown.fit_markdown or result.markdown.raw_markdown
+        # Get markdown content (use default config)
+        from utils import load_config
+        cfg = load_config()
+        browser_cfg = BrowserConfig(
+            extra_args=cfg["crawler"]["browser"].get("extra_args", []),
+            **cfg["crawler"]["browser"].get("kwargs", {}),
+        )
+        crawler = await get_crawler(browser_cfg)
+        result = await crawler.arun(url)
+        if not result.success:
+            raise HTTPException(
+                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
+                detail=result.error_message
+            )
+        content = result.markdown.fit_markdown or result.markdown.raw_markdown
 
         # Create prompt and get LLM response
         prompt = f"""Use the following content as context to answer the question.
```
```diff
@@ -272,25 +279,32 @@ async def handle_markdown_request(
 
         cache_mode = CacheMode.ENABLED if cache == "1" else CacheMode.WRITE_ONLY
 
-        async with AsyncWebCrawler() as crawler:
-            result = await crawler.arun(
-                url=decoded_url,
-                config=CrawlerRunConfig(
-                    markdown_generator=md_generator,
-                    scraping_strategy=LXMLWebScrapingStrategy(),
-                    cache_mode=cache_mode
-                )
-            )
-            if not result.success:
-                raise HTTPException(
-                    status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
-                    detail=result.error_message
-                )
-
-            return (result.markdown.raw_markdown
-                    if filter_type == FilterType.RAW
-                    else result.markdown.fit_markdown)
+        from crawler_pool import get_crawler
+        from utils import load_config as _load_config
+        _cfg = _load_config()
+        browser_cfg = BrowserConfig(
+            extra_args=_cfg["crawler"]["browser"].get("extra_args", []),
+            **_cfg["crawler"]["browser"].get("kwargs", {}),
+        )
+        crawler = await get_crawler(browser_cfg)
+        result = await crawler.arun(
+            url=decoded_url,
+            config=CrawlerRunConfig(
+                markdown_generator=md_generator,
+                scraping_strategy=LXMLWebScrapingStrategy(),
+                cache_mode=cache_mode
+            )
+        )
+        if not result.success:
+            raise HTTPException(
+                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
+                detail=result.error_message
+            )
+
+        return (result.markdown.raw_markdown
+                if filter_type == FilterType.RAW
+                else result.markdown.fit_markdown)
 
     except Exception as e:
         logger.error(f"Markdown error: {str(e)}", exc_info=True)
```
```diff
@@ -504,12 +518,22 @@ async def handle_crawl_request(
     hooks_config: Optional[dict] = None
 ) -> dict:
     """Handle non-streaming crawl requests with optional hooks."""
+    # Track request start
+    request_id = f"req_{uuid4().hex[:8]}"
+    try:
+        from monitor import get_monitor
+        await get_monitor().track_request_start(
+            request_id, "/crawl", urls[0] if urls else "batch", browser_config
+        )
+    except:
+        pass  # Monitor not critical
+
     start_mem_mb = _get_memory_mb()  # <--- Get memory before
     start_time = time.time()
     mem_delta_mb = None
     peak_mem_mb = start_mem_mb
     hook_manager = None
 
     try:
         urls = [('https://' + url) if not url.startswith(('http://', 'https://')) and not url.startswith(("raw:", "raw://")) else url for url in urls]
         browser_config = BrowserConfig.load(browser_config)
```
```diff
@@ -614,7 +638,16 @@ async def handle_crawl_request(
             "server_memory_delta_mb": mem_delta_mb,
             "server_peak_memory_mb": peak_mem_mb
         }
 
+        # Track request completion
+        try:
+            from monitor import get_monitor
+            await get_monitor().track_request_end(
+                request_id, success=True, pool_hit=True, status_code=200
+            )
+        except:
+            pass
+
         # Add hooks information if hooks were used
         if hooks_config and hook_manager:
             from hook_manager import UserHookManager
```
```diff
@@ -643,6 +676,16 @@ async def handle_crawl_request(
 
     except Exception as e:
         logger.error(f"Crawl error: {str(e)}", exc_info=True)
+
+        # Track request error
+        try:
+            from monitor import get_monitor
+            await get_monitor().track_request_end(
+                request_id, success=False, error=str(e), status_code=500
+            )
+        except:
+            pass
+
         if 'crawler' in locals() and crawler.ready:  # Check if crawler was initialized and started
             # try:
             #     await crawler.close()
```
```diff
@@ -3,7 +3,7 @@ app:
   title: "Crawl4AI API"
   version: "1.0.0"
   host: "0.0.0.0"
-  port: 11234
+  port: 11235
   reload: False
   workers: 1
   timeout_keep_alive: 300
@@ -61,7 +61,7 @@ crawler:
     batch_process: 300.0  # Timeout for batch processing
   pool:
     max_pages: 40         # ← GLOBAL_SEM permits
-    idle_ttl_sec: 1800    # ← 30 min janitor cutoff
+    idle_ttl_sec: 300     # ← 5 min janitor cutoff
   browser:
     kwargs:
       headless: true
```
```diff
@@ -1,60 +1,170 @@
-# crawler_pool.py (new file)
-import asyncio, json, hashlib, time, psutil
-from contextlib import suppress
-from typing import Dict
-from crawl4ai import AsyncWebCrawler, BrowserConfig
-from typing import Dict
-from utils import load_config
-
-CONFIG = load_config()
-
-POOL: Dict[str, AsyncWebCrawler] = {}
-LAST_USED: Dict[str, float] = {}
-LOCK = asyncio.Lock()
-
-MEM_LIMIT = CONFIG.get("crawler", {}).get("memory_threshold_percent", 95.0)  # % RAM – refuse new browsers above this
-IDLE_TTL = CONFIG.get("crawler", {}).get("pool", {}).get("idle_ttl_sec", 1800)  # close if unused for 30 min
-
-def _sig(cfg: BrowserConfig) -> str:
-    payload = json.dumps(cfg.to_dict(), sort_keys=True, separators=(",",":"))
-    return hashlib.sha1(payload.encode()).hexdigest()
-
-async def get_crawler(cfg: BrowserConfig) -> AsyncWebCrawler:
-    try:
-        sig = _sig(cfg)
-        async with LOCK:
-            if sig in POOL:
-                LAST_USED[sig] = time.time();
-                return POOL[sig]
-            if psutil.virtual_memory().percent >= MEM_LIMIT:
-                raise MemoryError("RAM pressure – new browser denied")
-            crawler = AsyncWebCrawler(config=cfg, thread_safe=False)
-            await crawler.start()
-            POOL[sig] = crawler; LAST_USED[sig] = time.time()
-            return crawler
-    except MemoryError as e:
-        raise MemoryError(f"RAM pressure – new browser denied: {e}")
-    except Exception as e:
-        raise RuntimeError(f"Failed to start browser: {e}")
-    finally:
-        if sig in POOL:
-            LAST_USED[sig] = time.time()
-        else:
-            # If we failed to start the browser, we should remove it from the pool
-            POOL.pop(sig, None)
-            LAST_USED.pop(sig, None)
-
-async def close_all():
-    async with LOCK:
-        await asyncio.gather(*(c.close() for c in POOL.values()), return_exceptions=True)
-        POOL.clear(); LAST_USED.clear()
-
-async def janitor():
-    while True:
-        await asyncio.sleep(60)
-        now = time.time()
-        async with LOCK:
-            for sig, crawler in list(POOL.items()):
-                if now - LAST_USED[sig] > IDLE_TTL:
-                    with suppress(Exception): await crawler.close()
-                    POOL.pop(sig, None); LAST_USED.pop(sig, None)
+# crawler_pool.py - Smart browser pool with tiered management
+import asyncio, json, hashlib, time
+from contextlib import suppress
+from typing import Dict, Optional
+from crawl4ai import AsyncWebCrawler, BrowserConfig
+from utils import load_config, get_container_memory_percent
+import logging
+
+logger = logging.getLogger(__name__)
+CONFIG = load_config()
+
+# Pool tiers
+PERMANENT: Optional[AsyncWebCrawler] = None  # Always-ready default browser
+HOT_POOL: Dict[str, AsyncWebCrawler] = {}    # Frequent configs
+COLD_POOL: Dict[str, AsyncWebCrawler] = {}   # Rare configs
+LAST_USED: Dict[str, float] = {}
+USAGE_COUNT: Dict[str, int] = {}
+LOCK = asyncio.Lock()
+
+# Config
+MEM_LIMIT = CONFIG.get("crawler", {}).get("memory_threshold_percent", 95.0)
+BASE_IDLE_TTL = CONFIG.get("crawler", {}).get("pool", {}).get("idle_ttl_sec", 300)
+DEFAULT_CONFIG_SIG = None  # Cached sig for default config
+
+def _sig(cfg: BrowserConfig) -> str:
+    """Generate config signature."""
+    payload = json.dumps(cfg.to_dict(), sort_keys=True, separators=(",",":"))
+    return hashlib.sha1(payload.encode()).hexdigest()
+
+def _is_default_config(sig: str) -> bool:
+    """Check if config matches default."""
+    return sig == DEFAULT_CONFIG_SIG
+
+async def get_crawler(cfg: BrowserConfig) -> AsyncWebCrawler:
+    """Get crawler from pool with tiered strategy."""
+    sig = _sig(cfg)
+    async with LOCK:
+        # Check permanent browser for default config
+        if PERMANENT and _is_default_config(sig):
+            LAST_USED[sig] = time.time()
+            USAGE_COUNT[sig] = USAGE_COUNT.get(sig, 0) + 1
+            logger.info("🔥 Using permanent browser")
+            return PERMANENT
+
+        # Check hot pool
+        if sig in HOT_POOL:
+            LAST_USED[sig] = time.time()
+            USAGE_COUNT[sig] = USAGE_COUNT.get(sig, 0) + 1
+            logger.info(f"♨️ Using hot pool browser (sig={sig[:8]})")
+            return HOT_POOL[sig]
+
+        # Check cold pool (promote to hot if used 3+ times)
+        if sig in COLD_POOL:
+            LAST_USED[sig] = time.time()
+            USAGE_COUNT[sig] = USAGE_COUNT.get(sig, 0) + 1
+
+            if USAGE_COUNT[sig] >= 3:
+                logger.info(f"⬆️ Promoting to hot pool (sig={sig[:8]}, count={USAGE_COUNT[sig]})")
+                HOT_POOL[sig] = COLD_POOL.pop(sig)
+
+                # Track promotion in monitor
+                try:
+                    from monitor import get_monitor
+                    await get_monitor().track_janitor_event("promote", sig, {"count": USAGE_COUNT[sig]})
+                except:
+                    pass
+
+                return HOT_POOL[sig]
+
+            logger.info(f"❄️ Using cold pool browser (sig={sig[:8]})")
+            return COLD_POOL[sig]
+
+        # Memory check before creating new
+        mem_pct = get_container_memory_percent()
+        if mem_pct >= MEM_LIMIT:
+            logger.error(f"💥 Memory pressure: {mem_pct:.1f}% >= {MEM_LIMIT}%")
+            raise MemoryError(f"Memory at {mem_pct:.1f}%, refusing new browser")
+
+        # Create new in cold pool
+        logger.info(f"🆕 Creating new browser in cold pool (sig={sig[:8]}, mem={mem_pct:.1f}%)")
+        crawler = AsyncWebCrawler(config=cfg, thread_safe=False)
+        await crawler.start()
+        COLD_POOL[sig] = crawler
+        LAST_USED[sig] = time.time()
+        USAGE_COUNT[sig] = 1
+        return crawler
+
+async def init_permanent(cfg: BrowserConfig):
+    """Initialize permanent default browser."""
+    global PERMANENT, DEFAULT_CONFIG_SIG
+    async with LOCK:
+        if PERMANENT:
+            return
+        DEFAULT_CONFIG_SIG = _sig(cfg)
+        logger.info("🔥 Creating permanent default browser")
+        PERMANENT = AsyncWebCrawler(config=cfg, thread_safe=False)
+        await PERMANENT.start()
+        LAST_USED[DEFAULT_CONFIG_SIG] = time.time()
+        USAGE_COUNT[DEFAULT_CONFIG_SIG] = 0
+
+async def close_all():
+    """Close all browsers."""
+    async with LOCK:
+        tasks = []
+        if PERMANENT:
+            tasks.append(PERMANENT.close())
+        tasks.extend([c.close() for c in HOT_POOL.values()])
+        tasks.extend([c.close() for c in COLD_POOL.values()])
+        await asyncio.gather(*tasks, return_exceptions=True)
+        HOT_POOL.clear()
+        COLD_POOL.clear()
+        LAST_USED.clear()
+        USAGE_COUNT.clear()
+
+async def janitor():
+    """Adaptive cleanup based on memory pressure."""
+    while True:
+        mem_pct = get_container_memory_percent()
+
+        # Adaptive intervals and TTLs
+        if mem_pct > 80:
+            interval, cold_ttl, hot_ttl = 10, 30, 120
+        elif mem_pct > 60:
+            interval, cold_ttl, hot_ttl = 30, 60, 300
+        else:
+            interval, cold_ttl, hot_ttl = 60, BASE_IDLE_TTL, BASE_IDLE_TTL * 2
+
+        await asyncio.sleep(interval)
+
+        now = time.time()
+        async with LOCK:
+            # Clean cold pool
+            for sig in list(COLD_POOL.keys()):
+                if now - LAST_USED.get(sig, now) > cold_ttl:
+                    idle_time = now - LAST_USED[sig]
+                    logger.info(f"🧹 Closing cold browser (sig={sig[:8]}, idle={idle_time:.0f}s)")
+                    with suppress(Exception):
+                        await COLD_POOL[sig].close()
+                    COLD_POOL.pop(sig, None)
+                    LAST_USED.pop(sig, None)
+                    USAGE_COUNT.pop(sig, None)
+
+                    # Track in monitor
+                    try:
+                        from monitor import get_monitor
+                        await get_monitor().track_janitor_event("close_cold", sig, {"idle_seconds": int(idle_time), "ttl": cold_ttl})
+                    except:
+                        pass
+
+            # Clean hot pool (more conservative)
+            for sig in list(HOT_POOL.keys()):
+                if now - LAST_USED.get(sig, now) > hot_ttl:
+                    idle_time = now - LAST_USED[sig]
+                    logger.info(f"🧹 Closing hot browser (sig={sig[:8]}, idle={idle_time:.0f}s)")
+                    with suppress(Exception):
+                        await HOT_POOL[sig].close()
+                    HOT_POOL.pop(sig, None)
+                    LAST_USED.pop(sig, None)
+                    USAGE_COUNT.pop(sig, None)
+
+                    # Track in monitor
+                    try:
+                        from monitor import get_monitor
+                        await get_monitor().track_janitor_event("close_hot", sig, {"idle_seconds": int(idle_time), "ttl": hot_ttl})
+                    except:
+                        pass
+
+            # Log pool stats
+            if mem_pct > 60:
+                logger.info(f"📊 Pool: hot={len(HOT_POOL)}, cold={len(COLD_POOL)}, mem={mem_pct:.1f}%")
```
382  deploy/docker/monitor.py  (new file)
@@ -0,0 +1,382 @@
# monitor.py - Real-time monitoring stats with Redis persistence
import time
import json
import asyncio
from typing import Dict, List, Optional
from datetime import datetime, timezone
from collections import deque
from redis import asyncio as aioredis
from utils import get_container_memory_percent
import psutil
import logging

logger = logging.getLogger(__name__)

class MonitorStats:
    """Tracks real-time server stats with Redis persistence."""

    def __init__(self, redis: aioredis.Redis):
        self.redis = redis
        self.start_time = time.time()

        # In-memory queues (fast reads, Redis backup)
        self.active_requests: Dict[str, Dict] = {}           # id -> request info
        self.completed_requests: deque = deque(maxlen=100)   # Last 100
        self.janitor_events: deque = deque(maxlen=100)
        self.errors: deque = deque(maxlen=100)

        # Endpoint stats (persisted in Redis)
        self.endpoint_stats: Dict[str, Dict] = {}  # endpoint -> {count, total_time, errors, ...}

        # Background persistence queue (max 10 pending persist requests)
        self._persist_queue: asyncio.Queue = asyncio.Queue(maxsize=10)
        self._persist_worker_task: Optional[asyncio.Task] = None

        # Timeline data (5min window, 5s resolution = 60 points)
        self.memory_timeline: deque = deque(maxlen=60)
        self.requests_timeline: deque = deque(maxlen=60)
        self.browser_timeline: deque = deque(maxlen=60)

    async def track_request_start(self, request_id: str, endpoint: str, url: str, config: Dict = None):
        """Track new request start."""
        req_info = {
            "id": request_id,
            "endpoint": endpoint,
            "url": url[:100],  # Truncate long URLs
            "start_time": time.time(),
            "config_sig": config.get("sig", "default") if config else "default",
            "mem_start": psutil.Process().memory_info().rss / (1024 * 1024)
        }
        self.active_requests[request_id] = req_info

        # Increment endpoint counter
        if endpoint not in self.endpoint_stats:
            self.endpoint_stats[endpoint] = {
                "count": 0, "total_time": 0, "errors": 0,
                "pool_hits": 0, "success": 0
            }
        self.endpoint_stats[endpoint]["count"] += 1

        # Queue persistence (handled by background worker)
        try:
            self._persist_queue.put_nowait(True)
        except asyncio.QueueFull:
            logger.warning("Persistence queue full, skipping")

    async def track_request_end(self, request_id: str, success: bool, error: str = None,
                                pool_hit: bool = True, status_code: int = 200):
        """Track request completion."""
        if request_id not in self.active_requests:
            return

        req_info = self.active_requests.pop(request_id)
        end_time = time.time()
        elapsed = end_time - req_info["start_time"]
        mem_end = psutil.Process().memory_info().rss / (1024 * 1024)
        mem_delta = mem_end - req_info["mem_start"]

        # Update stats
        endpoint = req_info["endpoint"]
        if endpoint in self.endpoint_stats:
            self.endpoint_stats[endpoint]["total_time"] += elapsed
            if success:
                self.endpoint_stats[endpoint]["success"] += 1
            else:
                self.endpoint_stats[endpoint]["errors"] += 1
            if pool_hit:
                self.endpoint_stats[endpoint]["pool_hits"] += 1

        # Add to completed queue
        completed = {
            **req_info,
            "end_time": end_time,
            "elapsed": round(elapsed, 2),
            "mem_delta": round(mem_delta, 1),
            "success": success,
            "error": error,
            "status_code": status_code,
            "pool_hit": pool_hit
        }
        self.completed_requests.append(completed)

        # Track errors
        if not success and error:
            self.errors.append({
                "timestamp": end_time,
                "endpoint": endpoint,
                "url": req_info["url"],
                "error": error,
                "request_id": request_id
            })

        await self._persist_endpoint_stats()

    async def track_janitor_event(self, event_type: str, sig: str, details: Dict):
        """Track janitor cleanup events."""
        self.janitor_events.append({
            "timestamp": time.time(),
            "type": event_type,  # "close_cold", "close_hot", "promote"
|
||||||
|
"sig": sig[:8],
|
||||||
|
"details": details
|
||||||
|
})
|
||||||
|
|
||||||
|
def _cleanup_old_entries(self, max_age_seconds: int = 300):
|
||||||
|
"""Remove entries older than max_age_seconds (default 5min)."""
|
||||||
|
now = time.time()
|
||||||
|
cutoff = now - max_age_seconds
|
||||||
|
|
||||||
|
# Clean completed requests
|
||||||
|
while self.completed_requests and self.completed_requests[0].get("end_time", 0) < cutoff:
|
||||||
|
self.completed_requests.popleft()
|
||||||
|
|
||||||
|
# Clean janitor events
|
||||||
|
while self.janitor_events and self.janitor_events[0].get("timestamp", 0) < cutoff:
|
||||||
|
self.janitor_events.popleft()
|
||||||
|
|
||||||
|
# Clean errors
|
||||||
|
while self.errors and self.errors[0].get("timestamp", 0) < cutoff:
|
||||||
|
self.errors.popleft()
|
||||||
|
|
||||||
|
async def update_timeline(self):
|
||||||
|
"""Update timeline data points (called every 5s)."""
|
||||||
|
now = time.time()
|
||||||
|
mem_pct = get_container_memory_percent()
|
||||||
|
|
||||||
|
# Clean old entries (keep last 5 minutes)
|
||||||
|
self._cleanup_old_entries(max_age_seconds=300)
|
||||||
|
|
||||||
|
# Count requests in last 5s
|
||||||
|
recent_reqs = sum(1 for req in self.completed_requests
|
||||||
|
if now - req.get("end_time", 0) < 5)
|
||||||
|
|
||||||
|
# Browser counts (acquire lock to prevent race conditions)
|
||||||
|
from crawler_pool import PERMANENT, HOT_POOL, COLD_POOL, LOCK
|
||||||
|
async with LOCK:
|
||||||
|
browser_count = {
|
||||||
|
"permanent": 1 if PERMANENT else 0,
|
||||||
|
"hot": len(HOT_POOL),
|
||||||
|
"cold": len(COLD_POOL)
|
||||||
|
}
|
||||||
|
|
||||||
|
self.memory_timeline.append({"time": now, "value": mem_pct})
|
||||||
|
self.requests_timeline.append({"time": now, "value": recent_reqs})
|
||||||
|
self.browser_timeline.append({"time": now, "browsers": browser_count})
|
||||||
|
|
||||||
|
async def _persist_endpoint_stats(self):
|
||||||
|
"""Persist endpoint stats to Redis."""
|
||||||
|
try:
|
||||||
|
await self.redis.set(
|
||||||
|
"monitor:endpoint_stats",
|
||||||
|
json.dumps(self.endpoint_stats),
|
||||||
|
ex=86400 # 24h TTL
|
||||||
|
)
|
||||||
|
except Exception as e:
|
||||||
|
logger.warning(f"Failed to persist endpoint stats: {e}")
|
||||||
|
|
||||||
|
async def _persistence_worker(self):
|
||||||
|
"""Background worker to persist stats to Redis."""
|
||||||
|
while True:
|
||||||
|
try:
|
||||||
|
await self._persist_queue.get()
|
||||||
|
await self._persist_endpoint_stats()
|
||||||
|
self._persist_queue.task_done()
|
||||||
|
except asyncio.CancelledError:
|
||||||
|
break
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Persistence worker error: {e}")
|
||||||
|
|
||||||
|
def start_persistence_worker(self):
|
||||||
|
"""Start the background persistence worker."""
|
||||||
|
if not self._persist_worker_task:
|
||||||
|
self._persist_worker_task = asyncio.create_task(self._persistence_worker())
|
||||||
|
logger.info("Started persistence worker")
|
||||||
|
|
||||||
|
async def stop_persistence_worker(self):
|
||||||
|
"""Stop the background persistence worker."""
|
||||||
|
if self._persist_worker_task:
|
||||||
|
self._persist_worker_task.cancel()
|
||||||
|
try:
|
||||||
|
await self._persist_worker_task
|
||||||
|
except asyncio.CancelledError:
|
||||||
|
pass
|
||||||
|
self._persist_worker_task = None
|
||||||
|
logger.info("Stopped persistence worker")
|
||||||
|
|
||||||
|
async def cleanup(self):
|
||||||
|
"""Cleanup on shutdown - persist final stats and stop workers."""
|
||||||
|
logger.info("Monitor cleanup starting...")
|
||||||
|
try:
|
||||||
|
# Persist final stats before shutdown
|
||||||
|
await self._persist_endpoint_stats()
|
||||||
|
# Stop background worker
|
||||||
|
await self.stop_persistence_worker()
|
||||||
|
logger.info("Monitor cleanup completed")
|
||||||
|
except Exception as e:
|
||||||
|
logger.error(f"Monitor cleanup error: {e}")
|
||||||
|
|
||||||
|
async def load_from_redis(self):
|
||||||
|
"""Load persisted stats from Redis."""
|
||||||
|
try:
|
||||||
|
data = await self.redis.get("monitor:endpoint_stats")
|
||||||
|
if data:
|
||||||
|
self.endpoint_stats = json.loads(data)
|
||||||
|
logger.info("Loaded endpoint stats from Redis")
|
||||||
|
except Exception as e:
|
||||||
|
logger.warning(f"Failed to load from Redis: {e}")
|
||||||
|
|
||||||
|
async def get_health_summary(self) -> Dict:
|
||||||
|
"""Get current system health snapshot."""
|
||||||
|
mem_pct = get_container_memory_percent()
|
||||||
|
cpu_pct = psutil.cpu_percent(interval=0.1)
|
||||||
|
|
||||||
|
# Network I/O (delta since last call)
|
||||||
|
net = psutil.net_io_counters()
|
||||||
|
|
||||||
|
# Pool status (acquire lock to prevent race conditions)
|
||||||
|
from crawler_pool import PERMANENT, HOT_POOL, COLD_POOL, LOCK
|
||||||
|
async with LOCK:
|
||||||
|
# TODO: Track actual browser process memory instead of estimates
|
||||||
|
# These are conservative estimates based on typical Chromium usage
|
||||||
|
permanent_mem = 270 if PERMANENT else 0 # Estimate: ~270MB for permanent browser
|
||||||
|
hot_mem = len(HOT_POOL) * 180 # Estimate: ~180MB per hot pool browser
|
||||||
|
cold_mem = len(COLD_POOL) * 180 # Estimate: ~180MB per cold pool browser
|
||||||
|
permanent_active = PERMANENT is not None
|
||||||
|
hot_count = len(HOT_POOL)
|
||||||
|
cold_count = len(COLD_POOL)
|
||||||
|
|
||||||
|
return {
|
||||||
|
"container": {
|
||||||
|
"memory_percent": round(mem_pct, 1),
|
||||||
|
"cpu_percent": round(cpu_pct, 1),
|
||||||
|
"network_sent_mb": round(net.bytes_sent / (1024**2), 2),
|
||||||
|
"network_recv_mb": round(net.bytes_recv / (1024**2), 2),
|
||||||
|
"uptime_seconds": int(time.time() - self.start_time)
|
||||||
|
},
|
||||||
|
"pool": {
|
||||||
|
"permanent": {"active": permanent_active, "memory_mb": permanent_mem},
|
||||||
|
"hot": {"count": hot_count, "memory_mb": hot_mem},
|
||||||
|
"cold": {"count": cold_count, "memory_mb": cold_mem},
|
||||||
|
"total_memory_mb": permanent_mem + hot_mem + cold_mem
|
||||||
|
},
|
||||||
|
"janitor": {
|
||||||
|
"next_cleanup_estimate": "adaptive", # Would need janitor state
|
||||||
|
"memory_pressure": "LOW" if mem_pct < 60 else "MEDIUM" if mem_pct < 80 else "HIGH"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
def get_active_requests(self) -> List[Dict]:
|
||||||
|
"""Get list of currently active requests."""
|
||||||
|
now = time.time()
|
||||||
|
return [
|
||||||
|
{
|
||||||
|
**req,
|
||||||
|
"elapsed": round(now - req["start_time"], 1),
|
||||||
|
"status": "running"
|
||||||
|
}
|
||||||
|
for req in self.active_requests.values()
|
||||||
|
]
|
||||||
|
|
||||||
|
def get_completed_requests(self, limit: int = 50, filter_status: str = "all") -> List[Dict]:
|
||||||
|
"""Get recent completed requests."""
|
||||||
|
requests = list(self.completed_requests)[-limit:]
|
||||||
|
if filter_status == "success":
|
||||||
|
requests = [r for r in requests if r.get("success")]
|
||||||
|
elif filter_status == "error":
|
||||||
|
requests = [r for r in requests if not r.get("success")]
|
||||||
|
return requests
|
||||||
|
|
||||||
|
async def get_browser_list(self) -> List[Dict]:
|
||||||
|
"""Get detailed browser pool information."""
|
||||||
|
from crawler_pool import PERMANENT, HOT_POOL, COLD_POOL, LAST_USED, USAGE_COUNT, DEFAULT_CONFIG_SIG, LOCK
|
||||||
|
|
||||||
|
browsers = []
|
||||||
|
now = time.time()
|
||||||
|
|
||||||
|
# Acquire lock to prevent race conditions during iteration
|
||||||
|
async with LOCK:
|
||||||
|
if PERMANENT:
|
||||||
|
browsers.append({
|
||||||
|
"type": "permanent",
|
||||||
|
"sig": DEFAULT_CONFIG_SIG[:8] if DEFAULT_CONFIG_SIG else "unknown",
|
||||||
|
"age_seconds": int(now - self.start_time),
|
||||||
|
"last_used_seconds": int(now - LAST_USED.get(DEFAULT_CONFIG_SIG, now)),
|
||||||
|
"memory_mb": 270,
|
||||||
|
"hits": USAGE_COUNT.get(DEFAULT_CONFIG_SIG, 0),
|
||||||
|
"killable": False
|
||||||
|
})
|
||||||
|
|
||||||
|
for sig, crawler in HOT_POOL.items():
|
||||||
|
browsers.append({
|
||||||
|
"type": "hot",
|
||||||
|
"sig": sig[:8],
|
||||||
|
"age_seconds": int(now - self.start_time), # Approximation
|
||||||
|
"last_used_seconds": int(now - LAST_USED.get(sig, now)),
|
||||||
|
"memory_mb": 180, # Estimate
|
||||||
|
"hits": USAGE_COUNT.get(sig, 0),
|
||||||
|
"killable": True
|
||||||
|
})
|
||||||
|
|
||||||
|
for sig, crawler in COLD_POOL.items():
|
||||||
|
browsers.append({
|
||||||
|
"type": "cold",
|
||||||
|
"sig": sig[:8],
|
||||||
|
"age_seconds": int(now - self.start_time),
|
||||||
|
"last_used_seconds": int(now - LAST_USED.get(sig, now)),
|
||||||
|
"memory_mb": 180,
|
||||||
|
"hits": USAGE_COUNT.get(sig, 0),
|
||||||
|
"killable": True
|
||||||
|
})
|
||||||
|
|
||||||
|
return browsers
|
||||||
|
|
||||||
|
def get_endpoint_stats_summary(self) -> Dict[str, Dict]:
|
||||||
|
"""Get aggregated endpoint statistics."""
|
||||||
|
summary = {}
|
||||||
|
for endpoint, stats in self.endpoint_stats.items():
|
||||||
|
count = stats["count"]
|
||||||
|
avg_time = (stats["total_time"] / count) if count > 0 else 0
|
||||||
|
success_rate = (stats["success"] / count * 100) if count > 0 else 0
|
||||||
|
pool_hit_rate = (stats["pool_hits"] / count * 100) if count > 0 else 0
|
||||||
|
|
||||||
|
summary[endpoint] = {
|
||||||
|
"count": count,
|
||||||
|
"avg_latency_ms": round(avg_time * 1000, 1),
|
||||||
|
"success_rate_percent": round(success_rate, 1),
|
||||||
|
"pool_hit_rate_percent": round(pool_hit_rate, 1),
|
||||||
|
"errors": stats["errors"]
|
||||||
|
}
|
||||||
|
return summary
|
||||||
|
|
||||||
|
def get_timeline_data(self, metric: str, window: str = "5m") -> Dict:
|
||||||
|
"""Get timeline data for charts."""
|
||||||
|
# For now, only 5m window supported
|
||||||
|
if metric == "memory":
|
||||||
|
data = list(self.memory_timeline)
|
||||||
|
elif metric == "requests":
|
||||||
|
data = list(self.requests_timeline)
|
||||||
|
elif metric == "browsers":
|
||||||
|
data = list(self.browser_timeline)
|
||||||
|
else:
|
||||||
|
return {"timestamps": [], "values": []}
|
||||||
|
|
||||||
|
return {
|
||||||
|
"timestamps": [int(d["time"]) for d in data],
|
||||||
|
"values": [d.get("value", d.get("browsers")) for d in data]
|
||||||
|
}
|
||||||
|
|
||||||
|
def get_janitor_log(self, limit: int = 100) -> List[Dict]:
|
||||||
|
"""Get recent janitor events."""
|
||||||
|
return list(self.janitor_events)[-limit:]
|
||||||
|
|
||||||
|
def get_errors_log(self, limit: int = 100) -> List[Dict]:
|
||||||
|
"""Get recent errors."""
|
||||||
|
return list(self.errors)[-limit:]
|
||||||
|
|
||||||
|
# Global instance (initialized in server.py)
|
||||||
|
monitor_stats: Optional[MonitorStats] = None
|
||||||
|
|
||||||
|
def get_monitor() -> MonitorStats:
|
||||||
|
"""Get global monitor instance."""
|
||||||
|
if monitor_stats is None:
|
||||||
|
raise RuntimeError("Monitor not initialized")
|
||||||
|
return monitor_stats
|
||||||
405  deploy/docker/monitor_routes.py  Normal file
@@ -0,0 +1,405 @@
# monitor_routes.py - Monitor API endpoints
from fastapi import APIRouter, HTTPException, WebSocket, WebSocketDisconnect
from pydantic import BaseModel
from typing import Optional
from monitor import get_monitor
import logging
import asyncio
import json

logger = logging.getLogger(__name__)
router = APIRouter(prefix="/monitor", tags=["monitor"])


@router.get("/health")
async def get_health():
    """Get current system health snapshot."""
    try:
        monitor = get_monitor()
        return await monitor.get_health_summary()
    except Exception as e:
        logger.error(f"Error getting health: {e}")
        raise HTTPException(500, str(e))


@router.get("/requests")
async def get_requests(status: str = "all", limit: int = 50):
    """Get active and completed requests.

    Args:
        status: Filter by 'active', 'completed', 'success', 'error', or 'all'
        limit: Max number of completed requests to return (default 50)
    """
    # Input validation
    if status not in ["all", "active", "completed", "success", "error"]:
        raise HTTPException(400, f"Invalid status: {status}. Must be one of: all, active, completed, success, error")
    if limit < 1 or limit > 1000:
        raise HTTPException(400, f"Invalid limit: {limit}. Must be between 1 and 1000")

    try:
        monitor = get_monitor()

        if status == "active":
            return {"active": monitor.get_active_requests(), "completed": []}
        elif status == "completed":
            return {"active": [], "completed": monitor.get_completed_requests(limit)}
        elif status in ["success", "error"]:
            return {"active": [], "completed": monitor.get_completed_requests(limit, status)}
        else:  # "all"
            return {
                "active": monitor.get_active_requests(),
                "completed": monitor.get_completed_requests(limit)
            }
    except Exception as e:
        logger.error(f"Error getting requests: {e}")
        raise HTTPException(500, str(e))


@router.get("/browsers")
async def get_browsers():
    """Get detailed browser pool information."""
    try:
        monitor = get_monitor()
        browsers = await monitor.get_browser_list()

        # Calculate summary stats
        total_browsers = len(browsers)
        total_memory = sum(b["memory_mb"] for b in browsers)

        # Calculate reuse rate from recent requests
        recent = monitor.get_completed_requests(100)
        pool_hits = sum(1 for r in recent if r.get("pool_hit", False))
        reuse_rate = (pool_hits / len(recent) * 100) if recent else 0

        return {
            "browsers": browsers,
            "summary": {
                "total_count": total_browsers,
                "total_memory_mb": total_memory,
                "reuse_rate_percent": round(reuse_rate, 1)
            }
        }
    except Exception as e:
        logger.error(f"Error getting browsers: {e}")
        raise HTTPException(500, str(e))


@router.get("/endpoints/stats")
async def get_endpoint_stats():
    """Get aggregated endpoint statistics."""
    try:
        monitor = get_monitor()
        return monitor.get_endpoint_stats_summary()
    except Exception as e:
        logger.error(f"Error getting endpoint stats: {e}")
        raise HTTPException(500, str(e))


@router.get("/timeline")
async def get_timeline(metric: str = "memory", window: str = "5m"):
    """Get timeline data for charts.

    Args:
        metric: 'memory', 'requests', or 'browsers'
        window: Time window (only '5m' supported for now)
    """
    # Input validation
    if metric not in ["memory", "requests", "browsers"]:
        raise HTTPException(400, f"Invalid metric: {metric}. Must be one of: memory, requests, browsers")
    if window != "5m":
        raise HTTPException(400, f"Invalid window: {window}. Only '5m' is currently supported")

    try:
        monitor = get_monitor()
        return monitor.get_timeline_data(metric, window)
    except Exception as e:
        logger.error(f"Error getting timeline: {e}")
        raise HTTPException(500, str(e))


@router.get("/logs/janitor")
async def get_janitor_log(limit: int = 100):
    """Get recent janitor cleanup events."""
    # Input validation
    if limit < 1 or limit > 1000:
        raise HTTPException(400, f"Invalid limit: {limit}. Must be between 1 and 1000")

    try:
        monitor = get_monitor()
        return {"events": monitor.get_janitor_log(limit)}
    except Exception as e:
        logger.error(f"Error getting janitor log: {e}")
        raise HTTPException(500, str(e))


@router.get("/logs/errors")
async def get_errors_log(limit: int = 100):
    """Get recent errors."""
    # Input validation
    if limit < 1 or limit > 1000:
        raise HTTPException(400, f"Invalid limit: {limit}. Must be between 1 and 1000")

    try:
        monitor = get_monitor()
        return {"errors": monitor.get_errors_log(limit)}
    except Exception as e:
        logger.error(f"Error getting errors log: {e}")
        raise HTTPException(500, str(e))


# ========== Control Actions ==========

class KillBrowserRequest(BaseModel):
    sig: str


@router.post("/actions/cleanup")
async def force_cleanup():
    """Force immediate janitor cleanup (kills idle cold pool browsers)."""
    try:
        from crawler_pool import COLD_POOL, LAST_USED, USAGE_COUNT, LOCK
        import time
        from contextlib import suppress

        killed_count = 0
        now = time.time()

        async with LOCK:
            for sig in list(COLD_POOL.keys()):
                # Kill all cold pool browsers immediately
                logger.info(f"🧹 Force cleanup: closing cold browser (sig={sig[:8]})")
                with suppress(Exception):
                    await COLD_POOL[sig].close()
                COLD_POOL.pop(sig, None)
                LAST_USED.pop(sig, None)
                USAGE_COUNT.pop(sig, None)
                killed_count += 1

        monitor = get_monitor()
        await monitor.track_janitor_event("force_cleanup", "manual", {"killed": killed_count})

        return {"success": True, "killed_browsers": killed_count}
    except Exception as e:
        logger.error(f"Error during force cleanup: {e}")
        raise HTTPException(500, str(e))


@router.post("/actions/kill_browser")
async def kill_browser(req: KillBrowserRequest):
    """Kill a specific browser by signature (hot or cold only).

    Args:
        sig: Browser config signature (first 8 chars)
    """
    try:
        from crawler_pool import HOT_POOL, COLD_POOL, LAST_USED, USAGE_COUNT, LOCK, DEFAULT_CONFIG_SIG
        from contextlib import suppress

        # Find full signature matching prefix
        target_sig = None
        pool_type = None

        async with LOCK:
            # Check hot pool
            for sig in HOT_POOL.keys():
                if sig.startswith(req.sig):
                    target_sig = sig
                    pool_type = "hot"
                    break

            # Check cold pool
            if not target_sig:
                for sig in COLD_POOL.keys():
                    if sig.startswith(req.sig):
                        target_sig = sig
                        pool_type = "cold"
                        break

            # Check if trying to kill permanent
            if DEFAULT_CONFIG_SIG and DEFAULT_CONFIG_SIG.startswith(req.sig):
                raise HTTPException(403, "Cannot kill permanent browser. Use restart instead.")

            if not target_sig:
                raise HTTPException(404, f"Browser with sig={req.sig} not found")

            # Warn if there are active requests (browser might be in use)
            monitor = get_monitor()
            active_count = len(monitor.get_active_requests())
            if active_count > 0:
                logger.warning(f"Killing browser {target_sig[:8]} while {active_count} requests are active - may cause failures")

            # Kill the browser
            if pool_type == "hot":
                browser = HOT_POOL.pop(target_sig)
            else:
                browser = COLD_POOL.pop(target_sig)

            with suppress(Exception):
                await browser.close()

            LAST_USED.pop(target_sig, None)
            USAGE_COUNT.pop(target_sig, None)

        logger.info(f"🔪 Killed {pool_type} browser (sig={target_sig[:8]})")

        monitor = get_monitor()
        await monitor.track_janitor_event("kill_browser", target_sig, {"pool": pool_type, "manual": True})

        return {"success": True, "killed_sig": target_sig[:8], "pool_type": pool_type}
    except HTTPException:
        raise
    except Exception as e:
        logger.error(f"Error killing browser: {e}")
        raise HTTPException(500, str(e))


@router.post("/actions/restart_browser")
async def restart_browser(req: KillBrowserRequest):
    """Restart a browser (kill + recreate). Works for permanent too.

    Args:
        sig: Browser config signature (first 8 chars), or "permanent"
    """
    try:
        from crawler_pool import (PERMANENT, HOT_POOL, COLD_POOL, LAST_USED,
                                  USAGE_COUNT, LOCK, DEFAULT_CONFIG_SIG, init_permanent)
        from crawl4ai import AsyncWebCrawler, BrowserConfig
        from contextlib import suppress
        import time

        # Handle permanent browser restart
        if req.sig == "permanent" or (DEFAULT_CONFIG_SIG and DEFAULT_CONFIG_SIG.startswith(req.sig)):
            async with LOCK:
                if PERMANENT:
                    with suppress(Exception):
                        await PERMANENT.close()

                # Reinitialize permanent
                from utils import load_config
                config = load_config()
                await init_permanent(BrowserConfig(
                    extra_args=config["crawler"]["browser"].get("extra_args", []),
                    **config["crawler"]["browser"].get("kwargs", {}),
                ))

            logger.info("🔄 Restarted permanent browser")
            return {"success": True, "restarted": "permanent"}

        # Handle hot/cold browser restart
        target_sig = None
        pool_type = None
        browser_config = None

        async with LOCK:
            # Find browser
            for sig in HOT_POOL.keys():
                if sig.startswith(req.sig):
                    target_sig = sig
                    pool_type = "hot"
                    # Would need to reconstruct config (not stored currently)
                    break

            if not target_sig:
                for sig in COLD_POOL.keys():
                    if sig.startswith(req.sig):
                        target_sig = sig
                        pool_type = "cold"
                        break

            if not target_sig:
                raise HTTPException(404, f"Browser with sig={req.sig} not found")

            # Kill existing
            if pool_type == "hot":
                browser = HOT_POOL.pop(target_sig)
            else:
                browser = COLD_POOL.pop(target_sig)

            with suppress(Exception):
                await browser.close()

            # Note: We can't easily recreate with same config without storing it
            # For now, just kill and let new requests create fresh ones
            LAST_USED.pop(target_sig, None)
            USAGE_COUNT.pop(target_sig, None)

        logger.info(f"🔄 Restarted {pool_type} browser (sig={target_sig[:8]})")

        monitor = get_monitor()
        await monitor.track_janitor_event("restart_browser", target_sig, {"pool": pool_type})

        return {"success": True, "restarted_sig": target_sig[:8], "note": "Browser will be recreated on next request"}
    except HTTPException:
        raise
    except Exception as e:
        logger.error(f"Error restarting browser: {e}")
        raise HTTPException(500, str(e))


@router.post("/stats/reset")
async def reset_stats():
    """Reset today's endpoint counters."""
    try:
        monitor = get_monitor()
        monitor.endpoint_stats.clear()
        await monitor._persist_endpoint_stats()

        return {"success": True, "message": "Endpoint stats reset"}
    except Exception as e:
        logger.error(f"Error resetting stats: {e}")
        raise HTTPException(500, str(e))


@router.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    """WebSocket endpoint for real-time monitoring updates.

    Sends updates every 2 seconds with:
    - Health stats
    - Active/completed requests
    - Browser pool status
    - Timeline data
    """
    await websocket.accept()
    logger.info("WebSocket client connected")

    try:
        while True:
            try:
                # Gather all monitoring data
                monitor = get_monitor()

                data = {
                    "timestamp": asyncio.get_event_loop().time(),
                    "health": await monitor.get_health_summary(),
                    "requests": {
                        "active": monitor.get_active_requests(),
                        "completed": monitor.get_completed_requests(limit=10)
                    },
                    "browsers": await monitor.get_browser_list(),
                    "timeline": {
                        "memory": monitor.get_timeline_data("memory", "5m"),
                        "requests": monitor.get_timeline_data("requests", "5m"),
                        "browsers": monitor.get_timeline_data("browsers", "5m")
                    },
                    "janitor": monitor.get_janitor_log(limit=10),
                    "errors": monitor.get_errors_log(limit=10)
                }

                # Send update to client
                await websocket.send_json(data)

                # Wait 2 seconds before next update
                await asyncio.sleep(2)

            except WebSocketDisconnect:
                logger.info("WebSocket client disconnected")
                break
            except Exception as e:
                logger.error(f"WebSocket error: {e}", exc_info=True)
                await asyncio.sleep(2)  # Continue trying

    except Exception as e:
        logger.error(f"WebSocket connection error: {e}", exc_info=True)
    finally:
        logger.info("WebSocket connection closed")
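On the consuming side, a dashboard client only needs to unpack the JSON frames that `websocket_endpoint` pushes every 2 seconds. A minimal sketch of that parsing step, using the field names from the `data` dict above (connection handling and the `parse_monitor_frame` name itself are illustrative, not part of the codebase):

```python
import json


def parse_monitor_frame(raw: str) -> dict:
    """Reduce one /monitor/ws JSON frame to its headline numbers."""
    frame = json.loads(raw)
    health = frame["health"]
    return {
        # Container-level memory as reported by get_health_summary()
        "memory_percent": health["container"]["memory_percent"],
        # "LOW" / "MEDIUM" / "HIGH", thresholds at 60% and 80%
        "memory_pressure": health["janitor"]["memory_pressure"],
        "active_requests": len(frame["requests"]["active"]),
        "pool_browsers": len(frame["browsers"]),
    }
```

Any WebSocket client library can feed incoming text frames straight into this function and plot the resulting numbers over the same 2-second cadence the server uses.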
@@ -16,6 +16,7 @@ from fastapi import Request, Depends
|
|||||||
from fastapi.responses import FileResponse
|
from fastapi.responses import FileResponse
|
||||||
import base64
|
import base64
|
||||||
import re
|
import re
|
||||||
|
import logging
|
||||||
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
|
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
|
||||||
from api import (
|
from api import (
|
||||||
handle_markdown_request, handle_llm_qa,
|
handle_markdown_request, handle_llm_qa,
|
||||||
@@ -78,6 +79,14 @@ __version__ = "0.5.1-d1"
|
|||||||
MAX_PAGES = config["crawler"]["pool"].get("max_pages", 30)
|
MAX_PAGES = config["crawler"]["pool"].get("max_pages", 30)
|
||||||
GLOBAL_SEM = asyncio.Semaphore(MAX_PAGES)
|
GLOBAL_SEM = asyncio.Semaphore(MAX_PAGES)
|
||||||
|
|
||||||
|
# ── default browser config helper ─────────────────────────────
|
||||||
|
def get_default_browser_config() -> BrowserConfig:
|
||||||
|
"""Get default BrowserConfig from config.yml."""
|
||||||
|
return BrowserConfig(
|
||||||
|
extra_args=config["crawler"]["browser"].get("extra_args", []),
|
||||||
|
**config["crawler"]["browser"].get("kwargs", {}),
|
||||||
|
)
|
||||||
|
|
||||||
# import logging
|
# import logging
|
||||||
# page_log = logging.getLogger("page_cap")
|
 # page_log = logging.getLogger("page_cap")
 # orig_arun = AsyncWebCrawler.arun

@@ -103,15 +112,52 @@ AsyncWebCrawler.arun = capped_arun

 @asynccontextmanager
 async def lifespan(_: FastAPI):
-    await get_crawler(BrowserConfig(
+    from crawler_pool import init_permanent
+    from monitor import MonitorStats
+    import monitor as monitor_module
+
+    # Initialize monitor
+    monitor_module.monitor_stats = MonitorStats(redis)
+    await monitor_module.monitor_stats.load_from_redis()
+    monitor_module.monitor_stats.start_persistence_worker()
+
+    # Initialize browser pool
+    await init_permanent(BrowserConfig(
         extra_args=config["crawler"]["browser"].get("extra_args", []),
         **config["crawler"]["browser"].get("kwargs", {}),
-    ))  # warm-up
-    app.state.janitor = asyncio.create_task(janitor())  # idle GC
+    ))
+
+    # Start background tasks
+    app.state.janitor = asyncio.create_task(janitor())
+    app.state.timeline_updater = asyncio.create_task(_timeline_updater())
+
     yield

+    # Cleanup
     app.state.janitor.cancel()
+    app.state.timeline_updater.cancel()
+
+    # Monitor cleanup (persist stats and stop workers)
+    from monitor import get_monitor
+    try:
+        await get_monitor().cleanup()
+    except Exception as e:
+        logger.error(f"Monitor cleanup failed: {e}")
+
     await close_all()

+
+async def _timeline_updater():
+    """Update timeline data every 5 seconds."""
+    from monitor import get_monitor
+    while True:
+        await asyncio.sleep(5)
+        try:
+            await asyncio.wait_for(get_monitor().update_timeline(), timeout=4.0)
+        except asyncio.TimeoutError:
+            logger.warning("Timeline update timeout after 4s")
+        except Exception as e:
+            logger.warning(f"Timeline update error: {e}")

 # ───────────────────── FastAPI instance ──────────────────────
 app = FastAPI(
     title=config["app"]["title"],
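The warmup fix above matters because the pool keys browsers by config signature: the old warmup built a permanent browser whose signature never matched the endpoints' requests, so it sat idle. A minimal sketch of signature-keyed pooling, with hypothetical names (`BrowserPool`, `signature`) that illustrate the idea rather than the real `crawler_pool` module:

```python
import hashlib
import json


class BrowserPool:
    """Minimal signature-keyed pool: identical configs share one browser."""

    def __init__(self):
        self._browsers = {}  # signature -> browser object

    @staticmethod
    def signature(config: dict) -> str:
        # Stable hash of the config so key ordering doesn't matter
        payload = json.dumps(config, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()[:16]

    def get(self, config: dict, factory):
        sig = self.signature(config)
        if sig not in self._browsers:
            self._browsers[sig] = factory(config)  # cold start
        return self._browsers[sig]  # pool hit on every later call


pool = BrowserPool()
b1 = pool.get({"headless": True, "viewport": {"width": 800}}, lambda c: object())
b2 = pool.get({"viewport": {"width": 800}, "headless": True}, lambda c: object())
assert b1 is b2  # same signature -> same browser instance
```

With this scheme, a warmup that uses a different `BrowserConfig` than the endpoints produces a distinct signature and the warmed browser is never reused, which is exactly the mismatch the diff removes.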
@@ -129,6 +175,25 @@ app.mount(
     name="play",
 )

+# ── static monitor dashboard ────────────────────────────────
+MONITOR_DIR = pathlib.Path(__file__).parent / "static" / "monitor"
+if not MONITOR_DIR.exists():
+    raise RuntimeError(f"Monitor assets not found at {MONITOR_DIR}")
+app.mount(
+    "/dashboard",
+    StaticFiles(directory=MONITOR_DIR, html=True),
+    name="monitor_ui",
+)
+
+# ── static assets (logo, etc) ────────────────────────────────
+ASSETS_DIR = pathlib.Path(__file__).parent / "static" / "assets"
+if ASSETS_DIR.exists():
+    app.mount(
+        "/static/assets",
+        StaticFiles(directory=ASSETS_DIR),
+        name="assets",
+    )
+

 @app.get("/")
 async def root():
@@ -212,6 +277,12 @@ def _safe_eval_config(expr: str) -> dict:
 # ── job router ──────────────────────────────────────────────
 app.include_router(init_job_router(redis, config, token_dep))

+# ── monitor router ──────────────────────────────────────────
+from monitor_routes import router as monitor_router
+app.include_router(monitor_router)
+
+logger = logging.getLogger(__name__)
+
 # ──────────────────────── Endpoints ──────────────────────────
 @app.post("/token")
 async def get_token(req: TokenRequest):
@@ -266,27 +337,20 @@ async def generate_html(
     Crawls the URL, preprocesses the raw HTML for schema extraction, and returns the processed HTML.
     Use when you need sanitized HTML structures for building schemas or further processing.
     """
+    from crawler_pool import get_crawler
     cfg = CrawlerRunConfig()
     try:
-        async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
+        crawler = await get_crawler(get_default_browser_config())
         results = await crawler.arun(url=body.url, config=cfg)
-        # Check if the crawl was successful
         if not results[0].success:
-            raise HTTPException(
-                status_code=500,
-                detail=results[0].error_message or "Crawl failed"
-            )
+            raise HTTPException(500, detail=results[0].error_message or "Crawl failed")

         raw_html = results[0].html
         from crawl4ai.utils import preprocess_html_for_schema
         processed_html = preprocess_html_for_schema(raw_html)
         return JSONResponse({"html": processed_html, "url": body.url, "success": True})
     except Exception as e:
-        # Log and raise as HTTP 500 for other exceptions
-        raise HTTPException(
-            status_code=500,
-            detail=str(e)
-        )
+        raise HTTPException(500, detail=str(e))

 # Screenshot endpoint

@@ -304,16 +368,13 @@ async def generate_screenshot(
     Use when you need an image snapshot of the rendered page. It's recommended to provide an output path to save the screenshot.
     Then in result instead of the screenshot you will get a path to the saved file.
     """
+    from crawler_pool import get_crawler
     try:
-        cfg = CrawlerRunConfig(
-            screenshot=True, screenshot_wait_for=body.screenshot_wait_for)
-        async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
-            results = await crawler.arun(url=body.url, config=cfg)
+        cfg = CrawlerRunConfig(screenshot=True, screenshot_wait_for=body.screenshot_wait_for)
+        crawler = await get_crawler(get_default_browser_config())
+        results = await crawler.arun(url=body.url, config=cfg)
         if not results[0].success:
-            raise HTTPException(
-                status_code=500,
-                detail=results[0].error_message or "Crawl failed"
-            )
+            raise HTTPException(500, detail=results[0].error_message or "Crawl failed")
         screenshot_data = results[0].screenshot
         if body.output_path:
             abs_path = os.path.abspath(body.output_path)
@@ -323,10 +384,7 @@ async def generate_screenshot(
             return {"success": True, "path": abs_path}
         return {"success": True, "screenshot": screenshot_data}
     except Exception as e:
-        raise HTTPException(
-            status_code=500,
-            detail=str(e)
-        )
+        raise HTTPException(500, detail=str(e))

 # PDF endpoint

@@ -344,15 +402,13 @@ async def generate_pdf(
     Use when you need a printable or archivable snapshot of the page. It is recommended to provide an output path to save the PDF.
     Then in result instead of the PDF you will get a path to the saved file.
     """
+    from crawler_pool import get_crawler
     try:
         cfg = CrawlerRunConfig(pdf=True)
-        async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
+        crawler = await get_crawler(get_default_browser_config())
         results = await crawler.arun(url=body.url, config=cfg)
         if not results[0].success:
-            raise HTTPException(
-                status_code=500,
-                detail=results[0].error_message or "Crawl failed"
-            )
+            raise HTTPException(500, detail=results[0].error_message or "Crawl failed")
         pdf_data = results[0].pdf
         if body.output_path:
             abs_path = os.path.abspath(body.output_path)
@@ -362,10 +418,7 @@ async def generate_pdf(
             return {"success": True, "path": abs_path}
         return {"success": True, "pdf": base64.b64encode(pdf_data).decode()}
     except Exception as e:
-        raise HTTPException(
-            status_code=500,
-            detail=str(e)
-        )
+        raise HTTPException(500, detail=str(e))


 @app.post("/execute_js")
@@ -421,23 +474,17 @@ async def execute_js(
     ```

     """
+    from crawler_pool import get_crawler
     try:
         cfg = CrawlerRunConfig(js_code=body.scripts)
-        async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
+        crawler = await get_crawler(get_default_browser_config())
         results = await crawler.arun(url=body.url, config=cfg)
         if not results[0].success:
-            raise HTTPException(
-                status_code=500,
-                detail=results[0].error_message or "Crawl failed"
-            )
-        # Return JSON-serializable dict of the first CrawlResult
+            raise HTTPException(500, detail=results[0].error_message or "Crawl failed")
         data = results[0].model_dump()
         return JSONResponse(data)
     except Exception as e:
-        raise HTTPException(
-            status_code=500,
-            detail=str(e)
-        )
+        raise HTTPException(500, detail=str(e))


 @app.get("/llm/{url:path}")
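Every rewritten endpoint above calls `get_crawler(get_default_browser_config())`, so they all resolve to one pool signature instead of each spawning a browser from an empty `BrowserConfig()`. A hedged sketch of what such a shared-default helper might look like (the real `get_default_browser_config` in server.py may read more fields from config.yml; this version is an assumption for illustration):

```python
def get_default_browser_config(config: dict) -> dict:
    """Build one canonical browser config dict from the server's config.yml.

    Because every endpoint calls this same helper, the resulting dicts are
    identical and therefore hash to the same pool signature.
    """
    browser = config.get("crawler", {}).get("browser", {})
    return {
        "headless": True,
        "extra_args": browser.get("extra_args", []),
        **browser.get("kwargs", {}),
    }


cfg = {"crawler": {"browser": {"extra_args": ["--no-sandbox"],
                               "kwargs": {"browser_type": "chromium"}}}}
a = get_default_browser_config(cfg)
b = get_default_browser_config(cfg)
assert a == b  # identical dicts -> identical pool signature -> browser reuse
```

The design point is that the fix is not in the pool itself but in making all callers agree on one config, which is why the diff routes 75% of previously pool-bypassing endpoints through this helper.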
BIN   deploy/docker/static/assets/crawl4ai-logo.jpg   Normal file (binary file not shown; 5.8 KiB)
BIN   deploy/docker/static/assets/crawl4ai-logo.png   Normal file (binary file not shown; 1.6 KiB)
BIN   deploy/docker/static/assets/logo.png            Normal file (binary file not shown; 11 KiB)
1070  deploy/docker/static/monitor/index.html         Normal file (diff suppressed because it is too large)
@@ -167,11 +167,14 @@
         </a>
     </h1>

-    <div class="ml-auto flex space-x-2">
-        <button id="play-tab"
-            class="px-3 py-1 rounded-t bg-surface border border-b-0 border-border text-primary">Playground</button>
-        <button id="stress-tab" class="px-3 py-1 rounded-t border border-border hover:bg-surface">Stress
-            Test</button>
+    <div class="ml-auto flex items-center space-x-4">
+        <a href="/dashboard" class="text-xs text-secondary hover:text-primary underline">Monitor</a>
+        <div class="flex space-x-2">
+            <button id="play-tab"
+                class="px-3 py-1 rounded-t bg-surface border border-b-0 border-border text-primary">Playground</button>
+            <button id="stress-tab" class="px-3 py-1 rounded-t border border-border hover:bg-surface">Stress
+                Test</button>
+        </div>
     </div>
 </header>

34  deploy/docker/test-websocket.py  Executable file
@@ -0,0 +1,34 @@
+#!/usr/bin/env python3
+"""
+Quick WebSocket test - Connect to monitor WebSocket and print updates
+"""
+import asyncio
+import websockets
+import json
+
+
+async def test_websocket():
+    uri = "ws://localhost:11235/monitor/ws"
+    print(f"Connecting to {uri}...")
+
+    try:
+        async with websockets.connect(uri) as websocket:
+            print("✅ Connected!")
+
+            # Receive and print 5 updates
+            for i in range(5):
+                message = await websocket.recv()
+                data = json.loads(message)
+                print(f"\n📊 Update #{i+1}:")
+                print(f"  - Health: CPU {data['health']['container']['cpu_percent']}%, Memory {data['health']['container']['memory_percent']}%")
+                print(f"  - Active Requests: {len(data['requests']['active'])}")
+                print(f"  - Browsers: {len(data['browsers'])}")
+
+    except Exception as e:
+        print(f"❌ Error: {e}")
+        return 1
+
+    print("\n✅ WebSocket test passed!")
+    return 0
+
+if __name__ == "__main__":
+    exit(asyncio.run(test_websocket()))
164  deploy/docker/tests/demo_monitor_dashboard.py  Executable file
@@ -0,0 +1,164 @@
+#!/usr/bin/env python3
+"""
+Monitor Dashboard Demo Script
+Generates varied activity to showcase all monitoring features for video recording.
+"""
+import httpx
+import asyncio
+import time
+from datetime import datetime
+
+BASE_URL = "http://localhost:11235"
+
+async def demo_dashboard():
+    print("🎬 Monitor Dashboard Demo - Starting...\n")
+    print(f"📊 Dashboard: {BASE_URL}/dashboard")
+    print("=" * 60)
+
+    async with httpx.AsyncClient(timeout=60.0) as client:
+
+        # Phase 1: Simple requests (permanent browser)
+        print("\n🔷 Phase 1: Testing permanent browser pool")
+        print("-" * 60)
+        for i in range(5):
+            print(f"  {i+1}/5 Request to /crawl (default config)...")
+            try:
+                r = await client.post(
+                    f"{BASE_URL}/crawl",
+                    json={"urls": [f"https://httpbin.org/html?req={i}"], "crawler_config": {}}
+                )
+                print(f"  ✅ Status: {r.status_code}, Time: {r.elapsed.total_seconds():.2f}s")
+            except Exception as e:
+                print(f"  ❌ Error: {e}")
+            await asyncio.sleep(1)  # Small delay between requests
+
+        # Phase 2: Create variant browsers (different configs)
+        print("\n🔶 Phase 2: Testing cold→hot pool promotion")
+        print("-" * 60)
+        viewports = [
+            {"width": 1920, "height": 1080},
+            {"width": 1280, "height": 720},
+            {"width": 800, "height": 600}
+        ]
+
+        for idx, viewport in enumerate(viewports):
+            print(f"  Viewport {viewport['width']}x{viewport['height']}:")
+            for i in range(4):  # 4 requests each to trigger promotion at 3
+                try:
+                    r = await client.post(
+                        f"{BASE_URL}/crawl",
+                        json={
+                            "urls": [f"https://httpbin.org/json?v={idx}&r={i}"],
+                            "browser_config": {"viewport": viewport},
+                            "crawler_config": {}
+                        }
+                    )
+                    print(f"    {i+1}/4 ✅ {r.status_code} - Should see cold→hot after 3 uses")
+                except Exception as e:
+                    print(f"    {i+1}/4 ❌ {e}")
+                await asyncio.sleep(0.5)
+
+        # Phase 3: Concurrent burst (stress pool)
+        print("\n🔷 Phase 3: Concurrent burst (10 parallel)")
+        print("-" * 60)
+        tasks = []
+        for i in range(10):
+            tasks.append(
+                client.post(
+                    f"{BASE_URL}/crawl",
+                    json={"urls": [f"https://httpbin.org/delay/2?burst={i}"], "crawler_config": {}}
+                )
+            )
+
+        print("  Sending 10 concurrent requests...")
+        start = time.time()
+        results = await asyncio.gather(*tasks, return_exceptions=True)
+        elapsed = time.time() - start
+
+        successes = sum(1 for r in results if not isinstance(r, Exception) and r.status_code == 200)
+        print(f"  ✅ {successes}/10 succeeded in {elapsed:.2f}s")
+
+        # Phase 4: Multi-endpoint coverage
+        print("\n🔶 Phase 4: Testing multiple endpoints")
+        print("-" * 60)
+        endpoints = [
+            ("/md", {"url": "https://httpbin.org/html", "f": "fit", "c": "0"}),
+            ("/screenshot", {"url": "https://httpbin.org/html"}),
+            ("/pdf", {"url": "https://httpbin.org/html"}),
+        ]
+
+        for endpoint, payload in endpoints:
+            print(f"  Testing {endpoint}...")
+            try:
+                if endpoint == "/md":
+                    r = await client.post(f"{BASE_URL}{endpoint}", json=payload)
+                else:
+                    r = await client.post(f"{BASE_URL}{endpoint}", json=payload)
+                print(f"  ✅ {r.status_code}")
+            except Exception as e:
+                print(f"  ❌ {e}")
+            await asyncio.sleep(1)
+
+        # Phase 5: Intentional error (to populate errors tab)
+        print("\n🔷 Phase 5: Generating error examples")
+        print("-" * 60)
+        print("  Triggering invalid URL error...")
+        try:
+            r = await client.post(
+                f"{BASE_URL}/crawl",
+                json={"urls": ["invalid://bad-url"], "crawler_config": {}}
+            )
+            print(f"  Response: {r.status_code}")
+        except Exception as e:
+            print(f"  ✅ Error captured: {type(e).__name__}")
+
+        # Phase 6: Wait for janitor activity
+        print("\n🔶 Phase 6: Waiting for janitor cleanup...")
+        print("-" * 60)
+        print("  Idle for 40s to allow janitor to clean cold pool browsers...")
+        for i in range(40, 0, -10):
+            print(f"  {i}s remaining... (Check dashboard for cleanup events)")
+            await asyncio.sleep(10)
+
+        # Phase 7: Final stats check
+        print("\n🔷 Phase 7: Final dashboard state")
+        print("-" * 60)
+
+        r = await client.get(f"{BASE_URL}/monitor/health")
+        health = r.json()
+        print(f"  Memory: {health['container']['memory_percent']:.1f}%")
+        print(f"  Browsers: Perm={health['pool']['permanent']['active']}, "
+              f"Hot={health['pool']['hot']['count']}, Cold={health['pool']['cold']['count']}")
+
+        r = await client.get(f"{BASE_URL}/monitor/endpoints/stats")
+        stats = r.json()
+        print(f"\n  Endpoint Stats:")
+        for endpoint, data in stats.items():
+            print(f"    {endpoint}: {data['count']} req, "
+                  f"{data['avg_latency_ms']:.0f}ms avg, "
+                  f"{data['success_rate_percent']:.1f}% success")
+
+        r = await client.get(f"{BASE_URL}/monitor/browsers")
+        browsers = r.json()
+        print(f"\n  Pool Efficiency:")
+        print(f"    Total browsers: {browsers['summary']['total_count']}")
+        print(f"    Memory usage: {browsers['summary']['total_memory_mb']} MB")
+        print(f"    Reuse rate: {browsers['summary']['reuse_rate_percent']:.1f}%")
+
+        print("\n" + "=" * 60)
+        print("✅ Demo complete! Dashboard is now populated with rich data.")
+        print(f"\n📹 Recording tip: Refresh {BASE_URL}/dashboard")
+        print("   You should see:")
+        print("   • Active & completed requests")
+        print("   • Browser pool (permanent + hot/cold)")
+        print("   • Janitor cleanup events")
+        print("   • Endpoint analytics")
+        print("   • Memory timeline")
+
+if __name__ == "__main__":
+    try:
+        asyncio.run(demo_dashboard())
+    except KeyboardInterrupt:
+        print("\n\n⚠️ Demo interrupted by user")
+    except Exception as e:
+        print(f"\n\n❌ Demo failed: {e}")
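Phase 2 of the demo relies on the pool promoting a variant browser from cold to hot after it has served enough requests. A hedged sketch of that promotion rule; the threshold of 3 and the pool names are assumptions taken from the demo's own comments ("Should see cold→hot after 3 uses"), not the real `crawler_pool` implementation:

```python
# Hypothetical cold→hot promotion rule, keyed by browser config signature.
PROMOTION_THRESHOLD = 3


def record_use(pools: dict, sig: str) -> str:
    """Count a use of the browser with signature `sig`; return its tier."""
    pools.setdefault("uses", {})
    pools.setdefault("hot", set())
    pools["uses"][sig] = pools["uses"].get(sig, 0) + 1
    if pools["uses"][sig] >= PROMOTION_THRESHOLD:
        pools["hot"].add(sig)  # promoted: janitor leaves hot browsers alone longer
        return "hot"
    return "cold"


pools = {}
states = [record_use(pools, "viewport-1280x720") for _ in range(4)]
assert states == ["cold", "cold", "hot", "hot"]
```

This is why the demo sends 4 requests per viewport: the third request crosses the threshold, and the fourth confirms the browser stays hot.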
2  deploy/docker/tests/requirements.txt  Normal file
@@ -0,0 +1,2 @@
+httpx>=0.25.0
+docker>=7.0.0
138  deploy/docker/tests/test_1_basic.py  Executable file
@@ -0,0 +1,138 @@
+#!/usr/bin/env python3
+"""
+Test 1: Basic Container Health + Single Endpoint
+- Starts container
+- Hits /health endpoint 10 times
+- Reports success rate and basic latency
+"""
+import asyncio
+import time
+import docker
+import httpx
+
+# Config
+IMAGE = "crawl4ai-local:latest"
+CONTAINER_NAME = "crawl4ai-test"
+PORT = 11235
+REQUESTS = 10
+
+async def test_endpoint(url: str, count: int):
+    """Hit endpoint multiple times, return stats."""
+    results = []
+    async with httpx.AsyncClient(timeout=30.0) as client:
+        for i in range(count):
+            start = time.time()
+            try:
+                resp = await client.get(url)
+                elapsed = (time.time() - start) * 1000  # ms
+                results.append({
+                    "success": resp.status_code == 200,
+                    "latency_ms": elapsed,
+                    "status": resp.status_code
+                })
+                print(f"  [{i+1}/{count}] ✓ {resp.status_code} - {elapsed:.0f}ms")
+            except Exception as e:
+                results.append({
+                    "success": False,
+                    "latency_ms": None,
+                    "error": str(e)
+                })
+                print(f"  [{i+1}/{count}] ✗ Error: {e}")
+    return results
+
+def start_container(client, image: str, name: str, port: int):
+    """Start container, return container object."""
+    # Clean up existing
+    try:
+        old = client.containers.get(name)
+        print(f"🧹 Stopping existing container '{name}'...")
+        old.stop()
+        old.remove()
+    except docker.errors.NotFound:
+        pass
+
+    print(f"🚀 Starting container '{name}' from image '{image}'...")
+    container = client.containers.run(
+        image,
+        name=name,
+        ports={f"{port}/tcp": port},
+        detach=True,
+        shm_size="1g",
+        environment={"PYTHON_ENV": "production"}
+    )
+
+    # Wait for health
+    print(f"⏳ Waiting for container to be healthy...")
+    for _ in range(30):  # 30s timeout
+        time.sleep(1)
+        container.reload()
+        if container.status == "running":
+            try:
+                # Quick health check
+                import requests
+                resp = requests.get(f"http://localhost:{port}/health", timeout=2)
+                if resp.status_code == 200:
+                    print(f"✅ Container healthy!")
+                    return container
+            except:
+                pass
+    raise TimeoutError("Container failed to start")
+
+def stop_container(container):
+    """Stop and remove container."""
+    print(f"🛑 Stopping container...")
+    container.stop()
+    container.remove()
+    print(f"✅ Container removed")
+
+async def main():
+    print("="*60)
+    print("TEST 1: Basic Container Health + Single Endpoint")
+    print("="*60)
+
+    client = docker.from_env()
+    container = None
+
+    try:
+        # Start container
+        container = start_container(client, IMAGE, CONTAINER_NAME, PORT)
+
+        # Test /health endpoint
+        print(f"\n📊 Testing /health endpoint ({REQUESTS} requests)...")
+        url = f"http://localhost:{PORT}/health"
+        results = await test_endpoint(url, REQUESTS)
+
+        # Calculate stats
+        successes = sum(1 for r in results if r["success"])
+        success_rate = (successes / len(results)) * 100
+        latencies = [r["latency_ms"] for r in results if r["latency_ms"] is not None]
+        avg_latency = sum(latencies) / len(latencies) if latencies else 0
+
+        # Print results
+        print(f"\n{'='*60}")
+        print(f"RESULTS:")
+        print(f"  Success Rate: {success_rate:.1f}% ({successes}/{len(results)})")
+        print(f"  Avg Latency: {avg_latency:.0f}ms")
+        if latencies:
+            print(f"  Min Latency: {min(latencies):.0f}ms")
+            print(f"  Max Latency: {max(latencies):.0f}ms")
+        print(f"{'='*60}")
+
+        # Pass/Fail
+        if success_rate >= 100:
+            print(f"✅ TEST PASSED")
+            return 0
+        else:
+            print(f"❌ TEST FAILED (expected 100% success rate)")
+            return 1
+
+    except Exception as e:
+        print(f"\n❌ TEST ERROR: {e}")
+        return 1
+    finally:
+        if container:
+            stop_container(container)
+
+if __name__ == "__main__":
+    exit_code = asyncio.run(main())
+    exit(exit_code)
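The test above reports only average, min, and max latency, which hides tail behavior under load. A hedged sketch of adding a p95 figure with the standard library only; this helper is not part of the committed test, just a possible extension:

```python
import statistics


def p95(latencies_ms: list[float]) -> float:
    """95th-percentile latency via stdlib quantiles; 0.0 for empty input."""
    if not latencies_ms:
        return 0.0
    if len(latencies_ms) == 1:
        return latencies_ms[0]
    # n=20 yields 19 cut points; the last one is the 95th percentile.
    return statistics.quantiles(latencies_ms, n=20, method="inclusive")[-1]


print(f"  p95 Latency: {p95([10.0, 12.0, 11.0, 13.0, 300.0]):.0f}ms")
```

Reporting p95 alongside the average makes a single slow cold-start request visible instead of being averaged away across 10 fast ones.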
205  deploy/docker/tests/test_2_memory.py  Executable file
@@ -0,0 +1,205 @@
+#!/usr/bin/env python3
+"""
+Test 2: Docker Stats Monitoring
+- Extends Test 1 with real-time container stats
+- Monitors memory % and CPU during requests
+- Reports baseline, peak, and final memory
+"""
+import asyncio
+import time
+import docker
+import httpx
+from threading import Thread, Event
+
+# Config
+IMAGE = "crawl4ai-local:latest"
+CONTAINER_NAME = "crawl4ai-test"
+PORT = 11235
+REQUESTS = 20  # More requests to see memory usage
+
+# Stats tracking
+stats_history = []
+stop_monitoring = Event()
+
+def monitor_stats(container):
+    """Background thread to collect container stats."""
+    for stat in container.stats(decode=True, stream=True):
+        if stop_monitoring.is_set():
+            break
+
+        try:
+            # Extract memory stats
+            mem_usage = stat['memory_stats'].get('usage', 0) / (1024 * 1024)  # MB
+            mem_limit = stat['memory_stats'].get('limit', 1) / (1024 * 1024)
+            mem_percent = (mem_usage / mem_limit * 100) if mem_limit > 0 else 0
+
+            # Extract CPU stats (handle missing fields on Mac)
+            cpu_percent = 0
+            try:
+                cpu_delta = stat['cpu_stats']['cpu_usage']['total_usage'] - \
+                            stat['precpu_stats']['cpu_usage']['total_usage']
+                system_delta = stat['cpu_stats'].get('system_cpu_usage', 0) - \
+                               stat['precpu_stats'].get('system_cpu_usage', 0)
+                if system_delta > 0:
+                    num_cpus = stat['cpu_stats'].get('online_cpus', 1)
+                    cpu_percent = (cpu_delta / system_delta * num_cpus * 100.0)
+            except (KeyError, ZeroDivisionError):
+                pass
+
+            stats_history.append({
+                'timestamp': time.time(),
+                'memory_mb': mem_usage,
+                'memory_percent': mem_percent,
+                'cpu_percent': cpu_percent
+            })
+        except Exception as e:
+            # Skip malformed stats
+            pass
+
+        time.sleep(0.5)  # Sample every 500ms
+
+async def test_endpoint(url: str, count: int):
+    """Hit endpoint, return stats."""
+    results = []
+    async with httpx.AsyncClient(timeout=30.0) as client:
+        for i in range(count):
+            start = time.time()
+            try:
+                resp = await client.get(url)
+                elapsed = (time.time() - start) * 1000
+                results.append({
+                    "success": resp.status_code == 200,
+                    "latency_ms": elapsed,
+                })
+                if (i + 1) % 5 == 0:  # Print every 5 requests
+                    print(f"  [{i+1}/{count}] ✓ {resp.status_code} - {elapsed:.0f}ms")
+            except Exception as e:
+                results.append({"success": False, "error": str(e)})
+                print(f"  [{i+1}/{count}] ✗ Error: {e}")
+    return results
+
+def start_container(client, image: str, name: str, port: int):
+    """Start container."""
+    try:
+        old = client.containers.get(name)
+        print(f"🧹 Stopping existing container '{name}'...")
+        old.stop()
+        old.remove()
+    except docker.errors.NotFound:
+        pass
+
+    print(f"🚀 Starting container '{name}'...")
+    container = client.containers.run(
+        image,
+        name=name,
+        ports={f"{port}/tcp": port},
+        detach=True,
+        shm_size="1g",
+        mem_limit="4g",  # Set explicit memory limit
+    )
+
+    print(f"⏳ Waiting for health...")
+    for _ in range(30):
+        time.sleep(1)
+        container.reload()
+        if container.status == "running":
+            try:
+                import requests
+                resp = requests.get(f"http://localhost:{port}/health", timeout=2)
+                if resp.status_code == 200:
+                    print(f"✅ Container healthy!")
+                    return container
+            except:
+                pass
+    raise TimeoutError("Container failed to start")
+
+def stop_container(container):
+    """Stop container."""
+    print(f"🛑 Stopping container...")
+    container.stop()
+    container.remove()
+
+async def main():
+    print("="*60)
+    print("TEST 2: Docker Stats Monitoring")
+    print("="*60)
+
+    client = docker.from_env()
+    container = None
+    monitor_thread = None
+
+    try:
+        # Start container
+        container = start_container(client, IMAGE, CONTAINER_NAME, PORT)
+
+        # Start stats monitoring in background
+        print(f"\n📊 Starting stats monitor...")
+        stop_monitoring.clear()
+        stats_history.clear()
+        monitor_thread = Thread(target=monitor_stats, args=(container,), daemon=True)
+        monitor_thread.start()
+
+        # Wait a bit for baseline
+        await asyncio.sleep(2)
+        baseline_mem = stats_history[-1]['memory_mb'] if stats_history else 0
+        print(f"📏 Baseline memory: {baseline_mem:.1f} MB")
+
+        # Test /health endpoint
+        print(f"\n🔄 Running {REQUESTS} requests to /health...")
+        url = f"http://localhost:{PORT}/health"
+        results = await test_endpoint(url, REQUESTS)
+
+        # Wait a bit to capture peak
+        await asyncio.sleep(1)
+
+        # Stop monitoring
+        stop_monitoring.set()
+        if monitor_thread:
+            monitor_thread.join(timeout=2)
+
+        # Calculate stats
+        successes = sum(1 for r in results if r.get("success"))
+        success_rate = (successes / len(results)) * 100
+        latencies = [r["latency_ms"] for r in results if "latency_ms" in r]
+        avg_latency = sum(latencies) / len(latencies) if latencies else 0
+
+        # Memory stats
+        memory_samples = [s['memory_mb'] for s in stats_history]
+        peak_mem = max(memory_samples) if memory_samples else 0
+        final_mem = memory_samples[-1] if memory_samples else 0
+        mem_delta = final_mem - baseline_mem
+
+        # Print results
+        print(f"\n{'='*60}")
+        print(f"RESULTS:")
+        print(f"  Success Rate: {success_rate:.1f}% ({successes}/{len(results)})")
+        print(f"  Avg Latency: {avg_latency:.0f}ms")
+        print(f"\n  Memory Stats:")
+        print(f"    Baseline: {baseline_mem:.1f} MB")
+        print(f"    Peak:     {peak_mem:.1f} MB")
+        print(f"    Final:    {final_mem:.1f} MB")
+        print(f"    Delta:    {mem_delta:+.1f} MB")
+    print(f"{'='*60}")
+
+        # Pass/Fail
+        if success_rate >= 100 and mem_delta < 100:  # No significant memory growth
+            print(f"✅ TEST PASSED")
+            return 0
+        else:
+            if success_rate < 100:
+                print(f"❌ TEST FAILED (success rate < 100%)")
+            if mem_delta >= 100:
+                print(f"⚠️ WARNING: Memory grew by {mem_delta:.1f} MB")
+            return 1
+
+    except Exception as e:
+        print(f"\n❌ TEST ERROR: {e}")
+        return 1
+    finally:
+        stop_monitoring.set()
+        if container:
+            stop_container(container)
+
+if __name__ == "__main__":
+    exit_code = asyncio.run(main())
+    exit(exit_code)
229
deploy/docker/tests/test_3_pool.py
Executable file
@@ -0,0 +1,229 @@
#!/usr/bin/env python3
"""
Test 3: Pool Validation - Permanent Browser Reuse
- Tests /html endpoint (should use permanent browser)
- Monitors container logs for pool hit markers
- Validates browser reuse rate
- Checks memory after browser creation
"""
import asyncio
import time
import docker
import httpx
from threading import Thread, Event

# Config
IMAGE = "crawl4ai-local:latest"
CONTAINER_NAME = "crawl4ai-test"
PORT = 11235
REQUESTS = 30

# Stats tracking
stats_history = []
stop_monitoring = Event()

def monitor_stats(container):
    """Background stats collector."""
    for stat in container.stats(decode=True, stream=True):
        if stop_monitoring.is_set():
            break
        try:
            mem_usage = stat['memory_stats'].get('usage', 0) / (1024 * 1024)
            stats_history.append({
                'timestamp': time.time(),
                'memory_mb': mem_usage,
            })
        except Exception:
            pass
        time.sleep(0.5)

def count_log_markers(container):
    """Extract pool usage markers from logs."""
    logs = container.logs().decode('utf-8')

    permanent_hits = logs.count("🔥 Using permanent browser")
    hot_hits = logs.count("♨️ Using hot pool browser")
    cold_hits = logs.count("❄️ Using cold pool browser")
    new_created = logs.count("🆕 Creating new browser")

    return {
        'permanent_hits': permanent_hits,
        'hot_hits': hot_hits,
        'cold_hits': cold_hits,
        'new_created': new_created,
        'total_hits': permanent_hits + hot_hits + cold_hits
    }

async def test_endpoint(url: str, count: int):
    """Hit endpoint multiple times."""
    results = []
    async with httpx.AsyncClient(timeout=60.0) as client:
        for i in range(count):
            start = time.time()
            try:
                resp = await client.post(url, json={"url": "https://httpbin.org/html"})
                elapsed = (time.time() - start) * 1000
                results.append({
                    "success": resp.status_code == 200,
                    "latency_ms": elapsed,
                })
                if (i + 1) % 10 == 0:
                    print(f"  [{i+1}/{count}] ✓ {resp.status_code} - {elapsed:.0f}ms")
            except Exception as e:
                results.append({"success": False, "error": str(e)})
                print(f"  [{i+1}/{count}] ✗ Error: {e}")
    return results

def start_container(client, image: str, name: str, port: int):
    """Start container."""
    try:
        old = client.containers.get(name)
        print(f"🧹 Stopping existing container...")
        old.stop()
        old.remove()
    except docker.errors.NotFound:
        pass

    print(f"🚀 Starting container...")
    container = client.containers.run(
        image,
        name=name,
        ports={f"{port}/tcp": port},
        detach=True,
        shm_size="1g",
        mem_limit="4g",
    )

    print(f"⏳ Waiting for health...")
    for _ in range(30):
        time.sleep(1)
        container.reload()
        if container.status == "running":
            try:
                import requests
                resp = requests.get(f"http://localhost:{port}/health", timeout=2)
                if resp.status_code == 200:
                    print(f"✅ Container healthy!")
                    return container
            except Exception:
                pass
    raise TimeoutError("Container failed to start")

def stop_container(container):
    """Stop container."""
    print(f"🛑 Stopping container...")
    container.stop()
    container.remove()

async def main():
    print("="*60)
    print("TEST 3: Pool Validation - Permanent Browser Reuse")
    print("="*60)

    client = docker.from_env()
    container = None
    monitor_thread = None

    try:
        # Start container
        container = start_container(client, IMAGE, CONTAINER_NAME, PORT)

        # Wait for permanent browser initialization
        print(f"\n⏳ Waiting for permanent browser init (3s)...")
        await asyncio.sleep(3)

        # Start stats monitoring
        print(f"📊 Starting stats monitor...")
        stop_monitoring.clear()
        stats_history.clear()
        monitor_thread = Thread(target=monitor_stats, args=(container,), daemon=True)
        monitor_thread.start()

        await asyncio.sleep(1)
        baseline_mem = stats_history[-1]['memory_mb'] if stats_history else 0
        print(f"📏 Baseline (with permanent browser): {baseline_mem:.1f} MB")

        # Test /html endpoint (uses permanent browser for default config)
        print(f"\n🔄 Running {REQUESTS} requests to /html...")
        url = f"http://localhost:{PORT}/html"
        results = await test_endpoint(url, REQUESTS)

        # Wait a bit
        await asyncio.sleep(1)

        # Stop monitoring
        stop_monitoring.set()
        if monitor_thread:
            monitor_thread.join(timeout=2)

        # Analyze logs for pool markers
        print(f"\n📋 Analyzing pool usage...")
        pool_stats = count_log_markers(container)

        # Calculate request stats
        successes = sum(1 for r in results if r.get("success"))
        success_rate = (successes / len(results)) * 100
        latencies = [r["latency_ms"] for r in results if "latency_ms" in r]
        avg_latency = sum(latencies) / len(latencies) if latencies else 0

        # Memory stats
        memory_samples = [s['memory_mb'] for s in stats_history]
        peak_mem = max(memory_samples) if memory_samples else 0
        final_mem = memory_samples[-1] if memory_samples else 0
        mem_delta = final_mem - baseline_mem

        # Calculate reuse rate
        total_requests = len(results)
        total_pool_hits = pool_stats['total_hits']
        reuse_rate = (total_pool_hits / total_requests * 100) if total_requests > 0 else 0

        # Print results
        print(f"\n{'='*60}")
        print(f"RESULTS:")
        print(f"  Success Rate: {success_rate:.1f}% ({successes}/{len(results)})")
        print(f"  Avg Latency: {avg_latency:.0f}ms")
        print(f"\n  Pool Stats:")
        print(f"    🔥 Permanent Hits: {pool_stats['permanent_hits']}")
        print(f"    ♨️ Hot Pool Hits: {pool_stats['hot_hits']}")
        print(f"    ❄️ Cold Pool Hits: {pool_stats['cold_hits']}")
        print(f"    🆕 New Created: {pool_stats['new_created']}")
        print(f"    📊 Reuse Rate: {reuse_rate:.1f}%")
        print(f"\n  Memory Stats:")
        print(f"    Baseline: {baseline_mem:.1f} MB")
        print(f"    Peak: {peak_mem:.1f} MB")
        print(f"    Final: {final_mem:.1f} MB")
        print(f"    Delta: {mem_delta:+.1f} MB")
        print(f"{'='*60}")

        # Pass/Fail
        passed = True
        if success_rate < 100:
            print(f"❌ FAIL: Success rate {success_rate:.1f}% < 100%")
            passed = False
        if reuse_rate < 80:
            print(f"❌ FAIL: Reuse rate {reuse_rate:.1f}% < 80% (expected high permanent browser usage)")
            passed = False
        if pool_stats['permanent_hits'] < (total_requests * 0.8):
            print(f"⚠️ WARNING: Only {pool_stats['permanent_hits']} permanent hits out of {total_requests} requests")
        if mem_delta > 200:
            print(f"⚠️ WARNING: Memory grew by {mem_delta:.1f} MB (possible browser leak)")

        if passed:
            print(f"✅ TEST PASSED")
            return 0
        else:
            return 1

    except Exception as e:
        print(f"\n❌ TEST ERROR: {e}")
        import traceback
        traceback.print_exc()
        return 1
    finally:
        stop_monitoring.set()
        if container:
            stop_container(container)


if __name__ == "__main__":
    exit_code = asyncio.run(main())
    exit(exit_code)
236
deploy/docker/tests/test_4_concurrent.py
Executable file
@@ -0,0 +1,236 @@
#!/usr/bin/env python3
"""
Test 4: Concurrent Load Testing
- Tests pool under concurrent load
- Escalates: 10 → 50 → 100 concurrent requests
- Validates latency distribution (P50, P95, P99)
- Monitors memory stability
"""
import asyncio
import time
import docker
import httpx
from threading import Thread, Event
from collections import defaultdict

# Config
IMAGE = "crawl4ai-local:latest"
CONTAINER_NAME = "crawl4ai-test"
PORT = 11235
LOAD_LEVELS = [
    {"name": "Light", "concurrent": 10, "requests": 20},
    {"name": "Medium", "concurrent": 50, "requests": 100},
    {"name": "Heavy", "concurrent": 100, "requests": 200},
]

# Stats
stats_history = []
stop_monitoring = Event()

def monitor_stats(container):
    """Background stats collector."""
    for stat in container.stats(decode=True, stream=True):
        if stop_monitoring.is_set():
            break
        try:
            mem_usage = stat['memory_stats'].get('usage', 0) / (1024 * 1024)
            stats_history.append({'timestamp': time.time(), 'memory_mb': mem_usage})
        except Exception:
            pass
        time.sleep(0.5)

def count_log_markers(container):
    """Extract pool markers."""
    logs = container.logs().decode('utf-8')
    return {
        'permanent': logs.count("🔥 Using permanent browser"),
        'hot': logs.count("♨️ Using hot pool browser"),
        'cold': logs.count("❄️ Using cold pool browser"),
        'new': logs.count("🆕 Creating new browser"),
    }

async def hit_endpoint(client, url, payload, semaphore):
    """Single request with concurrency control."""
    async with semaphore:
        start = time.time()
        try:
            resp = await client.post(url, json=payload, timeout=60.0)
            elapsed = (time.time() - start) * 1000
            return {"success": resp.status_code == 200, "latency_ms": elapsed}
        except Exception as e:
            return {"success": False, "error": str(e)}

async def run_concurrent_test(url, payload, concurrent, total_requests):
    """Run concurrent requests."""
    semaphore = asyncio.Semaphore(concurrent)
    async with httpx.AsyncClient() as client:
        tasks = [hit_endpoint(client, url, payload, semaphore) for _ in range(total_requests)]
        results = await asyncio.gather(*tasks)
    return results

def calculate_percentiles(latencies):
    """Calculate P50, P95, P99."""
    if not latencies:
        return 0, 0, 0
    sorted_lat = sorted(latencies)
    n = len(sorted_lat)
    return (
        sorted_lat[int(n * 0.50)],
        sorted_lat[int(n * 0.95)],
        sorted_lat[int(n * 0.99)],
    )

def start_container(client, image, name, port):
    """Start container."""
    try:
        old = client.containers.get(name)
        print(f"🧹 Stopping existing container...")
        old.stop()
        old.remove()
    except docker.errors.NotFound:
        pass

    print(f"🚀 Starting container...")
    container = client.containers.run(
        image, name=name, ports={f"{port}/tcp": port},
        detach=True, shm_size="1g", mem_limit="4g",
    )

    print(f"⏳ Waiting for health...")
    for _ in range(30):
        time.sleep(1)
        container.reload()
        if container.status == "running":
            try:
                import requests
                if requests.get(f"http://localhost:{port}/health", timeout=2).status_code == 200:
                    print(f"✅ Container healthy!")
                    return container
            except Exception:
                pass
    raise TimeoutError("Container failed to start")

async def main():
    print("="*60)
    print("TEST 4: Concurrent Load Testing")
    print("="*60)

    client = docker.from_env()
    container = None
    monitor_thread = None

    try:
        container = start_container(client, IMAGE, CONTAINER_NAME, PORT)

        print(f"\n⏳ Waiting for permanent browser init (3s)...")
        await asyncio.sleep(3)

        # Start monitoring
        stop_monitoring.clear()
        stats_history.clear()
        monitor_thread = Thread(target=monitor_stats, args=(container,), daemon=True)
        monitor_thread.start()

        await asyncio.sleep(1)
        baseline_mem = stats_history[-1]['memory_mb'] if stats_history else 0
        print(f"📏 Baseline: {baseline_mem:.1f} MB\n")

        url = f"http://localhost:{PORT}/html"
        payload = {"url": "https://httpbin.org/html"}

        all_results = []
        level_stats = []

        # Run load levels
        for level in LOAD_LEVELS:
            print(f"{'='*60}")
            print(f"🔄 {level['name']} Load: {level['concurrent']} concurrent, {level['requests']} total")
            print(f"{'='*60}")

            start_time = time.time()
            results = await run_concurrent_test(url, payload, level['concurrent'], level['requests'])
            duration = time.time() - start_time

            successes = sum(1 for r in results if r.get("success"))
            success_rate = (successes / len(results)) * 100
            latencies = [r["latency_ms"] for r in results if "latency_ms" in r]
            p50, p95, p99 = calculate_percentiles(latencies)
            avg_lat = sum(latencies) / len(latencies) if latencies else 0

            print(f"  Duration: {duration:.1f}s")
            print(f"  Success: {success_rate:.1f}% ({successes}/{len(results)})")
            print(f"  Avg Latency: {avg_lat:.0f}ms")
            print(f"  P50/P95/P99: {p50:.0f}ms / {p95:.0f}ms / {p99:.0f}ms")

            level_stats.append({
                'name': level['name'],
                'concurrent': level['concurrent'],
                'success_rate': success_rate,
                'avg_latency': avg_lat,
                'p50': p50, 'p95': p95, 'p99': p99,
            })
            all_results.extend(results)

            await asyncio.sleep(2)  # Cool down between levels

        # Stop monitoring
        await asyncio.sleep(1)
        stop_monitoring.set()
        if monitor_thread:
            monitor_thread.join(timeout=2)

        # Final stats
        pool_stats = count_log_markers(container)
        memory_samples = [s['memory_mb'] for s in stats_history]
        peak_mem = max(memory_samples) if memory_samples else 0
        final_mem = memory_samples[-1] if memory_samples else 0

        print(f"\n{'='*60}")
        print(f"FINAL RESULTS:")
        print(f"{'='*60}")
        print(f"  Total Requests: {len(all_results)}")
        print(f"\n  Pool Utilization:")
        print(f"    🔥 Permanent: {pool_stats['permanent']}")
        print(f"    ♨️ Hot: {pool_stats['hot']}")
        print(f"    ❄️ Cold: {pool_stats['cold']}")
        print(f"    🆕 New: {pool_stats['new']}")
        print(f"\n  Memory:")
        print(f"    Baseline: {baseline_mem:.1f} MB")
        print(f"    Peak: {peak_mem:.1f} MB")
        print(f"    Final: {final_mem:.1f} MB")
        print(f"    Delta: {final_mem - baseline_mem:+.1f} MB")
        print(f"{'='*60}")

        # Pass/Fail
        passed = True
        for ls in level_stats:
            if ls['success_rate'] < 99:
                print(f"❌ FAIL: {ls['name']} success rate {ls['success_rate']:.1f}% < 99%")
                passed = False
            if ls['p99'] > 10000:  # 10s threshold
                print(f"⚠️ WARNING: {ls['name']} P99 latency {ls['p99']:.0f}ms very high")

        if final_mem - baseline_mem > 300:
            print(f"⚠️ WARNING: Memory grew {final_mem - baseline_mem:.1f} MB")

        if passed:
            print(f"✅ TEST PASSED")
            return 0
        else:
            return 1

    except Exception as e:
        print(f"\n❌ TEST ERROR: {e}")
        import traceback
        traceback.print_exc()
        return 1
    finally:
        stop_monitoring.set()
        if container:
            print(f"🛑 Stopping container...")
            container.stop()
            container.remove()

if __name__ == "__main__":
    exit_code = asyncio.run(main())
    exit(exit_code)
267
deploy/docker/tests/test_5_pool_stress.py
Executable file
@@ -0,0 +1,267 @@
#!/usr/bin/env python3
"""
Test 5: Pool Stress - Mixed Configs
- Tests hot/cold pool with different browser configs
- Uses different viewports to create config variants
- Validates cold → hot promotion after 3 uses
- Monitors pool tier distribution
"""
import asyncio
import time
import docker
import httpx
from threading import Thread, Event
import random

# Config
IMAGE = "crawl4ai-local:latest"
CONTAINER_NAME = "crawl4ai-test"
PORT = 11235
REQUESTS_PER_CONFIG = 5  # 5 requests per config variant

# Different viewport configs to test pool tiers
VIEWPORT_CONFIGS = [
    None,  # Default (permanent browser)
    {"width": 1920, "height": 1080},  # Desktop
    {"width": 1024, "height": 768},  # Tablet
    {"width": 375, "height": 667},  # Mobile
]

# Stats
stats_history = []
stop_monitoring = Event()

def monitor_stats(container):
    """Background stats collector."""
    for stat in container.stats(decode=True, stream=True):
        if stop_monitoring.is_set():
            break
        try:
            mem_usage = stat['memory_stats'].get('usage', 0) / (1024 * 1024)
            stats_history.append({'timestamp': time.time(), 'memory_mb': mem_usage})
        except Exception:
            pass
        time.sleep(0.5)

def analyze_pool_logs(container):
    """Extract detailed pool stats from logs."""
    logs = container.logs().decode('utf-8')

    permanent = logs.count("🔥 Using permanent browser")
    hot = logs.count("♨️ Using hot pool browser")
    cold = logs.count("❄️ Using cold pool browser")
    new = logs.count("🆕 Creating new browser")
    promotions = logs.count("⬆️ Promoting to hot pool")

    return {
        'permanent': permanent,
        'hot': hot,
        'cold': cold,
        'new': new,
        'promotions': promotions,
        'total': permanent + hot + cold
    }

async def crawl_with_viewport(client, url, viewport):
    """Single request with specific viewport."""
    payload = {
        "urls": ["https://httpbin.org/html"],
        "browser_config": {},
        "crawler_config": {}
    }

    # Add viewport if specified
    if viewport:
        payload["browser_config"] = {
            "type": "BrowserConfig",
            "params": {
                "viewport": {"type": "dict", "value": viewport},
                "headless": True,
                "text_mode": True,
                "extra_args": [
                    "--no-sandbox",
                    "--disable-dev-shm-usage",
                    "--disable-gpu",
                    "--disable-software-rasterizer",
                    "--disable-web-security",
                    "--allow-insecure-localhost",
                    "--ignore-certificate-errors"
                ]
            }
        }

    start = time.time()
    try:
        resp = await client.post(url, json=payload, timeout=60.0)
        elapsed = (time.time() - start) * 1000
        return {"success": resp.status_code == 200, "latency_ms": elapsed, "viewport": viewport}
    except Exception as e:
        return {"success": False, "error": str(e), "viewport": viewport}

def start_container(client, image, name, port):
    """Start container."""
    try:
        old = client.containers.get(name)
        print(f"🧹 Stopping existing container...")
        old.stop()
        old.remove()
    except docker.errors.NotFound:
        pass

    print(f"🚀 Starting container...")
    container = client.containers.run(
        image, name=name, ports={f"{port}/tcp": port},
        detach=True, shm_size="1g", mem_limit="4g",
    )

    print(f"⏳ Waiting for health...")
    for _ in range(30):
        time.sleep(1)
        container.reload()
        if container.status == "running":
            try:
                import requests
                if requests.get(f"http://localhost:{port}/health", timeout=2).status_code == 200:
                    print(f"✅ Container healthy!")
                    return container
            except Exception:
                pass
    raise TimeoutError("Container failed to start")

async def main():
    print("="*60)
    print("TEST 5: Pool Stress - Mixed Configs")
    print("="*60)

    client = docker.from_env()
    container = None
    monitor_thread = None

    try:
        container = start_container(client, IMAGE, CONTAINER_NAME, PORT)

        print(f"\n⏳ Waiting for permanent browser init (3s)...")
        await asyncio.sleep(3)

        # Start monitoring
        stop_monitoring.clear()
        stats_history.clear()
        monitor_thread = Thread(target=monitor_stats, args=(container,), daemon=True)
        monitor_thread.start()

        await asyncio.sleep(1)
        baseline_mem = stats_history[-1]['memory_mb'] if stats_history else 0
        print(f"📏 Baseline: {baseline_mem:.1f} MB\n")

        url = f"http://localhost:{PORT}/crawl"

        print(f"Testing {len(VIEWPORT_CONFIGS)} different configs:")
        for i, vp in enumerate(VIEWPORT_CONFIGS):
            vp_str = "Default" if vp is None else f"{vp['width']}x{vp['height']}"
            print(f"  {i+1}. {vp_str}")
        print()

        # Run requests: repeat each config REQUESTS_PER_CONFIG times
        all_results = []
        config_sequence = []

        for _ in range(REQUESTS_PER_CONFIG):
            for viewport in VIEWPORT_CONFIGS:
                config_sequence.append(viewport)

        # Shuffle to mix configs
        random.shuffle(config_sequence)

        print(f"🔄 Running {len(config_sequence)} requests with mixed configs...")

        async with httpx.AsyncClient() as http_client:
            for i, viewport in enumerate(config_sequence):
                result = await crawl_with_viewport(http_client, url, viewport)
                all_results.append(result)

                if (i + 1) % 5 == 0:
                    vp_str = "default" if result['viewport'] is None else f"{result['viewport']['width']}x{result['viewport']['height']}"
                    status = "✓" if result.get('success') else "✗"
                    lat = f"{result.get('latency_ms', 0):.0f}ms" if 'latency_ms' in result else "error"
                    print(f"  [{i+1}/{len(config_sequence)}] {status} {vp_str} - {lat}")

        # Stop monitoring
        await asyncio.sleep(2)
        stop_monitoring.set()
        if monitor_thread:
            monitor_thread.join(timeout=2)

        # Analyze results
        pool_stats = analyze_pool_logs(container)

        successes = sum(1 for r in all_results if r.get("success"))
        success_rate = (successes / len(all_results)) * 100
        latencies = [r["latency_ms"] for r in all_results if "latency_ms" in r]
        avg_lat = sum(latencies) / len(latencies) if latencies else 0

        memory_samples = [s['memory_mb'] for s in stats_history]
        peak_mem = max(memory_samples) if memory_samples else 0
        final_mem = memory_samples[-1] if memory_samples else 0

        print(f"\n{'='*60}")
        print(f"RESULTS:")
        print(f"{'='*60}")
        print(f"  Requests: {len(all_results)}")
        print(f"  Success Rate: {success_rate:.1f}% ({successes}/{len(all_results)})")
        print(f"  Avg Latency: {avg_lat:.0f}ms")
        print(f"\n  Pool Statistics:")
        print(f"    🔥 Permanent: {pool_stats['permanent']}")
        print(f"    ♨️ Hot: {pool_stats['hot']}")
        print(f"    ❄️ Cold: {pool_stats['cold']}")
        print(f"    🆕 New: {pool_stats['new']}")
        print(f"    ⬆️ Promotions: {pool_stats['promotions']}")
        print(f"    📊 Reuse: {(pool_stats['total'] / len(all_results) * 100):.1f}%")
        print(f"\n  Memory:")
        print(f"    Baseline: {baseline_mem:.1f} MB")
        print(f"    Peak: {peak_mem:.1f} MB")
        print(f"    Final: {final_mem:.1f} MB")
        print(f"    Delta: {final_mem - baseline_mem:+.1f} MB")
        print(f"{'='*60}")

        # Pass/Fail
        passed = True

        if success_rate < 99:
            print(f"❌ FAIL: Success rate {success_rate:.1f}% < 99%")
            passed = False

        # Should see promotions since we repeat each config 5 times
        if pool_stats['promotions'] < (len(VIEWPORT_CONFIGS) - 1):  # -1 for default
            print(f"⚠️ WARNING: Only {pool_stats['promotions']} promotions (expected ~{len(VIEWPORT_CONFIGS)-1})")

        # Should have created some browsers for different configs
        if pool_stats['new'] == 0:
            print(f"⚠️ NOTE: No new browsers created (all used default?)")

        if pool_stats['permanent'] == len(all_results):
            print(f"⚠️ NOTE: All requests used permanent browser (configs not varying enough?)")

        if final_mem - baseline_mem > 500:
            print(f"⚠️ WARNING: Memory grew {final_mem - baseline_mem:.1f} MB")

        if passed:
            print(f"✅ TEST PASSED")
            return 0
        else:
            return 1

    except Exception as e:
        print(f"\n❌ TEST ERROR: {e}")
        import traceback
        traceback.print_exc()
        return 1
    finally:
        stop_monitoring.set()
        if container:
            print(f"🛑 Stopping container...")
            container.stop()
            container.remove()

if __name__ == "__main__":
    exit_code = asyncio.run(main())
    exit(exit_code)
234
deploy/docker/tests/test_6_multi_endpoint.py
Executable file
@@ -0,0 +1,234 @@
#!/usr/bin/env python3
"""
Test 6: Multi-Endpoint Testing
- Tests multiple endpoints together: /html, /screenshot, /pdf, /crawl
- Validates each endpoint works correctly
- Monitors success rates per endpoint
"""
import asyncio
import time
import docker
import httpx
from threading import Thread, Event

# Config
IMAGE = "crawl4ai-local:latest"
CONTAINER_NAME = "crawl4ai-test"
PORT = 11235
REQUESTS_PER_ENDPOINT = 10

# Stats
stats_history = []
stop_monitoring = Event()

def monitor_stats(container):
    """Background stats collector."""
    for stat in container.stats(decode=True, stream=True):
        if stop_monitoring.is_set():
            break
        try:
            mem_usage = stat['memory_stats'].get('usage', 0) / (1024 * 1024)
            stats_history.append({'timestamp': time.time(), 'memory_mb': mem_usage})
        except Exception:
            pass
        time.sleep(0.5)

async def test_html(client, base_url, count):
    """Test /html endpoint."""
    url = f"{base_url}/html"
    results = []
    for _ in range(count):
        start = time.time()
        try:
            resp = await client.post(url, json={"url": "https://httpbin.org/html"}, timeout=30.0)
            elapsed = (time.time() - start) * 1000
            results.append({"success": resp.status_code == 200, "latency_ms": elapsed})
        except Exception as e:
            results.append({"success": False, "error": str(e)})
    return results

async def test_screenshot(client, base_url, count):
    """Test /screenshot endpoint."""
    url = f"{base_url}/screenshot"
    results = []
    for _ in range(count):
        start = time.time()
        try:
            resp = await client.post(url, json={"url": "https://httpbin.org/html"}, timeout=30.0)
            elapsed = (time.time() - start) * 1000
            results.append({"success": resp.status_code == 200, "latency_ms": elapsed})
        except Exception as e:
            results.append({"success": False, "error": str(e)})
    return results

async def test_pdf(client, base_url, count):
    """Test /pdf endpoint."""
    url = f"{base_url}/pdf"
    results = []
    for _ in range(count):
        start = time.time()
        try:
            resp = await client.post(url, json={"url": "https://httpbin.org/html"}, timeout=30.0)
            elapsed = (time.time() - start) * 1000
            results.append({"success": resp.status_code == 200, "latency_ms": elapsed})
        except Exception as e:
            results.append({"success": False, "error": str(e)})
    return results

async def test_crawl(client, base_url, count):
    """Test /crawl endpoint."""
    url = f"{base_url}/crawl"
    results = []
    payload = {
        "urls": ["https://httpbin.org/html"],
        "browser_config": {},
        "crawler_config": {}
    }
    for _ in range(count):
        start = time.time()
        try:
            resp = await client.post(url, json=payload, timeout=30.0)
            elapsed = (time.time() - start) * 1000
            results.append({"success": resp.status_code == 200, "latency_ms": elapsed})
        except Exception as e:
            results.append({"success": False, "error": str(e)})
    return results

def start_container(client, image, name, port):
    """Start container."""
    try:
        old = client.containers.get(name)
        print(f"🧹 Stopping existing container...")
        old.stop()
        old.remove()
    except docker.errors.NotFound:
        pass

    print(f"🚀 Starting container...")
    container = client.containers.run(
        image, name=name, ports={f"{port}/tcp": port},
        detach=True, shm_size="1g", mem_limit="4g",
    )

    print(f"⏳ Waiting for health...")
    for _ in range(30):
        time.sleep(1)
        container.reload()
        if container.status == "running":
            try:
                import requests
                if requests.get(f"http://localhost:{port}/health", timeout=2).status_code == 200:
                    print(f"✅ Container healthy!")
                    return container
            except Exception:
                pass
    raise TimeoutError("Container failed to start")

async def main():
    print("="*60)
    print("TEST 6: Multi-Endpoint Testing")
|
||||||
|
print("="*60)
|
||||||
|
|
||||||
|
client = docker.from_env()
|
||||||
|
container = None
|
||||||
|
monitor_thread = None
|
||||||
|
|
||||||
|
try:
|
||||||
|
container = start_container(client, IMAGE, CONTAINER_NAME, PORT)
|
||||||
|
|
||||||
|
print(f"\n⏳ Waiting for permanent browser init (3s)...")
|
||||||
|
await asyncio.sleep(3)
|
||||||
|
|
||||||
|
# Start monitoring
|
||||||
|
stop_monitoring.clear()
|
||||||
|
stats_history.clear()
|
||||||
|
monitor_thread = Thread(target=monitor_stats, args=(container,), daemon=True)
|
||||||
|
monitor_thread.start()
|
||||||
|
|
||||||
|
await asyncio.sleep(1)
|
||||||
|
baseline_mem = stats_history[-1]['memory_mb'] if stats_history else 0
|
||||||
|
print(f"📏 Baseline: {baseline_mem:.1f} MB\n")
|
||||||
|
|
||||||
|
base_url = f"http://localhost:{PORT}"
|
||||||
|
|
||||||
|
# Test each endpoint
|
||||||
|
endpoints = {
|
||||||
|
"/html": test_html,
|
||||||
|
"/screenshot": test_screenshot,
|
||||||
|
"/pdf": test_pdf,
|
||||||
|
"/crawl": test_crawl,
|
||||||
|
}
|
||||||
|
|
||||||
|
all_endpoint_stats = {}
|
||||||
|
|
||||||
|
async with httpx.AsyncClient() as http_client:
|
||||||
|
for endpoint_name, test_func in endpoints.items():
|
||||||
|
print(f"🔄 Testing {endpoint_name} ({REQUESTS_PER_ENDPOINT} requests)...")
|
||||||
|
results = await test_func(http_client, base_url, REQUESTS_PER_ENDPOINT)
|
||||||
|
|
||||||
|
successes = sum(1 for r in results if r.get("success"))
|
||||||
|
success_rate = (successes / len(results)) * 100
|
||||||
|
latencies = [r["latency_ms"] for r in results if "latency_ms" in r]
|
||||||
|
avg_lat = sum(latencies) / len(latencies) if latencies else 0
|
||||||
|
|
||||||
|
all_endpoint_stats[endpoint_name] = {
|
||||||
|
'success_rate': success_rate,
|
||||||
|
'avg_latency': avg_lat,
|
||||||
|
'total': len(results),
|
||||||
|
'successes': successes
|
||||||
|
}
|
||||||
|
|
||||||
|
print(f" ✓ Success: {success_rate:.1f}% ({successes}/{len(results)}), Avg: {avg_lat:.0f}ms")
|
||||||
|
|
||||||
|
# Stop monitoring
|
||||||
|
await asyncio.sleep(1)
|
||||||
|
stop_monitoring.set()
|
||||||
|
if monitor_thread:
|
||||||
|
monitor_thread.join(timeout=2)
|
||||||
|
|
||||||
|
# Final stats
|
||||||
|
memory_samples = [s['memory_mb'] for s in stats_history]
|
||||||
|
peak_mem = max(memory_samples) if memory_samples else 0
|
||||||
|
final_mem = memory_samples[-1] if memory_samples else 0
|
||||||
|
|
||||||
|
print(f"\n{'='*60}")
|
||||||
|
print(f"RESULTS:")
|
||||||
|
print(f"{'='*60}")
|
||||||
|
for endpoint, stats in all_endpoint_stats.items():
|
||||||
|
print(f" {endpoint:12} Success: {stats['success_rate']:5.1f}% Avg: {stats['avg_latency']:6.0f}ms")
|
||||||
|
|
||||||
|
print(f"\n Memory:")
|
||||||
|
print(f" Baseline: {baseline_mem:.1f} MB")
|
||||||
|
print(f" Peak: {peak_mem:.1f} MB")
|
||||||
|
print(f" Final: {final_mem:.1f} MB")
|
||||||
|
print(f" Delta: {final_mem - baseline_mem:+.1f} MB")
|
||||||
|
print(f"{'='*60}")
|
||||||
|
|
||||||
|
# Pass/Fail
|
||||||
|
passed = True
|
||||||
|
for endpoint, stats in all_endpoint_stats.items():
|
||||||
|
if stats['success_rate'] < 100:
|
||||||
|
print(f"❌ FAIL: {endpoint} success rate {stats['success_rate']:.1f}% < 100%")
|
||||||
|
passed = False
|
||||||
|
|
||||||
|
if passed:
|
||||||
|
print(f"✅ TEST PASSED")
|
||||||
|
return 0
|
||||||
|
else:
|
||||||
|
return 1
|
||||||
|
|
||||||
|
except Exception as e:
|
||||||
|
print(f"\n❌ TEST ERROR: {e}")
|
||||||
|
import traceback
|
||||||
|
traceback.print_exc()
|
||||||
|
return 1
|
||||||
|
finally:
|
||||||
|
stop_monitoring.set()
|
||||||
|
if container:
|
||||||
|
print(f"🛑 Stopping container...")
|
||||||
|
container.stop()
|
||||||
|
container.remove()
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
exit_code = asyncio.run(main())
|
||||||
|
exit(exit_code)
|
||||||
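The per-endpoint aggregation in `main()` above reduces to a few lines of arithmetic; a standalone sketch with a hypothetical results list (not tied to a live server):

```python
# Hypothetical per-endpoint results list, shaped like the dicts built by the testers
results = [
    {"success": True, "latency_ms": 120.0},
    {"success": True, "latency_ms": 180.0},
    {"success": False, "error": "timeout"},
]

# Same math as the test harness: success rate over all attempts,
# average latency over the requests that actually completed
successes = sum(1 for r in results if r.get("success"))
success_rate = successes / len(results) * 100
latencies = [r["latency_ms"] for r in results if "latency_ms" in r]
avg_lat = sum(latencies) / len(latencies) if latencies else 0

assert successes == 2
assert round(success_rate, 1) == 66.7
assert avg_lat == 150.0
```

Note that failed requests carry no `latency_ms` key, so they are excluded from the average rather than dragging it toward zero.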
199
deploy/docker/tests/test_7_cleanup.py
Executable file
@@ -0,0 +1,199 @@
#!/usr/bin/env python3
"""
Test 7: Cleanup Verification (Janitor)
- Creates load spike then goes idle
- Verifies memory returns to near baseline
- Tests janitor cleanup of idle browsers
- Monitors memory recovery time
"""
import asyncio
import time
import docker
import httpx
from threading import Thread, Event

# Config
IMAGE = "crawl4ai-local:latest"
CONTAINER_NAME = "crawl4ai-test"
PORT = 11235
SPIKE_REQUESTS = 20  # Create some browsers
IDLE_TIME = 90  # Wait 90s for janitor (runs every 60s)

# Stats
stats_history = []
stop_monitoring = Event()

def monitor_stats(container):
    """Background stats collector."""
    for stat in container.stats(decode=True, stream=True):
        if stop_monitoring.is_set():
            break
        try:
            mem_usage = stat['memory_stats'].get('usage', 0) / (1024 * 1024)
            stats_history.append({'timestamp': time.time(), 'memory_mb': mem_usage})
        except:
            pass
        time.sleep(1)  # Sample every 1s for this test

def start_container(client, image, name, port):
    """Start container."""
    try:
        old = client.containers.get(name)
        print("🧹 Stopping existing container...")
        old.stop()
        old.remove()
    except docker.errors.NotFound:
        pass

    print("🚀 Starting container...")
    container = client.containers.run(
        image, name=name, ports={f"{port}/tcp": port},
        detach=True, shm_size="1g", mem_limit="4g",
    )

    print("⏳ Waiting for health...")
    for _ in range(30):
        time.sleep(1)
        container.reload()
        if container.status == "running":
            try:
                import requests
                if requests.get(f"http://localhost:{port}/health", timeout=2).status_code == 200:
                    print("✅ Container healthy!")
                    return container
            except:
                pass
    raise TimeoutError("Container failed to start")

async def main():
    print("="*60)
    print("TEST 7: Cleanup Verification (Janitor)")
    print("="*60)

    client = docker.from_env()
    container = None
    monitor_thread = None

    try:
        container = start_container(client, IMAGE, CONTAINER_NAME, PORT)

        print("\n⏳ Waiting for permanent browser init (3s)...")
        await asyncio.sleep(3)

        # Start monitoring
        stop_monitoring.clear()
        stats_history.clear()
        monitor_thread = Thread(target=monitor_stats, args=(container,), daemon=True)
        monitor_thread.start()

        await asyncio.sleep(2)
        baseline_mem = stats_history[-1]['memory_mb'] if stats_history else 0
        print(f"📏 Baseline: {baseline_mem:.1f} MB\n")

        # Create load spike with different configs to populate pool
        print(f"🔥 Creating load spike ({SPIKE_REQUESTS} requests with varied configs)...")
        url = f"http://localhost:{PORT}/crawl"

        viewports = [
            {"width": 1920, "height": 1080},
            {"width": 1024, "height": 768},
            {"width": 375, "height": 667},
        ]

        async with httpx.AsyncClient(timeout=60.0) as http_client:
            tasks = []
            for i in range(SPIKE_REQUESTS):
                vp = viewports[i % len(viewports)]
                payload = {
                    "urls": ["https://httpbin.org/html"],
                    "browser_config": {
                        "type": "BrowserConfig",
                        "params": {
                            "viewport": {"type": "dict", "value": vp},
                            "headless": True,
                            "text_mode": True,
                            "extra_args": [
                                "--no-sandbox", "--disable-dev-shm-usage",
                                "--disable-gpu", "--disable-software-rasterizer",
                                "--disable-web-security", "--allow-insecure-localhost",
                                "--ignore-certificate-errors"
                            ]
                        }
                    },
                    "crawler_config": {}
                }
                tasks.append(http_client.post(url, json=payload))

            results = await asyncio.gather(*tasks, return_exceptions=True)
            successes = sum(1 for r in results if hasattr(r, 'status_code') and r.status_code == 200)
            print(f"  ✓ Spike completed: {successes}/{len(results)} successful")

        # Measure peak
        await asyncio.sleep(2)
        peak_mem = max([s['memory_mb'] for s in stats_history]) if stats_history else baseline_mem
        print(f"  📊 Peak memory: {peak_mem:.1f} MB (+{peak_mem - baseline_mem:.1f} MB)")

        # Now go idle and wait for janitor
        print(f"\n⏸️ Going idle for {IDLE_TIME}s (janitor cleanup)...")
        print("  (Janitor runs every 60s, checking for idle browsers)")

        for elapsed in range(0, IDLE_TIME, 10):
            await asyncio.sleep(10)
            current_mem = stats_history[-1]['memory_mb'] if stats_history else 0
            print(f"  [{elapsed+10:3d}s] Memory: {current_mem:.1f} MB")

        # Stop monitoring
        stop_monitoring.set()
        if monitor_thread:
            monitor_thread.join(timeout=2)

        # Analyze memory recovery
        final_mem = stats_history[-1]['memory_mb'] if stats_history else 0
        recovery_mb = peak_mem - final_mem
        recovery_pct = (recovery_mb / (peak_mem - baseline_mem) * 100) if (peak_mem - baseline_mem) > 0 else 0

        print(f"\n{'='*60}")
        print("RESULTS:")
        print(f"{'='*60}")
        print("  Memory Journey:")
        print(f"    Baseline:  {baseline_mem:.1f} MB")
        print(f"    Peak:      {peak_mem:.1f} MB (+{peak_mem - baseline_mem:.1f} MB)")
        print(f"    Final:     {final_mem:.1f} MB (+{final_mem - baseline_mem:.1f} MB)")
        print(f"    Recovered: {recovery_mb:.1f} MB ({recovery_pct:.1f}%)")
        print(f"{'='*60}")

        # Pass/Fail
        passed = True

        # Should have created some memory pressure
        if peak_mem - baseline_mem < 100:
            print(f"⚠️ WARNING: Peak increase only {peak_mem - baseline_mem:.1f} MB (expected more browsers)")

        # Should recover most memory (within 100MB of baseline)
        if final_mem - baseline_mem > 100:
            print(f"⚠️ WARNING: Memory didn't recover well (still +{final_mem - baseline_mem:.1f} MB above baseline)")
        else:
            print("✅ Good memory recovery!")

        # Baseline + 50MB tolerance
        if final_mem - baseline_mem < 50:
            print("✅ Excellent cleanup (within 50MB of baseline)")

        print("✅ TEST PASSED")
        return 0

    except Exception as e:
        print(f"\n❌ TEST ERROR: {e}")
        import traceback
        traceback.print_exc()
        return 1
    finally:
        stop_monitoring.set()
        if container:
            print("🛑 Stopping container...")
            container.stop()
            container.remove()

if __name__ == "__main__":
    exit_code = asyncio.run(main())
    exit(exit_code)
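The recovery analysis in test 7 hinges on one ratio: how much of the spike memory (peak minus baseline) was reclaimed after the idle window. The arithmetic in isolation, with illustrative (not measured) numbers:

```python
def recovery_percent(baseline_mb: float, peak_mb: float, final_mb: float) -> float:
    """Percentage of the spike memory (peak - baseline) reclaimed after idle."""
    spike = peak_mb - baseline_mb
    if spike <= 0:
        # No spike means there is nothing to recover
        return 0.0
    return (peak_mb - final_mb) / spike * 100

# Illustrative numbers: 800 MB baseline, 1600 MB peak, 900 MB after the idle window
assert recovery_percent(800, 1600, 900) == 87.5
assert recovery_percent(800, 800, 800) == 0.0
```

With the test's thresholds, the same run would also pass the "within 100 MB of baseline" check (900 - 800 = 100 is the boundary case).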
57
deploy/docker/tests/test_monitor_demo.py
Normal file
@@ -0,0 +1,57 @@
#!/usr/bin/env python3
"""Quick test to generate monitor dashboard activity"""
import httpx
import asyncio

async def test_dashboard():
    async with httpx.AsyncClient(timeout=30.0) as client:
        print("📊 Generating dashboard activity...")

        # Test 1: Simple crawl
        print("\n1️⃣ Running simple crawl...")
        r1 = await client.post(
            "http://localhost:11235/crawl",
            json={"urls": ["https://httpbin.org/html"], "crawler_config": {}}
        )
        print(f"   Status: {r1.status_code}")

        # Test 2: Multiple URLs
        print("\n2️⃣ Running multi-URL crawl...")
        r2 = await client.post(
            "http://localhost:11235/crawl",
            json={
                "urls": [
                    "https://httpbin.org/html",
                    "https://httpbin.org/json"
                ],
                "crawler_config": {}
            }
        )
        print(f"   Status: {r2.status_code}")

        # Test 3: Check monitor health
        print("\n3️⃣ Checking monitor health...")
        r3 = await client.get("http://localhost:11235/monitor/health")
        health = r3.json()
        print(f"   Memory: {health['container']['memory_percent']}%")
        print(f"   Browsers: {health['pool']['permanent']['active']}")

        # Test 4: Check requests
        print("\n4️⃣ Checking request log...")
        r4 = await client.get("http://localhost:11235/monitor/requests")
        reqs = r4.json()
        print(f"   Active: {len(reqs['active'])}")
        print(f"   Completed: {len(reqs['completed'])}")

        # Test 5: Check endpoint stats
        print("\n5️⃣ Checking endpoint stats...")
        r5 = await client.get("http://localhost:11235/monitor/endpoints/stats")
        stats = r5.json()
        for endpoint, data in stats.items():
            print(f"   {endpoint}: {data['count']} requests, {data['avg_latency_ms']}ms avg")

        print("\n✅ Dashboard should now show activity!")
        print("\n🌐 Open: http://localhost:11235/dashboard")

if __name__ == "__main__":
    asyncio.run(test_dashboard())
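The monitor fields this demo script reads can be exercised offline against a payload shaped like the `/monitor/health` response documented in this commit (values illustrative):

```python
# Hypothetical /monitor/health payload, following the shape the demo script reads
health = {
    "container": {"memory_percent": 45.2, "cpu_percent": 23.1},
    "pool": {
        "permanent": {"active": True, "memory_mb": 270},
        "hot": {"count": 3, "memory_mb": 540},
        "cold": {"count": 1, "memory_mb": 180},
    },
}

# Total pool memory is the sum of the per-tier estimates
total_mb = sum(tier["memory_mb"] for tier in health["pool"].values())
assert total_mb == 990
assert health["pool"]["permanent"]["active"] is True
```

This matches the sample response's `total_memory_mb` of 990, which the server reports precomputed.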
@@ -178,4 +178,29 @@ def verify_email_domain(email: str) -> bool:
         records = dns.resolver.resolve(domain, 'MX')
         return True if records else False
     except Exception as e:
         return False
def get_container_memory_percent() -> float:
    """Get actual container memory usage vs limit (cgroup v1/v2 aware)."""
    try:
        # Try cgroup v2 first
        usage_path = Path("/sys/fs/cgroup/memory.current")
        limit_path = Path("/sys/fs/cgroup/memory.max")
        if not usage_path.exists():
            # Fall back to cgroup v1
            usage_path = Path("/sys/fs/cgroup/memory/memory.usage_in_bytes")
            limit_path = Path("/sys/fs/cgroup/memory/memory.limit_in_bytes")

        usage = int(usage_path.read_text())
        limit_text = limit_path.read_text().strip()

        # Handle unlimited (v2 reports the literal string "max", v1 a huge number)
        if limit_text == "max":
            import psutil
            limit = psutil.virtual_memory().total
        else:
            limit = int(limit_text)
            if limit > 1e18:
                import psutil
                limit = psutil.virtual_memory().total

        return (usage / limit) * 100
    except Exception:
        # Non-container or unsupported: fallback to host
        import psutil
        return psutil.virtual_memory().percent
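The cgroup arithmetic above is a plain usage/limit ratio; a standalone sketch with a hypothetical 1.8 GiB of usage under the 4 GiB limit the stress tests set via `mem_limit="4g"`:

```python
GIB = 1024 ** 3

def mem_percent(usage_bytes: int, limit_bytes: int) -> float:
    """Container memory usage as a percentage of the cgroup limit."""
    return usage_bytes / limit_bytes * 100

# 1.8 GiB used under a 4 GiB limit
assert round(mem_percent(int(1.8 * GIB), 4 * GIB), 1) == 45.0
# Half the limit used
assert mem_percent(2 * GIB, 4 * GIB) == 50.0
```

This is exactly the value the monitor reports as `memory_percent`, which is why it now tracks the container limit instead of the host's `psutil.virtual_memory()`.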
@@ -1,4 +1,20 @@
-# Crawl4AI Docker Guide 🐳
+# Self-Hosting Crawl4AI 🚀
 
+**Take Control of Your Web Crawling Infrastructure**
+
+Self-hosting Crawl4AI gives you complete control over your web crawling and data extraction pipeline. Unlike cloud-based solutions, you own your data, infrastructure, and destiny.
+
+## Why Self-Host?
+
+- **🔒 Data Privacy**: Your crawled data never leaves your infrastructure
+- **💰 Cost Control**: No per-request pricing - scale within your own resources
+- **🎯 Customization**: Full control over browser configurations, extraction strategies, and performance tuning
+- **📊 Transparency**: Real-time monitoring dashboard shows exactly what's happening
+- **⚡ Performance**: Direct access without API rate limits or geographic restrictions
+- **🛡️ Security**: Keep sensitive data extraction workflows behind your firewall
+- **🔧 Flexibility**: Customize, extend, and integrate with your existing infrastructure
+
+When you self-host, you can scale from a single container to a full browser infrastructure, all while maintaining complete control and visibility.
+
 ## Table of Contents
 - [Prerequisites](#prerequisites)
@@ -13,36 +29,14 @@
 - [Available MCP Tools](#available-mcp-tools)
 - [Testing MCP Connections](#testing-mcp-connections)
 - [MCP Schemas](#mcp-schemas)
-- [Additional API Endpoints](#additional-api-endpoints)
-- [HTML Extraction Endpoint](#html-extraction-endpoint)
-- [Screenshot Endpoint](#screenshot-endpoint)
-- [PDF Export Endpoint](#pdf-export-endpoint)
-- [JavaScript Execution Endpoint](#javascript-execution-endpoint)
-- [User-Provided Hooks API](#user-provided-hooks-api)
-- [Hook Information Endpoint](#hook-information-endpoint)
-- [Available Hook Points](#available-hook-points)
-- [Using Hooks in Requests](#using-hooks-in-requests)
-- [Hook Examples with Real URLs](#hook-examples-with-real-urls)
-- [Security Best Practices](#security-best-practices)
-- [Hook Response Information](#hook-response-information)
-- [Error Handling](#error-handling)
-- [Hooks Utility: Function-Based Approach (Python)](#hooks-utility-function-based-approach-python)
-- [Job Queue & Webhook API](#job-queue-webhook-api)
-- [Why Use the Job Queue API?](#why-use-the-job-queue-api)
-- [Available Endpoints](#available-endpoints)
-- [Webhook Configuration](#webhook-configuration)
-- [Usage Examples](#usage-examples)
-- [Webhook Best Practices](#webhook-best-practices)
-- [Use Cases](#use-cases)
-- [Troubleshooting](#troubleshooting)
-- [Dockerfile Parameters](#dockerfile-parameters)
-- [Using the API](#using-the-api)
-- [Playground Interface](#playground-interface)
-- [Python SDK](#python-sdk)
-- [Understanding Request Schema](#understanding-request-schema)
-- [REST API Examples](#rest-api-examples)
-- [LLM Configuration Examples](#llm-configuration-examples)
-- [Metrics & Monitoring](#metrics--monitoring)
+- [Real-time Monitoring & Operations](#real-time-monitoring--operations)
+- [Monitoring Dashboard](#monitoring-dashboard)
+- [Monitor API Endpoints](#monitor-api-endpoints)
+- [WebSocket Streaming](#websocket-streaming)
+- [Control Actions](#control-actions)
+- [Production Integration](#production-integration)
+- [Deployment Scenarios](#deployment-scenarios)
+- [Complete Examples](#complete-examples)
 - [Server Configuration](#server-configuration)
 - [Understanding config.yml](#understanding-configyml)
 - [JWT Authentication](#jwt-authentication)
@@ -1957,22 +1951,469 @@ async def test_stream_crawl(token: str = None): # Made token optional
|
|||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Metrics & Monitoring
|
## Real-time Monitoring & Operations
|
||||||
|
|
||||||
Keep an eye on your crawler with these endpoints:
|
One of the key advantages of self-hosting is complete visibility into your infrastructure. Crawl4AI includes a comprehensive real-time monitoring system that gives you full transparency and control.
|
||||||
|
|
||||||
- `/health` - Quick health check
|
### Monitoring Dashboard
|
||||||
- `/metrics` - Detailed Prometheus metrics
|
|
||||||
- `/schema` - Full API schema
|
|
||||||
|
|
||||||
Example health check:
|
Access the **built-in real-time monitoring dashboard** for complete operational visibility:
|
||||||
|
|
||||||
|
```
|
||||||
|
http://localhost:11235/monitor
|
||||||
|
```
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
**Dashboard Features:**
|
||||||
|
|
||||||
|
#### 1. System Health Overview
|
||||||
|
- **CPU & Memory**: Live usage with progress bars and percentage indicators
|
||||||
|
- **Network I/O**: Total bytes sent/received since startup
|
||||||
|
- **Server Uptime**: How long your server has been running
|
||||||
|
- **Browser Pool Status**:
|
||||||
|
- 🔥 Permanent browser (always-on default config, ~270MB)
|
||||||
|
- ♨️ Hot pool (frequently used configs, ~180MB each)
|
||||||
|
- ❄️ Cold pool (idle browsers awaiting cleanup, ~180MB each)
|
||||||
|
- **Memory Pressure**: LOW/MEDIUM/HIGH indicator for janitor behavior
|
||||||
|
|
||||||
|
#### 2. Live Request Tracking
|
||||||
|
- **Active Requests**: Currently running crawls with:
|
||||||
|
- Request ID for tracking
|
||||||
|
- Target URL (truncated for display)
|
||||||
|
- Endpoint being used
|
||||||
|
- Elapsed time (updates in real-time)
|
||||||
|
- Memory usage from start
|
||||||
|
- **Completed Requests**: Last 10 finished requests showing:
|
||||||
|
- Success/failure status (color-coded)
|
||||||
|
- Total execution time
|
||||||
|
- Memory delta (how much memory changed)
|
||||||
|
- Pool hit (was browser reused?)
|
||||||
|
- HTTP status code
|
||||||
|
- **Filtering**: View all, success only, or errors only
|
||||||
|
|
||||||
|
#### 3. Browser Pool Management
|
||||||
|
Interactive table showing all active browsers:
|
||||||
|
|
||||||
|
| Type | Signature | Age | Last Used | Hits | Actions |
|
||||||
|
|------|-----------|-----|-----------|------|---------|
|
||||||
|
| permanent | abc12345 | 2h | 5s ago | 1,247 | Restart |
|
||||||
|
| hot | def67890 | 45m | 2m ago | 89 | Kill / Restart |
|
||||||
|
| cold | ghi11213 | 30m | 15m ago | 3 | Kill / Restart |
|
||||||
|
|
||||||
|
- **Reuse Rate**: Percentage of requests that reused existing browsers
|
||||||
|
- **Memory Estimates**: Total memory used by browser pool
|
||||||
|
- **Manual Control**: Kill or restart individual browsers
|
||||||
|
|
||||||
|
#### 4. Janitor Events Log
|
||||||
|
Real-time log of browser pool cleanup events:
|
||||||
|
- When cold browsers are closed due to memory pressure
|
||||||
|
- When browsers are promoted from cold to hot pool
|
||||||
|
- Forced cleanups triggered manually
|
||||||
|
- Detailed cleanup reasons and browser signatures
|
||||||
|
|
||||||
|
#### 5. Error Monitoring
|
||||||
|
Recent errors with full context:
|
||||||
|
- Timestamp
|
||||||
|
- Endpoint where error occurred
|
||||||
|
- Target URL
|
||||||
|
- Error message
|
||||||
|
- Request ID for correlation
|
||||||
|
|
||||||
|
**Live Updates:**
|
||||||
|
The dashboard connects via WebSocket and refreshes every **2 seconds** with the latest data. Connection status indicator shows when you're connected/disconnected.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Monitor API Endpoints
|
||||||
|
|
||||||
|
For programmatic monitoring, automation, and integration with your existing infrastructure:
|
||||||
|
|
||||||
|
#### Health & Statistics
|
||||||
|
|
||||||
|
**Get System Health**
|
||||||
```bash
|
```bash
|
||||||
curl http://localhost:11235/health
|
GET /monitor/health
|
||||||
|
```
|
||||||
|
|
||||||
|
Returns current system snapshot:
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"container": {
|
||||||
|
"memory_percent": 45.2,
|
||||||
|
"cpu_percent": 23.1,
|
||||||
|
"network_sent_mb": 1250.45,
|
||||||
|
"network_recv_mb": 3421.12,
|
||||||
|
"uptime_seconds": 7234
|
||||||
|
},
|
||||||
|
"pool": {
|
||||||
|
"permanent": {"active": true, "memory_mb": 270},
|
||||||
|
"hot": {"count": 3, "memory_mb": 540},
|
||||||
|
"cold": {"count": 1, "memory_mb": 180},
|
||||||
|
"total_memory_mb": 990
|
||||||
|
},
|
||||||
|
"janitor": {
|
||||||
|
"next_cleanup_estimate": "adaptive",
|
||||||
|
"memory_pressure": "MEDIUM"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Get Request Statistics**
|
||||||
|
```bash
|
||||||
|
GET /monitor/requests?status=all&limit=50
|
||||||
|
```
|
||||||
|
|
||||||
|
Query parameters:
|
||||||
|
- `status`: Filter by `all`, `active`, `completed`, `success`, or `error`
|
||||||
|
- `limit`: Number of completed requests to return (1-1000)
|
||||||
|
|
||||||
|
**Get Browser Pool Details**
|
||||||
|
```bash
|
||||||
|
GET /monitor/browsers
|
||||||
|
```
|
||||||
|
|
||||||
|
Returns detailed information about all active browsers:
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"browsers": [
|
||||||
|
{
|
||||||
|
"type": "permanent",
|
||||||
|
"sig": "abc12345",
|
||||||
|
"age_seconds": 7234,
|
||||||
|
"last_used_seconds": 5,
|
||||||
|
"memory_mb": 270,
|
||||||
|
"hits": 1247,
|
||||||
|
"killable": false
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "hot",
|
||||||
|
"sig": "def67890",
|
||||||
|
"age_seconds": 2701,
|
||||||
|
"last_used_seconds": 120,
|
||||||
|
"memory_mb": 180,
|
||||||
|
"hits": 89,
|
||||||
|
"killable": true
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"summary": {
|
||||||
|
"total_count": 5,
|
||||||
|
"total_memory_mb": 990,
|
||||||
|
"reuse_rate_percent": 87.3
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Get Endpoint Performance Statistics**
|
||||||
|
```bash
|
||||||
|
GET /monitor/endpoints/stats
|
||||||
|
```
|
||||||
|
|
||||||
|
Returns aggregated metrics per endpoint:
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"/crawl": {
|
||||||
|
"count": 1523,
|
||||||
|
"avg_latency_ms": 2341.5,
|
||||||
|
"success_rate_percent": 98.2,
|
||||||
|
"pool_hit_rate_percent": 89.1,
|
||||||
|
"errors": 27
|
||||||
|
},
|
||||||
|
"/md": {
|
||||||
|
"count": 891,
|
||||||
|
"avg_latency_ms": 1823.7,
|
||||||
|
"success_rate_percent": 99.4,
|
||||||
|
"pool_hit_rate_percent": 92.3,
|
||||||
|
"errors": 5
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Get Timeline Data**
|
||||||
|
```bash
|
||||||
|
GET /monitor/timeline?metric=memory&window=5m
|
||||||
|
```
|
||||||
|
|
||||||
|
Parameters:
|
||||||
|
- `metric`: `memory`, `requests`, or `browsers`
|
||||||
|
- `window`: Currently only `5m` (5-minute window, 5-second resolution)
|
||||||
|
|
||||||
|
Returns time-series data for charts:
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"timestamps": [1699564800, 1699564805, 1699564810, ...],
|
||||||
|
"values": [42.1, 43.5, 41.8, ...]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
#### Logs
|
||||||
|
|
||||||
|
**Get Janitor Events**
|
||||||
|
```bash
|
||||||
|
GET /monitor/logs/janitor?limit=100
|
||||||
|
```
|
||||||
|
|
||||||
|
**Get Error Log**
|
||||||
|
```bash
|
||||||
|
GET /monitor/logs/errors?limit=100
|
||||||
```
|
```
|
||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
*(Deployment Scenarios and Complete Examples sections remain the same, maybe update links if examples moved)*
|
### WebSocket Streaming
|
||||||
|
|
||||||
|
For real-time monitoring in your own dashboards or applications:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
WS /monitor/ws
|
||||||
|
```
|
||||||
|
|
||||||
|
**Connection Example (Python):**
|
||||||
|
```python
|
||||||
|
import asyncio
|
||||||
|
import websockets
|
||||||
|
import json
|
||||||
|
|
||||||
|
async def monitor_server():
|
||||||
|
uri = "ws://localhost:11235/monitor/ws"
|
||||||
|
|
||||||
|
async with websockets.connect(uri) as websocket:
|
||||||
|
print("Connected to Crawl4AI monitor")
|
||||||
|
|
||||||
|
while True:
|
||||||
|
# Receive update every 2 seconds
|
||||||
|
data = await websocket.recv()
|
||||||
|
update = json.loads(data)
|
||||||
|
|
||||||
|
# Extract key metrics
|
||||||
|
health = update['health']
|
||||||
|
active_requests = len(update['requests']['active'])
|
||||||
|
browsers = len(update['browsers'])
|
||||||
|
|
||||||
|
print(f"Memory: {health['container']['memory_percent']:.1f}% | "
|
||||||
|
f"Active: {active_requests} | "
|
||||||
|
f"Browsers: {browsers}")
|
||||||
|
|
||||||
|
# Check for high memory pressure
|
||||||
|
if health['janitor']['memory_pressure'] == 'HIGH':
|
||||||
|
print("⚠️ HIGH MEMORY PRESSURE - Consider cleanup")
|
||||||
|
|
||||||
|
asyncio.run(monitor_server())
|
||||||
|
```
|
||||||
|
|
||||||
|
**Update Payload Structure:**
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"timestamp": 1699564823.456,
|
||||||
|
"health": { /* System health snapshot */ },
|
||||||
|
"requests": {
|
||||||
|
"active": [ /* Currently running */ ],
|
||||||
|
"completed": [ /* Last 10 completed */ ]
|
||||||
|
},
|
||||||
|
"browsers": [ /* All active browsers */ ],
|
||||||
|
"timeline": {
|
||||||
|
"memory": { /* Last 5 minutes */ },
|
||||||
|
"requests": { /* Request rate */ },
|
||||||
|
"browsers": { /* Pool composition */ }
|
||||||
|
},
|
||||||
|
"janitor": [ /* Last 10 cleanup events */ ],
|
||||||
|
"errors": [ /* Last 10 errors */ ]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Control Actions
|
||||||
|
|
||||||
|
Take manual control when needed:
|
||||||
|
|
||||||
|
**Force Immediate Cleanup**
|
||||||
|
```bash
|
||||||
|
POST /monitor/actions/cleanup
|
||||||
|
```
|
||||||
|
|
||||||
|
Kills all cold pool browsers immediately (useful when memory is tight):
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"success": true,
|
||||||
|
"killed_browsers": 3
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**Kill Specific Browser**

```bash
POST /monitor/actions/kill_browser
Content-Type: application/json

{
  "sig": "abc12345"  // First 8 chars of browser signature
}
```

Response:

```json
{
  "success": true,
  "killed_sig": "abc12345",
  "pool_type": "hot"
}
```
|
**Restart Browser**

```bash
POST /monitor/actions/restart_browser
Content-Type: application/json

{
  "sig": "permanent"  // Or first 8 chars of signature
}
```

For the permanent browser, this closes and reinitializes it. For hot/cold browsers, it kills them and lets new requests create fresh ones.
|
**Reset Statistics**

```bash
POST /monitor/stats/reset
```

Clears endpoint counters (useful for starting fresh after testing).

---

### Production Integration

#### Integration with Existing Monitoring Systems
**Prometheus Integration:**

```bash
# Scrape metrics endpoint
curl http://localhost:11235/metrics
```
**Custom Dashboard Integration:**

```python
# Example: Push metrics to your monitoring system
import asyncio
import websockets
import json
from your_monitoring import push_metric  # placeholder for your own metrics client

async def integrate_monitoring():
    async with websockets.connect("ws://localhost:11235/monitor/ws") as ws:
        while True:
            data = json.loads(await ws.recv())

            # Push to your monitoring system
            push_metric("crawl4ai.memory.percent",
                        data['health']['container']['memory_percent'])
            push_metric("crawl4ai.active_requests",
                        len(data['requests']['active']))
            push_metric("crawl4ai.browser_count",
                        len(data['browsers']))

asyncio.run(integrate_monitoring())
```
**Alerting Example:**

```python
import requests
import time

def check_health():
    """Poll health endpoint and alert on issues."""
    response = requests.get("http://localhost:11235/monitor/health")
    health = response.json()

    # Alert on high memory
    if health['container']['memory_percent'] > 85:
        send_alert(f"High memory: {health['container']['memory_percent']}%")

    # Alert on low per-endpoint success rates
    stats = requests.get("http://localhost:11235/monitor/endpoints/stats").json()
    for endpoint, metrics in stats.items():
        if metrics['success_rate_percent'] < 95:
            send_alert(f"{endpoint} success rate: {metrics['success_rate_percent']}%")

# send_alert() is your own notifier (email, Slack, PagerDuty, ...)
# Run every minute
while True:
    check_health()
    time.sleep(60)
```
**Log Aggregation:**

```python
import requests
from datetime import datetime

def aggregate_errors():
    """Fetch and aggregate errors for your logging system."""
    response = requests.get("http://localhost:11235/monitor/logs/errors?limit=100")
    errors = response.json()['errors']

    for error in errors:
        # log_to_system() is your own log-shipping function
        log_to_system({
            'timestamp': datetime.fromtimestamp(error['timestamp']),
            'service': 'crawl4ai',
            'endpoint': error['endpoint'],
            'url': error['url'],
            'message': error['error'],
            'request_id': error['request_id']
        })
```
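Before shipping the errors onward, it can also help to bucket them. A small sketch (field names match the records above) that counts error records per endpoint, so the noisiest routes stand out:

```python
from collections import Counter

def errors_by_endpoint(errors: list) -> Counter:
    """Count error records per endpoint to spot which routes fail most often."""
    return Counter(e["endpoint"] for e in errors)
```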
#### Key Metrics to Track

For production self-hosted deployments, monitor these metrics:

1. **Memory Usage Trends**
   - Track `container.memory_percent` over time
   - Alert when consistently above 80%
   - Prevents OOM kills

2. **Request Success Rates**
   - Monitor per-endpoint success rates
   - Alert when below 95%
   - Indicates crawling issues

3. **Average Latency**
   - Track `avg_latency_ms` per endpoint
   - Detect performance degradation
   - Optimize slow endpoints

4. **Browser Pool Efficiency**
   - Monitor `reuse_rate_percent`
   - Should be >80% for good efficiency
   - Low rates indicate pool churn

5. **Error Frequency**
   - Count errors per time window
   - Alert on sudden spikes
   - Track error patterns

6. **Janitor Activity**
   - Monitor cleanup frequency
   - Excessive cleanup indicates memory pressure
   - Adjust pool settings if needed
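These thresholds can be folded into one reusable check. A sketch that turns raw readings into alert strings (the threshold values mirror the recommendations in the list above; the constants and function names are ours):

```python
THRESHOLDS = {
    "max_memory_percent": 80.0,  # container.memory_percent sustained above this risks OOM kills
    "min_success_rate": 95.0,    # per-endpoint success_rate_percent
    "min_reuse_rate": 80.0,      # browser-pool reuse_rate_percent
}

def evaluate_metrics(memory_percent, success_rates, reuse_rate):
    """Return alert strings for any metric outside the recommended thresholds."""
    alerts = []
    if memory_percent > THRESHOLDS["max_memory_percent"]:
        alerts.append(f"memory at {memory_percent:.1f}%")
    for endpoint, rate in success_rates.items():
        if rate < THRESHOLDS["min_success_rate"]:
            alerts.append(f"{endpoint} success rate {rate:.1f}%")
    if reuse_rate < THRESHOLDS["min_reuse_rate"]:
        alerts.append(f"pool reuse only {reuse_rate:.1f}% (churn)")
    return alerts
```

Call it with readings from `/monitor/health` and `/monitor/endpoints/stats`, and route any non-empty result to your alerting channel.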
---

### Quick Health Check

For simple uptime monitoring:

```bash
curl http://localhost:11235/health
```

Returns:

```json
{
  "status": "healthy",
  "version": "0.7.4"
}
```

Other useful endpoints:

- `/metrics` - Prometheus metrics
- `/schema` - Full API schema

---
|
||||||
|
|
||||||
@@ -2132,43 +2573,46 @@ We're here to help you succeed with Crawl4AI! Here's how to get support:
|
|||||||
|
|
||||||
## Summary
|
## Summary
|
||||||
|
|
||||||
In this guide, we've covered everything you need to get started with Crawl4AI's Docker deployment:
|
Congratulations! You now have everything you need to self-host your own Crawl4AI infrastructure with complete control and visibility.
|
||||||
- Building and running the Docker container
|
|
||||||
- Configuring the environment
|
|
||||||
- Using the interactive playground for testing
|
|
||||||
- Making API requests with proper typing
|
|
||||||
- Using the Python SDK with **automatic hook conversion**
|
|
||||||
- **Working with hooks** - both string-based (REST API) and function-based (Python SDK)
|
|
||||||
- Leveraging specialized endpoints for screenshots, PDFs, and JavaScript execution
|
|
||||||
- Connecting via the Model Context Protocol (MCP)
|
|
||||||
- Monitoring your deployment
|
|
||||||
|
|
||||||
### Key Features
|
**What You've Learned:**
|
||||||
|
- ✅ Multiple deployment options (Docker Hub, Docker Compose, manual builds)
|
||||||
|
- ✅ Environment configuration and LLM integration
|
||||||
|
- ✅ Using the interactive playground for testing
|
||||||
|
- ✅ Making API requests with proper typing (SDK and REST)
|
||||||
|
- ✅ Specialized endpoints (screenshots, PDFs, JavaScript execution)
|
||||||
|
- ✅ MCP integration for AI-assisted development
|
||||||
|
- ✅ **Real-time monitoring dashboard** for operational transparency
|
||||||
|
- ✅ **Monitor API** for programmatic control and integration
|
||||||
|
- ✅ Production deployment best practices
|
||||||
|
|
||||||
**Hooks Support**: Crawl4AI offers two approaches for working with hooks:
|
**Why This Matters:**
|
||||||
- **String-based** (REST API): Works with any language, requires manual string formatting
|
|
||||||
- **Function-based** (Python SDK): Write hooks as regular Python functions with full IDE support and automatic conversion
|
|
||||||
|
|
||||||
**Playground Interface**: The built-in playground at `http://localhost:11235/playground` makes it easy to test configurations and generate corresponding JSON for API requests.
|
By self-hosting Crawl4AI, you:
|
||||||
|
- 🔒 **Own Your Data**: Everything stays in your infrastructure
|
||||||
|
- 📊 **See Everything**: Real-time dashboard shows exactly what's happening
|
||||||
|
- 💰 **Control Costs**: Scale within your resources, no per-request fees
|
||||||
|
- ⚡ **Maximize Performance**: Direct access with smart browser pooling (10x memory efficiency)
|
||||||
|
- 🛡️ **Stay Secure**: Keep sensitive workflows behind your firewall
|
||||||
|
- 🔧 **Customize Freely**: Full control over configs, strategies, and optimizations
|
||||||
|
|
||||||
**MCP Integration**: For AI application developers, the MCP integration allows tools like Claude Code to directly access Crawl4AI's capabilities without complex API handling.
|
**Next Steps:**
|
||||||
|
|
||||||
### Next Steps
|
1. **Start Simple**: Deploy with Docker Hub image and test with the playground
|
||||||
|
2. **Monitor Everything**: Open `http://localhost:11235/monitor` to watch your server
|
||||||
|
3. **Integrate**: Connect your applications using the Python SDK or REST API
|
||||||
|
4. **Scale Smart**: Use the monitoring data to optimize your deployment
|
||||||
|
5. **Go Production**: Set up alerting, log aggregation, and automated cleanup
|
||||||
|
|
||||||
1. **Explore Examples**: Check out the comprehensive examples in:
|
**Key Resources:**
|
||||||
- `/docs/examples/hooks_docker_client_example.py` - Python function-based hooks
|
- 🎮 **Playground**: `http://localhost:11235/playground` - Interactive testing
|
||||||
- `/docs/examples/hooks_rest_api_example.py` - REST API string-based hooks
|
- 📊 **Monitor Dashboard**: `http://localhost:11235/monitor` - Real-time visibility
|
||||||
- `/docs/examples/README_HOOKS.md` - Comparison and guide
|
- 📖 **Architecture Docs**: `deploy/docker/ARCHITECTURE.md` - Deep technical dive
|
||||||
|
- 💬 **Discord Community**: Get help and share experiences
|
||||||
|
- ⭐ **GitHub**: Report issues, contribute, show support
|
||||||
|
|
||||||
2. **Read Documentation**:
|
Remember: The monitoring dashboard is your window into your infrastructure. Use it to understand performance, troubleshoot issues, and optimize your deployment. The examples in the `examples` folder show real-world usage patterns you can adapt.
|
||||||
- `/docs/hooks-utility-guide.md` - Complete hooks utility guide
|
|
||||||
- API documentation for detailed configuration options
|
|
||||||
|
|
||||||
3. **Join the Community**:
|
**You're now in control of your web crawling destiny!** 🚀
|
||||||
- GitHub: Report issues and contribute
|
|
||||||
- Discord: Get help and share your experiences
|
|
||||||
- Documentation: Comprehensive guides and tutorials
|
|
||||||
|
|
||||||
Keep exploring, and don't hesitate to reach out if you need help! We're building something amazing together. 🚀
|
|
||||||
|
|
||||||
Happy crawling! 🕷️
|
Happy crawling! 🕷️
|
||||||
The same commit updates the `nav` section of `mkdocs.yml` to point at the renamed guide:

```yaml
  - "Marketplace Admin": "marketplace/admin/index.html"
  - Setup & Installation:
      - "Installation": "core/installation.md"
      - "Self-Hosting Guide": "core/self-hosting.md"
  - "Blog & Changelog":
      - "Blog Home": "blog/index.md"
      - "Changelog": "https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md"
```