Compare commits


40 Commits

Author SHA1 Message Date
unclecode
1a22fb4d4f docs: rename Docker deployment to self-hosting guide with comprehensive monitoring documentation
Major documentation restructuring to emphasize self-hosting capabilities and fully document the real-time monitoring system.

Changes:
- Renamed docker-deployment.md → self-hosting.md to better reflect the value proposition
- Updated mkdocs.yml navigation to "Self-Hosting Guide"
- Completely rewrote introduction emphasizing self-hosting benefits:
  * Data privacy and ownership
  * Cost control and transparency
  * Performance and security advantages
  * Full customization capabilities

- Expanded "Metrics & Monitoring" → "Real-time Monitoring & Operations" with:
  * Monitoring Dashboard section documenting the /monitor UI
  * Complete feature breakdown (system health, requests, browsers, janitor, errors)
  * Monitor API Endpoints with all REST endpoints and examples
  * WebSocket Streaming integration guide with Python examples
  * Control Actions for manual browser management
  * Production Integration patterns (Prometheus, custom dashboards, alerting)
  * Key production metrics to track

- Enhanced summary section:
  * What users learned checklist
  * Why self-hosting matters
  * Clear next steps
  * Key resources with monitoring dashboard URL

The monitoring dashboard built 2-3 weeks ago is now fully documented and discoverable.
Users will understand they have complete operational visibility at http://localhost:11235/monitor
with real-time updates, browser pool management, and programmatic control via REST/WebSocket APIs.
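
A minimal sketch of that programmatic access (assumes the default port 11235 and the /monitor/health endpoint documented in the monitoring commits below):

```python
# Minimal health check against a locally self-hosted instance (sketch).
# Assumes the default bind address/port 11235 described in this changeset.
import httpx

resp = httpx.get("http://localhost:11235/monitor/health", timeout=10)
resp.raise_for_status()
print(resp.json())  # CPU/memory/uptime snapshot; exact fields depend on the server version
```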

This positions Crawl4AI as an enterprise-grade self-hosting solution with DevOps-level
monitoring capabilities, not just a Docker deployment.
2025-11-09 13:31:52 +08:00
unclecode
81b5312629 Update gitignore 2025-11-09 10:49:42 +08:00
unclecode
73a5a7b0f5 Update gitignore 2025-10-18 12:41:29 +08:00
unclecode
05921811b8 docs: add comprehensive technical architecture documentation
Created ARCHITECTURE.md as a complete technical reference for the
Crawl4AI Docker server, replacing the stress test pipeline document
with production-grade documentation.

Contents:
- System overview with architecture diagrams
- Core components deep-dive (server, API, utils)
- Smart browser pool implementation details
- Real-time monitoring system architecture
- WebSocket implementation and fallback strategy
- Memory management and container detection
- Production optimizations and code review fixes
- Deployment guides (local, Docker, production)
- Comprehensive troubleshooting section
- Debug tools and performance tuning
- Test suite documentation
- Architecture decision log (ADRs)

Target audience: Developers maintaining or extending the system
Goal: Enable rapid onboarding and confident modifications
2025-10-18 12:05:49 +08:00
unclecode
25507adb5b feat(monitor): implement code review fixes and real-time WebSocket monitoring
Backend Improvements (11 fixes applied):

Critical Fixes:
- Add lock protection for browser pool access in monitor stats
- Ensure async track_janitor_event across all call sites
- Improve error handling in monitor request tracking (already in place)

Important Fixes:
- Replace fire-and-forget Redis with background persistence worker
- Add time-based expiry for completed requests/errors (5min cleanup)
- Implement input validation for monitor route parameters
- Add 4s timeout to timeline updater to prevent hangs
- Add warning when killing browsers with active requests
- Implement monitor cleanup on shutdown with final persistence
- Document memory estimates with TODO for actual tracking

Frontend Enhancements:

WebSocket Real-time Updates:
- Add WebSocket endpoint at /monitor/ws for live monitoring
- Implement auto-reconnect with exponential backoff (max 5 attempts)
- Add graceful fallback to HTTP polling on WebSocket failure
- Send comprehensive updates every 2 seconds (health, requests, browsers, timeline, events)
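
A sketch of a consumer for that stream follows (the /monitor/ws path and 2-second cadence come from this commit; the message schema is not assumed, so the payload is just printed):

```python
# Hedged sketch of a /monitor/ws consumer with simple exponential-backoff reconnect.
import asyncio, json
import websockets  # pip install websockets

async def watch(url="ws://localhost:11235/monitor/ws", max_attempts=5):
    delay = 1
    for _ in range(max_attempts):
        try:
            async with websockets.connect(url) as ws:
                delay = 1  # reset backoff once connected
                async for raw in ws:
                    update = json.loads(raw)
                    print(update)  # health/requests/browsers/timeline/events payload
        except (OSError, websockets.ConnectionClosed):
            await asyncio.sleep(delay)
            delay = min(delay * 2, 30)  # exponential backoff, capped
    print("giving up; fall back to HTTP polling of /monitor/*")

asyncio.run(watch())
```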

UI/UX Improvements:
- Add live connection status indicator with pulsing animation
  - Green "Live" = WebSocket connected
  - Yellow "Connecting..." = Attempting connection
  - Blue "Polling" = Fallback to HTTP polling
  - Red "Disconnected" = Connection failed
- Restore original beautiful styling for all sections
- Improve request table layout with flex-grow for URL column
- Add browser type text labels alongside emojis
- Add flex layout to browser section header

Testing:
- Add test-websocket.py for WebSocket validation
- All 7 integration tests passing successfully

Summary: 563 additions across 6 files
2025-10-18 11:38:25 +08:00
unclecode
aba4036ab6 Add demo and test scripts for monitor dashboard activity
- Introduced a demo script (`demo_monitor_dashboard.py`) to showcase various monitoring features through simulated activity.
- Implemented a test script (`test_monitor_demo.py`) to generate dashboard activity and verify monitor health and endpoint statistics.
- Added a logo image to the static assets for branding purposes.
2025-10-17 22:43:06 +08:00
unclecode
e2af031b09 feat(monitor): add real-time monitoring dashboard with Redis persistence
Complete observability solution for production deployments with terminal-style UI.

**Backend Implementation:**
- `monitor.py`: Stats manager tracking requests, browsers, errors, timeline data
- `monitor_routes.py`: REST API endpoints for all monitor functionality
  - GET /monitor/health - System health snapshot
  - GET /monitor/requests - Active & completed requests
  - GET /monitor/browsers - Browser pool details
  - GET /monitor/endpoints/stats - Aggregated endpoint analytics
  - GET /monitor/timeline - Time-series data (memory, requests, browsers)
  - GET /monitor/logs/{janitor,errors} - Event logs
  - POST /monitor/actions/{cleanup,kill_browser,restart_browser} - Control actions
  - POST /monitor/stats/reset - Reset counters
- Redis persistence for endpoint stats (survives restart)
- Timeline tracking (5min window, 5s resolution, 60 data points)
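
A hedged sketch of driving these endpoints over HTTP (base URL assumes the default local deployment on port 11235; response shapes are printed as-is rather than assumed):

```python
# Sketch: poll endpoint analytics and trigger a manual cleanup via the monitor API.
import httpx

BASE = "http://localhost:11235"  # default self-hosted address (assumption)

stats = httpx.get(f"{BASE}/monitor/endpoints/stats", timeout=10).json()
print(stats)

# Control action: force a janitor cleanup pass
result = httpx.post(f"{BASE}/monitor/actions/cleanup", timeout=30)
print(result.status_code, result.text)
```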

**Frontend Dashboard** (`/dashboard`):
- **System Health Bar**: CPU%, Memory%, Network I/O, Uptime
- **Pool Status**: Live counts (permanent/hot/cold browsers + memory)
- **Live Activity Tabs**:
  - Requests: Active (realtime) + recent completed (last 100)
  - Browsers: Detailed table with actions (kill/restart)
  - Janitor: Cleanup event log with timestamps
  - Errors: Recent errors with stack traces
- **Endpoint Analytics**: Count, avg latency, success%, pool hit%
- **Resource Timeline**: SVG charts (memory/requests/browsers) with terminal aesthetics
- **Control Actions**: Force cleanup, restart permanent, reset stats
- **Auto-refresh**: 5s polling (toggleable)

**Integration:**
- Janitor events tracked (close_cold, close_hot, promote)
- Crawler pool promotion events logged
- Timeline updater background task (5s interval)
- Lifespan hooks for monitor initialization

**UI Design:**
- Terminal vibe matching Crawl4AI theme
- Dark background, cyan/pink accents, monospace font
- Neon glow effects on charts
- Responsive layout, hover interactions
- Cross-navigation: Playground ↔ Monitor

**Key Features:**
- Zero-config: Works out of the box with existing Redis
- Real-time visibility into pool efficiency
- Manual browser management (kill/restart)
- Historical data persistence
- DevOps-friendly UX

Routes:
- API: `/monitor/*` (backend endpoints)
- UI: `/dashboard` (static HTML)
2025-10-17 21:36:25 +08:00
unclecode
b97eaeea4c feat(docker): implement smart browser pool with 10x memory efficiency
Major refactoring to eliminate memory leaks and enable high-scale crawling:

- **Smart 3-Tier Browser Pool**:
  - Permanent browser (always-ready default config)
  - Hot pool (configs used 3+ times, longer TTL)
  - Cold pool (new/rare configs, short TTL)
  - Auto-promotion: cold → hot after 3 uses
  - 100% pool reuse achieved in tests

- **Container-Aware Memory Detection**:
  - Read cgroup v1/v2 memory limits (not host metrics)
  - Accurate memory pressure detection in Docker
  - Memory-based browser creation blocking

- **Adaptive Janitor**:
  - Dynamic cleanup intervals (10s/30s/60s based on memory)
  - Tiered TTLs: cold 30-300s, hot 120-600s
  - Aggressive cleanup at high memory pressure

- **Unified Pool Usage**:
  - All endpoints now use pool (/html, /screenshot, /pdf, /execute_js, /md, /llm)
  - Fixed config signature mismatch (permanent browser matches endpoints)
  - get_default_browser_config() helper for consistency

- **Configuration**:
  - Reduced idle_ttl: 1800s → 300s (30min → 5min)
  - Fixed port: 11234 → 11235 (match Gunicorn)

**Performance Results** (from stress tests):
- Memory: 10x reduction (500-700MB × N → 270MB permanent)
- Latency: 30-50x faster (<100ms pool hits vs 3-5s startup)
- Reuse: 100% for default config, 60%+ for variants
- Capacity: 100+ concurrent requests (vs ~20 before)
- Leak: 0 MB/cycle (stable across tests)

**Test Infrastructure**:
- 7-phase sequential test suite (tests/)
- Docker stats integration + log analysis
- Pool promotion verification
- Memory leak detection
- Full endpoint coverage

Fixes memory issues reported in production deployments.
2025-10-17 20:38:39 +08:00
unclecode
216019f29a fix(marketplace): prevent hero image overflow and secondary card stretching
- Fixed hero image to 200px height with min/max constraints
- Added object-fit: cover to hero-image img elements
- Changed secondary-featured align-items from stretch to flex-start
- Fixed secondary-card height to 118px (no flex: 1 stretching)
- Updated responsive grid layouts for wider screens
- Added flex: 1 to hero-content for better content distribution

These changes ensure a rigid, predictable layout that prevents:
1. Large images from pushing text content down
2. Single secondary cards from stretching to fill entire height
2025-10-11 12:52:04 +08:00
unclecode
abe8a92561 fix(marketplace): resolve app detail page routing and styling issues
- Fixed JavaScript errors from missing HTML elements (install-code, usage-code, integration-code)
- Added missing CSS classes for tabs, overview layout, sidebar, and integration content
- Fixed tab navigation to display horizontally in single line
- Added proper padding to tab content sections (removed from container, added to content)
- Fixed tab selector from .nav-tab to .tab-btn to match HTML structure
- Added sidebar styling with stats grid and metadata display
- Improved responsive design with mobile-friendly tab scrolling
- Fixed code block positioning for copy buttons
- Removed margin from first headings to prevent extra spacing
- Added null checks for DOM elements in JavaScript to prevent errors

These changes resolve the routing issue where clicking on apps caused page redirects,
and fix the broken layout where CSS was not properly applied to the app detail page.
2025-10-11 11:51:22 +08:00
unclecode
5a4f21fad9 fix(marketplace): isolate api under marketplace prefix 2025-10-09 22:26:15 +08:00
unclecode
2c373f0642 fix(marketplace): align admin api with backend endpoints 2025-10-08 18:42:19 +08:00
unclecode
d2c7f345ab feat(docs): add chatgpt quick link to page actions 2025-10-07 11:59:25 +08:00
unclecode
8c62277718 feat(marketplace): add sponsor logo uploads
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
2025-10-06 20:58:35 +08:00
unclecode
5145d42df7 fix(docs): hide copy menu on non-markdown pages 2025-10-03 20:11:20 +08:00
Nasrin
80aa6c11d9 Merge pull request #1530 from Sjoeborg/fix/arun-many-returns-none
Fix: run_urls() returns None, crashing arun_many()
2025-10-03 12:57:06 +08:00
unclecode
749d200866 fix(marketplace): Update URLs to use /marketplace path and relative API endpoints
- Change API_BASE to relative '/api' for production
- Move marketplace to /marketplace instead of /marketplace/frontend
- Update MkDocs navigation
- Fix logo path in marketplace index
2025-10-02 17:08:50 +08:00
unclecode
408ad1b750 feat(marketplace): Add Crawl4AI marketplace with secure configuration
- Implement marketplace frontend and admin dashboard
- Add FastAPI backend with environment-based configuration
- Use .env file for secrets management
- Include data generation scripts
- Add proper CORS configuration
- Remove hardcoded password from admin login
- Update gitignore for security
2025-10-02 16:41:11 +08:00
Martin Sjöborg
35dd206925 fix: always return a list, even if we catch an exception 2025-10-02 09:21:44 +02:00
Martin Sjöborg
8d30662647 fix: remove this import as it causes python to treat "json" as a variable in the except block 2025-10-02 09:19:15 +02:00
unclecode
ef46df10da Update gitignore add local scripts folder 2025-09-30 18:31:57 +08:00
unclecode
0d8d043109 feat(docs): add brand book and page copy functionality
- Add comprehensive brand book with color system, typography, components
- Add page copy dropdown with markdown copy/view functionality
- Update mkdocs.yml with new assets and branding navigation
- Use terminal-style ASCII icons and condensed menu design
2025-09-30 18:28:05 +08:00
ntohidi
3fe49a766c fix(docker-deployment): replace console.log with print for metadata extraction 2025-09-25 14:12:59 +08:00
ntohidi
fef715a891 Merge branch 'feature/docker-hooks' into develop 2025-09-25 14:11:46 +08:00
Nasrin
69e8ca3d0d Merge pull request #1508 from unclecode/docker/base_config_overrides
#1505 fix(api): update config handling to only set base config if not provided by user
2025-09-22 18:02:14 +08:00
AHMET YILMAZ
a1950afd98 #1505 fix(api): update config handling to only set base config if not provided by user 2025-09-22 17:19:27 +08:00
Nasrin
d0eb5a6ffe Merge pull request #1501 from unclecode/fix/n-playwright-stealth
feat(StealthAdapter): fix stealth features for Playwright integration
2025-09-19 14:17:35 +08:00
ntohidi
77559f3373 feat(StealthAdapter): fix stealth features for Playwright integration. ref #1481 2025-09-18 15:39:06 +08:00
Nasrin
3899ac3d3b Merge pull request #1464 from unclecode/fix/proxy_deprecation
Fix/proxy deprecation
2025-09-16 15:48:45 +08:00
Nasrin
23431d8109 Merge pull request #1389 from unclecode/fix/deep-crawl-scoring
fix(deep-crawl): BestFirst priority inversion
2025-09-16 15:45:54 +08:00
AHMET YILMAZ
1717827732 refactor(BrowserConfig): change deprecation warning for 'proxy' parameter to UserWarning 2025-09-12 11:10:38 +08:00
Nasrin
f8eaf01ed1 Merge pull request #1467 from unclecode/fix/request-crawl-stream
Fix: request /crawl with stream: true issue
2025-09-11 17:40:43 +08:00
Nasrin
14b42b1f9a Merge pull request #1471 from unclecode/fix/adaptive-crawler-llm-config
Fix: allow custom LLM providers for adaptive crawler embedding config…
2025-09-09 12:56:33 +08:00
ntohidi
3bc56dd028 fix: allow custom LLM providers for adaptive crawler embedding config. ref: #1291
- Change embedding_llm_config from Dict to Union[LLMConfig, Dict] for type safety
  - Add backward-compatible conversion property _embedding_llm_config_dict
  - Replace all hardcoded OpenAI embedding configs with configurable options
  - Fix LLMConfig object attribute access in query expansion logic
  - Add comprehensive example demonstrating multiple provider configurations
  - Update documentation with both LLMConfig object and dictionary usage patterns

  Users can now specify any LLM provider for query expansion in embedding strategy:
  - New: embedding_llm_config=LLMConfig(provider='anthropic/claude-3', api_token='key')
  - Old: embedding_llm_config={'provider': 'openai/gpt-4', 'api_token': 'key'} (still works)
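
A hedged configuration sketch assembled from the classes touched in this change (the AdaptiveConfig import path and the provider/token values are assumptions or placeholders):

```python
# Sketch: pointing the embedding strategy's query expansion at a non-default provider.
# Field names follow the diff in this changeset.
from crawl4ai.async_configs import LLMConfig
from crawl4ai.adaptive_crawler import AdaptiveConfig  # module path is an assumption

cfg = AdaptiveConfig(
    embedding_model="sentence-transformers/all-MiniLM-L6-v2",
    embedding_llm_config=LLMConfig(provider="anthropic/claude-3", api_token="YOUR_KEY"),
)

# The old dict form keeps working:
legacy_cfg = AdaptiveConfig(
    embedding_llm_config={"provider": "openai/gpt-4", "api_token": "YOUR_KEY"},
)
```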
2025-09-09 12:49:55 +08:00
Nasrin
0482c1eafc Merge pull request #1469 from unclecode/fix/docker-jwt
Fix(auth): Fixed Docker JWT authentication
2025-09-04 15:00:15 +08:00
ntohidi
6e728096fa fix(auth): fixed Docker JWT authentication. ref #1442 2025-09-01 12:48:16 +08:00
AHMET YILMAZ
4ed33fce9e Remove deprecated test for 'proxy' parameter in BrowserConfig and update .gitignore to include test_scripts directory. 2025-08-28 17:26:10 +08:00
AHMET YILMAZ
f7a3366f72 #1375 : refactor(proxy) Deprecate 'proxy' parameter in BrowserConfig and enhance proxy string parsing
- Updated ProxyConfig.from_string to support multiple proxy formats, including URLs with credentials.
- Deprecated the 'proxy' parameter in BrowserConfig, replacing it with 'proxy_config' for better flexibility.
- Added warnings for deprecated usage and clarified behavior when both parameters are provided.
- Updated documentation and tests to reflect changes in proxy configuration handling.
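
A short usage sketch of the accepted formats, taken from the parser shown in the diff further below (addresses and credentials are placeholders):

```python
# Sketch: the proxy string formats ProxyConfig.from_string now accepts, per this change.
from crawl4ai.async_configs import BrowserConfig, ProxyConfig

p1 = ProxyConfig.from_string("http://user:pass@203.0.113.7:8080")  # URL with credentials
p2 = ProxyConfig.from_string("socks5://203.0.113.7:1080")          # URL without credentials
p3 = ProxyConfig.from_string("203.0.113.7:8080:user:pass")         # ip:port:username:password
p4 = ProxyConfig.from_string("203.0.113.7:8080")                   # ip:port

# 'proxy' is deprecated; pass proxy_config instead (a ProxyConfig, dict, or string)
browser_cfg = BrowserConfig(proxy_config=p1)
```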
2025-08-28 17:21:49 +08:00
ntohidi
88a9fbbb7e fix(deep-crawl): BestFirst priority inversion; remove pre-scoring truncation. ref #1253
Use negative scores in PQ to visit high-score URLs first and drop link cap prior to scoring; add test for ordering.
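
For context, asyncio.PriorityQueue pops the smallest tuple first, so pushing -score makes the highest-scoring URL come out first; a standalone illustration:

```python
# Illustration only: why BestFirst pushes (-score, depth, url) onto the priority queue.
import asyncio

async def main():
    q = asyncio.PriorityQueue()
    for url, score in [("https://a.example", 0.2), ("https://b.example", 0.9), ("https://c.example", 0.5)]:
        await q.put((-score, 0, url))   # negate: the min-heap then yields the highest score first
    while not q.empty():
        neg_score, depth, url = await q.get()
        print(url, -neg_score)          # b (0.9), c (0.5), a (0.2)

asyncio.run(main())
```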
2025-08-11 18:16:57 +08:00
ntohidi
be63c98db3 feat(docker): add user-provided hooks support to Docker API
Implements comprehensive hooks functionality allowing users to provide custom Python
functions as strings that execute at specific points in the crawling pipeline.

Key Features:
- Support for all 8 crawl4ai hook points:
  • on_browser_created: Initialize browser settings
  • on_page_context_created: Configure page context
  • before_goto: Pre-navigation setup
  • after_goto: Post-navigation processing
  • on_user_agent_updated: User agent modification handling
  • on_execution_started: Crawl execution initialization
  • before_retrieve_html: Pre-extraction processing
  • before_return_html: Final HTML processing

Implementation Details:
- Created UserHookManager for validation, compilation, and safe execution
- Added IsolatedHookWrapper for error isolation and timeout protection
- AST-based validation ensures code structure correctness
- Sandboxed execution with restricted builtins for security
- Configurable timeout (1-120 seconds) prevents infinite loops
- Comprehensive error handling ensures hooks don't crash main process
- Execution tracking with detailed statistics and logging

API Changes:
- Added HookConfig schema with code and timeout fields
- Extended CrawlRequest with optional hooks parameter
- Added /hooks/info endpoint for hook discovery
- Updated /crawl and /crawl/stream endpoints to support hooks

Safety Features:
- Malformed hooks return clear validation errors
- Hook errors are isolated and reported without stopping crawl
- Execution statistics track success/failure/timeout rates
- All hook results are JSON-serializable

Testing:
- Comprehensive test suite covering all 8 hooks
- Error handling and timeout scenarios validated
- Authentication, performance, and content extraction examples
- 100% success rate in production testing

Documentation:
- Added extensive hooks section to docker-deployment.md
- Security warnings about user-provided code risks
- Real-world examples using httpbin.org, GitHub, BBC
- Best practices and troubleshooting guide

ref #1377
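
A hedged request sketch based on the schema described above (the hook function's parameters are an assumption; query /hooks/info on a running server for the authoritative signature):

```python
# Sketch: submitting a user hook with a /crawl request. Field names follow the
# schema described above (hooks.code + hooks.timeout); the hook's parameters are assumed.
import httpx

payload = {
    "urls": ["https://httpbin.org/html"],
    "browser_config": {},
    "crawler_config": {},
    "hooks": {
        "timeout": 30,
        "code": {
            "before_goto": (
                "async def before_goto(page, context, url, **kwargs):\n"
                "    await page.set_extra_http_headers({'X-Example': 'demo'})\n"
            )
        },
    },
}

resp = httpx.post("http://localhost:11235/crawl", json=payload, timeout=120)
print(resp.json().get("hooks", {}).get("summary"))  # per-hook execution stats, per this commit
```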
2025-08-11 13:25:17 +08:00
78 changed files with 19740 additions and 290 deletions

26
.gitignore vendored
View File

@@ -1,6 +1,13 @@
# Scripts folder (private tools)
.scripts/
# Database files
*.db
# Environment files
.env
.env.local
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
@@ -262,12 +269,27 @@ continue_config.json
CLAUDE_MONITOR.md
CLAUDE.md
.claude/
tests/**/test_site
tests/**/reports
tests/**/benchmark_reports
test_scripts/
docs/**/data
.codecat/
docs/apps/linkdin/debug*/
docs/apps/linkdin/samples/insights/*
docs/apps/linkdin/samples/insights/*
scripts/
# Databse files
*.sqlite3
*.sqlite3-journal
*.db-journal
*.db-wal
*.db-shm
*.db
*.rdb
*.ldb

View File

@@ -19,7 +19,7 @@ import re
from pathlib import Path
from crawl4ai.async_webcrawler import AsyncWebCrawler
from crawl4ai.async_configs import CrawlerRunConfig, LinkPreviewConfig
from crawl4ai.async_configs import CrawlerRunConfig, LinkPreviewConfig, LLMConfig
from crawl4ai.models import Link, CrawlResult
import numpy as np
@@ -178,7 +178,7 @@ class AdaptiveConfig:
# Embedding strategy parameters
embedding_model: str = "sentence-transformers/all-MiniLM-L6-v2"
embedding_llm_config: Optional[Dict] = None # Separate config for embeddings
embedding_llm_config: Optional[Union[LLMConfig, Dict]] = None # Separate config for embeddings
n_query_variations: int = 10
coverage_threshold: float = 0.85
alpha_shape_alpha: float = 0.5
@@ -250,6 +250,30 @@ class AdaptiveConfig:
assert 0 <= self.embedding_quality_max_confidence <= 1, "embedding_quality_max_confidence must be between 0 and 1"
assert self.embedding_quality_scale_factor > 0, "embedding_quality_scale_factor must be positive"
assert 0 <= self.embedding_min_confidence_threshold <= 1, "embedding_min_confidence_threshold must be between 0 and 1"
@property
def _embedding_llm_config_dict(self) -> Optional[Dict]:
"""Convert LLMConfig to dict format for backward compatibility."""
if self.embedding_llm_config is None:
return None
if isinstance(self.embedding_llm_config, dict):
# Already a dict - return as-is for backward compatibility
return self.embedding_llm_config
# Convert LLMConfig object to dict format
return {
'provider': self.embedding_llm_config.provider,
'api_token': self.embedding_llm_config.api_token,
'base_url': getattr(self.embedding_llm_config, 'base_url', None),
'temperature': getattr(self.embedding_llm_config, 'temperature', None),
'max_tokens': getattr(self.embedding_llm_config, 'max_tokens', None),
'top_p': getattr(self.embedding_llm_config, 'top_p', None),
'frequency_penalty': getattr(self.embedding_llm_config, 'frequency_penalty', None),
'presence_penalty': getattr(self.embedding_llm_config, 'presence_penalty', None),
'stop': getattr(self.embedding_llm_config, 'stop', None),
'n': getattr(self.embedding_llm_config, 'n', None),
}
class CrawlStrategy(ABC):
@@ -593,7 +617,7 @@ class StatisticalStrategy(CrawlStrategy):
class EmbeddingStrategy(CrawlStrategy):
"""Embedding-based adaptive crawling using semantic space coverage"""
def __init__(self, embedding_model: str = None, llm_config: Dict = None):
def __init__(self, embedding_model: str = None, llm_config: Union[LLMConfig, Dict] = None):
self.embedding_model = embedding_model or "sentence-transformers/all-MiniLM-L6-v2"
self.llm_config = llm_config
self._embedding_cache = {}
@@ -605,14 +629,24 @@ class EmbeddingStrategy(CrawlStrategy):
self._kb_embeddings_hash = None # Track KB changes
self._validation_embeddings_cache = None # Cache validation query embeddings
self._kb_similarity_threshold = 0.95 # Threshold for deduplication
def _get_embedding_llm_config_dict(self) -> Dict:
"""Get embedding LLM config as dict with fallback to default."""
if hasattr(self, 'config') and self.config:
config_dict = self.config._embedding_llm_config_dict
if config_dict:
return config_dict
# Fallback to default if no config provided
return {
'provider': 'openai/text-embedding-3-small',
'api_token': os.getenv('OPENAI_API_KEY')
}
async def _get_embeddings(self, texts: List[str]) -> Any:
"""Get embeddings using configured method"""
from .utils import get_text_embeddings
embedding_llm_config = {
'provider': 'openai/text-embedding-3-small',
'api_token': os.getenv('OPENAI_API_KEY')
}
embedding_llm_config = self._get_embedding_llm_config_dict()
return await get_text_embeddings(
texts,
embedding_llm_config,
@@ -679,8 +713,20 @@ class EmbeddingStrategy(CrawlStrategy):
Return as a JSON array of strings."""
# Use the LLM for query generation
provider = self.llm_config.get('provider', 'openai/gpt-4o-mini') if self.llm_config else 'openai/gpt-4o-mini'
api_token = self.llm_config.get('api_token') if self.llm_config else None
# Convert LLMConfig to dict if needed
llm_config_dict = None
if self.llm_config:
if isinstance(self.llm_config, dict):
llm_config_dict = self.llm_config
else:
# Convert LLMConfig object to dict
llm_config_dict = {
'provider': self.llm_config.provider,
'api_token': self.llm_config.api_token
}
provider = llm_config_dict.get('provider', 'openai/gpt-4o-mini') if llm_config_dict else 'openai/gpt-4o-mini'
api_token = llm_config_dict.get('api_token') if llm_config_dict else None
# response = perform_completion_with_backoff(
# provider=provider,
@@ -843,10 +889,7 @@ class EmbeddingStrategy(CrawlStrategy):
# Batch embed only uncached links
if texts_to_embed:
embedding_llm_config = {
'provider': 'openai/text-embedding-3-small',
'api_token': os.getenv('OPENAI_API_KEY')
}
embedding_llm_config = self._get_embedding_llm_config_dict()
new_embeddings = await get_text_embeddings(texts_to_embed, embedding_llm_config, self.embedding_model)
# Cache the new embeddings
@@ -1184,10 +1227,7 @@ class EmbeddingStrategy(CrawlStrategy):
return
# Get embeddings for new texts
embedding_llm_config = {
'provider': 'openai/text-embedding-3-small',
'api_token': os.getenv('OPENAI_API_KEY')
}
embedding_llm_config = self._get_embedding_llm_config_dict()
new_embeddings = await get_text_embeddings(new_texts, embedding_llm_config, self.embedding_model)
# Deduplicate embeddings before adding to KB
@@ -1256,10 +1296,12 @@ class AdaptiveCrawler:
if strategy_name == "statistical":
return StatisticalStrategy()
elif strategy_name == "embedding":
return EmbeddingStrategy(
strategy = EmbeddingStrategy(
embedding_model=self.config.embedding_model,
llm_config=self.config.embedding_llm_config
)
strategy.config = self.config # Pass config to strategy
return strategy
else:
raise ValueError(f"Unknown strategy: {strategy_name}")

View File

@@ -1,5 +1,6 @@
import os
from typing import Union
import warnings
from .config import (
DEFAULT_PROVIDER,
DEFAULT_PROVIDER_API_KEY,
@@ -257,24 +258,39 @@ class ProxyConfig:
@staticmethod
def from_string(proxy_str: str) -> "ProxyConfig":
"""Create a ProxyConfig from a string in the format 'ip:port:username:password'."""
parts = proxy_str.split(":")
if len(parts) == 4: # ip:port:username:password
"""Create a ProxyConfig from a string.
Supported formats:
- 'http://username:password@ip:port'
- 'http://ip:port'
- 'socks5://ip:port'
- 'ip:port:username:password'
- 'ip:port'
"""
s = (proxy_str or "").strip()
# URL with credentials
if "@" in s and "://" in s:
auth_part, server_part = s.split("@", 1)
protocol, credentials = auth_part.split("://", 1)
if ":" in credentials:
username, password = credentials.split(":", 1)
return ProxyConfig(
server=f"{protocol}://{server_part}",
username=username,
password=password,
)
# URL without credentials (keep scheme)
if "://" in s and "@" not in s:
return ProxyConfig(server=s)
# Colon separated forms
parts = s.split(":")
if len(parts) == 4:
ip, port, username, password = parts
return ProxyConfig(
server=f"http://{ip}:{port}",
username=username,
password=password,
ip=ip
)
elif len(parts) == 2: # ip:port only
return ProxyConfig(server=f"http://{ip}:{port}", username=username, password=password)
if len(parts) == 2:
ip, port = parts
return ProxyConfig(
server=f"http://{ip}:{port}",
ip=ip
)
else:
raise ValueError(f"Invalid proxy string format: {proxy_str}")
return ProxyConfig(server=f"http://{ip}:{port}")
raise ValueError(f"Invalid proxy string format: {proxy_str}")
@staticmethod
def from_dict(proxy_dict: Dict) -> "ProxyConfig":
@@ -438,6 +454,7 @@ class BrowserConfig:
host: str = "localhost",
enable_stealth: bool = False,
):
self.browser_type = browser_type
self.headless = headless
self.browser_mode = browser_mode
@@ -450,13 +467,22 @@ class BrowserConfig:
if self.browser_type in ["firefox", "webkit"]:
self.channel = ""
self.chrome_channel = ""
if proxy:
warnings.warn("The 'proxy' parameter is deprecated and will be removed in a future release. Use 'proxy_config' instead.", UserWarning)
self.proxy = proxy
self.proxy_config = proxy_config
if isinstance(self.proxy_config, dict):
self.proxy_config = ProxyConfig.from_dict(self.proxy_config)
if isinstance(self.proxy_config, str):
self.proxy_config = ProxyConfig.from_string(self.proxy_config)
if self.proxy and self.proxy_config:
warnings.warn("Both 'proxy' and 'proxy_config' are provided. 'proxy_config' will take precedence.", UserWarning)
self.proxy = None
elif self.proxy:
# Convert proxy string to ProxyConfig if proxy_config is not provided
self.proxy_config = ProxyConfig.from_string(self.proxy)
self.proxy = None
self.viewport_width = viewport_width
self.viewport_height = viewport_height

View File

@@ -455,8 +455,6 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
# Update priorities for waiting tasks if needed
await self._update_queue_priorities()
return results
except Exception as e:
if self.monitor:
@@ -467,6 +465,7 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
memory_monitor.cancel()
if self.monitor:
self.monitor.stop()
return results
async def _update_queue_priorities(self):
"""Periodically update priorities of items in the queue to prevent starvation"""

View File

@@ -148,6 +148,134 @@ class PlaywrightAdapter(BrowserAdapter):
return Page, Error, PlaywrightTimeoutError
class StealthAdapter(BrowserAdapter):
"""Adapter for Playwright with stealth features using playwright_stealth"""
def __init__(self):
self._console_script_injected = {}
self._stealth_available = self._check_stealth_availability()
def _check_stealth_availability(self) -> bool:
"""Check if playwright_stealth is available and get the correct function"""
try:
from playwright_stealth import stealth_async
self._stealth_function = stealth_async
return True
except ImportError:
try:
from playwright_stealth import stealth_sync
self._stealth_function = stealth_sync
return True
except ImportError:
self._stealth_function = None
return False
async def apply_stealth(self, page: Page):
"""Apply stealth to a page if available"""
if self._stealth_available and self._stealth_function:
try:
if hasattr(self._stealth_function, '__call__'):
if 'async' in getattr(self._stealth_function, '__name__', ''):
await self._stealth_function(page)
else:
self._stealth_function(page)
except Exception as e:
# Fail silently or log error depending on requirements
pass
async def evaluate(self, page: Page, expression: str, arg: Any = None) -> Any:
"""Standard Playwright evaluate with stealth applied"""
if arg is not None:
return await page.evaluate(expression, arg)
return await page.evaluate(expression)
async def setup_console_capture(self, page: Page, captured_console: List[Dict]) -> Optional[Callable]:
"""Setup console capture using Playwright's event system with stealth"""
# Apply stealth to the page first
await self.apply_stealth(page)
def handle_console_capture(msg):
try:
message_type = "unknown"
try:
message_type = msg.type
except:
pass
message_text = "unknown"
try:
message_text = msg.text
except:
pass
entry = {
"type": message_type,
"text": message_text,
"timestamp": time.time()
}
captured_console.append(entry)
except Exception as e:
captured_console.append({
"type": "console_capture_error",
"error": str(e),
"timestamp": time.time()
})
page.on("console", handle_console_capture)
return handle_console_capture
async def setup_error_capture(self, page: Page, captured_console: List[Dict]) -> Optional[Callable]:
"""Setup error capture using Playwright's event system"""
def handle_pageerror_capture(err):
try:
error_message = "Unknown error"
try:
error_message = err.message
except:
pass
error_stack = ""
try:
error_stack = err.stack
except:
pass
captured_console.append({
"type": "error",
"text": error_message,
"stack": error_stack,
"timestamp": time.time()
})
except Exception as e:
captured_console.append({
"type": "pageerror_capture_error",
"error": str(e),
"timestamp": time.time()
})
page.on("pageerror", handle_pageerror_capture)
return handle_pageerror_capture
async def retrieve_console_messages(self, page: Page) -> List[Dict]:
"""Not needed for Playwright - messages are captured via events"""
return []
async def cleanup_console_capture(self, page: Page, handle_console: Optional[Callable], handle_error: Optional[Callable]):
"""Remove event listeners"""
if handle_console:
page.remove_listener("console", handle_console)
if handle_error:
page.remove_listener("pageerror", handle_error)
def get_imports(self) -> tuple:
"""Return Playwright imports"""
from playwright.async_api import Page, Error
from playwright.async_api import TimeoutError as PlaywrightTimeoutError
return Page, Error, PlaywrightTimeoutError
class UndetectedAdapter(BrowserAdapter):
"""Adapter for undetected browser automation with stealth features"""

View File

@@ -15,6 +15,7 @@ from .js_snippet import load_js_script
from .config import DOWNLOAD_PAGE_TIMEOUT
from .async_configs import BrowserConfig, CrawlerRunConfig
from .utils import get_chromium_path
import warnings
BROWSER_DISABLE_OPTIONS = [
@@ -613,9 +614,11 @@ class BrowserManager:
# for all racers). Prevents 'Target page/context closed' errors.
self._page_lock = asyncio.Lock()
# Stealth-related attributes
self._stealth_instance = None
self._stealth_cm = None
# Stealth adapter for stealth mode
self._stealth_adapter = None
if self.config.enable_stealth and not self.use_undetected:
from .browser_adapter import StealthAdapter
self._stealth_adapter = StealthAdapter()
# Initialize ManagedBrowser if needed
if self.config.use_managed_browser:
@@ -649,16 +652,8 @@ class BrowserManager:
else:
from playwright.async_api import async_playwright
# Initialize playwright with or without stealth
if self.config.enable_stealth and not self.use_undetected:
# Import stealth only when needed
from playwright_stealth import Stealth
# Use the recommended stealth wrapper approach
self._stealth_instance = Stealth()
self._stealth_cm = self._stealth_instance.use_async(async_playwright())
self.playwright = await self._stealth_cm.__aenter__()
else:
self.playwright = await async_playwright().start()
# Initialize playwright
self.playwright = await async_playwright().start()
if self.config.cdp_url or self.config.use_managed_browser:
self.config.use_managed_browser = True
@@ -741,17 +736,18 @@ class BrowserManager:
)
os.makedirs(browser_args["downloads_path"], exist_ok=True)
if self.config.proxy or self.config.proxy_config:
if self.config.proxy:
warnings.warn(
"BrowserConfig.proxy is deprecated and ignored. Use proxy_config instead.",
DeprecationWarning,
)
if self.config.proxy_config:
from playwright.async_api import ProxySettings
proxy_settings = (
ProxySettings(server=self.config.proxy)
if self.config.proxy
else ProxySettings(
server=self.config.proxy_config.server,
username=self.config.proxy_config.username,
password=self.config.proxy_config.password,
)
proxy_settings = ProxySettings(
server=self.config.proxy_config.server,
username=self.config.proxy_config.username,
password=self.config.proxy_config.password,
)
browser_args["proxy"] = proxy_settings
@@ -1007,6 +1003,19 @@ class BrowserManager:
signature_hash = hashlib.sha256(signature_json.encode("utf-8")).hexdigest()
return signature_hash
async def _apply_stealth_to_page(self, page):
"""Apply stealth to a page if stealth mode is enabled"""
if self._stealth_adapter:
try:
await self._stealth_adapter.apply_stealth(page)
except Exception as e:
if self.logger:
self.logger.warning(
message="Failed to apply stealth to page: {error}",
tag="STEALTH",
params={"error": str(e)}
)
async def get_page(self, crawlerRunConfig: CrawlerRunConfig):
"""
Get a page for the given session ID, creating a new one if needed.
@@ -1036,6 +1045,7 @@ class BrowserManager:
# See GH-1198: context.pages can be empty under races
async with self._page_lock:
page = await ctx.new_page()
await self._apply_stealth_to_page(page)
else:
context = self.default_context
pages = context.pages
@@ -1052,6 +1062,7 @@ class BrowserManager:
page = pages[0]
else:
page = await context.new_page()
await self._apply_stealth_to_page(page)
else:
# Otherwise, check if we have an existing context for this config
config_signature = self._make_config_signature(crawlerRunConfig)
@@ -1067,6 +1078,7 @@ class BrowserManager:
# Create a new page from the chosen context
page = await context.new_page()
await self._apply_stealth_to_page(page)
# If a session_id is specified, store this session so we can reuse later
if crawlerRunConfig.session_id:
@@ -1133,19 +1145,5 @@ class BrowserManager:
self.managed_browser = None
if self.playwright:
# Handle stealth context manager cleanup if it exists
if hasattr(self, '_stealth_cm') and self._stealth_cm is not None:
try:
await self._stealth_cm.__aexit__(None, None, None)
except Exception as e:
if self.logger:
self.logger.error(
message="Error closing stealth context: {error}",
tag="ERROR",
params={"error": str(e)}
)
self._stealth_cm = None
self._stealth_instance = None
else:
await self.playwright.stop()
await self.playwright.stop()
self.playwright = None

View File

@@ -122,11 +122,6 @@ class BestFirstCrawlingStrategy(DeepCrawlStrategy):
valid_links.append(base_url)
# If we have more valid links than capacity, limit them
if len(valid_links) > remaining_capacity:
valid_links = valid_links[:remaining_capacity]
self.logger.info(f"Limiting to {remaining_capacity} URLs due to max_pages limit")
# Record the new depths and add to next_links
for url in valid_links:
depths[url] = new_depth
@@ -146,7 +141,8 @@ class BestFirstCrawlingStrategy(DeepCrawlStrategy):
"""
queue: asyncio.PriorityQueue = asyncio.PriorityQueue()
# Push the initial URL with score 0 and depth 0.
await queue.put((0, 0, start_url, None))
initial_score = self.url_scorer.score(start_url) if self.url_scorer else 0
await queue.put((-initial_score, 0, start_url, None))
visited: Set[str] = set()
depths: Dict[str, int] = {start_url: 0}
@@ -193,7 +189,7 @@ class BestFirstCrawlingStrategy(DeepCrawlStrategy):
result.metadata = result.metadata or {}
result.metadata["depth"] = depth
result.metadata["parent_url"] = parent_url
result.metadata["score"] = score
result.metadata["score"] = -score
# Count only successful crawls toward max_pages limit
if result.success:
@@ -214,7 +210,7 @@ class BestFirstCrawlingStrategy(DeepCrawlStrategy):
for new_url, new_parent in new_links:
new_depth = depths.get(new_url, depth + 1)
new_score = self.url_scorer.score(new_url) if self.url_scorer else 0
await queue.put((new_score, new_depth, new_url, new_parent))
await queue.put((-new_score, new_depth, new_url, new_parent))
# End of crawl.

File diff suppressed because it is too large

View File

@@ -0,0 +1,241 @@
# Crawl4AI Docker Memory & Pool Optimization - Implementation Log
## Critical Issues Identified
### Memory Management
- **Host vs Container**: `psutil.virtual_memory()` reported host memory, not container limits
- **Browser Pooling**: No pool reuse - every endpoint created new browsers
- **Warmup Waste**: Permanent browser sat idle with mismatched config signature
- **Idle Cleanup**: 30min TTL too long, janitor ran every 60s
- **Endpoint Inconsistency**: 75% of endpoints bypassed pool (`/md`, `/html`, `/screenshot`, `/pdf`, `/execute_js`, `/llm`)
### Pool Design Flaws
- **Config Mismatch**: Permanent browser used `config.yml` args, endpoints used empty `BrowserConfig()`
- **Logging Level**: Pool hit markers at DEBUG, invisible with INFO logging
## Implementation Changes
### 1. Container-Aware Memory Detection (`utils.py`)
```python
def get_container_memory_percent() -> float:
# Try cgroup v2 → v1 → fallback to psutil
# Reads /sys/fs/cgroup/memory.{current,max} OR memory/memory.{usage,limit}_in_bytes
```
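
A fuller hedged sketch of that fallback chain (paths are the standard cgroup v2/v1 locations; the in-repo implementation may differ in its details):

```python
# Sketch: container-aware memory percentage with a cgroup v2 -> v1 -> psutil fallback.
# Paths are the standard kernel locations; error handling is simplified.
from pathlib import Path
import psutil

def get_container_memory_percent() -> float:
    try:  # cgroup v2
        current = int(Path("/sys/fs/cgroup/memory.current").read_text())
        limit = Path("/sys/fs/cgroup/memory.max").read_text().strip()
        if limit != "max":
            return current / int(limit) * 100.0
    except (OSError, ValueError):
        pass
    try:  # cgroup v1
        usage = int(Path("/sys/fs/cgroup/memory/memory.usage_in_bytes").read_text())
        limit = int(Path("/sys/fs/cgroup/memory/memory.limit_in_bytes").read_text())
        if limit < 1 << 60:  # an absurdly large limit means "no limit set"
            return usage / limit * 100.0
    except (OSError, ValueError):
        pass
    return psutil.virtual_memory().percent  # host-level fallback (the old behaviour)
```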
### 2. Smart Browser Pool (`crawler_pool.py`)
**3-Tier System:**
- **PERMANENT**: Always-ready default browser (never cleaned)
- **HOT_POOL**: Configs used 3+ times (longer TTL)
- **COLD_POOL**: New/rare configs (short TTL)
**Key Functions:**
- `get_crawler(cfg)`: Check permanent → hot → cold → create new
- `init_permanent(cfg)`: Initialize permanent at startup
- `janitor()`: Adaptive cleanup (10s/30s/60s intervals based on memory)
- `_sig(cfg)`: SHA1 hash of config dict for pool keys
**Logging Fix**: Changed `logger.debug()` → `logger.info()` for pool hits
### 3. Endpoint Unification
**Helper Function** (`server.py`):
```python
def get_default_browser_config() -> BrowserConfig:
return BrowserConfig(
extra_args=config["crawler"]["browser"].get("extra_args", []),
**config["crawler"]["browser"].get("kwargs", {}),
)
```
**Migrated Endpoints:**
- `/html`, `/screenshot`, `/pdf`, `/execute_js` → use `get_default_browser_config()`
- `handle_llm_qa()`, `handle_markdown_request()` → same
**Result**: All endpoints now hit permanent browser pool
### 4. Config Updates (`config.yml`)
- `idle_ttl_sec: 1800` → `300` (30min → 5min base TTL)
- `port: 11234` → `11235` (fixed mismatch with Gunicorn)
### 5. Lifespan Fix (`server.py`)
```python
await init_permanent(BrowserConfig(
extra_args=config["crawler"]["browser"].get("extra_args", []),
**config["crawler"]["browser"].get("kwargs", {}),
))
```
Permanent browser now matches endpoint config signatures
## Test Results
### Test 1: Basic Health
- 10 requests to `/health`
- **Result**: 100% success, avg 3ms latency
- **Baseline**: Container starts in ~5s, 270 MB idle
### Test 2: Memory Monitoring
- 20 requests with Docker stats tracking
- **Result**: 100% success, no memory leak (-0.2 MB delta)
- **Baseline**: 269.7 MB container overhead
### Test 3: Pool Validation
- 30 requests to `/html` endpoint
- **Result**: **100% permanent browser hits**, 0 new browsers created
- **Memory**: 287 MB baseline → 396 MB active (+109 MB)
- **Latency**: Avg 4s (includes network to httpbin.org)
### Test 4: Concurrent Load
- Light (10) → Medium (50) → Heavy (100) concurrent
- **Total**: 320 requests
- **Result**: 100% success, **320/320 permanent hits**, 0 new browsers
- **Memory**: 269 MB → peak 1533 MB → final 993 MB
- **Latency**: P99 at 100 concurrent = 34s (expected with single browser)
### Test 5: Pool Stress (Mixed Configs)
- 20 requests with 4 different viewport configs
- **Result**: 4 new browsers, 4 cold hits, **4 promotions to hot**, 8 hot hits
- **Reuse Rate**: 60% (12 pool hits / 20 requests)
- **Memory**: 270 MB → 928 MB peak (+658 MB = ~165 MB per browser)
- **Proves**: Cold → hot promotion at 3 uses working perfectly
### Test 6: Multi-Endpoint
- 10 requests each: `/html`, `/screenshot`, `/pdf`, `/crawl`
- **Result**: 100% success across all 4 endpoints
- **Latency**: 5-8s avg (PDF slowest at 7.2s)
### Test 7: Cleanup Verification
- 20 requests (load spike) → 90s idle
- **Memory**: 269 MB → peak 1107 MB → final 780 MB
- **Recovery**: 327 MB (39%) - partial cleanup
- **Note**: Hot pool browsers persist (by design), janitor working correctly
## Performance Metrics
| Metric | Before | After | Improvement |
|--------|--------|-------|-------------|
| Pool Reuse | 0% | 100% (default config) | ∞ |
| Memory Leak | Unknown | 0 MB/cycle | Stable |
| Browser Reuse | No | Yes | ~3-5s saved per request |
| Idle Memory | 500-700 MB × N | 270-400 MB | 10x reduction |
| Concurrent Capacity | ~20 | 100+ | 5x |
## Key Learnings
1. **Config Signature Matching**: Permanent browser MUST match endpoint default config exactly (SHA1 hash)
2. **Logging Levels**: Pool diagnostics need INFO level, not DEBUG
3. **Memory in Docker**: Must read cgroup files, not host metrics
4. **Janitor Timing**: 60s interval adequate, but TTLs should be short (5min) for cold pool
5. **Hot Promotion**: 3-use threshold works well for production patterns
6. **Memory Per Browser**: ~150-200 MB per Chromium instance with headless + text_mode
## Test Infrastructure
**Location**: `deploy/docker/tests/`
**Dependencies**: `httpx`, `docker` (Python SDK)
**Pattern**: Sequential build - each test adds one capability
**Files**:
- `test_1_basic.py`: Health check + container lifecycle
- `test_2_memory.py`: + Docker stats monitoring
- `test_3_pool.py`: + Log analysis for pool markers
- `test_4_concurrent.py`: + asyncio.Semaphore for concurrency control
- `test_5_pool_stress.py`: + Config variants (viewports)
- `test_6_multi_endpoint.py`: + Multiple endpoint testing
- `test_7_cleanup.py`: + Time-series memory tracking for janitor
**Run Pattern**:
```bash
cd deploy/docker/tests
pip install -r requirements.txt
# Rebuild after code changes:
cd /path/to/repo && docker buildx build -t crawl4ai-local:latest --load .
# Run test:
python test_N_name.py
```
## Architecture Decisions
**Why Permanent Browser?**
- 90% of requests use default config → single browser serves most traffic
- Eliminates 3-5s startup overhead per request
**Why 3-Tier Pool?**
- Permanent: Zero cost for common case
- Hot: Amortized cost for frequent variants
- Cold: Lazy allocation for rare configs
**Why Adaptive Janitor?**
- Memory pressure triggers aggressive cleanup
- Low memory allows longer TTLs for better reuse
**Why Not Close After Each Request?**
- Browser startup: 3-5s overhead
- Pool reuse: <100ms overhead
- Net: 30-50x faster
## Future Optimizations
1. **Request Queuing**: When at capacity, queue instead of reject
2. **Pre-warming**: Predict common configs, pre-create browsers
3. **Metrics Export**: Prometheus metrics for pool efficiency
4. **Config Normalization**: Group similar viewports (e.g., 1920±50 → 1920)
## Critical Code Paths
**Browser Acquisition** (`crawler_pool.py:34-78`):
```
get_crawler(cfg) →
_sig(cfg) →
if sig == DEFAULT_CONFIG_SIG → PERMANENT
elif sig in HOT_POOL → HOT_POOL[sig]
elif sig in COLD_POOL → promote if count >= 3
else → create new in COLD_POOL
```
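
A compact hedged sketch of the same lookup in runnable form (locking, memory checks, and real browser startup are omitted; `_create_browser` is a hypothetical stand-in):

```python
# Runnable sketch of the tiered lookup above. PERMANENT/HOT_POOL/COLD_POOL/USAGE_COUNT
# mirror the names used in the pseudocode; _create_browser is hypothetical.
import hashlib, json

PERMANENT = object()                      # placeholder for the always-ready default crawler
HOT_POOL, COLD_POOL, USAGE_COUNT = {}, {}, {}
DEFAULT_CONFIG_SIG = None                 # set at startup from the default BrowserConfig

def _sig(cfg: dict) -> str:
    return hashlib.sha1(json.dumps(cfg, sort_keys=True).encode()).hexdigest()

async def _create_browser(cfg: dict):
    return object()                       # hypothetical: would start an AsyncWebCrawler

async def get_crawler(cfg: dict):
    sig = _sig(cfg)
    USAGE_COUNT[sig] = USAGE_COUNT.get(sig, 0) + 1
    if sig == DEFAULT_CONFIG_SIG:
        return PERMANENT                  # zero-cost common case
    if sig in HOT_POOL:
        return HOT_POOL[sig]
    if sig in COLD_POOL:
        if USAGE_COUNT[sig] >= 3:         # cold -> hot promotion threshold
            HOT_POOL[sig] = COLD_POOL.pop(sig)
            return HOT_POOL[sig]
        return COLD_POOL[sig]
    COLD_POOL[sig] = await _create_browser(cfg)
    return COLD_POOL[sig]
```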
**Janitor Loop** (`crawler_pool.py:107-146`):
```
while True:
mem% = get_container_memory_percent()
if mem% > 80: interval=10s, cold_ttl=30s
elif mem% > 60: interval=30s, cold_ttl=60s
else: interval=60s, cold_ttl=300s
sleep(interval)
close idle browsers (COLD then HOT)
```
**Endpoint Pattern** (`server.py` example):
```python
@app.post("/html")
async def generate_html(...):
from crawler_pool import get_crawler
crawler = await get_crawler(get_default_browser_config())
results = await crawler.arun(url=body.url, config=cfg)
# No crawler.close() - returned to pool
```
## Debugging Tips
**Check Pool Activity**:
```bash
docker logs crawl4ai-test | grep -E "(🔥|♨️|❄️|🆕|⬆️)"
```
**Verify Config Signature**:
```python
from crawl4ai import BrowserConfig
import json, hashlib
cfg = BrowserConfig(...)
sig = hashlib.sha1(json.dumps(cfg.to_dict(), sort_keys=True).encode()).hexdigest()
print(sig[:8]) # Compare with logs
```
**Monitor Memory**:
```bash
docker stats crawl4ai-test
```
## Known Limitations
- **Mac Docker Stats**: CPU metrics unreliable, memory works
- **PDF Generation**: Slowest endpoint (~7s), no optimization yet
- **Hot Pool Persistence**: May hold memory longer than needed (trade-off for performance)
- **Janitor Lag**: Up to 60s before cleanup triggers in low-memory scenarios

View File

@@ -66,6 +66,7 @@ async def handle_llm_qa(
config: dict
) -> str:
"""Process QA using LLM with crawled content as context."""
from crawler_pool import get_crawler
try:
if not url.startswith(('http://', 'https://')) and not url.startswith(("raw:", "raw://")):
url = 'https://' + url
@@ -74,15 +75,21 @@ async def handle_llm_qa(
if last_q_index != -1:
url = url[:last_q_index]
# Get markdown content
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url)
if not result.success:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=result.error_message
)
content = result.markdown.fit_markdown or result.markdown.raw_markdown
# Get markdown content (use default config)
from utils import load_config
cfg = load_config()
browser_cfg = BrowserConfig(
extra_args=cfg["crawler"]["browser"].get("extra_args", []),
**cfg["crawler"]["browser"].get("kwargs", {}),
)
crawler = await get_crawler(browser_cfg)
result = await crawler.arun(url)
if not result.success:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=result.error_message
)
content = result.markdown.fit_markdown or result.markdown.raw_markdown
# Create prompt and get LLM response
prompt = f"""Use the following content as context to answer the question.
@@ -224,25 +231,32 @@ async def handle_markdown_request(
cache_mode = CacheMode.ENABLED if cache == "1" else CacheMode.WRITE_ONLY
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url=decoded_url,
config=CrawlerRunConfig(
markdown_generator=md_generator,
scraping_strategy=LXMLWebScrapingStrategy(),
cache_mode=cache_mode
)
from crawler_pool import get_crawler
from utils import load_config as _load_config
_cfg = _load_config()
browser_cfg = BrowserConfig(
extra_args=_cfg["crawler"]["browser"].get("extra_args", []),
**_cfg["crawler"]["browser"].get("kwargs", {}),
)
crawler = await get_crawler(browser_cfg)
result = await crawler.arun(
url=decoded_url,
config=CrawlerRunConfig(
markdown_generator=md_generator,
scraping_strategy=LXMLWebScrapingStrategy(),
cache_mode=cache_mode
)
if not result.success:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=result.error_message
)
)
return (result.markdown.raw_markdown
if filter_type == FilterType.RAW
else result.markdown.fit_markdown)
if not result.success:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=result.error_message
)
return (result.markdown.raw_markdown
if filter_type == FilterType.RAW
else result.markdown.fit_markdown)
except Exception as e:
logger.error(f"Markdown error: {str(e)}", exc_info=True)
@@ -442,14 +456,26 @@ async def handle_crawl_request(
urls: List[str],
browser_config: dict,
crawler_config: dict,
config: dict
config: dict,
hooks_config: Optional[dict] = None
) -> dict:
"""Handle non-streaming crawl requests."""
"""Handle non-streaming crawl requests with optional hooks."""
# Track request start
request_id = f"req_{uuid4().hex[:8]}"
try:
from monitor import get_monitor
await get_monitor().track_request_start(
request_id, "/crawl", urls[0] if urls else "batch", browser_config
)
except:
pass # Monitor not critical
start_mem_mb = _get_memory_mb() # <--- Get memory before
start_time = time.time()
mem_delta_mb = None
peak_mem_mb = start_mem_mb
hook_manager = None
try:
urls = [('https://' + url) if not url.startswith(('http://', 'https://')) and not url.startswith(("raw:", "raw://")) else url for url in urls]
browser_config = BrowserConfig.load(browser_config)
@@ -468,11 +494,27 @@ async def handle_crawl_request(
# crawler: AsyncWebCrawler = AsyncWebCrawler(config=browser_config)
# await crawler.start()
# Attach hooks if provided
hooks_status = {}
if hooks_config:
from hook_manager import attach_user_hooks_to_crawler, UserHookManager
hook_manager = UserHookManager(timeout=hooks_config.get('timeout', 30))
hooks_status, hook_manager = await attach_user_hooks_to_crawler(
crawler,
hooks_config.get('code', {}),
timeout=hooks_config.get('timeout', 30),
hook_manager=hook_manager
)
logger.info(f"Hooks attachment status: {hooks_status['status']}")
base_config = config["crawler"]["base_config"]
# Iterate on key-value pairs in global_config then use haseattr to set them
# Iterate on key-value pairs in global_config then use hasattr to set them
for key, value in base_config.items():
if hasattr(crawler_config, key):
setattr(crawler_config, key, value)
current_value = getattr(crawler_config, key)
# Only set base config if user didn't provide a value
if current_value is None or current_value == "":
setattr(crawler_config, key, value)
results = []
func = getattr(crawler, "arun" if len(urls) == 1 else "arun_many")
@@ -481,6 +523,10 @@ async def handle_crawl_request(
config=crawler_config,
dispatcher=dispatcher)
results = await partial_func()
# Ensure results is always a list
if not isinstance(results, list):
results = [results]
# await crawler.close()
@@ -495,16 +541,39 @@ async def handle_crawl_request(
# Process results to handle PDF bytes
processed_results = []
for result in results:
result_dict = result.model_dump()
# if fit_html is not a string, set it to None to avoid serialization errors
if "fit_html" in result_dict and not (result_dict["fit_html"] is None or isinstance(result_dict["fit_html"], str)):
result_dict["fit_html"] = None
# If PDF exists, encode it to base64
if result_dict.get('pdf') is not None:
result_dict['pdf'] = b64encode(result_dict['pdf']).decode('utf-8')
processed_results.append(result_dict)
try:
# Check if result has model_dump method (is a proper CrawlResult)
if hasattr(result, 'model_dump'):
result_dict = result.model_dump()
elif isinstance(result, dict):
result_dict = result
else:
# Handle unexpected result type
logger.warning(f"Unexpected result type: {type(result)}")
result_dict = {
"url": str(result) if hasattr(result, '__str__') else "unknown",
"success": False,
"error_message": f"Unexpected result type: {type(result).__name__}"
}
# if fit_html is not a string, set it to None to avoid serialization errors
if "fit_html" in result_dict and not (result_dict["fit_html"] is None or isinstance(result_dict["fit_html"], str)):
result_dict["fit_html"] = None
# If PDF exists, encode it to base64
if result_dict.get('pdf') is not None and isinstance(result_dict.get('pdf'), bytes):
result_dict['pdf'] = b64encode(result_dict['pdf']).decode('utf-8')
processed_results.append(result_dict)
except Exception as e:
logger.error(f"Error processing result: {e}")
processed_results.append({
"url": "unknown",
"success": False,
"error_message": str(e)
})
return {
response = {
"success": True,
"results": processed_results,
"server_processing_time_s": end_time - start_time,
@@ -512,8 +581,53 @@ async def handle_crawl_request(
"server_peak_memory_mb": peak_mem_mb
}
# Track request completion
try:
from monitor import get_monitor
await get_monitor().track_request_end(
request_id, success=True, pool_hit=True, status_code=200
)
except:
pass
# Add hooks information if hooks were used
if hooks_config and hook_manager:
from hook_manager import UserHookManager
if isinstance(hook_manager, UserHookManager):
try:
# Ensure all hook data is JSON serializable
hook_data = {
"status": hooks_status,
"execution_log": hook_manager.execution_log,
"errors": hook_manager.errors,
"summary": hook_manager.get_summary()
}
# Test that it's serializable
json.dumps(hook_data)
response["hooks"] = hook_data
except (TypeError, ValueError) as e:
logger.error(f"Hook data not JSON serializable: {e}")
response["hooks"] = {
"status": {"status": "error", "message": "Hook data serialization failed"},
"execution_log": [],
"errors": [{"error": str(e)}],
"summary": {}
}
return response
except Exception as e:
logger.error(f"Crawl error: {str(e)}", exc_info=True)
# Track request error
try:
from monitor import get_monitor
await get_monitor().track_request_end(
request_id, success=False, error=str(e), status_code=500
)
except:
pass
if 'crawler' in locals() and crawler.ready: # Check if crawler was initialized and started
# try:
# await crawler.close()
@@ -539,9 +653,11 @@ async def handle_stream_crawl_request(
urls: List[str],
browser_config: dict,
crawler_config: dict,
config: dict
) -> Tuple[AsyncWebCrawler, AsyncGenerator]:
"""Handle streaming crawl requests."""
config: dict,
hooks_config: Optional[dict] = None
) -> Tuple[AsyncWebCrawler, AsyncGenerator, Optional[Dict]]:
"""Handle streaming crawl requests with optional hooks."""
hooks_info = None
try:
browser_config = BrowserConfig.load(browser_config)
# browser_config.verbose = True # Set to False or remove for production stress testing
@@ -562,6 +678,20 @@ async def handle_stream_crawl_request(
# crawler = AsyncWebCrawler(config=browser_config)
# await crawler.start()
# Attach hooks if provided
if hooks_config:
from hook_manager import attach_user_hooks_to_crawler, UserHookManager
hook_manager = UserHookManager(timeout=hooks_config.get('timeout', 30))
hooks_status, hook_manager = await attach_user_hooks_to_crawler(
crawler,
hooks_config.get('code', {}),
timeout=hooks_config.get('timeout', 30),
hook_manager=hook_manager
)
logger.info(f"Hooks attachment status for streaming: {hooks_status['status']}")
# Include hook manager in hooks_info for proper tracking
hooks_info = {'status': hooks_status, 'manager': hook_manager}
results_gen = await crawler.arun_many(
urls=urls,
@@ -569,7 +699,7 @@ async def handle_stream_crawl_request(
dispatcher=dispatcher
)
return crawler, results_gen
return crawler, results_gen, hooks_info
except Exception as e:
# Make sure to close crawler if started during an error here

View File

@@ -28,25 +28,43 @@ def create_access_token(data: dict, expires_delta: Optional[timedelta] = None) -
signing_key = get_jwk_from_secret(SECRET_KEY)
return instance.encode(to_encode, signing_key, alg='HS256')
def verify_token(credentials: HTTPAuthorizationCredentials = Depends(security)) -> Dict:
def verify_token(credentials: HTTPAuthorizationCredentials) -> Dict:
"""Verify the JWT token from the Authorization header."""
if credentials is None:
return None
if not credentials or not credentials.credentials:
raise HTTPException(
status_code=401,
detail="No token provided",
headers={"WWW-Authenticate": "Bearer"}
)
token = credentials.credentials
verifying_key = get_jwk_from_secret(SECRET_KEY)
try:
payload = instance.decode(token, verifying_key, do_time_check=True, algorithms='HS256')
return payload
except Exception:
raise HTTPException(status_code=401, detail="Invalid or expired token")
except Exception as e:
raise HTTPException(
status_code=401,
detail=f"Invalid or expired token: {str(e)}",
headers={"WWW-Authenticate": "Bearer"}
)
def get_token_dependency(config: Dict):
"""Return the token dependency if JWT is enabled, else a function that returns None."""
if config.get("security", {}).get("jwt_enabled", False):
return verify_token
def jwt_required(credentials: HTTPAuthorizationCredentials = Depends(security)) -> Dict:
"""Enforce JWT authentication when enabled."""
if credentials is None:
raise HTTPException(
status_code=401,
detail="Authentication required. Please provide a valid Bearer token.",
headers={"WWW-Authenticate": "Bearer"}
)
return verify_token(credentials)
return jwt_required
else:
return lambda: None
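For orientation, a minimal sketch (not part of the diff) of how the returned dependency is typically wired into a route; the module path and config contents are assumptions:

from fastapi import Depends, FastAPI
from auth import get_token_dependency  # deploy/docker/auth.py, path assumed

app = FastAPI()
config = {"security": {"jwt_enabled": True}}  # illustrative config shape
token_dep = get_token_dependency(config)

@app.get("/protected")
async def protected(payload: dict = Depends(token_dep)):
    # With jwt_enabled, a missing or invalid Bearer token raises 401 before this body runs;
    # with JWT disabled, token_dep is a no-op and payload is None.
    return {"sub": payload.get("sub") if payload else None}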

View File

@@ -7520,17 +7520,18 @@ class BrowserManager:
)
os.makedirs(browser_args["downloads_path"], exist_ok=True)
if self.config.proxy or self.config.proxy_config:
if self.config.proxy:
warnings.warn(
"BrowserConfig.proxy is deprecated and ignored. Use proxy_config instead.",
DeprecationWarning,
)
if self.config.proxy_config:
from playwright.async_api import ProxySettings
proxy_settings = (
ProxySettings(server=self.config.proxy)
if self.config.proxy
else ProxySettings(
server=self.config.proxy_config.server,
username=self.config.proxy_config.username,
password=self.config.proxy_config.password,
)
proxy_settings = ProxySettings(
server=self.config.proxy_config.server,
username=self.config.proxy_config.username,
password=self.config.proxy_config.password,
)
browser_args["proxy"] = proxy_settings

View File

@@ -3,7 +3,7 @@ app:
title: "Crawl4AI API"
version: "1.0.0"
host: "0.0.0.0"
port: 11234
port: 11235
reload: False
workers: 1
timeout_keep_alive: 300
@@ -38,8 +38,8 @@ rate_limiting:
# Security Configuration
security:
enabled: false
jwt_enabled: false
enabled: false
jwt_enabled: false
https_redirect: false
trusted_hosts: ["*"]
headers:
@@ -61,7 +61,7 @@ crawler:
batch_process: 300.0 # Timeout for batch processing
pool:
max_pages: 40 # ← GLOBAL_SEM permits
idle_ttl_sec: 1800 # ← 30 min janitor cutoff
    idle_ttl_sec: 300             # ← 5 min janitor cutoff
browser:
kwargs:
headless: true

View File

@@ -1,60 +1,170 @@
# crawler_pool.py (new file)
import asyncio, json, hashlib, time, psutil
# crawler_pool.py - Smart browser pool with tiered management
import asyncio, json, hashlib, time
from contextlib import suppress
from typing import Dict
from typing import Dict, Optional
from crawl4ai import AsyncWebCrawler, BrowserConfig
from typing import Dict
from utils import load_config
from utils import load_config, get_container_memory_percent
import logging
logger = logging.getLogger(__name__)
CONFIG = load_config()
POOL: Dict[str, AsyncWebCrawler] = {}
# Pool tiers
PERMANENT: Optional[AsyncWebCrawler] = None # Always-ready default browser
HOT_POOL: Dict[str, AsyncWebCrawler] = {} # Frequent configs
COLD_POOL: Dict[str, AsyncWebCrawler] = {} # Rare configs
LAST_USED: Dict[str, float] = {}
USAGE_COUNT: Dict[str, int] = {}
LOCK = asyncio.Lock()
MEM_LIMIT = CONFIG.get("crawler", {}).get("memory_threshold_percent", 95.0) # % RAM refuse new browsers above this
IDLE_TTL = CONFIG.get("crawler", {}).get("pool", {}).get("idle_ttl_sec", 1800) # close if unused for 30min
# Config
MEM_LIMIT = CONFIG.get("crawler", {}).get("memory_threshold_percent", 95.0)
BASE_IDLE_TTL = CONFIG.get("crawler", {}).get("pool", {}).get("idle_ttl_sec", 300)
DEFAULT_CONFIG_SIG = None # Cached sig for default config
def _sig(cfg: BrowserConfig) -> str:
"""Generate config signature."""
payload = json.dumps(cfg.to_dict(), sort_keys=True, separators=(",",":"))
return hashlib.sha1(payload.encode()).hexdigest()
def _is_default_config(sig: str) -> bool:
"""Check if config matches default."""
return sig == DEFAULT_CONFIG_SIG
async def get_crawler(cfg: BrowserConfig) -> AsyncWebCrawler:
try:
sig = _sig(cfg)
async with LOCK:
if sig in POOL:
LAST_USED[sig] = time.time();
return POOL[sig]
if psutil.virtual_memory().percent >= MEM_LIMIT:
raise MemoryError("RAM pressure new browser denied")
crawler = AsyncWebCrawler(config=cfg, thread_safe=False)
await crawler.start()
POOL[sig] = crawler; LAST_USED[sig] = time.time()
return crawler
except MemoryError as e:
raise MemoryError(f"RAM pressure new browser denied: {e}")
except Exception as e:
raise RuntimeError(f"Failed to start browser: {e}")
finally:
if sig in POOL:
LAST_USED[sig] = time.time()
else:
# If we failed to start the browser, we should remove it from the pool
POOL.pop(sig, None)
LAST_USED.pop(sig, None)
# If we failed to start the browser, we should remove it from the pool
async def close_all():
"""Get crawler from pool with tiered strategy."""
sig = _sig(cfg)
async with LOCK:
await asyncio.gather(*(c.close() for c in POOL.values()), return_exceptions=True)
POOL.clear(); LAST_USED.clear()
# Check permanent browser for default config
if PERMANENT and _is_default_config(sig):
LAST_USED[sig] = time.time()
USAGE_COUNT[sig] = USAGE_COUNT.get(sig, 0) + 1
logger.info("🔥 Using permanent browser")
return PERMANENT
# Check hot pool
if sig in HOT_POOL:
LAST_USED[sig] = time.time()
USAGE_COUNT[sig] = USAGE_COUNT.get(sig, 0) + 1
logger.info(f"♨️ Using hot pool browser (sig={sig[:8]})")
return HOT_POOL[sig]
# Check cold pool (promote to hot if used 3+ times)
if sig in COLD_POOL:
LAST_USED[sig] = time.time()
USAGE_COUNT[sig] = USAGE_COUNT.get(sig, 0) + 1
if USAGE_COUNT[sig] >= 3:
logger.info(f"⬆️ Promoting to hot pool (sig={sig[:8]}, count={USAGE_COUNT[sig]})")
HOT_POOL[sig] = COLD_POOL.pop(sig)
# Track promotion in monitor
try:
from monitor import get_monitor
await get_monitor().track_janitor_event("promote", sig, {"count": USAGE_COUNT[sig]})
except:
pass
return HOT_POOL[sig]
logger.info(f"❄️ Using cold pool browser (sig={sig[:8]})")
return COLD_POOL[sig]
# Memory check before creating new
mem_pct = get_container_memory_percent()
if mem_pct >= MEM_LIMIT:
logger.error(f"💥 Memory pressure: {mem_pct:.1f}% >= {MEM_LIMIT}%")
raise MemoryError(f"Memory at {mem_pct:.1f}%, refusing new browser")
# Create new in cold pool
logger.info(f"🆕 Creating new browser in cold pool (sig={sig[:8]}, mem={mem_pct:.1f}%)")
crawler = AsyncWebCrawler(config=cfg, thread_safe=False)
await crawler.start()
COLD_POOL[sig] = crawler
LAST_USED[sig] = time.time()
USAGE_COUNT[sig] = 1
return crawler
async def init_permanent(cfg: BrowserConfig):
"""Initialize permanent default browser."""
global PERMANENT, DEFAULT_CONFIG_SIG
async with LOCK:
if PERMANENT:
return
DEFAULT_CONFIG_SIG = _sig(cfg)
logger.info("🔥 Creating permanent default browser")
PERMANENT = AsyncWebCrawler(config=cfg, thread_safe=False)
await PERMANENT.start()
LAST_USED[DEFAULT_CONFIG_SIG] = time.time()
USAGE_COUNT[DEFAULT_CONFIG_SIG] = 0
async def close_all():
"""Close all browsers."""
async with LOCK:
tasks = []
if PERMANENT:
tasks.append(PERMANENT.close())
tasks.extend([c.close() for c in HOT_POOL.values()])
tasks.extend([c.close() for c in COLD_POOL.values()])
await asyncio.gather(*tasks, return_exceptions=True)
HOT_POOL.clear()
COLD_POOL.clear()
LAST_USED.clear()
USAGE_COUNT.clear()
async def janitor():
"""Adaptive cleanup based on memory pressure."""
while True:
await asyncio.sleep(60)
mem_pct = get_container_memory_percent()
# Adaptive intervals and TTLs
if mem_pct > 80:
interval, cold_ttl, hot_ttl = 10, 30, 120
elif mem_pct > 60:
interval, cold_ttl, hot_ttl = 30, 60, 300
else:
interval, cold_ttl, hot_ttl = 60, BASE_IDLE_TTL, BASE_IDLE_TTL * 2
await asyncio.sleep(interval)
now = time.time()
async with LOCK:
for sig, crawler in list(POOL.items()):
if now - LAST_USED[sig] > IDLE_TTL:
with suppress(Exception): await crawler.close()
POOL.pop(sig, None); LAST_USED.pop(sig, None)
# Clean cold pool
for sig in list(COLD_POOL.keys()):
if now - LAST_USED.get(sig, now) > cold_ttl:
idle_time = now - LAST_USED[sig]
logger.info(f"🧹 Closing cold browser (sig={sig[:8]}, idle={idle_time:.0f}s)")
with suppress(Exception):
await COLD_POOL[sig].close()
COLD_POOL.pop(sig, None)
LAST_USED.pop(sig, None)
USAGE_COUNT.pop(sig, None)
# Track in monitor
try:
from monitor import get_monitor
await get_monitor().track_janitor_event("close_cold", sig, {"idle_seconds": int(idle_time), "ttl": cold_ttl})
except:
pass
# Clean hot pool (more conservative)
for sig in list(HOT_POOL.keys()):
if now - LAST_USED.get(sig, now) > hot_ttl:
idle_time = now - LAST_USED[sig]
logger.info(f"🧹 Closing hot browser (sig={sig[:8]}, idle={idle_time:.0f}s)")
with suppress(Exception):
await HOT_POOL[sig].close()
HOT_POOL.pop(sig, None)
LAST_USED.pop(sig, None)
USAGE_COUNT.pop(sig, None)
# Track in monitor
try:
from monitor import get_monitor
await get_monitor().track_janitor_event("close_hot", sig, {"idle_seconds": int(idle_time), "ttl": hot_ttl})
except:
pass
# Log pool stats
if mem_pct > 60:
logger.info(f"📊 Pool: hot={len(HOT_POOL)}, cold={len(COLD_POOL)}, mem={mem_pct:.1f}%")

View File

@@ -0,0 +1,512 @@
"""
Hook Manager for User-Provided Hook Functions
Handles validation, compilation, and safe execution of user-provided hook code
"""
import ast
import asyncio
import traceback
from typing import Dict, Callable, Optional, Tuple, List, Any
import logging
logger = logging.getLogger(__name__)
class UserHookManager:
"""Manages user-provided hook functions with error isolation"""
# Expected signatures for each hook point
HOOK_SIGNATURES = {
"on_browser_created": ["browser"],
"on_page_context_created": ["page", "context"],
"before_goto": ["page", "context", "url"],
"after_goto": ["page", "context", "url", "response"],
"on_user_agent_updated": ["page", "context", "user_agent"],
"on_execution_started": ["page", "context"],
"before_retrieve_html": ["page", "context"],
"before_return_html": ["page", "context", "html"]
}
# Default timeout for hook execution (in seconds)
DEFAULT_TIMEOUT = 30
def __init__(self, timeout: int = DEFAULT_TIMEOUT):
self.timeout = timeout
self.errors: List[Dict[str, Any]] = []
self.compiled_hooks: Dict[str, Callable] = {}
self.execution_log: List[Dict[str, Any]] = []
def validate_hook_structure(self, hook_code: str, hook_point: str) -> Tuple[bool, str]:
"""
Validate the structure of user-provided hook code
Args:
hook_code: The Python code string containing the hook function
hook_point: The hook point name (e.g., 'on_page_context_created')
Returns:
Tuple of (is_valid, error_message)
"""
try:
# Parse the code
tree = ast.parse(hook_code)
# Check if it's empty
if not tree.body:
return False, "Hook code is empty"
# Find the function definition
func_def = None
for node in tree.body:
if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
func_def = node
break
if not func_def:
return False, "Hook must contain a function definition (def or async def)"
# Check if it's async (all hooks should be async)
if not isinstance(func_def, ast.AsyncFunctionDef):
return False, f"Hook function must be async (use 'async def' instead of 'def')"
# Get function name for better error messages
func_name = func_def.name
# Validate parameters
expected_params = self.HOOK_SIGNATURES.get(hook_point, [])
if not expected_params:
return False, f"Unknown hook point: {hook_point}"
func_params = [arg.arg for arg in func_def.args.args]
# Check if it has **kwargs for flexibility
has_kwargs = func_def.args.kwarg is not None
# Must have at least the expected parameters
missing_params = []
for expected in expected_params:
if expected not in func_params:
missing_params.append(expected)
if missing_params and not has_kwargs:
return False, f"Hook function '{func_name}' must accept parameters: {', '.join(expected_params)} (missing: {', '.join(missing_params)})"
# Check if it returns something (should return page or browser)
has_return = any(isinstance(node, ast.Return) for node in ast.walk(func_def))
if not has_return:
# Warning, not error - we'll handle this
logger.warning(f"Hook function '{func_name}' should return the {expected_params[0]} object")
return True, "Valid"
except SyntaxError as e:
return False, f"Syntax error at line {e.lineno}: {str(e)}"
except Exception as e:
return False, f"Failed to parse hook code: {str(e)}"
def compile_hook(self, hook_code: str, hook_point: str) -> Optional[Callable]:
"""
Compile user-provided hook code into a callable function
Args:
hook_code: The Python code string
hook_point: The hook point name
Returns:
Compiled function or None if compilation failed
"""
try:
# Create a safe namespace for the hook
# Use a more complete builtins that includes __import__
import builtins
safe_builtins = {}
# Add safe built-in functions
allowed_builtins = [
'print', 'len', 'str', 'int', 'float', 'bool',
'list', 'dict', 'set', 'tuple', 'range', 'enumerate',
'zip', 'map', 'filter', 'any', 'all', 'sum', 'min', 'max',
'sorted', 'reversed', 'abs', 'round', 'isinstance', 'type',
'getattr', 'hasattr', 'setattr', 'callable', 'iter', 'next',
'__import__', '__build_class__' # Required for exec
]
for name in allowed_builtins:
if hasattr(builtins, name):
safe_builtins[name] = getattr(builtins, name)
namespace = {
'__name__': f'user_hook_{hook_point}',
'__builtins__': safe_builtins
}
# Add commonly needed imports
exec("import asyncio", namespace)
exec("import json", namespace)
exec("import re", namespace)
exec("from typing import Dict, List, Optional", namespace)
# Execute the code to define the function
exec(hook_code, namespace)
# Find the async function in the namespace
for name, obj in namespace.items():
if callable(obj) and not name.startswith('_') and asyncio.iscoroutinefunction(obj):
return obj
# If no async function found, look for any function
for name, obj in namespace.items():
if callable(obj) and not name.startswith('_'):
logger.warning(f"Found non-async function '{name}' - wrapping it")
# Wrap sync function in async
async def async_wrapper(*args, **kwargs):
return obj(*args, **kwargs)
return async_wrapper
raise ValueError("No callable function found in hook code")
except Exception as e:
error = {
'hook_point': hook_point,
'error': f"Failed to compile hook: {str(e)}",
'type': 'compilation_error',
'traceback': traceback.format_exc()
}
self.errors.append(error)
logger.error(f"Hook compilation failed for {hook_point}: {str(e)}")
return None
async def execute_hook_safely(
self,
hook_func: Callable,
hook_point: str,
*args,
**kwargs
) -> Tuple[Any, Optional[Dict]]:
"""
Execute a user hook with error isolation and timeout
Args:
hook_func: The compiled hook function
hook_point: The hook point name
*args, **kwargs: Arguments to pass to the hook
Returns:
Tuple of (result, error_dict)
"""
start_time = asyncio.get_event_loop().time()
try:
# Add timeout to prevent infinite loops
result = await asyncio.wait_for(
hook_func(*args, **kwargs),
timeout=self.timeout
)
# Log successful execution
execution_time = asyncio.get_event_loop().time() - start_time
self.execution_log.append({
'hook_point': hook_point,
'status': 'success',
'execution_time': execution_time,
'timestamp': start_time
})
return result, None
except asyncio.TimeoutError:
error = {
'hook_point': hook_point,
'error': f'Hook execution timed out ({self.timeout}s limit)',
'type': 'timeout',
'execution_time': self.timeout
}
self.errors.append(error)
self.execution_log.append({
'hook_point': hook_point,
'status': 'timeout',
'error': error['error'],
'execution_time': self.timeout,
'timestamp': start_time
})
# Return the first argument (usually page/browser) to continue
return args[0] if args else None, error
except Exception as e:
execution_time = asyncio.get_event_loop().time() - start_time
error = {
'hook_point': hook_point,
'error': str(e),
'type': type(e).__name__,
'traceback': traceback.format_exc(),
'execution_time': execution_time
}
self.errors.append(error)
self.execution_log.append({
'hook_point': hook_point,
'status': 'failed',
'error': str(e),
'error_type': type(e).__name__,
'execution_time': execution_time,
'timestamp': start_time
})
# Return the first argument (usually page/browser) to continue
return args[0] if args else None, error
def get_summary(self) -> Dict[str, Any]:
"""Get a summary of hook execution"""
total_hooks = len(self.execution_log)
successful = sum(1 for log in self.execution_log if log['status'] == 'success')
failed = sum(1 for log in self.execution_log if log['status'] == 'failed')
timed_out = sum(1 for log in self.execution_log if log['status'] == 'timeout')
return {
'total_executions': total_hooks,
'successful': successful,
'failed': failed,
'timed_out': timed_out,
'success_rate': (successful / total_hooks * 100) if total_hooks > 0 else 0,
'total_errors': len(self.errors)
}
class IsolatedHookWrapper:
"""Wraps user hooks with error isolation and reporting"""
def __init__(self, hook_manager: UserHookManager):
self.hook_manager = hook_manager
def create_hook_wrapper(self, user_hook: Callable, hook_point: str) -> Callable:
"""
Create a wrapper that isolates hook errors from main process
Args:
user_hook: The compiled user hook function
hook_point: The hook point name
Returns:
Wrapped async function that handles errors gracefully
"""
async def wrapped_hook(*args, **kwargs):
"""Wrapped hook with error isolation"""
# Get the main return object (page/browser)
# This ensures we always have something to return
return_obj = None
if args:
return_obj = args[0]
elif 'page' in kwargs:
return_obj = kwargs['page']
elif 'browser' in kwargs:
return_obj = kwargs['browser']
try:
# Execute user hook with safety
result, error = await self.hook_manager.execute_hook_safely(
user_hook,
hook_point,
*args,
**kwargs
)
if error:
# Hook failed but we continue with original object
logger.warning(f"User hook failed at {hook_point}: {error['error']}")
return return_obj
# Hook succeeded - return its result or the original object
if result is None:
logger.debug(f"Hook at {hook_point} returned None, using original object")
return return_obj
return result
except Exception as e:
# This should rarely happen due to execute_hook_safely
logger.error(f"Unexpected error in hook wrapper for {hook_point}: {e}")
return return_obj
# Set function name for debugging
wrapped_hook.__name__ = f"wrapped_{hook_point}"
return wrapped_hook
async def process_user_hooks(
hooks_input: Dict[str, str],
timeout: int = 30
) -> Tuple[Dict[str, Callable], List[Dict], UserHookManager]:
"""
Process and compile user-provided hook functions
Args:
hooks_input: Dictionary mapping hook points to code strings
timeout: Timeout for each hook execution
Returns:
Tuple of (compiled_hooks, validation_errors, hook_manager)
"""
hook_manager = UserHookManager(timeout=timeout)
wrapper = IsolatedHookWrapper(hook_manager)
compiled_hooks = {}
validation_errors = []
for hook_point, hook_code in hooks_input.items():
# Skip empty hooks
if not hook_code or not hook_code.strip():
continue
# Validate hook point
if hook_point not in UserHookManager.HOOK_SIGNATURES:
validation_errors.append({
'hook_point': hook_point,
'error': f'Unknown hook point. Valid points: {", ".join(UserHookManager.HOOK_SIGNATURES.keys())}',
'code_preview': hook_code[:100] + '...' if len(hook_code) > 100 else hook_code
})
continue
# Validate structure
is_valid, message = hook_manager.validate_hook_structure(hook_code, hook_point)
if not is_valid:
validation_errors.append({
'hook_point': hook_point,
'error': message,
'code_preview': hook_code[:100] + '...' if len(hook_code) > 100 else hook_code
})
continue
# Compile the hook
hook_func = hook_manager.compile_hook(hook_code, hook_point)
if hook_func:
# Wrap with error isolation
wrapped_hook = wrapper.create_hook_wrapper(hook_func, hook_point)
compiled_hooks[hook_point] = wrapped_hook
logger.info(f"Successfully compiled hook for {hook_point}")
else:
validation_errors.append({
'hook_point': hook_point,
'error': 'Failed to compile hook function - check syntax and structure',
'code_preview': hook_code[:100] + '...' if len(hook_code) > 100 else hook_code
})
return compiled_hooks, validation_errors, hook_manager
async def process_user_hooks_with_manager(
hooks_input: Dict[str, str],
hook_manager: UserHookManager
) -> Tuple[Dict[str, Callable], List[Dict]]:
"""
Process and compile user-provided hook functions with existing manager
Args:
hooks_input: Dictionary mapping hook points to code strings
hook_manager: Existing UserHookManager instance
Returns:
Tuple of (compiled_hooks, validation_errors)
"""
wrapper = IsolatedHookWrapper(hook_manager)
compiled_hooks = {}
validation_errors = []
for hook_point, hook_code in hooks_input.items():
# Skip empty hooks
if not hook_code or not hook_code.strip():
continue
# Validate hook point
if hook_point not in UserHookManager.HOOK_SIGNATURES:
validation_errors.append({
'hook_point': hook_point,
'error': f'Unknown hook point. Valid points: {", ".join(UserHookManager.HOOK_SIGNATURES.keys())}',
'code_preview': hook_code[:100] + '...' if len(hook_code) > 100 else hook_code
})
continue
# Validate structure
is_valid, message = hook_manager.validate_hook_structure(hook_code, hook_point)
if not is_valid:
validation_errors.append({
'hook_point': hook_point,
'error': message,
'code_preview': hook_code[:100] + '...' if len(hook_code) > 100 else hook_code
})
continue
# Compile the hook
hook_func = hook_manager.compile_hook(hook_code, hook_point)
if hook_func:
# Wrap with error isolation
wrapped_hook = wrapper.create_hook_wrapper(hook_func, hook_point)
compiled_hooks[hook_point] = wrapped_hook
logger.info(f"Successfully compiled hook for {hook_point}")
else:
validation_errors.append({
'hook_point': hook_point,
'error': 'Failed to compile hook function - check syntax and structure',
'code_preview': hook_code[:100] + '...' if len(hook_code) > 100 else hook_code
})
return compiled_hooks, validation_errors
async def attach_user_hooks_to_crawler(
crawler, # AsyncWebCrawler instance
user_hooks: Dict[str, str],
timeout: int = 30,
hook_manager: Optional[UserHookManager] = None
) -> Tuple[Dict[str, Any], UserHookManager]:
"""
Attach user-provided hooks to crawler with full error reporting
Args:
crawler: AsyncWebCrawler instance
user_hooks: Dictionary mapping hook points to code strings
timeout: Timeout for each hook execution
hook_manager: Optional existing UserHookManager instance
Returns:
Tuple of (status_dict, hook_manager)
"""
# Use provided hook_manager or create a new one
if hook_manager is None:
hook_manager = UserHookManager(timeout=timeout)
# Process hooks with the hook_manager
compiled_hooks, validation_errors = await process_user_hooks_with_manager(
user_hooks, hook_manager
)
# Log validation errors
if validation_errors:
logger.warning(f"Hook validation errors: {validation_errors}")
# Attach successfully compiled hooks
attached_hooks = []
for hook_point, wrapped_hook in compiled_hooks.items():
try:
crawler.crawler_strategy.set_hook(hook_point, wrapped_hook)
attached_hooks.append(hook_point)
logger.info(f"Attached hook to {hook_point}")
except Exception as e:
logger.error(f"Failed to attach hook to {hook_point}: {e}")
validation_errors.append({
'hook_point': hook_point,
'error': f'Failed to attach hook: {str(e)}'
})
status = 'success' if not validation_errors else ('partial' if attached_hooks else 'failed')
status_dict = {
'status': status,
'attached_hooks': attached_hooks,
'validation_errors': validation_errors,
'total_hooks_provided': len(user_hooks),
'successfully_attached': len(attached_hooks),
'failed_validation': len(validation_errors)
}
return status_dict, hook_manager
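For orientation, a hedged sketch of driving attach_user_hooks_to_crawler directly with a user-supplied code string (crawler setup and the hook body are illustrative):

import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig
from hook_manager import attach_user_hooks_to_crawler  # deploy/docker module, path assumed

USER_HOOKS = {
    "before_retrieve_html": (
        "async def hook(page, context, **kwargs):\n"
        "    await page.evaluate('window.scrollTo(0, document.body.scrollHeight)')\n"
        "    return page\n"
    ),
}

async def crawl_with_hooks(url: str):
    async with AsyncWebCrawler(config=BrowserConfig(headless=True)) as crawler:
        status, manager = await attach_user_hooks_to_crawler(crawler, USER_HOOKS, timeout=30)
        print(status["status"], status["attached_hooks"])   # e.g. "success" ["before_retrieve_html"]
        results = await crawler.arun(url=url)
        print(manager.get_summary())                         # execution counts, success rate, errors
        return results

asyncio.run(crawl_with_hooks("https://example.com"))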

382
deploy/docker/monitor.py Normal file
View File

@@ -0,0 +1,382 @@
# monitor.py - Real-time monitoring stats with Redis persistence
import time
import json
import asyncio
from typing import Dict, List, Optional
from datetime import datetime, timezone
from collections import deque
from redis import asyncio as aioredis
from utils import get_container_memory_percent
import psutil
import logging
logger = logging.getLogger(__name__)
class MonitorStats:
"""Tracks real-time server stats with Redis persistence."""
def __init__(self, redis: aioredis.Redis):
self.redis = redis
self.start_time = time.time()
# In-memory queues (fast reads, Redis backup)
self.active_requests: Dict[str, Dict] = {} # id -> request info
self.completed_requests: deque = deque(maxlen=100) # Last 100
self.janitor_events: deque = deque(maxlen=100)
self.errors: deque = deque(maxlen=100)
# Endpoint stats (persisted in Redis)
self.endpoint_stats: Dict[str, Dict] = {} # endpoint -> {count, total_time, errors, ...}
# Background persistence queue (max 10 pending persist requests)
self._persist_queue: asyncio.Queue = asyncio.Queue(maxsize=10)
self._persist_worker_task: Optional[asyncio.Task] = None
# Timeline data (5min window, 5s resolution = 60 points)
self.memory_timeline: deque = deque(maxlen=60)
self.requests_timeline: deque = deque(maxlen=60)
self.browser_timeline: deque = deque(maxlen=60)
async def track_request_start(self, request_id: str, endpoint: str, url: str, config: Dict = None):
"""Track new request start."""
req_info = {
"id": request_id,
"endpoint": endpoint,
"url": url[:100], # Truncate long URLs
"start_time": time.time(),
"config_sig": config.get("sig", "default") if config else "default",
"mem_start": psutil.Process().memory_info().rss / (1024 * 1024)
}
self.active_requests[request_id] = req_info
# Increment endpoint counter
if endpoint not in self.endpoint_stats:
self.endpoint_stats[endpoint] = {
"count": 0, "total_time": 0, "errors": 0,
"pool_hits": 0, "success": 0
}
self.endpoint_stats[endpoint]["count"] += 1
# Queue persistence (handled by background worker)
try:
self._persist_queue.put_nowait(True)
except asyncio.QueueFull:
logger.warning("Persistence queue full, skipping")
async def track_request_end(self, request_id: str, success: bool, error: str = None,
pool_hit: bool = True, status_code: int = 200):
"""Track request completion."""
if request_id not in self.active_requests:
return
req_info = self.active_requests.pop(request_id)
end_time = time.time()
elapsed = end_time - req_info["start_time"]
mem_end = psutil.Process().memory_info().rss / (1024 * 1024)
mem_delta = mem_end - req_info["mem_start"]
# Update stats
endpoint = req_info["endpoint"]
if endpoint in self.endpoint_stats:
self.endpoint_stats[endpoint]["total_time"] += elapsed
if success:
self.endpoint_stats[endpoint]["success"] += 1
else:
self.endpoint_stats[endpoint]["errors"] += 1
if pool_hit:
self.endpoint_stats[endpoint]["pool_hits"] += 1
# Add to completed queue
completed = {
**req_info,
"end_time": end_time,
"elapsed": round(elapsed, 2),
"mem_delta": round(mem_delta, 1),
"success": success,
"error": error,
"status_code": status_code,
"pool_hit": pool_hit
}
self.completed_requests.append(completed)
# Track errors
if not success and error:
self.errors.append({
"timestamp": end_time,
"endpoint": endpoint,
"url": req_info["url"],
"error": error,
"request_id": request_id
})
await self._persist_endpoint_stats()
async def track_janitor_event(self, event_type: str, sig: str, details: Dict):
"""Track janitor cleanup events."""
self.janitor_events.append({
"timestamp": time.time(),
"type": event_type, # "close_cold", "close_hot", "promote"
"sig": sig[:8],
"details": details
})
def _cleanup_old_entries(self, max_age_seconds: int = 300):
"""Remove entries older than max_age_seconds (default 5min)."""
now = time.time()
cutoff = now - max_age_seconds
# Clean completed requests
while self.completed_requests and self.completed_requests[0].get("end_time", 0) < cutoff:
self.completed_requests.popleft()
# Clean janitor events
while self.janitor_events and self.janitor_events[0].get("timestamp", 0) < cutoff:
self.janitor_events.popleft()
# Clean errors
while self.errors and self.errors[0].get("timestamp", 0) < cutoff:
self.errors.popleft()
async def update_timeline(self):
"""Update timeline data points (called every 5s)."""
now = time.time()
mem_pct = get_container_memory_percent()
# Clean old entries (keep last 5 minutes)
self._cleanup_old_entries(max_age_seconds=300)
# Count requests in last 5s
recent_reqs = sum(1 for req in self.completed_requests
if now - req.get("end_time", 0) < 5)
# Browser counts (acquire lock to prevent race conditions)
from crawler_pool import PERMANENT, HOT_POOL, COLD_POOL, LOCK
async with LOCK:
browser_count = {
"permanent": 1 if PERMANENT else 0,
"hot": len(HOT_POOL),
"cold": len(COLD_POOL)
}
self.memory_timeline.append({"time": now, "value": mem_pct})
self.requests_timeline.append({"time": now, "value": recent_reqs})
self.browser_timeline.append({"time": now, "browsers": browser_count})
async def _persist_endpoint_stats(self):
"""Persist endpoint stats to Redis."""
try:
await self.redis.set(
"monitor:endpoint_stats",
json.dumps(self.endpoint_stats),
ex=86400 # 24h TTL
)
except Exception as e:
logger.warning(f"Failed to persist endpoint stats: {e}")
async def _persistence_worker(self):
"""Background worker to persist stats to Redis."""
while True:
try:
await self._persist_queue.get()
await self._persist_endpoint_stats()
self._persist_queue.task_done()
except asyncio.CancelledError:
break
except Exception as e:
logger.error(f"Persistence worker error: {e}")
def start_persistence_worker(self):
"""Start the background persistence worker."""
if not self._persist_worker_task:
self._persist_worker_task = asyncio.create_task(self._persistence_worker())
logger.info("Started persistence worker")
async def stop_persistence_worker(self):
"""Stop the background persistence worker."""
if self._persist_worker_task:
self._persist_worker_task.cancel()
try:
await self._persist_worker_task
except asyncio.CancelledError:
pass
self._persist_worker_task = None
logger.info("Stopped persistence worker")
async def cleanup(self):
"""Cleanup on shutdown - persist final stats and stop workers."""
logger.info("Monitor cleanup starting...")
try:
# Persist final stats before shutdown
await self._persist_endpoint_stats()
# Stop background worker
await self.stop_persistence_worker()
logger.info("Monitor cleanup completed")
except Exception as e:
logger.error(f"Monitor cleanup error: {e}")
async def load_from_redis(self):
"""Load persisted stats from Redis."""
try:
data = await self.redis.get("monitor:endpoint_stats")
if data:
self.endpoint_stats = json.loads(data)
logger.info("Loaded endpoint stats from Redis")
except Exception as e:
logger.warning(f"Failed to load from Redis: {e}")
async def get_health_summary(self) -> Dict:
"""Get current system health snapshot."""
mem_pct = get_container_memory_percent()
cpu_pct = psutil.cpu_percent(interval=0.1)
# Network I/O (delta since last call)
net = psutil.net_io_counters()
# Pool status (acquire lock to prevent race conditions)
from crawler_pool import PERMANENT, HOT_POOL, COLD_POOL, LOCK
async with LOCK:
# TODO: Track actual browser process memory instead of estimates
# These are conservative estimates based on typical Chromium usage
permanent_mem = 270 if PERMANENT else 0 # Estimate: ~270MB for permanent browser
hot_mem = len(HOT_POOL) * 180 # Estimate: ~180MB per hot pool browser
cold_mem = len(COLD_POOL) * 180 # Estimate: ~180MB per cold pool browser
permanent_active = PERMANENT is not None
hot_count = len(HOT_POOL)
cold_count = len(COLD_POOL)
return {
"container": {
"memory_percent": round(mem_pct, 1),
"cpu_percent": round(cpu_pct, 1),
"network_sent_mb": round(net.bytes_sent / (1024**2), 2),
"network_recv_mb": round(net.bytes_recv / (1024**2), 2),
"uptime_seconds": int(time.time() - self.start_time)
},
"pool": {
"permanent": {"active": permanent_active, "memory_mb": permanent_mem},
"hot": {"count": hot_count, "memory_mb": hot_mem},
"cold": {"count": cold_count, "memory_mb": cold_mem},
"total_memory_mb": permanent_mem + hot_mem + cold_mem
},
"janitor": {
"next_cleanup_estimate": "adaptive", # Would need janitor state
"memory_pressure": "LOW" if mem_pct < 60 else "MEDIUM" if mem_pct < 80 else "HIGH"
}
}
def get_active_requests(self) -> List[Dict]:
"""Get list of currently active requests."""
now = time.time()
return [
{
**req,
"elapsed": round(now - req["start_time"], 1),
"status": "running"
}
for req in self.active_requests.values()
]
def get_completed_requests(self, limit: int = 50, filter_status: str = "all") -> List[Dict]:
"""Get recent completed requests."""
requests = list(self.completed_requests)[-limit:]
if filter_status == "success":
requests = [r for r in requests if r.get("success")]
elif filter_status == "error":
requests = [r for r in requests if not r.get("success")]
return requests
async def get_browser_list(self) -> List[Dict]:
"""Get detailed browser pool information."""
from crawler_pool import PERMANENT, HOT_POOL, COLD_POOL, LAST_USED, USAGE_COUNT, DEFAULT_CONFIG_SIG, LOCK
browsers = []
now = time.time()
# Acquire lock to prevent race conditions during iteration
async with LOCK:
if PERMANENT:
browsers.append({
"type": "permanent",
"sig": DEFAULT_CONFIG_SIG[:8] if DEFAULT_CONFIG_SIG else "unknown",
"age_seconds": int(now - self.start_time),
"last_used_seconds": int(now - LAST_USED.get(DEFAULT_CONFIG_SIG, now)),
"memory_mb": 270,
"hits": USAGE_COUNT.get(DEFAULT_CONFIG_SIG, 0),
"killable": False
})
for sig, crawler in HOT_POOL.items():
browsers.append({
"type": "hot",
"sig": sig[:8],
"age_seconds": int(now - self.start_time), # Approximation
"last_used_seconds": int(now - LAST_USED.get(sig, now)),
"memory_mb": 180, # Estimate
"hits": USAGE_COUNT.get(sig, 0),
"killable": True
})
for sig, crawler in COLD_POOL.items():
browsers.append({
"type": "cold",
"sig": sig[:8],
"age_seconds": int(now - self.start_time),
"last_used_seconds": int(now - LAST_USED.get(sig, now)),
"memory_mb": 180,
"hits": USAGE_COUNT.get(sig, 0),
"killable": True
})
return browsers
def get_endpoint_stats_summary(self) -> Dict[str, Dict]:
"""Get aggregated endpoint statistics."""
summary = {}
for endpoint, stats in self.endpoint_stats.items():
count = stats["count"]
avg_time = (stats["total_time"] / count) if count > 0 else 0
success_rate = (stats["success"] / count * 100) if count > 0 else 0
pool_hit_rate = (stats["pool_hits"] / count * 100) if count > 0 else 0
summary[endpoint] = {
"count": count,
"avg_latency_ms": round(avg_time * 1000, 1),
"success_rate_percent": round(success_rate, 1),
"pool_hit_rate_percent": round(pool_hit_rate, 1),
"errors": stats["errors"]
}
return summary
def get_timeline_data(self, metric: str, window: str = "5m") -> Dict:
"""Get timeline data for charts."""
# For now, only 5m window supported
if metric == "memory":
data = list(self.memory_timeline)
elif metric == "requests":
data = list(self.requests_timeline)
elif metric == "browsers":
data = list(self.browser_timeline)
else:
return {"timestamps": [], "values": []}
return {
"timestamps": [int(d["time"]) for d in data],
"values": [d.get("value", d.get("browsers")) for d in data]
}
def get_janitor_log(self, limit: int = 100) -> List[Dict]:
"""Get recent janitor events."""
return list(self.janitor_events)[-limit:]
def get_errors_log(self, limit: int = 100) -> List[Dict]:
"""Get recent errors."""
return list(self.errors)[-limit:]
# Global instance (initialized in server.py)
monitor_stats: Optional[MonitorStats] = None
def get_monitor() -> MonitorStats:
"""Get global monitor instance."""
if monitor_stats is None:
raise RuntimeError("Monitor not initialized")
return monitor_stats
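A minimal instrumentation sketch showing how a handler would bracket work with MonitorStats (the endpoint label and do_crawl call are placeholders):

import uuid
from monitor import get_monitor  # deploy/docker module, path assumed

async def handle_some_request(url: str):
    request_id = uuid.uuid4().hex
    monitor = get_monitor()
    await monitor.track_request_start(request_id, endpoint="/crawl", url=url)
    try:
        result = await do_crawl(url)  # placeholder for the actual crawl call
        await monitor.track_request_end(request_id, success=True, status_code=200)
        return result
    except Exception as exc:
        await monitor.track_request_end(request_id, success=False, error=str(exc), status_code=500)
        raise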

View File

@@ -0,0 +1,405 @@
# monitor_routes.py - Monitor API endpoints
from fastapi import APIRouter, HTTPException, WebSocket, WebSocketDisconnect
from pydantic import BaseModel
from typing import Optional
from monitor import get_monitor
import logging
import asyncio
import json
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/monitor", tags=["monitor"])
@router.get("/health")
async def get_health():
"""Get current system health snapshot."""
try:
monitor = get_monitor()
return await monitor.get_health_summary()
except Exception as e:
logger.error(f"Error getting health: {e}")
raise HTTPException(500, str(e))
@router.get("/requests")
async def get_requests(status: str = "all", limit: int = 50):
"""Get active and completed requests.
Args:
status: Filter by 'active', 'completed', 'success', 'error', or 'all'
limit: Max number of completed requests to return (default 50)
"""
# Input validation
if status not in ["all", "active", "completed", "success", "error"]:
raise HTTPException(400, f"Invalid status: {status}. Must be one of: all, active, completed, success, error")
if limit < 1 or limit > 1000:
raise HTTPException(400, f"Invalid limit: {limit}. Must be between 1 and 1000")
try:
monitor = get_monitor()
if status == "active":
return {"active": monitor.get_active_requests(), "completed": []}
elif status == "completed":
return {"active": [], "completed": monitor.get_completed_requests(limit)}
elif status in ["success", "error"]:
return {"active": [], "completed": monitor.get_completed_requests(limit, status)}
else: # "all"
return {
"active": monitor.get_active_requests(),
"completed": monitor.get_completed_requests(limit)
}
except Exception as e:
logger.error(f"Error getting requests: {e}")
raise HTTPException(500, str(e))
@router.get("/browsers")
async def get_browsers():
"""Get detailed browser pool information."""
try:
monitor = get_monitor()
browsers = await monitor.get_browser_list()
# Calculate summary stats
total_browsers = len(browsers)
total_memory = sum(b["memory_mb"] for b in browsers)
# Calculate reuse rate from recent requests
recent = monitor.get_completed_requests(100)
pool_hits = sum(1 for r in recent if r.get("pool_hit", False))
reuse_rate = (pool_hits / len(recent) * 100) if recent else 0
return {
"browsers": browsers,
"summary": {
"total_count": total_browsers,
"total_memory_mb": total_memory,
"reuse_rate_percent": round(reuse_rate, 1)
}
}
except Exception as e:
logger.error(f"Error getting browsers: {e}")
raise HTTPException(500, str(e))
@router.get("/endpoints/stats")
async def get_endpoint_stats():
"""Get aggregated endpoint statistics."""
try:
monitor = get_monitor()
return monitor.get_endpoint_stats_summary()
except Exception as e:
logger.error(f"Error getting endpoint stats: {e}")
raise HTTPException(500, str(e))
@router.get("/timeline")
async def get_timeline(metric: str = "memory", window: str = "5m"):
"""Get timeline data for charts.
Args:
metric: 'memory', 'requests', or 'browsers'
window: Time window (only '5m' supported for now)
"""
# Input validation
if metric not in ["memory", "requests", "browsers"]:
raise HTTPException(400, f"Invalid metric: {metric}. Must be one of: memory, requests, browsers")
if window != "5m":
raise HTTPException(400, f"Invalid window: {window}. Only '5m' is currently supported")
try:
monitor = get_monitor()
return monitor.get_timeline_data(metric, window)
except Exception as e:
logger.error(f"Error getting timeline: {e}")
raise HTTPException(500, str(e))
@router.get("/logs/janitor")
async def get_janitor_log(limit: int = 100):
"""Get recent janitor cleanup events."""
# Input validation
if limit < 1 or limit > 1000:
raise HTTPException(400, f"Invalid limit: {limit}. Must be between 1 and 1000")
try:
monitor = get_monitor()
return {"events": monitor.get_janitor_log(limit)}
except Exception as e:
logger.error(f"Error getting janitor log: {e}")
raise HTTPException(500, str(e))
@router.get("/logs/errors")
async def get_errors_log(limit: int = 100):
"""Get recent errors."""
# Input validation
if limit < 1 or limit > 1000:
raise HTTPException(400, f"Invalid limit: {limit}. Must be between 1 and 1000")
try:
monitor = get_monitor()
return {"errors": monitor.get_errors_log(limit)}
except Exception as e:
logger.error(f"Error getting errors log: {e}")
raise HTTPException(500, str(e))
# ========== Control Actions ==========
class KillBrowserRequest(BaseModel):
sig: str
@router.post("/actions/cleanup")
async def force_cleanup():
"""Force immediate janitor cleanup (kills idle cold pool browsers)."""
try:
from crawler_pool import COLD_POOL, LAST_USED, USAGE_COUNT, LOCK
import time
from contextlib import suppress
killed_count = 0
now = time.time()
async with LOCK:
for sig in list(COLD_POOL.keys()):
# Kill all cold pool browsers immediately
logger.info(f"🧹 Force cleanup: closing cold browser (sig={sig[:8]})")
with suppress(Exception):
await COLD_POOL[sig].close()
COLD_POOL.pop(sig, None)
LAST_USED.pop(sig, None)
USAGE_COUNT.pop(sig, None)
killed_count += 1
monitor = get_monitor()
await monitor.track_janitor_event("force_cleanup", "manual", {"killed": killed_count})
return {"success": True, "killed_browsers": killed_count}
except Exception as e:
logger.error(f"Error during force cleanup: {e}")
raise HTTPException(500, str(e))
@router.post("/actions/kill_browser")
async def kill_browser(req: KillBrowserRequest):
"""Kill a specific browser by signature (hot or cold only).
Args:
sig: Browser config signature (first 8 chars)
"""
try:
from crawler_pool import HOT_POOL, COLD_POOL, LAST_USED, USAGE_COUNT, LOCK, DEFAULT_CONFIG_SIG
from contextlib import suppress
# Find full signature matching prefix
target_sig = None
pool_type = None
async with LOCK:
# Check hot pool
for sig in HOT_POOL.keys():
if sig.startswith(req.sig):
target_sig = sig
pool_type = "hot"
break
# Check cold pool
if not target_sig:
for sig in COLD_POOL.keys():
if sig.startswith(req.sig):
target_sig = sig
pool_type = "cold"
break
# Check if trying to kill permanent
if DEFAULT_CONFIG_SIG and DEFAULT_CONFIG_SIG.startswith(req.sig):
raise HTTPException(403, "Cannot kill permanent browser. Use restart instead.")
if not target_sig:
raise HTTPException(404, f"Browser with sig={req.sig} not found")
# Warn if there are active requests (browser might be in use)
monitor = get_monitor()
active_count = len(monitor.get_active_requests())
if active_count > 0:
logger.warning(f"Killing browser {target_sig[:8]} while {active_count} requests are active - may cause failures")
# Kill the browser
if pool_type == "hot":
browser = HOT_POOL.pop(target_sig)
else:
browser = COLD_POOL.pop(target_sig)
with suppress(Exception):
await browser.close()
LAST_USED.pop(target_sig, None)
USAGE_COUNT.pop(target_sig, None)
logger.info(f"🔪 Killed {pool_type} browser (sig={target_sig[:8]})")
monitor = get_monitor()
await monitor.track_janitor_event("kill_browser", target_sig, {"pool": pool_type, "manual": True})
return {"success": True, "killed_sig": target_sig[:8], "pool_type": pool_type}
except HTTPException:
raise
except Exception as e:
logger.error(f"Error killing browser: {e}")
raise HTTPException(500, str(e))
@router.post("/actions/restart_browser")
async def restart_browser(req: KillBrowserRequest):
"""Restart a browser (kill + recreate). Works for permanent too.
Args:
sig: Browser config signature (first 8 chars), or "permanent"
"""
try:
from crawler_pool import (PERMANENT, HOT_POOL, COLD_POOL, LAST_USED,
USAGE_COUNT, LOCK, DEFAULT_CONFIG_SIG, init_permanent)
from crawl4ai import AsyncWebCrawler, BrowserConfig
from contextlib import suppress
import time
# Handle permanent browser restart
if req.sig == "permanent" or (DEFAULT_CONFIG_SIG and DEFAULT_CONFIG_SIG.startswith(req.sig)):
async with LOCK:
if PERMANENT:
with suppress(Exception):
await PERMANENT.close()
# Reinitialize permanent
from utils import load_config
config = load_config()
await init_permanent(BrowserConfig(
extra_args=config["crawler"]["browser"].get("extra_args", []),
**config["crawler"]["browser"].get("kwargs", {}),
))
logger.info("🔄 Restarted permanent browser")
return {"success": True, "restarted": "permanent"}
# Handle hot/cold browser restart
target_sig = None
pool_type = None
browser_config = None
async with LOCK:
# Find browser
for sig in HOT_POOL.keys():
if sig.startswith(req.sig):
target_sig = sig
pool_type = "hot"
# Would need to reconstruct config (not stored currently)
break
if not target_sig:
for sig in COLD_POOL.keys():
if sig.startswith(req.sig):
target_sig = sig
pool_type = "cold"
break
if not target_sig:
raise HTTPException(404, f"Browser with sig={req.sig} not found")
# Kill existing
if pool_type == "hot":
browser = HOT_POOL.pop(target_sig)
else:
browser = COLD_POOL.pop(target_sig)
with suppress(Exception):
await browser.close()
# Note: We can't easily recreate with same config without storing it
# For now, just kill and let new requests create fresh ones
LAST_USED.pop(target_sig, None)
USAGE_COUNT.pop(target_sig, None)
logger.info(f"🔄 Restarted {pool_type} browser (sig={target_sig[:8]})")
monitor = get_monitor()
await monitor.track_janitor_event("restart_browser", target_sig, {"pool": pool_type})
return {"success": True, "restarted_sig": target_sig[:8], "note": "Browser will be recreated on next request"}
except HTTPException:
raise
except Exception as e:
logger.error(f"Error restarting browser: {e}")
raise HTTPException(500, str(e))
@router.post("/stats/reset")
async def reset_stats():
"""Reset today's endpoint counters."""
try:
monitor = get_monitor()
monitor.endpoint_stats.clear()
await monitor._persist_endpoint_stats()
return {"success": True, "message": "Endpoint stats reset"}
except Exception as e:
logger.error(f"Error resetting stats: {e}")
raise HTTPException(500, str(e))
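The control actions above can also be driven programmatically; a hedged sketch using httpx against a locally running server (base URL and signature prefix are placeholders):

import httpx

BASE = "http://localhost:11235/monitor"  # default port from config.yml

def inspect_and_cleanup(sig_prefix: str):
    with httpx.Client() as client:
        print(client.get(f"{BASE}/health").json())                # memory, CPU, pool summary
        print(client.post(f"{BASE}/actions/cleanup").json())      # close all idle cold browsers
        print(client.post(f"{BASE}/actions/kill_browser",
                          json={"sig": sig_prefix}).json())       # kill one hot/cold browser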
@router.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
"""WebSocket endpoint for real-time monitoring updates.
Sends updates every 2 seconds with:
- Health stats
- Active/completed requests
- Browser pool status
- Timeline data
"""
await websocket.accept()
logger.info("WebSocket client connected")
try:
while True:
try:
# Gather all monitoring data
monitor = get_monitor()
data = {
"timestamp": asyncio.get_event_loop().time(),
"health": await monitor.get_health_summary(),
"requests": {
"active": monitor.get_active_requests(),
"completed": monitor.get_completed_requests(limit=10)
},
"browsers": await monitor.get_browser_list(),
"timeline": {
"memory": monitor.get_timeline_data("memory", "5m"),
"requests": monitor.get_timeline_data("requests", "5m"),
"browsers": monitor.get_timeline_data("browsers", "5m")
},
"janitor": monitor.get_janitor_log(limit=10),
"errors": monitor.get_errors_log(limit=10)
}
# Send update to client
await websocket.send_json(data)
# Wait 2 seconds before next update
await asyncio.sleep(2)
except WebSocketDisconnect:
logger.info("WebSocket client disconnected")
break
except Exception as e:
logger.error(f"WebSocket error: {e}", exc_info=True)
await asyncio.sleep(2) # Continue trying
except Exception as e:
logger.error(f"WebSocket connection error: {e}", exc_info=True)
finally:
logger.info("WebSocket connection closed")

View File

@@ -9,6 +9,50 @@ class CrawlRequest(BaseModel):
browser_config: Optional[Dict] = Field(default_factory=dict)
crawler_config: Optional[Dict] = Field(default_factory=dict)
class HookConfig(BaseModel):
"""Configuration for user-provided hooks"""
code: Dict[str, str] = Field(
default_factory=dict,
description="Map of hook points to Python code strings"
)
timeout: int = Field(
default=30,
ge=1,
le=120,
description="Timeout in seconds for each hook execution"
)
class Config:
schema_extra = {
"example": {
"code": {
"on_page_context_created": """
async def hook(page, context, **kwargs):
# Block images to speed up crawling
await context.route("**/*.{png,jpg,jpeg,gif}", lambda route: route.abort())
return page
""",
"before_retrieve_html": """
async def hook(page, context, **kwargs):
# Scroll to load lazy content
await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
await page.wait_for_timeout(2000)
return page
"""
},
"timeout": 30
}
}
class CrawlRequestWithHooks(CrawlRequest):
"""Extended crawl request with hooks support"""
hooks: Optional[HookConfig] = Field(
default=None,
description="Optional user-provided hook functions"
)
class MarkdownRequest(BaseModel):
"""Request body for the /md endpoint."""
url: str = Field(..., description="Absolute http/https URL to fetch")
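A hedged example of the request body a client would POST to /crawl once hooks are supported (URL and hook body are illustrative):

import httpx

payload = {
    "urls": ["https://example.com"],
    "browser_config": {},
    "crawler_config": {},
    "hooks": {
        "code": {
            "before_retrieve_html": (
                "async def hook(page, context, **kwargs):\n"
                "    await page.wait_for_timeout(1000)\n"
                "    return page\n"
            )
        },
        "timeout": 30,
    },
}
resp = httpx.post("http://localhost:11235/crawl", json=payload, timeout=120)
print(resp.json().get("hooks", {}).get("summary"))  # per-hook execution summary, per the /crawl handler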

View File

@@ -16,6 +16,7 @@ from fastapi import Request, Depends
from fastapi.responses import FileResponse
import base64
import re
import logging
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
from api import (
handle_markdown_request, handle_llm_qa,
@@ -23,7 +24,7 @@ from api import (
stream_results
)
from schemas import (
CrawlRequest,
CrawlRequestWithHooks,
MarkdownRequest,
RawCode,
HTMLRequest,
@@ -78,6 +79,14 @@ __version__ = "0.5.1-d1"
MAX_PAGES = config["crawler"]["pool"].get("max_pages", 30)
GLOBAL_SEM = asyncio.Semaphore(MAX_PAGES)
# ── default browser config helper ─────────────────────────────
def get_default_browser_config() -> BrowserConfig:
"""Get default BrowserConfig from config.yml."""
return BrowserConfig(
extra_args=config["crawler"]["browser"].get("extra_args", []),
**config["crawler"]["browser"].get("kwargs", {}),
)
# import logging
# page_log = logging.getLogger("page_cap")
# orig_arun = AsyncWebCrawler.arun
@@ -103,15 +112,52 @@ AsyncWebCrawler.arun = capped_arun
@asynccontextmanager
async def lifespan(_: FastAPI):
await get_crawler(BrowserConfig(
from crawler_pool import init_permanent
from monitor import MonitorStats
import monitor as monitor_module
# Initialize monitor
monitor_module.monitor_stats = MonitorStats(redis)
await monitor_module.monitor_stats.load_from_redis()
monitor_module.monitor_stats.start_persistence_worker()
# Initialize browser pool
await init_permanent(BrowserConfig(
extra_args=config["crawler"]["browser"].get("extra_args", []),
**config["crawler"]["browser"].get("kwargs", {}),
)) # warmup
app.state.janitor = asyncio.create_task(janitor()) # idle GC
))
# Start background tasks
app.state.janitor = asyncio.create_task(janitor())
app.state.timeline_updater = asyncio.create_task(_timeline_updater())
yield
# Cleanup
app.state.janitor.cancel()
app.state.timeline_updater.cancel()
# Monitor cleanup (persist stats and stop workers)
from monitor import get_monitor
try:
await get_monitor().cleanup()
except Exception as e:
logger.error(f"Monitor cleanup failed: {e}")
await close_all()
async def _timeline_updater():
"""Update timeline data every 5 seconds."""
from monitor import get_monitor
while True:
await asyncio.sleep(5)
try:
await asyncio.wait_for(get_monitor().update_timeline(), timeout=4.0)
except asyncio.TimeoutError:
logger.warning("Timeline update timeout after 4s")
except Exception as e:
logger.warning(f"Timeline update error: {e}")
# ───────────────────── FastAPI instance ──────────────────────
app = FastAPI(
title=config["app"]["title"],
@@ -129,6 +175,25 @@ app.mount(
name="play",
)
# ── static monitor dashboard ────────────────────────────────
MONITOR_DIR = pathlib.Path(__file__).parent / "static" / "monitor"
if not MONITOR_DIR.exists():
raise RuntimeError(f"Monitor assets not found at {MONITOR_DIR}")
app.mount(
"/dashboard",
StaticFiles(directory=MONITOR_DIR, html=True),
name="monitor_ui",
)
# ── static assets (logo, etc) ────────────────────────────────
ASSETS_DIR = pathlib.Path(__file__).parent / "static" / "assets"
if ASSETS_DIR.exists():
app.mount(
"/static/assets",
StaticFiles(directory=ASSETS_DIR),
name="assets",
)
@app.get("/")
async def root():
@@ -212,6 +277,12 @@ def _safe_eval_config(expr: str) -> dict:
# ── job router ──────────────────────────────────────────────
app.include_router(init_job_router(redis, config, token_dep))
# ── monitor router ──────────────────────────────────────────
from monitor_routes import router as monitor_router
app.include_router(monitor_router)
logger = logging.getLogger(__name__)
# ──────────────────────── Endpoints ──────────────────────────
@app.post("/token")
async def get_token(req: TokenRequest):
@@ -266,27 +337,20 @@ async def generate_html(
Crawls the URL, preprocesses the raw HTML for schema extraction, and returns the processed HTML.
Use when you need sanitized HTML structures for building schemas or further processing.
"""
from crawler_pool import get_crawler
cfg = CrawlerRunConfig()
try:
async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
results = await crawler.arun(url=body.url, config=cfg)
# Check if the crawl was successful
crawler = await get_crawler(get_default_browser_config())
results = await crawler.arun(url=body.url, config=cfg)
if not results[0].success:
raise HTTPException(
status_code=500,
detail=results[0].error_message or "Crawl failed"
)
raise HTTPException(500, detail=results[0].error_message or "Crawl failed")
raw_html = results[0].html
from crawl4ai.utils import preprocess_html_for_schema
processed_html = preprocess_html_for_schema(raw_html)
return JSONResponse({"html": processed_html, "url": body.url, "success": True})
except Exception as e:
# Log and raise as HTTP 500 for other exceptions
raise HTTPException(
status_code=500,
detail=str(e)
)
raise HTTPException(500, detail=str(e))
# Screenshot endpoint
@@ -304,16 +368,13 @@ async def generate_screenshot(
    Use when you need an image snapshot of the rendered page. It is recommended to provide an output path to save the screenshot.
    The result will then contain the path to the saved file instead of the screenshot data.
"""
from crawler_pool import get_crawler
try:
cfg = CrawlerRunConfig(
screenshot=True, screenshot_wait_for=body.screenshot_wait_for)
async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
results = await crawler.arun(url=body.url, config=cfg)
cfg = CrawlerRunConfig(screenshot=True, screenshot_wait_for=body.screenshot_wait_for)
crawler = await get_crawler(get_default_browser_config())
results = await crawler.arun(url=body.url, config=cfg)
if not results[0].success:
raise HTTPException(
status_code=500,
detail=results[0].error_message or "Crawl failed"
)
raise HTTPException(500, detail=results[0].error_message or "Crawl failed")
screenshot_data = results[0].screenshot
if body.output_path:
abs_path = os.path.abspath(body.output_path)
@@ -323,10 +384,7 @@ async def generate_screenshot(
return {"success": True, "path": abs_path}
return {"success": True, "screenshot": screenshot_data}
except Exception as e:
raise HTTPException(
status_code=500,
detail=str(e)
)
raise HTTPException(500, detail=str(e))
# PDF endpoint
@@ -344,15 +402,13 @@ async def generate_pdf(
Use when you need a printable or archivable snapshot of the page. It is recommended to provide an output path to save the PDF.
    The result will then contain the path to the saved file instead of the PDF data.
"""
from crawler_pool import get_crawler
try:
cfg = CrawlerRunConfig(pdf=True)
async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
results = await crawler.arun(url=body.url, config=cfg)
crawler = await get_crawler(get_default_browser_config())
results = await crawler.arun(url=body.url, config=cfg)
if not results[0].success:
raise HTTPException(
status_code=500,
detail=results[0].error_message or "Crawl failed"
)
raise HTTPException(500, detail=results[0].error_message or "Crawl failed")
pdf_data = results[0].pdf
if body.output_path:
abs_path = os.path.abspath(body.output_path)
@@ -362,10 +418,7 @@ async def generate_pdf(
return {"success": True, "path": abs_path}
return {"success": True, "pdf": base64.b64encode(pdf_data).decode()}
except Exception as e:
raise HTTPException(
status_code=500,
detail=str(e)
)
raise HTTPException(500, detail=str(e))
@app.post("/execute_js")
@@ -421,23 +474,17 @@ async def execute_js(
```
"""
from crawler_pool import get_crawler
try:
cfg = CrawlerRunConfig(js_code=body.scripts)
async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
results = await crawler.arun(url=body.url, config=cfg)
crawler = await get_crawler(get_default_browser_config())
results = await crawler.arun(url=body.url, config=cfg)
if not results[0].success:
raise HTTPException(
status_code=500,
detail=results[0].error_message or "Crawl failed"
)
# Return JSON-serializable dict of the first CrawlResult
raise HTTPException(500, detail=results[0].error_message or "Crawl failed")
data = results[0].model_dump()
return JSONResponse(data)
except Exception as e:
raise HTTPException(
status_code=500,
detail=str(e)
)
raise HTTPException(500, detail=str(e))
@app.get("/llm/{url:path}")
@@ -462,6 +509,72 @@ async def get_schema():
"crawler": CrawlerRunConfig().dump()}
@app.get("/hooks/info")
async def get_hooks_info():
"""Get information about available hook points and their signatures"""
from hook_manager import UserHookManager
hook_info = {}
for hook_point, params in UserHookManager.HOOK_SIGNATURES.items():
hook_info[hook_point] = {
"parameters": params,
"description": get_hook_description(hook_point),
"example": get_hook_example(hook_point)
}
return JSONResponse({
"available_hooks": hook_info,
"timeout_limits": {
"min": 1,
"max": 120,
"default": 30
}
})
def get_hook_description(hook_point: str) -> str:
"""Get description for each hook point"""
descriptions = {
"on_browser_created": "Called after browser instance is created",
"on_page_context_created": "Called after page and context are created - ideal for authentication",
"before_goto": "Called before navigating to the target URL",
"after_goto": "Called after navigation is complete",
"on_user_agent_updated": "Called when user agent is updated",
"on_execution_started": "Called when custom JavaScript execution begins",
"before_retrieve_html": "Called before retrieving the final HTML - ideal for scrolling",
"before_return_html": "Called just before returning the HTML content"
}
return descriptions.get(hook_point, "")
def get_hook_example(hook_point: str) -> str:
"""Get example code for each hook point"""
examples = {
"on_page_context_created": """async def hook(page, context, **kwargs):
# Add authentication cookie
await context.add_cookies([{
'name': 'session',
'value': 'my-session-id',
'domain': '.example.com'
}])
return page""",
"before_retrieve_html": """async def hook(page, context, **kwargs):
# Scroll to load lazy content
await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
await page.wait_for_timeout(2000)
return page""",
"before_goto": """async def hook(page, context, url, **kwargs):
# Set custom headers
await page.set_extra_http_headers({
'X-Custom-Header': 'value'
})
return page"""
}
return examples.get(hook_point, "# Implement your hook logic here\nreturn page")
@app.get(config["observability"]["health_check"]["endpoint"])
async def health():
return {"status": "ok", "timestamp": time.time(), "version": __version__}
@@ -477,12 +590,13 @@ async def metrics():
@mcp_tool("crawl")
async def crawl(
request: Request,
crawl_request: CrawlRequest,
crawl_request: CrawlRequestWithHooks,
_td: Dict = Depends(token_dep),
):
"""
Crawl a list of URLs and return the results as JSON.
For streaming responses, use /crawl/stream endpoint.
Supports optional user-provided hook functions for customization.
"""
if not crawl_request.urls:
raise HTTPException(400, "At least one URL required")
@@ -490,11 +604,21 @@ async def crawl(
crawler_config = CrawlerRunConfig.load(crawl_request.crawler_config)
if crawler_config.stream:
return await stream_process(crawl_request=crawl_request)
# Prepare hooks config if provided
hooks_config = None
if crawl_request.hooks:
hooks_config = {
'code': crawl_request.hooks.code,
'timeout': crawl_request.hooks.timeout
}
results = await handle_crawl_request(
urls=crawl_request.urls,
browser_config=crawl_request.browser_config,
crawler_config=crawl_request.crawler_config,
config=config,
hooks_config=hooks_config
)
# check whether none of the results were successful
if all(not result["success"] for result in results["results"]):
@@ -506,7 +630,7 @@ async def crawl(
@limiter.limit(config["rate_limiting"]["default_limit"])
async def crawl_stream(
request: Request,
crawl_request: CrawlRequest,
crawl_request: CrawlRequestWithHooks,
_td: Dict = Depends(token_dep),
):
if not crawl_request.urls:
@@ -514,21 +638,38 @@ async def crawl_stream(
return await stream_process(crawl_request=crawl_request)
async def stream_process(crawl_request: CrawlRequest):
crawler, gen = await handle_stream_crawl_request(
async def stream_process(crawl_request: CrawlRequestWithHooks):
# Prepare hooks config if provided
hooks_config = None
if crawl_request.hooks:
hooks_config = {
'code': crawl_request.hooks.code,
'timeout': crawl_request.hooks.timeout
}
crawler, gen, hooks_info = await handle_stream_crawl_request(
urls=crawl_request.urls,
browser_config=crawl_request.browser_config,
crawler_config=crawl_request.crawler_config,
config=config,
)
hooks_config=hooks_config
)
# Add hooks info to response headers if available
headers = {
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Stream-Status": "active",
}
if hooks_info:
import json
headers["X-Hooks-Status"] = json.dumps(hooks_info['status']['status'])
return StreamingResponse(
stream_results(crawler, gen),
media_type="application/x-ndjson",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Stream-Status": "active",
},
headers=headers,
)
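For reference, here is a minimal client-side sketch of calling the updated /crawl endpoint with the new hooks field. This is an illustrative example, not code from the repository: the base URL follows the demo scripts (http://localhost:11235), the timeout bounds come from the /hooks/info response above, and the exact shape of hooks.code (a per-hook mapping is assumed here) should be verified against CrawlRequestWithHooks.
#!/usr/bin/env python3
# Illustrative sketch only (not part of the diff): assumes the server runs on
# localhost:11235 and that "hooks" accepts {"code": ..., "timeout": ...} as the
# handler above suggests. The per-hook mapping for "code" is an assumption.
import asyncio
import httpx
HOOK_CODE = {
    "on_page_context_created": (
        "async def hook(page, context, **kwargs):\n"
        "    await context.add_cookies([{'name': 'session', 'value': 'demo', 'domain': '.example.com'}])\n"
        "    return page"
    )
}
async def crawl_with_hooks():
    async with httpx.AsyncClient(timeout=60.0) as client:
        # /hooks/info lists the available hook points and the 1-120s timeout limits
        info = await client.get("http://localhost:11235/hooks/info")
        print(info.json()["timeout_limits"])
        payload = {
            "urls": ["https://httpbin.org/html"],
            "browser_config": {},
            "crawler_config": {},
            "hooks": {"code": HOOK_CODE, "timeout": 30},
        }
        resp = await client.post("http://localhost:11235/crawl", json=payload)
        print(resp.status_code, list(resp.json().keys()))
if __name__ == "__main__":
    asyncio.run(crawl_with_hooks())
When streaming through /crawl/stream, the same payload applies and the hook status is echoed back in the X-Hooks-Status response header, as shown in stream_process above.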

Binary file not shown (new image, 5.8 KiB)

Binary file not shown (new image, 1.6 KiB)

Binary file not shown (new image, 11 KiB)

File diff suppressed because it is too large

View File

@@ -167,11 +167,14 @@
</a>
</h1>
<div class="ml-auto flex space-x-2">
<button id="play-tab"
class="px-3 py-1 rounded-t bg-surface border border-b-0 border-border text-primary">Playground</button>
<button id="stress-tab" class="px-3 py-1 rounded-t border border-border hover:bg-surface">Stress
Test</button>
<div class="ml-auto flex items-center space-x-4">
<a href="/dashboard" class="text-xs text-secondary hover:text-primary underline">Monitor</a>
<div class="flex space-x-2">
<button id="play-tab"
class="px-3 py-1 rounded-t bg-surface border border-b-0 border-border text-primary">Playground</button>
<button id="stress-tab" class="px-3 py-1 rounded-t border border-border hover:bg-surface">Stress
Test</button>
</div>
</div>
</header>

deploy/docker/test-websocket.py (new executable file, 34 lines)
View File

@@ -0,0 +1,34 @@
#!/usr/bin/env python3
"""
Quick WebSocket test - Connect to monitor WebSocket and print updates
"""
import asyncio
import websockets
import json
async def test_websocket():
uri = "ws://localhost:11235/monitor/ws"
print(f"Connecting to {uri}...")
try:
async with websockets.connect(uri) as websocket:
print("✅ Connected!")
# Receive and print 5 updates
for i in range(5):
message = await websocket.recv()
data = json.loads(message)
print(f"\n📊 Update #{i+1}:")
print(f" - Health: CPU {data['health']['container']['cpu_percent']}%, Memory {data['health']['container']['memory_percent']}%")
print(f" - Active Requests: {len(data['requests']['active'])}")
print(f" - Browsers: {len(data['browsers'])}")
except Exception as e:
print(f"❌ Error: {e}")
return 1
print("\n✅ WebSocket test passed!")
return 0
if __name__ == "__main__":
exit(asyncio.run(test_websocket()))

View File

@@ -0,0 +1,164 @@
#!/usr/bin/env python3
"""
Monitor Dashboard Demo Script
Generates varied activity to showcase all monitoring features for video recording.
"""
import httpx
import asyncio
import time
from datetime import datetime
BASE_URL = "http://localhost:11235"
async def demo_dashboard():
print("🎬 Monitor Dashboard Demo - Starting...\n")
print(f"📊 Dashboard: {BASE_URL}/dashboard")
print("=" * 60)
async with httpx.AsyncClient(timeout=60.0) as client:
# Phase 1: Simple requests (permanent browser)
print("\n🔷 Phase 1: Testing permanent browser pool")
print("-" * 60)
for i in range(5):
print(f" {i+1}/5 Request to /crawl (default config)...")
try:
r = await client.post(
f"{BASE_URL}/crawl",
json={"urls": [f"https://httpbin.org/html?req={i}"], "crawler_config": {}}
)
print(f" ✅ Status: {r.status_code}, Time: {r.elapsed.total_seconds():.2f}s")
except Exception as e:
print(f" ❌ Error: {e}")
await asyncio.sleep(1) # Small delay between requests
# Phase 2: Create variant browsers (different configs)
print("\n🔶 Phase 2: Testing cold→hot pool promotion")
print("-" * 60)
viewports = [
{"width": 1920, "height": 1080},
{"width": 1280, "height": 720},
{"width": 800, "height": 600}
]
for idx, viewport in enumerate(viewports):
print(f" Viewport {viewport['width']}x{viewport['height']}:")
for i in range(4): # 4 requests each to trigger promotion at 3
try:
r = await client.post(
f"{BASE_URL}/crawl",
json={
"urls": [f"https://httpbin.org/json?v={idx}&r={i}"],
"browser_config": {"viewport": viewport},
"crawler_config": {}
}
)
print(f" {i+1}/4 ✅ {r.status_code} - Should see cold→hot after 3 uses")
except Exception as e:
print(f" {i+1}/4 ❌ {e}")
await asyncio.sleep(0.5)
# Phase 3: Concurrent burst (stress pool)
print("\n🔷 Phase 3: Concurrent burst (10 parallel)")
print("-" * 60)
tasks = []
for i in range(10):
tasks.append(
client.post(
f"{BASE_URL}/crawl",
json={"urls": [f"https://httpbin.org/delay/2?burst={i}"], "crawler_config": {}}
)
)
print(" Sending 10 concurrent requests...")
start = time.time()
results = await asyncio.gather(*tasks, return_exceptions=True)
elapsed = time.time() - start
successes = sum(1 for r in results if not isinstance(r, Exception) and r.status_code == 200)
print(f"{successes}/10 succeeded in {elapsed:.2f}s")
# Phase 4: Multi-endpoint coverage
print("\n🔶 Phase 4: Testing multiple endpoints")
print("-" * 60)
endpoints = [
("/md", {"url": "https://httpbin.org/html", "f": "fit", "c": "0"}),
("/screenshot", {"url": "https://httpbin.org/html"}),
("/pdf", {"url": "https://httpbin.org/html"}),
]
for endpoint, payload in endpoints:
print(f" Testing {endpoint}...")
try:
if endpoint == "/md":
r = await client.post(f"{BASE_URL}{endpoint}", json=payload)
else:
r = await client.post(f"{BASE_URL}{endpoint}", json=payload)
print(f"{r.status_code}")
except Exception as e:
print(f"{e}")
await asyncio.sleep(1)
# Phase 5: Intentional error (to populate errors tab)
print("\n🔷 Phase 5: Generating error examples")
print("-" * 60)
print(" Triggering invalid URL error...")
try:
r = await client.post(
f"{BASE_URL}/crawl",
json={"urls": ["invalid://bad-url"], "crawler_config": {}}
)
print(f" Response: {r.status_code}")
except Exception as e:
print(f" ✅ Error captured: {type(e).__name__}")
# Phase 6: Wait for janitor activity
print("\n🔶 Phase 6: Waiting for janitor cleanup...")
print("-" * 60)
print(" Idle for 40s to allow janitor to clean cold pool browsers...")
for i in range(40, 0, -10):
print(f" {i}s remaining... (Check dashboard for cleanup events)")
await asyncio.sleep(10)
# Phase 7: Final stats check
print("\n🔷 Phase 7: Final dashboard state")
print("-" * 60)
r = await client.get(f"{BASE_URL}/monitor/health")
health = r.json()
print(f" Memory: {health['container']['memory_percent']:.1f}%")
print(f" Browsers: Perm={health['pool']['permanent']['active']}, "
f"Hot={health['pool']['hot']['count']}, Cold={health['pool']['cold']['count']}")
r = await client.get(f"{BASE_URL}/monitor/endpoints/stats")
stats = r.json()
print(f"\n Endpoint Stats:")
for endpoint, data in stats.items():
print(f" {endpoint}: {data['count']} req, "
f"{data['avg_latency_ms']:.0f}ms avg, "
f"{data['success_rate_percent']:.1f}% success")
r = await client.get(f"{BASE_URL}/monitor/browsers")
browsers = r.json()
print(f"\n Pool Efficiency:")
print(f" Total browsers: {browsers['summary']['total_count']}")
print(f" Memory usage: {browsers['summary']['total_memory_mb']} MB")
print(f" Reuse rate: {browsers['summary']['reuse_rate_percent']:.1f}%")
print("\n" + "=" * 60)
print("✅ Demo complete! Dashboard is now populated with rich data.")
print(f"\n📹 Recording tip: Refresh {BASE_URL}/dashboard")
print(" You should see:")
print(" • Active & completed requests")
print(" • Browser pool (permanent + hot/cold)")
print(" • Janitor cleanup events")
print(" • Endpoint analytics")
print(" • Memory timeline")
if __name__ == "__main__":
try:
asyncio.run(demo_dashboard())
except KeyboardInterrupt:
print("\n\n⚠️ Demo interrupted by user")
except Exception as e:
print(f"\n\n❌ Demo failed: {e}")

View File

@@ -0,0 +1,2 @@
httpx>=0.25.0
docker>=7.0.0

View File

@@ -0,0 +1,138 @@
#!/usr/bin/env python3
"""
Test 1: Basic Container Health + Single Endpoint
- Starts container
- Hits /health endpoint 10 times
- Reports success rate and basic latency
"""
import asyncio
import time
import docker
import httpx
# Config
IMAGE = "crawl4ai-local:latest"
CONTAINER_NAME = "crawl4ai-test"
PORT = 11235
REQUESTS = 10
async def test_endpoint(url: str, count: int):
"""Hit endpoint multiple times, return stats."""
results = []
async with httpx.AsyncClient(timeout=30.0) as client:
for i in range(count):
start = time.time()
try:
resp = await client.get(url)
elapsed = (time.time() - start) * 1000 # ms
results.append({
"success": resp.status_code == 200,
"latency_ms": elapsed,
"status": resp.status_code
})
print(f" [{i+1}/{count}] ✓ {resp.status_code} - {elapsed:.0f}ms")
except Exception as e:
results.append({
"success": False,
"latency_ms": None,
"error": str(e)
})
print(f" [{i+1}/{count}] ✗ Error: {e}")
return results
def start_container(client, image: str, name: str, port: int):
"""Start container, return container object."""
# Clean up existing
try:
old = client.containers.get(name)
print(f"🧹 Stopping existing container '{name}'...")
old.stop()
old.remove()
except docker.errors.NotFound:
pass
print(f"🚀 Starting container '{name}' from image '{image}'...")
container = client.containers.run(
image,
name=name,
ports={f"{port}/tcp": port},
detach=True,
shm_size="1g",
environment={"PYTHON_ENV": "production"}
)
# Wait for health
print(f"⏳ Waiting for container to be healthy...")
for _ in range(30): # 30s timeout
time.sleep(1)
container.reload()
if container.status == "running":
try:
# Quick health check
import requests
resp = requests.get(f"http://localhost:{port}/health", timeout=2)
if resp.status_code == 200:
print(f"✅ Container healthy!")
return container
except:
pass
raise TimeoutError("Container failed to start")
def stop_container(container):
"""Stop and remove container."""
print(f"🛑 Stopping container...")
container.stop()
container.remove()
print(f"✅ Container removed")
async def main():
print("="*60)
print("TEST 1: Basic Container Health + Single Endpoint")
print("="*60)
client = docker.from_env()
container = None
try:
# Start container
container = start_container(client, IMAGE, CONTAINER_NAME, PORT)
# Test /health endpoint
print(f"\n📊 Testing /health endpoint ({REQUESTS} requests)...")
url = f"http://localhost:{PORT}/health"
results = await test_endpoint(url, REQUESTS)
# Calculate stats
successes = sum(1 for r in results if r["success"])
success_rate = (successes / len(results)) * 100
latencies = [r["latency_ms"] for r in results if r["latency_ms"] is not None]
avg_latency = sum(latencies) / len(latencies) if latencies else 0
# Print results
print(f"\n{'='*60}")
print(f"RESULTS:")
print(f" Success Rate: {success_rate:.1f}% ({successes}/{len(results)})")
print(f" Avg Latency: {avg_latency:.0f}ms")
if latencies:
print(f" Min Latency: {min(latencies):.0f}ms")
print(f" Max Latency: {max(latencies):.0f}ms")
print(f"{'='*60}")
# Pass/Fail
if success_rate >= 100:
print(f"✅ TEST PASSED")
return 0
else:
print(f"❌ TEST FAILED (expected 100% success rate)")
return 1
except Exception as e:
print(f"\n❌ TEST ERROR: {e}")
return 1
finally:
if container:
stop_container(container)
if __name__ == "__main__":
exit_code = asyncio.run(main())
exit(exit_code)

View File

@@ -0,0 +1,205 @@
#!/usr/bin/env python3
"""
Test 2: Docker Stats Monitoring
- Extends Test 1 with real-time container stats
- Monitors memory % and CPU during requests
- Reports baseline, peak, and final memory
"""
import asyncio
import time
import docker
import httpx
from threading import Thread, Event
# Config
IMAGE = "crawl4ai-local:latest"
CONTAINER_NAME = "crawl4ai-test"
PORT = 11235
REQUESTS = 20 # More requests to see memory usage
# Stats tracking
stats_history = []
stop_monitoring = Event()
def monitor_stats(container):
"""Background thread to collect container stats."""
for stat in container.stats(decode=True, stream=True):
if stop_monitoring.is_set():
break
try:
# Extract memory stats
mem_usage = stat['memory_stats'].get('usage', 0) / (1024 * 1024) # MB
mem_limit = stat['memory_stats'].get('limit', 1) / (1024 * 1024)
mem_percent = (mem_usage / mem_limit * 100) if mem_limit > 0 else 0
# Extract CPU stats (handle missing fields on Mac)
cpu_percent = 0
try:
cpu_delta = stat['cpu_stats']['cpu_usage']['total_usage'] - \
stat['precpu_stats']['cpu_usage']['total_usage']
system_delta = stat['cpu_stats'].get('system_cpu_usage', 0) - \
stat['precpu_stats'].get('system_cpu_usage', 0)
if system_delta > 0:
num_cpus = stat['cpu_stats'].get('online_cpus', 1)
cpu_percent = (cpu_delta / system_delta * num_cpus * 100.0)
except (KeyError, ZeroDivisionError):
pass
stats_history.append({
'timestamp': time.time(),
'memory_mb': mem_usage,
'memory_percent': mem_percent,
'cpu_percent': cpu_percent
})
except Exception as e:
# Skip malformed stats
pass
time.sleep(0.5) # Sample every 500ms
async def test_endpoint(url: str, count: int):
"""Hit endpoint, return stats."""
results = []
async with httpx.AsyncClient(timeout=30.0) as client:
for i in range(count):
start = time.time()
try:
resp = await client.get(url)
elapsed = (time.time() - start) * 1000
results.append({
"success": resp.status_code == 200,
"latency_ms": elapsed,
})
if (i + 1) % 5 == 0: # Print every 5 requests
print(f" [{i+1}/{count}] ✓ {resp.status_code} - {elapsed:.0f}ms")
except Exception as e:
results.append({"success": False, "error": str(e)})
print(f" [{i+1}/{count}] ✗ Error: {e}")
return results
def start_container(client, image: str, name: str, port: int):
"""Start container."""
try:
old = client.containers.get(name)
print(f"🧹 Stopping existing container '{name}'...")
old.stop()
old.remove()
except docker.errors.NotFound:
pass
print(f"🚀 Starting container '{name}'...")
container = client.containers.run(
image,
name=name,
ports={f"{port}/tcp": port},
detach=True,
shm_size="1g",
mem_limit="4g", # Set explicit memory limit
)
print(f"⏳ Waiting for health...")
for _ in range(30):
time.sleep(1)
container.reload()
if container.status == "running":
try:
import requests
resp = requests.get(f"http://localhost:{port}/health", timeout=2)
if resp.status_code == 200:
print(f"✅ Container healthy!")
return container
except:
pass
raise TimeoutError("Container failed to start")
def stop_container(container):
"""Stop container."""
print(f"🛑 Stopping container...")
container.stop()
container.remove()
async def main():
print("="*60)
print("TEST 2: Docker Stats Monitoring")
print("="*60)
client = docker.from_env()
container = None
monitor_thread = None
try:
# Start container
container = start_container(client, IMAGE, CONTAINER_NAME, PORT)
# Start stats monitoring in background
print(f"\n📊 Starting stats monitor...")
stop_monitoring.clear()
stats_history.clear()
monitor_thread = Thread(target=monitor_stats, args=(container,), daemon=True)
monitor_thread.start()
# Wait a bit for baseline
await asyncio.sleep(2)
baseline_mem = stats_history[-1]['memory_mb'] if stats_history else 0
print(f"📏 Baseline memory: {baseline_mem:.1f} MB")
# Test /health endpoint
print(f"\n🔄 Running {REQUESTS} requests to /health...")
url = f"http://localhost:{PORT}/health"
results = await test_endpoint(url, REQUESTS)
# Wait a bit to capture peak
await asyncio.sleep(1)
# Stop monitoring
stop_monitoring.set()
if monitor_thread:
monitor_thread.join(timeout=2)
# Calculate stats
successes = sum(1 for r in results if r.get("success"))
success_rate = (successes / len(results)) * 100
latencies = [r["latency_ms"] for r in results if "latency_ms" in r]
avg_latency = sum(latencies) / len(latencies) if latencies else 0
# Memory stats
memory_samples = [s['memory_mb'] for s in stats_history]
peak_mem = max(memory_samples) if memory_samples else 0
final_mem = memory_samples[-1] if memory_samples else 0
mem_delta = final_mem - baseline_mem
# Print results
print(f"\n{'='*60}")
print(f"RESULTS:")
print(f" Success Rate: {success_rate:.1f}% ({successes}/{len(results)})")
print(f" Avg Latency: {avg_latency:.0f}ms")
print(f"\n Memory Stats:")
print(f" Baseline: {baseline_mem:.1f} MB")
print(f" Peak: {peak_mem:.1f} MB")
print(f" Final: {final_mem:.1f} MB")
print(f" Delta: {mem_delta:+.1f} MB")
print(f"{'='*60}")
# Pass/Fail
if success_rate >= 100 and mem_delta < 100: # No significant memory growth
print(f"✅ TEST PASSED")
return 0
else:
if success_rate < 100:
print(f"❌ TEST FAILED (success rate < 100%)")
if mem_delta >= 100:
print(f"⚠️ WARNING: Memory grew by {mem_delta:.1f} MB")
return 1
except Exception as e:
print(f"\n❌ TEST ERROR: {e}")
return 1
finally:
stop_monitoring.set()
if container:
stop_container(container)
if __name__ == "__main__":
exit_code = asyncio.run(main())
exit(exit_code)

View File

@@ -0,0 +1,229 @@
#!/usr/bin/env python3
"""
Test 3: Pool Validation - Permanent Browser Reuse
- Tests /html endpoint (should use permanent browser)
- Monitors container logs for pool hit markers
- Validates browser reuse rate
- Checks memory after browser creation
"""
import asyncio
import time
import docker
import httpx
from threading import Thread, Event
# Config
IMAGE = "crawl4ai-local:latest"
CONTAINER_NAME = "crawl4ai-test"
PORT = 11235
REQUESTS = 30
# Stats tracking
stats_history = []
stop_monitoring = Event()
def monitor_stats(container):
"""Background stats collector."""
for stat in container.stats(decode=True, stream=True):
if stop_monitoring.is_set():
break
try:
mem_usage = stat['memory_stats'].get('usage', 0) / (1024 * 1024)
stats_history.append({
'timestamp': time.time(),
'memory_mb': mem_usage,
})
except:
pass
time.sleep(0.5)
def count_log_markers(container):
"""Extract pool usage markers from logs."""
logs = container.logs().decode('utf-8')
permanent_hits = logs.count("🔥 Using permanent browser")
hot_hits = logs.count("♨️ Using hot pool browser")
cold_hits = logs.count("❄️ Using cold pool browser")
new_created = logs.count("🆕 Creating new browser")
return {
'permanent_hits': permanent_hits,
'hot_hits': hot_hits,
'cold_hits': cold_hits,
'new_created': new_created,
'total_hits': permanent_hits + hot_hits + cold_hits
}
async def test_endpoint(url: str, count: int):
"""Hit endpoint multiple times."""
results = []
async with httpx.AsyncClient(timeout=60.0) as client:
for i in range(count):
start = time.time()
try:
resp = await client.post(url, json={"url": "https://httpbin.org/html"})
elapsed = (time.time() - start) * 1000
results.append({
"success": resp.status_code == 200,
"latency_ms": elapsed,
})
if (i + 1) % 10 == 0:
print(f" [{i+1}/{count}] ✓ {resp.status_code} - {elapsed:.0f}ms")
except Exception as e:
results.append({"success": False, "error": str(e)})
print(f" [{i+1}/{count}] ✗ Error: {e}")
return results
def start_container(client, image: str, name: str, port: int):
"""Start container."""
try:
old = client.containers.get(name)
print(f"🧹 Stopping existing container...")
old.stop()
old.remove()
except docker.errors.NotFound:
pass
print(f"🚀 Starting container...")
container = client.containers.run(
image,
name=name,
ports={f"{port}/tcp": port},
detach=True,
shm_size="1g",
mem_limit="4g",
)
print(f"⏳ Waiting for health...")
for _ in range(30):
time.sleep(1)
container.reload()
if container.status == "running":
try:
import requests
resp = requests.get(f"http://localhost:{port}/health", timeout=2)
if resp.status_code == 200:
print(f"✅ Container healthy!")
return container
except:
pass
raise TimeoutError("Container failed to start")
def stop_container(container):
"""Stop container."""
print(f"🛑 Stopping container...")
container.stop()
container.remove()
async def main():
print("="*60)
print("TEST 3: Pool Validation - Permanent Browser Reuse")
print("="*60)
client = docker.from_env()
container = None
monitor_thread = None
try:
# Start container
container = start_container(client, IMAGE, CONTAINER_NAME, PORT)
# Wait for permanent browser initialization
print(f"\n⏳ Waiting for permanent browser init (3s)...")
await asyncio.sleep(3)
# Start stats monitoring
print(f"📊 Starting stats monitor...")
stop_monitoring.clear()
stats_history.clear()
monitor_thread = Thread(target=monitor_stats, args=(container,), daemon=True)
monitor_thread.start()
await asyncio.sleep(1)
baseline_mem = stats_history[-1]['memory_mb'] if stats_history else 0
print(f"📏 Baseline (with permanent browser): {baseline_mem:.1f} MB")
# Test /html endpoint (uses permanent browser for default config)
print(f"\n🔄 Running {REQUESTS} requests to /html...")
url = f"http://localhost:{PORT}/html"
results = await test_endpoint(url, REQUESTS)
# Wait a bit
await asyncio.sleep(1)
# Stop monitoring
stop_monitoring.set()
if monitor_thread:
monitor_thread.join(timeout=2)
# Analyze logs for pool markers
print(f"\n📋 Analyzing pool usage...")
pool_stats = count_log_markers(container)
# Calculate request stats
successes = sum(1 for r in results if r.get("success"))
success_rate = (successes / len(results)) * 100
latencies = [r["latency_ms"] for r in results if "latency_ms" in r]
avg_latency = sum(latencies) / len(latencies) if latencies else 0
# Memory stats
memory_samples = [s['memory_mb'] for s in stats_history]
peak_mem = max(memory_samples) if memory_samples else 0
final_mem = memory_samples[-1] if memory_samples else 0
mem_delta = final_mem - baseline_mem
# Calculate reuse rate
total_requests = len(results)
total_pool_hits = pool_stats['total_hits']
reuse_rate = (total_pool_hits / total_requests * 100) if total_requests > 0 else 0
# Print results
print(f"\n{'='*60}")
print(f"RESULTS:")
print(f" Success Rate: {success_rate:.1f}% ({successes}/{len(results)})")
print(f" Avg Latency: {avg_latency:.0f}ms")
print(f"\n Pool Stats:")
print(f" 🔥 Permanent Hits: {pool_stats['permanent_hits']}")
print(f" ♨️ Hot Pool Hits: {pool_stats['hot_hits']}")
print(f" ❄️ Cold Pool Hits: {pool_stats['cold_hits']}")
print(f" 🆕 New Created: {pool_stats['new_created']}")
print(f" 📊 Reuse Rate: {reuse_rate:.1f}%")
print(f"\n Memory Stats:")
print(f" Baseline: {baseline_mem:.1f} MB")
print(f" Peak: {peak_mem:.1f} MB")
print(f" Final: {final_mem:.1f} MB")
print(f" Delta: {mem_delta:+.1f} MB")
print(f"{'='*60}")
# Pass/Fail
passed = True
if success_rate < 100:
print(f"❌ FAIL: Success rate {success_rate:.1f}% < 100%")
passed = False
if reuse_rate < 80:
print(f"❌ FAIL: Reuse rate {reuse_rate:.1f}% < 80% (expected high permanent browser usage)")
passed = False
if pool_stats['permanent_hits'] < (total_requests * 0.8):
print(f"⚠️ WARNING: Only {pool_stats['permanent_hits']} permanent hits out of {total_requests} requests")
if mem_delta > 200:
print(f"⚠️ WARNING: Memory grew by {mem_delta:.1f} MB (possible browser leak)")
if passed:
print(f"✅ TEST PASSED")
return 0
else:
return 1
except Exception as e:
print(f"\n❌ TEST ERROR: {e}")
import traceback
traceback.print_exc()
return 1
finally:
stop_monitoring.set()
if container:
stop_container(container)
if __name__ == "__main__":
exit_code = asyncio.run(main())
exit(exit_code)

View File

@@ -0,0 +1,236 @@
#!/usr/bin/env python3
"""
Test 4: Concurrent Load Testing
- Tests pool under concurrent load
- Escalates: 10 → 50 → 100 concurrent requests
- Validates latency distribution (P50, P95, P99)
- Monitors memory stability
"""
import asyncio
import time
import docker
import httpx
from threading import Thread, Event
from collections import defaultdict
# Config
IMAGE = "crawl4ai-local:latest"
CONTAINER_NAME = "crawl4ai-test"
PORT = 11235
LOAD_LEVELS = [
{"name": "Light", "concurrent": 10, "requests": 20},
{"name": "Medium", "concurrent": 50, "requests": 100},
{"name": "Heavy", "concurrent": 100, "requests": 200},
]
# Stats
stats_history = []
stop_monitoring = Event()
def monitor_stats(container):
"""Background stats collector."""
for stat in container.stats(decode=True, stream=True):
if stop_monitoring.is_set():
break
try:
mem_usage = stat['memory_stats'].get('usage', 0) / (1024 * 1024)
stats_history.append({'timestamp': time.time(), 'memory_mb': mem_usage})
except:
pass
time.sleep(0.5)
def count_log_markers(container):
"""Extract pool markers."""
logs = container.logs().decode('utf-8')
return {
'permanent': logs.count("🔥 Using permanent browser"),
'hot': logs.count("♨️ Using hot pool browser"),
'cold': logs.count("❄️ Using cold pool browser"),
'new': logs.count("🆕 Creating new browser"),
}
async def hit_endpoint(client, url, payload, semaphore):
"""Single request with concurrency control."""
async with semaphore:
start = time.time()
try:
resp = await client.post(url, json=payload, timeout=60.0)
elapsed = (time.time() - start) * 1000
return {"success": resp.status_code == 200, "latency_ms": elapsed}
except Exception as e:
return {"success": False, "error": str(e)}
async def run_concurrent_test(url, payload, concurrent, total_requests):
"""Run concurrent requests."""
semaphore = asyncio.Semaphore(concurrent)
async with httpx.AsyncClient() as client:
tasks = [hit_endpoint(client, url, payload, semaphore) for _ in range(total_requests)]
results = await asyncio.gather(*tasks)
return results
def calculate_percentiles(latencies):
"""Calculate P50, P95, P99."""
if not latencies:
return 0, 0, 0
sorted_lat = sorted(latencies)
n = len(sorted_lat)
return (
sorted_lat[int(n * 0.50)],
sorted_lat[int(n * 0.95)],
sorted_lat[int(n * 0.99)],
)
def start_container(client, image, name, port):
"""Start container."""
try:
old = client.containers.get(name)
print(f"🧹 Stopping existing container...")
old.stop()
old.remove()
except docker.errors.NotFound:
pass
print(f"🚀 Starting container...")
container = client.containers.run(
image, name=name, ports={f"{port}/tcp": port},
detach=True, shm_size="1g", mem_limit="4g",
)
print(f"⏳ Waiting for health...")
for _ in range(30):
time.sleep(1)
container.reload()
if container.status == "running":
try:
import requests
if requests.get(f"http://localhost:{port}/health", timeout=2).status_code == 200:
print(f"✅ Container healthy!")
return container
except:
pass
raise TimeoutError("Container failed to start")
async def main():
print("="*60)
print("TEST 4: Concurrent Load Testing")
print("="*60)
client = docker.from_env()
container = None
monitor_thread = None
try:
container = start_container(client, IMAGE, CONTAINER_NAME, PORT)
print(f"\n⏳ Waiting for permanent browser init (3s)...")
await asyncio.sleep(3)
# Start monitoring
stop_monitoring.clear()
stats_history.clear()
monitor_thread = Thread(target=monitor_stats, args=(container,), daemon=True)
monitor_thread.start()
await asyncio.sleep(1)
baseline_mem = stats_history[-1]['memory_mb'] if stats_history else 0
print(f"📏 Baseline: {baseline_mem:.1f} MB\n")
url = f"http://localhost:{PORT}/html"
payload = {"url": "https://httpbin.org/html"}
all_results = []
level_stats = []
# Run load levels
for level in LOAD_LEVELS:
print(f"{'='*60}")
print(f"🔄 {level['name']} Load: {level['concurrent']} concurrent, {level['requests']} total")
print(f"{'='*60}")
start_time = time.time()
results = await run_concurrent_test(url, payload, level['concurrent'], level['requests'])
duration = time.time() - start_time
successes = sum(1 for r in results if r.get("success"))
success_rate = (successes / len(results)) * 100
latencies = [r["latency_ms"] for r in results if "latency_ms" in r]
p50, p95, p99 = calculate_percentiles(latencies)
avg_lat = sum(latencies) / len(latencies) if latencies else 0
print(f" Duration: {duration:.1f}s")
print(f" Success: {success_rate:.1f}% ({successes}/{len(results)})")
print(f" Avg Latency: {avg_lat:.0f}ms")
print(f" P50/P95/P99: {p50:.0f}ms / {p95:.0f}ms / {p99:.0f}ms")
level_stats.append({
'name': level['name'],
'concurrent': level['concurrent'],
'success_rate': success_rate,
'avg_latency': avg_lat,
'p50': p50, 'p95': p95, 'p99': p99,
})
all_results.extend(results)
await asyncio.sleep(2) # Cool down between levels
# Stop monitoring
await asyncio.sleep(1)
stop_monitoring.set()
if monitor_thread:
monitor_thread.join(timeout=2)
# Final stats
pool_stats = count_log_markers(container)
memory_samples = [s['memory_mb'] for s in stats_history]
peak_mem = max(memory_samples) if memory_samples else 0
final_mem = memory_samples[-1] if memory_samples else 0
print(f"\n{'='*60}")
print(f"FINAL RESULTS:")
print(f"{'='*60}")
print(f" Total Requests: {len(all_results)}")
print(f"\n Pool Utilization:")
print(f" 🔥 Permanent: {pool_stats['permanent']}")
print(f" ♨️ Hot: {pool_stats['hot']}")
print(f" ❄️ Cold: {pool_stats['cold']}")
print(f" 🆕 New: {pool_stats['new']}")
print(f"\n Memory:")
print(f" Baseline: {baseline_mem:.1f} MB")
print(f" Peak: {peak_mem:.1f} MB")
print(f" Final: {final_mem:.1f} MB")
print(f" Delta: {final_mem - baseline_mem:+.1f} MB")
print(f"{'='*60}")
# Pass/Fail
passed = True
for ls in level_stats:
if ls['success_rate'] < 99:
print(f"❌ FAIL: {ls['name']} success rate {ls['success_rate']:.1f}% < 99%")
passed = False
if ls['p99'] > 10000: # 10s threshold
print(f"⚠️ WARNING: {ls['name']} P99 latency {ls['p99']:.0f}ms very high")
if final_mem - baseline_mem > 300:
print(f"⚠️ WARNING: Memory grew {final_mem - baseline_mem:.1f} MB")
if passed:
print(f"✅ TEST PASSED")
return 0
else:
return 1
except Exception as e:
print(f"\n❌ TEST ERROR: {e}")
import traceback
traceback.print_exc()
return 1
finally:
stop_monitoring.set()
if container:
print(f"🛑 Stopping container...")
container.stop()
container.remove()
if __name__ == "__main__":
exit_code = asyncio.run(main())
exit(exit_code)

View File

@@ -0,0 +1,267 @@
#!/usr/bin/env python3
"""
Test 5: Pool Stress - Mixed Configs
- Tests hot/cold pool with different browser configs
- Uses different viewports to create config variants
- Validates cold → hot promotion after 3 uses
- Monitors pool tier distribution
"""
import asyncio
import time
import docker
import httpx
from threading import Thread, Event
import random
# Config
IMAGE = "crawl4ai-local:latest"
CONTAINER_NAME = "crawl4ai-test"
PORT = 11235
REQUESTS_PER_CONFIG = 5 # 5 requests per config variant
# Different viewport configs to test pool tiers
VIEWPORT_CONFIGS = [
None, # Default (permanent browser)
{"width": 1920, "height": 1080}, # Desktop
{"width": 1024, "height": 768}, # Tablet
{"width": 375, "height": 667}, # Mobile
]
# Stats
stats_history = []
stop_monitoring = Event()
def monitor_stats(container):
"""Background stats collector."""
for stat in container.stats(decode=True, stream=True):
if stop_monitoring.is_set():
break
try:
mem_usage = stat['memory_stats'].get('usage', 0) / (1024 * 1024)
stats_history.append({'timestamp': time.time(), 'memory_mb': mem_usage})
except:
pass
time.sleep(0.5)
def analyze_pool_logs(container):
"""Extract detailed pool stats from logs."""
logs = container.logs().decode('utf-8')
permanent = logs.count("🔥 Using permanent browser")
hot = logs.count("♨️ Using hot pool browser")
cold = logs.count("❄️ Using cold pool browser")
new = logs.count("🆕 Creating new browser")
promotions = logs.count("⬆️ Promoting to hot pool")
return {
'permanent': permanent,
'hot': hot,
'cold': cold,
'new': new,
'promotions': promotions,
'total': permanent + hot + cold
}
async def crawl_with_viewport(client, url, viewport):
"""Single request with specific viewport."""
payload = {
"urls": ["https://httpbin.org/html"],
"browser_config": {},
"crawler_config": {}
}
# Add viewport if specified
if viewport:
payload["browser_config"] = {
"type": "BrowserConfig",
"params": {
"viewport": {"type": "dict", "value": viewport},
"headless": True,
"text_mode": True,
"extra_args": [
"--no-sandbox",
"--disable-dev-shm-usage",
"--disable-gpu",
"--disable-software-rasterizer",
"--disable-web-security",
"--allow-insecure-localhost",
"--ignore-certificate-errors"
]
}
}
start = time.time()
try:
resp = await client.post(url, json=payload, timeout=60.0)
elapsed = (time.time() - start) * 1000
return {"success": resp.status_code == 200, "latency_ms": elapsed, "viewport": viewport}
except Exception as e:
return {"success": False, "error": str(e), "viewport": viewport}
def start_container(client, image, name, port):
"""Start container."""
try:
old = client.containers.get(name)
print(f"🧹 Stopping existing container...")
old.stop()
old.remove()
except docker.errors.NotFound:
pass
print(f"🚀 Starting container...")
container = client.containers.run(
image, name=name, ports={f"{port}/tcp": port},
detach=True, shm_size="1g", mem_limit="4g",
)
print(f"⏳ Waiting for health...")
for _ in range(30):
time.sleep(1)
container.reload()
if container.status == "running":
try:
import requests
if requests.get(f"http://localhost:{port}/health", timeout=2).status_code == 200:
print(f"✅ Container healthy!")
return container
except:
pass
raise TimeoutError("Container failed to start")
async def main():
print("="*60)
print("TEST 5: Pool Stress - Mixed Configs")
print("="*60)
client = docker.from_env()
container = None
monitor_thread = None
try:
container = start_container(client, IMAGE, CONTAINER_NAME, PORT)
print(f"\n⏳ Waiting for permanent browser init (3s)...")
await asyncio.sleep(3)
# Start monitoring
stop_monitoring.clear()
stats_history.clear()
monitor_thread = Thread(target=monitor_stats, args=(container,), daemon=True)
monitor_thread.start()
await asyncio.sleep(1)
baseline_mem = stats_history[-1]['memory_mb'] if stats_history else 0
print(f"📏 Baseline: {baseline_mem:.1f} MB\n")
url = f"http://localhost:{PORT}/crawl"
print(f"Testing {len(VIEWPORT_CONFIGS)} different configs:")
for i, vp in enumerate(VIEWPORT_CONFIGS):
vp_str = "Default" if vp is None else f"{vp['width']}x{vp['height']}"
print(f" {i+1}. {vp_str}")
print()
# Run requests: repeat each config REQUESTS_PER_CONFIG times
all_results = []
config_sequence = []
for _ in range(REQUESTS_PER_CONFIG):
for viewport in VIEWPORT_CONFIGS:
config_sequence.append(viewport)
# Shuffle to mix configs
random.shuffle(config_sequence)
print(f"🔄 Running {len(config_sequence)} requests with mixed configs...")
async with httpx.AsyncClient() as http_client:
for i, viewport in enumerate(config_sequence):
result = await crawl_with_viewport(http_client, url, viewport)
all_results.append(result)
if (i + 1) % 5 == 0:
vp_str = "default" if result['viewport'] is None else f"{result['viewport']['width']}x{result['viewport']['height']}"
status = "" if result.get('success') else ""
lat = f"{result.get('latency_ms', 0):.0f}ms" if 'latency_ms' in result else "error"
print(f" [{i+1}/{len(config_sequence)}] {status} {vp_str} - {lat}")
# Stop monitoring
await asyncio.sleep(2)
stop_monitoring.set()
if monitor_thread:
monitor_thread.join(timeout=2)
# Analyze results
pool_stats = analyze_pool_logs(container)
successes = sum(1 for r in all_results if r.get("success"))
success_rate = (successes / len(all_results)) * 100
latencies = [r["latency_ms"] for r in all_results if "latency_ms" in r]
avg_lat = sum(latencies) / len(latencies) if latencies else 0
memory_samples = [s['memory_mb'] for s in stats_history]
peak_mem = max(memory_samples) if memory_samples else 0
final_mem = memory_samples[-1] if memory_samples else 0
print(f"\n{'='*60}")
print(f"RESULTS:")
print(f"{'='*60}")
print(f" Requests: {len(all_results)}")
print(f" Success Rate: {success_rate:.1f}% ({successes}/{len(all_results)})")
print(f" Avg Latency: {avg_lat:.0f}ms")
print(f"\n Pool Statistics:")
print(f" 🔥 Permanent: {pool_stats['permanent']}")
print(f" ♨️ Hot: {pool_stats['hot']}")
print(f" ❄️ Cold: {pool_stats['cold']}")
print(f" 🆕 New: {pool_stats['new']}")
print(f" ⬆️ Promotions: {pool_stats['promotions']}")
print(f" 📊 Reuse: {(pool_stats['total'] / len(all_results) * 100):.1f}%")
print(f"\n Memory:")
print(f" Baseline: {baseline_mem:.1f} MB")
print(f" Peak: {peak_mem:.1f} MB")
print(f" Final: {final_mem:.1f} MB")
print(f" Delta: {final_mem - baseline_mem:+.1f} MB")
print(f"{'='*60}")
# Pass/Fail
passed = True
if success_rate < 99:
print(f"❌ FAIL: Success rate {success_rate:.1f}% < 99%")
passed = False
# Should see promotions since we repeat each config 5 times
if pool_stats['promotions'] < (len(VIEWPORT_CONFIGS) - 1): # -1 for default
print(f"⚠️ WARNING: Only {pool_stats['promotions']} promotions (expected ~{len(VIEWPORT_CONFIGS)-1})")
# Should have created some browsers for different configs
if pool_stats['new'] == 0:
print(f"⚠️ NOTE: No new browsers created (all used default?)")
if pool_stats['permanent'] == len(all_results):
print(f"⚠️ NOTE: All requests used permanent browser (configs not varying enough?)")
if final_mem - baseline_mem > 500:
print(f"⚠️ WARNING: Memory grew {final_mem - baseline_mem:.1f} MB")
if passed:
print(f"✅ TEST PASSED")
return 0
else:
return 1
except Exception as e:
print(f"\n❌ TEST ERROR: {e}")
import traceback
traceback.print_exc()
return 1
finally:
stop_monitoring.set()
if container:
print(f"🛑 Stopping container...")
container.stop()
container.remove()
if __name__ == "__main__":
exit_code = asyncio.run(main())
exit(exit_code)

View File

@@ -0,0 +1,234 @@
#!/usr/bin/env python3
"""
Test 6: Multi-Endpoint Testing
- Tests multiple endpoints together: /html, /screenshot, /pdf, /crawl
- Validates each endpoint works correctly
- Monitors success rates per endpoint
"""
import asyncio
import time
import docker
import httpx
from threading import Thread, Event
# Config
IMAGE = "crawl4ai-local:latest"
CONTAINER_NAME = "crawl4ai-test"
PORT = 11235
REQUESTS_PER_ENDPOINT = 10
# Stats
stats_history = []
stop_monitoring = Event()
def monitor_stats(container):
"""Background stats collector."""
for stat in container.stats(decode=True, stream=True):
if stop_monitoring.is_set():
break
try:
mem_usage = stat['memory_stats'].get('usage', 0) / (1024 * 1024)
stats_history.append({'timestamp': time.time(), 'memory_mb': mem_usage})
except:
pass
time.sleep(0.5)
async def test_html(client, base_url, count):
"""Test /html endpoint."""
url = f"{base_url}/html"
results = []
for _ in range(count):
start = time.time()
try:
resp = await client.post(url, json={"url": "https://httpbin.org/html"}, timeout=30.0)
elapsed = (time.time() - start) * 1000
results.append({"success": resp.status_code == 200, "latency_ms": elapsed})
except Exception as e:
results.append({"success": False, "error": str(e)})
return results
async def test_screenshot(client, base_url, count):
"""Test /screenshot endpoint."""
url = f"{base_url}/screenshot"
results = []
for _ in range(count):
start = time.time()
try:
resp = await client.post(url, json={"url": "https://httpbin.org/html"}, timeout=30.0)
elapsed = (time.time() - start) * 1000
results.append({"success": resp.status_code == 200, "latency_ms": elapsed})
except Exception as e:
results.append({"success": False, "error": str(e)})
return results
async def test_pdf(client, base_url, count):
"""Test /pdf endpoint."""
url = f"{base_url}/pdf"
results = []
for _ in range(count):
start = time.time()
try:
resp = await client.post(url, json={"url": "https://httpbin.org/html"}, timeout=30.0)
elapsed = (time.time() - start) * 1000
results.append({"success": resp.status_code == 200, "latency_ms": elapsed})
except Exception as e:
results.append({"success": False, "error": str(e)})
return results
async def test_crawl(client, base_url, count):
"""Test /crawl endpoint."""
url = f"{base_url}/crawl"
results = []
payload = {
"urls": ["https://httpbin.org/html"],
"browser_config": {},
"crawler_config": {}
}
for _ in range(count):
start = time.time()
try:
resp = await client.post(url, json=payload, timeout=30.0)
elapsed = (time.time() - start) * 1000
results.append({"success": resp.status_code == 200, "latency_ms": elapsed})
except Exception as e:
results.append({"success": False, "error": str(e)})
return results
def start_container(client, image, name, port):
"""Start container."""
try:
old = client.containers.get(name)
print(f"🧹 Stopping existing container...")
old.stop()
old.remove()
except docker.errors.NotFound:
pass
print(f"🚀 Starting container...")
container = client.containers.run(
image, name=name, ports={f"{port}/tcp": port},
detach=True, shm_size="1g", mem_limit="4g",
)
print(f"⏳ Waiting for health...")
for _ in range(30):
time.sleep(1)
container.reload()
if container.status == "running":
try:
import requests
if requests.get(f"http://localhost:{port}/health", timeout=2).status_code == 200:
print(f"✅ Container healthy!")
return container
except:
pass
raise TimeoutError("Container failed to start")
async def main():
print("="*60)
print("TEST 6: Multi-Endpoint Testing")
print("="*60)
client = docker.from_env()
container = None
monitor_thread = None
try:
container = start_container(client, IMAGE, CONTAINER_NAME, PORT)
print(f"\n⏳ Waiting for permanent browser init (3s)...")
await asyncio.sleep(3)
# Start monitoring
stop_monitoring.clear()
stats_history.clear()
monitor_thread = Thread(target=monitor_stats, args=(container,), daemon=True)
monitor_thread.start()
await asyncio.sleep(1)
baseline_mem = stats_history[-1]['memory_mb'] if stats_history else 0
print(f"📏 Baseline: {baseline_mem:.1f} MB\n")
base_url = f"http://localhost:{PORT}"
# Test each endpoint
endpoints = {
"/html": test_html,
"/screenshot": test_screenshot,
"/pdf": test_pdf,
"/crawl": test_crawl,
}
all_endpoint_stats = {}
async with httpx.AsyncClient() as http_client:
for endpoint_name, test_func in endpoints.items():
print(f"🔄 Testing {endpoint_name} ({REQUESTS_PER_ENDPOINT} requests)...")
results = await test_func(http_client, base_url, REQUESTS_PER_ENDPOINT)
successes = sum(1 for r in results if r.get("success"))
success_rate = (successes / len(results)) * 100
latencies = [r["latency_ms"] for r in results if "latency_ms" in r]
avg_lat = sum(latencies) / len(latencies) if latencies else 0
all_endpoint_stats[endpoint_name] = {
'success_rate': success_rate,
'avg_latency': avg_lat,
'total': len(results),
'successes': successes
}
print(f" ✓ Success: {success_rate:.1f}% ({successes}/{len(results)}), Avg: {avg_lat:.0f}ms")
# Stop monitoring
await asyncio.sleep(1)
stop_monitoring.set()
if monitor_thread:
monitor_thread.join(timeout=2)
# Final stats
memory_samples = [s['memory_mb'] for s in stats_history]
peak_mem = max(memory_samples) if memory_samples else 0
final_mem = memory_samples[-1] if memory_samples else 0
print(f"\n{'='*60}")
print(f"RESULTS:")
print(f"{'='*60}")
for endpoint, stats in all_endpoint_stats.items():
print(f" {endpoint:12} Success: {stats['success_rate']:5.1f}% Avg: {stats['avg_latency']:6.0f}ms")
print(f"\n Memory:")
print(f" Baseline: {baseline_mem:.1f} MB")
print(f" Peak: {peak_mem:.1f} MB")
print(f" Final: {final_mem:.1f} MB")
print(f" Delta: {final_mem - baseline_mem:+.1f} MB")
print(f"{'='*60}")
# Pass/Fail
passed = True
for endpoint, stats in all_endpoint_stats.items():
if stats['success_rate'] < 100:
print(f"❌ FAIL: {endpoint} success rate {stats['success_rate']:.1f}% < 100%")
passed = False
if passed:
print(f"✅ TEST PASSED")
return 0
else:
return 1
except Exception as e:
print(f"\n❌ TEST ERROR: {e}")
import traceback
traceback.print_exc()
return 1
finally:
stop_monitoring.set()
if container:
print(f"🛑 Stopping container...")
container.stop()
container.remove()
if __name__ == "__main__":
exit_code = asyncio.run(main())
exit(exit_code)

View File

@@ -0,0 +1,199 @@
#!/usr/bin/env python3
"""
Test 7: Cleanup Verification (Janitor)
- Creates load spike then goes idle
- Verifies memory returns to near baseline
- Tests janitor cleanup of idle browsers
- Monitors memory recovery time
"""
import asyncio
import time
import docker
import httpx
from threading import Thread, Event
# Config
IMAGE = "crawl4ai-local:latest"
CONTAINER_NAME = "crawl4ai-test"
PORT = 11235
SPIKE_REQUESTS = 20 # Create some browsers
IDLE_TIME = 90 # Wait 90s for janitor (runs every 60s)
# Stats
stats_history = []
stop_monitoring = Event()
def monitor_stats(container):
"""Background stats collector."""
for stat in container.stats(decode=True, stream=True):
if stop_monitoring.is_set():
break
try:
mem_usage = stat['memory_stats'].get('usage', 0) / (1024 * 1024)
stats_history.append({'timestamp': time.time(), 'memory_mb': mem_usage})
except:
pass
time.sleep(1) # Sample every 1s for this test
def start_container(client, image, name, port):
"""Start container."""
try:
old = client.containers.get(name)
print(f"🧹 Stopping existing container...")
old.stop()
old.remove()
except docker.errors.NotFound:
pass
print(f"🚀 Starting container...")
container = client.containers.run(
image, name=name, ports={f"{port}/tcp": port},
detach=True, shm_size="1g", mem_limit="4g",
)
print(f"⏳ Waiting for health...")
for _ in range(30):
time.sleep(1)
container.reload()
if container.status == "running":
try:
import requests
if requests.get(f"http://localhost:{port}/health", timeout=2).status_code == 200:
print(f"✅ Container healthy!")
return container
except:
pass
raise TimeoutError("Container failed to start")
async def main():
print("="*60)
print("TEST 7: Cleanup Verification (Janitor)")
print("="*60)
client = docker.from_env()
container = None
monitor_thread = None
try:
container = start_container(client, IMAGE, CONTAINER_NAME, PORT)
print(f"\n⏳ Waiting for permanent browser init (3s)...")
await asyncio.sleep(3)
# Start monitoring
stop_monitoring.clear()
stats_history.clear()
monitor_thread = Thread(target=monitor_stats, args=(container,), daemon=True)
monitor_thread.start()
await asyncio.sleep(2)
baseline_mem = stats_history[-1]['memory_mb'] if stats_history else 0
print(f"📏 Baseline: {baseline_mem:.1f} MB\n")
# Create load spike with different configs to populate pool
print(f"🔥 Creating load spike ({SPIKE_REQUESTS} requests with varied configs)...")
url = f"http://localhost:{PORT}/crawl"
viewports = [
{"width": 1920, "height": 1080},
{"width": 1024, "height": 768},
{"width": 375, "height": 667},
]
async with httpx.AsyncClient(timeout=60.0) as http_client:
tasks = []
for i in range(SPIKE_REQUESTS):
vp = viewports[i % len(viewports)]
payload = {
"urls": ["https://httpbin.org/html"],
"browser_config": {
"type": "BrowserConfig",
"params": {
"viewport": {"type": "dict", "value": vp},
"headless": True,
"text_mode": True,
"extra_args": [
"--no-sandbox", "--disable-dev-shm-usage",
"--disable-gpu", "--disable-software-rasterizer",
"--disable-web-security", "--allow-insecure-localhost",
"--ignore-certificate-errors"
]
}
},
"crawler_config": {}
}
tasks.append(http_client.post(url, json=payload))
results = await asyncio.gather(*tasks, return_exceptions=True)
successes = sum(1 for r in results if hasattr(r, 'status_code') and r.status_code == 200)
print(f" ✓ Spike completed: {successes}/{len(results)} successful")
# Measure peak
await asyncio.sleep(2)
peak_mem = max([s['memory_mb'] for s in stats_history]) if stats_history else baseline_mem
print(f" 📊 Peak memory: {peak_mem:.1f} MB (+{peak_mem - baseline_mem:.1f} MB)")
# Now go idle and wait for janitor
print(f"\n⏸️ Going idle for {IDLE_TIME}s (janitor cleanup)...")
print(f" (Janitor runs every 60s, checking for idle browsers)")
for elapsed in range(0, IDLE_TIME, 10):
await asyncio.sleep(10)
current_mem = stats_history[-1]['memory_mb'] if stats_history else 0
print(f" [{elapsed+10:3d}s] Memory: {current_mem:.1f} MB")
# Stop monitoring
stop_monitoring.set()
if monitor_thread:
monitor_thread.join(timeout=2)
# Analyze memory recovery
final_mem = stats_history[-1]['memory_mb'] if stats_history else 0
recovery_mb = peak_mem - final_mem
recovery_pct = (recovery_mb / (peak_mem - baseline_mem) * 100) if (peak_mem - baseline_mem) > 0 else 0
print(f"\n{'='*60}")
print(f"RESULTS:")
print(f"{'='*60}")
print(f" Memory Journey:")
print(f" Baseline: {baseline_mem:.1f} MB")
print(f" Peak: {peak_mem:.1f} MB (+{peak_mem - baseline_mem:.1f} MB)")
print(f" Final: {final_mem:.1f} MB (+{final_mem - baseline_mem:.1f} MB)")
print(f" Recovered: {recovery_mb:.1f} MB ({recovery_pct:.1f}%)")
print(f"{'='*60}")
# Pass/Fail
passed = True
# Should have created some memory pressure
if peak_mem - baseline_mem < 100:
print(f"⚠️ WARNING: Peak increase only {peak_mem - baseline_mem:.1f} MB (expected more browsers)")
# Should recover most memory (within 100MB of baseline)
if final_mem - baseline_mem > 100:
print(f"⚠️ WARNING: Memory didn't recover well (still +{final_mem - baseline_mem:.1f} MB above baseline)")
else:
print(f"✅ Good memory recovery!")
# Baseline + 50MB tolerance
if final_mem - baseline_mem < 50:
print(f"✅ Excellent cleanup (within 50MB of baseline)")
print(f"✅ TEST PASSED")
return 0
except Exception as e:
print(f"\n❌ TEST ERROR: {e}")
import traceback
traceback.print_exc()
return 1
finally:
stop_monitoring.set()
if container:
print(f"🛑 Stopping container...")
container.stop()
container.remove()
if __name__ == "__main__":
exit_code = asyncio.run(main())
exit(exit_code)

View File

@@ -0,0 +1,57 @@
#!/usr/bin/env python3
"""Quick test to generate monitor dashboard activity"""
import httpx
import asyncio
async def test_dashboard():
async with httpx.AsyncClient(timeout=30.0) as client:
print("📊 Generating dashboard activity...")
# Test 1: Simple crawl
print("\n1⃣ Running simple crawl...")
r1 = await client.post(
"http://localhost:11235/crawl",
json={"urls": ["https://httpbin.org/html"], "crawler_config": {}}
)
print(f" Status: {r1.status_code}")
# Test 2: Multiple URLs
print("\n2⃣ Running multi-URL crawl...")
r2 = await client.post(
"http://localhost:11235/crawl",
json={
"urls": [
"https://httpbin.org/html",
"https://httpbin.org/json"
],
"crawler_config": {}
}
)
print(f" Status: {r2.status_code}")
# Test 3: Check monitor health
print("\n3⃣ Checking monitor health...")
r3 = await client.get("http://localhost:11235/monitor/health")
health = r3.json()
print(f" Memory: {health['container']['memory_percent']}%")
print(f" Browsers: {health['pool']['permanent']['active']}")
# Test 4: Check requests
print("\n4⃣ Checking request log...")
r4 = await client.get("http://localhost:11235/monitor/requests")
reqs = r4.json()
print(f" Active: {len(reqs['active'])}")
print(f" Completed: {len(reqs['completed'])}")
# Test 5: Check endpoint stats
print("\n5⃣ Checking endpoint stats...")
r5 = await client.get("http://localhost:11235/monitor/endpoints/stats")
stats = r5.json()
for endpoint, data in stats.items():
print(f" {endpoint}: {data['count']} requests, {data['avg_latency_ms']}ms avg")
print("\n✅ Dashboard should now show activity!")
print(f"\n🌐 Open: http://localhost:11235/dashboard")
if __name__ == "__main__":
asyncio.run(test_dashboard())

View File

@@ -178,4 +178,29 @@ def verify_email_domain(email: str) -> bool:
records = dns.resolver.resolve(domain, 'MX')
return True if records else False
except Exception as e:
return False
return False
def get_container_memory_percent() -> float:
"""Get actual container memory usage vs limit (cgroup v1/v2 aware)."""
try:
# Try cgroup v2 first
usage_path = Path("/sys/fs/cgroup/memory.current")
limit_path = Path("/sys/fs/cgroup/memory.max")
if not usage_path.exists():
# Fall back to cgroup v1
usage_path = Path("/sys/fs/cgroup/memory/memory.usage_in_bytes")
limit_path = Path("/sys/fs/cgroup/memory/memory.limit_in_bytes")
usage = int(usage_path.read_text())
limit = int(limit_path.read_text())
# Handle unlimited (v2: "max", v1: > 1e18)
if limit > 1e18:
import psutil
limit = psutil.virtual_memory().total
return (usage / limit) * 100
except:
# Non-container or unsupported: fallback to host
import psutil
return psutil.virtual_memory().percent
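A minimal usage sketch for the helper above; the pressure threshold and the gating function are illustrative assumptions, not values taken from the server code:
# Illustrative sketch only: shows how the cgroup-aware helper could gate new
# browser creation under memory pressure. The 85% threshold is an assumption.
MEMORY_PRESSURE_THRESHOLD = 85.0
def can_spawn_browser() -> bool:
    # Refuse to create another browser when the container is near its memory limit
    return get_container_memory_percent() < MEMORY_PRESSURE_THRESHOLD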

View File

@@ -0,0 +1,154 @@
import asyncio
import os
from crawl4ai import AsyncWebCrawler, AdaptiveCrawler, AdaptiveConfig, LLMConfig
async def test_configuration(name: str, config: AdaptiveConfig, url: str, query: str):
"""Test a specific configuration"""
print(f"\n{'='*60}")
print(f"Configuration: {name}")
print(f"{'='*60}")
async with AsyncWebCrawler(verbose=False) as crawler:
adaptive = AdaptiveCrawler(crawler, config)
result = await adaptive.digest(start_url=url, query=query)
print("\n" + "="*50)
print("CRAWL STATISTICS")
print("="*50)
adaptive.print_stats(detailed=False)
# Get the most relevant content found
print("\n" + "="*50)
print("MOST RELEVANT PAGES")
print("="*50)
relevant_pages = adaptive.get_relevant_content(top_k=5)
for i, page in enumerate(relevant_pages, 1):
print(f"\n{i}. {page['url']}")
print(f" Relevance Score: {page['score']:.2%}")
# Show a snippet of the content
content = page['content'] or ""
if content:
snippet = content[:200].replace('\n', ' ')
if len(content) > 200:
snippet += "..."
print(f" Preview: {snippet}")
print(f"\n{'='*50}")
print(f"Pages crawled: {len(result.crawled_urls)}")
print(f"Final confidence: {adaptive.confidence:.1%}")
print(f"Stopped reason: {result.metrics.get('stopped_reason', 'max_pages')}")
if result.metrics.get('is_irrelevant', False):
print("⚠️ Query detected as irrelevant!")
return result
async def llm_embedding():
"""Demonstrate various embedding configurations"""
print("EMBEDDING STRATEGY CONFIGURATION EXAMPLES")
print("=" * 60)
# Base URL and query for testing
test_url = "https://docs.python.org/3/library/asyncio.html"
openai_llm_config = LLMConfig(
provider='openai/text-embedding-3-small',
api_token=os.getenv('OPENAI_API_KEY'),
temperature=0.7,
max_tokens=2000
)
config_openai = AdaptiveConfig(
strategy="embedding",
max_pages=10,
# Use OpenAI embeddings
embedding_llm_config=openai_llm_config,
# embedding_llm_config={
# 'provider': 'openai/text-embedding-3-small',
# 'api_token': os.getenv('OPENAI_API_KEY')
# },
# OpenAI embeddings are high quality, can be stricter
embedding_k_exp=4.0,
n_query_variations=12
)
await test_configuration(
"OpenAI Embeddings",
config_openai,
test_url,
# "event-driven architecture patterns"
"async await context managers coroutines"
)
return
async def basic_adaptive_crawling():
"""Basic adaptive crawling example"""
# Initialize the crawler
async with AsyncWebCrawler(verbose=True) as crawler:
# Create an adaptive crawler with default settings (statistical strategy)
adaptive = AdaptiveCrawler(crawler)
# Note: You can also use embedding strategy for semantic understanding:
# from crawl4ai import AdaptiveConfig
# config = AdaptiveConfig(strategy="embedding")
# adaptive = AdaptiveCrawler(crawler, config)
# Start adaptive crawling
print("Starting adaptive crawl for Python async programming information...")
result = await adaptive.digest(
start_url="https://docs.python.org/3/library/asyncio.html",
query="async await context managers coroutines"
)
# Display crawl statistics
print("\n" + "="*50)
print("CRAWL STATISTICS")
print("="*50)
adaptive.print_stats(detailed=False)
# Get the most relevant content found
print("\n" + "="*50)
print("MOST RELEVANT PAGES")
print("="*50)
relevant_pages = adaptive.get_relevant_content(top_k=5)
for i, page in enumerate(relevant_pages, 1):
print(f"\n{i}. {page['url']}")
print(f" Relevance Score: {page['score']:.2%}")
# Show a snippet of the content
content = page['content'] or ""
if content:
snippet = content[:200].replace('\n', ' ')
if len(content) > 200:
snippet += "..."
print(f" Preview: {snippet}")
# Show final confidence
print(f"\n{'='*50}")
print(f"Final Confidence: {adaptive.confidence:.2%}")
print(f"Total Pages Crawled: {len(result.crawled_urls)}")
print(f"Knowledge Base Size: {len(adaptive.state.knowledge_base)} documents")
if adaptive.confidence >= 0.8:
print("✓ High confidence - can answer detailed questions about async Python")
elif adaptive.confidence >= 0.6:
print("~ Moderate confidence - can answer basic questions")
else:
print("✗ Low confidence - need more information")
if __name__ == "__main__":
asyncio.run(llm_embedding())
# asyncio.run(basic_adaptive_crawling())

View File

@@ -0,0 +1,513 @@
#!/usr/bin/env python3
"""
Comprehensive test demonstrating all hook types from hooks_example.py
adapted for the Docker API with real URLs
"""
import requests
import json
import time
from typing import Dict, Any
# API_BASE_URL = "http://localhost:11234"
API_BASE_URL = "http://localhost:11235"
def test_all_hooks_demo():
"""Demonstrate all 8 hook types with practical examples"""
print("=" * 70)
print("Testing: All Hooks Comprehensive Demo")
print("=" * 70)
hooks_code = {
"on_browser_created": """
async def hook(browser, **kwargs):
# Hook called after browser is created
print("[HOOK] on_browser_created - Browser is ready!")
# Browser-level configurations would go here
return browser
""",
"on_page_context_created": """
async def hook(page, context, **kwargs):
# Hook called after a new page and context are created
print("[HOOK] on_page_context_created - New page created!")
# Set viewport size for consistent rendering
await page.set_viewport_size({"width": 1920, "height": 1080})
# Add cookies for the session (using httpbin.org domain)
await context.add_cookies([
{
"name": "test_session",
"value": "abc123xyz",
"domain": ".httpbin.org",
"path": "/",
"httpOnly": True,
"secure": True
}
])
# Block ads and tracking scripts to speed up crawling
await context.route("**/*.{png,jpg,jpeg,gif,webp,svg}", lambda route: route.abort())
await context.route("**/analytics/*", lambda route: route.abort())
await context.route("**/ads/*", lambda route: route.abort())
print("[HOOK] Viewport set, cookies added, and ads blocked")
return page
""",
"on_user_agent_updated": """
async def hook(page, context, user_agent, **kwargs):
# Hook called when user agent is updated
print(f"[HOOK] on_user_agent_updated - User agent: {user_agent[:50]}...")
return page
""",
"before_goto": """
async def hook(page, context, url, **kwargs):
# Hook called before navigating to each URL
print(f"[HOOK] before_goto - About to visit: {url}")
# Add custom headers for the request
await page.set_extra_http_headers({
"X-Custom-Header": "crawl4ai-test",
"Accept-Language": "en-US,en;q=0.9",
"DNT": "1"
})
return page
""",
"after_goto": """
async def hook(page, context, url, response, **kwargs):
# Hook called after navigating to each URL
print(f"[HOOK] after_goto - Successfully loaded: {url}")
# Wait a moment for dynamic content to load
await page.wait_for_timeout(1000)
# Check if specific elements exist (with error handling)
try:
# For httpbin.org, wait for body element
await page.wait_for_selector("body", timeout=2000)
print("[HOOK] Body element found and loaded")
except:
print("[HOOK] Timeout waiting for body, continuing anyway")
return page
""",
"on_execution_started": """
async def hook(page, context, **kwargs):
# Hook called after custom JavaScript execution
print("[HOOK] on_execution_started - Custom JS executed!")
# You could inject additional JavaScript here if needed
await page.evaluate("console.log('[INJECTED] Hook JS running');")
return page
""",
"before_retrieve_html": """
async def hook(page, context, **kwargs):
# Hook called before retrieving the HTML content
print("[HOOK] before_retrieve_html - Preparing to get HTML")
# Scroll to bottom to trigger lazy loading
await page.evaluate("window.scrollTo(0, document.body.scrollHeight);")
await page.wait_for_timeout(500)
# Scroll back to top
await page.evaluate("window.scrollTo(0, 0);")
await page.wait_for_timeout(500)
# One more scroll to middle for good measure
await page.evaluate("window.scrollTo(0, document.body.scrollHeight / 2);")
print("[HOOK] Scrolling completed for lazy-loaded content")
return page
""",
"before_return_html": """
async def hook(page, context, html, **kwargs):
# Hook called before returning the HTML content
print(f"[HOOK] before_return_html - HTML length: {len(html)} characters")
# Log some page metrics
metrics = await page.evaluate('''() => {
return {
images: document.images.length,
links: document.links.length,
scripts: document.scripts.length
}
}''')
print(f"[HOOK] Page metrics - Images: {metrics['images']}, Links: {metrics['links']}, Scripts: {metrics['scripts']}")
return page
"""
}
# Create request payload
payload = {
"urls": ["https://httpbin.org/html"],
"hooks": {
"code": hooks_code,
"timeout": 30
},
"crawler_config": {
"js_code": "window.scrollTo(0, document.body.scrollHeight);",
"wait_for": "body",
"cache_mode": "bypass"
}
}
print("\nSending request with all 8 hooks...")
start_time = time.time()
response = requests.post(f"{API_BASE_URL}/crawl", json=payload)
elapsed_time = time.time() - start_time
print(f"Request completed in {elapsed_time:.2f} seconds")
if response.status_code == 200:
data = response.json()
print("\n✅ Request successful!")
# Check hooks execution
if 'hooks' in data:
hooks_info = data['hooks']
print("\n📊 Hooks Execution Summary:")
print(f" Status: {hooks_info['status']['status']}")
print(f" Attached hooks: {len(hooks_info['status']['attached_hooks'])}")
for hook_name in hooks_info['status']['attached_hooks']:
print(f"{hook_name}")
if 'summary' in hooks_info:
summary = hooks_info['summary']
print(f"\n📈 Execution Statistics:")
print(f" Total executions: {summary['total_executions']}")
print(f" Successful: {summary['successful']}")
print(f" Failed: {summary['failed']}")
print(f" Timed out: {summary['timed_out']}")
print(f" Success rate: {summary['success_rate']:.1f}%")
if hooks_info.get('execution_log'):
print(f"\n📝 Execution Log:")
for log_entry in hooks_info['execution_log']:
status_icon = "" if log_entry['status'] == 'success' else ""
exec_time = log_entry.get('execution_time', 0)
print(f" {status_icon} {log_entry['hook_point']}: {exec_time:.3f}s")
# Check crawl results
if 'results' in data and len(data['results']) > 0:
print(f"\n📄 Crawl Results:")
for result in data['results']:
print(f" URL: {result['url']}")
print(f" Success: {result.get('success', False)}")
if result.get('html'):
print(f" HTML length: {len(result['html'])} characters")
else:
print(f"❌ Error: {response.status_code}")
try:
error_data = response.json()
print(f"Error details: {json.dumps(error_data, indent=2)}")
except:
print(f"Error text: {response.text[:500]}")
def test_authentication_flow():
"""Test a complete authentication flow with multiple hooks"""
print("\n" + "=" * 70)
print("Testing: Authentication Flow with Multiple Hooks")
print("=" * 70)
hooks_code = {
"on_page_context_created": """
async def hook(page, context, **kwargs):
print("[HOOK] Setting up authentication context")
# Add authentication cookies
await context.add_cookies([
{
"name": "auth_token",
"value": "fake_jwt_token_here",
"domain": ".httpbin.org",
"path": "/",
"httpOnly": True,
"secure": True
}
])
# Set localStorage items (for SPA authentication)
await page.evaluate('''
localStorage.setItem('user_id', '12345');
localStorage.setItem('auth_time', new Date().toISOString());
''')
return page
""",
"before_goto": """
async def hook(page, context, url, **kwargs):
print(f"[HOOK] Adding auth headers for {url}")
# Add Authorization header
import base64
credentials = base64.b64encode(b"user:passwd").decode('ascii')
await page.set_extra_http_headers({
'Authorization': f'Basic {credentials}',
'X-API-Key': 'test-api-key-123'
})
return page
"""
}
payload = {
"urls": [
"https://httpbin.org/basic-auth/user/passwd"
],
"hooks": {
"code": hooks_code,
"timeout": 15
}
}
print("\nTesting authentication with httpbin endpoints...")
response = requests.post(f"{API_BASE_URL}/crawl", json=payload)
if response.status_code == 200:
data = response.json()
print("✅ Authentication test completed")
if 'results' in data:
for i, result in enumerate(data['results']):
print(f"\n URL {i+1}: {result['url']}")
if result.get('success'):
# Check for authentication success indicators
html_content = result.get('html', '')
if '"authenticated"' in html_content and 'true' in html_content:
print(" ✅ Authentication successful! Basic auth worked.")
else:
print(" ⚠️ Page loaded but auth status unclear")
else:
print(f" ❌ Failed: {result.get('error_message', 'Unknown error')}")
else:
print(f"❌ Error: {response.status_code}")
def test_performance_optimization_hooks():
"""Test hooks for performance optimization"""
print("\n" + "=" * 70)
print("Testing: Performance Optimization Hooks")
print("=" * 70)
hooks_code = {
"on_page_context_created": """
async def hook(page, context, **kwargs):
print("[HOOK] Optimizing page for performance")
# Block resource-heavy content
await context.route("**/*.{png,jpg,jpeg,gif,webp,svg,ico}", lambda route: route.abort())
await context.route("**/*.{woff,woff2,ttf,otf}", lambda route: route.abort())
await context.route("**/*.{mp4,webm,ogg,mp3,wav}", lambda route: route.abort())
await context.route("**/googletagmanager.com/*", lambda route: route.abort())
await context.route("**/google-analytics.com/*", lambda route: route.abort())
await context.route("**/doubleclick.net/*", lambda route: route.abort())
await context.route("**/facebook.com/*", lambda route: route.abort())
# Disable animations and transitions
await page.add_style_tag(content='''
*, *::before, *::after {
animation-duration: 0s !important;
animation-delay: 0s !important;
transition-duration: 0s !important;
transition-delay: 0s !important;
}
''')
print("[HOOK] Performance optimizations applied")
return page
""",
"before_retrieve_html": """
async def hook(page, context, **kwargs):
print("[HOOK] Removing unnecessary elements before extraction")
# Remove ads, popups, and other unnecessary elements
await page.evaluate('''() => {
// Remove common ad containers
const adSelectors = [
'.ad', '.ads', '.advertisement', '[id*="ad-"]', '[class*="ad-"]',
'.popup', '.modal', '.overlay', '.cookie-banner', '.newsletter-signup'
];
adSelectors.forEach(selector => {
document.querySelectorAll(selector).forEach(el => el.remove());
});
// Remove script tags to clean up HTML
document.querySelectorAll('script').forEach(el => el.remove());
// Remove style tags we don't need
document.querySelectorAll('style').forEach(el => el.remove());
}''')
return page
"""
}
payload = {
"urls": ["https://httpbin.org/html"],
"hooks": {
"code": hooks_code,
"timeout": 10
}
}
print("\nTesting performance optimization hooks...")
start_time = time.time()
response = requests.post(f"{API_BASE_URL}/crawl", json=payload)
elapsed_time = time.time() - start_time
print(f"Request completed in {elapsed_time:.2f} seconds")
if response.status_code == 200:
data = response.json()
print("✅ Performance optimization test completed")
if 'results' in data and len(data['results']) > 0:
result = data['results'][0]
if result.get('html'):
print(f" HTML size: {len(result['html'])} characters")
print(" Resources blocked, ads removed, animations disabled")
else:
print(f"❌ Error: {response.status_code}")
def test_content_extraction_hooks():
"""Test hooks for intelligent content extraction"""
print("\n" + "=" * 70)
print("Testing: Content Extraction Hooks")
print("=" * 70)
hooks_code = {
"after_goto": """
async def hook(page, context, url, response, **kwargs):
print(f"[HOOK] Waiting for dynamic content on {url}")
# Wait for any lazy-loaded content
await page.wait_for_timeout(2000)
# Trigger any "Load More" buttons
try:
load_more = await page.query_selector('[class*="load-more"], [class*="show-more"], button:has-text("Load More")')
if load_more:
await load_more.click()
await page.wait_for_timeout(1000)
print("[HOOK] Clicked 'Load More' button")
except:
pass
return page
""",
"before_retrieve_html": """
async def hook(page, context, **kwargs):
print("[HOOK] Extracting structured data")
import json  # imported inside the hook so json.dumps below works when the hook code runs
# Extract metadata
metadata = await page.evaluate('''() => {
const getMeta = (name) => {
const element = document.querySelector(`meta[name="${name}"], meta[property="${name}"]`);
return element ? element.getAttribute('content') : null;
};
return {
title: document.title,
description: getMeta('description') || getMeta('og:description'),
author: getMeta('author'),
keywords: getMeta('keywords'),
ogTitle: getMeta('og:title'),
ogImage: getMeta('og:image'),
canonical: document.querySelector('link[rel="canonical"]')?.href,
jsonLd: Array.from(document.querySelectorAll('script[type="application/ld+json"]'))
.map(el => el.textContent).filter(Boolean)
};
}''')
print(f"[HOOK] Extracted metadata: {json.dumps(metadata, indent=2)}")
# Infinite scroll handling
for i in range(3):
await page.evaluate("window.scrollTo(0, document.body.scrollHeight);")
await page.wait_for_timeout(1000)
print(f"[HOOK] Scroll iteration {i+1}/3")
return page
"""
}
payload = {
"urls": ["https://httpbin.org/html", "https://httpbin.org/json"],
"hooks": {
"code": hooks_code,
"timeout": 20
}
}
print("\nTesting content extraction hooks...")
response = requests.post(f"{API_BASE_URL}/crawl", json=payload)
if response.status_code == 200:
data = response.json()
print("✅ Content extraction test completed")
if 'hooks' in data and 'summary' in data['hooks']:
summary = data['hooks']['summary']
print(f" Hooks executed: {summary['successful']}/{summary['total_executions']}")
if 'results' in data:
for result in data['results']:
print(f"\n URL: {result['url']}")
print(f" Success: {result.get('success', False)}")
else:
print(f"❌ Error: {response.status_code}")
def main():
"""Run comprehensive hook tests"""
print("🔧 Crawl4AI Docker API - Comprehensive Hooks Testing")
print("Based on docs/examples/hooks_example.py")
print("=" * 70)
tests = [
("All Hooks Demo", test_all_hooks_demo),
("Authentication Flow", test_authentication_flow),
("Performance Optimization", test_performance_optimization_hooks),
("Content Extraction", test_content_extraction_hooks),
]
for i, (name, test_func) in enumerate(tests, 1):
print(f"\n📌 Test {i}/{len(tests)}: {name}")
try:
test_func()
print(f"{name} completed")
except Exception as e:
print(f"{name} failed: {e}")
import traceback
traceback.print_exc()
print("\n" + "=" * 70)
print("🎉 All comprehensive hook tests completed!")
print("=" * 70)
if __name__ == "__main__":
main()

View File

@@ -7,13 +7,13 @@ Simple proxy configuration with `BrowserConfig`:
```python
from crawl4ai.async_configs import BrowserConfig
# Using proxy URL
browser_config = BrowserConfig(proxy="http://proxy.example.com:8080")
# Using HTTP proxy
browser_config = BrowserConfig(proxy_config={"server": "http://proxy.example.com:8080"})
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url="https://example.com")
# Using SOCKS proxy
browser_config = BrowserConfig(proxy="socks5://proxy.example.com:1080")
browser_config = BrowserConfig(proxy_config={"server": "socks5://proxy.example.com:1080"})
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url="https://example.com")
```
@@ -25,7 +25,11 @@ Use an authenticated proxy with `BrowserConfig`:
```python
from crawl4ai.async_configs import BrowserConfig
browser_config = BrowserConfig(proxy="http://[username]:[password]@[host]:[port]")
browser_config = BrowserConfig(proxy_config={
"server": "http://[host]:[port]",
"username": "[username]",
"password": "[password]",
})
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url="https://example.com")
```

View File

@@ -23,7 +23,7 @@ browser_cfg = BrowserConfig(
| **`headless`** | `bool` (default: `True`) | Headless means no visible UI. `False` is handy for debugging. |
| **`viewport_width`** | `int` (default: `1080`) | Initial page width (in px). Useful for testing responsive layouts. |
| **`viewport_height`** | `int` (default: `600`) | Initial page height (in px). |
| **`proxy`** | `str` (default: `None`) | Single-proxy URL if you want all traffic to go through it, e.g. `"http://user:pass@proxy:8080"`. |
| **`proxy`** | `str` (deprecated) | Deprecated. Use `proxy_config` instead. If set, it will be auto-converted internally. |
| **`proxy_config`** | `dict` (default: `None`) | For advanced or multi-proxy needs, specify details like `{"server": "...", "username": "...", ...}`. |
| **`use_persistent_context`** | `bool` (default: `False`) | If `True`, uses a **persistent** browser context (keep cookies, sessions across runs). Also sets `use_managed_browser=True`. |
| **`user_data_dir`** | `str or None` (default: `None`) | Directory to store user data (profiles, cookies). Must be set if you want permanent sessions. |

Binary file not shown.

New image file (1.6 KiB)

View File

@@ -0,0 +1,376 @@
/* ==== File: assets/page_actions.css ==== */
/* Page Actions Dropdown - Terminal Style */
/* Wrapper - positioned in content area */
.page-actions-wrapper {
position: absolute;
top: 1.3rem;
right: 1rem;
z-index: 1000;
}
/* Floating Action Button */
.page-actions-button {
position: relative;
display: inline-flex;
align-items: center;
gap: 0.5rem;
background: #3f3f44;
border: 1px solid #50ffff;
color: #e8e9ed;
padding: 0.75rem 1rem;
border-radius: 6px;
font-family: 'Dank Mono', Monaco, monospace;
font-size: 0.875rem;
cursor: pointer;
transition: all 0.2s ease;
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.3);
}
.page-actions-button:hover {
background: #50ffff;
color: #070708;
transform: translateY(-2px);
box-shadow: 0 6px 16px rgba(80, 255, 255, 0.3);
}
.page-actions-button::before {
content: '▤';
font-size: 1.2rem;
line-height: 1;
}
.page-actions-button::after {
content: '▼';
font-size: 0.6rem;
transition: transform 0.2s ease;
}
.page-actions-button.active::after {
transform: rotate(180deg);
}
/* Dropdown Menu */
.page-actions-dropdown {
position: absolute;
top: 3.5rem;
right: 0;
z-index: 1001;
background: #1a1a1a;
border: 1px solid #3f3f44;
border-radius: 8px;
min-width: 280px;
opacity: 0;
visibility: hidden;
transform: translateY(-10px);
transition: all 0.2s ease;
box-shadow: 0 8px 24px rgba(0, 0, 0, 0.5);
overflow: hidden;
}
.page-actions-dropdown.active {
opacity: 1;
visibility: visible;
transform: translateY(0);
}
.page-actions-dropdown::before {
content: '';
position: absolute;
top: -8px;
right: 1.5rem;
width: 0;
height: 0;
border-left: 8px solid transparent;
border-right: 8px solid transparent;
border-bottom: 8px solid #3f3f44;
}
/* Menu Header */
.page-actions-header {
background: #3f3f44;
padding: 0.5rem 0.75rem;
border-bottom: 1px solid #50ffff;
font-family: 'Dank Mono', Monaco, monospace;
font-size: 0.7rem;
color: #a3abba;
text-transform: uppercase;
letter-spacing: 0.05em;
}
.page-actions-header::before {
content: '┌─';
margin-right: 0.5rem;
color: #50ffff;
}
/* Menu Items */
.page-actions-menu {
list-style: none;
margin: 0;
padding: 0.25rem 0;
}
.page-action-item {
display: block;
padding: 0;
}
ul>li.page-action-item::after{
content: '';
}
.page-action-link {
display: flex;
align-items: center;
gap: 0.5rem;
padding: 0.5rem 0.75rem;
color: #e8e9ed;
text-decoration: none !important;
font-family: 'Dank Mono', Monaco, monospace;
font-size: 0.8rem;
transition: all 0.15s ease;
cursor: pointer;
border-left: 3px solid transparent;
}
.page-action-link:hover:not(.disabled) {
background: #3f3f44;
border-left-color: #50ffff;
color: #50ffff;
text-decoration: none;
}
.page-action-link.disabled {
opacity: 0.5;
cursor: not-allowed;
}
.page-action-link.disabled:hover {
background: transparent;
color: #e8e9ed;
text-decoration: none;
}
/* Icons using ASCII/Terminal characters */
.page-action-icon {
font-size: 1rem;
width: 1.5rem;
text-align: center;
font-weight: bold;
color: #50ffff;
}
.page-action-link:hover:not(.disabled) .page-action-icon {
color: #50ffff;
}
.page-action-link.disabled .page-action-icon {
color: #666;
}
/* Specific icons */
.icon-copy::before {
content: '⎘'; /* Copy/duplicate symbol */
}
.icon-view::before {
content: '⎙'; /* Document symbol */
}
.icon-ai::before {
content: '⚡'; /* Lightning/AI symbol */
}
/* Action Text */
.page-action-text {
flex: 1;
}
.page-action-label {
display: block;
font-weight: 600;
margin-bottom: 0.05rem;
line-height: 1.3;
}
.page-action-description {
display: block;
font-size: 0.7rem;
color: #a3abba;
line-height: 1.2;
}
/* Badge */
/* External link indicator */
.page-action-external::after {
content: '→';
margin-left: 0.25rem;
font-size: 0.75rem;
}
/* Divider */
.page-actions-divider {
height: 1px;
background: #3f3f44;
margin: 0.25rem 0;
}
/* Success/Copy feedback */
.page-action-copied {
background: #50ff50 !important;
color: #070708 !important;
border-left-color: #50ff50 !important;
}
.page-action-copied .page-action-icon {
color: #070708 !important;
}
.page-action-copied .page-action-icon::before {
content: '✓';
}
/* Mobile Responsive */
@media (max-width: 768px) {
.page-actions-wrapper {
top: 0.5rem;
right: 0.5rem;
}
.page-actions-button {
padding: 0.6rem 0.8rem;
font-size: 0.8rem;
}
.page-actions-dropdown {
min-width: 260px;
max-width: calc(100vw - 2rem);
right: -0.5rem;
}
.page-action-link {
padding: 0.6rem 0.8rem;
font-size: 0.8rem;
}
.page-action-description {
font-size: 0.7rem;
}
}
/* Animation for tooltip/notification */
@keyframes slideInFromTop {
from {
transform: translateY(-20px);
opacity: 0;
}
to {
transform: translateY(0);
opacity: 1;
}
}
.page-actions-notification {
position: fixed;
top: calc(var(--header-height) + 0.5rem);
right: 50%;
transform: translateX(50%);
z-index: 1100;
background: #50ff50;
color: #070708;
padding: 0.75rem 1.5rem;
border-radius: 6px;
font-family: 'Dank Mono', Monaco, monospace;
font-size: 0.875rem;
font-weight: 600;
box-shadow: 0 4px 12px rgba(80, 255, 80, 0.4);
animation: slideInFromTop 0.3s ease;
pointer-events: none;
}
.page-actions-notification::before {
content: '✓ ';
margin-right: 0.5rem;
}
/* Hide on print */
@media print {
.page-actions-button,
.page-actions-dropdown {
display: none !important;
}
}
/* Overlay for mobile */
.page-actions-overlay {
display: none;
position: fixed;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: rgba(0, 0, 0, 0.5);
z-index: 998;
opacity: 0;
transition: opacity 0.2s ease;
}
.page-actions-overlay.active {
display: block;
opacity: 1;
}
@media (max-width: 768px) {
.page-actions-overlay {
display: block;
}
}
/* Keyboard focus styles */
.page-action-link:focus {
outline: 2px solid #50ffff;
outline-offset: -2px;
}
.page-actions-button:focus {
outline: 2px solid #50ffff;
outline-offset: 2px;
}
/* Loading state */
.page-action-link.loading {
pointer-events: none;
opacity: 0.7;
}
.page-action-link.loading .page-action-icon::before {
content: '⟳';
animation: spin 1s linear infinite;
}
@keyframes spin {
from { transform: rotate(0deg); }
to { transform: rotate(360deg); }
}
/* Terminal-style border effect on hover */
.page-actions-dropdown:hover {
border-color: #50ffff;
}
/* Footer info */
.page-actions-footer {
background: #070708;
padding: 0.4rem 0.75rem;
border-top: 1px solid #3f3f44;
font-size: 0.65rem;
color: #666;
text-align: center;
font-family: 'Dank Mono', Monaco, monospace;
}
.page-actions-footer::before {
content: '└─';
margin-right: 0.5rem;
color: #3f3f44;
}

View File

@@ -0,0 +1,427 @@
// ==== File: assets/page_actions.js ====
// Page Actions - Copy/View Markdown functionality
document.addEventListener('DOMContentLoaded', () => {
// Configuration
const config = {
githubRepo: 'unclecode/crawl4ai',
githubBranch: 'main',
docsPath: 'docs/md_v2',
excludePaths: ['/apps/c4a-script/', '/apps/llmtxt/', '/apps/crawl4ai-assistant/', '/core/ask-ai/'], // Don't show on app pages
};
let cachedMarkdown = null;
let cachedMarkdownPath = null;
// Check if we should show the button on this page
function shouldShowButton() {
const currentPath = window.location.pathname;
// Don't show on homepage
if (currentPath === '/' || currentPath === '/index.html') {
return false;
}
// Don't show on 404 pages
if (document.title && document.title.toLowerCase().includes('404')) {
return false;
}
// Require mkdocs main content container
const mainContent = document.getElementById('terminal-mkdocs-main-content');
if (!mainContent) {
return false;
}
// Don't show on excluded paths (apps)
for (const excludePath of config.excludePaths) {
if (currentPath.includes(excludePath)) {
return false;
}
}
// Only show on documentation pages
return true;
}
if (!shouldShowButton()) {
return;
}
// Get current page markdown path
function getCurrentMarkdownPath() {
let path = window.location.pathname;
// Remove leading/trailing slashes
path = path.replace(/^\/|\/$/g, '');
// Remove .html extension if present
path = path.replace(/\.html$/, '');
// Handle root/index
if (!path || path === 'index') {
return 'index.md';
}
// Add .md extension
return `${path}.md`;
}
async function loadMarkdownContent() {
const mdPath = getCurrentMarkdownPath();
if (!mdPath) {
throw new Error('Invalid markdown path');
}
const rawUrl = getGithubRawUrl();
const response = await fetch(rawUrl);
if (!response.ok) {
throw new Error(`Failed to fetch markdown: ${response.status}`);
}
const markdown = await response.text();
cachedMarkdown = markdown;
cachedMarkdownPath = mdPath;
return markdown;
}
async function ensureMarkdownCached() {
const mdPath = getCurrentMarkdownPath();
if (!mdPath) {
return false;
}
if (cachedMarkdown && cachedMarkdownPath === mdPath) {
return true;
}
try {
await loadMarkdownContent();
return true;
} catch (error) {
console.warn('Page Actions: Markdown not available for this page.', error);
cachedMarkdown = null;
cachedMarkdownPath = null;
return false;
}
}
async function getMarkdownContent() {
const available = await ensureMarkdownCached();
if (!available) {
throw new Error('Markdown not available for this page.');
}
return cachedMarkdown;
}
// Get GitHub raw URL for current page
function getGithubRawUrl() {
const mdPath = getCurrentMarkdownPath();
return `https://raw.githubusercontent.com/${config.githubRepo}/${config.githubBranch}/${config.docsPath}/${mdPath}`;
}
// Get GitHub file URL for current page (for viewing)
function getGithubFileUrl() {
const mdPath = getCurrentMarkdownPath();
return `https://github.com/${config.githubRepo}/blob/${config.githubBranch}/${config.docsPath}/${mdPath}`;
}
// Create the UI
function createPageActionsUI() {
// Find the main content area
const mainContent = document.getElementById('terminal-mkdocs-main-content');
if (!mainContent) {
console.warn('Page Actions: Could not find #terminal-mkdocs-main-content');
return null;
}
// Create button
const button = document.createElement('button');
button.className = 'page-actions-button';
button.setAttribute('aria-label', 'Page copy');
button.setAttribute('aria-expanded', 'false');
button.innerHTML = '<span>Page Copy</span>';
// Create overlay for mobile
const overlay = document.createElement('div');
overlay.className = 'page-actions-overlay';
// Create dropdown
const dropdown = document.createElement('div');
dropdown.className = 'page-actions-dropdown';
dropdown.setAttribute('role', 'menu');
dropdown.innerHTML = `
<div class="page-actions-header">Page Copy</div>
<ul class="page-actions-menu">
<li class="page-action-item">
<a href="#" class="page-action-link" id="action-copy-markdown" role="menuitem">
<span class="page-action-icon icon-copy"></span>
<span class="page-action-text">
<span class="page-action-label">Copy as Markdown</span>
<span class="page-action-description">Copy page for LLMs</span>
</span>
</a>
</li>
<li class="page-action-item">
<a href="#" class="page-action-link page-action-external" id="action-view-markdown" target="_blank" role="menuitem">
<span class="page-action-icon icon-view"></span>
<span class="page-action-text">
<span class="page-action-label">View as Markdown</span>
<span class="page-action-description">Open raw source</span>
</span>
</a>
</li>
<div class="page-actions-divider"></div>
<li class="page-action-item">
<a href="#" class="page-action-link page-action-external" id="action-open-chatgpt" role="menuitem">
<span class="page-action-icon icon-ai"></span>
<span class="page-action-text">
<span class="page-action-label">Open in ChatGPT</span>
<span class="page-action-description">Ask questions about this page</span>
</span>
</a>
</li>
</ul>
<div class="page-actions-footer">ESC to close</div>
`;
// Create a wrapper for button and dropdown
const wrapper = document.createElement('div');
wrapper.className = 'page-actions-wrapper';
wrapper.appendChild(button);
wrapper.appendChild(dropdown);
// Inject into main content area
mainContent.appendChild(wrapper);
// Append overlay to body
document.body.appendChild(overlay);
return { button, dropdown, overlay, wrapper };
}
// Toggle dropdown
function toggleDropdown(button, dropdown, overlay) {
const isActive = dropdown.classList.contains('active');
if (isActive) {
closeDropdown(button, dropdown, overlay);
} else {
openDropdown(button, dropdown, overlay);
}
}
function openDropdown(button, dropdown, overlay) {
dropdown.classList.add('active');
// Don't activate overlay - not needed
button.classList.add('active');
button.setAttribute('aria-expanded', 'true');
}
function closeDropdown(button, dropdown, overlay) {
dropdown.classList.remove('active');
// Don't deactivate overlay - not needed
button.classList.remove('active');
button.setAttribute('aria-expanded', 'false');
}
// Show notification
function showNotification(message, duration = 2000) {
const notification = document.createElement('div');
notification.className = 'page-actions-notification';
notification.textContent = message;
document.body.appendChild(notification);
setTimeout(() => {
notification.remove();
}, duration);
}
// Copy markdown to clipboard
async function copyMarkdownToClipboard(link) {
// Add loading state
link.classList.add('loading');
try {
const markdown = await getMarkdownContent();
// Copy to clipboard
await navigator.clipboard.writeText(markdown);
// Visual feedback
link.classList.remove('loading');
link.classList.add('page-action-copied');
showNotification('Markdown copied to clipboard!');
// Reset after delay
setTimeout(() => {
link.classList.remove('page-action-copied');
}, 2000);
} catch (error) {
console.error('Error copying markdown:', error);
link.classList.remove('loading');
showNotification('Error: Could not copy markdown');
}
}
// View markdown in new tab
function viewMarkdown() {
const githubUrl = getGithubFileUrl();
window.open(githubUrl, '_blank', 'noopener,noreferrer');
}
function getCurrentPageUrl() {
const { href } = window.location;
return href.split('#')[0];
}
function openChatGPT() {
const pageUrl = getCurrentPageUrl();
const prompt = encodeURIComponent(`Read ${pageUrl} so I can ask questions about it.`);
const chatUrl = `https://chatgpt.com/?hint=search&prompt=${prompt}`;
window.open(chatUrl, '_blank', 'noopener,noreferrer');
}
(async () => {
if (!shouldShowButton()) {
return;
}
const markdownAvailable = await ensureMarkdownCached();
if (!markdownAvailable) {
return;
}
const ui = createPageActionsUI();
if (!ui) {
return;
}
const { button, dropdown, overlay } = ui;
// Event listeners
button.addEventListener('click', (e) => {
e.stopPropagation();
toggleDropdown(button, dropdown, overlay);
});
overlay.addEventListener('click', () => {
closeDropdown(button, dropdown, overlay);
});
// Copy markdown action
document.getElementById('action-copy-markdown').addEventListener('click', async (e) => {
e.preventDefault();
e.stopPropagation();
await copyMarkdownToClipboard(e.currentTarget);
});
// View markdown action
document.getElementById('action-view-markdown').addEventListener('click', (e) => {
e.preventDefault();
e.stopPropagation();
viewMarkdown();
closeDropdown(button, dropdown, overlay);
});
// Open in ChatGPT action
document.getElementById('action-open-chatgpt').addEventListener('click', (e) => {
e.preventDefault();
e.stopPropagation();
openChatGPT();
closeDropdown(button, dropdown, overlay);
});
// Close on ESC key
document.addEventListener('keydown', (e) => {
if (e.key === 'Escape' && dropdown.classList.contains('active')) {
closeDropdown(button, dropdown, overlay);
}
});
// Close when clicking outside
document.addEventListener('click', (e) => {
if (!dropdown.contains(e.target) && !button.contains(e.target)) {
closeDropdown(button, dropdown, overlay);
}
});
// Prevent dropdown from closing when clicking inside
dropdown.addEventListener('click', (e) => {
// Only stop propagation if not clicking on a link
if (!e.target.closest('.page-action-link')) {
e.stopPropagation();
}
});
// Close dropdown on link click (except for copy which handles itself)
dropdown.querySelectorAll('.page-action-link:not(#action-copy-markdown)').forEach(link => {
link.addEventListener('click', () => {
if (!link.classList.contains('disabled')) {
setTimeout(() => {
closeDropdown(button, dropdown, overlay);
}, 100);
}
});
});
// Handle window resize
let resizeTimer;
window.addEventListener('resize', () => {
clearTimeout(resizeTimer);
resizeTimer = setTimeout(() => {
// Close dropdown on resize to prevent positioning issues
if (dropdown.classList.contains('active')) {
closeDropdown(button, dropdown, overlay);
}
}, 250);
});
// Accessibility: Focus management
button.addEventListener('keydown', (e) => {
if (e.key === 'Enter' || e.key === ' ') {
e.preventDefault();
toggleDropdown(button, dropdown, overlay);
// Focus first menu item when opening
if (dropdown.classList.contains('active')) {
const firstLink = dropdown.querySelector('.page-action-link:not(.disabled)');
if (firstLink) {
setTimeout(() => firstLink.focus(), 100);
}
}
}
});
// Arrow key navigation within menu
dropdown.addEventListener('keydown', (e) => {
if (!dropdown.classList.contains('active')) return;
const links = Array.from(dropdown.querySelectorAll('.page-action-link:not(.disabled)'));
const currentIndex = links.indexOf(document.activeElement);
if (e.key === 'ArrowDown') {
e.preventDefault();
const nextIndex = (currentIndex + 1) % links.length;
links[nextIndex].focus();
} else if (e.key === 'ArrowUp') {
e.preventDefault();
const prevIndex = (currentIndex - 1 + links.length) % links.length;
links[prevIndex].focus();
} else if (e.key === 'Home') {
e.preventDefault();
links[0].focus();
} else if (e.key === 'End') {
e.preventDefault();
links[links.length - 1].focus();
}
});
console.log('Page Actions initialized for:', getCurrentMarkdownPath());
})();
});

docs/md_v2/branding/index.md (new file, 1371 lines)

File diff suppressed because it is too large

View File

@@ -108,7 +108,19 @@ config = AdaptiveConfig(
embedding_min_confidence_threshold=0.1 # Stop if completely irrelevant
)
# With custom embedding provider (e.g., OpenAI)
# With custom LLM provider for query expansion (recommended)
from crawl4ai import LLMConfig
config = AdaptiveConfig(
strategy="embedding",
embedding_llm_config=LLMConfig(
provider='openai/text-embedding-3-small',
api_token='your-api-key',
temperature=0.7
)
)
# Alternative: Dictionary format (backward compatible)
config = AdaptiveConfig(
strategy="embedding",
embedding_llm_config={

View File

@@ -1,4 +1,20 @@
# Crawl4AI Docker Guide 🐳
# Self-Hosting Crawl4AI 🚀
**Take Control of Your Web Crawling Infrastructure**
Self-hosting Crawl4AI gives you complete control over your web crawling and data extraction pipeline. Unlike cloud-based solutions, you own your data, infrastructure, and destiny.
## Why Self-Host?
- **🔒 Data Privacy**: Your crawled data never leaves your infrastructure
- **💰 Cost Control**: No per-request pricing - scale within your own resources
- **🎯 Customization**: Full control over browser configurations, extraction strategies, and performance tuning
- **📊 Transparency**: Real-time monitoring dashboard shows exactly what's happening
- **⚡ Performance**: Direct access without API rate limits or geographic restrictions
- **🛡️ Security**: Keep sensitive data extraction workflows behind your firewall
- **🔧 Flexibility**: Customize, extend, and integrate with your existing infrastructure
When you self-host, you can scale from a single container to a full browser infrastructure, all while maintaining complete control and visibility.
## Table of Contents
- [Prerequisites](#prerequisites)
@@ -25,7 +41,12 @@
- [Available MCP Tools](#available-mcp-tools)
- [Testing MCP Connections](#testing-mcp-connections)
- [MCP Schemas](#mcp-schemas)
- [Metrics & Monitoring](#metrics--monitoring)
- [Real-time Monitoring & Operations](#real-time-monitoring--operations)
- [Monitoring Dashboard](#monitoring-dashboard)
- [Monitor API Endpoints](#monitor-api-endpoints)
- [WebSocket Streaming](#websocket-streaming)
- [Control Actions](#control-actions)
- [Production Integration](#production-integration)
- [Deployment Scenarios](#deployment-scenarios)
- [Complete Examples](#complete-examples)
- [Server Configuration](#server-configuration)
@@ -431,6 +452,409 @@ Executes JavaScript snippets on the specified URL and returns the full crawl res
---
## User-Provided Hooks API
The Docker API supports user-provided hook functions, allowing you to customize the crawling behavior by injecting your own Python code at specific points in the crawling pipeline. This powerful feature enables authentication, performance optimization, and custom content extraction without modifying the server code.
> ⚠️ **IMPORTANT SECURITY WARNING**:
> - **Never use hooks with untrusted code or on untrusted websites**
> - **Be extremely careful when crawling sites that might be phishing or malicious**
> - **Hook code has access to page context and can interact with the website**
> - **Always validate and sanitize any data extracted through hooks**
> - **Never expose credentials or sensitive data in hook code**
> - **Consider running the Docker container in an isolated network when testing**
### Hook Information Endpoint
```
GET /hooks/info
```
Returns information about available hook points and their signatures:
```bash
curl http://localhost:11235/hooks/info
```
### Available Hook Points
The API supports 8 hook points that match the local SDK:
| Hook Point | Parameters | Description | Best Use Cases |
|------------|------------|-------------|----------------|
| `on_browser_created` | `browser` | After browser instance creation | Light setup tasks |
| `on_page_context_created` | `page, context` | After page/context creation | **Authentication, cookies, route blocking** |
| `before_goto` | `page, context, url` | Before navigating to URL | Custom headers, logging |
| `after_goto` | `page, context, url, response` | After navigation completes | Verification, waiting for elements |
| `on_user_agent_updated` | `page, context, user_agent` | When user agent changes | UA-specific logic |
| `on_execution_started` | `page, context` | When JS execution begins | JS-related setup |
| `before_retrieve_html` | `page, context` | Before getting final HTML | **Scrolling, lazy loading** |
| `before_return_html` | `page, context, html` | Before returning HTML | Final modifications, metrics |
### Using Hooks in Requests
Add hooks to any crawl request by including the `hooks` parameter:
```json
{
"urls": ["https://httpbin.org/html"],
"hooks": {
"code": {
"hook_point_name": "async def hook(...): ...",
"another_hook": "async def hook(...): ..."
},
"timeout": 30 // Optional, default 30 seconds (max 120)
}
}
```
### Hook Examples with Real URLs
#### 1. Authentication with Cookies (GitHub)
```python
import requests
# Example: Setting GitHub session cookie (use your actual session)
hooks_code = {
"on_page_context_created": """
async def hook(page, context, **kwargs):
# Add authentication cookies for GitHub
# WARNING: Never hardcode real credentials!
await context.add_cookies([
{
'name': 'user_session',
'value': 'your_github_session_token', # Replace with actual token
'domain': '.github.com',
'path': '/',
'httpOnly': True,
'secure': True,
'sameSite': 'Lax'
}
])
return page
"""
}
response = requests.post("http://localhost:11235/crawl", json={
"urls": ["https://github.com/settings/profile"], # Protected page
"hooks": {"code": hooks_code, "timeout": 30}
})
```
#### 2. Basic Authentication (httpbin.org for testing)
```python
# Safe testing with httpbin.org (a service designed for HTTP testing)
hooks_code = {
"before_goto": """
async def hook(page, context, url, **kwargs):
import base64
# httpbin.org/basic-auth expects username="user" and password="passwd"
credentials = base64.b64encode(b"user:passwd").decode('ascii')
await page.set_extra_http_headers({
'Authorization': f'Basic {credentials}'
})
return page
"""
}
response = requests.post("http://localhost:11235/crawl", json={
"urls": ["https://httpbin.org/basic-auth/user/passwd"],
"hooks": {"code": hooks_code, "timeout": 15}
})
```
#### 3. Performance Optimization (News Sites)
```python
# Example: Optimizing crawling of news sites like CNN or BBC
hooks_code = {
"on_page_context_created": """
async def hook(page, context, **kwargs):
# Block images, fonts, and media to speed up crawling
await context.route("**/*.{png,jpg,jpeg,gif,webp,svg,ico}", lambda route: route.abort())
await context.route("**/*.{woff,woff2,ttf,otf,eot}", lambda route: route.abort())
await context.route("**/*.{mp4,webm,ogg,mp3,wav,flac}", lambda route: route.abort())
# Block common tracking and ad domains
await context.route("**/googletagmanager.com/*", lambda route: route.abort())
await context.route("**/google-analytics.com/*", lambda route: route.abort())
await context.route("**/doubleclick.net/*", lambda route: route.abort())
await context.route("**/facebook.com/tr/*", lambda route: route.abort())
await context.route("**/amazon-adsystem.com/*", lambda route: route.abort())
# Disable CSS animations for faster rendering
await page.add_style_tag(content='''
*, *::before, *::after {
animation-duration: 0s !important;
transition-duration: 0s !important;
}
''')
return page
"""
}
response = requests.post("http://localhost:11235/crawl", json={
"urls": ["https://www.bbc.com/news"], # Heavy news site
"hooks": {"code": hooks_code, "timeout": 30}
})
```
#### 4. Handling Infinite Scroll (Twitter/X)
```python
# Example: Scrolling on Twitter/X (requires authentication)
hooks_code = {
"before_retrieve_html": """
async def hook(page, context, **kwargs):
# Scroll to load more tweets
previous_height = 0
for i in range(5): # Limit scrolls to avoid infinite loop
current_height = await page.evaluate("document.body.scrollHeight")
if current_height == previous_height:
break # No more content to load
await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
await page.wait_for_timeout(2000) # Wait for content to load
previous_height = current_height
return page
"""
}
# Note: Twitter requires authentication for most content
response = requests.post("http://localhost:11235/crawl", json={
"urls": ["https://twitter.com/nasa"], # Public profile
"hooks": {"code": hooks_code, "timeout": 30}
})
```
#### 5. E-commerce Login (Example Pattern)
```python
# SECURITY WARNING: This is a pattern example.
# Never use real credentials in code!
# Always use environment variables or secure vaults.
hooks_code = {
"on_page_context_created": """
async def hook(page, context, **kwargs):
# Example pattern for e-commerce sites
# DO NOT use real credentials here!
# Navigate to login page first
await page.goto("https://example-shop.com/login")
# Wait for login form to load
await page.wait_for_selector("#email", timeout=5000)
# Fill login form (use environment variables in production!)
await page.fill("#email", "test@example.com") # Never use real email
await page.fill("#password", "test_password") # Never use real password
# Handle "Remember Me" checkbox if present
try:
await page.uncheck("#remember_me") # Don't remember on shared systems
except:
pass
# Submit form
await page.click("button[type='submit']")
# Wait for redirect after login
await page.wait_for_url("**/account/**", timeout=10000)
return page
"""
}
```
#### 6. Extracting Structured Data (Wikipedia)
```python
# Safe example using Wikipedia
hooks_code = {
"after_goto": """
async def hook(page, context, url, response, **kwargs):
# Wait for Wikipedia content to load
await page.wait_for_selector("#content", timeout=5000)
return page
""",
"before_retrieve_html": """
async def hook(page, context, **kwargs):
# Extract structured data from Wikipedia infobox
metadata = await page.evaluate('''() => {
const infobox = document.querySelector('.infobox');
if (!infobox) return null;
const data = {};
const rows = infobox.querySelectorAll('tr');
rows.forEach(row => {
const header = row.querySelector('th');
const value = row.querySelector('td');
if (header && value) {
data[header.innerText.trim()] = value.innerText.trim();
}
});
return data;
}''')
if metadata:
print("Extracted metadata:", metadata)
return page
"""
}
response = requests.post("http://localhost:11235/crawl", json={
"urls": ["https://en.wikipedia.org/wiki/Python_(programming_language)"],
"hooks": {"code": hooks_code, "timeout": 20}
})
```
### Security Best Practices
> 🔒 **Critical Security Guidelines**:
1. **Never Trust User Input**: If accepting hook code from users, always validate and sandbox it
2. **Avoid Phishing Sites**: Never use hooks on suspicious or unverified websites
3. **Protect Credentials**:
- Never hardcode passwords, tokens, or API keys in hook code
- Use environment variables or secure secret management
- Rotate credentials regularly
4. **Network Isolation**: Run the Docker container in an isolated network when testing
5. **Audit Hook Code**: Always review hook code before execution
6. **Limit Permissions**: Use the least privileged access needed
7. **Monitor Execution**: Check hook execution logs for suspicious behavior
8. **Timeout Protection**: Always set reasonable timeouts (default 30s)
### Hook Response Information
When hooks are used, the response includes detailed execution information:
```json
{
"success": true,
"results": [...],
"hooks": {
"status": {
"status": "success", // or "partial" or "failed"
"attached_hooks": ["on_page_context_created", "before_retrieve_html"],
"validation_errors": [],
"successfully_attached": 2,
"failed_validation": 0
},
"execution_log": [
{
"hook_point": "on_page_context_created",
"status": "success",
"execution_time": 0.523,
"timestamp": 1234567890.123
}
],
"errors": [], // Any runtime errors
"summary": {
"total_executions": 2,
"successful": 2,
"failed": 0,
"timed_out": 0,
"success_rate": 100.0
}
}
}
```
### Error Handling
The hooks system is designed to be resilient:
1. **Validation Errors**: Caught before execution (syntax errors, wrong parameters)
2. **Runtime Errors**: Handled gracefully - crawl continues with original page object
3. **Timeout Protection**: Hooks automatically terminated after timeout (configurable 1-120s)
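To act on these outcomes programmatically, the fields shown in the response example above (`hooks.status.validation_errors`, `hooks.errors`, `hooks.summary`) are enough. A minimal, illustrative sketch:
```python
import requests

# Send a trivial hook, then surface any validation or runtime problems
# without aborting on them (the crawl itself still returns results).
resp = requests.post("http://localhost:11235/crawl", json={
    "urls": ["https://httpbin.org/html"],
    "hooks": {
        "code": {"before_goto": "async def hook(page, context, url, **kwargs):\n    return page"},
        "timeout": 15,
    },
})
hooks_info = resp.json().get("hooks", {})

for err in hooks_info.get("status", {}).get("validation_errors", []):
    print(f"[validation] {err}")   # caught before execution

for err in hooks_info.get("errors", []):
    print(f"[runtime] {err}")      # crawl continued with the original page object

summary = hooks_info.get("summary", {})
if summary.get("timed_out", 0):
    print(f"{summary['timed_out']} hook execution(s) hit the timeout")
```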
### Complete Example: Safe Multi-Hook Crawling
```python
import requests
import json
import os
# Safe example using httpbin.org for testing
hooks_code = {
"on_page_context_created": """
async def hook(page, context, **kwargs):
# Set viewport and test cookies
await page.set_viewport_size({"width": 1920, "height": 1080})
await context.add_cookies([
{"name": "test_cookie", "value": "test_value", "domain": ".httpbin.org", "path": "/"}
])
# Block unnecessary resources for httpbin
await context.route("**/*.{png,jpg,jpeg}", lambda route: route.abort())
return page
""",
"before_goto": """
async def hook(page, context, url, **kwargs):
# Add custom headers for testing
await page.set_extra_http_headers({
"X-Test-Header": "crawl4ai-test",
"Accept-Language": "en-US,en;q=0.9"
})
print(f"[HOOK] Navigating to: {url}")
return page
""",
"before_retrieve_html": """
async def hook(page, context, **kwargs):
# Simple scroll for any lazy-loaded content
await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
await page.wait_for_timeout(1000)
return page
"""
}
# Make the request to safe testing endpoints
response = requests.post("http://localhost:11235/crawl", json={
"urls": [
"https://httpbin.org/html",
"https://httpbin.org/json"
],
"hooks": {
"code": hooks_code,
"timeout": 30
},
"crawler_config": {
"cache_mode": "bypass"
}
})
# Check results
if response.status_code == 200:
data = response.json()
# Check hook execution
if data['hooks']['status']['status'] == 'success':
print(f"✅ All {len(data['hooks']['status']['attached_hooks'])} hooks executed successfully")
print(f"Execution stats: {data['hooks']['summary']}")
# Process crawl results
for result in data['results']:
print(f"Crawled: {result['url']} - Success: {result['success']}")
else:
print(f"Error: {response.status_code}")
```
> 💡 **Remember**: Always test your hooks on safe, known websites first before using them on production sites. Never crawl sites that you don't have permission to access or that might be malicious.
---
## Dockerfile Parameters
You can customize the image build process using build arguments (`--build-arg`). These are typically used via `docker buildx build` or within the `docker-compose.yml` file.
@@ -772,22 +1196,469 @@ async def test_stream_crawl(token: str = None): # Made token optional
---
## Metrics & Monitoring
## Real-time Monitoring & Operations
Keep an eye on your crawler with these endpoints:
One of the key advantages of self-hosting is complete visibility into your infrastructure. Crawl4AI includes a comprehensive real-time monitoring system that gives you full transparency and control.
- `/health` - Quick health check
- `/metrics` - Detailed Prometheus metrics
- `/schema` - Full API schema
### Monitoring Dashboard
Example health check:
Access the **built-in real-time monitoring dashboard** for complete operational visibility:
```
http://localhost:11235/monitor
```
![Monitoring Dashboard](https://via.placeholder.com/800x400?text=Crawl4AI+Monitoring+Dashboard)
**Dashboard Features:**
#### 1. System Health Overview
- **CPU & Memory**: Live usage with progress bars and percentage indicators
- **Network I/O**: Total bytes sent/received since startup
- **Server Uptime**: How long your server has been running
- **Browser Pool Status**:
- 🔥 Permanent browser (always-on default config, ~270MB)
- ♨️ Hot pool (frequently used configs, ~180MB each)
- ❄️ Cold pool (idle browsers awaiting cleanup, ~180MB each)
- **Memory Pressure**: LOW/MEDIUM/HIGH indicator for janitor behavior
#### 2. Live Request Tracking
- **Active Requests**: Currently running crawls with:
- Request ID for tracking
- Target URL (truncated for display)
- Endpoint being used
- Elapsed time (updates in real-time)
- Memory usage from start
- **Completed Requests**: Last 10 finished requests showing:
- Success/failure status (color-coded)
- Total execution time
- Memory delta (how much memory changed)
- Pool hit (was browser reused?)
- HTTP status code
- **Filtering**: View all, success only, or errors only
#### 3. Browser Pool Management
Interactive table showing all active browsers:
| Type | Signature | Age | Last Used | Hits | Actions |
|------|-----------|-----|-----------|------|---------|
| permanent | abc12345 | 2h | 5s ago | 1,247 | Restart |
| hot | def67890 | 45m | 2m ago | 89 | Kill / Restart |
| cold | ghi11213 | 30m | 15m ago | 3 | Kill / Restart |
- **Reuse Rate**: Percentage of requests that reused existing browsers
- **Memory Estimates**: Total memory used by browser pool
- **Manual Control**: Kill or restart individual browsers
#### 4. Janitor Events Log
Real-time log of browser pool cleanup events:
- When cold browsers are closed due to memory pressure
- When browsers are promoted from cold to hot pool
- Forced cleanups triggered manually
- Detailed cleanup reasons and browser signatures
#### 5. Error Monitoring
Recent errors with full context:
- Timestamp
- Endpoint where error occurred
- Target URL
- Error message
- Request ID for correlation
**Live Updates:**
The dashboard connects via WebSocket and refreshes every **2 seconds** with the latest data. Connection status indicator shows when you're connected/disconnected.
---
### Monitor API Endpoints
For programmatic monitoring, automation, and integration with your existing infrastructure:
#### Health & Statistics
**Get System Health**
```bash
curl http://localhost:11235/health
GET /monitor/health
```
Returns current system snapshot:
```json
{
"container": {
"memory_percent": 45.2,
"cpu_percent": 23.1,
"network_sent_mb": 1250.45,
"network_recv_mb": 3421.12,
"uptime_seconds": 7234
},
"pool": {
"permanent": {"active": true, "memory_mb": 270},
"hot": {"count": 3, "memory_mb": 540},
"cold": {"count": 1, "memory_mb": 180},
"total_memory_mb": 990
},
"janitor": {
"next_cleanup_estimate": "adaptive",
"memory_pressure": "MEDIUM"
}
}
```
**Get Request Statistics**
```bash
GET /monitor/requests?status=all&limit=50
```
Query parameters:
- `status`: Filter by `all`, `active`, `completed`, `success`, or `error`
- `limit`: Number of completed requests to return (1-1000)
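A quick polling sketch, assuming only the endpoint and parameters documented above; the payload is printed as-is since the response schema may vary between versions:
```python
import requests

# Fetch the 20 most recent failed requests for inspection
resp = requests.get(
    "http://localhost:11235/monitor/requests",
    params={"status": "error", "limit": 20},
)
resp.raise_for_status()
print(resp.json())
```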
**Get Browser Pool Details**
```bash
GET /monitor/browsers
```
Returns detailed information about all active browsers:
```json
{
"browsers": [
{
"type": "permanent",
"sig": "abc12345",
"age_seconds": 7234,
"last_used_seconds": 5,
"memory_mb": 270,
"hits": 1247,
"killable": false
},
{
"type": "hot",
"sig": "def67890",
"age_seconds": 2701,
"last_used_seconds": 120,
"memory_mb": 180,
"hits": 89,
"killable": true
}
],
"summary": {
"total_count": 5,
"total_memory_mb": 990,
"reuse_rate_percent": 87.3
}
}
```
**Get Endpoint Performance Statistics**
```bash
GET /monitor/endpoints/stats
```
Returns aggregated metrics per endpoint:
```json
{
"/crawl": {
"count": 1523,
"avg_latency_ms": 2341.5,
"success_rate_percent": 98.2,
"pool_hit_rate_percent": 89.1,
"errors": 27
},
"/md": {
"count": 891,
"avg_latency_ms": 1823.7,
"success_rate_percent": 99.4,
"pool_hit_rate_percent": 92.3,
"errors": 5
}
}
```
**Get Timeline Data**
```bash
GET /monitor/timeline?metric=memory&window=5m
```
Parameters:
- `metric`: `memory`, `requests`, or `browsers`
- `window`: Currently only `5m` (5-minute window, 5-second resolution)
Returns time-series data for charts:
```json
{
"timestamps": [1699564800, 1699564805, 1699564810, ...],
"values": [42.1, 43.5, 41.8, ...]
}
```
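A small sketch that consumes the series for a quick average; the unit depends on the chosen metric (memory is shown as a percentage in the example above):
```python
import requests

# Pull the 5-minute memory series and report sample count and mean value
resp = requests.get(
    "http://localhost:11235/monitor/timeline",
    params={"metric": "memory", "window": "5m"},
)
values = resp.json().get("values", [])
if values:
    print(f"{len(values)} samples, average: {sum(values) / len(values):.1f}")
```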
#### Logs
**Get Janitor Events**
```bash
GET /monitor/logs/janitor?limit=100
```
**Get Error Log**
```bash
GET /monitor/logs/errors?limit=100
```
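A minimal sketch for pulling recent janitor events into your own tooling; the wrapper key of the response isn't documented here, so the raw payload is printed:
```python
import requests

# Fetch the 50 most recent browser-pool cleanup events
resp = requests.get(
    "http://localhost:11235/monitor/logs/janitor",
    params={"limit": 50},
)
print(resp.json())
```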
---
*(Deployment Scenarios and Complete Examples sections remain the same, maybe update links if examples moved)*
### WebSocket Streaming
For real-time monitoring in your own dashboards or applications:
```bash
WS /monitor/ws
```
**Connection Example (Python):**
```python
import asyncio
import websockets
import json
async def monitor_server():
uri = "ws://localhost:11235/monitor/ws"
async with websockets.connect(uri) as websocket:
print("Connected to Crawl4AI monitor")
while True:
# Receive update every 2 seconds
data = await websocket.recv()
update = json.loads(data)
# Extract key metrics
health = update['health']
active_requests = len(update['requests']['active'])
browsers = len(update['browsers'])
print(f"Memory: {health['container']['memory_percent']:.1f}% | "
f"Active: {active_requests} | "
f"Browsers: {browsers}")
# Check for high memory pressure
if health['janitor']['memory_pressure'] == 'HIGH':
print("⚠️ HIGH MEMORY PRESSURE - Consider cleanup")
asyncio.run(monitor_server())
```
**Update Payload Structure:**
```json
{
"timestamp": 1699564823.456,
"health": { /* System health snapshot */ },
"requests": {
"active": [ /* Currently running */ ],
"completed": [ /* Last 10 completed */ ]
},
"browsers": [ /* All active browsers */ ],
"timeline": {
"memory": { /* Last 5 minutes */ },
"requests": { /* Request rate */ },
"browsers": { /* Pool composition */ }
},
"janitor": [ /* Last 10 cleanup events */ ],
"errors": [ /* Last 10 errors */ ]
}
```
---
### Control Actions
Take manual control when needed:
**Force Immediate Cleanup**
```bash
POST /monitor/actions/cleanup
```
Kills all cold pool browsers immediately (useful when memory is tight):
```json
{
"success": true,
"killed_browsers": 3
}
```
**Kill Specific Browser**
```bash
POST /monitor/actions/kill_browser
Content-Type: application/json
{
"sig": "abc12345" // First 8 chars of browser signature
}
```
Response:
```json
{
"success": true,
"killed_sig": "abc12345",
"pool_type": "hot"
}
```
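Combined with `/monitor/browsers`, a small housekeeping sketch (field names taken from the pool example above) can kill any reusable browser that has sat idle for more than ten minutes:
```python
import requests

BASE = "http://localhost:11235"

pool = requests.get(f"{BASE}/monitor/browsers").json()
for browser in pool.get("browsers", []):
    # The permanent browser reports killable=false and is skipped automatically
    if browser.get("killable") and browser.get("last_used_seconds", 0) > 600:
        result = requests.post(
            f"{BASE}/monitor/actions/kill_browser",
            json={"sig": browser["sig"]},
        )
        print(f"Killed {browser['sig']}: {result.json()}")
```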
**Restart Browser**
```bash
POST /monitor/actions/restart_browser
Content-Type: application/json
{
"sig": "permanent" // Or first 8 chars of signature
}
```
For the permanent browser, this closes and reinitializes it. For hot and cold browsers, it kills them and lets new requests create fresh ones.
**Reset Statistics**
```bash
POST /monitor/stats/reset
```
Clears endpoint counters (useful for starting fresh after testing).
---
### Production Integration
#### Integration with Existing Monitoring Systems
**Prometheus Integration:**
```bash
# Scrape metrics endpoint
curl http://localhost:11235/metrics
```
**Custom Dashboard Integration:**
```python
# Example: Push metrics to your monitoring system
import asyncio
import websockets
import json
from your_monitoring import push_metric
async def integrate_monitoring():
    async with websockets.connect("ws://localhost:11235/monitor/ws") as ws:
        while True:
            data = json.loads(await ws.recv())

            # Push to your monitoring system
            push_metric("crawl4ai.memory.percent",
                        data['health']['container']['memory_percent'])
            push_metric("crawl4ai.active_requests",
                        len(data['requests']['active']))
            push_metric("crawl4ai.browser_count",
                        len(data['browsers']))

asyncio.run(integrate_monitoring())
```
**Alerting Example:**
```python
import requests
import time
def check_health():
    """Poll health endpoint and alert on issues"""
    response = requests.get("http://localhost:11235/monitor/health")
    health = response.json()

    # Alert on high memory
    if health['container']['memory_percent'] > 85:
        send_alert(f"High memory: {health['container']['memory_percent']}%")

    # Alert on high error rate
    stats = requests.get("http://localhost:11235/monitor/endpoints/stats").json()
    for endpoint, metrics in stats.items():
        if metrics['success_rate_percent'] < 95:
            send_alert(f"{endpoint} success rate: {metrics['success_rate_percent']}%")

# Run every minute
while True:
    check_health()
    time.sleep(60)
```
**Log Aggregation:**
```python
import requests
from datetime import datetime
def aggregate_errors():
    """Fetch and aggregate errors for logging system"""
    response = requests.get("http://localhost:11235/monitor/logs/errors?limit=100")
    errors = response.json()['errors']

    for error in errors:
        log_to_system({
            'timestamp': datetime.fromtimestamp(error['timestamp']),
            'service': 'crawl4ai',
            'endpoint': error['endpoint'],
            'url': error['url'],
            'message': error['error'],
            'request_id': error['request_id']
        })
```
#### Key Metrics to Track
For production self-hosted deployments, monitor these metrics (a minimal polling sketch follows the list):
1. **Memory Usage Trends**
- Track `container.memory_percent` over time
- Alert when consistently above 80%
- Prevents OOM kills
2. **Request Success Rates**
- Monitor per-endpoint success rates
- Alert when below 95%
- Indicates crawling issues
3. **Average Latency**
- Track `avg_latency_ms` per endpoint
- Detect performance degradation
- Optimize slow endpoints
4. **Browser Pool Efficiency**
- Monitor `reuse_rate_percent`
- Should be >80% for good efficiency
- Low rates indicate pool churn
5. **Error Frequency**
- Count errors per time window
- Alert on sudden spikes
- Track error patterns
6. **Janitor Activity**
- Monitor cleanup frequency
- Excessive cleanup indicates memory pressure
- Adjust pool settings if needed
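A minimal polling sketch that collects several of these metrics in one pass, assuming `requests` and the `/monitor/health` and `/monitor/endpoints/stats` endpoints shown earlier; field names follow the examples above, and values not documented there are fetched defensively with `.get()`:
```python
import requests

BASE = "http://localhost:11235"  # assumed local deployment

def snapshot() -> dict:
    """Collect key production metrics in a single dictionary."""
    health = requests.get(f"{BASE}/monitor/health", timeout=10).json()
    stats = requests.get(f"{BASE}/monitor/endpoints/stats", timeout=10).json()
    return {
        "memory_percent": health["container"]["memory_percent"],
        # memory_pressure may not be exposed on /monitor/health in all versions
        "memory_pressure": health.get("janitor", {}).get("memory_pressure"),
        "endpoints": {
            name: {
                "success_rate": m["success_rate_percent"],
                "avg_latency_ms": m.get("avg_latency_ms"),
                "errors": m.get("errors", 0),
            }
            for name, m in stats.items()
        },
    }

print(snapshot())
```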
---
### Quick Health Check
For simple uptime monitoring:
```bash
curl http://localhost:11235/health
```
Returns:
```json
{
"status": "healthy",
"version": "0.7.4"
}
```
Other useful endpoints:
- `/metrics` - Prometheus metrics
- `/schema` - Full API schema
---
@@ -947,22 +1818,46 @@ We're here to help you succeed with Crawl4AI! Here's how to get support:
## Summary
Congratulations! You now have everything you need to self-host your own Crawl4AI infrastructure with complete control and visibility.
**What You've Learned:**
- ✅ Multiple deployment options (Docker Hub, Docker Compose, manual builds)
- ✅ Environment configuration and LLM integration
- ✅ Using the interactive playground for testing
- ✅ Making API requests with proper typing (SDK and REST)
- ✅ Specialized endpoints (screenshots, PDFs, JavaScript execution)
- ✅ MCP integration for AI-assisted development
- ✅ **Real-time monitoring dashboard** for operational transparency
- ✅ **Monitor API** for programmatic control and integration
- ✅ Production deployment best practices
The playground interface at `http://localhost:11235/playground` makes it easy to test configurations and generate the corresponding JSON for API requests. For AI application developers, the MCP integration lets tools like Claude Code access Crawl4AI's capabilities directly without complex API handling.
**Why This Matters:**
By self-hosting Crawl4AI, you:
- 🔒 **Own Your Data**: Everything stays in your infrastructure
- 📊 **See Everything**: Real-time dashboard shows exactly what's happening
- 💰 **Control Costs**: Scale within your resources, no per-request fees
- ⚡ **Maximize Performance**: Direct access with smart browser pooling (10x memory efficiency)
- 🛡️ **Stay Secure**: Keep sensitive workflows behind your firewall
- 🔧 **Customize Freely**: Full control over configs, strategies, and optimizations
**Next Steps:**
1. **Start Simple**: Deploy with the Docker Hub image and test with the playground
2. **Monitor Everything**: Open `http://localhost:11235/monitor` to watch your server
3. **Integrate**: Connect your applications using the Python SDK or REST API
4. **Scale Smart**: Use the monitoring data to optimize your deployment
5. **Go Production**: Set up alerting, log aggregation, and automated cleanup
**Key Resources:**
- 🎮 **Playground**: `http://localhost:11235/playground` - Interactive testing
- 📊 **Monitor Dashboard**: `http://localhost:11235/monitor` - Real-time visibility
- 📖 **Architecture Docs**: `deploy/docker/ARCHITECTURE.md` - Deep technical dive
- 💬 **Discord Community**: Get help and share experiences
- ⭐ **GitHub**: Report issues, contribute, show support
Remember: the monitoring dashboard is your window into your infrastructure. Use it to understand performance, troubleshoot issues, and optimize your deployment. The examples in the `examples` folder show real-world usage patterns you can adapt.
Keep exploring, and don't hesitate to reach out if you need help - we're building something amazing together. 🚀
**You're now in control of your web crawling destiny!** 🚀
Happy crawling! 🕷️

View File

@@ -0,0 +1,66 @@
# Crawl4AI Marketplace
A terminal-themed marketplace for tools, integrations, and resources related to Crawl4AI.
## Setup
### Backend
1. Install dependencies:
```bash
cd backend
pip install -r requirements.txt
```
2. Generate dummy data:
```bash
python dummy_data.py
```
3. Run the server:
```bash
python server.py
```
The API will be available at http://localhost:8100
### Frontend
1. Open `frontend/index.html` in your browser
2. Or serve via MkDocs as part of the documentation site
## Database Schema
The marketplace uses SQLite with automatic migration from `schema.yaml`. Tables include:
- **apps**: Tools and integrations
- **articles**: Reviews, tutorials, and news
- **categories**: App categories
- **sponsors**: Sponsored content
## API Endpoints
- `GET /api/apps` - List apps with filters
- `GET /api/articles` - List articles
- `GET /api/categories` - Get all categories
- `GET /api/sponsors` - Get active sponsors
- `GET /api/search?q=query` - Search across content
- `GET /api/stats` - Marketplace statistics
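A quick sketch for exercising these endpoints from Python, assuming the backend is running locally on port 8100 as described in Setup; the query string is just an example, and the apps endpoint is assumed to return a JSON array:
```python
import requests

BASE = "http://localhost:8100"  # local backend from the Setup section

# List apps, then run a search across all content
apps = requests.get(f"{BASE}/api/apps", timeout=10).json()
results = requests.get(f"{BASE}/api/search", params={"q": "scraper"}, timeout=10).json()

print(f"{len(apps)} apps loaded")
print(results)
```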
## Features
- **Smart caching**: LocalStorage with TTL (1 hour)
- **Terminal theme**: Consistent with Crawl4AI branding
- **Responsive design**: Works on all devices
- **Fast search**: Debounced with 300ms delay
- **CORS protected**: Only crawl4ai.com and localhost
## Admin Panel
Coming soon - for now, edit the database directly or modify `dummy_data.py`
## Deployment
For production deployment on EC2:
1. Update `API_BASE` in `marketplace.js` to production URL
2. Run FastAPI with proper production settings (use gunicorn/uvicorn)
3. Set up nginx proxy if needed
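For step 2, a minimal launcher sketch, assuming the FastAPI app object lives in `server.py` as `app` (the import string, worker count, and port are example values to adjust for your setup):
```python
# run_prod.py - hypothetical production launcher for the marketplace backend
import uvicorn

if __name__ == "__main__":
    uvicorn.run(
        "server:app",   # assumed module:attribute for the FastAPI app
        host="0.0.0.0",
        port=8100,
        workers=2,       # example value; size to your CPU count
        log_level="info",
    )
```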

View File

@@ -0,0 +1,759 @@
/* Admin Dashboard - C4AI Terminal Style */
/* Utility Classes */
.hidden {
display: none !important;
}
/* Brand Colors */
:root {
--c4ai-cyan: #50ffff;
--c4ai-green: #50ff50;
--c4ai-yellow: #ffff50;
--c4ai-pink: #ff50ff;
--c4ai-blue: #5050ff;
}
.admin-container {
min-height: 100vh;
background: var(--bg-dark);
}
/* Login Screen */
.login-screen {
min-height: 100vh;
display: flex;
align-items: center;
justify-content: center;
background: linear-gradient(135deg, #070708 0%, #1a1a2e 100%);
}
.login-box {
background: var(--bg-secondary);
border: 2px solid var(--primary-cyan);
padding: 3rem;
width: 400px;
box-shadow: 0 0 40px rgba(80, 255, 255, 0.2);
text-align: center;
}
.login-logo {
height: 60px;
margin-bottom: 2rem;
filter: brightness(1.2);
}
.login-box h1 {
color: var(--primary-cyan);
font-size: 1.5rem;
margin-bottom: 2rem;
}
#login-form input {
width: 100%;
padding: 0.75rem;
background: var(--bg-dark);
border: 1px solid var(--border-color);
color: var(--text-primary);
font-family: inherit;
margin-bottom: 1rem;
}
#login-form input:focus {
outline: none;
border-color: var(--primary-cyan);
}
#login-form button {
width: 100%;
padding: 0.75rem;
background: linear-gradient(135deg, var(--primary-cyan), var(--primary-teal));
border: none;
color: var(--bg-dark);
font-weight: 600;
cursor: pointer;
transition: all 0.2s;
}
#login-form button:hover {
box-shadow: 0 4px 15px rgba(80, 255, 255, 0.3);
transform: translateY(-2px);
}
.error-msg {
color: var(--error);
font-size: 0.875rem;
margin-top: 1rem;
}
/* Admin Dashboard */
.admin-dashboard.hidden {
display: none;
}
.admin-header {
background: var(--bg-secondary);
border-bottom: 2px solid var(--primary-cyan);
padding: 1rem 0;
}
.header-content {
max-width: 1800px;
margin: 0 auto;
padding: 0 2rem;
display: flex;
justify-content: space-between;
align-items: center;
}
.header-left {
display: flex;
align-items: center;
gap: 1rem;
}
.header-logo {
height: 35px;
}
.admin-header h1 {
font-size: 1.25rem;
color: var(--primary-cyan);
}
.header-right {
display: flex;
align-items: center;
gap: 2rem;
}
.admin-user {
color: var(--text-secondary);
}
.logout-btn {
padding: 0.5rem 1rem;
background: transparent;
border: 1px solid var(--error);
color: var(--error);
cursor: pointer;
transition: all 0.2s;
}
.logout-btn:hover {
background: rgba(255, 60, 116, 0.1);
}
/* Layout */
.admin-layout {
display: flex;
max-width: 1800px;
margin: 0 auto;
min-height: calc(100vh - 60px);
}
/* Sidebar */
.admin-sidebar {
width: 250px;
background: var(--bg-secondary);
border-right: 1px solid var(--border-color);
display: flex;
flex-direction: column;
justify-content: space-between;
}
.sidebar-nav {
padding: 1rem 0;
}
.nav-btn {
width: 100%;
padding: 1rem 1.5rem;
background: transparent;
border: none;
border-left: 3px solid transparent;
color: var(--text-secondary);
text-align: left;
cursor: pointer;
transition: all 0.2s;
display: flex;
align-items: center;
gap: 0.75rem;
}
.nav-btn:hover {
background: rgba(80, 255, 255, 0.05);
color: var(--primary-cyan);
}
.nav-btn.active {
border-left-color: var(--primary-cyan);
background: rgba(80, 255, 255, 0.1);
color: var(--primary-cyan);
}
.nav-icon {
font-size: 1.25rem;
margin-right: 0.25rem;
display: inline-block;
width: 1.5rem;
text-align: center;
}
.nav-btn[data-section="stats"] .nav-icon {
color: var(--c4ai-cyan);
}
.nav-btn[data-section="apps"] .nav-icon {
color: var(--c4ai-green);
}
.nav-btn[data-section="articles"] .nav-icon {
color: var(--c4ai-yellow);
}
.nav-btn[data-section="categories"] .nav-icon {
color: var(--c4ai-pink);
}
.nav-btn[data-section="sponsors"] .nav-icon {
color: var(--c4ai-blue);
}
.sidebar-actions {
padding: 1rem;
border-top: 1px solid var(--border-color);
}
.action-btn {
width: 100%;
padding: 0.75rem;
background: var(--bg-tertiary);
border: 1px solid var(--border-color);
color: var(--text-secondary);
cursor: pointer;
margin-bottom: 0.5rem;
transition: all 0.2s;
}
.action-btn:hover {
border-color: var(--primary-cyan);
color: var(--primary-cyan);
}
/* Main Content */
.admin-main {
flex: 1;
padding: 2rem;
overflow-y: auto;
}
.content-section {
display: none;
}
.content-section.active {
display: block;
}
/* Stats Grid */
.stats-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 1.5rem;
margin-bottom: 3rem;
}
.stat-card {
background: linear-gradient(135deg, rgba(80, 255, 255, 0.03), rgba(243, 128, 245, 0.02));
border: 1px solid rgba(80, 255, 255, 0.3);
padding: 1.5rem;
display: flex;
gap: 1.5rem;
}
.stat-icon {
font-size: 2rem;
width: 3rem;
height: 3rem;
display: flex;
align-items: center;
justify-content: center;
border: 2px solid;
border-radius: 4px;
}
.stat-card:nth-child(1) .stat-icon {
color: var(--c4ai-cyan);
border-color: var(--c4ai-cyan);
}
.stat-card:nth-child(2) .stat-icon {
color: var(--c4ai-green);
border-color: var(--c4ai-green);
}
.stat-card:nth-child(3) .stat-icon {
color: var(--c4ai-yellow);
border-color: var(--c4ai-yellow);
}
.stat-card:nth-child(4) .stat-icon {
color: var(--c4ai-pink);
border-color: var(--c4ai-pink);
}
.stat-number {
font-size: 2rem;
color: var(--primary-cyan);
font-weight: 600;
}
.stat-label {
color: var(--text-secondary);
}
.stat-detail {
font-size: 0.875rem;
color: var(--text-tertiary);
margin-top: 0.5rem;
}
/* Quick Actions */
.quick-actions {
display: flex;
gap: 1rem;
}
.quick-btn {
padding: 0.75rem 1.5rem;
background: transparent;
border: 1px solid var(--primary-cyan);
color: var(--primary-cyan);
cursor: pointer;
transition: all 0.2s;
}
.quick-btn:hover {
background: rgba(80, 255, 255, 0.1);
transform: translateY(-2px);
}
/* Section Headers */
.section-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 2rem;
}
.section-header h2 {
font-size: 1.5rem;
color: var(--text-primary);
}
.header-actions {
display: flex;
gap: 1rem;
}
.search-input {
padding: 0.5rem 1rem;
background: var(--bg-dark);
border: 1px solid var(--border-color);
color: var(--text-primary);
width: 250px;
}
.search-input:focus {
outline: none;
border-color: var(--primary-cyan);
}
.filter-select {
padding: 0.5rem;
background: var(--bg-dark);
border: 1px solid var(--border-color);
color: var(--text-primary);
}
.add-btn {
padding: 0.5rem 1rem;
background: linear-gradient(135deg, var(--primary-cyan), var(--primary-teal));
border: none;
color: var(--bg-dark);
font-weight: 600;
cursor: pointer;
transition: all 0.2s;
}
.add-btn:hover {
box-shadow: 0 4px 15px rgba(80, 255, 255, 0.3);
transform: translateY(-2px);
}
/* Data Tables */
.data-table {
background: var(--bg-secondary);
border: 1px solid var(--border-color);
overflow-x: auto;
}
.data-table table {
width: 100%;
border-collapse: collapse;
}
.data-table th {
background: var(--bg-tertiary);
padding: 1rem;
text-align: left;
color: var(--primary-cyan);
font-weight: 600;
border-bottom: 2px solid var(--border-color);
position: sticky;
top: 0;
z-index: 10;
}
.data-table td {
padding: 1rem;
border-bottom: 1px solid var(--border-color);
}
.data-table tr:hover {
background: rgba(80, 255, 255, 0.03);
}
/* Table Actions */
.table-actions {
display: flex;
gap: 0.5rem;
}
.table-logo {
width: 48px;
height: 48px;
object-fit: contain;
border-radius: 6px;
background: var(--bg-tertiary);
border: 1px solid var(--border-color);
padding: 4px;
}
.btn-edit, .btn-delete, .btn-duplicate {
padding: 0.25rem 0.5rem;
background: transparent;
border: 1px solid var(--border-color);
color: var(--text-secondary);
cursor: pointer;
font-size: 0.875rem;
}
.btn-edit:hover {
border-color: var(--primary-cyan);
color: var(--primary-cyan);
}
.btn-delete:hover {
border-color: var(--error);
color: var(--error);
}
.btn-duplicate:hover {
border-color: var(--accent-pink);
color: var(--accent-pink);
}
/* Badges in Tables */
.badge {
padding: 0.25rem 0.5rem;
font-size: 0.75rem;
text-transform: uppercase;
}
.badge.featured {
background: var(--primary-cyan);
color: var(--bg-dark);
}
.badge.sponsored {
background: var(--warning);
color: var(--bg-dark);
}
.badge.active {
background: var(--success);
color: var(--bg-dark);
}
/* Modal Enhancements */
.modal-content.large {
max-width: 1000px;
width: 90%;
max-height: 90vh;
}
.modal-header {
display: flex;
justify-content: space-between;
align-items: center;
padding: 1.5rem;
border-bottom: 1px solid var(--border-color);
}
.modal-body {
padding: 1.5rem;
overflow-y: auto;
max-height: calc(90vh - 140px);
}
.modal-footer {
display: flex;
justify-content: flex-end;
gap: 1rem;
padding: 1rem 1.5rem;
border-top: 1px solid var(--border-color);
}
.btn-cancel, .btn-save {
padding: 0.5rem 1.5rem;
cursor: pointer;
transition: all 0.2s;
}
.btn-cancel {
background: transparent;
border: 1px solid var(--border-color);
color: var(--text-secondary);
}
.btn-cancel:hover {
border-color: var(--error);
color: var(--error);
}
.btn-save {
background: linear-gradient(135deg, var(--primary-cyan), var(--primary-teal));
border: none;
color: var(--bg-dark);
font-weight: 600;
}
.btn-save:hover {
box-shadow: 0 4px 15px rgba(80, 255, 255, 0.3);
}
/* Form Styles */
.form-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
gap: 1.5rem;
}
.form-group {
display: flex;
flex-direction: column;
gap: 0.5rem;
}
.form-group label {
color: var(--text-secondary);
font-size: 0.875rem;
}
.form-group input,
.form-group select,
.form-group textarea {
padding: 0.5rem;
background: var(--bg-dark);
border: 1px solid var(--border-color);
color: var(--text-primary);
font-family: inherit;
}
.form-group input:focus,
.form-group select:focus,
.form-group textarea:focus {
outline: none;
border-color: var(--primary-cyan);
}
.form-group.full-width {
grid-column: 1 / -1;
}
.checkbox-group {
display: flex;
gap: 2rem;
}
.checkbox-label {
display: flex;
align-items: center;
gap: 0.5rem;
cursor: pointer;
}
.sponsor-form {
grid-template-columns: 200px repeat(2, minmax(220px, 1fr));
align-items: flex-start;
grid-auto-flow: dense;
}
.sponsor-logo-group {
grid-row: span 3;
display: flex;
flex-direction: column;
gap: 0.75rem;
}
.span-two {
grid-column: span 2;
}
.logo-upload {
position: relative;
width: 180px;
}
.image-preview {
width: 180px;
height: 180px;
border: 1px dashed var(--border-color);
border-radius: 12px;
display: flex;
align-items: center;
justify-content: center;
background: var(--bg-tertiary);
overflow: hidden;
}
.image-preview.empty {
color: var(--text-secondary);
font-size: 0.75rem;
text-align: center;
padding: 0.75rem;
}
.image-preview img {
max-width: 100%;
max-height: 100%;
object-fit: contain;
}
.upload-btn {
position: absolute;
left: 50%;
bottom: 12px;
transform: translateX(-50%);
padding: 0.35rem 1rem;
background: linear-gradient(135deg, var(--primary-cyan), var(--primary-teal));
border: none;
border-radius: 999px;
color: var(--bg-dark);
font-size: 0.75rem;
font-weight: 600;
cursor: pointer;
box-shadow: 0 6px 18px rgba(80, 255, 255, 0.25);
}
.upload-btn:hover {
box-shadow: 0 8px 22px rgba(80, 255, 255, 0.35);
}
.logo-upload input[type="file"] {
display: none;
}
.upload-hint {
font-size: 0.75rem;
color: var(--text-secondary);
margin: 0;
}
@media (max-width: 960px) {
.sponsor-form {
grid-template-columns: repeat(auto-fit, minmax(240px, 1fr));
}
.sponsor-logo-group {
grid-column: 1 / -1;
grid-row: auto;
flex-direction: row;
align-items: center;
gap: 1.5rem;
}
.logo-upload {
width: 160px;
}
.span-two {
grid-column: 1 / -1;
}
}
/* Rich Text Editor */
.editor-toolbar {
display: flex;
gap: 0.5rem;
padding: 0.5rem;
background: var(--bg-tertiary);
border: 1px solid var(--border-color);
border-bottom: none;
}
.editor-btn {
padding: 0.25rem 0.5rem;
background: transparent;
border: 1px solid var(--border-color);
color: var(--text-secondary);
cursor: pointer;
}
.editor-btn:hover {
background: rgba(80, 255, 255, 0.1);
border-color: var(--primary-cyan);
}
.editor-content {
min-height: 300px;
padding: 1rem;
background: var(--bg-dark);
border: 1px solid var(--border-color);
font-family: 'Dank Mono', Monaco, monospace;
}
/* Responsive */
@media (max-width: 1024px) {
.admin-layout {
flex-direction: column;
}
.admin-sidebar {
width: 100%;
border-right: none;
border-bottom: 1px solid var(--border-color);
}
.sidebar-nav {
display: flex;
overflow-x: auto;
padding: 0;
}
.nav-btn {
border-left: none;
border-bottom: 3px solid transparent;
white-space: nowrap;
}
.nav-btn.active {
border-bottom-color: var(--primary-cyan);
}
.sidebar-actions {
display: none;
}
}

View File

@@ -0,0 +1,920 @@
// Admin Dashboard - Smart & Powerful
const { API_BASE, API_ORIGIN } = (() => {
const cleanOrigin = (value) => value ? value.replace(/\/$/, '') : '';
const params = new URLSearchParams(window.location.search);
const overrideParam = cleanOrigin(params.get('api_origin'));
let storedOverride = '';
try {
storedOverride = cleanOrigin(localStorage.getItem('marketplace_api_origin'));
} catch (error) {
storedOverride = '';
}
let origin = overrideParam || storedOverride;
if (overrideParam && overrideParam !== storedOverride) {
try {
localStorage.setItem('marketplace_api_origin', overrideParam);
} catch (error) {
// ignore storage errors (private mode, etc.)
}
}
const { protocol, hostname, port } = window.location;
const isLocalHost = ['localhost', '127.0.0.1', '0.0.0.0'].includes(hostname);
if (!origin && isLocalHost && port !== '8100') {
origin = `${protocol}//127.0.0.1:8100`;
}
if (origin) {
const normalized = cleanOrigin(origin);
return { API_BASE: `${normalized}/marketplace/api`, API_ORIGIN: normalized };
}
return { API_BASE: '/marketplace/api', API_ORIGIN: '' };
})();
const resolveAssetUrl = (path) => {
if (!path) return '';
if (/^https?:\/\//i.test(path)) return path;
if (path.startsWith('/') && API_ORIGIN) {
return `${API_ORIGIN}${path}`;
}
return path;
};
class AdminDashboard {
constructor() {
this.token = localStorage.getItem('admin_token');
this.currentSection = 'stats';
this.data = {
apps: [],
articles: [],
categories: [],
sponsors: []
};
this.editingItem = null;
this.init();
}
async init() {
// Check auth
if (!this.token) {
this.showLogin();
return;
}
// Try to load stats to verify token
try {
await this.loadStats();
this.showDashboard();
this.setupEventListeners();
await this.loadAllData();
} catch (error) {
if (error.status === 401) {
this.showLogin();
}
}
}
showLogin() {
document.getElementById('login-screen').classList.remove('hidden');
document.getElementById('admin-dashboard').classList.add('hidden');
// Set up login button click handler
const loginBtn = document.getElementById('login-btn');
if (loginBtn) {
loginBtn.onclick = async () => {
const password = document.getElementById('password').value;
await this.login(password);
};
}
}
async login(password) {
try {
const response = await fetch(`${API_BASE}/admin/login`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ password })
});
if (!response.ok) throw new Error('Invalid password');
const data = await response.json();
this.token = data.token;
localStorage.setItem('admin_token', this.token);
document.getElementById('login-screen').classList.add('hidden');
this.showDashboard();
this.setupEventListeners();
await this.loadAllData();
} catch (error) {
document.getElementById('login-error').textContent = 'Invalid password';
document.getElementById('password').value = '';
}
}
showDashboard() {
document.getElementById('login-screen').classList.add('hidden');
document.getElementById('admin-dashboard').classList.remove('hidden');
}
setupEventListeners() {
// Navigation
document.querySelectorAll('.nav-btn').forEach(btn => {
btn.onclick = () => this.switchSection(btn.dataset.section);
});
// Logout
document.getElementById('logout-btn').onclick = () => this.logout();
// Export/Backup
document.getElementById('export-btn').onclick = () => this.exportData();
document.getElementById('backup-btn').onclick = () => this.backupDatabase();
// Search
['apps', 'articles'].forEach(type => {
const searchInput = document.getElementById(`${type}-search`);
if (searchInput) {
searchInput.oninput = (e) => this.filterTable(type, e.target.value);
}
});
// Category filter
const categoryFilter = document.getElementById('apps-filter');
if (categoryFilter) {
categoryFilter.onchange = (e) => this.filterByCategory(e.target.value);
}
// Save button in modal
document.getElementById('save-btn').onclick = () => this.saveItem();
}
async loadAllData() {
try {
await this.loadStats();
} catch (e) {
console.error('Failed to load stats:', e);
}
try {
await this.loadApps();
} catch (e) {
console.error('Failed to load apps:', e);
}
try {
await this.loadArticles();
} catch (e) {
console.error('Failed to load articles:', e);
}
try {
await this.loadCategories();
} catch (e) {
console.error('Failed to load categories:', e);
}
try {
await this.loadSponsors();
} catch (e) {
console.error('Failed to load sponsors:', e);
}
this.populateCategoryFilter();
}
async apiCall(endpoint, options = {}) {
const isFormData = options.body instanceof FormData;
const headers = {
'Authorization': `Bearer ${this.token}`,
...options.headers
};
if (!isFormData && !headers['Content-Type']) {
headers['Content-Type'] = 'application/json';
}
const response = await fetch(`${API_BASE}${endpoint}`, {
...options,
headers
});
if (response.status === 401) {
this.logout();
throw { status: 401 };
}
if (!response.ok) throw new Error(`API Error: ${response.status}`);
return response.json();
}
async loadStats() {
const stats = await this.apiCall(`/admin/stats?_=${Date.now()}`, {
cache: 'no-store'
});
document.getElementById('stat-apps').textContent = stats.apps.total;
document.getElementById('stat-featured').textContent = stats.apps.featured;
document.getElementById('stat-sponsored').textContent = stats.apps.sponsored;
document.getElementById('stat-articles').textContent = stats.articles;
document.getElementById('stat-sponsors').textContent = stats.sponsors.active;
document.getElementById('stat-views').textContent = this.formatNumber(stats.total_views);
}
async loadApps() {
this.data.apps = await this.apiCall(`/apps?limit=100&_=${Date.now()}`, {
cache: 'no-store'
});
this.renderAppsTable(this.data.apps);
}
async loadArticles() {
this.data.articles = await this.apiCall(`/articles?limit=100&_=${Date.now()}`, {
cache: 'no-store'
});
this.renderArticlesTable(this.data.articles);
}
async loadCategories() {
const cacheBuster = Date.now();
this.data.categories = await this.apiCall(`/categories?_=${cacheBuster}`, {
cache: 'no-store'
});
this.renderCategoriesTable(this.data.categories);
}
async loadSponsors() {
const cacheBuster = Date.now();
this.data.sponsors = await this.apiCall(`/sponsors?limit=100&_=${cacheBuster}`, {
cache: 'no-store'
});
this.renderSponsorsTable(this.data.sponsors);
}
renderAppsTable(apps) {
const table = document.getElementById('apps-table');
table.innerHTML = `
<table>
<thead>
<tr>
<th>ID</th>
<th>Name</th>
<th>Category</th>
<th>Type</th>
<th>Rating</th>
<th>Downloads</th>
<th>Status</th>
<th>Actions</th>
</tr>
</thead>
<tbody>
${apps.map(app => `
<tr>
<td>${app.id}</td>
<td>${app.name}</td>
<td>${app.category}</td>
<td>${app.type}</td>
<td>◆ ${app.rating}/5</td>
<td>${this.formatNumber(app.downloads)}</td>
<td>
${app.featured ? '<span class="badge featured">Featured</span>' : ''}
${app.sponsored ? '<span class="badge sponsored">Sponsored</span>' : ''}
</td>
<td>
<div class="table-actions">
<button class="btn-edit" onclick="admin.editItem('apps', ${app.id})">Edit</button>
<button class="btn-duplicate" onclick="admin.duplicateItem('apps', ${app.id})">Duplicate</button>
<button class="btn-delete" onclick="admin.deleteItem('apps', ${app.id})">Delete</button>
</div>
</td>
</tr>
`).join('')}
</tbody>
</table>
`;
}
renderArticlesTable(articles) {
const table = document.getElementById('articles-table');
table.innerHTML = `
<table>
<thead>
<tr>
<th>ID</th>
<th>Title</th>
<th>Category</th>
<th>Author</th>
<th>Published</th>
<th>Views</th>
<th>Actions</th>
</tr>
</thead>
<tbody>
${articles.map(article => `
<tr>
<td>${article.id}</td>
<td>${article.title}</td>
<td>${article.category}</td>
<td>${article.author}</td>
<td>${new Date(article.published_date).toLocaleDateString()}</td>
<td>${this.formatNumber(article.views)}</td>
<td>
<div class="table-actions">
<button class="btn-edit" onclick="admin.editItem('articles', ${article.id})">Edit</button>
<button class="btn-duplicate" onclick="admin.duplicateItem('articles', ${article.id})">Duplicate</button>
<button class="btn-delete" onclick="admin.deleteItem('articles', ${article.id})">Delete</button>
</div>
</td>
</tr>
`).join('')}
</tbody>
</table>
`;
}
renderCategoriesTable(categories) {
const table = document.getElementById('categories-table');
table.innerHTML = `
<table>
<thead>
<tr>
<th>Order</th>
<th>Icon</th>
<th>Name</th>
<th>Description</th>
<th>Actions</th>
</tr>
</thead>
<tbody>
${categories.map(cat => `
<tr>
<td>${cat.order_index}</td>
<td>${cat.icon}</td>
<td>${cat.name}</td>
<td>${cat.description}</td>
<td>
<div class="table-actions">
<button class="btn-edit" onclick="admin.editItem('categories', ${cat.id})">Edit</button>
<button class="btn-delete" onclick="admin.deleteCategory(${cat.id})">Delete</button>
</div>
</td>
</tr>
`).join('')}
</tbody>
</table>
`;
}
renderSponsorsTable(sponsors) {
const table = document.getElementById('sponsors-table');
table.innerHTML = `
<table>
<thead>
<tr>
<th>ID</th>
<th>Logo</th>
<th>Company</th>
<th>Tier</th>
<th>Start</th>
<th>End</th>
<th>Status</th>
<th>Actions</th>
</tr>
</thead>
<tbody>
${sponsors.map(sponsor => `
<tr>
<td>${sponsor.id}</td>
<td>${sponsor.logo_url ? `<img class="table-logo" src="${resolveAssetUrl(sponsor.logo_url)}" alt="${sponsor.company_name} logo">` : '-'}</td>
<td>${sponsor.company_name}</td>
<td>${sponsor.tier}</td>
<td>${new Date(sponsor.start_date).toLocaleDateString()}</td>
<td>${new Date(sponsor.end_date).toLocaleDateString()}</td>
<td>${sponsor.active ? '<span class="badge active">Active</span>' : 'Inactive'}</td>
<td>
<div class="table-actions">
<button class="btn-edit" onclick="admin.editItem('sponsors', ${sponsor.id})">Edit</button>
<button class="btn-delete" onclick="admin.deleteItem('sponsors', ${sponsor.id})">Delete</button>
</div>
</td>
</tr>
`).join('')}
</tbody>
</table>
`;
}
showAddForm(type) {
this.editingItem = null;
this.showModal(type, null);
}
async editItem(type, id) {
const item = this.data[type].find(i => i.id === id);
if (item) {
this.editingItem = item;
this.showModal(type, item);
}
}
async duplicateItem(type, id) {
const item = this.data[type].find(i => i.id === id);
if (item) {
const newItem = { ...item };
delete newItem.id;
newItem.name = `${newItem.name || newItem.title} (Copy)`;
if (newItem.slug) newItem.slug = `${newItem.slug}-copy-${Date.now()}`;
this.editingItem = null;
this.showModal(type, newItem);
}
}
showModal(type, item) {
const modal = document.getElementById('form-modal');
const title = document.getElementById('modal-title');
const body = document.getElementById('modal-body');
title.textContent = item ? `Edit ${type.slice(0, -1)}` : `Add New ${type.slice(0, -1)}`;
if (type === 'apps') {
body.innerHTML = this.getAppForm(item);
} else if (type === 'articles') {
body.innerHTML = this.getArticleForm(item);
} else if (type === 'categories') {
body.innerHTML = this.getCategoryForm(item);
} else if (type === 'sponsors') {
body.innerHTML = this.getSponsorForm(item);
}
modal.classList.remove('hidden');
modal.dataset.type = type;
if (type === 'sponsors') {
this.setupLogoUploadHandlers();
}
}
getAppForm(app) {
return `
<div class="form-grid">
<div class="form-group">
<label>Name *</label>
<input type="text" id="form-name" value="${app?.name || ''}" required>
</div>
<div class="form-group">
<label>Slug</label>
<input type="text" id="form-slug" value="${app?.slug || ''}" placeholder="auto-generated">
</div>
<div class="form-group">
<label>Category</label>
<select id="form-category">
${this.data.categories.map(cat =>
`<option value="${cat.name}" ${app?.category === cat.name ? 'selected' : ''}>${cat.name}</option>`
).join('')}
</select>
</div>
<div class="form-group">
<label>Type</label>
<select id="form-type">
<option value="Open Source" ${app?.type === 'Open Source' ? 'selected' : ''}>Open Source</option>
<option value="Paid" ${app?.type === 'Paid' ? 'selected' : ''}>Paid</option>
<option value="Freemium" ${app?.type === 'Freemium' ? 'selected' : ''}>Freemium</option>
</select>
</div>
<div class="form-group">
<label>Rating</label>
<input type="number" id="form-rating" value="${app?.rating || 4.5}" min="0" max="5" step="0.1">
</div>
<div class="form-group">
<label>Downloads</label>
<input type="number" id="form-downloads" value="${app?.downloads || 0}">
</div>
<div class="form-group full-width">
<label>Description</label>
<textarea id="form-description" rows="3">${app?.description || ''}</textarea>
</div>
<div class="form-group full-width">
<label>Image URL</label>
<input type="text" id="form-image" value="${app?.image || ''}" placeholder="https://...">
</div>
<div class="form-group">
<label>Website URL</label>
<input type="text" id="form-website" value="${app?.website_url || ''}">
</div>
<div class="form-group">
<label>GitHub URL</label>
<input type="text" id="form-github" value="${app?.github_url || ''}">
</div>
<div class="form-group">
<label>Pricing</label>
<input type="text" id="form-pricing" value="${app?.pricing || 'Free'}">
</div>
<div class="form-group">
<label>Contact Email</label>
<input type="email" id="form-email" value="${app?.contact_email || ''}">
</div>
<div class="form-group full-width checkbox-group">
<label class="checkbox-label">
<input type="checkbox" id="form-featured" ${app?.featured ? 'checked' : ''}>
Featured
</label>
<label class="checkbox-label">
<input type="checkbox" id="form-sponsored" ${app?.sponsored ? 'checked' : ''}>
Sponsored
</label>
</div>
<div class="form-group full-width">
<label>Integration Guide</label>
<textarea id="form-integration" rows="10">${app?.integration_guide || ''}</textarea>
</div>
</div>
`;
}
getArticleForm(article) {
return `
<div class="form-grid">
<div class="form-group full-width">
<label>Title *</label>
<input type="text" id="form-title" value="${article?.title || ''}" required>
</div>
<div class="form-group">
<label>Author</label>
<input type="text" id="form-author" value="${article?.author || 'Crawl4AI Team'}">
</div>
<div class="form-group">
<label>Category</label>
<select id="form-category">
<option value="News" ${article?.category === 'News' ? 'selected' : ''}>News</option>
<option value="Tutorial" ${article?.category === 'Tutorial' ? 'selected' : ''}>Tutorial</option>
<option value="Review" ${article?.category === 'Review' ? 'selected' : ''}>Review</option>
<option value="Comparison" ${article?.category === 'Comparison' ? 'selected' : ''}>Comparison</option>
</select>
</div>
<div class="form-group full-width">
<label>Featured Image URL</label>
<input type="text" id="form-image" value="${article?.featured_image || ''}">
</div>
<div class="form-group full-width">
<label>Content</label>
<textarea id="form-content" rows="20">${article?.content || ''}</textarea>
</div>
</div>
`;
}
getCategoryForm(category) {
return `
<div class="form-grid">
<div class="form-group">
<label>Name *</label>
<input type="text" id="form-name" value="${category?.name || ''}" required>
</div>
<div class="form-group">
<label>Icon</label>
<input type="text" id="form-icon" value="${category?.icon || '📁'}" maxlength="2">
</div>
<div class="form-group">
<label>Order</label>
<input type="number" id="form-order" value="${category?.order_index || 0}">
</div>
<div class="form-group full-width">
<label>Description</label>
<textarea id="form-description" rows="3">${category?.description || ''}</textarea>
</div>
</div>
`;
}
getSponsorForm(sponsor) {
const existingFile = sponsor?.logo_url ? sponsor.logo_url.split('/').pop().split('?')[0] : '';
return `
<div class="form-grid sponsor-form">
<div class="form-group sponsor-logo-group">
<label>Logo</label>
<input type="hidden" id="form-logo-url" value="${sponsor?.logo_url || ''}">
<div class="logo-upload">
<div class="image-preview ${sponsor?.logo_url ? '' : 'empty'}" id="form-logo-preview">
${sponsor?.logo_url ? `<img src="${resolveAssetUrl(sponsor.logo_url)}" alt="Logo preview">` : '<span>No logo uploaded</span>'}
</div>
<button type="button" class="upload-btn" id="form-logo-button">Upload Logo</button>
<input type="file" id="form-logo-file" accept="image/png,image/jpeg,image/webp,image/svg+xml" hidden>
</div>
<p class="upload-hint" id="form-logo-filename">${existingFile ? `Current: ${existingFile}` : 'No file selected'}</p>
</div>
<div class="form-group span-two">
<label>Company Name *</label>
<input type="text" id="form-name" value="${sponsor?.company_name || ''}" required>
</div>
<div class="form-group">
<label>Tier</label>
<select id="form-tier">
<option value="Bronze" ${sponsor?.tier === 'Bronze' ? 'selected' : ''}>Bronze</option>
<option value="Silver" ${sponsor?.tier === 'Silver' ? 'selected' : ''}>Silver</option>
<option value="Gold" ${sponsor?.tier === 'Gold' ? 'selected' : ''}>Gold</option>
</select>
</div>
<div class="form-group">
<label>Landing URL</label>
<input type="text" id="form-landing" value="${sponsor?.landing_url || ''}">
</div>
<div class="form-group">
<label>Banner URL</label>
<input type="text" id="form-banner" value="${sponsor?.banner_url || ''}">
</div>
<div class="form-group">
<label>Start Date</label>
<input type="date" id="form-start" value="${sponsor?.start_date?.split('T')[0] || ''}">
</div>
<div class="form-group">
<label>End Date</label>
<input type="date" id="form-end" value="${sponsor?.end_date?.split('T')[0] || ''}">
</div>
<div class="form-group">
<label class="checkbox-label">
<input type="checkbox" id="form-active" ${sponsor?.active ? 'checked' : ''}>
Active
</label>
</div>
</div>
`;
}
async saveItem() {
const modal = document.getElementById('form-modal');
const type = modal.dataset.type;
try {
if (type === 'sponsors') {
const fileInput = document.getElementById('form-logo-file');
if (fileInput && fileInput.files && fileInput.files[0]) {
const formData = new FormData();
formData.append('file', fileInput.files[0]);
formData.append('folder', 'sponsors');
const uploadResponse = await this.apiCall('/admin/upload-image', {
method: 'POST',
body: formData
});
if (!uploadResponse.url) {
throw new Error('Image upload failed');
}
document.getElementById('form-logo-url').value = uploadResponse.url;
}
}
const data = this.collectFormData(type);
if (this.editingItem) {
await this.apiCall(`/admin/${type}/${this.editingItem.id}`, {
method: 'PUT',
body: JSON.stringify(data)
});
} else {
await this.apiCall(`/admin/${type}`, {
method: 'POST',
body: JSON.stringify(data)
});
}
this.closeModal();
await this[`load${type.charAt(0).toUpperCase() + type.slice(1)}`]();
await this.loadStats();
} catch (error) {
alert('Error saving item: ' + error.message);
}
}
collectFormData(type) {
const data = {};
if (type === 'apps') {
data.name = document.getElementById('form-name').value;
data.slug = document.getElementById('form-slug').value || this.generateSlug(data.name);
data.description = document.getElementById('form-description').value;
data.category = document.getElementById('form-category').value;
data.type = document.getElementById('form-type').value;
const rating = parseFloat(document.getElementById('form-rating').value);
const downloads = parseInt(document.getElementById('form-downloads').value, 10);
data.rating = Number.isFinite(rating) ? rating : 0;
data.downloads = Number.isFinite(downloads) ? downloads : 0;
data.image = document.getElementById('form-image').value;
data.website_url = document.getElementById('form-website').value;
data.github_url = document.getElementById('form-github').value;
data.pricing = document.getElementById('form-pricing').value;
data.contact_email = document.getElementById('form-email').value;
data.featured = document.getElementById('form-featured').checked ? 1 : 0;
data.sponsored = document.getElementById('form-sponsored').checked ? 1 : 0;
data.integration_guide = document.getElementById('form-integration').value;
} else if (type === 'articles') {
data.title = document.getElementById('form-title').value;
data.slug = this.generateSlug(data.title);
data.author = document.getElementById('form-author').value;
data.category = document.getElementById('form-category').value;
data.featured_image = document.getElementById('form-image').value;
data.content = document.getElementById('form-content').value;
} else if (type === 'categories') {
data.name = document.getElementById('form-name').value;
data.slug = this.generateSlug(data.name);
data.icon = document.getElementById('form-icon').value;
data.description = document.getElementById('form-description').value;
const orderIndex = parseInt(document.getElementById('form-order').value, 10);
data.order_index = Number.isFinite(orderIndex) ? orderIndex : 0;
} else if (type === 'sponsors') {
data.company_name = document.getElementById('form-name').value;
data.logo_url = document.getElementById('form-logo-url').value;
data.tier = document.getElementById('form-tier').value;
data.landing_url = document.getElementById('form-landing').value;
data.banner_url = document.getElementById('form-banner').value;
data.start_date = document.getElementById('form-start').value;
data.end_date = document.getElementById('form-end').value;
data.active = document.getElementById('form-active').checked ? 1 : 0;
}
return data;
}
setupLogoUploadHandlers() {
const fileInput = document.getElementById('form-logo-file');
const preview = document.getElementById('form-logo-preview');
const logoUrlInput = document.getElementById('form-logo-url');
const trigger = document.getElementById('form-logo-button');
const fileNameEl = document.getElementById('form-logo-filename');
if (!fileInput || !preview || !logoUrlInput) return;
const setFileName = (text) => {
if (fileNameEl) {
fileNameEl.textContent = text;
}
};
const setEmptyState = () => {
preview.innerHTML = '<span>No logo uploaded</span>';
preview.classList.add('empty');
setFileName('No file selected');
};
const setExistingState = () => {
if (logoUrlInput.value) {
const existingFile = logoUrlInput.value.split('/').pop().split('?')[0];
preview.innerHTML = `<img src="${resolveAssetUrl(logoUrlInput.value)}" alt="Logo preview">`;
preview.classList.remove('empty');
setFileName(existingFile ? `Current: ${existingFile}` : 'Current logo');
} else {
setEmptyState();
}
};
setExistingState();
if (trigger) {
trigger.onclick = () => fileInput.click();
}
fileInput.addEventListener('change', (event) => {
const file = event.target.files && event.target.files[0];
if (!file) {
setExistingState();
return;
}
setFileName(file.name);
const reader = new FileReader();
reader.onload = () => {
preview.innerHTML = `<img src="${reader.result}" alt="Logo preview">`;
preview.classList.remove('empty');
};
reader.readAsDataURL(file);
});
}
async deleteItem(type, id) {
if (!confirm(`Are you sure you want to delete this ${type.slice(0, -1)}?`)) return;
try {
await this.apiCall(`/admin/${type}/${id}`, { method: 'DELETE' });
await this[`load${type.charAt(0).toUpperCase() + type.slice(1)}`]();
await this.loadStats();
} catch (error) {
alert('Error deleting item: ' + error.message);
}
}
async deleteCategory(id) {
const hasApps = this.data.apps.some(app =>
app.category === this.data.categories.find(c => c.id === id)?.name
);
if (hasApps) {
alert('Cannot delete category with existing apps');
return;
}
await this.deleteItem('categories', id);
}
closeModal() {
document.getElementById('form-modal').classList.add('hidden');
this.editingItem = null;
}
switchSection(section) {
// Update navigation
document.querySelectorAll('.nav-btn').forEach(btn => {
btn.classList.toggle('active', btn.dataset.section === section);
});
// Show section
document.querySelectorAll('.content-section').forEach(sec => {
sec.classList.remove('active');
});
document.getElementById(`${section}-section`).classList.add('active');
this.currentSection = section;
}
filterTable(type, query) {
const items = this.data[type].filter(item => {
const searchText = Object.values(item).join(' ').toLowerCase();
return searchText.includes(query.toLowerCase());
});
if (type === 'apps') {
this.renderAppsTable(items);
} else if (type === 'articles') {
this.renderArticlesTable(items);
}
}
filterByCategory(category) {
const apps = category
? this.data.apps.filter(app => app.category === category)
: this.data.apps;
this.renderAppsTable(apps);
}
populateCategoryFilter() {
const filter = document.getElementById('apps-filter');
if (!filter) return;
filter.innerHTML = '<option value="">All Categories</option>';
this.data.categories.forEach(cat => {
filter.innerHTML += `<option value="${cat.name}">${cat.name}</option>`;
});
}
async exportData() {
const data = {
apps: this.data.apps,
articles: this.data.articles,
categories: this.data.categories,
sponsors: this.data.sponsors,
exported: new Date().toISOString()
};
const blob = new Blob([JSON.stringify(data, null, 2)], { type: 'application/json' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = `marketplace-export-${Date.now()}.json`;
a.click();
}
async backupDatabase() {
// In production, this would download the SQLite file
alert('Database backup would be implemented on the server side');
}
generateSlug(text) {
return text.toLowerCase()
.replace(/[^\w\s-]/g, '')
.replace(/\s+/g, '-')
.replace(/-+/g, '-')
.trim();
}
formatNumber(num) {
if (num >= 1000000) return (num / 1000000).toFixed(1) + 'M';
if (num >= 1000) return (num / 1000).toFixed(1) + 'K';
return num.toString();
}
logout() {
localStorage.removeItem('admin_token');
this.token = null;
this.showLogin();
}
}
// Initialize
const admin = new AdminDashboard();

View File

@@ -0,0 +1,215 @@
<!DOCTYPE html>
<html lang="en" data-theme="dark">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Admin Dashboard - Crawl4AI Marketplace</title>
<link rel="stylesheet" href="../frontend/marketplace.css?v=1759329000">
<link rel="stylesheet" href="admin.css?v=1759329000">
</head>
<body>
<div class="admin-container">
<!-- Login Screen -->
<div id="login-screen" class="login-screen">
<div class="login-box">
<img src="../../assets/images/logo.png" alt="Crawl4AI" class="login-logo">
<h1>[ Admin Access ]</h1>
<div id="login-form">
<input type="password" id="password" placeholder="Enter admin password" autofocus onkeypress="if(event.key==='Enter'){document.getElementById('login-btn').click()}">
<button type="button" id="login-btn">→ Login</button>
</div>
<div id="login-error" class="error-msg"></div>
</div>
</div>
<!-- Admin Dashboard -->
<div id="admin-dashboard" class="admin-dashboard hidden">
<!-- Header -->
<header class="admin-header">
<div class="header-content">
<div class="header-left">
<img src="../../assets/images/logo.png" alt="Crawl4AI" class="header-logo">
<h1>[ Admin Dashboard ]</h1>
</div>
<div class="header-right">
<span class="admin-user">Administrator</span>
<button id="logout-btn" class="logout-btn">↗ Logout</button>
</div>
</div>
</header>
<!-- Main Layout -->
<div class="admin-layout">
<!-- Sidebar -->
<aside class="admin-sidebar">
<nav class="sidebar-nav">
<button class="nav-btn active" data-section="stats">
<span class="nav-icon"></span> Dashboard
</button>
<button class="nav-btn" data-section="apps">
<span class="nav-icon"></span> Apps
</button>
<button class="nav-btn" data-section="articles">
<span class="nav-icon"></span> Articles
</button>
<button class="nav-btn" data-section="categories">
<span class="nav-icon"></span> Categories
</button>
<button class="nav-btn" data-section="sponsors">
<span class="nav-icon"></span> Sponsors
</button>
</nav>
<div class="sidebar-actions">
<button id="export-btn" class="action-btn">
<span></span> Export Data
</button>
<button id="backup-btn" class="action-btn">
<span></span> Backup DB
</button>
</div>
</aside>
<!-- Main Content -->
<main class="admin-main">
<!-- Stats Section -->
<section id="stats-section" class="content-section active">
<h2>Dashboard Overview</h2>
<div class="stats-grid">
<div class="stat-card">
<div class="stat-icon"></div>
<div class="stat-info">
<div class="stat-number" id="stat-apps">--</div>
<div class="stat-label">Total Apps</div>
<div class="stat-detail">
<span id="stat-featured">--</span> featured,
<span id="stat-sponsored">--</span> sponsored
</div>
</div>
</div>
<div class="stat-card">
<div class="stat-icon"></div>
<div class="stat-info">
<div class="stat-number" id="stat-articles">--</div>
<div class="stat-label">Articles</div>
</div>
</div>
<div class="stat-card">
<div class="stat-icon"></div>
<div class="stat-info">
<div class="stat-number" id="stat-sponsors">--</div>
<div class="stat-label">Active Sponsors</div>
</div>
</div>
<div class="stat-card">
<div class="stat-icon"></div>
<div class="stat-info">
<div class="stat-number" id="stat-views">--</div>
<div class="stat-label">Total Views</div>
</div>
</div>
</div>
<h3>Quick Actions</h3>
<div class="quick-actions">
<button class="quick-btn" onclick="admin.showAddForm('apps')">
<span></span> Add New App
</button>
<button class="quick-btn" onclick="admin.showAddForm('articles')">
<span></span> Write Article
</button>
<button class="quick-btn" onclick="admin.showAddForm('sponsors')">
<span></span> Add Sponsor
</button>
</div>
</section>
<!-- Apps Section -->
<section id="apps-section" class="content-section">
<div class="section-header">
<h2>Apps Management</h2>
<div class="header-actions">
<input type="text" id="apps-search" class="search-input" placeholder="Search apps...">
<select id="apps-filter" class="filter-select">
<option value="">All Categories</option>
</select>
<button class="add-btn" onclick="admin.showAddForm('apps')">
<span></span> Add App
</button>
</div>
</div>
<div class="data-table" id="apps-table">
<!-- Apps table will be populated here -->
</div>
</section>
<!-- Articles Section -->
<section id="articles-section" class="content-section">
<div class="section-header">
<h2>Articles Management</h2>
<div class="header-actions">
<input type="text" id="articles-search" class="search-input" placeholder="Search articles...">
<button class="add-btn" onclick="admin.showAddForm('articles')">
<span></span> Add Article
</button>
</div>
</div>
<div class="data-table" id="articles-table">
<!-- Articles table will be populated here -->
</div>
</section>
<!-- Categories Section -->
<section id="categories-section" class="content-section">
<div class="section-header">
<h2>Categories Management</h2>
<div class="header-actions">
<button class="add-btn" onclick="admin.showAddForm('categories')">
<span></span> Add Category
</button>
</div>
</div>
<div class="data-table" id="categories-table">
<!-- Categories table will be populated here -->
</div>
</section>
<!-- Sponsors Section -->
<section id="sponsors-section" class="content-section">
<div class="section-header">
<h2>Sponsors Management</h2>
<div class="header-actions">
<button class="add-btn" onclick="admin.showAddForm('sponsors')">
<span></span> Add Sponsor
</button>
</div>
</div>
<div class="data-table" id="sponsors-table">
<!-- Sponsors table will be populated here -->
</div>
</section>
</main>
</div>
</div>
<!-- Modal for Add/Edit Forms -->
<div id="form-modal" class="modal hidden">
<div class="modal-content large">
<div class="modal-header">
<h2 id="modal-title">Add/Edit</h2>
<button class="modal-close" onclick="admin.closeModal()"></button>
</div>
<div class="modal-body" id="modal-body">
<!-- Dynamic form content -->
</div>
<div class="modal-footer">
<button class="btn-cancel" onclick="admin.closeModal()">Cancel</button>
<button class="btn-save" id="save-btn">Save</button>
</div>
</div>
</div>
</div>
<script src="admin.js?v=1759335000"></script>
</body>
</html>

View File

@@ -0,0 +1,658 @@
/* App Detail Page Styles */
.app-detail-container {
min-height: 100vh;
background: var(--bg-dark);
}
/* Back Button */
.header-nav {
display: flex;
align-items: center;
}
.back-btn {
padding: 0.5rem 1rem;
background: transparent;
border: 1px solid var(--border-color);
color: var(--primary-cyan);
text-decoration: none;
transition: all 0.2s;
font-size: 0.875rem;
}
.back-btn:hover {
border-color: var(--primary-cyan);
background: rgba(80, 255, 255, 0.1);
}
/* App Hero Section */
.app-hero {
max-width: 1800px;
margin: 2rem auto;
padding: 0 2rem;
}
.app-hero-content {
display: grid;
grid-template-columns: 1fr 2fr;
gap: 3rem;
background: linear-gradient(135deg, #1a1a2e, #0f0f1e);
border: 2px solid var(--primary-cyan);
padding: 2rem;
box-shadow: 0 0 30px rgba(80, 255, 255, 0.15),
inset 0 0 20px rgba(80, 255, 255, 0.05);
}
.app-hero-image {
width: 100%;
height: 300px;
background: linear-gradient(135deg, rgba(80, 255, 255, 0.1), rgba(243, 128, 245, 0.05));
background-size: cover;
background-position: center;
border: 1px solid var(--border-color);
display: flex;
align-items: center;
justify-content: center;
font-size: 4rem;
color: var(--primary-cyan);
}
.app-badges {
display: flex;
gap: 0.5rem;
margin-bottom: 1rem;
}
.app-badge {
padding: 0.3rem 0.6rem;
background: var(--bg-tertiary);
color: var(--text-secondary);
font-size: 0.75rem;
text-transform: uppercase;
font-weight: 600;
}
.app-badge.featured {
background: linear-gradient(135deg, var(--primary-cyan), var(--primary-teal));
color: var(--bg-dark);
box-shadow: 0 2px 10px rgba(80, 255, 255, 0.3);
}
.app-badge.sponsored {
background: linear-gradient(135deg, var(--warning), #ff8c00);
color: var(--bg-dark);
box-shadow: 0 2px 10px rgba(245, 158, 11, 0.3);
}
.app-hero-info h1 {
font-size: 2.5rem;
color: var(--primary-cyan);
margin: 0.5rem 0;
text-shadow: 0 0 20px rgba(80, 255, 255, 0.5);
}
.app-tagline {
font-size: 1.1rem;
color: var(--text-secondary);
margin-bottom: 2rem;
}
/* Stats */
.app-stats {
display: flex;
gap: 2rem;
margin: 2rem 0;
padding: 1rem 0;
border-top: 1px solid var(--border-color);
border-bottom: 1px solid var(--border-color);
}
.stat {
display: flex;
flex-direction: column;
gap: 0.25rem;
}
.stat-value {
font-size: 1.5rem;
color: var(--primary-cyan);
font-weight: 600;
}
.stat-label {
font-size: 0.875rem;
color: var(--text-tertiary);
}
/* Action Buttons */
.app-actions {
display: flex;
gap: 1rem;
margin: 2rem 0;
}
.action-btn {
padding: 0.75rem 1.5rem;
border: 1px solid var(--border-color);
background: transparent;
color: var(--text-primary);
text-decoration: none;
display: inline-flex;
align-items: center;
gap: 0.5rem;
transition: all 0.2s;
cursor: pointer;
font-family: inherit;
font-size: 0.9rem;
}
.action-btn.primary {
background: linear-gradient(135deg, var(--primary-cyan), var(--primary-teal));
color: var(--bg-dark);
border-color: var(--primary-cyan);
font-weight: 600;
}
.action-btn.primary:hover {
box-shadow: 0 4px 15px rgba(80, 255, 255, 0.3);
transform: translateY(-2px);
}
.action-btn.secondary {
border-color: var(--accent-pink);
color: var(--accent-pink);
}
.action-btn.secondary:hover {
background: rgba(243, 128, 245, 0.1);
box-shadow: 0 4px 15px rgba(243, 128, 245, 0.2);
}
.action-btn.ghost {
border-color: var(--border-color);
color: var(--text-secondary);
}
.action-btn.ghost:hover {
border-color: var(--primary-cyan);
color: var(--primary-cyan);
}
/* Pricing */
.pricing-info {
display: flex;
align-items: center;
gap: 1rem;
font-size: 1.1rem;
}
.pricing-label {
color: var(--text-tertiary);
}
.pricing-value {
color: var(--warning);
font-weight: 600;
}
/* Navigation Tabs */
.tabs {
display: flex;
flex-direction: row;
gap: 0;
border-bottom: 2px solid var(--border-color);
margin-bottom: 0;
background: var(--bg-tertiary);
}
.tab-btn {
padding: 1rem 2rem;
background: transparent;
border: none;
border-bottom: 3px solid transparent;
color: var(--text-secondary);
cursor: pointer;
transition: all 0.2s;
font-family: inherit;
font-size: 0.95rem;
margin-bottom: -2px;
white-space: nowrap;
font-weight: 500;
}
.tab-btn:hover {
color: var(--primary-cyan);
background: rgba(80, 255, 255, 0.05);
}
.tab-btn.active {
color: var(--primary-cyan);
border-bottom-color: var(--primary-cyan);
background: var(--bg-secondary);
}
.app-nav {
max-width: 1800px;
margin: 2rem auto 0;
padding: 0 2rem;
display: flex;
gap: 1rem;
border-bottom: 2px solid var(--border-color);
}
.nav-tab {
padding: 1rem 1.5rem;
background: transparent;
border: none;
border-bottom: 2px solid transparent;
color: var(--text-secondary);
cursor: pointer;
transition: all 0.2s;
font-family: inherit;
font-size: 0.9rem;
margin-bottom: -2px;
}
.nav-tab:hover {
color: var(--primary-cyan);
}
.nav-tab.active {
color: var(--primary-cyan);
border-bottom-color: var(--primary-cyan);
}
/* Main Content Wrapper */
.app-main {
max-width: 1800px;
margin: 2rem auto;
padding: 0 2rem;
}
/* Content Sections */
.app-content {
background: var(--bg-secondary);
border: 1px solid var(--border-color);
padding: 0;
}
.tab-content {
display: none;
padding: 2rem;
}
.tab-content.active {
display: block;
}
/* Overview Layout */
.overview-columns {
display: grid;
grid-template-columns: 2fr 1fr;
gap: 2rem;
}
.overview-main h2, .overview-main h3 {
color: var(--primary-cyan);
margin-top: 2rem;
margin-bottom: 1rem;
}
.overview-main h2:first-child {
margin-top: 0;
}
.overview-main h2 {
font-size: 1.8rem;
border-bottom: 2px solid var(--border-color);
padding-bottom: 0.5rem;
}
.overview-main h3 {
font-size: 1.3rem;
}
.features-list {
list-style: none;
padding: 0;
}
.features-list li {
padding: 0.5rem 0;
padding-left: 1.5rem;
position: relative;
color: var(--text-secondary);
}
.features-list li:before {
content: "▸";
position: absolute;
left: 0;
color: var(--primary-cyan);
}
.use-cases p {
color: var(--text-secondary);
line-height: 1.6;
}
/* Sidebar */
.sidebar {
display: flex;
flex-direction: column;
gap: 1rem;
}
.sidebar-card {
background: var(--bg-secondary);
border: 1px solid var(--border-color);
padding: 1.5rem;
}
.sidebar-card h3 {
font-size: 1.1rem;
color: var(--primary-cyan);
margin: 0 0 1rem 0;
border-bottom: 1px solid var(--border-color);
padding-bottom: 0.5rem;
}
.stats-grid {
display: grid;
grid-template-columns: 1fr 1fr;
gap: 1rem;
}
.stats-grid > div {
text-align: center;
}
.metadata {
margin: 0;
}
.metadata div {
display: flex;
justify-content: space-between;
padding: 0.75rem 0;
border-bottom: 1px solid var(--border-color);
}
.metadata dt {
color: var(--text-tertiary);
font-weight: normal;
}
.metadata dd {
color: var(--text-primary);
margin: 0;
font-weight: 600;
}
.sidebar-card p {
color: var(--text-secondary);
margin: 0;
}
/* Integration Content */
.integration-content {
max-width: 100%;
}
.integration-content h2 {
font-size: 1.8rem;
color: var(--primary-cyan);
margin: 0 0 2rem 0;
padding-bottom: 0.5rem;
border-bottom: 2px solid var(--border-color);
}
.integration-content h3 {
font-size: 1.3rem;
color: var(--text-primary);
margin: 2rem 0 1rem;
}
.docs-content {
max-width: 100%;
}
.docs-content h2 {
font-size: 1.8rem;
color: var(--primary-cyan);
margin: 0 0 1.5rem 0;
padding-bottom: 0.5rem;
border-bottom: 2px solid var(--border-color);
}
.docs-content h3 {
font-size: 1.3rem;
color: var(--text-primary);
margin: 2rem 0 1rem;
}
.docs-content h4 {
font-size: 1.1rem;
color: var(--accent-pink);
margin: 1.5rem 0 0.5rem;
}
.docs-content p {
color: var(--text-secondary);
line-height: 1.6;
margin-bottom: 1rem;
}
.docs-content code {
background: var(--bg-tertiary);
padding: 0.2rem 0.4rem;
color: var(--primary-cyan);
font-family: 'Dank Mono', Monaco, monospace;
font-size: 0.9em;
}
/* Code Blocks */
.code-block {
background: var(--bg-dark);
border: 1px solid var(--border-color);
margin: 1rem 0;
overflow: hidden;
position: relative;
}
.code-header {
display: flex;
justify-content: space-between;
align-items: center;
padding: 0.5rem 1rem;
background: var(--bg-tertiary);
border-bottom: 1px solid var(--border-color);
}
.code-lang {
color: var(--primary-cyan);
font-size: 0.875rem;
text-transform: uppercase;
}
.copy-btn {
position: absolute;
top: 0.5rem;
right: 0.5rem;
padding: 0.4rem 0.8rem;
background: var(--bg-tertiary);
border: 1px solid var(--border-color);
color: var(--text-secondary);
cursor: pointer;
font-size: 0.75rem;
transition: all 0.2s;
z-index: 10;
}
.copy-btn:hover {
border-color: var(--primary-cyan);
color: var(--primary-cyan);
background: var(--bg-secondary);
}
.code-block pre {
margin: 0;
padding: 1rem;
overflow-x: auto;
}
.code-block code {
background: transparent;
padding: 0;
color: var(--text-secondary);
font-size: 0.875rem;
line-height: 1.5;
}
/* Feature Grid */
.feature-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 1rem;
margin: 2rem 0;
}
.feature-card {
background: var(--bg-tertiary);
border: 1px solid var(--border-color);
padding: 1.5rem;
transition: all 0.2s;
}
.feature-card:hover {
border-color: var(--primary-cyan);
background: rgba(80, 255, 255, 0.05);
}
.feature-card h4 {
margin-top: 0;
}
/* Info Box */
.info-box {
background: linear-gradient(135deg, rgba(80, 255, 255, 0.05), rgba(243, 128, 245, 0.03));
border: 1px solid var(--primary-cyan);
border-left: 4px solid var(--primary-cyan);
padding: 1.5rem;
margin: 2rem 0;
}
.info-box h4 {
margin-top: 0;
color: var(--primary-cyan);
}
/* Support Grid */
.support-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 1rem;
margin: 2rem 0;
}
.support-card {
background: var(--bg-tertiary);
border: 1px solid var(--border-color);
padding: 1.5rem;
text-align: center;
}
.support-card h3 {
color: var(--primary-cyan);
margin-bottom: 0.5rem;
}
/* Related Apps */
.related-apps {
max-width: 1800px;
margin: 4rem auto;
padding: 0 2rem;
}
.related-apps h2 {
font-size: 1.5rem;
color: var(--text-primary);
margin-bottom: 1.5rem;
}
.related-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(250px, 1fr));
gap: 1rem;
}
.related-app-card {
background: var(--bg-secondary);
border: 1px solid var(--border-color);
padding: 1rem;
cursor: pointer;
transition: all 0.2s;
}
.related-app-card:hover {
border-color: var(--primary-cyan);
transform: translateY(-2px);
}
/* Responsive */
@media (max-width: 1024px) {
.app-hero-content {
grid-template-columns: 1fr;
}
.app-stats {
justify-content: space-around;
}
.overview-columns {
grid-template-columns: 1fr;
}
}
@media (max-width: 768px) {
.app-hero-info h1 {
font-size: 2rem;
}
.app-actions {
flex-direction: column;
}
.tabs {
overflow-x: auto;
-webkit-overflow-scrolling: touch;
}
.tab-btn {
padding: 0.75rem 1.5rem;
font-size: 0.875rem;
}
.app-nav {
overflow-x: auto;
gap: 0;
}
.nav-tab {
white-space: nowrap;
}
.feature-grid,
.support-grid {
grid-template-columns: 1fr;
}
.tab-content {
padding: 1rem;
}
.app-main {
padding: 0 1rem;
}
}

View File

@@ -0,0 +1,209 @@
<!DOCTYPE html>
<html lang="en" data-theme="dark">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>App Details - Crawl4AI Marketplace</title>
<link rel="stylesheet" href="marketplace.css">
<link rel="stylesheet" href="app-detail.css">
</head>
<body>
<div class="app-detail-container">
<!-- Header -->
<header class="marketplace-header">
<div class="header-content">
<div class="header-left">
<div class="logo-title">
<img src="../assets/images/logo.png" alt="Crawl4AI" class="header-logo">
<h1>
<span class="ascii-border">[</span>
Marketplace
<span class="ascii-border">]</span>
</h1>
</div>
</div>
<div class="header-nav">
<a href="index.html" class="back-btn">← Back to Marketplace</a>
</div>
</div>
</header>
<!-- App Hero Section -->
<section class="app-hero">
<div class="app-hero-content">
<div class="app-hero-image" id="app-image">
<!-- Dynamic image -->
</div>
<div class="app-hero-info">
<div class="app-badges">
<span class="app-badge" id="app-type">Open Source</span>
<span class="app-badge featured" id="app-featured" style="display:none">FEATURED</span>
<span class="app-badge sponsored" id="app-sponsored" style="display:none">SPONSORED</span>
</div>
<h1 id="app-name">App Name</h1>
<p id="app-description" class="app-tagline">App description goes here</p>
<div class="app-stats">
<div class="stat">
<span class="stat-value" id="app-rating">★★★★★</span>
<span class="stat-label">Rating</span>
</div>
<div class="stat">
<span class="stat-value" id="app-downloads">0</span>
<span class="stat-label">Downloads</span>
</div>
<div class="stat">
<span class="stat-value" id="app-category">Category</span>
<span class="stat-label">Category</span>
</div>
</div>
<div class="app-actions">
<a href="#" id="app-website" class="action-btn primary" target="_blank">Visit Website</a>
<a href="#" id="app-github" class="action-btn" target="_blank">View GitHub</a>
<a href="#" id="app-demo" class="action-btn" target="_blank" style="display:none">Live Demo</a>
</div>
</div>
</div>
</section>
<!-- App Details Section -->
<main class="app-main">
<div class="app-content">
<div class="tabs">
<button class="tab-btn active" data-tab="overview">Overview</button>
<button class="tab-btn" data-tab="integration">Integration</button>
<button class="tab-btn" data-tab="docs">Documentation</button>
<button class="tab-btn" data-tab="support">Support</button>
</div>
<section id="overview-tab" class="tab-content active">
<div class="overview-columns">
<div class="overview-main">
<h2>Overview</h2>
<div id="app-overview">Overview content goes here.</div>
<h3>Key Features</h3>
<ul id="app-features" class="features-list">
<li>Feature 1</li>
<li>Feature 2</li>
<li>Feature 3</li>
</ul>
<h3>Use Cases</h3>
<div id="app-use-cases" class="use-cases">
<p>Describe how this app can help your workflow.</p>
</div>
</div>
<aside class="sidebar">
<div class="sidebar-card">
<h3>Download Stats</h3>
<div class="stats-grid">
<div>
<span class="stat-value" id="sidebar-downloads">0</span>
<span class="stat-label">Downloads</span>
</div>
<div>
<span class="stat-value" id="sidebar-rating">0.0</span>
<span class="stat-label">Rating</span>
</div>
</div>
</div>
<div class="sidebar-card">
<h3>App Metadata</h3>
<dl class="metadata">
<div>
<dt>Category</dt>
<dd id="sidebar-category">-</dd>
</div>
<div>
<dt>Type</dt>
<dd id="sidebar-type">-</dd>
</div>
<div>
<dt>Status</dt>
<dd id="sidebar-status">Active</dd>
</div>
<div>
<dt>Pricing</dt>
<dd id="sidebar-pricing">-</dd>
</div>
</dl>
</div>
<div class="sidebar-card">
<h3>Contact</h3>
<p id="sidebar-contact">contact@example.com</p>
</div>
</aside>
</div>
</section>
<section id="integration-tab" class="tab-content">
<div class="integration-content">
<h2>Integration Guide</h2>
<h3>Installation</h3>
<div class="code-block">
<pre><code id="install-code"># Installation instructions will appear here</code></pre>
</div>
<h3>Basic Usage</h3>
<div class="code-block">
<pre><code id="usage-code"># Usage example will appear here</code></pre>
</div>
<h3>Complete Integration Example</h3>
<div class="code-block">
<button class="copy-btn" id="copy-integration">Copy</button>
<pre><code id="integration-code"># Complete integration guide will appear here</code></pre>
</div>
</div>
</section>
<section id="docs-tab" class="tab-content">
<div class="docs-content">
<h2>Documentation</h2>
<div id="app-docs" class="doc-sections">
<p>Documentation coming soon.</p>
</div>
</div>
</section>
<section id="support-tab" class="tab-content">
<div class="docs-content">
<h2>Support</h2>
<div class="support-grid">
<div class="support-card">
<h3>📧 Contact</h3>
<p id="app-contact">contact@example.com</p>
</div>
<div class="support-card">
<h3>🐛 Report Issues</h3>
<p>Found a bug? Report it on GitHub Issues.</p>
</div>
<div class="support-card">
<h3>💬 Community</h3>
<p>Join our Discord for help and discussions.</p>
</div>
</div>
</div>
</section>
</div>
</main>
<!-- Related Apps -->
<section class="related-apps">
<h2>Related Apps</h2>
<div id="related-apps-grid" class="related-grid">
<!-- Dynamic related apps -->
</div>
</section>
</div>
<script src="app-detail.js"></script>
</body>
</html>

View File

@@ -0,0 +1,348 @@
// App Detail Page JavaScript
const { API_BASE, API_ORIGIN } = (() => {
const { hostname, port, protocol } = window.location;
const isLocalHost = ['localhost', '127.0.0.1', '0.0.0.0'].includes(hostname);
if (isLocalHost && port && port !== '8100') {
const origin = `${protocol}//127.0.0.1:8100`;
return { API_BASE: `${origin}/marketplace/api`, API_ORIGIN: origin };
}
return { API_BASE: '/marketplace/api', API_ORIGIN: '' };
})();
class AppDetailPage {
constructor() {
this.appSlug = this.getAppSlugFromURL();
this.appData = null;
this.init();
}
getAppSlugFromURL() {
const params = new URLSearchParams(window.location.search);
return params.get('app') || '';
}
async init() {
if (!this.appSlug) {
window.location.href = 'index.html';
return;
}
await this.loadAppDetails();
this.setupEventListeners();
await this.loadRelatedApps();
}
async loadAppDetails() {
try {
const response = await fetch(`${API_BASE}/apps/${this.appSlug}`);
if (!response.ok) throw new Error('App not found');
this.appData = await response.json();
this.renderAppDetails();
} catch (error) {
console.error('Error loading app details:', error);
// Fallback to loading all apps and finding the right one
try {
const response = await fetch(`${API_BASE}/apps`);
const apps = await response.json();
this.appData = apps.find(app => app.slug === this.appSlug || app.name.toLowerCase().replace(/\s+/g, '-') === this.appSlug);
if (this.appData) {
this.renderAppDetails();
} else {
window.location.href = 'index.html';
}
} catch (err) {
console.error('Error loading apps:', err);
window.location.href = 'index.html';
}
}
}
renderAppDetails() {
if (!this.appData) return;
// Update title
document.title = `${this.appData.name} - Crawl4AI Marketplace`;
// Hero image
const appImage = document.getElementById('app-image');
if (this.appData.image) {
appImage.style.backgroundImage = `url('${this.appData.image}')`;
appImage.innerHTML = '';
} else {
appImage.innerHTML = `[${this.appData.category || 'APP'}]`;
}
// Basic info
document.getElementById('app-name').textContent = this.appData.name;
document.getElementById('app-description').textContent = this.appData.description;
document.getElementById('app-type').textContent = this.appData.type || 'Open Source';
document.getElementById('app-category').textContent = this.appData.category;
// Badges
if (this.appData.featured) {
document.getElementById('app-featured').style.display = 'inline-block';
}
if (this.appData.sponsored) {
document.getElementById('app-sponsored').style.display = 'inline-block';
}
// Stats
const rating = this.appData.rating || 0;
const stars = '★'.repeat(Math.floor(rating)) + '☆'.repeat(5 - Math.floor(rating));
document.getElementById('app-rating').textContent = stars + ` ${rating}/5`;
document.getElementById('app-downloads').textContent = this.formatNumber(this.appData.downloads || 0);
// Action buttons
const websiteBtn = document.getElementById('app-website');
const githubBtn = document.getElementById('app-github');
if (this.appData.website_url) {
websiteBtn.href = this.appData.website_url;
} else {
websiteBtn.style.display = 'none';
}
if (this.appData.github_url) {
githubBtn.href = this.appData.github_url;
} else {
githubBtn.style.display = 'none';
}
// Contact
document.getElementById('app-contact').textContent = this.appData.contact_email || 'Not available';
// Sidebar info
document.getElementById('sidebar-downloads').textContent = this.formatNumber(this.appData.downloads || 0);
document.getElementById('sidebar-rating').textContent = (this.appData.rating || 0).toFixed(1);
document.getElementById('sidebar-category').textContent = this.appData.category || '-';
document.getElementById('sidebar-type').textContent = this.appData.type || '-';
document.getElementById('sidebar-status').textContent = this.appData.status || 'Active';
document.getElementById('sidebar-pricing').textContent = this.appData.pricing || 'Free';
document.getElementById('sidebar-contact').textContent = this.appData.contact_email || 'contact@example.com';
// Integration guide
this.renderIntegrationGuide();
}
renderIntegrationGuide() {
// Installation code
const installCode = document.getElementById('install-code');
if (installCode) {
if (this.appData.type === 'Open Source' && this.appData.github_url) {
installCode.textContent = `# Clone from GitHub
git clone ${this.appData.github_url}
# Install dependencies
pip install -r requirements.txt`;
} else if (this.appData.name.toLowerCase().includes('api')) {
installCode.textContent = `# Install via pip
pip install ${this.appData.slug}
# Or install from source
pip install git+${this.appData.github_url || 'https://github.com/example/repo'}`;
}
}
// Usage code - customize based on category
const usageCode = document.getElementById('usage-code');
if (usageCode) {
if (this.appData.category === 'Browser Automation') {
usageCode.textContent = `from crawl4ai import AsyncWebCrawler
from ${this.appData.slug.replace(/-/g, '_')} import ${this.appData.name.replace(/\s+/g, '')}
async def main():
# Initialize ${this.appData.name}
automation = ${this.appData.name.replace(/\s+/g, '')}()
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
browser_config=automation.config,
wait_for="css:body"
)
print(result.markdown)`;
} else if (this.appData.category === 'Proxy Services') {
usageCode.textContent = `from crawl4ai import AsyncWebCrawler
import ${this.appData.slug.replace(/-/g, '_')}
# Configure proxy
proxy_config = {
"server": "${this.appData.website_url || 'https://proxy.example.com'}",
"username": "your_username",
"password": "your_password"
}
async with AsyncWebCrawler(proxy=proxy_config) as crawler:
result = await crawler.arun(
url="https://example.com",
bypass_cache=True
)
print(result.status_code)`;
} else if (this.appData.category === 'LLM Integration') {
usageCode.textContent = `from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy
# Configure LLM extraction
strategy = LLMExtractionStrategy(
provider="${this.appData.name.toLowerCase().includes('gpt') ? 'openai' : 'anthropic'}",
api_key="your-api-key",
model="${this.appData.name.toLowerCase().includes('gpt') ? 'gpt-4' : 'claude-3'}",
instruction="Extract structured data"
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
extraction_strategy=strategy
)
print(result.extracted_content)`;
}
}
// Integration example
const integrationCode = document.getElementById('integration-code');
if (integrationCode) {
integrationCode.textContent = this.appData.integration_guide ||
`# Complete ${this.appData.name} Integration Example
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
import json
async def crawl_with_${this.appData.slug.replace(/-/g, '_')}():
"""
Complete example showing how to use ${this.appData.name}
with Crawl4AI for production web scraping
"""
# Define extraction schema
schema = {
"name": "ProductList",
"baseSelector": "div.product",
"fields": [
{"name": "title", "selector": "h2", "type": "text"},
{"name": "price", "selector": ".price", "type": "text"},
{"name": "image", "selector": "img", "type": "attribute", "attribute": "src"},
{"name": "link", "selector": "a", "type": "attribute", "attribute": "href"}
]
}
# Initialize crawler with ${this.appData.name}
async with AsyncWebCrawler(
browser_type="chromium",
headless=True,
verbose=True
) as crawler:
# Crawl with extraction
result = await crawler.arun(
url="https://example.com/products",
extraction_strategy=JsonCssExtractionStrategy(schema),
cache_mode="bypass",
wait_for="css:.product",
screenshot=True
)
# Process results
if result.success:
products = json.loads(result.extracted_content)
print(f"Found {len(products)} products")
for product in products[:5]:
print(f"- {product['title']}: {product['price']}")
return products
# Run the crawler
if __name__ == "__main__":
import asyncio
asyncio.run(crawl_with_${this.appData.slug.replace(/-/g, '_')}())`;
}
}
formatNumber(num) {
if (num >= 1000000) {
return (num / 1000000).toFixed(1) + 'M';
} else if (num >= 1000) {
return (num / 1000).toFixed(1) + 'K';
}
return num.toString();
}
setupEventListeners() {
// Tab switching
const tabs = document.querySelectorAll('.tab-btn');
tabs.forEach(tab => {
tab.addEventListener('click', () => {
// Update active tab
tabs.forEach(t => t.classList.remove('active'));
tab.classList.add('active');
// Show corresponding content
const tabName = tab.dataset.tab;
document.querySelectorAll('.tab-content').forEach(content => {
content.classList.remove('active');
});
document.getElementById(`${tabName}-tab`).classList.add('active');
});
});
// Copy integration code
document.getElementById('copy-integration').addEventListener('click', () => {
const code = document.getElementById('integration-code').textContent;
navigator.clipboard.writeText(code).then(() => {
const btn = document.getElementById('copy-integration');
const originalText = btn.innerHTML;
btn.innerHTML = '<span>✓</span> Copied!';
setTimeout(() => {
btn.innerHTML = originalText;
}, 2000);
});
});
// Copy code buttons
document.querySelectorAll('.copy-btn').forEach(btn => {
btn.addEventListener('click', (e) => {
const codeBlock = e.target.closest('.code-block');
const code = codeBlock.querySelector('code').textContent;
navigator.clipboard.writeText(code).then(() => {
btn.textContent = 'Copied!';
setTimeout(() => {
btn.textContent = 'Copy';
}, 2000);
});
});
});
}
async loadRelatedApps() {
// Guard against the redirect path in loadAppDetails, where appData is never set
if (!this.appData) return;
try {
const response = await fetch(`${API_BASE}/apps?category=${encodeURIComponent(this.appData.category)}&limit=4`);
const apps = await response.json();
const relatedApps = apps.filter(app => app.slug !== this.appSlug).slice(0, 3);
const grid = document.getElementById('related-apps-grid');
grid.innerHTML = relatedApps.map(app => `
<div class="related-app-card" onclick="window.location.href='app-detail.html?app=${app.slug || app.name.toLowerCase().replace(/\s+/g, '-')}'">
<h4>${app.name}</h4>
<p>${app.description.substring(0, 100)}...</p>
<div style="display: flex; justify-content: space-between; margin-top: 0.5rem; font-size: 0.75rem;">
<span style="color: var(--primary-cyan)">${app.type}</span>
<span style="color: var(--warning)">★ ${app.rating}/5</span>
</div>
</div>
`).join('');
} catch (error) {
console.error('Error loading related apps:', error);
}
}
}
// Initialize when DOM is loaded
document.addEventListener('DOMContentLoaded', () => {
new AppDetailPage();
});

View File

@@ -0,0 +1,14 @@
# Marketplace Configuration
# Copy this to .env and update with your values
# Admin password (required)
MARKETPLACE_ADMIN_PASSWORD=change_this_password
# JWT secret key (required) - generate with: python3 -c "import secrets; print(secrets.token_urlsafe(32))"
MARKETPLACE_JWT_SECRET=change_this_to_a_secure_random_key
# Database path (optional, defaults to ./marketplace.db)
MARKETPLACE_DB_PATH=./marketplace.db
# Token expiry in hours (optional, defaults to 4)
MARKETPLACE_TOKEN_EXPIRY=4

View File

@@ -0,0 +1,59 @@
"""
Marketplace Configuration - Loads from .env file
"""
import os
import sys
import hashlib
from pathlib import Path
from dotenv import load_dotenv
# Load .env file
env_path = Path(__file__).parent / '.env'
if not env_path.exists():
print("\n❌ ERROR: No .env file found!")
print("Please copy .env.example to .env and update with your values:")
print(f" cp {Path(__file__).parent}/.env.example {Path(__file__).parent}/.env")
print("\nThen edit .env with your secure values.")
sys.exit(1)
load_dotenv(env_path)
# Required environment variables
required_vars = ['MARKETPLACE_ADMIN_PASSWORD', 'MARKETPLACE_JWT_SECRET']
missing_vars = [var for var in required_vars if not os.getenv(var)]
if missing_vars:
print(f"\n❌ ERROR: Missing required environment variables: {', '.join(missing_vars)}")
print("Please check your .env file and ensure all required variables are set.")
sys.exit(1)
class Config:
"""Configuration loaded from environment variables"""
# Admin authentication - hashed from password in .env
ADMIN_PASSWORD_HASH = hashlib.sha256(
os.getenv('MARKETPLACE_ADMIN_PASSWORD').encode()
).hexdigest()
# JWT secret for token generation
JWT_SECRET_KEY = os.getenv('MARKETPLACE_JWT_SECRET')
# Database path
DATABASE_PATH = os.getenv('MARKETPLACE_DB_PATH', './marketplace.db')
# Token expiry in hours
TOKEN_EXPIRY_HOURS = int(os.getenv('MARKETPLACE_TOKEN_EXPIRY', '4'))
# CORS origins - hardcoded as they don't contain secrets
ALLOWED_ORIGINS = [
"http://localhost:8000",
"http://localhost:8080",
"http://localhost:8100",
"http://127.0.0.1:8000",
"http://127.0.0.1:8080",
"http://127.0.0.1:8100",
"https://crawl4ai.com",
"https://www.crawl4ai.com",
"https://docs.crawl4ai.com",
"https://market.crawl4ai.com"
]

View File

@@ -0,0 +1,117 @@
import sqlite3
import yaml
import json
from pathlib import Path
from typing import Dict, List, Any
class DatabaseManager:
def __init__(self, db_path=None, schema_path='schema.yaml'):
self.schema = self._load_schema(schema_path)
# Use provided path or fallback to schema default
self.db_path = db_path or self.schema['database']['name']
self.conn = None
self._init_database()
def _load_schema(self, path: str) -> Dict:
with open(path, 'r') as f:
return yaml.safe_load(f)
def _init_database(self):
"""Auto-create/migrate database from schema"""
self.conn = sqlite3.connect(self.db_path, check_same_thread=False)
self.conn.row_factory = sqlite3.Row
for table_name, table_def in self.schema['tables'].items():
self._create_or_update_table(table_name, table_def['columns'])
def _create_or_update_table(self, table_name: str, columns: Dict):
cursor = self.conn.cursor()
# Check if table exists
cursor.execute(f"SELECT name FROM sqlite_master WHERE type='table' AND name=?", (table_name,))
table_exists = cursor.fetchone() is not None
if not table_exists:
# Create table
col_defs = []
for col_name, col_spec in columns.items():
col_def = f"{col_name} {col_spec['type']}"
if col_spec.get('primary'):
col_def += " PRIMARY KEY"
if col_spec.get('autoincrement'):
col_def += " AUTOINCREMENT"
if col_spec.get('unique'):
col_def += " UNIQUE"
if col_spec.get('required'):
col_def += " NOT NULL"
if 'default' in col_spec:
default = col_spec['default']
if default == 'CURRENT_TIMESTAMP':
col_def += f" DEFAULT {default}"
elif isinstance(default, str):
col_def += f" DEFAULT '{default}'"
else:
col_def += f" DEFAULT {default}"
col_defs.append(col_def)
create_sql = f"CREATE TABLE {table_name} ({', '.join(col_defs)})"
cursor.execute(create_sql)
else:
# Check for new columns and add them
cursor.execute(f"PRAGMA table_info({table_name})")
existing_columns = {row[1] for row in cursor.fetchall()}
for col_name, col_spec in columns.items():
if col_name not in existing_columns:
col_def = f"{col_spec['type']}"
if 'default' in col_spec:
default = col_spec['default']
if default == 'CURRENT_TIMESTAMP':
col_def += f" DEFAULT {default}"
elif isinstance(default, str):
col_def += f" DEFAULT '{default}'"
else:
col_def += f" DEFAULT {default}"
cursor.execute(f"ALTER TABLE {table_name} ADD COLUMN {col_name} {col_def}")
self.conn.commit()
def get_all(self, table: str, limit: int = 100, offset: int = 0, where: str = None) -> List[Dict]:
cursor = self.conn.cursor()
query = f"SELECT * FROM {table}"
if where:
query += f" WHERE {where}"
query += f" LIMIT {limit} OFFSET {offset}"
cursor.execute(query)
rows = cursor.fetchall()
return [dict(row) for row in rows]
def search(self, query: str, tables: List[str] = None) -> Dict[str, List[Dict]]:
if not tables:
tables = list(self.schema['tables'].keys())
results = {}
cursor = self.conn.cursor()
for table in tables:
# Search in text columns
columns = self.schema['tables'][table]['columns']
text_cols = [col for col, spec in columns.items()
if spec['type'] == 'TEXT' and col != 'id']
if text_cols:
where_clause = ' OR '.join([f"{col} LIKE ?" for col in text_cols])
params = [f'%{query}%'] * len(text_cols)
cursor.execute(f"SELECT * FROM {table} WHERE {where_clause} LIMIT 10", params)
rows = cursor.fetchall()
if rows:
results[table] = [dict(row) for row in rows]
return results
def close(self):
if self.conn:
self.conn.close()
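A minimal usage sketch for the DatabaseManager above (assumes schema.yaml sits in the working directory and marketplace.db is writable; the query values are illustrative):

```python
# Hedged sketch: exercising DatabaseManager directly, assuming schema.yaml is
# next to the script and marketplace.db is writable.
from database import DatabaseManager

db = DatabaseManager(db_path="marketplace.db", schema_path="schema.yaml")

# Tables are created or migrated from schema.yaml inside __init__, so the
# queries below work even on a fresh database (they just return empty lists).
featured = db.get_all("apps", limit=5, where="featured = 1")
print(f"{len(featured)} featured apps")

hits = db.search("proxy", tables=["apps"])
print(hits.get("apps", []))

db.close()
```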

View File

@@ -0,0 +1,267 @@
import json
import random
from datetime import datetime, timedelta
from database import DatabaseManager
def generate_slug(text):
return text.lower().replace(' ', '-').replace('&', 'and')
def generate_dummy_data():
db = DatabaseManager()
conn = db.conn
cursor = conn.cursor()
# Clear existing data
for table in ['apps', 'articles', 'categories', 'sponsors']:
cursor.execute(f"DELETE FROM {table}")
# Categories
categories = [
("Browser Automation", "", "Tools for browser automation and control"),
("Proxy Services", "🔒", "Proxy providers and rotation services"),
("LLM Integration", "🤖", "AI/LLM tools and integrations"),
("Data Processing", "📊", "Data extraction and processing tools"),
("Cloud Infrastructure", "", "Cloud browser and computing services"),
("Developer Tools", "🛠", "Development and testing utilities")
]
for i, (name, icon, desc) in enumerate(categories):
cursor.execute("""
INSERT INTO categories (name, slug, icon, description, order_index)
VALUES (?, ?, ?, ?, ?)
""", (name, generate_slug(name), icon, desc, i))
# Apps with real Unsplash images
apps_data = [
# Browser Automation
("Playwright Cloud", "Browser Automation", "Paid", True, True,
"Scalable browser automation in the cloud with Playwright", "https://playwright.cloud",
None, "$99/month starter", 4.8, 12500,
"https://images.unsplash.com/photo-1633356122544-f134324a6cee?w=800&h=400&fit=crop"),
("Selenium Grid Hub", "Browser Automation", "Freemium", False, False,
"Distributed Selenium grid for parallel testing", "https://seleniumhub.io",
"https://github.com/seleniumhub/grid", "Free - $299/month", 4.2, 8400,
"https://images.unsplash.com/photo-1555066931-4365d14bab8c?w=800&h=400&fit=crop"),
("Puppeteer Extra", "Browser Automation", "Open Source", True, False,
"Enhanced Puppeteer with stealth plugins and more", "https://puppeteer-extra.dev",
"https://github.com/berstend/puppeteer-extra", "Free", 4.6, 15200,
"https://images.unsplash.com/photo-1461749280684-dccba630e2f6?w=800&h=400&fit=crop"),
# Proxy Services
("BrightData", "Proxy Services", "Paid", True, True,
"Premium proxy network with 72M+ IPs worldwide", "https://brightdata.com",
None, "Starting $500/month", 4.7, 9800,
"https://images.unsplash.com/photo-1558494949-ef010cbdcc31?w=800&h=400&fit=crop"),
("SmartProxy", "Proxy Services", "Paid", False, True,
"Residential and datacenter proxies with rotation", "https://smartproxy.com",
None, "Starting $75/month", 4.3, 7600,
"https://images.unsplash.com/photo-1544197150-b99a580bb7a8?w=800&h=400&fit=crop"),
("ProxyMesh", "Proxy Services", "Freemium", False, False,
"Rotating proxy servers with sticky sessions", "https://proxymesh.com",
None, "$10-$50/month", 4.0, 4200,
"https://images.unsplash.com/photo-1451187580459-43490279c0fa?w=800&h=400&fit=crop"),
# LLM Integration
("LangChain Crawl", "LLM Integration", "Open Source", True, False,
"LangChain integration for Crawl4AI workflows", "https://langchain-crawl.dev",
"https://github.com/langchain/crawl", "Free", 4.5, 18900,
"https://images.unsplash.com/photo-1677442136019-21780ecad995?w=800&h=400&fit=crop"),
("GPT Scraper", "LLM Integration", "Freemium", False, False,
"Extract structured data using GPT models", "https://gptscraper.ai",
None, "Free - $99/month", 4.1, 5600,
"https://images.unsplash.com/photo-1655720828018-edd2daec9349?w=800&h=400&fit=crop"),
("Claude Extract", "LLM Integration", "Paid", True, True,
"Professional extraction using Claude AI", "https://claude-extract.com",
None, "$199/month", 4.9, 3200,
"https://images.unsplash.com/photo-1686191128892-3b09ad503b4f?w=800&h=400&fit=crop"),
# Data Processing
("DataMiner Pro", "Data Processing", "Paid", False, False,
"Advanced data extraction and transformation", "https://dataminer.pro",
None, "$149/month", 4.2, 6700,
"https://images.unsplash.com/photo-1551288049-bebda4e38f71?w=800&h=400&fit=crop"),
("ScraperAPI", "Data Processing", "Freemium", True, True,
"Simple API for web scraping with proxy rotation", "https://scraperapi.com",
None, "Free - $299/month", 4.6, 22300,
"https://images.unsplash.com/photo-1460925895917-afdab827c52f?w=800&h=400&fit=crop"),
("Apify", "Data Processing", "Freemium", False, False,
"Web scraping and automation platform", "https://apify.com",
None, "$49-$499/month", 4.4, 14500,
"https://images.unsplash.com/photo-1504639725590-34d0984388bd?w=800&h=400&fit=crop"),
# Cloud Infrastructure
("BrowserCloud", "Cloud Infrastructure", "Paid", True, True,
"Managed headless browsers in the cloud", "https://browsercloud.io",
None, "$199/month", 4.5, 8900,
"https://images.unsplash.com/photo-1667372393119-3d4c48d07fc9?w=800&h=400&fit=crop"),
("LambdaTest", "Cloud Infrastructure", "Freemium", False, False,
"Cross-browser testing on cloud", "https://lambdatest.com",
None, "Free - $99/month", 4.1, 11200,
"https://images.unsplash.com/photo-1451187580459-43490279c0fa?w=800&h=400&fit=crop"),
("Browserless", "Cloud Infrastructure", "Freemium", True, False,
"Headless browser automation API", "https://browserless.io",
None, "$50-$500/month", 4.7, 19800,
"https://images.unsplash.com/photo-1639762681485-074b7f938ba0?w=800&h=400&fit=crop"),
# Developer Tools
("Crawl4AI VSCode", "Developer Tools", "Open Source", True, False,
"VSCode extension for Crawl4AI development", "https://marketplace.visualstudio.com",
"https://github.com/crawl4ai/vscode", "Free", 4.8, 34500,
"https://images.unsplash.com/photo-1629654297299-c8506221ca97?w=800&h=400&fit=crop"),
("Postman Collection", "Developer Tools", "Open Source", False, False,
"Postman collection for Crawl4AI API testing", "https://postman.com/crawl4ai",
"https://github.com/crawl4ai/postman", "Free", 4.3, 7800,
"https://images.unsplash.com/photo-1599507593499-a3f7d7d97667?w=800&h=400&fit=crop"),
("Debug Toolkit", "Developer Tools", "Open Source", False, False,
"Debugging tools for crawler development", "https://debug.crawl4ai.com",
"https://github.com/crawl4ai/debug", "Free", 4.0, 4300,
"https://images.unsplash.com/photo-1515879218367-8466d910aaa4?w=800&h=400&fit=crop"),
]
for name, category, type_, featured, sponsored, desc, url, github, pricing, rating, downloads, image in apps_data:
screenshots = json.dumps([
f"https://images.unsplash.com/photo-{random.randint(1500000000000, 1700000000000)}-{random.randint(1000000000000, 9999999999999)}?w=800&h=600&fit=crop",
f"https://images.unsplash.com/photo-{random.randint(1500000000000, 1700000000000)}-{random.randint(1000000000000, 9999999999999)}?w=800&h=600&fit=crop"
])
cursor.execute("""
INSERT INTO apps (name, slug, description, category, type, featured, sponsored,
website_url, github_url, pricing, rating, downloads, image, screenshots, logo_url,
integration_guide, contact_email, views)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""", (name, generate_slug(name), desc, category, type_, featured, sponsored,
url, github, pricing, rating, downloads, image, screenshots,
f"https://ui-avatars.com/api/?name={name}&background=50ffff&color=070708&size=128",
f"# {name} Integration\n\n```python\nfrom crawl4ai import AsyncWebCrawler\n# Integration code coming soon...\n```",
f"contact@{generate_slug(name)}.com",
random.randint(100, 5000)))
# Articles with real images
articles_data = [
("Browser Automation Showdown: Playwright vs Puppeteer vs Selenium",
"Review", "John Doe", ["Playwright Cloud", "Puppeteer Extra"],
["browser-automation", "comparison", "2024"],
"https://images.unsplash.com/photo-1587620962725-abab7fe55159?w=1200&h=630&fit=crop"),
("Top 5 Proxy Services for Web Scraping in 2024",
"Comparison", "Jane Smith", ["BrightData", "SmartProxy", "ProxyMesh"],
["proxy", "web-scraping", "guide"],
"https://images.unsplash.com/photo-1558494949-ef010cbdcc31?w=1200&h=630&fit=crop"),
("Integrating LLMs with Crawl4AI: A Complete Guide",
"Tutorial", "Crawl4AI Team", ["LangChain Crawl", "GPT Scraper", "Claude Extract"],
["llm", "integration", "tutorial"],
"https://images.unsplash.com/photo-1677442136019-21780ecad995?w=1200&h=630&fit=crop"),
("Building Scalable Crawlers with Cloud Infrastructure",
"Tutorial", "Mike Johnson", ["BrowserCloud", "Browserless"],
["cloud", "scalability", "architecture"],
"https://images.unsplash.com/photo-1667372393119-3d4c48d07fc9?w=1200&h=630&fit=crop"),
("What's New in Crawl4AI Marketplace",
"News", "Crawl4AI Team", [],
["marketplace", "announcement", "news"],
"https://images.unsplash.com/photo-1556075798-4825dfaaf498?w=1200&h=630&fit=crop"),
("Cost Analysis: Self-Hosted vs Cloud Browser Solutions",
"Comparison", "Sarah Chen", ["BrowserCloud", "LambdaTest", "Browserless"],
["cost", "cloud", "comparison"],
"https://images.unsplash.com/photo-1554224155-8d04cb21cd6c?w=1200&h=630&fit=crop"),
("Getting Started with Browser Automation",
"Tutorial", "Crawl4AI Team", ["Playwright Cloud", "Selenium Grid Hub"],
["beginner", "tutorial", "automation"],
"https://images.unsplash.com/photo-1498050108023-c5249f4df085?w=1200&h=630&fit=crop"),
("The Future of Web Scraping: AI-Powered Extraction",
"News", "Dr. Alan Turing", ["Claude Extract", "GPT Scraper"],
["ai", "future", "trends"],
"https://images.unsplash.com/photo-1593720213428-28a5b9e94613?w=1200&h=630&fit=crop")
]
for title, category, author, related_apps, tags, image in articles_data:
# Get app IDs for related apps
related_ids = []
for app_name in related_apps:
cursor.execute("SELECT id FROM apps WHERE name = ?", (app_name,))
result = cursor.fetchone()
if result:
related_ids.append(result[0])
content = f"""# {title}
By {author} | {datetime.now().strftime('%B %d, %Y')}
## Introduction
This is a comprehensive article about {title.lower()}. Lorem ipsum dolor sit amet, consectetur adipiscing elit.
Sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
## Key Points
- Important point about the topic
- Another crucial insight
- Technical details and specifications
- Performance comparisons
## Conclusion
In summary, this article explored various aspects of the topic. Stay tuned for more updates!
"""
cursor.execute("""
INSERT INTO articles (title, slug, content, author, category, related_apps,
featured_image, tags, views)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
""", (title, generate_slug(title), content, author, category,
json.dumps(related_ids), image, json.dumps(tags),
random.randint(200, 10000)))
# Sponsors
sponsors_data = [
("BrightData", "Gold", "https://brightdata.com",
"https://images.unsplash.com/photo-1558494949-ef010cbdcc31?w=728&h=90&fit=crop"),
("ScraperAPI", "Gold", "https://scraperapi.com",
"https://images.unsplash.com/photo-1460925895917-afdab827c52f?w=728&h=90&fit=crop"),
("BrowserCloud", "Silver", "https://browsercloud.io",
"https://images.unsplash.com/photo-1667372393119-3d4c48d07fc9?w=728&h=90&fit=crop"),
("Claude Extract", "Silver", "https://claude-extract.com",
"https://images.unsplash.com/photo-1686191128892-3b09ad503b4f?w=728&h=90&fit=crop"),
("SmartProxy", "Bronze", "https://smartproxy.com",
"https://images.unsplash.com/photo-1544197150-b99a580bb7a8?w=728&h=90&fit=crop")
]
for company, tier, landing_url, banner in sponsors_data:
start_date = datetime.now() - timedelta(days=random.randint(1, 30))
end_date = datetime.now() + timedelta(days=random.randint(30, 180))
cursor.execute("""
INSERT INTO sponsors (company_name, logo_url, tier, banner_url,
landing_url, active, start_date, end_date)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""", (company,
f"https://ui-avatars.com/api/?name={company}&background=09b5a5&color=fff&size=200",
tier, banner, landing_url, 1,
start_date.isoformat(), end_date.isoformat()))
conn.commit()
print("✓ Dummy data generated successfully!")
print(f" - {len(categories)} categories")
print(f" - {len(apps_data)} apps")
print(f" - {len(articles_data)} articles")
print(f" - {len(sponsors_data)} sponsors")
if __name__ == "__main__":
generate_dummy_data()

View File

@@ -0,0 +1,5 @@
fastapi
uvicorn
pyyaml
python-multipart
python-dotenv

View File

@@ -0,0 +1,75 @@
database:
name: marketplace.db
tables:
apps:
columns:
id: {type: INTEGER, primary: true, autoincrement: true}
name: {type: TEXT, required: true}
slug: {type: TEXT, unique: true}
description: {type: TEXT}
long_description: {type: TEXT}
logo_url: {type: TEXT}
image: {type: TEXT}
screenshots: {type: JSON, default: '[]'}
category: {type: TEXT}
type: {type: TEXT, default: 'Open Source'}
status: {type: TEXT, default: 'Active'}
website_url: {type: TEXT}
github_url: {type: TEXT}
demo_url: {type: TEXT}
video_url: {type: TEXT}
documentation_url: {type: TEXT}
support_url: {type: TEXT}
discord_url: {type: TEXT}
pricing: {type: TEXT}
rating: {type: REAL, default: 0.0}
downloads: {type: INTEGER, default: 0}
featured: {type: BOOLEAN, default: 0}
sponsored: {type: BOOLEAN, default: 0}
integration_guide: {type: TEXT}
documentation: {type: TEXT}
examples: {type: TEXT}
installation_command: {type: TEXT}
requirements: {type: TEXT}
changelog: {type: TEXT}
tags: {type: JSON, default: '[]'}
added_date: {type: DATETIME, default: CURRENT_TIMESTAMP}
updated_date: {type: DATETIME, default: CURRENT_TIMESTAMP}
contact_email: {type: TEXT}
views: {type: INTEGER, default: 0}
articles:
columns:
id: {type: INTEGER, primary: true, autoincrement: true}
title: {type: TEXT, required: true}
slug: {type: TEXT, unique: true}
content: {type: TEXT}
author: {type: TEXT, default: 'Crawl4AI Team'}
category: {type: TEXT}
related_apps: {type: JSON, default: '[]'}
featured_image: {type: TEXT}
published_date: {type: DATETIME, default: CURRENT_TIMESTAMP}
tags: {type: JSON, default: '[]'}
views: {type: INTEGER, default: 0}
categories:
columns:
id: {type: INTEGER, primary: true, autoincrement: true}
name: {type: TEXT, unique: true}
slug: {type: TEXT, unique: true}
icon: {type: TEXT}
description: {type: TEXT}
order_index: {type: INTEGER, default: 0}
sponsors:
columns:
id: {type: INTEGER, primary: true, autoincrement: true}
company_name: {type: TEXT, required: true}
logo_url: {type: TEXT}
tier: {type: TEXT, default: 'Bronze'}
banner_url: {type: TEXT}
landing_url: {type: TEXT}
active: {type: BOOLEAN, default: 1}
start_date: {type: DATETIME}
end_date: {type: DATETIME}
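Because DatabaseManager diffs this schema against PRAGMA table_info on startup, a column added here is picked up automatically on the next run. A hedged sketch of that migration path (the `license` column is hypothetical, purely for illustration):

```python
# Sketch: after adding e.g. `license: {type: TEXT, default: 'MIT'}` under
# apps.columns in schema.yaml, re-instantiating DatabaseManager issues an
# ALTER TABLE for the missing column. The column name is hypothetical.
from database import DatabaseManager

db = DatabaseManager(db_path="marketplace.db", schema_path="schema.yaml")
cursor = db.conn.cursor()
cursor.execute("PRAGMA table_info(apps)")
print([row[1] for row in cursor.fetchall()])  # current apps columns, including any newly added ones
db.close()
```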

View File

@@ -0,0 +1,493 @@
from fastapi import FastAPI, HTTPException, Query, Depends, Body, UploadFile, File, Form, APIRouter
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import JSONResponse
from fastapi.staticfiles import StaticFiles
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from typing import Optional, Dict, Any
import json
import hashlib
import secrets
import re
from pathlib import Path
from database import DatabaseManager
from datetime import datetime, timedelta
# Import configuration (will exit if .env not found or invalid)
from config import Config
app = FastAPI(title="Crawl4AI Marketplace API")
router = APIRouter(prefix="/marketplace/api")
# Security setup
security = HTTPBearer()
tokens = {} # In production, use Redis or database for token storage
# CORS configuration
app.add_middleware(
CORSMiddleware,
allow_origins=Config.ALLOWED_ORIGINS,
allow_credentials=True,
allow_methods=["GET", "POST", "PUT", "DELETE", "OPTIONS"],
allow_headers=["*"],
max_age=3600
)
# Initialize database with configurable path
db = DatabaseManager(Config.DATABASE_PATH)
BASE_DIR = Path(__file__).parent
UPLOAD_ROOT = BASE_DIR / "uploads"
UPLOAD_ROOT.mkdir(parents=True, exist_ok=True)
app.mount("/uploads", StaticFiles(directory=UPLOAD_ROOT), name="uploads")
ALLOWED_IMAGE_TYPES = {
"image/png": ".png",
"image/jpeg": ".jpg",
"image/webp": ".webp",
"image/svg+xml": ".svg"
}
ALLOWED_UPLOAD_FOLDERS = {"sponsors"}
MAX_UPLOAD_SIZE = 2 * 1024 * 1024 # 2 MB
def json_response(data, cache_time=3600):
"""Helper to return JSON with cache headers"""
return JSONResponse(
content=data,
headers={
"Cache-Control": f"public, max-age={cache_time}",
"X-Content-Type-Options": "nosniff"
}
)
def to_int(value, default=0):
"""Coerce incoming values to integers, falling back to default."""
if value is None:
return default
if isinstance(value, bool):
return int(value)
if isinstance(value, (int, float)):
return int(value)
if isinstance(value, str):
stripped = value.strip()
if not stripped:
return default
match = re.match(r"^-?\d+", stripped)
if match:
try:
return int(match.group())
except ValueError:
return default
return default
# ============= PUBLIC ENDPOINTS =============
@router.get("/apps")
async def get_apps(
category: Optional[str] = None,
type: Optional[str] = None,
featured: Optional[bool] = None,
sponsored: Optional[bool] = None,
limit: int = Query(default=20, le=10000),
offset: int = Query(default=0)
):
"""Get apps with optional filters"""
where_clauses = []
if category:
where_clauses.append(f"category = '{category}'")
if type:
where_clauses.append(f"type = '{type}'")
if featured is not None:
where_clauses.append(f"featured = {1 if featured else 0}")
if sponsored is not None:
where_clauses.append(f"sponsored = {1 if sponsored else 0}")
where = " AND ".join(where_clauses) if where_clauses else None
apps = db.get_all('apps', limit=limit, offset=offset, where=where)
# Parse JSON fields
for app in apps:
if app.get('screenshots'):
app['screenshots'] = json.loads(app['screenshots'])
return json_response(apps)
@router.get("/apps/{slug}")
async def get_app(slug: str):
"""Get single app by slug"""
apps = db.get_all('apps', where=f"slug = '{slug}'", limit=1)
if not apps:
raise HTTPException(status_code=404, detail="App not found")
app = apps[0]
if app.get('screenshots'):
app['screenshots'] = json.loads(app['screenshots'])
return json_response(app)
@router.get("/articles")
async def get_articles(
category: Optional[str] = None,
limit: int = Query(default=20, le=10000),
offset: int = Query(default=0)
):
"""Get articles with optional category filter"""
where = f"category = '{category}'" if category else None
articles = db.get_all('articles', limit=limit, offset=offset, where=where)
# Parse JSON fields
for article in articles:
if article.get('related_apps'):
article['related_apps'] = json.loads(article['related_apps'])
if article.get('tags'):
article['tags'] = json.loads(article['tags'])
return json_response(articles)
@router.get("/articles/{slug}")
async def get_article(slug: str):
"""Get single article by slug"""
articles = db.get_all('articles', where=f"slug = '{slug}'", limit=1)
if not articles:
raise HTTPException(status_code=404, detail="Article not found")
article = articles[0]
if article.get('related_apps'):
article['related_apps'] = json.loads(article['related_apps'])
if article.get('tags'):
article['tags'] = json.loads(article['tags'])
return json_response(article)
@router.get("/categories")
async def get_categories():
"""Get all categories ordered by index"""
categories = db.get_all('categories', limit=50)
for category in categories:
category['order_index'] = to_int(category.get('order_index'), 0)
categories.sort(key=lambda x: x.get('order_index', 0))
return json_response(categories, cache_time=7200)
@router.get("/sponsors")
async def get_sponsors(active: Optional[bool] = True):
"""Get sponsors, default active only"""
where = f"active = {1 if active else 0}" if active is not None else None
sponsors = db.get_all('sponsors', where=where, limit=20)
# Filter by date if active
if active:
now = datetime.now().isoformat()
sponsors = [s for s in sponsors
if (not s.get('start_date') or s['start_date'] <= now) and
(not s.get('end_date') or s['end_date'] >= now)]
return json_response(sponsors)
@router.get("/search")
async def search(q: str = Query(min_length=2)):
"""Search across apps and articles"""
if len(q) < 2:
return json_response({})
results = db.search(q, tables=['apps', 'articles'])
# Parse JSON fields in results
for table, items in results.items():
for item in items:
if table == 'apps' and item.get('screenshots'):
item['screenshots'] = json.loads(item['screenshots'])
elif table == 'articles':
if item.get('related_apps'):
item['related_apps'] = json.loads(item['related_apps'])
if item.get('tags'):
item['tags'] = json.loads(item['tags'])
return json_response(results, cache_time=1800)
@router.get("/stats")
async def get_stats():
"""Get marketplace statistics"""
stats = {
"total_apps": len(db.get_all('apps', limit=10000)),
"total_articles": len(db.get_all('articles', limit=10000)),
"total_categories": len(db.get_all('categories', limit=1000)),
"active_sponsors": len(db.get_all('sponsors', where="active = 1", limit=1000))
}
return json_response(stats, cache_time=1800)
# ============= ADMIN AUTHENTICATION =============
def verify_token(credentials: HTTPAuthorizationCredentials = Depends(security)):
"""Verify admin authentication token"""
token = credentials.credentials
if token not in tokens or tokens[token] < datetime.now():
raise HTTPException(status_code=401, detail="Invalid or expired token")
return token
@router.post("/admin/upload-image", dependencies=[Depends(verify_token)])
async def upload_image(file: UploadFile = File(...), folder: str = Form("sponsors")):
"""Upload image files for admin assets"""
folder = (folder or "").strip().lower()
if folder not in ALLOWED_UPLOAD_FOLDERS:
raise HTTPException(status_code=400, detail="Invalid upload folder")
if file.content_type not in ALLOWED_IMAGE_TYPES:
raise HTTPException(status_code=400, detail="Unsupported file type")
contents = await file.read()
if len(contents) > MAX_UPLOAD_SIZE:
raise HTTPException(status_code=400, detail="File too large (max 2MB)")
extension = ALLOWED_IMAGE_TYPES[file.content_type]
filename = f"{datetime.now().strftime('%Y%m%d%H%M%S')}_{secrets.token_hex(8)}{extension}"
target_dir = UPLOAD_ROOT / folder
target_dir.mkdir(parents=True, exist_ok=True)
target_path = target_dir / filename
target_path.write_bytes(contents)
return {"url": f"/uploads/{folder}/{filename}"}
@router.post("/admin/login")
async def admin_login(password: str = Body(..., embed=True)):
"""Admin login with password"""
provided_hash = hashlib.sha256(password.encode()).hexdigest()
if provided_hash != Config.ADMIN_PASSWORD_HASH:
# Log failed attempt in production
print(f"Failed login attempt at {datetime.now()}")
raise HTTPException(status_code=401, detail="Invalid password")
# Generate secure token
token = secrets.token_urlsafe(32)
tokens[token] = datetime.now() + timedelta(hours=Config.TOKEN_EXPIRY_HOURS)
return {
"token": token,
"expires_in": Config.TOKEN_EXPIRY_HOURS * 3600
}
# ============= ADMIN ENDPOINTS =============
@router.get("/admin/stats", dependencies=[Depends(verify_token)])
async def get_admin_stats():
"""Get detailed admin statistics"""
stats = {
"apps": {
"total": len(db.get_all('apps', limit=10000)),
"featured": len(db.get_all('apps', where="featured = 1", limit=10000)),
"sponsored": len(db.get_all('apps', where="sponsored = 1", limit=10000))
},
"articles": len(db.get_all('articles', limit=10000)),
"categories": len(db.get_all('categories', limit=1000)),
"sponsors": {
"active": len(db.get_all('sponsors', where="active = 1", limit=1000)),
"total": len(db.get_all('sponsors', limit=10000))
},
"total_views": sum(app.get('views', 0) for app in db.get_all('apps', limit=10000))
}
return stats
# Apps CRUD
@router.post("/admin/apps", dependencies=[Depends(verify_token)])
async def create_app(app_data: Dict[str, Any]):
"""Create new app"""
try:
# Handle JSON fields
for field in ['screenshots', 'tags']:
if field in app_data and isinstance(app_data[field], list):
app_data[field] = json.dumps(app_data[field])
cursor = db.conn.cursor()
columns = ', '.join(app_data.keys())
placeholders = ', '.join(['?' for _ in app_data])
cursor.execute(f"INSERT INTO apps ({columns}) VALUES ({placeholders})",
list(app_data.values()))
db.conn.commit()
return {"id": cursor.lastrowid, "message": "App created"}
except Exception as e:
raise HTTPException(status_code=400, detail=str(e))
@router.put("/admin/apps/{app_id}", dependencies=[Depends(verify_token)])
async def update_app(app_id: int, app_data: Dict[str, Any]):
"""Update app"""
try:
# Handle JSON fields
for field in ['screenshots', 'tags']:
if field in app_data and isinstance(app_data[field], list):
app_data[field] = json.dumps(app_data[field])
set_clause = ', '.join([f"{k} = ?" for k in app_data.keys()])
cursor = db.conn.cursor()
cursor.execute(f"UPDATE apps SET {set_clause} WHERE id = ?",
list(app_data.values()) + [app_id])
db.conn.commit()
return {"message": "App updated"}
except Exception as e:
raise HTTPException(status_code=400, detail=str(e))
@router.delete("/admin/apps/{app_id}", dependencies=[Depends(verify_token)])
async def delete_app(app_id: int):
"""Delete app"""
cursor = db.conn.cursor()
cursor.execute("DELETE FROM apps WHERE id = ?", (app_id,))
db.conn.commit()
return {"message": "App deleted"}
# Articles CRUD
@router.post("/admin/articles", dependencies=[Depends(verify_token)])
async def create_article(article_data: Dict[str, Any]):
"""Create new article"""
try:
for field in ['related_apps', 'tags']:
if field in article_data and isinstance(article_data[field], list):
article_data[field] = json.dumps(article_data[field])
cursor = db.conn.cursor()
columns = ', '.join(article_data.keys())
placeholders = ', '.join(['?' for _ in article_data])
cursor.execute(f"INSERT INTO articles ({columns}) VALUES ({placeholders})",
list(article_data.values()))
db.conn.commit()
return {"id": cursor.lastrowid, "message": "Article created"}
except Exception as e:
raise HTTPException(status_code=400, detail=str(e))
@router.put("/admin/articles/{article_id}", dependencies=[Depends(verify_token)])
async def update_article(article_id: int, article_data: Dict[str, Any]):
"""Update article"""
try:
for field in ['related_apps', 'tags']:
if field in article_data and isinstance(article_data[field], list):
article_data[field] = json.dumps(article_data[field])
set_clause = ', '.join([f"{k} = ?" for k in article_data.keys()])
cursor = db.conn.cursor()
cursor.execute(f"UPDATE articles SET {set_clause} WHERE id = ?",
list(article_data.values()) + [article_id])
db.conn.commit()
return {"message": "Article updated"}
except Exception as e:
raise HTTPException(status_code=400, detail=str(e))
@router.delete("/admin/articles/{article_id}", dependencies=[Depends(verify_token)])
async def delete_article(article_id: int):
"""Delete article"""
cursor = db.conn.cursor()
cursor.execute("DELETE FROM articles WHERE id = ?", (article_id,))
db.conn.commit()
return {"message": "Article deleted"}
# Categories CRUD
@router.post("/admin/categories", dependencies=[Depends(verify_token)])
async def create_category(category_data: Dict[str, Any]):
"""Create new category"""
try:
category_data = dict(category_data)
category_data['order_index'] = to_int(category_data.get('order_index'), 0)
cursor = db.conn.cursor()
columns = ', '.join(category_data.keys())
placeholders = ', '.join(['?' for _ in category_data])
cursor.execute(f"INSERT INTO categories ({columns}) VALUES ({placeholders})",
list(category_data.values()))
db.conn.commit()
return {"id": cursor.lastrowid, "message": "Category created"}
except Exception as e:
raise HTTPException(status_code=400, detail=str(e))
@router.put("/admin/categories/{cat_id}", dependencies=[Depends(verify_token)])
async def update_category(cat_id: int, category_data: Dict[str, Any]):
"""Update category"""
try:
category_data = dict(category_data)
if 'order_index' in category_data:
category_data['order_index'] = to_int(category_data.get('order_index'), 0)
set_clause = ', '.join([f"{k} = ?" for k in category_data.keys()])
cursor = db.conn.cursor()
cursor.execute(f"UPDATE categories SET {set_clause} WHERE id = ?",
list(category_data.values()) + [cat_id])
db.conn.commit()
return {"message": "Category updated"}
except Exception as e:
raise HTTPException(status_code=400, detail=str(e))
@router.delete("/admin/categories/{cat_id}", dependencies=[Depends(verify_token)])
async def delete_category(cat_id: int):
"""Delete category"""
try:
cursor = db.conn.cursor()
cursor.execute("DELETE FROM categories WHERE id = ?", (cat_id,))
db.conn.commit()
return {"message": "Category deleted"}
except Exception as e:
raise HTTPException(status_code=400, detail=str(e))
# Sponsors CRUD
@router.post("/admin/sponsors", dependencies=[Depends(verify_token)])
async def create_sponsor(sponsor_data: Dict[str, Any]):
"""Create new sponsor"""
try:
cursor = db.conn.cursor()
columns = ', '.join(sponsor_data.keys())
placeholders = ', '.join(['?' for _ in sponsor_data])
cursor.execute(f"INSERT INTO sponsors ({columns}) VALUES ({placeholders})",
list(sponsor_data.values()))
db.conn.commit()
return {"id": cursor.lastrowid, "message": "Sponsor created"}
except Exception as e:
raise HTTPException(status_code=400, detail=str(e))
@router.put("/admin/sponsors/{sponsor_id}", dependencies=[Depends(verify_token)])
async def update_sponsor(sponsor_id: int, sponsor_data: Dict[str, Any]):
"""Update sponsor"""
try:
set_clause = ', '.join([f"{k} = ?" for k in sponsor_data.keys()])
cursor = db.conn.cursor()
cursor.execute(f"UPDATE sponsors SET {set_clause} WHERE id = ?",
list(sponsor_data.values()) + [sponsor_id])
db.conn.commit()
return {"message": "Sponsor updated"}
except Exception as e:
raise HTTPException(status_code=400, detail=str(e))
@router.delete("/admin/sponsors/{sponsor_id}", dependencies=[Depends(verify_token)])
async def delete_sponsor(sponsor_id: int):
"""Delete sponsor"""
try:
cursor = db.conn.cursor()
cursor.execute("DELETE FROM sponsors WHERE id = ?", (sponsor_id,))
db.conn.commit()
return {"message": "Sponsor deleted"}
except Exception as e:
raise HTTPException(status_code=400, detail=str(e))
app.include_router(router)
@app.get("/")
async def root():
"""API info"""
return {
"name": "Crawl4AI Marketplace API",
"version": "1.0.0",
"endpoints": [
"/marketplace/api/apps",
"/marketplace/api/articles",
"/marketplace/api/categories",
"/marketplace/api/sponsors",
"/marketplace/api/search?q=query",
"/marketplace/api/stats"
]
}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="127.0.0.1", port=8100)
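A minimal client sketch against the API above (assumes the server is running locally on port 8100 as in the `__main__` block, that `requests` is installed separately — it is not in requirements.txt — and that the admin password matches whatever you set in `.env`):

```python
# Hedged client sketch for the marketplace API; server assumed to be running
# at http://127.0.0.1:8100 and `requests` installed separately.
import requests

BASE = "http://127.0.0.1:8100/marketplace/api"

# Public endpoints
apps = requests.get(f"{BASE}/apps", params={"featured": "true", "limit": 5}).json()
print([a["name"] for a in apps])

results = requests.get(f"{BASE}/search", params={"q": "proxy"}).json()
print(list(results.keys()))

# Admin flow: trade the .env password for a bearer token, then create an app.
login = requests.post(f"{BASE}/admin/login", json={"password": "change_this_password"})
token = login.json()["token"]
headers = {"Authorization": f"Bearer {token}"}
created = requests.post(
    f"{BASE}/admin/apps",
    json={"name": "Demo App", "slug": "demo-app", "category": "Developer Tools"},
    headers=headers,
)
print(created.json())
```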

View File

@@ -0,0 +1,2 @@
*
!.gitignore

View File

@@ -0,0 +1,462 @@
/* App Detail Page Styles */
.app-detail-container {
min-height: 100vh;
background: var(--bg-dark);
}
/* Back Button */
.header-nav {
display: flex;
align-items: center;
}
.back-btn {
padding: 0.5rem 1rem;
background: transparent;
border: 1px solid var(--border-color);
color: var(--primary-cyan);
text-decoration: none;
transition: all 0.2s;
font-size: 0.875rem;
}
.back-btn:hover {
border-color: var(--primary-cyan);
background: rgba(80, 255, 255, 0.1);
}
/* App Hero Section */
.app-hero {
max-width: 1800px;
margin: 2rem auto;
padding: 0 2rem;
}
.app-hero-content {
display: grid;
grid-template-columns: 1fr 2fr;
gap: 3rem;
background: linear-gradient(135deg, #1a1a2e, #0f0f1e);
border: 2px solid var(--primary-cyan);
padding: 2rem;
box-shadow: 0 0 30px rgba(80, 255, 255, 0.15),
inset 0 0 20px rgba(80, 255, 255, 0.05);
}
.app-hero-image {
width: 100%;
height: 300px;
background: linear-gradient(135deg, rgba(80, 255, 255, 0.1), rgba(243, 128, 245, 0.05));
background-size: cover;
background-position: center;
border: 1px solid var(--border-color);
display: flex;
align-items: center;
justify-content: center;
font-size: 4rem;
color: var(--primary-cyan);
}
.app-badges {
display: flex;
gap: 0.5rem;
margin-bottom: 1rem;
}
.app-badge {
padding: 0.3rem 0.6rem;
background: var(--bg-tertiary);
color: var(--text-secondary);
font-size: 0.75rem;
text-transform: uppercase;
font-weight: 600;
}
.app-badge.featured {
background: linear-gradient(135deg, var(--primary-cyan), var(--primary-teal));
color: var(--bg-dark);
box-shadow: 0 2px 10px rgba(80, 255, 255, 0.3);
}
.app-badge.sponsored {
background: linear-gradient(135deg, var(--warning), #ff8c00);
color: var(--bg-dark);
box-shadow: 0 2px 10px rgba(245, 158, 11, 0.3);
}
.app-hero-info h1 {
font-size: 2.5rem;
color: var(--primary-cyan);
margin: 0.5rem 0;
text-shadow: 0 0 20px rgba(80, 255, 255, 0.5);
}
.app-tagline {
font-size: 1.1rem;
color: var(--text-secondary);
margin-bottom: 2rem;
}
/* Stats */
.app-stats {
display: flex;
gap: 2rem;
margin: 2rem 0;
padding: 1rem 0;
border-top: 1px solid var(--border-color);
border-bottom: 1px solid var(--border-color);
}
.stat {
display: flex;
flex-direction: column;
gap: 0.25rem;
}
.stat-value {
font-size: 1.5rem;
color: var(--primary-cyan);
font-weight: 600;
}
.stat-label {
font-size: 0.875rem;
color: var(--text-tertiary);
}
/* Action Buttons */
.app-actions {
display: flex;
gap: 1rem;
margin: 2rem 0;
}
.action-btn {
padding: 0.75rem 1.5rem;
border: 1px solid var(--border-color);
background: transparent;
color: var(--text-primary);
text-decoration: none;
display: inline-flex;
align-items: center;
gap: 0.5rem;
transition: all 0.2s;
cursor: pointer;
font-family: inherit;
font-size: 0.9rem;
}
.action-btn.primary {
background: linear-gradient(135deg, var(--primary-cyan), var(--primary-teal));
color: var(--bg-dark);
border-color: var(--primary-cyan);
font-weight: 600;
}
.action-btn.primary:hover {
box-shadow: 0 4px 15px rgba(80, 255, 255, 0.3);
transform: translateY(-2px);
}
.action-btn.secondary {
border-color: var(--accent-pink);
color: var(--accent-pink);
}
.action-btn.secondary:hover {
background: rgba(243, 128, 245, 0.1);
box-shadow: 0 4px 15px rgba(243, 128, 245, 0.2);
}
.action-btn.ghost {
border-color: var(--border-color);
color: var(--text-secondary);
}
.action-btn.ghost:hover {
border-color: var(--primary-cyan);
color: var(--primary-cyan);
}
/* Pricing */
.pricing-info {
display: flex;
align-items: center;
gap: 1rem;
font-size: 1.1rem;
}
.pricing-label {
color: var(--text-tertiary);
}
.pricing-value {
color: var(--warning);
font-weight: 600;
}
/* Navigation Tabs */
.app-nav {
max-width: 1800px;
margin: 2rem auto 0;
padding: 0 2rem;
display: flex;
gap: 1rem;
border-bottom: 2px solid var(--border-color);
}
.nav-tab {
padding: 1rem 1.5rem;
background: transparent;
border: none;
border-bottom: 2px solid transparent;
color: var(--text-secondary);
cursor: pointer;
transition: all 0.2s;
font-family: inherit;
font-size: 0.9rem;
margin-bottom: -2px;
}
.nav-tab:hover {
color: var(--primary-cyan);
}
.nav-tab.active {
color: var(--primary-cyan);
border-bottom-color: var(--primary-cyan);
}
/* Content Sections */
.app-content {
max-width: 1800px;
margin: 2rem auto;
padding: 0 2rem;
}
.tab-content {
display: none;
}
.tab-content.active {
display: block;
}
.docs-content {
max-width: 1200px;
padding: 2rem;
background: var(--bg-secondary);
border: 1px solid var(--border-color);
}
.docs-content h2 {
font-size: 1.8rem;
color: var(--primary-cyan);
margin-bottom: 1rem;
padding-bottom: 0.5rem;
border-bottom: 1px solid var(--border-color);
}
.docs-content h3 {
font-size: 1.3rem;
color: var(--text-primary);
margin: 2rem 0 1rem;
}
.docs-content h4 {
font-size: 1.1rem;
color: var(--accent-pink);
margin: 1.5rem 0 0.5rem;
}
.docs-content p {
color: var(--text-secondary);
line-height: 1.6;
margin-bottom: 1rem;
}
.docs-content code {
background: var(--bg-tertiary);
padding: 0.2rem 0.4rem;
color: var(--primary-cyan);
font-family: 'Dank Mono', Monaco, monospace;
font-size: 0.9em;
}
/* Code Blocks */
.code-block {
background: var(--bg-dark);
border: 1px solid var(--border-color);
margin: 1rem 0;
overflow: hidden;
}
.code-header {
display: flex;
justify-content: space-between;
align-items: center;
padding: 0.5rem 1rem;
background: var(--bg-tertiary);
border-bottom: 1px solid var(--border-color);
}
.code-lang {
color: var(--primary-cyan);
font-size: 0.875rem;
text-transform: uppercase;
}
.copy-btn {
padding: 0.25rem 0.5rem;
background: transparent;
border: 1px solid var(--border-color);
color: var(--text-secondary);
cursor: pointer;
font-size: 0.75rem;
transition: all 0.2s;
}
.copy-btn:hover {
border-color: var(--primary-cyan);
color: var(--primary-cyan);
}
.code-block pre {
margin: 0;
padding: 1rem;
overflow-x: auto;
}
.code-block code {
background: transparent;
padding: 0;
color: var(--text-secondary);
font-size: 0.875rem;
line-height: 1.5;
}
/* Feature Grid */
.feature-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 1rem;
margin: 2rem 0;
}
.feature-card {
background: var(--bg-tertiary);
border: 1px solid var(--border-color);
padding: 1.5rem;
transition: all 0.2s;
}
.feature-card:hover {
border-color: var(--primary-cyan);
background: rgba(80, 255, 255, 0.05);
}
.feature-card h4 {
margin-top: 0;
}
/* Info Box */
.info-box {
background: linear-gradient(135deg, rgba(80, 255, 255, 0.05), rgba(243, 128, 245, 0.03));
border: 1px solid var(--primary-cyan);
border-left: 4px solid var(--primary-cyan);
padding: 1.5rem;
margin: 2rem 0;
}
.info-box h4 {
margin-top: 0;
color: var(--primary-cyan);
}
/* Support Grid */
.support-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 1rem;
margin: 2rem 0;
}
.support-card {
background: var(--bg-tertiary);
border: 1px solid var(--border-color);
padding: 1.5rem;
text-align: center;
}
.support-card h3 {
color: var(--primary-cyan);
margin-bottom: 0.5rem;
}
/* Related Apps */
.related-apps {
max-width: 1800px;
margin: 4rem auto;
padding: 0 2rem;
}
.related-apps h2 {
font-size: 1.5rem;
color: var(--text-primary);
margin-bottom: 1.5rem;
}
.related-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(250px, 1fr));
gap: 1rem;
}
.related-app-card {
background: var(--bg-secondary);
border: 1px solid var(--border-color);
padding: 1rem;
cursor: pointer;
transition: all 0.2s;
}
.related-app-card:hover {
border-color: var(--primary-cyan);
transform: translateY(-2px);
}
/* Responsive */
@media (max-width: 1024px) {
.app-hero-content {
grid-template-columns: 1fr;
}
.app-stats {
justify-content: space-around;
}
}
@media (max-width: 768px) {
.app-hero-info h1 {
font-size: 2rem;
}
.app-actions {
flex-direction: column;
}
.app-nav {
overflow-x: auto;
gap: 0;
}
.nav-tab {
white-space: nowrap;
}
.feature-grid,
.support-grid {
grid-template-columns: 1fr;
}
}

View File

@@ -0,0 +1,234 @@
<!DOCTYPE html>
<html lang="en" data-theme="dark">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>App Details - Crawl4AI Marketplace</title>
<link rel="stylesheet" href="marketplace.css">
<link rel="stylesheet" href="app-detail.css">
</head>
<body>
<div class="app-detail-container">
<!-- Header -->
<header class="marketplace-header">
<div class="header-content">
<div class="header-left">
<div class="logo-title">
<img src="../../assets/images/logo.png" alt="Crawl4AI" class="header-logo">
<h1>
<span class="ascii-border">[</span>
Marketplace
<span class="ascii-border">]</span>
</h1>
</div>
</div>
<div class="header-nav">
<a href="index.html" class="back-btn">← Back to Marketplace</a>
</div>
</div>
</header>
<!-- App Hero Section -->
<section class="app-hero">
<div class="app-hero-content">
<div class="app-hero-image" id="app-image">
<!-- Dynamic image -->
</div>
<div class="app-hero-info">
<div class="app-badges">
<span class="app-badge" id="app-type">Open Source</span>
<span class="app-badge featured" id="app-featured" style="display:none">FEATURED</span>
<span class="app-badge sponsored" id="app-sponsored" style="display:none">SPONSORED</span>
</div>
<h1 id="app-name">App Name</h1>
<p id="app-description" class="app-tagline">App description goes here</p>
<div class="app-stats">
<div class="stat">
<span class="stat-value" id="app-rating">★★★★★</span>
<span class="stat-label">Rating</span>
</div>
<div class="stat">
<span class="stat-value" id="app-downloads">0</span>
<span class="stat-label">Downloads</span>
</div>
<div class="stat">
<span class="stat-value" id="app-category">Category</span>
<span class="stat-label">Category</span>
</div>
</div>
<div class="app-actions">
<a href="#" id="app-website" class="action-btn primary" target="_blank">
<span></span> Visit Website
</a>
<a href="#" id="app-github" class="action-btn secondary" target="_blank">
<span></span> View on GitHub
</a>
<button id="copy-integration" class="action-btn ghost">
<span>📋</span> Copy Integration
</button>
</div>
<div class="pricing-info">
<span class="pricing-label">Pricing:</span>
<span id="app-pricing" class="pricing-value">Free</span>
</div>
</div>
</div>
</section>
<!-- Navigation Tabs -->
<nav class="app-nav">
<button class="nav-tab active" data-tab="integration">Integration Guide</button>
<button class="nav-tab" data-tab="docs">Documentation</button>
<button class="nav-tab" data-tab="examples">Examples</button>
<button class="nav-tab" data-tab="support">Support</button>
</nav>
<!-- Content Sections -->
<main class="app-content">
<!-- Integration Guide Tab -->
<section id="integration-tab" class="tab-content active">
<div class="docs-content">
<h2>Quick Start</h2>
<p>Get started with this integration in just a few steps.</p>
<h3>Installation</h3>
<div class="code-block">
<div class="code-header">
<span class="code-lang">bash</span>
<button class="copy-btn">Copy</button>
</div>
<pre><code id="install-code">pip install crawl4ai</code></pre>
</div>
<h3>Basic Usage</h3>
<div class="code-block">
<div class="code-header">
<span class="code-lang">python</span>
<button class="copy-btn">Copy</button>
</div>
<pre><code id="usage-code">from crawl4ai import AsyncWebCrawler
async def main():
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
# Your configuration here
)
print(result.markdown)
if __name__ == "__main__":
import asyncio
asyncio.run(main())</code></pre>
</div>
<h3>Advanced Configuration</h3>
<p>Customize the crawler with these advanced options:</p>
<div class="feature-grid">
<div class="feature-card">
<h4>🚀 Performance</h4>
<p>Optimize crawling speed with parallel processing and caching strategies.</p>
</div>
<div class="feature-card">
<h4>🔒 Authentication</h4>
<p>Handle login forms, cookies, and session management automatically.</p>
</div>
<div class="feature-card">
<h4>🎯 Extraction</h4>
<p>Use CSS selectors, XPath, or AI-powered content extraction.</p>
</div>
<div class="feature-card">
<h4>🔄 Proxy Support</h4>
<p>Rotate proxies and bypass rate limiting with built-in proxy management.</p>
</div>
</div>
<h3>Integration Example</h3>
<div class="code-block">
<div class="code-header">
<span class="code-lang">python</span>
<button class="copy-btn">Copy</button>
</div>
<pre><code id="integration-code">from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy
async def extract_with_llm():
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
extraction_strategy=LLMExtractionStrategy(
provider="openai",
api_key="your-api-key",
instruction="Extract product information"
),
bypass_cache=True
)
return result.extracted_content
# Run the extraction
data = await extract_with_llm()
print(data)</code></pre>
</div>
<div class="info-box">
<h4>💡 Pro Tip</h4>
<p>Use the <code>bypass_cache=True</code> parameter when you need fresh data, or set <code>cache_mode="write"</code> to update the cache with new content.</p>
</div>
</div>
</section>
<!-- Documentation Tab -->
<section id="docs-tab" class="tab-content">
<div class="docs-content">
<h2>Documentation</h2>
<p>Complete documentation and API reference.</p>
<!-- Dynamic content loaded here -->
</div>
</section>
<!-- Examples Tab -->
<section id="examples-tab" class="tab-content">
<div class="docs-content">
<h2>Examples</h2>
<p>Real-world examples and use cases.</p>
<!-- Dynamic content loaded here -->
</div>
</section>
<!-- Support Tab -->
<section id="support-tab" class="tab-content">
<div class="docs-content">
<h2>Support</h2>
<div class="support-grid">
<div class="support-card">
<h3>📧 Contact</h3>
<p id="app-contact">contact@example.com</p>
</div>
<div class="support-card">
<h3>🐛 Report Issues</h3>
<p>Found a bug? Report it on GitHub Issues.</p>
</div>
<div class="support-card">
<h3>💬 Community</h3>
<p>Join our Discord for help and discussions.</p>
</div>
</div>
</div>
</section>
</main>
<!-- Related Apps -->
<section class="related-apps">
<h2>Related Apps</h2>
<div id="related-apps-grid" class="related-grid">
<!-- Dynamic related apps -->
</div>
</section>
</div>
<script src="app-detail.js"></script>
</body>
</html>

View File

@@ -0,0 +1,334 @@
// App Detail Page JavaScript
const { API_BASE, API_ORIGIN } = (() => {
const { hostname, port, protocol } = window.location;
const isLocalHost = ['localhost', '127.0.0.1', '0.0.0.0'].includes(hostname);
if (isLocalHost && port && port !== '8100') {
const origin = `${protocol}//127.0.0.1:8100`;
return { API_BASE: `${origin}/marketplace/api`, API_ORIGIN: origin };
}
return { API_BASE: '/marketplace/api', API_ORIGIN: '' };
})();
class AppDetailPage {
constructor() {
this.appSlug = this.getAppSlugFromURL();
this.appData = null;
this.init();
}
getAppSlugFromURL() {
const params = new URLSearchParams(window.location.search);
return params.get('app') || '';
}
async init() {
if (!this.appSlug) {
window.location.href = 'index.html';
return;
}
await this.loadAppDetails();
this.setupEventListeners();
await this.loadRelatedApps();
}
async loadAppDetails() {
try {
const response = await fetch(`${API_BASE}/apps/${this.appSlug}`);
if (!response.ok) throw new Error('App not found');
this.appData = await response.json();
this.renderAppDetails();
} catch (error) {
console.error('Error loading app details:', error);
// Fallback to loading all apps and finding the right one
try {
const response = await fetch(`${API_BASE}/apps`);
const apps = await response.json();
this.appData = apps.find(app => app.slug === this.appSlug || app.name.toLowerCase().replace(/\s+/g, '-') === this.appSlug);
if (this.appData) {
this.renderAppDetails();
} else {
window.location.href = 'index.html';
}
} catch (err) {
console.error('Error loading apps:', err);
window.location.href = 'index.html';
}
}
}
renderAppDetails() {
if (!this.appData) return;
// Update title
document.title = `${this.appData.name} - Crawl4AI Marketplace`;
// Hero image
const appImage = document.getElementById('app-image');
if (this.appData.image) {
appImage.style.backgroundImage = `url('${this.appData.image}')`;
appImage.innerHTML = '';
} else {
appImage.innerHTML = `[${this.appData.category || 'APP'}]`;
}
// Basic info
document.getElementById('app-name').textContent = this.appData.name;
document.getElementById('app-description').textContent = this.appData.description;
document.getElementById('app-type').textContent = this.appData.type || 'Open Source';
document.getElementById('app-category').textContent = this.appData.category;
document.getElementById('app-pricing').textContent = this.appData.pricing || 'Free';
// Badges
if (this.appData.featured) {
document.getElementById('app-featured').style.display = 'inline-block';
}
if (this.appData.sponsored) {
document.getElementById('app-sponsored').style.display = 'inline-block';
}
// Stats
const rating = this.appData.rating || 0;
const stars = '★'.repeat(Math.floor(rating)) + '☆'.repeat(5 - Math.floor(rating));
document.getElementById('app-rating').textContent = stars + ` ${rating}/5`;
document.getElementById('app-downloads').textContent = this.formatNumber(this.appData.downloads || 0);
// Action buttons
const websiteBtn = document.getElementById('app-website');
const githubBtn = document.getElementById('app-github');
if (this.appData.website_url) {
websiteBtn.href = this.appData.website_url;
} else {
websiteBtn.style.display = 'none';
}
if (this.appData.github_url) {
githubBtn.href = this.appData.github_url;
} else {
githubBtn.style.display = 'none';
}
// Contact
document.getElementById('app-contact').textContent = this.appData.contact_email || 'Not available';
// Integration guide
this.renderIntegrationGuide();
}
renderIntegrationGuide() {
// Installation code
const installCode = document.getElementById('install-code');
if (this.appData.type === 'Open Source' && this.appData.github_url) {
installCode.textContent = `# Clone from GitHub
git clone ${this.appData.github_url}
# Install dependencies
pip install -r requirements.txt`;
} else if (this.appData.name.toLowerCase().includes('api')) {
installCode.textContent = `# Install via pip
pip install ${this.appData.slug}
# Or install from source
pip install git+${this.appData.github_url || 'https://github.com/example/repo'}`;
}
// Usage code - customize based on category
const usageCode = document.getElementById('usage-code');
if (this.appData.category === 'Browser Automation') {
usageCode.textContent = `from crawl4ai import AsyncWebCrawler
from ${this.appData.slug.replace(/-/g, '_')} import ${this.appData.name.replace(/\s+/g, '')}
async def main():
# Initialize ${this.appData.name}
automation = ${this.appData.name.replace(/\s+/g, '')}()
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
browser_config=automation.config,
wait_for="css:body"
)
print(result.markdown)`;
} else if (this.appData.category === 'Proxy Services') {
usageCode.textContent = `from crawl4ai import AsyncWebCrawler
import ${this.appData.slug.replace(/-/g, '_')}
# Configure proxy
proxy_config = {
"server": "${this.appData.website_url || 'https://proxy.example.com'}",
"username": "your_username",
"password": "your_password"
}
async with AsyncWebCrawler(proxy=proxy_config) as crawler:
result = await crawler.arun(
url="https://example.com",
bypass_cache=True
)
print(result.status_code)`;
} else if (this.appData.category === 'LLM Integration') {
usageCode.textContent = `from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy
# Configure LLM extraction
strategy = LLMExtractionStrategy(
provider="${this.appData.name.toLowerCase().includes('gpt') ? 'openai' : 'anthropic'}",
api_key="your-api-key",
model="${this.appData.name.toLowerCase().includes('gpt') ? 'gpt-4' : 'claude-3'}",
instruction="Extract structured data"
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
extraction_strategy=strategy
)
print(result.extracted_content)`;
}
// Integration example
const integrationCode = document.getElementById('integration-code');
integrationCode.textContent = this.appData.integration_guide ||
`# Complete ${this.appData.name} Integration Example
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
import json
async def crawl_with_${this.appData.slug.replace(/-/g, '_')}():
"""
Complete example showing how to use ${this.appData.name}
with Crawl4AI for production web scraping
"""
# Define extraction schema
schema = {
"name": "ProductList",
"baseSelector": "div.product",
"fields": [
{"name": "title", "selector": "h2", "type": "text"},
{"name": "price", "selector": ".price", "type": "text"},
{"name": "image", "selector": "img", "type": "attribute", "attribute": "src"},
{"name": "link", "selector": "a", "type": "attribute", "attribute": "href"}
]
}
# Initialize crawler with ${this.appData.name}
async with AsyncWebCrawler(
browser_type="chromium",
headless=True,
verbose=True
) as crawler:
# Crawl with extraction
result = await crawler.arun(
url="https://example.com/products",
extraction_strategy=JsonCssExtractionStrategy(schema),
cache_mode="bypass",
wait_for="css:.product",
screenshot=True
)
# Process results
if result.success:
products = json.loads(result.extracted_content)
print(f"Found {len(products)} products")
for product in products[:5]:
print(f"- {product['title']}: {product['price']}")
return products
# Run the crawler
if __name__ == "__main__":
import asyncio
asyncio.run(crawl_with_${this.appData.slug.replace(/-/g, '_')}())`;
}
formatNumber(num) {
if (num >= 1000000) {
return (num / 1000000).toFixed(1) + 'M';
} else if (num >= 1000) {
return (num / 1000).toFixed(1) + 'K';
}
return num.toString();
}
setupEventListeners() {
// Tab switching
const tabs = document.querySelectorAll('.nav-tab');
tabs.forEach(tab => {
tab.addEventListener('click', () => {
// Update active tab
tabs.forEach(t => t.classList.remove('active'));
tab.classList.add('active');
// Show corresponding content
const tabName = tab.dataset.tab;
document.querySelectorAll('.tab-content').forEach(content => {
content.classList.remove('active');
});
document.getElementById(`${tabName}-tab`).classList.add('active');
});
});
// Copy integration code
document.getElementById('copy-integration').addEventListener('click', () => {
const code = document.getElementById('integration-code').textContent;
navigator.clipboard.writeText(code).then(() => {
const btn = document.getElementById('copy-integration');
const originalText = btn.innerHTML;
btn.innerHTML = '<span>✓</span> Copied!';
setTimeout(() => {
btn.innerHTML = originalText;
}, 2000);
});
});
// Copy code buttons
document.querySelectorAll('.copy-btn').forEach(btn => {
btn.addEventListener('click', (e) => {
const codeBlock = e.target.closest('.code-block');
const code = codeBlock.querySelector('code').textContent;
navigator.clipboard.writeText(code).then(() => {
btn.textContent = 'Copied!';
setTimeout(() => {
btn.textContent = 'Copy';
}, 2000);
});
});
});
}
async loadRelatedApps() {
try {
const response = await fetch(`${API_BASE}/apps?category=${encodeURIComponent(this.appData.category)}&limit=4`);
const apps = await response.json();
const relatedApps = apps.filter(app => app.slug !== this.appSlug).slice(0, 3);
const grid = document.getElementById('related-apps-grid');
grid.innerHTML = relatedApps.map(app => `
<div class="related-app-card" onclick="window.location.href='app-detail.html?app=${app.slug || app.name.toLowerCase().replace(/\s+/g, '-')}'">
<h4>${app.name}</h4>
<p>${app.description.substring(0, 100)}...</p>
<div style="display: flex; justify-content: space-between; margin-top: 0.5rem; font-size: 0.75rem;">
<span style="color: var(--primary-cyan)">${app.type}</span>
<span style="color: var(--warning)">★ ${app.rating}/5</span>
</div>
</div>
`).join('');
} catch (error) {
console.error('Error loading related apps:', error);
}
}
}
// Initialize when DOM is loaded
document.addEventListener('DOMContentLoaded', () => {
new AppDetailPage();
});
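
The detail page resolves an app from the ?app=<slug> query parameter, trying GET /apps/{slug} first and falling back to scanning GET /apps when that fails. The same resolution can be reproduced outside the browser; the sketch below is a minimal Python equivalent, assuming the local dev server and endpoints shown in the code above (host, port, and field names are assumptions).

# Sketch of the slug resolution performed by app-detail.js, as a standalone script.
import requests

API_BASE = "http://127.0.0.1:8100/marketplace/api"  # assumption: same origin the page falls back to

def resolve_app(slug: str):
    # Try the direct detail endpoint first, as the page does.
    resp = requests.get(f"{API_BASE}/apps/{slug}")
    if resp.ok:
        return resp.json()
    # Fallback: fetch all apps and match by slug or a slugified name.
    apps = requests.get(f"{API_BASE}/apps").json()
    for app in apps:
        if app.get("slug") == slug or app.get("name", "").lower().replace(" ", "-") == slug:
            return app
    return None

if __name__ == "__main__":
    print(resolve_app("example-app"))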

View File

@@ -0,0 +1,147 @@
<!DOCTYPE html>
<html lang="en" data-theme="dark">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Marketplace - Crawl4AI</title>
<link rel="stylesheet" href="marketplace.css">
</head>
<body>
<div class="marketplace-container">
<!-- Header -->
<header class="marketplace-header">
<div class="header-content">
<div class="header-left">
<div class="logo-title">
<img src="../../assets/images/logo.png" alt="Crawl4AI" class="header-logo">
<h1>
<span class="ascii-border">[</span>
Marketplace
<span class="ascii-border">]</span>
</h1>
</div>
<p class="tagline">Tools, Integrations & Resources for Web Crawling</p>
</div>
<div class="header-stats" id="stats">
<span class="stat-item">Apps: <span id="total-apps">--</span></span>
<span class="stat-item">Articles: <span id="total-articles">--</span></span>
<span class="stat-item">Downloads: <span id="total-downloads">--</span></span>
</div>
</div>
</header>
<!-- Search and Category Bar -->
<div class="search-filter-bar">
<div class="search-box">
<span class="search-icon">></span>
<input type="text" id="search-input" placeholder="Search apps, articles, tools..." />
<kbd>/</kbd>
</div>
<div class="category-filter" id="category-filter">
<button class="filter-btn active" data-category="all">All</button>
<!-- Categories will be loaded here -->
</div>
</div>
<!-- Magazine Grid Layout -->
<main class="magazine-layout">
<!-- Hero Featured Section -->
<section class="hero-featured">
<div id="featured-hero" class="featured-hero-card">
<!-- Large featured card with big image -->
</div>
</section>
<!-- Secondary Featured -->
<section class="secondary-featured">
<div id="featured-secondary" class="featured-secondary-cards">
<!-- 2-3 medium featured cards with images -->
</div>
</section>
<!-- Sponsored Section -->
<section class="sponsored-section">
<div class="section-label">SPONSORED</div>
<div id="sponsored-content" class="sponsored-cards">
<!-- Sponsored content cards -->
</div>
</section>
<!-- Main Content Grid -->
<section class="main-content">
<!-- Apps Column -->
<div class="apps-column">
<div class="column-header">
<h2><span class="ascii-icon">></span> Latest Apps</h2>
<select id="type-filter" class="mini-filter">
<option value="">All</option>
<option value="Open Source">Open Source</option>
<option value="Paid">Paid</option>
</select>
</div>
<div id="apps-grid" class="apps-compact-grid">
<!-- Compact app cards -->
</div>
</div>
<!-- Articles Column -->
<div class="articles-column">
<div class="column-header">
<h2><span class="ascii-icon">></span> Latest Articles</h2>
</div>
<div id="articles-list" class="articles-compact-list">
<!-- Article items -->
</div>
</div>
<!-- Trending/Tools Column -->
<div class="trending-column">
<div class="column-header">
<h2><span class="ascii-icon">#</span> Trending</h2>
</div>
<div id="trending-list" class="trending-items">
<!-- Trending items -->
</div>
<div class="submit-box">
<h3><span class="ascii-icon">+</span> Submit Your Tool</h3>
<p>Share your integration</p>
<a href="mailto:marketplace@crawl4ai.com" class="submit-btn">Submit →</a>
</div>
</div>
</section>
<!-- More Apps Grid -->
<section class="more-apps">
<div class="section-header">
<h2><span class="ascii-icon">></span> More Apps</h2>
<button id="load-more" class="load-more-btn">Load More ↓</button>
</div>
<div id="more-apps-grid" class="more-apps-grid">
<!-- Additional app cards -->
</div>
</section>
</main>
<!-- Footer -->
<footer class="marketplace-footer">
<div class="footer-content">
<div class="footer-section">
<h3>About Marketplace</h3>
<p>Discover tools and integrations built by the Crawl4AI community.</p>
</div>
<div class="footer-section">
<h3>Become a Sponsor</h3>
<p>Reach developers building with Crawl4AI</p>
<a href="mailto:sponsors@crawl4ai.com" class="sponsor-btn">Learn More →</a>
</div>
</div>
<div class="footer-bottom">
<p>[ Crawl4AI Marketplace · Updated <span id="last-update">--</span> ]</p>
</div>
</footer>
</div>
<script src="marketplace.js"></script>
</body>
</html>

View File

@@ -0,0 +1,957 @@
/* Marketplace CSS - Magazine Style Terminal Theme */
@import url('../../assets/styles.css');
:root {
--primary-cyan: #50ffff;
--primary-teal: #09b5a5;
--accent-pink: #f380f5;
--bg-dark: #070708;
--bg-secondary: #1a1a1a;
--bg-tertiary: #3f3f44;
--text-primary: #e8e9ed;
--text-secondary: #d5cec0;
--text-tertiary: #a3abba;
--border-color: #3f3f44;
--success: #50ff50;
--error: #ff3c74;
--warning: #f59e0b;
}
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: 'Dank Mono', Monaco, monospace;
background: var(--bg-dark);
color: var(--text-primary);
line-height: 1.6;
}
/* Global link styles */
a {
color: var(--primary-cyan);
text-decoration: none;
transition: color 0.2s;
}
a:hover {
color: var(--accent-pink);
}
.marketplace-container {
min-height: 100vh;
}
/* Header */
.marketplace-header {
background: var(--bg-secondary);
border-bottom: 1px solid var(--border-color);
padding: 1.5rem 0;
}
.header-content {
max-width: 1800px;
margin: 0 auto;
padding: 0 2rem;
display: flex;
justify-content: space-between;
align-items: center;
}
.logo-title {
display: flex;
align-items: center;
gap: 1rem;
}
.header-logo {
height: 40px;
width: auto;
filter: brightness(1.2);
}
.marketplace-header h1 {
font-size: 1.5rem;
color: var(--primary-cyan);
margin: 0;
}
.ascii-border {
color: var(--border-color);
}
.tagline {
font-size: 0.875rem;
color: var(--text-tertiary);
margin-top: 0.25rem;
}
.header-stats {
display: flex;
gap: 2rem;
}
.stat-item {
font-size: 0.875rem;
color: var(--text-secondary);
}
.stat-item span {
color: var(--primary-cyan);
font-weight: 600;
}
/* Search and Filter Bar */
.search-filter-bar {
max-width: 1800px;
margin: 1.5rem auto;
padding: 0 2rem;
display: flex;
gap: 1rem;
align-items: center;
}
.search-box {
flex: 1;
max-width: 500px;
display: flex;
align-items: center;
background: var(--bg-secondary);
border: 1px solid var(--border-color);
padding: 0.75rem 1rem;
transition: border-color 0.2s;
}
.search-box:focus-within {
border-color: var(--primary-cyan);
}
.search-icon {
color: var(--text-tertiary);
margin-right: 1rem;
}
#search-input {
flex: 1;
background: transparent;
border: none;
color: var(--text-primary);
font-family: inherit;
font-size: 0.9rem;
outline: none;
}
.search-box kbd {
font-size: 0.75rem;
padding: 0.2rem 0.5rem;
background: var(--bg-tertiary);
border: 1px solid var(--border-color);
color: var(--text-tertiary);
}
.category-filter {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
.filter-btn {
background: transparent;
border: 1px solid var(--border-color);
color: var(--text-secondary);
padding: 0.5rem 1rem;
font-family: inherit;
font-size: 0.875rem;
cursor: pointer;
transition: all 0.2s;
}
.filter-btn:hover {
border-color: var(--primary-cyan);
color: var(--primary-cyan);
}
.filter-btn.active {
background: var(--primary-cyan);
color: var(--bg-dark);
border-color: var(--primary-cyan);
}
/* Magazine Layout */
.magazine-layout {
max-width: 1800px;
margin: 0 auto;
padding: 0 2rem 4rem;
display: grid;
grid-template-columns: 1fr;
gap: 2rem;
}
/* Hero Featured Section */
.hero-featured {
grid-column: 1 / -1;
position: relative;
}
.hero-featured::before {
content: '';
position: absolute;
top: -20px;
left: -20px;
right: -20px;
bottom: -20px;
background: radial-gradient(ellipse at center, rgba(80, 255, 255, 0.05), transparent 70%);
pointer-events: none;
z-index: -1;
}
.featured-hero-card {
background: linear-gradient(135deg, #1a1a2e, #0f0f1e);
border: 2px solid var(--primary-cyan);
box-shadow: 0 0 30px rgba(80, 255, 255, 0.15),
inset 0 0 20px rgba(80, 255, 255, 0.05);
height: 380px;
position: relative;
overflow: hidden;
cursor: pointer;
transition: all 0.3s ease;
display: flex;
flex-direction: column;
}
.featured-hero-card:hover {
border-color: var(--accent-pink);
box-shadow: 0 0 40px rgba(243, 128, 245, 0.2),
inset 0 0 30px rgba(243, 128, 245, 0.05);
transform: translateY(-2px);
}
.hero-image {
width: 100%;
height: 240px;
background: linear-gradient(135deg, rgba(80, 255, 255, 0.1), rgba(243, 128, 245, 0.05));
background-size: cover;
background-position: center;
display: flex;
align-items: center;
justify-content: center;
font-size: 3rem;
color: var(--primary-cyan);
flex-shrink: 0;
position: relative;
filter: brightness(1.1) contrast(1.1);
}
.hero-image::after {
content: '';
position: absolute;
bottom: 0;
left: 0;
right: 0;
height: 60%;
background: linear-gradient(to top, rgba(10, 10, 20, 0.95), transparent);
}
.hero-content {
padding: 1.5rem;
}
.hero-badge {
display: inline-block;
padding: 0.3rem 0.6rem;
background: linear-gradient(135deg, var(--primary-cyan), var(--primary-teal));
color: var(--bg-dark);
font-size: 0.7rem;
text-transform: uppercase;
margin-bottom: 0.5rem;
font-weight: 600;
box-shadow: 0 2px 10px rgba(80, 255, 255, 0.3);
}
.hero-title {
font-size: 1.6rem;
color: var(--primary-cyan);
margin: 0.5rem 0;
text-shadow: 0 0 20px rgba(80, 255, 255, 0.5);
}
.hero-description {
color: var(--text-secondary);
line-height: 1.5;
}
.hero-meta {
display: flex;
gap: 1.5rem;
margin-top: 1rem;
font-size: 0.875rem;
}
.hero-meta span {
color: var(--text-tertiary);
}
.hero-meta span:first-child {
color: var(--warning);
}
/* Secondary Featured */
.secondary-featured {
grid-column: 1 / -1;
height: 380px;
display: flex;
align-items: stretch;
}
.featured-secondary-cards {
width: 100%;
display: flex;
flex-direction: column;
gap: 0.75rem;
justify-content: space-between;
}
.secondary-card {
background: linear-gradient(135deg, rgba(80, 255, 255, 0.03), rgba(243, 128, 245, 0.02));
border: 1px solid rgba(80, 255, 255, 0.3);
cursor: pointer;
transition: all 0.3s ease;
display: flex;
overflow: hidden;
height: calc((380px - 1.5rem) / 3);
flex: 1;
box-shadow: 0 2px 10px rgba(0, 0, 0, 0.3);
}
.secondary-card:hover {
border-color: var(--accent-pink);
background: linear-gradient(135deg, rgba(243, 128, 245, 0.05), rgba(80, 255, 255, 0.03));
box-shadow: 0 4px 15px rgba(243, 128, 245, 0.2);
transform: translateX(-3px);
}
.secondary-image {
width: 120px;
background: linear-gradient(135deg, var(--bg-tertiary), var(--bg-secondary));
background-size: cover;
background-position: center;
display: flex;
align-items: center;
justify-content: center;
font-size: 1.5rem;
color: var(--primary-cyan);
flex-shrink: 0;
}
.secondary-content {
flex: 1;
padding: 1rem;
display: flex;
flex-direction: column;
justify-content: space-between;
}
.secondary-title {
font-size: 1rem;
color: var(--text-primary);
margin-bottom: 0.25rem;
}
.secondary-desc {
font-size: 0.75rem;
color: var(--text-secondary);
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
overflow: hidden;
}
.secondary-meta {
font-size: 0.75rem;
color: var(--text-tertiary);
}
.secondary-meta span:last-child {
color: var(--warning);
}
/* Sponsored Section */
.sponsored-section {
grid-column: 1 / -1;
background: var(--bg-secondary);
border: 1px solid var(--warning);
padding: 1rem;
position: relative;
}
.section-label {
position: absolute;
top: -0.5rem;
left: 1rem;
background: var(--bg-secondary);
padding: 0 0.5rem;
color: var(--warning);
font-size: 0.65rem;
letter-spacing: 0.1em;
}
.sponsored-cards {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 1rem;
}
.sponsor-card {
padding: 1rem;
background: var(--bg-tertiary);
border: 1px solid var(--border-color);
}
.sponsor-card h4 {
color: var(--accent-pink);
margin-bottom: 0.5rem;
}
.sponsor-card p {
color: var(--text-secondary);
font-size: 0.85rem;
margin-bottom: 0.75rem;
}
.sponsor-card a {
color: var(--primary-cyan);
text-decoration: none;
font-size: 0.85rem;
}
.sponsor-card a:hover {
color: var(--accent-pink);
}
/* Main Content Grid */
.main-content {
grid-column: 1 / -1;
display: grid;
grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
gap: 2rem;
}
/* Column Headers */
.column-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 1rem;
border-bottom: 1px solid var(--border-color);
padding-bottom: 0.5rem;
}
.column-header h2 {
font-size: 1.1rem;
color: var(--text-primary);
}
.mini-filter {
background: var(--bg-tertiary);
border: 1px solid var(--border-color);
color: var(--text-primary);
padding: 0.25rem 0.5rem;
font-family: inherit;
font-size: 0.75rem;
}
.ascii-icon {
color: var(--primary-cyan);
}
/* Apps Column */
.apps-compact-grid {
display: flex;
flex-direction: column;
gap: 0.75rem;
}
.app-compact {
background: var(--bg-secondary);
border: 1px solid var(--border-color);
border-left: 3px solid var(--border-color);
padding: 0.75rem;
cursor: pointer;
transition: all 0.2s;
}
.app-compact:hover {
border-color: var(--primary-cyan);
border-left-color: var(--accent-pink);
transform: translateX(2px);
}
.app-compact-header {
display: flex;
justify-content: space-between;
font-size: 0.75rem;
color: var(--text-tertiary);
margin-bottom: 0.25rem;
}
.app-compact-header span:first-child {
color: var(--primary-cyan);
}
.app-compact-header span:last-child {
color: var(--warning);
}
.app-compact-title {
font-size: 0.9rem;
color: var(--text-primary);
margin-bottom: 0.25rem;
}
.app-compact-desc {
font-size: 0.75rem;
color: var(--text-secondary);
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
overflow: hidden;
}
/* Articles Column */
.articles-compact-list {
display: flex;
flex-direction: column;
gap: 1rem;
}
.article-compact {
border-left: 2px solid var(--border-color);
padding-left: 1rem;
cursor: pointer;
transition: all 0.2s;
}
.article-compact:hover {
border-left-color: var(--primary-cyan);
}
.article-meta {
font-size: 0.7rem;
color: var(--text-tertiary);
margin-bottom: 0.25rem;
}
.article-meta span:first-child {
color: var(--accent-pink);
}
.article-title {
font-size: 0.9rem;
color: var(--text-primary);
margin-bottom: 0.25rem;
}
.article-author {
font-size: 0.75rem;
color: var(--text-secondary);
}
/* Trending Column */
.trending-items {
display: flex;
flex-direction: column;
gap: 0.5rem;
}
.trending-item {
display: flex;
align-items: center;
gap: 0.75rem;
padding: 0.5rem;
background: var(--bg-secondary);
border: 1px solid var(--border-color);
cursor: pointer;
transition: all 0.2s;
}
.trending-item:hover {
border-color: var(--primary-cyan);
}
.trending-rank {
font-size: 1.2rem;
color: var(--primary-cyan);
width: 2rem;
text-align: center;
}
.trending-info {
flex: 1;
}
.trending-name {
font-size: 0.85rem;
color: var(--text-primary);
}
.trending-stats {
font-size: 0.7rem;
color: var(--text-tertiary);
}
/* Submit Box */
.submit-box {
margin-top: 1.5rem;
background: var(--bg-secondary);
border: 1px solid var(--primary-cyan);
padding: 1rem;
text-align: center;
}
.submit-box h3 {
font-size: 1rem;
color: var(--primary-cyan);
margin-bottom: 0.5rem;
}
.submit-box p {
font-size: 0.8rem;
color: var(--text-secondary);
margin-bottom: 0.75rem;
}
.submit-btn {
display: inline-block;
padding: 0.5rem 1rem;
background: transparent;
border: 1px solid var(--primary-cyan);
color: var(--primary-cyan);
text-decoration: none;
transition: all 0.2s;
}
.submit-btn:hover {
background: var(--primary-cyan);
color: var(--bg-dark);
}
/* More Apps Section */
.more-apps {
grid-column: 1 / -1;
margin-top: 2rem;
}
.section-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 1rem;
}
.more-apps-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
gap: 1rem;
}
.load-more-btn {
background: transparent;
border: 1px solid var(--border-color);
color: var(--text-secondary);
padding: 0.5rem 1.5rem;
font-family: inherit;
cursor: pointer;
transition: all 0.2s;
}
.load-more-btn:hover {
border-color: var(--primary-cyan);
color: var(--primary-cyan);
}
/* Footer */
.marketplace-footer {
background: var(--bg-secondary);
border-top: 1px solid var(--border-color);
margin-top: 4rem;
padding: 2rem 0;
}
.footer-content {
max-width: 1800px;
margin: 0 auto;
padding: 0 2rem;
display: grid;
grid-template-columns: 1fr 1fr;
gap: 2rem;
}
.footer-section h3 {
font-size: 1rem;
margin-bottom: 0.5rem;
color: var(--primary-cyan);
}
.footer-section p {
font-size: 0.875rem;
color: var(--text-secondary);
margin-bottom: 1rem;
}
.sponsor-btn {
display: inline-block;
padding: 0.5rem 1rem;
background: transparent;
border: 1px solid var(--primary-cyan);
color: var(--primary-cyan);
text-decoration: none;
transition: all 0.2s;
}
.sponsor-btn:hover {
background: var(--primary-cyan);
color: var(--bg-dark);
}
.footer-bottom {
max-width: 1800px;
margin: 2rem auto 0;
padding: 1rem 2rem 0;
border-top: 1px solid var(--border-color);
font-size: 0.75rem;
color: var(--text-tertiary);
}
/* Modal */
.modal {
position: fixed;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: rgba(0, 0, 0, 0.8);
display: flex;
align-items: center;
justify-content: center;
z-index: 1000;
}
.modal.hidden {
display: none;
}
.modal-content {
background: var(--bg-secondary);
border: 1px solid var(--primary-cyan);
max-width: 800px;
width: 90%;
max-height: 80vh;
overflow-y: auto;
position: relative;
}
.modal-close {
position: absolute;
top: 1rem;
right: 1rem;
background: transparent;
border: 1px solid var(--border-color);
color: var(--text-primary);
padding: 0.25rem 0.5rem;
cursor: pointer;
font-size: 1.2rem;
}
.modal-close:hover {
border-color: var(--error);
color: var(--error);
}
.app-detail {
padding: 2rem;
}
.app-detail h2 {
font-size: 1.5rem;
margin-bottom: 1rem;
color: var(--primary-cyan);
}
/* Loading */
.loading {
text-align: center;
padding: 2rem;
color: var(--text-tertiary);
}
.no-results {
text-align: center;
padding: 2rem;
color: var(--text-tertiary);
}
/* Responsive - Tablet */
@media (min-width: 768px) {
.magazine-layout {
grid-template-columns: repeat(2, 1fr);
}
.hero-featured {
grid-column: 1 / -1;
}
.secondary-featured {
grid-column: 1 / -1;
}
.sponsored-section {
grid-column: 1 / -1;
}
.main-content {
grid-column: 1 / -1;
grid-template-columns: repeat(2, 1fr);
}
}
/* Responsive - Desktop */
@media (min-width: 1024px) {
.magazine-layout {
grid-template-columns: repeat(3, 1fr);
}
.hero-featured {
grid-column: 1 / 3;
grid-row: 1;
}
.secondary-featured {
grid-column: 3 / 4;
grid-row: 1;
}
.featured-secondary-cards {
flex-direction: column;
}
.sponsored-section {
grid-column: 1 / -1;
}
.main-content {
grid-column: 1 / -1;
grid-template-columns: repeat(3, 1fr);
}
}
/* Responsive - Wide Desktop */
@media (min-width: 1400px) {
.magazine-layout {
grid-template-columns: repeat(4, 1fr);
}
.hero-featured {
grid-column: 1 / 3;
}
.secondary-featured {
grid-column: 3 / 5;
grid-row: 1;
}
.featured-secondary-cards {
grid-template-columns: repeat(2, 1fr);
}
.main-content {
grid-template-columns: repeat(4, 1fr);
}
.apps-column {
grid-column: span 2;
}
.more-apps-grid {
grid-template-columns: repeat(auto-fill, minmax(250px, 1fr));
}
}
/* Responsive - Ultra Wide Desktop (for coders with wide monitors) */
@media (min-width: 1800px) {
.magazine-layout {
grid-template-columns: repeat(5, 1fr);
}
.hero-featured {
grid-column: 1 / 3;
}
.secondary-featured {
grid-column: 3 / 6;
}
.featured-secondary-cards {
grid-template-columns: repeat(3, 1fr);
}
.sponsored-section {
grid-column: 1 / -1;
}
.sponsored-cards {
grid-template-columns: repeat(5, 1fr);
}
.main-content {
grid-template-columns: repeat(5, 1fr);
}
.apps-column {
grid-column: span 2;
}
.articles-column {
grid-column: span 2;
}
.more-apps-grid {
grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
}
}
/* Responsive - Mobile */
@media (max-width: 767px) {
.header-content {
flex-direction: column;
gap: 1rem;
}
.search-filter-bar {
flex-direction: column;
align-items: stretch;
}
.search-box {
max-width: none;
}
.magazine-layout {
padding: 0 1rem 2rem;
}
.footer-content {
grid-template-columns: 1fr;
}
.secondary-card {
flex-direction: column;
}
.secondary-image {
width: 100%;
height: 150px;
}
}

View File

@@ -0,0 +1,395 @@
// Marketplace JS - Magazine Layout
const API_BASE = '/marketplace/api';
const CACHE_TTL = 3600000; // 1 hour in ms
class MarketplaceCache {
constructor() {
this.prefix = 'c4ai_market_';
}
get(key) {
const item = localStorage.getItem(this.prefix + key);
if (!item) return null;
const data = JSON.parse(item);
if (Date.now() > data.expires) {
localStorage.removeItem(this.prefix + key);
return null;
}
return data.value;
}
set(key, value, ttl = CACHE_TTL) {
const data = {
value: value,
expires: Date.now() + ttl
};
localStorage.setItem(this.prefix + key, JSON.stringify(data));
}
clear() {
Object.keys(localStorage)
.filter(k => k.startsWith(this.prefix))
.forEach(k => localStorage.removeItem(k));
}
}
class MarketplaceAPI {
constructor() {
this.cache = new MarketplaceCache();
this.searchTimeout = null;
}
async fetch(endpoint, useCache = true) {
const cacheKey = endpoint.replace(/[^\w]/g, '_');
if (useCache) {
const cached = this.cache.get(cacheKey);
if (cached) return cached;
}
try {
const response = await fetch(`${API_BASE}${endpoint}`);
if (!response.ok) throw new Error(`HTTP ${response.status}`);
const data = await response.json();
this.cache.set(cacheKey, data);
return data;
} catch (error) {
console.error('API Error:', error);
return null;
}
}
async getStats() {
return this.fetch('/stats');
}
async getCategories() {
return this.fetch('/categories');
}
async getApps(params = {}) {
const query = new URLSearchParams(params).toString();
return this.fetch(`/apps${query ? '?' + query : ''}`);
}
async getArticles(params = {}) {
const query = new URLSearchParams(params).toString();
return this.fetch(`/articles${query ? '?' + query : ''}`);
}
async getSponsors() {
return this.fetch('/sponsors');
}
async search(query) {
if (query.length < 2) return {};
return this.fetch(`/search?q=${encodeURIComponent(query)}`, false);
}
}
class MarketplaceUI {
constructor() {
this.api = new MarketplaceAPI();
this.currentCategory = 'all';
this.currentType = '';
this.searchTimeout = null;
this.loadedApps = 10;
this.init();
}
async init() {
await this.loadStats();
await this.loadCategories();
await this.loadFeaturedContent();
await this.loadSponsors();
await this.loadMainContent();
this.setupEventListeners();
}
async loadStats() {
const stats = await this.api.getStats();
if (stats) {
document.getElementById('total-apps').textContent = stats.total_apps || '0';
document.getElementById('total-articles').textContent = stats.total_articles || '0';
document.getElementById('total-downloads').textContent = stats.total_downloads || '0';
document.getElementById('last-update').textContent = new Date().toLocaleDateString();
}
}
async loadCategories() {
const categories = await this.api.getCategories();
if (!categories) return;
const filter = document.getElementById('category-filter');
categories.forEach(cat => {
const btn = document.createElement('button');
btn.className = 'filter-btn';
btn.dataset.category = cat.slug;
btn.textContent = cat.name;
btn.onclick = () => this.filterByCategory(cat.slug);
filter.appendChild(btn);
});
}
async loadFeaturedContent() {
// Load hero featured
const featured = await this.api.getApps({ featured: true, limit: 4 });
if (!featured || !featured.length) return;
// Hero card (first featured)
const hero = featured[0];
const heroCard = document.getElementById('featured-hero');
if (hero) {
const imageUrl = hero.image || '';
heroCard.innerHTML = `
<div class="hero-image" ${imageUrl ? `style="background-image: url('${imageUrl}')"` : ''}>
${!imageUrl ? `[${hero.category || 'APP'}]` : ''}
</div>
<div class="hero-content">
<span class="hero-badge">${hero.type || 'PAID'}</span>
<h2 class="hero-title">${hero.name}</h2>
<p class="hero-description">${hero.description}</p>
<div class="hero-meta">
<span>★ ${hero.rating || 0}/5</span>
<span>${hero.downloads || 0} downloads</span>
</div>
</div>
`;
heroCard.onclick = () => this.showAppDetail(hero);
}
// Secondary featured cards
const secondary = document.getElementById('featured-secondary');
secondary.innerHTML = '';
if (featured.length > 1) {
featured.slice(1, 4).forEach(app => {
const card = document.createElement('div');
card.className = 'secondary-card';
const imageUrl = app.image || '';
card.innerHTML = `
<div class="secondary-image" ${imageUrl ? `style="background-image: url('${imageUrl}')"` : ''}>
${!imageUrl ? `[${app.category || 'APP'}]` : ''}
</div>
<div class="secondary-content">
<h3 class="secondary-title">${app.name}</h3>
<p class="secondary-desc">${(app.description || '').substring(0, 100)}...</p>
<div class="secondary-meta">
<span>${app.type || 'Open Source'}</span> · <span>★ ${app.rating || 0}/5</span>
</div>
</div>
`;
card.onclick = () => this.showAppDetail(app);
secondary.appendChild(card);
});
}
}
async loadSponsors() {
const sponsors = await this.api.getSponsors();
if (!sponsors || !sponsors.length) {
// Show placeholder if no sponsors
const container = document.getElementById('sponsored-content');
container.innerHTML = `
<div class="sponsor-card">
<h4>Become a Sponsor</h4>
<p>Reach thousands of developers using Crawl4AI</p>
<a href="mailto:sponsors@crawl4ai.com">Contact Us →</a>
</div>
`;
return;
}
const container = document.getElementById('sponsored-content');
container.innerHTML = sponsors.slice(0, 5).map(sponsor => `
<div class="sponsor-card">
<h4>${sponsor.company_name}</h4>
<p>${sponsor.tier} Sponsor - Premium Solutions</p>
<a href="${sponsor.landing_url}" target="_blank">Learn More →</a>
</div>
`).join('');
}
async loadMainContent() {
// Load apps column
const apps = await this.api.getApps({ limit: 8 });
if (apps && apps.length) {
const appsGrid = document.getElementById('apps-grid');
appsGrid.innerHTML = apps.map(app => `
<div class="app-compact" onclick="marketplace.showAppDetail(${JSON.stringify(app).replace(/"/g, '&quot;')})">
<div class="app-compact-header">
<span>${app.category}</span>
<span>★ ${app.rating}/5</span>
</div>
<div class="app-compact-title">${app.name}</div>
<div class="app-compact-desc">${app.description}</div>
</div>
`).join('');
}
// Load articles column
const articles = await this.api.getArticles({ limit: 6 });
if (articles && articles.length) {
const articlesList = document.getElementById('articles-list');
articlesList.innerHTML = articles.map(article => `
<div class="article-compact" onclick="marketplace.showArticle('${article.id}')">
<div class="article-meta">
<span>${article.category}</span> · <span>${new Date(article.published_at).toLocaleDateString()}</span>
</div>
<div class="article-title">${article.title}</div>
<div class="article-author">by ${article.author}</div>
</div>
`).join('');
}
// Load trending
if (apps && apps.length) {
const trending = apps.slice(0, 5);
const trendingList = document.getElementById('trending-list');
trendingList.innerHTML = trending.map((app, i) => `
<div class="trending-item" onclick="marketplace.showAppDetail(${JSON.stringify(app).replace(/"/g, '&quot;')})">
<div class="trending-rank">${i + 1}</div>
<div class="trending-info">
<div class="trending-name">${app.name}</div>
<div class="trending-stats">${app.downloads} downloads</div>
</div>
</div>
`).join('');
}
// Load more apps grid
const moreApps = await this.api.getApps({ offset: 8, limit: 12 });
if (moreApps && moreApps.length) {
const moreGrid = document.getElementById('more-apps-grid');
moreGrid.innerHTML = moreApps.map(app => `
<div class="app-compact" onclick="marketplace.showAppDetail(${JSON.stringify(app).replace(/"/g, '&quot;')})">
<div class="app-compact-header">
<span>${app.category}</span>
<span>${app.type}</span>
</div>
<div class="app-compact-title">${app.name}</div>
</div>
`).join('');
}
}
setupEventListeners() {
// Search
const searchInput = document.getElementById('search-input');
searchInput.addEventListener('input', (e) => {
clearTimeout(this.searchTimeout);
this.searchTimeout = setTimeout(() => this.search(e.target.value), 300);
});
// Keyboard shortcut
document.addEventListener('keydown', (e) => {
if (e.key === '/' && !searchInput.contains(document.activeElement)) {
e.preventDefault();
searchInput.focus();
}
if (e.key === 'Escape' && searchInput.contains(document.activeElement)) {
searchInput.blur();
searchInput.value = '';
}
});
// Type filter
const typeFilter = document.getElementById('type-filter');
typeFilter.addEventListener('change', (e) => {
this.currentType = e.target.value;
this.loadMainContent();
});
// Load more
const loadMore = document.getElementById('load-more');
loadMore.addEventListener('click', () => this.loadMoreApps());
}
async filterByCategory(category) {
// Update active state
document.querySelectorAll('.filter-btn').forEach(btn => {
btn.classList.toggle('active', btn.dataset.category === category);
});
this.currentCategory = category;
await this.loadMainContent();
}
async search(query) {
if (!query) {
await this.loadMainContent();
return;
}
const results = await this.api.search(query);
if (!results) return;
// Update apps grid with search results
if (results.apps && results.apps.length) {
const appsGrid = document.getElementById('apps-grid');
appsGrid.innerHTML = results.apps.map(app => `
<div class="app-compact" onclick="marketplace.showAppDetail(${JSON.stringify(app).replace(/"/g, '&quot;')})">
<div class="app-compact-header">
<span>${app.category}</span>
<span>★ ${app.rating}/5</span>
</div>
<div class="app-compact-title">${app.name}</div>
<div class="app-compact-desc">${app.description}</div>
</div>
`).join('');
}
// Update articles with search results
if (results.articles && results.articles.length) {
const articlesList = document.getElementById('articles-list');
articlesList.innerHTML = results.articles.map(article => `
<div class="article-compact" onclick="marketplace.showArticle('${article.id}')">
<div class="article-meta">
<span>${article.category}</span> · <span>${new Date(article.published_at).toLocaleDateString()}</span>
</div>
<div class="article-title">${article.title}</div>
<div class="article-author">by ${article.author}</div>
</div>
`).join('');
}
}
async loadMoreApps() {
this.loadedApps += 12;
const moreApps = await this.api.getApps({ offset: this.loadedApps, limit: 12 });
if (moreApps && moreApps.length) {
const moreGrid = document.getElementById('more-apps-grid');
moreApps.forEach(app => {
const card = document.createElement('div');
card.className = 'app-compact';
card.innerHTML = `
<div class="app-compact-header">
<span>${app.category}</span>
<span>${app.type}</span>
</div>
<div class="app-compact-title">${app.name}</div>
`;
card.onclick = () => this.showAppDetail(app);
moreGrid.appendChild(card);
});
}
}
showAppDetail(app) {
// Navigate to detail page instead of showing modal
const slug = app.slug || app.name.toLowerCase().replace(/\s+/g, '-');
window.location.href = `app-detail.html?app=${slug}`;
}
showArticle(articleId) {
// Could create article detail page similarly
console.log('Show article:', articleId);
}
}
// Initialize marketplace
let marketplace;
document.addEventListener('DOMContentLoaded', () => {
marketplace = new MarketplaceUI();
});
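
The landing page above is driven entirely by the public endpoints (/stats, /categories, /apps, /articles, /sponsors, /search). When debugging an empty or partially rendered page, it can help to hit those endpoints directly; the sketch below is a quick smoke check under the assumption that the FastAPI server from this commit is running locally on port 8100.

# Hypothetical smoke check for the public endpoints marketplace.js consumes.
import requests

API_BASE = "http://127.0.0.1:8100/marketplace/api"  # assumption: local dev server from the __main__ block

ENDPOINTS = [
    "/stats",
    "/categories",
    "/apps?featured=true&limit=4",
    "/articles?limit=6",
    "/sponsors",
    "/search?q=crawl",
]

for path in ENDPOINTS:
    resp = requests.get(f"{API_BASE}{path}", timeout=10)
    print(f"{path:35} -> {resp.status_code}")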

View File

@@ -0,0 +1,147 @@
<!DOCTYPE html>
<html lang="en" data-theme="dark">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Marketplace - Crawl4AI</title>
<link rel="stylesheet" href="marketplace.css">
</head>
<body>
<div class="marketplace-container">
<!-- Header -->
<header class="marketplace-header">
<div class="header-content">
<div class="header-left">
<div class="logo-title">
<img src="../assets/images/logo.png" alt="Crawl4AI" class="header-logo">
<h1>
<span class="ascii-border">[</span>
Marketplace
<span class="ascii-border">]</span>
</h1>
</div>
<p class="tagline">Tools, Integrations & Resources for Web Crawling</p>
</div>
<div class="header-stats" id="stats">
<span class="stat-item">Apps: <span id="total-apps">--</span></span>
<span class="stat-item">Articles: <span id="total-articles">--</span></span>
<span class="stat-item">Downloads: <span id="total-downloads">--</span></span>
</div>
</div>
</header>
<!-- Search and Category Bar -->
<div class="search-filter-bar">
<div class="search-box">
<span class="search-icon">></span>
<input type="text" id="search-input" placeholder="Search apps, articles, tools..." />
<kbd>/</kbd>
</div>
<div class="category-filter" id="category-filter">
<button class="filter-btn active" data-category="all">All</button>
<!-- Categories will be loaded here -->
</div>
</div>
<!-- Magazine Grid Layout -->
<main class="magazine-layout">
<!-- Hero Featured Section -->
<section class="hero-featured">
<div id="featured-hero" class="featured-hero-card">
<!-- Large featured card with big image -->
</div>
</section>
<!-- Secondary Featured -->
<section class="secondary-featured">
<div id="featured-secondary" class="featured-secondary-cards">
<!-- 2-3 medium featured cards with images -->
</div>
</section>
<!-- Sponsored Section -->
<section class="sponsored-section">
<div class="section-label">SPONSORED</div>
<div id="sponsored-content" class="sponsored-cards">
<!-- Sponsored content cards -->
</div>
</section>
<!-- Main Content Grid -->
<section class="main-content">
<!-- Apps Column -->
<div class="apps-column">
<div class="column-header">
<h2><span class="ascii-icon">></span> Latest Apps</h2>
<select id="type-filter" class="mini-filter">
<option value="">All</option>
<option value="Open Source">Open Source</option>
<option value="Paid">Paid</option>
</select>
</div>
<div id="apps-grid" class="apps-compact-grid">
<!-- Compact app cards -->
</div>
</div>
<!-- Articles Column -->
<div class="articles-column">
<div class="column-header">
<h2><span class="ascii-icon">></span> Latest Articles</h2>
</div>
<div id="articles-list" class="articles-compact-list">
<!-- Article items -->
</div>
</div>
<!-- Trending/Tools Column -->
<div class="trending-column">
<div class="column-header">
<h2><span class="ascii-icon">#</span> Trending</h2>
</div>
<div id="trending-list" class="trending-items">
<!-- Trending items -->
</div>
<div class="submit-box">
<h3><span class="ascii-icon">+</span> Submit Your Tool</h3>
<p>Share your integration</p>
<a href="mailto:marketplace@crawl4ai.com" class="submit-btn">Submit →</a>
</div>
</div>
</section>
<!-- More Apps Grid -->
<section class="more-apps">
<div class="section-header">
<h2><span class="ascii-icon">></span> More Apps</h2>
<button id="load-more" class="load-more-btn">Load More ↓</button>
</div>
<div id="more-apps-grid" class="more-apps-grid">
<!-- Additional app cards -->
</div>
</section>
</main>
<!-- Footer -->
<footer class="marketplace-footer">
<div class="footer-content">
<div class="footer-section">
<h3>About Marketplace</h3>
<p>Discover tools and integrations built by the Crawl4AI community.</p>
</div>
<div class="footer-section">
<h3>Become a Sponsor</h3>
<p>Reach developers building with Crawl4AI</p>
<a href="mailto:sponsors@crawl4ai.com" class="sponsor-btn">Learn More →</a>
</div>
</div>
<div class="footer-bottom">
<p>[ Crawl4AI Marketplace · Updated <span id="last-update">--</span> ]</p>
</div>
</footer>
</div>
<script src="marketplace.js"></script>
</body>
</html>

View File

@@ -0,0 +1,994 @@
/* Marketplace CSS - Magazine Style Terminal Theme */
@import url('../../assets/styles.css');
:root {
--primary-cyan: #50ffff;
--primary-teal: #09b5a5;
--accent-pink: #f380f5;
--bg-dark: #070708;
--bg-secondary: #1a1a1a;
--bg-tertiary: #3f3f44;
--text-primary: #e8e9ed;
--text-secondary: #d5cec0;
--text-tertiary: #a3abba;
--border-color: #3f3f44;
--success: #50ff50;
--error: #ff3c74;
--warning: #f59e0b;
}
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: 'Dank Mono', Monaco, monospace;
background: var(--bg-dark);
color: var(--text-primary);
line-height: 1.6;
}
/* Global link styles */
a {
color: var(--primary-cyan);
text-decoration: none;
transition: color 0.2s;
}
a:hover {
color: var(--accent-pink);
}
.marketplace-container {
min-height: 100vh;
}
/* Header */
.marketplace-header {
background: var(--bg-secondary);
border-bottom: 1px solid var(--border-color);
padding: 1.5rem 0;
}
.header-content {
max-width: 1800px;
margin: 0 auto;
padding: 0 2rem;
display: flex;
justify-content: space-between;
align-items: center;
}
.logo-title {
display: flex;
align-items: center;
gap: 1rem;
}
.header-logo {
height: 40px;
width: auto;
filter: brightness(1.2);
}
.marketplace-header h1 {
font-size: 1.5rem;
color: var(--primary-cyan);
margin: 0;
}
.ascii-border {
color: var(--border-color);
}
.tagline {
font-size: 0.875rem;
color: var(--text-tertiary);
margin-top: 0.25rem;
}
.header-stats {
display: flex;
gap: 2rem;
}
.stat-item {
font-size: 0.875rem;
color: var(--text-secondary);
}
.stat-item span {
color: var(--primary-cyan);
font-weight: 600;
}
/* Search and Filter Bar */
.search-filter-bar {
max-width: 1800px;
margin: 1.5rem auto;
padding: 0 2rem;
display: flex;
gap: 1rem;
align-items: center;
}
.search-box {
flex: 1;
max-width: 500px;
display: flex;
align-items: center;
background: var(--bg-secondary);
border: 1px solid var(--border-color);
padding: 0.75rem 1rem;
transition: border-color 0.2s;
}
.search-box:focus-within {
border-color: var(--primary-cyan);
}
.search-icon {
color: var(--text-tertiary);
margin-right: 1rem;
}
#search-input {
flex: 1;
background: transparent;
border: none;
color: var(--text-primary);
font-family: inherit;
font-size: 0.9rem;
outline: none;
}
.search-box kbd {
font-size: 0.75rem;
padding: 0.2rem 0.5rem;
background: var(--bg-tertiary);
border: 1px solid var(--border-color);
color: var(--text-tertiary);
}
.category-filter {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
.filter-btn {
background: transparent;
border: 1px solid var(--border-color);
color: var(--text-secondary);
padding: 0.5rem 1rem;
font-family: inherit;
font-size: 0.875rem;
cursor: pointer;
transition: all 0.2s;
}
.filter-btn:hover {
border-color: var(--primary-cyan);
color: var(--primary-cyan);
}
.filter-btn.active {
background: var(--primary-cyan);
color: var(--bg-dark);
border-color: var(--primary-cyan);
}
/* Magazine Layout */
.magazine-layout {
max-width: 1800px;
margin: 0 auto;
padding: 0 2rem 4rem;
display: grid;
grid-template-columns: 1fr;
gap: 2rem;
}
/* Hero Featured Section */
.hero-featured {
grid-column: 1 / -1;
position: relative;
}
.hero-featured::before {
content: '';
position: absolute;
top: -20px;
left: -20px;
right: -20px;
bottom: -20px;
background: radial-gradient(ellipse at center, rgba(80, 255, 255, 0.05), transparent 70%);
pointer-events: none;
z-index: -1;
}
.featured-hero-card {
background: linear-gradient(135deg, #1a1a2e, #0f0f1e);
border: 2px solid var(--primary-cyan);
box-shadow: 0 0 30px rgba(80, 255, 255, 0.15),
inset 0 0 20px rgba(80, 255, 255, 0.05);
height: 380px;
position: relative;
overflow: hidden;
cursor: pointer;
transition: all 0.3s ease;
display: flex;
flex-direction: column;
}
.featured-hero-card:hover {
border-color: var(--accent-pink);
box-shadow: 0 0 40px rgba(243, 128, 245, 0.2),
inset 0 0 30px rgba(243, 128, 245, 0.05);
transform: translateY(-2px);
}
.hero-image {
width: 100%;
height: 200px;
min-height: 200px;
max-height: 200px;
background: linear-gradient(135deg, rgba(80, 255, 255, 0.1), rgba(243, 128, 245, 0.05));
background-size: cover;
background-position: center;
display: flex;
align-items: center;
justify-content: center;
font-size: 3rem;
color: var(--primary-cyan);
flex-shrink: 0;
position: relative;
filter: brightness(1.1) contrast(1.1);
overflow: hidden;
}
.hero-image img {
width: 100%;
height: 100%;
object-fit: cover;
object-position: center;
}
.hero-image::after {
content: '';
position: absolute;
bottom: 0;
left: 0;
right: 0;
height: 60%;
background: linear-gradient(to top, rgba(10, 10, 20, 0.95), transparent);
}
.hero-content {
padding: 1.5rem;
flex: 1;
display: flex;
flex-direction: column;
justify-content: space-between;
}
.hero-badge {
display: inline-block;
padding: 0.3rem 0.6rem;
background: linear-gradient(135deg, var(--primary-cyan), var(--primary-teal));
color: var(--bg-dark);
font-size: 0.7rem;
text-transform: uppercase;
margin-bottom: 0.5rem;
font-weight: 600;
box-shadow: 0 2px 10px rgba(80, 255, 255, 0.3);
}
.hero-title {
font-size: 1.6rem;
color: var(--primary-cyan);
margin: 0.5rem 0;
text-shadow: 0 0 20px rgba(80, 255, 255, 0.5);
}
.hero-description {
color: var(--text-secondary);
line-height: 1.5;
}
.hero-meta {
display: flex;
gap: 1.5rem;
margin-top: 1rem;
font-size: 0.875rem;
}
.hero-meta span {
color: var(--text-tertiary);
}
.hero-meta span:first-child {
color: var(--warning);
}
/* Secondary Featured */
.secondary-featured {
grid-column: 1 / -1;
min-height: 380px;
display: flex;
align-items: flex-start;
}
.featured-secondary-cards {
width: 100%;
display: flex;
flex-direction: column;
gap: 0.75rem;
align-items: stretch;
}
.secondary-card {
background: linear-gradient(135deg, rgba(80, 255, 255, 0.03), rgba(243, 128, 245, 0.02));
border: 1px solid rgba(80, 255, 255, 0.3);
cursor: pointer;
transition: all 0.3s ease;
display: flex;
overflow: hidden;
height: 118px;
min-height: 118px;
max-height: 118px;
flex-shrink: 0;
box-shadow: 0 2px 10px rgba(0, 0, 0, 0.3);
}
.secondary-card:hover {
border-color: var(--accent-pink);
background: linear-gradient(135deg, rgba(243, 128, 245, 0.05), rgba(80, 255, 255, 0.03));
box-shadow: 0 4px 15px rgba(243, 128, 245, 0.2);
transform: translateX(-3px);
}
.secondary-image {
width: 120px;
background: linear-gradient(135deg, var(--bg-tertiary), var(--bg-secondary));
background-size: cover;
background-position: center;
display: flex;
align-items: center;
justify-content: center;
font-size: 1.5rem;
color: var(--primary-cyan);
flex-shrink: 0;
}
.secondary-content {
flex: 1;
padding: 1rem;
display: flex;
flex-direction: column;
justify-content: space-between;
}
.secondary-title {
font-size: 1rem;
color: var(--text-primary);
margin-bottom: 0.25rem;
}
.secondary-desc {
font-size: 0.75rem;
color: var(--text-secondary);
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
overflow: hidden;
}
.secondary-meta {
font-size: 0.75rem;
color: var(--text-tertiary);
}
.secondary-meta span:last-child {
color: var(--warning);
}
/* Sponsored Section */
.sponsored-section {
grid-column: 1 / -1;
background: var(--bg-secondary);
border: 1px solid var(--warning);
padding: 1rem;
position: relative;
}
.section-label {
position: absolute;
top: -0.5rem;
left: 1rem;
background: var(--bg-secondary);
padding: 0 0.5rem;
color: var(--warning);
font-size: 0.65rem;
letter-spacing: 0.1em;
}
.sponsored-cards {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 1rem;
}
.sponsor-card {
padding: 1rem;
background: var(--bg-tertiary);
border: 1px solid var(--border-color);
}
.sponsor-logo {
display: flex;
align-items: center;
justify-content: center;
height: 60px;
margin-bottom: 0.75rem;
}
.sponsor-logo img {
max-height: 60px;
max-width: 100%;
width: auto;
object-fit: contain;
}
.sponsor-card h4 {
color: var(--accent-pink);
margin-bottom: 0.5rem;
}
.sponsor-card p {
color: var(--text-secondary);
font-size: 0.85rem;
margin-bottom: 0.75rem;
}
.sponsor-card a {
color: var(--primary-cyan);
text-decoration: none;
font-size: 0.85rem;
}
.sponsor-card a:hover {
color: var(--accent-pink);
}
/* Main Content Grid */
.main-content {
grid-column: 1 / -1;
display: grid;
grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
gap: 2rem;
}
/* Column Headers */
.column-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 1rem;
border-bottom: 1px solid var(--border-color);
padding-bottom: 0.5rem;
}
.column-header h2 {
font-size: 1.1rem;
color: var(--text-primary);
}
.mini-filter {
background: var(--bg-tertiary);
border: 1px solid var(--border-color);
color: var(--text-primary);
padding: 0.25rem 0.5rem;
font-family: inherit;
font-size: 0.75rem;
}
.ascii-icon {
color: var(--primary-cyan);
}
/* Apps Column */
.apps-compact-grid {
display: flex;
flex-direction: column;
gap: 0.75rem;
}
.app-compact {
background: var(--bg-secondary);
border: 1px solid var(--border-color);
border-left: 3px solid var(--border-color);
padding: 0.75rem;
cursor: pointer;
transition: all 0.2s;
}
.app-compact:hover {
border-color: var(--primary-cyan);
border-left-color: var(--accent-pink);
transform: translateX(2px);
}
.app-compact-header {
display: flex;
justify-content: space-between;
font-size: 0.75rem;
color: var(--text-tertiary);
margin-bottom: 0.25rem;
}
.app-compact-header span:first-child {
color: var(--primary-cyan);
}
.app-compact-header span:last-child {
color: var(--warning);
}
.app-compact-title {
font-size: 0.9rem;
color: var(--text-primary);
margin-bottom: 0.25rem;
}
.app-compact-desc {
font-size: 0.75rem;
color: var(--text-secondary);
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
overflow: hidden;
}
/* Articles Column */
.articles-compact-list {
display: flex;
flex-direction: column;
gap: 1rem;
}
.article-compact {
border-left: 2px solid var(--border-color);
padding-left: 1rem;
cursor: pointer;
transition: all 0.2s;
}
.article-compact:hover {
border-left-color: var(--primary-cyan);
}
.article-meta {
font-size: 0.7rem;
color: var(--text-tertiary);
margin-bottom: 0.25rem;
}
.article-meta span:first-child {
color: var(--accent-pink);
}
.article-title {
font-size: 0.9rem;
color: var(--text-primary);
margin-bottom: 0.25rem;
}
.article-author {
font-size: 0.75rem;
color: var(--text-secondary);
}
/* Trending Column */
.trending-items {
display: flex;
flex-direction: column;
gap: 0.5rem;
}
.trending-item {
display: flex;
align-items: center;
gap: 0.75rem;
padding: 0.5rem;
background: var(--bg-secondary);
border: 1px solid var(--border-color);
cursor: pointer;
transition: all 0.2s;
}
.trending-item:hover {
border-color: var(--primary-cyan);
}
.trending-rank {
font-size: 1.2rem;
color: var(--primary-cyan);
width: 2rem;
text-align: center;
}
.trending-info {
flex: 1;
}
.trending-name {
font-size: 0.85rem;
color: var(--text-primary);
}
.trending-stats {
font-size: 0.7rem;
color: var(--text-tertiary);
}
/* Submit Box */
.submit-box {
margin-top: 1.5rem;
background: var(--bg-secondary);
border: 1px solid var(--primary-cyan);
padding: 1rem;
text-align: center;
}
.submit-box h3 {
font-size: 1rem;
color: var(--primary-cyan);
margin-bottom: 0.5rem;
}
.submit-box p {
font-size: 0.8rem;
color: var(--text-secondary);
margin-bottom: 0.75rem;
}
.submit-btn {
display: inline-block;
padding: 0.5rem 1rem;
background: transparent;
border: 1px solid var(--primary-cyan);
color: var(--primary-cyan);
text-decoration: none;
transition: all 0.2s;
}
.submit-btn:hover {
background: var(--primary-cyan);
color: var(--bg-dark);
}
/* More Apps Section */
.more-apps {
grid-column: 1 / -1;
margin-top: 2rem;
}
.section-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 1rem;
}
.more-apps-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
gap: 1rem;
}
.load-more-btn {
background: transparent;
border: 1px solid var(--border-color);
color: var(--text-secondary);
padding: 0.5rem 1.5rem;
font-family: inherit;
cursor: pointer;
transition: all 0.2s;
}
.load-more-btn:hover {
border-color: var(--primary-cyan);
color: var(--primary-cyan);
}
/* Footer */
.marketplace-footer {
background: var(--bg-secondary);
border-top: 1px solid var(--border-color);
margin-top: 4rem;
padding: 2rem 0;
}
.footer-content {
max-width: 1800px;
margin: 0 auto;
padding: 0 2rem;
display: grid;
grid-template-columns: 1fr 1fr;
gap: 2rem;
}
.footer-section h3 {
font-size: 1rem;
margin-bottom: 0.5rem;
color: var(--primary-cyan);
}
.footer-section p {
font-size: 0.875rem;
color: var(--text-secondary);
margin-bottom: 1rem;
}
.sponsor-btn {
display: inline-block;
padding: 0.5rem 1rem;
background: transparent;
border: 1px solid var(--primary-cyan);
color: var(--primary-cyan);
text-decoration: none;
transition: all 0.2s;
}
.sponsor-btn:hover {
background: var(--primary-cyan);
color: var(--bg-dark);
}
.footer-bottom {
max-width: 1800px;
margin: 2rem auto 0;
padding: 1rem 2rem 0;
border-top: 1px solid var(--border-color);
font-size: 0.75rem;
color: var(--text-tertiary);
}
/* Modal */
.modal {
position: fixed;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: rgba(0, 0, 0, 0.8);
display: flex;
align-items: center;
justify-content: center;
z-index: 1000;
}
.modal.hidden {
display: none;
}
.modal-content {
background: var(--bg-secondary);
border: 1px solid var(--primary-cyan);
max-width: 800px;
width: 90%;
max-height: 80vh;
overflow-y: auto;
position: relative;
}
.modal-close {
position: absolute;
top: 1rem;
right: 1rem;
background: transparent;
border: 1px solid var(--border-color);
color: var(--text-primary);
padding: 0.25rem 0.5rem;
cursor: pointer;
font-size: 1.2rem;
}
.modal-close:hover {
border-color: var(--error);
color: var(--error);
}
.app-detail {
padding: 2rem;
}
.app-detail h2 {
font-size: 1.5rem;
margin-bottom: 1rem;
color: var(--primary-cyan);
}
/* Loading */
.loading {
text-align: center;
padding: 2rem;
color: var(--text-tertiary);
}
.no-results {
text-align: center;
padding: 2rem;
color: var(--text-tertiary);
}
/* Responsive - Tablet */
@media (min-width: 768px) {
.magazine-layout {
grid-template-columns: repeat(2, 1fr);
}
.hero-featured {
grid-column: 1 / -1;
}
.secondary-featured {
grid-column: 1 / -1;
}
.sponsored-section {
grid-column: 1 / -1;
}
.main-content {
grid-column: 1 / -1;
grid-template-columns: repeat(2, 1fr);
}
}
/* Responsive - Desktop */
@media (min-width: 1024px) {
.magazine-layout {
grid-template-columns: repeat(3, 1fr);
}
.hero-featured {
grid-column: 1 / 3;
grid-row: 1;
}
.secondary-featured {
grid-column: 3 / 4;
grid-row: 1;
}
.featured-secondary-cards {
flex-direction: column;
}
.sponsored-section {
grid-column: 1 / -1;
}
.main-content {
grid-column: 1 / -1;
grid-template-columns: repeat(3, 1fr);
}
}
/* Responsive - Wide Desktop */
@media (min-width: 1400px) {
.magazine-layout {
grid-template-columns: repeat(4, 1fr);
}
.hero-featured {
grid-column: 1 / 3;
}
.secondary-featured {
grid-column: 3 / 5;
grid-row: 1;
min-height: auto;
}
.featured-secondary-cards {
display: grid;
grid-template-columns: repeat(2, 1fr);
flex-direction: unset;
}
.main-content {
grid-template-columns: repeat(4, 1fr);
}
.apps-column {
grid-column: span 2;
}
.more-apps-grid {
grid-template-columns: repeat(auto-fill, minmax(250px, 1fr));
}
}
/* Responsive - Ultra Wide Desktop (for coders with wide monitors) */
@media (min-width: 1800px) {
.magazine-layout {
grid-template-columns: repeat(5, 1fr);
}
.hero-featured {
grid-column: 1 / 3;
}
.secondary-featured {
grid-column: 3 / 6;
min-height: auto;
}
.featured-secondary-cards {
display: grid;
grid-template-columns: repeat(3, 1fr);
flex-direction: unset;
}
.sponsored-section {
grid-column: 1 / -1;
}
.sponsored-cards {
grid-template-columns: repeat(5, 1fr);
}
.main-content {
grid-template-columns: repeat(5, 1fr);
}
.apps-column {
grid-column: span 2;
}
.articles-column {
grid-column: span 2;
}
.more-apps-grid {
grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
}
}
/* Responsive - Mobile */
@media (max-width: 767px) {
.header-content {
flex-direction: column;
gap: 1rem;
}
.search-filter-bar {
flex-direction: column;
align-items: stretch;
}
.search-box {
max-width: none;
}
.magazine-layout {
padding: 0 1rem 2rem;
}
.footer-content {
grid-template-columns: 1fr;
}
.secondary-card {
flex-direction: column;
}
.secondary-image {
width: 100%;
height: 150px;
}
}

View File

@@ -0,0 +1,412 @@
// Marketplace JS - Magazine Layout
const { API_BASE, API_ORIGIN } = (() => {
const { hostname, port } = window.location;
if ((hostname === 'localhost' || hostname === '127.0.0.1') && port === '8000') {
const origin = 'http://127.0.0.1:8100';
return { API_BASE: `${origin}/marketplace/api`, API_ORIGIN: origin };
}
return { API_BASE: '/marketplace/api', API_ORIGIN: '' };
})();
const resolveAssetUrl = (path) => {
if (!path) return '';
if (/^https?:\/\//i.test(path)) return path;
if (path.startsWith('/') && API_ORIGIN) {
return `${API_ORIGIN}${path}`;
}
return path;
};
const CACHE_TTL = 3600000; // 1 hour in ms
class MarketplaceCache {
constructor() {
this.prefix = 'c4ai_market_';
}
get(key) {
const item = localStorage.getItem(this.prefix + key);
if (!item) return null;
const data = JSON.parse(item);
if (Date.now() > data.expires) {
localStorage.removeItem(this.prefix + key);
return null;
}
return data.value;
}
set(key, value, ttl = CACHE_TTL) {
const data = {
value: value,
expires: Date.now() + ttl
};
localStorage.setItem(this.prefix + key, JSON.stringify(data));
}
clear() {
Object.keys(localStorage)
.filter(k => k.startsWith(this.prefix))
.forEach(k => localStorage.removeItem(k));
}
}
class MarketplaceAPI {
constructor() {
this.cache = new MarketplaceCache();
this.searchTimeout = null;
}
async fetch(endpoint, useCache = true) {
const cacheKey = endpoint.replace(/[^\w]/g, '_');
if (useCache) {
const cached = this.cache.get(cacheKey);
if (cached) return cached;
}
try {
const response = await fetch(`${API_BASE}${endpoint}`);
if (!response.ok) throw new Error(`HTTP ${response.status}`);
const data = await response.json();
this.cache.set(cacheKey, data);
return data;
} catch (error) {
console.error('API Error:', error);
return null;
}
}
async getStats() {
return this.fetch('/stats');
}
async getCategories() {
return this.fetch('/categories');
}
async getApps(params = {}) {
const query = new URLSearchParams(params).toString();
return this.fetch(`/apps${query ? '?' + query : ''}`);
}
async getArticles(params = {}) {
const query = new URLSearchParams(params).toString();
return this.fetch(`/articles${query ? '?' + query : ''}`);
}
async getSponsors() {
return this.fetch('/sponsors');
}
async search(query) {
if (query.length < 2) return {};
return this.fetch(`/search?q=${encodeURIComponent(query)}`, false);
}
}
class MarketplaceUI {
constructor() {
this.api = new MarketplaceAPI();
this.currentCategory = 'all';
this.currentType = '';
this.searchTimeout = null;
this.loadedApps = 8; // apps column loads 8 items and the more-apps grid continues at offset 8 (12 items), so loadMoreApps' += 12 yields the correct next offset of 20
this.init();
}
async init() {
await this.loadStats();
await this.loadCategories();
await this.loadFeaturedContent();
await this.loadSponsors();
await this.loadMainContent();
this.setupEventListeners();
}
async loadStats() {
const stats = await this.api.getStats();
if (stats) {
document.getElementById('total-apps').textContent = stats.total_apps || '0';
document.getElementById('total-articles').textContent = stats.total_articles || '0';
document.getElementById('total-downloads').textContent = stats.total_downloads || '0';
document.getElementById('last-update').textContent = new Date().toLocaleDateString();
}
}
async loadCategories() {
const categories = await this.api.getCategories();
if (!categories) return;
const filter = document.getElementById('category-filter');
categories.forEach(cat => {
const btn = document.createElement('button');
btn.className = 'filter-btn';
btn.dataset.category = cat.slug;
btn.textContent = cat.name;
btn.onclick = () => this.filterByCategory(cat.slug);
filter.appendChild(btn);
});
}
async loadFeaturedContent() {
// Load hero featured
const featured = await this.api.getApps({ featured: true, limit: 4 });
if (!featured || !featured.length) return;
// Hero card (first featured)
const hero = featured[0];
const heroCard = document.getElementById('featured-hero');
if (hero) {
const imageUrl = hero.image || '';
heroCard.innerHTML = `
<div class="hero-image" ${imageUrl ? `style="background-image: url('${imageUrl}')"` : ''}>
${!imageUrl ? `[${hero.category || 'APP'}]` : ''}
</div>
<div class="hero-content">
<span class="hero-badge">${hero.type || 'PAID'}</span>
<h2 class="hero-title">${hero.name}</h2>
<p class="hero-description">${hero.description}</p>
<div class="hero-meta">
<span>★ ${hero.rating || 0}/5</span>
<span>${hero.downloads || 0} downloads</span>
</div>
</div>
`;
heroCard.onclick = () => this.showAppDetail(hero);
}
// Secondary featured cards
const secondary = document.getElementById('featured-secondary');
secondary.innerHTML = '';
if (featured.length > 1) {
featured.slice(1, 4).forEach(app => {
const card = document.createElement('div');
card.className = 'secondary-card';
const imageUrl = app.image || '';
card.innerHTML = `
<div class="secondary-image" ${imageUrl ? `style="background-image: url('${imageUrl}')"` : ''}>
${!imageUrl ? `[${app.category || 'APP'}]` : ''}
</div>
<div class="secondary-content">
<h3 class="secondary-title">${app.name}</h3>
<p class="secondary-desc">${(app.description || '').substring(0, 100)}...</p>
<div class="secondary-meta">
<span>${app.type || 'Open Source'}</span> · <span>★ ${app.rating || 0}/5</span>
</div>
</div>
`;
card.onclick = () => this.showAppDetail(app);
secondary.appendChild(card);
});
}
}
async loadSponsors() {
const sponsors = await this.api.getSponsors();
if (!sponsors || !sponsors.length) {
// Show placeholder if no sponsors
const container = document.getElementById('sponsored-content');
container.innerHTML = `
<div class="sponsor-card">
<h4>Become a Sponsor</h4>
<p>Reach thousands of developers using Crawl4AI</p>
<a href="mailto:sponsors@crawl4ai.com">Contact Us →</a>
</div>
`;
return;
}
const container = document.getElementById('sponsored-content');
container.innerHTML = sponsors.slice(0, 5).map(sponsor => `
<div class="sponsor-card">
${sponsor.logo_url ? `<div class="sponsor-logo"><img src="${resolveAssetUrl(sponsor.logo_url)}" alt="${sponsor.company_name} logo"></div>` : ''}
<h4>${sponsor.company_name}</h4>
<p>${sponsor.tier} Sponsor - Premium Solutions</p>
<a href="${sponsor.landing_url}" target="_blank">Learn More →</a>
</div>
`).join('');
}
async loadMainContent() {
// Load apps column
const apps = await this.api.getApps({ limit: 8 });
if (apps && apps.length) {
const appsGrid = document.getElementById('apps-grid');
appsGrid.innerHTML = apps.map(app => `
<div class="app-compact" onclick="marketplace.showAppDetail(${JSON.stringify(app).replace(/"/g, '&quot;')})">
<div class="app-compact-header">
<span>${app.category}</span>
<span>★ ${app.rating}/5</span>
</div>
<div class="app-compact-title">${app.name}</div>
<div class="app-compact-desc">${app.description}</div>
</div>
`).join('');
}
// Load articles column
const articles = await this.api.getArticles({ limit: 6 });
if (articles && articles.length) {
const articlesList = document.getElementById('articles-list');
articlesList.innerHTML = articles.map(article => `
<div class="article-compact" onclick="marketplace.showArticle('${article.id}')">
<div class="article-meta">
<span>${article.category}</span> · <span>${new Date(article.published_at).toLocaleDateString()}</span>
</div>
<div class="article-title">${article.title}</div>
<div class="article-author">by ${article.author}</div>
</div>
`).join('');
}
// Load trending
if (apps && apps.length) {
const trending = apps.slice(0, 5);
const trendingList = document.getElementById('trending-list');
trendingList.innerHTML = trending.map((app, i) => `
<div class="trending-item" onclick="marketplace.showAppDetail(${JSON.stringify(app).replace(/"/g, '&quot;')})">
<div class="trending-rank">${i + 1}</div>
<div class="trending-info">
<div class="trending-name">${app.name}</div>
<div class="trending-stats">${app.downloads} downloads</div>
</div>
</div>
`).join('');
}
// Load more apps grid
const moreApps = await this.api.getApps({ offset: 8, limit: 12 });
if (moreApps && moreApps.length) {
const moreGrid = document.getElementById('more-apps-grid');
moreGrid.innerHTML = moreApps.map(app => `
<div class="app-compact" onclick="marketplace.showAppDetail(${JSON.stringify(app).replace(/"/g, '&quot;')})">
<div class="app-compact-header">
<span>${app.category}</span>
<span>${app.type}</span>
</div>
<div class="app-compact-title">${app.name}</div>
</div>
`).join('');
}
}
setupEventListeners() {
// Search
const searchInput = document.getElementById('search-input');
searchInput.addEventListener('input', (e) => {
clearTimeout(this.searchTimeout);
this.searchTimeout = setTimeout(() => this.search(e.target.value), 300);
});
// Keyboard shortcut
document.addEventListener('keydown', (e) => {
if (e.key === '/' && !searchInput.contains(document.activeElement)) {
e.preventDefault();
searchInput.focus();
}
if (e.key === 'Escape' && searchInput.contains(document.activeElement)) {
searchInput.blur();
searchInput.value = '';
}
});
// Type filter
const typeFilter = document.getElementById('type-filter');
typeFilter.addEventListener('change', (e) => {
this.currentType = e.target.value;
this.loadMainContent();
});
// Load more
const loadMore = document.getElementById('load-more');
loadMore.addEventListener('click', () => this.loadMoreApps());
}
async filterByCategory(category) {
// Update active state
document.querySelectorAll('.filter-btn').forEach(btn => {
btn.classList.toggle('active', btn.dataset.category === category);
});
this.currentCategory = category;
await this.loadMainContent();
}
async search(query) {
if (!query) {
await this.loadMainContent();
return;
}
const results = await this.api.search(query);
if (!results) return;
// Update apps grid with search results
if (results.apps && results.apps.length) {
const appsGrid = document.getElementById('apps-grid');
appsGrid.innerHTML = results.apps.map(app => `
<div class="app-compact" onclick="marketplace.showAppDetail(${JSON.stringify(app).replace(/"/g, '&quot;')})">
<div class="app-compact-header">
<span>${app.category}</span>
<span>★ ${app.rating}/5</span>
</div>
<div class="app-compact-title">${app.name}</div>
<div class="app-compact-desc">${app.description}</div>
</div>
`).join('');
}
// Update articles with search results
if (results.articles && results.articles.length) {
const articlesList = document.getElementById('articles-list');
articlesList.innerHTML = results.articles.map(article => `
<div class="article-compact" onclick="marketplace.showArticle('${article.id}')">
<div class="article-meta">
<span>${article.category}</span> · <span>${new Date(article.published_at).toLocaleDateString()}</span>
</div>
<div class="article-title">${article.title}</div>
<div class="article-author">by ${article.author}</div>
</div>
`).join('');
}
}
async loadMoreApps() {
this.loadedApps += 12;
const moreApps = await this.api.getApps({ offset: this.loadedApps, limit: 12 });
if (moreApps && moreApps.length) {
const moreGrid = document.getElementById('more-apps-grid');
moreApps.forEach(app => {
const card = document.createElement('div');
card.className = 'app-compact';
card.innerHTML = `
<div class="app-compact-header">
<span>${app.category}</span>
<span>${app.type}</span>
</div>
<div class="app-compact-title">${app.name}</div>
`;
card.onclick = () => this.showAppDetail(app);
moreGrid.appendChild(card);
});
}
}
showAppDetail(app) {
// Navigate to detail page instead of showing modal
const slug = app.slug || app.name.toLowerCase().replace(/\s+/g, '-');
window.location.href = `app-detail.html?app=${slug}`;
}
showArticle(articleId) {
// Could create article detail page similarly
console.log('Show article:', articleId);
}
}
// Initialize marketplace
let marketplace;
document.addEventListener('DOMContentLoaded', () => {
marketplace = new MarketplaceUI();
});

View File

@@ -1,5 +1,4 @@
site_name: Crawl4AI Documentation (v0.7.x)
site_favicon: docs/md_v2/favicon.ico
site_description: 🚀🤖 Crawl4AI, Open-source LLM-Friendly Web Crawler & Scraper
site_url: https://docs.crawl4ai.com
repo_url: https://github.com/unclecode/crawl4ai
@@ -15,9 +14,11 @@ nav:
- "Demo Apps": "apps/index.md"
- "C4A-Script Editor": "apps/c4a-script/index.html"
- "LLM Context Builder": "apps/llmtxt/index.html"
- "Marketplace": "marketplace/index.html"
- "Marketplace Admin": "marketplace/admin/index.html"
- Setup & Installation:
- "Installation": "core/installation.md"
- "Docker Deployment": "core/docker-deployment.md"
- "Self-Hosting Guide": "core/self-hosting.md"
- "Blog & Changelog":
- "Blog Home": "blog/index.md"
- "Changelog": "https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md"
@@ -66,10 +67,12 @@ nav:
- "CrawlResult": "api/crawl-result.md"
- "Strategies": "api/strategies.md"
- "C4A-Script Reference": "api/c4a-script-reference.md"
- "Brand Book": "branding/index.md"
theme:
name: 'terminal'
palette: 'dark'
favicon: favicon.ico
custom_dir: docs/md_v2/overrides
color_mode: 'dark'
icon:
@@ -98,6 +101,7 @@ extra_css:
- assets/highlight.css
- assets/dmvendor.css
- assets/feedback-overrides.css
- assets/page_actions.css
extra_javascript:
- https://www.googletagmanager.com/gtag/js?id=G-58W0K2ZQ25
@@ -106,8 +110,9 @@ extra_javascript:
- assets/highlight_init.js
- https://buttons.github.io/buttons.js
- assets/toc.js
- assets/github_stats.js
- assets/github_stats.js
- assets/selection_ask_ai.js
- assets/copy_code.js
- assets/floating_ask_ai_button.js
- assets/mobile_menu.js
- assets/mobile_menu.js
- assets/page_actions.js?v=20251006

View File

@@ -0,0 +1,154 @@
import asyncio
import os
from crawl4ai import AsyncWebCrawler, AdaptiveCrawler, AdaptiveConfig, LLMConfig
async def test_configuration(name: str, config: AdaptiveConfig, url: str, query: str):
"""Test a specific configuration"""
print(f"\n{'='*60}")
print(f"Configuration: {name}")
print(f"{'='*60}")
async with AsyncWebCrawler(verbose=False) as crawler:
adaptive = AdaptiveCrawler(crawler, config)
result = await adaptive.digest(start_url=url, query=query)
print("\n" + "="*50)
print("CRAWL STATISTICS")
print("="*50)
adaptive.print_stats(detailed=False)
# Get the most relevant content found
print("\n" + "="*50)
print("MOST RELEVANT PAGES")
print("="*50)
relevant_pages = adaptive.get_relevant_content(top_k=5)
for i, page in enumerate(relevant_pages, 1):
print(f"\n{i}. {page['url']}")
print(f" Relevance Score: {page['score']:.2%}")
# Show a snippet of the content
content = page['content'] or ""
if content:
snippet = content[:200].replace('\n', ' ')
if len(content) > 200:
snippet += "..."
print(f" Preview: {snippet}")
print(f"\n{'='*50}")
print(f"Pages crawled: {len(result.crawled_urls)}")
print(f"Final confidence: {adaptive.confidence:.1%}")
print(f"Stopped reason: {result.metrics.get('stopped_reason', 'max_pages')}")
if result.metrics.get('is_irrelevant', False):
print("⚠️ Query detected as irrelevant!")
return result
async def llm_embedding():
"""Demonstrate various embedding configurations"""
print("EMBEDDING STRATEGY CONFIGURATION EXAMPLES")
print("=" * 60)
# Base URL and query for testing
test_url = "https://docs.python.org/3/library/asyncio.html"
openai_llm_config = LLMConfig(
provider='openai/text-embedding-3-small',
api_token=os.getenv('OPENAI_API_KEY'),
temperature=0.7,
max_tokens=2000
)
config_openai = AdaptiveConfig(
strategy="embedding",
max_pages=10,
# Use OpenAI embeddings
embedding_llm_config=openai_llm_config,
# embedding_llm_config={
# 'provider': 'openai/text-embedding-3-small',
# 'api_token': os.getenv('OPENAI_API_KEY')
# },
# OpenAI embeddings are high quality, can be stricter
embedding_k_exp=4.0,
n_query_variations=12
)
await test_configuration(
"OpenAI Embeddings",
config_openai,
test_url,
# "event-driven architecture patterns"
"async await context managers coroutines"
)
return
async def basic_adaptive_crawling():
"""Basic adaptive crawling example"""
# Initialize the crawler
async with AsyncWebCrawler(verbose=True) as crawler:
# Create an adaptive crawler with default settings (statistical strategy)
adaptive = AdaptiveCrawler(crawler)
# Note: You can also use embedding strategy for semantic understanding:
# from crawl4ai import AdaptiveConfig
# config = AdaptiveConfig(strategy="embedding")
# adaptive = AdaptiveCrawler(crawler, config)
# Start adaptive crawling
print("Starting adaptive crawl for Python async programming information...")
result = await adaptive.digest(
start_url="https://docs.python.org/3/library/asyncio.html",
query="async await context managers coroutines"
)
# Display crawl statistics
print("\n" + "="*50)
print("CRAWL STATISTICS")
print("="*50)
adaptive.print_stats(detailed=False)
# Get the most relevant content found
print("\n" + "="*50)
print("MOST RELEVANT PAGES")
print("="*50)
relevant_pages = adaptive.get_relevant_content(top_k=5)
for i, page in enumerate(relevant_pages, 1):
print(f"\n{i}. {page['url']}")
print(f" Relevance Score: {page['score']:.2%}")
# Show a snippet of the content
content = page['content'] or ""
if content:
snippet = content[:200].replace('\n', ' ')
if len(content) > 200:
snippet += "..."
print(f" Preview: {snippet}")
# Show final confidence
print(f"\n{'='*50}")
print(f"Final Confidence: {adaptive.confidence:.2%}")
print(f"Total Pages Crawled: {len(result.crawled_urls)}")
print(f"Knowledge Base Size: {len(adaptive.state.knowledge_base)} documents")
if adaptive.confidence >= 0.8:
print("✓ High confidence - can answer detailed questions about async Python")
elif adaptive.confidence >= 0.6:
print("~ Moderate confidence - can answer basic questions")
else:
print("✗ Low confidence - need more information")
if __name__ == "__main__":
asyncio.run(llm_embedding())
# asyncio.run(basic_adaptive_crawling())
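
For contrast with the OpenAI-backed configuration above, here is a minimal sketch of the embedding strategy without an external provider. It assumes crawl4ai falls back to its bundled local embedding model when embedding_llm_config is omitted (as the commented note in basic_adaptive_crawling suggests); the parameters and the digest() call mirror the example itself.

# Hedged sketch: embedding strategy without an external embedding provider.
# Assumes AdaptiveConfig(strategy="embedding") uses a local default model
# when embedding_llm_config is omitted; verify against the crawl4ai docs.
import asyncio
from crawl4ai import AsyncWebCrawler, AdaptiveCrawler, AdaptiveConfig

async def local_embedding_sketch():
    config = AdaptiveConfig(
        strategy="embedding",  # semantic scoring of links and pages
        max_pages=10,          # same page budget as the example above
    )
    async with AsyncWebCrawler(verbose=False) as crawler:
        adaptive = AdaptiveCrawler(crawler, config)
        result = await adaptive.digest(
            start_url="https://docs.python.org/3/library/asyncio.html",
            query="async await context managers coroutines",
        )
        print(f"Pages crawled: {len(result.crawled_urls)}")
        print(f"Final confidence: {adaptive.confidence:.1%}")

if __name__ == "__main__":
    asyncio.run(local_embedding_sketch())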

View File

@@ -112,7 +112,7 @@ async def test_proxy_settings():
headless=True,
verbose=False,
user_agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
proxy="http://127.0.0.1:8080", # Assuming local proxy server for test
proxy_config={"server": "http://127.0.0.1:8080"}, # Assuming local proxy server for test
use_managed_browser=False,
use_persistent_context=False,
) as crawler:
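
The change above replaces the legacy proxy string with proxy_config. A minimal hedged sketch of the new form with credentials follows; the username/password fields and their values are illustrative assumptions, not taken from the test.

# Hedged sketch of proxy_config usage; credential fields are hypothetical.
import asyncio
from crawl4ai import AsyncWebCrawler

async def proxy_config_sketch():
    async with AsyncWebCrawler(
        headless=True,
        proxy_config={
            "server": "http://127.0.0.1:8080",
            "username": "proxy_user",      # assumed field/value
            "password": "proxy_password",  # assumed field/value
        },
    ) as crawler:
        result = await crawler.arun(url="https://httpbin.org/ip")
        print(result.success)

if __name__ == "__main__":
    asyncio.run(proxy_config_sketch())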

View File

@@ -0,0 +1,372 @@
#!/usr/bin/env python3
"""
Test client for demonstrating user-provided hooks in Crawl4AI Docker API
"""
import requests
import json
from typing import Dict, Any
API_BASE_URL = "http://localhost:11234" # Adjust if needed
def test_hooks_info():
"""Get information about available hooks"""
print("=" * 70)
print("Testing: GET /hooks/info")
print("=" * 70)
response = requests.get(f"{API_BASE_URL}/hooks/info")
if response.status_code == 200:
data = response.json()
print("Available Hook Points:")
for hook, info in data['available_hooks'].items():
print(f"\n{hook}:")
print(f" Parameters: {', '.join(info['parameters'])}")
print(f" Description: {info['description']}")
else:
print(f"Error: {response.status_code}")
print(response.text)
def test_basic_crawl_with_hooks():
"""Test basic crawling with user-provided hooks"""
print("\n" + "=" * 70)
print("Testing: POST /crawl with hooks")
print("=" * 70)
# Define hooks as Python code strings
hooks_code = {
"on_page_context_created": """
async def hook(page, context, **kwargs):
print("Hook: Setting up page context")
# Block images to speed up crawling
await context.route("**/*.{png,jpg,jpeg,gif,webp}", lambda route: route.abort())
print("Hook: Images blocked")
return page
""",
"before_retrieve_html": """
async def hook(page, context, **kwargs):
print("Hook: Before retrieving HTML")
# Scroll to bottom to load lazy content
await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
await page.wait_for_timeout(1000)
print("Hook: Scrolled to bottom")
return page
""",
"before_goto": """
async def hook(page, context, url, **kwargs):
print(f"Hook: About to navigate to {url}")
# Add custom headers
await page.set_extra_http_headers({
'X-Test-Header': 'crawl4ai-hooks-test'
})
return page
"""
}
# Create request payload
payload = {
"urls": ["https://httpbin.org/html"],
"hooks": {
"code": hooks_code,
"timeout": 30
}
}
print("Sending request with hooks...")
response = requests.post(f"{API_BASE_URL}/crawl", json=payload)
if response.status_code == 200:
data = response.json()
print("\n✅ Crawl successful!")
# Check hooks status
if 'hooks' in data:
hooks_info = data['hooks']
print("\nHooks Execution Summary:")
print(f" Status: {hooks_info['status']['status']}")
print(f" Attached hooks: {', '.join(hooks_info['status']['attached_hooks'])}")
if hooks_info['status']['validation_errors']:
print("\n⚠️ Validation Errors:")
for error in hooks_info['status']['validation_errors']:
print(f" - {error['hook_point']}: {error['error']}")
if 'summary' in hooks_info:
summary = hooks_info['summary']
print(f"\nExecution Statistics:")
print(f" Total executions: {summary['total_executions']}")
print(f" Successful: {summary['successful']}")
print(f" Failed: {summary['failed']}")
print(f" Timed out: {summary['timed_out']}")
print(f" Success rate: {summary['success_rate']:.1f}%")
if hooks_info['execution_log']:
print("\nExecution Log:")
for log_entry in hooks_info['execution_log']:
status_icon = "" if log_entry['status'] == 'success' else ""
print(f" {status_icon} {log_entry['hook_point']}: {log_entry['status']} ({log_entry.get('execution_time', 0):.2f}s)")
if hooks_info['errors']:
print("\n❌ Hook Errors:")
for error in hooks_info['errors']:
print(f" - {error['hook_point']}: {error['error']}")
# Show crawl results
if 'results' in data:
print(f"\nCrawled {len(data['results'])} URL(s)")
for result in data['results']:
print(f" - {result['url']}: {'' if result['success'] else ''}")
else:
print(f"❌ Error: {response.status_code}")
print(response.text)
def test_invalid_hook():
"""Test with an invalid hook to see error handling"""
print("\n" + "=" * 70)
print("Testing: Invalid hook handling")
print("=" * 70)
# Intentionally broken hook
hooks_code = {
"on_page_context_created": """
def hook(page, context): # Missing async!
return page
""",
"before_retrieve_html": """
async def hook(page, context, **kwargs):
# This will cause an error
await page.non_existent_method()
return page
"""
}
payload = {
"urls": ["https://httpbin.org/html"],
"hooks": {
"code": hooks_code,
"timeout": 5
}
}
print("Sending request with invalid hooks...")
response = requests.post(f"{API_BASE_URL}/crawl", json=payload)
if response.status_code == 200:
data = response.json()
if 'hooks' in data:
hooks_info = data['hooks']
print(f"\nHooks Status: {hooks_info['status']['status']}")
if hooks_info['status']['validation_errors']:
print("\n✅ Validation caught errors (as expected):")
for error in hooks_info['status']['validation_errors']:
print(f" - {error['hook_point']}: {error['error']}")
if hooks_info['errors']:
print("\n✅ Runtime errors handled gracefully:")
for error in hooks_info['errors']:
print(f" - {error['hook_point']}: {error['error']}")
# The crawl should still succeed despite hook errors
if data.get('success'):
print("\n✅ Crawl succeeded despite hook errors (error isolation working!)")
else:
print(f"Error: {response.status_code}")
print(response.text)
def test_authentication_hook():
"""Test authentication using hooks"""
print("\n" + "=" * 70)
print("Testing: Authentication with hooks")
print("=" * 70)
hooks_code = {
"before_goto": """
async def hook(page, context, url, **kwargs):
# For httpbin.org basic auth test, set Authorization header
import base64
# httpbin.org/basic-auth/user/passwd expects username="user" and password="passwd"
credentials = base64.b64encode(b"user:passwd").decode('ascii')
await page.set_extra_http_headers({
'Authorization': f'Basic {credentials}'
})
print(f"Hook: Set Authorization header for {url}")
return page
""",
"on_page_context_created": """
async def hook(page, context, **kwargs):
# Example: Add cookies for session tracking
await context.add_cookies([
{
'name': 'session_id',
'value': 'test_session_123',
'domain': '.httpbin.org',
'path': '/',
'httpOnly': True,
'secure': True
}
])
print("Hook: Added session cookie")
return page
"""
}
payload = {
"urls": ["https://httpbin.org/basic-auth/user/passwd"],
"hooks": {
"code": hooks_code,
"timeout": 30
}
}
print("Sending request with authentication hook...")
response = requests.post(f"{API_BASE_URL}/crawl", json=payload)
if response.status_code == 200:
data = response.json()
if data.get('success'):
print("✅ Crawl with authentication hook successful")
# Check if hooks executed
if 'hooks' in data:
hooks_info = data['hooks']
if hooks_info.get('summary', {}).get('successful', 0) > 0:
print(f"✅ Authentication hooks executed: {hooks_info['summary']['successful']} successful")
# Check for any hook errors
if hooks_info.get('errors'):
print("⚠️ Hook errors:")
for error in hooks_info['errors']:
print(f" - {error}")
# Check if authentication worked by looking at the result
if 'results' in data and len(data['results']) > 0:
result = data['results'][0]
if result.get('success'):
print("✅ Page crawled successfully (authentication worked!)")
# httpbin.org/basic-auth returns JSON with authenticated=true when successful
if 'authenticated' in str(result.get('html', '')):
print("✅ Authentication confirmed in response content")
else:
print(f"❌ Crawl failed: {result.get('error_message', 'Unknown error')}")
else:
print("❌ Request failed")
print(f"Response: {json.dumps(data, indent=2)}")
else:
print(f"❌ Error: {response.status_code}")
try:
error_data = response.json()
print(f"Error details: {json.dumps(error_data, indent=2)}")
except:
print(f"Error text: {response.text[:500]}")
def test_streaming_with_hooks():
"""Test streaming endpoint with hooks"""
print("\n" + "=" * 70)
print("Testing: POST /crawl/stream with hooks")
print("=" * 70)
hooks_code = {
"before_retrieve_html": """
async def hook(page, context, **kwargs):
await page.evaluate("document.querySelectorAll('img').forEach(img => img.remove())")
return page
"""
}
payload = {
"urls": ["https://httpbin.org/html", "https://httpbin.org/json"],
"hooks": {
"code": hooks_code,
"timeout": 10
}
}
print("Sending streaming request with hooks...")
with requests.post(f"{API_BASE_URL}/crawl/stream", json=payload, stream=True) as response:
if response.status_code == 200:
# Check headers for hooks status
hooks_status = response.headers.get('X-Hooks-Status')
if hooks_status:
print(f"Hooks Status (from header): {hooks_status}")
print("\nStreaming results:")
for line in response.iter_lines():
if line:
try:
result = json.loads(line)
if 'url' in result:
print(f" Received: {result['url']}")
elif 'status' in result:
print(f" Stream status: {result['status']}")
except json.JSONDecodeError:
print(f" Raw: {line.decode()}")
else:
print(f"Error: {response.status_code}")
def test_basic_without_hooks():
"""Test basic crawl without hooks"""
print("\n" + "=" * 70)
print("Testing: POST /crawl with no hooks")
print("=" * 70)
payload = {
"urls": ["https://httpbin.org/html", "https://httpbin.org/json"]
}
response = requests.post(f"{API_BASE_URL}/crawl", json=payload)
if response.status_code == 200:
data = response.json()
print(f"Response: {json.dumps(data, indent=2)}")
else:
print(f"Error: {response.status_code}")
def main():
"""Run all tests"""
print("🔧 Crawl4AI Docker API - Hooks Testing")
print("=" * 70)
# Test 1: Get hooks information
# test_hooks_info()
# Test 2: Basic crawl with hooks
# test_basic_crawl_with_hooks()
# Test 3: Invalid hooks (error handling)
test_invalid_hook()
# # Test 4: Authentication hook
# test_authentication_hook()
# # Test 5: Streaming with hooks
# test_streaming_with_hooks()
# # Test 6: Basic crawl without hooks
# test_basic_without_hooks()
print("\n" + "=" * 70)
print("✅ All tests completed!")
print("=" * 70)
if __name__ == "__main__":
main()
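
Distilled from the test client above, a minimal sketch of the smallest request that attaches a single hook. The endpoint, port, and payload shape come from that file; the hook body itself is illustrative.

# Hedged minimal example: one hook, one URL, same /crawl endpoint as above.
import requests

payload = {
    "urls": ["https://httpbin.org/html"],
    "hooks": {
        "code": {
            "before_retrieve_html": (
                "async def hook(page, context, **kwargs):\n"
                "    await page.evaluate('window.scrollTo(0, document.body.scrollHeight)')\n"
                "    return page\n"
            ),
        },
        "timeout": 30,
    },
}

resp = requests.post("http://localhost:11234/crawl", json=payload)
resp.raise_for_status()
print(resp.json().get("hooks", {}).get("status"))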

View File

@@ -0,0 +1,512 @@
#!/usr/bin/env python3
"""
Comprehensive test demonstrating all hook types from hooks_example.py
adapted for the Docker API with real URLs
"""
import requests
import json
import time
from typing import Dict, Any
API_BASE_URL = "http://localhost:11234"
def test_all_hooks_demo():
"""Demonstrate all 8 hook types with practical examples"""
print("=" * 70)
print("Testing: All Hooks Comprehensive Demo")
print("=" * 70)
hooks_code = {
"on_browser_created": """
async def hook(browser, **kwargs):
# Hook called after browser is created
print("[HOOK] on_browser_created - Browser is ready!")
# Browser-level configurations would go here
return browser
""",
"on_page_context_created": """
async def hook(page, context, **kwargs):
# Hook called after a new page and context are created
print("[HOOK] on_page_context_created - New page created!")
# Set viewport size for consistent rendering
await page.set_viewport_size({"width": 1920, "height": 1080})
# Add cookies for the session (using httpbin.org domain)
await context.add_cookies([
{
"name": "test_session",
"value": "abc123xyz",
"domain": ".httpbin.org",
"path": "/",
"httpOnly": True,
"secure": True
}
])
# Block ads and tracking scripts to speed up crawling
await context.route("**/*.{png,jpg,jpeg,gif,webp,svg}", lambda route: route.abort())
await context.route("**/analytics/*", lambda route: route.abort())
await context.route("**/ads/*", lambda route: route.abort())
print("[HOOK] Viewport set, cookies added, and ads blocked")
return page
""",
"on_user_agent_updated": """
async def hook(page, context, user_agent, **kwargs):
# Hook called when user agent is updated
print(f"[HOOK] on_user_agent_updated - User agent: {user_agent[:50]}...")
return page
""",
"before_goto": """
async def hook(page, context, url, **kwargs):
# Hook called before navigating to each URL
print(f"[HOOK] before_goto - About to visit: {url}")
# Add custom headers for the request
await page.set_extra_http_headers({
"X-Custom-Header": "crawl4ai-test",
"Accept-Language": "en-US,en;q=0.9",
"DNT": "1"
})
return page
""",
"after_goto": """
async def hook(page, context, url, response, **kwargs):
# Hook called after navigating to each URL
print(f"[HOOK] after_goto - Successfully loaded: {url}")
# Wait a moment for dynamic content to load
await page.wait_for_timeout(1000)
# Check if specific elements exist (with error handling)
try:
# For httpbin.org, wait for body element
await page.wait_for_selector("body", timeout=2000)
print("[HOOK] Body element found and loaded")
except:
print("[HOOK] Timeout waiting for body, continuing anyway")
return page
""",
"on_execution_started": """
async def hook(page, context, **kwargs):
# Hook called after custom JavaScript execution
print("[HOOK] on_execution_started - Custom JS executed!")
# You could inject additional JavaScript here if needed
await page.evaluate("console.log('[INJECTED] Hook JS running');")
return page
""",
"before_retrieve_html": """
async def hook(page, context, **kwargs):
# Hook called before retrieving the HTML content
print("[HOOK] before_retrieve_html - Preparing to get HTML")
# Scroll to bottom to trigger lazy loading
await page.evaluate("window.scrollTo(0, document.body.scrollHeight);")
await page.wait_for_timeout(500)
# Scroll back to top
await page.evaluate("window.scrollTo(0, 0);")
await page.wait_for_timeout(500)
# One more scroll to middle for good measure
await page.evaluate("window.scrollTo(0, document.body.scrollHeight / 2);")
print("[HOOK] Scrolling completed for lazy-loaded content")
return page
""",
"before_return_html": """
async def hook(page, context, html, **kwargs):
# Hook called before returning the HTML content
print(f"[HOOK] before_return_html - HTML length: {len(html)} characters")
# Log some page metrics
metrics = await page.evaluate('''() => {
return {
images: document.images.length,
links: document.links.length,
scripts: document.scripts.length
}
}''')
print(f"[HOOK] Page metrics - Images: {metrics['images']}, Links: {metrics['links']}, Scripts: {metrics['scripts']}")
return page
"""
}
# Create request payload
payload = {
"urls": ["https://httpbin.org/html"],
"hooks": {
"code": hooks_code,
"timeout": 30
},
"crawler_config": {
"js_code": "window.scrollTo(0, document.body.scrollHeight);",
"wait_for": "body",
"cache_mode": "bypass"
}
}
print("\nSending request with all 8 hooks...")
start_time = time.time()
response = requests.post(f"{API_BASE_URL}/crawl", json=payload)
elapsed_time = time.time() - start_time
print(f"Request completed in {elapsed_time:.2f} seconds")
if response.status_code == 200:
data = response.json()
print("\n✅ Request successful!")
# Check hooks execution
if 'hooks' in data:
hooks_info = data['hooks']
print("\n📊 Hooks Execution Summary:")
print(f" Status: {hooks_info['status']['status']}")
print(f" Attached hooks: {len(hooks_info['status']['attached_hooks'])}")
for hook_name in hooks_info['status']['attached_hooks']:
print(f"{hook_name}")
if 'summary' in hooks_info:
summary = hooks_info['summary']
print(f"\n📈 Execution Statistics:")
print(f" Total executions: {summary['total_executions']}")
print(f" Successful: {summary['successful']}")
print(f" Failed: {summary['failed']}")
print(f" Timed out: {summary['timed_out']}")
print(f" Success rate: {summary['success_rate']:.1f}%")
if hooks_info.get('execution_log'):
print(f"\n📝 Execution Log:")
for log_entry in hooks_info['execution_log']:
status_icon = "" if log_entry['status'] == 'success' else ""
exec_time = log_entry.get('execution_time', 0)
print(f" {status_icon} {log_entry['hook_point']}: {exec_time:.3f}s")
# Check crawl results
if 'results' in data and len(data['results']) > 0:
print(f"\n📄 Crawl Results:")
for result in data['results']:
print(f" URL: {result['url']}")
print(f" Success: {result.get('success', False)}")
if result.get('html'):
print(f" HTML length: {len(result['html'])} characters")
else:
print(f"❌ Error: {response.status_code}")
try:
error_data = response.json()
print(f"Error details: {json.dumps(error_data, indent=2)}")
except:
print(f"Error text: {response.text[:500]}")
def test_authentication_flow():
"""Test a complete authentication flow with multiple hooks"""
print("\n" + "=" * 70)
print("Testing: Authentication Flow with Multiple Hooks")
print("=" * 70)
hooks_code = {
"on_page_context_created": """
async def hook(page, context, **kwargs):
print("[HOOK] Setting up authentication context")
# Add authentication cookies
await context.add_cookies([
{
"name": "auth_token",
"value": "fake_jwt_token_here",
"domain": ".httpbin.org",
"path": "/",
"httpOnly": True,
"secure": True
}
])
# Set localStorage items (for SPA authentication)
await page.evaluate('''
localStorage.setItem('user_id', '12345');
localStorage.setItem('auth_time', new Date().toISOString());
''')
return page
""",
"before_goto": """
async def hook(page, context, url, **kwargs):
print(f"[HOOK] Adding auth headers for {url}")
# Add Authorization header
import base64
credentials = base64.b64encode(b"user:passwd").decode('ascii')
await page.set_extra_http_headers({
'Authorization': f'Basic {credentials}',
'X-API-Key': 'test-api-key-123'
})
return page
"""
}
payload = {
"urls": [
"https://httpbin.org/basic-auth/user/passwd"
],
"hooks": {
"code": hooks_code,
"timeout": 15
}
}
print("\nTesting authentication with httpbin endpoints...")
response = requests.post(f"{API_BASE_URL}/crawl", json=payload)
if response.status_code == 200:
data = response.json()
print("✅ Authentication test completed")
if 'results' in data:
for i, result in enumerate(data['results']):
print(f"\n URL {i+1}: {result['url']}")
if result.get('success'):
# Check for authentication success indicators
html_content = result.get('html', '')
if '"authenticated"' in html_content and 'true' in html_content:
print(" ✅ Authentication successful! Basic auth worked.")
else:
print(" ⚠️ Page loaded but auth status unclear")
else:
print(f" ❌ Failed: {result.get('error_message', 'Unknown error')}")
else:
print(f"❌ Error: {response.status_code}")
def test_performance_optimization_hooks():
"""Test hooks for performance optimization"""
print("\n" + "=" * 70)
print("Testing: Performance Optimization Hooks")
print("=" * 70)
hooks_code = {
"on_page_context_created": """
async def hook(page, context, **kwargs):
print("[HOOK] Optimizing page for performance")
# Block resource-heavy content
await context.route("**/*.{png,jpg,jpeg,gif,webp,svg,ico}", lambda route: route.abort())
await context.route("**/*.{woff,woff2,ttf,otf}", lambda route: route.abort())
await context.route("**/*.{mp4,webm,ogg,mp3,wav}", lambda route: route.abort())
await context.route("**/googletagmanager.com/*", lambda route: route.abort())
await context.route("**/google-analytics.com/*", lambda route: route.abort())
await context.route("**/doubleclick.net/*", lambda route: route.abort())
await context.route("**/facebook.com/*", lambda route: route.abort())
# Disable animations and transitions
await page.add_style_tag(content='''
*, *::before, *::after {
animation-duration: 0s !important;
animation-delay: 0s !important;
transition-duration: 0s !important;
transition-delay: 0s !important;
}
''')
print("[HOOK] Performance optimizations applied")
return page
""",
"before_retrieve_html": """
async def hook(page, context, **kwargs):
print("[HOOK] Removing unnecessary elements before extraction")
# Remove ads, popups, and other unnecessary elements
await page.evaluate('''() => {
// Remove common ad containers
const adSelectors = [
'.ad', '.ads', '.advertisement', '[id*="ad-"]', '[class*="ad-"]',
'.popup', '.modal', '.overlay', '.cookie-banner', '.newsletter-signup'
];
adSelectors.forEach(selector => {
document.querySelectorAll(selector).forEach(el => el.remove());
});
// Remove script tags to clean up HTML
document.querySelectorAll('script').forEach(el => el.remove());
// Remove style tags we don't need
document.querySelectorAll('style').forEach(el => el.remove());
}''')
return page
"""
}
payload = {
"urls": ["https://httpbin.org/html"],
"hooks": {
"code": hooks_code,
"timeout": 10
}
}
print("\nTesting performance optimization hooks...")
start_time = time.time()
response = requests.post(f"{API_BASE_URL}/crawl", json=payload)
elapsed_time = time.time() - start_time
print(f"Request completed in {elapsed_time:.2f} seconds")
if response.status_code == 200:
data = response.json()
print("✅ Performance optimization test completed")
if 'results' in data and len(data['results']) > 0:
result = data['results'][0]
if result.get('html'):
print(f" HTML size: {len(result['html'])} characters")
print(" Resources blocked, ads removed, animations disabled")
else:
print(f"❌ Error: {response.status_code}")
def test_content_extraction_hooks():
"""Test hooks for intelligent content extraction"""
print("\n" + "=" * 70)
print("Testing: Content Extraction Hooks")
print("=" * 70)
hooks_code = {
"after_goto": """
async def hook(page, context, url, response, **kwargs):
print(f"[HOOK] Waiting for dynamic content on {url}")
# Wait for any lazy-loaded content
await page.wait_for_timeout(2000)
# Trigger any "Load More" buttons
try:
load_more = await page.query_selector('[class*="load-more"], [class*="show-more"], button:has-text("Load More")')
if load_more:
await load_more.click()
await page.wait_for_timeout(1000)
print("[HOOK] Clicked 'Load More' button")
except:
pass
return page
""",
"before_retrieve_html": """
async def hook(page, context, **kwargs):
print("[HOOK] Extracting structured data")
# Extract metadata
metadata = await page.evaluate('''() => {
const getMeta = (name) => {
const element = document.querySelector(`meta[name="${name}"], meta[property="${name}"]`);
return element ? element.getAttribute('content') : null;
};
return {
title: document.title,
description: getMeta('description') || getMeta('og:description'),
author: getMeta('author'),
keywords: getMeta('keywords'),
ogTitle: getMeta('og:title'),
ogImage: getMeta('og:image'),
canonical: document.querySelector('link[rel="canonical"]')?.href,
jsonLd: Array.from(document.querySelectorAll('script[type="application/ld+json"]'))
.map(el => el.textContent).filter(Boolean)
};
}''')
print(f"[HOOK] Extracted metadata: {json.dumps(metadata, indent=2)}")
# Infinite scroll handling
for i in range(3):
await page.evaluate("window.scrollTo(0, document.body.scrollHeight);")
await page.wait_for_timeout(1000)
print(f"[HOOK] Scroll iteration {i+1}/3")
return page
"""
}
payload = {
"urls": ["https://httpbin.org/html", "https://httpbin.org/json"],
"hooks": {
"code": hooks_code,
"timeout": 20
}
}
print("\nTesting content extraction hooks...")
response = requests.post(f"{API_BASE_URL}/crawl", json=payload)
if response.status_code == 200:
data = response.json()
print("✅ Content extraction test completed")
if 'hooks' in data and 'summary' in data['hooks']:
summary = data['hooks']['summary']
print(f" Hooks executed: {summary['successful']}/{summary['total_executions']}")
if 'results' in data:
for result in data['results']:
print(f"\n URL: {result['url']}")
print(f" Success: {result.get('success', False)}")
else:
print(f"❌ Error: {response.status_code}")
def main():
"""Run comprehensive hook tests"""
print("🔧 Crawl4AI Docker API - Comprehensive Hooks Testing")
print("Based on docs/examples/hooks_example.py")
print("=" * 70)
tests = [
("All Hooks Demo", test_all_hooks_demo),
("Authentication Flow", test_authentication_flow),
("Performance Optimization", test_performance_optimization_hooks),
("Content Extraction", test_content_extraction_hooks),
]
for i, (name, test_func) in enumerate(tests, 1):
print(f"\n📌 Test {i}/{len(tests)}: {name}")
try:
test_func()
print(f"{name} completed")
except Exception as e:
print(f"{name} failed: {e}")
import traceback
traceback.print_exc()
print("\n" + "=" * 70)
print("🎉 All comprehensive hook tests completed!")
print("=" * 70)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,117 @@
#!/usr/bin/env python3
"""
Simple test to verify BestFirstCrawlingStrategy fixes.
This test crawls a real website and shows that:
1. Higher-scoring pages are crawled first (priority queue fix)
2. Links are scored before truncation (link discovery fix)
"""
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
from crawl4ai.deep_crawling import BestFirstCrawlingStrategy
from crawl4ai.deep_crawling.scorers import KeywordRelevanceScorer
async def test_best_first_strategy():
"""Test BestFirstCrawlingStrategy with keyword scoring"""
print("=" * 70)
print("Testing BestFirstCrawlingStrategy with Real URL")
print("=" * 70)
print("\nThis test will:")
print("1. Crawl Python.org documentation")
print("2. Score pages based on keywords: 'tutorial', 'guide', 'reference'")
print("3. Show that higher-scoring pages are crawled first")
print("-" * 70)
# Create a keyword scorer that prioritizes tutorial/guide pages
scorer = KeywordRelevanceScorer(
keywords=["tutorial", "guide", "reference", "documentation"],
weight=1.0,
case_sensitive=False
)
# Create the strategy with scoring
strategy = BestFirstCrawlingStrategy(
max_depth=2, # Crawl 2 levels deep
max_pages=10, # Limit to 10 pages total
url_scorer=scorer, # Use keyword scoring
include_external=False # Only internal links
)
# Configure browser and crawler
browser_config = BrowserConfig(
headless=True, # Run in background
verbose=False # Reduce output noise
)
crawler_config = CrawlerRunConfig(
deep_crawl_strategy=strategy,
verbose=False
)
print("\nStarting crawl of https://docs.python.org/3/")
print("Looking for pages with keywords: tutorial, guide, reference, documentation")
print("-" * 70)
crawled_urls = []
async with AsyncWebCrawler(config=browser_config) as crawler:
# Crawl and collect results
results = await crawler.arun(
url="https://docs.python.org/3/",
config=crawler_config
)
# Process results
if isinstance(results, list):
for result in results:
score = result.metadata.get('score', 0) if result.metadata else 0
depth = result.metadata.get('depth', 0) if result.metadata else 0
crawled_urls.append({
'url': result.url,
'score': score,
'depth': depth,
'success': result.success
})
print("\n" + "=" * 70)
print("CRAWL RESULTS (in order of crawling)")
print("=" * 70)
for i, item in enumerate(crawled_urls, 1):
status = "" if item['success'] else ""
# Highlight high-scoring pages
if item['score'] > 0.5:
print(f"{i:2}. [{status}] Score: {item['score']:.2f} | Depth: {item['depth']} | {item['url']}")
print(f" ^ HIGH SCORE - Contains keywords!")
else:
print(f"{i:2}. [{status}] Score: {item['score']:.2f} | Depth: {item['depth']} | {item['url']}")
print("\n" + "=" * 70)
print("ANALYSIS")
print("=" * 70)
# Check if higher scores appear early in the crawl
scores = [item['score'] for item in crawled_urls[1:]] # Skip initial URL
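    # Heuristic check: if the first page scoring above 0.3 shows up in the first half of
    # the crawl order, the score-first (priority queue) ordering is considered to be working.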
high_score_indices = [i for i, s in enumerate(scores) if s > 0.3]
if high_score_indices and high_score_indices[0] < len(scores) / 2:
print("✅ SUCCESS: Higher-scoring pages (with keywords) were crawled early!")
print(" This confirms the priority queue fix is working.")
else:
print("⚠️ Check the crawl order above - higher scores should appear early")
    # Show score distribution (skip if nothing was crawled, to avoid dividing by zero)
    if crawled_urls:
        print(f"\nScore Statistics:")
        print(f" - Total pages crawled: {len(crawled_urls)}")
        print(f" - Average score: {sum(item['score'] for item in crawled_urls) / len(crawled_urls):.2f}")
        print(f" - Max score: {max(item['score'] for item in crawled_urls):.2f}")
        print(f" - Pages with keywords: {sum(1 for item in crawled_urls if item['score'] > 0.3)}")
    else:
        print("\nNo pages were crawled; check network access and the start URL.")
print("\n" + "=" * 70)
print("TEST COMPLETE")
print("=" * 70)
if __name__ == "__main__":
print("\n🔍 BestFirstCrawlingStrategy Simple Test\n")
asyncio.run(test_best_first_strategy())

View File

@@ -24,7 +24,7 @@ CASES = [
# --- BrowserConfig variants ---
"BrowserConfig()",
"BrowserConfig(headless=False, extra_args=['--disable-gpu'])",
"BrowserConfig(browser_mode='builtin', proxy='http://1.2.3.4:8080')",
"BrowserConfig(browser_mode='builtin', proxy_config={'server': 'http://1.2.3.4:8080'})",
]
for code in CASES:

View File

@@ -0,0 +1,42 @@
import warnings
import pytest
from crawl4ai.async_configs import BrowserConfig, ProxyConfig
def test_browser_config_proxy_string_emits_deprecation_and_autoconverts():
warnings.simplefilter("always", DeprecationWarning)
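    # Legacy colon-delimited proxy string (host:port:username:password) that the old
    # 'proxy' argument accepted; BrowserConfig should auto-convert it to a ProxyConfig.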
proxy_str = "23.95.150.145:6114:username:password"
with warnings.catch_warnings(record=True) as caught:
cfg = BrowserConfig(proxy=proxy_str, headless=True)
dep_warnings = [w for w in caught if issubclass(w.category, DeprecationWarning)]
assert dep_warnings, "Expected DeprecationWarning when using BrowserConfig(proxy=...)"
assert cfg.proxy is None, "cfg.proxy should be None after auto-conversion"
assert isinstance(cfg.proxy_config, ProxyConfig), "cfg.proxy_config should be ProxyConfig instance"
assert cfg.proxy_config.username == "username"
assert cfg.proxy_config.password == "password"
assert cfg.proxy_config.server.startswith("http://")
assert cfg.proxy_config.server.endswith(":6114")
def test_browser_config_with_proxy_config_emits_no_deprecation():
warnings.simplefilter("always", DeprecationWarning)
with warnings.catch_warnings(record=True) as caught:
cfg = BrowserConfig(
headless=True,
proxy_config={
"server": "http://127.0.0.1:8080",
"username": "u",
"password": "p",
},
)
dep_warnings = [w for w in caught if issubclass(w.category, DeprecationWarning)]
assert not dep_warnings, "Did not expect DeprecationWarning when using proxy_config"
assert cfg.proxy is None
assert isinstance(cfg.proxy_config, ProxyConfig)