Compare commits


36 Commits

Author SHA1 Message Date
UncleCode
72d8e679ad feat(pipeline): add high-level Crawler utility class for simplified web crawling
Add new Crawler class that provides a simplified interface for both single and batch URL crawling operations. Key features include:
- Simple single URL crawling with configurable options
- Parallel batch crawling with concurrency control
- Shared browser hub support for resource efficiency
- Progress tracking and custom retry strategies
- Comprehensive error handling and retry logic

Remove demo and extended test files in favor of new focused test suite.
2025-04-07 22:50:44 +08:00
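The batch-crawling behavior this commit describes — bounded concurrency plus a per-URL retry strategy — can be sketched with stdlib `asyncio`. Everything below is illustrative: the names `crawl_one`/`crawl_many` and the fake `fetch` are stand-ins, not the crawl4ai `Crawler` API.

```python
import asyncio

attempts: dict[str, int] = {}

async def fetch(url: str) -> str:
    # Stand-in for a real page fetch; the first attempt per URL fails,
    # so the retry path is exercised deterministically.
    attempts[url] = attempts.get(url, 0) + 1
    await asyncio.sleep(0)
    if attempts[url] == 1:
        raise ConnectionError(f"transient failure for {url}")
    return f"<html>{url}</html>"

async def crawl_one(url: str, sem: asyncio.Semaphore, max_retries: int = 3) -> str:
    # The semaphore bounds how many URLs are in flight at once;
    # each URL retries with exponential backoff before giving up.
    async with sem:
        for attempt in range(max_retries):
            try:
                return await fetch(url)
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise
                await asyncio.sleep(0.001 * 2 ** attempt)

async def crawl_many(urls: list[str], concurrency: int = 5) -> list[str]:
    sem = asyncio.Semaphore(concurrency)
    # gather preserves input order, so results line up with urls.
    return await asyncio.gather(*(crawl_one(u, sem) for u in urls))

results = asyncio.run(crawl_many([f"https://example.com/{i}" for i in range(10)]))
```

The same shape — semaphore for concurrency control, loop-with-backoff for retries — underlies most batch crawlers regardless of library.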
UncleCode
67a790b4a6 Add test file for Pipeline batch crawl. 2025-04-06 19:38:31 +08:00
UncleCode
d95b2dc9f2 Some refactoring; move pipeline submodule folder into the main package. 2025-04-06 18:28:28 +08:00
UncleCode
591f55edc7 refactor(browser): rename methods and update type hints in BrowserHub for clarity 2025-04-06 18:22:05 +08:00
UncleCode
b1693b1c21 Remove old quickstart files 2025-04-05 23:10:25 +08:00
UncleCode
14894b4d70 feat(config): set DefaultMarkdownGenerator as the default markdown generator in CrawlerRunConfig
feat(logger): add color mapping for log message formatting options
2025-04-03 20:34:19 +08:00
UncleCode
86df20234b fix(crawler): handle exceptions in get_page call to ensure page retrieval 2025-04-02 21:25:24 +08:00
UncleCode
179921a131 fix(crawler): update get_page call to include additional return value 2025-04-02 19:01:30 +08:00
UncleCode
c5cac2b459 feat(browser): add BrowserHub for centralized browser management and resource sharing 2025-04-01 20:35:02 +08:00
UncleCode
555455d710 feat(browser): implement browser pooling and page pre-warming
Adds a new BrowserManager implementation with browser pooling and page pre-warming capabilities:
- Adds support for managing multiple browser instances per configuration
- Implements page pre-warming for improved performance
- Adds configurable behavior for when no browsers are available
- Includes comprehensive status reporting and monitoring
- Maintains backward compatibility with existing API
- Adds demo script showcasing new features

BREAKING CHANGE: BrowserManager API now returns a strategy instance along with page and context
2025-03-31 21:55:07 +08:00
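A minimal sketch of the pooling and pre-warming idea from this commit, with a fake launcher standing in for a real browser process. `BrowserPool` is an illustrative name, not the shipped `BrowserManager`:

```python
from collections import defaultdict, deque

class BrowserPool:
    """Illustrative sketch: keep warm browser instances per configuration,
    serve checkouts from the warm pool, and fall back to launching on demand."""

    def __init__(self, warm_per_config: int = 2):
        self.warm_per_config = warm_per_config
        self._idle: dict[str, deque] = defaultdict(deque)
        self._launch_count = 0

    def _launch(self, config_key: str) -> dict:
        # Stand-in for starting a real browser process.
        self._launch_count += 1
        return {"config": config_key, "id": self._launch_count}

    def prewarm(self, config_key: str) -> None:
        # Launch instances ahead of demand so later checkouts are instant.
        while len(self._idle[config_key]) < self.warm_per_config:
            self._idle[config_key].append(self._launch(config_key))

    def acquire(self, config_key: str) -> dict:
        if self._idle[config_key]:
            return self._idle[config_key].popleft()
        # Configurable behavior when no warm browser is available:
        # here we simply launch a fresh one.
        return self._launch(config_key)

    def release(self, config_key: str, browser: dict) -> None:
        self._idle[config_key].append(browser)
```

Pre-warming trades idle memory for checkout latency; the "configurable behavior when no browsers are available" mentioned in the commit would replace the launch-on-miss fallback above.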
UncleCode
bb02398086 refactor(browser): improve browser strategy architecture and lifecycle management
Major refactoring of browser strategy implementations to improve code organization and reliability:
- Move CrawlResultContainer and RunManyReturn types from async_webcrawler to models.py
- Simplify browser lifecycle management in AsyncWebCrawler
- Standardize browser strategy interface with _generate_page method
- Improve headless mode handling and browser args construction
- Clean up Docker and Playwright strategy implementations
- Fix session management and context handling across strategies

BREAKING CHANGE: Browser strategy interface has changed with new _generate_page method requirement
2025-03-30 20:58:39 +08:00
UncleCode
3ff7eec8f3 refactor(browser): consolidate browser strategy implementations
Moves common browser functionality into BaseBrowserStrategy class to reduce code duplication and improve maintainability. Key changes:
- Adds shared browser argument building and session management to base class
- Standardizes storage state handling across strategies
- Improves process cleanup and error handling
- Consolidates CDP URL management and container lifecycle

BREAKING CHANGE: Changes browser_mode="custom" to "cdp" for consistency
2025-03-28 22:47:28 +08:00
UncleCode
64f20ab44a refactor(docker): update Dockerfile and browser strategy to use Chromium 2025-03-28 15:59:02 +08:00
UncleCode
c635f6b9a2 refactor(browser): reorganize browser strategies and improve Docker implementation
Reorganize browser strategy code into separate modules for better maintainability and separation of concerns. Improve Docker implementation with:
- Add Alpine and Debian-based Dockerfiles for better container options
- Enhance Docker registry to share configuration with BuiltinBrowserStrategy
- Add CPU and memory limits to container configuration
- Improve error handling and logging
- Update documentation and examples

BREAKING CHANGE: DockerConfig, DockerRegistry, and DockerUtils have been moved to new locations and their APIs have been updated.
2025-03-27 21:35:13 +08:00
UncleCode
7f93e88379 refactor(tests): remove unused imports in test_docker_browser.py 2025-03-26 15:19:29 +08:00
UncleCode
40d4dd36c9 chore(version): bump version to 0.5.0.post8 and update post-installation setup 2025-03-25 21:56:49 +08:00
UncleCode
d8f38f2298 chore(version): bump version to 0.5.0.post7 2025-03-25 21:47:19 +08:00
UncleCode
5c88d1310d feat(cli): add output file option and integrate LXML web scraping strategy 2025-03-25 21:38:24 +08:00
UncleCode
4a20d7f7c2 feat(cli): add quick JSON extraction and global config management
Adds new features to improve user experience and configuration:
- Quick JSON extraction with -j flag for direct LLM-based structured data extraction
- Global configuration management with 'crwl config' commands
- Enhanced LLM extraction with better JSON handling and error management
- New user settings for default behaviors (LLM provider, browser settings, etc.)

Breaking changes: None
2025-03-25 20:30:25 +08:00
UncleCode
6405cf0a6f Merge branch 'vr0.5.0.post5' into next 2025-03-25 14:51:29 +08:00
UncleCode
bdd9db579a chore(version): bump version to 0.5.0.post6
refactor(cli): remove unused import from FastAPI
2025-03-25 12:01:36 +08:00
UncleCode
1107fa1d62 feat(cli): enhance markdown generation with default content filters
Add DefaultMarkdownGenerator integration and automatic content filtering for markdown output formats. When using 'markdown-fit' or 'md-fit' output formats, automatically apply PruningContentFilter with default settings if no filter config is provided.

This change improves the user experience by providing sensible defaults for markdown generation while maintaining the ability to customize filtering behavior.
2025-03-25 11:56:00 +08:00
UncleCode
8c08521301 feat(browser): add Docker-based browser automation strategy
Implements a new browser strategy that runs Chrome in Docker containers,
providing better isolation and cross-platform consistency. Features include:
- Connect and launch modes for different container configurations
- Persistent storage support for maintaining browser state
- Container registry for efficient reuse
- Comprehensive test suite for Docker browser functionality

This addition allows users to run browser automation workloads in isolated
containers, improving security and resource management.
2025-03-24 21:36:58 +08:00
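The "container registry for efficient reuse" mentioned above can be illustrated by keying containers on a configuration fingerprint, so identical configs share one container. This is a self-contained sketch, not the shipped `DockerRegistry` API:

```python
import hashlib
import json

class ContainerRegistry:
    """Illustrative sketch: reuse Docker containers by config fingerprint."""

    def __init__(self):
        self._containers: dict[str, str] = {}
        self._started = 0

    @staticmethod
    def config_key(config: dict) -> str:
        # Stable fingerprint: key order must not matter, so sort keys
        # before hashing the serialized config.
        blob = json.dumps(config, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:12]

    def get_or_start(self, config: dict) -> str:
        key = self.config_key(config)
        if key not in self._containers:
            self._started += 1  # stand-in for `docker run ...`
            self._containers[key] = f"container-{self._started}"
        return self._containers[key]
```

Two crawls with the same browser configuration land in the same container, while a changed configuration transparently starts a new one.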
UncleCode
462d5765e2 fix(browser): improve storage state persistence in CDP strategy
Enhance storage state persistence mechanism in CDP browser strategy by:
- Explicitly saving storage state for each browser context
- Using proper file path for storage state
- Removing unnecessary sleep delay

Also includes test improvements:
- Simplified test configurations in playwright tests
- Temporarily disabled some CDP tests
2025-03-23 21:06:41 +08:00
UncleCode
6eeb2e4076 feat(browser): enhance browser context creation with user data directory support and improved storage state handling 2025-03-23 19:07:13 +08:00
UncleCode
0094cac675 refactor(browser): improve parallel crawling and browser management
Remove PagePoolConfig in favor of direct page management in browser strategies.
Add get_pages() method for efficient parallel page creation.
Improve storage state handling and persistence.
Add comprehensive parallel crawling tests and performance analysis.

BREAKING CHANGE: Removed PagePoolConfig class and related functionality.
2025-03-23 18:53:24 +08:00
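The `get_pages()` idea — creating several pages concurrently instead of one after another — cuts total setup time to roughly that of a single page creation. A stdlib sketch, where the fake `open_page` stands in for real page creation:

```python
import asyncio

async def open_page(browser_id: int, n: int) -> str:
    # Stand-in for creating one page/tab in an already-running browser.
    await asyncio.sleep(0.01)
    return f"browser{browser_id}-page{n}"

async def get_pages(browser_id: int, count: int) -> list[str]:
    # All pages are created concurrently; gather preserves order.
    return await asyncio.gather(*(open_page(browser_id, i) for i in range(count)))

pages = asyncio.run(get_pages(1, 4))
```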
UncleCode
4ab0893ffb feat(browser): implement modular browser management system
Adds a new browser management system with strategy pattern implementation:
- Introduces BrowserManager class with strategy pattern support
- Adds PlaywrightBrowserStrategy, CDPBrowserStrategy, and BuiltinBrowserStrategy
- Implements BrowserProfileManager for profile management
- Adds PagePoolConfig for browser page pooling
- Includes comprehensive test suite for all browser strategies

BREAKING CHANGE: Browser management has been moved to browser/ module. Direct usage of browser_manager.py and browser_profiler.py is deprecated.
2025-03-21 22:50:00 +08:00
UncleCode
6432ff1257 feat(browser): add builtin browser management system
Implements a persistent browser management system that allows running a single shared browser instance
that can be reused across multiple crawler sessions. Key changes include:

- Added browser_mode config option with 'builtin', 'dedicated', and 'custom' modes
- Implemented builtin browser management in BrowserProfiler
- Added CLI commands for managing builtin browser (start, stop, status, restart, view)
- Modified browser process handling to support detached processes
- Added automatic builtin browser setup during package installation

BREAKING CHANGE: The browser_mode config option changes how browser instances are managed
2025-03-20 12:13:59 +08:00
UncleCode
5358ac0fc2 refactor: clean up imports and improve JSON schema generation instructions 2025-03-18 18:53:34 +08:00
UncleCode
a24799918c feat(llm): add additional LLM configuration parameters
Extend LLMConfig class to support more fine-grained control over LLM behavior by adding:
- temperature control
- max tokens limit
- top_p sampling
- frequency and presence penalties
- stop sequences
- number of completions

These parameters allow for better customization of LLM responses.
2025-03-14 21:36:23 +08:00
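A self-contained mirror of the new generation knobs as a dataclass, to show the only-send-what-was-set pattern. This is an illustrative sketch, not the shipped `LLMConfig` (note that the shipped code spells the keyword `temprature`, as the diff later in this page shows):

```python
from dataclasses import dataclass, asdict
from typing import Optional, List

@dataclass
class GenerationParams:
    """Optional sampling controls; None means 'use the provider default'."""
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    top_p: Optional[float] = None
    frequency_penalty: Optional[float] = None
    presence_penalty: Optional[float] = None
    stop: Optional[List[str]] = None
    n: Optional[int] = None

    def to_dict(self) -> dict:
        # Only forward explicitly-set values to the provider API.
        return {k: v for k, v in asdict(self).items() if v is not None}
```

Dropping `None` values on serialization keeps request payloads minimal and lets the provider's own defaults apply untouched.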
UncleCode
a31d7b86be feat(changelog): update CHANGELOG for version 0.5.0.post5 with new features, changes, fixes, and breaking changes 2025-03-14 15:26:37 +08:00
UncleCode
7884a98be7 feat(crawler): add experimental parameters support and optimize browser handling
Add experimental parameters dictionary to CrawlerRunConfig to support beta features
Make CSP nonce headers optional via experimental config
Remove default cookie injection
Clean up browser context creation code
Improve code formatting in API handler

BREAKING CHANGE: Default cookie injection has been removed from page initialization
2025-03-14 14:39:24 +08:00
UncleCode
6e3c048328 feat(api): refactor crawl request handling to streamline single and multiple URL processing 2025-03-13 22:30:38 +08:00
UncleCode
b750542e6d feat(crawler): optimize single URL handling and add performance comparison
Add special handling for single URL requests in Docker API to use arun() instead of arun_many()
Add new example script demonstrating performance differences between sequential and parallel crawling
Update cache mode from aggressive to bypass in examples and tests
Remove unused dependencies (zstandard, msgpack)

BREAKING CHANGE: Changed default cache_mode from aggressive to bypass in examples
2025-03-13 22:15:15 +08:00
UncleCode
dc36997a08 feat(schema): improve HTML preprocessing for schema generation
Add new preprocess_html_for_schema utility function to better handle HTML cleaning
for schema generation. This replaces the previous optimize_html function in the
GoogleSearchCrawler and includes smarter attribute handling and pattern detection.

Other changes:
- Update default provider to gpt-4o
- Add DEFAULT_PROVIDER_API_KEY constant
- Make LLMConfig creation more flexible with create_llm_config helper
- Add new dependencies: zstandard and msgpack

This change improves schema generation reliability while reducing noise in the
processed HTML.
2025-03-12 22:40:46 +08:00
UncleCode
1630fbdafe feat(monitor): add real-time crawler monitoring system with memory management
Implements a comprehensive monitoring and visualization system for tracking web crawler operations in real-time. The system includes:
- Terminal-based dashboard with rich UI for displaying task statuses
- Memory pressure monitoring and adaptive dispatch control
- Queue statistics and performance metrics tracking
- Detailed task progress visualization
- Stress testing framework for memory management

This addition helps operators track crawler performance and manage memory usage more effectively.
2025-03-12 19:05:24 +08:00
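The aggregated view the monitor renders — per-status counts with percentages, guarded against an empty task list — can be sketched without the `rich` table machinery (names below are illustrative):

```python
from collections import Counter

def aggregate(statuses: list[str]) -> dict:
    """Count tasks per status and format each share as a percentage."""
    total = len(statuses)
    counts = Counter(statuses)

    def pct(n: int) -> str:
        # Guard the division: an empty task list reports "0%".
        return f"{(n / total * 100):.1f}%" if total > 0 else "0%"

    return {
        status: (counts[status], pct(counts[status]))
        for status in ("queued", "in_progress", "completed", "failed")
    }
```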
117 changed files with 16423 additions and 9016 deletions

.gitignore (vendored)

@@ -255,3 +255,6 @@ continue_config.json
.llm.env
.private/
CLAUDE_MONITOR.md
CLAUDE.md

CHANGELOG.md

@@ -5,6 +5,39 @@ All notable changes to Crawl4AI will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## Version 0.5.0.post5 (2025-03-14)
### Added
- *(crawler)* Add experimental parameters dictionary to CrawlerRunConfig to support beta features
- *(tables)* Add comprehensive table detection and extraction functionality with scoring system
- *(monitor)* Add real-time crawler monitoring system with memory management
- *(content)* Add target_elements parameter for selective content extraction
- *(browser)* Add standalone CDP browser launch capability
- *(schema)* Add preprocess_html_for_schema utility for better HTML cleaning
- *(api)* Add special handling for single URL requests in Docker API
### Changed
- *(filters)* Add reverse option to URLPatternFilter for inverting filter logic
- *(browser)* Make CSP nonce headers optional via experimental config
- *(browser)* Remove default cookie injection from page initialization
- *(crawler)* Optimize response handling for single-URL processing
- *(api)* Refactor crawl request handling to streamline processing
- *(config)* Update default provider to gpt-4o
- *(cache)* Change default cache_mode from aggressive to bypass in examples
### Fixed
- *(browser)* Clean up browser context creation code
- *(api)* Improve code formatting in API handler
### Breaking Changes
- WebScrapingStrategy no longer returns 'scraped_html' in its output dictionary
- Table extraction logic has been modified to better handle thead/tbody structures
- Default cookie injection has been removed from page initialization
## Version 0.5.0 (2025-03-02)
### Added

crawl4ai/__init__.py

@@ -4,6 +4,12 @@ import warnings
from .async_webcrawler import AsyncWebCrawler, CacheMode
from .async_configs import BrowserConfig, CrawlerRunConfig, HTTPCrawlerConfig, LLMConfig
from .pipeline.pipeline import (
Pipeline,
create_pipeline,
)
from .pipeline.crawler import Crawler
from .content_scraping_strategy import (
ContentScrapingStrategy,
WebScrapingStrategy,
@@ -33,13 +39,12 @@ from .content_filter_strategy import (
     LLMContentFilter,
     RelevantContentFilter,
 )
-from .models import CrawlResult, MarkdownGenerationResult
+from .models import CrawlResult, MarkdownGenerationResult, DisplayMode
+from .components.crawler_monitor import CrawlerMonitor
 from .async_dispatcher import (
     MemoryAdaptiveDispatcher,
     SemaphoreDispatcher,
     RateLimiter,
-    CrawlerMonitor,
-    DisplayMode,
     BaseDispatcher,
 )
 from .docker_client import Crawl4aiDockerClient
@@ -66,7 +71,14 @@ from .deep_crawling import (
DeepCrawlDecorator,
)
from .async_crawler_strategy import AsyncPlaywrightCrawlerStrategy, AsyncHTTPCrawlerStrategy
__all__ = [
"Pipeline",
"AsyncPlaywrightCrawlerStrategy",
"AsyncHTTPCrawlerStrategy",
"create_pipeline",
"Crawler",
"AsyncLoggerBase",
"AsyncLogger",
"AsyncWebCrawler",

crawl4ai/_version.py

@@ -1,2 +1,2 @@
# crawl4ai/_version.py
-__version__ = "0.5.0.post4"
+__version__ = "0.5.0.post8"

crawl4ai/async_configs.py

@@ -1,6 +1,7 @@
import os
from .config import (
DEFAULT_PROVIDER,
DEFAULT_PROVIDER_API_KEY,
MIN_WORD_THRESHOLD,
IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD,
PROVIDER_MODELS,
@@ -14,7 +15,7 @@ from .user_agent_generator import UAGen, ValidUAGenerator # , OnlineUAGenerator
from .extraction_strategy import ExtractionStrategy, LLMExtractionStrategy
from .chunking_strategy import ChunkingStrategy, RegexChunking
-from .markdown_generation_strategy import MarkdownGenerationStrategy
+from .markdown_generation_strategy import MarkdownGenerationStrategy, DefaultMarkdownGenerator
from .content_scraping_strategy import ContentScrapingStrategy, WebScrapingStrategy
from .deep_crawling import DeepCrawlStrategy
@@ -27,6 +28,10 @@ from typing import Any, Dict, Optional
from enum import Enum
from .proxy_strategy import ProxyConfig
try:
from .browser.models import DockerConfig
except ImportError:
DockerConfig = None
def to_serializable_dict(obj: Any, ignore_default_value : bool = False) -> Dict:
@@ -168,6 +173,12 @@ class BrowserConfig:
Default: "chromium".
headless (bool): Whether to run the browser in headless mode (no visible GUI).
Default: True.
browser_mode (str): Determines how the browser should be initialized:
"builtin" - use the builtin CDP browser running in background
"dedicated" - create a new dedicated browser instance each time
"cdp" - use explicit CDP settings provided in cdp_url
"docker" - run browser in Docker container with isolation
Default: "dedicated"
use_managed_browser (bool): Launch the browser using a managed approach (e.g., via CDP), allowing
advanced manipulation. Default: False.
cdp_url (str): URL for the Chrome DevTools Protocol (CDP) endpoint. Default: "ws://localhost:9222/devtools/browser/".
@@ -184,6 +195,8 @@ class BrowserConfig:
Default: None.
proxy_config (ProxyConfig or dict or None): Detailed proxy configuration, e.g. {"server": "...", "username": "..."}.
If None, no additional proxy config. Default: None.
docker_config (DockerConfig or dict or None): Configuration for Docker-based browser automation.
Contains settings for Docker container operation. Default: None.
viewport_width (int): Default viewport width for pages. Default: 1080.
viewport_height (int): Default viewport height for pages. Default: 600.
viewport (dict): Default viewport dimensions for pages. If set, overrides viewport_width and viewport_height.
@@ -194,7 +207,7 @@ class BrowserConfig:
Default: False.
downloads_path (str or None): Directory to store downloaded files. If None and accept_downloads is True,
a default path will be created. Default: None.
storage_state (str or dict or None): Path or object describing storage state (cookies, localStorage).
storage_state (str or dict or None): An in-memory storage state (cookies, localStorage).
Default: None.
ignore_https_errors (bool): Ignore HTTPS certificate errors. Default: True.
java_script_enabled (bool): Enable JavaScript execution in pages. Default: True.
@@ -220,6 +233,7 @@ class BrowserConfig:
self,
browser_type: str = "chromium",
headless: bool = True,
browser_mode: str = "dedicated",
use_managed_browser: bool = False,
cdp_url: str = None,
use_persistent_context: bool = False,
@@ -228,6 +242,7 @@ class BrowserConfig:
channel: str = "chromium",
proxy: str = None,
proxy_config: Union[ProxyConfig, dict, None] = None,
docker_config: Union[DockerConfig, dict, None] = None,
viewport_width: int = 1080,
viewport_height: int = 600,
viewport: dict = None,
@@ -255,7 +270,8 @@ class BrowserConfig:
host: str = "localhost",
):
self.browser_type = browser_type
self.headless = headless
self.browser_mode = browser_mode
self.use_managed_browser = use_managed_browser
self.cdp_url = cdp_url
self.use_persistent_context = use_persistent_context
@@ -267,6 +283,16 @@ class BrowserConfig:
self.chrome_channel = ""
self.proxy = proxy
self.proxy_config = proxy_config
# Handle docker configuration
if isinstance(docker_config, dict) and DockerConfig is not None:
self.docker_config = DockerConfig.from_kwargs(docker_config)
else:
self.docker_config = docker_config
if self.docker_config:
self.user_data_dir = self.docker_config.user_data_dir
self.viewport_width = viewport_width
self.viewport_height = viewport_height
self.viewport = viewport
@@ -289,6 +315,7 @@ class BrowserConfig:
self.sleep_on_close = sleep_on_close
self.verbose = verbose
self.debugging_port = debugging_port
self.host = host
fa_user_agenr_generator = ValidUAGenerator()
if self.user_agent_mode == "random":
@@ -301,6 +328,22 @@ class BrowserConfig:
self.browser_hint = UAGen.generate_client_hints(self.user_agent)
self.headers.setdefault("sec-ch-ua", self.browser_hint)
# Set appropriate browser management flags based on browser_mode
if self.browser_mode == "builtin":
# Builtin mode uses managed browser connecting to builtin CDP endpoint
self.use_managed_browser = True
# cdp_url will be set later by browser_manager
elif self.browser_mode == "docker":
# Docker mode uses managed browser with CDP to connect to browser in container
self.use_managed_browser = True
# cdp_url will be set later by docker browser strategy
elif self.browser_mode == "custom" and self.cdp_url:
# Custom mode with explicit CDP URL
self.use_managed_browser = True
elif self.browser_mode == "dedicated":
# Dedicated mode uses a new browser instance each time
pass
# If persistent context is requested, ensure managed browser is enabled
if self.use_persistent_context:
self.use_managed_browser = True
@@ -310,6 +353,7 @@ class BrowserConfig:
return BrowserConfig(
browser_type=kwargs.get("browser_type", "chromium"),
headless=kwargs.get("headless", True),
browser_mode=kwargs.get("browser_mode", "dedicated"),
use_managed_browser=kwargs.get("use_managed_browser", False),
cdp_url=kwargs.get("cdp_url"),
use_persistent_context=kwargs.get("use_persistent_context", False),
@@ -318,6 +362,7 @@ class BrowserConfig:
channel=kwargs.get("channel", "chromium"),
proxy=kwargs.get("proxy"),
proxy_config=kwargs.get("proxy_config", None),
docker_config=kwargs.get("docker_config", None),
viewport_width=kwargs.get("viewport_width", 1080),
viewport_height=kwargs.get("viewport_height", 600),
accept_downloads=kwargs.get("accept_downloads", False),
@@ -337,12 +382,15 @@ class BrowserConfig:
text_mode=kwargs.get("text_mode", False),
light_mode=kwargs.get("light_mode", False),
extra_args=kwargs.get("extra_args", []),
debugging_port=kwargs.get("debugging_port", 9222),
host=kwargs.get("host", "localhost"),
)
def to_dict(self):
-        return {
+        result = {
"browser_type": self.browser_type,
"headless": self.headless,
"browser_mode": self.browser_mode,
"use_managed_browser": self.use_managed_browser,
"cdp_url": self.cdp_url,
"use_persistent_context": self.use_persistent_context,
@@ -369,7 +417,17 @@ class BrowserConfig:
"sleep_on_close": self.sleep_on_close,
"verbose": self.verbose,
"debugging_port": self.debugging_port,
"host": self.host,
}
# Include docker_config if it exists
if hasattr(self, "docker_config") and self.docker_config is not None:
if hasattr(self.docker_config, "to_dict"):
result["docker_config"] = self.docker_config.to_dict()
else:
result["docker_config"] = self.docker_config
return result
def clone(self, **kwargs):
"""Create a copy of this configuration with updated values.
@@ -649,6 +707,12 @@ class CrawlerRunConfig():
user_agent_generator_config (dict or None): Configuration for user agent generation if user_agent_mode is set.
Default: None.
# Experimental Parameters
experimental (dict): Dictionary containing experimental parameters that are in beta phase.
This allows passing temporary features that are not yet fully integrated
into the main parameter set.
Default: None.
url: str = None # This is not a compulsory parameter
"""
@@ -658,7 +722,7 @@ class CrawlerRunConfig():
word_count_threshold: int = MIN_WORD_THRESHOLD,
extraction_strategy: ExtractionStrategy = None,
chunking_strategy: ChunkingStrategy = RegexChunking(),
-        markdown_generator: MarkdownGenerationStrategy = None,
+        markdown_generator: MarkdownGenerationStrategy = DefaultMarkdownGenerator(),
only_text: bool = False,
css_selector: str = None,
target_elements: List[str] = None,
@@ -731,6 +795,8 @@ class CrawlerRunConfig():
user_agent_generator_config: dict = {},
# Deep Crawl Parameters
deep_crawl_strategy: Optional[DeepCrawlStrategy] = None,
# Experimental Parameters
experimental: Dict[str, Any] = None,
):
# TODO: Planning to set properties dynamically based on the __init__ signature
self.url = url
@@ -844,6 +910,9 @@ class CrawlerRunConfig():
# Deep Crawl Parameters
self.deep_crawl_strategy = deep_crawl_strategy
# Experimental Parameters
self.experimental = experimental or {}
def __getattr__(self, name):
@@ -952,6 +1021,8 @@ class CrawlerRunConfig():
# Deep Crawl Parameters
deep_crawl_strategy=kwargs.get("deep_crawl_strategy"),
url=kwargs.get("url"),
# Experimental Parameters
experimental=kwargs.get("experimental"),
)
# Create a funciton returns dict of the object
@@ -1036,6 +1107,7 @@ class CrawlerRunConfig():
"user_agent_generator_config": self.user_agent_generator_config,
"deep_crawl_strategy": self.deep_crawl_strategy,
"url": self.url,
"experimental": self.experimental,
}
def clone(self, **kwargs):
@@ -1071,6 +1143,13 @@ class LLMConfig:
provider: str = DEFAULT_PROVIDER,
api_token: Optional[str] = None,
base_url: Optional[str] = None,
temprature: Optional[float] = None,
max_tokens: Optional[int] = None,
top_p: Optional[float] = None,
frequency_penalty: Optional[float] = None,
presence_penalty: Optional[float] = None,
stop: Optional[List[str]] = None,
n: Optional[int] = None,
):
"""Configuaration class for LLM provider and API token."""
self.provider = provider
@@ -1080,10 +1159,16 @@ class LLMConfig:
self.api_token = os.getenv(api_token[4:])
else:
self.api_token = PROVIDER_MODELS.get(provider, "no-token") or os.getenv(
-                "OPENAI_API_KEY"
+                DEFAULT_PROVIDER_API_KEY
)
self.base_url = base_url
self.temprature = temprature
self.max_tokens = max_tokens
self.top_p = top_p
self.frequency_penalty = frequency_penalty
self.presence_penalty = presence_penalty
self.stop = stop
self.n = n
@staticmethod
def from_kwargs(kwargs: dict) -> "LLMConfig":
@@ -1091,13 +1176,27 @@ class LLMConfig:
provider=kwargs.get("provider", DEFAULT_PROVIDER),
api_token=kwargs.get("api_token"),
base_url=kwargs.get("base_url"),
temprature=kwargs.get("temprature"),
max_tokens=kwargs.get("max_tokens"),
top_p=kwargs.get("top_p"),
frequency_penalty=kwargs.get("frequency_penalty"),
presence_penalty=kwargs.get("presence_penalty"),
stop=kwargs.get("stop"),
n=kwargs.get("n")
)
def to_dict(self):
return {
"provider": self.provider,
"api_token": self.api_token,
-            "base_url": self.base_url
+            "base_url": self.base_url,
"temprature": self.temprature,
"max_tokens": self.max_tokens,
"top_p": self.top_p,
"frequency_penalty": self.frequency_penalty,
"presence_penalty": self.presence_penalty,
"stop": self.stop,
"n": self.n
}
def clone(self, **kwargs):

crawl4ai/async_crawler_strategy.py

@@ -505,12 +505,17 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
)
# Get page for session
-        page, context = await self.browser_manager.get_page(crawlerRunConfig=config)
+        try:
+            page, context, _ = await self.browser_manager.get_page(crawlerRunConfig=config)
+        except Exception as e:
+            page, context = await self.browser_manager.get_page(crawlerRunConfig=config)
         # await page.goto(URL)
         # Add default cookie
-        await context.add_cookies(
-            [{"name": "cookiesEnabled", "value": "true", "url": url}]
-        )
+        # await context.add_cookies(
+        #     [{"name": "cookiesEnabled", "value": "true", "url": url}]
+        # )
# Handle navigator overrides
if config.override_navigator or config.simulate_user or config.magic:
@@ -562,14 +567,15 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
try:
# Generate a unique nonce for this request
-            nonce = hashlib.sha256(os.urandom(32)).hexdigest()
-            # Add CSP headers to the request
-            await page.set_extra_http_headers(
-                {
-                    "Content-Security-Policy": f"default-src 'self'; script-src 'self' 'nonce-{nonce}' 'strict-dynamic'"
-                }
-            )
+            if config.experimental.get("use_csp_nonce", False):
+                nonce = hashlib.sha256(os.urandom(32)).hexdigest()
+                # Add CSP headers to the request
+                await page.set_extra_http_headers(
+                    {
+                        "Content-Security-Policy": f"default-src 'self'; script-src 'self' 'nonce-{nonce}' 'strict-dynamic'"
+                    }
+                )
response = await page.goto(
url, wait_until=config.wait_until, timeout=config.page_timeout
@@ -619,7 +625,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
except Error:
visibility_info = await self.check_visibility(page)
-            if self.config.verbose:
+            if config.verbose:
self.logger.debug(
message="Body visibility info: {info}",
tag="DEBUG",

crawl4ai/async_database.py

@@ -4,19 +4,14 @@ import aiosqlite
import asyncio
from typing import Optional, Dict
from contextlib import asynccontextmanager
import json # Added for serialization/deserialization
from .utils import ensure_content_dirs, generate_content_hash
import json
from .models import CrawlResult, MarkdownGenerationResult, StringCompatibleMarkdown
# , StringCompatibleMarkdown
import aiofiles
from .utils import VersionManager
from .async_logger import AsyncLogger
from .utils import get_error_context, create_box_message
# Set up logging
# logging.basicConfig(level=logging.INFO)
# logger = logging.getLogger(__name__)
# logger.setLevel(logging.INFO)
from .utils import ensure_content_dirs, generate_content_hash
from .utils import VersionManager
from .utils import get_error_context, create_box_message
base_directory = DB_PATH = os.path.join(
os.getenv("CRAWL4_AI_BASE_DIRECTORY", Path.home()), ".crawl4ai"

crawl4ai/async_dispatcher.py

@@ -4,17 +4,15 @@ from .models import (
CrawlResult,
CrawlerTaskResult,
CrawlStatus,
DisplayMode,
CrawlStats,
DomainState,
)
from rich.live import Live
from rich.table import Table
from rich.console import Console
from rich import box
from datetime import timedelta, datetime
from .components.crawler_monitor import CrawlerMonitor
from .types import AsyncWebCrawler
from collections.abc import AsyncGenerator
import time
import psutil
import asyncio
@@ -24,8 +22,6 @@ from urllib.parse import urlparse
import random
from abc import ABC, abstractmethod
from math import inf as infinity
class RateLimiter:
def __init__(
@@ -87,201 +83,6 @@ class RateLimiter:
return True
class CrawlerMonitor:
def __init__(
self,
max_visible_rows: int = 15,
display_mode: DisplayMode = DisplayMode.DETAILED,
):
self.console = Console()
self.max_visible_rows = max_visible_rows
self.display_mode = display_mode
self.stats: Dict[str, CrawlStats] = {}
self.process = psutil.Process()
self.start_time = time.time()
self.live = Live(self._create_table(), refresh_per_second=2)
def start(self):
self.live.start()
def stop(self):
self.live.stop()
def add_task(self, task_id: str, url: str):
self.stats[task_id] = CrawlStats(
task_id=task_id, url=url, status=CrawlStatus.QUEUED
)
self.live.update(self._create_table())
def update_task(self, task_id: str, **kwargs):
if task_id in self.stats:
for key, value in kwargs.items():
setattr(self.stats[task_id], key, value)
self.live.update(self._create_table())
def _create_aggregated_table(self) -> Table:
"""Creates a compact table showing only aggregated statistics"""
table = Table(
box=box.ROUNDED,
title="Crawler Status Overview",
title_style="bold magenta",
header_style="bold blue",
show_lines=True,
)
# Calculate statistics
total_tasks = len(self.stats)
queued = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.QUEUED
)
in_progress = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.IN_PROGRESS
)
completed = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.COMPLETED
)
failed = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.FAILED
)
# Memory statistics
current_memory = self.process.memory_info().rss / (1024 * 1024)
total_task_memory = sum(stat.memory_usage for stat in self.stats.values())
peak_memory = max(
(stat.peak_memory for stat in self.stats.values()), default=0.0
)
# Duration
duration = time.time() - self.start_time
# Create status row
table.add_column("Status", style="bold cyan")
table.add_column("Count", justify="right")
table.add_column("Percentage", justify="right")
table.add_row("Total Tasks", str(total_tasks), "100%")
table.add_row(
"[yellow]In Queue[/yellow]",
str(queued),
f"{(queued / total_tasks * 100):.1f}%" if total_tasks > 0 else "0%",
)
table.add_row(
"[blue]In Progress[/blue]",
str(in_progress),
f"{(in_progress / total_tasks * 100):.1f}%" if total_tasks > 0 else "0%",
)
table.add_row(
"[green]Completed[/green]",
str(completed),
f"{(completed / total_tasks * 100):.1f}%" if total_tasks > 0 else "0%",
)
table.add_row(
"[red]Failed[/red]",
str(failed),
f"{(failed / total_tasks * 100):.1f}%" if total_tasks > 0 else "0%",
)
# Add memory information
table.add_section()
table.add_row(
"[magenta]Current Memory[/magenta]", f"{current_memory:.1f} MB", ""
)
table.add_row(
"[magenta]Total Task Memory[/magenta]", f"{total_task_memory:.1f} MB", ""
)
table.add_row(
"[magenta]Peak Task Memory[/magenta]", f"{peak_memory:.1f} MB", ""
)
table.add_row(
"[yellow]Runtime[/yellow]",
str(timedelta(seconds=int(duration))),
"",
)
return table
def _create_detailed_table(self) -> Table:
table = Table(
box=box.ROUNDED,
title="Crawler Performance Monitor",
title_style="bold magenta",
header_style="bold blue",
)
# Add columns
table.add_column("Task ID", style="cyan", no_wrap=True)
table.add_column("URL", style="cyan", no_wrap=True)
table.add_column("Status", style="bold")
table.add_column("Memory (MB)", justify="right")
table.add_column("Peak (MB)", justify="right")
table.add_column("Duration", justify="right")
table.add_column("Info", style="italic")
# Add summary row
total_memory = sum(stat.memory_usage for stat in self.stats.values())
active_count = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.IN_PROGRESS
)
completed_count = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.COMPLETED
)
failed_count = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.FAILED
)
table.add_row(
"[bold yellow]SUMMARY",
f"Total: {len(self.stats)}",
f"Active: {active_count}",
f"{total_memory:.1f}",
f"{self.process.memory_info().rss / (1024 * 1024):.1f}",
str(
timedelta(
seconds=int(time.time() - self.start_time)
)
),
f"{completed_count}{failed_count}",
style="bold",
)
table.add_section()
# Add rows for each task
visible_stats = sorted(
self.stats.values(),
key=lambda x: (
x.status != CrawlStatus.IN_PROGRESS,
x.status != CrawlStatus.QUEUED,
x.end_time or float("inf"),
),
)[: self.max_visible_rows]
for stat in visible_stats:
status_style = {
CrawlStatus.QUEUED: "white",
CrawlStatus.IN_PROGRESS: "yellow",
CrawlStatus.COMPLETED: "green",
CrawlStatus.FAILED: "red",
}[stat.status]
table.add_row(
stat.task_id[:8], # Show first 8 chars of task ID
stat.url[:40] + "..." if len(stat.url) > 40 else stat.url,
f"[{status_style}]{stat.status.value}[/{status_style}]",
f"{stat.memory_usage:.1f}",
f"{stat.peak_memory:.1f}",
stat.duration,
stat.error_message[:40] if stat.error_message else "",
)
return table
def _create_table(self) -> Table:
"""Creates the appropriate table based on display mode"""
if self.display_mode == DisplayMode.AGGREGATED:
return self._create_aggregated_table()
return self._create_detailed_table()
class BaseDispatcher(ABC):
def __init__(
@@ -309,7 +110,7 @@ class BaseDispatcher(ABC):
async def run_urls(
self,
urls: List[str],
crawler: "AsyncWebCrawler", # noqa: F821
crawler: AsyncWebCrawler, # noqa: F821
config: CrawlerRunConfig,
monitor: Optional[CrawlerMonitor] = None,
) -> List[CrawlerTaskResult]:
@@ -320,71 +121,144 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
def __init__(
self,
memory_threshold_percent: float = 90.0,
critical_threshold_percent: float = 95.0, # New critical threshold
recovery_threshold_percent: float = 85.0, # New recovery threshold
check_interval: float = 1.0,
max_session_permit: int = 20,
memory_wait_timeout: float = 300.0, # 5 minutes default timeout
fairness_timeout: float = 600.0, # 10 minutes before prioritizing long-waiting URLs
rate_limiter: Optional[RateLimiter] = None,
monitor: Optional[CrawlerMonitor] = None,
):
super().__init__(rate_limiter, monitor)
self.memory_threshold_percent = memory_threshold_percent
self.critical_threshold_percent = critical_threshold_percent
self.recovery_threshold_percent = recovery_threshold_percent
self.check_interval = check_interval
self.max_session_permit = max_session_permit
self.memory_wait_timeout = memory_wait_timeout
self.fairness_timeout = fairness_timeout
self.result_queue = asyncio.Queue()
self.task_queue = asyncio.PriorityQueue() # Priority queue for better management
self.memory_pressure_mode = False # Flag to indicate when we're in memory pressure mode
self.current_memory_percent = 0.0 # Track current memory usage
async def _memory_monitor_task(self):
"""Background task to continuously monitor memory usage and update state"""
while True:
self.current_memory_percent = psutil.virtual_memory().percent
# Enter memory pressure mode if we cross the threshold
if not self.memory_pressure_mode and self.current_memory_percent >= self.memory_threshold_percent:
self.memory_pressure_mode = True
if self.monitor:
self.monitor.update_memory_status("PRESSURE")
# Exit memory pressure mode if we go below recovery threshold
elif self.memory_pressure_mode and self.current_memory_percent <= self.recovery_threshold_percent:
self.memory_pressure_mode = False
if self.monitor:
self.monitor.update_memory_status("NORMAL")
# In critical mode, we might need to take more drastic action
if self.current_memory_percent >= self.critical_threshold_percent:
if self.monitor:
self.monitor.update_memory_status("CRITICAL")
# We could implement additional memory-saving measures here
await asyncio.sleep(self.check_interval)
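The enter/exit thresholds above form a hysteresis band: pressure mode is entered at `memory_threshold_percent` but only left below `recovery_threshold_percent`, so readings in between never flip the flag. A pure-function sketch (names are illustrative, not the dispatcher's API) makes the transitions explicit:

```python
def next_pressure_state(in_pressure: bool, memory_percent: float,
                        threshold: float = 90.0, recovery: float = 85.0) -> bool:
    """Return the new pressure-mode flag for one monitor tick.

    Entering requires crossing `threshold`; leaving requires dropping below
    `recovery`, so readings between the two never flip the mode.
    """
    if not in_pressure and memory_percent >= threshold:
        return True
    if in_pressure and memory_percent <= recovery:
        return False
    return in_pressure
```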
def _get_priority_score(self, wait_time: float, retry_count: int) -> float:
"""Calculate priority score (lower is higher priority)
- URLs waiting longer than fairness_timeout get higher priority
- More retry attempts decreases priority
"""
if wait_time > self.fairness_timeout:
# High priority for long-waiting URLs
return -wait_time
# Standard priority based on retries
return retry_count
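How this score interacts with a priority queue (lower scores pop first) can be seen in a standalone sketch using `heapq`, the same ordering `asyncio.PriorityQueue` relies on; the constants and labels here are illustrative:

```python
import heapq

FAIRNESS_TIMEOUT = 600.0  # mirrors the dispatcher's default fairness_timeout

def priority_score(wait_time: float, retry_count: int) -> float:
    # Long-waiting URLs jump ahead of everything (large negative score);
    # otherwise each retry pushes the task further back in line.
    if wait_time > FAIRNESS_TIMEOUT:
        return -wait_time
    return retry_count

heap = []
heapq.heappush(heap, (priority_score(5.0, 2), "retried-twice"))
heapq.heappush(heap, (priority_score(700.0, 0), "long-waiting"))
heapq.heappush(heap, (priority_score(5.0, 0), "fresh"))
order = [heapq.heappop(heap)[1] for _ in range(3)]
```

The URL that has waited past the fairness timeout is served first, then the fresh one, then the twice-retried one.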
async def crawl_url(
self,
url: str,
config: CrawlerRunConfig,
task_id: str,
retry_count: int = 0,
) -> CrawlerTaskResult:
start_time = time.time()
error_message = ""
memory_usage = peak_memory = 0.0
# Get starting memory for accurate measurement
process = psutil.Process()
start_memory = process.memory_info().rss / (1024 * 1024)
try:
if self.monitor:
self.monitor.update_task(
task_id,
status=CrawlStatus.IN_PROGRESS,
start_time=start_time,
retry_count=retry_count
)
self.concurrent_sessions += 1
if self.rate_limiter:
await self.rate_limiter.wait_if_needed(url)
# Check if we're in critical memory state
if self.current_memory_percent >= self.critical_threshold_percent:
# Requeue this task with increased priority and retry count
enqueue_time = time.time()
priority = self._get_priority_score(enqueue_time - start_time, retry_count + 1)
await self.task_queue.put((priority, (url, task_id, retry_count + 1, enqueue_time)))
# Update monitoring
if self.monitor:
self.monitor.update_task(
task_id,
status=CrawlStatus.QUEUED,
error_message="Requeued due to critical memory pressure"
)
# Return placeholder result with requeued status
return CrawlerTaskResult(
task_id=task_id,
url=url,
result=CrawlResult(
url=url, html="", metadata={"status": "requeued"},
success=False, error_message="Requeued due to critical memory pressure"
),
memory_usage=0,
peak_memory=0,
start_time=start_time,
end_time=time.time(),
error_message="Requeued due to critical memory pressure",
retry_count=retry_count + 1
)
# Execute the crawl
result = await self.crawler.arun(url, config=config, session_id=task_id)
# Measure memory usage
end_memory = process.memory_info().rss / (1024 * 1024)
memory_usage = peak_memory = end_memory - start_memory
# Handle rate limiting
if self.rate_limiter and result.status_code:
if not self.rate_limiter.update_delay(url, result.status_code):
error_message = f"Rate limit retry count exceeded for domain {urlparse(url).netloc}"
if self.monitor:
self.monitor.update_task(task_id, status=CrawlStatus.FAILED)
result = CrawlerTaskResult(
task_id=task_id,
url=url,
result=result,
memory_usage=memory_usage,
peak_memory=peak_memory,
start_time=start_time,
end_time=time.time(),
error_message=error_message,
)
await self.result_queue.put(result)
return result
# Update status based on result
if not result.success:
error_message = result.error_message
if self.monitor:
self.monitor.update_task(task_id, status=CrawlStatus.FAILED)
elif self.monitor:
self.monitor.update_task(task_id, status=CrawlStatus.COMPLETED)
except Exception as e:
error_message = str(e)
if self.monitor:
@@ -392,7 +266,7 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
result = CrawlResult(
url=url, html="", metadata={}, success=False, error_message=str(e)
)
finally:
end_time = time.time()
if self.monitor:
@@ -402,9 +276,10 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
memory_usage=memory_usage,
peak_memory=peak_memory,
error_message=error_message,
retry_count=retry_count
)
self.concurrent_sessions -= 1
return CrawlerTaskResult(
task_id=task_id,
url=url,
@@ -414,116 +289,240 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
start_time=start_time,
end_time=end_time,
error_message=error_message,
retry_count=retry_count
)
async def run_urls(
self,
urls: List[str],
crawler: "AsyncWebCrawler", # noqa: F821
crawler: AsyncWebCrawler,
config: CrawlerRunConfig,
) -> List[CrawlerTaskResult]:
self.crawler = crawler
# Start the memory monitor task
memory_monitor = asyncio.create_task(self._memory_monitor_task())
if self.monitor:
self.monitor.start()
results = []
try:
for url in urls:
task_id = str(uuid.uuid4())
if self.monitor:
self.monitor.add_task(task_id, url)
# Add to queue with initial priority 0, retry count 0, and current time
await self.task_queue.put((0, (url, task_id, 0, time.time())))
active_tasks = []
# Process until both queues are empty
while not self.task_queue.empty() or active_tasks:
# If memory pressure is low, start new tasks
if not self.memory_pressure_mode and len(active_tasks) < self.max_session_permit:
try:
# Try to get a task with timeout to avoid blocking indefinitely
priority, (url, task_id, retry_count, enqueue_time) = await asyncio.wait_for(
self.task_queue.get(), timeout=0.1
)
# Create and start the task
task = asyncio.create_task(
self.crawl_url(url, config, task_id, retry_count)
)
active_tasks.append(task)
# Update waiting time in monitor
if self.monitor:
wait_time = time.time() - enqueue_time
self.monitor.update_task(
task_id,
wait_time=wait_time,
status=CrawlStatus.IN_PROGRESS
)
except asyncio.TimeoutError:
# No tasks in queue, that's fine
pass
# Wait for completion even if queue is starved
if active_tasks:
done, pending = await asyncio.wait(
active_tasks, timeout=0.1, return_when=asyncio.FIRST_COMPLETED
)
# Process completed tasks
for completed_task in done:
result = await completed_task
results.append(result)
# Update active tasks list
active_tasks = list(pending)
else:
await asyncio.sleep(self.check_interval)
# Update priorities for waiting tasks if needed
await self._update_queue_priorities()
return results
except Exception as e:
if self.monitor:
self.monitor.update_memory_status(f"QUEUE_ERROR: {str(e)}")
finally:
# Clean up
memory_monitor.cancel()
if self.monitor:
self.monitor.stop()
async def _update_queue_priorities(self):
"""Periodically update priorities of items in the queue to prevent starvation"""
# Skip if queue is empty
if self.task_queue.empty():
return
# Use a drain-and-refill approach to update all priorities
temp_items = []
# Drain the queue (with a safety timeout to prevent blocking)
try:
drain_start = time.time()
while not self.task_queue.empty() and time.time() - drain_start < 5.0: # 5 second safety timeout
try:
# Get item from queue with timeout
priority, (url, task_id, retry_count, enqueue_time) = await asyncio.wait_for(
self.task_queue.get(), timeout=0.1
)
# Calculate new priority based on current wait time
current_time = time.time()
wait_time = current_time - enqueue_time
new_priority = self._get_priority_score(wait_time, retry_count)
# Store with updated priority
temp_items.append((new_priority, (url, task_id, retry_count, enqueue_time)))
# Update monitoring stats for this task
if self.monitor and task_id in self.monitor.stats:
self.monitor.update_task(task_id, wait_time=wait_time)
except asyncio.TimeoutError:
# Queue might be empty or very slow
break
except Exception as e:
# If anything goes wrong, make sure we refill the queue with what we've got
self.monitor.update_memory_status(f"QUEUE_ERROR: {str(e)}")
# Calculate queue statistics
if temp_items and self.monitor:
total_queued = len(temp_items)
wait_times = [item[1][3] for item in temp_items]
highest_wait_time = time.time() - min(wait_times) if wait_times else 0
avg_wait_time = sum(time.time() - t for t in wait_times) / len(wait_times) if wait_times else 0
# Update queue statistics in monitor
self.monitor.update_queue_statistics(
total_queued=total_queued,
highest_wait_time=highest_wait_time,
avg_wait_time=avg_wait_time
)
# Sort by priority (lowest number = highest priority)
temp_items.sort(key=lambda x: x[0])
# Refill the queue with updated priorities
for item in temp_items:
await self.task_queue.put(item)
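Stripped of the asyncio plumbing, the drain-and-refill re-prioritization amounts to the following sketch, with a plain list standing in for the queue (the tuple layout matches the dispatcher's `(priority, (url, task_id, retry_count, enqueue_time))` entries):

```python
import time

def reprioritize(items, fairness_timeout=600.0, now=None):
    """Recompute each entry's priority from its current wait time.

    `items` is a list of (priority, (url, task_id, retry_count, enqueue_time));
    returns the list sorted so the lowest score (highest priority) is first.
    """
    now = time.time() if now is None else now
    updated = []
    for _priority, (url, task_id, retry_count, enqueue_time) in items:
        wait_time = now - enqueue_time
        # Same scoring rule as the dispatcher: starvation beats retry penalty.
        score = -wait_time if wait_time > fairness_timeout else retry_count
        updated.append((score, (url, task_id, retry_count, enqueue_time)))
    updated.sort(key=lambda entry: entry[0])
    return updated
```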
async def run_urls_stream(
self,
urls: List[str],
crawler: AsyncWebCrawler,
config: CrawlerRunConfig,
) -> AsyncGenerator[CrawlerTaskResult, None]:
self.crawler = crawler
# Start the memory monitor task
memory_monitor = asyncio.create_task(self._memory_monitor_task())
if self.monitor:
self.monitor.start()
try:
# Initialize task queue
for url in urls:
task_id = str(uuid.uuid4())
if self.monitor:
self.monitor.add_task(task_id, url)
# Add to queue with initial priority 0, retry count 0, and current time
await self.task_queue.put((0, (url, task_id, 0, time.time())))
active_tasks = []
completed_count = 0
total_urls = len(urls)
while completed_count < total_urls:
# If memory pressure is low, start new tasks
if not self.memory_pressure_mode and len(active_tasks) < self.max_session_permit:
try:
# Try to get a task with timeout
priority, (url, task_id, retry_count, enqueue_time) = await asyncio.wait_for(
self.task_queue.get(), timeout=0.1
)
# Create and start the task
task = asyncio.create_task(
self.crawl_url(url, config, task_id, retry_count)
)
active_tasks.append(task)
# Update waiting time in monitor
if self.monitor:
wait_time = time.time() - enqueue_time
self.monitor.update_task(
task_id,
wait_time=wait_time,
status=CrawlStatus.IN_PROGRESS
)
except asyncio.TimeoutError:
# No tasks in queue, that's fine
pass
# Process completed tasks and yield results
if active_tasks:
done, pending = await asyncio.wait(
active_tasks, timeout=0.1, return_when=asyncio.FIRST_COMPLETED
)
for completed_task in done:
result = await completed_task
# Only count as completed if it wasn't requeued
if "requeued" not in result.error_message:
completed_count += 1
yield result
# Update active tasks list
active_tasks = list(pending)
else:
# If no active tasks but still waiting, sleep briefly
await asyncio.sleep(self.check_interval / 2)
# Update priorities for waiting tasks if needed
await self._update_queue_priorities()
finally:
# Clean up
memory_monitor.cancel()
if self.monitor:
self.monitor.stop()
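At its core, the streaming loop above is a bounded worker pool that yields each result as soon as any in-flight task finishes. A self-contained sketch of that pattern, with plain coroutines standing in for real crawls (all names here are illustrative):

```python
import asyncio

async def run_stream(urls, worker, limit=3):
    """Yield worker(url) results as they complete, at most `limit` in flight."""
    pending_urls = list(urls)
    active = set()
    while pending_urls or active:
        # Top up the pool while capacity and work remain.
        while pending_urls and len(active) < limit:
            active.add(asyncio.create_task(worker(pending_urls.pop(0))))
        done, active = await asyncio.wait(active, return_when=asyncio.FIRST_COMPLETED)
        for task in done:
            yield task.result()

async def fake_crawl(url):
    await asyncio.sleep(0.01)  # stand-in for the real page fetch
    return f"crawled:{url}"

async def main():
    return [r async for r in run_stream(["a", "b", "c", "d"], fake_crawl, limit=2)]

results = asyncio.run(main())
```

The real dispatcher layers memory-pressure gating, requeueing, and priority aging on top of this skeleton.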
class SemaphoreDispatcher(BaseDispatcher):
def __init__(
@@ -620,7 +619,7 @@ class SemaphoreDispatcher(BaseDispatcher):
async def run_urls(
self,
crawler: "AsyncWebCrawler", # noqa: F821
crawler: AsyncWebCrawler, # noqa: F821
urls: List[str],
config: CrawlerRunConfig,
) -> List[CrawlerTaskResult]:
@@ -644,4 +643,4 @@ class SemaphoreDispatcher(BaseDispatcher):
return await asyncio.gather(*tasks, return_exceptions=True)
finally:
if self.monitor:
self.monitor.stop()

View File

@@ -156,9 +156,22 @@ class AsyncLogger(AsyncLoggerBase):
formatted_message = message.format(**params)
# Then apply colors if specified
color_map = {
"green": Fore.GREEN,
"red": Fore.RED,
"yellow": Fore.YELLOW,
"blue": Fore.BLUE,
"cyan": Fore.CYAN,
"magenta": Fore.MAGENTA,
"white": Fore.WHITE,
"black": Fore.BLACK,
"reset": Style.RESET_ALL,
}
if colors:
for key, color in colors.items():
# Find the formatted value in the message and wrap it with color
if color in color_map:
color = color_map[color]
if key in params:
value_str = str(params[key])
formatted_message = formatted_message.replace(
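A minimal sketch of this value-colorizing idea, using raw ANSI escapes instead of colorama so it stands alone (function and constant names are illustrative, not the logger's API):

```python
ANSI = {
    "green": "\x1b[32m",
    "red": "\x1b[31m",
    "yellow": "\x1b[33m",
    "reset": "\x1b[0m",
}

def colorize(message: str, params: dict, colors: dict) -> str:
    """Format `message` with `params`, then wrap each colored value in ANSI codes."""
    formatted = message.format(**params)
    for key, color in colors.items():
        if key in params and color in ANSI:
            value = str(params[key])
            formatted = formatted.replace(value, f"{ANSI[color]}{value}{ANSI['reset']}")
    return formatted
```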

View File

@@ -4,20 +4,26 @@ import sys
import time
from colorama import Fore
from pathlib import Path
from typing import Optional, List
import json
import asyncio
from contextlib import asynccontextmanager
from .models import (
CrawlResult,
MarkdownGenerationResult,
DispatchResult,
ScrapingResult,
CrawlResultContainer,
RunManyReturn
)
from .async_database import async_db_manager
from .chunking_strategy import * # noqa: F403
from .chunking_strategy import IdentityChunking
from .content_filter_strategy import * # noqa: F403
from .content_filter_strategy import RelevantContentFilter
from .extraction_strategy import * # noqa: F403
from .extraction_strategy import NoExtractionStrategy
from .async_crawler_strategy import (
AsyncCrawlerStrategy,
AsyncPlaywrightCrawlerStrategy,
@@ -31,10 +37,9 @@ from .markdown_generation_strategy import (
from .deep_crawling import DeepCrawlDecorator
from .async_logger import AsyncLogger, AsyncLoggerBase
from .async_configs import BrowserConfig, CrawlerRunConfig
from .async_dispatcher import * # noqa: F403
from .async_dispatcher import BaseDispatcher, MemoryAdaptiveDispatcher, RateLimiter
from .config import MIN_WORD_THRESHOLD
from .utils import (
sanitize_input_encode,
InvalidCSSSelectorError,
@@ -44,45 +49,6 @@ from .utils import (
RobotsParser,
)
from typing import Union, AsyncGenerator
class AsyncWebCrawler:
"""
@@ -195,23 +161,18 @@ class AsyncWebCrawler:
# Decorate arun method with deep crawling capabilities
self._deep_handler = DeepCrawlDecorator(self)
self.arun = self._deep_handler(self.arun)
async def start(self):
"""
Start the crawler explicitly without using context manager.
This is equivalent to using 'async with' but gives more control over the lifecycle.
This method will:
1. Initialize the browser and context
2. Perform warmup sequence
3. Return the crawler instance for method chaining
Returns:
AsyncWebCrawler: The initialized crawler instance
"""
await self.crawler_strategy.__aenter__()
await self.awarmup()
self.logger.info(f"Crawl4AI {crawl4ai_version}", tag="INIT")
self.ready = True
return self
async def close(self):
@@ -231,18 +192,6 @@ class AsyncWebCrawler:
async def __aexit__(self, exc_type, exc_val, exc_tb):
await self.close()
async def awarmup(self):
"""
Initialize the crawler with warm-up sequence.
This method:
1. Logs initialization info
2. Sets up browser configuration
3. Marks the crawler as ready
"""
self.logger.info(f"Crawl4AI {crawl4ai_version}", tag="INIT")
self.ready = True
@asynccontextmanager
async def nullcontext(self):
"""异步空上下文管理器"""
@@ -282,6 +231,10 @@ class AsyncWebCrawler:
Returns:
CrawlResult: The result of crawling and processing
"""
# Auto-start if not ready
if not self.ready:
await self.start()
config = config or CrawlerRunConfig()
if not isinstance(url, str) or not url:
raise ValueError("Invalid URL, make sure the URL is a non-empty string")
@@ -295,9 +248,7 @@ class AsyncWebCrawler:
config.cache_mode = CacheMode.ENABLED
# Create cache context
cache_context = CacheContext(url, config.cache_mode, False)
# Initialize processing variables
async_response: AsyncCrawlResponse = None
@@ -327,7 +278,7 @@ class AsyncWebCrawler:
# if config.screenshot and not screenshot or config.pdf and not pdf:
if config.screenshot and not screenshot_data:
cached_result = None
if config.pdf and not pdf_data:
cached_result = None
@@ -359,14 +310,18 @@ class AsyncWebCrawler:
# Check robots.txt if enabled
if config and config.check_robots_txt:
if not await self.robots_parser.can_fetch(
url, self.browser_config.user_agent
):
return CrawlResult(
url=url,
html="",
success=False,
status_code=403,
error_message="Access denied by robots.txt",
response_headers={"X-Robots-Status": "Blocked by robots.txt"}
response_headers={
"X-Robots-Status": "Blocked by robots.txt"
},
)
##############################
@@ -393,7 +348,7 @@ class AsyncWebCrawler:
###############################################################
# Process the HTML content, Call CrawlerStrategy.process_html #
###############################################################
crawl_result: CrawlResult = await self.aprocess_html(
url=url,
html=html,
extracted_content=extracted_content,
@@ -470,7 +425,7 @@ class AsyncWebCrawler:
tag="ERROR",
)
return CrawlResultContainer(
CrawlResult(
url=url, html="", success=False, error_message=error_message
)
@@ -515,15 +470,14 @@ class AsyncWebCrawler:
# Process HTML content
params = config.__dict__.copy()
params.pop("url", None)
params.pop("url", None)
# add keys from kwargs to params that don't already exist in params
params.update({k: v for k, v in kwargs.items() if k not in params.keys()})
################################
# Scraping Strategy Execution #
################################
result: ScrapingResult = scraping_strategy.scrap(url, html, **params)
if result is None:
raise ValueError(
@@ -572,7 +526,10 @@ class AsyncWebCrawler:
self.logger.info(
message="{url:.50}... | Time: {timing}s",
tag="SCRAPE",
params={"url": _url, "timing": int((time.perf_counter() - t1) * 1000) / 1000},
params={
"url": _url,
"timing": int((time.perf_counter() - t1) * 1000) / 1000,
},
)
################################
@@ -647,7 +604,7 @@ class AsyncWebCrawler:
async def arun_many(
self,
urls: List[str],
config: Optional[CrawlerRunConfig] = None,
dispatcher: Optional[BaseDispatcher] = None,
# Legacy parameters maintained for backwards compatibility
# word_count_threshold=MIN_WORD_THRESHOLD,
@@ -661,8 +618,8 @@ class AsyncWebCrawler:
# pdf: bool = False,
# user_agent: str = None,
# verbose=True,
**kwargs,
) -> RunManyReturn:
"""
Runs the crawler for multiple URLs concurrently using a configurable dispatcher strategy.
@@ -718,37 +675,32 @@ class AsyncWebCrawler:
def transform_result(task_result):
return (
setattr(
task_result.result,
"dispatch_result",
DispatchResult(
task_id=task_result.task_id,
memory_usage=task_result.memory_usage,
peak_memory=task_result.peak_memory,
start_time=task_result.start_time,
end_time=task_result.end_time,
error_message=task_result.error_message,
),
)
or task_result.result
)
stream = config.stream
if stream:
async def result_transformer():
async for task_result in dispatcher.run_urls_stream(
crawler=self, urls=urls, config=config
):
yield transform_result(task_result)
return result_transformer()
else:
_results = await dispatcher.run_urls(crawler=self, urls=urls, config=config)
return [transform_result(res) for res in _results]
async def aclear_cache(self):
"""Clear the cache database."""
await async_db_manager.cleanup()
async def aflush_cache(self):
"""Flush the cache database."""
await async_db_manager.aflush_db()
async def aget_cache_size(self):
"""Get the total number of cached items."""
return await async_db_manager.aget_total_count()

View File

@@ -0,0 +1,23 @@
"""Browser management module for Crawl4AI.
This module provides browser management capabilities using different strategies
for browser creation and interaction.
"""
from .manager import BrowserManager
from .profiles import BrowserProfileManager
from .models import DockerConfig
from .docker_registry import DockerRegistry
from .docker_utils import DockerUtils
from .browser_hub import BrowserHub
from .strategies import (
BaseBrowserStrategy,
PlaywrightBrowserStrategy,
CDPBrowserStrategy,
BuiltinBrowserStrategy,
DockerBrowserStrategy
)
__all__ = ['BrowserManager', 'BrowserProfileManager', 'DockerConfig', 'DockerRegistry', 'DockerUtils', 'BaseBrowserStrategy',
'PlaywrightBrowserStrategy', 'CDPBrowserStrategy', 'BuiltinBrowserStrategy',
'DockerBrowserStrategy', 'BrowserHub']

View File

@@ -0,0 +1,184 @@
# browser_hub_manager.py
import hashlib
import json
import asyncio
from typing import Dict, Optional, List, Tuple
from .manager import BrowserManager, UnavailableBehavior
from ..async_configs import BrowserConfig, CrawlerRunConfig
from ..async_logger import AsyncLogger
class BrowserHub:
"""
Manages Browser-Hub instances for sharing across multiple pipelines.
This class provides centralized management for browser resources, allowing
multiple pipelines to share browser instances efficiently, connect to
existing browser hubs, or create new ones with custom configurations.
"""
_instances: Dict[str, BrowserManager] = {}
_lock = asyncio.Lock()
@classmethod
async def get_browser_manager(
cls,
config: Optional[BrowserConfig] = None,
hub_id: Optional[str] = None,
connection_info: Optional[str] = None,
logger: Optional[AsyncLogger] = None,
max_browsers_per_config: int = 10,
max_pages_per_browser: int = 5,
initial_pool_size: int = 1,
page_configs: Optional[List[Tuple[BrowserConfig, CrawlerRunConfig, int]]] = None
) -> BrowserManager:
"""
Get an existing BrowserManager or create a new one based on parameters.
Args:
config: Browser configuration for new hub
hub_id: Identifier for the hub instance
connection_info: Connection string for existing hub
logger: Logger for recording events and errors
max_browsers_per_config: Maximum browsers per configuration
max_pages_per_browser: Maximum pages per browser
initial_pool_size: Initial number of browsers to create
page_configs: Optional configurations for pre-warming pages
Returns:
BrowserManager: The requested browser manager instance
"""
async with cls._lock:
# Scenario 3: Use existing hub via connection info
if connection_info:
instance_key = f"connection:{connection_info}"
if instance_key not in cls._instances:
cls._instances[instance_key] = await cls._connect_to_browser_hub(
connection_info, logger
)
return cls._instances[instance_key]
# Scenario 2: Custom configured hub
if config:
config_hash = cls._hash_config(config)
instance_key = hub_id or f"config:{config_hash}"
if instance_key not in cls._instances:
cls._instances[instance_key] = await cls._create_browser_manager(
config,
logger,
max_browsers_per_config,
max_pages_per_browser,
initial_pool_size,
page_configs
)
return cls._instances[instance_key]
# Scenario 1: Default hub
instance_key = "default"
if instance_key not in cls._instances:
cls._instances[instance_key] = await cls._create_default_browser_hub(
logger,
max_browsers_per_config,
max_pages_per_browser,
initial_pool_size
)
return cls._instances[instance_key]
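The registry logic above is a keyed async singleton: one lock guards a dict of instances, so concurrent callers asking for the same key get the same object. A generic sketch of the pattern, with no browser code (all names are illustrative):

```python
import asyncio

class KeyedRegistry:
    """Keyed async singleton: same key -> same instance, guarded by one lock."""

    def __init__(self):
        self._instances = {}
        self._lock = asyncio.Lock()

    async def get(self, key, factory):
        # The lock ensures the factory runs at most once per key, even when
        # several coroutines request the same key concurrently.
        async with self._lock:
            if key not in self._instances:
                self._instances[key] = await factory()
            return self._instances[key]

async def make_resource():
    await asyncio.sleep(0)  # stand-in for expensive startup (e.g. a browser)
    return object()

async def main():
    registry = KeyedRegistry()
    a, b = await asyncio.gather(
        registry.get("default", make_resource),
        registry.get("default", make_resource),
    )
    c = await registry.get("other", make_resource)
    return a, b, c

a, b, c = asyncio.run(main())
```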
@classmethod
async def _create_browser_manager(
cls,
config: BrowserConfig,
logger: Optional[AsyncLogger],
max_browsers_per_config: int,
max_pages_per_browser: int,
initial_pool_size: int,
page_configs: Optional[List[Tuple[BrowserConfig, CrawlerRunConfig, int]]] = None
) -> BrowserManager:
"""Create a new browser hub with the specified configuration."""
manager = BrowserManager(
browser_config=config,
logger=logger,
unavailable_behavior=UnavailableBehavior.ON_DEMAND,
max_browsers_per_config=max_browsers_per_config,
max_pages_per_browser=max_pages_per_browser,
)
# Initialize the pool
await manager.initialize_pool(
browser_configs=[config] if config else None,
browsers_per_config=initial_pool_size,
page_configs=page_configs
)
return manager
@classmethod
async def _create_default_browser_hub(
cls,
logger: Optional[AsyncLogger],
max_browsers_per_config: int,
max_pages_per_browser: int,
initial_pool_size: int
) -> BrowserManager:
"""Create a default browser hub with standard settings."""
config = BrowserConfig(headless=True)
return await cls._create_browser_manager(
config,
logger,
max_browsers_per_config,
max_pages_per_browser,
initial_pool_size,
None
)
@classmethod
async def _connect_to_browser_hub(
cls,
connection_info: str,
logger: Optional[AsyncLogger]
) -> BrowserManager:
"""
Connect to an existing browser hub.
Note: This is a placeholder for future remote connection functionality.
Currently creates a local instance.
"""
if logger:
logger.info(
message="Remote browser hub connections not yet implemented. Creating local instance.",
tag="BROWSER_HUB"
)
# For now, create a default local instance
return await cls._create_default_browser_hub(
logger,
max_browsers_per_config=10,
max_pages_per_browser=5,
initial_pool_size=1
)
@classmethod
def _hash_config(cls, config: BrowserConfig) -> str:
"""Create a hash of the browser configuration for identification."""
# Convert config to dictionary, excluding any callable objects
config_dict = config.__dict__.copy()
for key in list(config_dict.keys()):
if callable(config_dict[key]):
del config_dict[key]
# Convert to canonical JSON string
config_json = json.dumps(config_dict, sort_keys=True, default=str)
# Hash the JSON
config_hash = hashlib.sha256(config_json.encode()).hexdigest()
return config_hash
@classmethod
async def shutdown_all(cls):
"""Close all browser hub instances and clear the registry."""
async with cls._lock:
shutdown_tasks = []
for hub in cls._instances.values():
shutdown_tasks.append(hub.close())
if shutdown_tasks:
await asyncio.gather(*shutdown_tasks)
cls._instances.clear()
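BrowserHub keys pooled managers by `_hash_config`: a canonical-JSON SHA-256 over the config's attributes with callable fields stripped, so attribute order and function-valued settings never change a config's identity. A standalone sketch of the same idea (the `FakeConfig` class is a placeholder, not the real `BrowserConfig`):

```python
import hashlib
import json

def hash_config(config_obj) -> str:
    """Hash an object's attributes, skipping callables, as _hash_config does."""
    config_dict = {k: v for k, v in vars(config_obj).items() if not callable(v)}
    canonical = json.dumps(config_dict, sort_keys=True, default=str)
    return hashlib.sha256(canonical.encode()).hexdigest()

class FakeConfig:
    """Stand-in for BrowserConfig; attribute names here are illustrative."""
    def __init__(self, **kw):
        self.__dict__.update(kw)

a = FakeConfig(headless=True, viewport={"w": 1280}, on_page=lambda p: p)
b = FakeConfig(viewport={"w": 1280}, headless=True)  # same settings, different order
print(hash_config(a) == hash_config(b))  # True: order and callables don't matter
```

The `default=str` fallback means non-JSON-serializable values (paths, enums) still hash deterministically, at the cost of conflating values with identical string forms.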


@@ -0,0 +1,34 @@
# ---------- Dockerfile ----------
FROM alpine:latest
# Combine everything in one RUN to keep layers minimal.
RUN apk update && apk upgrade && \
apk add --no-cache \
chromium \
nss \
freetype \
harfbuzz \
ca-certificates \
ttf-freefont \
socat \
curl && \
addgroup -S chromium && adduser -S chromium -G chromium && \
mkdir -p /data && chown chromium:chromium /data && \
rm -rf /var/cache/apk/*
# Copy start script, then chown/chmod in one step
COPY start.sh /home/chromium/start.sh
RUN chown chromium:chromium /home/chromium/start.sh && \
chmod +x /home/chromium/start.sh
USER chromium
WORKDIR /home/chromium
# Expose the port socat listens on (socat forwards 9223 → Chrome's 9222 inside the container)
EXPOSE 9223
# Simple healthcheck: is the remote debug endpoint responding?
HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl -f http://localhost:9222/json/version || exit 1
CMD ["./start.sh"]
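Assuming the Dockerfile above is saved as `connect.Dockerfile` next to its `start.sh` (both file names, the image tag, and the host port below are illustrative), a typical build-and-run cycle might look like:

```shell
# Build the connect-mode image from the Dockerfile above
docker build -t crawl4ai/browser-connect:latest -f connect.Dockerfile .

# Run it, publishing the socat port 9223 on a free host port
docker run -d --name chromium-connect -p 9224:9223 crawl4ai/browser-connect:latest

# The HEALTHCHECK probes 9222 inside the container; from the host you
# query the forwarded port instead:
curl -s http://localhost:9224/json/version
```

These commands require a running Docker daemon; adjust the tag and port mapping to your environment.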


@@ -0,0 +1,27 @@
# ---------- Dockerfile (Idle Version) ----------
FROM alpine:latest
# Install only Chromium and its dependencies in a single layer
RUN apk update && apk upgrade && \
apk add --no-cache \
chromium \
nss \
freetype \
harfbuzz \
ca-certificates \
ttf-freefont \
socat \
curl && \
addgroup -S chromium && adduser -S chromium -G chromium && \
mkdir -p /data && chown chromium:chromium /data && \
rm -rf /var/cache/apk/*
ENV PATH="/usr/bin:/bin:/usr/sbin:/sbin"
# Switch to a non-root user for security
USER chromium
WORKDIR /home/chromium
# Idle: container does nothing except stay alive
CMD ["tail", "-f", "/dev/null"]


@@ -0,0 +1,23 @@
# Use Debian 12 (Bookworm) slim for a small, stable base image
FROM debian:bookworm-slim
ENV DEBIAN_FRONTEND=noninteractive
# Install Chromium, socat, and basic fonts
RUN apt-get update && apt-get install -y --no-install-recommends \
chromium \
wget \
curl \
socat \
fonts-freefont-ttf \
fonts-noto-color-emoji && \
apt-get clean && rm -rf /var/lib/apt/lists/*
# Copy start.sh and make it executable
COPY start.sh /start.sh
RUN chmod +x /start.sh
# Expose socat port (use host mapping, e.g. -p 9225:9223)
EXPOSE 9223
ENTRYPOINT ["/start.sh"]


@@ -0,0 +1,264 @@
"""Docker registry module for Crawl4AI.
This module provides a registry system for tracking and reusing Docker containers
across browser sessions, improving performance and resource utilization.
"""
import os
import json
import time
from typing import Dict, Optional
from ..utils import get_home_folder
class DockerRegistry:
"""Manages a registry of Docker containers used for browser automation.
This registry tracks containers by configuration hash, allowing reuse of appropriately
configured containers instead of creating new ones for each session.
Attributes:
registry_file (str): Path to the registry file
containers (dict): Dictionary of container information
port_map (dict): Map of host ports to container IDs
last_port (int): Last port assigned
"""
def __init__(self, registry_file: Optional[str] = None):
"""Initialize the registry with an optional path to the registry file.
Args:
registry_file: Path to the registry file. If None, uses default path.
"""
# Use the same file path as BuiltinBrowserStrategy by default
self.registry_file = registry_file or os.path.join(get_home_folder(), "builtin-browser", "browser_config.json")
self.containers = {} # Still maintain this for backward compatibility
self.port_map = {} # Will be populated from the shared file
self.last_port = 9222
self.load()
def load(self):
"""Load container registry from file."""
if os.path.exists(self.registry_file):
try:
with open(self.registry_file, 'r') as f:
registry_data = json.load(f)
# Initialize port_map if not present
if "port_map" not in registry_data:
registry_data["port_map"] = {}
self.port_map = registry_data.get("port_map", {})
# Extract container information from port_map entries of type "docker"
self.containers = {}
for port_str, browser_info in self.port_map.items():
if browser_info.get("browser_type") == "docker" and "container_id" in browser_info:
container_id = browser_info["container_id"]
self.containers[container_id] = {
"host_port": int(port_str),
"config_hash": browser_info.get("config_hash", ""),
"created_at": browser_info.get("created_at", time.time())
}
# Get last port if available
if "last_port" in registry_data:
self.last_port = registry_data["last_port"]
else:
# Find highest port in port_map
ports = [int(p) for p in self.port_map.keys() if p.isdigit()]
self.last_port = max(ports + [9222])
except Exception as e:
# Reset to defaults on error
print(f"Error loading registry: {e}")
self.containers = {}
self.port_map = {}
self.last_port = 9222
else:
# Initialize with defaults if file doesn't exist
self.containers = {}
self.port_map = {}
self.last_port = 9222
def save(self):
"""Save container registry to file."""
# First load the current file to avoid overwriting other browser types
current_data = {"port_map": {}, "last_port": self.last_port}
if os.path.exists(self.registry_file):
try:
with open(self.registry_file, 'r') as f:
current_data = json.load(f)
except Exception:
pass
# Create a new port_map dictionary
updated_port_map = {}
# First, copy all non-docker entries from the existing port_map
for port_str, browser_info in current_data.get("port_map", {}).items():
if browser_info.get("browser_type") != "docker":
updated_port_map[port_str] = browser_info
# Then add all current docker container entries
for container_id, container_info in self.containers.items():
port_str = str(container_info["host_port"])
updated_port_map[port_str] = {
"browser_type": "docker",
"container_id": container_id,
"cdp_url": f"http://localhost:{port_str}",
"config_hash": container_info["config_hash"],
"created_at": container_info["created_at"]
}
# Replace the port_map with our updated version
current_data["port_map"] = updated_port_map
# Update last_port
current_data["last_port"] = self.last_port
# Ensure directory exists
os.makedirs(os.path.dirname(self.registry_file), exist_ok=True)
# Save the updated data
with open(self.registry_file, 'w') as f:
json.dump(current_data, f, indent=2)
def register_container(self, container_id: str, host_port: int, config_hash: str, cdp_json_config: Optional[str] = None):
"""Register a container with its configuration hash and port mapping.
Args:
container_id: Docker container ID
host_port: Host port mapped to container
config_hash: Hash of configuration used to create container
cdp_json_config: CDP JSON configuration if available
"""
self.containers[container_id] = {
"host_port": host_port,
"config_hash": config_hash,
"created_at": time.time()
}
# Update port_map to maintain compatibility with BuiltinBrowserStrategy
port_str = str(host_port)
self.port_map[port_str] = {
"browser_type": "docker",
"container_id": container_id,
"cdp_url": f"http://localhost:{port_str}",
"config_hash": config_hash,
"created_at": time.time()
}
if cdp_json_config:
self.port_map[port_str]["cdp_json_config"] = cdp_json_config
self.save()
def unregister_container(self, container_id: str):
"""Unregister a container.
Args:
container_id: Docker container ID to unregister
"""
if container_id in self.containers:
host_port = self.containers[container_id]["host_port"]
port_str = str(host_port)
# Remove from port_map
if port_str in self.port_map:
del self.port_map[port_str]
# Remove from containers
del self.containers[container_id]
self.save()
async def find_container_by_config(self, config_hash: str, docker_utils) -> Optional[str]:
"""Find a container that matches the given configuration hash.
Args:
config_hash: Hash of configuration to match
docker_utils: DockerUtils instance to check running containers
Returns:
Container ID if found, None otherwise
"""
# Search through port_map for entries with matching config_hash
for port_str, browser_info in self.port_map.items():
if (browser_info.get("browser_type") == "docker" and
browser_info.get("config_hash") == config_hash and
"container_id" in browser_info):
container_id = browser_info["container_id"]
if await docker_utils.is_container_running(container_id):
return container_id
return None
def get_container_host_port(self, container_id: str) -> Optional[int]:
"""Get the host port mapped to the container.
Args:
container_id: Docker container ID
Returns:
Host port if container is registered, None otherwise
"""
if container_id in self.containers:
return self.containers[container_id]["host_port"]
return None
def get_next_available_port(self, docker_utils) -> int:
"""Get the next available host port for Docker mapping.
Args:
docker_utils: DockerUtils instance to check port availability
Returns:
Available port number
"""
# Start from last port + 1
port = self.last_port + 1
# Check if port is in use (either in our registry or system-wide)
while str(port) in self.port_map or docker_utils.is_port_in_use(port):
port += 1
# Update last port
self.last_port = port
self.save()
return port
def get_container_config_hash(self, container_id: str) -> Optional[str]:
"""Get the configuration hash for a container.
Args:
container_id: Docker container ID
Returns:
Configuration hash if container is registered, None otherwise
"""
if container_id in self.containers:
return self.containers[container_id]["config_hash"]
return None
async def cleanup_stale_containers(self, docker_utils):
"""Clean up containers that are no longer running.
Args:
docker_utils: DockerUtils instance to check container status
"""
to_remove = []
# Find containers that are no longer running
for port_str, browser_info in self.port_map.items():
if browser_info.get("browser_type") == "docker" and "container_id" in browser_info:
container_id = browser_info["container_id"]
if not await docker_utils.is_container_running(container_id):
to_remove.append(container_id)
# Remove stale containers
for container_id in to_remove:
self.unregister_container(container_id)
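`DockerRegistry` persists its state in the same `browser_config.json` file used by `BuiltinBrowserStrategy`, keyed by host port. A minimal sketch of the on-disk shape and the `containers` index that `load()` rebuilds from it (the container ID, hash, and port values are illustrative):

```python
import json
import os
import tempfile
import time

# Illustrative payload in the shared browser_config.json layout
registry = {
    "last_port": 9223,
    "port_map": {
        "9223": {
            "browser_type": "docker",
            "container_id": "abc123def456",
            "cdp_url": "http://localhost:9223",
            "config_hash": "deadbeef",
            "created_at": time.time(),
        }
    },
}

path = os.path.join(tempfile.mkdtemp(), "browser_config.json")
with open(path, "w") as f:
    json.dump(registry, f, indent=2)

# Reload and rebuild the containers index, mirroring DockerRegistry.load()
with open(path) as f:
    data = json.load(f)
containers = {
    info["container_id"]: {"host_port": int(port), "config_hash": info["config_hash"]}
    for port, info in data["port_map"].items()
    if info.get("browser_type") == "docker" and "container_id" in info
}
print(containers["abc123def456"]["host_port"])  # 9223
```

Filtering on `browser_type == "docker"` is what lets Docker entries coexist with builtin-browser entries in the same file without clobbering each other on save.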


@@ -0,0 +1,661 @@
import os
import json
import asyncio
import hashlib
import tempfile
import shutil
import socket
import subprocess
from typing import Dict, List, Optional, Tuple, Union
class DockerUtils:
"""Utility class for Docker operations in browser automation.
This class provides methods for managing Docker images, containers,
and related operations needed for browser automation. It handles
image building, container lifecycle, port management, and registry operations.
Attributes:
DOCKER_FOLDER (str): Path to folder containing Docker files
DOCKER_CONNECT_FILE (str): Path to Dockerfile for connect mode
DOCKER_LAUNCH_FILE (str): Path to Dockerfile for launch mode
DOCKER_START_SCRIPT (str): Path to startup script for connect mode
DEFAULT_CONNECT_IMAGE (str): Default image name for connect mode
DEFAULT_LAUNCH_IMAGE (str): Default image name for launch mode
logger: Optional logger instance
"""
# File paths for Docker resources
DOCKER_FOLDER = os.path.join(os.path.dirname(__file__), "docker")
DOCKER_CONNECT_FILE = os.path.join(DOCKER_FOLDER, "connect.Dockerfile")
DOCKER_LAUNCH_FILE = os.path.join(DOCKER_FOLDER, "launch.Dockerfile")
DOCKER_START_SCRIPT = os.path.join(DOCKER_FOLDER, "start.sh")
# Default image names
DEFAULT_CONNECT_IMAGE = "crawl4ai/browser-connect:latest"
DEFAULT_LAUNCH_IMAGE = "crawl4ai/browser-launch:latest"
def __init__(self, logger=None):
"""Initialize Docker utilities.
Args:
logger: Optional logger for recording operations
"""
self.logger = logger
# Image Management Methods
async def check_image_exists(self, image_name: str) -> bool:
"""Check if a Docker image exists.
Args:
image_name: Name of the Docker image to check
Returns:
bool: True if the image exists, False otherwise
"""
cmd = ["docker", "image", "inspect", image_name]
try:
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
_, _ = await process.communicate()
return process.returncode == 0
except Exception as e:
if self.logger:
self.logger.debug(
f"Error checking if image exists: {str(e)}", tag="DOCKER"
)
return False
async def build_docker_image(
self,
image_name: str,
dockerfile_path: str,
files_to_copy: Optional[Dict[str, str]] = None,
) -> bool:
"""Build a Docker image from a Dockerfile.
Args:
image_name: Name to give the built image
dockerfile_path: Path to the Dockerfile
files_to_copy: Dict of {dest_name: source_path} for files to copy to build context
Returns:
bool: True if image was built successfully, False otherwise
"""
# Create a temporary build context
with tempfile.TemporaryDirectory() as temp_dir:
# Copy the Dockerfile
shutil.copy(dockerfile_path, os.path.join(temp_dir, "Dockerfile"))
# Copy any additional files needed
if files_to_copy:
for dest_name, source_path in files_to_copy.items():
shutil.copy(source_path, os.path.join(temp_dir, dest_name))
# Build the image
cmd = ["docker", "build", "-t", image_name, temp_dir]
if self.logger:
self.logger.debug(
f"Building Docker image with command: {' '.join(cmd)}", tag="DOCKER"
)
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
stdout, stderr = await process.communicate()
if process.returncode != 0:
if self.logger:
self.logger.error(
message="Failed to build Docker image: {error}",
tag="DOCKER",
params={"error": stderr.decode()},
)
return False
if self.logger:
self.logger.success(
f"Successfully built Docker image: {image_name}", tag="DOCKER"
)
return True
async def ensure_docker_image_exists(
self, image_name: str, mode: str = "connect"
) -> str:
"""Ensure the required Docker image exists, creating it if necessary.
Args:
image_name: Name of the Docker image
mode: Either "connect" or "launch" to determine which image to build
Returns:
str: Name of the available Docker image
Raises:
Exception: If image doesn't exist and can't be built
"""
# If image name is not specified, use default based on mode
if not image_name:
image_name = (
self.DEFAULT_CONNECT_IMAGE
if mode == "connect"
else self.DEFAULT_LAUNCH_IMAGE
)
# Check if the image already exists
if await self.check_image_exists(image_name):
if self.logger:
self.logger.debug(
f"Docker image {image_name} already exists", tag="DOCKER"
)
return image_name
# If we're using a custom image that doesn't exist, warn and fail
if (
image_name != self.DEFAULT_CONNECT_IMAGE
and image_name != self.DEFAULT_LAUNCH_IMAGE
):
if self.logger:
self.logger.warning(
f"Custom Docker image {image_name} not found and cannot be automatically created",
tag="DOCKER",
)
raise Exception(f"Docker image {image_name} not found")
# Build the appropriate default image
if self.logger:
self.logger.info(
f"Docker image {image_name} not found, creating it now...", tag="DOCKER"
)
if mode == "connect":
success = await self.build_docker_image(
image_name,
self.DOCKER_CONNECT_FILE,
{"start.sh": self.DOCKER_START_SCRIPT},
)
else:
success = await self.build_docker_image(image_name, self.DOCKER_LAUNCH_FILE)
if not success:
raise Exception(f"Failed to create Docker image {image_name}")
return image_name
# Container Management Methods
async def create_container(
self,
image_name: str,
host_port: int,
container_name: Optional[str] = None,
volumes: Optional[List[str]] = None,
network: Optional[str] = None,
env_vars: Optional[Dict[str, str]] = None,
cpu_limit: float = 1.0,
memory_limit: str = "1.5g",
extra_args: Optional[List[str]] = None,
) -> Optional[str]:
"""Create a new Docker container.
Args:
image_name: Docker image to use
host_port: Port on host to map to container port 9223
container_name: Optional name for the container
volumes: List of volume mappings (e.g., ["host_path:container_path"])
network: Optional Docker network to use
env_vars: Dictionary of environment variables
cpu_limit: CPU limit for the container
memory_limit: Memory limit for the container
extra_args: Additional docker run arguments
Returns:
str: Container ID if successful, None otherwise
"""
# Prepare container command
cmd = [
"docker",
"run",
"--detach",
]
# Add container name if specified
if container_name:
cmd.extend(["--name", container_name])
# Add port mapping
cmd.extend(["-p", f"{host_port}:9223"])
# Add volumes
if volumes:
for volume in volumes:
cmd.extend(["-v", volume])
# Add network if specified
if network:
cmd.extend(["--network", network])
# Add environment variables
if env_vars:
for key, value in env_vars.items():
cmd.extend(["-e", f"{key}={value}"])
# Add CPU and memory limits
if cpu_limit:
cmd.extend(["--cpus", str(cpu_limit)])
if memory_limit:
cmd.extend(["--memory", memory_limit])
cmd.extend(["--memory-swap", memory_limit])
if self.logger:
self.logger.debug(
f"Setting CPU limit: {cpu_limit}, Memory limit: {memory_limit}",
tag="DOCKER",
)
# Add extra args
if extra_args:
cmd.extend(extra_args)
# Add image
cmd.append(image_name)
if self.logger:
self.logger.debug(
f"Creating Docker container with command: {' '.join(cmd)}", tag="DOCKER"
)
# Run docker command
try:
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
stdout, stderr = await process.communicate()
if process.returncode != 0:
if self.logger:
self.logger.error(
message="Failed to create Docker container: {error}",
tag="DOCKER",
params={"error": stderr.decode()},
)
return None
# Get container ID
container_id = stdout.decode().strip()
if self.logger:
self.logger.success(
f"Created Docker container: {container_id[:12]}", tag="DOCKER"
)
return container_id
except Exception as e:
if self.logger:
self.logger.error(
message="Error creating Docker container: {error}",
tag="DOCKER",
params={"error": str(e)},
)
return None
async def is_container_running(self, container_id: str) -> bool:
"""Check if a container is running.
Args:
container_id: ID of the container to check
Returns:
bool: True if the container is running, False otherwise
"""
cmd = ["docker", "inspect", "--format", "{{.State.Running}}", container_id]
try:
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
stdout, _ = await process.communicate()
return process.returncode == 0 and stdout.decode().strip() == "true"
except Exception as e:
if self.logger:
self.logger.debug(
f"Error checking if container is running: {str(e)}", tag="DOCKER"
)
return False
async def wait_for_container_ready(
self, container_id: str, timeout: int = 30
) -> bool:
"""Wait for the container to be in running state.
Args:
container_id: ID of the container to wait for
timeout: Maximum time to wait in seconds
Returns:
bool: True if container is ready, False if timeout occurred
"""
for _ in range(timeout):
if await self.is_container_running(container_id):
return True
await asyncio.sleep(1)
if self.logger:
self.logger.warning(
f"Container {container_id[:12]} not ready after {timeout}s timeout",
tag="DOCKER",
)
return False
async def stop_container(self, container_id: str) -> bool:
"""Stop a Docker container.
Args:
container_id: ID of the container to stop
Returns:
bool: True if stopped successfully, False otherwise
"""
cmd = ["docker", "stop", container_id]
try:
process = await asyncio.create_subprocess_exec(*cmd)
await process.communicate()
if self.logger:
self.logger.debug(
f"Stopped container: {container_id[:12]}", tag="DOCKER"
)
return process.returncode == 0
except Exception as e:
if self.logger:
self.logger.warning(
message="Failed to stop container: {error}",
tag="DOCKER",
params={"error": str(e)},
)
return False
async def remove_container(self, container_id: str, force: bool = True) -> bool:
"""Remove a Docker container.
Args:
container_id: ID of the container to remove
force: Whether to force removal
Returns:
bool: True if removed successfully, False otherwise
"""
cmd = ["docker", "rm"]
if force:
cmd.append("-f")
cmd.append(container_id)
try:
process = await asyncio.create_subprocess_exec(*cmd)
await process.communicate()
if self.logger:
self.logger.debug(
f"Removed container: {container_id[:12]}", tag="DOCKER"
)
return process.returncode == 0
except Exception as e:
if self.logger:
self.logger.warning(
message="Failed to remove container: {error}",
tag="DOCKER",
params={"error": str(e)},
)
return False
# Container Command Execution Methods
async def exec_in_container(
self, container_id: str, command: List[str], detach: bool = False
) -> Tuple[int, str, str]:
"""Execute a command in a running container.
Args:
container_id: ID of the container
command: Command to execute as a list of strings
detach: Whether to run the command in detached mode
Returns:
Tuple of (return_code, stdout, stderr)
"""
cmd = ["docker", "exec"]
if detach:
cmd.append("-d")
cmd.append(container_id)
cmd.extend(command)
try:
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
stdout, stderr = await process.communicate()
return process.returncode, stdout.decode(), stderr.decode()
except Exception as e:
if self.logger:
self.logger.error(
message="Error executing command in container: {error}",
tag="DOCKER",
params={"error": str(e)},
)
return -1, "", str(e)
async def start_socat_in_container(self, container_id: str) -> bool:
"""Start socat in the container to forward port 9223 to Chrome's debug port 9222.
Args:
container_id: ID of the container
Returns:
bool: True if socat started successfully, False otherwise
"""
# Command to run socat as a background process
cmd = ["socat", "TCP-LISTEN:9223,fork", "TCP:localhost:9222"]
returncode, _, stderr = await self.exec_in_container(
container_id, cmd, detach=True
)
if returncode != 0:
if self.logger:
self.logger.error(
message="Failed to start socat in container: {error}",
tag="DOCKER",
params={"error": stderr},
)
return False
if self.logger:
self.logger.debug(
f"Started socat in container: {container_id[:12]}", tag="DOCKER"
)
# Wait a moment for socat to start
await asyncio.sleep(1)
return True
async def launch_chrome_in_container(
self, container_id: str, browser_args: List[str]
) -> bool:
"""Launch Chrome inside the container with specified arguments.
Args:
container_id: ID of the container
browser_args: Chrome command line arguments
Returns:
bool: True if Chrome started successfully, False otherwise
"""
# Build Chrome command
chrome_cmd = ["chromium"]
chrome_cmd.extend(browser_args)
returncode, _, stderr = await self.exec_in_container(
container_id, chrome_cmd, detach=True
)
if returncode != 0:
if self.logger:
self.logger.error(
message="Failed to launch Chrome in container: {error}",
tag="DOCKER",
params={"error": stderr},
)
return False
if self.logger:
self.logger.debug(
f"Launched Chrome in container: {container_id[:12]}", tag="DOCKER"
)
return True
async def get_process_id_in_container(
self, container_id: str, process_name: str
) -> Optional[int]:
"""Get the process ID for a process in the container.
Args:
container_id: ID of the container
process_name: Name pattern to search for
Returns:
int: Process ID if found, None otherwise
"""
cmd = ["pgrep", "-f", process_name]
returncode, stdout, _ = await self.exec_in_container(container_id, cmd)
if returncode == 0 and stdout.strip():
pid = int(stdout.strip().split("\n")[0])
return pid
return None
async def stop_process_in_container(self, container_id: str, pid: int) -> bool:
"""Stop a process in the container by PID.
Args:
container_id: ID of the container
pid: Process ID to stop
Returns:
bool: True if process was stopped, False otherwise
"""
cmd = ["kill", "-TERM", str(pid)]
returncode, _, stderr = await self.exec_in_container(container_id, cmd)
if returncode != 0:
if self.logger:
self.logger.warning(
message="Failed to stop process in container: {error}",
tag="DOCKER",
params={"error": stderr},
)
return False
if self.logger:
self.logger.debug(
f"Stopped process {pid} in container: {container_id[:12]}", tag="DOCKER"
)
return True
# Network and Port Methods
async def wait_for_cdp_ready(self, host_port: int, timeout: int = 10) -> Optional[dict]:
"""Wait for the CDP endpoint to be ready.
Args:
host_port: Port to check for CDP endpoint
timeout: Maximum time to wait in seconds
Returns:
dict: CDP JSON config if ready, None if timeout occurred
"""
import aiohttp
url = f"http://localhost:{host_port}/json/version"
for _ in range(timeout):
try:
async with aiohttp.ClientSession() as session:
async with session.get(url, timeout=1) as response:
if response.status == 200:
if self.logger:
self.logger.debug(
f"CDP endpoint ready on port {host_port}",
tag="DOCKER",
)
cdp_json_config = await response.json()
if self.logger:
self.logger.debug(
f"CDP JSON config: {cdp_json_config}", tag="DOCKER"
)
return cdp_json_config
except Exception:
pass
await asyncio.sleep(1)
if self.logger:
self.logger.warning(
f"CDP endpoint not ready on port {host_port} after {timeout}s timeout",
tag="DOCKER",
)
return None
def is_port_in_use(self, port: int) -> bool:
"""Check if a port is already in use on the host.
Args:
port: Port number to check
Returns:
bool: True if port is in use, False otherwise
"""
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
return s.connect_ex(("localhost", port)) == 0
def get_next_available_port(self, start_port: int = 9223) -> int:
"""Get the next available port starting from a given port.
Args:
start_port: Port number to start checking from
Returns:
int: First available port number
"""
port = start_port
while self.is_port_in_use(port):
port += 1
return port
# Configuration Hash Methods
def generate_config_hash(self, config_dict: Dict) -> str:
"""Generate a hash of the configuration for container matching.
Args:
config_dict: Dictionary of configuration parameters
Returns:
str: Hash string uniquely identifying this configuration
"""
# Convert to canonical JSON string and hash
config_json = json.dumps(config_dict, sort_keys=True)
return hashlib.sha256(config_json.encode()).hexdigest()
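`is_port_in_use` and `get_next_available_port` above implement a simple linear probe with `connect_ex`. The same logic in isolation, exercised against a port we deliberately occupy (the listener here is only for demonstration):

```python
import socket

def is_port_in_use(port: int) -> bool:
    # connect_ex returns 0 when something accepts the connection
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex(("localhost", port)) == 0

def get_next_available_port(start_port: int) -> int:
    port = start_port
    while is_port_in_use(port):
        port += 1
    return port

# Occupy an ephemeral port, then confirm the probe scans past it
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("localhost", 0))
listener.listen(1)
busy_port = listener.getsockname()[1]
free_port = get_next_available_port(busy_port)
print(free_port > busy_port)  # True: the occupied port was skipped
listener.close()
```

Note the probe is inherently racy: a port reported free can be claimed by another process before Docker binds it, which is why the registry also records allocations in `port_map`.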


@@ -0,0 +1,177 @@
"""Browser manager module for Crawl4AI.
This module provides a central browser management class that uses the
strategy pattern internally while maintaining the existing API.
It also implements a page pooling mechanism for improved performance.
"""
from typing import Optional, Tuple, List
from playwright.async_api import Page, BrowserContext
from ..async_logger import AsyncLogger
from ..async_configs import BrowserConfig, CrawlerRunConfig
from .strategies import (
BaseBrowserStrategy,
PlaywrightBrowserStrategy,
CDPBrowserStrategy,
BuiltinBrowserStrategy,
DockerBrowserStrategy
)
class BrowserManager:
"""Main interface for browser management in Crawl4AI.
This class maintains backward compatibility with the existing implementation
while using the strategy pattern internally for different browser types.
Attributes:
config (BrowserConfig): Configuration object containing all browser settings
logger: Logger instance for recording events and errors
browser: The browser instance
default_context: The default browser context
managed_browser: The managed browser instance
playwright: The Playwright instance
sessions: Dictionary to store session information
session_ttl: Session timeout in seconds
"""
def __init__(self, browser_config: Optional[BrowserConfig] = None, logger: Optional[AsyncLogger] = None):
"""Initialize the BrowserManager with a browser configuration.
Args:
browser_config: Configuration object containing all browser settings
logger: Logger instance for recording events and errors
"""
self.config = browser_config or BrowserConfig()
self.logger = logger
# Create strategy based on configuration
self.strategy = self._create_strategy()
# Initialize state variables for compatibility with existing code
self.browser = None
self.default_context = None
self.managed_browser = None
self.playwright = None
# For session management (from existing implementation)
self.sessions = {}
self.session_ttl = 1800 # 30 minutes
def _create_strategy(self) -> BaseBrowserStrategy:
"""Create appropriate browser strategy based on configuration.
Returns:
BaseBrowserStrategy: The selected browser strategy
"""
if self.config.browser_mode == "builtin":
return BuiltinBrowserStrategy(self.config, self.logger)
elif self.config.browser_mode == "docker":
if DockerBrowserStrategy is None:
if self.logger:
self.logger.error(
"Docker browser strategy requested but not available. "
"Falling back to PlaywrightBrowserStrategy.",
tag="BROWSER"
)
return PlaywrightBrowserStrategy(self.config, self.logger)
return DockerBrowserStrategy(self.config, self.logger)
elif self.config.browser_mode == "cdp" or self.config.cdp_url or self.config.use_managed_browser:
return CDPBrowserStrategy(self.config, self.logger)
else:
return PlaywrightBrowserStrategy(self.config, self.logger)
async def start(self):
"""Start the browser instance and set up the default context.
Returns:
self: For method chaining
"""
# Start the strategy
await self.strategy.start()
# Update legacy references
self.browser = self.strategy.browser
self.default_context = self.strategy.default_context
# Set browser process reference (for CDP strategy)
if hasattr(self.strategy, 'browser_process'):
self.managed_browser = self.strategy
# Set Playwright reference
self.playwright = self.strategy.playwright
# Sync sessions if needed
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
self.session_ttl = self.strategy.session_ttl
return self
async def get_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
"""Get a page for the given configuration.
Args:
crawlerRunConfig: Configuration object for the crawler run
Returns:
Tuple of (Page, BrowserContext)
"""
# Delegate to strategy
page, context = await self.strategy.get_page(crawlerRunConfig)
# Sync sessions if needed
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
return page, context
async def get_pages(self, crawlerRunConfig: CrawlerRunConfig, count: int = 1) -> List[Tuple[Page, BrowserContext]]:
"""Get multiple pages with the same configuration.
This method efficiently creates multiple browser pages using the same configuration,
which is useful for parallel crawling of multiple URLs.
Args:
crawlerRunConfig: Configuration for the pages
count: Number of pages to create
Returns:
List of (Page, Context) tuples
"""
# Delegate to strategy
pages = await self.strategy.get_pages(crawlerRunConfig, count)
# Sync sessions if needed
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
return pages
# Just for legacy compatibility
async def kill_session(self, session_id: str):
"""Kill a browser session and clean up resources.
Args:
session_id: The session ID to kill
"""
# Handle kill_session via our strategy if it supports it
await self.strategy.kill_session(session_id)
# sync sessions if needed
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
async def close(self):
"""Close the browser and clean up resources."""
# Delegate to strategy
await self.strategy.close()
# Reset legacy references
self.browser = None
self.default_context = None
self.managed_browser = None
self.playwright = None
self.sessions = {}
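`_create_strategy` resolves the strategy from `browser_mode`, with CDP acting as a catch-all for `cdp_url` and `use_managed_browser`; the check order matters, since builtin and docker modes are matched before the CDP fallback. A reduced sketch of that dispatch with stub classes (the class names mirror the real strategies, but these and the `"dedicated"` default are placeholders):

```python
from dataclasses import dataclass

# Placeholder stand-ins for the real strategy classes
class Playwright: pass
class CDP: pass
class Builtin: pass
class Docker: pass

@dataclass
class Cfg:
    browser_mode: str = "dedicated"  # illustrative default
    cdp_url: str = ""
    use_managed_browser: bool = False

def create_strategy(cfg: Cfg):
    """Mirror of _create_strategy's dispatch order."""
    if cfg.browser_mode == "builtin":
        return Builtin()
    elif cfg.browser_mode == "docker":
        return Docker()
    elif cfg.browser_mode == "cdp" or cfg.cdp_url or cfg.use_managed_browser:
        return CDP()
    return Playwright()

print(type(create_strategy(Cfg())).__name__)                       # Playwright
print(type(create_strategy(Cfg(cdp_url="ws://x"))).__name__)       # CDP
print(type(create_strategy(Cfg(browser_mode="docker"))).__name__)  # Docker
```

Because `cdp_url` and `use_managed_browser` route to CDP even when `browser_mode` is left at its default, callers who set those fields get a CDP-backed strategy without changing the mode explicitly.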

crawl4ai/browser/manager.py

@@ -0,0 +1,853 @@
"""Browser manager module for Crawl4AI.
This module provides a central browser management class that uses the
strategy pattern internally while maintaining the existing API.
It also implements browser pooling for improved performance.
"""
import asyncio
import hashlib
import json
import math
from enum import Enum
from typing import Dict, List, Optional, Tuple, Any
from playwright.async_api import Page, BrowserContext
from ..async_logger import AsyncLogger
from ..async_configs import BrowserConfig, CrawlerRunConfig
from .strategies import (
BaseBrowserStrategy,
PlaywrightBrowserStrategy,
CDPBrowserStrategy,
BuiltinBrowserStrategy,
DockerBrowserStrategy
)
class UnavailableBehavior(Enum):
"""Behavior when no browser is available."""
ON_DEMAND = "on_demand" # Create new browser on demand
PENDING = "pending" # Wait until a browser is available
EXCEPTION = "exception" # Raise an exception
class BrowserManager:
"""Main interface for browser management and pooling in Crawl4AI.
This class maintains backward compatibility with the existing implementation
while using the strategy pattern internally for different browser types.
It also implements browser pooling for improved performance.
Attributes:
config (BrowserConfig): Default configuration object for browsers
logger (AsyncLogger): Logger instance for recording events and errors
browser_pool (Dict): Dictionary to store browser instances by configuration
browser_in_use (Dict): Dictionary to track which browsers are in use
request_queues (Dict): Queues for pending requests by configuration
unavailable_behavior (UnavailableBehavior): Behavior when no browser is available
"""
def __init__(
self,
browser_config: Optional[BrowserConfig] = None,
logger: Optional[AsyncLogger] = None,
unavailable_behavior: UnavailableBehavior = UnavailableBehavior.EXCEPTION,
max_browsers_per_config: int = 10,
max_pages_per_browser: int = 5
):
"""Initialize the BrowserManager with a browser configuration.
Args:
browser_config: Configuration object containing all browser settings
logger: Logger instance for recording events and errors
unavailable_behavior: Behavior when no browser is available
max_browsers_per_config: Maximum number of browsers per configuration
max_pages_per_browser: Maximum number of pages per browser
"""
self.config = browser_config or BrowserConfig()
self.logger = logger
self.unavailable_behavior = unavailable_behavior
self.max_browsers_per_config = max_browsers_per_config
self.max_pages_per_browser = max_pages_per_browser
# Browser pool management
self.browser_pool = {} # config_hash -> list of browser strategies
self.browser_in_use = {} # strategy instance -> Boolean
self.request_queues = {} # config_hash -> asyncio.Queue()
self._browser_locks = {} # config_hash -> asyncio.Lock()
self._browser_pool_lock = asyncio.Lock() # Global lock for pool modifications
# Page pool management
self.page_pool = {} # (browser_config_hash, crawler_config_hash) -> list of (page, context, strategy)
self._page_pool_lock = asyncio.Lock()
self.browser_page_counts = {} # strategy instance -> current page count
self._page_count_lock = asyncio.Lock() # Lock for thread-safe access to page counts
# For session management (from existing implementation)
self.sessions = {}
self.session_ttl = 1800 # 30 minutes
# For legacy compatibility
self.browser = None
self.default_context = None
self.managed_browser = None
self.playwright = None
self.strategy = None
def _create_browser_config_hash(self, browser_config: BrowserConfig) -> str:
"""Create a hash of the browser configuration for browser pooling.
Args:
browser_config: Browser configuration
Returns:
str: Hash of the browser configuration
"""
# Convert config to dictionary, excluding any callable objects
config_dict = browser_config.__dict__.copy()
for key in list(config_dict.keys()):
if callable(config_dict[key]):
del config_dict[key]
# Convert to canonical JSON string
config_json = json.dumps(config_dict, sort_keys=True, default=str)
# Hash the JSON
config_hash = hashlib.sha256(config_json.encode()).hexdigest()
return config_hash
def _create_strategy(self, browser_config: BrowserConfig) -> BaseBrowserStrategy:
"""Create appropriate browser strategy based on configuration.
Args:
browser_config: Browser configuration
Returns:
BaseBrowserStrategy: The selected browser strategy
"""
if browser_config.browser_mode == "builtin":
return BuiltinBrowserStrategy(browser_config, self.logger)
elif browser_config.browser_mode == "docker":
if DockerBrowserStrategy is None:
if self.logger:
self.logger.error(
"Docker browser strategy requested but not available. "
"Falling back to PlaywrightBrowserStrategy.",
tag="BROWSER"
)
return PlaywrightBrowserStrategy(browser_config, self.logger)
return DockerBrowserStrategy(browser_config, self.logger)
elif browser_config.browser_mode == "cdp" or browser_config.cdp_url or browser_config.use_managed_browser:
return CDPBrowserStrategy(browser_config, self.logger)
else:
return PlaywrightBrowserStrategy(browser_config, self.logger)
async def initialize_pool(
self,
browser_configs: List[BrowserConfig] = None,
browsers_per_config: int = 1,
page_configs: Optional[List[Tuple[BrowserConfig, CrawlerRunConfig, int]]] = None
):
"""Initialize the browser pool with multiple browser configurations.
Args:
browser_configs: List of browser configurations to initialize
browsers_per_config: Number of browser instances per configuration
page_configs: Optional list of (browser_config, crawler_run_config, count) tuples
for pre-warming pages
Returns:
self: For method chaining
"""
if not browser_configs:
browser_configs = [self.config]
# Calculate how many browsers we'll need based on page_configs
browsers_needed = {}
if page_configs:
for browser_config, _, page_count in page_configs:
config_hash = self._create_browser_config_hash(browser_config)
# Calculate browsers based on max_pages_per_browser
browsers_needed_for_config = math.ceil(page_count / self.max_pages_per_browser)
browsers_needed[config_hash] = max(
browsers_needed.get(config_hash, 0),
browsers_needed_for_config
)
# Adjust browsers_per_config if needed to ensure enough capacity
config_browsers_needed = {}
for browser_config in browser_configs:
config_hash = self._create_browser_config_hash(browser_config)
# Estimate browsers needed based on page requirements
browsers_for_config = browsers_per_config
if config_hash in browsers_needed:
browsers_for_config = max(browsers_for_config, browsers_needed[config_hash])
config_browsers_needed[config_hash] = browsers_for_config
# Update max_browsers_per_config if needed
if browsers_for_config > self.max_browsers_per_config:
self.max_browsers_per_config = browsers_for_config
if self.logger:
self.logger.info(
f"Increased max_browsers_per_config to {browsers_for_config} to accommodate page requirements",
tag="POOL"
)
# Initialize locks and queues for each config
async with self._browser_pool_lock:
for browser_config in browser_configs:
config_hash = self._create_browser_config_hash(browser_config)
# Initialize lock for this config if needed
if config_hash not in self._browser_locks:
self._browser_locks[config_hash] = asyncio.Lock()
# Initialize queue for this config if needed
if config_hash not in self.request_queues:
self.request_queues[config_hash] = asyncio.Queue()
# Initialize pool for this config if needed
if config_hash not in self.browser_pool:
self.browser_pool[config_hash] = []
# Create browser instances for each configuration in parallel
browser_tasks = []
for browser_config in browser_configs:
config_hash = self._create_browser_config_hash(browser_config)
browsers_to_create = config_browsers_needed.get(
config_hash,
browsers_per_config
) - len(self.browser_pool.get(config_hash, []))
if browsers_to_create <= 0:
continue
for _ in range(browsers_to_create):
# Create a task for each browser initialization
task = self._create_and_add_browser(browser_config, config_hash)
browser_tasks.append(task)
# Wait for all browser initializations to complete
if browser_tasks:
if self.logger:
self.logger.info(f"Initializing {len(browser_tasks)} browsers in parallel...", tag="POOL")
await asyncio.gather(*browser_tasks)
# Pre-warm pages if requested
if page_configs:
page_tasks = []
for browser_config, crawler_run_config, count in page_configs:
task = self._prewarm_pages(browser_config, crawler_run_config, count)
page_tasks.append(task)
if page_tasks:
if self.logger:
self.logger.info(f"Pre-warming pages with {len(page_tasks)} configurations...", tag="POOL")
await asyncio.gather(*page_tasks)
# Update legacy references
if self.browser_pool and next(iter(self.browser_pool.values()), []):
strategy = next(iter(self.browser_pool.values()))[0]
self.strategy = strategy
self.browser = strategy.browser
self.default_context = strategy.default_context
self.playwright = strategy.playwright
return self
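The pool-sizing step in `initialize_pool` boils down to one formula: each config needs `ceil(pages / max_pages_per_browser)` browser instances. A small sketch of that calculation (hypothetical config-hash keys):

```python
import math

def browsers_needed(page_counts, max_pages_per_browser=5):
    """How many browser instances each config hash needs for its page budget."""
    return {h: math.ceil(n / max_pages_per_browser) for h, n in page_counts.items()}

# 12 pages at 5 pages/browser -> 3 browsers; 5 -> 1; 1 -> 1
sizes = browsers_needed({"cfg_a": 12, "cfg_b": 5, "cfg_c": 1})
```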
async def _create_and_add_browser(self, browser_config: BrowserConfig, config_hash: str):
"""Create and add a browser to the pool.
Args:
browser_config: Browser configuration
config_hash: Hash of the configuration
"""
try:
strategy = self._create_strategy(browser_config)
await strategy.start()
async with self._browser_pool_lock:
if config_hash not in self.browser_pool:
self.browser_pool[config_hash] = []
self.browser_pool[config_hash].append(strategy)
self.browser_in_use[strategy] = False
if self.logger:
self.logger.debug(
f"Added browser to pool: {browser_config.browser_type} "
f"({browser_config.browser_mode})",
tag="POOL"
)
except Exception as e:
if self.logger:
self.logger.error(
f"Failed to create browser: {str(e)}",
tag="POOL"
)
raise
def _make_config_signature(self, crawlerRunConfig: CrawlerRunConfig) -> str:
"""Create a signature hash from crawler configuration.
Args:
crawlerRunConfig: Crawler run configuration
Returns:
str: Hash of the crawler configuration
"""
config_dict = crawlerRunConfig.__dict__.copy()
# Exclude items that do not affect page creation
ephemeral_keys = [
"session_id",
"js_code",
"scraping_strategy",
"extraction_strategy",
"chunking_strategy",
"cache_mode",
"content_filter",
"semaphore_count",
"url"
]
for key in ephemeral_keys:
if key in config_dict:
del config_dict[key]
# Convert to canonical JSON string
config_json = json.dumps(config_dict, sort_keys=True, default=str)
# Hash the JSON
config_hash = hashlib.sha256(config_json.encode("utf-8")).hexdigest()
return config_hash
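The signature above deliberately drops per-request ("ephemeral") fields before hashing, so that runs differing only in things like `url` or `session_id` still map to the same page-pool key. A self-contained sketch of the idea (toy key set, not the real exclusion list):

```python
import hashlib
import json

EPHEMERAL_KEYS = {"session_id", "url"}  # per-request fields that should not fragment the pool

def make_signature(config_dict):
    stable = {k: v for k, v in config_dict.items() if k not in EPHEMERAL_KEYS}
    canonical = json.dumps(stable, sort_keys=True, default=str)  # canonical form
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

sig_a = make_signature({"viewport": 1280, "url": "https://a.example"})
sig_b = make_signature({"viewport": 1280, "url": "https://b.example"})  # same pool key
sig_c = make_signature({"viewport": 1920, "url": "https://a.example"})  # different key
```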
async def _prewarm_pages(
self,
browser_config: BrowserConfig,
crawler_run_config: CrawlerRunConfig,
count: int
):
"""Pre-warm pages for a specific configuration.
Args:
browser_config: Browser configuration
crawler_run_config: Crawler run configuration
count: Number of pages to pre-warm
"""
try:
# Create individual page tasks and run them in parallel
browser_config_hash = self._create_browser_config_hash(browser_config)
crawler_config_hash = self._make_config_signature(crawler_run_config)
async def get_single_page():
strategy = await self.get_available_browser(browser_config)
try:
page, context = await strategy.get_page(crawler_run_config)
# Store config hashes on the page object for later retrieval
setattr(page, "_browser_config_hash", browser_config_hash)
setattr(page, "_crawler_config_hash", crawler_config_hash)
return page, context, strategy
except Exception as e:
# Release the browser back to the pool
await self.release_browser(strategy, browser_config)
raise
# Create tasks for parallel execution
page_tasks = [get_single_page() for _ in range(count)]
# Execute all page creation tasks in parallel
pages_contexts_strategies = await asyncio.gather(*page_tasks)
# Add pages to the page pool (pool key uses the hashes computed above)
pool_key = (browser_config_hash, crawler_config_hash)
async with self._page_pool_lock:
if pool_key not in self.page_pool:
self.page_pool[pool_key] = []
# Add all pages to the pool
self.page_pool[pool_key].extend(pages_contexts_strategies)
if self.logger:
self.logger.debug(
f"Pre-warmed {count} pages in parallel with config {crawler_run_config}",
tag="POOL"
)
except Exception as e:
if self.logger:
self.logger.error(
f"Failed to pre-warm pages: {str(e)}",
tag="POOL"
)
raise
async def get_available_browser(
self,
browser_config: Optional[BrowserConfig] = None
) -> BaseBrowserStrategy:
"""Get an available browser from the pool for the given configuration.
Args:
browser_config: Browser configuration to match
Returns:
BaseBrowserStrategy: An available browser strategy
Raises:
Exception: If no browser is available and behavior is EXCEPTION
"""
browser_config = browser_config or self.config
config_hash = self._create_browser_config_hash(browser_config)
# Reuse (or lazily create) the per-config lock; a throwaway Lock here would not serialize callers
lock = self._browser_locks.setdefault(config_hash, asyncio.Lock())
async with lock:
# Check if we have browsers for this config
if config_hash not in self.browser_pool or not self.browser_pool[config_hash]:
if self.unavailable_behavior == UnavailableBehavior.ON_DEMAND:
# Create a new browser on demand
if self.logger:
self.logger.info(
f"1> Creating new browser on demand for config {config_hash[:8]}",
tag="POOL"
)
# Initialize pool for this config if needed
async with self._browser_pool_lock:
if config_hash not in self.browser_pool:
self.browser_pool[config_hash] = []
strategy = self._create_strategy(browser_config)
await strategy.start()
self.browser_pool[config_hash].append(strategy)
self.browser_in_use[strategy] = False
elif self.unavailable_behavior == UnavailableBehavior.EXCEPTION:
raise Exception(f"No browsers available for configuration {config_hash[:8]}")
# Check for an available browser with capacity in the pool
for strategy in self.browser_pool[config_hash]:
# Check if this browser has capacity for more pages
async with self._page_count_lock:
current_pages = self.browser_page_counts.get(strategy, 0)
if current_pages < self.max_pages_per_browser:
# Increment the page count
self.browser_page_counts[strategy] = current_pages + 1
self.browser_in_use[strategy] = True
# Get browser information for better logging
browser_type = getattr(strategy.config, 'browser_type', 'unknown')
browser_mode = getattr(strategy.config, 'browser_mode', 'unknown')
strategy_id = id(strategy) # Use object ID as a unique identifier
if self.logger:
self.logger.debug(
f"Selected browser #{strategy_id} ({browser_type}/{browser_mode}) - "
f"pages: {current_pages+1}/{self.max_pages_per_browser}",
tag="POOL"
)
return strategy
# All browsers are at capacity or in use
if self.unavailable_behavior == UnavailableBehavior.ON_DEMAND:
# Check if we've reached the maximum number of browsers
if len(self.browser_pool[config_hash]) >= self.max_browsers_per_config:
if self.logger:
self.logger.warning(
f"Maximum browsers reached for config {config_hash[:8]} and all at page capacity",
tag="POOL"
)
if self.unavailable_behavior == UnavailableBehavior.EXCEPTION:
raise Exception("Maximum browsers reached and all at page capacity")
# Create a new browser on demand
if self.logger:
self.logger.info(
f"2> Creating new browser on demand for config {config_hash[:8]}",
tag="POOL"
)
strategy = self._create_strategy(browser_config)
await strategy.start()
async with self._browser_pool_lock:
self.browser_pool[config_hash].append(strategy)
self.browser_in_use[strategy] = True
# Account for the page the caller is about to open, matching the pooled path
async with self._page_count_lock:
self.browser_page_counts[strategy] = 1
return strategy
# If we get here, either behavior is EXCEPTION or PENDING
if self.unavailable_behavior == UnavailableBehavior.EXCEPTION:
raise Exception(f"All browsers in use or at page capacity for configuration {config_hash[:8]}")
# For PENDING behavior, set up waiting mechanism
if config_hash not in self.request_queues:
self.request_queues[config_hash] = asyncio.Queue()
# Create a future to wait on
future = asyncio.Future()
await self.request_queues[config_hash].put(future)
if self.logger:
self.logger.debug(
f"Waiting for available browser for config {config_hash[:8]}",
tag="POOL"
)
# Wait for a browser to become available, then claim a page slot on it
strategy = await future
async with self._page_count_lock:
self.browser_page_counts[strategy] = self.browser_page_counts.get(strategy, 0) + 1
self.browser_in_use[strategy] = True
return strategy
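The `PENDING` behavior hands a freed browser directly to the oldest waiter: the waiter parks an `asyncio.Future` on the per-config queue, and `release_browser` resolves it with the released strategy. A minimal sketch of that handoff (stub value in place of a real strategy):

```python
import asyncio

async def demo_pending_handoff():
    queue = asyncio.Queue()
    loop = asyncio.get_running_loop()
    # Waiter side: park a Future on the per-config queue
    waiter = loop.create_future()
    await queue.put(waiter)
    # Releaser side: pop the oldest waiter and resolve it with the freed browser
    pending = await queue.get()
    if not pending.done():
        pending.set_result("freed-browser")
    # Waiter side resumes with the browser it was handed
    return await waiter

result = asyncio.run(demo_pending_handoff())
```

Queuing Futures (rather than the waiters polling the pool) preserves FIFO fairness and wakes exactly one waiter per released browser.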
async def get_page(
self,
crawlerRunConfig: CrawlerRunConfig,
browser_config: Optional[BrowserConfig] = None
) -> Tuple[Page, BrowserContext, BaseBrowserStrategy]:
"""Get a page from the browser pool."""
browser_config = browser_config or self.config
# Check if we have a pre-warmed page available
browser_config_hash = self._create_browser_config_hash(browser_config)
crawler_config_hash = self._make_config_signature(crawlerRunConfig)
pool_key = (browser_config_hash, crawler_config_hash)
# Try to get a page from the pool
async with self._page_pool_lock:
if pool_key in self.page_pool and self.page_pool[pool_key]:
# Get a page from the pool
page, context, strategy = self.page_pool[pool_key].pop()
# Mark browser as in use (it already is, but ensure consistency)
self.browser_in_use[strategy] = True
if self.logger:
self.logger.debug(
f"Using pre-warmed page for config {crawler_config_hash[:8]}",
tag="POOL"
)
# Note: We don't increment page count since it was already counted when created
return page, context, strategy
# No pre-warmed page available, create a new one
# get_available_browser already increments the page count
strategy = await self.get_available_browser(browser_config)
try:
# Get a page from the browser
page, context = await strategy.get_page(crawlerRunConfig)
# Store config hashes on the page object for later retrieval
setattr(page, "_browser_config_hash", browser_config_hash)
setattr(page, "_crawler_config_hash", crawler_config_hash)
return page, context, strategy
except Exception as e:
# Release the browser back to the pool and decrement the page count
await self.release_browser(strategy, browser_config, decrement_page_count=True)
raise
async def release_page(
self,
page: Page,
strategy: BaseBrowserStrategy,
browser_config: Optional[BrowserConfig] = None,
keep_alive: bool = True,
return_to_pool: bool = True
):
"""Release a page back to the pool."""
browser_config = browser_config or self.config
page_url = page.url if page else None
# If not keeping the page alive, close it and decrement count
if not keep_alive:
try:
await page.close()
except Exception as e:
if self.logger:
self.logger.error(
f"Error closing page: {str(e)}",
tag="POOL"
)
# Release the browser with page count decrement
await self.release_browser(strategy, browser_config, decrement_page_count=True)
return
# If returning to pool
if return_to_pool:
# Get the configuration hashes from the page object
browser_config_hash = getattr(page, "_browser_config_hash", None)
crawler_config_hash = getattr(page, "_crawler_config_hash", None)
if browser_config_hash and crawler_config_hash:
pool_key = (browser_config_hash, crawler_config_hash)
async with self._page_pool_lock:
if pool_key not in self.page_pool:
self.page_pool[pool_key] = []
# Add page back to the pool
self.page_pool[pool_key].append((page, page.context, strategy))
if self.logger:
self.logger.debug(
f"Returned page to pool for config {crawler_config_hash[:8]}, url: {page_url}",
tag="POOL"
)
# Note: We don't decrement the page count here since the page is still "in use"
# from the browser's perspective, just in our pool
return
else:
# If we can't identify the configuration, log a warning
if self.logger:
self.logger.warning(
"Cannot return page to pool - missing configuration hashes",
tag="POOL"
)
# If we got here, we couldn't return to pool, so just release the browser
await self.release_browser(strategy, browser_config, decrement_page_count=True)
async def release_browser(
self,
strategy: BaseBrowserStrategy,
browser_config: Optional[BrowserConfig] = None,
decrement_page_count: bool = True
):
"""Release a browser back to the pool."""
browser_config = browser_config or self.config
config_hash = self._create_browser_config_hash(browser_config)
# Decrement page count
if decrement_page_count:
async with self._page_count_lock:
current_count = self.browser_page_counts.get(strategy, 1)
self.browser_page_counts[strategy] = max(0, current_count - 1)
if self.logger:
self.logger.debug(
f"Decremented page count for browser (now: {self.browser_page_counts[strategy]})",
tag="POOL"
)
# Mark as not in use
self.browser_in_use[strategy] = False
# Process any waiting requests
if config_hash in self.request_queues and not self.request_queues[config_hash].empty():
future = await self.request_queues[config_hash].get()
if not future.done():
future.set_result(strategy)
async def get_pages(
self,
crawlerRunConfig: CrawlerRunConfig,
count: int = 1,
browser_config: Optional[BrowserConfig] = None
) -> List[Tuple[Page, BrowserContext, BaseBrowserStrategy]]:
"""Get multiple pages from the browser pool.
Args:
crawlerRunConfig: Configuration for the crawler run
count: Number of pages to get
browser_config: Browser configuration to use
Returns:
List of (Page, Context, Strategy) tuples
"""
results = []
for _ in range(count):
try:
result = await self.get_page(crawlerRunConfig, browser_config)
results.append(result)
except Exception as e:
# Release any pages we've already gotten
for page, _, strategy in results:
await self.release_page(page, strategy, browser_config)
raise
return results
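`get_pages` is all-or-nothing: if any acquisition fails, everything obtained so far is released before the error propagates. The rollback shape, sketched with stub acquire/release callables:

```python
import asyncio

async def acquire_all(acquire, release, count):
    """Acquire `count` resources; on any failure hand back what was taken."""
    got = []
    try:
        for _ in range(count):
            got.append(await acquire())
    except Exception:
        for item in got:
            await release(item)
        raise
    return got

released = []

async def flaky_acquire(state={"n": 0}):
    state["n"] += 1
    if state["n"] == 3:
        raise RuntimeError("no capacity")  # third acquisition fails
    return f"page-{state['n']}"

async def record_release(item):
    released.append(item)

try:
    asyncio.run(acquire_all(flaky_acquire, record_release, 5))
except RuntimeError:
    pass  # the two successful acquisitions were rolled back
```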
async def get_page_pool_status(self) -> Dict[str, Any]:
"""Get information about the page pool status.
Returns:
Dict with page pool status information
"""
status = {
"total_pooled_pages": 0,
"configs": {}
}
async with self._page_pool_lock:
for (browser_hash, crawler_hash), pages in self.page_pool.items():
config_key = f"{browser_hash[:8]}_{crawler_hash[:8]}"
status["configs"][config_key] = len(pages)
status["total_pooled_pages"] += len(pages)
if self.logger:
self.logger.debug(
f"Page pool status: {status['total_pooled_pages']} pages available",
tag="POOL"
)
return status
async def get_pool_status(self) -> Dict[str, Any]:
"""Get information about the browser pool status.
Returns:
Dict with pool status information
"""
status = {
"total_browsers": 0,
"browsers_in_use": 0,
"total_pages": 0,
"configs": {}
}
for config_hash, strategies in self.browser_pool.items():
config_pages = 0
in_use = 0
for strategy in strategies:
is_in_use = self.browser_in_use.get(strategy, False)
if is_in_use:
in_use += 1
# Get page count for this browser
try:
page_count = len(await strategy.get_opened_pages())
config_pages += page_count
except Exception as e:
if self.logger:
self.logger.error(f"Error getting page count: {str(e)}", tag="POOL")
config_status = {
"total_browsers": len(strategies),
"browsers_in_use": in_use,
"pages_open": config_pages,
"waiting_requests": self.request_queues.get(config_hash, asyncio.Queue()).qsize(),
"max_capacity": len(strategies) * self.max_pages_per_browser,
"utilization_pct": round((config_pages / (len(strategies) * self.max_pages_per_browser)) * 100, 1)
if strategies else 0
}
status["configs"][config_hash] = config_status
status["total_browsers"] += config_status["total_browsers"]
status["browsers_in_use"] += config_status["browsers_in_use"]
status["total_pages"] += config_pages
# Add overall utilization
if status["total_browsers"] > 0:
max_capacity = status["total_browsers"] * self.max_pages_per_browser
status["overall_utilization_pct"] = round((status["total_pages"] / max_capacity) * 100, 1)
else:
status["overall_utilization_pct"] = 0
return status
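The utilization figure reported above is open pages divided by total capacity (`browsers * max_pages_per_browser`), guarded against an empty pool. Isolated for clarity:

```python
def utilization_pct(open_pages, browser_count, max_pages_per_browser=5):
    """Pool utilization as a percentage, 0 when there is no capacity."""
    capacity = browser_count * max_pages_per_browser
    return round(open_pages / capacity * 100, 1) if capacity else 0

half_full = utilization_pct(7, 2)   # 7 pages across 2 browsers of 5 slots each
empty_pool = utilization_pct(0, 0)  # guarded: no division by zero
```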
async def start(self):
"""Start at least one browser instance in the pool.
This method is kept for backward compatibility.
Returns:
self: For method chaining
"""
await self.initialize_pool([self.config], 1)
return self
async def kill_session(self, session_id: str):
"""Kill a browser session and clean up resources.
Delegated to the strategy. This method is kept for backward compatibility.
Args:
session_id: The session ID to kill
"""
if not self.strategy:
return
await self.strategy.kill_session(session_id)
# Sync sessions
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
async def close(self):
"""Close all browsers in the pool and clean up resources."""
# Close all browsers in the pool
for strategies in self.browser_pool.values():
for strategy in strategies:
try:
await strategy.close()
except Exception as e:
if self.logger:
self.logger.error(
f"Error closing browser: {str(e)}",
tag="POOL"
)
# Clear pool data
self.browser_pool = {}
self.browser_in_use = {}
# Reset legacy references
self.browser = None
self.default_context = None
self.managed_browser = None
self.playwright = None
self.strategy = None
self.sessions = {}
async def create_browser_manager(
browser_config: Optional[BrowserConfig] = None,
logger: Optional[AsyncLogger] = None,
unavailable_behavior: UnavailableBehavior = UnavailableBehavior.EXCEPTION,
max_browsers_per_config: int = 10,
initial_pool_size: int = 1,
page_configs: Optional[List[Tuple[BrowserConfig, CrawlerRunConfig, int]]] = None
) -> BrowserManager:
"""Factory function to create and initialize a BrowserManager.
Args:
browser_config: Configuration for the browsers
logger: Logger for recording events
unavailable_behavior: Behavior when no browser is available
max_browsers_per_config: Maximum browsers per configuration
initial_pool_size: Initial number of browsers per configuration
page_configs: Optional configurations for pre-warming pages
Returns:
Initialized BrowserManager
"""
manager = BrowserManager(
browser_config=browser_config,
logger=logger,
unavailable_behavior=unavailable_behavior,
max_browsers_per_config=max_browsers_per_config
)
await manager.initialize_pool(
[browser_config] if browser_config else None,
initial_pool_size,
page_configs
)
return manager

crawl4ai/browser/models.py Normal file

@@ -0,0 +1,143 @@
"""Docker configuration module for Crawl4AI browser automation.
This module provides configuration classes for Docker-based browser automation,
allowing flexible configuration of Docker containers for browsing.
"""
from typing import Dict, List, Optional
class DockerConfig:
"""Configuration for Docker-based browser automation.
This class contains Docker-specific settings to avoid cluttering BrowserConfig.
Attributes:
mode (str): Docker operation mode - "connect" or "launch".
- "connect": Uses a container with Chrome already running
- "launch": Dynamically configures and starts Chrome in container
image (str): Docker image to use. If None, defaults from DockerUtils are used.
registry_file (str): Path to container registry file for persistence.
persistent (bool): Keep container running after browser closes.
remove_on_exit (bool): Remove container on exit when not persistent.
network (str): Docker network to use.
volumes (List[str]): Volume mappings (e.g., ["host_path:container_path"]).
env_vars (Dict[str, str]): Environment variables to set in container.
cpu_limit (float): CPU limit for the container.
memory_limit (str): Memory limit for the container (e.g., "1.5g").
extra_args (List[str]): Additional docker run arguments.
host_port (int): Host port to map to container's 9223 port.
user_data_dir (str): Path to user data directory on host.
container_user_data_dir (str): Path to user data directory in container.
"""
def __init__(
self,
mode: str = "connect", # "connect" or "launch"
image: Optional[str] = None, # Docker image to use
registry_file: Optional[str] = None, # Path to registry file
persistent: bool = False, # Keep container running after browser closes
remove_on_exit: bool = True, # Remove container on exit when not persistent
network: Optional[str] = None, # Docker network to use
volumes: Optional[List[str]] = None, # Volume mappings
cpu_limit: float = 1.0, # CPU limit for the container
memory_limit: str = "1.5g", # Memory limit for the container
env_vars: Optional[Dict[str, str]] = None, # Environment variables
host_port: Optional[int] = None, # Host port to map to container's 9223
user_data_dir: Optional[str] = None, # Path to user data directory on host
container_user_data_dir: str = "/data", # Path to user data directory in container
extra_args: Optional[List[str]] = None, # Additional docker run arguments
):
"""Initialize Docker configuration.
Args:
mode: Docker operation mode ("connect" or "launch")
image: Docker image to use
registry_file: Path to container registry file
persistent: Whether to keep container running after browser closes
remove_on_exit: Whether to remove container on exit when not persistent
network: Docker network to use
volumes: Volume mappings as list of strings
cpu_limit: CPU limit for the container
memory_limit: Memory limit for the container
env_vars: Environment variables as dictionary
extra_args: Additional docker run arguments
host_port: Host port to map to container's 9223
user_data_dir: Path to user data directory on host
container_user_data_dir: Path to user data directory in container
"""
self.mode = mode
self.image = image # If None, defaults will be used from DockerUtils
self.registry_file = registry_file
self.persistent = persistent
self.remove_on_exit = remove_on_exit
self.network = network
self.volumes = volumes or []
self.cpu_limit = cpu_limit
self.memory_limit = memory_limit
self.env_vars = env_vars or {}
self.extra_args = extra_args or []
self.host_port = host_port
self.user_data_dir = user_data_dir
self.container_user_data_dir = container_user_data_dir
def to_dict(self) -> Dict:
"""Convert this configuration to a dictionary.
Returns:
Dictionary representation of this configuration
"""
return {
"mode": self.mode,
"image": self.image,
"registry_file": self.registry_file,
"persistent": self.persistent,
"remove_on_exit": self.remove_on_exit,
"network": self.network,
"volumes": self.volumes,
"cpu_limit": self.cpu_limit,
"memory_limit": self.memory_limit,
"env_vars": self.env_vars,
"extra_args": self.extra_args,
"host_port": self.host_port,
"user_data_dir": self.user_data_dir,
"container_user_data_dir": self.container_user_data_dir
}
@staticmethod
def from_kwargs(kwargs: Dict) -> "DockerConfig":
"""Create a DockerConfig from a dictionary of keyword arguments.
Args:
kwargs: Dictionary of configuration options
Returns:
New DockerConfig instance
"""
return DockerConfig(
mode=kwargs.get("mode", "connect"),
image=kwargs.get("image"),
registry_file=kwargs.get("registry_file"),
persistent=kwargs.get("persistent", False),
remove_on_exit=kwargs.get("remove_on_exit", True),
network=kwargs.get("network"),
volumes=kwargs.get("volumes"),
cpu_limit=kwargs.get("cpu_limit", 1.0),
memory_limit=kwargs.get("memory_limit", "1.5g"),
env_vars=kwargs.get("env_vars"),
extra_args=kwargs.get("extra_args"),
host_port=kwargs.get("host_port"),
user_data_dir=kwargs.get("user_data_dir"),
container_user_data_dir=kwargs.get("container_user_data_dir", "/data")
)
def clone(self, **kwargs) -> "DockerConfig":
"""Create a copy of this configuration with updated values.
Args:
**kwargs: Key-value pairs of configuration options to update
Returns:
DockerConfig: A new instance with the specified updates
"""
config_dict = self.to_dict()
config_dict.update(kwargs)
return DockerConfig.from_kwargs(config_dict)
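`clone` is a `to_dict` → `update` → `from_kwargs` round-trip, so the copy inherits every field and only the passed overrides change. A toy model of that round-trip (stand-in class, not the real DockerConfig):

```python
from typing import Dict, Optional

class MiniConfig:
    """Toy config mirroring the to_dict -> update -> from_kwargs clone above."""
    def __init__(self, mode: str = "connect", image: Optional[str] = None):
        self.mode = mode
        self.image = image

    def to_dict(self) -> Dict:
        return {"mode": self.mode, "image": self.image}

    @staticmethod
    def from_kwargs(kwargs: Dict) -> "MiniConfig":
        return MiniConfig(mode=kwargs.get("mode", "connect"), image=kwargs.get("image"))

    def clone(self, **kwargs) -> "MiniConfig":
        d = self.to_dict()
        d.update(kwargs)  # overrides win; untouched fields carry over
        return MiniConfig.from_kwargs(d)

base = MiniConfig(mode="connect", image="chrome:latest")
launched = base.clone(mode="launch")  # base is left unmodified
```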


@@ -0,0 +1,457 @@
"""Browser profile management module for Crawl4AI.
This module provides functionality for creating and managing browser profiles
that can be used for authenticated browsing.
"""
import os
import asyncio
import signal
import sys
import datetime
import uuid
import shutil
from typing import List, Dict, Optional, Any
from colorama import Fore, Style, init
from ..async_configs import BrowserConfig
from ..async_logger import AsyncLogger, AsyncLoggerBase
from ..utils import get_home_folder
class BrowserProfileManager:
"""Manages browser profiles for Crawl4AI.
This class provides functionality to create and manage browser profiles
that can be used for authenticated browsing with Crawl4AI.
Profiles are stored by default in ~/.crawl4ai/profiles/
"""
def __init__(self, logger: Optional[AsyncLoggerBase] = None):
"""Initialize the BrowserProfileManager.
Args:
logger: Logger for outputting messages. If None, a default AsyncLogger is created.
"""
# Initialize colorama for colorful terminal output
init()
# Fall back to a default logger if none (or an incompatible one) is provided
if isinstance(logger, AsyncLoggerBase):
self.logger = logger
else:
self.logger = AsyncLogger(verbose=True)
# Ensure profiles directory exists
self.profiles_dir = os.path.join(get_home_folder(), "profiles")
os.makedirs(self.profiles_dir, exist_ok=True)
async def create_profile(self,
profile_name: Optional[str] = None,
browser_config: Optional[BrowserConfig] = None) -> Optional[str]:
"""Create a browser profile interactively.
Args:
profile_name: Name for the profile. If None, a name is generated.
browser_config: Configuration for the browser. If None, a default configuration is used.
Returns:
Path to the created profile directory, or None if creation failed
"""
# Create default browser config if none provided
if browser_config is None:
browser_config = BrowserConfig(
browser_type="chromium",
headless=False, # Must be visible for user interaction
verbose=True
)
else:
# Ensure headless is False for user interaction
browser_config.headless = False
# Generate profile name if not provided
if not profile_name:
timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
profile_name = f"profile_{timestamp}_{uuid.uuid4().hex[:6]}"
# Sanitize profile name (replace spaces and special chars)
profile_name = "".join(c if c.isalnum() or c in "-_" else "_" for c in profile_name)
# Set user data directory
profile_path = os.path.join(self.profiles_dir, profile_name)
os.makedirs(profile_path, exist_ok=True)
# Print instructions for the user with colorama formatting
border = f"{Fore.CYAN}{'='*80}{Style.RESET_ALL}"
self.logger.info(f"\n{border}", tag="PROFILE")
self.logger.info(f"Creating browser profile: {Fore.GREEN}{profile_name}{Style.RESET_ALL}", tag="PROFILE")
self.logger.info(f"Profile directory: {Fore.YELLOW}{profile_path}{Style.RESET_ALL}", tag="PROFILE")
self.logger.info("\nInstructions:", tag="PROFILE")
self.logger.info("1. A browser window will open for you to set up your profile.", tag="PROFILE")
self.logger.info(f"2. {Fore.CYAN}Log in to websites{Style.RESET_ALL}, configure settings, etc. as needed.", tag="PROFILE")
self.logger.info(f"3. When you're done, {Fore.YELLOW}press 'q' in this terminal{Style.RESET_ALL} to close the browser.", tag="PROFILE")
self.logger.info("4. The profile will be saved and ready to use with Crawl4AI.", tag="PROFILE")
self.logger.info(f"{border}\n", tag="PROFILE")
# Import the necessary classes with local imports to avoid circular references
from .strategies import CDPBrowserStrategy
# Set browser config to use the profile path
browser_config.user_data_dir = profile_path
# Create a CDP browser strategy for the profile creation
browser_strategy = CDPBrowserStrategy(browser_config, self.logger)
# Set up signal handlers to ensure cleanup on interrupt
original_sigint = signal.getsignal(signal.SIGINT)
original_sigterm = signal.getsignal(signal.SIGTERM)
# Define cleanup handler for signals
async def cleanup_handler(sig, frame):
self.logger.warning("\nCleaning up browser process...", tag="PROFILE")
await browser_strategy.close()
# Restore original signal handlers
signal.signal(signal.SIGINT, original_sigint)
signal.signal(signal.SIGTERM, original_sigterm)
if sig == signal.SIGINT:
self.logger.error("Profile creation interrupted. Profile may be incomplete.", tag="PROFILE")
sys.exit(1)
# Set signal handlers
def sigint_handler(sig, frame):
asyncio.create_task(cleanup_handler(sig, frame))
signal.signal(signal.SIGINT, sigint_handler)
signal.signal(signal.SIGTERM, sigint_handler)
# Event to signal when user is done with the browser
user_done_event = asyncio.Event()
# Run keyboard input loop in a separate task
async def listen_for_quit_command():
import termios
import tty
import select
# First output the prompt
self.logger.info(f"{Fore.CYAN}Press '{Fore.WHITE}q{Fore.CYAN}' when you've finished using the browser...{Style.RESET_ALL}", tag="PROFILE")
# Save original terminal settings
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
# Switch to non-canonical mode (no line buffering)
tty.setcbreak(fd)
while True:
# Check if input is available (non-blocking)
readable, _, _ = select.select([sys.stdin], [], [], 0.5)
if readable:
key = sys.stdin.read(1)
if key.lower() == 'q':
self.logger.info(f"{Fore.GREEN}Closing browser and saving profile...{Style.RESET_ALL}", tag="PROFILE")
user_done_event.set()
return
# Check if the browser process has already exited
if browser_strategy.browser_process and browser_strategy.browser_process.poll() is not None:
self.logger.info("Browser already closed. Ending input listener.", tag="PROFILE")
user_done_event.set()
return
await asyncio.sleep(0.1)
finally:
# Restore terminal settings
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
try:
# Start the browser
await browser_strategy.start()
# Check if browser started successfully
if not browser_strategy.browser_process:
self.logger.error("Failed to start browser process.", tag="PROFILE")
return None
self.logger.info(f"Browser launched. {Fore.CYAN}Waiting for you to finish...{Style.RESET_ALL}", tag="PROFILE")
# Start listening for keyboard input
listener_task = asyncio.create_task(listen_for_quit_command())
# Wait for either the user to press 'q' or for the browser process to exit naturally
while not user_done_event.is_set() and browser_strategy.browser_process.poll() is None:
await asyncio.sleep(0.5)
# Cancel the listener task if it's still running
if not listener_task.done():
listener_task.cancel()
try:
await listener_task
except asyncio.CancelledError:
pass
# If the browser is still running and the user pressed 'q', terminate it
if browser_strategy.browser_process.poll() is None and user_done_event.is_set():
self.logger.info("Terminating browser process...", tag="PROFILE")
await browser_strategy.close()
self.logger.success(f"Browser closed. Profile saved at: {Fore.GREEN}{profile_path}{Style.RESET_ALL}", tag="PROFILE")
except Exception as e:
self.logger.error(f"Error creating profile: {str(e)}", tag="PROFILE")
await browser_strategy.close()
return None
finally:
# Restore original signal handlers
signal.signal(signal.SIGINT, original_sigint)
signal.signal(signal.SIGTERM, original_sigterm)
# Make sure browser is fully cleaned up
await browser_strategy.close()
# Return the profile path
return profile_path
def list_profiles(self) -> List[Dict[str, Any]]:
"""List all available browser profiles.
Returns:
List of dictionaries containing profile information
"""
if not os.path.exists(self.profiles_dir):
return []
profiles = []
for name in os.listdir(self.profiles_dir):
profile_path = os.path.join(self.profiles_dir, name)
# Skip if not a directory
if not os.path.isdir(profile_path):
continue
# Check if this looks like a valid browser profile
# For Chromium: Look for Preferences file
# For Firefox: Look for prefs.js file
is_valid = False
if os.path.exists(os.path.join(profile_path, "Preferences")) or \
os.path.exists(os.path.join(profile_path, "Default", "Preferences")):
is_valid = "chromium"
elif os.path.exists(os.path.join(profile_path, "prefs.js")):
is_valid = "firefox"
if is_valid:
# Get creation time
created = datetime.datetime.fromtimestamp(
os.path.getctime(profile_path)
)
profiles.append({
"name": name,
"path": profile_path,
"created": created,
"type": is_valid
})
# Sort by creation time, newest first
profiles.sort(key=lambda x: x["created"], reverse=True)
return profiles
def get_profile_path(self, profile_name: str) -> Optional[str]:
"""Get the full path to a profile by name.
Args:
profile_name: Name of the profile (not the full path)
Returns:
Full path to the profile directory, or None if not found
"""
profile_path = os.path.join(self.profiles_dir, profile_name)
# Check if path exists and is a valid profile
if not os.path.isdir(profile_path):
# Check if profile_name itself is a full path
if os.path.isabs(profile_name):
profile_path = profile_name
else:
return None
# Look for profile indicators
is_profile = (
os.path.exists(os.path.join(profile_path, "Preferences")) or
os.path.exists(os.path.join(profile_path, "Default", "Preferences")) or
os.path.exists(os.path.join(profile_path, "prefs.js"))
)
if not is_profile:
return None # Not a valid browser profile
return profile_path
def delete_profile(self, profile_name_or_path: str) -> bool:
"""Delete a browser profile by name or path.
Args:
profile_name_or_path: Name of the profile or full path to profile directory
Returns:
True if the profile was deleted successfully, False otherwise
"""
# Determine if input is a name or a path
if os.path.isabs(profile_name_or_path):
# Full path provided
profile_path = profile_name_or_path
else:
# Just a name provided, construct path
profile_path = os.path.join(self.profiles_dir, profile_name_or_path)
# Check if path exists and is a valid profile
if not os.path.isdir(profile_path):
return False
# Look for profile indicators
is_profile = (
os.path.exists(os.path.join(profile_path, "Preferences")) or
os.path.exists(os.path.join(profile_path, "Default", "Preferences")) or
os.path.exists(os.path.join(profile_path, "prefs.js"))
)
if not is_profile:
return False # Not a valid browser profile
# Delete the profile directory
try:
shutil.rmtree(profile_path)
return True
except Exception:
return False
async def interactive_manager(self, crawl_callback=None):
"""Launch an interactive profile management console.
Args:
crawl_callback: Function to call when selecting option to use
a profile for crawling. It will be called with (profile_path, url).
"""
while True:
self.logger.info(f"\n{Fore.CYAN}Profile Management Options:{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"1. {Fore.GREEN}Create a new profile{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"2. {Fore.YELLOW}List available profiles{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"3. {Fore.RED}Delete a profile{Style.RESET_ALL}", tag="MENU")
# Only show crawl option if callback provided
if crawl_callback:
self.logger.info(f"4. {Fore.CYAN}Use a profile to crawl a website{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"5. {Fore.MAGENTA}Exit{Style.RESET_ALL}", tag="MENU")
exit_option = "5"
else:
self.logger.info(f"4. {Fore.MAGENTA}Exit{Style.RESET_ALL}", tag="MENU")
exit_option = "4"
choice = input(f"\n{Fore.CYAN}Enter your choice (1-{exit_option}): {Style.RESET_ALL}")
if choice == "1":
# Create new profile
name = input(f"{Fore.GREEN}Enter a name for the new profile (or press Enter for auto-generated name): {Style.RESET_ALL}")
await self.create_profile(name or None)
elif choice == "2":
# List profiles
profiles = self.list_profiles()
if not profiles:
self.logger.warning(" No profiles found. Create one first with option 1.", tag="PROFILES")
continue
# Print profile information with colorama formatting
self.logger.info("\nAvailable profiles:", tag="PROFILES")
for i, profile in enumerate(profiles):
self.logger.info(f"[{i+1}] {Fore.CYAN}{profile['name']}{Style.RESET_ALL}", tag="PROFILES")
self.logger.info(f" Path: {Fore.YELLOW}{profile['path']}{Style.RESET_ALL}", tag="PROFILES")
self.logger.info(f" Created: {profile['created'].strftime('%Y-%m-%d %H:%M:%S')}", tag="PROFILES")
self.logger.info(f" Browser type: {profile['type']}", tag="PROFILES")
self.logger.info("", tag="PROFILES") # Empty line for spacing
elif choice == "3":
# Delete profile
profiles = self.list_profiles()
if not profiles:
self.logger.warning("No profiles found to delete", tag="PROFILES")
continue
# Display numbered list
self.logger.info(f"\n{Fore.YELLOW}Available profiles:{Style.RESET_ALL}", tag="PROFILES")
for i, profile in enumerate(profiles):
self.logger.info(f"[{i+1}] {profile['name']}", tag="PROFILES")
# Get profile to delete
profile_idx = input(f"{Fore.RED}Enter the number of the profile to delete (or 'c' to cancel): {Style.RESET_ALL}")
if profile_idx.lower() == 'c':
continue
try:
idx = int(profile_idx) - 1
if 0 <= idx < len(profiles):
profile_name = profiles[idx]["name"]
self.logger.info(f"Deleting profile: {Fore.YELLOW}{profile_name}{Style.RESET_ALL}", tag="PROFILES")
# Confirm deletion
confirm = input(f"{Fore.RED}Are you sure you want to delete this profile? (y/n): {Style.RESET_ALL}")
if confirm.lower() == 'y':
success = self.delete_profile(profiles[idx]["path"])
if success:
self.logger.success(f"Profile {Fore.GREEN}{profile_name}{Style.RESET_ALL} deleted successfully", tag="PROFILES")
else:
self.logger.error(f"Failed to delete profile {Fore.RED}{profile_name}{Style.RESET_ALL}", tag="PROFILES")
else:
self.logger.error("Invalid profile number", tag="PROFILES")
except ValueError:
self.logger.error("Please enter a valid number", tag="PROFILES")
elif choice == "4" and crawl_callback:
# Use profile to crawl a site
profiles = self.list_profiles()
if not profiles:
self.logger.warning("No profiles found. Create one first.", tag="PROFILES")
continue
# Display numbered list
self.logger.info(f"\n{Fore.YELLOW}Available profiles:{Style.RESET_ALL}", tag="PROFILES")
for i, profile in enumerate(profiles):
self.logger.info(f"[{i+1}] {profile['name']}", tag="PROFILES")
# Get profile to use
profile_idx = input(f"{Fore.CYAN}Enter the number of the profile to use (or 'c' to cancel): {Style.RESET_ALL}")
if profile_idx.lower() == 'c':
continue
try:
idx = int(profile_idx) - 1
if 0 <= idx < len(profiles):
profile_path = profiles[idx]["path"]
url = input(f"{Fore.CYAN}Enter the URL to crawl: {Style.RESET_ALL}")
if url:
# Call the provided crawl callback
await crawl_callback(profile_path, url)
else:
self.logger.error("No URL provided", tag="CRAWL")
else:
self.logger.error("Invalid profile number", tag="PROFILES")
except ValueError:
self.logger.error("Please enter a valid number", tag="PROFILES")
elif (choice == "4" and not crawl_callback) or (choice == "5" and crawl_callback):
# Exit
self.logger.info("Exiting profile management", tag="MENU")
break
else:
self.logger.error(f"Invalid choice. Please enter a number between 1 and {exit_option}.", tag="MENU")
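The profile-detection rule used by `list_profiles` and `get_profile_path` above (Chromium profiles carry a `Preferences` file, possibly under `Default/`; Firefox profiles carry `prefs.js`) can be isolated as a small helper. This is a standalone sketch of that file-based check only, not part of the class:

```python
import os
import tempfile

# Standalone sketch of the detection rule used by list_profiles():
# a Chromium profile carries a "Preferences" file (possibly under "Default/"),
# a Firefox profile carries "prefs.js". Returns the browser type or None.
def detect_profile_type(profile_path: str):
    if os.path.exists(os.path.join(profile_path, "Preferences")) or \
       os.path.exists(os.path.join(profile_path, "Default", "Preferences")):
        return "chromium"
    if os.path.exists(os.path.join(profile_path, "prefs.js")):
        return "firefox"
    return None

with tempfile.TemporaryDirectory() as d:
    # An empty prefs.js is enough to satisfy the Firefox marker check
    open(os.path.join(d, "prefs.js"), "w").close()
    kind = detect_profile_type(d)
```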

View File

@@ -0,0 +1,13 @@
from .base import BaseBrowserStrategy
from .cdp import CDPBrowserStrategy
from .docker_strategy import DockerBrowserStrategy
from .playwright import PlaywrightBrowserStrategy
from .builtin import BuiltinBrowserStrategy
__all__ = [
"BaseBrowserStrategy",
"CDPBrowserStrategy",
"DockerBrowserStrategy",
"PlaywrightBrowserStrategy",
"BuiltinBrowserStrategy",
]
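A caller typically picks one of the exported strategies based on its browser configuration. The mapping below is a hypothetical dispatch sketch with stub stand-in classes, not the library's actual selection logic:

```python
# Hypothetical dispatch sketch: map a config "mode" string to a strategy class.
# The class names mirror the package exports above, but these are empty stubs,
# not the real implementations.
class PlaywrightBrowserStrategy: ...
class CDPBrowserStrategy: ...
class BuiltinBrowserStrategy: ...

STRATEGIES = {
    "playwright": PlaywrightBrowserStrategy,
    "cdp": CDPBrowserStrategy,
    "builtin": BuiltinBrowserStrategy,
}

def pick_strategy(mode: str):
    try:
        return STRATEGIES[mode]
    except KeyError:
        raise ValueError(f"Unknown browser mode: {mode}")

chosen = pick_strategy("cdp")
```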

View File

@@ -0,0 +1,601 @@
"""Browser strategies module for Crawl4AI.
This module implements the browser strategy pattern for different
browser implementations, including Playwright, CDP, and builtin browsers.
"""
from abc import ABC, abstractmethod
import asyncio
import json
import hashlib
import os
import time
from typing import Optional, Tuple, List
from playwright.async_api import BrowserContext, Page
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig, CrawlerRunConfig
from ...config import DOWNLOAD_PAGE_TIMEOUT
from ...js_snippet import load_js_script
from ..utils import get_playwright
class BaseBrowserStrategy(ABC):
"""Base class for all browser strategies.
This abstract class defines the interface that all browser strategies
must implement. It handles common functionality like context caching,
browser configuration, and session management.
"""
_playwright_instance = None
@classmethod
async def get_playwright(cls):
"""Get a Playwright instance.
Returns:
Playwright: a Playwright instance (currently created fresh on each call)
"""
# Singleton reuse is intentionally disabled for now; the `or True`
# forces a fresh Playwright instance on every call.
if cls._playwright_instance is None or True:
cls._playwright_instance = await get_playwright()
return cls._playwright_instance
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the strategy with configuration and logger.
Args:
config: Browser configuration
logger: Logger for recording events and errors
"""
self.config = config
self.logger = logger
self.browser = None
self.default_context = None
# Context management
self.contexts_by_config = {} # config_signature -> context
self._contexts_lock = asyncio.Lock()
# Session management
self.sessions = {}
self.session_ttl = 1800 # 30 minutes default
# Playwright instance
self.playwright = None
# Flag set during close() to guard against race conditions
self.shutting_down = False
@abstractmethod
async def start(self):
"""Start the browser.
This method should be implemented by concrete strategies to initialize
the browser in the appropriate way (direct launch, CDP connection, etc.)
Returns:
self: For method chaining
"""
# Base implementation gets the playwright instance
self.playwright = await self.get_playwright()
return self
@abstractmethod
async def _generate_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
"""Create a new page (and its owning context) for the given run configuration."""
pass
async def get_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
"""Get a page with specified configuration.
This method should be implemented by concrete strategies to create
or retrieve a page according to their browser management approach.
Args:
crawlerRunConfig: Crawler run configuration
Returns:
Tuple of (Page, BrowserContext)
"""
# Clean up expired sessions first
self._cleanup_expired_sessions()
# If a session_id is provided and we already have it, reuse that page + context
if crawlerRunConfig.session_id and crawlerRunConfig.session_id in self.sessions:
context, page, _ = self.sessions[crawlerRunConfig.session_id]
# Update last-used timestamp
self.sessions[crawlerRunConfig.session_id] = (context, page, time.time())
return page, context
page, context = await self._generate_page(crawlerRunConfig)
import uuid
setattr(page, "guid", uuid.uuid4())
# If a session_id is specified, store this session so we can reuse later
if crawlerRunConfig.session_id:
self.sessions[crawlerRunConfig.session_id] = (context, page, time.time())
return page, context
async def get_pages(self, crawlerRunConfig: CrawlerRunConfig, count: int = 1) -> List[Tuple[Page, BrowserContext]]:
"""Get multiple pages with the same configuration.
Args:
crawlerRunConfig: Configuration for the pages
count: Number of pages to create
Returns:
List of (Page, Context) tuples
"""
pages = []
for _ in range(count):
page, context = await self.get_page(crawlerRunConfig)
pages.append((page, context))
return pages
async def get_opened_pages(self) -> List[Page]:
"""Get all opened pages in the browser."""
return [page for context in self.contexts_by_config.values() for page in context.pages]
def _build_browser_args(self) -> dict:
"""Build browser launch arguments from config.
Returns:
dict: Browser launch arguments for Playwright
"""
# Define common browser arguments that improve performance and stability
args = [
"--no-sandbox",
"--no-first-run",
"--no-default-browser-check",
"--window-position=0,0",
"--ignore-certificate-errors",
"--ignore-certificate-errors-spki-list",
"--force-color-profile=srgb",
"--mute-audio",
"--disable-gpu",
"--disable-gpu-compositing",
"--disable-software-rasterizer",
"--disable-dev-shm-usage",
"--disable-infobars",
"--disable-blink-features=AutomationControlled",
"--disable-renderer-backgrounding",
"--disable-ipc-flooding-protection",
"--disable-background-timer-throttling",
f"--window-size={self.config.viewport_width},{self.config.viewport_height}",
]
# Define browser disable options for light mode
browser_disable_options = [
"--disable-backgrounding-occluded-windows",
"--disable-breakpad",
"--disable-client-side-phishing-detection",
"--disable-component-extensions-with-background-pages",
"--disable-default-apps",
"--disable-extensions",
"--disable-features=TranslateUI",
"--disable-hang-monitor",
"--disable-popup-blocking",
"--disable-prompt-on-repost",
"--disable-sync",
"--metrics-recording-only",
"--password-store=basic",
"--use-mock-keychain",
]
# Apply light mode settings if enabled
if self.config.light_mode:
args.extend(browser_disable_options)
# Apply text mode settings if enabled (disables images, JS, etc)
if self.config.text_mode:
args.extend([
"--blink-settings=imagesEnabled=false",
"--disable-remote-fonts",
"--disable-images",
"--disable-javascript",
"--disable-software-rasterizer",
"--disable-dev-shm-usage",
])
# Add any extra arguments from the config
if self.config.extra_args:
args.extend(self.config.extra_args)
# Build the core browser args dictionary
browser_args = {"headless": self.config.headless, "args": args}
# Add chrome channel if specified
if self.config.chrome_channel:
browser_args["channel"] = self.config.chrome_channel
# Configure downloads
if self.config.accept_downloads:
browser_args["downloads_path"] = self.config.downloads_path or os.path.join(
os.getcwd(), "downloads"
)
os.makedirs(browser_args["downloads_path"], exist_ok=True)
# Check for user data directory
if self.config.user_data_dir:
# Ensure the directory exists
os.makedirs(self.config.user_data_dir, exist_ok=True)
browser_args["user_data_dir"] = self.config.user_data_dir
# Configure proxy settings
if self.config.proxy or self.config.proxy_config:
from playwright.async_api import ProxySettings
proxy_settings = (
ProxySettings(server=self.config.proxy)
if self.config.proxy
else ProxySettings(
server=self.config.proxy_config.server,
username=self.config.proxy_config.username,
password=self.config.proxy_config.password,
)
)
browser_args["proxy"] = proxy_settings
return browser_args
def _make_config_signature(self, crawlerRunConfig: CrawlerRunConfig) -> str:
"""Create a signature hash from configuration for context caching.
Converts the crawlerRunConfig into a dict, excludes ephemeral fields,
then returns a hash of the sorted JSON. This yields a stable signature
that identifies configurations requiring a unique browser context.
Args:
crawlerRunConfig: Crawler run configuration
Returns:
str: Unique hash for this configuration
"""
config_dict = crawlerRunConfig.__dict__.copy()
# Exclude items that do not affect browser-level setup
ephemeral_keys = [
"session_id",
"js_code",
"scraping_strategy",
"extraction_strategy",
"chunking_strategy",
"cache_mode",
"content_filter",
"semaphore_count",
"url"
]
for key in ephemeral_keys:
if key in config_dict:
del config_dict[key]
# Convert to canonical JSON string
signature_json = json.dumps(config_dict, sort_keys=True, default=str)
# Hash the JSON so we get a compact, unique string
signature_hash = hashlib.sha256(signature_json.encode("utf-8")).hexdigest()
return signature_hash
async def create_browser_context(self, crawlerRunConfig: Optional[CrawlerRunConfig] = None) -> BrowserContext:
"""Creates and returns a new browser context with configured settings.
Args:
crawlerRunConfig: Configuration object for the crawler run
Returns:
BrowserContext: Browser context object with the specified configurations
"""
if not self.browser:
raise ValueError("Browser must be initialized before creating context")
# Base settings
user_agent = self.config.headers.get("User-Agent", self.config.user_agent)
viewport_settings = {
"width": self.config.viewport_width,
"height": self.config.viewport_height,
}
proxy_settings = {"server": self.config.proxy} if self.config.proxy else None
# Define blocked extensions for resource optimization
blocked_extensions = [
# Images
"jpg", "jpeg", "png", "gif", "webp", "svg", "ico", "bmp", "tiff", "psd",
# Fonts
"woff", "woff2", "ttf", "otf", "eot",
# Media
"mp4", "webm", "ogg", "avi", "mov", "wmv", "flv", "m4v", "mp3", "wav", "aac",
"m4a", "opus", "flac",
# Documents
"pdf", "doc", "docx", "xls", "xlsx", "ppt", "pptx",
# Archives
"zip", "rar", "7z", "tar", "gz",
# Scripts and data
"xml", "swf", "wasm",
]
# Common context settings
context_settings = {
"user_agent": user_agent,
"viewport": viewport_settings,
"proxy": proxy_settings,
"accept_downloads": self.config.accept_downloads,
"storage_state": self.config.storage_state,
"ignore_https_errors": self.config.ignore_https_errors,
"device_scale_factor": 1.0,
"java_script_enabled": self.config.java_script_enabled,
}
# Apply text mode settings if enabled
if self.config.text_mode:
text_mode_settings = {
"has_touch": False,
"is_mobile": False,
"java_script_enabled": False, # Disable javascript in text mode
}
# Update context settings with text mode settings
context_settings.update(text_mode_settings)
if self.logger:
self.logger.debug("Text mode enabled for browser context", tag="BROWSER")
# Handle storage state properly - this is key for persistence
if self.config.storage_state:
if self.logger:
if isinstance(self.config.storage_state, str):
self.logger.debug(f"Using storage state from file: {self.config.storage_state}", tag="BROWSER")
else:
self.logger.debug("Using storage state from config object", tag="BROWSER")
if self.config.user_data_dir:
# For CDP-based browsers, storage persistence is typically handled by the user_data_dir
# at the browser level, but we'll create a storage_state location for Playwright as well
storage_path = os.path.join(self.config.user_data_dir, "storage_state.json")
if not os.path.exists(storage_path):
# Create parent directory if it doesn't exist
os.makedirs(os.path.dirname(storage_path), exist_ok=True)
with open(storage_path, "w") as f:
json.dump({}, f)
self.config.storage_state = storage_path
if self.logger:
self.logger.debug(f"Using user data directory: {self.config.user_data_dir}", tag="BROWSER")
# Apply crawler-specific configurations if provided
if crawlerRunConfig:
# Check if there is value for crawlerRunConfig.proxy_config set add that to context
if crawlerRunConfig.proxy_config:
proxy_settings = {
"server": crawlerRunConfig.proxy_config.server,
}
if crawlerRunConfig.proxy_config.username:
proxy_settings.update({
"username": crawlerRunConfig.proxy_config.username,
"password": crawlerRunConfig.proxy_config.password,
})
context_settings["proxy"] = proxy_settings
# Create and return the context
try:
# Create the context with appropriate settings
context = await self.browser.new_context(**context_settings)
# Apply text mode resource blocking if enabled
if self.config.text_mode:
# Create and apply route patterns for each extension
for ext in blocked_extensions:
await context.route(f"**/*.{ext}", lambda route: route.abort())
return context
except Exception as e:
if self.logger:
self.logger.error(f"Error creating browser context: {str(e)}", tag="BROWSER")
# Fallback to basic context creation if the advanced settings fail
return await self.browser.new_context()
async def setup_context(self, context: BrowserContext, crawlerRunConfig: Optional[CrawlerRunConfig] = None):
"""Set up a browser context with the configured options.
Args:
context: The browser context to set up
crawlerRunConfig: Configuration object containing all browser settings
"""
# Set HTTP headers
if self.config.headers:
await context.set_extra_http_headers(self.config.headers)
# Add cookies
if self.config.cookies:
await context.add_cookies(self.config.cookies)
# Storage state is applied at context creation (see create_browser_context);
# this call only snapshots the current state and does not re-apply it.
if self.config.storage_state:
await context.storage_state(path=None)
# Configure downloads
if self.config.accept_downloads:
context.set_default_timeout(DOWNLOAD_PAGE_TIMEOUT)
context.set_default_navigation_timeout(DOWNLOAD_PAGE_TIMEOUT)
if self.config.downloads_path:
# NOTE: reaches into Playwright internals to toggle download options
# after context creation; may break across Playwright versions.
context._impl_obj._options["accept_downloads"] = True
context._impl_obj._options["downloads_path"] = self.config.downloads_path
# Handle user agent and browser hints
if self.config.user_agent:
combined_headers = {
"User-Agent": self.config.user_agent,
"sec-ch-ua": self.config.browser_hint,
}
combined_headers.update(self.config.headers)
await context.set_extra_http_headers(combined_headers)
# Add default cookie
target_url = (crawlerRunConfig and crawlerRunConfig.url) or "https://crawl4ai.com/"
await context.add_cookies(
[
{
"name": "cookiesEnabled",
"value": "true",
"url": target_url,
}
]
)
# Handle navigator overrides
if crawlerRunConfig:
if (
crawlerRunConfig.override_navigator
or crawlerRunConfig.simulate_user
or crawlerRunConfig.magic
):
await context.add_init_script(load_js_script("navigator_overrider"))
async def kill_session(self, session_id: str):
"""Kill a browser session and clean up resources.
Args:
session_id (str): The session ID to kill.
"""
if session_id not in self.sessions:
return
context, page, _ = self.sessions[session_id]
# Close the page
try:
await page.close()
except Exception as e:
if self.logger:
self.logger.error(f"Error closing page for session {session_id}: {str(e)}", tag="BROWSER")
# Remove session from tracking
del self.sessions[session_id]
# Clean up any contexts that no longer have pages
await self._cleanup_unused_contexts()
if self.logger:
self.logger.debug(f"Killed session: {session_id}", tag="BROWSER")
async def _cleanup_unused_contexts(self):
"""Clean up contexts that no longer have any pages."""
async with self._contexts_lock:
# Get all contexts we're managing
contexts_to_check = list(self.contexts_by_config.values())
for context in contexts_to_check:
# Check if the context has any pages left
if not context.pages:
# No pages left, we can close this context
config_signature = next((sig for sig, ctx in self.contexts_by_config.items()
if ctx == context), None)
if config_signature:
try:
await context.close()
del self.contexts_by_config[config_signature]
if self.logger:
self.logger.debug("Closed unused context", tag="BROWSER")
except Exception as e:
if self.logger:
self.logger.error(f"Error closing unused context: {str(e)}", tag="BROWSER")
def _cleanup_expired_sessions(self):
"""Clean up expired sessions based on TTL."""
current_time = time.time()
expired_sessions = [
sid
for sid, (_, _, last_used) in self.sessions.items()
if current_time - last_used > self.session_ttl
]
for sid in expired_sessions:
if self.logger:
self.logger.debug(f"Session expired: {sid}", tag="BROWSER")
asyncio.create_task(self.kill_session(sid))
async def close(self):
"""Close the browser and clean up resources.
This method handles common cleanup tasks like:
1. Persisting storage state if a user_data_dir is configured
2. Closing all sessions
3. Closing all browser contexts
4. Closing the browser
5. Stopping Playwright
Child classes should override this method to add their specific cleanup logic,
but should call super().close() to ensure common cleanup tasks are performed.
"""
# Set a flag to prevent race conditions during cleanup
self.shutting_down = True
try:
# Add brief delay if configured
if self.config.sleep_on_close:
await asyncio.sleep(0.5)
# Persist storage state if using a user data directory
if self.config.user_data_dir and self.browser:
for context in self.browser.contexts:
try:
# Ensure the directory exists
storage_dir = os.path.join(self.config.user_data_dir, "Default")
os.makedirs(storage_dir, exist_ok=True)
# Save storage state
storage_path = os.path.join(storage_dir, "storage_state.json")
await context.storage_state(path=storage_path)
if self.logger:
self.logger.debug("Storage state persisted before closing browser", tag="BROWSER")
except Exception as e:
if self.logger:
self.logger.warning(
message="Failed to ensure storage persistence: {error}",
tag="BROWSER",
params={"error": str(e)}
)
# Close all active sessions
session_ids = list(self.sessions.keys())
for session_id in session_ids:
await self.kill_session(session_id)
# Close all cached contexts
for ctx in self.contexts_by_config.values():
try:
await ctx.close()
except Exception as e:
if self.logger:
self.logger.error(
message="Error closing context: {error}",
tag="BROWSER",
params={"error": str(e)}
)
self.contexts_by_config.clear()
# Close the browser if it exists
if self.browser:
await self.browser.close()
self.browser = None
# Stop playwright
if self.playwright:
await self.playwright.stop()
self.playwright = None
except Exception as e:
if self.logger:
self.logger.error(
message="Error during browser cleanup: {error}",
tag="BROWSER",
params={"error": str(e)}
)
finally:
# Reset shutting down flag
self.shutting_down = False
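The context-caching key built by `_make_config_signature` above can be sketched in isolation: drop per-run ("ephemeral") fields, serialize the rest to canonical JSON, and hash it, so two runs that need the same browser-level setup map to the same context. The key list here is a subset for illustration:

```python
import hashlib
import json

# Standalone sketch of the signature scheme in _make_config_signature():
# exclude ephemeral per-run fields, serialize the rest with sorted keys,
# then hash so equivalent configs share one cache key.
EPHEMERAL_KEYS = {"session_id", "js_code", "cache_mode", "url"}  # illustrative subset

def config_signature(config: dict) -> str:
    stable = {k: v for k, v in config.items() if k not in EPHEMERAL_KEYS}
    canonical = json.dumps(stable, sort_keys=True, default=str)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two runs differing only in ephemeral fields produce the same signature
a = config_signature({"url": "https://a.example", "session_id": "s1", "viewport": 1080})
b = config_signature({"url": "https://b.example", "session_id": "s2", "viewport": 1080})
```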

View File

@@ -0,0 +1,468 @@
import asyncio
import os
import time
import json
import subprocess
import shutil
import signal
from typing import Optional, Dict, Any, Tuple
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig, CrawlerRunConfig
from playwright.async_api import Page, BrowserContext
from ...utils import get_home_folder
from ..utils import get_browser_executable, is_windows, is_browser_running, find_process_by_port, terminate_process
from .cdp import CDPBrowserStrategy
from .base import BaseBrowserStrategy
class BuiltinBrowserStrategy(CDPBrowserStrategy):
"""Built-in browser strategy.
This strategy extends the CDP strategy to use the built-in browser.
"""
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the built-in browser strategy.
Args:
config: Browser configuration
logger: Logger for recording events and errors
"""
super().__init__(config, logger)
self.builtin_browser_dir = os.path.join(get_home_folder(), "builtin-browser") if not self.config.user_data_dir else self.config.user_data_dir
self.builtin_config_file = os.path.join(self.builtin_browser_dir, "browser_config.json")
# Raise error if user data dir is already engaged
if self._check_user_dir_is_engaged(self.builtin_browser_dir):
raise Exception(f"User data directory {self.builtin_browser_dir} is already engaged by another browser instance.")
os.makedirs(self.builtin_browser_dir, exist_ok=True)
def _check_user_dir_is_engaged(self, user_data_dir: str) -> bool:
"""Check if the user data directory is already in use.
Returns:
bool: True if the directory is engaged, False otherwise
"""
# Scan the config file's port_map for an entry whose user_data_dir matches the given directory
if os.path.exists(self.builtin_config_file):
try:
with open(self.builtin_config_file, 'r') as f:
browser_info_dict = json.load(f)
# Check if user data dir is already engaged
for port_str, browser_info in browser_info_dict.get("port_map", {}).items():
if browser_info.get("user_data_dir") == user_data_dir:
return True
except Exception as e:
if self.logger:
self.logger.error(f"Error reading built-in browser config: {str(e)}", tag="BUILTIN")
return False
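The on-disk layout this check assumes can be exercised standalone. The sketch below mirrors the `port_map` format of `browser_config.json`; the helper name `user_dir_is_engaged` is illustrative and not part of the codebase:

```python
import json
import os
import tempfile

def user_dir_is_engaged(config_file: str, user_data_dir: str) -> bool:
    """Standalone sketch of the engagement check: scan the port_map in
    browser_config.json for an entry already using this user data dir."""
    if not os.path.exists(config_file):
        return False
    with open(config_file) as f:
        info = json.load(f)
    return any(
        entry.get("user_data_dir") == user_data_dir
        for entry in info.get("port_map", {}).values()
    )

# Build a sample config file with one registered browser on port 9222
with tempfile.TemporaryDirectory() as tmp:
    cfg = os.path.join(tmp, "browser_config.json")
    with open(cfg, "w") as f:
        json.dump({"port_map": {"9222": {"user_data_dir": "/tmp/profile-a"}}}, f)
    print(user_dir_is_engaged(cfg, "/tmp/profile-a"))  # True
    print(user_dir_is_engaged(cfg, "/tmp/profile-b"))  # False
```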
async def start(self):
"""Start or connect to the built-in browser.
Returns:
self: For method chaining
"""
# Initialize Playwright instance via base class method
await BaseBrowserStrategy.start(self)
try:
# Check for existing built-in browser (get_browser_info already checks if running)
browser_info = self.get_browser_info()
if browser_info:
if self.logger:
self.logger.info(f"Using existing built-in browser at {browser_info.get('cdp_url')}", tag="BROWSER")
self.config.cdp_url = browser_info.get('cdp_url')
else:
if self.logger:
self.logger.info("Built-in browser not found, launching new instance...", tag="BROWSER")
cdp_url = await self.launch_builtin_browser(
browser_type=self.config.browser_type,
debugging_port=self.config.debugging_port,
headless=self.config.headless,
)
if not cdp_url:
if self.logger:
self.logger.warning("Failed to launch built-in browser, falling back to regular CDP strategy", tag="BROWSER")
# Fall back to the regular CDP start flow; note CDPBrowserStrategy.start() calls BaseBrowserStrategy.start() again
return await CDPBrowserStrategy.start(self)
self.config.cdp_url = cdp_url
# Connect to the browser using CDP protocol
self.browser = await self.playwright.chromium.connect_over_cdp(self.config.cdp_url)
# Get or create default context
contexts = self.browser.contexts
if contexts:
self.default_context = contexts[0]
else:
self.default_context = await self.create_browser_context()
await self.setup_context(self.default_context)
if self.logger:
self.logger.debug(f"Connected to built-in browser at {self.config.cdp_url}", tag="BUILTIN")
return self
except Exception as e:
if self.logger:
self.logger.error(f"Failed to start built-in browser: {str(e)}", tag="BUILTIN")
# Note: partially initialized resources may still need cleanup here before re-raising
raise
def _get_builtin_browser_info(self, debugging_port: int, config_file: str, logger: Optional[AsyncLogger] = None) -> Optional[Dict[str, Any]]:
"""Get information about the built-in browser for a specific debugging port.
Args:
debugging_port: The debugging port to look for
config_file: Path to the config file
logger: Optional logger for recording events
Returns:
dict: Browser information or None if no running browser is configured for this port
"""
if not os.path.exists(config_file):
return None
try:
with open(config_file, 'r') as f:
browser_info_dict = json.load(f)
# Get browser info from port map
if isinstance(browser_info_dict, dict) and "port_map" in browser_info_dict:
port_str = str(debugging_port)
if port_str in browser_info_dict["port_map"]:
browser_info = browser_info_dict["port_map"][port_str]
# Check if the browser is still running
pids = browser_info.get('pid', '')
if isinstance(pids, str):
pids = [int(pid) for pid in pids.split() if pid.isdigit()]
elif isinstance(pids, int):
pids = [pids]
else:
pids = []
# No valid PIDs recorded; treat the entry as stale
if not pids:
if logger:
logger.warning(f"Built-in browser on port {debugging_port} has no valid PID", tag="BUILTIN")
# Remove this port from the dictionary
del browser_info_dict["port_map"][port_str]
with open(config_file, 'w') as f:
json.dump(browser_info_dict, f, indent=2)
return None
# Check if any of the PIDs are running
for pid in pids:
if is_browser_running(pid):
browser_info['pid'] = pid
break
else:
# If none of the PIDs are running, remove this port from the dictionary
if logger:
logger.warning(f"Built-in browser on port {debugging_port} is not running", tag="BUILTIN")
# Remove this port from the dictionary
del browser_info_dict["port_map"][port_str]
with open(config_file, 'w') as f:
json.dump(browser_info_dict, f, indent=2)
return None
return browser_info
return None
except Exception as e:
if logger:
logger.error(f"Error reading built-in browser config: {str(e)}", tag="BUILTIN")
return None
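The `pid` field read above may be stored as a single int, a whitespace-separated string of PIDs, or be missing entirely. That normalization step can be isolated as a pure function (the name `normalize_pids` is illustrative, not part of the codebase):

```python
def normalize_pids(pids):
    """Coerce a stored 'pid' field into a list of integer PIDs.
    Accepts an int, a whitespace-separated string, or anything else
    (treated as empty)."""
    if isinstance(pids, str):
        return [int(p) for p in pids.split() if p.isdigit()]
    if isinstance(pids, int):
        return [pids]
    return []

print(normalize_pids("1234 5678"))  # [1234, 5678]
print(normalize_pids(4321))         # [4321]
print(normalize_pids(None))         # []
```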
def get_browser_info(self) -> Optional[Dict[str, Any]]:
"""Get information about the current built-in browser instance.
Returns:
dict: Browser information or None if no running browser is configured
"""
return self._get_builtin_browser_info(
debugging_port=self.config.debugging_port,
config_file=self.builtin_config_file,
logger=self.logger
)
async def launch_builtin_browser(self,
browser_type: str = "chromium",
debugging_port: int = 9222,
headless: bool = True) -> Optional[str]:
"""Launch a browser in the background for use as the built-in browser.
Args:
browser_type: Type of browser to launch ('chromium' or 'firefox')
debugging_port: Port to use for CDP debugging
headless: Whether to run in headless mode
Returns:
str: CDP URL for the browser, or None if launch failed
"""
# Check if there's an existing browser still running
browser_info = self._get_builtin_browser_info(
debugging_port=debugging_port,
config_file=self.builtin_config_file,
logger=self.logger
)
if browser_info:
if self.logger:
self.logger.info(f"Built-in browser is already running on port {debugging_port}", tag="BUILTIN")
return browser_info.get('cdp_url')
# Create a user data directory for the built-in browser
user_data_dir = os.path.join(self.builtin_browser_dir, "user_data")
# Raise error if user data dir is already engaged
if self._check_user_dir_is_engaged(user_data_dir):
raise Exception(f"User data directory {user_data_dir} is already engaged by another browser instance.")
# Create the user data directory if it doesn't exist
os.makedirs(user_data_dir, exist_ok=True)
# Prepare browser launch arguments
browser_args = super()._build_browser_args()
browser_path = await get_browser_executable(browser_type)
base_args = [browser_path]
if browser_type == "chromium":
args = [
f"--remote-debugging-port={debugging_port}",
f"--user-data-dir={user_data_dir}",
]
# if headless:
# args.append("--headless=new")
elif browser_type == "firefox":
args = [
"--remote-debugging-port",
str(debugging_port),
"--profile",
user_data_dir,
]
if headless:
args.append("--headless")
else:
if self.logger:
self.logger.error(f"Browser type {browser_type} not supported for built-in browser", tag="BUILTIN")
return None
args = base_args + browser_args["args"] + args
try:
# Check if the port is already in use
PID = ""
cdp_url = f"http://localhost:{debugging_port}"
config_json = await self._check_port_in_use(cdp_url)
if config_json:
if self.logger:
self.logger.info(f"Port {debugging_port} is already in use.", tag="BUILTIN")
PID = find_process_by_port(debugging_port)
else:
# Start the browser process detached
process = None
if is_windows():
process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
)
else:
process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=os.setpgrp # Start in a new process group
)
# Wait briefly to ensure the process starts successfully
await asyncio.sleep(2.0)
# Check if the process is still running
if process and process.poll() is not None:
if self.logger:
self.logger.error(f"Browser process exited immediately with code {process.returncode}", tag="BUILTIN")
return None
PID = process.pid
# Construct CDP URL
config_json = await self._check_port_in_use(cdp_url)
# Create browser info
browser_info = {
'pid': PID,
'cdp_url': cdp_url,
'user_data_dir': user_data_dir,
'browser_type': browser_type,
'debugging_port': debugging_port,
'start_time': time.time(),
'config': config_json
}
# Read existing config file if it exists
port_map = {}
if os.path.exists(self.builtin_config_file):
try:
with open(self.builtin_config_file, 'r') as f:
existing_data = json.load(f)
# Check if it already uses port mapping
if isinstance(existing_data, dict) and "port_map" in existing_data:
port_map = existing_data["port_map"]
# # Convert legacy format to port mapping
# elif isinstance(existing_data, dict) and "debugging_port" in existing_data:
# old_port = str(existing_data.get("debugging_port"))
# if self._is_browser_running(existing_data.get("pid")):
# port_map[old_port] = existing_data
except Exception as e:
if self.logger:
self.logger.warning(f"Could not read existing config: {str(e)}", tag="BUILTIN")
# Add/update this browser in the port map
port_map[str(debugging_port)] = browser_info
# Write updated config
with open(self.builtin_config_file, 'w') as f:
json.dump({"port_map": port_map}, f, indent=2)
# Detach from the browser process - don't keep any references
# This is important to allow the Python script to exit while the browser continues running
process = None
if self.logger:
self.logger.success(f"Built-in browser launched at CDP URL: {cdp_url}", tag="BUILTIN")
return cdp_url
except Exception as e:
if self.logger:
self.logger.error(f"Error launching built-in browser: {str(e)}", tag="BUILTIN")
return None
async def _check_port_in_use(self, cdp_url: str) -> dict:
"""Check if a port is already in use by a Chrome DevTools instance.
Args:
cdp_url: The CDP URL to check
Returns:
dict: Chrome DevTools protocol version information or None if not found
"""
import aiohttp
json_url = f"{cdp_url}/json/version"
json_config = None
try:
async with aiohttp.ClientSession() as session:
try:
async with session.get(json_url, timeout=2.0) as response:
if response.status == 200:
json_config = await response.json()
if self.logger:
self.logger.debug(f"Found CDP server running at {cdp_url}", tag="BUILTIN")
return json_config
except (aiohttp.ClientError, asyncio.TimeoutError):
pass
return None
except Exception as e:
if self.logger:
self.logger.debug(f"Error checking CDP port: {str(e)}", tag="BUILTIN")
return None
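An equivalent synchronous probe can be written with only the standard library (an aiohttp-free sketch; `/json/version` is the standard DevTools HTTP endpoint):

```python
import json
import socket
import urllib.request
import urllib.error

def probe_cdp(port: int, timeout: float = 2.0):
    """Return DevTools version info if a CDP server answers on
    http://localhost:<port>/json/version, else None."""
    url = f"http://localhost:{port}/json/version"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status == 200:
                return json.loads(resp.read().decode())
    except (urllib.error.URLError, OSError, ValueError):
        pass
    return None

# Grab a port that is currently free, then probe it
s = socket.socket()
s.bind(("localhost", 0))
free_port = s.getsockname()[1]
s.close()
print(probe_cdp(free_port))  # None: nothing is listening there
```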
async def kill_builtin_browser(self) -> bool:
"""Kill the built-in browser if it's running.
Returns:
bool: True if the browser was killed, False otherwise
"""
browser_info = self.get_browser_info()
if not browser_info:
if self.logger:
self.logger.warning(f"No built-in browser found on port {self.config.debugging_port}", tag="BUILTIN")
return False
pid = browser_info.get('pid')
if not pid:
return False
success, error_msg = terminate_process(pid, logger=self.logger)
if success:
# Update config file to remove this browser
with open(self.builtin_config_file, 'r') as f:
browser_info_dict = json.load(f)
# Remove this port from the dictionary
port_str = str(self.config.debugging_port)
if port_str in browser_info_dict.get("port_map", {}):
del browser_info_dict["port_map"][port_str]
with open(self.builtin_config_file, 'w') as f:
json.dump(browser_info_dict, f, indent=2)
# Remove this browser's user data directory (but keep builtin_browser_dir,
# which still holds the shared config file updated above)
user_data_dir = os.path.join(self.builtin_browser_dir, "user_data")
if os.path.exists(user_data_dir):
shutil.rmtree(user_data_dir)
# Clear the browser info cache
self.browser = None
self.temp_dir = None
self.shutting_down = True
if self.logger:
self.logger.success("Built-in browser terminated", tag="BUILTIN")
return True
else:
if self.logger:
self.logger.error(f"Error killing built-in browser: {error_msg}", tag="BUILTIN")
return False
async def get_builtin_browser_status(self) -> Dict[str, Any]:
"""Get status information about the built-in browser.
Returns:
dict: Status information with running, cdp_url, and info fields
"""
browser_info = self.get_browser_info()
if not browser_info:
return {
'running': False,
'cdp_url': None,
'info': None,
'port': self.config.debugging_port
}
return {
'running': True,
'cdp_url': browser_info.get('cdp_url'),
'info': browser_info,
'port': self.config.debugging_port
}
async def close(self):
"""Close the built-in browser and clean up resources."""
# Call parent class close method
await super().close()
# Clean up built-in browser if we created it and were in shutdown mode
if self.shutting_down:
await self.kill_builtin_browser()
if self.logger:
self.logger.debug("Killed built-in browser during shutdown", tag="BUILTIN")


@@ -0,0 +1,281 @@
"""CDP browser strategy module for Crawl4AI.
This module implements a browser strategy that connects to an existing
browser over the Chrome DevTools Protocol (CDP), or launches a browser
and connects to it over CDP.
"""
import asyncio
import os
import time
import json
import subprocess
import shutil
from typing import Optional, Tuple, List
from playwright.async_api import BrowserContext, Page
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig, CrawlerRunConfig
from ..utils import get_playwright, get_browser_executable, create_temp_directory, is_windows, check_process_is_running, terminate_process
from .base import BaseBrowserStrategy
class CDPBrowserStrategy(BaseBrowserStrategy):
"""CDP-based browser strategy.
This strategy connects to an existing browser using CDP protocol or
launches and connects to a browser using CDP.
"""
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the CDP browser strategy.
Args:
config: Browser configuration
logger: Logger for recording events and errors
"""
super().__init__(config, logger)
self.sessions = {}
self.session_ttl = 1800 # 30 minutes
self.browser_process = None
self.temp_dir = None
self.shutting_down = False
async def start(self):
"""Start or connect to the browser using CDP.
Returns:
self: For method chaining
"""
# Call the base class start to initialize Playwright
await super().start()
try:
# Get or create CDP URL
cdp_url = await self._get_or_create_cdp_url()
# Connect to the browser using CDP
self.browser = await self.playwright.chromium.connect_over_cdp(cdp_url)
# Get or create default context
contexts = self.browser.contexts
if contexts:
self.default_context = contexts[0]
else:
self.default_context = await self.create_browser_context()
await self.setup_context(self.default_context)
if self.logger:
self.logger.debug(f"Connected to CDP browser at {cdp_url}", tag="CDP")
except Exception as e:
if self.logger:
self.logger.error(f"Failed to connect to CDP browser: {str(e)}", tag="CDP")
# Clean up any resources before re-raising
await self._cleanup_process()
raise
return self
async def _get_or_create_cdp_url(self) -> str:
"""Get existing CDP URL or launch a browser and return its CDP URL.
Returns:
str: CDP URL for connecting to the browser
"""
# If CDP URL is provided, just return it
if self.config.cdp_url:
return self.config.cdp_url
# Create temp dir if needed
if not self.config.user_data_dir:
self.temp_dir = create_temp_directory()
user_data_dir = self.temp_dir
else:
user_data_dir = self.config.user_data_dir
# Get browser args based on OS and browser type
# args = await self._get_browser_args(user_data_dir)
browser_args = super()._build_browser_args()
browser_path = await get_browser_executable(self.config.browser_type)
base_args = [browser_path]
if self.config.browser_type == "chromium":
args = [
f"--remote-debugging-port={self.config.debugging_port}",
f"--user-data-dir={user_data_dir}",
]
# if self.config.headless:
# args.append("--headless=new")
elif self.config.browser_type == "firefox":
args = [
"--remote-debugging-port",
str(self.config.debugging_port),
"--profile",
user_data_dir,
]
if self.config.headless:
args.append("--headless")
else:
raise NotImplementedError(f"Browser type {self.config.browser_type} not supported")
args = base_args + browser_args['args'] + args
# Start browser process
try:
# Use DETACHED_PROCESS flag on Windows to fully detach the process
# On Unix, we'll use preexec_fn=os.setpgrp to start the process in a new process group
if is_windows():
self.browser_process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
)
else:
self.browser_process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=os.setpgrp # Start in a new process group
)
# Monitor for a short time to make sure it starts properly
is_running, return_code, stdout, stderr = await check_process_is_running(self.browser_process, delay=2)
if not is_running:
if self.logger:
self.logger.error(
message="Browser process terminated unexpectedly | Code: {code} | STDOUT: {stdout} | STDERR: {stderr}",
tag="ERROR",
params={
"code": return_code,
"stdout": stdout.decode() if stdout else "",
"stderr": stderr.decode() if stderr else "",
},
)
await self._cleanup_process()
raise Exception("Browser process terminated unexpectedly")
return f"http://localhost:{self.config.debugging_port}"
except Exception as e:
await self._cleanup_process()
raise Exception(f"Failed to start browser: {e}")
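The per-browser flag assembly above can be factored into a pure function for testing (a sketch; `build_cdp_args` is not part of the codebase):

```python
from typing import List

def build_cdp_args(browser_type: str, debugging_port: int,
                   user_data_dir: str, headless: bool) -> List[str]:
    """Return the browser-type-specific CDP debugging flags, mirroring
    the chromium/firefox branches above."""
    if browser_type == "chromium":
        # Headless is handled elsewhere for chromium in the real code
        return [
            f"--remote-debugging-port={debugging_port}",
            f"--user-data-dir={user_data_dir}",
        ]
    if browser_type == "firefox":
        args = [
            "--remote-debugging-port", str(debugging_port),
            "--profile", user_data_dir,
        ]
        if headless:
            args.append("--headless")
        return args
    raise NotImplementedError(f"Browser type {browser_type} not supported")

print(build_cdp_args("chromium", 9222, "/tmp/profile", True))
```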
async def _cleanup_process(self):
"""Cleanup browser process and temporary directory."""
# Set shutting_down flag BEFORE any termination actions
self.shutting_down = True
if self.browser_process:
try:
# Only attempt termination if the process is still running
if self.browser_process.poll() is None:
# Use our robust cross-platform termination utility
success, error_msg = terminate_process(
pid=self.browser_process.pid,
timeout=1.0,  # Equivalent to the previous 10*0.1s wait
logger=self.logger
)
if not success and self.logger:
self.logger.warning(
message="Failed to terminate browser process cleanly: {error}",
tag="PROCESS",
params={"error": error_msg}
)
except Exception as e:
if self.logger:
self.logger.error(
message="Error during browser process cleanup: {error}",
tag="ERROR",
params={"error": str(e)},
)
if self.temp_dir and os.path.exists(self.temp_dir):
try:
shutil.rmtree(self.temp_dir)
self.temp_dir = None
if self.logger:
self.logger.debug("Removed temporary directory", tag="CDP")
except Exception as e:
if self.logger:
self.logger.error(
message="Error removing temporary directory: {error}",
tag="CDP",
params={"error": str(e)}
)
self.browser_process = None
async def _generate_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
# For CDP, we typically use the shared default_context
context = self.default_context
pages = context.pages
# Record the shared context under this config's signature for reuse
config_signature = self._make_config_signature(crawlerRunConfig)
self.contexts_by_config[config_signature] = context
await self.setup_context(context, crawlerRunConfig)
# Check if there's already a page with the target URL
page = next((p for p in pages if p.url == crawlerRunConfig.url), None)
# If not found, create a new page
if not page:
page = await context.new_page()
return page, context
async def _get_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
"""Get a page for the given configuration.
Args:
crawlerRunConfig: Configuration object for the crawler run
Returns:
Tuple of (Page, BrowserContext)
"""
# Call parent method to ensure browser is started
await super().get_page(crawlerRunConfig)
# For CDP, we typically use the shared default_context
context = self.default_context
pages = context.pages
# Record the shared context under this config's signature for reuse
config_signature = self._make_config_signature(crawlerRunConfig)
self.contexts_by_config[config_signature] = context
await self.setup_context(context, crawlerRunConfig)
# Check if there's already a page with the target URL
page = next((p for p in pages if p.url == crawlerRunConfig.url), None)
# If not found, create a new page
if not page:
page = await context.new_page()
# If a session_id is specified, store this session for reuse
if crawlerRunConfig.session_id:
self.sessions[crawlerRunConfig.session_id] = (context, page, time.time())
return page, context
async def close(self):
"""Close the CDP browser and clean up resources."""
# Skip cleanup if using external CDP URL and not launched by us
if self.config.cdp_url and not self.browser_process:
if self.logger:
self.logger.debug("Skipping cleanup for external CDP browser", tag="CDP")
return
# Call parent implementation for common cleanup
await super().close()
# Additional CDP-specific cleanup
await asyncio.sleep(0.5)
await self._cleanup_process()
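Entries in `self.sessions` carry a last-use timestamp alongside the context and page, and `session_ttl` bounds their lifetime. The expiry bookkeeping can be sketched as (helper name illustrative; the actual session reaping lives in the manager's `kill_session` path):

```python
import time

def expired_session_ids(sessions: dict, ttl: float, now: float = None) -> list:
    """Return ids of sessions whose (context, page, timestamp) entry is
    older than ttl seconds."""
    now = time.time() if now is None else now
    return [sid for sid, (_ctx, _page, ts) in sessions.items() if now - ts > ttl]

sessions = {
    "fresh": (None, None, 1000.0),    # just used
    "stale": (None, None, -1000.0),   # last used 2000s before 'now'
}
print(expired_session_ids(sessions, ttl=1800, now=1000.0))  # ['stale']
```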


@@ -0,0 +1,430 @@
"""Docker browser strategy module for Crawl4AI.
This module provides browser strategies for running browsers in Docker containers,
which offers better isolation, consistency across platforms, and easy scaling.
"""
import os
import uuid
from typing import List, Optional
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig
from ..models import DockerConfig
from ..docker_registry import DockerRegistry
from ..docker_utils import DockerUtils
from .cdp import CDPBrowserStrategy
from .base import BaseBrowserStrategy
class DockerBrowserStrategy(CDPBrowserStrategy):
"""Docker-based browser strategy.
Extends the CDPBrowserStrategy to run browsers in Docker containers.
Supports two modes:
1. "connect" - Uses a Docker image with Chrome already running
2. "launch" - Starts Chrome within the container with custom settings
Attributes:
docker_config: Docker-specific configuration options
container_id: ID of current Docker container
container_name: Name assigned to the container
registry: Registry for tracking and reusing containers
docker_utils: Utilities for Docker operations
chrome_process_id: Process ID of Chrome within container
socat_process_id: Process ID of socat within container
internal_cdp_port: Chrome's internal CDP port
internal_mapped_port: Port that socat maps to internally
"""
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the Docker browser strategy.
Args:
config: Browser configuration including Docker-specific settings
logger: Logger for recording events and errors
"""
super().__init__(config, logger)
# Initialize Docker-specific attributes
self.docker_config = self.config.docker_config or DockerConfig()
self.container_id = None
self.container_name = f"crawl4ai-browser-{uuid.uuid4().hex[:8]}"
# Use the shared registry file path for consistency with BuiltinBrowserStrategy
registry_file = self.docker_config.registry_file
if registry_file is None and self.config.user_data_dir:
# Use the same registry file as BuiltinBrowserStrategy if possible
registry_file = os.path.join(
os.path.dirname(self.config.user_data_dir), "browser_config.json"
)
self.registry = DockerRegistry(registry_file)
self.docker_utils = DockerUtils(logger)
self.chrome_process_id = None
self.socat_process_id = None
self.internal_cdp_port = 9222 # Chrome's internal CDP port
self.internal_mapped_port = 9223 # Port that socat maps to internally
self.shutting_down = False
async def start(self):
"""Start or connect to a browser running in a Docker container.
This method initializes Playwright and establishes a connection to
a browser running in a Docker container. Depending on the configured mode:
- "connect": Connects to a container with Chrome already running
- "launch": Creates a container and launches Chrome within it
Returns:
self: For method chaining
"""
# Initialize Playwright
await BaseBrowserStrategy.start(self)
if self.logger:
self.logger.info(
f"Starting Docker browser strategy in {self.docker_config.mode} mode",
tag="DOCKER",
)
try:
# Get CDP URL by creating or reusing a Docker container
# This handles the container management and browser startup
cdp_url = await self._get_or_create_cdp_url()
if not cdp_url:
raise Exception(
"Failed to establish CDP connection to Docker container"
)
if self.logger:
self.logger.info(
f"Connecting to browser in Docker via CDP: {cdp_url}", tag="DOCKER"
)
# Connect to the browser using CDP
self.browser = await self.playwright.chromium.connect_over_cdp(cdp_url)
# Get existing context or create default context
contexts = self.browser.contexts
if contexts:
self.default_context = contexts[0]
if self.logger:
self.logger.debug("Using existing browser context", tag="DOCKER")
else:
if self.logger:
self.logger.debug("Creating new browser context", tag="DOCKER")
self.default_context = await self.create_browser_context()
await self.setup_context(self.default_context)
return self
except Exception as e:
# Clean up resources if startup fails
if self.container_id and not self.docker_config.persistent:
if self.logger:
self.logger.warning(
f"Cleaning up container after failed start: {self.container_id[:12]}",
tag="DOCKER",
)
await self.docker_utils.remove_container(self.container_id)
self.registry.unregister_container(self.container_id)
self.container_id = None
if self.playwright:
await self.playwright.stop()
self.playwright = None
# Re-raise the exception
if self.logger:
self.logger.error(
f"Failed to start Docker browser: {str(e)}", tag="DOCKER"
)
raise
async def _generate_config_hash(self) -> str:
"""Generate a hash of the configuration for container matching.
Returns:
Hash string uniquely identifying this configuration
"""
# Create a dict with the relevant parts of the config
config_dict = {
"image": self.docker_config.image,
"mode": self.docker_config.mode,
"browser_type": self.config.browser_type,
"headless": self.config.headless,
}
# Add browser-specific config if in launch mode
if self.docker_config.mode == "launch":
config_dict.update(
{
"text_mode": self.config.text_mode,
"light_mode": self.config.light_mode,
"viewport_width": self.config.viewport_width,
"viewport_height": self.config.viewport_height,
}
)
# Use the utility method to generate the hash
return self.docker_utils.generate_config_hash(config_dict)
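`DockerUtils.generate_config_hash` is not shown in this diff; one deterministic way to implement it is to hash a canonical JSON dump of the dict, so that key order never affects container matching (this is an assumption about its behavior, not the actual code):

```python
import hashlib
import json

def generate_config_hash(config: dict) -> str:
    """Hash a config dict deterministically: canonical JSON (sorted
    keys, compact separators) fed through SHA-256."""
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

a = generate_config_hash({"image": "crawl4ai", "mode": "launch", "headless": True})
b = generate_config_hash({"mode": "launch", "headless": True, "image": "crawl4ai"})
print(a == b)  # True: key order does not affect the hash
```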
async def _get_or_create_cdp_url(self) -> str:
"""Get CDP URL by either creating a new container or using an existing one.
Returns:
CDP URL for connecting to the browser
Raises:
Exception: If container creation or browser launch fails
"""
# If CDP URL is explicitly provided, use it
if self.config.cdp_url:
return self.config.cdp_url
# Ensure Docker image exists (will build if needed)
image_name = await self.docker_utils.ensure_docker_image_exists(
self.docker_config.image, self.docker_config.mode
)
# Generate config hash for container matching
config_hash = await self._generate_config_hash()
# Look for existing container with matching config
container_id = await self.registry.find_container_by_config(
config_hash, self.docker_utils
)
if container_id:
# Use existing container
self.container_id = container_id
host_port = self.registry.get_container_host_port(container_id)
if self.logger:
self.logger.info(
f"Using existing Docker container: {container_id[:12]}",
tag="DOCKER",
)
else:
# Get a port for the new container
host_port = (
self.docker_config.host_port
or self.registry.get_next_available_port(self.docker_utils)
)
# Prepare volumes list
volumes = list(self.docker_config.volumes)
# Add user data directory if specified
if self.docker_config.user_data_dir:
# Ensure user data directory exists
os.makedirs(self.docker_config.user_data_dir, exist_ok=True)
volumes.append(
f"{self.docker_config.user_data_dir}:{self.docker_config.container_user_data_dir}"
)
# # Update config user_data_dir to point to container path
# self.config.user_data_dir = self.docker_config.container_user_data_dir
# Create a new container
container_id = await self.docker_utils.create_container(
image_name=image_name,
host_port=host_port,
container_name=self.container_name,
volumes=volumes,
network=self.docker_config.network,
env_vars=self.docker_config.env_vars,
cpu_limit=self.docker_config.cpu_limit,
memory_limit=self.docker_config.memory_limit,
extra_args=self.docker_config.extra_args,
)
if not container_id:
raise Exception("Failed to create Docker container")
self.container_id = container_id
# Wait for container to be ready
await self.docker_utils.wait_for_container_ready(container_id)
# Handle specific setup based on mode
if self.docker_config.mode == "launch":
# In launch mode, we need to start socat and Chrome
await self.docker_utils.start_socat_in_container(container_id)
# Build browser arguments
browser_args = self._build_browser_args()
# Launch Chrome
await self.docker_utils.launch_chrome_in_container(
container_id, browser_args
)
# Get PIDs for later cleanup
self.chrome_process_id = (
await self.docker_utils.get_process_id_in_container(
container_id, "chromium"
)
)
self.socat_process_id = (
await self.docker_utils.get_process_id_in_container(
container_id, "socat"
)
)
# Wait for CDP to be ready
cdp_json_config = await self.docker_utils.wait_for_cdp_ready(host_port)
if cdp_json_config:
# Register the container in the shared registry
self.registry.register_container(
container_id, host_port, config_hash, cdp_json_config
)
else:
raise Exception("Failed to get CDP JSON config from Docker container")
if self.logger:
self.logger.success(
f"Docker container ready: {container_id[:12]} on port {host_port}",
tag="DOCKER",
)
# Return CDP URL
return f"http://localhost:{host_port}"
def _build_browser_args(self) -> List[str]:
"""Build Chrome command line arguments based on BrowserConfig.
Returns:
List of command line arguments for Chrome
"""
# Call parent method to get common arguments
browser_args = super()._build_browser_args()
return browser_args["args"] + [
f"--remote-debugging-port={self.internal_cdp_port}",
"--remote-debugging-address=0.0.0.0", # Allow external connections
"--disable-dev-shm-usage",
"--headless=new",
]
async def close(self):
"""Close the browser and clean up Docker container if needed."""
# Set flag to track if we were the ones initiating shutdown
initiated_shutdown = not self.shutting_down
# Storage persistence for Docker needs special handling
# We need to store state before calling super().close() which will close the browser
if (
self.browser
and self.docker_config.user_data_dir
and self.docker_config.persistent
):
for context in self.browser.contexts:
try:
# Ensure directory exists
os.makedirs(self.docker_config.user_data_dir, exist_ok=True)
# Save storage state to user data directory
storage_path = os.path.join(
self.docker_config.user_data_dir, "storage_state.json"
)
await context.storage_state(path=storage_path)
if self.logger:
self.logger.debug(
"Persisted Docker-specific storage state", tag="DOCKER"
)
except Exception as e:
if self.logger:
self.logger.warning(
message="Failed to persist Docker storage state: {error}",
tag="DOCKER",
params={"error": str(e)},
)
# Call parent method to handle common cleanup
await super().close()
# Only perform container cleanup if we initiated shutdown
# and we need to handle Docker-specific resources
if initiated_shutdown:
# Only clean up container if not persistent
if self.container_id and not self.docker_config.persistent:
# Stop Chrome process in "launch" mode
if self.docker_config.mode == "launch" and self.chrome_process_id:
await self.docker_utils.stop_process_in_container(
self.container_id, self.chrome_process_id
)
if self.logger:
self.logger.debug(
f"Stopped Chrome process {self.chrome_process_id} in container",
tag="DOCKER",
)
# Stop socat process in "launch" mode
if self.docker_config.mode == "launch" and self.socat_process_id:
await self.docker_utils.stop_process_in_container(
self.container_id, self.socat_process_id
)
if self.logger:
self.logger.debug(
f"Stopped socat process {self.socat_process_id} in container",
tag="DOCKER",
)
# Remove or stop container based on configuration
if self.docker_config.remove_on_exit:
await self.docker_utils.remove_container(self.container_id)
# Unregister from registry
if hasattr(self, "registry") and self.registry:
self.registry.unregister_container(self.container_id)
if self.logger:
self.logger.debug(
f"Removed Docker container {self.container_id}",
tag="DOCKER",
)
else:
await self.docker_utils.stop_container(self.container_id)
if self.logger:
self.logger.debug(
f"Stopped Docker container {self.container_id}",
tag="DOCKER",
)
self.container_id = None


@@ -0,0 +1,134 @@
"""Browser strategies module for Crawl4AI.
This module implements the browser strategy pattern for different
browser implementations, including Playwright, CDP, and builtin browsers.
"""
import time
from typing import Optional, Tuple
from playwright.async_api import BrowserContext, Page
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig, CrawlerRunConfig
from playwright_stealth import StealthConfig
from .base import BaseBrowserStrategy
stealth_config = StealthConfig(
webdriver=True,
chrome_app=True,
chrome_csi=True,
chrome_load_times=True,
chrome_runtime=True,
navigator_languages=True,
navigator_plugins=True,
navigator_permissions=True,
webgl_vendor=True,
outerdimensions=True,
navigator_hardware_concurrency=True,
media_codecs=True,
)
class PlaywrightBrowserStrategy(BaseBrowserStrategy):
"""Standard Playwright browser strategy.
This strategy launches a new browser instance using Playwright
and manages browser contexts.
"""
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the Playwright browser strategy.
Args:
config: Browser configuration
logger: Logger for recording events and errors
"""
super().__init__(config, logger)
# No need to re-initialize sessions and session_ttl as they're now in the base class
async def start(self):
"""Start the browser instance.
Returns:
self: For method chaining
"""
# Call the base class start to initialize Playwright
await super().start()
# Build browser arguments using the base class method
browser_args = self._build_browser_args()
try:
# Launch appropriate browser type
if self.config.browser_type == "firefox":
self.browser = await self.playwright.firefox.launch(**browser_args)
elif self.config.browser_type == "webkit":
self.browser = await self.playwright.webkit.launch(**browser_args)
else:
self.browser = await self.playwright.chromium.launch(**browser_args)
self.default_context = self.browser
if self.logger:
self.logger.debug(f"Launched {self.config.browser_type} browser", tag="BROWSER")
except Exception as e:
if self.logger:
self.logger.error(f"Failed to launch browser: {str(e)}", tag="BROWSER")
raise
return self
async def _generate_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
# Check if we have an existing context for this config
config_signature = self._make_config_signature(crawlerRunConfig)
async with self._contexts_lock:
if config_signature in self.contexts_by_config:
context = self.contexts_by_config[config_signature]
else:
# Create and setup a new context
context = await self.create_browser_context(crawlerRunConfig)
await self.setup_context(context, crawlerRunConfig)
self.contexts_by_config[config_signature] = context
# Create a new page from the chosen context
page = await context.new_page()
return page, context
async def _get_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
"""Get a page for the given configuration.
Args:
crawlerRunConfig: Configuration object for the crawler run
Returns:
Tuple of (Page, BrowserContext)
"""
# Call parent method to ensure browser is started
await super().get_page(crawlerRunConfig)
# Check if we have an existing context for this config
config_signature = self._make_config_signature(crawlerRunConfig)
async with self._contexts_lock:
if config_signature in self.contexts_by_config:
context = self.contexts_by_config[config_signature]
else:
# Create and setup a new context
context = await self.create_browser_context(crawlerRunConfig)
await self.setup_context(context, crawlerRunConfig)
self.contexts_by_config[config_signature] = context
# Create a new page from the chosen context
page = await context.new_page()
# If a session_id is specified, store this session so we can reuse later
if crawlerRunConfig.session_id:
self.sessions[crawlerRunConfig.session_id] = (context, page, time.time())
return page, context
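The session bookkeeping above stores `(context, page, timestamp)` tuples keyed by `session_id` so later calls with the same session can reuse the browser context. The pattern can be sketched as a standalone TTL cache; the class and method names here are illustrative, not the actual base-class implementation:

```python
import time

class SessionCache:
    """Minimal TTL cache mirroring the sessions dict used above."""

    def __init__(self, ttl_seconds: float = 1800):
        self.ttl = ttl_seconds
        self._sessions = {}  # session_id -> (payload, created_at)

    def put(self, session_id: str, payload) -> None:
        self._sessions[session_id] = (payload, time.time())

    def get(self, session_id: str):
        """Return the payload if present and fresh, else None."""
        entry = self._sessions.get(session_id)
        if entry is None:
            return None
        payload, created_at = entry
        if time.time() - created_at > self.ttl:
            del self._sessions[session_id]  # expired: evict and miss
            return None
        return payload

cache = SessionCache(ttl_seconds=0.05)
cache.put("s1", "page-and-context")
assert cache.get("s1") == "page-and-context"
time.sleep(0.1)
assert cache.get("s1") is None  # expired after the TTL elapsed
```

The real code additionally closes the Playwright page and context on expiry; this sketch only shows the lookup and eviction logic.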

crawl4ai/browser/utils.py

@@ -0,0 +1,465 @@
"""Browser utilities module for Crawl4AI.
This module provides utility functions for browser management,
including process management, CDP connection utilities,
and Playwright instance management.
"""
import asyncio
import os
import sys
import time
import tempfile
import subprocess
from typing import Optional, Tuple, Union
import signal
import psutil
from playwright.async_api import async_playwright
from ..utils import get_chromium_path
from ..async_configs import BrowserConfig, CrawlerRunConfig
from ..async_logger import AsyncLogger
_playwright_instance = None
async def get_playwright():
"""Get or create the Playwright instance (singleton pattern).
Returns:
Playwright: The Playwright instance
"""
global _playwright_instance
if _playwright_instance is None:
_playwright_instance = await async_playwright().start()
return _playwright_instance
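The lazy-initialization singleton used by `get_playwright` (create the shared instance on first call, return the cached one afterwards) can be shown in isolation. `expensive_start` below is a stand-in for `async_playwright().start()`, not a real API:

```python
import asyncio

_instance = None

async def expensive_start():
    # Stand-in for an expensive async factory such as async_playwright().start().
    await asyncio.sleep(0)
    return object()

async def get_instance():
    """Create the shared instance on first call, then reuse it."""
    global _instance
    if _instance is None:
        _instance = await expensive_start()
    return _instance

async def main():
    a = await get_instance()
    b = await get_instance()
    assert a is b  # same object both times: singleton behaviour

asyncio.run(main())
```

Note this simple form is not safe against concurrent first calls; guarding the check with an `asyncio.Lock` closes that race if multiple tasks may start simultaneously.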
async def get_browser_executable(browser_type: str) -> str:
"""Get the path to browser executable, with platform-specific handling.
Args:
browser_type: Type of browser (chromium, firefox, webkit)
Returns:
Path to browser executable
"""
return await get_chromium_path(browser_type)
def create_temp_directory(prefix="browser-profile-") -> str:
"""Create a temporary directory for browser data.
Args:
prefix: Prefix for the temporary directory name
Returns:
Path to the created temporary directory
"""
return tempfile.mkdtemp(prefix=prefix)
def is_windows() -> bool:
"""Check if the current platform is Windows.
Returns:
True if Windows, False otherwise
"""
return sys.platform == "win32"
def is_macos() -> bool:
"""Check if the current platform is macOS.
Returns:
True if macOS, False otherwise
"""
return sys.platform == "darwin"
def is_linux() -> bool:
"""Check if the current platform is Linux.
Returns:
True if Linux, False otherwise
"""
return not (is_windows() or is_macos())
def is_browser_running(pid: Optional[int]) -> bool:
"""Check if a process with the given PID is running.
Args:
pid: Process ID to check
Returns:
bool: True if the process is running, False otherwise
"""
if not pid:
return False
try:
if isinstance(pid, str):
pid = int(pid)
# Check if the process exists
if is_windows():
process = subprocess.run(["tasklist", "/FI", f"PID eq {pid}"],
capture_output=True, text=True)
return str(pid) in process.stdout
else:
# Unix-like systems
os.kill(pid, 0) # This doesn't actually kill the process, just checks if it exists
return True
except (ProcessLookupError, PermissionError, OSError):
return False
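The liveness probe above relies on `os.kill(pid, 0)` on Unix, which checks for a process's existence without sending a real signal, and falls back to `tasklist` on Windows. A self-contained sketch mirroring that logic (including the source's choice to treat `PermissionError` as "not running"):

```python
import os
import subprocess
import sys

def pid_is_running(pid) -> bool:
    """Portable liveness check: signal 0 probes without killing (POSIX)."""
    if not pid:
        return False
    try:
        pid = int(pid)
        if sys.platform == "win32":
            out = subprocess.run(
                ["tasklist", "/FI", f"PID eq {pid}"],
                capture_output=True, text=True,
            ).stdout
            return str(pid) in out
        os.kill(pid, 0)  # raises ProcessLookupError if pid does not exist
        return True
    except (ProcessLookupError, PermissionError, OSError, ValueError):
        return False

assert pid_is_running(os.getpid())  # our own process is certainly alive
assert not pid_is_running(None)     # missing pid -> False
```

Strictly speaking, `PermissionError` from `kill(pid, 0)` means the process exists but belongs to another user; returning `False` there matches the source but is a conservative simplification.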
def get_browser_disable_options() -> list:
"""Get standard list of browser disable options for performance.
Returns:
List of command-line options to disable various browser features
"""
return [
"--disable-background-networking",
"--disable-background-timer-throttling",
"--disable-backgrounding-occluded-windows",
"--disable-breakpad",
"--disable-client-side-phishing-detection",
"--disable-component-extensions-with-background-pages",
"--disable-default-apps",
"--disable-extensions",
"--disable-features=TranslateUI",
"--disable-hang-monitor",
"--disable-ipc-flooding-protection",
"--disable-popup-blocking",
"--disable-prompt-on-repost",
"--disable-sync",
"--force-color-profile=srgb",
"--metrics-recording-only",
"--no-first-run",
"--password-store=basic",
"--use-mock-keychain",
]
async def find_optimal_browser_config(total_urls=50, verbose=True, rate_limit_delay=0.2):
"""Find optimal browser configuration for crawling a specific number of URLs.
Args:
total_urls: Number of URLs to crawl
verbose: Whether to print progress
rate_limit_delay: Delay between page loads to avoid rate limiting
Returns:
dict: Contains fastest, lowest_memory, and optimal configurations
"""
from .manager import BrowserManager
if verbose:
print(f"\n=== Finding optimal configuration for crawling {total_urls} URLs ===\n")
# Generate test URLs with timestamp to avoid caching
timestamp = int(time.time())
urls = [f"https://example.com/page_{i}?t={timestamp}" for i in range(total_urls)]
# Limit browser configurations to test (1 browser to max 10)
max_browsers = min(10, total_urls)
configs_to_test = []
# Generate configurations (browser count, pages distribution)
for num_browsers in range(1, max_browsers + 1):
base_pages = total_urls // num_browsers
remainder = total_urls % num_browsers
# Create distribution array like [3, 3, 2, 2] (some browsers get one more page)
if remainder > 0:
distribution = [base_pages + 1] * remainder + [base_pages] * (num_browsers - remainder)
else:
distribution = [base_pages] * num_browsers
configs_to_test.append((num_browsers, distribution))
results = []
# Test each configuration
for browser_count, page_distribution in configs_to_test:
if verbose:
print(f"Testing {browser_count} browsers with distribution {tuple(page_distribution)}")
managers = []  # defined before try so the except-path cleanup never hits NameError
try:
# Track memory if possible
try:
import psutil
process = psutil.Process()
start_memory = process.memory_info().rss / (1024 * 1024) # MB
except ImportError:
if verbose:
print("Memory tracking not available (psutil not installed)")
start_memory = 0
# Start browsers in parallel
managers = []
start_tasks = []
start_time = time.time()
logger = AsyncLogger(verbose=True, log_file=None)
for i in range(browser_count):
config = BrowserConfig(headless=True)
manager = BrowserManager(browser_config=config, logger=logger)
start_tasks.append(manager.start())
managers.append(manager)
await asyncio.gather(*start_tasks)
# Distribute URLs among browsers
urls_per_manager = {}
url_index = 0
for i, manager in enumerate(managers):
pages_for_this_browser = page_distribution[i]
end_index = url_index + pages_for_this_browser
urls_per_manager[manager] = urls[url_index:end_index]
url_index = end_index
# Create pages for each browser
all_pages = []
for manager, manager_urls in urls_per_manager.items():
if not manager_urls:
continue
pages = await manager.get_pages(CrawlerRunConfig(), count=len(manager_urls))
all_pages.extend(zip(pages, manager_urls))
# Crawl pages with delay to avoid rate limiting
async def crawl_page(page_ctx, url):
page, _ = page_ctx
try:
await page.goto(url)
if rate_limit_delay > 0:
await asyncio.sleep(rate_limit_delay)
title = await page.title()
return title
finally:
await page.close()
crawl_start = time.time()
crawl_tasks = [crawl_page(page_ctx, url) for page_ctx, url in all_pages]
await asyncio.gather(*crawl_tasks)
crawl_time = time.time() - crawl_start
total_time = time.time() - start_time
# Measure final memory usage
if start_memory > 0:
end_memory = process.memory_info().rss / (1024 * 1024)
memory_used = end_memory - start_memory
else:
memory_used = 0
# Close all browsers
for manager in managers:
await manager.close()
# Calculate metrics
pages_per_second = total_urls / crawl_time
# Calculate efficiency score (higher is better)
# This balances speed vs memory
if memory_used > 0:
efficiency = pages_per_second / (memory_used + 1)
else:
efficiency = pages_per_second
# Store result
result = {
"browser_count": browser_count,
"distribution": tuple(page_distribution),
"crawl_time": crawl_time,
"total_time": total_time,
"memory_used": memory_used,
"pages_per_second": pages_per_second,
"efficiency": efficiency
}
results.append(result)
if verbose:
print(f" ✓ Crawled {total_urls} pages in {crawl_time:.2f}s ({pages_per_second:.1f} pages/sec)")
if memory_used > 0:
print(f" ✓ Memory used: {memory_used:.1f}MB ({memory_used/total_urls:.1f}MB per page)")
print(f" ✓ Efficiency score: {efficiency:.4f}")
except Exception as e:
if verbose:
print(f" ✗ Error: {str(e)}")
# Clean up
for manager in managers:
try:
await manager.close()
except Exception:
pass
# If no successful results, return None
if not results:
return None
# Find best configurations
fastest = sorted(results, key=lambda x: x["crawl_time"])[0]
# Only consider memory if available
memory_results = [r for r in results if r["memory_used"] > 0]
if memory_results:
lowest_memory = sorted(memory_results, key=lambda x: x["memory_used"])[0]
else:
lowest_memory = fastest
# Find most efficient (balanced speed vs memory)
optimal = sorted(results, key=lambda x: x["efficiency"], reverse=True)[0]
# Print summary
if verbose:
print("\n=== OPTIMAL CONFIGURATIONS ===")
print(f"⚡ Fastest: {fastest['browser_count']} browsers {fastest['distribution']}")
print(f" {fastest['crawl_time']:.2f}s, {fastest['pages_per_second']:.1f} pages/sec")
print(f"💾 Memory-efficient: {lowest_memory['browser_count']} browsers {lowest_memory['distribution']}")
if lowest_memory["memory_used"] > 0:
print(f" {lowest_memory['memory_used']:.1f}MB, {lowest_memory['memory_used']/total_urls:.2f}MB per page")
print(f"🌟 Balanced optimal: {optimal['browser_count']} browsers {optimal['distribution']}")
print(f" {optimal['crawl_time']:.2f}s, {optimal['pages_per_second']:.1f} pages/sec, score: {optimal['efficiency']:.4f}")
return {
"fastest": fastest,
"lowest_memory": lowest_memory,
"optimal": optimal,
"all_configs": results
}
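The page-distribution arithmetic inside `find_optimal_browser_config` (give each browser `total // n` pages and hand the first `remainder` browsers one extra) is a pure function worth isolating:

```python
def distribute(total_urls: int, num_browsers: int) -> list:
    """Split total_urls across num_browsers as evenly as possible.

    The first `remainder` browsers each take one extra page, e.g.
    10 URLs over 4 browsers -> [3, 3, 2, 2].
    """
    base, remainder = divmod(total_urls, num_browsers)
    return [base + 1] * remainder + [base] * (num_browsers - remainder)

assert distribute(10, 4) == [3, 3, 2, 2]
assert distribute(6, 3) == [2, 2, 2]
assert sum(distribute(50, 7)) == 50  # no URL lost or duplicated
```

The invariant `sum(distribute(n, k)) == n` with every element within 1 of the others is what lets the benchmark compare browser counts fairly.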
# Find process ID of the existing browser using os
def find_process_by_port(port: int) -> str:
"""Find process ID listening on a specific port.
Args:
port: Port number to check
Returns:
str: Process ID or empty string if not found
"""
try:
if is_windows():
cmd = f"netstat -ano | findstr :{port}"
result = subprocess.check_output(cmd, shell=True).decode()
return result.strip().split()[-1] if result else ""
else:
cmd = f"lsof -i :{port} -t"
return subprocess.check_output(cmd, shell=True).decode().strip()
except subprocess.CalledProcessError:
return ""
async def check_process_is_running(process: subprocess.Popen, delay: float = 0.5) -> Tuple[bool, Optional[int], bytes, bytes]:
"""Perform a quick check to make sure the browser started successfully."""
if not process:
return False, None, b"", b""
# Check that process started without immediate termination
await asyncio.sleep(delay)
if process.poll() is not None:
# Process already terminated
stdout, stderr = b"", b""
try:
stdout, stderr = process.communicate(timeout=0.5)
except subprocess.TimeoutExpired:
pass
return False, process.returncode, stdout, stderr
return True, 0, b"", b""
def terminate_process(
pid: Union[int, str],
timeout: float = 5.0,
force_kill_timeout: float = 3.0,
logger = None
) -> Tuple[bool, Optional[str]]:
"""
Robustly terminate a process across platforms with verification.
Args:
pid: Process ID to terminate (int or string)
timeout: Seconds to wait for graceful termination before force killing
force_kill_timeout: Seconds to wait after force kill before considering it failed
logger: Optional logger object with error, warning, and info methods
Returns:
Tuple of (success: bool, error_message: Optional[str])
"""
# Convert pid to int if it's a string
if isinstance(pid, str):
try:
pid = int(pid)
except ValueError:
error_msg = f"Invalid PID format: {pid}"
if logger:
logger.error(error_msg)
return False, error_msg
# Check if process exists
if not psutil.pid_exists(pid):
return True, None # Process already terminated
try:
process = psutil.Process(pid)
# First attempt: graceful termination
if logger:
logger.info(f"Attempting graceful termination of process {pid}")
if os.name == 'nt': # Windows
subprocess.run(["taskkill", "/PID", str(pid)],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
check=False)
else: # Unix/Linux/MacOS
process.send_signal(signal.SIGTERM)
# Wait for process to terminate
try:
process.wait(timeout=timeout)
if logger:
logger.info(f"Process {pid} terminated gracefully")
return True, None
except psutil.TimeoutExpired:
if logger:
logger.warning(f"Process {pid} did not terminate gracefully within {timeout} seconds, forcing termination")
# Second attempt: force kill
if os.name == 'nt': # Windows
subprocess.run(["taskkill", "/F", "/PID", str(pid)],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
check=False)
else: # Unix/Linux/MacOS
process.send_signal(signal.SIGKILL)
# Verify process is killed
gone, alive = psutil.wait_procs([process], timeout=force_kill_timeout)
if process in alive:
error_msg = f"Failed to kill process {pid} even after force kill"
if logger:
logger.error(error_msg)
return False, error_msg
if logger:
logger.info(f"Process {pid} terminated by force")
return True, None
except psutil.NoSuchProcess:
# Process terminated while we were working with it
if logger:
logger.info(f"Process {pid} already terminated")
return True, None
except Exception as e:
error_msg = f"Error terminating process {pid}: {str(e)}"
if logger:
logger.error(error_msg)
return False, error_msg
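`terminate_process` follows the usual two-phase shutdown: request a graceful exit (SIGTERM / `taskkill`), wait a grace period, then escalate to a force kill. The same flow can be sketched with only the standard library's `subprocess` API, avoiding the `psutil` dependency; this is a simplified illustration, not the function above:

```python
import subprocess
import sys

def terminate_gracefully(proc: subprocess.Popen, timeout: float = 5.0) -> bool:
    """Try a polite terminate first; escalate to kill() if the process lingers."""
    proc.terminate()  # SIGTERM on POSIX, TerminateProcess on Windows
    try:
        proc.wait(timeout=timeout)
        return True  # exited within the grace period
    except subprocess.TimeoutExpired:
        proc.kill()  # force kill (SIGKILL on POSIX)
        proc.wait(timeout=timeout)
        return proc.poll() is not None

# Spawn a long sleeper and shut it down.
child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
assert terminate_gracefully(child, timeout=2.0)
assert child.poll() is not None  # really gone
```

`psutil` earns its keep in the original when you only have a bare PID (no `Popen` handle) and need existence checks and `wait_procs` across platforms.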


@@ -145,17 +145,60 @@ class ManagedBrowser:
# Start browser process
try:
self.browser_process = subprocess.Popen(
args, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
# Monitor browser process output for errors
asyncio.create_task(self._monitor_browser_process())
# Use DETACHED_PROCESS flag on Windows to fully detach the process
# On Unix, we'll use preexec_fn=os.setpgrp to start the process in a new process group
if sys.platform == "win32":
self.browser_process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
)
else:
self.browser_process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=os.setpgrp # Start in a new process group
)
# We'll monitor for a short time to make sure it starts properly, but won't keep monitoring
await asyncio.sleep(0.5) # Give browser time to start
await self._initial_startup_check()
await asyncio.sleep(2) # Give browser time to start
return f"http://{self.host}:{self.debugging_port}"
except Exception as e:
await self.cleanup()
raise Exception(f"Failed to start browser: {e}")
async def _initial_startup_check(self):
"""
Perform a quick check to make sure the browser started successfully.
This only runs once at startup rather than continuously monitoring.
"""
if not self.browser_process:
return
# Check that process started without immediate termination
await asyncio.sleep(0.5)
if self.browser_process.poll() is not None:
# Process already terminated
stdout, stderr = b"", b""
try:
stdout, stderr = self.browser_process.communicate(timeout=0.5)
except subprocess.TimeoutExpired:
pass
self.logger.error(
message="Browser process terminated during startup | Code: {code} | STDOUT: {stdout} | STDERR: {stderr}",
tag="ERROR",
params={
"code": self.browser_process.returncode,
"stdout": stdout.decode() if stdout else "",
"stderr": stderr.decode() if stderr else "",
},
)
async def _monitor_browser_process(self):
"""
Monitor the browser process for unexpected termination.
@@ -167,6 +210,7 @@ class ManagedBrowser:
4. If any other error occurs, log the error message.
Note: This method should be called in a separate task to avoid blocking the main event loop.
This is DEPRECATED and should not be used for builtin browsers that need to outlive the Python process.
"""
if self.browser_process:
try:
@@ -261,22 +305,33 @@ class ManagedBrowser:
if self.browser_process:
try:
self.browser_process.terminate()
# Wait for process to end gracefully
for _ in range(10): # 10 attempts, 100ms each
if self.browser_process.poll() is not None:
break
await asyncio.sleep(0.1)
# For builtin browsers that should persist, we should check if it's a detached process
# Only terminate if we have proper control over the process
if self.browser_process.poll() is None:
# Process is still running
self.browser_process.terminate()
# Wait for process to end gracefully
for _ in range(10): # 10 attempts, 100ms each
if self.browser_process.poll() is not None:
break
await asyncio.sleep(0.1)
# Force kill if still running
if self.browser_process.poll() is None:
self.browser_process.kill()
await asyncio.sleep(0.1) # Brief wait for kill to take effect
# Force kill if still running
if self.browser_process.poll() is None:
if sys.platform == "win32":
# On Windows we might need taskkill for detached processes
try:
subprocess.run(["taskkill", "/F", "/PID", str(self.browser_process.pid)])
except Exception:
self.browser_process.kill()
else:
self.browser_process.kill()
await asyncio.sleep(0.1) # Brief wait for kill to take effect
except Exception as e:
self.logger.error(
message="Error terminating browser: {error}",
tag="ERROR",
params={"error": str(e)},
)
@@ -379,7 +434,15 @@ class BrowserManager:
sessions (dict): Dictionary to store session information
session_ttl (int): Session timeout in seconds
"""
_playwright_instance = None
@classmethod
async def get_playwright(cls):
from playwright.async_api import async_playwright
if cls._playwright_instance is None:
cls._playwright_instance = await async_playwright().start()
return cls._playwright_instance
def __init__(self, browser_config: BrowserConfig, logger=None):
"""
@@ -429,6 +492,7 @@ class BrowserManager:
Note: This method should be called in a separate task to avoid blocking the main event loop.
"""
self.playwright = await self.get_playwright()
if self.playwright is None:
from playwright.async_api import async_playwright
@@ -443,19 +507,6 @@ class BrowserManager:
self.default_context = contexts[0]
else:
self.default_context = await self.create_browser_context()
# self.default_context = await self.browser.new_context(
# viewport={
# "width": self.config.viewport_width,
# "height": self.config.viewport_height,
# },
# storage_state=self.config.storage_state,
# user_agent=self.config.headers.get(
# "User-Agent", self.config.user_agent
# ),
# accept_downloads=self.config.accept_downloads,
# ignore_https_errors=self.config.ignore_https_errors,
# java_script_enabled=self.config.java_script_enabled,
# )
await self.setup_context(self.default_context)
else:
browser_args = self._build_browser_args()
@@ -470,6 +521,7 @@ class BrowserManager:
self.default_context = self.browser
def _build_browser_args(self) -> dict:
"""Build browser launch arguments from config."""
args = [


@@ -12,7 +12,10 @@ import sys
import datetime
import uuid
import shutil
from typing import List, Dict, Optional, Any
import json
import subprocess
import time
from typing import List, Dict, Optional, Any, Tuple
from colorama import Fore, Style, init
from .async_configs import BrowserConfig
@@ -56,6 +59,11 @@ class BrowserProfiler:
# Ensure profiles directory exists
self.profiles_dir = os.path.join(get_home_folder(), "profiles")
os.makedirs(self.profiles_dir, exist_ok=True)
# Builtin browser config file
self.builtin_browser_dir = os.path.join(get_home_folder(), "builtin-browser")
self.builtin_config_file = os.path.join(self.builtin_browser_dir, "browser_config.json")
os.makedirs(self.builtin_browser_dir, exist_ok=True)
async def create_profile(self,
profile_name: Optional[str] = None,
@@ -547,12 +555,12 @@ class BrowserProfiler:
else:
self.logger.error(f"Invalid choice. Please enter a number between 1 and {exit_option}.", tag="MENU")
async def launch_standalone_browser(self,
browser_type: str = "chromium",
user_data_dir: Optional[str] = None,
debugging_port: int = 9222,
headless: bool = False) -> Optional[str]:
headless: bool = False,
save_as_builtin: bool = False) -> Optional[str]:
"""
Launch a standalone browser with CDP debugging enabled and keep it running
until the user presses 'q'. Returns and displays the CDP URL.
@@ -766,4 +774,201 @@ class BrowserProfiler:
# Return the CDP URL
return cdp_url
async def launch_builtin_browser(self,
browser_type: str = "chromium",
debugging_port: int = 9222,
headless: bool = True) -> Optional[str]:
"""
Launch a browser in the background for use as the builtin browser.
Args:
browser_type (str): Type of browser to launch ('chromium' or 'firefox')
debugging_port (int): Port to use for CDP debugging
headless (bool): Whether to run in headless mode
Returns:
str: CDP URL for the browser, or None if launch failed
"""
# Check if there's an existing browser still running
browser_info = self.get_builtin_browser_info()
if browser_info and self._is_browser_running(browser_info.get('pid')):
self.logger.info("Builtin browser is already running", tag="BUILTIN")
return browser_info.get('cdp_url')
# Create a user data directory for the builtin browser
user_data_dir = os.path.join(self.builtin_browser_dir, "user_data")
os.makedirs(user_data_dir, exist_ok=True)
# Create managed browser instance
managed_browser = ManagedBrowser(
browser_type=browser_type,
user_data_dir=user_data_dir,
headless=headless,
logger=self.logger,
debugging_port=debugging_port
)
try:
# Start the browser
await managed_browser.start()
# Check if browser started successfully
browser_process = managed_browser.browser_process
if not browser_process:
self.logger.error("Failed to start browser process.", tag="BUILTIN")
return None
# Get CDP URL
cdp_url = f"http://localhost:{debugging_port}"
# Try to verify browser is responsive by fetching version info
import aiohttp
json_url = f"{cdp_url}/json/version"
config_json = None
try:
async with aiohttp.ClientSession() as session:
for _ in range(10): # Try multiple times
try:
async with session.get(json_url) as response:
if response.status == 200:
config_json = await response.json()
break
except Exception:
pass
await asyncio.sleep(0.5)
except Exception as e:
self.logger.warning(f"Could not verify browser: {str(e)}", tag="BUILTIN")
# Save browser info
browser_info = {
'pid': browser_process.pid,
'cdp_url': cdp_url,
'user_data_dir': user_data_dir,
'browser_type': browser_type,
'debugging_port': debugging_port,
'start_time': time.time(),
'config': config_json
}
with open(self.builtin_config_file, 'w') as f:
json.dump(browser_info, f, indent=2)
# Detach from the browser process - don't keep any references
# This is important to allow the Python script to exit while the browser continues running
# We'll just record the PID and other info, and the browser will run independently
managed_browser.browser_process = None
self.logger.success(f"Builtin browser launched at CDP URL: {cdp_url}", tag="BUILTIN")
return cdp_url
except Exception as e:
self.logger.error(f"Error launching builtin browser: {str(e)}", tag="BUILTIN")
if managed_browser:
await managed_browser.cleanup()
return None
def get_builtin_browser_info(self) -> Optional[Dict[str, Any]]:
"""
Get information about the builtin browser.
Returns:
dict: Browser information or None if no builtin browser is configured
"""
if not os.path.exists(self.builtin_config_file):
return None
try:
with open(self.builtin_config_file, 'r') as f:
browser_info = json.load(f)
# Check if the browser is still running
if not self._is_browser_running(browser_info.get('pid')):
self.logger.warning("Builtin browser is not running", tag="BUILTIN")
return None
return browser_info
except Exception as e:
self.logger.error(f"Error reading builtin browser config: {str(e)}", tag="BUILTIN")
return None
def _is_browser_running(self, pid: Optional[int]) -> bool:
"""Check if a process with the given PID is running"""
if not pid:
return False
try:
# Check if the process exists
if sys.platform == "win32":
process = subprocess.run(["tasklist", "/FI", f"PID eq {pid}"],
capture_output=True, text=True)
return str(pid) in process.stdout
else:
# Unix-like systems
os.kill(pid, 0) # This doesn't actually kill the process, just checks if it exists
return True
except (ProcessLookupError, PermissionError, OSError):
return False
async def kill_builtin_browser(self) -> bool:
"""
Kill the builtin browser if it's running.
Returns:
bool: True if the browser was killed, False otherwise
"""
browser_info = self.get_builtin_browser_info()
if not browser_info:
self.logger.warning("No builtin browser found", tag="BUILTIN")
return False
pid = browser_info.get('pid')
if not pid:
return False
try:
if sys.platform == "win32":
subprocess.run(["taskkill", "/F", "/PID", str(pid)], check=True)
else:
os.kill(pid, signal.SIGTERM)
# Wait for termination
for _ in range(5):
if not self._is_browser_running(pid):
break
await asyncio.sleep(0.5)
else:
# Force kill if still running
os.kill(pid, signal.SIGKILL)
# Remove config file
if os.path.exists(self.builtin_config_file):
os.unlink(self.builtin_config_file)
self.logger.success("Builtin browser terminated", tag="BUILTIN")
return True
except Exception as e:
self.logger.error(f"Error killing builtin browser: {str(e)}", tag="BUILTIN")
return False
async def get_builtin_browser_status(self) -> Dict[str, Any]:
"""
Get status information about the builtin browser.
Returns:
dict: Status information with running, cdp_url, and info fields
"""
browser_info = self.get_builtin_browser_info()
if not browser_info:
return {
'running': False,
'cdp_url': None,
'info': None
}
return {
'running': True,
'cdp_url': browser_info.get('cdp_url'),
'info': browser_info
}
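The builtin-browser state management above hinges on a small JSON metadata file (`browser_config.json`) that records the PID and CDP URL so later runs can rediscover a detached browser. The persistence round-trip can be sketched on its own; the field set here is a trimmed, illustrative subset of what `launch_builtin_browser` writes:

```python
import json
import os
import tempfile
import time

def save_browser_info(path: str, pid: int, cdp_url: str) -> None:
    """Persist minimal builtin-browser metadata as JSON."""
    info = {"pid": pid, "cdp_url": cdp_url, "start_time": time.time()}
    with open(path, "w") as f:
        json.dump(info, f, indent=2)

def load_browser_info(path: str):
    """Return the stored metadata dict, or None if no file exists."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)

cfg = os.path.join(tempfile.mkdtemp(), "browser_config.json")
save_browser_info(cfg, pid=12345, cdp_url="http://localhost:9222")
info = load_browser_info(cfg)
assert info["cdp_url"] == "http://localhost:9222"
assert load_browser_info(cfg + ".missing") is None
```

The real `get_builtin_browser_info` adds one more step: after loading, it verifies the stored PID is still alive and discards the file's contents if the browser has died.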


@@ -20,13 +20,16 @@ from crawl4ai import (
BrowserConfig,
CrawlerRunConfig,
LLMExtractionStrategy,
LXMLWebScrapingStrategy,
JsonCssExtractionStrategy,
JsonXPathExtractionStrategy,
BM25ContentFilter,
PruningContentFilter,
BrowserProfiler,
DefaultMarkdownGenerator,
LLMConfig
)
from crawl4ai.config import USER_SETTINGS
from litellm import completion
from pathlib import Path
@@ -175,8 +178,12 @@ def show_examples():
# CSS-based extraction
crwl https://example.com -e extract_css.yml -s css_schema.json -o json
# LLM-based extraction
# LLM-based extraction with config file
crwl https://example.com -e extract_llm.yml -s llm_schema.json -o json
# Quick LLM-based JSON extraction (prompts for LLM provider first time)
crwl https://example.com -j # Auto-extracts structured data
crwl https://example.com -j "Extract product details including name, price, and features" # With specific instructions
3⃣ Direct Parameters:
# Browser settings
@@ -278,7 +285,7 @@ llm_schema.json:
# Combine configs with direct parameters
crwl https://example.com -B browser.yml -b "headless=false,viewport_width=1920"
# Full extraction pipeline
# Full extraction pipeline with config files
crwl https://example.com \\
-B browser.yml \\
-C crawler.yml \\
@@ -286,6 +293,12 @@ llm_schema.json:
-s llm_schema.json \\
-o json \\
-v
# Quick LLM-based extraction with specific instructions
crwl https://amazon.com/dp/B01DFKC2SO \\
-j "Extract product title, current price, original price, rating, and all product specifications" \\
-b "headless=true,viewport_width=1280" \\
-v
# Content filtering with BM25
crwl https://example.com \\
@@ -327,6 +340,14 @@ For more documentation visit: https://github.com/unclecode/crawl4ai
- google/gemini-pro
See full list of providers: https://docs.litellm.ai/docs/providers
# Set default LLM provider and token in advance
crwl config set DEFAULT_LLM_PROVIDER "anthropic/claude-3-sonnet"
crwl config set DEFAULT_LLM_PROVIDER_TOKEN "your-api-token-here"
# Set default browser behavior
crwl config set BROWSER_HEADLESS false # Always show browser window
crwl config set USER_AGENT_MODE random # Use random user agent
9⃣ Profile Management:
# Launch interactive profile manager
@@ -341,6 +362,32 @@ For more documentation visit: https://github.com/unclecode/crawl4ai
crwl profiles # Select "Create new profile" option
# 2. Then use that profile to crawl authenticated content:
crwl https://site-requiring-login.com/dashboard -p my-profile-name
🔄 Builtin Browser Management:
# Start a builtin browser (runs in the background)
crwl browser start
# Check builtin browser status
crwl browser status
# Open a visible window to see the browser
crwl browser view --url https://example.com
# Stop the builtin browser
crwl browser stop
# Restart with different options
crwl browser restart --browser-type chromium --port 9223 --no-headless
# Use the builtin browser in your code
# (Just set browser_mode="builtin" in your BrowserConfig)
browser_config = BrowserConfig(
browser_mode="builtin",
headless=True
)
# Usage via CLI:
crwl https://example.com -b "browser_mode=builtin"
"""
click.echo(examples)
@@ -575,6 +622,307 @@ def cli():
pass
@cli.group("browser")
def browser_cmd():
"""Manage browser instances for Crawl4AI
Commands to manage browser instances for Crawl4AI, including:
- status - Check status of the builtin browser
- start - Start a new builtin browser
- stop - Stop the running builtin browser
- restart - Restart the builtin browser
"""
pass
@browser_cmd.command("status")
def browser_status_cmd():
"""Show status of the builtin browser"""
profiler = BrowserProfiler()
try:
status = anyio.run(profiler.get_builtin_browser_status)
if status["running"]:
info = status["info"]
console.print(Panel(
f"[green]Builtin browser is running[/green]\n\n"
f"CDP URL: [cyan]{info['cdp_url']}[/cyan]\n"
f"Process ID: [yellow]{info['pid']}[/yellow]\n"
f"Browser type: [blue]{info['browser_type']}[/blue]\n"
f"User data directory: [magenta]{info['user_data_dir']}[/magenta]\n"
f"Started: [cyan]{time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(info['start_time']))}[/cyan]",
title="Builtin Browser Status",
border_style="green"
))
else:
console.print(Panel(
"[yellow]Builtin browser is not running[/yellow]\n\n"
"Use 'crwl browser start' to start a builtin browser",
title="Builtin Browser Status",
border_style="yellow"
))
except Exception as e:
console.print(f"[red]Error checking browser status: {str(e)}[/red]")
sys.exit(1)
@browser_cmd.command("start")
@click.option("--browser-type", "-b", type=click.Choice(["chromium", "firefox"]), default="chromium",
help="Browser type (default: chromium)")
@click.option("--port", "-p", type=int, default=9222, help="Debugging port (default: 9222)")
@click.option("--headless/--no-headless", default=True, help="Run browser in headless mode")
def browser_start_cmd(browser_type: str, port: int, headless: bool):
"""Start a builtin browser instance
This will start a persistent browser instance that can be used by Crawl4AI
by setting browser_mode="builtin" in BrowserConfig.
"""
profiler = BrowserProfiler()
# First check if browser is already running
status = anyio.run(profiler.get_builtin_browser_status)
if status["running"]:
console.print(Panel(
"[yellow]Builtin browser is already running[/yellow]\n\n"
f"CDP URL: [cyan]{status['cdp_url']}[/cyan]\n\n"
"Use 'crwl browser restart' to restart the browser",
title="Builtin Browser Start",
border_style="yellow"
))
return
try:
console.print(Panel(
f"[cyan]Starting builtin browser[/cyan]\n\n"
f"Browser type: [green]{browser_type}[/green]\n"
f"Debugging port: [yellow]{port}[/yellow]\n"
f"Headless: [cyan]{'Yes' if headless else 'No'}[/cyan]",
title="Builtin Browser Start",
border_style="cyan"
))
cdp_url = anyio.run(
profiler.launch_builtin_browser,
browser_type,
port,
headless
)
if cdp_url:
console.print(Panel(
f"[green]Builtin browser started successfully[/green]\n\n"
f"CDP URL: [cyan]{cdp_url}[/cyan]\n\n"
"This browser will be used automatically when setting browser_mode='builtin'",
title="Builtin Browser Start",
border_style="green"
))
else:
console.print(Panel(
"[red]Failed to start builtin browser[/red]",
title="Builtin Browser Start",
border_style="red"
))
sys.exit(1)
except Exception as e:
console.print(f"[red]Error starting builtin browser: {str(e)}[/red]")
sys.exit(1)
@browser_cmd.command("stop")
def browser_stop_cmd():
"""Stop the running builtin browser"""
profiler = BrowserProfiler()
try:
# First check if browser is running
status = anyio.run(profiler.get_builtin_browser_status)
if not status["running"]:
console.print(Panel(
"[yellow]No builtin browser is currently running[/yellow]",
title="Builtin Browser Stop",
border_style="yellow"
))
return
console.print(Panel(
"[cyan]Stopping builtin browser...[/cyan]",
title="Builtin Browser Stop",
border_style="cyan"
))
success = anyio.run(profiler.kill_builtin_browser)
if success:
console.print(Panel(
"[green]Builtin browser stopped successfully[/green]",
title="Builtin Browser Stop",
border_style="green"
))
else:
console.print(Panel(
"[red]Failed to stop builtin browser[/red]",
title="Builtin Browser Stop",
border_style="red"
))
sys.exit(1)
except Exception as e:
console.print(f"[red]Error stopping builtin browser: {str(e)}[/red]")
sys.exit(1)
@browser_cmd.command("view")
@click.option("--url", "-u", help="URL to navigate to (defaults to about:blank)")
def browser_view_cmd(url: Optional[str]):
"""
Open a visible window of the builtin browser
This command connects to the running builtin browser and opens a visible window,
allowing you to see what the browser is currently viewing or navigate to a URL.
"""
profiler = BrowserProfiler()
try:
# First check if browser is running
status = anyio.run(profiler.get_builtin_browser_status)
if not status["running"]:
console.print(Panel(
"[yellow]No builtin browser is currently running[/yellow]\n\n"
"Use 'crwl browser start' to start a builtin browser first",
title="Builtin Browser View",
border_style="yellow"
))
return
info = status["info"]
cdp_url = info["cdp_url"]
console.print(Panel(
f"[cyan]Opening visible window connected to builtin browser[/cyan]\n\n"
f"CDP URL: [green]{cdp_url}[/green]\n"
f"URL to load: [yellow]{url or 'about:blank'}[/yellow]",
title="Builtin Browser View",
border_style="cyan"
))
# Use the CDP URL to launch a new visible window
import subprocess
import os
# Determine the browser command based on platform
if sys.platform == "darwin": # macOS
browser_cmd = ["/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"]
elif sys.platform == "win32": # Windows
browser_cmd = ["C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe"]
else: # Linux
browser_cmd = ["google-chrome"]
# Add arguments
browser_args = [
f"--remote-debugging-port={info['debugging_port']}",
"--remote-debugging-address=localhost",
"--no-first-run",
"--no-default-browser-check"
]
# Add URL if provided
if url:
browser_args.append(url)
# Launch browser
try:
subprocess.Popen(browser_cmd + browser_args)
console.print("[green]Browser window opened. Close it when finished viewing.[/green]")
except Exception as e:
console.print(f"[red]Error launching browser: {str(e)}[/red]")
console.print(f"[yellow]Try connecting manually to {cdp_url} in Chrome or using the '--remote-debugging-port' flag.[/yellow]")
except Exception as e:
console.print(f"[red]Error viewing builtin browser: {str(e)}[/red]")
sys.exit(1)
@browser_cmd.command("restart")
@click.option("--browser-type", "-b", type=click.Choice(["chromium", "firefox"]), default=None,
help="Browser type (defaults to same as current)")
@click.option("--port", "-p", type=int, default=None, help="Debugging port (defaults to same as current)")
@click.option("--headless/--no-headless", default=None, help="Run browser in headless mode")
def browser_restart_cmd(browser_type: Optional[str], port: Optional[int], headless: Optional[bool]):
"""Restart the builtin browser
Stops the current builtin browser if running and starts a new one.
By default, uses the same configuration as the current browser.
"""
profiler = BrowserProfiler()
try:
# First check if browser is running and get its config
status = anyio.run(profiler.get_builtin_browser_status)
current_config = {}
if status["running"]:
info = status["info"]
current_config = {
"browser_type": info["browser_type"],
"port": info["debugging_port"],
"headless": True # Default assumption
}
# Stop the browser
console.print(Panel(
"[cyan]Stopping current builtin browser...[/cyan]",
title="Builtin Browser Restart",
border_style="cyan"
))
success = anyio.run(profiler.kill_builtin_browser)
if not success:
console.print(Panel(
"[red]Failed to stop current browser[/red]",
title="Builtin Browser Restart",
border_style="red"
))
sys.exit(1)
# Use provided options or defaults from current config
browser_type = browser_type or current_config.get("browser_type", "chromium")
port = port or current_config.get("port", 9222)
headless = headless if headless is not None else current_config.get("headless", True)
# Start a new browser
console.print(Panel(
f"[cyan]Starting new builtin browser[/cyan]\n\n"
f"Browser type: [green]{browser_type}[/green]\n"
f"Debugging port: [yellow]{port}[/yellow]\n"
f"Headless: [cyan]{'Yes' if headless else 'No'}[/cyan]",
title="Builtin Browser Restart",
border_style="cyan"
))
cdp_url = anyio.run(
profiler.launch_builtin_browser,
browser_type,
port,
headless
)
if cdp_url:
console.print(Panel(
f"[green]Builtin browser restarted successfully[/green]\n\n"
f"CDP URL: [cyan]{cdp_url}[/cyan]",
title="Builtin Browser Restart",
border_style="green"
))
else:
console.print(Panel(
"[red]Failed to restart builtin browser[/red]",
title="Builtin Browser Restart",
border_style="red"
))
sys.exit(1)
except Exception as e:
console.print(f"[red]Error restarting builtin browser: {str(e)}[/red]")
sys.exit(1)
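The restart command merges explicitly passed flags with the running browser's settings. Booleans need the `is not None` test because `--no-headless` yields a legitimate `False` that a plain `or` would discard. A small sketch of that merge (the function name is illustrative; note the diff merges `port` with a plain `or`, while this sketch uses the stricter `None` test for it too):

```python
def merge_restart_options(current: dict, browser_type=None, port=None, headless=None) -> dict:
    """Overlay explicitly-passed CLI options onto the current browser config."""
    return {
        "browser_type": browser_type or current.get("browser_type", "chromium"),
        "port": port if port is not None else current.get("port", 9222),
        # headless=False must survive the merge, hence the None check.
        "headless": headless if headless is not None else current.get("headless", True),
    }
```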
@cli.command("cdp")
@click.option("--user-data-dir", "-d", help="Directory to use for browser data (will be created if it doesn't exist)")
@click.option("--port", "-P", type=int, default=9222, help="Debugging port (default: 9222)")
@@ -656,17 +1004,19 @@ def cdp_cmd(user_data_dir: Optional[str], port: int, browser_type: str, headless
@click.option("--crawler-config", "-C", type=click.Path(exists=True), help="Crawler config file (YAML/JSON)")
@click.option("--filter-config", "-f", type=click.Path(exists=True), help="Content filter config file")
@click.option("--extraction-config", "-e", type=click.Path(exists=True), help="Extraction strategy config file")
@click.option("--json-extract", "-j", is_flag=False, flag_value="", default=None, help="Extract structured data using LLM with optional description")
@click.option("--schema", "-s", type=click.Path(exists=True), help="JSON schema for extraction")
@click.option("--browser", "-b", type=str, callback=parse_key_values, help="Browser parameters as key1=value1,key2=value2")
@click.option("--crawler", "-c", type=str, callback=parse_key_values, help="Crawler parameters as key1=value1,key2=value2")
@click.option("--output", "-o", type=click.Choice(["all", "json", "markdown", "md", "markdown-fit", "md-fit"]), default="all")
@click.option("--bypass-cache", is_flag=True, default=True, help="Bypass cache when crawling")
@click.option("--output-file", "-O", type=click.Path(), help="Output file path (default: stdout)")
@click.option("--bypass-cache", "-b", is_flag=True, default=True, help="Bypass cache when crawling")
@click.option("--question", "-q", help="Ask a question about the crawled content")
@click.option("--verbose", "-v", is_flag=True)
@click.option("--profile", "-p", help="Use a specific browser profile (by name)")
def crawl_cmd(url: str, browser_config: str, crawler_config: str, filter_config: str,
extraction_config: str, schema: str, browser: Dict, crawler: Dict,
output: str, bypass_cache: bool, question: str, verbose: bool, profile: str):
extraction_config: str, json_extract: str, schema: str, browser: Dict, crawler: Dict,
output: str, output_file: str, bypass_cache: bool, question: str, verbose: bool, profile: str):
"""Crawl a website and extract content
Simple Usage:
@@ -710,21 +1060,65 @@ def crawl_cmd(url: str, browser_config: str, crawler_config: str, filter_config:
crawler_cfg = crawler_cfg.clone(**crawler)
# Handle content filter config
if filter_config:
filter_conf = load_config_file(filter_config)
if filter_config or output in ["markdown-fit", "md-fit"]:
if filter_config:
filter_conf = load_config_file(filter_config)
elif not filter_config and output in ["markdown-fit", "md-fit"]:
filter_conf = {
"type": "pruning",
"query": "",
"threshold": 0.48
}
if filter_conf["type"] == "bm25":
crawler_cfg.content_filter = BM25ContentFilter(
user_query=filter_conf.get("query"),
bm25_threshold=filter_conf.get("threshold", 1.0)
crawler_cfg.markdown_generator = DefaultMarkdownGenerator(
content_filter=BM25ContentFilter(
user_query=filter_conf.get("query"),
bm25_threshold=filter_conf.get("threshold", 1.0)
)
)
elif filter_conf["type"] == "pruning":
crawler_cfg.content_filter = PruningContentFilter(
user_query=filter_conf.get("query"),
threshold=filter_conf.get("threshold", 0.48)
crawler_cfg.markdown_generator = DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
user_query=filter_conf.get("query"),
threshold=filter_conf.get("threshold", 0.48)
)
)
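The branch above resolves the content filter in two steps: an explicit `-f` config file always wins, and `md-fit` output without one falls back to a default pruning filter. Condensed as a pure function (the name and return shape are illustrative):

```python
def resolve_filter_conf(loaded_conf, output):
    """Return the filter config dict to apply, or None when no filtering is wanted."""
    if loaded_conf is not None:
        return loaded_conf  # explicit -f file wins
    if output in ("markdown-fit", "md-fit"):
        # Same defaults the CLI falls back to for fit-markdown output.
        return {"type": "pruning", "query": "", "threshold": 0.48}
    return None
```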
# Handle json-extract option (takes precedence over extraction-config)
if json_extract is not None:
# Get LLM provider and token
provider, token = setup_llm_config()
# Default sophisticated instruction for structured data extraction
default_instruction = """Analyze the web page content and extract structured data as JSON.
If the page contains a list of items with repeated patterns, extract all items in an array.
If the page is an article or contains unique content, extract a comprehensive JSON object with all relevant information.
Look at the content, intention of content, what it offers and find the data item(s) in the page.
Always return valid, properly formatted JSON."""
default_instruction_with_user_query = """Analyze the web page content and extract structured data as JSON, following the below instruction and explanation of schema and always return valid, properly formatted JSON. \n\nInstruction:\n\n""" + json_extract
# Determine instruction based on whether json_extract is empty or has content
instruction = default_instruction_with_user_query if json_extract else default_instruction
# Create LLM extraction strategy
crawler_cfg.extraction_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(provider=provider, api_token=token),
instruction=instruction,
schema=load_schema_file(schema), # Will be None if no schema is provided
extraction_type="schema", #if schema else "block",
apply_chunking=False,
force_json_response=True,
verbose=verbose,
)
# Set output to JSON if not explicitly specified
if output == "all":
output = "json"
# Handle extraction strategy
if extraction_config:
# Handle extraction strategy from config file (only if json-extract wasn't used)
elif extraction_config:
extract_conf = load_config_file(extraction_config)
schema_data = load_schema_file(schema)
@@ -758,6 +1152,13 @@ def crawl_cmd(url: str, browser_config: str, crawler_config: str, filter_config:
# No cache
if bypass_cache:
crawler_cfg.cache_mode = CacheMode.BYPASS
crawler_cfg.scraping_strategy = LXMLWebScrapingStrategy()
config = get_global_config()
browser_cfg.verbose = config.get("VERBOSE", False)
crawler_cfg.verbose = config.get("VERBOSE", False)
# Run crawler
result : CrawlResult = anyio.run(
@@ -776,14 +1177,31 @@ def crawl_cmd(url: str, browser_config: str, crawler_config: str, filter_config:
return
# Handle output
if output == "all":
click.echo(json.dumps(result.model_dump(), indent=2))
elif output == "json":
click.echo(json.dumps(json.loads(result.extracted_content), indent=2))
elif output in ["markdown", "md"]:
click.echo(result.markdown.raw_markdown)
elif output in ["markdown-fit", "md-fit"]:
click.echo(result.markdown.fit_markdown)
if not output_file:
if output == "all":
click.echo(json.dumps(result.model_dump(), indent=2))
elif output == "json":
extracted_items = json.loads(result.extracted_content)
click.echo(json.dumps(extracted_items, indent=2))
elif output in ["markdown", "md"]:
click.echo(result.markdown.raw_markdown)
elif output in ["markdown-fit", "md-fit"]:
click.echo(result.markdown.fit_markdown)
else:
if output == "all":
with open(output_file, "w") as f:
f.write(json.dumps(result.model_dump(), indent=2))
elif output == "json":
with open(output_file, "w") as f:
f.write(result.extracted_content)
elif output in ["markdown", "md"]:
with open(output_file, "w") as f:
f.write(result.markdown.raw_markdown)
elif output in ["markdown-fit", "md-fit"]:
with open(output_file, "w") as f:
f.write(result.markdown.fit_markdown)
except Exception as e:
raise click.ClickException(str(e))
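The `-j/--json-extract` option is declared with `is_flag=False, flag_value="", default=None`, which gives it three states: flag absent yields `None`, a bare `-j` yields `""`, and `-j "text"` yields the text. The instruction-selection logic above can be summarized without click (the helper name is illustrative):

```python
def choose_instruction(json_extract, default, default_with_query_prefix):
    """Mirror the -j handling: None disables LLM extraction; '' uses the
    generic instruction; any other string is appended to the prefixed one."""
    if json_extract is None:
        return None  # flag absent: no LLM extraction
    if json_extract == "":
        return default  # bare -j: generic extraction prompt
    return default_with_query_prefix + json_extract
```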
@@ -793,6 +1211,120 @@ def examples_cmd():
"""Show usage examples"""
show_examples()
@cli.group("config")
def config_cmd():
"""Manage global configuration settings
Commands to view and update global configuration settings:
- list: Display all current configuration settings
- get: Get the value of a specific setting
- set: Set the value of a specific setting
"""
pass
@config_cmd.command("list")
def config_list_cmd():
"""List all configuration settings"""
config = get_global_config()
table = Table(title="Crawl4AI Configuration", show_header=True, header_style="bold cyan", border_style="blue")
table.add_column("Setting", style="cyan")
table.add_column("Value", style="green")
table.add_column("Default", style="yellow")
table.add_column("Description", style="white")
for key, setting in USER_SETTINGS.items():
value = config.get(key, setting["default"])
# Handle secret values
display_value = value
if setting.get("secret", False) and value:
display_value = "********"
# Handle boolean values
if setting["type"] == "boolean":
display_value = str(value).lower()
default_value = str(setting["default"]).lower()
else:
default_value = str(setting["default"])
table.add_row(
key,
str(display_value),
default_value,
setting["description"]
)
console.print(table)
@config_cmd.command("get")
@click.argument("key", required=True)
def config_get_cmd(key: str):
"""Get a specific configuration setting"""
config = get_global_config()
# Normalize key to uppercase
key = key.upper()
if key not in USER_SETTINGS:
console.print(f"[red]Error: Unknown setting '{key}'[/red]")
return
value = config.get(key, USER_SETTINGS[key]["default"])
# Handle secret values
display_value = value
if USER_SETTINGS[key].get("secret", False) and value:
display_value = "********"
console.print(f"[cyan]{key}[/cyan] = [green]{display_value}[/green]")
console.print(f"[dim]Description: {USER_SETTINGS[key]['description']}[/dim]")
@config_cmd.command("set")
@click.argument("key", required=True)
@click.argument("value", required=True)
def config_set_cmd(key: str, value: str):
"""Set a configuration setting"""
config = get_global_config()
# Normalize key to uppercase
key = key.upper()
if key not in USER_SETTINGS:
console.print(f"[red]Error: Unknown setting '{key}'[/red]")
console.print(f"[yellow]Available settings: {', '.join(USER_SETTINGS.keys())}[/yellow]")
return
setting = USER_SETTINGS[key]
# Type conversion and validation
if setting["type"] == "boolean":
if value.lower() in ["true", "yes", "1", "y"]:
typed_value = True
elif value.lower() in ["false", "no", "0", "n"]:
typed_value = False
else:
console.print(f"[red]Error: Invalid boolean value. Use 'true' or 'false'.[/red]")
return
elif setting["type"] == "string":
typed_value = value
# Check if the value should be one of the allowed options
if "options" in setting and value not in setting["options"]:
console.print(f"[red]Error: Value must be one of: {', '.join(setting['options'])}[/red]")
return
# Update config
config[key] = typed_value
save_global_config(config)
# Handle secret values for display
display_value = typed_value
if setting.get("secret", False) and typed_value:
display_value = "********"
console.print(f"[green]Successfully set[/green] [cyan]{key}[/cyan] = [green]{display_value}[/green]")
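The boolean branch in `config_set_cmd` accepts several spellings. Extracted as a standalone coercer (the function name is illustrative), it is easy to unit-test:

```python
def coerce_bool(value: str):
    """Map the accepted CLI spellings to a bool; return None for anything else."""
    v = value.lower()
    if v in ("true", "yes", "1", "y"):
        return True
    if v in ("false", "no", "0", "n"):
        return False
    return None  # caller reports the error, as config_set_cmd does
```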
@cli.command("profiles")
def profiles_cmd():
"""Manage browser profiles interactively
@@ -812,6 +1344,7 @@ def profiles_cmd():
@click.option("--crawler-config", "-C", type=click.Path(exists=True), help="Crawler config file (YAML/JSON)")
@click.option("--filter-config", "-f", type=click.Path(exists=True), help="Content filter config file")
@click.option("--extraction-config", "-e", type=click.Path(exists=True), help="Extraction strategy config file")
@click.option("--json-extract", "-j", is_flag=False, flag_value="", default=None, help="Extract structured data using LLM with optional description")
@click.option("--schema", "-s", type=click.Path(exists=True), help="JSON schema for extraction")
@click.option("--browser", "-b", type=str, callback=parse_key_values, help="Browser parameters as key1=value1,key2=value2")
@click.option("--crawler", "-c", type=str, callback=parse_key_values, help="Crawler parameters as key1=value1,key2=value2")
@@ -821,7 +1354,7 @@ def profiles_cmd():
@click.option("--verbose", "-v", is_flag=True)
@click.option("--profile", "-p", help="Use a specific browser profile (by name)")
def default(url: str, example: bool, browser_config: str, crawler_config: str, filter_config: str,
extraction_config: str, schema: str, browser: Dict, crawler: Dict,
extraction_config: str, json_extract: str, schema: str, browser: Dict, crawler: Dict,
output: str, bypass_cache: bool, question: str, verbose: bool, profile: str):
"""Crawl4AI CLI - Web content extraction tool
@@ -834,7 +1367,15 @@ def default(url: str, example: bool, browser_config: str, crawler_config: str, f
crwl profiles - Manage browser profiles for identity-based crawling
crwl crawl - Crawl a website with advanced options
crwl cdp - Launch browser with CDP debugging enabled
crwl browser - Manage builtin browser (start, stop, status, restart)
crwl config - Manage global configuration settings
crwl examples - Show more usage examples
Configuration Examples:
crwl config list - List all configuration settings
crwl config get DEFAULT_LLM_PROVIDER - Show current LLM provider
crwl config set VERBOSE true - Enable verbose mode globally
crwl config set BROWSER_HEADLESS false - Default to visible browser
"""
if example:
@@ -855,7 +1396,8 @@ def default(url: str, example: bool, browser_config: str, crawler_config: str, f
browser_config=browser_config,
crawler_config=crawler_config,
filter_config=filter_config,
extraction_config=extraction_config,
extraction_config=extraction_config,
json_extract=json_extract,
schema=schema,
browser=browser,
crawler=crawler,


@@ -0,0 +1,837 @@
import time
import uuid
import threading
import psutil
from datetime import datetime, timedelta
from typing import Dict, Optional, List
from rich.console import Console
from rich.layout import Layout
from rich.panel import Panel
from rich.table import Table
from rich.text import Text
from rich.live import Live
from rich import box
from ..models import CrawlStatus
class TerminalUI:
"""Terminal user interface for CrawlerMonitor using rich library."""
def __init__(self, refresh_rate: float = 1.0, max_width: int = 120):
"""
Initialize the terminal UI.
Args:
refresh_rate: How often to refresh the UI (in seconds)
max_width: Maximum width of the UI in characters
"""
self.console = Console(width=max_width)
self.layout = Layout()
self.refresh_rate = refresh_rate
self.stop_event = threading.Event()
self.ui_thread = None
self.monitor = None # Will be set by CrawlerMonitor
self.max_width = max_width
# Setup layout - vertical layout (top to bottom)
self.layout.split(
Layout(name="header", size=3),
Layout(name="pipeline_status", size=10),
Layout(name="task_details", ratio=1),
Layout(name="footer", size=3) # Increased footer size to fit all content
)
def start(self, monitor):
"""Start the UI thread."""
self.monitor = monitor
self.stop_event.clear()
self.ui_thread = threading.Thread(target=self._ui_loop)
self.ui_thread.daemon = True
self.ui_thread.start()
def stop(self):
"""Stop the UI thread."""
if self.ui_thread and self.ui_thread.is_alive():
self.stop_event.set()
# Only try to join if we're not in the UI thread
# This prevents "cannot join current thread" errors
if threading.current_thread() != self.ui_thread:
self.ui_thread.join(timeout=5.0)
def _ui_loop(self):
"""Main UI rendering loop."""
import sys
import select
import termios
import tty
# Setup terminal for non-blocking input
old_settings = termios.tcgetattr(sys.stdin)
try:
tty.setcbreak(sys.stdin.fileno())
# Use Live display to render the UI
with Live(self.layout, refresh_per_second=1/self.refresh_rate, screen=True) as live:
self.live = live # Store the live display for updates
# Main UI loop
while not self.stop_event.is_set():
self._update_display()
# Check for key press (non-blocking)
if select.select([sys.stdin], [], [], 0)[0]:
key = sys.stdin.read(1)
# Check for 'q' to quit
if key == 'q':
# Signal stop but don't call monitor.stop() from UI thread
# as it would cause the thread to try to join itself
self.stop_event.set()
self.monitor.is_running = False
break
time.sleep(self.refresh_rate)
# Just check if the monitor was stopped
if not self.monitor.is_running:
break
finally:
# Restore terminal settings
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings)
def _update_display(self):
"""Update the terminal display with current statistics."""
if not self.monitor:
return
# Update crawler status panel
self.layout["header"].update(self._create_status_panel())
# Update pipeline status panel and task details panel
self.layout["pipeline_status"].update(self._create_pipeline_panel())
self.layout["task_details"].update(self._create_task_details_panel())
# Update footer
self.layout["footer"].update(self._create_footer())
def _create_status_panel(self) -> Panel:
"""Create the crawler status panel."""
summary = self.monitor.get_summary()
# Format memory status with icon
memory_status = self.monitor.get_memory_status()
memory_icon = "🟢" # Default NORMAL
if memory_status == "PRESSURE":
memory_icon = "🟠"
elif memory_status == "CRITICAL":
memory_icon = "🔴"
# Get current memory usage
current_memory = psutil.Process().memory_info().rss / (1024 * 1024) # MB
memory_percent = (current_memory / psutil.virtual_memory().total) * 100
# Format runtime
runtime = self.monitor._format_time(time.time() - self.monitor.start_time if self.monitor.start_time else 0)
# Create the status text
status_text = Text()
status_text.append(f"Web Crawler Dashboard | Runtime: {runtime} | Memory: {memory_percent:.1f}% {memory_icon}\n")
status_text.append(f"Status: {memory_status} | URLs: {summary['urls_completed']}/{summary['urls_total']} | ")
status_text.append(f"Peak Mem: {summary['peak_memory_percent']:.1f}% at {self.monitor._format_time(summary['peak_memory_time'])}")
return Panel(status_text, title="Crawler Status", border_style="blue")
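`_format_time` is referenced here but not shown in this diff; a plausible minimal version (an assumption, since the definition lives elsewhere in the module) renders elapsed seconds as `H:MM:SS`:

```python
def format_time(seconds: float) -> str:
    """Render an elapsed duration as H:MM:SS (sketch of the monitor's helper)."""
    seconds = int(max(seconds, 0))  # clamp negatives from a missing start_time
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours}:{minutes:02d}:{secs:02d}"
```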
def _create_pipeline_panel(self) -> Panel:
"""Create the pipeline status panel."""
summary = self.monitor.get_summary()
queue_stats = self.monitor.get_queue_stats()
# Create a table for status counts
table = Table(show_header=True, box=None)
table.add_column("Status", style="cyan")
table.add_column("Count", justify="right")
table.add_column("Percentage", justify="right")
table.add_column("Stat", style="cyan")
table.add_column("Value", justify="right")
# Calculate overall progress
progress = f"{summary['urls_completed']}/{summary['urls_total']}"
progress_percent = f"{summary['completion_percentage']:.1f}%"
# Add rows for each status
table.add_row(
"Overall Progress",
progress,
progress_percent,
"Est. Completion",
summary.get('estimated_completion_time', "N/A")
)
# Add rows for each status
status_counts = summary['status_counts']
total = summary['urls_total'] or 1 # Avoid division by zero
# Status rows
table.add_row(
"Completed",
str(status_counts.get(CrawlStatus.COMPLETED.name, 0)),
f"{status_counts.get(CrawlStatus.COMPLETED.name, 0) / total * 100:.1f}%",
"Avg. Time/URL",
f"{summary.get('avg_task_duration', 0):.2f}s"
)
table.add_row(
"Failed",
str(status_counts.get(CrawlStatus.FAILED.name, 0)),
f"{status_counts.get(CrawlStatus.FAILED.name, 0) / total * 100:.1f}%",
"Concurrent Tasks",
str(status_counts.get(CrawlStatus.IN_PROGRESS.name, 0))
)
table.add_row(
"In Progress",
str(status_counts.get(CrawlStatus.IN_PROGRESS.name, 0)),
f"{status_counts.get(CrawlStatus.IN_PROGRESS.name, 0) / total * 100:.1f}%",
"Queue Size",
str(queue_stats['total_queued'])
)
table.add_row(
"Queued",
str(status_counts.get(CrawlStatus.QUEUED.name, 0)),
f"{status_counts.get(CrawlStatus.QUEUED.name, 0) / total * 100:.1f}%",
"Max Wait Time",
f"{queue_stats['highest_wait_time']:.1f}s"
)
# Requeued is a special case as it's not a status
requeued_count = summary.get('requeued_count', 0)
table.add_row(
"Requeued",
str(requeued_count),
f"{summary.get('requeue_rate', 0):.1f}%",
"Avg Wait Time",
f"{queue_stats['avg_wait_time']:.1f}s"
)
# Add empty row for spacing
table.add_row(
"",
"",
"",
"Requeue Rate",
f"{summary.get('requeue_rate', 0):.1f}%"
)
return Panel(table, title="Pipeline Status", border_style="green")
def _create_task_details_panel(self) -> Panel:
"""Create the task details panel."""
# Create a table for task details
table = Table(show_header=True, expand=True)
table.add_column("Task ID", style="cyan", no_wrap=True, width=10)
table.add_column("URL", style="blue", ratio=3)
table.add_column("Status", style="green", width=15)
table.add_column("Memory", justify="right", width=8)
table.add_column("Peak", justify="right", width=8)
table.add_column("Duration", justify="right", width=10)
# Get all task stats
task_stats = self.monitor.get_all_task_stats()
# Add summary row
active_tasks = sum(1 for stats in task_stats.values()
if stats['status'] == CrawlStatus.IN_PROGRESS.name)
total_memory = sum(stats['memory_usage'] for stats in task_stats.values())
total_peak = sum(stats['peak_memory'] for stats in task_stats.values())
# Summary row with separators
table.add_row(
"SUMMARY",
f"Total: {len(task_stats)}",
f"Active: {active_tasks}",
f"{total_memory:.1f}",
f"{total_peak:.1f}",
"N/A"
)
# Add a separator
table.add_row("" * 10, "" * 20, "" * 10, "" * 8, "" * 8, "" * 10)
# Status icons
status_icons = {
CrawlStatus.QUEUED.name: "⏳",
CrawlStatus.IN_PROGRESS.name: "🔄",
CrawlStatus.COMPLETED.name: "✅",
CrawlStatus.FAILED.name: "❌"
}
# Calculate how many rows we can display based on available space
# We can display more rows now that we have a dedicated panel
display_count = min(len(task_stats), 20) # Display up to 20 tasks
# Add rows for each task
for task_id, stats in sorted(
list(task_stats.items())[:display_count],
# Sort: 1. IN_PROGRESS first, 2. QUEUED, 3. COMPLETED/FAILED by recency
key=lambda x: (
0 if x[1]['status'] == CrawlStatus.IN_PROGRESS.name else
1 if x[1]['status'] == CrawlStatus.QUEUED.name else
2,
-1 * (x[1].get('end_time', 0) or 0) # Most recent first
)
):
# Truncate task_id and URL for display
short_id = task_id[:8]
url = stats['url']
if len(url) > 50: # Allow longer URLs in the dedicated panel
url = url[:47] + "..."
# Format status with icon
status = f"{status_icons.get(stats['status'], '?')} {stats['status']}"
# Add row
table.add_row(
short_id,
url,
status,
f"{stats['memory_usage']:.1f}",
f"{stats['peak_memory']:.1f}",
stats['duration'] if 'duration' in stats else "0:00"
)
return Panel(table, title="Task Details", border_style="yellow")
def _create_footer(self) -> Panel:
"""Create the footer panel."""
from rich.columns import Columns
from rich.align import Align
memory_status = self.monitor.get_memory_status()
memory_icon = "🟢" # Default NORMAL
if memory_status == "PRESSURE":
memory_icon = "🟠"
elif memory_status == "CRITICAL":
memory_icon = "🔴"
# Left section - memory status
left_text = Text()
left_text.append("Memory Status: ", style="bold")
status_style = "green" if memory_status == "NORMAL" else "yellow" if memory_status == "PRESSURE" else "red bold"
left_text.append(f"{memory_icon} {memory_status}", style=status_style)
# Center section - copyright
center_text = Text("© Crawl4AI 2025 | Made by UnclecCode", style="cyan italic")
# Right section - quit instruction
right_text = Text()
right_text.append("Press ", style="bold")
right_text.append("q", style="white on blue")
right_text.append(" to quit", style="bold")
# Create columns with the three sections
footer_content = Columns(
[
Align.left(left_text),
Align.center(center_text),
Align.right(right_text)
],
expand=True
)
# Create a more visible footer panel
return Panel(
footer_content,
border_style="white",
padding=(0, 1) # Add padding for better visibility
)
class CrawlerMonitor:
"""
Comprehensive monitoring and visualization system for tracking web crawler operations in real-time.
Provides a terminal-based dashboard that displays task statuses, memory usage, queue statistics,
and performance metrics.
"""
def __init__(
self,
urls_total: int = 0,
refresh_rate: float = 1.0,
enable_ui: bool = True,
max_width: int = 120
):
"""
Initialize the CrawlerMonitor.
Args:
urls_total: Total number of URLs to be crawled
refresh_rate: How often to refresh the UI (in seconds)
enable_ui: Whether to display the terminal UI
max_width: Maximum width of the UI in characters
"""
# Core monitoring attributes
self.stats = {} # Task ID -> stats dict
self.memory_status = "NORMAL"
self.start_time = None
self.end_time = None
self.is_running = False
self.queue_stats = {
"total_queued": 0,
"highest_wait_time": 0.0,
"avg_wait_time": 0.0
}
self.urls_total = urls_total
self.urls_completed = 0
self.peak_memory_percent = 0.0
self.peak_memory_time = 0.0
# Status counts
self.status_counts = {
CrawlStatus.QUEUED.name: 0,
CrawlStatus.IN_PROGRESS.name: 0,
CrawlStatus.COMPLETED.name: 0,
CrawlStatus.FAILED.name: 0
}
# Requeue tracking
self.requeued_count = 0
# Thread-safety
self._lock = threading.RLock()
# Terminal UI
self.enable_ui = enable_ui
self.terminal_ui = TerminalUI(
refresh_rate=refresh_rate,
max_width=max_width
) if enable_ui else None
def start(self):
"""
Start the monitoring session.
- Initializes the start_time
- Sets is_running to True
- Starts the terminal UI if enabled
"""
with self._lock:
self.start_time = time.time()
self.is_running = True
# Start the terminal UI
if self.enable_ui and self.terminal_ui:
self.terminal_ui.start(self)
def stop(self):
"""
Stop the monitoring session.
- Records end_time
- Sets is_running to False
- Stops the terminal UI
- Generates final summary statistics
"""
with self._lock:
self.end_time = time.time()
self.is_running = False
# Stop the terminal UI
if self.enable_ui and self.terminal_ui:
self.terminal_ui.stop()
def add_task(self, task_id: str, url: str):
"""
Register a new task with the monitor.
Args:
task_id: Unique identifier for the task
url: URL being crawled
The task is initialized with:
- status: QUEUED
- url: The URL to crawl
- enqueue_time: Current time
- memory_usage: 0
- peak_memory: 0
- wait_time: 0
- retry_count: 0
"""
with self._lock:
self.stats[task_id] = {
"task_id": task_id,
"url": url,
"status": CrawlStatus.QUEUED.name,
"enqueue_time": time.time(),
"start_time": None,
"end_time": None,
"memory_usage": 0.0,
"peak_memory": 0.0,
"error_message": "",
"wait_time": 0.0,
"retry_count": 0,
"duration": "0:00",
"counted_requeue": False
}
# Update status counts
self.status_counts[CrawlStatus.QUEUED.name] += 1
def update_task(
self,
task_id: str,
status: Optional[CrawlStatus] = None,
start_time: Optional[float] = None,
end_time: Optional[float] = None,
memory_usage: Optional[float] = None,
peak_memory: Optional[float] = None,
error_message: Optional[str] = None,
retry_count: Optional[int] = None,
wait_time: Optional[float] = None
):
"""
Update statistics for a specific task.
Args:
task_id: Unique identifier for the task
status: New status (QUEUED, IN_PROGRESS, COMPLETED, FAILED)
start_time: When task execution started
end_time: When task execution ended
memory_usage: Current memory usage in MB
peak_memory: Maximum memory usage in MB
error_message: Error description if failed
retry_count: Number of retry attempts
wait_time: Time spent in queue
Updates the task's statistics and keeps status counts in sync:
when the status changes, the old status count is decremented
and the new one incremented.
"""
with self._lock:
# Check if task exists
if task_id not in self.stats:
return
task_stats = self.stats[task_id]
# Update status counts if status is changing
old_status = task_stats["status"]
if status and status.name != old_status:
self.status_counts[old_status] -= 1
self.status_counts[status.name] += 1
# Track completion
if status == CrawlStatus.COMPLETED:
self.urls_completed += 1
# Track requeues
if old_status in [CrawlStatus.COMPLETED.name, CrawlStatus.FAILED.name] and not task_stats.get("counted_requeue", False):
self.requeued_count += 1
task_stats["counted_requeue"] = True
# Update task statistics
if status:
task_stats["status"] = status.name
if start_time is not None:
task_stats["start_time"] = start_time
if end_time is not None:
task_stats["end_time"] = end_time
if memory_usage is not None:
task_stats["memory_usage"] = memory_usage
# Update peak memory if necessary (memory_usage is in MB; virtual_memory().total is in bytes)
current_percent = (memory_usage * 1024 * 1024 / psutil.virtual_memory().total) * 100
if current_percent > self.peak_memory_percent:
self.peak_memory_percent = current_percent
self.peak_memory_time = time.time()
if peak_memory is not None:
task_stats["peak_memory"] = peak_memory
if error_message is not None:
task_stats["error_message"] = error_message
if retry_count is not None:
task_stats["retry_count"] = retry_count
if wait_time is not None:
task_stats["wait_time"] = wait_time
# Calculate duration
if task_stats["start_time"]:
end = task_stats["end_time"] or time.time()
duration = end - task_stats["start_time"]
task_stats["duration"] = self._format_time(duration)
def update_memory_status(self, status: str):
"""
Update the current memory status.
Args:
status: Memory status (NORMAL, PRESSURE, CRITICAL, or custom)
Also updates the UI to reflect the new status.
"""
with self._lock:
self.memory_status = status
def update_queue_statistics(
self,
total_queued: int,
highest_wait_time: float,
avg_wait_time: float
):
"""
Update statistics related to the task queue.
Args:
total_queued: Number of tasks currently in queue
highest_wait_time: Longest wait time of any queued task
avg_wait_time: Average wait time across all queued tasks
"""
with self._lock:
self.queue_stats = {
"total_queued": total_queued,
"highest_wait_time": highest_wait_time,
"avg_wait_time": avg_wait_time
}
def get_task_stats(self, task_id: str) -> Dict:
"""
Get statistics for a specific task.
Args:
task_id: Unique identifier for the task
Returns:
Dictionary containing all task statistics
"""
with self._lock:
return self.stats.get(task_id, {}).copy()
def get_all_task_stats(self) -> Dict[str, Dict]:
"""
Get statistics for all tasks.
Returns:
Dictionary mapping task_ids to their statistics
"""
with self._lock:
return self.stats.copy()
def get_memory_status(self) -> str:
"""
Get the current memory status.
Returns:
Current memory status string
"""
with self._lock:
return self.memory_status
def get_queue_stats(self) -> Dict:
"""
Get current queue statistics.
Returns:
Dictionary with queue statistics including:
- total_queued: Number of tasks in queue
- highest_wait_time: Longest wait time
- avg_wait_time: Average wait time
"""
with self._lock:
return self.queue_stats.copy()
def get_summary(self) -> Dict:
"""
Get a summary of all crawler statistics.
Returns:
Dictionary containing:
- runtime: Total runtime in seconds
- urls_total: Total URLs to process
- urls_completed: Number of completed URLs
- completion_percentage: Percentage complete
- status_counts: Count of tasks in each status
- memory_status: Current memory status
- peak_memory_percent: Highest memory usage
- peak_memory_time: When peak memory occurred
- avg_task_duration: Average task processing time
- estimated_completion_time: Projected finish time
- requeue_rate: Percentage of tasks requeued
"""
with self._lock:
# Calculate runtime
current_time = time.time()
runtime = current_time - (self.start_time or current_time)
# Calculate completion percentage
completion_percentage = 0
if self.urls_total > 0:
completion_percentage = (self.urls_completed / self.urls_total) * 100
# Calculate average task duration for completed tasks
completed_tasks = [
task for task in self.stats.values()
if task["status"] == CrawlStatus.COMPLETED.name and task.get("start_time") and task.get("end_time")
]
avg_task_duration = 0
if completed_tasks:
total_duration = sum(task["end_time"] - task["start_time"] for task in completed_tasks)
avg_task_duration = total_duration / len(completed_tasks)
# Calculate requeue rate
requeue_rate = 0
if len(self.stats) > 0:
requeue_rate = (self.requeued_count / len(self.stats)) * 100
# Calculate estimated completion time
estimated_completion_time = "N/A"
if avg_task_duration > 0 and self.urls_total > 0 and self.urls_completed > 0:
remaining_tasks = self.urls_total - self.urls_completed
estimated_seconds = remaining_tasks * avg_task_duration
estimated_completion_time = self._format_time(estimated_seconds)
return {
"runtime": runtime,
"urls_total": self.urls_total,
"urls_completed": self.urls_completed,
"completion_percentage": completion_percentage,
"status_counts": self.status_counts.copy(),
"memory_status": self.memory_status,
"peak_memory_percent": self.peak_memory_percent,
"peak_memory_time": self.peak_memory_time,
"avg_task_duration": avg_task_duration,
"estimated_completion_time": estimated_completion_time,
"requeue_rate": requeue_rate,
"requeued_count": self.requeued_count
}
def render(self):
"""
Render the terminal UI.
Forces an immediate refresh of the terminal UI. The actual
rendering loop (layout, statistics, keyboard input) is handled
by the TerminalUI class, which uses the rich library's Live
display; this method only triggers an update.
"""
if self.enable_ui and self.terminal_ui:
# Force an update of the UI
if hasattr(self.terminal_ui, '_update_display'):
self.terminal_ui._update_display()
def _format_time(self, seconds: float) -> str:
"""
Format time in hours:minutes:seconds.
Args:
seconds: Time in seconds
Returns:
Formatted time string (e.g., "1:23:45")
"""
# Use the raw second count so durations over 24 hours are not truncated
# (timedelta.seconds wraps at one day)
hours, remainder = divmod(int(seconds), 3600)
minutes, seconds = divmod(remainder, 60)
if hours > 0:
return f"{hours}:{minutes:02}:{seconds:02}"
else:
return f"{minutes}:{seconds:02}"
def _calculate_estimated_completion(self) -> str:
"""
Calculate estimated completion time based on current progress.
Returns:
Formatted time string
"""
summary = self.get_summary()
return summary.get("estimated_completion_time", "N/A")
# Example code for testing
if __name__ == "__main__":
# Initialize the monitor
monitor = CrawlerMonitor(urls_total=100)
# Start monitoring
monitor.start()
try:
# Simulate some tasks
for i in range(20):
task_id = str(uuid.uuid4())
url = f"https://example.com/page{i}"
monitor.add_task(task_id, url)
# Simulate 20% of tasks are already running
if i < 4:
monitor.update_task(
task_id=task_id,
status=CrawlStatus.IN_PROGRESS,
start_time=time.time() - 30, # Started 30 seconds ago
memory_usage=10.5
)
# Simulate 10% of tasks are completed
if i >= 4 and i < 6:
start_time = time.time() - 60
end_time = time.time() - 15
monitor.update_task(
task_id=task_id,
status=CrawlStatus.IN_PROGRESS,
start_time=start_time,
memory_usage=8.2
)
monitor.update_task(
task_id=task_id,
status=CrawlStatus.COMPLETED,
end_time=end_time,
memory_usage=0,
peak_memory=15.7
)
# Simulate 5% of tasks fail
if i >= 6 and i < 7:
start_time = time.time() - 45
end_time = time.time() - 20
monitor.update_task(
task_id=task_id,
status=CrawlStatus.IN_PROGRESS,
start_time=start_time,
memory_usage=12.3
)
monitor.update_task(
task_id=task_id,
status=CrawlStatus.FAILED,
end_time=end_time,
memory_usage=0,
peak_memory=18.2,
error_message="Connection timeout"
)
# Simulate memory pressure
monitor.update_memory_status("PRESSURE")
# Simulate queue statistics
monitor.update_queue_statistics(
total_queued=16, # 20 - 4 (in progress)
highest_wait_time=120.5,
avg_wait_time=60.2
)
# Keep the monitor running for a demonstration
print("Crawler Monitor is running. Press 'q' to exit.")
while monitor.is_running:
time.sleep(0.1)
except KeyboardInterrupt:
print("\nExiting crawler monitor...")
finally:
# Stop the monitor
monitor.stop()
print("Crawler monitor exited successfully.")
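The duration formatting and ETA arithmetic used by `get_summary` can be exercised on their own. Below is a minimal standalone sketch (hypothetical helper names, not part of the module) mirroring `_format_time` and the remaining-tasks-times-average-duration model:

```python
def format_time(seconds: float) -> str:
    # Mirrors CrawlerMonitor._format_time: h:mm:ss, dropping the hour field when zero
    hours, remainder = divmod(int(seconds), 3600)
    minutes, secs = divmod(remainder, 60)
    return f"{hours}:{minutes:02}:{secs:02}" if hours > 0 else f"{minutes}:{secs:02}"


def estimate_completion(urls_total: int, urls_completed: int, avg_task_duration: float) -> str:
    # Same model as get_summary: remaining tasks times the average
    # duration of the tasks that have already completed
    if avg_task_duration <= 0 or urls_total <= 0 or urls_completed <= 0:
        return "N/A"
    remaining = urls_total - urls_completed
    return format_time(remaining * avg_task_duration)
```

Note this is a serial estimate: with N concurrent workers the wall-clock ETA would be roughly this value divided by N.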

View File

@@ -4,7 +4,8 @@ from dotenv import load_dotenv
load_dotenv() # Load environment variables from .env file
# Default provider, ONLY used when the extraction strategy is LLMExtractionStrategy
DEFAULT_PROVIDER = "openai/gpt-4o-mini"
DEFAULT_PROVIDER = "openai/gpt-4o"
DEFAULT_PROVIDER_API_KEY = "OPENAI_API_KEY"
MODEL_REPO_BRANCH = "new-release-0.0.2"
# Provider-model dictionary, ONLY used when the extraction strategy is LLMExtractionStrategy
PROVIDER_MODELS = {
@@ -92,3 +93,46 @@ SHOW_DEPRECATION_WARNINGS = True
SCREENSHOT_HEIGHT_TRESHOLD = 10000
PAGE_TIMEOUT = 60000
DOWNLOAD_PAGE_TIMEOUT = 60000
# Global user settings with descriptions and default values
USER_SETTINGS = {
"DEFAULT_LLM_PROVIDER": {
"default": "openai/gpt-4o",
"description": "Default LLM provider in 'company/model' format (e.g., 'openai/gpt-4o', 'anthropic/claude-3-sonnet')",
"type": "string"
},
"DEFAULT_LLM_PROVIDER_TOKEN": {
"default": "",
"description": "API token for the default LLM provider",
"type": "string",
"secret": True
},
"VERBOSE": {
"default": False,
"description": "Enable verbose output for all commands",
"type": "boolean"
},
"BROWSER_HEADLESS": {
"default": True,
"description": "Run browser in headless mode by default",
"type": "boolean"
},
"BROWSER_TYPE": {
"default": "chromium",
"description": "Default browser type (chromium or firefox)",
"type": "string",
"options": ["chromium", "firefox"]
},
"CACHE_MODE": {
"default": "bypass",
"description": "Default cache mode (bypass, use, or refresh)",
"type": "string",
"options": ["bypass", "use", "refresh"]
},
"USER_AGENT_MODE": {
"default": "default",
"description": "Default user agent mode (default, random, or mobile)",
"type": "string",
"options": ["default", "random", "mobile"]
}
}
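The `USER_SETTINGS` descriptors above carry enough metadata (`default`, `type`, `options`) to validate user input generically. A sketch of one possible resolver, using a trimmed copy of two entries (the helper name is hypothetical, not part of crawl4ai):

```python
# Trimmed copy of two USER_SETTINGS descriptors for illustration
USER_SETTINGS = {
    "BROWSER_TYPE": {"default": "chromium", "type": "string", "options": ["chromium", "firefox"]},
    "VERBOSE": {"default": False, "type": "boolean"},
}


def resolve_setting(key: str, value=None):
    """Return the validated value for a setting, falling back to its default."""
    spec = USER_SETTINGS[key]
    if value is None:
        return spec["default"]
    if spec["type"] == "boolean" and not isinstance(value, bool):
        raise TypeError(f"{key} expects a boolean, got {type(value).__name__}")
    if "options" in spec and value not in spec["options"]:
        raise ValueError(f"{key} must be one of {spec['options']}")
    return value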

View File

@@ -1,6 +1,6 @@
from crawl4ai import BrowserConfig, AsyncWebCrawler, CrawlerRunConfig, CacheMode
from crawl4ai.hub import BaseCrawler
from crawl4ai.utils import optimize_html, get_home_folder
from crawl4ai.utils import optimize_html, get_home_folder, preprocess_html_for_schema
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
from pathlib import Path
import json
@@ -68,7 +68,8 @@ class GoogleSearchCrawler(BaseCrawler):
home_dir = get_home_folder() if not schema_cache_path else schema_cache_path
os.makedirs(f"{home_dir}/schema", exist_ok=True)
cleaned_html = optimize_html(html, threshold=100)
# cleaned_html = optimize_html(html, threshold=100)
cleaned_html = preprocess_html_for_schema(html)
organic_schema = None
if os.path.exists(f"{home_dir}/schema/organic_schema.json"):

View File

@@ -5,7 +5,7 @@ from concurrent.futures import ThreadPoolExecutor, as_completed
import json
import time
from .prompts import PROMPT_EXTRACT_BLOCKS, PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION, PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION, JSON_SCHEMA_BUILDER_XPATH
from .prompts import PROMPT_EXTRACT_BLOCKS, PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION, PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION, JSON_SCHEMA_BUILDER_XPATH, PROMPT_EXTRACT_INFERRED_SCHEMA
from .config import (
DEFAULT_PROVIDER, CHUNK_TOKEN_THRESHOLD,
OVERLAP_RATE,
@@ -34,7 +34,7 @@ from .model_loader import (
calculate_batch_size
)
from .types import LLMConfig
from .types import LLMConfig, create_llm_config
from functools import partial
import numpy as np
@@ -507,6 +507,7 @@ class LLMExtractionStrategy(ExtractionStrategy):
word_token_rate=WORD_TOKEN_RATE,
apply_chunking=True,
input_format: str = "markdown",
force_json_response=False,
verbose=False,
# Deprecated arguments
provider: str = DEFAULT_PROVIDER,
@@ -527,9 +528,10 @@ class LLMExtractionStrategy(ExtractionStrategy):
overlap_rate: Overlap between chunks.
word_token_rate: Word to token conversion rate.
apply_chunking: Whether to apply chunking.
input_format: Content format to use for extraction.
Options: "markdown" (default), "html", "fit_markdown"
force_json_response: Whether to force a JSON response from the LLM.
verbose: Whether to print verbose output.
usages: List of individual token usages.
total_usage: Accumulated token usage.
# Deprecated arguments, will be removed very soon
provider: The provider to use for extraction. It follows the format <provider_name>/<model_name>, e.g., "ollama/llama3.3".
@@ -545,6 +547,7 @@ class LLMExtractionStrategy(ExtractionStrategy):
self.schema = schema
if schema:
self.extract_type = "schema"
self.force_json_response = force_json_response
self.chunk_token_threshold = chunk_token_threshold or CHUNK_TOKEN_THRESHOLD
self.overlap_rate = overlap_rate
self.word_token_rate = word_token_rate
@@ -608,64 +611,97 @@ class LLMExtractionStrategy(ExtractionStrategy):
variable_values["SCHEMA"] = json.dumps(self.schema, indent=2) # if type of self.schema is dict else self.schema
prompt_with_variables = PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION
if self.extract_type == "schema" and not self.schema:
prompt_with_variables = PROMPT_EXTRACT_INFERRED_SCHEMA
for variable in variable_values:
prompt_with_variables = prompt_with_variables.replace(
"{" + variable + "}", variable_values[variable]
)
response = perform_completion_with_backoff(
self.llm_config.provider,
prompt_with_variables,
self.llm_config.api_token,
base_url=self.llm_config.base_url,
extra_args=self.extra_args,
) # , json_response=self.extract_type == "schema")
# Track usage
usage = TokenUsage(
completion_tokens=response.usage.completion_tokens,
prompt_tokens=response.usage.prompt_tokens,
total_tokens=response.usage.total_tokens,
completion_tokens_details=response.usage.completion_tokens_details.__dict__
if response.usage.completion_tokens_details
else {},
prompt_tokens_details=response.usage.prompt_tokens_details.__dict__
if response.usage.prompt_tokens_details
else {},
)
self.usages.append(usage)
# Update totals
self.total_usage.completion_tokens += usage.completion_tokens
self.total_usage.prompt_tokens += usage.prompt_tokens
self.total_usage.total_tokens += usage.total_tokens
try:
blocks = extract_xml_data(["blocks"], response.choices[0].message.content)[
"blocks"
]
blocks = json.loads(blocks)
for block in blocks:
block["error"] = False
except Exception:
parsed, unparsed = split_and_parse_json_objects(
response.choices[0].message.content
response = perform_completion_with_backoff(
self.llm_config.provider,
prompt_with_variables,
self.llm_config.api_token,
base_url=self.llm_config.base_url,
json_response=self.force_json_response,
extra_args=self.extra_args,
) # , json_response=self.extract_type == "schema")
# Track usage
usage = TokenUsage(
completion_tokens=response.usage.completion_tokens,
prompt_tokens=response.usage.prompt_tokens,
total_tokens=response.usage.total_tokens,
completion_tokens_details=response.usage.completion_tokens_details.__dict__
if response.usage.completion_tokens_details
else {},
prompt_tokens_details=response.usage.prompt_tokens_details.__dict__
if response.usage.prompt_tokens_details
else {},
)
blocks = parsed
if unparsed:
blocks.append(
{"index": 0, "error": True, "tags": ["error"], "content": unparsed}
)
self.usages.append(usage)
if self.verbose:
print(
"[LOG] Extracted",
len(blocks),
"blocks from URL:",
url,
"block index:",
ix,
)
return blocks
# Update totals
self.total_usage.completion_tokens += usage.completion_tokens
self.total_usage.prompt_tokens += usage.prompt_tokens
self.total_usage.total_tokens += usage.total_tokens
try:
response = response.choices[0].message.content
blocks = None
if self.force_json_response:
blocks = json.loads(response)
if isinstance(blocks, dict):
# If the dict has a single key whose value is a list, unwrap it, e.g. {"news": [...]}
if len(blocks) == 1 and isinstance(list(blocks.values())[0], list):
blocks = list(blocks.values())[0]
else:
# Otherwise wrap the dict as a single block, e.g. {"article_id": "1234", ...}
blocks = [blocks]
elif isinstance(blocks, list):
# If it is a list then assign that to blocks
blocks = blocks
else:
# blocks = extract_xml_data(["blocks"], response.choices[0].message.content)["blocks"]
blocks = extract_xml_data(["blocks"], response)["blocks"]
blocks = json.loads(blocks)
for block in blocks:
block["error"] = False
except Exception:
parsed, unparsed = split_and_parse_json_objects(
response
)
blocks = parsed
if unparsed:
blocks.append(
{"index": 0, "error": True, "tags": ["error"], "content": unparsed}
)
if self.verbose:
print(
"[LOG] Extracted",
len(blocks),
"blocks from URL:",
url,
"block index:",
ix,
)
return blocks
except Exception as e:
if self.verbose:
print(f"[LOG] Error in LLM extraction: {e}")
# Add error information to extracted_content
return [
{
"index": ix,
"error": True,
"tags": ["error"],
"content": str(e),
}
]
def _merge(self, documents, chunk_token_threshold, overlap) -> List[str]:
"""
@@ -757,8 +793,6 @@ class LLMExtractionStrategy(ExtractionStrategy):
#######################################################
# New extraction strategies for JSON-based extraction #
#######################################################
class JsonElementExtractionStrategy(ExtractionStrategy):
"""
Abstract base class for extracting structured JSON from HTML content.
@@ -1049,7 +1083,7 @@ class JsonElementExtractionStrategy(ExtractionStrategy):
schema_type: str = "CSS", # or XPATH
query: str = None,
target_json_example: str = None,
llm_config: 'LLMConfig' = None,
llm_config: 'LLMConfig' = create_llm_config(),
provider: str = None,
api_token: str = None,
**kwargs
@@ -1081,7 +1115,7 @@ class JsonElementExtractionStrategy(ExtractionStrategy):
# Build the prompt
system_message = {
"role": "system",
"content": f"""You specialize in generating special JSON schemas for web scraping. This schema uses CSS or XPATH selectors to present a repetitive pattern in crawled HTML, such as a product in a product list or a search result item in a list of search results. You use this JSON schema to pass to a language model along with the HTML content to extract structured data from the HTML. The language model uses the JSON schema to extract data from the HTML and retrieve values for fields in the JSON schema, following the schema.
"content": f"""You specialize in generating special JSON schemas for web scraping. This schema uses CSS or XPATH selectors to present a repetitive pattern in crawled HTML, such as a product in a product list or a search result item in a list of search results. We use this JSON schema to pass to a language model along with the HTML content to extract structured data from the HTML. The language model uses the JSON schema to extract data from the HTML and retrieve values for fields in the JSON schema, following the schema.
Generating this HTML manually is not feasible, so you need to generate the JSON schema using the HTML content. The HTML copied from the crawled website is provided below, which we believe contains the repetitive pattern.
@@ -1095,9 +1129,10 @@ Generating this HTML manually is not feasible, so you need to generate the JSON
In this context, the following items may or may not be present:
- Example of target JSON object: This is a sample of the final JSON object that we hope to extract from the HTML using the schema you are generating.
- Extra Instructions: This is optional instructions to consider when generating the schema provided by the user.
- Query or explanation of target/goal data item: This is a description of what data we are trying to extract from the HTML. This explanation means we're not sure about the rigid schema of the structures we want, so we leave it to you to use your expertise to create the best and most comprehensive structures aimed at maximizing data extraction from this page. You must ensure that you do not pick up nuances that may exist on a particular page. The focus should be on the data we are extracting, and it must be valid, safe, and robust based on the given HTML.
# What if there is no example of target JSON object?
In this scenario, use your best judgment to generate the schema. Try to maximize the number of fields that you can extract from the HTML.
# What if there is no example of target JSON object and also no extra instructions or even no explanation of target/goal data item?
In this scenario, use your best judgment to generate the schema. You need to examine the content of the page and understand the data it provides. If the page contains repetitive data, such as lists of items, products, jobs, places, books, or movies, focus on one single item that repeats. If the page is a detailed page about one product or item, create a schema to extract the entire structured data. At this stage, you must think and decide for yourself. Try to maximize the number of fields that you can extract from the HTML.
# What are the instructions and details for this schema generation?
{prompt_template}"""
@@ -1114,11 +1149,18 @@ In this scenario, use your best judgment to generate the schema. Try to maximize
}
if query:
user_message["content"] += f"\n\nImportant Notes to Consider:\n{query}"
user_message["content"] += f"\n\n## Query or explanation of target/goal data item:\n{query}"
if target_json_example:
user_message["content"] += f"\n\nExample of target JSON object:\n{target_json_example}"
user_message["content"] += f"\n\n## Example of target JSON object:\n```json\n{target_json_example}\n```"
if query and not target_json_example:
user_message["content"] += """IMPORTANT: To remind you, in this process we are not providing a rigid example of the objects we seek. We rely on your understanding of the explanation provided in the section above. Make sure to grasp what we are looking for and, based on that, create the best schema."""
elif not query and target_json_example:
user_message["content"] += """IMPORTANT: Please remember that in this process, we provided a proper example of a target JSON object. Make sure to adhere to the structure and create a schema that exactly fits this example. If you find that some elements on the page do not match completely, vote for the majority."""
elif not query and not target_json_example:
user_message["content"] += """IMPORTANT: Since we neither have a query nor an example, it is crucial to rely solely on the HTML content provided. Leverage your expertise to determine the schema based on the repetitive patterns observed in the content."""
user_message["content"] += """IMPORTANT: Ensure your schema is reliable, meaning do not use selectors that seem to generate dynamically and are not reliable. A reliable schema is what you want, as it consistently returns the same data even after many reloads of the page.
user_message["content"] += """IMPORTANT: Ensure your schema remains reliable by avoiding selectors that appear to generate dynamically and are not dependable. You want a reliable schema, as it consistently returns the same data even after many page reloads.
Analyze the HTML and generate a JSON schema that follows the specified format. Only output valid JSON schema, nothing else.
"""
@@ -1140,7 +1182,6 @@ In this scenario, use your best judgment to generate the schema. Try to maximize
except Exception as e:
raise Exception(f"Failed to generate schema: {str(e)}")
class JsonCssExtractionStrategy(JsonElementExtractionStrategy):
"""
Concrete implementation of `JsonElementExtractionStrategy` using CSS selectors.

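The `force_json_response` branch above has to unwrap three response shapes: a bare JSON list, a single-key dict wrapping a list (e.g. `{"news": [...]}`), and a plain dict treated as one block. A self-contained sketch of that normalization (hypothetical function name, not part of the strategy):

```python
import json


def normalize_blocks(raw: str):
    # Bare list: already the expected shape.
    # Single-key dict whose value is a list: unwrap it, e.g. {"news": [...]}.
    # Any other dict: treat as one block.
    data = json.loads(raw)
    if isinstance(data, dict):
        if len(data) == 1 and isinstance(next(iter(data.values())), list):
            return next(iter(data.values()))
        return [data]
    return data
```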
View File

@@ -45,7 +45,35 @@ def post_install():
setup_home_directory()
install_playwright()
run_migration()
# TODO: Will be added in the future
# setup_builtin_browser()
logger.success("Post-installation setup completed!", tag="COMPLETE")
def setup_builtin_browser():
"""Set up a builtin browser for use with Crawl4AI"""
try:
logger.info("Setting up builtin browser...", tag="INIT")
asyncio.run(_setup_builtin_browser())
logger.success("Builtin browser setup completed!", tag="COMPLETE")
except Exception as e:
logger.warning(f"Failed to set up builtin browser: {e}")
logger.warning("You can manually set up a builtin browser using 'crawl4ai-doctor builtin-browser-start'")
async def _setup_builtin_browser():
try:
# Import BrowserProfiler here to avoid circular imports
from .browser_profiler import BrowserProfiler
profiler = BrowserProfiler(logger=logger)
# Launch the builtin browser
cdp_url = await profiler.launch_builtin_browser(headless=True)
if cdp_url:
logger.success(f"Builtin browser launched at {cdp_url}", tag="BROWSER")
else:
logger.warning("Failed to launch builtin browser", tag="BROWSER")
except Exception as e:
logger.warning(f"Error setting up builtin browser: {e}", tag="BROWSER")
raise
def install_playwright():

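The `setup_builtin_browser` hunk above follows a degrade-gracefully pattern: an optional async setup step is run with `asyncio.run`, and any failure is downgraded to a warning so post-install never hard-fails. A minimal sketch of the same shape (the coroutine is a hypothetical stand-in for `BrowserProfiler.launch_builtin_browser`):

```python
import asyncio
from typing import Optional


async def _launch(headless: bool = True) -> Optional[str]:
    # Hypothetical stand-in for BrowserProfiler.launch_builtin_browser;
    # returns a CDP URL on success
    return "ws://127.0.0.1:9222/devtools"


def setup_optional_step() -> Optional[str]:
    # Failures are downgraded to warnings so the optional feature
    # never breaks the overall installation
    try:
        return asyncio.run(_launch())
    except Exception as exc:
        print(f"warning: builtin browser setup skipped: {exc}")
        return None
```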
View File

@@ -1,6 +1,7 @@
from re import U
from pydantic import BaseModel, HttpUrl, PrivateAttr
from pydantic import BaseModel, HttpUrl, PrivateAttr, ConfigDict
from typing import List, Dict, Optional, Callable, Awaitable, Union, Any
from typing import AsyncGenerator
from typing import Generic, TypeVar
from enum import Enum
from dataclasses import dataclass
from .ssl_certificate import SSLCertificate
@@ -28,7 +29,12 @@ class CrawlerTaskResult:
start_time: Union[datetime, float]
end_time: Union[datetime, float]
error_message: str = ""
retry_count: int = 0
wait_time: float = 0.0
@property
def success(self) -> bool:
return self.result.success
class CrawlStatus(Enum):
QUEUED = "QUEUED"
@@ -36,27 +42,6 @@ class CrawlStatus(Enum):
COMPLETED = "COMPLETED"
FAILED = "FAILED"
# @dataclass
# class CrawlStats:
# task_id: str
# url: str
# status: CrawlStatus
# start_time: Optional[datetime] = None
# end_time: Optional[datetime] = None
# memory_usage: float = 0.0
# peak_memory: float = 0.0
# error_message: str = ""
# @property
# def duration(self) -> str:
# if not self.start_time:
# return "0:00"
# end = self.end_time or datetime.now()
# duration = end - self.start_time
# return str(timedelta(seconds=int(duration.total_seconds())))
@dataclass
class CrawlStats:
task_id: str
@@ -67,6 +52,9 @@ class CrawlStats:
memory_usage: float = 0.0
peak_memory: float = 0.0
error_message: str = ""
wait_time: float = 0.0
retry_count: int = 0
counted_requeue: bool = False
@property
def duration(self) -> str:
@@ -103,12 +91,10 @@ class TokenUsage:
completion_tokens_details: Optional[dict] = None
prompt_tokens_details: Optional[dict] = None
class UrlModel(BaseModel):
url: HttpUrl
forced: bool = False
class MarkdownGenerationResult(BaseModel):
raw_markdown: str
markdown_with_citations: str
@@ -160,8 +146,9 @@ class CrawlResult(BaseModel):
dispatch_result: Optional[DispatchResult] = None
redirected_url: Optional[str] = None
class Config:
arbitrary_types_allowed = True
model_config = ConfigDict(arbitrary_types_allowed=True)
# class Config:
# arbitrary_types_allowed = True
# NOTE: The StringCompatibleMarkdown class, custom __init__ method, property getters/setters,
# and model_dump override all exist to support a smooth transition from markdown as a string
@@ -275,6 +262,40 @@ class StringCompatibleMarkdown(str):
def __getattr__(self, name):
return getattr(self._markdown_result, name)
CrawlResultT = TypeVar('CrawlResultT', bound=CrawlResult)
class CrawlResultContainer(Generic[CrawlResultT]):
def __init__(self, results: Union[CrawlResultT, List[CrawlResultT]]):
# Normalize to a list
if isinstance(results, list):
self._results = results
else:
self._results = [results]
def __iter__(self):
return iter(self._results)
def __getitem__(self, index):
return self._results[index]
def __len__(self):
return len(self._results)
def __getattr__(self, attr):
# Delegate attribute access to the first element.
if self._results:
return getattr(self._results[0], attr)
raise AttributeError(f"{self.__class__.__name__} object has no attribute '{attr}'")
def __repr__(self):
return f"{self.__class__.__name__}({self._results!r})"
RunManyReturn = Union[
CrawlResultContainer[CrawlResultT],
AsyncGenerator[CrawlResultT, None]
]
# END of backward compatibility code for markdown/markdown_v2.
# When removing this code in the future, make sure to:
# 1. Replace the private attribute and property with a standard field
@@ -292,9 +313,9 @@ class AsyncCrawlResponse(BaseModel):
ssl_certificate: Optional[SSLCertificate] = None
redirected_url: Optional[str] = None
class Config:
arbitrary_types_allowed = True
model_config = ConfigDict(arbitrary_types_allowed=True)
# class Config:
# arbitrary_types_allowed = True
###############################
# Scraping Models

View File

@@ -0,0 +1,6 @@
"""Pipeline module providing high-level crawling functionality."""
from .pipeline import Pipeline, create_pipeline
from .crawler import Crawler
__all__ = ["Pipeline", "create_pipeline", "Crawler"]

View File

@@ -0,0 +1,406 @@
"""Crawler utility class for simplified crawling operations.
This module provides a high-level utility class for crawling web pages
with support for both single and multiple URL processing.
"""
import asyncio
from typing import Awaitable, Dict, List, Optional, Tuple, Union, Callable
from crawl4ai.models import CrawlResultContainer, CrawlResult
from crawl4ai.pipeline.pipeline import create_pipeline
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.async_logger import AsyncLogger
from crawl4ai.browser.browser_hub import BrowserHub
# Type definitions
UrlList = List[str]
UrlBatch = Tuple[List[str], CrawlerRunConfig]
UrlFullBatch = Tuple[List[str], BrowserConfig, CrawlerRunConfig]
BatchType = Union[UrlList, UrlBatch, UrlFullBatch]
ProgressCallback = Callable[[str, str, Optional[CrawlResultContainer]], Awaitable[None]]
RetryStrategy = Callable[[str, int, Exception], Awaitable[Tuple[bool, float]]]
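A `RetryStrategy` lets callers decide per attempt whether to retry and how long to wait before doing so. Since `parallel_crawl` awaits the strategy, an async callable is assumed in this sketch; the backoff policy below is hypothetical, not part of the library:

```python
import asyncio
from typing import Tuple


async def backoff_retry(url: str, attempt: int, error: Exception) -> Tuple[bool, float]:
    """Retry up to 3 times with exponential backoff; never retry on ValueError."""
    if isinstance(error, ValueError) or attempt > 3:
        return False, 0.0
    # 2s, 4s, 8s ... capped at 30s
    return True, min(2.0 ** attempt, 30.0)


print(asyncio.run(backoff_retry("https://example.com", 1, TimeoutError())))
```

Such a strategy would be passed as `retry_strategy=backoff_retry` to `parallel_crawl`, overriding the flat `retry_delay`.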
class Crawler:
"""High-level utility class for crawling web pages.
This class provides simplified methods for crawling both single URLs
and batches of URLs, with parallel processing capabilities.
"""
@classmethod
async def crawl(
cls,
urls: Union[str, List[str]],
browser_config: Optional[BrowserConfig] = None,
crawler_config: Optional[CrawlerRunConfig] = None,
browser_hub: Optional[BrowserHub] = None,
logger: Optional[AsyncLogger] = None,
max_retries: int = 0,
retry_delay: float = 1.0,
use_new_loop: bool = True # By default use a new loop for safety
) -> Union[CrawlResultContainer, Dict[str, CrawlResultContainer]]:
"""Crawl one or more URLs with the specified configurations.
Args:
urls: Single URL or list of URLs to crawl
browser_config: Optional browser configuration
crawler_config: Optional crawler run configuration
browser_hub: Optional shared browser hub
logger: Optional logger instance
max_retries: Maximum number of retries for failed requests
retry_delay: Delay between retries in seconds
use_new_loop: If True, run the crawl in a fresh event loop and restore the previous one afterwards
Returns:
For a single URL: CrawlResultContainer with crawl results
For multiple URLs: Dict mapping URLs to their CrawlResultContainer results
"""
# Handle single URL case
if isinstance(urls, str):
return await cls._crawl_single_url(
urls,
browser_config,
crawler_config,
browser_hub,
logger,
max_retries,
retry_delay,
use_new_loop
)
# Handle multiple URLs case (sequential processing)
results = {}
for url in urls:
results[url] = await cls._crawl_single_url(
url,
browser_config,
crawler_config,
browser_hub,
logger,
max_retries,
retry_delay,
use_new_loop
)
return results
@classmethod
async def _crawl_single_url(
cls,
url: str,
browser_config: Optional[BrowserConfig] = None,
crawler_config: Optional[CrawlerRunConfig] = None,
browser_hub: Optional[BrowserHub] = None,
logger: Optional[AsyncLogger] = None,
max_retries: int = 0,
retry_delay: float = 1.0,
use_new_loop: bool = False
) -> CrawlResultContainer:
"""Internal method to crawl a single URL with retry logic."""
# Create a logger if none provided
if logger is None:
logger = AsyncLogger(verbose=True)
# Create or use the provided crawler config
if crawler_config is None:
crawler_config = CrawlerRunConfig()
attempts = 0
last_error = None
# For testing purposes, each crawler gets a new event loop to avoid conflicts
# This is especially important in test suites where multiple tests run in sequence
if use_new_loop:
old_loop = asyncio.get_event_loop()
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
while attempts <= max_retries:
try:
# Create a pipeline
pipeline_args = {}
if browser_config:
pipeline_args["browser_config"] = browser_config
if browser_hub:
pipeline_args["browser_hub"] = browser_hub
if logger:
pipeline_args["logger"] = logger
pipeline = await create_pipeline(**pipeline_args)
# Perform the crawl
result = await pipeline.crawl(url=url, config=crawler_config)
# Close the pipeline if we created it (not using a shared hub)
if not browser_hub:
await pipeline.close()
# Restore the original event loop if we created a new one
if use_new_loop:
asyncio.set_event_loop(old_loop)
loop.close()
return result
except Exception as e:
last_error = e
attempts += 1
if attempts <= max_retries:
logger.warning(
message="Crawl attempt {attempt} failed for {url}: {error}. Retrying in {delay}s...",
tag="RETRY",
params={
"attempt": attempts,
"url": url,
"error": str(e),
"delay": retry_delay
}
)
await asyncio.sleep(retry_delay)
else:
logger.error(
message="All {attempts} crawl attempts failed for {url}: {error}",
tag="FAILED",
params={
"attempts": attempts,
"url": url,
"error": str(e)
}
)
# If we get here, all attempts failed
result = CrawlResultContainer(
CrawlResult(
url=url,
html="",
success=False,
error_message=f"All {attempts} crawl attempts failed: {str(last_error)}"
)
)
# Restore the original event loop if we created a new one
if use_new_loop:
asyncio.set_event_loop(old_loop)
loop.close()
return result
@classmethod
async def parallel_crawl(
cls,
url_batches: Union[List[str], List[Union[UrlBatch, UrlFullBatch]]],
browser_config: Optional[BrowserConfig] = None,
crawler_config: Optional[CrawlerRunConfig] = None,
browser_hub: Optional[BrowserHub] = None,
logger: Optional[AsyncLogger] = None,
concurrency: int = 5,
max_retries: int = 0,
retry_delay: float = 1.0,
retry_strategy: Optional[RetryStrategy] = None,
progress_callback: Optional[ProgressCallback] = None,
use_new_loop: bool = True # By default use a new loop for safety
) -> Dict[str, CrawlResultContainer]:
"""Crawl multiple URLs in parallel with concurrency control.
Args:
url_batches: List of URLs or list of URL batches with configurations
browser_config: Default browser configuration (used if not in batch)
crawler_config: Default crawler configuration (used if not in batch)
browser_hub: Optional shared browser hub for resource efficiency
logger: Optional logger instance
concurrency: Maximum number of concurrent crawls
max_retries: Maximum number of retries for failed requests
retry_delay: Delay between retries in seconds
retry_strategy: Optional custom retry strategy function
progress_callback: Optional callback for progress reporting
use_new_loop: If True, run the crawls in a fresh event loop and restore the previous one afterwards
Returns:
Dict mapping URLs to their CrawlResultContainer results
"""
# Create a logger if none provided
if logger is None:
logger = AsyncLogger(verbose=True)
# For testing purposes, each crawler gets a new event loop to avoid conflicts
# This is especially important in test suites where multiple tests run in sequence
if use_new_loop:
old_loop = asyncio.get_event_loop()
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
# Process batches to consistent format
processed_batches = cls._process_url_batches(
url_batches, browser_config, crawler_config
)
# Initialize results dictionary
results = {}
# Create semaphore for concurrency control
semaphore = asyncio.Semaphore(concurrency)
# Create shared browser hub if not provided
shared_hub = browser_hub
if not shared_hub:
shared_hub = await BrowserHub.get_browser_manager(
config=browser_config or BrowserConfig(),
logger=logger,
max_browsers_per_config=concurrency,
max_pages_per_browser=1,
initial_pool_size=min(concurrency, 3) # Start with a reasonable number
)
try:
# Create worker function for each URL
async def process_url(url, b_config, c_config):
async with semaphore:
# Report start if callback provided
if progress_callback:
await progress_callback("started", url, None)
attempts = 0
last_error = None
while attempts <= max_retries:
try:
# Create a pipeline using the shared hub
pipeline = await create_pipeline(
browser_config=b_config,
browser_hub=shared_hub,
logger=logger
)
# Perform the crawl
result = await pipeline.crawl(url=url, config=c_config)
# Report completion if callback provided
if progress_callback:
await progress_callback("completed", url, result)
return url, result
except Exception as e:
last_error = e
attempts += 1
# Determine if we should retry and with what delay
should_retry = attempts <= max_retries
delay = retry_delay
# Use custom retry strategy if provided
if retry_strategy and should_retry:
try:
should_retry, delay = await retry_strategy(url, attempts, e)
except Exception as strategy_error:
logger.error(
message="Error in retry strategy: {error}",
tag="RETRY",
params={"error": str(strategy_error)}
)
if should_retry:
logger.warning(
message="Crawl attempt {attempt} failed for {url}: {error}. Retrying in {delay}s...",
tag="RETRY",
params={
"attempt": attempts,
"url": url,
"error": str(e),
"delay": delay
}
)
await asyncio.sleep(delay)
else:
logger.error(
message="All {attempts} crawl attempts failed for {url}: {error}",
tag="FAILED",
params={
"attempts": attempts,
"url": url,
"error": str(e)
}
)
break
# If we get here, all attempts failed
error_result = CrawlResultContainer(
CrawlResult(
url=url,
html="",
success=False,
error_message=f"All {attempts} crawl attempts failed: {str(last_error)}"
)
)
# Report completion with error if callback provided
if progress_callback:
await progress_callback("completed", url, error_result)
return url, error_result
# Create tasks for all URLs
tasks = []
for urls, b_config, c_config in processed_batches:
for url in urls:
tasks.append(process_url(url, b_config, c_config))
# Run all tasks and collect results
for completed_task in asyncio.as_completed(tasks):
url, result = await completed_task
results[url] = result
return results
finally:
# Clean up the hub only if we created it
if not browser_hub and shared_hub:
await shared_hub.close()
# Restore the original event loop if we created a new one
if use_new_loop:
asyncio.set_event_loop(old_loop)
loop.close()
@classmethod
def _process_url_batches(
cls,
url_batches: Union[List[str], List[Union[UrlBatch, UrlFullBatch]]],
default_browser_config: Optional[BrowserConfig],
default_crawler_config: Optional[CrawlerRunConfig]
) -> List[Tuple[List[str], BrowserConfig, CrawlerRunConfig]]:
"""Process URL batches into a consistent format.
Converts various input formats into a consistent list of
(urls, browser_config, crawler_config) tuples.
"""
processed_batches = []
# Handle case where input is just a list of URLs
if all(isinstance(item, str) for item in url_batches):
urls = url_batches
browser_config = default_browser_config or BrowserConfig()
crawler_config = default_crawler_config or CrawlerRunConfig()
processed_batches.append((urls, browser_config, crawler_config))
return processed_batches
# Process each batch
for batch in url_batches:
# Handle case: (urls, crawler_config)
if len(batch) == 2 and isinstance(batch[1], CrawlerRunConfig):
urls, c_config = batch
b_config = default_browser_config or BrowserConfig()
processed_batches.append((urls, b_config, c_config))
# Handle case: (urls, browser_config, crawler_config)
elif len(batch) == 3 and isinstance(batch[1], BrowserConfig) and isinstance(batch[2], CrawlerRunConfig):
processed_batches.append(batch)
# Fallback for unknown formats - assume it's just a list of URLs
else:
urls = batch
browser_config = default_browser_config or BrowserConfig()
crawler_config = default_crawler_config or CrawlerRunConfig()
processed_batches.append((urls, browser_config, crawler_config))
return processed_batches
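`parallel_crawl` bounds concurrency with a semaphore and collects results as tasks finish rather than in submission order. The core pattern, reduced to a runnable sketch with a stub standing in for the real `pipeline.crawl` call:

```python
import asyncio


async def crawl_all(urls, concurrency=3):
    """Semaphore-bounded fan-out, mirroring parallel_crawl's structure."""
    sem = asyncio.Semaphore(concurrency)
    results = {}

    async def worker(url):
        async with sem:            # at most `concurrency` workers in flight
            await asyncio.sleep(0)  # stand-in for pipeline.crawl(url, config)
            return url, f"html-from-{url}"

    # Collect in completion order, like asyncio.as_completed in parallel_crawl
    for task in asyncio.as_completed([worker(u) for u in urls]):
        url, html = await task
        results[url] = html
    return results


out = asyncio.run(crawl_all([f"https://example.com/{i}" for i in range(5)]))
print(len(out))
```

The dict keyed by URL is what makes completion-order collection safe: no matter which crawl finishes first, each result lands under its own URL.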

View File

@@ -0,0 +1,702 @@
import time
import sys
from typing import Dict, Any, List
import json
from crawl4ai.models import (
CrawlResult,
MarkdownGenerationResult,
ScrapingResult,
CrawlResultContainer,
)
from crawl4ai.async_database import async_db_manager
from crawl4ai.cache_context import CacheMode, CacheContext
from crawl4ai.utils import (
sanitize_input_encode,
InvalidCSSSelectorError,
fast_format_html,
create_box_message,
get_error_context,
)
async def initialize_context_middleware(context: Dict[str, Any]) -> int:
"""Initialize the context with basic configuration and validation"""
url = context.get("url")
config = context.get("config")
if not isinstance(url, str) or not url:
context["error_message"] = "Invalid URL, make sure the URL is a non-empty string"
return 0
# Default to ENABLED if no cache mode specified
if config.cache_mode is None:
config.cache_mode = CacheMode.ENABLED
# Create cache context
context["cache_context"] = CacheContext(url, config.cache_mode, False)
context["start_time"] = time.perf_counter()
return 1
# middlewares.py additions
async def browser_hub_middleware(context: Dict[str, Any]) -> int:
"""
Initialize or connect to a Browser-Hub and add it to the pipeline context.
This middleware handles browser hub initialization for all three scenarios:
1. Default configuration when nothing is specified
2. Custom configuration when browser_config is provided
3. Connection to existing hub when browser_hub_connection is provided
Args:
context: The pipeline context dictionary
Returns:
int: 1 for success, 0 for failure
"""
from crawl4ai.browser.browser_hub import BrowserHub
try:
# Get configuration from context
browser_config = context.get("browser_config")
browser_hub_id = context.get("browser_hub_id")
browser_hub_connection = context.get("browser_hub_connection")
logger = context.get("logger")
# If we already have a browser hub in context, use it
if context.get("browser_hub"):
return 1
# Get or create Browser-Hub
browser_hub = await BrowserHub.get_browser_manager(
config=browser_config,
hub_id=browser_hub_id,
connection_info=browser_hub_connection,
logger=logger
)
# Add to context
context["browser_hub"] = browser_hub
return 1
except Exception as e:
context["error_message"] = f"Failed to initialize browser hub: {str(e)}"
return 0
async def fetch_content_middleware(context: Dict[str, Any]) -> int:
"""
Fetch content from the web using the browser hub.
This middleware uses the browser hub to get pages for crawling,
and properly releases them back to the pool when done.
Args:
context: The pipeline context dictionary
Returns:
int: 1 for success, 0 for failure
"""
url = context.get("url")
config = context.get("config")
browser_hub = context.get("browser_hub")
logger = context.get("logger")
# Skip if using cached result
if context.get("cached_result") and context.get("html"):
return 1
try:
# Create crawler strategy without initializing its browser manager
from crawl4ai.async_crawler_strategy import AsyncPlaywrightCrawlerStrategy
crawler_strategy = AsyncPlaywrightCrawlerStrategy(
browser_config=browser_hub.config if browser_hub else None,
logger=logger
)
# Replace the browser manager with our shared instance
crawler_strategy.browser_manager = browser_hub
# Perform crawl without trying to initialize the browser
# The crawler will use the provided browser_manager to get pages
async_response = await crawler_strategy.crawl(url, config=config)
# Store results in context
context["html"] = async_response.html
context["screenshot_data"] = async_response.screenshot
context["pdf_data"] = async_response.pdf_data
context["js_execution_result"] = async_response.js_execution_result
context["async_response"] = async_response
return 1
except Exception as e:
context["error_message"] = f"Error fetching content: {str(e)}"
return 0
async def check_cache_middleware(context: Dict[str, Any]) -> int:
"""Check if there's a cached result and load it if available"""
url = context.get("url")
config = context.get("config")
cache_context = context.get("cache_context")
logger = context.get("logger")
# Initialize variables
context["cached_result"] = None
context["html"] = None
context["extracted_content"] = None
context["screenshot_data"] = None
context["pdf_data"] = None
# Try to get cached result if appropriate
if cache_context.should_read():
cached_result = await async_db_manager.aget_cached_url(url)
context["cached_result"] = cached_result
if cached_result:
html = sanitize_input_encode(cached_result.html)
extracted_content = sanitize_input_encode(cached_result.extracted_content or "")
extracted_content = None if not extracted_content or extracted_content == "[]" else extracted_content
# If a screenshot (or PDF) is requested but it's not in the cache, set cached_result to None
screenshot_data = cached_result.screenshot
pdf_data = cached_result.pdf
if config.screenshot and not screenshot_data:
context["cached_result"] = None
if config.pdf and not pdf_data:
context["cached_result"] = None
context["html"] = html
context["extracted_content"] = extracted_content
context["screenshot_data"] = screenshot_data
context["pdf_data"] = pdf_data
logger.url_status(
url=cache_context.display_url,
success=bool(html),
timing=time.perf_counter() - context["start_time"],
tag="FETCH",
)
return 1
async def configure_proxy_middleware(context: Dict[str, Any]) -> int:
"""Configure proxy if a proxy rotation strategy is available"""
config = context.get("config")
logger = context.get("logger")
# Skip if using cached result
if context.get("cached_result") and context.get("html"):
return 1
# Update proxy configuration from rotation strategy if available
if config and config.proxy_rotation_strategy:
next_proxy = await config.proxy_rotation_strategy.get_next_proxy()
if next_proxy:
logger.info(
message="Switch proxy: {proxy}",
tag="PROXY",
params={"proxy": next_proxy.server},
)
config.proxy_config = next_proxy
return 1
async def check_robots_txt_middleware(context: Dict[str, Any]) -> int:
"""Check if the URL is allowed by robots.txt if enabled"""
url = context.get("url")
config = context.get("config")
browser_config = context.get("browser_config")
robots_parser = context.get("robots_parser")
# Skip if using cached result
if context.get("cached_result") and context.get("html"):
return 1
# Check robots.txt if enabled
if config and config.check_robots_txt:
if not await robots_parser.can_fetch(url, browser_config.user_agent):
context["crawl_result"] = CrawlResult(
url=url,
html="",
success=False,
status_code=403,
error_message="Access denied by robots.txt",
response_headers={"X-Robots-Status": "Blocked by robots.txt"}
)
return 0
return 1
async def fetch_content_middleware_(context: Dict[str, Any]) -> int:
"""Fetch content from the web using the crawler strategy"""
url = context.get("url")
config = context.get("config")
crawler_strategy = context.get("crawler_strategy")
logger = context.get("logger")
# Skip if using cached result
if context.get("cached_result") and context.get("html"):
return 1
try:
t1 = time.perf_counter()
if config.user_agent:
crawler_strategy.update_user_agent(config.user_agent)
# Call CrawlerStrategy.crawl
async_response = await crawler_strategy.crawl(url, config=config)
html = sanitize_input_encode(async_response.html)
screenshot_data = async_response.screenshot
pdf_data = async_response.pdf_data
js_execution_result = async_response.js_execution_result
t2 = time.perf_counter()
logger.url_status(
url=context["cache_context"].display_url,
success=bool(html),
timing=t2 - t1,
tag="FETCH",
)
context["html"] = html
context["screenshot_data"] = screenshot_data
context["pdf_data"] = pdf_data
context["js_execution_result"] = js_execution_result
context["async_response"] = async_response
return 1
except Exception as e:
context["error_message"] = f"Error fetching content: {str(e)}"
return 0
async def scrape_content_middleware(context: Dict[str, Any]) -> int:
"""Apply scraping strategy to extract content"""
url = context.get("url")
html = context.get("html")
config = context.get("config")
extracted_content = context.get("extracted_content")
logger = context.get("logger")
# Skip if already have a crawl result
if context.get("crawl_result"):
return 1
try:
_url = url if not context.get("is_raw_html", False) else "Raw HTML"
t1 = time.perf_counter()
# Get scraping strategy and ensure it has a logger
scraping_strategy = config.scraping_strategy
if not scraping_strategy.logger:
scraping_strategy.logger = logger
# Process HTML content
params = config.__dict__.copy()
params.pop("url", None)
# Add keys from kwargs to params that don't exist in params
kwargs = context.get("kwargs", {})
params.update({k: v for k, v in kwargs.items() if k not in params.keys()})
# Scraping Strategy Execution
result: ScrapingResult = scraping_strategy.scrap(url, html, **params)
if result is None:
raise ValueError(f"Process HTML, Failed to extract content from the website: {url}")
# Extract results - handle both dict and ScrapingResult
if isinstance(result, dict):
cleaned_html = sanitize_input_encode(result.get("cleaned_html", ""))
media = result.get("media", {})
links = result.get("links", {})
metadata = result.get("metadata", {})
else:
cleaned_html = sanitize_input_encode(result.cleaned_html)
media = result.media.model_dump()
links = result.links.model_dump()
metadata = result.metadata
context["cleaned_html"] = cleaned_html
context["media"] = media
context["links"] = links
context["metadata"] = metadata
# Log processing completion
logger.info(
message="{url:.50}... | Time: {timing}s",
tag="SCRAPE",
params={
"url": _url,
"timing": int((time.perf_counter() - t1) * 1000) / 1000,
},
)
return 1
except InvalidCSSSelectorError as e:
context["error_message"] = str(e)
return 0
except Exception as e:
context["error_message"] = f"Process HTML, Failed to extract content from the website: {url}, error: {str(e)}"
return 0
async def generate_markdown_middleware(context: Dict[str, Any]) -> int:
"""Generate markdown from cleaned HTML"""
url = context.get("url")
cleaned_html = context.get("cleaned_html")
config = context.get("config")
# Skip if already have a crawl result
if context.get("crawl_result"):
return 1
# Generate Markdown
markdown_generator = config.markdown_generator
markdown_result: MarkdownGenerationResult = markdown_generator.generate_markdown(
cleaned_html=cleaned_html,
base_url=url,
)
context["markdown_result"] = markdown_result
return 1
async def extract_structured_content_middleware(context: Dict[str, Any]) -> int:
"""Extract structured content using extraction strategy"""
url = context.get("url")
extracted_content = context.get("extracted_content")
config = context.get("config")
markdown_result = context.get("markdown_result")
cleaned_html = context.get("cleaned_html")
logger = context.get("logger")
# Skip if already have a crawl result or extracted content
if context.get("crawl_result") or bool(extracted_content):
return 1
from crawl4ai.chunking_strategy import IdentityChunking
from crawl4ai.extraction_strategy import NoExtractionStrategy
if config.extraction_strategy and not isinstance(config.extraction_strategy, NoExtractionStrategy):
t1 = time.perf_counter()
_url = url if not context.get("is_raw_html", False) else "Raw HTML"
# Choose content based on input_format
content_format = config.extraction_strategy.input_format
if content_format == "fit_markdown" and not markdown_result.fit_markdown:
logger.warning(
message="Fit markdown requested but not available. Falling back to raw markdown.",
tag="EXTRACT",
params={"url": _url},
)
content_format = "markdown"
content = {
"markdown": markdown_result.raw_markdown,
"html": context.get("html"),
"cleaned_html": cleaned_html,
"fit_markdown": markdown_result.fit_markdown,
}.get(content_format, markdown_result.raw_markdown)
# Use IdentityChunking for HTML input, otherwise use provided chunking strategy
chunking = (
IdentityChunking()
if content_format in ["html", "cleaned_html"]
else config.chunking_strategy
)
sections = chunking.chunk(content)
extracted_content = config.extraction_strategy.run(url, sections)
extracted_content = json.dumps(
extracted_content, indent=4, default=str, ensure_ascii=False
)
context["extracted_content"] = extracted_content
# Log extraction completion
logger.info(
message="Completed for {url:.50}... | Time: {timing}s",
tag="EXTRACT",
params={"url": _url, "timing": time.perf_counter() - t1},
)
return 1
async def format_html_middleware(context: Dict[str, Any]) -> int:
"""Format HTML if prettify is enabled"""
config = context.get("config")
cleaned_html = context.get("cleaned_html")
# Skip if already have a crawl result
if context.get("crawl_result"):
return 1
# Apply HTML formatting if requested
if config.prettiify and cleaned_html:
context["cleaned_html"] = fast_format_html(cleaned_html)
return 1
async def write_cache_middleware(context: Dict[str, Any]) -> int:
"""Write result to cache if appropriate"""
cache_context = context.get("cache_context")
cached_result = context.get("cached_result")
# Skip if already have a crawl result or not using cache
if context.get("crawl_result") or not cache_context.should_write() or bool(cached_result):
return 1
# We'll create the CrawlResult in build_result_middleware and cache it there
# to avoid creating it twice
return 1
async def build_result_middleware(context: Dict[str, Any]) -> int:
"""Build the final CrawlResult object"""
url = context.get("url")
html = context.get("html", "")
cache_context = context.get("cache_context")
cached_result = context.get("cached_result")
config = context.get("config")
logger = context.get("logger")
# If we already have a crawl result (from an earlier middleware like robots.txt check)
if context.get("crawl_result"):
result = context["crawl_result"]
context["final_result"] = CrawlResultContainer(result)
return 1
# If we have a cached result
if cached_result and html:
logger.success(
message="{url:.50}... | Status: {status} | Total: {timing}",
tag="COMPLETE",
params={
"url": cache_context.display_url,
"status": True,
"timing": f"{time.perf_counter() - context['start_time']:.2f}s",
},
colors={"status": "green", "timing": "yellow"},
)
cached_result.success = bool(html)
cached_result.session_id = getattr(config, "session_id", None)
cached_result.redirected_url = cached_result.redirected_url or url
context["final_result"] = CrawlResultContainer(cached_result)
return 1
# Build a new result
try:
# Get all necessary components from context
cleaned_html = context.get("cleaned_html", "")
markdown_result = context.get("markdown_result")
media = context.get("media", {})
links = context.get("links", {})
metadata = context.get("metadata", {})
screenshot_data = context.get("screenshot_data")
pdf_data = context.get("pdf_data")
extracted_content = context.get("extracted_content")
async_response = context.get("async_response")
# Create the CrawlResult
crawl_result = CrawlResult(
url=url,
html=html,
cleaned_html=cleaned_html,
markdown=markdown_result,
media=media,
links=links,
metadata=metadata,
screenshot=screenshot_data,
pdf=pdf_data,
extracted_content=extracted_content,
success=bool(html),
error_message="",
)
# Add response details if available
if async_response:
crawl_result.status_code = async_response.status_code
crawl_result.redirected_url = async_response.redirected_url or url
crawl_result.response_headers = async_response.response_headers
crawl_result.downloaded_files = async_response.downloaded_files
crawl_result.js_execution_result = context.get("js_execution_result")
crawl_result.ssl_certificate = async_response.ssl_certificate
crawl_result.session_id = getattr(config, "session_id", None)
# Log completion
logger.success(
message="{url:.50}... | Status: {status} | Total: {timing}",
tag="COMPLETE",
params={
"url": cache_context.display_url,
"status": crawl_result.success,
"timing": f"{time.perf_counter() - context['start_time']:.2f}s",
},
colors={
"status": "green" if crawl_result.success else "red",
"timing": "yellow",
},
)
# Update cache if appropriate
if cache_context.should_write() and not bool(cached_result):
await async_db_manager.acache_url(crawl_result)
context["final_result"] = CrawlResultContainer(crawl_result)
return 1
except Exception as e:
error_context = get_error_context(sys.exc_info())
error_message = (
f"Unexpected error in build_result at line {error_context['line_no']} "
f"in {error_context['function']} ({error_context['filename']}):\n"
f"Error: {str(e)}\n\n"
f"Code context:\n{error_context['code_context']}"
)
logger.error_status(
url=url,
error=create_box_message(error_message, type="error"),
tag="ERROR",
)
context["final_result"] = CrawlResultContainer(
CrawlResult(
url=url, html="", success=False, error_message=error_message
)
)
return 1
async def handle_error_middleware(context: Dict[str, Any]) -> Dict[str, Any]:
"""Error handler middleware"""
url = context.get("url", "")
error_message = context.get("error_message", "Unknown error")
logger = context.get("logger")
# Log the error
if logger:
logger.error_status(
url=url,
error=create_box_message(error_message, type="error"),
tag="ERROR",
)
# Create a failure result
context["final_result"] = CrawlResultContainer(
CrawlResult(
url=url, html="", success=False, error_message=error_message
)
)
return context
# Custom middlewares as requested
async def sentiment_analysis_middleware(context: Dict[str, Any]) -> int:
"""Analyze sentiment of generated markdown using TextBlob"""
from textblob import TextBlob
markdown_result = context.get("markdown_result")
# Skip if no markdown or already failed
if not markdown_result or not context.get("success", True):
return 1
try:
# Get raw markdown text
raw_markdown = markdown_result.raw_markdown
# Analyze sentiment
blob = TextBlob(raw_markdown)
sentiment = blob.sentiment
# Add sentiment to context
context["sentiment_analysis"] = {
"polarity": sentiment.polarity, # -1.0 to 1.0 (negative to positive)
"subjectivity": sentiment.subjectivity, # 0.0 to 1.0 (objective to subjective)
"classification": "positive" if sentiment.polarity > 0.1 else
"negative" if sentiment.polarity < -0.1 else "neutral"
}
return 1
except Exception as e:
# Don't fail the pipeline on sentiment analysis failure
context["sentiment_analysis_error"] = str(e)
return 1
async def log_timing_middleware(context: Dict[str, Any], name: str) -> int:
"""Log timing information for a specific point in the pipeline"""
context[f"_timing_mark_{name}"] = time.perf_counter()
# Calculate duration if we have a start time
start_key = f"_timing_start_{name}"
if start_key in context:
duration = context[f"_timing_mark_{name}"] - context[start_key]
context[f"_timing_duration_{name}"] = duration
# Log the timing if we have a logger
logger = context.get("logger")
if logger:
logger.info(
message="{name} completed in {duration:.2f}s",
tag="TIMING",
params={"name": name, "duration": duration},
)
return 1
async def validate_url_middleware(context: Dict[str, Any], patterns: List[str]) -> int:
"""Validate URL against glob patterns"""
import fnmatch
url = context.get("url", "")
# If no patterns provided, allow all
if not patterns:
return 1
# Check if URL matches any of the allowed patterns
for pattern in patterns:
if fnmatch.fnmatch(url, pattern):
return 1
# If we get here, URL didn't match any patterns
context["error_message"] = f"URL '{url}' does not match any allowed patterns"
return 0
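The glob check above can be exercised standalone; a minimal sketch of the same `fnmatch`-based validation, with an empty pattern list allowing everything:

```python
import fnmatch


def url_allowed(url: str, patterns: list) -> bool:
    """Mirror of validate_url_middleware's pattern check."""
    return not patterns or any(fnmatch.fnmatch(url, p) for p in patterns)


print(url_allowed("https://docs.example.com/api/v1", ["https://docs.example.com/*"]))
```

Note that `fnmatch` matches shell-style globs, not regexes, so `*` crosses `/` boundaries; a pattern like `https://docs.example.com/*` matches arbitrarily deep paths.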
# Update the default middleware list function
def create_default_middleware_list():
"""Return the default list of middleware functions for the pipeline."""
return [
initialize_context_middleware,
check_cache_middleware,
browser_hub_middleware, # Add browser hub middleware before fetch_content
configure_proxy_middleware,
check_robots_txt_middleware,
fetch_content_middleware,
scrape_content_middleware,
generate_markdown_middleware,
extract_structured_content_middleware,
format_html_middleware,
build_result_middleware
]
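The middleware contract used throughout this file (a shared context dict in, `1` to continue, `0` to stop and hand the context to the error handler) can be exercised with a minimal runner. This is a sketch with toy middlewares, not the real `Pipeline` class:

```python
import asyncio


async def run_pipeline(middleware, context, error_handler=None):
    """Run middlewares in order; stop at the first 0 return."""
    for mw in middleware:
        if await mw(context) == 0:
            if error_handler:
                await error_handler(context)
            break
    return context


async def add_one(ctx):
    ctx["n"] = ctx.get("n", 0) + 1
    return 1


async def fail(ctx):
    ctx["error_message"] = "boom"
    return 0


async def on_error(ctx):
    # Mirrors handle_error_middleware: turn error_message into a final result
    ctx["final_result"] = f"failed: {ctx['error_message']}"
    return ctx


ctx = asyncio.run(run_pipeline([add_one, add_one, fail, add_one], {}, on_error))
print(ctx["n"], ctx["final_result"])
```

The fourth middleware never runs: the failing one short-circuits the chain, which is why each middleware above guards with checks like `if context.get("crawl_result"): return 1` to skip work rather than abort it.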

View File

@@ -0,0 +1,297 @@
import time
import asyncio
from typing import Callable, Dict, List, Any, Optional, Awaitable, Union, TypedDict, Tuple, Coroutine
from .middlewares import create_default_middleware_list, handle_error_middleware
from crawl4ai.models import CrawlResultContainer, CrawlResult
from crawl4ai.async_crawler_strategy import AsyncCrawlerStrategy, AsyncPlaywrightCrawlerStrategy
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.async_logger import AsyncLogger
class CrawlSpec(TypedDict, total=False):
"""Specification for a single crawl operation in batch_crawl."""
url: str
config: Optional[CrawlerRunConfig]
browser_config: Optional[BrowserConfig]
class BatchStatus(TypedDict, total=False):
"""Status information for batch crawl operations."""
total: int
processed: int
succeeded: int
failed: int
in_progress: int
duration: float
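Both TypedDicts use `total=False`, so callers may supply only the keys they need; a minimal sketch of what that buys (the stand-in value types are illustrative, not the real config classes):

```python
from typing import Optional, TypedDict

class CrawlSpec(TypedDict, total=False):
    url: str
    config: Optional[str]          # stand-in for CrawlerRunConfig in this sketch
    browser_config: Optional[str]  # stand-in for BrowserConfig in this sketch

# total=False makes every key optional, so a partial spec type-checks
spec: CrawlSpec = {"url": "https://example.com"}
print(spec.get("config"))  # absent keys read back as None via .get
```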
class Pipeline:
"""
A pipeline processor that executes a series of async middleware functions.
Each middleware function receives a context dictionary, updates it,
and returns 1 for success or 0 for failure.
"""
def __init__(
self,
middleware: List[Callable[[Dict[str, Any]], Awaitable[int]]] = None,
error_handler: Optional[Callable[[Dict[str, Any]], Awaitable[Dict[str, Any]]]] = None,
after_middleware_callback: Optional[Callable[[str, Dict[str, Any]], Awaitable[None]]] = None,
crawler_strategy: Optional[AsyncCrawlerStrategy] = None,
browser_config: Optional[BrowserConfig] = None,
logger: Optional[AsyncLogger] = None,
_initial_context: Optional[Dict[str, Any]] = None
):
self.middleware = middleware or create_default_middleware_list()
self.error_handler = error_handler or handle_error_middleware
self.after_middleware_callback = after_middleware_callback
self.browser_config = browser_config or BrowserConfig()
self.logger = logger or AsyncLogger(verbose=self.browser_config.verbose)
self.crawler_strategy = crawler_strategy or AsyncPlaywrightCrawlerStrategy(
browser_config=self.browser_config,
logger=self.logger
)
self._initial_context = _initial_context
self._strategy_initialized = False
async def _initialize_strategy_legacy(self):
    """Legacy initializer kept for reference; superseded by _initialize_strategy below"""
if not self.crawler_strategy:
self.crawler_strategy = AsyncPlaywrightCrawlerStrategy(
browser_config=self.browser_config,
logger=self.logger
)
if not self._strategy_initialized:
await self.crawler_strategy.__aenter__()
self._strategy_initialized = True
async def _initialize_strategy(self):
"""Initialize the crawler strategy if not already initialized"""
# With our new approach, we don't need to create the crawler strategy here
# as it will be created on-demand in fetch_content_middleware
# Just ensure browser hub is available if needed
if self._initial_context is not None and "browser_hub" not in self._initial_context:
# If a browser_config was provided but no browser_hub yet,
# we'll let the browser_hub_middleware handle creating it
pass
# Mark as initialized to prevent repeated initialization attempts
self._strategy_initialized = True
async def start(self):
    """Start the crawler strategy and prepare it for use"""
    if not self._strategy_initialized:
        await self._initialize_strategy()
    if self.crawler_strategy:
        await self.crawler_strategy.__aenter__()
        self._strategy_initialized = True
    else:
        raise ValueError("Crawler strategy is not initialized.")
async def close(self):
"""Close the crawler strategy and clean up resources"""
await self.stop()
async def stop(self):
"""Close the crawler strategy and clean up resources"""
if self._strategy_initialized and self.crawler_strategy:
await self.crawler_strategy.__aexit__(None, None, None)
self._strategy_initialized = False
async def __aenter__(self):
await self.start()
return self
async def __aexit__(self, exc_type, exc_val, exc_tb):
await self.close()
async def crawl(self, url: str, config: Optional[CrawlerRunConfig] = None, **kwargs) -> CrawlResultContainer:
"""
Crawl a URL and process it through the pipeline.
Args:
url: The URL to crawl
config: Optional configuration for the crawl
**kwargs: Additional arguments to pass to the middleware
Returns:
CrawlResultContainer: The result of the crawl
"""
# Initialize strategy if needed
await self._initialize_strategy()
# Create the initial context
context = {
"url": url,
"config": config or CrawlerRunConfig(),
"browser_config": self.browser_config,
"logger": self.logger,
"crawler_strategy": self.crawler_strategy,
"kwargs": kwargs
}
# Process the pipeline
result_context = await self.process(context)
# Return the final result
return result_context.get("final_result")
async def process(self, initial_context: Dict[str, Any] = None) -> Dict[str, Any]:
"""
Process all middleware functions with the given context.
Args:
initial_context: Initial context dictionary, defaults to empty dict
Returns:
Updated context dictionary after all middleware have been processed
"""
context = {**(self._initial_context or {})}
if initial_context:
context.update(initial_context)
# Record pipeline start time
context["_pipeline_start_time"] = time.perf_counter()
for middleware_fn in self.middleware:
# Get middleware name for logging
middleware_name = getattr(middleware_fn, '__name__', str(middleware_fn))
# Record start time for this middleware
start_time = time.perf_counter()
context[f"_timing_start_{middleware_name}"] = start_time
try:
# Execute middleware (all middleware functions are async)
result = await middleware_fn(context)
# Record completion time
end_time = time.perf_counter()
context[f"_timing_end_{middleware_name}"] = end_time
context[f"_timing_duration_{middleware_name}"] = end_time - start_time
# Execute after-middleware callback if provided
if self.after_middleware_callback:
await self.after_middleware_callback(middleware_name, context)
# Convert boolean returns to int (True->1, False->0)
if isinstance(result, bool):
result = 1 if result else 0
# Handle failure
if result == 0:
if self.error_handler:
context["_error_in"] = middleware_name
context["_error_at"] = time.perf_counter()
return await self._handle_error(context)
else:
context["success"] = False
context["error_message"] = f"Pipeline failed at {middleware_name}"
break
except Exception as e:
# Record error information
context["_error_in"] = middleware_name
context["_error_at"] = time.perf_counter()
context["_exception"] = e
context["success"] = False
context["error_message"] = f"Exception in {middleware_name}: {str(e)}"
# Call error handler if available
if self.error_handler:
return await self._handle_error(context)
break
# Record pipeline completion time
pipeline_end_time = time.perf_counter()
context["_pipeline_end_time"] = pipeline_end_time
context["_pipeline_duration"] = pipeline_end_time - context["_pipeline_start_time"]
# Set success to True if not already set (no failures)
if "success" not in context:
context["success"] = True
return context
async def _handle_error(self, context: Dict[str, Any]) -> Dict[str, Any]:
"""Handle errors by calling the error handler"""
try:
return await self.error_handler(context)
except Exception as e:
# If error handler fails, update context with this new error
context["_error_handler_exception"] = e
context["error_message"] = f"Error handler failed: {str(e)}"
return context
async def create_pipeline(
middleware_list=None,
error_handler=None,
after_middleware_callback=None,
browser_config=None,
browser_hub_id=None,
browser_hub_connection=None,
browser_hub=None,
logger=None
) -> Pipeline:
"""
Factory function to create a pipeline with Browser-Hub integration.
Args:
middleware_list: List of middleware functions
error_handler: Error handler middleware
after_middleware_callback: Callback after middleware execution
browser_config: Configuration for the browser
browser_hub_id: ID for browser hub instance
browser_hub_connection: Connection string for existing browser hub
browser_hub: Existing browser hub instance to use
logger: Logger instance
Returns:
Pipeline: Configured pipeline instance
"""
# Use default middleware list if none provided
middleware = middleware_list or create_default_middleware_list()
# Create the pipeline, seeding browser-related attributes in the initial context
pipeline = Pipeline(
    middleware=middleware,
    error_handler=error_handler,
    after_middleware_callback=after_middleware_callback,
    logger=logger,
    _initial_context={
        "browser_config": browser_config,
        "browser_hub_id": browser_hub_id,
        "browser_hub_connection": browser_hub_connection,
        "browser_hub": browser_hub,
        "logger": logger,
    },
)
return pipeline
# async def create_pipeline(
# middleware_list: Optional[List[Callable[[Dict[str, Any]], Awaitable[int]]]] = None,
# error_handler: Optional[Callable[[Dict[str, Any]], Awaitable[Dict[str, Any]]]] = None,
# after_middleware_callback: Optional[Callable[[str, Dict[str, Any]], Awaitable[None]]] = None,
# crawler_strategy = None,
# browser_config = None,
# logger = None
# ) -> Pipeline:
# """Factory function to create a pipeline with the given middleware"""
# return Pipeline(
# middleware=middleware_list,
# error_handler=error_handler,
# after_middleware_callback=after_middleware_callback,
# crawler_strategy=crawler_strategy,
# browser_config=browser_config,
# logger=logger
# )
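The 1/0 middleware contract that `process` enforces can be exercised without a browser; a minimal, self-contained sketch (the stub middlewares and the reduced loop are illustrative, not crawl4ai APIs):

```python
import asyncio
import time
from typing import Any, Dict

# Each middleware receives the shared context dict, mutates it, and returns
# 1 (continue) or 0 (stop the chain). These stubs stand in for fetch/scrape steps.

async def fetch_stub(context: Dict[str, Any]) -> int:
    context["html"] = "<html>stub</html>"
    return 1

async def require_html(context: Dict[str, Any]) -> int:
    if not context.get("html"):
        context["error_message"] = "no html in context"
        return 0
    return 1

async def run(middlewares, context):
    for fn in middlewares:
        start = time.perf_counter()
        ok = await fn(context)
        # Pipeline.process records per-middleware timings under keys like this
        context[f"_timing_duration_{fn.__name__}"] = time.perf_counter() - start
        if not ok:
            context["success"] = False
            return context
    context["success"] = True
    return context

ctx = asyncio.run(run([fetch_stub, require_html], {}))
print(ctx["success"])
```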

View File

@@ -0,0 +1,109 @@
import asyncio
from crawl4ai import (
BrowserConfig,
CrawlerRunConfig,
CacheMode,
DefaultMarkdownGenerator,
PruningContentFilter
)
from pipeline import Pipeline
async def main():
# Create configuration objects
browser_config = BrowserConfig(headless=True, verbose=True)
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48,
threshold_type="fixed",
min_word_threshold=0
)
),
)
# Create and use pipeline with context manager
async with Pipeline(browser_config=browser_config) as pipeline:
result = await pipeline.crawl(
url="https://www.example.com",
config=crawler_config
)
# Print the result
print(f"URL: {result.url}")
print(f"Success: {result.success}")
if result.success:
print("\nMarkdown excerpt:")
print(result.markdown.raw_markdown[:500] + "...")
else:
print(f"Error: {result.error_message}")
if __name__ == "__main__":
asyncio.run(main())
class CrawlTarget:
def __init__(self, urls, config=None):
self.urls = urls
self.config = config
def __repr__(self):
return f"CrawlTarget(urls={self.urls}, config={self.config})"
# async def main():
# # Create configuration objects
# browser_config = BrowserConfig(headless=True, verbose=True)
# # Define different configurations
# config1 = CrawlerRunConfig(
# cache_mode=CacheMode.BYPASS,
# markdown_generator=DefaultMarkdownGenerator(
# content_filter=PruningContentFilter(threshold=0.48)
# ),
# )
# config2 = CrawlerRunConfig(
# cache_mode=CacheMode.ENABLED,
# screenshot=True,
# pdf=True
# )
# # Create crawl targets
# targets = [
# CrawlTarget(
# urls=["https://www.example.com", "https://www.wikipedia.org"],
# config=config1
# ),
# CrawlTarget(
# urls="https://news.ycombinator.com",
# config=config2
# ),
# CrawlTarget(
# urls=["https://github.com", "https://stackoverflow.com", "https://python.org"],
# config=None
# )
# ]
# # Create and use pipeline with context manager
# async with Pipeline(browser_config=browser_config) as pipeline:
# all_results = await pipeline.crawl_batch(targets)
# for target_key, results in all_results.items():
# print(f"\n===== Results for {target_key} =====")
# print(f"Number of URLs crawled: {len(results)}")
# for i, result in enumerate(results):
# print(f"\nURL {i+1}: {result.url}")
# print(f"Success: {result.success}")
# if result.success:
# print(f"Content length: {len(result.markdown.raw_markdown)} chars")
# else:
# print(f"Error: {result.error_message}")
# if __name__ == "__main__":
# asyncio.run(main())

View File

@@ -203,6 +203,62 @@ Avoid Common Mistakes:
Result
Output the final list of JSON objects, wrapped in <blocks>...</blocks> XML tags. Make sure to close the tag properly."""
PROMPT_EXTRACT_INFERRED_SCHEMA = """Here is the content from the URL:
<url>{URL}</url>
<url_content>
{HTML}
</url_content>
Please carefully read the URL content and the user's request. Analyze the page structure and infer the most appropriate JSON schema based on the content and request.
Extraction Strategy:
1. First, determine if the page contains repetitive items (like multiple products, articles, etc.) or a single content item (like a single article or page).
2. For repetitive items: Identify the common pattern and extract each instance as a separate JSON object in an array.
3. For single content: Extract the key information into a comprehensive JSON object that captures the essential details.
Extraction instructions:
Return the extracted information as a list of JSON objects. For repetitive content, each object in the list should correspond to a distinct item. For single content, you may return just one detailed JSON object. Wrap the entire JSON list in <blocks>...</blocks> XML tags.
Schema Design Guidelines:
- Create meaningful property names that clearly describe the data they contain
- Use nested objects for hierarchical information
- Use arrays for lists of related items
- Include all information requested by the user
- Maintain consistency in property names and data structures
- Only include properties that are actually present in the content
- For dates, prefer ISO format (YYYY-MM-DD)
- For prices or numeric values, extract them without currency symbols when possible
Quality Reflection:
Before outputting your final answer, double check that:
1. The inferred schema makes logical sense for the type of content
2. All requested information is included
3. The JSON is valid and could be parsed without errors
4. Property names are consistent and descriptive
5. The structure is optimal for the type of data being represented
Avoid Common Mistakes:
- Do NOT add any comments using "//" or "#" in the JSON output. It causes parsing errors.
- Make sure the JSON is properly formatted with curly braces, square brackets, and commas in the right places.
- Do not miss closing </blocks> tag at the end of the JSON output.
- Do not generate Python code showing how to do the task; this is your task to extract the information and return it in JSON format.
- Ensure consistency in property names across all objects
- Don't include empty properties or null values unless they're meaningful
- For repetitive content, ensure all objects follow the same schema
Important: If a user-specific instruction is provided, give significant weight to what the user requests, including any description of the desired result schema. If the user asks for specific information, focus on that and ignore the rest of the content.
<user_request>
{REQUEST}
</user_request>
Result:
Output the final list of JSON objects, wrapped in <blocks>...</blocks> XML tags. Make sure to close the tag properly.
DO NOT ADD ANY PRE OR POST COMMENTS. JUST RETURN THE JSON OBJECTS INSIDE <blocks>...</blocks> TAGS.
CRITICAL: The content inside the <blocks> tags MUST be a direct array of JSON objects (starting with '[' and ending with ']'), not a dictionary/object containing an array. For example, use <blocks>[{...}, {...}]</blocks> instead of <blocks>{"items": [{...}, {...}]}</blocks>. This is essential for proper parsing.
"""
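The `<blocks>[...]</blocks>` format demanded above can be parsed with a regex plus `json.loads`; a minimal sketch (the sample LLM output is illustrative):

```python
import json
import re

def parse_blocks(llm_output: str):
    # Grab the content between <blocks>...</blocks> and parse it as a JSON array
    match = re.search(r"<blocks>(.*?)</blocks>", llm_output, re.DOTALL)
    if not match:
        return []
    return json.loads(match.group(1))

sample = 'Here you go: <blocks>[{"title": "Post 1"}, {"title": "Post 2"}]</blocks>'
items = parse_blocks(sample)
print(len(items), items[0]["title"])
```

This is also why the prompt insists on a direct array: a wrapping object would make `parse_blocks` return a dict instead of a list of items.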
PROMPT_FILTER_CONTENT = """Your task is to filter and convert HTML content into clean, focused markdown that's optimized for use with LLMs and information retrieval systems.

View File

@@ -178,4 +178,10 @@ if TYPE_CHECKING:
BestFirstCrawlingStrategy as BestFirstCrawlingStrategyType,
DFSDeepCrawlStrategy as DFSDeepCrawlStrategyType,
DeepCrawlDecorator as DeepCrawlDecoratorType,
)
)
def create_llm_config(*args, **kwargs) -> 'LLMConfigType':
from .async_configs import LLMConfig
return LLMConfig(*args, **kwargs)
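`create_llm_config` imports `LLMConfig` inside the function body, a common way to avoid circular imports between a package `__init__` and its submodules while keeping type hints via `TYPE_CHECKING`; the pattern in isolation (the `decimal` module here is just a stand-in for the deferred dependency):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only seen by type checkers, never imported at runtime
    from decimal import Decimal as PriceType

def make_price(value: str) -> "PriceType":
    # Deferred import: the heavy or circular module is loaded on first call
    from decimal import Decimal
    return Decimal(value)

print(make_price("19.99"))
```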

View File

@@ -26,7 +26,7 @@ import cProfile
import pstats
from functools import wraps
import asyncio
from lxml import etree, html as lhtml
import sqlite3
import hashlib
@@ -1551,7 +1551,7 @@ def extract_xml_tags(string):
return list(set(tags))
def extract_xml_data(tags, string):
def extract_xml_data_legacy(tags, string):
"""
Extract data for specified XML tags from a string.
@@ -1580,6 +1580,38 @@ def extract_xml_data(tags, string):
return data
def extract_xml_data(tags, string):
"""
Extract data for specified XML tags from a string, returning the longest content for each tag.
How it works:
1. Finds all occurrences of each tag in the string using regex.
2. For each tag, selects the occurrence with the longest content.
3. Returns a dictionary of tag-content pairs.
Args:
tags (List[str]): The list of XML tags to extract.
string (str): The input string containing XML data.
Returns:
Dict[str, str]: A dictionary with tag names as keys and longest extracted content as values.
"""
data = {}
for tag in tags:
pattern = f"<{tag}>(.*?)</{tag}>"
matches = re.findall(pattern, string, re.DOTALL)
if matches:
# Find the longest content for this tag
longest_content = max(matches, key=len).strip()
data[tag] = longest_content
else:
data[tag] = ""
return data
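The longest-match behavior matters when a model emits the same tag more than once, e.g. a short aborted attempt followed by a full retry; a quick illustration (standalone reimplementation of the function above):

```python
import re

def extract_xml_data(tags, string):
    # For each tag, keep the occurrence with the longest content
    data = {}
    for tag in tags:
        matches = re.findall(f"<{tag}>(.*?)</{tag}>", string, re.DOTALL)
        data[tag] = max(matches, key=len).strip() if matches else ""
    return data

text = "<blocks>[1]</blocks> retry: <blocks>[1, 2, 3]</blocks>"
print(extract_xml_data(["blocks"], text))  # keeps the longer occurrence
```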
def perform_completion_with_backoff(
provider,
@@ -1648,6 +1680,19 @@ def perform_completion_with_backoff(
"content": ["Rate limit error. Please try again later."],
}
]
except Exception as e:
raise e # Raise any other exceptions immediately
# print("Error during completion request:", str(e))
# error_message = e.message
# return [
# {
# "index": 0,
# "tags": ["error"],
# "content": [
# f"Error during LLM completion request. {error_message}"
# ],
# }
# ]
def extract_blocks(url, html, provider=DEFAULT_PROVIDER, api_token=None, base_url=None):
@@ -2617,3 +2662,116 @@ class HeadPeekr:
def get_title(head_content: str):
title_match = re.search(r'<title>(.*?)</title>', head_content, re.IGNORECASE | re.DOTALL)
return title_match.group(1) if title_match else None
def preprocess_html_for_schema(html_content, text_threshold=100, attr_value_threshold=200, max_size=100000):
"""
Preprocess HTML to reduce size while preserving structure for schema generation.
Args:
html_content (str): Raw HTML content
text_threshold (int): Maximum length for text nodes before truncation
attr_value_threshold (int): Maximum length for attribute values before truncation
max_size (int): Target maximum size for output HTML
Returns:
str: Preprocessed HTML content
"""
try:
# Parse HTML with error recovery
parser = etree.HTMLParser(remove_comments=True, remove_blank_text=True)
tree = lhtml.fromstring(html_content, parser=parser)
# 1. Remove HEAD section (keep only BODY)
head_elements = tree.xpath('//head')
for head in head_elements:
if head.getparent() is not None:
head.getparent().remove(head)
# 2. Define tags to remove completely
tags_to_remove = [
'script', 'style', 'noscript', 'iframe', 'canvas', 'svg',
'video', 'audio', 'source', 'track', 'map', 'area'
]
# Remove unwanted elements
for tag in tags_to_remove:
elements = tree.xpath(f'//{tag}')
for element in elements:
if element.getparent() is not None:
element.getparent().remove(element)
# 3. Process remaining elements to clean attributes and truncate text
for element in tree.iter():
# Skip if we're at the root level
if element.getparent() is None:
continue
# Clean non-essential attributes but preserve structural ones
# attribs_to_keep = {'id', 'class', 'name', 'href', 'src', 'type', 'value', 'data-'}
# This is more aggressive than the previous version
attribs_to_keep = {'id', 'class', 'name', 'type', 'value'}
# attributes_exempt_from_truncation = ['id', 'class', "data-"]
# Empty list: any attribute value over the threshold gets truncated;
# overly long values make poor CSS selectors for schema building anyway
attributes_exempt_from_truncation = []
# Process each attribute
for attrib in list(element.attrib.keys()):
    # Keep if it's essential or starts with data-
    if not (attrib in attribs_to_keep or attrib.startswith('data-')):
        element.attrib.pop(attrib)
    # Truncate long attribute values unless exempted
    elif attrib not in attributes_exempt_from_truncation and len(element.attrib[attrib]) > attr_value_threshold:
        element.attrib[attrib] = element.attrib[attrib][:attr_value_threshold] + '...'
# Truncate text content if it's too long
if element.text and len(element.text.strip()) > text_threshold:
element.text = element.text.strip()[:text_threshold] + '...'
# Also truncate tail text if present
if element.tail and len(element.tail.strip()) > text_threshold:
element.tail = element.tail.strip()[:text_threshold] + '...'
# 4. Find repeated patterns and keep only a few examples
# This is a simplistic approach - more sophisticated pattern detection could be implemented
pattern_elements = {}
for element in tree.xpath('//*[@class]'):  # only elements that carry a class attribute
parent = element.getparent()
if parent is None:
continue
# Create a signature based on tag and classes
classes = element.get('class', '')
if not classes:
continue
signature = f"{element.tag}.{classes}"
if signature in pattern_elements:
pattern_elements[signature].append(element)
else:
pattern_elements[signature] = [element]
# Keep only 3 examples of each repeating pattern
for signature, elements in pattern_elements.items():
if len(elements) > 3:
# Keep the first 2 and last elements
for element in elements[2:-1]:
if element.getparent() is not None:
element.getparent().remove(element)
# 5. Convert back to string
result = etree.tostring(tree, encoding='unicode', method='html')
# If still over the size limit, apply more aggressive truncation
if len(result) > max_size:
return result[:max_size] + "..."
return result
except Exception as e:
# Fallback for parsing errors
return html_content[:max_size] if len(html_content) > max_size else html_content
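Step 4's pattern pruning keeps only the first two and the last occurrence of each repeated signature; the same policy applied to a plain list (illustrative sketch, separate from the lxml code above):

```python
def prune_repeats(elements, keep=3):
    # Mirrors step 4 above: with more than `keep` matches for one signature,
    # retain the first two and the last example and drop the middle
    if len(elements) <= keep:
        return list(elements)
    return elements[:2] + [elements[-1]]

print(prune_repeats(["card1", "card2", "card3", "card4", "card5"]))
```

Keeping the last element as well as the first two preserves any trailing variant of the pattern (e.g. a final "load more" card) that the first examples might not show.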

View File

@@ -1,137 +0,0 @@
FROM python:3.10-slim
# Set build arguments
ARG APP_HOME=/app
ARG GITHUB_REPO=https://github.com/unclecode/crawl4ai.git
ARG GITHUB_BRANCH=next
ARG USE_LOCAL=False
ARG CONFIG_PATH=""
ENV PYTHONFAULTHANDLER=1 \
PYTHONHASHSEED=random \
PYTHONUNBUFFERED=1 \
PIP_NO_CACHE_DIR=1 \
PYTHONDONTWRITEBYTECODE=1 \
PIP_DISABLE_PIP_VERSION_CHECK=1 \
PIP_DEFAULT_TIMEOUT=100 \
DEBIAN_FRONTEND=noninteractive \
REDIS_HOST=localhost \
REDIS_PORT=6379
ARG PYTHON_VERSION=3.10
ARG INSTALL_TYPE=default
ARG ENABLE_GPU=false
ARG TARGETARCH
LABEL maintainer="unclecode"
LABEL description="🔥🕷️ Crawl4AI: Open-source LLM Friendly Web Crawler & scraper"
LABEL version="1.0"
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
curl \
wget \
gnupg \
git \
cmake \
pkg-config \
python3-dev \
libjpeg-dev \
redis-server \
supervisor \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y --no-install-recommends \
libglib2.0-0 \
libnss3 \
libnspr4 \
libatk1.0-0 \
libatk-bridge2.0-0 \
libcups2 \
libdrm2 \
libdbus-1-3 \
libxcb1 \
libxkbcommon0 \
libx11-6 \
libxcomposite1 \
libxdamage1 \
libxext6 \
libxfixes3 \
libxrandr2 \
libgbm1 \
libpango-1.0-0 \
libcairo2 \
libasound2 \
libatspi2.0-0 \
&& rm -rf /var/lib/apt/lists/*
RUN if [ "$ENABLE_GPU" = "true" ] && [ "$TARGETARCH" = "amd64" ] ; then \
apt-get update && apt-get install -y --no-install-recommends \
nvidia-cuda-toolkit \
&& rm -rf /var/lib/apt/lists/* ; \
else \
echo "Skipping NVIDIA CUDA Toolkit installation (unsupported platform or GPU disabled)"; \
fi
RUN if [ "$TARGETARCH" = "arm64" ]; then \
echo "🦾 Installing ARM-specific optimizations"; \
apt-get update && apt-get install -y --no-install-recommends \
libopenblas-dev \
&& rm -rf /var/lib/apt/lists/*; \
elif [ "$TARGETARCH" = "amd64" ]; then \
echo "🖥️ Installing AMD64-specific optimizations"; \
apt-get update && apt-get install -y --no-install-recommends \
libomp-dev \
&& rm -rf /var/lib/apt/lists/*; \
else \
echo "Skipping platform-specific optimizations (unsupported platform)"; \
fi
WORKDIR ${APP_HOME}
RUN git clone --branch ${GITHUB_BRANCH} ${GITHUB_REPO} /tmp/crawl4ai
COPY docker/supervisord.conf .
COPY docker/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
RUN if [ "$INSTALL_TYPE" = "all" ] ; then \
pip install "/tmp/crawl4ai/[all]" && \
python -m nltk.downloader punkt stopwords && \
python -m crawl4ai.model_loader ; \
elif [ "$INSTALL_TYPE" = "torch" ] ; then \
pip install "/tmp/crawl4ai/[torch]" ; \
elif [ "$INSTALL_TYPE" = "transformer" ] ; then \
pip install "/tmp/crawl4ai/[transformer]" && \
python -m crawl4ai.model_loader ; \
else \
pip install "/tmp/crawl4ai" ; \
fi
RUN pip install --no-cache-dir --upgrade pip && \
python -c "import crawl4ai; print('✅ crawl4ai is ready to rock!')" && \
python -c "from playwright.sync_api import sync_playwright; print('✅ Playwright is feeling dramatic!')"
RUN playwright install --with-deps chromium
COPY docker/* ${APP_HOME}/
RUN if [ -n "$CONFIG_PATH" ] && [ -f "$CONFIG_PATH" ]; then \
echo "Using custom config from $CONFIG_PATH" && \
cp $CONFIG_PATH /app/config.yml; \
fi
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD bash -c '\
MEM=$(free -m | awk "/^Mem:/{print \$2}"); \
if [ $MEM -lt 2048 ]; then \
echo "⚠️ Warning: Less than 2GB RAM available! Your container might need a memory boost! 🚀"; \
exit 1; \
fi && \
redis-cli ping > /dev/null && \
curl -f http://localhost:8000/health || exit 1'
# EXPOSE 6379
CMD ["supervisord", "-c", "supervisord.conf"]

View File

@@ -1,3 +0,0 @@
project_name: PROJECT_NAME
domain_name: DOMAIN_NAME
aws_region: AWS_REGION

View File

@@ -1,729 +0,0 @@
#!/usr/bin/env python3
import argparse
import subprocess
import sys
import time
import json
import yaml
import requests
import os
# Steps for deployment
STEPS = [
"refresh_aws_auth",
"fetch_or_create_vpc_and_subnets",
"create_ecr_repositories",
"create_iam_role",
"create_security_groups",
"request_acm_certificate",
"build_and_push_docker",
"create_task_definition",
"setup_alb",
"deploy_ecs_service",
"configure_custom_domain",
"test_endpoints"
]
# Utility function to prompt user for confirmation
def confirm_step(step_name):
while True:
response = input(f"Proceed with {step_name}? (yes/no): ").strip().lower()
if response in ["yes", "no"]:
return response == "yes"
print("Please enter 'yes' or 'no'.")
# Utility function to run AWS CLI or shell commands and handle errors
def run_command(command, error_message, additional_diagnostics=None, cwd="."):
try:
result = subprocess.run(command, capture_output=True, text=True, check=True, cwd=cwd)
return result
except subprocess.CalledProcessError as e:
with open("error_context.md", "w") as f:
f.write(f"{error_message}:\n")
f.write(f"Command: {' '.join(command)}\n")
f.write(f"Exit Code: {e.returncode}\n")
f.write(f"Stdout: {e.stdout}\n")
f.write(f"Stderr: {e.stderr}\n")
if additional_diagnostics:
for diag_cmd in additional_diagnostics:
diag_result = subprocess.run(diag_cmd, capture_output=True, text=True)
f.write(f"\nDiagnostic command: {' '.join(diag_cmd)}\n")
f.write(f"Stdout: {diag_result.stdout}\n")
f.write(f"Stderr: {diag_result.stderr}\n")
raise Exception(f"{error_message}: {e.stderr}")
# Utility function to load or initialize state
def load_state(project_name):
state_file = f"{project_name}-state.json"
if os.path.exists(state_file):
with open(state_file, "r") as f:
return json.load(f)
return {"last_step": -1}
# Utility function to save state
def save_state(project_name, state):
state_file = f"{project_name}-state.json"
with open(state_file, "w") as f:
json.dump(state, f, indent=4)
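Each deployment step below checks `state["last_step"]` before doing work and checkpoints afterwards, which makes the script resumable after a crash; the pattern in isolation (file name and step bodies are illustrative):

```python
import json
import os
import tempfile

def load_state(path):
    # Resume from the checkpoint file if it exists, else start fresh
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"last_step": -1}

def run_step(index, state, path, action):
    # Skip anything already recorded as done, otherwise run and checkpoint
    if state["last_step"] >= index:
        return
    action()
    state["last_step"] = index
    with open(path, "w") as f:
        json.dump(state, f)

path = os.path.join(tempfile.mkdtemp(), "demo-state.json")
state = load_state(path)
done = []
run_step(0, state, path, lambda: done.append("auth"))
run_step(0, state, path, lambda: done.append("auth"))  # second call is a no-op
print(done, state["last_step"])
```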
# DNS Check Function
def check_dns_propagation(domain, alb_dns):
try:
result = subprocess.run(["dig", "+short", domain], capture_output=True, text=True)
if alb_dns in result.stdout:
return True
return False
except Exception as e:
print(f"Failed to check DNS: {e}")
return False
# Step Functions
def refresh_aws_auth(project_name, state, config):
if state["last_step"] >= 0:
print("Skipping refresh_aws_auth (already completed)")
return
if not confirm_step("Refresh AWS authentication"):
sys.exit("User aborted.")
run_command(
["aws", "sts", "get-caller-identity"],
"Failed to verify AWS credentials"
)
print("AWS authentication verified.")
state["last_step"] = 0
save_state(project_name, state)
def fetch_or_create_vpc_and_subnets(project_name, state, config):
if state["last_step"] >= 1:
print("Skipping fetch_or_create_vpc_and_subnets (already completed)")
return state["vpc_id"], state["public_subnets"]
if not confirm_step("Fetch or Create VPC and Subnets"):
sys.exit("User aborted.")
# Fetch AWS account ID
result = run_command(
["aws", "sts", "get-caller-identity"],
"Failed to get AWS account ID"
)
account_id = json.loads(result.stdout)["Account"]
# Fetch default VPC
result = run_command(
["aws", "ec2", "describe-vpcs", "--filters", "Name=isDefault,Values=true", "--region", config["aws_region"]],
"Failed to describe VPCs"
)
vpcs = json.loads(result.stdout).get("Vpcs", [])
if not vpcs:
result = run_command(
["aws", "ec2", "create-vpc", "--cidr-block", "10.0.0.0/16", "--region", config["aws_region"]],
"Failed to create VPC"
)
vpc_id = json.loads(result.stdout)["Vpc"]["VpcId"]
run_command(
["aws", "ec2", "modify-vpc-attribute", "--vpc-id", vpc_id, "--enable-dns-hostnames", "--region", config["aws_region"]],
"Failed to enable DNS hostnames"
)
else:
vpc_id = vpcs[0]["VpcId"]
# Fetch or create subnets
result = run_command(
["aws", "ec2", "describe-subnets", "--filters", f"Name=vpc-id,Values={vpc_id}", "--region", config["aws_region"]],
"Failed to describe subnets"
)
subnets = json.loads(result.stdout).get("Subnets", [])
if len(subnets) < 2:
azs = json.loads(run_command(
["aws", "ec2", "describe-availability-zones", "--region", config["aws_region"]],
"Failed to describe availability zones"
).stdout)["AvailabilityZones"][:2]
subnet_ids = []
for i, az in enumerate(azs):
az_name = az["ZoneName"]
result = run_command(
["aws", "ec2", "create-subnet", "--vpc-id", vpc_id, "--cidr-block", f"10.0.{i}.0/24", "--availability-zone", az_name, "--region", config["aws_region"]],
f"Failed to create subnet in {az_name}"
)
subnet_id = json.loads(result.stdout)["Subnet"]["SubnetId"]
subnet_ids.append(subnet_id)
run_command(
["aws", "ec2", "modify-subnet-attribute", "--subnet-id", subnet_id, "--map-public-ip-on-launch", "--region", config["aws_region"]],
f"Failed to make subnet {subnet_id} public"
)
else:
subnet_ids = [s["SubnetId"] for s in subnets[:2]]
# Ensure internet gateway
result = run_command(
["aws", "ec2", "describe-internet-gateways", "--filters", f"Name=attachment.vpc-id,Values={vpc_id}", "--region", config["aws_region"]],
"Failed to describe internet gateways"
)
igws = json.loads(result.stdout).get("InternetGateways", [])
if not igws:
result = run_command(
["aws", "ec2", "create-internet-gateway", "--region", config["aws_region"]],
"Failed to create internet gateway"
)
igw_id = json.loads(result.stdout)["InternetGateway"]["InternetGatewayId"]
run_command(
["aws", "ec2", "attach-internet-gateway", "--vpc-id", vpc_id, "--internet-gateway-id", igw_id, "--region", config["aws_region"]],
"Failed to attach internet gateway"
)
state["vpc_id"] = vpc_id
state["public_subnets"] = subnet_ids
state["last_step"] = 1
save_state(project_name, state)
print(f"VPC ID: {vpc_id}, Subnets: {subnet_ids}")
return vpc_id, subnet_ids
def create_ecr_repositories(project_name, state, config):
if state["last_step"] >= 2:
print("Skipping create_ecr_repositories (already completed)")
return
if not confirm_step("Create ECR Repositories"):
sys.exit("User aborted.")
account_id = json.loads(run_command(
["aws", "sts", "get-caller-identity"],
"Failed to get AWS account ID"
).stdout)["Account"]
repos = [project_name, f"{project_name}-nginx"]
for repo in repos:
result = subprocess.run(
["aws", "ecr", "describe-repositories", "--repository-names", repo, "--region", config["aws_region"]],
capture_output=True, text=True
)
if result.returncode != 0:
run_command(
["aws", "ecr", "create-repository", "--repository-name", repo, "--region", config["aws_region"]],
f"Failed to create ECR repository {repo}"
)
print(f"ECR repository {repo} is ready.")
state["last_step"] = 2
save_state(project_name, state)
def create_iam_role(project_name, state, config):
if state["last_step"] >= 3:
print("Skipping create_iam_role (already completed)")
return
if not confirm_step("Create IAM Role"):
sys.exit("User aborted.")
account_id = json.loads(run_command(
["aws", "sts", "get-caller-identity"],
"Failed to get AWS account ID"
).stdout)["Account"]
role_name = "ecsTaskExecutionRole"
trust_policy = {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {"Service": "ecs-tasks.amazonaws.com"},
"Action": "sts:AssumeRole"
}
]
}
with open("trust_policy.json", "w") as f:
json.dump(trust_policy, f)
result = subprocess.run(
["aws", "iam", "get-role", "--role-name", role_name],
capture_output=True, text=True
)
if result.returncode != 0:
run_command(
["aws", "iam", "create-role", "--role-name", role_name, "--assume-role-policy-document", "file://trust_policy.json"],
f"Failed to create IAM role {role_name}"
)
run_command(
["aws", "iam", "attach-role-policy", "--role-name", role_name, "--policy-arn", "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"],
"Failed to attach ECS task execution policy"
)
os.remove("trust_policy.json")
state["execution_role_arn"] = f"arn:aws:iam::{account_id}:role/{role_name}"
state["last_step"] = 3
save_state(project_name, state)
print(f"IAM role {role_name} configured.")
def create_security_groups(project_name, state, config):
if state["last_step"] >= 4:
print("Skipping create_security_groups (already completed)")
return state["alb_sg_id"], state["ecs_sg_id"]
if not confirm_step("Create Security Groups"):
sys.exit("User aborted.")
vpc_id = state["vpc_id"]
alb_sg_name = f"{project_name}-alb-sg"
result = run_command(
["aws", "ec2", "describe-security-groups", "--filters", f"Name=vpc-id,Values={vpc_id}", f"Name=group-name,Values={alb_sg_name}", "--region", config["aws_region"]],
"Failed to describe ALB security group"
)
if not json.loads(result.stdout).get("SecurityGroups"):
result = run_command(
["aws", "ec2", "create-security-group", "--group-name", alb_sg_name, "--description", "Security group for ALB", "--vpc-id", vpc_id, "--region", config["aws_region"]],
"Failed to create ALB security group"
)
alb_sg_id = json.loads(result.stdout)["GroupId"]
run_command(
["aws", "ec2", "authorize-security-group-ingress", "--group-id", alb_sg_id, "--protocol", "tcp", "--port", "80", "--cidr", "0.0.0.0/0", "--region", config["aws_region"]],
"Failed to authorize HTTP ingress"
)
run_command(
["aws", "ec2", "authorize-security-group-ingress", "--group-id", alb_sg_id, "--protocol", "tcp", "--port", "443", "--cidr", "0.0.0.0/0", "--region", config["aws_region"]],
"Failed to authorize HTTPS ingress"
)
else:
alb_sg_id = json.loads(result.stdout)["SecurityGroups"][0]["GroupId"]
ecs_sg_name = f"{project_name}-ecs-sg"
result = run_command(
["aws", "ec2", "describe-security-groups", "--filters", f"Name=vpc-id,Values={vpc_id}", f"Name=group-name,Values={ecs_sg_name}", "--region", config["aws_region"]],
"Failed to describe ECS security group"
)
if not json.loads(result.stdout).get("SecurityGroups"):
result = run_command(
["aws", "ec2", "create-security-group", "--group-name", ecs_sg_name, "--description", "Security group for ECS tasks", "--vpc-id", vpc_id, "--region", config["aws_region"]],
"Failed to create ECS security group"
)
ecs_sg_id = json.loads(result.stdout)["GroupId"]
run_command(
["aws", "ec2", "authorize-security-group-ingress", "--group-id", ecs_sg_id, "--protocol", "tcp", "--port", "80", "--source-group", alb_sg_id, "--region", config["aws_region"]],
"Failed to authorize ECS ingress"
)
else:
ecs_sg_id = json.loads(result.stdout)["SecurityGroups"][0]["GroupId"]
state["alb_sg_id"] = alb_sg_id
state["ecs_sg_id"] = ecs_sg_id
state["last_step"] = 4
save_state(project_name, state)
print("Security groups configured.")
return alb_sg_id, ecs_sg_id
def request_acm_certificate(project_name, state, config):
if state["last_step"] >= 5:
print("Skipping request_acm_certificate (already completed)")
return state["cert_arn"]
if not confirm_step("Request ACM Certificate"):
sys.exit("User aborted.")
domain_name = config["domain_name"]
    result = run_command(
        ["aws", "acm", "list-certificates", "--certificate-statuses", "ISSUED", "--region", config["aws_region"]],
        "Failed to list certificates"
    )
certificates = json.loads(result.stdout).get("CertificateSummaryList", [])
cert_arn = next((c["CertificateArn"] for c in certificates if c["DomainName"] == domain_name), None)
if not cert_arn:
result = run_command(
["aws", "acm", "request-certificate", "--domain-name", domain_name, "--validation-method", "DNS", "--region", config["aws_region"]],
"Failed to request ACM certificate"
)
cert_arn = json.loads(result.stdout)["CertificateArn"]
        time.sleep(10)  # give ACM a moment to populate the DNS validation records
result = run_command(
["aws", "acm", "describe-certificate", "--certificate-arn", cert_arn, "--region", config["aws_region"]],
"Failed to describe certificate"
)
cert_details = json.loads(result.stdout)["Certificate"]
dns_validations = cert_details.get("DomainValidationOptions", [])
for validation in dns_validations:
if validation["ValidationMethod"] == "DNS" and "ResourceRecord" in validation:
record = validation["ResourceRecord"]
print(f"Please add this DNS record to validate the certificate for {domain_name}:")
print(f"Name: {record['Name']}")
print(f"Type: {record['Type']}")
print(f"Value: {record['Value']}")
print("Press Enter after adding the DNS record...")
input()
while True:
result = run_command(
["aws", "acm", "describe-certificate", "--certificate-arn", cert_arn, "--region", config["aws_region"]],
"Failed to check certificate status"
)
status = json.loads(result.stdout)["Certificate"]["Status"]
if status == "ISSUED":
break
elif status in ["FAILED", "REVOKED", "INACTIVE"]:
print("Certificate issuance failed.")
sys.exit(1)
time.sleep(10)
state["cert_arn"] = cert_arn
state["last_step"] = 5
save_state(project_name, state)
print(f"Certificate ARN: {cert_arn}")
return cert_arn
def build_and_push_docker(project_name, state, config):
if state["last_step"] >= 6:
print("Skipping build_and_push_docker (already completed)")
return state["fastapi_image"], state["nginx_image"]
if not confirm_step("Build and Push Docker Images"):
sys.exit("User aborted.")
with open("./version.txt", "r") as f:
version = f.read().strip()
account_id = json.loads(run_command(
["aws", "sts", "get-caller-identity"],
"Failed to get AWS account ID"
).stdout)["Account"]
region = config["aws_region"]
login_password = run_command(
["aws", "ecr", "get-login-password", "--region", region],
"Failed to get ECR login password"
).stdout.strip()
run_command(
["docker", "login", "--username", "AWS", "--password", login_password, f"{account_id}.dkr.ecr.{region}.amazonaws.com"],
"Failed to authenticate Docker to ECR"
)
fastapi_image = f"{account_id}.dkr.ecr.{region}.amazonaws.com/{project_name}:{version}"
run_command(
["docker", "build", "-f", "Dockerfile", "-t", fastapi_image, "."],
"Failed to build FastAPI Docker image"
)
run_command(
["docker", "push", fastapi_image],
"Failed to push FastAPI image"
)
nginx_image = f"{account_id}.dkr.ecr.{region}.amazonaws.com/{project_name}-nginx:{version}"
run_command(
["docker", "build", "-f", "Dockerfile", "-t", nginx_image, "."],
"Failed to build Nginx Docker image",
cwd="./nginx"
)
run_command(
["docker", "push", nginx_image],
"Failed to push Nginx image"
)
state["fastapi_image"] = fastapi_image
state["nginx_image"] = nginx_image
state["last_step"] = 6
save_state(project_name, state)
print("Docker images built and pushed.")
return fastapi_image, nginx_image
def create_task_definition(project_name, state, config):
if state["last_step"] >= 7:
print("Skipping create_task_definition (already completed)")
return state["task_def_arn"]
if not confirm_step("Create Task Definition"):
sys.exit("User aborted.")
log_group = f"/ecs/{project_name}-logs"
result = run_command(
["aws", "logs", "describe-log-groups", "--log-group-name-prefix", log_group, "--region", config["aws_region"]],
"Failed to describe log groups"
)
if not any(lg["logGroupName"] == log_group for lg in json.loads(result.stdout).get("logGroups", [])):
run_command(
["aws", "logs", "create-log-group", "--log-group-name", log_group, "--region", config["aws_region"]],
f"Failed to create log group {log_group}"
)
task_definition = {
"family": f"{project_name}-taskdef",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "512",
"memory": "2048",
"executionRoleArn": state["execution_role_arn"],
"containerDefinitions": [
{
"name": "fastapi",
"image": state["fastapi_image"],
"portMappings": [{"containerPort": 8000, "hostPort": 8000, "protocol": "tcp"}],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": log_group,
"awslogs-region": config["aws_region"],
"awslogs-stream-prefix": "fastapi"
}
}
},
{
"name": "nginx",
"image": state["nginx_image"],
"portMappings": [{"containerPort": 80, "hostPort": 80, "protocol": "tcp"}],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": log_group,
"awslogs-region": config["aws_region"],
"awslogs-stream-prefix": "nginx"
}
}
}
]
}
with open("task_def.json", "w") as f:
json.dump(task_definition, f)
result = run_command(
["aws", "ecs", "register-task-definition", "--cli-input-json", "file://task_def.json", "--region", config["aws_region"]],
"Failed to register task definition"
)
task_def_arn = json.loads(result.stdout)["taskDefinition"]["taskDefinitionArn"]
os.remove("task_def.json")
state["task_def_arn"] = task_def_arn
state["last_step"] = 7
save_state(project_name, state)
print("Task definition created.")
return task_def_arn
def setup_alb(project_name, state, config):
if state["last_step"] >= 8:
print("Skipping setup_alb (already completed)")
return state["alb_arn"], state["tg_arn"], state["alb_dns"]
if not confirm_step("Set Up ALB"):
sys.exit("User aborted.")
vpc_id = state["vpc_id"]
public_subnets = state["public_subnets"]
alb_name = f"{project_name}-alb"
result = subprocess.run(
["aws", "elbv2", "describe-load-balancers", "--names", alb_name, "--region", config["aws_region"]],
capture_output=True, text=True
)
if result.returncode != 0:
run_command(
["aws", "elbv2", "create-load-balancer", "--name", alb_name, "--subnets"] + public_subnets + ["--security-groups", state["alb_sg_id"], "--region", config["aws_region"]],
"Failed to create ALB"
)
    alb_info = json.loads(run_command(
        ["aws", "elbv2", "describe-load-balancers", "--names", alb_name, "--region", config["aws_region"]],
        "Failed to describe ALB"
    ).stdout)["LoadBalancers"][0]
    alb_arn = alb_info["LoadBalancerArn"]
    alb_dns = alb_info["DNSName"]
tg_name = f"{project_name}-tg"
result = subprocess.run(
["aws", "elbv2", "describe-target-groups", "--names", tg_name, "--region", config["aws_region"]],
capture_output=True, text=True
)
if result.returncode != 0:
run_command(
["aws", "elbv2", "create-target-group", "--name", tg_name, "--protocol", "HTTP", "--port", "80", "--vpc-id", vpc_id, "--region", config["aws_region"]],
"Failed to create target group"
)
tg_arn = json.loads(run_command(
["aws", "elbv2", "describe-target-groups", "--names", tg_name, "--region", config["aws_region"]],
"Failed to describe target group"
).stdout)["TargetGroups"][0]["TargetGroupArn"]
result = run_command(
["aws", "elbv2", "describe-listeners", "--load-balancer-arn", alb_arn, "--region", config["aws_region"]],
"Failed to describe listeners"
)
listeners = json.loads(result.stdout).get("Listeners", [])
if not any(l["Port"] == 80 for l in listeners):
run_command(
["aws", "elbv2", "create-listener", "--load-balancer-arn", alb_arn, "--protocol", "HTTP", "--port", "80", "--default-actions", "Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}", "--region", config["aws_region"]],
"Failed to create HTTP listener"
)
if not any(l["Port"] == 443 for l in listeners):
run_command(
["aws", "elbv2", "create-listener", "--load-balancer-arn", alb_arn, "--protocol", "HTTPS", "--port", "443", "--certificates", f"CertificateArn={state['cert_arn']}", "--default-actions", f"Type=forward,TargetGroupArn={tg_arn}", "--region", config["aws_region"]],
"Failed to create HTTPS listener"
)
state["alb_arn"] = alb_arn
state["tg_arn"] = tg_arn
state["alb_dns"] = alb_dns
state["last_step"] = 8
save_state(project_name, state)
print("ALB configured.")
return alb_arn, tg_arn, alb_dns
def deploy_ecs_service(project_name, state, config):
if state["last_step"] >= 9:
print("Skipping deploy_ecs_service (already completed)")
return
if not confirm_step("Deploy ECS Service"):
sys.exit("User aborted.")
cluster_name = f"{project_name}-cluster"
result = run_command(
["aws", "ecs", "describe-clusters", "--clusters", cluster_name, "--region", config["aws_region"]],
"Failed to describe clusters"
)
    clusters = json.loads(result.stdout).get("clusters", [])
    # describe-clusters also returns INACTIVE clusters, so check the status too
    if not clusters or clusters[0]["status"] != "ACTIVE":
run_command(
["aws", "ecs", "create-cluster", "--cluster-name", cluster_name, "--region", config["aws_region"]],
"Failed to create ECS cluster"
)
service_name = f"{project_name}-service"
result = run_command(
["aws", "ecs", "describe-services", "--cluster", cluster_name, "--services", service_name, "--region", config["aws_region"]],
"Failed to describe services",
additional_diagnostics=[["aws", "ecs", "list-tasks", "--cluster", cluster_name, "--service-name", service_name, "--region", config["aws_region"]]]
)
services = json.loads(result.stdout).get("services", [])
if not services or services[0]["status"] == "INACTIVE":
run_command(
            ["aws", "ecs", "create-service", "--cluster", cluster_name, "--service-name", service_name, "--task-definition", state["task_def_arn"], "--desired-count", "1", "--launch-type", "FARGATE", "--network-configuration", f"awsvpcConfiguration={{subnets=[{','.join(state['public_subnets'])}],securityGroups=[{state['ecs_sg_id']}],assignPublicIp=ENABLED}}", "--load-balancers", f"targetGroupArn={state['tg_arn']},containerName=nginx,containerPort=80", "--region", config["aws_region"]],
"Failed to create ECS service"
)
else:
run_command(
["aws", "ecs", "update-service", "--cluster", cluster_name, "--service", service_name, "--task-definition", state["task_def_arn"], "--region", config["aws_region"]],
"Failed to update ECS service"
)
state["last_step"] = 9
save_state(project_name, state)
print("ECS service deployed.")
def configure_custom_domain(project_name, state, config):
if state["last_step"] >= 10:
print("Skipping configure_custom_domain (already completed)")
return
if not confirm_step("Configure Custom Domain"):
sys.exit("User aborted.")
domain_name = config["domain_name"]
alb_dns = state["alb_dns"]
print(f"Please add a CNAME record for {domain_name} pointing to {alb_dns} in your DNS provider.")
print("Press Enter after updating the DNS record...")
input()
while not check_dns_propagation(domain_name, alb_dns):
print("DNS propagation not complete. Waiting 30 seconds before retrying...")
time.sleep(30)
print("DNS propagation confirmed.")
state["last_step"] = 10
save_state(project_name, state)
print("Custom domain configured.")
def test_endpoints(project_name, state, config):
if state["last_step"] >= 11:
print("Skipping test_endpoints (already completed)")
return
if not confirm_step("Test Endpoints"):
sys.exit("User aborted.")
domain = config["domain_name"]
time.sleep(30) # Wait for service to stabilize
response = requests.get(f"https://{domain}/health", verify=False)
if response.status_code != 200:
with open("error_context.md", "w") as f:
f.write("Health endpoint test failed:\n")
f.write(f"Status Code: {response.status_code}\n")
f.write(f"Response: {response.text}\n")
sys.exit(1)
print("Health endpoint test passed.")
payload = {
"urls": ["https://example.com"],
"browser_config": {"headless": True},
"crawler_config": {"stream": False}
}
response = requests.post(f"https://{domain}/crawl", json=payload, verify=False)
if response.status_code != 200:
with open("error_context.md", "w") as f:
f.write("Crawl endpoint test failed:\n")
f.write(f"Status Code: {response.status_code}\n")
f.write(f"Response: {response.text}\n")
sys.exit(1)
print("Crawl endpoint test passed.")
state["last_step"] = 11
save_state(project_name, state)
print("Endpoints tested successfully.")
# Main Deployment Function
def deploy(project_name, force=False):
config_file = f"{project_name}-config.yml"
if not os.path.exists(config_file):
print(f"Configuration file {config_file} not found. Run 'init' first.")
sys.exit(1)
with open(config_file, "r") as f:
config = yaml.safe_load(f)
state = load_state(project_name)
if force:
state = {"last_step": -1}
last_step = state.get("last_step", -1)
    for step_idx, step_name in enumerate(STEPS):
        if step_idx <= last_step:
            print(f"Skipping {step_name} (already completed)")
            continue
        print(f"Executing step: {step_name}")
        func = globals()[step_name]
        # Every step shares the same signature; return values are persisted
        # in `state` by each step, so they can be safely discarded here.
        func(project_name, state, config)
# Init Command
def init(project_name, domain_name, aws_region):
config = {
"project_name": project_name,
"domain_name": domain_name,
"aws_region": aws_region
}
config_file = f"{project_name}-config.yml"
with open(config_file, "w") as f:
yaml.dump(config, f)
print(f"Configuration file {config_file} created.")
# Argument Parser
parser = argparse.ArgumentParser(description="Crawl4AI Deployment Script")
subparsers = parser.add_subparsers(dest="command")
# Init Parser
init_parser = subparsers.add_parser("init", help="Initialize configuration")
init_parser.add_argument("--project", required=True, help="Project name")
init_parser.add_argument("--domain", required=True, help="Domain name")
init_parser.add_argument("--region", required=True, help="AWS region")
# Deploy Parser
deploy_parser = subparsers.add_parser("deploy", help="Deploy the project")
deploy_parser.add_argument("--project", required=True, help="Project name")
deploy_parser.add_argument("--force", action="store_true", help="Force redeployment from start")
args = parser.parse_args()
if args.command == "init":
init(args.project, args.domain, args.region)
elif args.command == "deploy":
deploy(args.project, args.force)
else:
parser.print_help()

View File

@@ -1,31 +0,0 @@
# .dockerignore
*
# Allow specific files and directories when using local installation
!crawl4ai/
!docs/
!deploy/docker/
!setup.py
!pyproject.toml
!README.md
!LICENSE
!MANIFEST.in
!setup.cfg
!mkdocs.yml
.git/
__pycache__/
*.pyc
*.pyo
*.pyd
.DS_Store
.env
.venv
venv/
tests/
coverage.xml
*.log
*.swp
*.egg-info/
dist/
build/

View File

@@ -1,8 +0,0 @@
# LLM Provider Keys
OPENAI_API_KEY=your_openai_key_here
DEEPSEEK_API_KEY=your_deepseek_key_here
ANTHROPIC_API_KEY=your_anthropic_key_here
GROQ_API_KEY=your_groq_key_here
TOGETHER_API_KEY=your_together_key_here
MISTRAL_API_KEY=your_mistral_key_here
GEMINI_API_TOKEN=your_gemini_key_here

View File

@@ -1,847 +0,0 @@
# Crawl4AI Docker Guide 🐳
## Table of Contents
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Local Build](#local-build)
- [Docker Hub](#docker-hub)
- [Dockerfile Parameters](#dockerfile-parameters)
- [Using the API](#using-the-api)
- [Understanding Request Schema](#understanding-request-schema)
- [REST API Examples](#rest-api-examples)
- [Python SDK](#python-sdk)
- [Metrics & Monitoring](#metrics--monitoring)
- [Deployment Scenarios](#deployment-scenarios)
- [Complete Examples](#complete-examples)
- [Getting Help](#getting-help)
## Prerequisites
Before we dive in, make sure you have:
- Docker installed and running (version 20.10.0 or higher)
- At least 4GB of RAM available for the container
- Python 3.10+ (if using the Python SDK)
- Node.js 16+ (if using the Node.js examples)
> 💡 **Pro tip**: Run `docker info` to check your Docker installation and available resources.
## Installation
### Local Build
Let's get your local environment set up step by step!
#### 1. Building the Image
First, clone the repository and build the Docker image:
```bash
# Clone the repository
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai/deploy
# Build the Docker image
docker build --platform=linux/amd64 --no-cache -t crawl4ai .
# Or build for arm64
docker build --platform=linux/arm64 --no-cache -t crawl4ai .
```
#### 2. Environment Setup
If you plan to use LLMs (Language Models), you'll need to set up your API keys. Create a `.llm.env` file:
```env
# OpenAI
OPENAI_API_KEY=sk-your-key
# Anthropic
ANTHROPIC_API_KEY=your-anthropic-key
# DeepSeek
DEEPSEEK_API_KEY=your-deepseek-key
# Check out https://docs.litellm.ai/docs/providers for more providers!
```
> 🔑 **Note**: Keep your API keys secure! Never commit them to version control.
#### 3. Running the Container
You have several options for running the container:
Basic run (no LLM support):
```bash
docker run -d -p 8000:8000 --name crawl4ai crawl4ai
```
With LLM support:
```bash
docker run -d -p 8000:8000 \
--env-file .llm.env \
--name crawl4ai \
crawl4ai
```
Passing individual host environment variables through (not a good practice, but handy for local testing; `-e VAR` with no value forwards the host's value):
```bash
docker run -d -p 8000:8000 \
  --env-file .llm.env \
  -e OPENAI_API_KEY \
  --name crawl4ai \
  crawl4ai
```
#### Multi-Platform Build
For distributing your image across different architectures, use `buildx`:
```bash
# Set up buildx builder
docker buildx create --use
# Build for multiple platforms
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t crawl4ai \
--push \
.
```
> 💡 **Note**: Multi-platform builds require Docker Buildx and need to be pushed to a registry.
#### Development Build
For development, you might want to enable all features:
```bash
docker build -t crawl4ai \
--build-arg INSTALL_TYPE=all \
--build-arg PYTHON_VERSION=3.10 \
--build-arg ENABLE_GPU=true \
.
```
#### GPU-Enabled Build
If you plan to use GPU acceleration:
```bash
docker build -t crawl4ai \
--build-arg ENABLE_GPU=true \
deploy/docker/
```
### Build Arguments Explained
| Argument | Description | Default | Options |
|----------|-------------|---------|----------|
| PYTHON_VERSION | Python version | 3.10 | 3.8, 3.9, 3.10 |
| INSTALL_TYPE | Feature set | default | default, all, torch, transformer |
| ENABLE_GPU | GPU support | false | true, false |
| APP_HOME | Install path | /app | any valid path |
### Build Best Practices
1. **Choose the Right Install Type**
- `default`: Basic installation, smallest image; to be honest, I use this most of the time.
- `all`: Full features, larger image (includes transformers and NLTK; make sure you really need them)
2. **Platform Considerations**
- Let Docker auto-detect platform unless you need cross-compilation
- Use --platform for specific architecture requirements
- Consider buildx for multi-architecture distribution
3. **Performance Optimization**
- The image automatically includes platform-specific optimizations
- AMD64 gets OpenMP optimizations
- ARM64 gets OpenBLAS optimizations
### Docker Hub
> 🚧 Coming soon! The image will be available at `crawl4ai`. Stay tuned!
## Using the API
In the following sections, we cover two ways to communicate with the Docker server. One option is the client SDK I developed for Python (a Node.js version is coming soon); I highly recommend this approach to avoid mistakes. Alternatively, you can take a more technical route: build the JSON request structure yourself and POST it to the endpoints directly, which I will explain in detail.
### Python SDK
The SDK makes things easier! Here's how to use it:
```python
import asyncio

from crawl4ai.docker_client import Crawl4aiDockerClient
from crawl4ai import BrowserConfig, CrawlerRunConfig
async def main():
async with Crawl4aiDockerClient(base_url="http://localhost:8000", verbose=True) as client:
# If JWT is enabled, you can authenticate like this: (more on this later)
# await client.authenticate("test@example.com")
# Non-streaming crawl
results = await client.crawl(
["https://example.com", "https://python.org"],
browser_config=BrowserConfig(headless=True),
crawler_config=CrawlerRunConfig()
)
print(f"Non-streaming results: {results}")
# Streaming crawl
crawler_config = CrawlerRunConfig(stream=True)
async for result in await client.crawl(
["https://example.com", "https://python.org"],
browser_config=BrowserConfig(headless=True),
crawler_config=crawler_config
):
print(f"Streamed result: {result}")
# Get schema
schema = await client.get_schema()
print(f"Schema: {schema}")
if __name__ == "__main__":
asyncio.run(main())
```
`Crawl4aiDockerClient` is an async context manager that handles the connection for you. You can pass in optional parameters for more control:
- `base_url` (str): Base URL of the Crawl4AI Docker server
- `timeout` (float): Default timeout for requests in seconds
- `verify_ssl` (bool): Whether to verify SSL certificates
- `verbose` (bool): Whether to show logging output
- `log_file` (str, optional): Path to log file if file logging is desired
This client SDK generates a properly structured JSON request for the server's HTTP API.
## Second Approach: Direct API Calls
This is super important! The API expects a specific structure that matches our Python classes. Let me show you how it works.
### Understanding Configuration Structure
Let's dive deep into how configurations work in Crawl4AI. Every configuration object follows a consistent pattern of `type` and `params`. This structure enables complex, nested configurations while maintaining clarity.
#### The Basic Pattern
Try this in Python to understand the structure:
```python
from crawl4ai import BrowserConfig
# Create a config and see its structure
config = BrowserConfig(headless=True)
print(config.dump())
```
This outputs:
```json
{
"type": "BrowserConfig",
"params": {
"headless": true
}
}
```
#### Simple vs Complex Values
The structure follows these rules:
- Simple values (strings, numbers, booleans, lists) are passed directly
- Complex values (classes, dictionaries) use the type-params pattern
For example, with dictionaries:
```json
{
"browser_config": {
"type": "BrowserConfig",
"params": {
"headless": true, // Simple boolean - direct value
"viewport": { // Complex dictionary - needs type-params
"type": "dict",
"value": {
"width": 1200,
"height": 800
}
}
}
}
}
```
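To make the wrapping rule concrete, here is a small stand-alone helper (illustrative only, not part of the SDK) that applies it to plain Python values:

```python
def wrap_value(value):
    """Apply the serialization rules above: primitives pass through
    directly, lists are wrapped element-wise, and plain dictionaries
    get the {"type": "dict", "value": ...} envelope the API expects."""
    if isinstance(value, dict):
        return {"type": "dict", "value": value}
    if isinstance(value, list):
        return [wrap_value(v) for v in value]
    return value  # str, int, float, bool, None: passed directly

# Example: building a browser_config payload by hand
browser_config = {
    "type": "BrowserConfig",
    "params": {
        "headless": wrap_value(True),
        "viewport": wrap_value({"width": 1200, "height": 800}),
    },
}
```

This reproduces exactly the JSON shown above for the viewport example.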
#### Strategy Pattern and Nesting
Strategies (like chunking or content filtering) demonstrate why we need this structure. Consider this chunking configuration:
```json
{
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"chunking_strategy": {
"type": "RegexChunking", // Strategy implementation
"params": {
"patterns": ["\n\n", "\\.\\s+"]
}
}
}
}
}
```
Here, `chunking_strategy` accepts any chunking implementation. The `type` field tells the system which strategy to use, and `params` configures that specific strategy.
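One way to picture how a server might resolve the `type` field is a registry lookup. This is a toy sketch, not the actual crawl4ai implementation; the `RegexChunking` class below is a stand-in:

```python
# Toy strategy class standing in for the real crawl4ai implementation
class RegexChunking:
    def __init__(self, patterns=None):
        self.patterns = patterns or ["\n\n"]

# The `type` string selects an implementation from a registry...
STRATEGY_REGISTRY = {"RegexChunking": RegexChunking}

def build_strategy(spec: dict):
    """Instantiate the class named by `type`, passing `params` as kwargs."""
    cls = STRATEGY_REGISTRY[spec["type"]]
    return cls(**spec.get("params", {}))

# ...and `params` configures that specific strategy.
strategy = build_strategy({
    "type": "RegexChunking",
    "params": {"patterns": ["\n\n", "\\.\\s+"]},
})
```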
#### Complex Nested Example
Let's look at a more complex example with content filtering:
```json
{
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.48,
"threshold_type": "fixed"
}
}
}
}
}
}
}
```
This shows how deeply configurations can nest while maintaining a consistent structure.
#### Quick Grammar Overview
```
config := {
"type": string,
"params": {
key: simple_value | complex_value
}
}
simple_value := string | number | boolean | [simple_value]
complex_value := config | dict_value
dict_value := {
"type": "dict",
"value": object
}
```
#### Important Rules 🚨
- Always use the type-params pattern for class instances
- Use direct values for primitives (numbers, strings, booleans)
- Wrap dictionaries with {"type": "dict", "value": {...}}
- Arrays/lists are passed directly without type-params
- All parameters are optional unless specifically required
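The rules above can be checked mechanically before sending a request. A minimal validator sketch, assuming only the grammar as written (the server's actual validation is stricter):

```python
def is_valid_config(obj):
    """Recursively check a value against the type-params grammar above."""
    # Primitives are passed directly
    if isinstance(obj, (str, int, float, bool)) or obj is None:
        return True
    # Arrays/lists of simple or complex values
    if isinstance(obj, list):
        return all(is_valid_config(item) for item in obj)
    if isinstance(obj, dict):
        # Dict-value wrapper: {"type": "dict", "value": {...}}
        if obj.get("type") == "dict":
            return set(obj) == {"type", "value"} and isinstance(obj["value"], dict)
        # Class instance: {"type": <name>, "params": {...}}
        if set(obj) == {"type", "params"} and isinstance(obj["type"], str):
            return isinstance(obj["params"], dict) and all(
                is_valid_config(v) for v in obj["params"].values()
            )
        return False
    return False

crawler_config = {
    "type": "CrawlerRunConfig",
    "params": {
        "chunking_strategy": {
            "type": "RegexChunking",
            "params": {"patterns": ["\n\n", "\\.\\s+"]},
        }
    },
}
```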
#### Pro Tip 💡
The easiest way to get the correct structure is to:
1. Create configuration objects in Python
2. Use the `dump()` method to see their JSON representation
3. Use that JSON in your API calls
Example:
```python
from crawl4ai import CrawlerRunConfig, PruningContentFilter
config = CrawlerRunConfig(
content_filter=PruningContentFilter(threshold=0.48)
)
print(config.dump()) # Use this JSON in your API calls
```
#### More Examples
**Advanced Crawler Configuration**
```json
{
"urls": ["https://example.com"],
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"cache_mode": "bypass",
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.48,
"threshold_type": "fixed",
"min_word_threshold": 0
}
}
}
}
}
}
}
```
**Extraction Strategy**:
```json
{
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"extraction_strategy": {
"type": "JsonCssExtractionStrategy",
"params": {
"schema": {
"baseSelector": "article.post",
"fields": [
{"name": "title", "selector": "h1", "type": "text"},
{"name": "content", "selector": ".content", "type": "html"}
]
}
}
}
}
}
}
```
**LLM Extraction Strategy**
```json
{
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"extraction_strategy": {
"type": "LLMExtractionStrategy",
"params": {
"instruction": "Extract article title, author, publication date and main content",
"provider": "openai/gpt-4",
"api_token": "your-api-token",
"schema": {
"type": "dict",
"value": {
"title": "Article Schema",
"type": "object",
"properties": {
"title": {
"type": "string",
"description": "The article's headline"
},
"author": {
"type": "string",
"description": "The author's name"
},
"published_date": {
"type": "string",
"format": "date-time",
"description": "Publication date and time"
},
"content": {
"type": "string",
"description": "The main article content"
}
},
"required": ["title", "content"]
}
}
}
}
}
}
}
```
**Deep Crawler Example**
```json
{
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"deep_crawl_strategy": {
"type": "BFSDeepCrawlStrategy",
"params": {
"max_depth": 3,
"max_pages": 100,
"filter_chain": {
"type": "FastFilterChain",
"params": {
"filters": [
{
"type": "FastContentTypeFilter",
"params": {
"allowed_types": ["text/html", "application/xhtml+xml"]
}
},
{
"type": "FastDomainFilter",
"params": {
"allowed_domains": ["blog.*", "docs.*"],
"blocked_domains": ["ads.*", "analytics.*"]
}
},
{
"type": "FastURLPatternFilter",
"params": {
"allowed_patterns": ["^/blog/", "^/docs/"],
"blocked_patterns": [".*/ads/", ".*/sponsored/"]
}
}
]
}
},
"url_scorer": {
"type": "FastCompositeScorer",
"params": {
"scorers": [
{
"type": "FastKeywordRelevanceScorer",
"params": {
"keywords": ["tutorial", "guide", "documentation"],
"weight": 1.0
}
},
{
"type": "FastPathDepthScorer",
"params": {
"weight": 0.5,
"preferred_depth": 2
}
},
{
"type": "FastFreshnessScorer",
"params": {
"weight": 0.8,
"max_age_days": 365
}
}
]
}
}
}
}
}
}
}
```
### REST API Examples

Let's look at some practical examples:
#### Simple Crawl
```python
import requests
crawl_payload = {
"urls": ["https://example.com"],
"browser_config": {"headless": True},
"crawler_config": {"stream": False}
}
response = requests.post(
"http://localhost:8000/crawl",
# headers={"Authorization": f"Bearer {token}"}, # If JWT is enabled, more on this later
json=crawl_payload
)
print(response.json()) # Print the response for debugging
```
#### Streaming Results
```python
import json

async def test_stream_crawl(session, token: str):
"""Test the /crawl/stream endpoint with multiple URLs."""
url = "http://localhost:8000/crawl/stream"
payload = {
"urls": [
"https://example.com",
"https://example.com/page1",
"https://example.com/page2",
"https://example.com/page3",
],
"browser_config": {"headless": True, "viewport": {"width": 1200}},
        "crawler_config": {"stream": True, "cache_mode": "bypass"}
}
    headers = {}
    # headers = {"Authorization": f"Bearer {token}"}  # If JWT is enabled, more on this later
try:
async with session.post(url, json=payload, headers=headers) as response:
status = response.status
print(f"Status: {status} (Expected: 200)")
assert status == 200, f"Expected 200, got {status}"
# Read streaming response line-by-line (NDJSON)
async for line in response.content:
if line:
data = json.loads(line.decode('utf-8').strip())
print(f"Streamed Result: {json.dumps(data, indent=2)}")
except Exception as e:
print(f"Error in streaming crawl test: {str(e)}")
```
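The NDJSON handling in the loop above is independent of the HTTP client. Here is a stdlib-only sketch of the same parsing; the `url` and `success` field names in the simulated payload are hypothetical, so inspect a real response for the actual schema:

```python
import json

def parse_ndjson(raw: bytes):
    """Decode a streamed NDJSON body into a list of result dicts,
    skipping blank keep-alive lines, as the loop above does."""
    results = []
    for line in raw.splitlines():
        line = line.strip()
        if line:
            results.append(json.loads(line))
    return results

# Simulated stream chunk: two results separated by a blank keep-alive line
body = b'{"url": "https://example.com", "success": true}\n\n{"url": "https://example.com/page1", "success": true}\n'
for result in parse_ndjson(body):
    print(result["url"], result["success"])
```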
## Metrics & Monitoring
Keep an eye on your crawler with these endpoints:
- `/health` - Quick health check
- `/metrics` - Detailed Prometheus metrics
- `/schema` - Full API schema
Example health check:
```bash
curl http://localhost:8000/health
```
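In scripts and CI pipelines it is handy to block until the server reports healthy before sending work. A small sketch (the `wait_until_healthy` helper is ours, not part of the API):

```python
import time
import requests

def wait_until_healthy(base_url="http://localhost:8000", timeout=30.0):
    """Poll /health until the server answers 200, or raise after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            resp = requests.get(f"{base_url}/health", timeout=2)
            if resp.status_code == 200:
                return resp.json()  # e.g. {"status": "ok", "timestamp": ..., "version": ...}
        except requests.RequestException:
            pass  # server not up yet; keep polling
        time.sleep(1)
    raise TimeoutError(f"{base_url}/health did not become ready within {timeout}s")
```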
## Deployment Scenarios
> 🚧 Coming soon! We'll cover:
> - Kubernetes deployment
> - Cloud provider setups (AWS, GCP, Azure)
> - High-availability configurations
> - Load balancing strategies
## Complete Examples
Check out the `examples` folder in our repository for full working examples! Here are two to get you started:
- [Using the Client SDK](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_python_sdk_example.py)
- [Using the REST API](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_python_rest_api_example.py)
## Server Configuration
The server's behavior can be customized through the `config.yml` file. Let's explore how to configure your Crawl4AI server for optimal performance and security.
### Understanding config.yml
The configuration file is located at `deploy/docker/config.yml`. You can either modify this file before building the image or mount a custom configuration when running the container.
Here's a detailed breakdown of the configuration options:
```yaml
# Application Configuration
app:
title: "Crawl4AI API" # Server title in OpenAPI docs
version: "1.0.0" # API version
host: "0.0.0.0" # Listen on all interfaces
port: 8000 # Server port
reload: True # Enable hot reloading (development only)
timeout_keep_alive: 300 # Keep-alive timeout in seconds
# Rate Limiting Configuration
rate_limiting:
enabled: True # Enable/disable rate limiting
default_limit: "100/minute" # Rate limit format: "number/timeunit"
trusted_proxies: [] # List of trusted proxy IPs
storage_uri: "memory://" # Use "redis://localhost:6379" for production
# Security Configuration
security:
enabled: false # Master toggle for security features
jwt_enabled: true # Enable JWT authentication
https_redirect: True # Force HTTPS
trusted_hosts: ["*"] # Allowed hosts (use specific domains in production)
headers: # Security headers
x_content_type_options: "nosniff"
x_frame_options: "DENY"
content_security_policy: "default-src 'self'"
strict_transport_security: "max-age=63072000; includeSubDomains"
# Crawler Configuration
crawler:
memory_threshold_percent: 95.0 # Memory usage threshold
rate_limiter:
base_delay: [1.0, 2.0] # Min and max delay between requests
timeouts:
stream_init: 30.0 # Stream initialization timeout
batch_process: 300.0 # Batch processing timeout
# Logging Configuration
logging:
level: "INFO" # Log level (DEBUG, INFO, WARNING, ERROR)
format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
# Observability Configuration
observability:
prometheus:
enabled: True # Enable Prometheus metrics
endpoint: "/metrics" # Metrics endpoint
health_check:
endpoint: "/health" # Health check endpoint
```
### JWT Authentication
When `security.jwt_enabled` is set to `true` in your config.yml, all endpoints require JWT authentication via bearer tokens. Here's how it works:
#### Getting a Token
```http
POST /token
Content-Type: application/json
{
"email": "user@example.com"
}
```
The endpoint returns:
```json
{
"email": "user@example.com",
"access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOi...",
"token_type": "bearer"
}
```
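The same exchange in Python, using `requests` (the `bearer_headers` helper is ours for convenience, not part of the API):

```python
import requests

def bearer_headers(token):
    """Build the Authorization header expected by protected endpoints."""
    return {"Authorization": f"Bearer {token}"}

def get_token(email, base_url="http://localhost:8000"):
    """Request a JWT from the /token endpoint (requires jwt_enabled in config.yml)."""
    resp = requests.post(f"{base_url}/token", json={"email": email})
    resp.raise_for_status()
    return resp.json()["access_token"]

# token = get_token("user@example.com")
# requests.post("http://localhost:8000/crawl", json=payload, headers=bearer_headers(token))
```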
#### Using the Token
Add the token to your requests:
```bash
curl -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGci..." http://localhost:8000/crawl
```
Using the Python SDK:
```python
from crawl4ai.docker_client import Crawl4aiDockerClient
async with Crawl4aiDockerClient() as client:
# Authenticate first
await client.authenticate("user@example.com")
# Now all requests will include the token automatically
result = await client.crawl(urls=["https://example.com"])
```
#### Production Considerations 💡
The default implementation only verifies that the email's domain has valid MX records. For production use, consider:
- Email verification via OTP/magic links
- OAuth2 integration
- Rate limiting token generation
- Token expiration and refresh mechanisms
- IP-based restrictions
### Configuration Tips and Best Practices
1. **Production Settings** 🏭
```yaml
app:
reload: False # Disable reload in production
timeout_keep_alive: 120 # Lower timeout for better resource management
rate_limiting:
storage_uri: "redis://redis:6379" # Use Redis for distributed rate limiting
default_limit: "50/minute" # More conservative rate limit
security:
enabled: true # Enable all security features
trusted_hosts: ["your-domain.com"] # Restrict to your domain
```
2. **Development Settings** 🛠️
```yaml
app:
reload: True # Enable hot reloading
timeout_keep_alive: 300 # Longer timeout for debugging
logging:
level: "DEBUG" # More verbose logging
```
3. **High-Traffic Settings** 🚦
```yaml
crawler:
memory_threshold_percent: 85.0 # More conservative memory limit
rate_limiter:
base_delay: [2.0, 4.0] # More aggressive rate limiting
```
### Customizing Your Configuration
#### Method 1: Pre-build Configuration
```bash
# Copy and modify config before building
cd crawl4ai/deploy
vim custom-config.yml # Or use any editor
# Build with custom config
docker build --platform=linux/amd64 --no-cache -t crawl4ai:latest .
```
#### Method 2: Build-time Configuration
Use a custom config during build:
```bash
# Build with custom config
docker build --platform=linux/amd64 --no-cache \
--build-arg CONFIG_PATH=/path/to/custom-config.yml \
-t crawl4ai:latest .
```
#### Method 3: Runtime Configuration
```bash
# Mount custom config at runtime
docker run -d -p 8000:8000 \
-v $(pwd)/custom-config.yml:/app/config.yml \
crawl4ai-server:prod
```
> 💡 Note: When using Method 2, the path passed via `CONFIG_PATH` is resolved relative to the `deploy` directory (the Docker build context).
> 💡 Note: When using Method 3, ensure your custom config file has all required fields as the container will use this instead of the built-in config.
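If you run the server with Docker Compose, Method 3 maps to a volume entry. A sketch (service name and paths are placeholders for your own setup):

```yaml
services:
  crawl4ai:
    image: crawl4ai-server:prod
    ports:
      - "8000:8000"
    volumes:
      - ./custom-config.yml:/app/config.yml:ro  # mount custom config read-only
```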
### Configuration Recommendations
1. **Security First** 🔒
- Always enable security in production
- Use specific trusted_hosts instead of wildcards
- Set up proper rate limiting to protect your server
- Consider your environment before enabling HTTPS redirect
2. **Resource Management** 💻
- Adjust memory_threshold_percent based on available RAM
- Set timeouts according to your content size and network conditions
- Use Redis for rate limiting in multi-container setups
3. **Monitoring** 📊
- Enable Prometheus if you need metrics
- Set DEBUG logging in development, INFO in production
- Regular health check monitoring is crucial
4. **Performance Tuning** ⚡
- Start with conservative rate limiter delays
- Increase batch_process timeout for large content
- Adjust stream_init timeout based on initial response times
## Getting Help
We're here to help you succeed with Crawl4AI! Here's how to get support:
- 📖 Check our [full documentation](https://docs.crawl4ai.com)
- 🐛 Found a bug? [Open an issue](https://github.com/unclecode/crawl4ai/issues)
- 💬 Join our [Discord community](https://discord.gg/crawl4ai)
- ⭐ Star us on GitHub to show support!
## Summary
In this guide, we've covered everything you need to get started with Crawl4AI's Docker deployment:
- Building and running the Docker container
- Configuring the environment
- Making API requests with proper typing
- Using the Python SDK
- Monitoring your deployment
Remember, the examples in the `examples` folder show real-world usage patterns that you can adapt for your needs.
Keep exploring, and don't hesitate to reach out if you need help! We're building something amazing together. 🚀
Happy crawling! 🕷️

View File

@@ -1,442 +0,0 @@
import os
import json
import asyncio
from typing import List, Tuple
import logging
from typing import Optional, AsyncGenerator
from urllib.parse import unquote
from fastapi import HTTPException, Request, status
from fastapi.background import BackgroundTasks
from fastapi.responses import JSONResponse
from redis import asyncio as aioredis
from crawl4ai import (
AsyncWebCrawler,
CrawlerRunConfig,
LLMExtractionStrategy,
CacheMode,
BrowserConfig,
MemoryAdaptiveDispatcher,
RateLimiter
)
from crawl4ai.utils import perform_completion_with_backoff
from crawl4ai.content_filter_strategy import (
PruningContentFilter,
BM25ContentFilter,
LLMContentFilter
)
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
from crawl4ai.content_scraping_strategy import LXMLWebScrapingStrategy
from utils import (
TaskStatus,
FilterType,
get_base_url,
is_task_id,
should_cleanup_task,
decode_redis_hash
)
logger = logging.getLogger(__name__)
async def handle_llm_qa(
url: str,
query: str,
config: dict
) -> str:
"""Process QA using LLM with crawled content as context."""
try:
# Extract base URL by finding last '?q=' occurrence
last_q_index = url.rfind('?q=')
if last_q_index != -1:
url = url[:last_q_index]
# Get markdown content
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url)
if not result.success:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=result.error_message
)
content = result.markdown_v2.fit_markdown
# Create prompt and get LLM response
prompt = f"""Use the following content as context to answer the question.
Content:
{content}
Question: {query}
Answer:"""
response = perform_completion_with_backoff(
provider=config["llm"]["provider"],
prompt_with_variables=prompt,
api_token=os.environ.get(config["llm"].get("api_key_env", ""))
)
return response.choices[0].message.content
except Exception as e:
logger.error(f"QA processing error: {str(e)}", exc_info=True)
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
async def process_llm_extraction(
redis: aioredis.Redis,
config: dict,
task_id: str,
url: str,
instruction: str,
schema: Optional[str] = None,
cache: str = "0"
) -> None:
"""Process LLM extraction in background."""
try:
# If config['llm'] has api_key then ignore the api_key_env
api_key = ""
if "api_key" in config["llm"]:
api_key = config["llm"]["api_key"]
else:
api_key = os.environ.get(config["llm"].get("api_key_env", None), "")
llm_strategy = LLMExtractionStrategy(
provider=config["llm"]["provider"],
api_token=api_key,
instruction=instruction,
schema=json.loads(schema) if schema else None,
)
cache_mode = CacheMode.ENABLED if cache == "1" else CacheMode.WRITE_ONLY
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url=url,
config=CrawlerRunConfig(
extraction_strategy=llm_strategy,
scraping_strategy=LXMLWebScrapingStrategy(),
cache_mode=cache_mode
)
)
if not result.success:
await redis.hset(f"task:{task_id}", mapping={
"status": TaskStatus.FAILED,
"error": result.error_message
})
return
try:
content = json.loads(result.extracted_content)
except json.JSONDecodeError:
content = result.extracted_content
await redis.hset(f"task:{task_id}", mapping={
"status": TaskStatus.COMPLETED,
"result": json.dumps(content)
})
except Exception as e:
logger.error(f"LLM extraction error: {str(e)}", exc_info=True)
await redis.hset(f"task:{task_id}", mapping={
"status": TaskStatus.FAILED,
"error": str(e)
})
async def handle_markdown_request(
url: str,
filter_type: FilterType,
query: Optional[str] = None,
cache: str = "0",
config: Optional[dict] = None
) -> str:
"""Handle markdown generation requests."""
try:
decoded_url = unquote(url)
if not decoded_url.startswith(('http://', 'https://')):
decoded_url = 'https://' + decoded_url
if filter_type == FilterType.RAW:
md_generator = DefaultMarkdownGenerator()
else:
content_filter = {
FilterType.FIT: PruningContentFilter(),
FilterType.BM25: BM25ContentFilter(user_query=query or ""),
FilterType.LLM: LLMContentFilter(
provider=config["llm"]["provider"],
api_token=os.environ.get(config["llm"].get("api_key_env", None), ""),
instruction=query or "Extract main content"
)
}[filter_type]
md_generator = DefaultMarkdownGenerator(content_filter=content_filter)
cache_mode = CacheMode.ENABLED if cache == "1" else CacheMode.WRITE_ONLY
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url=decoded_url,
config=CrawlerRunConfig(
markdown_generator=md_generator,
scraping_strategy=LXMLWebScrapingStrategy(),
cache_mode=cache_mode
)
)
if not result.success:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=result.error_message
)
return (result.markdown_v2.raw_markdown
if filter_type == FilterType.RAW
else result.markdown_v2.fit_markdown)
except Exception as e:
logger.error(f"Markdown error: {str(e)}", exc_info=True)
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
async def handle_llm_request(
redis: aioredis.Redis,
background_tasks: BackgroundTasks,
request: Request,
input_path: str,
query: Optional[str] = None,
schema: Optional[str] = None,
cache: str = "0",
config: Optional[dict] = None
) -> JSONResponse:
"""Handle LLM extraction requests."""
base_url = get_base_url(request)
try:
if is_task_id(input_path):
return await handle_task_status(
redis, input_path, base_url
)
if not query:
return JSONResponse({
"message": "Please provide an instruction",
"_links": {
"example": {
"href": f"{base_url}/llm/{input_path}?q=Extract+main+content",
"title": "Try this example"
}
}
})
return await create_new_task(
redis,
background_tasks,
input_path,
query,
schema,
cache,
base_url,
config
)
except Exception as e:
logger.error(f"LLM endpoint error: {str(e)}", exc_info=True)
return JSONResponse({
"error": str(e),
"_links": {
"retry": {"href": str(request.url)}
}
}, status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
async def handle_task_status(
redis: aioredis.Redis,
task_id: str,
base_url: str
) -> JSONResponse:
"""Handle task status check requests."""
task = await redis.hgetall(f"task:{task_id}")
if not task:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Task not found"
)
task = decode_redis_hash(task)
response = create_task_response(task, task_id, base_url)
if task["status"] in [TaskStatus.COMPLETED, TaskStatus.FAILED]:
if should_cleanup_task(task["created_at"]):
await redis.delete(f"task:{task_id}")
return JSONResponse(response)
async def create_new_task(
redis: aioredis.Redis,
background_tasks: BackgroundTasks,
input_path: str,
query: str,
schema: Optional[str],
cache: str,
base_url: str,
config: dict
) -> JSONResponse:
"""Create and initialize a new task."""
decoded_url = unquote(input_path)
if not decoded_url.startswith(('http://', 'https://')):
decoded_url = 'https://' + decoded_url
from datetime import datetime
task_id = f"llm_{int(datetime.now().timestamp())}_{id(background_tasks)}"
await redis.hset(f"task:{task_id}", mapping={
"status": TaskStatus.PROCESSING,
"created_at": datetime.now().isoformat(),
"url": decoded_url
})
background_tasks.add_task(
process_llm_extraction,
redis,
config,
task_id,
decoded_url,
query,
schema,
cache
)
return JSONResponse({
"task_id": task_id,
"status": TaskStatus.PROCESSING,
"url": decoded_url,
"_links": {
"self": {"href": f"{base_url}/llm/{task_id}"},
"status": {"href": f"{base_url}/llm/{task_id}"}
}
})
def create_task_response(task: dict, task_id: str, base_url: str) -> dict:
"""Create response for task status check."""
response = {
"task_id": task_id,
"status": task["status"],
"created_at": task["created_at"],
"url": task["url"],
"_links": {
"self": {"href": f"{base_url}/llm/{task_id}"},
"refresh": {"href": f"{base_url}/llm/{task_id}"}
}
}
if task["status"] == TaskStatus.COMPLETED:
response["result"] = json.loads(task["result"])
elif task["status"] == TaskStatus.FAILED:
response["error"] = task["error"]
return response
async def stream_results(crawler: AsyncWebCrawler, results_gen: AsyncGenerator) -> AsyncGenerator[bytes, None]:
"""Stream results with heartbeats and completion markers."""
import json
from utils import datetime_handler
try:
async for result in results_gen:
try:
result_dict = result.model_dump()
logger.info(f"Streaming result for {result_dict.get('url', 'unknown')}")
data = json.dumps(result_dict, default=datetime_handler) + "\n"
yield data.encode('utf-8')
except Exception as e:
logger.error(f"Serialization error: {e}")
error_response = {"error": str(e), "url": getattr(result, 'url', 'unknown')}
yield (json.dumps(error_response) + "\n").encode('utf-8')
yield json.dumps({"status": "completed"}).encode('utf-8')
except asyncio.CancelledError:
logger.warning("Client disconnected during streaming")
finally:
try:
await crawler.close()
except Exception as e:
logger.error(f"Crawler cleanup error: {e}")
async def handle_crawl_request(
urls: List[str],
browser_config: dict,
crawler_config: dict,
config: dict
) -> dict:
"""Handle non-streaming crawl requests."""
try:
browser_config = BrowserConfig.load(browser_config)
crawler_config = CrawlerRunConfig.load(crawler_config)
dispatcher = MemoryAdaptiveDispatcher(
memory_threshold_percent=config["crawler"]["memory_threshold_percent"],
rate_limiter=RateLimiter(
base_delay=tuple(config["crawler"]["rate_limiter"]["base_delay"])
)
)
async with AsyncWebCrawler(config=browser_config) as crawler:
results = await crawler.arun_many(
urls=urls,
config=crawler_config,
dispatcher=dispatcher
)
return {
"success": True,
"results": [result.model_dump() for result in results]
}
except Exception as e:
logger.error(f"Crawl error: {str(e)}", exc_info=True)
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
async def handle_stream_crawl_request(
urls: List[str],
browser_config: dict,
crawler_config: dict,
config: dict
) -> Tuple[AsyncWebCrawler, AsyncGenerator]:
"""Handle streaming crawl requests."""
try:
browser_config = BrowserConfig.load(browser_config)
browser_config.verbose = True
crawler_config = CrawlerRunConfig.load(crawler_config)
crawler_config.scraping_strategy = LXMLWebScrapingStrategy()
dispatcher = MemoryAdaptiveDispatcher(
memory_threshold_percent=config["crawler"]["memory_threshold_percent"],
rate_limiter=RateLimiter(
base_delay=tuple(config["crawler"]["rate_limiter"]["base_delay"])
)
)
crawler = AsyncWebCrawler(config=browser_config)
await crawler.start()
results_gen = await crawler.arun_many(
urls=urls,
config=crawler_config,
dispatcher=dispatcher
)
return crawler, results_gen
except Exception as e:
if 'crawler' in locals():
await crawler.close()
logger.error(f"Stream crawl error: {str(e)}", exc_info=True)
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)

View File

@@ -1,46 +0,0 @@
import os
from datetime import datetime, timedelta, timezone
from typing import Dict, Optional
from jwt import JWT, jwk_from_dict
from jwt.utils import get_int_from_datetime
from fastapi import Depends, HTTPException
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from pydantic import EmailStr
from pydantic.main import BaseModel
import base64
instance = JWT()
security = HTTPBearer()
SECRET_KEY = os.environ.get("SECRET_KEY", "mysecret")
ACCESS_TOKEN_EXPIRE_MINUTES = 60
def get_jwk_from_secret(secret: str):
"""Convert a secret string into a JWK object."""
secret_bytes = secret.encode('utf-8')
b64_secret = base64.urlsafe_b64encode(secret_bytes).rstrip(b'=').decode('utf-8')
return jwk_from_dict({"kty": "oct", "k": b64_secret})
def create_access_token(data: dict, expires_delta: Optional[timedelta] = None) -> str:
"""Create a JWT access token with an expiration."""
to_encode = data.copy()
expire = datetime.now(timezone.utc) + (expires_delta or timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES))
to_encode.update({"exp": get_int_from_datetime(expire)})
signing_key = get_jwk_from_secret(SECRET_KEY)
return instance.encode(to_encode, signing_key, alg='HS256')
def verify_token(credentials: HTTPAuthorizationCredentials = Depends(security)) -> Dict:
"""Verify the JWT token from the Authorization header."""
token = credentials.credentials
verifying_key = get_jwk_from_secret(SECRET_KEY)
try:
payload = instance.decode(token, verifying_key, do_time_check=True, algorithms='HS256')
return payload
except Exception:
raise HTTPException(status_code=401, detail="Invalid or expired token")
def get_token_dependency(config: Dict):
"""Return the token dependency if JWT is enabled, else None."""
return verify_token if config.get("security", {}).get("jwt_enabled", False) else None
class TokenRequest(BaseModel):
email: EmailStr

View File

@@ -1,71 +0,0 @@
# Application Configuration
app:
title: "Crawl4AI API"
version: "1.0.0"
host: "0.0.0.0"
port: 8000
reload: True
timeout_keep_alive: 300
# Default LLM Configuration
llm:
provider: "openai/gpt-4o-mini"
api_key_env: "OPENAI_API_KEY"
# api_key: sk-... # If you pass the API key directly then api_key_env will be ignored
# Redis Configuration
redis:
host: "localhost"
port: 6379
db: 0
password: ""
ssl: False
ssl_cert_reqs: None
ssl_ca_certs: None
ssl_certfile: None
ssl_keyfile: None
# Rate Limiting Configuration
rate_limiting:
enabled: True
default_limit: "1000/minute"
trusted_proxies: []
storage_uri: "memory://" # Use "redis://localhost:6379" for production
# Security Configuration
security:
enabled: true
jwt_enabled: true
https_redirect: false
trusted_hosts: ["*"]
headers:
x_content_type_options: "nosniff"
x_frame_options: "DENY"
content_security_policy: "default-src 'self'"
strict_transport_security: "max-age=63072000; includeSubDomains"
# Crawler Configuration
crawler:
memory_threshold_percent: 95.0
rate_limiter:
base_delay: [1.0, 2.0]
timeouts:
stream_init: 30.0 # Timeout for stream initialization
batch_process: 300.0 # Timeout for batch processing
# Logging Configuration
logging:
level: "INFO"
format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
# Observability Configuration
observability:
prometheus:
enabled: True
endpoint: "/metrics"
health_check:
endpoint: "/health"

View File

@@ -1,10 +0,0 @@
crawl4ai
fastapi
uvicorn
gunicorn>=23.0.0
slowapi>=0.1.9
prometheus-fastapi-instrumentator>=7.0.2
redis>=5.2.1
jwt>=1.3.1
dnspython>=2.7.0
email-validator>=2.2.0

View File

@@ -1,181 +0,0 @@
import os
import sys
import time
from typing import List, Optional, Dict
from fastapi import FastAPI, HTTPException, Request, Query, Path, Depends
from fastapi.responses import StreamingResponse, RedirectResponse, PlainTextResponse, JSONResponse
from fastapi.middleware.httpsredirect import HTTPSRedirectMiddleware
from fastapi.middleware.trustedhost import TrustedHostMiddleware
from pydantic import BaseModel, Field
from slowapi import Limiter
from slowapi.util import get_remote_address
from prometheus_fastapi_instrumentator import Instrumentator
from redis import asyncio as aioredis
sys.path.append(os.path.dirname(os.path.realpath(__file__)))
from utils import FilterType, load_config, setup_logging, verify_email_domain
from api import (
handle_markdown_request,
handle_llm_qa,
handle_stream_crawl_request,
handle_crawl_request,
stream_results
)
from auth import create_access_token, get_token_dependency, TokenRequest # Import from auth.py
__version__ = "0.2.6"
class CrawlRequest(BaseModel):
urls: List[str] = Field(min_length=1, max_length=100)
browser_config: Optional[Dict] = Field(default_factory=dict)
crawler_config: Optional[Dict] = Field(default_factory=dict)
# Load configuration and setup
config = load_config()
setup_logging(config)
# Initialize Redis
redis = aioredis.from_url(config["redis"].get("uri", "redis://localhost"))
# Initialize rate limiter
limiter = Limiter(
key_func=get_remote_address,
default_limits=[config["rate_limiting"]["default_limit"]],
storage_uri=config["rate_limiting"]["storage_uri"]
)
app = FastAPI(
title=config["app"]["title"],
version=config["app"]["version"]
)
# Configure middleware
def setup_security_middleware(app, config):
sec_config = config.get("security", {})
if sec_config.get("enabled", False):
if sec_config.get("https_redirect", False):
app.add_middleware(HTTPSRedirectMiddleware)
if sec_config.get("trusted_hosts", []) != ["*"]:
app.add_middleware(TrustedHostMiddleware, allowed_hosts=sec_config["trusted_hosts"])
setup_security_middleware(app, config)
# Prometheus instrumentation
if config["observability"]["prometheus"]["enabled"]:
Instrumentator().instrument(app).expose(app)
# Get token dependency based on config
token_dependency = get_token_dependency(config)
# Middleware for security headers
@app.middleware("http")
async def add_security_headers(request: Request, call_next):
response = await call_next(request)
if config["security"]["enabled"]:
response.headers.update(config["security"]["headers"])
return response
# Token endpoint (always available, but usage depends on config)
@app.post("/token")
async def get_token(request_data: TokenRequest):
if not verify_email_domain(request_data.email):
raise HTTPException(status_code=400, detail="Invalid email domain")
token = create_access_token({"sub": request_data.email})
return {"email": request_data.email, "access_token": token, "token_type": "bearer"}
# Endpoints with conditional auth
@app.get("/md/{url:path}")
@limiter.limit(config["rate_limiting"]["default_limit"])
async def get_markdown(
request: Request,
url: str,
f: FilterType = FilterType.FIT,
q: Optional[str] = None,
c: Optional[str] = "0",
token_data: Optional[Dict] = Depends(token_dependency)
):
result = await handle_markdown_request(url, f, q, c, config)
return PlainTextResponse(result)
@app.get("/llm/{url:path}", description="URL should be without http/https prefix")
async def llm_endpoint(
request: Request,
url: str = Path(...),
q: Optional[str] = Query(None),
token_data: Optional[Dict] = Depends(token_dependency)
):
if not q:
raise HTTPException(status_code=400, detail="Query parameter 'q' is required")
if not url.startswith(('http://', 'https://')):
url = 'https://' + url
try:
answer = await handle_llm_qa(url, q, config)
return JSONResponse({"answer": answer})
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@app.get("/schema")
async def get_schema():
from crawl4ai import BrowserConfig, CrawlerRunConfig
return {"browser": BrowserConfig().dump(), "crawler": CrawlerRunConfig().dump()}
@app.get(config["observability"]["health_check"]["endpoint"])
async def health():
return {"status": "ok", "timestamp": time.time(), "version": __version__}
@app.get(config["observability"]["prometheus"]["endpoint"])
async def metrics():
return RedirectResponse(url=config["observability"]["prometheus"]["endpoint"])
@app.post("/crawl")
@limiter.limit(config["rate_limiting"]["default_limit"])
async def crawl(
request: Request,
crawl_request: CrawlRequest,
token_data: Optional[Dict] = Depends(token_dependency)
):
if not crawl_request.urls:
raise HTTPException(status_code=400, detail="At least one URL required")
results = await handle_crawl_request(
urls=crawl_request.urls,
browser_config=crawl_request.browser_config,
crawler_config=crawl_request.crawler_config,
config=config
)
return JSONResponse(results)
@app.post("/crawl/stream")
@limiter.limit(config["rate_limiting"]["default_limit"])
async def crawl_stream(
request: Request,
crawl_request: CrawlRequest,
token_data: Optional[Dict] = Depends(token_dependency)
):
if not crawl_request.urls:
raise HTTPException(status_code=400, detail="At least one URL required")
crawler, results_gen = await handle_stream_crawl_request(
urls=crawl_request.urls,
browser_config=crawl_request.browser_config,
crawler_config=crawl_request.crawler_config,
config=config
)
return StreamingResponse(
stream_results(crawler, results_gen),
media_type='application/x-ndjson',
headers={'Cache-Control': 'no-cache', 'Connection': 'keep-alive', 'X-Stream-Status': 'active'}
)
if __name__ == "__main__":
import uvicorn
uvicorn.run(
"server:app",
host=config["app"]["host"],
port=config["app"]["port"],
reload=config["app"]["reload"],
timeout_keep_alive=config["app"]["timeout_keep_alive"]
)

View File

@@ -1,12 +0,0 @@
[supervisord]
nodaemon=true
[program:redis]
command=redis-server
autorestart=true
priority=10
[program:gunicorn]
command=gunicorn --bind 0.0.0.0:8000 --workers 4 --threads 2 --timeout 300 --graceful-timeout 60 --keep-alive 65 --log-level debug --worker-class uvicorn.workers.UvicornWorker --max-requests 1000 --max-requests-jitter 50 server:app
autorestart=true
priority=20

View File

@@ -1,66 +0,0 @@
import dns.resolver
import logging
import yaml
from datetime import datetime
from enum import Enum
from pathlib import Path
from fastapi import Request
from typing import Dict, Optional
class TaskStatus(str, Enum):
PROCESSING = "processing"
FAILED = "failed"
COMPLETED = "completed"
class FilterType(str, Enum):
RAW = "raw"
FIT = "fit"
BM25 = "bm25"
LLM = "llm"
def load_config() -> Dict:
"""Load and return application configuration."""
config_path = Path(__file__).parent / "config.yml"
with open(config_path, "r") as config_file:
return yaml.safe_load(config_file)
def setup_logging(config: Dict) -> None:
"""Configure application logging."""
logging.basicConfig(
level=config["logging"]["level"],
format=config["logging"]["format"]
)
def get_base_url(request: Request) -> str:
"""Get base URL including scheme and host."""
return f"{request.url.scheme}://{request.url.netloc}"
def is_task_id(value: str) -> bool:
"""Check if the value matches task ID pattern."""
return value.startswith("llm_") and "_" in value
def datetime_handler(obj: any) -> Optional[str]:
"""Handle datetime serialization for JSON."""
if hasattr(obj, 'isoformat'):
return obj.isoformat()
raise TypeError(f"Object of type {type(obj)} is not JSON serializable")
def should_cleanup_task(created_at: str) -> bool:
"""Check if task should be cleaned up based on creation time."""
created = datetime.fromisoformat(created_at)
return (datetime.now() - created).total_seconds() > 3600
def decode_redis_hash(hash_data: Dict[bytes, bytes]) -> Dict[str, str]:
"""Decode Redis hash data from bytes to strings."""
return {k.decode('utf-8'): v.decode('utf-8') for k, v in hash_data.items()}
def verify_email_domain(email: str) -> bool:
try:
domain = email.split('@')[1]
# Try to resolve MX records for the domain.
records = dns.resolver.resolve(domain, 'MX')
return True if records else False
except Exception as e:
return False

View File

@@ -1,77 +0,0 @@
# Crawl4AI API Quickstart
This document shows how to generate an API token and use it to call the `/crawl` and `/md` endpoints.
---
## 1. Crawl Example
Send a POST request to `/crawl` with the following JSON payload:
```json
{
"urls": ["https://example.com"],
"browser_config": { "headless": true, "verbose": true },
"crawler_config": { "stream": false, "cache_mode": "enabled" }
}
```
**cURL Command:**
```bash
curl -X POST "https://api.crawl4ai.com/crawl" \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"urls": ["https://example.com"],
"browser_config": {"headless": true, "verbose": true},
"crawler_config": {"stream": false, "cache_mode": "enabled"}
}'
```
---
## 2. Markdown Retrieval Example
To retrieve markdown from a given URL (e.g., `https://example.com`), use:
```bash
curl -X GET "https://api.crawl4ai.com/md/example.com" \
-H "Authorization: Bearer YOUR_API_TOKEN"
```
---
## 3. Python Code Example (Using `requests`)
Below is a sample Python script that demonstrates using the `requests` library to call the API endpoints:
```python
import requests
BASE_URL = "https://api.crawl4ai.com"
TOKEN = "YOUR_API_TOKEN" # Replace with your actual token
headers = {
"Authorization": f"Bearer {TOKEN}",
"Content-Type": "application/json"
}
# Crawl endpoint example
crawl_payload = {
"urls": ["https://example.com"],
"browser_config": {"headless": True, "verbose": True},
"crawler_config": {"stream": False, "cache_mode": "enabled"}
}
crawl_response = requests.post(f"{BASE_URL}/crawl", json=crawl_payload, headers=headers)
print("Crawl Response:", crawl_response.json())
# /md endpoint example
md_response = requests.get(f"{BASE_URL}/md/example.com", headers=headers)
print("Markdown Content:", md_response.text)
```
---
Happy crawling!

View File

@@ -1,2 +0,0 @@
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf

View File

@@ -1,55 +0,0 @@
server {
    listen 80;
    server_name api.crawl4ai.com;
    # Main logging settings
    error_log /var/log/nginx/error.log debug;
    access_log /var/log/nginx/access.log combined buffer=512k flush=1m;
    # Timeout and buffering settings
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
    send_timeout 300;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    # Health check location
    location /health {
        proxy_pass http://127.0.0.1:8000/health;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
    # Main proxy for application endpoints
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header X-Debug-Info $request_uri;
        proxy_request_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
    }
    # New endpoint: serve Nginx error log
    location /nginx/error {
        # Using "alias" to serve the error log file
        alias /var/log/nginx/error.log;
        # Optionally, you might restrict access with "allow" and "deny" directives.
    }
    # New endpoint: serve Nginx access log
    location /nginx/access {
        alias /var/log/nginx/access.log;
    }
    client_max_body_size 10M;
    client_body_buffer_size 128k;
}

View File

@@ -1 +0,0 @@
v0.1.0

View File

@@ -554,7 +554,7 @@ async def test_stream_crawl(session, token: str):
             "https://example.com/page3",
         ],
         "browser_config": {"headless": True, "viewport": {"width": 1200}},
-        "crawler_config": {"stream": True, "cache_mode": "aggressive"}
+        "crawler_config": {"stream": True, "cache_mode": "bypass"}
     }
     # headers = {"Authorization": f"Bearer {token}"} # If JWT is enabled, more on this later

View File

@@ -2,6 +2,7 @@ import os
 import json
 import asyncio
 from typing import List, Tuple
+from functools import partial
 import logging
 from typing import Optional, AsyncGenerator
@@ -388,12 +389,13 @@ async def handle_crawl_request(
         )
     async with AsyncWebCrawler(config=browser_config) as crawler:
-        results = await crawler.arun_many(
-            urls=urls,
-            config=crawler_config,
-            dispatcher=dispatcher
-        )
+        results = []
+        func = getattr(crawler, "arun" if len(urls) == 1 else "arun_many")
+        partial_func = partial(func,
+                               urls[0] if len(urls) == 1 else urls,
+                               config=crawler_config,
+                               dispatcher=dispatcher)
+        results = await partial_func()
     return {
         "success": True,
         "results": [result.model_dump() for result in results]
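The refactor above selects `arun` or `arun_many` by name and binds its arguments with `functools.partial`. A minimal, self-contained sketch of that dispatch idiom, using a stand-in class instead of the real `AsyncWebCrawler`:

```python
import asyncio
from functools import partial

class FakeCrawler:
    """Stand-in for AsyncWebCrawler, only to illustrate the dispatch pattern."""
    async def arun(self, url, config=None):
        return f"crawled {url}"

    async def arun_many(self, urls, config=None):
        return [f"crawled {u}" for u in urls]

async def dispatch_crawl(crawler, urls, config=None):
    # Pick the single- or multi-URL method by name, then bind its arguments.
    func = getattr(crawler, "arun" if len(urls) == 1 else "arun_many")
    partial_func = partial(func, urls[0] if len(urls) == 1 else urls, config=config)
    return await partial_func()

single = asyncio.run(dispatch_crawl(FakeCrawler(), ["https://example.com"]))
many = asyncio.run(dispatch_crawl(FakeCrawler(), ["https://a.com", "https://b.com"]))
```

Note that with this idiom the single-URL path returns one result rather than a list, which the caller has to account for.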

View File

@@ -1,63 +0,0 @@
FROM --platform=linux/amd64 python:3.10-slim
# Install system dependencies required for Chromium and Git
RUN apt-get update && apt-get install -y \
python3-dev \
pkg-config \
libjpeg-dev \
gcc \
build-essential \
libnss3 \
libnspr4 \
libatk1.0-0 \
libatk-bridge2.0-0 \
libcups2 \
libdrm2 \
libxkbcommon0 \
libxcomposite1 \
libxdamage1 \
libxfixes3 \
libxrandr2 \
libgbm1 \
libasound2 \
libpango-1.0-0 \
libcairo2 \
procps \
git \
socat \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Make a directory for crawl4ai and call it crawl4ai_repo
# RUN mkdir crawl4ai_repo
# # Clone Crawl4ai from the next branch and install it
# RUN git clone --branch next https://github.com/unclecode/crawl4ai.git ./crawl4ai_repo \
# && cd crawl4ai_repo \
# && pip install . \
# && cd .. \
# && rm -rf crawl4ai_repo
RUN python3 -m venv /app/venv
ENV PATH="/app/venv/bin:$PATH"
# RUN pip install git+https://github.com/unclecode/crawl4ai.git@next
# Copy requirements and install remaining dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy application files
COPY resources /app/resources
COPY main.py .
COPY start.sh .
# Set permissions for Chrome binary and start script
RUN chmod +x /app/resources/chrome/headless_shell && \
chmod -R 755 /app/resources/chrome && \
chmod +x start.sh
ENV FUNCTION_TARGET=crawl
EXPOSE 8080 9223
CMD ["/app/start.sh"]

View File

@@ -1,8 +0,0 @@
project_id: PROJECT_ID
region: REGION_NAME
artifact_repo: ARTIFACT_REPO_NAME
function_name: FUNCTION_NAME
memory: "2048MB"
timeout: "540s"
local_image: "gcr.io/ARTIFACT_REPO_NAME/crawl4ai:latest"
test_query_url: "https://example.com"

View File

@@ -1,187 +0,0 @@
#!/usr/bin/env python3
import argparse
import subprocess
import sys
import yaml
import requests
def run_command(cmd, explanation, require_confirm=True, allow_already_exists=False):
    print("\n=== {} ===".format(explanation))
    if require_confirm:
        input("Press Enter to run: [{}]\n".format(cmd))
    print("Running: {}".format(cmd))
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    if result.returncode != 0:
        if allow_already_exists and "ALREADY_EXISTS" in result.stderr:
            print("Repository already exists, skipping creation.")
            return ""
        print("Error:\n{}".format(result.stderr))
        sys.exit(1)
    out = result.stdout.strip()
    if out:
        print("Output:\n{}".format(out))
    return out

def load_config():
    try:
        with open("config.yml", "r") as f:
            config = yaml.safe_load(f)
    except Exception as e:
        print("Failed to load config.yml: {}".format(e))
        sys.exit(1)
    required = ["project_id", "region", "artifact_repo", "function_name", "local_image"]
    for key in required:
        if key not in config or not config[key]:
            print("Missing required config parameter: {}".format(key))
            sys.exit(1)
    return config

def deploy_function(config):
    project_id = config["project_id"]
    region = config["region"]
    artifact_repo = config["artifact_repo"]
    function_name = config["function_name"]
    memory = config.get("memory", "2048MB")
    timeout = config.get("timeout", "540s")
    local_image = config["local_image"]
    test_query_url = config.get("test_query_url", "https://example.com")
    # Repository image format: "<region>-docker.pkg.dev/<project_id>/<artifact_repo>/<function_name>:latest"
    repo_image = f"{region}-docker.pkg.dev/{project_id}/{artifact_repo}/{function_name}:latest"
    # 1. Create Artifact Registry repository (skip if exists)
    cmd = f"gcloud artifacts repositories create {artifact_repo} --repository-format=docker --location={region} --project={project_id}"
    run_command(cmd, "Creating Artifact Registry repository (if it doesn't exist)", allow_already_exists=True)
    # 2. Tag the local Docker image with the repository image name
    cmd = f"docker tag {local_image} {repo_image}"
    run_command(cmd, "Tagging Docker image for Artifact Registry")
    # 3. Authenticate Docker to Artifact Registry
    cmd = f"gcloud auth configure-docker {region}-docker.pkg.dev"
    run_command(cmd, "Authenticating Docker to Artifact Registry")
    # 4. Push the tagged Docker image to Artifact Registry
    cmd = f"docker push {repo_image}"
    run_command(cmd, "Pushing Docker image to Artifact Registry")
    # 5. Deploy the Cloud Function using the custom container
    cmd = (
        f"gcloud beta functions deploy {function_name} "
        f"--gen2 "
        f"--runtime=python310 "
        f"--entry-point=crawl "
        f"--region={region} "
        f"--docker-repository={region}-docker.pkg.dev/{project_id}/{artifact_repo} "
        f"--trigger-http "
        f"--memory={memory} "
        f"--timeout={timeout} "
        f"--project={project_id}"
    )
    run_command(cmd, "Deploying Cloud Function using custom container")
    # 6. Set the Cloud Function to allow public (unauthenticated) invocations
    cmd = (
        f"gcloud functions add-iam-policy-binding {function_name} "
        f"--region={region} "
        f"--member='allUsers' "
        f"--role='roles/cloudfunctions.invoker' "
        f"--project={project_id} "
        f"--quiet"
    )
    run_command(cmd, "Setting Cloud Function IAM to allow public invocations")
    # 7. Retrieve the deployed Cloud Function URL
    cmd = (
        f"gcloud functions describe {function_name} "
        f"--region={region} "
        f"--project={project_id} "
        f"--format='value(serviceConfig.uri)'"
    )
    deployed_url = run_command(cmd, "Extracting deployed Cloud Function URL", require_confirm=False)
    print("\nDeployed URL: {}\n".format(deployed_url))
    # 8. Test the deployed function
    test_url = f"{deployed_url}?url={test_query_url}"
    print("Testing function with: {}".format(test_url))
    try:
        response = requests.get(test_url)
        print("Response status: {}".format(response.status_code))
        print("Response body:\n{}".format(response.text))
        if response.status_code == 200:
            print("Test successful!")
        else:
            print("Non-200 response; check function logs.")
    except Exception as e:
        print("Test request error: {}".format(e))
        sys.exit(1)
    # 9. Final usage help
    print("\nDeployment complete!")
    print("Invoke your function with:")
    print(f"curl '{deployed_url}?url={test_query_url}'")
    print("For further instructions, refer to your documentation.")

def delete_function(config):
    project_id = config["project_id"]
    region = config["region"]
    function_name = config["function_name"]
    cmd = f"gcloud functions delete {function_name} --region={region} --project={project_id} --quiet"
    run_command(cmd, "Deleting Cloud Function")

def describe_function(config):
    project_id = config["project_id"]
    region = config["region"]
    function_name = config["function_name"]
    cmd = (
        f"gcloud functions describe {function_name} "
        f"--region={region} "
        f"--project={project_id} "
        f"--format='value(serviceConfig.uri)'"
    )
    deployed_url = run_command(cmd, "Describing Cloud Function to extract URL", require_confirm=False)
    print("\nCloud Function URL: {}\n".format(deployed_url))

def clear_all(config):
    print("\n=== CLEAR ALL RESOURCES ===")
    project_id = config["project_id"]
    region = config["region"]
    artifact_repo = config["artifact_repo"]
    confirm = input("WARNING: This will DELETE the Cloud Function and the Artifact Registry repository. Are you sure? (y/N): ")
    if confirm.lower() != "y":
        print("Aborting clear operation.")
        sys.exit(0)
    # Delete the Cloud Function
    delete_function(config)
    # Delete the Artifact Registry repository
    cmd = f"gcloud artifacts repositories delete {artifact_repo} --location={region} --project={project_id} --quiet"
    run_command(cmd, "Deleting Artifact Registry repository", require_confirm=False)
    print("All resources cleared.")

def main():
    parser = argparse.ArgumentParser(description="Deploy, delete, describe, or clear Cloud Function resources using config.yml")
    subparsers = parser.add_subparsers(dest="command", required=True)
    subparsers.add_parser("deploy", help="Deploy the Cloud Function")
    subparsers.add_parser("delete", help="Delete the deployed Cloud Function")
    subparsers.add_parser("describe", help="Describe the Cloud Function and return its URL")
    subparsers.add_parser("clear", help="Delete the Cloud Function and Artifact Registry repository")
    args = parser.parse_args()
    config = load_config()
    if args.command == "deploy":
        deploy_function(config)
    elif args.command == "delete":
        delete_function(config)
    elif args.command == "describe":
        describe_function(config)
    elif args.command == "clear":
        clear_all(config)
    else:
        parser.print_help()

if __name__ == "__main__":
    main()

View File

@@ -1,204 +0,0 @@
# Deploying Crawl4ai on Google Cloud Functions
This guide explains how to deploy **Crawl4ai**—an open-source web crawler library—on Google Cloud Functions Gen2 using a custom container. We assume your project folder already includes:
- **Dockerfile:** Builds your container image (which installs Crawl4ai from its Git repository).
- **start.sh:** Activates your virtual environment and starts the function (using the Functions Framework).
- **main.py:** Contains your function logic with the entry point `crawl` (and imports Crawl4ai).
The guide is divided into two parts:
1. Manual deployment steps (using CLI commands)
2. Automated deployment using a Python script (`deploy.py`)
---
## Part 1: Manual Deployment Process
### Prerequisites
- **Google Cloud Project:** Ensure your project is active and billing is enabled.
- **Google Cloud CLI & Docker:** Installed and configured on your local machine.
- **Permissions:** You must have rights to create Cloud Functions and Artifact Registry repositories.
- **Files:** Your Dockerfile, start.sh, and main.py should be in the same directory.
### Step 1: Build Your Docker Image
Your Dockerfile packages Crawl4ai along with all its dependencies. Build your image with:
```bash
docker build -t gcr.io/<PROJECT_ID>/<FUNCTION_NAME>:latest .
```
Replace `<PROJECT_ID>` with your Google Cloud project ID and `<FUNCTION_NAME>` with your chosen function name (for example, `crawl4ai-t1`).
### Step 2: Create an Artifact Registry Repository
Cloud Functions Gen2 requires your custom container image to reside in an Artifact Registry repository. Create one by running:
```bash
gcloud artifacts repositories create <ARTIFACT_REPO> \
--repository-format=docker \
--location=<REGION> \
--project=<PROJECT_ID>
```
Replace `<ARTIFACT_REPO>` (for example, `crawl4ai`) and `<REGION>` (for example, `asia-east1`).
> **Note:** If you receive an `ALREADY_EXISTS` error, the repository is already created; simply proceed to the next step.
### Step 3: Tag Your Docker Image
Tag your locally built Docker image so it matches the Artifact Registry format:
```bash
docker tag gcr.io/<PROJECT_ID>/<FUNCTION_NAME>:latest <REGION>-docker.pkg.dev/<PROJECT_ID>/<ARTIFACT_REPO>/<FUNCTION_NAME>:latest
```
This step “renames” the image so you can push it to your repository.
### Step 4: Authenticate Docker to Artifact Registry
Configure Docker authentication to the Artifact Registry:
```bash
gcloud auth configure-docker <REGION>-docker.pkg.dev
```
This ensures Docker can securely push images to your registry using your Cloud credentials.
### Step 5: Push the Docker Image
Push the tagged image to Artifact Registry:
```bash
docker push <REGION>-docker.pkg.dev/<PROJECT_ID>/<ARTIFACT_REPO>/<FUNCTION_NAME>:latest
```
Once complete, your container image (with Crawl4ai installed) is hosted in Artifact Registry.
### Step 6: Deploy the Cloud Function
Deploy your function using the custom container image. Run:
```bash
gcloud beta functions deploy <FUNCTION_NAME> \
--gen2 \
--region=<REGION> \
--docker-repository=<REGION>-docker.pkg.dev/<PROJECT_ID>/<ARTIFACT_REPO> \
--trigger-http \
--memory=2048MB \
--timeout=540s \
--project=<PROJECT_ID>
```
This command tells Cloud Functions Gen2 to pull your container image from Artifact Registry and deploy it. Make sure your main.py defines the `crawl` entry point.
### Step 7: Make the Function Public
To allow external (unauthenticated) access, update the function's IAM policy:
```bash
gcloud functions add-iam-policy-binding <FUNCTION_NAME> \
--region=<REGION> \
--member="allUsers" \
--role="roles/cloudfunctions.invoker" \
--project=<PROJECT_ID> \
--quiet
```
Using the `--quiet` flag ensures the command runs non-interactively so the policy is applied immediately.
### Step 8: Retrieve and Test Your Function URL
Get the URL for your deployed function:
```bash
gcloud functions describe <FUNCTION_NAME> \
--region=<REGION> \
--project=<PROJECT_ID> \
--format='value(serviceConfig.uri)'
```
Test your deployment with a sample GET request (using curl or your browser):
```bash
curl "<FUNCTION_URL>?url=https://example.com"
```
Replace `<FUNCTION_URL>` with the output URL from the previous command. A successful test (HTTP status 200) means Crawl4ai is running on Cloud Functions.
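If you prefer to script this check, below is a hedged Python equivalent of the curl test. It uses the same `requests` library as the automated script; the percent-encoding of the query string is an addition not present in the curl example, and `<FUNCTION_URL>` must come from the `describe` output.

```python
from urllib.parse import urlencode

def build_test_url(function_url: str, target: str) -> str:
    """Compose the smoke-test URL, percent-encoding the target address."""
    return f"{function_url}?{urlencode({'url': target})}"

# To actually run the smoke test (requires the deployed URL and network access):
#   import requests
#   resp = requests.get(build_test_url("<FUNCTION_URL>", "https://example.com"), timeout=60)
#   assert resp.status_code == 200
```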
---
## Part 2: Automated Deployment with deploy.py
For a more streamlined process, use the provided `deploy.py` script. This Python script automates the manual steps, prompting you to confirm key actions and providing detailed logs throughout the process.
### What deploy.py Does:
- **Reads Parameters:** It loads a `config.yml` file containing the necessary parameters, such as `project_id`, `region`, `artifact_repo`, `function_name`, and `local_image`.
- **Creates/Skips Repository:** It creates the Artifact Registry repository (or skips if it already exists).
- **Tags & Pushes:** It tags your local Docker image and pushes it to the Artifact Registry.
- **Deploys the Function:** It deploys the Cloud Function with your custom container.
- **Updates IAM:** It sets the IAM policy to allow public access (using the `--quiet` flag).
- **Tests the Deployment:** It extracts the deployed URL and performs a test request.
- **Additional Commands:** You can also use subcommands in the script to delete or describe the deployed function, or even clear all resources.
### Example config.yml
Create a `config.yml` file in the same folder as your Dockerfile. An example configuration:
```yaml
project_id: your-project-id
region: asia-east1
artifact_repo: crawl4ai
function_name: crawl4ai-t1
memory: "2048MB"
timeout: "540s"
local_image: "gcr.io/your-project-id/crawl4ai-t1:latest"
test_query_url: "https://example.com"
```
### How to Use deploy.py
- **Deploy the Function:**
```bash
python deploy.py deploy
```
The script will guide you through each step, display the output, and ask for confirmation before executing critical commands.
- **Describe the Function:**
If you forget the function URL and want to retrieve it later:
```bash
python deploy.py describe
```
- **Delete the Function:**
To remove just the Cloud Function:
```bash
python deploy.py delete
```
- **Clear All Resources:**
To delete both the Cloud Function and the Artifact Registry repository:
```bash
python deploy.py clear
```
---
## Conclusion
This guide has walked you through two deployment methods for Crawl4ai on Google Cloud Functions Gen2:
1. **Manual Deployment:** Building your Docker image, pushing it to Artifact Registry, deploying the Cloud Function, and setting up IAM.
2. **Automated Deployment:** Using `deploy.py` with a configuration file to handle the entire process interactively.
By following these instructions, you can deploy, test, and manage your Crawl4ai-based Cloud Function with ease. Enjoy using Crawl4ai in your cloud environment!

View File

@@ -1,158 +0,0 @@
# Cleanup Chrome process on module unload
import atexit
import asyncio
import logging
import functions_framework
from flask import jsonify, Request
import os
import sys
import time
import subprocess
import signal
import requests
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info(f"Python version: {sys.version}")
logger.info(f"Python path: {sys.path}")
# Try to find where crawl4ai is coming from
try:
    import crawl4ai
    logger.info(f"Crawl4AI module location: {crawl4ai.__file__}")
    logger.info(f"Contents of crawl4ai: {dir(crawl4ai)}")
except ImportError:
    logger.error("Crawl4AI module not found")

# Now attempt the import
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode, CrawlResult

# Configure logging
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

# Paths and constants
FUNCTION_DIR = os.path.dirname(os.path.realpath(__file__))
CHROME_BINARY = os.path.join(FUNCTION_DIR, "resources/chrome/headless_shell")
CDP_PORT = 9222

def start_chrome():
    """Start Chrome process synchronously with exponential backoff."""
    logger.debug("Starting Chrome process...")
    chrome_args = [
        CHROME_BINARY,
        f"--remote-debugging-port={CDP_PORT}",
        "--remote-debugging-address=0.0.0.0",
        "--no-sandbox",
        "--disable-setuid-sandbox",
        "--headless=new",
        "--disable-gpu",
        "--disable-dev-shm-usage",
        "--no-zygote",
        "--single-process",
        "--disable-features=site-per-process",
        "--no-first-run",
        "--disable-extensions"
    ]
    process = subprocess.Popen(
        chrome_args,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        preexec_fn=os.setsid
    )
    logger.debug(f"Chrome process started with PID: {process.pid}")
    # Wait for CDP endpoint with exponential backoff
    wait_time = 1        # Start with 1 second
    max_wait_time = 16   # Cap at 16 seconds per retry
    max_attempts = 10    # Total attempts
    for attempt in range(max_attempts):
        try:
            response = requests.get(f"http://127.0.0.1:{CDP_PORT}/json/version", timeout=2)
            if response.status_code == 200:
                # Get ws URL from response
                ws_url = response.json()['webSocketDebuggerUrl']
                logger.debug("Chrome CDP is ready")
                logger.debug(f"CDP URL: {ws_url}")
                return process
        except requests.exceptions.ConnectionError:
            logger.debug(f"Waiting for CDP endpoint (attempt {attempt + 1}/{max_attempts}), retrying in {wait_time} seconds")
            time.sleep(wait_time)
            wait_time = min(wait_time * 2, max_wait_time)  # Double wait time, up to max
    # If we get here, all retries failed
    stdout, stderr = process.communicate()  # Get output for debugging
    logger.error(f"Chrome stdout: {stdout.decode()}")
    logger.error(f"Chrome stderr: {stderr.decode()}")
    raise Exception("Chrome CDP endpoint failed to start after retries")

async def fetch_with_crawl4ai(url: str) -> dict:
    """Fetch page content using Crawl4ai and return the result object"""
    # Get CDP URL from the running Chrome instance
    version_response = requests.get(f'http://localhost:{CDP_PORT}/json/version')
    cdp_url = version_response.json()['webSocketDebuggerUrl']
    # Configure and run Crawl4ai
    browser_config = BrowserConfig(cdp_url=cdp_url, use_managed_browser=True)
    async with AsyncWebCrawler(config=browser_config) as crawler:
        crawler_config = CrawlerRunConfig(
            cache_mode=CacheMode.BYPASS,
        )
        result: CrawlResult = await crawler.arun(
            url=url, config=crawler_config
        )
        return result.model_dump()  # Convert Pydantic model to dict for JSON response

# Start Chrome when the module loads
logger.info("Starting Chrome process on module load")
chrome_process = start_chrome()

@functions_framework.http
def crawl(request: Request):
    """HTTP Cloud Function to fetch web content using Crawl4ai"""
    try:
        url = request.args.get('url')
        if not url:
            return jsonify({'error': 'URL parameter is required', 'status': 400}), 400
        # Create and run an asyncio event loop
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        try:
            result = loop.run_until_complete(
                asyncio.wait_for(fetch_with_crawl4ai(url), timeout=10.0)
            )
            return jsonify({
                'status': 200,
                'data': result
            })
        finally:
            loop.close()
    except Exception as e:
        error_msg = f"Unexpected error: {str(e)}"
        logger.error(error_msg, exc_info=True)
        return jsonify({
            'error': error_msg,
            'status': 500,
            'details': {
                'error_type': type(e).__name__,
                'stack_trace': str(e),
                'chrome_running': chrome_process.poll() is None if chrome_process else False
            }
        }), 500

@atexit.register
def cleanup():
    """Cleanup Chrome process on shutdown"""
    if chrome_process and chrome_process.poll() is None:
        try:
            os.killpg(os.getpgid(chrome_process.pid), signal.SIGTERM)
            logger.info("Chrome process terminated")
        except Exception as e:
            logger.error(f"Failed to terminate Chrome process: {e}")

View File

@@ -1,5 +0,0 @@
functions-framework==3.*
flask==2.3.3
requests==2.31.0
websockets==12.0
git+https://github.com/unclecode/crawl4ai.git@next

View File

@@ -1,10 +0,0 @@
<?xml version="1.0" ?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<dir>/var/task/.fonts</dir>
<dir>/var/task/fonts</dir>
<dir>/opt/fonts</dir>
<dir>/tmp/fonts</dir>
<cachedir>/tmp/fonts-cache/</cachedir>
<config></config>
</fontconfig>

View File

@@ -1 +0,0 @@
{"file_format_version": "1.0.0", "ICD": {"library_path": "./libvk_swiftshader.so", "api_version": "1.0.5"}}

View File

@@ -1,104 +0,0 @@
FROM python:3.12-bookworm AS python-builder
RUN pip install poetry
ENV POETRY_NO_INTERACTION=1 \
POETRY_CACHE_DIR=/tmp/poetry_cache
WORKDIR /app
COPY pyproject.toml poetry.lock ./
RUN --mount=type=cache,target=$POETRY_CACHE_DIR poetry export -f requirements.txt -o requirements.txt
# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
python3-dev \
python3-setuptools \
python3-wheel \
python3-pip \
gcc \
g++ \
&& rm -rf /var/lib/apt/lists/*
# Install specific dependencies that have build issues
RUN pip install --no-cache-dir cchardet
FROM python:3.12-bookworm
# Install AWS Lambda Runtime Interface Client
RUN python3 -m pip install --no-cache-dir awslambdaric
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
curl \
wget \
gnupg \
git \
cmake \
pkg-config \
python3-dev \
libjpeg-dev \
redis-server \
supervisor \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y --no-install-recommends \
libglib2.0-0 \
libnss3 \
libnspr4 \
libatk1.0-0 \
libatk-bridge2.0-0 \
libcups2 \
libdrm2 \
libdbus-1-3 \
libxcb1 \
libxkbcommon0 \
libx11-6 \
libxcomposite1 \
libxdamage1 \
libxext6 \
libxfixes3 \
libxrandr2 \
libgbm1 \
libpango-1.0-0 \
libcairo2 \
libasound2 \
libatspi2.0-0 \
&& rm -rf /var/lib/apt/lists/*
# Install build essentials for any compilations needed
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
# Set up function directory and browser path
ARG FUNCTION_DIR="/function"
RUN mkdir -p "${FUNCTION_DIR}/pw-browsers"
RUN mkdir -p "/tmp/.crawl4ai"
# Set critical environment variables
ENV PLAYWRIGHT_BROWSERS_PATH="${FUNCTION_DIR}/pw-browsers" \
HOME="/tmp" \
CRAWL4_AI_BASE_DIRECTORY="/tmp/.crawl4ai"
# Create the Crawl4ai base directory
RUN mkdir -p ${CRAWL4_AI_BASE_DIRECTORY}
RUN pip install --no-cache-dir faust-cchardet
# Install Crawl4ai and dependencies
RUN pip install --no-cache-dir git+https://github.com/unclecode/crawl4ai.git@next
# Install Chromium only (no deps flag)
RUN playwright install chromium
# Copy function code
COPY lambda_function.py ${FUNCTION_DIR}/
# Set working directory
WORKDIR ${FUNCTION_DIR}
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "lambda_function.handler" ]

File diff suppressed because it is too large

View File

@@ -1,345 +0,0 @@
# Deploying Crawl4ai on AWS Lambda
This guide walks you through deploying Crawl4ai as an AWS Lambda function with API Gateway integration. You'll learn how to set up, test, and clean up your deployment.
## Prerequisites
Before you begin, ensure you have:
- AWS CLI installed and configured (`aws configure`)
- Docker installed and running
- Python 3.8+ installed
- Basic familiarity with AWS services
## Project Files
Your project directory should contain:
- `Dockerfile`: Container configuration for Lambda
- `lambda_function.py`: Lambda handler code
- `deploy.py`: Our deployment script
## Step 1: Install Required Python Packages
Install the Python packages needed for our deployment script:
```bash
pip install typer rich
```
## Step 2: Run the Deployment Script
Our Python script automates the entire deployment process:
```bash
python deploy.py
```
The script will guide you through:
1. Configuration setup (AWS region, function name, memory allocation)
2. Docker image building
3. ECR repository creation
4. Lambda function deployment
5. API Gateway setup
6. Provisioned concurrency configuration (optional)
Follow the prompts and confirm each step by pressing Enter.
## Step 3: Manual Deployment (Alternative to the Script)
If you prefer to deploy manually or understand what the script does, follow these steps:
### Building and Pushing the Docker Image
```bash
# Build the Docker image
docker build -t crawl4ai-lambda .
# Create an ECR repository (if it doesn't exist)
aws ecr create-repository --repository-name crawl4ai-lambda
# Get ECR login password and login
aws ecr get-login-password | docker login --username AWS --password-stdin $(aws sts get-caller-identity --query Account --output text).dkr.ecr.us-east-1.amazonaws.com
# Tag the image
ECR_URI=$(aws ecr describe-repositories --repository-names crawl4ai-lambda --query 'repositories[0].repositoryUri' --output text)
docker tag crawl4ai-lambda:latest $ECR_URI:latest
# Push the image to ECR
docker push $ECR_URI:latest
```
### Creating the Lambda Function
```bash
# Get IAM role ARN (create it if needed)
ROLE_ARN=$(aws iam get-role --role-name lambda-execution-role --query 'Role.Arn' --output text)
# Create Lambda function
aws lambda create-function \
--function-name crawl4ai-function \
--package-type Image \
--code ImageUri=$ECR_URI:latest \
--role $ROLE_ARN \
--timeout 300 \
--memory-size 4096 \
--ephemeral-storage Size=10240 \
--environment "Variables={CRAWL4_AI_BASE_DIRECTORY=/tmp/.crawl4ai,HOME=/tmp,PLAYWRIGHT_BROWSERS_PATH=/function/pw-browsers}"
```
If you're updating an existing function:
```bash
# Update function code
aws lambda update-function-code \
--function-name crawl4ai-function \
--image-uri $ECR_URI:latest
# Update function configuration
aws lambda update-function-configuration \
--function-name crawl4ai-function \
--timeout 300 \
--memory-size 4096 \
--ephemeral-storage Size=10240 \
--environment "Variables={CRAWL4_AI_BASE_DIRECTORY=/tmp/.crawl4ai,HOME=/tmp,PLAYWRIGHT_BROWSERS_PATH=/function/pw-browsers}"
```
### Setting Up API Gateway
```bash
# Create API Gateway
API_ID=$(aws apigateway create-rest-api --name crawl4ai-api --query 'id' --output text)
# Get root resource ID
PARENT_ID=$(aws apigateway get-resources --rest-api-id $API_ID --query 'items[?path==`/`].id' --output text)
# Create resource
RESOURCE_ID=$(aws apigateway create-resource --rest-api-id $API_ID --parent-id $PARENT_ID --path-part "crawl" --query 'id' --output text)
# Create POST method
aws apigateway put-method --rest-api-id $API_ID --resource-id $RESOURCE_ID --http-method POST --authorization-type NONE
# Get Lambda function ARN
LAMBDA_ARN=$(aws lambda get-function --function-name crawl4ai-function --query 'Configuration.FunctionArn' --output text)
# Set Lambda integration
aws apigateway put-integration \
--rest-api-id $API_ID \
--resource-id $RESOURCE_ID \
--http-method POST \
--type AWS_PROXY \
--integration-http-method POST \
--uri arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/$LAMBDA_ARN/invocations
# Deploy API
aws apigateway create-deployment --rest-api-id $API_ID --stage-name prod
# Set Lambda permission
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
aws lambda add-permission \
--function-name crawl4ai-function \
--statement-id apigateway \
--action lambda:InvokeFunction \
--principal apigateway.amazonaws.com \
--source-arn "arn:aws:execute-api:us-east-1:$ACCOUNT_ID:$API_ID/*/POST/crawl"
```
### Setting Up Provisioned Concurrency (Optional)
This reduces cold starts:
```bash
# Publish a version
VERSION=$(aws lambda publish-version --function-name crawl4ai-function --query 'Version' --output text)
# Create alias
aws lambda create-alias \
--function-name crawl4ai-function \
--name prod \
--function-version $VERSION
# Configure provisioned concurrency
aws lambda put-provisioned-concurrency-config \
--function-name crawl4ai-function \
--qualifier prod \
--provisioned-concurrent-executions 2
# Update API Gateway to use alias
LAMBDA_ALIAS_ARN="arn:aws:lambda:us-east-1:$ACCOUNT_ID:function:crawl4ai-function:prod"
aws apigateway put-integration \
--rest-api-id $API_ID \
--resource-id $RESOURCE_ID \
--http-method POST \
--type AWS_PROXY \
--integration-http-method POST \
--uri arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/$LAMBDA_ALIAS_ARN/invocations
# Redeploy API Gateway
aws apigateway create-deployment --rest-api-id $API_ID --stage-name prod
```
## Step 4: Testing the Deployment
Once deployed, test your function with:
```bash
ENDPOINT_URL="https://$API_ID.execute-api.us-east-1.amazonaws.com/prod/crawl"
# Test with curl
curl -X POST $ENDPOINT_URL \
-H "Content-Type: application/json" \
-d '{"url":"https://example.com"}'
```
Or using Python:
```python
import requests
import json

url = "https://your-api-id.execute-api.us-east-1.amazonaws.com/prod/crawl"
payload = {
    "url": "https://example.com",
    "browser_config": {
        "headless": True,
        "verbose": False
    },
    "crawler_config": {
        "type": "CrawlerRunConfig",
        "params": {
            "markdown_generator": {
                "type": "DefaultMarkdownGenerator",
                "params": {
                    "content_filter": {
                        "type": "PruningContentFilter",
                        "params": {
                            "threshold": 0.48,
                            "threshold_type": "fixed"
                        }
                    }
                }
            }
        }
    }
}
response = requests.post(url, json=payload)
result = response.json()
print(json.dumps(result, indent=2))
```
## Step 5: Cleaning Up Resources
To remove all AWS resources created for this deployment:
```bash
python deploy.py cleanup
```
Or manually:
```bash
# Delete API Gateway
aws apigateway delete-rest-api --rest-api-id $API_ID
# Remove provisioned concurrency (if configured)
aws lambda delete-provisioned-concurrency-config \
--function-name crawl4ai-function \
--qualifier prod
# Delete alias (if created)
aws lambda delete-alias \
--function-name crawl4ai-function \
--name prod
# Delete Lambda function
aws lambda delete-function --function-name crawl4ai-function
# Delete ECR repository
aws ecr delete-repository --repository-name crawl4ai-lambda --force
```
## Troubleshooting
### Cold Start Issues
If experiencing long cold starts:
- Enable provisioned concurrency
- Increase memory allocation (4096 MB recommended)
- Ensure the Lambda function has enough ephemeral storage
### Permission Errors
If you encounter permission errors:
- Check the IAM role has the necessary permissions
- Ensure API Gateway has permission to invoke the Lambda function
### Container Size Issues
If your container is too large:
- Optimize the Dockerfile
- Use multi-stage builds
- Consider removing unnecessary dependencies
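A multi-stage build keeps compilers and build caches out of the final image. The sketch below is illustrative only, assuming the public AWS Lambda Python base image and a handler file named `lambda_function.py`; adapt the stage contents to your actual Dockerfile.

```dockerfile
# Stage 1: install dependencies into a staging directory so pip's
# build tooling and caches never reach the final image
FROM public.ecr.aws/lambda/python:3.10 AS builder
RUN pip install --no-cache-dir --target /opt/deps crawl4ai

# Stage 2: copy only the installed packages and the handler code
FROM public.ecr.aws/lambda/python:3.10
COPY --from=builder /opt/deps ${LAMBDA_TASK_ROOT}
COPY lambda_function.py ${LAMBDA_TASK_ROOT}
CMD ["lambda_function.handler"]
```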
## Performance Considerations
- Lambda memory affects CPU allocation - higher memory means faster execution
- Provisioned concurrency eliminates cold starts but costs more
- Optimize the Playwright setup for faster browser initialization
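As a rough way to reason about the memory/cost trade-off: Lambda bills per GB-second, so doubling memory doubles the per-second price, but since more memory also means more CPU, the run often finishes faster. The price constant below is an assumption for illustration; check current AWS pricing for your region.

```python
PRICE_PER_GB_SECOND = 0.0000166667  # illustrative figure; verify current AWS pricing

def invocation_cost(memory_mb: int, duration_ms: int) -> float:
    """Estimated cost of one invocation at the given memory and duration."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# If doubling memory roughly halves the duration, the cost is about the same,
# and you get the result twice as fast:
slow = invocation_cost(2048, 12000)
fast = invocation_cost(4096, 6000)
```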
## Security Best Practices
- Use the principle of least privilege for IAM roles
- Implement API Gateway authentication for production deployments
- Consider using AWS KMS for storing sensitive configuration
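One lightweight way to add authentication, short of a full API Gateway authorizer, is to reject unauthenticated requests inside the handler itself. This is a hedged sketch: the `x-api-key` header name and the `CRAWL4AI_API_KEY` environment variable are our own choices for illustration, not part of the tutorial's deploy script.

```python
import json
import os

def check_api_key(event):
    """Return a 401 response dict if the request lacks a valid API key, else None."""
    expected = os.environ.get("CRAWL4AI_API_KEY")
    # API Gateway header casing varies, so normalize to lowercase
    headers = {k.lower(): v for k, v in (event.get("headers") or {}).items()}
    if not expected or headers.get("x-api-key") != expected:
        return {"statusCode": 401, "body": json.dumps({"error": "Unauthorized"})}
    return None  # request may proceed

# Inside handler(event, context), call this before doing any work:
# denied = check_api_key(event)
# if denied:
#     return denied
```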
## Useful AWS Console Links
Here are quick links to access important AWS console pages for monitoring and managing your deployment:
| Resource | Console Link |
|----------|-------------|
| Lambda Functions | [AWS Lambda Console](https://console.aws.amazon.com/lambda/home#/functions) |
| Lambda Function Logs | [CloudWatch Logs](https://console.aws.amazon.com/cloudwatch/home#logsV2:log-groups) |
| API Gateway | [API Gateway Console](https://console.aws.amazon.com/apigateway/home) |
| ECR Repositories | [ECR Console](https://console.aws.amazon.com/ecr/repositories) |
| IAM Roles | [IAM Console](https://console.aws.amazon.com/iamv2/home#/roles) |
| CloudWatch Metrics | [CloudWatch Metrics](https://console.aws.amazon.com/cloudwatch/home#metricsV2) |
### Monitoring Lambda Execution
To monitor your Lambda function:
1. Go to the [Lambda function console](https://console.aws.amazon.com/lambda/home#/functions)
2. Select your function (`crawl4ai-function`)
3. Click the "Monitor" tab to see:
- Invocation metrics
- Success/failure rates
- Duration statistics
### Viewing Lambda Logs
To see detailed execution logs:
1. Go to [CloudWatch Logs](https://console.aws.amazon.com/cloudwatch/home#logsV2:log-groups)
2. Find the log group named `/aws/lambda/crawl4ai-function`
3. Click to see the latest log streams
4. Each stream contains logs from a function execution
### Checking API Gateway Traffic
To monitor API requests:
1. Go to the [API Gateway console](https://console.aws.amazon.com/apigateway/home)
2. Select your API (`crawl4ai-api`)
3. Click "Dashboard" to see:
- API calls
- Latency
- Error rates
## Conclusion
You now have Crawl4ai running as a serverless function on AWS Lambda! This setup allows you to crawl websites on-demand without maintaining infrastructure, while paying only for the compute time you use.

View File

@@ -1,107 +0,0 @@
import json
import asyncio
import os
# Ensure environment variables and directories are set
os.environ['CRAWL4_AI_BASE_DIRECTORY'] = '/tmp/.crawl4ai'
os.environ['HOME'] = '/tmp'
# Create directory if it doesn't exist
os.makedirs('/tmp/.crawl4ai', exist_ok=True)
from crawl4ai import (
AsyncWebCrawler,
BrowserConfig,
CrawlerRunConfig,
CacheMode
)
def handler(event, context):
# Parse the incoming event (API Gateway request)
try:
body = json.loads(event.get('body', '{}'))
url = body.get('url')
if not url:
return {
'statusCode': 400,
'body': json.dumps({'error': 'URL is required'})
}
# Get optional configurations or use defaults
browser_config_dict = body.get('browser_config', {})
crawler_config_dict = body.get('crawler_config', {})
# Run the crawler
result = asyncio.run(crawl(url, browser_config_dict, crawler_config_dict))
# Return successful response
return {
'statusCode': 200,
'headers': {
'Content-Type': 'application/json'
},
'body': json.dumps(result)
}
except Exception as e:
# Handle errors
import traceback
return {
'statusCode': 500,
'body': json.dumps({
'error': str(e),
'traceback': traceback.format_exc()
})
}
async def crawl(url, browser_config_dict, crawler_config_dict):
"""
Run the crawler with the provided configurations, with Lambda-specific settings
"""
# Start with user-provided config but override with Lambda-required settings
base_browser_config = BrowserConfig.load(browser_config_dict) if browser_config_dict else BrowserConfig()
# Apply Lambda-specific browser configurations
browser_config = BrowserConfig(
verbose=True,
browser_type="chromium",
headless=True,
user_agent_mode="random",
light_mode=True,
use_managed_browser=False,
extra_args=[
"--headless=new",
"--no-sandbox",
"--disable-dev-shm-usage",
"--disable-setuid-sandbox",
"--remote-allow-origins=*",
"--autoplay-policy=user-gesture-required",
"--single-process",
],
# # Carry over any other settings from user config that aren't overridden
# **{k: v for k, v in base_browser_config.model_dump().items()
# if k not in ['verbose', 'browser_type', 'headless', 'user_agent_mode',
# 'light_mode', 'use_managed_browser', 'extra_args']}
)
# Start with user-provided crawler config but ensure cache is bypassed
base_crawler_config = CrawlerRunConfig.load(crawler_config_dict) if crawler_config_dict else CrawlerRunConfig()
# Apply Lambda-specific crawler configurations
crawler_config = CrawlerRunConfig(
exclude_external_links=base_crawler_config.exclude_external_links,
remove_overlay_elements=True,
magic=True,
cache_mode=CacheMode.BYPASS,
# Carry over markdown generator and other settings
markdown_generator=base_crawler_config.markdown_generator
)
# Perform the crawl with Lambda-optimized settings
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url=url, config=crawler_config)
# Return serializable results
return result.model_dump()

View File

@@ -1,543 +0,0 @@
import os
import time
import uuid
from datetime import datetime
from typing import Dict, Any, Optional, List
import modal
from modal import Image, App, Volume, Secret, web_endpoint, function
# Configuration
APP_NAME = "crawl4ai-api"
CRAWL4AI_VERSION = "next" # Using the 'next' branch
PYTHON_VERSION = "3.10" # Compatible with playwright
DEFAULT_CREDITS = 1000
# Create a custom image with Crawl4ai and its dependencies
image = Image.debian_slim(python_version=PYTHON_VERSION).pip_install(
["fastapi[standard]", "pymongo", "pydantic"]
).run_commands(
"apt-get update",
"apt-get install -y software-properties-common",
"apt-get install -y git",
"apt-add-repository non-free",
"apt-add-repository contrib",
# Install crawl4ai from the next branch
f"pip install -U git+https://github.com/unclecode/crawl4ai.git@{CRAWL4AI_VERSION}",
"pip install -U fastapi[standard]",
"pip install -U pydantic",
# Install playwright and browsers
"crawl4ai-setup",
)
# Create persistent volume for user database
user_db = Volume.from_name("crawl4ai-users", create_if_missing=True)
# Create admin secret for secure operations
admin_secret = Secret.from_name("admin-secret", create_if_missing=True)
# Define the app
app = App(APP_NAME, image=image)
# Default configurations
DEFAULT_BROWSER_CONFIG = {
"headless": True,
"verbose": False,
}
DEFAULT_CRAWLER_CONFIG = {
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.48,
"threshold_type": "fixed"
}
}
}
}
}
}
}
# Database operations
@app.function(volumes={"/data": user_db})
def init_db() -> None:
"""Initialize database with indexes."""
from pymongo import MongoClient, ASCENDING
client = MongoClient("mongodb://localhost:27017")
db = client.crawl4ai_db
# Ensure indexes for faster lookups
db.users.create_index([("api_token", ASCENDING)], unique=True)
db.users.create_index([("email", ASCENDING)], unique=True)
# Create usage stats collection
db.usage_stats.create_index([("user_id", ASCENDING), ("timestamp", ASCENDING)])
print("Database initialized with required indexes")
@app.function(volumes={"/data": user_db})
def get_user_by_token(api_token: str) -> Optional[Dict[str, Any]]:
"""Get user by API token."""
from pymongo import MongoClient
from bson.objectid import ObjectId
client = MongoClient("mongodb://localhost:27017")
db = client.crawl4ai_db
user = db.users.find_one({"api_token": api_token})
if not user:
return None
# Convert ObjectId to string for serialization
user["_id"] = str(user["_id"])
return user
@app.function(volumes={"/data": user_db})
def create_user(email: str, name: str) -> Dict[str, Any]:
"""Create a new user with initial credits."""
from pymongo import MongoClient
from bson.objectid import ObjectId
client = MongoClient("mongodb://localhost:27017")
db = client.crawl4ai_db
# Generate API token
api_token = str(uuid.uuid4())
user = {
"email": email,
"name": name,
"api_token": api_token,
"credits": DEFAULT_CREDITS,
"created_at": datetime.utcnow(),
"updated_at": datetime.utcnow(),
"is_active": True
}
try:
result = db.users.insert_one(user)
user["_id"] = str(result.inserted_id)
return user
except Exception as e:
if "duplicate key error" in str(e):
return {"error": "User with this email already exists"}
raise
@app.function(volumes={"/data": user_db})
def update_user_credits(api_token: str, amount: int) -> Dict[str, Any]:
"""Update user credits (add or subtract)."""
from pymongo import MongoClient
client = MongoClient("mongodb://localhost:27017")
db = client.crawl4ai_db
# First get current user to check credits
user = db.users.find_one({"api_token": api_token})
if not user:
return {"success": False, "error": "User not found"}
# For deductions, ensure sufficient credits
if amount < 0 and user["credits"] + amount < 0:
return {"success": False, "error": "Insufficient credits"}
# Update credits
result = db.users.update_one(
{"api_token": api_token},
{
"$inc": {"credits": amount},
"$set": {"updated_at": datetime.utcnow()}
}
)
if result.modified_count == 1:
# Get updated user
updated_user = db.users.find_one({"api_token": api_token})
return {
"success": True,
"credits": updated_user["credits"]
}
else:
return {"success": False, "error": "Failed to update credits"}
@app.function(volumes={"/data": user_db})
def log_usage(user_id: str, url: str, success: bool, error: Optional[str] = None) -> None:
"""Log usage statistics."""
from pymongo import MongoClient
from bson.objectid import ObjectId
client = MongoClient("mongodb://localhost:27017")
db = client.crawl4ai_db
log_entry = {
"user_id": user_id,
"url": url,
"timestamp": datetime.utcnow(),
"success": success,
"error": error
}
db.usage_stats.insert_one(log_entry)
# Main crawling function
@app.function(timeout=300) # 5 minute timeout
async def crawl(
url: str,
browser_config: Optional[Dict[str, Any]] = None,
crawler_config: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
"""
Crawl a given URL using Crawl4ai.
Args:
url: The URL to crawl
browser_config: Optional browser configuration to override defaults
crawler_config: Optional crawler configuration to override defaults
Returns:
A dictionary containing the crawl results
"""
from crawl4ai import (
AsyncWebCrawler,
BrowserConfig,
CrawlerRunConfig,
CrawlResult
)
# Prepare browser config using the loader method
if browser_config is None:
browser_config = DEFAULT_BROWSER_CONFIG
browser_config_obj = BrowserConfig.load(browser_config)
# Prepare crawler config using the loader method
if crawler_config is None:
crawler_config = DEFAULT_CRAWLER_CONFIG
crawler_config_obj = CrawlerRunConfig.load(crawler_config)
# Perform the crawl
async with AsyncWebCrawler(config=browser_config_obj) as crawler:
result: CrawlResult = await crawler.arun(url=url, config=crawler_config_obj)
# Return serializable results
try:
# Try newer Pydantic v2 method
return result.model_dump()
except AttributeError:
try:
# Try older Pydantic v1 method
return result.dict()
except AttributeError:
# Fallback to manual conversion
return {
"url": result.url,
"title": result.title,
"status": result.status,
"content": str(result.content) if hasattr(result, "content") else None,
"links": [{"url": link.url, "text": link.text} for link in result.links] if hasattr(result, "links") else [],
"markdown_v2": {
"raw_markdown": result.markdown_v2.raw_markdown if hasattr(result, "markdown_v2") else None
}
}
# API endpoints
@app.function()
@web_endpoint(method="POST")
def crawl_endpoint(data: Dict[str, Any]) -> Dict[str, Any]:
"""
Web endpoint that accepts POST requests with JSON data containing:
- api_token: User's API token
- url: The URL to crawl
- browser_config: Optional browser configuration
- crawler_config: Optional crawler configuration
Returns the crawl results and remaining credits.
"""
# Extract and validate API token
api_token = data.get("api_token")
if not api_token:
return {
"success": False,
"error": "API token is required",
"status_code": 401
}
# Verify user
user = get_user_by_token.remote(api_token)
if not user:
return {
"success": False,
"error": "Invalid API token",
"status_code": 401
}
if not user.get("is_active", False):
return {
"success": False,
"error": "Account is inactive",
"status_code": 403
}
# Validate URL
url = data.get("url")
if not url:
return {
"success": False,
"error": "URL is required",
"status_code": 400
}
# Check credits
if user.get("credits", 0) <= 0:
return {
"success": False,
"error": "Insufficient credits",
"status_code": 403
}
# Deduct credit first (1 credit per call)
credit_result = update_user_credits.remote(api_token, -1)
if not credit_result.get("success", False):
return {
"success": False,
"error": credit_result.get("error", "Failed to process credits"),
"status_code": 500
}
# Extract configs
browser_config = data.get("browser_config")
crawler_config = data.get("crawler_config")
# Perform crawl
try:
start_time = time.time()
result = crawl.remote(url, browser_config, crawler_config)
execution_time = time.time() - start_time
# Log successful usage
log_usage.spawn(user["_id"], url, True)
return {
"success": True,
"data": result,
"credits_remaining": credit_result.get("credits"),
"execution_time_seconds": round(execution_time, 2),
"status_code": 200
}
except Exception as e:
# Log failed usage
log_usage.spawn(user["_id"], url, False, str(e))
# Return error
return {
"success": False,
"error": f"Crawling error: {str(e)}",
"credits_remaining": credit_result.get("credits"),
"status_code": 500
}
# Admin endpoints
@app.function(secrets=[admin_secret])
@web_endpoint(method="POST")
def admin_create_user(data: Dict[str, Any]) -> Dict[str, Any]:
"""Admin endpoint to create new users."""
# Validate admin token
admin_token = data.get("admin_token")
if admin_token != os.environ.get("ADMIN_TOKEN"):
return {
"success": False,
"error": "Invalid admin token",
"status_code": 401
}
# Validate input
email = data.get("email")
name = data.get("name")
if not email or not name:
return {
"success": False,
"error": "Email and name are required",
"status_code": 400
}
# Create user
user = create_user.remote(email, name)
if "error" in user:
return {
"success": False,
"error": user["error"],
"status_code": 400
}
return {
"success": True,
"data": {
"user_id": user["_id"],
"email": user["email"],
"name": user["name"],
"api_token": user["api_token"],
"credits": user["credits"],
"created_at": user["created_at"].isoformat() if isinstance(user["created_at"], datetime) else user["created_at"]
},
"status_code": 201
}
@app.function(secrets=[admin_secret])
@web_endpoint(method="POST")
def admin_update_credits(data: Dict[str, Any]) -> Dict[str, Any]:
"""Admin endpoint to update user credits."""
# Validate admin token
admin_token = data.get("admin_token")
if admin_token != os.environ.get("ADMIN_TOKEN"):
return {
"success": False,
"error": "Invalid admin token",
"status_code": 401
}
# Validate input
api_token = data.get("api_token")
amount = data.get("amount")
if not api_token:
return {
"success": False,
"error": "API token is required",
"status_code": 400
}
if not isinstance(amount, int):
return {
"success": False,
"error": "Amount must be an integer",
"status_code": 400
}
# Update credits
result = update_user_credits.remote(api_token, amount)
if not result.get("success", False):
return {
"success": False,
"error": result.get("error", "Failed to update credits"),
"status_code": 400
}
return {
"success": True,
"data": {
"credits": result["credits"]
},
"status_code": 200
}
@app.function(secrets=[admin_secret])
@web_endpoint(method="GET")
def admin_get_users(admin_token: str) -> Dict[str, Any]:
"""Admin endpoint to list all users."""
# Validate admin token
if admin_token != os.environ.get("ADMIN_TOKEN"):
return {
"success": False,
"error": "Invalid admin token",
"status_code": 401
}
users = get_all_users.remote()
return {
"success": True,
"data": users,
"status_code": 200
}
@app.function(volumes={"/data": user_db})
def get_all_users() -> List[Dict[str, Any]]:
"""Get all users (for admin)."""
from pymongo import MongoClient
client = MongoClient("mongodb://localhost:27017")
db = client.crawl4ai_db
users = []
for user in db.users.find():
# Convert ObjectId to string
user["_id"] = str(user["_id"])
# Convert datetime to ISO format
for field in ["created_at", "updated_at"]:
if field in user and isinstance(user[field], datetime):
user[field] = user[field].isoformat()
users.append(user)
return users
# Public endpoints
@app.function()
@web_endpoint(method="GET")
def health_check() -> Dict[str, Any]:
"""Health check endpoint."""
return {
"status": "online",
"service": APP_NAME,
"version": CRAWL4AI_VERSION,
"timestamp": datetime.utcnow().isoformat()
}
@app.function()
@web_endpoint(method="GET")
def check_credits(api_token: str) -> Dict[str, Any]:
"""Check user credits."""
if not api_token:
return {
"success": False,
"error": "API token is required",
"status_code": 401
}
user = get_user_by_token.remote(api_token)
if not user:
return {
"success": False,
"error": "Invalid API token",
"status_code": 401
}
return {
"success": True,
"data": {
"credits": user["credits"],
"email": user["email"],
"name": user["name"]
},
"status_code": 200
}
# Local entrypoint for testing
@app.local_entrypoint()
def main(url: str = "https://www.modal.com"):
"""Command line entrypoint for local testing."""
print("Initializing database...")
init_db.remote()
print(f"Testing crawl on URL: {url}")
result = crawl.remote(url)
# Print sample of result
print("\nCrawl Result Sample:")
if "title" in result:
print(f"Title: {result['title']}")
if "status" in result:
print(f"Status: {result['status']}")
if "links" in result:
print(f"Links found: {len(result['links'])}")
if "markdown_v2" in result and result["markdown_v2"] and "raw_markdown" in result["markdown_v2"]:
print("\nMarkdown Preview (first 300 chars):")
print(result["markdown_v2"]["raw_markdown"][:300] + "...")

View File

@@ -1,127 +0,0 @@
import modal
from typing import Optional, Dict, Any
# Create a custom image with Crawl4ai and its dependencies
# "pip install crawl4ai",
image = modal.Image.debian_slim(python_version="3.10").pip_install(["fastapi[standard]"]).run_commands(
"apt-get update",
"apt-get install -y software-properties-common",
"apt-get install -y git",
"apt-add-repository non-free",
"apt-add-repository contrib",
"pip install -U git+https://github.com/unclecode/crawl4ai.git@next",
"pip install -U fastapi[standard]",
"pip install -U pydantic",
"crawl4ai-setup", # This installs playwright and downloads chromium
    # Print fastapi version
"python -m fastapi --version",
)
# Define the app
app = modal.App("crawl4ai", image=image)
# Define default configurations
DEFAULT_BROWSER_CONFIG = {
"headless": True,
"verbose": False,
}
DEFAULT_CRAWLER_CONFIG = {
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.48,
"threshold_type": "fixed"
}
}
}
}
}
}
}
@app.function(timeout=300) # 5 minute timeout
async def crawl(
url: str,
browser_config: Optional[Dict[str, Any]] = None,
crawler_config: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
"""
Crawl a given URL using Crawl4ai.
Args:
url: The URL to crawl
browser_config: Optional browser configuration to override defaults
crawler_config: Optional crawler configuration to override defaults
Returns:
A dictionary containing the crawl results
"""
from crawl4ai import (
AsyncWebCrawler,
BrowserConfig,
CrawlerRunConfig,
CrawlResult
)
# Prepare browser config using the loader method
if browser_config is None:
browser_config = DEFAULT_BROWSER_CONFIG
browser_config_obj = BrowserConfig.load(browser_config)
# Prepare crawler config using the loader method
if crawler_config is None:
crawler_config = DEFAULT_CRAWLER_CONFIG
crawler_config_obj = CrawlerRunConfig.load(crawler_config)
# Perform the crawl
async with AsyncWebCrawler(config=browser_config_obj) as crawler:
result: CrawlResult = await crawler.arun(url=url, config=crawler_config_obj)
# Return serializable results
try:
# Try newer Pydantic v2 method
return result.model_dump()
except AttributeError:
try:
# Try older Pydantic v1 method
return result.__dict__
except AttributeError:
# Fallback to returning the raw result
return result
@app.function()
@modal.web_endpoint(method="POST")
def crawl_endpoint(data: Dict[str, Any]) -> Dict[str, Any]:
"""
Web endpoint that accepts POST requests with JSON data containing:
- url: The URL to crawl
- browser_config: Optional browser configuration
- crawler_config: Optional crawler configuration
Returns the crawl results.
"""
url = data.get("url")
if not url:
return {"error": "URL is required"}
browser_config = data.get("browser_config")
crawler_config = data.get("crawler_config")
return crawl.remote(url, browser_config, crawler_config)
@app.local_entrypoint()
def main(url: str = "https://www.modal.com"):
"""
Command line entrypoint for local testing.
"""
result = crawl.remote(url)
print(result)

View File

@@ -1,453 +0,0 @@
# Deploying Crawl4ai with Modal: A Comprehensive Tutorial
Hey there! UncleCode here. I'm excited to show you how to deploy Crawl4ai using Modal - a fantastic serverless platform that makes deployment super simple and scalable.
In this tutorial, I'll walk you through deploying your own Crawl4ai instance on Modal's infrastructure. This will give you a powerful, scalable web crawling solution without having to worry about infrastructure management.
## What is Modal?
Modal is a serverless platform that allows you to run Python functions in the cloud without managing servers. It's perfect for deploying Crawl4ai because:
1. It handles all the infrastructure for you
2. It scales automatically based on demand
3. It makes deployment incredibly simple
## Prerequisites
Before we get started, you'll need:
- A Modal account (sign up at [modal.com](https://modal.com))
- Python 3.10 or later installed on your local machine
- Basic familiarity with Python and command-line operations
## Step 1: Setting Up Your Modal Account
First, sign up for a Modal account at [modal.com](https://modal.com) if you haven't already. Modal offers a generous free tier that's perfect for getting started.
After signing up, install the Modal CLI and authenticate:
```bash
pip install modal
modal token new
```
This will open a browser window where you can authenticate and generate a token for the CLI.
## Step 2: Creating Your Crawl4ai Deployment
Now, let's create a Python file called `crawl4ai_modal.py` with our deployment code:
```python
import modal
from typing import Optional, Dict, Any
# Create a custom image with Crawl4ai and its dependencies
image = modal.Image.debian_slim(python_version="3.10").pip_install(
["fastapi[standard]"]
).run_commands(
"apt-get update",
"apt-get install -y software-properties-common",
"apt-get install -y git",
"apt-add-repository non-free",
"apt-add-repository contrib",
"pip install -U crawl4ai",
"pip install -U fastapi[standard]",
"pip install -U pydantic",
"crawl4ai-setup", # This installs playwright and downloads chromium
)
# Define the app
app = modal.App("crawl4ai", image=image)
# Define default configurations
DEFAULT_BROWSER_CONFIG = {
"headless": True,
"verbose": False,
}
DEFAULT_CRAWLER_CONFIG = {
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.48,
"threshold_type": "fixed"
}
}
}
}
}
}
}
@app.function(timeout=300) # 5 minute timeout
async def crawl(
url: str,
browser_config: Optional[Dict[str, Any]] = None,
crawler_config: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
"""
Crawl a given URL using Crawl4ai.
Args:
url: The URL to crawl
browser_config: Optional browser configuration to override defaults
crawler_config: Optional crawler configuration to override defaults
Returns:
A dictionary containing the crawl results
"""
from crawl4ai import (
AsyncWebCrawler,
BrowserConfig,
CrawlerRunConfig,
CrawlResult
)
# Prepare browser config using the loader method
if browser_config is None:
browser_config = DEFAULT_BROWSER_CONFIG
browser_config_obj = BrowserConfig.load(browser_config)
# Prepare crawler config using the loader method
if crawler_config is None:
crawler_config = DEFAULT_CRAWLER_CONFIG
crawler_config_obj = CrawlerRunConfig.load(crawler_config)
# Perform the crawl
async with AsyncWebCrawler(config=browser_config_obj) as crawler:
result: CrawlResult = await crawler.arun(url=url, config=crawler_config_obj)
# Return serializable results
try:
# Try newer Pydantic v2 method
return result.model_dump()
except AttributeError:
try:
# Try older Pydantic v1 method
return result.dict()
except AttributeError:
# Fallback to manual conversion
return {
"url": result.url,
"title": result.title,
"status": result.status,
"content": str(result.content) if hasattr(result, "content") else None,
"links": [{"url": link.url, "text": link.text} for link in result.links] if hasattr(result, "links") else [],
"markdown_v2": {
"raw_markdown": result.markdown_v2.raw_markdown if hasattr(result, "markdown_v2") else None
}
}
@app.function()
@modal.web_endpoint(method="POST")
def crawl_endpoint(data: Dict[str, Any]) -> Dict[str, Any]:
"""
Web endpoint that accepts POST requests with JSON data containing:
- url: The URL to crawl
- browser_config: Optional browser configuration
- crawler_config: Optional crawler configuration
Returns the crawl results.
"""
url = data.get("url")
if not url:
return {"error": "URL is required"}
browser_config = data.get("browser_config")
crawler_config = data.get("crawler_config")
return crawl.remote(url, browser_config, crawler_config)
@app.local_entrypoint()
def main(url: str = "https://www.modal.com"):
"""
Command line entrypoint for local testing.
"""
result = crawl.remote(url)
print(result)
```
## Step 3: Understanding the Code Components
Let's break down what's happening in this code:
### 1. Image Definition
```python
image = modal.Image.debian_slim(python_version="3.10").pip_install(
["fastapi[standard]"]
).run_commands(
"apt-get update",
"apt-get install -y software-properties-common",
"apt-get install -y git",
"apt-add-repository non-free",
"apt-add-repository contrib",
"pip install -U git+https://github.com/unclecode/crawl4ai.git@next",
"pip install -U fastapi[standard]",
"pip install -U pydantic",
"crawl4ai-setup", # This installs playwright and downloads chromium
)
```
This section defines the container image that Modal will use to run your code. It:
- Starts with a Debian Slim base image with Python 3.10
- Installs FastAPI
- Updates the system packages
- Installs Git and other dependencies
- Installs Crawl4ai from the GitHub repository
- Runs the Crawl4ai setup to install Playwright and download Chromium
### 2. Modal App Definition
```python
app = modal.App("crawl4ai", image=image)
```
This creates a Modal application named "crawl4ai" that uses the image we defined above.
### 3. Default Configurations
```python
DEFAULT_BROWSER_CONFIG = {
"headless": True,
"verbose": False,
}
DEFAULT_CRAWLER_CONFIG = {
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.48,
"threshold_type": "fixed"
}
}
}
}
}
}
}
```
These define the default configurations for the browser and crawler. You can customize these settings based on your specific needs.
### 4. The Crawl Function
```python
@app.function(timeout=300)
async def crawl(url, browser_config, crawler_config):
# Function implementation
```
This is the main function that performs the crawling. It:
- Takes a URL and optional configurations
- Sets up the browser and crawler with those configurations
- Performs the crawl
- Returns the results in a serializable format
The `@app.function(timeout=300)` decorator tells Modal to run this function in the cloud with a 5-minute timeout.
### 5. The Web Endpoint
```python
@app.function()
@modal.web_endpoint(method="POST")
def crawl_endpoint(data: Dict[str, Any]) -> Dict[str, Any]:
# Function implementation
```
This creates a web endpoint that accepts POST requests. It:
- Extracts the URL and configurations from the request
- Calls the crawl function with those parameters
- Returns the results
### 6. Local Entrypoint
```python
@app.local_entrypoint()
def main(url: str = "https://www.modal.com"):
# Function implementation
```
This provides a way to test the application from the command line.
## Step 4: Testing Locally
Before deploying, let's test our application locally:
```bash
modal run crawl4ai_modal.py --url "https://example.com"
```
This command will:
1. Upload your code to Modal
2. Create the necessary containers
3. Run the `main` function with the specified URL
4. Return the results
Modal will handle all the infrastructure setup for you. You should see the crawling results printed to your console.
## Step 5: Deploying Your Application
Once you're satisfied with the local testing, it's time to deploy:
```bash
modal deploy crawl4ai_modal.py
```
This will deploy your application to Modal's cloud. The deployment process will output URLs for your web endpoints.
You should see output similar to:
```
✓ Deployed crawl4ai.
URLs:
crawl_endpoint => https://your-username--crawl-endpoint.modal.run
```
Save this URL - you'll need it to make requests to your deployment.
## Step 6: Using Your Deployment
Now that your application is deployed, you can use it by sending POST requests to the endpoint URL:
```bash
curl -X POST https://your-username--crawl-endpoint.modal.run \
-H "Content-Type: application/json" \
-d '{"url": "https://example.com"}'
```
Or in Python:
```python
import requests
response = requests.post(
"https://your-username--crawl-endpoint.modal.run",
json={"url": "https://example.com"}
)
result = response.json()
print(result)
```
You can also customize the browser and crawler configurations:
```python
requests.post(
"https://your-username--crawl-endpoint.modal.run",
json={
"url": "https://example.com",
"browser_config": {
"headless": False,
"verbose": True
},
"crawler_config": {
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.6, # Adjusted threshold
"threshold_type": "fixed"
}
}
}
}
}
}
}
}
)
```
## Step 7: Calling Your Deployment from Another Python Script
You can also call your deployed function directly from another Python script:
```python
import modal
# Get a reference to the deployed function
crawl_function = modal.Function.from_name("crawl4ai", "crawl")
# Call the function
result = crawl_function.remote("https://example.com")
print(result)
```
## Understanding Modal's Execution Flow
To understand how Modal works, it's important to know:
1. **Local vs. Remote Execution**: When you call a function with `.remote()`, it runs in Modal's cloud, not on your local machine.
2. **Container Lifecycle**: Modal creates containers on-demand and destroys them when they're not needed.
3. **Caching**: Modal caches your container images to speed up subsequent runs.
4. **Serverless Scaling**: Modal automatically scales your application based on demand.
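To make point 1 concrete, here is a toy stand-in (emphatically not Modal's real implementation) showing the dispatch pattern: calling the function directly runs it in the current process, while the `.remote(...)` attribute routes the call elsewhere — in Modal's case, to a cloud container.

```python
class ToyRemoteFunction:
    """Toy illustration of the local-vs-remote call pattern (not Modal internals)."""
    def __init__(self, fn):
        self.fn = fn

    def __call__(self, *args, **kwargs):
        # Direct call: runs in the current process
        return self.fn(*args, **kwargs)

    def remote(self, *args, **kwargs):
        # Real Modal would serialize the call and execute it in a cloud
        # container; here we just tag the result so the difference is visible.
        return {"ran_in": "cloud (simulated)", "result": self.fn(*args, **kwargs)}

@ToyRemoteFunction  # plays the role of @app.function()
def crawl(url):
    return f"crawled {url}"

local = crawl("https://example.com")          # runs locally
remote = crawl.remote("https://example.com")  # in Modal, would run in the cloud
```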
## Customizing Your Deployment
You can customize your deployment in several ways:
### Changing the Crawl4ai Version
To use a different version of Crawl4ai, update the installation command in the image definition:
```python
"pip install -U git+https://github.com/unclecode/crawl4ai.git@main", # Use main branch
```
### Adjusting Resource Limits
You can change the resources allocated to your functions:
```python
@app.function(timeout=600, cpu=2, memory=4096) # 10 minute timeout, 2 CPUs, 4GB RAM
async def crawl(...):
    # Function implementation
```
### Keeping Containers Warm
To reduce cold start times, you can keep containers warm:
```python
@app.function(keep_warm=1) # Keep 1 container warm
async def crawl(...):
    # Function implementation
```
## Conclusion
That's it! You've successfully deployed Crawl4ai on Modal. You now have a scalable web crawling solution that can handle as many requests as you need without requiring any infrastructure management.
The beauty of this setup is its simplicity - Modal handles all the hard parts, letting you focus on using Crawl4ai to extract the data you need.
Feel free to reach out if you have any questions or need help with your deployment!
Happy crawling!
- UncleCode
## Additional Resources
- [Modal Documentation](https://modal.com/docs)
- [Crawl4ai GitHub Repository](https://github.com/unclecode/crawl4ai)
- [Crawl4ai Documentation](https://docs.crawl4ai.com)


@@ -1,317 +0,0 @@
#!/usr/bin/env python3
"""
Crawl4ai API Testing Script
This script tests all endpoints of the Crawl4ai API service and demonstrates their usage.
"""
import argparse
import json
import sys
import time
from typing import Dict, Any, List, Optional
import requests
# Colors for terminal output
class Colors:
    HEADER = '\033[95m'
    BLUE = '\033[94m'
    GREEN = '\033[92m'
    YELLOW = '\033[93m'
    RED = '\033[91m'
    ENDC = '\033[0m'
    BOLD = '\033[1m'
    UNDERLINE = '\033[4m'

def print_header(text: str) -> None:
    """Print a formatted header."""
    print(f"\n{Colors.HEADER}{Colors.BOLD}{'=' * 80}{Colors.ENDC}")
    print(f"{Colors.HEADER}{Colors.BOLD}{text.center(80)}{Colors.ENDC}")
    print(f"{Colors.HEADER}{Colors.BOLD}{'=' * 80}{Colors.ENDC}\n")

def print_step(text: str) -> None:
    """Print a formatted step description."""
    print(f"{Colors.BLUE}{Colors.BOLD}>> {text}{Colors.ENDC}")

def print_success(text: str) -> None:
    """Print a success message."""
    print(f"{Colors.GREEN}{text}{Colors.ENDC}")

def print_warning(text: str) -> None:
    """Print a warning message."""
    print(f"{Colors.YELLOW}{text}{Colors.ENDC}")

def print_error(text: str) -> None:
    """Print an error message."""
    print(f"{Colors.RED}{text}{Colors.ENDC}")

def print_json(data: Dict[str, Any]) -> None:
    """Pretty print JSON data."""
    print(json.dumps(data, indent=2))

def make_request(method: str, url: str, params: Optional[Dict[str, Any]] = None,
                 json_data: Optional[Dict[str, Any]] = None,
                 expected_status: int = 200) -> Dict[str, Any]:
    """Make an HTTP request and handle errors."""
    print_step(f"Making {method.upper()} request to {url}")
    if params:
        print(f" Parameters: {params}")
    if json_data:
        print(f" JSON Data: {json_data}")
    try:
        response = requests.request(
            method=method,
            url=url,
            params=params,
            json=json_data,
            timeout=300  # 5 minute timeout for crawling operations
        )
        status_code = response.status_code
        print(f" Status Code: {status_code}")
        try:
            data = response.json()
            print(" Response:")
            print_json(data)
            if status_code != expected_status:
                print_error(f"Expected status code {expected_status}, got {status_code}")
                return data
            print_success("Request successful")
            return data
        except ValueError:
            print_error("Response is not valid JSON")
            print(response.text)
            return {"error": "Invalid JSON response"}
    except requests.RequestException as e:
        print_error(f"Request failed: {str(e)}")
        return {"error": str(e)}
def test_health_check(base_url: str) -> bool:
    """Test the health check endpoint."""
    print_header("Testing Health Check Endpoint")
    response = make_request("GET", f"{base_url}/health_check")
    if "status" in response and response["status"] == "online":
        print_success("Health check passed")
        return True
    else:
        print_error("Health check failed")
        return False

def test_admin_create_user(base_url: str, admin_token: str, email: str, name: str) -> Optional[str]:
    """Test creating a new user."""
    print_header("Testing Admin User Creation")
    response = make_request(
        "POST",
        f"{base_url}/admin_create_user",
        json_data={
            "admin_token": admin_token,
            "email": email,
            "name": name
        },
        expected_status=201
    )
    if response.get("success") and "data" in response:
        api_token = response["data"].get("api_token")
        if api_token:
            print_success(f"User created successfully with API token: {api_token}")
            return api_token
    print_error("Failed to create user")
    return None

def test_check_credits(base_url: str, api_token: str) -> Optional[int]:
    """Test checking user credits."""
    print_header("Testing Check Credits Endpoint")
    response = make_request(
        "GET",
        f"{base_url}/check_credits",
        params={"api_token": api_token}
    )
    if response.get("success") and "data" in response:
        credits = response["data"].get("credits")
        if credits is not None:
            print_success(f"User has {credits} credits")
            return credits
    print_error("Failed to check credits")
    return None

def test_crawl_endpoint(base_url: str, api_token: str, url: str) -> bool:
    """Test the crawl endpoint."""
    print_header("Testing Crawl Endpoint")
    response = make_request(
        "POST",
        f"{base_url}/crawl_endpoint",
        json_data={
            "api_token": api_token,
            "url": url
        }
    )
    if response.get("success") and "data" in response:
        print_success("Crawl completed successfully")
        # Display some crawl result data
        data = response["data"]
        if "title" in data:
            print(f"Page Title: {data['title']}")
        if "status" in data:
            print(f"Status: {data['status']}")
        if "links" in data:
            print(f"Links found: {len(data['links'])}")
        if "markdown_v2" in data and data["markdown_v2"] and "raw_markdown" in data["markdown_v2"]:
            print("Markdown Preview (first 200 chars):")
            print(data["markdown_v2"]["raw_markdown"][:200] + "...")
        credits_remaining = response.get("credits_remaining")
        if credits_remaining is not None:
            print(f"Credits remaining: {credits_remaining}")
        return True
    print_error("Crawl failed")
    return False

def test_admin_update_credits(base_url: str, admin_token: str, api_token: str, amount: int) -> bool:
    """Test updating user credits."""
    print_header("Testing Admin Update Credits")
    response = make_request(
        "POST",
        f"{base_url}/admin_update_credits",
        json_data={
            "admin_token": admin_token,
            "api_token": api_token,
            "amount": amount
        }
    )
    if response.get("success") and "data" in response:
        print_success(f"Credits updated successfully, new balance: {response['data'].get('credits')}")
        return True
    print_error("Failed to update credits")
    return False

def test_admin_get_users(base_url: str, admin_token: str) -> List[Dict[str, Any]]:
    """Test getting all users."""
    print_header("Testing Admin Get All Users")
    response = make_request(
        "GET",
        f"{base_url}/admin_get_users",
        params={"admin_token": admin_token}
    )
    if response.get("success") and "data" in response:
        users = response["data"]
        print_success(f"Retrieved {len(users)} users")
        return users
    print_error("Failed to get users")
    return []
def run_full_test(base_url: str, admin_token: str) -> None:
    """Run all tests in sequence."""
    # Remove trailing slash if present
    base_url = base_url.rstrip('/')

    # Test 1: Health Check
    if not test_health_check(base_url):
        print_error("Health check failed, aborting tests")
        sys.exit(1)

    # Test 2: Create a test user
    email = f"test-user-{int(time.time())}@example.com"
    name = "Test User"
    api_token = test_admin_create_user(base_url, admin_token, email, name)
    if not api_token:
        print_error("User creation failed, aborting tests")
        sys.exit(1)

    # Test 3: Check initial credits
    initial_credits = test_check_credits(base_url, api_token)
    if initial_credits is None:
        print_error("Credit check failed, aborting tests")
        sys.exit(1)

    # Test 4: Perform a crawl
    test_url = "https://news.ycombinator.com"
    crawl_success = test_crawl_endpoint(base_url, api_token, test_url)
    if not crawl_success:
        print_warning("Crawl test failed, but continuing with other tests")

    # Test 5: Check credits after crawl
    post_crawl_credits = test_check_credits(base_url, api_token)
    if post_crawl_credits is not None and initial_credits is not None:
        if post_crawl_credits == initial_credits - 1:
            print_success("Credit deduction verified")
        else:
            print_warning(f"Unexpected credit change: {initial_credits} -> {post_crawl_credits}")

    # Test 6: Add credits
    add_credits_amount = 50
    if test_admin_update_credits(base_url, admin_token, api_token, add_credits_amount):
        print_success(f"Added {add_credits_amount} credits")

    # Test 7: Check credits after addition
    post_addition_credits = test_check_credits(base_url, api_token)
    if post_addition_credits is not None and post_crawl_credits is not None:
        if post_addition_credits == post_crawl_credits + add_credits_amount:
            print_success("Credit addition verified")
        else:
            print_warning(f"Unexpected credit change: {post_crawl_credits} -> {post_addition_credits}")

    # Test 8: Get all users
    users = test_admin_get_users(base_url, admin_token)
    if users:
        # Check if our test user is in the list
        test_user = next((user for user in users if user.get("email") == email), None)
        if test_user:
            print_success("Test user found in users list")
        else:
            print_warning("Test user not found in users list")

    # Final report
    print_header("Test Summary")
    print_success("All endpoints tested successfully")
    print(f"Test user created with email: {email}")
    print(f"API token: {api_token}")
    print(f"Final credit balance: {post_addition_credits}")

def main():
    parser = argparse.ArgumentParser(description="Test Crawl4ai API endpoints")
    parser.add_argument("--base-url", required=True, help="Base URL of the Crawl4ai API (e.g., https://username--crawl4ai-api.modal.run)")
    parser.add_argument("--admin-token", required=True, help="Admin token for authentication")
    args = parser.parse_args()

    print_header("Crawl4ai API Test Script")
    print(f"Testing API at: {args.base_url}")
    run_full_test(args.base_url, args.admin_token)

if __name__ == "__main__":
    main()


@@ -0,0 +1,123 @@
# Builtin Browser in Crawl4AI
This document explains the builtin browser feature in Crawl4AI and how to use it effectively.
## What is the Builtin Browser?
The builtin browser is a persistent Chrome instance that Crawl4AI manages for you. It runs in the background and can be used by multiple crawling operations, eliminating the need to start and stop browsers for each crawl.
Benefits include:
- **Faster startup times** - The browser is already running, so your scripts start faster
- **Shared resources** - All your crawling scripts can use the same browser instance
- **Simplified management** - No need to worry about CDP URLs or browser processes
- **Persistent cookies and sessions** - Browser state persists between script runs
- **Less resource usage** - Only one browser instance for multiple scripts
## Using the Builtin Browser
### In Python Code
Using the builtin browser in your code is simple:
```python
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
# Create browser config with builtin mode
browser_config = BrowserConfig(
    browser_mode="builtin",  # This is the key setting!
    headless=True            # Can be headless or not
)
# Create the crawler
crawler = AsyncWebCrawler(config=browser_config)
# Use it - no need to explicitly start()
result = await crawler.arun("https://example.com")
```
Key points:
1. Set `browser_mode="builtin"` in your BrowserConfig
2. No need for explicit `start()` call - the crawler will automatically connect to the builtin browser
3. No need to use a context manager or call `close()` - the browser stays running
### Via CLI
The CLI provides commands to manage the builtin browser:
```bash
# Start the builtin browser
crwl browser start
# Check its status
crwl browser status
# Open a visible window to see what the browser is doing
crwl browser view --url https://example.com
# Stop it when no longer needed
crwl browser stop
# Restart with different settings
crwl browser restart --no-headless
```
When crawling via CLI, simply add the builtin browser mode:
```bash
crwl https://example.com -b "browser_mode=builtin"
```
## How It Works
1. When a crawler with `browser_mode="builtin"` is created:
- It checks if a builtin browser is already running
- If not, it automatically launches one
- It connects to the browser via CDP (Chrome DevTools Protocol)
2. The browser process continues running after your script exits
- This means it's ready for the next crawl
- You can manage it via the CLI commands
3. During installation, Crawl4AI attempts to create a builtin browser automatically
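The "is a builtin browser already running" check in step 1 comes down to probing the CDP endpoint. A rough sketch of such a probe, assuming Chrome's default local debugging port (this is illustrative, not Crawl4AI's actual implementation):

```python
import json
import urllib.request
from urllib.error import URLError

def cdp_endpoint_alive(cdp_url: str = "http://localhost:9222", timeout: float = 1.0) -> bool:
    """Return True if a Chrome DevTools Protocol endpoint answers at cdp_url."""
    try:
        with urllib.request.urlopen(f"{cdp_url}/json/version", timeout=timeout) as resp:
            info = json.load(resp)
        # A healthy CDP endpoint reports its websocket debugger URL
        return "webSocketDebuggerUrl" in info
    except (URLError, OSError, ValueError):
        return False
```

If the probe fails, the crawler launches a fresh browser with remote debugging enabled and connects to it over the same protocol.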
## Example
See the [builtin_browser_example.py](builtin_browser_example.py) file for a complete example.
Run it with:
```bash
python builtin_browser_example.py
```
## When to Use
The builtin browser is ideal for:
- Scripts that run frequently
- Development and testing workflows
- Applications that need to minimize startup time
- Systems where you want to manage browser instances centrally
You might not want to use it when:
- Running one-off scripts
- When you need different browser configurations for different tasks
- In environments where persistent processes are not allowed
## Troubleshooting
If you encounter issues:
1. Check the browser status:
```bash
crwl browser status
```
2. Try restarting it:
```bash
crwl browser restart
```
3. If problems persist, stop it and let Crawl4AI start a fresh one:
```bash
crwl browser stop
```


@@ -0,0 +1,79 @@
import asyncio
import time
from crawl4ai.async_webcrawler import AsyncWebCrawler, CacheMode
from crawl4ai.async_configs import CrawlerRunConfig
from crawl4ai.async_dispatcher import MemoryAdaptiveDispatcher, RateLimiter
VERBOSE = False
async def crawl_sequential(urls):
    config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, verbose=VERBOSE)
    results = []
    start_time = time.perf_counter()
    async with AsyncWebCrawler() as crawler:
        for url in urls:
            result_container = await crawler.arun(url=url, config=config)
            results.append(result_container[0])
    total_time = time.perf_counter() - start_time
    return total_time, results

async def crawl_parallel_dispatcher(urls):
    config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, verbose=VERBOSE)
    # Dispatcher with rate limiter enabled (default behavior)
    dispatcher = MemoryAdaptiveDispatcher(
        rate_limiter=RateLimiter(base_delay=(1.0, 3.0), max_delay=60.0, max_retries=3),
        max_session_permit=50,
    )
    start_time = time.perf_counter()
    async with AsyncWebCrawler() as crawler:
        result_container = await crawler.arun_many(urls=urls, config=config, dispatcher=dispatcher)
        results = []
        if isinstance(result_container, list):
            results = result_container
        else:
            async for res in result_container:
                results.append(res)
    total_time = time.perf_counter() - start_time
    return total_time, results

async def crawl_parallel_no_rate_limit(urls):
    config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, verbose=VERBOSE)
    # Dispatcher with no rate limiter and a high session permit to avoid queuing
    dispatcher = MemoryAdaptiveDispatcher(
        rate_limiter=None,
        max_session_permit=len(urls)  # allow all URLs concurrently
    )
    start_time = time.perf_counter()
    async with AsyncWebCrawler() as crawler:
        result_container = await crawler.arun_many(urls=urls, config=config, dispatcher=dispatcher)
        results = []
        if isinstance(result_container, list):
            results = result_container
        else:
            async for res in result_container:
                results.append(res)
    total_time = time.perf_counter() - start_time
    return total_time, results

async def main():
    urls = ["https://example.com"] * 100

    print(f"Crawling {len(urls)} URLs sequentially...")
    seq_time, seq_results = await crawl_sequential(urls)
    print(f"Sequential crawling took: {seq_time:.2f} seconds\n")

    print(f"Crawling {len(urls)} URLs in parallel using arun_many with dispatcher (with rate limit)...")
    disp_time, disp_results = await crawl_parallel_dispatcher(urls)
    print(f"Parallel (dispatcher with rate limiter) took: {disp_time:.2f} seconds\n")

    print(f"Crawling {len(urls)} URLs in parallel using dispatcher with no rate limiter...")
    no_rl_time, no_rl_results = await crawl_parallel_no_rate_limit(urls)
    print(f"Parallel (dispatcher without rate limiter) took: {no_rl_time:.2f} seconds\n")

    print("Crawl4ai - Crawling Comparison")
    print("--------------------------------------------------------")
    print(f"Sequential crawling took: {seq_time:.2f} seconds")
    print(f"Parallel (dispatcher with rate limiter) took: {disp_time:.2f} seconds")
    print(f"Parallel (dispatcher without rate limiter) took: {no_rl_time:.2f} seconds")

if __name__ == "__main__":
    asyncio.run(main())


@@ -0,0 +1,86 @@
#!/usr/bin/env python3
"""
Builtin Browser Example
This example demonstrates how to use Crawl4AI's builtin browser feature,
which simplifies the browser management process. With builtin mode:
- No need to manually start or connect to a browser
- No need to manage CDP URLs or browser processes
- Automatically connects to an existing browser or launches one if needed
- Browser persists between script runs, reducing startup time
- No explicit cleanup or close() calls needed
The example also demonstrates "auto-starting" where you don't need to explicitly
call start() method on the crawler.
"""
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
import time
async def crawl_with_builtin_browser():
    """
    Simple example of crawling with the builtin browser.
    Key features:
    1. browser_mode="builtin" in BrowserConfig
    2. No explicit start() call needed
    3. No explicit close() needed
    """
    print("\n=== Crawl4AI Builtin Browser Example ===\n")

    # Create a browser configuration with builtin mode
    browser_config = BrowserConfig(
        browser_mode="builtin",  # This is the key setting!
        headless=True            # Can run headless for background operation
    )

    # Create crawler run configuration
    crawler_config = CrawlerRunConfig(
        cache_mode=CacheMode.BYPASS,  # Skip cache for this demo
        screenshot=True,              # Take a screenshot
        verbose=True                  # Show verbose logging
    )

    # Create the crawler instance
    # Note: We don't need to use "async with" context manager
    crawler = AsyncWebCrawler(config=browser_config)

    # Start crawling several URLs - no explicit start() needed!
    # The crawler will automatically connect to the builtin browser
    print("\n➡️ Crawling first URL...")
    t0 = time.time()
    result1 = await crawler.arun(
        url="https://crawl4ai.com",
        config=crawler_config
    )
    t1 = time.time()
    print(f"✅ First URL crawled in {t1-t0:.2f} seconds")
    print(f"   Got {len(result1.markdown.raw_markdown)} characters of content")
    print(f"   Title: {result1.metadata.get('title', 'No title')}")

    # Try another URL - the browser is already running, so this should be faster
    print("\n➡️ Crawling second URL...")
    t0 = time.time()
    result2 = await crawler.arun(
        url="https://example.com",
        config=crawler_config
    )
    t1 = time.time()
    print(f"✅ Second URL crawled in {t1-t0:.2f} seconds")
    print(f"   Got {len(result2.markdown.raw_markdown)} characters of content")
    print(f"   Title: {result2.metadata.get('title', 'No title')}")

    # The builtin browser continues running in the background
    # No need to explicitly close it
    print("\n🔄 The builtin browser remains running for future use")
    print("   You can use 'crwl browser status' to check its status")
    print("   or 'crwl browser stop' to stop it when completely done")

async def main():
    """Run the example"""
    await crawl_with_builtin_browser()

if __name__ == "__main__":
    asyncio.run(main())


@@ -0,0 +1,209 @@
"""
CrawlerMonitor Example
This example demonstrates how to use the CrawlerMonitor component
to visualize and track web crawler operations in real-time.
"""
import time
import uuid
import random
import threading
from crawl4ai.components.crawler_monitor import CrawlerMonitor
from crawl4ai.models import CrawlStatus
def simulate_webcrawler_operations(monitor, num_tasks=20):
    """
    Simulates a web crawler's operations with multiple tasks and different states.
    Args:
        monitor: The CrawlerMonitor instance
        num_tasks: Number of tasks to simulate
    """
    print(f"Starting simulation with {num_tasks} tasks...")

    # Create and register all tasks first
    task_ids = []
    for i in range(num_tasks):
        task_id = str(uuid.uuid4())
        url = f"https://example.com/page{i}"
        monitor.add_task(task_id, url)
        task_ids.append((task_id, url))
        # Small delay between task creation
        time.sleep(0.2)

    # Process tasks with a variety of different behaviors
    threads = []
    for i, (task_id, url) in enumerate(task_ids):
        # Create a thread for each task
        thread = threading.Thread(
            target=process_task,
            args=(monitor, task_id, url, i)
        )
        thread.daemon = True
        threads.append(thread)

    # Start threads in batches to simulate concurrent processing
    batch_size = 4  # Process 4 tasks at a time
    for i in range(0, len(threads), batch_size):
        batch = threads[i:i+batch_size]
        for thread in batch:
            thread.start()
            time.sleep(0.5)  # Stagger thread start times

        # Wait a bit before starting next batch
        time.sleep(random.uniform(1.0, 3.0))

        # Update queue statistics
        update_queue_stats(monitor)

        # Simulate memory pressure changes
        active_threads = [t for t in threads if t.is_alive()]
        if len(active_threads) > 8:
            monitor.update_memory_status("CRITICAL")
        elif len(active_threads) > 4:
            monitor.update_memory_status("PRESSURE")
        else:
            monitor.update_memory_status("NORMAL")

    # Wait for all threads to complete
    for thread in threads:
        thread.join()

    # Final updates
    update_queue_stats(monitor)
    monitor.update_memory_status("NORMAL")
    print("Simulation completed!")

def process_task(monitor, task_id, url, index):
    """Simulate processing of a single task."""
    # Tasks start in queued state (already added)
    # Simulate waiting in queue
    wait_time = random.uniform(0.5, 3.0)
    time.sleep(wait_time)

    # Start processing - move to IN_PROGRESS
    monitor.update_task(
        task_id=task_id,
        status=CrawlStatus.IN_PROGRESS,
        start_time=time.time(),
        wait_time=wait_time
    )

    # Simulate task processing with memory usage changes
    total_process_time = random.uniform(2.0, 10.0)
    step_time = total_process_time / 5  # Update in 5 steps
    for step in range(5):
        # Simulate increasing then decreasing memory usage
        if step < 3:  # First 3 steps - increasing
            memory_usage = random.uniform(5.0, 20.0) * (step + 1)
        else:  # Last 2 steps - decreasing
            memory_usage = random.uniform(5.0, 20.0) * (5 - step)

        # Update peak memory if this is higher
        peak = max(memory_usage, monitor.get_task_stats(task_id).get("peak_memory", 0))
        monitor.update_task(
            task_id=task_id,
            memory_usage=memory_usage,
            peak_memory=peak
        )
        time.sleep(step_time)

    # Determine final state - 80% success, 20% failure
    if index % 5 == 0:  # Every 5th task fails
        monitor.update_task(
            task_id=task_id,
            status=CrawlStatus.FAILED,
            end_time=time.time(),
            memory_usage=0.0,
            error_message="Connection timeout"
        )
    else:
        monitor.update_task(
            task_id=task_id,
            status=CrawlStatus.COMPLETED,
            end_time=time.time(),
            memory_usage=0.0
        )

def update_queue_stats(monitor):
    """Update queue statistics based on current tasks."""
    task_stats = monitor.get_all_task_stats()

    # Count queued tasks
    queued_tasks = [
        stats for stats in task_stats.values()
        if stats["status"] == CrawlStatus.QUEUED.name
    ]
    total_queued = len(queued_tasks)

    if total_queued > 0:
        current_time = time.time()
        # Calculate wait times
        wait_times = [
            current_time - stats.get("enqueue_time", current_time)
            for stats in queued_tasks
        ]
        highest_wait_time = max(wait_times) if wait_times else 0.0
        avg_wait_time = sum(wait_times) / len(wait_times) if wait_times else 0.0
    else:
        highest_wait_time = 0.0
        avg_wait_time = 0.0

    # Update monitor
    monitor.update_queue_statistics(
        total_queued=total_queued,
        highest_wait_time=highest_wait_time,
        avg_wait_time=avg_wait_time
    )

def main():
    # Initialize the monitor
    monitor = CrawlerMonitor(
        urls_total=20,     # Total URLs to process
        refresh_rate=0.5,  # Update UI twice per second
        enable_ui=True,    # Enable terminal UI
        max_width=120      # Set maximum width to 120 characters
    )

    # Start the monitor
    monitor.start()
    try:
        # Run simulation
        simulate_webcrawler_operations(monitor)
        # Keep monitor running a bit to see final state
        print("Waiting to view final state...")
        time.sleep(5)
    except KeyboardInterrupt:
        print("\nExample interrupted by user")
    finally:
        # Stop the monitor
        monitor.stop()
        print("Example completed!")

    # Print some statistics
    summary = monitor.get_summary()
    print("\nCrawler Statistics Summary:")
    print(f"Total URLs: {summary['urls_total']}")
    print(f"Completed: {summary['urls_completed']}")
    print(f"Completion percentage: {summary['completion_percentage']:.1f}%")
    print(f"Peak memory usage: {summary['peak_memory_percent']:.1f}%")

    # Print task status counts
    status_counts = summary['status_counts']
    print("\nTask Status Counts:")
    for status, count in status_counts.items():
        print(f"  {status}: {count}")

if __name__ == "__main__":
    main()


@@ -18,11 +18,20 @@ Key Features:
import asyncio
import pandas as pd
import numpy as np
import re
import plotly.express as px
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode, LXMLWebScrapingStrategy
from crawl4ai import (
AsyncWebCrawler,
BrowserConfig,
CrawlerRunConfig,
CacheMode,
LXMLWebScrapingStrategy,
)
from crawl4ai import CrawlResult
from typing import List
from IPython.display import HTML
__current_dir__ = __file__.rsplit("/", 1)[0]
class CryptoAlphaGenerator:
"""
@@ -31,134 +40,319 @@ class CryptoAlphaGenerator:
- Liquidity scores
- Momentum-risk ratios
- Machine learning-inspired trading signals
Methods:
analyze_tables(): Process raw tables into trading insights
create_visuals(): Generate institutional-grade visualizations
generate_insights(): Create plain English trading recommendations
"""
def clean_data(self, df: pd.DataFrame) -> pd.DataFrame:
"""
Convert crypto market data to machine-readable format
Handles currency symbols, units (B=Billions), and percentage values
Convert crypto market data to machine-readable format.
Handles currency symbols, units (B=Billions), and percentage values.
"""
# Clean numeric columns
df['Price'] = df['Price'].str.replace('[^\d.]', '', regex=True).astype(float)
df['Market Cap'] = df['Market Cap'].str.extract(r'\$([\d.]+)B')[0].astype(float) * 1e9
df['Volume(24h)'] = df['Volume(24h)'].str.extract(r'\$([\d.]+)B')[0].astype(float) * 1e9
# Make a copy to avoid SettingWithCopyWarning
df = df.copy()
# Clean Price column (handle currency symbols)
df["Price"] = df["Price"].astype(str).str.replace(r"[^\d.]", "", regex=True).astype(float)
# Handle Market Cap and Volume, considering both Billions and Trillions
def convert_large_numbers(value):
if pd.isna(value):
return float('nan')
value = str(value)
multiplier = 1
if 'B' in value:
multiplier = 1e9
elif 'T' in value:
multiplier = 1e12
# Handle cases where the value might already be numeric
cleaned_value = re.sub(r"[^\d.]", "", value)
return float(cleaned_value) * multiplier if cleaned_value else float('nan')
df["Market Cap"] = df["Market Cap"].apply(convert_large_numbers)
df["Volume(24h)"] = df["Volume(24h)"].apply(convert_large_numbers)
# Convert percentages to decimal values
for col in ['1h %', '24h %', '7d %']:
df[col] = df[col].str.replace('%', '').astype(float) / 100
for col in ["1h %", "24h %", "7d %"]:
if col in df.columns:
# First ensure it's string, then clean
df[col] = (
df[col].astype(str)
.str.replace("%", "")
.str.replace(",", ".")
.replace("nan", np.nan)
)
df[col] = pd.to_numeric(df[col], errors='coerce') / 100
return df
def calculate_metrics(self, df: pd.DataFrame) -> pd.DataFrame:
"""
Compute advanced trading metrics used by quantitative funds:
1. Volume/Market Cap Ratio - Measures liquidity efficiency
(High ratio = Underestimated attention)
2. Volatility Score - Risk-adjusted momentum potential
(High ratio = Underestimated attention, and small-cap = higher growth potential)
2. Volatility Score - Risk-adjusted momentum potential - Shows how stable is the trend
(STD of 1h/24h/7d returns)
3. Momentum Score - Weighted average of returns
3. Momentum Score - Weighted average of returns - Shows how strong is the trend
(1h:30% + 24h:50% + 7d:20%)
4. Volume Anomaly - 3σ deviation detection
(Flags potential insider activity)
(Flags potential insider activity) - Unusual trading activity Flags coins with volume spikes (potential insider buying or news).
"""
# Liquidity Metrics
df['Volume/Market Cap Ratio'] = df['Volume(24h)'] / df['Market Cap']
df["Volume/Market Cap Ratio"] = df["Volume(24h)"] / df["Market Cap"]
# Risk Metrics
df['Volatility Score'] = df[['1h %','24h %','7d %']].std(axis=1)
df["Volatility Score"] = df[["1h %", "24h %", "7d %"]].std(axis=1)
# Momentum Metrics
df['Momentum Score'] = (df['1h %']*0.3 + df['24h %']*0.5 + df['7d %']*0.2)
df["Momentum Score"] = df["1h %"] * 0.3 + df["24h %"] * 0.5 + df["7d %"] * 0.2
# Anomaly Detection
median_vol = df['Volume(24h)'].median()
df['Volume Anomaly'] = df['Volume(24h)'] > 3 * median_vol
median_vol = df["Volume(24h)"].median()
df["Volume Anomaly"] = df["Volume(24h)"] > 3 * median_vol
# Value Flags
df['Undervalued Flag'] = (df['Market Cap'] < 1e9) & (df['Momentum Score'] > 0.05)
df['Liquid Giant'] = (df['Volume/Market Cap Ratio'] > 0.15) & (df['Market Cap'] > 1e9)
# Undervalued Flag - Low market cap and high momentum
# (High growth potential and low attention)
df["Undervalued Flag"] = (df["Market Cap"] < 1e9) & (
df["Momentum Score"] > 0.05
)
# Liquid Giant Flag - High volume/market cap ratio and large market cap
# (High liquidity and large market cap = institutional interest)
df["Liquid Giant"] = (df["Volume/Market Cap Ratio"] > 0.15) & (
df["Market Cap"] > 1e9
)
return df
def create_visuals(self, df: pd.DataFrame) -> dict:
def generate_insights_simple(self, df: pd.DataFrame) -> str:
"""
Generate three institutional-grade visualizations:
1. 3D Market Map - X:Size, Y:Liquidity, Z:Momentum
2. Liquidity Tree - Color:Volume Efficiency
3. Momentum Leaderboard - Top sustainable movers
Generates an ultra-actionable crypto trading report with:
- Risk-tiered opportunities (High/Medium/Low)
- Concrete examples for each trade type
- Entry/exit strategies spelled out
- Visual cues for quick scanning
"""
# 3D Market Overview
fig1 = px.scatter_3d(
df,
x='Market Cap',
y='Volume/Market Cap Ratio',
z='Momentum Score',
size='Volatility Score',
color='Volume Anomaly',
hover_name='Name',
title='Smart Money Market Map: Spot Overlooked Opportunities',
labels={'Market Cap': 'Size (Log $)', 'Volume/Market Cap Ratio': 'Liquidity Power'},
log_x=True,
template='plotly_dark'
)
# Liquidity Efficiency Tree
fig2 = px.treemap(
df,
path=['Name'],
values='Market Cap',
color='Volume/Market Cap Ratio',
hover_data=['Momentum Score'],
title='Liquidity Forest: Green = High Trading Efficiency',
color_continuous_scale='RdYlGn'
)
# Momentum Leaders
fig3 = px.bar(
df.sort_values('Momentum Score', ascending=False).head(10),
x='Name',
y='Momentum Score',
color='Volatility Score',
title='Sustainable Momentum Leaders (Low Volatility + High Growth)',
text='7d %',
template='plotly_dark'
)
return {'market_map': fig1, 'liquidity_tree': fig2, 'momentum_leaders': fig3}
report = [
"🚀 **CRYPTO TRADING CHEAT SHEET** 🚀",
"*Based on quantitative signals + hedge fund tactics*",
"━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
]
# 1. HIGH-RISK: Undervalued Small-Caps (Momentum Plays)
high_risk = df[df["Undervalued Flag"]].sort_values("Momentum Score", ascending=False)
if not high_risk.empty:
example_coin = high_risk.iloc[0]
report.extend([
"\n🔥 **HIGH-RISK: Rocket Fuel Small-Caps**",
f"*Example Trade:* {example_coin['Name']} (Price: ${example_coin['Price']:.6f})",
"📊 *Why?* Tiny market cap (<$1B) but STRONG momentum (+{:.0f}% last week)".format(example_coin['7d %']*100),
"🎯 *Strategy:*",
"1. Wait for 5-10% dip from recent high (${:.6f} → Buy under ${:.6f})".format(
example_coin['Price'] / (1 - example_coin['24h %']), # Approx recent high
example_coin['Price'] * 0.95
),
"2. Set stop-loss at -10% (${:.6f})".format(example_coin['Price'] * 0.90),
"3. Take profit at +20% (${:.6f})".format(example_coin['Price'] * 1.20),
"⚠️ *Risk Warning:* These can drop 30% fast! Never bet more than 5% of your portfolio.",
"━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
])
# 2. MEDIUM-RISK: Liquid Giants (Swing Trades)
medium_risk = df[df["Liquid Giant"]].sort_values("Volume/Market Cap Ratio", ascending=False)
if not medium_risk.empty:
example_coin = medium_risk.iloc[0]
report.extend([
"\n💎 **MEDIUM-RISK: Liquid Giants (Safe Swing Trades)**",
f"*Example Trade:* {example_coin['Name']} (Market Cap: ${example_coin['Market Cap']/1e9:.1f}B)",
"📊 *Why?* Huge volume (${:.1f}M/day) makes it easy to enter/exit".format(example_coin['Volume(24h)']/1e6),
"🎯 *Strategy:*",
"1. Buy when 24h volume > 15% of market cap (Current: {:.0f}%)".format(example_coin['Volume/Market Cap Ratio']*100),
"2. Hold 1-4 weeks (Big coins trend longer)",
"3. Exit when momentum drops below 5% (Current: {:.0f}%)".format(example_coin['Momentum Score']*100),
"📉 *Pro Tip:* Watch Bitcoin's trend - if BTC drops 5%, these usually follow.",
"━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
])
# 3. LOW-RISK: Stable Momentum (DCA Targets)
low_risk = df[
(df["Momentum Score"] > 0.05) &
(df["Volatility Score"] < 0.03)
].sort_values("Market Cap", ascending=False)
if not low_risk.empty:
example_coin = low_risk.iloc[0]
report.extend([
"\n🛡️ **LOW-RISK: Steady Climbers (DCA & Forget)**",
f"*Example Trade:* {example_coin['Name']} (Volatility: {example_coin['Volatility Score']:.2f}/5)",
"📊 *Why?* Rises steadily (+{:.0f}%/week) with LOW drama".format(example_coin['7d %']*100),
"🎯 *Strategy:*",
"1. Buy small amounts every Tuesday/Friday (DCA)",
"2. Hold for 3+ months (Compound gains work best here)",
"3. Sell 10% at every +25% milestone",
"💰 *Best For:* Long-term investors who hate sleepless nights",
"━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
])
# Volume Spike Alerts
anomalies = df[df["Volume Anomaly"]].sort_values("Volume(24h)", ascending=False)
if not anomalies.empty:
example_coin = anomalies.iloc[0]
report.extend([
"\n🚨 **Volume Spike Alert (Possible News/Whale Action)**",
f"*Coin:* {example_coin['Name']} (Volume: ${example_coin['Volume(24h)']/1e6:.1f}M, usual: ${example_coin['Volume(24h)']/3/1e6:.1f}M)",
"🔍 *Check:* Twitter/CoinGecko for news before trading",
"⚡ *If no news:* Could be insider buying - watch price action:",
"- Break above today's high → Buy with tight stop-loss",
"- Fade back down → Avoid (may be a fakeout)"
])
# Pro Tip Footer
report.append("\n✨ *Pro Tip:* Bookmark this report & check back in 24h to see if signals held up.")
return "\n".join(report)
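The cheat-sheet entries above derive entry, stop-loss, and take-profit levels as fixed percentage offsets from the current price. As a standalone sketch (the `price_levels` helper and its default offsets are illustrative, mirroring the -5%/-10%/+20% figures of the high-risk tier; it is not part of `CryptoAlphaGenerator`):

```python
# Illustrative helper mirroring the fixed-offset levels in the report strings
# above (dip/stop/target defaults follow the high-risk tier: -5%/-10%/+20%).
def price_levels(price: float, dip: float = 0.05,
                 stop: float = 0.10, target: float = 0.20) -> dict:
    """Return entry/stop/target levels from fractional offsets."""
    return {
        "entry": round(price * (1 - dip), 6),        # buy after a pullback
        "stop_loss": round(price * (1 - stop), 6),   # cap downside
        "take_profit": round(price * (1 + target), 6),
    }
```

The medium- and low-risk tiers use the same shape with tighter offsets (e.g. `dip=0.02, stop=0.06, target=0.15`).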
def generate_insights(self, df: pd.DataFrame) -> str:
"""
Generate a tactical trading report with:
- Top 3 trades per risk tier (High/Medium/Low)
- Auto-calculated entry, stop-loss, and take-profit prices
- Volume-spike alerts, liquidity warnings, and a BTC chart-toggle tip
"""
top_coin = df.sort_values('Momentum Score', ascending=False).iloc[0]
anomaly_coins = df[df['Volume Anomaly']].sort_values('Volume(24h)', ascending=False)
# Filter top candidates for each risk level
high_risk = (
df[df["Undervalued Flag"]]
.sort_values("Momentum Score", ascending=False)
.head(3)
)
medium_risk = (
df[df["Liquid Giant"]]
.sort_values("Volume/Market Cap Ratio", ascending=False)
.head(3)
)
low_risk = (
df[(df["Momentum Score"] > 0.05) & (df["Volatility Score"] < 0.03)]
.sort_values("Momentum Score", ascending=False)
.head(3)
)
report = ["# 🎯 Crypto Trading Tactical Report (Top 3 Per Risk Tier)"]
# 1. High-Risk Trades (Small-Cap Momentum)
if not high_risk.empty:
report.append("\n## 🔥 HIGH RISK: Small-Cap Rockets (5-50% Potential)")
for i, coin in high_risk.iterrows():
current_price = coin["Price"]
entry = current_price * 0.95 # -5% dip
stop_loss = current_price * 0.90 # -10%
take_profit = current_price * 1.20 # +20%
report.append(
f"\n### {coin['Name']} (Momentum: {coin['Momentum Score']:.1%})"
f"\n- **Current Price:** ${current_price:.4f}"
f"\n- **Entry:** < ${entry:.4f} (Wait for pullback)"
f"\n- **Stop-Loss:** ${stop_loss:.4f} (-10%)"
f"\n- **Target:** ${take_profit:.4f} (+20%)"
f"\n- **Risk/Reward:** 1:2"
f"\n- **Watch:** Volume spikes above {coin['Volume(24h)']/1e6:.1f}M"
)
# 2. Medium-Risk Trades (Liquid Giants)
if not medium_risk.empty:
report.append("\n## 💎 MEDIUM RISK: Liquid Swing Trades (10-30% Potential)")
for i, coin in medium_risk.iterrows():
current_price = coin["Price"]
entry = current_price * 0.98 # -2% dip
stop_loss = current_price * 0.94 # -6%
take_profit = current_price * 1.15 # +15%
report.append(
f"\n### {coin['Name']} (Liquidity Score: {coin['Volume/Market Cap Ratio']:.1%})"
f"\n- **Current Price:** ${current_price:.2f}"
f"\n- **Entry:** < ${entry:.2f} (Buy slight dips)"
f"\n- **Stop-Loss:** ${stop_loss:.2f} (-6%)"
f"\n- **Target:** ${take_profit:.2f} (+15%)"
f"\n- **Hold Time:** 1-3 weeks"
f"\n- **Key Metric:** Volume/Cap > 15%"
)
# 3. Low-Risk Trades (Stable Momentum)
if not low_risk.empty:
report.append("\n## 🛡️ LOW RISK: Steady Gainers (5-15% Potential)")
for i, coin in low_risk.iterrows():
current_price = coin["Price"]
entry = current_price * 0.99 # -1% dip
stop_loss = current_price * 0.97 # -3%
take_profit = current_price * 1.10 # +10%
report.append(
f"\n### {coin['Name']} (Stability Score: {1/coin['Volatility Score']:.1f}x)"
f"\n- **Current Price:** ${current_price:.2f}"
f"\n- **Entry:** < ${entry:.2f} (Safe zone)"
f"\n- **Stop-Loss:** ${stop_loss:.2f} (-3%)"
f"\n- **Target:** ${take_profit:.2f} (+10%)"
f"\n- **DCA Suggestion:** 3 buys over 72 hours"
)
# Volume Anomaly Alert
anomalies = df[df["Volume Anomaly"]].sort_values("Volume(24h)", ascending=False).head(2)
if not anomalies.empty:
report.append("\n⚠️ **Volume Spike Alerts**")
for i, coin in anomalies.iterrows():
report.append(
f"- {coin['Name']}: Volume {coin['Volume(24h)']/1e6:.1f}M "
f"(3x normal) | Price moved: {coin['24h %']:.1%}"
)
# Pro Tip
report.append(
"\n📊 **Chart Hack:** Hide BTC in visuals:\n"
"```python\n"
"# For 3D Map:\n"
"fig.update_traces(visible=False, selector={'name':'Bitcoin'})\n"
"# For Treemap:\n"
"df = df[df['Name'] != 'Bitcoin']\n"
"```"
)
return "\n".join(report)
def create_visuals(self, df: pd.DataFrame) -> dict:
"""Enhanced visuals with BTC toggle support"""
# 3D Market Map (with BTC toggle hint)
fig1 = px.scatter_3d(
df,
x="Market Cap",
y="Volume/Market Cap Ratio",
z="Momentum Score",
color="Name", # Color by name to allow toggling
hover_name="Name",
title="Market Map (Toggle BTC in legend to focus on alts)",
log_x=True
)
fig1.update_traces(
marker=dict(size=df["Volatility Score"]*100 + 5) # Dynamic sizing
)
# Liquidity Tree (exclude BTC if it dwarfs the rest of the market)
btc_caps = df[df["Name"] == "BitcoinBTC"]["Market Cap"].values
if len(btc_caps) > 0 and btc_caps[0] > df["Market Cap"].median() * 10:
df = df[df["Name"] != "BitcoinBTC"]
fig2 = px.treemap(
df,
path=["Name"],
values="Market Cap",
color="Volume/Market Cap Ratio",
title="Liquidity Tree (BTC auto-removed if dominant)"
)
return {"market_map": fig1, "liquidity_tree": fig2}
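Both report generators and the visuals assume `calculate_metrics` has already added derived columns such as `Momentum Score`, `Volatility Score`, `Volume/Market Cap Ratio`, and the `Undervalued Flag` / `Liquid Giant` booleans. A minimal sketch of plausible formulas — the weights and thresholds here are assumptions, not the actual implementation:

```python
import pandas as pd

# Assumed formulas for the derived columns used by the reports and visuals.
# The 0.7/0.3 weights and the $1B / $10B / 10% thresholds are illustrative guesses.
def sketch_metrics(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["Volume/Market Cap Ratio"] = out["Volume(24h)"] / out["Market Cap"]
    # Blend of weekly and daily change as a crude momentum proxy
    out["Momentum Score"] = 0.7 * out["7d %"] + 0.3 * out["24h %"]
    # Average absolute move as a crude volatility proxy
    out["Volatility Score"] = (out["24h %"].abs() + out["7d %"].abs() / 7) / 2
    out["Undervalued Flag"] = (out["Market Cap"] < 1e9) & (out["Momentum Score"] > 0.05)
    out["Liquid Giant"] = (out["Market Cap"] > 1e10) & (out["Volume/Market Cap Ratio"] > 0.10)
    return out
```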
async def main():
"""
@@ -171,60 +365,79 @@ async def main():
"""
# Configure browser (headed mode used here for debugging)
browser_config = BrowserConfig(
headless=False,
)
# Initialize crawler with smart table detection
crawler = AsyncWebCrawler(config=browser_config)
await crawler.start()
try:
# Set up scraping parameters
crawl_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
table_score_threshold=8,  # Strict table detection
keep_data_attributes=True,
scraping_strategy=LXMLWebScrapingStrategy(),
scan_full_page=True,
scroll_delay=0.2,
)
# # Execute market data extraction
# results: List[CrawlResult] = await crawler.arun(
# url="https://coinmarketcap.com/?page=1", config=crawl_config
# )
# # Process results
# raw_df = pd.DataFrame()
# for result in results:
# if result.success and result.media["tables"]:
# # Extract primary market table
# # DataFrame
# raw_df = pd.DataFrame(
# result.media["tables"][0]["rows"],
# columns=result.media["tables"][0]["headers"],
# )
# break
# This is for debugging only
# ////// Remove this in production from here..
# Save raw data for debugging
# raw_df.to_csv(f"{__current_dir__}/tmp/raw_crypto_data.csv", index=False)
# print("🔍 Raw data saved to 'raw_crypto_data.csv'")
# Read from file for debugging
raw_df = pd.read_csv(f"{__current_dir__}/tmp/raw_crypto_data.csv")
# ////// ..to here
# Select top 20
raw_df = raw_df.head(50)
# Remove "Buy" from name
raw_df["Name"] = raw_df["Name"].str.replace("Buy", "")
# Initialize analysis engine
analyzer = CryptoAlphaGenerator()
clean_df = analyzer.clean_data(raw_df)
analyzed_df = analyzer.calculate_metrics(clean_df)
# Generate outputs
visuals = analyzer.create_visuals(analyzed_df)
insights = analyzer.generate_insights(analyzed_df)
# Save visualizations
visuals["market_map"].write_html(f"{__current_dir__}/tmp/market_map.html")
visuals["liquidity_tree"].write_html(f"{__current_dir__}/tmp/liquidity_tree.html")
# Display results
print("🔑 Key Trading Insights:")
print(insights)
print("\n📊 Open 'market_map.html' for interactive analysis")
print("\n📊 Open 'liquidity_tree.html' for interactive analysis")
finally:
await crawler.close()
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -73,7 +73,7 @@ async def test_stream_crawl(session, token: str):
# "https://news.ycombinator.com/news"
],
"browser_config": {"headless": True, "viewport": {"width": 1200}},
"crawler_config": {"stream": True, "cache_mode": "bypass"}
}
headers = {"Authorization": f"Bearer {token}"}
print(f"\nTesting Streaming Crawl: {url}")
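With `"stream": True`, a streaming crawl endpoint typically emits results incrementally, commonly as newline-delimited JSON. A hedged client-side sketch (the exact wire format is an assumption, not something this test confirms):

```python
import json

def parse_ndjson(chunk_text: str) -> list:
    """Parse newline-delimited JSON, as commonly emitted by streaming endpoints."""
    results = []
    for line in chunk_text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip keep-alive blank lines
        results.append(json.loads(line))
    return results
```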

View File

@@ -9,6 +9,26 @@ from crawl4ai import (
CrawlResult
)
async def example_cdp():
browser_conf = BrowserConfig(
headless=False,
cdp_url="http://localhost:9223"
)
crawler_config = CrawlerRunConfig(
session_id="test",
js_code = """(() => { return {"result": "Hello World!"} })()""",
js_only=True
)
async with AsyncWebCrawler(
config=browser_conf,
verbose=True,
) as crawler:
result : CrawlResult = await crawler.arun(
url="https://www.helloworld.org",
config=crawler_config,
)
print(result.js_execution_result)
async def main():
browser_config = BrowserConfig(headless=True, verbose=True)
@@ -16,18 +36,15 @@ async def main():
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
)
),
)
result : CrawlResult = await crawler.arun(
url="https://www.helloworld.org", config=crawler_config
)
print(result.markdown.raw_markdown[:500])
# print(result.model_dump())
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -1,675 +0,0 @@
import os, sys
from crawl4ai import LLMConfig
# append parent directory to system path
sys.path.append(
os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
)
os.environ["FIRECRAWL_API_KEY"] = "fc-YOUR_API_KEY"  # set your own key; never commit real keys
import asyncio
# import nest_asyncio
# nest_asyncio.apply()
import time
import json
import os
import re
from typing import Dict, List
from bs4 import BeautifulSoup
from pydantic import BaseModel, Field
from crawl4ai import AsyncWebCrawler, CacheMode
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
from crawl4ai.content_filter_strategy import PruningContentFilter
from crawl4ai.extraction_strategy import (
JsonCssExtractionStrategy,
LLMExtractionStrategy,
)
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
print("Crawl4AI: Advanced Web Crawling and Data Extraction")
print("GitHub Repository: https://github.com/unclecode/crawl4ai")
print("Twitter: @unclecode")
print("Website: https://crawl4ai.com")
async def simple_crawl():
print("\n--- Basic Usage ---")
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500]) # Print first 500 characters
async def simple_example_with_running_js_code():
print("\n--- Executing JavaScript and Using CSS Selectors ---")
# New code to handle the wait_for parameter
wait_for = """() => {
return Array.from(document.querySelectorAll('article.tease-card')).length > 10;
}"""
# wait_for can be also just a css selector
# wait_for = "article.tease-card:nth-child(10)"
async with AsyncWebCrawler(verbose=True) as crawler:
js_code = [
"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"
]
result = await crawler.arun(
url="https://www.nbcnews.com/business",
js_code=js_code,
# wait_for=wait_for,
cache_mode=CacheMode.BYPASS,
)
print(result.markdown[:500]) # Print first 500 characters
async def simple_example_with_css_selector():
print("\n--- Using CSS Selectors ---")
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business",
css_selector=".wide-tease-item__description",
cache_mode=CacheMode.BYPASS,
)
print(result.markdown[:500]) # Print first 500 characters
async def use_proxy():
print("\n--- Using a Proxy ---")
print(
"Note: Replace 'http://your-proxy-url:port' with a working proxy to run this example."
)
# Uncomment and modify the following lines to use a proxy
async with AsyncWebCrawler(
verbose=True, proxy="http://your-proxy-url:port"
) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", cache_mode=CacheMode.BYPASS
)
if result.success:
print(result.markdown[:500]) # Print first 500 characters
async def capture_and_save_screenshot(url: str, output_path: str):
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url=url, screenshot=True, cache_mode=CacheMode.BYPASS
)
if result.success and result.screenshot:
import base64
# Decode the base64 screenshot data
screenshot_data = base64.b64decode(result.screenshot)
# Save the screenshot as a JPEG file
with open(output_path, "wb") as f:
f.write(screenshot_data)
print(f"Screenshot saved successfully to {output_path}")
else:
print("Failed to capture screenshot")
class OpenAIModelFee(BaseModel):
model_name: str = Field(..., description="Name of the OpenAI model.")
input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
output_fee: str = Field(
..., description="Fee for output token for the OpenAI model."
)
async def extract_structured_data_using_llm(
provider: str, api_token: str = None, extra_headers: Dict[str, str] = None
):
print(f"\n--- Extracting Structured Data with {provider} ---")
if api_token is None and provider != "ollama":
print(f"API token is required for {provider}. Skipping this example.")
return
# extra_args = {}
extra_args = {
"temperature": 0,
"top_p": 0.9,
"max_tokens": 2000,
# any other supported parameters for litellm
}
if extra_headers:
extra_args["extra_headers"] = extra_headers
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url="https://openai.com/api/pricing/",
word_count_threshold=1,
extraction_strategy=LLMExtractionStrategy(
llm_config=LLMConfig(provider=provider,api_token=api_token),
schema=OpenAIModelFee.model_json_schema(),
extraction_type="schema",
instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
Do not miss any models in the entire content. One extracted model JSON format should look like this:
{"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}.""",
extra_args=extra_args,
),
cache_mode=CacheMode.BYPASS,
)
print(result.extracted_content)
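Since LLM output can drift from the requested schema, it is worth re-validating `result.extracted_content` against the same Pydantic model before using it downstream. A sketch (assumes the extraction returned a JSON array of `OpenAIModelFee`-shaped objects):

```python
import json
from pydantic import BaseModel, Field

# Mirrors the OpenAIModelFee model defined earlier in this file.
class OpenAIModelFee(BaseModel):
    model_name: str = Field(..., description="Name of the OpenAI model.")
    input_fee: str = Field(..., description="Fee for input tokens.")
    output_fee: str = Field(..., description="Fee for output tokens.")

def validate_fees(extracted_content: str) -> list:
    """Re-validate LLM-extracted JSON before downstream use."""
    # OpenAIModelFee(**item) raises a ValidationError on malformed entries
    return [OpenAIModelFee(**item) for item in json.loads(extracted_content)]
```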
async def extract_structured_data_using_css_extractor():
print("\n--- Using JsonCssExtractionStrategy for Fast Structured Output ---")
schema = {
"name": "KidoCode Courses",
"baseSelector": "section.charge-methodology .w-tab-content > div",
"fields": [
{
"name": "section_title",
"selector": "h3.heading-50",
"type": "text",
},
{
"name": "section_description",
"selector": ".charge-content",
"type": "text",
},
{
"name": "course_name",
"selector": ".text-block-93",
"type": "text",
},
{
"name": "course_description",
"selector": ".course-content-text",
"type": "text",
},
{
"name": "course_icon",
"selector": ".image-92",
"type": "attribute",
"attribute": "src",
},
],
}
async with AsyncWebCrawler(headless=True, verbose=True) as crawler:
# Create the JavaScript that handles clicking multiple times
js_click_tabs = """
(async () => {
const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div");
for(let tab of tabs) {
// scroll to the tab
tab.scrollIntoView();
tab.click();
// Wait for content to load and animations to complete
await new Promise(r => setTimeout(r, 500));
}
})();
"""
result = await crawler.arun(
url="https://www.kidocode.com/degrees/technology",
extraction_strategy=JsonCssExtractionStrategy(schema, verbose=True),
js_code=[js_click_tabs],
cache_mode=CacheMode.BYPASS,
)
courses = json.loads(result.extracted_content)
print(f"Successfully extracted {len(courses)} course entries")
print(json.dumps(courses[0], indent=2))
# Advanced Session-Based Crawling with Dynamic Content 🔄
async def crawl_dynamic_content_pages_method_1():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
first_commit = ""
async def on_execution_started(page):
nonlocal first_commit
try:
while True:
await page.wait_for_selector("li.Box-sc-g0xbh4-0 h4")
commit = await page.query_selector("li.Box-sc-g0xbh4-0 h4")
commit = await commit.evaluate("(element) => element.textContent")
commit = re.sub(r"\s+", "", commit)
if commit and commit != first_commit:
first_commit = commit
break
await asyncio.sleep(0.5)
except Exception as e:
print(f"Warning: New content didn't appear after JavaScript execution: {e}")
async with AsyncWebCrawler(verbose=True) as crawler:
crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
js_next_page = """
(() => {
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
})();
"""
for page in range(3): # Crawl 3 pages
result = await crawler.arun(
url=url,
session_id=session_id,
css_selector="li.Box-sc-g0xbh4-0",
js=js_next_page if page > 0 else None,
cache_mode=CacheMode.BYPASS,
js_only=page > 0,
headless=False,
)
assert result.success, f"Failed to crawl page {page + 1}"
soup = BeautifulSoup(result.cleaned_html, "html.parser")
commits = soup.select("li")
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
await crawler.crawler_strategy.kill_session(session_id)
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def crawl_dynamic_content_pages_method_2():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
async with AsyncWebCrawler(verbose=True) as crawler:
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
last_commit = ""
js_next_page_and_wait = """
(async () => {
const getCurrentCommit = () => {
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
return commits.length > 0 ? commits[0].textContent.trim() : null;
};
const initialCommit = getCurrentCommit();
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
// Poll for changes
while (true) {
await new Promise(resolve => setTimeout(resolve, 100)); // Wait 100ms
const newCommit = getCurrentCommit();
if (newCommit && newCommit !== initialCommit) {
break;
}
}
})();
"""
schema = {
"name": "Commit Extractor",
"baseSelector": "li.Box-sc-g0xbh4-0",
"fields": [
{
"name": "title",
"selector": "h4.markdown-title",
"type": "text",
"transform": "strip",
},
],
}
extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)
for page in range(3): # Crawl 3 pages
result = await crawler.arun(
url=url,
session_id=session_id,
css_selector="li.Box-sc-g0xbh4-0",
extraction_strategy=extraction_strategy,
js_code=js_next_page_and_wait if page > 0 else None,
js_only=page > 0,
cache_mode=CacheMode.BYPASS,
headless=False,
)
assert result.success, f"Failed to crawl page {page + 1}"
commits = json.loads(result.extracted_content)
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
await crawler.crawler_strategy.kill_session(session_id)
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def crawl_dynamic_content_pages_method_3():
print(
"\n--- Advanced Multi-Page Crawling with JavaScript Execution using `wait_for` ---"
)
async with AsyncWebCrawler(verbose=True) as crawler:
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
js_next_page = """
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
if (commits.length > 0) {
window.firstCommit = commits[0].textContent.trim();
}
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
"""
wait_for = """() => {
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
if (commits.length === 0) return false;
const firstCommit = commits[0].textContent.trim();
return firstCommit !== window.firstCommit;
}"""
schema = {
"name": "Commit Extractor",
"baseSelector": "li.Box-sc-g0xbh4-0",
"fields": [
{
"name": "title",
"selector": "h4.markdown-title",
"type": "text",
"transform": "strip",
},
],
}
extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)
for page in range(3): # Crawl 3 pages
result = await crawler.arun(
url=url,
session_id=session_id,
css_selector="li.Box-sc-g0xbh4-0",
extraction_strategy=extraction_strategy,
js_code=js_next_page if page > 0 else None,
wait_for=wait_for if page > 0 else None,
js_only=page > 0,
cache_mode=CacheMode.BYPASS,
headless=False,
)
assert result.success, f"Failed to crawl page {page + 1}"
commits = json.loads(result.extracted_content)
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
await crawler.crawler_strategy.kill_session(session_id)
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def crawl_custom_browser_type():
# Use Firefox
start = time.time()
async with AsyncWebCrawler(
browser_type="firefox", verbose=True, headless=True
) as crawler:
result = await crawler.arun(
url="https://www.example.com", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500])
print("Time taken: ", time.time() - start)
# Use WebKit
start = time.time()
async with AsyncWebCrawler(
browser_type="webkit", verbose=True, headless=True
) as crawler:
result = await crawler.arun(
url="https://www.example.com", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500])
print("Time taken: ", time.time() - start)
# Use Chromium (default)
start = time.time()
async with AsyncWebCrawler(verbose=True, headless=True) as crawler:
result = await crawler.arun(
url="https://www.example.com", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500])
print("Time taken: ", time.time() - start)
async def crawl_with_user_simulation():
async with AsyncWebCrawler(verbose=True, headless=True) as crawler:
url = "YOUR-URL-HERE"
result = await crawler.arun(
url=url,
cache_mode=CacheMode.BYPASS,
magic=True, # Automatically detects and removes overlays, popups, and other elements that block content
# simulate_user = True,# Causes a series of random mouse movements and clicks to simulate user interaction
# override_navigator = True # Overrides the navigator object to make it look like a real user
)
print(result.markdown)
async def speed_comparison():
# print("\n--- Speed Comparison ---")
# print("Firecrawl (simulated):")
# print("Time taken: 7.02 seconds")
# print("Content length: 42074 characters")
# print("Images found: 49")
# print()
# Simulated Firecrawl performance
from firecrawl import FirecrawlApp
app = FirecrawlApp(api_key=os.environ["FIRECRAWL_API_KEY"])
start = time.time()
scrape_status = app.scrape_url(
"https://www.nbcnews.com/business", params={"formats": ["markdown", "html"]}
)
end = time.time()
print("Firecrawl:")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(scrape_status['markdown'])} characters")
print(f"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}")
print()
async with AsyncWebCrawler() as crawler:
# Crawl4AI simple crawl
start = time.time()
result = await crawler.arun(
url="https://www.nbcnews.com/business",
word_count_threshold=0,
cache_mode=CacheMode.BYPASS,
verbose=False,
)
end = time.time()
print("Crawl4AI (simple crawl):")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(result.markdown)} characters")
print(f"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}")
print()
# Crawl4AI with advanced content filtering
start = time.time()
result = await crawler.arun(
url="https://www.nbcnews.com/business",
word_count_threshold=0,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
)
# content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
),
cache_mode=CacheMode.BYPASS,
verbose=False,
)
end = time.time()
print("Crawl4AI (Markdown Plus):")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(result.markdown.raw_markdown)} characters")
print(f"Fit Markdown: {len(result.markdown.fit_markdown)} characters")
print(f"Images found: {result.markdown.raw_markdown.count('cldnry.s-nbcnews.com')}")
print()
# Crawl4AI with JavaScript execution
start = time.time()
result = await crawler.arun(
url="https://www.nbcnews.com/business",
js_code=[
"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"
],
word_count_threshold=0,
cache_mode=CacheMode.BYPASS,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
)
# content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
),
verbose=False,
)
end = time.time()
print("Crawl4AI (with JavaScript execution):")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(result.markdown.raw_markdown)} characters")
print(f"Fit Markdown: {len(result.markdown.fit_markdown)} characters")
print(f"Images found: {result.markdown.raw_markdown.count('cldnry.s-nbcnews.com')}")
print("\nNote on Speed Comparison:")
print("The speed test conducted here may not reflect optimal conditions.")
print("When we call Firecrawl's API, we're seeing its best performance,")
print("while Crawl4AI's performance is limited by the local network speed.")
print("For a more accurate comparison, it's recommended to run these tests")
print("on servers with a stable and fast internet connection.")
print("Despite these limitations, Crawl4AI still demonstrates faster performance.")
print("If you run these tests in an environment with better network conditions,")
print("you may observe an even more significant speed advantage for Crawl4AI.")
async def generate_knowledge_graph():
class Entity(BaseModel):
name: str
description: str
class Relationship(BaseModel):
entity1: Entity
entity2: Entity
description: str
relation_type: str
class KnowledgeGraph(BaseModel):
entities: List[Entity]
relationships: List[Relationship]
extraction_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY")), # In case of Ollama just pass "no-token"
schema=KnowledgeGraph.model_json_schema(),
extraction_type="schema",
instruction="""Extract entities and relationships from the given text.""",
)
async with AsyncWebCrawler() as crawler:
url = "https://paulgraham.com/love.html"
result = await crawler.arun(
url=url,
cache_mode=CacheMode.BYPASS,
extraction_strategy=extraction_strategy,
# magic=True
)
# print(result.extracted_content)
with open(os.path.join(__location__, "kb.json"), "w") as f:
f.write(result.extracted_content)
async def fit_markdown_remove_overlay():
async with AsyncWebCrawler(
headless=True, # Set to False to see what is happening
verbose=True,
user_agent_mode="random",
user_agent_generator_config={"device_type": "mobile", "os_type": "android"},
) as crawler:
result = await crawler.arun(
url="https://www.kidocode.com/degrees/technology",
cache_mode=CacheMode.BYPASS,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
),
options={"ignore_links": True},
),
# markdown_generator=DefaultMarkdownGenerator(
# content_filter=BM25ContentFilter(user_query="", bm25_threshold=1.0),
# options={
# "ignore_links": True
# }
# ),
)
if result.success:
print(len(result.markdown.raw_markdown))
print(len(result.markdown.markdown_with_citations))
print(len(result.markdown.fit_markdown))
# Save clean html
with open(os.path.join(__location__, "output/cleaned_html.html"), "w") as f:
f.write(result.cleaned_html)
with open(
os.path.join(__location__, "output/output_raw_markdown.md"), "w"
) as f:
f.write(result.markdown.raw_markdown)
with open(
os.path.join(__location__, "output/output_markdown_with_citations.md"),
"w",
) as f:
f.write(result.markdown.markdown_with_citations)
with open(
os.path.join(__location__, "output/output_fit_markdown.md"), "w"
) as f:
f.write(result.markdown.fit_markdown)
print("Done")
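The file writes above assume an `output/` directory already exists next to the script and will raise `FileNotFoundError` otherwise. A small guard (sketch):

```python
import os

def ensure_output_dir(location: str) -> str:
    """Create the output directory next to the script if it is missing."""
    out_dir = os.path.join(location, "output")
    os.makedirs(out_dir, exist_ok=True)  # no-op if it already exists
    return out_dir
```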
async def main():
# await extract_structured_data_using_llm("openai/gpt-4o", os.getenv("OPENAI_API_KEY"))
# await simple_crawl()
# await simple_example_with_running_js_code()
# await simple_example_with_css_selector()
# # await use_proxy()
# await capture_and_save_screenshot("https://www.example.com", os.path.join(__location__, "tmp/example_screenshot.jpg"))
# await extract_structured_data_using_css_extractor()
# LLM extraction examples
# await extract_structured_data_using_llm()
# await extract_structured_data_using_llm("huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct", os.getenv("HUGGINGFACE_API_KEY"))
# await extract_structured_data_using_llm("ollama/llama3.2")
# You always can pass custom headers to the extraction strategy
# custom_headers = {
# "Authorization": "Bearer your-custom-token",
# "X-Custom-Header": "Some-Value"
# }
# await extract_structured_data_using_llm(extra_headers=custom_headers)
# await crawl_dynamic_content_pages_method_1()
# await crawl_dynamic_content_pages_method_2()
await crawl_dynamic_content_pages_method_3()
# await crawl_custom_browser_type()
# await speed_comparison()
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -1,405 +0,0 @@
import os
import time
import json
import base64
from crawl4ai import LLMConfig
from crawl4ai.web_crawler import WebCrawler
from crawl4ai.chunking_strategy import *
from crawl4ai.extraction_strategy import *
from crawl4ai.crawler_strategy import *
from rich import print
from rich.console import Console
from functools import lru_cache
console = Console()
@lru_cache()
def create_crawler():
crawler = WebCrawler(verbose=True)
crawler.warmup()
return crawler
def print_result(result):
# Print each key on one line with just the first 20 characters of its value, followed by an ellipsis
console.print("\t[bold]Result:[/bold]")
for key, value in result.model_dump().items():
if isinstance(value, str) and value:
console.print(f"\t{key}: [green]{value[:20]}...[/green]")
if result.extracted_content:
items = json.loads(result.extracted_content)
print(f"\t[bold]{len(items)} blocks extracted![/bold]")
def cprint(message, press_any_key=False):
console.print(message)
if press_any_key:
console.print("Press any key to continue...", style="")
input()
def basic_usage(crawler):
cprint(
"🛠️ [bold cyan]Basic Usage: Simply provide a URL and let Crawl4ai do the magic![/bold cyan]"
)
result = crawler.run(url="https://www.nbcnews.com/business", only_text=True)
cprint("[LOG] 📦 [bold yellow]Basic crawl result:[/bold yellow]")
print_result(result)
def basic_usage_some_params(crawler):
cprint(
"🛠️ [bold cyan]Basic Usage with parameters: provide a URL plus a few tuning options![/bold cyan]"
)
result = crawler.run(
url="https://www.nbcnews.com/business", word_count_threshold=1, only_text=True
)
cprint("[LOG] 📦 [bold yellow]Basic crawl result:[/bold yellow]")
print_result(result)
def screenshot_usage(crawler):
cprint("\n📸 [bold cyan]Let's take a screenshot of the page![/bold cyan]")
result = crawler.run(url="https://www.nbcnews.com/business", screenshot=True)
cprint("[LOG] 📦 [bold yellow]Screenshot result:[/bold yellow]")
# Save the screenshot to a file
with open("screenshot.png", "wb") as f:
f.write(base64.b64decode(result.screenshot))
cprint("Screenshot saved to 'screenshot.png'!")
print_result(result)
def understanding_parameters(crawler):
cprint(
"\n🧠 [bold cyan]Understanding 'bypass_cache' and 'include_raw_html' parameters:[/bold cyan]"
)
cprint(
"By default, Crawl4ai caches the results of your crawls. This means that subsequent crawls of the same URL will be much faster! Let's see this in action."
)
# First crawl (caches the result)
cprint("1️⃣ First crawl (caches the result):", True)
start_time = time.time()
result = crawler.run(url="https://www.nbcnews.com/business")
end_time = time.time()
cprint(
f"[LOG] 📦 [bold yellow]First crawl took {end_time - start_time:.2f} seconds; result (now cached):[/bold yellow]"
)
print_result(result)
# Force a fresh crawl
cprint("2️⃣ Second crawl (bypassing the cache):", True)
start_time = time.time()
result = crawler.run(url="https://www.nbcnews.com/business", bypass_cache=True)
end_time = time.time()
cprint(
f"[LOG] 📦 [bold yellow]Second crawl took {end_time - start_time:.2f} seconds; result (cache bypassed):[/bold yellow]"
)
print_result(result)
def add_chunking_strategy(crawler):
# Adding a chunking strategy: RegexChunking
cprint(
"\n🧩 [bold cyan]Let's add a chunking strategy: RegexChunking![/bold cyan]",
True,
)
cprint(
"RegexChunking is a simple chunking strategy that splits the text based on a given regex pattern. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
chunking_strategy=RegexChunking(patterns=["\n\n"]),
)
cprint("[LOG] 📦 [bold yellow]RegexChunking result:[/bold yellow]")
print_result(result)
# Adding another chunking strategy: NlpSentenceChunking
cprint(
"\n🔍 [bold cyan]Time to explore another chunking strategy: NlpSentenceChunking![/bold cyan]",
True,
)
cprint(
"NlpSentenceChunking uses NLP techniques to split the text into sentences. Let's see how it performs!"
)
result = crawler.run(
url="https://www.nbcnews.com/business", chunking_strategy=NlpSentenceChunking()
)
cprint("[LOG] 📦 [bold yellow]NlpSentenceChunking result:[/bold yellow]")
print_result(result)
def add_extraction_strategy(crawler):
# Adding an extraction strategy: CosineStrategy
cprint(
"\n🧠 [bold cyan]Let's get smarter with an extraction strategy: CosineStrategy![/bold cyan]",
True,
)
cprint(
"CosineStrategy uses cosine similarity to extract semantically similar blocks of text. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=CosineStrategy(
word_count_threshold=10,
max_dist=0.2,
linkage_method="ward",
top_k=3,
sim_threshold=0.3,
verbose=True,
),
)
cprint("[LOG] 📦 [bold yellow]CosineStrategy result:[/bold yellow]")
print_result(result)
# Using semantic_filter with CosineStrategy
cprint(
"You can pass other parameters like 'semantic_filter' to the CosineStrategy to extract semantically similar blocks of text. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=CosineStrategy(
semantic_filter="inflation rent prices",
),
)
cprint(
"[LOG] 📦 [bold yellow]CosineStrategy result with semantic filter:[/bold yellow]"
)
print_result(result)
def add_llm_extraction_strategy(crawler):
# Adding an LLM extraction strategy without instructions
cprint(
"\n🤖 [bold cyan]Time to bring in the big guns: LLMExtractionStrategy without instructions![/bold cyan]",
True,
)
cprint(
"LLMExtractionStrategy uses a large language model to extract relevant information from the web page. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=LLMExtractionStrategy(
llm_config = LLMConfig(provider="openai/gpt-4o", api_token=os.getenv("OPENAI_API_KEY"))
),
)
cprint(
"[LOG] 📦 [bold yellow]LLMExtractionStrategy (no instructions) result:[/bold yellow]"
)
print_result(result)
# Adding an LLM extraction strategy with instructions
cprint(
"\n📜 [bold cyan]Let's make it even more interesting: LLMExtractionStrategy with instructions![/bold cyan]",
True,
)
cprint(
"Let's say we are only interested in financial news. Let's see how LLMExtractionStrategy performs with instructions!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o",api_token=os.getenv("OPENAI_API_KEY")),
instruction="I am interested in only financial news",
),
)
cprint(
"[LOG] 📦 [bold yellow]LLMExtractionStrategy (with instructions) result:[/bold yellow]"
)
print_result(result)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o",api_token=os.getenv("OPENAI_API_KEY")),
instruction="Extract only content related to technology",
),
)
cprint(
"[LOG] 📦 [bold yellow]LLMExtractionStrategy (with technology instruction) result:[/bold yellow]"
)
print_result(result)
def targeted_extraction(crawler):
# Using a CSS selector to extract only H2 tags
cprint(
"\n🎯 [bold cyan]Targeted extraction: Let's use a CSS selector to extract only H2 tags![/bold cyan]",
True,
)
result = crawler.run(url="https://www.nbcnews.com/business", css_selector="h2")
cprint("[LOG] 📦 [bold yellow]CSS Selector (H2 tags) result:[/bold yellow]")
print_result(result)
def interactive_extraction(crawler):
# Passing JavaScript code to interact with the page
cprint(
"\n🖱️ [bold cyan]Let's get interactive: Passing JavaScript code to click 'Load More' button![/bold cyan]",
True,
)
cprint(
"In this example we try to click the 'Load More' button on the page using JavaScript code."
)
js_code = """
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
loadMoreButton && loadMoreButton.click();
"""
# crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
# crawler = WebCrawler(crawler_strategy=crawler_strategy, always_by_pass_cache=True)
result = crawler.run(url="https://www.nbcnews.com/business", js=js_code)
cprint(
"[LOG] 📦 [bold yellow]JavaScript Code (Load More button) result:[/bold yellow]"
)
print_result(result)
def multiple_scrip(crawler):
# Passing JavaScript code to interact with the page
cprint(
"\n🖱️ [bold cyan]Let's get interactive: Passing JavaScript code to click 'Load More' button![/bold cyan]",
True,
)
cprint(
"In this example we try to click the 'Load More' button on the page using JavaScript code."
)
js_code = [
"""
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
loadMoreButton && loadMoreButton.click();
"""
] * 2
# crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
# crawler = WebCrawler(crawler_strategy=crawler_strategy, always_by_pass_cache=True)
result = crawler.run(url="https://www.nbcnews.com/business", js=js_code)
cprint(
"[LOG] 📦 [bold yellow]JavaScript Code (Load More button) result:[/bold yellow]"
)
print_result(result)
def using_crawler_hooks(crawler):
# Example usage of the hooks for authentication and setting a cookie
def on_driver_created(driver):
print("[HOOK] on_driver_created")
# Example customization: maximize the window
driver.maximize_window()
# Example customization: logging in to a hypothetical website
driver.get("https://example.com/login")
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.NAME, "username"))
)
driver.find_element(By.NAME, "username").send_keys("testuser")
driver.find_element(By.NAME, "password").send_keys("password123")
driver.find_element(By.NAME, "login").click()
WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.ID, "welcome"))
)
# Add a custom cookie
driver.add_cookie({"name": "test_cookie", "value": "cookie_value"})
return driver
def before_get_url(driver):
print("[HOOK] before_get_url")
# Example customization: add a custom header
# Enable Network domain for sending headers
driver.execute_cdp_cmd("Network.enable", {})
# Add a custom header
driver.execute_cdp_cmd(
"Network.setExtraHTTPHeaders", {"headers": {"X-Test-Header": "test"}}
)
return driver
def after_get_url(driver):
print("[HOOK] after_get_url")
# Example customization: log the URL
print(driver.current_url)
return driver
def before_return_html(driver, html):
print("[HOOK] before_return_html")
# Example customization: log the HTML
print(len(html))
return driver
cprint(
"\n🔗 [bold cyan]Using Crawler Hooks: Let's see how we can customize the crawler using hooks![/bold cyan]",
True,
)
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
crawler_strategy.set_hook("on_driver_created", on_driver_created)
crawler_strategy.set_hook("before_get_url", before_get_url)
crawler_strategy.set_hook("after_get_url", after_get_url)
crawler_strategy.set_hook("before_return_html", before_return_html)
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
crawler.warmup()
result = crawler.run(url="https://example.com")
cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
print_result(result=result)
def using_crawler_hooks_delay_example(crawler):
def delay(driver):
print("Delaying for 5 seconds...")
time.sleep(5)
print("Resuming...")
def create_crawler():
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
crawler_strategy.set_hook("after_get_url", delay)
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
crawler.warmup()
return crawler
cprint(
"\n🔗 [bold cyan]Using Crawler Hooks: Let's add a delay after fetching the URL to make sure the entire page is fetched.[/bold cyan]"
)
crawler = create_crawler()
result = crawler.run(url="https://google.com", bypass_cache=True)
cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
print_result(result)
def main():
cprint(
"🌟 [bold green]Welcome to the Crawl4ai Quickstart Guide! Let's dive into some web crawling fun! 🌐[/bold green]"
)
cprint(
"⛳️ [bold cyan]First Step: Create an instance of WebCrawler and call the `warmup()` function.[/bold cyan]"
)
cprint(
"If this is the first time you're running Crawl4ai, this might take a few seconds to load required model files."
)
crawler = create_crawler()
crawler.always_by_pass_cache = True
basic_usage(crawler)
# basic_usage_some_params(crawler)
understanding_parameters(crawler)
crawler.always_by_pass_cache = True
screenshot_usage(crawler)
add_chunking_strategy(crawler)
add_extraction_strategy(crawler)
add_llm_extraction_strategy(crawler)
targeted_extraction(crawler)
interactive_extraction(crawler)
multiple_scrip(crawler)
cprint(
"\n🎉 [bold green]Congratulations! You've made it through the Crawl4ai Quickstart Guide! Now go forth and crawl the web like a pro! 🕸️[/bold green]"
)
if __name__ == "__main__":
main()

View File

@@ -1,735 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "6yLvrXn7yZQI"
},
"source": [
"# Crawl4AI: Advanced Web Crawling and Data Extraction\n",
"\n",
"Welcome to this interactive notebook showcasing Crawl4AI, an advanced asynchronous web crawling and data extraction library.\n",
"\n",
"- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n",
"- Twitter: [@unclecode](https://twitter.com/unclecode)\n",
"- Website: [https://crawl4ai.com](https://crawl4ai.com)\n",
"\n",
"Let's explore the powerful features of Crawl4AI!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KIn_9nxFyZQK"
},
"source": [
"## Installation\n",
"\n",
"First, let's install Crawl4AI from GitHub:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mSnaxLf3zMog"
},
"outputs": [],
"source": [
"!sudo apt-get update && sudo apt-get install -y libwoff1 libopus0 libwebp6 libwebpdemux2 libenchant1c2a libgudev-1.0-0 libsecret-1-0 libhyphen0 libgdk-pixbuf2.0-0 libegl1 libnotify4 libxslt1.1 libevent-2.1-7 libgles2 libvpx6 libxcomposite1 libatk1.0-0 libatk-bridge2.0-0 libepoxy0 libgtk-3-0 libharfbuzz-icu0"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xlXqaRtayZQK"
},
"outputs": [],
"source": [
"!pip install crawl4ai\n",
"!pip install nest-asyncio\n",
"!playwright install"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qKCE7TI7yZQL"
},
"source": [
"Now, let's import the necessary libraries:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "I67tr7aAyZQL"
},
"outputs": [],
"source": [
"import asyncio\n",
"import nest_asyncio\n",
"from crawl4ai import AsyncWebCrawler\n",
"from crawl4ai.extraction_strategy import JsonCssExtractionStrategy, LLMExtractionStrategy\n",
"import json\n",
"import time\n",
"from pydantic import BaseModel, Field\n",
"\n",
"nest_asyncio.apply()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "h7yR_Rt_yZQM"
},
"source": [
"## Basic Usage\n",
"\n",
"Let's start with a simple crawl example:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "yBh6hf4WyZQM",
"outputId": "0f83af5c-abba-4175-ed95-70b7512e6bcc"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.05 seconds\n",
"[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.05 seconds.\n",
"18102\n"
]
}
],
"source": [
"async def simple_crawl():\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" result = await crawler.arun(url=\"https://www.nbcnews.com/business\")\n",
" print(len(result.markdown))\n",
"await simple_crawl()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9rtkgHI28uI4"
},
"source": [
"💡 By default, **Crawl4AI** caches the result of every URL, so the next time you call it, you'll get an instant result. But if you want to bypass the cache, just set `bypass_cache=True`."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MzZ0zlJ9yZQM"
},
"source": [
"## Advanced Features\n",
"\n",
"### Executing JavaScript and Using CSS Selectors"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "gHStF86xyZQM",
"outputId": "34d0fb6d-4dec-4677-f76e-85a1f082829b"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n",
"[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 6.06 seconds\n",
"[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.10 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.11 seconds.\n",
"41135\n"
]
}
],
"source": [
"async def js_and_css():\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" js_code = [\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"]\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" js_code=js_code,\n",
" # css_selector=\"YOUR_CSS_SELECTOR_HERE\",\n",
" bypass_cache=True\n",
" )\n",
" print(len(result.markdown))\n",
"\n",
"await js_and_css()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cqE_W4coyZQM"
},
"source": [
"### Using a Proxy\n",
"\n",
"Note: You'll need to replace the proxy URL with a working proxy for this example to run successfully."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "QjAyiAGqyZQM"
},
"outputs": [],
"source": [
"async def use_proxy():\n",
" async with AsyncWebCrawler(verbose=True, proxy=\"http://your-proxy-url:port\") as crawler:\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" bypass_cache=True\n",
" )\n",
" print(result.markdown[:500]) # Print first 500 characters\n",
"\n",
"# Uncomment the following line to run the proxy example\n",
"# await use_proxy()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "XTZ88lbayZQN"
},
"source": [
"### Extracting Structured Data with OpenAI\n",
"\n",
"Note: You'll need to set your OpenAI API key as an environment variable for this example to work."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "fIOlDayYyZQN",
"outputId": "cb8359cc-dee0-4762-9698-5dfdcee055b8"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://openai.com/api/pricing/ using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://openai.com/api/pricing/ successfully!\n",
"[LOG] 🚀 Crawling done for https://openai.com/api/pricing/, success: True, time taken: 3.77 seconds\n",
"[LOG] 🚀 Content extracted for https://openai.com/api/pricing/, success: True, time taken: 0.21 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://openai.com/api/pricing/, Strategy: AsyncWebCrawler\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 0\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 1\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 2\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 3\n",
"[LOG] Extracted 4 blocks from URL: https://openai.com/api/pricing/ block index: 3\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 4\n",
"[LOG] Extracted 5 blocks from URL: https://openai.com/api/pricing/ block index: 0\n",
"[LOG] Extracted 1 blocks from URL: https://openai.com/api/pricing/ block index: 4\n",
"[LOG] Extracted 8 blocks from URL: https://openai.com/api/pricing/ block index: 1\n",
"[LOG] Extracted 12 blocks from URL: https://openai.com/api/pricing/ block index: 2\n",
"[LOG] 🚀 Extraction done for https://openai.com/api/pricing/, time taken: 8.55 seconds.\n",
"5029\n"
]
}
],
"source": [
"import os\n",
"from google.colab import userdata\n",
"os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')\n",
"\n",
"class OpenAIModelFee(BaseModel):\n",
" model_name: str = Field(..., description=\"Name of the OpenAI model.\")\n",
" input_fee: str = Field(..., description=\"Fee for input token for the OpenAI model.\")\n",
" output_fee: str = Field(..., description=\"Fee for output token for the OpenAI model.\")\n",
"\n",
"async def extract_openai_fees():\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" result = await crawler.arun(\n",
" url='https://openai.com/api/pricing/',\n",
" word_count_threshold=1,\n",
" extraction_strategy=LLMExtractionStrategy(\n",
" provider=\"openai/gpt-4o\", api_token=os.getenv('OPENAI_API_KEY'),\n",
" schema=OpenAIModelFee.schema(),\n",
" extraction_type=\"schema\",\n",
" instruction=\"\"\"From the crawled content, extract all mentioned model names along with their fees for input and output tokens.\n",
" Do not miss any models in the entire content. One extracted model JSON format should look like this:\n",
" {\"model_name\": \"GPT-4\", \"input_fee\": \"US$10.00 / 1M tokens\", \"output_fee\": \"US$30.00 / 1M tokens\"}.\"\"\"\n",
" ),\n",
" bypass_cache=True,\n",
" )\n",
" print(len(result.extracted_content))\n",
"\n",
"# Uncomment the following line to run the OpenAI extraction example\n",
"await extract_openai_fees()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BypA5YxEyZQN"
},
"source": [
"### Advanced Multi-Page Crawling with JavaScript Execution"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tfkcVQ0b7mw-"
},
"source": [
"## Advanced Multi-Page Crawling with JavaScript Execution\n",
"\n",
"This example demonstrates Crawl4AI's ability to handle complex crawling scenarios, specifically extracting commits from multiple pages of a GitHub repository. The challenge here is that clicking the \"Next\" button doesn't load a new page, but instead uses asynchronous JavaScript to update the content. This is a common hurdle in modern web crawling.\n",
"\n",
"To overcome this, we use Crawl4AI's custom JavaScript execution to simulate clicking the \"Next\" button, and implement a custom hook to detect when new data has loaded. Our strategy involves comparing the first commit's text before and after \"clicking\" Next, waiting until it changes to confirm new data has rendered. This showcases Crawl4AI's flexibility in handling dynamic content and its ability to implement custom logic for even the most challenging crawling tasks."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "qUBKGpn3yZQN",
"outputId": "3e555b6a-ed33-42f4-cce9-499a923fbe17"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
"[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 5.16 seconds\n",
"[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.28 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.28 seconds.\n",
"Page 1: Found 35 commits\n",
"[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
"[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.78 seconds\n",
"[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.90 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.90 seconds.\n",
"Page 2: Found 35 commits\n",
"[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
"[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 2.00 seconds\n",
"[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.74 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.75 seconds.\n",
"Page 3: Found 35 commits\n",
"Successfully crawled 105 commits across 3 pages\n"
]
}
],
"source": [
"import re\n",
"from bs4 import BeautifulSoup\n",
"\n",
"async def crawl_typescript_commits():\n",
" first_commit = \"\"\n",
" async def on_execution_started(page):\n",
" nonlocal first_commit\n",
" try:\n",
" while True:\n",
" await page.wait_for_selector('li.Box-sc-g0xbh4-0 h4')\n",
" commit = await page.query_selector('li.Box-sc-g0xbh4-0 h4')\n",
" commit = await commit.evaluate('(element) => element.textContent')\n",
" commit = re.sub(r'\\s+', '', commit)\n",
" if commit and commit != first_commit:\n",
" first_commit = commit\n",
" break\n",
" await asyncio.sleep(0.5)\n",
" except Exception as e:\n",
" print(f\"Warning: New content didn't appear after JavaScript execution: {e}\")\n",
"\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" crawler.crawler_strategy.set_hook('on_execution_started', on_execution_started)\n",
"\n",
" url = \"https://github.com/microsoft/TypeScript/commits/main\"\n",
" session_id = \"typescript_commits_session\"\n",
" all_commits = []\n",
"\n",
" js_next_page = \"\"\"\n",
" const button = document.querySelector('a[data-testid=\"pagination-next-button\"]');\n",
" if (button) button.click();\n",
" \"\"\"\n",
"\n",
" for page in range(3): # Crawl 3 pages\n",
" result = await crawler.arun(\n",
" url=url,\n",
" session_id=session_id,\n",
" css_selector=\"li.Box-sc-g0xbh4-0\",\n",
" js=js_next_page if page > 0 else None,\n",
" bypass_cache=True,\n",
" js_only=page > 0\n",
" )\n",
"\n",
" assert result.success, f\"Failed to crawl page {page + 1}\"\n",
"\n",
" soup = BeautifulSoup(result.cleaned_html, 'html.parser')\n",
" commits = soup.select(\"li\")\n",
" all_commits.extend(commits)\n",
"\n",
" print(f\"Page {page + 1}: Found {len(commits)} commits\")\n",
"\n",
" await crawler.crawler_strategy.kill_session(session_id)\n",
" print(f\"Successfully crawled {len(all_commits)} commits across 3 pages\")\n",
"\n",
"await crawl_typescript_commits()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EJRnYsp6yZQN"
},
"source": [
"### Using JsonCssExtractionStrategy for Fast Structured Output"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1ZMqIzB_8SYp"
},
"source": [
"The JsonCssExtractionStrategy is a powerful feature of Crawl4AI that allows for precise, structured data extraction from web pages. Here's how it works:\n",
"\n",
"1. You define a schema that describes the pattern of data you're interested in extracting.\n",
"2. The schema includes a base selector that identifies repeating elements on the page.\n",
"3. Within the schema, you define fields, each with its own selector and type.\n",
"4. These field selectors are applied within the context of each base selector element.\n",
"5. The strategy supports nested structures, lists within lists, and various data types.\n",
"6. You can even include computed fields for more complex data manipulation.\n",
"\n",
"This approach allows for highly flexible and precise data extraction, transforming semi-structured web content into clean, structured JSON data. It's particularly useful for extracting consistent data patterns from pages like product listings, news articles, or search results.\n",
"\n",
"For more details and advanced usage, check out the full documentation on the Crawl4AI website."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "trCMR2T9yZQN",
"outputId": "718d36f4-cccf-40f4-8d8c-c3ba73524d16"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n",
"[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 7.00 seconds\n",
"[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.32 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.48 seconds.\n",
"Successfully extracted 11 news teasers\n",
"{\n",
" \"category\": \"Business News\",\n",
" \"headline\": \"NBC ripped up its Olympics playbook for 2024 \\u2014 so far, the new strategy paid off\",\n",
" \"summary\": \"The Olympics have long been key to NBCUniversal. Paris marked the 18th Olympic Games broadcast by NBC in the U.S.\",\n",
" \"time\": \"13h ago\",\n",
" \"image\": {\n",
" \"src\": \"https://media-cldnry.s-nbcnews.com/image/upload/t_focal-200x100,f_auto,q_auto:best/rockcms/2024-09/240903-nbc-olympics-ch-1344-c7a486.jpg\",\n",
" \"alt\": \"Mike Tirico.\"\n",
" },\n",
" \"link\": \"https://www.nbcnews.com/business\"\n",
"}\n"
]
}
],
"source": [
"async def extract_news_teasers():\n",
" schema = {\n",
" \"name\": \"News Teaser Extractor\",\n",
" \"baseSelector\": \".wide-tease-item__wrapper\",\n",
" \"fields\": [\n",
" {\n",
" \"name\": \"category\",\n",
" \"selector\": \".unibrow span[data-testid='unibrow-text']\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"headline\",\n",
" \"selector\": \".wide-tease-item__headline\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"summary\",\n",
" \"selector\": \".wide-tease-item__description\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"time\",\n",
" \"selector\": \"[data-testid='wide-tease-date']\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"image\",\n",
" \"type\": \"nested\",\n",
" \"selector\": \"picture.teasePicture img\",\n",
" \"fields\": [\n",
" {\"name\": \"src\", \"type\": \"attribute\", \"attribute\": \"src\"},\n",
" {\"name\": \"alt\", \"type\": \"attribute\", \"attribute\": \"alt\"},\n",
" ],\n",
" },\n",
" {\n",
" \"name\": \"link\",\n",
" \"selector\": \"a[href]\",\n",
" \"type\": \"attribute\",\n",
" \"attribute\": \"href\",\n",
" },\n",
" ],\n",
" }\n",
"\n",
" extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)\n",
"\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" extraction_strategy=extraction_strategy,\n",
" bypass_cache=True,\n",
" )\n",
"\n",
" assert result.success, \"Failed to crawl the page\"\n",
"\n",
" news_teasers = json.loads(result.extracted_content)\n",
" print(f\"Successfully extracted {len(news_teasers)} news teasers\")\n",
" print(json.dumps(news_teasers[0], indent=2))\n",
"\n",
"await extract_news_teasers()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FnyVhJaByZQN"
},
"source": [
"## Speed Comparison\n",
"\n",
"Let's compare the speed of Crawl4AI with Firecrawl, a paid service. We call Firecrawl's hosted API directly, so its timings reflect the service's server-side performance, while Crawl4AI runs locally inside this Colab environment."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "agDD186f3wig"
},
"source": [
"💡 **Note on Speed Comparison:**\n",
"\n",
"This speed test runs on Google Colab, where network speed and performance vary and may not reflect optimal conditions. When we call Firecrawl's API, we see its best-case performance, while Crawl4AI's performance is limited by Colab's network speed.\n",
"\n",
"For a more accurate comparison, it's recommended to run these tests on your own servers or computers with a stable and fast internet connection. Despite these limitations, Crawl4AI still demonstrates faster performance in this environment.\n",
"\n",
"If you run these tests locally, you may observe an even more significant speed advantage for Crawl4AI compared to other services."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "F7KwHv8G1LbY"
},
"outputs": [],
"source": [
"!pip install firecrawl"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "91813zILyZQN",
"outputId": "663223db-ab89-4976-b233-05ceca62b19b"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Firecrawl (simulated):\n",
"Time taken: 4.38 seconds\n",
"Content length: 41967 characters\n",
"Images found: 49\n",
"\n",
"Crawl4AI (simple crawl):\n",
"Time taken: 4.22 seconds\n",
"Content length: 18221 characters\n",
"Images found: 49\n",
"\n",
"Crawl4AI (with JavaScript execution):\n",
"Time taken: 9.13 seconds\n",
"Content length: 34243 characters\n",
"Images found: 89\n"
]
}
],
"source": [
"import os\n",
"from google.colab import userdata\n",
"os.environ['FIRECRAWL_API_KEY'] = userdata.get('FIRECRAWL_API_KEY')\n",
"import time\n",
"from firecrawl import FirecrawlApp\n",
"\n",
"async def speed_comparison():\n",
" # Simulated Firecrawl performance\n",
" app = FirecrawlApp(api_key=os.environ['FIRECRAWL_API_KEY'])\n",
" start = time.time()\n",
" scrape_status = app.scrape_url(\n",
" 'https://www.nbcnews.com/business',\n",
" params={'formats': ['markdown', 'html']}\n",
" )\n",
" end = time.time()\n",
" print(\"Firecrawl (simulated):\")\n",
" print(f\"Time taken: {end - start:.2f} seconds\")\n",
" print(f\"Content length: {len(scrape_status['markdown'])} characters\")\n",
" print(f\"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}\")\n",
" print()\n",
"\n",
" async with AsyncWebCrawler() as crawler:\n",
" # Crawl4AI simple crawl\n",
" start = time.time()\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" word_count_threshold=0,\n",
" bypass_cache=True,\n",
" verbose=False\n",
" )\n",
" end = time.time()\n",
" print(\"Crawl4AI (simple crawl):\")\n",
" print(f\"Time taken: {end - start:.2f} seconds\")\n",
" print(f\"Content length: {len(result.markdown)} characters\")\n",
" print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n",
" print()\n",
"\n",
" # Crawl4AI with JavaScript execution\n",
" start = time.time()\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" js_code=[\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"],\n",
" word_count_threshold=0,\n",
" bypass_cache=True,\n",
" verbose=False\n",
" )\n",
" end = time.time()\n",
" print(\"Crawl4AI (with JavaScript execution):\")\n",
" print(f\"Time taken: {end - start:.2f} seconds\")\n",
" print(f\"Content length: {len(result.markdown)} characters\")\n",
" print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n",
"\n",
"await speed_comparison()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OBFFYVJIyZQN"
},
"source": [
"If you run this on a local machine with a fast, stable internet connection:\n",
"- Simple crawl: Crawl4AI is typically over 3-4 times faster than Firecrawl.\n",
"- With JavaScript execution: Even when executing JavaScript to load more content (potentially doubling the number of images found), Crawl4AI is still faster than Firecrawl's simple crawl.\n",
"\n",
"Please note that actual performance may vary depending on network conditions and the specific content being crawled."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "A6_1RK1_yZQO"
},
"source": [
"## Conclusion\n",
"\n",
"In this notebook, we've explored the powerful features of Crawl4AI, including:\n",
"\n",
"1. Basic crawling\n",
"2. JavaScript execution and CSS selector usage\n",
"3. Proxy support\n",
"4. Structured data extraction with OpenAI\n",
"5. Advanced multi-page crawling with JavaScript execution\n",
"6. Fast structured output using JsonCssExtractionStrategy\n",
"7. Speed comparison with other services\n",
"\n",
"Crawl4AI offers a fast, flexible, and powerful solution for web crawling and data extraction tasks. Its asynchronous architecture and advanced features make it suitable for a wide range of applications, from simple web scraping to complex, multi-page data extraction scenarios.\n",
"\n",
"For more information and advanced usage, please visit the [Crawl4AI documentation](https://docs.crawl4ai.com/).\n",
"\n",
"Happy crawling!"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -42,7 +42,7 @@ dependencies = [
"pyperclip>=1.8.2",
"faust-cchardet>=2.1.19",
"aiohttp>=3.11.11",
"humanize>=4.10.0"
"humanize>=4.10.0",
]
classifiers = [
"Development Status :: 4 - Beta",

View File

@@ -10,6 +10,7 @@ import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy, JsonXPathExtractionStrategy
from crawl4ai.utils import preprocess_html_for_schema
import json
# Test HTML - A complex job board with companies, departments, and positions

View File

@@ -0,0 +1,4 @@
"""Docker browser strategy tests.
This package contains tests for the Docker browser strategy implementation.
"""

View File

@@ -0,0 +1,651 @@
"""Test examples for Docker Browser Strategy.

These examples demonstrate the functionality of Docker Browser Strategy
and serve as functional tests.
"""
import asyncio
import os
import sys
import shutil
import uuid

# Add the project root to Python path if running directly
if __name__ == "__main__":
    sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../..')))

from crawl4ai.browser import BrowserManager, DockerConfig, DockerRegistry, DockerUtils
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.async_logger import AsyncLogger

# Create a logger for clear terminal output
logger = AsyncLogger(verbose=True, log_file=None)

# Global Docker utils instance
docker_utils = DockerUtils(logger)
async def test_docker_components():
    """Test Docker utilities, registry, and image building.

    This function tests the core Docker components before running the browser tests.
    It validates DockerRegistry, DockerUtils, and builds test images to ensure
    everything is functioning correctly.
    """
    logger.info("Testing Docker components", tag="SETUP")

    # Create a test registry directory
    registry_dir = os.path.join(os.path.dirname(__file__), "test_registry")
    registry_file = os.path.join(registry_dir, "test_registry.json")
    os.makedirs(registry_dir, exist_ok=True)

    try:
        # 1. Test DockerRegistry
        logger.info("Testing DockerRegistry...", tag="SETUP")
        registry = DockerRegistry(registry_file)

        # Test saving and loading registry
        test_container_id = "test-container-123"
        registry.register_container(test_container_id, 9876, "test-hash-123")
        registry.save()

        # Create a new registry instance that loads from the file
        registry2 = DockerRegistry(registry_file)
        port = registry2.get_container_host_port(test_container_id)
        hash_value = registry2.get_container_config_hash(test_container_id)
        if port != 9876 or hash_value != "test-hash-123":
            logger.error("DockerRegistry persistence failed", tag="SETUP")
            return False

        # Clean up test container from registry
        registry2.unregister_container(test_container_id)
        logger.success("DockerRegistry works correctly", tag="SETUP")

        # 2. Test DockerUtils
        logger.info("Testing DockerUtils...", tag="SETUP")

        # Test port detection
        in_use = docker_utils.is_port_in_use(22)  # SSH port is usually in use
        logger.info(f"Port 22 in use: {in_use}", tag="SETUP")

        # Get next available port
        available_port = docker_utils.get_next_available_port(9000)
        logger.info(f"Next available port: {available_port}", tag="SETUP")

        # Test config hash generation
        config_dict = {"mode": "connect", "headless": True}
        config_hash = docker_utils.generate_config_hash(config_dict)
        logger.info(f"Generated config hash: {config_hash[:8]}...", tag="SETUP")

        # 3. Test Docker is available
        logger.info("Checking Docker availability...", tag="SETUP")
        if not await check_docker_available():
            logger.error("Docker is not available - cannot continue tests", tag="SETUP")
            return False

        # 4. Test building connect image
        logger.info("Building connect mode Docker image...", tag="SETUP")
        connect_image = await docker_utils.ensure_docker_image_exists(None, "connect")
        if not connect_image:
            logger.error("Failed to build connect mode image", tag="SETUP")
            return False
        logger.success(f"Successfully built connect image: {connect_image}", tag="SETUP")

        # 5. Test building launch image
        logger.info("Building launch mode Docker image...", tag="SETUP")
        launch_image = await docker_utils.ensure_docker_image_exists(None, "launch")
        if not launch_image:
            logger.error("Failed to build launch mode image", tag="SETUP")
            return False
        logger.success(f"Successfully built launch image: {launch_image}", tag="SETUP")

        # 6. Test creating and removing container
        logger.info("Testing container creation and removal...", tag="SETUP")
        container_id = await docker_utils.create_container(
            image_name=launch_image,
            host_port=available_port,
            container_name="crawl4ai-test-container"
        )
        if not container_id:
            logger.error("Failed to create test container", tag="SETUP")
            return False
        logger.info(f"Created test container: {container_id[:12]}", tag="SETUP")

        # Verify container is running
        running = await docker_utils.is_container_running(container_id)
        if not running:
            logger.error("Test container is not running", tag="SETUP")
            await docker_utils.remove_container(container_id)
            return False

        # Test commands in container
        logger.info("Testing command execution in container...", tag="SETUP")
        returncode, stdout, stderr = await docker_utils.exec_in_container(
            container_id, ["ls", "-la", "/"]
        )
        if returncode != 0:
            logger.error(f"Command execution failed: {stderr}", tag="SETUP")
            await docker_utils.remove_container(container_id)
            return False

        # Verify Chrome is installed in the container
        returncode, stdout, stderr = await docker_utils.exec_in_container(
            container_id, ["which", "chromium"]
        )
        if returncode != 0:
            logger.error("Chrome not found in container", tag="SETUP")
            await docker_utils.remove_container(container_id)
            return False
        chrome_path = stdout.strip()
        logger.info(f"Chrome found at: {chrome_path}", tag="SETUP")

        # Test Chrome version
        returncode, stdout, stderr = await docker_utils.exec_in_container(
            container_id, ["chromium", "--version"]
        )
        if returncode != 0:
            logger.error(f"Failed to get Chrome version: {stderr}", tag="SETUP")
            await docker_utils.remove_container(container_id)
            return False
        logger.info(f"Chrome version: {stdout.strip()}", tag="SETUP")

        # Remove test container
        removed = await docker_utils.remove_container(container_id)
        if not removed:
            logger.error("Failed to remove test container", tag="SETUP")
            return False
        logger.success("Test container removed successfully", tag="SETUP")

        # All components tested successfully
        logger.success("All Docker components tested successfully", tag="SETUP")
        return True
    except Exception as e:
        logger.error(f"Docker component tests failed: {str(e)}", tag="SETUP")
        return False
    finally:
        # Clean up registry test directory
        if os.path.exists(registry_dir):
            shutil.rmtree(registry_dir)
async def test_docker_connect_mode():
    """Test Docker browser in connect mode.

    This tests the basic functionality of creating a browser in Docker
    connect mode and using it for navigation.
    """
    logger.info("Testing Docker browser in connect mode", tag="TEST")

    # Create temp directory for user data
    temp_dir = os.path.join(os.path.dirname(__file__), "tmp_user_data")
    os.makedirs(temp_dir, exist_ok=True)

    try:
        # Create Docker configuration
        docker_config = DockerConfig(
            mode="connect",
            persistent=False,
            remove_on_exit=True,
            user_data_dir=temp_dir
        )

        # Create browser configuration
        browser_config = BrowserConfig(
            browser_mode="docker",
            headless=True,
            docker_config=docker_config
        )

        # Create browser manager
        manager = BrowserManager(browser_config=browser_config, logger=logger)

        # Start the browser
        await manager.start()
        logger.info("Browser started successfully", tag="TEST")

        # Create crawler config
        crawler_config = CrawlerRunConfig(url="https://example.com")

        # Get a page
        page, context = await manager.get_page(crawler_config)
        logger.info("Got page successfully", tag="TEST")

        # Navigate to a website
        await page.goto("https://example.com")
        logger.info("Navigated to example.com", tag="TEST")

        # Get page title
        title = await page.title()
        logger.info(f"Page title: {title}", tag="TEST")

        # Clean up
        await manager.close()
        logger.info("Browser closed successfully", tag="TEST")
        return True
    except Exception as e:
        logger.error(f"Test failed: {str(e)}", tag="TEST")
        # Ensure cleanup
        try:
            await manager.close()
        except:
            pass
        return False
    finally:
        # Clean up the temp directory
        if os.path.exists(temp_dir):
            shutil.rmtree(temp_dir)
async def test_docker_launch_mode():
    """Test Docker browser in launch mode.

    This tests launching a Chrome browser within a Docker container
    on demand with custom settings.
    """
    logger.info("Testing Docker browser in launch mode", tag="TEST")

    # Create temp directory for user data
    temp_dir = os.path.join(os.path.dirname(__file__), "tmp_user_data_launch")
    os.makedirs(temp_dir, exist_ok=True)

    try:
        # Create Docker configuration
        docker_config = DockerConfig(
            mode="launch",
            persistent=False,
            remove_on_exit=True,
            user_data_dir=temp_dir
        )

        # Create browser configuration
        browser_config = BrowserConfig(
            browser_mode="docker",
            headless=True,
            text_mode=True,  # Enable text mode for faster operation
            docker_config=docker_config
        )

        # Create browser manager
        manager = BrowserManager(browser_config=browser_config, logger=logger)

        # Start the browser
        await manager.start()
        logger.info("Browser started successfully", tag="TEST")

        # Create crawler config
        crawler_config = CrawlerRunConfig(url="https://example.com")

        # Get a page
        page, context = await manager.get_page(crawler_config)
        logger.info("Got page successfully", tag="TEST")

        # Navigate to a website
        await page.goto("https://example.com")
        logger.info("Navigated to example.com", tag="TEST")

        # Get page title
        title = await page.title()
        logger.info(f"Page title: {title}", tag="TEST")

        # Clean up
        await manager.close()
        logger.info("Browser closed successfully", tag="TEST")
        return True
    except Exception as e:
        logger.error(f"Test failed: {str(e)}", tag="TEST")
        # Ensure cleanup
        try:
            await manager.close()
        except:
            pass
        return False
    finally:
        # Clean up the temp directory
        if os.path.exists(temp_dir):
            shutil.rmtree(temp_dir)
async def test_docker_persistent_storage():
    """Test Docker browser with persistent storage.

    This tests creating localStorage data in one session and verifying
    it persists to another session when using persistent storage.
    """
    logger.info("Testing Docker browser with persistent storage", tag="TEST")

    # Create a unique temp directory
    test_id = uuid.uuid4().hex[:8]
    temp_dir = os.path.join(os.path.dirname(__file__), f"tmp_user_data_persist_{test_id}")
    os.makedirs(temp_dir, exist_ok=True)

    manager1 = None
    manager2 = None

    try:
        # Create Docker configuration with persistence
        docker_config = DockerConfig(
            mode="connect",
            persistent=True,  # Keep container running between sessions
            user_data_dir=temp_dir,
            container_user_data_dir="/data"
        )

        # Create browser configuration
        browser_config = BrowserConfig(
            browser_mode="docker",
            headless=True,
            docker_config=docker_config
        )

        # Create first browser manager
        manager1 = BrowserManager(browser_config=browser_config, logger=logger)

        # Start the browser
        await manager1.start()
        logger.info("First browser started successfully", tag="TEST")

        # Create crawler config
        crawler_config = CrawlerRunConfig()

        # Get a page
        page1, context1 = await manager1.get_page(crawler_config)

        # Navigate to example.com
        await page1.goto("https://example.com")

        # Set localStorage item
        test_value = f"test_value_{test_id}"
        await page1.evaluate(f"localStorage.setItem('test_key', '{test_value}')")
        logger.info(f"Set localStorage test_key = {test_value}", tag="TEST")

        # Close the first browser manager
        await manager1.close()
        logger.info("First browser closed", tag="TEST")

        # Create second browser manager with same config
        manager2 = BrowserManager(browser_config=browser_config, logger=logger)

        # Start the browser
        await manager2.start()
        logger.info("Second browser started successfully", tag="TEST")

        # Get a page
        page2, context2 = await manager2.get_page(crawler_config)

        # Navigate to same site
        await page2.goto("https://example.com")

        # Get localStorage item
        value = await page2.evaluate("localStorage.getItem('test_key')")
        logger.info(f"Retrieved localStorage test_key = {value}", tag="TEST")

        # Check if persistence worked
        if value == test_value:
            logger.success("Storage persistence verified!", tag="TEST")
        else:
            logger.error(f"Storage persistence failed! Expected {test_value}, got {value}", tag="TEST")

        # Clean up
        await manager2.close()
        logger.info("Second browser closed successfully", tag="TEST")
        return value == test_value
    except Exception as e:
        logger.error(f"Test failed: {str(e)}", tag="TEST")
        # Ensure cleanup
        try:
            if manager1:
                await manager1.close()
            if manager2:
                await manager2.close()
        except:
            pass
        return False
    finally:
        # Clean up the temp directory
        if os.path.exists(temp_dir):
            shutil.rmtree(temp_dir)
async def test_docker_parallel_pages():
    """Test Docker browser with parallel page creation.

    This tests the ability to create and use multiple pages in parallel
    from a single Docker browser instance.
    """
    logger.info("Testing Docker browser with parallel pages", tag="TEST")

    try:
        # Create Docker configuration
        docker_config = DockerConfig(
            mode="connect",
            persistent=False,
            remove_on_exit=True
        )

        # Create browser configuration
        browser_config = BrowserConfig(
            browser_mode="docker",
            headless=True,
            docker_config=docker_config
        )

        # Create browser manager
        manager = BrowserManager(browser_config=browser_config, logger=logger)

        # Start the browser
        await manager.start()
        logger.info("Browser started successfully", tag="TEST")

        # Create crawler config
        crawler_config = CrawlerRunConfig()

        # Get multiple pages
        page_count = 3
        pages = await manager.get_pages(crawler_config, count=page_count)
        logger.info(f"Got {len(pages)} pages successfully", tag="TEST")
        if len(pages) != page_count:
            logger.error(f"Expected {page_count} pages, got {len(pages)}", tag="TEST")
            await manager.close()
            return False

        # Navigate to different sites with each page
        tasks = []
        for i, (page, _) in enumerate(pages):
            tasks.append(page.goto(f"https://example.com?page={i}"))

        # Wait for all navigations to complete
        await asyncio.gather(*tasks)
        logger.info("All pages navigated successfully", tag="TEST")

        # Get titles from all pages
        titles = []
        for i, (page, _) in enumerate(pages):
            title = await page.title()
            titles.append(title)
            logger.info(f"Page {i+1} title: {title}", tag="TEST")

        # Clean up
        await manager.close()
        logger.info("Browser closed successfully", tag="TEST")
        return True
    except Exception as e:
        logger.error(f"Test failed: {str(e)}", tag="TEST")
        # Ensure cleanup
        try:
            await manager.close()
        except:
            pass
        return False
async def test_docker_registry_reuse():
    """Test Docker container reuse via registry.

    This tests that containers with matching configurations
    are reused rather than creating new ones.
    """
    logger.info("Testing Docker container reuse via registry", tag="TEST")

    # Create registry for this test
    registry_dir = os.path.join(os.path.dirname(__file__), "registry_reuse_test")
    registry_file = os.path.join(registry_dir, "registry.json")
    os.makedirs(registry_dir, exist_ok=True)

    manager1 = None
    manager2 = None
    container_id1 = None

    try:
        # Create identical Docker configurations with custom registry
        docker_config1 = DockerConfig(
            mode="connect",
            persistent=True,  # Keep container running after closing
            registry_file=registry_file
        )

        # Create first browser configuration
        browser_config1 = BrowserConfig(
            browser_mode="docker",
            headless=True,
            docker_config=docker_config1
        )

        # Create first browser manager
        manager1 = BrowserManager(browser_config=browser_config1, logger=logger)

        # Start the first browser
        await manager1.start()
        logger.info("First browser started successfully", tag="TEST")

        # Get container ID from the strategy
        docker_strategy1 = manager1.strategy
        container_id1 = docker_strategy1.container_id
        logger.info(f"First browser container ID: {container_id1[:12]}", tag="TEST")

        # Close the first manager but keep container running
        await manager1.close()
        logger.info("First browser closed", tag="TEST")

        # Create second Docker configuration identical to first
        docker_config2 = DockerConfig(
            mode="connect",
            persistent=True,
            registry_file=registry_file
        )

        # Create second browser configuration
        browser_config2 = BrowserConfig(
            browser_mode="docker",
            headless=True,
            docker_config=docker_config2
        )

        # Create second browser manager
        manager2 = BrowserManager(browser_config=browser_config2, logger=logger)

        # Start the second browser - should reuse existing container
        await manager2.start()
        logger.info("Second browser started successfully", tag="TEST")

        # Get container ID from the second strategy
        docker_strategy2 = manager2.strategy
        container_id2 = docker_strategy2.container_id
        logger.info(f"Second browser container ID: {container_id2[:12]}", tag="TEST")

        # Verify container reuse
        if container_id1 == container_id2:
            logger.success("Container reuse successful - using same container!", tag="TEST")
        else:
            logger.error("Container reuse failed - new container created!", tag="TEST")

        # Clean up
        docker_strategy2.docker_config.persistent = False
        docker_strategy2.docker_config.remove_on_exit = True
        await manager2.close()
        logger.info("Second browser closed and container removed", tag="TEST")
        return container_id1 == container_id2
    except Exception as e:
        logger.error(f"Test failed: {str(e)}", tag="TEST")
        # Ensure cleanup
        try:
            if manager1:
                await manager1.close()
            if manager2:
                await manager2.close()
            # Make sure container is removed
            if container_id1:
                await docker_utils.remove_container(container_id1, force=True)
        except:
            pass
        return False
    finally:
        # Clean up registry directory
        if os.path.exists(registry_dir):
            shutil.rmtree(registry_dir)
async def run_tests():
    """Run all tests sequentially."""
    results = []
    logger.info("Starting Docker Browser Strategy tests", tag="TEST")

    # Check if Docker is available
    if not await check_docker_available():
        logger.error("Docker is not available - skipping tests", tag="TEST")
        return

    # First test Docker components
    # setup_result = await test_docker_components()
    # if not setup_result:
    #     logger.error("Docker component tests failed - skipping browser tests", tag="TEST")
    #     return

    # Run browser tests
    results.append(await test_docker_connect_mode())
    results.append(await test_docker_launch_mode())
    results.append(await test_docker_persistent_storage())
    results.append(await test_docker_parallel_pages())
    results.append(await test_docker_registry_reuse())

    # Print summary
    total = len(results)
    passed = sum(1 for r in results if r)
    logger.info(f"Tests complete: {passed}/{total} passed", tag="SUMMARY")
    if passed == total:
        logger.success("All tests passed!", tag="SUMMARY")
    else:
        logger.error(f"{total - passed} tests failed", tag="SUMMARY")
async def check_docker_available() -> bool:
    """Check if Docker is available on the system.

    Returns:
        bool: True if Docker is available, False otherwise
    """
    try:
        proc = await asyncio.create_subprocess_exec(
            "docker", "--version",
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE
        )
        stdout, _ = await proc.communicate()
        # Cast to bool so the function matches its declared return type
        return proc.returncode == 0 and bool(stdout)
    except Exception:
        return False
if __name__ == "__main__":
    asyncio.run(run_tests())

Some files were not shown because too many files have changed in this diff.