Compare commits


1 Commit

Author SHA1 Message Date
UncleCode
3ea3c0520d Add all 5 deployments solution for testing 2025-03-10 18:57:14 +08:00
179 changed files with 10175 additions and 44421 deletions

View File

@@ -1,35 +0,0 @@
name: Discord GitHub Notifications
on:
issues:
types: [opened]
issue_comment:
types: [created]
pull_request:
types: [opened]
discussion:
types: [created]
jobs:
notify-discord:
runs-on: ubuntu-latest
steps:
- name: Set webhook based on event type
id: set-webhook
run: |
if [ "${{ github.event_name }}" == "discussion" ]; then
echo "webhook=${{ secrets.DISCORD_DISCUSSIONS_WEBHOOK }}" >> $GITHUB_OUTPUT
else
echo "webhook=${{ secrets.DISCORD_WEBHOOK }}" >> $GITHUB_OUTPUT
fi
- name: Discord Notification
uses: Ilshidur/action-discord@master
env:
DISCORD_WEBHOOK: ${{ steps.set-webhook.outputs.webhook }}
with:
args: |
${{ github.event_name == 'issues' && format('📣 New issue created: **{0}** by {1} - {2}', github.event.issue.title, github.event.issue.user.login, github.event.issue.html_url) ||
github.event_name == 'issue_comment' && format('💬 New comment on issue **{0}** by {1} - {2}', github.event.issue.title, github.event.comment.user.login, github.event.comment.html_url) ||
github.event_name == 'pull_request' && format('🔄 New PR opened: **{0}** by {1} - {2}', github.event.pull_request.title, github.event.pull_request.user.login, github.event.pull_request.html_url) ||
format('💬 New discussion started: **{0}** by {1} - {2}', github.event.discussion.title, github.event.discussion.user.login, github.event.discussion.html_url) }}

.gitignore
View File

@@ -255,10 +255,3 @@ continue_config.json
.llm.env
.private/
CLAUDE_MONITOR.md
CLAUDE.md
tests/**/test_site
tests/**/reports
tests/**/benchmark_reports

View File

@@ -5,74 +5,6 @@ All notable changes to Crawl4AI will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
### [Feature] 2025-04-21
- Implemented MCP protocol for machine-to-machine communication
- Added WebSocket and SSE transport for MCP server
- Exposed server endpoints via MCP protocol
- Created tests for MCP socket and SSE communication
- Enhanced Docker server with file handling and intelligent search
- Added PDF and screenshot endpoints with file saving capability
- Added JavaScript execution endpoint for page interaction
- Implemented advanced context search with BM25 and code chunking
- Added file path output support for generated assets
- Improved server endpoints and API surface
- Added intelligent context search with query filtering
- Added syntax-aware code function chunking
- Implemented efficient HTML processing pipeline
- Added support for controlling browser geolocation via new GeolocationConfig class
- Added locale and timezone configuration options to CrawlerRunConfig
- Added example script demonstrating geolocation and locale usage
- Added documentation for location-based identity features
### [Refactor] 2025-04-20
- Replaced crawler_manager.py with simpler crawler_pool.py implementation
- Added global page semaphore for hard concurrency cap
- Implemented browser pool with idle cleanup
- Added playground UI for testing and stress testing
- Updated API handlers to use pooled crawlers
- Enhanced logging levels and symbols
- Added memory tests and stress test utilities
### [Added] 2025-04-17
- Added content source selection feature for markdown generation
- New `content_source` parameter allows choosing between `cleaned_html`, `raw_html`, and `fit_html`
- Provides flexibility in how HTML content is processed before markdown conversion
- Added examples and documentation for the new feature
- Includes backward compatibility with default `cleaned_html` behavior
## Version 0.5.0.post5 (2025-03-14)
### Added
- *(crawler)* Add experimental parameters dictionary to CrawlerRunConfig to support beta features
- *(tables)* Add comprehensive table detection and extraction functionality with scoring system
- *(monitor)* Add real-time crawler monitoring system with memory management
- *(content)* Add target_elements parameter for selective content extraction
- *(browser)* Add standalone CDP browser launch capability
- *(schema)* Add preprocess_html_for_schema utility for better HTML cleaning
- *(api)* Add special handling for single URL requests in Docker API
### Changed
- *(filters)* Add reverse option to URLPatternFilter for inverting filter logic
- *(browser)* Make CSP nonce headers optional via experimental config
- *(browser)* Remove default cookie injection from page initialization
- *(crawler)* Optimize response handling for single-URL processing
- *(api)* Refactor crawl request handling to streamline processing
- *(config)* Update default provider to gpt-4o
- *(cache)* Change default cache_mode from aggressive to bypass in examples
### Fixed
- *(browser)* Clean up browser context creation code
- *(api)* Improve code formatting in API handler
### Breaking Changes
- WebScrapingStrategy no longer returns 'scraped_html' in its output dictionary
- Table extraction logic has been modified to better handle thead/tbody structures
- Default cookie injection has been removed from page initialization
## Version 0.5.0 (2025-03-02)
### Added

View File

@@ -24,7 +24,7 @@ ARG TARGETARCH
LABEL maintainer="unclecode"
LABEL description="🔥🕷️ Crawl4AI: Open-source LLM Friendly Web Crawler & scraper"
LABEL version="1.0"
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
@@ -38,7 +38,6 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
libjpeg-dev \
redis-server \
supervisor \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y --no-install-recommends \
@@ -63,13 +62,11 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
libcairo2 \
libasound2 \
libatspi2.0-0 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN if [ "$ENABLE_GPU" = "true" ] && [ "$TARGETARCH" = "amd64" ] ; then \
apt-get update && apt-get install -y --no-install-recommends \
nvidia-cuda-toolkit \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* ; \
else \
echo "Skipping NVIDIA CUDA Toolkit installation (unsupported platform or GPU disabled)"; \
@@ -79,24 +76,16 @@ RUN if [ "$TARGETARCH" = "arm64" ]; then \
echo "🦾 Installing ARM-specific optimizations"; \
apt-get update && apt-get install -y --no-install-recommends \
libopenblas-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*; \
elif [ "$TARGETARCH" = "amd64" ]; then \
echo "🖥️ Installing AMD64-specific optimizations"; \
apt-get update && apt-get install -y --no-install-recommends \
libomp-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*; \
else \
echo "Skipping platform-specific optimizations (unsupported platform)"; \
fi
# Create a non-root user and group
RUN groupadd -r appuser && useradd --no-log-init -r -g appuser appuser
# Create and set permissions for appuser home directory
RUN mkdir -p /home/appuser && chown -R appuser:appuser /home/appuser
WORKDIR ${APP_HOME}
RUN echo '#!/bin/bash\n\
@@ -114,7 +103,6 @@ fi' > /tmp/install.sh && chmod +x /tmp/install.sh
COPY . /tmp/project/
# Copy supervisor config first (might need root later, but okay for now)
COPY deploy/docker/supervisord.conf .
COPY deploy/docker/requirements.txt .
@@ -143,34 +131,16 @@ RUN if [ "$INSTALL_TYPE" = "all" ] ; then \
else \
pip install "/tmp/project" ; \
fi
RUN pip install --no-cache-dir --upgrade pip && \
/tmp/install.sh && \
python -c "import crawl4ai; print('✅ crawl4ai is ready to rock!')" && \
python -c "from playwright.sync_api import sync_playwright; print('✅ Playwright is feeling dramatic!')"
RUN playwright install --with-deps chromium
RUN crawl4ai-setup
RUN playwright install --with-deps
RUN mkdir -p /home/appuser/.cache/ms-playwright \
&& cp -r /root/.cache/ms-playwright/chromium-* /home/appuser/.cache/ms-playwright/ \
&& chown -R appuser:appuser /home/appuser/.cache/ms-playwright
RUN crawl4ai-doctor
# Copy application code
COPY deploy/docker/* ${APP_HOME}/
# copy the playground + any future static assets
COPY deploy/docker/static ${APP_HOME}/static
# Change ownership of the application directory to the non-root user
RUN chown -R appuser:appuser ${APP_HOME}
# give permissions to redis persistence dirs if used
RUN mkdir -p /var/lib/redis /var/log/redis && chown -R appuser:appuser /var/lib/redis /var/log/redis
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD bash -c '\
MEM=$(free -m | awk "/^Mem:/{print \$2}"); \
@@ -179,14 +149,8 @@ HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
exit 1; \
fi && \
redis-cli ping > /dev/null && \
curl -f http://localhost:11235/health || exit 1'
curl -f http://localhost:8000/health || exit 1'
EXPOSE 6379
# Switch to the non-root user before starting the application
USER appuser
# Set environment variables to production
ENV PYTHON_ENV=production
# Start the application using supervisord
CMD ["supervisord", "-c", "supervisord.conf"]
CMD ["supervisord", "-c", "supervisord.conf"]

View File

@@ -1,339 +0,0 @@
# Development Journal
This journal tracks significant feature additions, bug fixes, and architectural decisions in the crawl4ai project. It serves as both documentation and a historical record of the project's evolution.
## [2025-04-17] Added Content Source Selection for Markdown Generation
**Feature:** Configurable content source for markdown generation
**Changes Made:**
1. Added `content_source: str = "cleaned_html"` parameter to `MarkdownGenerationStrategy` class
2. Updated `DefaultMarkdownGenerator` to accept and pass the content source parameter
3. Renamed the `cleaned_html` parameter to `input_html` in the `generate_markdown` method
4. Modified `AsyncWebCrawler.aprocess_html` to select the appropriate HTML source based on the generator's config
5. Added `preprocess_html_for_schema` import in `async_webcrawler.py`
**Implementation Details:**
- Added a new `content_source` parameter to specify which HTML input to use for markdown generation
- Options include: "cleaned_html" (default), "raw_html", and "fit_html"
- Used a dictionary dispatch pattern in `aprocess_html` to select the appropriate HTML source (see the sketch after this list)
- Added proper error handling with fallback to cleaned_html if content source selection fails
- Ensured backward compatibility by defaulting to "cleaned_html" option
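A sketch of the dictionary-dispatch idea described above; the function name and parameters are illustrative, not the actual `aprocess_html` code:

```python
def select_html_source(content_source: str, cleaned_html: str,
                       raw_html: str, fit_html: str) -> str:
    # Map each content_source option to its HTML input (illustrative names)
    sources = {
        "cleaned_html": lambda: cleaned_html,
        "raw_html": lambda: raw_html,
        "fit_html": lambda: fit_html,
    }
    # Fall back to cleaned_html when the requested source is unknown
    return sources.get(content_source, lambda: cleaned_html)()
```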
**Files Modified:**
- `crawl4ai/markdown_generation_strategy.py`: Added content_source parameter and updated the method signature
- `crawl4ai/async_webcrawler.py`: Added HTML source selection logic and updated imports
**Examples:**
- Created `docs/examples/content_source_example.py` demonstrating how to use the new parameter
**Challenges:**
- Maintaining backward compatibility while reorganizing the parameter flow
- Ensuring proper error handling for all content source options
- Making the change with minimal code modifications
**Why This Feature:**
The content source selection feature allows users to choose which HTML content to use as input for markdown generation:
1. "cleaned_html" - Uses the post-processed HTML after scraping strategy (original behavior)
2. "raw_html" - Uses the original raw HTML directly from the web page
3. "fit_html" - Uses the preprocessed HTML optimized for schema extraction
This feature provides greater flexibility in how users generate markdown, enabling them to:
- Capture more detailed content from the original HTML when needed
- Use schema-optimized HTML when working with structured data
- Choose the approach that best suits their specific use case
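For reference, a minimal usage sketch assuming the public API described in this entry (`DefaultMarkdownGenerator(content_source=...)` passed via `CrawlerRunConfig.markdown_generator`); exact imports and result fields may vary by version:

```python
import asyncio

from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

async def main():
    # Generate markdown from the raw page HTML instead of the cleaned HTML
    config = CrawlerRunConfig(
        markdown_generator=DefaultMarkdownGenerator(content_source="raw_html"),
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=config)
        print(str(result.markdown)[:300])

asyncio.run(main())
```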
## [2025-04-17] Implemented High Volume Stress Testing Solution for SDK
**Feature:** Comprehensive stress testing framework using `arun_many` and the dispatcher system to evaluate performance, concurrency handling, and identify potential issues under high-volume crawling scenarios.
**Changes Made:**
1. Created a dedicated stress testing framework in the `benchmarking/` (or similar) directory.
2. Implemented local test site generation (`SiteGenerator`) with configurable heavy HTML pages.
3. Added basic memory usage tracking (`SimpleMemoryTracker`) using platform-specific commands (avoiding `psutil` dependency for this specific test).
4. Utilized `CrawlerMonitor` from `crawl4ai` for rich terminal UI and real-time monitoring of test progress and dispatcher activity.
5. Implemented detailed result summary saving (JSON) and memory sample logging (CSV).
6. Developed `run_benchmark.py` to orchestrate tests with predefined configurations.
7. Created `run_all.sh` as a simple wrapper for `run_benchmark.py`.
**Implementation Details:**
- Generates a local test site with configurable pages containing heavy text and image content.
- Uses Python's built-in `http.server` for local serving, minimizing network variance.
- Leverages `crawl4ai`'s `arun_many` method for processing URLs.
- Utilizes `MemoryAdaptiveDispatcher` to manage concurrency via the `max_sessions` parameter (note: memory adaptation features require `psutil`, not used by `SimpleMemoryTracker`).
- Tracks memory usage via `SimpleMemoryTracker`, recording samples throughout test execution to a CSV file (see the sketch after this list).
- Uses `CrawlerMonitor` (which uses the `rich` library) for clear terminal visualization and progress reporting directly from the dispatcher.
- Stores detailed final metrics in a JSON summary file.
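A hypothetical sketch of the psutil-free sampling approach; the real `SimpleMemoryTracker` may differ:

```python
import csv
import os
import subprocess
import time

class SimpleMemoryTracker:
    """Hypothetical psutil-free sampler; the real class may differ."""

    def __init__(self, csv_path: str):
        self.csv_path = csv_path

    def sample(self) -> int:
        # `ps -o rss=` prints the resident set size in KiB on Linux/macOS
        out = subprocess.check_output(
            ["ps", "-o", "rss=", "-p", str(os.getpid())]
        )
        rss_kib = int(out.strip())
        with open(self.csv_path, "a", newline="") as f:
            csv.writer(f).writerow([time.time(), rss_kib])
        return rss_kib
```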
**Files Created/Updated:**
- `stress_test_sdk.py`: Main stress testing implementation using `arun_many`.
- `benchmark_report.py`: (Assumed) Report generator for comparing test results.
- `run_benchmark.py`: Test runner script with predefined configurations.
- `run_all.sh`: Simple bash script wrapper for `run_benchmark.py`.
- `USAGE.md`: Comprehensive documentation on usage and interpretation (updated).
**Testing Approach:**
- Creates a controlled, reproducible test environment with a local HTTP server.
- Processes URLs using `arun_many`, allowing the dispatcher to manage concurrency up to `max_sessions`.
- Optionally logs per-batch summaries (when not in streaming mode) after processing chunks.
- Supports different test sizes via `run_benchmark.py` configurations.
- Records memory samples via platform commands for basic trend analysis.
- Includes cleanup functionality for the test environment.
**Challenges:**
- Ensuring proper cleanup of HTTP server processes.
- Getting reliable memory tracking across platforms without adding heavy dependencies (`psutil`) to this specific test script.
- Designing `run_benchmark.py` to correctly pass arguments to `stress_test_sdk.py`.
**Why This Feature:**
The high volume stress testing solution addresses critical needs for ensuring Crawl4AI's `arun_many` reliability:
1. Provides a reproducible way to evaluate performance under concurrent load.
2. Allows testing the dispatcher's concurrency control (`max_session_permit`) and queue management.
3. Enables performance tuning by observing throughput (`URLs/sec`) under different `max_sessions` settings.
4. Creates a controlled environment for testing `arun_many` behavior.
5. Supports continuous integration by providing deterministic test conditions for `arun_many`.
**Design Decisions:**
- Chose local site generation for reproducibility and isolation from network issues.
- Utilized the built-in `CrawlerMonitor` for real-time feedback, leveraging its `rich` integration.
- Implemented optional per-batch logging in `stress_test_sdk.py` (when not streaming) to provide chunk-level summaries alongside the continuous monitor.
- Adopted `arun_many` with a `MemoryAdaptiveDispatcher` as the core mechanism for parallel execution, reflecting the intended SDK usage.
- Created `run_benchmark.py` to simplify running standard test configurations.
- Used `SimpleMemoryTracker` to provide basic memory insights without requiring `psutil` for this particular test runner.
**Future Enhancements to Consider:**
- Create a separate test variant that *does* use `psutil` to specifically stress the memory-adaptive features of the dispatcher.
- Add support for generated JavaScript content.
- Add support for Docker-based testing with explicit memory limits.
- Enhance `benchmark_report.py` to provide more sophisticated analysis of performance and memory trends from the generated JSON/CSV files.
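For context, a minimal sketch of the `arun_many` + dispatcher pattern this test exercises (names follow this entry; check your `crawl4ai` version for exact signatures):

```python
import asyncio

from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, MemoryAdaptiveDispatcher

async def main():
    urls = [f"http://localhost:8000/page_{i}.html" for i in range(100)]
    # max_session_permit caps concurrent crawler sessions
    dispatcher = MemoryAdaptiveDispatcher(max_session_permit=20)
    async with AsyncWebCrawler() as crawler:
        results = await crawler.arun_many(
            urls,
            config=CrawlerRunConfig(stream=False),  # batch mode
            dispatcher=dispatcher,
        )
        print(f"{sum(r.success for r in results)}/{len(results)} succeeded")

asyncio.run(main())
```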
---
## [2025-04-17] Refined Stress Testing System Parameters and Execution
**Changes Made:**
1. Corrected `run_benchmark.py` and `stress_test_sdk.py` to use `--max-sessions` instead of the incorrect `--workers` parameter, accurately reflecting dispatcher configuration.
2. Updated `run_benchmark.py` argument handling to correctly pass all relevant custom parameters (including `--stream`, `--monitor-mode`, etc.) to `stress_test_sdk.py`.
3. (Assuming changes in `benchmark_report.py`) Applied dark theme to benchmark reports for better readability.
4. (Assuming changes in `benchmark_report.py`) Improved visualization code to eliminate matplotlib warnings.
5. Updated `run_benchmark.py` to provide clickable `file://` links to generated reports in the terminal output.
6. Updated `USAGE.md` with comprehensive parameter descriptions reflecting the final script arguments.
7. Updated `run_all.sh` wrapper to correctly invoke `run_benchmark.py` with flexible arguments.
**Details of Changes:**
1. **Parameter Correction (`--max-sessions`)**:
* Identified the fundamental misunderstanding where `--workers` was used incorrectly.
* Refactored `stress_test_sdk.py` to accept `--max-sessions` and configure the `MemoryAdaptiveDispatcher`'s `max_session_permit` accordingly.
* Updated `run_benchmark.py` argument parsing and command construction to use `--max-sessions`.
* Updated `TEST_CONFIGS` in `run_benchmark.py` to use `max_sessions`.
2. **Argument Handling (`run_benchmark.py`)** (see the sketch after this list):
* Improved logic to collect all command-line arguments provided to `run_benchmark.py`.
* Ensured all relevant arguments (like `--stream`, `--monitor-mode`, `--port`, `--use-rate-limiter`, etc.) are correctly forwarded when calling `stress_test_sdk.py` as a subprocess.
3. **Dark Theme & Visualization Fixes (Assumed in `benchmark_report.py`)**:
* (Describes changes assumed to be made in the separate reporting script).
4. **Clickable Links (`run_benchmark.py`)**:
* Added logic to find the latest HTML report and PNG chart in the `benchmark_reports` directory after `benchmark_report.py` runs.
* Used `pathlib` to generate correct `file://` URLs for terminal output.
5. **Documentation Improvements (`USAGE.md`)**:
* Rewrote sections to explain `arun_many`, dispatchers, and `--max-sessions`.
* Updated parameter tables for all scripts (`stress_test_sdk.py`, `run_benchmark.py`).
* Clarified the difference between batch and streaming modes and their effect on logging.
* Updated examples to use correct arguments.
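A hypothetical sketch of the pass-through argument handling from item 2; the `--urls` flag and exact structure are illustrative, not the real `run_benchmark.py`:

```python
import argparse
import subprocess
import sys

parser = argparse.ArgumentParser()
parser.add_argument("--max-sessions", type=int, default=20)
parser.add_argument("--urls", type=int, default=100)  # illustrative flag
known, passthrough = parser.parse_known_args()

cmd = [
    sys.executable, "stress_test_sdk.py",
    "--max-sessions", str(known.max_sessions),
    "--urls", str(known.urls),
    *passthrough,  # forward --stream, --monitor-mode, --port, ... unchanged
]
subprocess.run(cmd, check=True)
```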
**Files Modified:**
- `stress_test_sdk.py`: Changed `--workers` to `--max-sessions`, added new arguments, used `arun_many`.
- `run_benchmark.py`: Changed argument handling, updated configs, calls `stress_test_sdk.py`.
- `run_all.sh`: Updated to call `run_benchmark.py` correctly.
- `USAGE.md`: Updated documentation extensively.
- `benchmark_report.py`: (Assumed modifications for dark theme and viz fixes).
**Testing:**
- Verified that `--max-sessions` correctly limits concurrency via the `CrawlerMonitor` output.
- Confirmed that custom arguments passed to `run_benchmark.py` are forwarded to `stress_test_sdk.py`.
- Validated clickable links work in supporting terminals.
- Ensured documentation matches the final script parameters and behavior.
**Why These Changes:**
These refinements correct the fundamental approach of the stress test to align with `crawl4ai`'s actual architecture and intended usage:
1. Ensures the test evaluates the correct components (`arun_many`, `MemoryAdaptiveDispatcher`).
2. Makes test configurations more accurate and flexible.
3. Improves the usability of the testing framework through better argument handling and documentation.
**Future Enhancements to Consider:**
- Add support for generated JavaScript content to test JS rendering performance
- Implement more sophisticated memory analysis like generational garbage collection tracking
- Add support for Docker-based testing with memory limits to force OOM conditions
- Create visualization tools for analyzing memory usage patterns across test runs
- Add benchmark comparisons between different crawler versions or configurations
## [2025-04-17] Fixed Issues in Stress Testing System
**Changes Made:**
1. Fixed custom parameter handling in run_benchmark.py
2. Applied dark theme to benchmark reports for better readability
3. Improved visualization code to eliminate matplotlib warnings
4. Added clickable links to generated reports in terminal output
5. Enhanced documentation with comprehensive parameter descriptions
**Details of Changes:**
1. **Custom Parameter Handling Fix**
- Identified bug where custom URL count was being ignored in run_benchmark.py
- Rewrote argument handling to use a custom args dictionary
- Properly passed parameters to the test_simple_stress.py command
- Added better UI indication of custom parameters in use
2. **Dark Theme Implementation**
- Added complete dark theme to HTML benchmark reports
- Applied dark styling to all visualization components
- Used Nord-inspired color palette for charts and graphs
- Improved contrast and readability for data visualization
- Updated text colors and backgrounds for better eye comfort
3. **Matplotlib Warning Fixes** (see the sketch after this list)
- Resolved warnings related to improper use of set_xticklabels()
- Implemented correct x-axis positioning for bar charts
- Ensured proper alignment of bar labels and data points
- Updated plotting code to use modern matplotlib practices
4. **Documentation Improvements**
- Created comprehensive USAGE.md with detailed instructions
- Added parameter documentation for all scripts
- Included examples for all common use cases
- Provided detailed explanations for interpreting results
- Added troubleshooting guide for common issues
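A small sketch of the corrected matplotlib pattern from item 3, with illustrative data: setting tick positions before tick labels avoids the FixedLocator warning.

```python
import matplotlib.pyplot as plt

labels = ["50 URLs", "200 URLs", "1000 URLs"]   # illustrative data
throughput = [12.4, 11.8, 10.9]

fig, ax = plt.subplots()
positions = range(len(labels))
ax.bar(positions, throughput)
ax.set_xticks(list(positions))   # fix the tick positions first...
ax.set_xticklabels(labels)       # ...then setting labels raises no warning
ax.set_ylabel("URLs/sec")
fig.savefig("throughput.png")
```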
**Files Modified:**
- `tests/memory/run_benchmark.py`: Fixed custom parameter handling
- `tests/memory/benchmark_report.py`: Added dark theme and fixed visualization warnings
- `tests/memory/run_all.sh`: Added clickable links to reports
- `tests/memory/USAGE.md`: Created comprehensive documentation
**Testing:**
- Verified that custom URL counts are now correctly used
- Confirmed dark theme is properly applied to all report elements
- Checked that matplotlib warnings are no longer appearing
- Validated clickable links to reports work in terminals that support them
**Why These Changes:**
These improvements address several usability issues with the stress testing system:
1. Better parameter handling ensures test configurations work as expected
2. Dark theme reduces eye strain during extended test review sessions
3. Fixing visualization warnings improves code quality and output clarity
4. Enhanced documentation makes the system more accessible for future use
**Future Enhancements:**
- Add additional visualization options for different types of analysis
- Implement theme toggle to support both light and dark preferences
- Add export options for embedding reports in other documentation
- Create dedicated CI/CD integration templates for automated testing
## [2025-04-09] Added MHTML Capture Feature
**Feature:** MHTML snapshot capture of crawled pages
**Changes Made:**
1. Added `capture_mhtml: bool = False` parameter to `CrawlerRunConfig` class
2. Added `mhtml: Optional[str] = None` field to `CrawlResult` model
3. Added `mhtml_data: Optional[str] = None` field to `AsyncCrawlResponse` class
4. Implemented `capture_mhtml()` method in `AsyncPlaywrightCrawlerStrategy` class to capture MHTML via CDP
5. Modified the crawler to capture MHTML when enabled and pass it to the result
**Implementation Details:**
- MHTML capture uses Chrome DevTools Protocol (CDP) via Playwright's CDP session API
- The implementation waits for the page to fully load before capturing the MHTML content
- Waits on JavaScript-rendered content via requestAnimationFrame for more complete capture
- Ensures all browser resources are properly cleaned up after capture
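A hedged sketch of the capture flow described above, using plain Playwright; `Page.captureSnapshot` is the real CDP command, but the actual crawl4ai integration differs:

```python
import asyncio

from playwright.async_api import async_playwright

async def capture_mhtml(url: str) -> str:
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        await page.goto(url, wait_until="networkidle")
        # Open a CDP session and ask Chromium for an MHTML snapshot
        cdp = await page.context.new_cdp_session(page)
        snapshot = await cdp.send("Page.captureSnapshot", {"format": "mhtml"})
        await browser.close()
        return snapshot["data"]

mhtml = asyncio.run(capture_mhtml("https://example.com"))
print(f"Captured {len(mhtml)} characters of MHTML")
```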
**Files Modified:**
- `crawl4ai/models.py`: Added the mhtml field to CrawlResult
- `crawl4ai/async_configs.py`: Added capture_mhtml parameter to CrawlerRunConfig
- `crawl4ai/async_crawler_strategy.py`: Implemented MHTML capture logic
- `crawl4ai/async_webcrawler.py`: Added mapping from AsyncCrawlResponse.mhtml_data to CrawlResult.mhtml
**Testing:**
- Created comprehensive tests in `tests/20241401/test_mhtml.py` covering:
- Capturing MHTML when enabled
- Ensuring mhtml is None when disabled explicitly
- Ensuring mhtml is None by default
- Capturing MHTML on JavaScript-enabled pages
**Challenges:**
- Had to improve page loading detection to ensure JavaScript content was fully rendered
- Tests needed to be run independently due to Playwright browser instance management
- Modified test expected content to match actual MHTML output
**Why This Feature:**
The MHTML capture feature allows users to capture complete web pages including all resources (CSS, images, etc.) in a single file. This is valuable for:
1. Offline viewing of captured pages
2. Creating permanent snapshots of web content for archival
3. Ensuring consistent content for later analysis, even if the original site changes
**Future Enhancements to Consider:**
- Add option to save MHTML to file
- Support for filtering what resources get included in MHTML
- Add support for specifying MHTML capture options
## [2025-04-10] Added Network Request and Console Message Capturing
**Feature:** Comprehensive capturing of network requests/responses and browser console messages during crawling
**Changes Made:**
1. Added `capture_network_requests: bool = False` and `capture_console_messages: bool = False` parameters to `CrawlerRunConfig` class
2. Added `network_requests: Optional[List[Dict[str, Any]]] = None` and `console_messages: Optional[List[Dict[str, Any]]] = None` fields to both `AsyncCrawlResponse` and `CrawlResult` models
3. Implemented event listeners in `AsyncPlaywrightCrawlerStrategy._crawl_web()` to capture browser network events and console messages
4. Added proper event listener cleanup in the finally block to prevent resource leaks
5. Modified the crawler flow to pass captured data from AsyncCrawlResponse to CrawlResult
**Implementation Details:**
- Network capture uses Playwright event listeners (`request`, `response`, and `requestfailed`) to record all network activity
- Console capture uses Playwright event listeners (`console` and `pageerror`) to record console messages and errors
- Each network event includes metadata like URL, headers, status, and timing information
- Each console message includes type, text content, and source location when available
- All captured events include timestamps for chronological analysis
- Error handling ensures even failed capture attempts won't crash the main crawling process
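A minimal standalone sketch of the event-listener pattern described above; the listener names are Playwright's, while the captured dictionaries mirror only a subset of the fields listed in this entry:

```python
import asyncio
import time

from playwright.async_api import async_playwright

async def main():
    requests, console = [], []
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        # Record a subset of the request/console fields described above
        page.on("request", lambda r: requests.append(
            {"event_type": "request", "url": r.url,
             "method": r.method, "timestamp": time.time()}))
        page.on("console", lambda m: console.append(
            {"type": m.type, "text": m.text, "timestamp": time.time()}))
        await page.goto("https://example.com")
        await browser.close()
    print(len(requests), "network events,", len(console), "console messages")

asyncio.run(main())
```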
**Files Modified:**
- `crawl4ai/models.py`: Added new fields to AsyncCrawlResponse and CrawlResult
- `crawl4ai/async_configs.py`: Added new configuration parameters to CrawlerRunConfig
- `crawl4ai/async_crawler_strategy.py`: Implemented capture logic using event listeners
- `crawl4ai/async_webcrawler.py`: Added data transfer from AsyncCrawlResponse to CrawlResult
**Documentation:**
- Created detailed documentation in `docs/md_v2/advanced/network-console-capture.md`
- Added feature to site navigation in `mkdocs.yml`
- Updated CrawlResult documentation in `docs/md_v2/api/crawl-result.md`
- Created comprehensive example in `docs/examples/network_console_capture_example.py`
**Testing:**
- Created `tests/general/test_network_console_capture.py` with tests for:
- Verifying capture is disabled by default
- Testing network request capturing
- Testing console message capturing
- Ensuring both capture types can be enabled simultaneously
- Checking correct content is captured in expected formats
**Challenges:**
- Initial implementation had synchronous/asynchronous mismatches in event handlers
- Needed to fix confusion between property access and method calls in the handlers
- Required careful cleanup of event listeners to prevent memory leaks
**Why This Feature:**
The network and console capture feature provides deep visibility into web page activity, enabling:
1. Debugging complex web applications by seeing all network requests and errors
2. Security analysis to detect unexpected third-party requests and data flows
3. Performance profiling to identify slow-loading resources
4. API discovery in single-page applications
5. Comprehensive analysis of web application behavior
**Future Enhancements to Consider:**
- Option to filter captured events by type, domain, or content
- Support for capturing response bodies (with size limits)
- Aggregate statistics calculation for performance metrics
- Integration with visualization tools for network waterfall analysis
- Exporting captures in HAR format for use with external tools

View File

@@ -2,7 +2,7 @@
import warnings
from .async_webcrawler import AsyncWebCrawler, CacheMode
from .async_configs import BrowserConfig, CrawlerRunConfig, HTTPCrawlerConfig, LLMConfig, ProxyConfig, GeolocationConfig
from .async_configs import BrowserConfig, CrawlerRunConfig, HTTPCrawlerConfig, LLMConfig
from .content_scraping_strategy import (
ContentScrapingStrategy,
@@ -33,12 +33,13 @@ from .content_filter_strategy import (
LLMContentFilter,
RelevantContentFilter,
)
from .models import CrawlResult, MarkdownGenerationResult, DisplayMode
from .components.crawler_monitor import CrawlerMonitor
from .models import CrawlResult, MarkdownGenerationResult
from .async_dispatcher import (
MemoryAdaptiveDispatcher,
SemaphoreDispatcher,
RateLimiter,
CrawlerMonitor,
DisplayMode,
BaseDispatcher,
)
from .docker_client import Crawl4aiDockerClient
@@ -71,7 +72,6 @@ __all__ = [
"AsyncWebCrawler",
"BrowserProfiler",
"LLMConfig",
"GeolocationConfig",
"DeepCrawlStrategy",
"BFSDeepCrawlStrategy",
"BestFirstCrawlingStrategy",
@@ -122,7 +122,6 @@ __all__ = [
"Crawl4aiDockerClient",
"ProxyRotationStrategy",
"RoundRobinProxyStrategy",
"ProxyConfig"
]

View File

@@ -1,2 +1,2 @@
# crawl4ai/_version.py
__version__ = "0.5.0.post8"
__version__ = "0.5.0.post4"

View File

@@ -1,11 +1,9 @@
import os
from .config import (
DEFAULT_PROVIDER,
DEFAULT_PROVIDER_API_KEY,
MIN_WORD_THRESHOLD,
IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD,
PROVIDER_MODELS,
PROVIDER_MODELS_PREFIXES,
SCREENSHOT_HEIGHT_TRESHOLD,
PAGE_TIMEOUT,
IMAGE_SCORE_THRESHOLD,
@@ -16,7 +14,7 @@ from .user_agent_generator import UAGen, ValidUAGenerator # , OnlineUAGenerator
from .extraction_strategy import ExtractionStrategy, LLMExtractionStrategy
from .chunking_strategy import ChunkingStrategy, RegexChunking
from .markdown_generation_strategy import MarkdownGenerationStrategy, DefaultMarkdownGenerator
from .markdown_generation_strategy import MarkdownGenerationStrategy
from .content_scraping_strategy import ContentScrapingStrategy, WebScrapingStrategy
from .deep_crawling import DeepCrawlStrategy
@@ -28,8 +26,7 @@ import inspect
from typing import Any, Dict, Optional
from enum import Enum
# from .proxy_strategy import ProxyConfig
from .proxy_strategy import ProxyConfig
def to_serializable_dict(obj: Any, ignore_default_value : bool = False) -> Dict:
@@ -120,25 +117,23 @@ def from_serializable_dict(data: Any) -> Any:
# Handle typed data
if isinstance(data, dict) and "type" in data:
# Handle plain dictionaries
if data["type"] == "dict" and "value" in data:
if data["type"] == "dict":
return {k: from_serializable_dict(v) for k, v in data["value"].items()}
# Import from crawl4ai for class instances
import crawl4ai
if hasattr(crawl4ai, data["type"]):
cls = getattr(crawl4ai, data["type"])
# Handle Enum
if issubclass(cls, Enum):
return cls(data["params"])
if "params" in data:
# Handle class instances
constructor_args = {
k: from_serializable_dict(v) for k, v in data["params"].items()
}
return cls(**constructor_args)
# Handle lists
if isinstance(data, list):
@@ -159,166 +154,6 @@ def is_empty_value(value: Any) -> bool:
return True
return False
class GeolocationConfig:
def __init__(
self,
latitude: float,
longitude: float,
accuracy: Optional[float] = 0.0
):
"""Configuration class for geolocation settings.
Args:
latitude: Latitude coordinate (e.g., 37.7749)
longitude: Longitude coordinate (e.g., -122.4194)
accuracy: Accuracy in meters. Default: 0.0
"""
self.latitude = latitude
self.longitude = longitude
self.accuracy = accuracy
@staticmethod
def from_dict(geo_dict: Dict) -> "GeolocationConfig":
"""Create a GeolocationConfig from a dictionary."""
return GeolocationConfig(
latitude=geo_dict.get("latitude"),
longitude=geo_dict.get("longitude"),
accuracy=geo_dict.get("accuracy", 0.0)
)
def to_dict(self) -> Dict:
"""Convert to dictionary representation."""
return {
"latitude": self.latitude,
"longitude": self.longitude,
"accuracy": self.accuracy
}
def clone(self, **kwargs) -> "GeolocationConfig":
"""Create a copy of this configuration with updated values.
Args:
**kwargs: Key-value pairs of configuration options to update
Returns:
GeolocationConfig: A new instance with the specified updates
"""
config_dict = self.to_dict()
config_dict.update(kwargs)
return GeolocationConfig.from_dict(config_dict)
class ProxyConfig:
def __init__(
self,
server: str,
username: Optional[str] = None,
password: Optional[str] = None,
ip: Optional[str] = None,
):
"""Configuration class for a single proxy.
Args:
server: Proxy server URL (e.g., "http://127.0.0.1:8080")
username: Optional username for proxy authentication
password: Optional password for proxy authentication
ip: Optional IP address for verification purposes
"""
self.server = server
self.username = username
self.password = password
# Extract IP from server if not explicitly provided
self.ip = ip or self._extract_ip_from_server()
def _extract_ip_from_server(self) -> Optional[str]:
"""Extract IP address from server URL."""
try:
# Simple extraction assuming http://ip:port format
if "://" in self.server:
parts = self.server.split("://")[1].split(":")
return parts[0]
else:
parts = self.server.split(":")
return parts[0]
except Exception:
return None
@staticmethod
def from_string(proxy_str: str) -> "ProxyConfig":
"""Create a ProxyConfig from a string in the format 'ip:port:username:password'."""
parts = proxy_str.split(":")
if len(parts) == 4: # ip:port:username:password
ip, port, username, password = parts
return ProxyConfig(
server=f"http://{ip}:{port}",
username=username,
password=password,
ip=ip
)
elif len(parts) == 2: # ip:port only
ip, port = parts
return ProxyConfig(
server=f"http://{ip}:{port}",
ip=ip
)
else:
raise ValueError(f"Invalid proxy string format: {proxy_str}")
@staticmethod
def from_dict(proxy_dict: Dict) -> "ProxyConfig":
"""Create a ProxyConfig from a dictionary."""
return ProxyConfig(
server=proxy_dict.get("server"),
username=proxy_dict.get("username"),
password=proxy_dict.get("password"),
ip=proxy_dict.get("ip")
)
@staticmethod
def from_env(env_var: str = "PROXIES") -> List["ProxyConfig"]:
"""Load proxies from environment variable.
Args:
env_var: Name of environment variable containing comma-separated proxy strings
Returns:
List of ProxyConfig objects
"""
proxies = []
try:
proxy_list = os.getenv(env_var, "").split(",")
for proxy in proxy_list:
if not proxy:
continue
proxies.append(ProxyConfig.from_string(proxy))
except Exception as e:
print(f"Error loading proxies from environment: {e}")
return proxies
def to_dict(self) -> Dict:
"""Convert to dictionary representation."""
return {
"server": self.server,
"username": self.username,
"password": self.password,
"ip": self.ip
}
def clone(self, **kwargs) -> "ProxyConfig":
"""Create a copy of this configuration with updated values.
Args:
**kwargs: Key-value pairs of configuration options to update
Returns:
ProxyConfig: A new instance with the specified updates
"""
config_dict = self.to_dict()
config_dict.update(kwargs)
return ProxyConfig.from_dict(config_dict)
class BrowserConfig:
"""
@@ -333,12 +168,6 @@ class BrowserConfig:
Default: "chromium".
headless (bool): Whether to run the browser in headless mode (no visible GUI).
Default: True.
browser_mode (str): Determines how the browser should be initialized:
"builtin" - use the builtin CDP browser running in background
"dedicated" - create a new dedicated browser instance each time
"cdp" - use explicit CDP settings provided in cdp_url
"docker" - run browser in Docker container with isolation
Default: "dedicated"
use_managed_browser (bool): Launch the browser using a managed approach (e.g., via CDP), allowing
advanced manipulation. Default: False.
cdp_url (str): URL for the Chrome DevTools Protocol (CDP) endpoint. Default: "ws://localhost:9222/devtools/browser/".
@@ -365,7 +194,7 @@ class BrowserConfig:
Default: False.
downloads_path (str or None): Directory to store downloaded files. If None and accept_downloads is True,
a default path will be created. Default: None.
storage_state (str or dict or None): An in-memory storage state (cookies, localStorage).
storage_state (str or dict or None): Path or object describing storage state (cookies, localStorage).
Default: None.
ignore_https_errors (bool): Ignore HTTPS certificate errors. Default: True.
java_script_enabled (bool): Enable JavaScript execution in pages. Default: True.
@@ -391,7 +220,6 @@ class BrowserConfig:
self,
browser_type: str = "chromium",
headless: bool = True,
browser_mode: str = "dedicated",
use_managed_browser: bool = False,
cdp_url: str = None,
use_persistent_context: bool = False,
@@ -427,8 +255,7 @@ class BrowserConfig:
host: str = "localhost",
):
self.browser_type = browser_type
self.headless = headless or True
self.browser_mode = browser_mode
self.headless = headless
self.use_managed_browser = use_managed_browser
self.cdp_url = cdp_url
self.use_persistent_context = use_persistent_context
@@ -440,8 +267,6 @@ class BrowserConfig:
self.chrome_channel = ""
self.proxy = proxy
self.proxy_config = proxy_config
self.viewport_width = viewport_width
self.viewport_height = viewport_height
self.viewport = viewport
@@ -464,7 +289,6 @@ class BrowserConfig:
self.sleep_on_close = sleep_on_close
self.verbose = verbose
self.debugging_port = debugging_port
self.host = host
fa_user_agenr_generator = ValidUAGenerator()
if self.user_agent_mode == "random":
@@ -477,22 +301,6 @@ class BrowserConfig:
self.browser_hint = UAGen.generate_client_hints(self.user_agent)
self.headers.setdefault("sec-ch-ua", self.browser_hint)
# Set appropriate browser management flags based on browser_mode
if self.browser_mode == "builtin":
# Builtin mode uses managed browser connecting to builtin CDP endpoint
self.use_managed_browser = True
# cdp_url will be set later by browser_manager
elif self.browser_mode == "docker":
# Docker mode uses managed browser with CDP to connect to browser in container
self.use_managed_browser = True
# cdp_url will be set later by docker browser strategy
elif self.browser_mode == "custom" and self.cdp_url:
# Custom mode with explicit CDP URL
self.use_managed_browser = True
elif self.browser_mode == "dedicated":
# Dedicated mode uses a new browser instance each time
pass
# If persistent context is requested, ensure managed browser is enabled
if self.use_persistent_context:
self.use_managed_browser = True
@@ -502,7 +310,6 @@ class BrowserConfig:
return BrowserConfig(
browser_type=kwargs.get("browser_type", "chromium"),
headless=kwargs.get("headless", True),
browser_mode=kwargs.get("browser_mode", "dedicated"),
use_managed_browser=kwargs.get("use_managed_browser", False),
cdp_url=kwargs.get("cdp_url"),
use_persistent_context=kwargs.get("use_persistent_context", False),
@@ -530,15 +337,12 @@ class BrowserConfig:
text_mode=kwargs.get("text_mode", False),
light_mode=kwargs.get("light_mode", False),
extra_args=kwargs.get("extra_args", []),
debugging_port=kwargs.get("debugging_port", 9222),
host=kwargs.get("host", "localhost"),
)
def to_dict(self):
result = {
return {
"browser_type": self.browser_type,
"headless": self.headless,
"browser_mode": self.browser_mode,
"use_managed_browser": self.use_managed_browser,
"cdp_url": self.cdp_url,
"use_persistent_context": self.use_persistent_context,
@@ -565,12 +369,8 @@ class BrowserConfig:
"sleep_on_close": self.sleep_on_close,
"verbose": self.verbose,
"debugging_port": self.debugging_port,
"host": self.host,
}
return result
def clone(self, **kwargs):
"""Create a copy of this configuration with updated values.
@@ -729,14 +529,6 @@ class CrawlerRunConfig():
proxy_config (ProxyConfig or dict or None): Detailed proxy configuration, e.g. {"server": "...", "username": "..."}.
If None, no additional proxy config. Default: None.
# Browser Location and Identity Parameters
locale (str or None): Locale to use for the browser context (e.g., "en-US").
Default: None.
timezone_id (str or None): Timezone identifier to use for the browser context (e.g., "America/New_York").
Default: None.
geolocation (GeolocationConfig or None): Geolocation configuration for the browser.
Default: None.
# SSL Parameters
fetch_ssl_certificate: bool = False,
# Caching Parameters
@@ -857,12 +649,6 @@ class CrawlerRunConfig():
user_agent_generator_config (dict or None): Configuration for user agent generation if user_agent_mode is set.
Default: None.
# Experimental Parameters
experimental (dict): Dictionary containing experimental parameters that are in beta phase.
This allows passing temporary features that are not yet fully integrated
into the main parameter set.
Default: None.
url: str = None # This is not a compulsory parameter
"""
@@ -872,7 +658,7 @@ class CrawlerRunConfig():
word_count_threshold: int = MIN_WORD_THRESHOLD,
extraction_strategy: ExtractionStrategy = None,
chunking_strategy: ChunkingStrategy = RegexChunking(),
markdown_generator: MarkdownGenerationStrategy = DefaultMarkdownGenerator(),
markdown_generator: MarkdownGenerationStrategy = None,
only_text: bool = False,
css_selector: str = None,
target_elements: List[str] = None,
@@ -886,10 +672,6 @@ class CrawlerRunConfig():
scraping_strategy: ContentScrapingStrategy = None,
proxy_config: Union[ProxyConfig, dict, None] = None,
proxy_rotation_strategy: Optional[ProxyRotationStrategy] = None,
# Browser Location and Identity Parameters
locale: Optional[str] = None,
timezone_id: Optional[str] = None,
geolocation: Optional[GeolocationConfig] = None,
# SSL Parameters
fetch_ssl_certificate: bool = False,
# Caching Parameters
@@ -926,12 +708,10 @@ class CrawlerRunConfig():
screenshot_wait_for: float = None,
screenshot_height_threshold: int = SCREENSHOT_HEIGHT_TRESHOLD,
pdf: bool = False,
capture_mhtml: bool = False,
image_description_min_word_threshold: int = IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD,
image_score_threshold: int = IMAGE_SCORE_THRESHOLD,
table_score_threshold: int = 7,
exclude_external_images: bool = False,
exclude_all_images: bool = False,
# Link and Domain Handling Parameters
exclude_social_media_domains: list = None,
exclude_external_links: bool = False,
@@ -941,9 +721,6 @@ class CrawlerRunConfig():
# Debugging and Logging Parameters
verbose: bool = True,
log_console: bool = False,
# Network and Console Capturing Parameters
capture_network_requests: bool = False,
capture_console_messages: bool = False,
# Connection Parameters
method: str = "GET",
stream: bool = False,
@@ -954,8 +731,6 @@ class CrawlerRunConfig():
user_agent_generator_config: dict = {},
# Deep Crawl Parameters
deep_crawl_strategy: Optional[DeepCrawlStrategy] = None,
# Experimental Parameters
experimental: Dict[str, Any] = None,
):
# TODO: Planning to set properties dynamically based on the __init__ signature
self.url = url
@@ -978,11 +753,6 @@ class CrawlerRunConfig():
self.scraping_strategy = scraping_strategy or WebScrapingStrategy()
self.proxy_config = proxy_config
self.proxy_rotation_strategy = proxy_rotation_strategy
# Browser Location and Identity Parameters
self.locale = locale
self.timezone_id = timezone_id
self.geolocation = geolocation
# SSL Parameters
self.fetch_ssl_certificate = fetch_ssl_certificate
@@ -1024,11 +794,9 @@ class CrawlerRunConfig():
self.screenshot_wait_for = screenshot_wait_for
self.screenshot_height_threshold = screenshot_height_threshold
self.pdf = pdf
self.capture_mhtml = capture_mhtml
self.image_description_min_word_threshold = image_description_min_word_threshold
self.image_score_threshold = image_score_threshold
self.exclude_external_images = exclude_external_images
self.exclude_all_images = exclude_all_images
self.table_score_threshold = table_score_threshold
# Link and Domain Handling Parameters
@@ -1043,10 +811,6 @@ class CrawlerRunConfig():
# Debugging and Logging Parameters
self.verbose = verbose
self.log_console = log_console
# Network and Console Capturing Parameters
self.capture_network_requests = capture_network_requests
self.capture_console_messages = capture_console_messages
# Connection Parameters
self.stream = stream
@@ -1080,9 +844,6 @@ class CrawlerRunConfig():
# Deep Crawl Parameters
self.deep_crawl_strategy = deep_crawl_strategy
# Experimental Parameters
self.experimental = experimental or {}
def __getattr__(self, name):
@@ -1123,10 +884,6 @@ class CrawlerRunConfig():
scraping_strategy=kwargs.get("scraping_strategy"),
proxy_config=kwargs.get("proxy_config"),
proxy_rotation_strategy=kwargs.get("proxy_rotation_strategy"),
# Browser Location and Identity Parameters
locale=kwargs.get("locale", None),
timezone_id=kwargs.get("timezone_id", None),
geolocation=kwargs.get("geolocation", None),
# SSL Parameters
fetch_ssl_certificate=kwargs.get("fetch_ssl_certificate", False),
# Caching Parameters
@@ -1165,7 +922,6 @@ class CrawlerRunConfig():
"screenshot_height_threshold", SCREENSHOT_HEIGHT_TRESHOLD
),
pdf=kwargs.get("pdf", False),
capture_mhtml=kwargs.get("capture_mhtml", False),
image_description_min_word_threshold=kwargs.get(
"image_description_min_word_threshold",
IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD,
@@ -1174,7 +930,6 @@ class CrawlerRunConfig():
"image_score_threshold", IMAGE_SCORE_THRESHOLD
),
table_score_threshold=kwargs.get("table_score_threshold", 7),
exclude_all_images=kwargs.get("exclude_all_images", False),
exclude_external_images=kwargs.get("exclude_external_images", False),
# Link and Domain Handling Parameters
exclude_social_media_domains=kwargs.get(
@@ -1187,9 +942,6 @@ class CrawlerRunConfig():
# Debugging and Logging Parameters
verbose=kwargs.get("verbose", True),
log_console=kwargs.get("log_console", False),
# Network and Console Capturing Parameters
capture_network_requests=kwargs.get("capture_network_requests", False),
capture_console_messages=kwargs.get("capture_console_messages", False),
# Connection Parameters
method=kwargs.get("method", "GET"),
stream=kwargs.get("stream", False),
@@ -1200,8 +952,6 @@ class CrawlerRunConfig():
# Deep Crawl Parameters
deep_crawl_strategy=kwargs.get("deep_crawl_strategy"),
url=kwargs.get("url"),
# Experimental Parameters
experimental=kwargs.get("experimental"),
)
# Create a function that returns a dict of the object
@@ -1236,9 +986,6 @@ class CrawlerRunConfig():
"scraping_strategy": self.scraping_strategy,
"proxy_config": self.proxy_config,
"proxy_rotation_strategy": self.proxy_rotation_strategy,
"locale": self.locale,
"timezone_id": self.timezone_id,
"geolocation": self.geolocation,
"fetch_ssl_certificate": self.fetch_ssl_certificate,
"cache_mode": self.cache_mode,
"session_id": self.session_id,
@@ -1270,11 +1017,9 @@ class CrawlerRunConfig():
"screenshot_wait_for": self.screenshot_wait_for,
"screenshot_height_threshold": self.screenshot_height_threshold,
"pdf": self.pdf,
"capture_mhtml": self.capture_mhtml,
"image_description_min_word_threshold": self.image_description_min_word_threshold,
"image_score_threshold": self.image_score_threshold,
"table_score_threshold": self.table_score_threshold,
"exclude_all_images": self.exclude_all_images,
"exclude_external_images": self.exclude_external_images,
"exclude_social_media_domains": self.exclude_social_media_domains,
"exclude_external_links": self.exclude_external_links,
@@ -1283,8 +1028,6 @@ class CrawlerRunConfig():
"exclude_internal_links": self.exclude_internal_links,
"verbose": self.verbose,
"log_console": self.log_console,
"capture_network_requests": self.capture_network_requests,
"capture_console_messages": self.capture_console_messages,
"method": self.method,
"stream": self.stream,
"check_robots_txt": self.check_robots_txt,
@@ -1293,7 +1036,6 @@ class CrawlerRunConfig():
"user_agent_generator_config": self.user_agent_generator_config,
"deep_crawl_strategy": self.deep_crawl_strategy,
"url": self.url,
"experimental": self.experimental,
}
def clone(self, **kwargs):
@@ -1329,13 +1071,6 @@ class LLMConfig:
provider: str = DEFAULT_PROVIDER,
api_token: Optional[str] = None,
base_url: Optional[str] = None,
temprature: Optional[float] = None,
max_tokens: Optional[int] = None,
top_p: Optional[float] = None,
frequency_penalty: Optional[float] = None,
presence_penalty: Optional[float] = None,
stop: Optional[List[str]] = None,
n: Optional[int] = None,
):
"""Configuaration class for LLM provider and API token."""
self.provider = provider
@@ -1344,26 +1079,11 @@ class LLMConfig:
elif api_token and api_token.startswith("env:"):
self.api_token = os.getenv(api_token[4:])
else:
# Check if the given provider starts with any key in PROVIDER_MODELS_PREFIXES;
# if not, check whether it is in PROVIDER_MODELS
prefixes = PROVIDER_MODELS_PREFIXES.keys()
if any(provider.startswith(prefix) for prefix in prefixes):
selected_prefix = next(
(prefix for prefix in prefixes if provider.startswith(prefix)),
None,
)
self.api_token = PROVIDER_MODELS_PREFIXES.get(selected_prefix)
else:
self.provider = DEFAULT_PROVIDER
self.api_token = os.getenv(DEFAULT_PROVIDER_API_KEY)
self.api_token = PROVIDER_MODELS.get(provider, "no-token") or os.getenv(
"OPENAI_API_KEY"
)
self.base_url = base_url
self.temprature = temprature
self.max_tokens = max_tokens
self.top_p = top_p
self.frequency_penalty = frequency_penalty
self.presence_penalty = presence_penalty
self.stop = stop
self.n = n
@staticmethod
def from_kwargs(kwargs: dict) -> "LLMConfig":
@@ -1371,27 +1091,13 @@ class LLMConfig:
provider=kwargs.get("provider", DEFAULT_PROVIDER),
api_token=kwargs.get("api_token"),
base_url=kwargs.get("base_url"),
temprature=kwargs.get("temprature"),
max_tokens=kwargs.get("max_tokens"),
top_p=kwargs.get("top_p"),
frequency_penalty=kwargs.get("frequency_penalty"),
presence_penalty=kwargs.get("presence_penalty"),
stop=kwargs.get("stop"),
n=kwargs.get("n")
)
def to_dict(self):
return {
"provider": self.provider,
"api_token": self.api_token,
"base_url": self.base_url,
"temprature": self.temprature,
"max_tokens": self.max_tokens,
"top_p": self.top_p,
"frequency_penalty": self.frequency_penalty,
"presence_penalty": self.presence_penalty,
"stop": self.stop,
"n": self.n
"base_url": self.base_url
}
def clone(self, **kwargs):

View File

@@ -24,7 +24,7 @@ from .browser_manager import BrowserManager
import aiofiles
import aiohttp
import chardet
import cchardet
from aiohttp.client import ClientTimeout
from urllib.parse import urlparse
from types import MappingProxyType
@@ -130,8 +130,6 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
Close the browser and clean up resources.
"""
await self.browser_manager.close()
# Explicitly reset the static Playwright instance
BrowserManager._playwright_instance = None
async def kill_session(self, session_id: str):
"""
@@ -411,11 +409,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
user_agent = kwargs.get("user_agent", self.user_agent)
# Use browser_manager to get a fresh page & context assigned to this session_id
page, context = await self.browser_manager.get_page(CrawlerRunConfig(
session_id=session_id,
user_agent=user_agent,
**kwargs,
))
page, context = await self.browser_manager.get_page(session_id, user_agent)
return session_id
async def crawl(
@@ -453,17 +447,12 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
html = f.read()
if config.screenshot:
screenshot_data = await self._generate_screenshot_from_html(html)
if config.capture_console_messages:
page, context = await self.browser_manager.get_page(crawlerRunConfig=config)
captured_console = await self._capture_console_messages(page, url)
return AsyncCrawlResponse(
html=html,
response_headers=response_headers,
status_code=status_code,
screenshot=screenshot_data,
get_delayed_content=None,
console_messages=captured_console,
)
elif url.startswith("raw:") or url.startswith("raw://"):
@@ -489,7 +478,6 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
) -> AsyncCrawlResponse:
"""
Internal method to crawl web URLs with the specified configuration.
Includes optional network and console capturing.
Args:
url (str): The web URL to crawl
@@ -506,10 +494,6 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
# Reset downloaded files list for new crawl
self._downloaded_files = []
# Initialize capture lists
captured_requests = []
captured_console = []
# Handle user agent with magic mode
user_agent_to_override = config.user_agent
@@ -523,12 +507,10 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
# Get page for session
page, context = await self.browser_manager.get_page(crawlerRunConfig=config)
# await page.goto(URL)
# Add default cookie
# await context.add_cookies(
# [{"name": "cookiesEnabled", "value": "true", "url": url}]
# )
await context.add_cookies(
[{"name": "cookiesEnabled", "value": "true", "url": url}]
)
# Handle navigator overrides
if config.override_navigator or config.simulate_user or config.magic:
@@ -537,156 +519,23 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
# Call hook after page creation
await self.execute_hook("on_page_context_created", page, context=context, config=config)
# Network Request Capturing
if config.capture_network_requests:
async def handle_request_capture(request):
try:
post_data_str = None
try:
# Be cautious with large post data
post_data = request.post_data_buffer
if post_data:
# Attempt to decode, fallback to base64 or size indication
try:
post_data_str = post_data.decode('utf-8', errors='replace')
except UnicodeDecodeError:
post_data_str = f"[Binary data: {len(post_data)} bytes]"
except Exception:
post_data_str = "[Error retrieving post data]"
captured_requests.append({
"event_type": "request",
"url": request.url,
"method": request.method,
"headers": dict(request.headers), # Convert Header dict
"post_data": post_data_str,
"resource_type": request.resource_type,
"is_navigation_request": request.is_navigation_request(),
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing request details for {request.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "request_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})
async def handle_response_capture(response):
try:
captured_requests.append({
"event_type": "response",
"url": response.url,
"status": response.status,
"status_text": response.status_text,
"headers": dict(response.headers), # Convert Header dict
"from_service_worker": response.from_service_worker,
"request_timing": response.request.timing, # Detailed timing info
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing response details for {response.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "response_capture_error", "url": response.url, "error": str(e), "timestamp": time.time()})
async def handle_request_failed_capture(request):
try:
captured_requests.append({
"event_type": "request_failed",
"url": request.url,
"method": request.method,
"resource_type": request.resource_type,
"failure_text": str(request.failure) if request.failure else "Unknown failure",
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing request failed details for {request.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "request_failed_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})
page.on("request", handle_request_capture)
page.on("response", handle_response_capture)
page.on("requestfailed", handle_request_failed_capture)
# Console Message Capturing
if config.capture_console_messages:
def handle_console_capture(msg):
try:
message_type = "unknown"
try:
message_type = msg.type
except Exception:
pass
message_text = "unknown"
try:
message_text = msg.text
except Exception:
pass
# Basic console message with minimal content
entry = {
"type": message_type,
"text": message_text,
"timestamp": time.time()
}
captured_console.append(entry)
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing console message: {e}", tag="CAPTURE")
# Still add something to the list even on error
captured_console.append({
"type": "console_capture_error",
"error": str(e),
"timestamp": time.time()
})
def handle_pageerror_capture(err):
try:
error_message = "Unknown error"
try:
error_message = err.message
except Exception:
pass
error_stack = ""
try:
error_stack = err.stack
except Exception:
pass
captured_console.append({
"type": "error",
"text": error_message,
"stack": error_stack,
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing page error: {e}", tag="CAPTURE")
captured_console.append({
"type": "pageerror_capture_error",
"error": str(e),
"timestamp": time.time()
})
# Add event listeners directly
page.on("console", handle_console_capture)
page.on("pageerror", handle_pageerror_capture)
# Set up console logging if requested
if config.log_console:
def log_consol(msg, console_log_type="debug"):
if console_log_type == "error":
self.logger.error(
message=f"Console error: {msg}", # Use f-string for variable interpolation
tag="CONSOLE"
tag="CONSOLE",
params={"msg": msg.text},
)
elif console_log_type == "debug":
self.logger.debug(
message=f"Console: {msg}", # Use f-string for variable interpolation
tag="CONSOLE"
tag="CONSOLE",
params={"msg": msg.text},
)
page.on("console", log_consol)
@@ -713,15 +562,14 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
try:
# Generate a unique nonce for this request
if config.experimental.get("use_csp_nonce", False):
nonce = hashlib.sha256(os.urandom(32)).hexdigest()
# Add CSP headers to the request
await page.set_extra_http_headers(
{
"Content-Security-Policy": f"default-src 'self'; script-src 'self' 'nonce-{nonce}' 'strict-dynamic'"
}
)
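# A standalone replica of the nonce/header construction above, assuming only
# the stdlib modules this file already uses; it shows the resulting header shape.
import hashlib
import os

def build_csp_header() -> dict:
    nonce = hashlib.sha256(os.urandom(32)).hexdigest()
    return {
        "Content-Security-Policy": f"default-src 'self'; script-src 'self' 'nonce-{nonce}' 'strict-dynamic'"
    }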
response = await page.goto(
url, wait_until=config.wait_until, timeout=config.page_timeout
@@ -967,11 +815,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
for selector in selectors:
try:
content = await page.evaluate(
f"""Array.from(document.querySelectorAll("{selector}"))
.map(el => el.outerHTML)
.join('')"""
)
content = await page.evaluate(f"document.querySelector('{selector}')?.outerHTML || ''")
html_parts.append(content)
except Error as e:
print(f"Warning: Could not get content for selector '{selector}': {str(e)}")
@@ -989,18 +833,14 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
"before_return_html", page=page, html=html, context=context, config=config
)
# Handle PDF, MHTML and screenshot generation
# Handle PDF and screenshot generation
start_export_time = time.perf_counter()
pdf_data = None
screenshot_data = None
mhtml_data = None
if config.pdf:
pdf_data = await self.export_pdf(page)
if config.capture_mhtml:
mhtml_data = await self.capture_mhtml(page)
if config.screenshot:
if config.screenshot_wait_for:
await asyncio.sleep(config.screenshot_wait_for)
@@ -1008,9 +848,9 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
page, screenshot_height_threshold=config.screenshot_height_threshold
)
if screenshot_data or pdf_data or mhtml_data:
if screenshot_data or pdf_data:
self.logger.info(
message="Exporting media (PDF/MHTML/screenshot) took {duration:.2f}s",
message="Exporting PDF and taking screenshot took {duration:.2f}s",
tag="EXPORT",
params={"duration": time.perf_counter() - start_export_time},
)
@@ -1033,16 +873,12 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
status_code=status_code,
screenshot=screenshot_data,
pdf_data=pdf_data,
mhtml_data=mhtml_data,
get_delayed_content=get_delayed_content,
ssl_certificate=ssl_cert,
downloaded_files=(
self._downloaded_files if self._downloaded_files else None
),
redirected_url=redirected_url,
# Include captured data if enabled
network_requests=captured_requests if config.capture_network_requests else None,
console_messages=captured_console if config.capture_console_messages else None,
)
except Exception as e:
@@ -1051,15 +887,6 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
finally:
# If no session_id is given we should close the page
if not config.session_id:
# Detach listeners before closing to prevent potential errors during close
if config.capture_network_requests:
page.remove_listener("request", handle_request_capture)
page.remove_listener("response", handle_response_capture)
page.remove_listener("requestfailed", handle_request_failed_capture)
if config.capture_console_messages:
page.remove_listener("console", handle_console_capture)
page.remove_listener("pageerror", handle_pageerror_capture)
await page.close()
async def _handle_full_page_scan(self, page: Page, scroll_delay: float = 0.1):
@@ -1222,107 +1049,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
"""
pdf_data = await page.pdf(print_background=True)
return pdf_data
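# A minimal sketch, assuming config.pdf routes these bytes onto the crawl result
# as pdf_data (as earlier hunks show); the bytes are a complete PDF document.
def save_pdf(pdf_data: bytes, path: str = "page.pdf") -> None:
    with open(path, "wb") as f:
        f.write(pdf_data)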
async def capture_mhtml(self, page: Page) -> Optional[str]:
"""
Captures the current page as MHTML using CDP.
MHTML (MIME HTML) is a web page archive format that combines the HTML content
with its resources (images, CSS, etc.) into a single MIME-encoded file.
Args:
page (Page): The Playwright page object
Returns:
Optional[str]: The MHTML content as a string, or None if there was an error
"""
try:
# Ensure the page is fully loaded before capturing
try:
# Wait for DOM content and network to be idle
await page.wait_for_load_state("domcontentloaded", timeout=5000)
await page.wait_for_load_state("networkidle", timeout=5000)
# Give a little extra time for JavaScript execution
await page.wait_for_timeout(1000)
# Wait for any animations to complete
await page.evaluate("""
() => new Promise(resolve => {
// First requestAnimationFrame gets scheduled after the next repaint
requestAnimationFrame(() => {
// Second requestAnimationFrame gets called after all animations complete
requestAnimationFrame(resolve);
});
})
""")
except Error as e:
if self.logger:
self.logger.warning(
message="Wait for load state timed out: {error}",
tag="MHTML",
params={"error": str(e)},
)
# Create a new CDP session
cdp_session = await page.context.new_cdp_session(page)
# Call Page.captureSnapshot with format "mhtml"
result = await cdp_session.send("Page.captureSnapshot", {"format": "mhtml"})
# The result contains a 'data' field with the MHTML content
mhtml_content = result.get("data")
# Detach the CDP session to clean up resources
await cdp_session.detach()
return mhtml_content
except Exception as e:
# Log the error but don't raise it - we'll just return None for the MHTML
if self.logger:
self.logger.error(
message="Failed to capture MHTML: {error}",
tag="MHTML",
params={"error": str(e)},
)
return None
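# A minimal sketch, assuming capture_mhtml=True surfaces this string as
# result.mhtml (assigned in the async_webcrawler hunk below); the string is a
# self-contained MIME archive that browsers can open directly.
def save_mhtml(mhtml: str, path: str = "page.mhtml") -> None:
    with open(path, "w", encoding="utf-8", newline="") as f:
        f.write(mhtml)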
async def _capture_console_messages(
self, page: Page, file_path: str
) -> List[Dict[str, Union[str, float]]]:
"""
Captures console messages from the page.
Args:
page (Page): The Playwright page object
file_path (str): The URL or local file path to navigate to while capturing
Returns:
List[Dict[str, Union[str, float]]]: A list of captured console messages
"""
captured_console = []
def handle_console_message(msg):
try:
message_type = msg.type
message_text = msg.text
entry = {
"type": message_type,
"text": message_text,
"timestamp": time.time(),
}
captured_console.append(entry)
except Exception as e:
if self.logger:
self.logger.warning(
f"Error capturing console message: {e}", tag="CAPTURE"
)
page.on("console", handle_console_message)
await page.goto(file_path)
return captured_console
async def take_screenshot(self, page, **kwargs) -> str:
"""
Take a screenshot of the current page.
@@ -1979,7 +1706,7 @@ class AsyncHTTPCrawlerStrategy(AsyncCrawlerStrategy):
await self.start()
yield self._session
finally:
pass
await self.close()
def set_hook(self, hook_type: str, hook_func: Callable) -> None:
if hook_type in self.hooks:
@@ -2095,7 +1822,7 @@ class AsyncHTTPCrawlerStrategy(AsyncCrawlerStrategy):
encoding = response.charset
if not encoding:
encoding = chardet.detect(content.tobytes())['encoding'] or 'utf-8'
encoding = cchardet.detect(content.tobytes())['encoding'] or 'utf-8'
result = AsyncCrawlResponse(
html=content.tobytes().decode(encoding, errors='replace'),

View File

@@ -4,15 +4,20 @@ import aiosqlite
import asyncio
from typing import Optional, Dict
from contextlib import asynccontextmanager
import json
from .models import CrawlResult, MarkdownGenerationResult, StringCompatibleMarkdown
import aiofiles
from .async_logger import AsyncLogger
import json # Added for serialization/deserialization
from .utils import ensure_content_dirs, generate_content_hash
from .models import CrawlResult, MarkdownGenerationResult, StringCompatibleMarkdown
# , StringCompatibleMarkdown
import aiofiles
from .utils import VersionManager
from .async_logger import AsyncLogger
from .utils import get_error_context, create_box_message
# Set up logging
# logging.basicConfig(level=logging.INFO)
# logger = logging.getLogger(__name__)
# logger.setLevel(logging.INFO)
base_directory = DB_PATH = os.path.join(
os.getenv("CRAWL4_AI_BASE_DIRECTORY", Path.home()), ".crawl4ai"
)

View File

@@ -4,15 +4,17 @@ from .models import (
CrawlResult,
CrawlerTaskResult,
CrawlStatus,
DisplayMode,
CrawlStats,
DomainState,
)
from .components.crawler_monitor import CrawlerMonitor
from .types import AsyncWebCrawler
from rich.live import Live
from rich.table import Table
from rich.console import Console
from rich import box
from datetime import timedelta, datetime
from collections.abc import AsyncGenerator
import time
import psutil
import asyncio
@@ -22,6 +24,8 @@ from urllib.parse import urlparse
import random
from abc import ABC, abstractmethod
from math import inf as infinity
class RateLimiter:
def __init__(
@@ -83,6 +87,201 @@ class RateLimiter:
return True
class CrawlerMonitor:
def __init__(
self,
max_visible_rows: int = 15,
display_mode: DisplayMode = DisplayMode.DETAILED,
):
self.console = Console()
self.max_visible_rows = max_visible_rows
self.display_mode = display_mode
self.stats: Dict[str, CrawlStats] = {}
self.process = psutil.Process()
self.start_time = time.time()
self.live = Live(self._create_table(), refresh_per_second=2)
def start(self):
self.live.start()
def stop(self):
self.live.stop()
def add_task(self, task_id: str, url: str):
self.stats[task_id] = CrawlStats(
task_id=task_id, url=url, status=CrawlStatus.QUEUED
)
self.live.update(self._create_table())
def update_task(self, task_id: str, **kwargs):
if task_id in self.stats:
for key, value in kwargs.items():
setattr(self.stats[task_id], key, value)
self.live.update(self._create_table())
def _create_aggregated_table(self) -> Table:
"""Creates a compact table showing only aggregated statistics"""
table = Table(
box=box.ROUNDED,
title="Crawler Status Overview",
title_style="bold magenta",
header_style="bold blue",
show_lines=True,
)
# Calculate statistics
total_tasks = len(self.stats)
queued = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.QUEUED
)
in_progress = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.IN_PROGRESS
)
completed = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.COMPLETED
)
failed = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.FAILED
)
# Memory statistics
current_memory = self.process.memory_info().rss / (1024 * 1024)
total_task_memory = sum(stat.memory_usage for stat in self.stats.values())
peak_memory = max(
(stat.peak_memory for stat in self.stats.values()), default=0.0
)
# Duration
duration = time.time() - self.start_time
# Create status row
table.add_column("Status", style="bold cyan")
table.add_column("Count", justify="right")
table.add_column("Percentage", justify="right")
table.add_row("Total Tasks", str(total_tasks), "100%")
table.add_row(
"[yellow]In Queue[/yellow]",
str(queued),
f"{(queued / total_tasks * 100):.1f}%" if total_tasks > 0 else "0%",
)
table.add_row(
"[blue]In Progress[/blue]",
str(in_progress),
f"{(in_progress / total_tasks * 100):.1f}%" if total_tasks > 0 else "0%",
)
table.add_row(
"[green]Completed[/green]",
str(completed),
f"{(completed / total_tasks * 100):.1f}%" if total_tasks > 0 else "0%",
)
table.add_row(
"[red]Failed[/red]",
str(failed),
f"{(failed / total_tasks * 100):.1f}%" if total_tasks > 0 else "0%",
)
# Add memory information
table.add_section()
table.add_row(
"[magenta]Current Memory[/magenta]", f"{current_memory:.1f} MB", ""
)
table.add_row(
"[magenta]Total Task Memory[/magenta]", f"{total_task_memory:.1f} MB", ""
)
table.add_row(
"[magenta]Peak Task Memory[/magenta]", f"{peak_memory:.1f} MB", ""
)
table.add_row(
"[yellow]Runtime[/yellow]",
str(timedelta(seconds=int(duration))),
"",
)
return table
def _create_detailed_table(self) -> Table:
table = Table(
box=box.ROUNDED,
title="Crawler Performance Monitor",
title_style="bold magenta",
header_style="bold blue",
)
# Add columns
table.add_column("Task ID", style="cyan", no_wrap=True)
table.add_column("URL", style="cyan", no_wrap=True)
table.add_column("Status", style="bold")
table.add_column("Memory (MB)", justify="right")
table.add_column("Peak (MB)", justify="right")
table.add_column("Duration", justify="right")
table.add_column("Info", style="italic")
# Add summary row
total_memory = sum(stat.memory_usage for stat in self.stats.values())
active_count = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.IN_PROGRESS
)
completed_count = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.COMPLETED
)
failed_count = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.FAILED
)
table.add_row(
"[bold yellow]SUMMARY",
f"Total: {len(self.stats)}",
f"Active: {active_count}",
f"{total_memory:.1f}",
f"{self.process.memory_info().rss / (1024 * 1024):.1f}",
str(
timedelta(
seconds=int(time.time() - self.start_time)
)
),
f"{completed_count}{failed_count}",
style="bold",
)
table.add_section()
# Add rows for each task
visible_stats = sorted(
self.stats.values(),
key=lambda x: (
x.status != CrawlStatus.IN_PROGRESS,
x.status != CrawlStatus.QUEUED,
x.end_time or infinity,
),
)[: self.max_visible_rows]
for stat in visible_stats:
status_style = {
CrawlStatus.QUEUED: "white",
CrawlStatus.IN_PROGRESS: "yellow",
CrawlStatus.COMPLETED: "green",
CrawlStatus.FAILED: "red",
}[stat.status]
table.add_row(
stat.task_id[:8], # Show first 8 chars of task ID
stat.url[:40] + "..." if len(stat.url) > 40 else stat.url,
f"[{status_style}]{stat.status.value}[/{status_style}]",
f"{stat.memory_usage:.1f}",
f"{stat.peak_memory:.1f}",
stat.duration,
stat.error_message[:40] if stat.error_message else "",
)
return table
def _create_table(self) -> Table:
"""Creates the appropriate table based on display mode"""
if self.display_mode == DisplayMode.AGGREGATED:
return self._create_aggregated_table()
return self._create_detailed_table()
class BaseDispatcher(ABC):
def __init__(
@@ -110,7 +309,7 @@ class BaseDispatcher(ABC):
async def run_urls(
self,
urls: List[str],
crawler: AsyncWebCrawler, # noqa: F821
crawler: "AsyncWebCrawler", # noqa: F821
config: CrawlerRunConfig,
monitor: Optional[CrawlerMonitor] = None,
) -> List[CrawlerTaskResult]:
@@ -121,144 +320,71 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
def __init__(
self,
memory_threshold_percent: float = 90.0,
critical_threshold_percent: float = 95.0, # New critical threshold
recovery_threshold_percent: float = 85.0, # New recovery threshold
check_interval: float = 1.0,
max_session_permit: int = 20,
fairness_timeout: float = 600.0, # 10 minutes before prioritizing long-waiting URLs
memory_wait_timeout: float = 300.0, # 5 minutes default timeout
rate_limiter: Optional[RateLimiter] = None,
monitor: Optional[CrawlerMonitor] = None,
):
super().__init__(rate_limiter, monitor)
self.memory_threshold_percent = memory_threshold_percent
self.critical_threshold_percent = critical_threshold_percent
self.recovery_threshold_percent = recovery_threshold_percent
self.check_interval = check_interval
self.max_session_permit = max_session_permit
self.fairness_timeout = fairness_timeout
self.result_queue = asyncio.Queue()
self.task_queue = asyncio.PriorityQueue() # Priority queue for better management
self.memory_pressure_mode = False # Flag to indicate when we're in memory pressure mode
self.current_memory_percent = 0.0 # Track current memory usage
async def _memory_monitor_task(self):
"""Background task to continuously monitor memory usage and update state"""
while True:
self.current_memory_percent = psutil.virtual_memory().percent
# Enter memory pressure mode if we cross the threshold
if not self.memory_pressure_mode and self.current_memory_percent >= self.memory_threshold_percent:
self.memory_pressure_mode = True
if self.monitor:
self.monitor.update_memory_status("PRESSURE")
# Exit memory pressure mode if we go below recovery threshold
elif self.memory_pressure_mode and self.current_memory_percent <= self.recovery_threshold_percent:
self.memory_pressure_mode = False
if self.monitor:
self.monitor.update_memory_status("NORMAL")
# In critical mode, we might need to take more drastic action
if self.current_memory_percent >= self.critical_threshold_percent:
if self.monitor:
self.monitor.update_memory_status("CRITICAL")
# We could implement additional memory-saving measures here
await asyncio.sleep(self.check_interval)
def _get_priority_score(self, wait_time: float, retry_count: int) -> float:
"""Calculate priority score (lower is higher priority)
- URLs waiting longer than fairness_timeout get higher priority
- More retry attempts decreases priority
"""
if wait_time > self.fairness_timeout:
# High priority for long-waiting URLs
return -wait_time
# Standard priority based on retries
return retry_count
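# A standalone replica of the scoring above, using the constructor default
# fairness_timeout=600.0, to make the fairness behavior concrete:
def priority_score(wait_time: float, retry_count: int, fairness_timeout: float = 600.0) -> float:
    return -wait_time if wait_time > fairness_timeout else retry_count

assert priority_score(700.0, 0) == -700.0  # starved URL jumps the whole queue
assert priority_score(30.0, 2) == 2        # otherwise each retry sinks priority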
self.memory_wait_timeout = memory_wait_timeout
self.result_queue = asyncio.Queue() # Queue for storing results
async def crawl_url(
self,
url: str,
config: CrawlerRunConfig,
task_id: str,
retry_count: int = 0,
) -> CrawlerTaskResult:
start_time = time.time()
error_message = ""
memory_usage = peak_memory = 0.0
# Get starting memory for accurate measurement
process = psutil.Process()
start_memory = process.memory_info().rss / (1024 * 1024)
try:
if self.monitor:
self.monitor.update_task(
task_id,
status=CrawlStatus.IN_PROGRESS,
start_time=start_time,
retry_count=retry_count
task_id, status=CrawlStatus.IN_PROGRESS, start_time=start_time
)
self.concurrent_sessions += 1
if self.rate_limiter:
await self.rate_limiter.wait_if_needed(url)
# Check if we're in critical memory state
if self.current_memory_percent >= self.critical_threshold_percent:
# Requeue this task with increased priority and retry count
enqueue_time = time.time()
priority = self._get_priority_score(enqueue_time - start_time, retry_count + 1)
await self.task_queue.put((priority, (url, task_id, retry_count + 1, enqueue_time)))
# Update monitoring
if self.monitor:
self.monitor.update_task(
task_id,
status=CrawlStatus.QUEUED,
error_message="Requeued due to critical memory pressure"
)
# Return placeholder result with requeued status
return CrawlerTaskResult(
task_id=task_id,
url=url,
result=CrawlResult(
url=url, html="", metadata={"status": "requeued"},
success=False, error_message="Requeued due to critical memory pressure"
),
memory_usage=0,
peak_memory=0,
start_time=start_time,
end_time=time.time(),
error_message="Requeued due to critical memory pressure",
retry_count=retry_count + 1
)
# Execute the crawl
process = psutil.Process()
start_memory = process.memory_info().rss / (1024 * 1024)
result = await self.crawler.arun(url, config=config, session_id=task_id)
# Measure memory usage
end_memory = process.memory_info().rss / (1024 * 1024)
memory_usage = peak_memory = end_memory - start_memory
# Handle rate limiting
if self.rate_limiter and result.status_code:
if not self.rate_limiter.update_delay(url, result.status_code):
error_message = f"Rate limit retry count exceeded for domain {urlparse(url).netloc}"
if self.monitor:
self.monitor.update_task(task_id, status=CrawlStatus.FAILED)
# Update status based on result
result = CrawlerTaskResult(
task_id=task_id,
url=url,
result=result,
memory_usage=memory_usage,
peak_memory=peak_memory,
start_time=start_time,
end_time=time.time(),
error_message=error_message,
)
await self.result_queue.put(result)
return result
if not result.success:
error_message = result.error_message
if self.monitor:
self.monitor.update_task(task_id, status=CrawlStatus.FAILED)
elif self.monitor:
self.monitor.update_task(task_id, status=CrawlStatus.COMPLETED)
except Exception as e:
error_message = str(e)
if self.monitor:
@@ -266,7 +392,7 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
result = CrawlResult(
url=url, html="", metadata={}, success=False, error_message=str(e)
)
finally:
end_time = time.time()
if self.monitor:
@@ -276,10 +402,9 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
memory_usage=memory_usage,
peak_memory=peak_memory,
error_message=error_message,
retry_count=retry_count
)
self.concurrent_sessions -= 1
return CrawlerTaskResult(
task_id=task_id,
url=url,
@@ -289,240 +414,116 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
start_time=start_time,
end_time=end_time,
error_message=error_message,
retry_count=retry_count
)
async def run_urls(
self,
urls: List[str],
crawler: AsyncWebCrawler,
crawler: "AsyncWebCrawler", # noqa: F821
config: CrawlerRunConfig,
) -> List[CrawlerTaskResult]:
self.crawler = crawler
# Start the memory monitor task
memory_monitor = asyncio.create_task(self._memory_monitor_task())
if self.monitor:
self.monitor.start()
results = []
try:
# Initialize task queue
pending_tasks = []
active_tasks = []
task_queue = []
for url in urls:
task_id = str(uuid.uuid4())
if self.monitor:
self.monitor.add_task(task_id, url)
# Add to queue with initial priority 0, retry count 0, and current time
await self.task_queue.put((0, (url, task_id, 0, time.time())))
active_tasks = []
# Process until both queues are empty
while not self.task_queue.empty() or active_tasks:
# If memory pressure is low, start new tasks
if not self.memory_pressure_mode and len(active_tasks) < self.max_session_permit:
try:
# Try to get a task with timeout to avoid blocking indefinitely
priority, (url, task_id, retry_count, enqueue_time) = await asyncio.wait_for(
self.task_queue.get(), timeout=0.1
)
# Create and start the task
task = asyncio.create_task(
self.crawl_url(url, config, task_id, retry_count)
)
active_tasks.append(task)
# Update waiting time in monitor
if self.monitor:
wait_time = time.time() - enqueue_time
self.monitor.update_task(
task_id,
wait_time=wait_time,
status=CrawlStatus.IN_PROGRESS
)
except asyncio.TimeoutError:
# No tasks in queue, that's fine
pass
# Wait for completion even if queue is starved
if active_tasks:
done, pending = await asyncio.wait(
active_tasks, timeout=0.1, return_when=asyncio.FIRST_COMPLETED
)
# Process completed tasks
for completed_task in done:
result = await completed_task
results.append(result)
# Update active tasks list
active_tasks = list(pending)
else:
# If no active tasks but still waiting, sleep briefly
await asyncio.sleep(self.check_interval / 2)
# Update priorities for waiting tasks if needed
await self._update_queue_priorities()
return results
task_queue.append((url, task_id))
except Exception as e:
if self.monitor:
self.monitor.update_memory_status(f"QUEUE_ERROR: {str(e)}")
while task_queue or active_tasks:
wait_start_time = time.time()
while len(active_tasks) < self.max_session_permit and task_queue:
if psutil.virtual_memory().percent >= self.memory_threshold_percent:
# Check if we've exceeded the timeout
if time.time() - wait_start_time > self.memory_wait_timeout:
raise MemoryError(
f"Memory usage above threshold ({self.memory_threshold_percent}%) for more than {self.memory_wait_timeout} seconds"
)
await asyncio.sleep(self.check_interval)
continue
url, task_id = task_queue.pop(0)
task = asyncio.create_task(self.crawl_url(url, config, task_id))
active_tasks.append(task)
if not active_tasks:
await asyncio.sleep(self.check_interval)
continue
done, pending = await asyncio.wait(
active_tasks, return_when=asyncio.FIRST_COMPLETED
)
pending_tasks.extend(done)
active_tasks = list(pending)
return await asyncio.gather(*pending_tasks)
finally:
# Clean up
memory_monitor.cancel()
if self.monitor:
self.monitor.stop()
async def _update_queue_priorities(self):
"""Periodically update priorities of items in the queue to prevent starvation"""
# Skip if queue is empty
if self.task_queue.empty():
return
# Use a drain-and-refill approach to update all priorities
temp_items = []
# Drain the queue (with a safety timeout to prevent blocking)
try:
drain_start = time.time()
while not self.task_queue.empty() and time.time() - drain_start < 5.0: # 5 second safety timeout
try:
# Get item from queue with timeout
priority, (url, task_id, retry_count, enqueue_time) = await asyncio.wait_for(
self.task_queue.get(), timeout=0.1
)
# Calculate new priority based on current wait time
current_time = time.time()
wait_time = current_time - enqueue_time
new_priority = self._get_priority_score(wait_time, retry_count)
# Store with updated priority
temp_items.append((new_priority, (url, task_id, retry_count, enqueue_time)))
# Update monitoring stats for this task
if self.monitor and task_id in self.monitor.stats:
self.monitor.update_task(task_id, wait_time=wait_time)
except asyncio.TimeoutError:
# Queue might be empty or very slow
break
except Exception as e:
# If anything goes wrong, make sure we refill the queue with what we've got
self.monitor.update_memory_status(f"QUEUE_ERROR: {str(e)}")
# Calculate queue statistics
if temp_items and self.monitor:
total_queued = len(temp_items)
wait_times = [item[1][3] for item in temp_items]
highest_wait_time = time.time() - min(wait_times) if wait_times else 0
avg_wait_time = sum(time.time() - t for t in wait_times) / len(wait_times) if wait_times else 0
# Update queue statistics in monitor
self.monitor.update_queue_statistics(
total_queued=total_queued,
highest_wait_time=highest_wait_time,
avg_wait_time=avg_wait_time
)
# Sort by priority (lowest number = highest priority)
temp_items.sort(key=lambda x: x[0])
# Refill the queue with updated priorities
for item in temp_items:
await self.task_queue.put(item)
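# The drain-and-refill pattern above, reduced to a self-contained sketch (the
# real method adds timeouts, monitor updates, and queue statistics):
import asyncio

async def rescore_queue(queue: asyncio.PriorityQueue, score) -> None:
    items = []
    while not queue.empty():
        _, payload = await queue.get()      # drain every (priority, payload) pair
        items.append((score(payload), payload))
    for item in sorted(items, key=lambda pair: pair[0]):
        await queue.put(item)               # refill with recomputed priorities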
async def run_urls_stream(
self,
urls: List[str],
crawler: AsyncWebCrawler,
crawler: "AsyncWebCrawler", # noqa: F821
config: CrawlerRunConfig,
) -> AsyncGenerator[CrawlerTaskResult, None]:
self.crawler = crawler
# Start the memory monitor task
memory_monitor = asyncio.create_task(self._memory_monitor_task())
if self.monitor:
self.monitor.start()
try:
active_tasks = []
task_queue = []
completed_count = 0
total_urls = len(urls)
# Initialize task queue
for url in urls:
task_id = str(uuid.uuid4())
if self.monitor:
self.monitor.add_task(task_id, url)
# Add to queue with initial priority 0, retry count 0, and current time
await self.task_queue.put((0, (url, task_id, 0, time.time())))
active_tasks = []
completed_count = 0
total_urls = len(urls)
task_queue.append((url, task_id))
while completed_count < total_urls:
# If memory pressure is low, start new tasks
if not self.memory_pressure_mode and len(active_tasks) < self.max_session_permit:
try:
# Try to get a task with timeout
priority, (url, task_id, retry_count, enqueue_time) = await asyncio.wait_for(
self.task_queue.get(), timeout=0.1
)
# Create and start the task
task = asyncio.create_task(
self.crawl_url(url, config, task_id, retry_count)
)
active_tasks.append(task)
# Update waiting time in monitor
if self.monitor:
wait_time = time.time() - enqueue_time
self.monitor.update_task(
task_id,
wait_time=wait_time,
status=CrawlStatus.IN_PROGRESS
)
except asyncio.TimeoutError:
# No tasks in queue, that's fine
pass
# Process completed tasks and yield results
# Start new tasks if memory permits
while len(active_tasks) < self.max_session_permit and task_queue:
if psutil.virtual_memory().percent >= self.memory_threshold_percent:
await asyncio.sleep(self.check_interval)
continue
url, task_id = task_queue.pop(0)
task = asyncio.create_task(self.crawl_url(url, config, task_id))
active_tasks.append(task)
if not active_tasks and not task_queue:
break
# Wait for any task to complete and yield results
if active_tasks:
done, pending = await asyncio.wait(
active_tasks, timeout=0.1, return_when=asyncio.FIRST_COMPLETED
)
for completed_task in done:
result = await completed_task
# Only count as completed if it wasn't requeued
if "requeued" not in result.error_message:
completed_count += 1
yield result
# Update active tasks list
completed_count += 1
yield result
active_tasks = list(pending)
else:
# If no active tasks but still waiting, sleep briefly
await asyncio.sleep(self.check_interval / 2)
# Update priorities for waiting tasks if needed
await self._update_queue_priorities()
await asyncio.sleep(self.check_interval)
finally:
# Clean up
memory_monitor.cancel()
if self.monitor:
self.monitor.stop()
class SemaphoreDispatcher(BaseDispatcher):
def __init__(
@@ -619,7 +620,7 @@ class SemaphoreDispatcher(BaseDispatcher):
async def run_urls(
self,
crawler: AsyncWebCrawler, # noqa: F821
crawler: "AsyncWebCrawler", # noqa: F821
urls: List[str],
config: CrawlerRunConfig,
) -> List[CrawlerTaskResult]:
@@ -643,4 +644,4 @@ class SemaphoreDispatcher(BaseDispatcher):
return await asyncio.gather(*tasks, return_exceptions=True)
finally:
if self.monitor:
self.monitor.stop()

View File

@@ -4,22 +4,14 @@ from typing import Optional, Dict, Any
from colorama import Fore, Style, init
import os
from datetime import datetime
from urllib.parse import unquote
class LogLevel(Enum):
DEFAULT = 0
DEBUG = 1
INFO = 2
SUCCESS = 3
WARNING = 4
ERROR = 5
CRITICAL = 6
ALERT = 7
NOTICE = 8
EXCEPTION = 9
FATAL = 10
@@ -45,11 +37,11 @@ class AsyncLoggerBase(ABC):
pass
@abstractmethod
def url_status(self, url: str, success: bool, timing: float, tag: str = "FETCH", url_length: int = 100):
def url_status(self, url: str, success: bool, timing: float, tag: str = "FETCH", url_length: int = 50):
pass
@abstractmethod
def error_status(self, url: str, error: str, tag: str = "ERROR", url_length: int = 100):
def error_status(self, url: str, error: str, tag: str = "ERROR", url_length: int = 50):
pass
class AsyncLogger(AsyncLoggerBase):
@@ -69,13 +61,6 @@ class AsyncLogger(AsyncLoggerBase):
"DEBUG": "",
"INFO": "",
"WARNING": "",
"SUCCESS": "",
"CRITICAL": "",
"ALERT": "",
"NOTICE": "",
"EXCEPTION": "",
"FATAL": "",
"DEFAULT": "",
}
DEFAULT_COLORS = {
@@ -84,12 +69,6 @@ class AsyncLogger(AsyncLoggerBase):
LogLevel.SUCCESS: Fore.GREEN,
LogLevel.WARNING: Fore.YELLOW,
LogLevel.ERROR: Fore.RED,
LogLevel.CRITICAL: Fore.RED + Style.BRIGHT,
LogLevel.ALERT: Fore.RED + Style.BRIGHT,
LogLevel.NOTICE: Fore.BLUE,
LogLevel.EXCEPTION: Fore.RED + Style.BRIGHT,
LogLevel.FATAL: Fore.RED + Style.BRIGHT,
LogLevel.DEFAULT: Fore.WHITE,
}
def __init__(
@@ -131,14 +110,6 @@ class AsyncLogger(AsyncLoggerBase):
def _get_icon(self, tag: str) -> str:
"""Get the icon for a tag, defaulting to info icon if not found."""
return self.icons.get(tag, self.icons["INFO"])
def _shorten(self, text, length, placeholder="..."):
"""Truncate text in the middle if longer than length, or pad if shorter."""
if len(text) <= length:
return text.ljust(length) # Pad with spaces to reach desired length
half = (length - len(placeholder)) // 2
shortened = text[:half] + placeholder + text[-half:]
return shortened.ljust(length) # Also pad shortened text to consistent length
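# A standalone replica with a worked example: middle truncation keeps both ends
# of a long URL and pads to a fixed column width.
def shorten(text: str, length: int, placeholder: str = "...") -> str:
    if len(text) <= length:
        return text.ljust(length)
    half = (length - len(placeholder)) // 2
    return (text[:half] + placeholder + text[-half:]).ljust(length)

assert shorten("https://example.com/a/very/long/path", 20) == "https://...ong/path "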
def _write_to_file(self, message: str):
"""Write a message to the log file if configured."""
@@ -185,22 +156,9 @@ class AsyncLogger(AsyncLoggerBase):
formatted_message = message.format(**params)
# Then apply colors if specified
color_map = {
"green": Fore.GREEN,
"red": Fore.RED,
"yellow": Fore.YELLOW,
"blue": Fore.BLUE,
"cyan": Fore.CYAN,
"magenta": Fore.MAGENTA,
"white": Fore.WHITE,
"black": Fore.BLACK,
"reset": Style.RESET_ALL,
}
if colors:
for key, color in colors.items():
# Find the formatted value in the message and wrap it with color
if color in color_map:
color = color_map[color]
if key in params:
value_str = str(params[key])
formatted_message = formatted_message.replace(
@@ -241,22 +199,6 @@ class AsyncLogger(AsyncLoggerBase):
def warning(self, message: str, tag: str = "WARNING", **kwargs):
"""Log a warning message."""
self._log(LogLevel.WARNING, message, tag, **kwargs)
def critical(self, message: str, tag: str = "CRITICAL", **kwargs):
"""Log a critical message."""
self._log(LogLevel.ERROR, message, tag, **kwargs)
def exception(self, message: str, tag: str = "EXCEPTION", **kwargs):
"""Log an exception message."""
self._log(LogLevel.ERROR, message, tag, **kwargs)
def fatal(self, message: str, tag: str = "FATAL", **kwargs):
"""Log a fatal message."""
self._log(LogLevel.ERROR, message, tag, **kwargs)
def alert(self, message: str, tag: str = "ALERT", **kwargs):
"""Log an alert message."""
self._log(LogLevel.ERROR, message, tag, **kwargs)
def notice(self, message: str, tag: str = "NOTICE", **kwargs):
"""Log a notice message."""
self._log(LogLevel.INFO, message, tag, **kwargs)
def error(self, message: str, tag: str = "ERROR", **kwargs):
"""Log an error message."""
@@ -268,7 +210,7 @@ class AsyncLogger(AsyncLoggerBase):
success: bool,
timing: float,
tag: str = "FETCH",
url_length: int = 100,
url_length: int = 50,
):
"""
Convenience method for logging URL fetch status.
@@ -280,15 +222,14 @@ class AsyncLogger(AsyncLoggerBase):
tag: Tag for the message
url_length: Maximum length for URL in log
"""
decoded_url = unquote(url)
readable_url = self._shorten(decoded_url, url_length)
self._log(
level=LogLevel.SUCCESS if success else LogLevel.ERROR,
message="{url} | {status} | : {timing:.2f}s",
message="{url:.{url_length}}... | Status: {status} | Time: {timing:.2f}s",
tag=tag,
params={
"url": readable_url,
"status": "" if success else "",
"url": url,
"url_length": url_length,
"status": success,
"timing": timing,
},
colors={
@@ -309,13 +250,11 @@ class AsyncLogger(AsyncLoggerBase):
tag: Tag for the message
url_length: Maximum length for URL in log
"""
decoded_url = unquote(url)
readable_url = self._shorten(decoded_url, url_length)
self._log(
level=LogLevel.ERROR,
message="{url} | Error: {error}",
message="{url:.{url_length}}... | Error: {error}",
tag=tag,
params={"url": readable_url, "error": error},
params={"url": url, "url_length": url_length, "error": error},
)
class AsyncFileLogger(AsyncLoggerBase):
@@ -359,13 +298,13 @@ class AsyncFileLogger(AsyncLoggerBase):
"""Log an error message to file."""
self._write_to_file("ERROR", message, tag)
def url_status(self, url: str, success: bool, timing: float, tag: str = "FETCH", url_length: int = 100):
def url_status(self, url: str, success: bool, timing: float, tag: str = "FETCH", url_length: int = 50):
"""Log URL fetch status to file."""
status = "SUCCESS" if success else "FAILED"
message = f"{url[:url_length]}... | Status: {status} | Time: {timing:.2f}s"
self._write_to_file("URL_STATUS", message, tag)
def error_status(self, url: str, error: str, tag: str = "ERROR", url_length: int = 100):
def error_status(self, url: str, error: str, tag: str = "ERROR", url_length: int = 50):
"""Log error status to file."""
message = f"{url[:url_length]}... | Error: {error}"
self._write_to_file("ERROR", message, tag)

View File

@@ -4,26 +4,20 @@ import sys
import time
from colorama import Fore
from pathlib import Path
from typing import Optional, List
from typing import Optional, List, Generic, TypeVar
import json
import asyncio
# from contextlib import nullcontext, asynccontextmanager
from contextlib import asynccontextmanager
from .models import (
CrawlResult,
MarkdownGenerationResult,
DispatchResult,
ScrapingResult,
CrawlResultContainer,
RunManyReturn
)
from .models import CrawlResult, MarkdownGenerationResult, DispatchResult, ScrapingResult
from .async_database import async_db_manager
from .chunking_strategy import * # noqa: F403
from .chunking_strategy import IdentityChunking
from .chunking_strategy import RegexChunking, ChunkingStrategy, IdentityChunking
from .content_filter_strategy import * # noqa: F403
from .extraction_strategy import * # noqa: F403
from .extraction_strategy import NoExtractionStrategy
from .content_filter_strategy import RelevantContentFilter
from .extraction_strategy import * # noqa: F403
from .extraction_strategy import NoExtractionStrategy, ExtractionStrategy
from .async_crawler_strategy import (
AsyncCrawlerStrategy,
AsyncPlaywrightCrawlerStrategy,
@@ -36,10 +30,11 @@ from .markdown_generation_strategy import (
)
from .deep_crawling import DeepCrawlDecorator
from .async_logger import AsyncLogger, AsyncLoggerBase
from .async_configs import BrowserConfig, CrawlerRunConfig, ProxyConfig
from .async_dispatcher import * # noqa: F403
from .async_configs import BrowserConfig, CrawlerRunConfig
from .async_dispatcher import * # noqa: F403
from .async_dispatcher import BaseDispatcher, MemoryAdaptiveDispatcher, RateLimiter
from .config import MIN_WORD_THRESHOLD
from .utils import (
sanitize_input_encode,
InvalidCSSSelectorError,
@@ -47,9 +42,47 @@ from .utils import (
create_box_message,
get_error_context,
RobotsParser,
preprocess_html_for_schema,
)
from typing import Union, AsyncGenerator
CrawlResultT = TypeVar('CrawlResultT', bound=CrawlResult)
# RunManyReturn = Union[CrawlResultT, List[CrawlResultT], AsyncGenerator[CrawlResultT, None]]
class CrawlResultContainer(Generic[CrawlResultT]):
def __init__(self, results: Union[CrawlResultT, List[CrawlResultT]]):
# Normalize to a list
if isinstance(results, list):
self._results = results
else:
self._results = [results]
def __iter__(self):
return iter(self._results)
def __getitem__(self, index):
return self._results[index]
def __len__(self):
return len(self._results)
def __getattr__(self, attr):
# Delegate attribute access to the first element.
if self._results:
return getattr(self._results[0], attr)
raise AttributeError(f"{self.__class__.__name__} object has no attribute '{attr}'")
def __repr__(self):
return f"{self.__class__.__name__}({self._results!r})"
# Redefine the union type. Now synchronous calls always return a container,
# while stream mode is handled with an AsyncGenerator.
RunManyReturn = Union[
CrawlResultContainer[CrawlResultT],
AsyncGenerator[CrawlResultT, None]
]
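# A minimal sketch of the container's ergonomics, using only the class above:
# a single result keeps attribute-style access while lists stay iterable.
class _FakeResult:
    success = True

wrapped = CrawlResultContainer(_FakeResult())
assert wrapped.success and len(wrapped) == 1 and wrapped[0] is list(wrapped)[0]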
class AsyncWebCrawler:
"""
@@ -112,8 +145,7 @@ class AsyncWebCrawler:
self,
crawler_strategy: AsyncCrawlerStrategy = None,
config: BrowserConfig = None,
base_directory: str = str(os.getenv("CRAWL4_AI_BASE_DIRECTORY", Path.home())),
thread_safe: bool = False,
logger: AsyncLoggerBase = None,
**kwargs,
@@ -141,8 +173,7 @@ class AsyncWebCrawler:
)
# Initialize crawler strategy
params = {k: v for k, v in kwargs.items() if k in ["browser_config", "logger"]}
self.crawler_strategy = crawler_strategy or AsyncPlaywrightCrawlerStrategy(
browser_config=browser_config,
logger=self.logger,
@@ -164,18 +195,23 @@ class AsyncWebCrawler:
# Decorate arun method with deep crawling capabilities
self._deep_handler = DeepCrawlDecorator(self)
self.arun = self._deep_handler(self.arun)
async def start(self):
"""
Start the crawler explicitly without using context manager.
This is equivalent to using 'async with' but gives more control over the lifecycle.
This method will:
1. Initialize the browser and context
2. Perform warmup sequence
3. Return the crawler instance for method chaining
Returns:
AsyncWebCrawler: The initialized crawler instance
"""
await self.crawler_strategy.__aenter__()
self.logger.info(f"Crawl4AI {crawl4ai_version}", tag="INIT")
self.ready = True
await self.awarmup()
return self
async def close(self):
@@ -195,6 +231,18 @@ class AsyncWebCrawler:
async def __aexit__(self, exc_type, exc_val, exc_tb):
await self.close()
async def awarmup(self):
"""
Initialize the crawler with warm-up sequence.
This method:
1. Logs initialization info
2. Sets up browser configuration
3. Marks the crawler as ready
"""
self.logger.info(f"Crawl4AI {crawl4ai_version}", tag="INIT")
self.ready = True
@asynccontextmanager
async def nullcontext(self):
"""异步空上下文管理器"""
@@ -234,14 +282,9 @@ class AsyncWebCrawler:
Returns:
CrawlResult: The result of crawling and processing
"""
# Auto-start if not ready
if not self.ready:
await self.start()
config = config or CrawlerRunConfig()
if not isinstance(url, str) or not url:
raise ValueError("Invalid URL, make sure the URL is a non-empty string")
async with self._lock or self.nullcontext():
try:
@@ -252,7 +295,9 @@ class AsyncWebCrawler:
config.cache_mode = CacheMode.ENABLED
# Create cache context
cache_context = CacheContext(url, config.cache_mode, False)
# Initialize processing variables
async_response: AsyncCrawlResponse = None
@@ -282,7 +327,7 @@ class AsyncWebCrawler:
# if config.screenshot and not screenshot or config.pdf and not pdf:
if config.screenshot and not screenshot_data:
cached_result = None
if config.pdf and not pdf_data:
cached_result = None
@@ -295,12 +340,12 @@ class AsyncWebCrawler:
# Update proxy configuration from rotation strategy if available
if config and config.proxy_rotation_strategy:
next_proxy: ProxyConfig = await config.proxy_rotation_strategy.get_next_proxy()
if next_proxy:
self.logger.info(
message="Switch proxy: {proxy}",
tag="PROXY",
params={"proxy": next_proxy.server}
params={"proxy": next_proxy.server},
)
config.proxy_config = next_proxy
# config = config.clone(proxy_config=next_proxy)
@@ -310,23 +355,18 @@ class AsyncWebCrawler:
t1 = time.perf_counter()
if config.user_agent:
self.crawler_strategy.update_user_agent(config.user_agent)
# Check robots.txt if enabled
if config and config.check_robots_txt:
if not await self.robots_parser.can_fetch(url, self.browser_config.user_agent):
return CrawlResult(
url=url,
html="",
success=False,
status_code=403,
error_message="Access denied by robots.txt",
response_headers={"X-Robots-Status": "Blocked by robots.txt"},
)
##############################
@@ -353,16 +393,15 @@ class AsyncWebCrawler:
###############################################################
# Process the HTML content, Call CrawlerStrategy.process_html #
###############################################################
crawl_result: CrawlResult = await self.aprocess_html(
url=url,
html=html,
extracted_content=extracted_content,
config=config, # Pass the config object instead of individual parameters
screenshot_data=screenshot_data,
screenshot=screenshot_data,
pdf_data=pdf_data,
verbose=config.verbose,
is_raw_html=True if url.startswith("raw:") else False,
redirected_url=async_response.redirected_url,
**kwargs,
)
@@ -371,21 +410,25 @@ class AsyncWebCrawler:
crawl_result.response_headers = async_response.response_headers
crawl_result.downloaded_files = async_response.downloaded_files
crawl_result.js_execution_result = js_execution_result
crawl_result.mhtml = async_response.mhtml_data
crawl_result.ssl_certificate = async_response.ssl_certificate
# Add captured network and console data if available
crawl_result.network_requests = async_response.network_requests
crawl_result.console_messages = async_response.console_messages
crawl_result.success = bool(html)
crawl_result.session_id = getattr(config, "session_id", None)
self.logger.url_status(
url=cache_context.display_url,
success=crawl_result.success,
timing=time.perf_counter() - start_time,
self.logger.success(
message="{url:.50}... | Status: {status} | Total: {timing}",
tag="COMPLETE",
params={
"url": cache_context.display_url,
"status": crawl_result.success,
"timing": f"{time.perf_counter() - start_time:.2f}s",
},
colors={
"status": Fore.GREEN if crawl_result.success else Fore.RED,
"timing": Fore.YELLOW,
},
)
# Update cache if appropriate
@@ -395,15 +438,19 @@ class AsyncWebCrawler:
return CrawlResultContainer(crawl_result)
else:
self.logger.url_status(
url=cache_context.display_url,
success=True,
timing=time.perf_counter() - start_time,
tag="COMPLETE"
self.logger.success(
message="{url:.50}... | Status: {status} | Total: {timing}",
tag="COMPLETE",
params={
"url": cache_context.display_url,
"status": True,
"timing": f"{time.perf_counter() - start_time:.2f}s",
},
colors={"status": Fore.GREEN, "timing": Fore.YELLOW},
)
cached_result.success = bool(html)
cached_result.session_id = getattr(config, "session_id", None)
cached_result.redirected_url = cached_result.redirected_url or url
return CrawlResultContainer(cached_result)
@@ -423,7 +470,7 @@ class AsyncWebCrawler:
tag="ERROR",
)
return CrawlResultContainer(
CrawlResult(
url=url, html="", success=False, error_message=error_message
)
@@ -435,7 +482,7 @@ class AsyncWebCrawler:
html: str,
extracted_content: str,
config: CrawlerRunConfig,
screenshot_data: str,
screenshot: str,
pdf_data: str,
verbose: bool,
**kwargs,
@@ -448,7 +495,7 @@ class AsyncWebCrawler:
html: Raw HTML content
extracted_content: Previously extracted content (if any)
config: Configuration object controlling processing behavior
screenshot_data: Screenshot data (if any)
screenshot: Screenshot data (if any)
pdf_data: PDF data (if any)
verbose: Whether to enable verbose logging
**kwargs: Additional parameters for backwards compatibility
@@ -468,16 +515,15 @@ class AsyncWebCrawler:
# Process HTML content
params = config.__dict__.copy()
params.pop("url", None)
params.pop("url", None)
# add keys from kwargs that don't already exist in params
params.update({k: v for k, v in kwargs.items() if k not in params})
################################
# Scraping Strategy Execution #
################################
result: ScrapingResult = scraping_strategy.scrap(url, html, **params)
if result is None:
raise ValueError(
@@ -493,8 +539,7 @@ class AsyncWebCrawler:
# Extract results - handle both dict and ScrapingResult
if isinstance(result, dict):
cleaned_html = sanitize_input_encode(result.get("cleaned_html", ""))
media = result.get("media", {})
links = result.get("links", {})
metadata = result.get("metadata", {})
@@ -511,65 +556,24 @@ class AsyncWebCrawler:
config.markdown_generator or DefaultMarkdownGenerator()
)
# --- SELECT HTML SOURCE BASED ON CONTENT_SOURCE ---
# Get the desired source from the generator config, default to 'cleaned_html'
selected_html_source = getattr(markdown_generator, 'content_source', 'cleaned_html')
# Define the source selection logic using dict dispatch
html_source_selector = {
"raw_html": lambda: html, # The original raw HTML
"cleaned_html": lambda: cleaned_html, # The HTML after scraping strategy
"fit_html": lambda: preprocess_html_for_schema(html_content=html), # Preprocessed raw HTML
}
markdown_input_html = cleaned_html # Default to cleaned_html
try:
# Get the appropriate lambda function, default to returning cleaned_html if key not found
source_lambda = html_source_selector.get(selected_html_source, lambda: cleaned_html)
# Execute the lambda to get the selected HTML
markdown_input_html = source_lambda()
# Log which source is being used (optional, but helpful for debugging)
# if self.logger and verbose:
# actual_source_used = selected_html_source if selected_html_source in html_source_selector else 'cleaned_html (default)'
# self.logger.debug(f"Using '{actual_source_used}' as source for Markdown generation for {url}", tag="MARKDOWN_SRC")
except Exception as e:
# Handle potential errors, especially from preprocess_html_for_schema
if self.logger:
self.logger.warning(
f"Error getting/processing '{selected_html_source}' for markdown source: {e}. Falling back to cleaned_html.",
tag="MARKDOWN_SRC"
)
# Ensure markdown_input_html is still the default cleaned_html in case of error
markdown_input_html = cleaned_html
# --- END: HTML SOURCE SELECTION ---
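# A minimal sketch of driving this selection from user code; both names are
# already imported in this module, and the valid keys mirror the dispatch dict
# above ("raw_html", "cleaned_html", "fit_html"):
example_config = CrawlerRunConfig(
    markdown_generator=DefaultMarkdownGenerator(content_source="fit_html")
)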
# Uncomment if by default we want to use PruningContentFilter
# if not config.content_filter and not markdown_generator.content_filter:
# markdown_generator.content_filter = PruningContentFilter()
markdown_result: MarkdownGenerationResult = (
markdown_generator.generate_markdown(
input_html=markdown_input_html,
base_url=params.get("redirected_url", url)
cleaned_html=cleaned_html,
base_url=url,
# html2text_options=kwargs.get('html2text', {})
)
)
# Log processing completion
self.logger.url_status(
url=_url,
success=True,
timing=int((time.perf_counter() - t1) * 1000) / 1000,
tag="SCRAPE"
self.logger.info(
message="{url:.50}... | Time: {timing}s",
tag="SCRAPE",
params={"url": _url, "timing": int((time.perf_counter() - t1) * 1000) / 1000},
)
# self.logger.info(
# message="{url:.50}... | Time: {timing}s",
# tag="SCRAPE",
# params={"url": _url, "timing": int((time.perf_counter() - t1) * 1000) / 1000},
# )
################################
# Structured Content Extraction #
@@ -616,6 +620,10 @@ class AsyncWebCrawler:
params={"url": _url, "timing": time.perf_counter() - t1},
)
# Handle screenshot and PDF data
screenshot_data = None if not screenshot else screenshot
pdf_data = None if not pdf_data else pdf_data
# Apply HTML formatting if requested
if config.prettiify:
cleaned_html = fast_format_html(cleaned_html)
@@ -639,7 +647,7 @@ class AsyncWebCrawler:
async def arun_many(
self,
urls: List[str],
config: Optional[CrawlerRunConfig] = None,
dispatcher: Optional[BaseDispatcher] = None,
# Legacy parameters maintained for backwards compatibility
# word_count_threshold=MIN_WORD_THRESHOLD,
@@ -653,8 +661,8 @@ class AsyncWebCrawler:
# pdf: bool = False,
# user_agent: str = None,
# verbose=True,
**kwargs,
) -> RunManyReturn:
"""
Runs the crawler for multiple URLs concurrently using a configurable dispatcher strategy.
@@ -710,32 +718,37 @@ class AsyncWebCrawler:
def transform_result(task_result):
return (
setattr(
task_result.result,
"dispatch_result",
DispatchResult(
task_id=task_result.task_id,
memory_usage=task_result.memory_usage,
peak_memory=task_result.peak_memory,
start_time=task_result.start_time,
end_time=task_result.end_time,
error_message=task_result.error_message,
),
setattr(task_result.result, 'dispatch_result',
DispatchResult(
task_id=task_result.task_id,
memory_usage=task_result.memory_usage,
peak_memory=task_result.peak_memory,
start_time=task_result.start_time,
end_time=task_result.end_time,
error_message=task_result.error_message,
)
) or task_result.result
)
or task_result.result
)
stream = config.stream
if stream:
async def result_transformer():
async for task_result in dispatcher.run_urls_stream(
crawler=self, urls=urls, config=config
):
async for task_result in dispatcher.run_urls_stream(crawler=self, urls=urls, config=config):
yield transform_result(task_result)
return result_transformer()
else:
_results = await dispatcher.run_urls(crawler=self, urls=urls, config=config)
return [transform_result(res) for res in _results]
async def aclear_cache(self):
"""Clear the cache database."""
await async_db_manager.cleanup()
async def aflush_cache(self):
"""Flush the cache database."""
await async_db_manager.aflush_db()
async def aget_cache_size(self):
"""Get the total number of cached items."""
return await async_db_manager.aget_total_count()

View File

@@ -76,51 +76,6 @@ class ManagedBrowser:
_cleanup(): Terminates the browser process and removes the temporary directory.
create_profile(): Static method to create a user profile by launching a browser for user interaction.
"""
@staticmethod
def build_browser_flags(config: BrowserConfig) -> List[str]:
"""Common CLI flags for launching Chromium"""
flags = [
"--disable-gpu",
"--disable-gpu-compositing",
"--disable-software-rasterizer",
"--no-sandbox",
"--disable-dev-shm-usage",
"--no-first-run",
"--no-default-browser-check",
"--disable-infobars",
"--window-position=0,0",
"--ignore-certificate-errors",
"--ignore-certificate-errors-spki-list",
"--disable-blink-features=AutomationControlled",
"--window-position=400,0",
"--disable-renderer-backgrounding",
"--disable-ipc-flooding-protection",
"--force-color-profile=srgb",
"--mute-audio",
"--disable-background-timer-throttling",
]
if config.light_mode:
flags.extend(BROWSER_DISABLE_OPTIONS)
if config.text_mode:
flags.extend([
"--blink-settings=imagesEnabled=false",
"--disable-remote-fonts",
"--disable-images",
"--disable-javascript",
"--disable-software-rasterizer",
"--disable-dev-shm-usage",
])
# proxy support
if config.proxy:
flags.append(f"--proxy-server={config.proxy}")
elif config.proxy_config:
creds = ""
if config.proxy_config.username and config.proxy_config.password:
creds = f"{config.proxy_config.username}:{config.proxy_config.password}@"
flags.append(f"--proxy-server={creds}{config.proxy_config.server}")
# dedupe
return list(dict.fromkeys(flags))
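# The dict.fromkeys(...) dedupe above keeps first-seen order, unlike set():
assert list(dict.fromkeys(["--no-sandbox", "--mute-audio", "--no-sandbox"])) == [
    "--no-sandbox",
    "--mute-audio",
]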
browser_type: str
user_data_dir: str
@@ -139,7 +94,6 @@ class ManagedBrowser:
host: str = "localhost",
debugging_port: int = 9222,
cdp_url: Optional[str] = None,
browser_config: Optional[BrowserConfig] = None,
):
"""
Initialize the ManagedBrowser instance.
@@ -155,19 +109,17 @@ class ManagedBrowser:
host (str): Host for debugging the browser. Default: "localhost".
debugging_port (int): Port for debugging the browser. Default: 9222.
cdp_url (str or None): CDP URL to connect to the browser. Default: None.
browser_config (BrowserConfig): Configuration object containing all browser settings. Default: None.
"""
self.browser_type = browser_config.browser_type
self.user_data_dir = browser_config.user_data_dir
self.headless = browser_config.headless
self.browser_type = browser_type
self.user_data_dir = user_data_dir
self.headless = headless
self.browser_process = None
self.temp_dir = None
self.debugging_port = browser_config.debugging_port
self.host = browser_config.host
self.debugging_port = debugging_port
self.host = host
self.logger = logger
self.shutting_down = False
self.cdp_url = browser_config.cdp_url
self.browser_config = browser_config
self.cdp_url = cdp_url
async def start(self) -> str:
"""
@@ -190,66 +142,20 @@ class ManagedBrowser:
# Get browser path and args based on OS and browser type
# browser_path = self._get_browser_path()
args = await self._get_browser_args()
if self.browser_config.extra_args:
args.extend(self.browser_config.extra_args)
# Start browser process
try:
# Use DETACHED_PROCESS flag on Windows to fully detach the process
# On Unix, we'll use preexec_fn=os.setpgrp to start the process in a new process group
if sys.platform == "win32":
self.browser_process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
)
else:
self.browser_process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=os.setpgrp # Start in a new process group
)
# We'll monitor for a short time to make sure it starts properly, but won't keep monitoring
await asyncio.sleep(0.5) # Give browser time to start
await self._initial_startup_check()
self.browser_process = subprocess.Popen(
args, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
# Monitor browser process output for errors
asyncio.create_task(self._monitor_browser_process())
await asyncio.sleep(2) # Give browser time to start
return f"http://{self.host}:{self.debugging_port}"
except Exception as e:
await self.cleanup()
raise Exception(f"Failed to start browser: {e}")
async def _initial_startup_check(self):
"""
Perform a quick check to make sure the browser started successfully.
This only runs once at startup rather than continuously monitoring.
"""
if not self.browser_process:
return
# Check that process started without immediate termination
await asyncio.sleep(0.5)
if self.browser_process.poll() is not None:
# Process already terminated
stdout, stderr = b"", b""
try:
stdout, stderr = self.browser_process.communicate(timeout=0.5)
except subprocess.TimeoutExpired:
pass
self.logger.error(
message="Browser process terminated during startup | Code: {code} | STDOUT: {stdout} | STDERR: {stderr}",
tag="ERROR",
params={
"code": self.browser_process.returncode,
"stdout": stdout.decode() if stdout else "",
"stderr": stderr.decode() if stderr else "",
},
)
async def _monitor_browser_process(self):
"""
Monitor the browser process for unexpected termination.
@@ -261,7 +167,6 @@ class ManagedBrowser:
4. If any other error occurs, log the error message.
Note: This method should be called in a separate task to avoid blocking the main event loop.
This is DEPRECATED and should not be used for builtin browsers that need to outlive the Python process.
"""
if self.browser_process:
try:
@@ -325,29 +230,29 @@ class ManagedBrowser:
return browser_path
async def _get_browser_args(self) -> List[str]:
"""Returns full CLI args for launching the browser"""
base = [await self._get_browser_path()]
"""Returns browser-specific command line arguments"""
base_args = [await self._get_browser_path()]
if self.browser_type == "chromium":
flags = [
args = [
f"--remote-debugging-port={self.debugging_port}",
f"--user-data-dir={self.user_data_dir}",
]
if self.headless:
flags.append("--headless=new")
# merge common launch flags
flags.extend(self.build_browser_flags(self.browser_config))
args.append("--headless=new")
elif self.browser_type == "firefox":
flags = [
args = [
"--remote-debugging-port",
str(self.debugging_port),
"--profile",
self.user_data_dir,
]
if self.headless:
flags.append("--headless")
args.append("--headless")
else:
raise NotImplementedError(f"Browser type {self.browser_type} not supported")
return base + flags
return base_args + args
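# Illustratively (path and port are example values), for chromium with
# headless=True the returned list resembles:
#     ["/usr/bin/chromium", "--remote-debugging-port=9222",
#      "--user-data-dir=/tmp/profile", "--headless=new"]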
async def cleanup(self):
"""Cleanup browser process and temporary directory"""
@@ -356,33 +261,22 @@ class ManagedBrowser:
if self.browser_process:
try:
# For builtin browsers that should persist, we should check if it's a detached process
# Only terminate if we have proper control over the process
if self.browser_process.poll() is None:  # poll() is None only while running; `not poll()` would also match exit code 0
# Process is still running
self.browser_process.terminate()
# Wait for process to end gracefully
for _ in range(10): # 10 attempts, 100ms each
if self.browser_process.poll() is not None:
break
await asyncio.sleep(0.1)
self.browser_process.terminate()
# Wait for process to end gracefully
for _ in range(10): # 10 attempts, 100ms each
if self.browser_process.poll() is not None:
break
await asyncio.sleep(0.1)
# Force kill if still running
if self.browser_process.poll() is None:
if sys.platform == "win32":
# On Windows we might need taskkill for detached processes
try:
subprocess.run(["taskkill", "/F", "/PID", str(self.browser_process.pid)])
except Exception:
self.browser_process.kill()
else:
self.browser_process.kill()
await asyncio.sleep(0.1) # Brief wait for kill to take effect
# Force kill if still running
if self.browser_process.poll() is None:
self.browser_process.kill()
await asyncio.sleep(0.1) # Brief wait for kill to take effect
except Exception as e:
self.logger.error(
message="Error terminating browser: {error}",
tag="ERROR",
tag="ERROR",
params={"error": str(e)},
)
@@ -485,14 +379,7 @@ class BrowserManager:
sessions (dict): Dictionary to store session information
session_ttl (int): Session timeout in seconds
"""
_playwright_instance = None
@classmethod
async def get_playwright(cls):
from playwright.async_api import async_playwright
if cls._playwright_instance is None:  # reuse the cached driver instead of starting a new one per call
cls._playwright_instance = await async_playwright().start()
return cls._playwright_instance
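# Usage sketch (illustrative): share one Playwright driver across managers.
#     pw = await BrowserManager.get_playwright()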
def __init__(self, browser_config: BrowserConfig, logger=None):
"""
@@ -528,7 +415,6 @@ class BrowserManager:
logger=self.logger,
debugging_port=self.config.debugging_port,
cdp_url=self.config.cdp_url,
browser_config=self.config,
)
async def start(self):
@@ -543,12 +429,10 @@ class BrowserManager:
Note: This method should be called in a separate task to avoid blocking the main event loop.
"""
if self.playwright is not None:
await self.close()
from playwright.async_api import async_playwright
if self.playwright is None:
from playwright.async_api import async_playwright
self.playwright = await async_playwright().start()
self.playwright = await async_playwright().start()
if self.config.cdp_url or self.config.use_managed_browser:
self.config.use_managed_browser = True
@@ -559,6 +443,19 @@ class BrowserManager:
self.default_context = contexts[0]
else:
self.default_context = await self.create_browser_context()
# self.default_context = await self.browser.new_context(
# viewport={
# "width": self.config.viewport_width,
# "height": self.config.viewport_height,
# },
# storage_state=self.config.storage_state,
# user_agent=self.config.headers.get(
# "User-Agent", self.config.user_agent
# ),
# accept_downloads=self.config.accept_downloads,
# ignore_https_errors=self.config.ignore_https_errors,
# java_script_enabled=self.config.java_script_enabled,
# )
await self.setup_context(self.default_context)
else:
browser_args = self._build_browser_args()
@@ -573,7 +470,6 @@ class BrowserManager:
self.default_context = self.browser
def _build_browser_args(self) -> dict:
"""Build browser launch arguments from config."""
args = [
@@ -617,9 +513,6 @@ class BrowserManager:
if self.config.extra_args:
args.extend(self.config.extra_args)
# Deduplicate args
args = list(dict.fromkeys(args))
browser_args = {"headless": self.config.headless, "args": args}
if self.config.chrome_channel:
@@ -715,7 +608,7 @@ class BrowserManager:
"name": "cookiesEnabled",
"value": "true",
"url": crawlerRunConfig.url
if crawlerRunConfig and crawlerRunConfig.url
if crawlerRunConfig
else "https://crawl4ai.com/",
}
]
@@ -834,23 +727,6 @@ class BrowserManager:
# Update context settings with text mode settings
context_settings.update(text_mode_settings)
# inject locale / tz / geo if user provided them
if crawlerRunConfig:
if crawlerRunConfig.locale:
context_settings["locale"] = crawlerRunConfig.locale
if crawlerRunConfig.timezone_id:
context_settings["timezone_id"] = crawlerRunConfig.timezone_id
if crawlerRunConfig.geolocation:
context_settings["geolocation"] = {
"latitude": crawlerRunConfig.geolocation.latitude,
"longitude": crawlerRunConfig.geolocation.longitude,
"accuracy": crawlerRunConfig.geolocation.accuracy,
}
# ensure geolocation permission
perms = context_settings.get("permissions", [])
perms.append("geolocation")
context_settings["permissions"] = perms
# Create and return the context with all settings
context = await self.browser.new_context(**context_settings)
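# Illustratively (all values hypothetical), with locale, timezone, and
# geolocation injected above, the expanded call resembles:
#     await self.browser.new_context(
#         locale="en-US", timezone_id="America/New_York",
#         geolocation={"latitude": 40.7, "longitude": -74.0, "accuracy": 10.0},
#         permissions=["geolocation"])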
@@ -883,10 +759,6 @@ class BrowserManager:
"semaphore_count",
"url"
]
# Do NOT exclude locale, timezone_id, or geolocation as these DO affect browser context
# and should cause a new context to be created if they change
for key in ephemeral_keys:
if key in config_dict:
del config_dict[key]

View File

@@ -12,10 +12,7 @@ import sys
import datetime
import uuid
import shutil
import json
import subprocess
import time
from typing import List, Dict, Optional, Any, Tuple
from typing import List, Dict, Optional, Any
from colorama import Fore, Style, init
from .async_configs import BrowserConfig
@@ -59,11 +56,6 @@ class BrowserProfiler:
# Ensure profiles directory exists
self.profiles_dir = os.path.join(get_home_folder(), "profiles")
os.makedirs(self.profiles_dir, exist_ok=True)
# Builtin browser config file
self.builtin_browser_dir = os.path.join(get_home_folder(), "builtin-browser")
self.builtin_config_file = os.path.join(self.builtin_browser_dir, "browser_config.json")
os.makedirs(self.builtin_browser_dir, exist_ok=True)
async def create_profile(self,
profile_name: Optional[str] = None,
@@ -555,12 +547,12 @@ class BrowserProfiler:
else:
self.logger.error(f"Invalid choice. Please enter a number between 1 and {exit_option}.", tag="MENU")
async def launch_standalone_browser(self,
browser_type: str = "chromium",
user_data_dir: Optional[str] = None,
debugging_port: int = 9222,
headless: bool = False,
save_as_builtin: bool = False) -> Optional[str]:
headless: bool = False) -> Optional[str]:
"""
Launch a standalone browser with CDP debugging enabled and keep it running
until the user presses 'q'. Returns and displays the CDP URL.
@@ -774,201 +766,4 @@ class BrowserProfiler:
# Return the CDP URL
return cdp_url
async def launch_builtin_browser(self,
browser_type: str = "chromium",
debugging_port: int = 9222,
headless: bool = True) -> Optional[str]:
"""
Launch a browser in the background for use as the builtin browser.
Args:
browser_type (str): Type of browser to launch ('chromium' or 'firefox')
debugging_port (int): Port to use for CDP debugging
headless (bool): Whether to run in headless mode
Returns:
str: CDP URL for the browser, or None if launch failed
"""
# Check if there's an existing browser still running
browser_info = self.get_builtin_browser_info()
if browser_info and self._is_browser_running(browser_info.get('pid')):
self.logger.info("Builtin browser is already running", tag="BUILTIN")
return browser_info.get('cdp_url')
# Create a user data directory for the builtin browser
user_data_dir = os.path.join(self.builtin_browser_dir, "user_data")
os.makedirs(user_data_dir, exist_ok=True)
# Create managed browser instance
managed_browser = ManagedBrowser(
browser_type=browser_type,
user_data_dir=user_data_dir,
headless=headless,
logger=self.logger,
debugging_port=debugging_port
)
try:
# Start the browser
await managed_browser.start()
# Check if browser started successfully
browser_process = managed_browser.browser_process
if not browser_process:
self.logger.error("Failed to start browser process.", tag="BUILTIN")
return None
# Get CDP URL
cdp_url = f"http://localhost:{debugging_port}"
# Try to verify browser is responsive by fetching version info
import aiohttp
json_url = f"{cdp_url}/json/version"
config_json = None
try:
async with aiohttp.ClientSession() as session:
for _ in range(10): # Try multiple times
try:
async with session.get(json_url) as response:
if response.status == 200:
config_json = await response.json()
break
except Exception:
pass
await asyncio.sleep(0.5)
except Exception as e:
self.logger.warning(f"Could not verify browser: {str(e)}", tag="BUILTIN")
# Save browser info
browser_info = {
'pid': browser_process.pid,
'cdp_url': cdp_url,
'user_data_dir': user_data_dir,
'browser_type': browser_type,
'debugging_port': debugging_port,
'start_time': time.time(),
'config': config_json
}
with open(self.builtin_config_file, 'w') as f:
json.dump(browser_info, f, indent=2)
# Detach from the browser process - don't keep any references
# This is important to allow the Python script to exit while the browser continues running
# We'll just record the PID and other info, and the browser will run independently
managed_browser.browser_process = None
self.logger.success(f"Builtin browser launched at CDP URL: {cdp_url}", tag="BUILTIN")
return cdp_url
except Exception as e:
self.logger.error(f"Error launching builtin browser: {str(e)}", tag="BUILTIN")
if managed_browser:
await managed_browser.cleanup()
return None
def get_builtin_browser_info(self) -> Optional[Dict[str, Any]]:
"""
Get information about the builtin browser.
Returns:
dict: Browser information or None if no builtin browser is configured
"""
if not os.path.exists(self.builtin_config_file):
return None
try:
with open(self.builtin_config_file, 'r') as f:
browser_info = json.load(f)
# Check if the browser is still running
if not self._is_browser_running(browser_info.get('pid')):
self.logger.warning("Builtin browser is not running", tag="BUILTIN")
return None
return browser_info
except Exception as e:
self.logger.error(f"Error reading builtin browser config: {str(e)}", tag="BUILTIN")
return None
def _is_browser_running(self, pid: Optional[int]) -> bool:
"""Check if a process with the given PID is running"""
if not pid:
return False
try:
# Check if the process exists
if sys.platform == "win32":
process = subprocess.run(["tasklist", "/FI", f"PID eq {pid}"],
capture_output=True, text=True)
return str(pid) in process.stdout
else:
# Unix-like systems
os.kill(pid, 0) # This doesn't actually kill the process, just checks if it exists
return True
except (ProcessLookupError, PermissionError, OSError):
return False
async def kill_builtin_browser(self) -> bool:
"""
Kill the builtin browser if it's running.
Returns:
bool: True if the browser was killed, False otherwise
"""
browser_info = self.get_builtin_browser_info()
if not browser_info:
self.logger.warning("No builtin browser found", tag="BUILTIN")
return False
pid = browser_info.get('pid')
if not pid:
return False
try:
if sys.platform == "win32":
subprocess.run(["taskkill", "/F", "/PID", str(pid)], check=True)
else:
os.kill(pid, signal.SIGTERM)
# Wait for termination
for _ in range(5):
if not self._is_browser_running(pid):
break
await asyncio.sleep(0.5)
else:
# Force kill if still running
os.kill(pid, signal.SIGKILL)
# Remove config file
if os.path.exists(self.builtin_config_file):
os.unlink(self.builtin_config_file)
self.logger.success("Builtin browser terminated", tag="BUILTIN")
return True
except Exception as e:
self.logger.error(f"Error killing builtin browser: {str(e)}", tag="BUILTIN")
return False
async def get_builtin_browser_status(self) -> Dict[str, Any]:
"""
Get status information about the builtin browser.
Returns:
dict: Status information with running, cdp_url, and info fields
"""
browser_info = self.get_builtin_browser_info()
if not browser_info:
return {
'running': False,
'cdp_url': None,
'info': None
}
return {
'running': True,
'cdp_url': browser_info.get('cdp_url'),
'info': browser_info
}
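# Usage sketch (illustrative, assumes an asyncio context):
#     profiler = BrowserProfiler()
#     status = await profiler.get_builtin_browser_status()
#     if status["running"]:
#         print(f"CDP endpoint: {status['cdp_url']}")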

View File

@@ -20,16 +20,13 @@ from crawl4ai import (
BrowserConfig,
CrawlerRunConfig,
LLMExtractionStrategy,
LXMLWebScrapingStrategy,
JsonCssExtractionStrategy,
JsonXPathExtractionStrategy,
BM25ContentFilter,
PruningContentFilter,
BrowserProfiler,
DefaultMarkdownGenerator,
LLMConfig
)
from crawl4ai.config import USER_SETTINGS
from litellm import completion
from pathlib import Path
@@ -178,12 +175,8 @@ def show_examples():
# CSS-based extraction
crwl https://example.com -e extract_css.yml -s css_schema.json -o json
# LLM-based extraction with config file
# LLM-based extraction
crwl https://example.com -e extract_llm.yml -s llm_schema.json -o json
# Quick LLM-based JSON extraction (prompts for LLM provider first time)
crwl https://example.com -j # Auto-extracts structured data
crwl https://example.com -j "Extract product details including name, price, and features" # With specific instructions
3⃣ Direct Parameters:
# Browser settings
@@ -285,7 +278,7 @@ llm_schema.json:
# Combine configs with direct parameters
crwl https://example.com -B browser.yml -b "headless=false,viewport_width=1920"
# Full extraction pipeline with config files
# Full extraction pipeline
crwl https://example.com \\
-B browser.yml \\
-C crawler.yml \\
@@ -293,12 +286,6 @@ llm_schema.json:
-s llm_schema.json \\
-o json \\
-v
# Quick LLM-based extraction with specific instructions
crwl https://amazon.com/dp/B01DFKC2SO \\
-j "Extract product title, current price, original price, rating, and all product specifications" \\
-b "headless=true,viewport_width=1280" \\
-v
# Content filtering with BM25
crwl https://example.com \\
@@ -340,14 +327,6 @@ For more documentation visit: https://github.com/unclecode/crawl4ai
- google/gemini-pro
See full list of providers: https://docs.litellm.ai/docs/providers
# Set default LLM provider and token in advance
crwl config set DEFAULT_LLM_PROVIDER "anthropic/claude-3-sonnet"
crwl config set DEFAULT_LLM_PROVIDER_TOKEN "your-api-token-here"
# Set default browser behavior
crwl config set BROWSER_HEADLESS false # Always show browser window
crwl config set USER_AGENT_MODE random # Use random user agent
9⃣ Profile Management:
# Launch interactive profile manager
@@ -362,32 +341,6 @@ For more documentation visit: https://github.com/unclecode/crawl4ai
crwl profiles # Select "Create new profile" option
# 2. Then use that profile to crawl authenticated content:
crwl https://site-requiring-login.com/dashboard -p my-profile-name
🔄 Builtin Browser Management:
# Start a builtin browser (runs in the background)
crwl browser start
# Check builtin browser status
crwl browser status
# Open a visible window to see the browser
crwl browser view --url https://example.com
# Stop the builtin browser
crwl browser stop
# Restart with different options
crwl browser restart --browser-type chromium --port 9223 --no-headless
# Use the builtin browser in your code
# (Just set browser_mode="builtin" in your BrowserConfig)
browser_config = BrowserConfig(
browser_mode="builtin",
headless=True
)
# Usage via CLI:
crwl https://example.com -b "browser_mode=builtin"
"""
click.echo(examples)
@@ -622,307 +575,6 @@ def cli():
pass
@cli.group("browser")
def browser_cmd():
"""Manage browser instances for Crawl4AI
Commands to manage browser instances for Crawl4AI, including:
- status - Check status of the builtin browser
- start - Start a new builtin browser
- stop - Stop the running builtin browser
- restart - Restart the builtin browser
"""
pass
@browser_cmd.command("status")
def browser_status_cmd():
"""Show status of the builtin browser"""
profiler = BrowserProfiler()
try:
status = anyio.run(profiler.get_builtin_browser_status)
if status["running"]:
info = status["info"]
console.print(Panel(
f"[green]Builtin browser is running[/green]\n\n"
f"CDP URL: [cyan]{info['cdp_url']}[/cyan]\n"
f"Process ID: [yellow]{info['pid']}[/yellow]\n"
f"Browser type: [blue]{info['browser_type']}[/blue]\n"
f"User data directory: [magenta]{info['user_data_dir']}[/magenta]\n"
f"Started: [cyan]{time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(info['start_time']))}[/cyan]",
title="Builtin Browser Status",
border_style="green"
))
else:
console.print(Panel(
"[yellow]Builtin browser is not running[/yellow]\n\n"
"Use 'crwl browser start' to start a builtin browser",
title="Builtin Browser Status",
border_style="yellow"
))
except Exception as e:
console.print(f"[red]Error checking browser status: {str(e)}[/red]")
sys.exit(1)
@browser_cmd.command("start")
@click.option("--browser-type", "-b", type=click.Choice(["chromium", "firefox"]), default="chromium",
help="Browser type (default: chromium)")
@click.option("--port", "-p", type=int, default=9222, help="Debugging port (default: 9222)")
@click.option("--headless/--no-headless", default=True, help="Run browser in headless mode")
def browser_start_cmd(browser_type: str, port: int, headless: bool):
"""Start a builtin browser instance
This will start a persistent browser instance that can be used by Crawl4AI
by setting browser_mode="builtin" in BrowserConfig.
"""
profiler = BrowserProfiler()
# First check if browser is already running
status = anyio.run(profiler.get_builtin_browser_status)
if status["running"]:
console.print(Panel(
"[yellow]Builtin browser is already running[/yellow]\n\n"
f"CDP URL: [cyan]{status['cdp_url']}[/cyan]\n\n"
"Use 'crwl browser restart' to restart the browser",
title="Builtin Browser Start",
border_style="yellow"
))
return
try:
console.print(Panel(
f"[cyan]Starting builtin browser[/cyan]\n\n"
f"Browser type: [green]{browser_type}[/green]\n"
f"Debugging port: [yellow]{port}[/yellow]\n"
f"Headless: [cyan]{'Yes' if headless else 'No'}[/cyan]",
title="Builtin Browser Start",
border_style="cyan"
))
cdp_url = anyio.run(
profiler.launch_builtin_browser,
browser_type,
port,
headless
)
if cdp_url:
console.print(Panel(
f"[green]Builtin browser started successfully[/green]\n\n"
f"CDP URL: [cyan]{cdp_url}[/cyan]\n\n"
"This browser will be used automatically when setting browser_mode='builtin'",
title="Builtin Browser Start",
border_style="green"
))
else:
console.print(Panel(
"[red]Failed to start builtin browser[/red]",
title="Builtin Browser Start",
border_style="red"
))
sys.exit(1)
except Exception as e:
console.print(f"[red]Error starting builtin browser: {str(e)}[/red]")
sys.exit(1)
@browser_cmd.command("stop")
def browser_stop_cmd():
"""Stop the running builtin browser"""
profiler = BrowserProfiler()
try:
# First check if browser is running
status = anyio.run(profiler.get_builtin_browser_status)
if not status["running"]:
console.print(Panel(
"[yellow]No builtin browser is currently running[/yellow]",
title="Builtin Browser Stop",
border_style="yellow"
))
return
console.print(Panel(
"[cyan]Stopping builtin browser...[/cyan]",
title="Builtin Browser Stop",
border_style="cyan"
))
success = anyio.run(profiler.kill_builtin_browser)
if success:
console.print(Panel(
"[green]Builtin browser stopped successfully[/green]",
title="Builtin Browser Stop",
border_style="green"
))
else:
console.print(Panel(
"[red]Failed to stop builtin browser[/red]",
title="Builtin Browser Stop",
border_style="red"
))
sys.exit(1)
except Exception as e:
console.print(f"[red]Error stopping builtin browser: {str(e)}[/red]")
sys.exit(1)
@browser_cmd.command("view")
@click.option("--url", "-u", help="URL to navigate to (defaults to about:blank)")
def browser_view_cmd(url: Optional[str]):
"""
Open a visible window of the builtin browser
This command connects to the running builtin browser and opens a visible window,
allowing you to see what the browser is currently viewing or navigate to a URL.
"""
profiler = BrowserProfiler()
try:
# First check if browser is running
status = anyio.run(profiler.get_builtin_browser_status)
if not status["running"]:
console.print(Panel(
"[yellow]No builtin browser is currently running[/yellow]\n\n"
"Use 'crwl browser start' to start a builtin browser first",
title="Builtin Browser View",
border_style="yellow"
))
return
info = status["info"]
cdp_url = info["cdp_url"]
console.print(Panel(
f"[cyan]Opening visible window connected to builtin browser[/cyan]\n\n"
f"CDP URL: [green]{cdp_url}[/green]\n"
f"URL to load: [yellow]{url or 'about:blank'}[/yellow]",
title="Builtin Browser View",
border_style="cyan"
))
# Use the CDP URL to launch a new visible window
import subprocess
import os
# Determine the browser command based on platform
if sys.platform == "darwin": # macOS
browser_cmd = ["/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"]
elif sys.platform == "win32": # Windows
browser_cmd = ["C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe"]
else: # Linux
browser_cmd = ["google-chrome"]
# Add arguments
browser_args = [
f"--remote-debugging-port={info['debugging_port']}",
"--remote-debugging-address=localhost",
"--no-first-run",
"--no-default-browser-check"
]
# Add URL if provided
if url:
browser_args.append(url)
# Launch browser
try:
subprocess.Popen(browser_cmd + browser_args)
console.print("[green]Browser window opened. Close it when finished viewing.[/green]")
except Exception as e:
console.print(f"[red]Error launching browser: {str(e)}[/red]")
console.print(f"[yellow]Try connecting manually to {cdp_url} in Chrome or using the '--remote-debugging-port' flag.[/yellow]")
except Exception as e:
console.print(f"[red]Error viewing builtin browser: {str(e)}[/red]")
sys.exit(1)
@browser_cmd.command("restart")
@click.option("--browser-type", "-b", type=click.Choice(["chromium", "firefox"]), default=None,
help="Browser type (defaults to same as current)")
@click.option("--port", "-p", type=int, default=None, help="Debugging port (defaults to same as current)")
@click.option("--headless/--no-headless", default=None, help="Run browser in headless mode")
def browser_restart_cmd(browser_type: Optional[str], port: Optional[int], headless: Optional[bool]):
"""Restart the builtin browser
Stops the current builtin browser if running and starts a new one.
By default, uses the same configuration as the current browser.
"""
profiler = BrowserProfiler()
try:
# First check if browser is running and get its config
status = anyio.run(profiler.get_builtin_browser_status)
current_config = {}
if status["running"]:
info = status["info"]
current_config = {
"browser_type": info["browser_type"],
"port": info["debugging_port"],
"headless": True # Default assumption
}
# Stop the browser
console.print(Panel(
"[cyan]Stopping current builtin browser...[/cyan]",
title="Builtin Browser Restart",
border_style="cyan"
))
success = anyio.run(profiler.kill_builtin_browser)
if not success:
console.print(Panel(
"[red]Failed to stop current browser[/red]",
title="Builtin Browser Restart",
border_style="red"
))
sys.exit(1)
# Use provided options or defaults from current config
browser_type = browser_type or current_config.get("browser_type", "chromium")
port = port or current_config.get("port", 9222)
headless = headless if headless is not None else current_config.get("headless", True)
# Start a new browser
console.print(Panel(
f"[cyan]Starting new builtin browser[/cyan]\n\n"
f"Browser type: [green]{browser_type}[/green]\n"
f"Debugging port: [yellow]{port}[/yellow]\n"
f"Headless: [cyan]{'Yes' if headless else 'No'}[/cyan]",
title="Builtin Browser Restart",
border_style="cyan"
))
cdp_url = anyio.run(
profiler.launch_builtin_browser,
browser_type,
port,
headless
)
if cdp_url:
console.print(Panel(
f"[green]Builtin browser restarted successfully[/green]\n\n"
f"CDP URL: [cyan]{cdp_url}[/cyan]",
title="Builtin Browser Restart",
border_style="green"
))
else:
console.print(Panel(
"[red]Failed to restart builtin browser[/red]",
title="Builtin Browser Restart",
border_style="red"
))
sys.exit(1)
except Exception as e:
console.print(f"[red]Error restarting builtin browser: {str(e)}[/red]")
sys.exit(1)
@cli.command("cdp")
@click.option("--user-data-dir", "-d", help="Directory to use for browser data (will be created if it doesn't exist)")
@click.option("--port", "-P", type=int, default=9222, help="Debugging port (default: 9222)")
@@ -1004,19 +656,17 @@ def cdp_cmd(user_data_dir: Optional[str], port: int, browser_type: str, headless
@click.option("--crawler-config", "-C", type=click.Path(exists=True), help="Crawler config file (YAML/JSON)")
@click.option("--filter-config", "-f", type=click.Path(exists=True), help="Content filter config file")
@click.option("--extraction-config", "-e", type=click.Path(exists=True), help="Extraction strategy config file")
@click.option("--json-extract", "-j", is_flag=False, flag_value="", default=None, help="Extract structured data using LLM with optional description")
@click.option("--schema", "-s", type=click.Path(exists=True), help="JSON schema for extraction")
@click.option("--browser", "-b", type=str, callback=parse_key_values, help="Browser parameters as key1=value1,key2=value2")
@click.option("--crawler", "-c", type=str, callback=parse_key_values, help="Crawler parameters as key1=value1,key2=value2")
@click.option("--output", "-o", type=click.Choice(["all", "json", "markdown", "md", "markdown-fit", "md-fit"]), default="all")
@click.option("--output-file", "-O", type=click.Path(), help="Output file path (default: stdout)")
@click.option("--bypass-cache", "-b", is_flag=True, default=True, help="Bypass cache when crawling")
@click.option("--bypass-cache", is_flag=True, default=True, help="Bypass cache when crawling")
@click.option("--question", "-q", help="Ask a question about the crawled content")
@click.option("--verbose", "-v", is_flag=True)
@click.option("--profile", "-p", help="Use a specific browser profile (by name)")
def crawl_cmd(url: str, browser_config: str, crawler_config: str, filter_config: str,
extraction_config: str, json_extract: str, schema: str, browser: Dict, crawler: Dict,
output: str, output_file: str, bypass_cache: bool, question: str, verbose: bool, profile: str):
extraction_config: str, schema: str, browser: Dict, crawler: Dict,
output: str, bypass_cache: bool, question: str, verbose: bool, profile: str):
"""Crawl a website and extract content
Simple Usage:
@@ -1060,65 +710,21 @@ def crawl_cmd(url: str, browser_config: str, crawler_config: str, filter_config:
crawler_cfg = crawler_cfg.clone(**crawler)
# Handle content filter config
if filter_config or output in ["markdown-fit", "md-fit"]:
if filter_config:
filter_conf = load_config_file(filter_config)
elif not filter_config and output in ["markdown-fit", "md-fit"]:
filter_conf = {
"type": "pruning",
"query": "",
"threshold": 0.48
}
if filter_config:
filter_conf = load_config_file(filter_config)
if filter_conf["type"] == "bm25":
crawler_cfg.markdown_generator = DefaultMarkdownGenerator(
content_filter = BM25ContentFilter(
user_query=filter_conf.get("query"),
bm25_threshold=filter_conf.get("threshold", 1.0)
)
crawler_cfg.content_filter = BM25ContentFilter(
user_query=filter_conf.get("query"),
bm25_threshold=filter_conf.get("threshold", 1.0)
)
elif filter_conf["type"] == "pruning":
crawler_cfg.markdown_generator = DefaultMarkdownGenerator(
content_filter = PruningContentFilter(
user_query=filter_conf.get("query"),
threshold=filter_conf.get("threshold", 0.48)
)
crawler_cfg.content_filter = PruningContentFilter(
user_query=filter_conf.get("query"),
threshold=filter_conf.get("threshold", 0.48)
)
# Handle json-extract option (takes precedence over extraction-config)
if json_extract is not None:
# Get LLM provider and token
provider, token = setup_llm_config()
# Default sophisticated instruction for structured data extraction
default_instruction = """Analyze the web page content and extract structured data as JSON.
If the page contains a list of items with repeated patterns, extract all items in an array.
If the page is an article or contains unique content, extract a comprehensive JSON object with all relevant information.
Look at the content and its intent, identify what it offers, and locate the relevant data item(s) on the page.
Always return valid, properly formatted JSON."""
default_instruction_with_user_query = """Analyze the web page content and extract structured data as JSON, following the below instruction and explanation of schema and always return valid, properly formatted JSON. \n\nInstruction:\n\n""" + json_extract
# Determine instruction based on whether json_extract is empty or has content
instruction = default_instruction_with_user_query if json_extract else default_instruction
# Create LLM extraction strategy
crawler_cfg.extraction_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(provider=provider, api_token=token),
instruction=instruction,
schema=load_schema_file(schema), # Will be None if no schema is provided
extraction_type="schema", #if schema else "block",
apply_chunking=False,
force_json_response=True,
verbose=verbose,
)
# Set output to JSON if not explicitly specified
if output == "all":
output = "json"
# Handle extraction strategy from config file (only if json-extract wasn't used)
elif extraction_config:
# Handle extraction strategy
if extraction_config:
extract_conf = load_config_file(extraction_config)
schema_data = load_schema_file(schema)
@@ -1152,13 +758,6 @@ Always return valid, properly formatted JSON."""
# No cache
if bypass_cache:
crawler_cfg.cache_mode = CacheMode.BYPASS
crawler_cfg.scraping_strategy = LXMLWebScrapingStrategy()
config = get_global_config()
browser_cfg.verbose = config.get("VERBOSE", False)
crawler_cfg.verbose = config.get("VERBOSE", False)
# Run crawler
result : CrawlResult = anyio.run(
@@ -1177,31 +776,14 @@ Always return valid, properly formatted JSON."""
return
# Handle output
if not output_file:
if output == "all":
click.echo(json.dumps(result.model_dump(), indent=2))
elif output == "json":
print(result.extracted_content)
extracted_items = json.loads(result.extracted_content)
click.echo(json.dumps(extracted_items, indent=2))
elif output in ["markdown", "md"]:
click.echo(result.markdown.raw_markdown)
elif output in ["markdown-fit", "md-fit"]:
click.echo(result.markdown.fit_markdown)
else:
if output == "all":
with open(output_file, "w") as f:
f.write(json.dumps(result.model_dump(), indent=2))
elif output == "json":
with open(output_file, "w") as f:
f.write(result.extracted_content)
elif output in ["markdown", "md"]:
with open(output_file, "w") as f:
f.write(result.markdown.raw_markdown)
elif output in ["markdown-fit", "md-fit"]:
with open(output_file, "w") as f:
f.write(result.markdown.fit_markdown)
if output == "all":
click.echo(json.dumps(result.model_dump(), indent=2))
elif output == "json":
click.echo(json.dumps(json.loads(result.extracted_content), indent=2))
elif output in ["markdown", "md"]:
click.echo(result.markdown.raw_markdown)
elif output in ["markdown-fit", "md-fit"]:
click.echo(result.markdown.fit_markdown)
except Exception as e:
raise click.ClickException(str(e))
@@ -1211,120 +793,6 @@ def examples_cmd():
"""Show usage examples"""
show_examples()
@cli.group("config")
def config_cmd():
"""Manage global configuration settings
Commands to view and update global configuration settings:
- list: Display all current configuration settings
- get: Get the value of a specific setting
- set: Set the value of a specific setting
"""
pass
@config_cmd.command("list")
def config_list_cmd():
"""List all configuration settings"""
config = get_global_config()
table = Table(title="Crawl4AI Configuration", show_header=True, header_style="bold cyan", border_style="blue")
table.add_column("Setting", style="cyan")
table.add_column("Value", style="green")
table.add_column("Default", style="yellow")
table.add_column("Description", style="white")
for key, setting in USER_SETTINGS.items():
value = config.get(key, setting["default"])
# Handle secret values
display_value = value
if setting.get("secret", False) and value:
display_value = "********"
# Handle boolean values
if setting["type"] == "boolean":
display_value = str(value).lower()
default_value = str(setting["default"]).lower()
else:
default_value = str(setting["default"])
table.add_row(
key,
str(display_value),
default_value,
setting["description"]
)
console.print(table)
@config_cmd.command("get")
@click.argument("key", required=True)
def config_get_cmd(key: str):
"""Get a specific configuration setting"""
config = get_global_config()
# Normalize key to uppercase
key = key.upper()
if key not in USER_SETTINGS:
console.print(f"[red]Error: Unknown setting '{key}'[/red]")
return
value = config.get(key, USER_SETTINGS[key]["default"])
# Handle secret values
display_value = value
if USER_SETTINGS[key].get("secret", False) and value:
display_value = "********"
console.print(f"[cyan]{key}[/cyan] = [green]{display_value}[/green]")
console.print(f"[dim]Description: {USER_SETTINGS[key]['description']}[/dim]")
@config_cmd.command("set")
@click.argument("key", required=True)
@click.argument("value", required=True)
def config_set_cmd(key: str, value: str):
"""Set a configuration setting"""
config = get_global_config()
# Normalize key to uppercase
key = key.upper()
if key not in USER_SETTINGS:
console.print(f"[red]Error: Unknown setting '{key}'[/red]")
console.print(f"[yellow]Available settings: {', '.join(USER_SETTINGS.keys())}[/yellow]")
return
setting = USER_SETTINGS[key]
# Type conversion and validation
if setting["type"] == "boolean":
if value.lower() in ["true", "yes", "1", "y"]:
typed_value = True
elif value.lower() in ["false", "no", "0", "n"]:
typed_value = False
else:
console.print(f"[red]Error: Invalid boolean value. Use 'true' or 'false'.[/red]")
return
elif setting["type"] == "string":
typed_value = value
# Check if the value should be one of the allowed options
if "options" in setting and value not in setting["options"]:
console.print(f"[red]Error: Value must be one of: {', '.join(setting['options'])}[/red]")
return
# Update config
config[key] = typed_value
save_global_config(config)
# Handle secret values for display
display_value = typed_value
if setting.get("secret", False) and typed_value:
display_value = "********"
console.print(f"[green]Successfully set[/green] [cyan]{key}[/cyan] = [green]{display_value}[/green]")
@cli.command("profiles")
def profiles_cmd():
"""Manage browser profiles interactively
@@ -1344,7 +812,6 @@ def profiles_cmd():
@click.option("--crawler-config", "-C", type=click.Path(exists=True), help="Crawler config file (YAML/JSON)")
@click.option("--filter-config", "-f", type=click.Path(exists=True), help="Content filter config file")
@click.option("--extraction-config", "-e", type=click.Path(exists=True), help="Extraction strategy config file")
@click.option("--json-extract", "-j", is_flag=False, flag_value="", default=None, help="Extract structured data using LLM with optional description")
@click.option("--schema", "-s", type=click.Path(exists=True), help="JSON schema for extraction")
@click.option("--browser", "-b", type=str, callback=parse_key_values, help="Browser parameters as key1=value1,key2=value2")
@click.option("--crawler", "-c", type=str, callback=parse_key_values, help="Crawler parameters as key1=value1,key2=value2")
@@ -1354,7 +821,7 @@ def profiles_cmd():
@click.option("--verbose", "-v", is_flag=True)
@click.option("--profile", "-p", help="Use a specific browser profile (by name)")
def default(url: str, example: bool, browser_config: str, crawler_config: str, filter_config: str,
extraction_config: str, json_extract: str, schema: str, browser: Dict, crawler: Dict,
extraction_config: str, schema: str, browser: Dict, crawler: Dict,
output: str, bypass_cache: bool, question: str, verbose: bool, profile: str):
"""Crawl4AI CLI - Web content extraction tool
@@ -1367,15 +834,7 @@ def default(url: str, example: bool, browser_config: str, crawler_config: str, f
crwl profiles - Manage browser profiles for identity-based crawling
crwl crawl - Crawl a website with advanced options
crwl cdp - Launch browser with CDP debugging enabled
crwl browser - Manage builtin browser (start, stop, status, restart)
crwl config - Manage global configuration settings
crwl examples - Show more usage examples
Configuration Examples:
crwl config list - List all configuration settings
crwl config get DEFAULT_LLM_PROVIDER - Show current LLM provider
crwl config set VERBOSE true - Enable verbose mode globally
crwl config set BROWSER_HEADLESS false - Default to visible browser
"""
if example:
@@ -1396,8 +855,7 @@ def default(url: str, example: bool, browser_config: str, crawler_config: str, f
browser_config=browser_config,
crawler_config=crawler_config,
filter_config=filter_config,
extraction_config=extraction_config,
json_extract=json_extract,
extraction_config=extraction_config,
schema=schema,
browser=browser,
crawler=crawler,

View File

@@ -1,837 +0,0 @@
import time
import uuid
import threading
import psutil
from datetime import datetime, timedelta
from typing import Dict, Optional, List
from rich.console import Console
from rich.layout import Layout
from rich.panel import Panel
from rich.table import Table
from rich.text import Text
from rich.live import Live
from rich import box
from ..models import CrawlStatus
class TerminalUI:
"""Terminal user interface for CrawlerMonitor using rich library."""
def __init__(self, refresh_rate: float = 1.0, max_width: int = 120):
"""
Initialize the terminal UI.
Args:
refresh_rate: How often to refresh the UI (in seconds)
max_width: Maximum width of the UI in characters
"""
self.console = Console(width=max_width)
self.layout = Layout()
self.refresh_rate = refresh_rate
self.stop_event = threading.Event()
self.ui_thread = None
self.monitor = None # Will be set by CrawlerMonitor
self.max_width = max_width
# Setup layout - vertical layout (top to bottom)
self.layout.split(
Layout(name="header", size=3),
Layout(name="pipeline_status", size=10),
Layout(name="task_details", ratio=1),
Layout(name="footer", size=3) # Increased footer size to fit all content
)
def start(self, monitor):
"""Start the UI thread."""
self.monitor = monitor
self.stop_event.clear()
self.ui_thread = threading.Thread(target=self._ui_loop)
self.ui_thread.daemon = True
self.ui_thread.start()
def stop(self):
"""Stop the UI thread."""
if self.ui_thread and self.ui_thread.is_alive():
self.stop_event.set()
# Only try to join if we're not in the UI thread
# This prevents "cannot join current thread" errors
if threading.current_thread() != self.ui_thread:
self.ui_thread.join(timeout=5.0)
def _ui_loop(self):
"""Main UI rendering loop."""
import sys
import select
import termios
import tty
# Setup terminal for non-blocking input
old_settings = termios.tcgetattr(sys.stdin)
try:
tty.setcbreak(sys.stdin.fileno())
# Use Live display to render the UI
with Live(self.layout, refresh_per_second=1/self.refresh_rate, screen=True) as live:
self.live = live # Store the live display for updates
# Main UI loop
while not self.stop_event.is_set():
self._update_display()
# Check for key press (non-blocking)
if select.select([sys.stdin], [], [], 0)[0]:
key = sys.stdin.read(1)
# Check for 'q' to quit
if key == 'q':
# Signal stop but don't call monitor.stop() from UI thread
# as it would cause the thread to try to join itself
self.stop_event.set()
self.monitor.is_running = False
break
time.sleep(self.refresh_rate)
# Just check if the monitor was stopped
if not self.monitor.is_running:
break
finally:
# Restore terminal settings
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings)
def _update_display(self):
"""Update the terminal display with current statistics."""
if not self.monitor:
return
# Update crawler status panel
self.layout["header"].update(self._create_status_panel())
# Update pipeline status panel and task details panel
self.layout["pipeline_status"].update(self._create_pipeline_panel())
self.layout["task_details"].update(self._create_task_details_panel())
# Update footer
self.layout["footer"].update(self._create_footer())
def _create_status_panel(self) -> Panel:
"""Create the crawler status panel."""
summary = self.monitor.get_summary()
# Format memory status with icon
memory_status = self.monitor.get_memory_status()
memory_icon = "🟢" # Default NORMAL
if memory_status == "PRESSURE":
memory_icon = "🟠"
elif memory_status == "CRITICAL":
memory_icon = "🔴"
# Get current memory usage
current_memory = psutil.Process().memory_info().rss / (1024 * 1024) # MB
memory_percent = (current_memory / psutil.virtual_memory().total) * 100
# Format runtime
runtime = self.monitor._format_time(time.time() - self.monitor.start_time if self.monitor.start_time else 0)
# Create the status text
status_text = Text()
status_text.append(f"Web Crawler Dashboard | Runtime: {runtime} | Memory: {memory_percent:.1f}% {memory_icon}\n")
status_text.append(f"Status: {memory_status} | URLs: {summary['urls_completed']}/{summary['urls_total']} | ")
status_text.append(f"Peak Mem: {summary['peak_memory_percent']:.1f}% at {self.monitor._format_time(summary['peak_memory_time'])}")
return Panel(status_text, title="Crawler Status", border_style="blue")
def _create_pipeline_panel(self) -> Panel:
"""Create the pipeline status panel."""
summary = self.monitor.get_summary()
queue_stats = self.monitor.get_queue_stats()
# Create a table for status counts
table = Table(show_header=True, box=None)
table.add_column("Status", style="cyan")
table.add_column("Count", justify="right")
table.add_column("Percentage", justify="right")
table.add_column("Stat", style="cyan")
table.add_column("Value", justify="right")
# Calculate overall progress
progress = f"{summary['urls_completed']}/{summary['urls_total']}"
progress_percent = f"{summary['completion_percentage']:.1f}%"
# Add rows for each status
table.add_row(
"Overall Progress",
progress,
progress_percent,
"Est. Completion",
summary.get('estimated_completion_time', "N/A")
)
# Add rows for each status
status_counts = summary['status_counts']
total = summary['urls_total'] or 1 # Avoid division by zero
# Status rows
table.add_row(
"Completed",
str(status_counts.get(CrawlStatus.COMPLETED.name, 0)),
f"{status_counts.get(CrawlStatus.COMPLETED.name, 0) / total * 100:.1f}%",
"Avg. Time/URL",
f"{summary.get('avg_task_duration', 0):.2f}s"
)
table.add_row(
"Failed",
str(status_counts.get(CrawlStatus.FAILED.name, 0)),
f"{status_counts.get(CrawlStatus.FAILED.name, 0) / total * 100:.1f}%",
"Concurrent Tasks",
str(status_counts.get(CrawlStatus.IN_PROGRESS.name, 0))
)
table.add_row(
"In Progress",
str(status_counts.get(CrawlStatus.IN_PROGRESS.name, 0)),
f"{status_counts.get(CrawlStatus.IN_PROGRESS.name, 0) / total * 100:.1f}%",
"Queue Size",
str(queue_stats['total_queued'])
)
table.add_row(
"Queued",
str(status_counts.get(CrawlStatus.QUEUED.name, 0)),
f"{status_counts.get(CrawlStatus.QUEUED.name, 0) / total * 100:.1f}%",
"Max Wait Time",
f"{queue_stats['highest_wait_time']:.1f}s"
)
# Requeued is a special case as it's not a status
requeued_count = summary.get('requeued_count', 0)
table.add_row(
"Requeued",
str(requeued_count),
f"{summary.get('requeue_rate', 0):.1f}%",
"Avg Wait Time",
f"{queue_stats['avg_wait_time']:.1f}s"
)
# Add empty row for spacing
table.add_row(
"",
"",
"",
"Requeue Rate",
f"{summary.get('requeue_rate', 0):.1f}%"
)
return Panel(table, title="Pipeline Status", border_style="green")
def _create_task_details_panel(self) -> Panel:
"""Create the task details panel."""
# Create a table for task details
table = Table(show_header=True, expand=True)
table.add_column("Task ID", style="cyan", no_wrap=True, width=10)
table.add_column("URL", style="blue", ratio=3)
table.add_column("Status", style="green", width=15)
table.add_column("Memory", justify="right", width=8)
table.add_column("Peak", justify="right", width=8)
table.add_column("Duration", justify="right", width=10)
# Get all task stats
task_stats = self.monitor.get_all_task_stats()
# Add summary row
active_tasks = sum(1 for stats in task_stats.values()
if stats['status'] == CrawlStatus.IN_PROGRESS.name)
total_memory = sum(stats['memory_usage'] for stats in task_stats.values())
total_peak = sum(stats['peak_memory'] for stats in task_stats.values())
# Summary row with separators
table.add_row(
"SUMMARY",
f"Total: {len(task_stats)}",
f"Active: {active_tasks}",
f"{total_memory:.1f}",
f"{total_peak:.1f}",
"N/A"
)
# Add a separator
table.add_row("" * 10, "" * 20, "" * 10, "" * 8, "" * 8, "" * 10)
# Status icons
status_icons = {
CrawlStatus.QUEUED.name: "⏳",
CrawlStatus.IN_PROGRESS.name: "🔄",
CrawlStatus.COMPLETED.name: "✅",
CrawlStatus.FAILED.name: "❌"
}
# Calculate how many rows we can display based on available space
# We can display more rows now that we have a dedicated panel
display_count = min(len(task_stats), 20) # Display up to 20 tasks
# Add rows for each task
for task_id, stats in sorted(
list(task_stats.items())[:display_count],
# Sort: 1. IN_PROGRESS first, 2. QUEUED, 3. COMPLETED/FAILED by recency
key=lambda x: (
0 if x[1]['status'] == CrawlStatus.IN_PROGRESS.name else
1 if x[1]['status'] == CrawlStatus.QUEUED.name else
2,
-1 * (x[1].get('end_time', 0) or 0) # Most recent first
)
):
# Truncate task_id and URL for display
short_id = task_id[:8]
url = stats['url']
if len(url) > 50: # Allow longer URLs in the dedicated panel
url = url[:47] + "..."
# Format status with icon
status = f"{status_icons.get(stats['status'], '?')} {stats['status']}"
# Add row
table.add_row(
short_id,
url,
status,
f"{stats['memory_usage']:.1f}",
f"{stats['peak_memory']:.1f}",
stats['duration'] if 'duration' in stats else "0:00"
)
return Panel(table, title="Task Details", border_style="yellow")
def _create_footer(self) -> Panel:
"""Create the footer panel."""
from rich.columns import Columns
from rich.align import Align
memory_status = self.monitor.get_memory_status()
memory_icon = "🟢" # Default NORMAL
if memory_status == "PRESSURE":
memory_icon = "🟠"
elif memory_status == "CRITICAL":
memory_icon = "🔴"
# Left section - memory status
left_text = Text()
left_text.append("Memory Status: ", style="bold")
status_style = "green" if memory_status == "NORMAL" else "yellow" if memory_status == "PRESSURE" else "red bold"
left_text.append(f"{memory_icon} {memory_status}", style=status_style)
# Center section - copyright
center_text = Text("© Crawl4AI 2025 | Made by UnclecCode", style="cyan italic")
# Right section - quit instruction
right_text = Text()
right_text.append("Press ", style="bold")
right_text.append("q", style="white on blue")
right_text.append(" to quit", style="bold")
# Create columns with the three sections
footer_content = Columns(
[
Align.left(left_text),
Align.center(center_text),
Align.right(right_text)
],
expand=True
)
# Create a more visible footer panel
return Panel(
footer_content,
border_style="white",
padding=(0, 1) # Add padding for better visibility
)
class CrawlerMonitor:
"""
Comprehensive monitoring and visualization system for tracking web crawler operations in real-time.
Provides a terminal-based dashboard that displays task statuses, memory usage, queue statistics,
and performance metrics.
"""
def __init__(
self,
urls_total: int = 0,
refresh_rate: float = 1.0,
enable_ui: bool = True,
max_width: int = 120
):
"""
Initialize the CrawlerMonitor.
Args:
urls_total: Total number of URLs to be crawled
refresh_rate: How often to refresh the UI (in seconds)
enable_ui: Whether to display the terminal UI
max_width: Maximum width of the UI in characters
"""
# Core monitoring attributes
self.stats = {} # Task ID -> stats dict
self.memory_status = "NORMAL"
self.start_time = None
self.end_time = None
self.is_running = False
self.queue_stats = {
"total_queued": 0,
"highest_wait_time": 0.0,
"avg_wait_time": 0.0
}
self.urls_total = urls_total
self.urls_completed = 0
self.peak_memory_percent = 0.0
self.peak_memory_time = 0.0
# Status counts
self.status_counts = {
CrawlStatus.QUEUED.name: 0,
CrawlStatus.IN_PROGRESS.name: 0,
CrawlStatus.COMPLETED.name: 0,
CrawlStatus.FAILED.name: 0
}
# Requeue tracking
self.requeued_count = 0
# Thread-safety
self._lock = threading.RLock()
# Terminal UI
self.enable_ui = enable_ui
self.terminal_ui = TerminalUI(
refresh_rate=refresh_rate,
max_width=max_width
) if enable_ui else None
def start(self):
"""
Start the monitoring session.
- Initializes the start_time
- Sets is_running to True
- Starts the terminal UI if enabled
"""
with self._lock:
self.start_time = time.time()
self.is_running = True
# Start the terminal UI
if self.enable_ui and self.terminal_ui:
self.terminal_ui.start(self)
def stop(self):
"""
Stop the monitoring session.
- Records end_time
- Sets is_running to False
- Stops the terminal UI
- Generates final summary statistics
"""
with self._lock:
self.end_time = time.time()
self.is_running = False
# Stop the terminal UI
if self.enable_ui and self.terminal_ui:
self.terminal_ui.stop()
def add_task(self, task_id: str, url: str):
"""
Register a new task with the monitor.
Args:
task_id: Unique identifier for the task
url: URL being crawled
The task is initialized with:
- status: QUEUED
- url: The URL to crawl
- enqueue_time: Current time
- memory_usage: 0
- peak_memory: 0
- wait_time: 0
- retry_count: 0
"""
with self._lock:
self.stats[task_id] = {
"task_id": task_id,
"url": url,
"status": CrawlStatus.QUEUED.name,
"enqueue_time": time.time(),
"start_time": None,
"end_time": None,
"memory_usage": 0.0,
"peak_memory": 0.0,
"error_message": "",
"wait_time": 0.0,
"retry_count": 0,
"duration": "0:00",
"counted_requeue": False
}
# Update status counts
self.status_counts[CrawlStatus.QUEUED.name] += 1
def update_task(
self,
task_id: str,
status: Optional[CrawlStatus] = None,
start_time: Optional[float] = None,
end_time: Optional[float] = None,
memory_usage: Optional[float] = None,
peak_memory: Optional[float] = None,
error_message: Optional[str] = None,
retry_count: Optional[int] = None,
wait_time: Optional[float] = None
):
"""
Update statistics for a specific task.
Args:
task_id: Unique identifier for the task
status: New status (QUEUED, IN_PROGRESS, COMPLETED, FAILED)
start_time: When task execution started
end_time: When task execution ended
memory_usage: Current memory usage in MB
peak_memory: Maximum memory usage in MB
error_message: Error description if failed
retry_count: Number of retry attempts
wait_time: Time spent in queue
Updates task statistics and updates status counts.
If status changes, decrements old status count and
increments new status count.
"""
with self._lock:
# Check if task exists
if task_id not in self.stats:
return
task_stats = self.stats[task_id]
# Update status counts if status is changing
old_status = task_stats["status"]
if status and status.name != old_status:
self.status_counts[old_status] -= 1
self.status_counts[status.name] += 1
# Track completion
if status == CrawlStatus.COMPLETED:
self.urls_completed += 1
# Track requeues
if old_status in [CrawlStatus.COMPLETED.name, CrawlStatus.FAILED.name] and not task_stats.get("counted_requeue", False):
self.requeued_count += 1
task_stats["counted_requeue"] = True
# Update task statistics
if status:
task_stats["status"] = status.name
if start_time is not None:
task_stats["start_time"] = start_time
if end_time is not None:
task_stats["end_time"] = end_time
if memory_usage is not None:
task_stats["memory_usage"] = memory_usage
# Update peak memory if necessary
current_percent = (memory_usage / psutil.virtual_memory().total) * 100
if current_percent > self.peak_memory_percent:
self.peak_memory_percent = current_percent
self.peak_memory_time = time.time()
if peak_memory is not None:
task_stats["peak_memory"] = peak_memory
if error_message is not None:
task_stats["error_message"] = error_message
if retry_count is not None:
task_stats["retry_count"] = retry_count
if wait_time is not None:
task_stats["wait_time"] = wait_time
# Calculate duration
if task_stats["start_time"]:
end = task_stats["end_time"] or time.time()
duration = end - task_stats["start_time"]
task_stats["duration"] = self._format_time(duration)
def update_memory_status(self, status: str):
"""
Update the current memory status.
Args:
status: Memory status (NORMAL, PRESSURE, CRITICAL, or custom)
Also updates the UI to reflect the new status.
"""
with self._lock:
self.memory_status = status
def update_queue_statistics(
self,
total_queued: int,
highest_wait_time: float,
avg_wait_time: float
):
"""
Update statistics related to the task queue.
Args:
total_queued: Number of tasks currently in queue
highest_wait_time: Longest wait time of any queued task
avg_wait_time: Average wait time across all queued tasks
"""
with self._lock:
self.queue_stats = {
"total_queued": total_queued,
"highest_wait_time": highest_wait_time,
"avg_wait_time": avg_wait_time
}
def get_task_stats(self, task_id: str) -> Dict:
"""
Get statistics for a specific task.
Args:
task_id: Unique identifier for the task
Returns:
Dictionary containing all task statistics
"""
with self._lock:
return self.stats.get(task_id, {}).copy()
def get_all_task_stats(self) -> Dict[str, Dict]:
"""
Get statistics for all tasks.
Returns:
Dictionary mapping task_ids to their statistics
"""
with self._lock:
return self.stats.copy()
def get_memory_status(self) -> str:
"""
Get the current memory status.
Returns:
Current memory status string
"""
with self._lock:
return self.memory_status
def get_queue_stats(self) -> Dict:
"""
Get current queue statistics.
Returns:
Dictionary with queue statistics including:
- total_queued: Number of tasks in queue
- highest_wait_time: Longest wait time
- avg_wait_time: Average wait time
"""
with self._lock:
return self.queue_stats.copy()
def get_summary(self) -> Dict:
"""
Get a summary of all crawler statistics.
Returns:
Dictionary containing:
- runtime: Total runtime in seconds
- urls_total: Total URLs to process
- urls_completed: Number of completed URLs
- completion_percentage: Percentage complete
- status_counts: Count of tasks in each status
- memory_status: Current memory status
- peak_memory_percent: Highest memory usage
- peak_memory_time: When peak memory occurred
- avg_task_duration: Average task processing time
- estimated_completion_time: Projected finish time
- requeue_rate: Percentage of tasks requeued
"""
with self._lock:
# Calculate runtime
current_time = time.time()
runtime = current_time - (self.start_time or current_time)
# Calculate completion percentage
completion_percentage = 0
if self.urls_total > 0:
completion_percentage = (self.urls_completed / self.urls_total) * 100
# Calculate average task duration for completed tasks
completed_tasks = [
task for task in self.stats.values()
if task["status"] == CrawlStatus.COMPLETED.name and task.get("start_time") and task.get("end_time")
]
avg_task_duration = 0
if completed_tasks:
total_duration = sum(task["end_time"] - task["start_time"] for task in completed_tasks)
avg_task_duration = total_duration / len(completed_tasks)
# Calculate requeue rate
requeue_rate = 0
if len(self.stats) > 0:
requeue_rate = (self.requeued_count / len(self.stats)) * 100
# Calculate estimated completion time
estimated_completion_time = "N/A"
if avg_task_duration > 0 and self.urls_total > 0 and self.urls_completed > 0:
remaining_tasks = self.urls_total - self.urls_completed
estimated_seconds = remaining_tasks * avg_task_duration
estimated_completion_time = self._format_time(estimated_seconds)
return {
"runtime": runtime,
"urls_total": self.urls_total,
"urls_completed": self.urls_completed,
"completion_percentage": completion_percentage,
"status_counts": self.status_counts.copy(),
"memory_status": self.memory_status,
"peak_memory_percent": self.peak_memory_percent,
"peak_memory_time": self.peak_memory_time,
"avg_task_duration": avg_task_duration,
"estimated_completion_time": estimated_completion_time,
"requeue_rate": requeue_rate,
"requeued_count": self.requeued_count
}
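# Worked example (illustrative): 40 of 100 URLs done at an average of
# 2.5 s/URL leaves 60 * 2.5 = 150 s, which _format_time renders as "2:30".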
def render(self):
"""
Render the terminal UI.
This is the main UI rendering loop that:
1. Updates all statistics
2. Formats the display
3. Renders the ASCII interface
4. Handles keyboard input
Note: The actual rendering is handled by the TerminalUI class
which uses the rich library's Live display.
"""
if self.enable_ui and self.terminal_ui:
# Force an update of the UI
if hasattr(self.terminal_ui, '_update_display'):
self.terminal_ui._update_display()
def _format_time(self, seconds: float) -> str:
"""
Format time in hours:minutes:seconds.
Args:
seconds: Time in seconds
Returns:
Formatted time string (e.g., "1:23:45")
"""
# Use total seconds directly: timedelta.seconds wraps at 24 hours,
# which would truncate multi-day runtimes
total_seconds = int(seconds)
hours, remainder = divmod(total_seconds, 3600)
minutes, seconds = divmod(remainder, 60)
if hours > 0:
return f"{hours}:{minutes:02}:{seconds:02}"
else:
return f"{minutes}:{seconds:02}"
def _calculate_estimated_completion(self) -> str:
"""
Calculate estimated completion time based on current progress.
Returns:
Formatted time string
"""
summary = self.get_summary()
return summary.get("estimated_completion_time", "N/A")
# Example code for testing
if __name__ == "__main__":
# Initialize the monitor
monitor = CrawlerMonitor(urls_total=100)
# Start monitoring
monitor.start()
try:
# Simulate some tasks
for i in range(20):
task_id = str(uuid.uuid4())
url = f"https://example.com/page{i}"
monitor.add_task(task_id, url)
# Simulate 20% of tasks are already running
if i < 4:
monitor.update_task(
task_id=task_id,
status=CrawlStatus.IN_PROGRESS,
start_time=time.time() - 30, # Started 30 seconds ago
memory_usage=10.5
)
# Simulate 10% of tasks are completed
if i >= 4 and i < 6:
start_time = time.time() - 60
end_time = time.time() - 15
monitor.update_task(
task_id=task_id,
status=CrawlStatus.IN_PROGRESS,
start_time=start_time,
memory_usage=8.2
)
monitor.update_task(
task_id=task_id,
status=CrawlStatus.COMPLETED,
end_time=end_time,
memory_usage=0,
peak_memory=15.7
)
# Simulate 5% of tasks fail
if i >= 6 and i < 7:
start_time = time.time() - 45
end_time = time.time() - 20
monitor.update_task(
task_id=task_id,
status=CrawlStatus.IN_PROGRESS,
start_time=start_time,
memory_usage=12.3
)
monitor.update_task(
task_id=task_id,
status=CrawlStatus.FAILED,
end_time=end_time,
memory_usage=0,
peak_memory=18.2,
error_message="Connection timeout"
)
# Simulate memory pressure
monitor.update_memory_status("PRESSURE")
# Simulate queue statistics
monitor.update_queue_statistics(
total_queued=16, # 20 - 4 (in progress)
highest_wait_time=120.5,
avg_wait_time=60.2
)
# Keep the monitor running for a demonstration
print("Crawler Monitor is running. Press 'q' to exit.")
while monitor.is_running:
time.sleep(0.1)
except KeyboardInterrupt:
print("\nExiting crawler monitor...")
finally:
# Stop the monitor
monitor.stop()
print("Crawler monitor exited successfully.")

View File

@@ -4,8 +4,7 @@ from dotenv import load_dotenv
load_dotenv() # Load environment variables from .env file
# Default provider, ONLY used when the extraction strategy is LLMExtractionStrategy
DEFAULT_PROVIDER = "openai/gpt-4o"
DEFAULT_PROVIDER_API_KEY = "OPENAI_API_KEY"
DEFAULT_PROVIDER = "openai/gpt-4o-mini"
MODEL_REPO_BRANCH = "new-release-0.0.2"
# Provider-model dictionary, ONLY used when the extraction strategy is LLMExtractionStrategy
PROVIDER_MODELS = {
@@ -29,14 +28,6 @@ PROVIDER_MODELS = {
'gemini/gemini-2.0-flash-lite-preview-02-05': os.getenv("GEMINI_API_KEY"),
"deepseek/deepseek-chat": os.getenv("DEEPSEEK_API_KEY"),
}
PROVIDER_MODELS_PREFIXES = {
"ollama": "no-token-needed", # Any model from Ollama no need for API token
"groq": os.getenv("GROQ_API_KEY"),
"openai": os.getenv("OPENAI_API_KEY"),
"anthropic": os.getenv("ANTHROPIC_API_KEY"),
"gemini": os.getenv("GEMINI_API_KEY"),
"deepseek": os.getenv("DEEPSEEK_API_KEY"),
}
# Chunk token threshold
CHUNK_TOKEN_THRESHOLD = 2**11 # 2048 tokens
@@ -101,46 +92,3 @@ SHOW_DEPRECATION_WARNINGS = True
SCREENSHOT_HEIGHT_TRESHOLD = 10000
PAGE_TIMEOUT = 60000
DOWNLOAD_PAGE_TIMEOUT = 60000
# Global user settings with descriptions and default values
USER_SETTINGS = {
"DEFAULT_LLM_PROVIDER": {
"default": "openai/gpt-4o",
"description": "Default LLM provider in 'company/model' format (e.g., 'openai/gpt-4o', 'anthropic/claude-3-sonnet')",
"type": "string"
},
"DEFAULT_LLM_PROVIDER_TOKEN": {
"default": "",
"description": "API token for the default LLM provider",
"type": "string",
"secret": True
},
"VERBOSE": {
"default": False,
"description": "Enable verbose output for all commands",
"type": "boolean"
},
"BROWSER_HEADLESS": {
"default": True,
"description": "Run browser in headless mode by default",
"type": "boolean"
},
"BROWSER_TYPE": {
"default": "chromium",
"description": "Default browser type (chromium or firefox)",
"type": "string",
"options": ["chromium", "firefox"]
},
"CACHE_MODE": {
"default": "bypass",
"description": "Default cache mode (bypass, use, or refresh)",
"type": "string",
"options": ["bypass", "use", "refresh"]
},
"USER_AGENT_MODE": {
"default": "default",
"description": "Default user agent mode (default, random, or mobile)",
"type": "string",
"options": ["default", "random", "mobile"]
}
}
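The PROVIDER_MODELS_PREFIXES table removed above maps a provider prefix to its API token. A hedged sketch of the lookup it appears designed for; resolve_api_token is a hypothetical helper, not part of the module:

import os

PROVIDER_MODELS_PREFIXES = {
    "ollama": "no-token-needed",  # local models need no API token
    "openai": os.getenv("OPENAI_API_KEY"),
    "groq": os.getenv("GROQ_API_KEY"),
}

def resolve_api_token(provider: str):
    # "openai/gpt-4o-mini" -> prefix "openai" -> token from the table
    prefix = provider.split("/", 1)[0]
    return PROVIDER_MODELS_PREFIXES.get(prefix)

print(resolve_api_token("openai/gpt-4o-mini"))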

View File

@@ -28,7 +28,6 @@ from lxml import etree
from lxml import html as lhtml
from typing import List
from .models import ScrapingResult, MediaItem, Link, Media, Links
import copy
# Pre-compile regular expressions for Open Graph and Twitter metadata
OG_REGEX = re.compile(r"^og:")
@@ -49,7 +48,7 @@ def parse_srcset(s: str) -> List[Dict]:
if len(parts) >= 1:
url = parts[0]
width = (
parts[1].rstrip("w").split('.')[0]
parts[1].rstrip("w")
if len(parts) > 1 and parts[1].endswith("w")
else None
)
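The two width expressions above differ only for fractional descriptors such as "800.5w": plain rstrip("w") keeps "800.5", while the .split('.')[0] variant truncates to "800". A quick illustration with an invented value:

part = "800.5w"
print(part.rstrip("w"))                # -> "800.5"
print(part.rstrip("w").split('.')[0])  # -> "800"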
@@ -129,8 +128,7 @@ class WebScrapingStrategy(ContentScrapingStrategy):
Returns:
ScrapingResult: A structured result containing the scraped content.
"""
actual_url = kwargs.get("redirected_url", url)
raw_result = self._scrap(actual_url, html, is_async=False, **kwargs)
raw_result = self._scrap(url, html, is_async=False, **kwargs)
if raw_result is None:
return ScrapingResult(
cleaned_html="",
@@ -621,9 +619,6 @@ class WebScrapingStrategy(ContentScrapingStrategy):
return False
keep_element = False
# Special case for table elements - always preserve structure
if element.name in ["tr", "td", "th"]:
keep_element = True
exclude_domains = kwargs.get("exclude_domains", [])
# exclude_social_media_domains = kwargs.get('exclude_social_media_domains', set(SOCIAL_MEDIA_DOMAINS))
@@ -864,15 +859,7 @@ class WebScrapingStrategy(ContentScrapingStrategy):
parser_type = kwargs.get("parser", "lxml")
soup = BeautifulSoup(html, parser_type)
body = soup.body
if body is None:
raise Exception("'<body>' tag is not found in fetched html. Consider adding wait_for=\"css:body\" to wait for body tag to be loaded into DOM.")
base_domain = get_base_domain(url)
# Early removal of all images if exclude_all_images is set
# This happens before any processing to minimize memory usage
if kwargs.get("exclude_all_images", False):
for img in body.find_all('img'):
img.decompose()
try:
meta = extract_metadata("", soup)
@@ -904,6 +891,23 @@ class WebScrapingStrategy(ContentScrapingStrategy):
for element in body.select(excluded_selector):
element.extract()
# if False and css_selector:
# selected_elements = body.select(css_selector)
# if not selected_elements:
# return {
# "markdown": "",
# "cleaned_html": "",
# "success": True,
# "media": {"images": [], "videos": [], "audios": []},
# "links": {"internal": [], "external": []},
# "metadata": {},
# "message": f"No elements found for CSS selector: {css_selector}",
# }
# # raise InvalidCSSSelectorError(f"Invalid CSS selector, No elements found for CSS selector: {css_selector}")
# body = soup.new_tag("div")
# for el in selected_elements:
# body.append(el)
content_element = None
if target_elements:
try:
@@ -912,12 +916,12 @@ class WebScrapingStrategy(ContentScrapingStrategy):
for_content_targeted_element.extend(body.select(target_element))
content_element = soup.new_tag("div")
for el in for_content_targeted_element:
content_element.append(copy.deepcopy(el))
content_element.append(el)
except Exception as e:
self._log("error", f"Error with target element detection: {str(e)}", "SCRAPE")
return None
else:
content_element = body
content_element = body
kwargs["exclude_social_media_domains"] = set(
kwargs.get("exclude_social_media_domains", []) + SOCIAL_MEDIA_DOMAINS
@@ -1298,9 +1302,6 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
"source",
"track",
"wbr",
"tr",
"td",
"th",
}
for el in reversed(list(root.iterdescendants())):
@@ -1490,13 +1491,6 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
body = doc
base_domain = get_base_domain(url)
# Early removal of all images if exclude_all_images is set
# This is more efficient in lxml as we remove elements before any processing
if kwargs.get("exclude_all_images", False):
for img in body.xpath('//img'):
if img.getparent() is not None:
img.getparent().remove(img)
# Add comment removal
if kwargs.get("remove_comments", False):
@@ -1533,6 +1527,26 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
self._log("error", f"Error extracting metadata: {str(e)}", "SCRAPE")
meta = {}
# Handle CSS selector targeting
# if css_selector:
# try:
# selected_elements = body.cssselect(css_selector)
# if not selected_elements:
# return {
# "markdown": "",
# "cleaned_html": "",
# "success": True,
# "media": {"images": [], "videos": [], "audios": []},
# "links": {"internal": [], "external": []},
# "metadata": meta,
# "message": f"No elements found for CSS selector: {css_selector}",
# }
# body = lhtml.Element("div")
# body.extend(selected_elements)
# except Exception as e:
# self._log("error", f"Error with CSS selector: {str(e)}", "SCRAPE")
# return None
content_element = None
if target_elements:
try:
@@ -1540,7 +1554,7 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
for target_element in target_elements:
for_content_targeted_element.extend(body.cssselect(target_element))
content_element = lhtml.Element("div")
content_element.extend(copy.deepcopy(for_content_targeted_element))
content_element.extend(for_content_targeted_element)
except Exception as e:
self._log("error", f"Error with target element detection: {str(e)}", "SCRAPE")
return None
@@ -1609,7 +1623,7 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
# Remove empty elements
self.remove_empty_elements_fast(body, 1)
# Remove unneeded attributes
# Remvoe unneeded attributes
self.remove_unwanted_attributes_fast(
body, keep_data_attributes=kwargs.get("keep_data_attributes", False)
)
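A note on the copy.deepcopy calls that appear in the target_elements hunks above: BeautifulSoup's append (like lxml's extend) moves a live node into the new container, silently removing it from the original tree, whereas appending a deep copy leaves the source intact. A minimal sketch with invented HTML:

import copy
from bs4 import BeautifulSoup

soup = BeautifulSoup("<body><div id='a'>x</div><p>y</p></body>", "html.parser")
body = soup.body
container = soup.new_tag("div")

container.append(body.select_one("#a"))                 # moves the node out of body
print(body.select_one("#a"))                            # -> None

container.append(copy.deepcopy(body.select_one("p")))   # copies, body keeps <p>
print(body.select_one("p") is not None)                 # -> True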

View File

@@ -1,6 +1,6 @@
from crawl4ai import BrowserConfig, AsyncWebCrawler, CrawlerRunConfig, CacheMode
from crawl4ai.hub import BaseCrawler
from crawl4ai.utils import optimize_html, get_home_folder, preprocess_html_for_schema
from crawl4ai.utils import optimize_html, get_home_folder
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
from pathlib import Path
import json
@@ -68,8 +68,7 @@ class GoogleSearchCrawler(BaseCrawler):
home_dir = get_home_folder() if not schema_cache_path else schema_cache_path
os.makedirs(f"{home_dir}/schema", exist_ok=True)
# cleaned_html = optimize_html(html, threshold=100)
cleaned_html = preprocess_html_for_schema(html)
cleaned_html = optimize_html(html, threshold=100)
organic_schema = None
if os.path.exists(f"{home_dir}/schema/organic_schema.json"):

View File

@@ -11,7 +11,6 @@ from .scorers import URLScorer
from . import DeepCrawlStrategy
from ..types import AsyncWebCrawler, CrawlerRunConfig, CrawlResult, RunManyReturn
from ..utils import normalize_url_for_deep_crawl
from math import inf as infinity
@@ -107,14 +106,13 @@ class BestFirstCrawlingStrategy(DeepCrawlStrategy):
valid_links = []
for link in links:
url = link.get("href")
base_url = normalize_url_for_deep_crawl(url, source_url)
if base_url in visited:
if url in visited:
continue
if not await self.can_process_url(url, new_depth):
self.stats.urls_skipped += 1
continue
valid_links.append(base_url)
valid_links.append(url)
# If we have more valid links than capacity, limit them
if len(valid_links) > remaining_capacity:
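The normalize_url_for_deep_crawl call in the hunk above exists so that trivially different spellings of one page collapse to a single visited-set key. A sketch of the idea; _normalize is a hypothetical stand-in, not the library function itself:

from urllib.parse import urljoin, urldefrag, urlparse, urlunparse

def _normalize(href, base_url):
    # Resolve relative links, drop fragments, lower-case the host, trim a trailing slash
    url, _ = urldefrag(urljoin(base_url, href))
    p = urlparse(url)
    return urlunparse((p.scheme, p.netloc.lower(), p.path.rstrip("/") or "/",
                       p.params, p.query, ""))

visited = set()
for href in ["https://Example.com/a/", "https://example.com/a#top", "https://example.com/a"]:
    url = _normalize(href, "https://example.com/")
    if url in visited:
        continue  # all three spellings collapse to one key; only the first is queued
    visited.add(url)
print(len(visited))  # -> 1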

View File

@@ -117,8 +117,7 @@ class BFSDeepCrawlStrategy(DeepCrawlStrategy):
self.logger.debug(f"URL {url} skipped: score {score} below threshold {self.score_threshold}")
self.stats.urls_skipped += 1
continue
visited.add(base_url)
valid_links.append((base_url, score))
# If we have more valid links than capacity, sort by score and take the top ones
@@ -159,6 +158,7 @@ class BFSDeepCrawlStrategy(DeepCrawlStrategy):
while current_level and not self._cancel_event.is_set():
next_level: List[Tuple[str, Optional[str]]] = []
urls = [url for url, _ in current_level]
visited.update(urls)
# Clone the config to disable deep crawling recursion and enforce batch mode.
batch_config = config.clone(deep_crawl_strategy=None, stream=False)

View File

@@ -5,11 +5,9 @@ from concurrent.futures import ThreadPoolExecutor, as_completed
import json
import time
from .prompts import PROMPT_EXTRACT_BLOCKS, PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION, PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION, JSON_SCHEMA_BUILDER_XPATH, PROMPT_EXTRACT_INFERRED_SCHEMA
from .prompts import PROMPT_EXTRACT_BLOCKS, PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION, PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION, JSON_SCHEMA_BUILDER_XPATH
from .config import (
DEFAULT_PROVIDER,
DEFAULT_PROVIDER_API_KEY,
CHUNK_TOKEN_THRESHOLD,
DEFAULT_PROVIDER, CHUNK_TOKEN_THRESHOLD,
OVERLAP_RATE,
WORD_TOKEN_RATE,
)
@@ -36,7 +34,7 @@ from .model_loader import (
calculate_batch_size
)
from .types import LLMConfig, create_llm_config
from .types import LLMConfig
from functools import partial
import numpy as np
@@ -509,7 +507,6 @@ class LLMExtractionStrategy(ExtractionStrategy):
word_token_rate=WORD_TOKEN_RATE,
apply_chunking=True,
input_format: str = "markdown",
force_json_response=False,
verbose=False,
# Deprecated arguments
provider: str = DEFAULT_PROVIDER,
@@ -530,10 +527,9 @@ class LLMExtractionStrategy(ExtractionStrategy):
overlap_rate: Overlap between chunks.
word_token_rate: Word to token conversion rate.
apply_chunking: Whether to apply chunking.
input_format: Content format to use for extraction.
Options: "markdown" (default), "html", "fit_markdown"
force_json_response: Whether to force a JSON response from the LLM.
verbose: Whether to print verbose output.
usages: List of individual token usages.
total_usage: Accumulated token usage.
# Deprecated arguments, will be removed very soon
provider: The provider to use for extraction. It follows the format <provider_name>/<model_name>, e.g., "ollama/llama3.3".
@@ -544,17 +540,11 @@ class LLMExtractionStrategy(ExtractionStrategy):
"""
super().__init__( input_format=input_format, **kwargs)
self.llm_config = llm_config
if not self.llm_config:
self.llm_config = create_llm_config(
provider=DEFAULT_PROVIDER,
api_token=os.environ.get(DEFAULT_PROVIDER_API_KEY),
)
self.instruction = instruction
self.extract_type = extraction_type
self.schema = schema
if schema:
self.extract_type = "schema"
self.force_json_response = force_json_response
self.chunk_token_threshold = chunk_token_threshold or CHUNK_TOKEN_THRESHOLD
self.overlap_rate = overlap_rate
self.word_token_rate = word_token_rate
@@ -618,97 +608,64 @@ class LLMExtractionStrategy(ExtractionStrategy):
variable_values["SCHEMA"] = json.dumps(self.schema, indent=2) # if type of self.schema is dict else self.schema
prompt_with_variables = PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION
if self.extract_type == "schema" and not self.schema:
prompt_with_variables = PROMPT_EXTRACT_INFERRED_SCHEMA
for variable in variable_values:
prompt_with_variables = prompt_with_variables.replace(
"{" + variable + "}", variable_values[variable]
)
response = perform_completion_with_backoff(
self.llm_config.provider,
prompt_with_variables,
self.llm_config.api_token,
base_url=self.llm_config.base_url,
extra_args=self.extra_args,
) # , json_response=self.extract_type == "schema")
# Track usage
usage = TokenUsage(
completion_tokens=response.usage.completion_tokens,
prompt_tokens=response.usage.prompt_tokens,
total_tokens=response.usage.total_tokens,
completion_tokens_details=response.usage.completion_tokens_details.__dict__
if response.usage.completion_tokens_details
else {},
prompt_tokens_details=response.usage.prompt_tokens_details.__dict__
if response.usage.prompt_tokens_details
else {},
)
self.usages.append(usage)
# Update totals
self.total_usage.completion_tokens += usage.completion_tokens
self.total_usage.prompt_tokens += usage.prompt_tokens
self.total_usage.total_tokens += usage.total_tokens
try:
response = perform_completion_with_backoff(
self.llm_config.provider,
prompt_with_variables,
self.llm_config.api_token,
base_url=self.llm_config.base_url,
json_response=self.force_json_response,
extra_args=self.extra_args,
) # , json_response=self.extract_type == "schema")
# Track usage
usage = TokenUsage(
completion_tokens=response.usage.completion_tokens,
prompt_tokens=response.usage.prompt_tokens,
total_tokens=response.usage.total_tokens,
completion_tokens_details=response.usage.completion_tokens_details.__dict__
if response.usage.completion_tokens_details
else {},
prompt_tokens_details=response.usage.prompt_tokens_details.__dict__
if response.usage.prompt_tokens_details
else {},
)
self.usages.append(usage)
# Update totals
self.total_usage.completion_tokens += usage.completion_tokens
self.total_usage.prompt_tokens += usage.prompt_tokens
self.total_usage.total_tokens += usage.total_tokens
try:
response = response.choices[0].message.content
blocks = None
if self.force_json_response:
blocks = json.loads(response)
if isinstance(blocks, dict):
# If it has only one key whose value is a list, assign that list to blocks, e.g.: {"news": [..]}
if len(blocks) == 1 and isinstance(list(blocks.values())[0], list):
blocks = list(blocks.values())[0]
else:
# Otherwise treat the dict itself as a single block, e.g.: { "article_id": "1234", ... }
blocks = [blocks]
elif isinstance(blocks, list):
# If it is a list then assign that to blocks
blocks = blocks
else:
# blocks = extract_xml_data(["blocks"], response.choices[0].message.content)["blocks"]
blocks = extract_xml_data(["blocks"], response)["blocks"]
blocks = json.loads(blocks)
for block in blocks:
block["error"] = False
except Exception:
parsed, unparsed = split_and_parse_json_objects(
response.choices[0].message.content
)
blocks = parsed
if unparsed:
blocks.append(
{"index": 0, "error": True, "tags": ["error"], "content": unparsed}
)
if self.verbose:
print(
"[LOG] Extracted",
len(blocks),
"blocks from URL:",
url,
"block index:",
ix,
)
return blocks
except Exception as e:
if self.verbose:
print(f"[LOG] Error in LLM extraction: {e}")
# Add error information to extracted_content
return [
{
"index": ix,
"error": True,
"tags": ["error"],
"content": str(e),
}
blocks = extract_xml_data(["blocks"], response.choices[0].message.content)[
"blocks"
]
blocks = json.loads(blocks)
for block in blocks:
block["error"] = False
except Exception:
parsed, unparsed = split_and_parse_json_objects(
response.choices[0].message.content
)
blocks = parsed
if unparsed:
blocks.append(
{"index": 0, "error": True, "tags": ["error"], "content": unparsed}
)
if self.verbose:
print(
"[LOG] Extracted",
len(blocks),
"blocks from URL:",
url,
"block index:",
ix,
)
return blocks
def _merge(self, documents, chunk_token_threshold, overlap) -> List[str]:
"""
@@ -800,6 +757,8 @@ class LLMExtractionStrategy(ExtractionStrategy):
#######################################################
# New extraction strategies for JSON-based extraction #
#######################################################
class JsonElementExtractionStrategy(ExtractionStrategy):
"""
Abstract base class for extracting structured JSON from HTML content.
@@ -1090,7 +1049,7 @@ class JsonElementExtractionStrategy(ExtractionStrategy):
schema_type: str = "CSS", # or XPATH
query: str = None,
target_json_example: str = None,
llm_config: 'LLMConfig' = create_llm_config(),
llm_config: 'LLMConfig' = None,
provider: str = None,
api_token: str = None,
**kwargs
@@ -1122,7 +1081,7 @@ class JsonElementExtractionStrategy(ExtractionStrategy):
# Build the prompt
system_message = {
"role": "system",
"content": f"""You specialize in generating special JSON schemas for web scraping. This schema uses CSS or XPATH selectors to present a repetitive pattern in crawled HTML, such as a product in a product list or a search result item in a list of search results. We use this JSON schema to pass to a language model along with the HTML content to extract structured data from the HTML. The language model uses the JSON schema to extract data from the HTML and retrieve values for fields in the JSON schema, following the schema.
"content": f"""You specialize in generating special JSON schemas for web scraping. This schema uses CSS or XPATH selectors to present a repetitive pattern in crawled HTML, such as a product in a product list or a search result item in a list of search results. You use this JSON schema to pass to a language model along with the HTML content to extract structured data from the HTML. The language model uses the JSON schema to extract data from the HTML and retrieve values for fields in the JSON schema, following the schema.
Generating this HTML manually is not feasible, so you need to generate the JSON schema using the HTML content. The HTML copied from the crawled website is provided below, which we believe contains the repetitive pattern.
@@ -1136,10 +1095,9 @@ Generating this HTML manually is not feasible, so you need to generate the JSON
In this context, the following items may or may not be present:
- Example of target JSON object: This is a sample of the final JSON object that we hope to extract from the HTML using the schema you are generating.
- Extra Instructions: This is optional instructions to consider when generating the schema provided by the user.
- Query or explanation of target/goal data item: This is a description of what data we are trying to extract from the HTML. This explanation means we're not sure about the rigid schema of the structures we want, so we leave it to you to use your expertise to create the best and most comprehensive structures aimed at maximizing data extraction from this page. You must ensure that you do not pick up nuances that may exist on a particular page. The focus should be on the data we are extracting, and it must be valid, safe, and robust based on the given HTML.
# What if there is no example of target JSON object and also no extra instructions or even no explanation of target/goal data item?
In this scenario, use your best judgment to generate the schema. You need to examine the content of the page and understand the data it provides. If the page contains repetitive data, such as lists of items, products, jobs, places, books, or movies, focus on one single item that repeats. If the page is a detailed page about one product or item, create a schema to extract the entire structured data. At this stage, you must think and decide for yourself. Try to maximize the number of fields that you can extract from the HTML.
# What if there is no example of target JSON object?
In this scenario, use your best judgment to generate the schema. Try to maximize the number of fields that you can extract from the HTML.
# What are the instructions and details for this schema generation?
{prompt_template}"""
@@ -1156,18 +1114,11 @@ In this scenario, use your best judgment to generate the schema. You need to exa
}
if query:
user_message["content"] += f"\n\n## Query or explanation of target/goal data item:\n{query}"
user_message["content"] += f"\n\nImportant Notes to Consider:\n{query}"
if target_json_example:
user_message["content"] += f"\n\n## Example of target JSON object:\n```json\n{target_json_example}\n```"
if query and not target_json_example:
user_message["content"] += """IMPORTANT: To remind you, in this process, we are not providing a rigid example of the adjacent objects we seek. We rely on your understanding of the explanation provided in the above section. Make sure to grasp what we are looking for and, based on that, create the best schema.."""
elif not query and target_json_example:
user_message["content"] += """IMPORTANT: Please remember that in this process, we provided a proper example of a target JSON object. Make sure to adhere to the structure and create a schema that exactly fits this example. If you find that some elements on the page do not match completely, vote for the majority."""
elif not query and not target_json_example:
user_message["content"] += """IMPORTANT: Since we neither have a query nor an example, it is crucial to rely solely on the HTML content provided. Leverage your expertise to determine the schema based on the repetitive patterns observed in the content."""
user_message["content"] += f"\n\nExample of target JSON object:\n{target_json_example}"
user_message["content"] += """IMPORTANT: Ensure your schema remains reliable by avoiding selectors that appear to generate dynamically and are not dependable. You want a reliable schema, as it consistently returns the same data even after many page reloads.
user_message["content"] += """IMPORTANT: Ensure your schema is reliable, meaning do not use selectors that seem to generate dynamically and are not reliable. A reliable schema is what you want, as it consistently returns the same data even after many reloads of the page.
Analyze the HTML and generate a JSON schema that follows the specified format. Only output valid JSON schema, nothing else.
"""
@@ -1189,6 +1140,7 @@ In this scenario, use your best judgment to generate the schema. You need to exa
except Exception as e:
raise Exception(f"Failed to generate schema: {str(e)}")
class JsonCssExtractionStrategy(JsonElementExtractionStrategy):
"""
Concrete implementation of `JsonElementExtractionStrategy` using CSS selectors.
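The force_json_response branch removed above coerces whatever JSON shape the model returns into a flat list of block dicts. A standalone sketch of that normalization; normalize_blocks is my name for it, not an API in the module:

import json

def normalize_blocks(raw: str):
    data = json.loads(raw)
    if isinstance(data, list):
        return data
    if isinstance(data, dict):
        values = list(data.values())
        # {"news": [...]} -> unwrap the single list; any other dict is one block
        if len(data) == 1 and isinstance(values[0], list):
            return values[0]
        return [data]
    return [{"index": 0, "error": True, "tags": ["error"], "content": raw}]

print(normalize_blocks('{"news": [{"t": 1}, {"t": 2}]}'))  # -> [{'t': 1}, {'t': 2}]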

View File

@@ -40,55 +40,12 @@ def setup_home_directory():
f.write("")
def post_install():
"""
Run all post-installation tasks.
Checks CRAWL4AI_MODE environment variable. If set to 'api',
skips Playwright browser installation.
"""
"""Run all post-installation tasks"""
logger.info("Running post-installation setup...", tag="INIT")
setup_home_directory()
# Check environment variable to conditionally skip Playwright install
run_mode = os.getenv('CRAWL4AI_MODE')
if run_mode == 'api':
logger.warning(
"CRAWL4AI_MODE=api detected. Skipping Playwright browser installation.",
tag="SETUP"
)
else:
# Proceed with installation only if mode is not 'api'
install_playwright()
install_playwright()
run_migration()
# TODO: Will be added in the future
# setup_builtin_browser()
logger.success("Post-installation setup completed!", tag="COMPLETE")
def setup_builtin_browser():
"""Set up a builtin browser for use with Crawl4AI"""
try:
logger.info("Setting up builtin browser...", tag="INIT")
asyncio.run(_setup_builtin_browser())
logger.success("Builtin browser setup completed!", tag="COMPLETE")
except Exception as e:
logger.warning(f"Failed to set up builtin browser: {e}")
logger.warning("You can manually set up a builtin browser using 'crawl4ai-doctor builtin-browser-start'")
async def _setup_builtin_browser():
try:
# Import BrowserProfiler here to avoid circular imports
from .browser_profiler import BrowserProfiler
profiler = BrowserProfiler(logger=logger)
# Launch the builtin browser
cdp_url = await profiler.launch_builtin_browser(headless=True)
if cdp_url:
logger.success(f"Builtin browser launched at {cdp_url}", tag="BROWSER")
else:
logger.warning("Failed to launch builtin browser", tag="BROWSER")
except Exception as e:
logger.warning(f"Error setting up builtin browser: {e}", tag="BROWSER")
raise
def install_playwright():

View File

@@ -115,6 +115,5 @@ async () => {
document.body.style.overflow = "auto";
// Wait a bit for any animations to complete
document.body.scrollIntoView(false);
await new Promise((resolve) => setTimeout(resolve, 50));
await new Promise((resolve) => setTimeout(resolve, 100));
};

View File

@@ -31,24 +31,22 @@ class MarkdownGenerationStrategy(ABC):
content_filter: Optional[RelevantContentFilter] = None,
options: Optional[Dict[str, Any]] = None,
verbose: bool = False,
content_source: str = "cleaned_html",
):
self.content_filter = content_filter
self.options = options or {}
self.verbose = verbose
self.content_source = content_source
@abstractmethod
def generate_markdown(
self,
input_html: str,
cleaned_html: str,
base_url: str = "",
html2text_options: Optional[Dict[str, Any]] = None,
content_filter: Optional[RelevantContentFilter] = None,
citations: bool = True,
**kwargs,
) -> MarkdownGenerationResult:
"""Generate markdown from the selected input HTML."""
"""Generate markdown from cleaned HTML."""
pass
@@ -65,7 +63,6 @@ class DefaultMarkdownGenerator(MarkdownGenerationStrategy):
Args:
content_filter (Optional[RelevantContentFilter]): Content filter for generating fit markdown.
options (Optional[Dict[str, Any]]): Additional options for markdown generation. Defaults to None.
content_source (str): Source of content to generate markdown from. Options: "cleaned_html", "raw_html", "fit_html". Defaults to "cleaned_html".
Returns:
MarkdownGenerationResult: Result containing raw markdown, fit markdown, fit HTML, and references markdown.
@@ -75,9 +72,8 @@ class DefaultMarkdownGenerator(MarkdownGenerationStrategy):
self,
content_filter: Optional[RelevantContentFilter] = None,
options: Optional[Dict[str, Any]] = None,
content_source: str = "cleaned_html",
):
super().__init__(content_filter, options, verbose=False, content_source=content_source)
super().__init__(content_filter, options)
def convert_links_to_citations(
self, markdown: str, base_url: str = ""
@@ -147,7 +143,7 @@ class DefaultMarkdownGenerator(MarkdownGenerationStrategy):
def generate_markdown(
self,
input_html: str,
cleaned_html: str,
base_url: str = "",
html2text_options: Optional[Dict[str, Any]] = None,
options: Optional[Dict[str, Any]] = None,
@@ -156,16 +152,16 @@ class DefaultMarkdownGenerator(MarkdownGenerationStrategy):
**kwargs,
) -> MarkdownGenerationResult:
"""
Generate markdown with citations from the provided input HTML.
Generate markdown with citations from cleaned HTML.
How it works:
1. Generate raw markdown from the input HTML.
1. Generate raw markdown from cleaned HTML.
2. Convert links to citations.
3. Generate fit markdown if content filter is provided.
4. Return MarkdownGenerationResult.
Args:
input_html (str): The HTML content to process (selected based on content_source).
cleaned_html (str): Cleaned HTML content.
base_url (str): Base URL for URL joins.
html2text_options (Optional[Dict[str, Any]]): HTML2Text options.
options (Optional[Dict[str, Any]]): Additional options for markdown generation.
@@ -200,14 +196,14 @@ class DefaultMarkdownGenerator(MarkdownGenerationStrategy):
h.update_params(**default_options)
# Ensure we have valid input
if not input_html:
input_html = ""
elif not isinstance(input_html, str):
input_html = str(input_html)
if not cleaned_html:
cleaned_html = ""
elif not isinstance(cleaned_html, str):
cleaned_html = str(cleaned_html)
# Generate raw markdown
try:
raw_markdown = h.handle(input_html)
raw_markdown = h.handle(cleaned_html)
except Exception as e:
raw_markdown = f"Error converting HTML to markdown: {str(e)}"
@@ -232,7 +228,7 @@ class DefaultMarkdownGenerator(MarkdownGenerationStrategy):
if content_filter or self.content_filter:
try:
content_filter = content_filter or self.content_filter
filtered_html = content_filter.filter_content(input_html)
filtered_html = content_filter.filter_content(cleaned_html)
filtered_html = "\n".join(
"<div>{}</div>".format(s) for s in filtered_html
)
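For context, the content_source parameter being removed above selected which HTML feeds markdown generation. A hedged usage sketch based on the removed signature; the import path is an assumption:

from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

md_gen = DefaultMarkdownGenerator(content_source="raw_html")  # or "cleaned_html", "fit_html"
result = md_gen.generate_markdown(
    input_html="<html><body><h1>Title</h1><p>Body text.</p></body></html>",
    base_url="https://example.com",
)
print(result.raw_markdown)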

View File

@@ -1,7 +1,6 @@
from re import U
from pydantic import BaseModel, HttpUrl, PrivateAttr
from typing import List, Dict, Optional, Callable, Awaitable, Union, Any
from typing import AsyncGenerator
from typing import Generic, TypeVar
from enum import Enum
from dataclasses import dataclass
from .ssl_certificate import SSLCertificate
@@ -29,12 +28,7 @@ class CrawlerTaskResult:
start_time: Union[datetime, float]
end_time: Union[datetime, float]
error_message: str = ""
retry_count: int = 0
wait_time: float = 0.0
@property
def success(self) -> bool:
return self.result.success
class CrawlStatus(Enum):
QUEUED = "QUEUED"
@@ -42,6 +36,27 @@ class CrawlStatus(Enum):
COMPLETED = "COMPLETED"
FAILED = "FAILED"
# @dataclass
# class CrawlStats:
# task_id: str
# url: str
# status: CrawlStatus
# start_time: Optional[datetime] = None
# end_time: Optional[datetime] = None
# memory_usage: float = 0.0
# peak_memory: float = 0.0
# error_message: str = ""
# @property
# def duration(self) -> str:
# if not self.start_time:
# return "0:00"
# end = self.end_time or datetime.now()
# duration = end - self.start_time
# return str(timedelta(seconds=int(duration.total_seconds())))
@dataclass
class CrawlStats:
task_id: str
@@ -52,9 +67,6 @@ class CrawlStats:
memory_usage: float = 0.0
peak_memory: float = 0.0
error_message: str = ""
wait_time: float = 0.0
retry_count: int = 0
counted_requeue: bool = False
@property
def duration(self) -> str:
@@ -91,11 +103,21 @@ class TokenUsage:
completion_tokens_details: Optional[dict] = None
prompt_tokens_details: Optional[dict] = None
class UrlModel(BaseModel):
url: HttpUrl
forced: bool = False
class MarkdownGenerationResult(BaseModel):
raw_markdown: str
markdown_with_citations: str
references_markdown: str
fit_markdown: Optional[str] = None
fit_html: Optional[str] = None
def __str__(self):
return self.raw_markdown
@dataclass
class TraversalStats:
@@ -116,16 +138,6 @@ class DispatchResult(BaseModel):
end_time: Union[datetime, float]
error_message: str = ""
class MarkdownGenerationResult(BaseModel):
raw_markdown: str
markdown_with_citations: str
references_markdown: str
fit_markdown: Optional[str] = None
fit_html: Optional[str] = None
def __str__(self):
return self.raw_markdown
class CrawlResult(BaseModel):
url: str
html: str
@@ -137,7 +149,6 @@ class CrawlResult(BaseModel):
js_execution_result: Optional[Dict[str, Any]] = None
screenshot: Optional[str] = None
pdf: Optional[bytes] = None
mhtml: Optional[str] = None
_markdown: Optional[MarkdownGenerationResult] = PrivateAttr(default=None)
extracted_content: Optional[str] = None
metadata: Optional[dict] = None
@@ -148,8 +159,6 @@ class CrawlResult(BaseModel):
ssl_certificate: Optional[SSLCertificate] = None
dispatch_result: Optional[DispatchResult] = None
redirected_url: Optional[str] = None
network_requests: Optional[List[Dict[str, Any]]] = None
console_messages: Optional[List[Dict[str, Any]]] = None
class Config:
arbitrary_types_allowed = True
@@ -266,40 +275,6 @@ class StringCompatibleMarkdown(str):
def __getattr__(self, name):
return getattr(self._markdown_result, name)
CrawlResultT = TypeVar('CrawlResultT', bound=CrawlResult)
class CrawlResultContainer(Generic[CrawlResultT]):
def __init__(self, results: Union[CrawlResultT, List[CrawlResultT]]):
# Normalize to a list
if isinstance(results, list):
self._results = results
else:
self._results = [results]
def __iter__(self):
return iter(self._results)
def __getitem__(self, index):
return self._results[index]
def __len__(self):
return len(self._results)
def __getattr__(self, attr):
# Delegate attribute access to the first element.
if self._results:
return getattr(self._results[0], attr)
raise AttributeError(f"{self.__class__.__name__} object has no attribute '{attr}'")
def __repr__(self):
return f"{self.__class__.__name__}({self._results!r})"
RunManyReturn = Union[
CrawlResultContainer[CrawlResultT],
AsyncGenerator[CrawlResultT, None]
]
# END of backward compatibility code for markdown/markdown_v2.
# When removing this code in the future, make sure to:
# 1. Replace the private attribute and property with a standard field
@@ -312,17 +287,15 @@ class AsyncCrawlResponse(BaseModel):
status_code: int
screenshot: Optional[str] = None
pdf_data: Optional[bytes] = None
mhtml_data: Optional[str] = None
get_delayed_content: Optional[Callable[[Optional[float]], Awaitable[str]]] = None
downloaded_files: Optional[List[str]] = None
ssl_certificate: Optional[SSLCertificate] = None
redirected_url: Optional[str] = None
network_requests: Optional[List[Dict[str, Any]]] = None
console_messages: Optional[List[Dict[str, Any]]] = None
class Config:
arbitrary_types_allowed = True
###############################
# Scraping Models
###############################
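The CrawlResultContainer removed above gives run-many return values a dual personality: attribute access falls through to the first result, while iteration yields every result. A sketch, assuming the class from the block above is in scope and using a hypothetical stand-in for CrawlResult:

from dataclasses import dataclass

@dataclass
class FakeResult:  # hypothetical stand-in for CrawlResult
    url: str
    success: bool

container = CrawlResultContainer([FakeResult("https://a", True),
                                  FakeResult("https://b", False)])
print(container.url)                   # "https://a": attribute access delegates to the first result
print([r.success for r in container])  # [True, False]: iteration yields every result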

View File

@@ -203,62 +203,6 @@ Avoid Common Mistakes:
Result
Output the final list of JSON objects, wrapped in <blocks>...</blocks> XML tags. Make sure to close the tag properly."""
PROMPT_EXTRACT_INFERRED_SCHEMA = """Here is the content from the URL:
<url>{URL}</url>
<url_content>
{HTML}
</url_content>
Please carefully read the URL content and the user's request. Analyze the page structure and infer the most appropriate JSON schema based on the content and request.
Extraction Strategy:
1. First, determine if the page contains repetitive items (like multiple products, articles, etc.) or a single content item (like a single article or page).
2. For repetitive items: Identify the common pattern and extract each instance as a separate JSON object in an array.
3. For single content: Extract the key information into a comprehensive JSON object that captures the essential details.
Extraction instructions:
Return the extracted information as a list of JSON objects. For repetitive content, each object in the list should correspond to a distinct item. For single content, you may return just one detailed JSON object. Wrap the entire JSON list in <blocks>...</blocks> XML tags.
Schema Design Guidelines:
- Create meaningful property names that clearly describe the data they contain
- Use nested objects for hierarchical information
- Use arrays for lists of related items
- Include all information requested by the user
- Maintain consistency in property names and data structures
- Only include properties that are actually present in the content
- For dates, prefer ISO format (YYYY-MM-DD)
- For prices or numeric values, extract them without currency symbols when possible
Quality Reflection:
Before outputting your final answer, double check that:
1. The inferred schema makes logical sense for the type of content
2. All requested information is included
3. The JSON is valid and could be parsed without errors
4. Property names are consistent and descriptive
5. The structure is optimal for the type of data being represented
Avoid Common Mistakes:
- Do NOT add any comments using "//" or "#" in the JSON output. It causes parsing errors.
- Make sure the JSON is properly formatted with curly braces, square brackets, and commas in the right places.
- Do not miss closing </blocks> tag at the end of the JSON output.
- Do not generate Python code showing how to do the task; this is your task to extract the information and return it in JSON format.
- Ensure consistency in property names across all objects
- Don't include empty properties or null values unless they're meaningful
- For repetitive content, ensure all objects follow the same schema
Important: If user specific instruction is provided, then stress significantly on what user is requesting and describing about the schema of end result (if any). If user is requesting to extract specific information, then focus on that and ignore the rest of the content.
<user_request>
{REQUEST}
</user_request>
Result:
Output the final list of JSON objects, wrapped in <blocks>...</blocks> XML tags. Make sure to close the tag properly.
DO NOT ADD ANY PRE OR POST COMMENTS. JUST RETURN THE JSON OBJECTS INSIDE <blocks>...</blocks> TAGS.
CRITICAL: The content inside the <blocks> tags MUST be a direct array of JSON objects (starting with '[' and ending with ']'), not a dictionary/object containing an array. For example, use <blocks>[{...}, {...}]</blocks> instead of <blocks>{"items": [{...}, {...}]}</blocks>. This is essential for proper parsing.
"""
PROMPT_FILTER_CONTENT = """Your task is to filter and convert HTML content into clean, focused markdown that's optimized for use with LLMs and information retrieval systems.

View File

@@ -4,9 +4,6 @@ from itertools import cycle
import os
########### ATTENTION PEOPLE OF EARTH ###########
# I have moved this config to async_configs.py and kept it here in case someone is still importing it; however,
# be a dear and use `from crawl4ai import ProxyConfig` instead :)
class ProxyConfig:
def __init__(
self,
@@ -122,12 +119,12 @@ class ProxyRotationStrategy(ABC):
"""Base abstract class for proxy rotation strategies"""
@abstractmethod
async def get_next_proxy(self) -> Optional[ProxyConfig]:
async def get_next_proxy(self) -> Optional[Dict]:
"""Get next proxy configuration from the strategy"""
pass
@abstractmethod
def add_proxies(self, proxies: List[ProxyConfig]):
def add_proxies(self, proxies: List[Dict]):
"""Add proxy configurations to the strategy"""
pass
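The abstract interface above suggests a simple round-robin rotation; itertools.cycle is already imported at the top of this file. A minimal illustrative implementation (the class name is mine):

from itertools import cycle
from typing import Dict, List, Optional

class RoundRobinProxyStrategy:
    """Illustrative ProxyRotationStrategy: hand out proxy configs in a loop."""

    def __init__(self):
        self._proxies: List[Dict] = []
        self._cycle = None

    def add_proxies(self, proxies: List[Dict]):
        self._proxies.extend(proxies)
        self._cycle = cycle(self._proxies)  # rebuild the iterator over the grown pool

    async def get_next_proxy(self) -> Optional[Dict]:
        return next(self._cycle) if self._proxies else None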

View File

@@ -9,44 +9,83 @@ from urllib.parse import urlparse
import OpenSSL.crypto
from pathlib import Path
# === Inherit from dict ===
class SSLCertificate(dict):
"""
A class representing an SSL certificate, behaving like a dictionary
for direct JSON serialization. It stores the certificate information internally
and provides methods for export and property access.
Inherits from dict, so instances are directly JSON serializable.
class SSLCertificate:
"""
A class representing an SSL certificate with methods to export in various formats.
# Use __slots__ for potential memory optimization if desired, though less common when inheriting dict
# __slots__ = ("_cert_info",) # If using slots, be careful with dict inheritance interaction
Attributes:
cert_info (Dict[str, Any]): The certificate information.
Methods:
from_url(url: str, timeout: int = 10) -> Optional['SSLCertificate']: Create SSLCertificate instance from a URL.
from_file(file_path: str) -> Optional['SSLCertificate']: Create SSLCertificate instance from a file.
from_binary(binary_data: bytes) -> Optional['SSLCertificate']: Create SSLCertificate instance from binary data.
export_as_pem() -> str: Export the certificate as PEM format.
export_as_der() -> bytes: Export the certificate as DER format.
export_as_json() -> Dict[str, Any]: Export the certificate as JSON format.
export_as_text() -> str: Export the certificate as text format.
"""
def __init__(self, cert_info: Dict[str, Any]):
self._cert_info = self._decode_cert_data(cert_info)
@staticmethod
def from_url(url: str, timeout: int = 10) -> Optional["SSLCertificate"]:
"""
Initializes the SSLCertificate object.
Create SSLCertificate instance from a URL.
Args:
cert_info (Dict[str, Any]): The raw certificate dictionary.
url (str): URL of the website.
timeout (int): Timeout for the connection (default: 10).
Returns:
Optional[SSLCertificate]: SSLCertificate instance if successful, None otherwise.
"""
# 1. Decode the data (handle bytes -> str)
decoded_info = self._decode_cert_data(cert_info)
try:
hostname = urlparse(url).netloc
if ":" in hostname:
hostname = hostname.split(":")[0]
# 2. Store the decoded info internally (optional but good practice)
# self._cert_info = decoded_info # You can keep this if methods rely on it
context = ssl.create_default_context()
with socket.create_connection((hostname, 443), timeout=timeout) as sock:
with context.wrap_socket(sock, server_hostname=hostname) as ssock:
cert_binary = ssock.getpeercert(binary_form=True)
x509 = OpenSSL.crypto.load_certificate(
OpenSSL.crypto.FILETYPE_ASN1, cert_binary
)
# 3. Initialize the dictionary part of the object with the decoded data
super().__init__(decoded_info)
cert_info = {
"subject": dict(x509.get_subject().get_components()),
"issuer": dict(x509.get_issuer().get_components()),
"version": x509.get_version(),
"serial_number": hex(x509.get_serial_number()),
"not_before": x509.get_notBefore(),
"not_after": x509.get_notAfter(),
"fingerprint": x509.digest("sha256").hex(),
"signature_algorithm": x509.get_signature_algorithm(),
"raw_cert": base64.b64encode(cert_binary),
}
# Add extensions
extensions = []
for i in range(x509.get_extension_count()):
ext = x509.get_extension(i)
extensions.append(
{"name": ext.get_short_name(), "value": str(ext)}
)
cert_info["extensions"] = extensions
return SSLCertificate(cert_info)
except Exception:
return None
@staticmethod
def _decode_cert_data(data: Any) -> Any:
"""Helper method to decode bytes in certificate data."""
if isinstance(data, bytes):
try:
# Try UTF-8 first, fallback to latin-1 for arbitrary bytes
return data.decode("utf-8")
except UnicodeDecodeError:
return data.decode("latin-1") # Or handle as needed, maybe hex representation
return data.decode("utf-8")
elif isinstance(data, dict):
return {
(
@@ -58,119 +97,36 @@ class SSLCertificate(dict):
return [SSLCertificate._decode_cert_data(item) for item in data]
return data
@staticmethod
def from_url(url: str, timeout: int = 10) -> Optional["SSLCertificate"]:
"""
Create SSLCertificate instance from a URL. Fetches cert info and initializes.
(Fetching logic remains the same)
"""
cert_info_raw = None # Variable to hold the fetched dict
try:
hostname = urlparse(url).netloc
if ":" in hostname:
hostname = hostname.split(":")[0]
context = ssl.create_default_context()
# Set check_hostname to False and verify_mode to CERT_NONE temporarily
# for potentially problematic certificates during fetch, but parse the result regardless.
# context.check_hostname = False
# context.verify_mode = ssl.CERT_NONE
with socket.create_connection((hostname, 443), timeout=timeout) as sock:
with context.wrap_socket(sock, server_hostname=hostname) as ssock:
cert_binary = ssock.getpeercert(binary_form=True)
if not cert_binary:
print(f"Warning: No certificate returned for {hostname}")
return None
x509 = OpenSSL.crypto.load_certificate(
OpenSSL.crypto.FILETYPE_ASN1, cert_binary
)
# Create the dictionary directly
cert_info_raw = {
"subject": dict(x509.get_subject().get_components()),
"issuer": dict(x509.get_issuer().get_components()),
"version": x509.get_version(),
"serial_number": hex(x509.get_serial_number()),
"not_before": x509.get_notBefore(), # Keep as bytes initially, _decode handles it
"not_after": x509.get_notAfter(), # Keep as bytes initially
"fingerprint": x509.digest("sha256").hex(), # hex() is already string
"signature_algorithm": x509.get_signature_algorithm(), # Keep as bytes
"raw_cert": base64.b64encode(cert_binary), # Base64 is bytes, _decode handles it
}
# Add extensions
extensions = []
for i in range(x509.get_extension_count()):
ext = x509.get_extension(i)
# get_short_name() returns bytes, str(ext) handles value conversion
extensions.append(
{"name": ext.get_short_name(), "value": str(ext)}
)
cert_info_raw["extensions"] = extensions
except ssl.SSLCertVerificationError as e:
print(f"SSL Verification Error for {url}: {e}")
# Decide if you want to proceed or return None based on your needs
# You might try fetching without verification here if needed, but be cautious.
return None
except socket.gaierror:
print(f"Could not resolve hostname: {hostname}")
return None
except socket.timeout:
print(f"Connection timed out for {url}")
return None
except Exception as e:
print(f"Error fetching/processing certificate for {url}: {e}")
# Log the full error details if needed: logging.exception("Cert fetch error")
return None
# If successful, create the SSLCertificate instance from the dictionary
if cert_info_raw:
return SSLCertificate(cert_info_raw)
else:
return None
# --- Properties now access the dictionary items directly via self[] ---
@property
def issuer(self) -> Dict[str, str]:
return self.get("issuer", {}) # Use self.get for safety
@property
def subject(self) -> Dict[str, str]:
return self.get("subject", {})
@property
def valid_from(self) -> str:
return self.get("not_before", "")
@property
def valid_until(self) -> str:
return self.get("not_after", "")
@property
def fingerprint(self) -> str:
return self.get("fingerprint", "")
# --- Export methods can use `self` directly as it is the dict ---
def to_json(self, filepath: Optional[str] = None) -> Optional[str]:
"""Export certificate as JSON."""
# `self` is already the dictionary we want to serialize
json_str = json.dumps(self, indent=2, ensure_ascii=False)
"""
Export certificate as JSON.
Args:
filepath (Optional[str]): Path to save the JSON file (default: None).
Returns:
Optional[str]: JSON string if successful, None otherwise.
"""
json_str = json.dumps(self._cert_info, indent=2, ensure_ascii=False)
if filepath:
Path(filepath).write_text(json_str, encoding="utf-8")
return None
return json_str
def to_pem(self, filepath: Optional[str] = None) -> Optional[str]:
"""Export certificate as PEM."""
"""
Export certificate as PEM.
Args:
filepath (Optional[str]): Path to save the PEM file (default: None).
Returns:
Optional[str]: PEM string if successful, None otherwise.
"""
try:
# Decode the raw_cert (which should be string due to _decode)
raw_cert_bytes = base64.b64decode(self.get("raw_cert", ""))
x509 = OpenSSL.crypto.load_certificate(
OpenSSL.crypto.FILETYPE_ASN1, raw_cert_bytes
OpenSSL.crypto.FILETYPE_ASN1,
base64.b64decode(self._cert_info["raw_cert"]),
)
pem_data = OpenSSL.crypto.dump_certificate(
OpenSSL.crypto.FILETYPE_PEM, x509
@@ -180,25 +136,49 @@ class SSLCertificate(dict):
Path(filepath).write_text(pem_data, encoding="utf-8")
return None
return pem_data
except Exception as e:
print(f"Error converting to PEM: {e}")
return None
except Exception:
return None
def to_der(self, filepath: Optional[str] = None) -> Optional[bytes]:
"""Export certificate as DER."""
"""
Export certificate as DER.
Args:
filepath (Optional[str]): Path to save the DER file (default: None).
Returns:
Optional[bytes]: DER bytes if successful, None otherwise.
"""
try:
# Decode the raw_cert (which should be string due to _decode)
der_data = base64.b64decode(self.get("raw_cert", ""))
der_data = base64.b64decode(self._cert_info["raw_cert"])
if filepath:
Path(filepath).write_bytes(der_data)
return None
return der_data
except Exception as e:
print(f"Error converting to DER: {e}")
return None
except Exception:
return None
# Optional: Add __repr__ for better debugging
def __repr__(self) -> str:
subject_cn = self.subject.get('CN', 'N/A')
issuer_cn = self.issuer.get('CN', 'N/A')
return f"<SSLCertificate Subject='{subject_cn}' Issuer='{issuer_cn}'>"
@property
def issuer(self) -> Dict[str, str]:
"""Get certificate issuer information."""
return self._cert_info.get("issuer", {})
@property
def subject(self) -> Dict[str, str]:
"""Get certificate subject information."""
return self._cert_info.get("subject", {})
@property
def valid_from(self) -> str:
"""Get certificate validity start date."""
return self._cert_info.get("not_before", "")
@property
def valid_until(self) -> str:
"""Get certificate validity end date."""
return self._cert_info.get("not_after", "")
@property
def fingerprint(self) -> str:
"""Get certificate fingerprint."""
return self._cert_info.get("fingerprint", "")

View File

@@ -178,10 +178,4 @@ if TYPE_CHECKING:
BestFirstCrawlingStrategy as BestFirstCrawlingStrategyType,
DFSDeepCrawlStrategy as DFSDeepCrawlStrategyType,
DeepCrawlDecorator as DeepCrawlDecoratorType,
)
def create_llm_config(*args, **kwargs) -> 'LLMConfigType':
from .async_configs import LLMConfig
return LLMConfig(*args, **kwargs)
)

View File

@@ -26,7 +26,7 @@ import cProfile
import pstats
from functools import wraps
import asyncio
from lxml import etree, html as lhtml
import sqlite3
import hashlib
@@ -1551,7 +1551,7 @@ def extract_xml_tags(string):
return list(set(tags))
def extract_xml_data_legacy(tags, string):
def extract_xml_data(tags, string):
"""
Extract data for specified XML tags from a string.
@@ -1580,38 +1580,6 @@ def extract_xml_data_legacy(tags, string):
return data
def extract_xml_data(tags, string):
"""
Extract data for specified XML tags from a string, returning the longest content for each tag.
How it works:
1. Finds all occurrences of each tag in the string using regex.
2. For each tag, selects the occurrence with the longest content.
3. Returns a dictionary of tag-content pairs.
Args:
tags (List[str]): The list of XML tags to extract.
string (str): The input string containing XML data.
Returns:
Dict[str, str]: A dictionary with tag names as keys and longest extracted content as values.
"""
data = {}
for tag in tags:
pattern = f"<{tag}>(.*?)</{tag}>"
matches = re.findall(pattern, string, re.DOTALL)
if matches:
# Find the longest content for this tag
longest_content = max(matches, key=len).strip()
data[tag] = longest_content
else:
data[tag] = ""
return data
def perform_completion_with_backoff(
provider,
@@ -1680,19 +1648,6 @@ def perform_completion_with_backoff(
"content": ["Rate limit error. Please try again later."],
}
]
except Exception as e:
raise e # Raise any other exceptions immediately
# print("Error during completion request:", str(e))
# error_message = e.message
# return [
# {
# "index": 0,
# "tags": ["error"],
# "content": [
# f"Error during LLM completion request. {error_message}"
# ],
# }
# ]
def extract_blocks(url, html, provider=DEFAULT_PROVIDER, api_token=None, base_url=None):
@@ -2003,10 +1958,6 @@ def normalize_url(href, base_url):
if not parsed_base.scheme or not parsed_base.netloc:
raise ValueError(f"Invalid base URL format: {base_url}")
# Ensure base_url ends with a trailing slash if it's a directory path
if not base_url.endswith('/'):
base_url = base_url + '/'
# Use urljoin to handle all cases
normalized = urljoin(base_url, href.strip())
return normalized
@@ -2051,7 +2002,7 @@ def normalize_url_for_deep_crawl(href, base_url):
normalized = urlunparse((
parsed.scheme,
netloc,
parsed.path.rstrip('/'), # Normalize trailing slash
parsed.path.rstrip('/') or '/', # Normalize trailing slash
parsed.params,
query,
fragment
@@ -2079,7 +2030,7 @@ def efficient_normalize_url_for_deep_crawl(href, base_url):
normalized = urlunparse((
parsed.scheme,
parsed.netloc.lower(),
parsed.path.rstrip('/'),
parsed.path,
parsed.params,
parsed.query,
'' # Remove fragment
@@ -2666,116 +2617,3 @@ class HeadPeekr:
def get_title(head_content: str):
title_match = re.search(r'<title>(.*?)</title>', head_content, re.IGNORECASE | re.DOTALL)
return title_match.group(1) if title_match else None
def preprocess_html_for_schema(html_content, text_threshold=100, attr_value_threshold=200, max_size=100000):
"""
Preprocess HTML to reduce size while preserving structure for schema generation.
Args:
html_content (str): Raw HTML content
text_threshold (int): Maximum length for text nodes before truncation
attr_value_threshold (int): Maximum length for attribute values before truncation
max_size (int): Target maximum size for output HTML
Returns:
str: Preprocessed HTML content
"""
try:
# Parse HTML with error recovery
parser = etree.HTMLParser(remove_comments=True, remove_blank_text=True)
tree = lhtml.fromstring(html_content, parser=parser)
# 1. Remove HEAD section (keep only BODY)
head_elements = tree.xpath('//head')
for head in head_elements:
if head.getparent() is not None:
head.getparent().remove(head)
# 2. Define tags to remove completely
tags_to_remove = [
'script', 'style', 'noscript', 'iframe', 'canvas', 'svg',
'video', 'audio', 'source', 'track', 'map', 'area'
]
# Remove unwanted elements
for tag in tags_to_remove:
elements = tree.xpath(f'//{tag}')
for element in elements:
if element.getparent() is not None:
element.getparent().remove(element)
# 3. Process remaining elements to clean attributes and truncate text
for element in tree.iter():
# Skip if we're at the root level
if element.getparent() is None:
continue
# Clean non-essential attributes but preserve structural ones
# attribs_to_keep = {'id', 'class', 'name', 'href', 'src', 'type', 'value', 'data-'}
# This is more aggressive than the previous version
attribs_to_keep = {'id', 'class', 'name', 'type', 'value'}
# attributes_hates_truncate = ['id', 'class', "data-"]
# Meaning: if an attribute value is too long, truncate it regardless; go find a better CSS selector to build the schema
attributes_hates_truncate = []
# Process each attribute
for attrib in list(element.attrib.keys()):
# Keep if it's essential or starts with data-
if not (attrib in attribs_to_keep or attrib.startswith('data-')):
element.attrib.pop(attrib)
# Truncate long attribute values except for selectors
elif attrib not in attributes_hates_truncate and len(element.attrib[attrib]) > attr_value_threshold:
element.attrib[attrib] = element.attrib[attrib][:attr_value_threshold] + '...'
# Truncate text content if it's too long
if element.text and len(element.text.strip()) > text_threshold:
element.text = element.text.strip()[:text_threshold] + '...'
# Also truncate tail text if present
if element.tail and len(element.tail.strip()) > text_threshold:
element.tail = element.tail.strip()[:text_threshold] + '...'
# 4. Find repeated patterns and keep only a few examples
# This is a simplistic approach - more sophisticated pattern detection could be implemented
pattern_elements = {}
for element in tree.xpath('//*[contains(@class, "")]'):
parent = element.getparent()
if parent is None:
continue
# Create a signature based on tag and classes
classes = element.get('class', '')
if not classes:
continue
signature = f"{element.tag}.{classes}"
if signature in pattern_elements:
pattern_elements[signature].append(element)
else:
pattern_elements[signature] = [element]
# Keep only 3 examples of each repeating pattern
for signature, elements in pattern_elements.items():
if len(elements) > 3:
# Keep the first 2 and last elements
for element in elements[2:-1]:
if element.getparent() is not None:
element.getparent().remove(element)
# 5. Convert back to string
result = etree.tostring(tree, encoding='unicode', method='html')
# If still over the size limit, apply more aggressive truncation
if len(result) > max_size:
return result[:max_size] + "..."
return result
except Exception as e:
# Fallback for parsing errors
return html_content[:max_size] if len(html_content) > max_size else html_content
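# Illustrative usage (threshold values are examples only):
#   compact_html = preprocess_html_for_schema(raw_html, text_threshold=80, max_size=50000)
#   # compact_html keeps ids/classes and a few samples of repeated patterns,
#   # which is usually enough context to derive a CSS extraction schema.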

137
deploy/aws/Dockerfile Normal file
View File

@@ -0,0 +1,137 @@
FROM python:3.10-slim
# Set build arguments
ARG APP_HOME=/app
ARG GITHUB_REPO=https://github.com/unclecode/crawl4ai.git
ARG GITHUB_BRANCH=next
ARG USE_LOCAL=False
ARG CONFIG_PATH=""
ENV PYTHONFAULTHANDLER=1 \
PYTHONHASHSEED=random \
PYTHONUNBUFFERED=1 \
PIP_NO_CACHE_DIR=1 \
PYTHONDONTWRITEBYTECODE=1 \
PIP_DISABLE_PIP_VERSION_CHECK=1 \
PIP_DEFAULT_TIMEOUT=100 \
DEBIAN_FRONTEND=noninteractive \
REDIS_HOST=localhost \
REDIS_PORT=6379
ARG PYTHON_VERSION=3.10
ARG INSTALL_TYPE=default
ARG ENABLE_GPU=false
ARG TARGETARCH
LABEL maintainer="unclecode"
LABEL description="🔥🕷️ Crawl4AI: Open-source LLM Friendly Web Crawler & scraper"
LABEL version="1.0"
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
curl \
wget \
gnupg \
git \
cmake \
pkg-config \
python3-dev \
libjpeg-dev \
redis-server \
supervisor \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y --no-install-recommends \
libglib2.0-0 \
libnss3 \
libnspr4 \
libatk1.0-0 \
libatk-bridge2.0-0 \
libcups2 \
libdrm2 \
libdbus-1-3 \
libxcb1 \
libxkbcommon0 \
libx11-6 \
libxcomposite1 \
libxdamage1 \
libxext6 \
libxfixes3 \
libxrandr2 \
libgbm1 \
libpango-1.0-0 \
libcairo2 \
libasound2 \
libatspi2.0-0 \
&& rm -rf /var/lib/apt/lists/*
RUN if [ "$ENABLE_GPU" = "true" ] && [ "$TARGETARCH" = "amd64" ] ; then \
apt-get update && apt-get install -y --no-install-recommends \
nvidia-cuda-toolkit \
&& rm -rf /var/lib/apt/lists/* ; \
else \
echo "Skipping NVIDIA CUDA Toolkit installation (unsupported platform or GPU disabled)"; \
fi
RUN if [ "$TARGETARCH" = "arm64" ]; then \
echo "🦾 Installing ARM-specific optimizations"; \
apt-get update && apt-get install -y --no-install-recommends \
libopenblas-dev \
&& rm -rf /var/lib/apt/lists/*; \
elif [ "$TARGETARCH" = "amd64" ]; then \
echo "🖥️ Installing AMD64-specific optimizations"; \
apt-get update && apt-get install -y --no-install-recommends \
libomp-dev \
&& rm -rf /var/lib/apt/lists/*; \
else \
echo "Skipping platform-specific optimizations (unsupported platform)"; \
fi
WORKDIR ${APP_HOME}
RUN git clone --branch ${GITHUB_BRANCH} ${GITHUB_REPO} /tmp/crawl4ai
COPY docker/supervisord.conf .
COPY docker/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
RUN if [ "$INSTALL_TYPE" = "all" ] ; then \
pip install "/tmp/crawl4ai/[all]" && \
python -m nltk.downloader punkt stopwords && \
python -m crawl4ai.model_loader ; \
elif [ "$INSTALL_TYPE" = "torch" ] ; then \
pip install "/tmp/crawl4ai/[torch]" ; \
elif [ "$INSTALL_TYPE" = "transformer" ] ; then \
pip install "/tmp/crawl4ai/[transformer]" && \
python -m crawl4ai.model_loader ; \
else \
pip install "/tmp/crawl4ai" ; \
fi
RUN pip install --no-cache-dir --upgrade pip && \
python -c "import crawl4ai; print('✅ crawl4ai is ready to rock!')" && \
python -c "from playwright.sync_api import sync_playwright; print('✅ Playwright is feeling dramatic!')"
RUN playwright install --with-deps chromium
COPY docker/* ${APP_HOME}/
RUN if [ -n "$CONFIG_PATH" ] && [ -f "$CONFIG_PATH" ]; then \
echo "Using custom config from $CONFIG_PATH" && \
cp $CONFIG_PATH /app/config.yml; \
fi
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD bash -c '\
MEM=$(free -m | awk "/^Mem:/{print \$2}"); \
if [ $MEM -lt 2048 ]; then \
echo "⚠️ Warning: Less than 2GB RAM available! Your container might need a memory boost! 🚀"; \
exit 1; \
fi && \
redis-cli ping > /dev/null && \
curl -f http://localhost:8000/health || exit 1'
# EXPOSE 6379
CMD ["supervisord", "-c", "supervisord.conf"]

3
deploy/aws/deploy-config.yml Executable file
View File

@@ -0,0 +1,3 @@
project_name: PROJECT_NAME
domain_name: DOMAIN_NAME
aws_region: AWS_REGION

729
deploy/aws/deploy.py Executable file
View File

@@ -0,0 +1,729 @@
#!/usr/bin/env python3
import argparse
import subprocess
import sys
import time
import json
import yaml
import requests
import os
# Steps for deployment
STEPS = [
"refresh_aws_auth",
"fetch_or_create_vpc_and_subnets",
"create_ecr_repositories",
"create_iam_role",
"create_security_groups",
"request_acm_certificate",
"build_and_push_docker",
"create_task_definition",
"setup_alb",
"deploy_ecs_service",
"configure_custom_domain",
"test_endpoints"
]
# Utility function to prompt user for confirmation
def confirm_step(step_name):
while True:
response = input(f"Proceed with {step_name}? (yes/no): ").strip().lower()
if response in ["yes", "no"]:
return response == "yes"
print("Please enter 'yes' or 'no'.")
# Utility function to run AWS CLI or shell commands and handle errors
def run_command(command, error_message, additional_diagnostics=None, cwd="."):
try:
result = subprocess.run(command, capture_output=True, text=True, check=True, cwd=cwd)
return result
except subprocess.CalledProcessError as e:
with open("error_context.md", "w") as f:
f.write(f"{error_message}:\n")
f.write(f"Command: {' '.join(command)}\n")
f.write(f"Exit Code: {e.returncode}\n")
f.write(f"Stdout: {e.stdout}\n")
f.write(f"Stderr: {e.stderr}\n")
if additional_diagnostics:
for diag_cmd in additional_diagnostics:
diag_result = subprocess.run(diag_cmd, capture_output=True, text=True)
f.write(f"\nDiagnostic command: {' '.join(diag_cmd)}\n")
f.write(f"Stdout: {diag_result.stdout}\n")
f.write(f"Stderr: {diag_result.stderr}\n")
raise Exception(f"{error_message}: {e.stderr}")
# Utility function to load or initialize state
def load_state(project_name):
state_file = f"{project_name}-state.json"
if os.path.exists(state_file):
with open(state_file, "r") as f:
return json.load(f)
return {"last_step": -1}
# Utility function to save state
def save_state(project_name, state):
state_file = f"{project_name}-state.json"
with open(state_file, "w") as f:
json.dump(state, f, indent=4)
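# Example of an accumulated state file ({project}-state.json) midway through a run
# (all identifiers are placeholders):
#   {"last_step": 4, "vpc_id": "vpc-0abc...", "public_subnets": ["subnet-...", "subnet-..."],
#    "execution_role_arn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
#    "alb_sg_id": "sg-...", "ecs_sg_id": "sg-..."}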
# DNS Check Function
def check_dns_propagation(domain, alb_dns):
try:
result = subprocess.run(["dig", "+short", domain], capture_output=True, text=True)
if alb_dns in result.stdout:
return True
return False
except Exception as e:
print(f"Failed to check DNS: {e}")
return False
# Step Functions
def refresh_aws_auth(project_name, state, config):
if state["last_step"] >= 0:
print("Skipping refresh_aws_auth (already completed)")
return
if not confirm_step("Refresh AWS authentication"):
sys.exit("User aborted.")
run_command(
["aws", "sts", "get-caller-identity"],
"Failed to verify AWS credentials"
)
print("AWS authentication verified.")
state["last_step"] = 0
save_state(project_name, state)
def fetch_or_create_vpc_and_subnets(project_name, state, config):
if state["last_step"] >= 1:
print("Skipping fetch_or_create_vpc_and_subnets (already completed)")
return state["vpc_id"], state["public_subnets"]
if not confirm_step("Fetch or Create VPC and Subnets"):
sys.exit("User aborted.")
# Fetch AWS account ID
result = run_command(
["aws", "sts", "get-caller-identity"],
"Failed to get AWS account ID"
)
account_id = json.loads(result.stdout)["Account"]
# Fetch default VPC
result = run_command(
["aws", "ec2", "describe-vpcs", "--filters", "Name=isDefault,Values=true", "--region", config["aws_region"]],
"Failed to describe VPCs"
)
vpcs = json.loads(result.stdout).get("Vpcs", [])
if not vpcs:
result = run_command(
["aws", "ec2", "create-vpc", "--cidr-block", "10.0.0.0/16", "--region", config["aws_region"]],
"Failed to create VPC"
)
vpc_id = json.loads(result.stdout)["Vpc"]["VpcId"]
run_command(
["aws", "ec2", "modify-vpc-attribute", "--vpc-id", vpc_id, "--enable-dns-hostnames", "--region", config["aws_region"]],
"Failed to enable DNS hostnames"
)
else:
vpc_id = vpcs[0]["VpcId"]
# Fetch or create subnets
result = run_command(
["aws", "ec2", "describe-subnets", "--filters", f"Name=vpc-id,Values={vpc_id}", "--region", config["aws_region"]],
"Failed to describe subnets"
)
subnets = json.loads(result.stdout).get("Subnets", [])
if len(subnets) < 2:
azs = json.loads(run_command(
["aws", "ec2", "describe-availability-zones", "--region", config["aws_region"]],
"Failed to describe availability zones"
).stdout)["AvailabilityZones"][:2]
subnet_ids = []
for i, az in enumerate(azs):
az_name = az["ZoneName"]
result = run_command(
["aws", "ec2", "create-subnet", "--vpc-id", vpc_id, "--cidr-block", f"10.0.{i}.0/24", "--availability-zone", az_name, "--region", config["aws_region"]],
f"Failed to create subnet in {az_name}"
)
subnet_id = json.loads(result.stdout)["Subnet"]["SubnetId"]
subnet_ids.append(subnet_id)
run_command(
["aws", "ec2", "modify-subnet-attribute", "--subnet-id", subnet_id, "--map-public-ip-on-launch", "--region", config["aws_region"]],
f"Failed to make subnet {subnet_id} public"
)
else:
subnet_ids = [s["SubnetId"] for s in subnets[:2]]
# Ensure internet gateway
result = run_command(
["aws", "ec2", "describe-internet-gateways", "--filters", f"Name=attachment.vpc-id,Values={vpc_id}", "--region", config["aws_region"]],
"Failed to describe internet gateways"
)
igws = json.loads(result.stdout).get("InternetGateways", [])
if not igws:
result = run_command(
["aws", "ec2", "create-internet-gateway", "--region", config["aws_region"]],
"Failed to create internet gateway"
)
igw_id = json.loads(result.stdout)["InternetGateway"]["InternetGatewayId"]
run_command(
["aws", "ec2", "attach-internet-gateway", "--vpc-id", vpc_id, "--internet-gateway-id", igw_id, "--region", config["aws_region"]],
"Failed to attach internet gateway"
)
state["vpc_id"] = vpc_id
state["public_subnets"] = subnet_ids
state["last_step"] = 1
save_state(project_name, state)
print(f"VPC ID: {vpc_id}, Subnets: {subnet_ids}")
return vpc_id, subnet_ids
def create_ecr_repositories(project_name, state, config):
if state["last_step"] >= 2:
print("Skipping create_ecr_repositories (already completed)")
return
if not confirm_step("Create ECR Repositories"):
sys.exit("User aborted.")
account_id = json.loads(run_command(
["aws", "sts", "get-caller-identity"],
"Failed to get AWS account ID"
).stdout)["Account"]
repos = [project_name, f"{project_name}-nginx"]
for repo in repos:
result = subprocess.run(
["aws", "ecr", "describe-repositories", "--repository-names", repo, "--region", config["aws_region"]],
capture_output=True, text=True
)
if result.returncode != 0:
run_command(
["aws", "ecr", "create-repository", "--repository-name", repo, "--region", config["aws_region"]],
f"Failed to create ECR repository {repo}"
)
print(f"ECR repository {repo} is ready.")
state["last_step"] = 2
save_state(project_name, state)
def create_iam_role(project_name, state, config):
if state["last_step"] >= 3:
print("Skipping create_iam_role (already completed)")
return
if not confirm_step("Create IAM Role"):
sys.exit("User aborted.")
account_id = json.loads(run_command(
["aws", "sts", "get-caller-identity"],
"Failed to get AWS account ID"
).stdout)["Account"]
role_name = "ecsTaskExecutionRole"
trust_policy = {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {"Service": "ecs-tasks.amazonaws.com"},
"Action": "sts:AssumeRole"
}
]
}
with open("trust_policy.json", "w") as f:
json.dump(trust_policy, f)
result = subprocess.run(
["aws", "iam", "get-role", "--role-name", role_name],
capture_output=True, text=True
)
if result.returncode != 0:
run_command(
["aws", "iam", "create-role", "--role-name", role_name, "--assume-role-policy-document", "file://trust_policy.json"],
f"Failed to create IAM role {role_name}"
)
run_command(
["aws", "iam", "attach-role-policy", "--role-name", role_name, "--policy-arn", "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"],
"Failed to attach ECS task execution policy"
)
os.remove("trust_policy.json")
state["execution_role_arn"] = f"arn:aws:iam::{account_id}:role/{role_name}"
state["last_step"] = 3
save_state(project_name, state)
print(f"IAM role {role_name} configured.")
def create_security_groups(project_name, state, config):
if state["last_step"] >= 4:
print("Skipping create_security_groups (already completed)")
return state["alb_sg_id"], state["ecs_sg_id"]
if not confirm_step("Create Security Groups"):
sys.exit("User aborted.")
vpc_id = state["vpc_id"]
alb_sg_name = f"{project_name}-alb-sg"
result = run_command(
["aws", "ec2", "describe-security-groups", "--filters", f"Name=vpc-id,Values={vpc_id}", f"Name=group-name,Values={alb_sg_name}", "--region", config["aws_region"]],
"Failed to describe ALB security group"
)
if not json.loads(result.stdout).get("SecurityGroups"):
result = run_command(
["aws", "ec2", "create-security-group", "--group-name", alb_sg_name, "--description", "Security group for ALB", "--vpc-id", vpc_id, "--region", config["aws_region"]],
"Failed to create ALB security group"
)
alb_sg_id = json.loads(result.stdout)["GroupId"]
run_command(
["aws", "ec2", "authorize-security-group-ingress", "--group-id", alb_sg_id, "--protocol", "tcp", "--port", "80", "--cidr", "0.0.0.0/0", "--region", config["aws_region"]],
"Failed to authorize HTTP ingress"
)
run_command(
["aws", "ec2", "authorize-security-group-ingress", "--group-id", alb_sg_id, "--protocol", "tcp", "--port", "443", "--cidr", "0.0.0.0/0", "--region", config["aws_region"]],
"Failed to authorize HTTPS ingress"
)
else:
alb_sg_id = json.loads(result.stdout)["SecurityGroups"][0]["GroupId"]
ecs_sg_name = f"{project_name}-ecs-sg"
result = run_command(
["aws", "ec2", "describe-security-groups", "--filters", f"Name=vpc-id,Values={vpc_id}", f"Name=group-name,Values={ecs_sg_name}", "--region", config["aws_region"]],
"Failed to describe ECS security group"
)
if not json.loads(result.stdout).get("SecurityGroups"):
result = run_command(
["aws", "ec2", "create-security-group", "--group-name", ecs_sg_name, "--description", "Security group for ECS tasks", "--vpc-id", vpc_id, "--region", config["aws_region"]],
"Failed to create ECS security group"
)
ecs_sg_id = json.loads(result.stdout)["GroupId"]
run_command(
["aws", "ec2", "authorize-security-group-ingress", "--group-id", ecs_sg_id, "--protocol", "tcp", "--port", "80", "--source-group", alb_sg_id, "--region", config["aws_region"]],
"Failed to authorize ECS ingress"
)
else:
ecs_sg_id = json.loads(result.stdout)["SecurityGroups"][0]["GroupId"]
state["alb_sg_id"] = alb_sg_id
state["ecs_sg_id"] = ecs_sg_id
state["last_step"] = 4
save_state(project_name, state)
print("Security groups configured.")
return alb_sg_id, ecs_sg_id
def request_acm_certificate(project_name, state, config):
if state["last_step"] >= 5:
print("Skipping request_acm_certificate (already completed)")
return state["cert_arn"]
if not confirm_step("Request ACM Certificate"):
sys.exit("User aborted.")
domain_name = config["domain_name"]
result = run_command(
["aws", "acm", "describe-certificates", "--certificate-statuses", "ISSUED", "--region", config["aws_region"]],
"Failed to describe certificates"
)
certificates = json.loads(result.stdout).get("CertificateSummaryList", [])
cert_arn = next((c["CertificateArn"] for c in certificates if c["DomainName"] == domain_name), None)
if not cert_arn:
result = run_command(
["aws", "acm", "request-certificate", "--domain-name", domain_name, "--validation-method", "DNS", "--region", config["aws_region"]],
"Failed to request ACM certificate"
)
cert_arn = json.loads(result.stdout)["CertificateArn"]
time.sleep(10)
result = run_command(
["aws", "acm", "describe-certificate", "--certificate-arn", cert_arn, "--region", config["aws_region"]],
"Failed to describe certificate"
)
cert_details = json.loads(result.stdout)["Certificate"]
dns_validations = cert_details.get("DomainValidationOptions", [])
for validation in dns_validations:
if validation["ValidationMethod"] == "DNS" and "ResourceRecord" in validation:
record = validation["ResourceRecord"]
print(f"Please add this DNS record to validate the certificate for {domain_name}:")
print(f"Name: {record['Name']}")
print(f"Type: {record['Type']}")
print(f"Value: {record['Value']}")
print("Press Enter after adding the DNS record...")
input()
while True:
result = run_command(
["aws", "acm", "describe-certificate", "--certificate-arn", cert_arn, "--region", config["aws_region"]],
"Failed to check certificate status"
)
status = json.loads(result.stdout)["Certificate"]["Status"]
if status == "ISSUED":
break
elif status in ["FAILED", "REVOKED", "INACTIVE"]:
print("Certificate issuance failed.")
sys.exit(1)
time.sleep(10)
state["cert_arn"] = cert_arn
state["last_step"] = 5
save_state(project_name, state)
print(f"Certificate ARN: {cert_arn}")
return cert_arn
def build_and_push_docker(project_name, state, config):
if state["last_step"] >= 6:
print("Skipping build_and_push_docker (already completed)")
return state["fastapi_image"], state["nginx_image"]
if not confirm_step("Build and Push Docker Images"):
sys.exit("User aborted.")
with open("./version.txt", "r") as f:
version = f.read().strip()
account_id = json.loads(run_command(
["aws", "sts", "get-caller-identity"],
"Failed to get AWS account ID"
).stdout)["Account"]
region = config["aws_region"]
login_password = run_command(
["aws", "ecr", "get-login-password", "--region", region],
"Failed to get ECR login password"
).stdout.strip()
run_command(
["docker", "login", "--username", "AWS", "--password", login_password, f"{account_id}.dkr.ecr.{region}.amazonaws.com"],
"Failed to authenticate Docker to ECR"
)
fastapi_image = f"{account_id}.dkr.ecr.{region}.amazonaws.com/{project_name}:{version}"
run_command(
["docker", "build", "-f", "Dockerfile", "-t", fastapi_image, "."],
"Failed to build FastAPI Docker image"
)
run_command(
["docker", "push", fastapi_image],
"Failed to push FastAPI image"
)
nginx_image = f"{account_id}.dkr.ecr.{region}.amazonaws.com/{project_name}-nginx:{version}"
run_command(
["docker", "build", "-f", "Dockerfile", "-t", nginx_image, "."],
"Failed to build Nginx Docker image",
cwd="./nginx"
)
run_command(
["docker", "push", nginx_image],
"Failed to push Nginx image"
)
state["fastapi_image"] = fastapi_image
state["nginx_image"] = nginx_image
state["last_step"] = 6
save_state(project_name, state)
print("Docker images built and pushed.")
return fastapi_image, nginx_image
def create_task_definition(project_name, state, config):
if state["last_step"] >= 7:
print("Skipping create_task_definition (already completed)")
return state["task_def_arn"]
if not confirm_step("Create Task Definition"):
sys.exit("User aborted.")
log_group = f"/ecs/{project_name}-logs"
result = run_command(
["aws", "logs", "describe-log-groups", "--log-group-name-prefix", log_group, "--region", config["aws_region"]],
"Failed to describe log groups"
)
if not any(lg["logGroupName"] == log_group for lg in json.loads(result.stdout).get("logGroups", [])):
run_command(
["aws", "logs", "create-log-group", "--log-group-name", log_group, "--region", config["aws_region"]],
f"Failed to create log group {log_group}"
)
task_definition = {
"family": f"{project_name}-taskdef",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "512",
"memory": "2048",
"executionRoleArn": state["execution_role_arn"],
"containerDefinitions": [
{
"name": "fastapi",
"image": state["fastapi_image"],
"portMappings": [{"containerPort": 8000, "hostPort": 8000, "protocol": "tcp"}],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": log_group,
"awslogs-region": config["aws_region"],
"awslogs-stream-prefix": "fastapi"
}
}
},
{
"name": "nginx",
"image": state["nginx_image"],
"portMappings": [{"containerPort": 80, "hostPort": 80, "protocol": "tcp"}],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": log_group,
"awslogs-region": config["aws_region"],
"awslogs-stream-prefix": "nginx"
}
}
}
]
}
with open("task_def.json", "w") as f:
json.dump(task_definition, f)
result = run_command(
["aws", "ecs", "register-task-definition", "--cli-input-json", "file://task_def.json", "--region", config["aws_region"]],
"Failed to register task definition"
)
task_def_arn = json.loads(result.stdout)["taskDefinition"]["taskDefinitionArn"]
os.remove("task_def.json")
state["task_def_arn"] = task_def_arn
state["last_step"] = 7
save_state(project_name, state)
print("Task definition created.")
return task_def_arn
def setup_alb(project_name, state, config):
if state["last_step"] >= 8:
print("Skipping setup_alb (already completed)")
return state["alb_arn"], state["tg_arn"], state["alb_dns"]
if not confirm_step("Set Up ALB"):
sys.exit("User aborted.")
vpc_id = state["vpc_id"]
public_subnets = state["public_subnets"]
alb_name = f"{project_name}-alb"
result = subprocess.run(
["aws", "elbv2", "describe-load-balancers", "--names", alb_name, "--region", config["aws_region"]],
capture_output=True, text=True
)
if result.returncode != 0:
run_command(
["aws", "elbv2", "create-load-balancer", "--name", alb_name, "--subnets"] + public_subnets + ["--security-groups", state["alb_sg_id"], "--region", config["aws_region"]],
"Failed to create ALB"
)
alb_arn = json.loads(run_command(
["aws", "elbv2", "describe-load-balancers", "--names", alb_name, "--region", config["aws_region"]],
"Failed to describe ALB"
).stdout)["LoadBalancers"][0]["LoadBalancerArn"]
alb_dns = json.loads(run_command(
["aws", "elbv2", "describe-load-balancers", "--names", alb_name, "--region", config["aws_region"]],
"Failed to get ALB DNS name"
).stdout)["LoadBalancers"][0]["DNSName"]
tg_name = f"{project_name}-tg"
result = subprocess.run(
["aws", "elbv2", "describe-target-groups", "--names", tg_name, "--region", config["aws_region"]],
capture_output=True, text=True
)
if result.returncode != 0:
run_command(
["aws", "elbv2", "create-target-group", "--name", tg_name, "--protocol", "HTTP", "--port", "80", "--vpc-id", vpc_id, "--region", config["aws_region"]],
"Failed to create target group"
)
tg_arn = json.loads(run_command(
["aws", "elbv2", "describe-target-groups", "--names", tg_name, "--region", config["aws_region"]],
"Failed to describe target group"
).stdout)["TargetGroups"][0]["TargetGroupArn"]
result = run_command(
["aws", "elbv2", "describe-listeners", "--load-balancer-arn", alb_arn, "--region", config["aws_region"]],
"Failed to describe listeners"
)
listeners = json.loads(result.stdout).get("Listeners", [])
if not any(l["Port"] == 80 for l in listeners):
run_command(
["aws", "elbv2", "create-listener", "--load-balancer-arn", alb_arn, "--protocol", "HTTP", "--port", "80", "--default-actions", "Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}", "--region", config["aws_region"]],
"Failed to create HTTP listener"
)
if not any(l["Port"] == 443 for l in listeners):
run_command(
["aws", "elbv2", "create-listener", "--load-balancer-arn", alb_arn, "--protocol", "HTTPS", "--port", "443", "--certificates", f"CertificateArn={state['cert_arn']}", "--default-actions", f"Type=forward,TargetGroupArn={tg_arn}", "--region", config["aws_region"]],
"Failed to create HTTPS listener"
)
state["alb_arn"] = alb_arn
state["tg_arn"] = tg_arn
state["alb_dns"] = alb_dns
state["last_step"] = 8
save_state(project_name, state)
print("ALB configured.")
return alb_arn, tg_arn, alb_dns
def deploy_ecs_service(project_name, state, config):
if state["last_step"] >= 9:
print("Skipping deploy_ecs_service (already completed)")
return
if not confirm_step("Deploy ECS Service"):
sys.exit("User aborted.")
cluster_name = f"{project_name}-cluster"
result = run_command(
["aws", "ecs", "describe-clusters", "--clusters", cluster_name, "--region", config["aws_region"]],
"Failed to describe clusters"
)
if not json.loads(result.stdout).get("clusters"):
run_command(
["aws", "ecs", "create-cluster", "--cluster-name", cluster_name, "--region", config["aws_region"]],
"Failed to create ECS cluster"
)
service_name = f"{project_name}-service"
result = run_command(
["aws", "ecs", "describe-services", "--cluster", cluster_name, "--services", service_name, "--region", config["aws_region"]],
"Failed to describe services",
additional_diagnostics=[["aws", "ecs", "list-tasks", "--cluster", cluster_name, "--service-name", service_name, "--region", config["aws_region"]]]
)
services = json.loads(result.stdout).get("services", [])
if not services or services[0]["status"] == "INACTIVE":
run_command(
["aws", "ecs", "create-service", "--cluster", cluster_name, "--service-name", service_name, "--task-definition", state["task_def_arn"], "--desired-count", "1", "--launch-type", "FARGATE", "--network-configuration", f"awsvpcConfiguration={{subnets={json.dumps(state['public_subnets'])},securityGroups=[{state['ecs_sg_id']}],assignPublicIp=ENABLED}}", "--load-balancers", f"targetGroupArn={state['tg_arn']},containerName=nginx,containerPort=80", "--region", config["aws_region"]],
"Failed to create ECS service"
)
else:
run_command(
["aws", "ecs", "update-service", "--cluster", cluster_name, "--service", service_name, "--task-definition", state["task_def_arn"], "--region", config["aws_region"]],
"Failed to update ECS service"
)
state["last_step"] = 9
save_state(project_name, state)
print("ECS service deployed.")
def configure_custom_domain(project_name, state, config):
if state["last_step"] >= 10:
print("Skipping configure_custom_domain (already completed)")
return
if not confirm_step("Configure Custom Domain"):
sys.exit("User aborted.")
domain_name = config["domain_name"]
alb_dns = state["alb_dns"]
print(f"Please add a CNAME record for {domain_name} pointing to {alb_dns} in your DNS provider.")
print("Press Enter after updating the DNS record...")
input()
while not check_dns_propagation(domain_name, alb_dns):
print("DNS propagation not complete. Waiting 30 seconds before retrying...")
time.sleep(30)
print("DNS propagation confirmed.")
state["last_step"] = 10
save_state(project_name, state)
print("Custom domain configured.")
def test_endpoints(project_name, state, config):
if state["last_step"] >= 11:
print("Skipping test_endpoints (already completed)")
return
if not confirm_step("Test Endpoints"):
sys.exit("User aborted.")
domain = config["domain_name"]
time.sleep(30) # Wait for service to stabilize
response = requests.get(f"https://{domain}/health", verify=False)
if response.status_code != 200:
with open("error_context.md", "w") as f:
f.write("Health endpoint test failed:\n")
f.write(f"Status Code: {response.status_code}\n")
f.write(f"Response: {response.text}\n")
sys.exit(1)
print("Health endpoint test passed.")
payload = {
"urls": ["https://example.com"],
"browser_config": {"headless": True},
"crawler_config": {"stream": False}
}
response = requests.post(f"https://{domain}/crawl", json=payload, verify=False)
if response.status_code != 200:
with open("error_context.md", "w") as f:
f.write("Crawl endpoint test failed:\n")
f.write(f"Status Code: {response.status_code}\n")
f.write(f"Response: {response.text}\n")
sys.exit(1)
print("Crawl endpoint test passed.")
state["last_step"] = 11
save_state(project_name, state)
print("Endpoints tested successfully.")
# Main Deployment Function
def deploy(project_name, force=False):
config_file = f"{project_name}-config.yml"
if not os.path.exists(config_file):
print(f"Configuration file {config_file} not found. Run 'init' first.")
sys.exit(1)
with open(config_file, "r") as f:
config = yaml.safe_load(f)
state = load_state(project_name)
if force:
state = {"last_step": -1}
last_step = state.get("last_step", -1)
for step_idx, step_name in enumerate(STEPS):
if step_idx <= last_step:
print(f"Skipping {step_name} (already completed)")
continue
print(f"Executing step: {step_name}")
func = globals()[step_name]
# Every step shares the same signature and persists its outputs into
# `state` itself, so return values don't need to be captured here.
func(project_name, state, config)
# Init Command
def init(project_name, domain_name, aws_region):
config = {
"project_name": project_name,
"domain_name": domain_name,
"aws_region": aws_region
}
config_file = f"{project_name}-config.yml"
with open(config_file, "w") as f:
yaml.dump(config, f)
print(f"Configuration file {config_file} created.")
# Argument Parser
parser = argparse.ArgumentParser(description="Crawl4AI Deployment Script")
subparsers = parser.add_subparsers(dest="command")
# Init Parser
init_parser = subparsers.add_parser("init", help="Initialize configuration")
init_parser.add_argument("--project", required=True, help="Project name")
init_parser.add_argument("--domain", required=True, help="Domain name")
init_parser.add_argument("--region", required=True, help="AWS region")
# Deploy Parser
deploy_parser = subparsers.add_parser("deploy", help="Deploy the project")
deploy_parser.add_argument("--project", required=True, help="Project name")
deploy_parser.add_argument("--force", action="store_true", help="Force redeployment from start")
args = parser.parse_args()
if args.command == "init":
init(args.project, args.domain, args.region)
elif args.command == "deploy":
deploy(args.project, args.force)
else:
parser.print_help()
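# Example usage (placeholder values):
#   python deploy.py init --project myproject --domain crawl.example.com --region us-east-1
#   python deploy.py deploy --project myproject
#   python deploy.py deploy --project myproject --force   # rerun every step from scratch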

View File

@@ -0,0 +1,31 @@
# .dockerignore
*
# Allow specific files and directories when using local installation
!crawl4ai/
!docs/
!deploy/docker/
!setup.py
!pyproject.toml
!README.md
!LICENSE
!MANIFEST.in
!setup.cfg
!mkdocs.yml
.git/
__pycache__/
*.pyc
*.pyo
*.pyd
.DS_Store
.env
.venv
venv/
tests/
coverage.xml
*.log
*.swp
*.egg-info/
dist/
build/

View File

@@ -0,0 +1,8 @@
# LLM Provider Keys
OPENAI_API_KEY=your_openai_key_here
DEEPSEEK_API_KEY=your_deepseek_key_here
ANTHROPIC_API_KEY=your_anthropic_key_here
GROQ_API_KEY=your_groq_key_here
TOGETHER_API_KEY=your_together_key_here
MISTRAL_API_KEY=your_mistral_key_here
GEMINI_API_TOKEN=your_gemini_key_here
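# Pass these to the container at startup, e.g.:
#   docker run -d -p 8000:8000 --env-file .llm.env crawl4ai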

847
deploy/aws/docker/README.md Normal file
View File

@@ -0,0 +1,847 @@
# Crawl4AI Docker Guide 🐳
## Table of Contents
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Local Build](#local-build)
- [Docker Hub](#docker-hub)
- [Dockerfile Parameters](#dockerfile-parameters)
- [Using the API](#using-the-api)
- [Understanding Request Schema](#understanding-request-schema)
- [REST API Examples](#rest-api-examples)
- [Python SDK](#python-sdk)
- [Metrics & Monitoring](#metrics--monitoring)
- [Deployment Scenarios](#deployment-scenarios)
- [Complete Examples](#complete-examples)
- [Getting Help](#getting-help)
## Prerequisites
Before we dive in, make sure you have:
- Docker installed and running (version 20.10.0 or higher)
- At least 4GB of RAM available for the container
- Python 3.10+ (if using the Python SDK)
- Node.js 16+ (if using the Node.js examples)
> 💡 **Pro tip**: Run `docker info` to check your Docker installation and available resources.
## Installation
### Local Build
Let's get your local environment set up step by step!
#### 1. Building the Image
First, clone the repository and build the Docker image:
```bash
# Clone the repository
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai/deploy
# Build the Docker image
docker build --platform=linux/amd64 --no-cache -t crawl4ai .
# Or build for arm64
docker build --platform=linux/arm64 --no-cache -t crawl4ai .
```
#### 2. Environment Setup
If you plan to use LLMs (large language models), you'll need to set up your API keys. Create a `.llm.env` file:
```env
# OpenAI
OPENAI_API_KEY=sk-your-key
# Anthropic
ANTHROPIC_API_KEY=your-anthropic-key
# DeepSeek
DEEPSEEK_API_KEY=your-deepseek-key
# Check out https://docs.litellm.ai/docs/providers for more providers!
```
> 🔑 **Note**: Keep your API keys secure! Never commit them to version control.
#### 3. Running the Container
You have several options for running the container:
Basic run (no LLM support):
```bash
docker run -d -p 8000:8000 --name crawl4ai crawl4ai
```
With LLM support:
```bash
docker run -d -p 8000:8000 \
--env-file .llm.env \
--name crawl4ai \
crawl4ai
```
Using host environment variables (not a good practice, but it works for local testing):
```bash
docker run -d -p 8000:8000 \
--env-file .llm.env \
--env "$(env)" \
--name crawl4ai \
crawl4ai
```
#### Multi-Platform Build
For distributing your image across different architectures, use `buildx`:
```bash
# Set up buildx builder
docker buildx create --use
# Build for multiple platforms
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t crawl4ai \
--push \
.
```
> 💡 **Note**: Multi-platform builds require Docker Buildx and need to be pushed to a registry.
#### Development Build
For development, you might want to enable all features:
```bash
docker build -t crawl4ai \
--build-arg INSTALL_TYPE=all \
--build-arg PYTHON_VERSION=3.10 \
--build-arg ENABLE_GPU=true \
.
```
#### GPU-Enabled Build
If you plan to use GPU acceleration:
```bash
docker build -t crawl4ai \
--build-arg ENABLE_GPU=true \
deploy/docker/
```
### Build Arguments Explained
| Argument | Description | Default | Options |
|----------|-------------|---------|----------|
| PYTHON_VERSION | Python version | 3.10 | 3.8, 3.9, 3.10 |
| INSTALL_TYPE | Feature set | default | default, all, torch, transformer |
| ENABLE_GPU | GPU support | false | true, false |
| APP_HOME | Install path | /app | any valid path |
### Build Best Practices
1. **Choose the Right Install Type**
- `default`: Basic installation and the smallest image; honestly, this is what I use most of the time.
- `all`: Full features and a larger image (includes transformers and NLTK; make sure you really need them)
2. **Platform Considerations**
- Let Docker auto-detect platform unless you need cross-compilation
- Use --platform for specific architecture requirements
- Consider buildx for multi-architecture distribution
3. **Performance Optimization**
- The image automatically includes platform-specific optimizations
- AMD64 gets OpenMP optimizations
- ARM64 gets OpenBLAS optimizations
### Docker Hub
> 🚧 Coming soon! The image will be available at `crawl4ai`. Stay tuned!
## Using the API
In the following sections, we cover two ways to communicate with the Docker server. The first is the Python client SDK I developed (a Node.js client is on the way); I highly recommend it because it protects you from malformed requests. The second, more technical route is to build the JSON request structure yourself and send it directly to the endpoints, which I explain in detail below.
### Python SDK
The SDK makes things easier! Here's how to use it:
```python
import asyncio
from crawl4ai.docker_client import Crawl4aiDockerClient
from crawl4ai import BrowserConfig, CrawlerRunConfig
async def main():
async with Crawl4aiDockerClient(base_url="http://localhost:8000", verbose=True) as client:
# If JWT is enabled, you can authenticate like this: (more on this later)
# await client.authenticate("test@example.com")
# Non-streaming crawl
results = await client.crawl(
["https://example.com", "https://python.org"],
browser_config=BrowserConfig(headless=True),
crawler_config=CrawlerRunConfig()
)
print(f"Non-streaming results: {results}")
# Streaming crawl
crawler_config = CrawlerRunConfig(stream=True)
async for result in await client.crawl(
["https://example.com", "https://python.org"],
browser_config=BrowserConfig(headless=True),
crawler_config=crawler_config
):
print(f"Streamed result: {result}")
# Get schema
schema = await client.get_schema()
print(f"Schema: {schema}")
if __name__ == "__main__":
asyncio.run(main())
```
`Crawl4aiDockerClient` is an async context manager that handles the connection for you. You can pass in optional parameters for more control:
- `base_url` (str): Base URL of the Crawl4AI Docker server
- `timeout` (float): Default timeout for requests in seconds
- `verify_ssl` (bool): Whether to verify SSL certificates
- `verbose` (bool): Whether to show logging output
- `log_file` (str, optional): Path to log file if file logging is desired
This client SDK generates a properly structured JSON request for the server's HTTP API.
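For instance, combining several of these options looks like this (all values here are illustrative, and `get_schema()` is the same call shown above):
```python
import asyncio
from crawl4ai.docker_client import Crawl4aiDockerClient

async def main():
    async with Crawl4aiDockerClient(
        base_url="http://localhost:8000",
        timeout=120.0,                   # give slow crawls more room
        verify_ssl=False,                # e.g. a lab setup with self-signed certs
        verbose=True,
        log_file="crawl4ai_client.log",  # hypothetical path; any writable path works
    ) as client:
        print(await client.get_schema())

asyncio.run(main())
```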
## Second Approach: Direct API Calls
This is super important! The API expects a specific structure that matches our Python classes. Let me show you how it works.
### Understanding Configuration Structure
Let's dive deep into how configurations work in Crawl4AI. Every configuration object follows a consistent pattern of `type` and `params`. This structure enables complex, nested configurations while maintaining clarity.
#### The Basic Pattern
Try this in Python to understand the structure:
```python
from crawl4ai import BrowserConfig
# Create a config and see its structure
config = BrowserConfig(headless=True)
print(config.dump())
```
This outputs:
```json
{
"type": "BrowserConfig",
"params": {
"headless": true
}
}
```
#### Simple vs Complex Values
The structure follows these rules:
- Simple values (strings, numbers, booleans, lists) are passed directly
- Complex values (classes, dictionaries) use the type-params pattern
For example, with dictionaries:
```json
{
"browser_config": {
"type": "BrowserConfig",
"params": {
"headless": true, // Simple boolean - direct value
"viewport": { // Complex dictionary - needs type-params
"type": "dict",
"value": {
"width": 1200,
"height": 800
}
}
}
}
}
```
#### Strategy Pattern and Nesting
Strategies (like chunking or content filtering) demonstrate why we need this structure. Consider this chunking configuration:
```json
{
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"chunking_strategy": {
"type": "RegexChunking", // Strategy implementation
"params": {
"patterns": ["\n\n", "\\.\\s+"]
}
}
}
}
}
```
Here, `chunking_strategy` accepts any chunking implementation. The `type` field tells the system which strategy to use, and `params` configures that specific strategy.
#### Complex Nested Example
Let's look at a more complex example with content filtering:
```json
{
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.48,
"threshold_type": "fixed"
}
}
}
}
}
}
}
```
This shows how deeply configurations can nest while maintaining a consistent structure.
#### Quick Grammar Overview
```
config := {
"type": string,
"params": {
key: simple_value | complex_value
}
}
simple_value := string | number | boolean | [simple_value]
complex_value := config | dict_value
dict_value := {
"type": "dict",
"value": object
}
```
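To make the grammar concrete, here is a minimal recursive checker for this shape; it's an illustrative sketch, not part of the SDK:
```python
def is_valid_config(obj):
    """Recursively check the type/params shape described above."""
    if isinstance(obj, (str, int, float, bool)) or obj is None:
        return True  # simple values are passed directly
    if isinstance(obj, list):
        return all(is_valid_config(v) for v in obj)  # lists are passed directly
    if isinstance(obj, dict):
        if obj.get("type") == "dict":  # wrapped dictionary
            return isinstance(obj.get("value"), dict)
        if "type" in obj and "params" in obj:  # class instance
            return isinstance(obj["params"], dict) and all(
                is_valid_config(v) for v in obj["params"].values()
            )
        return False  # bare dicts must use the {"type": "dict", "value": ...} wrapper
    return False

# Example: the BrowserConfig payload from earlier validates cleanly
print(is_valid_config({"type": "BrowserConfig", "params": {"headless": True}}))  # True
```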
#### Important Rules 🚨
- Always use the type-params pattern for class instances
- Use direct values for primitives (numbers, strings, booleans)
- Wrap dictionaries with {"type": "dict", "value": {...}}
- Arrays/lists are passed directly without type-params
- All parameters are optional unless specifically required
#### Pro Tip 💡
The easiest way to get the correct structure is to:
1. Create configuration objects in Python
2. Use the `dump()` method to see their JSON representation
3. Use that JSON in your API calls
Example:
```python
from crawl4ai import CrawlerRunConfig, PruningContentFilter
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
config = CrawlerRunConfig(
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(threshold=0.48)
)
)
print(config.dump())  # Use this JSON in your API calls
```
#### More Examples
**Advanced Crawler Configuration**
```json
{
"urls": ["https://example.com"],
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"cache_mode": "bypass",
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.48,
"threshold_type": "fixed",
"min_word_threshold": 0
}
}
}
}
}
}
}
```
**Extraction Strategy**:
```json
{
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"extraction_strategy": {
"type": "JsonCssExtractionStrategy",
"params": {
"schema": {
"baseSelector": "article.post",
"fields": [
{"name": "title", "selector": "h1", "type": "text"},
{"name": "content", "selector": ".content", "type": "html"}
]
}
}
}
}
}
}
```
**LLM Extraction Strategy**
```json
{
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"extraction_strategy": {
"type": "LLMExtractionStrategy",
"params": {
"instruction": "Extract article title, author, publication date and main content",
"provider": "openai/gpt-4",
"api_token": "your-api-token",
"schema": {
"type": "dict",
"value": {
"title": "Article Schema",
"type": "object",
"properties": {
"title": {
"type": "string",
"description": "The article's headline"
},
"author": {
"type": "string",
"description": "The author's name"
},
"published_date": {
"type": "string",
"format": "date-time",
"description": "Publication date and time"
},
"content": {
"type": "string",
"description": "The main article content"
}
},
"required": ["title", "content"]
}
}
}
}
}
}
}
```
**Deep Crawler Example**
```json
{
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"deep_crawl_strategy": {
"type": "BFSDeepCrawlStrategy",
"params": {
"max_depth": 3,
"max_pages": 100,
"filter_chain": {
"type": "FastFilterChain",
"params": {
"filters": [
{
"type": "FastContentTypeFilter",
"params": {
"allowed_types": ["text/html", "application/xhtml+xml"]
}
},
{
"type": "FastDomainFilter",
"params": {
"allowed_domains": ["blog.*", "docs.*"],
"blocked_domains": ["ads.*", "analytics.*"]
}
},
{
"type": "FastURLPatternFilter",
"params": {
"allowed_patterns": ["^/blog/", "^/docs/"],
"blocked_patterns": [".*/ads/", ".*/sponsored/"]
}
}
]
}
},
"url_scorer": {
"type": "FastCompositeScorer",
"params": {
"scorers": [
{
"type": "FastKeywordRelevanceScorer",
"params": {
"keywords": ["tutorial", "guide", "documentation"],
"weight": 1.0
}
},
{
"type": "FastPathDepthScorer",
"params": {
"weight": 0.5,
"preferred_depth": 2
}
},
{
"type": "FastFreshnessScorer",
"params": {
"weight": 0.8,
"max_age_days": 365
}
}
]
}
}
}
}
}
}
}
```
### REST API Examples
Let's look at some practical examples:
#### Simple Crawl
```python
import requests
crawl_payload = {
"urls": ["https://example.com"],
"browser_config": {"headless": True},
"crawler_config": {"stream": False}
}
response = requests.post(
"http://localhost:8000/crawl",
# headers={"Authorization": f"Bearer {token}"}, # If JWT is enabled, more on this later
json=crawl_payload
)
print(response.json()) # Print the response for debugging
```
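The same request can be sent with plain curl, assuming the server is listening on localhost:8000 as above:
```bash
curl -X POST http://localhost:8000/crawl \
  -H "Content-Type: application/json" \
  -d '{"urls": ["https://example.com"], "browser_config": {"headless": true}, "crawler_config": {"stream": false}}'
```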
#### Streaming Results
```python
import json
async def test_stream_crawl(session, token: str):
"""Test the /crawl/stream endpoint with multiple URLs."""
url = "http://localhost:8000/crawl/stream"
payload = {
"urls": [
"https://example.com",
"https://example.com/page1",
"https://example.com/page2",
"https://example.com/page3",
],
"browser_config": {"headless": True, "viewport": {"width": 1200}},
"crawler_config": {"stream": True, "cache_mode": "aggressive"}
}
# headers = {"Authorization": f"Bearer {token}"} # If JWT is enabled, more on this later
try:
async with session.post(url, json=payload, headers=headers) as response:
status = response.status
print(f"Status: {status} (Expected: 200)")
assert status == 200, f"Expected 200, got {status}"
# Read streaming response line-by-line (NDJSON)
async for line in response.content:
if line:
data = json.loads(line.decode('utf-8').strip())
print(f"Streamed Result: {json.dumps(data, indent=2)}")
except Exception as e:
print(f"Error in streaming crawl test: {str(e)}")
```
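A minimal driver for the function above might look like this (assuming `aiohttp` is installed; an empty token is fine while JWT is disabled):
```python
import asyncio
import aiohttp

async def main():
    async with aiohttp.ClientSession() as session:
        # An empty token works when JWT auth is disabled (the default)
        await test_stream_crawl(session, token="")

if __name__ == "__main__":
    asyncio.run(main())
```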
## Metrics & Monitoring
Keep an eye on your crawler with these endpoints:
- `/health` - Quick health check
- `/metrics` - Detailed Prometheus metrics
- `/schema` - Full API schema
Example health check:
```bash
curl http://localhost:8000/health
```
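Prometheus metrics are fetched the same way:
```bash
curl http://localhost:8000/metrics
```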
## Deployment Scenarios
> 🚧 Coming soon! We'll cover:
> - Kubernetes deployment
> - Cloud provider setups (AWS, GCP, Azure)
> - High-availability configurations
> - Load balancing strategies
## Complete Examples
Check out the `examples` folder in our repository for full working examples! Here are two to get you started:
[Using Client SDK](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_python_sdk_example.py)
[Using REST API](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_python_rest_api_example.py)
## Server Configuration
The server's behavior can be customized through the `config.yml` file. Let's explore how to configure your Crawl4AI server for optimal performance and security.
### Understanding config.yml
The configuration file is located at `deploy/docker/config.yml`. You can either modify this file before building the image or mount a custom configuration when running the container.
Here's a detailed breakdown of the configuration options:
```yaml
# Application Configuration
app:
title: "Crawl4AI API" # Server title in OpenAPI docs
version: "1.0.0" # API version
host: "0.0.0.0" # Listen on all interfaces
port: 8000 # Server port
reload: True # Enable hot reloading (development only)
timeout_keep_alive: 300 # Keep-alive timeout in seconds
# Rate Limiting Configuration
rate_limiting:
enabled: True # Enable/disable rate limiting
default_limit: "100/minute" # Rate limit format: "number/timeunit"
trusted_proxies: [] # List of trusted proxy IPs
storage_uri: "memory://" # Use "redis://localhost:6379" for production
# Security Configuration
security:
enabled: false # Master toggle for security features
jwt_enabled: true # Enable JWT authentication
https_redirect: True # Force HTTPS
trusted_hosts: ["*"] # Allowed hosts (use specific domains in production)
headers: # Security headers
x_content_type_options: "nosniff"
x_frame_options: "DENY"
content_security_policy: "default-src 'self'"
strict_transport_security: "max-age=63072000; includeSubDomains"
# Crawler Configuration
crawler:
memory_threshold_percent: 95.0 # Memory usage threshold
rate_limiter:
base_delay: [1.0, 2.0] # Min and max delay between requests
timeouts:
stream_init: 30.0 # Stream initialization timeout
batch_process: 300.0 # Batch processing timeout
# Logging Configuration
logging:
level: "INFO" # Log level (DEBUG, INFO, WARNING, ERROR)
format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
# Observability Configuration
observability:
prometheus:
enabled: True # Enable Prometheus metrics
endpoint: "/metrics" # Metrics endpoint
health_check:
endpoint: "/health" # Health check endpoint
```
### JWT Authentication
When `security.jwt_enabled` is set to `true` in your config.yml, all endpoints require JWT authentication via bearer tokens. Here's how it works:
#### Getting a Token
```http
POST /token
Content-Type: application/json
{
"email": "user@example.com"
}
```
The endpoint returns:
```json
{
"email": "user@example.com",
"access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOi...",
"token_type": "bearer"
}
```
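For example, with curl against a local server:
```bash
curl -X POST http://localhost:8000/token \
  -H "Content-Type: application/json" \
  -d '{"email": "user@example.com"}'
```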
#### Using the Token
Add the token to your requests:
```bash
curl -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGci..." http://localhost:8000/crawl
```
Using the Python SDK:
```python
from crawl4ai.docker_client import Crawl4aiDockerClient
async with Crawl4aiDockerClient() as client:
# Authenticate first
await client.authenticate("user@example.com")
# Now all requests will include the token automatically
result = await client.crawl(urls=["https://example.com"])
```
#### Production Considerations 💡
The default implementation uses a simple email verification. For production use, consider:
- Email verification via OTP/magic links
- OAuth2 integration
- Rate limiting token generation
- Token expiration and refresh mechanisms
- IP-based restrictions
### Configuration Tips and Best Practices
1. **Production Settings** 🏭
```yaml
app:
reload: False # Disable reload in production
timeout_keep_alive: 120 # Lower timeout for better resource management
rate_limiting:
storage_uri: "redis://redis:6379" # Use Redis for distributed rate limiting
default_limit: "50/minute" # More conservative rate limit
security:
enabled: true # Enable all security features
trusted_hosts: ["your-domain.com"] # Restrict to your domain
```
2. **Development Settings** 🛠️
```yaml
app:
reload: True # Enable hot reloading
timeout_keep_alive: 300 # Longer timeout for debugging
logging:
level: "DEBUG" # More verbose logging
```
3. **High-Traffic Settings** 🚦
```yaml
crawler:
memory_threshold_percent: 85.0 # More conservative memory limit
rate_limiter:
base_delay: [2.0, 4.0] # More aggressive rate limiting
```
### Customizing Your Configuration
#### Method 1: Pre-build Configuration
```bash
# Edit the default config before building
cd crawl4ai/deploy
vim docker/config.yml # Or use any editor; this file is baked into the image at build time
# Build with custom config
docker build --platform=linux/amd64 --no-cache -t crawl4ai:latest .
```
#### Method 2: Build-time Configuration
Use a custom config during build:
```bash
# Build with custom config
docker build --platform=linux/amd64 --no-cache \
--build-arg CONFIG_PATH=/path/to/custom-config.yml \
-t crawl4ai:latest .
```
#### Method 3: Runtime Configuration
```bash
# Mount custom config at runtime
docker run -d -p 8000:8000 \
-v $(pwd)/custom-config.yml:/app/config.yml \
crawl4ai-server:prod
```
> 💡 Note: When using Method 2, `/path/to/custom-config.yml` is relative to the deploy directory.
> 💡 Note: When using Method 3, ensure your custom config file has all required fields as the container will use this instead of the built-in config.
### Configuration Recommendations
1. **Security First** 🔒
- Always enable security in production
- Use specific trusted_hosts instead of wildcards
- Set up proper rate limiting to protect your server
- Consider your environment before enabling HTTPS redirect
2. **Resource Management** 💻
- Adjust memory_threshold_percent based on available RAM
- Set timeouts according to your content size and network conditions
- Use Redis for rate limiting in multi-container setups
3. **Monitoring** 📊
- Enable Prometheus if you need metrics
- Set DEBUG logging in development, INFO in production
- Regular health check monitoring is crucial
4. **Performance Tuning** ⚡
- Start with conservative rate limiter delays
- Increase batch_process timeout for large content
- Adjust stream_init timeout based on initial response times
## Getting Help
We're here to help you succeed with Crawl4AI! Here's how to get support:
- 📖 Check our [full documentation](https://docs.crawl4ai.com)
- 🐛 Found a bug? [Open an issue](https://github.com/unclecode/crawl4ai/issues)
- 💬 Join our [Discord community](https://discord.gg/crawl4ai)
- ⭐ Star us on GitHub to show support!
## Summary
In this guide, we've covered everything you need to get started with Crawl4AI's Docker deployment:
- Building and running the Docker container
- Configuring the environment
- Making API requests with proper typing
- Using the Python SDK
- Monitoring your deployment
Remember, the examples in the `examples` folder are your friends - they show real-world usage patterns that you can adapt for your needs.
Keep exploring, and don't hesitate to reach out if you need help! We're building something amazing together. 🚀
Happy crawling! 🕷️

442
deploy/aws/docker/api.py Normal file
View File

@@ -0,0 +1,442 @@
import os
import json
import asyncio
from typing import List, Tuple
import logging
from typing import Optional, AsyncGenerator
from urllib.parse import unquote
from fastapi import HTTPException, Request, status
from fastapi.background import BackgroundTasks
from fastapi.responses import JSONResponse
from redis import asyncio as aioredis
from crawl4ai import (
AsyncWebCrawler,
CrawlerRunConfig,
LLMExtractionStrategy,
CacheMode,
BrowserConfig,
MemoryAdaptiveDispatcher,
RateLimiter
)
from crawl4ai.utils import perform_completion_with_backoff
from crawl4ai.content_filter_strategy import (
PruningContentFilter,
BM25ContentFilter,
LLMContentFilter
)
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
from crawl4ai.content_scraping_strategy import LXMLWebScrapingStrategy
from utils import (
TaskStatus,
FilterType,
get_base_url,
is_task_id,
should_cleanup_task,
decode_redis_hash
)
logger = logging.getLogger(__name__)
async def handle_llm_qa(
url: str,
query: str,
config: dict
) -> str:
"""Process QA using LLM with crawled content as context."""
try:
# Extract base URL by finding last '?q=' occurrence
last_q_index = url.rfind('?q=')
if last_q_index != -1:
url = url[:last_q_index]
# Get markdown content
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url)
if not result.success:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=result.error_message
)
content = result.markdown_v2.fit_markdown
# Create prompt and get LLM response
prompt = f"""Use the following content as context to answer the question.
Content:
{content}
Question: {query}
Answer:"""
response = perform_completion_with_backoff(
provider=config["llm"]["provider"],
prompt_with_variables=prompt,
api_token=os.environ.get(config["llm"].get("api_key_env", ""))
)
return response.choices[0].message.content
except Exception as e:
logger.error(f"QA processing error: {str(e)}", exc_info=True)
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
async def process_llm_extraction(
redis: aioredis.Redis,
config: dict,
task_id: str,
url: str,
instruction: str,
schema: Optional[str] = None,
cache: str = "0"
) -> None:
"""Process LLM extraction in background."""
try:
# If config['llm'] has api_key then ignore the api_key_env
api_key = ""
if "api_key" in config["llm"]:
api_key = config["llm"]["api_key"]
else:
api_key = os.environ.get(config["llm"].get("api_key_env", None), "")
llm_strategy = LLMExtractionStrategy(
provider=config["llm"]["provider"],
api_token=api_key,
instruction=instruction,
schema=json.loads(schema) if schema else None,
)
cache_mode = CacheMode.ENABLED if cache == "1" else CacheMode.WRITE_ONLY
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url=url,
config=CrawlerRunConfig(
extraction_strategy=llm_strategy,
scraping_strategy=LXMLWebScrapingStrategy(),
cache_mode=cache_mode
)
)
if not result.success:
await redis.hset(f"task:{task_id}", mapping={
"status": TaskStatus.FAILED,
"error": result.error_message
})
return
try:
content = json.loads(result.extracted_content)
except json.JSONDecodeError:
content = result.extracted_content
await redis.hset(f"task:{task_id}", mapping={
"status": TaskStatus.COMPLETED,
"result": json.dumps(content)
})
except Exception as e:
logger.error(f"LLM extraction error: {str(e)}", exc_info=True)
await redis.hset(f"task:{task_id}", mapping={
"status": TaskStatus.FAILED,
"error": str(e)
})
async def handle_markdown_request(
url: str,
filter_type: FilterType,
query: Optional[str] = None,
cache: str = "0",
config: Optional[dict] = None
) -> str:
"""Handle markdown generation requests."""
try:
decoded_url = unquote(url)
if not decoded_url.startswith(('http://', 'https://')):
decoded_url = 'https://' + decoded_url
if filter_type == FilterType.RAW:
md_generator = DefaultMarkdownGenerator()
else:
content_filter = {
FilterType.FIT: PruningContentFilter(),
FilterType.BM25: BM25ContentFilter(user_query=query or ""),
FilterType.LLM: LLMContentFilter(
provider=config["llm"]["provider"],
api_token=os.environ.get(config["llm"].get("api_key_env", ""), ""),
instruction=query or "Extract main content"
)
}[filter_type]
md_generator = DefaultMarkdownGenerator(content_filter=content_filter)
cache_mode = CacheMode.ENABLED if cache == "1" else CacheMode.WRITE_ONLY
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url=decoded_url,
config=CrawlerRunConfig(
markdown_generator=md_generator,
scraping_strategy=LXMLWebScrapingStrategy(),
cache_mode=cache_mode
)
)
if not result.success:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=result.error_message
)
return (result.markdown_v2.raw_markdown
if filter_type == FilterType.RAW
else result.markdown_v2.fit_markdown)
except Exception as e:
logger.error(f"Markdown error: {str(e)}", exc_info=True)
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
async def handle_llm_request(
redis: aioredis.Redis,
background_tasks: BackgroundTasks,
request: Request,
input_path: str,
query: Optional[str] = None,
schema: Optional[str] = None,
cache: str = "0",
config: Optional[dict] = None
) -> JSONResponse:
"""Handle LLM extraction requests."""
base_url = get_base_url(request)
try:
if is_task_id(input_path):
return await handle_task_status(
redis, input_path, base_url
)
if not query:
return JSONResponse({
"message": "Please provide an instruction",
"_links": {
"example": {
"href": f"{base_url}/llm/{input_path}?q=Extract+main+content",
"title": "Try this example"
}
}
})
return await create_new_task(
redis,
background_tasks,
input_path,
query,
schema,
cache,
base_url,
config
)
except Exception as e:
logger.error(f"LLM endpoint error: {str(e)}", exc_info=True)
return JSONResponse({
"error": str(e),
"_links": {
"retry": {"href": str(request.url)}
}
}, status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
async def handle_task_status(
redis: aioredis.Redis,
task_id: str,
base_url: str
) -> JSONResponse:
"""Handle task status check requests."""
task = await redis.hgetall(f"task:{task_id}")
if not task:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND,
detail="Task not found"
)
task = decode_redis_hash(task)
response = create_task_response(task, task_id, base_url)
if task["status"] in [TaskStatus.COMPLETED, TaskStatus.FAILED]:
if should_cleanup_task(task["created_at"]):
await redis.delete(f"task:{task_id}")
return JSONResponse(response)
async def create_new_task(
redis: aioredis.Redis,
background_tasks: BackgroundTasks,
input_path: str,
query: str,
schema: Optional[str],
cache: str,
base_url: str,
config: dict
) -> JSONResponse:
"""Create and initialize a new task."""
decoded_url = unquote(input_path)
if not decoded_url.startswith(('http://', 'https://')):
decoded_url = 'https://' + decoded_url
from datetime import datetime
task_id = f"llm_{int(datetime.now().timestamp())}_{id(background_tasks)}"
await redis.hset(f"task:{task_id}", mapping={
"status": TaskStatus.PROCESSING,
"created_at": datetime.now().isoformat(),
"url": decoded_url
})
background_tasks.add_task(
process_llm_extraction,
redis,
config,
task_id,
decoded_url,
query,
schema,
cache
)
return JSONResponse({
"task_id": task_id,
"status": TaskStatus.PROCESSING,
"url": decoded_url,
"_links": {
"self": {"href": f"{base_url}/llm/{task_id}"},
"status": {"href": f"{base_url}/llm/{task_id}"}
}
})
def create_task_response(task: dict, task_id: str, base_url: str) -> dict:
"""Create response for task status check."""
response = {
"task_id": task_id,
"status": task["status"],
"created_at": task["created_at"],
"url": task["url"],
"_links": {
"self": {"href": f"{base_url}/llm/{task_id}"},
"refresh": {"href": f"{base_url}/llm/{task_id}"}
}
}
if task["status"] == TaskStatus.COMPLETED:
response["result"] = json.loads(task["result"])
elif task["status"] == TaskStatus.FAILED:
response["error"] = task["error"]
return response
async def stream_results(crawler: AsyncWebCrawler, results_gen: AsyncGenerator) -> AsyncGenerator[bytes, None]:
"""Stream results with heartbeats and completion markers."""
import json
from utils import datetime_handler
try:
async for result in results_gen:
try:
result_dict = result.model_dump()
logger.info(f"Streaming result for {result_dict.get('url', 'unknown')}")
data = json.dumps(result_dict, default=datetime_handler) + "\n"
yield data.encode('utf-8')
except Exception as e:
logger.error(f"Serialization error: {e}")
error_response = {"error": str(e), "url": getattr(result, 'url', 'unknown')}
yield (json.dumps(error_response) + "\n").encode('utf-8')
yield json.dumps({"status": "completed"}).encode('utf-8')
except asyncio.CancelledError:
logger.warning("Client disconnected during streaming")
finally:
try:
await crawler.close()
except Exception as e:
logger.error(f"Crawler cleanup error: {e}")
async def handle_crawl_request(
urls: List[str],
browser_config: dict,
crawler_config: dict,
config: dict
) -> dict:
"""Handle non-streaming crawl requests."""
try:
browser_config = BrowserConfig.load(browser_config)
crawler_config = CrawlerRunConfig.load(crawler_config)
dispatcher = MemoryAdaptiveDispatcher(
memory_threshold_percent=config["crawler"]["memory_threshold_percent"],
rate_limiter=RateLimiter(
base_delay=tuple(config["crawler"]["rate_limiter"]["base_delay"])
)
)
async with AsyncWebCrawler(config=browser_config) as crawler:
results = await crawler.arun_many(
urls=urls,
config=crawler_config,
dispatcher=dispatcher
)
return {
"success": True,
"results": [result.model_dump() for result in results]
}
except Exception as e:
logger.error(f"Crawl error: {str(e)}", exc_info=True)
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
async def handle_stream_crawl_request(
urls: List[str],
browser_config: dict,
crawler_config: dict,
config: dict
) -> Tuple[AsyncWebCrawler, AsyncGenerator]:
"""Handle streaming crawl requests."""
try:
browser_config = BrowserConfig.load(browser_config)
browser_config.verbose = True
crawler_config = CrawlerRunConfig.load(crawler_config)
crawler_config.scraping_strategy = LXMLWebScrapingStrategy()
dispatcher = MemoryAdaptiveDispatcher(
memory_threshold_percent=config["crawler"]["memory_threshold_percent"],
rate_limiter=RateLimiter(
base_delay=tuple(config["crawler"]["rate_limiter"]["base_delay"])
)
)
crawler = AsyncWebCrawler(config=browser_config)
await crawler.start()
results_gen = await crawler.arun_many(
urls=urls,
config=crawler_config,
dispatcher=dispatcher
)
return crawler, results_gen
except Exception as e:
if 'crawler' in locals():
await crawler.close()
logger.error(f"Stream crawl error: {str(e)}", exc_info=True)
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)

deploy/aws/docker/auth.py Normal file

@@ -0,0 +1,46 @@
import os
from datetime import datetime, timedelta, timezone
from typing import Dict, Optional
from jwt import JWT, jwk_from_dict
from jwt.utils import get_int_from_datetime
from fastapi import Depends, HTTPException
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from pydantic import EmailStr
from pydantic.main import BaseModel
import base64
instance = JWT()
security = HTTPBearer()
SECRET_KEY = os.environ.get("SECRET_KEY", "mysecret")
ACCESS_TOKEN_EXPIRE_MINUTES = 60
def get_jwk_from_secret(secret: str):
"""Convert a secret string into a JWK object."""
secret_bytes = secret.encode('utf-8')
b64_secret = base64.urlsafe_b64encode(secret_bytes).rstrip(b'=').decode('utf-8')
return jwk_from_dict({"kty": "oct", "k": b64_secret})
def create_access_token(data: dict, expires_delta: Optional[timedelta] = None) -> str:
"""Create a JWT access token with an expiration."""
to_encode = data.copy()
expire = datetime.now(timezone.utc) + (expires_delta or timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES))
to_encode.update({"exp": get_int_from_datetime(expire)})
signing_key = get_jwk_from_secret(SECRET_KEY)
return instance.encode(to_encode, signing_key, alg='HS256')
def verify_token(credentials: HTTPAuthorizationCredentials = Depends(security)) -> Dict:
"""Verify the JWT token from the Authorization header."""
token = credentials.credentials
verifying_key = get_jwk_from_secret(SECRET_KEY)
try:
payload = instance.decode(token, verifying_key, do_time_check=True, algorithms='HS256')
return payload
except Exception:
raise HTTPException(status_code=401, detail="Invalid or expired token")
def get_token_dependency(config: Dict):
"""Return the token dependency if JWT is enabled, else None."""
return verify_token if config.get("security", {}).get("jwt_enabled", False) else None
class TokenRequest(BaseModel):
email: EmailStr
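# Usage sketch (illustrative, not part of the deployed module): mint a
# short-lived token and check that it round-trips with the same secret.
# The email below is a placeholder.
#
#   from datetime import timedelta
#   token = create_access_token({"sub": "user@example.com"}, timedelta(minutes=5))
#   payload = instance.decode(token, get_jwk_from_secret(SECRET_KEY),
#                             do_time_check=True, algorithms='HS256')
#   assert payload["sub"] == "user@example.com"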


@@ -0,0 +1,71 @@
# Application Configuration
app:
title: "Crawl4AI API"
version: "1.0.0"
host: "0.0.0.0"
port: 8000
reload: True
timeout_keep_alive: 300
# Default LLM Configuration
llm:
provider: "openai/gpt-4o-mini"
api_key_env: "OPENAI_API_KEY"
# api_key: sk-... # If you pass the API key directly then api_key_env will be ignored
# Redis Configuration
redis:
host: "localhost"
port: 6379
db: 0
password: ""
ssl: False
  ssl_cert_reqs: None
  ssl_ca_certs: None
  ssl_certfile: None
  ssl_keyfile: None
# Rate Limiting Configuration
rate_limiting:
enabled: True
default_limit: "1000/minute"
trusted_proxies: []
storage_uri: "memory://" # Use "redis://localhost:6379" for production
# Security Configuration
security:
enabled: true
jwt_enabled: true
https_redirect: false
trusted_hosts: ["*"]
headers:
x_content_type_options: "nosniff"
x_frame_options: "DENY"
content_security_policy: "default-src 'self'"
strict_transport_security: "max-age=63072000; includeSubDomains"
# Crawler Configuration
crawler:
memory_threshold_percent: 95.0
rate_limiter:
base_delay: [1.0, 2.0]
timeouts:
stream_init: 30.0 # Timeout for stream initialization
batch_process: 300.0 # Timeout for batch processing
# Logging Configuration
logging:
level: "INFO"
format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
# Observability Configuration
observability:
prometheus:
enabled: True
endpoint: "/metrics"
health_check:
endpoint: "/health"


@@ -0,0 +1,10 @@
crawl4ai
fastapi
uvicorn
gunicorn>=23.0.0
slowapi>=0.1.9
prometheus-fastapi-instrumentator>=7.0.2
redis>=5.2.1
jwt>=1.3.1
dnspython>=2.7.0
email-validator>=2.2.0

deploy/aws/docker/server.py Normal file

@@ -0,0 +1,181 @@
import os
import sys
import time
from typing import List, Optional, Dict
from fastapi import FastAPI, HTTPException, Request, Query, Path, Depends
from fastapi.responses import StreamingResponse, RedirectResponse, PlainTextResponse, JSONResponse
from fastapi.middleware.httpsredirect import HTTPSRedirectMiddleware
from fastapi.middleware.trustedhost import TrustedHostMiddleware
from pydantic import BaseModel, Field
from slowapi import Limiter
from slowapi.util import get_remote_address
from prometheus_fastapi_instrumentator import Instrumentator
from redis import asyncio as aioredis
sys.path.append(os.path.dirname(os.path.realpath(__file__)))
from utils import FilterType, load_config, setup_logging, verify_email_domain
from api import (
handle_markdown_request,
handle_llm_qa,
handle_stream_crawl_request,
handle_crawl_request,
stream_results
)
from auth import create_access_token, get_token_dependency, TokenRequest # Import from auth.py
__version__ = "0.2.6"
class CrawlRequest(BaseModel):
urls: List[str] = Field(min_length=1, max_length=100)
browser_config: Optional[Dict] = Field(default_factory=dict)
crawler_config: Optional[Dict] = Field(default_factory=dict)
# Load configuration and setup
config = load_config()
setup_logging(config)
# Initialize Redis
redis = aioredis.from_url(config["redis"].get("uri", "redis://localhost"))
# Initialize rate limiter
limiter = Limiter(
key_func=get_remote_address,
default_limits=[config["rate_limiting"]["default_limit"]],
storage_uri=config["rate_limiting"]["storage_uri"]
)
app = FastAPI(
title=config["app"]["title"],
version=config["app"]["version"]
)
# Configure middleware
def setup_security_middleware(app, config):
sec_config = config.get("security", {})
if sec_config.get("enabled", False):
if sec_config.get("https_redirect", False):
app.add_middleware(HTTPSRedirectMiddleware)
if sec_config.get("trusted_hosts", []) != ["*"]:
app.add_middleware(TrustedHostMiddleware, allowed_hosts=sec_config["trusted_hosts"])
setup_security_middleware(app, config)
# Prometheus instrumentation
if config["observability"]["prometheus"]["enabled"]:
Instrumentator().instrument(app).expose(app)
# Get token dependency based on config
token_dependency = get_token_dependency(config)
# Middleware for security headers
@app.middleware("http")
async def add_security_headers(request: Request, call_next):
response = await call_next(request)
if config["security"]["enabled"]:
response.headers.update(config["security"]["headers"])
return response
# Token endpoint (always available, but usage depends on config)
@app.post("/token")
async def get_token(request_data: TokenRequest):
if not verify_email_domain(request_data.email):
raise HTTPException(status_code=400, detail="Invalid email domain")
token = create_access_token({"sub": request_data.email})
return {"email": request_data.email, "access_token": token, "token_type": "bearer"}
# Endpoints with conditional auth
@app.get("/md/{url:path}")
@limiter.limit(config["rate_limiting"]["default_limit"])
async def get_markdown(
request: Request,
url: str,
f: FilterType = FilterType.FIT,
q: Optional[str] = None,
c: Optional[str] = "0",
token_data: Optional[Dict] = Depends(token_dependency)
):
result = await handle_markdown_request(url, f, q, c, config)
return PlainTextResponse(result)
@app.get("/llm/{url:path}", description="URL should be without http/https prefix")
async def llm_endpoint(
request: Request,
url: str = Path(...),
q: Optional[str] = Query(None),
token_data: Optional[Dict] = Depends(token_dependency)
):
if not q:
raise HTTPException(status_code=400, detail="Query parameter 'q' is required")
if not url.startswith(('http://', 'https://')):
url = 'https://' + url
try:
answer = await handle_llm_qa(url, q, config)
return JSONResponse({"answer": answer})
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@app.get("/schema")
async def get_schema():
from crawl4ai import BrowserConfig, CrawlerRunConfig
return {"browser": BrowserConfig().dump(), "crawler": CrawlerRunConfig().dump()}
@app.get(config["observability"]["health_check"]["endpoint"])
async def health():
return {"status": "ok", "timestamp": time.time(), "version": __version__}
@app.get(config["observability"]["prometheus"]["endpoint"])
async def metrics():
return RedirectResponse(url=config["observability"]["prometheus"]["endpoint"])
@app.post("/crawl")
@limiter.limit(config["rate_limiting"]["default_limit"])
async def crawl(
request: Request,
crawl_request: CrawlRequest,
token_data: Optional[Dict] = Depends(token_dependency)
):
if not crawl_request.urls:
raise HTTPException(status_code=400, detail="At least one URL required")
results = await handle_crawl_request(
urls=crawl_request.urls,
browser_config=crawl_request.browser_config,
crawler_config=crawl_request.crawler_config,
config=config
)
return JSONResponse(results)
@app.post("/crawl/stream")
@limiter.limit(config["rate_limiting"]["default_limit"])
async def crawl_stream(
request: Request,
crawl_request: CrawlRequest,
token_data: Optional[Dict] = Depends(token_dependency)
):
if not crawl_request.urls:
raise HTTPException(status_code=400, detail="At least one URL required")
crawler, results_gen = await handle_stream_crawl_request(
urls=crawl_request.urls,
browser_config=crawl_request.browser_config,
crawler_config=crawl_request.crawler_config,
config=config
)
return StreamingResponse(
stream_results(crawler, results_gen),
media_type='application/x-ndjson',
headers={'Cache-Control': 'no-cache', 'Connection': 'keep-alive', 'X-Stream-Status': 'active'}
)
if __name__ == "__main__":
import uvicorn
uvicorn.run(
"server:app",
host=config["app"]["host"],
port=config["app"]["port"],
reload=config["app"]["reload"],
timeout_keep_alive=config["app"]["timeout_keep_alive"]
)


@@ -0,0 +1,12 @@
[supervisord]
nodaemon=true
[program:redis]
command=redis-server
autorestart=true
priority=10
[program:gunicorn]
command=gunicorn --bind 0.0.0.0:8000 --workers 4 --threads 2 --timeout 300 --graceful-timeout 60 --keep-alive 65 --log-level debug --worker-class uvicorn.workers.UvicornWorker --max-requests 1000 --max-requests-jitter 50 server:app
autorestart=true
priority=20


@@ -0,0 +1,66 @@
import dns.resolver
import logging
import yaml
from datetime import datetime
from enum import Enum
from pathlib import Path
from fastapi import Request
from typing import Dict, Optional
class TaskStatus(str, Enum):
PROCESSING = "processing"
FAILED = "failed"
COMPLETED = "completed"
class FilterType(str, Enum):
RAW = "raw"
FIT = "fit"
BM25 = "bm25"
LLM = "llm"
def load_config() -> Dict:
"""Load and return application configuration."""
config_path = Path(__file__).parent / "config.yml"
with open(config_path, "r") as config_file:
return yaml.safe_load(config_file)
def setup_logging(config: Dict) -> None:
"""Configure application logging."""
logging.basicConfig(
level=config["logging"]["level"],
format=config["logging"]["format"]
)
def get_base_url(request: Request) -> str:
"""Get base URL including scheme and host."""
return f"{request.url.scheme}://{request.url.netloc}"
def is_task_id(value: str) -> bool:
"""Check if the value matches task ID pattern."""
return value.startswith("llm_") and "_" in value
def datetime_handler(obj: any) -> Optional[str]:
"""Handle datetime serialization for JSON."""
if hasattr(obj, 'isoformat'):
return obj.isoformat()
raise TypeError(f"Object of type {type(obj)} is not JSON serializable")
def should_cleanup_task(created_at: str) -> bool:
"""Check if task should be cleaned up based on creation time."""
created = datetime.fromisoformat(created_at)
return (datetime.now() - created).total_seconds() > 3600
def decode_redis_hash(hash_data: Dict[bytes, bytes]) -> Dict[str, str]:
"""Decode Redis hash data from bytes to strings."""
return {k.decode('utf-8'): v.decode('utf-8') for k, v in hash_data.items()}
def verify_email_domain(email: str) -> bool:
try:
domain = email.split('@')[1]
# Try to resolve MX records for the domain.
records = dns.resolver.resolve(domain, 'MX')
        return bool(records)
    except Exception:
        return False

deploy/aws/howto.md Normal file

@@ -0,0 +1,77 @@
# Crawl4AI API Quickstart
This document shows how to generate an API token and use it to call the `/crawl` and `/md` endpoints.
---
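## 0. Generate an API Token
If the deployment has JWT enabled (`security.jwt_enabled` in `config.yml`), obtain a token from the `/token` endpoint first. A minimal sketch, assuming the endpoint defined in `server.py`; the email is a placeholder and its domain must have resolvable MX records:
```python
import requests

BASE_URL = "https://api.crawl4ai.com"

# POST /token takes {"email": ...}; the server validates the domain's MX records
resp = requests.post(f"{BASE_URL}/token", json={"email": "you@yourcompany.com"})
resp.raise_for_status()
TOKEN = resp.json()["access_token"]  # pass as "Authorization: Bearer <TOKEN>"
```
---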
## 1. Crawl Example
Send a POST request to `/crawl` with the following JSON payload:
```json
{
"urls": ["https://example.com"],
"browser_config": { "headless": true, "verbose": true },
"crawler_config": { "stream": false, "cache_mode": "enabled" }
}
```
**cURL Command:**
```bash
curl -X POST "https://api.crawl4ai.com/crawl" \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"urls": ["https://example.com"],
"browser_config": {"headless": true, "verbose": true},
"crawler_config": {"stream": false, "cache_mode": "enabled"}
}'
```
---
## 2. Markdown Retrieval Example
To retrieve markdown from a given URL (e.g., `https://example.com`), use:
```bash
curl -X GET "https://api.crawl4ai.com/md/example.com" \
-H "Authorization: Bearer YOUR_API_TOKEN"
```
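The `/md` endpoint also accepts optional query parameters (see `get_markdown` in `server.py`): `f` selects the content filter (`raw`, `fit`, `bm25`, or `llm`), `q` supplies a query for the `bm25`/`llm` filters, and `c="1"` enables the cache. A sketch with an illustrative query:
```python
import requests

resp = requests.get(
    "https://api.crawl4ai.com/md/example.com",
    params={"f": "bm25", "q": "main content", "c": "1"},
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
)
print(resp.text)  # filtered markdown
```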
---
## 3. Python Code Example (Using `requests`)
Below is a sample Python script that demonstrates using the `requests` library to call the API endpoints:
```python
import requests
BASE_URL = "https://api.crawl4ai.com"
TOKEN = "YOUR_API_TOKEN" # Replace with your actual token
headers = {
"Authorization": f"Bearer {TOKEN}",
"Content-Type": "application/json"
}
# Crawl endpoint example
crawl_payload = {
"urls": ["https://example.com"],
"browser_config": {"headless": True, "verbose": True},
"crawler_config": {"stream": False, "cache_mode": "enabled"}
}
crawl_response = requests.post(f"{BASE_URL}/crawl", json=crawl_payload, headers=headers)
print("Crawl Response:", crawl_response.json())
# /md endpoint example
md_response = requests.get(f"{BASE_URL}/md/example.com", headers=headers)
print("Markdown Content:", md_response.text)
```
---
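## 4. LLM Q&A Example
The `/llm` endpoint answers a question about a page with the configured LLM (see `llm_endpoint` in `server.py`); the `q` query parameter is required. A minimal sketch:
```python
import requests

resp = requests.get(
    "https://api.crawl4ai.com/llm/example.com",
    params={"q": "What is this page about?"},
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
)
print(resp.json()["answer"])
```
---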
Happy crawling!


@@ -0,0 +1,2 @@
FROM nginx:alpine
COPY nginx.conf /etc/nginx/conf.d/default.conf


@@ -0,0 +1,55 @@
server {
listen 80;
server_name api.crawl4ai.com;
# Main logging settings
error_log /var/log/nginx/error.log debug;
access_log /var/log/nginx/access.log combined buffer=512k flush=1m;
# Timeout and buffering settings
proxy_connect_timeout 300;
proxy_send_timeout 300;
proxy_read_timeout 300;
send_timeout 300;
proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
# Health check location
location /health {
proxy_pass http://127.0.0.1:8000/health;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Main proxy for application endpoints
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
add_header X-Debug-Info $request_uri;
proxy_request_buffering off;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_buffering off;
}
# New endpoint: serve Nginx error log
location /nginx/error {
# Using "alias" to serve the error log file
alias /var/log/nginx/error.log;
# Optionally, you might restrict access with "allow" and "deny" directives.
}
# New endpoint: serve Nginx access log
location /nginx/access {
alias /var/log/nginx/access.log;
}
client_max_body_size 10M;
client_body_buffer_size 128k;
}

deploy/aws/version.txt Normal file

@@ -0,0 +1 @@
v0.1.0


@@ -1,644 +0,0 @@
# Crawl4AI Docker Guide 🐳
## Table of Contents
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Option 1: Using Docker Compose (Recommended)](#option-1-using-docker-compose-recommended)
- [Option 2: Manual Local Build & Run](#option-2-manual-local-build--run)
- [Option 3: Using Pre-built Docker Hub Images](#option-3-using-pre-built-docker-hub-images)
- [Dockerfile Parameters](#dockerfile-parameters)
- [Using the API](#using-the-api)
- [Understanding Request Schema](#understanding-request-schema)
- [REST API Examples](#rest-api-examples)
- [Python SDK](#python-sdk)
- [Metrics & Monitoring](#metrics--monitoring)
- [Deployment Scenarios](#deployment-scenarios)
- [Complete Examples](#complete-examples)
- [Server Configuration](#server-configuration)
- [Understanding config.yml](#understanding-configyml)
- [JWT Authentication](#jwt-authentication)
- [Configuration Tips and Best Practices](#configuration-tips-and-best-practices)
- [Customizing Your Configuration](#customizing-your-configuration)
- [Configuration Recommendations](#configuration-recommendations)
- [Getting Help](#getting-help)
## Prerequisites
Before we dive in, make sure you have:
- Docker installed and running (version 20.10.0 or higher), including `docker compose` (usually bundled with Docker Desktop).
- `git` for cloning the repository.
- At least 4GB of RAM available for the container (more recommended for heavy use).
- Python 3.10+ (if using the Python SDK).
- Node.js 16+ (if using the Node.js examples).
> 💡 **Pro tip**: Run `docker info` to check your Docker installation and available resources.
## Installation
We offer several ways to get the Crawl4AI server running. Docker Compose is the easiest way to manage local builds and runs.
### Option 1: Using Docker Compose (Recommended)
Docker Compose simplifies building and running the service, especially for local development and testing across different platforms.
#### 1. Clone Repository
```bash
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai
```
#### 2. Environment Setup (API Keys)
If you plan to use LLMs, copy the example environment file and add your API keys. This file should be in the **project root directory**.
```bash
# Make sure you are in the 'crawl4ai' root directory
cp deploy/docker/.llm.env.example .llm.env
# Now edit .llm.env and add your API keys
# Example content:
# OPENAI_API_KEY=sk-your-key
# ANTHROPIC_API_KEY=your-anthropic-key
# ...
```
> 🔑 **Note**: Keep your API keys secure! Never commit `.llm.env` to version control.
#### 3. Build and Run with Compose
The `docker-compose.yml` file in the project root defines services for different scenarios using **profiles**.
* **Build and Run Locally (AMD64):**
```bash
# Builds the image locally using Dockerfile and runs it
docker compose --profile local-amd64 up --build -d
```
* **Build and Run Locally (ARM64):**
```bash
# Builds the image locally using Dockerfile and runs it
docker compose --profile local-arm64 up --build -d
```
* **Run Pre-built Image from Docker Hub (AMD64):**
```bash
# Pulls and runs the specified AMD64 image from Docker Hub
# (Set VERSION env var for specific tags, e.g., VERSION=0.5.1-d1)
docker compose --profile hub-amd64 up -d
```
* **Run Pre-built Image from Docker Hub (ARM64):**
```bash
# Pulls and runs the specified ARM64 image from Docker Hub
docker compose --profile hub-arm64 up -d
```
> The server will be available at `http://localhost:11235`.
#### 4. Stopping Compose Services
```bash
# Stop the service(s) associated with a profile (e.g., local-amd64)
docker compose --profile local-amd64 down
```
### Option 2: Manual Local Build & Run
Use this option if you prefer not to use Docker Compose for local builds.
#### 1. Clone Repository & Setup Environment
Follow steps 1 and 2 from the Docker Compose section above (clone repo, `cd crawl4ai`, create `.llm.env` in the root).
#### 2. Build the Image (Multi-Arch)
Use `docker buildx` to build the image. This example builds for multiple platforms and loads the image matching your host architecture into the local Docker daemon.
```bash
# Make sure you are in the 'crawl4ai' root directory
docker buildx build --platform linux/amd64,linux/arm64 -t crawl4ai-local:latest --load .
```
#### 3. Run the Container
* **Basic run (no LLM support):**
```bash
# Replace --platform if your host is ARM64
docker run -d \
-p 11235:11235 \
--name crawl4ai-standalone \
--shm-size=1g \
--platform linux/amd64 \
crawl4ai-local:latest
```
* **With LLM support:**
```bash
# Make sure .llm.env is in the current directory (project root)
# Replace --platform if your host is ARM64
docker run -d \
-p 11235:11235 \
--name crawl4ai-standalone \
--env-file .llm.env \
--shm-size=1g \
--platform linux/amd64 \
crawl4ai-local:latest
```
> The server will be available at `http://localhost:11235`.
#### 4. Stopping the Manual Container
```bash
docker stop crawl4ai-standalone && docker rm crawl4ai-standalone
```
### Option 3: Using Pre-built Docker Hub Images
Pull and run images directly from Docker Hub without building locally.
#### 1. Pull the Image
We use a versioning scheme like `LIBRARY_VERSION-dREVISION` (e.g., `0.5.1-d1`). The `latest` tag points to the most recent stable release. Images are built with multi-arch manifests, so Docker usually pulls the correct version for your system automatically.
```bash
# Pull a specific version (recommended for stability)
docker pull unclecode/crawl4ai:0.5.1-d1
# Or pull the latest stable version
docker pull unclecode/crawl4ai:latest
```
#### 2. Setup Environment (API Keys)
If using LLMs, create the `.llm.env` file in a directory of your choice, similar to Step 2 in the Compose section.
#### 3. Run the Container
* **Basic run:**
```bash
docker run -d \
-p 11235:11235 \
--name crawl4ai-hub \
--shm-size=1g \
unclecode/crawl4ai:0.5.1-d1 # Or use :latest
```
* **With LLM support:**
```bash
# Make sure .llm.env is in the current directory you are running docker from
docker run -d \
-p 11235:11235 \
--name crawl4ai-hub \
--env-file .llm.env \
--shm-size=1g \
unclecode/crawl4ai:0.5.1-d1 # Or use :latest
```
> The server will be available at `http://localhost:11235`.
#### 4. Stopping the Hub Container
```bash
docker stop crawl4ai-hub && docker rm crawl4ai-hub
```
#### Docker Hub Versioning Explained
* **Image Name:** `unclecode/crawl4ai`
* **Tag Format:** `LIBRARY_VERSION-dREVISION`
* `LIBRARY_VERSION`: The Semantic Version of the core `crawl4ai` Python library included (e.g., `0.5.1`).
* `dREVISION`: An incrementing number (starting at `d1`) for Docker build changes made *without* changing the library version (e.g., base image updates, dependency fixes). Resets to `d1` for each new `LIBRARY_VERSION`.
* **Example:** `unclecode/crawl4ai:0.5.1-d1`
* **`latest` Tag:** Points to the most recent stable `LIBRARY_VERSION-dREVISION`.
* **Multi-Arch:** Images support `linux/amd64` and `linux/arm64`. Docker automatically selects the correct architecture.
---
*(Rest of the document remains largely the same, but with key updates below)*
---
## Dockerfile Parameters
You can customize the image build process using build arguments (`--build-arg`). These are typically used via `docker buildx build` or within the `docker-compose.yml` file.
```bash
# Example: Build with 'all' features using buildx
docker buildx build \
--platform linux/amd64,linux/arm64 \
--build-arg INSTALL_TYPE=all \
-t yourname/crawl4ai-all:latest \
--load \
. # Build from root context
```
### Build Arguments Explained
| Argument | Description | Default | Options |
| :----------- | :--------------------------------------- | :-------- | :--------------------------------- |
| INSTALL_TYPE | Feature set | `default` | `default`, `all`, `torch`, `transformer` |
| ENABLE_GPU | GPU support (CUDA for AMD64) | `false` | `true`, `false` |
| APP_HOME | Install path inside container (advanced) | `/app` | any valid path |
| USE_LOCAL | Install library from local source | `true` | `true`, `false` |
| GITHUB_REPO | Git repo to clone if USE_LOCAL=false | *(see Dockerfile)* | any git URL |
| GITHUB_BRANCH| Git branch to clone if USE_LOCAL=false | `main` | any branch name |
*(Note: PYTHON_VERSION is fixed by the `FROM` instruction in the Dockerfile)*
### Build Best Practices
1. **Choose the Right Install Type**
* `default`: Basic installation, smallest image size. Suitable for most standard web scraping and markdown generation.
* `all`: Full features including `torch` and `transformers` for advanced extraction strategies (e.g., CosineStrategy, certain LLM filters). Significantly larger image. Ensure you need these extras.
2. **Platform Considerations**
* Use `buildx` for building multi-architecture images, especially for pushing to registries.
* Use `docker compose` profiles (`local-amd64`, `local-arm64`) for easy platform-specific local builds.
3. **Performance Optimization**
* The image automatically includes platform-specific optimizations (OpenMP for AMD64, OpenBLAS for ARM64).
---
## Using the API
Communicate with the running Docker server via its REST API (defaulting to `http://localhost:11235`). You can use the Python SDK or make direct HTTP requests.
### Python SDK
Install the SDK: `pip install crawl4ai`
```python
import asyncio
from crawl4ai.docker_client import Crawl4aiDockerClient
from crawl4ai import BrowserConfig, CrawlerRunConfig, CacheMode # Assuming you have crawl4ai installed
async def main():
# Point to the correct server port
async with Crawl4aiDockerClient(base_url="http://localhost:11235", verbose=True) as client:
# If JWT is enabled on the server, authenticate first:
# await client.authenticate("user@example.com") # See Server Configuration section
# Example Non-streaming crawl
print("--- Running Non-Streaming Crawl ---")
results = await client.crawl(
["https://httpbin.org/html"],
browser_config=BrowserConfig(headless=True), # Use library classes for config aid
crawler_config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
)
if results: # client.crawl returns None on failure
print(f"Non-streaming results success: {results.success}")
if results.success:
for result in results: # Iterate through the CrawlResultContainer
print(f"URL: {result.url}, Success: {result.success}")
else:
print("Non-streaming crawl failed.")
# Example Streaming crawl
print("\n--- Running Streaming Crawl ---")
stream_config = CrawlerRunConfig(stream=True, cache_mode=CacheMode.BYPASS)
try:
async for result in await client.crawl( # client.crawl returns an async generator for streaming
["https://httpbin.org/html", "https://httpbin.org/links/5/0"],
browser_config=BrowserConfig(headless=True),
crawler_config=stream_config
):
print(f"Streamed result: URL: {result.url}, Success: {result.success}")
except Exception as e:
print(f"Streaming crawl failed: {e}")
# Example Get schema
print("\n--- Getting Schema ---")
schema = await client.get_schema()
print(f"Schema received: {bool(schema)}") # Print whether schema was received
if __name__ == "__main__":
asyncio.run(main())
```
*(SDK parameters like timeout, verify_ssl etc. remain the same)*
### Second Approach: Direct API Calls
Crucially, when sending configurations directly via JSON, they **must** follow the `{"type": "ClassName", "params": {...}}` structure for any non-primitive value (like config objects or strategies). Dictionaries must be wrapped as `{"type": "dict", "value": {...}}`.
*(Keep the detailed explanation of Configuration Structure, Basic Pattern, Simple vs Complex, Strategy Pattern, Complex Nested Example, Quick Grammar Overview, Important Rules, Pro Tip)*
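To make the envelope concrete, here is a minimal payload sketch (illustrative, not an exhaustive grammar): primitives pass through bare, config objects and strategies use `type`/`params`, and plain dictionaries are wrapped as `{"type": "dict", "value": {...}}`:
```python
payload = {
    "urls": ["https://example.com"],
    "browser_config": {
        "type": "BrowserConfig",
        "params": {
            "headless": True,  # primitive: passed as-is
            # plain dict values need the type/value wrapper
            "viewport": {"type": "dict", "value": {"width": 1200, "height": 800}},
        },
    },
    "crawler_config": {
        "type": "CrawlerRunConfig",
        "params": {"stream": False, "cache_mode": "bypass"},  # enum by string value
    },
}
```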
#### More Examples *(Ensure Schema example uses type/value wrapper)*
**Advanced Crawler Configuration**
*(Keep example, ensure cache_mode uses valid enum value like "bypass")*
**Extraction Strategy**
```json
{
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"extraction_strategy": {
"type": "JsonCssExtractionStrategy",
"params": {
"schema": {
"type": "dict",
"value": {
"baseSelector": "article.post",
"fields": [
{"name": "title", "selector": "h1", "type": "text"},
{"name": "content", "selector": ".content", "type": "html"}
]
}
}
}
}
}
}
}
```
**LLM Extraction Strategy** *(Keep example, ensure schema uses type/value wrapper)*
*(Keep Deep Crawler Example)*
### REST API Examples
Update URLs to use port `11235`.
#### Simple Crawl
```python
import requests
# Configuration objects converted to the required JSON structure
browser_config_payload = {
"type": "BrowserConfig",
"params": {"headless": True}
}
crawler_config_payload = {
"type": "CrawlerRunConfig",
"params": {"stream": False, "cache_mode": "bypass"} # Use string value of enum
}
crawl_payload = {
"urls": ["https://httpbin.org/html"],
"browser_config": browser_config_payload,
"crawler_config": crawler_config_payload
}
response = requests.post(
"http://localhost:11235/crawl", # Updated port
# headers={"Authorization": f"Bearer {token}"}, # If JWT is enabled
json=crawl_payload
)
print(f"Status Code: {response.status_code}")
if response.ok:
print(response.json())
else:
print(f"Error: {response.text}")
```
#### Streaming Results
```python
import json
import httpx # Use httpx for async streaming example
async def test_stream_crawl(token: str = None): # Made token optional
"""Test the /crawl/stream endpoint with multiple URLs."""
url = "http://localhost:11235/crawl/stream" # Updated port
payload = {
"urls": [
"https://httpbin.org/html",
"https://httpbin.org/links/5/0",
],
"browser_config": {
"type": "BrowserConfig",
"params": {"headless": True, "viewport": {"type": "dict", "value": {"width": 1200, "height": 800}}} # Viewport needs type:dict
},
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {"stream": True, "cache_mode": "bypass"}
}
}
headers = {}
# if token:
# headers = {"Authorization": f"Bearer {token}"} # If JWT is enabled
try:
async with httpx.AsyncClient() as client:
async with client.stream("POST", url, json=payload, headers=headers, timeout=120.0) as response:
print(f"Status: {response.status_code} (Expected: 200)")
response.raise_for_status() # Raise exception for bad status codes
# Read streaming response line-by-line (NDJSON)
async for line in response.aiter_lines():
if line:
try:
data = json.loads(line)
# Check for completion marker
if data.get("status") == "completed":
print("Stream completed.")
break
print(f"Streamed Result: {json.dumps(data, indent=2)}")
except json.JSONDecodeError:
print(f"Warning: Could not decode JSON line: {line}")
except httpx.HTTPStatusError as e:
print(f"HTTP error occurred: {e.response.status_code} - {e.response.text}")
except Exception as e:
print(f"Error in streaming crawl test: {str(e)}")
# To run this example:
# import asyncio
# asyncio.run(test_stream_crawl())
```
---
## Metrics & Monitoring
Keep an eye on your crawler with these endpoints:
- `/health` - Quick health check
- `/metrics` - Detailed Prometheus metrics
- `/schema` - Full API schema
Example health check:
```bash
curl http://localhost:11235/health
```
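If you prefer polling from code, a small sketch that checks `/health` and pulls the Prometheus text from `/metrics`:
```python
import requests

base = "http://localhost:11235"

# /health returns {"status": ..., "timestamp": ..., "version": ...}
health = requests.get(f"{base}/health", timeout=5).json()
print(health["status"], health["version"])

metrics = requests.get(f"{base}/metrics", timeout=5)
print("\n".join(metrics.text.splitlines()[:5]))  # first few Prometheus samples
```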
---
*(Deployment Scenarios and Complete Examples sections remain the same, maybe update links if examples moved)*
---
## Server Configuration
The server's behavior can be customized through the `config.yml` file.
### Understanding config.yml
The configuration file is loaded from `/app/config.yml` inside the container. By default, the file from `deploy/docker/config.yml` in the repository is copied there during the build.
Here's a detailed breakdown of the configuration options (using defaults from `deploy/docker/config.yml`):
```yaml
# Application Configuration
app:
title: "Crawl4AI API"
version: "1.0.0" # Consider setting this to match library version, e.g., "0.5.1"
host: "0.0.0.0"
port: 8020 # NOTE: This port is used ONLY when running server.py directly. Gunicorn overrides this (see supervisord.conf).
reload: False # Default set to False - suitable for production
timeout_keep_alive: 300
# Default LLM Configuration
llm:
provider: "openai/gpt-4o-mini"
api_key_env: "OPENAI_API_KEY"
# api_key: sk-... # If you pass the API key directly then api_key_env will be ignored
# Redis Configuration (Used by internal Redis server managed by supervisord)
redis:
host: "localhost"
port: 6379
db: 0
password: ""
# ... other redis options ...
# Rate Limiting Configuration
rate_limiting:
enabled: True
default_limit: "1000/minute"
trusted_proxies: []
storage_uri: "memory://" # Use "redis://localhost:6379" if you need persistent/shared limits
# Security Configuration
security:
enabled: false # Master toggle for security features
jwt_enabled: false # Enable JWT authentication (requires security.enabled=true)
https_redirect: false # Force HTTPS (requires security.enabled=true)
trusted_hosts: ["*"] # Allowed hosts (use specific domains in production)
headers: # Security headers (applied if security.enabled=true)
x_content_type_options: "nosniff"
x_frame_options: "DENY"
content_security_policy: "default-src 'self'"
strict_transport_security: "max-age=63072000; includeSubDomains"
# Crawler Configuration
crawler:
memory_threshold_percent: 95.0
rate_limiter:
base_delay: [1.0, 2.0] # Min/max delay between requests in seconds for dispatcher
timeouts:
stream_init: 30.0 # Timeout for stream initialization
batch_process: 300.0 # Timeout for non-streaming /crawl processing
# Logging Configuration
logging:
level: "INFO"
format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
# Observability Configuration
observability:
prometheus:
enabled: True
endpoint: "/metrics"
health_check:
endpoint: "/health"
```
*(JWT Authentication section remains the same, just note the default port is now 11235 for requests)*
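For reference, the token flow against the default port looks like this (a sketch, assuming `jwt_enabled: true` and a placeholder email whose domain has MX records):
```python
import requests

base = "http://localhost:11235"

token = requests.post(f"{base}/token", json={"email": "you@yourcompany.com"}).json()["access_token"]
md = requests.get(f"{base}/md/example.com", headers={"Authorization": f"Bearer {token}"})
print(md.text[:200])
```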
*(Configuration Tips and Best Practices remain the same)*
### Customizing Your Configuration
You can override the default `config.yml`.
#### Method 1: Modify Before Build
1. Edit the `deploy/docker/config.yml` file in your local repository clone.
2. Build the image using `docker buildx` or `docker compose --profile local-... up --build`. The modified file will be copied into the image.
#### Method 2: Runtime Mount (Recommended for Custom Deploys)
1. Create your custom configuration file, e.g., `my-custom-config.yml` locally. Ensure it contains all necessary sections.
2. Mount it when running the container:
* **Using `docker run`:**
```bash
# Assumes my-custom-config.yml is in the current directory
docker run -d -p 11235:11235 \
--name crawl4ai-custom-config \
--env-file .llm.env \
--shm-size=1g \
-v $(pwd)/my-custom-config.yml:/app/config.yml \
unclecode/crawl4ai:latest # Or your specific tag
```
* **Using `docker-compose.yml`:** Add a `volumes` section to the service definition:
```yaml
services:
crawl4ai-hub-amd64: # Or your chosen service
image: unclecode/crawl4ai:latest
profiles: ["hub-amd64"]
<<: *base-config
volumes:
# Mount local custom config over the default one in the container
- ./my-custom-config.yml:/app/config.yml
# Keep the shared memory volume from base-config
- /dev/shm:/dev/shm
```
*(Note: Ensure `my-custom-config.yml` is in the same directory as `docker-compose.yml`)*
> 💡 When mounting, your custom file *completely replaces* the default one. Ensure it's a valid and complete configuration.
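Before mounting, you can sanity-check that the file parses and covers every top-level section the server reads. A sketch mirroring `utils.load_config`, with the section list taken from the default config above:
```python
import yaml

REQUIRED = ["app", "llm", "redis", "rate_limiting", "security",
            "crawler", "logging", "observability"]

with open("my-custom-config.yml") as f:
    cfg = yaml.safe_load(f)

missing = [k for k in REQUIRED if k not in cfg]
assert not missing, f"config is missing sections: {missing}"
```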
### Configuration Recommendations
1. **Security First** 🔒
- Always enable security in production
- Use specific trusted_hosts instead of wildcards
- Set up proper rate limiting to protect your server
- Consider your environment before enabling HTTPS redirect
2. **Resource Management** 💻
- Adjust memory_threshold_percent based on available RAM
- Set timeouts according to your content size and network conditions
- Use Redis for rate limiting in multi-container setups
3. **Monitoring** 📊
- Enable Prometheus if you need metrics
- Set DEBUG logging in development, INFO in production
- Regular health check monitoring is crucial
4. **Performance Tuning** ⚡
- Start with conservative rate limiter delays
- Increase batch_process timeout for large content
- Adjust stream_init timeout based on initial response times
## Getting Help
We're here to help you succeed with Crawl4AI! Here's how to get support:
- 📖 Check our [full documentation](https://docs.crawl4ai.com)
- 🐛 Found a bug? [Open an issue](https://github.com/unclecode/crawl4ai/issues)
- 💬 Join our [Discord community](https://discord.gg/crawl4ai)
- ⭐ Star us on GitHub to show support!
## Summary
In this guide, we've covered everything you need to get started with Crawl4AI's Docker deployment:
- Building and running the Docker container
- Configuring the environment
- Making API requests with proper typing
- Using the Python SDK
- Monitoring your deployment
Remember, the examples in the `examples` folder are your friends - they show real-world usage patterns that you can adapt for your needs.
Keep exploring, and don't hesitate to reach out if you need help! We're building something amazing together. 🚀
Happy crawling! 🕷️


@@ -554,7 +554,7 @@ async def test_stream_crawl(session, token: str):
"https://example.com/page3",
],
"browser_config": {"headless": True, "viewport": {"width": 1200}},
"crawler_config": {"stream": True, "cache_mode": "bypass"}
"crawler_config": {"stream": True, "cache_mode": "aggressive"}
}
# headers = {"Authorization": f"Bearer {token}"} # If JWT is enabled, more on this later


@@ -2,7 +2,6 @@ import os
import json
import asyncio
from typing import List, Tuple
from functools import partial
import logging
from typing import Optional, AsyncGenerator
@@ -40,19 +39,8 @@ from utils import (
decode_redis_hash
)
import psutil, time
logger = logging.getLogger(__name__)
# --- Helper to get memory ---
def _get_memory_mb():
try:
return psutil.Process().memory_info().rss / (1024 * 1024)
except Exception as e:
logger.warning(f"Could not get memory info: {e}")
return None
async def handle_llm_qa(
url: str,
query: str,
@@ -60,8 +48,6 @@ async def handle_llm_qa(
) -> str:
"""Process QA using LLM with crawled content as context."""
try:
if not url.startswith(('http://', 'https://')):
url = 'https://' + url
# Extract base URL by finding last '?q=' occurrence
last_q_index = url.rfind('?q=')
if last_q_index != -1:
@@ -75,7 +61,7 @@ async def handle_llm_qa(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=result.error_message
)
content = result.markdown.fit_markdown or result.markdown.raw_markdown
content = result.markdown.fit_markdown
# Create prompt and get LLM response
prompt = f"""Use the following content as context to answer the question.
@@ -364,9 +350,7 @@ async def stream_results(crawler: AsyncWebCrawler, results_gen: AsyncGenerator)
try:
async for result in results_gen:
try:
server_memory_mb = _get_memory_mb()
result_dict = result.model_dump()
result_dict['server_memory_mb'] = server_memory_mb
logger.info(f"Streaming result for {result_dict.get('url', 'unknown')}")
data = json.dumps(result_dict, default=datetime_handler) + "\n"
yield data.encode('utf-8')
@@ -380,11 +364,10 @@ async def stream_results(crawler: AsyncWebCrawler, results_gen: AsyncGenerator)
except asyncio.CancelledError:
logger.warning("Client disconnected during streaming")
finally:
# try:
# await crawler.close()
# except Exception as e:
# logger.error(f"Crawler cleanup error: {e}")
pass
try:
await crawler.close()
except Exception as e:
logger.error(f"Crawler cleanup error: {e}")
async def handle_crawl_request(
urls: List[str],
@@ -393,13 +376,7 @@ async def handle_crawl_request(
config: dict
) -> dict:
"""Handle non-streaming crawl requests."""
start_mem_mb = _get_memory_mb() # <--- Get memory before
start_time = time.time()
mem_delta_mb = None
peak_mem_mb = start_mem_mb
try:
urls = [('https://' + url) if not url.startswith(('http://', 'https://')) else url for url in urls]
browser_config = BrowserConfig.load(browser_config)
crawler_config = CrawlerRunConfig.load(crawler_config)
@@ -407,68 +384,26 @@ async def handle_crawl_request(
memory_threshold_percent=config["crawler"]["memory_threshold_percent"],
rate_limiter=RateLimiter(
base_delay=tuple(config["crawler"]["rate_limiter"]["base_delay"])
) if config["crawler"]["rate_limiter"]["enabled"] else None
)
)
from crawler_pool import get_crawler
crawler = await get_crawler(browser_config)
# crawler: AsyncWebCrawler = AsyncWebCrawler(config=browser_config)
# await crawler.start()
base_config = config["crawler"]["base_config"]
        # Iterate over the key-value pairs in base_config and use hasattr to set them
for key, value in base_config.items():
if hasattr(crawler_config, key):
setattr(crawler_config, key, value)
results = []
func = getattr(crawler, "arun" if len(urls) == 1 else "arun_many")
partial_func = partial(func,
urls[0] if len(urls) == 1 else urls,
config=crawler_config,
dispatcher=dispatcher)
results = await partial_func()
# await crawler.close()
end_mem_mb = _get_memory_mb() # <--- Get memory after
end_time = time.time()
if start_mem_mb is not None and end_mem_mb is not None:
mem_delta_mb = end_mem_mb - start_mem_mb # <--- Calculate delta
peak_mem_mb = max(peak_mem_mb if peak_mem_mb else 0, end_mem_mb) # <--- Get peak memory
logger.info(f"Memory usage: Start: {start_mem_mb} MB, End: {end_mem_mb} MB, Delta: {mem_delta_mb} MB, Peak: {peak_mem_mb} MB")
return {
"success": True,
"results": [result.model_dump() for result in results],
"server_processing_time_s": end_time - start_time,
"server_memory_delta_mb": mem_delta_mb,
"server_peak_memory_mb": peak_mem_mb
}
async with AsyncWebCrawler(config=browser_config) as crawler:
results = await crawler.arun_many(
urls=urls,
config=crawler_config,
dispatcher=dispatcher
)
return {
"success": True,
"results": [result.model_dump() for result in results]
}
except Exception as e:
logger.error(f"Crawl error: {str(e)}", exc_info=True)
if 'crawler' in locals() and crawler.ready: # Check if crawler was initialized and started
# try:
# await crawler.close()
# except Exception as close_e:
# logger.error(f"Error closing crawler during exception handling: {close_e}")
logger.error(f"Error closing crawler during exception handling: {close_e}")
# Measure memory even on error if possible
end_mem_mb_error = _get_memory_mb()
if start_mem_mb is not None and end_mem_mb_error is not None:
mem_delta_mb = end_mem_mb_error - start_mem_mb
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=json.dumps({ # Send structured error
"error": str(e),
"server_memory_delta_mb": mem_delta_mb,
"server_peak_memory_mb": max(peak_mem_mb if peak_mem_mb else 0, end_mem_mb_error or 0)
})
detail=str(e)
)
async def handle_stream_crawl_request(
@@ -480,11 +415,9 @@ async def handle_stream_crawl_request(
"""Handle streaming crawl requests."""
try:
browser_config = BrowserConfig.load(browser_config)
# browser_config.verbose = True # Set to False or remove for production stress testing
browser_config.verbose = False
browser_config.verbose = True
crawler_config = CrawlerRunConfig.load(crawler_config)
crawler_config.scraping_strategy = LXMLWebScrapingStrategy()
crawler_config.stream = True
dispatcher = MemoryAdaptiveDispatcher(
memory_threshold_percent=config["crawler"]["memory_threshold_percent"],
@@ -493,11 +426,8 @@ async def handle_stream_crawl_request(
)
)
from crawler_pool import get_crawler
crawler = await get_crawler(browser_config)
# crawler = AsyncWebCrawler(config=browser_config)
# await crawler.start()
crawler = AsyncWebCrawler(config=browser_config)
await crawler.start()
results_gen = await crawler.arun_many(
urls=urls,
@@ -508,15 +438,9 @@ async def handle_stream_crawl_request(
return crawler, results_gen
except Exception as e:
# Make sure to close crawler if started during an error here
if 'crawler' in locals() and crawler.ready:
# try:
# await crawler.close()
# except Exception as close_e:
# logger.error(f"Error closing crawler during stream setup exception: {close_e}")
logger.error(f"Error closing crawler during stream setup exception: {close_e}")
if 'crawler' in locals():
await crawler.close()
logger.error(f"Stream crawl error: {str(e)}", exc_info=True)
# Raising HTTPException here will prevent streaming response
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -4,8 +4,7 @@ app:
version: "1.0.0"
host: "0.0.0.0"
port: 8020
reload: False
workers: 4
reload: True
timeout_keep_alive: 300
# Default LLM Configuration
@@ -51,31 +50,12 @@ security:
# Crawler Configuration
crawler:
base_config:
simulate_user: true
memory_threshold_percent: 95.0
rate_limiter:
enabled: true
base_delay: [1.0, 2.0]
timeouts:
stream_init: 30.0 # Timeout for stream initialization
batch_process: 300.0 # Timeout for batch processing
pool:
max_pages: 40 # ← GLOBAL_SEM permits
idle_ttl_sec: 1800 # ← 30 min janitor cutoff
browser:
kwargs:
headless: true
text_mode: true
extra_args:
# - "--single-process"
- "--no-sandbox"
- "--disable-dev-shm-usage"
- "--disable-gpu"
- "--disable-software-rasterizer"
- "--disable-web-security"
- "--allow-insecure-localhost"
- "--ignore-certificate-errors"
# Logging Configuration
logging:


@@ -1,60 +0,0 @@
# crawler_pool.py (new file)
import asyncio, json, hashlib, time, psutil
from contextlib import suppress
from typing import Dict
from crawl4ai import AsyncWebCrawler, BrowserConfig
from typing import Dict
from utils import load_config
CONFIG = load_config()
POOL: Dict[str, AsyncWebCrawler] = {}
LAST_USED: Dict[str, float] = {}
LOCK = asyncio.Lock()
MEM_LIMIT = CONFIG.get("crawler", {}).get("memory_threshold_percent", 95.0)  # % RAM - refuse new browsers above this
IDLE_TTL = CONFIG.get("crawler", {}).get("pool", {}).get("idle_ttl_sec", 1800) # close if unused for 30min
def _sig(cfg: BrowserConfig) -> str:
payload = json.dumps(cfg.to_dict(), sort_keys=True, separators=(",",":"))
return hashlib.sha1(payload.encode()).hexdigest()
async def get_crawler(cfg: BrowserConfig) -> AsyncWebCrawler:
try:
sig = _sig(cfg)
async with LOCK:
if sig in POOL:
LAST_USED[sig] = time.time();
return POOL[sig]
if psutil.virtual_memory().percent >= MEM_LIMIT:
raise MemoryError("RAM pressure new browser denied")
crawler = AsyncWebCrawler(config=cfg, thread_safe=False)
await crawler.start()
POOL[sig] = crawler; LAST_USED[sig] = time.time()
return crawler
except MemoryError as e:
raise MemoryError(f"RAM pressure new browser denied: {e}")
except Exception as e:
raise RuntimeError(f"Failed to start browser: {e}")
finally:
if sig in POOL:
LAST_USED[sig] = time.time()
else:
# If we failed to start the browser, we should remove it from the pool
POOL.pop(sig, None)
LAST_USED.pop(sig, None)
# If we failed to start the browser, we should remove it from the pool
async def close_all():
async with LOCK:
await asyncio.gather(*(c.close() for c in POOL.values()), return_exceptions=True)
POOL.clear(); LAST_USED.clear()
async def janitor():
while True:
await asyncio.sleep(60)
now = time.time()
async with LOCK:
for sig, crawler in list(POOL.items()):
if now - LAST_USED[sig] > IDLE_TTL:
with suppress(Exception): await crawler.close()
POOL.pop(sig, None); LAST_USED.pop(sig, None)


@@ -1,252 +0,0 @@
# deploy/docker/mcp_bridge.py
from __future__ import annotations
import inspect, json, re, anyio
from contextlib import suppress
from typing import Any, Callable, Dict, List, Tuple
import httpx
from fastapi import FastAPI, WebSocket, WebSocketDisconnect, HTTPException
from fastapi.responses import JSONResponse
from fastapi import Request
from sse_starlette.sse import EventSourceResponse
from pydantic import BaseModel
from mcp.server.sse import SseServerTransport
import mcp.types as t
from mcp.server.lowlevel.server import Server, NotificationOptions
from mcp.server.models import InitializationOptions
# ── opt-in decorators ───────────────────────────────────────────
def mcp_resource(name: str | None = None):
def deco(fn):
fn.__mcp_kind__, fn.__mcp_name__ = "resource", name
return fn
return deco
def mcp_template(name: str | None = None):
def deco(fn):
fn.__mcp_kind__, fn.__mcp_name__ = "template", name
return fn
return deco
def mcp_tool(name: str | None = None):
def deco(fn):
fn.__mcp_kind__, fn.__mcp_name__ = "tool", name
return fn
return deco
# ── HTTP-proxy helper for FastAPI endpoints ─────────────────────
def _make_http_proxy(base_url: str, route):
method = list(route.methods - {"HEAD", "OPTIONS"})[0]
async def proxy(**kwargs):
# replace `/items/{id}` style params first
path = route.path
for k, v in list(kwargs.items()):
placeholder = "{" + k + "}"
if placeholder in path:
path = path.replace(placeholder, str(v))
kwargs.pop(k)
url = base_url.rstrip("/") + path
async with httpx.AsyncClient() as client:
try:
r = (
await client.get(url, params=kwargs)
if method == "GET"
else await client.request(method, url, json=kwargs)
)
r.raise_for_status()
return r.text if method == "GET" else r.json()
except httpx.HTTPStatusError as e:
# surface FastAPI error details instead of plain 500
raise HTTPException(e.response.status_code, e.response.text)
return proxy
# ── main entry point ────────────────────────────────────────────
def attach_mcp(
app: FastAPI,
    *,  # keyword-only
base: str = "/mcp",
name: str | None = None,
    base_url: str,  # e.g. "http://127.0.0.1:8020"
) -> None:
"""Call once after all routes are declared to expose WS+SSE MCP endpoints."""
server_name = name or app.title or "FastAPI-MCP"
mcp = Server(server_name)
# tools: Dict[str, Callable] = {}
tools: Dict[str, Tuple[Callable, Callable]] = {}
resources: Dict[str, Callable] = {}
templates: Dict[str, Callable] = {}
# register decorated FastAPI routes
for route in app.routes:
fn = getattr(route, "endpoint", None)
kind = getattr(fn, "__mcp_kind__", None)
if not kind:
continue
        key = fn.__mcp_name__ or re.sub(r"[/{}]", "_", route.path).strip("_")
# if kind == "tool":
# tools[key] = _make_http_proxy(base_url, route)
if kind == "tool":
proxy = _make_http_proxy(base_url, route)
tools[key] = (proxy, fn)
continue
if kind == "resource":
resources[key] = fn
if kind == "template":
templates[key] = fn
# helpers for JSONSchema
def _schema(model: type[BaseModel] | None) -> dict:
return {"type": "object"} if model is None else model.model_json_schema()
def _body_model(fn: Callable) -> type[BaseModel] | None:
for p in inspect.signature(fn).parameters.values():
a = p.annotation
if inspect.isclass(a) and issubclass(a, BaseModel):
return a
return None
# MCP handlers
@mcp.list_tools()
async def _list_tools() -> List[t.Tool]:
out = []
for k, (proxy, orig_fn) in tools.items():
desc = getattr(orig_fn, "__mcp_description__", None) or inspect.getdoc(orig_fn) or ""
schema = getattr(orig_fn, "__mcp_schema__", None) or _schema(_body_model(orig_fn))
out.append(
t.Tool(name=k, description=desc, inputSchema=schema)
)
return out
@mcp.call_tool()
async def _call_tool(name: str, arguments: Dict | None) -> List[t.TextContent]:
if name not in tools:
raise HTTPException(404, "tool not found")
proxy, _ = tools[name]
try:
res = await proxy(**(arguments or {}))
except HTTPException as exc:
            # map server-side errors into MCP "text/error" payloads
err = {"error": exc.status_code, "detail": exc.detail}
return [t.TextContent(type = "text", text=json.dumps(err))]
return [t.TextContent(type = "text", text=json.dumps(res, default=str))]
@mcp.list_resources()
async def _list_resources() -> List[t.Resource]:
return [
t.Resource(name=k, description=inspect.getdoc(f) or "", mime_type="application/json")
for k, f in resources.items()
]
@mcp.read_resource()
async def _read_resource(name: str) -> List[t.TextContent]:
if name not in resources:
raise HTTPException(404, "resource not found")
res = resources[name]()
return [t.TextContent(type = "text", text=json.dumps(res, default=str))]
@mcp.list_resource_templates()
async def _list_templates() -> List[t.ResourceTemplate]:
return [
t.ResourceTemplate(
name=k,
description=inspect.getdoc(f) or "",
parameters={
p: {"type": "string"} for p in _path_params(app, f)
},
)
for k, f in templates.items()
]
init_opts = InitializationOptions(
server_name=server_name,
server_version="0.1.0",
capabilities=mcp.get_capabilities(
notification_options=NotificationOptions(),
experimental_capabilities={},
),
)
# ── WebSocket transport ────────────────────────────────────
@app.websocket_route(f"{base}/ws")
async def _ws(ws: WebSocket):
await ws.accept()
c2s_send, c2s_recv = anyio.create_memory_object_stream(100)
s2c_send, s2c_recv = anyio.create_memory_object_stream(100)
from pydantic import TypeAdapter
from mcp.types import JSONRPCMessage
adapter = TypeAdapter(JSONRPCMessage)
init_done = anyio.Event()
async def srv_to_ws():
first = True
try:
async for msg in s2c_recv:
await ws.send_json(msg.model_dump())
if first:
init_done.set()
first = False
finally:
# make sure cleanup survives TaskGroup cancellation
with anyio.CancelScope(shield=True):
with suppress(RuntimeError): # idempotent close
await ws.close()
async def ws_to_srv():
try:
# 1st frame is always "initialize"
first = adapter.validate_python(await ws.receive_json())
await c2s_send.send(first)
await init_done.wait() # block until server ready
while True:
data = await ws.receive_json()
await c2s_send.send(adapter.validate_python(data))
except WebSocketDisconnect:
await c2s_send.aclose()
async with anyio.create_task_group() as tg:
tg.start_soon(mcp.run, c2s_recv, s2c_send, init_opts)
tg.start_soon(ws_to_srv)
tg.start_soon(srv_to_ws)
# ── SSE transport (official) ─────────────────────────────
sse = SseServerTransport(f"{base}/messages/")
@app.get(f"{base}/sse")
async def _mcp_sse(request: Request):
async with sse.connect_sse(
request.scope, request.receive, request._send # starlette ASGI primitives
) as (read_stream, write_stream):
await mcp.run(read_stream, write_stream, init_opts)
# client → server frames are POSTed here
app.mount(f"{base}/messages", app=sse.handle_post_message)
# ── schema endpoint ───────────────────────────────────────
@app.get(f"{base}/schema")
async def _schema_endpoint():
return JSONResponse({
"tools": [x.model_dump() for x in await _list_tools()],
"resources": [x.model_dump() for x in await _list_resources()],
"resource_templates": [x.model_dump() for x in await _list_templates()],
})
# ── helpers ────────────────────────────────────────────────────
def _route_name(path: str) -> str:
return re.sub(r"[/{}]", "_", path).strip("_")
def _path_params(app: FastAPI, fn: Callable) -> List[str]:
for r in app.routes:
if r.endpoint is fn:
return list(r.param_convertors.keys())
return []
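# ── usage sketch (illustrative, not part of the module) ───────
# A minimal sketch of wiring the bridge into an app; the route and
# model names below are assumptions for illustration:
#
# app = FastAPI(title="demo")
#
# @app.post("/md")
# @mcp_tool("md")
# async def md(body: SomeRequestModel): ...
#
# attach_mcp(app, base_url="http://127.0.0.1:8020")
# # the app now serves /mcp/ws, /mcp/sse and /mcp/schema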

View File

@@ -1,15 +1,10 @@
fastapi==0.115.12
uvicorn==0.34.2
crawl4ai
fastapi
uvicorn
gunicorn>=23.0.0
slowapi==0.1.9
prometheus-fastapi-instrumentator>=7.1.0
slowapi>=0.1.9
prometheus-fastapi-instrumentator>=7.0.2
redis>=5.2.1
jwt>=1.3.1
dnspython>=2.7.0
email-validator==2.2.0
sse-starlette==2.2.1
pydantic==2.11
rank-bm25==0.2.2
anyio==4.9.0
PyJWT==2.10.1
email-validator>=2.2.0

View File

@@ -1,485 +1,150 @@
# ───────────────────────── server.py ─────────────────────────
"""
Crawl4AI FastAPI entrypoint
• Browser pool + global page cap
• Rate limiting, security, metrics
• /crawl, /crawl/stream, /md, /llm endpoints
"""
# ── stdlib & 3rd-party imports ───────────────────────────────
from crawler_pool import get_crawler, close_all, janitor
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
from auth import create_access_token, get_token_dependency, TokenRequest
from pydantic import BaseModel
from typing import Optional, List, Dict
from fastapi import Request, Depends
from fastapi.responses import FileResponse
import base64
import re
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
from api import (
handle_markdown_request, handle_llm_qa,
handle_stream_crawl_request, handle_crawl_request,
stream_results
)
from utils import (
FilterType, load_config, setup_logging, verify_email_domain
)
import os
import sys
import time
import asyncio
from typing import List
from contextlib import asynccontextmanager
import pathlib
from fastapi import (
FastAPI, HTTPException, Request, Path, Query, Depends
)
from rank_bm25 import BM25Okapi
def chunk_code_functions(code: str) -> List[str]:
tree = ast.parse(code)
lines = code.splitlines()
chunks = []
for node in tree.body:
if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
start = node.lineno - 1
end = getattr(node, 'end_lineno', start + 1)
chunks.append("\n".join(lines[start:end]))
return chunks
from fastapi.responses import (
StreamingResponse, RedirectResponse, PlainTextResponse, JSONResponse
)
from typing import List, Optional, Dict
from fastapi import FastAPI, HTTPException, Request, Query, Path, Depends
from fastapi.responses import StreamingResponse, RedirectResponse, PlainTextResponse, JSONResponse
from fastapi.middleware.httpsredirect import HTTPSRedirectMiddleware
from fastapi.middleware.trustedhost import TrustedHostMiddleware
from fastapi.staticfiles import StaticFiles
from mcp_bridge import attach_mcp, mcp_resource, mcp_template, mcp_tool
import ast
import crawl4ai as _c4
from pydantic import BaseModel, Field
from slowapi import Limiter
from slowapi.util import get_remote_address
from prometheus_fastapi_instrumentator import Instrumentator
from redis import asyncio as aioredis
# ── internal imports (after sys.path append) ─────────────────
sys.path.append(os.path.dirname(os.path.realpath(__file__)))
# ────────────────── configuration / logging ──────────────────
config = load_config()
setup_logging(config)
__version__ = "0.5.1-d1"
# ── global page semaphore (hard cap) ─────────────────────────
MAX_PAGES = config["crawler"]["pool"].get("max_pages", 30)
GLOBAL_SEM = asyncio.Semaphore(MAX_PAGES)
# import logging
# page_log = logging.getLogger("page_cap")
# orig_arun = AsyncWebCrawler.arun
# async def capped_arun(self, *a, **kw):
# await GLOBAL_SEM.acquire() # ← take slot
# try:
# in_flight = MAX_PAGES - GLOBAL_SEM._value # used permits
# page_log.info("🕸️ pages_in_flight=%s / %s", in_flight, MAX_PAGES)
# return await orig_arun(self, *a, **kw)
# finally:
# GLOBAL_SEM.release() # ← free slot
orig_arun = AsyncWebCrawler.arun
async def capped_arun(self, *a, **kw):
async with GLOBAL_SEM:
return await orig_arun(self, *a, **kw)
AsyncWebCrawler.arun = capped_arun
# ───────────────────── FastAPI lifespan ──────────────────────
@asynccontextmanager
async def lifespan(_: FastAPI):
await get_crawler(BrowserConfig(
extra_args=config["crawler"]["browser"].get("extra_args", []),
**config["crawler"]["browser"].get("kwargs", {}),
)) # warmup
app.state.janitor = asyncio.create_task(janitor()) # idle GC
yield
app.state.janitor.cancel()
await close_all()
# ───────────────────── FastAPI instance ──────────────────────
app = FastAPI(
title=config["app"]["title"],
version=config["app"]["version"],
lifespan=lifespan,
from utils import FilterType, load_config, setup_logging, verify_email_domain
from api import (
handle_markdown_request,
handle_llm_qa,
handle_stream_crawl_request,
handle_crawl_request,
stream_results
)
from auth import create_access_token, get_token_dependency, TokenRequest # Import from auth.py
# ── static playground ──────────────────────────────────────
STATIC_DIR = pathlib.Path(__file__).parent / "static" / "playground"
if not STATIC_DIR.exists():
raise RuntimeError(f"Playground assets not found at {STATIC_DIR}")
app.mount(
"/playground",
StaticFiles(directory=STATIC_DIR, html=True),
name="play",
)
__version__ = "0.2.6"
# Optional nice-to-have: opening the root shows the playground
@app.get("/")
async def root():
return RedirectResponse("/playground")
# ─────────────────── infra / middleware ─────────────────────
redis = aioredis.from_url(config["redis"].get("uri", "redis://localhost"))
limiter = Limiter(
key_func=get_remote_address,
default_limits=[config["rate_limiting"]["default_limit"]],
storage_uri=config["rate_limiting"]["storage_uri"],
)
def _setup_security(app_: FastAPI):
sec = config["security"]
if not sec["enabled"]:
return
if sec.get("https_redirect"):
app_.add_middleware(HTTPSRedirectMiddleware)
if sec.get("trusted_hosts", []) != ["*"]:
app_.add_middleware(
TrustedHostMiddleware, allowed_hosts=sec["trusted_hosts"]
)
_setup_security(app)
if config["observability"]["prometheus"]["enabled"]:
Instrumentator().instrument(app).expose(app)
token_dep = get_token_dependency(config)
@app.middleware("http")
async def add_security_headers(request: Request, call_next):
resp = await call_next(request)
if config["security"]["enabled"]:
resp.headers.update(config["security"]["headers"])
return resp
# ───────────────── safe config-dump helper ─────────────────
ALLOWED_TYPES = {
"CrawlerRunConfig": CrawlerRunConfig,
"BrowserConfig": BrowserConfig,
}
def _safe_eval_config(expr: str) -> dict:
"""
Accept exactly one top-level call to CrawlerRunConfig(...) or BrowserConfig(...).
Whatever is inside the parentheses is fine *except* further function calls
(so no __import__('os') stuff). All public names from crawl4ai are available
when we eval.
"""
tree = ast.parse(expr, mode="eval")
# must be a single call
if not isinstance(tree.body, ast.Call):
raise ValueError("Expression must be a single constructor call")
call = tree.body
if not (isinstance(call.func, ast.Name) and call.func.id in {"CrawlerRunConfig", "BrowserConfig"}):
raise ValueError(
"Only CrawlerRunConfig(...) or BrowserConfig(...) are allowed")
# forbid nested calls to keep the surface tiny
for node in ast.walk(call):
if isinstance(node, ast.Call) and node is not call:
raise ValueError("Nested function calls are not permitted")
# expose everything that crawl4ai exports, nothing else
safe_env = {name: getattr(_c4, name)
for name in dir(_c4) if not name.startswith("_")}
obj = eval(compile(tree, "<config>", "eval"),
{"__builtins__": {}}, safe_env)
return obj.dump()
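# Illustrative behaviour (inputs assumed):
# _safe_eval_config("CrawlerRunConfig(stream=True)")   -> dict dump of the config
# _safe_eval_config("__import__('os')")                -> ValueError (not an allowed constructor)
# _safe_eval_config("BrowserConfig(headless=bool(1))") -> ValueError (nested call)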
# ───────────────────────── Schemas ───────────────────────────
class CrawlRequest(BaseModel):
urls: List[str] = Field(min_length=1, max_length=100)
browser_config: Optional[Dict] = Field(default_factory=dict)
crawler_config: Optional[Dict] = Field(default_factory=dict)
# ────────────── Schemas ──────────────
class MarkdownRequest(BaseModel):
"""Request body for the /md endpoint."""
url: str = Field(..., description="Absolute http/https URL to fetch")
f: FilterType = Field(FilterType.FIT,
description="Contentfilter strategy: FIT, RAW, BM25, or LLM")
q: Optional[str] = Field(None, description="Query string used by BM25/LLM filters")
c: Optional[str] = Field("0", description="Cache-bust / revision counter")
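# Example request body (the serialized filter value is an assumption):
# {"url": "https://example.com", "f": "fit", "q": "pricing", "c": "0"}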
# Load configuration and setup
config = load_config()
setup_logging(config)
# Initialize Redis
redis = aioredis.from_url(config["redis"].get("uri", "redis://localhost"))
class RawCode(BaseModel):
code: str
# Initialize rate limiter
limiter = Limiter(
key_func=get_remote_address,
default_limits=[config["rate_limiting"]["default_limit"]],
storage_uri=config["rate_limiting"]["storage_uri"]
)
class HTMLRequest(BaseModel):
url: str
class ScreenshotRequest(BaseModel):
url: str
screenshot_wait_for: Optional[float] = 2
output_path: Optional[str] = None
app = FastAPI(
title=config["app"]["title"],
version=config["app"]["version"]
)
class PDFRequest(BaseModel):
url: str
output_path: Optional[str] = None
# Configure middleware
def setup_security_middleware(app, config):
sec_config = config.get("security", {})
if sec_config.get("enabled", False):
if sec_config.get("https_redirect", False):
app.add_middleware(HTTPSRedirectMiddleware)
if sec_config.get("trusted_hosts", []) != ["*"]:
app.add_middleware(TrustedHostMiddleware, allowed_hosts=sec_config["trusted_hosts"])
setup_security_middleware(app, config)
class JSEndpointRequest(BaseModel):
url: str
scripts: List[str] = Field(
...,
description="List of separated JavaScript snippets to execute"
)
# Prometheus instrumentation
if config["observability"]["prometheus"]["enabled"]:
Instrumentator().instrument(app).expose(app)
# ──────────────────────── Endpoints ──────────────────────────
# Get token dependency based on config
token_dependency = get_token_dependency(config)
# Middleware for security headers
@app.middleware("http")
async def add_security_headers(request: Request, call_next):
response = await call_next(request)
if config["security"]["enabled"]:
response.headers.update(config["security"]["headers"])
return response
# Token endpoint (always available, but usage depends on config)
@app.post("/token")
async def get_token(req: TokenRequest):
if not verify_email_domain(req.email):
raise HTTPException(400, "Invalid email domain")
token = create_access_token({"sub": req.email})
return {"email": req.email, "access_token": token, "token_type": "bearer"}
async def get_token(request_data: TokenRequest):
if not verify_email_domain(request_data.email):
raise HTTPException(status_code=400, detail="Invalid email domain")
token = create_access_token({"sub": request_data.email})
return {"email": request_data.email, "access_token": token, "token_type": "bearer"}
@app.post("/config/dump")
async def config_dump(raw: RawCode):
try:
return JSONResponse(_safe_eval_config(raw.code.strip()))
except Exception as e:
raise HTTPException(400, str(e))
@app.post("/md")
# Endpoints with conditional auth
@app.get("/md/{url:path}")
@limiter.limit(config["rate_limiting"]["default_limit"])
@mcp_tool("md")
async def get_markdown(
request: Request,
body: MarkdownRequest,
_td: Dict = Depends(token_dep),
url: str,
f: FilterType = FilterType.FIT,
q: Optional[str] = None,
c: Optional[str] = "0",
token_data: Optional[Dict] = Depends(token_dependency)
):
if not body.url.startswith(("http://", "https://")):
raise HTTPException(400, "URL must be absolute and start with http/https")
markdown = await handle_markdown_request(
body.url, body.f, body.q, body.c, config
)
return JSONResponse({
"url": body.url,
"filter": body.f,
"query": body.q,
"cache": body.c,
"markdown": markdown,
"success": True
})
result = await handle_markdown_request(url, f, q, c, config)
return PlainTextResponse(result)
@app.post("/html")
@limiter.limit(config["rate_limiting"]["default_limit"])
@mcp_tool("html")
async def generate_html(
request: Request,
body: HTMLRequest,
_td: Dict = Depends(token_dep),
):
"""
Crawls the URL, preprocesses the raw HTML for schema extraction, and returns the processed HTML.
Use when you need sanitized HTML structures for building schemas or further processing.
"""
cfg = CrawlerRunConfig()
async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
results = await crawler.arun(url=body.url, config=cfg)
raw_html = results[0].html
from crawl4ai.utils import preprocess_html_for_schema
processed_html = preprocess_html_for_schema(raw_html)
return JSONResponse({"html": processed_html, "url": body.url, "success": True})
# Screenshot endpoint
@app.post("/screenshot")
@limiter.limit(config["rate_limiting"]["default_limit"])
@mcp_tool("screenshot")
async def generate_screenshot(
request: Request,
body: ScreenshotRequest,
_td: Dict = Depends(token_dep),
):
"""
Capture a full-page PNG screenshot of the specified URL, waiting an optional delay before capture.
Use this when you need an image snapshot of the rendered page. It is recommended to provide an output path to save the screenshot;
the result will then contain the path to the saved file instead of the raw screenshot data.
"""
cfg = CrawlerRunConfig(screenshot=True, screenshot_wait_for=body.screenshot_wait_for)
async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
results = await crawler.arun(url=body.url, config=cfg)
screenshot_data = results[0].screenshot
if body.output_path:
abs_path = os.path.abspath(body.output_path)
os.makedirs(os.path.dirname(abs_path), exist_ok=True)
with open(abs_path, "wb") as f:
f.write(base64.b64decode(screenshot_data))
return {"success": True, "path": abs_path}
return {"success": True, "screenshot": screenshot_data}
# PDF endpoint
@app.post("/pdf")
@limiter.limit(config["rate_limiting"]["default_limit"])
@mcp_tool("pdf")
async def generate_pdf(
request: Request,
body: PDFRequest,
_td: Dict = Depends(token_dep),
):
"""
Generate a PDF document of the specified URL.
Use this when you need a printable or archivable snapshot of the page. It is recommended to provide an output path to save the PDF;
the result will then contain the path to the saved file instead of the raw PDF data.
"""
cfg = CrawlerRunConfig(pdf=True)
async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
results = await crawler.arun(url=body.url, config=cfg)
pdf_data = results[0].pdf
if body.output_path:
abs_path = os.path.abspath(body.output_path)
os.makedirs(os.path.dirname(abs_path), exist_ok=True)
with open(abs_path, "wb") as f:
f.write(pdf_data)
return {"success": True, "path": abs_path}
return {"success": True, "pdf": base64.b64encode(pdf_data).decode()}
@app.post("/execute_js")
@limiter.limit(config["rate_limiting"]["default_limit"])
@mcp_tool("execute_js")
async def execute_js(
request: Request,
body: JSEndpointRequest,
_td: Dict = Depends(token_dep),
):
"""
Execute a sequence of JavaScript snippets on the specified URL.
Return the full CrawlResult JSON (first result).
Use this when you need to interact with dynamic pages using JS.
REMEMBER: The endpoint accepts a list of separate JS snippets and executes them in order.
IMPORTANT: Each script must be an expression that returns a value, such as an IIFE or a sync/async function.
Each snippet replaces '{script}' and executes in the browser context, so make sure it returns a value.
Return Format:
- The result is an instance of CrawlResult, so you have access to markdown, links, and more. If this is enough, you don't need to call the other endpoints.
```python
class CrawlResult(BaseModel):
url: str
html: str
success: bool
cleaned_html: Optional[str] = None
media: Dict[str, List[Dict]] = {}
links: Dict[str, List[Dict]] = {}
downloaded_files: Optional[List[str]] = None
js_execution_result: Optional[Dict[str, Any]] = None
screenshot: Optional[str] = None
pdf: Optional[bytes] = None
mhtml: Optional[str] = None
_markdown: Optional[MarkdownGenerationResult] = PrivateAttr(default=None)
extracted_content: Optional[str] = None
metadata: Optional[dict] = None
error_message: Optional[str] = None
session_id: Optional[str] = None
response_headers: Optional[dict] = None
status_code: Optional[int] = None
ssl_certificate: Optional[SSLCertificate] = None
dispatch_result: Optional[DispatchResult] = None
redirected_url: Optional[str] = None
network_requests: Optional[List[Dict[str, Any]]] = None
console_messages: Optional[List[Dict[str, Any]]] = None
class MarkdownGenerationResult(BaseModel):
raw_markdown: str
markdown_with_citations: str
references_markdown: str
fit_markdown: Optional[str] = None
fit_html: Optional[str] = None
```
"""
cfg = CrawlerRunConfig(js_code=body.scripts)
async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
results = await crawler.arun(url=body.url, config=cfg)
# Return JSON-serializable dict of the first CrawlResult
data = results[0].model_dump()
return JSONResponse(data)
@app.get("/llm/{url:path}")
@app.get("/llm/{url:path}", description="URL should be without http/https prefix")
async def llm_endpoint(
request: Request,
url: str = Path(...),
q: str = Query(...),
_td: Dict = Depends(token_dep),
q: Optional[str] = Query(None),
token_data: Optional[Dict] = Depends(token_dependency)
):
if not q:
raise HTTPException(400, "Query parameter 'q' is required")
if not url.startswith(("http://", "https://")):
url = "https://" + url
answer = await handle_llm_qa(url, q, config)
return JSONResponse({"answer": answer})
raise HTTPException(status_code=400, detail="Query parameter 'q' is required")
if not url.startswith(('http://', 'https://')):
url = 'https://' + url
try:
answer = await handle_llm_qa(url, q, config)
return JSONResponse({"answer": answer})
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@app.get("/schema")
async def get_schema():
from crawl4ai import BrowserConfig, CrawlerRunConfig
return {"browser": BrowserConfig().dump(),
"crawler": CrawlerRunConfig().dump()}
return {"browser": BrowserConfig().dump(), "crawler": CrawlerRunConfig().dump()}
@app.get(config["observability"]["health_check"]["endpoint"])
async def health():
return {"status": "ok", "timestamp": time.time(), "version": __version__}
@app.get(config["observability"]["prometheus"]["endpoint"])
async def metrics():
return RedirectResponse(config["observability"]["prometheus"]["endpoint"])
return RedirectResponse(url=config["observability"]["prometheus"]["endpoint"])
@app.post("/crawl")
@limiter.limit(config["rate_limiting"]["default_limit"])
@mcp_tool("crawl")
async def crawl(
request: Request,
crawl_request: CrawlRequest,
_td: Dict = Depends(token_dep),
token_data: Optional[Dict] = Depends(token_dependency)
):
"""
Crawl a list of URLs and return the results as JSON.
"""
if not crawl_request.urls:
raise HTTPException(400, "At least one URL required")
res = await handle_crawl_request(
raise HTTPException(status_code=400, detail="At least one URL required")
results = await handle_crawl_request(
urls=crawl_request.urls,
browser_config=crawl_request.browser_config,
crawler_config=crawl_request.crawler_config,
config=config,
config=config
)
return JSONResponse(res)
return JSONResponse(results)
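# Illustrative call (host/port assumed from the supervisord config):
# curl -X POST http://localhost:11235/crawl \
#      -H 'Content-Type: application/json' \
#      -d '{"urls": ["https://example.com"], "crawler_config": {"stream": false}}'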
@app.post("/crawl/stream")
@@ -487,154 +152,24 @@ async def crawl(
async def crawl_stream(
request: Request,
crawl_request: CrawlRequest,
_td: Dict = Depends(token_dep),
token_data: Optional[Dict] = Depends(token_dependency)
):
if not crawl_request.urls:
raise HTTPException(400, "At least one URL required")
crawler, gen = await handle_stream_crawl_request(
raise HTTPException(status_code=400, detail="At least one URL required")
crawler, results_gen = await handle_stream_crawl_request(
urls=crawl_request.urls,
browser_config=crawl_request.browser_config,
crawler_config=crawl_request.crawler_config,
config=config,
config=config
)
return StreamingResponse(
stream_results(crawler, gen),
media_type="application/x-ndjson",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Stream-Status": "active",
},
stream_results(crawler, results_gen),
media_type='application/x-ndjson',
headers={'Cache-Control': 'no-cache', 'Connection': 'keep-alive', 'X-Stream-Status': 'active'}
)
def chunk_code_functions(code_md: str) -> List[str]:
"""Extract each function/class from markdown code blocks per file."""
pattern = re.compile(
# match "## File: <path>" then a ```py fence, then capture until the closing ```
r'##\s*File:\s*(?P<path>.+?)\s*?\r?\n' # file header
r'```py\s*?\r?\n' # opening fence
r'(?P<code>.*?)(?=\r?\n```)', # code block
re.DOTALL
)
chunks: List[str] = []
for m in pattern.finditer(code_md):
file_path = m.group("path").strip()
code_blk = m.group("code")
tree = ast.parse(code_blk)
lines = code_blk.splitlines()
for node in tree.body:
if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
start = node.lineno - 1
end = getattr(node, "end_lineno", start + 1)
snippet = "\n".join(lines[start:end])
chunks.append(f"# File: {file_path}\n{snippet}")
return chunks
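# Illustrative input/output (content assumed). Given markdown containing:
#   ## File: pkg/mod.py
#   ```py
#   def f():
#       return 1
#   ```
# chunk_code_functions(...) returns ["# File: pkg/mod.py\ndef f():\n    return 1"]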
def chunk_doc_sections(doc: str) -> List[str]:
lines = doc.splitlines(keepends=True)
sections = []
current: List[str] = []
for line in lines:
if re.match(r"^#{1,6}\s", line):
if current:
sections.append("".join(current))
current = [line]
else:
current.append(line)
if current:
sections.append("".join(current))
return sections
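# Illustrative behaviour (input assumed):
# chunk_doc_sections("# A\nintro\n## B\ndetails\n")
# -> ["# A\nintro\n", "## B\ndetails\n"]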
@app.get("/ask")
@limiter.limit(config["rate_limiting"]["default_limit"])
@mcp_tool("ask")
async def get_context(
request: Request,
_td: Dict = Depends(token_dep),
context_type: str = Query("all", regex="^(code|doc|all)$"),
query: Optional[str] = Query(None, description="search query to filter chunks"),
score_ratio: float = Query(0.5, ge=0.0, le=1.0, description="min score as fraction of max_score"),
max_results: int = Query(20, ge=1, description="absolute cap on returned chunks"),
):
"""
This endpoint is designed for any questions about the Crawl4ai library. It returns plain-text markdown with extensive information about Crawl4ai.
You can use this as context for any AI assistant. Use this endpoint for AI assistants to retrieve library context for decision-making or code-generation tasks.
It is always best practice to provide a query to filter the context; otherwise the response will be very long.
Parameters:
- context_type: Specify "code" for code context, "doc" for documentation context, or "all" for both.
- query: RECOMMENDED search query to filter paragraphs using BM25. You can leave this empty to get all the context.
- score_ratio: Minimum score as a fraction of the maximum score for filtering results.
- max_results: Maximum number of results to return. Default is 20.
Returns:
- JSON response with the requested context.
- If "code" is specified, returns the code context.
- If "doc" is specified, returns the documentation context.
- If "all" is specified, returns both code and documentation contexts.
"""
# load contexts
base = os.path.dirname(__file__)
code_path = os.path.join(base, "c4ai-code-context.md")
doc_path = os.path.join(base, "c4ai-doc-context.md")
if not os.path.exists(code_path) or not os.path.exists(doc_path):
raise HTTPException(404, "Context files not found")
with open(code_path, "r") as f:
code_content = f.read()
with open(doc_path, "r") as f:
doc_content = f.read()
# if no query, just return raw contexts
if not query:
if context_type == "code":
return JSONResponse({"code_context": code_content})
if context_type == "doc":
return JSONResponse({"doc_context": doc_content})
return JSONResponse({
"code_context": code_content,
"doc_context": doc_content,
})
tokens = query.split()
results: Dict[str, List[Dict[str, float]]] = {}
# code BM25 over functions/classes
if context_type in ("code", "all"):
code_chunks = chunk_code_functions(code_content)
bm25 = BM25Okapi([c.split() for c in code_chunks])
scores = bm25.get_scores(tokens)
max_sc = float(scores.max()) if scores.size > 0 else 0.0
cutoff = max_sc * score_ratio
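# e.g. a max BM25 score of 8.0 with score_ratio=0.5 keeps only chunks scoring >= 4.0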
picked = [(c, s) for c, s in zip(code_chunks, scores) if s >= cutoff]
picked = sorted(picked, key=lambda x: x[1], reverse=True)[:max_results]
results["code_results"] = [{"text": c, "score": s} for c, s in picked]
# doc BM25 over markdown sections
if context_type in ("doc", "all"):
sections = chunk_doc_sections(doc_content)
bm25d = BM25Okapi([sec.split() for sec in sections])
scores_d = bm25d.get_scores(tokens)
max_sd = float(scores_d.max()) if scores_d.size > 0 else 0.0
cutoff_d = max_sd * score_ratio
idxs = [i for i, s in enumerate(scores_d) if s >= cutoff_d]
neighbors = set(i for idx in idxs for i in (idx-1, idx, idx+1))
valid = [i for i in sorted(neighbors) if 0 <= i < len(sections)]
valid = valid[:max_results]
results["doc_results"] = [
{"text": sections[i], "score": scores_d[i]} for i in valid
]
return JSONResponse(results)
# attach MCP layer (adds /mcp/ws, /mcp/sse, /mcp/schema)
attach_mcp(
app,
base_url=f"http://{config['app']['host']}:{config['app']['port']}"
)
# ────────────────────────── cli ──────────────────────────────
if __name__ == "__main__":
import uvicorn
uvicorn.run(
@@ -642,6 +177,5 @@ if __name__ == "__main__":
host=config["app"]["host"],
port=config["app"]["port"],
reload=config["app"]["reload"],
timeout_keep_alive=config["app"]["timeout_keep_alive"],
)
# ─────────────────────────────────────────────────────────────
timeout_keep_alive=config["app"]["timeout_keep_alive"]
)

View File

@@ -1,813 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Crawl4AI Playground</title>
<script src="https://cdn.tailwindcss.com"></script>
<script>
tailwind.config = {
theme: {
extend: {
colors: {
primary: '#4EFFFF',
primarydim: '#09b5a5',
accent: '#F380F5',
dark: '#070708',
light: '#E8E9ED',
secondary: '#D5CEBF',
codebg: '#1E1E1E',
surface: '#202020',
border: '#3F3F44',
},
fontFamily: {
mono: ['Fira Code', 'monospace'],
},
}
}
}
</script>
<link href="https://fonts.googleapis.com/css2?family=Fira+Code:wght@400;500&display=swap" rel="stylesheet">
<!-- Highlight.js -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/github-dark.min.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/clipboard.js/2.0.11/clipboard.min.js"></script>
<!-- CodeMirror (python mode) -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.65.16/codemirror.min.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.65.16/codemirror.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.65.16/mode/python/python.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.65.16/addon/edit/matchbrackets.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.65.16/addon/selection/active-line.min.js"></script>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.65.16/theme/darcula.min.css">
<!-- <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/languages/python.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/languages/bash.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/languages/json.min.js"></script> -->
<style>
/* Custom CodeMirror styling to match theme */
.CodeMirror {
background-color: #1E1E1E !important;
color: #E8E9ED !important;
border-radius: 4px;
font-family: 'Fira Code', monospace;
font-size: 0.9rem;
}
.CodeMirror-gutters {
background-color: #1E1E1E !important;
border-right: 1px solid #3F3F44 !important;
}
.CodeMirror-linenumber {
color: #3F3F44 !important;
}
.cm-s-darcula .cm-keyword {
color: #4EFFFF !important;
}
.cm-s-darcula .cm-string {
color: #F380F5 !important;
}
.cm-s-darcula .cm-number {
color: #D5CEBF !important;
}
/* Add to your <style> section or Tailwind config */
.hljs {
background: #1E1E1E !important;
border-radius: 4px;
padding: 1rem !important;
}
pre code.hljs {
display: block;
overflow-x: auto;
}
/* Language-specific colors */
.hljs-attr {
color: #4EFFFF;
}
/* JSON keys */
.hljs-string {
color: #F380F5;
}
/* Strings */
.hljs-number {
color: #D5CEBF;
}
/* Numbers */
.hljs-keyword {
color: #4EFFFF;
}
pre code {
white-space: pre-wrap;
word-break: break-word;
}
.copy-btn {
transition: all 0.2s ease;
opacity: 0.7;
}
.copy-btn:hover {
opacity: 1;
}
.tab-content:hover .copy-btn {
opacity: 0.7;
}
.tab-content:hover .copy-btn:hover {
opacity: 1;
}
/* copied text highlighted */
.highlighted {
background-color: rgba(78, 255, 255, 0.2) !important;
transition: background-color 0.5s ease;
}
</style>
</head>
<body class="bg-dark text-light font-mono min-h-screen flex flex-col" style="font-feature-settings: 'calt' 0;">
<!-- Header -->
<header class="border-b border-border px-4 py-2 flex items-center">
<h1 class="text-lg font-medium flex items-center space-x-4">
<span>🚀🤖 <span class="text-primary">Crawl4AI</span> Playground</span>
<!-- GitHub badges -->
<a href="https://github.com/unclecode/crawl4ai" target="_blank" class="flex space-x-1">
<img src="https://img.shields.io/github/stars/unclecode/crawl4ai?style=social"
alt="GitHub stars" class="h-5">
<img src="https://img.shields.io/github/forks/unclecode/crawl4ai?style=social"
alt="GitHub forks" class="h-5">
</a>
<!-- Docs -->
<a href="https://docs.crawl4ai.com" target="_blank"
class="text-xs text-secondary hover:text-primary underline flex items-center">
Docs
</a>
<!-- X (Twitter) follow -->
<a href="https://x.com/unclecode" target="_blank"
class="hover:text-primary flex items-center" title="Follow @unclecode on X">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"
class="w-4 h-4 fill-current mr-1">
<path d="M22.46 6c-.77.35-1.6.58-2.46.69a4.27 4.27 0 001.88-2.35 8.53 8.53 0 01-2.71 1.04 4.24 4.24 0 00-7.23 3.87A12.05 12.05 0 013 4.62a4.24 4.24 0 001.31 5.65 4.2 4.2 0 01-1.92-.53v.05a4.24 4.24 0 003.4 4.16 4.31 4.31 0 01-1.91.07 4.25 4.25 0 003.96 2.95A8.5 8.5 0 012 19.55a12.04 12.04 0 006.53 1.92c7.84 0 12.13-6.49 12.13-12.13 0-.18-.01-.36-.02-.54A8.63 8.63 0 0024 5.1a8.45 8.45 0 01-2.54.7z"/>
</svg>
<span class="text-xs">@unclecode</span>
</a>
</h1>
<div class="ml-auto flex space-x-2">
<button id="play-tab"
class="px-3 py-1 rounded-t bg-surface border border-b-0 border-border text-primary">Playground</button>
<button id="stress-tab" class="px-3 py-1 rounded-t border border-border hover:bg-surface">Stress
Test</button>
</div>
</header>
<!-- Main Playground -->
<main id="playground" class="flex-1 flex flex-col p-4 space-y-4 max-w-5xl w-full mx-auto">
<!-- Request Builder -->
<section class="bg-surface rounded-lg border border-border overflow-hidden">
<div class="px-4 py-2 border-b border-border flex items-center">
<h2 class="font-medium">Request Builder</h2>
<select id="endpoint" class="ml-auto bg-dark border border-border rounded px-2 py-1 text-sm">
<option value="crawl">/crawl (batch)</option>
<option value="crawl_stream">/crawl/stream</option>
<option value="md">/md</option>
<option value="llm">/llm</option>
</select>
</div>
<div class="p-4">
<label class="block mb-2 text-sm">URL(s) - one per line</label>
<textarea id="urls" class="w-full bg-dark border border-border rounded p-2 h-32 text-sm mb-4"
spellcheck="false">https://example.com</textarea>
<details class="mb-4">
<summary class="text-sm text-secondary cursor-pointer">Advanced Config <span
class="text-xs text-primary">(Python → autoJSON)</span></summary>
<!-- Toolbar -->
<div class="flex items-center justify-end space-x-3 mt-2">
<label for="cfg-type" class="text-xs text-secondary">Type:</label>
<select id="cfg-type"
class="bg-dark border border-border rounded px-1 py-0.5 text-xs">
<option value="CrawlerRunConfig">CrawlerRunConfig</option>
<option value="BrowserConfig">BrowserConfig</option>
</select>
<!-- help link -->
<a href="https://docs.crawl4ai.com/api/parameters/"
target="_blank"
class="text-xs text-primary hover:underline flex items-center space-x-1"
title="Open parameter reference in new tab">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"
class="w-4 h-4 fill-current">
<path d="M13 3h8v8h-2V6.41l-9.29 9.3-1.42-1.42 9.3-9.29H13V3z"/>
<path d="M5 5h4V3H3v6h2V5zm0 14v-4H3v6h6v-2H5z"/>
</svg>
<span>Docs</span>
</a>
<span id="cfg-status" class="text-xs text-secondary ml-2"></span>
</div>
<!-- CodeMirror host -->
<div id="adv-editor" class="mt-2 border border-border rounded overflow-hidden h-40"></div>
</details>
<div class="flex space-x-2">
<button id="run-btn" class="bg-primary text-dark px-4 py-2 rounded hover:bg-primarydim font-medium">
Run (⌘/Ctrl+Enter)
</button>
<button id="export-btn" class="border border-border px-4 py-2 rounded hover:bg-surface hidden">
Export Python Code
</button>
</div>
</div>
</section>
<!-- Execution Status -->
<section id="execution-status" class="hidden bg-surface rounded-lg border border-border p-3 text-sm">
<div class="flex space-x-4">
<div id="status-badge" class="flex items-center">
<span class="w-3 h-3 rounded-full mr-2"></span>
<span>Ready</span>
</div>
<div>
<span class="text-secondary">Time:</span>
<span id="exec-time" class="text-light">-</span>
</div>
<div>
<span class="text-secondary">Memory:</span>
<span id="exec-mem" class="text-light">-</span>
</div>
</div>
</section>
<!-- Response Viewer -->
<!-- Update the Response Viewer section -->
<section class="bg-surface rounded-lg border border-border overflow-hidden flex-1 flex flex-col">
<div class="border-b border-border flex">
<button data-tab="response" class="tab-btn active px-4 py-2 border-r border-border">Response</button>
<button data-tab="python" class="tab-btn px-4 py-2 border-r border-border">Python</button>
<button data-tab="curl" class="tab-btn px-4 py-2">cURL</button>
</div>
<div class="flex-1 overflow-auto relative">
<!-- Response Tab -->
<div class="tab-content active h-full">
<div class="absolute right-2 top-2">
<button class="copy-btn bg-surface border border-border rounded px-2 py-1 text-xs hover:bg-dark"
data-target="#response-content code">
Copy
</button>
</div>
<pre id="response-content" class="p-4 text-sm h-full"><code class="json hljs">{}</code></pre>
</div>
<!-- Python Tab -->
<div class="tab-content hidden h-full">
<div class="absolute right-2 top-2">
<button class="copy-btn bg-surface border border-border rounded px-2 py-1 text-xs hover:bg-dark"
data-target="#python-content code">
Copy
</button>
</div>
<pre id="python-content" class="p-4 text-sm h-full"><code class="python hljs"></code></pre>
</div>
<!-- cURL Tab -->
<div class="tab-content hidden h-full">
<div class="absolute right-2 top-2">
<button class="copy-btn bg-surface border border-border rounded px-2 py-1 text-xs hover:bg-dark"
data-target="#curl-content code">
Copy
</button>
</div>
<pre id="curl-content" class="p-4 text-sm h-full"><code class="bash hljs"></code></pre>
</div>
</div>
</section>
</main>
<!-- Stress Test Modal -->
<div id="stress-modal"
class="hidden fixed inset-0 bg-black bg-opacity-70 z-50 flex items-center justify-center p-4">
<div class="bg-surface rounded-lg border border-accent w-full max-w-3xl max-h-[90vh] flex flex-col">
<div class="px-4 py-2 border-b border-border flex items-center">
<h2 class="font-medium text-accent">🔥 Stress Test</h2>
<button id="close-stress" class="ml-auto text-secondary hover:text-light">&times;</button>
</div>
<div class="p-4 space-y-4 flex-1 overflow-auto">
<div class="grid grid-cols-3 gap-4">
<div>
<label class="block text-sm mb-1">Total URLs</label>
<input id="st-total" type="number" value="20"
class="w-full bg-dark border border-border rounded px-3 py-1">
</div>
<div>
<label class="block text-sm mb-1">Chunk Size</label>
<input id="st-chunk" type="number" value="5"
class="w-full bg-dark border border-border rounded px-3 py-1">
</div>
<div>
<label class="block text-sm mb-1">Concurrency</label>
<input id="st-conc" type="number" value="2"
class="w-full bg-dark border border-border rounded px-3 py-1">
</div>
</div>
<div class="flex items-center">
<input id="st-stream" type="checkbox" class="mr-2">
<label for="st-stream" class="text-sm">Use /crawl/stream</label>
<button id="st-run"
class="ml-auto bg-accent text-dark px-4 py-2 rounded hover:bg-opacity-90 font-medium">
Run Stress Test
</button>
</div>
<div class="mt-4">
<div class="bg-dark rounded border border-border p-3 h-64 overflow-auto text-sm whitespace-break-spaces"
id="stress-log"></div>
</div>
</div>
<div class="px-4 py-2 border-t border-border text-sm text-secondary">
<div class="flex justify-between">
<span>Completed: <span id="stress-completed">0</span>/<span id="stress-total">0</span></span>
<span>Avg. Time: <span id="stress-avg-time">0</span>ms</span>
<span>Peak Memory: <span id="stress-peak-mem">0</span>MB</span>
</div>
</div>
</div>
</div>
<script>
// Tab switching
document.querySelectorAll('.tab-btn').forEach(btn => {
btn.addEventListener('click', () => {
document.querySelectorAll('.tab-btn').forEach(b => b.classList.remove('active'));
document.querySelectorAll('.tab-content').forEach(c => c.classList.add('hidden'));
btn.classList.add('active');
const tabName = btn.dataset.tab;
document.querySelector(`#${tabName}-content`).parentElement.classList.remove('hidden');
// Re-highlight content when switching tabs
const activeCode = document.querySelector(`#${tabName}-content code`);
if (activeCode) {
forceHighlightElement(activeCode);
}
});
});
// View switching
document.getElementById('play-tab').addEventListener('click', () => {
document.getElementById('playground').classList.remove('hidden');
document.getElementById('stress-modal').classList.add('hidden');
document.getElementById('play-tab').classList.add('bg-surface', 'border-b-0');
document.getElementById('stress-tab').classList.remove('bg-surface', 'border-b-0');
});
document.getElementById('stress-tab').addEventListener('click', () => {
document.getElementById('stress-modal').classList.remove('hidden');
document.getElementById('stress-tab').classList.add('bg-surface', 'border-b-0');
document.getElementById('play-tab').classList.remove('bg-surface', 'border-b-0');
});
document.getElementById('close-stress').addEventListener('click', () => {
document.getElementById('stress-modal').classList.add('hidden');
document.getElementById('play-tab').classList.add('bg-surface', 'border-b-0');
document.getElementById('stress-tab').classList.remove('bg-surface', 'border-b-0');
});
// Initialize clipboard and highlight.js
new ClipboardJS('#export-btn');
hljs.highlightAll();
// Keyboard shortcut
window.addEventListener('keydown', e => {
if ((e.ctrlKey || e.metaKey) && e.key === 'Enter') {
document.getElementById('run-btn').click();
}
});
// ================ ADVANCED CONFIG EDITOR ================
const cm = CodeMirror(document.getElementById('adv-editor'), {
value: `CrawlerRunConfig(
stream=True,
cache_mode=CacheMode.BYPASS,
)`,
mode: 'python',
lineNumbers: true,
theme: 'darcula',
tabSize: 4,
styleActiveLine: true,
matchBrackets: true,
gutters: ["CodeMirror-linenumbers"],
lineWrapping: true,
});
const TEMPLATES = {
CrawlerRunConfig: `CrawlerRunConfig(
stream=True,
cache_mode=CacheMode.BYPASS,
)`,
BrowserConfig: `BrowserConfig(
headless=True,
extra_args=[
"--no-sandbox",
"--disable-gpu",
],
)`,
};
document.getElementById('cfg-type').addEventListener('change', (e) => {
cm.setValue(TEMPLATES[e.target.value]);
document.getElementById('cfg-status').textContent = '';
});
async function pyConfigToJson() {
const code = cm.getValue().trim();
if (!code) return {};
const res = await fetch('/config/dump', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ code }),
});
const statusEl = document.getElementById('cfg-status');
if (!res.ok) {
const msg = await res.text();
statusEl.textContent = '✖ config error';
statusEl.className = 'text-xs text-red-400';
throw new Error(msg || 'Invalid config');
}
statusEl.textContent = '✓ parsed';
statusEl.className = 'text-xs text-green-400';
return await res.json();
}
// ================ SERVER COMMUNICATION ================
// Update status UI
function updateStatus(status, time, memory, peakMemory) {
const statusEl = document.getElementById('execution-status');
const badgeEl = document.querySelector('#status-badge span:first-child');
const textEl = document.querySelector('#status-badge span:last-child');
statusEl.classList.remove('hidden');
badgeEl.className = 'w-3 h-3 rounded-full mr-2';
if (status === 'success') {
badgeEl.classList.add('bg-green-500');
textEl.textContent = 'Success';
} else if (status === 'error') {
badgeEl.classList.add('bg-red-500');
textEl.textContent = 'Error';
} else {
badgeEl.classList.add('bg-yellow-500');
textEl.textContent = 'Processing...';
}
if (time) {
document.getElementById('exec-time').textContent = `${time}ms`;
}
if (memory !== undefined && peakMemory !== undefined) {
document.getElementById('exec-mem').textContent = `Δ${memory >= 0 ? '+' : ''}${memory}MB (Peak: ${peakMemory}MB)`;
}
}
// Generate code snippets
function generateSnippets(api, payload) {
// Python snippet
const pyCodeEl = document.querySelector('#python-content code');
const pySnippet = `import httpx\n\nasync def crawl():\n async with httpx.AsyncClient() as client:\n response = await client.post(\n "${window.location.origin}${api}",\n json=${JSON.stringify(payload, null, 4).replace(/\n/g, '\n ')}\n )\n return response.json()`;
pyCodeEl.textContent = pySnippet;
pyCodeEl.className = 'python hljs'; // Reset classes
forceHighlightElement(pyCodeEl);
// cURL snippet
const curlCodeEl = document.querySelector('#curl-content code');
const curlSnippet = `curl -X POST ${window.location.origin}${api} \\\n -H "Content-Type: application/json" \\\n -d '${JSON.stringify(payload)}'`;
curlCodeEl.textContent = curlSnippet;
curlCodeEl.className = 'bash hljs'; // Reset classes
forceHighlightElement(curlCodeEl);
}
// Main run function
async function runCrawl() {
const endpoint = document.getElementById('endpoint').value;
const urls = document.getElementById('urls').value.trim().split(/\n/).filter(u => u);
// 1) grab python from CodeMirror, validate via /config/dump
let advConfig = {};
try {
const cfgJson = await pyConfigToJson(); // may throw
if (Object.keys(cfgJson).length) {
const cfgType = document.getElementById('cfg-type').value;
advConfig = cfgType === 'CrawlerRunConfig'
? { crawler_config: cfgJson }
: { browser_config: cfgJson };
}
} catch (err) {
updateStatus('error');
document.querySelector('#response-content code').textContent =
JSON.stringify({ error: err.message }, null, 2);
forceHighlightElement(document.querySelector('#response-content code'));
return; // stop run
}
const endpointMap = {
crawl: '/crawl',
crawl_stream: '/crawl/stream',
md: '/md',
llm: '/llm'
};
const api = endpointMap[endpoint];
const payload = {
urls,
...advConfig
};
updateStatus('processing');
try {
const startTime = performance.now();
let response, responseData;
if (endpoint === 'crawl_stream') {
// Stream processing
response = await fetch(api, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload)
});
const reader = response.body.getReader();
let text = '';
let maxMemory = 0;
while (true) {
const { value, done } = await reader.read();
if (done) break;
const chunk = new TextDecoder().decode(value);
text += chunk;
// Process each line for memory updates
chunk.trim().split('\n').forEach(line => {
if (!line) return;
try {
const obj = JSON.parse(line);
if (obj.server_memory_mb) {
maxMemory = Math.max(maxMemory, obj.server_memory_mb);
}
} catch (e) {
console.error('Error parsing stream line:', e);
}
});
}
responseData = { stream: text };
const time = Math.round(performance.now() - startTime);
updateStatus('success', time, null, maxMemory);
document.querySelector('#response-content code').textContent = text;
document.querySelector('#response-content code').className = 'json hljs'; // Reset classes
forceHighlightElement(document.querySelector('#response-content code'));
} else {
// Regular request
response = await fetch(api, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload)
});
responseData = await response.json();
const time = Math.round(performance.now() - startTime);
if (!response.ok) {
updateStatus('error', time);
throw new Error(responseData.error || 'Request failed');
}
updateStatus(
'success',
time,
responseData.server_memory_delta_mb,
responseData.server_peak_memory_mb
);
document.querySelector('#response-content code').textContent = JSON.stringify(responseData, null, 2);
document.querySelector('#response-content code').className = 'json hljs'; // Ensure class is set
forceHighlightElement(document.querySelector('#response-content code'));
}
forceHighlightElement(document.querySelector('#response-content code'));
generateSnippets(api, payload);
} catch (error) {
console.error('Error:', error);
updateStatus('error');
document.querySelector('#response-content code').textContent = JSON.stringify(
{ error: error.message },
null,
2
);
forceHighlightElement(document.querySelector('#response-content code'));
}
}
// Stress test function
async function runStressTest() {
const total = parseInt(document.getElementById('st-total').value);
const chunkSize = parseInt(document.getElementById('st-chunk').value);
const concurrency = parseInt(document.getElementById('st-conc').value);
const useStream = document.getElementById('st-stream').checked;
const logEl = document.getElementById('stress-log');
logEl.textContent = '';
document.getElementById('stress-completed').textContent = '0';
document.getElementById('stress-total').textContent = total;
document.getElementById('stress-avg-time').textContent = '0';
document.getElementById('stress-peak-mem').textContent = '0';
const api = useStream ? '/crawl/stream' : '/crawl';
const urls = Array.from({ length: total }, (_, i) => `https://httpbin.org/anything/stress-${i}-${Date.now()}`);
const chunks = [];
for (let i = 0; i < urls.length; i += chunkSize) {
chunks.push(urls.slice(i, i + chunkSize));
}
let completed = 0;
let totalTime = 0;
let peakMemory = 0;
const processBatch = async (batch, index) => {
const payload = {
urls: batch,
browser_config: {},
crawler_config: { cache_mode: 'BYPASS', stream: useStream }
};
const start = performance.now();
let time, memory;
try {
if (useStream) {
const response = await fetch(api, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload)
});
const reader = response.body.getReader();
let maxMem = 0;
while (true) {
const { value, done } = await reader.read();
if (done) break;
const text = new TextDecoder().decode(value);
text.split('\n').forEach(line => {
try {
const obj = JSON.parse(line);
if (obj.server_memory_mb) {
maxMem = Math.max(maxMem, obj.server_memory_mb);
}
} catch { }
});
}
memory = maxMem;
} else {
const response = await fetch(api, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload)
});
const data = await response.json();
memory = data.server_peak_memory_mb;
}
time = Math.round(performance.now() - start);
peakMemory = Math.max(peakMemory, memory || 0);
totalTime += time;
logEl.textContent += `[${index + 1}/${chunks.length}] ✔ ${time}ms | Peak ${memory}MB\n`;
} catch (error) {
time = Math.round(performance.now() - start);
logEl.textContent += `[${index + 1}/${chunks.length}] ✖ ${time}ms | ${error.message}\n`;
}
completed += batch.length;
document.getElementById('stress-completed').textContent = completed;
document.getElementById('stress-peak-mem').textContent = peakMemory;
document.getElementById('stress-avg-time').textContent = Math.round(totalTime / (index + 1));
logEl.scrollTop = logEl.scrollHeight;
};
// Run with concurrency control
let active = 0;
let index = 0;
return new Promise(resolve => {
const runNext = () => {
while (active < concurrency && index < chunks.length) {
processBatch(chunks[index], index)
.finally(() => {
active--;
runNext();
});
active++;
index++;
}
if (active === 0 && index >= chunks.length) {
logEl.textContent += '\n✅ Stress test completed\n';
resolve();
}
};
runNext();
});
}
// Event listeners
document.getElementById('run-btn').addEventListener('click', runCrawl);
document.getElementById('st-run').addEventListener('click', runStressTest);
function forceHighlightElement(element) {
if (!element) return;
// Save current scroll position (important for large code blocks)
const scrollTop = element.parentElement.scrollTop;
// Reset the element
const text = element.textContent;
element.innerHTML = text;
element.removeAttribute('data-highlighted');
// Reapply highlighting
hljs.highlightElement(element);
// Restore scroll position
element.parentElement.scrollTop = scrollTop;
}
// Initialize clipboard for all copy buttons
function initCopyButtons() {
document.querySelectorAll('.copy-btn').forEach(btn => {
new ClipboardJS(btn, {
text: () => {
const target = document.querySelector(btn.dataset.target);
return target ? target.textContent : '';
}
}).on('success', e => {
e.clearSelection();
// make button text "copied" for 1 second
const originalText = e.trigger.textContent;
e.trigger.textContent = 'Copied!';
setTimeout(() => {
e.trigger.textContent = originalText;
}, 1000);
// Highlight the copied code
const target = document.querySelector(btn.dataset.target);
if (target) {
target.classList.add('highlighted');
setTimeout(() => {
target.classList.remove('highlighted');
}, 1000);
}
}).on('error', e => {
console.error('Error copying:', e);
});
});
}
// Call this in your DOMContentLoaded or initialization
initCopyButtons();
</script>
</body>
</html>

View File

@@ -1,28 +1,12 @@
[supervisord]
nodaemon=true ; Run supervisord in the foreground
logfile=/dev/null ; Log supervisord output to stdout/stderr
logfile_maxbytes=0
nodaemon=true
[program:redis]
command=/usr/bin/redis-server --loglevel notice ; Path to redis-server on Alpine
user=appuser ; Run redis as our non-root user
command=redis-server
autorestart=true
priority=10
stdout_logfile=/dev/stdout ; Redirect redis stdout to container stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr ; Redirect redis stderr to container stderr
stderr_logfile_maxbytes=0
[program:gunicorn]
command=/usr/local/bin/gunicorn --bind 0.0.0.0:11235 --workers 2 --threads 2 --timeout 120 --graceful-timeout 30 --keep-alive 60 --log-level info --worker-class uvicorn.workers.UvicornWorker server:app
directory=/app ; Working directory for the app
user=appuser ; Run gunicorn as our non-root user
command=gunicorn --bind 0.0.0.0:8000 --workers 4 --threads 2 --timeout 300 --graceful-timeout 60 --keep-alive 65 --log-level debug --worker-class uvicorn.workers.UvicornWorker --max-requests 1000 --max-requests-jitter 50 server:app
autorestart=true
priority=20
environment=PYTHONUNBUFFERED=1 ; Ensure Python output is sent straight to logs
stdout_logfile=/dev/stdout ; Redirect gunicorn stdout to container stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr ; Redirect gunicorn stderr to container stderr
stderr_logfile_maxbytes=0
# Optional: Add filebeat or other logging agents here if needed
priority=20

View File

@@ -0,0 +1,63 @@
FROM --platform=linux/amd64 python:3.10-slim
# Install system dependencies required for Chromium and Git
RUN apt-get update && apt-get install -y \
python3-dev \
pkg-config \
libjpeg-dev \
gcc \
build-essential \
libnss3 \
libnspr4 \
libatk1.0-0 \
libatk-bridge2.0-0 \
libcups2 \
libdrm2 \
libxkbcommon0 \
libxcomposite1 \
libxdamage1 \
libxfixes3 \
libxrandr2 \
libgbm1 \
libasound2 \
libpango-1.0-0 \
libcairo2 \
procps \
git \
socat \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Make a directory for crawl4ai, call it crawl4ai_repo
# RUN mkdir crawl4ai_repo
# # Clone Crawl4ai from the next branch and install it
# RUN git clone --branch next https://github.com/unclecode/crawl4ai.git ./crawl4ai_repo \
# && cd crawl4ai_repo \
# && pip install . \
# && cd .. \
# && rm -rf crawl4ai_repo
RUN python3 -m venv /app/venv
ENV PATH="/app/venv/bin:$PATH"
# RUN pip install git+https://github.com/unclecode/crawl4ai.git@next
# Copy requirements and install remaining dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy application files
COPY resources /app/resources
COPY main.py .
COPY start.sh .
# Set permissions for Chrome binary and start script
RUN chmod +x /app/resources/chrome/headless_shell && \
chmod -R 755 /app/resources/chrome && \
chmod +x start.sh
ENV FUNCTION_TARGET=crawl
EXPOSE 8080 9223
CMD ["/app/start.sh"]

View File

@@ -0,0 +1,8 @@
project_id: PROJECT_ID
region: REGION_NAME
artifact_repo: ARTIFACT_REPO_NAME
function_name: FUNCTION_NAME
memory: "2048MB"
timeout: "540s"
local_image: "gcr.io/ARTIFACT_REPO_NAME/crawl4ai:latest"
test_query_url: "https://example.com"
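# Illustrative filled-in values (all assumed):
#   project_id: my-gcp-project
#   region: us-central1
#   artifact_repo: crawl4ai-repo
#   function_name: crawl4ai-fn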

View File

@@ -0,0 +1,187 @@
#!/usr/bin/env python3
import argparse
import subprocess
import sys
import yaml
import requests
def run_command(cmd, explanation, require_confirm=True, allow_already_exists=False):
print("\n=== {} ===".format(explanation))
if require_confirm:
input("Press Enter to run: [{}]\n".format(cmd))
print("Running: {}".format(cmd))
result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
if result.returncode != 0:
if allow_already_exists and "ALREADY_EXISTS" in result.stderr:
print("Repository already exists, skipping creation.")
return ""
print("Error:\n{}".format(result.stderr))
sys.exit(1)
out = result.stdout.strip()
if out:
print("Output:\n{}".format(out))
return out
def load_config():
try:
with open("config.yml", "r") as f:
config = yaml.safe_load(f)
except Exception as e:
print("Failed to load config.yml: {}".format(e))
sys.exit(1)
required = ["project_id", "region", "artifact_repo", "function_name", "local_image"]
for key in required:
if key not in config or not config[key]:
print("Missing required config parameter: {}".format(key))
sys.exit(1)
return config
def deploy_function(config):
project_id = config["project_id"]
region = config["region"]
artifact_repo = config["artifact_repo"]
function_name = config["function_name"]
memory = config.get("memory", "2048MB")
timeout = config.get("timeout", "540s")
local_image = config["local_image"]
test_query_url = config.get("test_query_url", "https://example.com")
# Repository image format: "<region>-docker.pkg.dev/<project_id>/<artifact_repo>/<function_name>:latest"
repo_image = f"{region}-docker.pkg.dev/{project_id}/{artifact_repo}/{function_name}:latest"
# 1. Create Artifact Registry repository (skip if exists)
cmd = f"gcloud artifacts repositories create {artifact_repo} --repository-format=docker --location={region} --project={project_id}"
run_command(cmd, "Creating Artifact Registry repository (if it doesn't exist)", allow_already_exists=True)
# 2. Tag the local Docker image with the repository image name
cmd = f"docker tag {local_image} {repo_image}"
run_command(cmd, "Tagging Docker image for Artifact Registry")
# 3. Authenticate Docker to Artifact Registry
cmd = f"gcloud auth configure-docker {region}-docker.pkg.dev"
run_command(cmd, "Authenticating Docker to Artifact Registry")
# 4. Push the tagged Docker image to Artifact Registry
cmd = f"docker push {repo_image}"
run_command(cmd, "Pushing Docker image to Artifact Registry")
# 5. Deploy the Cloud Function using the custom container
cmd = (
f"gcloud beta functions deploy {function_name} "
f"--gen2 "
f"--runtime=python310 "
f"--entry-point=crawl "
f"--region={region} "
f"--docker-repository={region}-docker.pkg.dev/{project_id}/{artifact_repo} "
f"--trigger-http "
f"--memory={memory} "
f"--timeout={timeout} "
f"--project={project_id}"
)
run_command(cmd, "Deploying Cloud Function using custom container")
# 6. Set the Cloud Function to allow public (unauthenticated) invocations
cmd = (
f"gcloud functions add-iam-policy-binding {function_name} "
f"--region={region} "
f"--member='allUsers' "
f"--role='roles/cloudfunctions.invoker' "
f"--project={project_id}"
f"--quiet"
)
run_command(cmd, "Setting Cloud Function IAM to allow public invocations")
# 7. Retrieve the deployed Cloud Function URL
cmd = (
f"gcloud functions describe {function_name} "
f"--region={region} "
f"--project={project_id} "
f"--format='value(serviceConfig.uri)'"
)
deployed_url = run_command(cmd, "Extracting deployed Cloud Function URL", require_confirm=False)
print("\nDeployed URL: {}\n".format(deployed_url))
# 8. Test the deployed function
test_url = f"{deployed_url}?url={test_query_url}"
print("Testing function with: {}".format(test_url))
try:
response = requests.get(test_url)
print("Response status: {}".format(response.status_code))
print("Response body:\n{}".format(response.text))
if response.status_code == 200:
print("Test successful!")
else:
print("Non-200 response; check function logs.")
except Exception as e:
print("Test request error: {}".format(e))
sys.exit(1)
# 9. Final usage help
print("\nDeployment complete!")
print("Invoke your function with:")
print(f"curl '{deployed_url}?url={test_query_url}'")
print("For further instructions, refer to your documentation.")
def delete_function(config):
project_id = config["project_id"]
region = config["region"]
function_name = config["function_name"]
cmd = f"gcloud functions delete {function_name} --region={region} --project={project_id} --quiet"
run_command(cmd, "Deleting Cloud Function")
def describe_function(config):
project_id = config["project_id"]
region = config["region"]
function_name = config["function_name"]
cmd = (
f"gcloud functions describe {function_name} "
f"--region={region} "
f"--project={project_id} "
f"--format='value(serviceConfig.uri)'"
)
deployed_url = run_command(cmd, "Describing Cloud Function to extract URL", require_confirm=False)
print("\nCloud Function URL: {}\n".format(deployed_url))
def clear_all(config):
print("\n=== CLEAR ALL RESOURCES ===")
project_id = config["project_id"]
region = config["region"]
artifact_repo = config["artifact_repo"]
confirm = input("WARNING: This will DELETE the Cloud Function and the Artifact Registry repository. Are you sure? (y/N): ")
if confirm.lower() != "y":
print("Aborting clear operation.")
sys.exit(0)
# Delete the Cloud Function
delete_function(config)
# Delete the Artifact Registry repository
cmd = f"gcloud artifacts repositories delete {artifact_repo} --location={region} --project={project_id} --quiet"
run_command(cmd, "Deleting Artifact Registry repository", require_confirm=False)
print("All resources cleared.")
def main():
parser = argparse.ArgumentParser(description="Deploy, delete, describe, or clear Cloud Function resources using config.yml")
subparsers = parser.add_subparsers(dest="command", required=True)
subparsers.add_parser("deploy", help="Deploy the Cloud Function")
subparsers.add_parser("delete", help="Delete the deployed Cloud Function")
subparsers.add_parser("describe", help="Describe the Cloud Function and return its URL")
subparsers.add_parser("clear", help="Delete the Cloud Function and Artifact Registry repository")
args = parser.parse_args()
config = load_config()
if args.command == "deploy":
deploy_function(config)
elif args.command == "delete":
delete_function(config)
elif args.command == "describe":
describe_function(config)
elif args.command == "clear":
clear_all(config)
else:
parser.print_help()
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,204 @@
# Deploying Crawl4ai on Google Cloud Functions
This guide explains how to deploy **Crawl4ai**, an open-source web crawler library, on Google Cloud Functions Gen2 using a custom container. We assume your project folder already includes:
- **Dockerfile:** Builds your container image (which installs Crawl4ai from its Git repository).
- **start.sh:** Activates your virtual environment and starts the function using the Functions Framework (a minimal sketch appears below).
- **main.py:** Contains your function logic with the entry point `crawl` (and imports Crawl4ai).
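If you don't have a `start.sh` yet, the sketch below shows the minimal shape; the venv path and the `FUNCTION_TARGET` default match the Dockerfile in this folder, but treat the exact contents as an assumption to adapt:
```bash
#!/bin/sh
# Activate the virtual environment baked into the image, then hand off to the
# Functions Framework, which serves the HTTP entry point on port 8080.
. /app/venv/bin/activate
exec functions-framework --target="${FUNCTION_TARGET:-crawl}" --port=8080
```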
The guide is divided into two parts:
1. Manual deployment steps (using CLI commands)
2. Automated deployment using a Python script (`deploy.py`)
---
## Part 1: Manual Deployment Process
### Prerequisites
- **Google Cloud Project:** Ensure your project is active and billing is enabled.
- **Google Cloud CLI & Docker:** Installed and configured on your local machine.
- **Permissions:** You must have rights to create Cloud Functions and Artifact Registry repositories.
- **Files:** Your Dockerfile, start.sh, and main.py should be in the same directory.
### Step 1: Build Your Docker Image
Your Dockerfile packages Crawl4ai along with all its dependencies. Build your image with:
```bash
docker build -t gcr.io/<PROJECT_ID>/<FUNCTION_NAME>:latest .
```
Replace `<PROJECT_ID>` with your Google Cloud project ID and `<FUNCTION_NAME>` with your chosen function name (for example, `crawl4ai-t1`).
### Step 2: Create an Artifact Registry Repository
Cloud Functions Gen2 requires your custom container image to reside in an Artifact Registry repository. Create one by running:
```bash
gcloud artifacts repositories create <ARTIFACT_REPO> \
--repository-format=docker \
--location=<REGION> \
--project=<PROJECT_ID>
```
Replace `<ARTIFACT_REPO>` (for example, `crawl4ai`) and `<REGION>` (for example, `asia-east1`).
> **Note:** If you receive an `ALREADY_EXISTS` error, the repository is already created; simply proceed to the next step.
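Before creating, you can also list the repositories that already exist in the region (a quick sanity check, not a required step):
```bash
gcloud artifacts repositories list \
  --location=<REGION> \
  --project=<PROJECT_ID>
```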
### Step 3: Tag Your Docker Image
Tag your locally built Docker image so it matches the Artifact Registry format:
```bash
docker tag gcr.io/<PROJECT_ID>/<FUNCTION_NAME>:latest <REGION>-docker.pkg.dev/<PROJECT_ID>/<ARTIFACT_REPO>/<FUNCTION_NAME>:latest
```
This step “renames” the image so you can push it to your repository.
### Step 4: Authenticate Docker to Artifact Registry
Configure Docker authentication to the Artifact Registry:
```bash
gcloud auth configure-docker <REGION>-docker.pkg.dev
```
This ensures Docker can securely push images to your registry using your Cloud credentials.
### Step 5: Push the Docker Image
Push the tagged image to Artifact Registry:
```bash
docker push <REGION>-docker.pkg.dev/<PROJECT_ID>/<ARTIFACT_REPO>/<FUNCTION_NAME>:latest
```
Once complete, your container image (with Crawl4ai installed) is hosted in Artifact Registry.
### Step 6: Deploy the Cloud Function
Deploy your function using the custom container image. Run:
```bash
gcloud beta functions deploy <FUNCTION_NAME> \
--gen2 \
--region=<REGION> \
--docker-repository=<REGION>-docker.pkg.dev/<PROJECT_ID>/<ARTIFACT_REPO> \
--trigger-http \
--memory=2048MB \
--timeout=540s \
--project=<PROJECT_ID>
```
This command tells Cloud Functions Gen2 to pull your container image from Artifact Registry and deploy it. Make sure your main.py defines the `crawl` entry point.
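For reference, this is the entry-point shape the deploy command expects; a minimal sketch consistent with the full `main.py` shown later in this diff:
```python
import functions_framework
from flask import Request, jsonify

@functions_framework.http
def crawl(request: Request):
    # Cloud Functions routes HTTP requests here; `?url=` selects the page to crawl.
    url = request.args.get("url")
    if not url:
        return jsonify({"error": "URL parameter is required", "status": 400}), 400
    ...  # crawl the URL and return a JSON response
```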
### Step 7: Make the Function Public
To allow external (unauthenticated) access, update the function's IAM policy:
```bash
gcloud functions add-iam-policy-binding <FUNCTION_NAME> \
--region=<REGION> \
--member="allUsers" \
--role="roles/cloudfunctions.invoker" \
--project=<PROJECT_ID> \
--quiet
```
Using the `--quiet` flag ensures the command runs non-interactively, so the policy is applied immediately.
### Step 8: Retrieve and Test Your Function URL
Get the URL for your deployed function:
```bash
gcloud functions describe <FUNCTION_NAME> \
--region=<REGION> \
--project=<PROJECT_ID> \
--format='value(serviceConfig.uri)'
```
Test your deployment with a sample GET request (using curl or your browser):
```bash
curl "<FUNCTION_URL>?url=https://example.com"
```
Replace `<FUNCTION_URL>` with the output URL from the previous command. A successful test (HTTP status 200) means Crawl4ai is running on Cloud Functions.
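Judging from the `main.py` in this folder, a successful response wraps the serialized `CrawlResult` like this (the fields inside `data` depend on your Crawl4ai version; only a sample is shown):
```json
{
  "status": 200,
  "data": {
    "url": "https://example.com",
    "success": true
  }
}
```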
---
## Part 2: Automated Deployment with deploy.py
For a more streamlined process, use the provided `deploy.py` script. This Python script automates the manual steps, prompting you to confirm key actions and providing detailed logs throughout the process.
### What deploy.py Does:
- **Reads Parameters:** It loads a `config.yml` file containing all necessary parameters such as `project_id`, `region`, `artifact_repo`, `function_name`, `local_image`, etc.
- **Creates/Skips Repository:** It creates the Artifact Registry repository (or skips if it already exists).
- **Tags & Pushes:** It tags your local Docker image and pushes it to the Artifact Registry.
- **Deploys the Function:** It deploys the Cloud Function with your custom container.
- **Updates IAM:** It sets the IAM policy to allow public access (using the `--quiet` flag).
- **Tests the Deployment:** It extracts the deployed URL and performs a test request.
- **Additional Commands:** You can also use subcommands in the script to delete or describe the deployed function, or even clear all resources.
### Example config.yml
Create a `config.yml` file in the same folder as your Dockerfile. An example configuration:
```yaml
project_id: your-project-id
region: asia-east1
artifact_repo: crawl4ai
function_name: crawl4ai-t1
memory: "2048MB"
timeout: "540s"
local_image: "gcr.io/your-project-id/crawl4ai-t1:latest"
test_query_url: "https://example.com"
```
### How to Use deploy.py
- **Deploy the Function:**
```bash
python deploy.py deploy
```
The script will guide you through each step, display the output, and ask for confirmation before executing critical commands.
- **Describe the Function:**
If you forget the function URL and want to retrieve it later:
```bash
python deploy.py describe
```
- **Delete the Function:**
To remove just the Cloud Function:
```bash
python deploy.py delete
```
- **Clear All Resources:**
To delete both the Cloud Function and the Artifact Registry repository:
```bash
python deploy.py clear
```
---
## Conclusion
This guide has walked you through two deployment methods for Crawl4ai on Google Cloud Functions Gen2:
1. **Manual Deployment:** Building your Docker image, pushing it to Artifact Registry, deploying the Cloud Function, and setting up IAM.
2. **Automated Deployment:** Using `deploy.py` with a configuration file to handle the entire process interactively.
By following these instructions, you can deploy, test, and manage your Crawl4ai-based Cloud Function with ease. Enjoy using Crawl4ai in your cloud environment!

View File

@@ -0,0 +1,158 @@
# Google Cloud Function entry point: starts headless Chrome at import time and crawls URLs with Crawl4ai
import atexit
import asyncio
import logging
import functions_framework
from flask import jsonify, Request
import os
import sys
import time
import subprocess
import signal
import requests
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info(f"Python version: {sys.version}")
logger.info(f"Python path: {sys.path}")
# Try to find where crawl4ai is coming from
try:
import crawl4ai
logger.info(f"Crawl4AI module location: {crawl4ai.__file__}")
logger.info(f"Contents of crawl4ai: {dir(crawl4ai)}")
except ImportError:
logger.error("Crawl4AI module not found")
# Now attempt the import
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode, CrawlResult
# Logging is already configured above (a second basicConfig call would be a no-op)
logger = logging.getLogger(__name__)
# Paths and constants
FUNCTION_DIR = os.path.dirname(os.path.realpath(__file__))
CHROME_BINARY = os.path.join(FUNCTION_DIR, "resources/chrome/headless_shell")
CDP_PORT = 9222
def start_chrome():
"""Start Chrome process synchronously with exponential backoff."""
logger.debug("Starting Chrome process...")
chrome_args = [
CHROME_BINARY,
f"--remote-debugging-port={CDP_PORT}",
"--remote-debugging-address=0.0.0.0",
"--no-sandbox",
"--disable-setuid-sandbox",
"--headless=new",
"--disable-gpu",
"--disable-dev-shm-usage",
"--no-zygote",
"--single-process",
"--disable-features=site-per-process",
"--no-first-run",
"--disable-extensions"
]
process = subprocess.Popen(
chrome_args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=os.setsid
)
logger.debug(f"Chrome process started with PID: {process.pid}")
# Wait for CDP endpoint with exponential backoff
wait_time = 1 # Start with 1 second
max_wait_time = 16 # Cap at 16 seconds per retry
max_attempts = 10 # Total attempts
for attempt in range(max_attempts):
try:
response = requests.get(f"http://127.0.0.1:{CDP_PORT}/json/version", timeout=2)
if response.status_code == 200:
# Get ws URL from response
ws_url = response.json()['webSocketDebuggerUrl']
logger.debug("Chrome CDP is ready")
logger.debug(f"CDP URL: {ws_url}")
return process
        except requests.exceptions.RequestException:
logger.debug(f"Waiting for CDP endpoint (attempt {attempt + 1}/{max_attempts}), retrying in {wait_time} seconds")
time.sleep(wait_time)
wait_time = min(wait_time * 2, max_wait_time) # Double wait time, up to max
# If we get here, all retries failed
    process.terminate()  # Stop Chrome first so communicate() cannot block indefinitely
    stdout, stderr = process.communicate()  # Get output for debugging
logger.error(f"Chrome stdout: {stdout.decode()}")
logger.error(f"Chrome stderr: {stderr.decode()}")
raise Exception("Chrome CDP endpoint failed to start after retries")
async def fetch_with_crawl4ai(url: str) -> dict:
"""Fetch page content using Crawl4ai and return the result object"""
# Get CDP URL from the running Chrome instance
version_response = requests.get(f'http://localhost:{CDP_PORT}/json/version')
cdp_url = version_response.json()['webSocketDebuggerUrl']
# Configure and run Crawl4ai
browser_config = BrowserConfig(cdp_url=cdp_url, use_managed_browser=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
)
result : CrawlResult = await crawler.arun(
url=url, config=crawler_config
)
return result.model_dump() # Convert Pydantic model to dict for JSON response
# Start Chrome when the module loads
logger.info("Starting Chrome process on module load")
chrome_process = start_chrome()
@functions_framework.http
def crawl(request: Request):
"""HTTP Cloud Function to fetch web content using Crawl4ai"""
try:
url = request.args.get('url')
if not url:
return jsonify({'error': 'URL parameter is required', 'status': 400}), 400
# Create and run an asyncio event loop
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
result = loop.run_until_complete(
asyncio.wait_for(fetch_with_crawl4ai(url), timeout=10.0)
)
return jsonify({
'status': 200,
'data': result
})
finally:
loop.close()
except Exception as e:
error_msg = f"Unexpected error: {str(e)}"
logger.error(error_msg, exc_info=True)
return jsonify({
'error': error_msg,
'status': 500,
'details': {
'error_type': type(e).__name__,
'stack_trace': str(e),
'chrome_running': chrome_process.poll() is None if chrome_process else False
}
}), 500
@atexit.register
def cleanup():
"""Cleanup Chrome process on shutdown"""
if chrome_process and chrome_process.poll() is None:
try:
os.killpg(os.getpgid(chrome_process.pid), signal.SIGTERM)
logger.info("Chrome process terminated")
except Exception as e:
logger.error(f"Failed to terminate Chrome process: {e}")

View File

@@ -0,0 +1,5 @@
functions-framework==3.*
flask==2.3.3
requests==2.31.0
websockets==12.0
git+https://github.com/unclecode/crawl4ai.git@next

View File

@@ -0,0 +1,10 @@
<?xml version="1.0" ?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<dir>/var/task/.fonts</dir>
<dir>/var/task/fonts</dir>
<dir>/opt/fonts</dir>
<dir>/tmp/fonts</dir>
<cachedir>/tmp/fonts-cache/</cachedir>
<config></config>
</fontconfig>

Binary file not shown.

View File

@@ -0,0 +1 @@
{"file_format_version": "1.0.0", "ICD": {"library_path": "./libvk_swiftshader.so", "api_version": "1.0.5"}}

104
deploy/lambda/Dockerfile Normal file
View File

@@ -0,0 +1,104 @@
FROM python:3.12-bookworm AS python-builder
RUN pip install poetry
ENV POETRY_NO_INTERACTION=1 \
POETRY_CACHE_DIR=/tmp/poetry_cache
WORKDIR /app
COPY pyproject.toml poetry.lock ./
RUN --mount=type=cache,target=$POETRY_CACHE_DIR poetry export -f requirements.txt -o requirements.txt
# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
python3-dev \
python3-setuptools \
python3-wheel \
python3-pip \
gcc \
g++ \
&& rm -rf /var/lib/apt/lists/*
# Install specific dependencies that have build issues
RUN pip install --no-cache-dir cchardet
FROM python:3.12-bookworm
# Install AWS Lambda Runtime Interface Client
RUN python3 -m pip install --no-cache-dir awslambdaric
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
curl \
wget \
gnupg \
git \
cmake \
pkg-config \
python3-dev \
libjpeg-dev \
redis-server \
supervisor \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y --no-install-recommends \
libglib2.0-0 \
libnss3 \
libnspr4 \
libatk1.0-0 \
libatk-bridge2.0-0 \
libcups2 \
libdrm2 \
libdbus-1-3 \
libxcb1 \
libxkbcommon0 \
libx11-6 \
libxcomposite1 \
libxdamage1 \
libxext6 \
libxfixes3 \
libxrandr2 \
libgbm1 \
libpango-1.0-0 \
libcairo2 \
libasound2 \
libatspi2.0-0 \
&& rm -rf /var/lib/apt/lists/*
# Install build essentials for any compilations needed
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
# Set up function directory and browser path
ARG FUNCTION_DIR="/function"
RUN mkdir -p "${FUNCTION_DIR}/pw-browsers"
RUN mkdir -p "/tmp/.crawl4ai"
# Set critical environment variables
ENV PLAYWRIGHT_BROWSERS_PATH="${FUNCTION_DIR}/pw-browsers" \
HOME="/tmp" \
CRAWL4_AI_BASE_DIRECTORY="/tmp/.crawl4ai"
# Create Crawl4ai base directory
RUN mkdir -p ${CRAWL4_AI_BASE_DIRECTORY}
RUN pip install --no-cache-dir faust-cchardet
# Install Crawl4ai and dependencies
RUN pip install --no-cache-dir git+https://github.com/unclecode/crawl4ai.git@next
# Install Chromium only (no deps flag)
RUN playwright install chromium
# Copy function code
COPY lambda_function.py ${FUNCTION_DIR}/
# Set working directory
WORKDIR ${FUNCTION_DIR}
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "lambda_function.handler" ]

1081
deploy/lambda/deploy.py Normal file

File diff suppressed because it is too large

345
deploy/lambda/guide.md Normal file
View File

@@ -0,0 +1,345 @@
# Deploying Crawl4ai on AWS Lambda
This guide walks you through deploying Crawl4ai as an AWS Lambda function with API Gateway integration. You'll learn how to set up, test, and clean up your deployment.
## Prerequisites
Before you begin, ensure you have:
- AWS CLI installed and configured (`aws configure`)
- Docker installed and running
- Python 3.8+ installed
- Basic familiarity with AWS services
## Project Files
Your project directory should contain:
- `Dockerfile`: Container configuration for Lambda
- `lambda_function.py`: Lambda handler code
- `deploy.py`: Our deployment script
## Step 1: Install Required Python Packages
Install the Python packages needed for our deployment script:
```bash
pip install typer rich
```
## Step 2: Run the Deployment Script
Our Python script automates the entire deployment process:
```bash
python deploy.py
```
The script will guide you through:
1. Configuration setup (AWS region, function name, memory allocation)
2. Docker image building
3. ECR repository creation
4. Lambda function deployment
5. API Gateway setup
6. Provisioned concurrency configuration (optional)
Follow the prompts and confirm each step by pressing Enter.
## Step 3: Manual Deployment (Alternative to the Script)
If you prefer to deploy manually or understand what the script does, follow these steps:
### Building and Pushing the Docker Image
```bash
# Build the Docker image
docker build -t crawl4ai-lambda .
# Create an ECR repository (if it doesn't exist)
aws ecr create-repository --repository-name crawl4ai-lambda
# Get ECR login password and login
aws ecr get-login-password | docker login --username AWS --password-stdin $(aws sts get-caller-identity --query Account --output text).dkr.ecr.us-east-1.amazonaws.com
# Tag the image
ECR_URI=$(aws ecr describe-repositories --repository-names crawl4ai-lambda --query 'repositories[0].repositoryUri' --output text)
docker tag crawl4ai-lambda:latest $ECR_URI:latest
# Push the image to ECR
docker push $ECR_URI:latest
```
### Creating the Lambda Function
```bash
# Get IAM role ARN (create it if needed)
ROLE_ARN=$(aws iam get-role --role-name lambda-execution-role --query 'Role.Arn' --output text)
# Create Lambda function
aws lambda create-function \
--function-name crawl4ai-function \
--package-type Image \
--code ImageUri=$ECR_URI:latest \
--role $ROLE_ARN \
--timeout 300 \
--memory-size 4096 \
--ephemeral-storage Size=10240 \
--environment "Variables={CRAWL4_AI_BASE_DIRECTORY=/tmp/.crawl4ai,HOME=/tmp,PLAYWRIGHT_BROWSERS_PATH=/function/pw-browsers}"
```
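The `lambda-execution-role` referenced above must already exist and be assumable by Lambda. If you still need to create it, here is a minimal sketch (the role name matches the `get-role` command above; adjust policies to your needs):
```bash
# Trust policy that lets the Lambda service assume the role
aws iam create-role \
  --role-name lambda-execution-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "lambda.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Basic execution permissions (CloudWatch Logs)
aws iam attach-role-policy \
  --role-name lambda-execution-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
```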
If you're updating an existing function:
```bash
# Update function code
aws lambda update-function-code \
--function-name crawl4ai-function \
--image-uri $ECR_URI:latest
# Update function configuration
aws lambda update-function-configuration \
--function-name crawl4ai-function \
--timeout 300 \
--memory-size 4096 \
--ephemeral-storage Size=10240 \
--environment "Variables={CRAWL4_AI_BASE_DIRECTORY=/tmp/.crawl4ai,HOME=/tmp,PLAYWRIGHT_BROWSERS_PATH=/function/pw-browsers}"
```
### Setting Up API Gateway
```bash
# Create API Gateway
API_ID=$(aws apigateway create-rest-api --name crawl4ai-api --query 'id' --output text)
# Get root resource ID
PARENT_ID=$(aws apigateway get-resources --rest-api-id $API_ID --query 'items[?path==`/`].id' --output text)
# Create resource
RESOURCE_ID=$(aws apigateway create-resource --rest-api-id $API_ID --parent-id $PARENT_ID --path-part "crawl" --query 'id' --output text)
# Create POST method
aws apigateway put-method --rest-api-id $API_ID --resource-id $RESOURCE_ID --http-method POST --authorization-type NONE
# Get Lambda function ARN
LAMBDA_ARN=$(aws lambda get-function --function-name crawl4ai-function --query 'Configuration.FunctionArn' --output text)
# Set Lambda integration
aws apigateway put-integration \
--rest-api-id $API_ID \
--resource-id $RESOURCE_ID \
--http-method POST \
--type AWS_PROXY \
--integration-http-method POST \
--uri arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/$LAMBDA_ARN/invocations
# Deploy API
aws apigateway create-deployment --rest-api-id $API_ID --stage-name prod
# Set Lambda permission
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
aws lambda add-permission \
--function-name crawl4ai-function \
--statement-id apigateway \
--action lambda:InvokeFunction \
--principal apigateway.amazonaws.com \
--source-arn "arn:aws:execute-api:us-east-1:$ACCOUNT_ID:$API_ID/*/POST/crawl"
```
### Setting Up Provisioned Concurrency (Optional)
This reduces cold starts:
```bash
# Publish a version
VERSION=$(aws lambda publish-version --function-name crawl4ai-function --query 'Version' --output text)
# Create alias
aws lambda create-alias \
--function-name crawl4ai-function \
--name prod \
--function-version $VERSION
# Configure provisioned concurrency
aws lambda put-provisioned-concurrency-config \
--function-name crawl4ai-function \
--qualifier prod \
--provisioned-concurrent-executions 2
# Update API Gateway to use alias
LAMBDA_ALIAS_ARN="arn:aws:lambda:us-east-1:$ACCOUNT_ID:function:crawl4ai-function:prod"
aws apigateway put-integration \
--rest-api-id $API_ID \
--resource-id $RESOURCE_ID \
--http-method POST \
--type AWS_PROXY \
--integration-http-method POST \
--uri arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/$LAMBDA_ALIAS_ARN/invocations
# Redeploy API Gateway
aws apigateway create-deployment --rest-api-id $API_ID --stage-name prod
```
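To confirm the configuration took effect, query its status (the returned `Status` should eventually read `READY`):
```bash
aws lambda get-provisioned-concurrency-config \
  --function-name crawl4ai-function \
  --qualifier prod
```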
## Step 4: Testing the Deployment
Once deployed, test your function with:
```bash
ENDPOINT_URL="https://$API_ID.execute-api.us-east-1.amazonaws.com/prod/crawl"
# Test with curl
curl -X POST $ENDPOINT_URL \
-H "Content-Type: application/json" \
-d '{"url":"https://example.com"}'
```
Or using Python:
```python
import requests
import json
url = "https://your-api-id.execute-api.us-east-1.amazonaws.com/prod/crawl"
payload = {
"url": "https://example.com",
"browser_config": {
"headless": True,
"verbose": False
},
"crawler_config": {
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.48,
"threshold_type": "fixed"
}
}
}
}
}
}
}
}
response = requests.post(url, json=payload)
result = response.json()
print(json.dumps(result, indent=2))
```
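Either way, a successful call returns the serialized `CrawlResult` produced by the `lambda_function.py` shown later in this diff. If the `url` field is missing, the handler responds with HTTP 400 and a body of the form:
```json
{ "error": "URL is required" }
```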
## Step 5: Cleaning Up Resources
To remove all AWS resources created for this deployment:
```bash
python deploy.py cleanup
```
Or manually:
```bash
# Delete API Gateway
aws apigateway delete-rest-api --rest-api-id $API_ID
# Remove provisioned concurrency (if configured)
aws lambda delete-provisioned-concurrency-config \
--function-name crawl4ai-function \
--qualifier prod
# Delete alias (if created)
aws lambda delete-alias \
--function-name crawl4ai-function \
--name prod
# Delete Lambda function
aws lambda delete-function --function-name crawl4ai-function
# Delete ECR repository
aws ecr delete-repository --repository-name crawl4ai-lambda --force
```
## Troubleshooting
### Cold Start Issues
If experiencing long cold starts:
- Enable provisioned concurrency
- Increase memory allocation (4096 MB recommended)
- Ensure the Lambda function has enough ephemeral storage
### Permission Errors
If you encounter permission errors:
- Check the IAM role has the necessary permissions
- Ensure API Gateway has permission to invoke the Lambda function (both checks are sketched below)
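Both checks can be done from the CLI (the function name matches this guide's defaults):
```bash
# Confirm which execution role the function is using
aws lambda get-function-configuration \
  --function-name crawl4ai-function \
  --query 'Role'

# Confirm API Gateway holds an invoke permission on the function
aws lambda get-policy --function-name crawl4ai-function
```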
### Container Size Issues
If your container is too large:
- Optimize the Dockerfile
- Use multi-stage builds
- Consider removing unnecessary dependencies
## Performance Considerations
- Lambda memory affects CPU allocation - higher memory means faster execution
- Provisioned concurrency eliminates cold starts but costs more
- Optimize the Playwright setup for faster browser initialization
## Security Best Practices
- Use the principle of least privilege for IAM roles
- Implement API Gateway authentication for production deployments
- Consider using AWS KMS for storing sensitive configuration
## Useful AWS Console Links
Here are quick links to access important AWS console pages for monitoring and managing your deployment:
| Resource | Console Link |
|----------|-------------|
| Lambda Functions | [AWS Lambda Console](https://console.aws.amazon.com/lambda/home#/functions) |
| Lambda Function Logs | [CloudWatch Logs](https://console.aws.amazon.com/cloudwatch/home#logsV2:log-groups) |
| API Gateway | [API Gateway Console](https://console.aws.amazon.com/apigateway/home) |
| ECR Repositories | [ECR Console](https://console.aws.amazon.com/ecr/repositories) |
| IAM Roles | [IAM Console](https://console.aws.amazon.com/iamv2/home#/roles) |
| CloudWatch Metrics | [CloudWatch Metrics](https://console.aws.amazon.com/cloudwatch/home#metricsV2) |
### Monitoring Lambda Execution
To monitor your Lambda function:
1. Go to the [Lambda function console](https://console.aws.amazon.com/lambda/home#/functions)
2. Select your function (`crawl4ai-function`)
3. Click the "Monitor" tab to see:
- Invocation metrics
- Success/failure rates
- Duration statistics
### Viewing Lambda Logs
To see detailed execution logs:
1. Go to [CloudWatch Logs](https://console.aws.amazon.com/cloudwatch/home#logsV2:log-groups)
2. Find the log group named `/aws/lambda/crawl4ai-function`
3. Click to see the latest log streams
4. Each stream contains logs from a function execution
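If you prefer the terminal, the same logs can be tailed with the AWS CLI (v2):
```bash
aws logs tail /aws/lambda/crawl4ai-function --follow
```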
### Checking API Gateway Traffic
To monitor API requests:
1. Go to the [API Gateway console](https://console.aws.amazon.com/apigateway/home)
2. Select your API (`crawl4ai-api`)
3. Click "Dashboard" to see:
- API calls
- Latency
- Error rates
## Conclusion
You now have Crawl4ai running as a serverless function on AWS Lambda! This setup allows you to crawl websites on-demand without maintaining infrastructure, while paying only for the compute time you use.

View File

@@ -0,0 +1,107 @@
import json
import asyncio
import os
# Ensure environment variables and directories are set
os.environ['CRAWL4_AI_BASE_DIRECTORY'] = '/tmp/.crawl4ai'
os.environ['HOME'] = '/tmp'
# Create directory if it doesn't exist
os.makedirs('/tmp/.crawl4ai', exist_ok=True)
from crawl4ai import (
AsyncWebCrawler,
BrowserConfig,
CrawlerRunConfig,
CacheMode
)
def handler(event, context):
# Parse the incoming event (API Gateway request)
try:
body = json.loads(event.get('body', '{}'))
url = body.get('url')
if not url:
return {
'statusCode': 400,
'body': json.dumps({'error': 'URL is required'})
}
# Get optional configurations or use defaults
browser_config_dict = body.get('browser_config', {})
crawler_config_dict = body.get('crawler_config', {})
# Run the crawler
result = asyncio.run(crawl(url, browser_config_dict, crawler_config_dict))
# Return successful response
return {
'statusCode': 200,
'headers': {
'Content-Type': 'application/json'
},
'body': json.dumps(result)
}
except Exception as e:
# Handle errors
import traceback
return {
'statusCode': 500,
'body': json.dumps({
'error': str(e),
'traceback': traceback.format_exc()
})
}
async def crawl(url, browser_config_dict, crawler_config_dict):
"""
Run the crawler with the provided configurations, with Lambda-specific settings
"""
# Start with user-provided config but override with Lambda-required settings
base_browser_config = BrowserConfig.load(browser_config_dict) if browser_config_dict else BrowserConfig()
# Apply Lambda-specific browser configurations
browser_config = BrowserConfig(
verbose=True,
browser_type="chromium",
headless=True,
user_agent_mode="random",
light_mode=True,
use_managed_browser=False,
extra_args=[
"--headless=new",
"--no-sandbox",
"--disable-dev-shm-usage",
"--disable-setuid-sandbox",
"--remote-allow-origins=*",
"--autoplay-policy=user-gesture-required",
"--single-process",
],
# # Carry over any other settings from user config that aren't overridden
# **{k: v for k, v in base_browser_config.model_dump().items()
# if k not in ['verbose', 'browser_type', 'headless', 'user_agent_mode',
# 'light_mode', 'use_managed_browser', 'extra_args']}
)
# Start with user-provided crawler config but ensure cache is bypassed
base_crawler_config = CrawlerRunConfig.load(crawler_config_dict) if crawler_config_dict else CrawlerRunConfig()
# Apply Lambda-specific crawler configurations
crawler_config = CrawlerRunConfig(
exclude_external_links=base_crawler_config.exclude_external_links,
remove_overlay_elements=True,
magic=True,
cache_mode=CacheMode.BYPASS,
# Carry over markdown generator and other settings
markdown_generator=base_crawler_config.markdown_generator
)
# Perform the crawl with Lambda-optimized settings
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url=url, config=crawler_config)
# Return serializable results
return result.model_dump()

View File

@@ -0,0 +1,543 @@
import os
import time
import uuid
from datetime import datetime
from typing import Dict, Any, Optional, List
import modal
from modal import Image, App, Volume, Secret, web_endpoint, function
# Configuration
APP_NAME = "crawl4ai-api"
CRAWL4AI_VERSION = "next" # Using the 'next' branch
PYTHON_VERSION = "3.10" # Compatible with playwright
DEFAULT_CREDITS = 1000
# Create a custom image with Crawl4ai and its dependencies
image = Image.debian_slim(python_version=PYTHON_VERSION).pip_install(
["fastapi[standard]", "pymongo", "pydantic"]
).run_commands(
"apt-get update",
"apt-get install -y software-properties-common",
"apt-get install -y git",
"apt-add-repository non-free",
"apt-add-repository contrib",
# Install crawl4ai from the next branch
f"pip install -U git+https://github.com/unclecode/crawl4ai.git@{CRAWL4AI_VERSION}",
"pip install -U fastapi[standard]",
"pip install -U pydantic",
# Install playwright and browsers
"crawl4ai-setup",
)
# Create persistent volume for user database
user_db = Volume.from_name("crawl4ai-users", create_if_missing=True)
# Create admin secret for secure operations
admin_secret = Secret.from_name("admin-secret", create_if_missing=True)
# Define the app
app = App(APP_NAME, image=image)
# Default configurations
DEFAULT_BROWSER_CONFIG = {
"headless": True,
"verbose": False,
}
DEFAULT_CRAWLER_CONFIG = {
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.48,
"threshold_type": "fixed"
}
}
}
}
}
}
}
# Database operations
@app.function(volumes={"/data": user_db})
def init_db() -> None:
"""Initialize database with indexes."""
from pymongo import MongoClient, ASCENDING
client = MongoClient("mongodb://localhost:27017")
db = client.crawl4ai_db
# Ensure indexes for faster lookups
db.users.create_index([("api_token", ASCENDING)], unique=True)
db.users.create_index([("email", ASCENDING)], unique=True)
# Create usage stats collection
db.usage_stats.create_index([("user_id", ASCENDING), ("timestamp", ASCENDING)])
print("Database initialized with required indexes")
@app.function(volumes={"/data": user_db})
def get_user_by_token(api_token: str) -> Optional[Dict[str, Any]]:
"""Get user by API token."""
from pymongo import MongoClient
from bson.objectid import ObjectId
client = MongoClient("mongodb://localhost:27017")
db = client.crawl4ai_db
user = db.users.find_one({"api_token": api_token})
if not user:
return None
# Convert ObjectId to string for serialization
user["_id"] = str(user["_id"])
return user
@app.function(volumes={"/data": user_db})
def create_user(email: str, name: str) -> Dict[str, Any]:
"""Create a new user with initial credits."""
from pymongo import MongoClient
from bson.objectid import ObjectId
client = MongoClient("mongodb://localhost:27017")
db = client.crawl4ai_db
# Generate API token
api_token = str(uuid.uuid4())
user = {
"email": email,
"name": name,
"api_token": api_token,
"credits": DEFAULT_CREDITS,
"created_at": datetime.utcnow(),
"updated_at": datetime.utcnow(),
"is_active": True
}
try:
result = db.users.insert_one(user)
user["_id"] = str(result.inserted_id)
return user
except Exception as e:
if "duplicate key error" in str(e):
return {"error": "User with this email already exists"}
raise
@app.function(volumes={"/data": user_db})
def update_user_credits(api_token: str, amount: int) -> Dict[str, Any]:
"""Update user credits (add or subtract)."""
from pymongo import MongoClient
client = MongoClient("mongodb://localhost:27017")
db = client.crawl4ai_db
# First get current user to check credits
user = db.users.find_one({"api_token": api_token})
if not user:
return {"success": False, "error": "User not found"}
# For deductions, ensure sufficient credits
if amount < 0 and user["credits"] + amount < 0:
return {"success": False, "error": "Insufficient credits"}
# Update credits
result = db.users.update_one(
{"api_token": api_token},
{
"$inc": {"credits": amount},
"$set": {"updated_at": datetime.utcnow()}
}
)
if result.modified_count == 1:
# Get updated user
updated_user = db.users.find_one({"api_token": api_token})
return {
"success": True,
"credits": updated_user["credits"]
}
else:
return {"success": False, "error": "Failed to update credits"}
@app.function(volumes={"/data": user_db})
def log_usage(user_id: str, url: str, success: bool, error: Optional[str] = None) -> None:
"""Log usage statistics."""
from pymongo import MongoClient
from bson.objectid import ObjectId
client = MongoClient("mongodb://localhost:27017")
db = client.crawl4ai_db
log_entry = {
"user_id": user_id,
"url": url,
"timestamp": datetime.utcnow(),
"success": success,
"error": error
}
db.usage_stats.insert_one(log_entry)
# Main crawling function
@app.function(timeout=300) # 5 minute timeout
async def crawl(
url: str,
browser_config: Optional[Dict[str, Any]] = None,
crawler_config: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
"""
Crawl a given URL using Crawl4ai.
Args:
url: The URL to crawl
browser_config: Optional browser configuration to override defaults
crawler_config: Optional crawler configuration to override defaults
Returns:
A dictionary containing the crawl results
"""
from crawl4ai import (
AsyncWebCrawler,
BrowserConfig,
CrawlerRunConfig,
CrawlResult
)
# Prepare browser config using the loader method
if browser_config is None:
browser_config = DEFAULT_BROWSER_CONFIG
browser_config_obj = BrowserConfig.load(browser_config)
# Prepare crawler config using the loader method
if crawler_config is None:
crawler_config = DEFAULT_CRAWLER_CONFIG
crawler_config_obj = CrawlerRunConfig.load(crawler_config)
# Perform the crawl
async with AsyncWebCrawler(config=browser_config_obj) as crawler:
result: CrawlResult = await crawler.arun(url=url, config=crawler_config_obj)
# Return serializable results
try:
# Try newer Pydantic v2 method
return result.model_dump()
except AttributeError:
try:
# Try older Pydantic v1 method
return result.dict()
except AttributeError:
# Fallback to manual conversion
return {
"url": result.url,
"title": result.title,
"status": result.status,
"content": str(result.content) if hasattr(result, "content") else None,
"links": [{"url": link.url, "text": link.text} for link in result.links] if hasattr(result, "links") else [],
"markdown_v2": {
"raw_markdown": result.markdown_v2.raw_markdown if hasattr(result, "markdown_v2") else None
}
}
# API endpoints
@app.function()
@web_endpoint(method="POST")
def crawl_endpoint(data: Dict[str, Any]) -> Dict[str, Any]:
"""
Web endpoint that accepts POST requests with JSON data containing:
- api_token: User's API token
- url: The URL to crawl
- browser_config: Optional browser configuration
- crawler_config: Optional crawler configuration
Returns the crawl results and remaining credits.
"""
# Extract and validate API token
api_token = data.get("api_token")
if not api_token:
return {
"success": False,
"error": "API token is required",
"status_code": 401
}
# Verify user
user = get_user_by_token.remote(api_token)
if not user:
return {
"success": False,
"error": "Invalid API token",
"status_code": 401
}
if not user.get("is_active", False):
return {
"success": False,
"error": "Account is inactive",
"status_code": 403
}
# Validate URL
url = data.get("url")
if not url:
return {
"success": False,
"error": "URL is required",
"status_code": 400
}
# Check credits
if user.get("credits", 0) <= 0:
return {
"success": False,
"error": "Insufficient credits",
"status_code": 403
}
# Deduct credit first (1 credit per call)
credit_result = update_user_credits.remote(api_token, -1)
if not credit_result.get("success", False):
return {
"success": False,
"error": credit_result.get("error", "Failed to process credits"),
"status_code": 500
}
# Extract configs
browser_config = data.get("browser_config")
crawler_config = data.get("crawler_config")
# Perform crawl
try:
start_time = time.time()
result = crawl.remote(url, browser_config, crawler_config)
execution_time = time.time() - start_time
# Log successful usage
log_usage.spawn(user["_id"], url, True)
return {
"success": True,
"data": result,
"credits_remaining": credit_result.get("credits"),
"execution_time_seconds": round(execution_time, 2),
"status_code": 200
}
except Exception as e:
# Log failed usage
log_usage.spawn(user["_id"], url, False, str(e))
# Return error
return {
"success": False,
"error": f"Crawling error: {str(e)}",
"credits_remaining": credit_result.get("credits"),
"status_code": 500
}
# Admin endpoints
@app.function(secrets=[admin_secret])
@web_endpoint(method="POST")
def admin_create_user(data: Dict[str, Any]) -> Dict[str, Any]:
"""Admin endpoint to create new users."""
# Validate admin token
admin_token = data.get("admin_token")
if admin_token != os.environ.get("ADMIN_TOKEN"):
return {
"success": False,
"error": "Invalid admin token",
"status_code": 401
}
# Validate input
email = data.get("email")
name = data.get("name")
if not email or not name:
return {
"success": False,
"error": "Email and name are required",
"status_code": 400
}
# Create user
user = create_user.remote(email, name)
if "error" in user:
return {
"success": False,
"error": user["error"],
"status_code": 400
}
return {
"success": True,
"data": {
"user_id": user["_id"],
"email": user["email"],
"name": user["name"],
"api_token": user["api_token"],
"credits": user["credits"],
"created_at": user["created_at"].isoformat() if isinstance(user["created_at"], datetime) else user["created_at"]
},
"status_code": 201
}
@app.function(secrets=[admin_secret])
@web_endpoint(method="POST")
def admin_update_credits(data: Dict[str, Any]) -> Dict[str, Any]:
"""Admin endpoint to update user credits."""
# Validate admin token
admin_token = data.get("admin_token")
if admin_token != os.environ.get("ADMIN_TOKEN"):
return {
"success": False,
"error": "Invalid admin token",
"status_code": 401
}
# Validate input
api_token = data.get("api_token")
amount = data.get("amount")
if not api_token:
return {
"success": False,
"error": "API token is required",
"status_code": 400
}
if not isinstance(amount, int):
return {
"success": False,
"error": "Amount must be an integer",
"status_code": 400
}
# Update credits
result = update_user_credits.remote(api_token, amount)
if not result.get("success", False):
return {
"success": False,
"error": result.get("error", "Failed to update credits"),
"status_code": 400
}
return {
"success": True,
"data": {
"credits": result["credits"]
},
"status_code": 200
}
@app.function(secrets=[admin_secret])
@web_endpoint(method="GET")
def admin_get_users(admin_token: str) -> Dict[str, Any]:
"""Admin endpoint to list all users."""
# Validate admin token
if admin_token != os.environ.get("ADMIN_TOKEN"):
return {
"success": False,
"error": "Invalid admin token",
"status_code": 401
}
users = get_all_users.remote()
return {
"success": True,
"data": users,
"status_code": 200
}
@app.function(volumes={"/data": user_db})
def get_all_users() -> List[Dict[str, Any]]:
"""Get all users (for admin)."""
from pymongo import MongoClient
client = MongoClient("mongodb://localhost:27017")
db = client.crawl4ai_db
users = []
for user in db.users.find():
# Convert ObjectId to string
user["_id"] = str(user["_id"])
# Convert datetime to ISO format
for field in ["created_at", "updated_at"]:
if field in user and isinstance(user[field], datetime):
user[field] = user[field].isoformat()
users.append(user)
return users
# Public endpoints
@app.function()
@web_endpoint(method="GET")
def health_check() -> Dict[str, Any]:
"""Health check endpoint."""
return {
"status": "online",
"service": APP_NAME,
"version": CRAWL4AI_VERSION,
"timestamp": datetime.utcnow().isoformat()
}
@app.function()
@web_endpoint(method="GET")
def check_credits(api_token: str) -> Dict[str, Any]:
"""Check user credits."""
if not api_token:
return {
"success": False,
"error": "API token is required",
"status_code": 401
}
user = get_user_by_token.remote(api_token)
if not user:
return {
"success": False,
"error": "Invalid API token",
"status_code": 401
}
return {
"success": True,
"data": {
"credits": user["credits"],
"email": user["email"],
"name": user["name"]
},
"status_code": 200
}
# Local entrypoint for testing
@app.local_entrypoint()
def main(url: str = "https://www.modal.com"):
"""Command line entrypoint for local testing."""
print("Initializing database...")
init_db.remote()
print(f"Testing crawl on URL: {url}")
result = crawl.remote(url)
# Print sample of result
print("\nCrawl Result Sample:")
if "title" in result:
print(f"Title: {result['title']}")
if "status" in result:
print(f"Status: {result['status']}")
if "links" in result:
print(f"Links found: {len(result['links'])}")
if "markdown_v2" in result and result["markdown_v2"] and "raw_markdown" in result["markdown_v2"]:
print("\nMarkdown Preview (first 300 chars):")
print(result["markdown_v2"]["raw_markdown"][:300] + "...")

127
deploy/modal/entry.py Normal file
View File

@@ -0,0 +1,127 @@
import modal
from typing import Optional, Dict, Any
# Create a custom image with Crawl4ai and its dependencies
# "pip install crawl4ai",
image = modal.Image.debian_slim(python_version="3.10").pip_install(["fastapi[standard]"]).run_commands(
"apt-get update",
"apt-get install -y software-properties-common",
"apt-get install -y git",
"apt-add-repository non-free",
"apt-add-repository contrib",
"pip install -U git+https://github.com/unclecode/crawl4ai.git@next",
"pip install -U fastapi[standard]",
"pip install -U pydantic",
"crawl4ai-setup", # This installs playwright and downloads chromium
    # Print the installed FastAPI version as a sanity check
"python -m fastapi --version",
)
# Define the app
app = modal.App("crawl4ai", image=image)
# Define default configurations
DEFAULT_BROWSER_CONFIG = {
"headless": True,
"verbose": False,
}
DEFAULT_CRAWLER_CONFIG = {
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.48,
"threshold_type": "fixed"
}
}
}
}
}
}
}
@app.function(timeout=300) # 5 minute timeout
async def crawl(
url: str,
browser_config: Optional[Dict[str, Any]] = None,
crawler_config: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
"""
Crawl a given URL using Crawl4ai.
Args:
url: The URL to crawl
browser_config: Optional browser configuration to override defaults
crawler_config: Optional crawler configuration to override defaults
Returns:
A dictionary containing the crawl results
"""
from crawl4ai import (
AsyncWebCrawler,
BrowserConfig,
CrawlerRunConfig,
CrawlResult
)
# Prepare browser config using the loader method
if browser_config is None:
browser_config = DEFAULT_BROWSER_CONFIG
browser_config_obj = BrowserConfig.load(browser_config)
# Prepare crawler config using the loader method
if crawler_config is None:
crawler_config = DEFAULT_CRAWLER_CONFIG
crawler_config_obj = CrawlerRunConfig.load(crawler_config)
# Perform the crawl
async with AsyncWebCrawler(config=browser_config_obj) as crawler:
result: CrawlResult = await crawler.arun(url=url, config=crawler_config_obj)
# Return serializable results
try:
# Try newer Pydantic v2 method
return result.model_dump()
except AttributeError:
try:
            # Fall back to the instance __dict__ (Pydantic v1 would use .dict())
return result.__dict__
except AttributeError:
# Fallback to returning the raw result
return result
@app.function()
@modal.web_endpoint(method="POST")
def crawl_endpoint(data: Dict[str, Any]) -> Dict[str, Any]:
"""
Web endpoint that accepts POST requests with JSON data containing:
- url: The URL to crawl
- browser_config: Optional browser configuration
- crawler_config: Optional crawler configuration
Returns the crawl results.
"""
url = data.get("url")
if not url:
return {"error": "URL is required"}
browser_config = data.get("browser_config")
crawler_config = data.get("crawler_config")
return crawl.remote(url, browser_config, crawler_config)
@app.local_entrypoint()
def main(url: str = "https://www.modal.com"):
"""
Command line entrypoint for local testing.
"""
result = crawl.remote(url)
print(result)

453
deploy/modal/guide.md Normal file
View File

@@ -0,0 +1,453 @@
# Deploying Crawl4ai with Modal: A Comprehensive Tutorial
Hey there! UncleCode here. I'm excited to show you how to deploy Crawl4ai using Modal - a fantastic serverless platform that makes deployment super simple and scalable.
In this tutorial, I'll walk you through deploying your own Crawl4ai instance on Modal's infrastructure. This will give you a powerful, scalable web crawling solution without having to worry about infrastructure management.
## What is Modal?
Modal is a serverless platform that allows you to run Python functions in the cloud without managing servers. It's perfect for deploying Crawl4ai because:
1. It handles all the infrastructure for you
2. It scales automatically based on demand
3. It makes deployment incredibly simple
## Prerequisites
Before we get started, you'll need:
- A Modal account (sign up at [modal.com](https://modal.com))
- Python 3.10 or later installed on your local machine
- Basic familiarity with Python and command-line operations
## Step 1: Setting Up Your Modal Account
First, sign up for a Modal account at [modal.com](https://modal.com) if you haven't already. Modal offers a generous free tier that's perfect for getting started.
After signing up, install the Modal CLI and authenticate:
```bash
pip install modal
modal token new
```
This will open a browser window where you can authenticate and generate a token for the CLI.
## Step 2: Creating Your Crawl4ai Deployment
Now, let's create a Python file called `crawl4ai_modal.py` with our deployment code:
```python
import modal
from typing import Optional, Dict, Any
# Create a custom image with Crawl4ai and its dependencies
image = modal.Image.debian_slim(python_version="3.10").pip_install(
["fastapi[standard]"]
).run_commands(
"apt-get update",
"apt-get install -y software-properties-common",
"apt-get install -y git",
"apt-add-repository non-free",
"apt-add-repository contrib",
"pip install -U crawl4ai",
"pip install -U fastapi[standard]",
"pip install -U pydantic",
"crawl4ai-setup", # This installs playwright and downloads chromium
)
# Define the app
app = modal.App("crawl4ai", image=image)
# Define default configurations
DEFAULT_BROWSER_CONFIG = {
"headless": True,
"verbose": False,
}
DEFAULT_CRAWLER_CONFIG = {
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.48,
"threshold_type": "fixed"
}
}
}
}
}
}
}
@app.function(timeout=300) # 5 minute timeout
async def crawl(
url: str,
browser_config: Optional[Dict[str, Any]] = None,
crawler_config: Optional[Dict[str, Any]] = None,
) -> Dict[str, Any]:
"""
Crawl a given URL using Crawl4ai.
Args:
url: The URL to crawl
browser_config: Optional browser configuration to override defaults
crawler_config: Optional crawler configuration to override defaults
Returns:
A dictionary containing the crawl results
"""
from crawl4ai import (
AsyncWebCrawler,
BrowserConfig,
CrawlerRunConfig,
CrawlResult
)
# Prepare browser config using the loader method
if browser_config is None:
browser_config = DEFAULT_BROWSER_CONFIG
browser_config_obj = BrowserConfig.load(browser_config)
# Prepare crawler config using the loader method
if crawler_config is None:
crawler_config = DEFAULT_CRAWLER_CONFIG
crawler_config_obj = CrawlerRunConfig.load(crawler_config)
# Perform the crawl
async with AsyncWebCrawler(config=browser_config_obj) as crawler:
result: CrawlResult = await crawler.arun(url=url, config=crawler_config_obj)
# Return serializable results
try:
# Try newer Pydantic v2 method
return result.model_dump()
except AttributeError:
try:
# Try older Pydantic v1 method
return result.dict()
except AttributeError:
# Fallback to manual conversion
return {
"url": result.url,
"title": result.title,
"status": result.status,
"content": str(result.content) if hasattr(result, "content") else None,
"links": [{"url": link.url, "text": link.text} for link in result.links] if hasattr(result, "links") else [],
"markdown_v2": {
"raw_markdown": result.markdown_v2.raw_markdown if hasattr(result, "markdown_v2") else None
}
}
@app.function()
@modal.web_endpoint(method="POST")
def crawl_endpoint(data: Dict[str, Any]) -> Dict[str, Any]:
"""
Web endpoint that accepts POST requests with JSON data containing:
- url: The URL to crawl
- browser_config: Optional browser configuration
- crawler_config: Optional crawler configuration
Returns the crawl results.
"""
url = data.get("url")
if not url:
return {"error": "URL is required"}
browser_config = data.get("browser_config")
crawler_config = data.get("crawler_config")
return crawl.remote(url, browser_config, crawler_config)
@app.local_entrypoint()
def main(url: str = "https://www.modal.com"):
"""
Command line entrypoint for local testing.
"""
result = crawl.remote(url)
print(result)
```
## Step 3: Understanding the Code Components
Let's break down what's happening in this code:
### 1. Image Definition
```python
image = modal.Image.debian_slim(python_version="3.10").pip_install(
["fastapi[standard]"]
).run_commands(
"apt-get update",
"apt-get install -y software-properties-common",
"apt-get install -y git",
"apt-add-repository non-free",
"apt-add-repository contrib",
"pip install -U git+https://github.com/unclecode/crawl4ai.git@next",
"pip install -U fastapi[standard]",
"pip install -U pydantic",
"crawl4ai-setup", # This installs playwright and downloads chromium
)
```
This section defines the container image that Modal will use to run your code. It:
- Starts with a Debian Slim base image with Python 3.10
- Installs FastAPI
- Updates the system packages
- Installs Git and other dependencies
- Installs Crawl4ai from the GitHub repository
- Runs the Crawl4ai setup to install Playwright and download Chromium
### 2. Modal App Definition
```python
app = modal.App("crawl4ai", image=image)
```
This creates a Modal application named "crawl4ai" that uses the image we defined above.
### 3. Default Configurations
```python
DEFAULT_BROWSER_CONFIG = {
"headless": True,
"verbose": False,
}
DEFAULT_CRAWLER_CONFIG = {
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.48,
"threshold_type": "fixed"
}
}
}
}
}
}
}
```
These define the default configurations for the browser and crawler. You can customize these settings based on your specific needs.
### 4. The Crawl Function
```python
@app.function(timeout=300)
async def crawl(url, browser_config, crawler_config):
# Function implementation
```
This is the main function that performs the crawling. It:
- Takes a URL and optional configurations
- Sets up the browser and crawler with those configurations
- Performs the crawl
- Returns the results in a serializable format
The `@app.function(timeout=300)` decorator tells Modal to run this function in the cloud with a 5-minute timeout.
### 5. The Web Endpoint
```python
@app.function()
@modal.web_endpoint(method="POST")
def crawl_endpoint(data: Dict[str, Any]) -> Dict[str, Any]:
# Function implementation
```
This creates a web endpoint that accepts POST requests. It:
- Extracts the URL and configurations from the request
- Calls the crawl function with those parameters
- Returns the results
### 6. Local Entrypoint
```python
@app.local_entrypoint()
def main(url: str = "https://www.modal.com"):
# Function implementation
```
This provides a way to test the application from the command line.
## Step 4: Testing Locally
Before deploying, let's test our application locally:
```bash
modal run crawl4ai_modal.py --url "https://example.com"
```
This command will:
1. Upload your code to Modal
2. Create the necessary containers
3. Run the `main` function with the specified URL
4. Return the results
Modal will handle all the infrastructure setup for you. You should see the crawling results printed to your console.
## Step 5: Deploying Your Application
Once you're satisfied with the local testing, it's time to deploy:
```bash
modal deploy crawl4ai_modal.py
```
This will deploy your application to Modal's cloud. The deployment process will output URLs for your web endpoints.
You should see output similar to:
```
✓ Deployed crawl4ai.
URLs:
crawl_endpoint => https://your-username--crawl-endpoint.modal.run
```
Save this URL - you'll need it to make requests to your deployment.
## Step 6: Using Your Deployment
Now that your application is deployed, you can use it by sending POST requests to the endpoint URL:
```bash
curl -X POST https://your-username--crawl-endpoint.modal.run \
-H "Content-Type: application/json" \
-d '{"url": "https://example.com"}'
```
Or in Python:
```python
import requests
response = requests.post(
"https://your-username--crawl-endpoint.modal.run",
json={"url": "https://example.com"}
)
result = response.json()
print(result)
```
You can also customize the browser and crawler configurations:
```python
requests.post(
"https://your-username--crawl-endpoint.modal.run",
json={
"url": "https://example.com",
"browser_config": {
"headless": False,
"verbose": True
},
"crawler_config": {
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"markdown_generator": {
"type": "DefaultMarkdownGenerator",
"params": {
"content_filter": {
"type": "PruningContentFilter",
"params": {
"threshold": 0.6, # Adjusted threshold
"threshold_type": "fixed"
}
}
}
}
}
}
}
}
)
```
## Step 7: Calling Your Deployment from Another Python Script
You can also call your deployed function directly from another Python script:
```python
import modal
# Get a reference to the deployed function
crawl_function = modal.Function.from_name("crawl4ai", "crawl")
# Call the function
result = crawl_function.remote("https://example.com")
print(result)
```
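If the function accepts configuration arguments, you can pass them positionally the same way (hypothetical values, assuming the signature sketched in Step 3):
```python
result = crawl_function.remote(
    "https://example.com",
    {"headless": True},  # browser_config
    {"verbose": True},   # crawler_config
)
```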
## Understanding Modal's Execution Flow
To understand how Modal works, it's important to know:
1. **Local vs. Remote Execution**: When you call a function with `.remote()`, it runs in Modal's cloud, not on your local machine.
2. **Container Lifecycle**: Modal creates containers on-demand and destroys them when they're not needed.
3. **Caching**: Modal caches your container images to speed up subsequent runs.
4. **Serverless Scaling**: Modal automatically scales your application based on demand.
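The first point is easy to see in code (an illustrative snippet; `.local()` is Modal's way of running a function body in the current process instead of the cloud):
```python
# Same function, two execution targets
remote_result = crawl.remote("https://example.com", None, None)  # Modal's cloud
local_result = crawl.local("https://example.com", None, None)    # your machine
```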
## Customizing Your Deployment
You can customize your deployment in several ways:
### Changing the Crawl4ai Version
To use a different version of Crawl4ai, update the installation command in the image definition:
```python
"pip install -U git+https://github.com/unclecode/crawl4ai.git@main", # Use main branch
```
### Adjusting Resource Limits
You can change the resources allocated to your functions:
```python
@app.function(timeout=600, cpu=2, memory=4096) # 10 minute timeout, 2 CPUs, 4GB RAM
async def crawl(...):
# Function implementation
```
### Keeping Containers Warm
To reduce cold start times, you can keep containers warm:
```python
@app.function(keep_warm=1) # Keep 1 container warm
async def crawl(...):
# Function implementation
```
## Conclusion
That's it! You've successfully deployed Crawl4ai on Modal. You now have a scalable web crawling solution that can handle as many requests as you need without requiring any infrastructure management.
The beauty of this setup is its simplicity - Modal handles all the hard parts, letting you focus on using Crawl4ai to extract the data you need.
Feel free to reach out if you have any questions or need help with your deployment!
Happy crawling!
- UncleCode
## Additional Resources
- [Modal Documentation](https://modal.com/docs)
- [Crawl4ai GitHub Repository](https://github.com/unclecode/crawl4ai)
- [Crawl4ai Documentation](https://docs.crawl4ai.com)

deploy/modal/test_modal.py Normal file

@@ -0,0 +1,317 @@
#!/usr/bin/env python3
"""
Crawl4ai API Testing Script
This script tests all endpoints of the Crawl4ai API service and demonstrates their usage.
"""
import argparse
import json
import sys
import time
from typing import Dict, Any, List, Optional
import requests
# Colors for terminal output
class Colors:
HEADER = '\033[95m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
ENDC = '\033[0m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
def print_header(text: str) -> None:
"""Print a formatted header."""
print(f"\n{Colors.HEADER}{Colors.BOLD}{'=' * 80}{Colors.ENDC}")
print(f"{Colors.HEADER}{Colors.BOLD}{text.center(80)}{Colors.ENDC}")
print(f"{Colors.HEADER}{Colors.BOLD}{'=' * 80}{Colors.ENDC}\n")
def print_step(text: str) -> None:
"""Print a formatted step description."""
print(f"{Colors.BLUE}{Colors.BOLD}>> {text}{Colors.ENDC}")
def print_success(text: str) -> None:
"""Print a success message."""
print(f"{Colors.GREEN}{text}{Colors.ENDC}")
def print_warning(text: str) -> None:
"""Print a warning message."""
print(f"{Colors.YELLOW}{text}{Colors.ENDC}")
def print_error(text: str) -> None:
"""Print an error message."""
print(f"{Colors.RED}{text}{Colors.ENDC}")
def print_json(data: Dict[str, Any]) -> None:
"""Pretty print JSON data."""
print(json.dumps(data, indent=2))
def make_request(method: str, url: str, params: Optional[Dict[str, Any]] = None,
json_data: Optional[Dict[str, Any]] = None,
expected_status: int = 200) -> Dict[str, Any]:
"""Make an HTTP request and handle errors."""
print_step(f"Making {method.upper()} request to {url}")
if params:
print(f" Parameters: {params}")
if json_data:
print(f" JSON Data: {json_data}")
try:
response = requests.request(
method=method,
url=url,
params=params,
json=json_data,
timeout=300 # 5 minute timeout for crawling operations
)
status_code = response.status_code
print(f" Status Code: {status_code}")
try:
data = response.json()
print(" Response:")
print_json(data)
if status_code != expected_status:
print_error(f"Expected status code {expected_status}, got {status_code}")
return data
print_success("Request successful")
return data
except ValueError:
print_error("Response is not valid JSON")
print(response.text)
return {"error": "Invalid JSON response"}
except requests.RequestException as e:
print_error(f"Request failed: {str(e)}")
return {"error": str(e)}
def test_health_check(base_url: str) -> bool:
"""Test the health check endpoint."""
print_header("Testing Health Check Endpoint")
response = make_request("GET", f"{base_url}/health_check")
if "status" in response and response["status"] == "online":
print_success("Health check passed")
return True
else:
print_error("Health check failed")
return False
def test_admin_create_user(base_url: str, admin_token: str, email: str, name: str) -> Optional[str]:
"""Test creating a new user."""
print_header("Testing Admin User Creation")
response = make_request(
"POST",
f"{base_url}/admin_create_user",
json_data={
"admin_token": admin_token,
"email": email,
"name": name
},
expected_status=201
)
if response.get("success") and "data" in response:
api_token = response["data"].get("api_token")
if api_token:
print_success(f"User created successfully with API token: {api_token}")
return api_token
print_error("Failed to create user")
return None
def test_check_credits(base_url: str, api_token: str) -> Optional[int]:
"""Test checking user credits."""
print_header("Testing Check Credits Endpoint")
response = make_request(
"GET",
f"{base_url}/check_credits",
params={"api_token": api_token}
)
if response.get("success") and "data" in response:
credits = response["data"].get("credits")
if credits is not None:
print_success(f"User has {credits} credits")
return credits
print_error("Failed to check credits")
return None
def test_crawl_endpoint(base_url: str, api_token: str, url: str) -> bool:
"""Test the crawl endpoint."""
print_header("Testing Crawl Endpoint")
response = make_request(
"POST",
f"{base_url}/crawl_endpoint",
json_data={
"api_token": api_token,
"url": url
}
)
if response.get("success") and "data" in response:
print_success("Crawl completed successfully")
# Display some crawl result data
data = response["data"]
if "title" in data:
print(f"Page Title: {data['title']}")
if "status" in data:
print(f"Status: {data['status']}")
if "links" in data:
print(f"Links found: {len(data['links'])}")
if "markdown_v2" in data and data["markdown_v2"] and "raw_markdown" in data["markdown_v2"]:
print("Markdown Preview (first 200 chars):")
print(data["markdown_v2"]["raw_markdown"][:200] + "...")
credits_remaining = response.get("credits_remaining")
if credits_remaining is not None:
print(f"Credits remaining: {credits_remaining}")
return True
print_error("Crawl failed")
return False
def test_admin_update_credits(base_url: str, admin_token: str, api_token: str, amount: int) -> bool:
"""Test updating user credits."""
print_header("Testing Admin Update Credits")
response = make_request(
"POST",
f"{base_url}/admin_update_credits",
json_data={
"admin_token": admin_token,
"api_token": api_token,
"amount": amount
}
)
if response.get("success") and "data" in response:
print_success(f"Credits updated successfully, new balance: {response['data'].get('credits')}")
return True
print_error("Failed to update credits")
return False
def test_admin_get_users(base_url: str, admin_token: str) -> List[Dict[str, Any]]:
"""Test getting all users."""
print_header("Testing Admin Get All Users")
response = make_request(
"GET",
f"{base_url}/admin_get_users",
params={"admin_token": admin_token}
)
if response.get("success") and "data" in response:
users = response["data"]
print_success(f"Retrieved {len(users)} users")
return users
print_error("Failed to get users")
return []
def run_full_test(base_url: str, admin_token: str) -> None:
"""Run all tests in sequence."""
# Remove trailing slash if present
base_url = base_url.rstrip('/')
# Test 1: Health Check
if not test_health_check(base_url):
print_error("Health check failed, aborting tests")
sys.exit(1)
# Test 2: Create a test user
email = f"test-user-{int(time.time())}@example.com"
name = "Test User"
api_token = test_admin_create_user(base_url, admin_token, email, name)
if not api_token:
print_error("User creation failed, aborting tests")
sys.exit(1)
# Test 3: Check initial credits
initial_credits = test_check_credits(base_url, api_token)
if initial_credits is None:
print_error("Credit check failed, aborting tests")
sys.exit(1)
# Test 4: Perform a crawl
test_url = "https://news.ycombinator.com"
crawl_success = test_crawl_endpoint(base_url, api_token, test_url)
if not crawl_success:
print_warning("Crawl test failed, but continuing with other tests")
# Test 5: Check credits after crawl
post_crawl_credits = test_check_credits(base_url, api_token)
if post_crawl_credits is not None and initial_credits is not None:
if post_crawl_credits == initial_credits - 1:
print_success("Credit deduction verified")
else:
print_warning(f"Unexpected credit change: {initial_credits} -> {post_crawl_credits}")
# Test 6: Add credits
add_credits_amount = 50
if test_admin_update_credits(base_url, admin_token, api_token, add_credits_amount):
print_success(f"Added {add_credits_amount} credits")
# Test 7: Check credits after addition
post_addition_credits = test_check_credits(base_url, api_token)
if post_addition_credits is not None and post_crawl_credits is not None:
if post_addition_credits == post_crawl_credits + add_credits_amount:
print_success("Credit addition verified")
else:
print_warning(f"Unexpected credit change: {post_crawl_credits} -> {post_addition_credits}")
# Test 8: Get all users
users = test_admin_get_users(base_url, admin_token)
if users:
# Check if our test user is in the list
test_user = next((user for user in users if user.get("email") == email), None)
if test_user:
print_success("Test user found in users list")
else:
print_warning("Test user not found in users list")
# Final report
print_header("Test Summary")
print_success("All endpoints tested successfully")
print(f"Test user created with email: {email}")
print(f"API token: {api_token}")
print(f"Final credit balance: {post_addition_credits}")
def main():
parser = argparse.ArgumentParser(description="Test Crawl4ai API endpoints")
parser.add_argument("--base-url", required=True, help="Base URL of the Crawl4ai API (e.g., https://username--crawl4ai-api.modal.run)")
parser.add_argument("--admin-token", required=True, help="Admin token for authentication")
args = parser.parse_args()
print_header("Crawl4ai API Test Script")
print(f"Testing API at: {args.base_url}")
run_full_test(args.base_url, args.admin_token)
if __name__ == "__main__":
main()


@@ -1,30 +1,15 @@
# docker-compose.yml
# Base configuration anchor for reusability
# Base configuration (not a service, just a reusable config block)
x-base-config: &base-config
ports:
# Map host port 11235 to container port 11235 (where Gunicorn will listen)
- "11235:11235"
# - "8080:8080" # Uncomment if needed
# Load API keys primarily from .llm.env file
# Create .llm.env in the root directory .llm.env.example
env_file:
- .llm.env
# Define environment variables, allowing overrides from host environment
# Syntax ${VAR:-} uses host env var 'VAR' if set, otherwise uses value from .llm.env
- "8000:8000"
- "9222:9222"
- "8080:8080"
environment:
- CRAWL4AI_API_TOKEN=${CRAWL4AI_API_TOKEN:-}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- DEEPSEEK_API_KEY=${DEEPSEEK_API_KEY:-}
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- GROQ_API_KEY=${GROQ_API_KEY:-}
- TOGETHER_API_KEY=${TOGETHER_API_KEY:-}
- MISTRAL_API_KEY=${MISTRAL_API_KEY:-}
- GEMINI_API_TOKEN=${GEMINI_API_TOKEN:-}
- CLAUDE_API_KEY=${CLAUDE_API_KEY:-}
volumes:
# Mount /dev/shm for Chromium/Playwright performance
- /dev/shm:/dev/shm
deploy:
resources:
@@ -34,47 +19,47 @@ x-base-config: &base-config
memory: 1G
restart: unless-stopped
healthcheck:
# IMPORTANT: Ensure Gunicorn binds to 11235 in supervisord.conf
test: ["CMD", "curl", "-f", "http://localhost:11235/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s # Give the server time to start
# Run the container as the non-root user defined in the Dockerfile
user: "appuser"
start_period: 40s
services:
# --- Local Build Services ---
crawl4ai-local-amd64:
# Local build services for different platforms
crawl4ai-amd64:
build:
context: . # Build context is the root directory
dockerfile: Dockerfile # Dockerfile is in the root directory
context: .
dockerfile: Dockerfile
args:
INSTALL_TYPE: ${INSTALL_TYPE:-default}
ENABLE_GPU: ${ENABLE_GPU:-false}
# PYTHON_VERSION arg is omitted as it's fixed by 'FROM python:3.10-slim' in Dockerfile
platform: linux/amd64
PYTHON_VERSION: "3.10"
INSTALL_TYPE: ${INSTALL_TYPE:-basic}
ENABLE_GPU: false
platforms:
- linux/amd64
profiles: ["local-amd64"]
<<: *base-config # Inherit base configuration
<<: *base-config # included the configuration directly instead of using 'extends'
crawl4ai-local-arm64:
crawl4ai-arm64:
build:
context: . # Build context is the root directory
dockerfile: Dockerfile # Dockerfile is in the root directory
context: .
dockerfile: Dockerfile
args:
INSTALL_TYPE: ${INSTALL_TYPE:-default}
ENABLE_GPU: ${ENABLE_GPU:-false}
platform: linux/arm64
PYTHON_VERSION: "3.10"
INSTALL_TYPE: ${INSTALL_TYPE:-basic}
ENABLE_GPU: false
platforms:
- linux/arm64
profiles: ["local-arm64"]
<<: *base-config
# --- Docker Hub Image Services ---
# Hub services for different platforms and versions
crawl4ai-hub-amd64:
image: unclecode/crawl4ai:${VERSION:-latest}-amd64
image: unclecode/crawl4ai:${VERSION:-basic}-amd64
profiles: ["hub-amd64"]
<<: *base-config
crawl4ai-hub-arm64:
image: unclecode/crawl4ai:${VERSION:-latest}-arm64
image: unclecode/crawl4ai:${VERSION:-basic}-arm64
profiles: ["hub-arm64"]
<<: *base-config


@@ -1,123 +0,0 @@
# Builtin Browser in Crawl4AI
This document explains the builtin browser feature in Crawl4AI and how to use it effectively.
## What is the Builtin Browser?
The builtin browser is a persistent Chrome instance that Crawl4AI manages for you. It runs in the background and can be used by multiple crawling operations, eliminating the need to start and stop browsers for each crawl.
Benefits include:
- **Faster startup times** - The browser is already running, so your scripts start faster
- **Shared resources** - All your crawling scripts can use the same browser instance
- **Simplified management** - No need to worry about CDP URLs or browser processes
- **Persistent cookies and sessions** - Browser state persists between script runs
- **Less resource usage** - Only one browser instance for multiple scripts
## Using the Builtin Browser
### In Python Code
Using the builtin browser in your code is simple:
```python
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
# Create browser config with builtin mode
browser_config = BrowserConfig(
browser_mode="builtin", # This is the key setting!
headless=True # Can be headless or not
)
# Create the crawler
crawler = AsyncWebCrawler(config=browser_config)
# Use it - no need to explicitly start()
result = await crawler.arun("https://example.com")
```
Key points:
1. Set `browser_mode="builtin"` in your BrowserConfig
2. No need for explicit `start()` call - the crawler will automatically connect to the builtin browser
3. No need to use a context manager or call `close()` - the browser stays running
### Via CLI
The CLI provides commands to manage the builtin browser:
```bash
# Start the builtin browser
crwl browser start
# Check its status
crwl browser status
# Open a visible window to see what the browser is doing
crwl browser view --url https://example.com
# Stop it when no longer needed
crwl browser stop
# Restart with different settings
crwl browser restart --no-headless
```
When crawling via CLI, simply add the builtin browser mode:
```bash
crwl https://example.com -b "browser_mode=builtin"
```
## How It Works
1. When a crawler with `browser_mode="builtin"` is created:
- It checks if a builtin browser is already running
- If not, it automatically launches one
- It connects to the browser via CDP (Chrome DevTools Protocol)
2. The browser process continues running after your script exits
- This means it's ready for the next crawl
- You can manage it via the CLI commands
3. During installation, Crawl4AI attempts to create a builtin browser automatically
## Example
See the [builtin_browser_example.py](builtin_browser_example.py) file for a complete example.
Run it with:
```bash
python builtin_browser_example.py
```
## When to Use
The builtin browser is ideal for:
- Scripts that run frequently
- Development and testing workflows
- Applications that need to minimize startup time
- Systems where you want to manage browser instances centrally
You might not want to use it when:
- Running one-off scripts
- When you need different browser configurations for different tasks
- In environments where persistent processes are not allowed
## Troubleshooting
If you encounter issues:
1. Check the browser status:
```
crwl browser status
```
2. Try restarting it:
```
crwl browser restart
```
3. If problems persist, stop it and let Crawl4AI start a fresh one:
```
crwl browser stop
```


@@ -1,79 +0,0 @@
import asyncio
import time
from crawl4ai.async_webcrawler import AsyncWebCrawler, CacheMode
from crawl4ai.async_configs import CrawlerRunConfig
from crawl4ai.async_dispatcher import MemoryAdaptiveDispatcher, RateLimiter
VERBOSE = False
async def crawl_sequential(urls):
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, verbose=VERBOSE)
results = []
start_time = time.perf_counter()
async with AsyncWebCrawler() as crawler:
for url in urls:
result_container = await crawler.arun(url=url, config=config)
results.append(result_container[0])
total_time = time.perf_counter() - start_time
return total_time, results
async def crawl_parallel_dispatcher(urls):
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, verbose=VERBOSE)
# Dispatcher with rate limiter enabled (default behavior)
dispatcher = MemoryAdaptiveDispatcher(
rate_limiter=RateLimiter(base_delay=(1.0, 3.0), max_delay=60.0, max_retries=3),
max_session_permit=50,
)
start_time = time.perf_counter()
async with AsyncWebCrawler() as crawler:
result_container = await crawler.arun_many(urls=urls, config=config, dispatcher=dispatcher)
results = []
if isinstance(result_container, list):
results = result_container
else:
async for res in result_container:
results.append(res)
total_time = time.perf_counter() - start_time
return total_time, results
async def crawl_parallel_no_rate_limit(urls):
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, verbose=VERBOSE)
# Dispatcher with no rate limiter and a high session permit to avoid queuing
dispatcher = MemoryAdaptiveDispatcher(
rate_limiter=None,
max_session_permit=len(urls) # allow all URLs concurrently
)
start_time = time.perf_counter()
async with AsyncWebCrawler() as crawler:
result_container = await crawler.arun_many(urls=urls, config=config, dispatcher=dispatcher)
results = []
if isinstance(result_container, list):
results = result_container
else:
async for res in result_container:
results.append(res)
total_time = time.perf_counter() - start_time
return total_time, results
async def main():
urls = ["https://example.com"] * 100
print(f"Crawling {len(urls)} URLs sequentially...")
seq_time, seq_results = await crawl_sequential(urls)
print(f"Sequential crawling took: {seq_time:.2f} seconds\n")
print(f"Crawling {len(urls)} URLs in parallel using arun_many with dispatcher (with rate limit)...")
disp_time, disp_results = await crawl_parallel_dispatcher(urls)
print(f"Parallel (dispatcher with rate limiter) took: {disp_time:.2f} seconds\n")
print(f"Crawling {len(urls)} URLs in parallel using dispatcher with no rate limiter...")
no_rl_time, no_rl_results = await crawl_parallel_no_rate_limit(urls)
print(f"Parallel (dispatcher without rate limiter) took: {no_rl_time:.2f} seconds\n")
print("Crawl4ai - Crawling Comparison")
print("--------------------------------------------------------")
print(f"Sequential crawling took: {seq_time:.2f} seconds")
print(f"Parallel (dispatcher with rate limiter) took: {disp_time:.2f} seconds")
print(f"Parallel (dispatcher without rate limiter) took: {no_rl_time:.2f} seconds")
if __name__ == "__main__":
asyncio.run(main())


@@ -1,86 +0,0 @@
#!/usr/bin/env python3
"""
Builtin Browser Example
This example demonstrates how to use Crawl4AI's builtin browser feature,
which simplifies the browser management process. With builtin mode:
- No need to manually start or connect to a browser
- No need to manage CDP URLs or browser processes
- Automatically connects to an existing browser or launches one if needed
- Browser persists between script runs, reducing startup time
- No explicit cleanup or close() calls needed
The example also demonstrates "auto-starting" where you don't need to explicitly
call start() method on the crawler.
"""
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
import time
async def crawl_with_builtin_browser():
"""
Simple example of crawling with the builtin browser.
Key features:
1. browser_mode="builtin" in BrowserConfig
2. No explicit start() call needed
3. No explicit close() needed
"""
print("\n=== Crawl4AI Builtin Browser Example ===\n")
# Create a browser configuration with builtin mode
browser_config = BrowserConfig(
browser_mode="builtin", # This is the key setting!
headless=True # Can run headless for background operation
)
# Create crawler run configuration
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS, # Skip cache for this demo
screenshot=True, # Take a screenshot
verbose=True # Show verbose logging
)
# Create the crawler instance
# Note: We don't need to use "async with" context manager
crawler = AsyncWebCrawler(config=browser_config)
# Start crawling several URLs - no explicit start() needed!
# The crawler will automatically connect to the builtin browser
print("\n➡️ Crawling first URL...")
t0 = time.time()
result1 = await crawler.arun(
url="https://crawl4ai.com",
config=crawler_config
)
t1 = time.time()
print(f"✅ First URL crawled in {t1-t0:.2f} seconds")
print(f" Got {len(result1.markdown.raw_markdown)} characters of content")
print(f" Title: {result1.metadata.get('title', 'No title')}")
# Try another URL - the browser is already running, so this should be faster
print("\n➡️ Crawling second URL...")
t0 = time.time()
result2 = await crawler.arun(
url="https://example.com",
config=crawler_config
)
t1 = time.time()
print(f"✅ Second URL crawled in {t1-t0:.2f} seconds")
print(f" Got {len(result2.markdown.raw_markdown)} characters of content")
print(f" Title: {result2.metadata.get('title', 'No title')}")
# The builtin browser continues running in the background
# No need to explicitly close it
print("\n🔄 The builtin browser remains running for future use")
print(" You can use 'crwl browser status' to check its status")
print(" or 'crwl browser stop' to stop it when completely done")
async def main():
"""Run the example"""
await crawl_with_builtin_browser()
if __name__ == "__main__":
asyncio.run(main())


@@ -1,209 +0,0 @@
"""
CrawlerMonitor Example
This example demonstrates how to use the CrawlerMonitor component
to visualize and track web crawler operations in real-time.
"""
import time
import uuid
import random
import threading
from crawl4ai.components.crawler_monitor import CrawlerMonitor
from crawl4ai.models import CrawlStatus
def simulate_webcrawler_operations(monitor, num_tasks=20):
"""
Simulates a web crawler's operations with multiple tasks and different states.
Args:
monitor: The CrawlerMonitor instance
num_tasks: Number of tasks to simulate
"""
print(f"Starting simulation with {num_tasks} tasks...")
# Create and register all tasks first
task_ids = []
for i in range(num_tasks):
task_id = str(uuid.uuid4())
url = f"https://example.com/page{i}"
monitor.add_task(task_id, url)
task_ids.append((task_id, url))
# Small delay between task creation
time.sleep(0.2)
# Process tasks with a variety of different behaviors
threads = []
for i, (task_id, url) in enumerate(task_ids):
# Create a thread for each task
thread = threading.Thread(
target=process_task,
args=(monitor, task_id, url, i)
)
thread.daemon = True
threads.append(thread)
# Start threads in batches to simulate concurrent processing
batch_size = 4 # Process 4 tasks at a time
for i in range(0, len(threads), batch_size):
batch = threads[i:i+batch_size]
for thread in batch:
thread.start()
time.sleep(0.5) # Stagger thread start times
# Wait a bit before starting next batch
time.sleep(random.uniform(1.0, 3.0))
# Update queue statistics
update_queue_stats(monitor)
# Simulate memory pressure changes
active_threads = [t for t in threads if t.is_alive()]
if len(active_threads) > 8:
monitor.update_memory_status("CRITICAL")
elif len(active_threads) > 4:
monitor.update_memory_status("PRESSURE")
else:
monitor.update_memory_status("NORMAL")
# Wait for all threads to complete
for thread in threads:
thread.join()
# Final updates
update_queue_stats(monitor)
monitor.update_memory_status("NORMAL")
print("Simulation completed!")
def process_task(monitor, task_id, url, index):
"""Simulate processing of a single task."""
# Tasks start in queued state (already added)
# Simulate waiting in queue
wait_time = random.uniform(0.5, 3.0)
time.sleep(wait_time)
# Start processing - move to IN_PROGRESS
monitor.update_task(
task_id=task_id,
status=CrawlStatus.IN_PROGRESS,
start_time=time.time(),
wait_time=wait_time
)
# Simulate task processing with memory usage changes
total_process_time = random.uniform(2.0, 10.0)
step_time = total_process_time / 5 # Update in 5 steps
for step in range(5):
# Simulate increasing then decreasing memory usage
if step < 3: # First 3 steps - increasing
memory_usage = random.uniform(5.0, 20.0) * (step + 1)
else: # Last 2 steps - decreasing
memory_usage = random.uniform(5.0, 20.0) * (5 - step)
# Update peak memory if this is higher
peak = max(memory_usage, monitor.get_task_stats(task_id).get("peak_memory", 0))
monitor.update_task(
task_id=task_id,
memory_usage=memory_usage,
peak_memory=peak
)
time.sleep(step_time)
# Determine final state - 80% success, 20% failure
if index % 5 == 0: # Every 5th task fails
monitor.update_task(
task_id=task_id,
status=CrawlStatus.FAILED,
end_time=time.time(),
memory_usage=0.0,
error_message="Connection timeout"
)
else:
monitor.update_task(
task_id=task_id,
status=CrawlStatus.COMPLETED,
end_time=time.time(),
memory_usage=0.0
)
def update_queue_stats(monitor):
"""Update queue statistics based on current tasks."""
task_stats = monitor.get_all_task_stats()
# Count queued tasks
queued_tasks = [
stats for stats in task_stats.values()
if stats["status"] == CrawlStatus.QUEUED.name
]
total_queued = len(queued_tasks)
if total_queued > 0:
current_time = time.time()
# Calculate wait times
wait_times = [
current_time - stats.get("enqueue_time", current_time)
for stats in queued_tasks
]
highest_wait_time = max(wait_times) if wait_times else 0.0
avg_wait_time = sum(wait_times) / len(wait_times) if wait_times else 0.0
else:
highest_wait_time = 0.0
avg_wait_time = 0.0
# Update monitor
monitor.update_queue_statistics(
total_queued=total_queued,
highest_wait_time=highest_wait_time,
avg_wait_time=avg_wait_time
)
def main():
# Initialize the monitor
monitor = CrawlerMonitor(
urls_total=20, # Total URLs to process
refresh_rate=0.5, # Update UI twice per second
enable_ui=True, # Enable terminal UI
max_width=120 # Set maximum width to 120 characters
)
# Start the monitor
monitor.start()
try:
# Run simulation
simulate_webcrawler_operations(monitor)
# Keep monitor running a bit to see final state
print("Waiting to view final state...")
time.sleep(5)
except KeyboardInterrupt:
print("\nExample interrupted by user")
finally:
# Stop the monitor
monitor.stop()
print("Example completed!")
# Print some statistics
summary = monitor.get_summary()
print("\nCrawler Statistics Summary:")
print(f"Total URLs: {summary['urls_total']}")
print(f"Completed: {summary['urls_completed']}")
print(f"Completion percentage: {summary['completion_percentage']:.1f}%")
print(f"Peak memory usage: {summary['peak_memory_percent']:.1f}%")
# Print task status counts
status_counts = summary['status_counts']
print("\nTask Status Counts:")
for status, count in status_counts.items():
print(f" {status}: {count}")
if __name__ == "__main__":
main()


@@ -18,20 +18,11 @@ Key Features:
import asyncio
import pandas as pd
import numpy as np
import re
import plotly.express as px
from crawl4ai import (
AsyncWebCrawler,
BrowserConfig,
CrawlerRunConfig,
CacheMode,
LXMLWebScrapingStrategy,
)
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode, LXMLWebScrapingStrategy
from crawl4ai import CrawlResult
from typing import List
__current_dir__ = __file__.rsplit("/", 1)[0]
from IPython.display import HTML
class CryptoAlphaGenerator:
"""
@@ -40,319 +31,134 @@ class CryptoAlphaGenerator:
- Liquidity scores
- Momentum-risk ratios
- Machine learning-inspired trading signals
Methods:
analyze_tables(): Process raw tables into trading insights
create_visuals(): Generate institutional-grade visualizations
generate_insights(): Create plain English trading recommendations
"""
def clean_data(self, df: pd.DataFrame) -> pd.DataFrame:
"""
Convert crypto market data to machine-readable format.
Handles currency symbols, units (B=Billions), and percentage values.
Convert crypto market data to machine-readable format
Handles currency symbols, units (B=Billions), and percentage values
"""
# Make a copy to avoid SettingWithCopyWarning
df = df.copy()
# Clean Price column (handle currency symbols)
df["Price"] = df["Price"].astype(str).str.replace("[^\d.]", "", regex=True).astype(float)
# Handle Market Cap and Volume, considering both Billions and Trillions
def convert_large_numbers(value):
if pd.isna(value):
return float('nan')
value = str(value)
multiplier = 1
if 'B' in value:
multiplier = 1e9
elif 'T' in value:
multiplier = 1e12
# Handle cases where the value might already be numeric
cleaned_value = re.sub(r"[^\d.]", "", value)
return float(cleaned_value) * multiplier if cleaned_value else float('nan')
df["Market Cap"] = df["Market Cap"].apply(convert_large_numbers)
df["Volume(24h)"] = df["Volume(24h)"].apply(convert_large_numbers)
# Clean numeric columns
df['Price'] = df['Price'].str.replace('[^\d.]', '', regex=True).astype(float)
df['Market Cap'] = df['Market Cap'].str.extract(r'\$([\d.]+)B')[0].astype(float) * 1e9
df['Volume(24h)'] = df['Volume(24h)'].str.extract(r'\$([\d.]+)B')[0].astype(float) * 1e9
# Convert percentages to decimal values
for col in ["1h %", "24h %", "7d %"]:
if col in df.columns:
# First ensure it's string, then clean
df[col] = (
df[col].astype(str)
.str.replace("%", "")
.str.replace(",", ".")
.replace("nan", np.nan)
)
df[col] = pd.to_numeric(df[col], errors='coerce') / 100
for col in ['1h %', '24h %', '7d %']:
df[col] = df[col].str.replace('%', '').astype(float) / 100
return df
def calculate_metrics(self, df: pd.DataFrame) -> pd.DataFrame:
"""
Compute advanced trading metrics used by quantitative funds:
1. Volume/Market Cap Ratio - Measures liquidity efficiency
(High ratio = Underestimated attention, and small-cap = higher growth potential)
2. Volatility Score - Risk-adjusted momentum potential - Shows how stable is the trend
(High ratio = Underestimated attention)
2. Volatility Score - Risk-adjusted momentum potential
(STD of 1h/24h/7d returns)
3. Momentum Score - Weighted average of returns - Shows how strong is the trend
3. Momentum Score - Weighted average of returns
(1h:30% + 24h:50% + 7d:20%)
4. Volume Anomaly - 3σ deviation detection
(Flags potential insider activity) - Unusual trading activity Flags coins with volume spikes (potential insider buying or news).
(Flags potential insider activity)
"""
# Liquidity Metrics
df["Volume/Market Cap Ratio"] = df["Volume(24h)"] / df["Market Cap"]
df['Volume/Market Cap Ratio'] = df['Volume(24h)'] / df['Market Cap']
# Risk Metrics
df["Volatility Score"] = df[["1h %", "24h %", "7d %"]].std(axis=1)
df['Volatility Score'] = df[['1h %','24h %','7d %']].std(axis=1)
# Momentum Metrics
df["Momentum Score"] = df["1h %"] * 0.3 + df["24h %"] * 0.5 + df["7d %"] * 0.2
df['Momentum Score'] = (df['1h %']*0.3 + df['24h %']*0.5 + df['7d %']*0.2)
# Anomaly Detection
median_vol = df["Volume(24h)"].median()
df["Volume Anomaly"] = df["Volume(24h)"] > 3 * median_vol
median_vol = df['Volume(24h)'].median()
df['Volume Anomaly'] = df['Volume(24h)'] > 3 * median_vol
# Value Flags
# Undervalued Flag - Low market cap and high momentum
# (High growth potential and low attention)
df["Undervalued Flag"] = (df["Market Cap"] < 1e9) & (
df["Momentum Score"] > 0.05
)
# Liquid Giant Flag - High volume/market cap ratio and large market cap
# (High liquidity and large market cap = institutional interest)
df["Liquid Giant"] = (df["Volume/Market Cap Ratio"] > 0.15) & (
df["Market Cap"] > 1e9
)
df['Undervalued Flag'] = (df['Market Cap'] < 1e9) & (df['Momentum Score'] > 0.05)
df['Liquid Giant'] = (df['Volume/Market Cap Ratio'] > 0.15) & (df['Market Cap'] > 1e9)
return df
def generate_insights_simple(self, df: pd.DataFrame) -> str:
def create_visuals(self, df: pd.DataFrame) -> dict:
"""
Generates an ultra-actionable crypto trading report with:
- Risk-tiered opportunities (High/Medium/Low)
- Concrete examples for each trade type
- Entry/exit strategies spelled out
- Visual cues for quick scanning
Generate three institutional-grade visualizations:
1. 3D Market Map - X:Size, Y:Liquidity, Z:Momentum
2. Liquidity Tree - Color:Volume Efficiency
3. Momentum Leaderboard - Top sustainable movers
"""
report = [
"🚀 **CRYPTO TRADING CHEAT SHEET** 🚀",
"*Based on quantitative signals + hedge fund tactics*",
"━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
]
# 1. HIGH-RISK: Undervalued Small-Caps (Momentum Plays)
high_risk = df[df["Undervalued Flag"]].sort_values("Momentum Score", ascending=False)
if not high_risk.empty:
example_coin = high_risk.iloc[0]
report.extend([
"\n🔥 **HIGH-RISK: Rocket Fuel Small-Caps**",
f"*Example Trade:* {example_coin['Name']} (Price: ${example_coin['Price']:.6f})",
"📊 *Why?* Tiny market cap (<$1B) but STRONG momentum (+{:.0f}% last week)".format(example_coin['7d %']*100),
"🎯 *Strategy:*",
"1. Wait for 5-10% dip from recent high (${:.6f} → Buy under ${:.6f})".format(
example_coin['Price'] / (1 - example_coin['24h %']), # Approx recent high
example_coin['Price'] * 0.95
),
"2. Set stop-loss at -10% (${:.6f})".format(example_coin['Price'] * 0.90),
"3. Take profit at +20% (${:.6f})".format(example_coin['Price'] * 1.20),
"⚠️ *Risk Warning:* These can drop 30% fast! Never bet more than 5% of your portfolio.",
"━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
])
# 2. MEDIUM-RISK: Liquid Giants (Swing Trades)
medium_risk = df[df["Liquid Giant"]].sort_values("Volume/Market Cap Ratio", ascending=False)
if not medium_risk.empty:
example_coin = medium_risk.iloc[0]
report.extend([
"\n💎 **MEDIUM-RISK: Liquid Giants (Safe Swing Trades)**",
f"*Example Trade:* {example_coin['Name']} (Market Cap: ${example_coin['Market Cap']/1e9:.1f}B)",
"📊 *Why?* Huge volume (${:.1f}M/day) makes it easy to enter/exit".format(example_coin['Volume(24h)']/1e6),
"🎯 *Strategy:*",
"1. Buy when 24h volume > 15% of market cap (Current: {:.0f}%)".format(example_coin['Volume/Market Cap Ratio']*100),
"2. Hold 1-4 weeks (Big coins trend longer)",
"3. Exit when momentum drops below 5% (Current: {:.0f}%)".format(example_coin['Momentum Score']*100),
"📉 *Pro Tip:* Watch Bitcoin's trend - if BTC drops 5%, these usually follow.",
"━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
])
# 3. LOW-RISK: Stable Momentum (DCA Targets)
low_risk = df[
(df["Momentum Score"] > 0.05) &
(df["Volatility Score"] < 0.03)
].sort_values("Market Cap", ascending=False)
if not low_risk.empty:
example_coin = low_risk.iloc[0]
report.extend([
"\n🛡️ **LOW-RISK: Steady Climbers (DCA & Forget)**",
f"*Example Trade:* {example_coin['Name']} (Volatility: {example_coin['Volatility Score']:.2f}/5)",
"📊 *Why?* Rises steadily (+{:.0f}%/week) with LOW drama".format(example_coin['7d %']*100),
"🎯 *Strategy:*",
"1. Buy small amounts every Tuesday/Friday (DCA)",
"2. Hold for 3+ months (Compound gains work best here)",
"3. Sell 10% at every +25% milestone",
"💰 *Best For:* Long-term investors who hate sleepless nights",
"━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
])
# Volume Spike Alerts
anomalies = df[df["Volume Anomaly"]].sort_values("Volume(24h)", ascending=False)
if not anomalies.empty:
example_coin = anomalies.iloc[0]
report.extend([
"\n🚨 **Volume Spike Alert (Possible News/Whale Action)**",
f"*Coin:* {example_coin['Name']} (Volume: ${example_coin['Volume(24h)']/1e6:.1f}M, usual: ${example_coin['Volume(24h)']/3/1e6:.1f}M)",
"🔍 *Check:* Twitter/CoinGecko for news before trading",
"⚡ *If no news:* Could be insider buying - watch price action:",
"- Break above today's high → Buy with tight stop-loss",
"- Fade back down → Avoid (may be a fakeout)"
])
# Pro Tip Footer
report.append("\n✨ *Pro Tip:* Bookmark this report & check back in 24h to see if signals held up.")
return "\n".join(report)
# 3D Market Overview
fig1 = px.scatter_3d(
df,
x='Market Cap',
y='Volume/Market Cap Ratio',
z='Momentum Score',
size='Volatility Score',
color='Volume Anomaly',
hover_name='Name',
title='Smart Money Market Map: Spot Overlooked Opportunities',
labels={'Market Cap': 'Size (Log $)', 'Volume/Market Cap Ratio': 'Liquidity Power'},
log_x=True,
template='plotly_dark'
)
# Liquidity Efficiency Tree
fig2 = px.treemap(
df,
path=['Name'],
values='Market Cap',
color='Volume/Market Cap Ratio',
hover_data=['Momentum Score'],
title='Liquidity Forest: Green = High Trading Efficiency',
color_continuous_scale='RdYlGn'
)
# Momentum Leaders
fig3 = px.bar(
df.sort_values('Momentum Score', ascending=False).head(10),
x='Name',
y='Momentum Score',
color='Volatility Score',
title='Sustainable Momentum Leaders (Low Volatility + High Growth)',
text='7d %',
template='plotly_dark'
)
return {'market_map': fig1, 'liquidity_tree': fig2, 'momentum_leaders': fig3}
def generate_insights(self, df: pd.DataFrame) -> str:
"""
Generates a tactical trading report with:
- Top 3 trades per risk level (High/Medium/Low)
- Auto-calculated entry/exit prices
- BTC chart toggle tip
Create plain English trading insights explaining:
- Volume spikes and their implications
- Risk-reward ratios of top movers
- Liquidity warnings for large positions
"""
# Filter top candidates for each risk level
high_risk = (
df[df["Undervalued Flag"]]
.sort_values("Momentum Score", ascending=False)
.head(3)
)
medium_risk = (
df[df["Liquid Giant"]]
.sort_values("Volume/Market Cap Ratio", ascending=False)
.head(3)
)
low_risk = (
df[(df["Momentum Score"] > 0.05) & (df["Volatility Score"] < 0.03)]
.sort_values("Momentum Score", ascending=False)
.head(3)
)
report = ["# 🎯 Crypto Trading Tactical Report (Top 3 Per Risk Tier)"]
top_coin = df.sort_values('Momentum Score', ascending=False).iloc[0]
anomaly_coins = df[df['Volume Anomaly']].sort_values('Volume(24h)', ascending=False)
# 1. High-Risk Trades (Small-Cap Momentum)
if not high_risk.empty:
report.append("\n## 🔥 HIGH RISK: Small-Cap Rockets (5-50% Potential)")
for i, coin in high_risk.iterrows():
current_price = coin["Price"]
entry = current_price * 0.95 # -5% dip
stop_loss = current_price * 0.90 # -10%
take_profit = current_price * 1.20 # +20%
report.append(
f"\n### {coin['Name']} (Momentum: {coin['Momentum Score']:.1%})"
f"\n- **Current Price:** ${current_price:.4f}"
f"\n- **Entry:** < ${entry:.4f} (Wait for pullback)"
f"\n- **Stop-Loss:** ${stop_loss:.4f} (-10%)"
f"\n- **Target:** ${take_profit:.4f} (+20%)"
f"\n- **Risk/Reward:** 1:2"
f"\n- **Watch:** Volume spikes above {coin['Volume(24h)']/1e6:.1f}M"
)
# 2. Medium-Risk Trades (Liquid Giants)
if not medium_risk.empty:
report.append("\n## 💎 MEDIUM RISK: Liquid Swing Trades (10-30% Potential)")
for i, coin in medium_risk.iterrows():
current_price = coin["Price"]
entry = current_price * 0.98 # -2% dip
stop_loss = current_price * 0.94 # -6%
take_profit = current_price * 1.15 # +15%
report.append(
f"\n### {coin['Name']} (Liquidity Score: {coin['Volume/Market Cap Ratio']:.1%})"
f"\n- **Current Price:** ${current_price:.2f}"
f"\n- **Entry:** < ${entry:.2f} (Buy slight dips)"
f"\n- **Stop-Loss:** ${stop_loss:.2f} (-6%)"
f"\n- **Target:** ${take_profit:.2f} (+15%)"
f"\n- **Hold Time:** 1-3 weeks"
f"\n- **Key Metric:** Volume/Cap > 15%"
)
# 3. Low-Risk Trades (Stable Momentum)
if not low_risk.empty:
report.append("\n## 🛡️ LOW RISK: Steady Gainers (5-15% Potential)")
for i, coin in low_risk.iterrows():
current_price = coin["Price"]
entry = current_price * 0.99 # -1% dip
stop_loss = current_price * 0.97 # -3%
take_profit = current_price * 1.10 # +10%
report.append(
f"\n### {coin['Name']} (Stability Score: {1/coin['Volatility Score']:.1f}x)"
f"\n- **Current Price:** ${current_price:.2f}"
f"\n- **Entry:** < ${entry:.2f} (Safe zone)"
f"\n- **Stop-Loss:** ${stop_loss:.2f} (-3%)"
f"\n- **Target:** ${take_profit:.2f} (+10%)"
f"\n- **DCA Suggestion:** 3 buys over 72 hours"
)
# Volume Anomaly Alert
anomalies = df[df["Volume Anomaly"]].sort_values("Volume(24h)", ascending=False).head(2)
if not anomalies.empty:
report.append("\n⚠️ **Volume Spike Alerts**")
for i, coin in anomalies.iterrows():
report.append(
f"- {coin['Name']}: Volume {coin['Volume(24h)']/1e6:.1f}M "
f"(3x normal) | Price moved: {coin['24h %']:.1%}"
)
# Pro Tip
report.append(
"\n📊 **Chart Hack:** Hide BTC in visuals:\n"
"```python\n"
"# For 3D Map:\n"
"fig.update_traces(visible=False, selector={'name':'Bitcoin'})\n"
"# For Treemap:\n"
"df = df[df['Name'] != 'Bitcoin']\n"
"```"
)
return "\n".join(report)
def create_visuals(self, df: pd.DataFrame) -> dict:
"""Enhanced visuals with BTC toggle support"""
# 3D Market Map (with BTC toggle hint)
fig1 = px.scatter_3d(
df,
x="Market Cap",
y="Volume/Market Cap Ratio",
z="Momentum Score",
color="Name", # Color by name to allow toggling
hover_name="Name",
title="Market Map (Toggle BTC in legend to focus on alts)",
log_x=True
)
fig1.update_traces(
marker=dict(size=df["Volatility Score"]*100 + 5) # Dynamic sizing
)
report = f"""
🚀 Top Alpha Opportunity: {top_coin['Name']}
- Momentum Score: {top_coin['Momentum Score']:.2%} (Top 1%)
- Risk-Reward Ratio: {top_coin['Momentum Score']/top_coin['Volatility Score']:.1f}
- Liquidity Warning: {'✅ Safe' if top_coin['Liquid Giant'] else '⚠️ Thin Markets'}
# Liquidity Tree (exclude BTC if too dominant)
if df[df["Name"] == "BitcoinBTC"]["Market Cap"].values[0] > df["Market Cap"].median() * 10:
df = df[df["Name"] != "BitcoinBTC"]
🔥 Volume Spikes Detected ({len(anomaly_coins)} coins):
{anomaly_coins[['Name', 'Volume(24h)']].head(3).to_markdown(index=False)}
fig2 = px.treemap(
df,
path=["Name"],
values="Market Cap",
color="Volume/Market Cap Ratio",
title="Liquidity Tree (BTC auto-removed if dominant)"
)
return {"market_map": fig1, "liquidity_tree": fig2}
💡 Smart Money Tip: Coins with Volume/Cap > 15% and Momentum > 5%
historically outperform by 22% weekly returns.
"""
return report
async def main():
"""
@@ -365,79 +171,60 @@ async def main():
"""
# Configure browser with anti-detection features
browser_config = BrowserConfig(
headless=False,
headless=True,
stealth=True,
block_resources=["image", "media"]
)
# Initialize crawler with smart table detection
crawler = AsyncWebCrawler(config=browser_config)
await crawler.start()
try:
# Set up scraping parameters
crawl_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
table_score_threshold=8, # Strict table detection
keep_data_attributes=True,
scraping_strategy=LXMLWebScrapingStrategy(),
scan_full_page=True,
scroll_delay=0.2,
scraping_strategy=LXMLWebScrapingStrategy(
table_score_threshold=8, # Strict table detection
keep_data_attributes=True
)
)
# # Execute market data extraction
# results: List[CrawlResult] = await crawler.arun(
# url="https://coinmarketcap.com/?page=1", config=crawl_config
# )
# # Process results
# raw_df = pd.DataFrame()
# for result in results:
# if result.success and result.media["tables"]:
# # Extract primary market table
# # DataFrame
# raw_df = pd.DataFrame(
# result.media["tables"][0]["rows"],
# columns=result.media["tables"][0]["headers"],
# )
# break
# This is for debugging only
# ////// Remove this in production from here..
# Save raw data for debugging
# raw_df.to_csv(f"{__current_dir__}/tmp/raw_crypto_data.csv", index=False)
# print("🔍 Raw data saved to 'raw_crypto_data.csv'")
# Read from file for debugging
raw_df = pd.read_csv(f"{__current_dir__}/tmp/raw_crypto_data.csv")
# ////// ..to here
# Select top 20
raw_df = raw_df.head(50)
# Remove "Buy" from name
raw_df["Name"] = raw_df["Name"].str.replace("Buy", "")
# Initialize analysis engine
analyzer = CryptoAlphaGenerator()
clean_df = analyzer.clean_data(raw_df)
analyzed_df = analyzer.calculate_metrics(clean_df)
# Generate outputs
visuals = analyzer.create_visuals(analyzed_df)
insights = analyzer.generate_insights(analyzed_df)
# Save visualizations
visuals["market_map"].write_html(f"{__current_dir__}/tmp/market_map.html")
visuals["liquidity_tree"].write_html(f"{__current_dir__}/tmp/liquidity_tree.html")
# Display results
print("🔑 Key Trading Insights:")
print(insights)
print("\n📊 Open 'market_map.html' for interactive analysis")
print("\n📊 Open 'liquidity_tree.html' for interactive analysis")
# Execute market data extraction
results: List[CrawlResult] = await crawler.arun(
url='https://coinmarketcap.com/?page=1',
config=crawl_config
)
# Process results
for result in results:
if result.success and result.media['tables']:
# Extract primary market table
raw_df = pd.DataFrame(
result.media['tables'][0]['rows'],
columns=result.media['tables'][0]['headers']
)
# Initialize analysis engine
analyzer = CryptoAlphaGenerator()
clean_df = analyzer.clean_data(raw_df)
analyzed_df = analyzer.calculate_metrics(clean_df)
# Generate outputs
visuals = analyzer.create_visuals(analyzed_df)
insights = analyzer.generate_insights(analyzed_df)
# Save visualizations
visuals['market_map'].write_html("market_map.html")
visuals['liquidity_tree'].write_html("liquidity_tree.html")
# Display results
print("🔑 Key Trading Insights:")
print(insights)
print("\n📊 Open 'market_map.html' for interactive analysis")
finally:
await crawler.close()
if __name__ == "__main__":
asyncio.run(main())
asyncio.run(main())

File diff suppressed because it is too large


@@ -73,7 +73,7 @@ async def test_stream_crawl(session, token: str):
# "https://news.ycombinator.com/news"
],
"browser_config": {"headless": True, "viewport": {"width": 1200}},
"crawler_config": {"stream": True, "cache_mode": "bypass"}
"crawler_config": {"stream": True, "cache_mode": "aggressive"}
}
headers = {"Authorization": f"Bearer {token}"}
print(f"\nTesting Streaming Crawl: {url}")


@@ -12,10 +12,9 @@ We've introduced a new feature that effortlessly handles even the biggest page
**Simple Example:**
```python
import os
import sys
import os, sys
import asyncio
from crawl4ai import AsyncWebCrawler, CacheMode, CrawlerRunConfig
from crawl4ai import AsyncWebCrawler, CacheMode
# Adjust paths as needed
parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
@@ -27,11 +26,9 @@ async def main():
# Request both PDF and screenshot
result = await crawler.arun(
url='https://en.wikipedia.org/wiki/List_of_common_misconceptions',
config=CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
pdf=True,
screenshot=True
)
cache_mode=CacheMode.BYPASS,
pdf=True,
screenshot=True
)
if result.success:
@@ -43,8 +40,9 @@ async def main():
# Save PDF
if result.pdf:
pdf_bytes = b64decode(result.pdf)
with open(os.path.join(__location__, "page.pdf"), "wb") as f:
f.write(result.pdf)
f.write(pdf_bytes)
if __name__ == "__main__":
asyncio.run(main())


@@ -9,26 +9,6 @@ from crawl4ai import (
CrawlResult
)
async def example_cdp():
browser_conf = BrowserConfig(
headless=False,
cdp_url="http://localhost:9223"
)
crawler_config = CrawlerRunConfig(
session_id="test",
js_code = """(() => { return {"result": "Hello World!"} })()""",
js_only=True
)
async with AsyncWebCrawler(
config=browser_conf,
verbose=True,
) as crawler:
result : CrawlResult = await crawler.arun(
url="https://www.helloworld.org",
config=crawler_config,
)
print(result.js_execution_result)
async def main():
browser_config = BrowserConfig(headless=True, verbose=True)
@@ -36,15 +16,18 @@ async def main():
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
)
# content_filter=PruningContentFilter(
# threshold=0.48, threshold_type="fixed", min_word_threshold=0
# )
),
)
result : CrawlResult = await crawler.arun(
url="https://www.helloworld.org", config=crawler_config
# url="https://www.helloworld.org", config=crawler_config
url="https://www.kidocode.com", config=crawler_config
)
print(result.markdown.raw_markdown[:500])
# print(result.model_dump())
if __name__ == "__main__":
asyncio.run(main())


@@ -1,64 +0,0 @@
"""
Example showing how to use the content_source parameter to control HTML input for markdown generation.
"""
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, DefaultMarkdownGenerator
async def demo_content_source():
"""Demonstrates different content_source options for markdown generation."""
url = "https://example.com" # Simple demo site
print("Crawling with different content_source options...")
# --- Example 1: Default Behavior (cleaned_html) ---
# This uses the HTML after it has been processed by the scraping strategy
# The HTML is cleaned, simplified, and optimized for readability
default_generator = DefaultMarkdownGenerator() # content_source="cleaned_html" is default
default_config = CrawlerRunConfig(markdown_generator=default_generator)
# --- Example 2: Raw HTML ---
# This uses the original HTML directly from the webpage
# Preserves more original content but may include navigation, ads, etc.
raw_generator = DefaultMarkdownGenerator(content_source="raw_html")
raw_config = CrawlerRunConfig(markdown_generator=raw_generator)
# --- Example 3: Fit HTML ---
# This uses preprocessed HTML optimized for schema extraction
# Better for structured data extraction but may lose some formatting
fit_generator = DefaultMarkdownGenerator(content_source="fit_html")
fit_config = CrawlerRunConfig(markdown_generator=fit_generator)
# Execute all three crawlers in sequence
async with AsyncWebCrawler() as crawler:
# Default (cleaned_html)
result_default = await crawler.arun(url=url, config=default_config)
# Raw HTML
result_raw = await crawler.arun(url=url, config=raw_config)
# Fit HTML
result_fit = await crawler.arun(url=url, config=fit_config)
# Print a summary of the results
print("\nMarkdown Generation Results:\n")
print("1. Default (cleaned_html):")
print(f" Length: {len(result_default.markdown.raw_markdown)} chars")
print(f" First 80 chars: {result_default.markdown.raw_markdown[:80]}...\n")
print("2. Raw HTML:")
print(f" Length: {len(result_raw.markdown.raw_markdown)} chars")
print(f" First 80 chars: {result_raw.markdown.raw_markdown[:80]}...\n")
print("3. Fit HTML:")
print(f" Length: {len(result_fit.markdown.raw_markdown)} chars")
print(f" First 80 chars: {result_fit.markdown.raw_markdown[:80]}...\n")
# Demonstrate differences in output
print("\nKey Takeaways:")
print("- cleaned_html: Best for readable, focused content")
print("- raw_html: Preserves more original content, but may include noise")
print("- fit_html: Optimized for schema extraction and structured data")
if __name__ == "__main__":
asyncio.run(demo_content_source())


@@ -1,42 +0,0 @@
"""
Example demonstrating how to use the content_source parameter in MarkdownGenerationStrategy
"""
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, DefaultMarkdownGenerator
async def demo_markdown_source_config():
print("\n=== Demo: Configuring Markdown Source ===")
# Example 1: Generate markdown from cleaned HTML (default behavior)
cleaned_md_generator = DefaultMarkdownGenerator(content_source="cleaned_html")
config_cleaned = CrawlerRunConfig(markdown_generator=cleaned_md_generator)
async with AsyncWebCrawler() as crawler:
result_cleaned = await crawler.arun(url="https://example.com", config=config_cleaned)
print("Markdown from Cleaned HTML (default):")
print(f" Length: {len(result_cleaned.markdown.raw_markdown)}")
print(f" Start: {result_cleaned.markdown.raw_markdown[:100]}...")
# Example 2: Generate markdown directly from raw HTML
raw_md_generator = DefaultMarkdownGenerator(content_source="raw_html")
config_raw = CrawlerRunConfig(markdown_generator=raw_md_generator)
async with AsyncWebCrawler() as crawler:
result_raw = await crawler.arun(url="https://example.com", config=config_raw)
print("\nMarkdown from Raw HTML:")
print(f" Length: {len(result_raw.markdown.raw_markdown)}")
print(f" Start: {result_raw.markdown.raw_markdown[:100]}...")
# Example 3: Generate markdown from preprocessed 'fit' HTML
fit_md_generator = DefaultMarkdownGenerator(content_source="fit_html")
config_fit = CrawlerRunConfig(markdown_generator=fit_md_generator)
async with AsyncWebCrawler() as crawler:
result_fit = await crawler.arun(url="https://example.com", config=config_fit)
print("\nMarkdown from Fit HTML:")
print(f" Length: {len(result_fit.markdown.raw_markdown)}")
print(f" Start: {result_fit.markdown.raw_markdown[:100]}...")
if __name__ == "__main__":
asyncio.run(demo_markdown_source_config())


@@ -1,477 +0,0 @@
import asyncio
import json
import os
import base64
from pathlib import Path
from typing import List, Dict, Any
from datetime import datetime
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode, CrawlResult
from crawl4ai import BrowserConfig
__cur_dir__ = Path(__file__).parent
# Create temp directory if it doesn't exist
os.makedirs(os.path.join(__cur_dir__, "tmp"), exist_ok=True)
async def demo_basic_network_capture():
"""Basic network request capturing example"""
print("\n=== 1. Basic Network Request Capturing ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
wait_until="networkidle" # Wait for network to be idle
)
result = await crawler.arun(
url="https://example.com/",
config=config
)
if result.success and result.network_requests:
print(f"Captured {len(result.network_requests)} network events")
# Count by event type
event_types = {}
for req in result.network_requests:
event_type = req.get("event_type", "unknown")
event_types[event_type] = event_types.get(event_type, 0) + 1
print("Event types:")
for event_type, count in event_types.items():
print(f" - {event_type}: {count}")
# Show a sample request and response
request = next((r for r in result.network_requests if r.get("event_type") == "request"), None)
response = next((r for r in result.network_requests if r.get("event_type") == "response"), None)
if request:
print("\nSample request:")
print(f" URL: {request.get('url')}")
print(f" Method: {request.get('method')}")
print(f" Headers: {list(request.get('headers', {}).keys())}")
if response:
print("\nSample response:")
print(f" URL: {response.get('url')}")
print(f" Status: {response.get('status')} {response.get('status_text', '')}")
print(f" Headers: {list(response.get('headers', {}).keys())}")
async def demo_basic_console_capture():
"""Basic console message capturing example"""
print("\n=== 2. Basic Console Message Capturing ===")
# Create a simple HTML file with console messages
html_file = os.path.join(__cur_dir__, "tmp", "console_test.html")
with open(html_file, "w") as f:
f.write("""
<!DOCTYPE html>
<html>
<head>
<title>Console Test</title>
</head>
<body>
<h1>Console Message Test</h1>
<script>
console.log("This is a basic log message");
console.info("This is an info message");
console.warn("This is a warning message");
console.error("This is an error message");
// Generate an error
try {
nonExistentFunction();
} catch (e) {
console.error("Caught error:", e);
}
</script>
</body>
</html>
""")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_console_messages=True,
wait_until="networkidle" # Wait to make sure all scripts execute
)
result = await crawler.arun(
url=f"file://{html_file}",
config=config
)
if result.success and result.console_messages:
print(f"Captured {len(result.console_messages)} console messages")
# Count by message type
message_types = {}
for msg in result.console_messages:
msg_type = msg.get("type", "unknown")
message_types[msg_type] = message_types.get(msg_type, 0) + 1
print("Message types:")
for msg_type, count in message_types.items():
print(f" - {msg_type}: {count}")
# Show all messages
print("\nAll console messages:")
for i, msg in enumerate(result.console_messages, 1):
print(f" {i}. [{msg.get('type', 'unknown')}] {msg.get('text', '')}")
async def demo_combined_capture():
"""Capturing both network requests and console messages"""
print("\n=== 3. Combined Network and Console Capture ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True,
wait_until="networkidle"
)
result = await crawler.arun(
url="https://httpbin.org/html",
config=config
)
if result.success:
network_count = len(result.network_requests) if result.network_requests else 0
console_count = len(result.console_messages) if result.console_messages else 0
print(f"Captured {network_count} network events and {console_count} console messages")
# Save the captured data to a JSON file for analysis
output_file = os.path.join(__cur_dir__, "tmp", "capture_data.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"timestamp": datetime.now().isoformat(),
"network_requests": result.network_requests,
"console_messages": result.console_messages
}, f, indent=2)
print(f"Full capture data saved to {output_file}")
async def analyze_spa_network_traffic():
"""Analyze network traffic of a Single-Page Application"""
print("\n=== 4. Analyzing SPA Network Traffic ===")
async with AsyncWebCrawler(config=BrowserConfig(
headless=True,
viewport_width=1280,
viewport_height=800
)) as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True,
# Wait longer to ensure all resources are loaded
wait_until="networkidle",
page_timeout=60000, # 60 seconds
)
result = await crawler.arun(
url="https://weather.com",
config=config
)
if result.success and result.network_requests:
# Extract different types of requests
requests = []
responses = []
failures = []
for event in result.network_requests:
event_type = event.get("event_type")
if event_type == "request":
requests.append(event)
elif event_type == "response":
responses.append(event)
elif event_type == "request_failed":
failures.append(event)
print(f"Captured {len(requests)} requests, {len(responses)} responses, and {len(failures)} failures")
# Analyze request types
resource_types = {}
for req in requests:
resource_type = req.get("resource_type", "unknown")
resource_types[resource_type] = resource_types.get(resource_type, 0) + 1
print("\nResource types:")
for resource_type, count in sorted(resource_types.items(), key=lambda x: x[1], reverse=True):
print(f" - {resource_type}: {count}")
# Analyze API calls
api_calls = [r for r in requests if "api" in r.get("url", "").lower()]
if api_calls:
print(f"\nDetected {len(api_calls)} API calls:")
for i, call in enumerate(api_calls[:5], 1): # Show first 5
print(f" {i}. {call.get('method')} {call.get('url')}")
if len(api_calls) > 5:
print(f" ... and {len(api_calls) - 5} more")
# Analyze response status codes
status_codes = {}
for resp in responses:
status = resp.get("status", 0)
status_codes[status] = status_codes.get(status, 0) + 1
print("\nResponse status codes:")
for status, count in sorted(status_codes.items()):
print(f" - {status}: {count}")
# Analyze failures
if failures:
print("\nFailed requests:")
for i, failure in enumerate(failures[:5], 1): # Show first 5
print(f" {i}. {failure.get('url')} - {failure.get('failure_text')}")
if len(failures) > 5:
print(f" ... and {len(failures) - 5} more")
# Check for console errors
if result.console_messages:
errors = [msg for msg in result.console_messages if msg.get("type") == "error"]
if errors:
print(f"\nDetected {len(errors)} console errors:")
for i, error in enumerate(errors[:3], 1): # Show first 3
print(f" {i}. {error.get('text', '')[:100]}...")
if len(errors) > 3:
print(f" ... and {len(errors) - 3} more")
# Save analysis to file
output_file = os.path.join(__cur_dir__, "tmp", "weather_network_analysis.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"timestamp": datetime.now().isoformat(),
"statistics": {
"request_count": len(requests),
"response_count": len(responses),
"failure_count": len(failures),
"resource_types": resource_types,
"status_codes": {str(k): v for k, v in status_codes.items()},
"api_call_count": len(api_calls),
"console_error_count": len(errors) if result.console_messages else 0
},
"network_requests": result.network_requests,
"console_messages": result.console_messages
}, f, indent=2)
print(f"\nFull analysis saved to {output_file}")
async def demo_security_analysis():
"""Using network capture for security analysis"""
print("\n=== 5. Security Analysis with Network Capture ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True,
wait_until="networkidle"
)
# A site that makes multiple third-party requests
result = await crawler.arun(
url="https://www.nytimes.com/",
config=config
)
if result.success and result.network_requests:
print(f"Captured {len(result.network_requests)} network events")
# Extract all domains (import urlparse once here so it is also
# available for main_domain below, even when no requests were captured)
from urllib.parse import urlparse
domains = set()
for req in result.network_requests:
if req.get("event_type") == "request":
url = req.get("url", "")
try:
domain = urlparse(url).netloc
if domain:
domains.add(domain)
except ValueError:
pass
print(f"\nDetected requests to {len(domains)} unique domains:")
main_domain = urlparse(result.url).netloc
# Separate first-party vs third-party domains
first_party = [d for d in domains if main_domain in d]
third_party = [d for d in domains if main_domain not in d]
print(f" - First-party domains: {len(first_party)}")
print(f" - Third-party domains: {len(third_party)}")
# Look for potential trackers/analytics
tracking_keywords = ["analytics", "tracker", "pixel", "tag", "stats", "metric", "collect", "beacon"]
potential_trackers = []
for domain in third_party:
if any(keyword in domain.lower() for keyword in tracking_keywords):
potential_trackers.append(domain)
if potential_trackers:
print(f"\nPotential tracking/analytics domains ({len(potential_trackers)}):")
for i, domain in enumerate(sorted(potential_trackers)[:10], 1):
print(f" {i}. {domain}")
if len(potential_trackers) > 10:
print(f" ... and {len(potential_trackers) - 10} more")
# Check for insecure (HTTP) requests
insecure_requests = [
req.get("url") for req in result.network_requests
if req.get("event_type") == "request" and req.get("url", "").startswith("http://")
]
if insecure_requests:
print(f"\nWarning: Found {len(insecure_requests)} insecure (HTTP) requests:")
for i, url in enumerate(insecure_requests[:5], 1):
print(f" {i}. {url}")
if len(insecure_requests) > 5:
print(f" ... and {len(insecure_requests) - 5} more")
# Save security analysis to file
output_file = os.path.join(__cur_dir__, "tmp", "security_analysis.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"main_domain": main_domain,
"timestamp": datetime.now().isoformat(),
"analysis": {
"total_requests": len([r for r in result.network_requests if r.get("event_type") == "request"]),
"unique_domains": len(domains),
"first_party_domains": first_party,
"third_party_domains": third_party,
"potential_trackers": potential_trackers,
"insecure_requests": insecure_requests
}
}, f, indent=2)
print(f"\nFull security analysis saved to {output_file}")
async def demo_performance_analysis():
"""Using network capture for performance analysis"""
print("\n=== 6. Performance Analysis with Network Capture ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
page_timeout=60 * 2 * 1000 # 120 seconds
)
result = await crawler.arun(
url="https://www.cnn.com/",
config=config
)
if result.success and result.network_requests:
# Filter only response events with timing information
responses_with_timing = [
r for r in result.network_requests
if r.get("event_type") == "response" and r.get("request_timing")
]
if responses_with_timing:
print(f"Analyzing timing for {len(responses_with_timing)} network responses")
# Group by resource type
resource_timings = {}
for resp in responses_with_timing:
url = resp.get("url", "")
timing = resp.get("request_timing", {})
# Determine resource type from URL extension (ignoring any query string)
path = url.split("?")[0]
ext = path.split(".")[-1].lower() if "." in path.split("/")[-1] else "unknown"
if ext in ["jpg", "jpeg", "png", "gif", "webp", "svg", "ico"]:
resource_type = "image"
elif ext in ["js"]:
resource_type = "javascript"
elif ext in ["css"]:
resource_type = "css"
elif ext in ["woff", "woff2", "ttf", "otf", "eot"]:
resource_type = "font"
else:
resource_type = "other"
if resource_type not in resource_timings:
resource_timings[resource_type] = []
# Calculate request duration if timing information is available
# (elif ensures a response is not double-counted when both key sets exist)
if isinstance(timing, dict) and "requestTime" in timing and "receiveHeadersEnd" in timing:
# Convert to milliseconds
duration = (timing["receiveHeadersEnd"] - timing["requestTime"]) * 1000
resource_timings[resource_type].append({
"url": url,
"duration_ms": duration
})
elif isinstance(timing, dict) and "requestStart" in timing and "responseStart" in timing and "startTime" in timing:
# Convert to milliseconds
duration = (timing["responseStart"] - timing["requestStart"]) * 1000
resource_timings[resource_type].append({
"url": url,
"duration_ms": duration
})
# Calculate statistics for each resource type
print("\nPerformance by resource type:")
for resource_type, timings in resource_timings.items():
if timings:
durations = [t["duration_ms"] for t in timings]
avg_duration = sum(durations) / len(durations)
max_duration = max(durations)
slowest_resource = next(t["url"] for t in timings if t["duration_ms"] == max_duration)
print(f" {resource_type.upper()}:")
print(f" - Count: {len(timings)}")
print(f" - Avg time: {avg_duration:.2f} ms")
print(f" - Max time: {max_duration:.2f} ms")
print(f" - Slowest: {slowest_resource}")
# Identify the slowest resources overall
all_timings = []
for resource_type, timings in resource_timings.items():
for timing in timings:
timing["type"] = resource_type
all_timings.append(timing)
all_timings.sort(key=lambda x: x["duration_ms"], reverse=True)
print("\nTop 5 slowest resources:")
for i, timing in enumerate(all_timings[:5], 1):
print(f" {i}. [{timing['type']}] {timing['url']} - {timing['duration_ms']:.2f} ms")
# Save performance analysis to file
output_file = os.path.join(__cur_dir__, "tmp", "performance_analysis.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"timestamp": datetime.now().isoformat(),
"resource_timings": resource_timings,
"slowest_resources": all_timings[:10] # Save top 10
}, f, indent=2)
print(f"\nFull performance analysis saved to {output_file}")
async def main():
"""Run all demo functions sequentially"""
print("=== Network and Console Capture Examples ===")
# Make sure tmp directory exists
os.makedirs(os.path.join(__cur_dir__, "tmp"), exist_ok=True)
# Run basic examples
# await demo_basic_network_capture()
await demo_basic_console_capture()
# await demo_combined_capture()
# Run advanced examples
# await analyze_spa_network_traffic()
# await demo_security_analysis()
# await demo_performance_analysis()
print("\n=== Examples Complete ===")
print(f"Check the tmp directory for output files: {os.path.join(__cur_dir__, 'tmp')}")
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,675 @@
import os, sys
from crawl4ai import LLMConfig
# append parent directory to system path
sys.path.append(
os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
)
os.environ["FIRECRAWL_API_KEY"] = "fc-84b370ccfad44beabc686b38f1769692"
import asyncio
# import nest_asyncio
# nest_asyncio.apply()
import time
import json
import re
from typing import Dict, List
from bs4 import BeautifulSoup
from pydantic import BaseModel, Field
from crawl4ai import AsyncWebCrawler, CacheMode
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
from crawl4ai.content_filter_strategy import PruningContentFilter
from crawl4ai.extraction_strategy import (
JsonCssExtractionStrategy,
LLMExtractionStrategy,
)
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
print("Crawl4AI: Advanced Web Crawling and Data Extraction")
print("GitHub Repository: https://github.com/unclecode/crawl4ai")
print("Twitter: @unclecode")
print("Website: https://crawl4ai.com")
async def simple_crawl():
print("\n--- Basic Usage ---")
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500]) # Print first 500 characters
async def simple_example_with_running_js_code():
print("\n--- Executing JavaScript and Using CSS Selectors ---")
# New code to handle the wait_for parameter
wait_for = """() => {
return Array.from(document.querySelectorAll('article.tease-card')).length > 10;
}"""
# wait_for can be also just a css selector
# wait_for = "article.tease-card:nth-child(10)"
async with AsyncWebCrawler(verbose=True) as crawler:
js_code = [
"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"
]
result = await crawler.arun(
url="https://www.nbcnews.com/business",
js_code=js_code,
# wait_for=wait_for,
cache_mode=CacheMode.BYPASS,
)
print(result.markdown[:500]) # Print first 500 characters
async def simple_example_with_css_selector():
print("\n--- Using CSS Selectors ---")
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business",
css_selector=".wide-tease-item__description",
cache_mode=CacheMode.BYPASS,
)
print(result.markdown[:500]) # Print first 500 characters
async def use_proxy():
print("\n--- Using a Proxy ---")
print(
"Note: Replace 'http://your-proxy-url:port' with a working proxy to run this example."
)
# Replace the placeholder proxy URL below with a working proxy before running
async with AsyncWebCrawler(
verbose=True, proxy="http://your-proxy-url:port"
) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", cache_mode=CacheMode.BYPASS
)
if result.success:
print(result.markdown[:500]) # Print first 500 characters
async def capture_and_save_screenshot(url: str, output_path: str):
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url=url, screenshot=True, cache_mode=CacheMode.BYPASS
)
if result.success and result.screenshot:
import base64
# Decode the base64 screenshot data
screenshot_data = base64.b64decode(result.screenshot)
# Save the decoded screenshot bytes to the output path
with open(output_path, "wb") as f:
f.write(screenshot_data)
print(f"Screenshot saved successfully to {output_path}")
else:
print("Failed to capture screenshot")
class OpenAIModelFee(BaseModel):
model_name: str = Field(..., description="Name of the OpenAI model.")
input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
output_fee: str = Field(
..., description="Fee for output token for the OpenAI model."
)
async def extract_structured_data_using_llm(
provider: str, api_token: str = None, extra_headers: Dict[str, str] = None
):
print(f"\n--- Extracting Structured Data with {provider} ---")
if api_token is None and provider != "ollama":
print(f"API token is required for {provider}. Skipping this example.")
return
# extra_args = {}
extra_args = {
"temperature": 0,
"top_p": 0.9,
"max_tokens": 2000,
# any other supported parameters for litellm
}
if extra_headers:
extra_args["extra_headers"] = extra_headers
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url="https://openai.com/api/pricing/",
word_count_threshold=1,
extraction_strategy=LLMExtractionStrategy(
llm_config=LLMConfig(provider=provider,api_token=api_token),
schema=OpenAIModelFee.model_json_schema(),
extraction_type="schema",
instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
Do not miss any models in the entire content. One extracted model JSON format should look like this:
{"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}.""",
extra_args=extra_args,
),
cache_mode=CacheMode.BYPASS,
)
print(result.extracted_content)
async def extract_structured_data_using_css_extractor():
print("\n--- Using JsonCssExtractionStrategy for Fast Structured Output ---")
schema = {
"name": "KidoCode Courses",
"baseSelector": "section.charge-methodology .w-tab-content > div",
"fields": [
{
"name": "section_title",
"selector": "h3.heading-50",
"type": "text",
},
{
"name": "section_description",
"selector": ".charge-content",
"type": "text",
},
{
"name": "course_name",
"selector": ".text-block-93",
"type": "text",
},
{
"name": "course_description",
"selector": ".course-content-text",
"type": "text",
},
{
"name": "course_icon",
"selector": ".image-92",
"type": "attribute",
"attribute": "src",
},
],
}
async with AsyncWebCrawler(headless=True, verbose=True) as crawler:
# Create the JavaScript that handles clicking multiple times
js_click_tabs = """
(async () => {
const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div");
for(let tab of tabs) {
// scroll to the tab
tab.scrollIntoView();
tab.click();
// Wait for content to load and animations to complete
await new Promise(r => setTimeout(r, 500));
}
})();
"""
result = await crawler.arun(
url="https://www.kidocode.com/degrees/technology",
extraction_strategy=JsonCssExtractionStrategy(schema, verbose=True),
js_code=[js_click_tabs],
cache_mode=CacheMode.BYPASS,
)
companies = json.loads(result.extracted_content)
print(f"Successfully extracted {len(companies)} companies")
print(json.dumps(companies[0], indent=2))
# Advanced Session-Based Crawling with Dynamic Content 🔄
async def crawl_dynamic_content_pages_method_1():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
first_commit = ""
async def on_execution_started(page, **kwargs):
nonlocal first_commit
try:
while True:
await page.wait_for_selector("li.Box-sc-g0xbh4-0 h4")
commit = await page.query_selector("li.Box-sc-g0xbh4-0 h4")
commit = await commit.evaluate("(element) => element.textContent")
commit = re.sub(r"\s+", "", commit)
if commit and commit != first_commit:
first_commit = commit
break
await asyncio.sleep(0.5)
except Exception as e:
print(f"Warning: New content didn't appear after JavaScript execution: {e}")
async with AsyncWebCrawler(verbose=True) as crawler:
crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
js_next_page = """
(() => {
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
})();
"""
for page in range(3): # Crawl 3 pages
result = await crawler.arun(
url=url,
session_id=session_id,
css_selector="li.Box-sc-g0xbh4-0",
js_code=js_next_page if page > 0 else None,
cache_mode=CacheMode.BYPASS,
js_only=page > 0,
headless=False,
)
assert result.success, f"Failed to crawl page {page + 1}"
soup = BeautifulSoup(result.cleaned_html, "html.parser")
commits = soup.select("li")
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
await crawler.crawler_strategy.kill_session(session_id)
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def crawl_dynamic_content_pages_method_2():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
async with AsyncWebCrawler(verbose=True) as crawler:
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
last_commit = ""
js_next_page_and_wait = """
(async () => {
const getCurrentCommit = () => {
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
return commits.length > 0 ? commits[0].textContent.trim() : null;
};
const initialCommit = getCurrentCommit();
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
// Poll for changes
while (true) {
await new Promise(resolve => setTimeout(resolve, 100)); // Wait 100ms
const newCommit = getCurrentCommit();
if (newCommit && newCommit !== initialCommit) {
break;
}
}
})();
"""
schema = {
"name": "Commit Extractor",
"baseSelector": "li.Box-sc-g0xbh4-0",
"fields": [
{
"name": "title",
"selector": "h4.markdown-title",
"type": "text",
"transform": "strip",
},
],
}
extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)
for page in range(3): # Crawl 3 pages
result = await crawler.arun(
url=url,
session_id=session_id,
css_selector="li.Box-sc-g0xbh4-0",
extraction_strategy=extraction_strategy,
js_code=js_next_page_and_wait if page > 0 else None,
js_only=page > 0,
cache_mode=CacheMode.BYPASS,
headless=False,
)
assert result.success, f"Failed to crawl page {page + 1}"
commits = json.loads(result.extracted_content)
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
await crawler.crawler_strategy.kill_session(session_id)
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def crawl_dynamic_content_pages_method_3():
print(
"\n--- Advanced Multi-Page Crawling with JavaScript Execution using `wait_for` ---"
)
async with AsyncWebCrawler(verbose=True) as crawler:
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
js_next_page = """
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
if (commits.length > 0) {
window.firstCommit = commits[0].textContent.trim();
}
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
"""
wait_for = """() => {
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
if (commits.length === 0) return false;
const firstCommit = commits[0].textContent.trim();
return firstCommit !== window.firstCommit;
}"""
schema = {
"name": "Commit Extractor",
"baseSelector": "li.Box-sc-g0xbh4-0",
"fields": [
{
"name": "title",
"selector": "h4.markdown-title",
"type": "text",
"transform": "strip",
},
],
}
extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)
for page in range(3): # Crawl 3 pages
result = await crawler.arun(
url=url,
session_id=session_id,
css_selector="li.Box-sc-g0xbh4-0",
extraction_strategy=extraction_strategy,
js_code=js_next_page if page > 0 else None,
wait_for=wait_for if page > 0 else None,
js_only=page > 0,
cache_mode=CacheMode.BYPASS,
headless=False,
)
assert result.success, f"Failed to crawl page {page + 1}"
commits = json.loads(result.extracted_content)
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
await crawler.crawler_strategy.kill_session(session_id)
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def crawl_custom_browser_type():
# Use Firefox
start = time.time()
async with AsyncWebCrawler(
browser_type="firefox", verbose=True, headless=True
) as crawler:
result = await crawler.arun(
url="https://www.example.com", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500])
print("Time taken: ", time.time() - start)
# Use WebKit
start = time.time()
async with AsyncWebCrawler(
browser_type="webkit", verbose=True, headless=True
) as crawler:
result = await crawler.arun(
url="https://www.example.com", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500])
print("Time taken: ", time.time() - start)
# Use Chromium (default)
start = time.time()
async with AsyncWebCrawler(verbose=True, headless=True) as crawler:
result = await crawler.arun(
url="https://www.example.com", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500])
print("Time taken: ", time.time() - start)
async def crawl_with_user_simulation():
async with AsyncWebCrawler(verbose=True, headless=True) as crawler:
url = "YOUR-URL-HERE"
result = await crawler.arun(
url=url,
cache_mode=CacheMode.BYPASS,
magic=True, # Automatically detects and removes overlays, popups, and other elements that block content
# simulate_user=True,  # Causes a series of random mouse movements and clicks to simulate user interaction
# override_navigator=True  # Overrides the navigator object to make it look like a real user
)
print(result.markdown)
async def speed_comparison():
# print("\n--- Speed Comparison ---")
# print("Firecrawl (simulated):")
# print("Time taken: 7.02 seconds")
# print("Content length: 42074 characters")
# print("Images found: 49")
# print()
# Firecrawl baseline (a real API call, not the simulated numbers above)
from firecrawl import FirecrawlApp
app = FirecrawlApp(api_key=os.environ["FIRECRAWL_API_KEY"])
start = time.time()
scrape_status = app.scrape_url(
"https://www.nbcnews.com/business", params={"formats": ["markdown", "html"]}
)
end = time.time()
print("Firecrawl:")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(scrape_status['markdown'])} characters")
print(f"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}")
print()
async with AsyncWebCrawler() as crawler:
# Crawl4AI simple crawl
start = time.time()
result = await crawler.arun(
url="https://www.nbcnews.com/business",
word_count_threshold=0,
cache_mode=CacheMode.BYPASS,
verbose=False,
)
end = time.time()
print("Crawl4AI (simple crawl):")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(result.markdown)} characters")
print(f"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}")
print()
# Crawl4AI with advanced content filtering
start = time.time()
result = await crawler.arun(
url="https://www.nbcnews.com/business",
word_count_threshold=0,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
)
# content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
),
cache_mode=CacheMode.BYPASS,
verbose=False,
)
end = time.time()
print("Crawl4AI (Markdown Plus):")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(result.markdown.raw_markdown)} characters")
print(f"Fit Markdown: {len(result.markdown.fit_markdown)} characters")
print(f"Images found: {result.markdown.raw_markdown.count('cldnry.s-nbcnews.com')}")
print()
# Crawl4AI with JavaScript execution
start = time.time()
result = await crawler.arun(
url="https://www.nbcnews.com/business",
js_code=[
"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"
],
word_count_threshold=0,
cache_mode=CacheMode.BYPASS,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
)
# content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
),
verbose=False,
)
end = time.time()
print("Crawl4AI (with JavaScript execution):")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(result.markdown.raw_markdown)} characters")
print(f"Fit Markdown: {len(result.markdown.fit_markdown)} characters")
print(f"Images found: {result.markdown.raw_markdown.count('cldnry.s-nbcnews.com')}")
print("\nNote on Speed Comparison:")
print("The speed test conducted here may not reflect optimal conditions.")
print("When we call Firecrawl's API, we're seeing its best performance,")
print("while Crawl4AI's performance is limited by the local network speed.")
print("For a more accurate comparison, it's recommended to run these tests")
print("on servers with a stable and fast internet connection.")
print("Despite these limitations, Crawl4AI still demonstrates faster performance.")
print("If you run these tests in an environment with better network conditions,")
print("you may observe an even more significant speed advantage for Crawl4AI.")
async def generate_knowledge_graph():
class Entity(BaseModel):
name: str
description: str
class Relationship(BaseModel):
entity1: Entity
entity2: Entity
description: str
relation_type: str
class KnowledgeGraph(BaseModel):
entities: List[Entity]
relationships: List[Relationship]
extraction_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY")), # In case of Ollama just pass "no-token"
schema=KnowledgeGraph.model_json_schema(),
extraction_type="schema",
instruction="""Extract entities and relationships from the given text.""",
)
async with AsyncWebCrawler() as crawler:
url = "https://paulgraham.com/love.html"
result = await crawler.arun(
url=url,
cache_mode=CacheMode.BYPASS,
extraction_strategy=extraction_strategy,
# magic=True
)
# print(result.extracted_content)
with open(os.path.join(__location__, "kb.json"), "w") as f:
f.write(result.extracted_content)
async def fit_markdown_remove_overlay():
async with AsyncWebCrawler(
headless=True, # Set to False to see what is happening
verbose=True,
user_agent_mode="random",
user_agent_generator_config={"device_type": "mobile", "os_type": "android"},
) as crawler:
result = await crawler.arun(
url="https://www.kidocode.com/degrees/technology",
cache_mode=CacheMode.BYPASS,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
),
options={"ignore_links": True},
),
# markdown_generator=DefaultMarkdownGenerator(
# content_filter=BM25ContentFilter(user_query="", bm25_threshold=1.0),
# options={
# "ignore_links": True
# }
# ),
)
if result.success:
print(len(result.markdown.raw_markdown))
print(len(result.markdown.markdown_with_citations))
print(len(result.markdown.fit_markdown))
# Save clean html (create the output directory first)
os.makedirs(os.path.join(__location__, "output"), exist_ok=True)
with open(os.path.join(__location__, "output/cleaned_html.html"), "w") as f:
f.write(result.cleaned_html)
with open(
os.path.join(__location__, "output/output_raw_markdown.md"), "w"
) as f:
f.write(result.markdown.raw_markdown)
with open(
os.path.join(__location__, "output/output_markdown_with_citations.md"),
"w",
) as f:
f.write(result.markdown.markdown_with_citations)
with open(
os.path.join(__location__, "output/output_fit_markdown.md"), "w"
) as f:
f.write(result.markdown.fit_markdown)
print("Done")
async def main():
# await extract_structured_data_using_llm("openai/gpt-4o", os.getenv("OPENAI_API_KEY"))
# await simple_crawl()
# await simple_example_with_running_js_code()
# await simple_example_with_css_selector()
# # await use_proxy()
# await capture_and_save_screenshot("https://www.example.com", os.path.join(__location__, "tmp/example_screenshot.jpg"))
# await extract_structured_data_using_css_extractor()
# LLM extraction examples
# await extract_structured_data_using_llm()
# await extract_structured_data_using_llm("huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct", os.getenv("HUGGINGFACE_API_KEY"))
# await extract_structured_data_using_llm("ollama/llama3.2")
# You always can pass custom headers to the extraction strategy
# custom_headers = {
# "Authorization": "Bearer your-custom-token",
# "X-Custom-Header": "Some-Value"
# }
# await extract_structured_data_using_llm(extra_headers=custom_headers)
# await crawl_dynamic_content_pages_method_1()
# await crawl_dynamic_content_pages_method_2()
await crawl_dynamic_content_pages_method_3()
# await crawl_custom_browser_type()
# await speed_comparison()
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -1,412 +0,0 @@
import asyncio
import os
import json
import base64
from pathlib import Path
from typing import List
from crawl4ai import ProxyConfig
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode, CrawlResult
from crawl4ai import RoundRobinProxyStrategy
from crawl4ai import JsonCssExtractionStrategy, LLMExtractionStrategy
from crawl4ai import LLMConfig
from crawl4ai import PruningContentFilter, BM25ContentFilter
from crawl4ai import DefaultMarkdownGenerator
from crawl4ai import BFSDeepCrawlStrategy, DomainFilter, FilterChain
from crawl4ai import BrowserConfig
__cur_dir__ = Path(__file__).parent
async def demo_basic_crawl():
"""Basic web crawling with markdown generation"""
print("\n=== 1. Basic Web Crawling ===")
async with AsyncWebCrawler(config = BrowserConfig(
viewport_height=800,
viewport_width=1200,
headless=True,
verbose=True,
)) as crawler:
results: List[CrawlResult] = await crawler.arun(
url="https://news.ycombinator.com/"
)
for i, result in enumerate(results):
print(f"Result {i + 1}:")
print(f"Success: {result.success}")
if result.success:
print(f"Markdown length: {len(result.markdown.raw_markdown)} chars")
print(f"First 100 chars: {result.markdown.raw_markdown[:100]}...")
else:
print("Failed to crawl the URL")
async def demo_parallel_crawl():
"""Crawl multiple URLs in parallel"""
print("\n=== 2. Parallel Crawling ===")
urls = [
"https://news.ycombinator.com/",
"https://example.com/",
"https://httpbin.org/html",
]
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun_many(
urls=urls,
)
print(f"Crawled {len(results)} URLs in parallel:")
for i, result in enumerate(results):
print(
f" {i + 1}. {result.url} - {'Success' if result.success else 'Failed'}"
)
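# Hedged sketch (addition): compact pass/fail summary for a parallel run like
# the one above; handy when arun_many returns a long list.
def summarize_results(results):
    succeeded = [r.url for r in results if r.success]
    failed = [r.url for r in results if not r.success]
    return {"succeeded": succeeded, "failed": failed}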
async def demo_fit_markdown():
"""Generate focused markdown with LLM content filter"""
print("\n=== 3. Fit Markdown with LLM Content Filter ===")
async with AsyncWebCrawler() as crawler:
result: CrawlResult = await crawler.arun(
url = "https://en.wikipedia.org/wiki/Python_(programming_language)",
config=CrawlerRunConfig(
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter()
)
),
)
# Print stats and save the fit markdown
print(f"Raw: {len(result.markdown.raw_markdown)} chars")
print(f"Fit: {len(result.markdown.fit_markdown)} chars")
async def demo_llm_structured_extraction_no_schema():
# Create a simple LLM extraction strategy (no schema required)
extraction_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(
provider="groq/qwen-2.5-32b",
api_token="env:GROQ_API_KEY",
),
instruction="This is news.ycombinator.com, extract all news, and for each, I want title, source url, number of comments.",
extract_type="schema",
schema="{title: string, url: string, comments: int}",
extra_args={
"temperature": 0.0,
"max_tokens": 4096,
},
verbose=True,
)
config = CrawlerRunConfig(extraction_strategy=extraction_strategy)
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
"https://news.ycombinator.com/", config=config
)
for result in results:
print(f"URL: {result.url}")
print(f"Success: {result.success}")
if result.success:
data = json.loads(result.extracted_content)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
async def demo_css_structured_extraction_no_schema():
"""Extract structured data using CSS selectors"""
print("\n=== 5. CSS-Based Structured Extraction ===")
# Sample HTML for schema generation (one-time cost)
sample_html = """
<div class="body-post clear">
<a class="story-link" href="https://thehackernews.com/2025/04/malicious-python-packages-on-pypi.html">
<div class="clear home-post-box cf">
<div class="home-img clear">
<div class="img-ratio">
<img alt="..." src="...">
</div>
</div>
<div class="clear home-right">
<h2 class="home-title">Malicious Python Packages on PyPI Downloaded 39,000+ Times, Steal Sensitive Data</h2>
<div class="item-label">
<span class="h-datetime"><i class="icon-font icon-calendar"></i>Apr 05, 2025</span>
<span class="h-tags">Malware / Supply Chain Attack</span>
</div>
<div class="home-desc"> Cybersecurity researchers have...</div>
</div>
</div>
</a>
</div>
"""
# Check if schema file exists (create the tmp directory on first use)
os.makedirs(f"{__cur_dir__}/tmp", exist_ok=True)
schema_file_path = f"{__cur_dir__}/tmp/schema.json"
if os.path.exists(schema_file_path):
with open(schema_file_path, "r") as f:
schema = json.load(f)
else:
# Generate schema using LLM (one-time setup)
schema = JsonCssExtractionStrategy.generate_schema(
html=sample_html,
llm_config=LLMConfig(
provider="groq/qwen-2.5-32b",
api_token="env:GROQ_API_KEY",
),
query="From https://thehackernews.com/, I have shared a sample of one news div with a title, date, and description. Please generate a schema for this news div.",
)
print(f"Generated schema: {json.dumps(schema, indent=2)}")
# Save the schema to a file and reuse it for future extractions; this way the LLM is called only once
with open(f"{__cur_dir__}/tmp/schema.json", "w") as f:
json.dump(schema, f, indent=2)
# Create no-LLM extraction strategy with the generated schema
extraction_strategy = JsonCssExtractionStrategy(schema)
config = CrawlerRunConfig(extraction_strategy=extraction_strategy)
# Use the fast CSS extraction (no LLM calls during extraction)
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
"https://thehackernews.com", config=config
)
for result in results:
print(f"URL: {result.url}")
print(f"Success: {result.success}")
if result.success:
data = json.loads(result.extracted_content)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
async def demo_deep_crawl():
"""Deep crawling with BFS strategy"""
print("\n=== 6. Deep Crawling ===")
filter_chain = FilterChain([DomainFilter(allowed_domains=["crawl4ai.com"])])
deep_crawl_strategy = BFSDeepCrawlStrategy(
max_depth=1, max_pages=5, filter_chain=filter_chain
)
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
url="https://docs.crawl4ai.com",
config=CrawlerRunConfig(deep_crawl_strategy=deep_crawl_strategy),
)
print(f"Deep crawl returned {len(results)} pages:")
for i, result in enumerate(results):
depth = result.metadata.get("depth", "unknown")
print(f" {i + 1}. {result.url} (Depth: {depth})")
async def demo_js_interaction():
"""Execute JavaScript to load more content"""
print("\n=== 7. JavaScript Interaction ===")
# A simple page that needs JS to reveal content
async with AsyncWebCrawler(config=BrowserConfig(headless=False)) as crawler:
# Initial load
news_schema = {
"name": "news",
"baseSelector": "tr.athing",
"fields": [
{
"name": "title",
"selector": "span.titleline",
"type": "text",
}
],
}
results: List[CrawlResult] = await crawler.arun(
url="https://news.ycombinator.com",
config=CrawlerRunConfig(
session_id="hn_session", # Keep session
extraction_strategy=JsonCssExtractionStrategy(schema=news_schema),
),
)
news = []
for result in results:
if result.success:
data = json.loads(result.extracted_content)
news.extend(data)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
print(f"Initial items: {len(news)}")
# Click "More" link
more_config = CrawlerRunConfig(
js_code="document.querySelector('a.morelink').click();",
js_only=True, # Continue in same page
session_id="hn_session", # Keep session
extraction_strategy=JsonCssExtractionStrategy(
schema=news_schema,
),
)
results: List[CrawlResult] = await crawler.arun(
url="https://news.ycombinator.com", config=more_config
)
# Extract new items
for result in results:
if result.success:
data = json.loads(result.extracted_content)
news.extend(data)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
print(f"Total items: {len(news)}")
async def demo_media_and_links():
"""Extract media and links from a page"""
print("\n=== 8. Media and Links Extraction ===")
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun("https://en.wikipedia.org/wiki/Main_Page")
for i, result in enumerate(results):
# Extract and save all images
images = result.media.get("images", [])
print(f"Found {len(images)} images")
# Extract and save all links (internal and external)
internal_links = result.links.get("internal", [])
external_links = result.links.get("external", [])
print(f"Found {len(internal_links)} internal links")
print(f"Found {len(external_links)} external links")
# Print some of the images and links
for image in images[:3]:
print(f"Image: {image['src']}")
for link in internal_links[:3]:
print(f"Internal link: {link['href']}")
for link in external_links[:3]:
print(f"External link: {link['href']}")
# Save everything to files
with open(f"{__cur_dir__}/tmp/images.json", "w") as f:
json.dump(images, f, indent=2)
with open(f"{__cur_dir__}/tmp/links.json", "w") as f:
json.dump(
{"internal": internal_links, "external": external_links},
f,
indent=2,
)
async def demo_screenshot_and_pdf():
"""Capture screenshot and PDF of a page"""
print("\n=== 9. Screenshot and PDF Capture ===")
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
# url="https://example.com",
url="https://en.wikipedia.org/wiki/Giant_anteater",
config=CrawlerRunConfig(screenshot=True, pdf=True),
)
for i, result in enumerate(results):
# if result.screenshot_data:
if result.screenshot:
# Save screenshot
screenshot_path = f"{__cur_dir__}/tmp/example_screenshot.png"
with open(screenshot_path, "wb") as f:
f.write(base64.b64decode(result.screenshot))
print(f"Screenshot saved to {screenshot_path}")
# if result.pdf_data:
if result.pdf:
# Save PDF
pdf_path = f"{__cur_dir__}/tmp/example.pdf"
with open(pdf_path, "wb") as f:
f.write(result.pdf)
print(f"PDF saved to {pdf_path}")
async def demo_proxy_rotation():
"""Proxy rotation for multiple requests"""
print("\n=== 10. Proxy Rotation ===")
# Example proxies (replace with real ones)
proxies = [
ProxyConfig(server="http://proxy1.example.com:8080"),
ProxyConfig(server="http://proxy2.example.com:8080"),
]
proxy_strategy = RoundRobinProxyStrategy(proxies)
print(f"Using {len(proxies)} proxies in rotation")
print(
"Note: This example uses placeholder proxies - replace with real ones to test"
)
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
proxy_rotation_strategy=proxy_strategy
)
# In a real scenario, these would be run and the proxies would rotate
print("In a real scenario, requests would rotate through the available proxies")
async def demo_raw_html_and_file():
"""Process raw HTML and local files"""
print("\n=== 11. Raw HTML and Local Files ===")
raw_html = """
<html><body>
<h1>Sample Article</h1>
<p>This is sample content for testing Crawl4AI's raw HTML processing.</p>
</body></html>
"""
# Save to a local file (under this example's tmp directory, created if missing)
tmp_dir = __cur_dir__ / "tmp"
tmp_dir.mkdir(parents=True, exist_ok=True)
file_path = (tmp_dir / "sample.html").absolute()
with open(file_path, "w") as f:
f.write(raw_html)
async with AsyncWebCrawler() as crawler:
# Crawl raw HTML
raw_result = await crawler.arun(
url="raw:" + raw_html, config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
)
print("Raw HTML processing:")
print(f" Markdown: {raw_result.markdown.raw_markdown[:50]}...")
# Crawl local file
file_result = await crawler.arun(
url=f"file://{file_path}",
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
)
print("\nLocal file processing:")
print(f" Markdown: {file_result.markdown.raw_markdown[:50]}...")
# Clean up
os.remove(file_path)
print(f"Processed both raw HTML and local file ({file_path})")
async def main():
"""Run all demo functions sequentially"""
print("=== Comprehensive Crawl4AI Demo ===")
print("Note: Some examples require API keys or other configurations")
# Run all demos
await demo_basic_crawl()
await demo_parallel_crawl()
await demo_fit_markdown()
await demo_llm_structured_extraction_no_schema()
await demo_css_structured_extraction_no_schema()
await demo_deep_crawl()
await demo_js_interaction()
await demo_media_and_links()
await demo_screenshot_and_pdf()
# await demo_proxy_rotation()
await demo_raw_html_and_file()
# Clean up any temp files that may have been created
print("\n=== Demo Complete ===")
print("Check for any generated files (screenshots, PDFs) in the current directory")
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -1,562 +0,0 @@
import os, sys
from crawl4ai.types import LLMConfig
sys.path.append(
os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
)
import asyncio
import time
import json
import re
from typing import Dict
from bs4 import BeautifulSoup
from pydantic import BaseModel, Field
from crawl4ai import AsyncWebCrawler, CacheMode, BrowserConfig, CrawlerRunConfig
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
from crawl4ai.content_filter_strategy import PruningContentFilter
from crawl4ai.extraction_strategy import (
JsonCssExtractionStrategy,
LLMExtractionStrategy,
)
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
print("Crawl4AI: Advanced Web Crawling and Data Extraction")
print("GitHub Repository: https://github.com/unclecode/crawl4ai")
print("Twitter: @unclecode")
print("Website: https://crawl4ai.com")
# Basic Example - Simple Crawl
async def simple_crawl():
print("\n--- Basic Usage ---")
browser_config = BrowserConfig(headless=True)
crawler_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
print(result.markdown[:500])
async def clean_content():
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
excluded_tags=["nav", "footer", "aside"],
remove_overlay_elements=True,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
),
options={"ignore_links": True},
),
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://en.wikipedia.org/wiki/Apple",
config=crawler_config,
)
full_markdown_length = len(result.markdown.raw_markdown)
fit_markdown_length = len(result.markdown.fit_markdown)
print(f"Full Markdown Length: {full_markdown_length}")
print(f"Fit Markdown Length: {fit_markdown_length}")
async def link_analysis():
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.ENABLED,
exclude_external_links=True,
exclude_social_media_links=True,
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business",
config=crawler_config,
)
print(f"Found {len(result.links['internal'])} internal links")
print(f"Found {len(result.links['external'])} external links")
for link in result.links["internal"][:5]:
print(f"Href: {link['href']}\nText: {link['text']}\n")
# JavaScript Execution Example
async def simple_example_with_running_js_code():
print("\n--- Executing JavaScript and Using CSS Selectors ---")
browser_config = BrowserConfig(headless=True, java_script_enabled=True)
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
js_code="const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();",
# wait_for="() => { return Array.from(document.querySelectorAll('article.tease-card')).length > 10; }"
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
print(result.markdown[:500])
# CSS Selector Example
async def simple_example_with_css_selector():
print("\n--- Using CSS Selectors ---")
browser_config = BrowserConfig(headless=True)
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS, css_selector=".wide-tease-item__description"
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
print(result.markdown[:500])
async def media_handling():
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS, exclude_external_images=True, screenshot=True
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
for img in result.media["images"][:5]:
print(f"Image URL: {img['src']}, Alt: {img['alt']}, Score: {img['score']}")
async def custom_hook_workflow(verbose=True):
async with AsyncWebCrawler() as crawler:
# Set a 'before_goto' hook to run custom code just before navigation
crawler.crawler_strategy.set_hook(
"before_goto",
lambda page, context: print("[Hook] Preparing to navigate..."),
)
# Perform the crawl operation
result = await crawler.arun(url="https://crawl4ai.com")
print(result.markdown.raw_markdown[:500].replace("\n", " -- "))
# Proxy Example
async def use_proxy():
print("\n--- Using a Proxy ---")
browser_config = BrowserConfig(
headless=True,
proxy_config={
"server": "http://proxy.example.com:8080",
"username": "username",
"password": "password",
},
)
crawler_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
if result.success:
print(result.markdown[:500])
# Screenshot Example
async def capture_and_save_screenshot(url: str, output_path: str):
browser_config = BrowserConfig(headless=True)
crawler_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, screenshot=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url=url, config=crawler_config)
if result.success and result.screenshot:
import base64
screenshot_data = base64.b64decode(result.screenshot)
with open(output_path, "wb") as f:
f.write(screenshot_data)
print(f"Screenshot saved successfully to {output_path}")
else:
print("Failed to capture screenshot")
# LLM Extraction Example
class OpenAIModelFee(BaseModel):
model_name: str = Field(..., description="Name of the OpenAI model.")
input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
output_fee: str = Field(
..., description="Fee for output token for the OpenAI model."
)
async def extract_structured_data_using_llm(
provider: str, api_token: str = None, extra_headers: Dict[str, str] = None
):
print(f"\n--- Extracting Structured Data with {provider} ---")
if api_token is None and provider != "ollama":
print(f"API token is required for {provider}. Skipping this example.")
return
browser_config = BrowserConfig(headless=True)
extra_args = {"temperature": 0, "top_p": 0.9, "max_tokens": 2000}
if extra_headers:
extra_args["extra_headers"] = extra_headers
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
word_count_threshold=1,
page_timeout=80000,
extraction_strategy=LLMExtractionStrategy(
llm_config=LLMConfig(provider=provider,api_token=api_token),
schema=OpenAIModelFee.model_json_schema(),
extraction_type="schema",
instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
Do not miss any models in the entire content.""",
extra_args=extra_args,
),
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://openai.com/api/pricing/", config=crawler_config
)
print(result.extracted_content)
# CSS Extraction Example
async def extract_structured_data_using_css_extractor():
print("\n--- Using JsonCssExtractionStrategy for Fast Structured Output ---")
schema = {
"name": "KidoCode Courses",
"baseSelector": "section.charge-methodology .framework-collection-item.w-dyn-item",
"fields": [
{
"name": "section_title",
"selector": "h3.heading-50",
"type": "text",
},
{
"name": "section_description",
"selector": ".charge-content",
"type": "text",
},
{
"name": "course_name",
"selector": ".text-block-93",
"type": "text",
},
{
"name": "course_description",
"selector": ".course-content-text",
"type": "text",
},
{
"name": "course_icon",
"selector": ".image-92",
"type": "attribute",
"attribute": "src",
},
],
}
browser_config = BrowserConfig(headless=True, java_script_enabled=True)
js_click_tabs = """
(async () => {
const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div");
for(let tab of tabs) {
tab.scrollIntoView();
tab.click();
await new Promise(r => setTimeout(r, 500));
}
})();
"""
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
extraction_strategy=JsonCssExtractionStrategy(schema),
js_code=[js_click_tabs],
delay_before_return_html=1
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.kidocode.com/degrees/technology", config=crawler_config
)
companies = json.loads(result.extracted_content)
print(f"Successfully extracted {len(companies)} companies")
print(json.dumps(companies[0], indent=2))
# Dynamic Content Examples - Method 1
async def crawl_dynamic_content_pages_method_1():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
first_commit = ""
async def on_execution_started(page, **kwargs):
nonlocal first_commit
try:
while True:
await page.wait_for_selector("li.Box-sc-g0xbh4-0 h4")
commit = await page.query_selector("li.Box-sc-g0xbh4-0 h4")
commit = await commit.evaluate("(element) => element.textContent")
commit = re.sub(r"\s+", "", commit)
if commit and commit != first_commit:
first_commit = commit
break
await asyncio.sleep(0.5)
except Exception as e:
print(f"Warning: New content didn't appear after JavaScript execution: {e}")
browser_config = BrowserConfig(headless=False, java_script_enabled=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
js_next_page = """
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
"""
for page in range(3):
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
css_selector="li.Box-sc-g0xbh4-0",
js_code=js_next_page if page > 0 else None,
js_only=page > 0,
session_id=session_id,
)
result = await crawler.arun(url=url, config=crawler_config)
assert result.success, f"Failed to crawl page {page + 1}"
soup = BeautifulSoup(result.cleaned_html, "html.parser")
commits = soup.select("li")
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
# Dynamic Content Examples - Method 2
async def crawl_dynamic_content_pages_method_2():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
browser_config = BrowserConfig(headless=False, java_script_enabled=True)
js_next_page_and_wait = """
(async () => {
const getCurrentCommit = () => {
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
return commits.length > 0 ? commits[0].textContent.trim() : null;
};
const initialCommit = getCurrentCommit();
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
while (true) {
await new Promise(resolve => setTimeout(resolve, 100));
const newCommit = getCurrentCommit();
if (newCommit && newCommit !== initialCommit) {
break;
}
}
})();
"""
schema = {
"name": "Commit Extractor",
"baseSelector": "li.Box-sc-g0xbh4-0",
"fields": [
{
"name": "title",
"selector": "h4.markdown-title",
"type": "text",
"transform": "strip",
},
],
}
async with AsyncWebCrawler(config=browser_config) as crawler:
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
extraction_strategy = JsonCssExtractionStrategy(schema)
for page in range(3):
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
css_selector="li.Box-sc-g0xbh4-0",
extraction_strategy=extraction_strategy,
js_code=js_next_page_and_wait if page > 0 else None,
js_only=page > 0,
session_id=session_id,
)
result = await crawler.arun(url=url, config=crawler_config)
assert result.success, f"Failed to crawl page {page + 1}"
commits = json.loads(result.extracted_content)
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def cosine_similarity_extraction():
from crawl4ai.extraction_strategy import CosineStrategy
crawl_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
extraction_strategy=CosineStrategy(
word_count_threshold=10,
max_dist=0.2, # Maximum distance between two words
linkage_method="ward", # Linkage method for hierarchical clustering (ward, complete, average, single)
top_k=3, # Number of top keywords to extract
sim_threshold=0.3, # Similarity threshold for clustering
semantic_filter="McDonald's economic impact, American consumer trends", # Keywords to filter the content semantically using embeddings
verbose=True,
),
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156",
config=crawl_config,
)
print(json.loads(result.extracted_content)[:5])
# Browser Comparison
async def crawl_custom_browser_type():
print("\n--- Browser Comparison ---")
# Compare startup + first-crawl time across the supported engines
# (chromium is crawl4ai's default)
for browser_type in ("firefox", "webkit", "chromium"):
    browser_config = BrowserConfig(browser_type=browser_type, headless=True)
    start = time.time()
    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun(
            url="https://www.example.com",
            config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
        )
        print(f"{browser_type.capitalize()}:", time.time() - start)
        print(result.markdown[:500])
# Anti-Bot and User Simulation
async def crawl_with_user_simulation():
browser_config = BrowserConfig(
headless=True,
user_agent_mode="random",
user_agent_generator_config={"device_type": "mobile", "os_type": "android"},
)
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
magic=True,
simulate_user=True,
override_navigator=True,
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url="YOUR-URL-HERE", config=crawler_config)
print(result.markdown)
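# magic=True bundles several anti-bot measures; simulate_user and
# override_navigator are listed explicitly above so a subset can be toggled
# on its own when full magic mode is not wanted.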
async def ssl_certification():
# Configure crawler to fetch SSL certificate
config = CrawlerRunConfig(
fetch_ssl_certificate=True,
cache_mode=CacheMode.BYPASS, # Bypass cache to always get fresh certificates
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url="https://example.com", config=config)
if result.success and result.ssl_certificate:
cert = result.ssl_certificate
tmp_dir = os.path.join(__location__, "tmp")
os.makedirs(tmp_dir, exist_ok=True)
# 1. Access certificate properties directly
print("\nCertificate Information:")
print(f"Issuer: {cert.issuer.get('CN', '')}")
print(f"Valid until: {cert.valid_until}")
print(f"Fingerprint: {cert.fingerprint}")
# 2. Export certificate in different formats
cert.to_json(os.path.join(tmp_dir, "certificate.json")) # For analysis
print("\nCertificate exported to:")
print(f"- JSON: {os.path.join(tmp_dir, 'certificate.json')}")
pem_data = cert.to_pem(
os.path.join(tmp_dir, "certificate.pem")
) # For web servers
print(f"- PEM: {os.path.join(tmp_dir, 'certificate.pem')}")
der_data = cert.to_der(
os.path.join(tmp_dir, "certificate.der")
) # For Java apps
print(f"- DER: {os.path.join(tmp_dir, 'certificate.der')}")
# Main execution
async def main():
# Basic examples
await simple_crawl()
await simple_example_with_running_js_code()
await simple_example_with_css_selector()
# Advanced examples
await extract_structured_data_using_css_extractor()
await extract_structured_data_using_llm(
"openai/gpt-4o", os.getenv("OPENAI_API_KEY")
)
await crawl_dynamic_content_pages_method_1()
await crawl_dynamic_content_pages_method_2()
# Browser comparisons
await crawl_custom_browser_type()
# Screenshot example
await capture_and_save_screenshot(
"https://www.example.com",
os.path.join(__location__, "tmp/example_screenshot.jpg")
)
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,405 @@
import os
import time
import json
import base64
from crawl4ai import LLMConfig
from crawl4ai.web_crawler import WebCrawler
from crawl4ai.chunking_strategy import *
from crawl4ai.extraction_strategy import *
from crawl4ai.crawler_strategy import *
from rich import print
from rich.console import Console
from functools import lru_cache
console = Console()
@lru_cache()
def create_crawler():
crawler = WebCrawler(verbose=True)
crawler.warmup()
return crawler
def print_result(result):
# Print each key on one line with just the first 20 characters of its value and an ellipsis
console.print("\t[bold]Result:[/bold]")
for key, value in result.model_dump().items():
if isinstance(value, str) and value:
console.print(f"\t{key}: [green]{value[:20]}...[/green]")
if result.extracted_content:
items = json.loads(result.extracted_content)
print(f"\t[bold]{len(items)} blocks is extracted![/bold]")
def cprint(message, press_any_key=False):
console.print(message)
if press_any_key:
console.print("Press any key to continue...", style="")
input()
def basic_usage(crawler):
cprint(
"🛠️ [bold cyan]Basic Usage: Simply provide a URL and let Crawl4ai do the magic![/bold cyan]"
)
result = crawler.run(url="https://www.nbcnews.com/business", only_text=True)
cprint("[LOG] 📦 [bold yellow]Basic crawl result:[/bold yellow]")
print_result(result)
def basic_usage_some_params(crawler):
cprint(
"🛠️ [bold cyan]Basic Usage: Simply provide a URL and let Crawl4ai do the magic![/bold cyan]"
)
result = crawler.run(
url="https://www.nbcnews.com/business", word_count_threshold=1, only_text=True
)
cprint("[LOG] 📦 [bold yellow]Basic crawl result:[/bold yellow]")
print_result(result)
def screenshot_usage(crawler):
cprint("\n📸 [bold cyan]Let's take a screenshot of the page![/bold cyan]")
result = crawler.run(url="https://www.nbcnews.com/business", screenshot=True)
cprint("[LOG] 📦 [bold yellow]Screenshot result:[/bold yellow]")
# Save the screenshot to a file
with open("screenshot.png", "wb") as f:
f.write(base64.b64decode(result.screenshot))
cprint("Screenshot saved to 'screenshot.png'!")
print_result(result)
def understanding_parameters(crawler):
cprint(
"\n🧠 [bold cyan]Understanding 'bypass_cache' and 'include_raw_html' parameters:[/bold cyan]"
)
cprint(
"By default, Crawl4ai caches the results of your crawls. This means that subsequent crawls of the same URL will be much faster! Let's see this in action."
)
# First crawl (populates the cache)
cprint("1️⃣ First crawl (populates the cache):", True)
start_time = time.time()
result = crawler.run(url="https://www.nbcnews.com/business")
end_time = time.time()
cprint(
f"[LOG] 📦 [bold yellow]First crawl took {end_time - start_time:.2f} seconds; the result is now cached:[/bold yellow]"
)
print_result(result)
# Second crawl (force a fresh crawl, bypassing the cache)
cprint("2️⃣ Second crawl (forced to crawl again):", True)
start_time = time.time()
result = crawler.run(url="https://www.nbcnews.com/business", bypass_cache=True)
end_time = time.time()
cprint(
f"[LOG] 📦 [bold yellow]Second crawl took {end_time - start_time:.2f} seconds; the cache was bypassed:[/bold yellow]"
)
print_result(result)
def add_chunking_strategy(crawler):
# Adding a chunking strategy: RegexChunking
cprint(
"\n🧩 [bold cyan]Let's add a chunking strategy: RegexChunking![/bold cyan]",
True,
)
cprint(
"RegexChunking is a simple chunking strategy that splits the text based on a given regex pattern. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
chunking_strategy=RegexChunking(patterns=["\n\n"]),
)
cprint("[LOG] 📦 [bold yellow]RegexChunking result:[/bold yellow]")
print_result(result)
# Adding another chunking strategy: NlpSentenceChunking
cprint(
"\n🔍 [bold cyan]Time to explore another chunking strategy: NlpSentenceChunking![/bold cyan]",
True,
)
cprint(
"NlpSentenceChunking uses NLP techniques to split the text into sentences. Let's see how it performs!"
)
result = crawler.run(
url="https://www.nbcnews.com/business", chunking_strategy=NlpSentenceChunking()
)
cprint("[LOG] 📦 [bold yellow]NlpSentenceChunking result:[/bold yellow]")
print_result(result)
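# Chunking strategies can also be exercised directly on raw text; a minimal
# sketch, assuming the strategy interface's chunk() method:
#   chunks = RegexChunking(patterns=["\n\n"]).chunk("para one\n\npara two")
#   # expected: ["para one", "para two"]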
def add_extraction_strategy(crawler):
# Adding an extraction strategy: CosineStrategy
cprint(
"\n🧠 [bold cyan]Let's get smarter with an extraction strategy: CosineStrategy![/bold cyan]",
True,
)
cprint(
"CosineStrategy uses cosine similarity to extract semantically similar blocks of text. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=CosineStrategy(
word_count_threshold=10,
max_dist=0.2,
linkage_method="ward",
top_k=3,
sim_threshold=0.3,
verbose=True,
),
)
cprint("[LOG] 📦 [bold yellow]CosineStrategy result:[/bold yellow]")
print_result(result)
# Using semantic_filter with CosineStrategy
cprint(
"You can pass other parameters like 'semantic_filter' to the CosineStrategy to extract semantically similar blocks of text. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=CosineStrategy(
semantic_filter="inflation rent prices",
),
)
cprint(
"[LOG] 📦 [bold yellow]CosineStrategy result with semantic filter:[/bold yellow]"
)
print_result(result)
def add_llm_extraction_strategy(crawler):
# Adding an LLM extraction strategy without instructions
cprint(
"\n🤖 [bold cyan]Time to bring in the big guns: LLMExtractionStrategy without instructions![/bold cyan]",
True,
)
cprint(
"LLMExtractionStrategy uses a large language model to extract relevant information from the web page. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=LLMExtractionStrategy(
llm_config = LLMConfig(provider="openai/gpt-4o", api_token=os.getenv("OPENAI_API_KEY"))
),
)
cprint(
"[LOG] 📦 [bold yellow]LLMExtractionStrategy (no instructions) result:[/bold yellow]"
)
print_result(result)
# Adding an LLM extraction strategy with instructions
cprint(
"\n📜 [bold cyan]Let's make it even more interesting: LLMExtractionStrategy with instructions![/bold cyan]",
True,
)
cprint(
"Let's say we are only interested in financial news. Let's see how LLMExtractionStrategy performs with instructions!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o",api_token=os.getenv("OPENAI_API_KEY")),
instruction="I am interested in only financial news",
),
)
cprint(
"[LOG] 📦 [bold yellow]LLMExtractionStrategy (with instructions) result:[/bold yellow]"
)
print_result(result)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o",api_token=os.getenv("OPENAI_API_KEY")),
instruction="Extract only content related to technology",
),
)
cprint(
"[LOG] 📦 [bold yellow]LLMExtractionStrategy (with technology instruction) result:[/bold yellow]"
)
print_result(result)
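# LLMExtractionStrategy can also emit structured records; a minimal sketch,
# assuming the schema-based mode available in recent crawl4ai versions:
#   extraction_strategy=LLMExtractionStrategy(
#       llm_config=LLMConfig(provider="openai/gpt-4o", api_token=os.getenv("OPENAI_API_KEY")),
#       schema={"type": "object", "properties": {"headline": {"type": "string"}}},
#       extraction_type="schema",
#       instruction="Extract each article's headline",
#   )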
def targeted_extraction(crawler):
# Using a CSS selector to extract only H2 tags
cprint(
"\n🎯 [bold cyan]Targeted extraction: Let's use a CSS selector to extract only H2 tags![/bold cyan]",
True,
)
result = crawler.run(url="https://www.nbcnews.com/business", css_selector="h2")
cprint("[LOG] 📦 [bold yellow]CSS Selector (H2 tags) result:[/bold yellow]")
print_result(result)
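# css_selector accepts any valid CSS selector string, so several targets can
# be combined in one pass, e.g. (a sketch):
#   crawler.run(url="https://www.nbcnews.com/business", css_selector="h2, h3")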
def interactive_extraction(crawler):
# Passing JavaScript code to interact with the page
cprint(
"\n🖱️ [bold cyan]Let's get interactive: Passing JavaScript code to click 'Load More' button![/bold cyan]",
True,
)
cprint(
"In this example we try to click the 'Load More' button on the page using JavaScript code."
)
js_code = """
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
loadMoreButton && loadMoreButton.click();
"""
# crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
# crawler = WebCrawler(crawler_strategy=crawler_strategy, always_by_pass_cache=True)
result = crawler.run(url="https://www.nbcnews.com/business", js=js_code)
cprint(
"[LOG] 📦 [bold yellow]JavaScript Code (Load More button) result:[/bold yellow]"
)
print_result(result)
def multiple_scripts(crawler):
# Passing JavaScript code to interact with the page
cprint(
"\n🖱️ [bold cyan]Let's get interactive: Passing multiple JavaScript snippets to click 'Load More' twice![/bold cyan]",
True,
)
cprint(
"In this example we click the 'Load More' button twice by passing a list of JavaScript snippets."
)
js_code = [
"""
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
loadMoreButton && loadMoreButton.click();
"""
] * 2
# crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
# crawler = WebCrawler(crawler_strategy=crawler_strategy, always_by_pass_cache=True)
result = crawler.run(url="https://www.nbcnews.com/business", js=js_code)
cprint(
"[LOG] 📦 [bold yellow]JavaScript Code (Load More button) result:[/bold yellow]"
)
print_result(result)
def using_crawler_hooks(crawler):
# Example usage of the hooks for authentication and setting a cookie
def on_driver_created(driver):
print("[HOOK] on_driver_created")
# Example customization: maximize the window
driver.maximize_window()
# Example customization: logging in to a hypothetical website
driver.get("https://example.com/login")
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.NAME, "username"))
)
driver.find_element(By.NAME, "username").send_keys("testuser")
driver.find_element(By.NAME, "password").send_keys("password123")
driver.find_element(By.NAME, "login").click()
WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.ID, "welcome"))
)
# Add a custom cookie
driver.add_cookie({"name": "test_cookie", "value": "cookie_value"})
return driver
def before_get_url(driver):
print("[HOOK] before_get_url")
# Example customization: add a custom header
# Enable Network domain for sending headers
driver.execute_cdp_cmd("Network.enable", {})
# Add a custom header
driver.execute_cdp_cmd(
"Network.setExtraHTTPHeaders", {"headers": {"X-Test-Header": "test"}}
)
return driver
def after_get_url(driver):
print("[HOOK] after_get_url")
# Example customization: log the URL
print(driver.current_url)
return driver
def before_return_html(driver, html):
print("[HOOK] before_return_html")
# Example customization: log the HTML
print(len(html))
return driver
cprint(
"\n🔗 [bold cyan]Using Crawler Hooks: Let's see how we can customize the crawler using hooks![/bold cyan]",
True,
)
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
crawler_strategy.set_hook("on_driver_created", on_driver_created)
crawler_strategy.set_hook("before_get_url", before_get_url)
crawler_strategy.set_hook("after_get_url", after_get_url)
crawler_strategy.set_hook("before_return_html", before_return_html)
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
crawler.warmup()
result = crawler.run(url="https://example.com")
cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
print_result(result=result)
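# Hook order for the Selenium strategy, as exercised above: on_driver_created
# fires once at startup, then before_get_url / after_get_url wrap each
# navigation, and before_return_html runs just before the HTML is handed back.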
def using_crawler_hooks_delay_example(crawler):
def delay(driver):
print("Delaying for 5 seconds...")
time.sleep(5)
print("Resuming...")
def create_crawler():
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
crawler_strategy.set_hook("after_get_url", delay)
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
crawler.warmup()
return crawler
cprint(
"\n🔗 [bold cyan]Using Crawler Hooks: Let's add a delay after fetching the url to make sure entire page is fetched.[/bold cyan]"
)
crawler = create_crawler()
result = crawler.run(url="https://google.com", bypass_cache=True)
cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
print_result(result)
def main():
cprint(
"🌟 [bold green]Welcome to the Crawl4ai Quickstart Guide! Let's dive into some web crawling fun! 🌐[/bold green]"
)
cprint(
"⛳️ [bold cyan]First Step: Create an instance of WebCrawler and call the `warmup()` function.[/bold cyan]"
)
cprint(
"If this is the first time you're running Crawl4ai, this might take a few seconds to load required model files."
)
crawler = create_crawler()
basic_usage(crawler)
# basic_usage_some_params(crawler)
understanding_parameters(crawler)
# Bypass the cache for the remaining demos so each one does a fresh crawl
crawler.always_by_pass_cache = True
screenshot_usage(crawler)
add_chunking_strategy(crawler)
add_extraction_strategy(crawler)
add_llm_extraction_strategy(crawler)
targeted_extraction(crawler)
interactive_extraction(crawler)
multiple_scripts(crawler)
cprint(
"\n🎉 [bold green]Congratulations! You've made it through the Crawl4ai Quickstart Guide! Now go forth and crawl the web like a pro! 🕸️[/bold green]"
)
if __name__ == "__main__":
main()

Some files were not shown because too many files have changed in this diff.