Compare commits


67 Commits

Author SHA1 Message Date
UncleCode
cd7ff6f9c1 feat(docs): add AI assistant interface and code copy button
Add new AI assistant chat interface with features:
- Real-time chat with markdown support
- Chat history management
- Citation tracking
- Selection-to-query functionality

Also adds a code copy button to documentation code blocks and adjusts layout/styling.

Breaking changes: None
2025-04-14 23:00:47 +08:00
UncleCode
c56974cf59 feat(docs): enhance documentation UI with ToC and GitHub stats
Add new features to documentation UI:
- Add table of contents with scroll spy functionality
- Add GitHub repository statistics badge
- Implement new centered layout system with fixed sidebar
- Add conditional Playwright installation based on CRAWL4AI_MODE

Breaking changes: None
2025-04-14 20:46:32 +08:00
UncleCode
ecec53a8c1 Docker tested on Windows machine. 2025-04-13 20:14:41 +08:00
UncleCode
3179d6ad0c fix(core): improve error handling and stability in core components
Enhance error handling and stability across multiple components:
- Add safety checks in async_configs.py for type and params existence
- Fix browser manager initialization and cleanup logic
- Add default LLM config fallback in extraction strategy
- Add comprehensive Docker deployment guide and server tests

BREAKING CHANGE: BrowserManager.start() now automatically closes existing instances
2025-04-11 20:58:39 +08:00
UncleCode
108b2a8bfb Fixed console message capturing for the case where the URL is a local file. Updated Docker configuration (work in progress) 2025-04-10 23:22:38 +08:00
unclecode
66ac07b4f3 feat(crawler): add network request and console message capturing
Implement comprehensive network request and console message capturing functionality:
- Add capture_network_requests and capture_console_messages config parameters
- Add network_requests and console_messages fields to models
- Implement Playwright event listeners to capture requests, responses, and console output
- Create detailed documentation and examples
- Add comprehensive tests

This feature enables deep visibility into web page activity for debugging,
security analysis, performance profiling, and API discovery in web applications.
2025-04-10 16:03:48 +08:00
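
A minimal usage sketch, assuming only the parameter and field names this commit introduces (the URL is a placeholder):

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    # Enable both capture types added in this commit
    config = CrawlerRunConfig(
        capture_network_requests=True,
        capture_console_messages=True,
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=config)
        # Each captured entry is a dict with metadata such as URL, status, and timestamp
        print(f"network events: {len(result.network_requests or [])}")
        print(f"console messages: {len(result.console_messages or [])}")

asyncio.run(main())
```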
UncleCode
a2061bf31e feat(crawler): add MHTML capture functionality
Add ability to capture web pages as MHTML format, which includes all page resources
in a single file. This enables complete page archival and offline viewing.

- Add capture_mhtml parameter to CrawlerRunConfig
- Implement MHTML capture using CDP in AsyncPlaywrightCrawlerStrategy
- Add mhtml field to CrawlResult and AsyncCrawlResponse models
- Add comprehensive tests for MHTML capture functionality
- Update documentation with MHTML capture details
- Add exclude_all_images option for better memory management

Breaking changes: None
2025-04-09 15:39:04 +08:00
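
A short sketch of the new capture flag, using the `capture_mhtml` parameter and `mhtml` result field named in this commit:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(capture_mhtml=True)
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=config)
        # result.mhtml is None unless capture_mhtml=True was set
        if result.mhtml:
            with open("snapshot.mhtml", "w", encoding="utf-8") as f:
                f.write(result.mhtml)

asyncio.run(main())
```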
UncleCode
9038e9acbd Merge branch 'main' into next 2025-04-08 17:43:42 +08:00
UncleCode
02e627e0bd fix(crawler): simplify page retrieval logic in AsyncPlaywrightCrawlerStrategy 2025-04-08 17:43:36 +08:00
UncleCode
5b66208a7e Refactor next branch 2025-04-06 18:33:09 +08:00
UncleCode
591f55edc7 refactor(browser): rename methods and update type hints in BrowserHub for clarity 2025-04-06 18:22:05 +08:00
UncleCode
e1d9e2489c refactor(docs): update import statement in quickstart.py for improved clarity 2025-04-05 23:12:06 +08:00
UncleCode
b1693b1c21 Remove old quickstart files 2025-04-05 23:10:25 +08:00
UncleCode
49d904ca0a refactor(docs): enhance quickstart_examples.py with improved configuration and file handling 2025-04-05 22:57:45 +08:00
UncleCode
ca9351252a refactor(docs): update import paths and clean up example code in quickstart_examples.py 2025-04-05 22:55:56 +08:00
UncleCode
935d9d39f8 Add quickstart example set 2025-04-05 21:37:25 +08:00
UncleCode
f8213c32b9 Merge branch 'vr0.5.0.post8' 2025-04-05 21:36:17 +08:00
UncleCode
14894b4d70 feat(config): set DefaultMarkdownGenerator as the default markdown generator in CrawlerRunConfig
feat(logger): add color mapping for log message formatting options
2025-04-03 20:34:19 +08:00
UncleCode
86df20234b fix(crawler): handle exceptions in get_page call to ensure page retrieval 2025-04-02 21:25:24 +08:00
UncleCode
179921a131 fix(crawler): update get_page call to include additional return value 2025-04-02 19:01:30 +08:00
UncleCode
c5cac2b459 feat(browser): add BrowserHub for centralized browser management and resource sharing 2025-04-01 20:35:02 +08:00
UncleCode
555455d710 feat(browser): implement browser pooling and page pre-warming
Adds a new BrowserManager implementation with browser pooling and page pre-warming capabilities:
- Adds support for managing multiple browser instances per configuration
- Implements page pre-warming for improved performance
- Adds configurable behavior for when no browsers are available
- Includes comprehensive status reporting and monitoring
- Maintains backward compatibility with existing API
- Adds demo script showcasing new features

BREAKING CHANGE: BrowserManager API now returns a strategy instance along with page and context
2025-03-31 21:55:07 +08:00
UncleCode
bb02398086 refactor(browser): improve browser strategy architecture and lifecycle management
Major refactoring of browser strategy implementations to improve code organization and reliability:
- Move CrawlResultContainer and RunManyReturn types from async_webcrawler to models.py
- Simplify browser lifecycle management in AsyncWebCrawler
- Standardize browser strategy interface with _generate_page method
- Improve headless mode handling and browser args construction
- Clean up Docker and Playwright strategy implementations
- Fix session management and context handling across strategies

BREAKING CHANGE: Browser strategy interface has changed with new _generate_page method requirement
2025-03-30 20:58:39 +08:00
UncleCode
3ff7eec8f3 refactor(browser): consolidate browser strategy implementations
Moves common browser functionality into BaseBrowserStrategy class to reduce code duplication and improve maintainability. Key changes:
- Adds shared browser argument building and session management to base class
- Standardizes storage state handling across strategies
- Improves process cleanup and error handling
- Consolidates CDP URL management and container lifecycle

BREAKING CHANGE: Changes browser_mode="custom" to "cdp" for consistency
2025-03-28 22:47:28 +08:00
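
The breaking change above in code form, with the mode names taken directly from the commit message (the CDP URL is a placeholder):

```python
from crawl4ai import BrowserConfig

# Before this change:
# browser_config = BrowserConfig(browser_mode="custom", cdp_url="ws://localhost:9222")

# After this change, the mode is named "cdp":
browser_config = BrowserConfig(browser_mode="cdp", cdp_url="ws://localhost:9222")
```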
UncleCode
64f20ab44a refactor(docker): update Dockerfile and browser strategy to use Chromium 2025-03-28 15:59:02 +08:00
UncleCode
c635f6b9a2 refactor(browser): reorganize browser strategies and improve Docker implementation
Reorganize browser strategy code into separate modules for better maintainability and separation of concerns. Improve Docker implementation with:
- Add Alpine and Debian-based Dockerfiles for better container options
- Enhance Docker registry to share configuration with BuiltinBrowserStrategy
- Add CPU and memory limits to container configuration
- Improve error handling and logging
- Update documentation and examples

BREAKING CHANGE: DockerConfig, DockerRegistry, and DockerUtils have been moved to new locations and their APIs have been updated.
2025-03-27 21:35:13 +08:00
UncleCode
7f93e88379 refactor(tests): remove unused imports in test_docker_browser.py 2025-03-26 15:19:29 +08:00
UncleCode
40d4dd36c9 chore(version): bump version to 0.5.0.post8 and update post-installation setup 2025-03-25 21:56:49 +08:00
UncleCode
d8f38f2298 chore(version): bump version to 0.5.0.post7 2025-03-25 21:47:19 +08:00
UncleCode
5c88d1310d feat(cli): add output file option and integrate LXML web scraping strategy 2025-03-25 21:38:24 +08:00
UncleCode
4a20d7f7c2 feat(cli): add quick JSON extraction and global config management
Adds new features to improve user experience and configuration:
- Quick JSON extraction with -j flag for direct LLM-based structured data extraction
- Global configuration management with 'crwl config' commands
- Enhanced LLM extraction with better JSON handling and error management
- New user settings for default behaviors (LLM provider, browser settings, etc.)

Breaking changes: None
2025-03-25 20:30:25 +08:00
UncleCode
6405cf0a6f Merge branch 'vr0.5.0.post5' into next 2025-03-25 14:51:29 +08:00
UncleCode
6eed4adc65 Merge branch 'vr0.5.0.post5' 2025-03-25 12:24:07 +08:00
UncleCode
bdd9db579a chore(version): bump version to 0.5.0.post6
refactor(cli): remove unused import from FastAPI
2025-03-25 12:01:36 +08:00
UncleCode
1107fa1d62 feat(cli): enhance markdown generation with default content filters
Add DefaultMarkdownGenerator integration and automatic content filtering for markdown output formats. When using 'markdown-fit' or 'md-fit' output formats, automatically apply PruningContentFilter with default settings if no filter config is provided.

This change improves the user experience by providing sensible defaults for markdown generation while maintaining the ability to customize filtering behavior.
2025-03-25 11:56:00 +08:00
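
A hedged sketch of the programmatic equivalent of what the CLI now applies by default for 'md-fit' output; the filter's constructor defaults are assumed, not confirmed here:

```python
from crawl4ai import CrawlerRunConfig, DefaultMarkdownGenerator
from crawl4ai.content_filter_strategy import PruningContentFilter

# Roughly what 'markdown-fit' / 'md-fit' output now does when no filter config is given:
# a PruningContentFilter with default settings wired into DefaultMarkdownGenerator
config = CrawlerRunConfig(
    markdown_generator=DefaultMarkdownGenerator(
        content_filter=PruningContentFilter()
    )
)
```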
UncleCode
8c08521301 feat(browser): add Docker-based browser automation strategy
Implements a new browser strategy that runs Chrome in Docker containers,
providing better isolation and cross-platform consistency. Features include:
- Connect and launch modes for different container configurations
- Persistent storage support for maintaining browser state
- Container registry for efficient reuse
- Comprehensive test suite for Docker browser functionality

This addition allows users to run browser automation workloads in isolated
containers, improving security and resource management.
2025-03-24 21:36:58 +08:00
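
A minimal sketch of opting into the Docker strategy via `browser_mode="docker"` (confirmed by this changeset); finer-grained container options live on DockerConfig, whose fields are not shown in this log:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig

# Routes browser launches through the new Docker-based strategy
browser_config = BrowserConfig(browser_mode="docker")

async def main():
    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun("https://example.com")
        print(result.success)

asyncio.run(main())
```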
UncleCode
462d5765e2 fix(browser): improve storage state persistence in CDP strategy
Enhance storage state persistence mechanism in CDP browser strategy by:
- Explicitly saving storage state for each browser context
- Using proper file path for storage state
- Removing unnecessary sleep delay

Also includes test improvements:
- Simplified test configurations in playwright tests
- Temporarily disabled some CDP tests
2025-03-23 21:06:41 +08:00
UncleCode
6eeb2e4076 feat(browser): enhance browser context creation with user data directory support and improved storage state handling 2025-03-23 19:07:13 +08:00
UncleCode
0094cac675 refactor(browser): improve parallel crawling and browser management
Remove PagePoolConfig in favor of direct page management in browser strategies.
Add get_pages() method for efficient parallel page creation.
Improve storage state handling and persistence.
Add comprehensive parallel crawling tests and performance analysis.

BREAKING CHANGE: Removed PagePoolConfig class and related functionality.
2025-03-23 18:53:24 +08:00
UncleCode
4ab0893ffb feat(browser): implement modular browser management system
Adds a new browser management system with strategy pattern implementation:
- Introduces BrowserManager class with strategy pattern support
- Adds PlaywrightBrowserStrategy, CDPBrowserStrategy, and BuiltinBrowserStrategy
- Implements BrowserProfileManager for profile management
- Adds PagePoolConfig for browser page pooling
- Includes comprehensive test suite for all browser strategies

BREAKING CHANGE: Browser management has been moved to browser/ module. Direct usage of browser_manager.py and browser_profiler.py is deprecated.
2025-03-21 22:50:00 +08:00
UncleCode
6432ff1257 feat(browser): add builtin browser management system
Implements a persistent browser management system that allows running a single shared browser instance
that can be reused across multiple crawler sessions. Key changes include:

- Added browser_mode config option with 'builtin', 'dedicated', and 'custom' modes
- Implemented builtin browser management in BrowserProfiler
- Added CLI commands for managing builtin browser (start, stop, status, restart, view)
- Modified browser process handling to support detached processes
- Added automatic builtin browser setup during package installation

BREAKING CHANGE: The browser_mode config option changes how browser instances are managed
2025-03-20 12:13:59 +08:00
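
A small sketch of selecting the new mode; the mode names come from this commit:

```python
from crawl4ai import AsyncWebCrawler, BrowserConfig

# "builtin" connects to the shared background CDP browser;
# "dedicated" (the default) launches a fresh instance per run.
browser_config = BrowserConfig(browser_mode="builtin")
crawler = AsyncWebCrawler(config=browser_config)
```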
UncleCode
5358ac0fc2 refactor: clean up imports and improve JSON schema generation instructions 2025-03-18 18:53:34 +08:00
Aravind
79328e4292 Create main.yml (#846)
* Create main.yml

GH actions to post notifications in discord for new issues, PRs and discussions

* Add comments on bugs to the trigger
2025-03-17 20:47:57 +08:00
UncleCode
a24799918c feat(llm): add additional LLM configuration parameters
Extend LLMConfig class to support more fine-grained control over LLM behavior by adding:
- temperature control
- max tokens limit
- top_p sampling
- frequency and presence penalties
- stop sequences
- number of completions

These parameters allow for better customization of LLM responses.
2025-03-14 21:36:23 +08:00
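
A hedged sketch exercising the new parameters; note the spelling "temprature", which matches the parameter name in the source shown later in this diff:

```python
import os
from crawl4ai import LLMConfig

llm_config = LLMConfig(
    provider="openai/gpt-4o",
    api_token=os.getenv("OPENAI_API_KEY"),
    temprature=0.2,          # spelling as in the source
    max_tokens=1000,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop=["\n\n"],
    n=1,
)
```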
UncleCode
a31d7b86be feat(changelog): update CHANGELOG for version 0.5.0.post5 with new features, changes, fixes, and breaking changes 2025-03-14 15:26:37 +08:00
UncleCode
7884a98be7 feat(crawler): add experimental parameters support and optimize browser handling
Add experimental parameters dictionary to CrawlerRunConfig to support beta features
Make CSP nonce headers optional via experimental config
Remove default cookie injection
Clean up browser context creation code
Improve code formatting in API handler

BREAKING CHANGE: Default cookie injection has been removed from page initialization
2025-03-14 14:39:24 +08:00
UncleCode
6e3c048328 feat(api): refactor crawl request handling to streamline single and multiple URL processing 2025-03-13 22:30:38 +08:00
UncleCode
b750542e6d feat(crawler): optimize single URL handling and add performance comparison
Add special handling for single URL requests in Docker API to use arun() instead of arun_many()
Add new example script demonstrating performance differences between sequential and parallel crawling
Update cache mode from aggressive to bypass in examples and tests
Remove unused dependencies (zstandard, msgpack)

BREAKING CHANGE: Changed default cache_mode from aggressive to bypass in examples
2025-03-13 22:15:15 +08:00
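
A brief sketch of the distinction this commit optimizes for, using the core `arun()`/`arun_many()` API and the new bypass cache mode:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode

async def main():
    config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)  # new default in examples
    async with AsyncWebCrawler() as crawler:
        # Single URL: arun() avoids the dispatcher overhead of arun_many()
        single = await crawler.arun("https://example.com", config=config)
        # Multiple URLs: arun_many() crawls in parallel
        many = await crawler.arun_many(
            ["https://example.com/a", "https://example.com/b"], config=config
        )
        print(single.success, len(many))

asyncio.run(main())
```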
UncleCode
dc36997a08 feat(schema): improve HTML preprocessing for schema generation
Add new preprocess_html_for_schema utility function to better handle HTML cleaning
for schema generation. This replaces the previous optimize_html function in the
GoogleSearchCrawler and includes smarter attribute handling and pattern detection.

Other changes:
- Update default provider to gpt-4o
- Add DEFAULT_PROVIDER_API_KEY constant
- Make LLMConfig creation more flexible with create_llm_config helper
- Add new dependencies: zstandard and msgpack

This change improves schema generation reliability while reducing noise in the
processed HTML.
2025-03-12 22:40:46 +08:00
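
A hedged usage sketch; the commit only names the function, so the import path (`crawl4ai.utils`) is an assumption based on where similar helpers live:

```python
# Import path is an assumption; the commit only names the function.
from crawl4ai.utils import preprocess_html_for_schema

raw_html = "<html><body><div class='product' data-id='1'>...</div></body></html>"
# Strips noisy attributes and repeated patterns before schema generation
cleaned = preprocess_html_for_schema(raw_html)
print(len(raw_html), "->", len(cleaned))
```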
UncleCode
1630fbdafe feat(monitor): add real-time crawler monitoring system with memory management
Implements a comprehensive monitoring and visualization system for tracking web crawler operations in real-time. The system includes:
- Terminal-based dashboard with rich UI for displaying task statuses
- Memory pressure monitoring and adaptive dispatch control
- Queue statistics and performance metrics tracking
- Detailed task progress visualization
- Stress testing framework for memory management

This addition helps operators track crawler performance and manage memory usage more effectively.
2025-03-12 19:05:24 +08:00
UncleCode
9547bada3a feat(content): add target_elements parameter for selective content extraction
Adds new target_elements parameter to CrawlerRunConfig that allows more flexible content selection than css_selector. This enables focusing markdown generation and data extraction on specific elements while still processing the entire page for links and media.

Key changes:
- Added target_elements list parameter to CrawlerRunConfig
- Modified WebScrapingStrategy and LXMLWebScrapingStrategy to handle target_elements
- Updated documentation with examples and comparison between css_selector and target_elements
- Fixed table extraction in content_scraping_strategy.py

BREAKING CHANGE: Table extraction logic has been modified to better handle thead/tbody structures
2025-03-10 18:54:51 +08:00
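
A minimal sketch of the new parameter (the selectors are placeholders):

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

# Markdown generation and extraction are limited to these elements,
# but links and media are still collected from the whole page
# (unlike css_selector, which shrinks the raw HTML itself).
config = CrawlerRunConfig(target_elements=["article.main-content", "div.summary"])

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=config)
        print(result.markdown[:300])

asyncio.run(main())
```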
UncleCode
9d69fce834 feat(scraping): add smart table extraction and analysis capabilities
Add comprehensive table detection and extraction functionality to the web scraping system:
- Implement intelligent table detection algorithm with scoring system
- Add table extraction with support for headers, rows, captions
- Update models to include tables in Media class
- Add table_score_threshold configuration option
- Add documentation and examples for table extraction
- Include crypto analysis example demonstrating table usage

This change enables users to extract structured data from HTML tables while intelligently filtering out layout tables.
2025-03-09 21:31:33 +08:00
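
A hedged sketch of the new table extraction; `table_score_threshold` is confirmed by this changeset, while the `result.media["tables"]` access path and entry keys are assumptions based on "include tables in Media class":

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

config = CrawlerRunConfig(table_score_threshold=7)  # default per this changeset

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com/markets", config=config)
        # Access path and keys are assumptions, not confirmed signatures
        for table in result.media.get("tables", []):
            print(table.get("headers"), len(table.get("rows", [])))

asyncio.run(main())
```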
UncleCode
c6a605ccce feat(filters): add reverse option to URLPatternFilter
Adds a new 'reverse' parameter to URLPatternFilter that allows inverting the filter's logic. When reverse=True, URLs that would normally match are rejected and vice versa.

Also removes unused 'scraped_html' from WebScrapingStrategy output to reduce memory usage.

BREAKING CHANGE: WebScrapingStrategy no longer returns 'scraped_html' in its output dictionary
2025-03-08 18:54:41 +08:00
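
A small sketch of the new reverse option; the `patterns` argument name is assumed from the filter's existing API:

```python
from crawl4ai import URLPatternFilter

# Default behavior: URLs matching the pattern pass the filter
keep_blog = URLPatternFilter(patterns=["*/blog/*"])

# reverse=True inverts the logic: matching URLs are rejected instead
skip_blog = URLPatternFilter(patterns=["*/blog/*"], reverse=True)
```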
UncleCode
4aeb7ef9ad refactor(proxy): consolidate proxy configuration handling
Moves ProxyConfig from configs/ directory into proxy_strategy.py to improve code organization and reduce fragmentation. Updates all imports and type hints to reflect the new location.

Key changes:
- Moved ProxyConfig class from configs/proxy_config.py to proxy_strategy.py
- Updated type hints in async_configs.py to support ProxyConfig
- Fixed proxy configuration handling in browser_manager.py
- Updated documentation and examples to use new import path

BREAKING CHANGE: ProxyConfig import path has changed from crawl4ai.configs to crawl4ai.proxy_strategy
2025-03-07 23:14:11 +08:00
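
The import migration in code form, per the breaking change above; the ProxyConfig field names mirror the documented dict form ({"server": ..., "username": ...}) and are assumptions:

```python
# Old location (no longer valid):
# from crawl4ai.configs import ProxyConfig
from crawl4ai.proxy_strategy import ProxyConfig
from crawl4ai import BrowserConfig

proxy = ProxyConfig(
    server="http://proxy.example.com:8080",  # field names are assumptions
    username="user",
    password="pass",
)
browser_config = BrowserConfig(proxy_config=proxy)
```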
UncleCode
a68cbb232b feat(browser): add standalone CDP browser launch and lxml extraction strategy
Add new features to enhance browser automation and HTML extraction:
- Add CDP browser launch capability with customizable ports and profiles
- Implement JsonLxmlExtractionStrategy for faster HTML parsing
- Add CLI command 'crwl cdp' for launching standalone CDP browsers
- Support connecting to external CDP browsers via URL
- Optimize selector caching and context-sensitive queries

BREAKING CHANGE: LLMConfig import path changed from crawl4ai.types to crawl4ai
2025-03-07 20:55:56 +08:00
UncleCode
e1b3bfe6fb Merge branch 'vr0.5.0.post4' 2025-03-06 22:46:44 +08:00
UncleCode
f78c46446b feat(deep-crawling): improve URL normalization and domain filtering
Enhance URL handling in deep crawling with:
- New URL normalization functions for consistent URL formats
- Improved domain filtering with subdomain support
- Added URLPatternFilter to public API
- Better URL deduplication in BFS strategy

These changes improve crawling accuracy and reduce duplicate visits.
2025-03-06 22:45:57 +08:00
UncleCode
1b72880007 chore(version): bump version to 0.5.0.post3 2025-03-06 20:32:32 +08:00
UncleCode
29f7915b79 fix(models): support float timestamps in CrawlStats
Modify CrawlStats class to handle both datetime and float timestamp formats for start_time and end_time fields. This change improves compatibility with different time formats while maintaining existing functionality.

Other minor changes:
- Add datetime import in async_dispatcher
- Update JsonElementExtractionStrategy kwargs handling

No breaking changes.
2025-03-06 20:30:57 +08:00
UncleCode
2327db6fdc refactor(crawler): introduce CrawlResultContainer and simplify interfaces
Introduces a new generic CrawlResultContainer class to standardize return types and
improve type safety. Removes legacy parameter handling and simplifies method signatures.
This change makes the API more consistent and easier to maintain.

BREAKING CHANGE: Synchronous crawler methods now always return CrawlResultContainer
instead of raw CrawlResult or List[CrawlResult]. Legacy parameters have been removed
from method signatures.
2025-03-05 22:23:08 +08:00
UncleCode
fd02dc782d Merge branch 'main' of https://github.com/unclecode/crawl4ai 2025-03-05 17:15:48 +08:00
UncleCode
3a234ec950 fix(auth): make JWT authentication optional with fallback
Modify authentication system to gracefully handle cases where JWT is not enabled or token is missing. This includes:
- Making HTTPBearer auto_error=False to prevent automatic 403 errors
- Updating token dependency to return None when JWT is disabled
- Fixing model deserialization in CrawlResult
- Updating documentation links
- Cleaning up imports

BREAKING CHANGE: Authentication behavior changed to be more permissive when JWT is disabled
2025-03-05 17:14:42 +08:00
UncleCode
9e89d27fcd chore(version): bump version to 0.5.0.post2 2025-03-05 14:18:29 +08:00
UncleCode
b3ec7ce960 Merge branch 'vr0.5.0.post1' into next 2025-03-05 14:17:19 +08:00
UncleCode
baee4949d3 refactor(llm): rename LlmConfig to LLMConfig for consistency
Rename LlmConfig to LLMConfig across the codebase to follow consistent naming conventions.
Update all imports and usages to use the new name.
Update documentation and examples to reflect the change.

BREAKING CHANGE: LlmConfig has been renamed to LLMConfig. Users need to update their imports and usage.
2025-03-05 14:17:04 +08:00
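
The rename in code form, exactly as the breaking change describes:

```python
# Before:
# from crawl4ai import LlmConfig
# config = LlmConfig(provider="openai/gpt-4o", api_token="...")

# After:
from crawl4ai import LLMConfig
config = LLMConfig(provider="openai/gpt-4o", api_token="...")
```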
UncleCode
14fe5ef873 Update config.yml 2025-03-05 14:16:24 +08:00
UncleCode
fc425023f5 Update config.yml 2025-03-05 12:51:07 +08:00
133 changed files with 18010 additions and 3567 deletions

.github/workflows/main.yml (new file, +35 lines)

@@ -0,0 +1,35 @@
name: Discord GitHub Notifications
on:
issues:
types: [opened]
issue_comment:
types: [created]
pull_request:
types: [opened]
discussion:
types: [created]
jobs:
notify-discord:
runs-on: ubuntu-latest
steps:
- name: Set webhook based on event type
id: set-webhook
run: |
if [ "${{ github.event_name }}" == "discussion" ]; then
echo "webhook=${{ secrets.DISCORD_DISCUSSIONS_WEBHOOK }}" >> $GITHUB_OUTPUT
else
echo "webhook=${{ secrets.DISCORD_WEBHOOK }}" >> $GITHUB_OUTPUT
fi
- name: Discord Notification
uses: Ilshidur/action-discord@master
env:
DISCORD_WEBHOOK: ${{ steps.set-webhook.outputs.webhook }}
with:
args: |
${{ github.event_name == 'issues' && format('📣 New issue created: **{0}** by {1} - {2}', github.event.issue.title, github.event.issue.user.login, github.event.issue.html_url) ||
github.event_name == 'issue_comment' && format('💬 New comment on issue **{0}** by {1} - {2}', github.event.issue.title, github.event.comment.user.login, github.event.comment.html_url) ||
github.event_name == 'pull_request' && format('🔄 New PR opened: **{0}** by {1} - {2}', github.event.pull_request.title, github.event.pull_request.user.login, github.event.pull_request.html_url) ||
format('💬 New discussion started: **{0}** by {1} - {2}', github.event.discussion.title, github.event.discussion.user.login, github.event.discussion.html_url) }}

.gitignore (+3 lines)

@@ -255,3 +255,6 @@ continue_config.json
.llm.env
.private/
CLAUDE_MONITOR.md
CLAUDE.md

CHANGELOG.md

@@ -5,6 +5,39 @@ All notable changes to Crawl4AI will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## Version 0.5.0.post5 (2025-03-14)
### Added
- *(crawler)* Add experimental parameters dictionary to CrawlerRunConfig to support beta features
- *(tables)* Add comprehensive table detection and extraction functionality with scoring system
- *(monitor)* Add real-time crawler monitoring system with memory management
- *(content)* Add target_elements parameter for selective content extraction
- *(browser)* Add standalone CDP browser launch capability
- *(schema)* Add preprocess_html_for_schema utility for better HTML cleaning
- *(api)* Add special handling for single URL requests in Docker API
### Changed
- *(filters)* Add reverse option to URLPatternFilter for inverting filter logic
- *(browser)* Make CSP nonce headers optional via experimental config
- *(browser)* Remove default cookie injection from page initialization
- *(crawler)* Optimize response handling for single-URL processing
- *(api)* Refactor crawl request handling to streamline processing
- *(config)* Update default provider to gpt-4o
- *(cache)* Change default cache_mode from aggressive to bypass in examples
### Fixed
- *(browser)* Clean up browser context creation code
- *(api)* Improve code formatting in API handler
### Breaking Changes
- WebScrapingStrategy no longer returns 'scraped_html' in its output dictionary
- Table extraction logic has been modified to better handle thead/tbody structures
- Default cookie injection has been removed from page initialization
## Version 0.5.0 (2025-03-02)
### Added

Dockerfile

@@ -24,7 +24,7 @@ ARG TARGETARCH
LABEL maintainer="unclecode"
LABEL description="🔥🕷️ Crawl4AI: Open-source LLM Friendly Web Crawler & scraper"
LABEL version="1.0"
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
@@ -38,6 +38,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
libjpeg-dev \
redis-server \
supervisor \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y --no-install-recommends \
@@ -62,11 +63,13 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
libcairo2 \
libasound2 \
libatspi2.0-0 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN if [ "$ENABLE_GPU" = "true" ] && [ "$TARGETARCH" = "amd64" ] ; then \
apt-get update && apt-get install -y --no-install-recommends \
nvidia-cuda-toolkit \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* ; \
else \
echo "Skipping NVIDIA CUDA Toolkit installation (unsupported platform or GPU disabled)"; \
@@ -76,16 +79,24 @@ RUN if [ "$TARGETARCH" = "arm64" ]; then \
echo "🦾 Installing ARM-specific optimizations"; \
apt-get update && apt-get install -y --no-install-recommends \
libopenblas-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*; \
elif [ "$TARGETARCH" = "amd64" ]; then \
echo "🖥️ Installing AMD64-specific optimizations"; \
apt-get update && apt-get install -y --no-install-recommends \
libomp-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*; \
else \
echo "Skipping platform-specific optimizations (unsupported platform)"; \
fi
# Create a non-root user and group
RUN groupadd -r appuser && useradd --no-log-init -r -g appuser appuser
# Create and set permissions for appuser home directory
RUN mkdir -p /home/appuser && chown -R appuser:appuser /home/appuser
WORKDIR ${APP_HOME}
RUN echo '#!/bin/bash\n\
@@ -103,6 +114,7 @@ fi' > /tmp/install.sh && chmod +x /tmp/install.sh
COPY . /tmp/project/
# Copy supervisor config first (might need root later, but okay for now)
COPY deploy/docker/supervisord.conf .
COPY deploy/docker/requirements.txt .
@@ -131,16 +143,31 @@ RUN if [ "$INSTALL_TYPE" = "all" ] ; then \
else \
pip install "/tmp/project" ; \
fi
RUN pip install --no-cache-dir --upgrade pip && \
/tmp/install.sh && \
python -c "import crawl4ai; print('✅ crawl4ai is ready to rock!')" && \
python -c "from playwright.sync_api import sync_playwright; print('✅ Playwright is feeling dramatic!')"
RUN playwright install --with-deps chromium
RUN crawl4ai-setup
RUN playwright install --with-deps
RUN mkdir -p /home/appuser/.cache/ms-playwright \
&& cp -r /root/.cache/ms-playwright/chromium-* /home/appuser/.cache/ms-playwright/ \
&& chown -R appuser:appuser /home/appuser/.cache/ms-playwright
RUN crawl4ai-doctor
# Copy application code
COPY deploy/docker/* ${APP_HOME}/
# Change ownership of the application directory to the non-root user
RUN chown -R appuser:appuser ${APP_HOME}
# give permissions to redis persistence dirs if used
RUN mkdir -p /var/lib/redis /var/log/redis && chown -R appuser:appuser /var/lib/redis /var/log/redis
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD bash -c '\
MEM=$(free -m | awk "/^Mem:/{print \$2}"); \
@@ -149,8 +176,14 @@ HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
exit 1; \
fi && \
redis-cli ping > /dev/null && \
curl -f http://localhost:8000/health || exit 1'
curl -f http://localhost:11235/health || exit 1'
EXPOSE 6379
CMD ["supervisord", "-c", "supervisord.conf"]
# Switch to the non-root user before starting the application
USER appuser
# Set environment variables to production
ENV PYTHON_ENV=production
# Start the application using supervisord
CMD ["supervisord", "-c", "supervisord.conf"]

JOURNAL.md (new file, +108 lines)

@@ -0,0 +1,108 @@
# Development Journal
This journal tracks significant feature additions, bug fixes, and architectural decisions in the crawl4ai project. It serves as both documentation and a historical record of the project's evolution.
## [2025-04-09] Added MHTML Capture Feature
**Feature:** MHTML snapshot capture of crawled pages
**Changes Made:**
1. Added `capture_mhtml: bool = False` parameter to `CrawlerRunConfig` class
2. Added `mhtml: Optional[str] = None` field to `CrawlResult` model
3. Added `mhtml_data: Optional[str] = None` field to `AsyncCrawlResponse` class
4. Implemented `capture_mhtml()` method in `AsyncPlaywrightCrawlerStrategy` class to capture MHTML via CDP
5. Modified the crawler to capture MHTML when enabled and pass it to the result
**Implementation Details:**
- MHTML capture uses Chrome DevTools Protocol (CDP) via Playwright's CDP session API
- The implementation waits for page to fully load before capturing MHTML content
- Enhanced waiting for JavaScript content with requestAnimationFrame for better JS content capture
- We ensure all browser resources are properly cleaned up after capture
**Files Modified:**
- `crawl4ai/models.py`: Added the mhtml field to CrawlResult
- `crawl4ai/async_configs.py`: Added capture_mhtml parameter to CrawlerRunConfig
- `crawl4ai/async_crawler_strategy.py`: Implemented MHTML capture logic
- `crawl4ai/async_webcrawler.py`: Added mapping from AsyncCrawlResponse.mhtml_data to CrawlResult.mhtml
**Testing:**
- Created comprehensive tests in `tests/20241401/test_mhtml.py` covering:
- Capturing MHTML when enabled
- Ensuring mhtml is None when disabled explicitly
- Ensuring mhtml is None by default
- Capturing MHTML on JavaScript-enabled pages
**Challenges:**
- Had to improve page loading detection to ensure JavaScript content was fully rendered
- Tests needed to be run independently due to Playwright browser instance management
- Modified test expected content to match actual MHTML output
**Why This Feature:**
The MHTML capture feature allows users to capture complete web pages including all resources (CSS, images, etc.) in a single file. This is valuable for:
1. Offline viewing of captured pages
2. Creating permanent snapshots of web content for archival
3. Ensuring consistent content for later analysis, even if the original site changes
**Future Enhancements to Consider:**
- Add option to save MHTML to file
- Support for filtering what resources get included in MHTML
- Add support for specifying MHTML capture options
## [2025-04-10] Added Network Request and Console Message Capturing
**Feature:** Comprehensive capturing of network requests/responses and browser console messages during crawling
**Changes Made:**
1. Added `capture_network_requests: bool = False` and `capture_console_messages: bool = False` parameters to `CrawlerRunConfig` class
2. Added `network_requests: Optional[List[Dict[str, Any]]] = None` and `console_messages: Optional[List[Dict[str, Any]]] = None` fields to both `AsyncCrawlResponse` and `CrawlResult` models
3. Implemented event listeners in `AsyncPlaywrightCrawlerStrategy._crawl_web()` to capture browser network events and console messages
4. Added proper event listener cleanup in the finally block to prevent resource leaks
5. Modified the crawler flow to pass captured data from AsyncCrawlResponse to CrawlResult
**Implementation Details:**
- Network capture uses Playwright event listeners (`request`, `response`, and `requestfailed`) to record all network activity
- Console capture uses Playwright event listeners (`console` and `pageerror`) to record console messages and errors
- Each network event includes metadata like URL, headers, status, and timing information
- Each console message includes type, text content, and source location when available
- All captured events include timestamps for chronological analysis
- Error handling ensures even failed capture attempts won't crash the main crawling process
**Files Modified:**
- `crawl4ai/models.py`: Added new fields to AsyncCrawlResponse and CrawlResult
- `crawl4ai/async_configs.py`: Added new configuration parameters to CrawlerRunConfig
- `crawl4ai/async_crawler_strategy.py`: Implemented capture logic using event listeners
- `crawl4ai/async_webcrawler.py`: Added data transfer from AsyncCrawlResponse to CrawlResult
**Documentation:**
- Created detailed documentation in `docs/md_v2/advanced/network-console-capture.md`
- Added feature to site navigation in `mkdocs.yml`
- Updated CrawlResult documentation in `docs/md_v2/api/crawl-result.md`
- Created comprehensive example in `docs/examples/network_console_capture_example.py`
**Testing:**
- Created `tests/general/test_network_console_capture.py` with tests for:
- Verifying capture is disabled by default
- Testing network request capturing
- Testing console message capturing
- Ensuring both capture types can be enabled simultaneously
- Checking correct content is captured in expected formats
**Challenges:**
- Initial implementation had synchronous/asynchronous mismatches in event handlers
- Needed to fix type of property access vs. method calls in handlers
- Required careful cleanup of event listeners to prevent memory leaks
**Why This Feature:**
The network and console capture feature provides deep visibility into web page activity, enabling:
1. Debugging complex web applications by seeing all network requests and errors
2. Security analysis to detect unexpected third-party requests and data flows
3. Performance profiling to identify slow-loading resources
4. API discovery in single-page applications
5. Comprehensive analysis of web application behavior
**Future Enhancements to Consider:**
- Option to filter captured events by type, domain, or content
- Support for capturing response bodies (with size limits)
- Aggregate statistics calculation for performance metrics
- Integration with visualization tools for network waterfall analysis
- Exporting captures in HAR format for use with external tools

README.md

@@ -420,7 +420,7 @@ if __name__ == "__main__":
```python
import os
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode, LlmConfig
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode, LLMConfig
from crawl4ai.extraction_strategy import LLMExtractionStrategy
from pydantic import BaseModel, Field
@@ -436,7 +436,7 @@ async def main():
extraction_strategy=LLMExtractionStrategy(
# Here you can use any provider that Litellm library supports, for instance: ollama/qwen2
# provider="ollama/qwen2", api_token="no-token",
llmConfig = LlmConfig(provider="openai/gpt-4o", api_token=os.getenv('OPENAI_API_KEY')),
llm_config = LLMConfig(provider="openai/gpt-4o", api_token=os.getenv('OPENAI_API_KEY')),
schema=OpenAIModelFee.schema(),
extraction_type="schema",
instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.

crawl4ai/__init__.py

@@ -2,7 +2,8 @@
import warnings
from .async_webcrawler import AsyncWebCrawler, CacheMode
from .async_configs import BrowserConfig, CrawlerRunConfig, HTTPCrawlerConfig
from .async_configs import BrowserConfig, CrawlerRunConfig, HTTPCrawlerConfig, LLMConfig
from .content_scraping_strategy import (
ContentScrapingStrategy,
WebScrapingStrategy,
@@ -22,6 +23,7 @@ from .extraction_strategy import (
CosineStrategy,
JsonCssExtractionStrategy,
JsonXPathExtractionStrategy,
JsonLxmlExtractionStrategy
)
from .chunking_strategy import ChunkingStrategy, RegexChunking
from .markdown_generation_strategy import DefaultMarkdownGenerator
@@ -31,13 +33,12 @@ from .content_filter_strategy import (
LLMContentFilter,
RelevantContentFilter,
)
from .models import CrawlResult, MarkdownGenerationResult
from .models import CrawlResult, MarkdownGenerationResult, DisplayMode
from .components.crawler_monitor import CrawlerMonitor
from .async_dispatcher import (
MemoryAdaptiveDispatcher,
SemaphoreDispatcher,
RateLimiter,
CrawlerMonitor,
DisplayMode,
BaseDispatcher,
)
from .docker_client import Crawl4aiDockerClient
@@ -47,8 +48,9 @@ from .deep_crawling import (
DeepCrawlStrategy,
BFSDeepCrawlStrategy,
FilterChain,
ContentTypeFilter,
URLPatternFilter,
DomainFilter,
ContentTypeFilter,
URLFilter,
FilterStats,
SEOFilter,
@@ -68,11 +70,13 @@ __all__ = [
"AsyncLogger",
"AsyncWebCrawler",
"BrowserProfiler",
"LLMConfig",
"DeepCrawlStrategy",
"BFSDeepCrawlStrategy",
"BestFirstCrawlingStrategy",
"DFSDeepCrawlStrategy",
"FilterChain",
"URLPatternFilter",
"ContentTypeFilter",
"DomainFilter",
"FilterStats",
@@ -99,6 +103,7 @@ __all__ = [
"CosineStrategy",
"JsonCssExtractionStrategy",
"JsonXPathExtractionStrategy",
"JsonLxmlExtractionStrategy",
"ChunkingStrategy",
"RegexChunking",
"DefaultMarkdownGenerator",

crawl4ai/_version.py

@@ -1,2 +1,2 @@
# crawl4ai/_version.py
__version__ = "0.5.0.post1"
__version__ = "0.5.0.post8"

crawl4ai/async_configs.py

@@ -1,6 +1,7 @@
import os
from .config import (
DEFAULT_PROVIDER,
DEFAULT_PROVIDER_API_KEY,
MIN_WORD_THRESHOLD,
IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD,
PROVIDER_MODELS,
@@ -11,19 +12,27 @@ from .config import (
)
from .user_agent_generator import UAGen, ValidUAGenerator # , OnlineUAGenerator
from .extraction_strategy import ExtractionStrategy
from .extraction_strategy import ExtractionStrategy, LLMExtractionStrategy
from .chunking_strategy import ChunkingStrategy, RegexChunking
from .markdown_generation_strategy import MarkdownGenerationStrategy
from .markdown_generation_strategy import MarkdownGenerationStrategy, DefaultMarkdownGenerator
from .content_scraping_strategy import ContentScrapingStrategy, WebScrapingStrategy
from .deep_crawling import DeepCrawlStrategy
from typing import Union, List
from .cache_context import CacheMode
from .proxy_strategy import ProxyRotationStrategy
from typing import Union, List
import inspect
from typing import Any, Dict, Optional
from enum import Enum
from .proxy_strategy import ProxyConfig
try:
from .browser.models import DockerConfig
except ImportError:
DockerConfig = None
def to_serializable_dict(obj: Any, ignore_default_value : bool = False) -> Dict:
"""
@@ -113,23 +122,25 @@ def from_serializable_dict(data: Any) -> Any:
# Handle typed data
if isinstance(data, dict) and "type" in data:
# Handle plain dictionaries
if data["type"] == "dict":
if data["type"] == "dict" and "value" in data:
return {k: from_serializable_dict(v) for k, v in data["value"].items()}
# Import from crawl4ai for class instances
import crawl4ai
cls = getattr(crawl4ai, data["type"])
if hasattr(crawl4ai, data["type"]):
cls = getattr(crawl4ai, data["type"])
# Handle Enum
if issubclass(cls, Enum):
return cls(data["params"])
# Handle Enum
if issubclass(cls, Enum):
return cls(data["params"])
# Handle class instances
constructor_args = {
k: from_serializable_dict(v) for k, v in data["params"].items()
}
return cls(**constructor_args)
if "params" in data:
# Handle class instances
constructor_args = {
k: from_serializable_dict(v) for k, v in data["params"].items()
}
return cls(**constructor_args)
# Handle lists
if isinstance(data, list):
@@ -164,6 +175,12 @@ class BrowserConfig:
Default: "chromium".
headless (bool): Whether to run the browser in headless mode (no visible GUI).
Default: True.
browser_mode (str): Determines how the browser should be initialized:
"builtin" - use the builtin CDP browser running in background
"dedicated" - create a new dedicated browser instance each time
"cdp" - use explicit CDP settings provided in cdp_url
"docker" - run browser in Docker container with isolation
Default: "dedicated"
use_managed_browser (bool): Launch the browser using a managed approach (e.g., via CDP), allowing
advanced manipulation. Default: False.
cdp_url (str): URL for the Chrome DevTools Protocol (CDP) endpoint. Default: "ws://localhost:9222/devtools/browser/".
@@ -178,8 +195,10 @@ class BrowserConfig:
is "chromium". Default: "chromium".
proxy (Optional[str]): Proxy server URL (e.g., "http://username:password@proxy:port"). If None, no proxy is used.
Default: None.
proxy_config (dict or None): Detailed proxy configuration, e.g. {"server": "...", "username": "..."}.
proxy_config (ProxyConfig or dict or None): Detailed proxy configuration, e.g. {"server": "...", "username": "..."}.
If None, no additional proxy config. Default: None.
docker_config (DockerConfig or dict or None): Configuration for Docker-based browser automation.
Contains settings for Docker container operation. Default: None.
viewport_width (int): Default viewport width for pages. Default: 1080.
viewport_height (int): Default viewport height for pages. Default: 600.
viewport (dict): Default viewport dimensions for pages. If set, overrides viewport_width and viewport_height.
@@ -190,7 +209,7 @@ class BrowserConfig:
Default: False.
downloads_path (str or None): Directory to store downloaded files. If None and accept_downloads is True,
a default path will be created. Default: None.
storage_state (str or dict or None): Path or object describing storage state (cookies, localStorage).
storage_state (str or dict or None): An in-memory storage state (cookies, localStorage).
Default: None.
ignore_https_errors (bool): Ignore HTTPS certificate errors. Default: True.
java_script_enabled (bool): Enable JavaScript execution in pages. Default: True.
@@ -216,6 +235,7 @@ class BrowserConfig:
self,
browser_type: str = "chromium",
headless: bool = True,
browser_mode: str = "dedicated",
use_managed_browser: bool = False,
cdp_url: str = None,
use_persistent_context: bool = False,
@@ -223,7 +243,8 @@ class BrowserConfig:
chrome_channel: str = "chromium",
channel: str = "chromium",
proxy: str = None,
proxy_config: dict = None,
proxy_config: Union[ProxyConfig, dict, None] = None,
docker_config: Union[DockerConfig, dict, None] = None,
viewport_width: int = 1080,
viewport_height: int = 600,
viewport: dict = None,
@@ -251,7 +272,8 @@ class BrowserConfig:
host: str = "localhost",
):
self.browser_type = browser_type
self.headless = headless
self.headless = headless or True
self.browser_mode = browser_mode
self.use_managed_browser = use_managed_browser
self.cdp_url = cdp_url
self.use_persistent_context = use_persistent_context
@@ -263,6 +285,16 @@ class BrowserConfig:
self.chrome_channel = ""
self.proxy = proxy
self.proxy_config = proxy_config
# Handle docker configuration
if isinstance(docker_config, dict) and DockerConfig is not None:
self.docker_config = DockerConfig.from_kwargs(docker_config)
else:
self.docker_config = docker_config
if self.docker_config:
self.user_data_dir = self.docker_config.user_data_dir
self.viewport_width = viewport_width
self.viewport_height = viewport_height
self.viewport = viewport
@@ -285,6 +317,7 @@ class BrowserConfig:
self.sleep_on_close = sleep_on_close
self.verbose = verbose
self.debugging_port = debugging_port
self.host = host
fa_user_agenr_generator = ValidUAGenerator()
if self.user_agent_mode == "random":
@@ -297,6 +330,22 @@ class BrowserConfig:
self.browser_hint = UAGen.generate_client_hints(self.user_agent)
self.headers.setdefault("sec-ch-ua", self.browser_hint)
# Set appropriate browser management flags based on browser_mode
if self.browser_mode == "builtin":
# Builtin mode uses managed browser connecting to builtin CDP endpoint
self.use_managed_browser = True
# cdp_url will be set later by browser_manager
elif self.browser_mode == "docker":
# Docker mode uses managed browser with CDP to connect to browser in container
self.use_managed_browser = True
# cdp_url will be set later by docker browser strategy
elif self.browser_mode == "custom" and self.cdp_url:
# Custom mode with explicit CDP URL
self.use_managed_browser = True
elif self.browser_mode == "dedicated":
# Dedicated mode uses a new browser instance each time
pass
# If persistent context is requested, ensure managed browser is enabled
if self.use_persistent_context:
self.use_managed_browser = True
@@ -306,6 +355,7 @@ class BrowserConfig:
return BrowserConfig(
browser_type=kwargs.get("browser_type", "chromium"),
headless=kwargs.get("headless", True),
browser_mode=kwargs.get("browser_mode", "dedicated"),
use_managed_browser=kwargs.get("use_managed_browser", False),
cdp_url=kwargs.get("cdp_url"),
use_persistent_context=kwargs.get("use_persistent_context", False),
@@ -313,7 +363,8 @@ class BrowserConfig:
chrome_channel=kwargs.get("chrome_channel", "chromium"),
channel=kwargs.get("channel", "chromium"),
proxy=kwargs.get("proxy"),
proxy_config=kwargs.get("proxy_config"),
proxy_config=kwargs.get("proxy_config", None),
docker_config=kwargs.get("docker_config", None),
viewport_width=kwargs.get("viewport_width", 1080),
viewport_height=kwargs.get("viewport_height", 600),
accept_downloads=kwargs.get("accept_downloads", False),
@@ -333,12 +384,15 @@ class BrowserConfig:
text_mode=kwargs.get("text_mode", False),
light_mode=kwargs.get("light_mode", False),
extra_args=kwargs.get("extra_args", []),
debugging_port=kwargs.get("debugging_port", 9222),
host=kwargs.get("host", "localhost"),
)
def to_dict(self):
return {
result = {
"browser_type": self.browser_type,
"headless": self.headless,
"browser_mode": self.browser_mode,
"use_managed_browser": self.use_managed_browser,
"cdp_url": self.cdp_url,
"use_persistent_context": self.use_persistent_context,
@@ -365,7 +419,17 @@ class BrowserConfig:
"sleep_on_close": self.sleep_on_close,
"verbose": self.verbose,
"debugging_port": self.debugging_port,
"host": self.host,
}
# Include docker_config if it exists
if hasattr(self, "docker_config") and self.docker_config is not None:
if hasattr(self.docker_config, "to_dict"):
result["docker_config"] = self.docker_config.to_dict()
else:
result["docker_config"] = self.docker_config
return result
def clone(self, **kwargs):
"""Create a copy of this configuration with updated values.
@@ -497,6 +561,15 @@ class CrawlerRunConfig():
Default: False.
css_selector (str or None): CSS selector to extract a specific portion of the page.
Default: None.
target_elements (list of str or None): List of CSS selectors for specific elements for Markdown generation
and structured data extraction. When you set this, only the contents
of these elements are processed for extraction and Markdown generation.
If you do not set any value, the entire page is processed.
The difference between this and css_selector is that css_selector shrinks
the initial raw HTML to the selected element, while target_elements only
affects the extraction and Markdown generation.
Default: None
excluded_tags (list of str or None): List of HTML tags to exclude from processing.
Default: None.
excluded_selector (str or None): CSS selector to exclude from processing.
@@ -513,7 +586,7 @@ class CrawlerRunConfig():
Default: "lxml".
scraping_strategy (ContentScrapingStrategy): Scraping strategy to use.
Default: WebScrapingStrategy.
proxy_config (dict or None): Detailed proxy configuration, e.g. {"server": "...", "username": "..."}.
proxy_config (ProxyConfig or dict or None): Detailed proxy configuration, e.g. {"server": "...", "username": "..."}.
If None, no additional proxy config. Default: None.
# SSL Parameters
@@ -593,6 +666,8 @@ class CrawlerRunConfig():
Default: IMAGE_SCORE_THRESHOLD (e.g., 3).
exclude_external_images (bool): If True, exclude all external images from processing.
Default: False.
table_score_threshold (int): Minimum score threshold for processing a table.
Default: 7.
# Link and Domain Handling Parameters
exclude_social_media_domains (list of str): List of domains to exclude for social media links.
@@ -634,6 +709,12 @@ class CrawlerRunConfig():
user_agent_generator_config (dict or None): Configuration for user agent generation if user_agent_mode is set.
Default: None.
# Experimental Parameters
experimental (dict): Dictionary containing experimental parameters that are in beta phase.
This allows passing temporary features that are not yet fully integrated
into the main parameter set.
Default: None.
url: str = None # This is not a compulsory parameter
"""
@@ -643,9 +724,10 @@ class CrawlerRunConfig():
word_count_threshold: int = MIN_WORD_THRESHOLD,
extraction_strategy: ExtractionStrategy = None,
chunking_strategy: ChunkingStrategy = RegexChunking(),
markdown_generator: MarkdownGenerationStrategy = None,
markdown_generator: MarkdownGenerationStrategy = DefaultMarkdownGenerator(),
only_text: bool = False,
css_selector: str = None,
target_elements: List[str] = None,
excluded_tags: list = None,
excluded_selector: str = None,
keep_data_attributes: bool = False,
@@ -654,7 +736,7 @@ class CrawlerRunConfig():
prettiify: bool = False,
parser_type: str = "lxml",
scraping_strategy: ContentScrapingStrategy = None,
proxy_config: dict = None,
proxy_config: Union[ProxyConfig, dict, None] = None,
proxy_rotation_strategy: Optional[ProxyRotationStrategy] = None,
# SSL Parameters
fetch_ssl_certificate: bool = False,
@@ -692,9 +774,12 @@ class CrawlerRunConfig():
screenshot_wait_for: float = None,
screenshot_height_threshold: int = SCREENSHOT_HEIGHT_TRESHOLD,
pdf: bool = False,
capture_mhtml: bool = False,
image_description_min_word_threshold: int = IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD,
image_score_threshold: int = IMAGE_SCORE_THRESHOLD,
table_score_threshold: int = 7,
exclude_external_images: bool = False,
exclude_all_images: bool = False,
# Link and Domain Handling Parameters
exclude_social_media_domains: list = None,
exclude_external_links: bool = False,
@@ -704,6 +789,9 @@ class CrawlerRunConfig():
# Debugging and Logging Parameters
verbose: bool = True,
log_console: bool = False,
# Network and Console Capturing Parameters
capture_network_requests: bool = False,
capture_console_messages: bool = False,
# Connection Parameters
method: str = "GET",
stream: bool = False,
@@ -714,6 +802,8 @@ class CrawlerRunConfig():
user_agent_generator_config: dict = {},
# Deep Crawl Parameters
deep_crawl_strategy: Optional[DeepCrawlStrategy] = None,
# Experimental Parameters
experimental: Dict[str, Any] = None,
):
# TODO: Planning to set properties dynamically based on the __init__ signature
self.url = url
@@ -725,6 +815,7 @@ class CrawlerRunConfig():
self.markdown_generator = markdown_generator
self.only_text = only_text
self.css_selector = css_selector
self.target_elements = target_elements or []
self.excluded_tags = excluded_tags or []
self.excluded_selector = excluded_selector or ""
self.keep_data_attributes = keep_data_attributes
@@ -776,9 +867,12 @@ class CrawlerRunConfig():
self.screenshot_wait_for = screenshot_wait_for
self.screenshot_height_threshold = screenshot_height_threshold
self.pdf = pdf
self.capture_mhtml = capture_mhtml
self.image_description_min_word_threshold = image_description_min_word_threshold
self.image_score_threshold = image_score_threshold
self.exclude_external_images = exclude_external_images
self.exclude_all_images = exclude_all_images
self.table_score_threshold = table_score_threshold
# Link and Domain Handling Parameters
self.exclude_social_media_domains = (
@@ -792,6 +886,10 @@ class CrawlerRunConfig():
# Debugging and Logging Parameters
self.verbose = verbose
self.log_console = log_console
# Network and Console Capturing Parameters
self.capture_network_requests = capture_network_requests
self.capture_console_messages = capture_console_messages
# Connection Parameters
self.stream = stream
@@ -825,6 +923,9 @@ class CrawlerRunConfig():
# Deep Crawl Parameters
self.deep_crawl_strategy = deep_crawl_strategy
# Experimental Parameters
self.experimental = experimental or {}
def __getattr__(self, name):
@@ -854,6 +955,7 @@ class CrawlerRunConfig():
markdown_generator=kwargs.get("markdown_generator"),
only_text=kwargs.get("only_text", False),
css_selector=kwargs.get("css_selector"),
target_elements=kwargs.get("target_elements", []),
excluded_tags=kwargs.get("excluded_tags", []),
excluded_selector=kwargs.get("excluded_selector", ""),
keep_data_attributes=kwargs.get("keep_data_attributes", False),
@@ -902,6 +1004,7 @@ class CrawlerRunConfig():
"screenshot_height_threshold", SCREENSHOT_HEIGHT_TRESHOLD
),
pdf=kwargs.get("pdf", False),
capture_mhtml=kwargs.get("capture_mhtml", False),
image_description_min_word_threshold=kwargs.get(
"image_description_min_word_threshold",
IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD,
@@ -909,6 +1012,8 @@ class CrawlerRunConfig():
image_score_threshold=kwargs.get(
"image_score_threshold", IMAGE_SCORE_THRESHOLD
),
table_score_threshold=kwargs.get("table_score_threshold", 7),
exclude_all_images=kwargs.get("exclude_all_images", False),
exclude_external_images=kwargs.get("exclude_external_images", False),
# Link and Domain Handling Parameters
exclude_social_media_domains=kwargs.get(
@@ -921,6 +1026,9 @@ class CrawlerRunConfig():
# Debugging and Logging Parameters
verbose=kwargs.get("verbose", True),
log_console=kwargs.get("log_console", False),
# Network and Console Capturing Parameters
capture_network_requests=kwargs.get("capture_network_requests", False),
capture_console_messages=kwargs.get("capture_console_messages", False),
# Connection Parameters
method=kwargs.get("method", "GET"),
stream=kwargs.get("stream", False),
@@ -931,6 +1039,8 @@ class CrawlerRunConfig():
# Deep Crawl Parameters
deep_crawl_strategy=kwargs.get("deep_crawl_strategy"),
url=kwargs.get("url"),
# Experimental Parameters
experimental=kwargs.get("experimental"),
)
# Create a function that returns a dict of the object
@@ -954,6 +1064,7 @@ class CrawlerRunConfig():
"markdown_generator": self.markdown_generator,
"only_text": self.only_text,
"css_selector": self.css_selector,
"target_elements": self.target_elements,
"excluded_tags": self.excluded_tags,
"excluded_selector": self.excluded_selector,
"keep_data_attributes": self.keep_data_attributes,
@@ -995,8 +1106,11 @@ class CrawlerRunConfig():
"screenshot_wait_for": self.screenshot_wait_for,
"screenshot_height_threshold": self.screenshot_height_threshold,
"pdf": self.pdf,
"capture_mhtml": self.capture_mhtml,
"image_description_min_word_threshold": self.image_description_min_word_threshold,
"image_score_threshold": self.image_score_threshold,
"table_score_threshold": self.table_score_threshold,
"exclude_all_images": self.exclude_all_images,
"exclude_external_images": self.exclude_external_images,
"exclude_social_media_domains": self.exclude_social_media_domains,
"exclude_external_links": self.exclude_external_links,
@@ -1005,6 +1119,8 @@ class CrawlerRunConfig():
"exclude_internal_links": self.exclude_internal_links,
"verbose": self.verbose,
"log_console": self.log_console,
"capture_network_requests": self.capture_network_requests,
"capture_console_messages": self.capture_console_messages,
"method": self.method,
"stream": self.stream,
"check_robots_txt": self.check_robots_txt,
@@ -1013,6 +1129,7 @@ class CrawlerRunConfig():
"user_agent_generator_config": self.user_agent_generator_config,
"deep_crawl_strategy": self.deep_crawl_strategy,
"url": self.url,
"experimental": self.experimental,
}
def clone(self, **kwargs):
@@ -1042,12 +1159,19 @@ class CrawlerRunConfig():
return CrawlerRunConfig.from_kwargs(config_dict)
class LlmConfig:
class LLMConfig:
def __init__(
self,
provider: str = DEFAULT_PROVIDER,
api_token: Optional[str] = None,
base_url: Optional[str] = None,
temprature: Optional[float] = None,
max_tokens: Optional[int] = None,
top_p: Optional[float] = None,
frequency_penalty: Optional[float] = None,
presence_penalty: Optional[float] = None,
stop: Optional[List[str]] = None,
n: Optional[int] = None,
):
"""Configuaration class for LLM provider and API token."""
self.provider = provider
@@ -1057,24 +1181,44 @@ class LlmConfig:
self.api_token = os.getenv(api_token[4:])
else:
self.api_token = PROVIDER_MODELS.get(provider, "no-token") or os.getenv(
"OPENAI_API_KEY"
DEFAULT_PROVIDER_API_KEY
)
self.base_url = base_url
self.temprature = temprature
self.max_tokens = max_tokens
self.top_p = top_p
self.frequency_penalty = frequency_penalty
self.presence_penalty = presence_penalty
self.stop = stop
self.n = n
@staticmethod
def from_kwargs(kwargs: dict) -> "LlmConfig":
return LlmConfig(
def from_kwargs(kwargs: dict) -> "LLMConfig":
return LLMConfig(
provider=kwargs.get("provider", DEFAULT_PROVIDER),
api_token=kwargs.get("api_token"),
base_url=kwargs.get("base_url"),
temprature=kwargs.get("temprature"),
max_tokens=kwargs.get("max_tokens"),
top_p=kwargs.get("top_p"),
frequency_penalty=kwargs.get("frequency_penalty"),
presence_penalty=kwargs.get("presence_penalty"),
stop=kwargs.get("stop"),
n=kwargs.get("n")
)
def to_dict(self):
return {
"provider": self.provider,
"api_token": self.api_token,
"base_url": self.base_url
"base_url": self.base_url,
"temprature": self.temprature,
"max_tokens": self.max_tokens,
"top_p": self.top_p,
"frequency_penalty": self.frequency_penalty,
"presence_penalty": self.presence_penalty,
"stop": self.stop,
"n": self.n
}
def clone(self, **kwargs):
@@ -1084,8 +1228,10 @@ class LlmConfig:
**kwargs: Key-value pairs of configuration options to update
Returns:
LLMConfig: A new instance with the specified updates
llm_config: A new instance with the specified updates
"""
config_dict = self.to_dict()
config_dict.update(kwargs)
return LlmConfig.from_kwargs(config_dict)
return LLMConfig.from_kwargs(config_dict)
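For context, a minimal usage sketch of the renamed `LLMConfig` (import path and provider string are assumptions; the `env:` prefix handling mirrors the constructor above, which reads the variable name after the first four characters):

```python
# Hedged sketch: constructing and cloning an LLMConfig.
from crawl4ai import LLMConfig  # import path assumed

# "env:OPENAI_API_KEY" tells the constructor to read the token
# from the environment (api_token[4:] above).
llm = LLMConfig(
    provider="openai/gpt-4o-mini",  # illustrative provider string
    api_token="env:OPENAI_API_KEY",
    temperature=0.2,
    max_tokens=1024,
)

# clone() round-trips through to_dict()/from_kwargs() with overrides.
cheaper = llm.clone(max_tokens=256, top_p=0.9)
print(cheaper.to_dict())
```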

View File

@@ -409,7 +409,11 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
user_agent = kwargs.get("user_agent", self.user_agent)
# Use browser_manager to get a fresh page & context assigned to this session_id
page, context = await self.browser_manager.get_page(session_id, user_agent)
page, context = await self.browser_manager.get_page(CrawlerRunConfig(
session_id=session_id,
user_agent=user_agent,
**kwargs,
))
return session_id
async def crawl(
@@ -447,12 +451,17 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
html = f.read()
if config.screenshot:
screenshot_data = await self._generate_screenshot_from_html(html)
if config.capture_console_messages:
page, context = await self.browser_manager.get_page(crawlerRunConfig=config)
captured_console = await self._capture_console_messages(page, url)
return AsyncCrawlResponse(
html=html,
response_headers=response_headers,
status_code=status_code,
screenshot=screenshot_data,
get_delayed_content=None,
console_messages=captured_console,
)
elif url.startswith("raw:") or url.startswith("raw://"):
@@ -478,6 +487,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
) -> AsyncCrawlResponse:
"""
Internal method to crawl web URLs with the specified configuration.
Includes optional network and console capturing.
Args:
url (str): The web URL to crawl
@@ -494,6 +504,10 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
# Reset downloaded files list for new crawl
self._downloaded_files = []
# Initialize capture lists
captured_requests = []
captured_console = []
# Handle user agent with magic mode
user_agent_to_override = config.user_agent
@@ -507,10 +521,12 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
# Get page for session
page, context = await self.browser_manager.get_page(crawlerRunConfig=config)
# await page.goto(URL)
# Add default cookie
await context.add_cookies(
[{"name": "cookiesEnabled", "value": "true", "url": url}]
)
# await context.add_cookies(
# [{"name": "cookiesEnabled", "value": "true", "url": url}]
# )
# Handle navigator overrides
if config.override_navigator or config.simulate_user or config.magic:
@@ -519,9 +535,144 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
# Call hook after page creation
await self.execute_hook("on_page_context_created", page, context=context, config=config)
# Network Request Capturing
if config.capture_network_requests:
async def handle_request_capture(request):
try:
post_data_str = None
try:
# Be cautious with large post data
post_data = request.post_data_buffer
if post_data:
# Attempt a strict decode; fall back to a size indication for binary bodies
try:
post_data_str = post_data.decode('utf-8')
except UnicodeDecodeError:
post_data_str = f"[Binary data: {len(post_data)} bytes]"
except Exception:
post_data_str = "[Error retrieving post data]"
captured_requests.append({
"event_type": "request",
"url": request.url,
"method": request.method,
"headers": dict(request.headers), # Convert Header dict
"post_data": post_data_str,
"resource_type": request.resource_type,
"is_navigation_request": request.is_navigation_request(),
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing request details for {request.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "request_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})
async def handle_response_capture(response):
try:
captured_requests.append({
"event_type": "response",
"url": response.url,
"status": response.status,
"status_text": response.status_text,
"headers": dict(response.headers), # Convert Header dict
"from_service_worker": response.from_service_worker,
"request_timing": response.request.timing, # Detailed timing info
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing response details for {response.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "response_capture_error", "url": response.url, "error": str(e), "timestamp": time.time()})
async def handle_request_failed_capture(request):
try:
captured_requests.append({
"event_type": "request_failed",
"url": request.url,
"method": request.method,
"resource_type": request.resource_type,
"failure_text": str(request.failure) if request.failure else "Unknown failure",
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing request failed details for {request.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "request_failed_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})
page.on("request", handle_request_capture)
page.on("response", handle_response_capture)
page.on("requestfailed", handle_request_failed_capture)
# Console Message Capturing
if config.capture_console_messages:
def handle_console_capture(msg):
try:
message_type = "unknown"
try:
message_type = msg.type
except Exception:
pass
message_text = "unknown"
try:
message_text = msg.text
except Exception:
pass
# Basic console message with minimal content
entry = {
"type": message_type,
"text": message_text,
"timestamp": time.time()
}
captured_console.append(entry)
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing console message: {e}", tag="CAPTURE")
# Still add something to the list even on error
captured_console.append({
"type": "console_capture_error",
"error": str(e),
"timestamp": time.time()
})
def handle_pageerror_capture(err):
try:
error_message = "Unknown error"
try:
error_message = err.message
except Exception:
pass
error_stack = ""
try:
error_stack = err.stack
except Exception:
pass
captured_console.append({
"type": "error",
"text": error_message,
"stack": error_stack,
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing page error: {e}", tag="CAPTURE")
captured_console.append({
"type": "pageerror_capture_error",
"error": str(e),
"timestamp": time.time()
})
# Add event listeners directly
page.on("console", handle_console_capture)
page.on("pageerror", handle_pageerror_capture)
# Set up console logging if requested
if config.log_console:
def log_consol(
msg, console_log_type="debug"
): # Corrected the parameter syntax
@@ -562,14 +713,15 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
try:
# Generate a unique nonce for this request
nonce = hashlib.sha256(os.urandom(32)).hexdigest()
if (config.experimental or {}).get("use_csp_nonce", False):
nonce = hashlib.sha256(os.urandom(32)).hexdigest()
# Add CSP headers to the request
await page.set_extra_http_headers(
{
"Content-Security-Policy": f"default-src 'self'; script-src 'self' 'nonce-{nonce}' 'strict-dynamic'"
}
)
# Add CSP headers to the request
await page.set_extra_http_headers(
{
"Content-Security-Policy": f"default-src 'self'; script-src 'self' 'nonce-{nonce}' 'strict-dynamic'"
}
)
response = await page.goto(
url, wait_until=config.wait_until, timeout=config.page_timeout
@@ -767,6 +919,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
# Handle wait_for condition
# Todo: Decide how to handle this
if not config.wait_for and config.css_selector and False:
# if not config.wait_for and config.css_selector:
config.wait_for = f"css:{config.css_selector}"
if config.wait_for:
@@ -806,20 +959,44 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
if config.remove_overlay_elements:
await self.remove_overlay_elements(page)
# Get final HTML content
html = await page.content()
if config.css_selector:
try:
# Handle comma-separated selectors by splitting them
selectors = [s.strip() for s in config.css_selector.split(',')]
html_parts = []
for selector in selectors:
try:
content = await page.evaluate(f"document.querySelector('{selector}')?.outerHTML || ''")
html_parts.append(content)
except Error as e:
if self.logger:
self.logger.warning(f"Could not get content for selector '{selector}': {e}", tag="SCRAPE")
# Wrap in a div to create a valid HTML structure
html = f"<div class='crawl4ai-result'>\n" + "\n".join(html_parts) + "\n</div>"
except Error as e:
raise RuntimeError(f"Failed to extract HTML content: {str(e)}")
else:
html = await page.content()
# # Get final HTML content
# html = await page.content()
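From the caller's side, the selector handling above means a comma-separated `css_selector` yields each match's `outerHTML` wrapped in a single `crawl4ai-result` div. A hedged sketch (import path and URL are illustrative):

```python
# Sketch: comma-separated css_selector extraction, per the logic above.
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig  # import path assumed

async def main():
    config = CrawlerRunConfig(css_selector="article.main, header h1")
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=config)
        # result.html is the matched fragments wrapped in
        # <div class='crawl4ai-result'>...</div>
        print(result.html[:200])

asyncio.run(main())
```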
await self.execute_hook(
"before_return_html", page=page, html=html, context=context, config=config
)
# Handle PDF and screenshot generation
# Handle PDF, MHTML and screenshot generation
start_export_time = time.perf_counter()
pdf_data = None
screenshot_data = None
mhtml_data = None
if config.pdf:
pdf_data = await self.export_pdf(page)
if config.capture_mhtml:
mhtml_data = await self.capture_mhtml(page)
if config.screenshot:
if config.screenshot_wait_for:
await asyncio.sleep(config.screenshot_wait_for)
@@ -827,9 +1004,9 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
page, screenshot_height_threshold=config.screenshot_height_threshold
)
if screenshot_data or pdf_data:
if screenshot_data or pdf_data or mhtml_data:
self.logger.info(
message="Exporting PDF and taking screenshot took {duration:.2f}s",
message="Exporting media (PDF/MHTML/screenshot) took {duration:.2f}s",
tag="EXPORT",
params={"duration": time.perf_counter() - start_export_time},
)
@@ -852,12 +1029,16 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
status_code=status_code,
screenshot=screenshot_data,
pdf_data=pdf_data,
mhtml_data=mhtml_data,
get_delayed_content=get_delayed_content,
ssl_certificate=ssl_cert,
downloaded_files=(
self._downloaded_files if self._downloaded_files else None
),
redirected_url=redirected_url,
# Include captured data if enabled
network_requests=captured_requests if config.capture_network_requests else None,
console_messages=captured_console if config.capture_console_messages else None,
)
except Exception as e:
@@ -866,6 +1047,15 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
finally:
# If no session_id is given we should close the page
if not config.session_id:
# Detach listeners before closing to prevent potential errors during close
if config.capture_network_requests:
page.remove_listener("request", handle_request_capture)
page.remove_listener("response", handle_response_capture)
page.remove_listener("requestfailed", handle_request_failed_capture)
if config.capture_console_messages:
page.remove_listener("console", handle_console_capture)
page.remove_listener("pageerror", handle_pageerror_capture)
await page.close()
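Putting the two capture paths together, a hedged end-to-end sketch (flag names and result fields match this diff; import path and URL are illustrative):

```python
# Sketch: enable network/console capture and inspect what was recorded.
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig  # import path assumed

async def main():
    config = CrawlerRunConfig(
        capture_network_requests=True,   # fills result.network_requests
        capture_console_messages=True,   # fills result.console_messages
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=config)
        for event in result.network_requests or []:
            # event_type is "request", "response", or "request_failed"
            print(event["event_type"], event["url"])
        for msg in result.console_messages or []:
            print(msg["type"], msg["text"])

asyncio.run(main())
```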
async def _handle_full_page_scan(self, page: Page, scroll_delay: float = 0.1):
@@ -1028,7 +1218,107 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
"""
pdf_data = await page.pdf(print_background=True)
return pdf_data
async def capture_mhtml(self, page: Page) -> Optional[str]:
"""
Captures the current page as MHTML using CDP.
MHTML (MIME HTML) is a web page archive format that combines the HTML content
with its resources (images, CSS, etc.) into a single MIME-encoded file.
Args:
page (Page): The Playwright page object
Returns:
Optional[str]: The MHTML content as a string, or None if there was an error
"""
try:
# Ensure the page is fully loaded before capturing
try:
# Wait for DOM content and network to be idle
await page.wait_for_load_state("domcontentloaded", timeout=5000)
await page.wait_for_load_state("networkidle", timeout=5000)
# Give a little extra time for JavaScript execution
await page.wait_for_timeout(1000)
# Wait for any animations to complete
await page.evaluate("""
() => new Promise(resolve => {
// First requestAnimationFrame gets scheduled after the next repaint
requestAnimationFrame(() => {
// Second requestAnimationFrame gets called after all animations complete
requestAnimationFrame(resolve);
});
})
""")
except Error as e:
if self.logger:
self.logger.warning(
message="Wait for load state timed out: {error}",
tag="MHTML",
params={"error": str(e)},
)
# Create a new CDP session
cdp_session = await page.context.new_cdp_session(page)
# Call Page.captureSnapshot with format "mhtml"
result = await cdp_session.send("Page.captureSnapshot", {"format": "mhtml"})
# The result contains a 'data' field with the MHTML content
mhtml_content = result.get("data")
# Detach the CDP session to clean up resources
await cdp_session.detach()
return mhtml_content
except Exception as e:
# Log the error but don't raise it - we'll just return None for the MHTML
if self.logger:
self.logger.error(
message="Failed to capture MHTML: {error}",
tag="MHTML",
params={"error": str(e)},
)
return None
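A matching usage sketch for MHTML capture (flag and field names as in this diff; the output path is illustrative):

```python
# Sketch: archive a page as single-file MHTML via the CDP snapshot above.
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig  # import path assumed

async def main():
    config = CrawlerRunConfig(capture_mhtml=True)
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=config)
        if result.mhtml:  # None when Page.captureSnapshot failed
            with open("page.mhtml", "w", encoding="utf-8") as f:
                f.write(result.mhtml)

asyncio.run(main())
```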
async def _capture_console_messages(
self, page: Page, file_path: str
) -> List[Dict[str, Union[str, float]]]:
"""
Captures console messages from the page.
Args:
page (Page): The Playwright page object
file_path (str): The local file URL to load while listening for console output
Returns:
List[Dict[str, Union[str, float]]]: A list of captured console messages
"""
captured_console = []
def handle_console_message(msg):
try:
message_type = msg.type
message_text = msg.text
entry = {
"type": message_type,
"text": message_text,
"timestamp": time.time(),
}
captured_console.append(entry)
except Exception as e:
if self.logger:
self.logger.warning(
f"Error capturing console message: {e}", tag="CAPTURE"
)
page.on("console", handle_console_message)
await page.goto(file_path)
return captured_console
async def take_screenshot(self, page, **kwargs) -> str:
"""
Take a screenshot of the current page.

View File

@@ -4,19 +4,14 @@ import aiosqlite
import asyncio
from typing import Optional, Dict
from contextlib import asynccontextmanager
import logging
import json # Added for serialization/deserialization
from .utils import ensure_content_dirs, generate_content_hash
import json
from .models import CrawlResult, MarkdownGenerationResult, StringCompatibleMarkdown
import aiofiles
from .utils import VersionManager
from .async_logger import AsyncLogger
from .utils import get_error_context, create_box_message
# Set up logging
# logging.basicConfig(level=logging.INFO)
# logger = logging.getLogger(__name__)
# logger.setLevel(logging.INFO)
from .utils import ensure_content_dirs, generate_content_hash
from .utils import VersionManager
from .utils import get_error_context, create_box_message
base_directory = DB_PATH = os.path.join(
os.getenv("CRAWL4_AI_BASE_DIRECTORY", Path.home()), ".crawl4ai"

View File

@@ -4,17 +4,15 @@ from .models import (
CrawlResult,
CrawlerTaskResult,
CrawlStatus,
DisplayMode,
CrawlStats,
DomainState,
)
from rich.live import Live
from rich.table import Table
from rich.console import Console
from rich import box
from datetime import timedelta
from .components.crawler_monitor import CrawlerMonitor
from .types import AsyncWebCrawler
from collections.abc import AsyncGenerator
import time
import psutil
import asyncio
@@ -24,8 +22,6 @@ from urllib.parse import urlparse
import random
from abc import ABC, abstractmethod
from math import inf as infinity
class RateLimiter:
def __init__(
@@ -87,201 +83,6 @@ class RateLimiter:
return True
class CrawlerMonitor:
def __init__(
self,
max_visible_rows: int = 15,
display_mode: DisplayMode = DisplayMode.DETAILED,
):
self.console = Console()
self.max_visible_rows = max_visible_rows
self.display_mode = display_mode
self.stats: Dict[str, CrawlStats] = {}
self.process = psutil.Process()
self.start_time = time.time()
self.live = Live(self._create_table(), refresh_per_second=2)
def start(self):
self.live.start()
def stop(self):
self.live.stop()
def add_task(self, task_id: str, url: str):
self.stats[task_id] = CrawlStats(
task_id=task_id, url=url, status=CrawlStatus.QUEUED
)
self.live.update(self._create_table())
def update_task(self, task_id: str, **kwargs):
if task_id in self.stats:
for key, value in kwargs.items():
setattr(self.stats[task_id], key, value)
self.live.update(self._create_table())
def _create_aggregated_table(self) -> Table:
"""Creates a compact table showing only aggregated statistics"""
table = Table(
box=box.ROUNDED,
title="Crawler Status Overview",
title_style="bold magenta",
header_style="bold blue",
show_lines=True,
)
# Calculate statistics
total_tasks = len(self.stats)
queued = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.QUEUED
)
in_progress = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.IN_PROGRESS
)
completed = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.COMPLETED
)
failed = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.FAILED
)
# Memory statistics
current_memory = self.process.memory_info().rss / (1024 * 1024)
total_task_memory = sum(stat.memory_usage for stat in self.stats.values())
peak_memory = max(
(stat.peak_memory for stat in self.stats.values()), default=0.0
)
# Duration
duration = time.time() - self.start_time
# Create status row
table.add_column("Status", style="bold cyan")
table.add_column("Count", justify="right")
table.add_column("Percentage", justify="right")
table.add_row("Total Tasks", str(total_tasks), "100%")
table.add_row(
"[yellow]In Queue[/yellow]",
str(queued),
f"{(queued / total_tasks * 100):.1f}%" if total_tasks > 0 else "0%",
)
table.add_row(
"[blue]In Progress[/blue]",
str(in_progress),
f"{(in_progress / total_tasks * 100):.1f}%" if total_tasks > 0 else "0%",
)
table.add_row(
"[green]Completed[/green]",
str(completed),
f"{(completed / total_tasks * 100):.1f}%" if total_tasks > 0 else "0%",
)
table.add_row(
"[red]Failed[/red]",
str(failed),
f"{(failed / total_tasks * 100):.1f}%" if total_tasks > 0 else "0%",
)
# Add memory information
table.add_section()
table.add_row(
"[magenta]Current Memory[/magenta]", f"{current_memory:.1f} MB", ""
)
table.add_row(
"[magenta]Total Task Memory[/magenta]", f"{total_task_memory:.1f} MB", ""
)
table.add_row(
"[magenta]Peak Task Memory[/magenta]", f"{peak_memory:.1f} MB", ""
)
table.add_row(
"[yellow]Runtime[/yellow]",
str(timedelta(seconds=int(duration))),
"",
)
return table
def _create_detailed_table(self) -> Table:
table = Table(
box=box.ROUNDED,
title="Crawler Performance Monitor",
title_style="bold magenta",
header_style="bold blue",
)
# Add columns
table.add_column("Task ID", style="cyan", no_wrap=True)
table.add_column("URL", style="cyan", no_wrap=True)
table.add_column("Status", style="bold")
table.add_column("Memory (MB)", justify="right")
table.add_column("Peak (MB)", justify="right")
table.add_column("Duration", justify="right")
table.add_column("Info", style="italic")
# Add summary row
total_memory = sum(stat.memory_usage for stat in self.stats.values())
active_count = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.IN_PROGRESS
)
completed_count = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.COMPLETED
)
failed_count = sum(
1 for stat in self.stats.values() if stat.status == CrawlStatus.FAILED
)
table.add_row(
"[bold yellow]SUMMARY",
f"Total: {len(self.stats)}",
f"Active: {active_count}",
f"{total_memory:.1f}",
f"{self.process.memory_info().rss / (1024 * 1024):.1f}",
str(
timedelta(
seconds=int(time.time() - self.start_time)
)
),
f"{completed_count}{failed_count}",
style="bold",
)
table.add_section()
# Add rows for each task
visible_stats = sorted(
self.stats.values(),
key=lambda x: (
x.status != CrawlStatus.IN_PROGRESS,
x.status != CrawlStatus.QUEUED,
x.end_time or infinity,
),
)[: self.max_visible_rows]
for stat in visible_stats:
status_style = {
CrawlStatus.QUEUED: "white",
CrawlStatus.IN_PROGRESS: "yellow",
CrawlStatus.COMPLETED: "green",
CrawlStatus.FAILED: "red",
}[stat.status]
table.add_row(
stat.task_id[:8], # Show first 8 chars of task ID
stat.url[:40] + "..." if len(stat.url) > 40 else stat.url,
f"[{status_style}]{stat.status.value}[/{status_style}]",
f"{stat.memory_usage:.1f}",
f"{stat.peak_memory:.1f}",
stat.duration,
stat.error_message[:40] if stat.error_message else "",
)
return table
def _create_table(self) -> Table:
"""Creates the appropriate table based on display mode"""
if self.display_mode == DisplayMode.AGGREGATED:
return self._create_aggregated_table()
return self._create_detailed_table()
class BaseDispatcher(ABC):
def __init__(
@@ -309,7 +110,7 @@ class BaseDispatcher(ABC):
async def run_urls(
self,
urls: List[str],
crawler: "AsyncWebCrawler", # noqa: F821
crawler: AsyncWebCrawler, # noqa: F821
config: CrawlerRunConfig,
monitor: Optional[CrawlerMonitor] = None,
) -> List[CrawlerTaskResult]:
@@ -320,71 +121,144 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
def __init__(
self,
memory_threshold_percent: float = 90.0,
critical_threshold_percent: float = 95.0, # New critical threshold
recovery_threshold_percent: float = 85.0, # New recovery threshold
check_interval: float = 1.0,
max_session_permit: int = 20,
memory_wait_timeout: float = 300.0, # 5 minutes default timeout
fairness_timeout: float = 600.0, # 10 minutes before prioritizing long-waiting URLs
rate_limiter: Optional[RateLimiter] = None,
monitor: Optional[CrawlerMonitor] = None,
):
super().__init__(rate_limiter, monitor)
self.memory_threshold_percent = memory_threshold_percent
self.critical_threshold_percent = critical_threshold_percent
self.recovery_threshold_percent = recovery_threshold_percent
self.check_interval = check_interval
self.max_session_permit = max_session_permit
self.memory_wait_timeout = memory_wait_timeout
self.result_queue = asyncio.Queue() # Queue for storing results
self.fairness_timeout = fairness_timeout
self.result_queue = asyncio.Queue()
self.task_queue = asyncio.PriorityQueue() # Priority queue for better management
self.memory_pressure_mode = False # Flag to indicate when we're in memory pressure mode
self.current_memory_percent = 0.0 # Track current memory usage
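For reference, a construction sketch with the thresholds above (values shown are the defaults; the import path is assumed, and the commented line shows where the dispatcher plugs into `arun_many` later in this diff):

```python
# Sketch: configure the memory-adaptive dispatcher explicitly.
from crawl4ai.async_dispatcher import MemoryAdaptiveDispatcher  # path assumed

dispatcher = MemoryAdaptiveDispatcher(
    memory_threshold_percent=90.0,    # stop launching new tasks above this
    critical_threshold_percent=95.0,  # requeue in-flight work above this
    recovery_threshold_percent=85.0,  # leave pressure mode below this
    max_session_permit=20,
    fairness_timeout=600.0,           # long-waiting URLs jump the queue
)
# results = await crawler.arun_many(urls, config=config, dispatcher=dispatcher)
```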
async def _memory_monitor_task(self):
"""Background task to continuously monitor memory usage and update state"""
while True:
self.current_memory_percent = psutil.virtual_memory().percent
# Enter memory pressure mode if we cross the threshold
if not self.memory_pressure_mode and self.current_memory_percent >= self.memory_threshold_percent:
self.memory_pressure_mode = True
if self.monitor:
self.monitor.update_memory_status("PRESSURE")
# Exit memory pressure mode if we go below recovery threshold
elif self.memory_pressure_mode and self.current_memory_percent <= self.recovery_threshold_percent:
self.memory_pressure_mode = False
if self.monitor:
self.monitor.update_memory_status("NORMAL")
# In critical mode, we might need to take more drastic action
if self.current_memory_percent >= self.critical_threshold_percent:
if self.monitor:
self.monitor.update_memory_status("CRITICAL")
# We could implement additional memory-saving measures here
await asyncio.sleep(self.check_interval)
def _get_priority_score(self, wait_time: float, retry_count: int) -> float:
"""Calculate priority score (lower is higher priority)
- URLs waiting longer than fairness_timeout get higher priority
- More retry attempts decreases priority
"""
if wait_time > self.fairness_timeout:
# High priority for long-waiting URLs
return -wait_time
# Standard priority based on retries
return retry_count
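A tiny worked example of that ordering: queue entries are `(priority, (url, task_id, retry_count, enqueue_time))` tuples, and the `PriorityQueue` pops the smallest score first:

```python
# Worked example of _get_priority_score ordering (fairness_timeout = 600s).
scores = {
    "starved, waited 700s": -700.0,  # -wait_time once past fairness_timeout
    "fresh, 0 retries": 0,           # retry_count
    "retried twice": 2,              # retry_count
}
# Smallest score pops first: the starved URL runs before fresh work,
# and repeatedly retried work runs last.
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(score, name)
```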
async def crawl_url(
self,
url: str,
config: CrawlerRunConfig,
task_id: str,
retry_count: int = 0,
) -> CrawlerTaskResult:
start_time = time.time()
error_message = ""
memory_usage = peak_memory = 0.0
# Get starting memory for accurate measurement
process = psutil.Process()
start_memory = process.memory_info().rss / (1024 * 1024)
try:
if self.monitor:
self.monitor.update_task(
task_id, status=CrawlStatus.IN_PROGRESS, start_time=start_time
task_id,
status=CrawlStatus.IN_PROGRESS,
start_time=start_time,
retry_count=retry_count
)
self.concurrent_sessions += 1
if self.rate_limiter:
await self.rate_limiter.wait_if_needed(url)
process = psutil.Process()
start_memory = process.memory_info().rss / (1024 * 1024)
# Check if we're in critical memory state
if self.current_memory_percent >= self.critical_threshold_percent:
# Requeue this task with increased priority and retry count
enqueue_time = time.time()
priority = self._get_priority_score(enqueue_time - start_time, retry_count + 1)
await self.task_queue.put((priority, (url, task_id, retry_count + 1, enqueue_time)))
# Update monitoring
if self.monitor:
self.monitor.update_task(
task_id,
status=CrawlStatus.QUEUED,
error_message="Requeued due to critical memory pressure"
)
# Return placeholder result with requeued status
return CrawlerTaskResult(
task_id=task_id,
url=url,
result=CrawlResult(
url=url, html="", metadata={"status": "requeued"},
success=False, error_message="Requeued due to critical memory pressure"
),
memory_usage=0,
peak_memory=0,
start_time=start_time,
end_time=time.time(),
error_message="Requeued due to critical memory pressure",
retry_count=retry_count + 1
)
# Execute the crawl
result = await self.crawler.arun(url, config=config, session_id=task_id)
# Measure memory usage
end_memory = process.memory_info().rss / (1024 * 1024)
memory_usage = peak_memory = end_memory - start_memory
# Handle rate limiting
if self.rate_limiter and result.status_code:
if not self.rate_limiter.update_delay(url, result.status_code):
error_message = f"Rate limit retry count exceeded for domain {urlparse(url).netloc}"
if self.monitor:
self.monitor.update_task(task_id, status=CrawlStatus.FAILED)
result = CrawlerTaskResult(
task_id=task_id,
url=url,
result=result,
memory_usage=memory_usage,
peak_memory=peak_memory,
start_time=start_time,
end_time=time.time(),
error_message=error_message,
)
await self.result_queue.put(result)
return result
# Update status based on result
if not result.success:
error_message = result.error_message
if self.monitor:
self.monitor.update_task(task_id, status=CrawlStatus.FAILED)
elif self.monitor:
self.monitor.update_task(task_id, status=CrawlStatus.COMPLETED)
except Exception as e:
error_message = str(e)
if self.monitor:
@@ -392,7 +266,7 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
result = CrawlResult(
url=url, html="", metadata={}, success=False, error_message=str(e)
)
finally:
end_time = time.time()
if self.monitor:
@@ -402,9 +276,10 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
memory_usage=memory_usage,
peak_memory=peak_memory,
error_message=error_message,
retry_count=retry_count
)
self.concurrent_sessions -= 1
return CrawlerTaskResult(
task_id=task_id,
url=url,
@@ -414,116 +289,240 @@ class MemoryAdaptiveDispatcher(BaseDispatcher):
start_time=start_time,
end_time=end_time,
error_message=error_message,
retry_count=retry_count
)
async def run_urls(
self,
urls: List[str],
crawler: "AsyncWebCrawler", # noqa: F821
crawler: AsyncWebCrawler,
config: CrawlerRunConfig,
) -> List[CrawlerTaskResult]:
self.crawler = crawler
# Start the memory monitor task
memory_monitor = asyncio.create_task(self._memory_monitor_task())
if self.monitor:
self.monitor.start()
results = []
try:
pending_tasks = []
active_tasks = []
task_queue = []
for url in urls:
task_id = str(uuid.uuid4())
if self.monitor:
self.monitor.add_task(task_id, url)
task_queue.append((url, task_id))
while task_queue or active_tasks:
wait_start_time = time.time()
while len(active_tasks) < self.max_session_permit and task_queue:
if psutil.virtual_memory().percent >= self.memory_threshold_percent:
# Check if we've exceeded the timeout
if time.time() - wait_start_time > self.memory_wait_timeout:
raise MemoryError(
f"Memory usage above threshold ({self.memory_threshold_percent}%) for more than {self.memory_wait_timeout} seconds"
)
await asyncio.sleep(self.check_interval)
continue
url, task_id = task_queue.pop(0)
task = asyncio.create_task(self.crawl_url(url, config, task_id))
active_tasks.append(task)
if not active_tasks:
await asyncio.sleep(self.check_interval)
continue
done, pending = await asyncio.wait(
active_tasks, return_when=asyncio.FIRST_COMPLETED
)
pending_tasks.extend(done)
active_tasks = list(pending)
return await asyncio.gather(*pending_tasks)
finally:
if self.monitor:
self.monitor.stop()
async def run_urls_stream(
self,
urls: List[str],
crawler: "AsyncWebCrawler", # noqa: F821
config: CrawlerRunConfig,
) -> AsyncGenerator[CrawlerTaskResult, None]:
self.crawler = crawler
if self.monitor:
self.monitor.start()
try:
active_tasks = []
task_queue = []
completed_count = 0
total_urls = len(urls)
# Initialize task queue
for url in urls:
task_id = str(uuid.uuid4())
if self.monitor:
self.monitor.add_task(task_id, url)
task_queue.append((url, task_id))
while completed_count < total_urls:
# Start new tasks if memory permits
while len(active_tasks) < self.max_session_permit and task_queue:
if psutil.virtual_memory().percent >= self.memory_threshold_percent:
await asyncio.sleep(self.check_interval)
continue
url, task_id = task_queue.pop(0)
task = asyncio.create_task(self.crawl_url(url, config, task_id))
active_tasks.append(task)
if not active_tasks and not task_queue:
break
# Wait for any task to complete and yield results
# Add to queue with initial priority 0, retry count 0, and current time
await self.task_queue.put((0, (url, task_id, 0, time.time())))
active_tasks = []
# Process until both queues are empty
while not self.task_queue.empty() or active_tasks:
# If memory pressure is low, start new tasks
if not self.memory_pressure_mode and len(active_tasks) < self.max_session_permit:
try:
# Try to get a task with timeout to avoid blocking indefinitely
priority, (url, task_id, retry_count, enqueue_time) = await asyncio.wait_for(
self.task_queue.get(), timeout=0.1
)
# Create and start the task
task = asyncio.create_task(
self.crawl_url(url, config, task_id, retry_count)
)
active_tasks.append(task)
# Update waiting time in monitor
if self.monitor:
wait_time = time.time() - enqueue_time
self.monitor.update_task(
task_id,
wait_time=wait_time,
status=CrawlStatus.IN_PROGRESS
)
except asyncio.TimeoutError:
# No tasks in queue, that's fine
pass
# Wait for completion even if queue is starved
if active_tasks:
done, pending = await asyncio.wait(
active_tasks, timeout=0.1, return_when=asyncio.FIRST_COMPLETED
)
# Process completed tasks
for completed_task in done:
result = await completed_task
completed_count += 1
yield result
results.append(result)
# Update active tasks list
active_tasks = list(pending)
else:
await asyncio.sleep(self.check_interval)
# If no active tasks but still waiting, sleep briefly
await asyncio.sleep(self.check_interval / 2)
# Update priorities for waiting tasks if needed
await self._update_queue_priorities()
return results
except Exception as e:
if self.monitor:
self.monitor.update_memory_status(f"QUEUE_ERROR: {str(e)}")
finally:
# Clean up
memory_monitor.cancel()
if self.monitor:
self.monitor.stop()
async def _update_queue_priorities(self):
"""Periodically update priorities of items in the queue to prevent starvation"""
# Skip if queue is empty
if self.task_queue.empty():
return
# Use a drain-and-refill approach to update all priorities
temp_items = []
# Drain the queue (with a safety timeout to prevent blocking)
try:
drain_start = time.time()
while not self.task_queue.empty() and time.time() - drain_start < 5.0: # 5 second safety timeout
try:
# Get item from queue with timeout
priority, (url, task_id, retry_count, enqueue_time) = await asyncio.wait_for(
self.task_queue.get(), timeout=0.1
)
# Calculate new priority based on current wait time
current_time = time.time()
wait_time = current_time - enqueue_time
new_priority = self._get_priority_score(wait_time, retry_count)
# Store with updated priority
temp_items.append((new_priority, (url, task_id, retry_count, enqueue_time)))
# Update monitoring stats for this task
if self.monitor and task_id in self.monitor.stats:
self.monitor.update_task(task_id, wait_time=wait_time)
except asyncio.TimeoutError:
# Queue might be empty or very slow
break
except Exception as e:
# If anything goes wrong, make sure we refill the queue with what we've got
if self.monitor:
self.monitor.update_memory_status(f"QUEUE_ERROR: {str(e)}")
# Calculate queue statistics
if temp_items and self.monitor:
total_queued = len(temp_items)
wait_times = [item[1][3] for item in temp_items]
highest_wait_time = time.time() - min(wait_times) if wait_times else 0
avg_wait_time = sum(time.time() - t for t in wait_times) / len(wait_times) if wait_times else 0
# Update queue statistics in monitor
self.monitor.update_queue_statistics(
total_queued=total_queued,
highest_wait_time=highest_wait_time,
avg_wait_time=avg_wait_time
)
# Sort by priority (lowest number = highest priority)
temp_items.sort(key=lambda x: x[0])
# Refill the queue with updated priorities
for item in temp_items:
await self.task_queue.put(item)
async def run_urls_stream(
self,
urls: List[str],
crawler: AsyncWebCrawler,
config: CrawlerRunConfig,
) -> AsyncGenerator[CrawlerTaskResult, None]:
self.crawler = crawler
# Start the memory monitor task
memory_monitor = asyncio.create_task(self._memory_monitor_task())
if self.monitor:
self.monitor.start()
try:
# Initialize task queue
for url in urls:
task_id = str(uuid.uuid4())
if self.monitor:
self.monitor.add_task(task_id, url)
# Add to queue with initial priority 0, retry count 0, and current time
await self.task_queue.put((0, (url, task_id, 0, time.time())))
active_tasks = []
completed_count = 0
total_urls = len(urls)
while completed_count < total_urls:
# If memory pressure is low, start new tasks
if not self.memory_pressure_mode and len(active_tasks) < self.max_session_permit:
try:
# Try to get a task with timeout
priority, (url, task_id, retry_count, enqueue_time) = await asyncio.wait_for(
self.task_queue.get(), timeout=0.1
)
# Create and start the task
task = asyncio.create_task(
self.crawl_url(url, config, task_id, retry_count)
)
active_tasks.append(task)
# Update waiting time in monitor
if self.monitor:
wait_time = time.time() - enqueue_time
self.monitor.update_task(
task_id,
wait_time=wait_time,
status=CrawlStatus.IN_PROGRESS
)
except asyncio.TimeoutError:
# No tasks in queue, that's fine
pass
# Process completed tasks and yield results
if active_tasks:
done, pending = await asyncio.wait(
active_tasks, timeout=0.1, return_when=asyncio.FIRST_COMPLETED
)
for completed_task in done:
result = await completed_task
# Only count as completed if it wasn't requeued
if "requeued" not in result.error_message:
completed_count += 1
yield result
# Update active tasks list
active_tasks = list(pending)
else:
# If no active tasks but still waiting, sleep briefly
await asyncio.sleep(self.check_interval / 2)
# Update priorities for waiting tasks if needed
await self._update_queue_priorities()
finally:
# Clean up
memory_monitor.cancel()
if self.monitor:
self.monitor.stop()
class SemaphoreDispatcher(BaseDispatcher):
def __init__(
@@ -620,7 +619,7 @@ class SemaphoreDispatcher(BaseDispatcher):
async def run_urls(
self,
crawler: "AsyncWebCrawler", # noqa: F821
crawler: AsyncWebCrawler, # noqa: F821
urls: List[str],
config: CrawlerRunConfig,
) -> List[CrawlerTaskResult]:
@@ -644,4 +643,4 @@ class SemaphoreDispatcher(BaseDispatcher):
return await asyncio.gather(*tasks, return_exceptions=True)
finally:
if self.monitor:
self.monitor.stop()
self.monitor.stop()

View File

@@ -156,9 +156,22 @@ class AsyncLogger(AsyncLoggerBase):
formatted_message = message.format(**params)
# Then apply colors if specified
color_map = {
"green": Fore.GREEN,
"red": Fore.RED,
"yellow": Fore.YELLOW,
"blue": Fore.BLUE,
"cyan": Fore.CYAN,
"magenta": Fore.MAGENTA,
"white": Fore.WHITE,
"black": Fore.BLACK,
"reset": Style.RESET_ALL,
}
if colors:
for key, color in colors.items():
# Find the formatted value in the message and wrap it with color
if color in color_map:
color = color_map[color]
if key in params:
value_str = str(params[key])
formatted_message = formatted_message.replace(
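A call sketch for the new color mapping (the `colors` keyword and default constructor are inferred from the surrounding code, so treat the exact signature as an assumption):

```python
# Hedged sketch: colorizing a formatted parameter via the new color_map.
from crawl4ai.async_logger import AsyncLogger  # import path assumed

logger = AsyncLogger()
logger.info(
    message="Fetched {url} in {timing}s",
    tag="FETCH",
    params={"url": "https://example.com", "timing": 1.23},
    colors={"url": "cyan"},  # wraps the formatted value in Fore.CYAN
)
```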

View File

@@ -10,20 +10,26 @@ import asyncio
# from contextlib import nullcontext, asynccontextmanager
from contextlib import asynccontextmanager
from .models import CrawlResult, MarkdownGenerationResult, DispatchResult, ScrapingResult
from .models import (
CrawlResult,
MarkdownGenerationResult,
DispatchResult,
ScrapingResult,
CrawlResultContainer,
RunManyReturn
)
from .async_database import async_db_manager
from .chunking_strategy import * # noqa: F403
from .chunking_strategy import RegexChunking, ChunkingStrategy, IdentityChunking
from .chunking_strategy import IdentityChunking
from .content_filter_strategy import * # noqa: F403
from .content_filter_strategy import RelevantContentFilter
from .extraction_strategy import * # noqa: F403
from .extraction_strategy import NoExtractionStrategy, ExtractionStrategy
from .extraction_strategy import * # noqa: F403
from .extraction_strategy import NoExtractionStrategy
from .async_crawler_strategy import (
AsyncCrawlerStrategy,
AsyncPlaywrightCrawlerStrategy,
AsyncCrawlResponse,
)
from .cache_context import CacheMode, CacheContext, _legacy_to_cache_mode
from .cache_context import CacheMode, CacheContext
from .markdown_generation_strategy import (
DefaultMarkdownGenerator,
MarkdownGenerationStrategy,
@@ -31,10 +37,9 @@ from .markdown_generation_strategy import (
from .deep_crawling import DeepCrawlDecorator
from .async_logger import AsyncLogger, AsyncLoggerBase
from .async_configs import BrowserConfig, CrawlerRunConfig
from .async_dispatcher import * # noqa: F403
from .async_dispatcher import * # noqa: F403
from .async_dispatcher import BaseDispatcher, MemoryAdaptiveDispatcher, RateLimiter
from .config import MIN_WORD_THRESHOLD
from .utils import (
sanitize_input_encode,
InvalidCSSSelectorError,
@@ -44,16 +49,6 @@ from .utils import (
RobotsParser,
)
from typing import Union, AsyncGenerator, TypeVar
CrawlResultT = TypeVar('CrawlResultT', bound=CrawlResult)
RunManyReturn = Union[CrawlResultT, List[CrawlResultT], AsyncGenerator[CrawlResultT, None]]
DeepCrawlSingleReturn = Union[List[CrawlResultT], AsyncGenerator[CrawlResultT, None]]
DeepCrawlManyReturn = Union[
List[List[CrawlResultT]],
AsyncGenerator[CrawlResultT, None],
]
class AsyncWebCrawler:
"""
@@ -166,23 +161,18 @@ class AsyncWebCrawler:
# Decorate arun method with deep crawling capabilities
self._deep_handler = DeepCrawlDecorator(self)
self.arun = self._deep_handler(self.arun)
self.arun = self._deep_handler(self.arun)
async def start(self):
"""
Start the crawler explicitly without using context manager.
This is equivalent to using 'async with' but gives more control over the lifecycle.
This method will:
1. Initialize the browser and context
2. Perform warmup sequence
3. Return the crawler instance for method chaining
Returns:
AsyncWebCrawler: The initialized crawler instance
"""
await self.crawler_strategy.__aenter__()
await self.awarmup()
self.logger.info(f"Crawl4AI {crawl4ai_version}", tag="INIT")
self.ready = True
return self
async def close(self):
@@ -202,18 +192,6 @@ class AsyncWebCrawler:
async def __aexit__(self, exc_type, exc_val, exc_tb):
await self.close()
async def awarmup(self):
"""
Initialize the crawler with warm-up sequence.
This method:
1. Logs initialization info
2. Sets up browser configuration
3. Marks the crawler as ready
"""
self.logger.info(f"Crawl4AI {crawl4ai_version}", tag="INIT")
self.ready = True
@asynccontextmanager
async def nullcontext(self):
"""异步空上下文管理器"""
@@ -223,23 +201,6 @@ class AsyncWebCrawler:
self,
url: str,
config: CrawlerRunConfig = None,
# Legacy parameters maintained for backwards compatibility
# word_count_threshold=MIN_WORD_THRESHOLD,
# extraction_strategy: ExtractionStrategy = None,
# chunking_strategy: ChunkingStrategy = RegexChunking(),
# content_filter: RelevantContentFilter = None,
# cache_mode: Optional[CacheMode] = None,
# Deprecated cache parameters
# bypass_cache: bool = False,
# disable_cache: bool = False,
# no_cache_read: bool = False,
# no_cache_write: bool = False,
# Other legacy parameters
# css_selector: str = None,
# screenshot: bool = False,
# pdf: bool = False,
# user_agent: str = None,
# verbose=True,
**kwargs,
) -> RunManyReturn:
"""
@@ -270,56 +231,24 @@ class AsyncWebCrawler:
Returns:
CrawlResult: The result of crawling and processing
"""
crawler_config = config or CrawlerRunConfig()
# Auto-start if not ready
if not self.ready:
await self.start()
config = config or CrawlerRunConfig()
if not isinstance(url, str) or not url:
raise ValueError("Invalid URL, make sure the URL is a non-empty string")
async with self._lock or self.nullcontext():
try:
self.logger.verbose = crawler_config.verbose
# Handle configuration
if crawler_config is not None:
config = crawler_config
else:
# Merge all parameters into a single kwargs dict for config creation
# config_kwargs = {
# "word_count_threshold": word_count_threshold,
# "extraction_strategy": extraction_strategy,
# "chunking_strategy": chunking_strategy,
# "content_filter": content_filter,
# "cache_mode": cache_mode,
# "bypass_cache": bypass_cache,
# "disable_cache": disable_cache,
# "no_cache_read": no_cache_read,
# "no_cache_write": no_cache_write,
# "css_selector": css_selector,
# "screenshot": screenshot,
# "pdf": pdf,
# "verbose": verbose,
# **kwargs,
# }
# config = CrawlerRunConfig.from_kwargs(config_kwargs)
pass
# Handle deprecated cache parameters
# if any([bypass_cache, disable_cache, no_cache_read, no_cache_write]):
# # Convert legacy parameters if cache_mode not provided
# if config.cache_mode is None:
# config.cache_mode = _legacy_to_cache_mode(
# disable_cache=disable_cache,
# bypass_cache=bypass_cache,
# no_cache_read=no_cache_read,
# no_cache_write=no_cache_write,
# )
self.logger.verbose = config.verbose
# Default to ENABLED if no cache mode specified
if config.cache_mode is None:
config.cache_mode = CacheMode.ENABLED
# Create cache context
cache_context = CacheContext(
url, config.cache_mode, False
)
cache_context = CacheContext(url, config.cache_mode, False)
# Initialize processing variables
async_response: AsyncCrawlResponse = None
@@ -349,7 +278,7 @@ class AsyncWebCrawler:
# if config.screenshot and not screenshot or config.pdf and not pdf:
if config.screenshot and not screenshot_data:
cached_result = None
if config.pdf and not pdf_data:
cached_result = None
@@ -381,14 +310,18 @@ class AsyncWebCrawler:
# Check robots.txt if enabled
if config and config.check_robots_txt:
if not await self.robots_parser.can_fetch(url, self.browser_config.user_agent):
if not await self.robots_parser.can_fetch(
url, self.browser_config.user_agent
):
return CrawlResult(
url=url,
html="",
success=False,
status_code=403,
error_message="Access denied by robots.txt",
response_headers={"X-Robots-Status": "Blocked by robots.txt"}
response_headers={
"X-Robots-Status": "Blocked by robots.txt"
},
)
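From the caller's side, the gate above surfaces as a failed result rather than an exception. A hedged sketch (flag name per this diff; import path and URL illustrative):

```python
# Sketch: opt into robots.txt checking and detect a blocked fetch.
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig  # import path assumed

async def main():
    config = CrawlerRunConfig(check_robots_txt=True)
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com/private", config=config)
        if not result.success and result.status_code == 403:
            print(result.error_message)  # "Access denied by robots.txt"

asyncio.run(main())
```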
##############################
@@ -415,7 +348,7 @@ class AsyncWebCrawler:
###############################################################
# Process the HTML content, Call CrawlerStrategy.process_html #
###############################################################
crawl_result : CrawlResult = await self.aprocess_html(
crawl_result: CrawlResult = await self.aprocess_html(
url=url,
html=html,
extracted_content=extracted_content,
@@ -432,9 +365,11 @@ class AsyncWebCrawler:
crawl_result.response_headers = async_response.response_headers
crawl_result.downloaded_files = async_response.downloaded_files
crawl_result.js_execution_result = js_execution_result
crawl_result.ssl_certificate = (
async_response.ssl_certificate
) # Add SSL certificate
crawl_result.mhtml = async_response.mhtml_data
crawl_result.ssl_certificate = async_response.ssl_certificate
# Add captured network and console data if available
crawl_result.network_requests = async_response.network_requests
crawl_result.console_messages = async_response.console_messages
crawl_result.success = bool(html)
crawl_result.session_id = getattr(config, "session_id", None)
@@ -457,7 +392,7 @@ class AsyncWebCrawler:
if cache_context.should_write() and not bool(cached_result):
await async_db_manager.acache_url(crawl_result)
return crawl_result
return CrawlResultContainer(crawl_result)
else:
self.logger.success(
@@ -474,7 +409,7 @@ class AsyncWebCrawler:
cached_result.success = bool(html)
cached_result.session_id = getattr(config, "session_id", None)
cached_result.redirected_url = cached_result.redirected_url or url
return cached_result
return CrawlResultContainer(cached_result)
except Exception as e:
error_context = get_error_context(sys.exc_info())
@@ -492,8 +427,10 @@ class AsyncWebCrawler:
tag="ERROR",
)
return CrawlResult(
url=url, html="", success=False, error_message=error_message
return CrawlResultContainer(
CrawlResult(
url=url, html="", success=False, error_message=error_message
)
)
async def aprocess_html(
@@ -534,15 +471,15 @@ class AsyncWebCrawler:
scraping_strategy.logger = self.logger
# Process HTML content
params = {k: v for k, v in config.to_dict().items() if k not in ["url"]}
params = config.__dict__.copy()
params.pop("url", None)
# add keys from kwargs to params that doesn't exist in params
params.update({k: v for k, v in kwargs.items() if k not in params.keys()})
################################
# Scraping Strategy Execution #
################################
result : ScrapingResult = scraping_strategy.scrap(url, html, **params)
result: ScrapingResult = scraping_strategy.scrap(url, html, **params)
if result is None:
raise ValueError(
@@ -591,7 +528,10 @@ class AsyncWebCrawler:
self.logger.info(
message="{url:.50}... | Time: {timing}s",
tag="SCRAPE",
params={"url": _url, "timing": int((time.perf_counter() - t1) * 1000) / 1000},
params={
"url": _url,
"timing": int((time.perf_counter() - t1) * 1000) / 1000,
},
)
################################
@@ -666,22 +606,22 @@ class AsyncWebCrawler:
async def arun_many(
self,
urls: List[str],
config: Optional[CrawlerRunConfig] = None,
config: Optional[CrawlerRunConfig] = None,
dispatcher: Optional[BaseDispatcher] = None,
# Legacy parameters maintained for backwards compatibility
word_count_threshold=MIN_WORD_THRESHOLD,
extraction_strategy: ExtractionStrategy = None,
chunking_strategy: ChunkingStrategy = RegexChunking(),
content_filter: RelevantContentFilter = None,
cache_mode: Optional[CacheMode] = None,
bypass_cache: bool = False,
css_selector: str = None,
screenshot: bool = False,
pdf: bool = False,
user_agent: str = None,
verbose=True,
**kwargs
) -> RunManyReturn:
# word_count_threshold=MIN_WORD_THRESHOLD,
# extraction_strategy: ExtractionStrategy = None,
# chunking_strategy: ChunkingStrategy = RegexChunking(),
# content_filter: RelevantContentFilter = None,
# cache_mode: Optional[CacheMode] = None,
# bypass_cache: bool = False,
# css_selector: str = None,
# screenshot: bool = False,
# pdf: bool = False,
# user_agent: str = None,
# verbose=True,
**kwargs,
) -> RunManyReturn:
"""
Runs the crawler for multiple URLs concurrently using a configurable dispatcher strategy.
@@ -712,20 +652,21 @@ class AsyncWebCrawler:
):
print(f"Processed {result.url}: {len(result.markdown)} chars")
"""
if config is None:
config = CrawlerRunConfig(
word_count_threshold=word_count_threshold,
extraction_strategy=extraction_strategy,
chunking_strategy=chunking_strategy,
content_filter=content_filter,
cache_mode=cache_mode,
bypass_cache=bypass_cache,
css_selector=css_selector,
screenshot=screenshot,
pdf=pdf,
verbose=verbose,
**kwargs,
)
config = config or CrawlerRunConfig()
# if config is None:
# config = CrawlerRunConfig(
# word_count_threshold=word_count_threshold,
# extraction_strategy=extraction_strategy,
# chunking_strategy=chunking_strategy,
# content_filter=content_filter,
# cache_mode=cache_mode,
# bypass_cache=bypass_cache,
# css_selector=css_selector,
# screenshot=screenshot,
# pdf=pdf,
# verbose=verbose,
# **kwargs,
# )
if dispatcher is None:
dispatcher = MemoryAdaptiveDispatcher(
@@ -736,37 +677,32 @@ class AsyncWebCrawler:
def transform_result(task_result):
return (
setattr(task_result.result, 'dispatch_result',
DispatchResult(
task_id=task_result.task_id,
memory_usage=task_result.memory_usage,
peak_memory=task_result.peak_memory,
start_time=task_result.start_time,
end_time=task_result.end_time,
error_message=task_result.error_message,
)
) or task_result.result
setattr(
task_result.result,
"dispatch_result",
DispatchResult(
task_id=task_result.task_id,
memory_usage=task_result.memory_usage,
peak_memory=task_result.peak_memory,
start_time=task_result.start_time,
end_time=task_result.end_time,
error_message=task_result.error_message,
),
)
or task_result.result
)
stream = config.stream
if stream:
async def result_transformer():
async for task_result in dispatcher.run_urls_stream(crawler=self, urls=urls, config=config):
async for task_result in dispatcher.run_urls_stream(
crawler=self, urls=urls, config=config
):
yield transform_result(task_result)
return result_transformer()
else:
_results = await dispatcher.run_urls(crawler=self, urls=urls, config=config)
return [transform_result(res) for res in _results]
async def aclear_cache(self):
"""Clear the cache database."""
await async_db_manager.cleanup()
async def aflush_cache(self):
"""Flush the cache database."""
await async_db_manager.aflush_db()
async def aget_cache_size(self):
"""Get the total number of cached items."""
return await async_db_manager.aget_total_count()
return [transform_result(res) for res in _results]
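A streaming sketch for `arun_many` (names per this diff; URLs illustrative). With `stream=True` the call returns an async generator, and each yielded result carries the `dispatch_result` attached by `transform_result` above:

```python
# Sketch: stream results as they complete instead of gathering a list.
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig  # import path assumed

async def main():
    config = CrawlerRunConfig(stream=True)  # selects run_urls_stream
    urls = ["https://example.com", "https://example.org"]
    async with AsyncWebCrawler() as crawler:
        async for result in await crawler.arun_many(urls, config=config):
            dr = result.dispatch_result  # DispatchResult from transform_result
            print(result.url, result.success, dr.memory_usage if dr else None)

asyncio.run(main())
```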

View File

@@ -145,17 +145,60 @@ class ManagedBrowser:
# Start browser process
try:
self.browser_process = subprocess.Popen(
args, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
# Monitor browser process output for errors
asyncio.create_task(self._monitor_browser_process())
# Use DETACHED_PROCESS flag on Windows to fully detach the process
# On Unix, we'll use preexec_fn=os.setpgrp to start the process in a new process group
if sys.platform == "win32":
self.browser_process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
)
else:
self.browser_process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=os.setpgrp # Start in a new process group
)
# We'll monitor for a short time to make sure it starts properly, but won't keep monitoring
await asyncio.sleep(0.5) # Give browser time to start
await self._initial_startup_check()
await asyncio.sleep(2) # Give browser time to start
return f"http://{self.host}:{self.debugging_port}"
except Exception as e:
await self.cleanup()
raise Exception(f"Failed to start browser: {e}")
async def _initial_startup_check(self):
"""
Perform a quick check to make sure the browser started successfully.
This only runs once at startup rather than continuously monitoring.
"""
if not self.browser_process:
return
# Check that process started without immediate termination
await asyncio.sleep(0.5)
if self.browser_process.poll() is not None:
# Process already terminated
stdout, stderr = b"", b""
try:
stdout, stderr = self.browser_process.communicate(timeout=0.5)
except subprocess.TimeoutExpired:
pass
self.logger.error(
message="Browser process terminated during startup | Code: {code} | STDOUT: {stdout} | STDERR: {stderr}",
tag="ERROR",
params={
"code": self.browser_process.returncode,
"stdout": stdout.decode() if stdout else "",
"stderr": stderr.decode() if stderr else "",
},
)
async def _monitor_browser_process(self):
"""
Monitor the browser process for unexpected termination.
@@ -167,6 +210,7 @@ class ManagedBrowser:
4. If any other error occurs, log the error message.
Note: This method should be called in a separate task to avoid blocking the main event loop.
This is DEPRECATED and should not be used for builtin browsers that need to outlive the Python process.
"""
if self.browser_process:
try:
@@ -261,22 +305,33 @@ class ManagedBrowser:
if self.browser_process:
try:
self.browser_process.terminate()
# Wait for process to end gracefully
for _ in range(10): # 10 attempts, 100ms each
if self.browser_process.poll() is not None:
break
await asyncio.sleep(0.1)
# For builtin browsers that should persist, we should check if it's a detached process
# Only terminate if we have proper control over the process
if not self.browser_process.poll():
# Process is still running
self.browser_process.terminate()
# Wait for process to end gracefully
for _ in range(10): # 10 attempts, 100ms each
if self.browser_process.poll() is not None:
break
await asyncio.sleep(0.1)
# Force kill if still running
if self.browser_process.poll() is None:
self.browser_process.kill()
await asyncio.sleep(0.1) # Brief wait for kill to take effect
# Force kill if still running
if self.browser_process.poll() is None:
if sys.platform == "win32":
# On Windows we might need taskkill for detached processes
try:
subprocess.run(["taskkill", "/F", "/PID", str(self.browser_process.pid)])
except Exception:
self.browser_process.kill()
else:
self.browser_process.kill()
await asyncio.sleep(0.1) # Brief wait for kill to take effect
except Exception as e:
self.logger.error(
message="Error terminating browser: {error}",
tag="ERROR",
tag="ERROR",
params={"error": str(e)},
)
@@ -379,7 +434,14 @@ class BrowserManager:
sessions (dict): Dictionary to store session information
session_ttl (int): Session timeout in seconds
"""
_playwright_instance = None
@classmethod
async def get_playwright(cls):
from playwright.async_api import async_playwright
if cls._playwright_instance is None:
cls._playwright_instance = await async_playwright().start()
return cls._playwright_instance
def __init__(self, browser_config: BrowserConfig, logger=None):
"""
@@ -429,32 +491,22 @@ class BrowserManager:
Note: This method should be called in a separate task to avoid blocking the main event loop.
"""
if self.playwright is None:
from playwright.async_api import async_playwright
if self.playwright is not None:
await self.close()
from playwright.async_api import async_playwright
self.playwright = await async_playwright().start()
self.playwright = await async_playwright().start()
if self.config.use_managed_browser:
cdp_url = await self.managed_browser.start()
if self.config.cdp_url or self.config.use_managed_browser:
self.config.use_managed_browser = True
cdp_url = await self.managed_browser.start() if not self.config.cdp_url else self.config.cdp_url
self.browser = await self.playwright.chromium.connect_over_cdp(cdp_url)
contexts = self.browser.contexts
if contexts:
self.default_context = contexts[0]
else:
self.default_context = await self.create_browser_context()
# self.default_context = await self.browser.new_context(
# viewport={
# "width": self.config.viewport_width,
# "height": self.config.viewport_height,
# },
# storage_state=self.config.storage_state,
# user_agent=self.config.headers.get(
# "User-Agent", self.config.user_agent
# ),
# accept_downloads=self.config.accept_downloads,
# ignore_https_errors=self.config.ignore_https_errors,
# java_script_enabled=self.config.java_script_enabled,
# )
await self.setup_context(self.default_context)
else:
browser_args = self._build_browser_args()
@@ -469,6 +521,7 @@ class BrowserManager:
self.default_context = self.browser
def _build_browser_args(self) -> dict:
"""Build browser launch arguments from config."""
args = [
@@ -530,9 +583,9 @@ class BrowserManager:
ProxySettings(server=self.config.proxy)
if self.config.proxy
else ProxySettings(
server=self.config.proxy_config.get("server"),
username=self.config.proxy_config.get("username"),
password=self.config.proxy_config.get("password"),
server=self.config.proxy_config.server,
username=self.config.proxy_config.username,
password=self.config.proxy_config.password,
)
)
browser_args["proxy"] = proxy_settings
@@ -607,7 +660,7 @@ class BrowserManager:
"name": "cookiesEnabled",
"value": "true",
"url": crawlerRunConfig.url
if crawlerRunConfig
if crawlerRunConfig and crawlerRunConfig.url
else "https://crawl4ai.com/",
}
]
@@ -790,7 +843,10 @@ class BrowserManager:
# If using a managed browser, just grab the shared default_context
if self.config.use_managed_browser:
context = self.default_context
page = await context.new_page()
pages = context.pages
page = next((p for p in pages if p.url == crawlerRunConfig.url), None)
if not page:
page = await context.new_page()
else:
# Otherwise, check if we have an existing context for this config
config_signature = self._make_config_signature(crawlerRunConfig)
@@ -840,6 +896,9 @@ class BrowserManager:
async def close(self):
"""Close all browser resources and clean up."""
if self.config.cdp_url:
return
if self.config.sleep_on_close:
await asyncio.sleep(0.5)

View File

@@ -12,7 +12,10 @@ import sys
import datetime
import uuid
import shutil
from typing import List, Dict, Optional, Any
import json
import subprocess
import time
from typing import List, Dict, Optional, Any, Tuple
from colorama import Fore, Style, init
from .async_configs import BrowserConfig
@@ -56,6 +59,11 @@ class BrowserProfiler:
# Ensure profiles directory exists
self.profiles_dir = os.path.join(get_home_folder(), "profiles")
os.makedirs(self.profiles_dir, exist_ok=True)
# Builtin browser config file
self.builtin_browser_dir = os.path.join(get_home_folder(), "builtin-browser")
self.builtin_config_file = os.path.join(self.builtin_browser_dir, "browser_config.json")
os.makedirs(self.builtin_browser_dir, exist_ok=True)
async def create_profile(self,
profile_name: Optional[str] = None,
@@ -342,7 +350,11 @@ class BrowserProfiler:
# Check if path exists and is a valid profile
if not os.path.isdir(profile_path):
return None
# Check if profile_name itself is a full path
if os.path.isabs(profile_name):
profile_path = profile_name
else:
return None
# Look for profile indicators
is_profile = (
@@ -541,4 +553,422 @@ class BrowserProfiler:
break
else:
self.logger.error(f"Invalid choice. Please enter a number between 1 and {exit_option}.", tag="MENU")
self.logger.error(f"Invalid choice. Please enter a number between 1 and {exit_option}.", tag="MENU")
async def launch_standalone_browser(self,
browser_type: str = "chromium",
user_data_dir: Optional[str] = None,
debugging_port: int = 9222,
headless: bool = False,
save_as_builtin: bool = False) -> Optional[str]:
"""
Launch a standalone browser with CDP debugging enabled and keep it running
until the user presses 'q'. Returns and displays the CDP URL.
Args:
browser_type (str): Type of browser to launch ('chromium' or 'firefox')
user_data_dir (str, optional): Path to user profile directory
debugging_port (int): Port to use for CDP debugging
headless (bool): Whether to run in headless mode
save_as_builtin (bool): Whether to also save this browser's configuration as the builtin browser
Returns:
str: CDP URL for the browser, or None if launch failed
Example:
```python
profiler = BrowserProfiler()
cdp_url = await profiler.launch_standalone_browser(
user_data_dir="/path/to/profile",
debugging_port=9222
)
# Use cdp_url to connect to the browser
```
"""
# Use the provided directory if specified, otherwise create a temporary directory
if user_data_dir:
# Directory is provided directly, ensure it exists
profile_path = user_data_dir
os.makedirs(profile_path, exist_ok=True)
else:
# Create a temporary profile directory
timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
profile_name = f"temp_{timestamp}_{uuid.uuid4().hex[:6]}"
profile_path = os.path.join(self.profiles_dir, profile_name)
os.makedirs(profile_path, exist_ok=True)
# Print initial information
border = f"{Fore.CYAN}{'='*80}{Style.RESET_ALL}"
self.logger.info(f"\n{border}", tag="CDP")
self.logger.info(f"Launching standalone browser with CDP debugging", tag="CDP")
self.logger.info(f"Browser type: {Fore.GREEN}{browser_type}{Style.RESET_ALL}", tag="CDP")
self.logger.info(f"Profile path: {Fore.YELLOW}{profile_path}{Style.RESET_ALL}", tag="CDP")
self.logger.info(f"Debugging port: {Fore.CYAN}{debugging_port}{Style.RESET_ALL}", tag="CDP")
self.logger.info(f"Headless mode: {Fore.CYAN}{headless}{Style.RESET_ALL}", tag="CDP")
# Create managed browser instance
managed_browser = ManagedBrowser(
browser_type=browser_type,
user_data_dir=profile_path,
headless=headless,
logger=self.logger,
debugging_port=debugging_port
)
# Set up signal handlers to ensure cleanup on interrupt
original_sigint = signal.getsignal(signal.SIGINT)
original_sigterm = signal.getsignal(signal.SIGTERM)
# Define cleanup handler for signals
async def cleanup_handler(sig, frame):
self.logger.warning("\nCleaning up browser process...", tag="CDP")
await managed_browser.cleanup()
# Restore original signal handlers
signal.signal(signal.SIGINT, original_sigint)
signal.signal(signal.SIGTERM, original_sigterm)
if sig == signal.SIGINT:
self.logger.error("Browser terminated by user.", tag="CDP")
sys.exit(1)
# Set signal handlers
def sigint_handler(sig, frame):
asyncio.create_task(cleanup_handler(sig, frame))
signal.signal(signal.SIGINT, sigint_handler)
signal.signal(signal.SIGTERM, sigint_handler)
# Event to signal when user wants to exit
user_done_event = asyncio.Event()
# Run keyboard input loop in a separate task
async def listen_for_quit_command():
import termios
import tty
import select
# First output the prompt
self.logger.info(f"{Fore.CYAN}Press '{Fore.WHITE}q{Fore.CYAN}' to stop the browser and exit...{Style.RESET_ALL}", tag="CDP")
# Save original terminal settings
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
# Switch to non-canonical mode (no line buffering)
tty.setcbreak(fd)
while True:
# Check if input is available (non-blocking)
readable, _, _ = select.select([sys.stdin], [], [], 0.5)
if readable:
key = sys.stdin.read(1)
if key.lower() == 'q':
self.logger.info(f"{Fore.GREEN}Closing browser...{Style.RESET_ALL}", tag="CDP")
user_done_event.set()
return
# Check if the browser process has already exited
if managed_browser.browser_process and managed_browser.browser_process.poll() is not None:
self.logger.info("Browser already closed. Ending input listener.", tag="CDP")
user_done_event.set()
return
await asyncio.sleep(0.1)
finally:
# Restore terminal settings
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
# Function to retrieve and display CDP JSON config
async def get_cdp_json(port):
import aiohttp
cdp_url = f"http://localhost:{port}"
json_url = f"{cdp_url}/json/version"
try:
async with aiohttp.ClientSession() as session:
# Try multiple times in case the browser is still starting up
for _ in range(10):
try:
async with session.get(json_url) as response:
if response.status == 200:
data = await response.json()
return cdp_url, data
except Exception:
pass
await asyncio.sleep(0.5)
return cdp_url, None
except Exception as e:
self.logger.error(f"Error fetching CDP JSON: {str(e)}", tag="CDP")
return cdp_url, None
cdp_url = None
config_json = None
try:
# Start the browser
await managed_browser.start()
# Check if browser started successfully
browser_process = managed_browser.browser_process
if not browser_process:
self.logger.error("Failed to start browser process.", tag="CDP")
return None
self.logger.info(f"Browser launched successfully. Retrieving CDP information...", tag="CDP")
# Get CDP URL and JSON config
cdp_url, config_json = await get_cdp_json(debugging_port)
if cdp_url:
self.logger.success(f"CDP URL: {Fore.GREEN}{cdp_url}{Style.RESET_ALL}", tag="CDP")
if config_json:
# Display relevant CDP information
self.logger.info(f"Browser: {Fore.CYAN}{config_json.get('Browser', 'Unknown')}{Style.RESET_ALL}", tag="CDP")
self.logger.info(f"Protocol Version: {config_json.get('Protocol-Version', 'Unknown')}", tag="CDP")
if 'webSocketDebuggerUrl' in config_json:
self.logger.info(f"WebSocket URL: {Fore.GREEN}{config_json['webSocketDebuggerUrl']}{Style.RESET_ALL}", tag="CDP")
else:
self.logger.warning("Could not retrieve CDP configuration JSON", tag="CDP")
else:
self.logger.error(f"Failed to get CDP URL on port {debugging_port}", tag="CDP")
await managed_browser.cleanup()
return None
# Start listening for keyboard input
listener_task = asyncio.create_task(listen_for_quit_command())
# Wait for the user to press 'q' or for the browser process to exit naturally
while not user_done_event.is_set() and browser_process.poll() is None:
await asyncio.sleep(0.5)
# Cancel the listener task if it's still running
if not listener_task.done():
listener_task.cancel()
try:
await listener_task
except asyncio.CancelledError:
pass
# If the browser is still running and the user pressed 'q', terminate it
if browser_process.poll() is None and user_done_event.is_set():
self.logger.info("Terminating browser process...", tag="CDP")
await managed_browser.cleanup()
self.logger.success(f"Browser closed.", tag="CDP")
except Exception as e:
self.logger.error(f"Error launching standalone browser: {str(e)}", tag="CDP")
await managed_browser.cleanup()
return None
finally:
# Restore original signal handlers
signal.signal(signal.SIGINT, original_sigint)
signal.signal(signal.SIGTERM, original_sigterm)
# Make sure browser is fully cleaned up
await managed_browser.cleanup()
# Return the CDP URL
return cdp_url
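The `get_cdp_json()` helper above polls the DevTools `/json/version` endpoint until the browser answers. A standalone sketch of the same pattern; the helper name `probe_cdp` and the retry count are illustrative:

```python
import asyncio
import aiohttp

async def probe_cdp(port: int = 9222):
    # Poll the DevTools /json/version endpoint until the browser is up,
    # then return its metadata (browser name, protocol version, WS URL).
    url = f"http://localhost:{port}/json/version"
    async with aiohttp.ClientSession() as session:
        for _ in range(10):
            try:
                async with session.get(url) as resp:
                    if resp.status == 200:
                        return await resp.json()
            except aiohttp.ClientError:
                pass
            await asyncio.sleep(0.5)
    return None

print(asyncio.run(probe_cdp()))
```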
async def launch_builtin_browser(self,
browser_type: str = "chromium",
debugging_port: int = 9222,
headless: bool = True) -> Optional[str]:
"""
Launch a browser in the background for use as the builtin browser.
Args:
browser_type (str): Type of browser to launch ('chromium' or 'firefox')
debugging_port (int): Port to use for CDP debugging
headless (bool): Whether to run in headless mode
Returns:
str: CDP URL for the browser, or None if launch failed
"""
# Check if there's an existing browser still running
browser_info = self.get_builtin_browser_info()
if browser_info and self._is_browser_running(browser_info.get('pid')):
self.logger.info("Builtin browser is already running", tag="BUILTIN")
return browser_info.get('cdp_url')
# Create a user data directory for the builtin browser
user_data_dir = os.path.join(self.builtin_browser_dir, "user_data")
os.makedirs(user_data_dir, exist_ok=True)
# Create managed browser instance
managed_browser = ManagedBrowser(
browser_type=browser_type,
user_data_dir=user_data_dir,
headless=headless,
logger=self.logger,
debugging_port=debugging_port
)
try:
# Start the browser
await managed_browser.start()
# Check if browser started successfully
browser_process = managed_browser.browser_process
if not browser_process:
self.logger.error("Failed to start browser process.", tag="BUILTIN")
return None
# Get CDP URL
cdp_url = f"http://localhost:{debugging_port}"
# Try to verify browser is responsive by fetching version info
import aiohttp
json_url = f"{cdp_url}/json/version"
config_json = None
try:
async with aiohttp.ClientSession() as session:
for _ in range(10): # Try multiple times
try:
async with session.get(json_url) as response:
if response.status == 200:
config_json = await response.json()
break
except Exception:
pass
await asyncio.sleep(0.5)
except Exception as e:
self.logger.warning(f"Could not verify browser: {str(e)}", tag="BUILTIN")
# Save browser info
browser_info = {
'pid': browser_process.pid,
'cdp_url': cdp_url,
'user_data_dir': user_data_dir,
'browser_type': browser_type,
'debugging_port': debugging_port,
'start_time': time.time(),
'config': config_json
}
with open(self.builtin_config_file, 'w') as f:
json.dump(browser_info, f, indent=2)
# Detach from the browser process - don't keep any references
# This is important to allow the Python script to exit while the browser continues running
# We'll just record the PID and other info, and the browser will run independently
managed_browser.browser_process = None
self.logger.success(f"Builtin browser launched at CDP URL: {cdp_url}", tag="BUILTIN")
return cdp_url
except Exception as e:
self.logger.error(f"Error launching builtin browser: {str(e)}", tag="BUILTIN")
if managed_browser:
await managed_browser.cleanup()
return None
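A small sketch for inspecting the metadata this method writes to `browser_config.json`. It assumes `get_home_folder()` is importable from `crawl4ai.utils`, matching its use in `BrowserProfiler.__init__` above; adjust the import if the utility lives elsewhere.

```python
import json
import os

# Assumption: get_home_folder() lives in crawl4ai.utils.
from crawl4ai.utils import get_home_folder

config_file = os.path.join(get_home_folder(), "builtin-browser", "browser_config.json")
if os.path.exists(config_file):
    with open(config_file) as f:
        info = json.load(f)
    print(info["cdp_url"], info["pid"], info["browser_type"])
else:
    print("No builtin browser has been launched yet.")
```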
def get_builtin_browser_info(self) -> Optional[Dict[str, Any]]:
"""
Get information about the builtin browser.
Returns:
dict: Browser information or None if no builtin browser is configured
"""
if not os.path.exists(self.builtin_config_file):
return None
try:
with open(self.builtin_config_file, 'r') as f:
browser_info = json.load(f)
# Check if the browser is still running
if not self._is_browser_running(browser_info.get('pid')):
self.logger.warning("Builtin browser is not running", tag="BUILTIN")
return None
return browser_info
except Exception as e:
self.logger.error(f"Error reading builtin browser config: {str(e)}", tag="BUILTIN")
return None
def _is_browser_running(self, pid: Optional[int]) -> bool:
"""Check if a process with the given PID is running"""
if not pid:
return False
try:
# Check if the process exists
if sys.platform == "win32":
process = subprocess.run(["tasklist", "/FI", f"PID eq {pid}"],
capture_output=True, text=True)
return str(pid) in process.stdout
else:
# Unix-like systems
os.kill(pid, 0) # This doesn't actually kill the process, just checks if it exists
return True
except (ProcessLookupError, PermissionError, OSError):
return False
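Since `psutil` is already a dependency of the monitor module added later in this changeset, a one-line cross-platform alternative to the `tasklist`/`os.kill` probe would be (a sketch, not the method used here):

```python
import psutil

def is_running(pid: int) -> bool:
    # Cross-platform PID check; avoids shelling out to tasklist on Windows.
    return psutil.pid_exists(pid)
```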
async def kill_builtin_browser(self) -> bool:
"""
Kill the builtin browser if it's running.
Returns:
bool: True if the browser was killed, False otherwise
"""
browser_info = self.get_builtin_browser_info()
if not browser_info:
self.logger.warning("No builtin browser found", tag="BUILTIN")
return False
pid = browser_info.get('pid')
if not pid:
return False
try:
if sys.platform == "win32":
subprocess.run(["taskkill", "/F", "/PID", str(pid)], check=True)
else:
os.kill(pid, signal.SIGTERM)
# Wait for termination
for _ in range(5):
if not self._is_browser_running(pid):
break
await asyncio.sleep(0.5)
else:
# Force kill if still running
os.kill(pid, signal.SIGKILL)
# Remove config file
if os.path.exists(self.builtin_config_file):
os.unlink(self.builtin_config_file)
self.logger.success("Builtin browser terminated", tag="BUILTIN")
return True
except Exception as e:
self.logger.error(f"Error killing builtin browser: {str(e)}", tag="BUILTIN")
return False
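The termination sequence above (SIGTERM, poll, then SIGKILL) is a common escalation pattern. A generic POSIX-only sketch of the same idea, with `terminate` and the grace period as illustrative names/values:

```python
import os
import signal
import time

def terminate(pid: int, grace: float = 2.5) -> None:
    # Ask politely with SIGTERM, poll until the process exits,
    # then force-kill with SIGKILL if it is still alive.
    os.kill(pid, signal.SIGTERM)
    deadline = time.time() + grace
    while time.time() < deadline:
        try:
            os.kill(pid, 0)  # signal 0 only probes for existence
        except ProcessLookupError:
            return
        time.sleep(0.5)
    os.kill(pid, signal.SIGKILL)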
async def get_builtin_browser_status(self) -> Dict[str, Any]:
"""
Get status information about the builtin browser.
Returns:
dict: Status information with running, cdp_url, and info fields
"""
browser_info = self.get_builtin_browser_info()
if not browser_info:
return {
'running': False,
'cdp_url': None,
'info': None
}
return {
'running': True,
'cdp_url': browser_info.get('cdp_url'),
'info': browser_info
}

View File

@@ -1,9 +1,8 @@
import click
import os
import datetime
import sys
import shutil
import time
import humanize
from typing import Dict, Any, Optional, List
import json
@@ -13,7 +12,6 @@ from rich.console import Console
from rich.table import Table
from rich.panel import Panel
from rich.prompt import Prompt, Confirm
from crawl4ai import (
CacheMode,
@@ -22,16 +20,19 @@ from crawl4ai import (
BrowserConfig,
CrawlerRunConfig,
LLMExtractionStrategy,
LXMLWebScrapingStrategy,
JsonCssExtractionStrategy,
JsonXPathExtractionStrategy,
BM25ContentFilter,
PruningContentFilter,
BrowserProfiler,
DefaultMarkdownGenerator,
LLMConfig
)
from crawl4ai.config import USER_SETTINGS
from litellm import completion
from pathlib import Path
# Initialize rich console
console = Console()
@@ -177,8 +178,12 @@ def show_examples():
# CSS-based extraction
crwl https://example.com -e extract_css.yml -s css_schema.json -o json
# LLM-based extraction with config file
crwl https://example.com -e extract_llm.yml -s llm_schema.json -o json
# Quick LLM-based JSON extraction (prompts for LLM provider first time)
crwl https://example.com -j # Auto-extracts structured data
crwl https://example.com -j "Extract product details including name, price, and features" # With specific instructions
3️⃣ Direct Parameters:
# Browser settings
@@ -201,7 +206,24 @@ def show_examples():
# 2. Then use that profile to crawl the authenticated site:
crwl https://site-requiring-login.com/dashboard -p my-profile-name
5️⃣ CDP Mode for Browser Automation:
# Launch browser with CDP debugging on default port 9222
crwl cdp
# Use a specific profile and custom port
crwl cdp -p my-profile -P 9223
# Launch headless browser with CDP enabled
crwl cdp --headless
# Launch in incognito mode (ignores profile)
crwl cdp --incognito
# Use the CDP URL with other tools (Puppeteer, Playwright, etc.)
# The URL will be displayed in the terminal when the browser starts
6️⃣ Sample Config Files:
browser.yml:
headless: true
@@ -259,11 +281,11 @@ llm_schema.json:
}
}
7️⃣ Advanced Usage:
# Combine configs with direct parameters
crwl https://example.com -B browser.yml -b "headless=false,viewport_width=1920"
# Full extraction pipeline with config files
crwl https://example.com \\
-B browser.yml \\
-C crawler.yml \\
@@ -271,6 +293,12 @@ llm_schema.json:
-s llm_schema.json \\
-o json \\
-v
# Quick LLM-based extraction with specific instructions
crwl https://amazon.com/dp/B01DFKC2SO \\
-j "Extract product title, current price, original price, rating, and all product specifications" \\
-b "headless=true,viewport_width=1280" \\
-v
# Content filtering with BM25
crwl https://example.com \\
@@ -285,7 +313,7 @@ llm_schema.json:
For more documentation visit: https://github.com/unclecode/crawl4ai
8️⃣ Q&A with LLM:
# Ask a question about the content
crwl https://example.com -q "What is the main topic discussed?"
@@ -312,8 +340,16 @@ For more documentation visit: https://github.com/unclecode/crawl4ai
- google/gemini-pro
See full list of providers: https://docs.litellm.ai/docs/providers
# Set default LLM provider and token in advance
crwl config set DEFAULT_LLM_PROVIDER "anthropic/claude-3-sonnet"
crwl config set DEFAULT_LLM_PROVIDER_TOKEN "your-api-token-here"
# Set default browser behavior
crwl config set BROWSER_HEADLESS false # Always show browser window
crwl config set USER_AGENT_MODE random # Use random user agent
9️⃣ Profile Management:
# Launch interactive profile manager
crwl profiles
@@ -326,6 +362,32 @@ For more documentation visit: https://github.com/unclecode/crawl4ai
crwl profiles # Select "Create new profile" option
# 2. Then use that profile to crawl authenticated content:
crwl https://site-requiring-login.com/dashboard -p my-profile-name
🔄 Builtin Browser Management:
# Start a builtin browser (runs in the background)
crwl browser start
# Check builtin browser status
crwl browser status
# Open a visible window to see the browser
crwl browser view --url https://example.com
# Stop the builtin browser
crwl browser stop
# Restart with different options
crwl browser restart --browser-type chromium --port 9223 --no-headless
# Use the builtin browser in your code
# (Just set browser_mode="builtin" in your BrowserConfig)
browser_config = BrowserConfig(
browser_mode="builtin",
headless=True
)
# Usage via CLI:
crwl https://example.com -b "browser_mode=builtin"
"""
click.echo(examples)
@@ -552,28 +614,409 @@ async def manage_profiles():
# Add a separator between operations
console.print("\n")
@click.group(context_settings={"help_option_names": ["-h", "--help"]})
def cli():
"""Crawl4AI CLI - Web content extraction and browser profile management tool"""
pass
@cli.group("browser")
def browser_cmd():
"""Manage browser instances for Crawl4AI
Commands to manage browser instances for Crawl4AI, including:
- status - Check status of the builtin browser
- start - Start a new builtin browser
- stop - Stop the running builtin browser
- restart - Restart the builtin browser
"""
pass
@browser_cmd.command("status")
def browser_status_cmd():
"""Show status of the builtin browser"""
profiler = BrowserProfiler()
try:
status = anyio.run(profiler.get_builtin_browser_status)
if status["running"]:
info = status["info"]
console.print(Panel(
f"[green]Builtin browser is running[/green]\n\n"
f"CDP URL: [cyan]{info['cdp_url']}[/cyan]\n"
f"Process ID: [yellow]{info['pid']}[/yellow]\n"
f"Browser type: [blue]{info['browser_type']}[/blue]\n"
f"User data directory: [magenta]{info['user_data_dir']}[/magenta]\n"
f"Started: [cyan]{time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(info['start_time']))}[/cyan]",
title="Builtin Browser Status",
border_style="green"
))
else:
console.print(Panel(
"[yellow]Builtin browser is not running[/yellow]\n\n"
"Use 'crwl browser start' to start a builtin browser",
title="Builtin Browser Status",
border_style="yellow"
))
except Exception as e:
console.print(f"[red]Error checking browser status: {str(e)}[/red]")
sys.exit(1)
@browser_cmd.command("start")
@click.option("--browser-type", "-b", type=click.Choice(["chromium", "firefox"]), default="chromium",
help="Browser type (default: chromium)")
@click.option("--port", "-p", type=int, default=9222, help="Debugging port (default: 9222)")
@click.option("--headless/--no-headless", default=True, help="Run browser in headless mode")
def browser_start_cmd(browser_type: str, port: int, headless: bool):
"""Start a builtin browser instance
This will start a persistent browser instance that can be used by Crawl4AI
by setting browser_mode="builtin" in BrowserConfig.
"""
profiler = BrowserProfiler()
# First check if browser is already running
status = anyio.run(profiler.get_builtin_browser_status)
if status["running"]:
console.print(Panel(
"[yellow]Builtin browser is already running[/yellow]\n\n"
f"CDP URL: [cyan]{status['cdp_url']}[/cyan]\n\n"
"Use 'crwl browser restart' to restart the browser",
title="Builtin Browser Start",
border_style="yellow"
))
return
try:
console.print(Panel(
f"[cyan]Starting builtin browser[/cyan]\n\n"
f"Browser type: [green]{browser_type}[/green]\n"
f"Debugging port: [yellow]{port}[/yellow]\n"
f"Headless: [cyan]{'Yes' if headless else 'No'}[/cyan]",
title="Builtin Browser Start",
border_style="cyan"
))
cdp_url = anyio.run(
profiler.launch_builtin_browser,
browser_type,
port,
headless
)
if cdp_url:
console.print(Panel(
f"[green]Builtin browser started successfully[/green]\n\n"
f"CDP URL: [cyan]{cdp_url}[/cyan]\n\n"
"This browser will be used automatically when setting browser_mode='builtin'",
title="Builtin Browser Start",
border_style="green"
))
else:
console.print(Panel(
"[red]Failed to start builtin browser[/red]",
title="Builtin Browser Start",
border_style="red"
))
sys.exit(1)
except Exception as e:
console.print(f"[red]Error starting builtin browser: {str(e)}[/red]")
sys.exit(1)
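Once `crwl browser start` has launched the builtin browser, Python code can use it by setting `browser_mode="builtin"`, as the panel text above notes. A minimal sketch; the URL and the printed slice are illustrative:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig

async def main():
    browser_cfg = BrowserConfig(browser_mode="builtin", headless=True)
    async with AsyncWebCrawler(config=browser_cfg) as crawler:
        result = await crawler.arun("https://example.com")
        print(result.markdown.raw_markdown[:200])

asyncio.run(main())
```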
@browser_cmd.command("stop")
def browser_stop_cmd():
"""Stop the running builtin browser"""
profiler = BrowserProfiler()
try:
# First check if browser is running
status = anyio.run(profiler.get_builtin_browser_status)
if not status["running"]:
console.print(Panel(
"[yellow]No builtin browser is currently running[/yellow]",
title="Builtin Browser Stop",
border_style="yellow"
))
return
console.print(Panel(
"[cyan]Stopping builtin browser...[/cyan]",
title="Builtin Browser Stop",
border_style="cyan"
))
success = anyio.run(profiler.kill_builtin_browser)
if success:
console.print(Panel(
"[green]Builtin browser stopped successfully[/green]",
title="Builtin Browser Stop",
border_style="green"
))
else:
console.print(Panel(
"[red]Failed to stop builtin browser[/red]",
title="Builtin Browser Stop",
border_style="red"
))
sys.exit(1)
except Exception as e:
console.print(f"[red]Error stopping builtin browser: {str(e)}[/red]")
sys.exit(1)
@browser_cmd.command("view")
@click.option("--url", "-u", help="URL to navigate to (defaults to about:blank)")
def browser_view_cmd(url: Optional[str]):
"""
Open a visible window of the builtin browser
This command connects to the running builtin browser and opens a visible window,
allowing you to see what the browser is currently viewing or navigate to a URL.
"""
profiler = BrowserProfiler()
try:
# First check if browser is running
status = anyio.run(profiler.get_builtin_browser_status)
if not status["running"]:
console.print(Panel(
"[yellow]No builtin browser is currently running[/yellow]\n\n"
"Use 'crwl browser start' to start a builtin browser first",
title="Builtin Browser View",
border_style="yellow"
))
return
info = status["info"]
cdp_url = info["cdp_url"]
console.print(Panel(
f"[cyan]Opening visible window connected to builtin browser[/cyan]\n\n"
f"CDP URL: [green]{cdp_url}[/green]\n"
f"URL to load: [yellow]{url or 'about:blank'}[/yellow]",
title="Builtin Browser View",
border_style="cyan"
))
# Use the CDP URL to launch a new visible window
import subprocess
import os
# Determine the browser command based on platform
if sys.platform == "darwin": # macOS
browser_cmd = ["/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"]
elif sys.platform == "win32": # Windows
browser_cmd = ["C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe"]
else: # Linux
browser_cmd = ["google-chrome"]
# Add arguments
browser_args = [
f"--remote-debugging-port={info['debugging_port']}",
"--remote-debugging-address=localhost",
"--no-first-run",
"--no-default-browser-check"
]
# Add URL if provided
if url:
browser_args.append(url)
# Launch browser
try:
subprocess.Popen(browser_cmd + browser_args)
console.print("[green]Browser window opened. Close it when finished viewing.[/green]")
except Exception as e:
console.print(f"[red]Error launching browser: {str(e)}[/red]")
console.print(f"[yellow]Try connecting manually to {cdp_url} in Chrome or using the '--remote-debugging-port' flag.[/yellow]")
except Exception as e:
console.print(f"[red]Error viewing builtin browser: {str(e)}[/red]")
sys.exit(1)
@browser_cmd.command("restart")
@click.option("--browser-type", "-b", type=click.Choice(["chromium", "firefox"]), default=None,
help="Browser type (defaults to same as current)")
@click.option("--port", "-p", type=int, default=None, help="Debugging port (defaults to same as current)")
@click.option("--headless/--no-headless", default=None, help="Run browser in headless mode")
def browser_restart_cmd(browser_type: Optional[str], port: Optional[int], headless: Optional[bool]):
"""Restart the builtin browser
Stops the current builtin browser if running and starts a new one.
By default, uses the same configuration as the current browser.
"""
profiler = BrowserProfiler()
try:
# First check if browser is running and get its config
status = anyio.run(profiler.get_builtin_browser_status)
current_config = {}
if status["running"]:
info = status["info"]
current_config = {
"browser_type": info["browser_type"],
"port": info["debugging_port"],
"headless": True # Default assumption
}
# Stop the browser
console.print(Panel(
"[cyan]Stopping current builtin browser...[/cyan]",
title="Builtin Browser Restart",
border_style="cyan"
))
success = anyio.run(profiler.kill_builtin_browser)
if not success:
console.print(Panel(
"[red]Failed to stop current browser[/red]",
title="Builtin Browser Restart",
border_style="red"
))
sys.exit(1)
# Use provided options or defaults from current config
browser_type = browser_type or current_config.get("browser_type", "chromium")
port = port or current_config.get("port", 9222)
headless = headless if headless is not None else current_config.get("headless", True)
# Start a new browser
console.print(Panel(
f"[cyan]Starting new builtin browser[/cyan]\n\n"
f"Browser type: [green]{browser_type}[/green]\n"
f"Debugging port: [yellow]{port}[/yellow]\n"
f"Headless: [cyan]{'Yes' if headless else 'No'}[/cyan]",
title="Builtin Browser Restart",
border_style="cyan"
))
cdp_url = anyio.run(
profiler.launch_builtin_browser,
browser_type,
port,
headless
)
if cdp_url:
console.print(Panel(
f"[green]Builtin browser restarted successfully[/green]\n\n"
f"CDP URL: [cyan]{cdp_url}[/cyan]",
title="Builtin Browser Restart",
border_style="green"
))
else:
console.print(Panel(
"[red]Failed to restart builtin browser[/red]",
title="Builtin Browser Restart",
border_style="red"
))
sys.exit(1)
except Exception as e:
console.print(f"[red]Error restarting builtin browser: {str(e)}[/red]")
sys.exit(1)
@cli.command("cdp")
@click.option("--user-data-dir", "-d", help="Directory to use for browser data (will be created if it doesn't exist)")
@click.option("--port", "-P", type=int, default=9222, help="Debugging port (default: 9222)")
@click.option("--browser-type", "-b", type=click.Choice(["chromium", "firefox"]), default="chromium",
help="Browser type (default: chromium)")
@click.option("--headless", is_flag=True, help="Run browser in headless mode")
@click.option("--incognito", is_flag=True, help="Run in incognito/private mode (ignores user-data-dir)")
def cdp_cmd(user_data_dir: Optional[str], port: int, browser_type: str, headless: bool, incognito: bool):
"""Launch a standalone browser with CDP debugging enabled
This command launches a browser with Chrome DevTools Protocol (CDP) debugging enabled,
prints the CDP URL, and keeps the browser running until you press 'q'.
The CDP URL can be used for various automation and debugging tasks.
Examples:
# Launch Chromium with CDP on default port 9222
crwl cdp
# Use a specific directory for browser data and custom port
crwl cdp --user-data-dir ~/browser-data --port 9223
# Launch in headless mode
crwl cdp --headless
# Launch in incognito mode (ignores user-data-dir)
crwl cdp --incognito
"""
profiler = BrowserProfiler()
try:
# Handle data directory
data_dir = None
if not incognito and user_data_dir:
# Expand user path (~/something)
expanded_path = os.path.expanduser(user_data_dir)
# Create directory if it doesn't exist
if not os.path.exists(expanded_path):
console.print(f"[yellow]Directory '{expanded_path}' doesn't exist. Creating it.[/yellow]")
os.makedirs(expanded_path, exist_ok=True)
data_dir = expanded_path
# Print launch info
console.print(Panel(
f"[cyan]Launching browser with CDP debugging[/cyan]\n\n"
f"Browser type: [green]{browser_type}[/green]\n"
f"Debugging port: [yellow]{port}[/yellow]\n"
f"User data directory: [cyan]{data_dir or 'Temporary directory'}[/cyan]\n"
f"Headless: [cyan]{'Yes' if headless else 'No'}[/cyan]\n"
f"Incognito: [cyan]{'Yes' if incognito else 'No'}[/cyan]\n\n"
f"[yellow]Press 'q' to quit when done[/yellow]",
title="CDP Browser",
border_style="cyan"
))
# Run the browser
cdp_url = anyio.run(
profiler.launch_standalone_browser,
browser_type,
data_dir,
port,
headless
)
if not cdp_url:
console.print("[red]Failed to launch browser or get CDP URL[/red]")
sys.exit(1)
except Exception as e:
console.print(f"[red]Error launching CDP browser: {str(e)}[/red]")
sys.exit(1)
@cli.command("crawl")
@click.argument("url", required=True)
@click.option("--browser-config", "-B", type=click.Path(exists=True), help="Browser config file (YAML/JSON)")
@click.option("--crawler-config", "-C", type=click.Path(exists=True), help="Crawler config file (YAML/JSON)")
@click.option("--filter-config", "-f", type=click.Path(exists=True), help="Content filter config file")
@click.option("--extraction-config", "-e", type=click.Path(exists=True), help="Extraction strategy config file")
@click.option("--json-extract", "-j", is_flag=False, flag_value="", default=None, help="Extract structured data using LLM with optional description")
@click.option("--schema", "-s", type=click.Path(exists=True), help="JSON schema for extraction")
@click.option("--browser", "-b", type=str, callback=parse_key_values, help="Browser parameters as key1=value1,key2=value2")
@click.option("--crawler", "-c", type=str, callback=parse_key_values, help="Crawler parameters as key1=value1,key2=value2")
@click.option("--output", "-o", type=click.Choice(["all", "json", "markdown", "md", "markdown-fit", "md-fit"]), default="all")
@click.option("--bypass-cache", is_flag=True, default=True, help="Bypass cache when crawling")
@click.option("--output-file", "-O", type=click.Path(), help="Output file path (default: stdout)")
@click.option("--bypass-cache", "-b", is_flag=True, default=True, help="Bypass cache when crawling")
@click.option("--question", "-q", help="Ask a question about the crawled content")
@click.option("--verbose", "-v", is_flag=True)
@click.option("--profile", "-p", help="Use a specific browser profile (by name)")
def crawl_cmd(url: str, browser_config: str, crawler_config: str, filter_config: str,
extraction_config: str, json_extract: str, schema: str, browser: Dict, crawler: Dict,
output: str, output_file: str, bypass_cache: bool, question: str, verbose: bool, profile: str):
"""Crawl a website and extract content
Simple Usage:
@@ -617,21 +1060,65 @@ def crawl_cmd(url: str, browser_config: str, crawler_config: str, filter_config:
crawler_cfg = crawler_cfg.clone(**crawler)
# Handle content filter config
if filter_config or output in ["markdown-fit", "md-fit"]:
if filter_config:
filter_conf = load_config_file(filter_config)
elif not filter_config and output in ["markdown-fit", "md-fit"]:
filter_conf = {
"type": "pruning",
"query": "",
"threshold": 0.48
}
if filter_conf["type"] == "bm25":
crawler_cfg.markdown_generator = DefaultMarkdownGenerator(
content_filter = BM25ContentFilter(
user_query=filter_conf.get("query"),
bm25_threshold=filter_conf.get("threshold", 1.0)
)
)
elif filter_conf["type"] == "pruning":
crawler_cfg.markdown_generator = DefaultMarkdownGenerator(
content_filter = PruningContentFilter(
user_query=filter_conf.get("query"),
threshold=filter_conf.get("threshold", 0.48)
)
)
# Handle json-extract option (takes precedence over extraction-config)
if json_extract is not None:
# Get LLM provider and token
provider, token = setup_llm_config()
# Default sophisticated instruction for structured data extraction
default_instruction = """Analyze the web page content and extract structured data as JSON.
If the page contains a list of items with repeated patterns, extract all items in an array.
If the page is an article or contains unique content, extract a comprehensive JSON object with all relevant information.
Look at the content, intention of content, what it offers and find the data item(s) in the page.
Always return valid, properly formatted JSON."""
default_instruction_with_user_query = """Analyze the web page content and extract structured data as JSON, following the below instruction and explanation of schema and always return valid, properly formatted JSON. \n\nInstruction:\n\n""" + json_extract
# Determine instruction based on whether json_extract is empty or has content
instruction = default_instruction_with_user_query if json_extract else default_instruction
# Create LLM extraction strategy
crawler_cfg.extraction_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(provider=provider, api_token=token),
instruction=instruction,
schema=load_schema_file(schema), # Will be None if no schema is provided
extraction_type="schema", #if schema else "block",
apply_chunking=False,
force_json_response=True,
verbose=verbose,
)
# Set output to JSON if not explicitly specified
if output == "all":
output = "json"
# Handle extraction strategy from config file (only if json-extract wasn't used)
elif extraction_config:
extract_conf = load_config_file(extraction_config)
schema_data = load_schema_file(schema)
@@ -647,7 +1134,7 @@ def crawl_cmd(url: str, browser_config: str, crawler_config: str, filter_config:
raise click.ClickException("LLM provider and API token are required for LLM extraction")
crawler_cfg.extraction_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(provider=extract_conf["provider"], api_token=extract_conf["api_token"]),
instruction=extract_conf["instruction"],
schema=schema_data,
**extract_conf.get("params", {})
@@ -665,6 +1152,13 @@ def crawl_cmd(url: str, browser_config: str, crawler_config: str, filter_config:
# No cache
if bypass_cache:
crawler_cfg.cache_mode = CacheMode.BYPASS
crawler_cfg.scraping_strategy = LXMLWebScrapingStrategy()
config = get_global_config()
browser_cfg.verbose = config.get("VERBOSE", False)
crawler_cfg.verbose = config.get("VERBOSE", False)
# Run crawler
result : CrawlResult = anyio.run(
@@ -683,14 +1177,31 @@ def crawl_cmd(url: str, browser_config: str, crawler_config: str, filter_config:
return
# Handle output
if output == "all":
click.echo(json.dumps(result.model_dump(), indent=2))
elif output == "json":
click.echo(json.dumps(json.loads(result.extracted_content), indent=2))
elif output in ["markdown", "md"]:
click.echo(result.markdown.raw_markdown)
elif output in ["markdown-fit", "md-fit"]:
click.echo(result.markdown.fit_markdown)
if not output_file:
if output == "all":
click.echo(json.dumps(result.model_dump(), indent=2))
elif output == "json":
extracted_items = json.loads(result.extracted_content)
click.echo(json.dumps(extracted_items, indent=2))
elif output in ["markdown", "md"]:
click.echo(result.markdown.raw_markdown)
elif output in ["markdown-fit", "md-fit"]:
click.echo(result.markdown.fit_markdown)
else:
if output == "all":
with open(output_file, "w") as f:
f.write(json.dumps(result.model_dump(), indent=2))
elif output == "json":
with open(output_file, "w") as f:
f.write(result.extracted_content)
elif output in ["markdown", "md"]:
with open(output_file, "w") as f:
f.write(result.markdown.raw_markdown)
elif output in ["markdown-fit", "md-fit"]:
with open(output_file, "w") as f:
f.write(result.markdown.fit_markdown)
except Exception as e:
raise click.ClickException(str(e))
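For comparison, a rough programmatic equivalent of `crwl <url> -j` as wired up above; the provider string and API token are placeholders, not defaults from this changeset:

```python
import asyncio
import json
from crawl4ai import (
    AsyncWebCrawler, CrawlerRunConfig, CacheMode,
    LLMExtractionStrategy, LLMConfig,
)

async def main():
    cfg = CrawlerRunConfig(
        extraction_strategy=LLMExtractionStrategy(
            llm_config=LLMConfig(provider="openai/gpt-4o-mini", api_token="sk-..."),
            instruction="Analyze the page and extract structured data as JSON.",
            extraction_type="schema",
            apply_chunking=False,
            force_json_response=True,
        ),
        cache_mode=CacheMode.BYPASS,
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=cfg)
        print(json.dumps(json.loads(result.extracted_content), indent=2))

asyncio.run(main())
```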
@@ -700,6 +1211,120 @@ def examples_cmd():
"""Show usage examples"""
show_examples()
@cli.group("config")
def config_cmd():
"""Manage global configuration settings
Commands to view and update global configuration settings:
- list: Display all current configuration settings
- get: Get the value of a specific setting
- set: Set the value of a specific setting
"""
pass
@config_cmd.command("list")
def config_list_cmd():
"""List all configuration settings"""
config = get_global_config()
table = Table(title="Crawl4AI Configuration", show_header=True, header_style="bold cyan", border_style="blue")
table.add_column("Setting", style="cyan")
table.add_column("Value", style="green")
table.add_column("Default", style="yellow")
table.add_column("Description", style="white")
for key, setting in USER_SETTINGS.items():
value = config.get(key, setting["default"])
# Handle secret values
display_value = value
if setting.get("secret", False) and value:
display_value = "********"
# Handle boolean values
if setting["type"] == "boolean":
display_value = str(value).lower()
default_value = str(setting["default"]).lower()
else:
default_value = str(setting["default"])
table.add_row(
key,
str(display_value),
default_value,
setting["description"]
)
console.print(table)
@config_cmd.command("get")
@click.argument("key", required=True)
def config_get_cmd(key: str):
"""Get a specific configuration setting"""
config = get_global_config()
# Normalize key to uppercase
key = key.upper()
if key not in USER_SETTINGS:
console.print(f"[red]Error: Unknown setting '{key}'[/red]")
return
value = config.get(key, USER_SETTINGS[key]["default"])
# Handle secret values
display_value = value
if USER_SETTINGS[key].get("secret", False) and value:
display_value = "********"
console.print(f"[cyan]{key}[/cyan] = [green]{display_value}[/green]")
console.print(f"[dim]Description: {USER_SETTINGS[key]['description']}[/dim]")
@config_cmd.command("set")
@click.argument("key", required=True)
@click.argument("value", required=True)
def config_set_cmd(key: str, value: str):
"""Set a configuration setting"""
config = get_global_config()
# Normalize key to uppercase
key = key.upper()
if key not in USER_SETTINGS:
console.print(f"[red]Error: Unknown setting '{key}'[/red]")
console.print(f"[yellow]Available settings: {', '.join(USER_SETTINGS.keys())}[/yellow]")
return
setting = USER_SETTINGS[key]
# Type conversion and validation
if setting["type"] == "boolean":
if value.lower() in ["true", "yes", "1", "y"]:
typed_value = True
elif value.lower() in ["false", "no", "0", "n"]:
typed_value = False
else:
console.print(f"[red]Error: Invalid boolean value. Use 'true' or 'false'.[/red]")
return
elif setting["type"] == "string":
typed_value = value
# Check if the value should be one of the allowed options
if "options" in setting and value not in setting["options"]:
console.print(f"[red]Error: Value must be one of: {', '.join(setting['options'])}[/red]")
return
# Update config
config[key] = typed_value
save_global_config(config)
# Handle secret values for display
display_value = typed_value
if setting.get("secret", False) and typed_value:
display_value = "********"
console.print(f"[green]Successfully set[/green] [cyan]{key}[/cyan] = [green]{display_value}[/green]")
@cli.command("profiles")
def profiles_cmd():
"""Manage browser profiles interactively
@@ -719,6 +1344,7 @@ def profiles_cmd():
@click.option("--crawler-config", "-C", type=click.Path(exists=True), help="Crawler config file (YAML/JSON)")
@click.option("--filter-config", "-f", type=click.Path(exists=True), help="Content filter config file")
@click.option("--extraction-config", "-e", type=click.Path(exists=True), help="Extraction strategy config file")
@click.option("--json-extract", "-j", is_flag=False, flag_value="", default=None, help="Extract structured data using LLM with optional description")
@click.option("--schema", "-s", type=click.Path(exists=True), help="JSON schema for extraction")
@click.option("--browser", "-b", type=str, callback=parse_key_values, help="Browser parameters as key1=value1,key2=value2")
@click.option("--crawler", "-c", type=str, callback=parse_key_values, help="Crawler parameters as key1=value1,key2=value2")
@@ -728,7 +1354,7 @@ def profiles_cmd():
@click.option("--verbose", "-v", is_flag=True)
@click.option("--profile", "-p", help="Use a specific browser profile (by name)")
def default(url: str, example: bool, browser_config: str, crawler_config: str, filter_config: str,
extraction_config: str, schema: str, browser: Dict, crawler: Dict,
extraction_config: str, json_extract: str, schema: str, browser: Dict, crawler: Dict,
output: str, bypass_cache: bool, question: str, verbose: bool, profile: str):
"""Crawl4AI CLI - Web content extraction tool
@@ -740,7 +1366,16 @@ def default(url: str, example: bool, browser_config: str, crawler_config: str, f
Other commands:
crwl profiles - Manage browser profiles for identity-based crawling
crwl crawl - Crawl a website with advanced options
crwl cdp - Launch browser with CDP debugging enabled
crwl browser - Manage builtin browser (start, stop, status, restart)
crwl config - Manage global configuration settings
crwl examples - Show more usage examples
Configuration Examples:
crwl config list - List all configuration settings
crwl config get DEFAULT_LLM_PROVIDER - Show current LLM provider
crwl config set VERBOSE true - Enable verbose mode globally
crwl config set BROWSER_HEADLESS false - Default to visible browser
"""
if example:
@@ -761,7 +1396,8 @@ def default(url: str, example: bool, browser_config: str, crawler_config: str, f
browser_config=browser_config,
crawler_config=crawler_config,
filter_config=filter_config,
extraction_config=extraction_config,
extraction_config=extraction_config,
json_extract=json_extract,
schema=schema,
browser=browser,
crawler=crawler,

View File

@@ -0,0 +1,837 @@
import time
import uuid
import threading
import psutil
from datetime import datetime, timedelta
from typing import Dict, Optional, List
from rich.console import Console
from rich.layout import Layout
from rich.panel import Panel
from rich.table import Table
from rich.text import Text
from rich.live import Live
from rich import box
from ..models import CrawlStatus
class TerminalUI:
"""Terminal user interface for CrawlerMonitor using rich library."""
def __init__(self, refresh_rate: float = 1.0, max_width: int = 120):
"""
Initialize the terminal UI.
Args:
refresh_rate: How often to refresh the UI (in seconds)
max_width: Maximum width of the UI in characters
"""
self.console = Console(width=max_width)
self.layout = Layout()
self.refresh_rate = refresh_rate
self.stop_event = threading.Event()
self.ui_thread = None
self.monitor = None # Will be set by CrawlerMonitor
self.max_width = max_width
# Setup layout - vertical layout (top to bottom)
self.layout.split(
Layout(name="header", size=3),
Layout(name="pipeline_status", size=10),
Layout(name="task_details", ratio=1),
Layout(name="footer", size=3) # Increased footer size to fit all content
)
def start(self, monitor):
"""Start the UI thread."""
self.monitor = monitor
self.stop_event.clear()
self.ui_thread = threading.Thread(target=self._ui_loop)
self.ui_thread.daemon = True
self.ui_thread.start()
def stop(self):
"""Stop the UI thread."""
if self.ui_thread and self.ui_thread.is_alive():
self.stop_event.set()
# Only try to join if we're not in the UI thread
# This prevents "cannot join current thread" errors
if threading.current_thread() != self.ui_thread:
self.ui_thread.join(timeout=5.0)
def _ui_loop(self):
"""Main UI rendering loop."""
import sys
import select
import termios
import tty
# Setup terminal for non-blocking input
old_settings = termios.tcgetattr(sys.stdin)
try:
tty.setcbreak(sys.stdin.fileno())
# Use Live display to render the UI
with Live(self.layout, refresh_per_second=1/self.refresh_rate, screen=True) as live:
self.live = live # Store the live display for updates
# Main UI loop
while not self.stop_event.is_set():
self._update_display()
# Check for key press (non-blocking)
if select.select([sys.stdin], [], [], 0)[0]:
key = sys.stdin.read(1)
# Check for 'q' to quit
if key == 'q':
# Signal stop but don't call monitor.stop() from UI thread
# as it would cause the thread to try to join itself
self.stop_event.set()
self.monitor.is_running = False
break
time.sleep(self.refresh_rate)
# Just check if the monitor was stopped
if not self.monitor.is_running:
break
finally:
# Restore terminal settings
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_settings)
def _update_display(self):
"""Update the terminal display with current statistics."""
if not self.monitor:
return
# Update crawler status panel
self.layout["header"].update(self._create_status_panel())
# Update pipeline status panel and task details panel
self.layout["pipeline_status"].update(self._create_pipeline_panel())
self.layout["task_details"].update(self._create_task_details_panel())
# Update footer
self.layout["footer"].update(self._create_footer())
def _create_status_panel(self) -> Panel:
"""Create the crawler status panel."""
summary = self.monitor.get_summary()
# Format memory status with icon
memory_status = self.monitor.get_memory_status()
memory_icon = "🟢" # Default NORMAL
if memory_status == "PRESSURE":
memory_icon = "🟠"
elif memory_status == "CRITICAL":
memory_icon = "🔴"
# Get current memory usage
memory_percent = (psutil.Process().memory_info().rss / psutil.virtual_memory().total) * 100
# Format runtime
runtime = self.monitor._format_time(time.time() - self.monitor.start_time if self.monitor.start_time else 0)
# Create the status text
status_text = Text()
status_text.append(f"Web Crawler Dashboard | Runtime: {runtime} | Memory: {memory_percent:.1f}% {memory_icon}\n")
status_text.append(f"Status: {memory_status} | URLs: {summary['urls_completed']}/{summary['urls_total']} | ")
status_text.append(f"Peak Mem: {summary['peak_memory_percent']:.1f}% at {self.monitor._format_time(summary['peak_memory_time'])}")
return Panel(status_text, title="Crawler Status", border_style="blue")
def _create_pipeline_panel(self) -> Panel:
"""Create the pipeline status panel."""
summary = self.monitor.get_summary()
queue_stats = self.monitor.get_queue_stats()
# Create a table for status counts
table = Table(show_header=True, box=None)
table.add_column("Status", style="cyan")
table.add_column("Count", justify="right")
table.add_column("Percentage", justify="right")
table.add_column("Stat", style="cyan")
table.add_column("Value", justify="right")
# Calculate overall progress
progress = f"{summary['urls_completed']}/{summary['urls_total']}"
progress_percent = f"{summary['completion_percentage']:.1f}%"
# Add rows for each status
table.add_row(
"Overall Progress",
progress,
progress_percent,
"Est. Completion",
summary.get('estimated_completion_time', "N/A")
)
# Add rows for each status
status_counts = summary['status_counts']
total = summary['urls_total'] or 1 # Avoid division by zero
# Status rows
table.add_row(
"Completed",
str(status_counts.get(CrawlStatus.COMPLETED.name, 0)),
f"{status_counts.get(CrawlStatus.COMPLETED.name, 0) / total * 100:.1f}%",
"Avg. Time/URL",
f"{summary.get('avg_task_duration', 0):.2f}s"
)
table.add_row(
"Failed",
str(status_counts.get(CrawlStatus.FAILED.name, 0)),
f"{status_counts.get(CrawlStatus.FAILED.name, 0) / total * 100:.1f}%",
"Concurrent Tasks",
str(status_counts.get(CrawlStatus.IN_PROGRESS.name, 0))
)
table.add_row(
"In Progress",
str(status_counts.get(CrawlStatus.IN_PROGRESS.name, 0)),
f"{status_counts.get(CrawlStatus.IN_PROGRESS.name, 0) / total * 100:.1f}%",
"Queue Size",
str(queue_stats['total_queued'])
)
table.add_row(
"Queued",
str(status_counts.get(CrawlStatus.QUEUED.name, 0)),
f"{status_counts.get(CrawlStatus.QUEUED.name, 0) / total * 100:.1f}%",
"Max Wait Time",
f"{queue_stats['highest_wait_time']:.1f}s"
)
# Requeued is a special case as it's not a status
requeued_count = summary.get('requeued_count', 0)
table.add_row(
"Requeued",
str(requeued_count),
f"{summary.get('requeue_rate', 0):.1f}%",
"Avg Wait Time",
f"{queue_stats['avg_wait_time']:.1f}s"
)
# Add empty row for spacing
table.add_row(
"",
"",
"",
"Requeue Rate",
f"{summary.get('requeue_rate', 0):.1f}%"
)
return Panel(table, title="Pipeline Status", border_style="green")
def _create_task_details_panel(self) -> Panel:
"""Create the task details panel."""
# Create a table for task details
table = Table(show_header=True, expand=True)
table.add_column("Task ID", style="cyan", no_wrap=True, width=10)
table.add_column("URL", style="blue", ratio=3)
table.add_column("Status", style="green", width=15)
table.add_column("Memory", justify="right", width=8)
table.add_column("Peak", justify="right", width=8)
table.add_column("Duration", justify="right", width=10)
# Get all task stats
task_stats = self.monitor.get_all_task_stats()
# Add summary row
active_tasks = sum(1 for stats in task_stats.values()
if stats['status'] == CrawlStatus.IN_PROGRESS.name)
total_memory = sum(stats['memory_usage'] for stats in task_stats.values())
total_peak = sum(stats['peak_memory'] for stats in task_stats.values())
# Summary row with separators
table.add_row(
"SUMMARY",
f"Total: {len(task_stats)}",
f"Active: {active_tasks}",
f"{total_memory:.1f}",
f"{total_peak:.1f}",
"N/A"
)
# Add a separator
table.add_row("" * 10, "" * 20, "" * 10, "" * 8, "" * 8, "" * 10)
# Status icons
status_icons = {
CrawlStatus.QUEUED.name: "⏳",
CrawlStatus.IN_PROGRESS.name: "🔄",
CrawlStatus.COMPLETED.name: "✅",
CrawlStatus.FAILED.name: "❌"
}
# Calculate how many rows we can display based on available space
# We can display more rows now that we have a dedicated panel
display_count = min(len(task_stats), 20) # Display up to 20 tasks
# Add rows for each task
for task_id, stats in sorted(
task_stats.items(),
# Sort: 1. IN_PROGRESS first, 2. QUEUED, 3. COMPLETED/FAILED by recency
key=lambda x: (
0 if x[1]['status'] == CrawlStatus.IN_PROGRESS.name else
1 if x[1]['status'] == CrawlStatus.QUEUED.name else
2,
-1 * (x[1].get('end_time', 0) or 0) # Most recent first
)
)[:display_count]:
# Truncate task_id and URL for display
short_id = task_id[:8]
url = stats['url']
if len(url) > 50: # Allow longer URLs in the dedicated panel
url = url[:47] + "..."
# Format status with icon
status = f"{status_icons.get(stats['status'], '?')} {stats['status']}"
# Add row
table.add_row(
short_id,
url,
status,
f"{stats['memory_usage']:.1f}",
f"{stats['peak_memory']:.1f}",
stats['duration'] if 'duration' in stats else "0:00"
)
return Panel(table, title="Task Details", border_style="yellow")
def _create_footer(self) -> Panel:
"""Create the footer panel."""
from rich.columns import Columns
from rich.align import Align
memory_status = self.monitor.get_memory_status()
memory_icon = "🟢" # Default NORMAL
if memory_status == "PRESSURE":
memory_icon = "🟠"
elif memory_status == "CRITICAL":
memory_icon = "🔴"
# Left section - memory status
left_text = Text()
left_text.append("Memory Status: ", style="bold")
status_style = "green" if memory_status == "NORMAL" else "yellow" if memory_status == "PRESSURE" else "red bold"
left_text.append(f"{memory_icon} {memory_status}", style=status_style)
# Center section - copyright
center_text = Text("© Crawl4AI 2025 | Made by UnclecCode", style="cyan italic")
# Right section - quit instruction
right_text = Text()
right_text.append("Press ", style="bold")
right_text.append("q", style="white on blue")
right_text.append(" to quit", style="bold")
# Create columns with the three sections
footer_content = Columns(
[
Align.left(left_text),
Align.center(center_text),
Align.right(right_text)
],
expand=True
)
# Create a more visible footer panel
return Panel(
footer_content,
border_style="white",
padding=(0, 1) # Add padding for better visibility
)
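The `TerminalUI` above drives everything through a `rich` `Live` context that refreshes a named `Layout` from a background thread. A self-contained sketch of that render loop; the region names, refresh rate, and tick counter are illustrative:

```python
import threading
import time

from rich.layout import Layout
from rich.live import Live
from rich.panel import Panel

# A Layout split into named regions, refreshed inside a Live context
# on a daemon thread, as TerminalUI does above.
layout = Layout()
layout.split(Layout(name="header", size=3), Layout(name="body", ratio=1))

stop = threading.Event()

def ui_loop():
    with Live(layout, refresh_per_second=2) as live:
        ticks = 0
        while not stop.is_set():
            ticks += 1
            layout["header"].update(Panel(f"ticks: {ticks}", border_style="blue"))
            layout["body"].update(Panel("task table goes here", border_style="yellow"))
            time.sleep(0.5)

t = threading.Thread(target=ui_loop, daemon=True)
t.start()
time.sleep(3)
stop.set()
t.join()
```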
class CrawlerMonitor:
"""
Comprehensive monitoring and visualization system for tracking web crawler operations in real-time.
Provides a terminal-based dashboard that displays task statuses, memory usage, queue statistics,
and performance metrics.
"""
def __init__(
self,
urls_total: int = 0,
refresh_rate: float = 1.0,
enable_ui: bool = True,
max_width: int = 120
):
"""
Initialize the CrawlerMonitor.
Args:
urls_total: Total number of URLs to be crawled
refresh_rate: How often to refresh the UI (in seconds)
enable_ui: Whether to display the terminal UI
max_width: Maximum width of the UI in characters
"""
# Core monitoring attributes
self.stats = {} # Task ID -> stats dict
self.memory_status = "NORMAL"
self.start_time = None
self.end_time = None
self.is_running = False
self.queue_stats = {
"total_queued": 0,
"highest_wait_time": 0.0,
"avg_wait_time": 0.0
}
self.urls_total = urls_total
self.urls_completed = 0
self.peak_memory_percent = 0.0
self.peak_memory_time = 0.0
# Status counts
self.status_counts = {
CrawlStatus.QUEUED.name: 0,
CrawlStatus.IN_PROGRESS.name: 0,
CrawlStatus.COMPLETED.name: 0,
CrawlStatus.FAILED.name: 0
}
# Requeue tracking
self.requeued_count = 0
# Thread-safety
self._lock = threading.RLock()
# Terminal UI
self.enable_ui = enable_ui
self.terminal_ui = TerminalUI(
refresh_rate=refresh_rate,
max_width=max_width
) if enable_ui else None
def start(self):
"""
Start the monitoring session.
- Initializes the start_time
- Sets is_running to True
- Starts the terminal UI if enabled
"""
with self._lock:
self.start_time = time.time()
self.is_running = True
# Start the terminal UI
if self.enable_ui and self.terminal_ui:
self.terminal_ui.start(self)
def stop(self):
"""
Stop the monitoring session.
- Records end_time
- Sets is_running to False
- Stops the terminal UI
- Generates final summary statistics
"""
with self._lock:
self.end_time = time.time()
self.is_running = False
# Stop the terminal UI
if self.enable_ui and self.terminal_ui:
self.terminal_ui.stop()
def add_task(self, task_id: str, url: str):
"""
Register a new task with the monitor.
Args:
task_id: Unique identifier for the task
url: URL being crawled
The task is initialized with:
- status: QUEUED
- url: The URL to crawl
- enqueue_time: Current time
- memory_usage: 0
- peak_memory: 0
- wait_time: 0
- retry_count: 0
"""
with self._lock:
self.stats[task_id] = {
"task_id": task_id,
"url": url,
"status": CrawlStatus.QUEUED.name,
"enqueue_time": time.time(),
"start_time": None,
"end_time": None,
"memory_usage": 0.0,
"peak_memory": 0.0,
"error_message": "",
"wait_time": 0.0,
"retry_count": 0,
"duration": "0:00",
"counted_requeue": False
}
# Update status counts
self.status_counts[CrawlStatus.QUEUED.name] += 1
def update_task(
self,
task_id: str,
status: Optional[CrawlStatus] = None,
start_time: Optional[float] = None,
end_time: Optional[float] = None,
memory_usage: Optional[float] = None,
peak_memory: Optional[float] = None,
error_message: Optional[str] = None,
retry_count: Optional[int] = None,
wait_time: Optional[float] = None
):
"""
Update statistics for a specific task.
Args:
task_id: Unique identifier for the task
status: New status (QUEUED, IN_PROGRESS, COMPLETED, FAILED)
start_time: When task execution started
end_time: When task execution ended
memory_usage: Current memory usage in MB
peak_memory: Maximum memory usage in MB
error_message: Error description if failed
retry_count: Number of retry attempts
wait_time: Time spent in queue
Updates task statistics and updates status counts.
If status changes, decrements old status count and
increments new status count.
"""
with self._lock:
# Check if task exists
if task_id not in self.stats:
return
task_stats = self.stats[task_id]
# Update status counts if status is changing
old_status = task_stats["status"]
if status and status.name != old_status:
self.status_counts[old_status] -= 1
self.status_counts[status.name] += 1
# Track completion
if status == CrawlStatus.COMPLETED:
self.urls_completed += 1
# Track requeues
if old_status in [CrawlStatus.COMPLETED.name, CrawlStatus.FAILED.name] and not task_stats.get("counted_requeue", False):
self.requeued_count += 1
task_stats["counted_requeue"] = True
# Update task statistics
if status:
task_stats["status"] = status.name
if start_time is not None:
task_stats["start_time"] = start_time
if end_time is not None:
task_stats["end_time"] = end_time
if memory_usage is not None:
task_stats["memory_usage"] = memory_usage
# Update peak memory if necessary (memory_usage is in MB, virtual_memory().total is in bytes)
current_percent = (memory_usage * 1024 * 1024 / psutil.virtual_memory().total) * 100
if current_percent > self.peak_memory_percent:
self.peak_memory_percent = current_percent
self.peak_memory_time = time.time()
if peak_memory is not None:
task_stats["peak_memory"] = peak_memory
if error_message is not None:
task_stats["error_message"] = error_message
if retry_count is not None:
task_stats["retry_count"] = retry_count
if wait_time is not None:
task_stats["wait_time"] = wait_time
# Calculate duration
if task_stats["start_time"]:
end = task_stats["end_time"] or time.time()
duration = end - task_stats["start_time"]
task_stats["duration"] = self._format_time(duration)
def update_memory_status(self, status: str):
"""
Update the current memory status.
Args:
status: Memory status (NORMAL, PRESSURE, CRITICAL, or custom)
Also updates the UI to reflect the new status.
"""
with self._lock:
self.memory_status = status
def update_queue_statistics(
self,
total_queued: int,
highest_wait_time: float,
avg_wait_time: float
):
"""
Update statistics related to the task queue.
Args:
total_queued: Number of tasks currently in queue
highest_wait_time: Longest wait time of any queued task
avg_wait_time: Average wait time across all queued tasks
"""
with self._lock:
self.queue_stats = {
"total_queued": total_queued,
"highest_wait_time": highest_wait_time,
"avg_wait_time": avg_wait_time
}
def get_task_stats(self, task_id: str) -> Dict:
"""
Get statistics for a specific task.
Args:
task_id: Unique identifier for the task
Returns:
Dictionary containing all task statistics
"""
with self._lock:
return self.stats.get(task_id, {}).copy()
def get_all_task_stats(self) -> Dict[str, Dict]:
"""
Get statistics for all tasks.
Returns:
Dictionary mapping task_ids to their statistics
"""
with self._lock:
return self.stats.copy()
def get_memory_status(self) -> str:
"""
Get the current memory status.
Returns:
Current memory status string
"""
with self._lock:
return self.memory_status
def get_queue_stats(self) -> Dict:
"""
Get current queue statistics.
Returns:
Dictionary with queue statistics including:
- total_queued: Number of tasks in queue
- highest_wait_time: Longest wait time
- avg_wait_time: Average wait time
"""
with self._lock:
return self.queue_stats.copy()
def get_summary(self) -> Dict:
"""
Get a summary of all crawler statistics.
Returns:
Dictionary containing:
- runtime: Total runtime in seconds
- urls_total: Total URLs to process
- urls_completed: Number of completed URLs
- completion_percentage: Percentage complete
- status_counts: Count of tasks in each status
- memory_status: Current memory status
- peak_memory_percent: Highest memory usage
- peak_memory_time: When peak memory occurred
- avg_task_duration: Average task processing time
- estimated_completion_time: Projected finish time
- requeue_rate: Percentage of tasks requeued
"""
with self._lock:
# Calculate runtime
current_time = time.time()
runtime = current_time - (self.start_time or current_time)
# Calculate completion percentage
completion_percentage = 0
if self.urls_total > 0:
completion_percentage = (self.urls_completed / self.urls_total) * 100
# Calculate average task duration for completed tasks
completed_tasks = [
task for task in self.stats.values()
if task["status"] == CrawlStatus.COMPLETED.name and task.get("start_time") and task.get("end_time")
]
avg_task_duration = 0
if completed_tasks:
total_duration = sum(task["end_time"] - task["start_time"] for task in completed_tasks)
avg_task_duration = total_duration / len(completed_tasks)
# Calculate requeue rate
requeue_rate = 0
if len(self.stats) > 0:
requeue_rate = (self.requeued_count / len(self.stats)) * 100
# Calculate estimated completion time
estimated_completion_time = "N/A"
if avg_task_duration > 0 and self.urls_total > 0 and self.urls_completed > 0:
remaining_tasks = self.urls_total - self.urls_completed
estimated_seconds = remaining_tasks * avg_task_duration
estimated_completion_time = self._format_time(estimated_seconds)
return {
"runtime": runtime,
"urls_total": self.urls_total,
"urls_completed": self.urls_completed,
"completion_percentage": completion_percentage,
"status_counts": self.status_counts.copy(),
"memory_status": self.memory_status,
"peak_memory_percent": self.peak_memory_percent,
"peak_memory_time": self.peak_memory_time,
"avg_task_duration": avg_task_duration,
"estimated_completion_time": estimated_completion_time,
"requeue_rate": requeue_rate,
"requeued_count": self.requeued_count
}
def render(self):
"""
Render the terminal UI.
This is the main UI rendering loop that:
1. Updates all statistics
2. Formats the display
3. Renders the ASCII interface
4. Handles keyboard input
Note: The actual rendering is handled by the TerminalUI class
which uses the rich library's Live display.
"""
if self.enable_ui and self.terminal_ui:
# Force an update of the UI
if hasattr(self.terminal_ui, '_update_display'):
self.terminal_ui._update_display()
def _format_time(self, seconds: float) -> str:
"""
Format time in hours:minutes:seconds.
Args:
seconds: Time in seconds
Returns:
Formatted time string (e.g., "1:23:45")
"""
# Use total seconds directly so durations over 24h format correctly
total_seconds = int(seconds)
hours, remainder = divmod(total_seconds, 3600)
minutes, seconds = divmod(remainder, 60)
if hours > 0:
return f"{hours}:{minutes:02}:{seconds:02}"
else:
return f"{minutes}:{seconds:02}"
def _calculate_estimated_completion(self) -> str:
"""
Calculate estimated completion time based on current progress.
Returns:
Formatted time string
"""
summary = self.get_summary()
return summary.get("estimated_completion_time", "N/A")
# Example code for testing
if __name__ == "__main__":
# Initialize the monitor
monitor = CrawlerMonitor(urls_total=100)
# Start monitoring
monitor.start()
try:
# Simulate some tasks
for i in range(20):
task_id = str(uuid.uuid4())
url = f"https://example.com/page{i}"
monitor.add_task(task_id, url)
# Simulate 20% of tasks are already running
if i < 4:
monitor.update_task(
task_id=task_id,
status=CrawlStatus.IN_PROGRESS,
start_time=time.time() - 30, # Started 30 seconds ago
memory_usage=10.5
)
# Simulate 10% of tasks are completed
if i >= 4 and i < 6:
start_time = time.time() - 60
end_time = time.time() - 15
monitor.update_task(
task_id=task_id,
status=CrawlStatus.IN_PROGRESS,
start_time=start_time,
memory_usage=8.2
)
monitor.update_task(
task_id=task_id,
status=CrawlStatus.COMPLETED,
end_time=end_time,
memory_usage=0,
peak_memory=15.7
)
# Simulate 5% of tasks fail
if i >= 6 and i < 7:
start_time = time.time() - 45
end_time = time.time() - 20
monitor.update_task(
task_id=task_id,
status=CrawlStatus.IN_PROGRESS,
start_time=start_time,
memory_usage=12.3
)
monitor.update_task(
task_id=task_id,
status=CrawlStatus.FAILED,
end_time=end_time,
memory_usage=0,
peak_memory=18.2,
error_message="Connection timeout"
)
# Simulate memory pressure
monitor.update_memory_status("PRESSURE")
# Simulate queue statistics
monitor.update_queue_statistics(
total_queued=13,  # 20 total - 4 in progress - 2 completed - 1 failed
highest_wait_time=120.5,
avg_wait_time=60.2
)
# Keep the monitor running for a demonstration
print("Crawler Monitor is running. Press 'q' to exit.")
while monitor.is_running:
time.sleep(0.1)
except KeyboardInterrupt:
print("\nExiting crawler monitor...")
finally:
# Stop the monitor
monitor.stop()
print("Crawler monitor exited successfully.")

View File

@@ -4,7 +4,8 @@ from dotenv import load_dotenv
load_dotenv() # Load environment variables from .env file
# Default provider, ONLY used when the extraction strategy is LLMExtractionStrategy
DEFAULT_PROVIDER = "openai/gpt-4o-mini"
DEFAULT_PROVIDER = "openai/gpt-4o"
DEFAULT_PROVIDER_API_KEY = "OPENAI_API_KEY"
MODEL_REPO_BRANCH = "new-release-0.0.2"
# Provider-model dictionary, ONLY used when the extraction strategy is LLMExtractionStrategy
PROVIDER_MODELS = {
@@ -92,3 +93,46 @@ SHOW_DEPRECATION_WARNINGS = True
SCREENSHOT_HEIGHT_TRESHOLD = 10000
PAGE_TIMEOUT = 60000
DOWNLOAD_PAGE_TIMEOUT = 60000
# Global user settings with descriptions and default values
USER_SETTINGS = {
"DEFAULT_LLM_PROVIDER": {
"default": "openai/gpt-4o",
"description": "Default LLM provider in 'company/model' format (e.g., 'openai/gpt-4o', 'anthropic/claude-3-sonnet')",
"type": "string"
},
"DEFAULT_LLM_PROVIDER_TOKEN": {
"default": "",
"description": "API token for the default LLM provider",
"type": "string",
"secret": True
},
"VERBOSE": {
"default": False,
"description": "Enable verbose output for all commands",
"type": "boolean"
},
"BROWSER_HEADLESS": {
"default": True,
"description": "Run browser in headless mode by default",
"type": "boolean"
},
"BROWSER_TYPE": {
"default": "chromium",
"description": "Default browser type (chromium or firefox)",
"type": "string",
"options": ["chromium", "firefox"]
},
"CACHE_MODE": {
"default": "bypass",
"description": "Default cache mode (bypass, use, or refresh)",
"type": "string",
"options": ["bypass", "use", "refresh"]
},
"USER_AGENT_MODE": {
"default": "default",
"description": "Default user agent mode (default, random, or mobile)",
"type": "string",
"options": ["default", "random", "mobile"]
}
}
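USER_SETTINGS is purely declarative; a small resolver along these lines could turn it into effective values (the helper name and the CRAWL4AI_ environment-variable prefix are illustrative assumptions, not part of this commit):

import os

def resolve_setting(name: str):
    # Hypothetical helper: prefer an environment override, else the declared default.
    spec = USER_SETTINGS[name]
    raw = os.getenv(f"CRAWL4AI_{name}")
    if raw is None:
        return spec["default"]
    if spec["type"] == "boolean":
        return raw.lower() in ("1", "true", "yes")
    return raw

headless = resolve_setting("BROWSER_HEADLESS")  # True unless overridden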

View File

@@ -1,2 +0,0 @@
from .proxy_config import ProxyConfig
__all__ = ["ProxyConfig"]

View File

@@ -1,113 +0,0 @@
import os
from typing import Dict, List, Optional
class ProxyConfig:
def __init__(
self,
server: str,
username: Optional[str] = None,
password: Optional[str] = None,
ip: Optional[str] = None,
):
"""Configuration class for a single proxy.
Args:
server: Proxy server URL (e.g., "http://127.0.0.1:8080")
username: Optional username for proxy authentication
password: Optional password for proxy authentication
ip: Optional IP address for verification purposes
"""
self.server = server
self.username = username
self.password = password
# Extract IP from server if not explicitly provided
self.ip = ip or self._extract_ip_from_server()
def _extract_ip_from_server(self) -> Optional[str]:
"""Extract IP address from server URL."""
try:
# Simple extraction assuming http://ip:port format
if "://" in self.server:
parts = self.server.split("://")[1].split(":")
return parts[0]
else:
parts = self.server.split(":")
return parts[0]
except Exception:
return None
@staticmethod
def from_string(proxy_str: str) -> "ProxyConfig":
"""Create a ProxyConfig from a string in the format 'ip:port:username:password'."""
parts = proxy_str.split(":")
if len(parts) == 4: # ip:port:username:password
ip, port, username, password = parts
return ProxyConfig(
server=f"http://{ip}:{port}",
username=username,
password=password,
ip=ip
)
elif len(parts) == 2: # ip:port only
ip, port = parts
return ProxyConfig(
server=f"http://{ip}:{port}",
ip=ip
)
else:
raise ValueError(f"Invalid proxy string format: {proxy_str}")
@staticmethod
def from_dict(proxy_dict: Dict) -> "ProxyConfig":
"""Create a ProxyConfig from a dictionary."""
return ProxyConfig(
server=proxy_dict.get("server"),
username=proxy_dict.get("username"),
password=proxy_dict.get("password"),
ip=proxy_dict.get("ip")
)
@staticmethod
def from_env(env_var: str = "PROXIES") -> List["ProxyConfig"]:
"""Load proxies from environment variable.
Args:
env_var: Name of environment variable containing comma-separated proxy strings
Returns:
List of ProxyConfig objects
"""
proxies = []
try:
proxy_list = os.getenv(env_var, "").split(",")
for proxy in proxy_list:
if not proxy:
continue
proxies.append(ProxyConfig.from_string(proxy))
except Exception as e:
print(f"Error loading proxies from environment: {e}")
return proxies
def to_dict(self) -> Dict:
"""Convert to dictionary representation."""
return {
"server": self.server,
"username": self.username,
"password": self.password,
"ip": self.ip
}
def clone(self, **kwargs) -> "ProxyConfig":
"""Create a copy of this configuration with updated values.
Args:
**kwargs: Key-value pairs of configuration options to update
Returns:
ProxyConfig: A new instance with the specified updates
"""
config_dict = self.to_dict()
config_dict.update(kwargs)
return ProxyConfig.from_dict(config_dict)
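For reference, the helpers above compose like this (a sketch against the code as listed; the values are placeholders):

# Compact string format documented in from_string: ip:port:username:password
proxy = ProxyConfig.from_string("127.0.0.1:8080:user:pass")
assert proxy.server == "http://127.0.0.1:8080" and proxy.ip == "127.0.0.1"

# Clone with a different server while keeping credentials
fallback = proxy.clone(server="http://127.0.0.1:8081")

# Load a whole pool from a comma-separated PROXIES environment variable
pool = ProxyConfig.from_env("PROXIES")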

View File

@@ -16,13 +16,13 @@ from .utils import (
extract_xml_data,
merge_chunks,
)
from .types import LLMConfig
from .config import DEFAULT_PROVIDER, OVERLAP_RATE, WORD_TOKEN_RATE
from abc import ABC, abstractmethod
import math
from snowballstemmer import stemmer
from .config import DEFAULT_PROVIDER, OVERLAP_RATE, WORD_TOKEN_RATE, PROVIDER_MODELS
from .models import TokenUsage
from .prompts import PROMPT_FILTER_CONTENT
import os
import json
import hashlib
from pathlib import Path
@@ -770,37 +770,56 @@ class PruningContentFilter(RelevantContentFilter):
class LLMContentFilter(RelevantContentFilter):
"""Content filtering using LLMs to generate relevant markdown."""
"""Content filtering using LLMs to generate relevant markdown.
How it works:
1. Extracts page metadata with fallbacks.
2. Extracts text chunks from the body element.
3. Applies LLMs to generate markdown for each chunk.
4. Filters out chunks below the threshold.
5. Sorts chunks by score in descending order.
6. Returns the top N chunks.
Attributes:
llm_config (LLMConfig): LLM configuration object.
instruction (str): Instruction for LLM markdown generation
chunk_token_threshold (int): Chunk token threshold for splitting (default: 1e9).
overlap_rate (float): Overlap rate for chunking (default: 0.5).
word_token_rate (float): Word token rate for chunking (default: 0.2).
verbose (bool): Enable verbose logging (default: False).
logger (AsyncLogger): Custom logger for LLM operations (optional).
"""
_UNWANTED_PROPS = {
'provider' : 'Instead, use llmConfig=LlmConfig(provider="...")',
'api_token' : 'Instead, use llmConfig=LlMConfig(api_token="...")',
'base_url' : 'Instead, use llmConfig=LlmConfig(base_url="...")',
'api_base' : 'Instead, use llmConfig=LlmConfig(base_url="...")',
'provider' : 'Instead, use llm_config=LLMConfig(provider="...")',
'api_token' : 'Instead, use llm_config=LLMConfig(api_token="...")',
'base_url' : 'Instead, use llm_config=LLMConfig(base_url="...")',
'api_base' : 'Instead, use llm_config=LLMConfig(base_url="...")',
}
def __init__(
self,
provider: str = DEFAULT_PROVIDER,
api_token: Optional[str] = None,
llmConfig: "LlmConfig" = None,
llm_config: "LLMConfig" = None,
instruction: str = None,
chunk_token_threshold: int = int(1e9),
overlap_rate: float = OVERLAP_RATE,
word_token_rate: float = WORD_TOKEN_RATE,
base_url: Optional[str] = None,
api_base: Optional[str] = None,
extra_args: Dict = None,
# char_token_rate: float = WORD_TOKEN_RATE * 5,
# chunk_mode: str = "char",
verbose: bool = False,
logger: Optional[AsyncLogger] = None,
ignore_cache: bool = True,
# Deprecated properties
provider: str = DEFAULT_PROVIDER,
api_token: Optional[str] = None,
base_url: Optional[str] = None,
api_base: Optional[str] = None,
extra_args: Dict = None,
):
super().__init__(None)
self.provider = provider
self.api_token = api_token
self.base_url = base_url or api_base
self.llmConfig = llmConfig
self.llm_config = llm_config
self.instruction = instruction
self.chunk_token_threshold = chunk_token_threshold
self.overlap_rate = overlap_rate
@@ -872,7 +891,7 @@ class LLMContentFilter(RelevantContentFilter):
self.logger.info(
"Starting LLM markdown content filtering process",
tag="LLM",
params={"provider": self.llmConfig.provider},
params={"provider": self.llm_config.provider},
colors={"provider": Fore.CYAN},
)
@@ -959,10 +978,10 @@ class LLMContentFilter(RelevantContentFilter):
future = executor.submit(
_proceed_with_chunk,
self.llmConfig.provider,
self.llm_config.provider,
prompt,
self.llmConfig.api_token,
self.llmConfig.base_url,
self.llm_config.api_token,
self.llm_config.base_url,
self.extra_args,
)
futures.append((i, future))
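With the rename applied, construction looks roughly like this (a sketch; the import path for LLMConfig outside the package and the provider value are assumptions):

from crawl4ai.types import LLMConfig  # assumption: mirrors the in-package `from .types import LLMConfig`

content_filter = LLMContentFilter(
    llm_config=LLMConfig(provider="openai/gpt-4o-mini", api_token="sk-..."),
    instruction="Keep only the main article body as markdown.",
    chunk_token_threshold=4096,
    verbose=True,
)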

View File

@@ -155,6 +155,7 @@ class WebScrapingStrategy(ContentScrapingStrategy):
for aud in raw_result.get("media", {}).get("audios", [])
if aud
],
tables=raw_result.get("media", {}).get("tables", [])
)
# Convert links
@@ -193,6 +194,153 @@ class WebScrapingStrategy(ContentScrapingStrategy):
"""
return await asyncio.to_thread(self._scrap, url, html, **kwargs)
def is_data_table(self, table: Tag, **kwargs) -> bool:
"""
Determine if a table element is a data table (not a layout table).
Args:
table (Tag): BeautifulSoup Tag representing a table element
**kwargs: Additional keyword arguments including table_score_threshold
Returns:
bool: True if the table is a data table, False otherwise
"""
score = 0
# Check for thead and tbody
has_thead = len(table.select('thead')) > 0
has_tbody = len(table.select('tbody')) > 0
if has_thead:
score += 2
if has_tbody:
score += 1
# Check for th elements
th_count = len(table.select('th'))
if th_count > 0:
score += 2
if has_thead or len(table.select('tr:first-child th')) > 0:
score += 1
# Check for nested tables
if len(table.select('table')) > 0:
score -= 3
# Role attribute check
role = table.get('role', '').lower()
if role in {'presentation', 'none'}:
score -= 3
# Column consistency
rows = table.select('tr')
if not rows:
return False
col_counts = [len(row.select('td, th')) for row in rows]
avg_cols = sum(col_counts) / len(col_counts)
variance = sum((c - avg_cols)**2 for c in col_counts) / len(col_counts)
if variance < 1:
score += 2
# Caption and summary
if table.select('caption'):
score += 2
if table.has_attr('summary') and table['summary']:
score += 1
# Text density
total_text = sum(len(cell.get_text().strip()) for row in rows for cell in row.select('td, th'))
total_tags = sum(1 for el in table.descendants if isinstance(el, Tag))
text_ratio = total_text / (total_tags + 1e-5)
if text_ratio > 20:
score += 3
elif text_ratio > 10:
score += 2
# Data attributes
data_attrs = sum(1 for attr in table.attrs if attr.startswith('data-'))
score += data_attrs * 0.5
# Size check
if avg_cols >= 2 and len(rows) >= 2:
score += 2
threshold = kwargs.get('table_score_threshold', 7)
return score >= threshold
def extract_table_data(self, table: Tag) -> dict:
"""
Extract structured data from a table element.
Args:
table (Tag): BeautifulSoup Tag representing a table element
Returns:
dict: Dictionary containing table data (headers, rows, caption, summary)
"""
caption_elem = table.select_one('caption')
caption = caption_elem.get_text().strip() if caption_elem else ""
summary = table.get('summary', '').strip()
# Extract headers with colspan handling
headers = []
thead_rows = table.select('thead tr')
if thead_rows:
header_cells = thead_rows[0].select('th')
for cell in header_cells:
text = cell.get_text().strip()
colspan = int(cell.get('colspan', 1))
headers.extend([text] * colspan)
else:
first_row = table.select('tr:first-child')
if first_row:
for cell in first_row[0].select('th, td'):
text = cell.get_text().strip()
colspan = int(cell.get('colspan', 1))
headers.extend([text] * colspan)
# Extract rows with colspan handling
rows = []
all_rows = table.select('tr')
thead = table.select_one('thead')
tbody_rows = []
if thead:
thead_rows = thead.select('tr')
tbody_rows = [row for row in all_rows if row not in thead_rows]
else:
if all_rows and all_rows[0].select('th'):
tbody_rows = all_rows[1:]
else:
tbody_rows = all_rows
for row in tbody_rows:
# for row in table.select('tr:not(:has(ancestor::thead))'):
row_data = []
for cell in row.select('td'):
text = cell.get_text().strip()
colspan = int(cell.get('colspan', 1))
row_data.extend([text] * colspan)
if row_data:
rows.append(row_data)
# Align rows with headers
max_columns = len(headers) if headers else (max(len(row) for row in rows) if rows else 0)
aligned_rows = []
for row in rows:
aligned = row[:max_columns] + [''] * (max_columns - len(row))
aligned_rows.append(aligned)
if not headers:
headers = [f"Column {i+1}" for i in range(max_columns)]
return {
"headers": headers,
"rows": aligned_rows,
"caption": caption,
"summary": summary,
}
def flatten_nested_elements(self, node):
"""
Flatten nested elements in an HTML tree.
@@ -431,7 +579,7 @@ class WebScrapingStrategy(ContentScrapingStrategy):
Returns:
dict: A dictionary containing the processed element information.
"""
media = {"images": [], "videos": [], "audios": []}
media = {"images": [], "videos": [], "audios": [], "tables": []}
internal_links_dict = {}
external_links_dict = {}
self._process_element(
@@ -688,6 +836,7 @@ class WebScrapingStrategy(ContentScrapingStrategy):
html: str,
word_count_threshold: int = MIN_WORD_THRESHOLD,
css_selector: str = None,
target_elements: List[str] = None,
**kwargs,
) -> Dict[str, Any]:
"""
@@ -711,6 +860,12 @@ class WebScrapingStrategy(ContentScrapingStrategy):
soup = BeautifulSoup(html, parser_type)
body = soup.body
base_domain = get_base_domain(url)
# Early removal of all images if exclude_all_images is set
# This happens before any processing to minimize memory usage
if kwargs.get("exclude_all_images", False):
for img in body.find_all('img'):
img.decompose()
try:
meta = extract_metadata("", soup)
@@ -742,22 +897,37 @@ class WebScrapingStrategy(ContentScrapingStrategy):
for element in body.select(excluded_selector):
element.extract()
if css_selector:
selected_elements = body.select(css_selector)
if not selected_elements:
return {
"markdown": "",
"cleaned_html": "",
"success": True,
"media": {"images": [], "videos": [], "audios": []},
"links": {"internal": [], "external": []},
"metadata": {},
"message": f"No elements found for CSS selector: {css_selector}",
}
# raise InvalidCSSSelectorError(f"Invalid CSS selector, No elements found for CSS selector: {css_selector}")
body = soup.new_tag("div")
for el in selected_elements:
body.append(el)
# if False and css_selector:
# selected_elements = body.select(css_selector)
# if not selected_elements:
# return {
# "markdown": "",
# "cleaned_html": "",
# "success": True,
# "media": {"images": [], "videos": [], "audios": []},
# "links": {"internal": [], "external": []},
# "metadata": {},
# "message": f"No elements found for CSS selector: {css_selector}",
# }
# # raise InvalidCSSSelectorError(f"Invalid CSS selector, No elements found for CSS selector: {css_selector}")
# body = soup.new_tag("div")
# for el in selected_elements:
# body.append(el)
content_element = None
if target_elements:
try:
for_content_targeted_element = []
for target_element in target_elements:
for_content_targeted_element.extend(body.select(target_element))
content_element = soup.new_tag("div")
for el in for_content_targeted_element:
content_element.append(el)
except Exception as e:
self._log("error", f"Error with target element detection: {str(e)}", "SCRAPE")
return None
else:
content_element = body
kwargs["exclude_social_media_domains"] = set(
kwargs.get("exclude_social_media_domains", []) + SOCIAL_MEDIA_DOMAINS
@@ -797,6 +967,15 @@ class WebScrapingStrategy(ContentScrapingStrategy):
if result is not None
for img in result
]
# Process tables if not excluded
excluded_tags = set(kwargs.get("excluded_tags", []) or [])
if 'table' not in excluded_tags:
tables = body.find_all('table')
for table in tables:
if self.is_data_table(table, **kwargs):
table_data = self.extract_table_data(table)
media["tables"].append(table_data)
body = self.flatten_nested_elements(body)
base64_pattern = re.compile(r'data:image/[^;]+;base64,([^"]+)')
@@ -808,7 +987,7 @@ class WebScrapingStrategy(ContentScrapingStrategy):
str_body = ""
try:
str_body = body.encode_contents().decode("utf-8")
str_body = content_element.encode_contents().decode("utf-8")
except Exception:
# Reset body to the original HTML
success = False
@@ -847,7 +1026,6 @@ class WebScrapingStrategy(ContentScrapingStrategy):
cleaned_html = str_body.replace("\n\n", "\n").replace(" ", " ")
return {
# **markdown_content,
"cleaned_html": cleaned_html,
"success": success,
"media": media,
@@ -1187,12 +1365,125 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
return root
def is_data_table(self, table: etree.Element, **kwargs) -> bool:
score = 0
# Check for thead and tbody
has_thead = len(table.xpath(".//thead")) > 0
has_tbody = len(table.xpath(".//tbody")) > 0
if has_thead:
score += 2
if has_tbody:
score += 1
# Check for th elements
th_count = len(table.xpath(".//th"))
if th_count > 0:
score += 2
if has_thead or table.xpath(".//tr[1]/th"):
score += 1
# Check for nested tables
if len(table.xpath(".//table")) > 0:
score -= 3
# Role attribute check
role = table.get("role", "").lower()
if role in {"presentation", "none"}:
score -= 3
# Column consistency
rows = table.xpath(".//tr")
if not rows:
return False
col_counts = [len(row.xpath(".//td|.//th")) for row in rows]
avg_cols = sum(col_counts) / len(col_counts)
variance = sum((c - avg_cols)**2 for c in col_counts) / len(col_counts)
if variance < 1:
score += 2
# Caption and summary
if table.xpath(".//caption"):
score += 2
if table.get("summary"):
score += 1
# Text density
total_text = sum(len(''.join(cell.itertext()).strip()) for row in rows for cell in row.xpath(".//td|.//th"))
total_tags = sum(1 for _ in table.iterdescendants())
text_ratio = total_text / (total_tags + 1e-5)
if text_ratio > 20:
score += 3
elif text_ratio > 10:
score += 2
# Data attributes
data_attrs = sum(1 for attr in table.attrib if attr.startswith('data-'))
score += data_attrs * 0.5
# Size check
if avg_cols >= 2 and len(rows) >= 2:
score += 2
threshold = kwargs.get("table_score_threshold", 7)
return score >= threshold
def extract_table_data(self, table: etree.Element) -> dict:
caption = table.xpath(".//caption/text()")
caption = caption[0].strip() if caption else ""
summary = table.get("summary", "").strip()
# Extract headers with colspan handling
headers = []
thead_rows = table.xpath(".//thead/tr")
if thead_rows:
header_cells = thead_rows[0].xpath(".//th")
for cell in header_cells:
text = cell.text_content().strip()
colspan = int(cell.get("colspan", 1))
headers.extend([text] * colspan)
else:
first_row = table.xpath(".//tr[1]")
if first_row:
for cell in first_row[0].xpath(".//th|.//td"):
text = cell.text_content().strip()
colspan = int(cell.get("colspan", 1))
headers.extend([text] * colspan)
# Extract rows with colspan handling
rows = []
for row in table.xpath(".//tr[not(ancestor::thead)]"):
row_data = []
for cell in row.xpath(".//td"):
text = cell.text_content().strip()
colspan = int(cell.get("colspan", 1))
row_data.extend([text] * colspan)
if row_data:
rows.append(row_data)
# Align rows with headers
max_columns = len(headers) if headers else (max(len(row) for row in rows) if rows else 0)
aligned_rows = []
for row in rows:
aligned = row[:max_columns] + [''] * (max_columns - len(row))
aligned_rows.append(aligned)
if not headers:
headers = [f"Column {i+1}" for i in range(max_columns)]
return {
"headers": headers,
"rows": aligned_rows,
"caption": caption,
"summary": summary,
}
def _scrap(
self,
url: str,
html: str,
word_count_threshold: int = MIN_WORD_THRESHOLD,
css_selector: str = None,
target_elements: List[str] = None,
**kwargs,
) -> Dict[str, Any]:
if not html:
@@ -1206,6 +1497,13 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
body = doc
base_domain = get_base_domain(url)
# Early removal of all images if exclude_all_images is set
# This is more efficient in lxml as we remove elements before any processing
if kwargs.get("exclude_all_images", False):
for img in body.xpath('//img'):
if img.getparent() is not None:
img.getparent().remove(img)
# Add comment removal
if kwargs.get("remove_comments", False):
@@ -1243,24 +1541,38 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
meta = {}
# Handle CSS selector targeting
if css_selector:
# if css_selector:
# try:
# selected_elements = body.cssselect(css_selector)
# if not selected_elements:
# return {
# "markdown": "",
# "cleaned_html": "",
# "success": True,
# "media": {"images": [], "videos": [], "audios": []},
# "links": {"internal": [], "external": []},
# "metadata": meta,
# "message": f"No elements found for CSS selector: {css_selector}",
# }
# body = lhtml.Element("div")
# body.extend(selected_elements)
# except Exception as e:
# self._log("error", f"Error with CSS selector: {str(e)}", "SCRAPE")
# return None
content_element = None
if target_elements:
try:
selected_elements = body.cssselect(css_selector)
if not selected_elements:
return {
"markdown": "",
"cleaned_html": "",
"success": True,
"media": {"images": [], "videos": [], "audios": []},
"links": {"internal": [], "external": []},
"metadata": meta,
"message": f"No elements found for CSS selector: {css_selector}",
}
body = lhtml.Element("div")
body.extend(selected_elements)
for_content_targeted_element = []
for target_element in target_elements:
for_content_targeted_element.extend(body.cssselect(target_element))
content_element = lhtml.Element("div")
content_element.extend(for_content_targeted_element)
except Exception as e:
self._log("error", f"Error with CSS selector: {str(e)}", "SCRAPE")
self._log("error", f"Error with target element detection: {str(e)}", "SCRAPE")
return None
else:
content_element = body
# Remove script and style tags
for tag in ["script", "style", "link", "meta", "noscript"]:
@@ -1284,7 +1596,7 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
form.getparent().remove(form)
# Process content
media = {"images": [], "videos": [], "audios": []}
media = {"images": [], "videos": [], "audios": [], "tables": []}
internal_links_dict = {}
external_links_dict = {}
@@ -1298,6 +1610,13 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
**kwargs,
)
if 'table' not in excluded_tags:
tables = body.xpath(".//table")
for table in tables:
if self.is_data_table(table, **kwargs):
table_data = self.extract_table_data(table)
media["tables"].append(table_data)
# Handle only_text option
if kwargs.get("only_text", False):
for tag in ONLY_TEXT_ELIGIBLE_TAGS:
@@ -1324,7 +1643,8 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
# Generate output HTML
cleaned_html = lhtml.tostring(
body,
# body,
content_element,
encoding="unicode",
pretty_print=True,
method="html",
@@ -1369,7 +1689,12 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
return {
"cleaned_html": cleaned_html,
"success": False,
"media": {"images": [], "videos": [], "audios": []},
"media": {
"images": [],
"videos": [],
"audios": [],
"tables": []
},
"links": {"internal": [], "external": []},
"metadata": {},
}
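With these changes, extracted tables travel in the media dictionary alongside images, videos and audios; a consumer sketch (the crawl setup is assumed, media is shown here as a plain dict, and the table keys follow extract_table_data above):

import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com/stats")
        for table in result.media.get("tables", []):
            print(table["caption"] or "(no caption)")
            print(table["headers"])
            for row in table["rows"][:3]:
                print(row)

asyncio.run(main())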

View File

@@ -1,6 +1,6 @@
from crawl4ai import BrowserConfig, AsyncWebCrawler, CrawlerRunConfig, CacheMode
from crawl4ai.hub import BaseCrawler
from crawl4ai.utils import optimize_html, get_home_folder
from crawl4ai.utils import optimize_html, get_home_folder, preprocess_html_for_schema
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
from pathlib import Path
import json
@@ -68,7 +68,8 @@ class GoogleSearchCrawler(BaseCrawler):
home_dir = get_home_folder() if not schema_cache_path else schema_cache_path
os.makedirs(f"{home_dir}/schema", exist_ok=True)
cleaned_html = optimize_html(html, threshold=100)
# cleaned_html = optimize_html(html, threshold=100)
cleaned_html = preprocess_html_for_schema(html)
organic_schema = None
if os.path.exists(f"{home_dir}/schema/organic_schema.json"):
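The surrounding logic caches generated schemas under the user's home folder; simplified, the lookup pattern is (paths as shown above; JSON loading is an assumption based on the file suffix):

import json, os

schema_path = f"{home_dir}/schema/organic_schema.json"
if os.path.exists(schema_path):
    # Reuse a previously generated schema instead of regenerating it
    with open(schema_path) as f:
        organic_schema = json.load(f)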

View File

@@ -10,6 +10,7 @@ from .filters import FilterChain
from .scorers import URLScorer
from . import DeepCrawlStrategy
from ..types import AsyncWebCrawler, CrawlerRunConfig, CrawlResult
from ..utils import normalize_url_for_deep_crawl, efficient_normalize_url_for_deep_crawl
from math import inf as infinity
class BFSDeepCrawlStrategy(DeepCrawlStrategy):
@@ -99,14 +100,17 @@ class BFSDeepCrawlStrategy(DeepCrawlStrategy):
# First collect all valid links
for link in links:
url = link.get("href")
if url in visited:
# Strip URL fragments to avoid duplicate crawling
# base_url = url.split('#')[0] if url else url
base_url = normalize_url_for_deep_crawl(url, source_url)
if base_url in visited:
continue
if not await self.can_process_url(url, next_depth):
self.stats.urls_skipped += 1
continue
# Score the URL if a scorer is provided
score = self.url_scorer.score(url) if self.url_scorer else 0
score = self.url_scorer.score(base_url) if self.url_scorer else 0
# Skip URLs with scores below the threshold
if score < self.score_threshold:
@@ -114,7 +118,7 @@ class BFSDeepCrawlStrategy(DeepCrawlStrategy):
self.stats.urls_skipped += 1
continue
valid_links.append((url, score))
valid_links.append((base_url, score))
# If we have more valid links than capacity, sort by score and take the top ones
if len(valid_links) > remaining_capacity:
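The practical effect of normalizing before the visited-set check: links that differ only by a #fragment collapse to one queue entry. A rough pure-Python equivalent of the fragment-stripping part (the real normalize_url_for_deep_crawl helper may do more):

from urllib.parse import urljoin, urldefrag

def strip_fragment(url: str, source_url: str) -> str:
    # Resolve relative links against the source page, then drop the #fragment
    absolute = urljoin(source_url, url)
    return urldefrag(absolute).url

assert strip_fragment("/docs#install", "https://example.com/") == "https://example.com/docs"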

View File

@@ -124,6 +124,7 @@ class URLPatternFilter(URLFilter):
"_simple_prefixes",
"_domain_patterns",
"_path_patterns",
"_reverse",
)
PATTERN_TYPES = {
@@ -138,8 +139,10 @@ class URLPatternFilter(URLFilter):
self,
patterns: Union[str, Pattern, List[Union[str, Pattern]]],
use_glob: bool = True,
reverse: bool = False,
):
super().__init__()
self._reverse = reverse
patterns = [patterns] if isinstance(patterns, (str, Pattern)) else patterns
self._simple_suffixes = set()
@@ -205,36 +208,40 @@ class URLPatternFilter(URLFilter):
@lru_cache(maxsize=10000)
def apply(self, url: str) -> bool:
"""Hierarchical pattern matching"""
# Quick suffix check (*.html)
if self._simple_suffixes:
path = url.split("?")[0]
if path.split("/")[-1].split(".")[-1] in self._simple_suffixes:
self._update_stats(True)
return True
result = True
self._update_stats(result)
return not result if self._reverse else result
# Domain check
if self._domain_patterns:
for pattern in self._domain_patterns:
if pattern.match(url):
self._update_stats(True)
return True
result = True
self._update_stats(result)
return not result if self._reverse else result
# Prefix check (/foo/*)
if self._simple_prefixes:
path = url.split("?")[0]
if any(path.startswith(p) for p in self._simple_prefixes):
self._update_stats(True)
return True
result = True
self._update_stats(result)
return not result if self._reverse else result
# Complex patterns
if self._path_patterns:
if any(p.search(url) for p in self._path_patterns):
self._update_stats(True)
return True
result = True
self._update_stats(result)
return not result if self._reverse else result
self._update_stats(False)
return False
result = False
self._update_stats(result)
return not result if self._reverse else result
class ContentTypeFilter(URLFilter):
@@ -427,6 +434,11 @@ class DomainFilter(URLFilter):
if isinstance(domains, str):
return {domains.lower()}
return {d.lower() for d in domains}
@staticmethod
def _is_subdomain(domain: str, parent_domain: str) -> bool:
"""Check if domain is a subdomain of parent_domain"""
return domain == parent_domain or domain.endswith(f".{parent_domain}")
@staticmethod
@lru_cache(maxsize=10000)
@@ -444,20 +456,26 @@ class DomainFilter(URLFilter):
domain = self._extract_domain(url)
# Early return for blocked domains
if domain in self._blocked_domains:
self._update_stats(False)
return False
# Check for blocked domains, including subdomains
for blocked in self._blocked_domains:
if self._is_subdomain(domain, blocked):
self._update_stats(False)
return False
# If no allowed domains specified, accept all non-blocked
if self._allowed_domains is None:
self._update_stats(True)
return True
# Final allowed domains check
result = domain in self._allowed_domains
self._update_stats(result)
return result
# Check if domain matches any allowed domain (including subdomains)
for allowed in self._allowed_domains:
if self._is_subdomain(domain, allowed):
self._update_stats(True)
return True
# No matches found
self._update_stats(False)
return False
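Taken together, the reverse flag and the subdomain-aware checks allow exclusion-style chains like this sketch (FilterChain usage and the DomainFilter constructor arguments are assumptions inferred from the attributes above):

chain = FilterChain([
    URLPatternFilter(patterns=["*/admin/*"], reverse=True),  # a match now means exclude
    DomainFilter(blocked_domains=["ads.example.com"]),       # also blocks sub.ads.example.com
])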
class ContentRelevanceFilter(URLFilter):

View File

@@ -4,11 +4,11 @@ from typing import Any, List, Dict, Optional
from concurrent.futures import ThreadPoolExecutor, as_completed
import json
import time
import os
from .prompts import PROMPT_EXTRACT_BLOCKS, PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION, PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION, JSON_SCHEMA_BUILDER_XPATH
from .prompts import PROMPT_EXTRACT_BLOCKS, PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION, PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION, JSON_SCHEMA_BUILDER_XPATH, PROMPT_EXTRACT_INFERRED_SCHEMA
from .config import (
DEFAULT_PROVIDER, PROVIDER_MODELS,
DEFAULT_PROVIDER,
DEFAULT_PROVIDER_API_KEY,
CHUNK_TOKEN_THRESHOLD,
OVERLAP_RATE,
WORD_TOKEN_RATE,
@@ -22,9 +22,7 @@ from .utils import (
extract_xml_data,
split_and_parse_json_objects,
sanitize_input_encode,
chunk_documents,
merge_chunks,
advanced_split,
)
from .models import * # noqa: F403
@@ -38,8 +36,9 @@ from .model_loader import (
calculate_batch_size
)
from .types import LLMConfig, create_llm_config
from functools import partial
import math
import numpy as np
import re
from bs4 import BeautifulSoup
@@ -481,8 +480,7 @@ class LLMExtractionStrategy(ExtractionStrategy):
A strategy that uses an LLM to extract meaningful content from the HTML.
Attributes:
provider: The provider to use for extraction. It follows the format <provider_name>/<model_name>, e.g., "ollama/llama3.3".
api_token: The API token for the provider.
llm_config: The LLM configuration object.
instruction: The instruction to use for the LLM model.
schema: Pydantic model schema for structured data.
extraction_type: "block" or "schema".
@@ -490,27 +488,20 @@ class LLMExtractionStrategy(ExtractionStrategy):
overlap_rate: Overlap between chunks.
word_token_rate: Word to token conversion rate.
apply_chunking: Whether to apply chunking.
base_url: The base URL for the API request.
api_base: The base URL for the API request.
extra_args: Additional arguments for the API request, such as temperature, max_tokens, etc.
verbose: Whether to print verbose output.
usages: List of individual token usages.
total_usage: Accumulated token usage.
"""
_UNWANTED_PROPS = {
'provider' : 'Instead, use llmConfig=LlmConfig(provider="...")',
'api_token' : 'Instead, use llmConfig=LlMConfig(api_token="...")',
'base_url' : 'Instead, use llmConfig=LlmConfig(base_url="...")',
'api_base' : 'Instead, use llmConfig=LlmConfig(base_url="...")',
'provider' : 'Instead, use llm_config=LLMConfig(provider="...")',
'api_token' : 'Instead, use llm_config=LLMConfig(api_token="...")',
'base_url' : 'Instead, use llm_config=LLMConfig(base_url="...")',
'api_base' : 'Instead, use llm_config=LLMConfig(base_url="...")',
}
def __init__(
self,
llmConfig: 'LLMConfig' = None,
llm_config: 'LLMConfig' = None,
instruction: str = None,
provider: str = DEFAULT_PROVIDER,
api_token: Optional[str] = None,
base_url: str = None,
api_base: str = None,
schema: Dict = None,
extraction_type="block",
chunk_token_threshold=CHUNK_TOKEN_THRESHOLD,
@@ -518,16 +509,20 @@ class LLMExtractionStrategy(ExtractionStrategy):
word_token_rate=WORD_TOKEN_RATE,
apply_chunking=True,
input_format: str = "markdown",
force_json_response=False,
verbose=False,
# Deprecated arguments
provider: str = DEFAULT_PROVIDER,
api_token: Optional[str] = None,
base_url: str = None,
api_base: str = None,
**kwargs,
):
"""
Initialize the strategy with clustering parameters.
Args:
llmConfig: The LLM configuration object.
provider: The provider to use for extraction. It follows the format <provider_name>/<model_name>, e.g., "ollama/llama3.3".
api_token: The API token for the provider.
llm_config: The LLM configuration object.
instruction: The instruction to use for the LLM model.
schema: Pydantic model schema for structured data.
extraction_type: "block" or "schema".
@@ -535,25 +530,31 @@ class LLMExtractionStrategy(ExtractionStrategy):
overlap_rate: Overlap between chunks.
word_token_rate: Word to token conversion rate.
apply_chunking: Whether to apply chunking.
input_format: Content format to use for extraction.
Options: "markdown" (default), "html", "fit_markdown"
force_json_response: Whether to force a JSON response from the LLM.
verbose: Whether to print verbose output.
# Deprecated arguments, will be removed very soon
provider: The provider to use for extraction. It follows the format <provider_name>/<model_name>, e.g., "ollama/llama3.3".
api_token: The API token for the provider.
base_url: The base URL for the API request.
api_base: The base URL for the API request.
extra_args: Additional arguments for the API request, such as temperature, max_tokens, etc.
verbose: Whether to print verbose output.
usages: List of individual token usages.
total_usage: Accumulated token usage.
"""
super().__init__( input_format=input_format, **kwargs)
self.llmConfig = llmConfig
self.provider = provider
self.api_token = api_token
self.base_url = base_url
self.api_base = api_base
self.llm_config = llm_config
if not self.llm_config:
self.llm_config = create_llm_config(
provider=DEFAULT_PROVIDER,
api_token=os.environ.get(DEFAULT_PROVIDER_API_KEY),
)
self.instruction = instruction
self.extract_type = extraction_type
self.schema = schema
if schema:
self.extract_type = "schema"
self.force_json_response = force_json_response
self.chunk_token_threshold = chunk_token_threshold or CHUNK_TOKEN_THRESHOLD
self.overlap_rate = overlap_rate
self.word_token_rate = word_token_rate
@@ -565,6 +566,11 @@ class LLMExtractionStrategy(ExtractionStrategy):
self.usages = [] # Store individual usages
self.total_usage = TokenUsage() # Accumulated usage
self.provider = provider
self.api_token = api_token
self.base_url = base_url
self.api_base = api_base
def __setattr__(self, name, value):
"""Handle attribute setting."""
@@ -612,64 +618,97 @@ class LLMExtractionStrategy(ExtractionStrategy):
variable_values["SCHEMA"] = json.dumps(self.schema, indent=2) # if type of self.schema is dict else self.schema
prompt_with_variables = PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION
if self.extract_type == "schema" and not self.schema:
prompt_with_variables = PROMPT_EXTRACT_INFERRED_SCHEMA
for variable in variable_values:
prompt_with_variables = prompt_with_variables.replace(
"{" + variable + "}", variable_values[variable]
)
response = perform_completion_with_backoff(
self.llmConfig.provider,
prompt_with_variables,
self.llmConfig.api_token,
base_url=self.llmConfig.base_url,
extra_args=self.extra_args,
) # , json_response=self.extract_type == "schema")
# Track usage
usage = TokenUsage(
completion_tokens=response.usage.completion_tokens,
prompt_tokens=response.usage.prompt_tokens,
total_tokens=response.usage.total_tokens,
completion_tokens_details=response.usage.completion_tokens_details.__dict__
if response.usage.completion_tokens_details
else {},
prompt_tokens_details=response.usage.prompt_tokens_details.__dict__
if response.usage.prompt_tokens_details
else {},
)
self.usages.append(usage)
# Update totals
self.total_usage.completion_tokens += usage.completion_tokens
self.total_usage.prompt_tokens += usage.prompt_tokens
self.total_usage.total_tokens += usage.total_tokens
try:
blocks = extract_xml_data(["blocks"], response.choices[0].message.content)[
"blocks"
]
blocks = json.loads(blocks)
for block in blocks:
block["error"] = False
except Exception:
parsed, unparsed = split_and_parse_json_objects(
response.choices[0].message.content
response = perform_completion_with_backoff(
self.llm_config.provider,
prompt_with_variables,
self.llm_config.api_token,
base_url=self.llm_config.base_url,
json_response=self.force_json_response,
extra_args=self.extra_args,
) # , json_response=self.extract_type == "schema")
# Track usage
usage = TokenUsage(
completion_tokens=response.usage.completion_tokens,
prompt_tokens=response.usage.prompt_tokens,
total_tokens=response.usage.total_tokens,
completion_tokens_details=response.usage.completion_tokens_details.__dict__
if response.usage.completion_tokens_details
else {},
prompt_tokens_details=response.usage.prompt_tokens_details.__dict__
if response.usage.prompt_tokens_details
else {},
)
blocks = parsed
if unparsed:
blocks.append(
{"index": 0, "error": True, "tags": ["error"], "content": unparsed}
)
self.usages.append(usage)
if self.verbose:
print(
"[LOG] Extracted",
len(blocks),
"blocks from URL:",
url,
"block index:",
ix,
)
return blocks
# Update totals
self.total_usage.completion_tokens += usage.completion_tokens
self.total_usage.prompt_tokens += usage.prompt_tokens
self.total_usage.total_tokens += usage.total_tokens
try:
response = response.choices[0].message.content
blocks = None
if self.force_json_response:
blocks = json.loads(response)
if isinstance(blocks, dict):
# If it has a single key whose value is a list, use that list as blocks, e.g. {"news": [...]}
if len(blocks) == 1 and isinstance(list(blocks.values())[0], list):
blocks = list(blocks.values())[0]
else:
# Otherwise wrap the whole dict as a single block, e.g. { "article_id": "1234", ... }
blocks = [blocks]
elif isinstance(blocks, list):
# Already a list of blocks; use it as-is
blocks = blocks
else:
# blocks = extract_xml_data(["blocks"], response.choices[0].message.content)["blocks"]
blocks = extract_xml_data(["blocks"], response)["blocks"]
blocks = json.loads(blocks)
for block in blocks:
block["error"] = False
except Exception:
parsed, unparsed = split_and_parse_json_objects(
response.choices[0].message.content
)
blocks = parsed
if unparsed:
blocks.append(
{"index": 0, "error": True, "tags": ["error"], "content": unparsed}
)
if self.verbose:
print(
"[LOG] Extracted",
len(blocks),
"blocks from URL:",
url,
"block index:",
ix,
)
return blocks
except Exception as e:
if self.verbose:
print(f"[LOG] Error in LLM extraction: {e}")
# Add error information to extracted_content
return [
{
"index": ix,
"error": True,
"tags": ["error"],
"content": str(e),
}
]
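Putting the reshaped constructor together, a typical setup now reads like this sketch (values are placeholders; omitting schema while passing extraction_type="schema" routes to the new PROMPT_EXTRACT_INFERRED_SCHEMA path shown above):

import os

strategy = LLMExtractionStrategy(
    llm_config=LLMConfig(provider="openai/gpt-4o-mini",
                         api_token=os.environ.get("OPENAI_API_KEY")),
    instruction="Extract every product name and price.",
    extraction_type="schema",
    force_json_response=True,
    input_format="markdown",
)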
def _merge(self, documents, chunk_token_threshold, overlap) -> List[str]:
"""
@@ -701,7 +740,7 @@ class LLMExtractionStrategy(ExtractionStrategy):
overlap=int(self.chunk_token_threshold * self.overlap_rate),
)
extracted_content = []
if self.llmConfig.provider.startswith("groq/"):
if self.llm_config.provider.startswith("groq/"):
# Sequential processing with a delay
for ix, section in enumerate(merged_sections):
extract_func = partial(self.extract, url)
@@ -761,8 +800,6 @@ class LLMExtractionStrategy(ExtractionStrategy):
#######################################################
# New extraction strategies for JSON-based extraction #
#######################################################
class JsonElementExtractionStrategy(ExtractionStrategy):
"""
Abstract base class for extracting structured JSON from HTML content.
@@ -1043,8 +1080,8 @@ class JsonElementExtractionStrategy(ExtractionStrategy):
pass
_GENERATE_SCHEMA_UNWANTED_PROPS = {
'provider': 'Instead, use llmConfig=LlmConfig(provider="...")',
'api_token': 'Instead, use llmConfig=LlMConfig(api_token="...")',
'provider': 'Instead, use llm_config=LLMConfig(provider="...")',
'api_token': 'Instead, use llm_config=LLMConfig(api_token="...")',
}
@staticmethod
@@ -1053,7 +1090,7 @@ class JsonElementExtractionStrategy(ExtractionStrategy):
schema_type: str = "CSS", # or XPATH
query: str = None,
target_json_example: str = None,
llmConfig: 'LLMConfig' = None,
llm_config: 'LLMConfig' = create_llm_config(),
provider: str = None,
api_token: str = None,
**kwargs
@@ -1066,9 +1103,9 @@ class JsonElementExtractionStrategy(ExtractionStrategy):
query (str, optional): Natural language description of what data to extract
provider (str): Legacy Parameter. LLM provider to use
api_token (str): Legacy Parameter. API token for LLM provider
llmConfig (LlmConfig): LLM configuration object
llm_config (LLMConfig): LLM configuration object
prompt (str, optional): Custom prompt template to use
**kwargs: Additional args passed to perform_completion_with_backoff
**kwargs: Additional args passed to LLM processor
Returns:
dict: Generated schema following the JsonElementExtractionStrategy format
@@ -1085,7 +1122,7 @@ class JsonElementExtractionStrategy(ExtractionStrategy):
# Build the prompt
system_message = {
"role": "system",
"content": f"""You specialize in generating special JSON schemas for web scraping. This schema uses CSS or XPATH selectors to present a repetitive pattern in crawled HTML, such as a product in a product list or a search result item in a list of search results. You use this JSON schema to pass to a language model along with the HTML content to extract structured data from the HTML. The language model uses the JSON schema to extract data from the HTML and retrieve values for fields in the JSON schema, following the schema.
"content": f"""You specialize in generating special JSON schemas for web scraping. This schema uses CSS or XPATH selectors to present a repetitive pattern in crawled HTML, such as a product in a product list or a search result item in a list of search results. We use this JSON schema to pass to a language model along with the HTML content to extract structured data from the HTML. The language model uses the JSON schema to extract data from the HTML and retrieve values for fields in the JSON schema, following the schema.
Generating this HTML manually is not feasible, so you need to generate the JSON schema using the HTML content. The HTML copied from the crawled website is provided below, which we believe contains the repetitive pattern.
@@ -1099,9 +1136,10 @@ Generating this HTML manually is not feasible, so you need to generate the JSON
In this context, the following items may or may not be present:
- Example of target JSON object: This is a sample of the final JSON object that we hope to extract from the HTML using the schema you are generating.
- Extra Instructions: This is optional instructions to consider when generating the schema provided by the user.
- Query or explanation of target/goal data item: This is a description of what data we are trying to extract from the HTML. This explanation means we're not sure about the rigid schema of the structures we want, so we leave it to you to use your expertise to create the best and most comprehensive structures aimed at maximizing data extraction from this page. You must ensure that you do not pick up nuances that may exist on a particular page. The focus should be on the data we are extracting, and it must be valid, safe, and robust based on the given HTML.
# What if there is no example of target JSON object?
In this scenario, use your best judgment to generate the schema. Try to maximize the number of fields that you can extract from the HTML.
# What if there is no example of target JSON object and also no extra instructions or even no explanation of target/goal data item?
In this scenario, use your best judgment to generate the schema. You need to examine the content of the page and understand the data it provides. If the page contains repetitive data, such as lists of items, products, jobs, places, books, or movies, focus on one single item that repeats. If the page is a detailed page about one product or item, create a schema to extract the entire structured data. At this stage, you must think and decide for yourself. Try to maximize the number of fields that you can extract from the HTML.
# What are the instructions and details for this schema generation?
{prompt_template}"""
@@ -1118,11 +1156,18 @@ In this scenario, use your best judgment to generate the schema. Try to maximize
}
if query:
user_message["content"] += f"\n\nImportant Notes to Consider:\n{query}"
user_message["content"] += f"\n\n## Query or explanation of target/goal data item:\n{query}"
if target_json_example:
user_message["content"] += f"\n\nExample of target JSON object:\n{target_json_example}"
user_message["content"] += f"\n\n## Example of target JSON object:\n```json\n{target_json_example}\n```"
if query and not target_json_example:
user_message["content"] += """IMPORTANT: To remind you, in this process, we are not providing a rigid example of the adjacent objects we seek. We rely on your understanding of the explanation provided in the above section. Make sure to grasp what we are looking for and, based on that, create the best schema.."""
elif not query and target_json_example:
user_message["content"] += """IMPORTANT: Please remember that in this process, we provided a proper example of a target JSON object. Make sure to adhere to the structure and create a schema that exactly fits this example. If you find that some elements on the page do not match completely, vote for the majority."""
elif not query and not target_json_example:
user_message["content"] += """IMPORTANT: Since we neither have a query nor an example, it is crucial to rely solely on the HTML content provided. Leverage your expertise to determine the schema based on the repetitive patterns observed in the content."""
user_message["content"] += """IMPORTANT: Ensure your schema is reliable, meaning do not use selectors that seem to generate dynamically and are not reliable. A reliable schema is what you want, as it consistently returns the same data even after many reloads of the page.
user_message["content"] += """IMPORTANT: Ensure your schema remains reliable by avoiding selectors that appear to generate dynamically and are not dependable. You want a reliable schema, as it consistently returns the same data even after many page reloads.
Analyze the HTML and generate a JSON schema that follows the specified format. Only output valid JSON schema, nothing else.
"""
@@ -1130,11 +1175,12 @@ In this scenario, use your best judgment to generate the schema. Try to maximize
try:
# Call LLM with backoff handling
response = perform_completion_with_backoff(
provider=llmConfig.provider,
provider=llm_config.provider,
prompt_with_variables="\n\n".join([system_message["content"], user_message["content"]]),
json_response = True,
api_token=llmConfig.api_token,
**kwargs
api_token=llm_config.api_token,
base_url=llm_config.base_url,
extra_args=kwargs
)
# Extract and return schema
@@ -1143,7 +1189,6 @@ In this scenario, use your best judgment to generate the schema. Try to maximize
except Exception as e:
raise Exception(f"Failed to generate schema: {str(e)}")
class JsonCssExtractionStrategy(JsonElementExtractionStrategy):
"""
Concrete implementation of `JsonElementExtractionStrategy` using CSS selectors.
@@ -1171,7 +1216,8 @@ class JsonCssExtractionStrategy(JsonElementExtractionStrategy):
super().__init__(schema, **kwargs)
def _parse_html(self, html_content: str):
return BeautifulSoup(html_content, "html.parser")
# return BeautifulSoup(html_content, "html.parser")
return BeautifulSoup(html_content, "lxml")
def _get_base_elements(self, parsed_html, selector: str):
return parsed_html.select(selector)
@@ -1190,6 +1236,373 @@ class JsonCssExtractionStrategy(JsonElementExtractionStrategy):
def _get_element_attribute(self, element, attribute: str):
return element.get(attribute)
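Schema generation under the renamed parameter, as a sketch (the first positional argument is elided in the hunk above and is assumed to be the raw HTML of a representative page):

schema = JsonCssExtractionStrategy.generate_schema(
    sample_html,                      # assumption: raw HTML to infer the schema from
    query="Each search result: title, URL, snippet",
    llm_config=LLMConfig(provider="openai/gpt-4o-mini", api_token="sk-..."),
)
strategy = JsonCssExtractionStrategy(schema)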
class JsonLxmlExtractionStrategy(JsonElementExtractionStrategy):
def __init__(self, schema: Dict[str, Any], **kwargs):
kwargs["input_format"] = "html"
super().__init__(schema, **kwargs)
self._selector_cache = {}
self._xpath_cache = {}
self._result_cache = {}
# Control selector optimization strategy
self.use_caching = kwargs.get("use_caching", True)
self.optimize_common_patterns = kwargs.get("optimize_common_patterns", True)
# Load lxml dependencies once
from lxml import etree, html
from lxml.cssselect import CSSSelector
self.etree = etree
self.html_parser = html
self.CSSSelector = CSSSelector
def _parse_html(self, html_content: str):
"""Parse HTML content with error recovery"""
try:
parser = self.etree.HTMLParser(recover=True, remove_blank_text=True)
return self.etree.fromstring(html_content, parser)
except Exception as e:
if self.verbose:
print(f"Error parsing HTML, falling back to alternative method: {e}")
try:
return self.html_parser.fromstring(html_content)
except Exception as e2:
if self.verbose:
print(f"Critical error parsing HTML: {e2}")
# Create minimal document as fallback
return self.etree.Element("html")
def _optimize_selector(self, selector_str):
"""Optimize common selector patterns for better performance"""
if not self.optimize_common_patterns:
return selector_str
# Handle td:nth-child(N) pattern which is very common in table scraping
import re
if re.search(r'td:nth-child\(\d+\)', selector_str):
return selector_str # Already handled specially in _apply_selector
# Split complex selectors into parts for optimization
parts = selector_str.split()
if len(parts) <= 1:
return selector_str
# For very long selectors, consider using just the last specific part
if len(parts) > 3 and any(p.startswith('.') or p.startswith('#') for p in parts):
specific_parts = [p for p in parts if p.startswith('.') or p.startswith('#')]
if specific_parts:
return specific_parts[-1] # Use most specific class/id selector
return selector_str
def _create_selector_function(self, selector_str):
"""Create a selector function that handles all edge cases"""
original_selector = selector_str
# Try to optimize the selector if appropriate
if self.optimize_common_patterns:
selector_str = self._optimize_selector(selector_str)
try:
# Attempt to compile the CSS selector
compiled = self.CSSSelector(selector_str)
xpath = compiled.path
# Store XPath for later use
self._xpath_cache[selector_str] = xpath
# Create the wrapper function that implements the selection strategy
def selector_func(element, context_sensitive=True):
cache_key = None
# Use result caching if enabled
if self.use_caching:
# Create a cache key based on element and selector
element_id = element.get('id', '') or str(hash(element))
cache_key = f"{element_id}::{selector_str}"
if cache_key in self._result_cache:
return self._result_cache[cache_key]
results = []
try:
# Strategy 1: Direct CSS selector application (fastest)
results = compiled(element)
# If that fails and we need context sensitivity
if not results and context_sensitive:
# Strategy 2: Try XPath with context adjustment
context_xpath = self._make_context_sensitive_xpath(xpath, element)
if context_xpath:
results = element.xpath(context_xpath)
# Strategy 3: Handle special case - nth-child
if not results and 'nth-child' in original_selector:
results = self._handle_nth_child_selector(element, original_selector)
# Strategy 4: Direct descendant search for class/ID selectors
if not results:
results = self._fallback_class_id_search(element, original_selector)
# Strategy 5: Last resort - tag name search for the final part
if not results:
parts = original_selector.split()
if parts:
last_part = parts[-1]
# Extract tag name from the selector
tag_match = re.match(r'^(\w+)', last_part)
if tag_match:
tag_name = tag_match.group(1)
results = element.xpath(f".//{tag_name}")
# Cache results if caching is enabled
if self.use_caching and cache_key:
self._result_cache[cache_key] = results
except Exception as e:
if self.verbose:
print(f"Error applying selector '{selector_str}': {e}")
return results
return selector_func
except Exception as e:
if self.verbose:
print(f"Error compiling selector '{selector_str}': {e}")
# Fallback function for invalid selectors
return lambda element, context_sensitive=True: []
def _make_context_sensitive_xpath(self, xpath, element):
"""Convert absolute XPath to context-sensitive XPath"""
try:
# If starts with descendant-or-self, it's already context-sensitive
if xpath.startswith('descendant-or-self::'):
return xpath
# Remove leading slash if present
if xpath.startswith('/'):
context_xpath = f".{xpath}"
else:
context_xpath = f".//{xpath}"
# Validate the XPath by trying it
try:
element.xpath(context_xpath)
return context_xpath
except:
# If that fails, try a simpler descendant search
return f".//{xpath.split('/')[-1]}"
except:
return None
def _handle_nth_child_selector(self, element, selector_str):
"""Special handling for nth-child selectors in tables"""
import re
results = []
try:
# Extract the column number from td:nth-child(N)
match = re.search(r'td:nth-child\((\d+)\)', selector_str)
if match:
col_num = match.group(1)
# Check if there's content after the nth-child part
remaining_selector = selector_str.split(f"td:nth-child({col_num})", 1)[-1].strip()
if remaining_selector:
# If there's a specific element we're looking for after the column
# Extract any tag names from the remaining selector
tag_match = re.search(r'(\w+)', remaining_selector)
tag_name = tag_match.group(1) if tag_match else '*'
results = element.xpath(f".//td[{col_num}]//{tag_name}")
else:
# Just get the column cell
results = element.xpath(f".//td[{col_num}]")
except Exception as e:
if self.verbose:
print(f"Error handling nth-child selector: {e}")
return results
def _fallback_class_id_search(self, element, selector_str):
"""Fallback to search by class or ID"""
results = []
try:
# Extract class selectors (.classname)
import re
class_matches = re.findall(r'\.([a-zA-Z0-9_-]+)', selector_str)
# Extract ID selectors (#idname)
id_matches = re.findall(r'#([a-zA-Z0-9_-]+)', selector_str)
# Try each class
for class_name in class_matches:
class_results = element.xpath(f".//*[contains(@class, '{class_name}')]")
results.extend(class_results)
# Try each ID (usually more specific)
for id_name in id_matches:
id_results = element.xpath(f".//*[@id='{id_name}']")
results.extend(id_results)
except Exception as e:
if self.verbose:
print(f"Error in fallback class/id search: {e}")
return results
def _get_selector(self, selector_str):
"""Get or create a selector function with caching"""
if selector_str not in self._selector_cache:
self._selector_cache[selector_str] = self._create_selector_function(selector_str)
return self._selector_cache[selector_str]
def _get_base_elements(self, parsed_html, selector: str):
"""Get all base elements using the selector"""
selector_func = self._get_selector(selector)
# For base elements, we don't need context sensitivity
return selector_func(parsed_html, context_sensitive=False)
def _get_elements(self, element, selector: str):
"""Get child elements using the selector with context sensitivity"""
selector_func = self._get_selector(selector)
return selector_func(element, context_sensitive=True)
def _get_element_text(self, element) -> str:
"""Extract normalized text from element"""
try:
# Get all text nodes and normalize
text = " ".join(t.strip() for t in element.xpath(".//text()") if t.strip())
return text
except Exception as e:
if self.verbose:
print(f"Error extracting text: {e}")
# Fallback
try:
return element.text_content().strip()
except:
return ""
def _get_element_html(self, element) -> str:
"""Get HTML string representation of element"""
try:
return self.etree.tostring(element, encoding='unicode', method='html')
except Exception as e:
if self.verbose:
print(f"Error serializing HTML: {e}")
return ""
def _get_element_attribute(self, element, attribute: str):
"""Get attribute value safely"""
try:
return element.get(attribute)
except Exception as e:
if self.verbose:
print(f"Error getting attribute '{attribute}': {e}")
return None
def _clear_caches(self):
"""Clear caches to free memory"""
if self.use_caching:
self._result_cache.clear()
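A usage sketch for the new strategy (import path assumed; the schema shape mirrors the JsonCssExtractionStrategy example later in this changeset):

```python
from crawl4ai import CrawlerRunConfig
from crawl4ai.extraction_strategy import JsonLxmlExtractionStrategy  # assumed path

schema = {
    "baseSelector": "article.post",
    "fields": [
        {"name": "title", "selector": "h1", "type": "text"},
        {"name": "content", "selector": ".content", "type": "html"},
    ],
}
# Drop-in replacement for JsonCssExtractionStrategy, with selector caching enabled
config = CrawlerRunConfig(
    extraction_strategy=JsonLxmlExtractionStrategy(schema, use_caching=True)
)
```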
class JsonLxmlExtractionStrategy_naive(JsonElementExtractionStrategy):
def __init__(self, schema: Dict[str, Any], **kwargs):
kwargs["input_format"] = "html" # Force HTML input
super().__init__(schema, **kwargs)
self._selector_cache = {}
def _parse_html(self, html_content: str):
from lxml import etree
parser = etree.HTMLParser(recover=True)
return etree.fromstring(html_content, parser)
def _get_selector(self, selector_str):
"""Get a selector function that works within the context of an element"""
if selector_str not in self._selector_cache:
from lxml.cssselect import CSSSelector
try:
# Store both the compiled selector and its xpath translation
compiled = CSSSelector(selector_str)
# Create a function that will apply this selector appropriately
def select_func(element):
try:
# First attempt: direct CSS selector application
results = compiled(element)
if results:
return results
# Second attempt: contextual XPath selection
# Convert the root-based XPath to a context-based XPath
xpath = compiled.path
# If the XPath already starts with descendant-or-self, handle it specially
if xpath.startswith('descendant-or-self::'):
context_xpath = xpath
else:
# For normal XPath expressions, make them relative to current context
context_xpath = f"./{xpath.lstrip('/')}"
results = element.xpath(context_xpath)
if results:
return results
# Final fallback: simple descendant search for common patterns
if 'nth-child' in selector_str:
# Handle td:nth-child(N) pattern
import re
match = re.search(r'td:nth-child\((\d+)\)', selector_str)
if match:
col_num = match.group(1)
sub_selector = selector_str.split(')', 1)[-1].strip()
if sub_selector:
return element.xpath(f".//td[{col_num}]//{sub_selector}")
else:
return element.xpath(f".//td[{col_num}]")
# Last resort: try each part of the selector separately
parts = selector_str.split()
if len(parts) > 1 and parts[-1]:
return element.xpath(f".//{parts[-1]}")
return []
except Exception as e:
if self.verbose:
print(f"Error applying selector '{selector_str}': {e}")
return []
self._selector_cache[selector_str] = select_func
except Exception as e:
if self.verbose:
print(f"Error compiling selector '{selector_str}': {e}")
# Fallback function for invalid selectors
def fallback_func(element):
return []
self._selector_cache[selector_str] = fallback_func
return self._selector_cache[selector_str]
def _get_base_elements(self, parsed_html, selector: str):
selector_func = self._get_selector(selector)
return selector_func(parsed_html)
def _get_elements(self, element, selector: str):
selector_func = self._get_selector(selector)
return selector_func(element)
def _get_element_text(self, element) -> str:
return "".join(element.xpath(".//text()")).strip()
def _get_element_html(self, element) -> str:
from lxml import etree
return etree.tostring(element, encoding='unicode')
def _get_element_attribute(self, element, attribute: str):
return element.get(attribute)
class JsonXPathExtractionStrategy(JsonElementExtractionStrategy):
"""

View File

@@ -40,12 +40,55 @@ def setup_home_directory():
f.write("")
def post_install():
"""Run all post-installation tasks"""
"""
Run all post-installation tasks.
Checks CRAWL4AI_MODE environment variable. If set to 'api',
skips Playwright browser installation.
"""
logger.info("Running post-installation setup...", tag="INIT")
setup_home_directory()
install_playwright()
# Check environment variable to conditionally skip Playwright install
run_mode = os.getenv('CRAWL4AI_MODE')
if run_mode == 'api':
logger.warning(
"CRAWL4AI_MODE=api detected. Skipping Playwright browser installation.",
tag="SETUP"
)
else:
# Proceed with installation only if mode is not 'api'
install_playwright()
run_migration()
# TODO: Will be added in the future
# setup_builtin_browser()
logger.success("Post-installation setup completed!", tag="COMPLETE")
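A sketch of the toggle in practice, assuming `post_install()` is invoked by the package's setup entry point (command name assumed):

```bash
# Full setup: installs Playwright browsers
crawl4ai-setup

# API-only container: skip the Playwright browser download
CRAWL4AI_MODE=api crawl4ai-setup
```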
def setup_builtin_browser():
"""Set up a builtin browser for use with Crawl4AI"""
try:
logger.info("Setting up builtin browser...", tag="INIT")
asyncio.run(_setup_builtin_browser())
logger.success("Builtin browser setup completed!", tag="COMPLETE")
except Exception as e:
logger.warning(f"Failed to set up builtin browser: {e}")
logger.warning("You can manually set up a builtin browser using 'crawl4ai-doctor builtin-browser-start'")
async def _setup_builtin_browser():
try:
# Import BrowserProfiler here to avoid circular imports
from .browser_profiler import BrowserProfiler
profiler = BrowserProfiler(logger=logger)
# Launch the builtin browser
cdp_url = await profiler.launch_builtin_browser(headless=True)
if cdp_url:
logger.success(f"Builtin browser launched at {cdp_url}", tag="BROWSER")
else:
logger.warning("Failed to launch builtin browser", tag="BROWSER")
except Exception as e:
logger.warning(f"Error setting up builtin browser: {e}", tag="BROWSER")
raise
def install_playwright():

View File

@@ -1,8 +1,8 @@
from abc import ABC, abstractmethod
from tabnanny import verbose
from typing import Optional, Dict, Any, Tuple
from .models import MarkdownGenerationResult
from .html2text import CustomHTML2Text
# from .types import RelevantContentFilter
from .content_filter_strategy import RelevantContentFilter
import re
from urllib.parse import urljoin

View File

@@ -1,6 +1,7 @@
from re import U
from pydantic import BaseModel, HttpUrl, PrivateAttr
from typing import List, Dict, Optional, Callable, Awaitable, Union, Any
from typing import AsyncGenerator
from typing import Generic, TypeVar
from enum import Enum
from dataclasses import dataclass
from .ssl_certificate import SSLCertificate
@@ -28,7 +29,12 @@ class CrawlerTaskResult:
start_time: Union[datetime, float]
end_time: Union[datetime, float]
error_message: str = ""
retry_count: int = 0
wait_time: float = 0.0
@property
def success(self) -> bool:
return self.result.success
class CrawlStatus(Enum):
QUEUED = "QUEUED"
@@ -36,27 +42,39 @@ class CrawlStatus(Enum):
COMPLETED = "COMPLETED"
FAILED = "FAILED"
@dataclass
class CrawlStats:
task_id: str
url: str
status: CrawlStatus
start_time: Optional[datetime] = None
end_time: Optional[datetime] = None
start_time: Optional[Union[datetime, float]] = None
end_time: Optional[Union[datetime, float]] = None
memory_usage: float = 0.0
peak_memory: float = 0.0
error_message: str = ""
wait_time: float = 0.0
retry_count: int = 0
counted_requeue: bool = False
@property
def duration(self) -> str:
if not self.start_time:
return "0:00"
# Convert start_time to datetime if it's a float
start = self.start_time
if isinstance(start, float):
start = datetime.fromtimestamp(start)
# Get end time or use current time
end = self.end_time or datetime.now()
duration = end - self.start_time
# Convert end_time to datetime if it's a float
if isinstance(end, float):
end = datetime.fromtimestamp(end)
duration = end - start
return str(timedelta(seconds=int(duration.total_seconds())))
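With the change above, float timestamps such as `time.time()` work alongside datetimes. A minimal sketch (import path assumed):

```python
import time
from crawl4ai.models import CrawlStats, CrawlStatus  # assumed import path

stats = CrawlStats(
    task_id="t1",
    url="https://example.com",
    status=CrawlStatus.QUEUED,
    start_time=time.time() - 65,  # float timestamp, converted internally
)
print(stats.duration)  # -> "0:01:05"
```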
class DisplayMode(Enum):
DETAILED = "DETAILED"
AGGREGATED = "AGGREGATED"
@@ -73,21 +91,11 @@ class TokenUsage:
completion_tokens_details: Optional[dict] = None
prompt_tokens_details: Optional[dict] = None
class UrlModel(BaseModel):
url: HttpUrl
forced: bool = False
class MarkdownGenerationResult(BaseModel):
raw_markdown: str
markdown_with_citations: str
references_markdown: str
fit_markdown: Optional[str] = None
fit_html: Optional[str] = None
def __str__(self):
return self.raw_markdown
@dataclass
class TraversalStats:
@@ -108,6 +116,16 @@ class DispatchResult(BaseModel):
end_time: Union[datetime, float]
error_message: str = ""
class MarkdownGenerationResult(BaseModel):
raw_markdown: str
markdown_with_citations: str
references_markdown: str
fit_markdown: Optional[str] = None
fit_html: Optional[str] = None
def __str__(self):
return self.raw_markdown
class CrawlResult(BaseModel):
url: str
html: str
@@ -119,6 +137,7 @@ class CrawlResult(BaseModel):
js_execution_result: Optional[Dict[str, Any]] = None
screenshot: Optional[str] = None
pdf: Optional[bytes] = None
mhtml: Optional[str] = None
_markdown: Optional[MarkdownGenerationResult] = PrivateAttr(default=None)
extracted_content: Optional[str] = None
metadata: Optional[dict] = None
@@ -129,6 +148,8 @@ class CrawlResult(BaseModel):
ssl_certificate: Optional[SSLCertificate] = None
dispatch_result: Optional[DispatchResult] = None
redirected_url: Optional[str] = None
network_requests: Optional[List[Dict[str, Any]]] = None
console_messages: Optional[List[Dict[str, Any]]] = None
class Config:
arbitrary_types_allowed = True
@@ -149,7 +170,11 @@ class CrawlResult(BaseModel):
markdown_result = data.pop('markdown', None)
super().__init__(**data)
if markdown_result is not None:
self._markdown = markdown_result
self._markdown = (
MarkdownGenerationResult(**markdown_result)
if isinstance(markdown_result, dict)
else markdown_result
)
@property
def markdown(self):
@@ -241,6 +266,40 @@ class StringCompatibleMarkdown(str):
def __getattr__(self, name):
return getattr(self._markdown_result, name)
CrawlResultT = TypeVar('CrawlResultT', bound=CrawlResult)
class CrawlResultContainer(Generic[CrawlResultT]):
def __init__(self, results: Union[CrawlResultT, List[CrawlResultT]]):
# Normalize to a list
if isinstance(results, list):
self._results = results
else:
self._results = [results]
def __iter__(self):
return iter(self._results)
def __getitem__(self, index):
return self._results[index]
def __len__(self):
return len(self._results)
def __getattr__(self, attr):
# Delegate attribute access to the first element.
if self._results:
return getattr(self._results[0], attr)
raise AttributeError(f"{self.__class__.__name__} object has no attribute '{attr}'")
def __repr__(self):
return f"{self.__class__.__name__}({self._results!r})"
RunManyReturn = Union[
CrawlResultContainer[CrawlResultT],
AsyncGenerator[CrawlResultT, None]
]
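The container behaves like a single result and a list at the same time; a behavior sketch with stand-in objects (import path assumed):

```python
from dataclasses import dataclass
from crawl4ai.models import CrawlResultContainer  # assumed import path

@dataclass
class _Dummy:  # stand-in for a real CrawlResult
    url: str
    success: bool = True

container = CrawlResultContainer([_Dummy("https://a.example"), _Dummy("https://b.example")])
print(container.url)               # "https://a.example" -- delegated to the first result
print(len(container))              # 2
print([r.url for r in container])  # iterable like a list
```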
# END of backward compatibility code for markdown/markdown_v2.
# When removing this code in the future, make sure to:
# 1. Replace the private attribute and property with a standard field
@@ -253,15 +312,17 @@ class AsyncCrawlResponse(BaseModel):
status_code: int
screenshot: Optional[str] = None
pdf_data: Optional[bytes] = None
mhtml_data: Optional[str] = None
get_delayed_content: Optional[Callable[[Optional[float]], Awaitable[str]]] = None
downloaded_files: Optional[List[str]] = None
ssl_certificate: Optional[SSLCertificate] = None
redirected_url: Optional[str] = None
network_requests: Optional[List[Dict[str, Any]]] = None
console_messages: Optional[List[Dict[str, Any]]] = None
class Config:
arbitrary_types_allowed = True
###############################
# Scraping Models
###############################
@@ -292,6 +353,7 @@ class Media(BaseModel):
audios: List[
MediaItem
] = [] # Using MediaItem model for now, can be extended with Audio model if needed
tables: List[Dict] = [] # Table data extracted from HTML tables
class Links(BaseModel):

View File

@@ -203,6 +203,62 @@ Avoid Common Mistakes:
Result
Output the final list of JSON objects, wrapped in <blocks>...</blocks> XML tags. Make sure to close the tag properly."""
PROMPT_EXTRACT_INFERRED_SCHEMA = """Here is the content from the URL:
<url>{URL}</url>
<url_content>
{HTML}
</url_content>
Please carefully read the URL content and the user's request. Analyze the page structure and infer the most appropriate JSON schema based on the content and request.
Extraction Strategy:
1. First, determine if the page contains repetitive items (like multiple products, articles, etc.) or a single content item (like a single article or page).
2. For repetitive items: Identify the common pattern and extract each instance as a separate JSON object in an array.
3. For single content: Extract the key information into a comprehensive JSON object that captures the essential details.
Extraction Instructions:
Return the extracted information as a list of JSON objects. For repetitive content, each object in the list should correspond to a distinct item. For single content, you may return just one detailed JSON object. Wrap the entire JSON list in <blocks>...</blocks> XML tags.
Schema Design Guidelines:
- Create meaningful property names that clearly describe the data they contain
- Use nested objects for hierarchical information
- Use arrays for lists of related items
- Include all information requested by the user
- Maintain consistency in property names and data structures
- Only include properties that are actually present in the content
- For dates, prefer ISO format (YYYY-MM-DD)
- For prices or numeric values, extract them without currency symbols when possible
Quality Reflection:
Before outputting your final answer, double check that:
1. The inferred schema makes logical sense for the type of content
2. All requested information is included
3. The JSON is valid and could be parsed without errors
4. Property names are consistent and descriptive
5. The structure is optimal for the type of data being represented
Avoid Common Mistakes:
- Do NOT add any comments using "//" or "#" in the JSON output. It causes parsing errors.
- Make sure the JSON is properly formatted with curly braces, square brackets, and commas in the right places.
- Do not miss closing </blocks> tag at the end of the JSON output.
- Do not generate Python code showing how to do the task; this is your task to extract the information and return it in JSON format.
- Ensure consistency in property names across all objects
- Don't include empty properties or null values unless they're meaningful
- For repetitive content, ensure all objects follow the same schema
Important: If a user-specific instruction is provided, give significant weight to what the user requests and to any schema they describe for the end result. If the user asks for specific information, focus on that and ignore the rest of the content.
<user_request>
{REQUEST}
</user_request>
Result:
Output the final list of JSON objects, wrapped in <blocks>...</blocks> XML tags. Make sure to close the tag properly.
DO NOT ADD ANY PRE OR POST COMMENTS. JUST RETURN THE JSON OBJECTS INSIDE <blocks>...</blocks> TAGS.
CRITICAL: The content inside the <blocks> tags MUST be a direct array of JSON objects (starting with '[' and ending with ']'), not a dictionary/object containing an array. For example, use <blocks>[{...}, {...}]</blocks> instead of <blocks>{"items": [{...}, {...}]}</blocks>. This is essential for proper parsing.
"""
PROMPT_FILTER_CONTENT = """Your task is to filter and convert HTML content into clean, focused markdown that's optimized for use with LLMs and information retrieval systems.

View File

@@ -1,8 +1,119 @@
from typing import List, Dict, Optional
from abc import ABC, abstractmethod
from itertools import cycle
import os
class ProxyConfig:
def __init__(
self,
server: str,
username: Optional[str] = None,
password: Optional[str] = None,
ip: Optional[str] = None,
):
"""Configuration class for a single proxy.
Args:
server: Proxy server URL (e.g., "http://127.0.0.1:8080")
username: Optional username for proxy authentication
password: Optional password for proxy authentication
ip: Optional IP address for verification purposes
"""
self.server = server
self.username = username
self.password = password
# Extract IP from server if not explicitly provided
self.ip = ip or self._extract_ip_from_server()
def _extract_ip_from_server(self) -> Optional[str]:
"""Extract IP address from server URL."""
try:
# Simple extraction assuming http://ip:port format
if "://" in self.server:
parts = self.server.split("://")[1].split(":")
return parts[0]
else:
parts = self.server.split(":")
return parts[0]
except Exception:
return None
@staticmethod
def from_string(proxy_str: str) -> "ProxyConfig":
"""Create a ProxyConfig from a string in the format 'ip:port:username:password'."""
parts = proxy_str.split(":")
if len(parts) == 4: # ip:port:username:password
ip, port, username, password = parts
return ProxyConfig(
server=f"http://{ip}:{port}",
username=username,
password=password,
ip=ip
)
elif len(parts) == 2: # ip:port only
ip, port = parts
return ProxyConfig(
server=f"http://{ip}:{port}",
ip=ip
)
else:
raise ValueError(f"Invalid proxy string format: {proxy_str}")
@staticmethod
def from_dict(proxy_dict: Dict) -> "ProxyConfig":
"""Create a ProxyConfig from a dictionary."""
return ProxyConfig(
server=proxy_dict.get("server"),
username=proxy_dict.get("username"),
password=proxy_dict.get("password"),
ip=proxy_dict.get("ip")
)
@staticmethod
def from_env(env_var: str = "PROXIES") -> List["ProxyConfig"]:
"""Load proxies from environment variable.
Args:
env_var: Name of environment variable containing comma-separated proxy strings
Returns:
List of ProxyConfig objects
"""
proxies = []
try:
proxy_list = os.getenv(env_var, "").split(",")
for proxy in proxy_list:
if not proxy:
continue
proxies.append(ProxyConfig.from_string(proxy))
except Exception as e:
print(f"Error loading proxies from environment: {e}")
return proxies
def to_dict(self) -> Dict:
"""Convert to dictionary representation."""
return {
"server": self.server,
"username": self.username,
"password": self.password,
"ip": self.ip
}
def clone(self, **kwargs) -> "ProxyConfig":
"""Create a copy of this configuration with updated values.
Args:
**kwargs: Key-value pairs of configuration options to update
Returns:
ProxyConfig: A new instance with the specified updates
"""
config_dict = self.to_dict()
config_dict.update(kwargs)
return ProxyConfig.from_dict(config_dict)
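Usage sketch for the new class (import path assumed from this file):

```python
from crawl4ai.proxy_strategy import ProxyConfig  # assumed import path

p = ProxyConfig.from_string("10.0.0.1:8080:user:secret")
print(p.server)  # "http://10.0.0.1:8080"
print(p.ip)      # "10.0.0.1"

# clone() copies the config with selective overrides
print(p.clone(password=None).to_dict())
```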
from crawl4ai.configs import ProxyConfig
class ProxyRotationStrategy(ABC):
"""Base abstract class for proxy rotation strategies"""

View File

@@ -1,14 +1,187 @@
from typing import TYPE_CHECKING, Union
AsyncWebCrawler = Union['AsyncWebCrawlerType'] # Note the string literal
CrawlerRunConfig = Union['CrawlerRunConfigType']
# Logger types
AsyncLoggerBase = Union['AsyncLoggerBaseType']
AsyncLogger = Union['AsyncLoggerType']
# Crawler core types
AsyncWebCrawler = Union['AsyncWebCrawlerType']
CacheMode = Union['CacheModeType']
CrawlResult = Union['CrawlResultType']
CrawlerHub = Union['CrawlerHubType']
BrowserProfiler = Union['BrowserProfilerType']
# Configuration types
BrowserConfig = Union['BrowserConfigType']
CrawlerRunConfig = Union['CrawlerRunConfigType']
HTTPCrawlerConfig = Union['HTTPCrawlerConfigType']
LLMConfig = Union['LLMConfigType']
# Content scraping types
ContentScrapingStrategy = Union['ContentScrapingStrategyType']
WebScrapingStrategy = Union['WebScrapingStrategyType']
LXMLWebScrapingStrategy = Union['LXMLWebScrapingStrategyType']
# Proxy types
ProxyRotationStrategy = Union['ProxyRotationStrategyType']
RoundRobinProxyStrategy = Union['RoundRobinProxyStrategyType']
# Extraction types
ExtractionStrategy = Union['ExtractionStrategyType']
LLMExtractionStrategy = Union['LLMExtractionStrategyType']
CosineStrategy = Union['CosineStrategyType']
JsonCssExtractionStrategy = Union['JsonCssExtractionStrategyType']
JsonXPathExtractionStrategy = Union['JsonXPathExtractionStrategyType']
# Chunking types
ChunkingStrategy = Union['ChunkingStrategyType']
RegexChunking = Union['RegexChunkingType']
# Markdown generation types
DefaultMarkdownGenerator = Union['DefaultMarkdownGeneratorType']
MarkdownGenerationResult = Union['MarkdownGenerationResultType']
# Content filter types
RelevantContentFilter = Union['RelevantContentFilterType']
PruningContentFilter = Union['PruningContentFilterType']
BM25ContentFilter = Union['BM25ContentFilterType']
LLMContentFilter = Union['LLMContentFilterType']
# Dispatcher types
BaseDispatcher = Union['BaseDispatcherType']
MemoryAdaptiveDispatcher = Union['MemoryAdaptiveDispatcherType']
SemaphoreDispatcher = Union['SemaphoreDispatcherType']
RateLimiter = Union['RateLimiterType']
CrawlerMonitor = Union['CrawlerMonitorType']
DisplayMode = Union['DisplayModeType']
RunManyReturn = Union['RunManyReturnType']
# Docker client
Crawl4aiDockerClient = Union['Crawl4aiDockerClientType']
# Deep crawling types
DeepCrawlStrategy = Union['DeepCrawlStrategyType']
BFSDeepCrawlStrategy = Union['BFSDeepCrawlStrategyType']
FilterChain = Union['FilterChainType']
ContentTypeFilter = Union['ContentTypeFilterType']
DomainFilter = Union['DomainFilterType']
URLFilter = Union['URLFilterType']
FilterStats = Union['FilterStatsType']
SEOFilter = Union['SEOFilterType']
KeywordRelevanceScorer = Union['KeywordRelevanceScorerType']
URLScorer = Union['URLScorerType']
CompositeScorer = Union['CompositeScorerType']
DomainAuthorityScorer = Union['DomainAuthorityScorerType']
FreshnessScorer = Union['FreshnessScorerType']
PathDepthScorer = Union['PathDepthScorerType']
BestFirstCrawlingStrategy = Union['BestFirstCrawlingStrategyType']
DFSDeepCrawlStrategy = Union['DFSDeepCrawlStrategyType']
DeepCrawlDecorator = Union['DeepCrawlDecoratorType']
# Only import types during type checking to avoid circular imports
if TYPE_CHECKING:
from . import (
# Logger imports
from .async_logger import (
AsyncLoggerBase as AsyncLoggerBaseType,
AsyncLogger as AsyncLoggerType,
)
# Crawler core imports
from .async_webcrawler import (
AsyncWebCrawler as AsyncWebCrawlerType,
CacheMode as CacheModeType,
)
from .models import CrawlResult as CrawlResultType
from .hub import CrawlerHub as CrawlerHubType
from .browser_profiler import BrowserProfiler as BrowserProfilerType
# Configuration imports
from .async_configs import (
BrowserConfig as BrowserConfigType,
CrawlerRunConfig as CrawlerRunConfigType,
CrawlResult as CrawlResultType,
HTTPCrawlerConfig as HTTPCrawlerConfigType,
LLMConfig as LLMConfigType,
)
# Content scraping imports
from .content_scraping_strategy import (
ContentScrapingStrategy as ContentScrapingStrategyType,
WebScrapingStrategy as WebScrapingStrategyType,
LXMLWebScrapingStrategy as LXMLWebScrapingStrategyType,
)
# Proxy imports
from .proxy_strategy import (
ProxyRotationStrategy as ProxyRotationStrategyType,
RoundRobinProxyStrategy as RoundRobinProxyStrategyType,
)
# Extraction imports
from .extraction_strategy import (
ExtractionStrategy as ExtractionStrategyType,
LLMExtractionStrategy as LLMExtractionStrategyType,
CosineStrategy as CosineStrategyType,
JsonCssExtractionStrategy as JsonCssExtractionStrategyType,
JsonXPathExtractionStrategy as JsonXPathExtractionStrategyType,
)
# Chunking imports
from .chunking_strategy import (
ChunkingStrategy as ChunkingStrategyType,
RegexChunking as RegexChunkingType,
)
# Markdown generation imports
from .markdown_generation_strategy import (
DefaultMarkdownGenerator as DefaultMarkdownGeneratorType,
)
from .models import MarkdownGenerationResult as MarkdownGenerationResultType
# Content filter imports
from .content_filter_strategy import (
RelevantContentFilter as RelevantContentFilterType,
PruningContentFilter as PruningContentFilterType,
BM25ContentFilter as BM25ContentFilterType,
LLMContentFilter as LLMContentFilterType,
)
# Dispatcher imports
from .async_dispatcher import (
BaseDispatcher as BaseDispatcherType,
MemoryAdaptiveDispatcher as MemoryAdaptiveDispatcherType,
SemaphoreDispatcher as SemaphoreDispatcherType,
RateLimiter as RateLimiterType,
CrawlerMonitor as CrawlerMonitorType,
DisplayMode as DisplayModeType,
RunManyReturn as RunManyReturnType,
)
)
# Docker client
from .docker_client import Crawl4aiDockerClient as Crawl4aiDockerClientType
# Deep crawling imports
from .deep_crawling import (
DeepCrawlStrategy as DeepCrawlStrategyType,
BFSDeepCrawlStrategy as BFSDeepCrawlStrategyType,
FilterChain as FilterChainType,
ContentTypeFilter as ContentTypeFilterType,
DomainFilter as DomainFilterType,
URLFilter as URLFilterType,
FilterStats as FilterStatsType,
SEOFilter as SEOFilterType,
KeywordRelevanceScorer as KeywordRelevanceScorerType,
URLScorer as URLScorerType,
CompositeScorer as CompositeScorerType,
DomainAuthorityScorer as DomainAuthorityScorerType,
FreshnessScorer as FreshnessScorerType,
PathDepthScorer as PathDepthScorerType,
BestFirstCrawlingStrategy as BestFirstCrawlingStrategyType,
DFSDeepCrawlStrategy as DFSDeepCrawlStrategyType,
DeepCrawlDecorator as DeepCrawlDecoratorType,
)
def create_llm_config(*args, **kwargs) -> 'LLMConfigType':
from .async_configs import LLMConfig
return LLMConfig(*args, **kwargs)
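A quick sketch of the deferred-import factory in use (token illustrative):

```python
from crawl4ai.types import create_llm_config  # assumed import path

# LLMConfig is imported only when the factory runs, avoiding circular imports
llm_config = create_llm_config(provider="openai/gpt-4o-mini", api_token="sk-...")
```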

View File

@@ -1,5 +1,4 @@
import time
from urllib.parse import urlparse
from concurrent.futures import ThreadPoolExecutor, as_completed
from bs4 import BeautifulSoup, Comment, element, Tag, NavigableString
import json
@@ -27,12 +26,14 @@ import cProfile
import pstats
from functools import wraps
import asyncio
from lxml import etree, html as lhtml
import sqlite3
import hashlib
from urllib.robotparser import RobotFileParser
import aiohttp
from urllib.parse import urlparse, urlunparse
from functools import lru_cache
from packaging import version
from . import __version__
@@ -1550,7 +1551,7 @@ def extract_xml_tags(string):
return list(set(tags))
def extract_xml_data(tags, string):
def extract_xml_data_legacy(tags, string):
"""
Extract data for specified XML tags from a string.
@@ -1579,6 +1580,38 @@ def extract_xml_data(tags, string):
return data
def extract_xml_data(tags, string):
"""
Extract data for specified XML tags from a string, returning the longest content for each tag.
How it works:
1. Finds all occurrences of each tag in the string using regex.
2. For each tag, selects the occurrence with the longest content.
3. Returns a dictionary of tag-content pairs.
Args:
tags (List[str]): The list of XML tags to extract.
string (str): The input string containing XML data.
Returns:
Dict[str, str]: A dictionary with tag names as keys and longest extracted content as values.
"""
data = {}
for tag in tags:
pattern = f"<{tag}>(.*?)</{tag}>"
matches = re.findall(pattern, string, re.DOTALL)
if matches:
# Find the longest content for this tag
longest_content = max(matches, key=len).strip()
data[tag] = longest_content
else:
data[tag] = ""
return data
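For instance, when a model emits several `<blocks>` sections, the longest occurrence wins:

```python
response = "<blocks>[1]</blocks> retrying... <blocks>[1, 2, 3]</blocks>"
print(extract_xml_data(["blocks"], response))
# {'blocks': '[1, 2, 3]'}
```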
def perform_completion_with_backoff(
provider,
@@ -1647,6 +1680,19 @@ def perform_completion_with_backoff(
"content": ["Rate limit error. Please try again later."],
}
]
except Exception as e:
raise e # Raise any other exceptions immediately
# print("Error during completion request:", str(e))
# error_message = e.message
# return [
# {
# "index": 0,
# "tags": ["error"],
# "content": [
# f"Error during LLM completion request. {error_message}"
# ],
# }
# ]
def extract_blocks(url, html, provider=DEFAULT_PROVIDER, api_token=None, base_url=None):
@@ -1962,6 +2008,82 @@ def normalize_url(href, base_url):
return normalized
def normalize_url_for_deep_crawl(href, base_url):
"""Normalize URLs to ensure consistent format"""
from urllib.parse import urljoin, urlparse, urlunparse, parse_qs, urlencode
# Handle None or empty values
if not href:
return None
# Use urljoin to handle relative URLs
full_url = urljoin(base_url, href.strip())
# Parse the URL for normalization
parsed = urlparse(full_url)
# Convert hostname to lowercase
netloc = parsed.netloc.lower()
# Remove fragment entirely
fragment = ''
# Normalize query parameters if needed
query = parsed.query
if query:
# Parse query parameters
params = parse_qs(query)
# Remove tracking parameters (example - customize as needed)
tracking_params = ['utm_source', 'utm_medium', 'utm_campaign', 'ref', 'fbclid']
for param in tracking_params:
if param in params:
del params[param]
# Rebuild query string, sorted for consistency
query = urlencode(params, doseq=True) if params else ''
# Build normalized URL
normalized = urlunparse((
parsed.scheme,
netloc,
parsed.path.rstrip('/') or '/', # Normalize trailing slash
parsed.params,
query,
fragment
))
return normalized
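A worked example of the normalization (tracking parameters and fragments dropped, host lowercased):

```python
print(normalize_url_for_deep_crawl(
    "/post?utm_source=x&id=42#section",
    "https://Example.com/blog/",
))
# -> "https://example.com/post?id=42"
```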
@lru_cache(maxsize=10000)
def efficient_normalize_url_for_deep_crawl(href, base_url):
"""Efficient URL normalization with proper parsing"""
from urllib.parse import urljoin
if not href:
return None
# Resolve relative URLs
full_url = urljoin(base_url, href.strip())
# Use proper URL parsing
parsed = urlparse(full_url)
# Only perform the most critical normalizations
# 1. Lowercase hostname
# 2. Remove fragment
normalized = urlunparse((
parsed.scheme,
parsed.netloc.lower(),
parsed.path,
parsed.params,
parsed.query,
'' # Remove fragment
))
return normalized
def normalize_url_tmp(href, base_url):
"""Normalize URLs to ensure consistent format"""
# Extract protocol and domain from base URL
@@ -2540,3 +2662,116 @@ class HeadPeekr:
def get_title(head_content: str):
title_match = re.search(r'<title>(.*?)</title>', head_content, re.IGNORECASE | re.DOTALL)
return title_match.group(1) if title_match else None
def preprocess_html_for_schema(html_content, text_threshold=100, attr_value_threshold=200, max_size=100000):
"""
Preprocess HTML to reduce size while preserving structure for schema generation.
Args:
html_content (str): Raw HTML content
text_threshold (int): Maximum length for text nodes before truncation
attr_value_threshold (int): Maximum length for attribute values before truncation
max_size (int): Target maximum size for output HTML
Returns:
str: Preprocessed HTML content
"""
try:
# Parse HTML with error recovery
parser = etree.HTMLParser(remove_comments=True, remove_blank_text=True)
tree = lhtml.fromstring(html_content, parser=parser)
# 1. Remove HEAD section (keep only BODY)
head_elements = tree.xpath('//head')
for head in head_elements:
if head.getparent() is not None:
head.getparent().remove(head)
# 2. Define tags to remove completely
tags_to_remove = [
'script', 'style', 'noscript', 'iframe', 'canvas', 'svg',
'video', 'audio', 'source', 'track', 'map', 'area'
]
# Remove unwanted elements
for tag in tags_to_remove:
elements = tree.xpath(f'//{tag}')
for element in elements:
if element.getparent() is not None:
element.getparent().remove(element)
# 3. Process remaining elements to clean attributes and truncate text
for element in tree.iter():
# Skip if we're at the root level
if element.getparent() is None:
continue
# Clean non-essential attributes but preserve structural ones
# attribs_to_keep = {'id', 'class', 'name', 'href', 'src', 'type', 'value', 'data-'}
# This is more aggressive than the previous version
attribs_to_keep = {'id', 'class', 'name', 'type', 'value'}
# attributes_hates_truncate = ['id', 'class', "data-"]
# Empty list: truncate any attribute value that is too long; a schema-building
# CSS selector should not depend on full attribute text anyway
attributes_hates_truncate = []
# Process each attribute
for attrib in list(element.attrib.keys()):
# Keep if it's essential or starts with data-
if not (attrib in attribs_to_keep or attrib.startswith('data-')):
element.attrib.pop(attrib)
# Truncate long attribute values except for selectors
elif attrib not in attributes_hates_truncate and len(element.attrib[attrib]) > attr_value_threshold:
element.attrib[attrib] = element.attrib[attrib][:attr_value_threshold] + '...'
# Truncate text content if it's too long
if element.text and len(element.text.strip()) > text_threshold:
element.text = element.text.strip()[:text_threshold] + '...'
# Also truncate tail text if present
if element.tail and len(element.tail.strip()) > text_threshold:
element.tail = element.tail.strip()[:text_threshold] + '...'
# 4. Find repeated patterns and keep only a few examples
# This is a simplistic approach - more sophisticated pattern detection could be implemented
pattern_elements = {}
for element in tree.xpath('//*[contains(@class, "")]'):
parent = element.getparent()
if parent is None:
continue
# Create a signature based on tag and classes
classes = element.get('class', '')
if not classes:
continue
signature = f"{element.tag}.{classes}"
if signature in pattern_elements:
pattern_elements[signature].append(element)
else:
pattern_elements[signature] = [element]
# Keep only 3 examples of each repeating pattern
for signature, elements in pattern_elements.items():
if len(elements) > 3:
# Keep the first 2 and last elements
for element in elements[2:-1]:
if element.getparent() is not None:
element.getparent().remove(element)
# 5. Convert back to string
result = etree.tostring(tree, encoding='unicode', method='html')
# If still over the size limit, apply more aggressive truncation
if len(result) > max_size:
return result[:max_size] + "..."
return result
except Exception as e:
# Fallback for parsing errors
return html_content[:max_size] if len(html_content) > max_size else html_content
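An illustrative call (exact output depends on the input markup):

```python
raw = (
    "<html><head><title>t</title><script>alert(1)</script></head>"
    "<body><div class='item'>" + "x" * 500 + "</div></body></html>"
)
slim = preprocess_html_for_schema(raw, text_threshold=100)
# <head> and <script> are removed; the 500-char text node is cut to 100 chars + '...'
print(len(slim) < len(raw))  # True
```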

644
deploy/docker/README-new.md Normal file
View File

@@ -0,0 +1,644 @@
# Crawl4AI Docker Guide 🐳
## Table of Contents
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Option 1: Using Docker Compose (Recommended)](#option-1-using-docker-compose-recommended)
- [Option 2: Manual Local Build & Run](#option-2-manual-local-build--run)
- [Option 3: Using Pre-built Docker Hub Images](#option-3-using-pre-built-docker-hub-images)
- [Dockerfile Parameters](#dockerfile-parameters)
- [Using the API](#using-the-api)
- [Understanding Request Schema](#understanding-request-schema)
- [REST API Examples](#rest-api-examples)
- [Python SDK](#python-sdk)
- [Metrics & Monitoring](#metrics--monitoring)
- [Deployment Scenarios](#deployment-scenarios)
- [Complete Examples](#complete-examples)
- [Server Configuration](#server-configuration)
- [Understanding config.yml](#understanding-configyml)
- [JWT Authentication](#jwt-authentication)
- [Configuration Tips and Best Practices](#configuration-tips-and-best-practices)
- [Customizing Your Configuration](#customizing-your-configuration)
- [Configuration Recommendations](#configuration-recommendations)
- [Getting Help](#getting-help)
## Prerequisites
Before we dive in, make sure you have:
- Docker installed and running (version 20.10.0 or higher), including `docker compose` (usually bundled with Docker Desktop).
- `git` for cloning the repository.
- At least 4GB of RAM available for the container (more recommended for heavy use).
- Python 3.10+ (if using the Python SDK).
- Node.js 16+ (if using the Node.js examples).
> 💡 **Pro tip**: Run `docker info` to check your Docker installation and available resources.
## Installation
We offer several ways to get the Crawl4AI server running. Docker Compose is the easiest way to manage local builds and runs.
### Option 1: Using Docker Compose (Recommended)
Docker Compose simplifies building and running the service, especially for local development and testing across different platforms.
#### 1. Clone Repository
```bash
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai
```
#### 2. Environment Setup (API Keys)
If you plan to use LLMs, copy the example environment file and add your API keys. This file should be in the **project root directory**.
```bash
# Make sure you are in the 'crawl4ai' root directory
cp deploy/docker/.llm.env.example .llm.env
# Now edit .llm.env and add your API keys
# Example content:
# OPENAI_API_KEY=sk-your-key
# ANTHROPIC_API_KEY=your-anthropic-key
# ...
```
> 🔑 **Note**: Keep your API keys secure! Never commit `.llm.env` to version control.
#### 3. Build and Run with Compose
The `docker-compose.yml` file in the project root defines services for different scenarios using **profiles**.
* **Build and Run Locally (AMD64):**
```bash
# Builds the image locally using Dockerfile and runs it
docker compose --profile local-amd64 up --build -d
```
* **Build and Run Locally (ARM64):**
```bash
# Builds the image locally using Dockerfile and runs it
docker compose --profile local-arm64 up --build -d
```
* **Run Pre-built Image from Docker Hub (AMD64):**
```bash
# Pulls and runs the specified AMD64 image from Docker Hub
# (Set VERSION env var for specific tags, e.g., VERSION=0.5.1-d1)
docker compose --profile hub-amd64 up -d
```
* **Run Pre-built Image from Docker Hub (ARM64):**
```bash
# Pulls and runs the specified ARM64 image from Docker Hub
docker compose --profile hub-arm64 up -d
```
> The server will be available at `http://localhost:11235`.
#### 4. Stopping Compose Services
```bash
# Stop the service(s) associated with a profile (e.g., local-amd64)
docker compose --profile local-amd64 down
```
### Option 2: Manual Local Build & Run
Use this approach if you prefer not to use Docker Compose for local builds.
#### 1. Clone Repository & Setup Environment
Follow steps 1 and 2 from the Docker Compose section above (clone repo, `cd crawl4ai`, create `.llm.env` in the root).
#### 2. Build the Image (Multi-Arch)
Use `docker buildx` to build the image. This example builds for multiple platforms and loads the image matching your host architecture into the local Docker daemon.
```bash
# Make sure you are in the 'crawl4ai' root directory
docker buildx build --platform linux/amd64,linux/arm64 -t crawl4ai-local:latest --load .
```
#### 3. Run the Container
* **Basic run (no LLM support):**
```bash
# Replace --platform if your host is ARM64
docker run -d \
-p 11235:11235 \
--name crawl4ai-standalone \
--shm-size=1g \
--platform linux/amd64 \
crawl4ai-local:latest
```
* **With LLM support:**
```bash
# Make sure .llm.env is in the current directory (project root)
# Replace --platform if your host is ARM64
docker run -d \
-p 11235:11235 \
--name crawl4ai-standalone \
--env-file .llm.env \
--shm-size=1g \
--platform linux/amd64 \
crawl4ai-local:latest
```
> The server will be available at `http://localhost:11235`.
#### 4. Stopping the Manual Container
```bash
docker stop crawl4ai-standalone && docker rm crawl4ai-standalone
```
### Option 3: Using Pre-built Docker Hub Images
Pull and run images directly from Docker Hub without building locally.
#### 1. Pull the Image
We use a versioning scheme like `LIBRARY_VERSION-dREVISION` (e.g., `0.5.1-d1`). The `latest` tag points to the most recent stable release. Images are built with multi-arch manifests, so Docker usually pulls the correct version for your system automatically.
```bash
# Pull a specific version (recommended for stability)
docker pull unclecode/crawl4ai:0.5.1-d1
# Or pull the latest stable version
docker pull unclecode/crawl4ai:latest
```
#### 2. Setup Environment (API Keys)
If using LLMs, create the `.llm.env` file in a directory of your choice, similar to Step 2 in the Compose section.
#### 3. Run the Container
* **Basic run:**
```bash
docker run -d \
-p 11235:11235 \
--name crawl4ai-hub \
--shm-size=1g \
unclecode/crawl4ai:0.5.1-d1 # Or use :latest
```
* **With LLM support:**
```bash
# Make sure .llm.env is in the current directory you are running docker from
docker run -d \
-p 11235:11235 \
--name crawl4ai-hub \
--env-file .llm.env \
--shm-size=1g \
unclecode/crawl4ai:0.5.1-d1 # Or use :latest
```
> The server will be available at `http://localhost:11235`.
#### 4. Stopping the Hub Container
```bash
docker stop crawl4ai-hub && docker rm crawl4ai-hub
```
#### Docker Hub Versioning Explained
* **Image Name:** `unclecode/crawl4ai`
* **Tag Format:** `LIBRARY_VERSION-dREVISION`
* `LIBRARY_VERSION`: The Semantic Version of the core `crawl4ai` Python library included (e.g., `0.5.1`).
* `dREVISION`: An incrementing number (starting at `d1`) for Docker build changes made *without* changing the library version (e.g., base image updates, dependency fixes). Resets to `d1` for each new `LIBRARY_VERSION`.
* **Example:** `unclecode/crawl4ai:0.5.1-d1`
* **`latest` Tag:** Points to the most recent stable `LIBRARY_VERSION-dREVISION`.
* **Multi-Arch:** Images support `linux/amd64` and `linux/arm64`. Docker automatically selects the correct architecture.
---
*(Rest of the document remains largely the same, but with key updates below)*
---
## Dockerfile Parameters
You can customize the image build process using build arguments (`--build-arg`). These are typically used via `docker buildx build` or within the `docker-compose.yml` file.
```bash
# Example: Build with 'all' features using buildx
docker buildx build \
--platform linux/amd64,linux/arm64 \
--build-arg INSTALL_TYPE=all \
-t yourname/crawl4ai-all:latest \
--load \
. # Build from root context
```
### Build Arguments Explained
| Argument | Description | Default | Options |
| :----------- | :--------------------------------------- | :-------- | :--------------------------------- |
| INSTALL_TYPE | Feature set | `default` | `default`, `all`, `torch`, `transformer` |
| ENABLE_GPU | GPU support (CUDA for AMD64) | `false` | `true`, `false` |
| APP_HOME | Install path inside container (advanced) | `/app` | any valid path |
| USE_LOCAL | Install library from local source | `true` | `true`, `false` |
| GITHUB_REPO | Git repo to clone if USE_LOCAL=false | *(see Dockerfile)* | any git URL |
| GITHUB_BRANCH| Git branch to clone if USE_LOCAL=false | `main` | any branch name |
*(Note: PYTHON_VERSION is fixed by the `FROM` instruction in the Dockerfile)*
### Build Best Practices
1. **Choose the Right Install Type**
* `default`: Basic installation, smallest image size. Suitable for most standard web scraping and markdown generation.
* `all`: Full features including `torch` and `transformers` for advanced extraction strategies (e.g., CosineStrategy, certain LLM filters). Significantly larger image. Ensure you need these extras.
2. **Platform Considerations**
* Use `buildx` for building multi-architecture images, especially for pushing to registries.
* Use `docker compose` profiles (`local-amd64`, `local-arm64`) for easy platform-specific local builds.
3. **Performance Optimization**
* The image automatically includes platform-specific optimizations (OpenMP for AMD64, OpenBLAS for ARM64).
---
## Using the API
Communicate with the running Docker server via its REST API (defaulting to `http://localhost:11235`). You can use the Python SDK or make direct HTTP requests.
### Python SDK
Install the SDK: `pip install crawl4ai`
```python
import asyncio
from crawl4ai.docker_client import Crawl4aiDockerClient
from crawl4ai import BrowserConfig, CrawlerRunConfig, CacheMode # Assuming you have crawl4ai installed
async def main():
# Point to the correct server port
async with Crawl4aiDockerClient(base_url="http://localhost:11235", verbose=True) as client:
# If JWT is enabled on the server, authenticate first:
# await client.authenticate("user@example.com") # See Server Configuration section
# Example Non-streaming crawl
print("--- Running Non-Streaming Crawl ---")
results = await client.crawl(
["https://httpbin.org/html"],
browser_config=BrowserConfig(headless=True), # Use library classes for config aid
crawler_config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
)
if results: # client.crawl returns None on failure
print(f"Non-streaming results success: {results.success}")
if results.success:
for result in results: # Iterate through the CrawlResultContainer
print(f"URL: {result.url}, Success: {result.success}")
else:
print("Non-streaming crawl failed.")
# Example Streaming crawl
print("\n--- Running Streaming Crawl ---")
stream_config = CrawlerRunConfig(stream=True, cache_mode=CacheMode.BYPASS)
try:
async for result in await client.crawl( # client.crawl returns an async generator for streaming
["https://httpbin.org/html", "https://httpbin.org/links/5/0"],
browser_config=BrowserConfig(headless=True),
crawler_config=stream_config
):
print(f"Streamed result: URL: {result.url}, Success: {result.success}")
except Exception as e:
print(f"Streaming crawl failed: {e}")
# Example Get schema
print("\n--- Getting Schema ---")
schema = await client.get_schema()
print(f"Schema received: {bool(schema)}") # Print whether schema was received
if __name__ == "__main__":
asyncio.run(main())
```
*(SDK parameters like timeout, verify_ssl etc. remain the same)*
### Direct API Calls
Crucially, when sending configurations directly via JSON, they **must** follow the `{"type": "ClassName", "params": {...}}` structure for any non-primitive value (like config objects or strategies). Dictionaries must be wrapped as `{"type": "dict", "value": {...}}`.
*(Keep the detailed explanation of Configuration Structure, Basic Pattern, Simple vs Complex, Strategy Pattern, Complex Nested Example, Quick Grammar Overview, Important Rules, Pro Tip)*
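For instance, a minimal `/crawl` payload following this pattern (values illustrative):

```json
{
  "urls": ["https://example.com"],
  "browser_config": {
    "type": "BrowserConfig",
    "params": {
      "headless": true,
      "viewport": {"type": "dict", "value": {"width": 1200, "height": 800}}
    }
  },
  "crawler_config": {
    "type": "CrawlerRunConfig",
    "params": {"cache_mode": "bypass"}
  }
}
```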
#### More Examples *(Ensure Schema example uses type/value wrapper)*
**Advanced Crawler Configuration**
*(Keep example, ensure cache_mode uses valid enum value like "bypass")*
**Extraction Strategy**
```json
{
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {
"extraction_strategy": {
"type": "JsonCssExtractionStrategy",
"params": {
"schema": {
"type": "dict",
"value": {
"baseSelector": "article.post",
"fields": [
{"name": "title", "selector": "h1", "type": "text"},
{"name": "content", "selector": ".content", "type": "html"}
]
}
}
}
}
}
}
}
```
**LLM Extraction Strategy** *(Keep example, ensure schema uses type/value wrapper)*
*(Keep Deep Crawler Example)*
### REST API Examples
The examples below use the server's default port, `11235`.
#### Simple Crawl
```python
import requests
# Configuration objects converted to the required JSON structure
browser_config_payload = {
"type": "BrowserConfig",
"params": {"headless": True}
}
crawler_config_payload = {
"type": "CrawlerRunConfig",
"params": {"stream": False, "cache_mode": "bypass"} # Use string value of enum
}
crawl_payload = {
"urls": ["https://httpbin.org/html"],
"browser_config": browser_config_payload,
"crawler_config": crawler_config_payload
}
response = requests.post(
"http://localhost:11235/crawl", # Updated port
# headers={"Authorization": f"Bearer {token}"}, # If JWT is enabled
json=crawl_payload
)
print(f"Status Code: {response.status_code}")
if response.ok:
print(response.json())
else:
print(f"Error: {response.text}")
```
#### Streaming Results
```python
import json
import httpx # Use httpx for async streaming example
async def test_stream_crawl(token: str = None): # Made token optional
"""Test the /crawl/stream endpoint with multiple URLs."""
url = "http://localhost:11235/crawl/stream" # Updated port
payload = {
"urls": [
"https://httpbin.org/html",
"https://httpbin.org/links/5/0",
],
"browser_config": {
"type": "BrowserConfig",
"params": {"headless": True, "viewport": {"type": "dict", "value": {"width": 1200, "height": 800}}} # Viewport needs type:dict
},
"crawler_config": {
"type": "CrawlerRunConfig",
"params": {"stream": True, "cache_mode": "bypass"}
}
}
headers = {}
# if token:
# headers = {"Authorization": f"Bearer {token}"} # If JWT is enabled
try:
async with httpx.AsyncClient() as client:
async with client.stream("POST", url, json=payload, headers=headers, timeout=120.0) as response:
print(f"Status: {response.status_code} (Expected: 200)")
response.raise_for_status() # Raise exception for bad status codes
# Read streaming response line-by-line (NDJSON)
async for line in response.aiter_lines():
if line:
try:
data = json.loads(line)
# Check for completion marker
if data.get("status") == "completed":
print("Stream completed.")
break
print(f"Streamed Result: {json.dumps(data, indent=2)}")
except json.JSONDecodeError:
print(f"Warning: Could not decode JSON line: {line}")
except httpx.HTTPStatusError as e:
print(f"HTTP error occurred: {e.response.status_code} - {e.response.text}")
except Exception as e:
print(f"Error in streaming crawl test: {str(e)}")
# To run this example:
# import asyncio
# asyncio.run(test_stream_crawl())
```
---
## Metrics & Monitoring
Keep an eye on your crawler with these endpoints:
- `/health` - Quick health check
- `/metrics` - Detailed Prometheus metrics
- `/schema` - Full API schema
Example health check:
```bash
curl http://localhost:11235/health
```
---
*(Deployment Scenarios and Complete Examples sections remain the same, maybe update links if examples moved)*
---
## Server Configuration
The server's behavior can be customized through the `config.yml` file.
### Understanding config.yml
The configuration file is loaded from `/app/config.yml` inside the container. By default, the file from `deploy/docker/config.yml` in the repository is copied there during the build.
Here's a detailed breakdown of the configuration options (using defaults from `deploy/docker/config.yml`):
```yaml
# Application Configuration
app:
title: "Crawl4AI API"
version: "1.0.0" # Consider setting this to match library version, e.g., "0.5.1"
host: "0.0.0.0"
port: 8020 # NOTE: This port is used ONLY when running server.py directly. Gunicorn overrides this (see supervisord.conf).
reload: False # Default set to False - suitable for production
timeout_keep_alive: 300
# Default LLM Configuration
llm:
provider: "openai/gpt-4o-mini"
api_key_env: "OPENAI_API_KEY"
# api_key: sk-... # If you pass the API key directly then api_key_env will be ignored
# Redis Configuration (Used by internal Redis server managed by supervisord)
redis:
host: "localhost"
port: 6379
db: 0
password: ""
# ... other redis options ...
# Rate Limiting Configuration
rate_limiting:
enabled: True
default_limit: "1000/minute"
trusted_proxies: []
storage_uri: "memory://" # Use "redis://localhost:6379" if you need persistent/shared limits
# Security Configuration
security:
enabled: false # Master toggle for security features
jwt_enabled: false # Enable JWT authentication (requires security.enabled=true)
https_redirect: false # Force HTTPS (requires security.enabled=true)
trusted_hosts: ["*"] # Allowed hosts (use specific domains in production)
headers: # Security headers (applied if security.enabled=true)
x_content_type_options: "nosniff"
x_frame_options: "DENY"
content_security_policy: "default-src 'self'"
strict_transport_security: "max-age=63072000; includeSubDomains"
# Crawler Configuration
crawler:
memory_threshold_percent: 95.0
rate_limiter:
base_delay: [1.0, 2.0] # Min/max delay between requests in seconds for dispatcher
timeouts:
stream_init: 30.0 # Timeout for stream initialization
batch_process: 300.0 # Timeout for non-streaming /crawl processing
# Logging Configuration
logging:
level: "INFO"
format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
# Observability Configuration
observability:
prometheus:
enabled: True
endpoint: "/metrics"
health_check:
endpoint: "/health"
```
*(JWT Authentication section remains the same, just note the default port is now 11235 for requests)*
*(Configuration Tips and Best Practices remain the same)*
### Customizing Your Configuration
You can override the default `config.yml`.
#### Method 1: Modify Before Build
1. Edit the `deploy/docker/config.yml` file in your local repository clone.
2. Build the image using `docker buildx` or `docker compose --profile local-... up --build`. The modified file will be copied into the image.
#### Method 2: Runtime Mount (Recommended for Custom Deploys)
1. Create your custom configuration file, e.g., `my-custom-config.yml` locally. Ensure it contains all necessary sections.
2. Mount it when running the container:
* **Using `docker run`:**
```bash
# Assumes my-custom-config.yml is in the current directory
docker run -d -p 11235:11235 \
--name crawl4ai-custom-config \
--env-file .llm.env \
--shm-size=1g \
-v $(pwd)/my-custom-config.yml:/app/config.yml \
unclecode/crawl4ai:latest # Or your specific tag
```
* **Using `docker-compose.yml`:** Add a `volumes` section to the service definition:
```yaml
services:
crawl4ai-hub-amd64: # Or your chosen service
image: unclecode/crawl4ai:latest
profiles: ["hub-amd64"]
<<: *base-config
volumes:
# Mount local custom config over the default one in the container
- ./my-custom-config.yml:/app/config.yml
# Keep the shared memory volume from base-config
- /dev/shm:/dev/shm
```
*(Note: Ensure `my-custom-config.yml` is in the same directory as `docker-compose.yml`)*
> 💡 When mounting, your custom file *completely replaces* the default one. Ensure it's a valid and complete configuration.
### Configuration Recommendations
1. **Security First** 🔒
- Always enable security in production
- Use specific trusted_hosts instead of wildcards
- Set up proper rate limiting to protect your server
- Consider your environment before enabling HTTPS redirect
2. **Resource Management** 💻
- Adjust memory_threshold_percent based on available RAM
- Set timeouts according to your content size and network conditions
- Use Redis for rate limiting in multi-container setups (see the snippet after this list)
3. **Monitoring** 📊
- Enable Prometheus if you need metrics
   - Use DEBUG logging in development and INFO in production
- Regular health check monitoring is crucial
4. **Performance Tuning** ⚡
- Start with conservative rate limiter delays
- Increase batch_process timeout for large content
- Adjust stream_init timeout based on initial response times
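Putting several of these recommendations together, here is an illustrative fragment of a hardened `config.yml`; the domain and limit values are placeholders to adapt, not suggested defaults:
```yaml
security:
  enabled: true
  jwt_enabled: true
  trusted_hosts: ["api.example.com"]     # specific domains instead of "*"
rate_limiting:
  enabled: True
  default_limit: "200/minute"
  storage_uri: "redis://localhost:6379"  # shared limits across containers
logging:
  level: "INFO"                          # DEBUG only in development
```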
## Getting Help
We're here to help you succeed with Crawl4AI! Here's how to get support:
- 📖 Check our [full documentation](https://docs.crawl4ai.com)
- 🐛 Found a bug? [Open an issue](https://github.com/unclecode/crawl4ai/issues)
- 💬 Join our [Discord community](https://discord.gg/crawl4ai)
- ⭐ Star us on GitHub to show support!
## Summary
In this guide, we've covered everything you need to get started with Crawl4AI's Docker deployment:
- Building and running the Docker container
- Configuring the environment
- Making API requests with proper typing
- Using the Python SDK
- Monitoring your deployment
Remember, the examples in the `examples` folder are your friends - they show real-world usage patterns that you can adapt for your needs.
Keep exploring, and don't hesitate to reach out if you need help! We're building something amazing together. 🚀
Happy crawling! 🕷️

View File

@@ -352,7 +352,10 @@ Example:
from crawl4ai import CrawlerRunConfig, PruningContentFilter
config = CrawlerRunConfig(
-    content_filter=PruningContentFilter(threshold=0.48)
+    markdown_generator=DefaultMarkdownGenerator(
+        content_filter=PruningContentFilter(threshold=0.48, threshold_type="fixed")
+    ),
+    cache_mode=CacheMode.BYPASS
)
print(config.dump()) # Use this JSON in your API calls
```
@@ -551,7 +554,7 @@ async def test_stream_crawl(session, token: str):
"https://example.com/page3",
],
"browser_config": {"headless": True, "viewport": {"width": 1200}},
"crawler_config": {"stream": True, "cache_mode": "aggressive"}
"crawler_config": {"stream": True, "cache_mode": "bypass"}
}
# headers = {"Authorization": f"Bearer {token}"} # If JWT is enabled, more on this later
@@ -595,8 +598,8 @@ curl http://localhost:8000/health
## Complete Examples
Check out the `examples` folder in our repository for full working examples! Here are two to get you started:
-[Using Client SDK](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_python_sdk_example.py)
-[Using REST API](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_python_rest_api_example.py)
+[Using Client SDK](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_python_sdk.py)
+[Using REST API](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_python_rest_api.py)
## Server Configuration

View File

@@ -2,6 +2,7 @@ import os
import json
import asyncio
from typing import List, Tuple
from functools import partial
import logging
from typing import Optional, AsyncGenerator
@@ -18,7 +19,8 @@ from crawl4ai import (
CacheMode,
BrowserConfig,
MemoryAdaptiveDispatcher,
-    RateLimiter
+    RateLimiter,
+    LLMConfig
)
from crawl4ai.utils import perform_completion_with_backoff
from crawl4ai.content_filter_strategy import (
@@ -103,8 +105,10 @@ async def process_llm_extraction(
else:
api_key = os.environ.get(config["llm"].get("api_key_env", None), "")
llm_strategy = LLMExtractionStrategy(
-            provider=config["llm"]["provider"],
-            api_token=api_key,
+            llm_config=LLMConfig(
+                provider=config["llm"]["provider"],
+                api_token=api_key
+            ),
instruction=instruction,
schema=json.loads(schema) if schema else None,
)
@@ -164,8 +168,10 @@ async def handle_markdown_request(
FilterType.FIT: PruningContentFilter(),
FilterType.BM25: BM25ContentFilter(user_query=query or ""),
FilterType.LLM: LLMContentFilter(
-            provider=config["llm"]["provider"],
-            api_token=os.environ.get(config["llm"].get("api_key_env", None), ""),
+            llm_config=LLMConfig(
+                provider=config["llm"]["provider"],
+                api_token=os.environ.get(config["llm"].get("api_key_env", None), ""),
+            ),
instruction=query or "Extract main content"
)
}[filter_type]
@@ -382,20 +388,25 @@ async def handle_crawl_request(
)
)
-        async with AsyncWebCrawler(config=browser_config) as crawler:
-            results = await crawler.arun_many(
-                urls=urls,
-                config=crawler_config,
-                dispatcher=dispatcher
-            )
-            return {
-                "success": True,
-                "results": [result.model_dump() for result in results]
-            }
+        crawler: AsyncWebCrawler = AsyncWebCrawler(config=browser_config)
+        await crawler.start()
+        results = []
+        func = getattr(crawler, "arun" if len(urls) == 1 else "arun_many")
+        partial_func = partial(func,
+            urls[0] if len(urls) == 1 else urls,
+            config=crawler_config,
+            dispatcher=dispatcher)
+        results = await partial_func()
+        await crawler.close()
+        return {
+            "success": True,
+            "results": [result.model_dump() for result in results]
+        }
except Exception as e:
logger.error(f"Crawl error: {str(e)}", exc_info=True)
+        if 'crawler' in locals():
+            await crawler.close()
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)

View File

@@ -10,7 +10,7 @@ from pydantic.main import BaseModel
import base64
instance = JWT()
-security = HTTPBearer()
+security = HTTPBearer(auto_error=False)
SECRET_KEY = os.environ.get("SECRET_KEY", "mysecret")
ACCESS_TOKEN_EXPIRE_MINUTES = 60
@@ -30,6 +30,9 @@ def create_access_token(data: dict, expires_delta: Optional[timedelta] = None) -
def verify_token(credentials: HTTPAuthorizationCredentials = Depends(security)) -> Dict:
"""Verify the JWT token from the Authorization header."""
+    if credentials is None:
+        return None
token = credentials.credentials
verifying_key = get_jwk_from_secret(SECRET_KEY)
try:
@@ -38,9 +41,15 @@ def verify_token(credentials: HTTPAuthorizationCredentials = Depends(security))
except Exception:
raise HTTPException(status_code=401, detail="Invalid or expired token")
def get_token_dependency(config: Dict):
"""Return the token dependency if JWT is enabled, else None."""
return verify_token if config.get("security", {}).get("jwt_enabled", False) else None
"""Return the token dependency if JWT is enabled, else a function that returns None."""
if config.get("security", {}).get("jwt_enabled", False):
return verify_token
else:
return lambda: None
class TokenRequest(BaseModel):
email: EmailStr

View File

@@ -3,8 +3,8 @@ app:
title: "Crawl4AI API"
version: "1.0.0"
host: "0.0.0.0"
-  port: 8000
-  reload: True
+  port: 8020
+  reload: False
timeout_keep_alive: 300
# Default LLM Configuration
@@ -38,8 +38,8 @@ rate_limiting:
# Security Configuration
security:
-  enabled: true
-  jwt_enabled: true
+  enabled: false
+  jwt_enabled: false
https_redirect: false
trusted_hosts: ["*"]
headers:
@@ -68,4 +68,4 @@ observability:
enabled: True
endpoint: "/metrics"
health_check:
endpoint: "/health"
endpoint: "/health"

View File

@@ -1,4 +1,3 @@
crawl4ai
fastapi
uvicorn
gunicorn>=23.0.0

View File

@@ -1,12 +1,28 @@
[supervisord]
-nodaemon=true
+nodaemon=true ; Run supervisord in the foreground
+logfile=/dev/null ; Log supervisord output to stdout/stderr
+logfile_maxbytes=0
[program:redis]
-command=redis-server
+command=/usr/bin/redis-server --loglevel notice ; Path to redis-server on Alpine
+user=appuser ; Run redis as our non-root user
autorestart=true
priority=10
+stdout_logfile=/dev/stdout ; Redirect redis stdout to container stdout
+stdout_logfile_maxbytes=0
+stderr_logfile=/dev/stderr ; Redirect redis stderr to container stderr
+stderr_logfile_maxbytes=0
[program:gunicorn]
-command=gunicorn --bind 0.0.0.0:8000 --workers 4 --threads 2 --timeout 300 --graceful-timeout 60 --keep-alive 65 --log-level debug --worker-class uvicorn.workers.UvicornWorker --max-requests 1000 --max-requests-jitter 50 server:app
+command=/usr/local/bin/gunicorn --bind 0.0.0.0:11235 --workers 2 --threads 2 --timeout 120 --graceful-timeout 30 --keep-alive 60 --log-level info --worker-class uvicorn.workers.UvicornWorker server:app
+directory=/app ; Working directory for the app
+user=appuser ; Run gunicorn as our non-root user
autorestart=true
-priority=20
+priority=20
+environment=PYTHONUNBUFFERED=1 ; Ensure Python output is sent straight to logs
+stdout_logfile=/dev/stdout ; Redirect gunicorn stdout to container stdout
+stdout_logfile_maxbytes=0
+stderr_logfile=/dev/stderr ; Redirect gunicorn stderr to container stderr
+stderr_logfile_maxbytes=0
+# Optional: Add filebeat or other logging agents here if needed

View File

@@ -1,15 +1,30 @@
-# Base configuration (not a service, just a reusable config block)
+# docker-compose.yml
+# Base configuration anchor for reusability
x-base-config: &base-config
ports:
-    - "8000:8000"
-    - "9222:9222"
-    - "8080:8080"
+    # Map host port 11235 to container port 11235 (where Gunicorn will listen)
+    - "11235:11235"
+    # - "8080:8080" # Uncomment if needed
+  # Load API keys primarily from .llm.env file
+  # Create .llm.env in the root directory based on .llm.env.example
env_file:
- .llm.env
+  # Define environment variables, allowing overrides from host environment
+  # Syntax ${VAR:-} uses host env var 'VAR' if set, otherwise uses value from .llm.env
environment:
- CRAWL4AI_API_TOKEN=${CRAWL4AI_API_TOKEN:-}
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- CLAUDE_API_KEY=${CLAUDE_API_KEY:-}
+    - DEEPSEEK_API_KEY=${DEEPSEEK_API_KEY:-}
+    - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
+    - GROQ_API_KEY=${GROQ_API_KEY:-}
+    - TOGETHER_API_KEY=${TOGETHER_API_KEY:-}
+    - MISTRAL_API_KEY=${MISTRAL_API_KEY:-}
+    - GEMINI_API_TOKEN=${GEMINI_API_TOKEN:-}
volumes:
+    # Mount /dev/shm for Chromium/Playwright performance
- /dev/shm:/dev/shm
deploy:
resources:
@@ -19,47 +34,47 @@ x-base-config: &base-config
memory: 1G
restart: unless-stopped
healthcheck:
+    # IMPORTANT: Ensure Gunicorn binds to 11235 in supervisord.conf
test: ["CMD", "curl", "-f", "http://localhost:11235/health"]
interval: 30s
timeout: 10s
retries: 3
-    start_period: 40s
+    start_period: 40s # Give the server time to start
# Run the container as the non-root user defined in the Dockerfile
user: "appuser"
services:
-  # Local build services for different platforms
-  crawl4ai-amd64:
+  # --- Local Build Services ---
+  crawl4ai-local-amd64:
build:
-      context: .
-      dockerfile: Dockerfile
+      context: . # Build context is the root directory
+      dockerfile: Dockerfile # Dockerfile is in the root directory
args:
PYTHON_VERSION: "3.10"
INSTALL_TYPE: ${INSTALL_TYPE:-basic}
ENABLE_GPU: false
platforms:
- linux/amd64
INSTALL_TYPE: ${INSTALL_TYPE:-default}
ENABLE_GPU: ${ENABLE_GPU:-false}
# PYTHON_VERSION arg is omitted as it's fixed by 'FROM python:3.10-slim' in Dockerfile
platform: linux/amd64
profiles: ["local-amd64"]
-    <<: *base-config # we included the configuration directly instead of using 'extends'
+    <<: *base-config # Inherit base configuration
-  crawl4ai-arm64:
+  crawl4ai-local-arm64:
build:
-      context: .
-      dockerfile: Dockerfile
+      context: . # Build context is the root directory
+      dockerfile: Dockerfile # Dockerfile is in the root directory
args:
PYTHON_VERSION: "3.10"
INSTALL_TYPE: ${INSTALL_TYPE:-basic}
ENABLE_GPU: false
platforms:
- linux/arm64
INSTALL_TYPE: ${INSTALL_TYPE:-default}
ENABLE_GPU: ${ENABLE_GPU:-false}
platform: linux/arm64
profiles: ["local-arm64"]
<<: *base-config
-  # Hub services for different platforms and versions
+  # --- Docker Hub Image Services ---
crawl4ai-hub-amd64:
-    image: unclecode/crawl4ai:${VERSION:-basic}-amd64
+    image: unclecode/crawl4ai:${VERSION:-latest}-amd64
profiles: ["hub-amd64"]
<<: *base-config
crawl4ai-hub-arm64:
-    image: unclecode/crawl4ai:${VERSION:-basic}-arm64
+    image: unclecode/crawl4ai:${VERSION:-latest}-arm64
profiles: ["hub-arm64"]
<<: *base-config

View File

@@ -0,0 +1,123 @@
# Builtin Browser in Crawl4AI
This document explains the builtin browser feature in Crawl4AI and how to use it effectively.
## What is the Builtin Browser?
The builtin browser is a persistent Chrome instance that Crawl4AI manages for you. It runs in the background and can be used by multiple crawling operations, eliminating the need to start and stop browsers for each crawl.
Benefits include:
- **Faster startup times** - The browser is already running, so your scripts start faster
- **Shared resources** - All your crawling scripts can use the same browser instance
- **Simplified management** - No need to worry about CDP URLs or browser processes
- **Persistent cookies and sessions** - Browser state persists between script runs
- **Lower resource usage** - Only one browser instance serves multiple scripts
## Using the Builtin Browser
### In Python Code
Using the builtin browser in your code is simple:
```python
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
# Create browser config with builtin mode
browser_config = BrowserConfig(
browser_mode="builtin", # This is the key setting!
headless=True # Can be headless or not
)
# Create the crawler
crawler = AsyncWebCrawler(config=browser_config)
# Use it - no need to explicitly start()
result = await crawler.arun("https://example.com")
```
Key points:
1. Set `browser_mode="builtin"` in your BrowserConfig
2. No need for explicit `start()` call - the crawler will automatically connect to the builtin browser
3. No need to use a context manager or call `close()` - the browser stays running
### Via CLI
The CLI provides commands to manage the builtin browser:
```bash
# Start the builtin browser
crwl browser start
# Check its status
crwl browser status
# Open a visible window to see what the browser is doing
crwl browser view --url https://example.com
# Stop it when no longer needed
crwl browser stop
# Restart with different settings
crwl browser restart --no-headless
```
When crawling via CLI, simply add the builtin browser mode:
```bash
crwl https://example.com -b "browser_mode=builtin"
```
## How It Works
1. When a crawler with `browser_mode="builtin"` is created:
- It checks if a builtin browser is already running
- If not, it automatically launches one
- It connects to the browser via CDP (Chrome DevTools Protocol)
2. The browser process continues running after your script exits
- This means it's ready for the next crawl
- You can manage it via the CLI commands
3. During installation, Crawl4AI attempts to create a builtin browser automatically
## Example
See the [builtin_browser_example.py](builtin_browser_example.py) file for a complete example.
Run it with:
```bash
python builtin_browser_example.py
```
## When to Use
The builtin browser is ideal for:
- Scripts that run frequently
- Development and testing workflows
- Applications that need to minimize startup time
- Systems where you want to manage browser instances centrally
You might not want to use it when:
- Running one-off scripts
- Needing different browser configurations for different tasks (see the sketch below)
- Working in environments where persistent processes are not allowed
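If you regularly switch between these situations, one option is to select the mode at runtime instead of hard-coding it. A minimal sketch, assuming an environment variable of your own choosing (`BROWSER_MODE` here is hypothetical, not something Crawl4AI reads itself):
```python
import os
from crawl4ai import AsyncWebCrawler, BrowserConfig

# BROWSER_MODE is our own convention for this sketch, not a variable
# Crawl4AI reads; "builtin" is the mode documented above.
mode = os.getenv("BROWSER_MODE", "builtin")

browser_config = BrowserConfig(
    browser_mode=mode,
    headless=True,
)
crawler = AsyncWebCrawler(config=browser_config)
```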
## Troubleshooting
If you encounter issues:
1. Check the browser status:
```bash
crwl browser status
```
2. Try restarting it:
```bash
crwl browser restart
```
3. If problems persist, stop it and let Crawl4AI start a fresh one:
```bash
crwl browser stop
```

View File

@@ -0,0 +1,79 @@
import asyncio
import time
from crawl4ai.async_webcrawler import AsyncWebCrawler, CacheMode
from crawl4ai.async_configs import CrawlerRunConfig
from crawl4ai.async_dispatcher import MemoryAdaptiveDispatcher, RateLimiter
VERBOSE = False
async def crawl_sequential(urls):
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, verbose=VERBOSE)
results = []
start_time = time.perf_counter()
async with AsyncWebCrawler() as crawler:
for url in urls:
result_container = await crawler.arun(url=url, config=config)
results.append(result_container[0])
total_time = time.perf_counter() - start_time
return total_time, results
async def crawl_parallel_dispatcher(urls):
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, verbose=VERBOSE)
# Dispatcher with rate limiter enabled (default behavior)
dispatcher = MemoryAdaptiveDispatcher(
rate_limiter=RateLimiter(base_delay=(1.0, 3.0), max_delay=60.0, max_retries=3),
max_session_permit=50,
)
start_time = time.perf_counter()
async with AsyncWebCrawler() as crawler:
result_container = await crawler.arun_many(urls=urls, config=config, dispatcher=dispatcher)
results = []
if isinstance(result_container, list):
results = result_container
else:
async for res in result_container:
results.append(res)
total_time = time.perf_counter() - start_time
return total_time, results
async def crawl_parallel_no_rate_limit(urls):
config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, verbose=VERBOSE)
# Dispatcher with no rate limiter and a high session permit to avoid queuing
dispatcher = MemoryAdaptiveDispatcher(
rate_limiter=None,
max_session_permit=len(urls) # allow all URLs concurrently
)
start_time = time.perf_counter()
async with AsyncWebCrawler() as crawler:
result_container = await crawler.arun_many(urls=urls, config=config, dispatcher=dispatcher)
results = []
if isinstance(result_container, list):
results = result_container
else:
async for res in result_container:
results.append(res)
total_time = time.perf_counter() - start_time
return total_time, results
async def main():
urls = ["https://example.com"] * 100
print(f"Crawling {len(urls)} URLs sequentially...")
seq_time, seq_results = await crawl_sequential(urls)
print(f"Sequential crawling took: {seq_time:.2f} seconds\n")
print(f"Crawling {len(urls)} URLs in parallel using arun_many with dispatcher (with rate limit)...")
disp_time, disp_results = await crawl_parallel_dispatcher(urls)
print(f"Parallel (dispatcher with rate limiter) took: {disp_time:.2f} seconds\n")
print(f"Crawling {len(urls)} URLs in parallel using dispatcher with no rate limiter...")
no_rl_time, no_rl_results = await crawl_parallel_no_rate_limit(urls)
print(f"Parallel (dispatcher without rate limiter) took: {no_rl_time:.2f} seconds\n")
print("Crawl4ai - Crawling Comparison")
print("--------------------------------------------------------")
print(f"Sequential crawling took: {seq_time:.2f} seconds")
print(f"Parallel (dispatcher with rate limiter) took: {disp_time:.2f} seconds")
print(f"Parallel (dispatcher without rate limiter) took: {no_rl_time:.2f} seconds")
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,86 @@
#!/usr/bin/env python3
"""
Builtin Browser Example
This example demonstrates how to use Crawl4AI's builtin browser feature,
which simplifies the browser management process. With builtin mode:
- No need to manually start or connect to a browser
- No need to manage CDP URLs or browser processes
- Automatically connects to an existing browser or launches one if needed
- Browser persists between script runs, reducing startup time
- No explicit cleanup or close() calls needed
The example also demonstrates "auto-starting" where you don't need to explicitly
call start() method on the crawler.
"""
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
import time
async def crawl_with_builtin_browser():
"""
Simple example of crawling with the builtin browser.
Key features:
1. browser_mode="builtin" in BrowserConfig
2. No explicit start() call needed
3. No explicit close() needed
"""
print("\n=== Crawl4AI Builtin Browser Example ===\n")
# Create a browser configuration with builtin mode
browser_config = BrowserConfig(
browser_mode="builtin", # This is the key setting!
headless=True # Can run headless for background operation
)
# Create crawler run configuration
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS, # Skip cache for this demo
screenshot=True, # Take a screenshot
verbose=True # Show verbose logging
)
# Create the crawler instance
# Note: We don't need to use "async with" context manager
crawler = AsyncWebCrawler(config=browser_config)
# Start crawling several URLs - no explicit start() needed!
# The crawler will automatically connect to the builtin browser
print("\n➡️ Crawling first URL...")
t0 = time.time()
result1 = await crawler.arun(
url="https://crawl4ai.com",
config=crawler_config
)
t1 = time.time()
print(f"✅ First URL crawled in {t1-t0:.2f} seconds")
print(f" Got {len(result1.markdown.raw_markdown)} characters of content")
print(f" Title: {result1.metadata.get('title', 'No title')}")
# Try another URL - the browser is already running, so this should be faster
print("\n➡️ Crawling second URL...")
t0 = time.time()
result2 = await crawler.arun(
url="https://example.com",
config=crawler_config
)
t1 = time.time()
print(f"✅ Second URL crawled in {t1-t0:.2f} seconds")
print(f" Got {len(result2.markdown.raw_markdown)} characters of content")
print(f" Title: {result2.metadata.get('title', 'No title')}")
# The builtin browser continues running in the background
# No need to explicitly close it
print("\n🔄 The builtin browser remains running for future use")
print(" You can use 'crwl browser status' to check its status")
print(" or 'crwl browser stop' to stop it when completely done")
async def main():
"""Run the example"""
await crawl_with_builtin_browser()
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,209 @@
"""
CrawlerMonitor Example
This example demonstrates how to use the CrawlerMonitor component
to visualize and track web crawler operations in real-time.
"""
import time
import uuid
import random
import threading
from crawl4ai.components.crawler_monitor import CrawlerMonitor
from crawl4ai.models import CrawlStatus
def simulate_webcrawler_operations(monitor, num_tasks=20):
"""
Simulates a web crawler's operations with multiple tasks and different states.
Args:
monitor: The CrawlerMonitor instance
num_tasks: Number of tasks to simulate
"""
print(f"Starting simulation with {num_tasks} tasks...")
# Create and register all tasks first
task_ids = []
for i in range(num_tasks):
task_id = str(uuid.uuid4())
url = f"https://example.com/page{i}"
monitor.add_task(task_id, url)
task_ids.append((task_id, url))
# Small delay between task creation
time.sleep(0.2)
# Process tasks with a variety of different behaviors
threads = []
for i, (task_id, url) in enumerate(task_ids):
# Create a thread for each task
thread = threading.Thread(
target=process_task,
args=(monitor, task_id, url, i)
)
thread.daemon = True
threads.append(thread)
# Start threads in batches to simulate concurrent processing
batch_size = 4 # Process 4 tasks at a time
for i in range(0, len(threads), batch_size):
batch = threads[i:i+batch_size]
for thread in batch:
thread.start()
time.sleep(0.5) # Stagger thread start times
# Wait a bit before starting next batch
time.sleep(random.uniform(1.0, 3.0))
# Update queue statistics
update_queue_stats(monitor)
# Simulate memory pressure changes
active_threads = [t for t in threads if t.is_alive()]
if len(active_threads) > 8:
monitor.update_memory_status("CRITICAL")
elif len(active_threads) > 4:
monitor.update_memory_status("PRESSURE")
else:
monitor.update_memory_status("NORMAL")
# Wait for all threads to complete
for thread in threads:
thread.join()
# Final updates
update_queue_stats(monitor)
monitor.update_memory_status("NORMAL")
print("Simulation completed!")
def process_task(monitor, task_id, url, index):
"""Simulate processing of a single task."""
# Tasks start in queued state (already added)
# Simulate waiting in queue
wait_time = random.uniform(0.5, 3.0)
time.sleep(wait_time)
# Start processing - move to IN_PROGRESS
monitor.update_task(
task_id=task_id,
status=CrawlStatus.IN_PROGRESS,
start_time=time.time(),
wait_time=wait_time
)
# Simulate task processing with memory usage changes
total_process_time = random.uniform(2.0, 10.0)
step_time = total_process_time / 5 # Update in 5 steps
for step in range(5):
# Simulate increasing then decreasing memory usage
if step < 3: # First 3 steps - increasing
memory_usage = random.uniform(5.0, 20.0) * (step + 1)
else: # Last 2 steps - decreasing
memory_usage = random.uniform(5.0, 20.0) * (5 - step)
# Update peak memory if this is higher
peak = max(memory_usage, monitor.get_task_stats(task_id).get("peak_memory", 0))
monitor.update_task(
task_id=task_id,
memory_usage=memory_usage,
peak_memory=peak
)
time.sleep(step_time)
# Determine final state - 80% success, 20% failure
if index % 5 == 0: # Every 5th task fails
monitor.update_task(
task_id=task_id,
status=CrawlStatus.FAILED,
end_time=time.time(),
memory_usage=0.0,
error_message="Connection timeout"
)
else:
monitor.update_task(
task_id=task_id,
status=CrawlStatus.COMPLETED,
end_time=time.time(),
memory_usage=0.0
)
def update_queue_stats(monitor):
"""Update queue statistics based on current tasks."""
task_stats = monitor.get_all_task_stats()
# Count queued tasks
queued_tasks = [
stats for stats in task_stats.values()
if stats["status"] == CrawlStatus.QUEUED.name
]
total_queued = len(queued_tasks)
if total_queued > 0:
current_time = time.time()
# Calculate wait times
wait_times = [
current_time - stats.get("enqueue_time", current_time)
for stats in queued_tasks
]
highest_wait_time = max(wait_times) if wait_times else 0.0
avg_wait_time = sum(wait_times) / len(wait_times) if wait_times else 0.0
else:
highest_wait_time = 0.0
avg_wait_time = 0.0
# Update monitor
monitor.update_queue_statistics(
total_queued=total_queued,
highest_wait_time=highest_wait_time,
avg_wait_time=avg_wait_time
)
def main():
# Initialize the monitor
monitor = CrawlerMonitor(
urls_total=20, # Total URLs to process
refresh_rate=0.5, # Update UI twice per second
enable_ui=True, # Enable terminal UI
max_width=120 # Set maximum width to 120 characters
)
# Start the monitor
monitor.start()
try:
# Run simulation
simulate_webcrawler_operations(monitor)
# Keep monitor running a bit to see final state
print("Waiting to view final state...")
time.sleep(5)
except KeyboardInterrupt:
print("\nExample interrupted by user")
finally:
# Stop the monitor
monitor.stop()
print("Example completed!")
# Print some statistics
summary = monitor.get_summary()
print("\nCrawler Statistics Summary:")
print(f"Total URLs: {summary['urls_total']}")
print(f"Completed: {summary['urls_completed']}")
print(f"Completion percentage: {summary['completion_percentage']:.1f}%")
print(f"Peak memory usage: {summary['peak_memory_percent']:.1f}%")
# Print task status counts
status_counts = summary['status_counts']
print("\nTask Status Counts:")
for status, count in status_counts.items():
print(f" {status}: {count}")
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,443 @@
"""
Crawl4AI Crypto Trading Analysis Demo
Author: Unclecode
Date: 2024-03-15
This script demonstrates advanced crypto market analysis using:
1. Web scraping of real-time CoinMarketCap data
2. Smart table extraction with layout detection
3. Hedge fund-grade financial metrics
4. Interactive visualizations for trading signals
Key Features:
- Volume Anomaly Detection: Finds unusual trading activity
- Liquidity Power Score: Identifies easily tradable assets
- Volatility-Weighted Momentum: Surfaces sustainable trends
- Smart Money Signals: Algorithmic buy/hold recommendations
"""
import asyncio
import pandas as pd
import numpy as np
import re
import plotly.express as px
from crawl4ai import (
AsyncWebCrawler,
BrowserConfig,
CrawlerRunConfig,
CacheMode,
LXMLWebScrapingStrategy,
)
from crawl4ai import CrawlResult
from typing import List
__current_dir__ = __file__.rsplit("/", 1)[0]
class CryptoAlphaGenerator:
"""
Advanced crypto analysis engine that transforms raw web data into:
- Volume anomaly flags
- Liquidity scores
- Momentum-risk ratios
- Machine learning-inspired trading signals
Methods:
analyze_tables(): Process raw tables into trading insights
create_visuals(): Generate institutional-grade visualizations
generate_insights(): Create plain English trading recommendations
"""
def clean_data(self, df: pd.DataFrame) -> pd.DataFrame:
"""
Convert crypto market data to machine-readable format.
Handles currency symbols, units (B=Billions), and percentage values.
"""
# Make a copy to avoid SettingWithCopyWarning
df = df.copy()
# Clean Price column (handle currency symbols)
df["Price"] = df["Price"].astype(str).str.replace("[^\d.]", "", regex=True).astype(float)
# Handle Market Cap and Volume, considering both Billions and Trillions
def convert_large_numbers(value):
if pd.isna(value):
return float('nan')
value = str(value)
multiplier = 1
if 'B' in value:
multiplier = 1e9
elif 'T' in value:
multiplier = 1e12
# Handle cases where the value might already be numeric
cleaned_value = re.sub(r"[^\d.]", "", value)
return float(cleaned_value) * multiplier if cleaned_value else float('nan')
df["Market Cap"] = df["Market Cap"].apply(convert_large_numbers)
df["Volume(24h)"] = df["Volume(24h)"].apply(convert_large_numbers)
# Convert percentages to decimal values
for col in ["1h %", "24h %", "7d %"]:
if col in df.columns:
# First ensure it's string, then clean
df[col] = (
df[col].astype(str)
.str.replace("%", "")
.str.replace(",", ".")
.replace("nan", np.nan)
)
df[col] = pd.to_numeric(df[col], errors='coerce') / 100
return df
def calculate_metrics(self, df: pd.DataFrame) -> pd.DataFrame:
"""
Compute advanced trading metrics used by quantitative funds:
        1. Volume/Market Cap Ratio - Measures liquidity efficiency
           (a high ratio signals underestimated attention; on a small cap it also implies higher growth potential)
        2. Volatility Score - Risk-adjusted momentum potential; shows how stable the trend is
           (std of 1h/24h/7d returns)
        3. Momentum Score - Weighted average of returns; shows how strong the trend is
           (1h:30% + 24h:50% + 7d:20%)
        4. Volume Anomaly - Flags unusual trading activity: coins whose 24h volume
           exceeds 3x the median volume (potential insider buying or news)
"""
# Liquidity Metrics
df["Volume/Market Cap Ratio"] = df["Volume(24h)"] / df["Market Cap"]
# Risk Metrics
df["Volatility Score"] = df[["1h %", "24h %", "7d %"]].std(axis=1)
# Momentum Metrics
df["Momentum Score"] = df["1h %"] * 0.3 + df["24h %"] * 0.5 + df["7d %"] * 0.2
# Anomaly Detection
median_vol = df["Volume(24h)"].median()
df["Volume Anomaly"] = df["Volume(24h)"] > 3 * median_vol
# Value Flags
# Undervalued Flag - Low market cap and high momentum
# (High growth potential and low attention)
df["Undervalued Flag"] = (df["Market Cap"] < 1e9) & (
df["Momentum Score"] > 0.05
)
# Liquid Giant Flag - High volume/market cap ratio and large market cap
# (High liquidity and large market cap = institutional interest)
df["Liquid Giant"] = (df["Volume/Market Cap Ratio"] > 0.15) & (
df["Market Cap"] > 1e9
)
return df
def generate_insights_simple(self, df: pd.DataFrame) -> str:
"""
Generates an ultra-actionable crypto trading report with:
- Risk-tiered opportunities (High/Medium/Low)
- Concrete examples for each trade type
- Entry/exit strategies spelled out
- Visual cues for quick scanning
"""
report = [
"🚀 **CRYPTO TRADING CHEAT SHEET** 🚀",
"*Based on quantitative signals + hedge fund tactics*",
"━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
]
# 1. HIGH-RISK: Undervalued Small-Caps (Momentum Plays)
high_risk = df[df["Undervalued Flag"]].sort_values("Momentum Score", ascending=False)
if not high_risk.empty:
example_coin = high_risk.iloc[0]
report.extend([
"\n🔥 **HIGH-RISK: Rocket Fuel Small-Caps**",
f"*Example Trade:* {example_coin['Name']} (Price: ${example_coin['Price']:.6f})",
"📊 *Why?* Tiny market cap (<$1B) but STRONG momentum (+{:.0f}% last week)".format(example_coin['7d %']*100),
"🎯 *Strategy:*",
"1. Wait for 5-10% dip from recent high (${:.6f} → Buy under ${:.6f})".format(
example_coin['Price'] / (1 - example_coin['24h %']), # Approx recent high
example_coin['Price'] * 0.95
),
"2. Set stop-loss at -10% (${:.6f})".format(example_coin['Price'] * 0.90),
"3. Take profit at +20% (${:.6f})".format(example_coin['Price'] * 1.20),
"⚠️ *Risk Warning:* These can drop 30% fast! Never bet more than 5% of your portfolio.",
"━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
])
# 2. MEDIUM-RISK: Liquid Giants (Swing Trades)
medium_risk = df[df["Liquid Giant"]].sort_values("Volume/Market Cap Ratio", ascending=False)
if not medium_risk.empty:
example_coin = medium_risk.iloc[0]
report.extend([
"\n💎 **MEDIUM-RISK: Liquid Giants (Safe Swing Trades)**",
f"*Example Trade:* {example_coin['Name']} (Market Cap: ${example_coin['Market Cap']/1e9:.1f}B)",
"📊 *Why?* Huge volume (${:.1f}M/day) makes it easy to enter/exit".format(example_coin['Volume(24h)']/1e6),
"🎯 *Strategy:*",
"1. Buy when 24h volume > 15% of market cap (Current: {:.0f}%)".format(example_coin['Volume/Market Cap Ratio']*100),
"2. Hold 1-4 weeks (Big coins trend longer)",
"3. Exit when momentum drops below 5% (Current: {:.0f}%)".format(example_coin['Momentum Score']*100),
"📉 *Pro Tip:* Watch Bitcoin's trend - if BTC drops 5%, these usually follow.",
"━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
])
# 3. LOW-RISK: Stable Momentum (DCA Targets)
low_risk = df[
(df["Momentum Score"] > 0.05) &
(df["Volatility Score"] < 0.03)
].sort_values("Market Cap", ascending=False)
if not low_risk.empty:
example_coin = low_risk.iloc[0]
report.extend([
"\n🛡️ **LOW-RISK: Steady Climbers (DCA & Forget)**",
f"*Example Trade:* {example_coin['Name']} (Volatility: {example_coin['Volatility Score']:.2f}/5)",
"📊 *Why?* Rises steadily (+{:.0f}%/week) with LOW drama".format(example_coin['7d %']*100),
"🎯 *Strategy:*",
"1. Buy small amounts every Tuesday/Friday (DCA)",
"2. Hold for 3+ months (Compound gains work best here)",
"3. Sell 10% at every +25% milestone",
"💰 *Best For:* Long-term investors who hate sleepless nights",
"━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
])
# Volume Spike Alerts
anomalies = df[df["Volume Anomaly"]].sort_values("Volume(24h)", ascending=False)
if not anomalies.empty:
example_coin = anomalies.iloc[0]
report.extend([
"\n🚨 **Volume Spike Alert (Possible News/Whale Action)**",
f"*Coin:* {example_coin['Name']} (Volume: ${example_coin['Volume(24h)']/1e6:.1f}M, usual: ${example_coin['Volume(24h)']/3/1e6:.1f}M)",
"🔍 *Check:* Twitter/CoinGecko for news before trading",
"⚡ *If no news:* Could be insider buying - watch price action:",
"- Break above today's high → Buy with tight stop-loss",
"- Fade back down → Avoid (may be a fakeout)"
])
# Pro Tip Footer
report.append("\n✨ *Pro Tip:* Bookmark this report & check back in 24h to see if signals held up.")
return "\n".join(report)
def generate_insights(self, df: pd.DataFrame) -> str:
"""
Generates a tactical trading report with:
- Top 3 trades per risk level (High/Medium/Low)
- Auto-calculated entry/exit prices
- BTC chart toggle tip
"""
# Filter top candidates for each risk level
high_risk = (
df[df["Undervalued Flag"]]
.sort_values("Momentum Score", ascending=False)
.head(3)
)
medium_risk = (
df[df["Liquid Giant"]]
.sort_values("Volume/Market Cap Ratio", ascending=False)
.head(3)
)
low_risk = (
df[(df["Momentum Score"] > 0.05) & (df["Volatility Score"] < 0.03)]
.sort_values("Momentum Score", ascending=False)
.head(3)
)
report = ["# 🎯 Crypto Trading Tactical Report (Top 3 Per Risk Tier)"]
# 1. High-Risk Trades (Small-Cap Momentum)
if not high_risk.empty:
report.append("\n## 🔥 HIGH RISK: Small-Cap Rockets (5-50% Potential)")
for i, coin in high_risk.iterrows():
current_price = coin["Price"]
entry = current_price * 0.95 # -5% dip
stop_loss = current_price * 0.90 # -10%
take_profit = current_price * 1.20 # +20%
report.append(
f"\n### {coin['Name']} (Momentum: {coin['Momentum Score']:.1%})"
f"\n- **Current Price:** ${current_price:.4f}"
f"\n- **Entry:** < ${entry:.4f} (Wait for pullback)"
f"\n- **Stop-Loss:** ${stop_loss:.4f} (-10%)"
f"\n- **Target:** ${take_profit:.4f} (+20%)"
f"\n- **Risk/Reward:** 1:2"
f"\n- **Watch:** Volume spikes above {coin['Volume(24h)']/1e6:.1f}M"
)
# 2. Medium-Risk Trades (Liquid Giants)
if not medium_risk.empty:
report.append("\n## 💎 MEDIUM RISK: Liquid Swing Trades (10-30% Potential)")
for i, coin in medium_risk.iterrows():
current_price = coin["Price"]
entry = current_price * 0.98 # -2% dip
stop_loss = current_price * 0.94 # -6%
take_profit = current_price * 1.15 # +15%
report.append(
f"\n### {coin['Name']} (Liquidity Score: {coin['Volume/Market Cap Ratio']:.1%})"
f"\n- **Current Price:** ${current_price:.2f}"
f"\n- **Entry:** < ${entry:.2f} (Buy slight dips)"
f"\n- **Stop-Loss:** ${stop_loss:.2f} (-6%)"
f"\n- **Target:** ${take_profit:.2f} (+15%)"
f"\n- **Hold Time:** 1-3 weeks"
f"\n- **Key Metric:** Volume/Cap > 15%"
)
# 3. Low-Risk Trades (Stable Momentum)
if not low_risk.empty:
report.append("\n## 🛡️ LOW RISK: Steady Gainers (5-15% Potential)")
for i, coin in low_risk.iterrows():
current_price = coin["Price"]
entry = current_price * 0.99 # -1% dip
stop_loss = current_price * 0.97 # -3%
take_profit = current_price * 1.10 # +10%
report.append(
f"\n### {coin['Name']} (Stability Score: {1/coin['Volatility Score']:.1f}x)"
f"\n- **Current Price:** ${current_price:.2f}"
f"\n- **Entry:** < ${entry:.2f} (Safe zone)"
f"\n- **Stop-Loss:** ${stop_loss:.2f} (-3%)"
f"\n- **Target:** ${take_profit:.2f} (+10%)"
f"\n- **DCA Suggestion:** 3 buys over 72 hours"
)
# Volume Anomaly Alert
anomalies = df[df["Volume Anomaly"]].sort_values("Volume(24h)", ascending=False).head(2)
if not anomalies.empty:
report.append("\n⚠️ **Volume Spike Alerts**")
for i, coin in anomalies.iterrows():
report.append(
f"- {coin['Name']}: Volume {coin['Volume(24h)']/1e6:.1f}M "
f"(3x normal) | Price moved: {coin['24h %']:.1%}"
)
# Pro Tip
report.append(
"\n📊 **Chart Hack:** Hide BTC in visuals:\n"
"```python\n"
"# For 3D Map:\n"
"fig.update_traces(visible=False, selector={'name':'Bitcoin'})\n"
"# For Treemap:\n"
"df = df[df['Name'] != 'Bitcoin']\n"
"```"
)
return "\n".join(report)
def create_visuals(self, df: pd.DataFrame) -> dict:
"""Enhanced visuals with BTC toggle support"""
# 3D Market Map (with BTC toggle hint)
fig1 = px.scatter_3d(
df,
x="Market Cap",
y="Volume/Market Cap Ratio",
z="Momentum Score",
color="Name", # Color by name to allow toggling
hover_name="Name",
title="Market Map (Toggle BTC in legend to focus on alts)",
log_x=True
)
fig1.update_traces(
marker=dict(size=df["Volatility Score"]*100 + 5) # Dynamic sizing
)
# Liquidity Tree (exclude BTC if too dominant)
if df[df["Name"] == "BitcoinBTC"]["Market Cap"].values[0] > df["Market Cap"].median() * 10:
df = df[df["Name"] != "BitcoinBTC"]
fig2 = px.treemap(
df,
path=["Name"],
values="Market Cap",
color="Volume/Market Cap Ratio",
title="Liquidity Tree (BTC auto-removed if dominant)"
)
return {"market_map": fig1, "liquidity_tree": fig2}
async def main():
"""
Main execution flow:
1. Configure headless browser for scraping
2. Extract live crypto market data
3. Clean and analyze using hedge fund models
4. Generate visualizations and insights
5. Output professional trading report
"""
# Configure browser with anti-detection features
browser_config = BrowserConfig(
headless=False,
)
# Initialize crawler with smart table detection
crawler = AsyncWebCrawler(config=browser_config)
await crawler.start()
try:
# Set up scraping parameters
crawl_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
table_score_threshold=8, # Strict table detection
keep_data_attributes=True,
scraping_strategy=LXMLWebScrapingStrategy(),
scan_full_page=True,
scroll_delay=0.2,
)
# # Execute market data extraction
# results: List[CrawlResult] = await crawler.arun(
# url="https://coinmarketcap.com/?page=1", config=crawl_config
# )
# # Process results
# raw_df = pd.DataFrame()
# for result in results:
# if result.success and result.media["tables"]:
# # Extract primary market table
# # DataFrame
# raw_df = pd.DataFrame(
# result.media["tables"][0]["rows"],
# columns=result.media["tables"][0]["headers"],
# )
# break
# This is for debugging only
# ////// Remove this in production from here..
# Save raw data for debugging
# raw_df.to_csv(f"{__current_dir__}/tmp/raw_crypto_data.csv", index=False)
# print("🔍 Raw data saved to 'raw_crypto_data.csv'")
# Read from file for debugging
raw_df = pd.read_csv(f"{__current_dir__}/tmp/raw_crypto_data.csv")
# ////// ..to here
# Select top 20
raw_df = raw_df.head(50)
# Remove "Buy" from name
raw_df["Name"] = raw_df["Name"].str.replace("Buy", "")
# Initialize analysis engine
analyzer = CryptoAlphaGenerator()
clean_df = analyzer.clean_data(raw_df)
analyzed_df = analyzer.calculate_metrics(clean_df)
# Generate outputs
visuals = analyzer.create_visuals(analyzed_df)
insights = analyzer.generate_insights(analyzed_df)
# Save visualizations
visuals["market_map"].write_html(f"{__current_dir__}/tmp/market_map.html")
visuals["liquidity_tree"].write_html(f"{__current_dir__}/tmp/liquidity_tree.html")
# Display results
print("🔑 Key Trading Insights:")
print(insights)
print("\n📊 Open 'market_map.html' for interactive analysis")
print("\n📊 Open 'liquidity_tree.html' for interactive analysis")
finally:
await crawler.close()
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -39,7 +39,7 @@ async def memory_adaptive_with_rate_limit(urls, browser_config, run_config):
start = time.perf_counter()
async with AsyncWebCrawler(config=browser_config) as crawler:
dispatcher = MemoryAdaptiveDispatcher(
-            memory_threshold_percent=70.0,
+            memory_threshold_percent=95.0,
max_session_permit=10,
rate_limiter=RateLimiter(
base_delay=(1.0, 2.0), max_delay=30.0, max_retries=2

View File

@@ -73,7 +73,7 @@ async def test_stream_crawl(session, token: str):
# "https://news.ycombinator.com/news"
],
"browser_config": {"headless": True, "viewport": {"width": 1200}},
"crawler_config": {"stream": True, "cache_mode": "aggressive"}
"crawler_config": {"stream": True, "cache_mode": "bypass"}
}
headers = {"Authorization": f"Bearer {token}"}
print(f"\nTesting Streaming Crawl: {url}")

View File

@@ -11,7 +11,7 @@ import asyncio
import os
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
-from crawl4ai.async_configs import LlmConfig
+from crawl4ai import LLMConfig
from crawl4ai.extraction_strategy import (
LLMExtractionStrategy,
JsonCssExtractionStrategy,
@@ -61,19 +61,19 @@ async def main():
# 1. LLM Extraction with different input formats
markdown_strategy = LLMExtractionStrategy(
-        llmConfig = LlmConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY")),
+        llm_config = LLMConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY")),
instruction="Extract product information including name, price, and description",
)
html_strategy = LLMExtractionStrategy(
input_format="html",
-        llmConfig=LlmConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY")),
+        llm_config=LLMConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY")),
instruction="Extract product information from HTML including structured data",
)
fit_markdown_strategy = LLMExtractionStrategy(
input_format="fit_markdown",
-        llmConfig=LlmConfig(provider="openai/gpt-4o-mini",api_token=os.getenv("OPENAI_API_KEY")),
+        llm_config=LLMConfig(provider="openai/gpt-4o-mini",api_token=os.getenv("OPENAI_API_KEY")),
instruction="Extract product information from cleaned markdown",
)

View File

@@ -9,6 +9,26 @@ from crawl4ai import (
CrawlResult
)
async def example_cdp():
browser_conf = BrowserConfig(
headless=False,
cdp_url="http://localhost:9223"
)
crawler_config = CrawlerRunConfig(
session_id="test",
js_code = """(() => { return {"result": "Hello World!"} })()""",
js_only=True
)
async with AsyncWebCrawler(
config=browser_conf,
verbose=True,
) as crawler:
result : CrawlResult = await crawler.arun(
url="https://www.helloworld.org",
config=crawler_config,
)
print(result.js_execution_result)
async def main():
browser_config = BrowserConfig(headless=True, verbose=True)
@@ -16,18 +36,15 @@ async def main():
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
markdown_generator=DefaultMarkdownGenerator(
-            # content_filter=PruningContentFilter(
-            #     threshold=0.48, threshold_type="fixed", min_word_threshold=0
-            # )
+            content_filter=PruningContentFilter(
+                threshold=0.48, threshold_type="fixed", min_word_threshold=0
+            )
),
)
result : CrawlResult = await crawler.arun(
# url="https://www.helloworld.org", config=crawler_config
url="https://www.kidocode.com", config=crawler_config
url="https://www.helloworld.org", config=crawler_config
)
print(result.markdown.raw_markdown[:500])
# print(result.model_dump())
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -1,4 +1,4 @@
-from crawl4ai.async_configs import LlmConfig
+from crawl4ai import LLMConfig
from crawl4ai import AsyncWebCrawler, LLMExtractionStrategy
import asyncio
import os
@@ -23,7 +23,7 @@ async def main():
word_count_threshold=1,
extraction_strategy=LLMExtractionStrategy(
# provider= "openai/gpt-4o", api_token = os.getenv('OPENAI_API_KEY'),
llmConfig=LlmConfig(provider="groq/llama-3.1-70b-versatile", api_token=os.getenv("GROQ_API_KEY")),
llm_config=LLMConfig(provider="groq/llama-3.1-70b-versatile", api_token=os.getenv("GROQ_API_KEY")),
schema=OpenAIModelFee.model_json_schema(),
extraction_type="schema",
instruction="From the crawled content, extract all mentioned model names along with their "

View File

@@ -1,7 +1,7 @@
import os
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
-from crawl4ai.async_configs import LlmConfig
+from crawl4ai import LLMConfig
from crawl4ai.content_filter_strategy import LLMContentFilter
async def test_llm_filter():
@@ -23,7 +23,7 @@ async def test_llm_filter():
# Initialize LLM filter with focused instruction
filter = LLMContentFilter(
llmConfig=LlmConfig(provider="openai/gpt-4o", api_token=os.getenv('OPENAI_API_KEY')),
llm_config=LLMConfig(provider="openai/gpt-4o", api_token=os.getenv('OPENAI_API_KEY')),
instruction="""
Focus on extracting the core educational content about Python classes.
Include:
@@ -43,7 +43,7 @@ async def test_llm_filter():
)
filter = LLMContentFilter(
llmConfig=LlmConfig(provider="openai/gpt-4o",api_token=os.getenv('OPENAI_API_KEY')),
llm_config=LLMConfig(provider="openai/gpt-4o",api_token=os.getenv('OPENAI_API_KEY')),
chunk_token_threshold=2 ** 12 * 2, # 2048 * 2
ignore_cache = True,
instruction="""

View File

@@ -0,0 +1,477 @@
import asyncio
import json
import os
import base64
from pathlib import Path
from typing import List, Dict, Any
from datetime import datetime
from urllib.parse import urlparse  # used for domain analysis below
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode, CrawlResult
from crawl4ai import BrowserConfig
__cur_dir__ = Path(__file__).parent
# Create temp directory if it doesn't exist
os.makedirs(os.path.join(__cur_dir__, "tmp"), exist_ok=True)
async def demo_basic_network_capture():
"""Basic network request capturing example"""
print("\n=== 1. Basic Network Request Capturing ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
wait_until="networkidle" # Wait for network to be idle
)
result = await crawler.arun(
url="https://example.com/",
config=config
)
if result.success and result.network_requests:
print(f"Captured {len(result.network_requests)} network events")
# Count by event type
event_types = {}
for req in result.network_requests:
event_type = req.get("event_type", "unknown")
event_types[event_type] = event_types.get(event_type, 0) + 1
print("Event types:")
for event_type, count in event_types.items():
print(f" - {event_type}: {count}")
# Show a sample request and response
request = next((r for r in result.network_requests if r.get("event_type") == "request"), None)
response = next((r for r in result.network_requests if r.get("event_type") == "response"), None)
if request:
print("\nSample request:")
print(f" URL: {request.get('url')}")
print(f" Method: {request.get('method')}")
print(f" Headers: {list(request.get('headers', {}).keys())}")
if response:
print("\nSample response:")
print(f" URL: {response.get('url')}")
print(f" Status: {response.get('status')} {response.get('status_text', '')}")
print(f" Headers: {list(response.get('headers', {}).keys())}")
async def demo_basic_console_capture():
"""Basic console message capturing example"""
print("\n=== 2. Basic Console Message Capturing ===")
# Create a simple HTML file with console messages
html_file = os.path.join(__cur_dir__, "tmp", "console_test.html")
with open(html_file, "w") as f:
f.write("""
<!DOCTYPE html>
<html>
<head>
<title>Console Test</title>
</head>
<body>
<h1>Console Message Test</h1>
<script>
console.log("This is a basic log message");
console.info("This is an info message");
console.warn("This is a warning message");
console.error("This is an error message");
// Generate an error
try {
nonExistentFunction();
} catch (e) {
console.error("Caught error:", e);
}
</script>
</body>
</html>
""")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_console_messages=True,
wait_until="networkidle" # Wait to make sure all scripts execute
)
result = await crawler.arun(
url=f"file://{html_file}",
config=config
)
if result.success and result.console_messages:
print(f"Captured {len(result.console_messages)} console messages")
# Count by message type
message_types = {}
for msg in result.console_messages:
msg_type = msg.get("type", "unknown")
message_types[msg_type] = message_types.get(msg_type, 0) + 1
print("Message types:")
for msg_type, count in message_types.items():
print(f" - {msg_type}: {count}")
# Show all messages
print("\nAll console messages:")
for i, msg in enumerate(result.console_messages, 1):
print(f" {i}. [{msg.get('type', 'unknown')}] {msg.get('text', '')}")
async def demo_combined_capture():
"""Capturing both network requests and console messages"""
print("\n=== 3. Combined Network and Console Capture ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True,
wait_until="networkidle"
)
result = await crawler.arun(
url="https://httpbin.org/html",
config=config
)
if result.success:
network_count = len(result.network_requests) if result.network_requests else 0
console_count = len(result.console_messages) if result.console_messages else 0
print(f"Captured {network_count} network events and {console_count} console messages")
# Save the captured data to a JSON file for analysis
output_file = os.path.join(__cur_dir__, "tmp", "capture_data.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"timestamp": datetime.now().isoformat(),
"network_requests": result.network_requests,
"console_messages": result.console_messages
}, f, indent=2)
print(f"Full capture data saved to {output_file}")
async def analyze_spa_network_traffic():
"""Analyze network traffic of a Single-Page Application"""
print("\n=== 4. Analyzing SPA Network Traffic ===")
async with AsyncWebCrawler(config=BrowserConfig(
headless=True,
viewport_width=1280,
viewport_height=800
)) as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True,
# Wait longer to ensure all resources are loaded
wait_until="networkidle",
page_timeout=60000, # 60 seconds
)
result = await crawler.arun(
url="https://weather.com",
config=config
)
if result.success and result.network_requests:
# Extract different types of requests
requests = []
responses = []
failures = []
for event in result.network_requests:
event_type = event.get("event_type")
if event_type == "request":
requests.append(event)
elif event_type == "response":
responses.append(event)
elif event_type == "request_failed":
failures.append(event)
print(f"Captured {len(requests)} requests, {len(responses)} responses, and {len(failures)} failures")
# Analyze request types
resource_types = {}
for req in requests:
resource_type = req.get("resource_type", "unknown")
resource_types[resource_type] = resource_types.get(resource_type, 0) + 1
print("\nResource types:")
for resource_type, count in sorted(resource_types.items(), key=lambda x: x[1], reverse=True):
print(f" - {resource_type}: {count}")
# Analyze API calls
api_calls = [r for r in requests if "api" in r.get("url", "").lower()]
if api_calls:
print(f"\nDetected {len(api_calls)} API calls:")
for i, call in enumerate(api_calls[:5], 1): # Show first 5
print(f" {i}. {call.get('method')} {call.get('url')}")
if len(api_calls) > 5:
print(f" ... and {len(api_calls) - 5} more")
# Analyze response status codes
status_codes = {}
for resp in responses:
status = resp.get("status", 0)
status_codes[status] = status_codes.get(status, 0) + 1
print("\nResponse status codes:")
for status, count in sorted(status_codes.items()):
print(f" - {status}: {count}")
# Analyze failures
if failures:
print("\nFailed requests:")
for i, failure in enumerate(failures[:5], 1): # Show first 5
print(f" {i}. {failure.get('url')} - {failure.get('failure_text')}")
if len(failures) > 5:
print(f" ... and {len(failures) - 5} more")
# Check for console errors
if result.console_messages:
errors = [msg for msg in result.console_messages if msg.get("type") == "error"]
if errors:
print(f"\nDetected {len(errors)} console errors:")
for i, error in enumerate(errors[:3], 1): # Show first 3
print(f" {i}. {error.get('text', '')[:100]}...")
if len(errors) > 3:
print(f" ... and {len(errors) - 3} more")
# Save analysis to file
output_file = os.path.join(__cur_dir__, "tmp", "weather_network_analysis.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"timestamp": datetime.now().isoformat(),
"statistics": {
"request_count": len(requests),
"response_count": len(responses),
"failure_count": len(failures),
"resource_types": resource_types,
"status_codes": {str(k): v for k, v in status_codes.items()},
"api_call_count": len(api_calls),
"console_error_count": len(errors) if result.console_messages else 0
},
"network_requests": result.network_requests,
"console_messages": result.console_messages
}, f, indent=2)
print(f"\nFull analysis saved to {output_file}")
async def demo_security_analysis():
"""Using network capture for security analysis"""
print("\n=== 5. Security Analysis with Network Capture ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True,
wait_until="networkidle"
)
# A site that makes multiple third-party requests
result = await crawler.arun(
url="https://www.nytimes.com/",
config=config
)
if result.success and result.network_requests:
print(f"Captured {len(result.network_requests)} network events")
# Extract all domains
domains = set()
for req in result.network_requests:
if req.get("event_type") == "request":
url = req.get("url", "")
                    try:
                        # urlparse is imported at module level (see top of file)
                        domain = urlparse(url).netloc
                        if domain:
                            domains.add(domain)
                    except Exception:
                        pass
print(f"\nDetected requests to {len(domains)} unique domains:")
main_domain = urlparse(result.url).netloc
# Separate first-party vs third-party domains
first_party = [d for d in domains if main_domain in d]
third_party = [d for d in domains if main_domain not in d]
print(f" - First-party domains: {len(first_party)}")
print(f" - Third-party domains: {len(third_party)}")
# Look for potential trackers/analytics
tracking_keywords = ["analytics", "tracker", "pixel", "tag", "stats", "metric", "collect", "beacon"]
potential_trackers = []
for domain in third_party:
if any(keyword in domain.lower() for keyword in tracking_keywords):
potential_trackers.append(domain)
if potential_trackers:
print(f"\nPotential tracking/analytics domains ({len(potential_trackers)}):")
for i, domain in enumerate(sorted(potential_trackers)[:10], 1):
print(f" {i}. {domain}")
if len(potential_trackers) > 10:
print(f" ... and {len(potential_trackers) - 10} more")
# Check for insecure (HTTP) requests
insecure_requests = [
req.get("url") for req in result.network_requests
if req.get("event_type") == "request" and req.get("url", "").startswith("http://")
]
if insecure_requests:
print(f"\nWarning: Found {len(insecure_requests)} insecure (HTTP) requests:")
for i, url in enumerate(insecure_requests[:5], 1):
print(f" {i}. {url}")
if len(insecure_requests) > 5:
print(f" ... and {len(insecure_requests) - 5} more")
# Save security analysis to file
output_file = os.path.join(__cur_dir__, "tmp", "security_analysis.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"main_domain": main_domain,
"timestamp": datetime.now().isoformat(),
"analysis": {
"total_requests": len([r for r in result.network_requests if r.get("event_type") == "request"]),
"unique_domains": len(domains),
"first_party_domains": first_party,
"third_party_domains": third_party,
"potential_trackers": potential_trackers,
"insecure_requests": insecure_requests
}
}, f, indent=2)
print(f"\nFull security analysis saved to {output_file}")
async def demo_performance_analysis():
"""Using network capture for performance analysis"""
print("\n=== 6. Performance Analysis with Network Capture ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
page_timeout=60 * 2 * 1000 # 120 seconds
)
result = await crawler.arun(
url="https://www.cnn.com/",
config=config
)
if result.success and result.network_requests:
# Filter only response events with timing information
responses_with_timing = [
r for r in result.network_requests
if r.get("event_type") == "response" and r.get("request_timing")
]
if responses_with_timing:
print(f"Analyzing timing for {len(responses_with_timing)} network responses")
# Group by resource type
resource_timings = {}
for resp in responses_with_timing:
url = resp.get("url", "")
timing = resp.get("request_timing", {})
# Determine resource type from URL extension
ext = url.split(".")[-1].lower() if "." in url.split("/")[-1] else "unknown"
if ext in ["jpg", "jpeg", "png", "gif", "webp", "svg", "ico"]:
resource_type = "image"
elif ext in ["js"]:
resource_type = "javascript"
elif ext in ["css"]:
resource_type = "css"
elif ext in ["woff", "woff2", "ttf", "otf", "eot"]:
resource_type = "font"
else:
resource_type = "other"
if resource_type not in resource_timings:
resource_timings[resource_type] = []
# Calculate request duration if timing information is available
if isinstance(timing, dict) and "requestTime" in timing and "receiveHeadersEnd" in timing:
# Convert to milliseconds
duration = (timing["receiveHeadersEnd"] - timing["requestTime"]) * 1000
resource_timings[resource_type].append({
"url": url,
"duration_ms": duration
})
if isinstance(timing, dict) and "requestStart" in timing and "responseStart" in timing and "startTime" in timing:
# Convert to milliseconds
duration = (timing["responseStart"] - timing["requestStart"]) * 1000
resource_timings[resource_type].append({
"url": url,
"duration_ms": duration
})
# Calculate statistics for each resource type
print("\nPerformance by resource type:")
for resource_type, timings in resource_timings.items():
if timings:
durations = [t["duration_ms"] for t in timings]
avg_duration = sum(durations) / len(durations)
max_duration = max(durations)
slowest_resource = next(t["url"] for t in timings if t["duration_ms"] == max_duration)
print(f" {resource_type.upper()}:")
print(f" - Count: {len(timings)}")
print(f" - Avg time: {avg_duration:.2f} ms")
print(f" - Max time: {max_duration:.2f} ms")
print(f" - Slowest: {slowest_resource}")
# Identify the slowest resources overall
all_timings = []
for resource_type, timings in resource_timings.items():
for timing in timings:
timing["type"] = resource_type
all_timings.append(timing)
all_timings.sort(key=lambda x: x["duration_ms"], reverse=True)
print("\nTop 5 slowest resources:")
for i, timing in enumerate(all_timings[:5], 1):
print(f" {i}. [{timing['type']}] {timing['url']} - {timing['duration_ms']:.2f} ms")
# Save performance analysis to file
output_file = os.path.join(__cur_dir__, "tmp", "performance_analysis.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"timestamp": datetime.now().isoformat(),
"resource_timings": resource_timings,
"slowest_resources": all_timings[:10] # Save top 10
}, f, indent=2)
print(f"\nFull performance analysis saved to {output_file}")
async def main():
"""Run all demo functions sequentially"""
print("=== Network and Console Capture Examples ===")
# Make sure tmp directory exists
os.makedirs(os.path.join(__cur_dir__, "tmp"), exist_ok=True)
# Run basic examples
# await demo_basic_network_capture()
await demo_basic_console_capture()
# await demo_combined_capture()
# Run advanced examples
# await analyze_spa_network_traffic()
# await demo_security_analysis()
# await demo_performance_analysis()
print("\n=== Examples Complete ===")
print(f"Check the tmp directory for output files: {os.path.join(__cur_dir__, 'tmp')}")
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -1,6 +1,6 @@
import os, sys
from crawl4ai.async_configs import LlmConfig
from crawl4ai import LLMConfig
sys.path.append(
os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
@@ -211,7 +211,7 @@ async def extract_structured_data_using_llm(
word_count_threshold=1,
page_timeout=80000,
extraction_strategy=LLMExtractionStrategy(
llmConfig=LlmConfig(provider=provider,api_token=api_token),
llm_config=LLMConfig(provider=provider,api_token=api_token),
schema=OpenAIModelFee.model_json_schema(),
extraction_type="schema",
instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.

View File

@@ -1,675 +0,0 @@
import os, sys
from crawl4ai.async_configs import LlmConfig
# append parent directory to system path
sys.path.append(
os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
)
os.environ["FIRECRAWL_API_KEY"] = "fc-84b370ccfad44beabc686b38f1769692"
import asyncio
# import nest_asyncio
# nest_asyncio.apply()
import time
import json
import os
import re
from typing import Dict, List
from bs4 import BeautifulSoup
from pydantic import BaseModel, Field
from crawl4ai import AsyncWebCrawler, CacheMode
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
from crawl4ai.content_filter_strategy import PruningContentFilter
from crawl4ai.extraction_strategy import (
JsonCssExtractionStrategy,
LLMExtractionStrategy,
)
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
print("Crawl4AI: Advanced Web Crawling and Data Extraction")
print("GitHub Repository: https://github.com/unclecode/crawl4ai")
print("Twitter: @unclecode")
print("Website: https://crawl4ai.com")
async def simple_crawl():
print("\n--- Basic Usage ---")
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500]) # Print first 500 characters
async def simple_example_with_running_js_code():
print("\n--- Executing JavaScript and Using CSS Selectors ---")
# New code to handle the wait_for parameter
wait_for = """() => {
return Array.from(document.querySelectorAll('article.tease-card')).length > 10;
}"""
# wait_for can be also just a css selector
# wait_for = "article.tease-card:nth-child(10)"
async with AsyncWebCrawler(verbose=True) as crawler:
js_code = [
"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"
]
result = await crawler.arun(
url="https://www.nbcnews.com/business",
js_code=js_code,
# wait_for=wait_for,
cache_mode=CacheMode.BYPASS,
)
print(result.markdown[:500]) # Print first 500 characters
async def simple_example_with_css_selector():
print("\n--- Using CSS Selectors ---")
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business",
css_selector=".wide-tease-item__description",
cache_mode=CacheMode.BYPASS,
)
print(result.markdown[:500]) # Print first 500 characters
async def use_proxy():
print("\n--- Using a Proxy ---")
print(
"Note: Replace 'http://your-proxy-url:port' with a working proxy to run this example."
)
# Replace the proxy URL below with a working proxy before running
async with AsyncWebCrawler(
verbose=True, proxy="http://your-proxy-url:port"
) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", cache_mode=CacheMode.BYPASS
)
if result.success:
print(result.markdown[:500]) # Print first 500 characters
async def capture_and_save_screenshot(url: str, output_path: str):
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url=url, screenshot=True, cache_mode=CacheMode.BYPASS
)
if result.success and result.screenshot:
import base64
# Decode the base64 screenshot data
screenshot_data = base64.b64decode(result.screenshot)
# Save the screenshot as a JPEG file
with open(output_path, "wb") as f:
f.write(screenshot_data)
print(f"Screenshot saved successfully to {output_path}")
else:
print("Failed to capture screenshot")
class OpenAIModelFee(BaseModel):
model_name: str = Field(..., description="Name of the OpenAI model.")
input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
output_fee: str = Field(
..., description="Fee for output token for the OpenAI model."
)
async def extract_structured_data_using_llm(
provider: str, api_token: str = None, extra_headers: Dict[str, str] = None
):
print(f"\n--- Extracting Structured Data with {provider} ---")
if api_token is None and provider != "ollama":
print(f"API token is required for {provider}. Skipping this example.")
return
# extra_args = {}
extra_args = {
"temperature": 0,
"top_p": 0.9,
"max_tokens": 2000,
# any other supported parameters for litellm
}
if extra_headers:
extra_args["extra_headers"] = extra_headers
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url="https://openai.com/api/pricing/",
word_count_threshold=1,
extraction_strategy=LLMExtractionStrategy(
llmConfig=LlmConfig(provider=provider,api_token=api_token),
schema=OpenAIModelFee.model_json_schema(),
extraction_type="schema",
instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
Do not miss any models in the entire content. One extracted model JSON format should look like this:
{"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}.""",
extra_args=extra_args,
),
cache_mode=CacheMode.BYPASS,
)
print(result.extracted_content)
async def extract_structured_data_using_css_extractor():
print("\n--- Using JsonCssExtractionStrategy for Fast Structured Output ---")
schema = {
"name": "KidoCode Courses",
"baseSelector": "section.charge-methodology .w-tab-content > div",
"fields": [
{
"name": "section_title",
"selector": "h3.heading-50",
"type": "text",
},
{
"name": "section_description",
"selector": ".charge-content",
"type": "text",
},
{
"name": "course_name",
"selector": ".text-block-93",
"type": "text",
},
{
"name": "course_description",
"selector": ".course-content-text",
"type": "text",
},
{
"name": "course_icon",
"selector": ".image-92",
"type": "attribute",
"attribute": "src",
},
],
}
async with AsyncWebCrawler(headless=True, verbose=True) as crawler:
# Create the JavaScript that handles clicking multiple times
js_click_tabs = """
(async () => {
const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div");
for(let tab of tabs) {
// scroll to the tab
tab.scrollIntoView();
tab.click();
// Wait for content to load and animations to complete
await new Promise(r => setTimeout(r, 500));
}
})();
"""
result = await crawler.arun(
url="https://www.kidocode.com/degrees/technology",
extraction_strategy=JsonCssExtractionStrategy(schema, verbose=True),
js_code=[js_click_tabs],
cache_mode=CacheMode.BYPASS,
)
courses = json.loads(result.extracted_content)
print(f"Successfully extracted {len(courses)} course entries")
print(json.dumps(courses[0], indent=2))
# Advanced Session-Based Crawling with Dynamic Content 🔄
async def crawl_dynamic_content_pages_method_1():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
first_commit = ""
async def on_execution_started(page):
nonlocal first_commit
try:
while True:
await page.wait_for_selector("li.Box-sc-g0xbh4-0 h4")
commit = await page.query_selector("li.Box-sc-g0xbh4-0 h4")
commit = await commit.evaluate("(element) => element.textContent")
commit = re.sub(r"\s+", "", commit)
if commit and commit != first_commit:
first_commit = commit
break
await asyncio.sleep(0.5)
except Exception as e:
print(f"Warning: New content didn't appear after JavaScript execution: {e}")
async with AsyncWebCrawler(verbose=True) as crawler:
crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
js_next_page = """
(() => {
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
})();
"""
for page in range(3): # Crawl 3 pages
result = await crawler.arun(
url=url,
session_id=session_id,
css_selector="li.Box-sc-g0xbh4-0",
js=js_next_page if page > 0 else None,
cache_mode=CacheMode.BYPASS,
js_only=page > 0,
headless=False,
)
assert result.success, f"Failed to crawl page {page + 1}"
soup = BeautifulSoup(result.cleaned_html, "html.parser")
commits = soup.select("li")
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
await crawler.crawler_strategy.kill_session(session_id)
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def crawl_dynamic_content_pages_method_2():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
async with AsyncWebCrawler(verbose=True) as crawler:
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
last_commit = ""
js_next_page_and_wait = """
(async () => {
const getCurrentCommit = () => {
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
return commits.length > 0 ? commits[0].textContent.trim() : null;
};
const initialCommit = getCurrentCommit();
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
// Poll for changes
while (true) {
await new Promise(resolve => setTimeout(resolve, 100)); // Wait 100ms
const newCommit = getCurrentCommit();
if (newCommit && newCommit !== initialCommit) {
break;
}
}
})();
"""
schema = {
"name": "Commit Extractor",
"baseSelector": "li.Box-sc-g0xbh4-0",
"fields": [
{
"name": "title",
"selector": "h4.markdown-title",
"type": "text",
"transform": "strip",
},
],
}
extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)
for page in range(3): # Crawl 3 pages
result = await crawler.arun(
url=url,
session_id=session_id,
css_selector="li.Box-sc-g0xbh4-0",
extraction_strategy=extraction_strategy,
js_code=js_next_page_and_wait if page > 0 else None,
js_only=page > 0,
cache_mode=CacheMode.BYPASS,
headless=False,
)
assert result.success, f"Failed to crawl page {page + 1}"
commits = json.loads(result.extracted_content)
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
await crawler.crawler_strategy.kill_session(session_id)
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def crawl_dynamic_content_pages_method_3():
print(
"\n--- Advanced Multi-Page Crawling with JavaScript Execution using `wait_for` ---"
)
async with AsyncWebCrawler(verbose=True) as crawler:
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
js_next_page = """
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
if (commits.length > 0) {
window.firstCommit = commits[0].textContent.trim();
}
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
"""
wait_for = """() => {
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
if (commits.length === 0) return false;
const firstCommit = commits[0].textContent.trim();
return firstCommit !== window.firstCommit;
}"""
schema = {
"name": "Commit Extractor",
"baseSelector": "li.Box-sc-g0xbh4-0",
"fields": [
{
"name": "title",
"selector": "h4.markdown-title",
"type": "text",
"transform": "strip",
},
],
}
extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)
for page in range(3): # Crawl 3 pages
result = await crawler.arun(
url=url,
session_id=session_id,
css_selector="li.Box-sc-g0xbh4-0",
extraction_strategy=extraction_strategy,
js_code=js_next_page if page > 0 else None,
wait_for=wait_for if page > 0 else None,
js_only=page > 0,
cache_mode=CacheMode.BYPASS,
headless=False,
)
assert result.success, f"Failed to crawl page {page + 1}"
commits = json.loads(result.extracted_content)
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
await crawler.crawler_strategy.kill_session(session_id)
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def crawl_custom_browser_type():
# Use Firefox
start = time.time()
async with AsyncWebCrawler(
browser_type="firefox", verbose=True, headless=True
) as crawler:
result = await crawler.arun(
url="https://www.example.com", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500])
print("Time taken: ", time.time() - start)
# Use WebKit
start = time.time()
async with AsyncWebCrawler(
browser_type="webkit", verbose=True, headless=True
) as crawler:
result = await crawler.arun(
url="https://www.example.com", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500])
print("Time taken: ", time.time() - start)
# Use Chromium (default)
start = time.time()
async with AsyncWebCrawler(verbose=True, headless=True) as crawler:
result = await crawler.arun(
url="https://www.example.com", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500])
print("Time taken: ", time.time() - start)
async def crawl_with_user_simulation():
async with AsyncWebCrawler(verbose=True, headless=True) as crawler:
url = "YOUR-URL-HERE"
result = await crawler.arun(
url=url,
cache_mode=CacheMode.BYPASS,
magic=True, # Automatically detects and removes overlays, popups, and other elements that block content
# simulate_user = True,# Causes a series of random mouse movements and clicks to simulate user interaction
# override_navigator = True # Overrides the navigator object to make it look like a real user
)
print(result.markdown)
async def speed_comparison():
# print("\n--- Speed Comparison ---")
# print("Firecrawl (simulated):")
# print("Time taken: 7.02 seconds")
# print("Content length: 42074 characters")
# print("Images found: 49")
# print()
# Simulated Firecrawl performance
from firecrawl import FirecrawlApp
app = FirecrawlApp(api_key=os.environ["FIRECRAWL_API_KEY"])
start = time.time()
scrape_status = app.scrape_url(
"https://www.nbcnews.com/business", params={"formats": ["markdown", "html"]}
)
end = time.time()
print("Firecrawl:")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(scrape_status['markdown'])} characters")
print(f"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}")
print()
async with AsyncWebCrawler() as crawler:
# Crawl4AI simple crawl
start = time.time()
result = await crawler.arun(
url="https://www.nbcnews.com/business",
word_count_threshold=0,
cache_mode=CacheMode.BYPASS,
verbose=False,
)
end = time.time()
print("Crawl4AI (simple crawl):")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(result.markdown)} characters")
print(f"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}")
print()
# Crawl4AI with advanced content filtering
start = time.time()
result = await crawler.arun(
url="https://www.nbcnews.com/business",
word_count_threshold=0,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
)
# content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
),
cache_mode=CacheMode.BYPASS,
verbose=False,
)
end = time.time()
print("Crawl4AI (Markdown Plus):")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(result.markdown.raw_markdown)} characters")
print(f"Fit Markdown: {len(result.markdown.fit_markdown)} characters")
print(f"Images found: {result.markdown.raw_markdown.count('cldnry.s-nbcnews.com')}")
print()
# Crawl4AI with JavaScript execution
start = time.time()
result = await crawler.arun(
url="https://www.nbcnews.com/business",
js_code=[
"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"
],
word_count_threshold=0,
cache_mode=CacheMode.BYPASS,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
)
# content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
),
verbose=False,
)
end = time.time()
print("Crawl4AI (with JavaScript execution):")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(result.markdown.raw_markdown)} characters")
print(f"Fit Markdown: {len(result.markdown.fit_markdown)} characters")
print(f"Images found: {result.markdown.raw_markdown.count('cldnry.s-nbcnews.com')}")
print("\nNote on Speed Comparison:")
print("The speed test conducted here may not reflect optimal conditions.")
print("When we call Firecrawl's API, we're seeing its best performance,")
print("while Crawl4AI's performance is limited by the local network speed.")
print("For a more accurate comparison, it's recommended to run these tests")
print("on servers with a stable and fast internet connection.")
print("Despite these limitations, Crawl4AI still demonstrates faster performance.")
print("If you run these tests in an environment with better network conditions,")
print("you may observe an even more significant speed advantage for Crawl4AI.")
async def generate_knowledge_graph():
class Entity(BaseModel):
name: str
description: str
class Relationship(BaseModel):
entity1: Entity
entity2: Entity
description: str
relation_type: str
class KnowledgeGraph(BaseModel):
entities: List[Entity]
relationships: List[Relationship]
extraction_strategy = LLMExtractionStrategy(
llmConfig=LlmConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY")), # In case of Ollama just pass "no-token"
schema=KnowledgeGraph.model_json_schema(),
extraction_type="schema",
instruction="""Extract entities and relationships from the given text.""",
)
async with AsyncWebCrawler() as crawler:
url = "https://paulgraham.com/love.html"
result = await crawler.arun(
url=url,
cache_mode=CacheMode.BYPASS,
extraction_strategy=extraction_strategy,
# magic=True
)
# print(result.extracted_content)
with open(os.path.join(__location__, "kb.json"), "w") as f:
f.write(result.extracted_content)
async def fit_markdown_remove_overlay():
async with AsyncWebCrawler(
headless=True, # Set to False to see what is happening
verbose=True,
user_agent_mode="random",
user_agent_generator_config={"device_type": "mobile", "os_type": "android"},
) as crawler:
result = await crawler.arun(
url="https://www.kidocode.com/degrees/technology",
cache_mode=CacheMode.BYPASS,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
),
options={"ignore_links": True},
),
# markdown_generator=DefaultMarkdownGenerator(
# content_filter=BM25ContentFilter(user_query="", bm25_threshold=1.0),
# options={
# "ignore_links": True
# }
# ),
)
if result.success:
print(len(result.markdown.raw_markdown))
print(len(result.markdown.markdown_with_citations))
print(len(result.markdown.fit_markdown))
# Save clean html
with open(os.path.join(__location__, "output/cleaned_html.html"), "w") as f:
f.write(result.cleaned_html)
with open(
os.path.join(__location__, "output/output_raw_markdown.md"), "w"
) as f:
f.write(result.markdown.raw_markdown)
with open(
os.path.join(__location__, "output/output_markdown_with_citations.md"),
"w",
) as f:
f.write(result.markdown.markdown_with_citations)
with open(
os.path.join(__location__, "output/output_fit_markdown.md"), "w"
) as f:
f.write(result.markdown.fit_markdown)
print("Done")
async def main():
# await extract_structured_data_using_llm("openai/gpt-4o", os.getenv("OPENAI_API_KEY"))
# await simple_crawl()
# await simple_example_with_running_js_code()
# await simple_example_with_css_selector()
# # await use_proxy()
# await capture_and_save_screenshot("https://www.example.com", os.path.join(__location__, "tmp/example_screenshot.jpg"))
# await extract_structured_data_using_css_extractor()
# LLM extraction examples
# await extract_structured_data_using_llm()
# await extract_structured_data_using_llm("huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct", os.getenv("HUGGINGFACE_API_KEY"))
# await extract_structured_data_using_llm("ollama/llama3.2")
# You can always pass custom headers to the extraction strategy
# custom_headers = {
# "Authorization": "Bearer your-custom-token",
# "X-Custom-Header": "Some-Value"
# }
# await extract_structured_data_using_llm(extra_headers=custom_headers)
# await crawl_dynamic_content_pages_method_1()
# await crawl_dynamic_content_pages_method_2()
await crawl_dynamic_content_pages_method_3()
# await crawl_custom_browser_type()
# await speed_comparison()
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,412 @@
import asyncio
import os
import json
import base64
from pathlib import Path
from typing import List
from crawl4ai.proxy_strategy import ProxyConfig
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode, CrawlResult
from crawl4ai import RoundRobinProxyStrategy
from crawl4ai import JsonCssExtractionStrategy, LLMExtractionStrategy
from crawl4ai import LLMConfig
from crawl4ai import PruningContentFilter, BM25ContentFilter
from crawl4ai import DefaultMarkdownGenerator
from crawl4ai import BFSDeepCrawlStrategy, DomainFilter, FilterChain
from crawl4ai import BrowserConfig
__cur_dir__ = Path(__file__).parent
async def demo_basic_crawl():
"""Basic web crawling with markdown generation"""
print("\n=== 1. Basic Web Crawling ===")
async with AsyncWebCrawler(config = BrowserConfig(
viewport_height=800,
viewport_width=1200,
headless=True,
verbose=True,
)) as crawler:
results: List[CrawlResult] = await crawler.arun(
url="https://news.ycombinator.com/"
)
for i, result in enumerate(results):
print(f"Result {i + 1}:")
print(f"Success: {result.success}")
if result.success:
print(f"Markdown length: {len(result.markdown.raw_markdown)} chars")
print(f"First 100 chars: {result.markdown.raw_markdown[:100]}...")
else:
print("Failed to crawl the URL")
async def demo_parallel_crawl():
"""Crawl multiple URLs in parallel"""
print("\n=== 2. Parallel Crawling ===")
urls = [
"https://news.ycombinator.com/",
"https://example.com/",
"https://httpbin.org/html",
]
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun_many(
urls=urls,
)
print(f"Crawled {len(results)} URLs in parallel:")
for i, result in enumerate(results):
print(
f" {i + 1}. {result.url} - {'Success' if result.success else 'Failed'}"
)
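# A simple follow-up sketch: collect the URLs that failed for a retry pass.
# failed = [r.url for r in results if not r.success]
# if failed:
#     print(f"Retry candidates: {failed}")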
async def demo_fit_markdown():
"""Generate focused markdown with LLM content filter"""
print("\n=== 3. Fit Markdown with LLM Content Filter ===")
async with AsyncWebCrawler() as crawler:
result: CrawlResult = await crawler.arun(
url = "https://en.wikipedia.org/wiki/Python_(programming_language)",
config=CrawlerRunConfig(
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter()
)
),
)
# Print stats and save the fit markdown
print(f"Raw: {len(result.markdown.raw_markdown)} chars")
print(f"Fit: {len(result.markdown.fit_markdown)} chars")
async def demo_llm_structured_extraction_no_schema():
"""Extract structured data with an LLM, without a predefined Pydantic model"""
print("\n=== 4. LLM Structured Extraction (no predefined schema) ===")
# Create a simple LLM extraction strategy (an inline schema string is enough)
extraction_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(
provider="groq/qwen-2.5-32b",
api_token="env:GROQ_API_KEY",
),
instruction="This is news.ycombinator.com, extract all news, and for each, I want title, source url, number of comments.",
extract_type="schema",
schema="{title: string, url: string, comments: int}",
extra_args={
"temperature": 0.0,
"max_tokens": 4096,
},
verbose=True,
)
config = CrawlerRunConfig(extraction_strategy=extraction_strategy)
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
"https://news.ycombinator.com/", config=config
)
for result in results:
print(f"URL: {result.url}")
print(f"Success: {result.success}")
if result.success:
data = json.loads(result.extracted_content)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
async def demo_css_structured_extraction_no_schema():
"""Extract structured data using CSS selectors"""
print("\n=== 5. CSS-Based Structured Extraction ===")
# Sample HTML for schema generation (one-time cost)
sample_html = """
<div class="body-post clear">
<a class="story-link" href="https://thehackernews.com/2025/04/malicious-python-packages-on-pypi.html">
<div class="clear home-post-box cf">
<div class="home-img clear">
<div class="img-ratio">
<img alt="..." src="...">
</div>
</div>
<div class="clear home-right">
<h2 class="home-title">Malicious Python Packages on PyPI Downloaded 39,000+ Times, Steal Sensitive Data</h2>
<div class="item-label">
<span class="h-datetime"><i class="icon-font icon-calendar"></i>Apr 05, 2025</span>
<span class="h-tags">Malware / Supply Chain Attack</span>
</div>
<div class="home-desc"> Cybersecurity researchers have...</div>
</div>
</div>
</a>
</div>
"""
# Check if schema file exists
schema_file_path = f"{__cur_dir__}/tmp/schema.json"
if os.path.exists(schema_file_path):
with open(schema_file_path, "r") as f:
schema = json.load(f)
else:
# Generate schema using LLM (one-time setup)
schema = JsonCssExtractionStrategy.generate_schema(
html=sample_html,
llm_config=LLMConfig(
provider="groq/qwen-2.5-32b",
api_token="env:GROQ_API_KEY",
),
query="From https://thehackernews.com/, I have shared a sample of one news div with a title, date, and description. Please generate a schema for this news div.",
)
print(f"Generated schema: {json.dumps(schema, indent=2)}")
# Save the schema to a file and reuse it for future extractions; this way the LLM is only called once, during schema generation
with open(f"{__cur_dir__}/tmp/schema.json", "w") as f:
json.dump(schema, f, indent=2)
# Create no-LLM extraction strategy with the generated schema
extraction_strategy = JsonCssExtractionStrategy(schema)
config = CrawlerRunConfig(extraction_strategy=extraction_strategy)
# Use the fast CSS extraction (no LLM calls during extraction)
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
"https://thehackernews.com", config=config
)
for result in results:
print(f"URL: {result.url}")
print(f"Success: {result.success}")
if result.success:
data = json.loads(result.extracted_content)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
async def demo_deep_crawl():
"""Deep crawling with BFS strategy"""
print("\n=== 6. Deep Crawling ===")
filter_chain = FilterChain([DomainFilter(allowed_domains=["crawl4ai.com"])])
deep_crawl_strategy = BFSDeepCrawlStrategy(
max_depth=1, max_pages=5, filter_chain=filter_chain
)
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
url="https://docs.crawl4ai.com",
config=CrawlerRunConfig(deep_crawl_strategy=deep_crawl_strategy),
)
print(f"Deep crawl returned {len(results)} pages:")
for i, result in enumerate(results):
depth = result.metadata.get("depth", "unknown")
print(f" {i + 1}. {result.url} (Depth: {depth})")
async def demo_js_interaction():
"""Execute JavaScript to load more content"""
print("\n=== 7. JavaScript Interaction ===")
# A simple page that needs JS to reveal content
async with AsyncWebCrawler(config=BrowserConfig(headless=False)) as crawler:
# Initial load
news_schema = {
"name": "news",
"baseSelector": "tr.athing",
"fields": [
{
"name": "title",
"selector": "span.titleline",
"type": "text",
}
],
}
results: List[CrawlResult] = await crawler.arun(
url="https://news.ycombinator.com",
config=CrawlerRunConfig(
session_id="hn_session", # Keep session
extraction_strategy=JsonCssExtractionStrategy(schema=news_schema),
),
)
news = []
for result in results:
if result.success:
data = json.loads(result.extracted_content)
news.extend(data)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
print(f"Initial items: {len(news)}")
# Click "More" link
more_config = CrawlerRunConfig(
js_code="document.querySelector('a.morelink').click();",
js_only=True, # Continue in same page
session_id="hn_session", # Keep session
extraction_strategy=JsonCssExtractionStrategy(
schema=news_schema,
),
)
results: List[CrawlResult] = await crawler.arun(
url="https://news.ycombinator.com", config=more_config
)
# Extract new items
for result in results:
if result.success:
data = json.loads(result.extracted_content)
news.extend(data)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
print(f"Total items: {len(news)}")
async def demo_media_and_links():
"""Extract media and links from a page"""
print("\n=== 8. Media and Links Extraction ===")
async with AsyncWebCrawler() as crawler:
result: List[CrawlResult] = await crawler.arun("https://en.wikipedia.org/wiki/Main_Page")
for i, result in enumerate(result):
# Extract and save all images
images = result.media.get("images", [])
print(f"Found {len(images)} images")
# Extract and save all links (internal and external)
internal_links = result.links.get("internal", [])
external_links = result.links.get("external", [])
print(f"Found {len(internal_links)} internal links")
print(f"Found {len(external_links)} external links")
# Print some of the images and links
for image in images[:3]:
print(f"Image: {image['src']}")
for link in internal_links[:3]:
print(f"Internal link: {link['href']}")
for link in external_links[:3]:
print(f"External link: {link['href']}")
# Save everything to files
with open(f"{__cur_dir__}/tmp/images.json", "w") as f:
json.dump(images, f, indent=2)
with open(f"{__cur_dir__}/tmp/links.json", "w") as f:
json.dump(
{"internal": internal_links, "external": external_links},
f,
indent=2,
)
async def demo_screenshot_and_pdf():
"""Capture screenshot and PDF of a page"""
print("\n=== 9. Screenshot and PDF Capture ===")
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
# url="https://example.com",
url="https://en.wikipedia.org/wiki/Giant_anteater",
config=CrawlerRunConfig(screenshot=True, pdf=True),
)
for i, result in enumerate(results):
if result.screenshot:
# Save screenshot
screenshot_path = f"{__cur_dir__}/tmp/example_screenshot.png"
with open(screenshot_path, "wb") as f:
f.write(base64.b64decode(result.screenshot))
print(f"Screenshot saved to {screenshot_path}")
if result.pdf:
# Save PDF
pdf_path = f"{__cur_dir__}/tmp/example.pdf"
with open(pdf_path, "wb") as f:
f.write(result.pdf)
print(f"PDF saved to {pdf_path}")
async def demo_proxy_rotation():
"""Proxy rotation for multiple requests"""
print("\n=== 10. Proxy Rotation ===")
# Example proxies (replace with real ones)
proxies = [
ProxyConfig(server="http://proxy1.example.com:8080"),
ProxyConfig(server="http://proxy2.example.com:8080"),
]
proxy_strategy = RoundRobinProxyStrategy(proxies)
print(f"Using {len(proxies)} proxies in rotation")
print(
"Note: This example uses placeholder proxies - replace with real ones to test"
)
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
proxy_rotation_strategy=proxy_strategy
)
# In a real scenario, these would be run and the proxies would rotate
print("In a real scenario, requests would rotate through the available proxies")
async def demo_raw_html_and_file():
"""Process raw HTML and local files"""
print("\n=== 11. Raw HTML and Local Files ===")
raw_html = """
<html><body>
<h1>Sample Article</h1>
<p>This is sample content for testing Crawl4AI's raw HTML processing.</p>
</body></html>
"""
# Save to file
file_path = Path("docs/examples/tmp/sample.html").absolute()
with open(file_path, "w") as f:
f.write(raw_html)
async with AsyncWebCrawler() as crawler:
# Crawl raw HTML
raw_result = await crawler.arun(
url="raw:" + raw_html, config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
)
print("Raw HTML processing:")
print(f" Markdown: {raw_result.markdown.raw_markdown[:50]}...")
# Crawl local file
file_result = await crawler.arun(
url=f"file://{file_path}",
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
)
print("\nLocal file processing:")
print(f" Markdown: {file_result.markdown.raw_markdown[:50]}...")
# Clean up
os.remove(file_path)
print(f"Processed both raw HTML and local file ({file_path})")
async def main():
"""Run all demo functions sequentially"""
print("=== Comprehensive Crawl4AI Demo ===")
print("Note: Some examples require API keys or other configurations")
# Run all demos
await demo_basic_crawl()
await demo_parallel_crawl()
await demo_fit_markdown()
await demo_llm_structured_extraction_no_schema()
await demo_css_structured_extraction_no_schema()
await demo_deep_crawl()
await demo_js_interaction()
await demo_media_and_links()
await demo_screenshot_and_pdf()
# # await demo_proxy_rotation()
await demo_raw_html_and_file()
# Generated artifacts are left in the tmp directory for inspection
print("\n=== Demo Complete ===")
print("Check for any generated files (screenshots, PDFs) in the current directory")
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,562 @@
import os, sys
from crawl4ai.types import LLMConfig
sys.path.append(
os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
)
import asyncio
import time
import json
import re
from typing import Dict
from bs4 import BeautifulSoup
from pydantic import BaseModel, Field
from crawl4ai import AsyncWebCrawler, CacheMode, BrowserConfig, CrawlerRunConfig
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
from crawl4ai.content_filter_strategy import PruningContentFilter
from crawl4ai.extraction_strategy import (
JsonCssExtractionStrategy,
LLMExtractionStrategy,
)
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
print("Crawl4AI: Advanced Web Crawling and Data Extraction")
print("GitHub Repository: https://github.com/unclecode/crawl4ai")
print("Twitter: @unclecode")
print("Website: https://crawl4ai.com")
# Basic Example - Simple Crawl
async def simple_crawl():
print("\n--- Basic Usage ---")
browser_config = BrowserConfig(headless=True)
crawler_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
print(result.markdown[:500])
async def clean_content():
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
excluded_tags=["nav", "footer", "aside"],
remove_overlay_elements=True,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
),
options={"ignore_links": True},
),
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://en.wikipedia.org/wiki/Apple",
config=crawler_config,
)
full_markdown_length = len(result.markdown.raw_markdown)
fit_markdown_length = len(result.markdown.fit_markdown)
print(f"Full Markdown Length: {full_markdown_length}")
print(f"Fit Markdown Length: {fit_markdown_length}")
async def link_analysis():
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.ENABLED,
exclude_external_links=True,
exclude_social_media_links=True,
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business",
config=crawler_config,
)
print(f"Found {len(result.links['internal'])} internal links")
print(f"Found {len(result.links['external'])} external links")
for link in result.links["internal"][:5]:
print(f"Href: {link['href']}\nText: {link['text']}\n")
# JavaScript Execution Example
async def simple_example_with_running_js_code():
print("\n--- Executing JavaScript and Using CSS Selectors ---")
browser_config = BrowserConfig(headless=True, java_script_enabled=True)
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
js_code="const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();",
# wait_for="() => { return Array.from(document.querySelectorAll('article.tease-card')).length > 10; }"
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
print(result.markdown[:500])
# CSS Selector Example
async def simple_example_with_css_selector():
print("\n--- Using CSS Selectors ---")
browser_config = BrowserConfig(headless=True)
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS, css_selector=".wide-tease-item__description"
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
print(result.markdown[:500])
async def media_handling():
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS, exclude_external_images=True, screenshot=True
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
for img in result.media["images"][:5]:
print(f"Image URL: {img['src']}, Alt: {img['alt']}, Score: {img['score']}")
async def custom_hook_workflow(verbose=True):
async with AsyncWebCrawler() as crawler:
# Set a 'before_goto' hook to run custom code just before navigation
crawler.crawler_strategy.set_hook(
"before_goto",
lambda page, context: print("[Hook] Preparing to navigate..."),
)
# Perform the crawl operation
result = await crawler.arun(url="https://crawl4ai.com")
print(result.markdown.raw_markdown[:500].replace("\n", " -- "))
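# "on_execution_started" (used in the dynamic-content examples below) is another
# hook point; hook callables receive the Playwright page plus strategy-specific kwargs.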
# Proxy Example
async def use_proxy():
print("\n--- Using a Proxy ---")
browser_config = BrowserConfig(
headless=True,
proxy_config={
"server": "http://proxy.example.com:8080",
"username": "username",
"password": "password",
},
)
crawler_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
if result.success:
print(result.markdown[:500])
# Screenshot Example
async def capture_and_save_screenshot(url: str, output_path: str):
browser_config = BrowserConfig(headless=True)
crawler_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, screenshot=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url=url, config=crawler_config)
if result.success and result.screenshot:
import base64
screenshot_data = base64.b64decode(result.screenshot)
with open(output_path, "wb") as f:
f.write(screenshot_data)
print(f"Screenshot saved successfully to {output_path}")
else:
print("Failed to capture screenshot")
# LLM Extraction Example
class OpenAIModelFee(BaseModel):
model_name: str = Field(..., description="Name of the OpenAI model.")
input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
output_fee: str = Field(
..., description="Fee for output token for the OpenAI model."
)
async def extract_structured_data_using_llm(
provider: str, api_token: str = None, extra_headers: Dict[str, str] = None
):
print(f"\n--- Extracting Structured Data with {provider} ---")
if api_token is None and provider != "ollama":
print(f"API token is required for {provider}. Skipping this example.")
return
browser_config = BrowserConfig(headless=True)
extra_args = {"temperature": 0, "top_p": 0.9, "max_tokens": 2000}
if extra_headers:
extra_args["extra_headers"] = extra_headers
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
word_count_threshold=1,
page_timeout=80000,
extraction_strategy=LLMExtractionStrategy(
llm_config=LLMConfig(provider=provider,api_token=api_token),
schema=OpenAIModelFee.model_json_schema(),
extraction_type="schema",
instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
Do not miss any models in the entire content.""",
extra_args=extra_args,
),
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://openai.com/api/pricing/", config=crawler_config
)
print(result.extracted_content)
# CSS Extraction Example
async def extract_structured_data_using_css_extractor():
print("\n--- Using JsonCssExtractionStrategy for Fast Structured Output ---")
schema = {
"name": "KidoCode Courses",
"baseSelector": "section.charge-methodology .framework-collection-item.w-dyn-item",
"fields": [
{
"name": "section_title",
"selector": "h3.heading-50",
"type": "text",
},
{
"name": "section_description",
"selector": ".charge-content",
"type": "text",
},
{
"name": "course_name",
"selector": ".text-block-93",
"type": "text",
},
{
"name": "course_description",
"selector": ".course-content-text",
"type": "text",
},
{
"name": "course_icon",
"selector": ".image-92",
"type": "attribute",
"attribute": "src",
},
],
}
browser_config = BrowserConfig(headless=True, java_script_enabled=True)
js_click_tabs = """
(async () => {
const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div");
for(let tab of tabs) {
tab.scrollIntoView();
tab.click();
await new Promise(r => setTimeout(r, 500));
}
})();
"""
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
extraction_strategy=JsonCssExtractionStrategy(schema),
js_code=[js_click_tabs],
delay_before_return_html=1
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.kidocode.com/degrees/technology", config=crawler_config
)
courses = json.loads(result.extracted_content)
print(f"Successfully extracted {len(courses)} course entries")
print(json.dumps(courses[0], indent=2))
# Dynamic Content Examples - Method 1
async def crawl_dynamic_content_pages_method_1():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
first_commit = ""
async def on_execution_started(page, **kwargs):
nonlocal first_commit
try:
while True:
await page.wait_for_selector("li.Box-sc-g0xbh4-0 h4")
commit = await page.query_selector("li.Box-sc-g0xbh4-0 h4")
commit = await commit.evaluate("(element) => element.textContent")
commit = re.sub(r"\s+", "", commit)
if commit and commit != first_commit:
first_commit = commit
break
await asyncio.sleep(0.5)
except Exception as e:
print(f"Warning: New content didn't appear after JavaScript execution: {e}")
browser_config = BrowserConfig(headless=False, java_script_enabled=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
js_next_page = """
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
"""
for page in range(3):
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
css_selector="li.Box-sc-g0xbh4-0",
js_code=js_next_page if page > 0 else None,
js_only=page > 0,
session_id=session_id,
)
result = await crawler.arun(url=url, config=crawler_config)
assert result.success, f"Failed to crawl page {page + 1}"
soup = BeautifulSoup(result.cleaned_html, "html.parser")
commits = soup.select("li")
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
# Dynamic Content Examples - Method 2
async def crawl_dynamic_content_pages_method_2():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
browser_config = BrowserConfig(headless=False, java_script_enabled=True)
js_next_page_and_wait = """
(async () => {
const getCurrentCommit = () => {
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
return commits.length > 0 ? commits[0].textContent.trim() : null;
};
const initialCommit = getCurrentCommit();
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
while (true) {
await new Promise(resolve => setTimeout(resolve, 100));
const newCommit = getCurrentCommit();
if (newCommit && newCommit !== initialCommit) {
break;
}
}
})();
"""
schema = {
"name": "Commit Extractor",
"baseSelector": "li.Box-sc-g0xbh4-0",
"fields": [
{
"name": "title",
"selector": "h4.markdown-title",
"type": "text",
"transform": "strip",
},
],
}
async with AsyncWebCrawler(config=browser_config) as crawler:
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
extraction_strategy = JsonCssExtractionStrategy(schema)
for page in range(3):
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
css_selector="li.Box-sc-g0xbh4-0",
extraction_strategy=extraction_strategy,
js_code=js_next_page_and_wait if page > 0 else None,
js_only=page > 0,
session_id=session_id,
)
result = await crawler.arun(url=url, config=crawler_config)
assert result.success, f"Failed to crawl page {page + 1}"
commits = json.loads(result.extracted_content)
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def cosine_similarity_extraction():
from crawl4ai.extraction_strategy import CosineStrategy
crawl_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
extraction_strategy=CosineStrategy(
word_count_threshold=10,
max_dist=0.2, # Maximum distance between two words
linkage_method="ward", # Linkage method for hierarchical clustering (ward, complete, average, single)
top_k=3, # Number of top keywords to extract
sim_threshold=0.3, # Similarity threshold for clustering
semantic_filter="McDonald's economic impact, American consumer trends", # Keywords to filter the content semantically using embeddings
verbose=True,
),
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156",
config=crawl_config,
)
print(json.loads(result.extracted_content)[:5])
# Browser Comparison
async def crawl_custom_browser_type():
print("\n--- Browser Comparison ---")
# Firefox
browser_config_firefox = BrowserConfig(browser_type="firefox", headless=True)
start = time.time()
async with AsyncWebCrawler(config=browser_config_firefox) as crawler:
result = await crawler.arun(
url="https://www.example.com",
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
)
print("Firefox:", time.time() - start)
print(result.markdown[:500])
# WebKit
browser_config_webkit = BrowserConfig(browser_type="webkit", headless=True)
start = time.time()
async with AsyncWebCrawler(config=browser_config_webkit) as crawler:
result = await crawler.arun(
url="https://www.example.com",
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
)
print("WebKit:", time.time() - start)
print(result.markdown[:500])
# Chromium (default)
browser_config_chromium = BrowserConfig(browser_type="chromium", headless=True)
start = time.time()
async with AsyncWebCrawler(config=browser_config_chromium) as crawler:
result = await crawler.arun(
url="https://www.example.com",
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
)
print("Chromium:", time.time() - start)
print(result.markdown[:500])
# Anti-Bot and User Simulation
async def crawl_with_user_simulation():
browser_config = BrowserConfig(
headless=True,
user_agent_mode="random",
user_agent_generator_config={"device_type": "mobile", "os_type": "android"},
)
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
magic=True,
simulate_user=True,
override_navigator=True,
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url="YOUR-URL-HERE", config=crawler_config)
print(result.markdown)
async def ssl_certification():
# Configure crawler to fetch SSL certificate
config = CrawlerRunConfig(
fetch_ssl_certificate=True,
cache_mode=CacheMode.BYPASS, # Bypass cache to always get fresh certificates
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url="https://example.com", config=config)
if result.success and result.ssl_certificate:
cert = result.ssl_certificate
tmp_dir = os.path.join(__location__, "tmp")
os.makedirs(tmp_dir, exist_ok=True)
# 1. Access certificate properties directly
print("\nCertificate Information:")
print(f"Issuer: {cert.issuer.get('CN', '')}")
print(f"Valid until: {cert.valid_until}")
print(f"Fingerprint: {cert.fingerprint}")
# 2. Export certificate in different formats
cert.to_json(os.path.join(tmp_dir, "certificate.json")) # For analysis
print("\nCertificate exported to:")
print(f"- JSON: {os.path.join(tmp_dir, 'certificate.json')}")
pem_data = cert.to_pem(
os.path.join(tmp_dir, "certificate.pem")
) # For web servers
print(f"- PEM: {os.path.join(tmp_dir, 'certificate.pem')}")
der_data = cert.to_der(
os.path.join(tmp_dir, "certificate.der")
) # For Java apps
print(f"- DER: {os.path.join(tmp_dir, 'certificate.der')}")
# Main execution
async def main():
# Basic examples
await simple_crawl()
await simple_example_with_running_js_code()
await simple_example_with_css_selector()
# Advanced examples
await extract_structured_data_using_css_extractor()
await extract_structured_data_using_llm(
"openai/gpt-4o", os.getenv("OPENAI_API_KEY")
)
await crawl_dynamic_content_pages_method_1()
await crawl_dynamic_content_pages_method_2()
# Browser comparisons
await crawl_custom_browser_type()
# Screenshot example
await capture_and_save_screenshot(
"https://www.example.com",
os.path.join(__location__, "tmp/example_screenshot.jpg")
)
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -1,405 +0,0 @@
import os
import time
import json
import base64
from crawl4ai.async_configs import LlmConfig
from crawl4ai.web_crawler import WebCrawler
from crawl4ai.chunking_strategy import *
from crawl4ai.extraction_strategy import *
from crawl4ai.crawler_strategy import *
from rich import print
from rich.console import Console
from functools import lru_cache
console = Console()
@lru_cache()
def create_crawler():
crawler = WebCrawler(verbose=True)
crawler.warmup()
return crawler
def print_result(result):
# Print each key on one line with the first 20 characters of its value followed by an ellipsis
console.print("\t[bold]Result:[/bold]")
for key, value in result.model_dump().items():
if isinstance(value, str) and value:
console.print(f"\t{key}: [green]{value[:20]}...[/green]")
if result.extracted_content:
items = json.loads(result.extracted_content)
print(f"\t[bold]{len(items)} blocks is extracted![/bold]")
def cprint(message, press_any_key=False):
console.print(message)
if press_any_key:
console.print("Press any key to continue...", style="")
input()
def basic_usage(crawler):
cprint(
"🛠️ [bold cyan]Basic Usage: Simply provide a URL and let Crawl4ai do the magic![/bold cyan]"
)
result = crawler.run(url="https://www.nbcnews.com/business", only_text=True)
cprint("[LOG] 📦 [bold yellow]Basic crawl result:[/bold yellow]")
print_result(result)
def basic_usage_some_params(crawler):
cprint(
"🛠️ [bold cyan]Basic Usage: Simply provide a URL and let Crawl4ai do the magic![/bold cyan]"
)
result = crawler.run(
url="https://www.nbcnews.com/business", word_count_threshold=1, only_text=True
)
cprint("[LOG] 📦 [bold yellow]Basic crawl result:[/bold yellow]")
print_result(result)
def screenshot_usage(crawler):
cprint("\n📸 [bold cyan]Let's take a screenshot of the page![/bold cyan]")
result = crawler.run(url="https://www.nbcnews.com/business", screenshot=True)
cprint("[LOG] 📦 [bold yellow]Screenshot result:[/bold yellow]")
# Save the screenshot to a file
with open("screenshot.png", "wb") as f:
f.write(base64.b64decode(result.screenshot))
cprint("Screenshot saved to 'screenshot.png'!")
print_result(result)
def understanding_parameters(crawler):
cprint(
"\n🧠 [bold cyan]Understanding 'bypass_cache' and 'include_raw_html' parameters:[/bold cyan]"
)
cprint(
"By default, Crawl4ai caches the results of your crawls. This means that subsequent crawls of the same URL will be much faster! Let's see this in action."
)
# First crawl (caches the result)
cprint("1⃣ First crawl (caches the result):", True)
start_time = time.time()
result = crawler.run(url="https://www.nbcnews.com/business")
end_time = time.time()
cprint(
f"[LOG] 📦 [bold yellow]First crawl took {end_time - start_time} seconds and result (from cache):[/bold yellow]"
)
print_result(result)
# Force to crawl again
cprint("2⃣ Second crawl (Force to crawl again):", True)
start_time = time.time()
result = crawler.run(url="https://www.nbcnews.com/business", bypass_cache=True)
end_time = time.time()
cprint(
f"[LOG] 📦 [bold yellow]Second crawl took {end_time - start_time} seconds and result (forced to crawl):[/bold yellow]"
)
print_result(result)
def add_chunking_strategy(crawler):
# Adding a chunking strategy: RegexChunking
cprint(
"\n🧩 [bold cyan]Let's add a chunking strategy: RegexChunking![/bold cyan]",
True,
)
cprint(
"RegexChunking is a simple chunking strategy that splits the text based on a given regex pattern. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
chunking_strategy=RegexChunking(patterns=["\n\n"]),
)
cprint("[LOG] 📦 [bold yellow]RegexChunking result:[/bold yellow]")
print_result(result)
# Adding another chunking strategy: NlpSentenceChunking
cprint(
"\n🔍 [bold cyan]Time to explore another chunking strategy: NlpSentenceChunking![/bold cyan]",
True,
)
cprint(
"NlpSentenceChunking uses NLP techniques to split the text into sentences. Let's see how it performs!"
)
result = crawler.run(
url="https://www.nbcnews.com/business", chunking_strategy=NlpSentenceChunking()
)
cprint("[LOG] 📦 [bold yellow]NlpSentenceChunking result:[/bold yellow]")
print_result(result)
def add_extraction_strategy(crawler):
# Adding an extraction strategy: CosineStrategy
cprint(
"\n🧠 [bold cyan]Let's get smarter with an extraction strategy: CosineStrategy![/bold cyan]",
True,
)
cprint(
"CosineStrategy uses cosine similarity to extract semantically similar blocks of text. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=CosineStrategy(
word_count_threshold=10,
max_dist=0.2,
linkage_method="ward",
top_k=3,
sim_threshold=0.3,
verbose=True,
),
)
cprint("[LOG] 📦 [bold yellow]CosineStrategy result:[/bold yellow]")
print_result(result)
# Using semantic_filter with CosineStrategy
cprint(
"You can pass other parameters like 'semantic_filter' to the CosineStrategy to extract semantically similar blocks of text. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=CosineStrategy(
semantic_filter="inflation rent prices",
),
)
cprint(
"[LOG] 📦 [bold yellow]CosineStrategy result with semantic filter:[/bold yellow]"
)
print_result(result)
def add_llm_extraction_strategy(crawler):
# Adding an LLM extraction strategy without instructions
cprint(
"\n🤖 [bold cyan]Time to bring in the big guns: LLMExtractionStrategy without instructions![/bold cyan]",
True,
)
cprint(
"LLMExtractionStrategy uses a large language model to extract relevant information from the web page. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=LLMExtractionStrategy(
llmConfig = LlmConfig(provider="openai/gpt-4o", api_token=os.getenv("OPENAI_API_KEY"))
),
)
cprint(
"[LOG] 📦 [bold yellow]LLMExtractionStrategy (no instructions) result:[/bold yellow]"
)
print_result(result)
# Adding an LLM extraction strategy with instructions
cprint(
"\n📜 [bold cyan]Let's make it even more interesting: LLMExtractionStrategy with instructions![/bold cyan]",
True,
)
cprint(
"Let's say we are only interested in financial news. Let's see how LLMExtractionStrategy performs with instructions!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=LLMExtractionStrategy(
llmConfig=LlmConfig(provider="openai/gpt-4o",api_token=os.getenv("OPENAI_API_KEY")),
instruction="I am interested in only financial news",
),
)
cprint(
"[LOG] 📦 [bold yellow]LLMExtractionStrategy (with instructions) result:[/bold yellow]"
)
print_result(result)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=LLMExtractionStrategy(
llmConfig=LlmConfig(provider="openai/gpt-4o",api_token=os.getenv("OPENAI_API_KEY")),
instruction="Extract only content related to technology",
),
)
cprint(
"[LOG] 📦 [bold yellow]LLMExtractionStrategy (with technology instruction) result:[/bold yellow]"
)
print_result(result)
def targeted_extraction(crawler):
# Using a CSS selector to extract only H2 tags
cprint(
"\n🎯 [bold cyan]Targeted extraction: Let's use a CSS selector to extract only H2 tags![/bold cyan]",
True,
)
result = crawler.run(url="https://www.nbcnews.com/business", css_selector="h2")
cprint("[LOG] 📦 [bold yellow]CSS Selector (H2 tags) result:[/bold yellow]")
print_result(result)
def interactive_extraction(crawler):
# Passing JavaScript code to interact with the page
cprint(
"\n🖱️ [bold cyan]Let's get interactive: Passing JavaScript code to click 'Load More' button![/bold cyan]",
True,
)
cprint(
"In this example we try to click the 'Load More' button on the page using JavaScript code."
)
js_code = """
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
loadMoreButton && loadMoreButton.click();
"""
# crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
# crawler = WebCrawler(crawler_strategy=crawler_strategy, always_by_pass_cache=True)
result = crawler.run(url="https://www.nbcnews.com/business", js=js_code)
cprint(
"[LOG] 📦 [bold yellow]JavaScript Code (Load More button) result:[/bold yellow]"
)
print_result(result)
def multiple_scripts(crawler):
# Passing JavaScript code to interact with the page
cprint(
"\n🖱️ [bold cyan]Let's get interactive: Passing JavaScript code to click 'Load More' button![/bold cyan]",
True,
)
cprint(
"In this example we try to click the 'Load More' button on the page using JavaScript code."
)
js_code = [
"""
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
loadMoreButton && loadMoreButton.click();
"""
] * 2
# crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
# crawler = WebCrawler(crawler_strategy=crawler_strategy, always_by_pass_cache=True)
result = crawler.run(url="https://www.nbcnews.com/business", js=js_code)
cprint(
"[LOG] 📦 [bold yellow]JavaScript Code (Load More button) result:[/bold yellow]"
)
print_result(result)
def using_crawler_hooks(crawler):
# Example usage of the hooks for authentication and setting a cookie
def on_driver_created(driver):
print("[HOOK] on_driver_created")
# Example customization: maximize the window
driver.maximize_window()
# Example customization: logging in to a hypothetical website
driver.get("https://example.com/login")
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.NAME, "username"))
)
driver.find_element(By.NAME, "username").send_keys("testuser")
driver.find_element(By.NAME, "password").send_keys("password123")
driver.find_element(By.NAME, "login").click()
WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.ID, "welcome"))
)
# Add a custom cookie
driver.add_cookie({"name": "test_cookie", "value": "cookie_value"})
return driver
def before_get_url(driver):
print("[HOOK] before_get_url")
# Example customization: add a custom header
# Enable Network domain for sending headers
driver.execute_cdp_cmd("Network.enable", {})
# Add a custom header
driver.execute_cdp_cmd(
"Network.setExtraHTTPHeaders", {"headers": {"X-Test-Header": "test"}}
)
return driver
def after_get_url(driver):
print("[HOOK] after_get_url")
# Example customization: log the URL
print(driver.current_url)
return driver
def before_return_html(driver, html):
print("[HOOK] before_return_html")
# Example customization: log the HTML
print(len(html))
return driver
cprint(
"\n🔗 [bold cyan]Using Crawler Hooks: Let's see how we can customize the crawler using hooks![/bold cyan]",
True,
)
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
crawler_strategy.set_hook("on_driver_created", on_driver_created)
crawler_strategy.set_hook("before_get_url", before_get_url)
crawler_strategy.set_hook("after_get_url", after_get_url)
crawler_strategy.set_hook("before_return_html", before_return_html)
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
crawler.warmup()
result = crawler.run(url="https://example.com")
cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
print_result(result=result)
def using_crawler_hooks_delay_example(crawler):
def delay(driver):
print("Delaying for 5 seconds...")
time.sleep(5)
print("Resuming...")
def create_crawler():
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
crawler_strategy.set_hook("after_get_url", delay)
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
crawler.warmup()
return crawler
cprint(
"\n🔗 [bold cyan]Using Crawler Hooks: Let's add a delay after fetching the url to make sure entire page is fetched.[/bold cyan]"
)
crawler = create_crawler()
result = crawler.run(url="https://google.com", bypass_cache=True)
cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
print_result(result)
def main():
cprint(
"🌟 [bold green]Welcome to the Crawl4ai Quickstart Guide! Let's dive into some web crawling fun! 🌐[/bold green]"
)
cprint(
"⛳️ [bold cyan]First Step: Create an instance of WebCrawler and call the `warmup()` function.[/bold cyan]"
)
cprint(
"If this is the first time you're running Crawl4ai, this might take a few seconds to load required model files."
)
crawler = create_crawler()
crawler.always_by_pass_cache = True
basic_usage(crawler)
# basic_usage_some_params(crawler)
understanding_parameters(crawler)
crawler.always_by_pass_cache = True
screenshot_usage(crawler)
add_chunking_strategy(crawler)
add_extraction_strategy(crawler)
add_llm_extraction_strategy(crawler)
targeted_extraction(crawler)
interactive_extraction(crawler)
multiple_scripts(crawler)
cprint(
"\n🎉 [bold green]Congratulations! You've made it through the Crawl4ai Quickstart Guide! Now go forth and crawl the web like a pro! 🕸️[/bold green]"
)
if __name__ == "__main__":
main()

View File

@@ -1,735 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "6yLvrXn7yZQI"
},
"source": [
"# Crawl4AI: Advanced Web Crawling and Data Extraction\n",
"\n",
"Welcome to this interactive notebook showcasing Crawl4AI, an advanced asynchronous web crawling and data extraction library.\n",
"\n",
"- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n",
"- Twitter: [@unclecode](https://twitter.com/unclecode)\n",
"- Website: [https://crawl4ai.com](https://crawl4ai.com)\n",
"\n",
"Let's explore the powerful features of Crawl4AI!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KIn_9nxFyZQK"
},
"source": [
"## Installation\n",
"\n",
"First, let's install Crawl4AI from GitHub:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mSnaxLf3zMog"
},
"outputs": [],
"source": [
"!sudo apt-get update && sudo apt-get install -y libwoff1 libopus0 libwebp6 libwebpdemux2 libenchant1c2a libgudev-1.0-0 libsecret-1-0 libhyphen0 libgdk-pixbuf2.0-0 libegl1 libnotify4 libxslt1.1 libevent-2.1-7 libgles2 libvpx6 libxcomposite1 libatk1.0-0 libatk-bridge2.0-0 libepoxy0 libgtk-3-0 libharfbuzz-icu0"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xlXqaRtayZQK"
},
"outputs": [],
"source": [
"!pip install crawl4ai\n",
"!pip install nest-asyncio\n",
"!playwright install"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qKCE7TI7yZQL"
},
"source": [
"Now, let's import the necessary libraries:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "I67tr7aAyZQL"
},
"outputs": [],
"source": [
"import asyncio\n",
"import nest_asyncio\n",
"from crawl4ai import AsyncWebCrawler\n",
"from crawl4ai.extraction_strategy import JsonCssExtractionStrategy, LLMExtractionStrategy\n",
"import json\n",
"import time\n",
"from pydantic import BaseModel, Field\n",
"\n",
"nest_asyncio.apply()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "h7yR_Rt_yZQM"
},
"source": [
"## Basic Usage\n",
"\n",
"Let's start with a simple crawl example:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "yBh6hf4WyZQM",
"outputId": "0f83af5c-abba-4175-ed95-70b7512e6bcc"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.05 seconds\n",
"[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.05 seconds.\n",
"18102\n"
]
}
],
"source": [
"async def simple_crawl():\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" result = await crawler.arun(url=\"https://www.nbcnews.com/business\")\n",
" print(len(result.markdown))\n",
"await simple_crawl()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9rtkgHI28uI4"
},
"source": [
"💡 By default, **Crawl4AI** caches the result of every URL, so the next time you call it, youll get an instant result. But if you want to bypass the cache, just set `bypass_cache=True`."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MzZ0zlJ9yZQM"
},
"source": [
"## Advanced Features\n",
"\n",
"### Executing JavaScript and Using CSS Selectors"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "gHStF86xyZQM",
"outputId": "34d0fb6d-4dec-4677-f76e-85a1f082829b"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n",
"[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 6.06 seconds\n",
"[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.10 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.11 seconds.\n",
"41135\n"
]
}
],
"source": [
"async def js_and_css():\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" js_code = [\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"]\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" js_code=js_code,\n",
" # css_selector=\"YOUR_CSS_SELECTOR_HERE\",\n",
" bypass_cache=True\n",
" )\n",
" print(len(result.markdown))\n",
"\n",
"await js_and_css()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cqE_W4coyZQM"
},
"source": [
"### Using a Proxy\n",
"\n",
"Note: You'll need to replace the proxy URL with a working proxy for this example to run successfully."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "QjAyiAGqyZQM"
},
"outputs": [],
"source": [
"async def use_proxy():\n",
" async with AsyncWebCrawler(verbose=True, proxy=\"http://your-proxy-url:port\") as crawler:\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" bypass_cache=True\n",
" )\n",
" print(result.markdown[:500]) # Print first 500 characters\n",
"\n",
"# Uncomment the following line to run the proxy example\n",
"# await use_proxy()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "XTZ88lbayZQN"
},
"source": [
"### Extracting Structured Data with OpenAI\n",
"\n",
"Note: You'll need to set your OpenAI API key as an environment variable for this example to work."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "fIOlDayYyZQN",
"outputId": "cb8359cc-dee0-4762-9698-5dfdcee055b8"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://openai.com/api/pricing/ using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://openai.com/api/pricing/ successfully!\n",
"[LOG] 🚀 Crawling done for https://openai.com/api/pricing/, success: True, time taken: 3.77 seconds\n",
"[LOG] 🚀 Content extracted for https://openai.com/api/pricing/, success: True, time taken: 0.21 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://openai.com/api/pricing/, Strategy: AsyncWebCrawler\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 0\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 1\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 2\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 3\n",
"[LOG] Extracted 4 blocks from URL: https://openai.com/api/pricing/ block index: 3\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 4\n",
"[LOG] Extracted 5 blocks from URL: https://openai.com/api/pricing/ block index: 0\n",
"[LOG] Extracted 1 blocks from URL: https://openai.com/api/pricing/ block index: 4\n",
"[LOG] Extracted 8 blocks from URL: https://openai.com/api/pricing/ block index: 1\n",
"[LOG] Extracted 12 blocks from URL: https://openai.com/api/pricing/ block index: 2\n",
"[LOG] 🚀 Extraction done for https://openai.com/api/pricing/, time taken: 8.55 seconds.\n",
"5029\n"
]
}
],
"source": [
"import os\n",
"from google.colab import userdata\n",
"os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')\n",
"\n",
"class OpenAIModelFee(BaseModel):\n",
" model_name: str = Field(..., description=\"Name of the OpenAI model.\")\n",
" input_fee: str = Field(..., description=\"Fee for input token for the OpenAI model.\")\n",
" output_fee: str = Field(..., description=\"Fee for output token for the OpenAI model.\")\n",
"\n",
"async def extract_openai_fees():\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" result = await crawler.arun(\n",
" url='https://openai.com/api/pricing/',\n",
" word_count_threshold=1,\n",
" extraction_strategy=LLMExtractionStrategy(\n",
" provider=\"openai/gpt-4o\", api_token=os.getenv('OPENAI_API_KEY'),\n",
" schema=OpenAIModelFee.schema(),\n",
" extraction_type=\"schema\",\n",
" instruction=\"\"\"From the crawled content, extract all mentioned model names along with their fees for input and output tokens.\n",
" Do not miss any models in the entire content. One extracted model JSON format should look like this:\n",
" {\"model_name\": \"GPT-4\", \"input_fee\": \"US$10.00 / 1M tokens\", \"output_fee\": \"US$30.00 / 1M tokens\"}.\"\"\"\n",
" ),\n",
" bypass_cache=True,\n",
" )\n",
" print(len(result.extracted_content))\n",
"\n",
"# Uncomment the following line to run the OpenAI extraction example\n",
"await extract_openai_fees()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BypA5YxEyZQN"
},
"source": [
"### Advanced Multi-Page Crawling with JavaScript Execution"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tfkcVQ0b7mw-"
},
"source": [
"## Advanced Multi-Page Crawling with JavaScript Execution\n",
"\n",
"This example demonstrates Crawl4AI's ability to handle complex crawling scenarios, specifically extracting commits from multiple pages of a GitHub repository. The challenge here is that clicking the \"Next\" button doesn't load a new page, but instead uses asynchronous JavaScript to update the content. This is a common hurdle in modern web crawling.\n",
"\n",
"To overcome this, we use Crawl4AI's custom JavaScript execution to simulate clicking the \"Next\" button, and implement a custom hook to detect when new data has loaded. Our strategy involves comparing the first commit's text before and after \"clicking\" Next, waiting until it changes to confirm new data has rendered. This showcases Crawl4AI's flexibility in handling dynamic content and its ability to implement custom logic for even the most challenging crawling tasks."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "qUBKGpn3yZQN",
"outputId": "3e555b6a-ed33-42f4-cce9-499a923fbe17"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
"[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 5.16 seconds\n",
"[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.28 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.28 seconds.\n",
"Page 1: Found 35 commits\n",
"[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
"[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.78 seconds\n",
"[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.90 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.90 seconds.\n",
"Page 2: Found 35 commits\n",
"[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
"[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 2.00 seconds\n",
"[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.74 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.75 seconds.\n",
"Page 3: Found 35 commits\n",
"Successfully crawled 105 commits across 3 pages\n"
]
}
],
"source": [
"import re\n",
"from bs4 import BeautifulSoup\n",
"\n",
"async def crawl_typescript_commits():\n",
" first_commit = \"\"\n",
" async def on_execution_started(page):\n",
" nonlocal first_commit\n",
" try:\n",
" while True:\n",
" await page.wait_for_selector('li.Box-sc-g0xbh4-0 h4')\n",
" commit = await page.query_selector('li.Box-sc-g0xbh4-0 h4')\n",
" commit = await commit.evaluate('(element) => element.textContent')\n",
" commit = re.sub(r'\\s+', '', commit)\n",
" if commit and commit != first_commit:\n",
" first_commit = commit\n",
" break\n",
" await asyncio.sleep(0.5)\n",
" except Exception as e:\n",
" print(f\"Warning: New content didn't appear after JavaScript execution: {e}\")\n",
"\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" crawler.crawler_strategy.set_hook('on_execution_started', on_execution_started)\n",
"\n",
" url = \"https://github.com/microsoft/TypeScript/commits/main\"\n",
" session_id = \"typescript_commits_session\"\n",
" all_commits = []\n",
"\n",
" js_next_page = \"\"\"\n",
" const button = document.querySelector('a[data-testid=\"pagination-next-button\"]');\n",
" if (button) button.click();\n",
" \"\"\"\n",
"\n",
" for page in range(3): # Crawl 3 pages\n",
" result = await crawler.arun(\n",
" url=url,\n",
" session_id=session_id,\n",
" css_selector=\"li.Box-sc-g0xbh4-0\",\n",
" js=js_next_page if page > 0 else None,\n",
" bypass_cache=True,\n",
" js_only=page > 0\n",
" )\n",
"\n",
" assert result.success, f\"Failed to crawl page {page + 1}\"\n",
"\n",
" soup = BeautifulSoup(result.cleaned_html, 'html.parser')\n",
" commits = soup.select(\"li\")\n",
" all_commits.extend(commits)\n",
"\n",
" print(f\"Page {page + 1}: Found {len(commits)} commits\")\n",
"\n",
" await crawler.crawler_strategy.kill_session(session_id)\n",
" print(f\"Successfully crawled {len(all_commits)} commits across 3 pages\")\n",
"\n",
"await crawl_typescript_commits()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EJRnYsp6yZQN"
},
"source": [
"### Using JsonCssExtractionStrategy for Fast Structured Output"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1ZMqIzB_8SYp"
},
"source": [
"The JsonCssExtractionStrategy is a powerful feature of Crawl4AI that allows for precise, structured data extraction from web pages. Here's how it works:\n",
"\n",
"1. You define a schema that describes the pattern of data you're interested in extracting.\n",
"2. The schema includes a base selector that identifies repeating elements on the page.\n",
"3. Within the schema, you define fields, each with its own selector and type.\n",
"4. These field selectors are applied within the context of each base selector element.\n",
"5. The strategy supports nested structures, lists within lists, and various data types.\n",
"6. You can even include computed fields for more complex data manipulation.\n",
"\n",
"This approach allows for highly flexible and precise data extraction, transforming semi-structured web content into clean, structured JSON data. It's particularly useful for extracting consistent data patterns from pages like product listings, news articles, or search results.\n",
"\n",
"For more details and advanced usage, check out the full documentation on the Crawl4AI website."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "trCMR2T9yZQN",
"outputId": "718d36f4-cccf-40f4-8d8c-c3ba73524d16"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n",
"[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 7.00 seconds\n",
"[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.32 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.48 seconds.\n",
"Successfully extracted 11 news teasers\n",
"{\n",
" \"category\": \"Business News\",\n",
" \"headline\": \"NBC ripped up its Olympics playbook for 2024 \\u2014 so far, the new strategy paid off\",\n",
" \"summary\": \"The Olympics have long been key to NBCUniversal. Paris marked the 18th Olympic Games broadcast by NBC in the U.S.\",\n",
" \"time\": \"13h ago\",\n",
" \"image\": {\n",
" \"src\": \"https://media-cldnry.s-nbcnews.com/image/upload/t_focal-200x100,f_auto,q_auto:best/rockcms/2024-09/240903-nbc-olympics-ch-1344-c7a486.jpg\",\n",
" \"alt\": \"Mike Tirico.\"\n",
" },\n",
" \"link\": \"https://www.nbcnews.com/business\"\n",
"}\n"
]
}
],
"source": [
"async def extract_news_teasers():\n",
" schema = {\n",
" \"name\": \"News Teaser Extractor\",\n",
" \"baseSelector\": \".wide-tease-item__wrapper\",\n",
" \"fields\": [\n",
" {\n",
" \"name\": \"category\",\n",
" \"selector\": \".unibrow span[data-testid='unibrow-text']\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"headline\",\n",
" \"selector\": \".wide-tease-item__headline\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"summary\",\n",
" \"selector\": \".wide-tease-item__description\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"time\",\n",
" \"selector\": \"[data-testid='wide-tease-date']\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"image\",\n",
" \"type\": \"nested\",\n",
" \"selector\": \"picture.teasePicture img\",\n",
" \"fields\": [\n",
" {\"name\": \"src\", \"type\": \"attribute\", \"attribute\": \"src\"},\n",
" {\"name\": \"alt\", \"type\": \"attribute\", \"attribute\": \"alt\"},\n",
" ],\n",
" },\n",
" {\n",
" \"name\": \"link\",\n",
" \"selector\": \"a[href]\",\n",
" \"type\": \"attribute\",\n",
" \"attribute\": \"href\",\n",
" },\n",
" ],\n",
" }\n",
"\n",
" extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)\n",
"\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" extraction_strategy=extraction_strategy,\n",
" bypass_cache=True,\n",
" )\n",
"\n",
" assert result.success, \"Failed to crawl the page\"\n",
"\n",
" news_teasers = json.loads(result.extracted_content)\n",
" print(f\"Successfully extracted {len(news_teasers)} news teasers\")\n",
" print(json.dumps(news_teasers[0], indent=2))\n",
"\n",
"await extract_news_teasers()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FnyVhJaByZQN"
},
"source": [
"## Speed Comparison\n",
"\n",
"Let's compare the speed of Crawl4AI with Firecrawl, a paid service. Note that we can't run Firecrawl in this Colab environment, so we'll simulate its performance based on previously recorded data."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "agDD186f3wig"
},
"source": [
"💡 **Note on Speed Comparison:**\n",
"\n",
"The speed test conducted here is running on Google Colab, where the internet speed and performance can vary and may not reflect optimal conditions. When we call Firecrawl's API, we're seeing its best performance, while Crawl4AI's performance is limited by Colab's network speed.\n",
"\n",
"For a more accurate comparison, it's recommended to run these tests on your own servers or computers with a stable and fast internet connection. Despite these limitations, Crawl4AI still demonstrates faster performance in this environment.\n",
"\n",
"If you run these tests locally, you may observe an even more significant speed advantage for Crawl4AI compared to other services."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "F7KwHv8G1LbY"
},
"outputs": [],
"source": [
"!pip install firecrawl"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "91813zILyZQN",
"outputId": "663223db-ab89-4976-b233-05ceca62b19b"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Firecrawl (simulated):\n",
"Time taken: 4.38 seconds\n",
"Content length: 41967 characters\n",
"Images found: 49\n",
"\n",
"Crawl4AI (simple crawl):\n",
"Time taken: 4.22 seconds\n",
"Content length: 18221 characters\n",
"Images found: 49\n",
"\n",
"Crawl4AI (with JavaScript execution):\n",
"Time taken: 9.13 seconds\n",
"Content length: 34243 characters\n",
"Images found: 89\n"
]
}
],
"source": [
"import os\n",
"from google.colab import userdata\n",
"os.environ['FIRECRAWL_API_KEY'] = userdata.get('FIRECRAWL_API_KEY')\n",
"import time\n",
"from firecrawl import FirecrawlApp\n",
"\n",
"async def speed_comparison():\n",
" # Simulated Firecrawl performance\n",
" app = FirecrawlApp(api_key=os.environ['FIRECRAWL_API_KEY'])\n",
" start = time.time()\n",
" scrape_status = app.scrape_url(\n",
" 'https://www.nbcnews.com/business',\n",
" params={'formats': ['markdown', 'html']}\n",
" )\n",
" end = time.time()\n",
" print(\"Firecrawl (simulated):\")\n",
" print(f\"Time taken: {end - start:.2f} seconds\")\n",
" print(f\"Content length: {len(scrape_status['markdown'])} characters\")\n",
" print(f\"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}\")\n",
" print()\n",
"\n",
" async with AsyncWebCrawler() as crawler:\n",
" # Crawl4AI simple crawl\n",
" start = time.time()\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" word_count_threshold=0,\n",
" bypass_cache=True,\n",
" verbose=False\n",
" )\n",
" end = time.time()\n",
" print(\"Crawl4AI (simple crawl):\")\n",
" print(f\"Time taken: {end - start:.2f} seconds\")\n",
" print(f\"Content length: {len(result.markdown)} characters\")\n",
" print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n",
" print()\n",
"\n",
" # Crawl4AI with JavaScript execution\n",
" start = time.time()\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" js_code=[\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"],\n",
" word_count_threshold=0,\n",
" bypass_cache=True,\n",
" verbose=False\n",
" )\n",
" end = time.time()\n",
" print(\"Crawl4AI (with JavaScript execution):\")\n",
" print(f\"Time taken: {end - start:.2f} seconds\")\n",
" print(f\"Content length: {len(result.markdown)} characters\")\n",
" print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n",
"\n",
"await speed_comparison()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OBFFYVJIyZQN"
},
"source": [
"If you run on a local machine with a proper internet speed:\n",
"- Simple crawl: Crawl4AI is typically over 3-4 times faster than Firecrawl.\n",
"- With JavaScript execution: Even when executing JavaScript to load more content (potentially doubling the number of images found), Crawl4AI is still faster than Firecrawl's simple crawl.\n",
"\n",
"Please note that actual performance may vary depending on network conditions and the specific content being crawled."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "A6_1RK1_yZQO"
},
"source": [
"## Conclusion\n",
"\n",
"In this notebook, we've explored the powerful features of Crawl4AI, including:\n",
"\n",
"1. Basic crawling\n",
"2. JavaScript execution and CSS selector usage\n",
"3. Proxy support\n",
"4. Structured data extraction with OpenAI\n",
"5. Advanced multi-page crawling with JavaScript execution\n",
"6. Fast structured output using JsonCssExtractionStrategy\n",
"7. Speed comparison with other services\n",
"\n",
"Crawl4AI offers a fast, flexible, and powerful solution for web crawling and data extraction tasks. Its asynchronous architecture and advanced features make it suitable for a wide range of applications, from simple web scraping to complex, multi-page data extraction scenarios.\n",
"\n",
"For more information and advanced usage, please visit the [Crawl4AI documentation](https://docs.crawl4ai.com/).\n",
"\n",
"Happy crawling!"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -13,11 +13,11 @@ from crawl4ai.deep_crawling import (
)
from crawl4ai.deep_crawling.scorers import KeywordRelevanceScorer
from crawl4ai.async_crawler_strategy import AsyncHTTPCrawlerStrategy
-from crawl4ai.configs import ProxyConfig
+from crawl4ai.proxy_strategy import ProxyConfig
from crawl4ai import RoundRobinProxyStrategy
from crawl4ai.content_filter_strategy import LLMContentFilter
from crawl4ai import DefaultMarkdownGenerator
-from crawl4ai.async_configs import LlmConfig
+from crawl4ai import LLMConfig
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
from crawl4ai.processors.pdf import PDFCrawlerStrategy, PDFContentScrapingStrategy
from pprint import pprint
@@ -284,9 +284,9 @@ async def llm_content_filter():
PART 5: LLM Content Filter
This function demonstrates:
-    - Configuring LLM providers via LlmConfig
+    - Configuring LLM providers via LLMConfig
- Using LLM to generate focused markdown
-    - LlmConfig for configuration
+    - LLMConfig for configuration
Note: Requires a valid API key for the chosen LLM provider
"""
@@ -296,7 +296,7 @@ async def llm_content_filter():
# Create LLM configuration
# Replace with your actual API key or set as environment variable
-    llm_config = LlmConfig(
+    llm_config = LLMConfig(
provider="gemini/gemini-1.5-pro",
api_token="env:GEMINI_API_KEY" # Will read from GEMINI_API_KEY environment variable
)
@@ -309,7 +309,7 @@ async def llm_content_filter():
# Create markdown generator with LLM filter
markdown_generator = DefaultMarkdownGenerator(
content_filter=LLMContentFilter(
-            llmConfig=llm_config,
+            llm_config=llm_config,
instruction="Extract key concepts and summaries"
)
)
@@ -381,7 +381,7 @@ async def llm_schema_generation():
PART 7: LLM Schema Generation
This function demonstrates:
-    - Configuring LLM providers via LlmConfig
+    - Configuring LLM providers via LLMConfig
- Using LLM to generate extraction schemas
- JsonCssExtractionStrategy
@@ -406,9 +406,9 @@ async def llm_schema_generation():
<div class="rating">4.7/5</div>
</div>
"""
print("\n📊 Setting up LlmConfig...")
print("\n📊 Setting up LLMConfig...")
# Create LLM configuration
-    llm_config = LlmConfig(
+    llm_config = LLMConfig(
provider="gemini/gemini-1.5-pro",
api_token="env:GEMINI_API_KEY"
)
@@ -416,7 +416,7 @@ async def llm_schema_generation():
print(" This would use the LLM to analyze HTML and create an extraction schema")
schema = JsonCssExtractionStrategy.generate_schema(
html=sample_html,
-        llmConfig = llm_config,
+        llm_config = llm_config,
query="Extract product name and price"
)
print("\n✅ Generated Schema:")

View File

@@ -0,0 +1,205 @@
# Network Requests & Console Message Capturing
Crawl4AI can capture all network requests and browser console messages during a crawl, which is invaluable for debugging, security analysis, or understanding page behavior.
## Configuration
To enable network and console capturing, use these configuration options:
```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
# Enable both network request capture and console message capture
config = CrawlerRunConfig(
capture_network_requests=True, # Capture all network requests and responses
capture_console_messages=True # Capture all browser console output
)
```
## Example Usage
```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
async def main():
# Enable both network request capture and console message capture
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
config=config
)
if result.success:
# Analyze network requests
if result.network_requests:
print(f"Captured {len(result.network_requests)} network events")
# Count request types
request_count = len([r for r in result.network_requests if r.get("event_type") == "request"])
response_count = len([r for r in result.network_requests if r.get("event_type") == "response"])
failed_count = len([r for r in result.network_requests if r.get("event_type") == "request_failed"])
print(f"Requests: {request_count}, Responses: {response_count}, Failed: {failed_count}")
# Find API calls
api_calls = [r for r in result.network_requests
if r.get("event_type") == "request" and "api" in r.get("url", "")]
if api_calls:
print(f"Detected {len(api_calls)} API calls:")
for call in api_calls[:3]: # Show first 3
print(f" - {call.get('method')} {call.get('url')}")
# Analyze console messages
if result.console_messages:
print(f"Captured {len(result.console_messages)} console messages")
# Group by type
message_types = {}
for msg in result.console_messages:
msg_type = msg.get("type", "unknown")
message_types[msg_type] = message_types.get(msg_type, 0) + 1
print("Message types:", message_types)
# Show errors (often the most important)
errors = [msg for msg in result.console_messages if msg.get("type") == "error"]
if errors:
print(f"Found {len(errors)} console errors:")
for err in errors[:2]: # Show first 2
print(f" - {err.get('text', '')[:100]}")
# Export all captured data to a file for detailed analysis
with open("network_capture.json", "w") as f:
json.dump({
"url": result.url,
"network_requests": result.network_requests or [],
"console_messages": result.console_messages or []
}, f, indent=2)
print("Exported detailed capture data to network_capture.json")
if __name__ == "__main__":
asyncio.run(main())
```
## Captured Data Structure
### Network Requests
The `result.network_requests` contains a list of dictionaries, each representing a network event with these common fields:
| Field | Description |
|-------|-------------|
| `event_type` | Type of event: `"request"`, `"response"`, or `"request_failed"` |
| `url` | The URL of the request |
| `timestamp` | Unix timestamp when the event was captured |
#### Request Event Fields
```json
{
"event_type": "request",
"url": "https://example.com/api/data.json",
"method": "GET",
"headers": {"User-Agent": "...", "Accept": "..."},
"post_data": "key=value&otherkey=value",
"resource_type": "fetch",
"is_navigation_request": false,
"timestamp": 1633456789.123
}
```
#### Response Event Fields
```json
{
"event_type": "response",
"url": "https://example.com/api/data.json",
"status": 200,
"status_text": "OK",
"headers": {"Content-Type": "application/json", "Cache-Control": "..."},
"from_service_worker": false,
"request_timing": {"requestTime": 1234.56, "receiveHeadersEnd": 1234.78},
"timestamp": 1633456789.456
}
```
#### Failed Request Event Fields
```json
{
"event_type": "request_failed",
"url": "https://example.com/missing.png",
"method": "GET",
"resource_type": "image",
"failure_text": "net::ERR_ABORTED 404",
"timestamp": 1633456789.789
}
```
### Console Messages
The `result.console_messages` contains a list of dictionaries, each representing a console message with these common fields:
| Field | Description |
|-------|-------------|
| `type` | Message type: `"log"`, `"error"`, `"warning"`, `"info"`, etc. |
| `text` | The message text |
| `timestamp` | Unix timestamp when the message was captured |
#### Console Message Example
```json
{
"type": "error",
"text": "Uncaught TypeError: Cannot read property 'length' of undefined",
"location": "https://example.com/script.js:123:45",
"timestamp": 1633456790.123
}
```
## Key Benefits
- **Full Request Visibility**: Capture all network activity including:
- Requests (URLs, methods, headers, post data)
- Responses (status codes, headers, timing)
- Failed requests (with error messages)
- **Console Message Access**: View all JavaScript console output:
- Log messages
- Warnings
- Errors with stack traces
- Developer debugging information
- **Debugging Power**: Identify issues such as:
- Failed API calls or resource loading
- JavaScript errors affecting page functionality
- CORS or other security issues
- Hidden API endpoints and data flows
- **Security Analysis**: Detect:
- Unexpected third-party requests
- Data leakage in request payloads
- Suspicious script behavior
- **Performance Insights** (see the sketch at the end of this page): Analyze:
- Request timing data
- Resource loading patterns
- Potential bottlenecks
## Use Cases
1. **API Discovery**: Identify hidden endpoints and data flows in single-page applications
2. **Debugging**: Track down JavaScript errors affecting page functionality
3. **Security Auditing**: Detect unwanted third-party requests or data leakage
4. **Performance Analysis**: Identify slow-loading resources
5. **Ad/Tracker Analysis**: Detect and catalog advertising or tracking calls
This capability is especially valuable for complex sites with heavy JavaScript, single-page applications, or when you need to understand the exact communication happening between a browser and servers.
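As a quick illustration of the performance use case, here is a minimal sketch that pairs request and response events by URL and uses their timestamps to approximate per-resource load times. It assumes a `result` from a crawl with capturing enabled, and pairing by URL is a deliberate simplification: URLs fetched more than once would need more careful matching.
```python
def approximate_timings(network_requests):
    # Record the first request timestamp per URL, then compute the
    # delta to the matching response event (a rough approximation).
    request_started = {}
    timings = []
    for event in network_requests or []:
        url = event.get("url")
        if event.get("event_type") == "request" and url not in request_started:
            request_started[url] = event.get("timestamp", 0.0)
        elif event.get("event_type") == "response" and url in request_started:
            timings.append((url, event.get("timestamp", 0.0) - request_started.pop(url)))
    # Slowest resources first
    return sorted(timings, key=lambda pair: pair[1], reverse=True)

for url, seconds in approximate_timings(result.network_requests)[:5]:
    print(f"{seconds:.3f}s  {url}")
```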

View File

@@ -15,6 +15,7 @@ class CrawlResult(BaseModel):
downloaded_files: Optional[List[str]] = None
screenshot: Optional[str] = None
pdf : Optional[bytes] = None
mhtml: Optional[str] = None
markdown: Optional[Union[str, MarkdownGenerationResult]] = None
extracted_content: Optional[str] = None
metadata: Optional[dict] = None
@@ -236,7 +237,16 @@ if result.pdf:
f.write(result.pdf)
```
-### 5.5 **`metadata`** *(Optional[dict])*
+### 5.5 **`mhtml`** *(Optional[str])*
**What**: MHTML snapshot of the page if `capture_mhtml=True` in `CrawlerRunConfig`. MHTML (MIME HTML) format preserves the entire web page with all its resources (CSS, images, scripts, etc.) in a single file.
**Usage**:
```python
if result.mhtml:
with open("page.mhtml", "w", encoding="utf-8") as f:
f.write(result.mhtml)
```
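For convenience, here is a minimal end-to-end sketch (using only parameters documented on this page) that enables MHTML capture and saves the snapshot:
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def save_mhtml():
    # capture_mhtml tells the crawler to snapshot the full page, resources included
    config = CrawlerRunConfig(capture_mhtml=True)
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://example.com", config=config)
        if result.success and result.mhtml:
            with open("page.mhtml", "w", encoding="utf-8") as f:
                f.write(result.mhtml)

asyncio.run(save_mhtml())
```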
### 5.6 **`metadata`** *(Optional[dict])*
**What**: Page-level metadata if discovered (title, description, OG data, etc.).
**Usage**:
```python
@@ -271,7 +281,69 @@ for result in results:
---
-## 7. Example: Accessing Everything
+## 7. Network Requests & Console Messages
When you enable network and console message capturing in `CrawlerRunConfig` using `capture_network_requests=True` and `capture_console_messages=True`, the `CrawlResult` will include these fields:
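Both flags live on `CrawlerRunConfig`:
```python
from crawl4ai import CrawlerRunConfig

config = CrawlerRunConfig(
    capture_network_requests=True,
    capture_console_messages=True
)
```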
### 7.1 **`network_requests`** *(Optional[List[Dict[str, Any]]])*
**What**: A list of dictionaries containing information about all network requests, responses, and failures captured during the crawl.
**Structure**:
- Each item has an `event_type` field that can be `"request"`, `"response"`, or `"request_failed"`.
- Request events include `url`, `method`, `headers`, `post_data`, `resource_type`, and `is_navigation_request`.
- Response events include `url`, `status`, `status_text`, `headers`, and `request_timing`.
- Failed request events include `url`, `method`, `resource_type`, and `failure_text`.
- All events include a `timestamp` field.
**Usage**:
```python
if result.network_requests:
# Count different types of events
requests = [r for r in result.network_requests if r.get("event_type") == "request"]
responses = [r for r in result.network_requests if r.get("event_type") == "response"]
failures = [r for r in result.network_requests if r.get("event_type") == "request_failed"]
print(f"Captured {len(requests)} requests, {len(responses)} responses, and {len(failures)} failures")
# Analyze API calls
api_calls = [r for r in requests if "api" in r.get("url", "")]
# Identify failed resources
for failure in failures:
print(f"Failed to load: {failure.get('url')} - {failure.get('failure_text')}")
```
### 7.2 **`console_messages`** *(Optional[List[Dict[str, Any]]])*
**What**: A list of dictionaries containing all browser console messages captured during the crawl.
**Structure**:
- Each item has a `type` field indicating the message type (e.g., `"log"`, `"error"`, `"warning"`, etc.).
- The `text` field contains the actual message text.
- Some messages include `location` information (URL, line, column).
- All messages include a `timestamp` field.
**Usage**:
```python
if result.console_messages:
# Count messages by type
message_types = {}
for msg in result.console_messages:
msg_type = msg.get("type", "unknown")
message_types[msg_type] = message_types.get(msg_type, 0) + 1
print(f"Message type counts: {message_types}")
# Display errors (which are usually most important)
for msg in result.console_messages:
if msg.get("type") == "error":
print(f"Error: {msg.get('text')}")
```
These fields provide deep visibility into the page's network activity and browser console, which is invaluable for debugging, security analysis, and understanding complex web applications.
For more details on network and console capturing, see the [Network & Console Capture documentation](../advanced/network-console-capture.md).
---
## 8. Example: Accessing Everything
```python
async def handle_result(result: CrawlResult):
@@ -304,16 +376,36 @@ async def handle_result(result: CrawlResult):
if result.extracted_content:
print("Structured data:", result.extracted_content)
-    # Screenshot/PDF
+    # Screenshot/PDF/MHTML
if result.screenshot:
print("Screenshot length:", len(result.screenshot))
if result.pdf:
print("PDF bytes length:", len(result.pdf))
if result.mhtml:
print("MHTML length:", len(result.mhtml))
# Network and console capturing
if result.network_requests:
print(f"Network requests captured: {len(result.network_requests)}")
# Analyze request types
req_types = {}
for req in result.network_requests:
if "resource_type" in req:
req_types[req["resource_type"]] = req_types.get(req["resource_type"], 0) + 1
print(f"Resource types: {req_types}")
if result.console_messages:
print(f"Console messages captured: {len(result.console_messages)}")
# Count by message type
msg_types = {}
for msg in result.console_messages:
msg_types[msg.get("type", "unknown")] = msg_types.get(msg.get("type", "unknown"), 0) + 1
print(f"Message types: {msg_types}")
```
---
## 8. Key Points & Future
## 9. Key Points & Future
1. **Deprecated legacy properties of CrawlResult**
- `markdown_v2` - Deprecated in v0.5. Just use `markdown`. It holds the `MarkdownGenerationResult` now (see the migration sketch below)!
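A hedged migration sketch follows; the attribute name `raw_markdown` is an assumption about `MarkdownGenerationResult` rather than something guaranteed by this page, so the code falls back to the plain string form if the shape differs:
```python
# Before (deprecated): result.markdown_v2.raw_markdown
md = result.markdown  # now holds a MarkdownGenerationResult
raw = getattr(md, "raw_markdown", str(md))  # hedge: fall back to string form
print(raw[:200])
```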

View File

@@ -71,7 +71,8 @@ We group them by category.
| **`word_count_threshold`** | `int` (default: ~200) | Skips text blocks below X words. Helps ignore trivial sections. |
| **`extraction_strategy`** | `ExtractionStrategy` (default: None) | If set, extracts structured data (CSS-based, LLM-based, etc.). |
| **`markdown_generator`** | `MarkdownGenerationStrategy` (None) | If you want specialized markdown output (citations, filtering, chunking, etc.). |
-| **`css_selector`** | `str` (None) | Retains only the part of the page matching this selector. |
+| **`css_selector`** | `str` (None) | Retains only the part of the page matching this selector. Affects the entire extraction process. |
+| **`target_elements`** | `List[str]` (None) | List of CSS selectors for elements to focus on for markdown generation and data extraction, while still processing the entire page for links, media, etc. Provides more flexibility than `css_selector` (see the sketch below this table). |
| **`excluded_tags`** | `list` (None) | Removes entire tags (e.g. `["script", "style"]`). |
| **`excluded_selector`** | `str` (None) | Like `css_selector` but to exclude. E.g. `"#ads, .tracker"`. |
| **`only_text`** | `bool` (False) | If `True`, tries to extract text-only content. |
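To make the `css_selector` / `target_elements` distinction concrete, here is a minimal sketch; the selectors themselves are illustrative placeholders:
```python
from crawl4ai import CrawlerRunConfig

# css_selector: everything downstream sees only the matching region
strict_config = CrawlerRunConfig(css_selector="article.main")

# target_elements: markdown and extraction focus on these elements,
# but links/media are still collected from the whole page
focused_config = CrawlerRunConfig(target_elements=["article.main", "div.comments"])
```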
@@ -139,6 +140,7 @@ If your page is a single-page app with repeated JS updates, set `js_only=True` i
| **`screenshot_wait_for`** | `float or None` | Extra wait time before the screenshot. |
| **`screenshot_height_threshold`** | `int` (~20000) | If the page is taller than this, alternate screenshot strategies are used. |
| **`pdf`** | `bool` (False) | If `True`, returns a PDF in `result.pdf`. |
| **`capture_mhtml`** | `bool` (False) | If `True`, captures an MHTML snapshot of the page in `result.mhtml`. MHTML includes all page resources (CSS, images, etc.) in a single file. |
| **`image_description_min_word_threshold`** | `int` (~50) | Minimum words for an image's alt text or description to be considered valid. |
| **`image_score_threshold`** | `int` (~3) | Filter out low-scoring images. The crawler scores images by relevance (size, context, etc.). |
| **`exclude_external_images`** | `bool` (False) | Exclude images from other domains. |
@@ -245,8 +247,8 @@ run_config = CrawlerRunConfig(
)
```
-# 3. **LlmConfig** - Setting up LLM providers
-LlmConfig is useful to pass LLM provider config to strategies and functions that rely on LLMs to do extraction, filtering, schema generation etc. Currently it can be used in the following -
+# 3. **LLMConfig** - Setting up LLM providers
+LLMConfig is useful to pass LLM provider config to strategies and functions that rely on LLMs to do extraction, filtering, schema generation, etc. Currently it can be used in the following:
1. LLMExtractionStrategy
2. LLMContentFilter
@@ -262,7 +264,7 @@ LlmConfig is useful to pass LLM provider config to strategies and functions that
## 3.2 Example Usage
```python
llmConfig = LlmConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY"))
llm_config = LLMConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY"))
```
## 4. Putting It All Together
@@ -270,7 +272,7 @@ llmConfig = LlmConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI
- **Use** `BrowserConfig` for **global** browser settings: engine, headless, proxy, user agent.
- **Use** `CrawlerRunConfig` for each crawl's **context**: how to filter content, handle caching, wait for dynamic elements, or run JS.
- **Pass** both configs to `AsyncWebCrawler` (the `BrowserConfig`) and then to `arun()` (the `CrawlerRunConfig`).
-- **Use** `LlmConfig` for LLM provider configurations that can be used across all extraction, filtering, schema generation tasks. Can be used in - `LLMExtractionStrategy`, `LLMContentFilter`, `JsonCssExtractionStrategy.generate_schema` & `JsonXPathExtractionStrategy.generate_schema`
+- **Use** `LLMConfig` for LLM provider configurations that can be used across all extraction, filtering, schema generation tasks. Can be used in - `LLMExtractionStrategy`, `LLMContentFilter`, `JsonCssExtractionStrategy.generate_schema` & `JsonXPathExtractionStrategy.generate_schema`
```python
# Create a modified copy with the clone() method

View File

@@ -131,7 +131,7 @@ OverlappingWindowChunking(
```python
from pydantic import BaseModel
from crawl4ai.extraction_strategy import LLMExtractionStrategy
-from crawl4ai.async_configs import LlmConfig
+from crawl4ai import LLMConfig
# Define schema
class Article(BaseModel):
@@ -141,7 +141,7 @@ class Article(BaseModel):
# Create strategy
strategy = LLMExtractionStrategy(
llmConfig = LlmConfig(provider="ollama/llama2"),
llm_config = LLMConfig(provider="ollama/llama2"),
schema=Article.schema(),
instruction="Extract article details"
)
@@ -198,7 +198,7 @@ result = await crawler.arun(
```python
from crawl4ai.chunking_strategy import OverlappingWindowChunking
-from crawl4ai.async_configs import LlmConfig
+from crawl4ai import LLMConfig
# Create chunking strategy
chunker = OverlappingWindowChunking(
@@ -208,7 +208,7 @@ chunker = OverlappingWindowChunking(
# Use with extraction strategy
strategy = LLMExtractionStrategy(
llmConfig = LlmConfig(provider="ollama/llama2"),
llm_config = LLMConfig(provider="ollama/llama2"),
chunking_strategy=chunker
)

View File

@@ -0,0 +1,444 @@
/* ==== File: docs/ask_ai/ask_ai.css ==== */
/* --- Basic Reset & Font --- */
body {
/* Attempt to inherit variables from parent window (iframe context) */
/* Fallback values if variables are not inherited */
--fallback-bg: #070708;
--fallback-font: #e8e9ed;
--fallback-secondary: #a3abba;
--fallback-primary: #50ffff;
--fallback-primary-dimmed: #09b5a5;
--fallback-border: #1d1d20;
--fallback-code-bg: #1e1e1e;
--fallback-invert-font: #222225;
--font-stack: dm, Monaco, Courier New, monospace, serif;
font-family: var(--font-stack, "Courier New", monospace); /* Use theme font stack */
background-color: var(--background-color, var(--fallback-bg));
color: var(--font-color, var(--fallback-font));
margin: 0;
padding: 0;
font-size: 14px; /* Match global font size */
line-height: 1.5em; /* Match global line height */
height: 100vh; /* Ensure body takes full height */
overflow: hidden; /* Prevent body scrollbars, panels handle scroll */
display: flex; /* Use flex for the main container */
}
a {
color: var(--secondary-color, var(--fallback-secondary));
text-decoration: none;
transition: color 0.2s;
}
a:hover {
color: var(--primary-color, var(--fallback-primary));
}
/* --- Main Container Layout --- */
.ai-assistant-container {
display: flex;
width: 100%;
height: 100%;
background-color: var(--background-color, var(--fallback-bg));
}
/* --- Sidebar Styling --- */
.sidebar {
flex-shrink: 0; /* Prevent sidebars from shrinking */
height: 100%;
display: flex;
flex-direction: column;
/* background-color: var(--code-bg-color, var(--fallback-code-bg)); */
overflow-y: hidden; /* Header fixed, list scrolls */
}
.left-sidebar {
flex-basis: 240px; /* Width of history panel */
border-right: 1px solid var(--progress-bar-background, var(--fallback-border));
}
.right-sidebar {
flex-basis: 280px; /* Width of citations panel */
border-left: 1px solid var(--progress-bar-background, var(--fallback-border));
}
.sidebar header {
padding: 0.6em 1em;
border-bottom: 1px solid var(--progress-bar-background, var(--fallback-border));
flex-shrink: 0;
display: flex;
justify-content: space-between;
align-items: center;
}
.sidebar header h3 {
margin: 0;
font-size: 1.1em;
color: var(--font-color, var(--fallback-font));
}
.sidebar ul {
list-style: none;
padding: 0.5em 0;
margin: 0;
overflow-y: auto; /* Enable scrolling for the list */
flex-grow: 1; /* Allow list to take remaining space */
}
.sidebar ul li {
padding: 0.3em 1em;
}
.sidebar ul li.no-citations,
.sidebar ul li.no-history {
color: var(--secondary-color, var(--fallback-secondary));
font-style: italic;
font-size: 0.9em;
padding-left: 1em;
}
.sidebar ul li a {
color: var(--secondary-color, var(--fallback-secondary));
text-decoration: none;
display: block;
padding: 0.2em 0.5em;
border-radius: 3px;
transition: background-color 0.2s, color 0.2s;
}
.sidebar ul li a:hover {
color: var(--primary-color, var(--fallback-primary));
background-color: rgba(80, 255, 255, 0.08); /* Use primary color with alpha */
}
/* Style for active history item */
#history-list li.active a {
color: var(--primary-dimmed-color, var(--fallback-primary-dimmed));
font-weight: bold;
background-color: rgba(80, 255, 255, 0.12);
}
/* --- Chat Panel Styling --- */
#chat-panel {
flex-grow: 1; /* Take remaining space */
display: flex;
flex-direction: column;
height: 100%;
overflow: hidden; /* Prevent overflow, internal elements handle scroll */
}
#chat-messages {
flex-grow: 1;
overflow-y: auto; /* Scrollable chat history */
padding: 1em 1.5em;
border-bottom: 1px solid var(--progress-bar-background, var(--fallback-border));
}
.message {
margin-bottom: 1em;
padding: 0.8em 1.2em;
border-radius: 8px;
max-width: 90%; /* Slightly wider */
line-height: 1.6;
/* Apply pre-wrap for better handling of spaces/newlines AND wrapping */
white-space: pre-wrap;
word-wrap: break-word; /* Ensure long words break */
}
.user-message {
background-color: var(--progress-bar-background, var(--fallback-border)); /* User message background */
color: var(--font-color, var(--fallback-font));
margin-left: auto; /* Align user messages to the right */
text-align: left;
}
.ai-message {
background-color: var(--code-bg-color, var(--fallback-code-bg)); /* AI message background */
color: var(--font-color, var(--fallback-font));
margin-right: auto; /* Align AI messages to the left */
border: 1px solid var(--progress-bar-background, var(--fallback-border));
}
.ai-message.welcome-message {
border: none;
background-color: transparent;
max-width: 100%;
text-align: center;
color: var(--secondary-color, var(--fallback-secondary));
white-space: normal;
}
/* Styles for code within messages */
.ai-message code {
background-color: var(--invert-font-color, var(--fallback-invert-font)) !important; /* Use light bg for code */
/* color: var(--background-color, var(--fallback-bg)) !important; Dark text */
padding: 0.1em 0.4em;
border-radius: 4px;
font-size: 0.9em;
}
.ai-message pre {
background-color: var(--invert-font-color, var(--fallback-invert-font)) !important;
color: var(--background-color, var(--fallback-bg)) !important;
padding: 1em;
border-radius: 5px;
overflow-x: auto;
margin: 0.8em 0;
white-space: pre;
}
.ai-message pre code {
background-color: transparent !important;
padding: 0;
font-size: inherit;
}
/* Override white-space for specific elements generated by Markdown */
.ai-message p,
.ai-message ul,
.ai-message ol,
.ai-message blockquote {
white-space: normal; /* Allow standard wrapping for block elements */
}
/* --- Markdown Element Styling within Messages --- */
.message p {
margin-top: 0;
margin-bottom: 0.5em;
}
.message p:last-child {
margin-bottom: 0;
}
.message ul,
.message ol {
margin: 0.5em 0 0.5em 1.5em;
padding: 0;
}
.message li {
margin-bottom: 0.2em;
}
/* Code block styling (adjusts previous rules slightly) */
.message code {
/* Inline code */
background-color: var(--invert-font-color, var(--fallback-invert-font)) !important;
color: var(--font-color);
padding: 0.1em 0.4em;
border-radius: 4px;
font-size: 0.9em;
/* Ensure inline code breaks nicely */
word-break: break-all;
white-space: normal; /* Allow inline code to wrap if needed */
}
.message pre {
/* Code block container */
background-color: var(--invert-font-color, var(--fallback-invert-font)) !important;
color: var(--background-color, var(--fallback-bg)) !important;
padding: 1em;
border-radius: 5px;
overflow-x: auto;
margin: 0.8em 0;
font-size: 0.9em; /* Slightly smaller code blocks */
}
.message pre code {
/* Code within code block */
background-color: transparent !important;
padding: 0;
font-size: inherit;
word-break: normal; /* Don't break words in code blocks */
white-space: pre; /* Preserve whitespace strictly in code blocks */
}
/* Thinking indicator */
.message-thinking {
display: inline-block;
width: 5px;
height: 5px;
background-color: var(--primary-color, var(--fallback-primary));
border-radius: 50%;
margin-left: 8px;
vertical-align: middle;
animation: thinking 1s infinite ease-in-out;
}
@keyframes thinking {
0%,
100% {
opacity: 0.5;
transform: scale(0.8);
}
50% {
opacity: 1;
transform: scale(1.2);
}
}
/* --- Thinking Indicator (Blinking Cursor Style) --- */
.thinking-indicator-cursor {
display: inline-block;
width: 10px; /* Width of the cursor */
height: 1.1em; /* Match line height */
background-color: var(--primary-color, var(--fallback-primary));
margin-left: 5px;
vertical-align: text-bottom; /* Align with text baseline */
animation: blink-cursor 1s step-end infinite;
}
@keyframes blink-cursor {
from,
to {
background-color: transparent;
}
50% {
background-color: var(--primary-color, var(--fallback-primary));
}
}
#chat-input-area {
flex-shrink: 0; /* Prevent input area from shrinking */
padding: 1em 1.5em;
display: flex;
align-items: flex-end; /* Align items to bottom */
gap: 10px;
background-color: var(--code-bg-color, var(--fallback-code-bg)); /* Match sidebars */
}
#chat-input-area textarea {
flex-grow: 1;
padding: 0.8em 1em;
border: 1px solid var(--progress-bar-background, var(--fallback-border));
background-color: var(--background-color, var(--fallback-bg));
color: var(--font-color, var(--fallback-font));
border-radius: 5px;
resize: none; /* Disable manual resize */
font-family: inherit;
font-size: 1em;
line-height: 1.4;
max-height: 150px; /* Limit excessive height */
overflow-y: auto;
/* initial height comes from the rows="2" attribute in the HTML */
}
#chat-input-area button {
/* Basic button styling - maybe inherit from main theme? */
padding: 0.6em 1.2em;
border: 1px solid var(--primary-dimmed-color, var(--fallback-primary-dimmed));
background-color: var(--primary-dimmed-color, var(--fallback-primary-dimmed));
color: var(--background-color, var(--fallback-bg));
border-radius: 5px;
cursor: pointer;
font-size: 0.9em;
transition: background-color 0.2s, border-color 0.2s;
height: min-content; /* Align with bottom of textarea */
}
#chat-input-area button:hover {
background-color: var(--primary-color, var(--fallback-primary));
border-color: var(--primary-color, var(--fallback-primary));
}
#chat-input-area button:disabled {
opacity: 0.6;
cursor: not-allowed;
}
.loading-indicator {
font-size: 0.9em;
color: var(--secondary-color, var(--fallback-secondary));
margin-right: 10px;
align-self: center;
}
/* --- Buttons --- */
/* Inherit some button styles if possible */
.btn.btn-sm {
color: var(--font-color, var(--fallback-font));
padding: 0.2em 0.5em;
font-size: 0.8em;
border: 1px solid var(--secondary-color, var(--fallback-secondary));
background: none;
border-radius: 3px;
cursor: pointer;
}
.btn.btn-sm:hover {
border-color: var(--font-color, var(--fallback-font));
background-color: var(--progress-bar-background, var(--fallback-border));
}
/* --- Basic Responsiveness --- */
@media screen and (max-width: 900px) {
.left-sidebar {
flex-basis: 200px; /* Shrink history */
}
.right-sidebar {
flex-basis: 240px; /* Shrink citations */
}
}
@media screen and (max-width: 768px) {
/* On small screens, hide the sidebars rather than stacking the layout */
.sidebar {
display: none; /* Hide sidebars on small screens */
}
/* Could add toggle buttons later */
}
/* ==== File: docs/ask_ai/ask-ai.css (Updates V4 - Delete Button) ==== */
.sidebar ul li {
/* Use flexbox to align link and delete button */
display: flex;
justify-content: space-between;
align-items: center;
padding: 0; /* Remove padding from li, add to link/button */
margin: 0.1em 0; /* Small vertical margin */
}
.sidebar ul li a {
/* Link takes most space */
flex-grow: 1;
padding: 0.3em 0.5em 0.3em 1em; /* Adjust padding */
/* Make ellipsis work for long titles */
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
/* Keep existing link styles */
color: var(--secondary-color, var(--fallback-secondary));
text-decoration: none;
display: block;
border-radius: 3px;
transition: background-color 0.2s, color 0.2s;
}
.sidebar ul li a:hover {
color: var(--primary-color, var(--fallback-primary));
background-color: rgba(80, 255, 255, 0.08);
}
/* Style for active history item's link */
#history-list li.active a {
color: var(--primary-dimmed-color, var(--fallback-primary-dimmed));
font-weight: bold;
background-color: rgba(80, 255, 255, 0.12);
}
/* --- Delete Chat Button --- */
.delete-chat-btn {
flex-shrink: 0; /* Don't shrink */
background: none;
border: none;
color: var(--secondary-color, var(--fallback-secondary));
cursor: pointer;
padding: 0.4em 0.8em; /* Padding around icon */
font-size: 0.9em;
opacity: 0.5; /* Dimmed by default */
transition: opacity 0.2s, color 0.2s;
margin-left: 5px; /* Space between link and button */
border-radius: 3px;
}
.sidebar ul li:hover .delete-chat-btn,
.delete-chat-btn:hover {
opacity: 1; /* Show fully on hover */
color: var(--error-color, #ff3c74); /* Use error color on hover */
}
.delete-chat-btn:focus {
outline: 1px dashed var(--error-color, #ff3c74); /* Accessibility */
opacity: 1;
}

docs/md_v2/ask_ai/ask-ai.js Normal file

@@ -0,0 +1,603 @@
// ==== File: docs/ask_ai/ask-ai.js (Marked, Streaming, History) ====
document.addEventListener("DOMContentLoaded", () => {
console.log("AI Assistant JS V2 Loaded");
// --- DOM Element Selectors ---
const historyList = document.getElementById("history-list");
const newChatButton = document.getElementById("new-chat-button");
const chatMessages = document.getElementById("chat-messages");
const chatInput = document.getElementById("chat-input");
const sendButton = document.getElementById("send-button");
const citationsList = document.getElementById("citations-list");
// --- Constants ---
const CHAT_INDEX_KEY = "aiAssistantChatIndex_v1";
const CHAT_PREFIX = "aiAssistantChat_v1_";
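// localStorage layout used below: CHAT_INDEX_KEY holds an array such as
// [{ id: "chat_1713100000000", title: "Chat about: ..." }], and each key
// CHAT_PREFIX + id holds that chat's message array.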
// --- State ---
let currentChatId = null;
let conversationHistory = []; // Holds message objects { sender: 'user'/'ai', text: '...' }
let isThinking = false;
let streamInterval = null; // To control the streaming interval
// --- Event Listeners ---
sendButton.addEventListener("click", handleSendMessage);
chatInput.addEventListener("keydown", handleInputKeydown);
newChatButton.addEventListener("click", handleNewChat);
chatInput.addEventListener("input", autoGrowTextarea);
// --- Initialization ---
loadChatHistoryIndex(); // Load history list on startup
let initialQuery = false;
try {
initialQuery = checkForInitialQuery(window.parent.location); // Check for query param
} catch (e) {
// Accessing a cross-origin parent's location throws; treat as no query
console.warn("Ask AI: Could not access parent window location.", e);
}
if (!initialQuery) {
loadInitialChat(); // Load normally if no query
}
// --- Core Functions ---
function handleSendMessage() {
const userMessageText = chatInput.value.trim();
if (!userMessageText || isThinking) return;
setThinking(true); // Start thinking state
// Add user message to state and UI
const userMessage = { sender: "user", text: userMessageText };
conversationHistory.push(userMessage);
addMessageToChat(userMessage, false); // Add user message without parsing markdown
chatInput.value = "";
autoGrowTextarea(); // Reset textarea height
// Prepare for AI response (create empty div)
const aiMessageDiv = addMessageToChat({ sender: "ai", text: "" }, true); // Add empty div with thinking indicator
// TODO: Generate fingerprint/JWT here
// TODO: Send `conversationHistory` + JWT to backend API
// Replace placeholder below with actual API call
// The backend should ideally return a stream of text tokens
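// A possible shape for the real call (a sketch with a hypothetical "/api/ask"
// endpoint and jwt variable, neither of which exists in this codebase):
// POST the history, then read the body as a token stream and render tokens
// the way streamSimulatedResponse() does below. handleSendMessage would need
// to become async for the awaits shown here.
// const response = await fetch("/api/ask", {
//   method: "POST",
//   headers: { "Content-Type": "application/json", "Authorization": `Bearer ${jwt}` },
//   body: JSON.stringify({ messages: conversationHistory }),
// });
// const reader = response.body.getReader();
// const decoder = new TextDecoder();
// let buffered = "";
// for (let chunk = await reader.read(); !chunk.done; chunk = await reader.read()) {
//   buffered += decoder.decode(chunk.value, { stream: true });
//   aiMessageDiv.innerHTML = marked.parse(buffered); // re-render as tokens arrive
// }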
// --- Placeholder Streaming Simulation ---
const simulatedFullResponse = `Okay, here's a minimal Python script that creates an AsyncWebCrawler, fetches a webpage, and prints the first 300 characters of its Markdown output:
\`\`\`python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com")
        print(result.markdown[:300])  # Print first 300 chars

if __name__ == "__main__":
    asyncio.run(main())
\`\`\`
A code snippet: \`crawler.arun()\`. Check the [quickstart](/core/quickstart).`;
// Simulate receiving the response stream
streamSimulatedResponse(aiMessageDiv, simulatedFullResponse);
// // Simulate receiving citations *after* stream starts (or with first chunk)
// setTimeout(() => {
// addCitations([
// { title: "Simulated Doc 1", url: "#sim1" },
// { title: "Another Concept", url: "#sim2" },
// ]);
// }, 500); // Citations appear shortly after thinking starts
}
function handleInputKeydown(event) {
if (event.key === "Enter" && !event.shiftKey) {
event.preventDefault();
handleSendMessage();
}
}
function addMessageToChat(message, addThinkingIndicator = false) {
const messageDiv = document.createElement("div");
messageDiv.classList.add("message", `${message.sender}-message`);
if (message.sender === "ai") {
// Parse markdown and set HTML for AI messages
messageDiv.innerHTML = message.text ? marked.parse(message.text) : "";
// Apply syntax highlighting AFTER setting innerHTML; the :not(.hljs)
// selector already skips blocks highlighted earlier
messageDiv.querySelectorAll("pre code:not(.hljs)").forEach((block) => {
if (typeof hljs !== "undefined") {
hljs.highlightElement(block);
} else {
console.warn("highlight.js (hljs) not found for syntax highlighting.");
}
});
// Add thinking indicator if needed (and not already present)
if (addThinkingIndicator && !message.text && !messageDiv.querySelector(".thinking-indicator-cursor")) {
const thinkingDiv = document.createElement("div");
thinkingDiv.className = "thinking-indicator-cursor";
messageDiv.appendChild(thinkingDiv);
}
} else {
// User messages stay plain text so user input is never rendered as HTML
messageDiv.textContent = message.text;
}
// wrap each pre in a div.terminal
messageDiv.querySelectorAll("pre").forEach((block) => {
const wrapper = document.createElement("div");
wrapper.className = "terminal";
block.parentNode.insertBefore(wrapper, block);
wrapper.appendChild(block);
});
chatMessages.appendChild(messageDiv);
// Scroll only if user is near the bottom? (More advanced)
// Simple scroll for now:
scrollToBottom();
return messageDiv; // Return the created element
}
function streamSimulatedResponse(messageDiv, fullText) {
const thinkingIndicator = messageDiv.querySelector(".thinking-indicator-cursor");
if (thinkingIndicator) thinkingIndicator.remove();
const tokens = fullText.split(/(\s+)/);
let currentText = "";
let tokenIndex = 0;
// Clear previous interval just in case
if (streamInterval) clearInterval(streamInterval);
streamInterval = setInterval(() => {
const cursorSpan = '<span class="thinking-indicator-cursor"></span>'; // Cursor for streaming
if (tokenIndex < tokens.length) {
currentText += tokens[tokenIndex];
// Render intermediate markdown + cursor
messageDiv.innerHTML = marked.parse(currentText + cursorSpan);
// Re-highlighting on every stream update (below) is left disabled because it
// is inefficient; code blocks are highlighted once when streaming completes.
// messageDiv.querySelectorAll('pre code:not(.hljs)').forEach((block) => {
// hljs.highlightElement(block);
// });
scrollToBottom(); // Keep scrolling as content streams
tokenIndex++;
} else {
// Streaming finished
clearInterval(streamInterval);
streamInterval = null;
// Final render without cursor
messageDiv.innerHTML = marked.parse(currentText);
// === Final Syntax Highlighting ===
messageDiv.querySelectorAll("pre code:not(.hljs)").forEach((block) => {
if (typeof hljs !== "undefined" && !block.classList.contains("hljs")) {
hljs.highlightElement(block);
}
});
// === Extract Citations ===
const citations = extractMarkdownLinks(currentText);
// Wrap each pre in a div.terminal
messageDiv.querySelectorAll("pre").forEach((block) => {
const wrapper = document.createElement("div");
wrapper.className = "terminal";
block.parentNode.insertBefore(wrapper, block);
wrapper.appendChild(block);
});
const aiMessage = { sender: "ai", text: currentText, citations: citations };
conversationHistory.push(aiMessage);
updateCitationsDisplay();
saveCurrentChat();
setThinking(false);
}
}, 50); // Adjust speed
}
// === NEW Function to Extract Links ===
function extractMarkdownLinks(markdownText) {
const regex = /\[([^\]]+)\]\(([^)]+)\)/g; // [text](url)
const citations = [];
let match;
while ((match = regex.exec(markdownText)) !== null) {
// Avoid adding self-links from within the citations list if AI includes them
if (!match[2].startsWith("#citation-")) {
citations.push({
title: match[1].trim(),
url: match[2].trim(),
});
}
}
// Optional: Deduplicate links based on URL
const uniqueCitations = citations.filter(
(citation, index, self) => index === self.findIndex((c) => c.url === citation.url)
);
return uniqueCitations;
}
// === REVISED Function to Display Citations ===
function updateCitationsDisplay() {
let lastCitations = null;
// Find the most recent AI message with citations
for (let i = conversationHistory.length - 1; i >= 0; i--) {
if (
conversationHistory[i].sender === "ai" &&
conversationHistory[i].citations &&
conversationHistory[i].citations.length > 0
) {
lastCitations = conversationHistory[i].citations;
break; // Found the latest citations
}
}
citationsList.innerHTML = ""; // Clear previous
if (!lastCitations) {
citationsList.innerHTML = '<li class="no-citations">No citations available.</li>';
return;
}
lastCitations.forEach((citation, index) => {
const li = document.createElement("li");
const a = document.createElement("a");
// Generate a unique ID for potential internal linking if needed
// a.id = `citation-${index}`;
a.href = citation.url || "#";
a.textContent = citation.title;
a.target = "_top"; // Open in main window
li.appendChild(a);
citationsList.appendChild(li);
});
}
function addCitations(citations) {
citationsList.innerHTML = ""; // Clear
if (!citations || citations.length === 0) {
citationsList.innerHTML = '<li class="no-citations">No citations available.</li>';
return;
}
citations.forEach((citation) => {
const li = document.createElement("li");
const a = document.createElement("a");
a.href = citation.url || "#";
a.textContent = citation.title;
a.target = "_top"; // Open in main window
li.appendChild(a);
citationsList.appendChild(li);
});
}
function setThinking(thinking) {
isThinking = thinking;
sendButton.disabled = thinking;
chatInput.disabled = thinking;
chatInput.placeholder = thinking ? "AI is responding..." : "Ask about Crawl4AI...";
// Stop any existing stream if we start thinking again (e.g., rapid resend)
if (thinking && streamInterval) {
clearInterval(streamInterval);
streamInterval = null;
}
}
function autoGrowTextarea() {
chatInput.style.height = "auto";
chatInput.style.height = `${chatInput.scrollHeight}px`;
}
function scrollToBottom() {
chatMessages.scrollTop = chatMessages.scrollHeight;
}
// --- Query Parameter Handling ---
function checkForInitialQuery(locationToCheck) {
// <-- Receive location object
if (!locationToCheck) {
console.warn("Ask AI: Could not access parent window location.");
return false;
}
const urlParams = new URLSearchParams(locationToCheck.search); // <-- Use passed location's search string
const encodedQuery = urlParams.get("qq"); // <-- Use 'qq'
if (encodedQuery) {
console.log("Initial query found (qq):", encodedQuery);
try {
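// atob() yields a Latin-1 binary string; escape() + decodeURIComponent()
// converts it back to UTF-8 (the mirror of the encoder in selection_ask_ai.js).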
const decodedText = decodeURIComponent(escape(atob(encodedQuery)));
console.log("Decoded query:", decodedText);
// Start new chat immediately
handleNewChat(true);
// Delay setting input and sending message slightly
setTimeout(() => {
chatInput.value = decodedText;
autoGrowTextarea();
handleSendMessage();
// Clean the PARENT window's URL
try {
const cleanUrl = locationToCheck.pathname;
// Use parent's history object
window.parent.history.replaceState({}, window.parent.document.title, cleanUrl);
} catch (e) {
console.warn("Ask AI: Could not clean parent URL using replaceState.", e);
// This might fail due to cross-origin restrictions if served differently,
// but should work fine with mkdocs serve on the same origin.
}
}, 100);
return true; // Query processed
} catch (e) {
console.error("Error decoding initial query (qq):", e);
// Clean the PARENT window's URL even on error
try {
const cleanUrl = locationToCheck.pathname;
window.parent.history.replaceState({}, window.parent.document.title, cleanUrl);
} catch (cleanError) {
console.warn("Ask AI: Could not clean parent URL after decode error.", cleanError);
}
return false;
}
}
return false; // No 'qq' query found
}
// --- History Management ---
function handleNewChat(isFromQuery = false) {
if (isThinking) return; // Don't allow new chat while responding
// Only save if NOT triggered immediately by a query parameter load
if (!isFromQuery) {
saveCurrentChat();
}
currentChatId = `chat_${Date.now()}`;
conversationHistory = []; // Clear message history state
chatMessages.innerHTML = ""; // Start with clean slate for query
if (!isFromQuery) {
// Show welcome only if manually started
chatMessages.innerHTML =
'<div class="message ai-message welcome-message">Started a new chat! Ask me anything about Crawl4AI.</div>';
}
addCitations([]); // Clear citations
updateCitationsDisplay(); // Clear UI
// Add to index and save
let index = loadChatIndex();
// Generate a generic title initially, update later
const newTitle = isFromQuery ? "Chat from Selection" : `Chat ${new Date().toLocaleString()}`;
// index.unshift({ id: currentChatId, title: `Chat ${new Date().toLocaleString()}` }); // Add to start
index.unshift({ id: currentChatId, title: newTitle });
saveChatIndex(index);
renderHistoryList(index); // Update UI
setActiveHistoryItem(currentChatId);
saveCurrentChat(); // Save the empty new chat state
}
function loadChat(chatId) {
if (isThinking || chatId === currentChatId) return;
// Check if chat data actually exists before proceeding
const storedChat = localStorage.getItem(CHAT_PREFIX + chatId);
if (storedChat === null) {
console.warn(`Attempted to load non-existent chat: ${chatId}. Removing from index.`);
deleteChatData(chatId); // Clean up index
loadChatHistoryIndex(); // Reload history list
loadInitialChat(); // Load next available chat
return;
}
console.log(`Loading chat: ${chatId}`);
saveCurrentChat(); // Save current before switching
try {
conversationHistory = JSON.parse(storedChat);
currentChatId = chatId;
renderChatMessages(conversationHistory);
updateCitationsDisplay();
setActiveHistoryItem(chatId);
} catch (e) {
console.error("Error loading chat:", chatId, e);
alert("Failed to load chat data.");
conversationHistory = [];
renderChatMessages(conversationHistory);
updateCitationsDisplay();
}
}
function saveCurrentChat() {
if (currentChatId && conversationHistory.length > 0) {
try {
localStorage.setItem(CHAT_PREFIX + currentChatId, JSON.stringify(conversationHistory));
console.log(`Chat ${currentChatId} saved.`);
// Update title in index (e.g., use first user message)
let index = loadChatIndex();
const currentItem = index.find((item) => item.id === currentChatId);
if (
currentItem &&
conversationHistory[0]?.sender === "user" &&
!currentItem.title.startsWith("Chat about:")
) {
currentItem.title = `Chat about: ${conversationHistory[0].text.substring(0, 30)}...`;
saveChatIndex(index);
// Re-render the history list when the title changes (updating just the one item would be a small optimization)
renderHistoryList(index);
setActiveHistoryItem(currentChatId); // Re-set active after re-render
}
} catch (e) {
console.error("Error saving chat:", currentChatId, e);
// Handle potential storage full errors
if (e.name === "QuotaExceededError") {
alert("Local storage is full. Cannot save chat history.");
// Consider implementing history pruning logic here
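// A minimal pruning sketch (an assumption, not implemented here): drop the
// oldest chat and retry the save once.
// const index = loadChatIndex();
// const oldest = index.pop(); // index is newest-first, so pop() is the oldest
// if (oldest) {
//   localStorage.removeItem(CHAT_PREFIX + oldest.id);
//   saveChatIndex(index);
// }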
}
}
} else if (currentChatId) {
// Persist an empty array for a newly created chat so its index entry resolves
localStorage.setItem(CHAT_PREFIX + currentChatId, JSON.stringify([]));
}
}
function loadChatIndex() {
try {
const storedIndex = localStorage.getItem(CHAT_INDEX_KEY);
return storedIndex ? JSON.parse(storedIndex) : [];
} catch (e) {
console.error("Error loading chat index:", e);
return []; // Return empty array on error
}
}
function saveChatIndex(indexArray) {
try {
localStorage.setItem(CHAT_INDEX_KEY, JSON.stringify(indexArray));
} catch (e) {
console.error("Error saving chat index:", e);
}
}
function renderHistoryList(indexArray) {
historyList.innerHTML = ""; // Clear existing
if (!indexArray || indexArray.length === 0) {
historyList.innerHTML = '<li class="no-history">No past chats found.</li>';
return;
}
indexArray.forEach((item) => {
const li = document.createElement("li");
li.dataset.chatId = item.id; // Add ID to li for easier selection
const a = document.createElement("a");
a.href = "#";
a.dataset.chatId = item.id;
a.textContent = item.title || `Chat ${item.id.split("_")[1] || item.id}`;
a.title = a.textContent; // Tooltip for potentially long titles
a.addEventListener("click", (e) => {
e.preventDefault();
loadChat(item.id);
});
// === Add Delete Button ===
const deleteBtn = document.createElement("button");
deleteBtn.className = "delete-chat-btn";
deleteBtn.innerHTML = "✕"; // Close/"✕" icon (or swap in an SVG/FontAwesome icon)
deleteBtn.title = "Delete Chat";
deleteBtn.dataset.chatId = item.id; // Store ID on button too
deleteBtn.addEventListener("click", handleDeleteChat);
li.appendChild(a);
li.appendChild(deleteBtn); // Append button to the list item
historyList.appendChild(li);
});
}
function renderChatMessages(messages) {
chatMessages.innerHTML = ""; // Clear existing messages
messages.forEach((message) => {
// Ensure highlighting is applied when loading from history
addMessageToChat(message, false);
});
if (messages.length === 0) {
chatMessages.innerHTML =
'<div class="message ai-message welcome-message">Chat history loaded. Ask a question!</div>';
}
// Scroll to bottom after loading messages
scrollToBottom();
}
function setActiveHistoryItem(chatId) {
document.querySelectorAll("#history-list li").forEach((li) => li.classList.remove("active"));
// Select the LI element directly now
const activeLi = document.querySelector(`#history-list li[data-chat-id="${chatId}"]`);
if (activeLi) {
activeLi.classList.add("active");
}
}
function loadInitialChat() {
const index = loadChatIndex();
if (index.length > 0) {
loadChat(index[0].id);
} else {
// Check if handleNewChat wasn't already called by query handler
if (!currentChatId) {
handleNewChat();
}
}
}
function loadChatHistoryIndex() {
const index = loadChatIndex();
renderHistoryList(index);
if (currentChatId) setActiveHistoryItem(currentChatId);
}
// === NEW Function to Handle Delete Click ===
function handleDeleteChat(event) {
event.stopPropagation(); // Prevent triggering loadChat on the link behind it
const button = event.currentTarget;
const chatIdToDelete = button.dataset.chatId;
if (!chatIdToDelete) return;
// Confirmation dialog
if (
window.confirm(
`Are you sure you want to delete this chat session?\n"${
button.previousElementSibling?.textContent || "Chat " + chatIdToDelete
}"`
)
) {
console.log(`Deleting chat: ${chatIdToDelete}`);
// Perform deletion
const updatedIndex = deleteChatData(chatIdToDelete);
// If the deleted chat was the currently active one, load another chat
if (currentChatId === chatIdToDelete) {
currentChatId = null; // Reset current ID
conversationHistory = []; // Clear state
if (updatedIndex.length > 0) {
// Load the new top chat (most recent remaining)
loadChat(updatedIndex[0].id);
} else {
// No chats left, start a new one
handleNewChat();
}
} else {
// If a different chat was deleted, just re-render the list
renderHistoryList(updatedIndex);
// Re-apply active state in case IDs shifted (though they shouldn't)
setActiveHistoryItem(currentChatId);
}
}
}
// === NEW Function to Delete Chat Data ===
function deleteChatData(chatId) {
// Remove chat data
localStorage.removeItem(CHAT_PREFIX + chatId);
// Update index
let index = loadChatIndex();
index = index.filter((item) => item.id !== chatId);
saveChatIndex(index);
console.log(`Chat ${chatId} data and index entry removed.`);
return index; // Return the updated index
}
// --- Virtual Scrolling Placeholder ---
// NOTE: Virtual scrolling is complex. For now, we do direct rendering.
// If performance becomes an issue with very long chats/history,
// investigate libraries like 'simple-virtual-scroll' or 'virtual-scroller'.
// You would replace parts of `renderChatMessages` and `renderHistoryList`
// to work with the chosen library's API (providing data and item renderers).
console.warn("Virtual scrolling not implemented. Performance may degrade with very long chat histories.");
});


@@ -0,0 +1,64 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Crawl4AI Assistant</title>
<!-- Link main styles first for variable access -->
<link rel="stylesheet" href="../assets/layout.css">
<link rel="stylesheet" href="../assets/styles.css">
<!-- Link specific AI styles -->
<link rel="stylesheet" href="../assets/highlight.css">
<link rel="stylesheet" href="ask-ai.css">
</head>
<body>
<div class="ai-assistant-container">
<!-- Left Sidebar: Conversation History -->
<aside id="history-panel" class="sidebar left-sidebar">
<header>
<h3>History</h3>
<button id="new-chat-button" class="btn btn-sm">New Chat</button>
</header>
<ul id="history-list">
<!-- History items populated by JS -->
</ul>
</aside>
<!-- Main Area: Chat Interface -->
<main id="chat-panel">
<div id="chat-messages">
<!-- Chat messages populated by JS -->
<div class="message ai-message welcome-message">
Welcome to the Crawl4AI Assistant! How can I help you today?
</div>
</div>
<div id="chat-input-area">
<!-- Loading indicator for general waiting (optional) -->
<!-- <div class="loading-indicator" style="display: none;">Thinking...</div> -->
<textarea id="chat-input" placeholder="Ask about Crawl4AI..." rows="2"></textarea>
<button id="send-button">Send</button>
</div>
</main>
<!-- Right Sidebar: Citations / Context -->
<aside id="citations-panel" class="sidebar right-sidebar">
<header>
<h3>Citations</h3>
</header>
<ul id="citations-list">
<!-- Citations populated by JS -->
<li class="no-citations">No citations for this response yet.</li>
</ul>
</aside>
</div>
<!-- Include Marked.js library -->
<script src="https://cdn.jsdelivr.net/npm/marked/marked.min.js"></script>
<script src="../assets/highlight.min.js"></script>
<!-- Your AI Assistant Logic -->
<script src="ask-ai.js"></script>
</body>
</html>


@@ -0,0 +1,62 @@
// ==== File: docs/assets/copy_code.js ====
document.addEventListener('DOMContentLoaded', () => {
// Target specifically code blocks within the main content area
const codeBlocks = document.querySelectorAll('#terminal-mkdocs-main-content pre > code');
codeBlocks.forEach((codeElement) => {
const preElement = codeElement.parentElement; // The <pre> tag
// Ensure the <pre> tag can contain a positioned button
if (window.getComputedStyle(preElement).position === 'static') {
preElement.style.position = 'relative';
}
// Create the button
const copyButton = document.createElement('button');
copyButton.className = 'copy-code-button';
copyButton.type = 'button';
copyButton.setAttribute('aria-label', 'Copy code to clipboard');
copyButton.title = 'Copy code to clipboard';
copyButton.innerHTML = 'Copy'; // Or use an icon like an SVG or FontAwesome class
// Append the button to the <pre> element
preElement.appendChild(copyButton);
// Add click event listener
copyButton.addEventListener('click', () => {
copyCodeToClipboard(codeElement, copyButton);
});
});
async function copyCodeToClipboard(codeElement, button) {
// Use innerText to get the rendered text content, preserving line breaks
const textToCopy = codeElement.innerText;
try {
await navigator.clipboard.writeText(textToCopy);
// Visual feedback
button.innerHTML = 'Copied!';
button.classList.add('copied');
button.disabled = true; // Temporarily disable
// Revert button state after a short delay
setTimeout(() => {
button.innerHTML = 'Copy';
button.classList.remove('copied');
button.disabled = false;
}, 2000); // Show "Copied!" for 2 seconds
} catch (err) {
console.error('Failed to copy code: ', err);
// Optional: Provide error feedback on the button
button.innerHTML = 'Error';
setTimeout(() => {
button.innerHTML = 'Copy';
}, 2000);
}
}
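// Note: navigator.clipboard is only available in secure contexts (HTTPS or
// localhost). A legacy fallback sketch (an assumption, not wired in): select
// the code text in a hidden textarea and call document.execCommand('copy')
// from the catch branch above.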
console.log("Copy Code Button script loaded.");
});


@@ -0,0 +1,39 @@
// ==== File: docs/assets/floating_ask_ai_button.js ====
document.addEventListener('DOMContentLoaded', () => {
const askAiPagePath = '/core/ask-ai/'; // IMPORTANT: Adjust this path if needed!
const currentPath = window.location.pathname;
// The absolute askAiPagePath above is used directly as the link target;
// adjust it if the site is deployed under a sub-directory.
// Check if the current page IS the Ask AI page
// Use includes() for flexibility (handles trailing slash or .html)
if (currentPath.includes(askAiPagePath.replace(/\/$/, ''))) { // Remove trailing slash for includes check
console.log("Floating Ask AI Button: Not adding button on the Ask AI page itself.");
return; // Don't add the button on the target page
}
// --- Create the button ---
const fabLink = document.createElement('a');
fabLink.className = 'floating-ask-ai-button';
fabLink.href = askAiPagePath; // Construct the correct URL
fabLink.title = 'Ask Crawl4AI Assistant';
fabLink.setAttribute('aria-label', 'Ask Crawl4AI Assistant');
// Add content (using SVG icon for better visuals)
fabLink.innerHTML = `
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" width="24" height="24" fill="currentColor">
<path d="M20 2H4c-1.1 0-2 .9-2 2v12c0 1.1.9 2 2 2h14l4 4V4c0-1.1-.9-2-2-2zm-2 12H6v-2h12v2zm0-3H6V9h12v2zm0-3H6V6h12v2z"/>
</svg>
<span>Ask AI</span>
`;
// Append to body
document.body.appendChild(fabLink);
console.log("Floating Ask AI Button added.");
});


@@ -0,0 +1,119 @@
// ==== File: assets/github_stats.js ====
document.addEventListener('DOMContentLoaded', async () => {
// --- Configuration ---
const targetHeaderSelector = '.terminal .container:first-child'; // Selector for your header container
const insertBeforeSelector = '.terminal-nav'; // Selector for the element that receives the badge (e.g., the main nav),
// or null to append the badge at the end of the header.
// --- Find elements ---
const headerContainer = document.querySelector(targetHeaderSelector);
if (!headerContainer) {
console.warn('GitHub Stats: Header container not found with selector:', targetHeaderSelector);
return;
}
const repoLinkElement = headerContainer.querySelector('a[href*="github.com/"]'); // Find the existing GitHub link
let repoUrl = 'https://github.com/unclecode/crawl4ai';
// if (repoLinkElement) {
// repoUrl = repoLinkElement.href;
// } else {
// // Fallback: Try finding from config (requires template injection - harder)
// // Or hardcode if necessary, but reading from the link is better.
// console.warn('GitHub Stats: GitHub repo link not found in header.');
// // Try to get repo_url from mkdocs config if available globally (less likely)
// // repoUrl = window.mkdocs_config?.repo_url; // Requires setting this variable
// // if (!repoUrl) return; // Exit if still no URL
// return; // Exit for now if link isn't found
// }
// --- Extract Repo Owner/Name ---
let owner = '';
let repo = '';
try {
const url = new URL(repoUrl);
const pathParts = url.pathname.split('/').filter(part => part.length > 0);
if (pathParts.length >= 2) {
owner = pathParts[0];
repo = pathParts[1];
}
} catch (e) {
console.error('GitHub Stats: Could not parse repository URL:', repoUrl, e);
return;
}
if (!owner || !repo) {
console.warn('GitHub Stats: Could not extract owner/repo from URL:', repoUrl);
return;
}
// --- Get Version (Attempt to extract from site title) ---
let version = '';
const siteTitleElement = headerContainer.querySelector('.terminal-title, .site-title'); // Adjust selector based on theme's title element
// Example title: "Crawl4AI Documentation (v0.5.x)"
if (siteTitleElement) {
const match = siteTitleElement.textContent.match(/\((v?[^)]+)\)/); // Look for text in parentheses starting with 'v' (optional)
if (match && match[1]) {
version = match[1].trim();
}
}
if (!version) {
console.info('GitHub Stats: Could not extract version from title. You might need to adjust the selector or regex.');
// You could fallback to config.extra.version if injected into JS
// version = window.mkdocs_config?.extra?.version || 'N/A';
}
// --- Fetch GitHub API Data ---
let stars = '...';
let forks = '...';
try {
const apiUrl = `https://api.github.com/repos/${owner}/${repo}`;
const response = await fetch(apiUrl);
if (response.ok) {
const data = await response.json();
// Format large numbers (optional)
stars = data.stargazers_count > 1000 ? `${(data.stargazers_count / 1000).toFixed(1)}k` : data.stargazers_count;
forks = data.forks_count > 1000 ? `${(data.forks_count / 1000).toFixed(1)}k` : data.forks_count;
} else {
console.warn(`GitHub Stats: API request failed with status ${response.status}. Rate limit exceeded?`);
stars = 'N/A';
forks = 'N/A';
}
} catch (error) {
console.error('GitHub Stats: Error fetching repository data:', error);
stars = 'N/A';
forks = 'N/A';
}
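// Caching sketch (an assumption, not part of this change): unauthenticated
// GitHub API calls are limited to 60 requests/hour per IP, so the counts
// could be memoized in localStorage:
// const cached = JSON.parse(localStorage.getItem('ghStatsCache') || 'null');
// if (cached && Date.now() - cached.ts < 3600 * 1000) {
//   stars = cached.stars; forks = cached.forks;
// } else {
//   localStorage.setItem('ghStatsCache', JSON.stringify({ stars, forks, ts: Date.now() }));
// }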
// --- Create Badge HTML ---
const badgeContainer = document.createElement('div');
badgeContainer.className = 'github-stats-badge';
// Use innerHTML for simplicity, including potential icons (requires FontAwesome or similar)
// Ensure your theme loads FontAwesome or add it yourself if you want icons.
badgeContainer.innerHTML = `
<a href="${repoUrl}" target="_blank" rel="noopener">
<!-- Optional Icon (FontAwesome example) -->
<!-- <i class="fab fa-github"></i> -->
<span class="repo-name">${owner}/${repo}</span>
${version ? `<span class="stat version"><i class="fas fa-tag"></i> ${version}</span>` : ''}
<span class="stat stars"><i class="fas fa-star"></i> ${stars}</span>
<span class="stat forks"><i class="fas fa-code-branch"></i> ${forks}</span>
</a>
`;
// --- Inject Badge into Header ---
const insertTarget = insertBeforeSelector ? headerContainer.querySelector(insertBeforeSelector) : null;
if (insertTarget) {
// Append the badge inside the target element; switch to insertBefore()
// if it should sit before the nav instead
insertTarget.appendChild(badgeContainer);
} else {
headerContainer.appendChild(badgeContainer);
}
console.info('GitHub Stats: Badge added to header.');
});


@@ -0,0 +1,441 @@
/* ==== File: assets/layout.css (Non-Fluid Centered Layout) ==== */
:root {
--header-height: 55px; /* Adjust if needed */
--sidebar-width: 280px; /* Adjust if needed */
--toc-width: 340px; /* As specified */
--content-max-width: 90em; /* Max width for the centered content */
--layout-transition-speed: 0.2s;
--global-space: 10px;
}
/* --- Basic Setup --- */
html {
scroll-behavior: smooth;
scroll-padding-top: calc(var(--header-height) + 15px);
box-sizing: border-box;
}
*, *:before, *:after {
box-sizing: inherit;
}
body {
padding-top: 0;
padding-bottom: 0;
background-color: var(--background-color);
color: var(--font-color);
/* Prevents horizontal scrollbars during transitions */
overflow-x: hidden;
}
/* --- Fixed Header --- */
/* Full width, fixed header */
.terminal .container:first-child { /* Assuming this targets the header container */
position: fixed;
top: 0;
left: 0;
right: 0;
height: var(--header-height);
background-color: var(--background-color);
z-index: 1000;
border-bottom: 1px solid var(--progress-bar-background);
max-width: none; /* Override any container max-width */
padding: 0 calc(var(--global-space) * 2);
}
/* --- Main Layout Container (Below Header) --- */
/* This container just provides space for the fixed header */
.container:has(.terminal-mkdocs-main-grid) {
margin: 0 auto;
padding: 0;
padding-top: var(--header-height); /* Space for fixed header */
}
/* --- Flex Container: Grid holding content and toc (CENTERED) --- */
/* THIS is the main centered block */
.terminal-mkdocs-main-grid {
display: flex;
align-items: flex-start;
/* Enforce max-width and center */
max-width: var(--content-max-width);
margin-left: auto;
margin-right: auto;
position: relative;
/* Apply side padding within the centered block */
padding-left: calc(var(--global-space) * 2);
padding-right: calc(var(--global-space) * 2);
/* Overrides the auto margin above so the grid clears the fixed sidebar */
margin-left: var(--sidebar-width);
}
/* --- 1. Fixed Left Sidebar (Viewport Relative) --- */
#terminal-mkdocs-side-panel {
position: fixed;
top: var(--header-height);
left: max(0px, calc((90vw - var(--content-max-width)) / 2));
bottom: 0;
width: var(--sidebar-width);
background-color: var(--background-color);
border-right: 1px solid var(--progress-bar-background);
overflow-y: auto;
z-index: 900;
padding: 1em calc(var(--global-space) * 2);
padding-bottom: 2em;
/* transition: left var(--layout-transition-speed) ease-in-out; */
}
/* --- 2. Main Content Area (Within Centered Grid) --- */
#terminal-mkdocs-main-content {
flex-grow: 1;
flex-shrink: 1;
min-width: 0; /* Flexbox shrink fix */
/* No left/right margins needed here - handled by parent grid */
margin-left: 0;
margin-right: 0;
/* Internal Padding */
padding: 1.5em 2em;
position: relative;
z-index: 1;
}
/* --- 3. Right Table of Contents (Sticky, Within Centered Grid) --- */
#toc-sidebar {
flex-basis: var(--toc-width);
flex-shrink: 0;
width: var(--toc-width);
position: sticky; /* Sticks within the centered grid */
top: var(--header-height);
align-self: stretch;
height: calc(100vh - var(--header-height));
overflow-y: auto;
padding: 1.5em 1em;
font-size: 0.85em;
border-left: 1px solid var(--progress-bar-background);
z-index: 800;
/* display: none; /* JS handles */
}
/* (ToC link styles remain the same) */
#toc-sidebar h4 { margin-top: 0; margin-bottom: 1em; font-size: 1.1em; color: var(--secondary-color); padding-left: 0.8em; }
#toc-sidebar ul { list-style: none; padding: 0; margin: 0; }
#toc-sidebar ul li a { display: block; padding: 0.3em 0; color: var(--secondary-color); text-decoration: none; border-left: 3px solid transparent; padding-left: 0.8em; transition: all 0.1s ease-in-out; line-height: 1.4; word-break: break-word; }
#toc-sidebar ul li.toc-level-3 a { padding-left: 1.8em; }
#toc-sidebar ul li.toc-level-4 a { padding-left: 2.8em; }
#toc-sidebar ul li a:hover { color: var(--font-color); background-color: rgba(255, 255, 255, 0.05); }
#toc-sidebar ul li a.active { color: var(--primary-color); border-left-color: var(--primary-color); background-color: rgba(80, 255, 255, 0.08); }
/* --- Footer Styling (Respects Centered Layout) --- */
footer {
background-color: var(--code-bg-color);
color: var(--secondary-color);
position: relative;
z-index: 10;
margin-top: 2em;
/* Apply margin-left to clear the fixed sidebar */
margin-left: var(--sidebar-width);
/* Constrain width relative to the centered grid it follows */
max-width: calc(var(--content-max-width) - var(--sidebar-width));
margin-right: auto; /* Keep it left-aligned within the space next to sidebar */
/* Use padding consistent with the grid */
padding: 2em calc(var(--global-space) * 2);
}
/* Adjust footer grid if needed */
.terminal-mkdocs-footer-grid {
display: grid;
grid-template-columns: 1fr auto;
gap: 1em;
align-items: center;
}
/* ==========================================================================
RESPONSIVENESS (Adapting the Non-Fluid Layout)
========================================================================== */
/* --- Medium screens: Hide ToC --- */
@media screen and (max-width: 1200px) {
#toc-sidebar {
display: none;
}
.terminal-mkdocs-main-grid {
/* Grid adjusts automatically as ToC is removed */
/* Ensure grid padding remains */
padding-left: calc(var(--global-space) * 2);
padding-right: calc(var(--global-space) * 2);
}
#terminal-mkdocs-main-content {
/* Content area naturally expands */
}
footer {
/* Footer still respects the left sidebar and overall max width */
margin-left: var(--sidebar-width);
max-width: calc(var(--content-max-width) - var(--sidebar-width));
/* Padding remains consistent */
padding-left: calc(var(--global-space) * 2);
padding-right: calc(var(--global-space) * 2);
}
}
/* --- Small screens: Hide left sidebar, full width content & footer --- */
@media screen and (max-width: 768px) {
#terminal-mkdocs-side-panel {
left: calc(-1 * var(--sidebar-width));
z-index: 1100;
box-shadow: 2px 0 10px rgba(0,0,0,0.3);
}
#terminal-mkdocs-side-panel.sidebar-visible {
left: 0;
}
.terminal-mkdocs-main-grid {
/* Grid now takes full width (minus body padding) */
margin-left: 0; /* Override sidebar margin */
margin-right: 0; /* Override auto margin */
max-width: 100%; /* Allow full width */
padding-left: var(--global-space); /* Reduce padding */
padding-right: var(--global-space);
}
#terminal-mkdocs-main-content {
padding: 1.5em 1em; /* Adjust internal padding */
}
footer {
margin-left: 0; /* Full width footer */
max-width: 100%; /* Allow full width */
padding: 2em 1em; /* Adjust internal padding */
}
.terminal-mkdocs-footer-grid {
grid-template-columns: 1fr; /* Stack footer items */
text-align: center;
gap: 0.5em;
}
/* Remember JS for toggle button & overlay */
}
/* ==== GitHub Stats Badge Styling ==== */
.github-stats-badge {
display: inline-block; /* Or flex if needed */
margin-left: 2em; /* Adjust spacing */
vertical-align: middle; /* Align with other header items */
font-size: 0.9em; /* Slightly smaller font */
}
.github-stats-badge a {
color: var(--secondary-color); /* Use secondary color */
text-decoration: none;
display: flex; /* Use flex for alignment */
align-items: center;
gap: 0.8em; /* Space between items */
padding: 0.2em 0.5em;
border: 1px solid var(--progress-bar-background); /* Subtle border */
border-radius: 4px;
transition: color 0.2s, background-color 0.2s;
}
.github-stats-badge a:hover {
color: var(--font-color); /* Brighter color on hover */
background-color: var(--progress-bar-background); /* Subtle background on hover */
}
.github-stats-badge .repo-name {
color: var(--font-color); /* Make repo name stand out slightly */
font-weight: 500; /* Optional bolder weight */
}
.github-stats-badge .stat {
/* Styles for individual stats (version, stars, forks) */
white-space: nowrap; /* Prevent wrapping */
}
.github-stats-badge .stat i {
/* Optional: Style for FontAwesome icons */
margin-right: 0.3em;
color: var(--secondary-dimmed-color); /* Dimmer color for icons */
}
/* Adjust positioning relative to search/nav if needed */
/* Example: If search is floated right */
/* .terminal-nav { float: left; } */
/* .github-stats-badge { float: left; } */
/* #mkdocs-search-query { float: right; } */
/* --- Responsive adjustments --- */
@media screen and (max-width: 900px) { /* Example breakpoint */
.github-stats-badge .repo-name {
display: none; /* Hide full repo name on smaller screens */
}
.github-stats-badge {
margin-left: 1em;
}
.github-stats-badge a {
gap: 0.5em;
}
}
@media screen and (max-width: 768px) {
/* Further hide or simplify on mobile if needed */
.github-stats-badge {
display: none; /* Example: Hide completely on smallest screens */
}
}
/* --- Ask AI Selection Button --- */
.ask-ai-selection-button {
background-color: var(--primary-dimmed-color, #09b5a5);
color: var(--background-color, #070708);
border: none;
padding: 4px 8px;
font-size: 0.8em;
border-radius: 4px;
cursor: pointer;
box-shadow: 0 2px 5px rgba(0, 0, 0, 0.3);
transition: background-color 0.2s ease;
white-space: nowrap;
}
.ask-ai-selection-button:hover {
background-color: var(--primary-color, #50ffff);
}
/* ==== File: docs/assets/layout.css (Additions) ==== */
/* ... (keep all existing layout CSS) ... */
/* --- Copy Code Button Styling --- */
/* Ensure the parent <pre> can contain the absolutely positioned button */
#terminal-mkdocs-main-content pre {
position: relative; /* Needed for absolute positioning of child */
/* Add top padding to make space for the button, which sits top-left */
padding-top: 2.5em;
padding-right: 1em; /* Ensure padding is sufficient */
}
.copy-code-button {
position: absolute;
top: 0.5em; /* Adjust spacing from top */
left: 0.5em; /* Adjust spacing from left */
z-index: 1; /* Sit on top of code */
background-color: var(--progress-bar-background, #444); /* Use a background */
color: var(--font-color, #eaeaea);
border: 1px solid var(--secondary-color, #727578);
padding: 3px 8px;
font-size: 0.8em;
font-family: var(--font-stack, monospace);
border-radius: 4px;
cursor: pointer;
opacity: 0; /* Hidden by default */
transition: opacity 0.2s ease-in-out, background-color 0.2s ease, color 0.2s ease;
white-space: nowrap;
}
/* Show button on hover of the <pre> container */
#terminal-mkdocs-main-content pre:hover .copy-code-button {
opacity: 0.8; /* Show partially */
}
.copy-code-button:hover {
opacity: 1; /* Fully visible on button hover */
background-color: var(--secondary-color, #727578);
}
.copy-code-button:focus {
opacity: 1; /* Ensure visible when focused */
outline: 1px dashed var(--primary-color);
}
/* Style for "Copied!" state */
.copy-code-button.copied {
background-color: var(--primary-dimmed-color, #09b5a5);
color: var(--background-color, #070708);
border-color: var(--primary-dimmed-color, #09b5a5);
opacity: 1; /* Ensure visible */
}
.copy-code-button.copied:hover {
background-color: var(--primary-dimmed-color, #09b5a5); /* Prevent hover change */
}
/* ==== File: docs/assets/layout.css (Additions) ==== */
/* ... (keep all existing layout CSS) ... */
/* --- Floating Ask AI Button --- */
.floating-ask-ai-button {
position: fixed;
bottom: 25px;
right: 25px;
z-index: 1050; /* Below modals, above most content */
background-color: var(--primary-dimmed-color, #09b5a5);
color: var(--background-color, #070708);
border: none;
border-radius: 50%; /* Make it circular */
width: 60px; /* Adjust size */
height: 60px; /* Adjust size */
padding: 10px; /* Adjust padding */
box-shadow: 0 4px 10px rgba(0, 0, 0, 0.4);
cursor: pointer;
transition: background-color 0.2s ease, transform 0.2s ease;
display: flex;
flex-direction: column; /* Stack icon and text */
align-items: center;
justify-content: center;
text-decoration: none;
text-align: center;
}
.floating-ask-ai-button svg {
width: 24px; /* Control icon size */
height: 24px;
}
.floating-ask-ai-button span {
font-size: 0.7em;
margin-top: 2px; /* Space between icon and text */
display: block; /* Ensure it takes space */
line-height: 1;
}
.floating-ask-ai-button:hover {
background-color: var(--primary-color, #50ffff);
transform: scale(1.05); /* Slight grow effect */
}
.floating-ask-ai-button:focus {
outline: 2px solid var(--primary-color);
outline-offset: 2px;
}
/* Optional: Hide text on smaller screens if needed */
@media screen and (max-width: 768px) {
.floating-ask-ai-button span {
/* display: none; */ /* Uncomment to hide text */
}
.floating-ask-ai-button {
width: 55px;
height: 55px;
bottom: 20px;
right: 20px;
}
}


@@ -0,0 +1,109 @@
// ==== File: docs/assets/selection_ask_ai.js ====
document.addEventListener('DOMContentLoaded', () => {
let askAiButton = null;
const askAiPageUrl = '/core/ask-ai/'; // Adjust if your Ask AI page path is different
function createAskAiButton() {
const button = document.createElement('button');
button.id = 'ask-ai-selection-btn';
button.className = 'ask-ai-selection-button';
button.textContent = 'Ask AI'; // Or use an icon
button.style.display = 'none'; // Initially hidden
button.style.position = 'absolute';
button.style.zIndex = '1500'; // Ensure it's on top
document.body.appendChild(button);
button.addEventListener('click', handleAskAiClick);
return button;
}
function getSafeSelectedText() {
const selection = window.getSelection();
if (!selection || selection.rangeCount === 0) {
return null;
}
// Avoid selecting text within the button itself if it was somehow selected
const container = selection.getRangeAt(0).commonAncestorContainer;
if (askAiButton && askAiButton.contains(container)) {
return null;
}
const text = selection.toString().trim();
return text.length > 0 ? text : null;
}
function positionButton(event) {
const selection = window.getSelection();
if (!selection || selection.rangeCount === 0 || selection.isCollapsed) {
hideButton();
return;
}
const range = selection.getRangeAt(0);
const rect = range.getBoundingClientRect();
// Calculate position: top-right of the selection
const scrollX = window.scrollX;
const scrollY = window.scrollY;
const buttonTop = rect.top + scrollY - askAiButton.offsetHeight - 5; // 5px above
const buttonLeft = rect.right + scrollX + 5; // 5px to the right
askAiButton.style.top = `${buttonTop}px`;
askAiButton.style.left = `${buttonLeft}px`;
askAiButton.style.display = 'block'; // Show the button
}
function hideButton() {
if (askAiButton) {
askAiButton.style.display = 'none';
}
}
function handleAskAiClick(event) {
event.stopPropagation(); // Prevent mousedown from hiding button immediately
const selectedText = getSafeSelectedText();
if (selectedText) {
console.log("Selected Text:", selectedText);
// Base64 encode for URL safety (handles special chars, line breaks)
// Use encodeURIComponent first for proper Unicode handling before btoa
const encodedText = btoa(unescape(encodeURIComponent(selectedText)));
const targetUrl = `${askAiPageUrl}?qq=${encodedText}`;
console.log("Navigating to:", targetUrl);
window.location.href = targetUrl; // Navigate to Ask AI page
}
hideButton(); // Hide after click
}
// --- Event Listeners ---
// Show button on mouse up after selection
document.addEventListener('mouseup', (event) => {
// Slight delay to ensure selection is registered
setTimeout(() => {
const selectedText = getSafeSelectedText();
if (selectedText) {
if (!askAiButton) {
askAiButton = createAskAiButton();
}
// Don't position if the click was ON the button itself
if (event.target !== askAiButton) {
positionButton(event);
}
} else {
hideButton();
}
}, 10); // Small delay
});
// Hide button on scroll or click elsewhere
document.addEventListener('mousedown', (event) => {
// Hide if clicking anywhere EXCEPT the button itself
if (askAiButton && event.target !== askAiButton) {
hideButton();
}
});
document.addEventListener('scroll', hideButton, true); // Capture scroll events
console.log("Selection Ask AI script loaded.");
});


@@ -6,8 +6,8 @@
}
:root {
--global-font-size: 16px;
--global-code-font-size: 16px;
--global-font-size: 14px;
--global-code-font-size: 13px;
--global-line-height: 1.5em;
--global-space: 10px;
--font-stack: Menlo, Monaco, Lucida Console, Liberation Mono, DejaVu Sans Mono, Bitstream Vera Sans Mono,
@@ -50,8 +50,17 @@
--display-h1-decoration: none;
--display-h1-decoration: none;
--header-height: 65px; /* Adjust based on your actual header height */
--sidebar-width: 280px; /* Adjust based on your desired sidebar width */
--toc-width: 240px; /* Adjust based on your desired ToC width */
--layout-transition-speed: 0.2s; /* For potential future animations */
--page-width : 100em; /* Adjust based on your design */
}
/* body {
background-color: var(--background-color);
color: var(--font-color);
@@ -256,4 +265,6 @@ div.badges a {
}
div.badges a > img {
width: auto;
}
}

docs/md_v2/assets/toc.js Normal file

@@ -0,0 +1,144 @@
// ==== File: assets/toc.js ====
document.addEventListener('DOMContentLoaded', () => {
const mainContent = document.getElementById('terminal-mkdocs-main-content');
const tocContainer = document.getElementById('toc-sidebar');
const mainGrid = document.querySelector('.terminal-mkdocs-main-grid'); // Get the flex container
if (!mainContent) {
console.warn("TOC Generator: Main content area '#terminal-mkdocs-main-content' not found.");
return;
}
// --- Create ToC container if it doesn't exist ---
let tocElement = tocContainer;
if (!tocElement) {
if (!mainGrid) {
console.warn("TOC Generator: Flex container '.terminal-mkdocs-main-grid' not found to append ToC.");
return;
}
tocElement = document.createElement('aside');
tocElement.id = 'toc-sidebar';
tocElement.style.display = 'none'; // Keep hidden initially
// Append it as the last child of the flex grid
mainGrid.appendChild(tocElement);
console.info("TOC Generator: Created '#toc-sidebar' element.");
}
// --- Find Headings (h2, h3, h4 are common for ToC) ---
const headings = mainContent.querySelectorAll('h2, h3, h4');
if (headings.length === 0) {
console.info("TOC Generator: No headings found on this page. ToC not generated.");
tocElement.style.display = 'none'; // Ensure it's hidden
return;
}
// --- Generate ToC List ---
const tocList = document.createElement('ul');
const observerTargets = []; // Store headings for IntersectionObserver
headings.forEach((heading, index) => {
// Ensure heading has an ID for linking
if (!heading.id) {
// Create a simple slug-like ID
heading.id = `toc-heading-${index}-${heading.textContent.toLowerCase().replace(/\s+/g, '-').replace(/[^a-z0-9-]/g, '')}`;
}
const listItem = document.createElement('li');
const link = document.createElement('a');
link.href = `#${heading.id}`;
link.textContent = heading.textContent;
// Add class for styling based on heading level
const level = parseInt(heading.tagName.substring(1), 10); // Get 2, 3, or 4
listItem.classList.add(`toc-level-${level}`);
listItem.appendChild(link);
tocList.appendChild(listItem);
observerTargets.push(heading); // Add to observer list
});
// --- Populate and Show ToC ---
// Optional: Add a title
const tocTitle = document.createElement('h4');
tocTitle.textContent = 'On this page'; // Customize title if needed
tocElement.innerHTML = ''; // Clear previous content if any
tocElement.appendChild(tocTitle);
tocElement.appendChild(tocList);
tocElement.style.display = ''; // Show the ToC container
console.info(`TOC Generator: Generated ToC with ${headings.length} items.`);
// --- Scroll Spy using Intersection Observer ---
const tocLinks = tocElement.querySelectorAll('a');
let activeLink = null; // Keep track of the current active link
const observerOptions = {
// Observe changes relative to the viewport, offset by the header height
// Negative top margin pushes the intersection trigger point down
// Negative bottom margin ensures elements low on the screen can trigger before they exit
rootMargin: `-${getComputedStyle(document.documentElement).getPropertyValue('--header-height').trim()} 0px -60% 0px`,
threshold: 0 // Trigger as soon as any part enters/exits the boundary
};
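// Example: with --header-height: 55px the computed option is
// rootMargin: "-55px 0px -60% 0px", so a heading counts as visible only
// once it is more than 55px below the viewport top and above the bottom 60%.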
const observerCallback = (entries) => {
let topmostVisibleHeading = null;
entries.forEach(entry => {
const link = tocElement.querySelector(`a[href="#${entry.target.id}"]`);
if (!link) return;
// Check if the heading is intersecting (partially or fully visible within rootMargin)
if (entry.isIntersecting) {
// Among visible headings, find the one closest to the top edge (within the rootMargin)
if (!topmostVisibleHeading || entry.boundingClientRect.top < topmostVisibleHeading.boundingClientRect.top) {
topmostVisibleHeading = entry.target;
}
}
});
// If we found a topmost visible heading, activate its link
if (topmostVisibleHeading) {
const newActiveLink = tocElement.querySelector(`a[href="#${topmostVisibleHeading.id}"]`);
if (newActiveLink && newActiveLink !== activeLink) {
// Remove active class from previous link
if (activeLink) {
activeLink.classList.remove('active');
activeLink.parentElement.classList.remove('active-parent'); // Optional parent styling
}
// Add active class to the new link
newActiveLink.classList.add('active');
newActiveLink.parentElement.classList.add('active-parent'); // Optional parent styling
activeLink = newActiveLink;
// Optional: Scroll the ToC sidebar to keep the active link visible
// newActiveLink.scrollIntoView({ behavior: 'smooth', block: 'nearest' });
}
}
// If no headings are intersecting (scrolled past the last one?), maybe deactivate all
// Or keep the last one active - depends on desired behavior. Current logic keeps last active.
};
const observer = new IntersectionObserver(observerCallback, observerOptions);
// Observe all target headings
observerTargets.forEach(heading => observer.observe(heading));
// Initial check in case a heading is already in view on load
// (Requires slight delay for accurate layout calculation)
setTimeout(() => {
observerCallback(observer.takeRecords()); // Process initial state
}, 100);
// Move the footer (and the <hr> just before it) to the end of the main content
const footer = document.querySelector('footer');
if (footer) { // Guard against pages without a footer
const hr = footer.previousElementSibling;
if (hr && hr.tagName === 'HR') {
mainContent.appendChild(hr);
}
mainContent.appendChild(footer);
console.info("TOC Generator: Footer moved to the end of the main content.");
}
});

View File

@@ -16,7 +16,7 @@ My dear friends and crawlers, there you go, this is the release of Crawl4AI v0.5
* **Multiple Crawler Strategies:** Choose between the full-featured Playwright browser-based crawler or a new, *much* faster HTTP-only crawler for simpler tasks.
* **Docker Deployment:** Deploy Crawl4AI as a scalable, self-contained service with built-in API endpoints and optional JWT authentication.
* **Command-Line Interface (CLI):** Interact with Crawl4AI directly from your terminal. Crawl, configure, and extract data with simple commands.
* **LLM Configuration (`LlmConfig`):** A new, unified way to configure LLM providers (OpenAI, Anthropic, Ollama, etc.) for extraction, filtering, and schema generation. Simplifies API key management and switching between models.
* **LLM Configuration (`LLMConfig`):** A new, unified way to configure LLM providers (OpenAI, Anthropic, Ollama, etc.) for extraction, filtering, and schema generation. Simplifies API key management and switching between models.
**Minor Updates & Improvements:**
@@ -47,7 +47,7 @@ This release includes several breaking changes to improve the library's structur
* **Config**: FastFilterChain has been replaced with FilterChain
* **Deep-Crawl**: DeepCrawlStrategy.arun now returns Union[CrawlResultT, List[CrawlResultT], AsyncGenerator[CrawlResultT, None]]
* **Proxy**: Removed synchronous WebCrawler support and related rate limiting configurations
* **LLM Parameters:** Use the new `LlmConfig` object instead of passing `provider`, `api_token`, `base_url`, and `api_base` directly to `LLMExtractionStrategy` and `LLMContentFilter`.
* **LLM Parameters:** Use the new `LLMConfig` object instead of passing `provider`, `api_token`, `base_url`, and `api_base` directly to `LLMExtractionStrategy` and `LLMContentFilter`.
**In short:** Update imports, adjust `arun_many()` usage, check for optional fields, and review the Docker deployment guide.
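A minimal before/after sketch of that migration (values are placeholders; the `schema` argument is elided):
```python
from crawl4ai import LLMConfig
from crawl4ai.extraction_strategy import LLMExtractionStrategy

# Before (old style, no longer supported):
# strategy = LLMExtractionStrategy(provider="openai/gpt-4o", api_token="YOUR_API_KEY", schema=...)

# After (pass a single LLMConfig instead):
llm_config = LLMConfig(provider="openai/gpt-4o", api_token="YOUR_API_KEY")
strategy = LLMExtractionStrategy(llm_config=llm_config, schema=...)
```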

View File

@@ -251,7 +251,7 @@ from crawl4ai import (
RoundRobinProxyStrategy,
)
import asyncio
from crawl4ai.configs import ProxyConfig
from crawl4ai.proxy_strategy import ProxyConfig
async def main():
# Load proxies and create rotation strategy
proxies = ProxyConfig.from_env()
@@ -305,13 +305,13 @@ asyncio.run(main())
```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, DefaultMarkdownGenerator
from crawl4ai.content_filter_strategy import LLMContentFilter
from crawl4ai.async_configs import LlmConfig
from crawl4ai import LLMConfig
import asyncio
llm_config = LlmConfig(provider="gemini/gemini-1.5-pro", api_token="env:GEMINI_API_KEY")
llm_config = LLMConfig(provider="gemini/gemini-1.5-pro", api_token="env:GEMINI_API_KEY")
markdown_generator = DefaultMarkdownGenerator(
content_filter=LLMContentFilter(llmConfig=llm_config, instruction="Extract key concepts and summaries")
content_filter=LLMContentFilter(llm_config=llm_config, instruction="Extract key concepts and summaries")
)
config = CrawlerRunConfig(markdown_generator=markdown_generator)
@@ -335,13 +335,13 @@ asyncio.run(main())
```python
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
from crawl4ai.async_configs import LlmConfig
from crawl4ai import LLMConfig
llm_config = LlmConfig(provider="gemini/gemini-1.5-pro", api_token="env:GEMINI_API_KEY")
llm_config = LLMConfig(provider="gemini/gemini-1.5-pro", api_token="env:GEMINI_API_KEY")
schema = JsonCssExtractionStrategy.generate_schema(
html="<div class='product'><h2>Product Name</h2><span class='price'>$99</span></div>",
llmConfig = llm_config,
llm_config = llm_config,
query="Extract product name and price"
)
print(schema)
@@ -394,20 +394,20 @@ print(schema)
serialization, especially for sets of allowed/blocked domains. No code changes
required.
- **Added: New `LlmConfig` parameter.** This new parameter can be passed for
- **Added: New `LLMConfig` parameter.** This new parameter can be passed for
extraction, filtering, and schema generation tasks. It simplifies passing
provider strings, API tokens, and base URLs across all sections where LLM
configuration is necessary. It also enables reuse and allows for quick
experimentation between different LLM configurations.
```python
from crawl4ai.async_configs import LlmConfig
from crawl4ai import LLMConfig
from crawl4ai.extraction_strategy import LLMExtractionStrategy
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
# Example of using LlmConfig with LLMExtractionStrategy
llm_config = LlmConfig(provider="openai/gpt-4o", api_token="YOUR_API_KEY")
strategy = LLMExtractionStrategy(llmConfig=llm_config, schema=...)
# Example of using LLMConfig with LLMExtractionStrategy
llm_config = LLMConfig(provider="openai/gpt-4o", api_token="YOUR_API_KEY")
strategy = LLMExtractionStrategy(llm_config=llm_config, schema=...)
# Example usage within a crawler
async with AsyncWebCrawler() as crawler:
@@ -418,7 +418,7 @@ print(schema)
```
**Breaking Change:** Removed old parameters like `provider`, `api_token`,
`base_url`, and `api_base` from `LLMExtractionStrategy` and
`LLMContentFilter`. Users should migrate to using the `LlmConfig` object.
`LLMContentFilter`. Users should migrate to using the `LLMConfig` object.
- **Changed: Improved browser context management and added shared data support.
(Breaking Change:** `BrowserContext` API updated). Browser contexts are now

74
docs/md_v2/core/ask-ai.md Normal file
View File

@@ -0,0 +1,74 @@
<div class="ask-ai-container">
<iframe id="ask-ai-frame" src="../../ask_ai/index.html" width="100%" style="border:none; display: block;" title="Crawl4AI Assistant"></iframe>
</div>
<script>
// Iframe height adjustment
function resizeAskAiIframe() {
const iframe = document.getElementById('ask-ai-frame');
if (iframe) {
const headerHeight = parseFloat(getComputedStyle(document.documentElement).getPropertyValue('--header-height') || '55');
// Footer is removed by JS below, so calculate height based on header + small buffer
const topOffset = headerHeight + 20; // Header + buffer/margin
const availableHeight = window.innerHeight - topOffset;
iframe.style.height = Math.max(600, availableHeight) + 'px'; // Min height 600px
}
}
// Run immediately and on resize/load
resizeAskAiIframe(); // Initial call
let resizeTimer;
window.addEventListener('load', resizeAskAiIframe);
window.addEventListener('resize', () => {
clearTimeout(resizeTimer);
resizeTimer = setTimeout(resizeAskAiIframe, 150);
});
// Remove Footer & HR from parent page (DOM Ready might be safer)
document.addEventListener('DOMContentLoaded', () => {
setTimeout(() => { // Add slight delay just in case elements render slowly
const footer = window.parent.document.querySelector('footer'); // Target parent document
if (footer) {
const hrBeforeFooter = footer.previousElementSibling;
if (hrBeforeFooter && hrBeforeFooter.tagName === 'HR') {
hrBeforeFooter.remove();
}
footer.remove();
// Trigger resize again after removing footer
resizeAskAiIframe();
} else {
console.warn("Ask AI Page: Could not find footer in parent document to remove.");
}
}, 100); // Shorter delay
});
</script>
<style>
#terminal-mkdocs-main-content {
padding: 0 !important;
margin: 0;
width: 100%;
height: 100%;
overflow: hidden; /* Prevent body scrollbars, panels handle scroll */
}
/* Ensure iframe container takes full space */
#terminal-mkdocs-main-content .ask-ai-container {
/* Remove negative margins if footer removal handles space */
margin: 0;
padding: 0;
max-width: none;
/* Let the JS set the height */
/* height: 600px; Initial fallback height */
overflow: hidden; /* Hide potential overflow before JS resize */
}
/* Hide title/paragraph if they were part of the markdown */
/* Alternatively, just remove them from the .md file directly */
/* #terminal-mkdocs-main-content > h1,
#terminal-mkdocs-main-content > p:first-of-type {
display: none;
} */
</style>

View File

@@ -4,7 +4,7 @@ Crawl4AI's flexibility stems from these key classes:
1. **`BrowserConfig`** - Dictates **how** the browser is launched and behaves (e.g., headless or visible, proxy, user agent).
2. **`CrawlerRunConfig`** - Dictates **how** each **crawl** operates (e.g., caching, extraction, timeouts, JavaScript code to run, etc.).
3. **`LlmConfig`** - Dictates **how** LLM providers are configured. (model, api token, base url, temperature etc.)
3. **`LLMConfig`** - Dictates **how** LLM providers are configured. (model, api token, base url, temperature etc.)
In most examples, you create **one** `BrowserConfig` for the entire crawler session, then pass a **fresh** or re-used `CrawlerRunConfig` whenever you call `arun()`. This tutorial shows the most commonly used parameters. If you need advanced or rarely used fields, see the [Configuration Parameters](../api/parameters.md).
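A minimal sketch of that pattern (URL and settings are placeholders):
```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig

async def main():
    browser_cfg = BrowserConfig(headless=True)           # one per session
    run_cfg = CrawlerRunConfig(word_count_threshold=10)  # fresh (or cloned) per call
    async with AsyncWebCrawler(config=browser_cfg) as crawler:
        result = await crawler.arun("https://example.com", config=run_cfg)
        print(result.success)

asyncio.run(main())
```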
@@ -136,6 +136,7 @@ class CrawlerRunConfig:
wait_for=None,
screenshot=False,
pdf=False,
capture_mhtml=False,
enable_rate_limiting=False,
rate_limit_config=None,
memory_threshold_percent=70.0,
@@ -175,10 +176,9 @@ class CrawlerRunConfig:
- A CSS or JS expression to wait for before extracting content.
- Common usage: `wait_for="css:.main-loaded"` or `wait_for="js:() => window.loaded === true"`.
7. **`screenshot`** & **`pdf`**:
- If `True`, captures a screenshot or PDF after the page is fully loaded.
- The results go to `result.screenshot` (base64) or `result.pdf` (bytes).
7. **`screenshot`**, **`pdf`**, & **`capture_mhtml`**:
- If `True`, captures a screenshot, PDF, or MHTML snapshot after the page is fully loaded.
- The results go to `result.screenshot` (base64), `result.pdf` (bytes), or `result.mhtml` (string); a combined sketch follows this list.
8. **`verbose`**:
- Logs additional runtime details.
- Overlaps with the browser's verbosity if also set to `True` in `BrowserConfig`.
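Here is the combined sketch referenced in item 7 above (the URL is a placeholder):
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(screenshot=True, pdf=True, capture_mhtml=True)
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=config)
        if result.success:
            print("screenshot captured:", result.screenshot is not None)
            print("pdf bytes:", len(result.pdf or b""))
            print("mhtml chars:", len(result.mhtml or ""))

asyncio.run(main())
```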
@@ -239,7 +239,7 @@ The `clone()` method:
## 3. LlmConfig Essentials
## 3. LLMConfig Essentials
### Key fields to note
@@ -256,16 +256,16 @@ The `clone()` method:
- If your provider has a custom endpoint
```python
llmConfig = LlmConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY"))
llm_config = LLMConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY"))
```
## 4. Putting It All Together
In a typical scenario, you define **one** `BrowserConfig` for your crawler session, then create **one or more** `CrawlerRunConfig` & `LlmConfig` depending on each call's needs:
In a typical scenario, you define **one** `BrowserConfig` for your crawler session, then create **one or more** `CrawlerRunConfig` & `LLMConfig` depending on each call's needs:
```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode, LlmConfig
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode, LLMConfig
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
async def main():
@@ -289,14 +289,14 @@ async def main():
# 3) Example LLM content filtering
gemini_config = LlmConfig(
gemini_config = LLMConfig(
provider="gemini/gemini-1.5-pro"
api_token = "env:GEMINI_API_TOKEN"
)
# Initialize LLM filter with specific instruction
filter = LLMContentFilter(
llmConfig=gemini_config, # or your preferred provider
llm_config=gemini_config, # or your preferred provider
instruction="""
Focus on extracting the core educational content.
Include:
@@ -343,7 +343,7 @@ if __name__ == "__main__":
For a **detailed list** of available parameters (including advanced ones), see:
- [BrowserConfig, CrawlerRunConfig & LlmConfig Reference](../api/parameters.md)
- [BrowserConfig, CrawlerRunConfig & LLMConfig Reference](../api/parameters.md)
You can explore topics like:
@@ -356,7 +356,7 @@ You can explore topics like:
## 6. Conclusion
**BrowserConfig**, **CrawlerRunConfig** and **LlmConfig** give you straightforward ways to define:
**BrowserConfig**, **CrawlerRunConfig** and **LLMConfig** give you straightforward ways to define:
- **Which** browser to launch, how it should run, and any proxy or user agent needs.
- **How** each crawl should behave—caching, timeouts, JavaScript code, extraction strategies, etc.

View File

@@ -8,6 +8,10 @@ Below, we show how to configure these parameters and combine them for precise co
## 1. CSS-Based Selection
There are two ways to select content from a page: using `css_selector` or the more flexible `target_elements`.
### 1.1 Using `css_selector`
A straightforward way to **limit** your crawl results to a certain region of the page is **`css_selector`** in **`CrawlerRunConfig`**:
```python
@@ -32,6 +36,33 @@ if __name__ == "__main__":
**Result**: Only elements matching that selector remain in `result.cleaned_html`.
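For instance, a minimal sketch (the selector and URL are illustrative assumptions):
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(css_selector="article.main-content")  # assumed selector
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com/blog-post", config=config)
        print((result.cleaned_html or "")[:300])  # only the selected region remains

asyncio.run(main())
```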
### 1.2 Using `target_elements`
The `target_elements` parameter provides more flexibility by allowing you to target **multiple elements** for content extraction while preserving the entire page context for other features:
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
async def main():
config = CrawlerRunConfig(
# Target article body and sidebar, but not other content
target_elements=["article.main-content", "aside.sidebar"]
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com/blog-post",
config=config
)
print("Markdown focused on target elements")
print("Links from entire page still available:", len(result.links.get("internal", [])))
if __name__ == "__main__":
asyncio.run(main())
```
**Key difference**: With `target_elements`, the markdown generation and structural data extraction focus on those elements, but other page elements (like links, images, and tables) are still extracted from the entire page. This gives you fine-grained control over what appears in your markdown content while preserving full page context for link analysis and media collection.
---
## 2. Content Filtering & Exclusions
@@ -211,7 +242,7 @@ if __name__ == "__main__":
import asyncio
import json
from pydantic import BaseModel, Field
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, LlmConfig
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, LLMConfig
from crawl4ai.extraction_strategy import LLMExtractionStrategy
class ArticleData(BaseModel):
@@ -220,7 +251,7 @@ class ArticleData(BaseModel):
async def main():
llm_strategy = LLMExtractionStrategy(
llmConfig = LlmConfig(provider="openai/gpt-4",api_token="sk-YOUR_API_KEY")
llm_config = LLMConfig(provider="openai/gpt-4",api_token="sk-YOUR_API_KEY")
schema=ArticleData.schema(),
extraction_type="schema",
instruction="Extract 'headline' and a short 'summary' from the content."
@@ -404,15 +435,59 @@ Stick to BeautifulSoup strategy (default) when:
---
## 7. Conclusion
## 7. Combining CSS Selection Methods
By mixing **css_selector** scoping, **content filtering** parameters, and advanced **extraction strategies**, you can precisely **choose** which data to keep. Key parameters in **`CrawlerRunConfig`** for content selection include:
You can combine `css_selector` and `target_elements` in powerful ways to achieve fine-grained control over your output:
1. **`css_selector`** Basic scoping to an element or region.
2. **`word_count_threshold`** Skip short blocks.
3. **`excluded_tags`** Remove entire HTML tags.
4. **`exclude_external_links`**, **`exclude_social_media_links`**, **`exclude_domains`** Filter out unwanted links or domains.
5. **`exclude_external_images`** Remove images from external sources.
6. **`process_iframes`** Merge iframe content if needed.
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode
async def main():
# Target specific content but preserve page context
config = CrawlerRunConfig(
# Focus markdown on main content and sidebar
target_elements=["#main-content", ".sidebar"],
# Global filters applied to entire page
excluded_tags=["nav", "footer", "header"],
exclude_external_links=True,
# Use basic content thresholds
word_count_threshold=15,
cache_mode=CacheMode.BYPASS
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com/article",
config=config
)
print(f"Content focuses on specific elements, but all links still analyzed")
print(f"Internal links: {len(result.links.get('internal', []))}")
print(f"External links: {len(result.links.get('external', []))}")
if __name__ == "__main__":
asyncio.run(main())
```
This approach gives you the best of both worlds:
- Markdown generation and content extraction focus on the elements you care about
- Links, images and other page data still give you the full context of the page
- Content filtering still applies globally
## 8. Conclusion
By mixing **target_elements** or **css_selector** scoping, **content filtering** parameters, and advanced **extraction strategies**, you can precisely **choose** which data to keep. Key parameters in **`CrawlerRunConfig`** for content selection include:
1. **`target_elements`** Array of CSS selectors to focus markdown generation and data extraction, while preserving full page context for links and media.
2. **`css_selector`** Basic scoping to an element or region for all extraction processes.
3. **`word_count_threshold`** Skip short blocks.
4. **`excluded_tags`** Remove entire HTML tags.
5. **`exclude_external_links`**, **`exclude_social_media_links`**, **`exclude_domains`** Filter out unwanted links or domains.
6. **`exclude_external_images`** Remove images from external sources.
7. **`process_iframes`** Merge iframe content if needed.
Combine these with structured extraction (CSS, LLM-based, or others) to build powerful crawls that yield exactly the content you want, from raw or cleaned HTML up to sophisticated JSON structures. For more detail, see [Configuration Reference](../api/parameters.md). Enjoy curating your data to the max!
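As a closing sketch, content selection pairs naturally with schema-based extraction (the schema and selectors below are hypothetical):
```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

schema = {
    "name": "Articles",
    "baseSelector": "article.main-content",  # hypothetical selectors
    "fields": [
        {"name": "title", "selector": "h2", "type": "text"},
        {"name": "url", "selector": "a", "type": "attribute", "attribute": "href"},
    ],
}

async def main():
    config = CrawlerRunConfig(
        excluded_tags=["nav", "footer"],
        extraction_strategy=JsonCssExtractionStrategy(schema),
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com/blog", config=config)
        print(json.loads(result.extracted_content or "[]"))

asyncio.run(main())
```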

View File

@@ -26,6 +26,7 @@ class CrawlResult(BaseModel):
downloaded_files: Optional[List[str]] = None
screenshot: Optional[str] = None
pdf : Optional[bytes] = None
mhtml: Optional[str] = None
markdown: Optional[Union[str, MarkdownGenerationResult]] = None
extracted_content: Optional[str] = None
metadata: Optional[dict] = None
@@ -51,6 +52,7 @@ class CrawlResult(BaseModel):
| **downloaded_files (`Optional[List[str]]`)** | If `accept_downloads=True` in `BrowserConfig`, this lists the filepaths of saved downloads. |
| **screenshot (`Optional[str]`)** | Screenshot of the page (base64-encoded) if `screenshot=True`. |
| **pdf (`Optional[bytes]`)** | PDF of the page if `pdf=True`. |
| **mhtml (`Optional[str]`)** | MHTML snapshot of the page if `capture_mhtml=True`. Contains the full page with all resources. |
| **markdown (`Optional[str or MarkdownGenerationResult]`)** | It holds a `MarkdownGenerationResult`. Over time, this will be consolidated into `markdown`. The generator can provide raw markdown, citations, references, and optionally `fit_markdown`. |
| **extracted_content (`Optional[str]`)** | The output of a structured extraction (CSS/LLM-based) stored as JSON string or other text. |
| **metadata (`Optional[dict]`)** | Additional info about the crawl or extracted data. |
@@ -190,18 +192,27 @@ for img in images:
print("Image URL:", img["src"], "Alt:", img.get("alt"))
```
### 5.3 `screenshot` and `pdf`
### 5.3 `screenshot`, `pdf`, and `mhtml`
If you set `screenshot=True` or `pdf=True` in **`CrawlerRunConfig`**, then:
If you set `screenshot=True`, `pdf=True`, or `capture_mhtml=True` in **`CrawlerRunConfig`**, then:
- `result.screenshot` contains a base64-encoded PNG string.
- `result.pdf` contains raw PDF bytes (you can write them to a file).
- `result.mhtml` contains the MHTML snapshot of the page as a string (you can write it to a .mhtml file).
```python
# Save the PDF
with open("page.pdf", "wb") as f:
f.write(result.pdf)
# Save the MHTML
if result.mhtml:
with open("page.mhtml", "w", encoding="utf-8") as f:
f.write(result.mhtml)
```
The MHTML (MIME HTML) format is particularly useful as it captures the entire web page including all of its resources (CSS, images, scripts, etc.) in a single file, making it perfect for archiving or offline viewing.
### 5.4 `ssl_certificate`
If `fetch_ssl_certificate=True`, `result.ssl_certificate` holds details about the site's SSL cert, such as issuer, validity dates, etc.
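A hedged sketch of reading the certificate (attribute names are assumptions; check the `SSLCertificate` model for the exact fields):
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(fetch_ssl_certificate=True)
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=config)
        if result.success and result.ssl_certificate:
            cert = result.ssl_certificate
            # getattr keeps the sketch safe if a field name differs
            print("Issuer:", getattr(cert, "issuer", "n/a"))
            print("Valid until:", getattr(cert, "valid_until", "n/a"))

asyncio.run(main())
```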

File diff suppressed because it is too large

View File

@@ -4,7 +4,35 @@ In this tutorial, you'll learn how to:
1. Extract links (internal, external) from crawled pages
2. Filter or exclude specific domains (e.g., social media or custom domains)
3. Access and manage media data (especially images) in the crawl result
3. Access and manage media data (especially images) in the crawl result
### 3.2 Excluding Images
#### Excluding External Images
If you're dealing with heavy pages or want to skip third-party images (advertisements, for example), you can turn on:
```python
crawler_cfg = CrawlerRunConfig(
exclude_external_images=True
)
```
This setting attempts to discard images from outside the primary domain, keeping only those from the site you're crawling.
#### Excluding All Images
If you want to completely remove all images from the page to maximize performance and reduce memory usage, use:
```python
crawler_cfg = CrawlerRunConfig(
exclude_all_images=True
)
```
This setting removes all images very early in the processing pipeline, which significantly improves memory efficiency and processing speed. This is particularly useful when:
- You don't need image data in your results
- You're crawling image-heavy pages that cause memory issues
- You want to focus only on text content
- You need to maximize crawling speed
4. Configure your crawler to exclude or prioritize certain images
> **Prerequisites**
@@ -133,19 +161,28 @@ This approach is handy when you still want external links but need to block cert
### 3.1 Accessing `result.media`
By default, Crawl4AI collects images, audio, and video URLs it finds on the page. These are stored in `result.media`, a dictionary keyed by media type (e.g., `images`, `videos`, `audio`).
By default, Crawl4AI collects images, audio, video URLs, and data tables it finds on the page. These are stored in `result.media`, a dictionary keyed by media type (e.g., `images`, `videos`, `audio`, `tables`).
**Basic Example**:
```python
if result.success:
# Get images
images_info = result.media.get("images", [])
print(f"Found {len(images_info)} images in total.")
for i, img in enumerate(images_info[:5]): # Inspect just the first 5
for i, img in enumerate(images_info[:3]): # Inspect just the first 3
print(f"[Image {i}] URL: {img['src']}")
print(f" Alt text: {img.get('alt', '')}")
print(f" Score: {img.get('score')}")
print(f" Description: {img.get('desc', '')}\n")
# Get tables
tables = result.media.get("tables", [])
print(f"Found {len(tables)} data tables in total.")
for i, table in enumerate(tables):
print(f"[Table {i}] Caption: {table.get('caption', 'No caption')}")
print(f" Columns: {len(table.get('headers', []))}")
print(f" Rows: {len(table.get('rows', []))}")
```
**Structure Example**:
@@ -171,6 +208,19 @@ result.media = {
],
"audio": [
# Similar structure but with audio-specific fields
],
"tables": [
{
"headers": ["Name", "Age", "Location"],
"rows": [
["John Doe", "34", "New York"],
["Jane Smith", "28", "San Francisco"],
["Alex Johnson", "42", "Chicago"]
],
"caption": "Employee Directory",
"summary": "Directory of company employees"
},
# More tables if present
]
}
```
@@ -199,12 +249,91 @@ crawler_cfg = CrawlerRunConfig(
This setting attempts to discard images from outside the primary domain, keeping only those from the site you're crawling.
### 3.3 Additional Media Config
### 3.3 Working with Tables
Crawl4AI can detect and extract structured data from HTML tables. Tables are analyzed based on various criteria to determine if they are actual data tables (as opposed to layout tables), including:
- Presence of thead and tbody sections
- Use of th elements for headers
- Column consistency
- Text density
- And other factors
Tables that score above the threshold (default: 7) are extracted and stored in `result.media.tables`.
**Accessing Table Data**:
```python
if result.success:
tables = result.media.get("tables", [])
print(f"Found {len(tables)} data tables on the page")
if tables:
# Access the first table
first_table = tables[0]
print(f"Table caption: {first_table.get('caption', 'No caption')}")
print(f"Headers: {first_table.get('headers', [])}")
# Print the first 3 rows
for i, row in enumerate(first_table.get('rows', [])[:3]):
print(f"Row {i+1}: {row}")
```
**Configuring Table Extraction**:
You can adjust the sensitivity of the table detection algorithm with:
```python
crawler_cfg = CrawlerRunConfig(
table_score_threshold=5 # Lower value = more tables detected (default: 7)
)
```
Each extracted table contains:
- `headers`: Column header names
- `rows`: List of rows, each containing cell values
- `caption`: Table caption text (if available)
- `summary`: Table summary attribute (if specified)
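For downstream analysis, a small sketch (assuming pandas is installed; the keys match the list above) that turns one extracted table into a DataFrame:
```python
import pandas as pd  # assumed dependency, not required by crawl4ai

def table_to_dataframe(table: dict) -> pd.DataFrame:
    # 'headers' and 'rows' are the documented keys on each extracted table
    return pd.DataFrame(table.get("rows", []), columns=table.get("headers") or None)

# Usage, given a successful CrawlResult named `result`:
# for t in result.media.get("tables", []):
#     print(t.get("caption", "No caption"), table_to_dataframe(t).shape)
```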
### 3.4 Additional Media Config
- **`screenshot`**: Set to `True` if you want a full-page screenshot stored as `base64` in `result.screenshot`.
- **`pdf`**: Set to `True` if you want a PDF version of the page in `result.pdf`.
- **`capture_mhtml`**: Set to `True` if you want an MHTML snapshot of the page in `result.mhtml`. This format preserves the entire web page with all its resources (CSS, images, scripts) in a single file, making it perfect for archiving or offline viewing.
- **`wait_for_images`**: If `True`, attempts to wait until images are fully loaded before final extraction.
#### Example: Capturing Page as MHTML
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
async def main():
crawler_cfg = CrawlerRunConfig(
capture_mhtml=True # Enable MHTML capture
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun("https://example.com", config=crawler_cfg)
if result.success and result.mhtml:
# Save the MHTML snapshot to a file
with open("example.mhtml", "w", encoding="utf-8") as f:
f.write(result.mhtml)
print("MHTML snapshot saved to example.mhtml")
else:
print("Failed to capture MHTML:", result.error_message)
if __name__ == "__main__":
asyncio.run(main())
```
The MHTML format is particularly useful because:
- It captures the complete page state including all resources
- It can be opened in most modern browsers for offline viewing
- It preserves the page exactly as it appeared during crawling
- It's a single file, making it easy to store and transfer
---
## 4. Putting It All Together: Link & Media Filtering
@@ -273,4 +402,11 @@ if __name__ == "__main__":
---
**That's it for Link & Media Analysis!** You're now equipped to filter out unwanted sites and zero in on the images and videos that matter for your project.
### Table Extraction Tips
- Not all HTML tables are extracted - only those detected as "data tables" vs. layout tables.
- Tables with inconsistent cell counts, nested tables, or those used purely for layout may be skipped.
- If you're missing tables, try adjusting the `table_score_threshold` to a lower value (default is 7).
The table detection algorithm scores tables based on features like consistent columns, presence of headers, text density, and more. Tables scoring above the threshold are considered data tables worth extracting.

View File

@@ -175,13 +175,13 @@ prune_filter = PruningContentFilter(
For intelligent content filtering and high-quality markdown generation, you can use the **LLMContentFilter**. This filter leverages LLMs to generate relevant markdown while preserving the original content's meaning and structure:
```python
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, LlmConfig
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, LLMConfig
from crawl4ai.content_filter_strategy import LLMContentFilter
async def main():
# Initialize LLM filter with specific instruction
filter = LLMContentFilter(
llmConfig = LlmConfig(provider="openai/gpt-4o",api_token="your-api-token"), #or use environment variable
llm_config = LLMConfig(provider="openai/gpt-4o",api_token="your-api-token"), #or use environment variable
instruction="""
Focus on extracting the core educational content.
Include:

View File

@@ -128,7 +128,7 @@ Crawl4AI can also extract structured data (JSON) using CSS or XPath selectors. B
```python
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
from crawl4ai.async_configs import LlmConfig
from crawl4ai import LLMConfig
# Generate a schema (one-time cost)
html = "<div class='product'><h2>Gaming Laptop</h2><span class='price'>$999.99</span></div>"
@@ -136,13 +136,13 @@ html = "<div class='product'><h2>Gaming Laptop</h2><span class='price'>$999.99</
# Using OpenAI (requires API token)
schema = JsonCssExtractionStrategy.generate_schema(
html,
llmConfig = LlmConfig(provider="openai/gpt-4o",api_token="your-openai-token") # Required for OpenAI
llm_config = LLMConfig(provider="openai/gpt-4o",api_token="your-openai-token") # Required for OpenAI
)
# Or using Ollama (open source, no token needed)
schema = JsonCssExtractionStrategy.generate_schema(
html,
llmConfig = LlmConfig(provider="ollama/llama3.3", api_token=None) # Not needed for Ollama
llm_config = LLMConfig(provider="ollama/llama3.3", api_token=None) # Not needed for Ollama
)
# Use the schema for fast, repeated extractions
@@ -211,7 +211,7 @@ import os
import json
import asyncio
from pydantic import BaseModel, Field
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, LlmConfig
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, LLMConfig
from crawl4ai.extraction_strategy import LLMExtractionStrategy
class OpenAIModelFee(BaseModel):
@@ -241,7 +241,7 @@ async def extract_structured_data_using_llm(
word_count_threshold=1,
page_timeout=80000,
extraction_strategy=LLMExtractionStrategy(
llmConfig = LlmConfig(provider=provider,api_token=api_token),
llm_config = LLMConfig(provider=provider,api_token=api_token),
schema=OpenAIModelFee.model_json_schema(),
extraction_type="schema",
instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.

View File

@@ -71,7 +71,7 @@ Below is an overview of important LLM extraction parameters. All are typically s
```python
extraction_strategy = LLMExtractionStrategy(
llmConfig = LlmConfig(provider="openai/gpt-4", api_token="YOUR_OPENAI_KEY"),
llm_config = LLMConfig(provider="openai/gpt-4", api_token="YOUR_OPENAI_KEY"),
schema=MyModel.model_json_schema(),
extraction_type="schema",
instruction="Extract a list of items from the text with 'name' and 'price' fields.",
@@ -96,7 +96,7 @@ import asyncio
import json
from pydantic import BaseModel, Field
from typing import List
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode, LlmConfig
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode, LLMConfig
from crawl4ai.extraction_strategy import LLMExtractionStrategy
class Product(BaseModel):
@@ -106,7 +106,7 @@ class Product(BaseModel):
async def main():
# 1. Define the LLM extraction strategy
llm_strategy = LLMExtractionStrategy(
llmConfig = LlmConfig(provider="openai/gpt-4o-mini", api_token=os.getenv('OPENAI_API_KEY')),
llm_config = LLMConfig(provider="openai/gpt-4o-mini", api_token=os.getenv('OPENAI_API_KEY')),
schema=Product.schema_json(), # Or use model_json_schema()
extraction_type="schema",
instruction="Extract all product objects with 'name' and 'price' from the content.",

View File

@@ -415,7 +415,7 @@ The schema generator is available as a static method on both `JsonCssExtractionS
```python
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy, JsonXPathExtractionStrategy
from crawl4ai.async_configs import LlmConfig
from crawl4ai import LLMConfig
# Sample HTML with product information
html = """
@@ -435,14 +435,14 @@ html = """
css_schema = JsonCssExtractionStrategy.generate_schema(
html,
schema_type="css",
llmConfig = LlmConfig(provider="openai/gpt-4o",api_token="your-openai-token")
llm_config = LLMConfig(provider="openai/gpt-4o",api_token="your-openai-token")
)
# Option 2: Using Ollama (open source, no token needed)
xpath_schema = JsonXPathExtractionStrategy.generate_schema(
html,
schema_type="xpath",
llmConfig = LlmConfig(provider="ollama/llama3.3", api_token=None) # Not needed for Ollama
llm_config = LLMConfig(provider="ollama/llama3.3", api_token=None) # Not needed for Ollama
)
# Use the generated schema for fast, repeated extractions

View File

@@ -0,0 +1,78 @@
import asyncio
from typing import List
from crawl4ai import (
AsyncWebCrawler,
CrawlerRunConfig,
BFSDeepCrawlStrategy,
CrawlResult,
FilterChain,
DomainFilter,
URLPatternFilter,
)
# Import necessary classes from crawl4ai library:
# - AsyncWebCrawler: The main class for web crawling.
# - CrawlerRunConfig: Configuration class for crawler behavior.
# - BFSDeepCrawlStrategy: Breadth-First Search deep crawling strategy.
# - CrawlResult: Data model for individual crawl results.
# - FilterChain: Used to chain multiple URL filters.
# - URLPatternFilter: Filter URLs based on patterns.
# Note: `from crawl4ai.deep_crawling.filters import FilterChain, URLPatternFilter` is also valid,
# but for simplicity and consistency this example uses the names re-exported from crawl4ai's __init__.py.
async def basic_deep_crawl():
"""
Performs a basic deep crawl starting from a seed URL, demonstrating:
- Breadth-First Search (BFS) deep crawling strategy.
- Filtering URLs based on URL patterns.
- Accessing crawl results and metadata.
"""
# 1. Define URL Filters:
# Create a URLPatternFilter to include only URLs containing "text".
# This filter will be used to restrict crawling to URLs that are likely to contain textual content.
url_filter = URLPatternFilter(
patterns=[
"*text*", # Include URLs that contain "text" in their path or URL
]
)
# Create a DomainFilter to allow only URLs from the "groq.com" domain and block URLs from the "example.com" domain.
# This filter will be used to restrict crawling to URLs within the "groq.com" domain.
domain_filter = DomainFilter(
allowed_domains=["groq.com"],
blocked_domains=["example.com"],
)
# 2. Configure CrawlerRunConfig for Deep Crawling:
# Configure CrawlerRunConfig to use BFSDeepCrawlStrategy for deep crawling.
config = CrawlerRunConfig(
deep_crawl_strategy=BFSDeepCrawlStrategy(
max_depth=2, # Set the maximum depth of crawling to 2 levels from the start URL
max_pages=10, # Limit the total number of pages to crawl to 10, to prevent excessive crawling
include_external=False, # Set to False to only crawl URLs within the same domain as the start URL
filter_chain=FilterChain(filters=[url_filter, domain_filter]), # Apply the URLPatternFilter and DomainFilter to filter URLs during deep crawl
),
verbose=True, # Enable verbose logging to see detailed output during crawling
)
# 3. Initialize and Run AsyncWebCrawler:
# Use AsyncWebCrawler as a context manager for automatic start and close.
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
# url="https://docs.crawl4ai.com", # Uncomment to use crawl4ai documentation as start URL
url="https://console.groq.com/docs", # Set the start URL for deep crawling to Groq documentation
config=config, # Pass the configured CrawlerRunConfig to arun method
)
# 4. Process and Print Crawl Results:
# Iterate through the list of CrawlResult objects returned by the deep crawl.
for result in results:
# Print the URL and its crawl depth from the metadata for each crawled URL.
print(f"URL: {result.url}, Depth: {result.metadata.get('depth', 0)}")
if __name__ == "__main__":
import asyncio
asyncio.run(basic_deep_crawl())

View File

@@ -0,0 +1,162 @@
import asyncio
from typing import List
from crawl4ai import (
AsyncWebCrawler,
CrawlerRunConfig,
BFSDeepCrawlStrategy,
CrawlResult,
URLFilter, # Base class for filters, not directly used in examples but good to import for context
ContentTypeFilter,
DomainFilter,
FilterChain,
URLPatternFilter,
SEOFilter # Advanced filter, can be introduced later or as bonus
)
async def deep_crawl_filter_tutorial_part_2():
"""
Tutorial demonstrating URL filters in Crawl4AI, focusing on isolated filter behavior
before integrating them into a deep crawl.
This tutorial covers:
- Testing individual filters with synthetic URLs.
- Understanding filter logic and behavior in isolation.
- Combining filters using FilterChain.
- Integrating filters into a deep crawling example.
"""
# === Introduction: URL Filters in Isolation ===
print("\n" + "=" * 40)
print("=== Introduction: URL Filters in Isolation ===")
print("=" * 40 + "\n")
print("In this section, we will explore each filter individually using synthetic URLs.")
print("This allows us to understand exactly how each filter works before using them in a crawl.\n")
# === 2. ContentTypeFilter - Testing in Isolation ===
print("\n" + "=" * 40)
print("=== 2. ContentTypeFilter - Testing in Isolation ===")
print("=" * 40 + "\n")
# 2.1. Create ContentTypeFilter:
# Create a ContentTypeFilter to allow only 'text/html' and 'application/json' content types
# BASED ON URL EXTENSIONS.
content_type_filter = ContentTypeFilter(allowed_types=["text/html", "application/json"])
print("ContentTypeFilter created, allowing types (by extension): ['text/html', 'application/json']")
print("Note: ContentTypeFilter in Crawl4ai works by checking URL file extensions, not HTTP headers.")
# 2.2. Synthetic URLs for Testing:
# ContentTypeFilter checks URL extensions. We provide URLs with different extensions to test.
test_urls_content_type = [
"https://example.com/page.html", # Should pass: .html extension (text/html)
"https://example.com/data.json", # Should pass: .json extension (application/json)
"https://example.com/image.png", # Should reject: .png extension (not allowed type)
"https://example.com/document.pdf", # Should reject: .pdf extension (not allowed type)
"https://example.com/page", # Should pass: no extension (defaults to allow) - check default behaviour!
"https://example.com/page.xhtml", # Should pass: .xhtml extension (text/html)
]
# 2.3. Apply Filter and Show Results:
print("\n=== Testing ContentTypeFilter (URL Extension based) ===")
for url in test_urls_content_type:
passed = content_type_filter.apply(url)
result = "PASSED" if passed else "REJECTED"
extension = ContentTypeFilter._extract_extension(url) # Show extracted extension for clarity
print(f"- URL: {url} - {result} (Extension: '{extension or 'No Extension'}')")
print("=" * 40)
input("Press Enter to continue to DomainFilter example...")
# === 3. DomainFilter - Testing in Isolation ===
print("\n" + "=" * 40)
print("=== 3. DomainFilter - Testing in Isolation ===")
print("=" * 40 + "\n")
# 3.1. Create DomainFilter:
domain_filter = DomainFilter(allowed_domains=["crawl4ai.com", "example.com"])
print("DomainFilter created, allowing domains: ['crawl4ai.com', 'example.com']")
# 3.2. Synthetic URLs for Testing:
test_urls_domain = [
"https://docs.crawl4ai.com/api",
"https://example.com/products",
"https://another-website.org/blog",
"https://sub.example.com/about",
"https://crawl4ai.com.attacker.net", # Corrected example: now should be rejected
]
# 3.3. Apply Filter and Show Results:
print("\n=== Testing DomainFilter ===")
for url in test_urls_domain:
passed = domain_filter.apply(url)
result = "PASSED" if passed else "REJECTED"
print(f"- URL: {url} - {result}")
print("=" * 40)
input("Press Enter to continue to FilterChain example...")
# === 4. FilterChain - Combining Filters ===
print("\n" + "=" * 40)
print("=== 4. FilterChain - Combining Filters ===")
print("=" * 40 + "\n")
combined_filter = FilterChain(
filters=[
URLPatternFilter(patterns=["*api*"]),
ContentTypeFilter(allowed_types=["text/html"]), # Still URL extension based
DomainFilter(allowed_domains=["docs.crawl4ai.com"]),
]
)
print("FilterChain created, combining URLPatternFilter, ContentTypeFilter, and DomainFilter.")
test_urls_combined = [
"https://docs.crawl4ai.com/api/async-webcrawler",
"https://example.com/api/products",
"https://docs.crawl4ai.com/core/crawling",
"https://another-website.org/api/data",
]
# 4.3. Apply FilterChain and Show Results
print("\n=== Testing FilterChain (URLPatternFilter + ContentTypeFilter + DomainFilter) ===")
for url in test_urls_combined:
passed = await combined_filter.apply(url)
result = "PASSED" if passed else "REJECTED"
print(f"- URL: {url} - {result}")
print("=" * 40)
input("Press Enter to continue to Deep Crawl with FilterChain example...")
# === 5. Deep Crawl with FilterChain ===
print("\n" + "=" * 40)
print("=== 5. Deep Crawl with FilterChain ===")
print("=" * 40 + "\n")
print("Finally, let's integrate the FilterChain into a deep crawl example.")
config_final_crawl = CrawlerRunConfig(
deep_crawl_strategy=BFSDeepCrawlStrategy(
max_depth=2,
max_pages=10,
include_external=False,
filter_chain=combined_filter
),
verbose=False,
)
async with AsyncWebCrawler() as crawler:
results_final_crawl: List[CrawlResult] = await crawler.arun(
url="https://docs.crawl4ai.com", config=config_final_crawl
)
print("=== Crawled URLs (Deep Crawl with FilterChain) ===")
for result in results_final_crawl:
print(f"- {result.url}, Depth: {result.metadata.get('depth', 0)}")
print("=" * 40)
print("\nTutorial Completed! Review the output of each section to understand URL filters.")
if __name__ == "__main__":
asyncio.run(deep_crawl_filter_tutorial_part_2())

View File

@@ -7,10 +7,11 @@ docs_dir: docs/md_v2
nav:
- Home: 'index.md'
- "Ask AI": "core/ask-ai.md"
- "Quick Start": "core/quickstart.md"
- Setup & Installation:
- "Installation": "core/installation.md"
- "Docker Deployment": "core/docker-deployment.md"
- "Quick Start": "core/quickstart.md"
- "Blog & Changelog":
- "Blog Home": "blog/index.md"
- "Changelog": "https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md"
@@ -38,6 +39,7 @@ nav:
- "Crawl Dispatcher": "advanced/crawl-dispatcher.md"
- "Identity Based Crawling": "advanced/identity-based-crawling.md"
- "SSL Certificate": "advanced/ssl-certificate.md"
- "Network & Console Capture": "advanced/network-console-capture.md"
- Extraction:
- "LLM-Free Strategies": "extraction/no-llm-strategies.md"
- "LLM Strategies": "extraction/llm-strategies.md"
@@ -75,6 +77,7 @@ extra:
version: !ENV [CRAWL4AI_VERSION, 'development']
extra_css:
- assets/layout.css
- assets/styles.css
- assets/highlight.css
- assets/dmvendor.css
@@ -82,4 +85,9 @@ extra_css:
extra_javascript:
- assets/highlight.min.js
- assets/highlight_init.js
- https://buttons.github.io/buttons.js
- assets/toc.js
- assets/github_stats.js
- assets/selection_ask_ai.js
- assets/copy_code.js
- assets/floating_ask_ai_button.js

20
parameter_updates.txt Normal file
View File

@@ -0,0 +1,20 @@
The file /docs/md_v2/api/parameters.md should be updated to include the new network and console capturing parameters.
Here's what needs to be updated:
1. Change section title from:
```
### G) **Debug & Logging**
```
to:
```
### G) **Debug, Logging & Capturing**
```
2. Add new parameters to the table:
```
| **`capture_network_requests`** | `bool` (False) | Captures all network requests, responses, and failures during the crawl. Available in `result.network_requests`. |
| **`capture_console_messages`** | `bool` (False) | Captures all browser console messages (logs, warnings, errors) during the crawl. Available in `result.console_messages`. |
```
These changes demonstrate how to use the new network and console capturing features in the CrawlerRunConfig.
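A hedged end-to-end sketch of the two flags (the URL is a placeholder):
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    cfg = CrawlerRunConfig(
        capture_network_requests=True,
        capture_console_messages=True,
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=cfg)
        if result.success:
            print(f"{len(result.network_requests or [])} network events captured")
            print(f"{len(result.console_messages or [])} console messages captured")

asyncio.run(main())
```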

View File

@@ -0,0 +1,489 @@
I want to enhance the `AsyncPlaywrightCrawlerStrategy` to optionally capture network requests and console messages during a crawl, storing them in the final `CrawlResult`.
Here's a breakdown of the proposed changes across the relevant files:
**1. Configuration (`crawl4ai/async_configs.py`)**
* **Goal:** Add flags to `CrawlerRunConfig` to enable/disable capturing.
* **Changes:**
* Add two new boolean attributes to `CrawlerRunConfig`:
* `capture_network_requests: bool = False`
* `capture_console_messages: bool = False`
* Update `__init__`, `from_kwargs`, `to_dict`, and implicitly `clone`/`dump`/`load` to include these new attributes.
```python
# ==== File: crawl4ai/async_configs.py ====
# ... (imports) ...
class CrawlerRunConfig():
# ... (existing attributes) ...
# NEW: Network and Console Capturing Parameters
capture_network_requests: bool = False
capture_console_messages: bool = False
# Experimental Parameters
experimental: Dict[str, Any] = None  # no trailing comma at class level (it would create a tuple)
def __init__(
self,
# ... (existing parameters) ...
# NEW: Network and Console Capturing Parameters
capture_network_requests: bool = False,
capture_console_messages: bool = False,
# Experimental Parameters
experimental: Dict[str, Any] = None,
):
# ... (existing assignments) ...
# NEW: Assign new parameters
self.capture_network_requests = capture_network_requests
self.capture_console_messages = capture_console_messages
# Experimental Parameters
self.experimental = experimental or {}
# ... (rest of __init__) ...
@staticmethod
def from_kwargs(kwargs: dict) -> "CrawlerRunConfig":
return CrawlerRunConfig(
# ... (existing kwargs gets) ...
# NEW: Get new parameters
capture_network_requests=kwargs.get("capture_network_requests", False),
capture_console_messages=kwargs.get("capture_console_messages", False),
# Experimental Parameters
experimental=kwargs.get("experimental"),
)
def to_dict(self):
return {
# ... (existing dict entries) ...
# NEW: Add new parameters to dict
"capture_network_requests": self.capture_network_requests,
"capture_console_messages": self.capture_console_messages,
"experimental": self.experimental,
}
# clone(), dump(), load() should work automatically if they rely on to_dict() and from_kwargs()
# or the serialization logic correctly handles all attributes.
```
**2. Data Models (`crawl4ai/models.py`)**
* **Goal:** Add fields to store the captured data in the response/result objects.
* **Changes:**
* Add `network_requests: Optional[List[Dict[str, Any]]] = None` and `console_messages: Optional[List[Dict[str, Any]]] = None` to `AsyncCrawlResponse`.
* Add the same fields to `CrawlResult`.
```python
# ==== File: crawl4ai/models.py ====
# ... (imports) ...
# ... (Existing dataclasses/models) ...
class AsyncCrawlResponse(BaseModel):
html: str
response_headers: Dict[str, str]
js_execution_result: Optional[Dict[str, Any]] = None
status_code: int
screenshot: Optional[str] = None
pdf_data: Optional[bytes] = None
get_delayed_content: Optional[Callable[[Optional[float]], Awaitable[str]]] = None
downloaded_files: Optional[List[str]] = None
ssl_certificate: Optional[SSLCertificate] = None
redirected_url: Optional[str] = None
# NEW: Fields for captured data
network_requests: Optional[List[Dict[str, Any]]] = None
console_messages: Optional[List[Dict[str, Any]]] = None
class Config:
arbitrary_types_allowed = True
# ... (Existing models like MediaItem, Link, etc.) ...
class CrawlResult(BaseModel):
url: str
html: str
success: bool
cleaned_html: Optional[str] = None
media: Dict[str, List[Dict]] = {}
links: Dict[str, List[Dict]] = {}
downloaded_files: Optional[List[str]] = None
js_execution_result: Optional[Dict[str, Any]] = None
screenshot: Optional[str] = None
pdf: Optional[bytes] = None
mhtml: Optional[str] = None # Added mhtml based on the provided models.py
_markdown: Optional[MarkdownGenerationResult] = PrivateAttr(default=None)
extracted_content: Optional[str] = None
metadata: Optional[dict] = None
error_message: Optional[str] = None
session_id: Optional[str] = None
response_headers: Optional[dict] = None
status_code: Optional[int] = None
ssl_certificate: Optional[SSLCertificate] = None
dispatch_result: Optional[DispatchResult] = None
redirected_url: Optional[str] = None
# NEW: Fields for captured data
network_requests: Optional[List[Dict[str, Any]]] = None
console_messages: Optional[List[Dict[str, Any]]] = None
class Config:
arbitrary_types_allowed = True
# ... (Existing __init__, properties, model_dump for markdown compatibility) ...
# ... (Rest of the models) ...
```
**3. Crawler Strategy (`crawl4ai/async_crawler_strategy.py`)**
* **Goal:** Implement the actual capturing logic within `AsyncPlaywrightCrawlerStrategy._crawl_web`.
* **Changes:**
* Inside `_crawl_web`, initialize empty lists `captured_requests = []` and `captured_console = []`.
* Conditionally attach Playwright event listeners (`page.on(...)`) based on the `config.capture_network_requests` and `config.capture_console_messages` flags.
* Define handler functions for these listeners to extract relevant data and append it to the respective lists. Include timestamps.
* Pass the captured lists to the `AsyncCrawlResponse` constructor at the end of the method.
```python
# ==== File: crawl4ai/async_crawler_strategy.py ====
# ... (imports) ...
import time # Make sure time is imported
class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
# ... (existing methods like __init__, start, close, etc.) ...
async def _crawl_web(
self, url: str, config: CrawlerRunConfig
) -> AsyncCrawlResponse:
"""
Internal method to crawl web URLs with the specified configuration.
Includes optional network and console capturing. # MODIFIED DOCSTRING
"""
config.url = url
response_headers = {}
execution_result = None
status_code = None
redirected_url = url
# Reset downloaded files list for new crawl
self._downloaded_files = []
# Initialize capture lists - IMPORTANT: Reset per crawl
captured_requests: List[Dict[str, Any]] = []
captured_console: List[Dict[str, Any]] = []
# Handle user agent ... (existing code) ...
# Get page for session
page, context = await self.browser_manager.get_page(crawlerRunConfig=config)
# ... (existing code for cookies, navigator overrides, hooks) ...
# --- Setup Capturing Listeners ---
# NOTE: These listeners are attached *before* page.goto()
# Network Request Capturing
if config.capture_network_requests:
async def handle_request_capture(request):
try:
post_data_str = None
try:
# Be cautious with large post data
post_data = request.post_data_buffer
if post_data:
# Attempt to decode, fallback to base64 or size indication
try:
post_data_str = post_data.decode('utf-8', errors='replace')
except UnicodeDecodeError:
post_data_str = f"[Binary data: {len(post_data)} bytes]"
except Exception:
post_data_str = "[Error retrieving post data]"
captured_requests.append({
"event_type": "request",
"url": request.url,
"method": request.method,
"headers": dict(request.headers), # Convert Header dict
"post_data": post_data_str,
"resource_type": request.resource_type,
"is_navigation_request": request.is_navigation_request(),
"timestamp": time.time()
})
except Exception as e:
self.logger.warning(f"Error capturing request details for {request.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "request_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})
async def handle_response_capture(response):
try:
# Avoid capturing full response body by default due to size/security
# security_details = await response.security_details() # Optional: More SSL info
captured_requests.append({
"event_type": "response",
"url": response.url,
"status": response.status,
"status_text": response.status_text,
"headers": dict(response.headers), # Convert Header dict
"from_service_worker": response.from_service_worker,
# "security_details": security_details, # Uncomment if needed
"request_timing": response.request.timing, # Detailed timing info
"timestamp": time.time()
})
except Exception as e:
self.logger.warning(f"Error capturing response details for {response.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "response_capture_error", "url": response.url, "error": str(e), "timestamp": time.time()})
async def handle_request_failed_capture(request):
try:
captured_requests.append({
"event_type": "request_failed",
"url": request.url,
"method": request.method,
"resource_type": request.resource_type,
"failure_text": request.failure.error_text if request.failure else "Unknown failure",
"timestamp": time.time()
})
except Exception as e:
self.logger.warning(f"Error capturing request failed details for {request.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "request_failed_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})
page.on("request", handle_request_capture)
page.on("response", handle_response_capture)
page.on("requestfailed", handle_request_failed_capture)
# Console Message Capturing
if config.capture_console_messages:
async def handle_console_capture(msg):
try:
# location/type/text are properties (not methods) in the Python API
location = msg.location
# Attempt to resolve JSHandle args to primitive values (json_value() is a coroutine)
resolved_args = []
try:
for arg in msg.args:
resolved_args.append(await arg.json_value()) # May fail for complex objects
except Exception:
resolved_args.append("[Could not resolve JSHandle args]")
captured_console.append({
"type": msg.type, # e.g., 'log', 'error', 'warning'
"text": msg.text,
"args": resolved_args, # Captured arguments
"location": f"{location['url']}:{location['lineNumber']}:{location['columnNumber']}" if location else "N/A",
"timestamp": time.time()
})
except Exception as e:
self.logger.warning(f"Error capturing console message: {e}", tag="CAPTURE")
captured_console.append({"type": "console_capture_error", "error": str(e), "timestamp": time.time()})
def handle_pageerror_capture(err):
try:
captured_console.append({
"type": "error", # Consistent type for page errors
"text": err.message,
"stack": err.stack,
"timestamp": time.time()
})
except Exception as e:
self.logger.warning(f"Error capturing page error: {e}", tag="CAPTURE")
captured_console.append({"type": "pageerror_capture_error", "error": str(e), "timestamp": time.time()})
page.on("console", handle_console_capture)
page.on("pageerror", handle_pageerror_capture)
# --- End Setup Capturing Listeners ---
# Set up console logging if requested (Keep original logging logic separate or merge carefully)
if config.log_console:
# ... (original log_console setup using page.on(...) remains here) ...
# This allows logging to screen *and* capturing to the list if both flags are True
def log_consol(msg, console_log_type="debug"):
# ... existing implementation ...
pass # Placeholder for existing code
page.on("console", lambda msg: log_consol(msg, "debug"))
page.on("pageerror", lambda e: log_consol(e, "error"))
try:
# ... (existing code for SSL, downloads, goto, waits, JS execution, etc.) ...
# Get final HTML content
# ... (existing code for selector logic or page.content()) ...
if config.css_selector:
# ... existing selector logic ...
html = f"<div class='crawl4ai-result'>\n" + "\n".join(html_parts) + "\n</div>"
else:
html = await page.content()
await self.execute_hook(
"before_return_html", page=page, html=html, context=context, config=config
)
# Handle PDF and screenshot generation
# ... (existing code) ...
# Define delayed content getter
# ... (existing code) ...
# Return complete response - ADD CAPTURED DATA HERE
return AsyncCrawlResponse(
html=html,
response_headers=response_headers,
js_execution_result=execution_result,
status_code=status_code,
screenshot=screenshot_data,
pdf_data=pdf_data,
get_delayed_content=get_delayed_content,
ssl_certificate=ssl_cert,
downloaded_files=(
self._downloaded_files if self._downloaded_files else None
),
redirected_url=redirected_url,
# NEW: Pass captured data conditionally
network_requests=captured_requests if config.capture_network_requests else None,
console_messages=captured_console if config.capture_console_messages else None,
)
except Exception:
raise # Bare raise preserves the original traceback
finally:
# If no session_id is given we should close the page
if not config.session_id:
# Detach listeners before closing to prevent potential errors during close
if config.capture_network_requests:
page.remove_listener("request", handle_request_capture)
page.remove_listener("response", handle_response_capture)
page.remove_listener("requestfailed", handle_request_failed_capture)
if config.capture_console_messages:
page.remove_listener("console", handle_console_capture)
page.remove_listener("pageerror", handle_pageerror_capture)
# Also remove logging listeners if they were attached
if config.log_console:
# The logging listeners were attached as anonymous lambdas, so they
# cannot be detached without saved references (see the sketch after
# this code block); in practice, leaving them attached at close is fine.
pass
await page.close()
# ... (rest of AsyncPlaywrightCrawlerStrategy methods) ...
```
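The cleanup path above leaves one loose end: listeners registered as anonymous lambdas cannot be handed back to `page.remove_listener`. A minimal sketch of a removable variant, reusing the existing `log_consol` helper inside `_crawl_web`; the wrapper names are hypothetical:

```python
# Named wrappers instead of lambdas, so the exact same callables can be
# passed to remove_listener during cleanup.
def _log_console_msg(msg):
    log_consol(msg, "debug")

def _log_page_error(err):
    log_consol(err, "error")

page.on("console", _log_console_msg)
page.on("pageerror", _log_page_error)

# ... later, in the finally block:
if config.log_console:
    page.remove_listener("console", _log_console_msg)
    page.remove_listener("pageerror", _log_page_error)
```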
**4. Core Crawler (`crawl4ai/async_webcrawler.py`)**
* **Goal:** Ensure the captured data from `AsyncCrawlResponse` is transferred to the final `CrawlResult`.
* **Changes:**
* In `arun`, for non-cached results (inside the `if not cached_result or not html:` block): once the strategy returns `async_response` and `aprocess_html` yields `crawl_result`, copy `network_requests` and `console_messages` from `async_response` onto `crawl_result`.
```python
# ==== File: crawl4ai/async_webcrawler.py ====
# ... (imports) ...
class AsyncWebCrawler:
# ... (existing methods) ...
async def arun(
self,
url: str,
config: CrawlerRunConfig = None,
**kwargs,
) -> RunManyReturn:
# ... (existing setup, cache check) ...
async with self._lock or self.nullcontext():
try:
# ... (existing logging, cache context setup) ...
if cached_result:
# ... (existing cache handling logic) ...
# Note: Captured network/console usually not useful from cache
# Ensure they are None or empty if read from cache, unless stored explicitly
cached_result.network_requests = cached_result.network_requests or None
cached_result.console_messages = cached_result.console_messages or None
# ... (rest of cache logic) ...
# Fetch fresh content if needed
if not cached_result or not html:
t1 = time.perf_counter()
# ... (existing user agent update, robots.txt check) ...
##############################
# Call CrawlerStrategy.crawl #
##############################
async_response = await self.crawler_strategy.crawl(
url,
config=config,
)
# ... (existing assignment of html, screenshot, pdf, js_result from async_response) ...
t2 = time.perf_counter()
# ... (existing logging) ...
###############################################################
# Process the HTML content, Call CrawlerStrategy.process_html #
###############################################################
crawl_result: CrawlResult = await self.aprocess_html(
# ... (existing args) ...
)
# --- Transfer data from AsyncCrawlResponse to CrawlResult ---
crawl_result.status_code = async_response.status_code
crawl_result.redirected_url = async_response.redirected_url or url
crawl_result.response_headers = async_response.response_headers
crawl_result.downloaded_files = async_response.downloaded_files
crawl_result.js_execution_result = js_execution_result
crawl_result.ssl_certificate = async_response.ssl_certificate
# NEW: Copy captured data
crawl_result.network_requests = async_response.network_requests
crawl_result.console_messages = async_response.console_messages
# ------------------------------------------------------------
crawl_result.success = bool(html)
crawl_result.session_id = getattr(config, "session_id", None)
# ... (existing logging) ...
# Update cache if appropriate
if cache_context.should_write() and not bool(cached_result):
# crawl_result now includes network/console data if captured
await async_db_manager.acache_url(crawl_result)
return CrawlResultContainer(crawl_result)
else: # Cached result was used
# ... (existing logging for cache hit) ...
cached_result.success = bool(html)
cached_result.session_id = getattr(config, "session_id", None)
cached_result.redirected_url = cached_result.redirected_url or url
return CrawlResultContainer(cached_result)
except Exception as e:
# ... (existing error handling) ...
return CrawlResultContainer(
CrawlResult(
url=url, html="", success=False, error_message=error_message
)
)
# ... (aprocess_html remains unchanged regarding capture) ...
# ... (arun_many remains unchanged regarding capture) ...
```
**Summary of Changes:**
1. **Configuration:** Added `capture_network_requests` and `capture_console_messages` flags to `CrawlerRunConfig`.
2. **Models:** Added corresponding `network_requests` and `console_messages` fields (List of Dicts) to `AsyncCrawlResponse` and `CrawlResult`.
3. **Strategy:** Implemented conditional event listeners in `AsyncPlaywrightCrawlerStrategy._crawl_web` to capture data into lists when flags are true. Populated these fields in the returned `AsyncCrawlResponse`. Added basic error handling within capture handlers. Added timestamps.
4. **Crawler:** Modified `AsyncWebCrawler.arun` to copy the captured data from `AsyncCrawlResponse` into the final `CrawlResult` for non-cached fetches.
This approach keeps the capturing logic contained within the Playwright strategy, uses clear configuration flags, and integrates the results into the existing data flow. The data format (list of dictionaries) is flexible for storing varied information from requests/responses/console messages.
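For illustration, a minimal usage sketch built from the flags and fields described above; the target URL and printed summary are assumptions, and it presumes the returned container proxies `CrawlResult` attributes as elsewhere in the codebase:

```python
import asyncio

from crawl4ai.async_configs import CrawlerRunConfig
from crawl4ai.async_webcrawler import AsyncWebCrawler

async def main():
    # Both capture flags default to off; enable them per run.
    config = CrawlerRunConfig(
        capture_network_requests=True,
        capture_console_messages=True,
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=config)
        # Each captured entry is a dict carrying an event type and a timestamp.
        print(f"network events: {len(result.network_requests or [])}")
        print(f"console messages: {len(result.console_messages or [])}")

asyncio.run(main())
```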

View File

@@ -42,7 +42,7 @@ dependencies = [
     "pyperclip>=1.8.2",
     "faust-cchardet>=2.1.19",
     "aiohttp>=3.11.11",
-    "humanize>=4.10.0"
+    "humanize>=4.10.0",
 ]
 classifiers = [
     "Development Status :: 4 - Beta",

View File

@@ -7,7 +7,7 @@ import json
 parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
 sys.path.append(parent_dir)
-from crawl4ai.async_configs import LlmConfig
+from crawl4ai import LLMConfig
 from crawl4ai.async_webcrawler import AsyncWebCrawler
 from crawl4ai.chunking_strategy import RegexChunking
 from crawl4ai.extraction_strategy import LLMExtractionStrategy
@@ -49,7 +49,7 @@ async def test_llm_extraction_strategy():
     async with AsyncWebCrawler(verbose=True) as crawler:
         url = "https://www.nbcnews.com/business"
         extraction_strategy = LLMExtractionStrategy(
-            llmConfig=LlmConfig(provider="openai/gpt-4o-mini",api_token=os.getenv("OPENAI_API_KEY")),
+            llm_config=LLMConfig(provider="openai/gpt-4o-mini",api_token=os.getenv("OPENAI_API_KEY")),
             instruction="Extract only content related to technology",
         )
         result = await crawler.arun(
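For reference, a migrated call after this rename, assuming `os` is imported as in the original test:

```python
from crawl4ai import LLMConfig
from crawl4ai.extraction_strategy import LLMExtractionStrategy

extraction_strategy = LLMExtractionStrategy(
    llm_config=LLMConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY")),
    instruction="Extract only content related to technology",
)
```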

View File

@@ -0,0 +1,4 @@
"""Docker browser strategy tests.
This package contains tests for the Docker browser strategy implementation.
"""

View File

@@ -0,0 +1,651 @@
"""Test examples for Docker Browser Strategy.
These examples demonstrate the functionality of Docker Browser Strategy
and serve as functional tests.
"""
import asyncio
import os
import sys
import shutil
import uuid
# Add the project root to Python path if running directly
if __name__ == "__main__":
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../..')))
from crawl4ai.browser import BrowserManager
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.async_logger import AsyncLogger
from crawl4ai.browser import DockerConfig
from crawl4ai.browser import DockerRegistry
from crawl4ai.browser import DockerUtils
# Create a logger for clear terminal output
logger = AsyncLogger(verbose=True, log_file=None)
# Global Docker utils instance
docker_utils = DockerUtils(logger)
async def test_docker_components():
"""Test Docker utilities, registry, and image building.
This function tests the core Docker components before running the browser tests.
It validates DockerRegistry, DockerUtils, and builds test images to ensure
everything is functioning correctly.
"""
logger.info("Testing Docker components", tag="SETUP")
# Create a test registry directory
registry_dir = os.path.join(os.path.dirname(__file__), "test_registry")
registry_file = os.path.join(registry_dir, "test_registry.json")
os.makedirs(registry_dir, exist_ok=True)
try:
# 1. Test DockerRegistry
logger.info("Testing DockerRegistry...", tag="SETUP")
registry = DockerRegistry(registry_file)
# Test saving and loading registry
test_container_id = "test-container-123"
registry.register_container(test_container_id, 9876, "test-hash-123")
registry.save()
# Create a new registry instance that loads from the file
registry2 = DockerRegistry(registry_file)
port = registry2.get_container_host_port(test_container_id)
hash_value = registry2.get_container_config_hash(test_container_id)
if port != 9876 or hash_value != "test-hash-123":
logger.error("DockerRegistry persistence failed", tag="SETUP")
return False
# Clean up test container from registry
registry2.unregister_container(test_container_id)
logger.success("DockerRegistry works correctly", tag="SETUP")
# 2. Test DockerUtils
logger.info("Testing DockerUtils...", tag="SETUP")
# Test port detection
in_use = docker_utils.is_port_in_use(22) # SSH port is usually in use
logger.info(f"Port 22 in use: {in_use}", tag="SETUP")
# Get next available port
available_port = docker_utils.get_next_available_port(9000)
logger.info(f"Next available port: {available_port}", tag="SETUP")
# Test config hash generation
config_dict = {"mode": "connect", "headless": True}
config_hash = docker_utils.generate_config_hash(config_dict)
logger.info(f"Generated config hash: {config_hash[:8]}...", tag="SETUP")
# 3. Test Docker is available
logger.info("Checking Docker availability...", tag="SETUP")
if not await check_docker_available():
logger.error("Docker is not available - cannot continue tests", tag="SETUP")
return False
# 4. Test building connect image
logger.info("Building connect mode Docker image...", tag="SETUP")
connect_image = await docker_utils.ensure_docker_image_exists(None, "connect")
if not connect_image:
logger.error("Failed to build connect mode image", tag="SETUP")
return False
logger.success(f"Successfully built connect image: {connect_image}", tag="SETUP")
# 5. Test building launch image
logger.info("Building launch mode Docker image...", tag="SETUP")
launch_image = await docker_utils.ensure_docker_image_exists(None, "launch")
if not launch_image:
logger.error("Failed to build launch mode image", tag="SETUP")
return False
logger.success(f"Successfully built launch image: {launch_image}", tag="SETUP")
# 6. Test creating and removing container
logger.info("Testing container creation and removal...", tag="SETUP")
container_id = await docker_utils.create_container(
image_name=launch_image,
host_port=available_port,
container_name="crawl4ai-test-container"
)
if not container_id:
logger.error("Failed to create test container", tag="SETUP")
return False
logger.info(f"Created test container: {container_id[:12]}", tag="SETUP")
# Verify container is running
running = await docker_utils.is_container_running(container_id)
if not running:
logger.error("Test container is not running", tag="SETUP")
await docker_utils.remove_container(container_id)
return False
# Test commands in container
logger.info("Testing command execution in container...", tag="SETUP")
returncode, stdout, stderr = await docker_utils.exec_in_container(
container_id, ["ls", "-la", "/"]
)
if returncode != 0:
logger.error(f"Command execution failed: {stderr}", tag="SETUP")
await docker_utils.remove_container(container_id)
return False
# Verify Chrome is installed in the container
returncode, stdout, stderr = await docker_utils.exec_in_container(
container_id, ["which", "chromium"]
)
if returncode != 0:
logger.error("Chrome not found in container", tag="SETUP")
await docker_utils.remove_container(container_id)
return False
chrome_path = stdout.strip()
logger.info(f"Chrome found at: {chrome_path}", tag="SETUP")
# Test Chrome version
returncode, stdout, stderr = await docker_utils.exec_in_container(
container_id, ["chromium", "--version"]
)
if returncode != 0:
logger.error(f"Failed to get Chrome version: {stderr}", tag="SETUP")
await docker_utils.remove_container(container_id)
return False
logger.info(f"Chrome version: {stdout.strip()}", tag="SETUP")
# Remove test container
removed = await docker_utils.remove_container(container_id)
if not removed:
logger.error("Failed to remove test container", tag="SETUP")
return False
logger.success("Test container removed successfully", tag="SETUP")
# All components tested successfully
logger.success("All Docker components tested successfully", tag="SETUP")
return True
except Exception as e:
logger.error(f"Docker component tests failed: {str(e)}", tag="SETUP")
return False
finally:
# Clean up registry test directory
if os.path.exists(registry_dir):
shutil.rmtree(registry_dir)
async def test_docker_connect_mode():
"""Test Docker browser in connect mode.
This tests the basic functionality of creating a browser in Docker
connect mode and using it for navigation.
"""
logger.info("Testing Docker browser in connect mode", tag="TEST")
# Create temp directory for user data
temp_dir = os.path.join(os.path.dirname(__file__), "tmp_user_data")
os.makedirs(temp_dir, exist_ok=True)
try:
# Create Docker configuration
docker_config = DockerConfig(
mode="connect",
persistent=False,
remove_on_exit=True,
user_data_dir=temp_dir
)
# Create browser configuration
browser_config = BrowserConfig(
browser_mode="docker",
headless=True,
docker_config=docker_config
)
# Create browser manager
manager = BrowserManager(browser_config=browser_config, logger=logger)
# Start the browser
await manager.start()
logger.info("Browser started successfully", tag="TEST")
# Create crawler config
crawler_config = CrawlerRunConfig(url="https://example.com")
# Get a page
page, context = await manager.get_page(crawler_config)
logger.info("Got page successfully", tag="TEST")
# Navigate to a website
await page.goto("https://example.com")
logger.info("Navigated to example.com", tag="TEST")
# Get page title
title = await page.title()
logger.info(f"Page title: {title}", tag="TEST")
# Clean up
await manager.close()
logger.info("Browser closed successfully", tag="TEST")
return True
except Exception as e:
logger.error(f"Test failed: {str(e)}", tag="TEST")
# Ensure cleanup
try:
await manager.close()
except Exception:
pass
return False
finally:
# Clean up the temp directory
if os.path.exists(temp_dir):
shutil.rmtree(temp_dir)
async def test_docker_launch_mode():
"""Test Docker browser in launch mode.
This tests launching a Chrome browser within a Docker container
on demand with custom settings.
"""
logger.info("Testing Docker browser in launch mode", tag="TEST")
# Create temp directory for user data
temp_dir = os.path.join(os.path.dirname(__file__), "tmp_user_data_launch")
os.makedirs(temp_dir, exist_ok=True)
try:
# Create Docker configuration
docker_config = DockerConfig(
mode="launch",
persistent=False,
remove_on_exit=True,
user_data_dir=temp_dir
)
# Create browser configuration
browser_config = BrowserConfig(
browser_mode="docker",
headless=True,
text_mode=True, # Enable text mode for faster operation
docker_config=docker_config
)
# Create browser manager
manager = BrowserManager(browser_config=browser_config, logger=logger)
# Start the browser
await manager.start()
logger.info("Browser started successfully", tag="TEST")
# Create crawler config
crawler_config = CrawlerRunConfig(url="https://example.com")
# Get a page
page, context = await manager.get_page(crawler_config)
logger.info("Got page successfully", tag="TEST")
# Navigate to a website
await page.goto("https://example.com")
logger.info("Navigated to example.com", tag="TEST")
# Get page title
title = await page.title()
logger.info(f"Page title: {title}", tag="TEST")
# Clean up
await manager.close()
logger.info("Browser closed successfully", tag="TEST")
return True
except Exception as e:
logger.error(f"Test failed: {str(e)}", tag="TEST")
# Ensure cleanup
try:
await manager.close()
except Exception:
pass
return False
finally:
# Clean up the temp directory
if os.path.exists(temp_dir):
shutil.rmtree(temp_dir)
async def test_docker_persistent_storage():
"""Test Docker browser with persistent storage.
This tests creating localStorage data in one session and verifying
it persists to another session when using persistent storage.
"""
logger.info("Testing Docker browser with persistent storage", tag="TEST")
# Create a unique temp directory
test_id = uuid.uuid4().hex[:8]
temp_dir = os.path.join(os.path.dirname(__file__), f"tmp_user_data_persist_{test_id}")
os.makedirs(temp_dir, exist_ok=True)
manager1 = None
manager2 = None
try:
# Create Docker configuration with persistence
docker_config = DockerConfig(
mode="connect",
persistent=True, # Keep container running between sessions
user_data_dir=temp_dir,
container_user_data_dir="/data"
)
# Create browser configuration
browser_config = BrowserConfig(
browser_mode="docker",
headless=True,
docker_config=docker_config
)
# Create first browser manager
manager1 = BrowserManager(browser_config=browser_config, logger=logger)
# Start the browser
await manager1.start()
logger.info("First browser started successfully", tag="TEST")
# Create crawler config
crawler_config = CrawlerRunConfig()
# Get a page
page1, context1 = await manager1.get_page(crawler_config)
# Navigate to example.com
await page1.goto("https://example.com")
# Set localStorage item
test_value = f"test_value_{test_id}"
await page1.evaluate(f"localStorage.setItem('test_key', '{test_value}')")
logger.info(f"Set localStorage test_key = {test_value}", tag="TEST")
# Close the first browser manager
await manager1.close()
logger.info("First browser closed", tag="TEST")
# Create second browser manager with same config
manager2 = BrowserManager(browser_config=browser_config, logger=logger)
# Start the browser
await manager2.start()
logger.info("Second browser started successfully", tag="TEST")
# Get a page
page2, context2 = await manager2.get_page(crawler_config)
# Navigate to same site
await page2.goto("https://example.com")
# Get localStorage item
value = await page2.evaluate("localStorage.getItem('test_key')")
logger.info(f"Retrieved localStorage test_key = {value}", tag="TEST")
# Check if persistence worked
if value == test_value:
logger.success("Storage persistence verified!", tag="TEST")
else:
logger.error(f"Storage persistence failed! Expected {test_value}, got {value}", tag="TEST")
# Clean up
await manager2.close()
logger.info("Second browser closed successfully", tag="TEST")
return value == test_value
except Exception as e:
logger.error(f"Test failed: {str(e)}", tag="TEST")
# Ensure cleanup
try:
if manager1:
await manager1.close()
if manager2:
await manager2.close()
except Exception:
pass
return False
finally:
# Clean up the temp directory
if os.path.exists(temp_dir):
shutil.rmtree(temp_dir)
async def test_docker_parallel_pages():
"""Test Docker browser with parallel page creation.
This tests the ability to create and use multiple pages in parallel
from a single Docker browser instance.
"""
logger.info("Testing Docker browser with parallel pages", tag="TEST")
try:
# Create Docker configuration
docker_config = DockerConfig(
mode="connect",
persistent=False,
remove_on_exit=True
)
# Create browser configuration
browser_config = BrowserConfig(
browser_mode="docker",
headless=True,
docker_config=docker_config
)
# Create browser manager
manager = BrowserManager(browser_config=browser_config, logger=logger)
# Start the browser
await manager.start()
logger.info("Browser started successfully", tag="TEST")
# Create crawler config
crawler_config = CrawlerRunConfig()
# Get multiple pages
page_count = 3
pages = await manager.get_pages(crawler_config, count=page_count)
logger.info(f"Got {len(pages)} pages successfully", tag="TEST")
if len(pages) != page_count:
logger.error(f"Expected {page_count} pages, got {len(pages)}", tag="TEST")
await manager.close()
return False
# Navigate to different sites with each page
tasks = []
for i, (page, _) in enumerate(pages):
tasks.append(page.goto(f"https://example.com?page={i}"))
# Wait for all navigations to complete
await asyncio.gather(*tasks)
logger.info("All pages navigated successfully", tag="TEST")
# Get titles from all pages
titles = []
for i, (page, _) in enumerate(pages):
title = await page.title()
titles.append(title)
logger.info(f"Page {i+1} title: {title}", tag="TEST")
# Clean up
await manager.close()
logger.info("Browser closed successfully", tag="TEST")
return True
except Exception as e:
logger.error(f"Test failed: {str(e)}", tag="TEST")
# Ensure cleanup
try:
await manager.close()
except Exception:
pass
return False
async def test_docker_registry_reuse():
"""Test Docker container reuse via registry.
This tests that containers with matching configurations
are reused rather than creating new ones.
"""
logger.info("Testing Docker container reuse via registry", tag="TEST")
# Create registry for this test
registry_dir = os.path.join(os.path.dirname(__file__), "registry_reuse_test")
registry_file = os.path.join(registry_dir, "registry.json")
os.makedirs(registry_dir, exist_ok=True)
manager1 = None
manager2 = None
container_id1 = None
try:
# Create identical Docker configurations with custom registry
docker_config1 = DockerConfig(
mode="connect",
persistent=True, # Keep container running after closing
registry_file=registry_file
)
# Create first browser configuration
browser_config1 = BrowserConfig(
browser_mode="docker",
headless=True,
docker_config=docker_config1
)
# Create first browser manager
manager1 = BrowserManager(browser_config=browser_config1, logger=logger)
# Start the first browser
await manager1.start()
logger.info("First browser started successfully", tag="TEST")
# Get container ID from the strategy
docker_strategy1 = manager1.strategy
container_id1 = docker_strategy1.container_id
logger.info(f"First browser container ID: {container_id1[:12]}", tag="TEST")
# Close the first manager but keep container running
await manager1.close()
logger.info("First browser closed", tag="TEST")
# Create second Docker configuration identical to first
docker_config2 = DockerConfig(
mode="connect",
persistent=True,
registry_file=registry_file
)
# Create second browser configuration
browser_config2 = BrowserConfig(
browser_mode="docker",
headless=True,
docker_config=docker_config2
)
# Create second browser manager
manager2 = BrowserManager(browser_config=browser_config2, logger=logger)
# Start the second browser - should reuse existing container
await manager2.start()
logger.info("Second browser started successfully", tag="TEST")
# Get container ID from the second strategy
docker_strategy2 = manager2.strategy
container_id2 = docker_strategy2.container_id
logger.info(f"Second browser container ID: {container_id2[:12]}", tag="TEST")
# Verify container reuse
if container_id1 == container_id2:
logger.success("Container reuse successful - using same container!", tag="TEST")
else:
logger.error("Container reuse failed - new container created!", tag="TEST")
# Clean up
docker_strategy2.docker_config.persistent = False
docker_strategy2.docker_config.remove_on_exit = True
await manager2.close()
logger.info("Second browser closed and container removed", tag="TEST")
return container_id1 == container_id2
except Exception as e:
logger.error(f"Test failed: {str(e)}", tag="TEST")
# Ensure cleanup
try:
if manager1:
await manager1.close()
if manager2:
await manager2.close()
# Make sure container is removed
if container_id1:
await docker_utils.remove_container(container_id1, force=True)
except Exception:
pass
return False
finally:
# Clean up registry directory
if os.path.exists(registry_dir):
shutil.rmtree(registry_dir)
async def run_tests():
"""Run all tests sequentially."""
results = []
logger.info("Starting Docker Browser Strategy tests", tag="TEST")
# Check if Docker is available
if not await check_docker_available():
logger.error("Docker is not available - skipping tests", tag="TEST")
return
# First test Docker components
# setup_result = await test_docker_components()
# if not setup_result:
# logger.error("Docker component tests failed - skipping browser tests", tag="TEST")
# return
# Run browser tests
results.append(await test_docker_connect_mode())
results.append(await test_docker_launch_mode())
results.append(await test_docker_persistent_storage())
results.append(await test_docker_parallel_pages())
results.append(await test_docker_registry_reuse())
# Print summary
total = len(results)
passed = sum(1 for r in results if r)
logger.info(f"Tests complete: {passed}/{total} passed", tag="SUMMARY")
if passed == total:
logger.success("All tests passed!", tag="SUMMARY")
else:
logger.error(f"{total - passed} tests failed", tag="SUMMARY")
async def check_docker_available() -> bool:
"""Check if Docker is available on the system.
Returns:
bool: True if Docker is available, False otherwise
"""
try:
proc = await asyncio.create_subprocess_exec(
"docker", "--version",
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE
)
stdout, _ = await proc.communicate()
return proc.returncode == 0 and bool(stdout) # Coerce stdout bytes to the documented bool
except Exception:
return False
if __name__ == "__main__":
asyncio.run(run_tests())

View File

@@ -0,0 +1,525 @@
"""Demo script for testing the enhanced BrowserManager.
This script demonstrates the browser pooling capabilities of the enhanced
BrowserManager with various configurations and usage patterns.
"""
import asyncio
import time
import random
from crawl4ai.browser.manager import BrowserManager, UnavailableBehavior
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.async_logger import AsyncLogger
from playwright.async_api import TimeoutError as PlaywrightTimeoutError
SAFE_URLS = [
"https://example.com",
"https://example.com/page1",
"https://httpbin.org/get",
"https://httpbin.org/html",
"https://httpbin.org/ip",
"https://httpbin.org/user-agent",
"https://httpbin.org/headers",
"https://httpbin.org/cookies",
"https://httpstat.us/200",
"https://httpstat.us/301",
"https://httpstat.us/404",
"https://httpstat.us/500",
"https://jsonplaceholder.typicode.com/posts/1",
"https://jsonplaceholder.typicode.com/posts/2",
"https://jsonplaceholder.typicode.com/posts/3",
"https://jsonplaceholder.typicode.com/posts/4",
"https://jsonplaceholder.typicode.com/posts/5",
"https://jsonplaceholder.typicode.com/comments/1",
"https://jsonplaceholder.typicode.com/comments/2",
"https://jsonplaceholder.typicode.com/users/1",
"https://jsonplaceholder.typicode.com/users/2",
"https://jsonplaceholder.typicode.com/albums/1",
"https://jsonplaceholder.typicode.com/albums/2",
"https://jsonplaceholder.typicode.com/photos/1",
"https://jsonplaceholder.typicode.com/photos/2",
"https://jsonplaceholder.typicode.com/todos/1",
"https://jsonplaceholder.typicode.com/todos/2",
"https://www.iana.org",
"https://www.iana.org/domains",
"https://www.iana.org/numbers",
"https://www.iana.org/protocols",
"https://www.iana.org/about",
"https://www.iana.org/time-zones",
"https://www.data.gov",
"https://catalog.data.gov/dataset",
"https://www.archives.gov",
"https://www.usa.gov",
"https://www.loc.gov",
"https://www.irs.gov",
"https://www.census.gov",
"https://www.bls.gov",
"https://www.gpo.gov",
"https://www.w3.org",
"https://www.w3.org/standards",
"https://www.w3.org/WAI",
"https://www.rfc-editor.org",
"https://www.ietf.org",
"https://www.icann.org",
"https://www.internetsociety.org",
"https://www.python.org"
]
async def basic_pooling_demo():
"""Demonstrate basic browser pooling functionality."""
print("\n=== Basic Browser Pooling Demo ===")
# Create logger
logger = AsyncLogger(verbose=True)
# Create browser configurations
config1 = BrowserConfig(
browser_type="chromium",
headless=True,
browser_mode="playwright"
)
config2 = BrowserConfig(
browser_type="chromium",
headless=True,
browser_mode="cdp"
)
# Create browser manager with on-demand behavior
manager = BrowserManager(
browser_config=config1,
logger=logger,
unavailable_behavior=UnavailableBehavior.ON_DEMAND,
max_browsers_per_config=3
)
try:
# Initialize pool with both configurations
print("Initializing browser pool...")
await manager.initialize_pool(
browser_configs=[config1, config2],
browsers_per_config=2
)
# Display initial pool status
status = await manager.get_pool_status()
print(f"Initial pool status: {status}")
# Create crawler run configurations
run_config1 = CrawlerRunConfig()
run_config2 = CrawlerRunConfig()
# Simulate concurrent page requests
print("\nGetting pages for parallel crawling...")
# Function to simulate crawling
async def simulate_crawl(index: int, config: BrowserConfig, run_config: CrawlerRunConfig):
print(f"Crawler {index}: Requesting page...")
page, context, strategy = await manager.get_page(run_config, config)
print(f"Crawler {index}: Got page, navigating to example.com...")
try:
await page.goto("https://example.com")
title = await page.title()
print(f"Crawler {index}: Page title: {title}")
# Simulate work
await asyncio.sleep(random.uniform(1, 3))
print(f"Crawler {index}: Work completed, releasing page...")
# Check dynamic page content
content = await page.content()
content_length = len(content)
print(f"Crawler {index}: Page content length: {content_length}")
except Exception as e:
print(f"Crawler {index}: Error: {str(e)}")
finally:
# Release the page
await manager.release_page(page, strategy, config)
print(f"Crawler {index}: Page released")
# Create 5 parallel crawls
crawl_tasks = []
for i in range(5):
# Alternate between configurations
config = config1 if i % 2 == 0 else config2
run_config = run_config1 if i % 2 == 0 else run_config2
task = asyncio.create_task(simulate_crawl(i+1, config, run_config))
crawl_tasks.append(task)
# Wait for all crawls to complete
await asyncio.gather(*crawl_tasks)
# Display final pool status
status = await manager.get_pool_status()
print(f"\nFinal pool status: {status}")
finally:
# Clean up
print("\nClosing browser manager...")
await manager.close()
print("Browser manager closed")
async def prewarm_pages_demo():
"""Demonstrate page pre-warming functionality."""
print("\n=== Page Pre-warming Demo ===")
# Create logger
logger = AsyncLogger(verbose=True)
# Create browser configuration
config = BrowserConfig(
browser_type="chromium",
headless=True,
browser_mode="playwright"
)
# Create crawler run configurations for pre-warming
run_config1 = CrawlerRunConfig(
user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
)
run_config2 = CrawlerRunConfig(
user_agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 Safari/605.1.15"
)
# Create page pre-warm configurations
page_configs = [
(config, run_config1, 2), # 2 pages with run_config1
(config, run_config2, 3) # 3 pages with run_config2
]
# Create browser manager
manager = BrowserManager(
browser_config=config,
logger=logger,
unavailable_behavior=UnavailableBehavior.EXCEPTION
)
try:
# Initialize pool with pre-warmed pages
print("Initializing browser pool with pre-warmed pages...")
await manager.initialize_pool(
browser_configs=[config],
browsers_per_config=2,
page_configs=page_configs
)
# Display pool status
status = await manager.get_pool_status()
print(f"Pool status after pre-warming: {status}")
# Simulate using pre-warmed pages
print("\nUsing pre-warmed pages...")
async def use_prewarm_page(index: int, run_config: CrawlerRunConfig):
print(f"Task {index}: Requesting pre-warmed page...")
page, context, strategy = await manager.get_page(run_config, config)
try:
print(f"Task {index}: Got page, navigating to example.com...")
await page.goto("https://example.com")
# Verify user agent was applied correctly
user_agent = await page.evaluate("() => navigator.userAgent")
print(f"Task {index}: User agent: {user_agent}")
# Get page title
title = await page.title()
print(f"Task {index}: Page title: {title}")
# Simulate work
await asyncio.sleep(1)
finally:
# Release the page
print(f"Task {index}: Releasing page...")
await manager.release_page(page, strategy, config)
# Create tasks to use pre-warmed pages
tasks = []
# Use run_config1 pages
for i in range(2):
tasks.append(asyncio.create_task(use_prewarm_page(i+1, run_config1)))
# Use run_config2 pages
for i in range(3):
tasks.append(asyncio.create_task(use_prewarm_page(i+3, run_config2)))
# Wait for all tasks to complete
await asyncio.gather(*tasks)
# Try to use more pages than we pre-warmed (should raise exception)
print("\nTrying to use more pages than pre-warmed...")
try:
page, context, strategy = await manager.get_page(run_config1, config)
try:
print("Got extra page (unexpected)")
await page.goto("https://example.com")
finally:
await manager.release_page(page, strategy, config)
except Exception as e:
print(f"Expected exception when requesting more pages: {str(e)}")
finally:
# Clean up
print("\nClosing browser manager...")
await manager.close()
print("Browser manager closed")
async def prewarm_on_demand_demo():
"""Demonstrate pre-warming with on-demand browser creation."""
print("\n=== Pre-warming with On-Demand Browser Creation Demo ===")
# Create logger
logger = AsyncLogger(verbose=True)
# Create browser configuration
config = BrowserConfig(
browser_type="chromium",
headless=True,
browser_mode="playwright"
)
# Create crawler run configurations
run_config = CrawlerRunConfig(
user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
)
# Create page pre-warm configurations - just pre-warm 2 pages
page_configs = [
(config, run_config, 2)
]
# Create browser manager with ON_DEMAND behavior
manager = BrowserManager(
browser_config=config,
logger=logger,
unavailable_behavior=UnavailableBehavior.ON_DEMAND,
max_browsers_per_config=5 # Allow up to 5 browsers
)
try:
# Initialize pool with pre-warmed pages
print("Initializing browser pool with pre-warmed pages...")
await manager.initialize_pool(
browser_configs=[config],
browsers_per_config=1, # Start with just 1 browser
page_configs=page_configs
)
# Display initial pool status
status = await manager.get_pool_status()
print(f"Initial pool status: {status}")
# Simulate using more pages than pre-warmed - should create browsers on demand
print("\nUsing more pages than pre-warmed (should create on demand)...")
async def use_page(index: int):
print(f"Task {index}: Requesting page...")
page, context, strategy = await manager.get_page(run_config, config)
try:
print(f"Task {index}: Got page, navigating to example.com...")
await page.goto("https://example.com")
# Get page title
title = await page.title()
print(f"Task {index}: Page title: {title}")
# Simulate work for a varying amount of time
work_time = 1 + (index * 0.5) # Stagger completion times
print(f"Task {index}: Working for {work_time} seconds...")
await asyncio.sleep(work_time)
print(f"Task {index}: Work completed")
finally:
# Release the page
print(f"Task {index}: Releasing page...")
await manager.release_page(page, strategy, config)
# Create more tasks than pre-warmed pages
tasks = []
for i in range(5): # Try to use 5 pages when only 2 are pre-warmed
tasks.append(asyncio.create_task(use_page(i+1)))
# Wait for all tasks to complete
await asyncio.gather(*tasks)
# Display final pool status - should show on-demand created browsers
status = await manager.get_pool_status()
print(f"\nFinal pool status: {status}")
finally:
# Clean up
print("\nClosing browser manager...")
await manager.close()
print("Browser manager closed")
async def high_volume_demo():
"""Demonstrate high-volume access to pre-warmed pages."""
print("\n=== High Volume Pre-warmed Pages Demo ===")
# Create logger
logger = AsyncLogger(verbose=True)
# Create browser configuration
config = BrowserConfig(
browser_type="chromium",
headless=True,
browser_mode="playwright"
)
# Create crawler run configuration
run_config = CrawlerRunConfig(
user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
)
# Set up dimensions
browser_count = 10
pages_per_browser = 5
total_pages = browser_count * pages_per_browser
# Create page pre-warm configuration
page_configs = [
(config, run_config, total_pages)
]
print(f"Preparing {browser_count} browsers with {pages_per_browser} pages each ({total_pages} total pages)")
# Create browser manager with ON_DEMAND behavior as fallback
# No need to specify max_browsers_per_config as it will be calculated automatically
manager = BrowserManager(
browser_config=config,
logger=logger,
unavailable_behavior=UnavailableBehavior.ON_DEMAND
)
try:
# Initialize pool with browsers and pre-warmed pages
print(f"Pre-warming {total_pages} pages...")
start_time = time.time()
await manager.initialize_pool(
browser_configs=[config],
browsers_per_config=browser_count,
page_configs=page_configs
)
warmup_time = time.time() - start_time
print(f"Pre-warming completed in {warmup_time:.2f} seconds")
# Display pool status
status = await manager.get_pool_status()
print(f"Pool status after pre-warming: {status}")
# Simulate using all pre-warmed pages simultaneously
print(f"\nSending {total_pages} crawl requests simultaneously...")
async def crawl_page(index: int):
# url = f"https://example.com/page{index}"
url = SAFE_URLS[index % len(SAFE_URLS)]
print(f"Page {index}: Requesting page...")
# Measure time to acquire page
page_start = time.time()
page, context, strategy = await manager.get_page(run_config, config)
page_acquisition_time = time.time() - page_start
try:
# Navigate to the URL
nav_start = time.time()
await page.goto(url, timeout=5000)
navigation_time = time.time() - nav_start
# Get the page title
title = await page.title()
return {
"index": index,
"url": url,
"title": title,
"page_acquisition_time": page_acquisition_time,
"navigation_time": navigation_time
}
except PlaywrightTimeoutError as e: # Use the public API rather than playwright._impl internals
# print(f"Page {index}: Navigation timed out - {e}")
return {
"index": index,
"url": url,
"title": "Navigation timed out",
"page_acquisition_time": page_acquisition_time,
"navigation_time": 0
}
finally:
# Release the page
await manager.release_page(page, strategy, config)
# Create and execute all tasks simultaneously
start_time = time.time()
# Non-parallel way
# for i in range(total_pages):
# await crawl_page(i+1)
tasks = [crawl_page(i+1) for i in range(total_pages)]
results = await asyncio.gather(*tasks)
total_time = time.time() - start_time
# # Print all titles
# for result in results:
# print(f"Page {result['index']} ({result['url']}): Title: {result['title']}")
# print(f" Page acquisition time: {result['page_acquisition_time']:.4f}s")
# print(f" Navigation time: {result['navigation_time']:.4f}s")
# print(f" Total time: {result['page_acquisition_time'] + result['navigation_time']:.4f}s")
# print("-" * 40)
# Report results
print(f"\nAll {total_pages} crawls completed in {total_time:.2f} seconds")
# Calculate statistics
acquisition_times = [r["page_acquisition_time"] for r in results]
navigation_times = [r["navigation_time"] for r in results]
avg_acquisition = sum(acquisition_times) / len(acquisition_times)
max_acquisition = max(acquisition_times)
min_acquisition = min(acquisition_times)
avg_navigation = sum(navigation_times) / len(navigation_times)
max_navigation = max(navigation_times)
min_navigation = min(navigation_times)
print("\nPage acquisition times:")
print(f" Average: {avg_acquisition:.4f}s")
print(f" Min: {min_acquisition:.4f}s")
print(f" Max: {max_acquisition:.4f}s")
print("\nPage navigation times:")
print(f" Average: {avg_navigation:.4f}s")
print(f" Min: {min_navigation:.4f}s")
print(f" Max: {max_navigation:.4f}s")
# Display final pool status
status = await manager.get_pool_status()
print(f"\nFinal pool status: {status}")
finally:
# Clean up
print("\nClosing browser manager...")
await manager.close()
print("Browser manager closed")
async def main():
"""Run all demos."""
# await basic_pooling_demo()
# await prewarm_pages_demo()
# await prewarm_on_demand_demo()
await high_volume_demo()
# Additional demo functions can be added here
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,190 @@
"""Test examples for BrowserManager.
These examples demonstrate the functionality of BrowserManager
and serve as functional tests.
"""
import asyncio
import os
import sys
from typing import List
# Add the project root to Python path if running directly
if __name__ == "__main__":
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../..')))
from crawl4ai.browser import BrowserManager
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.async_logger import AsyncLogger
# Create a logger for clear terminal output
logger = AsyncLogger(verbose=True, log_file=None)
async def test_basic_browser_manager():
"""Test basic BrowserManager functionality with default configuration."""
logger.info("Starting test_basic_browser_manager", tag="TEST")
try:
# Create a browser manager with default config
manager = BrowserManager(logger=logger)
# Start the browser
await manager.start()
logger.info("Browser started successfully", tag="TEST")
# Get a page
crawler_config = CrawlerRunConfig(url="https://example.com")
page, context = await manager.get_page(crawler_config)
logger.info("Page created successfully", tag="TEST")
# Navigate to a website
await page.goto("https://example.com")
title = await page.title()
logger.info(f"Page title: {title}", tag="TEST")
# Clean up
await manager.close()
logger.success("test_basic_browser_manager completed successfully", tag="TEST")
return True
except Exception as e:
logger.error(f"test_basic_browser_manager failed: {str(e)}", tag="TEST")
return False
async def test_custom_browser_config():
"""Test BrowserManager with custom browser configuration."""
logger.info("Starting test_custom_browser_config", tag="TEST")
try:
# Create a custom browser config
browser_config = BrowserConfig(
browser_type="chromium",
headless=True,
viewport_width=1280,
viewport_height=800,
light_mode=True
)
# Create browser manager with the config
manager = BrowserManager(browser_config=browser_config, logger=logger)
# Start the browser
await manager.start()
logger.info("Browser started successfully with custom config", tag="TEST")
# Get a page
crawler_config = CrawlerRunConfig(url="https://example.com")
page, context = await manager.get_page(crawler_config)
# Navigate to a website
await page.goto("https://example.com")
title = await page.title()
logger.info(f"Page title: {title}", tag="TEST")
# Verify viewport size
viewport_size = await page.evaluate("() => ({ width: window.innerWidth, height: window.innerHeight })")
logger.info(f"Viewport size: {viewport_size}", tag="TEST")
# Clean up
await manager.close()
logger.success("test_custom_browser_config completed successfully", tag="TEST")
return True
except Exception as e:
logger.error(f"test_custom_browser_config failed: {str(e)}", tag="TEST")
return False
async def test_multiple_pages():
"""Test BrowserManager with multiple pages."""
logger.info("Starting test_multiple_pages", tag="TEST")
try:
# Create browser manager
manager = BrowserManager(logger=logger)
# Start the browser
await manager.start()
logger.info("Browser started successfully", tag="TEST")
# Create multiple pages
pages = []
urls = ["https://example.com", "https://example.org", "https://mozilla.org"]
for i, url in enumerate(urls):
crawler_config = CrawlerRunConfig(url=url)
page, context = await manager.get_page(crawler_config)
await page.goto(url)
pages.append((page, url))
logger.info(f"Created page {i+1} for {url}", tag="TEST")
# Verify all pages are loaded correctly
for i, (page, url) in enumerate(pages):
title = await page.title()
logger.info(f"Page {i+1} title: {title}", tag="TEST")
# Clean up
await manager.close()
logger.success("test_multiple_pages completed successfully", tag="TEST")
return True
except Exception as e:
logger.error(f"test_multiple_pages failed: {str(e)}", tag="TEST")
return False
async def test_session_management():
"""Test session management in BrowserManager."""
logger.info("Starting test_session_management", tag="TEST")
try:
# Create browser manager
manager = BrowserManager(logger=logger)
# Start the browser
await manager.start()
logger.info("Browser started successfully", tag="TEST")
# Create a session
session_id = "test_session_1"
crawler_config = CrawlerRunConfig(url="https://example.com", session_id=session_id)
page1, context1 = await manager.get_page(crawler_config)
await page1.goto("https://example.com")
logger.info(f"Created session with ID: {session_id}", tag="TEST")
# Get the same session again
page2, context2 = await manager.get_page(crawler_config)
# Verify it's the same page/context
is_same_page = page1 == page2
is_same_context = context1 == context2
logger.info(f"Same page: {is_same_page}, Same context: {is_same_context}", tag="TEST")
# Kill the session
await manager.kill_session(session_id)
logger.info(f"Killed session with ID: {session_id}", tag="TEST")
# Clean up
await manager.close()
logger.success("test_session_management completed successfully", tag="TEST")
return True
except Exception as e:
logger.error(f"test_session_management failed: {str(e)}", tag="TEST")
return False
async def run_tests():
"""Run all tests sequentially."""
results = []
results.append(await test_basic_browser_manager())
results.append(await test_custom_browser_config())
results.append(await test_multiple_pages())
results.append(await test_session_management())
# Print summary
total = len(results)
passed = sum(results)
logger.info(f"Tests complete: {passed}/{total} passed", tag="SUMMARY")
if passed == total:
logger.success("All tests passed!", tag="SUMMARY")
else:
logger.error(f"{total - passed} tests failed", tag="SUMMARY")
if __name__ == "__main__":
asyncio.run(run_tests())

View File

@@ -0,0 +1,809 @@
"""
Test script for builtin browser functionality in the browser module.
This script tests:
1. Creating a builtin browser
2. Getting browser information
3. Killing the browser
4. Restarting the browser
5. Testing operations with different browser strategies
6. Testing edge cases
"""
import asyncio
import os
import sys
import time
from typing import List, Dict, Any
from colorama import Fore, Style, init
# Add the project root to the path for imports
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../..")))
from rich.console import Console
from rich.table import Table
from rich.panel import Panel
from rich.text import Text
from rich.box import Box, SIMPLE
from crawl4ai.browser import BrowserManager
from crawl4ai.browser.strategies import BuiltinBrowserStrategy
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.async_logger import AsyncLogger
# Initialize colorama for cross-platform colored terminal output
init()
# Define colors for pretty output
SUCCESS = Fore.GREEN
WARNING = Fore.YELLOW
ERROR = Fore.RED
INFO = Fore.CYAN
RESET = Fore.RESET
# Create logger
logger = AsyncLogger(verbose=True)
async def test_builtin_browser_creation():
"""Test creating a builtin browser using the BrowserManager with BuiltinBrowserStrategy"""
print(f"\n{INFO}========== Testing Builtin Browser Creation =========={RESET}")
# Step 1: Create a BrowserManager with builtin mode
print(f"\n{INFO}1. Creating BrowserManager with builtin mode{RESET}")
browser_config = BrowserConfig(browser_mode="builtin", headless=True, verbose=True)
manager = BrowserManager(browser_config=browser_config, logger=logger)
# Step 2: Check if we have a BuiltinBrowserStrategy
print(f"\n{INFO}2. Checking if we have a BuiltinBrowserStrategy{RESET}")
if isinstance(manager.strategy, BuiltinBrowserStrategy):
print(
f"{SUCCESS}Correct strategy type: {manager.strategy.__class__.__name__}{RESET}"
)
else:
print(
f"{ERROR}Wrong strategy type: {manager.strategy.__class__.__name__}{RESET}"
)
return None
# Step 3: Start the manager to launch or connect to builtin browser
print(f"\n{INFO}3. Starting the browser manager{RESET}")
try:
await manager.start()
print(f"{SUCCESS}Browser manager started successfully{RESET}")
except Exception as e:
print(f"{ERROR}Failed to start browser manager: {str(e)}{RESET}")
return None
# Step 4: Get browser info from the strategy
print(f"\n{INFO}4. Getting browser information{RESET}")
browser_info = manager.strategy.get_browser_info()
if browser_info:
print(f"{SUCCESS}Browser info retrieved:{RESET}")
for key, value in browser_info.items():
if key != "config": # Skip the verbose config section
print(f" {key}: {value}")
cdp_url = browser_info.get("cdp_url")
print(f"{SUCCESS}CDP URL: {cdp_url}{RESET}")
else:
print(f"{ERROR}Failed to get browser information{RESET}")
cdp_url = None
# Save manager for later tests
return manager, cdp_url
async def test_page_operations(manager: BrowserManager):
"""Test page operations with the builtin browser"""
print(
f"\n{INFO}========== Testing Page Operations with Builtin Browser =========={RESET}"
)
# Step 1: Get a single page
print(f"\n{INFO}1. Getting a single page{RESET}")
try:
crawler_config = CrawlerRunConfig()
page, context = await manager.get_page(crawler_config)
print(f"{SUCCESS}Got page successfully{RESET}")
# Navigate to a test URL
await page.goto("https://example.com")
title = await page.title()
print(f"{SUCCESS}Page title: {title}{RESET}")
# Close the page
await page.close()
print(f"{SUCCESS}Page closed successfully{RESET}")
except Exception as e:
print(f"{ERROR}Page operation failed: {str(e)}{RESET}")
return False
# Step 2: Get multiple pages
print(f"\n{INFO}2. Getting multiple pages with get_pages(){RESET}")
try:
# Request 3 pages
crawler_config = CrawlerRunConfig()
pages = await manager.get_pages(crawler_config, count=3)
print(f"{SUCCESS}Got {len(pages)} pages{RESET}")
# Test each page
for i, (page, context) in enumerate(pages):
await page.goto(f"https://example.com?test={i}")
title = await page.title()
print(f"{SUCCESS}Page {i + 1} title: {title}{RESET}")
await page.close()
print(f"{SUCCESS}All pages tested and closed successfully{RESET}")
except Exception as e:
print(f"{ERROR}Multiple page operation failed: {str(e)}{RESET}")
return False
return True
async def test_browser_status_management(manager: BrowserManager):
"""Test browser status and management operations"""
print(f"\n{INFO}========== Testing Browser Status and Management =========={RESET}")
# Step 1: Get browser status
print(f"\n{INFO}1. Getting browser status{RESET}")
try:
status = await manager.strategy.get_builtin_browser_status()
print(f"{SUCCESS}Browser status:{RESET}")
print(f" Running: {status['running']}")
print(f" CDP URL: {status['cdp_url']}")
except Exception as e:
print(f"{ERROR}Failed to get browser status: {str(e)}{RESET}")
return False
# Step 2: Test killing the browser
print(f"\n{INFO}2. Testing killing the browser{RESET}")
try:
result = await manager.strategy.kill_builtin_browser()
if result:
print(f"{SUCCESS}Browser killed successfully{RESET}")
else:
print(f"{ERROR}Failed to kill browser{RESET}")
except Exception as e:
print(f"{ERROR}Browser kill operation failed: {str(e)}{RESET}")
return False
# Step 3: Check status after kill
print(f"\n{INFO}3. Checking status after kill{RESET}")
try:
status = await manager.strategy.get_builtin_browser_status()
if not status["running"]:
print(f"{SUCCESS}Browser is correctly reported as not running{RESET}")
else:
print(f"{ERROR}Browser is incorrectly reported as still running{RESET}")
except Exception as e:
print(f"{ERROR}Failed to get browser status: {str(e)}{RESET}")
return False
# Step 4: Launch a new browser
print(f"\n{INFO}4. Launching a new browser{RESET}")
try:
cdp_url = await manager.strategy.launch_builtin_browser(
browser_type="chromium", headless=True
)
if cdp_url:
print(f"{SUCCESS}New browser launched at: {cdp_url}{RESET}")
else:
print(f"{ERROR}Failed to launch new browser{RESET}")
return False
except Exception as e:
print(f"{ERROR}Browser launch failed: {str(e)}{RESET}")
return False
return True
async def test_multiple_managers():
"""Test creating multiple BrowserManagers that use the same builtin browser"""
print(f"\n{INFO}========== Testing Multiple Browser Managers =========={RESET}")
# Step 1: Create first manager
print(f"\n{INFO}1. Creating first browser manager{RESET}")
browser_config1 = BrowserConfig(browser_mode="builtin", headless=True)
manager1 = BrowserManager(browser_config=browser_config1, logger=logger)
# Step 2: Create second manager
print(f"\n{INFO}2. Creating second browser manager{RESET}")
browser_config2 = BrowserConfig(browser_mode="builtin", headless=True)
manager2 = BrowserManager(browser_config=browser_config2, logger=logger)
# Step 3: Start both managers (should connect to the same builtin browser)
print(f"\n{INFO}3. Starting both managers{RESET}")
try:
await manager1.start()
print(f"{SUCCESS}First manager started{RESET}")
await manager2.start()
print(f"{SUCCESS}Second manager started{RESET}")
# Check if they got the same CDP URL
cdp_url1 = manager1.strategy.config.cdp_url
cdp_url2 = manager2.strategy.config.cdp_url
if cdp_url1 == cdp_url2:
print(
f"{SUCCESS}Both managers connected to the same browser: {cdp_url1}{RESET}"
)
else:
print(
f"{WARNING}Managers connected to different browsers: {cdp_url1} and {cdp_url2}{RESET}"
)
except Exception as e:
print(f"{ERROR}Failed to start managers: {str(e)}{RESET}")
return False
# Step 4: Test using both managers
print(f"\n{INFO}4. Testing operations with both managers{RESET}")
try:
# First manager creates a page
page1, ctx1 = await manager1.get_page(CrawlerRunConfig())
await page1.goto("https://example.com")
title1 = await page1.title()
print(f"{SUCCESS}Manager 1 page title: {title1}{RESET}")
# Second manager creates a page
page2, ctx2 = await manager2.get_page(CrawlerRunConfig())
await page2.goto("https://example.org")
title2 = await page2.title()
print(f"{SUCCESS}Manager 2 page title: {title2}{RESET}")
# Clean up
await page1.close()
await page2.close()
except Exception as e:
print(f"{ERROR}Failed to use both managers: {str(e)}{RESET}")
return False
# Step 5: Close both managers
print(f"\n{INFO}5. Closing both managers{RESET}")
try:
await manager1.close()
print(f"{SUCCESS}First manager closed{RESET}")
await manager2.close()
print(f"{SUCCESS}Second manager closed{RESET}")
except Exception as e:
print(f"{ERROR}Failed to close managers: {str(e)}{RESET}")
return False
return True
async def test_edge_cases():
"""Test edge cases like multiple starts, killing browser during operations, etc."""
print(f"\n{INFO}========== Testing Edge Cases =========={RESET}")
# Step 1: Test multiple starts with the same manager
print(f"\n{INFO}1. Testing multiple starts with the same manager{RESET}")
browser_config = BrowserConfig(browser_mode="builtin", headless=True)
manager = BrowserManager(browser_config=browser_config, logger=logger)
try:
await manager.start()
print(f"{SUCCESS}First start successful{RESET}")
# Try to start again
await manager.start()
print(f"{SUCCESS}Second start completed without errors{RESET}")
# Test if it's still functional
page, context = await manager.get_page(CrawlerRunConfig())
await page.goto("https://example.com")
title = await page.title()
print(
f"{SUCCESS}Page operations work after multiple starts. Title: {title}{RESET}"
)
await page.close()
except Exception as e:
print(f"{ERROR}Multiple starts test failed: {str(e)}{RESET}")
return False
finally:
await manager.close()
# Step 2: Test killing the browser while manager is active
print(f"\n{INFO}2. Testing killing the browser while manager is active{RESET}")
manager = BrowserManager(browser_config=browser_config, logger=logger)
try:
await manager.start()
print(f"{SUCCESS}Manager started{RESET}")
# Kill the browser directly
print(f"{INFO}Killing the browser...{RESET}")
await manager.strategy.kill_builtin_browser()
print(f"{SUCCESS}Browser killed{RESET}")
# Try to get a page (should fail or launch a new browser)
try:
page, context = await manager.get_page(CrawlerRunConfig())
print(
f"{WARNING}Page request succeeded despite killed browser (might have auto-restarted){RESET}"
)
title = await page.title()
print(f"{SUCCESS}Got page title: {title}{RESET}")
await page.close()
except Exception as e:
print(
f"{SUCCESS}Page request failed as expected after browser was killed: {str(e)}{RESET}"
)
except Exception as e:
print(f"{ERROR}Kill during operation test failed: {str(e)}{RESET}")
return False
finally:
await manager.close()
return True
async def cleanup_browsers():
"""Clean up any remaining builtin browsers"""
print(f"\n{INFO}========== Cleaning Up Builtin Browsers =========={RESET}")
browser_config = BrowserConfig(browser_mode="builtin", headless=True)
manager = BrowserManager(browser_config=browser_config, logger=logger)
try:
# No need to start, just access the strategy directly
strategy = manager.strategy
if isinstance(strategy, BuiltinBrowserStrategy):
result = await strategy.kill_builtin_browser()
if result:
print(f"{SUCCESS}Successfully killed all builtin browsers{RESET}")
else:
print(f"{WARNING}No builtin browsers found to kill{RESET}")
else:
print(f"{ERROR}Wrong strategy type: {strategy.__class__.__name__}{RESET}")
except Exception as e:
print(f"{ERROR}Cleanup failed: {str(e)}{RESET}")
finally:
# Just to be safe
try:
await manager.close()
        except Exception:
            pass
async def test_performance_scaling():
"""Test performance with multiple browsers and pages.
This test creates multiple browsers on different ports,
spawns multiple pages per browser, and measures performance metrics.
"""
print(f"\n{INFO}========== Testing Performance Scaling =========={RESET}")
# Configuration parameters
num_browsers = 10
pages_per_browser = 10
total_pages = num_browsers * pages_per_browser
base_port = 9222
    # Set up memory measurement
import psutil
import gc
# Force garbage collection before starting
gc.collect()
process = psutil.Process()
initial_memory = process.memory_info().rss / 1024 / 1024 # in MB
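    # NOTE: psutil reports RSS for this Python process only; the Chromium
    # browsers run as separate child processes, so their memory is not
    # included in these figures.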
peak_memory = initial_memory
# Report initial configuration
print(
f"{INFO}Test configuration: {num_browsers} browsers × {pages_per_browser} pages = {total_pages} total crawls{RESET}"
)
# List to track managers
managers: List[BrowserManager] = []
all_pages = []
# Get crawl4ai home directory
crawl4ai_home = os.path.expanduser("~/.crawl4ai")
temp_dir = os.path.join(crawl4ai_home, "temp")
os.makedirs(temp_dir, exist_ok=True)
# Create all managers but don't start them yet
manager_configs = []
for i in range(num_browsers):
port = base_port + i
browser_config = BrowserConfig(
browser_mode="builtin",
headless=True,
debugging_port=port,
user_data_dir=os.path.join(temp_dir, f"browser_profile_{i}"),
)
manager = BrowserManager(browser_config=browser_config, logger=logger)
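        # Assumption based on the flag name: setting shutting_down before start
        # suppresses this manager's teardown of the shared builtin browser.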
manager.strategy.shutting_down = True
manager_configs.append((manager, i, port))
# Define async function to start a single manager
async def start_manager(manager, index, port):
try:
await manager.start()
return manager
except Exception as e:
print(
f"{ERROR}Failed to start browser {index + 1} on port {port}: {str(e)}{RESET}"
)
return None
# Start all managers in parallel
start_tasks = [
start_manager(manager, i, port) for manager, i, port in manager_configs
]
started_managers = await asyncio.gather(*start_tasks)
# Filter out None values (failed starts) and add to managers list
managers = [m for m in started_managers if m is not None]
if len(managers) == 0:
print(f"{ERROR}All browser managers failed to start. Aborting test.{RESET}")
return False
if len(managers) < num_browsers:
print(
f"{WARNING}Only {len(managers)} out of {num_browsers} browser managers started successfully{RESET}"
)
# Create pages for each browser
for i, manager in enumerate(managers):
try:
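            # get_pages returns (page, context) tuples; collect them all for the load phase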
pages = await manager.get_pages(CrawlerRunConfig(), count=pages_per_browser)
all_pages.extend(pages)
except Exception as e:
print(f"{ERROR}Failed to create pages for browser {i + 1}: {str(e)}{RESET}")
# Check memory after page creation
gc.collect()
current_memory = process.memory_info().rss / 1024 / 1024
peak_memory = max(peak_memory, current_memory)
# Ask for confirmation before loading
confirmation = input(
f"{WARNING}Do you want to proceed with loading pages? (y/n): {RESET}"
)
    # Start the load-phase timer (managers and pages are already running at this point)
    start_time = time.time()
if confirmation.lower() == "y":
load_start_time = time.time()
# Function to load a single page
async def load_page(page_ctx, index):
page, _ = page_ctx
try:
await page.goto(f"https://example.com/page{index}", timeout=30000)
title = await page.title()
return title
except Exception as e:
return f"Error: {str(e)}"
# Load all pages concurrently
load_tasks = [load_page(page_ctx, i) for i, page_ctx in enumerate(all_pages)]
load_results = await asyncio.gather(*load_tasks, return_exceptions=True)
# Count successes and failures
successes = sum(
1 for r in load_results if isinstance(r, str) and not r.startswith("Error")
)
failures = len(load_results) - successes
load_time = time.time() - load_start_time
total_test_time = time.time() - start_time
# Check memory after loading (peak memory)
gc.collect()
current_memory = process.memory_info().rss / 1024 / 1024
peak_memory = max(peak_memory, current_memory)
# Calculate key metrics
        # Report memory growth over the baseline rather than absolute peak RSS
        memory_per_page = (peak_memory - initial_memory) / successes if successes > 0 else 0
time_per_crawl = total_test_time / successes if successes > 0 else 0
crawls_per_second = successes / total_test_time if total_test_time > 0 else 0
crawls_per_minute = crawls_per_second * 60
crawls_per_hour = crawls_per_minute * 60
# Print simplified performance summary
from rich.console import Console
from rich.table import Table
console = Console()
# Create a simple summary table
table = Table(title="CRAWL4AI PERFORMANCE SUMMARY")
table.add_column("Metric", style="cyan")
table.add_column("Value", style="green")
table.add_row("Total Crawls Completed", f"{successes}")
table.add_row("Total Time", f"{total_test_time:.2f} seconds")
table.add_row("Time Per Crawl", f"{time_per_crawl:.2f} seconds")
table.add_row("Crawling Speed", f"{crawls_per_second:.2f} crawls/second")
table.add_row("Projected Rate (1 minute)", f"{crawls_per_minute:.0f} crawls")
table.add_row("Projected Rate (1 hour)", f"{crawls_per_hour:.0f} crawls")
table.add_row("Peak Memory Usage", f"{peak_memory:.2f} MB")
table.add_row("Memory Per Crawl", f"{memory_per_page:.2f} MB")
# Display the table
console.print(table)
# Ask confirmation before cleanup
confirmation = input(
f"{WARNING}Do you want to proceed with cleanup? (y/n): {RESET}"
)
if confirmation.lower() != "y":
print(f"{WARNING}Cleanup aborted by user{RESET}")
return False
# Close all pages
for page, _ in all_pages:
try:
await page.close()
        except Exception:
            pass
# Close all managers
for manager in managers:
try:
await manager.close()
        except Exception:
            pass
# Remove the temp directory
import shutil
if os.path.exists(temp_dir):
shutil.rmtree(temp_dir)
return True
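# The metric arithmetic used by both performance tests, factored into a pure
# helper for reference. This is a hypothetical refactor; the tests compute
# these values inline.
def throughput_metrics(successes: int, total_seconds: float, peak_growth_mb: float) -> dict:
    per_second = successes / total_seconds if total_seconds > 0 else 0.0
    return {
        "time_per_crawl_s": total_seconds / successes if successes else 0.0,
        "crawls_per_second": per_second,
        "crawls_per_minute": per_second * 60,
        "crawls_per_hour": per_second * 3600,
        "memory_per_crawl_mb": peak_growth_mb / successes if successes else 0.0,
    }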
async def test_performance_scaling_lab(num_browsers: int = 10, pages_per_browser: int = 10):
"""Test performance with multiple browsers and pages.
This test creates multiple browsers on different ports,
spawns multiple pages per browser, and measures performance metrics.
"""
print(f"\n{INFO}========== Testing Performance Scaling =========={RESET}")
    # Configuration parameters are taken directly from the function arguments
total_pages = num_browsers * pages_per_browser
base_port = 9222
    # Set up memory measurement
import psutil
import gc
# Force garbage collection before starting
gc.collect()
process = psutil.Process()
initial_memory = process.memory_info().rss / 1024 / 1024 # in MB
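    # See the RSS measurement note in test_performance_scaling above.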
peak_memory = initial_memory
# Report initial configuration
print(
f"{INFO}Test configuration: {num_browsers} browsers × {pages_per_browser} pages = {total_pages} total crawls{RESET}"
)
# List to track managers
managers: List[BrowserManager] = []
all_pages = []
# Get crawl4ai home directory
crawl4ai_home = os.path.expanduser("~/.crawl4ai")
temp_dir = os.path.join(crawl4ai_home, "temp")
os.makedirs(temp_dir, exist_ok=True)
# Create all managers but don't start them yet
manager_configs = []
for i in range(num_browsers):
port = base_port + i
browser_config = BrowserConfig(
browser_mode="builtin",
headless=True,
debugging_port=port,
user_data_dir=os.path.join(temp_dir, f"browser_profile_{i}"),
)
manager = BrowserManager(browser_config=browser_config, logger=logger)
manager.strategy.shutting_down = True
manager_configs.append((manager, i, port))
# Define async function to start a single manager
async def start_manager(manager, index, port):
try:
await manager.start()
return manager
except Exception as e:
print(
f"{ERROR}Failed to start browser {index + 1} on port {port}: {str(e)}{RESET}"
)
return None
# Start all managers in parallel
start_tasks = [
start_manager(manager, i, port) for manager, i, port in manager_configs
]
started_managers = await asyncio.gather(*start_tasks)
# Filter out None values (failed starts) and add to managers list
managers = [m for m in started_managers if m is not None]
if len(managers) == 0:
print(f"{ERROR}All browser managers failed to start. Aborting test.{RESET}")
return False
if len(managers) < num_browsers:
print(
f"{WARNING}Only {len(managers)} out of {num_browsers} browser managers started successfully{RESET}"
)
# Create pages for each browser
for i, manager in enumerate(managers):
try:
pages = await manager.get_pages(CrawlerRunConfig(), count=pages_per_browser)
all_pages.extend(pages)
except Exception as e:
print(f"{ERROR}Failed to create pages for browser {i + 1}: {str(e)}{RESET}")
# Check memory after page creation
gc.collect()
current_memory = process.memory_info().rss / 1024 / 1024
peak_memory = max(peak_memory, current_memory)
# Ask for confirmation before loading
confirmation = input(
f"{WARNING}Do you want to proceed with loading pages? (y/n): {RESET}"
)
    # Start the load-phase timer (managers and pages are already running at this point)
    start_time = time.time()
if confirmation.lower() == "y":
load_start_time = time.time()
# Function to load a single page
async def load_page(page_ctx, index):
page, _ = page_ctx
try:
await page.goto(f"https://example.com/page{index}", timeout=30000)
title = await page.title()
return title
except Exception as e:
return f"Error: {str(e)}"
# Load all pages concurrently
load_tasks = [load_page(page_ctx, i) for i, page_ctx in enumerate(all_pages)]
load_results = await asyncio.gather(*load_tasks, return_exceptions=True)
# Count successes and failures
successes = sum(
1 for r in load_results if isinstance(r, str) and not r.startswith("Error")
)
failures = len(load_results) - successes
load_time = time.time() - load_start_time
total_test_time = time.time() - start_time
# Check memory after loading (peak memory)
gc.collect()
current_memory = process.memory_info().rss / 1024 / 1024
peak_memory = max(peak_memory, current_memory)
# Calculate key metrics
        # Report memory growth over the baseline rather than absolute peak RSS
        memory_per_page = (peak_memory - initial_memory) / successes if successes > 0 else 0
time_per_crawl = total_test_time / successes if successes > 0 else 0
crawls_per_second = successes / total_test_time if total_test_time > 0 else 0
crawls_per_minute = crawls_per_second * 60
crawls_per_hour = crawls_per_minute * 60
# Print simplified performance summary
from rich.console import Console
from rich.table import Table
console = Console()
# Create a simple summary table
table = Table(title="CRAWL4AI PERFORMANCE SUMMARY")
table.add_column("Metric", style="cyan")
table.add_column("Value", style="green")
table.add_row("Total Crawls Completed", f"{successes}")
table.add_row("Total Time", f"{total_test_time:.2f} seconds")
table.add_row("Time Per Crawl", f"{time_per_crawl:.2f} seconds")
table.add_row("Crawling Speed", f"{crawls_per_second:.2f} crawls/second")
table.add_row("Projected Rate (1 minute)", f"{crawls_per_minute:.0f} crawls")
table.add_row("Projected Rate (1 hour)", f"{crawls_per_hour:.0f} crawls")
table.add_row("Peak Memory Usage", f"{peak_memory:.2f} MB")
table.add_row("Memory Per Crawl", f"{memory_per_page:.2f} MB")
# Display the table
console.print(table)
# Ask confirmation before cleanup
confirmation = input(
f"{WARNING}Do you want to proceed with cleanup? (y/n): {RESET}"
)
if confirmation.lower() != "y":
print(f"{WARNING}Cleanup aborted by user{RESET}")
return False
# Close all pages
for page, _ in all_pages:
try:
await page.close()
        except Exception:
            pass
# Close all managers
for manager in managers:
try:
await manager.close()
        except Exception:
            pass
# Remove the temp directory
import shutil
if os.path.exists(temp_dir):
shutil.rmtree(temp_dir)
return True
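# Hypothetical usage of the parameterized variant, e.g. a smaller local run:
#   asyncio.run(test_performance_scaling_lab(num_browsers=2, pages_per_browser=3))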
async def main():
"""Run all tests"""
try:
print(f"{INFO}Starting builtin browser tests with browser module{RESET}")
# # Run browser creation test
# manager, cdp_url = await test_builtin_browser_creation()
# if not manager:
# print(f"{ERROR}Browser creation failed, cannot continue tests{RESET}")
# return
# # Run page operations test
# await test_page_operations(manager)
# # Run browser status and management test
# await test_browser_status_management(manager)
# # Close manager before multiple manager test
# await manager.close()
# Run multiple managers test
await test_multiple_managers()
# Run performance scaling test
await test_performance_scaling()
# Run cleanup test
await cleanup_browsers()
# Run edge cases test
await test_edge_cases()
print(f"\n{SUCCESS}All tests completed!{RESET}")
except Exception as e:
print(f"\n{ERROR}Test failed with error: {str(e)}{RESET}")
import traceback
traceback.print_exc()
finally:
# Clean up: kill any remaining builtin browsers
await cleanup_browsers()
print(f"{SUCCESS}Test cleanup complete{RESET}")
if __name__ == "__main__":
asyncio.run(main())
