Compare commits

122 Commits

Author SHA1 Message Date
coderabbitai[bot]
9d8ead59b8 📝 Add docstrings to codex/find-and-fix-a-bug (#1123)
Docstrings generation was requested by @unclecode.

* https://github.com/unclecode/crawl4ai/pull/1122#issuecomment-2887985865

The following files were modified:

* `crawl4ai/utils.py`

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2025-05-17 10:52:55 +08:00
UncleCode
45f1652d98 Fix merge_chunks splitter usage and remove incorrect return 2025-05-17 10:31:19 +08:00
UncleCode
897e017361 Set version to 0.6.3 2025-05-12 21:20:10 +08:00
UncleCode
a3e9ef91ad fix(crawler): remove automatic page closure in screenshot methods
Removes automatic page closure in take_screenshot and take_screenshot_naive methods
to prevent premature closure of pages that might still be needed in the calling context.
This allows for more flexible page lifecycle management by the caller.

BREAKING CHANGE: Page objects are no longer automatically closed after taking screenshots.
Callers must explicitly handle page closure when appropriate.
2025-05-12 21:17:57 +08:00
UncleCode
76dd86d1b3 Merge remote-tracking branch 'origin/linkedin-prep' into next 2025-05-08 17:13:59 +08:00
UncleCode
206a9dfabd feat(crawler): add session management and view-source support
Add session_id feature to allow reusing browser pages across multiple crawls.
Add support for view-source: protocol in URL handling.
Fix browser config reference and string formatting issues.
Update examples to demonstrate new session management features.

BREAKING CHANGE: Browser page handling now persists when using session_id
2025-05-08 17:13:35 +08:00
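A minimal sketch of the session reuse and `view-source:` support described above, assuming the public `AsyncWebCrawler`/`CrawlerRunConfig` API with its `session_id` parameter; the URLs are placeholders:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    async with AsyncWebCrawler() as crawler:
        # Sharing a session_id keeps the same browser page alive across crawls
        config = CrawlerRunConfig(session_id="login-flow")
        await crawler.arun("https://example.com/login", config=config)
        result = await crawler.arun("https://example.com/account", config=config)
        print(result.success)
        # view-source: URLs return the raw page source
        src = await crawler.arun("view-source:https://example.com", config=config)
        print(src.html[:200])

asyncio.run(main())
```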
Aravind Karnam
aaf05910eb fix: removed unnecessary imports and installs 2025-05-06 15:53:55 +05:30
Aravind Karnam
a0555d5fa6 merge:from next branch 2025-05-06 15:16:47 +05:30
Aravind Karnam
38ebcbb304 fix: provide support for local llm by adding it to the arguments 2025-05-05 10:34:38 +05:30
UncleCode
9b5ccac76e feat(extraction): add RegexExtractionStrategy for pattern-based extraction
Add new RegexExtractionStrategy for fast, zero-LLM extraction of common data types:
- Built-in patterns for emails, URLs, phones, dates, and more
- Support for custom regex patterns
- LLM-assisted pattern generation utility
- Optimized HTML preprocessing with fit_html field
- Enhanced network response body capture

Breaking changes: None
2025-05-02 21:15:24 +08:00
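A rough sketch of how the new strategy might be used; the flag-style built-in pattern names (`Email`, `Url`) and the constructor signature are assumptions based on this commit message:

```python
import asyncio, json
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, RegexExtractionStrategy

async def main():
    # Combine built-in patterns; exact pattern names are assumed
    strategy = RegexExtractionStrategy(
        pattern=RegexExtractionStrategy.Email | RegexExtractionStrategy.Url
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            "https://example.com",
            config=CrawlerRunConfig(extraction_strategy=strategy),
        )
        for match in json.loads(result.extracted_content):
            print(match)

asyncio.run(main())
```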
Aravind Karnam
87d4b0fff4 format bash scripts properly so copy & paste may work without issues 2025-05-02 17:21:09 +05:30
Aravind Karnam
bd5a9ac632 updated readme with arguments for litellm 2025-05-02 17:04:42 +05:30
Aravind Karnam
6650b2f34a fix: replace openAI with litellm to support multiple llm providers 2025-05-02 16:51:15 +05:30
Aravind Karnam
5cc58f9bb3 fix: 1. duplicate verbose flag 2. inconsistency in argument name --profile-name 3. duplicate initialisation of env_defaults 2025-05-02 16:40:58 +05:30
Aravind Karnam
baf7f6a6f5 fix: typo in readme 2025-05-02 16:33:11 +05:30
UncleCode
94e9959fe0 feat(docker-api): add job-based polling endpoints for crawl and LLM tasks
Implements new asynchronous endpoints for handling long-running crawl and LLM tasks:
- POST /crawl/job and GET /crawl/job/{task_id} for crawl operations
- POST /llm/job and GET /llm/job/{task_id} for LLM operations
- Added Redis-based task management with configurable TTL
- Moved schema definitions to dedicated schemas.py
- Added example polling client demo_docker_polling.py

This change allows clients to handle long-running operations asynchronously through a polling pattern rather than holding connections open.
2025-05-01 21:24:52 +08:00
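A hedged example of the polling pattern this commit describes, against the endpoints it names; the request body and response fields (`task_id`, `status`) are assumptions, and port 11235 is the default from the v0.6.0 changelog below:

```python
import time
import requests

BASE = "http://localhost:11235"  # default port per the v0.6.0 changelog

# Submit a crawl job; the request body shape is an assumption
task = requests.post(f"{BASE}/crawl/job", json={"urls": ["https://example.com"]}).json()
task_id = task["task_id"]  # assumed response field

# Poll until the job finishes instead of holding a connection open
while True:
    status = requests.get(f"{BASE}/crawl/job/{task_id}").json()
    if status.get("status") in ("completed", "failed"):
        break
    time.sleep(2)
print(status.get("status"))
```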
Aravind Karnam
7c2fd5202e fix: incorrect params and commands in linkedin app readme 2025-05-01 18:27:03 +05:30
UncleCode
ee01b81f3e Merge branch 'merge-pr971' into next 2025-05-01 18:58:41 +08:00
UncleCode
0e5d672763 Merge branch 'pr-971' into merge-pr971 2025-05-01 18:57:28 +08:00
wakaka6
cd2b490b40 refactor(logger): Apply the Enumeration for color 2025-05-01 17:04:44 +08:00
UncleCode
50f0b83fcd feat(linkedin): add prospect-wizard app with scraping and visualization
Add new LinkedIn prospect discovery tool with three main components:
- c4ai_discover.py for company and people scraping
- c4ai_insights.py for org chart and decision maker analysis
- Interactive graph visualization with company/people exploration

Features include:
- Configurable LinkedIn search and scraping
- Org chart generation with decision maker scoring
- Interactive network graph visualization
- Company similarity analysis
- Chat interface for data exploration

Requires: crawl4ai, openai, sentence-transformers, networkx
2025-04-30 19:38:25 +08:00
UncleCode
9499164d3c feat(browser): improve browser profile management and cleanup
Enhance browser profile handling with better process cleanup and documentation:
- Add process cleanup for existing Chromium instances on Windows/Unix
- Fix profile creation by passing complete browser config
- Add comprehensive documentation for browser and CLI components
- Add initial profile creation test
- Bump version to 0.6.3

This change improves reliability when managing browser profiles and provides better documentation for developers.
2025-04-29 23:04:32 +08:00
UncleCode
2140d9aca4 fix(browser): correct headless mode default behavior
Modify BrowserConfig to respect explicit headless parameter setting instead of forcing True. Update version to 0.6.2 and clean up code formatting in examples.

BREAKING CHANGE: BrowserConfig no longer defaults to headless=True when explicitly set to False
2025-04-26 21:09:50 +08:00
UncleCode
ccec40ed17 feat(models): add dedicated tables field to CrawlResult
- Add tables field to CrawlResult model while maintaining backward compatibility
- Update async_webcrawler.py to extract tables from media and pass to tables field
- Update crypto_analysis_example.py to use the new tables field
- Add /config/dump examples to demo_docker_api.py
- Bump version to 0.6.1
2025-04-24 18:36:25 +08:00
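A sketch of reading the new `tables` field with the backward-compatible fallback to `media`; the per-table dict shape (`headers`/`rows`) is an assumption:

```python
import asyncio
import pandas as pd
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com/page-with-tables")
        # Prefer the new dedicated field; fall back to media for older results
        tables = result.tables or (result.media or {}).get("tables", [])
        if tables:
            first = tables[0]
            df = pd.DataFrame(first["rows"], columns=first["headers"])  # assumed keys
            print(df.head())

asyncio.run(main())
```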
UncleCode
ad4dfb21e1 Remove "rc1" 2025-04-23 21:00:00 +08:00
UncleCode
7784b2468e feat(docs): enhance Ask AI button UX and add v0.6.0 release notes
Improve Ask AI button with better mobile support, animations, and positioning:
- Add button animations and hover effects
- Improve mobile responsiveness
- Add icon to button
- Fix positioning logic for different viewport sizes
- Add keyboard (Escape) support

Add comprehensive v0.6.0 release documentation:
- Create detailed release notes
- Update blog index with latest release
- Document all major features and breaking changes

BREAKING CHANGE: Documentation structure updated with new v0.6.0 section
2025-04-23 20:07:03 +08:00
UncleCode
146f9d415f Update README 2025-04-23 19:50:33 +08:00
UncleCode
37fd80e4b9 feat(docs): add mobile-friendly navigation menu
Implements a responsive hamburger menu for mobile devices with the following changes:
- Add new mobile_menu.js for handling mobile navigation
- Update layout.css with mobile-specific styles and animations
- Enhance README with updated geolocation example
- Register mobile_menu.js in mkdocs.yml

The mobile menu includes:
- Hamburger button animation
- Slide-out sidebar
- Backdrop overlay
- Touch-friendly navigation
- Proper event handling
2025-04-23 19:44:25 +08:00
UncleCode
949a93982e feat(docs): update documentation and disable Ask AI feature
Major documentation updates including:
- Add comprehensive code examples page
- Add video tutorial to homepage
- Update Docker deployment instructions for v0.6.0
- Temporarily disable Ask AI feature
- Add table border styling
- Update site version to v0.6.x

BREAKING CHANGE: Ask AI feature temporarily disabled pending launch
2025-04-23 19:02:39 +08:00
UncleCode
c4f5651199 chore(deps): upgrade to Python 3.12 and prepare for 0.6.0 release
- Update Docker base image to Python 3.12-slim-bookworm
- Bump version from 0.6.0rc1 to 0.6.0
- Update documentation to reflect release version changes
- Fix license specification in pyproject.toml and setup.py
- Clean up code formatting in demo_docker_api.py

BREAKING CHANGE: Base Python version upgraded from 3.10 to 3.12
2025-04-23 16:35:15 +08:00
UncleCode
b0aa8bc9f7 Update README 2025-04-22 23:21:42 +08:00
UncleCode
c98ffe2130 Update CHANGELOG 2025-04-22 22:36:41 +08:00
UncleCode
4812f08a73 feat(docker): update Docker deployment for v0.6.0
Major updates to Docker deployment infrastructure:
- Switch default port to 11235 for all services
- Add MCP (Model Context Protocol) support with WebSocket/SSE endpoints
- Simplify docker-compose.yml with auto-platform detection
- Update documentation with new features and examples
- Consolidate configuration and improve resource management

BREAKING CHANGE: Default port changed from 8020 to 11235. Update your configurations and deployment scripts accordingly.
2025-04-22 22:35:25 +08:00
unclecode
f3ebb38edf Merge PR #899 into next, resolve conflicts in server.py and docs/browser-crawler-config.md 2025-04-22 14:56:47 +08:00
UncleCode
0007aea204 Update changelog 2025-04-21 23:21:49 +08:00
UncleCode
b5c25731e6 feat(browser): add geolocation, locale and timezone support
Add support for controlling browser geolocation, locale and timezone settings:
- New GeolocationConfig class for managing GPS coordinates
- Add locale and timezone_id parameters to CrawlerRunConfig
- Update browser context creation to handle location settings
- Add example script for geolocation usage
- Update documentation with location-based identity features

This enables more precise control over browser identity and location reporting.
2025-04-21 23:20:59 +08:00
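A minimal sketch of the geolocation, locale, and timezone controls this commit adds, assuming `GeolocationConfig` is exported from the package root and `CrawlerRunConfig` takes a `geolocation` parameter:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, GeolocationConfig

async def main():
    config = CrawlerRunConfig(
        geolocation=GeolocationConfig(latitude=48.8566, longitude=2.3522),  # Paris
        locale="fr-FR",
        timezone_id="Europe/Paris",
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://browserleaks.com/geo", config=config)
        print(result.success)

asyncio.run(main())
```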
UncleCode
5297e362f3 feat(mcp): Implement MCP protocol and enhance server capabilities
This commit introduces several significant enhancements to the Crawl4AI Docker deployment:

  1. Add MCP Protocol Support:
     - Implement WebSocket and SSE transport layers for MCP server communication
     - Create mcp_bridge.py to expose existing API endpoints via MCP protocol
     - Add comprehensive tests for both socket and SSE transport methods

  2. Enhance Docker Server Capabilities:
     - Add PDF generation endpoint with file saving functionality
     - Add screenshot capture endpoint with configurable wait time
     - Implement JavaScript execution endpoint for dynamic page interaction
     - Add intelligent file path handling for saving generated assets

  3. Improve Search and Context Functionality:
     - Implement syntax-aware code function chunking using AST parsing
     - Add BM25-based intelligent document search with relevance scoring
     - Create separate code and documentation context endpoints
     - Enhance response format with structured results and scores

  4. Rename and Fix File Organization:
     - Fix typo in test_docker_config_gen.py filename
     - Update import statements and dependencies
     - Add FileResponse for context endpoints

  This enhancement significantly improves the machine-to-machine communication
  capabilities of Crawl4AI, making it more suitable for integration with LLM agents
  and other automated systems.

2025-04-21 22:22:02 +08:00
UncleCode
a58c8000aa refactor(server): migrate to pool-based crawler management
Replace crawler_manager.py with simpler crawler_pool.py implementation:
- Add global page semaphore for hard concurrency cap
- Implement browser pool with idle cleanup
- Add playground UI for testing and stress testing
- Update API handlers to use pooled crawlers
- Enhance logging levels and symbols

BREAKING CHANGE: Removes CrawlerManager class in favor of simpler pool-based approach
2025-04-20 20:14:26 +08:00
Aravind Karnam
b27bb367e8 merge next. Resolve conflicts. Fix some import errors and error handling in server.py 2025-04-19 20:27:47 +05:30
Aravind Karnam
d2648eaa39 fix: solved with deepcopy of elements https://github.com/unclecode/crawl4ai/issues/902 2025-04-19 20:08:36 +05:30
Aravind Karnam
c2902fd200 revert: last change in order of execution, as it introduced a new issue in the generated content. https://github.com/unclecode/crawl4ai/issues/902 2025-04-19 19:46:20 +05:30
UncleCode
16b2318242 feat(api): implement crawler pool manager for improved resource handling
Adds a new CrawlerManager class to handle browser instance pooling and failover:
- Implements auto-scaling based on system resources
- Adds primary/backup crawler management
- Integrates memory monitoring and throttling
- Adds streaming support with memory tracking
- Updates API endpoints to use pooled crawlers

BREAKING CHANGE: API endpoints now require CrawlerManager initialization
2025-04-18 22:26:24 +08:00
UncleCode
907cba194f Merge branch 'next-stress' into next 2025-04-17 22:34:43 +08:00
UncleCode
3bf78ff47a refactor(docker-demo): enhance error handling and output formatting
Improve the Docker API demo script with better error handling, more detailed output,
and enhanced visualization:
- Add detailed error messages and stack traces for debugging
- Implement better status code handling and display
- Enhance JSON output formatting with monokai theme and word wrap
- Add depth information display for deep crawls
- Improve proxy usage reporting
- Fix port number inconsistency

No breaking changes.
2025-04-17 22:32:58 +08:00
UncleCode
921e0c46b6 feat(tests): implement high volume stress testing framework
Add comprehensive stress testing solution for SDK using arun_many and dispatcher system:
- Create test_stress_sdk.py for running high volume crawl tests
- Add run_benchmark.py for orchestrating tests with predefined configs
- Implement benchmark_report.py for generating performance reports
- Add memory tracking and local test site generation
- Support both streaming and batch processing modes
- Add detailed documentation in README.md

The framework enables testing SDK performance, concurrency handling,
and memory behavior under high-volume scenarios.
2025-04-17 22:31:51 +08:00
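A sketch of the framework's core mechanism, `arun_many` with a dispatcher; the `MemoryAdaptiveDispatcher` import path and `max_session_permit` parameter follow the names used elsewhere in this repo, and the local test-site URLs are placeholders:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.async_dispatcher import MemoryAdaptiveDispatcher

async def main():
    # Placeholder URLs standing in for the generated heavy test pages
    urls = [f"http://localhost:8000/page_{i}.html" for i in range(100)]
    dispatcher = MemoryAdaptiveDispatcher(max_session_permit=20)  # concurrency cap
    async with AsyncWebCrawler() as crawler:
        results = await crawler.arun_many(
            urls, config=CrawlerRunConfig(stream=False), dispatcher=dispatcher
        )
        print(sum(r.success for r in results), "of", len(urls), "succeeded")

asyncio.run(main())
```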
UncleCode
fd899f66aa Merge branch 'next-fix-markdown-source' into next 2025-04-17 20:16:15 +08:00
UncleCode
30ec4f571f feat(docs): add comprehensive Docker API demo script
Add a new example script demonstrating Docker API usage with extensive features:
- Basic crawling with single/multi URL support
- Markdown generation with various filters
- Parameter demonstrations (CSS, JS, screenshots, SSL, proxies)
- Extraction strategies using CSS and LLM
- Deep crawling capabilities with streaming
- Integration examples with proxy rotation and SSL certificate fetching

Also includes minor formatting improvements in async_webcrawler.py
2025-04-17 20:16:11 +08:00
UncleCode
7db6b468d9 feat(markdown): add content source selection for markdown generation
Adds a new content_source parameter to MarkdownGenerationStrategy that allows
selecting which HTML content to use for markdown generation:
- cleaned_html (default): uses post-processed HTML
- raw_html: uses original webpage HTML
- fit_html: uses preprocessed HTML for schema extraction

Changes include:
- Added content_source parameter to MarkdownGenerationStrategy
- Updated AsyncWebCrawler to handle HTML source selection
- Added examples and tests for the new feature
- Updated documentation with new parameter details

BREAKING CHANGE: Renamed cleaned_html parameter to input_html in generate_markdown()
method signature to better reflect its generalized purpose
2025-04-17 20:13:53 +08:00
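A short sketch of the new `content_source` parameter in use; the `markdown_generator` parameter on `CrawlerRunConfig` and the `raw_markdown` attribute follow the project's documented API, but treat the details as assumptions:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

async def main():
    # Generate markdown from the raw page HTML instead of the cleaned HTML
    md_gen = DefaultMarkdownGenerator(content_source="raw_html")
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            "https://example.com",
            config=CrawlerRunConfig(markdown_generator=md_gen),
        )
        print(result.markdown.raw_markdown[:300])

asyncio.run(main())
```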
Aravind Karnam
eed7f88f29 Merge branch 'next' into 2025-MAR-ALPHA-1 2025-04-17 10:50:02 +05:30
UncleCode
94d486579c docs(tests): clarify server URL comments in deep crawl tests
Improve documentation of test configuration URLs by adding clearer
comments explaining when to use each URL configuration - Docker vs
development mode.

No functional changes, only comment improvements.
2025-04-15 22:32:27 +08:00
UncleCode
5206c6f2d6 Modify the test file 2025-04-15 22:28:01 +08:00
UncleCode
230f22da86 refactor(proxy): move ProxyConfig to async_configs and improve LLM token handling
Moved ProxyConfig class from proxy_strategy.py to async_configs.py for better organization.
Improved LLM token handling with new PROVIDER_MODELS_PREFIXES.
Added test cases for deep crawling and proxy rotation.
Removed docker_config from BrowserConfig as it's handled separately.

BREAKING CHANGE: ProxyConfig import path changed from crawl4ai.proxy_strategy to crawl4ai
2025-04-15 22:27:18 +08:00
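The import-path change in practice, as a hedged snippet (the `server` constructor field is an assumption):

```python
# Old location (removed by this change):
#   from crawl4ai.proxy_strategy import ProxyConfig
from crawl4ai import ProxyConfig  # new import path

proxy = ProxyConfig(server="http://127.0.0.1:8080")  # constructor fields assumed
```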
UncleCode
793668a413 Remove parameter_updates.txt 2025-04-14 23:05:24 +08:00
UncleCode
82aa53aa59 Merge branch 'next-alpine-docker' into next 2025-04-14 23:01:22 +08:00
UncleCode
cd7ff6f9c1 feat(docs): add AI assistant interface and code copy button
Add new AI assistant chat interface with features:
- Real-time chat with markdown support
- Chat history management
- Citation tracking
- Selection-to-query functionality

Also adds code copy button to documentation code blocks and adjusts layout/styling.

Breaking changes: None
2025-04-14 23:00:47 +08:00
UncleCode
c56974cf59 feat(docs): enhance documentation UI with ToC and GitHub stats
Add new features to documentation UI:
- Add table of contents with scroll spy functionality
- Add GitHub repository statistics badge
- Implement new centered layout system with fixed sidebar
- Add conditional Playwright installation based on CRAWL4AI_MODE

Breaking changes: None
2025-04-14 20:46:32 +08:00
Aravind Karnam
dcc265458c fix: Add a nominal wait time for removing overlay elements, since it's already controllable through delay_before_return_html 2025-04-14 12:39:05 +05:30
UncleCode
ecec53a8c1 Docker tested on Windows machine. 2025-04-13 20:14:41 +08:00
Aravind Karnam
7d8e81fb2e fix: fix target_elements in a less invasive and more efficient way, simply by changing the order of execution :) https://github.com/unclecode/crawl4ai/issues/902 2025-04-12 12:44:00 +05:30
Aravind Karnam
9fc5d315af fix: revert the old target_elms code in LXMLwebscraping strategy 2025-04-12 12:07:04 +05:30
Aravind Karnam
d84508b4d5 fix: revert the old target_elms code in regular webscraping strategy 2025-04-12 12:05:17 +05:30
Aravind Karnam
022f5c9e25 Merged next branch 2025-04-12 10:47:02 +05:30
UncleCode
3179d6ad0c fix(core): improve error handling and stability in core components
Enhance error handling and stability across multiple components:
- Add safety checks in async_configs.py for type and params existence
- Fix browser manager initialization and cleanup logic
- Add default LLM config fallback in extraction strategy
- Add comprehensive Docker deployment guide and server tests

BREAKING CHANGE: BrowserManager.start() now automatically closes existing instances
2025-04-11 20:58:39 +08:00
wakaka6
b2f3cb0dfa WIP: migrate logger to rich 2025-04-11 00:44:43 +08:00
UncleCode
18e8227dfb feat(crawler): add console message capture functionality
Add ability to capture browser console messages during crawling:
- Implement _capture_console_messages method to collect console logs
- Update crawl method to support console message capture
- Modify browser_manager page creation to accept full CrawlerRunConfig
- Fix request failure text formatting

This enhancement allows debugging and monitoring of JavaScript console output during crawling operations.
2025-04-10 23:26:09 +08:00
UncleCode
7c358a1aee fix(browser): add null check for crawlerRunConfig.url
Add additional null check when accessing crawlerRunConfig.url in cookie configuration to prevent potential null pointer exceptions. Previously, the code only checked if crawlerRunConfig existed but not its url property.

Fixes potential runtime error when crawlerRunConfig.url is undefined.
2025-04-10 23:25:07 +08:00
UncleCode
108b2a8bfb Fixed console message capture for the case where the URL is a local file. Updated Docker configuration (work in progress) 2025-04-10 23:22:38 +08:00
unclecode
66ac07b4f3 feat(crawler): add network request and console message capturing
Implement comprehensive network request and console message capturing functionality:
- Add capture_network_requests and capture_console_messages config parameters
- Add network_requests and console_messages fields to models
- Implement Playwright event listeners to capture requests, responses, and console output
- Create detailed documentation and examples
- Add comprehensive tests

This feature enables deep visibility into web page activity for debugging,
security analysis, performance profiling, and API discovery in web applications.
2025-04-10 16:03:48 +08:00
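A minimal usage sketch of the capture flags and result fields named in this commit; the dict keys read from each console message are assumptions:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(
        capture_network_requests=True,
        capture_console_messages=True,
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=config)
        print(len(result.network_requests or []), "network events")
        for msg in (result.console_messages or [])[:5]:
            print(msg.get("type"), msg.get("text"))  # dict keys assumed

asyncio.run(main())
```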
UncleCode
a2061bf31e feat(crawler): add MHTML capture functionality
Add ability to capture web pages as MHTML format, which includes all page resources
in a single file. This enables complete page archival and offline viewing.

- Add capture_mhtml parameter to CrawlerRunConfig
- Implement MHTML capture using CDP in AsyncPlaywrightCrawlerStrategy
- Add mhtml field to CrawlResult and AsyncCrawlResponse models
- Add comprehensive tests for MHTML capture functionality
- Update documentation with MHTML capture details
- Add exclude_all_images option for better memory management

Breaking changes: None
2025-04-09 15:39:04 +08:00
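A short sketch of MHTML capture as described above, writing the snapshot to disk; `capture_mhtml` and `result.mhtml` are the names this commit introduces:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            "https://example.com",
            config=CrawlerRunConfig(capture_mhtml=True),
        )
        if result.mhtml:
            with open("snapshot.mhtml", "w", encoding="utf-8") as f:
                f.write(result.mhtml)

asyncio.run(main())
```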
Aravind Karnam
6f7ab9c927 fix: Revert changes to session management in AsyncHttpWebcrawler and solve the underlying issue by removing the session closure in finally block of session context. 2025-04-08 18:31:00 +05:30
UncleCode
9038e9acbd Merge branch 'main' into next 2025-04-08 17:43:42 +08:00
UncleCode
02e627e0bd fix(crawler): simplify page retrieval logic in AsyncPlaywrightCrawlerStrategy 2025-04-08 17:43:36 +08:00
UncleCode
5b66208a7e Refactor next branch 2025-04-06 18:33:09 +08:00
UncleCode
e1d9e2489c refactor(docs): update import statement in quickstart.py for improved clarity 2025-04-05 23:12:06 +08:00
UncleCode
49d904ca0a refactor(docs): enhance quickstart_examples.py with improved configuration and file handling 2025-04-05 22:57:45 +08:00
UncleCode
ca9351252a refactor(docs): update import paths and clean up example code in quickstart_examples.py 2025-04-05 22:55:56 +08:00
UncleCode
935d9d39f8 Add quickstart example set 2025-04-05 21:37:25 +08:00
UncleCode
f8213c32b9 Merge branch 'vr0.5.0.post8' 2025-04-05 21:36:17 +08:00
Aravind Karnam
7155778eac chore: move from faust-cchardet to chardet 2025-04-03 17:42:51 +05:30
Aravind Karnam
4133e5460d typo-fix: https://github.com/unclecode/crawl4ai/pull/918 2025-04-03 17:42:24 +05:30
Aravind Karnam
73fda8a6ec fix: address the PR review: https://github.com/unclecode/crawl4ai/pull/899#discussion_r2024639193 2025-04-03 13:47:13 +05:30
Aravind Karnam
9e16a4bb26 Merge next and resolve conflicts 2025-04-02 12:18:23 +05:30
Aravind
765f856ed4 Merge pull request #808 from dvschuyl/bug/parse-srcset-fix-float-width
🐛 Truncate width to integer string in srcset
2025-03-31 18:21:09 +05:30
Aravind Karnam
757e3177ed fix: https://github.com/unclecode/crawl4ai/issues/839 2025-03-31 17:10:04 +05:30
Aravind
d8357e80d2 Merge pull request #915 from maggie-edkey/css-selector
fix(#911): css_selector is not working properly
2025-03-31 13:03:35 +05:30
Aravind Karnam
ef1f0c4102 fix:https://github.com/unclecode/crawl4ai/issues/701 2025-03-31 12:43:32 +05:30
maggie.wang
1119f2f5b5 fix: https://github.com/unclecode/crawl4ai/issues/911 2025-03-31 14:05:54 +08:00
Aravind Karnam
d8cbeff386 fix: https://github.com/unclecode/crawl4ai/issues/842 2025-03-28 19:31:05 +05:30
Aravind Karnam
57e0423b3a fix:target_element should not affect link extraction. -> https://github.com/unclecode/crawl4ai/issues/902 2025-03-28 12:56:37 +05:30
Aravind Karnam
7be5427283 Merge branch 'next' into 2025-MAR-ALPHA-1 2025-03-27 12:29:32 +05:30
Aravind Karnam
585e5e5973 fix: https://github.com/unclecode/crawl4ai/issues/733 2025-03-25 15:17:59 +05:30
Aravind Karnam
e3111d0a32 fix: prevent session closing after each request to maintain connection pool. Fixes: https://github.com/unclecode/crawl4ai/issues/867 2025-03-25 13:46:55 +05:30
Aravind Karnam
2f0e217751 Chore: Add brotli as dependency to fix: https://github.com/unclecode/crawl4ai/issues/867 2025-03-25 13:44:41 +05:30
UncleCode
6eed4adc65 Merge branch 'vr0.5.0.post5' 2025-03-25 12:24:07 +08:00
Aravind Karnam
efa73257c5 Merge branch 'next' into 2025-MAR-ALPHA-1 2025-03-24 21:57:29 +05:30
Aravind Karnam
e01d1e73e1 fix: link normalisation in BestFirstStrategy 2025-03-21 17:34:13 +05:30
Aravind Karnam
471d110c5e fix: url normalisation ref: https://github.com/unclecode/crawl4ai/issues/841 2025-03-21 16:48:07 +05:30
Aravind Karnam
f89113377a fix: Move adding of visited urls to the 'visited' set, when queueing the URLs instead of after dequeuing, this is to prevent duplicate crawls. https://github.com/unclecode/crawl4ai/issues/843 2025-03-21 13:44:57 +05:30
Aravind Karnam
6740e87b4d fix: remove trailing slash when the path is empty, as it was causing duplicate crawls 2025-03-21 13:41:31 +05:30
Aravind Karnam
8b761f232b fix: improve logged url readability by decoding encoded urls 2025-03-21 13:40:23 +05:30
Aravind Karnam
e0c2a7c284 chore: remove mistakenly committed deps.txt file 2025-03-21 11:06:46 +05:30
Aravind Karnam
ac2f9ae533 fix: streamline url status logging via single entrypoint i.e. logger.url_status 2025-03-20 18:59:15 +05:30
Aravind Karnam
eedda1ae5c fix: Truncate long URLs in the middle rather than at the end, since users were confused that the same URL was being scraped several times. Also replace the status and timer labels with symbols to save space and display more of the URL 2025-03-20 18:56:19 +05:30
Aravind Karnam
8cecbec7a7 Merge branch 'next' into 2025-MAR-ALPHA-1 2025-03-20 17:07:53 +05:30
Aravind Karnam
4359b12003 docs + fix: Update example for full page screenshot & PDF export. Fix the bug Error: crawl4ai.async_webcrawler.AsyncWebCrawler.aprocess_html() got multiple values for keyword argument - for screenshot param. https://github.com/unclecode/crawl4ai/issues/822#issuecomment-2732602118 2025-03-18 17:20:24 +05:30
Aravind Karnam
529a79725e docs: remove hallucinations from docs for CrawlerRunConfig + Add chunking strategy docs in the table 2025-03-18 16:14:00 +05:30
Aravind Karnam
9109ecd8fc chore: Raise an exception with clear messaging when the body tag is missing in the fetched HTML. The message warns users to add an appropriate wait_for condition to wait until the body tag is loaded into the DOM.
fixes: https://github.com/unclecode/crawl4ai/issues/804
2025-03-18 15:26:44 +05:30
Aravind Karnam
84883be513 Merge branch 'next' into 2025-MAR-ALPHA-1 2025-03-18 15:12:21 +05:30
Aravind
79328e4292 Create main.yml (#846)
* Create main.yml

GH actions to post notifications in discord for new issues, PRs and discussions

* Add comments on bugs to the trigger
2025-03-17 20:47:57 +08:00
Aravind Karnam
c190ba816d refactor: Instead of custom validation of the question, rely on the built-in FastAPI validator, so the generated API docs also reflect this expectation correctly 2025-03-14 09:40:50 +05:30
Aravind Karnam
a3954dd4c6 refactor: Move the checking of protocol and prepending protocol inside api handlers 2025-03-14 09:39:10 +05:30
Aravind Karnam
cbb8755972 Merge branch 'next' into 2025-MAR-ALPHA-1 2025-03-13 10:42:22 +05:30
dvschuyl
341b7a5f2a 🐛 Truncate width to integer string in parse_srcset 2025-03-11 11:05:14 +01:00
UncleCode
e1b3bfe6fb Merge branch 'vr0.5.0.post4' 2025-03-06 22:46:44 +08:00
UncleCode
fd02dc782d Merge branch 'main' of https://github.com/unclecode/crawl4ai 2025-03-05 17:15:48 +08:00
UncleCode
14fe5ef873 Update config.yml 2025-03-05 14:16:24 +08:00
UncleCode
fc425023f5 Update config.yml 2025-03-05 12:51:07 +08:00
Aravind Karnam
504207faa6 docs: update text in llm-strategies.md to reflect new changes in LlmConfig 2025-03-03 19:24:44 +05:30
Aravind
f14e4a4b67 Merge pull request #776 from jawshoeadan/patch-1
Fix LiteLLM branding and link
2025-03-03 19:01:30 +05:30
Aravind Karnam
1e819cdb26 fixes: https://github.com/unclecode/crawl4ai/issues/774 2025-03-03 11:53:15 +05:30
jawshoeadan
5edfea279d Fix LiteLLM branding and link 2025-03-02 16:58:00 +01:00
Aravind Karnam
7c1705712d fix: https://github.com/unclecode/crawl4ai/issues/756 2025-03-01 18:17:11 +05:30
167 changed files with 41,682 additions and 10,396 deletions

.github/workflows/main.yml (new file, 35 changes)

@@ -0,0 +1,35 @@
name: Discord GitHub Notifications
on:
issues:
types: [opened]
issue_comment:
types: [created]
pull_request:
types: [opened]
discussion:
types: [created]
jobs:
notify-discord:
runs-on: ubuntu-latest
steps:
- name: Set webhook based on event type
id: set-webhook
run: |
if [ "${{ github.event_name }}" == "discussion" ]; then
echo "webhook=${{ secrets.DISCORD_DISCUSSIONS_WEBHOOK }}" >> $GITHUB_OUTPUT
else
echo "webhook=${{ secrets.DISCORD_WEBHOOK }}" >> $GITHUB_OUTPUT
fi
- name: Discord Notification
uses: Ilshidur/action-discord@master
env:
DISCORD_WEBHOOK: ${{ steps.set-webhook.outputs.webhook }}
with:
args: |
${{ github.event_name == 'issues' && format('📣 New issue created: **{0}** by {1} - {2}', github.event.issue.title, github.event.issue.user.login, github.event.issue.html_url) ||
github.event_name == 'issue_comment' && format('💬 New comment on issue **{0}** by {1} - {2}', github.event.issue.title, github.event.comment.user.login, github.event.comment.html_url) ||
github.event_name == 'pull_request' && format('🔄 New PR opened: **{0}** by {1} - {2}', github.event.pull_request.title, github.event.pull_request.user.login, github.event.pull_request.html_url) ||
format('💬 New discussion started: **{0}** by {1} - {2}', github.event.discussion.title, github.event.discussion.user.login, github.event.discussion.html_url) }}

.gitignore (9 changes)

@@ -257,4 +257,11 @@ continue_config.json
.private/
CLAUDE_MONITOR.md
CLAUDE.md
CLAUDE.md
tests/**/test_site
tests/**/reports
tests/**/benchmark_reports
docs/**/data
.codecat/

CHANGELOG.md

@@ -5,6 +5,112 @@ All notable changes to Crawl4AI will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.6.2] - 2025-05-02
### Added
- New `RegexExtractionStrategy` for fast pattern-based extraction without requiring LLM
- Built-in patterns for emails, URLs, phone numbers, dates, and more
- Support for custom regex patterns
- `generate_pattern` utility for LLM-assisted pattern creation (one-time use)
- Added `fit_html` as a top-level field in `CrawlResult` for optimized HTML extraction
- Added support for network response body capture in network request tracking
### Changed
- Updated documentation for no-LLM extraction strategies
- Enhanced API reference to include RegexExtractionStrategy examples and usage
- Improved HTML preprocessing with optimized performance for extraction strategies
## [0.6.1] - 2025-04-24
### Added
- New dedicated `tables` field in `CrawlResult` model for better table extraction handling
- Updated crypto_analysis_example.py to use the new tables field with backward compatibility
### Changed
- Improved playground UI in Docker deployment with better endpoint handling and UI feedback
## [0.6.0] - 2025-04-22
### Added
- Browser pooling with page pre-warming and fine-grained **geolocation, locale, and timezone** controls
- Crawler pool manager (SDK + Docker API) for smarter resource allocation
- Network & console log capture plus MHTML snapshot export
- **Table extractor**: turn HTML `<table>`s into DataFrames or CSV with one flag
- High-volume stress-test framework in `tests/memory` and API load scripts
- MCP protocol endpoints with socket & SSE support; playground UI scaffold
- Docs v2 revamp: TOC, GitHub badge, copy-code buttons, Docker API demo
- “Ask AI” helper button *(work-in-progress, shipping soon)*
- New examples: geolocation usage, network/console capture, Docker API, markdown source selection, crypto analysis
- Expanded automated test suites for browser, Docker, MCP and memory benchmarks
### Changed
- Consolidated and renamed browser strategies; legacy docker strategy modules removed
- `ProxyConfig` moved to `async_configs`
- Server migrated to pool-based crawler management
- FastAPI validators replace custom query validation
- Docker build now uses Chromium base image
- Large-scale repo tidy-up (≈36k insertions, ≈5k deletions)
### Fixed
- Async crawler session leak, duplicate-visit handling, URL normalisation
- Target-element regressions in scraping strategies
- Logged-URL readability, encoded-URL decoding, middle truncation for long URLs
- Closed issues: #701, #733, #756, #774, #804, #822, #839, #841, #842, #843, #867, #902, #911
### Removed
- Obsolete modules under `crawl4ai/browser/*` superseded by the new pooled browser layer
### Deprecated
- Old markdown generator names now alias `DefaultMarkdownGenerator` and emit warnings
---
#### Upgrade notes
1. Update any direct imports from `crawl4ai/browser/*` to the new pooled browser modules
2. If you override `AsyncPlaywrightCrawlerStrategy.get_page`, adopt the new signature
3. Rebuild Docker images to pull the new Chromium layer
4. Switch to `DefaultMarkdownGenerator` (or silence the deprecation warning)
---
`121 files changed, ≈36,223 insertions, ≈4,975 deletions`
### [Feature] 2025-04-21
- Implemented MCP protocol for machine-to-machine communication
- Added WebSocket and SSE transport for MCP server
- Exposed server endpoints via MCP protocol
- Created tests for MCP socket and SSE communication
- Enhanced Docker server with file handling and intelligent search
- Added PDF and screenshot endpoints with file saving capability
- Added JavaScript execution endpoint for page interaction
- Implemented advanced context search with BM25 and code chunking
- Added file path output support for generated assets
- Improved server endpoints and API surface
- Added intelligent context search with query filtering
- Added syntax-aware code function chunking
- Implemented efficient HTML processing pipeline
- Added support for controlling browser geolocation via new GeolocationConfig class
- Added locale and timezone configuration options to CrawlerRunConfig
- Added example script demonstrating geolocation and locale usage
- Added documentation for location-based identity features
### [Refactor] 2025-04-20
- Replaced crawler_manager.py with simpler crawler_pool.py implementation
- Added global page semaphore for hard concurrency cap
- Implemented browser pool with idle cleanup
- Added playground UI for testing and stress testing
- Updated API handlers to use pooled crawlers
- Enhanced logging levels and symbols
- Added memory tests and stress test utilities
### [Added] 2025-04-17
- Added content source selection feature for markdown generation
- New `content_source` parameter allows choosing between `cleaned_html`, `raw_html`, and `fit_html`
- Provides flexibility in how HTML content is processed before markdown conversion
- Added examples and documentation for the new feature
- Includes backward compatibility with default `cleaned_html` behavior
## Version 0.5.0.post5 (2025-03-14)
### Added

Dockerfile

@@ -1,4 +1,9 @@
-FROM python:3.10-slim
+FROM python:3.12-slim-bookworm AS build
# C4ai version
ARG C4AI_VER=0.6.0
ENV C4AI_VERSION=$C4AI_VER
LABEL c4ai.version=$C4AI_VER
# Set build arguments
ARG APP_HOME=/app
@@ -17,14 +22,14 @@ ENV PYTHONFAULTHANDLER=1 \
REDIS_HOST=localhost \
REDIS_PORT=6379
-ARG PYTHON_VERSION=3.10
+ARG PYTHON_VERSION=3.12
ARG INSTALL_TYPE=default
ARG ENABLE_GPU=false
ARG TARGETARCH
LABEL maintainer="unclecode"
LABEL description="🔥🕷️ Crawl4AI: Open-source LLM Friendly Web Crawler & scraper"
LABEL version="1.0"
LABEL version="1.0"
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
@@ -38,6 +43,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
libjpeg-dev \
redis-server \
supervisor \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y --no-install-recommends \
@@ -62,11 +68,16 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
libcairo2 \
libasound2 \
libatspi2.0-0 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get dist-upgrade -y \
&& rm -rf /var/lib/apt/lists/*
RUN if [ "$ENABLE_GPU" = "true" ] && [ "$TARGETARCH" = "amd64" ] ; then \
apt-get update && apt-get install -y --no-install-recommends \
nvidia-cuda-toolkit \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* ; \
else \
echo "Skipping NVIDIA CUDA Toolkit installation (unsupported platform or GPU disabled)"; \
@@ -76,16 +87,24 @@ RUN if [ "$TARGETARCH" = "arm64" ]; then \
echo "🦾 Installing ARM-specific optimizations"; \
apt-get update && apt-get install -y --no-install-recommends \
libopenblas-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*; \
elif [ "$TARGETARCH" = "amd64" ]; then \
echo "🖥️ Installing AMD64-specific optimizations"; \
apt-get update && apt-get install -y --no-install-recommends \
libomp-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*; \
else \
echo "Skipping platform-specific optimizations (unsupported platform)"; \
fi
# Create a non-root user and group
RUN groupadd -r appuser && useradd --no-log-init -r -g appuser appuser
# Create and set permissions for appuser home directory
RUN mkdir -p /home/appuser && chown -R appuser:appuser /home/appuser
WORKDIR ${APP_HOME}
RUN echo '#!/bin/bash\n\
@@ -103,6 +122,7 @@ fi' > /tmp/install.sh && chmod +x /tmp/install.sh
COPY . /tmp/project/
# Copy supervisor config first (might need root later, but okay for now)
COPY deploy/docker/supervisord.conf .
COPY deploy/docker/requirements.txt .
@@ -131,16 +151,34 @@ RUN if [ "$INSTALL_TYPE" = "all" ] ; then \
else \
pip install "/tmp/project" ; \
fi
RUN pip install --no-cache-dir --upgrade pip && \
/tmp/install.sh && \
python -c "import crawl4ai; print('✅ crawl4ai is ready to rock!')" && \
python -c "from playwright.sync_api import sync_playwright; print('✅ Playwright is feeling dramatic!')"
RUN playwright install --with-deps chromium
RUN crawl4ai-setup
RUN playwright install --with-deps
RUN mkdir -p /home/appuser/.cache/ms-playwright \
&& cp -r /root/.cache/ms-playwright/chromium-* /home/appuser/.cache/ms-playwright/ \
&& chown -R appuser:appuser /home/appuser/.cache/ms-playwright
RUN crawl4ai-doctor
# Copy application code
COPY deploy/docker/* ${APP_HOME}/
# copy the playground + any future static assets
COPY deploy/docker/static ${APP_HOME}/static
# Change ownership of the application directory to the non-root user
RUN chown -R appuser:appuser ${APP_HOME}
# give permissions to redis persistence dirs if used
RUN mkdir -p /var/lib/redis /var/log/redis && chown -R appuser:appuser /var/lib/redis /var/log/redis
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD bash -c '\
MEM=$(free -m | awk "/^Mem:/{print \$2}"); \
@@ -149,8 +187,14 @@ HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
exit 1; \
fi && \
redis-cli ping > /dev/null && \
-curl -f http://localhost:8000/health || exit 1'
+curl -f http://localhost:11235/health || exit 1'
EXPOSE 6379
CMD ["supervisord", "-c", "supervisord.conf"]
# Switch to the non-root user before starting the application
USER appuser
# Set environment variables to production
ENV PYTHON_ENV=production
# Start the application using supervisord
CMD ["supervisord", "-c", "supervisord.conf"]

JOURNAL.md (new file, 339 changes)

@@ -0,0 +1,339 @@
# Development Journal
This journal tracks significant feature additions, bug fixes, and architectural decisions in the crawl4ai project. It serves as both documentation and a historical record of the project's evolution.
## [2025-04-17] Added Content Source Selection for Markdown Generation
**Feature:** Configurable content source for markdown generation
**Changes Made:**
1. Added `content_source: str = "cleaned_html"` parameter to `MarkdownGenerationStrategy` class
2. Updated `DefaultMarkdownGenerator` to accept and pass the content source parameter
3. Renamed the `cleaned_html` parameter to `input_html` in the `generate_markdown` method
4. Modified `AsyncWebCrawler.aprocess_html` to select the appropriate HTML source based on the generator's config
5. Added `preprocess_html_for_schema` import in `async_webcrawler.py`
**Implementation Details:**
- Added a new `content_source` parameter to specify which HTML input to use for markdown generation
- Options include: "cleaned_html" (default), "raw_html", and "fit_html"
- Used a dictionary dispatch pattern in `aprocess_html` to select the appropriate HTML source
- Added proper error handling with fallback to cleaned_html if content source selection fails
- Ensured backward compatibility by defaulting to "cleaned_html" option
**Files Modified:**
- `crawl4ai/markdown_generation_strategy.py`: Added content_source parameter and updated the method signature
- `crawl4ai/async_webcrawler.py`: Added HTML source selection logic and updated imports
**Examples:**
- Created `docs/examples/content_source_example.py` demonstrating how to use the new parameter
**Challenges:**
- Maintaining backward compatibility while reorganizing the parameter flow
- Ensuring proper error handling for all content source options
- Making the change with minimal code modifications
**Why This Feature:**
The content source selection feature allows users to choose which HTML content to use as input for markdown generation:
1. "cleaned_html" - Uses the post-processed HTML after scraping strategy (original behavior)
2. "raw_html" - Uses the original raw HTML directly from the web page
3. "fit_html" - Uses the preprocessed HTML optimized for schema extraction
This feature provides greater flexibility in how users generate markdown, enabling them to:
- Capture more detailed content from the original HTML when needed
- Use schema-optimized HTML when working with structured data
- Choose the approach that best suits their specific use case
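A generic sketch of the dictionary-dispatch pattern with fallback described in this entry; the function is a simplified stand-in for the logic inside `aprocess_html`, not the actual implementation:

```python
def select_html_source(content_source, cleaned_html, raw_html, fit_html):
    """Pick the HTML input for markdown generation, falling back to cleaned_html."""
    dispatch = {
        "cleaned_html": lambda: cleaned_html,
        "raw_html": lambda: raw_html,
        "fit_html": lambda: fit_html,
    }
    try:
        # Unknown source names fall back to the default behaviour
        return dispatch.get(content_source, dispatch["cleaned_html"])()
    except Exception:
        return cleaned_html  # documented error-handling fallback
```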
## [2025-04-17] Implemented High Volume Stress Testing Solution for SDK
**Feature:** Comprehensive stress testing framework using `arun_many` and the dispatcher system to evaluate performance, concurrency handling, and identify potential issues under high-volume crawling scenarios.
**Changes Made:**
1. Created a dedicated stress testing framework in the `benchmarking/` (or similar) directory.
2. Implemented local test site generation (`SiteGenerator`) with configurable heavy HTML pages.
3. Added basic memory usage tracking (`SimpleMemoryTracker`) using platform-specific commands (avoiding `psutil` dependency for this specific test).
4. Utilized `CrawlerMonitor` from `crawl4ai` for rich terminal UI and real-time monitoring of test progress and dispatcher activity.
5. Implemented detailed result summary saving (JSON) and memory sample logging (CSV).
6. Developed `run_benchmark.py` to orchestrate tests with predefined configurations.
7. Created `run_all.sh` as a simple wrapper for `run_benchmark.py`.
**Implementation Details:**
- Generates a local test site with configurable pages containing heavy text and image content.
- Uses Python's built-in `http.server` for local serving, minimizing network variance.
- Leverages `crawl4ai`'s `arun_many` method for processing URLs.
- Utilizes `MemoryAdaptiveDispatcher` to manage concurrency via the `max_sessions` parameter (note: memory adaptation features require `psutil`, not used by `SimpleMemoryTracker`).
- Tracks memory usage via `SimpleMemoryTracker`, recording samples throughout test execution to a CSV file.
- Uses `CrawlerMonitor` (which uses the `rich` library) for clear terminal visualization and progress reporting directly from the dispatcher.
- Stores detailed final metrics in a JSON summary file.
**Files Created/Updated:**
- `stress_test_sdk.py`: Main stress testing implementation using `arun_many`.
- `benchmark_report.py`: (Assumed) Report generator for comparing test results.
- `run_benchmark.py`: Test runner script with predefined configurations.
- `run_all.sh`: Simple bash script wrapper for `run_benchmark.py`.
- `USAGE.md`: Comprehensive documentation on usage and interpretation (updated).
**Testing Approach:**
- Creates a controlled, reproducible test environment with a local HTTP server.
- Processes URLs using `arun_many`, allowing the dispatcher to manage concurrency up to `max_sessions`.
- Optionally logs per-batch summaries (when not in streaming mode) after processing chunks.
- Supports different test sizes via `run_benchmark.py` configurations.
- Records memory samples via platform commands for basic trend analysis.
- Includes cleanup functionality for the test environment.
**Challenges:**
- Ensuring proper cleanup of HTTP server processes.
- Getting reliable memory tracking across platforms without adding heavy dependencies (`psutil`) to this specific test script.
- Designing `run_benchmark.py` to correctly pass arguments to `stress_test_sdk.py`.
**Why This Feature:**
The high volume stress testing solution addresses critical needs for ensuring Crawl4AI's `arun_many` reliability:
1. Provides a reproducible way to evaluate performance under concurrent load.
2. Allows testing the dispatcher's concurrency control (`max_session_permit`) and queue management.
3. Enables performance tuning by observing throughput (`URLs/sec`) under different `max_sessions` settings.
4. Creates a controlled environment for testing `arun_many` behavior.
5. Supports continuous integration by providing deterministic test conditions for `arun_many`.
**Design Decisions:**
- Chose local site generation for reproducibility and isolation from network issues.
- Utilized the built-in `CrawlerMonitor` for real-time feedback, leveraging its `rich` integration.
- Implemented optional per-batch logging in `stress_test_sdk.py` (when not streaming) to provide chunk-level summaries alongside the continuous monitor.
- Adopted `arun_many` with a `MemoryAdaptiveDispatcher` as the core mechanism for parallel execution, reflecting the intended SDK usage.
- Created `run_benchmark.py` to simplify running standard test configurations.
- Used `SimpleMemoryTracker` to provide basic memory insights without requiring `psutil` for this particular test runner.
**Future Enhancements to Consider:**
- Create a separate test variant that *does* use `psutil` to specifically stress the memory-adaptive features of the dispatcher.
- Add support for generated JavaScript content.
- Add support for Docker-based testing with explicit memory limits.
- Enhance `benchmark_report.py` to provide more sophisticated analysis of performance and memory trends from the generated JSON/CSV files.
---
## [2025-04-17] Refined Stress Testing System Parameters and Execution
**Changes Made:**
1. Corrected `run_benchmark.py` and `stress_test_sdk.py` to use `--max-sessions` instead of the incorrect `--workers` parameter, accurately reflecting dispatcher configuration.
2. Updated `run_benchmark.py` argument handling to correctly pass all relevant custom parameters (including `--stream`, `--monitor-mode`, etc.) to `stress_test_sdk.py`.
3. (Assuming changes in `benchmark_report.py`) Applied dark theme to benchmark reports for better readability.
4. (Assuming changes in `benchmark_report.py`) Improved visualization code to eliminate matplotlib warnings.
5. Updated `run_benchmark.py` to provide clickable `file://` links to generated reports in the terminal output.
6. Updated `USAGE.md` with comprehensive parameter descriptions reflecting the final script arguments.
7. Updated `run_all.sh` wrapper to correctly invoke `run_benchmark.py` with flexible arguments.
**Details of Changes:**
1. **Parameter Correction (`--max-sessions`)**:
* Identified the fundamental misunderstanding where `--workers` was used incorrectly.
* Refactored `stress_test_sdk.py` to accept `--max-sessions` and configure the `MemoryAdaptiveDispatcher`'s `max_session_permit` accordingly.
* Updated `run_benchmark.py` argument parsing and command construction to use `--max-sessions`.
* Updated `TEST_CONFIGS` in `run_benchmark.py` to use `max_sessions`.
2. **Argument Handling (`run_benchmark.py`)**:
* Improved logic to collect all command-line arguments provided to `run_benchmark.py`.
* Ensured all relevant arguments (like `--stream`, `--monitor-mode`, `--port`, `--use-rate-limiter`, etc.) are correctly forwarded when calling `stress_test_sdk.py` as a subprocess.
3. **Dark Theme & Visualization Fixes (Assumed in `benchmark_report.py`)**:
* (Describes changes assumed to be made in the separate reporting script).
4. **Clickable Links (`run_benchmark.py`)**:
* Added logic to find the latest HTML report and PNG chart in the `benchmark_reports` directory after `benchmark_report.py` runs.
* Used `pathlib` to generate correct `file://` URLs for terminal output.
5. **Documentation Improvements (`USAGE.md`)**:
* Rewrote sections to explain `arun_many`, dispatchers, and `--max-sessions`.
* Updated parameter tables for all scripts (`stress_test_sdk.py`, `run_benchmark.py`).
* Clarified the difference between batch and streaming modes and their effect on logging.
* Updated examples to use correct arguments.
**Files Modified:**
- `stress_test_sdk.py`: Changed `--workers` to `--max-sessions`, added new arguments, used `arun_many`.
- `run_benchmark.py`: Changed argument handling, updated configs, calls `stress_test_sdk.py`.
- `run_all.sh`: Updated to call `run_benchmark.py` correctly.
- `USAGE.md`: Updated documentation extensively.
- `benchmark_report.py`: (Assumed modifications for dark theme and viz fixes).
**Testing:**
- Verified that `--max-sessions` correctly limits concurrency via the `CrawlerMonitor` output.
- Confirmed that custom arguments passed to `run_benchmark.py` are forwarded to `stress_test_sdk.py`.
- Validated clickable links work in supporting terminals.
- Ensured documentation matches the final script parameters and behavior.
**Why These Changes:**
These refinements correct the fundamental approach of the stress test to align with `crawl4ai`'s actual architecture and intended usage:
1. Ensures the test evaluates the correct components (`arun_many`, `MemoryAdaptiveDispatcher`).
2. Makes test configurations more accurate and flexible.
3. Improves the usability of the testing framework through better argument handling and documentation.
**Future Enhancements to Consider:**
- Add support for generated JavaScript content to test JS rendering performance
- Implement more sophisticated memory analysis like generational garbage collection tracking
- Add support for Docker-based testing with memory limits to force OOM conditions
- Create visualization tools for analyzing memory usage patterns across test runs
- Add benchmark comparisons between different crawler versions or configurations
## [2025-04-17] Fixed Issues in Stress Testing System
**Changes Made:**
1. Fixed custom parameter handling in run_benchmark.py
2. Applied dark theme to benchmark reports for better readability
3. Improved visualization code to eliminate matplotlib warnings
4. Added clickable links to generated reports in terminal output
5. Enhanced documentation with comprehensive parameter descriptions
**Details of Changes:**
1. **Custom Parameter Handling Fix**
- Identified bug where custom URL count was being ignored in run_benchmark.py
- Rewrote argument handling to use a custom args dictionary
- Properly passed parameters to the test_simple_stress.py command
- Added better UI indication of custom parameters in use
2. **Dark Theme Implementation**
- Added complete dark theme to HTML benchmark reports
- Applied dark styling to all visualization components
- Used Nord-inspired color palette for charts and graphs
- Improved contrast and readability for data visualization
- Updated text colors and backgrounds for better eye comfort
3. **Matplotlib Warning Fixes**
- Resolved warnings related to improper use of set_xticklabels()
- Implemented correct x-axis positioning for bar charts
- Ensured proper alignment of bar labels and data points
- Updated plotting code to use modern matplotlib practices
4. **Documentation Improvements**
- Created comprehensive USAGE.md with detailed instructions
- Added parameter documentation for all scripts
- Included examples for all common use cases
- Provided detailed explanations for interpreting results
- Added troubleshooting guide for common issues
**Files Modified:**
- `tests/memory/run_benchmark.py`: Fixed custom parameter handling
- `tests/memory/benchmark_report.py`: Added dark theme and fixed visualization warnings
- `tests/memory/run_all.sh`: Added clickable links to reports
- `tests/memory/USAGE.md`: Created comprehensive documentation
**Testing:**
- Verified that custom URL counts are now correctly used
- Confirmed dark theme is properly applied to all report elements
- Checked that matplotlib warnings are no longer appearing
- Validated clickable links to reports work in terminals that support them
**Why These Changes:**
These improvements address several usability issues with the stress testing system:
1. Better parameter handling ensures test configurations work as expected
2. Dark theme reduces eye strain during extended test review sessions
3. Fixing visualization warnings improves code quality and output clarity
4. Enhanced documentation makes the system more accessible for future use
**Future Enhancements:**
- Add additional visualization options for different types of analysis
- Implement theme toggle to support both light and dark preferences
- Add export options for embedding reports in other documentation
- Create dedicated CI/CD integration templates for automated testing
## [2025-04-09] Added MHTML Capture Feature
**Feature:** MHTML snapshot capture of crawled pages
**Changes Made:**
1. Added `capture_mhtml: bool = False` parameter to `CrawlerRunConfig` class
2. Added `mhtml: Optional[str] = None` field to `CrawlResult` model
3. Added `mhtml_data: Optional[str] = None` field to `AsyncCrawlResponse` class
4. Implemented `capture_mhtml()` method in `AsyncPlaywrightCrawlerStrategy` class to capture MHTML via CDP
5. Modified the crawler to capture MHTML when enabled and pass it to the result
**Implementation Details:**
- MHTML capture uses Chrome DevTools Protocol (CDP) via Playwright's CDP session API
- The implementation waits for page to fully load before capturing MHTML content
- Enhanced waiting for JavaScript content with requestAnimationFrame for better JS content capture
- We ensure all browser resources are properly cleaned up after capture
**Files Modified:**
- `crawl4ai/models.py`: Added the mhtml field to CrawlResult
- `crawl4ai/async_configs.py`: Added capture_mhtml parameter to CrawlerRunConfig
- `crawl4ai/async_crawler_strategy.py`: Implemented MHTML capture logic
- `crawl4ai/async_webcrawler.py`: Added mapping from AsyncCrawlResponse.mhtml_data to CrawlResult.mhtml
**Testing:**
- Created comprehensive tests in `tests/20241401/test_mhtml.py` covering:
- Capturing MHTML when enabled
- Ensuring mhtml is None when disabled explicitly
- Ensuring mhtml is None by default
- Capturing MHTML on JavaScript-enabled pages
**Challenges:**
- Had to improve page loading detection to ensure JavaScript content was fully rendered
- Tests needed to be run independently due to Playwright browser instance management
- Modified test expected content to match actual MHTML output
**Why This Feature:**
The MHTML capture feature allows users to capture complete web pages including all resources (CSS, images, etc.) in a single file. This is valuable for:
1. Offline viewing of captured pages
2. Creating permanent snapshots of web content for archival
3. Ensuring consistent content for later analysis, even if the original site changes
**Future Enhancements to Consider:**
- Add option to save MHTML to file
- Support for filtering what resources get included in MHTML
- Add support for specifying MHTML capture options
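A minimal sketch of the CDP-based capture approach this entry describes, using Playwright's async API and the `Page.captureSnapshot` CDP command:

```python
from playwright.async_api import async_playwright

async def capture_mhtml(url: str) -> str:
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        await page.goto(url, wait_until="networkidle")
        # MHTML capture is only exposed through CDP, not the Playwright API
        cdp = await page.context.new_cdp_session(page)
        snapshot = await cdp.send("Page.captureSnapshot", {"format": "mhtml"})
        await browser.close()
        return snapshot["data"]
```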
## [2025-04-10] Added Network Request and Console Message Capturing
**Feature:** Comprehensive capturing of network requests/responses and browser console messages during crawling
**Changes Made:**
1. Added `capture_network_requests: bool = False` and `capture_console_messages: bool = False` parameters to `CrawlerRunConfig` class
2. Added `network_requests: Optional[List[Dict[str, Any]]] = None` and `console_messages: Optional[List[Dict[str, Any]]] = None` fields to both `AsyncCrawlResponse` and `CrawlResult` models
3. Implemented event listeners in `AsyncPlaywrightCrawlerStrategy._crawl_web()` to capture browser network events and console messages
4. Added proper event listener cleanup in the finally block to prevent resource leaks
5. Modified the crawler flow to pass captured data from AsyncCrawlResponse to CrawlResult
**Implementation Details:**
- Network capture uses Playwright event listeners (`request`, `response`, and `requestfailed`) to record all network activity
- Console capture uses Playwright event listeners (`console` and `pageerror`) to record console messages and errors
- Each network event includes metadata like URL, headers, status, and timing information
- Each console message includes type, text content, and source location when available
- All captured events include timestamps for chronological analysis
- Error handling ensures even failed capture attempts won't crash the main crawling process
**Files Modified:**
- `crawl4ai/models.py`: Added new fields to AsyncCrawlResponse and CrawlResult
- `crawl4ai/async_configs.py`: Added new configuration parameters to CrawlerRunConfig
- `crawl4ai/async_crawler_strategy.py`: Implemented capture logic using event listeners
- `crawl4ai/async_webcrawler.py`: Added data transfer from AsyncCrawlResponse to CrawlResult
**Documentation:**
- Created detailed documentation in `docs/md_v2/advanced/network-console-capture.md`
- Added feature to site navigation in `mkdocs.yml`
- Updated CrawlResult documentation in `docs/md_v2/api/crawl-result.md`
- Created comprehensive example in `docs/examples/network_console_capture_example.py`
**Testing:**
- Created `tests/general/test_network_console_capture.py` with tests for:
- Verifying capture is disabled by default
- Testing network request capturing
- Testing console message capturing
- Ensuring both capture types can be enabled simultaneously
- Checking correct content is captured in expected formats
**Challenges:**
- Initial implementation had synchronous/asynchronous mismatches in event handlers
- Needed to correct property access vs. method calls in the handlers (Playwright's Python API exposes e.g. `msg.type` as a property, not a method)
- Required careful cleanup of event listeners to prevent memory leaks
**Why This Feature:**
The network and console capture feature provides deep visibility into web page activity, enabling:
1. Debugging complex web applications by seeing all network requests and errors
2. Security analysis to detect unexpected third-party requests and data flows
3. Performance profiling to identify slow-loading resources
4. API discovery in single-page applications
5. Comprehensive analysis of web application behavior
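For illustration, a minimal usage sketch enabling both flags (the target URL is a placeholder; the dictionary keys follow the event metadata described above):
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(
        capture_network_requests=True,
        capture_console_messages=True,
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://example.com", config=config)
        # Each network event records event_type, url, timestamp, etc.
        for event in result.network_requests or []:
            print(event["event_type"], event["url"])
        # Each console entry records type, text, and timestamp
        for msg in result.console_messages or []:
            print(msg["type"], msg["text"])

asyncio.run(main())
```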
**Future Enhancements to Consider:**
- Option to filter captured events by type, domain, or content
- Support for capturing response bodies (with size limits)
- Aggregate statistics calculation for performance metrics
- Integration with visualization tools for network waterfall analysis
- Exporting captures in HAR format for use with external tools

README.md
@@ -21,9 +21,9 @@
Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant community. It delivers blazing-fast, AI-ready web crawling tailored for LLMs, AI agents, and data pipelines. Open source, flexible, and built for real-time performance, Crawl4AI empowers developers with unmatched speed, precision, and deployment ease.
[✨ Check out latest update v0.5.0](#-recent-updates)
[✨ Check out latest update v0.6.0](#-recent-updates)
🎉 **Version 0.5.0 is out!** This major release introduces Deep Crawling with BFS/DFS/BestFirst strategies, Memory-Adaptive Dispatcher, Multiple Crawling Strategies (Playwright and HTTP), Docker Deployment with FastAPI, Command-Line Interface (CLI), and more! [Read the release notes →](https://docs.crawl4ai.com/blog)
🎉 **Version 0.6.0 is now available!** This release candidate introduces World-aware Crawling with geolocation and locale settings, Table-to-DataFrame extraction, Browser pooling with pre-warming, Network and console traffic capture, MCP integration for AI tools, and a completely revamped Docker deployment! [Read the release notes →](https://docs.crawl4ai.com/blog)
<details>
<summary>🤓 <strong>My Personal Story</strong></summary>
@@ -253,24 +253,29 @@ pip install -e ".[all]" # Install all optional features
<details>
<summary>🐳 <strong>Docker Deployment</strong></summary>
> 🚀 **Major Changes Coming!** We're developing a completely new Docker implementation that will make deployment even more efficient and seamless. The current Docker setup is being deprecated in favor of this new solution.
### Current Docker Support
The existing Docker implementation is being deprecated and will be replaced soon. If you still need to use Docker with the current version:
- 📚 [Deprecated Docker Setup](./docs/deprecated/docker-deployment.md) - Instructions for the current Docker implementation
- ⚠️ Note: This setup will be replaced in the next major release
### What's Coming Next?
Our new Docker implementation will bring:
- Improved performance and resource efficiency
- Streamlined deployment process
- Better integration with Crawl4AI features
- Enhanced scalability options
Stay connected with our [GitHub repository](https://github.com/unclecode/crawl4ai) for updates!
> 🚀 **Now Available!** Our completely redesigned Docker implementation is here! This new solution makes deployment more efficient and seamless than ever.
### New Docker Features
The new Docker implementation includes:
- **Browser pooling** with page pre-warming for faster response times
- **Interactive playground** to test and generate request code
- **MCP integration** for direct connection to AI tools like Claude Code
- **Comprehensive API endpoints** including HTML extraction, screenshots, PDF generation, and JavaScript execution
- **Multi-architecture support** with automatic detection (AMD64/ARM64)
- **Optimized resources** with improved memory management
### Getting Started
```bash
# Pull and run the latest release candidate
docker pull unclecode/crawl4ai:0.6.0-rN # Use your favorite revision number
docker run -d -p 11235:11235 --name crawl4ai --shm-size=1g unclecode/crawl4ai:0.6.0-rN # Use your favorite revision number
# Visit the playground at http://localhost:11235/playground
```
For complete documentation, see our [Docker Deployment Guide](https://docs.crawl4ai.com/core/docker-deployment/).
</details>
@@ -500,31 +505,92 @@ async def test_news_crawl():
## ✨ Recent Updates
### Version 0.5.0 Major Release Highlights
### Version 0.6.0 Release Highlights
- **🚀 Deep Crawling System**: Explore websites beyond initial URLs with three strategies:
- **BFS Strategy**: Breadth-first search explores websites level by level
- **DFS Strategy**: Depth-first search explores each branch deeply before backtracking
- **BestFirst Strategy**: Uses scoring functions to prioritize which URLs to crawl next
- **Page Limiting**: Control the maximum number of pages to crawl with `max_pages` parameter
- **Score Thresholds**: Filter URLs based on relevance scores
- **⚡ Memory-Adaptive Dispatcher**: Dynamically adjusts concurrency based on system memory with built-in rate limiting
- **🔄 Multiple Crawling Strategies**:
- **AsyncPlaywrightCrawlerStrategy**: Browser-based crawling with JavaScript support (Default)
- **AsyncHTTPCrawlerStrategy**: Fast, lightweight HTTP-only crawler for simple tasks
- **🐳 Docker Deployment**: Easy deployment with FastAPI server and streaming/non-streaming endpoints
- **💻 Command-Line Interface**: New `crwl` CLI provides convenient terminal access to all features with intuitive commands and configuration options
- **👤 Browser Profiler**: Create and manage persistent browser profiles to save authentication states, cookies, and settings for seamless crawling of protected content
- **🧠 Crawl4AI Coding Assistant**: AI-powered coding assistant to answer your question for Crawl4ai, and generate proper code for crawling.
- **🏎️ LXML Scraping Mode**: Fast HTML parsing using the `lxml` library for improved performance
- **🌐 Proxy Rotation**: Built-in support for proxy switching with `RoundRobinProxyStrategy`
- **🌎 World-aware Crawling**: Set geolocation, language, and timezone for authentic locale-specific content:
```python
crun_cfg = CrawlerRunConfig(
url="https://browserleaks.com/geo", # test page that shows your location
locale="en-US", # Accept-Language & UI locale
timezone_id="America/Los_Angeles", # JS Date()/Intl timezone
geolocation=GeolocationConfig( # override GPS coords
latitude=34.0522,
longitude=-118.2437,
accuracy=10.0,
)
)
```
- **📊 Table-to-DataFrame Extraction**: Extract HTML tables directly to CSV or pandas DataFrames:
```python
crawler = AsyncWebCrawler(config=browser_config)
await crawler.start()
try:
# Set up scraping parameters
crawl_config = CrawlerRunConfig(
table_score_threshold=8, # Strict table detection
)
# Execute market data extraction
results: List[CrawlResult] = await crawler.arun(
url="https://coinmarketcap.com/?page=1", config=crawl_config
)
# Process results
raw_df = pd.DataFrame()
for result in results:
if result.success and result.media["tables"]:
raw_df = pd.DataFrame(
result.media["tables"][0]["rows"],
columns=result.media["tables"][0]["headers"],
)
break
print(raw_df.head())
finally:
await crawler.stop()
```
- **🚀 Browser Pooling**: Pages launch hot with pre-warmed browser instances for lower latency and memory usage
- **🕸️ Network and Console Capture**: Full traffic logs and MHTML snapshots for debugging:
```python
crawler_config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True,
capture_mhtml=True
)
```
- **🔌 MCP Integration**: Connect to AI tools like Claude Code through the Model Context Protocol
```bash
# Add Crawl4AI to Claude Code
claude mcp add --transport sse c4ai-sse http://localhost:11235/mcp/sse
```
- **🖥️ Interactive Playground**: Test configurations and generate API requests with the built-in web interface at `http://localhost:11235/playground`
- **🐳 Revamped Docker Deployment**: Streamlined multi-architecture Docker image with improved resource efficiency
- **📱 Multi-stage Build System**: Optimized Dockerfile with platform-specific performance enhancements
Read the full details in our [0.6.0 Release Notes](https://docs.crawl4ai.com/blog/releases/0.6.0.html) or check the [CHANGELOG](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md).
### Previous Version: 0.5.0 Major Release Highlights
- **🚀 Deep Crawling System**: Explore websites beyond initial URLs with BFS, DFS, and BestFirst strategies
- **⚡ Memory-Adaptive Dispatcher**: Dynamically adjusts concurrency based on system memory
- **🔄 Multiple Crawling Strategies**: Browser-based and lightweight HTTP-only crawlers
- **💻 Command-Line Interface**: New `crwl` CLI provides convenient terminal access
- **👤 Browser Profiler**: Create and manage persistent browser profiles
- **🧠 Crawl4AI Coding Assistant**: AI-powered coding assistant
- **🏎️ LXML Scraping Mode**: Fast HTML parsing using the `lxml` library
- **🌐 Proxy Rotation**: Built-in support for proxy switching
- **🤖 LLM Content Filter**: Intelligent markdown generation using LLMs
- **📄 PDF Processing**: Extract text, images, and metadata from PDF files
- **🔗 URL Redirection Tracking**: Automatically follow and record HTTP redirects
- **🤖 LLM Schema Generation**: Easily create extraction schemas with LLM assistance
- **🔍 robots.txt Compliance**: Respect website crawling rules
Read the full details in our [0.5.0 Release Notes](https://docs.crawl4ai.com/blog/releases/0.5.0.html) or check the [CHANGELOG](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md).
Read the full details in our [0.5.0 Release Notes](https://docs.crawl4ai.com/blog/releases/0.5.0.html).
## Version Numbering in Crawl4AI
@@ -540,7 +606,7 @@ We use different suffixes to indicate development stages:
- `dev` (0.4.3dev1): Development versions, unstable
- `a` (0.4.3a1): Alpha releases, experimental features
- `b` (0.4.3b1): Beta releases, feature complete but needs testing
- `rc` (0.4.3rc1): Release candidates, potential final version
- `rc` (0.4.3): Release candidates, potential final version
#### Installation
- Regular installation (stable version):

crawl4ai/__init__.py

@@ -2,13 +2,7 @@
import warnings
from .async_webcrawler import AsyncWebCrawler, CacheMode
from .async_configs import BrowserConfig, CrawlerRunConfig, HTTPCrawlerConfig, LLMConfig
from .pipeline.pipeline import (
Pipeline,
create_pipeline,
)
from .pipeline.crawler import Crawler
from .async_configs import BrowserConfig, CrawlerRunConfig, HTTPCrawlerConfig, LLMConfig, ProxyConfig, GeolocationConfig
from .content_scraping_strategy import (
ContentScrapingStrategy,
@@ -29,7 +23,8 @@ from .extraction_strategy import (
CosineStrategy,
JsonCssExtractionStrategy,
JsonXPathExtractionStrategy,
JsonLxmlExtractionStrategy
JsonLxmlExtractionStrategy,
RegexExtractionStrategy
)
from .chunking_strategy import ChunkingStrategy, RegexChunking
from .markdown_generation_strategy import DefaultMarkdownGenerator
@@ -71,19 +66,13 @@ from .deep_crawling import (
DeepCrawlDecorator,
)
from .async_crawler_strategy import AsyncPlaywrightCrawlerStrategy, AsyncHTTPCrawlerStrategy
__all__ = [
"Pipeline",
"AsyncPlaywrightCrawlerStrategy",
"AsyncHTTPCrawlerStrategy",
"create_pipeline",
"Crawler",
"AsyncLoggerBase",
"AsyncLogger",
"AsyncWebCrawler",
"BrowserProfiler",
"LLMConfig",
"GeolocationConfig",
"DeepCrawlStrategy",
"BFSDeepCrawlStrategy",
"BestFirstCrawlingStrategy",
@@ -117,6 +106,7 @@ __all__ = [
"JsonCssExtractionStrategy",
"JsonXPathExtractionStrategy",
"JsonLxmlExtractionStrategy",
"RegexExtractionStrategy",
"ChunkingStrategy",
"RegexChunking",
"DefaultMarkdownGenerator",
@@ -134,6 +124,7 @@ __all__ = [
"Crawl4aiDockerClient",
"ProxyRotationStrategy",
"RoundRobinProxyStrategy",
"ProxyConfig"
]

crawl4ai/__version__.py

@@ -1,2 +1,3 @@
# crawl4ai/_version.py
__version__ = "0.5.0.post8"
__version__ = "0.6.3"

crawl4ai/async_configs.py

@@ -5,6 +5,7 @@ from .config import (
MIN_WORD_THRESHOLD,
IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD,
PROVIDER_MODELS,
PROVIDER_MODELS_PREFIXES,
SCREENSHOT_HEIGHT_TRESHOLD,
PAGE_TIMEOUT,
IMAGE_SCORE_THRESHOLD,
@@ -27,11 +28,8 @@ import inspect
from typing import Any, Dict, Optional
from enum import Enum
from .proxy_strategy import ProxyConfig
try:
from .browser.models import DockerConfig
except ImportError:
DockerConfig = None
# from .proxy_strategy import ProxyConfig
def to_serializable_dict(obj: Any, ignore_default_value : bool = False) -> Dict:
@@ -122,23 +120,25 @@ def from_serializable_dict(data: Any) -> Any:
# Handle typed data
if isinstance(data, dict) and "type" in data:
# Handle plain dictionaries
if data["type"] == "dict":
if data["type"] == "dict" and "value" in data:
return {k: from_serializable_dict(v) for k, v in data["value"].items()}
# Import from crawl4ai for class instances
import crawl4ai
cls = getattr(crawl4ai, data["type"])
if hasattr(crawl4ai, data["type"]):
cls = getattr(crawl4ai, data["type"])
# Handle Enum
if issubclass(cls, Enum):
return cls(data["params"])
# Handle Enum
if issubclass(cls, Enum):
return cls(data["params"])
# Handle class instances
constructor_args = {
k: from_serializable_dict(v) for k, v in data["params"].items()
}
return cls(**constructor_args)
if "params" in data:
# Handle class instances
constructor_args = {
k: from_serializable_dict(v) for k, v in data["params"].items()
}
return cls(**constructor_args)
# Handle lists
if isinstance(data, list):
@@ -159,6 +159,166 @@ def is_empty_value(value: Any) -> bool:
return True
return False
class GeolocationConfig:
def __init__(
self,
latitude: float,
longitude: float,
accuracy: Optional[float] = 0.0
):
"""Configuration class for geolocation settings.
Args:
latitude: Latitude coordinate (e.g., 37.7749)
longitude: Longitude coordinate (e.g., -122.4194)
accuracy: Accuracy in meters. Default: 0.0
"""
self.latitude = latitude
self.longitude = longitude
self.accuracy = accuracy
@staticmethod
def from_dict(geo_dict: Dict) -> "GeolocationConfig":
"""Create a GeolocationConfig from a dictionary."""
return GeolocationConfig(
latitude=geo_dict.get("latitude"),
longitude=geo_dict.get("longitude"),
accuracy=geo_dict.get("accuracy", 0.0)
)
def to_dict(self) -> Dict:
"""Convert to dictionary representation."""
return {
"latitude": self.latitude,
"longitude": self.longitude,
"accuracy": self.accuracy
}
def clone(self, **kwargs) -> "GeolocationConfig":
"""Create a copy of this configuration with updated values.
Args:
**kwargs: Key-value pairs of configuration options to update
Returns:
GeolocationConfig: A new instance with the specified updates
"""
config_dict = self.to_dict()
config_dict.update(kwargs)
return GeolocationConfig.from_dict(config_dict)
class ProxyConfig:
def __init__(
self,
server: str,
username: Optional[str] = None,
password: Optional[str] = None,
ip: Optional[str] = None,
):
"""Configuration class for a single proxy.
Args:
server: Proxy server URL (e.g., "http://127.0.0.1:8080")
username: Optional username for proxy authentication
password: Optional password for proxy authentication
ip: Optional IP address for verification purposes
"""
self.server = server
self.username = username
self.password = password
# Extract IP from server if not explicitly provided
self.ip = ip or self._extract_ip_from_server()
def _extract_ip_from_server(self) -> Optional[str]:
"""Extract IP address from server URL."""
try:
# Simple extraction assuming http://ip:port format
if "://" in self.server:
parts = self.server.split("://")[1].split(":")
return parts[0]
else:
parts = self.server.split(":")
return parts[0]
except Exception:
return None
@staticmethod
def from_string(proxy_str: str) -> "ProxyConfig":
"""Create a ProxyConfig from a string in the format 'ip:port:username:password'."""
parts = proxy_str.split(":")
if len(parts) == 4: # ip:port:username:password
ip, port, username, password = parts
return ProxyConfig(
server=f"http://{ip}:{port}",
username=username,
password=password,
ip=ip
)
elif len(parts) == 2: # ip:port only
ip, port = parts
return ProxyConfig(
server=f"http://{ip}:{port}",
ip=ip
)
else:
raise ValueError(f"Invalid proxy string format: {proxy_str}")
@staticmethod
def from_dict(proxy_dict: Dict) -> "ProxyConfig":
"""Create a ProxyConfig from a dictionary."""
return ProxyConfig(
server=proxy_dict.get("server"),
username=proxy_dict.get("username"),
password=proxy_dict.get("password"),
ip=proxy_dict.get("ip")
)
@staticmethod
def from_env(env_var: str = "PROXIES") -> List["ProxyConfig"]:
"""Load proxies from environment variable.
Args:
env_var: Name of environment variable containing comma-separated proxy strings
Returns:
List of ProxyConfig objects
"""
proxies = []
try:
proxy_list = os.getenv(env_var, "").split(",")
for proxy in proxy_list:
if not proxy:
continue
proxies.append(ProxyConfig.from_string(proxy))
except Exception as e:
print(f"Error loading proxies from environment: {e}")
return proxies
def to_dict(self) -> Dict:
"""Convert to dictionary representation."""
return {
"server": self.server,
"username": self.username,
"password": self.password,
"ip": self.ip
}
def clone(self, **kwargs) -> "ProxyConfig":
"""Create a copy of this configuration with updated values.
Args:
**kwargs: Key-value pairs of configuration options to update
Returns:
ProxyConfig: A new instance with the specified updates
"""
config_dict = self.to_dict()
config_dict.update(kwargs)
return ProxyConfig.from_dict(config_dict)
class BrowserConfig:
"""
@@ -195,8 +355,6 @@ class BrowserConfig:
Default: None.
proxy_config (ProxyConfig or dict or None): Detailed proxy configuration, e.g. {"server": "...", "username": "..."}.
If None, no additional proxy config. Default: None.
docker_config (DockerConfig or dict or None): Configuration for Docker-based browser automation.
Contains settings for Docker container operation. Default: None.
viewport_width (int): Default viewport width for pages. Default: 1080.
viewport_height (int): Default viewport height for pages. Default: 600.
viewport (dict): Default viewport dimensions for pages. If set, overrides viewport_width and viewport_height.
@@ -242,7 +400,6 @@ class BrowserConfig:
channel: str = "chromium",
proxy: str = None,
proxy_config: Union[ProxyConfig, dict, None] = None,
docker_config: Union[DockerConfig, dict, None] = None,
viewport_width: int = 1080,
viewport_height: int = 600,
viewport: dict = None,
@@ -283,15 +440,7 @@ class BrowserConfig:
self.chrome_channel = ""
self.proxy = proxy
self.proxy_config = proxy_config
# Handle docker configuration
if isinstance(docker_config, dict) and DockerConfig is not None:
self.docker_config = DockerConfig.from_kwargs(docker_config)
else:
self.docker_config = docker_config
if self.docker_config:
self.user_data_dir = self.docker_config.user_data_dir
self.viewport_width = viewport_width
self.viewport_height = viewport_height
@@ -362,7 +511,6 @@ class BrowserConfig:
channel=kwargs.get("channel", "chromium"),
proxy=kwargs.get("proxy"),
proxy_config=kwargs.get("proxy_config", None),
docker_config=kwargs.get("docker_config", None),
viewport_width=kwargs.get("viewport_width", 1080),
viewport_height=kwargs.get("viewport_height", 600),
accept_downloads=kwargs.get("accept_downloads", False),
@@ -419,13 +567,7 @@ class BrowserConfig:
"debugging_port": self.debugging_port,
"host": self.host,
}
# Include docker_config if it exists
if hasattr(self, "docker_config") and self.docker_config is not None:
if hasattr(self.docker_config, "to_dict"):
result["docker_config"] = self.docker_config.to_dict()
else:
result["docker_config"] = self.docker_config
return result
@@ -587,6 +729,14 @@ class CrawlerRunConfig():
proxy_config (ProxyConfig or dict or None): Detailed proxy configuration, e.g. {"server": "...", "username": "..."}.
If None, no additional proxy config. Default: None.
# Browser Location and Identity Parameters
locale (str or None): Locale to use for the browser context (e.g., "en-US").
Default: None.
timezone_id (str or None): Timezone identifier to use for the browser context (e.g., "America/New_York").
Default: None.
geolocation (GeolocationConfig or None): Geolocation configuration for the browser.
Default: None.
# SSL Parameters
fetch_ssl_certificate: bool = False,
# Caching Parameters
@@ -736,6 +886,10 @@ class CrawlerRunConfig():
scraping_strategy: ContentScrapingStrategy = None,
proxy_config: Union[ProxyConfig, dict, None] = None,
proxy_rotation_strategy: Optional[ProxyRotationStrategy] = None,
# Browser Location and Identity Parameters
locale: Optional[str] = None,
timezone_id: Optional[str] = None,
geolocation: Optional[GeolocationConfig] = None,
# SSL Parameters
fetch_ssl_certificate: bool = False,
# Caching Parameters
@@ -772,10 +926,12 @@ class CrawlerRunConfig():
screenshot_wait_for: float = None,
screenshot_height_threshold: int = SCREENSHOT_HEIGHT_TRESHOLD,
pdf: bool = False,
capture_mhtml: bool = False,
image_description_min_word_threshold: int = IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD,
image_score_threshold: int = IMAGE_SCORE_THRESHOLD,
table_score_threshold: int = 7,
exclude_external_images: bool = False,
exclude_all_images: bool = False,
# Link and Domain Handling Parameters
exclude_social_media_domains: list = None,
exclude_external_links: bool = False,
@@ -785,6 +941,9 @@ class CrawlerRunConfig():
# Debugging and Logging Parameters
verbose: bool = True,
log_console: bool = False,
# Network and Console Capturing Parameters
capture_network_requests: bool = False,
capture_console_messages: bool = False,
# Connection Parameters
method: str = "GET",
stream: bool = False,
@@ -819,6 +978,11 @@ class CrawlerRunConfig():
self.scraping_strategy = scraping_strategy or WebScrapingStrategy()
self.proxy_config = proxy_config
self.proxy_rotation_strategy = proxy_rotation_strategy
# Browser Location and Identity Parameters
self.locale = locale
self.timezone_id = timezone_id
self.geolocation = geolocation
# SSL Parameters
self.fetch_ssl_certificate = fetch_ssl_certificate
@@ -860,9 +1024,11 @@ class CrawlerRunConfig():
self.screenshot_wait_for = screenshot_wait_for
self.screenshot_height_threshold = screenshot_height_threshold
self.pdf = pdf
self.capture_mhtml = capture_mhtml
self.image_description_min_word_threshold = image_description_min_word_threshold
self.image_score_threshold = image_score_threshold
self.exclude_external_images = exclude_external_images
self.exclude_all_images = exclude_all_images
self.table_score_threshold = table_score_threshold
# Link and Domain Handling Parameters
@@ -877,6 +1043,10 @@ class CrawlerRunConfig():
# Debugging and Logging Parameters
self.verbose = verbose
self.log_console = log_console
# Network and Console Capturing Parameters
self.capture_network_requests = capture_network_requests
self.capture_console_messages = capture_console_messages
# Connection Parameters
self.stream = stream
@@ -953,6 +1123,10 @@ class CrawlerRunConfig():
scraping_strategy=kwargs.get("scraping_strategy"),
proxy_config=kwargs.get("proxy_config"),
proxy_rotation_strategy=kwargs.get("proxy_rotation_strategy"),
# Browser Location and Identity Parameters
locale=kwargs.get("locale", None),
timezone_id=kwargs.get("timezone_id", None),
geolocation=kwargs.get("geolocation", None),
# SSL Parameters
fetch_ssl_certificate=kwargs.get("fetch_ssl_certificate", False),
# Caching Parameters
@@ -991,6 +1165,7 @@ class CrawlerRunConfig():
"screenshot_height_threshold", SCREENSHOT_HEIGHT_TRESHOLD
),
pdf=kwargs.get("pdf", False),
capture_mhtml=kwargs.get("capture_mhtml", False),
image_description_min_word_threshold=kwargs.get(
"image_description_min_word_threshold",
IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD,
@@ -999,6 +1174,7 @@ class CrawlerRunConfig():
"image_score_threshold", IMAGE_SCORE_THRESHOLD
),
table_score_threshold=kwargs.get("table_score_threshold", 7),
exclude_all_images=kwargs.get("exclude_all_images", False),
exclude_external_images=kwargs.get("exclude_external_images", False),
# Link and Domain Handling Parameters
exclude_social_media_domains=kwargs.get(
@@ -1011,6 +1187,9 @@ class CrawlerRunConfig():
# Debugging and Logging Parameters
verbose=kwargs.get("verbose", True),
log_console=kwargs.get("log_console", False),
# Network and Console Capturing Parameters
capture_network_requests=kwargs.get("capture_network_requests", False),
capture_console_messages=kwargs.get("capture_console_messages", False),
# Connection Parameters
method=kwargs.get("method", "GET"),
stream=kwargs.get("stream", False),
@@ -1057,6 +1236,9 @@ class CrawlerRunConfig():
"scraping_strategy": self.scraping_strategy,
"proxy_config": self.proxy_config,
"proxy_rotation_strategy": self.proxy_rotation_strategy,
"locale": self.locale,
"timezone_id": self.timezone_id,
"geolocation": self.geolocation,
"fetch_ssl_certificate": self.fetch_ssl_certificate,
"cache_mode": self.cache_mode,
"session_id": self.session_id,
@@ -1088,9 +1270,11 @@ class CrawlerRunConfig():
"screenshot_wait_for": self.screenshot_wait_for,
"screenshot_height_threshold": self.screenshot_height_threshold,
"pdf": self.pdf,
"capture_mhtml": self.capture_mhtml,
"image_description_min_word_threshold": self.image_description_min_word_threshold,
"image_score_threshold": self.image_score_threshold,
"table_score_threshold": self.table_score_threshold,
"exclude_all_images": self.exclude_all_images,
"exclude_external_images": self.exclude_external_images,
"exclude_social_media_domains": self.exclude_social_media_domains,
"exclude_external_links": self.exclude_external_links,
@@ -1099,6 +1283,8 @@ class CrawlerRunConfig():
"exclude_internal_links": self.exclude_internal_links,
"verbose": self.verbose,
"log_console": self.log_console,
"capture_network_requests": self.capture_network_requests,
"capture_console_messages": self.capture_console_messages,
"method": self.method,
"stream": self.stream,
"check_robots_txt": self.check_robots_txt,
@@ -1158,9 +1344,18 @@ class LLMConfig:
elif api_token and api_token.startswith("env:"):
self.api_token = os.getenv(api_token[4:])
else:
self.api_token = PROVIDER_MODELS.get(provider, "no-token") or os.getenv(
DEFAULT_PROVIDER_API_KEY
)
# Check if given provider starts with any of key in PROVIDER_MODELS_PREFIXES
# If not, check if it is in PROVIDER_MODELS
prefixes = PROVIDER_MODELS_PREFIXES.keys()
if any(provider.startswith(prefix) for prefix in prefixes):
selected_prefix = next(
(prefix for prefix in prefixes if provider.startswith(prefix)),
None,
)
self.api_token = PROVIDER_MODELS_PREFIXES.get(selected_prefix)
else:
self.provider = DEFAULT_PROVIDER
self.api_token = os.getenv(DEFAULT_PROVIDER_API_KEY)
self.base_url = base_url
self.temprature = temprature
self.max_tokens = max_tokens

crawl4ai/async_crawler_strategy.py

@@ -24,7 +24,7 @@ from .browser_manager import BrowserManager
import aiofiles
import aiohttp
import cchardet
import chardet
from aiohttp.client import ClientTimeout
from urllib.parse import urlparse
from types import MappingProxyType
@@ -130,6 +130,8 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
Close the browser and clean up resources.
"""
await self.browser_manager.close()
# Explicitly reset the static Playwright instance
BrowserManager._playwright_instance = None
async def kill_session(self, session_id: str):
"""
@@ -409,7 +411,11 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
user_agent = kwargs.get("user_agent", self.user_agent)
# Use browser_manager to get a fresh page & context assigned to this session_id
page, context = await self.browser_manager.get_page(session_id, user_agent)
page, context = await self.browser_manager.get_page(CrawlerRunConfig(
session_id=session_id,
user_agent=user_agent,
**kwargs,
))
return session_id
async def crawl(
@@ -435,7 +441,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
status_code = 200 # Default for local/raw HTML
screenshot_data = None
if url.startswith(("http://", "https://")):
if url.startswith(("http://", "https://", "view-source:")):
return await self._crawl_web(url, config)
elif url.startswith("file://"):
@@ -447,12 +453,17 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
html = f.read()
if config.screenshot:
screenshot_data = await self._generate_screenshot_from_html(html)
if config.capture_console_messages:
page, context = await self.browser_manager.get_page(crawlerRunConfig=config)
captured_console = await self._capture_console_messages(page, url)
return AsyncCrawlResponse(
html=html,
response_headers=response_headers,
status_code=status_code,
screenshot=screenshot_data,
get_delayed_content=None,
console_messages=captured_console,
)
elif url.startswith("raw:") or url.startswith("raw://"):
@@ -478,6 +489,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
) -> AsyncCrawlResponse:
"""
Internal method to crawl web URLs with the specified configuration.
Includes optional network and console capturing.
Args:
url (str): The web URL to crawl
@@ -494,6 +506,10 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
# Reset downloaded files list for new crawl
self._downloaded_files = []
# Initialize capture lists
captured_requests = []
captured_console = []
# Handle user agent with magic mode
user_agent_to_override = config.user_agent
@@ -505,10 +521,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
)
# Get page for session
try:
page, context, _ = await self.browser_manager.get_page(crawlerRunConfig=config)
except Exception as e:
page, context = await self.browser_manager.get_page(crawlerRunConfig=config)
page, context = await self.browser_manager.get_page(crawlerRunConfig=config)
# await page.goto(URL)
@@ -524,23 +537,169 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
# Call hook after page creation
await self.execute_hook("on_page_context_created", page, context=context, config=config)
# Network Request Capturing
if config.capture_network_requests:
async def handle_request_capture(request):
try:
post_data_str = None
try:
# Be cautious with large post data
post_data = request.post_data_buffer
if post_data:
# Attempt to decode, fallback to base64 or size indication
try:
post_data_str = post_data.decode('utf-8', errors='replace')
except UnicodeDecodeError:
post_data_str = f"[Binary data: {len(post_data)} bytes]"
except Exception:
post_data_str = "[Error retrieving post data]"
captured_requests.append({
"event_type": "request",
"url": request.url,
"method": request.method,
"headers": dict(request.headers), # Convert Header dict
"post_data": post_data_str,
"resource_type": request.resource_type,
"is_navigation_request": request.is_navigation_request(),
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing request details for {request.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "request_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})
async def handle_response_capture(response):
try:
try:
# body = await response.body()
# json_body = await response.json()
text_body = await response.text()
except Exception as e:
body = None
# json_body = None
# text_body = None
captured_requests.append({
"event_type": "response",
"url": response.url,
"status": response.status,
"status_text": response.status_text,
"headers": dict(response.headers), # Convert Header dict
"from_service_worker": response.from_service_worker,
"request_timing": response.request.timing, # Detailed timing info
"timestamp": time.time(),
"body" : {
# "raw": body,
# "json": json_body,
"text": text_body
}
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing response details for {response.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "response_capture_error", "url": response.url, "error": str(e), "timestamp": time.time()})
async def handle_request_failed_capture(request):
try:
captured_requests.append({
"event_type": "request_failed",
"url": request.url,
"method": request.method,
"resource_type": request.resource_type,
"failure_text": str(request.failure) if request.failure else "Unknown failure",
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing request failed details for {request.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "request_failed_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})
page.on("request", handle_request_capture)
page.on("response", handle_response_capture)
page.on("requestfailed", handle_request_failed_capture)
# Console Message Capturing
if config.capture_console_messages:
def handle_console_capture(msg):
try:
message_type = "unknown"
try:
message_type = msg.type
except:
pass
message_text = "unknown"
try:
message_text = msg.text
except:
pass
# Basic console message with minimal content
entry = {
"type": message_type,
"text": message_text,
"timestamp": time.time()
}
captured_console.append(entry)
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing console message: {e}", tag="CAPTURE")
# Still add something to the list even on error
captured_console.append({
"type": "console_capture_error",
"error": str(e),
"timestamp": time.time()
})
def handle_pageerror_capture(err):
try:
error_message = "Unknown error"
try:
error_message = err.message
except:
pass
error_stack = ""
try:
error_stack = err.stack
except:
pass
captured_console.append({
"type": "error",
"text": error_message,
"stack": error_stack,
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing page error: {e}", tag="CAPTURE")
captured_console.append({
"type": "pageerror_capture_error",
"error": str(e),
"timestamp": time.time()
})
# Add event listeners directly
page.on("console", handle_console_capture)
page.on("pageerror", handle_pageerror_capture)
# Set up console logging if requested
if config.log_console:
def log_consol(
msg, console_log_type="debug"
): # Corrected the parameter syntax
if console_log_type == "error":
self.logger.error(
message=f"Console error: {msg}", # Use f-string for variable interpolation
tag="CONSOLE",
params={"msg": msg.text},
tag="CONSOLE"
)
elif console_log_type == "debug":
self.logger.debug(
message=f"Console: {msg}", # Use f-string for variable interpolation
tag="CONSOLE",
params={"msg": msg.text},
tag="CONSOLE"
)
page.on("console", log_consol)
@@ -625,7 +784,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
except Error:
visibility_info = await self.check_visibility(page)
if config.verbose:
if self.browser_config.config.verbose:
self.logger.debug(
message="Body visibility info: {info}",
tag="DEBUG",
@@ -821,7 +980,11 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
for selector in selectors:
try:
content = await page.evaluate(f"document.querySelector('{selector}')?.outerHTML || ''")
content = await page.evaluate(
f"""Array.from(document.querySelectorAll("{selector}"))
.map(el => el.outerHTML)
.join('')"""
)
html_parts.append(content)
except Error as e:
print(f"Warning: Could not get content for selector '{selector}': {str(e)}")
@@ -839,14 +1002,18 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
"before_return_html", page=page, html=html, context=context, config=config
)
# Handle PDF and screenshot generation
# Handle PDF, MHTML and screenshot generation
start_export_time = time.perf_counter()
pdf_data = None
screenshot_data = None
mhtml_data = None
if config.pdf:
pdf_data = await self.export_pdf(page)
if config.capture_mhtml:
mhtml_data = await self.capture_mhtml(page)
if config.screenshot:
if config.screenshot_wait_for:
await asyncio.sleep(config.screenshot_wait_for)
@@ -854,9 +1021,9 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
page, screenshot_height_threshold=config.screenshot_height_threshold
)
if screenshot_data or pdf_data:
if screenshot_data or pdf_data or mhtml_data:
self.logger.info(
message="Exporting PDF and taking screenshot took {duration:.2f}s",
message="Exporting media (PDF/MHTML/screenshot) took {duration:.2f}s",
tag="EXPORT",
params={"duration": time.perf_counter() - start_export_time},
)
@@ -879,12 +1046,16 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
status_code=status_code,
screenshot=screenshot_data,
pdf_data=pdf_data,
mhtml_data=mhtml_data,
get_delayed_content=get_delayed_content,
ssl_certificate=ssl_cert,
downloaded_files=(
self._downloaded_files if self._downloaded_files else None
),
redirected_url=redirected_url,
# Include captured data if enabled
network_requests=captured_requests if config.capture_network_requests else None,
console_messages=captured_console if config.capture_console_messages else None,
)
except Exception as e:
@@ -893,6 +1064,15 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
finally:
# If no session_id is given we should close the page
if not config.session_id:
# Detach listeners before closing to prevent potential errors during close
if config.capture_network_requests:
page.remove_listener("request", handle_request_capture)
page.remove_listener("response", handle_response_capture)
page.remove_listener("requestfailed", handle_request_failed_capture)
if config.capture_console_messages:
page.remove_listener("console", handle_console_capture)
page.remove_listener("pageerror", handle_pageerror_capture)
await page.close()
async def _handle_full_page_scan(self, page: Page, scroll_delay: float = 0.1):
@@ -1055,7 +1235,107 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
"""
pdf_data = await page.pdf(print_background=True)
return pdf_data
async def capture_mhtml(self, page: Page) -> Optional[str]:
"""
Captures the current page as MHTML using CDP.
MHTML (MIME HTML) is a web page archive format that combines the HTML content
with its resources (images, CSS, etc.) into a single MIME-encoded file.
Args:
page (Page): The Playwright page object
Returns:
Optional[str]: The MHTML content as a string, or None if there was an error
"""
try:
# Ensure the page is fully loaded before capturing
try:
# Wait for DOM content and network to be idle
await page.wait_for_load_state("domcontentloaded", timeout=5000)
await page.wait_for_load_state("networkidle", timeout=5000)
# Give a little extra time for JavaScript execution
await page.wait_for_timeout(1000)
# Wait for any animations to complete
await page.evaluate("""
() => new Promise(resolve => {
// First requestAnimationFrame gets scheduled after the next repaint
requestAnimationFrame(() => {
// Second requestAnimationFrame gets called after all animations complete
requestAnimationFrame(resolve);
});
})
""")
except Error as e:
if self.logger:
self.logger.warning(
message="Wait for load state timed out: {error}",
tag="MHTML",
params={"error": str(e)},
)
# Create a new CDP session
cdp_session = await page.context.new_cdp_session(page)
# Call Page.captureSnapshot with format "mhtml"
result = await cdp_session.send("Page.captureSnapshot", {"format": "mhtml"})
# The result contains a 'data' field with the MHTML content
mhtml_content = result.get("data")
# Detach the CDP session to clean up resources
await cdp_session.detach()
return mhtml_content
except Exception as e:
# Log the error but don't raise it - we'll just return None for the MHTML
if self.logger:
self.logger.error(
message="Failed to capture MHTML: {error}",
tag="MHTML",
params={"error": str(e)},
)
return None
async def _capture_console_messages(
self, page: Page, file_path: str
) -> List[Dict[str, Union[str, float]]]:
"""
Captures console messages from the page.
Args:
page (Page): The Playwright page object
Returns:
List[Dict[str, Union[str, float]]]: A list of captured console messages
"""
captured_console = []
def handle_console_message(msg):
try:
message_type = msg.type
message_text = msg.text
entry = {
"type": message_type,
"text": message_text,
"timestamp": time.time(),
}
captured_console.append(entry)
except Exception as e:
if self.logger:
self.logger.warning(
f"Error capturing console message: {e}", tag="CAPTURE"
)
page.on("console", handle_console_message)
await page.goto(file_path)
return captured_console
async def take_screenshot(self, page, **kwargs) -> str:
"""
Take a screenshot of the current page.
@@ -1187,8 +1467,8 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
buffered = BytesIO()
img.save(buffered, format="JPEG")
return base64.b64encode(buffered.getvalue()).decode("utf-8")
finally:
await page.close()
# finally:
# await page.close()
async def take_screenshot_naive(self, page: Page) -> str:
"""
@@ -1221,8 +1501,8 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
buffered = BytesIO()
img.save(buffered, format="JPEG")
return base64.b64encode(buffered.getvalue()).decode("utf-8")
finally:
await page.close()
# finally:
# await page.close()
async def export_storage_state(self, path: str = None) -> dict:
"""
@@ -1712,7 +1992,7 @@ class AsyncHTTPCrawlerStrategy(AsyncCrawlerStrategy):
await self.start()
yield self._session
finally:
await self.close()
pass
def set_hook(self, hook_type: str, hook_func: Callable) -> None:
if hook_type in self.hooks:
@@ -1828,7 +2108,7 @@ class AsyncHTTPCrawlerStrategy(AsyncCrawlerStrategy):
encoding = response.charset
if not encoding:
encoding = cchardet.detect(content.tobytes())['encoding'] or 'utf-8'
encoding = chardet.detect(content.tobytes())['encoding'] or 'utf-8'
result = AsyncCrawlResponse(
html=content.tobytes().decode(encoding, errors='replace'),

crawl4ai/async_database.py

@@ -171,7 +171,10 @@ class AsyncDatabaseManager:
f"Code context:\n{error_context['code_context']}"
)
self.logger.error(
message=create_box_message(error_message, type="error"),
message="{error}",
tag="ERROR",
params={"error": str(error_message)},
boxes=["error"],
)
raise
@@ -189,7 +192,10 @@ class AsyncDatabaseManager:
f"Code context:\n{error_context['code_context']}"
)
self.logger.error(
message=create_box_message(error_message, type="error"),
message="{error}",
tag="ERROR",
params={"error": str(error_message)},
boxes=["error"],
)
raise
finally:

crawl4ai/async_logger.py

@@ -1,18 +1,48 @@
from abc import ABC, abstractmethod
from enum import Enum
from typing import Optional, Dict, Any
from colorama import Fore, Style, init
from typing import Optional, Dict, Any, List
import os
from datetime import datetime
from urllib.parse import unquote
from rich.console import Console
from rich.text import Text
from .utils import create_box_message
class LogLevel(Enum):
DEFAULT = 0
DEBUG = 1
INFO = 2
SUCCESS = 3
WARNING = 4
ERROR = 5
CRITICAL = 6
ALERT = 7
NOTICE = 8
EXCEPTION = 9
FATAL = 10
def __str__(self):
return self.name.lower()
class LogColor(str, Enum):
"""Enum for log colors."""
DEBUG = "lightblack"
INFO = "cyan"
SUCCESS = "green"
WARNING = "yellow"
ERROR = "red"
CYAN = "cyan"
GREEN = "green"
YELLOW = "yellow"
MAGENTA = "magenta"
DIM_MAGENTA = "dim magenta"
def __str__(self):
"""Automatically convert rich color to string."""
return self.value
class AsyncLoggerBase(ABC):
@@ -37,13 +67,14 @@ class AsyncLoggerBase(ABC):
pass
@abstractmethod
def url_status(self, url: str, success: bool, timing: float, tag: str = "FETCH", url_length: int = 50):
def url_status(self, url: str, success: bool, timing: float, tag: str = "FETCH", url_length: int = 100):
pass
@abstractmethod
def error_status(self, url: str, error: str, tag: str = "ERROR", url_length: int = 50):
def error_status(self, url: str, error: str, tag: str = "ERROR", url_length: int = 100):
pass
class AsyncLogger(AsyncLoggerBase):
"""
Asynchronous logger with support for colored console output and file logging.
@@ -61,14 +92,21 @@ class AsyncLogger(AsyncLoggerBase):
"DEBUG": "",
"INFO": "",
"WARNING": "",
"SUCCESS": "",
"CRITICAL": "",
"ALERT": "",
"NOTICE": "",
"EXCEPTION": "",
"FATAL": "",
"DEFAULT": "",
}
DEFAULT_COLORS = {
LogLevel.DEBUG: Fore.LIGHTBLACK_EX,
LogLevel.INFO: Fore.CYAN,
LogLevel.SUCCESS: Fore.GREEN,
LogLevel.WARNING: Fore.YELLOW,
LogLevel.ERROR: Fore.RED,
LogLevel.DEBUG: LogColor.DEBUG,
LogLevel.INFO: LogColor.INFO,
LogLevel.SUCCESS: LogColor.SUCCESS,
LogLevel.WARNING: LogColor.WARNING,
LogLevel.ERROR: LogColor.ERROR,
}
def __init__(
@@ -77,7 +115,7 @@ class AsyncLogger(AsyncLoggerBase):
log_level: LogLevel = LogLevel.DEBUG,
tag_width: int = 10,
icons: Optional[Dict[str, str]] = None,
colors: Optional[Dict[LogLevel, str]] = None,
colors: Optional[Dict[LogLevel, LogColor]] = None,
verbose: bool = True,
):
"""
@@ -91,13 +129,13 @@ class AsyncLogger(AsyncLoggerBase):
colors: Custom colors for different log levels
verbose: Whether to output to console
"""
init() # Initialize colorama
self.log_file = log_file
self.log_level = log_level
self.tag_width = tag_width
self.icons = icons or self.DEFAULT_ICONS
self.colors = colors or self.DEFAULT_COLORS
self.verbose = verbose
self.console = Console()
# Create log file directory if needed
if log_file:
@@ -110,20 +148,23 @@ class AsyncLogger(AsyncLoggerBase):
def _get_icon(self, tag: str) -> str:
"""Get the icon for a tag, defaulting to info icon if not found."""
return self.icons.get(tag, self.icons["INFO"])
def _shorten(self, text, length, placeholder="..."):
"""Truncate text in the middle if longer than length, or pad if shorter."""
if len(text) <= length:
return text.ljust(length) # Pad with spaces to reach desired length
half = (length - len(placeholder)) // 2
shortened = text[:half] + placeholder + text[-half:]
return shortened.ljust(length) # Also pad shortened text to consistent length
def _write_to_file(self, message: str):
"""Write a message to the log file if configured."""
if self.log_file:
text = Text.from_markup(message)
plain_text = text.plain
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f")[:-3]
with open(self.log_file, "a", encoding="utf-8") as f:
# Strip ANSI color codes for file output
clean_message = message.replace(Fore.RESET, "").replace(
Style.RESET_ALL, ""
)
for color in vars(Fore).values():
if isinstance(color, str):
clean_message = clean_message.replace(color, "")
f.write(f"[{timestamp}] {clean_message}\n")
f.write(f"[{timestamp}] {plain_text}\n")
def _log(
self,
@@ -131,8 +172,9 @@ class AsyncLogger(AsyncLoggerBase):
message: str,
tag: str,
params: Optional[Dict[str, Any]] = None,
colors: Optional[Dict[str, str]] = None,
base_color: Optional[str] = None,
colors: Optional[Dict[str, LogColor]] = None,
boxes: Optional[List[str]] = None,
base_color: Optional[LogColor] = None,
**kwargs,
):
"""
@@ -144,55 +186,44 @@ class AsyncLogger(AsyncLoggerBase):
tag: Tag for the message
params: Parameters to format into the message
colors: Color overrides for specific parameters
boxes: Box overrides for specific parameters
base_color: Base color for the entire message
"""
if level.value < self.log_level.value:
return
# Format the message with parameters if provided
# avoid conflict with rich formatting
parsed_message = message.replace("[", "[[").replace("]", "]]")
if params:
try:
# First format the message with raw parameters
formatted_message = message.format(**params)
# FIXME: If there are formatting strings in floating point format,
# this may result in colors and boxes not being applied properly.
# such as {value:.2f}, the value is 0.23333 format it to 0.23,
# but we replace("0.23333", "[color]0.23333[/color]")
formatted_message = parsed_message.format(**params)
for key, value in params.items():
# value_str may discard `[` and `]`, so we need to replace it.
value_str = str(value).replace("[", "[[").replace("]", "]]")
# check is need apply color
if colors and key in colors:
color_str = f"[{colors[key]}]{value_str}[/{colors[key]}]"
formatted_message = formatted_message.replace(value_str, color_str)
value_str = color_str
# Then apply colors if specified
color_map = {
"green": Fore.GREEN,
"red": Fore.RED,
"yellow": Fore.YELLOW,
"blue": Fore.BLUE,
"cyan": Fore.CYAN,
"magenta": Fore.MAGENTA,
"white": Fore.WHITE,
"black": Fore.BLACK,
"reset": Style.RESET_ALL,
}
if colors:
for key, color in colors.items():
# Find the formatted value in the message and wrap it with color
if color in color_map:
color = color_map[color]
if key in params:
value_str = str(params[key])
formatted_message = formatted_message.replace(
value_str, f"{color}{value_str}{Style.RESET_ALL}"
)
# check is need apply box
if boxes and key in boxes:
formatted_message = formatted_message.replace(value_str,
create_box_message(value_str, type=str(level)))
except KeyError as e:
formatted_message = (
f"LOGGING ERROR: Missing parameter {e} in message template"
)
level = LogLevel.ERROR
else:
formatted_message = message
formatted_message = parsed_message
# Construct the full log line
color = base_color or self.colors[level]
log_line = f"{color}{self._format_tag(tag)} {self._get_icon(tag)} {formatted_message}{Style.RESET_ALL}"
color: LogColor = base_color or self.colors[level]
log_line = f"[{color}]{self._format_tag(tag)} {self._get_icon(tag)} {formatted_message} [/{color}]"
# Output to console if verbose
if self.verbose or kwargs.get("force_verbose", False):
print(log_line)
self.console.print(log_line)
# Write to file if configured
self._write_to_file(log_line)
@@ -212,6 +243,22 @@ class AsyncLogger(AsyncLoggerBase):
def warning(self, message: str, tag: str = "WARNING", **kwargs):
"""Log a warning message."""
self._log(LogLevel.WARNING, message, tag, **kwargs)
def critical(self, message: str, tag: str = "CRITICAL", **kwargs):
"""Log a critical message."""
self._log(LogLevel.ERROR, message, tag, **kwargs)
def exception(self, message: str, tag: str = "EXCEPTION", **kwargs):
"""Log an exception message."""
self._log(LogLevel.ERROR, message, tag, **kwargs)
def fatal(self, message: str, tag: str = "FATAL", **kwargs):
"""Log a fatal message."""
self._log(LogLevel.ERROR, message, tag, **kwargs)
def alert(self, message: str, tag: str = "ALERT", **kwargs):
"""Log an alert message."""
self._log(LogLevel.ERROR, message, tag, **kwargs)
def notice(self, message: str, tag: str = "NOTICE", **kwargs):
"""Log a notice message."""
self._log(LogLevel.INFO, message, tag, **kwargs)
def error(self, message: str, tag: str = "ERROR", **kwargs):
"""Log an error message."""
@@ -223,7 +270,7 @@ class AsyncLogger(AsyncLoggerBase):
success: bool,
timing: float,
tag: str = "FETCH",
url_length: int = 50,
url_length: int = 100,
):
"""
Convenience method for logging URL fetch status.
@@ -235,19 +282,20 @@ class AsyncLogger(AsyncLoggerBase):
tag: Tag for the message
url_length: Maximum length for URL in log
"""
decoded_url = unquote(url)
readable_url = self._shorten(decoded_url, url_length)
self._log(
level=LogLevel.SUCCESS if success else LogLevel.ERROR,
message="{url:.{url_length}}... | Status: {status} | Time: {timing:.2f}s",
message="{url} | {status} | : {timing:.2f}s",
tag=tag,
params={
"url": url,
"url_length": url_length,
"status": success,
"url": readable_url,
"status": "" if success else "",
"timing": timing,
},
colors={
"status": Fore.GREEN if success else Fore.RED,
"timing": Fore.YELLOW,
"status": LogColor.SUCCESS if success else LogColor.ERROR,
"timing": LogColor.WARNING,
},
)
@@ -263,11 +311,13 @@ class AsyncLogger(AsyncLoggerBase):
tag: Tag for the message
url_length: Maximum length for URL in log
"""
decoded_url = unquote(url)
readable_url = self._shorten(decoded_url, url_length)
self._log(
level=LogLevel.ERROR,
message="{url:.{url_length}}... | Error: {error}",
message="{url} | Error: {error}",
tag=tag,
params={"url": url, "url_length": url_length, "error": error},
params={"url": readable_url, "error": error},
)
class AsyncFileLogger(AsyncLoggerBase):
@@ -311,13 +361,13 @@ class AsyncFileLogger(AsyncLoggerBase):
"""Log an error message to file."""
self._write_to_file("ERROR", message, tag)
def url_status(self, url: str, success: bool, timing: float, tag: str = "FETCH", url_length: int = 50):
def url_status(self, url: str, success: bool, timing: float, tag: str = "FETCH", url_length: int = 100):
"""Log URL fetch status to file."""
status = "SUCCESS" if success else "FAILED"
message = f"{url[:url_length]}... | Status: {status} | Time: {timing:.2f}s"
self._write_to_file("URL_STATUS", message, tag)
def error_status(self, url: str, error: str, tag: str = "ERROR", url_length: int = 50):
def error_status(self, url: str, error: str, tag: str = "ERROR", url_length: int = 100):
"""Log error status to file."""
message = f"{url[:url_length]}... | Error: {error}"
self._write_to_file("ERROR", message, tag)

crawl4ai/async_webcrawler.py

@@ -2,7 +2,6 @@ from .__version__ import __version__ as crawl4ai_version
import os
import sys
import time
from colorama import Fore
from pathlib import Path
from typing import Optional, List
import json
@@ -36,7 +35,7 @@ from .markdown_generation_strategy import (
)
from .deep_crawling import DeepCrawlDecorator
from .async_logger import AsyncLogger, AsyncLoggerBase
from .async_configs import BrowserConfig, CrawlerRunConfig
from .async_configs import BrowserConfig, CrawlerRunConfig, ProxyConfig
from .async_dispatcher import * # noqa: F403
from .async_dispatcher import BaseDispatcher, MemoryAdaptiveDispatcher, RateLimiter
@@ -44,9 +43,9 @@ from .utils import (
sanitize_input_encode,
InvalidCSSSelectorError,
fast_format_html,
create_box_message,
get_error_context,
RobotsParser,
preprocess_html_for_schema,
)
@@ -111,7 +110,8 @@ class AsyncWebCrawler:
self,
crawler_strategy: AsyncCrawlerStrategy = None,
config: BrowserConfig = None,
base_directory: str = str(os.getenv("CRAWL4_AI_BASE_DIRECTORY", Path.home())),
base_directory: str = str(
os.getenv("CRAWL4_AI_BASE_DIRECTORY", Path.home())),
thread_safe: bool = False,
logger: AsyncLoggerBase = None,
**kwargs,
@@ -139,7 +139,8 @@ class AsyncWebCrawler:
)
# Initialize crawler strategy
params = {k: v for k, v in kwargs.items() if k in ["browser_config", "logger"]}
params = {k: v for k, v in kwargs.items() if k in [
"browser_config", "logger"]}
self.crawler_strategy = crawler_strategy or AsyncPlaywrightCrawlerStrategy(
browser_config=browser_config,
logger=self.logger,
@@ -237,7 +238,8 @@ class AsyncWebCrawler:
config = config or CrawlerRunConfig()
if not isinstance(url, str) or not url:
raise ValueError("Invalid URL, make sure the URL is a non-empty string")
raise ValueError(
"Invalid URL, make sure the URL is a non-empty string")
async with self._lock or self.nullcontext():
try:
@@ -291,12 +293,12 @@ class AsyncWebCrawler:
# Update proxy configuration from rotation strategy if available
if config and config.proxy_rotation_strategy:
next_proxy = await config.proxy_rotation_strategy.get_next_proxy()
next_proxy: ProxyConfig = await config.proxy_rotation_strategy.get_next_proxy()
if next_proxy:
self.logger.info(
message="Switch proxy: {proxy}",
tag="PROXY",
params={"proxy": next_proxy.server},
params={"proxy": next_proxy.server}
)
config.proxy_config = next_proxy
# config = config.clone(proxy_config=next_proxy)
@@ -306,7 +308,8 @@ class AsyncWebCrawler:
t1 = time.perf_counter()
if config.user_agent:
self.crawler_strategy.update_user_agent(config.user_agent)
self.crawler_strategy.update_user_agent(
config.user_agent)
# Check robots.txt if enabled
if config and config.check_robots_txt:
@@ -353,10 +356,11 @@ class AsyncWebCrawler:
html=html,
extracted_content=extracted_content,
config=config, # Pass the config object instead of individual parameters
screenshot=screenshot_data,
screenshot_data=screenshot_data,
pdf_data=pdf_data,
verbose=config.verbose,
is_raw_html=True if url.startswith("raw:") else False,
redirected_url=async_response.redirected_url,
**kwargs,
)
@@ -365,25 +369,21 @@ class AsyncWebCrawler:
crawl_result.response_headers = async_response.response_headers
crawl_result.downloaded_files = async_response.downloaded_files
crawl_result.js_execution_result = js_execution_result
crawl_result.ssl_certificate = (
async_response.ssl_certificate
) # Add SSL certificate
crawl_result.mhtml = async_response.mhtml_data
crawl_result.ssl_certificate = async_response.ssl_certificate
# Add captured network and console data if available
crawl_result.network_requests = async_response.network_requests
crawl_result.console_messages = async_response.console_messages
crawl_result.success = bool(html)
crawl_result.session_id = getattr(config, "session_id", None)
crawl_result.session_id = getattr(
config, "session_id", None)
self.logger.success(
message="{url:.50}... | Status: {status} | Total: {timing}",
self.logger.url_status(
url=cache_context.display_url,
success=crawl_result.success,
timing=time.perf_counter() - start_time,
tag="COMPLETE",
params={
"url": cache_context.display_url,
"status": crawl_result.success,
"timing": f"{time.perf_counter() - start_time:.2f}s",
},
colors={
"status": Fore.GREEN if crawl_result.success else Fore.RED,
"timing": Fore.YELLOW,
},
)
# Update cache if appropriate
@@ -393,19 +393,15 @@ class AsyncWebCrawler:
return CrawlResultContainer(crawl_result)
else:
self.logger.success(
message="{url:.50}... | Status: {status} | Total: {timing}",
tag="COMPLETE",
params={
"url": cache_context.display_url,
"status": True,
"timing": f"{time.perf_counter() - start_time:.2f}s",
},
colors={"status": Fore.GREEN, "timing": Fore.YELLOW},
self.logger.url_status(
url=cache_context.display_url,
success=True,
timing=time.perf_counter() - start_time,
tag="COMPLETE"
)
cached_result.success = bool(html)
cached_result.session_id = getattr(config, "session_id", None)
cached_result.session_id = getattr(
config, "session_id", None)
cached_result.redirected_url = cached_result.redirected_url or url
return CrawlResultContainer(cached_result)
@@ -421,7 +417,7 @@ class AsyncWebCrawler:
self.logger.error_status(
url=url,
error=create_box_message(error_message, type="error"),
error=error_message,
tag="ERROR",
)
@@ -437,7 +433,7 @@ class AsyncWebCrawler:
html: str,
extracted_content: str,
config: CrawlerRunConfig,
screenshot: str,
screenshot_data: str,
pdf_data: str,
verbose: bool,
**kwargs,
@@ -450,7 +446,7 @@ class AsyncWebCrawler:
html: Raw HTML content
extracted_content: Previously extracted content (if any)
config: Configuration object controlling processing behavior
screenshot: Screenshot data (if any)
screenshot_data: Screenshot data (if any)
pdf_data: PDF data (if any)
verbose: Whether to enable verbose logging
**kwargs: Additional parameters for backwards compatibility
@@ -472,12 +468,14 @@ class AsyncWebCrawler:
params = config.__dict__.copy()
params.pop("url", None)
# add keys from kwargs to params that don't already exist in params
params.update({k: v for k, v in kwargs.items() if k not in params.keys()})
params.update({k: v for k, v in kwargs.items()
if k not in params.keys()})
################################
# Scraping Strategy Execution #
################################
result: ScrapingResult = scraping_strategy.scrap(url, html, **params)
result: ScrapingResult = scraping_strategy.scrap(
url, html, **params)
if result is None:
raise ValueError(
@@ -493,15 +491,20 @@ class AsyncWebCrawler:
# Extract results - handle both dict and ScrapingResult
if isinstance(result, dict):
cleaned_html = sanitize_input_encode(result.get("cleaned_html", ""))
cleaned_html = sanitize_input_encode(
result.get("cleaned_html", ""))
media = result.get("media", {})
tables = media.pop("tables", []) if isinstance(media, dict) else []
links = result.get("links", {})
metadata = result.get("metadata", {})
else:
cleaned_html = sanitize_input_encode(result.cleaned_html)
media = result.media.model_dump()
tables = media.pop("tables", [])
links = result.links.model_dump()
metadata = result.metadata
fit_html = preprocess_html_for_schema(html_content=html, text_threshold=500, max_size=300_000)
################################
# Generate Markdown #
@@ -510,27 +513,65 @@ class AsyncWebCrawler:
config.markdown_generator or DefaultMarkdownGenerator()
)
# --- SELECT HTML SOURCE BASED ON CONTENT_SOURCE ---
# Get the desired source from the generator config, default to 'cleaned_html'
selected_html_source = getattr(markdown_generator, 'content_source', 'cleaned_html')
# Define the source selection logic using dict dispatch
html_source_selector = {
"raw_html": lambda: html, # The original raw HTML
"cleaned_html": lambda: cleaned_html, # The HTML after scraping strategy
"fit_html": lambda: fit_html, # The HTML after preprocessing for schema
}
markdown_input_html = cleaned_html # Default to cleaned_html
try:
# Get the appropriate lambda function, default to returning cleaned_html if key not found
source_lambda = html_source_selector.get(selected_html_source, lambda: cleaned_html)
# Execute the lambda to get the selected HTML
markdown_input_html = source_lambda()
# Log which source is being used (optional, but helpful for debugging)
# if self.logger and verbose:
# actual_source_used = selected_html_source if selected_html_source in html_source_selector else 'cleaned_html (default)'
# self.logger.debug(f"Using '{actual_source_used}' as source for Markdown generation for {url}", tag="MARKDOWN_SRC")
except Exception as e:
# Handle potential errors, especially from preprocess_html_for_schema
if self.logger:
self.logger.warning(
f"Error getting/processing '{selected_html_source}' for markdown source: {e}. Falling back to cleaned_html.",
tag="MARKDOWN_SRC"
)
# Ensure markdown_input_html is still the default cleaned_html in case of error
markdown_input_html = cleaned_html
# --- END: HTML SOURCE SELECTION ---
# Uncomment to use PruningContentFilter by default
# if not config.content_filter and not markdown_generator.content_filter:
# markdown_generator.content_filter = PruningContentFilter()
markdown_result: MarkdownGenerationResult = (
markdown_generator.generate_markdown(
cleaned_html=cleaned_html,
base_url=url,
input_html=markdown_input_html,
base_url=params.get("redirected_url", url)
# html2text_options=kwargs.get('html2text', {})
)
)
# Log processing completion
self.logger.info(
message="{url:.50}... | Time: {timing}s",
tag="SCRAPE",
params={
"url": _url,
"timing": int((time.perf_counter() - t1) * 1000) / 1000,
},
self.logger.url_status(
url=_url,
success=True,
timing=int((time.perf_counter() - t1) * 1000) / 1000,
tag="SCRAPE"
)
# self.logger.info(
# message="{url:.50}... | Time: {timing}s",
# tag="SCRAPE",
# params={"url": _url, "timing": int((time.perf_counter() - t1) * 1000) / 1000},
# )
################################
# Structured Content Extraction #
@@ -554,6 +595,7 @@ class AsyncWebCrawler:
content = {
"markdown": markdown_result.raw_markdown,
"html": html,
"fit_html": fit_html,
"cleaned_html": cleaned_html,
"fit_markdown": markdown_result.fit_markdown,
}.get(content_format, markdown_result.raw_markdown)
@@ -561,7 +603,7 @@ class AsyncWebCrawler:
# Use IdentityChunking for HTML input, otherwise use provided chunking strategy
chunking = (
IdentityChunking()
if content_format in ["html", "cleaned_html"]
if content_format in ["html", "cleaned_html", "fit_html"]
else config.chunking_strategy
)
sections = chunking.chunk(content)
@@ -577,10 +619,6 @@ class AsyncWebCrawler:
params={"url": _url, "timing": time.perf_counter() - t1},
)
# Handle screenshot and PDF data
screenshot_data = None if not screenshot else screenshot
pdf_data = None if not pdf_data else pdf_data
# Apply HTML formatting if requested
if config.prettiify:
cleaned_html = fast_format_html(cleaned_html)
@@ -589,9 +627,11 @@ class AsyncWebCrawler:
return CrawlResult(
url=url,
html=html,
fit_html=fit_html,
cleaned_html=cleaned_html,
markdown=markdown_result,
media=media,
tables=tables, # NEW
links=links,
metadata=metadata,
screenshot=screenshot_data,
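
A minimal usage sketch of the behavior added above: the markdown generator now selects its input HTML via a content_source attribute ("raw_html", "cleaned_html", or "fit_html"), and CrawlResult gains a fit_html field produced by preprocess_html_for_schema. This assumes DefaultMarkdownGenerator accepts content_source as a constructor argument (the diff only shows the attribute being read via getattr):

import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

async def main():
    # Build markdown from the schema-preprocessed HTML rather than cleaned_html.
    md_gen = DefaultMarkdownGenerator(content_source="fit_html")
    run_config = CrawlerRunConfig(markdown_generator=md_gen)
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=run_config)
        print(result.fit_html[:200])               # new field on CrawlResult
        print(result.markdown.raw_markdown[:200])

asyncio.run(main())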

View File

@@ -1,23 +0,0 @@
"""Browser management module for Crawl4AI.
This module provides browser management capabilities using different strategies
for browser creation and interaction.
"""
from .manager import BrowserManager
from .profiles import BrowserProfileManager
from .models import DockerConfig
from .docker_registry import DockerRegistry
from .docker_utils import DockerUtils
from .browser_hub import BrowserHub
from .strategies import (
BaseBrowserStrategy,
PlaywrightBrowserStrategy,
CDPBrowserStrategy,
BuiltinBrowserStrategy,
DockerBrowserStrategy
)
__all__ = ['BrowserManager', 'BrowserProfileManager', 'DockerConfig', 'DockerRegistry', 'DockerUtils', 'BaseBrowserStrategy',
'PlaywrightBrowserStrategy', 'CDPBrowserStrategy', 'BuiltinBrowserStrategy',
'DockerBrowserStrategy', 'BrowserHub']
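
For reference, the deleted __init__.py above re-exported the package's public names, so (assuming the package path crawl4ai.browser) downstream code could import them directly:

# Hypothetical import pattern against the removed package surface.
from crawl4ai.browser import BrowserManager, BrowserHub, DockerConfig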

View File

@@ -1,184 +0,0 @@
# browser_hub_manager.py
import hashlib
import json
import asyncio
from typing import Dict, Optional, List, Tuple
from .manager import BrowserManager, UnavailableBehavior
from ..async_configs import BrowserConfig, CrawlerRunConfig
from ..async_logger import AsyncLogger
class BrowserHub:
"""
Manages Browser-Hub instances for sharing across multiple pipelines.
This class provides centralized management for browser resources, allowing
multiple pipelines to share browser instances efficiently, connect to
existing browser hubs, or create new ones with custom configurations.
"""
_instances: Dict[str, BrowserManager] = {}
_lock = asyncio.Lock()
@classmethod
async def get_browser_manager(
cls,
config: Optional[BrowserConfig] = None,
hub_id: Optional[str] = None,
connection_info: Optional[str] = None,
logger: Optional[AsyncLogger] = None,
max_browsers_per_config: int = 10,
max_pages_per_browser: int = 5,
initial_pool_size: int = 1,
page_configs: Optional[List[Tuple[BrowserConfig, CrawlerRunConfig, int]]] = None
) -> BrowserManager:
"""
Get an existing BrowserManager or create a new one based on parameters.
Args:
config: Browser configuration for new hub
hub_id: Identifier for the hub instance
connection_info: Connection string for existing hub
logger: Logger for recording events and errors
max_browsers_per_config: Maximum browsers per configuration
max_pages_per_browser: Maximum pages per browser
initial_pool_size: Initial number of browsers to create
page_configs: Optional configurations for pre-warming pages
Returns:
BrowserManager: The requested browser manager instance
"""
async with cls._lock:
# Scenario 3: Use existing hub via connection info
if connection_info:
instance_key = f"connection:{connection_info}"
if instance_key not in cls._instances:
cls._instances[instance_key] = await cls._connect_to_browser_hub(
connection_info, logger
)
return cls._instances[instance_key]
# Scenario 2: Custom configured hub
if config:
config_hash = cls._hash_config(config)
instance_key = hub_id or f"config:{config_hash}"
if instance_key not in cls._instances:
cls._instances[instance_key] = await cls._create_browser_manager(
config,
logger,
max_browsers_per_config,
max_pages_per_browser,
initial_pool_size,
page_configs
)
return cls._instances[instance_key]
# Scenario 1: Default hub
instance_key = "default"
if instance_key not in cls._instances:
cls._instances[instance_key] = await cls._create_default_browser_hub(
logger,
max_browsers_per_config,
max_pages_per_browser,
initial_pool_size
)
return cls._instances[instance_key]
@classmethod
async def _create_browser_manager(
cls,
config: BrowserConfig,
logger: Optional[AsyncLogger],
max_browsers_per_config: int,
max_pages_per_browser: int,
initial_pool_size: int,
page_configs: Optional[List[Tuple[BrowserConfig, CrawlerRunConfig, int]]] = None
) -> BrowserManager:
"""Create a new browser hub with the specified configuration."""
manager = BrowserManager(
browser_config=config,
logger=logger,
unavailable_behavior=UnavailableBehavior.ON_DEMAND,
max_browsers_per_config=max_browsers_per_config,
max_pages_per_browser=max_pages_per_browser,
)
# Initialize the pool
await manager.initialize_pool(
browser_configs=[config] if config else None,
browsers_per_config=initial_pool_size,
page_configs=page_configs
)
return manager
@classmethod
async def _create_default_browser_hub(
cls,
logger: Optional[AsyncLogger],
max_browsers_per_config: int,
max_pages_per_browser: int,
initial_pool_size: int
) -> BrowserManager:
"""Create a default browser hub with standard settings."""
config = BrowserConfig(headless=True)
return await cls._create_browser_manager(
config,
logger,
max_browsers_per_config,
max_pages_per_browser,
initial_pool_size,
None
)
@classmethod
async def _connect_to_browser_hub(
cls,
connection_info: str,
logger: Optional[AsyncLogger]
) -> BrowserManager:
"""
Connect to an existing browser hub.
Note: This is a placeholder for future remote connection functionality.
Currently creates a local instance.
"""
if logger:
logger.info(
message="Remote browser hub connections not yet implemented. Creating local instance.",
tag="BROWSER_HUB"
)
# For now, create a default local instance
return await cls._create_default_browser_hub(
logger,
max_browsers_per_config=10,
max_pages_per_browser=5,
initial_pool_size=1
)
@classmethod
def _hash_config(cls, config: BrowserConfig) -> str:
"""Create a hash of the browser configuration for identification."""
# Convert config to dictionary, excluding any callable objects
config_dict = config.__dict__.copy()
for key in list(config_dict.keys()):
if callable(config_dict[key]):
del config_dict[key]
# Convert to canonical JSON string
config_json = json.dumps(config_dict, sort_keys=True, default=str)
# Hash the JSON
config_hash = hashlib.sha256(config_json.encode()).hexdigest()
return config_hash
@classmethod
async def shutdown_all(cls):
"""Close all browser hub instances and clear the registry."""
async with cls._lock:
shutdown_tasks = []
for hub in cls._instances.values():
shutdown_tasks.append(hub.close())
if shutdown_tasks:
await asyncio.gather(*shutdown_tasks)
cls._instances.clear()
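
Before its removal, BrowserHub deduplicated shared BrowserManager instances by a SHA-256 hash of the browser configuration. A minimal sketch of the lookup scenarios handled by get_browser_manager, assuming the old crawl4ai.browser package path:

import asyncio
from crawl4ai import BrowserConfig
from crawl4ai.browser import BrowserHub  # removed module, shown above

async def main():
    # Scenario 1: shared default hub (headless browser).
    default_manager = await BrowserHub.get_browser_manager()

    # Scenario 2: custom-configured hub, keyed by config hash.
    cfg = BrowserConfig(headless=True, browser_type="chromium")
    custom_manager = await BrowserHub.get_browser_manager(
        config=cfg, initial_pool_size=2)

    # A repeat call with the same config returns the same cached manager.
    again = await BrowserHub.get_browser_manager(config=cfg)
    assert again is custom_manager

    await BrowserHub.shutdown_all()

asyncio.run(main())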

View File

@@ -1,34 +0,0 @@
# ---------- Dockerfile ----------
FROM alpine:latest
# Combine everything in one RUN to keep layers minimal.
RUN apk update && apk upgrade && \
apk add --no-cache \
chromium \
nss \
freetype \
harfbuzz \
ca-certificates \
ttf-freefont \
socat \
curl && \
addgroup -S chromium && adduser -S chromium -G chromium && \
mkdir -p /data && chown chromium:chromium /data && \
rm -rf /var/cache/apk/*
# Copy start script, then chown/chmod in one step
COPY start.sh /home/chromium/start.sh
RUN chown chromium:chromium /home/chromium/start.sh && \
chmod +x /home/chromium/start.sh
USER chromium
WORKDIR /home/chromium
# Expose port used by socat (mapping 9222→9223 or whichever you prefer)
EXPOSE 9223
# Simple healthcheck: is the remote debug endpoint responding?
HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl -f http://localhost:9222/json/version || exit 1
CMD ["./start.sh"]

View File

@@ -1,27 +0,0 @@
# ---------- Dockerfile (Idle Version) ----------
FROM alpine:latest
# Install only Chromium and its dependencies in a single layer
RUN apk update && apk upgrade && \
apk add --no-cache \
chromium \
nss \
freetype \
harfbuzz \
ca-certificates \
ttf-freefont \
socat \
curl && \
addgroup -S chromium && adduser -S chromium -G chromium && \
mkdir -p /data && chown chromium:chromium /data && \
rm -rf /var/cache/apk/*
ENV PATH="/usr/bin:/bin:/usr/sbin:/sbin"
# Switch to a non-root user for security
USER chromium
WORKDIR /home/chromium
# Idle: container does nothing except stay alive
CMD ["tail", "-f", "/dev/null"]

View File

@@ -1,23 +0,0 @@
# Use Debian 12 (Bookworm) slim for a small, stable base image
FROM debian:bookworm-slim
ENV DEBIAN_FRONTEND=noninteractive
# Install Chromium, socat, and basic fonts
RUN apt-get update && apt-get install -y --no-install-recommends \
chromium \
wget \
curl \
socat \
fonts-freefont-ttf \
fonts-noto-color-emoji && \
apt-get clean && rm -rf /var/lib/apt/lists/*
# Copy start.sh and make it executable
COPY start.sh /start.sh
RUN chmod +x /start.sh
# Expose socat port (use host mapping, e.g. -p 9225:9223)
EXPOSE 9223
ENTRYPOINT ["/start.sh"]

View File

@@ -1,264 +0,0 @@
"""Docker registry module for Crawl4AI.
This module provides a registry system for tracking and reusing Docker containers
across browser sessions, improving performance and resource utilization.
"""
import os
import json
import time
from typing import Dict, Optional
from ..utils import get_home_folder
class DockerRegistry:
"""Manages a registry of Docker containers used for browser automation.
This registry tracks containers by configuration hash, allowing reuse of appropriately
configured containers instead of creating new ones for each session.
Attributes:
registry_file (str): Path to the registry file
containers (dict): Dictionary of container information
port_map (dict): Map of host ports to container IDs
last_port (int): Last port assigned
"""
def __init__(self, registry_file: Optional[str] = None):
"""Initialize the registry with an optional path to the registry file.
Args:
registry_file: Path to the registry file. If None, uses default path.
"""
# Use the same file path as BuiltinBrowserStrategy by default
self.registry_file = registry_file or os.path.join(get_home_folder(), "builtin-browser", "browser_config.json")
self.containers = {} # Still maintain this for backward compatibility
self.port_map = {} # Will be populated from the shared file
self.last_port = 9222
self.load()
def load(self):
"""Load container registry from file."""
if os.path.exists(self.registry_file):
try:
with open(self.registry_file, 'r') as f:
registry_data = json.load(f)
# Initialize port_map if not present
if "port_map" not in registry_data:
registry_data["port_map"] = {}
self.port_map = registry_data.get("port_map", {})
# Extract container information from port_map entries of type "docker"
self.containers = {}
for port_str, browser_info in self.port_map.items():
if browser_info.get("browser_type") == "docker" and "container_id" in browser_info:
container_id = browser_info["container_id"]
self.containers[container_id] = {
"host_port": int(port_str),
"config_hash": browser_info.get("config_hash", ""),
"created_at": browser_info.get("created_at", time.time())
}
# Get last port if available
if "last_port" in registry_data:
self.last_port = registry_data["last_port"]
else:
# Find highest port in port_map
ports = [int(p) for p in self.port_map.keys() if p.isdigit()]
self.last_port = max(ports + [9222])
except Exception as e:
# Reset to defaults on error
print(f"Error loading registry: {e}")
self.containers = {}
self.port_map = {}
self.last_port = 9222
else:
# Initialize with defaults if file doesn't exist
self.containers = {}
self.port_map = {}
self.last_port = 9222
def save(self):
"""Save container registry to file."""
# First load the current file to avoid overwriting other browser types
current_data = {"port_map": {}, "last_port": self.last_port}
if os.path.exists(self.registry_file):
try:
with open(self.registry_file, 'r') as f:
current_data = json.load(f)
except Exception:
pass
# Create a new port_map dictionary
updated_port_map = {}
# First, copy all non-docker entries from the existing port_map
for port_str, browser_info in current_data.get("port_map", {}).items():
if browser_info.get("browser_type") != "docker":
updated_port_map[port_str] = browser_info
# Then add all current docker container entries
for container_id, container_info in self.containers.items():
port_str = str(container_info["host_port"])
updated_port_map[port_str] = {
"browser_type": "docker",
"container_id": container_id,
"cdp_url": f"http://localhost:{port_str}",
"config_hash": container_info["config_hash"],
"created_at": container_info["created_at"]
}
# Replace the port_map with our updated version
current_data["port_map"] = updated_port_map
# Update last_port
current_data["last_port"] = self.last_port
# Ensure directory exists
os.makedirs(os.path.dirname(self.registry_file), exist_ok=True)
# Save the updated data
with open(self.registry_file, 'w') as f:
json.dump(current_data, f, indent=2)
def register_container(self, container_id: str, host_port: int, config_hash: str, cdp_json_config: Optional[str] = None):
"""Register a container with its configuration hash and port mapping.
Args:
container_id: Docker container ID
host_port: Host port mapped to container
config_hash: Hash of configuration used to create container
cdp_json_config: CDP JSON configuration if available
"""
self.containers[container_id] = {
"host_port": host_port,
"config_hash": config_hash,
"created_at": time.time()
}
# Update port_map to maintain compatibility with BuiltinBrowserStrategy
port_str = str(host_port)
self.port_map[port_str] = {
"browser_type": "docker",
"container_id": container_id,
"cdp_url": f"http://localhost:{port_str}",
"config_hash": config_hash,
"created_at": time.time()
}
if cdp_json_config:
self.port_map[port_str]["cdp_json_config"] = cdp_json_config
self.save()
def unregister_container(self, container_id: str):
"""Unregister a container.
Args:
container_id: Docker container ID to unregister
"""
if container_id in self.containers:
host_port = self.containers[container_id]["host_port"]
port_str = str(host_port)
# Remove from port_map
if port_str in self.port_map:
del self.port_map[port_str]
# Remove from containers
del self.containers[container_id]
self.save()
async def find_container_by_config(self, config_hash: str, docker_utils) -> Optional[str]:
"""Find a container that matches the given configuration hash.
Args:
config_hash: Hash of configuration to match
docker_utils: DockerUtils instance to check running containers
Returns:
Container ID if found, None otherwise
"""
# Search through port_map for entries with matching config_hash
for port_str, browser_info in self.port_map.items():
if (browser_info.get("browser_type") == "docker" and
browser_info.get("config_hash") == config_hash and
"container_id" in browser_info):
container_id = browser_info["container_id"]
if await docker_utils.is_container_running(container_id):
return container_id
return None
def get_container_host_port(self, container_id: str) -> Optional[int]:
"""Get the host port mapped to the container.
Args:
container_id: Docker container ID
Returns:
Host port if container is registered, None otherwise
"""
if container_id in self.containers:
return self.containers[container_id]["host_port"]
return None
def get_next_available_port(self, docker_utils) -> int:
"""Get the next available host port for Docker mapping.
Args:
docker_utils: DockerUtils instance to check port availability
Returns:
Available port number
"""
# Start from last port + 1
port = self.last_port + 1
# Check if port is in use (either in our registry or system-wide)
while str(port) in self.port_map or docker_utils.is_port_in_use(port):
port += 1
# Update last port
self.last_port = port
self.save()
return port
def get_container_config_hash(self, container_id: str) -> Optional[str]:
"""Get the configuration hash for a container.
Args:
container_id: Docker container ID
Returns:
Configuration hash if container is registered, None otherwise
"""
if container_id in self.containers:
return self.containers[container_id]["config_hash"]
return None
async def cleanup_stale_containers(self, docker_utils):
"""Clean up containers that are no longer running.
Args:
docker_utils: DockerUtils instance to check container status
"""
to_remove = []
# Find containers that are no longer running
for port_str, browser_info in self.port_map.items():
if browser_info.get("browser_type") == "docker" and "container_id" in browser_info:
container_id = browser_info["container_id"]
# is_container_running is a coroutine and must be awaited;
# without await the returned coroutine object is always truthy.
if not await docker_utils.is_container_running(container_id):
to_remove.append(container_id)
# Remove stale containers
for container_id in to_remove:
self.unregister_container(container_id)
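
A short sketch of the round-trip this registry supported, paired with a DockerUtils instance (hypothetical config values; the registry file defaults to the shared browser_config.json under the Crawl4AI home folder):

import asyncio
from crawl4ai.browser.docker_registry import DockerRegistry  # removed module
from crawl4ai.browser.docker_utils import DockerUtils        # removed module

async def main():
    registry = DockerRegistry()  # loads the shared registry file
    utils = DockerUtils()

    # Reuse a running container whose config hash matches, if any.
    config_hash = utils.generate_config_hash({"headless": True})
    container_id = await registry.find_container_by_config(config_hash, utils)
    if container_id is None:
        port = registry.get_next_available_port(utils)
        # ...create the container via DockerUtils, then record it:
        # registry.register_container(container_id, port, config_hash)

    # Prune registry entries whose containers are no longer running.
    await registry.cleanup_stale_containers(utils)

asyncio.run(main())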

View File

@@ -1,661 +0,0 @@
import os
import json
import asyncio
import hashlib
import tempfile
import shutil
import socket
import subprocess
from typing import Dict, List, Optional, Tuple, Union
class DockerUtils:
"""Utility class for Docker operations in browser automation.
This class provides methods for managing Docker images, containers,
and related operations needed for browser automation. It handles
image building, container lifecycle, port management, and registry operations.
Attributes:
DOCKER_FOLDER (str): Path to folder containing Docker files
DOCKER_CONNECT_FILE (str): Path to Dockerfile for connect mode
DOCKER_LAUNCH_FILE (str): Path to Dockerfile for launch mode
DOCKER_START_SCRIPT (str): Path to startup script for connect mode
DEFAULT_CONNECT_IMAGE (str): Default image name for connect mode
DEFAULT_LAUNCH_IMAGE (str): Default image name for launch mode
logger: Optional logger instance
"""
# File paths for Docker resources
DOCKER_FOLDER = os.path.join(os.path.dirname(__file__), "docker")
DOCKER_CONNECT_FILE = os.path.join(DOCKER_FOLDER, "connect.Dockerfile")
DOCKER_LAUNCH_FILE = os.path.join(DOCKER_FOLDER, "launch.Dockerfile")
DOCKER_START_SCRIPT = os.path.join(DOCKER_FOLDER, "start.sh")
# Default image names
DEFAULT_CONNECT_IMAGE = "crawl4ai/browser-connect:latest"
DEFAULT_LAUNCH_IMAGE = "crawl4ai/browser-launch:latest"
def __init__(self, logger=None):
"""Initialize Docker utilities.
Args:
logger: Optional logger for recording operations
"""
self.logger = logger
# Image Management Methods
async def check_image_exists(self, image_name: str) -> bool:
"""Check if a Docker image exists.
Args:
image_name: Name of the Docker image to check
Returns:
bool: True if the image exists, False otherwise
"""
cmd = ["docker", "image", "inspect", image_name]
try:
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
_, _ = await process.communicate()
return process.returncode == 0
except Exception as e:
if self.logger:
self.logger.debug(
f"Error checking if image exists: {str(e)}", tag="DOCKER"
)
return False
async def build_docker_image(
self,
image_name: str,
dockerfile_path: str,
files_to_copy: Dict[str, str] = None,
) -> bool:
"""Build a Docker image from a Dockerfile.
Args:
image_name: Name to give the built image
dockerfile_path: Path to the Dockerfile
files_to_copy: Dict of {dest_name: source_path} for files to copy to build context
Returns:
bool: True if image was built successfully, False otherwise
"""
# Create a temporary build context
with tempfile.TemporaryDirectory() as temp_dir:
# Copy the Dockerfile
shutil.copy(dockerfile_path, os.path.join(temp_dir, "Dockerfile"))
# Copy any additional files needed
if files_to_copy:
for dest_name, source_path in files_to_copy.items():
shutil.copy(source_path, os.path.join(temp_dir, dest_name))
# Build the image
cmd = ["docker", "build", "-t", image_name, temp_dir]
if self.logger:
self.logger.debug(
f"Building Docker image with command: {' '.join(cmd)}", tag="DOCKER"
)
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
stdout, stderr = await process.communicate()
if process.returncode != 0:
if self.logger:
self.logger.error(
message="Failed to build Docker image: {error}",
tag="DOCKER",
params={"error": stderr.decode()},
)
return False
if self.logger:
self.logger.success(
f"Successfully built Docker image: {image_name}", tag="DOCKER"
)
return True
async def ensure_docker_image_exists(
self, image_name: str, mode: str = "connect"
) -> str:
"""Ensure the required Docker image exists, creating it if necessary.
Args:
image_name: Name of the Docker image
mode: Either "connect" or "launch" to determine which image to build
Returns:
str: Name of the available Docker image
Raises:
Exception: If image doesn't exist and can't be built
"""
# If image name is not specified, use default based on mode
if not image_name:
image_name = (
self.DEFAULT_CONNECT_IMAGE
if mode == "connect"
else self.DEFAULT_LAUNCH_IMAGE
)
# Check if the image already exists
if await self.check_image_exists(image_name):
if self.logger:
self.logger.debug(
f"Docker image {image_name} already exists", tag="DOCKER"
)
return image_name
# If we're using a custom image that doesn't exist, warn and fail
if (
image_name != self.DEFAULT_CONNECT_IMAGE
and image_name != self.DEFAULT_LAUNCH_IMAGE
):
if self.logger:
self.logger.warning(
f"Custom Docker image {image_name} not found and cannot be automatically created",
tag="DOCKER",
)
raise Exception(f"Docker image {image_name} not found")
# Build the appropriate default image
if self.logger:
self.logger.info(
f"Docker image {image_name} not found, creating it now...", tag="DOCKER"
)
if mode == "connect":
success = await self.build_docker_image(
image_name,
self.DOCKER_CONNECT_FILE,
{"start.sh": self.DOCKER_START_SCRIPT},
)
else:
success = await self.build_docker_image(image_name, self.DOCKER_LAUNCH_FILE)
if not success:
raise Exception(f"Failed to create Docker image {image_name}")
return image_name
# Container Management Methods
async def create_container(
self,
image_name: str,
host_port: int,
container_name: Optional[str] = None,
volumes: List[str] = None,
network: Optional[str] = None,
env_vars: Dict[str, str] = None,
cpu_limit: float = 1.0,
memory_limit: str = "1.5g",
extra_args: List[str] = None,
) -> Optional[str]:
"""Create a new Docker container.
Args:
image_name: Docker image to use
host_port: Port on host to map to container port 9223
container_name: Optional name for the container
volumes: List of volume mappings (e.g., ["host_path:container_path"])
network: Optional Docker network to use
env_vars: Dictionary of environment variables
cpu_limit: CPU limit for the container
memory_limit: Memory limit for the container
extra_args: Additional docker run arguments
Returns:
str: Container ID if successful, None otherwise
"""
# Prepare container command
cmd = [
"docker",
"run",
"--detach",
]
# Add container name if specified
if container_name:
cmd.extend(["--name", container_name])
# Add port mapping
cmd.extend(["-p", f"{host_port}:9223"])
# Add volumes
if volumes:
for volume in volumes:
cmd.extend(["-v", volume])
# Add network if specified
if network:
cmd.extend(["--network", network])
# Add environment variables
if env_vars:
for key, value in env_vars.items():
cmd.extend(["-e", f"{key}={value}"])
# Add CPU and memory limits
if cpu_limit:
cmd.extend(["--cpus", str(cpu_limit)])
if memory_limit:
cmd.extend(["--memory", memory_limit])
cmd.extend(["--memory-swap", memory_limit])
if self.logger:
self.logger.debug(
f"Setting CPU limit: {cpu_limit}, Memory limit: {memory_limit}",
tag="DOCKER",
)
# Add extra args
if extra_args:
cmd.extend(extra_args)
# Add image
cmd.append(image_name)
if self.logger:
self.logger.debug(
f"Creating Docker container with command: {' '.join(cmd)}", tag="DOCKER"
)
# Run docker command
try:
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
stdout, stderr = await process.communicate()
if process.returncode != 0:
if self.logger:
self.logger.error(
message="Failed to create Docker container: {error}",
tag="DOCKER",
params={"error": stderr.decode()},
)
return None
# Get container ID
container_id = stdout.decode().strip()
if self.logger:
self.logger.success(
f"Created Docker container: {container_id[:12]}", tag="DOCKER"
)
return container_id
except Exception as e:
if self.logger:
self.logger.error(
message="Error creating Docker container: {error}",
tag="DOCKER",
params={"error": str(e)},
)
return None
async def is_container_running(self, container_id: str) -> bool:
"""Check if a container is running.
Args:
container_id: ID of the container to check
Returns:
bool: True if the container is running, False otherwise
"""
cmd = ["docker", "inspect", "--format", "{{.State.Running}}", container_id]
try:
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
stdout, _ = await process.communicate()
return process.returncode == 0 and stdout.decode().strip() == "true"
except Exception as e:
if self.logger:
self.logger.debug(
f"Error checking if container is running: {str(e)}", tag="DOCKER"
)
return False
async def wait_for_container_ready(
self, container_id: str, timeout: int = 30
) -> bool:
"""Wait for the container to be in running state.
Args:
container_id: ID of the container to wait for
timeout: Maximum time to wait in seconds
Returns:
bool: True if container is ready, False if timeout occurred
"""
for _ in range(timeout):
if await self.is_container_running(container_id):
return True
await asyncio.sleep(1)
if self.logger:
self.logger.warning(
f"Container {container_id[:12]} not ready after {timeout}s timeout",
tag="DOCKER",
)
return False
async def stop_container(self, container_id: str) -> bool:
"""Stop a Docker container.
Args:
container_id: ID of the container to stop
Returns:
bool: True if stopped successfully, False otherwise
"""
cmd = ["docker", "stop", container_id]
try:
process = await asyncio.create_subprocess_exec(*cmd)
await process.communicate()
if self.logger:
self.logger.debug(
f"Stopped container: {container_id[:12]}", tag="DOCKER"
)
return process.returncode == 0
except Exception as e:
if self.logger:
self.logger.warning(
message="Failed to stop container: {error}",
tag="DOCKER",
params={"error": str(e)},
)
return False
async def remove_container(self, container_id: str, force: bool = True) -> bool:
"""Remove a Docker container.
Args:
container_id: ID of the container to remove
force: Whether to force removal
Returns:
bool: True if removed successfully, False otherwise
"""
cmd = ["docker", "rm"]
if force:
cmd.append("-f")
cmd.append(container_id)
try:
process = await asyncio.create_subprocess_exec(*cmd)
await process.communicate()
if self.logger:
self.logger.debug(
f"Removed container: {container_id[:12]}", tag="DOCKER"
)
return process.returncode == 0
except Exception as e:
if self.logger:
self.logger.warning(
message="Failed to remove container: {error}",
tag="DOCKER",
params={"error": str(e)},
)
return False
# Container Command Execution Methods
async def exec_in_container(
self, container_id: str, command: List[str], detach: bool = False
) -> Tuple[int, str, str]:
"""Execute a command in a running container.
Args:
container_id: ID of the container
command: Command to execute as a list of strings
detach: Whether to run the command in detached mode
Returns:
Tuple of (return_code, stdout, stderr)
"""
cmd = ["docker", "exec"]
if detach:
cmd.append("-d")
cmd.append(container_id)
cmd.extend(command)
try:
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
stdout, stderr = await process.communicate()
return process.returncode, stdout.decode(), stderr.decode()
except Exception as e:
if self.logger:
self.logger.error(
message="Error executing command in container: {error}",
tag="DOCKER",
params={"error": str(e)},
)
return -1, "", str(e)
async def start_socat_in_container(self, container_id: str) -> bool:
"""Start socat in the container to map port 9222 to 9223.
Args:
container_id: ID of the container
Returns:
bool: True if socat started successfully, False otherwise
"""
# Command to run socat as a background process
cmd = ["socat", "TCP-LISTEN:9223,fork", "TCP:localhost:9222"]
returncode, _, stderr = await self.exec_in_container(
container_id, cmd, detach=True
)
if returncode != 0:
if self.logger:
self.logger.error(
message="Failed to start socat in container: {error}",
tag="DOCKER",
params={"error": stderr},
)
return False
if self.logger:
self.logger.debug(
f"Started socat in container: {container_id[:12]}", tag="DOCKER"
)
# Wait a moment for socat to start
await asyncio.sleep(1)
return True
async def launch_chrome_in_container(
self, container_id: str, browser_args: List[str]
) -> bool:
"""Launch Chrome inside the container with specified arguments.
Args:
container_id: ID of the container
browser_args: Chrome command line arguments
Returns:
bool: True if Chrome started successfully, False otherwise
"""
# Build Chrome command
chrome_cmd = ["chromium"]
chrome_cmd.extend(browser_args)
returncode, _, stderr = await self.exec_in_container(
container_id, chrome_cmd, detach=True
)
if returncode != 0:
if self.logger:
self.logger.error(
message="Failed to launch Chrome in container: {error}",
tag="DOCKER",
params={"error": stderr},
)
return False
if self.logger:
self.logger.debug(
f"Launched Chrome in container: {container_id[:12]}", tag="DOCKER"
)
return True
async def get_process_id_in_container(
self, container_id: str, process_name: str
) -> Optional[int]:
"""Get the process ID for a process in the container.
Args:
container_id: ID of the container
process_name: Name pattern to search for
Returns:
int: Process ID if found, None otherwise
"""
cmd = ["pgrep", "-f", process_name]
returncode, stdout, _ = await self.exec_in_container(container_id, cmd)
if returncode == 0 and stdout.strip():
pid = int(stdout.strip().split("\n")[0])
return pid
return None
async def stop_process_in_container(self, container_id: str, pid: int) -> bool:
"""Stop a process in the container by PID.
Args:
container_id: ID of the container
pid: Process ID to stop
Returns:
bool: True if process was stopped, False otherwise
"""
cmd = ["kill", "-TERM", str(pid)]
returncode, _, stderr = await self.exec_in_container(container_id, cmd)
if returncode != 0:
if self.logger:
self.logger.warning(
message="Failed to stop process in container: {error}",
tag="DOCKER",
params={"error": stderr},
)
return False
if self.logger:
self.logger.debug(
f"Stopped process {pid} in container: {container_id[:12]}", tag="DOCKER"
)
return True
# Network and Port Methods
async def wait_for_cdp_ready(self, host_port: int, timeout: int = 10) -> dict:
"""Wait for the CDP endpoint to be ready.
Args:
host_port: Port to check for CDP endpoint
timeout: Maximum time to wait in seconds
Returns:
dict: CDP JSON config if ready, None if timeout occurred
"""
import aiohttp
url = f"http://localhost:{host_port}/json/version"
for _ in range(timeout):
try:
async with aiohttp.ClientSession() as session:
async with session.get(url, timeout=1) as response:
if response.status == 200:
if self.logger:
self.logger.debug(
f"CDP endpoint ready on port {host_port}",
tag="DOCKER",
)
cdp_json_config = await response.json()
if self.logger:
self.logger.debug(
f"CDP JSON config: {cdp_json_config}", tag="DOCKER"
)
return cdp_json_config
except Exception:
pass
await asyncio.sleep(1)
if self.logger:
self.logger.warning(
f"CDP endpoint not ready on port {host_port} after {timeout}s timeout",
tag="DOCKER",
)
return None
def is_port_in_use(self, port: int) -> bool:
"""Check if a port is already in use on the host.
Args:
port: Port number to check
Returns:
bool: True if port is in use, False otherwise
"""
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
return s.connect_ex(("localhost", port)) == 0
def get_next_available_port(self, start_port: int = 9223) -> int:
"""Get the next available port starting from a given port.
Args:
start_port: Port number to start checking from
Returns:
int: First available port number
"""
port = start_port
while self.is_port_in_use(port):
port += 1
return port
# Configuration Hash Methods
def generate_config_hash(self, config_dict: Dict) -> str:
"""Generate a hash of the configuration for container matching.
Args:
config_dict: Dictionary of configuration parameters
Returns:
str: Hash string uniquely identifying this configuration
"""
# Convert to canonical JSON string and hash
config_json = json.dumps(config_dict, sort_keys=True)
return hashlib.sha256(config_json.encode()).hexdigest()
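
Taken together, the methods above implement the container lifecycle used by the Docker browser strategy. A condensed sketch of that flow (port, image, and timing values are illustrative):

import asyncio
from crawl4ai.browser.docker_utils import DockerUtils  # removed module

async def main():
    utils = DockerUtils()

    # Build the default "connect" image (socat + Chromium) if it is missing.
    image = await utils.ensure_docker_image_exists(
        utils.DEFAULT_CONNECT_IMAGE, mode="connect")

    port = utils.get_next_available_port(start_port=9223)
    container_id = await utils.create_container(image, host_port=port)
    if container_id and await utils.wait_for_container_ready(container_id):
        # socat inside the container forwards 9223 to Chromium's CDP port 9222.
        cdp_info = await utils.wait_for_cdp_ready(port)
        print(cdp_info)

    if container_id:
        await utils.stop_container(container_id)
        await utils.remove_container(container_id)

asyncio.run(main())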

View File

@@ -1,177 +0,0 @@
"""Browser manager module for Crawl4AI.
This module provides a central browser management class that uses the
strategy pattern internally while maintaining the existing API.
It also implements a page pooling mechanism for improved performance.
"""
from typing import Optional, Tuple, List
from playwright.async_api import Page, BrowserContext
from ..async_logger import AsyncLogger
from ..async_configs import BrowserConfig, CrawlerRunConfig
from .strategies import (
BaseBrowserStrategy,
PlaywrightBrowserStrategy,
CDPBrowserStrategy,
BuiltinBrowserStrategy,
DockerBrowserStrategy
)
class BrowserManager:
"""Main interface for browser management in Crawl4AI.
This class maintains backward compatibility with the existing implementation
while using the strategy pattern internally for different browser types.
Attributes:
config (BrowserConfig): Configuration object containing all browser settings
logger: Logger instance for recording events and errors
browser: The browser instance
default_context: The default browser context
managed_browser: The managed browser instance
playwright: The Playwright instance
sessions: Dictionary to store session information
session_ttl: Session timeout in seconds
"""
def __init__(self, browser_config: Optional[BrowserConfig] = None, logger: Optional[AsyncLogger] = None):
"""Initialize the BrowserManager with a browser configuration.
Args:
browser_config: Configuration object containing all browser settings
logger: Logger instance for recording events and errors
"""
self.config = browser_config or BrowserConfig()
self.logger = logger
# Create strategy based on configuration
self.strategy = self._create_strategy()
# Initialize state variables for compatibility with existing code
self.browser = None
self.default_context = None
self.managed_browser = None
self.playwright = None
# For session management (from existing implementation)
self.sessions = {}
self.session_ttl = 1800 # 30 minutes
def _create_strategy(self) -> BaseBrowserStrategy:
"""Create appropriate browser strategy based on configuration.
Returns:
BaseBrowserStrategy: The selected browser strategy
"""
if self.config.browser_mode == "builtin":
return BuiltinBrowserStrategy(self.config, self.logger)
elif self.config.browser_mode == "docker":
if DockerBrowserStrategy is None:
if self.logger:
self.logger.error(
"Docker browser strategy requested but not available. "
"Falling back to PlaywrightBrowserStrategy.",
tag="BROWSER"
)
return PlaywrightBrowserStrategy(self.config, self.logger)
return DockerBrowserStrategy(self.config, self.logger)
elif self.config.browser_mode == "cdp" or self.config.cdp_url or self.config.use_managed_browser:
return CDPBrowserStrategy(self.config, self.logger)
else:
return PlaywrightBrowserStrategy(self.config, self.logger)
async def start(self):
"""Start the browser instance and set up the default context.
Returns:
self: For method chaining
"""
# Start the strategy
await self.strategy.start()
# Update legacy references
self.browser = self.strategy.browser
self.default_context = self.strategy.default_context
# Set browser process reference (for CDP strategy)
if hasattr(self.strategy, 'browser_process'):
self.managed_browser = self.strategy
# Set Playwright reference
self.playwright = self.strategy.playwright
# Sync sessions if needed
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
self.session_ttl = self.strategy.session_ttl
return self
async def get_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
"""Get a page for the given configuration.
Args:
crawlerRunConfig: Configuration object for the crawler run
Returns:
Tuple of (Page, BrowserContext)
"""
# Delegate to strategy
page, context = await self.strategy.get_page(crawlerRunConfig)
# Sync sessions if needed
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
return page, context
async def get_pages(self, crawlerRunConfig: CrawlerRunConfig, count: int = 1) -> List[Tuple[Page, BrowserContext]]:
"""Get multiple pages with the same configuration.
This method efficiently creates multiple browser pages using the same configuration,
which is useful for parallel crawling of multiple URLs.
Args:
crawlerRunConfig: Configuration for the pages
count: Number of pages to create
Returns:
List of (Page, Context) tuples
"""
# Delegate to strategy
pages = await self.strategy.get_pages(crawlerRunConfig, count)
# Sync sessions if needed
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
return pages
# Just for legacy compatibility
async def kill_session(self, session_id: str):
"""Kill a browser session and clean up resources.
Args:
session_id: The session ID to kill
"""
# Handle kill_session via our strategy if it supports it
await self.strategy.kill_session(session_id)
# sync sessions if needed
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
async def close(self):
"""Close the browser and clean up resources."""
# Delegate to strategy
await self.strategy.close()
# Reset legacy references
self.browser = None
self.default_context = None
self.managed_browser = None
self.playwright = None
self.sessions = {}
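
The pre-pooling manager above exposed a small start / get_page / close surface. A minimal sketch of how a caller drove it, assuming the old crawl4ai.browser package path:

import asyncio
from crawl4ai import BrowserConfig, CrawlerRunConfig
from crawl4ai.browser.manager import BrowserManager  # removed module

async def main():
    manager = BrowserManager(browser_config=BrowserConfig(headless=True))
    await manager.start()
    try:
        page, context = await manager.get_page(CrawlerRunConfig())
        await page.goto("https://example.com")
        print(await page.title())
    finally:
        await manager.close()

asyncio.run(main())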

View File

@@ -1,853 +0,0 @@
"""Browser manager module for Crawl4AI.
This module provides a central browser management class that uses the
strategy pattern internally while maintaining the existing API.
It also implements browser pooling for improved performance.
"""
import asyncio
import hashlib
import json
import math
from enum import Enum
from typing import Dict, List, Optional, Tuple, Any
from playwright.async_api import Page, BrowserContext
from ..async_logger import AsyncLogger
from ..async_configs import BrowserConfig, CrawlerRunConfig
from .strategies import (
BaseBrowserStrategy,
PlaywrightBrowserStrategy,
CDPBrowserStrategy,
BuiltinBrowserStrategy,
DockerBrowserStrategy
)
class UnavailableBehavior(Enum):
"""Behavior when no browser is available."""
ON_DEMAND = "on_demand" # Create new browser on demand
PENDING = "pending" # Wait until a browser is available
EXCEPTION = "exception" # Raise an exception
class BrowserManager:
"""Main interface for browser management and pooling in Crawl4AI.
This class maintains backward compatibility with the existing implementation
while using the strategy pattern internally for different browser types.
It also implements browser pooling for improved performance.
Attributes:
config (BrowserConfig): Default configuration object for browsers
logger (AsyncLogger): Logger instance for recording events and errors
browser_pool (Dict): Dictionary to store browser instances by configuration
browser_in_use (Dict): Dictionary to track which browsers are in use
request_queues (Dict): Queues for pending requests by configuration
unavailable_behavior (UnavailableBehavior): Behavior when no browser is available
"""
def __init__(
self,
browser_config: Optional[BrowserConfig] = None,
logger: Optional[AsyncLogger] = None,
unavailable_behavior: UnavailableBehavior = UnavailableBehavior.EXCEPTION,
max_browsers_per_config: int = 10,
max_pages_per_browser: int = 5
):
"""Initialize the BrowserManager with a browser configuration.
Args:
browser_config: Configuration object containing all browser settings
logger: Logger instance for recording events and errors
unavailable_behavior: Behavior when no browser is available
max_browsers_per_config: Maximum number of browsers per configuration
max_pages_per_browser: Maximum number of pages per browser
"""
self.config = browser_config or BrowserConfig()
self.logger = logger
self.unavailable_behavior = unavailable_behavior
self.max_browsers_per_config = max_browsers_per_config
self.max_pages_per_browser = max_pages_per_browser
# Browser pool management
self.browser_pool = {} # config_hash -> list of browser strategies
self.browser_in_use = {} # strategy instance -> Boolean
self.request_queues = {} # config_hash -> asyncio.Queue()
self._browser_locks = {} # config_hash -> asyncio.Lock()
self._browser_pool_lock = asyncio.Lock() # Global lock for pool modifications
# Page pool management
self.page_pool = {} # (browser_config_hash, crawler_config_hash) -> list of (page, context, strategy)
self._page_pool_lock = asyncio.Lock()
self.browser_page_counts = {} # strategy instance -> current page count
self._page_count_lock = asyncio.Lock() # Lock for thread-safe access to page counts
# For session management (from existing implementation)
self.sessions = {}
self.session_ttl = 1800 # 30 minutes
# For legacy compatibility
self.browser = None
self.default_context = None
self.managed_browser = None
self.playwright = None
self.strategy = None
def _create_browser_config_hash(self, browser_config: BrowserConfig) -> str:
"""Create a hash of the browser configuration for browser pooling.
Args:
browser_config: Browser configuration
Returns:
str: Hash of the browser configuration
"""
# Convert config to dictionary, excluding any callable objects
config_dict = browser_config.__dict__.copy()
for key in list(config_dict.keys()):
if callable(config_dict[key]):
del config_dict[key]
# Convert to canonical JSON string
config_json = json.dumps(config_dict, sort_keys=True, default=str)
# Hash the JSON
config_hash = hashlib.sha256(config_json.encode()).hexdigest()
return config_hash
def _create_strategy(self, browser_config: BrowserConfig) -> BaseBrowserStrategy:
"""Create appropriate browser strategy based on configuration.
Args:
browser_config: Browser configuration
Returns:
BaseBrowserStrategy: The selected browser strategy
"""
if browser_config.browser_mode == "builtin":
return BuiltinBrowserStrategy(browser_config, self.logger)
elif browser_config.browser_mode == "docker":
if DockerBrowserStrategy is None:
if self.logger:
self.logger.error(
"Docker browser strategy requested but not available. "
"Falling back to PlaywrightBrowserStrategy.",
tag="BROWSER"
)
return PlaywrightBrowserStrategy(browser_config, self.logger)
return DockerBrowserStrategy(browser_config, self.logger)
elif browser_config.browser_mode == "cdp" or browser_config.cdp_url or browser_config.use_managed_browser:
return CDPBrowserStrategy(browser_config, self.logger)
else:
return PlaywrightBrowserStrategy(browser_config, self.logger)
async def initialize_pool(
self,
browser_configs: List[BrowserConfig] = None,
browsers_per_config: int = 1,
page_configs: Optional[List[Tuple[BrowserConfig, CrawlerRunConfig, int]]] = None
):
"""Initialize the browser pool with multiple browser configurations.
Args:
browser_configs: List of browser configurations to initialize
browsers_per_config: Number of browser instances per configuration
page_configs: Optional list of (browser_config, crawler_run_config, count) tuples
for pre-warming pages
Returns:
self: For method chaining
"""
if not browser_configs:
browser_configs = [self.config]
# Calculate how many browsers we'll need based on page_configs
browsers_needed = {}
if page_configs:
for browser_config, _, page_count in page_configs:
config_hash = self._create_browser_config_hash(browser_config)
# Calculate browsers based on max_pages_per_browser
browsers_needed_for_config = math.ceil(page_count / self.max_pages_per_browser)
browsers_needed[config_hash] = max(
browsers_needed.get(config_hash, 0),
browsers_needed_for_config
)
# Adjust browsers_per_config if needed to ensure enough capacity
config_browsers_needed = {}
for browser_config in browser_configs:
config_hash = self._create_browser_config_hash(browser_config)
# Estimate browsers needed based on page requirements
browsers_for_config = browsers_per_config
if config_hash in browsers_needed:
browsers_for_config = max(browsers_for_config, browsers_needed[config_hash])
config_browsers_needed[config_hash] = browsers_for_config
# Update max_browsers_per_config if needed
if browsers_for_config > self.max_browsers_per_config:
self.max_browsers_per_config = browsers_for_config
if self.logger:
self.logger.info(
f"Increased max_browsers_per_config to {browsers_for_config} to accommodate page requirements",
tag="POOL"
)
# Initialize locks and queues for each config
async with self._browser_pool_lock:
for browser_config in browser_configs:
config_hash = self._create_browser_config_hash(browser_config)
# Initialize lock for this config if needed
if config_hash not in self._browser_locks:
self._browser_locks[config_hash] = asyncio.Lock()
# Initialize queue for this config if needed
if config_hash not in self.request_queues:
self.request_queues[config_hash] = asyncio.Queue()
# Initialize pool for this config if needed
if config_hash not in self.browser_pool:
self.browser_pool[config_hash] = []
# Create browser instances for each configuration in parallel
browser_tasks = []
for browser_config in browser_configs:
config_hash = self._create_browser_config_hash(browser_config)
browsers_to_create = config_browsers_needed.get(
config_hash,
browsers_per_config
) - len(self.browser_pool.get(config_hash, []))
if browsers_to_create <= 0:
continue
for _ in range(browsers_to_create):
# Create a task for each browser initialization
task = self._create_and_add_browser(browser_config, config_hash)
browser_tasks.append(task)
# Wait for all browser initializations to complete
if browser_tasks:
if self.logger:
self.logger.info(f"Initializing {len(browser_tasks)} browsers in parallel...", tag="POOL")
await asyncio.gather(*browser_tasks)
# Pre-warm pages if requested
if page_configs:
page_tasks = []
for browser_config, crawler_run_config, count in page_configs:
task = self._prewarm_pages(browser_config, crawler_run_config, count)
page_tasks.append(task)
if page_tasks:
if self.logger:
self.logger.info(f"Pre-warming pages with {len(page_tasks)} configurations...", tag="POOL")
await asyncio.gather(*page_tasks)
# Update legacy references
if self.browser_pool and next(iter(self.browser_pool.values()), []):
strategy = next(iter(self.browser_pool.values()))[0]
self.strategy = strategy
self.browser = strategy.browser
self.default_context = strategy.default_context
self.playwright = strategy.playwright
return self
async def _create_and_add_browser(self, browser_config: BrowserConfig, config_hash: str):
"""Create and add a browser to the pool.
Args:
browser_config: Browser configuration
config_hash: Hash of the configuration
"""
try:
strategy = self._create_strategy(browser_config)
await strategy.start()
async with self._browser_pool_lock:
if config_hash not in self.browser_pool:
self.browser_pool[config_hash] = []
self.browser_pool[config_hash].append(strategy)
self.browser_in_use[strategy] = False
if self.logger:
self.logger.debug(
f"Added browser to pool: {browser_config.browser_type} "
f"({browser_config.browser_mode})",
tag="POOL"
)
except Exception as e:
if self.logger:
self.logger.error(
f"Failed to create browser: {str(e)}",
tag="POOL"
)
raise
def _make_config_signature(self, crawlerRunConfig: CrawlerRunConfig) -> str:
"""Create a signature hash from crawler configuration.
Args:
crawlerRunConfig: Crawler run configuration
Returns:
str: Hash of the crawler configuration
"""
config_dict = crawlerRunConfig.__dict__.copy()
# Exclude items that do not affect page creation
ephemeral_keys = [
"session_id",
"js_code",
"scraping_strategy",
"extraction_strategy",
"chunking_strategy",
"cache_mode",
"content_filter",
"semaphore_count",
"url"
]
for key in ephemeral_keys:
if key in config_dict:
del config_dict[key]
# Convert to canonical JSON string
config_json = json.dumps(config_dict, sort_keys=True, default=str)
# Hash the JSON
config_hash = hashlib.sha256(config_json.encode("utf-8")).hexdigest()
return config_hash
async def _prewarm_pages(
self,
browser_config: BrowserConfig,
crawler_run_config: CrawlerRunConfig,
count: int
):
"""Pre-warm pages for a specific configuration.
Args:
browser_config: Browser configuration
crawler_run_config: Crawler run configuration
count: Number of pages to pre-warm
"""
try:
# Create individual page tasks and run them in parallel
browser_config_hash = self._create_browser_config_hash(browser_config)
crawler_config_hash = self._make_config_signature(crawler_run_config)
async def get_single_page():
strategy = await self.get_available_browser(browser_config)
try:
page, context = await strategy.get_page(crawler_run_config)
# Store config hashes on the page object for later retrieval
setattr(page, "_browser_config_hash", browser_config_hash)
setattr(page, "_crawler_config_hash", crawler_config_hash)
return page, context, strategy
except Exception as e:
# Release the browser back to the pool
await self.release_browser(strategy, browser_config)
raise e
# Create tasks for parallel execution
page_tasks = [get_single_page() for _ in range(count)]
# Execute all page creation tasks in parallel
pages_contexts_strategies = await asyncio.gather(*page_tasks)
# Add pages to the page pool
browser_config_hash = self._create_browser_config_hash(browser_config)
crawler_config_hash = self._make_config_signature(crawler_run_config)
pool_key = (browser_config_hash, crawler_config_hash)
async with self._page_pool_lock:
if pool_key not in self.page_pool:
self.page_pool[pool_key] = []
# Add all pages to the pool
self.page_pool[pool_key].extend(pages_contexts_strategies)
if self.logger:
self.logger.debug(
f"Pre-warmed {count} pages in parallel with config {crawler_run_config}",
tag="POOL"
)
except Exception as e:
if self.logger:
self.logger.error(
f"Failed to pre-warm pages: {str(e)}",
tag="POOL"
)
raise
async def get_available_browser(
self,
browser_config: Optional[BrowserConfig] = None
) -> BaseBrowserStrategy:
"""Get an available browser from the pool for the given configuration.
Args:
browser_config: Browser configuration to match
Returns:
BaseBrowserStrategy: An available browser strategy
Raises:
Exception: If no browser is available and behavior is EXCEPTION
"""
browser_config = browser_config or self.config
config_hash = self._create_browser_config_hash(browser_config)
async with self._browser_locks.get(config_hash, asyncio.Lock()):
# Check if we have browsers for this config
if config_hash not in self.browser_pool or not self.browser_pool[config_hash]:
if self.unavailable_behavior == UnavailableBehavior.ON_DEMAND:
# Create a new browser on demand
if self.logger:
self.logger.info(
f"1> Creating new browser on demand for config {config_hash[:8]}",
tag="POOL"
)
# Initialize pool for this config if needed
async with self._browser_pool_lock:
if config_hash not in self.browser_pool:
self.browser_pool[config_hash] = []
strategy = self._create_strategy(browser_config)
await strategy.start()
self.browser_pool[config_hash].append(strategy)
self.browser_in_use[strategy] = False
elif self.unavailable_behavior == UnavailableBehavior.EXCEPTION:
raise Exception(f"No browsers available for configuration {config_hash[:8]}")
# Check for an available browser with capacity in the pool
for strategy in self.browser_pool[config_hash]:
# Check if this browser has capacity for more pages
async with self._page_count_lock:
current_pages = self.browser_page_counts.get(strategy, 0)
if current_pages < self.max_pages_per_browser:
# Increment the page count
self.browser_page_counts[strategy] = current_pages + 1
self.browser_in_use[strategy] = True
# Get browser information for better logging
browser_type = getattr(strategy.config, 'browser_type', 'unknown')
browser_mode = getattr(strategy.config, 'browser_mode', 'unknown')
strategy_id = id(strategy) # Use object ID as a unique identifier
if self.logger:
self.logger.debug(
f"Selected browser #{strategy_id} ({browser_type}/{browser_mode}) - "
f"pages: {current_pages+1}/{self.max_pages_per_browser}",
tag="POOL"
)
return strategy
# All browsers are at capacity or in use
if self.unavailable_behavior == UnavailableBehavior.ON_DEMAND:
# Check if we've reached the maximum number of browsers
if len(self.browser_pool[config_hash]) >= self.max_browsers_per_config:
if self.logger:
self.logger.warning(
f"Maximum browsers reached for config {config_hash[:8]} and all at page capacity",
tag="POOL"
)
# The original inner check against EXCEPTION could never be true inside
# this ON_DEMAND branch; raise unconditionally rather than exceed the cap.
raise Exception("Maximum browsers reached and all at page capacity")
# Create a new browser on demand
if self.logger:
self.logger.info(
f"2> Creating new browser on demand for config {config_hash[:8]}",
tag="POOL"
)
strategy = self._create_strategy(browser_config)
await strategy.start()
async with self._browser_pool_lock:
self.browser_pool[config_hash].append(strategy)
self.browser_in_use[strategy] = True
return strategy
# If we get here, either behavior is EXCEPTION or PENDING
if self.unavailable_behavior == UnavailableBehavior.EXCEPTION:
raise Exception(f"All browsers in use or at page capacity for configuration {config_hash[:8]}")
# For PENDING behavior, set up waiting mechanism
if config_hash not in self.request_queues:
self.request_queues[config_hash] = asyncio.Queue()
# Create a future to wait on
future = asyncio.Future()
await self.request_queues[config_hash].put(future)
if self.logger:
self.logger.debug(
f"Waiting for available browser for config {config_hash[:8]}",
tag="POOL"
)
# Wait for a browser to become available
strategy = await future
# The releaser decremented the page count before handing the browser over,
# so account here for the page this waiter is about to open.
async with self._page_count_lock:
self.browser_page_counts[strategy] = self.browser_page_counts.get(strategy, 0) + 1
self.browser_in_use[strategy] = True
return strategy
async def get_page(
self,
crawlerRunConfig: CrawlerRunConfig,
browser_config: Optional[BrowserConfig] = None
) -> Tuple[Page, BrowserContext, BaseBrowserStrategy]:
"""Get a page from the browser pool."""
browser_config = browser_config or self.config
# Check if we have a pre-warmed page available
browser_config_hash = self._create_browser_config_hash(browser_config)
crawler_config_hash = self._make_config_signature(crawlerRunConfig)
pool_key = (browser_config_hash, crawler_config_hash)
# Try to get a page from the pool
async with self._page_pool_lock:
if pool_key in self.page_pool and self.page_pool[pool_key]:
# Get a page from the pool
page, context, strategy = self.page_pool[pool_key].pop()
# Mark browser as in use (it already is, but ensure consistency)
self.browser_in_use[strategy] = True
if self.logger:
self.logger.debug(
f"Using pre-warmed page for config {crawler_config_hash[:8]}",
tag="POOL"
)
# Note: We don't increment page count since it was already counted when created
return page, context, strategy
# No pre-warmed page available, create a new one
# get_available_browser already increments the page count
strategy = await self.get_available_browser(browser_config)
try:
# Get a page from the browser
page, context = await strategy.get_page(crawlerRunConfig)
# Store config hashes on the page object for later retrieval
setattr(page, "_browser_config_hash", browser_config_hash)
setattr(page, "_crawler_config_hash", crawler_config_hash)
return page, context, strategy
except Exception as e:
# Release the browser back to the pool and decrement the page count
await self.release_browser(strategy, browser_config, decrement_page_count=True)
raise e
async def release_page(
self,
page: Page,
strategy: BaseBrowserStrategy,
browser_config: Optional[BrowserConfig] = None,
keep_alive: bool = True,
return_to_pool: bool = True
):
"""Release a page back to the pool."""
browser_config = browser_config or self.config
page_url = page.url if page else None
# If not keeping the page alive, close it and decrement count
if not keep_alive:
try:
await page.close()
except Exception as e:
if self.logger:
self.logger.error(
f"Error closing page: {str(e)}",
tag="POOL"
)
# Release the browser with page count decrement
await self.release_browser(strategy, browser_config, decrement_page_count=True)
return
# If returning to pool
if return_to_pool:
# Get the configuration hashes from the page object
browser_config_hash = getattr(page, "_browser_config_hash", None)
crawler_config_hash = getattr(page, "_crawler_config_hash", None)
if browser_config_hash and crawler_config_hash:
pool_key = (browser_config_hash, crawler_config_hash)
async with self._page_pool_lock:
if pool_key not in self.page_pool:
self.page_pool[pool_key] = []
# Add page back to the pool
self.page_pool[pool_key].append((page, page.context, strategy))
if self.logger:
self.logger.debug(
f"Returned page to pool for config {crawler_config_hash[:8]}, url: {page_url}",
tag="POOL"
)
# Note: We don't decrement the page count here since the page is still "in use"
# from the browser's perspective, just in our pool
return
else:
# If we can't identify the configuration, log a warning
if self.logger:
self.logger.warning(
"Cannot return page to pool - missing configuration hashes",
tag="POOL"
)
# If we got here, we couldn't return to pool, so just release the browser
await self.release_browser(strategy, browser_config, decrement_page_count=True)
async def release_browser(
self,
strategy: BaseBrowserStrategy,
browser_config: Optional[BrowserConfig] = None,
decrement_page_count: bool = True
):
"""Release a browser back to the pool."""
browser_config = browser_config or self.config
config_hash = self._create_browser_config_hash(browser_config)
# Decrement page count
if decrement_page_count:
async with self._page_count_lock:
current_count = self.browser_page_counts.get(strategy, 1)
self.browser_page_counts[strategy] = max(0, current_count - 1)
if self.logger:
self.logger.debug(
f"Decremented page count for browser (now: {self.browser_page_counts[strategy]})",
tag="POOL"
)
# Mark as not in use
self.browser_in_use[strategy] = False
# Process any waiting requests
if config_hash in self.request_queues and not self.request_queues[config_hash].empty():
future = await self.request_queues[config_hash].get()
if not future.done():
future.set_result(strategy)
async def get_pages(
self,
crawlerRunConfig: CrawlerRunConfig,
count: int = 1,
browser_config: Optional[BrowserConfig] = None
) -> List[Tuple[Page, BrowserContext, BaseBrowserStrategy]]:
"""Get multiple pages from the browser pool.
Args:
crawlerRunConfig: Configuration for the crawler run
count: Number of pages to get
browser_config: Browser configuration to use
Returns:
List of (Page, Context, Strategy) tuples
"""
results = []
for _ in range(count):
try:
result = await self.get_page(crawlerRunConfig, browser_config)
results.append(result)
except Exception as e:
# Release any pages we've already gotten
for page, _, strategy in results:
await self.release_page(page, strategy, browser_config)
raise e
return results
async def get_page_pool_status(self) -> Dict[str, Any]:
"""Get information about the page pool status.
Returns:
Dict with page pool status information
"""
status = {
"total_pooled_pages": 0,
"configs": {}
}
async with self._page_pool_lock:
for (browser_hash, crawler_hash), pages in self.page_pool.items():
config_key = f"{browser_hash[:8]}_{crawler_hash[:8]}"
status["configs"][config_key] = len(pages)
status["total_pooled_pages"] += len(pages)
if self.logger:
self.logger.debug(
f"Page pool status: {status['total_pooled_pages']} pages available",
tag="POOL"
)
return status
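# Shape of the returned status dict (hashes abbreviated, counts illustrative):
#
#   {
#       "total_pooled_pages": 3,
#       "configs": {"a1b2c3d4_e5f6a7b8": 3},
#   }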
async def get_pool_status(self) -> Dict[str, Any]:
"""Get information about the browser pool status.
Returns:
Dict with pool status information
"""
status = {
"total_browsers": 0,
"browsers_in_use": 0,
"total_pages": 0,
"configs": {}
}
for config_hash, strategies in self.browser_pool.items():
config_pages = 0
in_use = 0
for strategy in strategies:
is_in_use = self.browser_in_use.get(strategy, False)
if is_in_use:
in_use += 1
# Get page count for this browser
try:
page_count = len(await strategy.get_opened_pages())
config_pages += page_count
except Exception as e:
if self.logger:
self.logger.error(f"Error getting page count: {str(e)}", tag="POOL")
config_status = {
"total_browsers": len(strategies),
"browsers_in_use": in_use,
"pages_open": config_pages,
"waiting_requests": self.request_queues.get(config_hash, asyncio.Queue()).qsize(),
"max_capacity": len(strategies) * self.max_pages_per_browser,
"utilization_pct": round((config_pages / (len(strategies) * self.max_pages_per_browser)) * 100, 1)
if strategies else 0
}
status["configs"][config_hash] = config_status
status["total_browsers"] += config_status["total_browsers"]
status["browsers_in_use"] += config_status["browsers_in_use"]
status["total_pages"] += config_pages
# Add overall utilization
if status["total_browsers"] > 0:
max_capacity = status["total_browsers"] * self.max_pages_per_browser
status["overall_utilization_pct"] = round((status["total_pages"] / max_capacity) * 100, 1)
else:
status["overall_utilization_pct"] = 0
return status
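# Shape of the returned status dict (values illustrative):
#
#   {
#       "total_browsers": 2,
#       "browsers_in_use": 1,
#       "total_pages": 5,
#       "overall_utilization_pct": 25.0,
#       "configs": {
#           "<config_hash>": {
#               "total_browsers": 2, "browsers_in_use": 1, "pages_open": 5,
#               "waiting_requests": 0, "max_capacity": 20, "utilization_pct": 25.0,
#           },
#       },
#   }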
async def start(self):
"""Start at least one browser instance in the pool.
This method is kept for backward compatibility.
Returns:
self: For method chaining
"""
await self.initialize_pool([self.config], 1)
return self
async def kill_session(self, session_id: str):
"""Kill a browser session and clean up resources.
Delegated to the strategy. This method is kept for backward compatibility.
Args:
session_id: The session ID to kill
"""
if not self.strategy:
return
await self.strategy.kill_session(session_id)
# Sync sessions
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
async def close(self):
"""Close all browsers in the pool and clean up resources."""
# Close all browsers in the pool
for strategies in self.browser_pool.values():
for strategy in strategies:
try:
await strategy.close()
except Exception as e:
if self.logger:
self.logger.error(
f"Error closing browser: {str(e)}",
tag="POOL"
)
# Clear pool data
self.browser_pool = {}
self.browser_in_use = {}
# Reset legacy references
self.browser = None
self.default_context = None
self.managed_browser = None
self.playwright = None
self.strategy = None
self.sessions = {}
async def create_browser_manager(
browser_config: Optional[BrowserConfig] = None,
logger: Optional[AsyncLogger] = None,
unavailable_behavior: UnavailableBehavior = UnavailableBehavior.EXCEPTION,
max_browsers_per_config: int = 10,
initial_pool_size: int = 1,
page_configs: Optional[List[Tuple[BrowserConfig, CrawlerRunConfig, int]]] = None
) -> BrowserManager:
"""Factory function to create and initialize a BrowserManager.
Args:
browser_config: Configuration for the browsers
logger: Logger for recording events
unavailable_behavior: Behavior when no browser is available
max_browsers_per_config: Maximum browsers per configuration
initial_pool_size: Initial number of browsers per configuration
page_configs: Optional configurations for pre-warming pages
Returns:
Initialized BrowserManager
"""
manager = BrowserManager(
browser_config=browser_config,
logger=logger,
unavailable_behavior=unavailable_behavior,
max_browsers_per_config=max_browsers_per_config
)
await manager.initialize_pool(
[browser_config] if browser_config else None,
initial_pool_size,
page_configs
)
return manager
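A minimal usage sketch for the factory above. The keyword arguments to BrowserConfig and CrawlerRunConfig are illustrative assumptions rather than a confirmed API, and the names are taken from this module's own definitions; cleanup follows the release_page and close methods defined earlier.

import asyncio

async def main():
    manager = await create_browser_manager(
        browser_config=BrowserConfig(browser_type="chromium", headless=True),
        unavailable_behavior=UnavailableBehavior.ON_DEMAND,
        max_browsers_per_config=4,
        initial_pool_size=2,
    )
    run_config = CrawlerRunConfig(url="https://example.com")
    page, context, strategy = await manager.get_page(run_config)
    try:
        await page.goto(run_config.url)
    finally:
        # keep_alive=False closes the page and decrements the pool's page count
        await manager.release_page(page, strategy, keep_alive=False)
        await manager.close()

asyncio.run(main())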


@@ -1,143 +0,0 @@
"""Docker configuration module for Crawl4AI browser automation.
This module provides configuration classes for Docker-based browser automation,
allowing flexible configuration of Docker containers for browsing.
"""
from typing import Dict, List, Optional
class DockerConfig:
"""Configuration for Docker-based browser automation.
This class contains Docker-specific settings to avoid cluttering BrowserConfig.
Attributes:
mode (str): Docker operation mode - "connect" or "launch".
- "connect": Uses a container with Chrome already running
- "launch": Dynamically configures and starts Chrome in container
image (str): Docker image to use. If None, defaults from DockerUtils are used.
registry_file (str): Path to container registry file for persistence.
persistent (bool): Keep container running after browser closes.
remove_on_exit (bool): Remove container on exit when not persistent.
network (str): Docker network to use.
volumes (List[str]): Volume mappings (e.g., ["host_path:container_path"]).
env_vars (Dict[str, str]): Environment variables to set in container.
extra_args (List[str]): Additional docker run arguments.
host_port (int): Host port to map to container's 9223 port.
user_data_dir (str): Path to user data directory on host.
container_user_data_dir (str): Path to user data directory in container.
"""
def __init__(
self,
mode: str = "connect", # "connect" or "launch"
image: Optional[str] = None, # Docker image to use
registry_file: Optional[str] = None, # Path to registry file
persistent: bool = False, # Keep container running after browser closes
remove_on_exit: bool = True, # Remove container on exit when not persistent
network: Optional[str] = None, # Docker network to use
volumes: List[str] = None, # Volume mappings
cpu_limit: float = 1.0, # CPU limit for the container
memory_limit: str = "1.5g", # Memory limit for the container
env_vars: Dict[str, str] = None, # Environment variables
host_port: Optional[int] = None, # Host port to map to container's 9223
user_data_dir: Optional[str] = None, # Path to user data directory on host
container_user_data_dir: str = "/data", # Path to user data directory in container
extra_args: List[str] = None, # Additional docker run arguments
):
"""Initialize Docker configuration.
Args:
mode: Docker operation mode ("connect" or "launch")
image: Docker image to use
registry_file: Path to container registry file
persistent: Whether to keep container running after browser closes
remove_on_exit: Whether to remove container on exit when not persistent
network: Docker network to use
volumes: Volume mappings as list of strings
cpu_limit: CPU limit for the container
memory_limit: Memory limit for the container
env_vars: Environment variables as dictionary
extra_args: Additional docker run arguments
host_port: Host port to map to container's 9223
user_data_dir: Path to user data directory on host
container_user_data_dir: Path to user data directory in container
"""
self.mode = mode
self.image = image # If None, defaults will be used from DockerUtils
self.registry_file = registry_file
self.persistent = persistent
self.remove_on_exit = remove_on_exit
self.network = network
self.volumes = volumes or []
self.cpu_limit = cpu_limit
self.memory_limit = memory_limit
self.env_vars = env_vars or {}
self.extra_args = extra_args or []
self.host_port = host_port
self.user_data_dir = user_data_dir
self.container_user_data_dir = container_user_data_dir
def to_dict(self) -> Dict:
"""Convert this configuration to a dictionary.
Returns:
Dictionary representation of this configuration
"""
return {
"mode": self.mode,
"image": self.image,
"registry_file": self.registry_file,
"persistent": self.persistent,
"remove_on_exit": self.remove_on_exit,
"network": self.network,
"volumes": self.volumes,
"cpu_limit": self.cpu_limit,
"memory_limit": self.memory_limit,
"env_vars": self.env_vars,
"extra_args": self.extra_args,
"host_port": self.host_port,
"user_data_dir": self.user_data_dir,
"container_user_data_dir": self.container_user_data_dir
}
@staticmethod
def from_kwargs(kwargs: Dict) -> "DockerConfig":
"""Create a DockerConfig from a dictionary of keyword arguments.
Args:
kwargs: Dictionary of configuration options
Returns:
New DockerConfig instance
"""
return DockerConfig(
mode=kwargs.get("mode", "connect"),
image=kwargs.get("image"),
registry_file=kwargs.get("registry_file"),
persistent=kwargs.get("persistent", False),
remove_on_exit=kwargs.get("remove_on_exit", True),
network=kwargs.get("network"),
volumes=kwargs.get("volumes"),
cpu_limit=kwargs.get("cpu_limit", 1.0),
memory_limit=kwargs.get("memory_limit", "1.5g"),
env_vars=kwargs.get("env_vars"),
extra_args=kwargs.get("extra_args"),
host_port=kwargs.get("host_port"),
user_data_dir=kwargs.get("user_data_dir"),
container_user_data_dir=kwargs.get("container_user_data_dir", "/data")
)
def clone(self, **kwargs) -> "DockerConfig":
"""Create a copy of this configuration with updated values.
Args:
**kwargs: Key-value pairs of configuration options to update
Returns:
DockerConfig: A new instance with the specified updates
"""
config_dict = self.to_dict()
config_dict.update(kwargs)
return DockerConfig.from_kwargs(config_dict)
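An illustrative round-trip showing how clone() layers updates over a copy of the existing settings via to_dict()/from_kwargs():

base = DockerConfig(mode="launch", memory_limit="2g")
tuned = base.clone(cpu_limit=2.0, persistent=True)

assert base.cpu_limit == 1.0           # the original is untouched
assert tuned.memory_limit == "2g"      # inherited from base
assert tuned.persistent is True        # overridden by clone()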


@@ -1,457 +0,0 @@
"""Browser profile management module for Crawl4AI.
This module provides functionality for creating and managing browser profiles
that can be used for authenticated browsing.
"""
import os
import asyncio
import signal
import sys
import datetime
import uuid
import shutil
from typing import List, Dict, Optional, Any
from colorama import Fore, Style, init
from ..async_configs import BrowserConfig
from ..async_logger import AsyncLogger, AsyncLoggerBase
from ..utils import get_home_folder
class BrowserProfileManager:
"""Manages browser profiles for Crawl4AI.
This class provides functionality to create and manage browser profiles
that can be used for authenticated browsing with Crawl4AI.
Profiles are stored by default in ~/.crawl4ai/profiles/
"""
def __init__(self, logger: Optional[AsyncLoggerBase] = None):
"""Initialize the BrowserProfileManager.
Args:
logger: Logger for outputting messages. If None, a default AsyncLogger is created.
"""
# Initialize colorama for colorful terminal output
init()
# Create a logger if not provided
if logger is None:
self.logger = AsyncLogger(verbose=True)
elif not isinstance(logger, AsyncLoggerBase):
self.logger = AsyncLogger(verbose=True)
else:
self.logger = logger
# Ensure profiles directory exists
self.profiles_dir = os.path.join(get_home_folder(), "profiles")
os.makedirs(self.profiles_dir, exist_ok=True)
async def create_profile(self,
profile_name: Optional[str] = None,
browser_config: Optional[BrowserConfig] = None) -> Optional[str]:
"""Create a browser profile interactively.
Args:
profile_name: Name for the profile. If None, a name is generated.
browser_config: Configuration for the browser. If None, a default configuration is used.
Returns:
Path to the created profile directory, or None if creation failed
"""
# Create default browser config if none provided
if browser_config is None:
browser_config = BrowserConfig(
browser_type="chromium",
headless=False, # Must be visible for user interaction
verbose=True
)
else:
# Ensure headless is False for user interaction
browser_config.headless = False
# Generate profile name if not provided
if not profile_name:
timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
profile_name = f"profile_{timestamp}_{uuid.uuid4().hex[:6]}"
# Sanitize profile name (replace spaces and special chars)
profile_name = "".join(c if c.isalnum() or c in "-_" else "_" for c in profile_name)
# Set user data directory
profile_path = os.path.join(self.profiles_dir, profile_name)
os.makedirs(profile_path, exist_ok=True)
# Print instructions for the user with colorama formatting
border = f"{Fore.CYAN}{'='*80}{Style.RESET_ALL}"
self.logger.info(f"\n{border}", tag="PROFILE")
self.logger.info(f"Creating browser profile: {Fore.GREEN}{profile_name}{Style.RESET_ALL}", tag="PROFILE")
self.logger.info(f"Profile directory: {Fore.YELLOW}{profile_path}{Style.RESET_ALL}", tag="PROFILE")
self.logger.info("\nInstructions:", tag="PROFILE")
self.logger.info("1. A browser window will open for you to set up your profile.", tag="PROFILE")
self.logger.info(f"2. {Fore.CYAN}Log in to websites{Style.RESET_ALL}, configure settings, etc. as needed.", tag="PROFILE")
self.logger.info(f"3. When you're done, {Fore.YELLOW}press 'q' in this terminal{Style.RESET_ALL} to close the browser.", tag="PROFILE")
self.logger.info("4. The profile will be saved and ready to use with Crawl4AI.", tag="PROFILE")
self.logger.info(f"{border}\n", tag="PROFILE")
# Import the necessary classes with local imports to avoid circular references
from .strategies import CDPBrowserStrategy
# Set browser config to use the profile path
browser_config.user_data_dir = profile_path
# Create a CDP browser strategy for the profile creation
browser_strategy = CDPBrowserStrategy(browser_config, self.logger)
# Set up signal handlers to ensure cleanup on interrupt
original_sigint = signal.getsignal(signal.SIGINT)
original_sigterm = signal.getsignal(signal.SIGTERM)
# Define cleanup handler for signals
async def cleanup_handler(sig, frame):
self.logger.warning("\nCleaning up browser process...", tag="PROFILE")
await browser_strategy.close()
# Restore original signal handlers
signal.signal(signal.SIGINT, original_sigint)
signal.signal(signal.SIGTERM, original_sigterm)
if sig == signal.SIGINT:
self.logger.error("Profile creation interrupted. Profile may be incomplete.", tag="PROFILE")
sys.exit(1)
# Set signal handlers
def sigint_handler(sig, frame):
asyncio.create_task(cleanup_handler(sig, frame))
signal.signal(signal.SIGINT, sigint_handler)
signal.signal(signal.SIGTERM, sigint_handler)
# Event to signal when user is done with the browser
user_done_event = asyncio.Event()
# Run keyboard input loop in a separate task
async def listen_for_quit_command():
import termios
import tty
import select
# First output the prompt
self.logger.info(f"{Fore.CYAN}Press '{Fore.WHITE}q{Fore.CYAN}' when you've finished using the browser...{Style.RESET_ALL}", tag="PROFILE")
# Save original terminal settings
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
# Switch to non-canonical mode (no line buffering)
tty.setcbreak(fd)
while True:
# Check if input is available (non-blocking)
readable, _, _ = select.select([sys.stdin], [], [], 0.5)
if readable:
key = sys.stdin.read(1)
if key.lower() == 'q':
self.logger.info(f"{Fore.GREEN}Closing browser and saving profile...{Style.RESET_ALL}", tag="PROFILE")
user_done_event.set()
return
# Check if the browser process has already exited
if browser_strategy.browser_process and browser_strategy.browser_process.poll() is not None:
self.logger.info("Browser already closed. Ending input listener.", tag="PROFILE")
user_done_event.set()
return
await asyncio.sleep(0.1)
finally:
# Restore terminal settings
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
try:
# Start the browser
await browser_strategy.start()
# Check if browser started successfully
if not browser_strategy.browser_process:
self.logger.error("Failed to start browser process.", tag="PROFILE")
return None
self.logger.info(f"Browser launched. {Fore.CYAN}Waiting for you to finish...{Style.RESET_ALL}", tag="PROFILE")
# Start listening for keyboard input
listener_task = asyncio.create_task(listen_for_quit_command())
# Wait for either the user to press 'q' or for the browser process to exit naturally
while not user_done_event.is_set() and browser_strategy.browser_process.poll() is None:
await asyncio.sleep(0.5)
# Cancel the listener task if it's still running
if not listener_task.done():
listener_task.cancel()
try:
await listener_task
except asyncio.CancelledError:
pass
# If the browser is still running and the user pressed 'q', terminate it
if browser_strategy.browser_process.poll() is None and user_done_event.is_set():
self.logger.info("Terminating browser process...", tag="PROFILE")
await browser_strategy.close()
self.logger.success(f"Browser closed. Profile saved at: {Fore.GREEN}{profile_path}{Style.RESET_ALL}", tag="PROFILE")
except Exception as e:
self.logger.error(f"Error creating profile: {str(e)}", tag="PROFILE")
await browser_strategy.close()
return None
finally:
# Restore original signal handlers
signal.signal(signal.SIGINT, original_sigint)
signal.signal(signal.SIGTERM, original_sigterm)
# Make sure browser is fully cleaned up
await browser_strategy.close()
# Return the profile path
return profile_path
def list_profiles(self) -> List[Dict[str, Any]]:
"""List all available browser profiles.
Returns:
List of dictionaries containing profile information
"""
if not os.path.exists(self.profiles_dir):
return []
profiles = []
for name in os.listdir(self.profiles_dir):
profile_path = os.path.join(self.profiles_dir, name)
# Skip if not a directory
if not os.path.isdir(profile_path):
continue
# Check if this looks like a valid browser profile
# For Chromium: Look for Preferences file
# For Firefox: Look for prefs.js file
is_valid = False
if os.path.exists(os.path.join(profile_path, "Preferences")) or \
os.path.exists(os.path.join(profile_path, "Default", "Preferences")):
is_valid = "chromium"
elif os.path.exists(os.path.join(profile_path, "prefs.js")):
is_valid = "firefox"
if is_valid:
# Get creation time
created = datetime.datetime.fromtimestamp(
os.path.getctime(profile_path)
)
profiles.append({
"name": name,
"path": profile_path,
"created": created,
"type": is_valid
})
# Sort by creation time, newest first
profiles.sort(key=lambda x: x["created"], reverse=True)
return profiles
def get_profile_path(self, profile_name: str) -> Optional[str]:
"""Get the full path to a profile by name.
Args:
profile_name: Name of the profile (not the full path)
Returns:
Full path to the profile directory, or None if not found
"""
profile_path = os.path.join(self.profiles_dir, profile_name)
# Check if path exists and is a valid profile
if not os.path.isdir(profile_path):
# Check if profile_name itself is a full path
if os.path.isabs(profile_name):
profile_path = profile_name
else:
return None
# Look for profile indicators
is_profile = (
os.path.exists(os.path.join(profile_path, "Preferences")) or
os.path.exists(os.path.join(profile_path, "Default", "Preferences")) or
os.path.exists(os.path.join(profile_path, "prefs.js"))
)
if not is_profile:
return None # Not a valid browser profile
return profile_path
def delete_profile(self, profile_name_or_path: str) -> bool:
"""Delete a browser profile by name or path.
Args:
profile_name_or_path: Name of the profile or full path to profile directory
Returns:
True if the profile was deleted successfully, False otherwise
"""
# Determine if input is a name or a path
if os.path.isabs(profile_name_or_path):
# Full path provided
profile_path = profile_name_or_path
else:
# Just a name provided, construct path
profile_path = os.path.join(self.profiles_dir, profile_name_or_path)
# Check if path exists and is a valid profile
if not os.path.isdir(profile_path):
return False
# Look for profile indicators
is_profile = (
os.path.exists(os.path.join(profile_path, "Preferences")) or
os.path.exists(os.path.join(profile_path, "Default", "Preferences")) or
os.path.exists(os.path.join(profile_path, "prefs.js"))
)
if not is_profile:
return False # Not a valid browser profile
# Delete the profile directory
try:
shutil.rmtree(profile_path)
return True
except Exception:
return False
async def interactive_manager(self, crawl_callback=None):
"""Launch an interactive profile management console.
Args:
crawl_callback: Function to call when selecting option to use
a profile for crawling. It will be called with (profile_path, url).
"""
while True:
self.logger.info(f"\n{Fore.CYAN}Profile Management Options:{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"1. {Fore.GREEN}Create a new profile{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"2. {Fore.YELLOW}List available profiles{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"3. {Fore.RED}Delete a profile{Style.RESET_ALL}", tag="MENU")
# Only show crawl option if callback provided
if crawl_callback:
self.logger.info(f"4. {Fore.CYAN}Use a profile to crawl a website{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"5. {Fore.MAGENTA}Exit{Style.RESET_ALL}", tag="MENU")
exit_option = "5"
else:
self.logger.info(f"4. {Fore.MAGENTA}Exit{Style.RESET_ALL}", tag="MENU")
exit_option = "4"
choice = input(f"\n{Fore.CYAN}Enter your choice (1-{exit_option}): {Style.RESET_ALL}")
if choice == "1":
# Create new profile
name = input(f"{Fore.GREEN}Enter a name for the new profile (or press Enter for auto-generated name): {Style.RESET_ALL}")
await self.create_profile(name or None)
elif choice == "2":
# List profiles
profiles = self.list_profiles()
if not profiles:
self.logger.warning(" No profiles found. Create one first with option 1.", tag="PROFILES")
continue
# Print profile information with colorama formatting
self.logger.info("\nAvailable profiles:", tag="PROFILES")
for i, profile in enumerate(profiles):
self.logger.info(f"[{i+1}] {Fore.CYAN}{profile['name']}{Style.RESET_ALL}", tag="PROFILES")
self.logger.info(f" Path: {Fore.YELLOW}{profile['path']}{Style.RESET_ALL}", tag="PROFILES")
self.logger.info(f" Created: {profile['created'].strftime('%Y-%m-%d %H:%M:%S')}", tag="PROFILES")
self.logger.info(f" Browser type: {profile['type']}", tag="PROFILES")
self.logger.info("", tag="PROFILES") # Empty line for spacing
elif choice == "3":
# Delete profile
profiles = self.list_profiles()
if not profiles:
self.logger.warning("No profiles found to delete", tag="PROFILES")
continue
# Display numbered list
self.logger.info(f"\n{Fore.YELLOW}Available profiles:{Style.RESET_ALL}", tag="PROFILES")
for i, profile in enumerate(profiles):
self.logger.info(f"[{i+1}] {profile['name']}", tag="PROFILES")
# Get profile to delete
profile_idx = input(f"{Fore.RED}Enter the number of the profile to delete (or 'c' to cancel): {Style.RESET_ALL}")
if profile_idx.lower() == 'c':
continue
try:
idx = int(profile_idx) - 1
if 0 <= idx < len(profiles):
profile_name = profiles[idx]["name"]
self.logger.info(f"Deleting profile: {Fore.YELLOW}{profile_name}{Style.RESET_ALL}", tag="PROFILES")
# Confirm deletion
confirm = input(f"{Fore.RED}Are you sure you want to delete this profile? (y/n): {Style.RESET_ALL}")
if confirm.lower() == 'y':
success = self.delete_profile(profiles[idx]["path"])
if success:
self.logger.success(f"Profile {Fore.GREEN}{profile_name}{Style.RESET_ALL} deleted successfully", tag="PROFILES")
else:
self.logger.error(f"Failed to delete profile {Fore.RED}{profile_name}{Style.RESET_ALL}", tag="PROFILES")
else:
self.logger.error("Invalid profile number", tag="PROFILES")
except ValueError:
self.logger.error("Please enter a valid number", tag="PROFILES")
elif choice == "4" and crawl_callback:
# Use profile to crawl a site
profiles = self.list_profiles()
if not profiles:
self.logger.warning("No profiles found. Create one first.", tag="PROFILES")
continue
# Display numbered list
self.logger.info(f"\n{Fore.YELLOW}Available profiles:{Style.RESET_ALL}", tag="PROFILES")
for i, profile in enumerate(profiles):
self.logger.info(f"[{i+1}] {profile['name']}", tag="PROFILES")
# Get profile to use
profile_idx = input(f"{Fore.CYAN}Enter the number of the profile to use (or 'c' to cancel): {Style.RESET_ALL}")
if profile_idx.lower() == 'c':
continue
try:
idx = int(profile_idx) - 1
if 0 <= idx < len(profiles):
profile_path = profiles[idx]["path"]
url = input(f"{Fore.CYAN}Enter the URL to crawl: {Style.RESET_ALL}")
if url:
# Call the provided crawl callback
await crawl_callback(profile_path, url)
else:
self.logger.error("No URL provided", tag="CRAWL")
else:
self.logger.error("Invalid profile number", tag="PROFILES")
except ValueError:
self.logger.error("Please enter a valid number", tag="PROFILES")
elif (choice == "4" and not crawl_callback) or (choice == "5" and crawl_callback):
# Exit
self.logger.info("Exiting profile management", tag="MENU")
break
else:
self.logger.error(f"Invalid choice. Please enter a number between 1 and {exit_option}.", tag="MENU")


@@ -1,13 +0,0 @@
from .base import BaseBrowserStrategy
from .cdp import CDPBrowserStrategy
from .docker_strategy import DockerBrowserStrategy
from .playwright import PlaywrightBrowserStrategy
from .builtin import BuiltinBrowserStrategy
__all__ = [
"BrowserStrategy",
"CDPBrowserStrategy",
"DockerBrowserStrategy",
"PlaywrightBrowserStrategy",
"BuiltinBrowserStrategy",
]


@@ -1,601 +0,0 @@
"""Browser strategies module for Crawl4AI.
This module implements the browser strategy pattern for different
browser implementations, including Playwright, CDP, and builtin browsers.
"""
from abc import ABC, abstractmethod
import asyncio
import json
import hashlib
import os
import time
from typing import Optional, Tuple, List
from playwright.async_api import BrowserContext, Page
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig, CrawlerRunConfig
from ...config import DOWNLOAD_PAGE_TIMEOUT
from ...js_snippet import load_js_script
from ..utils import get_playwright
class BaseBrowserStrategy(ABC):
"""Base class for all browser strategies.
This abstract class defines the interface that all browser strategies
must implement. It handles common functionality like context caching,
browser configuration, and session management.
"""
_playwright_instance = None
@classmethod
async def get_playwright(cls):
"""Get or create a shared Playwright instance.
Returns:
Playwright: The shared Playwright instance
"""
# The singleton is intentionally bypassed for now: the "or True" forces
# a fresh Playwright instance on every call.
if cls._playwright_instance is None or True:
cls._playwright_instance = await get_playwright()
return cls._playwright_instance
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the strategy with configuration and logger.
Args:
config: Browser configuration
logger: Logger for recording events and errors
"""
self.config = config
self.logger = logger
self.browser = None
self.default_context = None
# Context management
self.contexts_by_config = {} # config_signature -> context
self._contexts_lock = asyncio.Lock()
# Session management
self.sessions = {}
self.session_ttl = 1800 # 30 minutes default
# Playwright instance
self.playwright = None
@abstractmethod
async def start(self):
"""Start the browser.
This method should be implemented by concrete strategies to initialize
the browser in the appropriate way (direct launch, CDP connection, etc.)
Returns:
self: For method chaining
"""
# Base implementation gets the playwright instance
self.playwright = await self.get_playwright()
return self
@abstractmethod
async def _generate_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
pass
async def get_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
"""Get a page with specified configuration.
This method should be implemented by concrete strategies to create
or retrieve a page according to their browser management approach.
Args:
crawlerRunConfig: Crawler run configuration
Returns:
Tuple of (Page, BrowserContext)
"""
# Clean up expired sessions first
self._cleanup_expired_sessions()
# If a session_id is provided and we already have it, reuse that page + context
if crawlerRunConfig.session_id and crawlerRunConfig.session_id in self.sessions:
context, page, _ = self.sessions[crawlerRunConfig.session_id]
# Update last-used timestamp
self.sessions[crawlerRunConfig.session_id] = (context, page, time.time())
return page, context
page, context = await self._generate_page(crawlerRunConfig)
import uuid
setattr(page, "guid", uuid.uuid4())
# If a session_id is specified, store this session so we can reuse later
if crawlerRunConfig.session_id:
self.sessions[crawlerRunConfig.session_id] = (context, page, time.time())
return page, context
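# Session reuse sketch (comment-form; the session id is arbitrary):
#
#   cfg = CrawlerRunConfig(session_id="login-flow")
#   page1, ctx1 = await strategy.get_page(cfg)  # creates and stores the session
#   page2, ctx2 = await strategy.get_page(cfg)  # returns the same page/context
#   assert page1 is page2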
async def get_pages(self, crawlerRunConfig: CrawlerRunConfig, count: int = 1) -> List[Tuple[Page, BrowserContext]]:
"""Get multiple pages with the same configuration.
Args:
crawlerRunConfig: Configuration for the pages
count: Number of pages to create
Returns:
List of (Page, Context) tuples
"""
pages = []
for _ in range(count):
page, context = await self.get_page(crawlerRunConfig)
pages.append((page, context))
return pages
async def get_opened_pages(self) -> List[Page]:
"""Get all opened pages across every cached context in the browser."""
return [page for context in self.contexts_by_config.values() for page in context.pages]
def _build_browser_args(self) -> dict:
"""Build browser launch arguments from config.
Returns:
dict: Browser launch arguments for Playwright
"""
# Define common browser arguments that improve performance and stability
args = [
"--no-sandbox",
"--no-first-run",
"--no-default-browser-check",
"--window-position=0,0",
"--ignore-certificate-errors",
"--ignore-certificate-errors-spki-list",
"--window-position=400,0",
"--force-color-profile=srgb",
"--mute-audio",
"--disable-gpu",
"--disable-gpu-compositing",
"--disable-software-rasterizer",
"--disable-dev-shm-usage",
"--disable-infobars",
"--disable-blink-features=AutomationControlled",
"--disable-renderer-backgrounding",
"--disable-ipc-flooding-protection",
"--disable-background-timer-throttling",
f"--window-size={self.config.viewport_width},{self.config.viewport_height}",
]
# Define browser disable options for light mode
browser_disable_options = [
"--disable-backgrounding-occluded-windows",
"--disable-breakpad",
"--disable-client-side-phishing-detection",
"--disable-component-extensions-with-background-pages",
"--disable-default-apps",
"--disable-extensions",
"--disable-features=TranslateUI",
"--disable-hang-monitor",
"--disable-popup-blocking",
"--disable-prompt-on-repost",
"--disable-sync",
"--metrics-recording-only",
"--password-store=basic",
"--use-mock-keychain",
]
# Apply light mode settings if enabled
if self.config.light_mode:
args.extend(browser_disable_options)
# Apply text mode settings if enabled (disables images, JS, etc)
if self.config.text_mode:
args.extend([
"--blink-settings=imagesEnabled=false",
"--disable-remote-fonts",
"--disable-images",
"--disable-javascript",
"--disable-software-rasterizer",
"--disable-dev-shm-usage",
])
# Add any extra arguments from the config
if self.config.extra_args:
args.extend(self.config.extra_args)
# Build the core browser args dictionary
browser_args = {"headless": self.config.headless, "args": args}
# Add chrome channel if specified
if self.config.chrome_channel:
browser_args["channel"] = self.config.chrome_channel
# Configure downloads
if self.config.accept_downloads:
browser_args["downloads_path"] = self.config.downloads_path or os.path.join(
os.getcwd(), "downloads"
)
os.makedirs(browser_args["downloads_path"], exist_ok=True)
# Check for user data directory
if self.config.user_data_dir:
# Ensure the directory exists
os.makedirs(self.config.user_data_dir, exist_ok=True)
browser_args["user_data_dir"] = self.config.user_data_dir
# Configure proxy settings
if self.config.proxy or self.config.proxy_config:
from playwright.async_api import ProxySettings
proxy_settings = (
ProxySettings(server=self.config.proxy)
if self.config.proxy
else ProxySettings(
server=self.config.proxy_config.server,
username=self.config.proxy_config.username,
password=self.config.proxy_config.password,
)
)
browser_args["proxy"] = proxy_settings
return browser_args
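# Comment-form sketch of how the returned dict is typically consumed
# (assumes a running Playwright instance; note that persistence-related
# keys such as "user_data_dir" belong to launch_persistent_context(),
# not launch()):
#
#   launch_options = strategy._build_browser_args()
#   browser = await playwright.chromium.launch(
#       headless=launch_options["headless"], args=launch_options["args"]
#   )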
def _make_config_signature(self, crawlerRunConfig: CrawlerRunConfig) -> str:
"""Create a signature hash from configuration for context caching.
Converts the crawlerRunConfig into a dict, excludes ephemeral fields,
then returns a hash of the sorted JSON. This yields a stable signature
that identifies configurations requiring a unique browser context.
Args:
crawlerRunConfig: Crawler run configuration
Returns:
str: Unique hash for this configuration
"""
config_dict = crawlerRunConfig.__dict__.copy()
# Exclude items that do not affect browser-level setup
ephemeral_keys = [
"session_id",
"js_code",
"scraping_strategy",
"extraction_strategy",
"chunking_strategy",
"cache_mode",
"content_filter",
"semaphore_count",
"url"
]
for key in ephemeral_keys:
if key in config_dict:
del config_dict[key]
# Convert to canonical JSON string
signature_json = json.dumps(config_dict, sort_keys=True, default=str)
# Hash the JSON so we get a compact, unique string
signature_hash = hashlib.sha256(signature_json.encode("utf-8")).hexdigest()
return signature_hash
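# Example: two run configs that differ only in ephemeral fields hash to the
# same signature and therefore share a browser context (illustrative;
# assumes CrawlerRunConfig(url=...) is a valid constructor call):
#
#   sig_a = strategy._make_config_signature(CrawlerRunConfig(url="https://a.example"))
#   sig_b = strategy._make_config_signature(CrawlerRunConfig(url="https://b.example"))
#   assert sig_a == sig_b  # "url" is excluded from the signature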
async def create_browser_context(self, crawlerRunConfig: Optional[CrawlerRunConfig] = None) -> BrowserContext:
"""Creates and returns a new browser context with configured settings.
Args:
crawlerRunConfig: Configuration object for the crawler run
Returns:
BrowserContext: Browser context object with the specified configurations
"""
if not self.browser:
raise ValueError("Browser must be initialized before creating context")
# Base settings
user_agent = self.config.headers.get("User-Agent", self.config.user_agent)
viewport_settings = {
"width": self.config.viewport_width,
"height": self.config.viewport_height,
}
proxy_settings = {"server": self.config.proxy} if self.config.proxy else None
# Define blocked extensions for resource optimization
blocked_extensions = [
# Images
"jpg", "jpeg", "png", "gif", "webp", "svg", "ico", "bmp", "tiff", "psd",
# Fonts
"woff", "woff2", "ttf", "otf", "eot",
# Media
"mp4", "webm", "ogg", "avi", "mov", "wmv", "flv", "m4v", "mp3", "wav", "aac",
"m4a", "opus", "flac",
# Documents
"pdf", "doc", "docx", "xls", "xlsx", "ppt", "pptx",
# Archives
"zip", "rar", "7z", "tar", "gz",
# Scripts and data
"xml", "swf", "wasm",
]
# Common context settings
context_settings = {
"user_agent": user_agent,
"viewport": viewport_settings,
"proxy": proxy_settings,
"accept_downloads": self.config.accept_downloads,
"storage_state": self.config.storage_state,
"ignore_https_errors": self.config.ignore_https_errors,
"device_scale_factor": 1.0,
"java_script_enabled": self.config.java_script_enabled,
}
# Apply text mode settings if enabled
if self.config.text_mode:
text_mode_settings = {
"has_touch": False,
"is_mobile": False,
"java_script_enabled": False, # Disable javascript in text mode
}
# Update context settings with text mode settings
context_settings.update(text_mode_settings)
if self.logger:
self.logger.debug("Text mode enabled for browser context", tag="BROWSER")
# Handle storage state properly - this is key for persistence
if self.config.storage_state:
if self.logger:
if isinstance(self.config.storage_state, str):
self.logger.debug(f"Using storage state from file: {self.config.storage_state}", tag="BROWSER")
else:
self.logger.debug("Using storage state from config object", tag="BROWSER")
if self.config.user_data_dir:
# For CDP-based browsers, storage persistence is typically handled by the user_data_dir
# at the browser level, but we'll create a storage_state location for Playwright as well
storage_path = os.path.join(self.config.user_data_dir, "storage_state.json")
if not os.path.exists(storage_path):
# Create parent directory if it doesn't exist
os.makedirs(os.path.dirname(storage_path), exist_ok=True)
with open(storage_path, "w") as f:
json.dump({}, f)
self.config.storage_state = storage_path
if self.logger:
self.logger.debug(f"Using user data directory: {self.config.user_data_dir}", tag="BROWSER")
# Apply crawler-specific configurations if provided
if crawlerRunConfig:
# If crawlerRunConfig.proxy_config is set, apply it to the context settings
if crawlerRunConfig.proxy_config:
proxy_settings = {
"server": crawlerRunConfig.proxy_config.server,
}
if crawlerRunConfig.proxy_config.username:
proxy_settings.update({
"username": crawlerRunConfig.proxy_config.username,
"password": crawlerRunConfig.proxy_config.password,
})
context_settings["proxy"] = proxy_settings
# Create and return the context
try:
# Create the context with appropriate settings
context = await self.browser.new_context(**context_settings)
# Apply text mode resource blocking if enabled
if self.config.text_mode:
# Create and apply route patterns for each extension
for ext in blocked_extensions:
await context.route(f"**/*.{ext}", lambda route: route.abort())
return context
except Exception as e:
if self.logger:
self.logger.error(f"Error creating browser context: {str(e)}", tag="BROWSER")
# Fallback to basic context creation if the advanced settings fail
return await self.browser.new_context()
async def setup_context(self, context: BrowserContext, crawlerRunConfig: Optional[CrawlerRunConfig] = None):
"""Set up a browser context with the configured options.
Args:
context: The browser context to set up
crawlerRunConfig: Configuration object containing all browser settings
"""
# Set HTTP headers
if self.config.headers:
await context.set_extra_http_headers(self.config.headers)
# Add cookies
if self.config.cookies:
await context.add_cookies(self.config.cookies)
# Storage state, when provided, is applied at context creation time in
# create_browser_context(); calling context.storage_state() here would
# only read the state back, so no further action is needed.
# Configure downloads
if self.config.accept_downloads:
context.set_default_timeout(DOWNLOAD_PAGE_TIMEOUT)
context.set_default_navigation_timeout(DOWNLOAD_PAGE_TIMEOUT)
if self.config.downloads_path:
context._impl_obj._options["accept_downloads"] = True
context._impl_obj._options["downloads_path"] = self.config.downloads_path
# Handle user agent and browser hints
if self.config.user_agent:
combined_headers = {
"User-Agent": self.config.user_agent,
"sec-ch-ua": self.config.browser_hint,
}
combined_headers.update(self.config.headers)
await context.set_extra_http_headers(combined_headers)
# Add default cookie
target_url = (crawlerRunConfig and crawlerRunConfig.url) or "https://crawl4ai.com/"
await context.add_cookies(
[
{
"name": "cookiesEnabled",
"value": "true",
"url": target_url,
}
]
)
# Handle navigator overrides
if crawlerRunConfig:
if (
crawlerRunConfig.override_navigator
or crawlerRunConfig.simulate_user
or crawlerRunConfig.magic
):
await context.add_init_script(load_js_script("navigator_overrider"))
async def kill_session(self, session_id: str):
"""Kill a browser session and clean up resources.
Args:
session_id (str): The session ID to kill.
"""
if session_id not in self.sessions:
return
context, page, _ = self.sessions[session_id]
# Close the page
try:
await page.close()
except Exception as e:
if self.logger:
self.logger.error(f"Error closing page for session {session_id}: {str(e)}", tag="BROWSER")
# Remove session from tracking
del self.sessions[session_id]
# Clean up any contexts that no longer have pages
await self._cleanup_unused_contexts()
if self.logger:
self.logger.debug(f"Killed session: {session_id}", tag="BROWSER")
async def _cleanup_unused_contexts(self):
"""Clean up contexts that no longer have any pages."""
async with self._contexts_lock:
# Get all contexts we're managing
contexts_to_check = list(self.contexts_by_config.values())
for context in contexts_to_check:
# Check if the context has any pages left
if not context.pages:
# No pages left, we can close this context
config_signature = next((sig for sig, ctx in self.contexts_by_config.items()
if ctx == context), None)
if config_signature:
try:
await context.close()
del self.contexts_by_config[config_signature]
if self.logger:
self.logger.debug(f"Closed unused context", tag="BROWSER")
except Exception as e:
if self.logger:
self.logger.error(f"Error closing unused context: {str(e)}", tag="BROWSER")
def _cleanup_expired_sessions(self):
"""Clean up expired sessions based on TTL."""
current_time = time.time()
expired_sessions = [
sid
for sid, (_, _, last_used) in self.sessions.items()
if current_time - last_used > self.session_ttl
]
for sid in expired_sessions:
if self.logger:
self.logger.debug(f"Session expired: {sid}", tag="BROWSER")
asyncio.create_task(self.kill_session(sid))
async def close(self):
"""Close the browser and clean up resources.
This method handles common cleanup tasks like:
1. Persisting storage state if a user_data_dir is configured
2. Closing all sessions
3. Closing all browser contexts
4. Closing the browser
5. Stopping Playwright
Child classes should override this method to add their specific cleanup logic,
but should call super().close() to ensure common cleanup tasks are performed.
"""
# Set a flag to prevent race conditions during cleanup
self.shutting_down = True
try:
# Add brief delay if configured
if self.config.sleep_on_close:
await asyncio.sleep(0.5)
# Persist storage state if using a user data directory
if self.config.user_data_dir and self.browser:
for context in self.browser.contexts:
try:
# Ensure the directory exists
storage_dir = os.path.join(self.config.user_data_dir, "Default")
os.makedirs(storage_dir, exist_ok=True)
# Save storage state
storage_path = os.path.join(storage_dir, "storage_state.json")
await context.storage_state(path=storage_path)
if self.logger:
self.logger.debug("Storage state persisted before closing browser", tag="BROWSER")
except Exception as e:
if self.logger:
self.logger.warning(
message="Failed to ensure storage persistence: {error}",
tag="BROWSER",
params={"error": str(e)}
)
# Close all active sessions
session_ids = list(self.sessions.keys())
for session_id in session_ids:
await self.kill_session(session_id)
# Close all cached contexts
for ctx in self.contexts_by_config.values():
try:
await ctx.close()
except Exception as e:
if self.logger:
self.logger.error(
message="Error closing context: {error}",
tag="BROWSER",
params={"error": str(e)}
)
self.contexts_by_config.clear()
# Close the browser if it exists
if self.browser:
await self.browser.close()
self.browser = None
# Stop playwright
if self.playwright:
await self.playwright.stop()
self.playwright = None
except Exception as e:
if self.logger:
self.logger.error(
message="Error during browser cleanup: {error}",
tag="BROWSER",
params={"error": str(e)}
)
finally:
# Reset shutting down flag
self.shutting_down = False
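A minimal subclass sketch showing the two hooks a concrete strategy must provide (start and _generate_page). This is illustrative only; the real strategies in this module additionally manage CDP endpoints, containers, and built-in browser processes.

class MinimalPlaywrightStrategy(BaseBrowserStrategy):
    async def start(self):
        await super().start()  # acquires the shared Playwright instance
        self.browser = await self.playwright.chromium.launch(
            headless=self.config.headless
        )
        return self

    async def _generate_page(self, crawlerRunConfig):
        # Reuse the base class helpers for context creation and setup.
        context = await self.create_browser_context(crawlerRunConfig)
        await self.setup_context(context, crawlerRunConfig)
        page = await context.new_page()
        return page, context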


@@ -1,468 +0,0 @@
import asyncio
import os
import time
import json
import subprocess
import shutil
import signal
from typing import Optional, Dict, Any, Tuple
from ...async_logger import AsyncLogger
from ...async_configs import CrawlerRunConfig
from playwright.async_api import Page, BrowserContext
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig
from ...utils import get_home_folder
from ..utils import get_browser_executable, is_windows, is_browser_running, find_process_by_port, terminate_process
from .cdp import CDPBrowserStrategy
from .base import BaseBrowserStrategy
class BuiltinBrowserStrategy(CDPBrowserStrategy):
"""Built-in browser strategy.
This strategy extends the CDP strategy to use the built-in browser.
"""
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the built-in browser strategy.
Args:
config: Browser configuration
logger: Logger for recording events and errors
"""
super().__init__(config, logger)
self.builtin_browser_dir = os.path.join(get_home_folder(), "builtin-browser") if not self.config.user_data_dir else self.config.user_data_dir
self.builtin_config_file = os.path.join(self.builtin_browser_dir, "browser_config.json")
# Raise error if user data dir is already engaged
if self._check_user_dir_is_engaged(self.builtin_browser_dir):
raise Exception(f"User data directory {self.builtin_browser_dir} is already engaged by another browser instance.")
os.makedirs(self.builtin_browser_dir, exist_ok=True)
def _check_user_dir_is_engaged(self, user_data_dir: str) -> bool:
"""Check if the user data directory is already in use.
Returns:
bool: True if the directory is engaged, False otherwise
"""
# Load the browser config file and check whether any port_map entry's
# "user_data_dir" matches the requested directory.
if os.path.exists(self.builtin_config_file):
try:
with open(self.builtin_config_file, 'r') as f:
browser_info_dict = json.load(f)
# Check if user data dir is already engaged
for port_str, browser_info in browser_info_dict.get("port_map", {}).items():
if browser_info.get("user_data_dir") == user_data_dir:
return True
except Exception as e:
if self.logger:
self.logger.error(f"Error reading built-in browser config: {str(e)}", tag="BUILTIN")
return False
async def start(self):
"""Start or connect to the built-in browser.
Returns:
self: For method chaining
"""
# Initialize Playwright instance via base class method
await BaseBrowserStrategy.start(self)
try:
# Check for existing built-in browser (get_browser_info already checks if running)
browser_info = self.get_browser_info()
if browser_info:
if self.logger:
self.logger.info(f"Using existing built-in browser at {browser_info.get('cdp_url')}", tag="BROWSER")
self.config.cdp_url = browser_info.get('cdp_url')
else:
if self.logger:
self.logger.info("Built-in browser not found, launching new instance...", tag="BROWSER")
cdp_url = await self.launch_builtin_browser(
browser_type=self.config.browser_type,
debugging_port=self.config.debugging_port,
headless=self.config.headless,
)
if not cdp_url:
if self.logger:
self.logger.warning("Failed to launch built-in browser, falling back to regular CDP strategy", tag="BROWSER")
# Call CDP's start but skip BaseBrowserStrategy.start() since we already called it
return await CDPBrowserStrategy.start(self)
self.config.cdp_url = cdp_url
# Connect to the browser using CDP protocol
self.browser = await self.playwright.chromium.connect_over_cdp(self.config.cdp_url)
# Get or create default context
contexts = self.browser.contexts
if contexts:
self.default_context = contexts[0]
else:
self.default_context = await self.create_browser_context()
await self.setup_context(self.default_context)
if self.logger:
self.logger.debug(f"Connected to built-in browser at {self.config.cdp_url}", tag="BUILTIN")
return self
except Exception as e:
if self.logger:
self.logger.error(f"Failed to start built-in browser: {str(e)}", tag="BUILTIN")
# Note: a partially started browser may leave resources that still need cleanup at this point.
raise
@classmethod
def _get_builtin_browser_info(cls, debugging_port: int, config_file: str, logger: Optional[AsyncLogger] = None) -> Optional[Dict[str, Any]]:
"""Get information about the built-in browser for a specific debugging port.
Args:
debugging_port: The debugging port to look for
config_file: Path to the config file
logger: Optional logger for recording events
Returns:
dict: Browser information or None if no running browser is configured for this port
"""
if not os.path.exists(config_file):
return None
try:
with open(config_file, 'r') as f:
browser_info_dict = json.load(f)
# Get browser info from port map
if isinstance(browser_info_dict, dict) and "port_map" in browser_info_dict:
port_str = str(debugging_port)
if port_str in browser_info_dict["port_map"]:
browser_info = browser_info_dict["port_map"][port_str]
# Check if the browser is still running
pids = browser_info.get('pid', '')
if isinstance(pids, str):
pids = [int(pid) for pid in pids.split() if pid.isdigit()]
elif isinstance(pids, int):
pids = [pids]
else:
pids = []
# Check if any of the PIDs are running
if not pids:
if logger:
logger.warning(f"Built-in browser on port {debugging_port} has no valid PID", tag="BUILTIN")
# Remove this port from the dictionary
del browser_info_dict["port_map"][port_str]
with open(config_file, 'w') as f:
json.dump(browser_info_dict, f, indent=2)
return None
# Check if any of the PIDs are running
for pid in pids:
if is_browser_running(pid):
browser_info['pid'] = pid
break
else:
# If none of the PIDs are running, remove this port from the dictionary
if logger:
logger.warning(f"Built-in browser on port {debugging_port} is not running", tag="BUILTIN")
# Remove this port from the dictionary
del browser_info_dict["port_map"][port_str]
with open(config_file, 'w') as f:
json.dump(browser_info_dict, f, indent=2)
return None
return browser_info
return None
except Exception as e:
if logger:
logger.error(f"Error reading built-in browser config: {str(e)}", tag="BUILTIN")
return None
def get_browser_info(self) -> Optional[Dict[str, Any]]:
"""Get information about the current built-in browser instance.
Returns:
dict: Browser information or None if no running browser is configured
"""
return self._get_builtin_browser_info(
debugging_port=self.config.debugging_port,
config_file=self.builtin_config_file,
logger=self.logger
)
async def launch_builtin_browser(self,
browser_type: str = "chromium",
debugging_port: int = 9222,
headless: bool = True) -> Optional[str]:
"""Launch a browser in the background for use as the built-in browser.
Args:
browser_type: Type of browser to launch ('chromium' or 'firefox')
debugging_port: Port to use for CDP debugging
headless: Whether to run in headless mode
Returns:
str: CDP URL for the browser, or None if launch failed
"""
# Check if there's an existing browser still running
browser_info = self._get_builtin_browser_info(
debugging_port=debugging_port,
config_file=self.builtin_config_file,
logger=self.logger
)
if browser_info:
if self.logger:
self.logger.info(f"Built-in browser is already running on port {debugging_port}", tag="BUILTIN")
return browser_info.get('cdp_url')
# Create a user data directory for the built-in browser
user_data_dir = os.path.join(self.builtin_browser_dir, "user_data")
# Refuse to launch if the user data directory is already in use
if self._check_user_dir_is_engaged(user_data_dir):
raise Exception(f"User data directory {user_data_dir} is already in use by another browser instance.")
# Create the user data directory if it doesn't exist
os.makedirs(user_data_dir, exist_ok=True)
# Prepare browser launch arguments
browser_args = super()._build_browser_args()
browser_path = await get_browser_executable(browser_type)
base_args = [browser_path]
if browser_type == "chromium":
args = [
# browser_path is already in base_args; do not repeat it here
f"--remote-debugging-port={debugging_port}",
f"--user-data-dir={user_data_dir}",
]
# if headless:
# args.append("--headless=new")
elif browser_type == "firefox":
args = [
# browser_path is already in base_args; do not repeat it here
"--remote-debugging-port",
str(debugging_port),
"--profile",
user_data_dir,
]
if headless:
args.append("--headless")
else:
if self.logger:
self.logger.error(f"Browser type {browser_type} not supported for built-in browser", tag="BUILTIN")
return None
args = base_args + browser_args["args"] + args
try:
# Check if the port is already in use
PID = ""
cdp_url = f"http://localhost:{debugging_port}"
config_json = await self._check_port_in_use(cdp_url)
if config_json:
if self.logger:
self.logger.info(f"Port {debugging_port} is already in use.", tag="BUILTIN")
PID = find_process_by_port(debugging_port)
else:
# Start the browser process detached
process = None
if is_windows():
process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
)
else:
process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=os.setpgrp # Start in a new process group
)
# Wait briefly to ensure the process starts successfully
await asyncio.sleep(2.0)
# Check if the process is still running
if process and process.poll() is not None:
if self.logger:
self.logger.error(f"Browser process exited immediately with code {process.returncode}", tag="BUILTIN")
return None
PID = process.pid
# Confirm the CDP endpoint is responding and capture its version info
config_json = await self._check_port_in_use(cdp_url)
# Create browser info
browser_info = {
'pid': PID,
'cdp_url': cdp_url,
'user_data_dir': user_data_dir,
'browser_type': browser_type,
'debugging_port': debugging_port,
'start_time': time.time(),
'config': config_json
}
# Read existing config file if it exists
port_map = {}
if os.path.exists(self.builtin_config_file):
try:
with open(self.builtin_config_file, 'r') as f:
existing_data = json.load(f)
# Check if it already uses port mapping
if isinstance(existing_data, dict) and "port_map" in existing_data:
port_map = existing_data["port_map"]
# # Convert legacy format to port mapping
# elif isinstance(existing_data, dict) and "debugging_port" in existing_data:
# old_port = str(existing_data.get("debugging_port"))
# if self._is_browser_running(existing_data.get("pid")):
# port_map[old_port] = existing_data
except Exception as e:
if self.logger:
self.logger.warning(f"Could not read existing config: {str(e)}", tag="BUILTIN")
# Add/update this browser in the port map
port_map[str(debugging_port)] = browser_info
# Write updated config
with open(self.builtin_config_file, 'w') as f:
json.dump({"port_map": port_map}, f, indent=2)
# Detach from the browser process - don't keep any references
# This is important to allow the Python script to exit while the browser continues running
process = None
if self.logger:
self.logger.success(f"Built-in browser launched at CDP URL: {cdp_url}", tag="BUILTIN")
return cdp_url
except Exception as e:
if self.logger:
self.logger.error(f"Error launching built-in browser: {str(e)}", tag="BUILTIN")
return None
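# Usage sketch (the class name and constructor follow this module's
# conventions; treat this as an assumption, not a documented API):
#
#     import asyncio
#     from crawl4ai.async_configs import BrowserConfig
#
#     async def main():
#         strategy = BuiltinBrowserStrategy(BrowserConfig(headless=True))
#         cdp_url = await strategy.launch_builtin_browser(debugging_port=9222)
#         print(cdp_url or "launch failed")  # e.g. http://localhost:9222
#
#     asyncio.run(main())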
async def _check_port_in_use(self, cdp_url: str) -> dict:
"""Check if a port is already in use by a Chrome DevTools instance.
Args:
cdp_url: The CDP URL to check
Returns:
dict: Chrome DevTools protocol version information or None if not found
"""
import aiohttp
json_url = f"{cdp_url}/json/version"
json_config = None
try:
async with aiohttp.ClientSession() as session:
try:
async with session.get(json_url, timeout=2.0) as response:
if response.status == 200:
json_config = await response.json()
if self.logger:
self.logger.debug(f"Found CDP server running at {cdp_url}", tag="BUILTIN")
return json_config
except (aiohttp.ClientError, asyncio.TimeoutError):
pass
return None
except Exception as e:
if self.logger:
self.logger.debug(f"Error checking CDP port: {str(e)}", tag="BUILTIN")
return None
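# For reference, a healthy CDP endpoint answers GET <cdp_url>/json/version
# with JSON roughly like this (values illustrative):
#
#     {
#         "Browser": "Chrome/124.0.0.0",
#         "Protocol-Version": "1.3",
#         "webSocketDebuggerUrl": "ws://localhost:9222/devtools/browser/<uuid>"
#     }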
async def kill_builtin_browser(self) -> bool:
"""Kill the built-in browser if it's running.
Returns:
bool: True if the browser was killed, False otherwise
"""
browser_info = self.get_browser_info()
if not browser_info:
if self.logger:
self.logger.warning(f"No built-in browser found on port {self.config.debugging_port}", tag="BUILTIN")
return False
pid = browser_info.get('pid')
if not pid:
return False
success, error_msg = terminate_process(pid, logger=self.logger)
if success:
# Update config file to remove this browser
with open(self.builtin_config_file, 'r') as f:
browser_info_dict = json.load(f)
# Remove this port from the dictionary
port_str = str(self.config.debugging_port)
if port_str in browser_info_dict.get("port_map", {}):
del browser_info_dict["port_map"][port_str]
with open(self.builtin_config_file, 'w') as f:
json.dump(browser_info_dict, f, indent=2)
# Remove user data directory if it exists
if os.path.exists(self.builtin_browser_dir):
shutil.rmtree(self.builtin_browser_dir)
# Clear the browser info cache
self.browser = None
self.temp_dir = None
self.shutting_down = True
if self.logger:
self.logger.success("Built-in browser terminated", tag="BUILTIN")
return True
else:
if self.logger:
self.logger.error(f"Error killing built-in browser: {error_msg}", tag="BUILTIN")
return False
async def get_builtin_browser_status(self) -> Dict[str, Any]:
"""Get status information about the built-in browser.
Returns:
dict: Status information with running, cdp_url, and info fields
"""
browser_info = self.get_browser_info()
if not browser_info:
return {
'running': False,
'cdp_url': None,
'info': None,
'port': self.config.debugging_port
}
return {
'running': True,
'cdp_url': browser_info.get('cdp_url'),
'info': browser_info,
'port': self.config.debugging_port
}
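# How these management helpers compose (a sketch of assumed usage):
#
#     status = await strategy.get_builtin_browser_status()
#     if status["running"]:
#         print(f"built-in browser at {status['cdp_url']}")
#         await strategy.kill_builtin_browser()
#     else:
#         await strategy.launch_builtin_browser(debugging_port=status["port"])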
async def close(self):
"""Close the built-in browser and clean up resources."""
# Call parent class close method
await super().close()
# Clean up built-in browser if we created it and were in shutdown mode
if self.shutting_down:
await self.kill_builtin_browser()
if self.logger:
self.logger.debug("Killed built-in browser during shutdown", tag="BUILTIN")

View File

@@ -1,281 +0,0 @@
"""Browser strategies module for Crawl4AI.
This module implements the browser strategy pattern for different
browser implementations, including Playwright, CDP, and builtin browsers.
"""
import asyncio
import os
import time
import json
import subprocess
import shutil
from typing import Optional, Tuple, List
from playwright.async_api import BrowserContext, Page
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig, CrawlerRunConfig
from ..utils import get_playwright, get_browser_executable, create_temp_directory, is_windows, check_process_is_running, terminate_process
from .base import BaseBrowserStrategy
class CDPBrowserStrategy(BaseBrowserStrategy):
"""CDP-based browser strategy.
This strategy connects to an existing browser using CDP protocol or
launches and connects to a browser using CDP.
"""
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the CDP browser strategy.
Args:
config: Browser configuration
logger: Logger for recording events and errors
"""
super().__init__(config, logger)
self.sessions = {}
self.session_ttl = 1800 # 30 minutes
self.browser_process = None
self.temp_dir = None
self.shutting_down = False
async def start(self):
"""Start or connect to the browser using CDP.
Returns:
self: For method chaining
"""
# Call the base class start to initialize Playwright
await super().start()
try:
# Get or create CDP URL
cdp_url = await self._get_or_create_cdp_url()
# Connect to the browser using CDP
self.browser = await self.playwright.chromium.connect_over_cdp(cdp_url)
# Get or create default context
contexts = self.browser.contexts
if contexts:
self.default_context = contexts[0]
else:
self.default_context = await self.create_browser_context()
await self.setup_context(self.default_context)
if self.logger:
self.logger.debug(f"Connected to CDP browser at {cdp_url}", tag="CDP")
except Exception as e:
if self.logger:
self.logger.error(f"Failed to connect to CDP browser: {str(e)}", tag="CDP")
# Clean up any resources before re-raising
await self._cleanup_process()
raise
return self
async def _get_or_create_cdp_url(self) -> str:
"""Get existing CDP URL or launch a browser and return its CDP URL.
Returns:
str: CDP URL for connecting to the browser
"""
# If CDP URL is provided, just return it
if self.config.cdp_url:
return self.config.cdp_url
# Create temp dir if needed
if not self.config.user_data_dir:
self.temp_dir = create_temp_directory()
user_data_dir = self.temp_dir
else:
user_data_dir = self.config.user_data_dir
# Get browser args based on OS and browser type
# args = await self._get_browser_args(user_data_dir)
browser_args = super()._build_browser_args()
browser_path = await get_browser_executable(self.config.browser_type)
base_args = [browser_path]
if self.config.browser_type == "chromium":
args = [
f"--remote-debugging-port={self.config.debugging_port}",
f"--user-data-dir={user_data_dir}",
]
# if self.config.headless:
# args.append("--headless=new")
elif self.config.browser_type == "firefox":
args = [
"--remote-debugging-port",
str(self.config.debugging_port),
"--profile",
user_data_dir,
]
if self.config.headless:
args.append("--headless")
else:
raise NotImplementedError(f"Browser type {self.config.browser_type} not supported")
args = base_args + browser_args['args'] + args
# Start browser process
try:
# Use DETACHED_PROCESS flag on Windows to fully detach the process
# On Unix, we'll use preexec_fn=os.setpgrp to start the process in a new process group
if is_windows():
self.browser_process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
)
else:
self.browser_process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=os.setpgrp # Start in a new process group
)
# Monitor for a short time to make sure it starts properly
is_running, return_code, stdout, stderr = await check_process_is_running(self.browser_process, delay=2)
if not is_running:
if self.logger:
self.logger.error(
message="Browser process terminated unexpectedly | Code: {code} | STDOUT: {stdout} | STDERR: {stderr}",
tag="ERROR",
params={
"code": return_code,
"stdout": stdout.decode() if stdout else "",
"stderr": stderr.decode() if stderr else "",
},
)
await self._cleanup_process()
raise Exception("Browser process terminated unexpectedly")
return f"http://localhost:{self.config.debugging_port}"
except Exception as e:
await self._cleanup_process()
raise Exception(f"Failed to start browser: {e}")
async def _cleanup_process(self):
"""Cleanup browser process and temporary directory."""
# Set shutting_down flag BEFORE any termination actions
self.shutting_down = True
if self.browser_process:
try:
# Only attempt termination if the process is still running
if self.browser_process.poll() is None:
# Use our robust cross-platform termination utility
success = terminate_process(
pid=self.browser_process.pid,
timeout=1.0, # Equivalent to the previous 10*0.1s wait
logger=self.logger
)
if not success and self.logger:
self.logger.warning(
message="Failed to terminate browser process cleanly",
tag="PROCESS"
)
except Exception as e:
if self.logger:
self.logger.error(
message="Error during browser process cleanup: {error}",
tag="ERROR",
params={"error": str(e)},
)
if self.temp_dir and os.path.exists(self.temp_dir):
try:
shutil.rmtree(self.temp_dir)
self.temp_dir = None
if self.logger:
self.logger.debug("Removed temporary directory", tag="CDP")
except Exception as e:
if self.logger:
self.logger.error(
message="Error removing temporary directory: {error}",
tag="CDP",
params={"error": str(e)}
)
self.browser_process = None
async def _generate_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
# For CDP, we typically use the shared default_context
context = self.default_context
pages = context.pages
# Register the shared default context under this config's signature
config_signature = self._make_config_signature(crawlerRunConfig)
self.contexts_by_config[config_signature] = context
await self.setup_context(context, crawlerRunConfig)
# Check if there's already a page with the target URL
page = next((p for p in pages if p.url == crawlerRunConfig.url), None)
# If not found, create a new page
if not page:
page = await context.new_page()
return page, context
async def _get_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
"""Get a page for the given configuration.
Args:
crawlerRunConfig: Configuration object for the crawler run
Returns:
Tuple of (Page, BrowserContext)
"""
# Call parent method to ensure browser is started
await super().get_page(crawlerRunConfig)
# For CDP, we typically use the shared default_context
context = self.default_context
pages = context.pages
# Register the shared default context under this config's signature
config_signature = self._make_config_signature(crawlerRunConfig)
self.contexts_by_config[config_signature] = context
await self.setup_context(context, crawlerRunConfig)
# Check if there's already a page with the target URL
page = next((p for p in pages if p.url == crawlerRunConfig.url), None)
# If not found, create a new page
if not page:
page = await context.new_page()
# If a session_id is specified, store this session for reuse
if crawlerRunConfig.session_id:
self.sessions[crawlerRunConfig.session_id] = (context, page, time.time())
return page, context
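# Session reuse sketch: repeated calls with the same session_id are meant to
# return the stored (context, page) pair until it expires after session_ttl
# seconds (the reuse itself happens in base-class code not shown here, so
# this is an assumption):
#
#     run_cfg = CrawlerRunConfig(url="https://example.com", session_id="login-flow")
#     page, ctx = await strategy.get_page(run_cfg)  # stores the session
#     page, ctx = await strategy.get_page(run_cfg)  # later calls can reuse it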
async def close(self):
"""Close the CDP browser and clean up resources."""
# Skip cleanup if using external CDP URL and not launched by us
if self.config.cdp_url and not self.browser_process:
if self.logger:
self.logger.debug("Skipping cleanup for external CDP browser", tag="CDP")
return
# Call parent implementation for common cleanup
await super().close()
# Additional CDP-specific cleanup
await asyncio.sleep(0.5)
await self._cleanup_process()
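# Example: attaching to a Chrome you started yourself (a sketch; launch Chrome
# with --remote-debugging-port=9222 beforehand):
#
#     config = BrowserConfig(cdp_url="http://localhost:9222")
#     strategy = CDPBrowserStrategy(config)
#     await strategy.start()
#     ...
#     await strategy.close()  # external browser is left running, per above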

View File

@@ -1,430 +0,0 @@
"""Docker browser strategy module for Crawl4AI.
This module provides browser strategies for running browsers in Docker containers,
which offers better isolation, consistency across platforms, and easy scaling.
"""
import os
import uuid
from typing import List, Optional
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig
from ..models import DockerConfig
from ..docker_registry import DockerRegistry
from ..docker_utils import DockerUtils
from .builtin import CDPBrowserStrategy
from .base import BaseBrowserStrategy
class DockerBrowserStrategy(CDPBrowserStrategy):
"""Docker-based browser strategy.
Extends the CDPBrowserStrategy to run browsers in Docker containers.
Supports two modes:
1. "connect" - Uses a Docker image with Chrome already running
2. "launch" - Starts Chrome within the container with custom settings
Attributes:
docker_config: Docker-specific configuration options
container_id: ID of current Docker container
container_name: Name assigned to the container
registry: Registry for tracking and reusing containers
docker_utils: Utilities for Docker operations
chrome_process_id: Process ID of Chrome within container
socat_process_id: Process ID of socat within container
internal_cdp_port: Chrome's internal CDP port
internal_mapped_port: Port that socat maps to internally
"""
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the Docker browser strategy.
Args:
config: Browser configuration including Docker-specific settings
logger: Logger for recording events and errors
"""
super().__init__(config, logger)
# Initialize Docker-specific attributes
self.docker_config = self.config.docker_config or DockerConfig()
self.container_id = None
self.container_name = f"crawl4ai-browser-{uuid.uuid4().hex[:8]}"
# Use the shared registry file path for consistency with BuiltinBrowserStrategy
registry_file = self.docker_config.registry_file
if registry_file is None and self.config.user_data_dir:
# Use the same registry file as BuiltinBrowserStrategy if possible
registry_file = os.path.join(
os.path.dirname(self.config.user_data_dir), "browser_config.json"
)
self.registry = DockerRegistry(registry_file)
self.docker_utils = DockerUtils(logger)
self.chrome_process_id = None
self.socat_process_id = None
self.internal_cdp_port = 9222 # Chrome's internal CDP port
self.internal_mapped_port = 9223 # Port that socat maps to internally
self.shutting_down = False
async def start(self):
"""Start or connect to a browser running in a Docker container.
This method initializes Playwright and establishes a connection to
a browser running in a Docker container. Depending on the configured mode:
- "connect": Connects to a container with Chrome already running
- "launch": Creates a container and launches Chrome within it
Returns:
self: For method chaining
"""
# Initialize Playwright
await BaseBrowserStrategy.start(self)
if self.logger:
self.logger.info(
f"Starting Docker browser strategy in {self.docker_config.mode} mode",
tag="DOCKER",
)
try:
# Get CDP URL by creating or reusing a Docker container
# This handles the container management and browser startup
cdp_url = await self._get_or_create_cdp_url()
if not cdp_url:
raise Exception(
"Failed to establish CDP connection to Docker container"
)
if self.logger:
self.logger.info(
f"Connecting to browser in Docker via CDP: {cdp_url}", tag="DOCKER"
)
# Connect to the browser using CDP
self.browser = await self.playwright.chromium.connect_over_cdp(cdp_url)
# Get existing context or create default context
contexts = self.browser.contexts
if contexts:
self.default_context = contexts[0]
if self.logger:
self.logger.debug("Using existing browser context", tag="DOCKER")
else:
if self.logger:
self.logger.debug("Creating new browser context", tag="DOCKER")
self.default_context = await self.create_browser_context()
await self.setup_context(self.default_context)
return self
except Exception as e:
# Clean up resources if startup fails
if self.container_id and not self.docker_config.persistent:
if self.logger:
self.logger.warning(
f"Cleaning up container after failed start: {self.container_id[:12]}",
tag="DOCKER",
)
await self.docker_utils.remove_container(self.container_id)
self.registry.unregister_container(self.container_id)
self.container_id = None
if self.playwright:
await self.playwright.stop()
self.playwright = None
# Re-raise the exception
if self.logger:
self.logger.error(
f"Failed to start Docker browser: {str(e)}", tag="DOCKER"
)
raise
async def _generate_config_hash(self) -> str:
"""Generate a hash of the configuration for container matching.
Returns:
Hash string uniquely identifying this configuration
"""
# Create a dict with the relevant parts of the config
config_dict = {
"image": self.docker_config.image,
"mode": self.docker_config.mode,
"browser_type": self.config.browser_type,
"headless": self.config.headless,
}
# Add browser-specific config if in launch mode
if self.docker_config.mode == "launch":
config_dict.update(
{
"text_mode": self.config.text_mode,
"light_mode": self.config.light_mode,
"viewport_width": self.config.viewport_width,
"viewport_height": self.config.viewport_height,
}
)
# Use the utility method to generate the hash
return self.docker_utils.generate_config_hash(config_dict)
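# The hashing itself is delegated to DockerUtils; a plausible sketch of what
# generate_config_hash could do (an assumption shown for clarity, not the
# verified implementation):
#
#     import hashlib, json
#
#     def generate_config_hash(config_dict: dict) -> str:
#         canonical = json.dumps(config_dict, sort_keys=True)
#         return hashlib.sha256(canonical.encode()).hexdigest()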
async def _get_or_create_cdp_url(self) -> str:
"""Get CDP URL by either creating a new container or using an existing one.
Returns:
CDP URL for connecting to the browser
Raises:
Exception: If container creation or browser launch fails
"""
# If CDP URL is explicitly provided, use it
if self.config.cdp_url:
return self.config.cdp_url
# Ensure Docker image exists (will build if needed)
image_name = await self.docker_utils.ensure_docker_image_exists(
self.docker_config.image, self.docker_config.mode
)
# Generate config hash for container matching
config_hash = await self._generate_config_hash()
# Look for existing container with matching config
container_id = await self.registry.find_container_by_config(
config_hash, self.docker_utils
)
if container_id:
# Use existing container
self.container_id = container_id
host_port = self.registry.get_container_host_port(container_id)
if self.logger:
self.logger.info(
f"Using existing Docker container: {container_id[:12]}",
tag="DOCKER",
)
else:
# Get a port for the new container
host_port = (
self.docker_config.host_port
or self.registry.get_next_available_port(self.docker_utils)
)
# Prepare volumes list
volumes = list(self.docker_config.volumes)
# Add user data directory if specified
if self.docker_config.user_data_dir:
# Ensure user data directory exists
os.makedirs(self.docker_config.user_data_dir, exist_ok=True)
volumes.append(
f"{self.docker_config.user_data_dir}:{self.docker_config.container_user_data_dir}"
)
# # Update config user_data_dir to point to container path
# self.config.user_data_dir = self.docker_config.container_user_data_dir
# Create a new container
container_id = await self.docker_utils.create_container(
image_name=image_name,
host_port=host_port,
container_name=self.container_name,
volumes=volumes,
network=self.docker_config.network,
env_vars=self.docker_config.env_vars,
cpu_limit=self.docker_config.cpu_limit,
memory_limit=self.docker_config.memory_limit,
extra_args=self.docker_config.extra_args,
)
if not container_id:
raise Exception("Failed to create Docker container")
self.container_id = container_id
# Wait for container to be ready
await self.docker_utils.wait_for_container_ready(container_id)
# Handle specific setup based on mode
if self.docker_config.mode == "launch":
# In launch mode, we need to start socat and Chrome
await self.docker_utils.start_socat_in_container(container_id)
# Build browser arguments
browser_args = self._build_browser_args()
# Launch Chrome
await self.docker_utils.launch_chrome_in_container(
container_id, browser_args
)
# Get PIDs for later cleanup
self.chrome_process_id = (
await self.docker_utils.get_process_id_in_container(
container_id, "chromium"
)
)
self.socat_process_id = (
await self.docker_utils.get_process_id_in_container(
container_id, "socat"
)
)
# Wait for CDP to be ready
cdp_json_config = await self.docker_utils.wait_for_cdp_ready(host_port)
if cdp_json_config:
# Register the container in the shared registry
self.registry.register_container(
container_id, host_port, config_hash, cdp_json_config
)
else:
raise Exception("Failed to get CDP JSON config from Docker container")
if self.logger:
self.logger.success(
f"Docker container ready: {container_id[:12]} on port {host_port}",
tag="DOCKER",
)
# Return CDP URL
return f"http://localhost:{host_port}"
def _build_browser_args(self) -> List[str]:
"""Build Chrome command line arguments based on BrowserConfig.
Returns:
List of command line arguments for Chrome
"""
# Call parent method to get common arguments
browser_args = super()._build_browser_args()
return browser_args["args"] + [
f"--remote-debugging-port={self.internal_cdp_port}",
"--remote-debugging-address=0.0.0.0", # Allow external connections
"--disable-dev-shm-usage",
"--headless=new",
]
# args = [
# "--no-sandbox",
# "--disable-gpu",
# f"--remote-debugging-port={self.internal_cdp_port}",
# "--remote-debugging-address=0.0.0.0", # Allow external connections
# "--disable-dev-shm-usage",
# ]
# if self.config.headless:
# args.append("--headless=new")
# if self.config.viewport_width and self.config.viewport_height:
# args.append(f"--window-size={self.config.viewport_width},{self.config.viewport_height}")
# if self.config.user_agent:
# args.append(f"--user-agent={self.config.user_agent}")
# if self.config.text_mode:
# args.extend([
# "--blink-settings=imagesEnabled=false",
# "--disable-remote-fonts",
# "--disable-images",
# "--disable-javascript",
# ])
# if self.config.light_mode:
# # Import here to avoid circular import
# from ..utils import get_browser_disable_options
# args.extend(get_browser_disable_options())
# if self.config.user_data_dir:
# args.append(f"--user-data-dir={self.config.user_data_dir}")
# if self.config.extra_args:
# args.extend(self.config.extra_args)
# return args
async def close(self):
"""Close the browser and clean up Docker container if needed."""
# Set flag to track if we were the ones initiating shutdown
initiated_shutdown = not self.shutting_down
# Storage persistence for Docker needs special handling
# We need to store state before calling super().close() which will close the browser
if (
self.browser
and self.docker_config.user_data_dir
and self.docker_config.persistent
):
for context in self.browser.contexts:
try:
# Ensure directory exists
os.makedirs(self.docker_config.user_data_dir, exist_ok=True)
# Save storage state to user data directory
storage_path = os.path.join(
self.docker_config.user_data_dir, "storage_state.json"
)
await context.storage_state(path=storage_path)
if self.logger:
self.logger.debug(
"Persisted Docker-specific storage state", tag="DOCKER"
)
except Exception as e:
if self.logger:
self.logger.warning(
message="Failed to persist Docker storage state: {error}",
tag="DOCKER",
params={"error": str(e)},
)
# Call parent method to handle common cleanup
await super().close()
# Only perform container cleanup if we initiated shutdown
# and we need to handle Docker-specific resources
if initiated_shutdown:
# Only clean up container if not persistent
if self.container_id and not self.docker_config.persistent:
# Stop Chrome process in "launch" mode
if self.docker_config.mode == "launch" and self.chrome_process_id:
await self.docker_utils.stop_process_in_container(
self.container_id, self.chrome_process_id
)
if self.logger:
self.logger.debug(
f"Stopped Chrome process {self.chrome_process_id} in container",
tag="DOCKER",
)
# Stop socat process in "launch" mode
if self.docker_config.mode == "launch" and self.socat_process_id:
await self.docker_utils.stop_process_in_container(
self.container_id, self.socat_process_id
)
if self.logger:
self.logger.debug(
f"Stopped socat process {self.socat_process_id} in container",
tag="DOCKER",
)
# Remove or stop container based on configuration
if self.docker_config.remove_on_exit:
await self.docker_utils.remove_container(self.container_id)
# Unregister from registry
if hasattr(self, "registry") and self.registry:
self.registry.unregister_container(self.container_id)
if self.logger:
self.logger.debug(
f"Removed Docker container {self.container_id}",
tag="DOCKER",
)
else:
await self.docker_utils.stop_container(self.container_id)
if self.logger:
self.logger.debug(
f"Stopped Docker container {self.container_id}",
tag="DOCKER",
)
self.container_id = None
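# Usage sketch (field names mirror the docker_config attributes used above;
# the exact DockerConfig constructor is an assumption):
#
#     docker_cfg = DockerConfig(mode="launch", persistent=False, remove_on_exit=True)
#     config = BrowserConfig(headless=True, docker_config=docker_cfg)
#     strategy = DockerBrowserStrategy(config)
#     await strategy.start()
#     ...
#     await strategy.close()  # stops Chrome/socat, then removes the container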

View File

@@ -1,134 +0,0 @@
"""Browser strategies module for Crawl4AI.
This module implements the browser strategy pattern for different
browser implementations, including Playwright, CDP, and builtin browsers.
"""
import time
from typing import Optional, Tuple
from playwright.async_api import BrowserContext, Page
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig, CrawlerRunConfig
from playwright_stealth import StealthConfig
from .base import BaseBrowserStrategy
stealth_config = StealthConfig(
webdriver=True,
chrome_app=True,
chrome_csi=True,
chrome_load_times=True,
chrome_runtime=True,
navigator_languages=True,
navigator_plugins=True,
navigator_permissions=True,
webgl_vendor=True,
outerdimensions=True,
navigator_hardware_concurrency=True,
media_codecs=True,
)
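# This config is normally applied per page via playwright_stealth, e.g.
# (a sketch; the helper's exact signature can vary across library versions):
#
#     from playwright_stealth import stealth_async
#
#     await stealth_async(page, stealth_config)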
class PlaywrightBrowserStrategy(BaseBrowserStrategy):
"""Standard Playwright browser strategy.
This strategy launches a new browser instance using Playwright
and manages browser contexts.
"""
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the Playwright browser strategy.
Args:
config: Browser configuration
logger: Logger for recording events and errors
"""
super().__init__(config, logger)
# No need to re-initialize sessions and session_ttl as they're now in the base class
async def start(self):
"""Start the browser instance.
Returns:
self: For method chaining
"""
# Call the base class start to initialize Playwright
await super().start()
# Build browser arguments using the base class method
browser_args = self._build_browser_args()
try:
# Launch appropriate browser type
if self.config.browser_type == "firefox":
self.browser = await self.playwright.firefox.launch(**browser_args)
elif self.config.browser_type == "webkit":
self.browser = await self.playwright.webkit.launch(**browser_args)
else:
self.browser = await self.playwright.chromium.launch(**browser_args)
self.default_context = self.browser
if self.logger:
self.logger.debug(f"Launched {self.config.browser_type} browser", tag="BROWSER")
except Exception as e:
if self.logger:
self.logger.error(f"Failed to launch browser: {str(e)}", tag="BROWSER")
raise
return self
async def _generate_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
# Check if we have an existing context for this config
config_signature = self._make_config_signature(crawlerRunConfig)
async with self._contexts_lock:
if config_signature in self.contexts_by_config:
context = self.contexts_by_config[config_signature]
else:
# Create and setup a new context
context = await self.create_browser_context(crawlerRunConfig)
await self.setup_context(context, crawlerRunConfig)
self.contexts_by_config[config_signature] = context
# Create a new page from the chosen context
page = await context.new_page()
return page, context
async def _get_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
"""Get a page for the given configuration.
Args:
crawlerRunConfig: Configuration object for the crawler run
Returns:
Tuple of (Page, BrowserContext)
"""
# Call parent method to ensure browser is started
await super().get_page(crawlerRunConfig)
# Check if we have an existing context for this config
config_signature = self._make_config_signature(crawlerRunConfig)
async with self._contexts_lock:
if config_signature in self.contexts_by_config:
context = self.contexts_by_config[config_signature]
else:
# Create and setup a new context
context = await self.create_browser_context(crawlerRunConfig)
await self.setup_context(context, crawlerRunConfig)
self.contexts_by_config[config_signature] = context
# Create a new page from the chosen context
page = await context.new_page()
# If a session_id is specified, store this session so we can reuse later
if crawlerRunConfig.session_id:
self.sessions[crawlerRunConfig.session_id] = (context, page, time.time())
return page, context
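# Context reuse sketch: assuming _make_config_signature treats the URL as
# ephemeral (as the manager's version of that method does), two runs that
# differ only by URL share one BrowserContext, and thus cookies and storage:
#
#     page_a, ctx_a = await strategy.get_page(CrawlerRunConfig(url="https://example.com/a"))
#     page_b, ctx_b = await strategy.get_page(CrawlerRunConfig(url="https://example.com/b"))
#     assert ctx_a is ctx_b  # same signature -> same context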

View File

@@ -1,465 +0,0 @@
"""Browser utilities module for Crawl4AI.
This module provides utility functions for browser management,
including process management, CDP connection utilities,
and Playwright instance management.
"""
import asyncio
import os
import sys
import time
import tempfile
import subprocess
from typing import Optional, Tuple, Union
import signal
import psutil
from playwright.async_api import async_playwright
from ..utils import get_chromium_path
from ..async_configs import BrowserConfig, CrawlerRunConfig
from ..async_logger import AsyncLogger
_playwright_instance = None
async def get_playwright():
"""Get or create the Playwright instance (singleton pattern).
Returns:
Playwright: The Playwright instance
"""
global _playwright_instance
if _playwright_instance is None or True:  # 'or True' bypasses the cache, so a fresh instance starts on every call despite the singleton docstring
_playwright_instance = await async_playwright().start()
return _playwright_instance
async def get_browser_executable(browser_type: str) -> str:
"""Get the path to browser executable, with platform-specific handling.
Args:
browser_type: Type of browser (chromium, firefox, webkit)
Returns:
Path to browser executable
"""
return await get_chromium_path(browser_type)
def create_temp_directory(prefix="browser-profile-") -> str:
"""Create a temporary directory for browser data.
Args:
prefix: Prefix for the temporary directory name
Returns:
Path to the created temporary directory
"""
return tempfile.mkdtemp(prefix=prefix)
def is_windows() -> bool:
"""Check if the current platform is Windows.
Returns:
True if Windows, False otherwise
"""
return sys.platform == "win32"
def is_macos() -> bool:
"""Check if the current platform is macOS.
Returns:
True if macOS, False otherwise
"""
return sys.platform == "darwin"
def is_linux() -> bool:
"""Check if the current platform is Linux.
Returns:
True if Linux, False otherwise
"""
return not (is_windows() or is_macos())
def is_browser_running(pid: Optional[int]) -> bool:
"""Check if a process with the given PID is running.
Args:
pid: Process ID to check
Returns:
bool: True if the process is running, False otherwise
"""
if not pid:
return False
try:
if isinstance(pid, str):
pid = int(pid)
# Check if the process exists
if is_windows():
process = subprocess.run(["tasklist", "/FI", f"PID eq {pid}"],
capture_output=True, text=True)
return str(pid) in process.stdout
else:
# Unix-like systems
os.kill(pid, 0) # This doesn't actually kill the process, just checks if it exists
return True
except (ProcessLookupError, PermissionError, OSError):
return False
def get_browser_disable_options() -> list:
"""Get standard list of browser disable options for performance.
Returns:
List of command-line options to disable various browser features
"""
return [
"--disable-background-networking",
"--disable-background-timer-throttling",
"--disable-backgrounding-occluded-windows",
"--disable-breakpad",
"--disable-client-side-phishing-detection",
"--disable-component-extensions-with-background-pages",
"--disable-default-apps",
"--disable-extensions",
"--disable-features=TranslateUI",
"--disable-hang-monitor",
"--disable-ipc-flooding-protection",
"--disable-popup-blocking",
"--disable-prompt-on-repost",
"--disable-sync",
"--force-color-profile=srgb",
"--metrics-recording-only",
"--no-first-run",
"--password-store=basic",
"--use-mock-keychain",
]
async def find_optimal_browser_config(total_urls=50, verbose=True, rate_limit_delay=0.2):
"""Find optimal browser configuration for crawling a specific number of URLs.
Args:
total_urls: Number of URLs to crawl
verbose: Whether to print progress
rate_limit_delay: Delay between page loads to avoid rate limiting
Returns:
dict: Contains fastest, lowest_memory, and optimal configurations
"""
from .manager import BrowserManager
if verbose:
print(f"\n=== Finding optimal configuration for crawling {total_urls} URLs ===\n")
# Generate test URLs with timestamp to avoid caching
timestamp = int(time.time())
urls = [f"https://example.com/page_{i}?t={timestamp}" for i in range(total_urls)]
# Limit browser configurations to test (1 browser to max 10)
max_browsers = min(10, total_urls)
configs_to_test = []
# Generate configurations (browser count, pages distribution)
for num_browsers in range(1, max_browsers + 1):
base_pages = total_urls // num_browsers
remainder = total_urls % num_browsers
# Create distribution array like [3, 3, 2, 2] (some browsers get one more page)
if remainder > 0:
distribution = [base_pages + 1] * remainder + [base_pages] * (num_browsers - remainder)
else:
distribution = [base_pages] * num_browsers
configs_to_test.append((num_browsers, distribution))
results = []
# Test each configuration
for browser_count, page_distribution in configs_to_test:
if verbose:
print(f"Testing {browser_count} browsers with distribution {tuple(page_distribution)}")
managers = []  # initialise before the try so the cleanup in the except block cannot raise NameError
try:
# Track memory if possible
try:
import psutil
process = psutil.Process()
start_memory = process.memory_info().rss / (1024 * 1024) # MB
except ImportError:
if verbose:
print("Memory tracking not available (psutil not installed)")
start_memory = 0
# Start browsers in parallel
managers = []
start_tasks = []
start_time = time.time()
logger = AsyncLogger(verbose=True, log_file=None)
for i in range(browser_count):
config = BrowserConfig(headless=True)
manager = BrowserManager(browser_config=config, logger=logger)
start_tasks.append(manager.start())
managers.append(manager)
await asyncio.gather(*start_tasks)
# Distribute URLs among browsers
urls_per_manager = {}
url_index = 0
for i, manager in enumerate(managers):
pages_for_this_browser = page_distribution[i]
end_index = url_index + pages_for_this_browser
urls_per_manager[manager] = urls[url_index:end_index]
url_index = end_index
# Create pages for each browser
all_pages = []
for manager, manager_urls in urls_per_manager.items():
if not manager_urls:
continue
pages = await manager.get_pages(CrawlerRunConfig(), count=len(manager_urls))
all_pages.extend(zip(pages, manager_urls))
# Crawl pages with delay to avoid rate limiting
async def crawl_page(page_ctx, url):
page, _ = page_ctx
try:
await page.goto(url)
if rate_limit_delay > 0:
await asyncio.sleep(rate_limit_delay)
title = await page.title()
return title
finally:
await page.close()
crawl_start = time.time()
crawl_tasks = [crawl_page(page_ctx, url) for page_ctx, url in all_pages]
await asyncio.gather(*crawl_tasks)
crawl_time = time.time() - crawl_start
total_time = time.time() - start_time
# Measure final memory usage
if start_memory > 0:
end_memory = process.memory_info().rss / (1024 * 1024)
memory_used = end_memory - start_memory
else:
memory_used = 0
# Close all browsers
for manager in managers:
await manager.close()
# Calculate metrics
pages_per_second = total_urls / crawl_time
# Calculate efficiency score (higher is better)
# This balances speed vs memory
if memory_used > 0:
efficiency = pages_per_second / (memory_used + 1)
else:
efficiency = pages_per_second
# Store result
result = {
"browser_count": browser_count,
"distribution": tuple(page_distribution),
"crawl_time": crawl_time,
"total_time": total_time,
"memory_used": memory_used,
"pages_per_second": pages_per_second,
"efficiency": efficiency
}
results.append(result)
if verbose:
print(f" ✓ Crawled {total_urls} pages in {crawl_time:.2f}s ({pages_per_second:.1f} pages/sec)")
if memory_used > 0:
print(f" ✓ Memory used: {memory_used:.1f}MB ({memory_used/total_urls:.1f}MB per page)")
print(f" ✓ Efficiency score: {efficiency:.4f}")
except Exception as e:
if verbose:
print(f" ✗ Error: {str(e)}")
# Clean up
for manager in managers:
try:
await manager.close()
except Exception:
pass
# If no successful results, return None
if not results:
return None
# Find best configurations
fastest = sorted(results, key=lambda x: x["crawl_time"])[0]
# Only consider memory if available
memory_results = [r for r in results if r["memory_used"] > 0]
if memory_results:
lowest_memory = sorted(memory_results, key=lambda x: x["memory_used"])[0]
else:
lowest_memory = fastest
# Find most efficient (balanced speed vs memory)
optimal = sorted(results, key=lambda x: x["efficiency"], reverse=True)[0]
# Print summary
if verbose:
print("\n=== OPTIMAL CONFIGURATIONS ===")
print(f"⚡ Fastest: {fastest['browser_count']} browsers {fastest['distribution']}")
print(f" {fastest['crawl_time']:.2f}s, {fastest['pages_per_second']:.1f} pages/sec")
print(f"💾 Memory-efficient: {lowest_memory['browser_count']} browsers {lowest_memory['distribution']}")
if lowest_memory["memory_used"] > 0:
print(f" {lowest_memory['memory_used']:.1f}MB, {lowest_memory['memory_used']/total_urls:.2f}MB per page")
print(f"🌟 Balanced optimal: {optimal['browser_count']} browsers {optimal['distribution']}")
print(f" {optimal['crawl_time']:.2f}s, {optimal['pages_per_second']:.1f} pages/sec, score: {optimal['efficiency']:.4f}")
return {
"fastest": fastest,
"lowest_memory": lowest_memory,
"optimal": optimal,
"all_configs": results
}
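# Example invocation (URLs are the placeholder example.com pages generated
# above, so this is a synthetic benchmark):
#
#     import asyncio
#
#     best = asyncio.run(find_optimal_browser_config(total_urls=20))
#     if best:
#         print(best["optimal"]["browser_count"], best["optimal"]["distribution"])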
# Find the process ID of a browser already listening on a given port
def find_process_by_port(port: int) -> str:
"""Find process ID listening on a specific port.
Args:
port: Port number to check
Returns:
str: Process ID or empty string if not found
"""
try:
if is_windows():
cmd = f"netstat -ano | findstr :{port}"
result = subprocess.check_output(cmd, shell=True).decode()
return result.strip().split()[-1] if result else ""
else:
cmd = f"lsof -i :{port} -t"
return subprocess.check_output(cmd, shell=True).decode().strip()
except subprocess.CalledProcessError:
return ""
async def check_process_is_running(process: subprocess.Popen, delay: float = 0.5) -> Tuple[bool, Optional[int], bytes, bytes]:
"""Perform a quick check to make sure the browser started successfully."""
if not process:
return False, None, b"", b""
# Check that process started without immediate termination
await asyncio.sleep(delay)
if process.poll() is not None:
# Process already terminated
stdout, stderr = b"", b""
try:
stdout, stderr = process.communicate(timeout=0.5)
except subprocess.TimeoutExpired:
pass
return False, process.returncode, stdout, stderr
return True, 0, b"", b""
def terminate_process(
pid: Union[int, str],
timeout: float = 5.0,
force_kill_timeout: float = 3.0,
logger = None
) -> Tuple[bool, Optional[str]]:
"""
Robustly terminate a process across platforms with verification.
Args:
pid: Process ID to terminate (int or string)
timeout: Seconds to wait for graceful termination before force killing
force_kill_timeout: Seconds to wait after force kill before considering it failed
logger: Optional logger object with error, warning, and info methods
Returns:
Tuple of (success: bool, error_message: Optional[str])
"""
# Convert pid to int if it's a string
if isinstance(pid, str):
try:
pid = int(pid)
except ValueError:
error_msg = f"Invalid PID format: {pid}"
if logger:
logger.error(error_msg)
return False, error_msg
# Check if process exists
if not psutil.pid_exists(pid):
return True, None # Process already terminated
try:
process = psutil.Process(pid)
# First attempt: graceful termination
if logger:
logger.info(f"Attempting graceful termination of process {pid}")
if os.name == 'nt': # Windows
subprocess.run(["taskkill", "/PID", str(pid)],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
check=False)
else: # Unix/Linux/MacOS
process.send_signal(signal.SIGTERM)
# Wait for process to terminate
try:
process.wait(timeout=timeout)
if logger:
logger.info(f"Process {pid} terminated gracefully")
return True, None
except psutil.TimeoutExpired:
if logger:
logger.warning(f"Process {pid} did not terminate gracefully within {timeout} seconds, forcing termination")
# Second attempt: force kill
if os.name == 'nt': # Windows
subprocess.run(["taskkill", "/F", "/PID", str(pid)],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
check=False)
else: # Unix/Linux/MacOS
process.send_signal(signal.SIGKILL)
# Verify process is killed
gone, alive = psutil.wait_procs([process], timeout=force_kill_timeout)
if process in alive:
error_msg = f"Failed to kill process {pid} even after force kill"
if logger:
logger.error(error_msg)
return False, error_msg
if logger:
logger.info(f"Process {pid} terminated by force")
return True, None
except psutil.NoSuchProcess:
# Process terminated while we were working with it
if logger:
logger.info(f"Process {pid} already terminated")
return True, None
except Exception as e:
error_msg = f"Error terminating process {pid}: {str(e)}"
if logger:
logger.error(error_msg)
return False, error_msg
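# Usage sketch:
#
#     ok, err = terminate_process(12345, timeout=5.0)
#     if not ok:
#         print(f"could not terminate: {err}")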

View File

@@ -5,7 +5,10 @@ import os
import sys
import shutil
import tempfile
import psutil
import signal
import subprocess
import shlex
from playwright.async_api import BrowserContext
import hashlib
from .js_snippet import load_js_script
@@ -76,6 +79,51 @@ class ManagedBrowser:
_cleanup(): Terminates the browser process and removes the temporary directory.
create_profile(): Static method to create a user profile by launching a browser for user interaction.
"""
@staticmethod
def build_browser_flags(config: BrowserConfig) -> List[str]:
"""Common CLI flags for launching Chromium"""
flags = [
"--disable-gpu",
"--disable-gpu-compositing",
"--disable-software-rasterizer",
"--no-sandbox",
"--disable-dev-shm-usage",
"--no-first-run",
"--no-default-browser-check",
"--disable-infobars",
"--window-position=0,0",
"--ignore-certificate-errors",
"--ignore-certificate-errors-spki-list",
"--disable-blink-features=AutomationControlled",
"--window-position=400,0",
"--disable-renderer-backgrounding",
"--disable-ipc-flooding-protection",
"--force-color-profile=srgb",
"--mute-audio",
"--disable-background-timer-throttling",
]
if config.light_mode:
flags.extend(BROWSER_DISABLE_OPTIONS)
if config.text_mode:
flags.extend([
"--blink-settings=imagesEnabled=false",
"--disable-remote-fonts",
"--disable-images",
"--disable-javascript",
"--disable-software-rasterizer",
"--disable-dev-shm-usage",
])
# proxy support
if config.proxy:
flags.append(f"--proxy-server={config.proxy}")
elif config.proxy_config:
creds = ""
if config.proxy_config.username and config.proxy_config.password:
creds = f"{config.proxy_config.username}:{config.proxy_config.password}@"
flags.append(f"--proxy-server={creds}{config.proxy_config.server}")
# dedupe
return list(dict.fromkeys(flags))
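# e.g. build_browser_flags(BrowserConfig(text_mode=True)) yields the common
# flag set plus the image/JS-disabling flags, de-duplicated in first-seen
# order by the dict.fromkeys trick above (illustrative).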
browser_type: str
user_data_dir: str
@@ -94,6 +142,7 @@ class ManagedBrowser:
host: str = "localhost",
debugging_port: int = 9222,
cdp_url: Optional[str] = None,
browser_config: Optional[BrowserConfig] = None,
):
"""
Initialize the ManagedBrowser instance.
@@ -109,17 +158,19 @@ class ManagedBrowser:
host (str): Host for debugging the browser. Default: "localhost".
debugging_port (int): Port for debugging the browser. Default: 9222.
cdp_url (str or None): CDP URL to connect to the browser. Default: None.
browser_config (BrowserConfig): Configuration object containing all browser settings. Default: None.
"""
self.browser_type = browser_type
self.user_data_dir = user_data_dir
self.headless = headless
self.browser_type = browser_config.browser_type
self.user_data_dir = browser_config.user_data_dir
self.headless = browser_config.headless
self.browser_process = None
self.temp_dir = None
self.debugging_port = debugging_port
self.host = host
self.debugging_port = browser_config.debugging_port
self.host = browser_config.host
self.logger = logger
self.shutting_down = False
self.cdp_url = cdp_url
self.cdp_url = browser_config.cdp_url
self.browser_config = browser_config
async def start(self) -> str:
"""
@@ -142,6 +193,48 @@ class ManagedBrowser:
# Get browser path and args based on OS and browser type
# browser_path = self._get_browser_path()
args = await self._get_browser_args()
if self.browser_config.extra_args:
args.extend(self.browser_config.extra_args)
# ── make sure no old Chromium instance is owning the same port/profile ──
try:
if sys.platform == "win32":
if psutil is None:
raise RuntimeError("psutil not available, cannot clean old browser")
for p in psutil.process_iter(["pid", "name", "cmdline"]):
cl = " ".join(p.info.get("cmdline") or [])
if (
f"--remote-debugging-port={self.debugging_port}" in cl
and f"--user-data-dir={self.user_data_dir}" in cl
):
p.kill()
p.wait(timeout=5)
else: # macOS / Linux
# kill any process listening on the same debugging port
pids = (
subprocess.check_output(shlex.split(f"lsof -t -i:{self.debugging_port}"))
.decode()
.strip()
.splitlines()
)
for pid in pids:
try:
os.kill(int(pid), signal.SIGTERM)
except ProcessLookupError:
pass
# remove Chromium singleton locks, or new launch exits with
# “Opening in existing browser session.”
for f in ("SingletonLock", "SingletonSocket", "SingletonCookie"):
fp = os.path.join(self.user_data_dir, f)
if os.path.exists(fp):
os.remove(fp)
except Exception as _e:
# non-fatal — we'll try to start anyway, but log what happened
self.logger.warning(f"pre-launch cleanup failed: {_e}", tag="BROWSER")
# Start browser process
try:
@@ -274,29 +367,29 @@ class ManagedBrowser:
return browser_path
async def _get_browser_args(self) -> List[str]:
"""Returns browser-specific command line arguments"""
base_args = [await self._get_browser_path()]
"""Returns full CLI args for launching the browser"""
base = [await self._get_browser_path()]
if self.browser_type == "chromium":
args = [
flags = [
f"--remote-debugging-port={self.debugging_port}",
f"--user-data-dir={self.user_data_dir}",
]
if self.headless:
args.append("--headless=new")
flags.append("--headless=new")
# merge common launch flags
flags.extend(self.build_browser_flags(self.browser_config))
elif self.browser_type == "firefox":
args = [
flags = [
"--remote-debugging-port",
str(self.debugging_port),
"--profile",
self.user_data_dir,
]
if self.headless:
args.append("--headless")
flags.append("--headless")
else:
raise NotImplementedError(f"Browser type {self.browser_type} not supported")
return base_args + args
return base + flags
async def cleanup(self):
"""Cleanup browser process and temporary directory"""
@@ -440,8 +533,7 @@ class BrowserManager:
@classmethod
async def get_playwright(cls):
from playwright.async_api import async_playwright
if cls._playwright_instance is None:
cls._playwright_instance = await async_playwright().start()
cls._playwright_instance = await async_playwright().start()
return cls._playwright_instance
def __init__(self, browser_config: BrowserConfig, logger=None):
@@ -478,6 +570,7 @@ class BrowserManager:
logger=self.logger,
debugging_port=self.config.debugging_port,
cdp_url=self.config.cdp_url,
browser_config=self.config,
)
async def start(self):
@@ -492,11 +585,12 @@ class BrowserManager:
Note: This method should be called in a separate task to avoid blocking the main event loop.
"""
self.playwright = await self.get_playwright()
if self.playwright is None:
from playwright.async_api import async_playwright
if self.playwright is not None:
await self.close()
from playwright.async_api import async_playwright
self.playwright = await async_playwright().start()
self.playwright = await async_playwright().start()
if self.config.cdp_url or self.config.use_managed_browser:
self.config.use_managed_browser = True
@@ -565,6 +659,9 @@ class BrowserManager:
if self.config.extra_args:
args.extend(self.config.extra_args)
# Deduplicate args
args = list(dict.fromkeys(args))
browser_args = {"headless": self.config.headless, "args": args}
if self.config.chrome_channel:
@@ -660,7 +757,7 @@ class BrowserManager:
"name": "cookiesEnabled",
"value": "true",
"url": crawlerRunConfig.url
if crawlerRunConfig
if crawlerRunConfig and crawlerRunConfig.url
else "https://crawl4ai.com/",
}
]
@@ -779,6 +876,23 @@ class BrowserManager:
# Update context settings with text mode settings
context_settings.update(text_mode_settings)
# inject locale / tz / geo if user provided them
if crawlerRunConfig:
if crawlerRunConfig.locale:
context_settings["locale"] = crawlerRunConfig.locale
if crawlerRunConfig.timezone_id:
context_settings["timezone_id"] = crawlerRunConfig.timezone_id
if crawlerRunConfig.geolocation:
context_settings["geolocation"] = {
"latitude": crawlerRunConfig.geolocation.latitude,
"longitude": crawlerRunConfig.geolocation.longitude,
"accuracy": crawlerRunConfig.geolocation.accuracy,
}
# ensure geolocation permission
perms = context_settings.get("permissions", [])
perms.append("geolocation")
context_settings["permissions"] = perms
# Create and return the context with all settings
context = await self.browser.new_context(**context_settings)
@@ -811,6 +925,10 @@ class BrowserManager:
"semaphore_count",
"url"
]
# Do NOT exclude locale, timezone_id, or geolocation as these DO affect browser context
# and should cause a new context to be created if they change
for key in ephemeral_keys:
if key in config_dict:
del config_dict[key]
@@ -846,7 +964,7 @@ class BrowserManager:
pages = context.pages
page = next((p for p in pages if p.url == crawlerRunConfig.url), None)
if not page:
page = await context.new_page()
page = context.pages[0] # await context.new_page()
else:
# Otherwise, check if we have an existing context for this config
config_signature = self._make_config_signature(crawlerRunConfig)

View File

@@ -15,12 +15,12 @@ import shutil
import json
import subprocess
import time
from typing import List, Dict, Optional, Any, Tuple
from colorama import Fore, Style, init
from typing import List, Dict, Optional, Any
from rich.console import Console
from .async_configs import BrowserConfig
from .browser_manager import ManagedBrowser
from .async_logger import AsyncLogger, AsyncLoggerBase
from .async_logger import AsyncLogger, AsyncLoggerBase, LogColor
from .utils import get_home_folder
@@ -45,8 +45,8 @@ class BrowserProfiler:
logger (AsyncLoggerBase, optional): Logger for outputting messages.
If None, a default AsyncLogger will be created.
"""
# Initialize colorama for colorful terminal output
init()
# Initialize rich console for colorful input prompts
self.console = Console()
# Create a logger if not provided
if logger is None:
@@ -127,26 +127,30 @@ class BrowserProfiler:
profile_path = os.path.join(self.profiles_dir, profile_name)
os.makedirs(profile_path, exist_ok=True)
# Print instructions for the user with colorama formatting
border = f"{Fore.CYAN}{'='*80}{Style.RESET_ALL}"
self.logger.info(f"\n{border}", tag="PROFILE")
self.logger.info(f"Creating browser profile: {Fore.GREEN}{profile_name}{Style.RESET_ALL}", tag="PROFILE")
self.logger.info(f"Profile directory: {Fore.YELLOW}{profile_path}{Style.RESET_ALL}", tag="PROFILE")
# Print instructions for the user with rich formatting
border = f"{'='*80}"
self.logger.info("{border}", tag="PROFILE", params={"border": f"\n{border}"}, colors={"border": LogColor.CYAN})
self.logger.info("Creating browser profile: {profile_name}", tag="PROFILE", params={"profile_name": profile_name}, colors={"profile_name": LogColor.GREEN})
self.logger.info("Profile directory: {profile_path}", tag="PROFILE", params={"profile_path": profile_path}, colors={"profile_path": LogColor.YELLOW})
self.logger.info("\nInstructions:", tag="PROFILE")
self.logger.info("1. A browser window will open for you to set up your profile.", tag="PROFILE")
self.logger.info(f"2. {Fore.CYAN}Log in to websites{Style.RESET_ALL}, configure settings, etc. as needed.", tag="PROFILE")
self.logger.info(f"3. When you're done, {Fore.YELLOW}press 'q' in this terminal{Style.RESET_ALL} to close the browser.", tag="PROFILE")
self.logger.info("{segment}, configure settings, etc. as needed.", tag="PROFILE", params={"segment": "2. Log in to websites"}, colors={"segment": LogColor.CYAN})
self.logger.info("3. When you're done, {segment} to close the browser.", tag="PROFILE", params={"segment": "press 'q' in this terminal"}, colors={"segment": LogColor.YELLOW})
self.logger.info("4. The profile will be saved and ready to use with Crawl4AI.", tag="PROFILE")
self.logger.info(f"{border}\n", tag="PROFILE")
self.logger.info("{border}", tag="PROFILE", params={"border": f"{border}\n"}, colors={"border": LogColor.CYAN})
browser_config.headless = False
browser_config.user_data_dir = profile_path
# Create managed browser instance
managed_browser = ManagedBrowser(
browser_type=browser_config.browser_type,
user_data_dir=profile_path,
headless=False, # Must be visible
browser_config=browser_config,
# user_data_dir=profile_path,
# headless=False, # Must be visible
logger=self.logger,
debugging_port=browser_config.debugging_port
# debugging_port=browser_config.debugging_port
)
# Set up signal handlers to ensure cleanup on interrupt
@@ -181,7 +185,7 @@ class BrowserProfiler:
import select
# First output the prompt
self.logger.info(f"{Fore.CYAN}Press '{Fore.WHITE}q{Fore.CYAN}' when you've finished using the browser...{Style.RESET_ALL}", tag="PROFILE")
self.logger.info("Press 'q' when you've finished using the browser...", tag="PROFILE")
# Save original terminal settings
fd = sys.stdin.fileno()
@@ -197,7 +201,7 @@ class BrowserProfiler:
if readable:
key = sys.stdin.read(1)
if key.lower() == 'q':
self.logger.info(f"{Fore.GREEN}Closing browser and saving profile...{Style.RESET_ALL}", tag="PROFILE")
self.logger.info("Closing browser and saving profile...", tag="PROFILE", base_color=LogColor.GREEN)
user_done_event.set()
return
@@ -223,7 +227,7 @@ class BrowserProfiler:
self.logger.error("Failed to start browser process.", tag="PROFILE")
return None
self.logger.info(f"Browser launched. {Fore.CYAN}Waiting for you to finish...{Style.RESET_ALL}", tag="PROFILE")
self.logger.info("Browser launched. Waiting for you to finish...", tag="PROFILE")
# Start listening for keyboard input
listener_task = asyncio.create_task(listen_for_quit_command())
@@ -245,10 +249,10 @@ class BrowserProfiler:
self.logger.info("Terminating browser process...", tag="PROFILE")
await managed_browser.cleanup()
self.logger.success(f"Browser closed. Profile saved at: {Fore.GREEN}{profile_path}{Style.RESET_ALL}", tag="PROFILE")
self.logger.success(f"Browser closed. Profile saved at: {profile_path}", tag="PROFILE")
except Exception as e:
self.logger.error(f"Error creating profile: {str(e)}", tag="PROFILE")
self.logger.error(f"Error creating profile: {e!s}", tag="PROFILE")
await managed_browser.cleanup()
return None
finally:
@@ -440,25 +444,27 @@ class BrowserProfiler:
```
"""
while True:
self.logger.info(f"\n{Fore.CYAN}Profile Management Options:{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"1. {Fore.GREEN}Create a new profile{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"2. {Fore.YELLOW}List available profiles{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"3. {Fore.RED}Delete a profile{Style.RESET_ALL}", tag="MENU")
self.logger.info("\nProfile Management Options:", tag="MENU")
self.logger.info("1. Create a new profile", tag="MENU", base_color=LogColor.GREEN)
self.logger.info("2. List available profiles", tag="MENU", base_color=LogColor.YELLOW)
self.logger.info("3. Delete a profile", tag="MENU", base_color=LogColor.RED)
# Only show crawl option if callback provided
if crawl_callback:
self.logger.info(f"4. {Fore.CYAN}Use a profile to crawl a website{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"5. {Fore.MAGENTA}Exit{Style.RESET_ALL}", tag="MENU")
self.logger.info("4. Use a profile to crawl a website", tag="MENU", base_color=LogColor.CYAN)
self.logger.info("5. Exit", tag="MENU", base_color=LogColor.MAGENTA)
exit_option = "5"
else:
self.logger.info(f"4. {Fore.MAGENTA}Exit{Style.RESET_ALL}", tag="MENU")
self.logger.info("4. Exit", tag="MENU", base_color=LogColor.MAGENTA)
exit_option = "4"
choice = input(f"\n{Fore.CYAN}Enter your choice (1-{exit_option}): {Style.RESET_ALL}")
self.logger.print(f"\n[cyan]Enter your choice (1-{exit_option}): [/cyan]", end="")
choice = input()
if choice == "1":
# Create new profile
name = input(f"{Fore.GREEN}Enter a name for the new profile (or press Enter for auto-generated name): {Style.RESET_ALL}")
self.console.print("[green]Enter a name for the new profile (or press Enter for auto-generated name): [/green]", end="")
name = input()
await self.create_profile(name or None)
elif choice == "2":
@@ -469,11 +475,11 @@ class BrowserProfiler:
self.logger.warning(" No profiles found. Create one first with option 1.", tag="PROFILES")
continue
# Print profile information with colorama formatting
# Print profile information
self.logger.info("\nAvailable profiles:", tag="PROFILES")
for i, profile in enumerate(profiles):
self.logger.info(f"[{i+1}] {Fore.CYAN}{profile['name']}{Style.RESET_ALL}", tag="PROFILES")
self.logger.info(f" Path: {Fore.YELLOW}{profile['path']}{Style.RESET_ALL}", tag="PROFILES")
self.logger.info(f"[{i+1}] {profile['name']}", tag="PROFILES")
self.logger.info(f" Path: {profile['path']}", tag="PROFILES", base_color=LogColor.YELLOW)
self.logger.info(f" Created: {profile['created'].strftime('%Y-%m-%d %H:%M:%S')}", tag="PROFILES")
self.logger.info(f" Browser type: {profile['type']}", tag="PROFILES")
self.logger.info("", tag="PROFILES") # Empty line for spacing
@@ -486,12 +492,13 @@ class BrowserProfiler:
continue
# Display numbered list
self.logger.info(f"\n{Fore.YELLOW}Available profiles:{Style.RESET_ALL}", tag="PROFILES")
self.logger.info("\nAvailable profiles:", tag="PROFILES", base_color=LogColor.YELLOW)
for i, profile in enumerate(profiles):
self.logger.info(f"[{i+1}] {profile['name']}", tag="PROFILES")
# Get profile to delete
profile_idx = input(f"{Fore.RED}Enter the number of the profile to delete (or 'c' to cancel): {Style.RESET_ALL}")
self.console.print("[red]Enter the number of the profile to delete (or 'c' to cancel): [/red]", end="")
profile_idx = input()
if profile_idx.lower() == 'c':
continue
@@ -499,17 +506,18 @@ class BrowserProfiler:
idx = int(profile_idx) - 1
if 0 <= idx < len(profiles):
profile_name = profiles[idx]["name"]
self.logger.info(f"Deleting profile: {Fore.YELLOW}{profile_name}{Style.RESET_ALL}", tag="PROFILES")
self.logger.info(f"Deleting profile: [yellow]{profile_name}[/yellow]", tag="PROFILES")
# Confirm deletion
confirm = input(f"{Fore.RED}Are you sure you want to delete this profile? (y/n): {Style.RESET_ALL}")
self.console.print("[red]Are you sure you want to delete this profile? (y/n): [/red]", end="")
confirm = input()
if confirm.lower() == 'y':
success = self.delete_profile(profiles[idx]["path"])
if success:
self.logger.success(f"Profile {Fore.GREEN}{profile_name}{Style.RESET_ALL} deleted successfully", tag="PROFILES")
self.logger.success(f"Profile {profile_name} deleted successfully", tag="PROFILES")
else:
self.logger.error(f"Failed to delete profile {Fore.RED}{profile_name}{Style.RESET_ALL}", tag="PROFILES")
self.logger.error(f"Failed to delete profile {profile_name}", tag="PROFILES")
else:
self.logger.error("Invalid profile number", tag="PROFILES")
except ValueError:
@@ -523,12 +531,13 @@ class BrowserProfiler:
continue
# Display numbered list
self.logger.info(f"\n{Fore.YELLOW}Available profiles:{Style.RESET_ALL}", tag="PROFILES")
self.logger.info("\nAvailable profiles:", tag="PROFILES", base_color=LogColor.YELLOW)
for i, profile in enumerate(profiles):
self.logger.info(f"[{i+1}] {profile['name']}", tag="PROFILES")
# Get profile to use
profile_idx = input(f"{Fore.CYAN}Enter the number of the profile to use (or 'c' to cancel): {Style.RESET_ALL}")
self.console.print("[cyan]Enter the number of the profile to use (or 'c' to cancel): [/cyan]", end="")
profile_idx = input()
if profile_idx.lower() == 'c':
continue
@@ -536,7 +545,8 @@ class BrowserProfiler:
idx = int(profile_idx) - 1
if 0 <= idx < len(profiles):
profile_path = profiles[idx]["path"]
url = input(f"{Fore.CYAN}Enter the URL to crawl: {Style.RESET_ALL}")
self.console.print("[cyan]Enter the URL to crawl: [/cyan]", end="")
url = input()
if url:
# Call the provided crawl callback
await crawl_callback(profile_path, url)
@@ -597,13 +607,13 @@ class BrowserProfiler:
os.makedirs(profile_path, exist_ok=True)
# Print initial information
border = f"{Fore.CYAN}{'='*80}{Style.RESET_ALL}"
self.logger.info(f"\n{border}", tag="CDP")
self.logger.info(f"Launching standalone browser with CDP debugging", tag="CDP")
self.logger.info(f"Browser type: {Fore.GREEN}{browser_type}{Style.RESET_ALL}", tag="CDP")
self.logger.info(f"Profile path: {Fore.YELLOW}{profile_path}{Style.RESET_ALL}", tag="CDP")
self.logger.info(f"Debugging port: {Fore.CYAN}{debugging_port}{Style.RESET_ALL}", tag="CDP")
self.logger.info(f"Headless mode: {Fore.CYAN}{headless}{Style.RESET_ALL}", tag="CDP")
border = f"{'='*80}"
self.logger.info("{border}", tag="CDP", params={"border": border}, colors={"border": LogColor.CYAN})
self.logger.info("Launching standalone browser with CDP debugging", tag="CDP")
self.logger.info("Browser type: {browser_type}", tag="CDP", params={"browser_type": browser_type}, colors={"browser_type": LogColor.CYAN})
self.logger.info("Profile path: {profile_path}", tag="CDP", params={"profile_path": profile_path}, colors={"profile_path": LogColor.YELLOW})
self.logger.info(f"Debugging port: {debugging_port}", tag="CDP")
self.logger.info(f"Headless mode: {headless}", tag="CDP")
# Create managed browser instance
managed_browser = ManagedBrowser(
@@ -646,7 +656,7 @@ class BrowserProfiler:
import select
# First output the prompt
self.logger.info(f"{Fore.CYAN}Press '{Fore.WHITE}q{Fore.CYAN}' to stop the browser and exit...{Style.RESET_ALL}", tag="CDP")
self.logger.info("Press 'q' to stop the browser and exit...", tag="CDP")
# Save original terminal settings
fd = sys.stdin.fileno()
@@ -662,7 +672,7 @@ class BrowserProfiler:
if readable:
key = sys.stdin.read(1)
if key.lower() == 'q':
self.logger.info(f"{Fore.GREEN}Closing browser...{Style.RESET_ALL}", tag="CDP")
self.logger.info("Closing browser...", tag="CDP")
user_done_event.set()
return
@@ -716,20 +726,20 @@ class BrowserProfiler:
self.logger.error("Failed to start browser process.", tag="CDP")
return None
self.logger.info(f"Browser launched successfully. Retrieving CDP information...", tag="CDP")
self.logger.info("Browser launched successfully. Retrieving CDP information...", tag="CDP")
# Get CDP URL and JSON config
cdp_url, config_json = await get_cdp_json(debugging_port)
if cdp_url:
self.logger.success(f"CDP URL: {Fore.GREEN}{cdp_url}{Style.RESET_ALL}", tag="CDP")
self.logger.success(f"CDP URL: {cdp_url}", tag="CDP")
if config_json:
# Display relevant CDP information
self.logger.info(f"Browser: {Fore.CYAN}{config_json.get('Browser', 'Unknown')}{Style.RESET_ALL}", tag="CDP")
self.logger.info(f"Protocol Version: {config_json.get('Protocol-Version', 'Unknown')}", tag="CDP")
self.logger.info(f"Browser: {config_json.get('Browser', 'Unknown')}", tag="CDP", colors={"Browser": LogColor.CYAN})
self.logger.info(f"Protocol Version: {config_json.get('Protocol-Version', 'Unknown')}", tag="CDP", colors={"Protocol-Version": LogColor.CYAN})
if 'webSocketDebuggerUrl' in config_json:
self.logger.info(f"WebSocket URL: {Fore.GREEN}{config_json['webSocketDebuggerUrl']}{Style.RESET_ALL}", tag="CDP")
self.logger.info("WebSocket URL: {webSocketDebuggerUrl}", tag="CDP", params={"webSocketDebuggerUrl": config_json['webSocketDebuggerUrl']}, colors={"webSocketDebuggerUrl": LogColor.GREEN})
else:
self.logger.warning("Could not retrieve CDP configuration JSON", tag="CDP")
else:
@@ -757,7 +767,7 @@ class BrowserProfiler:
self.logger.info("Terminating browser process...", tag="CDP")
await managed_browser.cleanup()
self.logger.success(f"Browser closed.", tag="CDP")
self.logger.success("Browser closed.", tag="CDP")
except Exception as e:
self.logger.error(f"Error launching standalone browser: {str(e)}", tag="CDP")
@@ -972,3 +982,30 @@ class BrowserProfiler:
'info': browser_info
}
if __name__ == "__main__":
# Example usage
profiler = BrowserProfiler()
# Create a new profile
import os
from pathlib import Path
home_dir = Path.home()
profile_path = asyncio.run(profiler.create_profile(str(home_dir / ".crawl4ai/profiles/test-profile")))
# Launch a standalone browser
asyncio.run(profiler.launch_standalone_browser())
# List profiles
profiles = profiler.list_profiles()
for profile in profiles:
print(f"Profile: {profile['name']}, Path: {profile['path']}")
# Delete a profile
success = profiler.delete_profile("my-profile")
if success:
print("Profile deleted successfully")
else:
print("Failed to delete profile")


@@ -29,6 +29,14 @@ PROVIDER_MODELS = {
'gemini/gemini-2.0-flash-lite-preview-02-05': os.getenv("GEMINI_API_KEY"),
"deepseek/deepseek-chat": os.getenv("DEEPSEEK_API_KEY"),
}
PROVIDER_MODELS_PREFIXES = {
"ollama": "no-token-needed", # Any model from Ollama no need for API token
"groq": os.getenv("GROQ_API_KEY"),
"openai": os.getenv("OPENAI_API_KEY"),
"anthropic": os.getenv("ANTHROPIC_API_KEY"),
"gemini": os.getenv("GEMINI_API_KEY"),
"deepseek": os.getenv("DEEPSEEK_API_KEY"),
}
# Chunk token threshold
CHUNK_TOKEN_THRESHOLD = 2**11 # 2048 tokens
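The new `PROVIDER_MODELS_PREFIXES` table keys tokens by provider prefix rather than full model name. A hedged sketch of how such a table can resolve a token from a `provider/model` string; the `resolve_token` helper is hypothetical and not part of this diff:

```python
import os

PROVIDER_MODELS_PREFIXES = {
    "ollama": "no-token-needed",  # Ollama models need no API token
    "openai": os.getenv("OPENAI_API_KEY"),
    "groq": os.getenv("GROQ_API_KEY"),
}

def resolve_token(provider: str):
    # Hypothetical helper: "openai/gpt-4o" resolves via its "openai" prefix.
    prefix = provider.split("/", 1)[0]
    return PROVIDER_MODELS_PREFIXES.get(prefix)

assert resolve_token("ollama/llama3") == "no-token-needed"
```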


@@ -27,8 +27,7 @@ import json
import hashlib
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor
from .async_logger import AsyncLogger, LogLevel
from colorama import Fore, Style
from .async_logger import AsyncLogger, LogLevel, LogColor
class RelevantContentFilter(ABC):
@@ -846,8 +845,7 @@ class LLMContentFilter(RelevantContentFilter):
},
colors={
**AsyncLogger.DEFAULT_COLORS,
LogLevel.INFO: Fore.MAGENTA
+ Style.DIM, # Dimmed purple for LLM ops
LogLevel.INFO: LogColor.DIM_MAGENTA # Dimmed purple for LLM ops
},
)
else:
@@ -892,7 +890,7 @@ class LLMContentFilter(RelevantContentFilter):
"Starting LLM markdown content filtering process",
tag="LLM",
params={"provider": self.llm_config.provider},
colors={"provider": Fore.CYAN},
colors={"provider": LogColor.CYAN},
)
# Cache handling
@@ -929,7 +927,7 @@ class LLMContentFilter(RelevantContentFilter):
"LLM markdown: Split content into {chunk_count} chunks",
tag="CHUNK",
params={"chunk_count": len(html_chunks)},
colors={"chunk_count": Fore.YELLOW},
colors={"chunk_count": LogColor.YELLOW},
)
start_time = time.time()
@@ -1038,7 +1036,7 @@ class LLMContentFilter(RelevantContentFilter):
"LLM markdown: Completed processing in {time:.2f}s",
tag="LLM",
params={"time": end_time - start_time},
colors={"time": Fore.YELLOW},
colors={"time": LogColor.YELLOW},
)
result = ordered_results if ordered_results else []


@@ -28,6 +28,7 @@ from lxml import etree
from lxml import html as lhtml
from typing import List
from .models import ScrapingResult, MediaItem, Link, Media, Links
import copy
# Pre-compile regular expressions for Open Graph and Twitter metadata
OG_REGEX = re.compile(r"^og:")
@@ -48,7 +49,7 @@ def parse_srcset(s: str) -> List[Dict]:
if len(parts) >= 1:
url = parts[0]
width = (
parts[1].rstrip("w")
parts[1].rstrip("w").split('.')[0]
if len(parts) > 1 and parts[1].endswith("w")
else None
)
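The added `.split('.')[0]` truncates fractional width descriptors (e.g. `800.5w`) so downstream integer parsing does not fail; a small illustration under that assumption:

```python
# Before the fix, "800.5w" yielded the width string "800.5";
# rstrip("w").split('.')[0] now yields "800".
part = "800.5w"
width = part.rstrip("w").split('.')[0] if part.endswith("w") else None
assert width == "800"
```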
@@ -128,7 +129,8 @@ class WebScrapingStrategy(ContentScrapingStrategy):
Returns:
ScrapingResult: A structured result containing the scraped content.
"""
raw_result = self._scrap(url, html, is_async=False, **kwargs)
actual_url = kwargs.get("redirected_url", url)
raw_result = self._scrap(actual_url, html, is_async=False, **kwargs)
if raw_result is None:
return ScrapingResult(
cleaned_html="",
@@ -619,6 +621,9 @@ class WebScrapingStrategy(ContentScrapingStrategy):
return False
keep_element = False
# Special case for table elements - always preserve structure
if element.name in ["tr", "td", "th"]:
keep_element = True
exclude_domains = kwargs.get("exclude_domains", [])
# exclude_social_media_domains = kwargs.get('exclude_social_media_domains', set(SOCIAL_MEDIA_DOMAINS))
@@ -859,7 +864,15 @@ class WebScrapingStrategy(ContentScrapingStrategy):
parser_type = kwargs.get("parser", "lxml")
soup = BeautifulSoup(html, parser_type)
body = soup.body
if body is None:
raise Exception("'<body>' tag is not found in fetched html. Consider adding wait_for=\"css:body\" to wait for body tag to be loaded into DOM.")
base_domain = get_base_domain(url)
# Early removal of all images if exclude_all_images is set
# This happens before any processing to minimize memory usage
if kwargs.get("exclude_all_images", False):
for img in body.find_all('img'):
img.decompose()
try:
meta = extract_metadata("", soup)
@@ -891,23 +904,6 @@ class WebScrapingStrategy(ContentScrapingStrategy):
for element in body.select(excluded_selector):
element.extract()
# if False and css_selector:
# selected_elements = body.select(css_selector)
# if not selected_elements:
# return {
# "markdown": "",
# "cleaned_html": "",
# "success": True,
# "media": {"images": [], "videos": [], "audios": []},
# "links": {"internal": [], "external": []},
# "metadata": {},
# "message": f"No elements found for CSS selector: {css_selector}",
# }
# # raise InvalidCSSSelectorError(f"Invalid CSS selector, No elements found for CSS selector: {css_selector}")
# body = soup.new_tag("div")
# for el in selected_elements:
# body.append(el)
content_element = None
if target_elements:
try:
@@ -916,12 +912,12 @@ class WebScrapingStrategy(ContentScrapingStrategy):
for_content_targeted_element.extend(body.select(target_element))
content_element = soup.new_tag("div")
for el in for_content_targeted_element:
content_element.append(el)
content_element.append(copy.deepcopy(el))
except Exception as e:
self._log("error", f"Error with target element detection: {str(e)}", "SCRAPE")
return None
else:
content_element = body
content_element = body
kwargs["exclude_social_media_domains"] = set(
kwargs.get("exclude_social_media_domains", []) + SOCIAL_MEDIA_DOMAINS
@@ -1302,6 +1298,9 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
"source",
"track",
"wbr",
"tr",
"td",
"th",
}
for el in reversed(list(root.iterdescendants())):
@@ -1491,6 +1490,13 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
body = doc
base_domain = get_base_domain(url)
# Early removal of all images if exclude_all_images is set
# This is more efficient in lxml as we remove elements before any processing
if kwargs.get("exclude_all_images", False):
for img in body.xpath('//img'):
if img.getparent() is not None:
img.getparent().remove(img)
# Add comment removal
if kwargs.get("remove_comments", False):
@@ -1527,26 +1533,6 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
self._log("error", f"Error extracting metadata: {str(e)}", "SCRAPE")
meta = {}
# Handle CSS selector targeting
# if css_selector:
# try:
# selected_elements = body.cssselect(css_selector)
# if not selected_elements:
# return {
# "markdown": "",
# "cleaned_html": "",
# "success": True,
# "media": {"images": [], "videos": [], "audios": []},
# "links": {"internal": [], "external": []},
# "metadata": meta,
# "message": f"No elements found for CSS selector: {css_selector}",
# }
# body = lhtml.Element("div")
# body.extend(selected_elements)
# except Exception as e:
# self._log("error", f"Error with CSS selector: {str(e)}", "SCRAPE")
# return None
content_element = None
if target_elements:
try:
@@ -1554,7 +1540,7 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
for target_element in target_elements:
for_content_targeted_element.extend(body.cssselect(target_element))
content_element = lhtml.Element("div")
content_element.extend(for_content_targeted_element)
content_element.extend(copy.deepcopy(for_content_targeted_element))
except Exception as e:
self._log("error", f"Error with target element detection: {str(e)}", "SCRAPE")
return None
@@ -1623,7 +1609,7 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
# Remove empty elements
self.remove_empty_elements_fast(body, 1)
# Remvoe unneeded attributes
# Remove unneeded attributes
self.remove_unwanted_attributes_fast(
body, keep_data_attributes=kwargs.get("keep_data_attributes", False)
)


@@ -11,6 +11,7 @@ from .scorers import URLScorer
from . import DeepCrawlStrategy
from ..types import AsyncWebCrawler, CrawlerRunConfig, CrawlResult, RunManyReturn
from ..utils import normalize_url_for_deep_crawl
from math import inf as infinity
@@ -106,13 +107,14 @@ class BestFirstCrawlingStrategy(DeepCrawlStrategy):
valid_links = []
for link in links:
url = link.get("href")
if url in visited:
base_url = normalize_url_for_deep_crawl(url, source_url)
if base_url in visited:
continue
if not await self.can_process_url(url, new_depth):
self.stats.urls_skipped += 1
continue
valid_links.append(url)
valid_links.append(base_url)
# If we have more valid links than capacity, limit them
if len(valid_links) > remaining_capacity:


@@ -117,7 +117,8 @@ class BFSDeepCrawlStrategy(DeepCrawlStrategy):
self.logger.debug(f"URL {url} skipped: score {score} below threshold {self.score_threshold}")
self.stats.urls_skipped += 1
continue
visited.add(base_url)
valid_links.append((base_url, score))
# If we have more valid links than capacity, sort by score and take the top ones
@@ -158,7 +159,6 @@ class BFSDeepCrawlStrategy(DeepCrawlStrategy):
while current_level and not self._cancel_event.is_set():
next_level: List[Tuple[str, Optional[str]]] = []
urls = [url for url, _ in current_level]
visited.update(urls)
# Clone the config to disable deep crawling recursion and enforce batch mode.
batch_config = config.clone(deep_crawl_strategy=None, stream=False)


@@ -1,13 +1,16 @@
from abc import ABC, abstractmethod
import inspect
from typing import Any, List, Dict, Optional
from typing import Any, List, Dict, Optional, Tuple, Pattern, Union
from concurrent.futures import ThreadPoolExecutor, as_completed
import json
import time
from enum import IntFlag, auto
from .prompts import PROMPT_EXTRACT_BLOCKS, PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION, PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION, JSON_SCHEMA_BUILDER_XPATH, PROMPT_EXTRACT_INFERRED_SCHEMA
from .config import (
DEFAULT_PROVIDER, CHUNK_TOKEN_THRESHOLD,
DEFAULT_PROVIDER,
DEFAULT_PROVIDER_API_KEY,
CHUNK_TOKEN_THRESHOLD,
OVERLAP_RATE,
WORD_TOKEN_RATE,
)
@@ -542,6 +545,11 @@ class LLMExtractionStrategy(ExtractionStrategy):
"""
super().__init__(input_format=input_format, **kwargs)
self.llm_config = llm_config
if not self.llm_config:
self.llm_config = create_llm_config(
provider=DEFAULT_PROVIDER,
api_token=os.environ.get(DEFAULT_PROVIDER_API_KEY),
)
self.instruction = instruction
self.extract_type = extraction_type
self.schema = schema
@@ -672,7 +680,7 @@ class LLMExtractionStrategy(ExtractionStrategy):
block["error"] = False
except Exception:
parsed, unparsed = split_and_parse_json_objects(
response
response.choices[0].message.content
)
blocks = parsed
if unparsed:
@@ -1661,3 +1669,303 @@ class JsonXPathExtractionStrategy(JsonElementExtractionStrategy):
def _get_element_attribute(self, element, attribute: str):
return element.get(attribute)
"""
RegexExtractionStrategy
Fast, zero-LLM extraction of common entities via regular expressions.
"""
_CTRL = {c: rf"\x{ord(c):02x}" for c in map(chr, range(32)) if c not in "\t\n\r"}
_WB_FIX = re.compile(r"\x08") # stray back-space → word-boundary
_NEEDS_ESCAPE = re.compile(r"(?<!\\)\\(?![\\u])") # lone backslash
def _sanitize_schema(schema: Dict[str, str]) -> Dict[str, str]:
"""Fix common JSON-escape goofs coming from LLMs or manual edits."""
safe = {}
for label, pat in schema.items():
# 1⃣ replace accidental control chars (inc. the infamous back-space)
pat = _WB_FIX.sub(r"\\b", pat).translate(_CTRL)
# 2⃣ double any single backslash that JSON kept single
pat = _NEEDS_ESCAPE.sub(r"\\\\", pat)
# 3⃣ quick sanity compile
try:
re.compile(pat)
except re.error as e:
raise ValueError(f"Regex for '{label}' wont compile after fix: {e}") from None
safe[label] = pat
return safe
class RegexExtractionStrategy(ExtractionStrategy):
"""
A lean strategy that finds e-mails, phones, URLs, dates, money, etc.,
using nothing but pre-compiled regular expressions.
Extraction returns::
{
"url": "<page-url>",
"label": "<pattern-label>",
"value": "<matched-string>",
"span": [start, end]
}
Only `generate_pattern()` touches an LLM; extraction itself is pure Python.
"""
# -------------------------------------------------------------- #
# Built-in patterns exposed as IntFlag so callers can bit-OR them
# -------------------------------------------------------------- #
class _B(IntFlag):
EMAIL = auto()
PHONE_INTL = auto()
PHONE_US = auto()
URL = auto()
IPV4 = auto()
IPV6 = auto()
UUID = auto()
CURRENCY = auto()
PERCENTAGE = auto()
NUMBER = auto()
DATE_ISO = auto()
DATE_US = auto()
TIME_24H = auto()
POSTAL_US = auto()
POSTAL_UK = auto()
HTML_COLOR_HEX = auto()
TWITTER_HANDLE = auto()
HASHTAG = auto()
MAC_ADDR = auto()
IBAN = auto()
CREDIT_CARD = auto()
NOTHING = auto()
ALL = (
EMAIL | PHONE_INTL | PHONE_US | URL | IPV4 | IPV6 | UUID
| CURRENCY | PERCENTAGE | NUMBER | DATE_ISO | DATE_US | TIME_24H
| POSTAL_US | POSTAL_UK | HTML_COLOR_HEX | TWITTER_HANDLE
| HASHTAG | MAC_ADDR | IBAN | CREDIT_CARD
)
# user-friendly aliases (RegexExtractionStrategy.Email, .IPv4, …)
Email = _B.EMAIL
PhoneIntl = _B.PHONE_INTL
PhoneUS = _B.PHONE_US
Url = _B.URL
IPv4 = _B.IPV4
IPv6 = _B.IPV6
Uuid = _B.UUID
Currency = _B.CURRENCY
Percentage = _B.PERCENTAGE
Number = _B.NUMBER
DateIso = _B.DATE_ISO
DateUS = _B.DATE_US
Time24h = _B.TIME_24H
PostalUS = _B.POSTAL_US
PostalUK = _B.POSTAL_UK
HexColor = _B.HTML_COLOR_HEX
TwitterHandle = _B.TWITTER_HANDLE
Hashtag = _B.HASHTAG
MacAddr = _B.MAC_ADDR
Iban = _B.IBAN
CreditCard = _B.CREDIT_CARD
All = _B.ALL
Nothing = _B(0) # no patterns
# ------------------------------------------------------------------ #
# Built-in pattern catalog
# ------------------------------------------------------------------ #
DEFAULT_PATTERNS: Dict[str, str] = {
# Communication
"email": r"[\w.+-]+@[\w-]+\.[\w.-]+",
"phone_intl": r"\+?\d[\d .()-]{7,}\d",
"phone_us": r"\(?\d{3}\)?[ -. ]?\d{3}[ -. ]?\d{4}",
# Web
"url": r"https?://[^\s\"'<>]+",
"ipv4": r"(?:\d{1,3}\.){3}\d{1,3}",
"ipv6": r"[A-F0-9]{1,4}(?::[A-F0-9]{1,4}){7}",
# IDs
"uuid": r"[0-9a-f]{8}-[0-9a-f]{4}-[1-5][0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}",
# Money / numbers
"currency": r"(?:USD|EUR|RM|\$|€|£)\s?\d+(?:[.,]\d{2})?",
"percentage": r"\d+(?:\.\d+)?%",
"number": r"\b\d{1,3}(?:[,.\s]\d{3})*(?:\.\d+)?\b",
# Dates / Times
"date_iso": r"\d{4}-\d{2}-\d{2}",
"date_us": r"\d{1,2}/\d{1,2}/\d{2,4}",
"time_24h": r"\b(?:[01]?\d|2[0-3]):[0-5]\d(?:[:.][0-5]\d)?\b",
# Misc
"postal_us": r"\b\d{5}(?:-\d{4})?\b",
"postal_uk": r"\b[A-Z]{1,2}\d[A-Z\d]? ?\d[A-Z]{2}\b",
"html_color_hex": r"#[0-9A-Fa-f]{6}\b",
"twitter_handle": r"@[\w]{1,15}",
"hashtag": r"#[\w-]+",
"mac_addr": r"(?:[0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}",
"iban": r"[A-Z]{2}\d{2}[A-Z0-9]{11,30}",
"credit_card": r"\b(?:4\d{12}(?:\d{3})?|5[1-5]\d{14}|3[47]\d{13}|6(?:011|5\d{2})\d{12})\b",
}
_FLAGS = re.IGNORECASE | re.MULTILINE
_UNWANTED_PROPS = {
"provider": "Use llm_config instead",
"api_token": "Use llm_config instead",
}
# ------------------------------------------------------------------ #
# Construction
# ------------------------------------------------------------------ #
def __init__(
self,
pattern: "_B" = _B.NOTHING,
*,
custom: Optional[Union[Dict[str, str], List[Tuple[str, str]]]] = None,
input_format: str = "fit_html",
**kwargs,
) -> None:
"""
Args:
pattern: Bit-OR of built-in pattern flags (e.g. Email | Url) selecting defaults.
custom: Custom patterns overriding or extending defaults.
Dict[label, regex] or list[tuple(label, regex)].
input_format: "html", "fit_html", "markdown" or "text".
**kwargs: Forwarded to ExtractionStrategy.
"""
super().__init__(input_format=input_format, **kwargs)
# 1⃣ take only the requested built-ins
merged: Dict[str, str] = {
key: rx
for key, rx in self.DEFAULT_PATTERNS.items()
if getattr(self._B, key.upper()).value & pattern
}
# 2⃣ apply user overrides / additions
if custom:
if isinstance(custom, dict):
merged.update(custom)
else: # iterable of (label, regex)
merged.update({lbl: rx for lbl, rx in custom})
self._compiled: Dict[str, Pattern] = {
lbl: re.compile(rx, self._FLAGS) for lbl, rx in merged.items()
}
# ------------------------------------------------------------------ #
# Extraction
# ------------------------------------------------------------------ #
def extract(self, url: str, content: str, *q, **kw) -> List[Dict[str, Any]]:
# text = self._plain_text(html)
out: List[Dict[str, Any]] = []
for label, cre in self._compiled.items():
for m in cre.finditer(content):
out.append(
{
"url": url,
"label": label,
"value": m.group(0),
"span": [m.start(), m.end()],
}
)
return out
# ------------------------------------------------------------------ #
# Helpers
# ------------------------------------------------------------------ #
def _plain_text(self, content: str) -> str:
if self.input_format == "text":
return content
return BeautifulSoup(content, "lxml").get_text(" ", strip=True)
# ------------------------------------------------------------------ #
# LLM-assisted pattern generator
# ------------------------------------------------------------------ #
# ------------------------------------------------------------------ #
# LLM-assisted one-off pattern builder
# ------------------------------------------------------------------ #
@staticmethod
def generate_pattern(
label: str,
html: str,
*,
query: Optional[str] = None,
examples: Optional[List[str]] = None,
llm_config: Optional[LLMConfig] = None,
**kwargs,
) -> Dict[str, str]:
"""
Ask an LLM for a single page-specific regex and return
{label: pattern} ── ready for RegexExtractionStrategy(custom=…)
"""
# ── guard deprecated kwargs
for k in RegexExtractionStrategy._UNWANTED_PROPS:
if k in kwargs:
raise AttributeError(
f"{k} is deprecated, {RegexExtractionStrategy._UNWANTED_PROPS[k]}"
)
# ── default LLM config
if llm_config is None:
llm_config = create_llm_config()
# ── system prompt hardened
system_msg = (
"You are an expert Python-regex engineer.\n"
f"Return **one** JSON object whose single key is exactly \"{label}\", "
"and whose value is a raw-string regex pattern that works with "
"the standard `re` module in Python.\n\n"
"Strict rules (obey every bullet):\n"
"• If a *user query* is supplied, treat it as the precise semantic target and optimise the "
" pattern to capture ONLY text that answers that query. If the query conflicts with the "
" sample HTML, the HTML wins.\n"
"• Tailor the pattern to the *sample HTML* reproduce its exact punctuation, spacing, "
" symbols, capitalisation, etc. Do **NOT** invent a generic form.\n"
"• Keep it minimal and fast: avoid unnecessary capturing, prefer non-capturing `(?: … )`, "
" and guard against catastrophic backtracking.\n"
"• Anchor with `^`, `$`, or `\\b` only when it genuinely improves precision.\n"
"• Use inline flags like `(?i)` when needed; no verbose flag comments.\n"
"• Output must be valid JSON no markdown, code fences, comments, or extra keys.\n"
"• The regex value must be a Python string literal: **double every backslash** "
"(e.g. `\\\\b`, `\\\\d`, `\\\\\\\\`).\n\n"
"Example valid output:\n"
f"{{\"{label}\": \"(?:RM|rm)\\\\s?\\\\d{{1,3}}(?:,\\\\d{{3}})*(?:\\\\.\\\\d{{2}})?\"}}"
)
# ── user message: cropped HTML + optional hints
user_parts = ["```html", html[:5000], "```"] # protect token budget
if query:
user_parts.append(f"\n\n## Query\n{query.strip()}")
if examples:
user_parts.append("## Examples\n" + "\n".join(examples[:20]))
user_msg = "\n\n".join(user_parts)
# ── LLM call (with retry/backoff)
resp = perform_completion_with_backoff(
provider=llm_config.provider,
prompt_with_variables="\n\n".join([system_msg, user_msg]),
json_response=True,
api_token=llm_config.api_token,
base_url=llm_config.base_url,
extra_args=kwargs,
)
# ── clean & load JSON (fix common escape mistakes *before* json.loads)
raw = resp.choices[0].message.content
raw = raw.replace("\x08", "\\b") # stray back-space → \b
raw = re.sub(r'(?<!\\)\\(?![\\u"])', r"\\\\", raw) # lone \ → \\
try:
pattern_dict = json.loads(raw)
except Exception as exc:
raise ValueError(f"LLM did not return valid JSON: {raw}") from exc
# quick sanity-compile
for lbl, pat in pattern_dict.items():
try:
re.compile(pat)
except re.error as e:
raise ValueError(f"Invalid regex for '{lbl}': {e}") from None
return pattern_dict
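Putting the new strategy together, a usage sketch based only on the flags and methods defined above (the commented LLM step assumes a configured `llm_config` and sample HTML):

```python
from crawl4ai.extraction_strategy import RegexExtractionStrategy

# Zero-LLM extraction with built-in patterns, bit-OR'd together.
strategy = RegexExtractionStrategy(
    pattern=RegexExtractionStrategy.Email | RegexExtractionStrategy.Url
)
matches = strategy.extract(
    "https://example.com",
    "Write to hello@example.com or see https://example.com/about",
)
for m in matches:
    # Each match: {"url", "label", "value", "span": [start, end]}
    print(m["label"], m["value"])

# One-off LLM-assisted pattern for a page-specific field, reused afterwards
# with no further LLM calls:
# custom = RegexExtractionStrategy.generate_pattern(
#     "price", sample_html, query="product price", llm_config=llm_config
# )
# strategy = RegexExtractionStrategy(custom=custom)
```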


@@ -40,10 +40,25 @@ def setup_home_directory():
f.write("")
def post_install():
"""Run all post-installation tasks"""
"""
Run all post-installation tasks.
Checks CRAWL4AI_MODE environment variable. If set to 'api',
skips Playwright browser installation.
"""
logger.info("Running post-installation setup...", tag="INIT")
setup_home_directory()
install_playwright()
# Check environment variable to conditionally skip Playwright install
run_mode = os.getenv('CRAWL4AI_MODE')
if run_mode == 'api':
logger.warning(
"CRAWL4AI_MODE=api detected. Skipping Playwright browser installation.",
tag="SETUP"
)
else:
# Proceed with installation only if mode is not 'api'
install_playwright()
run_migration()
# TODO: Will be added in the future
# setup_builtin_browser()
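A minimal sketch of the new switch, assuming `post_install` is called from this module's entry point:

```python
import os

# Set before post_install() runs to skip the Playwright browser download,
# e.g. for API-only/server deployments; home setup and migration still run.
os.environ["CRAWL4AI_MODE"] = "api"
post_install()
```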


@@ -115,5 +115,6 @@ async () => {
document.body.style.overflow = "auto";
// Wait a bit for any animations to complete
await new Promise((resolve) => setTimeout(resolve, 100));
document.body.scrollIntoView(false);
await new Promise((resolve) => setTimeout(resolve, 50));
};


@@ -31,22 +31,24 @@ class MarkdownGenerationStrategy(ABC):
content_filter: Optional[RelevantContentFilter] = None,
options: Optional[Dict[str, Any]] = None,
verbose: bool = False,
content_source: str = "cleaned_html",
):
self.content_filter = content_filter
self.options = options or {}
self.verbose = verbose
self.content_source = content_source
@abstractmethod
def generate_markdown(
self,
cleaned_html: str,
input_html: str,
base_url: str = "",
html2text_options: Optional[Dict[str, Any]] = None,
content_filter: Optional[RelevantContentFilter] = None,
citations: bool = True,
**kwargs,
) -> MarkdownGenerationResult:
"""Generate markdown from cleaned HTML."""
"""Generate markdown from the selected input HTML."""
pass
@@ -63,6 +65,7 @@ class DefaultMarkdownGenerator(MarkdownGenerationStrategy):
Args:
content_filter (Optional[RelevantContentFilter]): Content filter for generating fit markdown.
options (Optional[Dict[str, Any]]): Additional options for markdown generation. Defaults to None.
content_source (str): Source of content to generate markdown from. Options: "cleaned_html", "raw_html", "fit_html". Defaults to "cleaned_html".
Returns:
MarkdownGenerationResult: Result containing raw markdown, fit markdown, fit HTML, and references markdown.
@@ -72,8 +75,9 @@ class DefaultMarkdownGenerator(MarkdownGenerationStrategy):
self,
content_filter: Optional[RelevantContentFilter] = None,
options: Optional[Dict[str, Any]] = None,
content_source: str = "cleaned_html",
):
super().__init__(content_filter, options)
super().__init__(content_filter, options, verbose=False, content_source=content_source)
def convert_links_to_citations(
self, markdown: str, base_url: str = ""
@@ -143,7 +147,7 @@ class DefaultMarkdownGenerator(MarkdownGenerationStrategy):
def generate_markdown(
self,
cleaned_html: str,
input_html: str,
base_url: str = "",
html2text_options: Optional[Dict[str, Any]] = None,
options: Optional[Dict[str, Any]] = None,
@@ -152,16 +156,16 @@ class DefaultMarkdownGenerator(MarkdownGenerationStrategy):
**kwargs,
) -> MarkdownGenerationResult:
"""
Generate markdown with citations from cleaned HTML.
Generate markdown with citations from the provided input HTML.
How it works:
1. Generate raw markdown from cleaned HTML.
1. Generate raw markdown from the input HTML.
2. Convert links to citations.
3. Generate fit markdown if content filter is provided.
4. Return MarkdownGenerationResult.
Args:
cleaned_html (str): Cleaned HTML content.
input_html (str): The HTML content to process (selected based on content_source).
base_url (str): Base URL for URL joins.
html2text_options (Optional[Dict[str, Any]]): HTML2Text options.
options (Optional[Dict[str, Any]]): Additional options for markdown generation.
@@ -196,14 +200,14 @@ class DefaultMarkdownGenerator(MarkdownGenerationStrategy):
h.update_params(**default_options)
# Ensure we have valid input
if not cleaned_html:
cleaned_html = ""
elif not isinstance(cleaned_html, str):
cleaned_html = str(cleaned_html)
if not input_html:
input_html = ""
elif not isinstance(input_html, str):
input_html = str(input_html)
# Generate raw markdown
try:
raw_markdown = h.handle(cleaned_html)
raw_markdown = h.handle(input_html)
except Exception as e:
raw_markdown = f"Error converting HTML to markdown: {str(e)}"
@@ -228,7 +232,7 @@ class DefaultMarkdownGenerator(MarkdownGenerationStrategy):
if content_filter or self.content_filter:
try:
content_filter = content_filter or self.content_filter
filtered_html = content_filter.filter_content(cleaned_html)
filtered_html = content_filter.filter_content(input_html)
filtered_html = "\n".join(
"<div>{}</div>".format(s) for s in filtered_html
)


@@ -1,4 +1,4 @@
from pydantic import BaseModel, HttpUrl, PrivateAttr, ConfigDict
from pydantic import BaseModel, HttpUrl, PrivateAttr, Field
from typing import List, Dict, Optional, Callable, Awaitable, Union, Any
from typing import AsyncGenerator
from typing import Generic, TypeVar
@@ -95,15 +95,7 @@ class UrlModel(BaseModel):
url: HttpUrl
forced: bool = False
class MarkdownGenerationResult(BaseModel):
raw_markdown: str
markdown_with_citations: str
references_markdown: str
fit_markdown: Optional[str] = None
fit_html: Optional[str] = None
def __str__(self):
return self.raw_markdown
@dataclass
class TraversalStats:
@@ -124,9 +116,20 @@ class DispatchResult(BaseModel):
end_time: Union[datetime, float]
error_message: str = ""
class MarkdownGenerationResult(BaseModel):
raw_markdown: str
markdown_with_citations: str
references_markdown: str
fit_markdown: Optional[str] = None
fit_html: Optional[str] = None
def __str__(self):
return self.raw_markdown
class CrawlResult(BaseModel):
url: str
html: str
fit_html: Optional[str] = None
success: bool
cleaned_html: Optional[str] = None
media: Dict[str, List[Dict]] = {}
@@ -135,6 +138,7 @@ class CrawlResult(BaseModel):
js_execution_result: Optional[Dict[str, Any]] = None
screenshot: Optional[str] = None
pdf: Optional[bytes] = None
mhtml: Optional[str] = None
_markdown: Optional[MarkdownGenerationResult] = PrivateAttr(default=None)
extracted_content: Optional[str] = None
metadata: Optional[dict] = None
@@ -145,10 +149,12 @@ class CrawlResult(BaseModel):
ssl_certificate: Optional[SSLCertificate] = None
dispatch_result: Optional[DispatchResult] = None
redirected_url: Optional[str] = None
network_requests: Optional[List[Dict[str, Any]]] = None
console_messages: Optional[List[Dict[str, Any]]] = None
tables: List[Dict] = Field(default_factory=list) # NEW [{headers,rows,caption,summary}]
model_config = ConfigDict(arbitrary_types_allowed=True)
# class Config:
# arbitrary_types_allowed = True
class Config:
arbitrary_types_allowed = True
# NOTE: The StringCompatibleMarkdown class, custom __init__ method, property getters/setters,
# and model_dump override all exist to support a smooth transition from markdown as a string
@@ -308,14 +314,16 @@ class AsyncCrawlResponse(BaseModel):
status_code: int
screenshot: Optional[str] = None
pdf_data: Optional[bytes] = None
mhtml_data: Optional[str] = None
get_delayed_content: Optional[Callable[[Optional[float]], Awaitable[str]]] = None
downloaded_files: Optional[List[str]] = None
ssl_certificate: Optional[SSLCertificate] = None
redirected_url: Optional[str] = None
network_requests: Optional[List[Dict[str, Any]]] = None
console_messages: Optional[List[Dict[str, Any]]] = None
model_config = ConfigDict(arbitrary_types_allowed=True)
# class Config:
# arbitrary_types_allowed = True
class Config:
arbitrary_types_allowed = True
###############################
# Scraping Models
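The added fields (`mhtml`, `tables`, `network_requests`, `console_messages`) are plain model attributes, so downstream code reads them straight off a result object; a hedged sketch with the crawl call elided:

```python
# result: a CrawlResult from a finished crawl
if result.mhtml:
    with open("page.mhtml", "w") as f:
        f.write(result.mhtml)

for table in result.tables:  # [{headers, rows, caption, summary}]
    print(table.get("caption"), table.get("headers"))

if result.network_requests:
    print(f"Captured {len(result.network_requests)} network events")
```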


@@ -1,6 +0,0 @@
"""Pipeline module providing high-level crawling functionality."""
from .pipeline import Pipeline, create_pipeline
from .crawler import Crawler
__all__ = ["Pipeline", "create_pipeline", "Crawler"]


@@ -1,406 +0,0 @@
"""Crawler utility class for simplified crawling operations.
This module provides a high-level utility class for crawling web pages
with support for both single and multiple URL processing.
"""
import asyncio
from typing import Dict, List, Optional, Tuple, Union, Callable
from crawl4ai.models import CrawlResultContainer, CrawlResult
from crawl4ai.pipeline.pipeline import create_pipeline
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.async_logger import AsyncLogger
from crawl4ai.browser.browser_hub import BrowserHub
# Type definitions
UrlList = List[str]
UrlBatch = Tuple[List[str], CrawlerRunConfig]
UrlFullBatch = Tuple[List[str], BrowserConfig, CrawlerRunConfig]
BatchType = Union[UrlList, UrlBatch, UrlFullBatch]
ProgressCallback = Callable[[str, str, Optional[CrawlResultContainer]], None]
RetryStrategy = Callable[[str, int, Exception], Tuple[bool, float]]
class Crawler:
"""High-level utility class for crawling web pages.
This class provides simplified methods for crawling both single URLs
and batches of URLs, with parallel processing capabilities.
"""
@classmethod
async def crawl(
cls,
urls: Union[str, List[str]],
browser_config: Optional[BrowserConfig] = None,
crawler_config: Optional[CrawlerRunConfig] = None,
browser_hub: Optional[BrowserHub] = None,
logger: Optional[AsyncLogger] = None,
max_retries: int = 0,
retry_delay: float = 1.0,
use_new_loop: bool = True # By default use a new loop for safety
) -> Union[CrawlResultContainer, Dict[str, CrawlResultContainer]]:
"""Crawl one or more URLs with the specified configurations.
Args:
urls: Single URL or list of URLs to crawl
browser_config: Optional browser configuration
crawler_config: Optional crawler run configuration
browser_hub: Optional shared browser hub
logger: Optional logger instance
max_retries: Maximum number of retries for failed requests
retry_delay: Delay between retries in seconds
Returns:
For a single URL: CrawlResultContainer with crawl results
For multiple URLs: Dict mapping URLs to their CrawlResultContainer results
"""
# Handle single URL case
if isinstance(urls, str):
return await cls._crawl_single_url(
urls,
browser_config,
crawler_config,
browser_hub,
logger,
max_retries,
retry_delay,
use_new_loop
)
# Handle multiple URLs case (sequential processing)
results = {}
for url in urls:
results[url] = await cls._crawl_single_url(
url,
browser_config,
crawler_config,
browser_hub,
logger,
max_retries,
retry_delay,
use_new_loop
)
return results
@classmethod
async def _crawl_single_url(
cls,
url: str,
browser_config: Optional[BrowserConfig] = None,
crawler_config: Optional[CrawlerRunConfig] = None,
browser_hub: Optional[BrowserHub] = None,
logger: Optional[AsyncLogger] = None,
max_retries: int = 0,
retry_delay: float = 1.0,
use_new_loop: bool = False
) -> CrawlResultContainer:
"""Internal method to crawl a single URL with retry logic."""
# Create a logger if none provided
if logger is None:
logger = AsyncLogger(verbose=True)
# Create or use the provided crawler config
if crawler_config is None:
crawler_config = CrawlerRunConfig()
attempts = 0
last_error = None
# For testing purposes, each crawler gets a new event loop to avoid conflicts
# This is especially important in test suites where multiple tests run in sequence
if use_new_loop:
old_loop = asyncio.get_event_loop()
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
while attempts <= max_retries:
try:
# Create a pipeline
pipeline_args = {}
if browser_config:
pipeline_args["browser_config"] = browser_config
if browser_hub:
pipeline_args["browser_hub"] = browser_hub
if logger:
pipeline_args["logger"] = logger
pipeline = await create_pipeline(**pipeline_args)
# Perform the crawl
result = await pipeline.crawl(url=url, config=crawler_config)
# Close the pipeline if we created it (not using a shared hub)
if not browser_hub:
await pipeline.close()
# Restore the original event loop if we created a new one
if use_new_loop:
asyncio.set_event_loop(old_loop)
loop.close()
return result
except Exception as e:
last_error = e
attempts += 1
if attempts <= max_retries:
logger.warning(
message="Crawl attempt {attempt} failed for {url}: {error}. Retrying in {delay}s...",
tag="RETRY",
params={
"attempt": attempts,
"url": url,
"error": str(e),
"delay": retry_delay
}
)
await asyncio.sleep(retry_delay)
else:
logger.error(
message="All {attempts} crawl attempts failed for {url}: {error}",
tag="FAILED",
params={
"attempts": attempts,
"url": url,
"error": str(e)
}
)
# If we get here, all attempts failed
result = CrawlResultContainer(
CrawlResult(
url=url,
html="",
success=False,
error_message=f"All {attempts} crawl attempts failed: {str(last_error)}"
)
)
# Restore the original event loop if we created a new one
if use_new_loop:
asyncio.set_event_loop(old_loop)
loop.close()
return result
@classmethod
async def parallel_crawl(
cls,
url_batches: Union[List[str], List[Union[UrlBatch, UrlFullBatch]]],
browser_config: Optional[BrowserConfig] = None,
crawler_config: Optional[CrawlerRunConfig] = None,
browser_hub: Optional[BrowserHub] = None,
logger: Optional[AsyncLogger] = None,
concurrency: int = 5,
max_retries: int = 0,
retry_delay: float = 1.0,
retry_strategy: Optional[RetryStrategy] = None,
progress_callback: Optional[ProgressCallback] = None,
use_new_loop: bool = True # By default use a new loop for safety
) -> Dict[str, CrawlResultContainer]:
"""Crawl multiple URLs in parallel with concurrency control.
Args:
url_batches: List of URLs or list of URL batches with configurations
browser_config: Default browser configuration (used if not in batch)
crawler_config: Default crawler configuration (used if not in batch)
browser_hub: Optional shared browser hub for resource efficiency
logger: Optional logger instance
concurrency: Maximum number of concurrent crawls
max_retries: Maximum number of retries for failed requests
retry_delay: Delay between retries in seconds
retry_strategy: Optional custom retry strategy function
progress_callback: Optional callback for progress reporting
Returns:
Dict mapping URLs to their CrawlResultContainer results
"""
# Create a logger if none provided
if logger is None:
logger = AsyncLogger(verbose=True)
# For testing purposes, each crawler gets a new event loop to avoid conflicts
# This is especially important in test suites where multiple tests run in sequence
if use_new_loop:
old_loop = asyncio.get_event_loop()
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
# Process batches to consistent format
processed_batches = cls._process_url_batches(
url_batches, browser_config, crawler_config
)
# Initialize results dictionary
results = {}
# Create semaphore for concurrency control
semaphore = asyncio.Semaphore(concurrency)
# Create shared browser hub if not provided
shared_hub = browser_hub
if not shared_hub:
shared_hub = await BrowserHub.get_browser_manager(
config=browser_config or BrowserConfig(),
logger=logger,
max_browsers_per_config=concurrency,
max_pages_per_browser=1,
initial_pool_size=min(concurrency, 3) # Start with a reasonable number
)
try:
# Create worker function for each URL
async def process_url(url, b_config, c_config):
async with semaphore:
# Report start if callback provided
if progress_callback:
await progress_callback("started", url)
attempts = 0
last_error = None
while attempts <= max_retries:
try:
# Create a pipeline using the shared hub
pipeline = await create_pipeline(
browser_config=b_config,
browser_hub=shared_hub,
logger=logger
)
# Perform the crawl
result = await pipeline.crawl(url=url, config=c_config)
# Report completion if callback provided
if progress_callback:
await progress_callback("completed", url, result)
return url, result
except Exception as e:
last_error = e
attempts += 1
# Determine if we should retry and with what delay
should_retry = attempts <= max_retries
delay = retry_delay
# Use custom retry strategy if provided
if retry_strategy and should_retry:
try:
should_retry, delay = await retry_strategy(url, attempts, e)
except Exception as strategy_error:
logger.error(
message="Error in retry strategy: {error}",
tag="RETRY",
params={"error": str(strategy_error)}
)
if should_retry:
logger.warning(
message="Crawl attempt {attempt} failed for {url}: {error}. Retrying in {delay}s...",
tag="RETRY",
params={
"attempt": attempts,
"url": url,
"error": str(e),
"delay": delay
}
)
await asyncio.sleep(delay)
else:
logger.error(
message="All {attempts} crawl attempts failed for {url}: {error}",
tag="FAILED",
params={
"attempts": attempts,
"url": url,
"error": str(e)
}
)
break
# If we get here, all attempts failed
error_result = CrawlResultContainer(
CrawlResult(
url=url,
html="",
success=False,
error_message=f"All {attempts} crawl attempts failed: {str(last_error)}"
)
)
# Report completion with error if callback provided
if progress_callback:
await progress_callback("completed", url, error_result)
return url, error_result
# Create tasks for all URLs
tasks = []
for urls, b_config, c_config in processed_batches:
for url in urls:
tasks.append(process_url(url, b_config, c_config))
# Run all tasks and collect results
for completed_task in asyncio.as_completed(tasks):
url, result = await completed_task
results[url] = result
return results
finally:
# Clean up the hub only if we created it
if not browser_hub and shared_hub:
await shared_hub.close()
# Restore the original event loop if we created a new one
if use_new_loop:
asyncio.set_event_loop(old_loop)
loop.close()
@classmethod
def _process_url_batches(
cls,
url_batches: Union[List[str], List[Union[UrlBatch, UrlFullBatch]]],
default_browser_config: Optional[BrowserConfig],
default_crawler_config: Optional[CrawlerRunConfig]
) -> List[Tuple[List[str], BrowserConfig, CrawlerRunConfig]]:
"""Process URL batches into a consistent format.
Converts various input formats into a consistent list of
(urls, browser_config, crawler_config) tuples.
"""
processed_batches = []
# Handle case where input is just a list of URLs
if all(isinstance(item, str) for item in url_batches):
urls = url_batches
browser_config = default_browser_config or BrowserConfig()
crawler_config = default_crawler_config or CrawlerRunConfig()
processed_batches.append((urls, browser_config, crawler_config))
return processed_batches
# Process each batch
for batch in url_batches:
# Handle case: (urls, crawler_config)
if len(batch) == 2 and isinstance(batch[1], CrawlerRunConfig):
urls, c_config = batch
b_config = default_browser_config or BrowserConfig()
processed_batches.append((urls, b_config, c_config))
# Handle case: (urls, browser_config, crawler_config)
elif len(batch) == 3 and isinstance(batch[1], BrowserConfig) and isinstance(batch[2], CrawlerRunConfig):
processed_batches.append(batch)
# Fallback for unknown formats - assume it's just a list of URLs
else:
urls = batch
browser_config = default_browser_config or BrowserConfig()
crawler_config = default_crawler_config or CrawlerRunConfig()
processed_batches.append((urls, browser_config, crawler_config))
return processed_batches


@@ -1,702 +0,0 @@
import time
import sys
from typing import Dict, Any, List
import json
from crawl4ai.models import (
CrawlResult,
MarkdownGenerationResult,
ScrapingResult,
CrawlResultContainer,
)
from crawl4ai.async_database import async_db_manager
from crawl4ai.cache_context import CacheMode, CacheContext
from crawl4ai.utils import (
sanitize_input_encode,
InvalidCSSSelectorError,
fast_format_html,
create_box_message,
get_error_context,
)
async def initialize_context_middleware(context: Dict[str, Any]) -> int:
"""Initialize the context with basic configuration and validation"""
url = context.get("url")
config = context.get("config")
if not isinstance(url, str) or not url:
context["error_message"] = "Invalid URL, make sure the URL is a non-empty string"
return 0
# Default to ENABLED if no cache mode specified
if config.cache_mode is None:
config.cache_mode = CacheMode.ENABLED
# Create cache context
context["cache_context"] = CacheContext(url, config.cache_mode, False)
context["start_time"] = time.perf_counter()
return 1
# middlewares.py additions
async def browser_hub_middleware(context: Dict[str, Any]) -> int:
"""
Initialize or connect to a Browser-Hub and add it to the pipeline context.
This middleware handles browser hub initialization for all three scenarios:
1. Default configuration when nothing is specified
2. Custom configuration when browser_config is provided
3. Connection to existing hub when browser_hub_connection is provided
Args:
context: The pipeline context dictionary
Returns:
int: 1 for success, 0 for failure
"""
from crawl4ai.browser.browser_hub import BrowserHub
try:
# Get configuration from context
browser_config = context.get("browser_config")
browser_hub_id = context.get("browser_hub_id")
browser_hub_connection = context.get("browser_hub_connection")
logger = context.get("logger")
# If we already have a browser hub in context, use it
if context.get("browser_hub"):
return 1
# Get or create Browser-Hub
browser_hub = await BrowserHub.get_browser_manager(
config=browser_config,
hub_id=browser_hub_id,
connection_info=browser_hub_connection,
logger=logger
)
# Add to context
context["browser_hub"] = browser_hub
return 1
except Exception as e:
context["error_message"] = f"Failed to initialize browser hub: {str(e)}"
return 0
async def fetch_content_middleware(context: Dict[str, Any]) -> int:
"""
Fetch content from the web using the browser hub.
This middleware uses the browser hub to get pages for crawling,
and properly releases them back to the pool when done.
Args:
context: The pipeline context dictionary
Returns:
int: 1 for success, 0 for failure
"""
url = context.get("url")
config = context.get("config")
browser_hub = context.get("browser_hub")
logger = context.get("logger")
# Skip if using cached result
if context.get("cached_result") and context.get("html"):
return 1
try:
# Create crawler strategy without initializing its browser manager
from crawl4ai.async_crawler_strategy import AsyncPlaywrightCrawlerStrategy
crawler_strategy = AsyncPlaywrightCrawlerStrategy(
browser_config=browser_hub.config if browser_hub else None,
logger=logger
)
# Replace the browser manager with our shared instance
crawler_strategy.browser_manager = browser_hub
# Perform crawl without trying to initialize the browser
# The crawler will use the provided browser_manager to get pages
async_response = await crawler_strategy.crawl(url, config=config)
# Store results in context
context["html"] = async_response.html
context["screenshot_data"] = async_response.screenshot
context["pdf_data"] = async_response.pdf_data
context["js_execution_result"] = async_response.js_execution_result
context["async_response"] = async_response
return 1
except Exception as e:
context["error_message"] = f"Error fetching content: {str(e)}"
return 0
async def check_cache_middleware(context: Dict[str, Any]) -> int:
"""Check if there's a cached result and load it if available"""
url = context.get("url")
config = context.get("config")
cache_context = context.get("cache_context")
logger = context.get("logger")
# Initialize variables
context["cached_result"] = None
context["html"] = None
context["extracted_content"] = None
context["screenshot_data"] = None
context["pdf_data"] = None
# Try to get cached result if appropriate
if cache_context.should_read():
cached_result = await async_db_manager.aget_cached_url(url)
context["cached_result"] = cached_result
if cached_result:
html = sanitize_input_encode(cached_result.html)
extracted_content = sanitize_input_encode(cached_result.extracted_content or "")
extracted_content = None if not extracted_content or extracted_content == "[]" else extracted_content
# If a screenshot is requested but it's not in cache, set cached_result to None
screenshot_data = cached_result.screenshot
pdf_data = cached_result.pdf
if config.screenshot and not screenshot_data:
context["cached_result"] = None
if config.pdf and not pdf_data:
context["cached_result"] = None
context["html"] = html
context["extracted_content"] = extracted_content
context["screenshot_data"] = screenshot_data
context["pdf_data"] = pdf_data
logger.url_status(
url=cache_context.display_url,
success=bool(html),
timing=time.perf_counter() - context["start_time"],
tag="FETCH",
)
return 1
async def configure_proxy_middleware(context: Dict[str, Any]) -> int:
"""Configure proxy if a proxy rotation strategy is available"""
config = context.get("config")
logger = context.get("logger")
# Skip if using cached result
if context.get("cached_result") and context.get("html"):
return 1
# Update proxy configuration from rotation strategy if available
if config and config.proxy_rotation_strategy:
next_proxy = await config.proxy_rotation_strategy.get_next_proxy()
if next_proxy:
logger.info(
message="Switch proxy: {proxy}",
tag="PROXY",
params={"proxy": next_proxy.server},
)
config.proxy_config = next_proxy
return 1
async def check_robots_txt_middleware(context: Dict[str, Any]) -> int:
"""Check if the URL is allowed by robots.txt if enabled"""
url = context.get("url")
config = context.get("config")
browser_config = context.get("browser_config")
robots_parser = context.get("robots_parser")
# Skip if using cached result
if context.get("cached_result") and context.get("html"):
return 1
# Check robots.txt if enabled
if config and config.check_robots_txt:
if not await robots_parser.can_fetch(url, browser_config.user_agent):
context["crawl_result"] = CrawlResult(
url=url,
html="",
success=False,
status_code=403,
error_message="Access denied by robots.txt",
response_headers={"X-Robots-Status": "Blocked by robots.txt"}
)
return 0
return 1
async def fetch_content_middleware_(context: Dict[str, Any]) -> int:
"""Fetch content from the web using the crawler strategy"""
url = context.get("url")
config = context.get("config")
crawler_strategy = context.get("crawler_strategy")
logger = context.get("logger")
# Skip if using cached result
if context.get("cached_result") and context.get("html"):
return 1
try:
t1 = time.perf_counter()
if config.user_agent:
crawler_strategy.update_user_agent(config.user_agent)
# Call CrawlerStrategy.crawl
async_response = await crawler_strategy.crawl(url, config=config)
html = sanitize_input_encode(async_response.html)
screenshot_data = async_response.screenshot
pdf_data = async_response.pdf_data
js_execution_result = async_response.js_execution_result
t2 = time.perf_counter()
logger.url_status(
url=context["cache_context"].display_url,
success=bool(html),
timing=t2 - t1,
tag="FETCH",
)
context["html"] = html
context["screenshot_data"] = screenshot_data
context["pdf_data"] = pdf_data
context["js_execution_result"] = js_execution_result
context["async_response"] = async_response
return 1
except Exception as e:
context["error_message"] = f"Error fetching content: {str(e)}"
return 0
async def scrape_content_middleware(context: Dict[str, Any]) -> int:
"""Apply scraping strategy to extract content"""
url = context.get("url")
html = context.get("html")
config = context.get("config")
extracted_content = context.get("extracted_content")
logger = context.get("logger")
# Skip if already have a crawl result
if context.get("crawl_result"):
return 1
try:
_url = url if not context.get("is_raw_html", False) else "Raw HTML"
t1 = time.perf_counter()
# Get scraping strategy and ensure it has a logger
scraping_strategy = config.scraping_strategy
if not scraping_strategy.logger:
scraping_strategy.logger = logger
# Process HTML content
params = config.__dict__.copy()
params.pop("url", None)
# Add keys from kwargs to params that don't exist in params
kwargs = context.get("kwargs", {})
params.update({k: v for k, v in kwargs.items() if k not in params.keys()})
# Scraping Strategy Execution
result: ScrapingResult = scraping_strategy.scrap(url, html, **params)
if result is None:
raise ValueError(f"Process HTML, Failed to extract content from the website: {url}")
# Extract results - handle both dict and ScrapingResult
if isinstance(result, dict):
cleaned_html = sanitize_input_encode(result.get("cleaned_html", ""))
media = result.get("media", {})
links = result.get("links", {})
metadata = result.get("metadata", {})
else:
cleaned_html = sanitize_input_encode(result.cleaned_html)
media = result.media.model_dump()
links = result.links.model_dump()
metadata = result.metadata
context["cleaned_html"] = cleaned_html
context["media"] = media
context["links"] = links
context["metadata"] = metadata
# Log processing completion
logger.info(
message="{url:.50}... | Time: {timing}s",
tag="SCRAPE",
params={
"url": _url,
"timing": int((time.perf_counter() - t1) * 1000) / 1000,
},
)
return 1
except InvalidCSSSelectorError as e:
context["error_message"] = str(e)
return 0
except Exception as e:
context["error_message"] = f"Process HTML, Failed to extract content from the website: {url}, error: {str(e)}"
return 0
async def generate_markdown_middleware(context: Dict[str, Any]) -> int:
"""Generate markdown from cleaned HTML"""
url = context.get("url")
cleaned_html = context.get("cleaned_html")
config = context.get("config")
# Skip if already have a crawl result
if context.get("crawl_result"):
return 1
# Generate Markdown
markdown_generator = config.markdown_generator
markdown_result: MarkdownGenerationResult = markdown_generator.generate_markdown(
cleaned_html=cleaned_html,
base_url=url,
)
context["markdown_result"] = markdown_result
return 1
async def extract_structured_content_middleware(context: Dict[str, Any]) -> int:
"""Extract structured content using extraction strategy"""
url = context.get("url")
extracted_content = context.get("extracted_content")
config = context.get("config")
markdown_result = context.get("markdown_result")
cleaned_html = context.get("cleaned_html")
logger = context.get("logger")
# Skip if already have a crawl result or extracted content
if context.get("crawl_result") or bool(extracted_content):
return 1
from crawl4ai.chunking_strategy import IdentityChunking
from crawl4ai.extraction_strategy import NoExtractionStrategy
if config.extraction_strategy and not isinstance(config.extraction_strategy, NoExtractionStrategy):
t1 = time.perf_counter()
_url = url if not context.get("is_raw_html", False) else "Raw HTML"
# Choose content based on input_format
content_format = config.extraction_strategy.input_format
if content_format == "fit_markdown" and not markdown_result.fit_markdown:
logger.warning(
message="Fit markdown requested but not available. Falling back to raw markdown.",
tag="EXTRACT",
params={"url": _url},
)
content_format = "markdown"
content = {
"markdown": markdown_result.raw_markdown,
"html": context.get("html"),
"cleaned_html": cleaned_html,
"fit_markdown": markdown_result.fit_markdown,
}.get(content_format, markdown_result.raw_markdown)
# Use IdentityChunking for HTML input, otherwise use provided chunking strategy
chunking = (
IdentityChunking()
if content_format in ["html", "cleaned_html"]
else config.chunking_strategy
)
sections = chunking.chunk(content)
extracted_content = config.extraction_strategy.run(url, sections)
extracted_content = json.dumps(
extracted_content, indent=4, default=str, ensure_ascii=False
)
context["extracted_content"] = extracted_content
# Log extraction completion
logger.info(
message="Completed for {url:.50}... | Time: {timing}s",
tag="EXTRACT",
params={"url": _url, "timing": time.perf_counter() - t1},
)
return 1
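# Usage sketch for the input_format selection above. The strategy and schema
# are illustrative, not part of this diff; input_format="fit_markdown" is what
# triggers the fallback warning when no fit_markdown was generated:
# from crawl4ai import CrawlerRunConfig
# from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
#
# schema = {"name": "titles", "baseSelector": "body",
#           "fields": [{"name": "title", "selector": "h1", "type": "text"}]}
# config = CrawlerRunConfig(
#     extraction_strategy=JsonCssExtractionStrategy(schema, input_format="fit_markdown"),
# )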
async def format_html_middleware(context: Dict[str, Any]) -> int:
"""Format HTML if prettify is enabled"""
config = context.get("config")
cleaned_html = context.get("cleaned_html")
# Skip if already have a crawl result
if context.get("crawl_result"):
return 1
# Apply HTML formatting if requested
if config.prettiify and cleaned_html:
context["cleaned_html"] = fast_format_html(cleaned_html)
return 1
async def write_cache_middleware(context: Dict[str, Any]) -> int:
"""Write result to cache if appropriate"""
cache_context = context.get("cache_context")
cached_result = context.get("cached_result")
# Skip if already have a crawl result or not using cache
if context.get("crawl_result") or not cache_context.should_write() or bool(cached_result):
return 1
# We'll create the CrawlResult in build_result_middleware and cache it there
# to avoid creating it twice
return 1
async def build_result_middleware(context: Dict[str, Any]) -> int:
"""Build the final CrawlResult object"""
url = context.get("url")
html = context.get("html", "")
cache_context = context.get("cache_context")
cached_result = context.get("cached_result")
config = context.get("config")
logger = context.get("logger")
# If we already have a crawl result (from an earlier middleware like robots.txt check)
if context.get("crawl_result"):
result = context["crawl_result"]
context["final_result"] = CrawlResultContainer(result)
return 1
# If we have a cached result
if cached_result and html:
logger.success(
message="{url:.50}... | Status: {status} | Total: {timing}",
tag="COMPLETE",
params={
"url": cache_context.display_url,
"status": True,
"timing": f"{time.perf_counter() - context['start_time']:.2f}s",
},
colors={"status": "green", "timing": "yellow"},
)
cached_result.success = bool(html)
cached_result.session_id = getattr(config, "session_id", None)
cached_result.redirected_url = cached_result.redirected_url or url
context["final_result"] = CrawlResultContainer(cached_result)
return 1
# Build a new result
try:
# Get all necessary components from context
cleaned_html = context.get("cleaned_html", "")
markdown_result = context.get("markdown_result")
media = context.get("media", {})
links = context.get("links", {})
metadata = context.get("metadata", {})
screenshot_data = context.get("screenshot_data")
pdf_data = context.get("pdf_data")
extracted_content = context.get("extracted_content")
async_response = context.get("async_response")
# Create the CrawlResult
crawl_result = CrawlResult(
url=url,
html=html,
cleaned_html=cleaned_html,
markdown=markdown_result,
media=media,
links=links,
metadata=metadata,
screenshot=screenshot_data,
pdf=pdf_data,
extracted_content=extracted_content,
success=bool(html),
error_message="",
)
# Add response details if available
if async_response:
crawl_result.status_code = async_response.status_code
crawl_result.redirected_url = async_response.redirected_url or url
crawl_result.response_headers = async_response.response_headers
crawl_result.downloaded_files = async_response.downloaded_files
crawl_result.js_execution_result = context.get("js_execution_result")
crawl_result.ssl_certificate = async_response.ssl_certificate
crawl_result.session_id = getattr(config, "session_id", None)
# Log completion
logger.success(
message="{url:.50}... | Status: {status} | Total: {timing}",
tag="COMPLETE",
params={
"url": cache_context.display_url,
"status": crawl_result.success,
"timing": f"{time.perf_counter() - context['start_time']:.2f}s",
},
colors={
"status": "green" if crawl_result.success else "red",
"timing": "yellow",
},
)
# Update cache if appropriate
if cache_context.should_write() and not bool(cached_result):
await async_db_manager.acache_url(crawl_result)
context["final_result"] = CrawlResultContainer(crawl_result)
return 1
except Exception as e:
error_context = get_error_context(sys.exc_info())
error_message = (
f"Unexpected error in build_result at line {error_context['line_no']} "
f"in {error_context['function']} ({error_context['filename']}):\n"
f"Error: {str(e)}\n\n"
f"Code context:\n{error_context['code_context']}"
)
logger.error_status(
url=url,
error=create_box_message(error_message, type="error"),
tag="ERROR",
)
context["final_result"] = CrawlResultContainer(
CrawlResult(
url=url, html="", success=False, error_message=error_message
)
)
return 1
async def handle_error_middleware(context: Dict[str, Any]) -> Dict[str, Any]:
"""Error handler middleware"""
url = context.get("url", "")
error_message = context.get("error_message", "Unknown error")
logger = context.get("logger")
# Log the error
if logger:
logger.error_status(
url=url,
error=create_box_message(error_message, type="error"),
tag="ERROR",
)
# Create a failure result
context["final_result"] = CrawlResultContainer(
CrawlResult(
url=url, html="", success=False, error_message=error_message
)
)
return context
# Custom middlewares
async def sentiment_analysis_middleware(context: Dict[str, Any]) -> int:
"""Analyze sentiment of generated markdown using TextBlob"""
from textblob import TextBlob
markdown_result = context.get("markdown_result")
# Skip if no markdown or already failed
if not markdown_result or not context.get("success", True):
return 1
try:
# Get raw markdown text
raw_markdown = markdown_result.raw_markdown
# Analyze sentiment
blob = TextBlob(raw_markdown)
sentiment = blob.sentiment
# Add sentiment to context
context["sentiment_analysis"] = {
"polarity": sentiment.polarity, # -1.0 to 1.0 (negative to positive)
"subjectivity": sentiment.subjectivity, # 0.0 to 1.0 (objective to subjective)
"classification": "positive" if sentiment.polarity > 0.1 else
"negative" if sentiment.polarity < -0.1 else "neutral"
}
return 1
except Exception as e:
# Don't fail the pipeline on sentiment analysis failure
context["sentiment_analysis_error"] = str(e)
return 1
async def log_timing_middleware(context: Dict[str, Any], name: str) -> int:
"""Log timing information for a specific point in the pipeline"""
context[f"_timing_mark_{name}"] = time.perf_counter()
# Calculate duration if we have a start time
start_key = f"_timing_start_{name}"
if start_key in context:
duration = context[f"_timing_mark_{name}"] - context[start_key]
context[f"_timing_duration_{name}"] = duration
# Log the timing if we have a logger
logger = context.get("logger")
if logger:
logger.info(
message="{name} completed in {duration:.2f}s",
tag="TIMING",
params={"name": name, "duration": duration},
)
return 1
async def validate_url_middleware(context: Dict[str, Any], patterns: List[str]) -> int:
"""Validate URL against glob patterns"""
import fnmatch
url = context.get("url", "")
# If no patterns provided, allow all
if not patterns:
return 1
# Check if URL matches any of the allowed patterns
for pattern in patterns:
if fnmatch.fnmatch(url, pattern):
return 1
# If we get here, URL didn't match any patterns
context["error_message"] = f"URL '{url}' does not match any allowed patterns"
return 0
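# log_timing_middleware and validate_url_middleware take extra arguments,
# while the pipeline invokes each middleware with only the context dict.
# One way to wire them in is functools.partial, sketched here with an
# illustrative URL pattern:
# from functools import partial
# middleware = create_default_middleware_list()
# middleware.insert(1, partial(validate_url_middleware, patterns=["https://*.example.com/*"]))
# middleware.append(sentiment_analysis_middleware)
# pipeline = Pipeline(middleware=middleware)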
# Update the default middleware list function
def create_default_middleware_list():
"""Return the default list of middleware functions for the pipeline."""
return [
initialize_context_middleware,
check_cache_middleware,
browser_hub_middleware, # Add browser hub middleware before fetch_content
configure_proxy_middleware,
check_robots_txt_middleware,
fetch_content_middleware,
scrape_content_middleware,
generate_markdown_middleware,
extract_structured_content_middleware,
format_html_middleware,
build_result_middleware
]

View File

@@ -1,297 +0,0 @@
import time
import asyncio
from typing import Callable, Dict, List, Any, Optional, Awaitable, Union, TypedDict, Tuple, Coroutine
from .middlewares import create_default_middleware_list, handle_error_middleware
from crawl4ai.models import CrawlResultContainer, CrawlResult
from crawl4ai.async_crawler_strategy import AsyncCrawlerStrategy, AsyncPlaywrightCrawlerStrategy
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.async_logger import AsyncLogger
class CrawlSpec(TypedDict, total=False):
"""Specification for a single crawl operation in batch_crawl."""
url: str
config: Optional[CrawlerRunConfig]
browser_config: Optional[BrowserConfig]
class BatchStatus(TypedDict, total=False):
"""Status information for batch crawl operations."""
total: int
processed: int
succeeded: int
failed: int
in_progress: int
duration: float
class Pipeline:
"""
A pipeline processor that executes a series of async middleware functions.
Each middleware function receives a context dictionary, updates it,
and returns 1 for success or 0 for failure.
"""
def __init__(
self,
middleware: List[Callable[[Dict[str, Any]], Awaitable[int]]] = None,
error_handler: Optional[Callable[[Dict[str, Any]], Awaitable[Dict[str, Any]]]] = None,
after_middleware_callback: Optional[Callable[[str, Dict[str, Any]], Awaitable[None]]] = None,
crawler_strategy: Optional[AsyncCrawlerStrategy] = None,
browser_config: Optional[BrowserConfig] = None,
logger: Optional[AsyncLogger] = None,
_initial_context: Optional[Dict[str, Any]] = None
):
self.middleware = middleware or create_default_middleware_list()
self.error_handler = error_handler or handle_error_middleware
self.after_middleware_callback = after_middleware_callback
self.browser_config = browser_config or BrowserConfig()
self.logger = logger or AsyncLogger(verbose=self.browser_config.verbose)
self.crawler_strategy = crawler_strategy or AsyncPlaywrightCrawlerStrategy(
browser_config=self.browser_config,
logger=self.logger
)
self._initial_context = _initial_context
self._strategy_initialized = False
async def _initialize_strategy__(self):
"""Legacy initializer, kept for reference; superseded by _initialize_strategy below"""
if not self.crawler_strategy:
self.crawler_strategy = AsyncPlaywrightCrawlerStrategy(
browser_config=self.browser_config,
logger=self.logger
)
if not self._strategy_initialized:
await self.crawler_strategy.__aenter__()
self._strategy_initialized = True
async def _initialize_strategy(self):
"""Initialize the crawler strategy if not already initialized"""
# With our new approach, we don't need to create the crawler strategy here
# as it will be created on-demand in fetch_content_middleware
# Just ensure browser hub is available if needed
if hasattr(self, "_initial_context") and "browser_hub" not in self._initial_context:
# If a browser_config was provided but no browser_hub yet,
# we'll let the browser_hub_middleware handle creating it
pass
# Mark as initialized to prevent repeated initialization attempts
self._strategy_initialized = True
async def start(self):
"""Start the crawler strategy and prepare it for use"""
if not self._strategy_initialized:
await self._initialize_strategy()
self._strategy_initialized = True
if self.crawler_strategy:
await self.crawler_strategy.__aenter__()
self._strategy_initialized = True
else:
raise ValueError("Crawler strategy is not initialized.")
async def close(self):
"""Close the crawler strategy and clean up resources"""
await self.stop()
async def stop(self):
"""Close the crawler strategy and clean up resources"""
if self._strategy_initialized and self.crawler_strategy:
await self.crawler_strategy.__aexit__(None, None, None)
self._strategy_initialized = False
async def __aenter__(self):
await self.start()
return self
async def __aexit__(self, exc_type, exc_val, exc_tb):
await self.close()
async def crawl(self, url: str, config: Optional[CrawlerRunConfig] = None, **kwargs) -> CrawlResultContainer:
"""
Crawl a URL and process it through the pipeline.
Args:
url: The URL to crawl
config: Optional configuration for the crawl
**kwargs: Additional arguments to pass to the middleware
Returns:
CrawlResultContainer: The result of the crawl
"""
# Initialize strategy if needed
await self._initialize_strategy()
# Create the initial context
context = {
"url": url,
"config": config or CrawlerRunConfig(),
"browser_config": self.browser_config,
"logger": self.logger,
"crawler_strategy": self.crawler_strategy,
"kwargs": kwargs
}
# Process the pipeline
result_context = await self.process(context)
# Return the final result
return result_context.get("final_result")
async def process(self, initial_context: Dict[str, Any] = None) -> Dict[str, Any]:
"""
Process all middleware functions with the given context.
Args:
initial_context: Initial context dictionary, defaults to empty dict
Returns:
Updated context dictionary after all middleware have been processed
"""
context = {**(self._initial_context or {})}
if initial_context:
context.update(initial_context)
# Record pipeline start time
context["_pipeline_start_time"] = time.perf_counter()
for middleware_fn in self.middleware:
# Get middleware name for logging
middleware_name = getattr(middleware_fn, '__name__', str(middleware_fn))
# Record start time for this middleware
start_time = time.perf_counter()
context[f"_timing_start_{middleware_name}"] = start_time
try:
# Execute middleware (all middleware functions are async)
result = await middleware_fn(context)
# Record completion time
end_time = time.perf_counter()
context[f"_timing_end_{middleware_name}"] = end_time
context[f"_timing_duration_{middleware_name}"] = end_time - start_time
# Execute after-middleware callback if provided
if self.after_middleware_callback:
await self.after_middleware_callback(middleware_name, context)
# Convert boolean returns to int (True->1, False->0)
if isinstance(result, bool):
result = 1 if result else 0
# Handle failure
if result == 0:
if self.error_handler:
context["_error_in"] = middleware_name
context["_error_at"] = time.perf_counter()
return await self._handle_error(context)
else:
context["success"] = False
context["error_message"] = f"Pipeline failed at {middleware_name}"
break
except Exception as e:
# Record error information
context["_error_in"] = middleware_name
context["_error_at"] = time.perf_counter()
context["_exception"] = e
context["success"] = False
context["error_message"] = f"Exception in {middleware_name}: {str(e)}"
# Call error handler if available
if self.error_handler:
return await self._handle_error(context)
break
# Record pipeline completion time
pipeline_end_time = time.perf_counter()
context["_pipeline_end_time"] = pipeline_end_time
context["_pipeline_duration"] = pipeline_end_time - context["_pipeline_start_time"]
# Set success to True if not already set (no failures)
if "success" not in context:
context["success"] = True
return context
async def _handle_error(self, context: Dict[str, Any]) -> Dict[str, Any]:
"""Handle errors by calling the error handler"""
try:
return await self.error_handler(context)
except Exception as e:
# If error handler fails, update context with this new error
context["_error_handler_exception"] = e
context["error_message"] = f"Error handler failed: {str(e)}"
return context
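# Because process() records _timing_duration_<name> for every stage, a small
# after_middleware_callback can surface per-stage timings. A minimal sketch:
async def print_stage_timing(name: str, context: Dict[str, Any]) -> None:
    """Illustrative callback: print the duration recorded for each middleware."""
    duration = context.get(f"_timing_duration_{name}")
    if duration is not None:
        print(f"[TIMING] {name}: {duration:.3f}s")

# pipeline = Pipeline(after_middleware_callback=print_stage_timing)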
async def create_pipeline(
middleware_list=None,
error_handler=None,
after_middleware_callback=None,
browser_config=None,
browser_hub_id=None,
browser_hub_connection=None,
browser_hub=None,
logger=None
) -> Pipeline:
"""
Factory function to create a pipeline with Browser-Hub integration.
Args:
middleware_list: List of middleware functions
error_handler: Error handler middleware
after_middleware_callback: Callback after middleware execution
browser_config: Configuration for the browser
browser_hub_id: ID for browser hub instance
browser_hub_connection: Connection string for existing browser hub
browser_hub: Existing browser hub instance to use
logger: Logger instance
Returns:
Pipeline: Configured pipeline instance
"""
# Use default middleware list if none provided
middleware = middleware_list or create_default_middleware_list()
# Create the pipeline
pipeline = Pipeline(
middleware=middleware,
error_handler=error_handler,
after_middleware_callback=after_middleware_callback,
logger=logger
)
# Set browser-related attributes in the initial context
pipeline._initial_context = {
"browser_config": browser_config,
"browser_hub_id": browser_hub_id,
"browser_hub_connection": browser_hub_connection,
"browser_hub": browser_hub,
"logger": logger
}
return pipeline
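# Usage sketch for the factory above; the hub connection string is a
# placeholder, not a value taken from this diff:
# pipeline = await create_pipeline(
#     browser_config=BrowserConfig(headless=True),
#     browser_hub_connection="ws://127.0.0.1:9222",
#     logger=AsyncLogger(verbose=False),
# )
# result = await pipeline.crawl("https://example.com")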
# async def create_pipeline(
# middleware_list: Optional[List[Callable[[Dict[str, Any]], Awaitable[int]]]] = None,
# error_handler: Optional[Callable[[Dict[str, Any]], Awaitable[Dict[str, Any]]]] = None,
# after_middleware_callback: Optional[Callable[[str, Dict[str, Any]], Awaitable[None]]] = None,
# crawler_strategy = None,
# browser_config = None,
# logger = None
# ) -> Pipeline:
# """Factory function to create a pipeline with the given middleware"""
# return Pipeline(
# middleware=middleware_list,
# error_handler=error_handler,
# after_middleware_callback=after_middleware_callback,
# crawler_strategy=crawler_strategy,
# browser_config=browser_config,
# logger=logger
# )

View File

@@ -1,109 +0,0 @@
import asyncio
from crawl4ai import (
BrowserConfig,
CrawlerRunConfig,
CacheMode,
DefaultMarkdownGenerator,
PruningContentFilter
)
from pipeline import Pipeline
async def main():
# Create configuration objects
browser_config = BrowserConfig(headless=True, verbose=True)
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48,
threshold_type="fixed",
min_word_threshold=0
)
),
)
# Create and use pipeline with context manager
async with Pipeline(browser_config=browser_config) as pipeline:
result = await pipeline.crawl(
url="https://www.example.com",
config=crawler_config
)
# Print the result
print(f"URL: {result.url}")
print(f"Success: {result.success}")
if result.success:
print("\nMarkdown excerpt:")
print(result.markdown.raw_markdown[:500] + "...")
else:
print(f"Error: {result.error_message}")
if __name__ == "__main__":
asyncio.run(main())
class CrawlTarget:
def __init__(self, urls, config=None):
self.urls = urls
self.config = config
def __repr__(self):
return f"CrawlTarget(urls={self.urls}, config={self.config})"
# async def main():
# # Create configuration objects
# browser_config = BrowserConfig(headless=True, verbose=True)
# # Define different configurations
# config1 = CrawlerRunConfig(
# cache_mode=CacheMode.BYPASS,
# markdown_generator=DefaultMarkdownGenerator(
# content_filter=PruningContentFilter(threshold=0.48)
# ),
# )
# config2 = CrawlerRunConfig(
# cache_mode=CacheMode.ENABLED,
# screenshot=True,
# pdf=True
# )
# # Create crawl targets
# targets = [
# CrawlTarget(
# urls=["https://www.example.com", "https://www.wikipedia.org"],
# config=config1
# ),
# CrawlTarget(
# urls="https://news.ycombinator.com",
# config=config2
# ),
# CrawlTarget(
# urls=["https://github.com", "https://stackoverflow.com", "https://python.org"],
# config=None
# )
# ]
# # Create and use pipeline with context manager
# async with Pipeline(browser_config=browser_config) as pipeline:
# all_results = await pipeline.crawl_batch(targets)
# for target_key, results in all_results.items():
# print(f"\n===== Results for {target_key} =====")
# print(f"Number of URLs crawled: {len(results)}")
# for i, result in enumerate(results):
# print(f"\nURL {i+1}: {result.url}")
# print(f"Success: {result.success}")
# if result.success:
# print(f"Content length: {len(result.markdown.raw_markdown)} chars")
# else:
# print(f"Error: {result.error_message}")
# if __name__ == "__main__":
# asyncio.run(main())

View File

@@ -4,6 +4,9 @@ from itertools import cycle
import os
########### ATTENTION PEOPLE OF EARTH ###########
# I have moved this config to async_configs.py but kept it here in case someone is still importing it;
# however, be a dear and use `from crawl4ai import ProxyConfig` instead :)
class ProxyConfig:
def __init__(
self,
@@ -119,12 +122,12 @@ class ProxyRotationStrategy(ABC):
"""Base abstract class for proxy rotation strategies"""
@abstractmethod
async def get_next_proxy(self) -> Optional[Dict]:
async def get_next_proxy(self) -> Optional[ProxyConfig]:
"""Get next proxy configuration from the strategy"""
pass
@abstractmethod
def add_proxies(self, proxies: List[Dict]):
def add_proxies(self, proxies: List[ProxyConfig]):
"""Add proxy configurations to the strategy"""
pass

View File

@@ -9,83 +9,44 @@ from urllib.parse import urlparse
import OpenSSL.crypto
from pathlib import Path
class SSLCertificate:
# === Inherit from dict ===
class SSLCertificate(dict):
"""
A class representing an SSL certificate with methods to export in various formats.
A class representing an SSL certificate, behaving like a dictionary
for direct JSON serialization. It stores the certificate information internally
and provides methods for export and property access.
Attributes:
cert_info (Dict[str, Any]): The certificate information.
Methods:
from_url(url: str, timeout: int = 10) -> Optional['SSLCertificate']: Create SSLCertificate instance from a URL.
from_file(file_path: str) -> Optional['SSLCertificate']: Create SSLCertificate instance from a file.
from_binary(binary_data: bytes) -> Optional['SSLCertificate']: Create SSLCertificate instance from binary data.
export_as_pem() -> str: Export the certificate as PEM format.
export_as_der() -> bytes: Export the certificate as DER format.
export_as_json() -> Dict[str, Any]: Export the certificate as JSON format.
export_as_text() -> str: Export the certificate as text format.
Inherits from dict, so instances are directly JSON serializable.
"""
# Use __slots__ for potential memory optimization if desired, though less common when inheriting dict
# __slots__ = ("_cert_info",) # If using slots, be careful with dict inheritance interaction
def __init__(self, cert_info: Dict[str, Any]):
self._cert_info = self._decode_cert_data(cert_info)
@staticmethod
def from_url(url: str, timeout: int = 10) -> Optional["SSLCertificate"]:
"""
Create SSLCertificate instance from a URL.
Initializes the SSLCertificate object.
Args:
url (str): URL of the website.
timeout (int): Timeout for the connection (default: 10).
Returns:
Optional[SSLCertificate]: SSLCertificate instance if successful, None otherwise.
cert_info (Dict[str, Any]): The raw certificate dictionary.
"""
try:
hostname = urlparse(url).netloc
if ":" in hostname:
hostname = hostname.split(":")[0]
# 1. Decode the data (handle bytes -> str)
decoded_info = self._decode_cert_data(cert_info)
context = ssl.create_default_context()
with socket.create_connection((hostname, 443), timeout=timeout) as sock:
with context.wrap_socket(sock, server_hostname=hostname) as ssock:
cert_binary = ssock.getpeercert(binary_form=True)
x509 = OpenSSL.crypto.load_certificate(
OpenSSL.crypto.FILETYPE_ASN1, cert_binary
)
# 2. Store the decoded info internally (optional but good practice)
# self._cert_info = decoded_info # You can keep this if methods rely on it
cert_info = {
"subject": dict(x509.get_subject().get_components()),
"issuer": dict(x509.get_issuer().get_components()),
"version": x509.get_version(),
"serial_number": hex(x509.get_serial_number()),
"not_before": x509.get_notBefore(),
"not_after": x509.get_notAfter(),
"fingerprint": x509.digest("sha256").hex(),
"signature_algorithm": x509.get_signature_algorithm(),
"raw_cert": base64.b64encode(cert_binary),
}
# Add extensions
extensions = []
for i in range(x509.get_extension_count()):
ext = x509.get_extension(i)
extensions.append(
{"name": ext.get_short_name(), "value": str(ext)}
)
cert_info["extensions"] = extensions
return SSLCertificate(cert_info)
except Exception:
return None
# 3. Initialize the dictionary part of the object with the decoded data
super().__init__(decoded_info)
@staticmethod
def _decode_cert_data(data: Any) -> Any:
"""Helper method to decode bytes in certificate data."""
if isinstance(data, bytes):
return data.decode("utf-8")
try:
# Try UTF-8 first, fallback to latin-1 for arbitrary bytes
return data.decode("utf-8")
except UnicodeDecodeError:
return data.decode("latin-1") # Or handle as needed, maybe hex representation
elif isinstance(data, dict):
return {
(
@@ -97,36 +58,119 @@ class SSLCertificate:
return [SSLCertificate._decode_cert_data(item) for item in data]
return data
@staticmethod
def from_url(url: str, timeout: int = 10) -> Optional["SSLCertificate"]:
"""
Create SSLCertificate instance from a URL. Fetches cert info and initializes.
(Fetching logic remains the same)
"""
cert_info_raw = None # Variable to hold the fetched dict
try:
hostname = urlparse(url).netloc
if ":" in hostname:
hostname = hostname.split(":")[0]
context = ssl.create_default_context()
# Set check_hostname to False and verify_mode to CERT_NONE temporarily
# for potentially problematic certificates during fetch, but parse the result regardless.
# context.check_hostname = False
# context.verify_mode = ssl.CERT_NONE
with socket.create_connection((hostname, 443), timeout=timeout) as sock:
with context.wrap_socket(sock, server_hostname=hostname) as ssock:
cert_binary = ssock.getpeercert(binary_form=True)
if not cert_binary:
print(f"Warning: No certificate returned for {hostname}")
return None
x509 = OpenSSL.crypto.load_certificate(
OpenSSL.crypto.FILETYPE_ASN1, cert_binary
)
# Create the dictionary directly
cert_info_raw = {
"subject": dict(x509.get_subject().get_components()),
"issuer": dict(x509.get_issuer().get_components()),
"version": x509.get_version(),
"serial_number": hex(x509.get_serial_number()),
"not_before": x509.get_notBefore(), # Keep as bytes initially, _decode handles it
"not_after": x509.get_notAfter(), # Keep as bytes initially
"fingerprint": x509.digest("sha256").hex(), # hex() is already string
"signature_algorithm": x509.get_signature_algorithm(), # Keep as bytes
"raw_cert": base64.b64encode(cert_binary), # Base64 is bytes, _decode handles it
}
# Add extensions
extensions = []
for i in range(x509.get_extension_count()):
ext = x509.get_extension(i)
# get_short_name() returns bytes, str(ext) handles value conversion
extensions.append(
{"name": ext.get_short_name(), "value": str(ext)}
)
cert_info_raw["extensions"] = extensions
except ssl.SSLCertVerificationError as e:
print(f"SSL Verification Error for {url}: {e}")
# Decide if you want to proceed or return None based on your needs
# You might try fetching without verification here if needed, but be cautious.
return None
except socket.gaierror:
print(f"Could not resolve hostname: {hostname}")
return None
except socket.timeout:
print(f"Connection timed out for {url}")
return None
except Exception as e:
print(f"Error fetching/processing certificate for {url}: {e}")
# Log the full error details if needed: logging.exception("Cert fetch error")
return None
# If successful, create the SSLCertificate instance from the dictionary
if cert_info_raw:
return SSLCertificate(cert_info_raw)
else:
return None
# --- Properties now access the dictionary items directly via self[] ---
@property
def issuer(self) -> Dict[str, str]:
return self.get("issuer", {}) # Use self.get for safety
@property
def subject(self) -> Dict[str, str]:
return self.get("subject", {})
@property
def valid_from(self) -> str:
return self.get("not_before", "")
@property
def valid_until(self) -> str:
return self.get("not_after", "")
@property
def fingerprint(self) -> str:
return self.get("fingerprint", "")
# --- Export methods can use `self` directly as it is the dict ---
def to_json(self, filepath: Optional[str] = None) -> Optional[str]:
"""
Export certificate as JSON.
Args:
filepath (Optional[str]): Path to save the JSON file (default: None).
Returns:
Optional[str]: JSON string if successful, None otherwise.
"""
json_str = json.dumps(self._cert_info, indent=2, ensure_ascii=False)
"""Export certificate as JSON."""
# `self` is already the dictionary we want to serialize
json_str = json.dumps(self, indent=2, ensure_ascii=False)
if filepath:
Path(filepath).write_text(json_str, encoding="utf-8")
return None
return json_str
def to_pem(self, filepath: Optional[str] = None) -> Optional[str]:
"""
Export certificate as PEM.
Args:
filepath (Optional[str]): Path to save the PEM file (default: None).
Returns:
Optional[str]: PEM string if successful, None otherwise.
"""
"""Export certificate as PEM."""
try:
# Decode the raw_cert (which should be string due to _decode)
raw_cert_bytes = base64.b64decode(self.get("raw_cert", ""))
x509 = OpenSSL.crypto.load_certificate(
OpenSSL.crypto.FILETYPE_ASN1,
base64.b64decode(self._cert_info["raw_cert"]),
OpenSSL.crypto.FILETYPE_ASN1, raw_cert_bytes
)
pem_data = OpenSSL.crypto.dump_certificate(
OpenSSL.crypto.FILETYPE_PEM, x509
@@ -136,49 +180,25 @@ class SSLCertificate:
Path(filepath).write_text(pem_data, encoding="utf-8")
return None
return pem_data
except Exception:
return None
except Exception as e:
print(f"Error converting to PEM: {e}")
return None
def to_der(self, filepath: Optional[str] = None) -> Optional[bytes]:
"""
Export certificate as DER.
Args:
filepath (Optional[str]): Path to save the DER file (default: None).
Returns:
Optional[bytes]: DER bytes if successful, None otherwise.
"""
"""Export certificate as DER."""
try:
der_data = base64.b64decode(self._cert_info["raw_cert"])
# Decode the raw_cert (which should be string due to _decode)
der_data = base64.b64decode(self.get("raw_cert", ""))
if filepath:
Path(filepath).write_bytes(der_data)
return None
return der_data
except Exception:
return None
except Exception as e:
print(f"Error converting to DER: {e}")
return None
@property
def issuer(self) -> Dict[str, str]:
"""Get certificate issuer information."""
return self._cert_info.get("issuer", {})
@property
def subject(self) -> Dict[str, str]:
"""Get certificate subject information."""
return self._cert_info.get("subject", {})
@property
def valid_from(self) -> str:
"""Get certificate validity start date."""
return self._cert_info.get("not_before", "")
@property
def valid_until(self) -> str:
"""Get certificate validity end date."""
return self._cert_info.get("not_after", "")
@property
def fingerprint(self) -> str:
"""Get certificate fingerprint."""
return self._cert_info.get("fingerprint", "")
# Optional: Add __repr__ for better debugging
def __repr__(self) -> str:
subject_cn = self.subject.get('CN', 'N/A')
issuer_cn = self.issuer.get('CN', 'N/A')
return f"<SSLCertificate Subject='{subject_cn}' Issuer='{issuer_cn}'>"

View File

@@ -20,7 +20,6 @@ from urllib.parse import urljoin
import requests
from requests.exceptions import InvalidSchema
import xxhash
from colorama import Fore, Style, init
import textwrap
import cProfile
import pstats
@@ -136,13 +135,20 @@ def merge_chunks(
word_token_ratio: float = 1.0,
splitter: Callable = None
) -> List[str]:
"""Merges documents into chunks of specified token size.
"""
Merges a sequence of documents into chunks based on a target token count, with optional overlap.
Each document is split into tokens using the provided splitter function (defaults to str.split). Tokens are distributed into chunks aiming for the specified target size, with optional overlapping tokens between consecutive chunks. Returns a list of non-empty merged chunks as strings.
Args:
docs: Input documents
target_size: Desired token count per chunk
overlap: Number of tokens to overlap between chunks
word_token_ratio: Multiplier for word->token conversion
docs: Sequence of input document strings to be merged.
target_size: Target number of tokens per chunk.
overlap: Number of tokens to overlap between consecutive chunks.
word_token_ratio: Multiplier to estimate token count from word count.
splitter: Callable used to split each document into tokens.
Returns:
List of merged document chunks as strings, each not exceeding the target token size.
"""
# Pre-tokenize all docs and store token counts
splitter = splitter or str.split
@@ -151,7 +157,7 @@ def merge_chunks(
total_tokens = 0
for doc in docs:
tokens = doc.split()
tokens = splitter(doc)
count = int(len(tokens) * word_token_ratio)
if count: # Skip empty docs
token_counts.append(count)
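# Usage sketch for the corrected splitter handling (values illustrative):
# docs = ["alpha beta gamma", "delta epsilon", "zeta"]
# chunks = merge_chunks(docs, target_size=4, overlap=1, splitter=str.split)
# Each chunk targets ~4 tokens, with 1 token of overlap between neighbors.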
@@ -441,14 +447,13 @@ def create_box_message(
str: A formatted string containing the styled message box.
"""
init()
# Define border and text colors for different types
styles = {
"warning": (Fore.YELLOW, Fore.LIGHTYELLOW_EX, ""),
"info": (Fore.BLUE, Fore.LIGHTBLUE_EX, ""),
"success": (Fore.GREEN, Fore.LIGHTGREEN_EX, ""),
"error": (Fore.RED, Fore.LIGHTRED_EX, "×"),
"warning": ("yellow", "bright_yellow", ""),
"info": ("blue", "bright_blue", ""),
"debug": ("lightblack", "bright_black", ""),
"success": ("green", "bright_green", ""),
"error": ("red", "bright_red", "×"),
}
border_color, text_color, prefix = styles.get(type.lower(), styles["info"])
@@ -480,12 +485,12 @@ def create_box_message(
# Create the box with colored borders and lighter text
horizontal_line = h_line * (width - 1)
box = [
f"{border_color}{tl}{horizontal_line}{tr}",
f"[{border_color}]{tl}{horizontal_line}{tr}[/{border_color}]",
*[
f"{border_color}{v_line}{text_color} {line:<{width-2}}{border_color}{v_line}"
f"[{border_color}]{v_line}[{text_color}] {line:<{width-2}}[/{text_color}][{border_color}]{v_line}[/{border_color}]"
for line in formatted_lines
],
f"{border_color}{bl}{horizontal_line}{br}{Style.RESET_ALL}",
f"[{border_color}]{bl}{horizontal_line}{br}[/{border_color}]",
]
result = "\n".join(box)
@@ -1111,6 +1116,23 @@ def get_content_of_website_optimized(
css_selector: str = None,
**kwargs,
) -> Dict[str, Any]:
"""
Extracts and cleans content from website HTML, optimizing for useful media and contextual information.
Parses the provided HTML to extract internal and external links, filters and scores images for usefulness, gathers contextual descriptions for media, removes unwanted or low-value elements, and converts the cleaned HTML to Markdown. Also extracts metadata and returns all structured content in a dictionary.
Args:
url: The URL of the website being processed.
html: The raw HTML content to extract from.
word_count_threshold: Minimum word count for elements to be retained.
css_selector: Optional CSS selector to restrict extraction to specific elements.
Returns:
A dictionary containing Markdown content, cleaned HTML, extraction success status, media and link lists, and metadata.
Raises:
InvalidCSSSelectorError: If a provided CSS selector does not match any elements.
"""
if not html:
return None
@@ -1153,6 +1175,20 @@ def get_content_of_website_optimized(
def process_image(img, url, index, total_images):
# Check whether an image is visibly displayed and not inside undesired HTML elements
"""
Processes an HTML image element to determine its relevance and extract metadata.
Evaluates an image's visibility, context, and usefulness based on its attributes and parent elements. If the image passes validation and exceeds a usefulness score threshold, returns a dictionary with its source, alt text, contextual description, score, and type. Otherwise, returns None.
Args:
img: The BeautifulSoup image tag to process.
url: The base URL of the page containing the image.
index: The index of the image in the list of images on the page.
total_images: The total number of images on the page.
Returns:
A dictionary with image metadata if the image is considered useful, or None otherwise.
"""
def is_valid_image(img, parent, parent_classes):
style = img.get("style", "")
src = img.get("src", "")
@@ -1174,6 +1210,20 @@ def get_content_of_website_optimized(
# Score an image for its usefulness
def score_image_for_usefulness(img, base_url, index, images_count):
# Function to parse image height/width value and units
"""
Scores an HTML image element for usefulness based on size, format, attributes, and position.
The function evaluates an image's dimensions, file format, alt text, and its position among all images on the page to assign a usefulness score. Higher scores indicate images that are likely more relevant or informative for content extraction or summarization.
Args:
img: The HTML image element to score.
base_url: The base URL used to resolve relative image sources.
index: The position of the image in the list of images on the page (zero-based).
images_count: The total number of images on the page.
Returns:
An integer usefulness score for the image.
"""
def parse_dimension(dimension):
if dimension:
match = re.match(r"(\d+)(\D*)", dimension)
@@ -1188,6 +1238,16 @@ def get_content_of_website_optimized(
# Fetch image file metadata to extract size and extension
def fetch_image_file_size(img, base_url):
# If src is relative path construct full URL, if not it may be CDN URL
"""
Fetches the file size of an image by sending a HEAD request to its URL.
Args:
img: A BeautifulSoup tag representing the image element.
base_url: The base URL to resolve relative image sources.
Returns:
The value of the "Content-Length" header as a string if available, otherwise None.
"""
img_url = urljoin(base_url, img.get("src"))
try:
response = requests.head(img_url)
@@ -1198,8 +1258,6 @@ def get_content_of_website_optimized(
return None
except InvalidSchema:
return None
finally:
return
image_height = img.get("height")
height_value, height_unit = parse_dimension(image_height)
@@ -2003,6 +2061,10 @@ def normalize_url(href, base_url):
if not parsed_base.scheme or not parsed_base.netloc:
raise ValueError(f"Invalid base URL format: {base_url}")
# Ensure base_url ends with a trailing slash if it's a directory path
if not base_url.endswith('/'):
base_url = base_url + '/'
# Use urljoin to handle all cases
normalized = urljoin(base_url, href.strip())
return normalized
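# Effect of the new trailing-slash guard (illustrative inputs; behavior
# follows standard urljoin semantics):
# normalize_url("page.html", "https://example.com/docs")
#   -> "https://example.com/docs/page.html"   (base treated as a directory)
# Without the guard, urljoin would have produced "https://example.com/page.html".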
@@ -2047,7 +2109,7 @@ def normalize_url_for_deep_crawl(href, base_url):
normalized = urlunparse((
parsed.scheme,
netloc,
parsed.path.rstrip('/') or '/', # Normalize trailing slash
parsed.path.rstrip('/'), # Normalize trailing slash
parsed.params,
query,
fragment
@@ -2075,7 +2137,7 @@ def efficient_normalize_url_for_deep_crawl(href, base_url):
normalized = urlunparse((
parsed.scheme,
parsed.netloc.lower(),
parsed.path,
parsed.path.rstrip('/'),
parsed.params,
parsed.query,
'' # Remove fragment
@@ -2733,33 +2795,67 @@ def preprocess_html_for_schema(html_content, text_threshold=100, attr_value_thre
# Also truncate tail text if present
if element.tail and len(element.tail.strip()) > text_threshold:
element.tail = element.tail.strip()[:text_threshold] + '...'
# 4. Find repeated patterns and keep only a few examples
# This is a simplistic approach - more sophisticated pattern detection could be implemented
pattern_elements = {}
for element in tree.xpath('//*[contains(@class, "")]'):
parent = element.getparent()
# 4. Detect duplicates and drop them in a single pass
seen: dict[tuple, None] = {}
for el in list(tree.xpath('//*[@class]')): # snapshot once, XPath is fast
parent = el.getparent()
if parent is None:
continue
# Create a signature based on tag and classes
classes = element.get('class', '')
if not classes:
cls = el.get('class')
if not cls:
continue
signature = f"{element.tag}.{classes}"
if signature in pattern_elements:
pattern_elements[signature].append(element)
# ── build signature ───────────────────────────────────────────
h = xxhash.xxh64() # stream, no big join()
for txt in el.itertext():
h.update(txt)
sig = (el.tag, cls, h.intdigest()) # tuple cheaper & hashable
# ── first seen? keep else drop ─────────────
if sig in seen: # parent is already known to be non-None here
parent.remove(el) # duplicate
else:
pattern_elements[signature] = [element]
seen[sig] = None
# Keep only 3 examples of each repeating pattern
for signature, elements in pattern_elements.items():
if len(elements) > 3:
# Keep the first 2 and last elements
for element in elements[2:-1]:
if element.getparent() is not None:
element.getparent().remove(element)
# # 4. Find repeated patterns and keep only a few examples
# # This is a simplistic approach - more sophisticated pattern detection could be implemented
# pattern_elements = {}
# for element in tree.xpath('//*[contains(@class, "")]'):
# parent = element.getparent()
# if parent is None:
# continue
# # Create a signature based on tag and classes
# classes = element.get('class', '')
# if not classes:
# continue
# innert_text = ''.join(element.xpath('.//text()'))
# innert_text_hash = xxhash.xxh64(innert_text.encode()).hexdigest()
# signature = f"{element.tag}.{classes}.{innert_text_hash}"
# if signature in pattern_elements:
# pattern_elements[signature].append(element)
# else:
# pattern_elements[signature] = [element]
# # Keep only first examples of each repeating pattern
# for signature, elements in pattern_elements.items():
# if len(elements) > 1:
# # Keep the first element and remove the rest
# for element in elements[1:]:
# if element.getparent() is not None:
# element.getparent().remove(element)
# # Keep only 3 examples of each repeating pattern
# for signature, elements in pattern_elements.items():
# if len(elements) > 3:
# # Keep the first 2 and last elements
# for element in elements[2:-1]:
# if element.getparent() is not None:
# element.getparent().remove(element)
# 5. Convert back to string
result = etree.tostring(tree, encoding='unicode', method='html')
@@ -2774,4 +2870,3 @@ def preprocess_html_for_schema(html_content, text_threshold=100, attr_value_thre
# Fallback for parsing errors
return html_content[:max_size] if len(html_content) > max_size else html_content

File diff suppressed because it is too large

View File

@@ -1,8 +1,10 @@
import os
import json
import asyncio
from typing import List, Tuple
from typing import List, Tuple, Dict
from functools import partial
from uuid import uuid4
from datetime import datetime
import logging
from typing import Optional, AsyncGenerator
@@ -40,8 +42,19 @@ from utils import (
decode_redis_hash
)
import psutil, time
logger = logging.getLogger(__name__)
# --- Helper to get memory ---
def _get_memory_mb():
try:
return psutil.Process().memory_info().rss / (1024 * 1024)
except Exception as e:
logger.warning(f"Could not get memory info: {e}")
return None
async def handle_llm_qa(
url: str,
query: str,
@@ -49,6 +62,8 @@ async def handle_llm_qa(
) -> str:
"""Process QA using LLM with crawled content as context."""
try:
if not url.startswith(('http://', 'https://')):
url = 'https://' + url
# Extract base URL by finding last '?q=' occurrence
last_q_index = url.rfind('?q=')
if last_q_index != -1:
@@ -62,7 +77,7 @@ async def handle_llm_qa(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=result.error_message
)
content = result.markdown.fit_markdown
content = result.markdown.fit_markdown or result.markdown.raw_markdown
# Create prompt and get LLM response
prompt = f"""Use the following content as context to answer the question.
@@ -259,7 +274,9 @@ async def handle_llm_request(
async def handle_task_status(
redis: aioredis.Redis,
task_id: str,
base_url: str
base_url: str,
*,
keep: bool = False
) -> JSONResponse:
"""Handle task status check requests."""
task = await redis.hgetall(f"task:{task_id}")
@@ -273,7 +290,7 @@ async def handle_task_status(
response = create_task_response(task, task_id, base_url)
if task["status"] in [TaskStatus.COMPLETED, TaskStatus.FAILED]:
if should_cleanup_task(task["created_at"]):
if not keep and should_cleanup_task(task["created_at"]):
await redis.delete(f"task:{task_id}")
return JSONResponse(response)
@@ -351,7 +368,9 @@ async def stream_results(crawler: AsyncWebCrawler, results_gen: AsyncGenerator)
try:
async for result in results_gen:
try:
server_memory_mb = _get_memory_mb()
result_dict = result.model_dump()
result_dict['server_memory_mb'] = server_memory_mb
logger.info(f"Streaming result for {result_dict.get('url', 'unknown')}")
data = json.dumps(result_dict, default=datetime_handler) + "\n"
yield data.encode('utf-8')
@@ -365,10 +384,11 @@ async def stream_results(crawler: AsyncWebCrawler, results_gen: AsyncGenerator)
except asyncio.CancelledError:
logger.warning("Client disconnected during streaming")
finally:
try:
await crawler.close()
except Exception as e:
logger.error(f"Crawler cleanup error: {e}")
# try:
# await crawler.close()
# except Exception as e:
# logger.error(f"Crawler cleanup error: {e}")
pass
async def handle_crawl_request(
urls: List[str],
@@ -377,7 +397,13 @@ async def handle_crawl_request(
config: dict
) -> dict:
"""Handle non-streaming crawl requests."""
start_mem_mb = _get_memory_mb() # <--- Get memory before
start_time = time.time()
mem_delta_mb = None
peak_mem_mb = start_mem_mb
try:
urls = [('https://' + url) if not url.startswith(('http://', 'https://')) else url for url in urls]
browser_config = BrowserConfig.load(browser_config)
crawler_config = CrawlerRunConfig.load(crawler_config)
@@ -385,27 +411,68 @@ async def handle_crawl_request(
memory_threshold_percent=config["crawler"]["memory_threshold_percent"],
rate_limiter=RateLimiter(
base_delay=tuple(config["crawler"]["rate_limiter"]["base_delay"])
)
) if config["crawler"]["rate_limiter"]["enabled"] else None
)
from crawler_pool import get_crawler
crawler = await get_crawler(browser_config)
async with AsyncWebCrawler(config=browser_config) as crawler:
results = []
func = getattr(crawler, "arun" if len(urls) == 1 else "arun_many")
partial_func = partial(func,
urls[0] if len(urls) == 1 else urls,
config=crawler_config,
dispatcher=dispatcher)
results = await partial_func()
return {
"success": True,
"results": [result.model_dump() for result in results]
}
# crawler: AsyncWebCrawler = AsyncWebCrawler(config=browser_config)
# await crawler.start()
base_config = config["crawler"]["base_config"]
# Iterate over the key-value pairs in base_config, then use hasattr to set them
for key, value in base_config.items():
if hasattr(crawler_config, key):
setattr(crawler_config, key, value)
results = []
func = getattr(crawler, "arun" if len(urls) == 1 else "arun_many")
partial_func = partial(func,
urls[0] if len(urls) == 1 else urls,
config=crawler_config,
dispatcher=dispatcher)
results = await partial_func()
# await crawler.close()
end_mem_mb = _get_memory_mb() # <--- Get memory after
end_time = time.time()
if start_mem_mb is not None and end_mem_mb is not None:
mem_delta_mb = end_mem_mb - start_mem_mb # <--- Calculate delta
peak_mem_mb = max(peak_mem_mb if peak_mem_mb else 0, end_mem_mb) # <--- Get peak memory
logger.info(f"Memory usage: Start: {start_mem_mb} MB, End: {end_mem_mb} MB, Delta: {mem_delta_mb} MB, Peak: {peak_mem_mb} MB")
return {
"success": True,
"results": [result.model_dump() for result in results],
"server_processing_time_s": end_time - start_time,
"server_memory_delta_mb": mem_delta_mb,
"server_peak_memory_mb": peak_mem_mb
}
except Exception as e:
logger.error(f"Crawl error: {str(e)}", exc_info=True)
if 'crawler' in locals() and crawler.ready: # Check if crawler was initialized and started
# try:
# await crawler.close()
# except Exception as close_e:
# logger.error(f"Error closing crawler during exception handling: {close_e}")
logger.error(f"Error closing crawler during exception handling: {close_e}")
# Measure memory even on error if possible
end_mem_mb_error = _get_memory_mb()
if start_mem_mb is not None and end_mem_mb_error is not None:
mem_delta_mb = end_mem_mb_error - start_mem_mb
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
detail=json.dumps({ # Send structured error
"error": str(e),
"server_memory_delta_mb": mem_delta_mb,
"server_peak_memory_mb": max(peak_mem_mb if peak_mem_mb else 0, end_mem_mb_error or 0)
})
)
async def handle_stream_crawl_request(
@@ -417,9 +484,11 @@ async def handle_stream_crawl_request(
"""Handle streaming crawl requests."""
try:
browser_config = BrowserConfig.load(browser_config)
browser_config.verbose = True
# browser_config.verbose = True # Set to False or remove for production stress testing
browser_config.verbose = False
crawler_config = CrawlerRunConfig.load(crawler_config)
crawler_config.scraping_strategy = LXMLWebScrapingStrategy()
crawler_config.stream = True
dispatcher = MemoryAdaptiveDispatcher(
memory_threshold_percent=config["crawler"]["memory_threshold_percent"],
@@ -428,8 +497,11 @@ async def handle_stream_crawl_request(
)
)
crawler = AsyncWebCrawler(config=browser_config)
await crawler.start()
from crawler_pool import get_crawler
crawler = await get_crawler(browser_config)
# crawler = AsyncWebCrawler(config=browser_config)
# await crawler.start()
results_gen = await crawler.arun_many(
urls=urls,
@@ -440,10 +512,60 @@ async def handle_stream_crawl_request(
return crawler, results_gen
except Exception as e:
if 'crawler' in locals():
await crawler.close()
# Make sure to close crawler if started during an error here
if 'crawler' in locals() and crawler.ready:
# try:
# await crawler.close()
# except Exception as close_e:
# logger.error(f"Error closing crawler during stream setup exception: {close_e}")
logger.error(f"Error closing crawler during stream setup exception: {close_e}")
logger.error(f"Stream crawl error: {str(e)}", exc_info=True)
# Raising HTTPException here will prevent streaming response
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=str(e)
)
async def handle_crawl_job(
redis,
background_tasks: BackgroundTasks,
urls: List[str],
browser_config: Dict,
crawler_config: Dict,
config: Dict,
) -> Dict:
"""
Fire-and-forget version of handle_crawl_request.
Creates a task in Redis, runs the heavy work in a background task,
lets /crawl/job/{task_id} polling fetch the result.
"""
task_id = f"crawl_{uuid4().hex[:8]}"
await redis.hset(f"task:{task_id}", mapping={
"status": TaskStatus.PROCESSING, # <-- keep enum values consistent
"created_at": datetime.utcnow().isoformat(),
"url": json.dumps(urls), # store list as JSON string
"result": "",
"error": "",
})
async def _runner():
try:
result = await handle_crawl_request(
urls=urls,
browser_config=browser_config,
crawler_config=crawler_config,
config=config,
)
await redis.hset(f"task:{task_id}", mapping={
"status": TaskStatus.COMPLETED,
"result": json.dumps(result),
})
await asyncio.sleep(5) # Give Redis time to process the update
except Exception as exc:
await redis.hset(f"task:{task_id}", mapping={
"status": TaskStatus.FAILED,
"error": str(exc),
})
background_tasks.add_task(_runner)
return {"task_id": task_id}

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -3,8 +3,9 @@ app:
title: "Crawl4AI API"
version: "1.0.0"
host: "0.0.0.0"
port: 8020
reload: True
port: 11234
reload: False
workers: 1
timeout_keep_alive: 300
# Default LLM Configuration
@@ -50,12 +51,31 @@ security:
# Crawler Configuration
crawler:
base_config:
simulate_user: true
memory_threshold_percent: 95.0
rate_limiter:
enabled: true
base_delay: [1.0, 2.0]
timeouts:
stream_init: 30.0 # Timeout for stream initialization
batch_process: 300.0 # Timeout for batch processing
pool:
max_pages: 40 # ← GLOBAL_SEM permits
idle_ttl_sec: 1800 # ← 30 min janitor cutoff
browser:
kwargs:
headless: true
text_mode: true
extra_args:
# - "--single-process"
- "--no-sandbox"
- "--disable-dev-shm-usage"
- "--disable-gpu"
- "--disable-software-rasterizer"
- "--disable-web-security"
- "--allow-insecure-localhost"
- "--ignore-certificate-errors"
# Logging Configuration
logging:

View File

@@ -0,0 +1,60 @@
# crawler_pool.py (new file)
import asyncio, json, hashlib, time, psutil
from contextlib import suppress
from typing import Dict
from crawl4ai import AsyncWebCrawler, BrowserConfig
from utils import load_config
CONFIG = load_config()
POOL: Dict[str, AsyncWebCrawler] = {}
LAST_USED: Dict[str, float] = {}
LOCK = asyncio.Lock()
MEM_LIMIT = CONFIG.get("crawler", {}).get("memory_threshold_percent", 95.0) # % RAM; refuse new browsers above this
IDLE_TTL = CONFIG.get("crawler", {}).get("pool", {}).get("idle_ttl_sec", 1800) # close if unused for 30min
def _sig(cfg: BrowserConfig) -> str:
payload = json.dumps(cfg.to_dict(), sort_keys=True, separators=(",",":"))
return hashlib.sha1(payload.encode()).hexdigest()
async def get_crawler(cfg: BrowserConfig) -> AsyncWebCrawler:
try:
sig = _sig(cfg)
async with LOCK:
if sig in POOL:
LAST_USED[sig] = time.time();
return POOL[sig]
if psutil.virtual_memory().percent >= MEM_LIMIT:
raise MemoryError("RAM pressure new browser denied")
crawler = AsyncWebCrawler(config=cfg, thread_safe=False)
await crawler.start()
POOL[sig] = crawler; LAST_USED[sig] = time.time()
return crawler
except MemoryError as e:
raise MemoryError(f"RAM pressure new browser denied: {e}")
except Exception as e:
raise RuntimeError(f"Failed to start browser: {e}")
finally:
# Note: _sig() may have raised before `sig` was bound
if 'sig' in locals():
if sig in POOL:
LAST_USED[sig] = time.time()
else:
# If we failed to start the browser, remove any partial bookkeeping from the pool
POOL.pop(sig, None)
LAST_USED.pop(sig, None)
async def close_all():
async with LOCK:
await asyncio.gather(*(c.close() for c in POOL.values()), return_exceptions=True)
POOL.clear(); LAST_USED.clear()
async def janitor():
while True:
await asyncio.sleep(60)
now = time.time()
async with LOCK:
for sig, crawler in list(POOL.items()):
if now - LAST_USED[sig] > IDLE_TTL:
with suppress(Exception): await crawler.close()
POOL.pop(sig, None); LAST_USED.pop(sig, None)
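A sketch of the intended pool lifecycle (hypothetical usage of the functions above; it assumes two `BrowserConfig`s with equal kwargs produce the same `_sig` digest):

```python
import asyncio
from crawl4ai import BrowserConfig
from crawler_pool import get_crawler, close_all, janitor

async def main():
    gc = asyncio.create_task(janitor())                   # background idle-GC loop (60 s sweep)
    a = await get_crawler(BrowserConfig(headless=True))   # cold start: launches a browser
    b = await get_crawler(BrowserConfig(headless=True))   # warm hit: same signature, same instance
    assert a is b
    gc.cancel()
    await close_all()                                     # close every pooled browser on shutdown

# asyncio.run(main())
```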

deploy/docker/job.py

@@ -0,0 +1,99 @@
"""
Job endpoints (enqueue + poll) for long-running LLM extraction and raw crawl.
Relies on the existing Redis task helpers in api.py
"""
from typing import Dict, Optional, Callable
from fastapi import APIRouter, BackgroundTasks, Depends, Request
from pydantic import BaseModel, HttpUrl
from api import (
handle_llm_request,
handle_crawl_job,
handle_task_status,
)
# ------------- dependency placeholders -------------
_redis = None # will be injected from server.py
_config = None
_token_dep: Callable = lambda: None # dummy until injected
# public router
router = APIRouter()
# === init hook called by server.py =========================================
def init_job_router(redis, config, token_dep) -> APIRouter:
"""Inject shared singletons and return the router for mounting."""
global _redis, _config, _token_dep
_redis, _config, _token_dep = redis, config, token_dep
return router
# ---------- payload models --------------------------------------------------
class LlmJobPayload(BaseModel):
url: HttpUrl
q: str
schema: Optional[str] = None
cache: bool = False
class CrawlJobPayload(BaseModel):
urls: list[HttpUrl]
browser_config: Dict = {}
crawler_config: Dict = {}
# ---------- LLM job ---------------------------------------------------------
@router.post("/llm/job", status_code=202)
async def llm_job_enqueue(
payload: LlmJobPayload,
background_tasks: BackgroundTasks,
request: Request,
_td: Dict = Depends(lambda: _token_dep()), # late-bound dep
):
return await handle_llm_request(
_redis,
background_tasks,
request,
str(payload.url),
query=payload.q,
schema=payload.schema,
cache=payload.cache,
config=_config,
)
@router.get("/llm/job/{task_id}")
async def llm_job_status(
request: Request,
task_id: str,
_td: Dict = Depends(lambda: _token_dep())
):
return await handle_task_status(_redis, task_id)
# ---------- CRAWL job -------------------------------------------------------
@router.post("/crawl/job", status_code=202)
async def crawl_job_enqueue(
payload: CrawlJobPayload,
background_tasks: BackgroundTasks,
_td: Dict = Depends(lambda: _token_dep()),
):
return await handle_crawl_job(
_redis,
background_tasks,
[str(u) for u in payload.urls],
payload.browser_config,
payload.crawler_config,
config=_config,
)
@router.get("/crawl/job/{task_id}")
async def crawl_job_status(
request: Request,
task_id: str,
_td: Dict = Depends(lambda: _token_dep())
):
return await handle_task_status(_redis, task_id, base_url=str(request.base_url))
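server.py wires this up with `app.include_router(init_job_router(redis, config, token_dep))` (see below). A minimal standalone sketch of that wiring, with placeholder singletons:

```python
from fastapi import FastAPI
from redis import asyncio as aioredis
from job import init_job_router

app = FastAPI()
redis = aioredis.from_url("redis://localhost")  # normally built from config.yml's redis.uri
config = {}                                     # placeholder for the loaded YAML config
token_dep = lambda: None                        # placeholder no-op auth dependency

# inject the shared singletons once, then mount the returned router
app.include_router(init_job_router(redis, config, token_dep))
```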

deploy/docker/mcp_bridge.py

@@ -0,0 +1,252 @@
# deploy/docker/mcp_bridge.py
from __future__ import annotations
import inspect, json, re, anyio
from contextlib import suppress
from typing import Any, Callable, Dict, List, Tuple
import httpx
from fastapi import FastAPI, WebSocket, WebSocketDisconnect, HTTPException
from fastapi.responses import JSONResponse
from fastapi import Request
from sse_starlette.sse import EventSourceResponse
from pydantic import BaseModel
from mcp.server.sse import SseServerTransport
import mcp.types as t
from mcp.server.lowlevel.server import Server, NotificationOptions
from mcp.server.models import InitializationOptions
# ── opt-in decorators ───────────────────────────────────────────
def mcp_resource(name: str | None = None):
def deco(fn):
fn.__mcp_kind__, fn.__mcp_name__ = "resource", name
return fn
return deco
def mcp_template(name: str | None = None):
def deco(fn):
fn.__mcp_kind__, fn.__mcp_name__ = "template", name
return fn
return deco
def mcp_tool(name: str | None = None):
def deco(fn):
fn.__mcp_kind__, fn.__mcp_name__ = "tool", name
return fn
return deco
# ── HTTP-proxy helper for FastAPI endpoints ─────────────────────
def _make_http_proxy(base_url: str, route):
method = list(route.methods - {"HEAD", "OPTIONS"})[0]
async def proxy(**kwargs):
# replace `/items/{id}` style params first
path = route.path
for k, v in list(kwargs.items()):
placeholder = "{" + k + "}"
if placeholder in path:
path = path.replace(placeholder, str(v))
kwargs.pop(k)
url = base_url.rstrip("/") + path
async with httpx.AsyncClient() as client:
try:
r = (
await client.get(url, params=kwargs)
if method == "GET"
else await client.request(method, url, json=kwargs)
)
r.raise_for_status()
return r.text if method == "GET" else r.json()
except httpx.HTTPStatusError as e:
# surface FastAPI error details instead of plain 500
raise HTTPException(e.response.status_code, e.response.text)
return proxy
# ── main entry point ────────────────────────────────────────────
def attach_mcp(
app: FastAPI,
*,  # keyword-only
base: str = "/mcp",
name: str | None = None,
base_url: str,  # e.g. "http://127.0.0.1:8020"
) -> None:
"""Call once after all routes are declared to expose WS+SSE MCP endpoints."""
server_name = name or app.title or "FastAPI-MCP"
mcp = Server(server_name)
# tools: Dict[str, Callable] = {}
tools: Dict[str, Tuple[Callable, Callable]] = {}
resources: Dict[str, Callable] = {}
templates: Dict[str, Callable] = {}
# register decorated FastAPI routes
for route in app.routes:
fn = getattr(route, "endpoint", None)
kind = getattr(fn, "__mcp_kind__", None)
if not kind:
continue
key = fn.__mcp_name__ or re.sub(r"[/{}]", "_", route.path).strip("_")
# if kind == "tool":
# tools[key] = _make_http_proxy(base_url, route)
if kind == "tool":
proxy = _make_http_proxy(base_url, route)
tools[key] = (proxy, fn)
continue
if kind == "resource":
resources[key] = fn
if kind == "template":
templates[key] = fn
# helpers for JSONSchema
def _schema(model: type[BaseModel] | None) -> dict:
return {"type": "object"} if model is None else model.model_json_schema()
def _body_model(fn: Callable) -> type[BaseModel] | None:
for p in inspect.signature(fn).parameters.values():
a = p.annotation
if inspect.isclass(a) and issubclass(a, BaseModel):
return a
return None
# MCP handlers
@mcp.list_tools()
async def _list_tools() -> List[t.Tool]:
out = []
for k, (proxy, orig_fn) in tools.items():
desc = getattr(orig_fn, "__mcp_description__", None) or inspect.getdoc(orig_fn) or ""
schema = getattr(orig_fn, "__mcp_schema__", None) or _schema(_body_model(orig_fn))
out.append(
t.Tool(name=k, description=desc, inputSchema=schema)
)
return out
@mcp.call_tool()
async def _call_tool(name: str, arguments: Dict | None) -> List[t.TextContent]:
if name not in tools:
raise HTTPException(404, "tool not found")
proxy, _ = tools[name]
try:
res = await proxy(**(arguments or {}))
except HTTPException as exc:
# map serverside errors into MCP "text/error" payloads
err = {"error": exc.status_code, "detail": exc.detail}
return [t.TextContent(type = "text", text=json.dumps(err))]
return [t.TextContent(type = "text", text=json.dumps(res, default=str))]
@mcp.list_resources()
async def _list_resources() -> List[t.Resource]:
return [
t.Resource(name=k, description=inspect.getdoc(f) or "", mime_type="application/json")
for k, f in resources.items()
]
@mcp.read_resource()
async def _read_resource(name: str) -> List[t.TextContent]:
if name not in resources:
raise HTTPException(404, "resource not found")
res = resources[name]()
return [t.TextContent(type = "text", text=json.dumps(res, default=str))]
@mcp.list_resource_templates()
async def _list_templates() -> List[t.ResourceTemplate]:
return [
t.ResourceTemplate(
name=k,
description=inspect.getdoc(f) or "",
parameters={
p: {"type": "string"} for p in _path_params(app, f)
},
)
for k, f in templates.items()
]
init_opts = InitializationOptions(
server_name=server_name,
server_version="0.1.0",
capabilities=mcp.get_capabilities(
notification_options=NotificationOptions(),
experimental_capabilities={},
),
)
# ── WebSocket transport ────────────────────────────────────
@app.websocket_route(f"{base}/ws")
async def _ws(ws: WebSocket):
await ws.accept()
c2s_send, c2s_recv = anyio.create_memory_object_stream(100)
s2c_send, s2c_recv = anyio.create_memory_object_stream(100)
from pydantic import TypeAdapter
from mcp.types import JSONRPCMessage
adapter = TypeAdapter(JSONRPCMessage)
init_done = anyio.Event()
async def srv_to_ws():
first = True
try:
async for msg in s2c_recv:
await ws.send_json(msg.model_dump())
if first:
init_done.set()
first = False
finally:
# make sure cleanup survives TaskGroup cancellation
with anyio.CancelScope(shield=True):
with suppress(RuntimeError): # idempotent close
await ws.close()
async def ws_to_srv():
try:
# 1st frame is always "initialize"
first = adapter.validate_python(await ws.receive_json())
await c2s_send.send(first)
await init_done.wait() # block until server ready
while True:
data = await ws.receive_json()
await c2s_send.send(adapter.validate_python(data))
except WebSocketDisconnect:
await c2s_send.aclose()
async with anyio.create_task_group() as tg:
tg.start_soon(mcp.run, c2s_recv, s2c_send, init_opts)
tg.start_soon(ws_to_srv)
tg.start_soon(srv_to_ws)
# ── SSE transport (official) ─────────────────────────────
sse = SseServerTransport(f"{base}/messages/")
@app.get(f"{base}/sse")
async def _mcp_sse(request: Request):
async with sse.connect_sse(
request.scope, request.receive, request._send # starlette ASGI primitives
) as (read_stream, write_stream):
await mcp.run(read_stream, write_stream, init_opts)
# client → server frames are POSTed here
app.mount(f"{base}/messages", app=sse.handle_post_message)
# ── schema endpoint ───────────────────────────────────────
@app.get(f"{base}/schema")
async def _schema_endpoint():
return JSONResponse({
"tools": [x.model_dump() for x in await _list_tools()],
"resources": [x.model_dump() for x in await _list_resources()],
"resource_templates": [x.model_dump() for x in await _list_templates()],
})
# ── helpers ────────────────────────────────────────────────────
def _route_name(path: str) -> str:
return re.sub(r"[/{}}]", "_", path).strip("_")
def _path_params(app: FastAPI, fn: Callable) -> List[str]:
for r in app.routes:
if r.endpoint is fn:
return list(r.param_convertors.keys())
return []
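How the bridge is meant to be consumed, as a hedged sketch (the `/echo` route is hypothetical; the pattern mirrors the `@mcp_tool` + `attach_mcp` usage in server.py below):

```python
from fastapi import FastAPI
from pydantic import BaseModel
from mcp_bridge import attach_mcp, mcp_tool

app = FastAPI(title="demo")

class EchoBody(BaseModel):
    text: str

@app.post("/echo")
@mcp_tool("echo")          # tags the endpoint so attach_mcp exposes it as MCP tool "echo"
async def echo(body: EchoBody):
    return {"echo": body.text}

# declare all routes first, then expose /mcp/ws, /mcp/sse and /mcp/schema
attach_mcp(app, base_url="http://127.0.0.1:8020")
```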


@@ -1,10 +1,16 @@
 crawl4ai
-fastapi
-uvicorn
+fastapi>=0.115.12
+uvicorn>=0.34.2
 gunicorn>=23.0.0
-slowapi>=0.1.9
-prometheus-fastapi-instrumentator>=7.0.2
+slowapi==0.1.9
+prometheus-fastapi-instrumentator>=7.1.0
 redis>=5.2.1
-jwt>=1.3.1
 dnspython>=2.7.0
-email-validator>=2.2.0
+email-validator==2.2.0
+sse-starlette==2.2.1
+pydantic>=2.11
+rank-bm25==0.2.2
+anyio==4.9.0
+PyJWT==2.10.1
+mcp>=1.6.0
+websockets>=15.0.1

deploy/docker/schemas.py

@@ -0,0 +1,42 @@
from typing import List, Optional, Dict
from pydantic import BaseModel, Field
from utils import FilterType

class CrawlRequest(BaseModel):
    urls: List[str] = Field(min_length=1, max_length=100)
    browser_config: Optional[Dict] = Field(default_factory=dict)
    crawler_config: Optional[Dict] = Field(default_factory=dict)

class MarkdownRequest(BaseModel):
    """Request body for the /md endpoint."""
    url: str = Field(..., description="Absolute http/https URL to fetch")
    f: FilterType = Field(FilterType.FIT,
                          description="Content-filter strategy: FIT, RAW, BM25, or LLM")
    q: Optional[str] = Field(None, description="Query string used by BM25/LLM filters")
    c: Optional[str] = Field("0", description="Cache-bust / revision counter")

class RawCode(BaseModel):
    code: str

class HTMLRequest(BaseModel):
    url: str

class ScreenshotRequest(BaseModel):
    url: str
    screenshot_wait_for: Optional[float] = 2
    output_path: Optional[str] = None

class PDFRequest(BaseModel):
    url: str
    output_path: Optional[str] = None

class JSEndpointRequest(BaseModel):
    url: str
    scripts: List[str] = Field(
        ...,
        description="List of JavaScript snippets to execute in order",
    )
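For illustration, how these models validate typical payloads (a sketch; it assumes `FilterType` exposes FIT/RAW/BM25/LLM members as referenced in the `/md` field description above):

```python
from schemas import CrawlRequest, MarkdownRequest
from utils import FilterType

# /crawl body: at least one URL, config dicts optional
req = CrawlRequest(urls=["https://example.com"], crawler_config={"stream": False})

# /md body: BM25 filtering needs a query; c is the cache-bust counter
md = MarkdownRequest(url="https://example.com", f=FilterType.BM25, q="pricing", c="0")
print(md.model_dump())
```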


@@ -1,150 +1,449 @@
# ───────────────────────── server.py ─────────────────────────
"""
Crawl4AI FastAPI entrypoint
• Browser pool + global page cap
• Rate-limiting, security, metrics
• /crawl, /crawl/stream, /md, /llm endpoints
"""
# ── stdlib & 3rd-party imports ───────────────────────────────
from crawler_pool import get_crawler, close_all, janitor
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
from auth import create_access_token, get_token_dependency, TokenRequest
from pydantic import BaseModel
from typing import Optional, List, Dict
from fastapi import Request, Depends
from fastapi.responses import FileResponse
import base64
import re
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
from api import (
handle_markdown_request, handle_llm_qa,
handle_stream_crawl_request, handle_crawl_request,
stream_results
)
from schemas import (
CrawlRequest,
MarkdownRequest,
RawCode,
HTMLRequest,
ScreenshotRequest,
PDFRequest,
JSEndpointRequest,
)
from utils import (
FilterType, load_config, setup_logging, verify_email_domain
)
import os
import sys
import time
from typing import List, Optional, Dict
from fastapi import FastAPI, HTTPException, Request, Query, Path, Depends
from fastapi.responses import StreamingResponse, RedirectResponse, PlainTextResponse, JSONResponse
import asyncio
from typing import List
from contextlib import asynccontextmanager
import pathlib
from fastapi import (
FastAPI, HTTPException, Request, Path, Query, Depends
)
from rank_bm25 import BM25Okapi
from fastapi.responses import (
StreamingResponse, RedirectResponse, PlainTextResponse, JSONResponse
)
from fastapi.middleware.httpsredirect import HTTPSRedirectMiddleware
from fastapi.middleware.trustedhost import TrustedHostMiddleware
from fastapi.staticfiles import StaticFiles
from job import init_job_router
from mcp_bridge import attach_mcp, mcp_resource, mcp_template, mcp_tool
import ast
import crawl4ai as _c4
from pydantic import BaseModel, Field
from slowapi import Limiter
from slowapi.util import get_remote_address
from prometheus_fastapi_instrumentator import Instrumentator
from redis import asyncio as aioredis
# ── internal imports (after sys.path append) ─────────────────
sys.path.append(os.path.dirname(os.path.realpath(__file__)))
from utils import FilterType, load_config, setup_logging, verify_email_domain
from api import (
handle_markdown_request,
handle_llm_qa,
handle_stream_crawl_request,
handle_crawl_request,
stream_results
)
from auth import create_access_token, get_token_dependency, TokenRequest # Import from auth.py
__version__ = "0.2.6"
class CrawlRequest(BaseModel):
urls: List[str] = Field(min_length=1, max_length=100)
browser_config: Optional[Dict] = Field(default_factory=dict)
crawler_config: Optional[Dict] = Field(default_factory=dict)
# Load configuration and setup
# ────────────────── configuration / logging ──────────────────
config = load_config()
setup_logging(config)
# Initialize Redis
__version__ = "0.5.1-d1"
# ── global page semaphore (hard cap) ─────────────────────────
MAX_PAGES = config["crawler"]["pool"].get("max_pages", 30)
GLOBAL_SEM = asyncio.Semaphore(MAX_PAGES)
# import logging
# page_log = logging.getLogger("page_cap")
# orig_arun = AsyncWebCrawler.arun
# async def capped_arun(self, *a, **kw):
# await GLOBAL_SEM.acquire() # ← take slot
# try:
# in_flight = MAX_PAGES - GLOBAL_SEM._value # used permits
# page_log.info("🕸️ pages_in_flight=%s / %s", in_flight, MAX_PAGES)
# return await orig_arun(self, *a, **kw)
# finally:
# GLOBAL_SEM.release() # ← free slot
orig_arun = AsyncWebCrawler.arun
async def capped_arun(self, *a, **kw):
async with GLOBAL_SEM:
return await orig_arun(self, *a, **kw)
AsyncWebCrawler.arun = capped_arun
# ───────────────────── FastAPI lifespan ──────────────────────
@asynccontextmanager
async def lifespan(_: FastAPI):
await get_crawler(BrowserConfig(
extra_args=config["crawler"]["browser"].get("extra_args", []),
**config["crawler"]["browser"].get("kwargs", {}),
)) # warmup
app.state.janitor = asyncio.create_task(janitor()) # idle GC
yield
app.state.janitor.cancel()
await close_all()
# ───────────────────── FastAPI instance ──────────────────────
app = FastAPI(
title=config["app"]["title"],
version=config["app"]["version"],
lifespan=lifespan,
)
# ── static playground ──────────────────────────────────────
STATIC_DIR = pathlib.Path(__file__).parent / "static" / "playground"
if not STATIC_DIR.exists():
raise RuntimeError(f"Playground assets not found at {STATIC_DIR}")
app.mount(
"/playground",
StaticFiles(directory=STATIC_DIR, html=True),
name="play",
)
@app.get("/")
async def root():
return RedirectResponse("/playground")
# ─────────────────── infra / middleware ─────────────────────
redis = aioredis.from_url(config["redis"].get("uri", "redis://localhost"))
# Initialize rate limiter
limiter = Limiter(
key_func=get_remote_address,
default_limits=[config["rate_limiting"]["default_limit"]],
storage_uri=config["rate_limiting"]["storage_uri"]
storage_uri=config["rate_limiting"]["storage_uri"],
)
app = FastAPI(
title=config["app"]["title"],
version=config["app"]["version"]
)
# Configure middleware
def setup_security_middleware(app, config):
sec_config = config.get("security", {})
if sec_config.get("enabled", False):
if sec_config.get("https_redirect", False):
app.add_middleware(HTTPSRedirectMiddleware)
if sec_config.get("trusted_hosts", []) != ["*"]:
app.add_middleware(TrustedHostMiddleware, allowed_hosts=sec_config["trusted_hosts"])
def _setup_security(app_: FastAPI):
sec = config["security"]
if not sec["enabled"]:
return
if sec.get("https_redirect"):
app_.add_middleware(HTTPSRedirectMiddleware)
if sec.get("trusted_hosts", []) != ["*"]:
app_.add_middleware(
TrustedHostMiddleware, allowed_hosts=sec["trusted_hosts"]
)
setup_security_middleware(app, config)
# Prometheus instrumentation
_setup_security(app)
if config["observability"]["prometheus"]["enabled"]:
Instrumentator().instrument(app).expose(app)
# Get token dependency based on config
token_dependency = get_token_dependency(config)
token_dep = get_token_dependency(config)
# Middleware for security headers
@app.middleware("http")
async def add_security_headers(request: Request, call_next):
response = await call_next(request)
resp = await call_next(request)
if config["security"]["enabled"]:
response.headers.update(config["security"]["headers"])
return response
resp.headers.update(config["security"]["headers"])
return resp
# Token endpoint (always available, but usage depends on config)
# ───────────────── safe config-dump helper ─────────────────
ALLOWED_TYPES = {
"CrawlerRunConfig": CrawlerRunConfig,
"BrowserConfig": BrowserConfig,
}
def _safe_eval_config(expr: str) -> dict:
"""
Accept exactly one top-level call to CrawlerRunConfig(...) or BrowserConfig(...).
Whatever is inside the parentheses is fine *except* further function calls
(so no __import__('os') stuff). All public names from crawl4ai are available
when we eval.
"""
tree = ast.parse(expr, mode="eval")
# must be a single call
if not isinstance(tree.body, ast.Call):
raise ValueError("Expression must be a single constructor call")
call = tree.body
if not (isinstance(call.func, ast.Name) and call.func.id in {"CrawlerRunConfig", "BrowserConfig"}):
raise ValueError(
"Only CrawlerRunConfig(...) or BrowserConfig(...) are allowed")
# forbid nested calls to keep the surface tiny
for node in ast.walk(call):
if isinstance(node, ast.Call) and node is not call:
raise ValueError("Nested function calls are not permitted")
# expose everything that crawl4ai exports, nothing else
safe_env = {name: getattr(_c4, name)
for name in dir(_c4) if not name.startswith("_")}
obj = eval(compile(tree, "<config>", "eval"),
{"__builtins__": {}}, safe_env)
return obj.dump()
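# For example (illustrative): the guard accepts bare constructor calls and rejects everything else:
#   _safe_eval_config('CrawlerRunConfig(stream=True)')    -> CrawlerRunConfig(...).dump() dict
#   _safe_eval_config('BrowserConfig(headless=False)')    -> BrowserConfig(...).dump() dict
#   _safe_eval_config('__import__("os").getcwd()')        -> ValueError (not a plain constructor call)
#   _safe_eval_config('CrawlerRunConfig(js=open("x"))')   -> ValueError (nested calls forbidden)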
# ── job router ──────────────────────────────────────────────
app.include_router(init_job_router(redis, config, token_dep))
# ──────────────────────── Endpoints ──────────────────────────
@app.post("/token")
async def get_token(request_data: TokenRequest):
if not verify_email_domain(request_data.email):
raise HTTPException(status_code=400, detail="Invalid email domain")
token = create_access_token({"sub": request_data.email})
return {"email": request_data.email, "access_token": token, "token_type": "bearer"}
async def get_token(req: TokenRequest):
if not verify_email_domain(req.email):
raise HTTPException(400, "Invalid email domain")
token = create_access_token({"sub": req.email})
return {"email": req.email, "access_token": token, "token_type": "bearer"}
# Endpoints with conditional auth
@app.get("/md/{url:path}")
@app.post("/config/dump")
async def config_dump(raw: RawCode):
try:
return JSONResponse(_safe_eval_config(raw.code.strip()))
except Exception as e:
raise HTTPException(400, str(e))
@app.post("/md")
@limiter.limit(config["rate_limiting"]["default_limit"])
@mcp_tool("md")
async def get_markdown(
request: Request,
url: str,
f: FilterType = FilterType.FIT,
q: Optional[str] = None,
c: Optional[str] = "0",
token_data: Optional[Dict] = Depends(token_dependency)
body: MarkdownRequest,
_td: Dict = Depends(token_dep),
):
result = await handle_markdown_request(url, f, q, c, config)
return PlainTextResponse(result)
if not body.url.startswith(("http://", "https://")):
raise HTTPException(
400, "URL must be absolute and start with http/https")
markdown = await handle_markdown_request(
body.url, body.f, body.q, body.c, config
)
return JSONResponse({
"url": body.url,
"filter": body.f,
"query": body.q,
"cache": body.c,
"markdown": markdown,
"success": True
})
@app.get("/llm/{url:path}", description="URL should be without http/https prefix")
@app.post("/html")
@limiter.limit(config["rate_limiting"]["default_limit"])
@mcp_tool("html")
async def generate_html(
request: Request,
body: HTMLRequest,
_td: Dict = Depends(token_dep),
):
"""
Crawls the URL, preprocesses the raw HTML for schema extraction, and returns the processed HTML.
Use when you need sanitized HTML structures for building schemas or further processing.
"""
cfg = CrawlerRunConfig()
async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
results = await crawler.arun(url=body.url, config=cfg)
raw_html = results[0].html
from crawl4ai.utils import preprocess_html_for_schema
processed_html = preprocess_html_for_schema(raw_html)
return JSONResponse({"html": processed_html, "url": body.url, "success": True})
# Screenshot endpoint
@app.post("/screenshot")
@limiter.limit(config["rate_limiting"]["default_limit"])
@mcp_tool("screenshot")
async def generate_screenshot(
request: Request,
body: ScreenshotRequest,
_td: Dict = Depends(token_dep),
):
"""
Capture a full-page PNG screenshot of the specified URL, waiting an optional delay before capture.
Use when you need an image snapshot of the rendered page. It is recommended to provide an output path to save the screenshot.
The result will then contain the path to the saved file instead of the screenshot data.
"""
cfg = CrawlerRunConfig(
screenshot=True, screenshot_wait_for=body.screenshot_wait_for)
async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
results = await crawler.arun(url=body.url, config=cfg)
screenshot_data = results[0].screenshot
if body.output_path:
abs_path = os.path.abspath(body.output_path)
os.makedirs(os.path.dirname(abs_path), exist_ok=True)
with open(abs_path, "wb") as f:
f.write(base64.b64decode(screenshot_data))
return {"success": True, "path": abs_path}
return {"success": True, "screenshot": screenshot_data}
# PDF endpoint
@app.post("/pdf")
@limiter.limit(config["rate_limiting"]["default_limit"])
@mcp_tool("pdf")
async def generate_pdf(
request: Request,
body: PDFRequest,
_td: Dict = Depends(token_dep),
):
"""
Generate a PDF document of the specified URL.
Use when you need a printable or archivable snapshot of the page. It is recommended to provide an output path to save the PDF.
The result will then contain the path to the saved file instead of the PDF data.
"""
cfg = CrawlerRunConfig(pdf=True)
async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
results = await crawler.arun(url=body.url, config=cfg)
pdf_data = results[0].pdf
if body.output_path:
abs_path = os.path.abspath(body.output_path)
os.makedirs(os.path.dirname(abs_path), exist_ok=True)
with open(abs_path, "wb") as f:
f.write(pdf_data)
return {"success": True, "path": abs_path}
return {"success": True, "pdf": base64.b64encode(pdf_data).decode()}
@app.post("/execute_js")
@limiter.limit(config["rate_limiting"]["default_limit"])
@mcp_tool("execute_js")
async def execute_js(
request: Request,
body: JSEndpointRequest,
_td: Dict = Depends(token_dep),
):
"""
Execute a sequence of JavaScript snippets on the specified URL.
Return the full CrawlResult JSON (first result).
Use this when you need to interact with dynamic pages using JS.
REMEMBER: the endpoint accepts a list of JS snippets and executes them in order.
IMPORTANT: each script should be an expression that returns a value; an IIFE or a sync/async function works.
Each script is substituted for '{script}' and executed in the browser context, so make sure it evaluates to a value.
Return Format:
- The result is an instance of CrawlResult, so you have access to markdown, links, and more. If that is enough, you don't need to call the other endpoints.
```python
class CrawlResult(BaseModel):
url: str
html: str
success: bool
cleaned_html: Optional[str] = None
media: Dict[str, List[Dict]] = {}
links: Dict[str, List[Dict]] = {}
downloaded_files: Optional[List[str]] = None
js_execution_result: Optional[Dict[str, Any]] = None
screenshot: Optional[str] = None
pdf: Optional[bytes] = None
mhtml: Optional[str] = None
_markdown: Optional[MarkdownGenerationResult] = PrivateAttr(default=None)
extracted_content: Optional[str] = None
metadata: Optional[dict] = None
error_message: Optional[str] = None
session_id: Optional[str] = None
response_headers: Optional[dict] = None
status_code: Optional[int] = None
ssl_certificate: Optional[SSLCertificate] = None
dispatch_result: Optional[DispatchResult] = None
redirected_url: Optional[str] = None
network_requests: Optional[List[Dict[str, Any]]] = None
console_messages: Optional[List[Dict[str, Any]]] = None
class MarkdownGenerationResult(BaseModel):
raw_markdown: str
markdown_with_citations: str
references_markdown: str
fit_markdown: Optional[str] = None
fit_html: Optional[str] = None
```
"""
cfg = CrawlerRunConfig(js_code=body.scripts)
async with AsyncWebCrawler(config=BrowserConfig()) as crawler:
results = await crawler.arun(url=body.url, config=cfg)
# Return JSON-serializable dict of the first CrawlResult
data = results[0].model_dump()
return JSONResponse(data)
@app.get("/llm/{url:path}")
async def llm_endpoint(
request: Request,
url: str = Path(...),
q: Optional[str] = Query(None),
token_data: Optional[Dict] = Depends(token_dependency)
q: str = Query(...),
_td: Dict = Depends(token_dep),
):
if not q:
raise HTTPException(status_code=400, detail="Query parameter 'q' is required")
if not url.startswith(('http://', 'https://')):
url = 'https://' + url
try:
answer = await handle_llm_qa(url, q, config)
return JSONResponse({"answer": answer})
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
raise HTTPException(400, "Query parameter 'q' is required")
if not url.startswith(("http://", "https://")):
url = "https://" + url
answer = await handle_llm_qa(url, q, config)
return JSONResponse({"answer": answer})
@app.get("/schema")
async def get_schema():
from crawl4ai import BrowserConfig, CrawlerRunConfig
return {"browser": BrowserConfig().dump(), "crawler": CrawlerRunConfig().dump()}
return {"browser": BrowserConfig().dump(),
"crawler": CrawlerRunConfig().dump()}
@app.get(config["observability"]["health_check"]["endpoint"])
async def health():
return {"status": "ok", "timestamp": time.time(), "version": __version__}
@app.get(config["observability"]["prometheus"]["endpoint"])
async def metrics():
return RedirectResponse(url=config["observability"]["prometheus"]["endpoint"])
return RedirectResponse(config["observability"]["prometheus"]["endpoint"])
@app.post("/crawl")
@limiter.limit(config["rate_limiting"]["default_limit"])
@mcp_tool("crawl")
async def crawl(
request: Request,
crawl_request: CrawlRequest,
token_data: Optional[Dict] = Depends(token_dependency)
_td: Dict = Depends(token_dep),
):
"""
Crawl a list of URLs and return the results as JSON.
"""
if not crawl_request.urls:
raise HTTPException(status_code=400, detail="At least one URL required")
results = await handle_crawl_request(
raise HTTPException(400, "At least one URL required")
res = await handle_crawl_request(
urls=crawl_request.urls,
browser_config=crawl_request.browser_config,
crawler_config=crawl_request.crawler_config,
config=config
config=config,
)
return JSONResponse(results)
return JSONResponse(res)
@app.post("/crawl/stream")
@@ -152,24 +451,161 @@ async def crawl(
async def crawl_stream(
request: Request,
crawl_request: CrawlRequest,
token_data: Optional[Dict] = Depends(token_dependency)
_td: Dict = Depends(token_dep),
):
if not crawl_request.urls:
raise HTTPException(status_code=400, detail="At least one URL required")
crawler, results_gen = await handle_stream_crawl_request(
raise HTTPException(400, "At least one URL required")
crawler, gen = await handle_stream_crawl_request(
urls=crawl_request.urls,
browser_config=crawl_request.browser_config,
crawler_config=crawl_request.crawler_config,
config=config
config=config,
)
return StreamingResponse(
stream_results(crawler, results_gen),
media_type='application/x-ndjson',
headers={'Cache-Control': 'no-cache', 'Connection': 'keep-alive', 'X-Stream-Status': 'active'}
stream_results(crawler, gen),
media_type="application/x-ndjson",
headers={
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"X-Stream-Status": "active",
},
)
def chunk_code_functions(code_md: str) -> List[str]:
"""Extract each function/class from markdown code blocks per file."""
pattern = re.compile(
# match "## File: <path>" then a ```py fence, then capture until the closing ```
r'##\s*File:\s*(?P<path>.+?)\s*?\r?\n' # file header
r'```py\s*?\r?\n' # opening fence
r'(?P<code>.*?)(?=\r?\n```)', # code block
re.DOTALL
)
chunks: List[str] = []
for m in pattern.finditer(code_md):
file_path = m.group("path").strip()
code_blk = m.group("code")
tree = ast.parse(code_blk)
lines = code_blk.splitlines()
for node in tree.body:
if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
start = node.lineno - 1
end = getattr(node, "end_lineno", start + 1)
snippet = "\n".join(lines[start:end])
chunks.append(f"# File: {file_path}\n{snippet}")
return chunks
def chunk_doc_sections(doc: str) -> List[str]:
lines = doc.splitlines(keepends=True)
sections = []
current: List[str] = []
for line in lines:
if re.match(r"^#{1,6}\s", line):
if current:
sections.append("".join(current))
current = [line]
else:
current.append(line)
if current:
sections.append("".join(current))
return sections
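# For example (illustrative): chunk_doc_sections("# A\ntext\n## B\nmore\n")
# yields ["# A\ntext\n", "## B\nmore\n"]: one chunk per markdown heading,
# which is what the BM25 scoring in /ask below ranks against the query tokens.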
@app.get("/ask")
@limiter.limit(config["rate_limiting"]["default_limit"])
@mcp_tool("ask")
async def get_context(
request: Request,
_td: Dict = Depends(token_dep),
context_type: str = Query("all", regex="^(code|doc|all)$"),
query: Optional[str] = Query(
None, description="search query to filter chunks"),
score_ratio: float = Query(
0.5, ge=0.0, le=1.0, description="min score as fraction of max_score"),
max_results: int = Query(
20, ge=1, description="absolute cap on returned chunks"),
):
"""
This endpoint is designed for questions about the Crawl4ai library. It returns plain-text markdown with extensive information about Crawl4ai.
You can use this as context for any AI assistant; use it to retrieve library context for decision-making or code-generation tasks.
It is BEST practice to always provide a query to filter the context; otherwise the response will be very long.
Parameters:
- context_type: Specify "code" for code context, "doc" for documentation context, or "all" for both.
- query: RECOMMENDED search query to filter paragraphs using BM25. You can leave this empty to get all the context.
- score_ratio: Minimum score as a fraction of the maximum score for filtering results.
- max_results: Maximum number of results to return. Default is 20.
Returns:
- JSON response with the requested context.
- If "code" is specified, returns the code context.
- If "doc" is specified, returns the documentation context.
- If "all" is specified, returns both code and documentation contexts.
"""
# load contexts
base = os.path.dirname(__file__)
code_path = os.path.join(base, "c4ai-code-context.md")
doc_path = os.path.join(base, "c4ai-doc-context.md")
if not os.path.exists(code_path) or not os.path.exists(doc_path):
raise HTTPException(404, "Context files not found")
with open(code_path, "r") as f:
code_content = f.read()
with open(doc_path, "r") as f:
doc_content = f.read()
# if no query, just return raw contexts
if not query:
if context_type == "code":
return JSONResponse({"code_context": code_content})
if context_type == "doc":
return JSONResponse({"doc_context": doc_content})
return JSONResponse({
"code_context": code_content,
"doc_context": doc_content,
})
tokens = query.split()
results: Dict[str, List[Dict[str, float]]] = {}
# code BM25 over functions/classes
if context_type in ("code", "all"):
code_chunks = chunk_code_functions(code_content)
bm25 = BM25Okapi([c.split() for c in code_chunks])
scores = bm25.get_scores(tokens)
max_sc = float(scores.max()) if scores.size > 0 else 0.0
cutoff = max_sc * score_ratio
picked = [(c, s) for c, s in zip(code_chunks, scores) if s >= cutoff]
picked = sorted(picked, key=lambda x: x[1], reverse=True)[:max_results]
results["code_results"] = [{"text": c, "score": s} for c, s in picked]
# doc BM25 over markdown sections
if context_type in ("doc", "all"):
sections = chunk_doc_sections(doc_content)
bm25d = BM25Okapi([sec.split() for sec in sections])
scores_d = bm25d.get_scores(tokens)
max_sd = float(scores_d.max()) if scores_d.size > 0 else 0.0
cutoff_d = max_sd * score_ratio
idxs = [i for i, s in enumerate(scores_d) if s >= cutoff_d]
neighbors = set(i for idx in idxs for i in (idx-1, idx, idx+1))
valid = [i for i in sorted(neighbors) if 0 <= i < len(sections)]
valid = valid[:max_results]
results["doc_results"] = [
{"text": sections[i], "score": scores_d[i]} for i in valid
]
return JSONResponse(results)
# attach MCP layer (adds /mcp/ws, /mcp/sse, /mcp/schema)
print(f"MCP server running on {config['app']['host']}:{config['app']['port']}")
attach_mcp(
app,
base_url=f"http://{config['app']['host']}:{config['app']['port']}"
)
# ────────────────────────── cli ──────────────────────────────
if __name__ == "__main__":
import uvicorn
uvicorn.run(
@@ -177,5 +613,6 @@ if __name__ == "__main__":
host=config["app"]["host"],
port=config["app"]["port"],
reload=config["app"]["reload"],
timeout_keep_alive=config["app"]["timeout_keep_alive"]
)
timeout_keep_alive=config["app"]["timeout_keep_alive"],
)
# ─────────────────────────────────────────────────────────────
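Note that `/md` moved from `GET /md/{url:path}` to a POST with a `MarkdownRequest` body. A hedged client sketch against the new shape (localhost:11234 assumed from the updated config.yml):

```python
import httpx

resp = httpx.post(
    "http://localhost:11234/md",
    json={
        "url": "https://example.com",  # must be absolute http(s); see the 400 guard above
        "f": "fit",                    # filter: fit | raw | bm25 | llm
        "q": None,                     # query, used only by the bm25/llm filters
        "c": "0",                      # cache-bust / revision counter
    },
)
print(resp.json()["markdown"])
```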


@@ -0,0 +1,955 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Crawl4AI Playground</title>
<script src="https://cdn.tailwindcss.com"></script>
<script>
tailwind.config = {
theme: {
extend: {
colors: {
primary: '#4EFFFF',
primarydim: '#09b5a5',
accent: '#F380F5',
dark: '#070708',
light: '#E8E9ED',
secondary: '#D5CEBF',
codebg: '#1E1E1E',
surface: '#202020',
border: '#3F3F44',
},
fontFamily: {
mono: ['Fira Code', 'monospace'],
},
}
}
}
</script>
<link href="https://fonts.googleapis.com/css2?family=Fira+Code:wght@400;500&display=swap" rel="stylesheet">
<!-- Highlight.js -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/github-dark.min.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/clipboard.js/2.0.11/clipboard.min.js"></script>
<!-- CodeMirror (python mode) -->
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.65.16/codemirror.min.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.65.16/codemirror.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.65.16/mode/python/python.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.65.16/addon/edit/matchbrackets.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.65.16/addon/selection/active-line.min.js"></script>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.65.16/theme/darcula.min.css">
<!-- <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/languages/python.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/languages/bash.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/languages/json.min.js"></script> -->
<style>
/* Custom CodeMirror styling to match theme */
.CodeMirror {
background-color: #1E1E1E !important;
color: #E8E9ED !important;
border-radius: 4px;
font-family: 'Fira Code', monospace;
font-size: 0.9rem;
}
.CodeMirror-gutters {
background-color: #1E1E1E !important;
border-right: 1px solid #3F3F44 !important;
}
.CodeMirror-linenumber {
color: #3F3F44 !important;
}
.cm-s-darcula .cm-keyword {
color: #4EFFFF !important;
}
.cm-s-darcula .cm-string {
color: #F380F5 !important;
}
.cm-s-darcula .cm-number {
color: #D5CEBF !important;
}
/* Add to your <style> section or Tailwind config */
.hljs {
background: #1E1E1E !important;
border-radius: 4px;
padding: 1rem !important;
}
pre code.hljs {
display: block;
overflow-x: auto;
}
/* Language-specific colors */
.hljs-attr {
color: #4EFFFF;
}
/* JSON keys */
.hljs-string {
color: #F380F5;
}
/* Strings */
.hljs-number {
color: #D5CEBF;
}
/* Numbers */
.hljs-keyword {
color: #4EFFFF;
}
pre code {
white-space: pre-wrap;
word-break: break-word;
}
.copy-btn {
transition: all 0.2s ease;
opacity: 0.7;
}
.copy-btn:hover {
opacity: 1;
}
.tab-content:hover .copy-btn {
opacity: 0.7;
}
.tab-content:hover .copy-btn:hover {
opacity: 1;
}
/* copied text highlighted */
.highlighted {
background-color: rgba(78, 255, 255, 0.2) !important;
transition: background-color 0.5s ease;
}
</style>
</head>
<body class="bg-dark text-light font-mono min-h-screen flex flex-col" style="font-feature-settings: 'calt' 0;">
<!-- Header -->
<header class="border-b border-border px-4 py-2 flex items-center">
<h1 class="text-lg font-medium flex items-center space-x-4">
<span>🚀🤖 <span class="text-primary">Crawl4AI</span> Playground</span>
<!-- GitHub badges -->
<a href="https://github.com/unclecode/crawl4ai" target="_blank" class="flex space-x-1">
<img src="https://img.shields.io/github/stars/unclecode/crawl4ai?style=social"
alt="GitHub stars" class="h-5">
<img src="https://img.shields.io/github/forks/unclecode/crawl4ai?style=social"
alt="GitHub forks" class="h-5">
</a>
<!-- Docs -->
<a href="https://docs.crawl4ai.com" target="_blank"
class="text-xs text-secondary hover:text-primary underline flex items-center">
Docs
</a>
<!-- X (Twitter) follow -->
<a href="https://x.com/unclecode" target="_blank"
class="hover:text-primary flex items-center" title="Follow @unclecode on X">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"
class="w-4 h-4 fill-current mr-1">
<path d="M22.46 6c-.77.35-1.6.58-2.46.69a4.27 4.27 0 001.88-2.35 8.53 8.53 0 01-2.71 1.04 4.24 4.24 0 00-7.23 3.87A12.05 12.05 0 013 4.62a4.24 4.24 0 001.31 5.65 4.2 4.2 0 01-1.92-.53v.05a4.24 4.24 0 003.4 4.16 4.31 4.31 0 01-1.91.07 4.25 4.25 0 003.96 2.95A8.5 8.5 0 012 19.55a12.04 12.04 0 006.53 1.92c7.84 0 12.13-6.49 12.13-12.13 0-.18-.01-.36-.02-.54A8.63 8.63 0 0024 5.1a8.45 8.45 0 01-2.54.7z"/>
</svg>
<span class="text-xs">@unclecode</span>
</a>
</h1>
<div class="ml-auto flex space-x-2">
<button id="play-tab"
class="px-3 py-1 rounded-t bg-surface border border-b-0 border-border text-primary">Playground</button>
<button id="stress-tab" class="px-3 py-1 rounded-t border border-border hover:bg-surface">Stress
Test</button>
</div>
</header>
<!-- Main Playground -->
<main id="playground" class="flex-1 flex flex-col p-4 space-y-4 max-w-5xl w-full mx-auto">
<!-- Request Builder -->
<section class="bg-surface rounded-lg border border-border overflow-hidden">
<div class="px-4 py-2 border-b border-border flex items-center">
<h2 class="font-medium">Request Builder</h2>
<select id="endpoint" class="ml-auto bg-dark border border-border rounded px-2 py-1 text-sm">
<option value="crawl">/crawl (batch)</option>
<option value="crawl_stream">/crawl/stream</option>
<option value="md">/md</option>
<option value="llm">/llm</option>
</select>
</div>
<div class="p-4">
<label class="block mb-2 text-sm">URL(s) - one per line</label>
<textarea id="urls" class="w-full bg-dark border border-border rounded p-2 h-32 text-sm mb-4"
spellcheck="false">https://example.com</textarea>
<!-- Specific options for /md endpoint -->
<details id="md-options" class="mb-4 hidden">
<summary class="text-sm text-secondary cursor-pointer">/md Options</summary>
<div class="mt-2 space-y-3 p-2 border border-border rounded">
<div>
<label for="md-filter" class="block text-xs text-secondary mb-1">Filter Type</label>
<select id="md-filter" class="bg-dark border border-border rounded px-2 py-1 text-sm w-full">
<option value="fit">fit - Adaptive content filtering</option>
<option value="raw">raw - No filtering</option>
<option value="bm25">bm25 - BM25 keyword relevance</option>
<option value="llm">llm - LLM-based filtering</option>
</select>
</div>
<div>
<label for="md-query" class="block text-xs text-secondary mb-1">Query (for BM25/LLM filters)</label>
<input id="md-query" type="text" placeholder="Enter search terms or instructions"
class="bg-dark border border-border rounded px-2 py-1 text-sm w-full">
</div>
<div>
<label for="md-cache" class="block text-xs text-secondary mb-1">Cache Mode</label>
<select id="md-cache" class="bg-dark border border-border rounded px-2 py-1 text-sm w-full">
<option value="0">Write-Only (0)</option>
<option value="1">Enabled (1)</option>
</select>
</div>
</div>
</details>
<!-- Specific options for /llm endpoint -->
<details id="llm-options" class="mb-4 hidden">
<summary class="text-sm text-secondary cursor-pointer">/llm Options</summary>
<div class="mt-2 space-y-3 p-2 border border-border rounded">
<div>
<label for="llm-question" class="block text-xs text-secondary mb-1">Question</label>
<input id="llm-question" type="text" value="What is this page about?"
class="bg-dark border border-border rounded px-2 py-1 text-sm w-full">
</div>
</div>
</details>
<!-- Advanced config for /crawl endpoints -->
<details id="adv-config" class="mb-4">
<summary class="text-sm text-secondary cursor-pointer">Advanced Config <span
class="text-xs text-primary">(Python → autoJSON)</span></summary>
<!-- Toolbar -->
<div class="flex items-center justify-end space-x-3 mt-2">
<label for="cfg-type" class="text-xs text-secondary">Type:</label>
<select id="cfg-type"
class="bg-dark border border-border rounded px-1 py-0.5 text-xs">
<option value="CrawlerRunConfig">CrawlerRunConfig</option>
<option value="BrowserConfig">BrowserConfig</option>
</select>
<!-- help link -->
<a href="https://docs.crawl4ai.com/api/parameters/"
target="_blank"
class="text-xs text-primary hover:underline flex items-center space-x-1"
title="Open parameter reference in new tab">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"
class="w-4 h-4 fill-current">
<path d="M13 3h8v8h-2V6.41l-9.29 9.3-1.42-1.42 9.3-9.29H13V3z"/>
<path d="M5 5h4V3H3v6h2V5zm0 14v-4H3v6h6v-2H5z"/>
</svg>
<span>Docs</span>
</a>
<span id="cfg-status" class="text-xs text-secondary ml-2"></span>
</div>
<!-- CodeMirror host -->
<div id="adv-editor" class="mt-2 border border-border rounded overflow-hidden h-40"></div>
</details>
<div class="flex space-x-2">
<button id="run-btn" class="bg-primary text-dark px-4 py-2 rounded hover:bg-primarydim font-medium">
Run (⌘/Ctrl+Enter)
</button>
<button id="export-btn" class="border border-border px-4 py-2 rounded hover:bg-surface hidden">
Export Python Code
</button>
</div>
</div>
</section>
<!-- Execution Status -->
<section id="execution-status" class="hidden bg-surface rounded-lg border border-border p-3 text-sm">
<div class="flex space-x-4">
<div id="status-badge" class="flex items-center">
<span class="w-3 h-3 rounded-full mr-2"></span>
<span>Ready</span>
</div>
<div>
<span class="text-secondary">Time:</span>
<span id="exec-time" class="text-light">-</span>
</div>
<div>
<span class="text-secondary">Memory:</span>
<span id="exec-mem" class="text-light">-</span>
</div>
</div>
</section>
<!-- Response Viewer -->
<!-- Update the Response Viewer section -->
<section class="bg-surface rounded-lg border border-border overflow-hidden flex-1 flex flex-col">
<div class="border-b border-border flex">
<button data-tab="response" class="tab-btn active px-4 py-2 border-r border-border">Response</button>
<button data-tab="python" class="tab-btn px-4 py-2 border-r border-border">Python</button>
<button data-tab="curl" class="tab-btn px-4 py-2">cURL</button>
</div>
<div class="flex-1 overflow-auto relative">
<!-- Response Tab -->
<div class="tab-content active h-full">
<div class="absolute right-2 top-2">
<button class="copy-btn bg-surface border border-border rounded px-2 py-1 text-xs hover:bg-dark"
data-target="#response-content code">
Copy
</button>
</div>
<pre id="response-content" class="p-4 text-sm h-full"><code class="json hljs">{}</code></pre>
</div>
<!-- Python Tab -->
<div class="tab-content hidden h-full">
<div class="absolute right-2 top-2">
<button class="copy-btn bg-surface border border-border rounded px-2 py-1 text-xs hover:bg-dark"
data-target="#python-content code">
Copy
</button>
</div>
<pre id="python-content" class="p-4 text-sm h-full"><code class="python hljs"></code></pre>
</div>
<!-- cURL Tab -->
<div class="tab-content hidden h-full">
<div class="absolute right-2 top-2">
<button class="copy-btn bg-surface border border-border rounded px-2 py-1 text-xs hover:bg-dark"
data-target="#curl-content code">
Copy
</button>
</div>
<pre id="curl-content" class="p-4 text-sm h-full"><code class="bash hljs"></code></pre>
</div>
</div>
</section>
</main>
<!-- Stress Test Modal -->
<div id="stress-modal"
class="hidden fixed inset-0 bg-black bg-opacity-70 z-50 flex items-center justify-center p-4">
<div class="bg-surface rounded-lg border border-accent w-full max-w-3xl max-h-[90vh] flex flex-col">
<div class="px-4 py-2 border-b border-border flex items-center">
<h2 class="font-medium text-accent">🔥 Stress Test</h2>
<button id="close-stress" class="ml-auto text-secondary hover:text-light">&times;</button>
</div>
<div class="p-4 space-y-4 flex-1 overflow-auto">
<div class="grid grid-cols-3 gap-4">
<div>
<label class="block text-sm mb-1">Total URLs</label>
<input id="st-total" type="number" value="20"
class="w-full bg-dark border border-border rounded px-3 py-1">
</div>
<div>
<label class="block text-sm mb-1">Chunk Size</label>
<input id="st-chunk" type="number" value="5"
class="w-full bg-dark border border-border rounded px-3 py-1">
</div>
<div>
<label class="block text-sm mb-1">Concurrency</label>
<input id="st-conc" type="number" value="2"
class="w-full bg-dark border border-border rounded px-3 py-1">
</div>
</div>
<div class="flex items-center">
<input id="st-stream" type="checkbox" class="mr-2">
<label for="st-stream" class="text-sm">Use /crawl/stream</label>
<button id="st-run"
class="ml-auto bg-accent text-dark px-4 py-2 rounded hover:bg-opacity-90 font-medium">
Run Stress Test
</button>
</div>
<div class="mt-4">
<div class="bg-dark rounded border border-border p-3 h-64 overflow-auto text-sm whitespace-break-spaces"
id="stress-log"></div>
</div>
</div>
<div class="px-4 py-2 border-t border-border text-sm text-secondary">
<div class="flex justify-between">
<span>Completed: <span id="stress-completed">0</span>/<span id="stress-total">0</span></span>
<span>Avg. Time: <span id="stress-avg-time">0</span>ms</span>
<span>Peak Memory: <span id="stress-peak-mem">0</span>MB</span>
</div>
</div>
</div>
</div>
<script>
// Tab switching
document.querySelectorAll('.tab-btn').forEach(btn => {
btn.addEventListener('click', () => {
document.querySelectorAll('.tab-btn').forEach(b => b.classList.remove('active'));
document.querySelectorAll('.tab-content').forEach(c => c.classList.add('hidden'));
btn.classList.add('active');
const tabName = btn.dataset.tab;
document.querySelector(`#${tabName}-content`).parentElement.classList.remove('hidden');
// Re-highlight content when switching tabs
const activeCode = document.querySelector(`#${tabName}-content code`);
if (activeCode) {
forceHighlightElement(activeCode);
}
});
});
// View switching
document.getElementById('play-tab').addEventListener('click', () => {
document.getElementById('playground').classList.remove('hidden');
document.getElementById('stress-modal').classList.add('hidden');
document.getElementById('play-tab').classList.add('bg-surface', 'border-b-0');
document.getElementById('stress-tab').classList.remove('bg-surface', 'border-b-0');
});
document.getElementById('stress-tab').addEventListener('click', () => {
document.getElementById('stress-modal').classList.remove('hidden');
document.getElementById('stress-tab').classList.add('bg-surface', 'border-b-0');
document.getElementById('play-tab').classList.remove('bg-surface', 'border-b-0');
});
document.getElementById('close-stress').addEventListener('click', () => {
document.getElementById('stress-modal').classList.add('hidden');
document.getElementById('play-tab').classList.add('bg-surface', 'border-b-0');
document.getElementById('stress-tab').classList.remove('bg-surface', 'border-b-0');
});
// Initialize clipboard and highlight.js
new ClipboardJS('#export-btn');
hljs.highlightAll();
// Keyboard shortcut
window.addEventListener('keydown', e => {
if ((e.ctrlKey || e.metaKey) && e.key === 'Enter') {
document.getElementById('run-btn').click();
}
});
// ================ ADVANCED CONFIG EDITOR ================
const cm = CodeMirror(document.getElementById('adv-editor'), {
value: `CrawlerRunConfig(
stream=True,
cache_mode=CacheMode.BYPASS,
)`,
mode: 'python',
lineNumbers: true,
theme: 'darcula',
tabSize: 4,
styleActiveLine: true,
matchBrackets: true,
gutters: ["CodeMirror-linenumbers"],
lineWrapping: true,
});
const TEMPLATES = {
CrawlerRunConfig: `CrawlerRunConfig(
stream=True,
cache_mode=CacheMode.BYPASS,
)`,
BrowserConfig: `BrowserConfig(
headless=True,
extra_args=[
"--no-sandbox",
"--disable-gpu",
],
)`,
};
document.getElementById('cfg-type').addEventListener('change', (e) => {
cm.setValue(TEMPLATES[e.target.value]);
document.getElementById('cfg-status').textContent = '';
});
// Handle endpoint selection change to show appropriate options
document.getElementById('endpoint').addEventListener('change', function(e) {
const endpoint = e.target.value;
const mdOptions = document.getElementById('md-options');
const llmOptions = document.getElementById('llm-options');
const advConfig = document.getElementById('adv-config');
// Hide all option sections first
mdOptions.classList.add('hidden');
llmOptions.classList.add('hidden');
advConfig.classList.add('hidden');
// Show the appropriate section based on endpoint
if (endpoint === 'md') {
mdOptions.classList.remove('hidden');
// Auto-open the /md options
mdOptions.setAttribute('open', '');
} else if (endpoint === 'llm') {
llmOptions.classList.remove('hidden');
// Auto-open the /llm options
llmOptions.setAttribute('open', '');
} else {
// For /crawl endpoints, show the advanced config
advConfig.classList.remove('hidden');
}
});
async function pyConfigToJson() {
const code = cm.getValue().trim();
if (!code) return {};
const res = await fetch('/config/dump', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ code }),
});
const statusEl = document.getElementById('cfg-status');
if (!res.ok) {
const msg = await res.text();
statusEl.textContent = '✖ config error';
statusEl.className = 'text-xs text-red-400';
throw new Error(msg || 'Invalid config');
}
statusEl.textContent = '✓ parsed';
statusEl.className = 'text-xs text-green-400';
return await res.json();
}
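// Illustrative: a successful round-trip turns the Python snippet
//   CrawlerRunConfig(stream=True, cache_mode=CacheMode.BYPASS)
// into a JSON dict (exact shape depends on crawl4ai's dump()), which runCrawl()
// below nests under crawler_config or browser_config before POSTing.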
// ================ SERVER COMMUNICATION ================
// Update status UI
function updateStatus(status, time, memory, peakMemory) {
const statusEl = document.getElementById('execution-status');
const badgeEl = document.querySelector('#status-badge span:first-child');
const textEl = document.querySelector('#status-badge span:last-child');
statusEl.classList.remove('hidden');
badgeEl.className = 'w-3 h-3 rounded-full mr-2';
if (status === 'success') {
badgeEl.classList.add('bg-green-500');
textEl.textContent = 'Success';
} else if (status === 'error') {
badgeEl.classList.add('bg-red-500');
textEl.textContent = 'Error';
} else {
badgeEl.classList.add('bg-yellow-500');
textEl.textContent = 'Processing...';
}
if (time) {
document.getElementById('exec-time').textContent = `${time}ms`;
}
if (memory !== undefined && peakMemory !== undefined) {
document.getElementById('exec-mem').textContent = `Δ${memory >= 0 ? '+' : ''}${memory}MB (Peak: ${peakMemory}MB)`;
}
}
// Generate code snippets
function generateSnippets(api, payload, method = 'POST') {
// Python snippet
const pyCodeEl = document.querySelector('#python-content code');
let pySnippet;
if (method === 'GET') {
// GET request (for /llm endpoint)
pySnippet = `import httpx\n\nasync def crawl():\n async with httpx.AsyncClient() as client:\n response = await client.get(\n "${window.location.origin}${api}"\n )\n return response.json()`;
} else {
// POST request (for /crawl and /md endpoints)
pySnippet = `import httpx\n\nasync def crawl():\n async with httpx.AsyncClient() as client:\n response = await client.post(\n "${window.location.origin}${api}",\n json=${JSON.stringify(payload, null, 4).replace(/\n/g, '\n ')}\n )\n return response.json()`;
}
pyCodeEl.textContent = pySnippet;
pyCodeEl.className = 'python hljs'; // Reset classes
forceHighlightElement(pyCodeEl);
// cURL snippet
const curlCodeEl = document.querySelector('#curl-content code');
let curlSnippet;
if (method === 'GET') {
// GET request (for /llm endpoint)
curlSnippet = `curl -X GET "${window.location.origin}${api}"`;
} else {
// POST request (for /crawl and /md endpoints)
curlSnippet = `curl -X POST ${window.location.origin}${api} \\\n -H "Content-Type: application/json" \\\n -d '${JSON.stringify(payload)}'`;
}
curlCodeEl.textContent = curlSnippet;
curlCodeEl.className = 'bash hljs'; // Reset classes
forceHighlightElement(curlCodeEl);
}
// Main run function
async function runCrawl() {
const endpoint = document.getElementById('endpoint').value;
const urls = document.getElementById('urls').value.trim().split(/\n/).filter(u => u);
// 1) grab python from CodeMirror, validate via /config/dump
let advConfig = {};
try {
const cfgJson = await pyConfigToJson(); // may throw
if (Object.keys(cfgJson).length) {
const cfgType = document.getElementById('cfg-type').value;
advConfig = cfgType === 'CrawlerRunConfig'
? { crawler_config: cfgJson }
: { browser_config: cfgJson };
}
} catch (err) {
updateStatus('error');
document.querySelector('#response-content code').textContent =
JSON.stringify({ error: err.message }, null, 2);
forceHighlightElement(document.querySelector('#response-content code'));
return; // stop run
}
const endpointMap = {
crawl: '/crawl',
crawl_stream: '/crawl/stream',
md: '/md',
llm: '/llm'
};
const api = endpointMap[endpoint];
let payload;
// Create appropriate payload based on endpoint type
if (endpoint === 'md') {
// Get values from the /md specific inputs
const filterType = document.getElementById('md-filter').value;
const query = document.getElementById('md-query').value.trim();
const cache = document.getElementById('md-cache').value;
// MD endpoint expects: { url, f, q, c }
payload = {
url: urls[0], // Take first URL
f: filterType, // Lowercase filter type as required by server
q: query || null, // Use the query if provided, otherwise null
c: cache
};
} else if (endpoint === 'llm') {
// LLM endpoint has a different URL pattern and uses query params
// This will be handled directly in the fetch below
payload = null;
} else {
// Default payload for /crawl and /crawl/stream
payload = {
urls,
...advConfig
};
}
updateStatus('processing');
try {
const startTime = performance.now();
let response, responseData;
if (endpoint === 'llm') {
// Special handling for LLM endpoint which uses URL pattern: /llm/{encoded_url}?q={query}
const url = urls[0];
const encodedUrl = encodeURIComponent(url);
// Get the question from the LLM-specific input
const question = document.getElementById('llm-question').value.trim() || "What is this page about?";
response = await fetch(`${api}/${encodedUrl}?q=${encodeURIComponent(question)}`, {
method: 'GET',
headers: { 'Accept': 'application/json' }
});
} else if (endpoint === 'crawl_stream') {
// Stream processing
response = await fetch(api, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload)
});
        const reader = response.body.getReader();
        const decoder = new TextDecoder(); // single decoder so multi-byte characters can span chunks
        let text = '';
        let maxMemory = 0;
while (true) {
const { value, done } = await reader.read();
if (done) break;
          const chunk = decoder.decode(value, { stream: true });
text += chunk;
// Process each line for memory updates
chunk.trim().split('\n').forEach(line => {
if (!line) return;
try {
const obj = JSON.parse(line);
if (obj.server_memory_mb) {
maxMemory = Math.max(maxMemory, obj.server_memory_mb);
}
} catch (e) {
console.error('Error parsing stream line:', e);
}
});
}
responseData = { stream: text };
const time = Math.round(performance.now() - startTime);
updateStatus('success', time, null, maxMemory);
document.querySelector('#response-content code').textContent = text;
document.querySelector('#response-content code').className = 'json hljs'; // Reset classes
forceHighlightElement(document.querySelector('#response-content code'));
} else {
// Regular request (handles /crawl and /md)
response = await fetch(api, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload)
});
responseData = await response.json();
const time = Math.round(performance.now() - startTime);
if (!response.ok) {
updateStatus('error', time);
throw new Error(responseData.error || 'Request failed');
}
updateStatus(
'success',
time,
responseData.server_memory_delta_mb,
responseData.server_peak_memory_mb
);
document.querySelector('#response-content code').textContent = JSON.stringify(responseData, null, 2);
document.querySelector('#response-content code').className = 'json hljs'; // Ensure class is set
forceHighlightElement(document.querySelector('#response-content code'));
}
forceHighlightElement(document.querySelector('#response-content code'));
// For generateSnippets, handle the LLM case specially
if (endpoint === 'llm') {
const url = urls[0];
const encodedUrl = encodeURIComponent(url);
const question = document.getElementById('llm-question').value.trim() || "What is this page about?";
generateSnippets(`${api}/${encodedUrl}?q=${encodeURIComponent(question)}`, null, 'GET');
} else {
generateSnippets(api, payload);
}
} catch (error) {
console.error('Error:', error);
updateStatus('error');
document.querySelector('#response-content code').textContent = JSON.stringify(
{ error: error.message },
null,
2
);
forceHighlightElement(document.querySelector('#response-content code'));
}
}
// Stress test function
async function runStressTest() {
const total = parseInt(document.getElementById('st-total').value);
const chunkSize = parseInt(document.getElementById('st-chunk').value);
const concurrency = parseInt(document.getElementById('st-conc').value);
const useStream = document.getElementById('st-stream').checked;
const logEl = document.getElementById('stress-log');
logEl.textContent = '';
document.getElementById('stress-completed').textContent = '0';
document.getElementById('stress-total').textContent = total;
document.getElementById('stress-avg-time').textContent = '0';
document.getElementById('stress-peak-mem').textContent = '0';
const api = useStream ? '/crawl/stream' : '/crawl';
const urls = Array.from({ length: total }, (_, i) => `https://httpbin.org/anything/stress-${i}-${Date.now()}`);
const chunks = [];
for (let i = 0; i < urls.length; i += chunkSize) {
chunks.push(urls.slice(i, i + chunkSize));
}
  let completed = 0;
  let succeededBatches = 0; // batches that finished without throwing
  let totalTime = 0;
  let peakMemory = 0;
const processBatch = async (batch, index) => {
const payload = {
urls: batch,
browser_config: {},
crawler_config: { cache_mode: 'BYPASS', stream: useStream }
};
const start = performance.now();
let time, memory;
try {
if (useStream) {
const response = await fetch(api, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload)
});
          const reader = response.body.getReader();
          const decoder = new TextDecoder();
          let maxMem = 0;
while (true) {
const { value, done } = await reader.read();
if (done) break;
            const text = decoder.decode(value, { stream: true });
text.split('\n').forEach(line => {
try {
const obj = JSON.parse(line);
if (obj.server_memory_mb) {
maxMem = Math.max(maxMem, obj.server_memory_mb);
}
} catch { }
});
}
memory = maxMem;
} else {
const response = await fetch(api, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload)
});
const data = await response.json();
memory = data.server_peak_memory_mb;
}
time = Math.round(performance.now() - start);
      peakMemory = Math.max(peakMemory, memory || 0);
      totalTime += time;
      succeededBatches++;
logEl.textContent += `[${index + 1}/${chunks.length}] ✔ ${time}ms | Peak ${memory}MB\n`;
} catch (error) {
time = Math.round(performance.now() - start);
logEl.textContent += `[${index + 1}/${chunks.length}] ✖ ${time}ms | ${error.message}\n`;
}
completed += batch.length;
document.getElementById('stress-completed').textContent = completed;
document.getElementById('stress-peak-mem').textContent = peakMemory;
    document.getElementById('stress-avg-time').textContent = Math.round(totalTime / Math.max(succeededBatches, 1)); // average over successful batches only
logEl.scrollTop = logEl.scrollHeight;
};
// Run with concurrency control
let active = 0;
let index = 0;
return new Promise(resolve => {
const runNext = () => {
while (active < concurrency && index < chunks.length) {
processBatch(chunks[index], index)
.finally(() => {
active--;
runNext();
});
active++;
index++;
}
if (active === 0 && index >= chunks.length) {
logEl.textContent += '\n✅ Stress test completed\n';
resolve();
}
};
runNext();
});
}
// Event listeners
document.getElementById('run-btn').addEventListener('click', runCrawl);
document.getElementById('st-run').addEventListener('click', runStressTest);
function forceHighlightElement(element) {
if (!element) return;
// Save current scroll position (important for large code blocks)
const scrollTop = element.parentElement.scrollTop;
// Reset the element
const text = element.textContent;
element.innerHTML = text;
element.removeAttribute('data-highlighted');
// Reapply highlighting
hljs.highlightElement(element);
// Restore scroll position
element.parentElement.scrollTop = scrollTop;
}
// Initialize clipboard for all copy buttons
function initCopyButtons() {
document.querySelectorAll('.copy-btn').forEach(btn => {
new ClipboardJS(btn, {
text: () => {
const target = document.querySelector(btn.dataset.target);
return target ? target.textContent : '';
}
}).on('success', e => {
e.clearSelection();
// make button text "copied" for 1 second
const originalText = e.trigger.textContent;
e.trigger.textContent = 'Copied!';
setTimeout(() => {
e.trigger.textContent = originalText;
}, 1000);
// Highlight the copied code
const target = document.querySelector(btn.dataset.target);
if (target) {
target.classList.add('highlighted');
setTimeout(() => {
target.classList.remove('highlighted');
}, 1000);
}
}).on('error', e => {
console.error('Error copying:', e);
});
});
}
// Function to initialize UI based on selected endpoint
function initUI() {
// Trigger the endpoint change handler to set initial UI state
const endpointSelect = document.getElementById('endpoint');
const event = new Event('change');
endpointSelect.dispatchEvent(event);
// Initialize copy buttons
initCopyButtons();
}
// Initialize on page load
document.addEventListener('DOMContentLoaded', initUI);
// Also call it immediately in case the script runs after DOM is already loaded
if (document.readyState !== 'loading') {
initUI();
}
</script>
</body>
</html>


@@ -1,12 +1,28 @@
 [supervisord]
-nodaemon=true
+nodaemon=true ; Run supervisord in the foreground
+logfile=/dev/null ; Log supervisord output to stdout/stderr
+logfile_maxbytes=0
 
 [program:redis]
-command=redis-server
+command=/usr/bin/redis-server --loglevel notice ; Path to redis-server on Alpine
+user=appuser ; Run redis as our non-root user
 autorestart=true
 priority=10
+stdout_logfile=/dev/stdout ; Redirect redis stdout to container stdout
+stdout_logfile_maxbytes=0
+stderr_logfile=/dev/stderr ; Redirect redis stderr to container stderr
+stderr_logfile_maxbytes=0
 
 [program:gunicorn]
-command=gunicorn --bind 0.0.0.0:8000 --workers 4 --threads 2 --timeout 300 --graceful-timeout 60 --keep-alive 65 --log-level debug --worker-class uvicorn.workers.UvicornWorker --max-requests 1000 --max-requests-jitter 50 server:app
+command=/usr/local/bin/gunicorn --bind 0.0.0.0:11235 --workers 1 --threads 4 --timeout 1800 --graceful-timeout 30 --keep-alive 300 --log-level info --worker-class uvicorn.workers.UvicornWorker server:app
+directory=/app ; Working directory for the app
+user=appuser ; Run gunicorn as our non-root user
 autorestart=true
-priority=20
+priority=20
+environment=PYTHONUNBUFFERED=1 ; Ensure Python output is sent straight to logs
+stdout_logfile=/dev/stdout ; Redirect gunicorn stdout to container stdout
+stdout_logfile_maxbytes=0
+stderr_logfile=/dev/stderr ; Redirect gunicorn stderr to container stderr
+stderr_logfile_maxbytes=0
+
+# Optional: Add filebeat or other logging agents here if needed


@@ -45,10 +45,10 @@ def datetime_handler(obj: any) -> Optional[str]:
         return obj.isoformat()
     raise TypeError(f"Object of type {type(obj)} is not JSON serializable")
 
-def should_cleanup_task(created_at: str) -> bool:
+def should_cleanup_task(created_at: str, ttl_seconds: int = 3600) -> bool:
     """Check if task should be cleaned up based on creation time."""
     created = datetime.fromisoformat(created_at)
-    return (datetime.now() - created).total_seconds() > 3600
+    return (datetime.now() - created).total_seconds() > ttl_seconds
 
 def decode_redis_hash(hash_data: Dict[bytes, bytes]) -> Dict[str, str]:
     """Decode Redis hash data from bytes to strings."""


@@ -1,16 +1,21 @@
-# Base configuration (not a service, just a reusable config block)
-version: '3.8'
+# Shared configuration for all environments
 x-base-config: &base-config
   ports:
-    - "11235:11235"
-    - "8000:8000"
-    - "9222:9222"
-    - "8080:8080"
+    - "11235:11235" # Gunicorn port
+  env_file:
+    - .llm.env # API keys (create from .llm.env.example)
   environment:
     - CRAWL4AI_API_TOKEN=${CRAWL4AI_API_TOKEN:-}
     - OPENAI_API_KEY=${OPENAI_API_KEY:-}
+    - CLAUDE_API_KEY=${CLAUDE_API_KEY:-}
+    - DEEPSEEK_API_KEY=${DEEPSEEK_API_KEY:-}
+    - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
+    - GROQ_API_KEY=${GROQ_API_KEY:-}
+    - TOGETHER_API_KEY=${TOGETHER_API_KEY:-}
+    - MISTRAL_API_KEY=${MISTRAL_API_KEY:-}
+    - GEMINI_API_TOKEN=${GEMINI_API_TOKEN:-}
   volumes:
-    - /dev/shm:/dev/shm
+    - /dev/shm:/dev/shm # Chromium performance
   deploy:
     resources:
       limits:
@@ -24,42 +29,21 @@ x-base-config: &base-config
       timeout: 10s
       retries: 3
       start_period: 40s
+  user: "appuser"
 
 services:
-  # Local build services for different platforms
-  crawl4ai-amd64:
+  crawl4ai:
+    # 1. Default: Pull multi-platform test image from Docker Hub
+    # 2. Override with local image via: IMAGE=local-test docker compose up
+    image: ${IMAGE:-unclecode/crawl4ai:${TAG:-latest}}
+    # Local build config (used with --build)
     build:
       context: .
       dockerfile: Dockerfile
       args:
         PYTHON_VERSION: "3.10"
-        INSTALL_TYPE: ${INSTALL_TYPE:-basic}
-        ENABLE_GPU: false
-    platforms:
-      - linux/amd64
-    profiles: ["local-amd64"]
-    <<: *base-config # include the config directly instead of using `extends`
-  crawl4ai-arm64:
-    build:
-      context: .
-      dockerfile: Dockerfile
-      args:
-        PYTHON_VERSION: "3.10"
-        INSTALL_TYPE: ${INSTALL_TYPE:-basic}
-        ENABLE_GPU: false
-    platforms:
-      - linux/arm64
-    profiles: ["local-arm64"]
-    <<: *base-config
-  # Hub services for different platforms and versions
-  crawl4ai-hub-amd64:
-    image: unclecode/crawl4ai:${VERSION:-basic}-amd64
-    profiles: ["hub-amd64"]
-    <<: *base-config
-  crawl4ai-hub-arm64:
-    image: unclecode/crawl4ai:${VERSION:-basic}-arm64
-    profiles: ["hub-arm64"]
-    <<: *base-config
+        INSTALL_TYPE: ${INSTALL_TYPE:-default}
+        ENABLE_GPU: ${ENABLE_GPU:-false}
+    # Inherit shared config
+    <<: *base-config

docs/apps/linkdin/README.md

@@ -0,0 +1,127 @@
# Crawl4AI Prospect Wizard: step-by-step guide
A three-stage demo that goes from **LinkedIn scraping** → **LLM reasoning** → **graph visualisation**.
```
prospectwizard/
├─ c4ai_discover.py          # Stage 1: scrape companies + people
├─ c4ai_insights.py          # Stage 2: embeddings, org charts, scores
├─ graph_view_template.html  # Stage 3: graph viewer (static HTML)
└─ data/                     # output lands here (*.jsonl / *.json)
```
---
## 1. Install & boot a LinkedIn profile (one-time)
### 1.1 Install dependencies
```bash
pip install crawl4ai litellm sentence-transformers pandas rich
```
### 1.2 Create / warm a LinkedIn browser profile
```bash
crwl profiles
```
1. The interactive shell shows **New profile**; hit **enter**.
2. Choose a name, e.g. `profile_linkedin_uc`.
3. A Chromium window opens; log in to LinkedIn, solve any CAPTCHA, then close it.
> Remember the **profile name**. All future runs take `--profile-name <your_name>`.
---
## 2. Discovery: scrape companies & people
```bash
python c4ai_discover.py full \
  --query "health insurance management" \
  --geo 102713980 \
  --title-filters "" \
  --max-companies 10 \
  --max-people 20 \
  --profile-name profile_linkedin_uc \
  --outdir ./data \
  --concurrency 2 \
  --log-level debug
# --geo 102713980   -> Malaysia geoUrn (cheatsheet below)
# --title-filters   -> "" for everyone, or e.g. "Product,Engineering"
# --max-companies / --max-people are kept small for workshops
```
**Outputs** in `./data/`:
* `companies.jsonl`: one JSON object per company
* `people.jsonl`: one JSON object per employee
🛠️ **Dry-run:** `C4AI_DEMO_DEBUG=1 python c4ai_discover.py full --query coffee` uses bundled HTML snippets; no network needed.
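Both files are plain JSONL, so downstream tools can stream them line by line. A minimal loading sketch (paths assume the `--outdir ./data` default above):
```python
import json, pathlib

companies = [json.loads(line) for line in pathlib.Path("data/companies.jsonl").read_text().splitlines()]
print(companies[0]["name"], "->", companies[0]["people_url"])
```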
### Handy geoUrn cheatsheet
| Location | geoUrn |
|----------|--------|
| Singapore | **103644278** |
| Malaysia | **102713980** |
| United States | **103644922** |
| United Kingdom | **102221843** |
| Australia | **101452733** |
_See more: <https://www.linkedin.com/search/results/companies/?geoUrn=XXX>; the number after `geoUrn=` is what you need._
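If you already have a search URL, the geoUrn can also be pulled out programmatically; a tiny illustrative helper:
```python
from urllib.parse import urlparse, parse_qs

url = "https://www.linkedin.com/search/results/companies/?geoUrn=102713980"
print(parse_qs(urlparse(url).query)["geoUrn"][0])  # -> 102713980
```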
---
## 3. Insights: embeddings, org charts, decision makers
```bash
python c4ai_insights.py \
--in ./data \
--out ./data \
--embed-model all-MiniLM-L6-v2 \
--llm-provider gemini/gemini-2.0-flash \
--llm-api-key "" \
--top-k 10 \
--max-llm-tokens 8024 \
--llm-temperature 1.0 \
--workers 4
```
Emits, next to the Stage 1 files:
* `company_graph.json`: inter-company similarity graph
* `org_chart_<handle>.json`: one per company
* `decision_makers.csv`: a hand-picked "who to pitch" list
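`decision_makers.csv` is a flat table (columns `company, person, title, decision_score, profile_url`, written by `export_decision_makers`), so ranking prospects is one pandas call; a sketch assuming the `./data` output dir:
```python
import pandas as pd

df = pd.read_csv("data/decision_makers.csv")
top = df.sort_values("decision_score", ascending=False).head(10)
print(top[["company", "person", "decision_score"]])
```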
Flags reference (straight from `build_arg_parser()`):
| Flag | Default | Purpose |
|------|---------|---------|
| `--in` | `.` | Stage 1 output dir |
| `--out` | `.` | Destination dir |
| `--embed-model` | `all-MiniLM-L6-v2` | SentenceTransformer model |
| `--top-k` | `10` | Neighbours per company in graph |
| `--llm-provider` | `openai/gpt-4.1` | LLM for scoring decision makers |
| `--max-llm-tokens` | `8024` | Token budget per LLM call |
| `--llm-temperature` | `1.0` | Creativity knob |
| `--stub` | off | Skip the LLM call and fabricate tiny charts |
| `--workers` | `4` | Parallel LLM workers |
---
## 4. Visualise: interactive graph
After Stage 2 completes, simply open the HTML viewer from the project root:
```bash
open graph_view_template.html   # or Live Server / python -m http.server
```
The page fetches `data/company_graph.json` and the `org_chart_*.json` files automatically; keep the `data/` folder beside the HTML file.
* Left pane → list of companies (clans).
* Click a node to load its org chart on the right.
* Chat drawer lets you ask follow-up questions; context is pulled from `people.jsonl`.
---
## 5. Common snags
| Symptom | Fix |
|---------|-----|
| Infinite CAPTCHA | Use a residential proxy: `--proxy http://user:pass@ip:port` |
| 429 Too Many Requests | Lower `--concurrency`, rotate profile, add delay |
| Blank graph | Check JSON paths, clear `localStorage` in browser |
---
### TL;DR
`crwl profiles` → `c4ai_discover.py` → `c4ai_insights.py` → open `graph_view_template.html`.
Live long and `import crawl4ai`.


@@ -0,0 +1,437 @@
#!/usr/bin/env python3
"""
c4ai-discover — Stage 1 Discovery CLI
Scrapes LinkedIn company search + their people pages and dumps two newline-delimited
JSON files: companies.jsonl and people.jsonl.
Key design rules
----------------
* No BeautifulSoup — Crawl4AI only for network + HTML fetch.
* JsonCssExtractionStrategy for structured scraping; schema auto-generated once
  from sample HTML provided by the user, then cached under ./schemas/.
* Defaults are embedded so the file runs inside the VS Code debugger without CLI args.
* If executed as a console script (argv > 1), CLI flags win.
* Lightweight deps: argparse + the Crawl4AI stack.
Author: Tom @ Kidocode, 2025-04-26
"""
from __future__ import annotations
import warnings, re
warnings.filterwarnings(
"ignore",
message=r"The pseudo class ':contains' is deprecated, ':-soup-contains' should be used.*",
category=FutureWarning,
module=r"soupsieve"
)
# ───────────────────────────────────────────────────────────────────────────────
# Imports
# ───────────────────────────────────────────────────────────────────────────────
import argparse
import random
import asyncio
import json
import logging
import os
import pathlib
import sys
# 3rd-party rich for pretty logging
from rich.console import Console
from rich.logging import RichHandler
from datetime import datetime, UTC
from textwrap import dedent
from types import SimpleNamespace
from typing import Dict, List, Optional
from urllib.parse import quote
from pathlib import Path
from glob import glob
from crawl4ai import (
AsyncWebCrawler,
BrowserConfig,
CacheMode,
CrawlerRunConfig,
JsonCssExtractionStrategy,
BrowserProfiler,
LLMConfig,
)
# ───────────────────────────────────────────────────────────────────────────────
# Constants / paths
# ───────────────────────────────────────────────────────────────────────────────
BASE_DIR = pathlib.Path(__file__).resolve().parent
SCHEMA_DIR = BASE_DIR / "schemas"
SCHEMA_DIR.mkdir(parents=True, exist_ok=True)
COMPANY_SCHEMA_PATH = SCHEMA_DIR / "company_card.json"
PEOPLE_SCHEMA_PATH = SCHEMA_DIR / "people_card.json"
# ---------- deterministic target JSON examples ----------
_COMPANY_SCHEMA_EXAMPLE = {
"handle": "/company/posify/",
"profile_image": "https://media.licdn.com/dms/image/v2/.../logo.jpg",
"name": "Management Research Services, Inc. (MRS, Inc)",
"descriptor": "Insurance • Milwaukee, Wisconsin",
"about": "Insurance • Milwaukee, Wisconsin",
"followers": 1000
}
_PEOPLE_SCHEMA_EXAMPLE = {
"profile_url": "https://www.linkedin.com/in/lily-ng/",
"name": "Lily Ng",
"headline": "VP Product @ Posify",
"followers": 890,
"connection_degree": "2nd",
"avatar_url": "https://media.licdn.com/dms/image/v2/.../lily.jpg"
}
# Provided sample HTML snippets (trimmed) — used exactly once to cold-generate the schemas.
_SAMPLE_COMPANY_HTML = (Path(__file__).resolve().parent / "snippets/company.html").read_text()
_SAMPLE_PEOPLE_HTML = (Path(__file__).resolve().parent / "snippets/people.html").read_text()
# --------- tighter schema prompts ----------
_COMPANY_SCHEMA_QUERY = dedent(
    """
    Using the supplied <li> company-card HTML, build a JsonCssExtractionStrategy schema that,
    for every card, outputs *exactly* the keys shown in the example JSON below.
    JSON spec:
    • handle: href of the outermost <a> that wraps the logo/title, e.g. "/company/posify/"
    • profile_image: absolute URL of the <img> inside that link
    • name: text of the <a> inside the <span class*='t-16'>
    • descriptor: text line with industry • location
    • about: text of the <div class*='t-normal'> below the name (industry + geo)
    • followers: integer parsed from the <div> containing 'followers'
    IMPORTANT: Do not use the base64-looking classes to target elements; they are not reliable.
    The main parent <div> containing these <li> elements is "div.search-results-container"; you can use this.
    The parent <ul> has "role" equal to "list". Using these two should be enough to target the <li> elements.
    """
)
_PEOPLE_SCHEMA_QUERY = dedent(
    """
    Using the supplied <li> people-card HTML, build a JsonCssExtractionStrategy schema that
    outputs exactly the keys in the example JSON below.
    Fields:
    • profile_url: href of the outermost profile link
    • name: text inside artdeco-entity-lockup__title
    • headline: inner text of artdeco-entity-lockup__subtitle
    • followers: integer parsed from the span inside lt-line-clamp--multi-line
    • connection_degree: '1st', '2nd', etc. from artdeco-entity-lockup__badge
    • avatar_url: src of the <img> within artdeco-entity-lockup__image
    IMPORTANT: Do not use the base64-looking classes to target elements; they are not reliable.
    The main parent <div> containing these <li> elements has the classes
    "artdeco-card org-people-profile-card__card-spacing org-people__card-margin-bottom".
    """
)
# ---------------------------------------------------------------------------
# Utility helpers
# ---------------------------------------------------------------------------
def _load_or_build_schema(
path: pathlib.Path,
sample_html: str,
query: str,
example_json: Dict,
force = False
) -> Dict:
"""Load schema from path, else call generate_schema once and persist."""
if path.exists() and not force:
return json.loads(path.read_text())
logging.info("[SCHEMA] Generating schema %s", path.name)
schema = JsonCssExtractionStrategy.generate_schema(
html=sample_html,
llm_config=LLMConfig(
provider=os.getenv("C4AI_SCHEMA_PROVIDER", "openai/gpt-4o"),
api_token=os.getenv("OPENAI_API_KEY", "env:OPENAI_API_KEY"),
),
query=query,
target_json_example=json.dumps(example_json, indent=2),
)
path.write_text(json.dumps(schema, indent=2))
return schema
def _openai_friendly_number(text: str) -> Optional[int]:
    """Extract first int from text like '1K followers' (returns 1000)."""
    m = re.search(r"(\d[\d,]*)", text.replace(",", ""))  # `re` is imported at module level
if not m:
return None
val = int(m.group(1))
if "k" in text.lower():
val *= 1000
if "m" in text.lower():
val *= 1_000_000
return val
# ---------------------------------------------------------------------------
# Core async workers
# ---------------------------------------------------------------------------
async def crawl_company_search(crawler: AsyncWebCrawler, url: str, schema: Dict, limit: int) -> List[Dict]:
"""Paginate 10-item company search pages until `limit` reached."""
extraction = JsonCssExtractionStrategy(schema)
    cfg = CrawlerRunConfig(
        extraction_strategy=extraction,
        cache_mode=CacheMode.BYPASS,
        wait_for=".search-marvel-srp",
        session_id="company_search",
        delay_before_return_html=1,
        magic=True,
        verbose=False,
    )
companies, page = [], 1
while len(companies) < max(limit, 10):
paged_url = f"{url}&page={page}"
res = await crawler.arun(paged_url, config=cfg)
batch = json.loads(res[0].extracted_content)
if not batch:
break
for item in batch:
name = item.get("name", "").strip()
handle = item.get("handle", "").strip()
if not handle or not name:
continue
descriptor = item.get("descriptor")
about = item.get("about")
followers = _openai_friendly_number(str(item.get("followers", "")))
companies.append(
{
"handle": handle,
"name": name,
"descriptor": descriptor,
"about": about,
"followers": followers,
"people_url": f"{handle}people/",
"captured_at": datetime.now(UTC).isoformat(timespec="seconds") + "Z",
}
)
page += 1
logging.info(
f"[dim]Page {page}[/] — running total: {len(companies)}/{limit} companies"
)
return companies[:max(limit, 10)]
async def crawl_people_page(
crawler: AsyncWebCrawler,
people_url: str,
schema: Dict,
limit: int,
title_kw: str,
) -> List[Dict]:
people_u = f"{people_url}?keywords={quote(title_kw)}"
extraction = JsonCssExtractionStrategy(schema)
cfg = CrawlerRunConfig(
extraction_strategy=extraction,
# scan_full_page=True,
cache_mode=CacheMode.BYPASS,
magic=True,
wait_for=".org-people-profile-card__card-spacing",
delay_before_return_html=1,
session_id="people_search",
)
res = await crawler.arun(people_u, config=cfg)
if not res[0].success:
return []
raw = json.loads(res[0].extracted_content)
people = []
for p in raw[:limit]:
followers = _openai_friendly_number(str(p.get("followers", "")))
people.append(
{
"profile_url": p.get("profile_url"),
"name": p.get("name"),
"headline": p.get("headline"),
"followers": followers,
"connection_degree": p.get("connection_degree"),
"avatar_url": p.get("avatar_url"),
}
)
return people
# ---------------------------------------------------------------------------
# CLI + main
# ---------------------------------------------------------------------------
def build_arg_parser() -> argparse.ArgumentParser:
ap = argparse.ArgumentParser("c4ai-discover — Crawl4AI LinkedIn discovery")
sub = ap.add_subparsers(dest="cmd", required=False, help="run scope")
def add_flags(parser: argparse.ArgumentParser):
parser.add_argument("--query", required=False, help="query keyword(s)")
parser.add_argument("--geo", required=False, type=int, help="LinkedIn geoUrn")
parser.add_argument("--title-filters", default="Product,Engineering", help="comma list of job keywords")
parser.add_argument("--max-companies", type=int, default=1000)
parser.add_argument("--max-people", type=int, default=500)
parser.add_argument("--profile-name", default=str(pathlib.Path.home() / ".crawl4ai/profiles/profile_linkedin_uc"))
parser.add_argument("--outdir", default="./output")
parser.add_argument("--concurrency", type=int, default=4)
parser.add_argument("--log-level", default="info", choices=["debug", "info", "warn", "error"])
add_flags(sub.add_parser("full"))
add_flags(sub.add_parser("companies"))
add_flags(sub.add_parser("people"))
# global flags
ap.add_argument(
"--debug",
action="store_true",
help="Use built-in demo defaults (same as C4AI_DEMO_DEBUG=1)",
)
return ap
def detect_debug_defaults(force = False) -> SimpleNamespace:
if not force and sys.gettrace() is None and not os.getenv("C4AI_DEMO_DEBUG"):
return SimpleNamespace()
# ----- debugfriendly defaults -----
return SimpleNamespace(
cmd="full",
query="health insurance management",
geo=102713980,
# title_filters="Product,Engineering",
title_filters="",
max_companies=10,
max_people=5,
profile_name="profile_linkedin_uc",
outdir="./debug_out",
concurrency=2,
log_level="debug",
)
async def async_main(opts):
# ─────────── logging setup ───────────
console = Console()
logging.basicConfig(
level=opts.log_level.upper(),
format="%(message)s",
handlers=[RichHandler(console=console, markup=True, rich_tracebacks=True)],
)
# -------------------------------------------------------------------
# Load or build schemas (onetime LLM call each)
# -------------------------------------------------------------------
company_schema = _load_or_build_schema(
COMPANY_SCHEMA_PATH,
_SAMPLE_COMPANY_HTML,
_COMPANY_SCHEMA_QUERY,
_COMPANY_SCHEMA_EXAMPLE,
# True
)
people_schema = _load_or_build_schema(
PEOPLE_SCHEMA_PATH,
_SAMPLE_PEOPLE_HTML,
_PEOPLE_SCHEMA_QUERY,
_PEOPLE_SCHEMA_EXAMPLE,
# True
)
    outdir = BASE_DIR / pathlib.Path(opts.outdir)
    outdir.mkdir(parents=True, exist_ok=True)
    f_companies = (outdir / "companies.jsonl").open("a", encoding="utf-8")
    f_people = (outdir / "people.jsonl").open("a", encoding="utf-8")
# -------------------------------------------------------------------
# Prepare crawler with cookie pool rotation
# -------------------------------------------------------------------
profiler = BrowserProfiler()
path = profiler.get_profile_path(opts.profile_name)
bc = BrowserConfig(
headless=False,
verbose=False,
user_data_dir=path,
use_managed_browser=True,
user_agent_mode = "random",
user_agent_generator_config= {
"platforms": "mobile",
"os": "Android"
}
)
crawler = AsyncWebCrawler(config=bc)
await crawler.start()
# Single worker for simplicity; concurrency can be scaled by arun_many if needed.
# crawler = await next_crawler().start()
try:
# Build LinkedIn search URL
search_url = f'https://www.linkedin.com/search/results/companies/?keywords={quote(opts.query)}&companyHqGeo="{opts.geo}"'
logging.info("Seed URL => %s", search_url)
companies: List[Dict] = []
if opts.cmd in ("companies", "full"):
companies = await crawl_company_search(
crawler, search_url, company_schema, opts.max_companies
)
for c in companies:
f_companies.write(json.dumps(c, ensure_ascii=False) + "\n")
logging.info(f"[bold green]✓[/] Companies scraped so far: {len(companies)}")
if opts.cmd in ("people", "full"):
if not companies:
# load from previous run
src = outdir / "companies.jsonl"
if not src.exists():
logging.error("companies.jsonl missing — run companies/full first")
return 10
companies = [json.loads(l) for l in src.read_text().splitlines()]
total_people = 0
title_kw = " ".join([t.strip() for t in opts.title_filters.split(",") if t.strip()]) if opts.title_filters else ""
for comp in companies:
people = await crawl_people_page(
crawler,
comp["people_url"],
people_schema,
opts.max_people,
title_kw,
)
for p in people:
                    rec = p | {
                        "company_handle": comp["handle"],
                        "captured_at": datetime.now(UTC).isoformat(timespec="seconds"),
                    }
f_people.write(json.dumps(rec, ensure_ascii=False) + "\n")
total_people += len(people)
logging.info(
f"{comp['name']} — [cyan]{len(people)}[/] people extracted"
)
await asyncio.sleep(random.uniform(0.5, 1))
logging.info("Total people scraped: %d", total_people)
finally:
await crawler.close()
f_companies.close()
f_people.close()
return 0
def main():
parser = build_arg_parser()
cli_opts = parser.parse_args()
# decide on debug defaults
    if cli_opts.debug:
        opts = detect_debug_defaults(force=True)
    else:
        env_defaults = detect_debug_defaults()
        # an empty SimpleNamespace is truthy, so test its contents instead
        opts = env_defaults if vars(env_defaults) else cli_opts
    if not getattr(opts, "cmd", None):
        opts.cmd = "full"
    exit_code = asyncio.run(async_main(opts))  # run with the resolved opts, not the raw CLI ones
sys.exit(exit_code)
if __name__ == "__main__":
main()


@@ -0,0 +1,377 @@
#!/usr/bin/env python3
"""
Stage-2 Insights builder
------------------------
Reads companies.jsonl & people.jsonl (Stage-1 output) and produces:
• company_graph.json
• org_chart_<handle>.json (one per company)
• decision_makers.csv
• graph_view.html (interactive visualisation)
Run:
python c4ai_insights.py --in ./stage1_out --out ./stage2_out
Author : Tom @ Kidocode, 2025-04-28
"""
from __future__ import annotations
# ───────────────────────────────────────────────────────────────────────────────
# Imports & Third-party
# ───────────────────────────────────────────────────────────────────────────────
import argparse, asyncio, json, pathlib, random
from datetime import datetime, UTC
from types import SimpleNamespace
from pathlib import Path
from typing import List, Dict, Any
# Pretty CLI UX
from rich.console import Console
from rich.logging import RichHandler
from rich.progress import Progress, SpinnerColumn, BarColumn, TextColumn, TimeElapsedColumn
import logging
# ───────────────────────────────────────────────────────────────────────────────
# 3rd-party deps
# ───────────────────────────────────────────────────────────────────────────────
import numpy as np
# from sentence_transformers import SentenceTransformer
# from sklearn.metrics.pairwise import cosine_similarity
import pandas as pd
import hashlib
from litellm import completion  # support any LLM provider
# ───────────────────────────────────────────────────────────────────────────────
# Utils
# ───────────────────────────────────────────────────────────────────────────────
def load_jsonl(path: Path) -> List[Dict[str, Any]]:
with open(path, "r", encoding="utf-8") as f:
return [json.loads(l) for l in f]
def dump_json(obj, path: Path):
with open(path, "w", encoding="utf-8") as f:
json.dump(obj, f, ensure_ascii=False, indent=2)
# ───────────────────────────────────────────────────────────────────────────────
# Constants
# ───────────────────────────────────────────────────────────────────────────────
BASE_DIR = pathlib.Path(__file__).resolve().parent
# ───────────────────────────────────────────────────────────────────────────────
# Debug defaults (mirrors Stage-1 trick)
# ───────────────────────────────────────────────────────────────────────────────
def dev_defaults() -> SimpleNamespace:
    return SimpleNamespace(
        in_dir="./debug_out",
        out_dir="./insights_debug",
        embed_model="all-MiniLM-L6-v2",
        top_k=10,
        llm_provider="openai/gpt-4.1",
        llm_api_key=None,
        llm_base_url=None,   # run() reads this, so the dev defaults must define it
        max_llm_tokens=8000,
        llm_temperature=1.0,
        stub=False,          # ditto
        workers=4
    )
# ───────────────────────────────────────────────────────────────────────────────
# Graph builders
# ───────────────────────────────────────────────────────────────────────────────
def embed_descriptions(companies, model_name:str, opts) -> np.ndarray:
from sentence_transformers import SentenceTransformer
logging.debug(f"Using embedding model: {model_name}")
cache_path = BASE_DIR / Path(opts.out_dir) / "embeds_cache.json"
cache = {}
if cache_path.exists():
with open(cache_path) as f:
cache = json.load(f)
# flush cache if model differs
if cache.get("_model") != model_name:
cache = {}
model = SentenceTransformer(model_name)
new_texts, new_indices = [], []
    vectors = np.zeros((len(companies), 384), dtype=np.float32)  # 384 = embedding dim of all-MiniLM-L6-v2; adjust if you swap models
for idx, comp in enumerate(companies):
text = comp.get("about") or comp.get("descriptor","")
h = hashlib.sha1(text.encode("utf-8")).hexdigest()
cached = cache.get(comp["handle"])
if cached and cached["hash"] == h:
vectors[idx] = np.array(cached["vector"], dtype=np.float32)
else:
new_texts.append(text)
new_indices.append((idx, comp["handle"], h))
if new_texts:
embeds = model.encode(new_texts, show_progress_bar=False, convert_to_numpy=True)
for vec, (idx, handle, h) in zip(embeds, new_indices):
vectors[idx] = vec
cache[handle] = {"hash": h, "vector": vec.tolist()}
cache["_model"] = model_name
with open(cache_path, "w") as f:
json.dump(cache, f)
return vectors
def build_company_graph(companies, embeds:np.ndarray, top_k:int) -> Dict[str,Any]:
from sklearn.metrics.pairwise import cosine_similarity
sims = cosine_similarity(embeds)
nodes, edges = [], []
idx_of = {c["handle"]: i for i,c in enumerate(companies)}
for i,c in enumerate(companies):
node = dict(
id=c["handle"].strip("/"),
name=c["name"],
handle=c["handle"],
about=c.get("about",""),
people_url=c.get("people_url",""),
            industry=c.get("descriptor","").split("•")[0].strip(),  # descriptor format: "Insurance • Milwaukee, Wisconsin"
geoUrn=c.get("geoUrn"),
followers=c.get("followers",0),
# desc_embed=embeds[i].tolist(),
desc_embed=[],
)
nodes.append(node)
# pick top-k most similar except itself
top_idx = np.argsort(sims[i])[::-1][1:top_k+1]
for j in top_idx:
tgt = companies[j]
weight = float(sims[i,j])
if node["industry"] == tgt.get("descriptor","").split("")[0].strip():
weight += 0.10
if node["geoUrn"] == tgt.get("geoUrn"):
weight += 0.05
tgt['followers'] = tgt.get("followers", None) or 1
node["followers"] = node.get("followers", None) or 1
follower_ratio = min(node["followers"], tgt.get("followers",1)) / max(node["followers"] or 1, tgt.get("followers",1))
weight += 0.05 * follower_ratio
edges.append(dict(
source=node["id"],
target=tgt["handle"].strip("/"),
weight=round(weight,4),
drivers=dict(
embed_sim=round(float(sims[i,j]),4),
                    industry_match=0.10 if node["industry"] == tgt.get("descriptor","").split("•")[0].strip() else 0,
geo_overlap=0.05 if node["geoUrn"] == tgt.get("geoUrn") else 0,
)
))
# return {"nodes":nodes,"edges":edges,"meta":{"generated_at":datetime.now(UTC).isoformat()}}
return {"nodes":nodes,"edges":edges,"meta":{"generated_at":datetime.now(UTC).isoformat()}}
# ───────────────────────────────────────────────────────────────────────────────
# Org-chart via LLM
# ───────────────────────────────────────────────────────────────────────────────
async def infer_org_chart_llm(company, people, llm_provider:str, api_key:str, max_tokens:int, temperature:float, stub:bool=False, base_url:str=None):
if stub:
# Tiny fake org-chart when debugging offline
chief = random.choice(people)
nodes = [{
"id": chief["profile_url"],
"name": chief["name"],
"title": chief["headline"],
"dept": chief["headline"].split()[:1][0],
"yoe_total": 8,
"yoe_current": 2,
"seniority_score": 0.8,
"decision_score": 0.9,
"avatar_url": chief.get("avatar_url")
}]
return {"nodes":nodes,"edges":[],"meta":{"debug_stub":True,"generated_at":datetime.now(UTC).isoformat()}}
prompt = [
{"role":"system","content":"You are an expert B2B org-chart reasoner."},
{"role":"user","content":f"""Here is the company description:
<company>
{json.dumps(company, ensure_ascii=False)}
</company>
Here is a JSON list of employees:
<employees>
{json.dumps(people, ensure_ascii=False)}
</employees>
1) Build a reporting tree (manager -> direct reports)
2) For each person output a decision_score 0-1 for buying new software
Return JSON: {{ "nodes":[{{id,name,title,dept,yoe_total,yoe_current,seniority_score,decision_score,avatar_url,profile_url}}], "edges":[{{source,target,type,confidence}}] }}
"""}
]
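    # NOTE: litellm.completion is synchronous, so each call blocks the event loop;
    # with workers > 1, litellm's async `acompletion` would give real concurrency.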
resp = completion(
model=llm_provider,
messages=prompt,
max_tokens=max_tokens,
temperature=temperature,
response_format={"type":"json_object"},
api_key=api_key,
base_url=base_url
)
chart = json.loads(resp.choices[0].message.content)
chart["meta"] = dict(
model=llm_provider,
generated_at=datetime.now(UTC).isoformat()
)
return chart
# ───────────────────────────────────────────────────────────────────────────────
# CSV flatten
# ───────────────────────────────────────────────────────────────────────────────
def export_decision_makers(charts_dir:Path, csv_path:Path, threshold:float=0.5):
rows=[]
for p in charts_dir.glob("org_chart_*.json"):
data=json.loads(p.read_text())
comp = p.stem.split("org_chart_")[1]
for n in data.get("nodes",[]):
if n.get("decision_score",0)>=threshold:
rows.append(dict(
company=comp,
person=n["name"],
title=n["title"],
decision_score=n["decision_score"],
profile_url=n["id"]
))
pd.DataFrame(rows).to_csv(csv_path,index=False)
# ───────────────────────────────────────────────────────────────────────────────
# HTML rendering
# ───────────────────────────────────────────────────────────────────────────────
def render_html(out:Path, template_dir:Path):
# From template folder cp graph_view.html and ai.js in out folder
import shutil
shutil.copy(template_dir/"graph_view_template.html", out / "graph_view.html")
shutil.copy(template_dir/"ai.js", out)
# ───────────────────────────────────────────────────────────────────────────────
# Main async pipeline
# ───────────────────────────────────────────────────────────────────────────────
async def run(opts):
# ── silence SDK noise ──────────────────────────────────────────────────────
for noisy in ("openai", "httpx", "httpcore"):
lg = logging.getLogger(noisy)
lg.setLevel(logging.WARNING) # or ERROR if you want total silence
lg.propagate = False # optional: stop them reaching root
# ────────────── logging bootstrap ──────────────
console = Console()
logging.basicConfig(
level="INFO",
format="%(message)s",
handlers=[RichHandler(console=console, markup=True, rich_tracebacks=True)],
)
in_dir = BASE_DIR / Path(opts.in_dir)
out_dir = BASE_DIR / Path(opts.out_dir)
out_dir.mkdir(parents=True, exist_ok=True)
companies = load_jsonl(in_dir/"companies.jsonl")
people = load_jsonl(in_dir/"people.jsonl")
logging.info(f"[bold cyan]Loaded[/] {len(companies)} companies, {len(people)} people")
logging.info("[bold]⇢[/] Embedding company descriptions…")
embeds = embed_descriptions(companies, opts.embed_model, opts)
logging.info("[bold]⇢[/] Building similarity graph")
company_graph = build_company_graph(companies, embeds, opts.top_k)
dump_json(company_graph, out_dir/"company_graph.json")
# Filter companies that need processing
to_process = []
for comp in companies:
handle = comp["handle"].strip("/").replace("/","_")
out_file = out_dir/f"org_chart_{handle}.json"
        if out_file.exists():  # was `and False` (debug leftover that forced reprocessing)
            logging.info(f"[green]✓[/] Skipping existing {comp['name']}")
            continue
to_process.append(comp)
if not to_process:
logging.info("[yellow]All companies already processed[/]")
else:
workers = getattr(opts, 'workers', 1)
parallel = workers > 1
logging.info(f"[bold]⇢[/] Inferring org-charts via LLM {f'(parallel={workers} workers)' if parallel else ''}")
with Progress(
SpinnerColumn(),
BarColumn(),
TextColumn("[progress.description]{task.description}"),
TimeElapsedColumn(),
console=console,
) as progress:
task = progress.add_task("Org charts", total=len(to_process))
async def process_one(comp):
handle = comp["handle"].strip("/").replace("/","_")
persons = [p for p in people if p["company_handle"].strip("/") == comp["handle"].strip("/")]
chart = await infer_org_chart_llm(
comp, persons,
llm_provider=opts.llm_provider,
api_key=opts.llm_api_key or None,
max_tokens=opts.max_llm_tokens,
temperature=opts.llm_temperature,
stub=opts.stub or False,
base_url=opts.llm_base_url or None
)
chart["meta"]["company"] = comp["name"]
# Save the result immediately
dump_json(chart, out_dir/f"org_chart_{handle}.json")
progress.update(task, advance=1, description=f"{comp['name']} ({len(persons)} ppl)")
# Create tasks for all companies
tasks = [process_one(comp) for comp in to_process]
# Process in batches based on worker count
semaphore = asyncio.Semaphore(workers)
async def bounded_process(coro):
async with semaphore:
return await coro
# Run with concurrency control
await asyncio.gather(*(bounded_process(task) for task in tasks))
logging.info("[bold]⇢[/] Flattening decision-makers CSV")
export_decision_makers(out_dir, out_dir/"decision_makers.csv")
render_html(out_dir, template_dir=BASE_DIR/"templates")
logging.success = lambda msg, **k: console.print(f"[bold green]✓[/] {msg}", **k)
logging.success(f"Stage-2 artefacts written to {out_dir}")
# ───────────────────────────────────────────────────────────────────────────────
# CLI
# ───────────────────────────────────────────────────────────────────────────────
def build_arg_parser():
p = argparse.ArgumentParser(description="Build graphs & visualisation from Stage-1 output")
p.add_argument("--in", dest="in_dir", required=False, help="Stage-1 output dir", default=".")
p.add_argument("--out", dest="out_dir", required=False, help="Destination dir", default=".")
p.add_argument("--embed-model", default="all-MiniLM-L6-v2")
p.add_argument("--top-k", type=int, default=10, help="Top-k neighbours per company")
p.add_argument("--llm-provider", default="openai/gpt-4.1",
help="LLM model to use in format 'provider/model_name' (e.g., 'anthropic/claude-3')")
p.add_argument("--llm-api-key", help="API key for LLM provider (defaults to env vars)")
p.add_argument("--llm-base-url", help="Base URL for LLM API endpoint")
p.add_argument("--max-llm-tokens", type=int, default=8024)
p.add_argument("--llm-temperature", type=float, default=1.0)
p.add_argument("--stub", action="store_true", help="Skip OpenAI call and generate tiny fake org charts")
p.add_argument("--workers", type=int, default=4, help="Number of parallel workers for LLM inference")
return p
def main():
    # dev_defaults() mirrors the Stage-1 debug trick; swap it in for debugger runs
    opts = build_arg_parser().parse_args()
    asyncio.run(run(opts))
if __name__ == "__main__":
main()


@@ -0,0 +1,39 @@
{
"name": "LinkedIn Company Card",
"baseSelector": "div.search-results-container ul[role='list'] > li",
"fields": [
{
"name": "handle",
"selector": "a[href*='/company/']",
"type": "attribute",
"attribute": "href"
},
{
"name": "profile_image",
"selector": "a[href*='/company/'] img",
"type": "attribute",
"attribute": "src"
},
{
"name": "name",
"selector": "span[class*='t-16'] a",
"type": "text"
},
{
"name": "descriptor",
"selector": "div[class*='t-black t-normal']",
"type": "text"
},
{
"name": "about",
"selector": "p[class*='entity-result__summary--2-lines']",
"type": "text"
},
{
"name": "followers",
"selector": "div:contains('followers')",
"type": "regex",
"pattern": "(\\d+)\\s*followers"
}
]
}


@@ -0,0 +1,38 @@
{
"name": "LinkedIn People Card",
"baseSelector": "li.org-people-profile-card__profile-card-spacing",
"fields": [
{
"name": "profile_url",
"selector": "a.eETATgYTipaVsmrBChiBJJvFsdPhNpulhPZUVLHLo",
"type": "attribute",
"attribute": "href"
},
{
"name": "name",
"selector": ".artdeco-entity-lockup__title .lt-line-clamp--single-line",
"type": "text"
},
{
"name": "headline",
"selector": ".artdeco-entity-lockup__subtitle .lt-line-clamp--multi-line",
"type": "text"
},
{
"name": "followers",
"selector": ".lt-line-clamp--multi-line.t-12",
"type": "text"
},
{
"name": "connection_degree",
"selector": ".artdeco-entity-lockup__badge .artdeco-entity-lockup__degree",
"type": "text"
},
{
"name": "avatar_url",
"selector": ".artdeco-entity-lockup__image img",
"type": "attribute",
"attribute": "src"
}
]
}


@@ -0,0 +1,143 @@
<li class="yCLWzruNprmIzaZzFFonVFBtMrbaVYnuDFA">
<!----><!---->
<div class="IxlEPbRZwQYrRltKPvHAyjBmCdIWTAoYo" data-chameleon-result-urn="urn:li:company:362492"
data-view-name="search-entity-result-universal-template">
<div class="linked-area flex-1
cursor-pointer">
<div class="BAEgVqVuxosMJZodcelsgPoyRcrkiqgVCGHXNQ">
<div class="afcvrbGzNuyRlhPPQWrWirJtUdHAAtUlqxwvVA">
<div class="display-flex align-items-center">
<!---->
<a class="eETATgYTipaVsmrBChiBJJvFsdPhNpulhPZUVLHLo scale-down " aria-hidden="true"
tabindex="-1" href="https://www.linkedin.com/company/managment-research-services-inc./"
data-test-app-aware-link="">
<div class="ivm-image-view-model ">
<div class="ivm-view-attr__img-wrapper
">
<!---->
<!----> <img width="48"
src="https://media.licdn.com/dms/image/v2/C560BAQFWpusEOgW-ww/company-logo_100_100/company-logo_100_100/0/1630583697877/managment_research_services_inc_logo?e=1750896000&amp;v=beta&amp;t=Ch9vyEZdfng-1D1m_XqP5kjNpVXUBKkk9cNhMZUhx0E"
loading="lazy" height="48" alt="Management Research Services, Inc. (MRS, Inc)"
id="ember28"
class="ivm-view-attr__img--centered EntityPhoto-square-3 evi-image lazy-image ember-view">
</div>
</div>
</a>
</div>
</div>
<div
class="wympnVuDByXHvafWrMGJLZuchDmCRqLmWPwg MmzCPRicJimZvjJhvqTzDcDbdHhWPzspERzA pt3 pb3 t-12 t-black--light">
<div class="mb1">
<div class="t-roman t-sans">
<div class="display-flex">
<span class="TikBXjihYvcNUoIzkslUaEjfIuLmYxfs OoHEyXgsiIqGADjcOtTmfdpoYVXrLKTvkwI ">
<span class="CgaWLOzmXNuKbRIRARSErqCJcBPYudEKo
t-16">
<a class="eETATgYTipaVsmrBChiBJJvFsdPhNpulhPZUVLHLo "
href="https://www.linkedin.com/company/managment-research-services-inc./"
data-test-app-aware-link="">
<!---->Management Research Services, Inc. (MRS, Inc)<!---->
<!----> </a>
<!----> </span>
</span>
<!---->
</div>
</div>
<div class="LjmdKCEqKITHihFOiQsBAQylkdnsWhqZii
t-14 t-black t-normal">
<!---->Insurance • Milwaukee, Wisconsin<!---->
</div>
<div class="cTPhJiHyNLmxdQYFlsEOutjznmqrVHUByZwZ
t-14 t-normal">
<!---->1K followers<!---->
</div>
</div>
<!---->
<p class="yWzlqwKNlvCWVNoKqmzoDDEnBMUuyynaLg
entity-result__summary--2-lines
t-12 t-black--light
">
<!---->MRS combines 30 years of experience supporting the Life,<span class="white-space-pre">
</span><strong><!---->Health<!----></strong><span class="white-space-pre"> </span>and
Annuities<span class="white-space-pre"> </span><strong><!---->Insurance<!----></strong><span
class="white-space-pre"> </span>Industry with customized<span class="white-space-pre">
</span><strong><!---->insurance<!----></strong><span class="white-space-pre">
</span>underwriting solutions that efficiently support clients workflows. Supported by the
Agenium Platform (www.agenium.ai) our innovative underwriting solutions are guaranteed to
optimize requirements...<!---->
</p>
<!---->
</div>
<div class="qXxdnXtzRVFTnTnetmNpssucBwQBsWlUuk MmzCPRicJimZvjJhvqTzDcDbdHhWPzspERzA">
<!---->
<div>
<button aria-label="Follow Management Research Services, Inc. (MRS, Inc)" id="ember61"
class="artdeco-button artdeco-button--2 artdeco-button--secondary ember-view"
type="button"><!---->
<span class="artdeco-button__text">
Follow
</span></button>
<!---->
<!---->
</div>
</div>
</div>
</div>
</div>
</li>


@@ -0,0 +1,94 @@
<li class="grid grid__col--lg-8 block org-people-profile-card__profile-card-spacing">
<div>
<section class="artdeco-card full-width qQdPErXQkSAbwApNgNfuxukTIPPykttCcZGOHk">
<!---->
<img width="210" src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
ariarole="presentation" loading="lazy" height="210" alt="" id="ember96"
class="evi-image lazy-image ghost-default ember-view org-people-profile-card__cover-photo org-people-profile-card__cover-photo--people">
<div class="org-people-profile-card__profile-info">
<div id="ember97"
class="artdeco-entity-lockup artdeco-entity-lockup--stacked-center artdeco-entity-lockup--size-7 ember-view">
<div id="ember98"
class="artdeco-entity-lockup__image artdeco-entity-lockup__image--type-circle ember-view"
type="circle">
<a class="eETATgYTipaVsmrBChiBJJvFsdPhNpulhPZUVLHLo "
id="org-people-profile-card__profile-image-0"
href="https://www.linkedin.com/in/speakerrayna?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAABsqUBoBr5x071PuGGpNtK3NlvSARiVXPIs"
data-test-app-aware-link="">
<img width="104"
src="https://media.licdn.com/dms/image/v2/D5603AQGs2Vyju4xZ7A/profile-displayphoto-shrink_100_100/profile-displayphoto-shrink_100_100/0/1681741067031?e=1750896000&amp;v=beta&amp;t=Hvj--IrrmpVIH7pec7-l_PQok8vsS__CGeUqBWOw7co"
loading="lazy" height="104" alt="Dr. Rayna S." id="ember99"
class="evi-image lazy-image ember-view">
</a>
</div>
<div id="ember100" class="artdeco-entity-lockup__content ember-view">
<div id="ember101" class="artdeco-entity-lockup__title ember-view">
<a class="eETATgYTipaVsmrBChiBJJvFsdPhNpulhPZUVLHLo link-without-visited-state"
aria-label="View Dr. Rayna S.s profile"
href="https://www.linkedin.com/in/speakerrayna?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAABsqUBoBr5x071PuGGpNtK3NlvSARiVXPIs"
data-test-app-aware-link="">
<div id="ember103" class="ember-view lt-line-clamp lt-line-clamp--single-line AGabuksChUpCmjWshSnaZryLKSthOKkwclxY
t-black" style="">
Dr. Rayna S.
<!---->
</div>
</a>
</div>
<div id="ember104" class="artdeco-entity-lockup__badge ember-view"> <span class="a11y-text">3rd+
degree connection</span>
<span class="artdeco-entity-lockup__degree" aria-hidden="true">
·&nbsp;3rd
</span>
<!----><!---->
</div>
<div id="ember105" class="artdeco-entity-lockup__subtitle ember-view">
<div class="t-14 t-black--light t-normal">
<div id="ember107" class="ember-view lt-line-clamp lt-line-clamp--multi-line"
style="-webkit-line-clamp: 2">
Leadership and Talent Development Consultant and Professional Speaker
<!---->
</div>
</div>
</div>
<div id="ember108" class="artdeco-entity-lockup__caption ember-view"></div>
</div>
</div>
<span class="text-align-center">
<span id="ember110"
class="ember-view lt-line-clamp lt-line-clamp--multi-line t-12 t-black--light mt2"
style="-webkit-line-clamp: 3">
727 followers
<!----> </span>
</span>
</div>
<footer class="ph3 pb3">
<button aria-label="Follow Dr. Rayna S." id="ember111"
class="artdeco-button artdeco-button--2 artdeco-button--secondary ember-view full-width"
type="button"><!---->
<span class="artdeco-button__text">
Follow
</span></button>
</footer>
</section>
</div>
</li>


@@ -0,0 +1,50 @@
// ==== File: ai.js ====
class ApiHandler {
constructor(apiKey = null) {
this.apiKey = apiKey || localStorage.getItem("openai_api_key") || "";
console.log("ApiHandler ready");
}
setApiKey(k) {
this.apiKey = k.trim();
if (this.apiKey) localStorage.setItem("openai_api_key", this.apiKey);
}
async *chatStream(messages, {model = "gpt-4o", temperature = 0.7} = {}) {
if (!this.apiKey) throw new Error("OpenAI API key missing");
const payload = {model, messages, stream: true, max_tokens: 1024};
const controller = new AbortController();
const res = await fetch("https://api.openai.com/v1/chat/completions", {
method: "POST",
headers: {
"Content-Type": "application/json",
Authorization: `Bearer ${this.apiKey}`,
},
body: JSON.stringify(payload),
signal: controller.signal,
});
if (!res.ok) throw new Error(`OpenAI: ${res.statusText}`);
const reader = res.body.getReader();
const dec = new TextDecoder();
let buf = "";
while (true) {
const {done, value} = await reader.read();
if (done) break;
      buf += dec.decode(value, {stream: true});
      const lines = buf.split("\n");
      buf = lines.pop(); // keep the trailing partial line for the next chunk
      for (const line of lines) {
        if (!line.startsWith("data: ")) continue;
        if (line.includes("[DONE]")) return;
        const json = JSON.parse(line.slice(6));
        const delta = json.choices?.[0]?.delta?.content;
        if (delta) yield delta;
      }
}
}
}
window.API = new ApiHandler();

File diff suppressed because it is too large

docs/codebase/browser.md

@@ -0,0 +1,51 @@
### browser_manager.py
| Function | What it does |
|---|---|
| `ManagedBrowser.build_browser_flags` | Returns baseline Chromium CLI flags, disables GPU and sandbox, plugs locale, timezone, stealth tweaks, and any extras from `BrowserConfig`. |
| `ManagedBrowser.__init__` | Stores config and logger, creates temp dir, preps internal state. |
| `ManagedBrowser.start` | Spawns or connects to the Chromium process, returns its CDP endpoint plus the `subprocess.Popen` handle. |
| `ManagedBrowser._initial_startup_check` | Pings the CDP endpoint once to be sure the browser is alive, raises if not. |
| `ManagedBrowser._monitor_browser_process` | Async-loops on the subprocess, logs exits or crashes, restarts if policy allows. |
| `ManagedBrowser._get_browser_path_WIP` | Old helper that maps OS + browser type to an executable path. |
| `ManagedBrowser._get_browser_path` | Current helper, checks env vars, Playwright cache, and OS defaults for the real executable. |
| `ManagedBrowser._get_browser_args` | Builds the final CLI arg list by merging user flags, stealth flags, and defaults. |
| `ManagedBrowser.cleanup` | Terminates the browser, stops monitors, deletes the temp dir. |
| `ManagedBrowser.create_profile` | Opens a visible browser so a human can log in, then zips the resulting user-data-dir to `~/.crawl4ai/profiles/<name>`. |
| `ManagedBrowser.list_profiles` | Thin wrapper, now forwarded to `BrowserProfiler.list_profiles()`. |
| `ManagedBrowser.delete_profile` | Thin wrapper, now forwarded to `BrowserProfiler.delete_profile()`. |
| `BrowserManager.__init__` | Holds the global Playwright instance, browser handle, config signature cache, session map, and logger. |
| `BrowserManager.start` | Boots the underlying `ManagedBrowser`, then spins up the default Playwright browser context with stealth patches. |
| `BrowserManager._build_browser_args` | Translates `CrawlerRunConfig` (proxy, UA, timezone, headless flag, etc.) into Playwright `launch_args`. |
| `BrowserManager.setup_context` | Applies locale, geolocation, permissions, cookies, and UA overrides on a fresh context. |
| `BrowserManager.create_browser_context` | Internal helper that actually calls `browser.new_context(**options)` after running `setup_context`. |
| `BrowserManager._make_config_signature` | Hashes the non-ephemeral parts of `CrawlerRunConfig` so contexts can be reused safely. |
| `BrowserManager.get_page` | Returns a ready `Page` for a given session id, reusing an existing one or creating a new context/page, injects helper scripts, updates `last_used`. |
| `BrowserManager.kill_session` | Force-closes a context/page for a session and removes it from the session map. |
| `BrowserManager._cleanup_expired_sessions` | Periodic sweep that drops sessions idle longer than `ttl_seconds`. |
| `BrowserManager.close` | Gracefully shuts down all contexts, the browser, Playwright, and background tasks. |
---
### browser_profiler.py
| Function | What it does |
|---|---|
| `BrowserProfiler.__init__` | Sets up profile folder paths, async logger, and signal handlers. |
| `BrowserProfiler.create_profile` | Launches a visible browser with a new user-data-dir for manual login, on exit compresses and stores it as a named profile. |
| `BrowserProfiler.cleanup_handler` | General SIGTERM/SIGINT cleanup wrapper that kills child processes. |
| `BrowserProfiler.sigint_handler` | Handles Ctrl-C during an interactive session, makes sure the browser shuts down cleanly. |
| `BrowserProfiler.listen_for_quit_command` | Async REPL that exits when the user types `q`. |
| `BrowserProfiler.list_profiles` | Enumerates `~/.crawl4ai/profiles`, prints profile name, browser type, size, and last modified. |
| `BrowserProfiler.get_profile_path` | Returns the absolute path of a profile given its name, or `None` if missing. |
| `BrowserProfiler.delete_profile` | Removes a profile folder or a direct path from disk, with optional confirmation prompt. |
| `BrowserProfiler.interactive_manager` | Text UI loop for listing, creating, deleting, or launching profiles. |
| `BrowserProfiler.launch_standalone_browser` | Starts a non-headless Chromium with remote debugging enabled and keeps it alive for manual tests. |
| `BrowserProfiler.get_cdp_json` | Pulls `/json/version` from a CDP endpoint and returns the parsed JSON. |
| `BrowserProfiler.launch_builtin_browser` | Spawns a headless Chromium in the background, saves `{wsEndpoint, pid, started_at}` to `~/.crawl4ai/builtin_browser.json`. |
| `BrowserProfiler.get_builtin_browser_info` | Reads that JSON file, verifies the PID, and returns browser status info. |
| `BrowserProfiler._is_browser_running` | Cross-platform helper that checks if a PID is still alive. |
| `BrowserProfiler.kill_builtin_browser` | Terminates the background builtin browser and removes its status file. |
| `BrowserProfiler.get_builtin_browser_status` | Returns `{running: bool, wsEndpoint, pid, started_at}` for quick health checks. |
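The profiler is the piece most user code touches directly. A minimal sketch of driving it (import path taken from the Stage-1 script earlier in this PR; exact signatures are an assumption based on the table above):
```python
from crawl4ai import BrowserProfiler

profiler = BrowserProfiler()

# Resolve a profile previously created via `crwl profiles`
path = profiler.get_profile_path("profile_linkedin_uc")  # None if it doesn't exist
if path is None:
    print("No such profile; run `crwl profiles` to create one")
else:
    print(f"user-data-dir: {path}")
```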

docs/codebase/cli.md

@@ -0,0 +1,40 @@
### `cli.py` command surface
| Command | Inputs / flags | What it does |
|---|---|---|
| **profiles** | *(none)* | Opens the interactive profile manager, lets you list, create, delete saved browser profiles that live in `~/.crawl4ai/profiles`. |
| **browser status** | | Prints whether the always-on *builtin* browser is running, shows its CDP URL, PID, start time. |
| **browser stop** | | Kills the builtin browser and deletes its status file. |
| **browser view** | `--url, -u` URL *(optional)* | Pops a visible window of the builtin browser, navigates to `URL` or `about:blank`. |
| **config list** | | Dumps every global setting, showing current value, default, and description. |
| **config get** | `key` | Prints the value of a single setting, falls back to default if unset. |
| **config set** | `key value` | Persists a new value in the global config (stored under `~/.crawl4ai/config.yml`). |
| **examples** | | Just spits out real-world CLI usage samples. |
| **crawl** | `url` *(positional)*<br>`--browser-config,-B` path<br>`--crawler-config,-C` path<br>`--filter-config,-f` path<br>`--extraction-config,-e` path<br>`--json-extract,-j` [desc]\*<br>`--schema,-s` path<br>`--browser,-b` k=v list<br>`--crawler,-c` k=v list<br>`--output,-o` all,json,markdown,md,markdown-fit,md-fit *(default all)*<br>`--output-file,-O` path<br>`--bypass-cache,-b` *(flag, default true; note it reuses `-b` from `--browser`)*<br>`--question,-q` str<br>`--verbose,-v` *(flag)*<br>`--profile,-p` profile-name | One-shot crawl + extraction. Builds `BrowserConfig` and `CrawlerRunConfig` from inline flags or separate YAML/JSON files, runs `AsyncWebCrawler.arun()`, can route through a named saved profile, and pipes the result to stdout or a file (see the Python sketch below the mental-model note). |
| **(default)** | Same flags as **crawl**, plus `--example` | Shortcut so you can type just `crwl https://site.com`. When the first argument is not a known sub-command, it falls through to *crawl*. |
\* `--json-extract/-j` with no value turns on LLM-based JSON extraction with an auto-generated schema; supplying a string lets you prompt-engineer the field descriptions.
> Quick mental model
> `profiles` = manage identities,
> `browser ...` = control long-running headless Chrome that all crawls can piggy-back on,
> `crawl` = do the actual work,
> `config` = tweak global defaults,
> everything else is sugar.
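For orientation, here is a rough programmatic equivalent of `crwl https://site.com -p my-profile` (a hedged sketch: the default profile path and the exact flag-to-field mapping are assumptions):
```python
# Roughly what `crwl https://site.com -p my-profile` sets up under the hood.
import asyncio
import os
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

async def main():
    browser_cfg = BrowserConfig(
        headless=True,
        use_managed_browser=True,  # what -p/--profile toggles
        user_data_dir=os.path.expanduser("~/.crawl4ai/profiles/my-profile"),  # assumed path
    )
    run_cfg = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)  # --bypass-cache default
    async with AsyncWebCrawler(config=browser_cfg) as crawler:
        result = await crawler.arun("https://site.com", config=run_cfg)
        print(result.markdown.raw_markdown[:300])

asyncio.run(main())
```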
### Quick-fire “profile” usage cheatsheet
| Scenario | Command (copy-paste ready) | Notes |
|---|---|---|
| **Launch interactive Profile Manager UI** | `crwl profiles` | Opens TUI with options: 1 List, 2 Create, 3 Delete, 4 Use-to-crawl, 5 Exit. |
| **Create a fresh profile** | `crwl profiles` → choose **2** → name it → browser opens → log in → press **q** in terminal | Saves to `~/.crawl4ai/profiles/<name>`. |
| **List saved profiles** | `crwl profiles` → choose **1** | Shows name, browser type, size, last-modified. |
| **Delete a profile** | `crwl profiles` → choose **3** → pick the profile index → confirm | Removes the folder. |
| **Crawl with a profile (default alias)** | `crwl https://site.com/dashboard -p my-profile` | Keeps login cookies, sets `use_managed_browser=true` under the hood. |
| **Crawl + verbose JSON output** | `crwl https://site.com -p my-profile -o json -v` | Any other `crawl` flags work the same. |
| **Crawl with extra browser tweaks** | `crwl https://site.com -p my-profile -b "headless=true,viewport_width=1680"` | CLI overrides go on top of the profile. |
| **Same but via explicit sub-command** | `crwl crawl https://site.com -p my-profile` | Identical to default alias. |
| **Use profile from inside Profile Manager** | `crwl profiles` → choose **4** → pick profile → enter URL → follow prompts | Handy when demo-ing to non-CLI folks. |
| **One-off crawl with a profile folder path (no name lookup)** | `crwl https://site.com -b "user_data_dir=$HOME/.crawl4ai/profiles/my-profile,use_managed_browser=true"` | Bypasses registry, useful for CI scripts. |
| **Launch a dev browser on CDP port with the same identity** | `crwl cdp -d $HOME/.crawl4ai/profiles/my-profile -P 9223` | Lets Puppeteer/Playwright attach for debugging (see the sketch below this table). |
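To show what "attach for debugging" means in practice, here is a hedged Playwright sketch that connects to a browser exposing CDP on port 9223, matching the command above (the endpoint URL and context handling are assumptions):
```python
# Hedged sketch: attach Playwright to a browser exposing CDP on port 9223.
import asyncio
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as p:
        # connect_over_cdp takes the HTTP endpoint of an already-running browser
        browser = await p.chromium.connect_over_cdp("http://localhost:9223")
        context = browser.contexts[0] if browser.contexts else await browser.new_context()
        page = await context.new_page()
        await page.goto("https://example.com")
        print(await page.title())
        await browser.close()  # detaches; the browser itself keeps running

asyncio.run(main())
```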

View File

@@ -383,29 +383,31 @@ async def main():
scroll_delay=0.2,
)
# # Execute market data extraction
# results: List[CrawlResult] = await crawler.arun(
# url="https://coinmarketcap.com/?page=1", config=crawl_config
# )
# Execute market data extraction
results: List[CrawlResult] = await crawler.arun(
url="https://coinmarketcap.com/?page=1", config=crawl_config
)
# # Process results
# raw_df = pd.DataFrame()
# for result in results:
# if result.success and result.media["tables"]:
# # Extract primary market table
# # DataFrame
# raw_df = pd.DataFrame(
# result.media["tables"][0]["rows"],
# columns=result.media["tables"][0]["headers"],
# )
# break
# Process results
raw_df = pd.DataFrame()
for result in results:
# Use the new tables field, falling back to media["tables"] for backward compatibility
tables = result.tables if hasattr(result, "tables") and result.tables else result.media.get("tables", [])
if result.success and tables:
# Extract primary market table
# DataFrame
raw_df = pd.DataFrame(
tables[0]["rows"],
columns=tables[0]["headers"],
)
break
# This is for debugging only
# ////// Remove this in production from here..
# Save raw data for debugging
# raw_df.to_csv(f"{__current_dir__}/tmp/raw_crypto_data.csv", index=False)
# print("🔍 Raw data saved to 'raw_crypto_data.csv'")
raw_df.to_csv(f"{__current_dir__}/tmp/raw_crypto_data.csv", index=False)
print("🔍 Raw data saved to 'raw_crypto_data.csv'")
# Read from file for debugging
raw_df = pd.read_csv(f"{__current_dir__}/tmp/raw_crypto_data.csv")

File diff suppressed because it is too large

View File

@@ -0,0 +1,149 @@
#!/usr/bin/env python3
"""
demo_docker_polling.py
Quick sanity-check for the asynchronous crawl job endpoints:
• POST /crawl/job - enqueue work, get task_id
• GET /crawl/job/{id} - poll status / fetch result
The style matches demo_docker_api.py (console.rule banners, helper
functions, coloured status lines). Adjust BASE_URL as needed.
Run: python demo_docker_polling.py
"""
import asyncio, json, os, time
from typing import Dict
import httpx
from rich.console import Console
from rich.panel import Panel
from rich.syntax import Syntax
console = Console()
BASE_URL = os.getenv("BASE_URL", "http://localhost:11234")
SIMPLE_URL = "https://example.org"
LINKS_URL = "https://httpbin.org/links/10/1"
# --- helpers --------------------------------------------------------------
def print_payload(payload: Dict):
console.print(Panel(Syntax(json.dumps(payload, indent=2),
"json", theme="monokai", line_numbers=False),
title="Payload", border_style="cyan", expand=False))
async def check_server_health(client: httpx.AsyncClient) -> bool:
try:
resp = await client.get("/health")
if resp.is_success:
console.print("[green]Server healthy[/]")
return True
except Exception:
pass
console.print("[bold red]Server is not responding on /health[/]")
return False
async def poll_for_result(client: httpx.AsyncClient, task_id: str,
poll_interval: float = 1.5, timeout: float = 90.0):
"""Hit /crawl/job/{id} until COMPLETED/FAILED or timeout."""
start = time.time()
while True:
resp = await client.get(f"/crawl/job/{task_id}")
resp.raise_for_status()
data = resp.json()
status = data.get("status")
if status and status.upper() in ("COMPLETED", "FAILED"):
return data
if time.time() - start > timeout:
raise TimeoutError(f"Task {task_id} did not finish in {timeout}s")
await asyncio.sleep(poll_interval)
# --- demo functions -------------------------------------------------------
async def demo_poll_single_url(client: httpx.AsyncClient):
payload = {
"urls": [SIMPLE_URL],
"browser_config": {"type": "BrowserConfig",
"params": {"headless": True}},
"crawler_config": {"type": "CrawlerRunConfig",
"params": {"cache_mode": "BYPASS"}}
}
console.rule("[bold blue]Demo A: /crawl/job Single URL[/]", style="blue")
print_payload(payload)
# enqueue
resp = await client.post("/crawl/job", json=payload)
console.print(f"Enqueue status: [bold]{resp.status_code}[/]")
resp.raise_for_status()
task_id = resp.json()["task_id"]
console.print(f"Task ID: [yellow]{task_id}[/]")
# poll
console.print("Polling…")
result = await poll_for_result(client, task_id)
console.print(Panel(Syntax(json.dumps(result, indent=2),
"json", theme="fruity"),
title="Final result", border_style="green"))
if result["status"] == "COMPLETED":
console.print("[green]✅ Crawl succeeded[/]")
else:
console.print("[red]❌ Crawl failed[/]")
async def demo_poll_multi_url(client: httpx.AsyncClient):
payload = {
"urls": [SIMPLE_URL, LINKS_URL],
"browser_config": {"type": "BrowserConfig",
"params": {"headless": True}},
"crawler_config": {"type": "CrawlerRunConfig",
"params": {"cache_mode": "BYPASS"}}
}
console.rule("[bold magenta]Demo B: /crawl/job Multi-URL[/]",
style="magenta")
print_payload(payload)
resp = await client.post("/crawl/job", json=payload)
console.print(f"Enqueue status: [bold]{resp.status_code}[/]")
resp.raise_for_status()
task_id = resp.json()["task_id"]
console.print(f"Task ID: [yellow]{task_id}[/]")
console.print("Polling…")
result = await poll_for_result(client, task_id)
console.print(Panel(Syntax(json.dumps(result, indent=2),
"json", theme="fruity"),
title="Final result", border_style="green"))
if result["status"] == "COMPLETED":
console.print(
f"[green]✅ {len(json.loads(result['result'])['results'])} URLs crawled[/]")
else:
console.print("[red]❌ Crawl failed[/]")
# --- main runner ----------------------------------------------------------
async def main_demo():
async with httpx.AsyncClient(base_url=BASE_URL, timeout=300.0) as client:
if not await check_server_health(client):
return
await demo_poll_single_url(client)
await demo_poll_multi_url(client)
console.rule("[bold green]Polling demos complete[/]", style="green")
if __name__ == "__main__":
try:
asyncio.run(main_demo())
except KeyboardInterrupt:
console.print("\n[yellow]Interrupted by user[/]")
except Exception:
console.print_exception(show_locals=False)

View File

@@ -12,9 +12,10 @@ We've introduced a new feature that effortlessly handles even the biggest page
**Simple Example:**
```python
import os, sys
import os
import sys
import asyncio
from crawl4ai import AsyncWebCrawler, CacheMode
from crawl4ai import AsyncWebCrawler, CacheMode, CrawlerRunConfig
# Adjust paths as needed
parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
@@ -26,9 +27,11 @@ async def main():
# Request both PDF and screenshot
result = await crawler.arun(
url='https://en.wikipedia.org/wiki/List_of_common_misconceptions',
cache_mode=CacheMode.BYPASS,
pdf=True,
screenshot=True
config=CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
pdf=True,
screenshot=True
)
)
if result.success:
@@ -40,9 +43,8 @@ async def main():
# Save PDF
if result.pdf:
pdf_bytes = b64decode(result.pdf)
with open(os.path.join(__location__, "page.pdf"), "wb") as f:
f.write(pdf_bytes)
f.write(result.pdf)
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -3,45 +3,24 @@ from crawl4ai import (
AsyncWebCrawler,
BrowserConfig,
CrawlerRunConfig,
CacheMode,
DefaultMarkdownGenerator,
PruningContentFilter,
CrawlResult
)
async def example_cdp():
browser_conf = BrowserConfig(
headless=False,
cdp_url="http://localhost:9223"
)
crawler_config = CrawlerRunConfig(
session_id="test",
js_code = """(() => { return {"result": "Hello World!"} })()""",
js_only=True
)
async with AsyncWebCrawler(
config=browser_conf,
verbose=True,
) as crawler:
result : CrawlResult = await crawler.arun(
url="https://www.helloworld.org",
config=crawler_config,
)
print(result.js_execution_result)
async def main():
browser_config = BrowserConfig(headless=True, verbose=True)
browser_config = BrowserConfig(
headless=False,
verbose=True,
)
async with AsyncWebCrawler(config=browser_config) as crawler:
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
)
content_filter=PruningContentFilter()
),
)
result : CrawlResult = await crawler.arun(
result: CrawlResult = await crawler.arun(
url="https://www.helloworld.org", config=crawler_config
)
print(result.markdown.raw_markdown[:500])

View File

@@ -0,0 +1,64 @@
"""
Example showing how to use the content_source parameter to control HTML input for markdown generation.
"""
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, DefaultMarkdownGenerator
async def demo_content_source():
"""Demonstrates different content_source options for markdown generation."""
url = "https://example.com" # Simple demo site
print("Crawling with different content_source options...")
# --- Example 1: Default Behavior (cleaned_html) ---
# This uses the HTML after it has been processed by the scraping strategy
# The HTML is cleaned, simplified, and optimized for readability
default_generator = DefaultMarkdownGenerator() # content_source="cleaned_html" is default
default_config = CrawlerRunConfig(markdown_generator=default_generator)
# --- Example 2: Raw HTML ---
# This uses the original HTML directly from the webpage
# Preserves more original content but may include navigation, ads, etc.
raw_generator = DefaultMarkdownGenerator(content_source="raw_html")
raw_config = CrawlerRunConfig(markdown_generator=raw_generator)
# --- Example 3: Fit HTML ---
# This uses preprocessed HTML optimized for schema extraction
# Better for structured data extraction but may lose some formatting
fit_generator = DefaultMarkdownGenerator(content_source="fit_html")
fit_config = CrawlerRunConfig(markdown_generator=fit_generator)
# Execute all three crawlers in sequence
async with AsyncWebCrawler() as crawler:
# Default (cleaned_html)
result_default = await crawler.arun(url=url, config=default_config)
# Raw HTML
result_raw = await crawler.arun(url=url, config=raw_config)
# Fit HTML
result_fit = await crawler.arun(url=url, config=fit_config)
# Print a summary of the results
print("\nMarkdown Generation Results:\n")
print("1. Default (cleaned_html):")
print(f" Length: {len(result_default.markdown.raw_markdown)} chars")
print(f" First 80 chars: {result_default.markdown.raw_markdown[:80]}...\n")
print("2. Raw HTML:")
print(f" Length: {len(result_raw.markdown.raw_markdown)} chars")
print(f" First 80 chars: {result_raw.markdown.raw_markdown[:80]}...\n")
print("3. Fit HTML:")
print(f" Length: {len(result_fit.markdown.raw_markdown)} chars")
print(f" First 80 chars: {result_fit.markdown.raw_markdown[:80]}...\n")
# Demonstrate differences in output
print("\nKey Takeaways:")
print("- cleaned_html: Best for readable, focused content")
print("- raw_html: Preserves more original content, but may include noise")
print("- fit_html: Optimized for schema extraction and structured data")
if __name__ == "__main__":
asyncio.run(demo_content_source())

View File

@@ -0,0 +1,42 @@
"""
Example demonstrating how to use the content_source parameter in MarkdownGenerationStrategy
"""
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, DefaultMarkdownGenerator
async def demo_markdown_source_config():
print("\n=== Demo: Configuring Markdown Source ===")
# Example 1: Generate markdown from cleaned HTML (default behavior)
cleaned_md_generator = DefaultMarkdownGenerator(content_source="cleaned_html")
config_cleaned = CrawlerRunConfig(markdown_generator=cleaned_md_generator)
async with AsyncWebCrawler() as crawler:
result_cleaned = await crawler.arun(url="https://example.com", config=config_cleaned)
print("Markdown from Cleaned HTML (default):")
print(f" Length: {len(result_cleaned.markdown.raw_markdown)}")
print(f" Start: {result_cleaned.markdown.raw_markdown[:100]}...")
# Example 2: Generate markdown directly from raw HTML
raw_md_generator = DefaultMarkdownGenerator(content_source="raw_html")
config_raw = CrawlerRunConfig(markdown_generator=raw_md_generator)
async with AsyncWebCrawler() as crawler:
result_raw = await crawler.arun(url="https://example.com", config=config_raw)
print("\nMarkdown from Raw HTML:")
print(f" Length: {len(result_raw.markdown.raw_markdown)}")
print(f" Start: {result_raw.markdown.raw_markdown[:100]}...")
# Example 3: Generate markdown from preprocessed 'fit' HTML
fit_md_generator = DefaultMarkdownGenerator(content_source="fit_html")
config_fit = CrawlerRunConfig(markdown_generator=fit_md_generator)
async with AsyncWebCrawler() as crawler:
result_fit = await crawler.arun(url="https://example.com", config=config_fit)
print("\nMarkdown from Fit HTML:")
print(f" Length: {len(result_fit.markdown.raw_markdown)}")
print(f" Start: {result_fit.markdown.raw_markdown[:100]}...")
if __name__ == "__main__":
asyncio.run(demo_markdown_source_config())

View File

@@ -0,0 +1,477 @@
import asyncio
import json
import os
import base64
from pathlib import Path
from typing import List, Dict, Any
from datetime import datetime
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode, CrawlResult
from crawl4ai import BrowserConfig
__cur_dir__ = Path(__file__).parent
# Create temp directory if it doesn't exist
os.makedirs(os.path.join(__cur_dir__, "tmp"), exist_ok=True)
async def demo_basic_network_capture():
"""Basic network request capturing example"""
print("\n=== 1. Basic Network Request Capturing ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
wait_until="networkidle" # Wait for network to be idle
)
result = await crawler.arun(
url="https://example.com/",
config=config
)
if result.success and result.network_requests:
print(f"Captured {len(result.network_requests)} network events")
# Count by event type
event_types = {}
for req in result.network_requests:
event_type = req.get("event_type", "unknown")
event_types[event_type] = event_types.get(event_type, 0) + 1
print("Event types:")
for event_type, count in event_types.items():
print(f" - {event_type}: {count}")
# Show a sample request and response
request = next((r for r in result.network_requests if r.get("event_type") == "request"), None)
response = next((r for r in result.network_requests if r.get("event_type") == "response"), None)
if request:
print("\nSample request:")
print(f" URL: {request.get('url')}")
print(f" Method: {request.get('method')}")
print(f" Headers: {list(request.get('headers', {}).keys())}")
if response:
print("\nSample response:")
print(f" URL: {response.get('url')}")
print(f" Status: {response.get('status')} {response.get('status_text', '')}")
print(f" Headers: {list(response.get('headers', {}).keys())}")
async def demo_basic_console_capture():
"""Basic console message capturing example"""
print("\n=== 2. Basic Console Message Capturing ===")
# Create a simple HTML file with console messages
html_file = os.path.join(__cur_dir__, "tmp", "console_test.html")
with open(html_file, "w") as f:
f.write("""
<!DOCTYPE html>
<html>
<head>
<title>Console Test</title>
</head>
<body>
<h1>Console Message Test</h1>
<script>
console.log("This is a basic log message");
console.info("This is an info message");
console.warn("This is a warning message");
console.error("This is an error message");
// Generate an error
try {
nonExistentFunction();
} catch (e) {
console.error("Caught error:", e);
}
</script>
</body>
</html>
""")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_console_messages=True,
wait_until="networkidle" # Wait to make sure all scripts execute
)
result = await crawler.arun(
url=f"file://{html_file}",
config=config
)
if result.success and result.console_messages:
print(f"Captured {len(result.console_messages)} console messages")
# Count by message type
message_types = {}
for msg in result.console_messages:
msg_type = msg.get("type", "unknown")
message_types[msg_type] = message_types.get(msg_type, 0) + 1
print("Message types:")
for msg_type, count in message_types.items():
print(f" - {msg_type}: {count}")
# Show all messages
print("\nAll console messages:")
for i, msg in enumerate(result.console_messages, 1):
print(f" {i}. [{msg.get('type', 'unknown')}] {msg.get('text', '')}")
async def demo_combined_capture():
"""Capturing both network requests and console messages"""
print("\n=== 3. Combined Network and Console Capture ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True,
wait_until="networkidle"
)
result = await crawler.arun(
url="https://httpbin.org/html",
config=config
)
if result.success:
network_count = len(result.network_requests) if result.network_requests else 0
console_count = len(result.console_messages) if result.console_messages else 0
print(f"Captured {network_count} network events and {console_count} console messages")
# Save the captured data to a JSON file for analysis
output_file = os.path.join(__cur_dir__, "tmp", "capture_data.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"timestamp": datetime.now().isoformat(),
"network_requests": result.network_requests,
"console_messages": result.console_messages
}, f, indent=2)
print(f"Full capture data saved to {output_file}")
async def analyze_spa_network_traffic():
"""Analyze network traffic of a Single-Page Application"""
print("\n=== 4. Analyzing SPA Network Traffic ===")
async with AsyncWebCrawler(config=BrowserConfig(
headless=True,
viewport_width=1280,
viewport_height=800
)) as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True,
# Wait longer to ensure all resources are loaded
wait_until="networkidle",
page_timeout=60000, # 60 seconds
)
result = await crawler.arun(
url="https://weather.com",
config=config
)
if result.success and result.network_requests:
# Extract different types of requests
requests = []
responses = []
failures = []
for event in result.network_requests:
event_type = event.get("event_type")
if event_type == "request":
requests.append(event)
elif event_type == "response":
responses.append(event)
elif event_type == "request_failed":
failures.append(event)
print(f"Captured {len(requests)} requests, {len(responses)} responses, and {len(failures)} failures")
# Analyze request types
resource_types = {}
for req in requests:
resource_type = req.get("resource_type", "unknown")
resource_types[resource_type] = resource_types.get(resource_type, 0) + 1
print("\nResource types:")
for resource_type, count in sorted(resource_types.items(), key=lambda x: x[1], reverse=True):
print(f" - {resource_type}: {count}")
# Analyze API calls
api_calls = [r for r in requests if "api" in r.get("url", "").lower()]
if api_calls:
print(f"\nDetected {len(api_calls)} API calls:")
for i, call in enumerate(api_calls[:5], 1): # Show first 5
print(f" {i}. {call.get('method')} {call.get('url')}")
if len(api_calls) > 5:
print(f" ... and {len(api_calls) - 5} more")
# Analyze response status codes
status_codes = {}
for resp in responses:
status = resp.get("status", 0)
status_codes[status] = status_codes.get(status, 0) + 1
print("\nResponse status codes:")
for status, count in sorted(status_codes.items()):
print(f" - {status}: {count}")
# Analyze failures
if failures:
print("\nFailed requests:")
for i, failure in enumerate(failures[:5], 1): # Show first 5
print(f" {i}. {failure.get('url')} - {failure.get('failure_text')}")
if len(failures) > 5:
print(f" ... and {len(failures) - 5} more")
# Check for console errors
if result.console_messages:
errors = [msg for msg in result.console_messages if msg.get("type") == "error"]
if errors:
print(f"\nDetected {len(errors)} console errors:")
for i, error in enumerate(errors[:3], 1): # Show first 3
print(f" {i}. {error.get('text', '')[:100]}...")
if len(errors) > 3:
print(f" ... and {len(errors) - 3} more")
# Save analysis to file
output_file = os.path.join(__cur_dir__, "tmp", "weather_network_analysis.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"timestamp": datetime.now().isoformat(),
"statistics": {
"request_count": len(requests),
"response_count": len(responses),
"failure_count": len(failures),
"resource_types": resource_types,
"status_codes": {str(k): v for k, v in status_codes.items()},
"api_call_count": len(api_calls),
"console_error_count": len(errors) if result.console_messages else 0
},
"network_requests": result.network_requests,
"console_messages": result.console_messages
}, f, indent=2)
print(f"\nFull analysis saved to {output_file}")
async def demo_security_analysis():
"""Using network capture for security analysis"""
print("\n=== 5. Security Analysis with Network Capture ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True,
wait_until="networkidle"
)
# A site that makes multiple third-party requests
result = await crawler.arun(
url="https://www.nytimes.com/",
config=config
)
if result.success and result.network_requests:
print(f"Captured {len(result.network_requests)} network events")
# Extract all domains
domains = set()
for req in result.network_requests:
if req.get("event_type") == "request":
url = req.get("url", "")
try:
from urllib.parse import urlparse
domain = urlparse(url).netloc
if domain:
domains.add(domain)
except Exception:
pass
print(f"\nDetected requests to {len(domains)} unique domains:")
main_domain = urlparse(result.url).netloc
# Separate first-party vs third-party domains
first_party = [d for d in domains if main_domain in d]
third_party = [d for d in domains if main_domain not in d]
print(f" - First-party domains: {len(first_party)}")
print(f" - Third-party domains: {len(third_party)}")
# Look for potential trackers/analytics
tracking_keywords = ["analytics", "tracker", "pixel", "tag", "stats", "metric", "collect", "beacon"]
potential_trackers = []
for domain in third_party:
if any(keyword in domain.lower() for keyword in tracking_keywords):
potential_trackers.append(domain)
if potential_trackers:
print(f"\nPotential tracking/analytics domains ({len(potential_trackers)}):")
for i, domain in enumerate(sorted(potential_trackers)[:10], 1):
print(f" {i}. {domain}")
if len(potential_trackers) > 10:
print(f" ... and {len(potential_trackers) - 10} more")
# Check for insecure (HTTP) requests
insecure_requests = [
req.get("url") for req in result.network_requests
if req.get("event_type") == "request" and req.get("url", "").startswith("http://")
]
if insecure_requests:
print(f"\nWarning: Found {len(insecure_requests)} insecure (HTTP) requests:")
for i, url in enumerate(insecure_requests[:5], 1):
print(f" {i}. {url}")
if len(insecure_requests) > 5:
print(f" ... and {len(insecure_requests) - 5} more")
# Save security analysis to file
output_file = os.path.join(__cur_dir__, "tmp", "security_analysis.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"main_domain": main_domain,
"timestamp": datetime.now().isoformat(),
"analysis": {
"total_requests": len([r for r in result.network_requests if r.get("event_type") == "request"]),
"unique_domains": len(domains),
"first_party_domains": first_party,
"third_party_domains": third_party,
"potential_trackers": potential_trackers,
"insecure_requests": insecure_requests
}
}, f, indent=2)
print(f"\nFull security analysis saved to {output_file}")
async def demo_performance_analysis():
"""Using network capture for performance analysis"""
print("\n=== 6. Performance Analysis with Network Capture ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
page_timeout=60 * 2 * 1000 # 120 seconds
)
result = await crawler.arun(
url="https://www.cnn.com/",
config=config
)
if result.success and result.network_requests:
# Filter only response events with timing information
responses_with_timing = [
r for r in result.network_requests
if r.get("event_type") == "response" and r.get("request_timing")
]
if responses_with_timing:
print(f"Analyzing timing for {len(responses_with_timing)} network responses")
# Group by resource type
resource_timings = {}
for resp in responses_with_timing:
url = resp.get("url", "")
timing = resp.get("request_timing", {})
# Determine resource type from URL extension
ext = url.split(".")[-1].lower() if "." in url.split("/")[-1] else "unknown"
if ext in ["jpg", "jpeg", "png", "gif", "webp", "svg", "ico"]:
resource_type = "image"
elif ext in ["js"]:
resource_type = "javascript"
elif ext in ["css"]:
resource_type = "css"
elif ext in ["woff", "woff2", "ttf", "otf", "eot"]:
resource_type = "font"
else:
resource_type = "other"
if resource_type not in resource_timings:
resource_timings[resource_type] = []
# Calculate request duration if timing information is available
if isinstance(timing, dict) and "requestTime" in timing and "receiveHeadersEnd" in timing:
# Convert to milliseconds
duration = (timing["receiveHeadersEnd"] - timing["requestTime"]) * 1000
resource_timings[resource_type].append({
"url": url,
"duration_ms": duration
})
if isinstance(timing, dict) and "requestStart" in timing and "responseStart" in timing and "startTime" in timing:
# Convert to milliseconds
duration = (timing["responseStart"] - timing["requestStart"]) * 1000
resource_timings[resource_type].append({
"url": url,
"duration_ms": duration
})
# Calculate statistics for each resource type
print("\nPerformance by resource type:")
for resource_type, timings in resource_timings.items():
if timings:
durations = [t["duration_ms"] for t in timings]
avg_duration = sum(durations) / len(durations)
max_duration = max(durations)
slowest_resource = next(t["url"] for t in timings if t["duration_ms"] == max_duration)
print(f" {resource_type.upper()}:")
print(f" - Count: {len(timings)}")
print(f" - Avg time: {avg_duration:.2f} ms")
print(f" - Max time: {max_duration:.2f} ms")
print(f" - Slowest: {slowest_resource}")
# Identify the slowest resources overall
all_timings = []
for resource_type, timings in resource_timings.items():
for timing in timings:
timing["type"] = resource_type
all_timings.append(timing)
all_timings.sort(key=lambda x: x["duration_ms"], reverse=True)
print("\nTop 5 slowest resources:")
for i, timing in enumerate(all_timings[:5], 1):
print(f" {i}. [{timing['type']}] {timing['url']} - {timing['duration_ms']:.2f} ms")
# Save performance analysis to file
output_file = os.path.join(__cur_dir__, "tmp", "performance_analysis.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"timestamp": datetime.now().isoformat(),
"resource_timings": resource_timings,
"slowest_resources": all_timings[:10] # Save top 10
}, f, indent=2)
print(f"\nFull performance analysis saved to {output_file}")
async def main():
"""Run all demo functions sequentially"""
print("=== Network and Console Capture Examples ===")
# Make sure tmp directory exists
os.makedirs(os.path.join(__cur_dir__, "tmp"), exist_ok=True)
# Run basic examples
# await demo_basic_network_capture()
await demo_basic_console_capture()
# await demo_combined_capture()
# Run advanced examples
# await analyze_spa_network_traffic()
# await demo_security_analysis()
# await demo_performance_analysis()
print("\n=== Examples Complete ===")
print(f"Check the tmp directory for output files: {os.path.join(__cur_dir__, 'tmp')}")
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,412 @@
import asyncio
import os
import json
import base64
from pathlib import Path
from typing import List
from crawl4ai import ProxyConfig
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode, CrawlResult
from crawl4ai import RoundRobinProxyStrategy
from crawl4ai import JsonCssExtractionStrategy, LLMExtractionStrategy
from crawl4ai import LLMConfig
from crawl4ai import PruningContentFilter, BM25ContentFilter
from crawl4ai import DefaultMarkdownGenerator
from crawl4ai import BFSDeepCrawlStrategy, DomainFilter, FilterChain
from crawl4ai import BrowserConfig
__cur_dir__ = Path(__file__).parent
async def demo_basic_crawl():
"""Basic web crawling with markdown generation"""
print("\n=== 1. Basic Web Crawling ===")
async with AsyncWebCrawler(config = BrowserConfig(
viewport_height=800,
viewport_width=1200,
headless=True,
verbose=True,
)) as crawler:
results: List[CrawlResult] = await crawler.arun(
url="https://news.ycombinator.com/"
)
for i, result in enumerate(results):
print(f"Result {i + 1}:")
print(f"Success: {result.success}")
if result.success:
print(f"Markdown length: {len(result.markdown.raw_markdown)} chars")
print(f"First 100 chars: {result.markdown.raw_markdown[:100]}...")
else:
print("Failed to crawl the URL")
async def demo_parallel_crawl():
"""Crawl multiple URLs in parallel"""
print("\n=== 2. Parallel Crawling ===")
urls = [
"https://news.ycombinator.com/",
"https://example.com/",
"https://httpbin.org/html",
]
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun_many(
urls=urls,
)
print(f"Crawled {len(results)} URLs in parallel:")
for i, result in enumerate(results):
print(
f" {i + 1}. {result.url} - {'Success' if result.success else 'Failed'}"
)
async def demo_fit_markdown():
"""Generate focused markdown with LLM content filter"""
print("\n=== 3. Fit Markdown with LLM Content Filter ===")
async with AsyncWebCrawler() as crawler:
result: CrawlResult = await crawler.arun(
url = "https://en.wikipedia.org/wiki/Python_(programming_language)",
config=CrawlerRunConfig(
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter()
)
),
)
# Print stats and save the fit markdown
print(f"Raw: {len(result.markdown.raw_markdown)} chars")
print(f"Fit: {len(result.markdown.fit_markdown)} chars")
async def demo_llm_structured_extraction_no_schema():
# Create a simple LLM extraction strategy (no schema required)
extraction_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(
provider="groq/qwen-2.5-32b",
api_token="env:GROQ_API_KEY",
),
instruction="This is news.ycombinator.com, extract all news, and for each, I want title, source url, number of comments.",
extract_type="schema",
schema="{title: string, url: string, comments: int}",
extra_args={
"temperature": 0.0,
"max_tokens": 4096,
},
verbose=True,
)
config = CrawlerRunConfig(extraction_strategy=extraction_strategy)
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
"https://news.ycombinator.com/", config=config
)
for result in results:
print(f"URL: {result.url}")
print(f"Success: {result.success}")
if result.success:
data = json.loads(result.extracted_content)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
async def demo_css_structured_extraction_no_schema():
"""Extract structured data using CSS selectors"""
print("\n=== 5. CSS-Based Structured Extraction ===")
# Sample HTML for schema generation (one-time cost)
sample_html = """
<div class="body-post clear">
<a class="story-link" href="https://thehackernews.com/2025/04/malicious-python-packages-on-pypi.html">
<div class="clear home-post-box cf">
<div class="home-img clear">
<div class="img-ratio">
<img alt="..." src="...">
</div>
</div>
<div class="clear home-right">
<h2 class="home-title">Malicious Python Packages on PyPI Downloaded 39,000+ Times, Steal Sensitive Data</h2>
<div class="item-label">
<span class="h-datetime"><i class="icon-font icon-calendar"></i>Apr 05, 2025</span>
<span class="h-tags">Malware / Supply Chain Attack</span>
</div>
<div class="home-desc"> Cybersecurity researchers have...</div>
</div>
</div>
</a>
</div>
"""
# Check if schema file exists
schema_file_path = f"{__cur_dir__}/tmp/schema.json"
if os.path.exists(schema_file_path):
with open(schema_file_path, "r") as f:
schema = json.load(f)
else:
# Generate schema using LLM (one-time setup)
schema = JsonCssExtractionStrategy.generate_schema(
html=sample_html,
llm_config=LLMConfig(
provider="groq/qwen-2.5-32b",
api_token="env:GROQ_API_KEY",
),
query="From https://thehackernews.com/, I have shared a sample of one news div with a title, date, and description. Please generate a schema for this news div.",
)
print(f"Generated schema: {json.dumps(schema, indent=2)}")
# Save the schema to a file , and use it for future extractions, in result for such extraction you will call LLM once
with open(f"{__cur_dir__}/tmp/schema.json", "w") as f:
json.dump(schema, f, indent=2)
# Create no-LLM extraction strategy with the generated schema
extraction_strategy = JsonCssExtractionStrategy(schema)
config = CrawlerRunConfig(extraction_strategy=extraction_strategy)
# Use the fast CSS extraction (no LLM calls during extraction)
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
"https://thehackernews.com", config=config
)
for result in results:
print(f"URL: {result.url}")
print(f"Success: {result.success}")
if result.success:
data = json.loads(result.extracted_content)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
async def demo_deep_crawl():
"""Deep crawling with BFS strategy"""
print("\n=== 6. Deep Crawling ===")
filter_chain = FilterChain([DomainFilter(allowed_domains=["crawl4ai.com"])])
deep_crawl_strategy = BFSDeepCrawlStrategy(
max_depth=1, max_pages=5, filter_chain=filter_chain
)
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
url="https://docs.crawl4ai.com",
config=CrawlerRunConfig(deep_crawl_strategy=deep_crawl_strategy),
)
print(f"Deep crawl returned {len(results)} pages:")
for i, result in enumerate(results):
depth = result.metadata.get("depth", "unknown")
print(f" {i + 1}. {result.url} (Depth: {depth})")
async def demo_js_interaction():
"""Execute JavaScript to load more content"""
print("\n=== 7. JavaScript Interaction ===")
# A simple page that needs JS to reveal content
async with AsyncWebCrawler(config=BrowserConfig(headless=False)) as crawler:
# Initial load
news_schema = {
"name": "news",
"baseSelector": "tr.athing",
"fields": [
{
"name": "title",
"selector": "span.titleline",
"type": "text",
}
],
}
results: List[CrawlResult] = await crawler.arun(
url="https://news.ycombinator.com",
config=CrawlerRunConfig(
session_id="hn_session", # Keep session
extraction_strategy=JsonCssExtractionStrategy(schema=news_schema),
),
)
news = []
for result in results:
if result.success:
data = json.loads(result.extracted_content)
news.extend(data)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
print(f"Initial items: {len(news)}")
# Click "More" link
more_config = CrawlerRunConfig(
js_code="document.querySelector('a.morelink').click();",
js_only=True, # Continue in same page
session_id="hn_session", # Keep session
extraction_strategy=JsonCssExtractionStrategy(
schema=news_schema,
),
)
result: List[CrawlResult] = await crawler.arun(
url="https://news.ycombinator.com", config=more_config
)
# Extract new items
for result in results:
if result.success:
data = json.loads(result.extracted_content)
news.extend(data)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
print(f"Total items: {len(news)}")
async def demo_media_and_links():
"""Extract media and links from a page"""
print("\n=== 8. Media and Links Extraction ===")
async with AsyncWebCrawler() as crawler:
result: List[CrawlResult] = await crawler.arun("https://en.wikipedia.org/wiki/Main_Page")
for i, result in enumerate(result):
# Extract and save all images
images = result.media.get("images", [])
print(f"Found {len(images)} images")
# Extract and save all links (internal and external)
internal_links = result.links.get("internal", [])
external_links = result.links.get("external", [])
print(f"Found {len(internal_links)} internal links")
print(f"Found {len(external_links)} external links")
# Print some of the images and links
for image in images[:3]:
print(f"Image: {image['src']}")
for link in internal_links[:3]:
print(f"Internal link: {link['href']}")
for link in external_links[:3]:
print(f"External link: {link['href']}")
# Save everything to files
with open(f"{__cur_dir__}/tmp/images.json", "w") as f:
json.dump(images, f, indent=2)
with open(f"{__cur_dir__}/tmp/links.json", "w") as f:
json.dump(
{"internal": internal_links, "external": external_links},
f,
indent=2,
)
async def demo_screenshot_and_pdf():
"""Capture screenshot and PDF of a page"""
print("\n=== 9. Screenshot and PDF Capture ===")
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
# url="https://example.com",
url="https://en.wikipedia.org/wiki/Giant_anteater",
config=CrawlerRunConfig(screenshot=True, pdf=True),
)
for i, result in enumerate(results):
# if result.screenshot_data:
if result.screenshot:
# Save screenshot
screenshot_path = f"{__cur_dir__}/tmp/example_screenshot.png"
with open(screenshot_path, "wb") as f:
f.write(base64.b64decode(result.screenshot))
print(f"Screenshot saved to {screenshot_path}")
# if result.pdf_data:
if result.pdf:
# Save PDF
pdf_path = f"{__cur_dir__}/tmp/example.pdf"
with open(pdf_path, "wb") as f:
f.write(result.pdf)
print(f"PDF saved to {pdf_path}")
async def demo_proxy_rotation():
"""Proxy rotation for multiple requests"""
print("\n=== 10. Proxy Rotation ===")
# Example proxies (replace with real ones)
proxies = [
ProxyConfig(server="http://proxy1.example.com:8080"),
ProxyConfig(server="http://proxy2.example.com:8080"),
]
proxy_strategy = RoundRobinProxyStrategy(proxies)
print(f"Using {len(proxies)} proxies in rotation")
print(
"Note: This example uses placeholder proxies - replace with real ones to test"
)
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
proxy_rotation_strategy=proxy_strategy
)
# In a real scenario, these would be run and the proxies would rotate
print("In a real scenario, requests would rotate through the available proxies")
async def demo_raw_html_and_file():
"""Process raw HTML and local files"""
print("\n=== 11. Raw HTML and Local Files ===")
raw_html = """
<html><body>
<h1>Sample Article</h1>
<p>This is sample content for testing Crawl4AI's raw HTML processing.</p>
</body></html>
"""
# Save to file
file_path = Path("docs/examples/tmp/sample.html").absolute()
with open(file_path, "w") as f:
f.write(raw_html)
async with AsyncWebCrawler() as crawler:
# Crawl raw HTML
raw_result = await crawler.arun(
url="raw:" + raw_html, config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
)
print("Raw HTML processing:")
print(f" Markdown: {raw_result.markdown.raw_markdown[:50]}...")
# Crawl local file
file_result = await crawler.arun(
url=f"file://{file_path}",
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
)
print("\nLocal file processing:")
print(f" Markdown: {file_result.markdown.raw_markdown[:50]}...")
# Clean up
os.remove(file_path)
print(f"Processed both raw HTML and local file ({file_path})")
async def main():
"""Run all demo functions sequentially"""
print("=== Comprehensive Crawl4AI Demo ===")
print("Note: Some examples require API keys or other configurations")
# Run all demos
await demo_basic_crawl()
await demo_parallel_crawl()
await demo_fit_markdown()
await demo_llm_structured_extraction_no_schema()
await demo_css_structured_extraction_no_schema()
await demo_deep_crawl()
await demo_js_interaction()
await demo_media_and_links()
await demo_screenshot_and_pdf()
# await demo_proxy_rotation()
await demo_raw_html_and_file()
# Clean up any temp files that may have been created
print("\n=== Demo Complete ===")
print("Check for any generated files (screenshots, PDFs) in the current directory")
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,562 @@
import os, sys
from crawl4ai.types import LLMConfig
sys.path.append(
os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
)
import asyncio
import time
import json
import re
from typing import Dict
from bs4 import BeautifulSoup
from pydantic import BaseModel, Field
from crawl4ai import AsyncWebCrawler, CacheMode, BrowserConfig, CrawlerRunConfig
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
from crawl4ai.content_filter_strategy import PruningContentFilter
from crawl4ai.extraction_strategy import (
JsonCssExtractionStrategy,
LLMExtractionStrategy,
)
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
print("Crawl4AI: Advanced Web Crawling and Data Extraction")
print("GitHub Repository: https://github.com/unclecode/crawl4ai")
print("Twitter: @unclecode")
print("Website: https://crawl4ai.com")
# Basic Example - Simple Crawl
async def simple_crawl():
print("\n--- Basic Usage ---")
browser_config = BrowserConfig(headless=True)
crawler_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
print(result.markdown[:500])
async def clean_content():
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
excluded_tags=["nav", "footer", "aside"],
remove_overlay_elements=True,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
),
options={"ignore_links": True},
),
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://en.wikipedia.org/wiki/Apple",
config=crawler_config,
)
full_markdown_length = len(result.markdown.raw_markdown)
fit_markdown_length = len(result.markdown.fit_markdown)
print(f"Full Markdown Length: {full_markdown_length}")
print(f"Fit Markdown Length: {fit_markdown_length}")
async def link_analysis():
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.ENABLED,
exclude_external_links=True,
exclude_social_media_links=True,
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business",
config=crawler_config,
)
print(f"Found {len(result.links['internal'])} internal links")
print(f"Found {len(result.links['external'])} external links")
for link in result.links["internal"][:5]:
print(f"Href: {link['href']}\nText: {link['text']}\n")
# JavaScript Execution Example
async def simple_example_with_running_js_code():
print("\n--- Executing JavaScript and Using CSS Selectors ---")
browser_config = BrowserConfig(headless=True, java_script_enabled=True)
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
js_code="const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();",
# wait_for="() => { return Array.from(document.querySelectorAll('article.tease-card')).length > 10; }"
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
print(result.markdown[:500])
# CSS Selector Example
async def simple_example_with_css_selector():
print("\n--- Using CSS Selectors ---")
browser_config = BrowserConfig(headless=True)
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS, css_selector=".wide-tease-item__description"
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
print(result.markdown[:500])
async def media_handling():
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS, exclude_external_images=True, screenshot=True
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
for img in result.media["images"][:5]:
print(f"Image URL: {img['src']}, Alt: {img['alt']}, Score: {img['score']}")
async def custom_hook_workflow(verbose=True):
async with AsyncWebCrawler() as crawler:
# Set a 'before_goto' hook to run custom code just before navigation
crawler.crawler_strategy.set_hook(
"before_goto",
lambda page, context: print("[Hook] Preparing to navigate..."),
)
# Perform the crawl operation
result = await crawler.arun(url="https://crawl4ai.com")
print(result.markdown.raw_markdown[:500].replace("\n", " -- "))
# Proxy Example
async def use_proxy():
print("\n--- Using a Proxy ---")
browser_config = BrowserConfig(
headless=True,
proxy_config={
"server": "http://proxy.example.com:8080",
"username": "username",
"password": "password",
},
)
crawler_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
if result.success:
print(result.markdown[:500])
# Screenshot Example
async def capture_and_save_screenshot(url: str, output_path: str):
browser_config = BrowserConfig(headless=True)
crawler_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, screenshot=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url=url, config=crawler_config)
if result.success and result.screenshot:
import base64
screenshot_data = base64.b64decode(result.screenshot)
with open(output_path, "wb") as f:
f.write(screenshot_data)
print(f"Screenshot saved successfully to {output_path}")
else:
print("Failed to capture screenshot")
# LLM Extraction Example
class OpenAIModelFee(BaseModel):
model_name: str = Field(..., description="Name of the OpenAI model.")
input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
output_fee: str = Field(
..., description="Fee for output token for the OpenAI model."
)
async def extract_structured_data_using_llm(
provider: str, api_token: str = None, extra_headers: Dict[str, str] = None
):
print(f"\n--- Extracting Structured Data with {provider} ---")
if api_token is None and provider != "ollama":
print(f"API token is required for {provider}. Skipping this example.")
return
browser_config = BrowserConfig(headless=True)
extra_args = {"temperature": 0, "top_p": 0.9, "max_tokens": 2000}
if extra_headers:
extra_args["extra_headers"] = extra_headers
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
word_count_threshold=1,
page_timeout=80000,
extraction_strategy=LLMExtractionStrategy(
llm_config=LLMConfig(provider=provider,api_token=api_token),
schema=OpenAIModelFee.model_json_schema(),
extraction_type="schema",
instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
Do not miss any models in the entire content.""",
extra_args=extra_args,
),
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://openai.com/api/pricing/", config=crawler_config
)
print(result.extracted_content)
# CSS Extraction Example
async def extract_structured_data_using_css_extractor():
print("\n--- Using JsonCssExtractionStrategy for Fast Structured Output ---")
schema = {
"name": "KidoCode Courses",
"baseSelector": "section.charge-methodology .framework-collection-item.w-dyn-item",
"fields": [
{
"name": "section_title",
"selector": "h3.heading-50",
"type": "text",
},
{
"name": "section_description",
"selector": ".charge-content",
"type": "text",
},
{
"name": "course_name",
"selector": ".text-block-93",
"type": "text",
},
{
"name": "course_description",
"selector": ".course-content-text",
"type": "text",
},
{
"name": "course_icon",
"selector": ".image-92",
"type": "attribute",
"attribute": "src",
},
],
}
browser_config = BrowserConfig(headless=True, java_script_enabled=True)
js_click_tabs = """
(async () => {
const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div");
for(let tab of tabs) {
tab.scrollIntoView();
tab.click();
await new Promise(r => setTimeout(r, 500));
}
})();
"""
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
extraction_strategy=JsonCssExtractionStrategy(schema),
js_code=[js_click_tabs],
delay_before_return_html=1
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.kidocode.com/degrees/technology", config=crawler_config
)
companies = json.loads(result.extracted_content)
print(f"Successfully extracted {len(companies)} companies")
print(json.dumps(companies[0], indent=2))
# Dynamic Content Examples - Method 1
async def crawl_dynamic_content_pages_method_1():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
first_commit = ""
async def on_execution_started(page, **kwargs):
nonlocal first_commit
try:
while True:
await page.wait_for_selector("li.Box-sc-g0xbh4-0 h4")
commit = await page.query_selector("li.Box-sc-g0xbh4-0 h4")
commit = await commit.evaluate("(element) => element.textContent")
commit = re.sub(r"\s+", "", commit)
if commit and commit != first_commit:
first_commit = commit
break
await asyncio.sleep(0.5)
except Exception as e:
print(f"Warning: New content didn't appear after JavaScript execution: {e}")
browser_config = BrowserConfig(headless=False, java_script_enabled=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
js_next_page = """
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
"""
for page in range(3):
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
css_selector="li.Box-sc-g0xbh4-0",
js_code=js_next_page if page > 0 else None,
js_only=page > 0,
session_id=session_id,
)
result = await crawler.arun(url=url, config=crawler_config)
assert result.success, f"Failed to crawl page {page + 1}"
soup = BeautifulSoup(result.cleaned_html, "html.parser")
commits = soup.select("li")
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
# Dynamic Content Examples - Method 2
async def crawl_dynamic_content_pages_method_2():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
browser_config = BrowserConfig(headless=False, java_script_enabled=True)
js_next_page_and_wait = """
(async () => {
const getCurrentCommit = () => {
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
return commits.length > 0 ? commits[0].textContent.trim() : null;
};
const initialCommit = getCurrentCommit();
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
while (true) {
await new Promise(resolve => setTimeout(resolve, 100));
const newCommit = getCurrentCommit();
if (newCommit && newCommit !== initialCommit) {
break;
}
}
})();
"""
schema = {
"name": "Commit Extractor",
"baseSelector": "li.Box-sc-g0xbh4-0",
"fields": [
{
"name": "title",
"selector": "h4.markdown-title",
"type": "text",
"transform": "strip",
},
],
}
async with AsyncWebCrawler(config=browser_config) as crawler:
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
extraction_strategy = JsonCssExtractionStrategy(schema)
for page in range(3):
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
css_selector="li.Box-sc-g0xbh4-0",
extraction_strategy=extraction_strategy,
js_code=js_next_page_and_wait if page > 0 else None,
js_only=page > 0,
session_id=session_id,
)
result = await crawler.arun(url=url, config=crawler_config)
assert result.success, f"Failed to crawl page {page + 1}"
commits = json.loads(result.extracted_content)
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def cosine_similarity_extraction():
from crawl4ai.extraction_strategy import CosineStrategy
crawl_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
extraction_strategy=CosineStrategy(
word_count_threshold=10,
max_dist=0.2, # Maximum distance between two words
linkage_method="ward", # Linkage method for hierarchical clustering (ward, complete, average, single)
top_k=3, # Number of top keywords to extract
sim_threshold=0.3, # Similarity threshold for clustering
semantic_filter="McDonald's economic impact, American consumer trends", # Keywords to filter the content semantically using embeddings
verbose=True,
),
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156",
config=crawl_config,
)
print(json.loads(result.extracted_content)[:5])
# Browser Comparison
async def crawl_custom_browser_type():
print("\n--- Browser Comparison ---")
# Firefox
browser_config_firefox = BrowserConfig(browser_type="firefox", headless=True)
start = time.time()
async with AsyncWebCrawler(config=browser_config_firefox) as crawler:
result = await crawler.arun(
url="https://www.example.com",
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
)
print("Firefox:", time.time() - start)
print(result.markdown[:500])
# WebKit
browser_config_webkit = BrowserConfig(browser_type="webkit", headless=True)
start = time.time()
async with AsyncWebCrawler(config=browser_config_webkit) as crawler:
result = await crawler.arun(
url="https://www.example.com",
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
)
print("WebKit:", time.time() - start)
print(result.markdown[:500])
# Chromium (default)
browser_config_chromium = BrowserConfig(browser_type="chromium", headless=True)
start = time.time()
async with AsyncWebCrawler(config=browser_config_chromium) as crawler:
result = await crawler.arun(
url="https://www.example.com",
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
)
print("Chromium:", time.time() - start)
print(result.markdown[:500])
# Anti-Bot and User Simulation
async def crawl_with_user_simulation():
browser_config = BrowserConfig(
headless=True,
user_agent_mode="random",
user_agent_generator_config={"device_type": "mobile", "os_type": "android"},
)
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
magic=True,
simulate_user=True,
override_navigator=True,
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url="YOUR-URL-HERE", config=crawler_config)
print(result.markdown)
async def ssl_certification():
# Configure crawler to fetch SSL certificate
config = CrawlerRunConfig(
fetch_ssl_certificate=True,
cache_mode=CacheMode.BYPASS, # Bypass cache to always get fresh certificates
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url="https://example.com", config=config)
if result.success and result.ssl_certificate:
cert = result.ssl_certificate
tmp_dir = os.path.join(__location__, "tmp")
os.makedirs(tmp_dir, exist_ok=True)
# 1. Access certificate properties directly
print("\nCertificate Information:")
print(f"Issuer: {cert.issuer.get('CN', '')}")
print(f"Valid until: {cert.valid_until}")
print(f"Fingerprint: {cert.fingerprint}")
# 2. Export certificate in different formats
cert.to_json(os.path.join(tmp_dir, "certificate.json")) # For analysis
print("\nCertificate exported to:")
print(f"- JSON: {os.path.join(tmp_dir, 'certificate.json')}")
pem_data = cert.to_pem(
os.path.join(tmp_dir, "certificate.pem")
) # For web servers
print(f"- PEM: {os.path.join(tmp_dir, 'certificate.pem')}")
der_data = cert.to_der(
os.path.join(tmp_dir, "certificate.der")
) # For Java apps
print(f"- DER: {os.path.join(tmp_dir, 'certificate.der')}")
# Main execution
async def main():
# Basic examples
await simple_crawl()
await simple_example_with_running_js_code()
await simple_example_with_css_selector()
# Advanced examples
await extract_structured_data_using_css_extractor()
await extract_structured_data_using_llm(
"openai/gpt-4o", os.getenv("OPENAI_API_KEY")
)
await crawl_dynamic_content_pages_method_1()
await crawl_dynamic_content_pages_method_2()
# Browser comparisons
await crawl_custom_browser_type()
# Screenshot example
await capture_and_save_screenshot(
"https://www.example.com",
os.path.join(__location__, "tmp/example_screenshot.jpg")
)
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,143 @@
# == File: regex_extraction_quickstart.py ==
"""
Mini quick-start for RegexExtractionStrategy
────────────────────────────────────────────
3 bite-sized demos that parallel the style of *quickstart_examples_set_1.py*:
1. **Default catalog**: scrape a page and pull out e-mails / phones / URLs, etc.
2. **Custom pattern**: add your own regex at instantiation time.
3. **LLM-assisted schema**: ask the model to write a pattern, cache it, then
run extraction _without_ further LLM calls.
Run the whole thing with::
python regex_extraction_quickstart.py
"""
import json, asyncio
from pathlib import Path
from crawl4ai import (
AsyncWebCrawler,
CrawlerRunConfig,
CrawlResult,
RegexExtractionStrategy,
LLMConfig,
)
# ────────────────────────────────────────────────────────────────────────────
# 1. Default-catalog extraction
# ────────────────────────────────────────────────────────────────────────────
async def demo_regex_default() -> None:
print("\n=== 1. Regex extraction default patterns ===")
url = "https://www.iana.org/domains/example" # has e-mail + URLs
strategy = RegexExtractionStrategy(
pattern = RegexExtractionStrategy.Url | RegexExtractionStrategy.Currency
)
config = CrawlerRunConfig(extraction_strategy=strategy)
async with AsyncWebCrawler() as crawler:
result: CrawlResult = await crawler.arun(url, config=config)
print(f"Fetched {url} - success={result.success}")
if result.success:
data = json.loads(result.extracted_content)
for d in data[:10]:
print(f" {d['label']:<12} {d['value']}")
print(f"... total matches: {len(data)}")
else:
print(" !!! crawl failed")
# ────────────────────────────────────────────────────────────────────────────
# 2. Custom pattern override / extension
# ────────────────────────────────────────────────────────────────────────────
async def demo_regex_custom() -> None:
print("\n=== 2. Regex extraction custom price pattern ===")
url = "https://www.apple.com/shop/buy-mac/macbook-pro"
price_pattern = {"usd_price": r"\$\s?\d{1,3}(?:,\d{3})*(?:\.\d{2})?"}
strategy = RegexExtractionStrategy(custom = price_pattern)
config = CrawlerRunConfig(extraction_strategy=strategy)
async with AsyncWebCrawler() as crawler:
result: CrawlResult = await crawler.arun(url, config=config)
if result.success:
data = json.loads(result.extracted_content)
for d in data:
print(f" {d['value']}")
if not data:
print(" (No prices found - page layout may have changed)")
else:
print(" !!! crawl failed")
# ────────────────────────────────────────────────────────────────────────────
# 3. One-shot LLM pattern generation, then fast extraction
# ────────────────────────────────────────────────────────────────────────────
async def demo_regex_generate_pattern() -> None:
print("\n=== 3. generate_pattern → regex extraction ===")
cache_dir = Path(__file__).parent / "tmp"
cache_dir.mkdir(exist_ok=True)
pattern_file = cache_dir / "price_pattern.json"
url = "https://www.lazada.sg/tag/smartphone/"
# ── 3-A. build or load the cached pattern
if pattern_file.exists():
pattern = json.loads(pattern_file.read_text(encoding="utf-8"))
print("Loaded cached pattern:", pattern)
else:
print("Generating pattern via LLM…")
llm_cfg = LLMConfig(
provider="openai/gpt-4o-mini",
api_token="env:OPENAI_API_KEY",
)
# pull one sample page as HTML context
async with AsyncWebCrawler() as crawler:
html = (await crawler.arun(url)).fit_html
pattern = RegexExtractionStrategy.generate_pattern(
label="price",
html=html,
query="Prices in Malaysian Ringgit (e.g. RM1,299.00 or RM200)",
llm_config=llm_cfg,
)
pattern_file.write_text(json.dumps(pattern, indent=2), encoding="utf-8")
print("Saved pattern:", pattern_file)
# ── 3-B. extraction pass: zero LLM calls
strategy = RegexExtractionStrategy(custom=pattern)
config = CrawlerRunConfig(extraction_strategy=strategy, delay_before_return_html=3)
async with AsyncWebCrawler() as crawler:
result: CrawlResult = await crawler.arun(url, config=config)
if result.success:
data = json.loads(result.extracted_content)
for d in data[:15]:
print(f" {d['value']}")
print(f"... total matches: {len(data)}")
else:
print(" !!! crawl failed")
# ────────────────────────────────────────────────────────────────────────────
# Entrypoint
# ────────────────────────────────────────────────────────────────────────────
async def main() -> None:
await demo_regex_default()
await demo_regex_custom()
await demo_regex_generate_pattern()
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,38 @@
import asyncio
from crawl4ai import (
AsyncWebCrawler,
BrowserConfig,
CrawlerRunConfig,
DefaultMarkdownGenerator,
PruningContentFilter,
CrawlResult
)
async def main():
browser_config = BrowserConfig(
headless=False,
verbose=True,
)
async with AsyncWebCrawler(config=browser_config) as crawler:
crawler_config = CrawlerRunConfig(
session_id= "hello_world", # This help us to use the same page
)
result : CrawlResult = await crawler.arun(
url="https://www.helloworld.org", config=crawler_config
)
# Add a breakpoint here; you will see the page stays open and the browser is not closed
print(result.markdown.raw_markdown[:500])
new_config = crawler_config.clone(js_code=["(() => ({'data':'hello'}))()"], js_only=True)
result : CrawlResult = await crawler.arun( # No new fetch this time; only runs JS in the already-open page
url="https://www.helloworld.org", config= new_config
)
print(result.js_execution_result) # You should see {'data':'hello'} in the console
# Get direct access to the Playwright page object. This works only if you reuse the same session_id and pass the same config
page, context = crawler.crawler_strategy.get_page(new_config)
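# From here you could drive the page directly with Playwright, e.g.:
#   title = await page.title()  # (sketch; any Playwright page API applies)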
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -13,7 +13,7 @@ from crawl4ai.deep_crawling import (
)
from crawl4ai.deep_crawling.scorers import KeywordRelevanceScorer
from crawl4ai.async_crawler_strategy import AsyncHTTPCrawlerStrategy
from crawl4ai.proxy_strategy import ProxyConfig
from crawl4ai import ProxyConfig
from crawl4ai import RoundRobinProxyStrategy
from crawl4ai.content_filter_strategy import LLMContentFilter
from crawl4ai import DefaultMarkdownGenerator

View File

@@ -0,0 +1,70 @@
# use_geo_location.py
"""
Example: override locale, timezone, and geolocation using Crawl4AI patterns.
This demo uses `AsyncWebCrawler.arun()` to fetch a page with
browser context primed for specific locale, timezone, and GPS,
and saves a screenshot for visual verification.
"""
import asyncio
import base64
from pathlib import Path
from typing import List
from crawl4ai import (
AsyncWebCrawler,
CrawlerRunConfig,
BrowserConfig,
GeolocationConfig,
CrawlResult,
)
async def demo_geo_override():
"""Demo: Crawl a geolocation-test page with overrides and screenshot."""
print("\n=== Geo-Override Crawl ===")
# 1) Browser setup: use Playwright-managed contexts
browser_cfg = BrowserConfig(
headless=False,
viewport_width=1280,
viewport_height=720,
use_managed_browser=False,
)
# 2) Run config: include locale, timezone_id, geolocation, and screenshot
run_cfg = CrawlerRunConfig(
url="https://browserleaks.com/geo", # test page that shows your location
locale="en-US", # Accept-Language & UI locale
timezone_id="America/Los_Angeles", # JS Date()/Intl timezone
geolocation=GeolocationConfig( # override GPS coords
latitude=34.0522,
longitude=-118.2437,
accuracy=10.0,
),
screenshot=True, # capture screenshot after load
session_id="geo_test", # reuse context if rerunning
delay_before_return_html=5
)
async with AsyncWebCrawler(config=browser_cfg) as crawler:
# 3) Run crawl (returns list even for single URL)
results: List[CrawlResult] = await crawler.arun(
url=run_cfg.url,
config=run_cfg,
)
result = results[0]
# 4) Save screenshot and report path
if result.screenshot:
__current_dir = Path(__file__).parent
out_dir = __current_dir / "tmp"
out_dir.mkdir(exist_ok=True)
shot_path = out_dir / "geo_test.png"
with open(shot_path, "wb") as f:
f.write(base64.b64decode(result.screenshot))
print(f"Saved screenshot to {shot_path}")
else:
print("No screenshot captured, check configuration.")
if __name__ == "__main__":
asyncio.run(demo_geo_override())

View File

@@ -263,7 +263,102 @@ See the full example in `docs/examples/identity_based_browsing.py` for a complet
---
## 7. Summary
## 7. Locale, Timezone, and Geolocation Control
In addition to using persistent profiles, Crawl4AI supports customizing your browser's locale, timezone, and geolocation settings. These features enhance your identity-based browsing experience by allowing you to control how websites perceive your location and regional settings.
### Setting Locale and Timezone
You can set the browser's locale and timezone through `CrawlerRunConfig`:
```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
config=CrawlerRunConfig(
# Set browser locale (language and region formatting)
locale="fr-FR", # French (France)
# Set browser timezone
timezone_id="Europe/Paris",
# Other normal options...
magic=True,
page_timeout=60000
)
)
```
**How it works:**
- `locale` affects language preferences, date formats, number formats, etc.
- `timezone_id` affects JavaScript's Date object and time-related functionality
- These settings are applied when creating the browser context and maintained throughout the session
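To sanity-check these overrides, you can ask the browser itself what it sees via `js_code` (a minimal sketch; it assumes the script's return value surfaces in `result.js_execution_result`, as in the session-management example elsewhere in these docs):
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def verify_locale_tz():
    config = CrawlerRunConfig(
        locale="fr-FR",
        timezone_id="Europe/Paris",
        # Return what the page's JavaScript actually observes
        js_code=[
            "(() => ({lang: navigator.language, "
            "tz: Intl.DateTimeFormat().resolvedOptions().timeZone}))()"
        ],
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://example.com", config=config)
        print(result.js_execution_result)  # expect fr-FR / Europe/Paris

asyncio.run(verify_locale_tz())
```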
### Configuring Geolocation
Control the GPS coordinates reported by the browser's geolocation API:
```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, GeolocationConfig
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://maps.google.com", # Or any location-aware site
config=CrawlerRunConfig(
# Configure precise GPS coordinates
geolocation=GeolocationConfig(
latitude=48.8566, # Paris coordinates
longitude=2.3522,
accuracy=100 # Accuracy in meters (optional)
),
# This site will see you as being in Paris
page_timeout=60000
)
)
```
**Important notes:**
- When `geolocation` is specified, the browser is automatically granted permission to access location
- Websites using the Geolocation API will receive the exact coordinates you specify
- This affects map services, store locators, delivery services, etc.
- Combined with the appropriate `locale` and `timezone_id`, you can create a fully consistent location profile
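To confirm the coordinates a site will actually receive, you can query the Geolocation API from `js_code` (a sketch under the same assumption that the return value appears in `result.js_execution_result`; location permission is auto-granted as noted above):
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, GeolocationConfig

JS_GEO_CHECK = """
(async () => {
  const pos = await new Promise((resolve, reject) =>
    navigator.geolocation.getCurrentPosition(resolve, reject));
  return {lat: pos.coords.latitude, lon: pos.coords.longitude};
})();
"""

async def verify_geo():
    config = CrawlerRunConfig(
        geolocation=GeolocationConfig(latitude=48.8566, longitude=2.3522),
        js_code=[JS_GEO_CHECK],
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://example.com", config=config)
        print(result.js_execution_result)  # should echo the Paris coordinates

asyncio.run(verify_geo())
```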
### Combining with Managed Browsers
These settings work perfectly with managed browsers for a complete identity solution:
```python
from crawl4ai import (
AsyncWebCrawler, BrowserConfig, CrawlerRunConfig,
GeolocationConfig
)
browser_config = BrowserConfig(
use_managed_browser=True,
user_data_dir="/path/to/my-profile",
browser_type="chromium"
)
crawl_config = CrawlerRunConfig(
# Location settings
locale="es-MX", # Spanish (Mexico)
timezone_id="America/Mexico_City",
geolocation=GeolocationConfig(
latitude=19.4326, # Mexico City
longitude=-99.1332
)
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url="https://example.com", config=crawl_config)
```
Combining persistent profiles with precise geolocation and region settings gives you complete control over your digital identity.
## 8. Summary
- **Create** your user-data directory either:
- By launching Chrome/Chromium externally with `--user-data-dir=/some/path`
@@ -271,6 +366,7 @@ See the full example in `docs/examples/identity_based_browsing.py` for a complet
- Or through the interactive interface with `profiler.interactive_manager()`
- **Log in** or configure sites as needed, then close the browser
- **Reference** that folder in `BrowserConfig(user_data_dir="...")` + `use_managed_browser=True`
- **Customize** identity aspects with `locale`, `timezone_id`, and `geolocation`
- **List and reuse** profiles with `BrowserProfiler.list_profiles()`
- **Manage** your profiles with the dedicated `BrowserProfiler` class
- Enjoy **persistent** sessions that reflect your real identity

View File

@@ -0,0 +1,205 @@
# Network Requests & Console Message Capturing
Crawl4AI can capture all network requests and browser console messages during a crawl, which is invaluable for debugging, security analysis, or understanding page behavior.
## Configuration
To enable network and console capturing, use these configuration options:
```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
# Enable both network request capture and console message capture
config = CrawlerRunConfig(
capture_network_requests=True, # Capture all network requests and responses
capture_console_messages=True # Capture all browser console output
)
```
## Example Usage
```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
async def main():
# Enable both network request capture and console message capture
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
config=config
)
if result.success:
# Analyze network requests
if result.network_requests:
print(f"Captured {len(result.network_requests)} network events")
# Count request types
request_count = len([r for r in result.network_requests if r.get("event_type") == "request"])
response_count = len([r for r in result.network_requests if r.get("event_type") == "response"])
failed_count = len([r for r in result.network_requests if r.get("event_type") == "request_failed"])
print(f"Requests: {request_count}, Responses: {response_count}, Failed: {failed_count}")
# Find API calls
api_calls = [r for r in result.network_requests
if r.get("event_type") == "request" and "api" in r.get("url", "")]
if api_calls:
print(f"Detected {len(api_calls)} API calls:")
for call in api_calls[:3]: # Show first 3
print(f" - {call.get('method')} {call.get('url')}")
# Analyze console messages
if result.console_messages:
print(f"Captured {len(result.console_messages)} console messages")
# Group by type
message_types = {}
for msg in result.console_messages:
msg_type = msg.get("type", "unknown")
message_types[msg_type] = message_types.get(msg_type, 0) + 1
print("Message types:", message_types)
# Show errors (often the most important)
errors = [msg for msg in result.console_messages if msg.get("type") == "error"]
if errors:
print(f"Found {len(errors)} console errors:")
for err in errors[:2]: # Show first 2
print(f" - {err.get('text', '')[:100]}")
# Export all captured data to a file for detailed analysis
with open("network_capture.json", "w") as f:
json.dump({
"url": result.url,
"network_requests": result.network_requests or [],
"console_messages": result.console_messages or []
}, f, indent=2)
print("Exported detailed capture data to network_capture.json")
if __name__ == "__main__":
asyncio.run(main())
```
## Captured Data Structure
### Network Requests
The `result.network_requests` field contains a list of dictionaries, each representing a network event with these common fields:
| Field | Description |
|-------|-------------|
| `event_type` | Type of event: `"request"`, `"response"`, or `"request_failed"` |
| `url` | The URL of the request |
| `timestamp` | Unix timestamp when the event was captured |
#### Request Event Fields
```json
{
"event_type": "request",
"url": "https://example.com/api/data.json",
"method": "GET",
"headers": {"User-Agent": "...", "Accept": "..."},
"post_data": "key=value&otherkey=value",
"resource_type": "fetch",
"is_navigation_request": false,
"timestamp": 1633456789.123
}
```
#### Response Event Fields
```json
{
"event_type": "response",
"url": "https://example.com/api/data.json",
"status": 200,
"status_text": "OK",
"headers": {"Content-Type": "application/json", "Cache-Control": "..."},
"from_service_worker": false,
"request_timing": {"requestTime": 1234.56, "receiveHeadersEnd": 1234.78},
"timestamp": 1633456789.456
}
```
#### Failed Request Event Fields
```json
{
"event_type": "request_failed",
"url": "https://example.com/missing.png",
"method": "GET",
"resource_type": "image",
"failure_text": "net::ERR_ABORTED 404",
"timestamp": 1633456789.789
}
```
### Console Messages
The `result.console_messages` field contains a list of dictionaries, each representing a console message with these common fields:
| Field | Description |
|-------|-------------|
| `type` | Message type: `"log"`, `"error"`, `"warning"`, `"info"`, etc. |
| `text` | The message text |
| `timestamp` | Unix timestamp when the message was captured |
#### Console Message Example
```json
{
"type": "error",
"text": "Uncaught TypeError: Cannot read property 'length' of undefined",
"location": "https://example.com/script.js:123:45",
"timestamp": 1633456790.123
}
```
## Key Benefits
- **Full Request Visibility**: Capture all network activity including:
- Requests (URLs, methods, headers, post data)
- Responses (status codes, headers, timing)
- Failed requests (with error messages)
- **Console Message Access**: View all JavaScript console output:
- Log messages
- Warnings
- Errors with stack traces
- Developer debugging information
- **Debugging Power**: Identify issues such as:
- Failed API calls or resource loading
- JavaScript errors affecting page functionality
- CORS or other security issues
- Hidden API endpoints and data flows
- **Security Analysis**: Detect:
- Unexpected third-party requests
- Data leakage in request payloads
- Suspicious script behavior
- **Performance Insights**: Analyze:
- Request timing data
- Resource loading patterns
- Potential bottlenecks
## Use Cases
1. **API Discovery**: Identify hidden endpoints and data flows in single-page applications
2. **Debugging**: Track down JavaScript errors affecting page functionality
3. **Security Auditing**: Detect unwanted third-party requests or data leakage
4. **Performance Analysis**: Identify slow-loading resources
5. **Ad/Tracker Analysis**: Detect and catalog advertising or tracking calls
This capability is especially valuable for complex sites with heavy JavaScript, single-page applications, or when you need to understand the exact communication happening between a browser and servers.
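As a concrete sketch of the security-auditing use case above, you can flag requests that go to hosts other than the crawled page's own host (a plain hostname comparison; a real audit would use a public-suffix-aware check):
```python
from urllib.parse import urlparse

def third_party_requests(result):
    """Return captured request events whose host differs from the page's host."""
    page_host = urlparse(result.url).hostname or ""
    return [
        r for r in (result.network_requests or [])
        if r.get("event_type") == "request"
        and urlparse(r.get("url", "")).hostname not in (None, page_host)
    ]

# After a crawl with capture_network_requests=True:
# for req in third_party_requests(result):
#     print(req.get("method"), req.get("url"))
```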

View File

@@ -10,11 +10,13 @@ class CrawlResult(BaseModel):
html: str
success: bool
cleaned_html: Optional[str] = None
fit_html: Optional[str] = None # Preprocessed HTML optimized for extraction
media: Dict[str, List[Dict]] = {}
links: Dict[str, List[Dict]] = {}
downloaded_files: Optional[List[str]] = None
screenshot: Optional[str] = None
pdf : Optional[bytes] = None
mhtml: Optional[str] = None
markdown: Optional[Union[str, MarkdownGenerationResult]] = None
extracted_content: Optional[str] = None
metadata: Optional[dict] = None
@@ -49,7 +51,7 @@ if not result.success:
```
### 1.3 **`status_code`** *(Optional[int])*
**What**: The pages HTTP status code (e.g., 200, 404).
**What**: The page's HTTP status code (e.g., 200, 404).
**Usage**:
```python
if result.status_code == 404:
@@ -81,7 +83,7 @@ if result.response_headers:
```
### 1.7 **`ssl_certificate`** *(Optional[SSLCertificate])*
**What**: If `fetch_ssl_certificate=True` in your CrawlerRunConfig, **`result.ssl_certificate`** contains a [**`SSLCertificate`**](../advanced/ssl-certificate.md) object describing the sites certificate. You can export the cert in multiple formats (PEM/DER/JSON) or access its properties like `issuer`,
**What**: If `fetch_ssl_certificate=True` in your CrawlerRunConfig, **`result.ssl_certificate`** contains a [**`SSLCertificate`**](../advanced/ssl-certificate.md) object describing the site's certificate. You can export the cert in multiple formats (PEM/DER/JSON) or access its properties like `issuer`,
`subject`, `valid_from`, `valid_until`, etc.
**Usage**:
```python
@@ -108,14 +110,6 @@ print(len(result.html))
print(result.cleaned_html[:500]) # Show a snippet
```
### 2.3 **`fit_html`** *(Optional[str])*
**What**: If a **content filter** or heuristic (e.g., Pruning/BM25) modifies the HTML, the “fit” or post-filter version.
**When**: This is **only** present if your `markdown_generator` or `content_filter` produces it.
**Usage**:
```python
if result.markdown.fit_html:
print("High-value HTML content:", result.markdown.fit_html[:300])
```
---
@@ -134,7 +128,7 @@ Crawl4AI can convert HTML→Markdown, optionally including:
- **`raw_markdown`** *(str)*: The full HTML→Markdown conversion.
- **`markdown_with_citations`** *(str)*: Same markdown, but with link references as academic-style citations.
- **`references_markdown`** *(str)*: The reference list or footnotes at the end.
- **`fit_markdown`** *(Optional[str])*: If content filtering (Pruning/BM25) was applied, the filtered fit text.
- **`fit_markdown`** *(Optional[str])*: If content filtering (Pruning/BM25) was applied, the filtered "fit" text.
- **`fit_html`** *(Optional[str])*: The HTML that led to `fit_markdown`.
**Usage**:
@@ -156,7 +150,7 @@ print(result.markdown.raw_markdown[:200])
print(result.markdown.fit_markdown)
print(result.markdown.fit_html)
```
**Important**: Fit content (in `fit_markdown`/`fit_html`) exists in result.markdown, only if you used a **filter** (like **PruningContentFilter** or **BM25ContentFilter**) within a `MarkdownGenerationStrategy`.
**Important**: "Fit" content (in `fit_markdown`/`fit_html`) exists in result.markdown, only if you used a **filter** (like **PruningContentFilter** or **BM25ContentFilter**) within a `MarkdownGenerationStrategy`.
---
@@ -168,7 +162,7 @@ print(result.markdown.fit_html)
- `src` *(str)*: Media URL
- `alt` or `title` *(str)*: Descriptive text
- `score` *(float)*: Relevance score if the crawlers heuristic found it important
- `score` *(float)*: Relevance score if the crawler's heuristic found it "important"
- `desc` or `description` *(Optional[str])*: Additional context extracted from surrounding text
**Usage**:
@@ -236,7 +230,16 @@ if result.pdf:
f.write(result.pdf)
```
### 5.5 **`metadata`** *(Optional[dict])*
### 5.5 **`mhtml`** *(Optional[str])*
**What**: MHTML snapshot of the page if `capture_mhtml=True` in `CrawlerRunConfig`. MHTML (MIME HTML) format preserves the entire web page with all its resources (CSS, images, scripts, etc.) in a single file.
**Usage**:
```python
if result.mhtml:
with open("page.mhtml", "w", encoding="utf-8") as f:
f.write(result.mhtml)
```
### 5.6 **`metadata`** *(Optional[dict])*
**What**: Page-level metadata if discovered (title, description, OG data, etc.).
**Usage**:
```python
@@ -253,7 +256,7 @@ A `DispatchResult` object providing additional concurrency and resource usage in
- **`task_id`**: A unique identifier for the parallel task.
- **`memory_usage`** (float): The memory (in MB) used at the time of completion.
- **`peak_memory`** (float): The peak memory usage (in MB) recorded during the tasks execution.
- **`peak_memory`** (float): The peak memory usage (in MB) recorded during the task's execution.
- **`start_time`** / **`end_time`** (datetime): Time range for this crawling task.
- **`error_message`** (str): Any dispatcher- or concurrency-related error encountered.
@@ -271,7 +274,69 @@ for result in results:
---
## 7. Example: Accessing Everything
## 7. Network Requests & Console Messages
When you enable network and console message capturing in `CrawlerRunConfig` using `capture_network_requests=True` and `capture_console_messages=True`, the `CrawlResult` will include these fields:
### 7.1 **`network_requests`** *(Optional[List[Dict[str, Any]]])*
**What**: A list of dictionaries containing information about all network requests, responses, and failures captured during the crawl.
**Structure**:
- Each item has an `event_type` field that can be `"request"`, `"response"`, or `"request_failed"`.
- Request events include `url`, `method`, `headers`, `post_data`, `resource_type`, and `is_navigation_request`.
- Response events include `url`, `status`, `status_text`, `headers`, and `request_timing`.
- Failed request events include `url`, `method`, `resource_type`, and `failure_text`.
- All events include a `timestamp` field.
**Usage**:
```python
if result.network_requests:
# Count different types of events
requests = [r for r in result.network_requests if r.get("event_type") == "request"]
responses = [r for r in result.network_requests if r.get("event_type") == "response"]
failures = [r for r in result.network_requests if r.get("event_type") == "request_failed"]
print(f"Captured {len(requests)} requests, {len(responses)} responses, and {len(failures)} failures")
# Analyze API calls
api_calls = [r for r in requests if "api" in r.get("url", "")]
# Identify failed resources
for failure in failures:
print(f"Failed to load: {failure.get('url')} - {failure.get('failure_text')}")
```
### 7.2 **`console_messages`** *(Optional[List[Dict[str, Any]]])*
**What**: A list of dictionaries containing all browser console messages captured during the crawl.
**Structure**:
- Each item has a `type` field indicating the message type (e.g., `"log"`, `"error"`, `"warning"`, etc.).
- The `text` field contains the actual message text.
- Some messages include `location` information (URL, line, column).
- All messages include a `timestamp` field.
**Usage**:
```python
if result.console_messages:
# Count messages by type
message_types = {}
for msg in result.console_messages:
msg_type = msg.get("type", "unknown")
message_types[msg_type] = message_types.get(msg_type, 0) + 1
print(f"Message type counts: {message_types}")
# Display errors (which are usually most important)
for msg in result.console_messages:
if msg.get("type") == "error":
print(f"Error: {msg.get('text')}")
```
These fields provide deep visibility into the page's network activity and browser console, which is invaluable for debugging, security analysis, and understanding complex web applications.
For more details on network and console capturing, see the [Network & Console Capture documentation](../advanced/network-console-capture.md).
---
## 8. Example: Accessing Everything
```python
async def handle_result(result: CrawlResult):
@@ -286,7 +351,7 @@ async def handle_result(result: CrawlResult):
# HTML
print("Original HTML size:", len(result.html))
print("Cleaned HTML size:", len(result.cleaned_html or ""))
# Markdown output
if result.markdown:
print("Raw Markdown:", result.markdown.raw_markdown[:300])
@@ -304,16 +369,36 @@ async def handle_result(result: CrawlResult):
if result.extracted_content:
print("Structured data:", result.extracted_content)
# Screenshot/PDF
# Screenshot/PDF/MHTML
if result.screenshot:
print("Screenshot length:", len(result.screenshot))
if result.pdf:
print("PDF bytes length:", len(result.pdf))
if result.mhtml:
print("MHTML length:", len(result.mhtml))
# Network and console capturing
if result.network_requests:
print(f"Network requests captured: {len(result.network_requests)}")
# Analyze request types
req_types = {}
for req in result.network_requests:
if "resource_type" in req:
req_types[req["resource_type"]] = req_types.get(req["resource_type"], 0) + 1
print(f"Resource types: {req_types}")
if result.console_messages:
print(f"Console messages captured: {len(result.console_messages)}")
# Count by message type
msg_types = {}
for msg in result.console_messages:
msg_types[msg.get("type", "unknown")] = msg_types.get(msg.get("type", "unknown"), 0) + 1
print(f"Message types: {msg_types}")
```
---
## 8. Key Points & Future
## 9. Key Points & Future
1. **Deprecated legacy properties of CrawlResult**
- `markdown_v2` - Deprecated in v0.5. Just use `markdown`. It holds the `MarkdownGenerationResult` now!
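A minimal before/after sketch of that migration:
```python
# Before (deprecated):
# md = result.markdown_v2.raw_markdown

# Now: result.markdown holds the MarkdownGenerationResult
md = result.markdown.raw_markdown
print(md[:200])
```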

View File

@@ -70,7 +70,7 @@ We group them by category.
|------------------------------|--------------------------------------|-------------------------------------------------------------------------------------------------|
| **`word_count_threshold`** | `int` (default: ~200) | Skips text blocks below X words. Helps ignore trivial sections. |
| **`extraction_strategy`** | `ExtractionStrategy` (default: None) | If set, extracts structured data (CSS-based, LLM-based, etc.). |
| **`markdown_generator`** | `MarkdownGenerationStrategy` (None) | If you want specialized markdown output (citations, filtering, chunking, etc.). |
| **`markdown_generator`** | `MarkdownGenerationStrategy` (None) | If you want specialized markdown output (citations, filtering, chunking, etc.). Can be customized with options such as `content_source` parameter to select the HTML input source ('cleaned_html', 'raw_html', or 'fit_html'). |
| **`css_selector`** | `str` (None) | Retains only the part of the page matching this selector. Affects the entire extraction process. |
| **`target_elements`** | `List[str]` (None) | List of CSS selectors for elements to focus on for markdown generation and data extraction, while still processing the entire page for links, media, etc. Provides more flexibility than `css_selector`. |
| **`excluded_tags`** | `list` (None) | Removes entire tags (e.g. `["script", "style"]`). |
@@ -140,6 +140,7 @@ If your page is a single-page app with repeated JS updates, set `js_only=True` i
| **`screenshot_wait_for`** | `float or None` | Extra wait time before the screenshot. |
| **`screenshot_height_threshold`** | `int` (~20000) | If the page is taller than this, alternate screenshot strategies are used. |
| **`pdf`** | `bool` (False) | If `True`, returns a PDF in `result.pdf`. |
| **`capture_mhtml`** | `bool` (False) | If `True`, captures an MHTML snapshot of the page in `result.mhtml`. MHTML includes all page resources (CSS, images, etc.) in a single file. |
| **`image_description_min_word_threshold`** | `int` (~50) | Minimum words for an image's alt text or description to be considered valid. |
| **`image_score_threshold`** | `int` (~3) | Filter out low-scoring images. The crawler scores images by relevance (size, context, etc.). |
| **`exclude_external_images`** | `bool` (False) | Exclude images from other domains. |
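A one-line configuration sketch for the `capture_mhtml` flag above; the snapshot, if captured, lands in `result.mhtml`:
```python
config = CrawlerRunConfig(capture_mhtml=True)
# later: result.mhtml holds the single-file snapshot (see the CrawlResult docs)
```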
@@ -231,6 +232,7 @@ async def main():
if __name__ == "__main__":
asyncio.run(main())
```
## 2.4 Compliance & Ethics

View File

@@ -36,6 +36,45 @@ LLMExtractionStrategy(
)
```
### RegexExtractionStrategy
Used for fast pattern-based extraction of common entities using regular expressions.
```python
RegexExtractionStrategy(
# Pattern Configuration
pattern: IntFlag = RegexExtractionStrategy.Nothing, # Bit flags of built-in patterns to use
custom: Optional[Dict[str, str]] = None, # Custom pattern dictionary {label: regex}
# Input Format
input_format: str = "fit_html", # "html", "markdown", "text" or "fit_html"
)
# Built-in Patterns as Bit Flags
RegexExtractionStrategy.Email # Email addresses
RegexExtractionStrategy.PhoneIntl # International phone numbers
RegexExtractionStrategy.PhoneUS # US-format phone numbers
RegexExtractionStrategy.Url # HTTP/HTTPS URLs
RegexExtractionStrategy.IPv4 # IPv4 addresses
RegexExtractionStrategy.IPv6 # IPv6 addresses
RegexExtractionStrategy.Uuid # UUIDs
RegexExtractionStrategy.Currency # Currency values (USD, EUR, etc)
RegexExtractionStrategy.Percentage # Percentage values
RegexExtractionStrategy.Number # Numeric values
RegexExtractionStrategy.DateIso # ISO format dates
RegexExtractionStrategy.DateUS # US format dates
RegexExtractionStrategy.Time24h # 24-hour format times
RegexExtractionStrategy.PostalUS # US postal codes
RegexExtractionStrategy.PostalUK # UK postal codes
RegexExtractionStrategy.HexColor # HTML hex color codes
RegexExtractionStrategy.TwitterHandle # Twitter handles
RegexExtractionStrategy.Hashtag # Hashtags
RegexExtractionStrategy.MacAddr # MAC addresses
RegexExtractionStrategy.Iban # International bank account numbers
RegexExtractionStrategy.CreditCard # Credit card numbers
RegexExtractionStrategy.All # All available patterns
```
### CosineStrategy
Used for content similarity-based extraction and clustering.
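A minimal usage sketch (parameters mirror the cosine example in the quickstart file earlier in this set; the filter string is just a placeholder):
```python
from crawl4ai.extraction_strategy import CosineStrategy

strategy = CosineStrategy(
    semantic_filter="consumer electronics pricing",  # placeholder keywords
    word_count_threshold=10,
    top_k=3,
    sim_threshold=0.3,
)
```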
@@ -156,6 +195,55 @@ result = await crawler.arun(
data = json.loads(result.extracted_content)
```
### Regex Extraction
```python
import json
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, RegexExtractionStrategy
# Method 1: Use built-in patterns
strategy = RegexExtractionStrategy(
pattern = RegexExtractionStrategy.Email | RegexExtractionStrategy.Url
)
# Method 2: Use custom patterns
price_pattern = {"usd_price": r"\$\s?\d{1,3}(?:,\d{3})*(?:\.\d{2})?"}
strategy = RegexExtractionStrategy(custom=price_pattern)
# Method 3: Generate pattern with LLM assistance (one-time)
from crawl4ai import LLMConfig
async with AsyncWebCrawler() as crawler:
# Get sample HTML first
sample_result = await crawler.arun("https://example.com/products")
html = sample_result.fit_html
# Generate regex pattern once
pattern = RegexExtractionStrategy.generate_pattern(
label="price",
html=html,
query="Product prices in USD format",
llm_config=LLMConfig(provider="openai/gpt-4o-mini")
)
# Save pattern for reuse
import json
with open("price_pattern.json", "w") as f:
json.dump(pattern, f)
# Use pattern for extraction (no LLM calls)
strategy = RegexExtractionStrategy(custom=pattern)
result = await crawler.arun(
url="https://example.com/products",
config=CrawlerRunConfig(extraction_strategy=strategy)
)
# Process results
data = json.loads(result.extracted_content)
for item in data:
print(f"{item['label']}: {item['value']}")
```
### CSS Extraction
```python
@@ -220,12 +308,28 @@ result = await crawler.arun(
## Best Practices
1. **Choose the Right Strategy**
- Use `LLMExtractionStrategy` for complex, unstructured content
- Use `JsonCssExtractionStrategy` for well-structured HTML
1. **Choose the Right Strategy**
- Use `RegexExtractionStrategy` for common data types like emails, phones, URLs, dates
- Use `JsonCssExtractionStrategy` for well-structured HTML with consistent patterns
- Use `LLMExtractionStrategy` for complex, unstructured content requiring reasoning
- Use `CosineStrategy` for content similarity and clustering
2. **Optimize Chunking**
2. **Strategy Selection Guide**
```
Is the target data a common type (email/phone/date/URL)?
→ RegexExtractionStrategy
Does the page have consistent HTML structure?
→ JsonCssExtractionStrategy or JsonXPathExtractionStrategy
Is the data semantically complex or unstructured?
→ LLMExtractionStrategy
Need to find content similar to a specific topic?
→ CosineStrategy
```
3. **Optimize Chunking**
```python
# For long documents
strategy = LLMExtractionStrategy(
@@ -234,7 +338,26 @@ result = await crawler.arun(
)
```
3. **Handle Errors**
4. **Combine Strategies for Best Performance**
```python
# First pass: Extract structure with CSS
css_strategy = JsonCssExtractionStrategy(product_schema)
css_result = await crawler.arun(url, config=CrawlerRunConfig(extraction_strategy=css_strategy))
product_data = json.loads(css_result.extracted_content)
# Second pass: Extract specific fields with regex
descriptions = [product["description"] for product in product_data]
regex_strategy = RegexExtractionStrategy(
pattern=RegexExtractionStrategy.Email | RegexExtractionStrategy.PhoneUS,
custom={"dimension": r"\d+x\d+x\d+ (?:cm|in)"}
)
# Process descriptions with regex
for text in descriptions:
matches = regex_strategy.extract("", text) # Direct extraction
```
5. **Handle Errors**
```python
try:
result = await crawler.arun(
@@ -247,11 +370,31 @@ result = await crawler.arun(
print(f"Extraction failed: {e}")
```
4. **Monitor Performance**
6. **Monitor Performance**
```python
strategy = CosineStrategy(
verbose=True, # Enable logging
word_count_threshold=20, # Filter short content
top_k=5 # Limit results
)
```
7. **Cache Generated Patterns**
```python
# For RegexExtractionStrategy pattern generation
import json
from pathlib import Path
cache_dir = Path("./pattern_cache")
cache_dir.mkdir(exist_ok=True)
pattern_file = cache_dir / "product_pattern.json"
if pattern_file.exists():
with open(pattern_file) as f:
pattern = json.load(f)
else:
# Generate once with LLM
pattern = RegexExtractionStrategy.generate_pattern(...)
with open(pattern_file, "w") as f:
json.dump(pattern, f)
```

View File

@@ -0,0 +1,444 @@
/* ==== File: docs/ask_ai/ask_ai.css ==== */
/* --- Basic Reset & Font --- */
body {
/* Attempt to inherit variables from parent window (iframe context) */
/* Fallback values if variables are not inherited */
--fallback-bg: #070708;
--fallback-font: #e8e9ed;
--fallback-secondary: #a3abba;
--fallback-primary: #50ffff;
--fallback-primary-dimmed: #09b5a5;
--fallback-border: #1d1d20;
--fallback-code-bg: #1e1e1e;
--fallback-invert-font: #222225;
--font-stack: dm, Monaco, Courier New, monospace, serif;
font-family: var(--font-stack, "Courier New", monospace); /* Use theme font stack */
background-color: var(--background-color, var(--fallback-bg));
color: var(--font-color, var(--fallback-font));
margin: 0;
padding: 0;
font-size: 14px; /* Match global font size */
line-height: 1.5em; /* Match global line height */
height: 100vh; /* Ensure body takes full height */
overflow: hidden; /* Prevent body scrollbars, panels handle scroll */
display: flex; /* Use flex for the main container */
}
a {
color: var(--secondary-color, var(--fallback-secondary));
text-decoration: none;
transition: color 0.2s;
}
a:hover {
color: var(--primary-color, var(--fallback-primary));
}
/* --- Main Container Layout --- */
.ai-assistant-container {
display: flex;
width: 100%;
height: 100%;
background-color: var(--background-color, var(--fallback-bg));
}
/* --- Sidebar Styling --- */
.sidebar {
flex-shrink: 0; /* Prevent sidebars from shrinking */
height: 100%;
display: flex;
flex-direction: column;
/* background-color: var(--code-bg-color, var(--fallback-code-bg)); */
overflow-y: hidden; /* Header fixed, list scrolls */
}
.left-sidebar {
flex-basis: 240px; /* Width of history panel */
border-right: 1px solid var(--progress-bar-background, var(--fallback-border));
}
.right-sidebar {
flex-basis: 280px; /* Width of citations panel */
border-left: 1px solid var(--progress-bar-background, var(--fallback-border));
}
.sidebar header {
padding: 0.6em 1em;
border-bottom: 1px solid var(--progress-bar-background, var(--fallback-border));
flex-shrink: 0;
display: flex;
justify-content: space-between;
align-items: center;
}
.sidebar header h3 {
margin: 0;
font-size: 1.1em;
color: var(--font-color, var(--fallback-font));
}
.sidebar ul {
list-style: none;
padding: 0;
margin: 0;
overflow-y: auto; /* Enable scrolling for the list */
flex-grow: 1; /* Allow list to take remaining space */
padding: 0.5em 0;
}
.sidebar ul li {
padding: 0.3em 1em;
}
.sidebar ul li.no-citations,
.sidebar ul li.no-history {
color: var(--secondary-color, var(--fallback-secondary));
font-style: italic;
font-size: 0.9em;
padding-left: 1em;
}
.sidebar ul li a {
color: var(--secondary-color, var(--fallback-secondary));
text-decoration: none;
display: block;
padding: 0.2em 0.5em;
border-radius: 3px;
transition: background-color 0.2s, color 0.2s;
}
.sidebar ul li a:hover {
color: var(--primary-color, var(--fallback-primary));
background-color: rgba(80, 255, 255, 0.08); /* Use primary color with alpha */
}
/* Style for active history item */
#history-list li.active a {
color: var(--primary-dimmed-color, var(--fallback-primary-dimmed));
font-weight: bold;
background-color: rgba(80, 255, 255, 0.12);
}
/* --- Chat Panel Styling --- */
#chat-panel {
flex-grow: 1; /* Take remaining space */
display: flex;
flex-direction: column;
height: 100%;
overflow: hidden; /* Prevent overflow, internal elements handle scroll */
}
#chat-messages {
flex-grow: 1;
overflow-y: auto; /* Scrollable chat history */
padding: 1em 1.5em;
border-bottom: 1px solid var(--progress-bar-background, var(--fallback-border));
}
.message {
margin-bottom: 1em;
padding: 0.8em 1.2em;
border-radius: 8px;
max-width: 90%; /* Slightly wider */
line-height: 1.6;
/* Apply pre-wrap for better handling of spaces/newlines AND wrapping */
white-space: pre-wrap;
word-wrap: break-word; /* Ensure long words break */
}
.user-message {
background-color: var(--progress-bar-background, var(--fallback-border)); /* User message background */
color: var(--font-color, var(--fallback-font));
margin-left: auto; /* Align user messages to the right */
text-align: left;
}
.ai-message {
background-color: var(--code-bg-color, var(--fallback-code-bg)); /* AI message background */
color: var(--font-color, var(--fallback-font));
margin-right: auto; /* Align AI messages to the left */
border: 1px solid var(--progress-bar-background, var(--fallback-border));
}
.ai-message.welcome-message {
border: none;
background-color: transparent;
max-width: 100%;
text-align: center;
color: var(--secondary-color, var(--fallback-secondary));
white-space: normal;
}
/* Styles for code within messages */
.ai-message code {
background-color: var(--invert-font-color, var(--fallback-invert-font)) !important; /* Use light bg for code */
/* color: var(--background-color, var(--fallback-bg)) !important; Dark text */
padding: 0.1em 0.4em;
border-radius: 4px;
font-size: 0.9em;
}
.ai-message pre {
background-color: var(--invert-font-color, var(--fallback-invert-font)) !important;
color: var(--background-color, var(--fallback-bg)) !important;
padding: 1em;
border-radius: 5px;
overflow-x: auto;
margin: 0.8em 0;
white-space: pre;
}
.ai-message pre code {
background-color: transparent !important;
padding: 0;
font-size: inherit;
}
/* Override white-space for specific elements generated by Markdown */
.ai-message p,
.ai-message ul,
.ai-message ol,
.ai-message blockquote {
white-space: normal; /* Allow standard wrapping for block elements */
}
/* --- Markdown Element Styling within Messages --- */
.message p {
margin-top: 0;
margin-bottom: 0.5em;
}
.message p:last-child {
margin-bottom: 0;
}
.message ul,
.message ol {
margin: 0.5em 0 0.5em 1.5em;
padding: 0;
}
.message li {
margin-bottom: 0.2em;
}
/* Code block styling (adjusts previous rules slightly) */
.message code {
/* Inline code */
background-color: var(--invert-font-color, var(--fallback-invert-font)) !important;
color: var(--font-color);
padding: 0.1em 0.4em;
border-radius: 4px;
font-size: 0.9em;
/* Ensure inline code breaks nicely */
word-break: break-all;
white-space: normal; /* Allow inline code to wrap if needed */
}
.message pre {
/* Code block container */
background-color: var(--invert-font-color, var(--fallback-invert-font)) !important;
color: var(--background-color, var(--fallback-bg)) !important;
padding: 1em;
border-radius: 5px;
overflow-x: auto;
margin: 0.8em 0;
font-size: 0.9em; /* Slightly smaller code blocks */
}
.message pre code {
/* Code within code block */
background-color: transparent !important;
padding: 0;
font-size: inherit;
word-break: normal; /* Don't break words in code blocks */
white-space: pre; /* Preserve whitespace strictly in code blocks */
}
/* Thinking indicator */
.message-thinking {
display: inline-block;
width: 5px;
height: 5px;
background-color: var(--primary-color, var(--fallback-primary));
border-radius: 50%;
margin-left: 8px;
vertical-align: middle;
animation: thinking 1s infinite ease-in-out;
}
@keyframes thinking {
0%,
100% {
opacity: 0.5;
transform: scale(0.8);
}
50% {
opacity: 1;
transform: scale(1.2);
}
}
/* --- Thinking Indicator (Blinking Cursor Style) --- */
.thinking-indicator-cursor {
display: inline-block;
width: 10px; /* Width of the cursor */
height: 1.1em; /* Match line height */
background-color: var(--primary-color, var(--fallback-primary));
margin-left: 5px;
vertical-align: text-bottom; /* Align with text baseline */
animation: blink-cursor 1s step-end infinite;
}
@keyframes blink-cursor {
from,
to {
background-color: transparent;
}
50% {
background-color: var(--primary-color, var(--fallback-primary));
}
}
#chat-input-area {
flex-shrink: 0; /* Prevent input area from shrinking */
padding: 1em 1.5em;
display: flex;
align-items: flex-end; /* Align items to bottom */
gap: 10px;
background-color: var(--code-bg-color, var(--fallback-code-bg)); /* Match sidebars */
}
#chat-input-area textarea {
flex-grow: 1;
padding: 0.8em 1em;
border: 1px solid var(--progress-bar-background, var(--fallback-border));
background-color: var(--background-color, var(--fallback-bg));
color: var(--font-color, var(--fallback-font));
border-radius: 5px;
resize: none; /* Disable manual resize */
font-family: inherit;
font-size: 1em;
line-height: 1.4;
max-height: 150px; /* Limit excessive height */
overflow-y: auto;
/* rows: 2; */
}
#chat-input-area button {
/* Basic button styling - maybe inherit from main theme? */
padding: 0.6em 1.2em;
border: 1px solid var(--primary-dimmed-color, var(--fallback-primary-dimmed));
background-color: var(--primary-dimmed-color, var(--fallback-primary-dimmed));
color: var(--background-color, var(--fallback-bg));
border-radius: 5px;
cursor: pointer;
font-size: 0.9em;
transition: background-color 0.2s, border-color 0.2s;
height: min-content; /* Align with bottom of textarea */
}
#chat-input-area button:hover {
background-color: var(--primary-color, var(--fallback-primary));
border-color: var(--primary-color, var(--fallback-primary));
}
#chat-input-area button:disabled {
opacity: 0.6;
cursor: not-allowed;
}
.loading-indicator {
font-size: 0.9em;
color: var(--secondary-color, var(--fallback-secondary));
margin-right: 10px;
align-self: center;
}
/* --- Buttons --- */
/* Inherit some button styles if possible */
.btn.btn-sm {
color: var(--font-color, var(--fallback-font));
padding: 0.2em 0.5em;
font-size: 0.8em;
border: 1px solid var(--secondary-color, var(--fallback-secondary));
background: none;
border-radius: 3px;
cursor: pointer;
}
.btn.btn-sm:hover {
border-color: var(--font-color, var(--fallback-font));
background-color: var(--progress-bar-background, var(--fallback-border));
}
/* --- Basic Responsiveness --- */
@media screen and (max-width: 900px) {
.left-sidebar {
flex-basis: 200px; /* Shrink history */
}
.right-sidebar {
flex-basis: 240px; /* Shrink citations */
}
}
@media screen and (max-width: 768px) {
/* Stack layout on mobile? Or hide sidebars? Hiding for now */
.sidebar {
display: none; /* Hide sidebars on small screens */
}
/* Could add toggle buttons later */
}
/* ==== File: docs/ask_ai/ask-ai.css (Updates V4 - Delete Button) ==== */
.sidebar ul li {
/* Use flexbox to align link and delete button */
display: flex;
justify-content: space-between;
align-items: center;
padding: 0; /* Remove padding from li, add to link/button */
margin: 0.1em 0; /* Small vertical margin */
}
.sidebar ul li a {
/* Link takes most space */
flex-grow: 1;
padding: 0.3em 0.5em 0.3em 1em; /* Adjust padding */
/* Make ellipsis work for long titles */
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
/* Keep existing link styles */
color: var(--secondary-color, var(--fallback-secondary));
text-decoration: none;
display: block;
border-radius: 3px;
transition: background-color 0.2s, color 0.2s;
}
.sidebar ul li a:hover {
color: var(--primary-color, var(--fallback-primary));
background-color: rgba(80, 255, 255, 0.08);
}
/* Style for active history item's link */
#history-list li.active a {
color: var(--primary-dimmed-color, var(--fallback-primary-dimmed));
font-weight: bold;
background-color: rgba(80, 255, 255, 0.12);
}
/* --- Delete Chat Button --- */
.delete-chat-btn {
flex-shrink: 0; /* Don't shrink */
background: none;
border: none;
color: var(--secondary-color, var(--fallback-secondary));
cursor: pointer;
padding: 0.4em 0.8em; /* Padding around icon */
font-size: 0.9em;
opacity: 0.5; /* Dimmed by default */
transition: opacity 0.2s, color 0.2s;
margin-left: 5px; /* Space between link and button */
border-radius: 3px;
}
.sidebar ul li:hover .delete-chat-btn,
.delete-chat-btn:hover {
opacity: 1; /* Show fully on hover */
color: var(--error-color, #ff3c74); /* Use error color on hover */
}
.delete-chat-btn:focus {
outline: 1px dashed var(--error-color, #ff3c74); /* Accessibility */
opacity: 1;
}

607
docs/md_v2/ask_ai/ask-ai.js Normal file
View File

@@ -0,0 +1,607 @@
// ==== File: docs/ask_ai/ask-ai.js (Marked, Streaming, History) ====
document.addEventListener("DOMContentLoaded", () => {
console.log("AI Assistant JS V2 Loaded");
// --- DOM Element Selectors ---
const historyList = document.getElementById("history-list");
const newChatButton = document.getElementById("new-chat-button");
const chatMessages = document.getElementById("chat-messages");
const chatInput = document.getElementById("chat-input");
const sendButton = document.getElementById("send-button");
const citationsList = document.getElementById("citations-list");
// --- Constants ---
const CHAT_INDEX_KEY = "aiAssistantChatIndex_v1";
const CHAT_PREFIX = "aiAssistantChat_v1_";
// --- State ---
let currentChatId = null;
let conversationHistory = []; // Holds message objects { sender: 'user'/'ai', text: '...' }
let isThinking = false;
let streamInterval = null; // To control the streaming interval
// --- Event Listeners ---
sendButton.addEventListener("click", handleSendMessage);
chatInput.addEventListener("keydown", handleInputKeydown);
newChatButton.addEventListener("click", handleNewChat);
chatInput.addEventListener("input", autoGrowTextarea);
// --- Initialization ---
loadChatHistoryIndex(); // Load history list on startup
const initialQuery = checkForInitialQuery(window.parent.location); // Check for query param
if (!initialQuery) {
loadInitialChat(); // Load normally if no query
}
// --- Core Functions ---
function handleSendMessage() {
const userMessageText = chatInput.value.trim();
if (!userMessageText || isThinking) return;
setThinking(true); // Start thinking state
// Add user message to state and UI
const userMessage = { sender: "user", text: userMessageText };
conversationHistory.push(userMessage);
addMessageToChat(userMessage, false); // Add user message without parsing markdown
chatInput.value = "";
autoGrowTextarea(); // Reset textarea height
// Prepare for AI response (create empty div)
const aiMessageDiv = addMessageToChat({ sender: "ai", text: "" }, true); // Add empty div with thinking indicator
// TODO: Generate fingerprint/JWT here
// TODO: Send `conversationHistory` + JWT to backend API
// Replace placeholder below with actual API call
// The backend should ideally return a stream of text tokens
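// One possible shape for that call (hypothetical /api/ask endpoint, not wired up):
// fetch("/api/ask", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify({ history: conversationHistory }),
// }).then((resp) => {
//   const reader = resp.body.getReader();
//   // decode streamed chunks and append them to aiMessageDiv
//   // instead of the simulation below
// });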
// --- Placeholder Streaming Simulation ---
const simulatedFullResponse = `Okay, here's a minimal Python script that creates an AsyncWebCrawler, fetches a webpage, and prints the first 300 characters of its Markdown output:
\`\`\`python
import asyncio
from crawl4ai import AsyncWebCrawler
async def main():
async with AsyncWebCrawler() as crawler:
result = await crawler.arun("https://example.com")
print(result.markdown[:300]) # Print first 300 chars
if __name__ == "__main__":
asyncio.run(main())
\`\`\`
A code snippet: \`crawler.arun()\`. Check the [quickstart](/core/quickstart).`;
// Simulate receiving the response stream
streamSimulatedResponse(aiMessageDiv, simulatedFullResponse);
// // Simulate receiving citations *after* stream starts (or with first chunk)
// setTimeout(() => {
// addCitations([
// { title: "Simulated Doc 1", url: "#sim1" },
// { title: "Another Concept", url: "#sim2" },
// ]);
// }, 500); // Citations appear shortly after thinking starts
}
function handleInputKeydown(event) {
if (event.key === "Enter" && !event.shiftKey) {
event.preventDefault();
handleSendMessage();
}
}
function addMessageToChat(message, addThinkingIndicator = false) {
const messageDiv = document.createElement("div");
messageDiv.classList.add("message", `${message.sender}-message`);
// Parse markdown and set HTML
messageDiv.innerHTML = message.text ? marked.parse(message.text) : "";
if (message.sender === "ai") {
// Apply Syntax Highlighting AFTER setting innerHTML
messageDiv.querySelectorAll("pre code:not(.hljs)").forEach((block) => {
if (typeof hljs !== "undefined") {
// Check if already highlighted to prevent double-highlighting issues
if (!block.classList.contains("hljs")) {
hljs.highlightElement(block);
}
} else {
console.warn("highlight.js (hljs) not found for syntax highlighting.");
}
});
// Add thinking indicator if needed (and not already present)
if (addThinkingIndicator && !message.text && !messageDiv.querySelector(".thinking-indicator-cursor")) {
const thinkingDiv = document.createElement("div");
thinkingDiv.className = "thinking-indicator-cursor";
messageDiv.appendChild(thinkingDiv);
}
} else {
// User messages remain plain text
// messageDiv.textContent = message.text;
}
// wrap each pre in a div.terminal
messageDiv.querySelectorAll("pre").forEach((block) => {
const wrapper = document.createElement("div");
wrapper.className = "terminal";
block.parentNode.insertBefore(wrapper, block);
wrapper.appendChild(block);
});
chatMessages.appendChild(messageDiv);
// Scroll only if user is near the bottom? (More advanced)
// Simple scroll for now:
scrollToBottom();
return messageDiv; // Return the created element
}
function streamSimulatedResponse(messageDiv, fullText) {
const thinkingIndicator = messageDiv.querySelector(".thinking-indicator-cursor");
if (thinkingIndicator) thinkingIndicator.remove();
const tokens = fullText.split(/(\s+)/);
let currentText = "";
let tokenIndex = 0;
// Clear previous interval just in case
if (streamInterval) clearInterval(streamInterval);
streamInterval = setInterval(() => {
const cursorSpan = '<span class="thinking-indicator-cursor"></span>'; // Cursor for streaming
if (tokenIndex < tokens.length) {
currentText += tokens[tokenIndex];
// Render intermediate markdown + cursor
messageDiv.innerHTML = marked.parse(currentText + cursorSpan);
// Re-highlight code blocks on each stream update - might be slightly inefficient
// but ensures partial code blocks look okay. Highlight only final on completion.
// messageDiv.querySelectorAll('pre code:not(.hljs)').forEach((block) => {
// hljs.highlightElement(block);
// });
scrollToBottom(); // Keep scrolling as content streams
tokenIndex++;
} else {
// Streaming finished
clearInterval(streamInterval);
streamInterval = null;
// Final render without cursor
messageDiv.innerHTML = marked.parse(currentText);
// === Final Syntax Highlighting ===
messageDiv.querySelectorAll("pre code:not(.hljs)").forEach((block) => {
if (typeof hljs !== "undefined" && !block.classList.contains("hljs")) {
hljs.highlightElement(block);
}
});
// === Extract Citations ===
const citations = extractMarkdownLinks(currentText);
// Wrap each pre in a div.terminal
messageDiv.querySelectorAll("pre").forEach((block) => {
const wrapper = document.createElement("div");
wrapper.className = "terminal";
block.parentNode.insertBefore(wrapper, block);
wrapper.appendChild(block);
});
const aiMessage = { sender: "ai", text: currentText, citations: citations };
conversationHistory.push(aiMessage);
updateCitationsDisplay();
saveCurrentChat();
setThinking(false);
}
}, 50); // Adjust speed
}
// === NEW Function to Extract Links ===
function extractMarkdownLinks(markdownText) {
const regex = /\[([^\]]+)\]\(([^)]+)\)/g; // [text](url)
const citations = [];
let match;
while ((match = regex.exec(markdownText)) !== null) {
// Avoid adding self-links from within the citations list if AI includes them
if (!match[2].startsWith("#citation-")) {
citations.push({
title: match[1].trim(),
url: match[2].trim(),
});
}
}
// Optional: Deduplicate links based on URL
const uniqueCitations = citations.filter(
(citation, index, self) => index === self.findIndex((c) => c.url === citation.url)
);
return uniqueCitations;
}
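// Example (hypothetical input, illustrating the dedup-by-URL behaviour):
//   extractMarkdownLinks("See [Docs](https://example.com) and [Docs](https://example.com)")
//   → [{ title: "Docs", url: "https://example.com" }]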
// === REVISED Function to Display Citations ===
function updateCitationsDisplay() {
let lastCitations = null;
// Find the most recent AI message with citations
for (let i = conversationHistory.length - 1; i >= 0; i--) {
if (
conversationHistory[i].sender === "ai" &&
conversationHistory[i].citations &&
conversationHistory[i].citations.length > 0
) {
lastCitations = conversationHistory[i].citations;
break; // Found the latest citations
}
}
citationsList.innerHTML = ""; // Clear previous
if (!lastCitations) {
citationsList.innerHTML = '<li class="no-citations">No citations available.</li>';
return;
}
lastCitations.forEach((citation, index) => {
const li = document.createElement("li");
const a = document.createElement("a");
// Generate a unique ID for potential internal linking if needed
// a.id = `citation-${index}`;
a.href = citation.url || "#";
a.textContent = citation.title;
a.target = "_top"; // Open in main window
li.appendChild(a);
citationsList.appendChild(li);
});
}
function addCitations(citations) {
citationsList.innerHTML = ""; // Clear
if (!citations || citations.length === 0) {
citationsList.innerHTML = '<li class="no-citations">No citations available.</li>';
return;
}
citations.forEach((citation) => {
const li = document.createElement("li");
const a = document.createElement("a");
a.href = citation.url || "#";
a.textContent = citation.title;
a.target = "_top"; // Open in main window
li.appendChild(a);
citationsList.appendChild(li);
});
}
function setThinking(thinking) {
isThinking = thinking;
sendButton.disabled = thinking;
chatInput.disabled = thinking;
chatInput.placeholder = thinking ? "AI is responding..." : "Ask about Crawl4AI...";
// Stop any existing stream if we start thinking again (e.g., rapid resend)
if (thinking && streamInterval) {
clearInterval(streamInterval);
streamInterval = null;
}
}
function autoGrowTextarea() {
chatInput.style.height = "auto";
chatInput.style.height = `${chatInput.scrollHeight}px`;
}
function scrollToBottom() {
chatMessages.scrollTop = chatMessages.scrollHeight;
}
// --- Query Parameter Handling ---
function checkForInitialQuery(locationToCheck) {
// <-- Receive location object
if (!locationToCheck) {
console.warn("Ask AI: Could not access parent window location.");
return false;
}
const urlParams = new URLSearchParams(locationToCheck.search); // <-- Use passed location's search string
const encodedQuery = urlParams.get("qq"); // <-- Use 'qq'
if (encodedQuery) {
console.log("Initial query found (qq):", encodedQuery);
try {
const decodedText = decodeURIComponent(escape(atob(encodedQuery)));
console.log("Decoded query:", decodedText);
// Start new chat immediately
handleNewChat(true);
// Delay setting input and sending message slightly
setTimeout(() => {
chatInput.value = decodedText;
autoGrowTextarea();
handleSendMessage();
// Clean the PARENT window's URL
try {
const cleanUrl = locationToCheck.pathname;
// Use parent's history object
window.parent.history.replaceState({}, window.parent.document.title, cleanUrl);
} catch (e) {
console.warn("Ask AI: Could not clean parent URL using replaceState.", e);
// This might fail due to cross-origin restrictions if served differently,
// but should work fine with mkdocs serve on the same origin.
}
}, 100);
return true; // Query processed
} catch (e) {
console.error("Error decoding initial query (qq):", e);
// Clean the PARENT window's URL even on error
try {
const cleanUrl = locationToCheck.pathname;
window.parent.history.replaceState({}, window.parent.document.title, cleanUrl);
} catch (cleanError) {
console.warn("Ask AI: Could not clean parent URL after decode error.", cleanError);
}
return false;
}
}
return false; // No 'qq' query found
}
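// Counterpart encoding (hedged sketch; the real caller lives elsewhere in the docs site):
//   const qq = btoa(unescape(encodeURIComponent(selectedText)));
//   someParentUrl + "?qq=" + qq;   // hypothetical URL shape
// The decode above (atob → escape → decodeURIComponent) reverses this byte-for-byte.
// escape/unescape are deprecated; TextEncoder/TextDecoder is the modern equivalent.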
// --- History Management ---
function handleNewChat(isFromQuery = false) {
if (isThinking) return; // Don't allow new chat while responding
// Only save if NOT triggered immediately by a query parameter load
if (!isFromQuery) {
saveCurrentChat();
}
currentChatId = `chat_${Date.now()}`;
conversationHistory = []; // Clear message history state
chatMessages.innerHTML = ""; // Start with clean slate for query
if (!isFromQuery) {
// Show welcome only if manually started
// chatMessages.innerHTML =
// '<div class="message ai-message welcome-message">Started a new chat! Ask me anything about Crawl4AI.</div>';
chatMessages.innerHTML =
'<div class="message ai-message welcome-message">We will launch this feature very soon.</div>';
}
addCitations([]); // Clear citations
updateCitationsDisplay(); // Clear UI
// Add to index and save
let index = loadChatIndex();
// Generate a generic title initially, update later
const newTitle = isFromQuery ? "Chat from Selection" : `Chat ${new Date().toLocaleString()}`;
index.unshift({ id: currentChatId, title: newTitle }); // Add to start (most recent first)
saveChatIndex(index);
renderHistoryList(index); // Update UI
setActiveHistoryItem(currentChatId);
saveCurrentChat(); // Save the empty new chat state
}
function loadChat(chatId) {
if (isThinking || chatId === currentChatId) return;
// Check if chat data actually exists before proceeding
const storedChat = localStorage.getItem(CHAT_PREFIX + chatId);
if (storedChat === null) {
console.warn(`Attempted to load non-existent chat: ${chatId}. Removing from index.`);
deleteChatData(chatId); // Clean up index
loadChatHistoryIndex(); // Reload history list
loadInitialChat(); // Load next available chat
return;
}
console.log(`Loading chat: ${chatId}`);
saveCurrentChat(); // Save current before switching
try {
conversationHistory = JSON.parse(storedChat);
currentChatId = chatId;
renderChatMessages(conversationHistory);
updateCitationsDisplay();
setActiveHistoryItem(chatId);
} catch (e) {
console.error("Error loading chat:", chatId, e);
alert("Failed to load chat data.");
conversationHistory = [];
renderChatMessages(conversationHistory);
updateCitationsDisplay();
}
}
function saveCurrentChat() {
if (currentChatId && conversationHistory.length > 0) {
try {
localStorage.setItem(CHAT_PREFIX + currentChatId, JSON.stringify(conversationHistory));
console.log(`Chat ${currentChatId} saved.`);
// Update title in index (e.g., use first user message)
let index = loadChatIndex();
const currentItem = index.find((item) => item.id === currentChatId);
if (
currentItem &&
conversationHistory[0]?.sender === "user" &&
!currentItem.title.startsWith("Chat about:")
) {
currentItem.title = `Chat about: ${conversationHistory[0].text.substring(0, 30)}...`;
saveChatIndex(index);
// Re-render the history list so the new title shows (could be optimized to update just this item)
renderHistoryList(index);
setActiveHistoryItem(currentChatId); // Re-set active after re-render
}
} catch (e) {
console.error("Error saving chat:", currentChatId, e);
// Handle potential storage full errors
if (e.name === "QuotaExceededError") {
alert("Local storage is full. Cannot save chat history.");
// Consider implementing history pruning logic here
}
}
} else if (currentChatId) {
// Persist an empty array so a freshly created chat still has a storage entry
localStorage.setItem(CHAT_PREFIX + currentChatId, JSON.stringify([]));
}
}
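// Hedged pruning sketch for the QuotaExceededError case above: keep only the newest
// chats and drop the rest. pruneOldestChats and the cap are illustrative assumptions,
// not part of this codebase.
// function pruneOldestChats(keep = 20) {
//     const index = loadChatIndex();
//     index.slice(keep).forEach((item) => localStorage.removeItem(CHAT_PREFIX + item.id));
//     saveChatIndex(index.slice(0, keep));
// }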
function loadChatIndex() {
try {
const storedIndex = localStorage.getItem(CHAT_INDEX_KEY);
return storedIndex ? JSON.parse(storedIndex) : [];
} catch (e) {
console.error("Error loading chat index:", e);
return []; // Return empty array on error
}
}
function saveChatIndex(indexArray) {
try {
localStorage.setItem(CHAT_INDEX_KEY, JSON.stringify(indexArray));
} catch (e) {
console.error("Error saving chat index:", e);
}
}
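// Assumed localStorage layout, inferred from the helpers above:
//   CHAT_INDEX_KEY         → JSON array of { id, title }, most recent first
//   CHAT_PREFIX + <chatId> → JSON array of message objects { sender, text, citations? }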
function renderHistoryList(indexArray) {
historyList.innerHTML = ""; // Clear existing
if (!indexArray || indexArray.length === 0) {
historyList.innerHTML = '<li class="no-history">No past chats found.</li>';
return;
}
indexArray.forEach((item) => {
const li = document.createElement("li");
li.dataset.chatId = item.id; // Add ID to li for easier selection
const a = document.createElement("a");
a.href = "#";
a.dataset.chatId = item.id;
a.textContent = item.title || `Chat ${item.id.split("_")[1] || item.id}`;
a.title = a.textContent; // Tooltip for potentially long titles
a.addEventListener("click", (e) => {
e.preventDefault();
loadChat(item.id);
});
// === Add Delete Button ===
const deleteBtn = document.createElement("button");
deleteBtn.className = "delete-chat-btn";
deleteBtn.innerHTML = "✕"; // "✕" close glyph (swap in an SVG or FontAwesome trash icon if preferred)
deleteBtn.title = "Delete Chat";
deleteBtn.dataset.chatId = item.id; // Store ID on button too
deleteBtn.addEventListener("click", handleDeleteChat);
li.appendChild(a);
li.appendChild(deleteBtn); // Append button to the list item
historyList.appendChild(li);
});
}
function renderChatMessages(messages) {
chatMessages.innerHTML = ""; // Clear existing messages
messages.forEach((message) => {
// Ensure highlighting is applied when loading from history
addMessageToChat(message, false);
});
if (messages.length === 0) {
// chatMessages.innerHTML =
// '<div class="message ai-message welcome-message">Chat history loaded. Ask a question!</div>';
chatMessages.innerHTML =
'<div class="message ai-message welcome-message">We will launch this feature very soon.</div>';
}
// Scroll to bottom after loading messages
scrollToBottom();
}
function setActiveHistoryItem(chatId) {
document.querySelectorAll("#history-list li").forEach((li) => li.classList.remove("active"));
// Select the LI element directly now
const activeLi = document.querySelector(`#history-list li[data-chat-id="${chatId}"]`);
if (activeLi) {
activeLi.classList.add("active");
}
}
function loadInitialChat() {
const index = loadChatIndex();
if (index.length > 0) {
loadChat(index[0].id);
} else {
// Check if handleNewChat wasn't already called by query handler
if (!currentChatId) {
handleNewChat();
}
}
}
function loadChatHistoryIndex() {
const index = loadChatIndex();
renderHistoryList(index);
if (currentChatId) setActiveHistoryItem(currentChatId);
}
// === NEW Function to Handle Delete Click ===
function handleDeleteChat(event) {
event.stopPropagation(); // Prevent triggering loadChat on the link behind it
const button = event.currentTarget;
const chatIdToDelete = button.dataset.chatId;
if (!chatIdToDelete) return;
// Confirmation dialog
if (
window.confirm(
`Are you sure you want to delete this chat session?\n"${
button.previousElementSibling?.textContent || "Chat " + chatIdToDelete
}"`
)
) {
console.log(`Deleting chat: ${chatIdToDelete}`);
// Perform deletion
const updatedIndex = deleteChatData(chatIdToDelete);
// If the deleted chat was the currently active one, load another chat
if (currentChatId === chatIdToDelete) {
currentChatId = null; // Reset current ID
conversationHistory = []; // Clear state
if (updatedIndex.length > 0) {
// Load the new top chat (most recent remaining)
loadChat(updatedIndex[0].id);
} else {
// No chats left, start a new one
handleNewChat();
}
} else {
// If a different chat was deleted, just re-render the list
renderHistoryList(updatedIndex);
// Re-apply active state in case IDs shifted (though they shouldn't)
setActiveHistoryItem(currentChatId);
}
}
}
// === NEW Function to Delete Chat Data ===
function deleteChatData(chatId) {
// Remove chat data
localStorage.removeItem(CHAT_PREFIX + chatId);
// Update index
let index = loadChatIndex();
index = index.filter((item) => item.id !== chatId);
saveChatIndex(index);
console.log(`Chat ${chatId} data and index entry removed.`);
return index; // Return the updated index
}
// --- Virtual Scrolling Placeholder ---
// NOTE: Virtual scrolling is complex. For now, we do direct rendering.
// If performance becomes an issue with very long chats/history,
// investigate libraries like 'simple-virtual-scroll' or 'virtual-scroller'.
// You would replace parts of `renderChatMessages` and `renderHistoryList`
// to work with the chosen library's API (providing data and item renderers).
console.warn("Virtual scrolling not implemented. Performance may degrade with very long chat histories.");
});


@@ -0,0 +1,64 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Crawl4AI Assistant</title>
<!-- Link main styles first for variable access -->
<link rel="stylesheet" href="../assets/layout.css">
<link rel="stylesheet" href="../assets/styles.css">
<!-- Link specific AI styles -->
<link rel="stylesheet" href="../assets/highlight.css">
<link rel="stylesheet" href="ask-ai.css">
</head>
<body>
<div class="ai-assistant-container">
<!-- Left Sidebar: Conversation History -->
<aside id="history-panel" class="sidebar left-sidebar">
<header>
<h3>History</h3>
<button id="new-chat-button" class="btn btn-sm">New Chat</button>
</header>
<ul id="history-list">
<!-- History items populated by JS -->
</ul>
</aside>
<!-- Main Area: Chat Interface -->
<main id="chat-panel">
<div id="chat-messages">
<!-- Chat messages populated by JS -->
<div class="message ai-message welcome-message">
Welcome to the Crawl4AI Assistant! How can I help you today?
</div>
</div>
<div id="chat-input-area">
<!-- Loading indicator for general waiting (optional) -->
<!-- <div class="loading-indicator" style="display: none;">Thinking...</div> -->
<textarea id="chat-input" placeholder="We will roll out this feature very soon." rows="2" disabled></textarea>
<button id="send-button">Send</button>
</div>
</main>
<!-- Right Sidebar: Citations / Context -->
<aside id="citations-panel" class="sidebar right-sidebar">
<header>
<h3>Citations</h3>
</header>
<ul id="citations-list">
<!-- Citations populated by JS -->
<li class="no-citations">No citations for this response yet.</li>
</ul>
</aside>
</div>
<!-- Include Marked.js library -->
<script src="https://cdn.jsdelivr.net/npm/marked/marked.min.js"></script>
<script src="../assets/highlight.min.js"></script>
<!-- Your AI Assistant Logic -->
<script src="ask-ai.js"></script>
</body>
</html>


@@ -0,0 +1,62 @@
// ==== File: docs/assets/copy_code.js ====
document.addEventListener('DOMContentLoaded', () => {
// Target specifically code blocks within the main content area
const codeBlocks = document.querySelectorAll('#terminal-mkdocs-main-content pre > code');
codeBlocks.forEach((codeElement) => {
const preElement = codeElement.parentElement; // The <pre> tag
// Ensure the <pre> tag can contain a positioned button
if (window.getComputedStyle(preElement).position === 'static') {
preElement.style.position = 'relative';
}
// Create the button
const copyButton = document.createElement('button');
copyButton.className = 'copy-code-button';
copyButton.type = 'button';
copyButton.setAttribute('aria-label', 'Copy code to clipboard');
copyButton.title = 'Copy code to clipboard';
copyButton.innerHTML = 'Copy'; // Or use an icon like an SVG or FontAwesome class
// Append the button to the <pre> element
preElement.appendChild(copyButton);
// Add click event listener
copyButton.addEventListener('click', () => {
copyCodeToClipboard(codeElement, copyButton);
});
});
async function copyCodeToClipboard(codeElement, button) {
// Use innerText to get the rendered text content, preserving line breaks
const textToCopy = codeElement.innerText;
try {
await navigator.clipboard.writeText(textToCopy);
// Visual feedback
button.innerHTML = 'Copied!';
button.classList.add('copied');
button.disabled = true; // Temporarily disable
// Revert button state after a short delay
setTimeout(() => {
button.innerHTML = 'Copy';
button.classList.remove('copied');
button.disabled = false;
}, 2000); // Show "Copied!" for 2 seconds
} catch (err) {
console.error('Failed to copy code: ', err);
// Optional: Provide error feedback on the button
button.innerHTML = 'Error';
setTimeout(() => {
button.innerHTML = 'Copy';
}, 2000);
}
}
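// Note: navigator.clipboard is only exposed in secure contexts (https or localhost),
// which covers `mkdocs serve`; elsewhere the catch branch above shows "Error".
// Hedged fallback sketch for non-secure contexts (execCommand is deprecated but
// still widely supported); legacyCopy is illustrative, not part of this codebase:
// function legacyCopy(text) {
//     const ta = document.createElement("textarea");
//     ta.value = text;
//     document.body.appendChild(ta);
//     ta.select();
//     document.execCommand("copy");
//     ta.remove();
// }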
console.log("Copy Code Button script loaded.");
});

Some files were not shown because too many files have changed in this diff.