Merged next branch

Aravind Karnam
2025-04-12 10:47:02 +05:30
62 changed files with 3225 additions and 7085 deletions

.github/workflows/main.yml (new file, 35 lines)

@@ -0,0 +1,35 @@
name: Discord GitHub Notifications

on:
  issues:
    types: [opened]
  issue_comment:
    types: [created]
  pull_request:
    types: [opened]
  discussion:
    types: [created]

jobs:
  notify-discord:
    runs-on: ubuntu-latest
    steps:
      - name: Set webhook based on event type
        id: set-webhook
        run: |
          if [ "${{ github.event_name }}" == "discussion" ]; then
            echo "webhook=${{ secrets.DISCORD_DISCUSSIONS_WEBHOOK }}" >> $GITHUB_OUTPUT
          else
            echo "webhook=${{ secrets.DISCORD_WEBHOOK }}" >> $GITHUB_OUTPUT
          fi
      - name: Discord Notification
        uses: Ilshidur/action-discord@master
        env:
          DISCORD_WEBHOOK: ${{ steps.set-webhook.outputs.webhook }}
        with:
          args: |
            ${{ github.event_name == 'issues' && format('📣 New issue created: **{0}** by {1} - {2}', github.event.issue.title, github.event.issue.user.login, github.event.issue.html_url) ||
            github.event_name == 'issue_comment' && format('💬 New comment on issue **{0}** by {1} - {2}', github.event.issue.title, github.event.comment.user.login, github.event.comment.html_url) ||
            github.event_name == 'pull_request' && format('🔄 New PR opened: **{0}** by {1} - {2}', github.event.pull_request.title, github.event.pull_request.user.login, github.event.pull_request.html_url) ||
            format('💬 New discussion started: **{0}** by {1} - {2}', github.event.discussion.title, github.event.discussion.user.login, github.event.discussion.html_url) }}

JOURNAL.md (new file, 108 lines)

@@ -0,0 +1,108 @@
# Development Journal
This journal tracks significant feature additions, bug fixes, and architectural decisions in the crawl4ai project. It serves as both documentation and a historical record of the project's evolution.
## [2025-04-09] Added MHTML Capture Feature
**Feature:** MHTML snapshot capture of crawled pages
**Changes Made:**
1. Added `capture_mhtml: bool = False` parameter to `CrawlerRunConfig` class
2. Added `mhtml: Optional[str] = None` field to `CrawlResult` model
3. Added `mhtml_data: Optional[str] = None` field to `AsyncCrawlResponse` class
4. Implemented `capture_mhtml()` method in `AsyncPlaywrightCrawlerStrategy` class to capture MHTML via CDP
5. Modified the crawler to capture MHTML when enabled and pass it to the result
**Implementation Details:**
- MHTML capture uses Chrome DevTools Protocol (CDP) via Playwright's CDP session API
- The implementation waits for the page to fully load before capturing MHTML content
- Waiting for JavaScript-rendered content is enhanced with requestAnimationFrame so dynamic content is captured reliably
- We ensure all browser resources are properly cleaned up after capture
**Files Modified:**
- `crawl4ai/models.py`: Added the mhtml field to CrawlResult
- `crawl4ai/async_configs.py`: Added capture_mhtml parameter to CrawlerRunConfig
- `crawl4ai/async_crawler_strategy.py`: Implemented MHTML capture logic
- `crawl4ai/async_webcrawler.py`: Added mapping from AsyncCrawlResponse.mhtml_data to CrawlResult.mhtml
**Testing:**
- Created comprehensive tests in `tests/20241401/test_mhtml.py` covering:
- Capturing MHTML when enabled
- Ensuring mhtml is None when disabled explicitly
- Ensuring mhtml is None by default
- Capturing MHTML on JavaScript-enabled pages
**Challenges:**
- Had to improve page loading detection to ensure JavaScript content was fully rendered
- Tests needed to be run independently due to Playwright browser instance management
- Modified test expected content to match actual MHTML output
**Why This Feature:**
The MHTML capture feature allows users to capture complete web pages including all resources (CSS, images, etc.) in a single file. This is valuable for:
1. Offline viewing of captured pages
2. Creating permanent snapshots of web content for archival
3. Ensuring consistent content for later analysis, even if the original site changes
**Future Enhancements to Consider:**
- Add option to save MHTML to file
- Support for filtering what resources get included in MHTML
- Add support for specifying MHTML capture options
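A minimal usage sketch for the feature described above (hedged; the URL and output path are illustrative):

```python
# Sketch only: enable MHTML capture and persist the snapshot if one was produced.
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(capture_mhtml=True)  # defaults to False
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=config)
        if result.mhtml is not None:  # Optional[str]; None when disabled or on capture failure
            with open("snapshot.mhtml", "w", encoding="utf-8") as f:
                f.write(result.mhtml)

asyncio.run(main())
```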
## [2025-04-10] Added Network Request and Console Message Capturing
**Feature:** Comprehensive capturing of network requests/responses and browser console messages during crawling
**Changes Made:**
1. Added `capture_network_requests: bool = False` and `capture_console_messages: bool = False` parameters to `CrawlerRunConfig` class
2. Added `network_requests: Optional[List[Dict[str, Any]]] = None` and `console_messages: Optional[List[Dict[str, Any]]] = None` fields to both `AsyncCrawlResponse` and `CrawlResult` models
3. Implemented event listeners in `AsyncPlaywrightCrawlerStrategy._crawl_web()` to capture browser network events and console messages
4. Added proper event listener cleanup in the finally block to prevent resource leaks
5. Modified the crawler flow to pass captured data from AsyncCrawlResponse to CrawlResult
**Implementation Details:**
- Network capture uses Playwright event listeners (`request`, `response`, and `requestfailed`) to record all network activity
- Console capture uses Playwright event listeners (`console` and `pageerror`) to record console messages and errors
- Each network event includes metadata like URL, headers, status, and timing information
- Each console message includes type, text content, and source location when available
- All captured events include timestamps for chronological analysis
- Error handling ensures even failed capture attempts won't crash the main crawling process
**Files Modified:**
- `crawl4ai/models.py`: Added new fields to AsyncCrawlResponse and CrawlResult
- `crawl4ai/async_configs.py`: Added new configuration parameters to CrawlerRunConfig
- `crawl4ai/async_crawler_strategy.py`: Implemented capture logic using event listeners
- `crawl4ai/async_webcrawler.py`: Added data transfer from AsyncCrawlResponse to CrawlResult
**Documentation:**
- Created detailed documentation in `docs/md_v2/advanced/network-console-capture.md`
- Added feature to site navigation in `mkdocs.yml`
- Updated CrawlResult documentation in `docs/md_v2/api/crawl-result.md`
- Created comprehensive example in `docs/examples/network_console_capture_example.py`
**Testing:**
- Created `tests/general/test_network_console_capture.py` with tests for:
- Verifying capture is disabled by default
- Testing network request capturing
- Testing console message capturing
- Ensuring both capture types can be enabled simultaneously
- Checking correct content is captured in expected formats
**Challenges:**
- Initial implementation had synchronous/asynchronous mismatches in event handlers
- Needed to fix mix-ups between property access and method calls in handlers
- Required careful cleanup of event listeners to prevent memory leaks
**Why This Feature:**
The network and console capture feature provides deep visibility into web page activity, enabling:
1. Debugging complex web applications by seeing all network requests and errors
2. Security analysis to detect unexpected third-party requests and data flows
3. Performance profiling to identify slow-loading resources
4. API discovery in single-page applications
5. Comprehensive analysis of web application behavior
**Future Enhancements to Consider:**
- Option to filter captured events by type, domain, or content
- Support for capturing response bodies (with size limits)
- Aggregate statistics calculation for performance metrics
- Integration with visualization tools for network waterfall analysis
- Exporting captures in HAR format for use with external tools
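A matching usage sketch for this entry (hedged; the URL is illustrative):

```python
# Sketch only: enable both capture types and inspect what came back.
import asyncio
import json
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(
        capture_network_requests=True,   # request / response / request_failed events
        capture_console_messages=True,   # console messages and page errors
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=config)
        events = result.network_requests or []
        print(f"{len(events)} network events captured")
        # Every captured entry carries a timestamp; console entries have "type" and "text".
        print(json.dumps((result.console_messages or [])[:3], indent=2))

asyncio.run(main())
```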

crawl4ai/async_configs.py (modified)

@@ -15,7 +15,7 @@ from .user_agent_generator import UAGen, ValidUAGenerator # , OnlineUAGenerator
from .extraction_strategy import ExtractionStrategy, LLMExtractionStrategy
from .chunking_strategy import ChunkingStrategy, RegexChunking
- from .markdown_generation_strategy import MarkdownGenerationStrategy
+ from .markdown_generation_strategy import MarkdownGenerationStrategy, DefaultMarkdownGenerator
from .content_scraping_strategy import ContentScrapingStrategy, WebScrapingStrategy
from .deep_crawling import DeepCrawlStrategy
@@ -722,7 +722,7 @@ class CrawlerRunConfig():
word_count_threshold: int = MIN_WORD_THRESHOLD,
extraction_strategy: ExtractionStrategy = None,
chunking_strategy: ChunkingStrategy = RegexChunking(),
- markdown_generator: MarkdownGenerationStrategy = None,
+ markdown_generator: MarkdownGenerationStrategy = DefaultMarkdownGenerator(),
only_text: bool = False,
css_selector: str = None,
target_elements: List[str] = None,
@@ -772,10 +772,12 @@ class CrawlerRunConfig():
screenshot_wait_for: float = None,
screenshot_height_threshold: int = SCREENSHOT_HEIGHT_TRESHOLD,
pdf: bool = False,
+ capture_mhtml: bool = False,
image_description_min_word_threshold: int = IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD,
image_score_threshold: int = IMAGE_SCORE_THRESHOLD,
table_score_threshold: int = 7,
exclude_external_images: bool = False,
+ exclude_all_images: bool = False,
# Link and Domain Handling Parameters
exclude_social_media_domains: list = None,
exclude_external_links: bool = False,
@@ -785,6 +787,9 @@ class CrawlerRunConfig():
# Debugging and Logging Parameters
verbose: bool = True,
log_console: bool = False,
+ # Network and Console Capturing Parameters
+ capture_network_requests: bool = False,
+ capture_console_messages: bool = False,
# Connection Parameters
method: str = "GET",
stream: bool = False,
@@ -860,9 +865,11 @@ class CrawlerRunConfig():
self.screenshot_wait_for = screenshot_wait_for
self.screenshot_height_threshold = screenshot_height_threshold
self.pdf = pdf
+ self.capture_mhtml = capture_mhtml
self.image_description_min_word_threshold = image_description_min_word_threshold
self.image_score_threshold = image_score_threshold
self.exclude_external_images = exclude_external_images
+ self.exclude_all_images = exclude_all_images
self.table_score_threshold = table_score_threshold
# Link and Domain Handling Parameters
@@ -877,6 +884,10 @@ class CrawlerRunConfig():
# Debugging and Logging Parameters
self.verbose = verbose
self.log_console = log_console
+ # Network and Console Capturing Parameters
+ self.capture_network_requests = capture_network_requests
+ self.capture_console_messages = capture_console_messages
# Connection Parameters
self.stream = stream
@@ -991,6 +1002,7 @@ class CrawlerRunConfig():
"screenshot_height_threshold", SCREENSHOT_HEIGHT_TRESHOLD
),
pdf=kwargs.get("pdf", False),
+ capture_mhtml=kwargs.get("capture_mhtml", False),
image_description_min_word_threshold=kwargs.get(
"image_description_min_word_threshold",
IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD,
@@ -999,6 +1011,7 @@ class CrawlerRunConfig():
"image_score_threshold", IMAGE_SCORE_THRESHOLD
),
table_score_threshold=kwargs.get("table_score_threshold", 7),
+ exclude_all_images=kwargs.get("exclude_all_images", False),
exclude_external_images=kwargs.get("exclude_external_images", False),
# Link and Domain Handling Parameters
exclude_social_media_domains=kwargs.get(
@@ -1011,6 +1024,9 @@ class CrawlerRunConfig():
# Debugging and Logging Parameters
verbose=kwargs.get("verbose", True),
log_console=kwargs.get("log_console", False),
+ # Network and Console Capturing Parameters
+ capture_network_requests=kwargs.get("capture_network_requests", False),
+ capture_console_messages=kwargs.get("capture_console_messages", False),
# Connection Parameters
method=kwargs.get("method", "GET"),
stream=kwargs.get("stream", False),
@@ -1088,9 +1104,11 @@ class CrawlerRunConfig():
"screenshot_wait_for": self.screenshot_wait_for,
"screenshot_height_threshold": self.screenshot_height_threshold,
"pdf": self.pdf,
+ "capture_mhtml": self.capture_mhtml,
"image_description_min_word_threshold": self.image_description_min_word_threshold,
"image_score_threshold": self.image_score_threshold,
"table_score_threshold": self.table_score_threshold,
+ "exclude_all_images": self.exclude_all_images,
"exclude_external_images": self.exclude_external_images,
"exclude_social_media_domains": self.exclude_social_media_domains,
"exclude_external_links": self.exclude_external_links,
@@ -1099,6 +1117,8 @@ class CrawlerRunConfig():
"exclude_internal_links": self.exclude_internal_links,
"verbose": self.verbose,
"log_console": self.log_console,
+ "capture_network_requests": self.capture_network_requests,
+ "capture_console_messages": self.capture_console_messages,
"method": self.method,
"stream": self.stream,
"check_robots_txt": self.check_robots_txt,

crawl4ai/async_crawler_strategy.py (modified)

@@ -411,7 +411,11 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
user_agent = kwargs.get("user_agent", self.user_agent)
# Use browser_manager to get a fresh page & context assigned to this session_id
- page, context = await self.browser_manager.get_page(session_id, user_agent)
+ page, context = await self.browser_manager.get_page(CrawlerRunConfig(
+ session_id=session_id,
+ user_agent=user_agent,
+ **kwargs,
+ ))
return session_id
async def crawl(
@@ -449,12 +453,17 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
html = f.read()
if config.screenshot:
screenshot_data = await self._generate_screenshot_from_html(html)
+ if config.capture_console_messages:
+ page, context = await self.browser_manager.get_page(crawlerRunConfig=config)
+ captured_console = await self._capture_console_messages(page, url)
return AsyncCrawlResponse(
html=html,
response_headers=response_headers,
status_code=status_code,
screenshot=screenshot_data,
get_delayed_content=None,
+ console_messages=captured_console,
)
elif url.startswith("raw:") or url.startswith("raw://"):
@@ -480,6 +489,7 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
) -> AsyncCrawlResponse:
"""
Internal method to crawl web URLs with the specified configuration.
+ Includes optional network and console capturing.
Args:
url (str): The web URL to crawl
@@ -496,6 +506,10 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
# Reset downloaded files list for new crawl
self._downloaded_files = []
+ # Initialize capture lists
+ captured_requests = []
+ captured_console = []
# Handle user agent with magic mode
user_agent_to_override = config.user_agent
@@ -523,9 +537,144 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
# Call hook after page creation
await self.execute_hook("on_page_context_created", page, context=context, config=config)
# Network Request Capturing
if config.capture_network_requests:
async def handle_request_capture(request):
try:
post_data_str = None
try:
# Be cautious with large post data
post_data = request.post_data_buffer
if post_data:
# Attempt to decode, fallback to base64 or size indication
try:
post_data_str = post_data.decode('utf-8', errors='replace')
except UnicodeDecodeError:
post_data_str = f"[Binary data: {len(post_data)} bytes]"
except Exception:
post_data_str = "[Error retrieving post data]"
captured_requests.append({
"event_type": "request",
"url": request.url,
"method": request.method,
"headers": dict(request.headers), # Convert Header dict
"post_data": post_data_str,
"resource_type": request.resource_type,
"is_navigation_request": request.is_navigation_request(),
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing request details for {request.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "request_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})
async def handle_response_capture(response):
try:
captured_requests.append({
"event_type": "response",
"url": response.url,
"status": response.status,
"status_text": response.status_text,
"headers": dict(response.headers), # Convert Header dict
"from_service_worker": response.from_service_worker,
"request_timing": response.request.timing, # Detailed timing info
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing response details for {response.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "response_capture_error", "url": response.url, "error": str(e), "timestamp": time.time()})
async def handle_request_failed_capture(request):
try:
captured_requests.append({
"event_type": "request_failed",
"url": request.url,
"method": request.method,
"resource_type": request.resource_type,
"failure_text": str(request.failure) if request.failure else "Unknown failure",
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing request failed details for {request.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "request_failed_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})
page.on("request", handle_request_capture)
page.on("response", handle_response_capture)
page.on("requestfailed", handle_request_failed_capture)
# Console Message Capturing
if config.capture_console_messages:
def handle_console_capture(msg):
try:
message_type = "unknown"
try:
message_type = msg.type
except:
pass
message_text = "unknown"
try:
message_text = msg.text
except:
pass
# Basic console message with minimal content
entry = {
"type": message_type,
"text": message_text,
"timestamp": time.time()
}
captured_console.append(entry)
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing console message: {e}", tag="CAPTURE")
# Still add something to the list even on error
captured_console.append({
"type": "console_capture_error",
"error": str(e),
"timestamp": time.time()
})
def handle_pageerror_capture(err):
try:
error_message = "Unknown error"
try:
error_message = err.message
except:
pass
error_stack = ""
try:
error_stack = err.stack
except:
pass
captured_console.append({
"type": "error",
"text": error_message,
"stack": error_stack,
"timestamp": time.time()
})
except Exception as e:
if self.logger:
self.logger.warning(f"Error capturing page error: {e}", tag="CAPTURE")
captured_console.append({
"type": "pageerror_capture_error",
"error": str(e),
"timestamp": time.time()
})
# Add event listeners directly
page.on("console", handle_console_capture)
page.on("pageerror", handle_pageerror_capture)
# Set up console logging if requested
if config.log_console:
def log_consol(
msg, console_log_type="debug"
): # Corrected the parameter syntax
@@ -840,14 +989,18 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
"before_return_html", page=page, html=html, context=context, config=config
)
- # Handle PDF and screenshot generation
+ # Handle PDF, MHTML and screenshot generation
start_export_time = time.perf_counter()
pdf_data = None
screenshot_data = None
+ mhtml_data = None
if config.pdf:
pdf_data = await self.export_pdf(page)
+ if config.capture_mhtml:
+ mhtml_data = await self.capture_mhtml(page)
if config.screenshot:
if config.screenshot_wait_for:
await asyncio.sleep(config.screenshot_wait_for)
@@ -855,9 +1008,9 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
page, screenshot_height_threshold=config.screenshot_height_threshold
)
- if screenshot_data or pdf_data:
+ if screenshot_data or pdf_data or mhtml_data:
self.logger.info(
- message="Exporting PDF and taking screenshot took {duration:.2f}s",
+ message="Exporting media (PDF/MHTML/screenshot) took {duration:.2f}s",
tag="EXPORT",
params={"duration": time.perf_counter() - start_export_time},
)
@@ -880,12 +1033,16 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
status_code=status_code,
screenshot=screenshot_data,
pdf_data=pdf_data,
+ mhtml_data=mhtml_data,
get_delayed_content=get_delayed_content,
ssl_certificate=ssl_cert,
downloaded_files=(
self._downloaded_files if self._downloaded_files else None
),
redirected_url=redirected_url,
+ # Include captured data if enabled
+ network_requests=captured_requests if config.capture_network_requests else None,
+ console_messages=captured_console if config.capture_console_messages else None,
)
except Exception as e:
@@ -894,6 +1051,15 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
finally:
# If no session_id is given we should close the page
if not config.session_id:
+ # Detach listeners before closing to prevent potential errors during close
+ if config.capture_network_requests:
+ page.remove_listener("request", handle_request_capture)
+ page.remove_listener("response", handle_response_capture)
+ page.remove_listener("requestfailed", handle_request_failed_capture)
+ if config.capture_console_messages:
+ page.remove_listener("console", handle_console_capture)
+ page.remove_listener("pageerror", handle_pageerror_capture)
await page.close()
async def _handle_full_page_scan(self, page: Page, scroll_delay: float = 0.1):
@@ -1056,7 +1222,107 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
"""
pdf_data = await page.pdf(print_background=True)
return pdf_data
async def capture_mhtml(self, page: Page) -> Optional[str]:
"""
Captures the current page as MHTML using CDP.
MHTML (MIME HTML) is a web page archive format that combines the HTML content
with its resources (images, CSS, etc.) into a single MIME-encoded file.
Args:
page (Page): The Playwright page object
Returns:
Optional[str]: The MHTML content as a string, or None if there was an error
"""
try:
# Ensure the page is fully loaded before capturing
try:
# Wait for DOM content and network to be idle
await page.wait_for_load_state("domcontentloaded", timeout=5000)
await page.wait_for_load_state("networkidle", timeout=5000)
# Give a little extra time for JavaScript execution
await page.wait_for_timeout(1000)
# Wait for any animations to complete
await page.evaluate("""
() => new Promise(resolve => {
// First requestAnimationFrame gets scheduled after the next repaint
requestAnimationFrame(() => {
// Second requestAnimationFrame gets called after all animations complete
requestAnimationFrame(resolve);
});
})
""")
except Error as e:
if self.logger:
self.logger.warning(
message="Wait for load state timed out: {error}",
tag="MHTML",
params={"error": str(e)},
)
# Create a new CDP session
cdp_session = await page.context.new_cdp_session(page)
# Call Page.captureSnapshot with format "mhtml"
result = await cdp_session.send("Page.captureSnapshot", {"format": "mhtml"})
# The result contains a 'data' field with the MHTML content
mhtml_content = result.get("data")
# Detach the CDP session to clean up resources
await cdp_session.detach()
return mhtml_content
except Exception as e:
# Log the error but don't raise it - we'll just return None for the MHTML
if self.logger:
self.logger.error(
message="Failed to capture MHTML: {error}",
tag="MHTML",
params={"error": str(e)},
)
return None
async def _capture_console_messages(
self, page: Page, file_path: str
) -> List[Dict[str, Union[str, float]]]:
"""
Captures console messages from the page.
Args:
page (Page): The Playwright page object
Returns:
List[Dict[str, Union[str, float]]]: A list of captured console messages
"""
captured_console = []
def handle_console_message(msg):
try:
message_type = msg.type
message_text = msg.text
entry = {
"type": message_type,
"text": message_text,
"timestamp": time.time(),
}
captured_console.append(entry)
except Exception as e:
if self.logger:
self.logger.warning(
f"Error capturing console message: {e}", tag="CAPTURE"
)
page.on("console", handle_console_message)
await page.goto(file_path)
return captured_console
async def take_screenshot(self, page, **kwargs) -> str:
"""
Take a screenshot of the current page.
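For reference, a standalone sketch of the CDP call that the `capture_mhtml()` method above wraps, using plain Playwright (the URL is illustrative; error handling omitted):

```python
# Sketch only: Page.captureSnapshot over a CDP session, as in capture_mhtml() above.
import asyncio
from playwright.async_api import async_playwright

async def snapshot(url: str) -> str:
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        await page.goto(url, wait_until="networkidle")
        cdp = await page.context.new_cdp_session(page)  # CDP session bound to this page
        result = await cdp.send("Page.captureSnapshot", {"format": "mhtml"})
        await cdp.detach()  # release CDP resources
        await browser.close()
        return result["data"]  # MHTML payload as a string

print(len(asyncio.run(snapshot("https://example.com"))), "characters of MHTML")
```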


@@ -165,9 +165,22 @@ class AsyncLogger(AsyncLoggerBase):
formatted_message = message.format(**params)
# Then apply colors if specified
+ color_map = {
+ "green": Fore.GREEN,
+ "red": Fore.RED,
+ "yellow": Fore.YELLOW,
+ "blue": Fore.BLUE,
+ "cyan": Fore.CYAN,
+ "magenta": Fore.MAGENTA,
+ "white": Fore.WHITE,
+ "black": Fore.BLACK,
+ "reset": Style.RESET_ALL,
+ }
if colors:
for key, color in colors.items():
# Find the formatted value in the message and wrap it with color
+ if color in color_map:
+ color = color_map[color]
if key in params:
value_str = str(params[key])
formatted_message = formatted_message.replace(
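A hedged illustration of the effect of this change (the `message`/`params`/`colors` keywords mirror the pattern visible in the hunk; the exact public method and constructor signatures are assumed):

```python
# Sketch only: with the new color_map, plain color names resolve to colorama codes.
from crawl4ai.async_logger import AsyncLogger  # module path assumed

logger = AsyncLogger(verbose=True)
logger.info(
    message="Fetched {url} in {duration:.2f}s",
    tag="FETCH",
    params={"url": "https://example.com", "duration": 1.23},
    colors={"url": "cyan", "duration": "green"},  # names looked up in color_map
)
```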

crawl4ai/async_webcrawler.py (modified)

@@ -366,9 +366,11 @@ class AsyncWebCrawler:
crawl_result.response_headers = async_response.response_headers
crawl_result.downloaded_files = async_response.downloaded_files
crawl_result.js_execution_result = js_execution_result
- crawl_result.ssl_certificate = (
-     async_response.ssl_certificate
- )  # Add SSL certificate
+ crawl_result.mhtml = async_response.mhtml_data
+ crawl_result.ssl_certificate = async_response.ssl_certificate
+ # Add captured network and console data if available
+ crawl_result.network_requests = async_response.network_requests
+ crawl_result.console_messages = async_response.console_messages
crawl_result.success = bool(html)
crawl_result.session_id = getattr(config, "session_id", None)


@@ -1,22 +0,0 @@
"""Browser management module for Crawl4AI.
This module provides browser management capabilities using different strategies
for browser creation and interaction.
"""
from .manager import BrowserManager
from .profiles import BrowserProfileManager
from .models import DockerConfig
from .docker_registry import DockerRegistry
from .docker_utils import DockerUtils
from .strategies import (
BaseBrowserStrategy,
PlaywrightBrowserStrategy,
CDPBrowserStrategy,
BuiltinBrowserStrategy,
DockerBrowserStrategy
)
__all__ = ['BrowserManager', 'BrowserProfileManager', 'DockerConfig', 'DockerRegistry', 'DockerUtils', 'BaseBrowserStrategy',
'PlaywrightBrowserStrategy', 'CDPBrowserStrategy', 'BuiltinBrowserStrategy',
'DockerBrowserStrategy']


@@ -1,183 +0,0 @@
# browser_hub_manager.py
import hashlib
import json
import asyncio
from typing import Dict, Optional
from .manager import BrowserManager, UnavailableBehavior
from ..async_configs import BrowserConfig
from ..async_logger import AsyncLogger
class BrowserHub:
"""
Manages Browser-Hub instances for sharing across multiple pipelines.
This class provides centralized management for browser resources, allowing
multiple pipelines to share browser instances efficiently, connect to
existing browser hubs, or create new ones with custom configurations.
"""
_instances: Dict[str, BrowserManager] = {}
_lock = asyncio.Lock()
@classmethod
async def get_or_create_hub(
cls,
config: Optional[BrowserConfig] = None,
hub_id: Optional[str] = None,
connection_info: Optional[str] = None,
logger: Optional[AsyncLogger] = None,
max_browsers_per_config: int = 10,
max_pages_per_browser: int = 5,
initial_pool_size: int = 1,
page_configs: Optional[list] = None
) -> BrowserManager:
"""
Get an existing Browser-Hub or create a new one based on parameters.
Args:
config: Browser configuration for new hub
hub_id: Identifier for the hub instance
connection_info: Connection string for existing hub
logger: Logger for recording events and errors
max_browsers_per_config: Maximum browsers per configuration
max_pages_per_browser: Maximum pages per browser
initial_pool_size: Initial number of browsers to create
page_configs: Optional configurations for pre-warming pages
Returns:
BrowserManager: The requested browser manager instance
"""
async with cls._lock:
# Scenario 3: Use existing hub via connection info
if connection_info:
instance_key = f"connection:{connection_info}"
if instance_key not in cls._instances:
cls._instances[instance_key] = await cls._connect_to_browser_hub(
connection_info, logger
)
return cls._instances[instance_key]
# Scenario 2: Custom configured hub
if config:
config_hash = cls._hash_config(config)
instance_key = hub_id or f"config:{config_hash}"
if instance_key not in cls._instances:
cls._instances[instance_key] = await cls._create_browser_hub(
config,
logger,
max_browsers_per_config,
max_pages_per_browser,
initial_pool_size,
page_configs
)
return cls._instances[instance_key]
# Scenario 1: Default hub
instance_key = "default"
if instance_key not in cls._instances:
cls._instances[instance_key] = await cls._create_default_browser_hub(
logger,
max_browsers_per_config,
max_pages_per_browser,
initial_pool_size
)
return cls._instances[instance_key]
@classmethod
async def _create_browser_hub(
cls,
config: BrowserConfig,
logger: Optional[AsyncLogger],
max_browsers_per_config: int,
max_pages_per_browser: int,
initial_pool_size: int,
page_configs: Optional[list]
) -> BrowserManager:
"""Create a new browser hub with the specified configuration."""
manager = BrowserManager(
browser_config=config,
logger=logger,
unavailable_behavior=UnavailableBehavior.ON_DEMAND,
max_browsers_per_config=max_browsers_per_config
)
# Initialize the pool
await manager.initialize_pool(
browser_configs=[config] if config else None,
browsers_per_config=initial_pool_size,
page_configs=page_configs
)
return manager
@classmethod
async def _create_default_browser_hub(
cls,
logger: Optional[AsyncLogger],
max_browsers_per_config: int,
max_pages_per_browser: int,
initial_pool_size: int
) -> BrowserManager:
"""Create a default browser hub with standard settings."""
config = BrowserConfig(headless=True)
return await cls._create_browser_hub(
config,
logger,
max_browsers_per_config,
max_pages_per_browser,
initial_pool_size,
None
)
@classmethod
async def _connect_to_browser_hub(
cls,
connection_info: str,
logger: Optional[AsyncLogger]
) -> BrowserManager:
"""
Connect to an existing browser hub.
Note: This is a placeholder for future remote connection functionality.
Currently creates a local instance.
"""
if logger:
logger.info(
message="Remote browser hub connections not yet implemented. Creating local instance.",
tag="BROWSER_HUB"
)
# For now, create a default local instance
return await cls._create_default_browser_hub(
logger,
max_browsers_per_config=10,
max_pages_per_browser=5,
initial_pool_size=1
)
@classmethod
def _hash_config(cls, config: BrowserConfig) -> str:
"""Create a hash of the browser configuration for identification."""
# Convert config to dictionary, excluding any callable objects
config_dict = config.__dict__.copy()
for key in list(config_dict.keys()):
if callable(config_dict[key]):
del config_dict[key]
# Convert to canonical JSON string
config_json = json.dumps(config_dict, sort_keys=True, default=str)
# Hash the JSON
config_hash = hashlib.sha256(config_json.encode()).hexdigest()
return config_hash
@classmethod
async def shutdown_all(cls):
"""Close all browser hub instances and clear the registry."""
async with cls._lock:
shutdown_tasks = []
for hub in cls._instances.values():
shutdown_tasks.append(hub.close())
if shutdown_tasks:
await asyncio.gather(*shutdown_tasks)
cls._instances.clear()


@@ -1,34 +0,0 @@
# ---------- Dockerfile ----------
FROM alpine:latest
# Combine everything in one RUN to keep layers minimal.
RUN apk update && apk upgrade && \
apk add --no-cache \
chromium \
nss \
freetype \
harfbuzz \
ca-certificates \
ttf-freefont \
socat \
curl && \
addgroup -S chromium && adduser -S chromium -G chromium && \
mkdir -p /data && chown chromium:chromium /data && \
rm -rf /var/cache/apk/*
# Copy start script, then chown/chmod in one step
COPY start.sh /home/chromium/start.sh
RUN chown chromium:chromium /home/chromium/start.sh && \
chmod +x /home/chromium/start.sh
USER chromium
WORKDIR /home/chromium
# Expose port used by socat (mapping 9222→9223 or whichever you prefer)
EXPOSE 9223
# Simple healthcheck: is the remote debug endpoint responding?
HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl -f http://localhost:9222/json/version || exit 1
CMD ["./start.sh"]


@@ -1,27 +0,0 @@
# ---------- Dockerfile (Idle Version) ----------
FROM alpine:latest
# Install only Chromium and its dependencies in a single layer
RUN apk update && apk upgrade && \
apk add --no-cache \
chromium \
nss \
freetype \
harfbuzz \
ca-certificates \
ttf-freefont \
socat \
curl && \
addgroup -S chromium && adduser -S chromium -G chromium && \
mkdir -p /data && chown chromium:chromium /data && \
rm -rf /var/cache/apk/*
ENV PATH="/usr/bin:/bin:/usr/sbin:/sbin"
# Switch to a non-root user for security
USER chromium
WORKDIR /home/chromium
# Idle: container does nothing except stay alive
CMD ["tail", "-f", "/dev/null"]


@@ -1,23 +0,0 @@
# Use Debian 12 (Bookworm) slim for a small, stable base image
FROM debian:bookworm-slim
ENV DEBIAN_FRONTEND=noninteractive
# Install Chromium, socat, and basic fonts
RUN apt-get update && apt-get install -y --no-install-recommends \
chromium \
wget \
curl \
socat \
fonts-freefont-ttf \
fonts-noto-color-emoji && \
apt-get clean && rm -rf /var/lib/apt/lists/*
# Copy start.sh and make it executable
COPY start.sh /start.sh
RUN chmod +x /start.sh
# Expose socat port (use host mapping, e.g. -p 9225:9223)
EXPOSE 9223
ENTRYPOINT ["/start.sh"]


@@ -1,264 +0,0 @@
"""Docker registry module for Crawl4AI.
This module provides a registry system for tracking and reusing Docker containers
across browser sessions, improving performance and resource utilization.
"""
import os
import json
import time
from typing import Dict, Optional
from ..utils import get_home_folder
class DockerRegistry:
"""Manages a registry of Docker containers used for browser automation.
This registry tracks containers by configuration hash, allowing reuse of appropriately
configured containers instead of creating new ones for each session.
Attributes:
registry_file (str): Path to the registry file
containers (dict): Dictionary of container information
port_map (dict): Map of host ports to container IDs
last_port (int): Last port assigned
"""
def __init__(self, registry_file: Optional[str] = None):
"""Initialize the registry with an optional path to the registry file.
Args:
registry_file: Path to the registry file. If None, uses default path.
"""
# Use the same file path as BuiltinBrowserStrategy by default
self.registry_file = registry_file or os.path.join(get_home_folder(), "builtin-browser", "browser_config.json")
self.containers = {} # Still maintain this for backward compatibility
self.port_map = {} # Will be populated from the shared file
self.last_port = 9222
self.load()
def load(self):
"""Load container registry from file."""
if os.path.exists(self.registry_file):
try:
with open(self.registry_file, 'r') as f:
registry_data = json.load(f)
# Initialize port_map if not present
if "port_map" not in registry_data:
registry_data["port_map"] = {}
self.port_map = registry_data.get("port_map", {})
# Extract container information from port_map entries of type "docker"
self.containers = {}
for port_str, browser_info in self.port_map.items():
if browser_info.get("browser_type") == "docker" and "container_id" in browser_info:
container_id = browser_info["container_id"]
self.containers[container_id] = {
"host_port": int(port_str),
"config_hash": browser_info.get("config_hash", ""),
"created_at": browser_info.get("created_at", time.time())
}
# Get last port if available
if "last_port" in registry_data:
self.last_port = registry_data["last_port"]
else:
# Find highest port in port_map
ports = [int(p) for p in self.port_map.keys() if p.isdigit()]
self.last_port = max(ports + [9222])
except Exception as e:
# Reset to defaults on error
print(f"Error loading registry: {e}")
self.containers = {}
self.port_map = {}
self.last_port = 9222
else:
# Initialize with defaults if file doesn't exist
self.containers = {}
self.port_map = {}
self.last_port = 9222
def save(self):
"""Save container registry to file."""
# First load the current file to avoid overwriting other browser types
current_data = {"port_map": {}, "last_port": self.last_port}
if os.path.exists(self.registry_file):
try:
with open(self.registry_file, 'r') as f:
current_data = json.load(f)
except Exception:
pass
# Create a new port_map dictionary
updated_port_map = {}
# First, copy all non-docker entries from the existing port_map
for port_str, browser_info in current_data.get("port_map", {}).items():
if browser_info.get("browser_type") != "docker":
updated_port_map[port_str] = browser_info
# Then add all current docker container entries
for container_id, container_info in self.containers.items():
port_str = str(container_info["host_port"])
updated_port_map[port_str] = {
"browser_type": "docker",
"container_id": container_id,
"cdp_url": f"http://localhost:{port_str}",
"config_hash": container_info["config_hash"],
"created_at": container_info["created_at"]
}
# Replace the port_map with our updated version
current_data["port_map"] = updated_port_map
# Update last_port
current_data["last_port"] = self.last_port
# Ensure directory exists
os.makedirs(os.path.dirname(self.registry_file), exist_ok=True)
# Save the updated data
with open(self.registry_file, 'w') as f:
json.dump(current_data, f, indent=2)
def register_container(self, container_id: str, host_port: int, config_hash: str, cdp_json_config: Optional[str] = None):
"""Register a container with its configuration hash and port mapping.
Args:
container_id: Docker container ID
host_port: Host port mapped to container
config_hash: Hash of configuration used to create container
cdp_json_config: CDP JSON configuration if available
"""
self.containers[container_id] = {
"host_port": host_port,
"config_hash": config_hash,
"created_at": time.time()
}
# Update port_map to maintain compatibility with BuiltinBrowserStrategy
port_str = str(host_port)
self.port_map[port_str] = {
"browser_type": "docker",
"container_id": container_id,
"cdp_url": f"http://localhost:{port_str}",
"config_hash": config_hash,
"created_at": time.time()
}
if cdp_json_config:
self.port_map[port_str]["cdp_json_config"] = cdp_json_config
self.save()
def unregister_container(self, container_id: str):
"""Unregister a container.
Args:
container_id: Docker container ID to unregister
"""
if container_id in self.containers:
host_port = self.containers[container_id]["host_port"]
port_str = str(host_port)
# Remove from port_map
if port_str in self.port_map:
del self.port_map[port_str]
# Remove from containers
del self.containers[container_id]
self.save()
async def find_container_by_config(self, config_hash: str, docker_utils) -> Optional[str]:
"""Find a container that matches the given configuration hash.
Args:
config_hash: Hash of configuration to match
docker_utils: DockerUtils instance to check running containers
Returns:
Container ID if found, None otherwise
"""
# Search through port_map for entries with matching config_hash
for port_str, browser_info in self.port_map.items():
if (browser_info.get("browser_type") == "docker" and
browser_info.get("config_hash") == config_hash and
"container_id" in browser_info):
container_id = browser_info["container_id"]
if await docker_utils.is_container_running(container_id):
return container_id
return None
def get_container_host_port(self, container_id: str) -> Optional[int]:
"""Get the host port mapped to the container.
Args:
container_id: Docker container ID
Returns:
Host port if container is registered, None otherwise
"""
if container_id in self.containers:
return self.containers[container_id]["host_port"]
return None
def get_next_available_port(self, docker_utils) -> int:
"""Get the next available host port for Docker mapping.
Args:
docker_utils: DockerUtils instance to check port availability
Returns:
Available port number
"""
# Start from last port + 1
port = self.last_port + 1
# Check if port is in use (either in our registry or system-wide)
while str(port) in self.port_map or docker_utils.is_port_in_use(port):
port += 1
# Update last port
self.last_port = port
self.save()
return port
def get_container_config_hash(self, container_id: str) -> Optional[str]:
"""Get the configuration hash for a container.
Args:
container_id: Docker container ID
Returns:
Configuration hash if container is registered, None otherwise
"""
if container_id in self.containers:
return self.containers[container_id]["config_hash"]
return None
def cleanup_stale_containers(self, docker_utils):
"""Clean up containers that are no longer running.
Args:
docker_utils: DockerUtils instance to check container status
"""
to_remove = []
# Find containers that are no longer running
for port_str, browser_info in self.port_map.items():
if browser_info.get("browser_type") == "docker" and "container_id" in browser_info:
container_id = browser_info["container_id"]
if not docker_utils.is_container_running(container_id):
to_remove.append(container_id)
# Remove stale containers
for container_id in to_remove:
self.unregister_container(container_id)


@@ -1,661 +0,0 @@
import os
import json
import asyncio
import hashlib
import tempfile
import shutil
import socket
import subprocess
from typing import Dict, List, Optional, Tuple, Union
class DockerUtils:
"""Utility class for Docker operations in browser automation.
This class provides methods for managing Docker images, containers,
and related operations needed for browser automation. It handles
image building, container lifecycle, port management, and registry operations.
Attributes:
DOCKER_FOLDER (str): Path to folder containing Docker files
DOCKER_CONNECT_FILE (str): Path to Dockerfile for connect mode
DOCKER_LAUNCH_FILE (str): Path to Dockerfile for launch mode
DOCKER_START_SCRIPT (str): Path to startup script for connect mode
DEFAULT_CONNECT_IMAGE (str): Default image name for connect mode
DEFAULT_LAUNCH_IMAGE (str): Default image name for launch mode
logger: Optional logger instance
"""
# File paths for Docker resources
DOCKER_FOLDER = os.path.join(os.path.dirname(__file__), "docker")
DOCKER_CONNECT_FILE = os.path.join(DOCKER_FOLDER, "connect.Dockerfile")
DOCKER_LAUNCH_FILE = os.path.join(DOCKER_FOLDER, "launch.Dockerfile")
DOCKER_START_SCRIPT = os.path.join(DOCKER_FOLDER, "start.sh")
# Default image names
DEFAULT_CONNECT_IMAGE = "crawl4ai/browser-connect:latest"
DEFAULT_LAUNCH_IMAGE = "crawl4ai/browser-launch:latest"
def __init__(self, logger=None):
"""Initialize Docker utilities.
Args:
logger: Optional logger for recording operations
"""
self.logger = logger
# Image Management Methods
async def check_image_exists(self, image_name: str) -> bool:
"""Check if a Docker image exists.
Args:
image_name: Name of the Docker image to check
Returns:
bool: True if the image exists, False otherwise
"""
cmd = ["docker", "image", "inspect", image_name]
try:
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
_, _ = await process.communicate()
return process.returncode == 0
except Exception as e:
if self.logger:
self.logger.debug(
f"Error checking if image exists: {str(e)}", tag="DOCKER"
)
return False
async def build_docker_image(
self,
image_name: str,
dockerfile_path: str,
files_to_copy: Dict[str, str] = None,
) -> bool:
"""Build a Docker image from a Dockerfile.
Args:
image_name: Name to give the built image
dockerfile_path: Path to the Dockerfile
files_to_copy: Dict of {dest_name: source_path} for files to copy to build context
Returns:
bool: True if image was built successfully, False otherwise
"""
# Create a temporary build context
with tempfile.TemporaryDirectory() as temp_dir:
# Copy the Dockerfile
shutil.copy(dockerfile_path, os.path.join(temp_dir, "Dockerfile"))
# Copy any additional files needed
if files_to_copy:
for dest_name, source_path in files_to_copy.items():
shutil.copy(source_path, os.path.join(temp_dir, dest_name))
# Build the image
cmd = ["docker", "build", "-t", image_name, temp_dir]
if self.logger:
self.logger.debug(
f"Building Docker image with command: {' '.join(cmd)}", tag="DOCKER"
)
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
stdout, stderr = await process.communicate()
if process.returncode != 0:
if self.logger:
self.logger.error(
message="Failed to build Docker image: {error}",
tag="DOCKER",
params={"error": stderr.decode()},
)
return False
if self.logger:
self.logger.success(
f"Successfully built Docker image: {image_name}", tag="DOCKER"
)
return True
async def ensure_docker_image_exists(
self, image_name: str, mode: str = "connect"
) -> str:
"""Ensure the required Docker image exists, creating it if necessary.
Args:
image_name: Name of the Docker image
mode: Either "connect" or "launch" to determine which image to build
Returns:
str: Name of the available Docker image
Raises:
Exception: If image doesn't exist and can't be built
"""
# If image name is not specified, use default based on mode
if not image_name:
image_name = (
self.DEFAULT_CONNECT_IMAGE
if mode == "connect"
else self.DEFAULT_LAUNCH_IMAGE
)
# Check if the image already exists
if await self.check_image_exists(image_name):
if self.logger:
self.logger.debug(
f"Docker image {image_name} already exists", tag="DOCKER"
)
return image_name
# If we're using a custom image that doesn't exist, warn and fail
if (
image_name != self.DEFAULT_CONNECT_IMAGE
and image_name != self.DEFAULT_LAUNCH_IMAGE
):
if self.logger:
self.logger.warning(
f"Custom Docker image {image_name} not found and cannot be automatically created",
tag="DOCKER",
)
raise Exception(f"Docker image {image_name} not found")
# Build the appropriate default image
if self.logger:
self.logger.info(
f"Docker image {image_name} not found, creating it now...", tag="DOCKER"
)
if mode == "connect":
success = await self.build_docker_image(
image_name,
self.DOCKER_CONNECT_FILE,
{"start.sh": self.DOCKER_START_SCRIPT},
)
else:
success = await self.build_docker_image(image_name, self.DOCKER_LAUNCH_FILE)
if not success:
raise Exception(f"Failed to create Docker image {image_name}")
return image_name
# Container Management Methods
async def create_container(
self,
image_name: str,
host_port: int,
container_name: Optional[str] = None,
volumes: List[str] = None,
network: Optional[str] = None,
env_vars: Dict[str, str] = None,
cpu_limit: float = 1.0,
memory_limit: str = "1.5g",
extra_args: List[str] = None,
) -> Optional[str]:
"""Create a new Docker container.
Args:
image_name: Docker image to use
host_port: Port on host to map to container port 9223
container_name: Optional name for the container
volumes: List of volume mappings (e.g., ["host_path:container_path"])
network: Optional Docker network to use
env_vars: Dictionary of environment variables
cpu_limit: CPU limit for the container
memory_limit: Memory limit for the container
extra_args: Additional docker run arguments
Returns:
str: Container ID if successful, None otherwise
"""
# Prepare container command
cmd = [
"docker",
"run",
"--detach",
]
# Add container name if specified
if container_name:
cmd.extend(["--name", container_name])
# Add port mapping
cmd.extend(["-p", f"{host_port}:9223"])
# Add volumes
if volumes:
for volume in volumes:
cmd.extend(["-v", volume])
# Add network if specified
if network:
cmd.extend(["--network", network])
# Add environment variables
if env_vars:
for key, value in env_vars.items():
cmd.extend(["-e", f"{key}={value}"])
# Add CPU and memory limits
if cpu_limit:
cmd.extend(["--cpus", str(cpu_limit)])
if memory_limit:
cmd.extend(["--memory", memory_limit])
cmd.extend(["--memory-swap", memory_limit])
if self.logger:
self.logger.debug(
f"Setting CPU limit: {cpu_limit}, Memory limit: {memory_limit}",
tag="DOCKER",
)
# Add extra args
if extra_args:
cmd.extend(extra_args)
# Add image
cmd.append(image_name)
if self.logger:
self.logger.debug(
f"Creating Docker container with command: {' '.join(cmd)}", tag="DOCKER"
)
# Run docker command
try:
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
stdout, stderr = await process.communicate()
if process.returncode != 0:
if self.logger:
self.logger.error(
message="Failed to create Docker container: {error}",
tag="DOCKER",
params={"error": stderr.decode()},
)
return None
# Get container ID
container_id = stdout.decode().strip()
if self.logger:
self.logger.success(
f"Created Docker container: {container_id[:12]}", tag="DOCKER"
)
return container_id
except Exception as e:
if self.logger:
self.logger.error(
message="Error creating Docker container: {error}",
tag="DOCKER",
params={"error": str(e)},
)
return None
async def is_container_running(self, container_id: str) -> bool:
"""Check if a container is running.
Args:
container_id: ID of the container to check
Returns:
bool: True if the container is running, False otherwise
"""
cmd = ["docker", "inspect", "--format", "{{.State.Running}}", container_id]
try:
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
stdout, _ = await process.communicate()
return process.returncode == 0 and stdout.decode().strip() == "true"
except Exception as e:
if self.logger:
self.logger.debug(
f"Error checking if container is running: {str(e)}", tag="DOCKER"
)
return False
async def wait_for_container_ready(
self, container_id: str, timeout: int = 30
) -> bool:
"""Wait for the container to be in running state.
Args:
container_id: ID of the container to wait for
timeout: Maximum time to wait in seconds
Returns:
bool: True if container is ready, False if timeout occurred
"""
for _ in range(timeout):
if await self.is_container_running(container_id):
return True
await asyncio.sleep(1)
if self.logger:
self.logger.warning(
f"Container {container_id[:12]} not ready after {timeout}s timeout",
tag="DOCKER",
)
return False
async def stop_container(self, container_id: str) -> bool:
"""Stop a Docker container.
Args:
container_id: ID of the container to stop
Returns:
bool: True if stopped successfully, False otherwise
"""
cmd = ["docker", "stop", container_id]
try:
process = await asyncio.create_subprocess_exec(*cmd)
await process.communicate()
if self.logger:
self.logger.debug(
f"Stopped container: {container_id[:12]}", tag="DOCKER"
)
return process.returncode == 0
except Exception as e:
if self.logger:
self.logger.warning(
message="Failed to stop container: {error}",
tag="DOCKER",
params={"error": str(e)},
)
return False
async def remove_container(self, container_id: str, force: bool = True) -> bool:
"""Remove a Docker container.
Args:
container_id: ID of the container to remove
force: Whether to force removal
Returns:
bool: True if removed successfully, False otherwise
"""
cmd = ["docker", "rm"]
if force:
cmd.append("-f")
cmd.append(container_id)
try:
process = await asyncio.create_subprocess_exec(*cmd)
await process.communicate()
if self.logger:
self.logger.debug(
f"Removed container: {container_id[:12]}", tag="DOCKER"
)
return process.returncode == 0
except Exception as e:
if self.logger:
self.logger.warning(
message="Failed to remove container: {error}",
tag="DOCKER",
params={"error": str(e)},
)
return False
# Container Command Execution Methods
async def exec_in_container(
self, container_id: str, command: List[str], detach: bool = False
) -> Tuple[int, str, str]:
"""Execute a command in a running container.
Args:
container_id: ID of the container
command: Command to execute as a list of strings
detach: Whether to run the command in detached mode
Returns:
Tuple of (return_code, stdout, stderr)
"""
cmd = ["docker", "exec"]
if detach:
cmd.append("-d")
cmd.append(container_id)
cmd.extend(command)
try:
process = await asyncio.create_subprocess_exec(
*cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
)
stdout, stderr = await process.communicate()
return process.returncode, stdout.decode(), stderr.decode()
except Exception as e:
if self.logger:
self.logger.error(
message="Error executing command in container: {error}",
tag="DOCKER",
params={"error": str(e)},
)
return -1, "", str(e)
async def start_socat_in_container(self, container_id: str) -> bool:
"""Start socat in the container to map port 9222 to 9223.
Args:
container_id: ID of the container
Returns:
bool: True if socat started successfully, False otherwise
"""
# Command to run socat as a background process
cmd = ["socat", "TCP-LISTEN:9223,fork", "TCP:localhost:9222"]
returncode, _, stderr = await self.exec_in_container(
container_id, cmd, detach=True
)
if returncode != 0:
if self.logger:
self.logger.error(
message="Failed to start socat in container: {error}",
tag="DOCKER",
params={"error": stderr},
)
return False
if self.logger:
self.logger.debug(
f"Started socat in container: {container_id[:12]}", tag="DOCKER"
)
# Wait a moment for socat to start
await asyncio.sleep(1)
return True
async def launch_chrome_in_container(
self, container_id: str, browser_args: List[str]
) -> bool:
"""Launch Chrome inside the container with specified arguments.
Args:
container_id: ID of the container
browser_args: Chrome command line arguments
Returns:
bool: True if Chrome started successfully, False otherwise
"""
# Build Chrome command
chrome_cmd = ["chromium"]
chrome_cmd.extend(browser_args)
returncode, _, stderr = await self.exec_in_container(
container_id, chrome_cmd, detach=True
)
if returncode != 0:
if self.logger:
self.logger.error(
message="Failed to launch Chrome in container: {error}",
tag="DOCKER",
params={"error": stderr},
)
return False
if self.logger:
self.logger.debug(
f"Launched Chrome in container: {container_id[:12]}", tag="DOCKER"
)
return True
async def get_process_id_in_container(
self, container_id: str, process_name: str
) -> Optional[int]:
"""Get the process ID for a process in the container.
Args:
container_id: ID of the container
process_name: Name pattern to search for
Returns:
int: Process ID if found, None otherwise
"""
cmd = ["pgrep", "-f", process_name]
returncode, stdout, _ = await self.exec_in_container(container_id, cmd)
if returncode == 0 and stdout.strip():
pid = int(stdout.strip().split("\n")[0])
return pid
return None
async def stop_process_in_container(self, container_id: str, pid: int) -> bool:
"""Stop a process in the container by PID.
Args:
container_id: ID of the container
pid: Process ID to stop
Returns:
bool: True if process was stopped, False otherwise
"""
cmd = ["kill", "-TERM", str(pid)]
returncode, _, stderr = await self.exec_in_container(container_id, cmd)
if returncode != 0:
if self.logger:
self.logger.warning(
message="Failed to stop process in container: {error}",
tag="DOCKER",
params={"error": stderr},
)
return False
if self.logger:
self.logger.debug(
f"Stopped process {pid} in container: {container_id[:12]}", tag="DOCKER"
)
return True
# Network and Port Methods
async def wait_for_cdp_ready(self, host_port: int, timeout: int = 10) -> Optional[dict]:
"""Wait for the CDP endpoint to be ready.
Args:
host_port: Port to check for CDP endpoint
timeout: Maximum time to wait in seconds
Returns:
dict: CDP JSON config if ready, None if timeout occurred
"""
import aiohttp
url = f"http://localhost:{host_port}/json/version"
for _ in range(timeout):
try:
async with aiohttp.ClientSession() as session:
async with session.get(url, timeout=1) as response:
if response.status == 200:
if self.logger:
self.logger.debug(
f"CDP endpoint ready on port {host_port}",
tag="DOCKER",
)
cdp_json_config = await response.json()
if self.logger:
self.logger.debug(
f"CDP JSON config: {cdp_json_config}", tag="DOCKER"
)
return cdp_json_config
except Exception:
pass
await asyncio.sleep(1)
if self.logger:
self.logger.warning(
f"CDP endpoint not ready on port {host_port} after {timeout}s timeout",
tag="DOCKER",
)
return None
def is_port_in_use(self, port: int) -> bool:
"""Check if a port is already in use on the host.
Args:
port: Port number to check
Returns:
bool: True if port is in use, False otherwise
"""
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
return s.connect_ex(("localhost", port)) == 0
def get_next_available_port(self, start_port: int = 9223) -> int:
"""Get the next available port starting from a given port.
Args:
start_port: Port number to start checking from
Returns:
int: First available port number
"""
port = start_port
while self.is_port_in_use(port):
port += 1
return port
# Configuration Hash Methods
def generate_config_hash(self, config_dict: Dict) -> str:
"""Generate a hash of the configuration for container matching.
Args:
config_dict: Dictionary of configuration parameters
Returns:
str: Hash string uniquely identifying this configuration
"""
# Convert to canonical JSON string and hash
config_json = json.dumps(config_dict, sort_keys=True)
return hashlib.sha256(config_json.encode()).hexdigest()
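# --- Usage sketch (illustrative, not part of the original module) ----------
# A hedged example of chaining the helpers above to attach to an existing
# container and expose its CDP endpoint. Assumes `utils` is a DockerUtils
# instance, `container_id` points at a running container that has Chromium
# and socat installed, and `host_port` is the host port mapped to the
# container's 9223 port; none of these are created here.
async def attach_cdp(utils, container_id: str, host_port: int, browser_args: list):
    if not await utils.wait_for_container_ready(container_id, timeout=30):
        return None
    # Relay host traffic on 9223 to Chrome's debugging port 9222 inside the container
    if not await utils.start_socat_in_container(container_id):
        return None
    if not await utils.launch_chrome_in_container(container_id, browser_args):
        return None
    # Returns the CDP /json/version payload, or None if the endpoint never came up
    return await utils.wait_for_cdp_ready(host_port, timeout=10)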

View File

@@ -1,177 +0,0 @@
"""Browser manager module for Crawl4AI.
This module provides a central browser management class that uses the
strategy pattern internally while maintaining the existing API.
It also implements a page pooling mechanism for improved performance.
"""
from typing import Optional, Tuple, List
from playwright.async_api import Page, BrowserContext
from ..async_logger import AsyncLogger
from ..async_configs import BrowserConfig, CrawlerRunConfig
from .strategies import (
BaseBrowserStrategy,
PlaywrightBrowserStrategy,
CDPBrowserStrategy,
BuiltinBrowserStrategy,
DockerBrowserStrategy
)
class BrowserManager:
"""Main interface for browser management in Crawl4AI.
This class maintains backward compatibility with the existing implementation
while using the strategy pattern internally for different browser types.
Attributes:
config (BrowserConfig): Configuration object containing all browser settings
logger: Logger instance for recording events and errors
browser: The browser instance
default_context: The default browser context
managed_browser: The managed browser instance
playwright: The Playwright instance
sessions: Dictionary to store session information
session_ttl: Session timeout in seconds
"""
def __init__(self, browser_config: Optional[BrowserConfig] = None, logger: Optional[AsyncLogger] = None):
"""Initialize the BrowserManager with a browser configuration.
Args:
browser_config: Configuration object containing all browser settings
logger: Logger instance for recording events and errors
"""
self.config = browser_config or BrowserConfig()
self.logger = logger
# Create strategy based on configuration
self.strategy = self._create_strategy()
# Initialize state variables for compatibility with existing code
self.browser = None
self.default_context = None
self.managed_browser = None
self.playwright = None
# For session management (from existing implementation)
self.sessions = {}
self.session_ttl = 1800 # 30 minutes
def _create_strategy(self) -> BaseBrowserStrategy:
"""Create appropriate browser strategy based on configuration.
Returns:
BaseBrowserStrategy: The selected browser strategy
"""
if self.config.browser_mode == "builtin":
return BuiltinBrowserStrategy(self.config, self.logger)
elif self.config.browser_mode == "docker":
if DockerBrowserStrategy is None:
if self.logger:
self.logger.error(
"Docker browser strategy requested but not available. "
"Falling back to PlaywrightBrowserStrategy.",
tag="BROWSER"
)
return PlaywrightBrowserStrategy(self.config, self.logger)
return DockerBrowserStrategy(self.config, self.logger)
elif self.config.browser_mode == "cdp" or self.config.cdp_url or self.config.use_managed_browser:
return CDPBrowserStrategy(self.config, self.logger)
else:
return PlaywrightBrowserStrategy(self.config, self.logger)
async def start(self):
"""Start the browser instance and set up the default context.
Returns:
self: For method chaining
"""
# Start the strategy
await self.strategy.start()
# Update legacy references
self.browser = self.strategy.browser
self.default_context = self.strategy.default_context
# Set browser process reference (for CDP strategy)
if hasattr(self.strategy, 'browser_process'):
self.managed_browser = self.strategy
# Set Playwright reference
self.playwright = self.strategy.playwright
# Sync sessions if needed
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
self.session_ttl = self.strategy.session_ttl
return self
async def get_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
"""Get a page for the given configuration.
Args:
crawlerRunConfig: Configuration object for the crawler run
Returns:
Tuple of (Page, BrowserContext)
"""
# Delegate to strategy
page, context = await self.strategy.get_page(crawlerRunConfig)
# Sync sessions if needed
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
return page, context
async def get_pages(self, crawlerRunConfig: CrawlerRunConfig, count: int = 1) -> List[Tuple[Page, BrowserContext]]:
"""Get multiple pages with the same configuration.
This method efficiently creates multiple browser pages using the same configuration,
which is useful for parallel crawling of multiple URLs.
Args:
crawlerRunConfig: Configuration for the pages
count: Number of pages to create
Returns:
List of (Page, Context) tuples
"""
# Delegate to strategy
pages = await self.strategy.get_pages(crawlerRunConfig, count)
# Sync sessions if needed
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
return pages
# Just for legacy compatibility
async def kill_session(self, session_id: str):
"""Kill a browser session and clean up resources.
Args:
session_id: The session ID to kill
"""
# Handle kill_session via our strategy if it supports it
await self.strategy.kill_session(session_id)
# sync sessions if needed
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
async def close(self):
"""Close the browser and clean up resources."""
# Delegate to strategy
await self.strategy.close()
# Reset legacy references
self.browser = None
self.default_context = None
self.managed_browser = None
self.playwright = None
self.sessions = {}
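# --- Usage sketch (illustrative, not part of the original module) ----------
# A minimal example of driving the manager directly. The exact keyword
# arguments accepted by BrowserConfig and CrawlerRunConfig (headless, url)
# are assumptions here, not a statement of the full API.
async def fetch_title(url: str) -> str:
    manager = BrowserManager(BrowserConfig(headless=True))
    await manager.start()
    try:
        page, _context = await manager.get_page(CrawlerRunConfig(url=url))
        await page.goto(url)
        return await page.title()
    finally:
        await manager.close()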

View File

@@ -1,853 +0,0 @@
"""Browser manager module for Crawl4AI.
This module provides a central browser management class that uses the
strategy pattern internally while maintaining the existing API.
It also implements browser pooling for improved performance.
"""
import asyncio
import hashlib
import json
import math
from enum import Enum
from typing import Dict, List, Optional, Tuple, Any
from playwright.async_api import Page, BrowserContext
from ..async_logger import AsyncLogger
from ..async_configs import BrowserConfig, CrawlerRunConfig
from .strategies import (
BaseBrowserStrategy,
PlaywrightBrowserStrategy,
CDPBrowserStrategy,
BuiltinBrowserStrategy,
DockerBrowserStrategy
)
class UnavailableBehavior(Enum):
"""Behavior when no browser is available."""
ON_DEMAND = "on_demand" # Create new browser on demand
PENDING = "pending" # Wait until a browser is available
EXCEPTION = "exception" # Raise an exception
class BrowserManager:
"""Main interface for browser management and pooling in Crawl4AI.
This class maintains backward compatibility with the existing implementation
while using the strategy pattern internally for different browser types.
It also implements browser pooling for improved performance.
Attributes:
config (BrowserConfig): Default configuration object for browsers
logger (AsyncLogger): Logger instance for recording events and errors
browser_pool (Dict): Dictionary to store browser instances by configuration
browser_in_use (Dict): Dictionary to track which browsers are in use
request_queues (Dict): Queues for pending requests by configuration
unavailable_behavior (UnavailableBehavior): Behavior when no browser is available
"""
def __init__(
self,
browser_config: Optional[BrowserConfig] = None,
logger: Optional[AsyncLogger] = None,
unavailable_behavior: UnavailableBehavior = UnavailableBehavior.EXCEPTION,
max_browsers_per_config: int = 10,
max_pages_per_browser: int = 5
):
"""Initialize the BrowserManager with a browser configuration.
Args:
browser_config: Configuration object containing all browser settings
logger: Logger instance for recording events and errors
unavailable_behavior: Behavior when no browser is available
max_browsers_per_config: Maximum number of browsers per configuration
max_pages_per_browser: Maximum number of pages per browser
"""
self.config = browser_config or BrowserConfig()
self.logger = logger
self.unavailable_behavior = unavailable_behavior
self.max_browsers_per_config = max_browsers_per_config
self.max_pages_per_browser = max_pages_per_browser
# Browser pool management
self.browser_pool = {} # config_hash -> list of browser strategies
self.browser_in_use = {} # strategy instance -> Boolean
self.request_queues = {} # config_hash -> asyncio.Queue()
self._browser_locks = {} # config_hash -> asyncio.Lock()
self._browser_pool_lock = asyncio.Lock() # Global lock for pool modifications
# Page pool management
self.page_pool = {} # (browser_config_hash, crawler_config_hash) -> list of (page, context, strategy)
self._page_pool_lock = asyncio.Lock()
self.browser_page_counts = {} # strategy instance -> current page count
self._page_count_lock = asyncio.Lock() # Lock for thread-safe access to page counts
# For session management (from existing implementation)
self.sessions = {}
self.session_ttl = 1800 # 30 minutes
# For legacy compatibility
self.browser = None
self.default_context = None
self.managed_browser = None
self.playwright = None
self.strategy = None
def _create_browser_config_hash(self, browser_config: BrowserConfig) -> str:
"""Create a hash of the browser configuration for browser pooling.
Args:
browser_config: Browser configuration
Returns:
str: Hash of the browser configuration
"""
# Convert config to dictionary, excluding any callable objects
config_dict = browser_config.__dict__.copy()
for key in list(config_dict.keys()):
if callable(config_dict[key]):
del config_dict[key]
# Convert to canonical JSON string
config_json = json.dumps(config_dict, sort_keys=True, default=str)
# Hash the JSON
config_hash = hashlib.sha256(config_json.encode()).hexdigest()
return config_hash
def _create_strategy(self, browser_config: BrowserConfig) -> BaseBrowserStrategy:
"""Create appropriate browser strategy based on configuration.
Args:
browser_config: Browser configuration
Returns:
BaseBrowserStrategy: The selected browser strategy
"""
if browser_config.browser_mode == "builtin":
return BuiltinBrowserStrategy(browser_config, self.logger)
elif browser_config.browser_mode == "docker":
if DockerBrowserStrategy is None:
if self.logger:
self.logger.error(
"Docker browser strategy requested but not available. "
"Falling back to PlaywrightBrowserStrategy.",
tag="BROWSER"
)
return PlaywrightBrowserStrategy(browser_config, self.logger)
return DockerBrowserStrategy(browser_config, self.logger)
elif browser_config.browser_mode == "cdp" or browser_config.cdp_url or browser_config.use_managed_browser:
return CDPBrowserStrategy(browser_config, self.logger)
else:
return PlaywrightBrowserStrategy(browser_config, self.logger)
async def initialize_pool(
self,
browser_configs: List[BrowserConfig] = None,
browsers_per_config: int = 1,
page_configs: Optional[List[Tuple[BrowserConfig, CrawlerRunConfig, int]]] = None
):
"""Initialize the browser pool with multiple browser configurations.
Args:
browser_configs: List of browser configurations to initialize
browsers_per_config: Number of browser instances per configuration
page_configs: Optional list of (browser_config, crawler_run_config, count) tuples
for pre-warming pages
Returns:
self: For method chaining
"""
if not browser_configs:
browser_configs = [self.config]
# Calculate how many browsers we'll need based on page_configs
browsers_needed = {}
if page_configs:
for browser_config, _, page_count in page_configs:
config_hash = self._create_browser_config_hash(browser_config)
# Calculate browsers based on max_pages_per_browser
browsers_needed_for_config = math.ceil(page_count / self.max_pages_per_browser)
browsers_needed[config_hash] = max(
browsers_needed.get(config_hash, 0),
browsers_needed_for_config
)
# Adjust browsers_per_config if needed to ensure enough capacity
config_browsers_needed = {}
for browser_config in browser_configs:
config_hash = self._create_browser_config_hash(browser_config)
# Estimate browsers needed based on page requirements
browsers_for_config = browsers_per_config
if config_hash in browsers_needed:
browsers_for_config = max(browsers_for_config, browsers_needed[config_hash])
config_browsers_needed[config_hash] = browsers_for_config
# Update max_browsers_per_config if needed
if browsers_for_config > self.max_browsers_per_config:
self.max_browsers_per_config = browsers_for_config
if self.logger:
self.logger.info(
f"Increased max_browsers_per_config to {browsers_for_config} to accommodate page requirements",
tag="POOL"
)
# Initialize locks and queues for each config
async with self._browser_pool_lock:
for browser_config in browser_configs:
config_hash = self._create_browser_config_hash(browser_config)
# Initialize lock for this config if needed
if config_hash not in self._browser_locks:
self._browser_locks[config_hash] = asyncio.Lock()
# Initialize queue for this config if needed
if config_hash not in self.request_queues:
self.request_queues[config_hash] = asyncio.Queue()
# Initialize pool for this config if needed
if config_hash not in self.browser_pool:
self.browser_pool[config_hash] = []
# Create browser instances for each configuration in parallel
browser_tasks = []
for browser_config in browser_configs:
config_hash = self._create_browser_config_hash(browser_config)
browsers_to_create = config_browsers_needed.get(
config_hash,
browsers_per_config
) - len(self.browser_pool.get(config_hash, []))
if browsers_to_create <= 0:
continue
for _ in range(browsers_to_create):
# Create a task for each browser initialization
task = self._create_and_add_browser(browser_config, config_hash)
browser_tasks.append(task)
# Wait for all browser initializations to complete
if browser_tasks:
if self.logger:
self.logger.info(f"Initializing {len(browser_tasks)} browsers in parallel...", tag="POOL")
await asyncio.gather(*browser_tasks)
# Pre-warm pages if requested
if page_configs:
page_tasks = []
for browser_config, crawler_run_config, count in page_configs:
task = self._prewarm_pages(browser_config, crawler_run_config, count)
page_tasks.append(task)
if page_tasks:
if self.logger:
self.logger.info(f"Pre-warming pages with {len(page_tasks)} configurations...", tag="POOL")
await asyncio.gather(*page_tasks)
# Update legacy references
if self.browser_pool and next(iter(self.browser_pool.values()), []):
strategy = next(iter(self.browser_pool.values()))[0]
self.strategy = strategy
self.browser = strategy.browser
self.default_context = strategy.default_context
self.playwright = strategy.playwright
return self
async def _create_and_add_browser(self, browser_config: BrowserConfig, config_hash: str):
"""Create and add a browser to the pool.
Args:
browser_config: Browser configuration
config_hash: Hash of the configuration
"""
try:
strategy = self._create_strategy(browser_config)
await strategy.start()
async with self._browser_pool_lock:
if config_hash not in self.browser_pool:
self.browser_pool[config_hash] = []
self.browser_pool[config_hash].append(strategy)
self.browser_in_use[strategy] = False
if self.logger:
self.logger.debug(
f"Added browser to pool: {browser_config.browser_type} "
f"({browser_config.browser_mode})",
tag="POOL"
)
except Exception as e:
if self.logger:
self.logger.error(
f"Failed to create browser: {str(e)}",
tag="POOL"
)
raise
def _make_config_signature(self, crawlerRunConfig: CrawlerRunConfig) -> str:
"""Create a signature hash from crawler configuration.
Args:
crawlerRunConfig: Crawler run configuration
Returns:
str: Hash of the crawler configuration
"""
config_dict = crawlerRunConfig.__dict__.copy()
# Exclude items that do not affect page creation
ephemeral_keys = [
"session_id",
"js_code",
"scraping_strategy",
"extraction_strategy",
"chunking_strategy",
"cache_mode",
"content_filter",
"semaphore_count",
"url"
]
for key in ephemeral_keys:
if key in config_dict:
del config_dict[key]
# Convert to canonical JSON string
config_json = json.dumps(config_dict, sort_keys=True, default=str)
# Hash the JSON
config_hash = hashlib.sha256(config_json.encode("utf-8")).hexdigest()
return config_hash
async def _prewarm_pages(
self,
browser_config: BrowserConfig,
crawler_run_config: CrawlerRunConfig,
count: int
):
"""Pre-warm pages for a specific configuration.
Args:
browser_config: Browser configuration
crawler_run_config: Crawler run configuration
count: Number of pages to pre-warm
"""
try:
# Create individual page tasks and run them in parallel
browser_config_hash = self._create_browser_config_hash(browser_config)
crawler_config_hash = self._make_config_signature(crawler_run_config)
async def get_single_page():
strategy = await self.get_available_browser(browser_config)
try:
page, context = await strategy.get_page(crawler_run_config)
# Store config hashes on the page object for later retrieval
setattr(page, "_browser_config_hash", browser_config_hash)
setattr(page, "_crawler_config_hash", crawler_config_hash)
return page, context, strategy
except Exception as e:
# Release the browser back to the pool
await self.release_browser(strategy, browser_config)
raise e
# Create tasks for parallel execution
page_tasks = [get_single_page() for _ in range(count)]
# Execute all page creation tasks in parallel
pages_contexts_strategies = await asyncio.gather(*page_tasks)
# Add pages to the page pool
browser_config_hash = self._create_browser_config_hash(browser_config)
crawler_config_hash = self._make_config_signature(crawler_run_config)
pool_key = (browser_config_hash, crawler_config_hash)
async with self._page_pool_lock:
if pool_key not in self.page_pool:
self.page_pool[pool_key] = []
# Add all pages to the pool
self.page_pool[pool_key].extend(pages_contexts_strategies)
if self.logger:
self.logger.debug(
f"Pre-warmed {count} pages in parallel with config {crawler_run_config}",
tag="POOL"
)
except Exception as e:
if self.logger:
self.logger.error(
f"Failed to pre-warm pages: {str(e)}",
tag="POOL"
)
raise
async def get_available_browser(
self,
browser_config: Optional[BrowserConfig] = None
) -> BaseBrowserStrategy:
"""Get an available browser from the pool for the given configuration.
Args:
browser_config: Browser configuration to match
Returns:
BaseBrowserStrategy: An available browser strategy
Raises:
Exception: If no browser is available and behavior is EXCEPTION
"""
browser_config = browser_config or self.config
config_hash = self._create_browser_config_hash(browser_config)
async with self._browser_locks.setdefault(config_hash, asyncio.Lock()):
# Check if we have browsers for this config
if config_hash not in self.browser_pool or not self.browser_pool[config_hash]:
if self.unavailable_behavior == UnavailableBehavior.ON_DEMAND:
# Create a new browser on demand
if self.logger:
self.logger.info(
f"1> Creating new browser on demand for config {config_hash[:8]}",
tag="POOL"
)
# Initialize pool for this config if needed
async with self._browser_pool_lock:
if config_hash not in self.browser_pool:
self.browser_pool[config_hash] = []
strategy = self._create_strategy(browser_config)
await strategy.start()
self.browser_pool[config_hash].append(strategy)
self.browser_in_use[strategy] = False
elif self.unavailable_behavior == UnavailableBehavior.EXCEPTION:
raise Exception(f"No browsers available for configuration {config_hash[:8]}")
# Check for an available browser with capacity in the pool
for strategy in self.browser_pool[config_hash]:
# Check if this browser has capacity for more pages
async with self._page_count_lock:
current_pages = self.browser_page_counts.get(strategy, 0)
if current_pages < self.max_pages_per_browser:
# Increment the page count
self.browser_page_counts[strategy] = current_pages + 1
self.browser_in_use[strategy] = True
# Get browser information for better logging
browser_type = getattr(strategy.config, 'browser_type', 'unknown')
browser_mode = getattr(strategy.config, 'browser_mode', 'unknown')
strategy_id = id(strategy) # Use object ID as a unique identifier
if self.logger:
self.logger.debug(
f"Selected browser #{strategy_id} ({browser_type}/{browser_mode}) - "
f"pages: {current_pages+1}/{self.max_pages_per_browser}",
tag="POOL"
)
return strategy
# All browsers are at capacity or in use
if self.unavailable_behavior == UnavailableBehavior.ON_DEMAND:
# Check if we've reached the maximum number of browsers
if len(self.browser_pool[config_hash]) >= self.max_browsers_per_config:
if self.logger:
self.logger.warning(
f"Maximum browsers reached for config {config_hash[:8]} and all at page capacity",
tag="POOL"
)
if self.unavailable_behavior == UnavailableBehavior.EXCEPTION:
raise Exception("Maximum browsers reached and all at page capacity")
# Create a new browser on demand
if self.logger:
self.logger.info(
f"2> Creating new browser on demand for config {config_hash[:8]}",
tag="POOL"
)
strategy = self._create_strategy(browser_config)
await strategy.start()
async with self._browser_pool_lock:
self.browser_pool[config_hash].append(strategy)
self.browser_in_use[strategy] = True
return strategy
# If we get here, either behavior is EXCEPTION or PENDING
if self.unavailable_behavior == UnavailableBehavior.EXCEPTION:
raise Exception(f"All browsers in use or at page capacity for configuration {config_hash[:8]}")
# For PENDING behavior, set up waiting mechanism
if config_hash not in self.request_queues:
self.request_queues[config_hash] = asyncio.Queue()
# Create a future to wait on
future = asyncio.Future()
await self.request_queues[config_hash].put(future)
if self.logger:
self.logger.debug(
f"Waiting for available browser for config {config_hash[:8]}",
tag="POOL"
)
# Wait for a browser to become available
strategy = await future
return strategy
async def get_page(
self,
crawlerRunConfig: CrawlerRunConfig,
browser_config: Optional[BrowserConfig] = None
) -> Tuple[Page, BrowserContext, BaseBrowserStrategy]:
"""Get a page from the browser pool."""
browser_config = browser_config or self.config
# Check if we have a pre-warmed page available
browser_config_hash = self._create_browser_config_hash(browser_config)
crawler_config_hash = self._make_config_signature(crawlerRunConfig)
pool_key = (browser_config_hash, crawler_config_hash)
# Try to get a page from the pool
async with self._page_pool_lock:
if pool_key in self.page_pool and self.page_pool[pool_key]:
# Get a page from the pool
page, context, strategy = self.page_pool[pool_key].pop()
# Mark browser as in use (it already is, but ensure consistency)
self.browser_in_use[strategy] = True
if self.logger:
self.logger.debug(
f"Using pre-warmed page for config {crawler_config_hash[:8]}",
tag="POOL"
)
# Note: We don't increment page count since it was already counted when created
return page, context, strategy
# No pre-warmed page available, create a new one
# get_available_browser already increments the page count
strategy = await self.get_available_browser(browser_config)
try:
# Get a page from the browser
page, context = await strategy.get_page(crawlerRunConfig)
# Store config hashes on the page object for later retrieval
setattr(page, "_browser_config_hash", browser_config_hash)
setattr(page, "_crawler_config_hash", crawler_config_hash)
return page, context, strategy
except Exception as e:
# Release the browser back to the pool and decrement the page count
await self.release_browser(strategy, browser_config, decrement_page_count=True)
raise e
async def release_page(
self,
page: Page,
strategy: BaseBrowserStrategy,
browser_config: Optional[BrowserConfig] = None,
keep_alive: bool = True,
return_to_pool: bool = True
):
"""Release a page back to the pool."""
browser_config = browser_config or self.config
page_url = page.url if page else None
# If not keeping the page alive, close it and decrement count
if not keep_alive:
try:
await page.close()
except Exception as e:
if self.logger:
self.logger.error(
f"Error closing page: {str(e)}",
tag="POOL"
)
# Release the browser with page count decrement
await self.release_browser(strategy, browser_config, decrement_page_count=True)
return
# If returning to pool
if return_to_pool:
# Get the configuration hashes from the page object
browser_config_hash = getattr(page, "_browser_config_hash", None)
crawler_config_hash = getattr(page, "_crawler_config_hash", None)
if browser_config_hash and crawler_config_hash:
pool_key = (browser_config_hash, crawler_config_hash)
async with self._page_pool_lock:
if pool_key not in self.page_pool:
self.page_pool[pool_key] = []
# Add page back to the pool
self.page_pool[pool_key].append((page, page.context, strategy))
if self.logger:
self.logger.debug(
f"Returned page to pool for config {crawler_config_hash[:8]}, url: {page_url}",
tag="POOL"
)
# Note: We don't decrement the page count here since the page is still "in use"
# from the browser's perspective, just in our pool
return
else:
# If we can't identify the configuration, log a warning
if self.logger:
self.logger.warning(
"Cannot return page to pool - missing configuration hashes",
tag="POOL"
)
# If we got here, we couldn't return to pool, so just release the browser
await self.release_browser(strategy, browser_config, decrement_page_count=True)
async def release_browser(
self,
strategy: BaseBrowserStrategy,
browser_config: Optional[BrowserConfig] = None,
decrement_page_count: bool = True
):
"""Release a browser back to the pool."""
browser_config = browser_config or self.config
config_hash = self._create_browser_config_hash(browser_config)
# Decrement page count
if decrement_page_count:
async with self._page_count_lock:
current_count = self.browser_page_counts.get(strategy, 1)
self.browser_page_counts[strategy] = max(0, current_count - 1)
if self.logger:
self.logger.debug(
f"Decremented page count for browser (now: {self.browser_page_counts[strategy]})",
tag="POOL"
)
# Mark as not in use
self.browser_in_use[strategy] = False
# Process any waiting requests
if config_hash in self.request_queues and not self.request_queues[config_hash].empty():
future = await self.request_queues[config_hash].get()
if not future.done():
future.set_result(strategy)
async def get_pages(
self,
crawlerRunConfig: CrawlerRunConfig,
count: int = 1,
browser_config: Optional[BrowserConfig] = None
) -> List[Tuple[Page, BrowserContext, BaseBrowserStrategy]]:
"""Get multiple pages from the browser pool.
Args:
crawlerRunConfig: Configuration for the crawler run
count: Number of pages to get
browser_config: Browser configuration to use
Returns:
List of (Page, Context, Strategy) tuples
"""
results = []
for _ in range(count):
try:
result = await self.get_page(crawlerRunConfig, browser_config)
results.append(result)
except Exception as e:
# Release any pages we've already gotten
for page, _, strategy in results:
await self.release_page(page, strategy, browser_config)
raise e
return results
async def get_page_pool_status(self) -> Dict[str, Any]:
"""Get information about the page pool status.
Returns:
Dict with page pool status information
"""
status = {
"total_pooled_pages": 0,
"configs": {}
}
async with self._page_pool_lock:
for (browser_hash, crawler_hash), pages in self.page_pool.items():
config_key = f"{browser_hash[:8]}_{crawler_hash[:8]}"
status["configs"][config_key] = len(pages)
status["total_pooled_pages"] += len(pages)
if self.logger:
self.logger.debug(
f"Page pool status: {status['total_pooled_pages']} pages available",
tag="POOL"
)
return status
async def get_pool_status(self) -> Dict[str, Any]:
"""Get information about the browser pool status.
Returns:
Dict with pool status information
"""
status = {
"total_browsers": 0,
"browsers_in_use": 0,
"total_pages": 0,
"configs": {}
}
for config_hash, strategies in self.browser_pool.items():
config_pages = 0
in_use = 0
for strategy in strategies:
is_in_use = self.browser_in_use.get(strategy, False)
if is_in_use:
in_use += 1
# Get page count for this browser
try:
page_count = len(await strategy.get_opened_pages())
config_pages += page_count
except Exception as e:
if self.logger:
self.logger.error(f"Error getting page count: {str(e)}", tag="POOL")
config_status = {
"total_browsers": len(strategies),
"browsers_in_use": in_use,
"pages_open": config_pages,
"waiting_requests": self.request_queues.get(config_hash, asyncio.Queue()).qsize(),
"max_capacity": len(strategies) * self.max_pages_per_browser,
"utilization_pct": round((config_pages / (len(strategies) * self.max_pages_per_browser)) * 100, 1)
if strategies else 0
}
status["configs"][config_hash] = config_status
status["total_browsers"] += config_status["total_browsers"]
status["browsers_in_use"] += config_status["browsers_in_use"]
status["total_pages"] += config_pages
# Add overall utilization
if status["total_browsers"] > 0:
max_capacity = status["total_browsers"] * self.max_pages_per_browser
status["overall_utilization_pct"] = round((status["total_pages"] / max_capacity) * 100, 1)
else:
status["overall_utilization_pct"] = 0
return status
async def start(self):
"""Start at least one browser instance in the pool.
This method is kept for backward compatibility.
Returns:
self: For method chaining
"""
await self.initialize_pool([self.config], 1)
return self
async def kill_session(self, session_id: str):
"""Kill a browser session and clean up resources.
Delegated to the strategy. This method is kept for backward compatibility.
Args:
session_id: The session ID to kill
"""
if not self.strategy:
return
await self.strategy.kill_session(session_id)
# Sync sessions
if hasattr(self.strategy, 'sessions'):
self.sessions = self.strategy.sessions
async def close(self):
"""Close all browsers in the pool and clean up resources."""
# Close all browsers in the pool
for strategies in self.browser_pool.values():
for strategy in strategies:
try:
await strategy.close()
except Exception as e:
if self.logger:
self.logger.error(
f"Error closing browser: {str(e)}",
tag="POOL"
)
# Clear pool data
self.browser_pool = {}
self.browser_in_use = {}
# Reset legacy references
self.browser = None
self.default_context = None
self.managed_browser = None
self.playwright = None
self.strategy = None
self.sessions = {}
async def create_browser_manager(
browser_config: Optional[BrowserConfig] = None,
logger: Optional[AsyncLogger] = None,
unavailable_behavior: UnavailableBehavior = UnavailableBehavior.EXCEPTION,
max_browsers_per_config: int = 10,
initial_pool_size: int = 1,
page_configs: Optional[List[Tuple[BrowserConfig, CrawlerRunConfig, int]]] = None
) -> BrowserManager:
"""Factory function to create and initialize a BrowserManager.
Args:
browser_config: Configuration for the browsers
logger: Logger for recording events
unavailable_behavior: Behavior when no browser is available
max_browsers_per_config: Maximum browsers per configuration
initial_pool_size: Initial number of browsers per configuration
page_configs: Optional configurations for pre-warming pages
Returns:
Initialized BrowserManager
"""
manager = BrowserManager(
browser_config=browser_config,
logger=logger,
unavailable_behavior=unavailable_behavior,
max_browsers_per_config=max_browsers_per_config
)
await manager.initialize_pool(
[browser_config] if browser_config else None,
initial_pool_size,
page_configs
)
return manager
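# --- Usage sketch (illustrative, not part of the original module) ----------
# Pre-warming a small pool and drawing pages from it. The BrowserConfig and
# CrawlerRunConfig arguments shown (headless, url handling) are assumptions;
# only the pooling calls mirror the API defined above.
async def crawl_sequentially(urls):
    bcfg = BrowserConfig(headless=True)
    rcfg = CrawlerRunConfig()
    manager = await create_browser_manager(
        browser_config=bcfg,
        unavailable_behavior=UnavailableBehavior.ON_DEMAND,
        initial_pool_size=2,
        page_configs=[(bcfg, rcfg, 4)],  # pre-warm 4 pages for this config
    )
    try:
        for url in urls:
            page, _context, strategy = await manager.get_page(rcfg, bcfg)
            await page.goto(url)
            # keep_alive=True returns the page to the pool for reuse
            await manager.release_page(page, strategy, bcfg)
    finally:
        await manager.close()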

View File

@@ -1,143 +0,0 @@
"""Docker configuration module for Crawl4AI browser automation.
This module provides configuration classes for Docker-based browser automation,
allowing flexible configuration of Docker containers for browsing.
"""
from typing import Dict, List, Optional
class DockerConfig:
"""Configuration for Docker-based browser automation.
This class contains Docker-specific settings to avoid cluttering BrowserConfig.
Attributes:
mode (str): Docker operation mode - "connect" or "launch".
- "connect": Uses a container with Chrome already running
- "launch": Dynamically configures and starts Chrome in container
image (str): Docker image to use. If None, defaults from DockerUtils are used.
registry_file (str): Path to container registry file for persistence.
persistent (bool): Keep container running after browser closes.
remove_on_exit (bool): Remove container on exit when not persistent.
network (str): Docker network to use.
volumes (List[str]): Volume mappings (e.g., ["host_path:container_path"]).
env_vars (Dict[str, str]): Environment variables to set in container.
extra_args (List[str]): Additional docker run arguments.
host_port (int): Host port to map to container's 9223 port.
user_data_dir (str): Path to user data directory on host.
container_user_data_dir (str): Path to user data directory in container.
"""
def __init__(
self,
mode: str = "connect", # "connect" or "launch"
image: Optional[str] = None, # Docker image to use
registry_file: Optional[str] = None, # Path to registry file
persistent: bool = False, # Keep container running after browser closes
remove_on_exit: bool = True, # Remove container on exit when not persistent
network: Optional[str] = None, # Docker network to use
volumes: List[str] = None, # Volume mappings
cpu_limit: float = 1.0, # CPU limit for the container
memory_limit: str = "1.5g", # Memory limit for the container
env_vars: Dict[str, str] = None, # Environment variables
host_port: Optional[int] = None, # Host port to map to container's 9223
user_data_dir: Optional[str] = None, # Path to user data directory on host
container_user_data_dir: str = "/data", # Path to user data directory in container
extra_args: List[str] = None, # Additional docker run arguments
):
"""Initialize Docker configuration.
Args:
mode: Docker operation mode ("connect" or "launch")
image: Docker image to use
registry_file: Path to container registry file
persistent: Whether to keep container running after browser closes
remove_on_exit: Whether to remove container on exit when not persistent
network: Docker network to use
volumes: Volume mappings as list of strings
cpu_limit: CPU limit for the container
memory_limit: Memory limit for the container
env_vars: Environment variables as dictionary
extra_args: Additional docker run arguments
host_port: Host port to map to container's 9223
user_data_dir: Path to user data directory on host
container_user_data_dir: Path to user data directory in container
"""
self.mode = mode
self.image = image # If None, defaults will be used from DockerUtils
self.registry_file = registry_file
self.persistent = persistent
self.remove_on_exit = remove_on_exit
self.network = network
self.volumes = volumes or []
self.cpu_limit = cpu_limit
self.memory_limit = memory_limit
self.env_vars = env_vars or {}
self.extra_args = extra_args or []
self.host_port = host_port
self.user_data_dir = user_data_dir
self.container_user_data_dir = container_user_data_dir
def to_dict(self) -> Dict:
"""Convert this configuration to a dictionary.
Returns:
Dictionary representation of this configuration
"""
return {
"mode": self.mode,
"image": self.image,
"registry_file": self.registry_file,
"persistent": self.persistent,
"remove_on_exit": self.remove_on_exit,
"network": self.network,
"volumes": self.volumes,
"cpu_limit": self.cpu_limit,
"memory_limit": self.memory_limit,
"env_vars": self.env_vars,
"extra_args": self.extra_args,
"host_port": self.host_port,
"user_data_dir": self.user_data_dir,
"container_user_data_dir": self.container_user_data_dir
}
@staticmethod
def from_kwargs(kwargs: Dict) -> "DockerConfig":
"""Create a DockerConfig from a dictionary of keyword arguments.
Args:
kwargs: Dictionary of configuration options
Returns:
New DockerConfig instance
"""
return DockerConfig(
mode=kwargs.get("mode", "connect"),
image=kwargs.get("image"),
registry_file=kwargs.get("registry_file"),
persistent=kwargs.get("persistent", False),
remove_on_exit=kwargs.get("remove_on_exit", True),
network=kwargs.get("network"),
volumes=kwargs.get("volumes"),
cpu_limit=kwargs.get("cpu_limit", 1.0),
memory_limit=kwargs.get("memory_limit", "1.5g"),
env_vars=kwargs.get("env_vars"),
extra_args=kwargs.get("extra_args"),
host_port=kwargs.get("host_port"),
user_data_dir=kwargs.get("user_data_dir"),
container_user_data_dir=kwargs.get("container_user_data_dir", "/data")
)
def clone(self, **kwargs) -> "DockerConfig":
"""Create a copy of this configuration with updated values.
Args:
**kwargs: Key-value pairs of configuration options to update
Returns:
DockerConfig: A new instance with the specified updates
"""
config_dict = self.to_dict()
config_dict.update(kwargs)
return DockerConfig.from_kwargs(config_dict)
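# --- Usage sketch (illustrative, not part of the original module) ----------
# Building a launch-mode configuration and deriving a persistent variant via
# clone(). The image name and volume mapping below are placeholders only.
base = DockerConfig(
    mode="launch",
    image="crawl4ai/browser:latest",  # placeholder image name
    host_port=9223,
    volumes=["/tmp/chrome-profile:/data"],
    env_vars={"TZ": "UTC"},
)
persistent = base.clone(persistent=True, remove_on_exit=False)
assert persistent.to_dict()["persistent"] is True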

View File

@@ -1,457 +0,0 @@
"""Browser profile management module for Crawl4AI.
This module provides functionality for creating and managing browser profiles
that can be used for authenticated browsing.
"""
import os
import asyncio
import signal
import sys
import datetime
import uuid
import shutil
from typing import List, Dict, Optional, Any
from colorama import Fore, Style, init
from ..async_configs import BrowserConfig
from ..async_logger import AsyncLogger, AsyncLoggerBase
from ..utils import get_home_folder
class BrowserProfileManager:
"""Manages browser profiles for Crawl4AI.
This class provides functionality to create and manage browser profiles
that can be used for authenticated browsing with Crawl4AI.
Profiles are stored by default in ~/.crawl4ai/profiles/
"""
def __init__(self, logger: Optional[AsyncLoggerBase] = None):
"""Initialize the BrowserProfileManager.
Args:
logger: Logger for outputting messages. If None, a default AsyncLogger is created.
"""
# Initialize colorama for colorful terminal output
init()
# Create a logger if not provided
if logger is None:
self.logger = AsyncLogger(verbose=True)
elif not isinstance(logger, AsyncLoggerBase):
self.logger = AsyncLogger(verbose=True)
else:
self.logger = logger
# Ensure profiles directory exists
self.profiles_dir = os.path.join(get_home_folder(), "profiles")
os.makedirs(self.profiles_dir, exist_ok=True)
async def create_profile(self,
profile_name: Optional[str] = None,
browser_config: Optional[BrowserConfig] = None) -> Optional[str]:
"""Create a browser profile interactively.
Args:
profile_name: Name for the profile. If None, a name is generated.
browser_config: Configuration for the browser. If None, a default configuration is used.
Returns:
Path to the created profile directory, or None if creation failed
"""
# Create default browser config if none provided
if browser_config is None:
browser_config = BrowserConfig(
browser_type="chromium",
headless=False, # Must be visible for user interaction
verbose=True
)
else:
# Ensure headless is False for user interaction
browser_config.headless = False
# Generate profile name if not provided
if not profile_name:
timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
profile_name = f"profile_{timestamp}_{uuid.uuid4().hex[:6]}"
# Sanitize profile name (replace spaces and special chars)
profile_name = "".join(c if c.isalnum() or c in "-_" else "_" for c in profile_name)
# Set user data directory
profile_path = os.path.join(self.profiles_dir, profile_name)
os.makedirs(profile_path, exist_ok=True)
# Print instructions for the user with colorama formatting
border = f"{Fore.CYAN}{'='*80}{Style.RESET_ALL}"
self.logger.info(f"\n{border}", tag="PROFILE")
self.logger.info(f"Creating browser profile: {Fore.GREEN}{profile_name}{Style.RESET_ALL}", tag="PROFILE")
self.logger.info(f"Profile directory: {Fore.YELLOW}{profile_path}{Style.RESET_ALL}", tag="PROFILE")
self.logger.info("\nInstructions:", tag="PROFILE")
self.logger.info("1. A browser window will open for you to set up your profile.", tag="PROFILE")
self.logger.info(f"2. {Fore.CYAN}Log in to websites{Style.RESET_ALL}, configure settings, etc. as needed.", tag="PROFILE")
self.logger.info(f"3. When you're done, {Fore.YELLOW}press 'q' in this terminal{Style.RESET_ALL} to close the browser.", tag="PROFILE")
self.logger.info("4. The profile will be saved and ready to use with Crawl4AI.", tag="PROFILE")
self.logger.info(f"{border}\n", tag="PROFILE")
# Import the necessary classes with local imports to avoid circular references
from .strategies import CDPBrowserStrategy
# Set browser config to use the profile path
browser_config.user_data_dir = profile_path
# Create a CDP browser strategy for the profile creation
browser_strategy = CDPBrowserStrategy(browser_config, self.logger)
# Set up signal handlers to ensure cleanup on interrupt
original_sigint = signal.getsignal(signal.SIGINT)
original_sigterm = signal.getsignal(signal.SIGTERM)
# Define cleanup handler for signals
async def cleanup_handler(sig, frame):
self.logger.warning("\nCleaning up browser process...", tag="PROFILE")
await browser_strategy.close()
# Restore original signal handlers
signal.signal(signal.SIGINT, original_sigint)
signal.signal(signal.SIGTERM, original_sigterm)
if sig == signal.SIGINT:
self.logger.error("Profile creation interrupted. Profile may be incomplete.", tag="PROFILE")
sys.exit(1)
# Set signal handlers
def sigint_handler(sig, frame):
asyncio.create_task(cleanup_handler(sig, frame))
signal.signal(signal.SIGINT, sigint_handler)
signal.signal(signal.SIGTERM, sigint_handler)
# Event to signal when user is done with the browser
user_done_event = asyncio.Event()
# Run keyboard input loop in a separate task
async def listen_for_quit_command():
import termios
import tty
import select
# First output the prompt
self.logger.info(f"{Fore.CYAN}Press '{Fore.WHITE}q{Fore.CYAN}' when you've finished using the browser...{Style.RESET_ALL}", tag="PROFILE")
# Save original terminal settings
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
# Switch to non-canonical mode (no line buffering)
tty.setcbreak(fd)
while True:
# Check if input is available (non-blocking)
readable, _, _ = select.select([sys.stdin], [], [], 0.5)
if readable:
key = sys.stdin.read(1)
if key.lower() == 'q':
self.logger.info(f"{Fore.GREEN}Closing browser and saving profile...{Style.RESET_ALL}", tag="PROFILE")
user_done_event.set()
return
# Check if the browser process has already exited
if browser_strategy.browser_process and browser_strategy.browser_process.poll() is not None:
self.logger.info("Browser already closed. Ending input listener.", tag="PROFILE")
user_done_event.set()
return
await asyncio.sleep(0.1)
finally:
# Restore terminal settings
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
try:
# Start the browser
await browser_strategy.start()
# Check if browser started successfully
if not browser_strategy.browser_process:
self.logger.error("Failed to start browser process.", tag="PROFILE")
return None
self.logger.info(f"Browser launched. {Fore.CYAN}Waiting for you to finish...{Style.RESET_ALL}", tag="PROFILE")
# Start listening for keyboard input
listener_task = asyncio.create_task(listen_for_quit_command())
# Wait for either the user to press 'q' or for the browser process to exit naturally
while not user_done_event.is_set() and browser_strategy.browser_process.poll() is None:
await asyncio.sleep(0.5)
# Cancel the listener task if it's still running
if not listener_task.done():
listener_task.cancel()
try:
await listener_task
except asyncio.CancelledError:
pass
# If the browser is still running and the user pressed 'q', terminate it
if browser_strategy.browser_process.poll() is None and user_done_event.is_set():
self.logger.info("Terminating browser process...", tag="PROFILE")
await browser_strategy.close()
self.logger.success(f"Browser closed. Profile saved at: {Fore.GREEN}{profile_path}{Style.RESET_ALL}", tag="PROFILE")
except Exception as e:
self.logger.error(f"Error creating profile: {str(e)}", tag="PROFILE")
await browser_strategy.close()
return None
finally:
# Restore original signal handlers
signal.signal(signal.SIGINT, original_sigint)
signal.signal(signal.SIGTERM, original_sigterm)
# Make sure browser is fully cleaned up
await browser_strategy.close()
# Return the profile path
return profile_path
def list_profiles(self) -> List[Dict[str, Any]]:
"""List all available browser profiles.
Returns:
List of dictionaries containing profile information
"""
if not os.path.exists(self.profiles_dir):
return []
profiles = []
for name in os.listdir(self.profiles_dir):
profile_path = os.path.join(self.profiles_dir, name)
# Skip if not a directory
if not os.path.isdir(profile_path):
continue
# Check if this looks like a valid browser profile
# For Chromium: Look for Preferences file
# For Firefox: Look for prefs.js file
is_valid = False
if os.path.exists(os.path.join(profile_path, "Preferences")) or \
os.path.exists(os.path.join(profile_path, "Default", "Preferences")):
is_valid = "chromium"
elif os.path.exists(os.path.join(profile_path, "prefs.js")):
is_valid = "firefox"
if is_valid:
# Get creation time
created = datetime.datetime.fromtimestamp(
os.path.getctime(profile_path)
)
profiles.append({
"name": name,
"path": profile_path,
"created": created,
"type": is_valid
})
# Sort by creation time, newest first
profiles.sort(key=lambda x: x["created"], reverse=True)
return profiles
def get_profile_path(self, profile_name: str) -> Optional[str]:
"""Get the full path to a profile by name.
Args:
profile_name: Name of the profile (not the full path)
Returns:
Full path to the profile directory, or None if not found
"""
profile_path = os.path.join(self.profiles_dir, profile_name)
# Check if path exists and is a valid profile
if not os.path.isdir(profile_path):
# Check if profile_name itself is full path
if os.path.isabs(profile_name):
profile_path = profile_name
else:
return None
# Look for profile indicators
is_profile = (
os.path.exists(os.path.join(profile_path, "Preferences")) or
os.path.exists(os.path.join(profile_path, "Default", "Preferences")) or
os.path.exists(os.path.join(profile_path, "prefs.js"))
)
if not is_profile:
return None # Not a valid browser profile
return profile_path
def delete_profile(self, profile_name_or_path: str) -> bool:
"""Delete a browser profile by name or path.
Args:
profile_name_or_path: Name of the profile or full path to profile directory
Returns:
True if the profile was deleted successfully, False otherwise
"""
# Determine if input is a name or a path
if os.path.isabs(profile_name_or_path):
# Full path provided
profile_path = profile_name_or_path
else:
# Just a name provided, construct path
profile_path = os.path.join(self.profiles_dir, profile_name_or_path)
# Check if path exists and is a valid profile
if not os.path.isdir(profile_path):
return False
# Look for profile indicators
is_profile = (
os.path.exists(os.path.join(profile_path, "Preferences")) or
os.path.exists(os.path.join(profile_path, "Default", "Preferences")) or
os.path.exists(os.path.join(profile_path, "prefs.js"))
)
if not is_profile:
return False # Not a valid browser profile
# Delete the profile directory
try:
shutil.rmtree(profile_path)
return True
except Exception:
return False
async def interactive_manager(self, crawl_callback=None):
"""Launch an interactive profile management console.
Args:
crawl_callback: Function to call when selecting option to use
a profile for crawling. It will be called with (profile_path, url).
"""
while True:
self.logger.info(f"\n{Fore.CYAN}Profile Management Options:{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"1. {Fore.GREEN}Create a new profile{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"2. {Fore.YELLOW}List available profiles{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"3. {Fore.RED}Delete a profile{Style.RESET_ALL}", tag="MENU")
# Only show crawl option if callback provided
if crawl_callback:
self.logger.info(f"4. {Fore.CYAN}Use a profile to crawl a website{Style.RESET_ALL}", tag="MENU")
self.logger.info(f"5. {Fore.MAGENTA}Exit{Style.RESET_ALL}", tag="MENU")
exit_option = "5"
else:
self.logger.info(f"4. {Fore.MAGENTA}Exit{Style.RESET_ALL}", tag="MENU")
exit_option = "4"
choice = input(f"\n{Fore.CYAN}Enter your choice (1-{exit_option}): {Style.RESET_ALL}")
if choice == "1":
# Create new profile
name = input(f"{Fore.GREEN}Enter a name for the new profile (or press Enter for auto-generated name): {Style.RESET_ALL}")
await self.create_profile(name or None)
elif choice == "2":
# List profiles
profiles = self.list_profiles()
if not profiles:
self.logger.warning(" No profiles found. Create one first with option 1.", tag="PROFILES")
continue
# Print profile information with colorama formatting
self.logger.info("\nAvailable profiles:", tag="PROFILES")
for i, profile in enumerate(profiles):
self.logger.info(f"[{i+1}] {Fore.CYAN}{profile['name']}{Style.RESET_ALL}", tag="PROFILES")
self.logger.info(f" Path: {Fore.YELLOW}{profile['path']}{Style.RESET_ALL}", tag="PROFILES")
self.logger.info(f" Created: {profile['created'].strftime('%Y-%m-%d %H:%M:%S')}", tag="PROFILES")
self.logger.info(f" Browser type: {profile['type']}", tag="PROFILES")
self.logger.info("", tag="PROFILES") # Empty line for spacing
elif choice == "3":
# Delete profile
profiles = self.list_profiles()
if not profiles:
self.logger.warning("No profiles found to delete", tag="PROFILES")
continue
# Display numbered list
self.logger.info(f"\n{Fore.YELLOW}Available profiles:{Style.RESET_ALL}", tag="PROFILES")
for i, profile in enumerate(profiles):
self.logger.info(f"[{i+1}] {profile['name']}", tag="PROFILES")
# Get profile to delete
profile_idx = input(f"{Fore.RED}Enter the number of the profile to delete (or 'c' to cancel): {Style.RESET_ALL}")
if profile_idx.lower() == 'c':
continue
try:
idx = int(profile_idx) - 1
if 0 <= idx < len(profiles):
profile_name = profiles[idx]["name"]
self.logger.info(f"Deleting profile: {Fore.YELLOW}{profile_name}{Style.RESET_ALL}", tag="PROFILES")
# Confirm deletion
confirm = input(f"{Fore.RED}Are you sure you want to delete this profile? (y/n): {Style.RESET_ALL}")
if confirm.lower() == 'y':
success = self.delete_profile(profiles[idx]["path"])
if success:
self.logger.success(f"Profile {Fore.GREEN}{profile_name}{Style.RESET_ALL} deleted successfully", tag="PROFILES")
else:
self.logger.error(f"Failed to delete profile {Fore.RED}{profile_name}{Style.RESET_ALL}", tag="PROFILES")
else:
self.logger.error("Invalid profile number", tag="PROFILES")
except ValueError:
self.logger.error("Please enter a valid number", tag="PROFILES")
elif choice == "4" and crawl_callback:
# Use profile to crawl a site
profiles = self.list_profiles()
if not profiles:
self.logger.warning("No profiles found. Create one first.", tag="PROFILES")
continue
# Display numbered list
self.logger.info(f"\n{Fore.YELLOW}Available profiles:{Style.RESET_ALL}", tag="PROFILES")
for i, profile in enumerate(profiles):
self.logger.info(f"[{i+1}] {profile['name']}", tag="PROFILES")
# Get profile to use
profile_idx = input(f"{Fore.CYAN}Enter the number of the profile to use (or 'c' to cancel): {Style.RESET_ALL}")
if profile_idx.lower() == 'c':
continue
try:
idx = int(profile_idx) - 1
if 0 <= idx < len(profiles):
profile_path = profiles[idx]["path"]
url = input(f"{Fore.CYAN}Enter the URL to crawl: {Style.RESET_ALL}")
if url:
# Call the provided crawl callback
await crawl_callback(profile_path, url)
else:
self.logger.error("No URL provided", tag="CRAWL")
else:
self.logger.error("Invalid profile number", tag="PROFILES")
except ValueError:
self.logger.error("Please enter a valid number", tag="PROFILES")
elif (choice == "4" and not crawl_callback) or (choice == "5" and crawl_callback):
# Exit
self.logger.info("Exiting profile management", tag="MENU")
break
else:
self.logger.error(f"Invalid choice. Please enter a number between 1 and {exit_option}.", tag="MENU")

View File

@@ -1,13 +0,0 @@
from .base import BaseBrowserStrategy
from .cdp import CDPBrowserStrategy
from .docker_strategy import DockerBrowserStrategy
from .playwright import PlaywrightBrowserStrategy
from .builtin import BuiltinBrowserStrategy
__all__ = [
"BrowserStrategy",
"CDPBrowserStrategy",
"DockerBrowserStrategy",
"PlaywrightBrowserStrategy",
"BuiltinBrowserStrategy",
]

View File

@@ -1,601 +0,0 @@
"""Browser strategies module for Crawl4AI.
This module implements the browser strategy pattern for different
browser implementations, including Playwright, CDP, and builtin browsers.
"""
from abc import ABC, abstractmethod
import asyncio
import json
import hashlib
import os
import time
from typing import Optional, Tuple, List
from playwright.async_api import BrowserContext, Page
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig, CrawlerRunConfig
from ...config import DOWNLOAD_PAGE_TIMEOUT
from ...js_snippet import load_js_script
from ..utils import get_playwright
class BaseBrowserStrategy(ABC):
"""Base class for all browser strategies.
This abstract class defines the interface that all browser strategies
must implement. It handles common functionality like context caching,
browser configuration, and session management.
"""
_playwright_instance = None
@classmethod
async def get_playwright(cls):
"""Get or create a shared Playwright instance.
Returns:
Playwright: The shared Playwright instance
"""
# For now I don't want the singleton pattern for Playwright, so always create a fresh instance
if cls._playwright_instance is None or True:
cls._playwright_instance = await get_playwright()
return cls._playwright_instance
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the strategy with configuration and logger.
Args:
config: Browser configuration
logger: Logger for recording events and errors
"""
self.config = config
self.logger = logger
self.browser = None
self.default_context = None
# Context management
self.contexts_by_config = {} # config_signature -> context
self._contexts_lock = asyncio.Lock()
# Session management
self.sessions = {}
self.session_ttl = 1800 # 30 minutes default
# Playwright instance
self.playwright = None
@abstractmethod
async def start(self):
"""Start the browser.
This method should be implemented by concrete strategies to initialize
the browser in the appropriate way (direct launch, CDP connection, etc.)
Returns:
self: For method chaining
"""
# Base implementation gets the playwright instance
self.playwright = await self.get_playwright()
return self
@abstractmethod
async def _generate_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
pass
async def get_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
"""Get a page with specified configuration.
Reuses an existing session when a session_id is provided; otherwise it delegates
page creation to the strategy's _generate_page() implementation.
Args:
crawlerRunConfig: Crawler run configuration
Returns:
Tuple of (Page, BrowserContext)
"""
# Clean up expired sessions first
self._cleanup_expired_sessions()
# If a session_id is provided and we already have it, reuse that page + context
if crawlerRunConfig.session_id and crawlerRunConfig.session_id in self.sessions:
context, page, _ = self.sessions[crawlerRunConfig.session_id]
# Update last-used timestamp
self.sessions[crawlerRunConfig.session_id] = (context, page, time.time())
return page, context
page, context = await self._generate_page(crawlerRunConfig)
import uuid
setattr(page, "guid", uuid.uuid4())
# If a session_id is specified, store this session so we can reuse later
if crawlerRunConfig.session_id:
self.sessions[crawlerRunConfig.session_id] = (context, page, time.time())
return page, context
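# Illustrative usage of the session-reuse path above (hypothetical caller code, not part of this module):
#   cfg = CrawlerRunConfig(url="https://example.com", session_id="login-flow")
#   page, ctx = await strategy.get_page(cfg)   # first call creates and stores the session
#   page, ctx = await strategy.get_page(cfg)   # later calls return the same page/context until the TTL expires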
async def get_pages(self, crawlerRunConfig: CrawlerRunConfig, count: int = 1) -> List[Tuple[Page, BrowserContext]]:
"""Get multiple pages with the same configuration.
Args:
crawlerRunConfig: Configuration for the pages
count: Number of pages to create
Returns:
List of (Page, Context) tuples
"""
pages = []
for _ in range(count):
page, context = await self.get_page(crawlerRunConfig)
pages.append((page, context))
return pages
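# Illustrative batch usage (hypothetical; assumes an already-started strategy):
#   pairs = await strategy.get_pages(CrawlerRunConfig(url="https://example.com"), count=3)
#   for page, ctx in pairs:
#       await page.goto("https://example.com")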
async def get_opened_pages(self) -> List[Page]:
"""Get all opened pages in the
browser.
"""
return [page for context in self.contexts_by_config.values() for page in context.pages]
def _build_browser_args(self) -> dict:
"""Build browser launch arguments from config.
Returns:
dict: Browser launch arguments for Playwright
"""
# Define common browser arguments that improve performance and stability
args = [
"--no-sandbox",
"--no-first-run",
"--no-default-browser-check",
"--window-position=0,0",
"--ignore-certificate-errors",
"--ignore-certificate-errors-spki-list",
"--window-position=400,0",
"--force-color-profile=srgb",
"--mute-audio",
"--disable-gpu",
"--disable-gpu-compositing",
"--disable-software-rasterizer",
"--disable-dev-shm-usage",
"--disable-infobars",
"--disable-blink-features=AutomationControlled",
"--disable-renderer-backgrounding",
"--disable-ipc-flooding-protection",
"--disable-background-timer-throttling",
f"--window-size={self.config.viewport_width},{self.config.viewport_height}",
]
# Define browser disable options for light mode
browser_disable_options = [
"--disable-backgrounding-occluded-windows",
"--disable-breakpad",
"--disable-client-side-phishing-detection",
"--disable-component-extensions-with-background-pages",
"--disable-default-apps",
"--disable-extensions",
"--disable-features=TranslateUI",
"--disable-hang-monitor",
"--disable-popup-blocking",
"--disable-prompt-on-repost",
"--disable-sync",
"--metrics-recording-only",
"--password-store=basic",
"--use-mock-keychain",
]
# Apply light mode settings if enabled
if self.config.light_mode:
args.extend(browser_disable_options)
# Apply text mode settings if enabled (disables images, JS, etc)
if self.config.text_mode:
args.extend([
"--blink-settings=imagesEnabled=false",
"--disable-remote-fonts",
"--disable-images",
"--disable-javascript",
"--disable-software-rasterizer",
"--disable-dev-shm-usage",
])
# Add any extra arguments from the config
if self.config.extra_args:
args.extend(self.config.extra_args)
# Build the core browser args dictionary
browser_args = {"headless": self.config.headless, "args": args}
# Add chrome channel if specified
if self.config.chrome_channel:
browser_args["channel"] = self.config.chrome_channel
# Configure downloads
if self.config.accept_downloads:
browser_args["downloads_path"] = self.config.downloads_path or os.path.join(
os.getcwd(), "downloads"
)
os.makedirs(browser_args["downloads_path"], exist_ok=True)
# Check for user data directory
if self.config.user_data_dir:
# Ensure the directory exists
os.makedirs(self.config.user_data_dir, exist_ok=True)
browser_args["user_data_dir"] = self.config.user_data_dir
# Configure proxy settings
if self.config.proxy or self.config.proxy_config:
from playwright.async_api import ProxySettings
proxy_settings = (
ProxySettings(server=self.config.proxy)
if self.config.proxy
else ProxySettings(
server=self.config.proxy_config.server,
username=self.config.proxy_config.username,
password=self.config.proxy_config.password,
)
)
browser_args["proxy"] = proxy_settings
return browser_args
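# For reference, the dict returned above has roughly this shape (values are illustrative):
#   {
#     "headless": True,
#     "args": ["--no-sandbox", "--disable-gpu", ..., "--window-size=1280,720"],
#     "channel": "chrome",            # only when chrome_channel is set
#     "downloads_path": "...",        # only when accept_downloads is enabled
#     "user_data_dir": "...",         # only when a user data directory is configured
#     "proxy": ProxySettings(...),    # only when a proxy is configured
#   }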
def _make_config_signature(self, crawlerRunConfig: CrawlerRunConfig) -> str:
"""Create a signature hash from configuration for context caching.
Converts the crawlerRunConfig into a dict, excludes ephemeral fields,
then returns a hash of the sorted JSON. This yields a stable signature
that identifies configurations requiring a unique browser context.
Args:
crawlerRunConfig: Crawler run configuration
Returns:
str: Unique hash for this configuration
"""
config_dict = crawlerRunConfig.__dict__.copy()
# Exclude items that do not affect browser-level setup
ephemeral_keys = [
"session_id",
"js_code",
"scraping_strategy",
"extraction_strategy",
"chunking_strategy",
"cache_mode",
"content_filter",
"semaphore_count",
"url"
]
for key in ephemeral_keys:
if key in config_dict:
del config_dict[key]
# Convert to canonical JSON string
signature_json = json.dumps(config_dict, sort_keys=True, default=str)
# Hash the JSON so we get a compact, unique string
signature_hash = hashlib.sha256(signature_json.encode("utf-8")).hexdigest()
return signature_hash
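# Illustrative behavior (hypothetical snippet): two run configs that differ only in
# ephemeral fields (e.g. url or session_id) hash to the same signature and therefore
# share one cached browser context:
#   sig_a = strategy._make_config_signature(CrawlerRunConfig(url="https://a.example"))
#   sig_b = strategy._make_config_signature(CrawlerRunConfig(url="https://b.example"))
#   assert sig_a == sig_b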
async def create_browser_context(self, crawlerRunConfig: Optional[CrawlerRunConfig] = None) -> BrowserContext:
"""Creates and returns a new browser context with configured settings.
Args:
crawlerRunConfig: Configuration object for the crawler run
Returns:
BrowserContext: Browser context object with the specified configurations
"""
if not self.browser:
raise ValueError("Browser must be initialized before creating context")
# Base settings
user_agent = self.config.headers.get("User-Agent", self.config.user_agent)
viewport_settings = {
"width": self.config.viewport_width,
"height": self.config.viewport_height,
}
proxy_settings = {"server": self.config.proxy} if self.config.proxy else None
# Define blocked extensions for resource optimization
blocked_extensions = [
# Images
"jpg", "jpeg", "png", "gif", "webp", "svg", "ico", "bmp", "tiff", "psd",
# Fonts
"woff", "woff2", "ttf", "otf", "eot",
# Media
"mp4", "webm", "ogg", "avi", "mov", "wmv", "flv", "m4v", "mp3", "wav", "aac",
"m4a", "opus", "flac",
# Documents
"pdf", "doc", "docx", "xls", "xlsx", "ppt", "pptx",
# Archives
"zip", "rar", "7z", "tar", "gz",
# Scripts and data
"xml", "swf", "wasm",
]
# Common context settings
context_settings = {
"user_agent": user_agent,
"viewport": viewport_settings,
"proxy": proxy_settings,
"accept_downloads": self.config.accept_downloads,
"storage_state": self.config.storage_state,
"ignore_https_errors": self.config.ignore_https_errors,
"device_scale_factor": 1.0,
"java_script_enabled": self.config.java_script_enabled,
}
# Apply text mode settings if enabled
if self.config.text_mode:
text_mode_settings = {
"has_touch": False,
"is_mobile": False,
"java_script_enabled": False, # Disable javascript in text mode
}
# Update context settings with text mode settings
context_settings.update(text_mode_settings)
if self.logger:
self.logger.debug("Text mode enabled for browser context", tag="BROWSER")
# Handle storage state properly - this is key for persistence
if self.config.storage_state:
if self.logger:
if isinstance(self.config.storage_state, str):
self.logger.debug(f"Using storage state from file: {self.config.storage_state}", tag="BROWSER")
else:
self.logger.debug("Using storage state from config object", tag="BROWSER")
if self.config.user_data_dir:
# For CDP-based browsers, storage persistence is typically handled by the user_data_dir
# at the browser level, but we'll create a storage_state location for Playwright as well
storage_path = os.path.join(self.config.user_data_dir, "storage_state.json")
if not os.path.exists(storage_path):
# Create parent directory if it doesn't exist
os.makedirs(os.path.dirname(storage_path), exist_ok=True)
with open(storage_path, "w") as f:
json.dump({}, f)
self.config.storage_state = storage_path
if self.logger:
self.logger.debug(f"Using user data directory: {self.config.user_data_dir}", tag="BROWSER")
# Apply crawler-specific configurations if provided
if crawlerRunConfig:
# Check if there is value for crawlerRunConfig.proxy_config set add that to context
if crawlerRunConfig.proxy_config:
proxy_settings = {
"server": crawlerRunConfig.proxy_config.server,
}
if crawlerRunConfig.proxy_config.username:
proxy_settings.update({
"username": crawlerRunConfig.proxy_config.username,
"password": crawlerRunConfig.proxy_config.password,
})
context_settings["proxy"] = proxy_settings
# Create and return the context
try:
# Create the context with appropriate settings
context = await self.browser.new_context(**context_settings)
# Apply text mode resource blocking if enabled
if self.config.text_mode:
# Create and apply route patterns for each extension
for ext in blocked_extensions:
await context.route(f"**/*.{ext}", lambda route: route.abort())
return context
except Exception as e:
if self.logger:
self.logger.error(f"Error creating browser context: {str(e)}", tag="BROWSER")
# Fallback to basic context creation if the advanced settings fail
return await self.browser.new_context()
async def setup_context(self, context: BrowserContext, crawlerRunConfig: Optional[CrawlerRunConfig] = None):
"""Set up a browser context with the configured options.
Args:
context: The browser context to set up
crawlerRunConfig: Configuration object containing all browser settings
"""
# Set HTTP headers
if self.config.headers:
await context.set_extra_http_headers(self.config.headers)
# Add cookies
if self.config.cookies:
await context.add_cookies(self.config.cookies)
# Note: storage_state is already applied when the context is created; calling storage_state(path=None) here only reads the current state
if self.config.storage_state:
await context.storage_state(path=None)
# Configure downloads
if self.config.accept_downloads:
context.set_default_timeout(DOWNLOAD_PAGE_TIMEOUT)
context.set_default_navigation_timeout(DOWNLOAD_PAGE_TIMEOUT)
if self.config.downloads_path:
context._impl_obj._options["accept_downloads"] = True
context._impl_obj._options["downloads_path"] = self.config.downloads_path
# Handle user agent and browser hints
if self.config.user_agent:
combined_headers = {
"User-Agent": self.config.user_agent,
"sec-ch-ua": self.config.browser_hint,
}
combined_headers.update(self.config.headers)
await context.set_extra_http_headers(combined_headers)
# Add default cookie
target_url = (crawlerRunConfig and crawlerRunConfig.url) or "https://crawl4ai.com/"
await context.add_cookies(
[
{
"name": "cookiesEnabled",
"value": "true",
"url": target_url,
}
]
)
# Handle navigator overrides
if crawlerRunConfig:
if (
crawlerRunConfig.override_navigator
or crawlerRunConfig.simulate_user
or crawlerRunConfig.magic
):
await context.add_init_script(load_js_script("navigator_overrider"))
async def kill_session(self, session_id: str):
"""Kill a browser session and clean up resources.
Args:
session_id (str): The session ID to kill.
"""
if session_id not in self.sessions:
return
context, page, _ = self.sessions[session_id]
# Close the page
try:
await page.close()
except Exception as e:
if self.logger:
self.logger.error(f"Error closing page for session {session_id}: {str(e)}", tag="BROWSER")
# Remove session from tracking
del self.sessions[session_id]
# Clean up any contexts that no longer have pages
await self._cleanup_unused_contexts()
if self.logger:
self.logger.debug(f"Killed session: {session_id}", tag="BROWSER")
async def _cleanup_unused_contexts(self):
"""Clean up contexts that no longer have any pages."""
async with self._contexts_lock:
# Get all contexts we're managing
contexts_to_check = list(self.contexts_by_config.values())
for context in contexts_to_check:
# Check if the context has any pages left
if not context.pages:
# No pages left, we can close this context
config_signature = next((sig for sig, ctx in self.contexts_by_config.items()
if ctx == context), None)
if config_signature:
try:
await context.close()
del self.contexts_by_config[config_signature]
if self.logger:
self.logger.debug(f"Closed unused context", tag="BROWSER")
except Exception as e:
if self.logger:
self.logger.error(f"Error closing unused context: {str(e)}", tag="BROWSER")
def _cleanup_expired_sessions(self):
"""Clean up expired sessions based on TTL."""
current_time = time.time()
expired_sessions = [
sid
for sid, (_, _, last_used) in self.sessions.items()
if current_time - last_used > self.session_ttl
]
for sid in expired_sessions:
if self.logger:
self.logger.debug(f"Session expired: {sid}", tag="BROWSER")
asyncio.create_task(self.kill_session(sid))
async def close(self):
"""Close the browser and clean up resources.
This method handles common cleanup tasks like:
1. Persisting storage state if a user_data_dir is configured
2. Closing all sessions
3. Closing all browser contexts
4. Closing the browser
5. Stopping Playwright
Child classes should override this method to add their specific cleanup logic,
but should call super().close() to ensure common cleanup tasks are performed.
"""
# Set a flag to prevent race conditions during cleanup
self.shutting_down = True
try:
# Add brief delay if configured
if self.config.sleep_on_close:
await asyncio.sleep(0.5)
# Persist storage state if using a user data directory
if self.config.user_data_dir and self.browser:
for context in self.browser.contexts:
try:
# Ensure the directory exists
storage_dir = os.path.join(self.config.user_data_dir, "Default")
os.makedirs(storage_dir, exist_ok=True)
# Save storage state
storage_path = os.path.join(storage_dir, "storage_state.json")
await context.storage_state(path=storage_path)
if self.logger:
self.logger.debug("Storage state persisted before closing browser", tag="BROWSER")
except Exception as e:
if self.logger:
self.logger.warning(
message="Failed to ensure storage persistence: {error}",
tag="BROWSER",
params={"error": str(e)}
)
# Close all active sessions
session_ids = list(self.sessions.keys())
for session_id in session_ids:
await self.kill_session(session_id)
# Close all cached contexts
for ctx in self.contexts_by_config.values():
try:
await ctx.close()
except Exception as e:
if self.logger:
self.logger.error(
message="Error closing context: {error}",
tag="BROWSER",
params={"error": str(e)}
)
self.contexts_by_config.clear()
# Close the browser if it exists
if self.browser:
await self.browser.close()
self.browser = None
# Stop playwright
if self.playwright:
await self.playwright.stop()
self.playwright = None
except Exception as e:
if self.logger:
self.logger.error(
message="Error during browser cleanup: {error}",
tag="BROWSER",
params={"error": str(e)}
)
finally:
# Reset shutting down flag
self.shutting_down = False

View File

@@ -1,468 +0,0 @@
import asyncio
import os
import time
import json
import subprocess
import shutil
import signal
from typing import Optional, Dict, Any, Tuple
from playwright.async_api import Page, BrowserContext
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig, CrawlerRunConfig
from ...utils import get_home_folder
from ..utils import get_browser_executable, is_windows, is_browser_running, find_process_by_port, terminate_process
from .cdp import CDPBrowserStrategy
from .base import BaseBrowserStrategy
class BuiltinBrowserStrategy(CDPBrowserStrategy):
"""Built-in browser strategy.
This strategy extends the CDP strategy to use the built-in browser.
"""
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the built-in browser strategy.
Args:
config: Browser configuration
logger: Logger for recording events and errors
"""
super().__init__(config, logger)
self.builtin_browser_dir = os.path.join(get_home_folder(), "builtin-browser") if not self.config.user_data_dir else self.config.user_data_dir
self.builtin_config_file = os.path.join(self.builtin_browser_dir, "browser_config.json")
# Raise error if user data dir is already engaged
if self._check_user_dir_is_engaged(self.builtin_browser_dir):
raise Exception(f"User data directory {self.builtin_browser_dir} is already engaged by another browser instance.")
os.makedirs(self.builtin_browser_dir, exist_ok=True)
def _check_user_dir_is_engaged(self, user_data_dir: str) -> bool:
"""Check if the user data directory is already in use.
Returns:
bool: True if the directory is engaged, False otherwise
"""
# Load browser config file, then iterate in port_map values, check "user_data_dir" key if it matches
# the current user data directory
if os.path.exists(self.builtin_config_file):
try:
with open(self.builtin_config_file, 'r') as f:
browser_info_dict = json.load(f)
# Check if user data dir is already engaged
for port_str, browser_info in browser_info_dict.get("port_map", {}).items():
if browser_info.get("user_data_dir") == user_data_dir:
return True
except Exception as e:
if self.logger:
self.logger.error(f"Error reading built-in browser config: {str(e)}", tag="BUILTIN")
return False
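# Layout of browser_config.json assumed by this class (fields taken from the writers below; values are illustrative):
#   {
#     "port_map": {
#       "9222": {
#         "pid": 12345,
#         "cdp_url": "http://localhost:9222",
#         "user_data_dir": "/path/to/builtin-browser/user_data",
#         "browser_type": "chromium",
#         "debugging_port": 9222,
#         "start_time": 1712900000.0,
#         "config": {...}
#       }
#     }
#   }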
async def start(self):
"""Start or connect to the built-in browser.
Returns:
self: For method chaining
"""
# Initialize Playwright instance via base class method
await BaseBrowserStrategy.start(self)
try:
# Check for existing built-in browser (get_browser_info already checks if running)
browser_info = self.get_browser_info()
if browser_info:
if self.logger:
self.logger.info(f"Using existing built-in browser at {browser_info.get('cdp_url')}", tag="BROWSER")
self.config.cdp_url = browser_info.get('cdp_url')
else:
if self.logger:
self.logger.info("Built-in browser not found, launching new instance...", tag="BROWSER")
cdp_url = await self.launch_builtin_browser(
browser_type=self.config.browser_type,
debugging_port=self.config.debugging_port,
headless=self.config.headless,
)
if not cdp_url:
if self.logger:
self.logger.warning("Failed to launch built-in browser, falling back to regular CDP strategy", tag="BROWSER")
# Call CDP's start but skip BaseBrowserStrategy.start() since we already called it
return await CDPBrowserStrategy.start(self)
self.config.cdp_url = cdp_url
# Connect to the browser using CDP protocol
self.browser = await self.playwright.chromium.connect_over_cdp(self.config.cdp_url)
# Get or create default context
contexts = self.browser.contexts
if contexts:
self.default_context = contexts[0]
else:
self.default_context = await self.create_browser_context()
await self.setup_context(self.default_context)
if self.logger:
self.logger.debug(f"Connected to built-in browser at {self.config.cdp_url}", tag="BUILTIN")
return self
except Exception as e:
if self.logger:
self.logger.error(f"Failed to start built-in browser: {str(e)}", tag="BUILTIN")
# Note: resources created before the failure may still need to be cleaned up here
raise
def _get_builtin_browser_info(self, debugging_port: int, config_file: str, logger: Optional[AsyncLogger] = None) -> Optional[Dict[str, Any]]:
"""Get information about the built-in browser for a specific debugging port.
Args:
debugging_port: The debugging port to look for
config_file: Path to the config file
logger: Optional logger for recording events
Returns:
dict: Browser information or None if no running browser is configured for this port
"""
if not os.path.exists(config_file):
return None
try:
with open(config_file, 'r') as f:
browser_info_dict = json.load(f)
# Get browser info from port map
if isinstance(browser_info_dict, dict) and "port_map" in browser_info_dict:
port_str = str(debugging_port)
if port_str in browser_info_dict["port_map"]:
browser_info = browser_info_dict["port_map"][port_str]
# Check if the browser is still running
pids = browser_info.get('pid', '')
if isinstance(pids, str):
pids = [int(pid) for pid in pids.split() if pid.isdigit()]
elif isinstance(pids, int):
pids = [pids]
else:
pids = []
# Check if any of the PIDs are running
if not pids:
if logger:
logger.warning(f"Built-in browser on port {debugging_port} has no valid PID", tag="BUILTIN")
# Remove this port from the dictionary
del browser_info_dict["port_map"][port_str]
with open(config_file, 'w') as f:
json.dump(browser_info_dict, f, indent=2)
return None
# Check if any of the PIDs are running
for pid in pids:
if is_browser_running(pid):
browser_info['pid'] = pid
break
else:
# If none of the PIDs are running, remove this port from the dictionary
if logger:
logger.warning(f"Built-in browser on port {debugging_port} is not running", tag="BUILTIN")
# Remove this port from the dictionary
del browser_info_dict["port_map"][port_str]
with open(config_file, 'w') as f:
json.dump(browser_info_dict, f, indent=2)
return None
return browser_info
return None
except Exception as e:
if logger:
logger.error(f"Error reading built-in browser config: {str(e)}", tag="BUILTIN")
return None
def get_browser_info(self) -> Optional[Dict[str, Any]]:
"""Get information about the current built-in browser instance.
Returns:
dict: Browser information or None if no running browser is configured
"""
return self._get_builtin_browser_info(
debugging_port=self.config.debugging_port,
config_file=self.builtin_config_file,
logger=self.logger
)
async def launch_builtin_browser(self,
browser_type: str = "chromium",
debugging_port: int = 9222,
headless: bool = True) -> Optional[str]:
"""Launch a browser in the background for use as the built-in browser.
Args:
browser_type: Type of browser to launch ('chromium' or 'firefox')
debugging_port: Port to use for CDP debugging
headless: Whether to run in headless mode
Returns:
str: CDP URL for the browser, or None if launch failed
"""
# Check if there's an existing browser still running
browser_info = self._get_builtin_browser_info(
debugging_port=debugging_port,
config_file=self.builtin_config_file,
logger=self.logger
)
if browser_info:
if self.logger:
self.logger.info(f"Built-in browser is already running on port {debugging_port}", tag="BUILTIN")
return browser_info.get('cdp_url')
# Create a user data directory for the built-in browser
user_data_dir = os.path.join(self.builtin_browser_dir, "user_data")
# Raise error if user data dir is already engaged
if self._check_user_dir_is_engaged(user_data_dir):
raise Exception(f"User data directory {user_data_dir} is already engaged by another browser instance.")
# Create the user data directory if it doesn't exist
os.makedirs(user_data_dir, exist_ok=True)
# Prepare browser launch arguments
browser_args = super()._build_browser_args()
browser_path = await get_browser_executable(browser_type)
base_args = [browser_path]
if browser_type == "chromium":
args = [
f"--remote-debugging-port={debugging_port}",
f"--user-data-dir={user_data_dir}",
]
# if headless:
# args.append("--headless=new")
elif browser_type == "firefox":
args = [
"--remote-debugging-port",
str(debugging_port),
"--profile",
user_data_dir,
]
if headless:
args.append("--headless")
else:
if self.logger:
self.logger.error(f"Browser type {browser_type} not supported for built-in browser", tag="BUILTIN")
return None
args = base_args + browser_args["args"] + args
try:
# Check if the port is already in use
PID = ""
cdp_url = f"http://localhost:{debugging_port}"
config_json = await self._check_port_in_use(cdp_url)
if config_json:
if self.logger:
self.logger.info(f"Port {debugging_port} is already in use.", tag="BUILTIN")
PID = find_process_by_port(debugging_port)
else:
# Start the browser process detached
process = None
if is_windows():
process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
)
else:
process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=os.setpgrp # Start in a new process group
)
# Wait briefly to ensure the process starts successfully
await asyncio.sleep(2.0)
# Check if the process is still running
if process and process.poll() is not None:
if self.logger:
self.logger.error(f"Browser process exited immediately with code {process.returncode}", tag="BUILTIN")
return None
PID = process.pid
# Verify the CDP endpoint is now reachable and fetch its version info
config_json = await self._check_port_in_use(cdp_url)
# Create browser info
browser_info = {
'pid': PID,
'cdp_url': cdp_url,
'user_data_dir': user_data_dir,
'browser_type': browser_type,
'debugging_port': debugging_port,
'start_time': time.time(),
'config': config_json
}
# Read existing config file if it exists
port_map = {}
if os.path.exists(self.builtin_config_file):
try:
with open(self.builtin_config_file, 'r') as f:
existing_data = json.load(f)
# Check if it already uses port mapping
if isinstance(existing_data, dict) and "port_map" in existing_data:
port_map = existing_data["port_map"]
# # Convert legacy format to port mapping
# elif isinstance(existing_data, dict) and "debugging_port" in existing_data:
# old_port = str(existing_data.get("debugging_port"))
# if self._is_browser_running(existing_data.get("pid")):
# port_map[old_port] = existing_data
except Exception as e:
if self.logger:
self.logger.warning(f"Could not read existing config: {str(e)}", tag="BUILTIN")
# Add/update this browser in the port map
port_map[str(debugging_port)] = browser_info
# Write updated config
with open(self.builtin_config_file, 'w') as f:
json.dump({"port_map": port_map}, f, indent=2)
# Detach from the browser process - don't keep any references
# This is important to allow the Python script to exit while the browser continues running
process = None
if self.logger:
self.logger.success(f"Built-in browser launched at CDP URL: {cdp_url}", tag="BUILTIN")
return cdp_url
except Exception as e:
if self.logger:
self.logger.error(f"Error launching built-in browser: {str(e)}", tag="BUILTIN")
return None
async def _check_port_in_use(self, cdp_url: str) -> dict:
"""Check if a port is already in use by a Chrome DevTools instance.
Args:
cdp_url: The CDP URL to check
Returns:
dict: Chrome DevTools protocol version information or None if not found
"""
import aiohttp
json_url = f"{cdp_url}/json/version"
json_config = None
try:
async with aiohttp.ClientSession() as session:
try:
async with session.get(json_url, timeout=2.0) as response:
if response.status == 200:
json_config = await response.json()
if self.logger:
self.logger.debug(f"Found CDP server running at {cdp_url}", tag="BUILTIN")
return json_config
except (aiohttp.ClientError, asyncio.TimeoutError):
pass
return None
except Exception as e:
if self.logger:
self.logger.debug(f"Error checking CDP port: {str(e)}", tag="BUILTIN")
return None
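# For reference, a live CDP endpoint typically answers /json/version with fields such as
# "Browser", "Protocol-Version" and "webSocketDebuggerUrl" (exact keys depend on the browser
# build), so a non-None return value here means a DevTools server already owns the port.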
async def kill_builtin_browser(self) -> bool:
"""Kill the built-in browser if it's running.
Returns:
bool: True if the browser was killed, False otherwise
"""
browser_info = self.get_browser_info()
if not browser_info:
if self.logger:
self.logger.warning(f"No built-in browser found on port {self.config.debugging_port}", tag="BUILTIN")
return False
pid = browser_info.get('pid')
if not pid:
return False
success, error_msg = terminate_process(pid, logger=self.logger)
if success:
# Update config file to remove this browser
with open(self.builtin_config_file, 'r') as f:
browser_info_dict = json.load(f)
# Remove this port from the dictionary
port_str = str(self.config.debugging_port)
if port_str in browser_info_dict.get("port_map", {}):
del browser_info_dict["port_map"][port_str]
with open(self.builtin_config_file, 'w') as f:
json.dump(browser_info_dict, f, indent=2)
# Remove user data directory if it exists
if os.path.exists(self.builtin_browser_dir):
shutil.rmtree(self.builtin_browser_dir)
# Clear the browser info cache
self.browser = None
self.temp_dir = None
self.shutting_down = True
if self.logger:
self.logger.success("Built-in browser terminated", tag="BUILTIN")
return True
else:
if self.logger:
self.logger.error(f"Error killing built-in browser: {error_msg}", tag="BUILTIN")
return False
async def get_builtin_browser_status(self) -> Dict[str, Any]:
"""Get status information about the built-in browser.
Returns:
dict: Status information with running, cdp_url, and info fields
"""
browser_info = self.get_browser_info()
if not browser_info:
return {
'running': False,
'cdp_url': None,
'info': None,
'port': self.config.debugging_port
}
return {
'running': True,
'cdp_url': browser_info.get('cdp_url'),
'info': browser_info,
'port': self.config.debugging_port
}
async def close(self):
"""Close the built-in browser and clean up resources."""
# Call parent class close method
await super().close()
# Clean up built-in browser if we created it and were in shutdown mode
if self.shutting_down:
await self.kill_builtin_browser()
if self.logger:
self.logger.debug("Killed built-in browser during shutdown", tag="BUILTIN")

View File

@@ -1,281 +0,0 @@
"""Browser strategies module for Crawl4AI.
This module implements the browser strategy pattern for different
browser implementations, including Playwright, CDP, and builtin browsers.
"""
import asyncio
import os
import time
import json
import subprocess
import shutil
from typing import Optional, Tuple, List
from playwright.async_api import BrowserContext, Page
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig, CrawlerRunConfig
from ..utils import get_playwright, get_browser_executable, create_temp_directory, is_windows, check_process_is_running, terminate_process
from .base import BaseBrowserStrategy
class CDPBrowserStrategy(BaseBrowserStrategy):
"""CDP-based browser strategy.
This strategy connects to an existing browser using CDP protocol or
launches and connects to a browser using CDP.
"""
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the CDP browser strategy.
Args:
config: Browser configuration
logger: Logger for recording events and errors
"""
super().__init__(config, logger)
self.sessions = {}
self.session_ttl = 1800 # 30 minutes
self.browser_process = None
self.temp_dir = None
self.shutting_down = False
async def start(self):
"""Start or connect to the browser using CDP.
Returns:
self: For method chaining
"""
# Call the base class start to initialize Playwright
await super().start()
try:
# Get or create CDP URL
cdp_url = await self._get_or_create_cdp_url()
# Connect to the browser using CDP
self.browser = await self.playwright.chromium.connect_over_cdp(cdp_url)
# Get or create default context
contexts = self.browser.contexts
if contexts:
self.default_context = contexts[0]
else:
self.default_context = await self.create_browser_context()
await self.setup_context(self.default_context)
if self.logger:
self.logger.debug(f"Connected to CDP browser at {cdp_url}", tag="CDP")
except Exception as e:
if self.logger:
self.logger.error(f"Failed to connect to CDP browser: {str(e)}", tag="CDP")
# Clean up any resources before re-raising
await self._cleanup_process()
raise
return self
async def _get_or_create_cdp_url(self) -> str:
"""Get existing CDP URL or launch a browser and return its CDP URL.
Returns:
str: CDP URL for connecting to the browser
"""
# If CDP URL is provided, just return it
if self.config.cdp_url:
return self.config.cdp_url
# Create temp dir if needed
if not self.config.user_data_dir:
self.temp_dir = create_temp_directory()
user_data_dir = self.temp_dir
else:
user_data_dir = self.config.user_data_dir
# Get browser args based on OS and browser type
# args = await self._get_browser_args(user_data_dir)
browser_args = super()._build_browser_args()
browser_path = await get_browser_executable(self.config.browser_type)
base_args = [browser_path]
if self.config.browser_type == "chromium":
args = [
f"--remote-debugging-port={self.config.debugging_port}",
f"--user-data-dir={user_data_dir}",
]
# if self.config.headless:
# args.append("--headless=new")
elif self.config.browser_type == "firefox":
args = [
"--remote-debugging-port",
str(self.config.debugging_port),
"--profile",
user_data_dir,
]
if self.config.headless:
args.append("--headless")
else:
raise NotImplementedError(f"Browser type {self.config.browser_type} not supported")
args = base_args + browser_args['args'] + args
# Start browser process
try:
# Use DETACHED_PROCESS flag on Windows to fully detach the process
# On Unix, we'll use preexec_fn=os.setpgrp to start the process in a new process group
if is_windows():
self.browser_process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
creationflags=subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
)
else:
self.browser_process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=os.setpgrp # Start in a new process group
)
# Monitor for a short time to make sure it starts properly
is_running, return_code, stdout, stderr = await check_process_is_running(self.browser_process, delay=2)
if not is_running:
if self.logger:
self.logger.error(
message="Browser process terminated unexpectedly | Code: {code} | STDOUT: {stdout} | STDERR: {stderr}",
tag="ERROR",
params={
"code": return_code,
"stdout": stdout.decode() if stdout else "",
"stderr": stderr.decode() if stderr else "",
},
)
await self._cleanup_process()
raise Exception("Browser process terminated unexpectedly")
return f"http://localhost:{self.config.debugging_port}"
except Exception as e:
await self._cleanup_process()
raise Exception(f"Failed to start browser: {e}")
async def _cleanup_process(self):
"""Cleanup browser process and temporary directory."""
# Set shutting_down flag BEFORE any termination actions
self.shutting_down = True
if self.browser_process:
try:
# Only attempt termination if the process is still running
if self.browser_process.poll() is None:
# Use our robust cross-platform termination utility
success = terminate_process(
pid=self.browser_process.pid,
timeout=1.0, # Equivalent to the previous 10*0.1s wait
logger=self.logger
)
if not success and self.logger:
self.logger.warning(
message="Failed to terminate browser process cleanly",
tag="PROCESS"
)
except Exception as e:
if self.logger:
self.logger.error(
message="Error during browser process cleanup: {error}",
tag="ERROR",
params={"error": str(e)},
)
if self.temp_dir and os.path.exists(self.temp_dir):
try:
shutil.rmtree(self.temp_dir)
self.temp_dir = None
if self.logger:
self.logger.debug("Removed temporary directory", tag="CDP")
except Exception as e:
if self.logger:
self.logger.error(
message="Error removing temporary directory: {error}",
tag="CDP",
params={"error": str(e)}
)
self.browser_process = None
async def _generate_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
# For CDP, we typically use the shared default_context
context = self.default_context
pages = context.pages
# Register the shared default context under this configuration's signature
config_signature = self._make_config_signature(crawlerRunConfig)
self.contexts_by_config[config_signature] = context
await self.setup_context(context, crawlerRunConfig)
# Check if there's already a page with the target URL
page = next((p for p in pages if p.url == crawlerRunConfig.url), None)
# If not found, create a new page
if not page:
page = await context.new_page()
return page, context
async def _get_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
"""Get a page for the given configuration.
Args:
crawlerRunConfig: Configuration object for the crawler run
Returns:
Tuple of (Page, BrowserContext)
"""
# Call parent method to ensure browser is started
await super().get_page(crawlerRunConfig)
# For CDP, we typically use the shared default_context
context = self.default_context
pages = context.pages
# Register the shared default context under this configuration's signature
config_signature = self._make_config_signature(crawlerRunConfig)
self.contexts_by_config[config_signature] = context
await self.setup_context(context, crawlerRunConfig)
# Check if there's already a page with the target URL
page = next((p for p in pages if p.url == crawlerRunConfig.url), None)
# If not found, create a new page
if not page:
page = await context.new_page()
# If a session_id is specified, store this session for reuse
if crawlerRunConfig.session_id:
self.sessions[crawlerRunConfig.session_id] = (context, page, time.time())
return page, context
async def close(self):
"""Close the CDP browser and clean up resources."""
# Skip cleanup if using external CDP URL and not launched by us
if self.config.cdp_url and not self.browser_process:
if self.logger:
self.logger.debug("Skipping cleanup for external CDP browser", tag="CDP")
return
# Call parent implementation for common cleanup
await super().close()
# Additional CDP-specific cleanup
await asyncio.sleep(0.5)
await self._cleanup_process()
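# Illustrative usage of this strategy (hypothetical values, not part of this module):
#   config = BrowserConfig(cdp_url="http://localhost:9222")   # or omit cdp_url to let the strategy launch a browser
#   strategy = CDPBrowserStrategy(config, logger=None)
#   await strategy.start()
#   page, ctx = await strategy.get_page(CrawlerRunConfig(url="https://example.com"))
#   await strategy.close()   # external browsers (cdp_url provided, nothing launched here) are left running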

View File

@@ -1,430 +0,0 @@
"""Docker browser strategy module for Crawl4AI.
This module provides browser strategies for running browsers in Docker containers,
which offers better isolation, consistency across platforms, and easy scaling.
"""
import os
import uuid
from typing import List, Optional
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig
from ..models import DockerConfig
from ..docker_registry import DockerRegistry
from ..docker_utils import DockerUtils
from .cdp import CDPBrowserStrategy
from .base import BaseBrowserStrategy
class DockerBrowserStrategy(CDPBrowserStrategy):
"""Docker-based browser strategy.
Extends the CDPBrowserStrategy to run browsers in Docker containers.
Supports two modes:
1. "connect" - Uses a Docker image with Chrome already running
2. "launch" - Starts Chrome within the container with custom settings
Attributes:
docker_config: Docker-specific configuration options
container_id: ID of current Docker container
container_name: Name assigned to the container
registry: Registry for tracking and reusing containers
docker_utils: Utilities for Docker operations
chrome_process_id: Process ID of Chrome within container
socat_process_id: Process ID of socat within container
internal_cdp_port: Chrome's internal CDP port
internal_mapped_port: Port that socat maps to internally
"""
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the Docker browser strategy.
Args:
config: Browser configuration including Docker-specific settings
logger: Logger for recording events and errors
"""
super().__init__(config, logger)
# Initialize Docker-specific attributes
self.docker_config = self.config.docker_config or DockerConfig()
self.container_id = None
self.container_name = f"crawl4ai-browser-{uuid.uuid4().hex[:8]}"
# Use the shared registry file path for consistency with BuiltinBrowserStrategy
registry_file = self.docker_config.registry_file
if registry_file is None and self.config.user_data_dir:
# Use the same registry file as BuiltinBrowserStrategy if possible
registry_file = os.path.join(
os.path.dirname(self.config.user_data_dir), "browser_config.json"
)
self.registry = DockerRegistry(registry_file)
self.docker_utils = DockerUtils(logger)
self.chrome_process_id = None
self.socat_process_id = None
self.internal_cdp_port = 9222 # Chrome's internal CDP port
self.internal_mapped_port = 9223 # Port that socat maps to internally
self.shutting_down = False
async def start(self):
"""Start or connect to a browser running in a Docker container.
This method initializes Playwright and establishes a connection to
a browser running in a Docker container. Depending on the configured mode:
- "connect": Connects to a container with Chrome already running
- "launch": Creates a container and launches Chrome within it
Returns:
self: For method chaining
"""
# Initialize Playwright
await BaseBrowserStrategy.start(self)
if self.logger:
self.logger.info(
f"Starting Docker browser strategy in {self.docker_config.mode} mode",
tag="DOCKER",
)
try:
# Get CDP URL by creating or reusing a Docker container
# This handles the container management and browser startup
cdp_url = await self._get_or_create_cdp_url()
if not cdp_url:
raise Exception(
"Failed to establish CDP connection to Docker container"
)
if self.logger:
self.logger.info(
f"Connecting to browser in Docker via CDP: {cdp_url}", tag="DOCKER"
)
# Connect to the browser using CDP
self.browser = await self.playwright.chromium.connect_over_cdp(cdp_url)
# Get existing context or create default context
contexts = self.browser.contexts
if contexts:
self.default_context = contexts[0]
if self.logger:
self.logger.debug("Using existing browser context", tag="DOCKER")
else:
if self.logger:
self.logger.debug("Creating new browser context", tag="DOCKER")
self.default_context = await self.create_browser_context()
await self.setup_context(self.default_context)
return self
except Exception as e:
# Clean up resources if startup fails
if self.container_id and not self.docker_config.persistent:
if self.logger:
self.logger.warning(
f"Cleaning up container after failed start: {self.container_id[:12]}",
tag="DOCKER",
)
await self.docker_utils.remove_container(self.container_id)
self.registry.unregister_container(self.container_id)
self.container_id = None
if self.playwright:
await self.playwright.stop()
self.playwright = None
# Re-raise the exception
if self.logger:
self.logger.error(
f"Failed to start Docker browser: {str(e)}", tag="DOCKER"
)
raise
async def _generate_config_hash(self) -> str:
"""Generate a hash of the configuration for container matching.
Returns:
Hash string uniquely identifying this configuration
"""
# Create a dict with the relevant parts of the config
config_dict = {
"image": self.docker_config.image,
"mode": self.docker_config.mode,
"browser_type": self.config.browser_type,
"headless": self.config.headless,
}
# Add browser-specific config if in launch mode
if self.docker_config.mode == "launch":
config_dict.update(
{
"text_mode": self.config.text_mode,
"light_mode": self.config.light_mode,
"viewport_width": self.config.viewport_width,
"viewport_height": self.config.viewport_height,
}
)
# Use the utility method to generate the hash
return self.docker_utils.generate_config_hash(config_dict)
async def _get_or_create_cdp_url(self) -> str:
"""Get CDP URL by either creating a new container or using an existing one.
Returns:
CDP URL for connecting to the browser
Raises:
Exception: If container creation or browser launch fails
"""
# If CDP URL is explicitly provided, use it
if self.config.cdp_url:
return self.config.cdp_url
# Ensure Docker image exists (will build if needed)
image_name = await self.docker_utils.ensure_docker_image_exists(
self.docker_config.image, self.docker_config.mode
)
# Generate config hash for container matching
config_hash = await self._generate_config_hash()
# Look for existing container with matching config
container_id = await self.registry.find_container_by_config(
config_hash, self.docker_utils
)
if container_id:
# Use existing container
self.container_id = container_id
host_port = self.registry.get_container_host_port(container_id)
if self.logger:
self.logger.info(
f"Using existing Docker container: {container_id[:12]}",
tag="DOCKER",
)
else:
# Get a port for the new container
host_port = (
self.docker_config.host_port
or self.registry.get_next_available_port(self.docker_utils)
)
# Prepare volumes list
volumes = list(self.docker_config.volumes)
# Add user data directory if specified
if self.docker_config.user_data_dir:
# Ensure user data directory exists
os.makedirs(self.docker_config.user_data_dir, exist_ok=True)
volumes.append(
f"{self.docker_config.user_data_dir}:{self.docker_config.container_user_data_dir}"
)
# # Update config user_data_dir to point to container path
# self.config.user_data_dir = self.docker_config.container_user_data_dir
# Create a new container
container_id = await self.docker_utils.create_container(
image_name=image_name,
host_port=host_port,
container_name=self.container_name,
volumes=volumes,
network=self.docker_config.network,
env_vars=self.docker_config.env_vars,
cpu_limit=self.docker_config.cpu_limit,
memory_limit=self.docker_config.memory_limit,
extra_args=self.docker_config.extra_args,
)
if not container_id:
raise Exception("Failed to create Docker container")
self.container_id = container_id
# Wait for container to be ready
await self.docker_utils.wait_for_container_ready(container_id)
# Handle specific setup based on mode
if self.docker_config.mode == "launch":
# In launch mode, we need to start socat and Chrome
await self.docker_utils.start_socat_in_container(container_id)
# Build browser arguments
browser_args = self._build_browser_args()
# Launch Chrome
await self.docker_utils.launch_chrome_in_container(
container_id, browser_args
)
# Get PIDs for later cleanup
self.chrome_process_id = (
await self.docker_utils.get_process_id_in_container(
container_id, "chromium"
)
)
self.socat_process_id = (
await self.docker_utils.get_process_id_in_container(
container_id, "socat"
)
)
# Wait for CDP to be ready
cdp_json_config = await self.docker_utils.wait_for_cdp_ready(host_port)
if cdp_json_config:
# Register the container in the shared registry
self.registry.register_container(
container_id, host_port, config_hash, cdp_json_config
)
else:
raise Exception("Failed to get CDP JSON config from Docker container")
if self.logger:
self.logger.success(
f"Docker container ready: {container_id[:12]} on port {host_port}",
tag="DOCKER",
)
# Return CDP URL
return f"http://localhost:{host_port}"
def _build_browser_args(self) -> List[str]:
"""Build Chrome command line arguments based on BrowserConfig.
Returns:
List of command line arguments for Chrome
"""
# Call parent method to get common arguments
browser_args = super()._build_browser_args()
return browser_args["args"] + [
f"--remote-debugging-port={self.internal_cdp_port}",
"--remote-debugging-address=0.0.0.0", # Allow external connections
"--disable-dev-shm-usage",
"--headless=new",
]
# args = [
# "--no-sandbox",
# "--disable-gpu",
# f"--remote-debugging-port={self.internal_cdp_port}",
# "--remote-debugging-address=0.0.0.0", # Allow external connections
# "--disable-dev-shm-usage",
# ]
# if self.config.headless:
# args.append("--headless=new")
# if self.config.viewport_width and self.config.viewport_height:
# args.append(f"--window-size={self.config.viewport_width},{self.config.viewport_height}")
# if self.config.user_agent:
# args.append(f"--user-agent={self.config.user_agent}")
# if self.config.text_mode:
# args.extend([
# "--blink-settings=imagesEnabled=false",
# "--disable-remote-fonts",
# "--disable-images",
# "--disable-javascript",
# ])
# if self.config.light_mode:
# # Import here to avoid circular import
# from ..utils import get_browser_disable_options
# args.extend(get_browser_disable_options())
# if self.config.user_data_dir:
# args.append(f"--user-data-dir={self.config.user_data_dir}")
# if self.config.extra_args:
# args.extend(self.config.extra_args)
# return args
async def close(self):
"""Close the browser and clean up Docker container if needed."""
# Set flag to track if we were the ones initiating shutdown
initiated_shutdown = not self.shutting_down
# Storage persistence for Docker needs special handling
# We need to store state before calling super().close() which will close the browser
if (
self.browser
and self.docker_config.user_data_dir
and self.docker_config.persistent
):
for context in self.browser.contexts:
try:
# Ensure directory exists
os.makedirs(self.docker_config.user_data_dir, exist_ok=True)
# Save storage state to user data directory
storage_path = os.path.join(
self.docker_config.user_data_dir, "storage_state.json"
)
await context.storage_state(path=storage_path)
if self.logger:
self.logger.debug(
"Persisted Docker-specific storage state", tag="DOCKER"
)
except Exception as e:
if self.logger:
self.logger.warning(
message="Failed to persist Docker storage state: {error}",
tag="DOCKER",
params={"error": str(e)},
)
# Call parent method to handle common cleanup
await super().close()
# Only perform container cleanup if we initiated shutdown
# and we need to handle Docker-specific resources
if initiated_shutdown:
# Only clean up container if not persistent
if self.container_id and not self.docker_config.persistent:
# Stop Chrome process in "launch" mode
if self.docker_config.mode == "launch" and self.chrome_process_id:
await self.docker_utils.stop_process_in_container(
self.container_id, self.chrome_process_id
)
if self.logger:
self.logger.debug(
f"Stopped Chrome process {self.chrome_process_id} in container",
tag="DOCKER",
)
# Stop socat process in "launch" mode
if self.docker_config.mode == "launch" and self.socat_process_id:
await self.docker_utils.stop_process_in_container(
self.container_id, self.socat_process_id
)
if self.logger:
self.logger.debug(
f"Stopped socat process {self.socat_process_id} in container",
tag="DOCKER",
)
# Remove or stop container based on configuration
if self.docker_config.remove_on_exit:
await self.docker_utils.remove_container(self.container_id)
# Unregister from registry
if hasattr(self, "registry") and self.registry:
self.registry.unregister_container(self.container_id)
if self.logger:
self.logger.debug(
f"Removed Docker container {self.container_id}",
tag="DOCKER",
)
else:
await self.docker_utils.stop_container(self.container_id)
if self.logger:
self.logger.debug(
f"Stopped Docker container {self.container_id}",
tag="DOCKER",
)
self.container_id = None
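# Illustrative usage (hypothetical values; DockerConfig fields as referenced above):
#   config = BrowserConfig(headless=True, docker_config=DockerConfig(mode="launch", persistent=False))
#   strategy = DockerBrowserStrategy(config, logger=None)
#   await strategy.start()    # builds or reuses a container, then connects over CDP
#   page, ctx = await strategy.get_page(CrawlerRunConfig(url="https://example.com"))
#   await strategy.close()    # stops Chrome/socat and removes the container unless persistent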

View File

@@ -1,134 +0,0 @@
"""Browser strategies module for Crawl4AI.
This module implements the browser strategy pattern for different
browser implementations, including Playwright, CDP, and builtin browsers.
"""
import time
from typing import Optional, Tuple
from playwright.async_api import BrowserContext, Page
from ...async_logger import AsyncLogger
from ...async_configs import BrowserConfig, CrawlerRunConfig
from playwright_stealth import StealthConfig
from .base import BaseBrowserStrategy
stealth_config = StealthConfig(
webdriver=True,
chrome_app=True,
chrome_csi=True,
chrome_load_times=True,
chrome_runtime=True,
navigator_languages=True,
navigator_plugins=True,
navigator_permissions=True,
webgl_vendor=True,
outerdimensions=True,
navigator_hardware_concurrency=True,
media_codecs=True,
)
class PlaywrightBrowserStrategy(BaseBrowserStrategy):
"""Standard Playwright browser strategy.
This strategy launches a new browser instance using Playwright
and manages browser contexts.
"""
def __init__(self, config: BrowserConfig, logger: Optional[AsyncLogger] = None):
"""Initialize the Playwright browser strategy.
Args:
config: Browser configuration
logger: Logger for recording events and errors
"""
super().__init__(config, logger)
# No need to re-initialize sessions and session_ttl as they're now in the base class
async def start(self):
"""Start the browser instance.
Returns:
self: For method chaining
"""
# Call the base class start to initialize Playwright
await super().start()
# Build browser arguments using the base class method
browser_args = self._build_browser_args()
try:
# Launch appropriate browser type
if self.config.browser_type == "firefox":
self.browser = await self.playwright.firefox.launch(**browser_args)
elif self.config.browser_type == "webkit":
self.browser = await self.playwright.webkit.launch(**browser_args)
else:
self.browser = await self.playwright.chromium.launch(**browser_args)
self.default_context = self.browser
if self.logger:
self.logger.debug(f"Launched {self.config.browser_type} browser", tag="BROWSER")
except Exception as e:
if self.logger:
self.logger.error(f"Failed to launch browser: {str(e)}", tag="BROWSER")
raise
return self
async def _generate_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
# Check if we have an existing context for this config
config_signature = self._make_config_signature(crawlerRunConfig)
async with self._contexts_lock:
if config_signature in self.contexts_by_config:
context = self.contexts_by_config[config_signature]
else:
# Create and setup a new context
context = await self.create_browser_context(crawlerRunConfig)
await self.setup_context(context, crawlerRunConfig)
self.contexts_by_config[config_signature] = context
# Create a new page from the chosen context
page = await context.new_page()
return page, context
async def _get_page(self, crawlerRunConfig: CrawlerRunConfig) -> Tuple[Page, BrowserContext]:
"""Get a page for the given configuration.
Args:
crawlerRunConfig: Configuration object for the crawler run
Returns:
Tuple of (Page, BrowserContext)
"""
# Call parent method to ensure browser is started
await super().get_page(crawlerRunConfig)
# Check if we have an existing context for this config
config_signature = self._make_config_signature(crawlerRunConfig)
async with self._contexts_lock:
if config_signature in self.contexts_by_config:
context = self.contexts_by_config[config_signature]
else:
# Create and setup a new context
context = await self.create_browser_context(crawlerRunConfig)
await self.setup_context(context, crawlerRunConfig)
self.contexts_by_config[config_signature] = context
# Create a new page from the chosen context
page = await context.new_page()
# If a session_id is specified, store this session so we can reuse later
if crawlerRunConfig.session_id:
self.sessions[crawlerRunConfig.session_id] = (context, page, time.time())
return page, context
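# Illustrative end-to-end usage (hypothetical caller code, not part of this module):
#   strategy = PlaywrightBrowserStrategy(BrowserConfig(browser_type="chromium", headless=True), logger=None)
#   await strategy.start()
#   page, ctx = await strategy.get_page(CrawlerRunConfig(url="https://example.com"))
#   await page.goto("https://example.com")
#   await strategy.close()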

View File

@@ -1,465 +0,0 @@
"""Browser utilities module for Crawl4AI.
This module provides utility functions for browser management,
including process management, CDP connection utilities,
and Playwright instance management.
"""
import asyncio
import os
import sys
import time
import tempfile
import subprocess
from typing import Optional, Tuple, Union
import signal
import psutil
from playwright.async_api import async_playwright
from ..utils import get_chromium_path
from ..async_configs import BrowserConfig, CrawlerRunConfig
from ..async_logger import AsyncLogger
_playwright_instance = None
async def get_playwright():
"""Get or create the Playwright instance (singleton pattern).
Returns:
Playwright: The Playwright instance
"""
global _playwright_instance
if _playwright_instance is None or True:
_playwright_instance = await async_playwright().start()
return _playwright_instance
async def get_browser_executable(browser_type: str) -> str:
"""Get the path to browser executable, with platform-specific handling.
Args:
browser_type: Type of browser (chromium, firefox, webkit)
Returns:
Path to browser executable
"""
return await get_chromium_path(browser_type)
def create_temp_directory(prefix="browser-profile-") -> str:
"""Create a temporary directory for browser data.
Args:
prefix: Prefix for the temporary directory name
Returns:
Path to the created temporary directory
"""
return tempfile.mkdtemp(prefix=prefix)
def is_windows() -> bool:
"""Check if the current platform is Windows.
Returns:
True if Windows, False otherwise
"""
return sys.platform == "win32"
def is_macos() -> bool:
"""Check if the current platform is macOS.
Returns:
True if macOS, False otherwise
"""
return sys.platform == "darwin"
def is_linux() -> bool:
"""Check if the current platform is Linux.
Returns:
True if Linux, False otherwise
"""
return not (is_windows() or is_macos())
def is_browser_running(pid: Optional[int]) -> bool:
"""Check if a process with the given PID is running.
Args:
pid: Process ID to check
Returns:
bool: True if the process is running, False otherwise
"""
if not pid:
return False
try:
if type(pid) is str:
pid = int(pid)
# Check if the process exists
if is_windows():
process = subprocess.run(["tasklist", "/FI", f"PID eq {pid}"],
capture_output=True, text=True)
return str(pid) in process.stdout
else:
# Unix-like systems
os.kill(pid, 0) # This doesn't actually kill the process, just checks if it exists
return True
except (ProcessLookupError, PermissionError, OSError):
return False
def get_browser_disable_options() -> list:
"""Get standard list of browser disable options for performance.
Returns:
List of command-line options to disable various browser features
"""
return [
"--disable-background-networking",
"--disable-background-timer-throttling",
"--disable-backgrounding-occluded-windows",
"--disable-breakpad",
"--disable-client-side-phishing-detection",
"--disable-component-extensions-with-background-pages",
"--disable-default-apps",
"--disable-extensions",
"--disable-features=TranslateUI",
"--disable-hang-monitor",
"--disable-ipc-flooding-protection",
"--disable-popup-blocking",
"--disable-prompt-on-repost",
"--disable-sync",
"--force-color-profile=srgb",
"--metrics-recording-only",
"--no-first-run",
"--password-store=basic",
"--use-mock-keychain",
]
async def find_optimal_browser_config(total_urls=50, verbose=True, rate_limit_delay=0.2):
"""Find optimal browser configuration for crawling a specific number of URLs.
Args:
total_urls: Number of URLs to crawl
verbose: Whether to print progress
rate_limit_delay: Delay between page loads to avoid rate limiting
Returns:
dict: Contains fastest, lowest_memory, and optimal configurations
"""
from .manager import BrowserManager
if verbose:
print(f"\n=== Finding optimal configuration for crawling {total_urls} URLs ===\n")
# Generate test URLs with timestamp to avoid caching
timestamp = int(time.time())
urls = [f"https://example.com/page_{i}?t={timestamp}" for i in range(total_urls)]
# Limit browser configurations to test (1 browser to max 10)
max_browsers = min(10, total_urls)
configs_to_test = []
# Generate configurations (browser count, pages distribution)
for num_browsers in range(1, max_browsers + 1):
base_pages = total_urls // num_browsers
remainder = total_urls % num_browsers
# Create distribution array like [3, 3, 2, 2] (some browsers get one more page)
if remainder > 0:
distribution = [base_pages + 1] * remainder + [base_pages] * (num_browsers - remainder)
else:
distribution = [base_pages] * num_browsers
configs_to_test.append((num_browsers, distribution))
results = []
# Test each configuration
for browser_count, page_distribution in configs_to_test:
if verbose:
print(f"Testing {browser_count} browsers with distribution {tuple(page_distribution)}")
try:
# Track memory if possible
try:
import psutil
process = psutil.Process()
start_memory = process.memory_info().rss / (1024 * 1024) # MB
except ImportError:
if verbose:
print("Memory tracking not available (psutil not installed)")
start_memory = 0
# Start browsers in parallel
managers = []
start_tasks = []
start_time = time.time()
logger = AsyncLogger(verbose=True, log_file=None)
for i in range(browser_count):
config = BrowserConfig(headless=True)
manager = BrowserManager(browser_config=config, logger=logger)
start_tasks.append(manager.start())
managers.append(manager)
await asyncio.gather(*start_tasks)
# Distribute URLs among browsers
urls_per_manager = {}
url_index = 0
for i, manager in enumerate(managers):
pages_for_this_browser = page_distribution[i]
end_index = url_index + pages_for_this_browser
urls_per_manager[manager] = urls[url_index:end_index]
url_index = end_index
# Create pages for each browser
all_pages = []
for manager, manager_urls in urls_per_manager.items():
if not manager_urls:
continue
pages = await manager.get_pages(CrawlerRunConfig(), count=len(manager_urls))
all_pages.extend(zip(pages, manager_urls))
# Crawl pages with delay to avoid rate limiting
async def crawl_page(page_ctx, url):
page, _ = page_ctx
try:
await page.goto(url)
if rate_limit_delay > 0:
await asyncio.sleep(rate_limit_delay)
title = await page.title()
return title
finally:
await page.close()
crawl_start = time.time()
crawl_tasks = [crawl_page(page_ctx, url) for page_ctx, url in all_pages]
await asyncio.gather(*crawl_tasks)
crawl_time = time.time() - crawl_start
total_time = time.time() - start_time
# Measure final memory usage
if start_memory > 0:
end_memory = process.memory_info().rss / (1024 * 1024)
memory_used = end_memory - start_memory
else:
memory_used = 0
# Close all browsers
for manager in managers:
await manager.close()
# Calculate metrics
pages_per_second = total_urls / crawl_time
# Calculate efficiency score (higher is better)
# This balances speed vs memory
if memory_used > 0:
efficiency = pages_per_second / (memory_used + 1)
else:
efficiency = pages_per_second
# Store result
result = {
"browser_count": browser_count,
"distribution": tuple(page_distribution),
"crawl_time": crawl_time,
"total_time": total_time,
"memory_used": memory_used,
"pages_per_second": pages_per_second,
"efficiency": efficiency
}
results.append(result)
if verbose:
print(f" ✓ Crawled {total_urls} pages in {crawl_time:.2f}s ({pages_per_second:.1f} pages/sec)")
if memory_used > 0:
print(f" ✓ Memory used: {memory_used:.1f}MB ({memory_used/total_urls:.1f}MB per page)")
print(f" ✓ Efficiency score: {efficiency:.4f}")
except Exception as e:
if verbose:
print(f" ✗ Error: {str(e)}")
# Clean up
for manager in managers:
try:
await manager.close()
except:
pass
# If no successful results, return None
if not results:
return None
# Find best configurations
fastest = sorted(results, key=lambda x: x["crawl_time"])[0]
# Only consider memory if available
memory_results = [r for r in results if r["memory_used"] > 0]
if memory_results:
lowest_memory = sorted(memory_results, key=lambda x: x["memory_used"])[0]
else:
lowest_memory = fastest
# Find most efficient (balanced speed vs memory)
optimal = sorted(results, key=lambda x: x["efficiency"], reverse=True)[0]
# Print summary
if verbose:
print("\n=== OPTIMAL CONFIGURATIONS ===")
print(f"⚡ Fastest: {fastest['browser_count']} browsers {fastest['distribution']}")
print(f" {fastest['crawl_time']:.2f}s, {fastest['pages_per_second']:.1f} pages/sec")
print(f"💾 Memory-efficient: {lowest_memory['browser_count']} browsers {lowest_memory['distribution']}")
if lowest_memory["memory_used"] > 0:
print(f" {lowest_memory['memory_used']:.1f}MB, {lowest_memory['memory_used']/total_urls:.2f}MB per page")
print(f"🌟 Balanced optimal: {optimal['browser_count']} browsers {optimal['distribution']}")
print(f" {optimal['crawl_time']:.2f}s, {optimal['pages_per_second']:.1f} pages/sec, score: {optimal['efficiency']:.4f}")
return {
"fastest": fastest,
"lowest_memory": lowest_memory,
"optimal": optimal,
"all_configs": results
}
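A short driver sketch for the benchmark above (assumed usage, not part of the original file; it calls the function directly from this module and prints the balanced recommendation):
import asyncio

async def tune_pool_size():
    # Run the benchmark for a modest batch and report the balanced configuration.
    report = await find_optimal_browser_config(total_urls=30, verbose=False)
    if report is None:
        print("No configuration finished successfully")
        return
    best = report["optimal"]
    print(f"Suggested: {best['browser_count']} browsers, "
          f"distribution {best['distribution']}, "
          f"{best['pages_per_second']:.1f} pages/sec")

if __name__ == "__main__":
    asyncio.run(tune_pool_size())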
# Find the process ID of an existing browser by the port it listens on (netstat on Windows, lsof elsewhere)
def find_process_by_port(port: int) -> str:
"""Find process ID listening on a specific port.
Args:
port: Port number to check
Returns:
str: Process ID or empty string if not found
"""
try:
if is_windows():
cmd = f"netstat -ano | findstr :{port}"
result = subprocess.check_output(cmd, shell=True).decode()
return result.strip().split()[-1] if result else ""
else:
cmd = f"lsof -i :{port} -t"
return subprocess.check_output(cmd, shell=True).decode().strip()
except subprocess.CalledProcessError:
return ""
async def check_process_is_running(process: subprocess.Popen, delay: float = 0.5) -> Tuple[bool, Optional[int], bytes, bytes]:
"""Perform a quick check to make sure the browser started successfully."""
if not process:
return False, None, b"", b""
# Check that process started without immediate termination
await asyncio.sleep(delay)
if process.poll() is not None:
# Process already terminated
stdout, stderr = b"", b""
try:
stdout, stderr = process.communicate(timeout=0.5)
except subprocess.TimeoutExpired:
pass
return False, process.returncode, stdout, stderr
return True, 0, b"", b""
def terminate_process(
pid: Union[int, str],
timeout: float = 5.0,
force_kill_timeout: float = 3.0,
logger = None
) -> Tuple[bool, Optional[str]]:
"""
Robustly terminate a process across platforms with verification.
Args:
pid: Process ID to terminate (int or string)
timeout: Seconds to wait for graceful termination before force killing
force_kill_timeout: Seconds to wait after force kill before considering it failed
logger: Optional logger object with error, warning, and info methods
Returns:
Tuple of (success: bool, error_message: Optional[str])
"""
# Convert pid to int if it's a string
if isinstance(pid, str):
try:
pid = int(pid)
except ValueError:
error_msg = f"Invalid PID format: {pid}"
if logger:
logger.error(error_msg)
return False, error_msg
# Check if process exists
if not psutil.pid_exists(pid):
return True, None # Process already terminated
try:
process = psutil.Process(pid)
# First attempt: graceful termination
if logger:
logger.info(f"Attempting graceful termination of process {pid}")
if os.name == 'nt': # Windows
subprocess.run(["taskkill", "/PID", str(pid)],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
check=False)
else: # Unix/Linux/MacOS
process.send_signal(signal.SIGTERM)
# Wait for process to terminate
try:
process.wait(timeout=timeout)
if logger:
logger.info(f"Process {pid} terminated gracefully")
return True, None
except psutil.TimeoutExpired:
if logger:
logger.warning(f"Process {pid} did not terminate gracefully within {timeout} seconds, forcing termination")
# Second attempt: force kill
if os.name == 'nt': # Windows
subprocess.run(["taskkill", "/F", "/PID", str(pid)],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
check=False)
else: # Unix/Linux/MacOS
process.send_signal(signal.SIGKILL)
# Verify process is killed
gone, alive = psutil.wait_procs([process], timeout=force_kill_timeout)
if process in alive:
error_msg = f"Failed to kill process {pid} even after force kill"
if logger:
logger.error(error_msg)
return False, error_msg
if logger:
logger.info(f"Process {pid} terminated by force")
return True, None
except psutil.NoSuchProcess:
# Process terminated while we were working with it
if logger:
logger.info(f"Process {pid} already terminated")
return True, None
except Exception as e:
error_msg = f"Error terminating process {pid}: {str(e)}"
if logger:
logger.error(error_msg)
return False, error_msg
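A small cleanup sketch combining the helpers above (assumed usage; the CDP port 9222 and the standard-library logger are illustrative stand-ins for whatever the caller actually uses):
import logging

def shutdown_browser_on_port(port: int = 9222) -> None:
    # Look up whatever process owns the port, then terminate it gracefully.
    logger = logging.getLogger("browser-cleanup")
    pid = find_process_by_port(port)
    if not pid:
        logger.info("No process is listening on port %s", port)
        return
    ok, error = terminate_process(pid, timeout=5.0, logger=logger)
    if not ok:
        logger.error("Failed to terminate PID %s: %s", pid, error)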

View File

@@ -440,8 +440,7 @@ class BrowserManager:
     @classmethod
     async def get_playwright(cls):
         from playwright.async_api import async_playwright
-        if cls._playwright_instance is None:
-            cls._playwright_instance = await async_playwright().start()
+        cls._playwright_instance = await async_playwright().start()
         return cls._playwright_instance

     def __init__(self, browser_config: BrowserConfig, logger=None):

@@ -492,7 +491,6 @@ class BrowserManager:
         Note: This method should be called in a separate task to avoid blocking the main event loop.
         """
-        self.playwright = await self.get_playwright()
         if self.playwright is None:
             from playwright.async_api import async_playwright

@@ -660,7 +658,7 @@ class BrowserManager:
                     "name": "cookiesEnabled",
                     "value": "true",
                     "url": crawlerRunConfig.url
-                    if crawlerRunConfig
+                    if crawlerRunConfig and crawlerRunConfig.url
                     else "https://crawl4ai.com/",
                 }
             ]

View File

@@ -866,6 +866,12 @@ class WebScrapingStrategy(ContentScrapingStrategy):
         if body is None:
             raise Exception("'<body>' tag is not found in fetched html. Consider adding wait_for=\"css:body\" to wait for body tag to be loaded into DOM.")
         base_domain = get_base_domain(url)
+        # Early removal of all images if exclude_all_images is set
+        # This happens before any processing to minimize memory usage
+        if kwargs.get("exclude_all_images", False):
+            for img in body.find_all('img'):
+                img.decompose()
         try:
             meta = extract_metadata("", soup)

@@ -1487,6 +1493,13 @@ class LXMLWebScrapingStrategy(WebScrapingStrategy):
             body = doc
         base_domain = get_base_domain(url)
+        # Early removal of all images if exclude_all_images is set
+        # This is more efficient in lxml as we remove elements before any processing
+        if kwargs.get("exclude_all_images", False):
+            for img in body.xpath('//img'):
+                if img.getparent() is not None:
+                    img.getparent().remove(img)
         # Add comment removal
         if kwargs.get("remove_comments", False):
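A usage sketch for the new flag (an assumption: `exclude_all_images` is exposed on `CrawlerRunConfig` and forwarded to the scraping strategy as a keyword argument; adjust if the public parameter name differs in your release):
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def crawl_without_images():
    # Assumed parameter name; drops every <img> before any further processing.
    config = CrawlerRunConfig(exclude_all_images=True)
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://example.com/", config=config)
        if result.success:
            print(f"Images kept: {len(result.media.get('images', []))}")  # expected: 0

asyncio.run(crawl_without_images())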

View File

@@ -95,15 +95,7 @@ class UrlModel(BaseModel):
     url: HttpUrl
     forced: bool = False

-class MarkdownGenerationResult(BaseModel):
-    raw_markdown: str
-    markdown_with_citations: str
-    references_markdown: str
-    fit_markdown: Optional[str] = None
-    fit_html: Optional[str] = None
-    def __str__(self):
-        return self.raw_markdown

 @dataclass
 class TraversalStats:

@@ -124,6 +116,16 @@ class DispatchResult(BaseModel):
     end_time: Union[datetime, float]
     error_message: str = ""

+class MarkdownGenerationResult(BaseModel):
+    raw_markdown: str
+    markdown_with_citations: str
+    references_markdown: str
+    fit_markdown: Optional[str] = None
+    fit_html: Optional[str] = None
+    def __str__(self):
+        return self.raw_markdown

 class CrawlResult(BaseModel):
     url: str
     html: str

@@ -135,6 +137,7 @@ class CrawlResult(BaseModel):
     js_execution_result: Optional[Dict[str, Any]] = None
     screenshot: Optional[str] = None
     pdf: Optional[bytes] = None
+    mhtml: Optional[str] = None
     _markdown: Optional[MarkdownGenerationResult] = PrivateAttr(default=None)
     extracted_content: Optional[str] = None
     metadata: Optional[dict] = None

@@ -145,6 +148,8 @@ class CrawlResult(BaseModel):
     ssl_certificate: Optional[SSLCertificate] = None
     dispatch_result: Optional[DispatchResult] = None
     redirected_url: Optional[str] = None
+    network_requests: Optional[List[Dict[str, Any]]] = None
+    console_messages: Optional[List[Dict[str, Any]]] = None

     class Config:
         arbitrary_types_allowed = True

@@ -307,10 +312,13 @@ class AsyncCrawlResponse(BaseModel):
     status_code: int
     screenshot: Optional[str] = None
     pdf_data: Optional[bytes] = None
+    mhtml_data: Optional[str] = None
     get_delayed_content: Optional[Callable[[Optional[float]], Awaitable[str]]] = None
     downloaded_files: Optional[List[str]] = None
     ssl_certificate: Optional[SSLCertificate] = None
     redirected_url: Optional[str] = None
+    network_requests: Optional[List[Dict[str, Any]]] = None
+    console_messages: Optional[List[Dict[str, Any]]] = None

     class Config:
         arbitrary_types_allowed = True

View File

@@ -0,0 +1,471 @@
import asyncio
import json
import os
import base64
from pathlib import Path
from typing import List, Dict, Any
from datetime import datetime
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode, CrawlResult
from crawl4ai import BrowserConfig
__cur_dir__ = Path(__file__).parent
# Create temp directory if it doesn't exist
os.makedirs(os.path.join(__cur_dir__, "tmp"), exist_ok=True)
async def demo_basic_network_capture():
"""Basic network request capturing example"""
print("\n=== 1. Basic Network Request Capturing ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
wait_until="networkidle" # Wait for network to be idle
)
result = await crawler.arun(
url="https://example.com/",
config=config
)
if result.success and result.network_requests:
print(f"Captured {len(result.network_requests)} network events")
# Count by event type
event_types = {}
for req in result.network_requests:
event_type = req.get("event_type", "unknown")
event_types[event_type] = event_types.get(event_type, 0) + 1
print("Event types:")
for event_type, count in event_types.items():
print(f" - {event_type}: {count}")
# Show a sample request and response
request = next((r for r in result.network_requests if r.get("event_type") == "request"), None)
response = next((r for r in result.network_requests if r.get("event_type") == "response"), None)
if request:
print("\nSample request:")
print(f" URL: {request.get('url')}")
print(f" Method: {request.get('method')}")
print(f" Headers: {list(request.get('headers', {}).keys())}")
if response:
print("\nSample response:")
print(f" URL: {response.get('url')}")
print(f" Status: {response.get('status')} {response.get('status_text', '')}")
print(f" Headers: {list(response.get('headers', {}).keys())}")
async def demo_basic_console_capture():
"""Basic console message capturing example"""
print("\n=== 2. Basic Console Message Capturing ===")
# Create a simple HTML file with console messages
html_file = os.path.join(__cur_dir__, "tmp", "console_test.html")
with open(html_file, "w") as f:
f.write("""
<!DOCTYPE html>
<html>
<head>
<title>Console Test</title>
</head>
<body>
<h1>Console Message Test</h1>
<script>
console.log("This is a basic log message");
console.info("This is an info message");
console.warn("This is a warning message");
console.error("This is an error message");
// Generate an error
try {
nonExistentFunction();
} catch (e) {
console.error("Caught error:", e);
}
</script>
</body>
</html>
""")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_console_messages=True,
wait_until="networkidle" # Wait to make sure all scripts execute
)
result = await crawler.arun(
url=f"file://{html_file}",
config=config
)
if result.success and result.console_messages:
print(f"Captured {len(result.console_messages)} console messages")
# Count by message type
message_types = {}
for msg in result.console_messages:
msg_type = msg.get("type", "unknown")
message_types[msg_type] = message_types.get(msg_type, 0) + 1
print("Message types:")
for msg_type, count in message_types.items():
print(f" - {msg_type}: {count}")
# Show all messages
print("\nAll console messages:")
for i, msg in enumerate(result.console_messages, 1):
print(f" {i}. [{msg.get('type', 'unknown')}] {msg.get('text', '')}")
async def demo_combined_capture():
"""Capturing both network requests and console messages"""
print("\n=== 3. Combined Network and Console Capture ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True,
wait_until="networkidle"
)
result = await crawler.arun(
url="https://httpbin.org/html",
config=config
)
if result.success:
network_count = len(result.network_requests) if result.network_requests else 0
console_count = len(result.console_messages) if result.console_messages else 0
print(f"Captured {network_count} network events and {console_count} console messages")
# Save the captured data to a JSON file for analysis
output_file = os.path.join(__cur_dir__, "tmp", "capture_data.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"timestamp": datetime.now().isoformat(),
"network_requests": result.network_requests,
"console_messages": result.console_messages
}, f, indent=2)
print(f"Full capture data saved to {output_file}")
async def analyze_spa_network_traffic():
"""Analyze network traffic of a Single-Page Application"""
print("\n=== 4. Analyzing SPA Network Traffic ===")
async with AsyncWebCrawler(config=BrowserConfig(
headless=True,
viewport_width=1280,
viewport_height=800
)) as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True,
# Wait longer to ensure all resources are loaded
wait_until="networkidle",
page_timeout=60000, # 60 seconds
)
result = await crawler.arun(
url="https://weather.com",
config=config
)
if result.success and result.network_requests:
# Extract different types of requests
requests = []
responses = []
failures = []
for event in result.network_requests:
event_type = event.get("event_type")
if event_type == "request":
requests.append(event)
elif event_type == "response":
responses.append(event)
elif event_type == "request_failed":
failures.append(event)
print(f"Captured {len(requests)} requests, {len(responses)} responses, and {len(failures)} failures")
# Analyze request types
resource_types = {}
for req in requests:
resource_type = req.get("resource_type", "unknown")
resource_types[resource_type] = resource_types.get(resource_type, 0) + 1
print("\nResource types:")
for resource_type, count in sorted(resource_types.items(), key=lambda x: x[1], reverse=True):
print(f" - {resource_type}: {count}")
# Analyze API calls
api_calls = [r for r in requests if "api" in r.get("url", "").lower()]
if api_calls:
print(f"\nDetected {len(api_calls)} API calls:")
for i, call in enumerate(api_calls[:5], 1): # Show first 5
print(f" {i}. {call.get('method')} {call.get('url')}")
if len(api_calls) > 5:
print(f" ... and {len(api_calls) - 5} more")
# Analyze response status codes
status_codes = {}
for resp in responses:
status = resp.get("status", 0)
status_codes[status] = status_codes.get(status, 0) + 1
print("\nResponse status codes:")
for status, count in sorted(status_codes.items()):
print(f" - {status}: {count}")
# Analyze failures
if failures:
print("\nFailed requests:")
for i, failure in enumerate(failures[:5], 1): # Show first 5
print(f" {i}. {failure.get('url')} - {failure.get('failure_text')}")
if len(failures) > 5:
print(f" ... and {len(failures) - 5} more")
# Check for console errors
if result.console_messages:
errors = [msg for msg in result.console_messages if msg.get("type") == "error"]
if errors:
print(f"\nDetected {len(errors)} console errors:")
for i, error in enumerate(errors[:3], 1): # Show first 3
print(f" {i}. {error.get('text', '')[:100]}...")
if len(errors) > 3:
print(f" ... and {len(errors) - 3} more")
# Save analysis to file
output_file = os.path.join(__cur_dir__, "tmp", "weather_network_analysis.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"timestamp": datetime.now().isoformat(),
"statistics": {
"request_count": len(requests),
"response_count": len(responses),
"failure_count": len(failures),
"resource_types": resource_types,
"status_codes": {str(k): v for k, v in status_codes.items()},
"api_call_count": len(api_calls),
"console_error_count": len(errors) if result.console_messages else 0
},
"network_requests": result.network_requests,
"console_messages": result.console_messages
}, f, indent=2)
print(f"\nFull analysis saved to {output_file}")
async def demo_security_analysis():
"""Using network capture for security analysis"""
print("\n=== 5. Security Analysis with Network Capture ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True,
wait_until="networkidle"
)
# A site that makes multiple third-party requests
result = await crawler.arun(
url="https://www.nytimes.com/",
config=config
)
if result.success and result.network_requests:
print(f"Captured {len(result.network_requests)} network events")
# Extract all domains
domains = set()
for req in result.network_requests:
if req.get("event_type") == "request":
url = req.get("url", "")
try:
from urllib.parse import urlparse
domain = urlparse(url).netloc
if domain:
domains.add(domain)
except:
pass
print(f"\nDetected requests to {len(domains)} unique domains:")
main_domain = urlparse(result.url).netloc
# Separate first-party vs third-party domains
first_party = [d for d in domains if main_domain in d]
third_party = [d for d in domains if main_domain not in d]
print(f" - First-party domains: {len(first_party)}")
print(f" - Third-party domains: {len(third_party)}")
# Look for potential trackers/analytics
tracking_keywords = ["analytics", "tracker", "pixel", "tag", "stats", "metric", "collect", "beacon"]
potential_trackers = []
for domain in third_party:
if any(keyword in domain.lower() for keyword in tracking_keywords):
potential_trackers.append(domain)
if potential_trackers:
print(f"\nPotential tracking/analytics domains ({len(potential_trackers)}):")
for i, domain in enumerate(sorted(potential_trackers)[:10], 1):
print(f" {i}. {domain}")
if len(potential_trackers) > 10:
print(f" ... and {len(potential_trackers) - 10} more")
# Check for insecure (HTTP) requests
insecure_requests = [
req.get("url") for req in result.network_requests
if req.get("event_type") == "request" and req.get("url", "").startswith("http://")
]
if insecure_requests:
print(f"\nWarning: Found {len(insecure_requests)} insecure (HTTP) requests:")
for i, url in enumerate(insecure_requests[:5], 1):
print(f" {i}. {url}")
if len(insecure_requests) > 5:
print(f" ... and {len(insecure_requests) - 5} more")
# Save security analysis to file
output_file = os.path.join(__cur_dir__, "tmp", "security_analysis.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"main_domain": main_domain,
"timestamp": datetime.now().isoformat(),
"analysis": {
"total_requests": len([r for r in result.network_requests if r.get("event_type") == "request"]),
"unique_domains": len(domains),
"first_party_domains": first_party,
"third_party_domains": third_party,
"potential_trackers": potential_trackers,
"insecure_requests": insecure_requests
}
}, f, indent=2)
print(f"\nFull security analysis saved to {output_file}")
async def demo_performance_analysis():
"""Using network capture for performance analysis"""
print("\n=== 6. Performance Analysis with Network Capture ===")
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
capture_network_requests=True,
wait_until="networkidle",
page_timeout=60000 # 60 seconds
)
result = await crawler.arun(
url="https://www.cnn.com/",
config=config
)
if result.success and result.network_requests:
# Filter only response events with timing information
responses_with_timing = [
r for r in result.network_requests
if r.get("event_type") == "response" and r.get("request_timing")
]
if responses_with_timing:
print(f"Analyzing timing for {len(responses_with_timing)} network responses")
# Group by resource type
resource_timings = {}
for resp in responses_with_timing:
url = resp.get("url", "")
timing = resp.get("request_timing", {})
# Determine resource type from URL extension
last_segment = url.split("/")[-1].split("?")[0]
ext = last_segment.split(".")[-1].lower() if "." in last_segment else "unknown"
if ext in ["jpg", "jpeg", "png", "gif", "webp", "svg", "ico"]:
resource_type = "image"
elif ext in ["js"]:
resource_type = "javascript"
elif ext in ["css"]:
resource_type = "css"
elif ext in ["woff", "woff2", "ttf", "otf", "eot"]:
resource_type = "font"
else:
resource_type = "other"
if resource_type not in resource_timings:
resource_timings[resource_type] = []
# Calculate request duration if timing information is available
if isinstance(timing, dict) and "requestTime" in timing and "receiveHeadersEnd" in timing:
# Convert to milliseconds
duration = (timing["receiveHeadersEnd"] - timing["requestTime"]) * 1000
resource_timings[resource_type].append({
"url": url,
"duration_ms": duration
})
# Calculate statistics for each resource type
print("\nPerformance by resource type:")
for resource_type, timings in resource_timings.items():
if timings:
durations = [t["duration_ms"] for t in timings]
avg_duration = sum(durations) / len(durations)
max_duration = max(durations)
slowest_resource = next(t["url"] for t in timings if t["duration_ms"] == max_duration)
print(f" {resource_type.upper()}:")
print(f" - Count: {len(timings)}")
print(f" - Avg time: {avg_duration:.2f} ms")
print(f" - Max time: {max_duration:.2f} ms")
print(f" - Slowest: {slowest_resource}")
# Identify the slowest resources overall
all_timings = []
for resource_type, timings in resource_timings.items():
for timing in timings:
timing["type"] = resource_type
all_timings.append(timing)
all_timings.sort(key=lambda x: x["duration_ms"], reverse=True)
print("\nTop 5 slowest resources:")
for i, timing in enumerate(all_timings[:5], 1):
print(f" {i}. [{timing['type']}] {timing['url']} - {timing['duration_ms']:.2f} ms")
# Save performance analysis to file
output_file = os.path.join(__cur_dir__, "tmp", "performance_analysis.json")
with open(output_file, "w") as f:
json.dump({
"url": result.url,
"timestamp": datetime.now().isoformat(),
"resource_timings": resource_timings,
"slowest_resources": all_timings[:10] # Save top 10
}, f, indent=2)
print(f"\nFull performance analysis saved to {output_file}")
async def main():
"""Run all demo functions sequentially"""
print("=== Network and Console Capture Examples ===")
# Make sure tmp directory exists
os.makedirs(os.path.join(__cur_dir__, "tmp"), exist_ok=True)
# Run basic examples
await demo_basic_network_capture()
await demo_basic_console_capture()
await demo_combined_capture()
# Run advanced examples
await analyze_spa_network_traffic()
await demo_security_analysis()
await demo_performance_analysis()
print("\n=== Examples Complete ===")
print(f"Check the tmp directory for output files: {os.path.join(__cur_dir__, 'tmp')}")
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -1,675 +0,0 @@
import os, sys
from crawl4ai import LLMConfig
# append parent directory to system path
sys.path.append(
os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
)
os.environ["FIRECRAWL_API_KEY"] = "fc-84b370ccfad44beabc686b38f1769692"
import asyncio
# import nest_asyncio
# nest_asyncio.apply()
import time
import json
import os
import re
from typing import Dict, List
from bs4 import BeautifulSoup
from pydantic import BaseModel, Field
from crawl4ai import AsyncWebCrawler, CacheMode
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
from crawl4ai.content_filter_strategy import PruningContentFilter
from crawl4ai.extraction_strategy import (
JsonCssExtractionStrategy,
LLMExtractionStrategy,
)
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
print("Crawl4AI: Advanced Web Crawling and Data Extraction")
print("GitHub Repository: https://github.com/unclecode/crawl4ai")
print("Twitter: @unclecode")
print("Website: https://crawl4ai.com")
async def simple_crawl():
print("\n--- Basic Usage ---")
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500]) # Print first 500 characters
async def simple_example_with_running_js_code():
print("\n--- Executing JavaScript and Using CSS Selectors ---")
# New code to handle the wait_for parameter
wait_for = """() => {
return Array.from(document.querySelectorAll('article.tease-card')).length > 10;
}"""
# wait_for can be also just a css selector
# wait_for = "article.tease-card:nth-child(10)"
async with AsyncWebCrawler(verbose=True) as crawler:
js_code = [
"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"
]
result = await crawler.arun(
url="https://www.nbcnews.com/business",
js_code=js_code,
# wait_for=wait_for,
cache_mode=CacheMode.BYPASS,
)
print(result.markdown[:500]) # Print first 500 characters
async def simple_example_with_css_selector():
print("\n--- Using CSS Selectors ---")
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business",
css_selector=".wide-tease-item__description",
cache_mode=CacheMode.BYPASS,
)
print(result.markdown[:500]) # Print first 500 characters
async def use_proxy():
print("\n--- Using a Proxy ---")
print(
"Note: Replace 'http://your-proxy-url:port' with a working proxy to run this example."
)
# Replace the placeholder proxy URL below with a working proxy before running
async with AsyncWebCrawler(
verbose=True, proxy="http://your-proxy-url:port"
) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", cache_mode=CacheMode.BYPASS
)
if result.success:
print(result.markdown[:500]) # Print first 500 characters
async def capture_and_save_screenshot(url: str, output_path: str):
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url=url, screenshot=True, cache_mode=CacheMode.BYPASS
)
if result.success and result.screenshot:
import base64
# Decode the base64 screenshot data
screenshot_data = base64.b64decode(result.screenshot)
# Save the screenshot as a JPEG file
with open(output_path, "wb") as f:
f.write(screenshot_data)
print(f"Screenshot saved successfully to {output_path}")
else:
print("Failed to capture screenshot")
class OpenAIModelFee(BaseModel):
model_name: str = Field(..., description="Name of the OpenAI model.")
input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
output_fee: str = Field(
..., description="Fee for output token for the OpenAI model."
)
async def extract_structured_data_using_llm(
provider: str, api_token: str = None, extra_headers: Dict[str, str] = None
):
print(f"\n--- Extracting Structured Data with {provider} ---")
if api_token is None and provider != "ollama":
print(f"API token is required for {provider}. Skipping this example.")
return
# extra_args = {}
extra_args = {
"temperature": 0,
"top_p": 0.9,
"max_tokens": 2000,
# any other supported parameters for litellm
}
if extra_headers:
extra_args["extra_headers"] = extra_headers
async with AsyncWebCrawler(verbose=True) as crawler:
result = await crawler.arun(
url="https://openai.com/api/pricing/",
word_count_threshold=1,
extraction_strategy=LLMExtractionStrategy(
llm_config=LLMConfig(provider=provider,api_token=api_token),
schema=OpenAIModelFee.model_json_schema(),
extraction_type="schema",
instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
Do not miss any models in the entire content. One extracted model JSON format should look like this:
{"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}.""",
extra_args=extra_args,
),
cache_mode=CacheMode.BYPASS,
)
print(result.extracted_content)
async def extract_structured_data_using_css_extractor():
print("\n--- Using JsonCssExtractionStrategy for Fast Structured Output ---")
schema = {
"name": "KidoCode Courses",
"baseSelector": "section.charge-methodology .w-tab-content > div",
"fields": [
{
"name": "section_title",
"selector": "h3.heading-50",
"type": "text",
},
{
"name": "section_description",
"selector": ".charge-content",
"type": "text",
},
{
"name": "course_name",
"selector": ".text-block-93",
"type": "text",
},
{
"name": "course_description",
"selector": ".course-content-text",
"type": "text",
},
{
"name": "course_icon",
"selector": ".image-92",
"type": "attribute",
"attribute": "src",
},
],
}
async with AsyncWebCrawler(headless=True, verbose=True) as crawler:
# Create the JavaScript that handles clicking multiple times
js_click_tabs = """
(async () => {
const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div");
for(let tab of tabs) {
// scroll to the tab
tab.scrollIntoView();
tab.click();
// Wait for content to load and animations to complete
await new Promise(r => setTimeout(r, 500));
}
})();
"""
result = await crawler.arun(
url="https://www.kidocode.com/degrees/technology",
extraction_strategy=JsonCssExtractionStrategy(schema, verbose=True),
js_code=[js_click_tabs],
cache_mode=CacheMode.BYPASS,
)
companies = json.loads(result.extracted_content)
print(f"Successfully extracted {len(companies)} companies")
print(json.dumps(companies[0], indent=2))
# Advanced Session-Based Crawling with Dynamic Content 🔄
async def crawl_dynamic_content_pages_method_1():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
first_commit = ""
async def on_execution_started(page):
nonlocal first_commit
try:
while True:
await page.wait_for_selector("li.Box-sc-g0xbh4-0 h4")
commit = await page.query_selector("li.Box-sc-g0xbh4-0 h4")
commit = await commit.evaluate("(element) => element.textContent")
commit = re.sub(r"\s+", "", commit)
if commit and commit != first_commit:
first_commit = commit
break
await asyncio.sleep(0.5)
except Exception as e:
print(f"Warning: New content didn't appear after JavaScript execution: {e}")
async with AsyncWebCrawler(verbose=True) as crawler:
crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
js_next_page = """
(() => {
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
})();
"""
for page in range(3): # Crawl 3 pages
result = await crawler.arun(
url=url,
session_id=session_id,
css_selector="li.Box-sc-g0xbh4-0",
js_code=js_next_page if page > 0 else None,
cache_mode=CacheMode.BYPASS,
js_only=page > 0,
headless=False,
)
assert result.success, f"Failed to crawl page {page + 1}"
soup = BeautifulSoup(result.cleaned_html, "html.parser")
commits = soup.select("li")
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
await crawler.crawler_strategy.kill_session(session_id)
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def crawl_dynamic_content_pages_method_2():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
async with AsyncWebCrawler(verbose=True) as crawler:
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
last_commit = ""
js_next_page_and_wait = """
(async () => {
const getCurrentCommit = () => {
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
return commits.length > 0 ? commits[0].textContent.trim() : null;
};
const initialCommit = getCurrentCommit();
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
// Poll for changes
while (true) {
await new Promise(resolve => setTimeout(resolve, 100)); // Wait 100ms
const newCommit = getCurrentCommit();
if (newCommit && newCommit !== initialCommit) {
break;
}
}
})();
"""
schema = {
"name": "Commit Extractor",
"baseSelector": "li.Box-sc-g0xbh4-0",
"fields": [
{
"name": "title",
"selector": "h4.markdown-title",
"type": "text",
"transform": "strip",
},
],
}
extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)
for page in range(3): # Crawl 3 pages
result = await crawler.arun(
url=url,
session_id=session_id,
css_selector="li.Box-sc-g0xbh4-0",
extraction_strategy=extraction_strategy,
js_code=js_next_page_and_wait if page > 0 else None,
js_only=page > 0,
cache_mode=CacheMode.BYPASS,
headless=False,
)
assert result.success, f"Failed to crawl page {page + 1}"
commits = json.loads(result.extracted_content)
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
await crawler.crawler_strategy.kill_session(session_id)
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def crawl_dynamic_content_pages_method_3():
print(
"\n--- Advanced Multi-Page Crawling with JavaScript Execution using `wait_for` ---"
)
async with AsyncWebCrawler(verbose=True) as crawler:
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
js_next_page = """
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
if (commits.length > 0) {
window.firstCommit = commits[0].textContent.trim();
}
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
"""
wait_for = """() => {
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
if (commits.length === 0) return false;
const firstCommit = commits[0].textContent.trim();
return firstCommit !== window.firstCommit;
}"""
schema = {
"name": "Commit Extractor",
"baseSelector": "li.Box-sc-g0xbh4-0",
"fields": [
{
"name": "title",
"selector": "h4.markdown-title",
"type": "text",
"transform": "strip",
},
],
}
extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)
for page in range(3): # Crawl 3 pages
result = await crawler.arun(
url=url,
session_id=session_id,
css_selector="li.Box-sc-g0xbh4-0",
extraction_strategy=extraction_strategy,
js_code=js_next_page if page > 0 else None,
wait_for=wait_for if page > 0 else None,
js_only=page > 0,
cache_mode=CacheMode.BYPASS,
headless=False,
)
assert result.success, f"Failed to crawl page {page + 1}"
commits = json.loads(result.extracted_content)
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
await crawler.crawler_strategy.kill_session(session_id)
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def crawl_custom_browser_type():
# Use Firefox
start = time.time()
async with AsyncWebCrawler(
browser_type="firefox", verbose=True, headless=True
) as crawler:
result = await crawler.arun(
url="https://www.example.com", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500])
print("Time taken: ", time.time() - start)
# Use WebKit
start = time.time()
async with AsyncWebCrawler(
browser_type="webkit", verbose=True, headless=True
) as crawler:
result = await crawler.arun(
url="https://www.example.com", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500])
print("Time taken: ", time.time() - start)
# Use Chromium (default)
start = time.time()
async with AsyncWebCrawler(verbose=True, headless=True) as crawler:
result = await crawler.arun(
url="https://www.example.com", cache_mode=CacheMode.BYPASS
)
print(result.markdown[:500])
print("Time taken: ", time.time() - start)
async def crawl_with_user_simulation():
async with AsyncWebCrawler(verbose=True, headless=True) as crawler:
url = "YOUR-URL-HERE"
result = await crawler.arun(
url=url,
cache_mode=CacheMode.BYPASS,
magic=True, # Automatically detects and removes overlays, popups, and other elements that block content
# simulate_user = True,# Causes a series of random mouse movements and clicks to simulate user interaction
# override_navigator = True # Overrides the navigator object to make it look like a real user
)
print(result.markdown)
async def speed_comparison():
# print("\n--- Speed Comparison ---")
# print("Firecrawl (simulated):")
# print("Time taken: 7.02 seconds")
# print("Content length: 42074 characters")
# print("Images found: 49")
# print()
# Actual Firecrawl run (requires FIRECRAWL_API_KEY)
from firecrawl import FirecrawlApp
app = FirecrawlApp(api_key=os.environ["FIRECRAWL_API_KEY"])
start = time.time()
scrape_status = app.scrape_url(
"https://www.nbcnews.com/business", params={"formats": ["markdown", "html"]}
)
end = time.time()
print("Firecrawl:")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(scrape_status['markdown'])} characters")
print(f"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}")
print()
async with AsyncWebCrawler() as crawler:
# Crawl4AI simple crawl
start = time.time()
result = await crawler.arun(
url="https://www.nbcnews.com/business",
word_count_threshold=0,
cache_mode=CacheMode.BYPASS,
verbose=False,
)
end = time.time()
print("Crawl4AI (simple crawl):")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(result.markdown)} characters")
print(f"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}")
print()
# Crawl4AI with advanced content filtering
start = time.time()
result = await crawler.arun(
url="https://www.nbcnews.com/business",
word_count_threshold=0,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
)
# content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
),
cache_mode=CacheMode.BYPASS,
verbose=False,
)
end = time.time()
print("Crawl4AI (Markdown Plus):")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(result.markdown.raw_markdown)} characters")
print(f"Fit Markdown: {len(result.markdown.fit_markdown)} characters")
print(f"Images found: {result.markdown.raw_markdown.count('cldnry.s-nbcnews.com')}")
print()
# Crawl4AI with JavaScript execution
start = time.time()
result = await crawler.arun(
url="https://www.nbcnews.com/business",
js_code=[
"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"
],
word_count_threshold=0,
cache_mode=CacheMode.BYPASS,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
)
# content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
),
verbose=False,
)
end = time.time()
print("Crawl4AI (with JavaScript execution):")
print(f"Time taken: {end - start:.2f} seconds")
print(f"Content length: {len(result.markdown.raw_markdown)} characters")
print(f"Fit Markdown: {len(result.markdown.fit_markdown)} characters")
print(f"Images found: {result.markdown.raw_markdown.count('cldnry.s-nbcnews.com')}")
print("\nNote on Speed Comparison:")
print("The speed test conducted here may not reflect optimal conditions.")
print("When we call Firecrawl's API, we're seeing its best performance,")
print("while Crawl4AI's performance is limited by the local network speed.")
print("For a more accurate comparison, it's recommended to run these tests")
print("on servers with a stable and fast internet connection.")
print("Despite these limitations, Crawl4AI still demonstrates faster performance.")
print("If you run these tests in an environment with better network conditions,")
print("you may observe an even more significant speed advantage for Crawl4AI.")
async def generate_knowledge_graph():
class Entity(BaseModel):
name: str
description: str
class Relationship(BaseModel):
entity1: Entity
entity2: Entity
description: str
relation_type: str
class KnowledgeGraph(BaseModel):
entities: List[Entity]
relationships: List[Relationship]
extraction_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY")), # In case of Ollama just pass "no-token"
schema=KnowledgeGraph.model_json_schema(),
extraction_type="schema",
instruction="""Extract entities and relationships from the given text.""",
)
async with AsyncWebCrawler() as crawler:
url = "https://paulgraham.com/love.html"
result = await crawler.arun(
url=url,
cache_mode=CacheMode.BYPASS,
extraction_strategy=extraction_strategy,
# magic=True
)
# print(result.extracted_content)
with open(os.path.join(__location__, "kb.json"), "w") as f:
f.write(result.extracted_content)
async def fit_markdown_remove_overlay():
async with AsyncWebCrawler(
headless=True, # Set to False to see what is happening
verbose=True,
user_agent_mode="random",
user_agent_generator_config={"device_type": "mobile", "os_type": "android"},
) as crawler:
result = await crawler.arun(
url="https://www.kidocode.com/degrees/technology",
cache_mode=CacheMode.BYPASS,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
),
options={"ignore_links": True},
),
# markdown_generator=DefaultMarkdownGenerator(
# content_filter=BM25ContentFilter(user_query="", bm25_threshold=1.0),
# options={
# "ignore_links": True
# }
# ),
)
if result.success:
print(len(result.markdown.raw_markdown))
print(len(result.markdown.markdown_with_citations))
print(len(result.markdown.fit_markdown))
# Save clean html
with open(os.path.join(__location__, "output/cleaned_html.html"), "w") as f:
f.write(result.cleaned_html)
with open(
os.path.join(__location__, "output/output_raw_markdown.md"), "w"
) as f:
f.write(result.markdown.raw_markdown)
with open(
os.path.join(__location__, "output/output_markdown_with_citations.md"),
"w",
) as f:
f.write(result.markdown.markdown_with_citations)
with open(
os.path.join(__location__, "output/output_fit_markdown.md"), "w"
) as f:
f.write(result.markdown.fit_markdown)
print("Done")
async def main():
# await extract_structured_data_using_llm("openai/gpt-4o", os.getenv("OPENAI_API_KEY"))
# await simple_crawl()
# await simple_example_with_running_js_code()
# await simple_example_with_css_selector()
# # await use_proxy()
# await capture_and_save_screenshot("https://www.example.com", os.path.join(__location__, "tmp/example_screenshot.jpg"))
# await extract_structured_data_using_css_extractor()
# LLM extraction examples
# await extract_structured_data_using_llm()
# await extract_structured_data_using_llm("huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct", os.getenv("HUGGINGFACE_API_KEY"))
# await extract_structured_data_using_llm("ollama/llama3.2")
# You always can pass custom headers to the extraction strategy
# custom_headers = {
# "Authorization": "Bearer your-custom-token",
# "X-Custom-Header": "Some-Value"
# }
# await extract_structured_data_using_llm(extra_headers=custom_headers)
# await crawl_dynamic_content_pages_method_1()
# await crawl_dynamic_content_pages_method_2()
await crawl_dynamic_content_pages_method_3()
# await crawl_custom_browser_type()
# await speed_comparison()
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,412 @@
import asyncio
import os
import json
import base64
from pathlib import Path
from typing import List
from crawl4ai.proxy_strategy import ProxyConfig
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode, CrawlResult
from crawl4ai import RoundRobinProxyStrategy
from crawl4ai import JsonCssExtractionStrategy, LLMExtractionStrategy
from crawl4ai import LLMConfig
from crawl4ai import PruningContentFilter, BM25ContentFilter
from crawl4ai import DefaultMarkdownGenerator
from crawl4ai import BFSDeepCrawlStrategy, DomainFilter, FilterChain
from crawl4ai import BrowserConfig
__cur_dir__ = Path(__file__).parent
async def demo_basic_crawl():
"""Basic web crawling with markdown generation"""
print("\n=== 1. Basic Web Crawling ===")
async with AsyncWebCrawler(config = BrowserConfig(
viewport_height=800,
viewport_width=1200,
headless=True,
verbose=True,
)) as crawler:
results: List[CrawlResult] = await crawler.arun(
url="https://news.ycombinator.com/"
)
for i, result in enumerate(results):
print(f"Result {i + 1}:")
print(f"Success: {result.success}")
if result.success:
print(f"Markdown length: {len(result.markdown.raw_markdown)} chars")
print(f"First 100 chars: {result.markdown.raw_markdown[:100]}...")
else:
print("Failed to crawl the URL")
async def demo_parallel_crawl():
"""Crawl multiple URLs in parallel"""
print("\n=== 2. Parallel Crawling ===")
urls = [
"https://news.ycombinator.com/",
"https://example.com/",
"https://httpbin.org/html",
]
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun_many(
urls=urls,
)
print(f"Crawled {len(results)} URLs in parallel:")
for i, result in enumerate(results):
print(
f" {i + 1}. {result.url} - {'Success' if result.success else 'Failed'}"
)
async def demo_fit_markdown():
"""Generate focused markdown with LLM content filter"""
print("\n=== 3. Fit Markdown with LLM Content Filter ===")
async with AsyncWebCrawler() as crawler:
result: CrawlResult = await crawler.arun(
url = "https://en.wikipedia.org/wiki/Python_(programming_language)",
config=CrawlerRunConfig(
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter()
)
),
)
# Print stats and save the fit markdown
print(f"Raw: {len(result.markdown.raw_markdown)} chars")
print(f"Fit: {len(result.markdown.fit_markdown)} chars")
async def demo_llm_structured_extraction_no_schema():
"""Extract structured data with an LLM, without a pre-generated CSS schema"""
print("\n=== 4. LLM Structured Extraction (no schema file) ===")
# Create a simple LLM extraction strategy (no schema required)
extraction_strategy = LLMExtractionStrategy(
llm_config=LLMConfig(
provider="groq/qwen-2.5-32b",
api_token="env:GROQ_API_KEY",
),
instruction="This is news.ycombinator.com, extract all news, and for each, I want title, source url, number of comments.",
extraction_type="schema",
schema="{title: string, url: string, comments: int}",
extra_args={
"temperature": 0.0,
"max_tokens": 4096,
},
verbose=True,
)
config = CrawlerRunConfig(extraction_strategy=extraction_strategy)
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
"https://news.ycombinator.com/", config=config
)
for result in results:
print(f"URL: {result.url}")
print(f"Success: {result.success}")
if result.success:
data = json.loads(result.extracted_content)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
async def demo_css_structured_extraction_no_schema():
"""Extract structured data using CSS selectors"""
print("\n=== 5. CSS-Based Structured Extraction ===")
# Sample HTML for schema generation (one-time cost)
sample_html = """
<div class="body-post clear">
<a class="story-link" href="https://thehackernews.com/2025/04/malicious-python-packages-on-pypi.html">
<div class="clear home-post-box cf">
<div class="home-img clear">
<div class="img-ratio">
<img alt="..." src="...">
</div>
</div>
<div class="clear home-right">
<h2 class="home-title">Malicious Python Packages on PyPI Downloaded 39,000+ Times, Steal Sensitive Data</h2>
<div class="item-label">
<span class="h-datetime"><i class="icon-font icon-calendar"></i>Apr 05, 2025</span>
<span class="h-tags">Malware / Supply Chain Attack</span>
</div>
<div class="home-desc"> Cybersecurity researchers have...</div>
</div>
</div>
</a>
</div>
"""
# Check if schema file exists
schema_file_path = f"{__cur_dir__}/tmp/schema.json"
if os.path.exists(schema_file_path):
with open(schema_file_path, "r") as f:
schema = json.load(f)
else:
# Generate schema using LLM (one-time setup)
schema = JsonCssExtractionStrategy.generate_schema(
html=sample_html,
llm_config=LLMConfig(
provider="groq/qwen-2.5-32b",
api_token="env:GROQ_API_KEY",
),
query="From https://thehackernews.com/, I have shared a sample of one news div with a title, date, and description. Please generate a schema for this news div.",
)
print(f"Generated schema: {json.dumps(schema, indent=2)}")
# Save the schema to a file and reuse it for future extractions, so the LLM is called only once for schema generation
with open(f"{__cur_dir__}/tmp/schema.json", "w") as f:
json.dump(schema, f, indent=2)
# Create no-LLM extraction strategy with the generated schema
extraction_strategy = JsonCssExtractionStrategy(schema)
config = CrawlerRunConfig(extraction_strategy=extraction_strategy)
# Use the fast CSS extraction (no LLM calls during extraction)
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
"https://thehackernews.com", config=config
)
for result in results:
print(f"URL: {result.url}")
print(f"Success: {result.success}")
if result.success:
data = json.loads(result.extracted_content)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
async def demo_deep_crawl():
"""Deep crawling with BFS strategy"""
print("\n=== 6. Deep Crawling ===")
filter_chain = FilterChain([DomainFilter(allowed_domains=["crawl4ai.com"])])
deep_crawl_strategy = BFSDeepCrawlStrategy(
max_depth=1, max_pages=5, filter_chain=filter_chain
)
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
url="https://docs.crawl4ai.com",
config=CrawlerRunConfig(deep_crawl_strategy=deep_crawl_strategy),
)
print(f"Deep crawl returned {len(results)} pages:")
for i, result in enumerate(results):
depth = result.metadata.get("depth", "unknown")
print(f" {i + 1}. {result.url} (Depth: {depth})")
async def demo_js_interaction():
"""Execute JavaScript to load more content"""
print("\n=== 7. JavaScript Interaction ===")
# A simple page that needs JS to reveal content
async with AsyncWebCrawler(config=BrowserConfig(headless=False)) as crawler:
# Initial load
news_schema = {
"name": "news",
"baseSelector": "tr.athing",
"fields": [
{
"name": "title",
"selector": "span.titleline",
"type": "text",
}
],
}
results: List[CrawlResult] = await crawler.arun(
url="https://news.ycombinator.com",
config=CrawlerRunConfig(
session_id="hn_session", # Keep session
extraction_strategy=JsonCssExtractionStrategy(schema=news_schema),
),
)
news = []
for result in results:
if result.success:
data = json.loads(result.extracted_content)
news.extend(data)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
print(f"Initial items: {len(news)}")
# Click "More" link
more_config = CrawlerRunConfig(
js_code="document.querySelector('a.morelink').click();",
js_only=True, # Continue in same page
session_id="hn_session", # Keep session
extraction_strategy=JsonCssExtractionStrategy(
schema=news_schema,
),
)
more_results: List[CrawlResult] = await crawler.arun(
url="https://news.ycombinator.com", config=more_config
)
# Extract the newly loaded items
for result in more_results:
if result.success:
data = json.loads(result.extracted_content)
news.extend(data)
print(json.dumps(data, indent=2))
else:
print("Failed to extract structured data")
print(f"Total items: {len(news)}")
async def demo_media_and_links():
"""Extract media and links from a page"""
print("\n=== 8. Media and Links Extraction ===")
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun("https://en.wikipedia.org/wiki/Main_Page")
for i, result in enumerate(results):
# Extract and save all images
images = result.media.get("images", [])
print(f"Found {len(images)} images")
# Extract and save all links (internal and external)
internal_links = result.links.get("internal", [])
external_links = result.links.get("external", [])
print(f"Found {len(internal_links)} internal links")
print(f"Found {len(external_links)} external links")
# Print some of the images and links
for image in images[:3]:
print(f"Image: {image['src']}")
for link in internal_links[:3]:
print(f"Internal link: {link['href']}")
for link in external_links[:3]:
print(f"External link: {link['href']}")
# Save everything to files
with open(f"{__cur_dir__}/tmp/images.json", "w") as f:
json.dump(images, f, indent=2)
with open(f"{__cur_dir__}/tmp/links.json", "w") as f:
json.dump(
{"internal": internal_links, "external": external_links},
f,
indent=2,
)
async def demo_screenshot_and_pdf():
"""Capture screenshot and PDF of a page"""
print("\n=== 9. Screenshot and PDF Capture ===")
async with AsyncWebCrawler() as crawler:
results: List[CrawlResult] = await crawler.arun(
url="https://en.wikipedia.org/wiki/Giant_anteater",
config=CrawlerRunConfig(screenshot=True, pdf=True),
)
for i, result in enumerate(results):
if result.screenshot:
# Save screenshot
screenshot_path = f"{__cur_dir__}/tmp/example_screenshot.png"
with open(screenshot_path, "wb") as f:
f.write(base64.b64decode(result.screenshot))
print(f"Screenshot saved to {screenshot_path}")
if result.pdf:
# Save PDF
pdf_path = f"{__cur_dir__}/tmp/example.pdf"
with open(pdf_path, "wb") as f:
f.write(result.pdf)
print(f"PDF saved to {pdf_path}")
async def demo_proxy_rotation():
"""Proxy rotation for multiple requests"""
print("\n=== 10. Proxy Rotation ===")
# Example proxies (replace with real ones)
proxies = [
ProxyConfig(server="http://proxy1.example.com:8080"),
ProxyConfig(server="http://proxy2.example.com:8080"),
]
proxy_strategy = RoundRobinProxyStrategy(proxies)
print(f"Using {len(proxies)} proxies in rotation")
print(
"Note: This example uses placeholder proxies - replace with real ones to test"
)
async with AsyncWebCrawler() as crawler:
config = CrawlerRunConfig(
proxy_rotation_strategy=proxy_strategy
)
# In a real scenario, these would be run and the proxies would rotate
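# Hypothetical sketch (httpbin.org/ip is only an illustrative URL):
# result = await crawler.arun(url="https://httpbin.org/ip", config=config)
# print(result.markdown)  # each call would go out through the next proxy in the rotation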
print("In a real scenario, requests would rotate through the available proxies")
async def demo_raw_html_and_file():
"""Process raw HTML and local files"""
print("\n=== 11. Raw HTML and Local Files ===")
raw_html = """
<html><body>
<h1>Sample Article</h1>
<p>This is sample content for testing Crawl4AI's raw HTML processing.</p>
</body></html>
"""
# Save to file
file_path = Path("docs/examples/tmp/sample.html").absolute()
with open(file_path, "w") as f:
f.write(raw_html)
async with AsyncWebCrawler() as crawler:
# Crawl raw HTML
raw_result = await crawler.arun(
url="raw:" + raw_html, config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
)
print("Raw HTML processing:")
print(f" Markdown: {raw_result.markdown.raw_markdown[:50]}...")
# Crawl local file
file_result = await crawler.arun(
url=f"file://{file_path}",
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
)
print("\nLocal file processing:")
print(f" Markdown: {file_result.markdown.raw_markdown[:50]}...")
# Clean up
os.remove(file_path)
print(f"Processed both raw HTML and local file ({file_path})")
async def main():
"""Run all demo functions sequentially"""
print("=== Comprehensive Crawl4AI Demo ===")
print("Note: Some examples require API keys or other configurations")
# Run all demos
await demo_basic_crawl()
await demo_parallel_crawl()
await demo_fit_markdown()
await demo_llm_structured_extraction_no_schema()
await demo_css_structured_extraction_no_schema()
await demo_deep_crawl()
await demo_js_interaction()
await demo_media_and_links()
await demo_screenshot_and_pdf()
# await demo_proxy_rotation()  # skipped: requires real proxy servers
await demo_raw_html_and_file()
# Clean up any temp files that may have been created
print("\n=== Demo Complete ===")
print("Check for any generated files (screenshots, PDFs) in the current directory")
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -0,0 +1,562 @@
import os, sys
from crawl4ai.types import LLMConfig
sys.path.append(
os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
)
import asyncio
import time
import json
import re
from typing import Dict
from bs4 import BeautifulSoup
from pydantic import BaseModel, Field
from crawl4ai import AsyncWebCrawler, CacheMode, BrowserConfig, CrawlerRunConfig
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
from crawl4ai.content_filter_strategy import PruningContentFilter
from crawl4ai.extraction_strategy import (
JsonCssExtractionStrategy,
LLMExtractionStrategy,
)
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
print("Crawl4AI: Advanced Web Crawling and Data Extraction")
print("GitHub Repository: https://github.com/unclecode/crawl4ai")
print("Twitter: @unclecode")
print("Website: https://crawl4ai.com")
# Basic Example - Simple Crawl
async def simple_crawl():
print("\n--- Basic Usage ---")
browser_config = BrowserConfig(headless=True)
crawler_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
print(result.markdown[:500])
async def clean_content():
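# Strip nav/footer/aside and overlays, then compare full vs. pruned ("fit") markdown lengths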
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
excluded_tags=["nav", "footer", "aside"],
remove_overlay_elements=True,
markdown_generator=DefaultMarkdownGenerator(
content_filter=PruningContentFilter(
threshold=0.48, threshold_type="fixed", min_word_threshold=0
),
options={"ignore_links": True},
),
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://en.wikipedia.org/wiki/Apple",
config=crawler_config,
)
full_markdown_length = len(result.markdown.raw_markdown)
fit_markdown_length = len(result.markdown.fit_markdown)
print(f"Full Markdown Length: {full_markdown_length}")
print(f"Fit Markdown Length: {fit_markdown_length}")
async def link_analysis():
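# Cache-enabled crawl that keeps only internal links (external and social-media links are excluded)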
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.ENABLED,
exclude_external_links=True,
exclude_social_media_links=True,
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business",
config=crawler_config,
)
print(f"Found {len(result.links['internal'])} internal links")
print(f"Found {len(result.links['external'])} external links")
for link in result.links["internal"][:5]:
print(f"Href: {link['href']}\nText: {link['text']}\n")
# JavaScript Execution Example
async def simple_example_with_running_js_code():
print("\n--- Executing JavaScript and Using CSS Selectors ---")
browser_config = BrowserConfig(headless=True, java_script_enabled=True)
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
js_code="const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();",
# wait_for="() => { return Array.from(document.querySelectorAll('article.tease-card')).length > 10; }"
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
print(result.markdown[:500])
# CSS Selector Example
async def simple_example_with_css_selector():
print("\n--- Using CSS Selectors ---")
browser_config = BrowserConfig(headless=True)
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS, css_selector=".wide-tease-item__description"
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
print(result.markdown[:500])
async def media_handling():
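# Drop external images and take a screenshot while collecting media metadata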
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS, exclude_external_images=True, screenshot=True
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
for img in result.media["images"][:5]:
print(f"Image URL: {img['src']}, Alt: {img['alt']}, Score: {img['score']}")
async def custom_hook_workflow(verbose=True):
async with AsyncWebCrawler() as crawler:
# Set a 'before_goto' hook to run custom code just before navigation
crawler.crawler_strategy.set_hook(
"before_goto",
lambda page, context: print("[Hook] Preparing to navigate..."),
)
# Perform the crawl operation
result = await crawler.arun(url="https://crawl4ai.com")
print(result.markdown.raw_markdown[:500].replace("\n", " -- "))
# Proxy Example
async def use_proxy():
print("\n--- Using a Proxy ---")
browser_config = BrowserConfig(
headless=True,
proxy_config={
"server": "http://proxy.example.com:8080",
"username": "username",
"password": "password",
},
)
crawler_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business", config=crawler_config
)
if result.success:
print(result.markdown[:500])
# Screenshot Example
async def capture_and_save_screenshot(url: str, output_path: str):
browser_config = BrowserConfig(headless=True)
crawler_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS, screenshot=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url=url, config=crawler_config)
if result.success and result.screenshot:
import base64
screenshot_data = base64.b64decode(result.screenshot)
with open(output_path, "wb") as f:
f.write(screenshot_data)
print(f"Screenshot saved successfully to {output_path}")
else:
print("Failed to capture screenshot")
# LLM Extraction Example
class OpenAIModelFee(BaseModel):
model_name: str = Field(..., description="Name of the OpenAI model.")
input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
output_fee: str = Field(
..., description="Fee for output token for the OpenAI model."
)
async def extract_structured_data_using_llm(
provider: str, api_token: str = None, extra_headers: Dict[str, str] = None
):
print(f"\n--- Extracting Structured Data with {provider} ---")
if api_token is None and provider != "ollama":
print(f"API token is required for {provider}. Skipping this example.")
return
browser_config = BrowserConfig(headless=True)
extra_args = {"temperature": 0, "top_p": 0.9, "max_tokens": 2000}
if extra_headers:
extra_args["extra_headers"] = extra_headers
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
word_count_threshold=1,
page_timeout=80000,
extraction_strategy=LLMExtractionStrategy(
llm_config=LLMConfig(provider=provider,api_token=api_token),
schema=OpenAIModelFee.model_json_schema(),
extraction_type="schema",
instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
Do not miss any models in the entire content.""",
extra_args=extra_args,
),
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://openai.com/api/pricing/", config=crawler_config
)
print(result.extracted_content)
# CSS Extraction Example
async def extract_structured_data_using_css_extractor():
print("\n--- Using JsonCssExtractionStrategy for Fast Structured Output ---")
schema = {
"name": "KidoCode Courses",
"baseSelector": "section.charge-methodology .framework-collection-item.w-dyn-item",
"fields": [
{
"name": "section_title",
"selector": "h3.heading-50",
"type": "text",
},
{
"name": "section_description",
"selector": ".charge-content",
"type": "text",
},
{
"name": "course_name",
"selector": ".text-block-93",
"type": "text",
},
{
"name": "course_description",
"selector": ".course-content-text",
"type": "text",
},
{
"name": "course_icon",
"selector": ".image-92",
"type": "attribute",
"attribute": "src",
},
],
}
browser_config = BrowserConfig(headless=True, java_script_enabled=True)
js_click_tabs = """
(async () => {
const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div");
for(let tab of tabs) {
tab.scrollIntoView();
tab.click();
await new Promise(r => setTimeout(r, 500));
}
})();
"""
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
extraction_strategy=JsonCssExtractionStrategy(schema),
js_code=[js_click_tabs],
delay_before_return_html=1
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://www.kidocode.com/degrees/technology", config=crawler_config
)
courses = json.loads(result.extracted_content)
print(f"Successfully extracted {len(courses)} course entries")
print(json.dumps(courses[0], indent=2))
# Dynamic Content Examples - Method 1
async def crawl_dynamic_content_pages_method_1():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
first_commit = ""
async def on_execution_started(page, **kwargs):
nonlocal first_commit
try:
while True:
await page.wait_for_selector("li.Box-sc-g0xbh4-0 h4")
commit = await page.query_selector("li.Box-sc-g0xbh4-0 h4")
commit = await commit.evaluate("(element) => element.textContent")
commit = re.sub(r"\s+", "", commit)
if commit and commit != first_commit:
first_commit = commit
break
await asyncio.sleep(0.5)
except Exception as e:
print(f"Warning: New content didn't appear after JavaScript execution: {e}")
browser_config = BrowserConfig(headless=False, java_script_enabled=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
js_next_page = """
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
"""
for page in range(3):
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
css_selector="li.Box-sc-g0xbh4-0",
js_code=js_next_page if page > 0 else None,
js_only=page > 0,
session_id=session_id,
)
result = await crawler.arun(url=url, config=crawler_config)
assert result.success, f"Failed to crawl page {page + 1}"
soup = BeautifulSoup(result.cleaned_html, "html.parser")
commits = soup.select("li")
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
# Dynamic Content Examples - Method 2
async def crawl_dynamic_content_pages_method_2():
print("\n--- Advanced Multi-Page Crawling with JavaScript Execution ---")
browser_config = BrowserConfig(headless=False, java_script_enabled=True)
js_next_page_and_wait = """
(async () => {
const getCurrentCommit = () => {
const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
return commits.length > 0 ? commits[0].textContent.trim() : null;
};
const initialCommit = getCurrentCommit();
const button = document.querySelector('a[data-testid="pagination-next-button"]');
if (button) button.click();
while (true) {
await new Promise(resolve => setTimeout(resolve, 100));
const newCommit = getCurrentCommit();
if (newCommit && newCommit !== initialCommit) {
break;
}
}
})();
"""
schema = {
"name": "Commit Extractor",
"baseSelector": "li.Box-sc-g0xbh4-0",
"fields": [
{
"name": "title",
"selector": "h4.markdown-title",
"type": "text",
"transform": "strip",
},
],
}
async with AsyncWebCrawler(config=browser_config) as crawler:
url = "https://github.com/microsoft/TypeScript/commits/main"
session_id = "typescript_commits_session"
all_commits = []
extraction_strategy = JsonCssExtractionStrategy(schema)
for page in range(3):
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
css_selector="li.Box-sc-g0xbh4-0",
extraction_strategy=extraction_strategy,
js_code=js_next_page_and_wait if page > 0 else None,
js_only=page > 0,
session_id=session_id,
)
result = await crawler.arun(url=url, config=crawler_config)
assert result.success, f"Failed to crawl page {page + 1}"
commits = json.loads(result.extracted_content)
all_commits.extend(commits)
print(f"Page {page + 1}: Found {len(commits)} commits")
print(f"Successfully crawled {len(all_commits)} commits across 3 pages")
async def cosine_similarity_extraction():
from crawl4ai.extraction_strategy import CosineStrategy
crawl_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
extraction_strategy=CosineStrategy(
word_count_threshold=10,
max_dist=0.2, # Maximum distance between two words
linkage_method="ward", # Linkage method for hierarchical clustering (ward, complete, average, single)
top_k=3, # Number of top keywords to extract
sim_threshold=0.3, # Similarity threshold for clustering
semantic_filter="McDonald's economic impact, American consumer trends", # Keywords to filter the content semantically using embeddings
verbose=True,
),
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156",
config=crawl_config,
)
print(json.loads(result.extracted_content)[:5])
# Browser Comparison
async def crawl_custom_browser_type():
print("\n--- Browser Comparison ---")
# Firefox
browser_config_firefox = BrowserConfig(browser_type="firefox", headless=True)
start = time.time()
async with AsyncWebCrawler(config=browser_config_firefox) as crawler:
result = await crawler.arun(
url="https://www.example.com",
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
)
print("Firefox:", time.time() - start)
print(result.markdown[:500])
# WebKit
browser_config_webkit = BrowserConfig(browser_type="webkit", headless=True)
start = time.time()
async with AsyncWebCrawler(config=browser_config_webkit) as crawler:
result = await crawler.arun(
url="https://www.example.com",
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
)
print("WebKit:", time.time() - start)
print(result.markdown[:500])
# Chromium (default)
browser_config_chromium = BrowserConfig(browser_type="chromium", headless=True)
start = time.time()
async with AsyncWebCrawler(config=browser_config_chromium) as crawler:
result = await crawler.arun(
url="https://www.example.com",
config=CrawlerRunConfig(cache_mode=CacheMode.BYPASS),
)
print("Chromium:", time.time() - start)
print(result.markdown[:500])
# Anti-Bot and User Simulation
async def crawl_with_user_simulation():
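# Random mobile user agent plus simulated user behaviour (magic mode) to reduce bot detection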
browser_config = BrowserConfig(
headless=True,
user_agent_mode="random",
user_agent_generator_config={"device_type": "mobile", "os_type": "android"},
)
crawler_config = CrawlerRunConfig(
cache_mode=CacheMode.BYPASS,
magic=True,
simulate_user=True,
override_navigator=True,
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url="YOUR-URL-HERE", config=crawler_config)
print(result.markdown)
async def ssl_certification():
# Configure crawler to fetch SSL certificate
config = CrawlerRunConfig(
fetch_ssl_certificate=True,
cache_mode=CacheMode.BYPASS, # Bypass cache to always get fresh certificates
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url="https://example.com", config=config)
if result.success and result.ssl_certificate:
cert = result.ssl_certificate
tmp_dir = os.path.join(__location__, "tmp")
os.makedirs(tmp_dir, exist_ok=True)
# 1. Access certificate properties directly
print("\nCertificate Information:")
print(f"Issuer: {cert.issuer.get('CN', '')}")
print(f"Valid until: {cert.valid_until}")
print(f"Fingerprint: {cert.fingerprint}")
# 2. Export certificate in different formats
cert.to_json(os.path.join(tmp_dir, "certificate.json")) # For analysis
print("\nCertificate exported to:")
print(f"- JSON: {os.path.join(tmp_dir, 'certificate.json')}")
pem_data = cert.to_pem(
os.path.join(tmp_dir, "certificate.pem")
) # For web servers
print(f"- PEM: {os.path.join(tmp_dir, 'certificate.pem')}")
der_data = cert.to_der(
os.path.join(tmp_dir, "certificate.der")
) # For Java apps
print(f"- DER: {os.path.join(tmp_dir, 'certificate.der')}")
# Main execution
async def main():
# Basic examples
await simple_crawl()
await simple_example_with_running_js_code()
await simple_example_with_css_selector()
# Advanced examples
await extract_structured_data_using_css_extractor()
await extract_structured_data_using_llm(
"openai/gpt-4o", os.getenv("OPENAI_API_KEY")
)
await crawl_dynamic_content_pages_method_1()
await crawl_dynamic_content_pages_method_2()
# Browser comparisons
await crawl_custom_browser_type()
# Screenshot example
await capture_and_save_screenshot(
"https://www.example.com",
os.path.join(__location__, "tmp/example_screenshot.jpg")
)
if __name__ == "__main__":
asyncio.run(main())

View File

@@ -1,405 +0,0 @@
import os
import time
from crawl4ai import LLMConfig
from crawl4ai.web_crawler import WebCrawler
from crawl4ai.chunking_strategy import *
from crawl4ai.extraction_strategy import *
from crawl4ai.crawler_strategy import *
from rich import print
from rich.console import Console
from functools import lru_cache
console = Console()
@lru_cache()
def create_crawler():
crawler = WebCrawler(verbose=True)
crawler.warmup()
return crawler
def print_result(result):
# Print each key in one line and just the first 10 characters of each one's value and three dots
console.print("\t[bold]Result:[/bold]")
for key, value in result.model_dump().items():
if isinstance(value, str) and value:
console.print(f"\t{key}: [green]{value[:20]}...[/green]")
if result.extracted_content:
items = json.loads(result.extracted_content)
print(f"\t[bold]{len(items)} blocks is extracted![/bold]")
def cprint(message, press_any_key=False):
console.print(message)
if press_any_key:
console.print("Press any key to continue...", style="")
input()
def basic_usage(crawler):
cprint(
"🛠️ [bold cyan]Basic Usage: Simply provide a URL and let Crawl4ai do the magic![/bold cyan]"
)
result = crawler.run(url="https://www.nbcnews.com/business", only_text=True)
cprint("[LOG] 📦 [bold yellow]Basic crawl result:[/bold yellow]")
print_result(result)
def basic_usage_some_params(crawler):
cprint(
"🛠️ [bold cyan]Basic Usage: Simply provide a URL and let Crawl4ai do the magic![/bold cyan]"
)
result = crawler.run(
url="https://www.nbcnews.com/business", word_count_threshold=1, only_text=True
)
cprint("[LOG] 📦 [bold yellow]Basic crawl result:[/bold yellow]")
print_result(result)
def screenshot_usage(crawler):
cprint("\n📸 [bold cyan]Let's take a screenshot of the page![/bold cyan]")
result = crawler.run(url="https://www.nbcnews.com/business", screenshot=True)
cprint("[LOG] 📦 [bold yellow]Screenshot result:[/bold yellow]")
# Save the screenshot to a file
with open("screenshot.png", "wb") as f:
f.write(base64.b64decode(result.screenshot))
cprint("Screenshot saved to 'screenshot.png'!")
print_result(result)
def understanding_parameters(crawler):
cprint(
"\n🧠 [bold cyan]Understanding 'bypass_cache' and 'include_raw_html' parameters:[/bold cyan]"
)
cprint(
"By default, Crawl4ai caches the results of your crawls. This means that subsequent crawls of the same URL will be much faster! Let's see this in action."
)
# First crawl (reads from cache)
cprint("1⃣ First crawl (caches the result):", True)
start_time = time.time()
result = crawler.run(url="https://www.nbcnews.com/business")
end_time = time.time()
cprint(
f"[LOG] 📦 [bold yellow]First crawl took {end_time - start_time} seconds and result (from cache):[/bold yellow]"
)
print_result(result)
# Force to crawl again
cprint("2⃣ Second crawl (Force to crawl again):", True)
start_time = time.time()
result = crawler.run(url="https://www.nbcnews.com/business", bypass_cache=True)
end_time = time.time()
cprint(
f"[LOG] 📦 [bold yellow]Second crawl took {end_time - start_time} seconds and result (forced to crawl):[/bold yellow]"
)
print_result(result)
def add_chunking_strategy(crawler):
# Adding a chunking strategy: RegexChunking
cprint(
"\n🧩 [bold cyan]Let's add a chunking strategy: RegexChunking![/bold cyan]",
True,
)
cprint(
"RegexChunking is a simple chunking strategy that splits the text based on a given regex pattern. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
chunking_strategy=RegexChunking(patterns=["\n\n"]),
)
cprint("[LOG] 📦 [bold yellow]RegexChunking result:[/bold yellow]")
print_result(result)
# Adding another chunking strategy: NlpSentenceChunking
cprint(
"\n🔍 [bold cyan]Time to explore another chunking strategy: NlpSentenceChunking![/bold cyan]",
True,
)
cprint(
"NlpSentenceChunking uses NLP techniques to split the text into sentences. Let's see how it performs!"
)
result = crawler.run(
url="https://www.nbcnews.com/business", chunking_strategy=NlpSentenceChunking()
)
cprint("[LOG] 📦 [bold yellow]NlpSentenceChunking result:[/bold yellow]")
print_result(result)
def add_extraction_strategy(crawler):
# Adding an extraction strategy: CosineStrategy
cprint(
"\n🧠 [bold cyan]Let's get smarter with an extraction strategy: CosineStrategy![/bold cyan]",
True,
)
cprint(
"CosineStrategy uses cosine similarity to extract semantically similar blocks of text. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=CosineStrategy(
word_count_threshold=10,
max_dist=0.2,
linkage_method="ward",
top_k=3,
sim_threshold=0.3,
verbose=True,
),
)
cprint("[LOG] 📦 [bold yellow]CosineStrategy result:[/bold yellow]")
print_result(result)
# Using semantic_filter with CosineStrategy
cprint(
"You can pass other parameters like 'semantic_filter' to the CosineStrategy to extract semantically similar blocks of text. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=CosineStrategy(
semantic_filter="inflation rent prices",
),
)
cprint(
"[LOG] 📦 [bold yellow]CosineStrategy result with semantic filter:[/bold yellow]"
)
print_result(result)
def add_llm_extraction_strategy(crawler):
# Adding an LLM extraction strategy without instructions
cprint(
"\n🤖 [bold cyan]Time to bring in the big guns: LLMExtractionStrategy without instructions![/bold cyan]",
True,
)
cprint(
"LLMExtractionStrategy uses a large language model to extract relevant information from the web page. Let's see it in action!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=LLMExtractionStrategy(
llm_config = LLMConfig(provider="openai/gpt-4o", api_token=os.getenv("OPENAI_API_KEY"))
),
)
cprint(
"[LOG] 📦 [bold yellow]LLMExtractionStrategy (no instructions) result:[/bold yellow]"
)
print_result(result)
# Adding an LLM extraction strategy with instructions
cprint(
"\n📜 [bold cyan]Let's make it even more interesting: LLMExtractionStrategy with instructions![/bold cyan]",
True,
)
cprint(
"Let's say we are only interested in financial news. Let's see how LLMExtractionStrategy performs with instructions!"
)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o",api_token=os.getenv("OPENAI_API_KEY")),
instruction="I am interested in only financial news",
),
)
cprint(
"[LOG] 📦 [bold yellow]LLMExtractionStrategy (with instructions) result:[/bold yellow]"
)
print_result(result)
result = crawler.run(
url="https://www.nbcnews.com/business",
extraction_strategy=LLMExtractionStrategy(
llm_config=LLMConfig(provider="openai/gpt-4o",api_token=os.getenv("OPENAI_API_KEY")),
instruction="Extract only content related to technology",
),
)
cprint(
"[LOG] 📦 [bold yellow]LLMExtractionStrategy (with technology instruction) result:[/bold yellow]"
)
print_result(result)
def targeted_extraction(crawler):
# Using a CSS selector to extract only H2 tags
cprint(
"\n🎯 [bold cyan]Targeted extraction: Let's use a CSS selector to extract only H2 tags![/bold cyan]",
True,
)
result = crawler.run(url="https://www.nbcnews.com/business", css_selector="h2")
cprint("[LOG] 📦 [bold yellow]CSS Selector (H2 tags) result:[/bold yellow]")
print_result(result)
def interactive_extraction(crawler):
# Passing JavaScript code to interact with the page
cprint(
"\n🖱️ [bold cyan]Let's get interactive: Passing JavaScript code to click 'Load More' button![/bold cyan]",
True,
)
cprint(
"In this example we try to click the 'Load More' button on the page using JavaScript code."
)
js_code = """
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
loadMoreButton && loadMoreButton.click();
"""
# crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
# crawler = WebCrawler(crawler_strategy=crawler_strategy, always_by_pass_cache=True)
result = crawler.run(url="https://www.nbcnews.com/business", js=js_code)
cprint(
"[LOG] 📦 [bold yellow]JavaScript Code (Load More button) result:[/bold yellow]"
)
print_result(result)
def multiple_scrip(crawler):
# Passing JavaScript code to interact with the page
cprint(
"\n🖱️ [bold cyan]Let's get interactive: Passing JavaScript code to click 'Load More' button![/bold cyan]",
True,
)
cprint(
"In this example we try to click the 'Load More' button on the page using JavaScript code."
)
js_code = [
"""
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
loadMoreButton && loadMoreButton.click();
"""
] * 2
# crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
# crawler = WebCrawler(crawler_strategy=crawler_strategy, always_by_pass_cache=True)
result = crawler.run(url="https://www.nbcnews.com/business", js=js_code)
cprint(
"[LOG] 📦 [bold yellow]JavaScript Code (Load More button) result:[/bold yellow]"
)
print_result(result)
def using_crawler_hooks(crawler):
# Example usage of the hooks for authentication and setting a cookie
def on_driver_created(driver):
print("[HOOK] on_driver_created")
# Example customization: maximize the window
driver.maximize_window()
# Example customization: logging in to a hypothetical website
driver.get("https://example.com/login")
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.NAME, "username"))
)
driver.find_element(By.NAME, "username").send_keys("testuser")
driver.find_element(By.NAME, "password").send_keys("password123")
driver.find_element(By.NAME, "login").click()
WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.ID, "welcome"))
)
# Add a custom cookie
driver.add_cookie({"name": "test_cookie", "value": "cookie_value"})
return driver
def before_get_url(driver):
print("[HOOK] before_get_url")
# Example customization: add a custom header
# Enable Network domain for sending headers
driver.execute_cdp_cmd("Network.enable", {})
# Add a custom header
driver.execute_cdp_cmd(
"Network.setExtraHTTPHeaders", {"headers": {"X-Test-Header": "test"}}
)
return driver
def after_get_url(driver):
print("[HOOK] after_get_url")
# Example customization: log the URL
print(driver.current_url)
return driver
def before_return_html(driver, html):
print("[HOOK] before_return_html")
# Example customization: log the HTML
print(len(html))
return driver
cprint(
"\n🔗 [bold cyan]Using Crawler Hooks: Let's see how we can customize the crawler using hooks![/bold cyan]",
True,
)
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
crawler_strategy.set_hook("on_driver_created", on_driver_created)
crawler_strategy.set_hook("before_get_url", before_get_url)
crawler_strategy.set_hook("after_get_url", after_get_url)
crawler_strategy.set_hook("before_return_html", before_return_html)
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
crawler.warmup()
result = crawler.run(url="https://example.com")
cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
print_result(result=result)
def using_crawler_hooks_dleay_example(crawler):
def delay(driver):
print("Delaying for 5 seconds...")
time.sleep(5)
print("Resuming...")
def create_crawler():
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
crawler_strategy.set_hook("after_get_url", delay)
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
crawler.warmup()
return crawler
cprint(
"\n🔗 [bold cyan]Using Crawler Hooks: Let's add a delay after fetching the url to make sure entire page is fetched.[/bold cyan]"
)
crawler = create_crawler()
result = crawler.run(url="https://google.com", bypass_cache=True)
cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
print_result(result)
def main():
cprint(
"🌟 [bold green]Welcome to the Crawl4ai Quickstart Guide! Let's dive into some web crawling fun! 🌐[/bold green]"
)
cprint(
"⛳️ [bold cyan]First Step: Create an instance of WebCrawler and call the `warmup()` function.[/bold cyan]"
)
cprint(
"If this is the first time you're running Crawl4ai, this might take a few seconds to load required model files."
)
crawler = create_crawler()
crawler.always_by_pass_cache = True
basic_usage(crawler)
# basic_usage_some_params(crawler)
understanding_parameters(crawler)
crawler.always_by_pass_cache = True
screenshot_usage(crawler)
add_chunking_strategy(crawler)
add_extraction_strategy(crawler)
add_llm_extraction_strategy(crawler)
targeted_extraction(crawler)
interactive_extraction(crawler)
multiple_scrip(crawler)
cprint(
"\n🎉 [bold green]Congratulations! You've made it through the Crawl4ai Quickstart Guide! Now go forth and crawl the web like a pro! 🕸️[/bold green]"
)
if __name__ == "__main__":
main()

View File

@@ -1,735 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "6yLvrXn7yZQI"
},
"source": [
"# Crawl4AI: Advanced Web Crawling and Data Extraction\n",
"\n",
"Welcome to this interactive notebook showcasing Crawl4AI, an advanced asynchronous web crawling and data extraction library.\n",
"\n",
"- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n",
"- Twitter: [@unclecode](https://twitter.com/unclecode)\n",
"- Website: [https://crawl4ai.com](https://crawl4ai.com)\n",
"\n",
"Let's explore the powerful features of Crawl4AI!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KIn_9nxFyZQK"
},
"source": [
"## Installation\n",
"\n",
"First, let's install Crawl4AI from GitHub:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mSnaxLf3zMog"
},
"outputs": [],
"source": [
"!sudo apt-get update && sudo apt-get install -y libwoff1 libopus0 libwebp6 libwebpdemux2 libenchant1c2a libgudev-1.0-0 libsecret-1-0 libhyphen0 libgdk-pixbuf2.0-0 libegl1 libnotify4 libxslt1.1 libevent-2.1-7 libgles2 libvpx6 libxcomposite1 libatk1.0-0 libatk-bridge2.0-0 libepoxy0 libgtk-3-0 libharfbuzz-icu0"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xlXqaRtayZQK"
},
"outputs": [],
"source": [
"!pip install crawl4ai\n",
"!pip install nest-asyncio\n",
"!playwright install"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qKCE7TI7yZQL"
},
"source": [
"Now, let's import the necessary libraries:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "I67tr7aAyZQL"
},
"outputs": [],
"source": [
"import asyncio\n",
"import nest_asyncio\n",
"from crawl4ai import AsyncWebCrawler\n",
"from crawl4ai.extraction_strategy import JsonCssExtractionStrategy, LLMExtractionStrategy\n",
"import json\n",
"import time\n",
"from pydantic import BaseModel, Field\n",
"\n",
"nest_asyncio.apply()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "h7yR_Rt_yZQM"
},
"source": [
"## Basic Usage\n",
"\n",
"Let's start with a simple crawl example:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "yBh6hf4WyZQM",
"outputId": "0f83af5c-abba-4175-ed95-70b7512e6bcc"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.05 seconds\n",
"[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.05 seconds.\n",
"18102\n"
]
}
],
"source": [
"async def simple_crawl():\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" result = await crawler.arun(url=\"https://www.nbcnews.com/business\")\n",
" print(len(result.markdown))\n",
"await simple_crawl()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9rtkgHI28uI4"
},
"source": [
"💡 By default, **Crawl4AI** caches the result of every URL, so the next time you call it, youll get an instant result. But if you want to bypass the cache, just set `bypass_cache=True`."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MzZ0zlJ9yZQM"
},
"source": [
"## Advanced Features\n",
"\n",
"### Executing JavaScript and Using CSS Selectors"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "gHStF86xyZQM",
"outputId": "34d0fb6d-4dec-4677-f76e-85a1f082829b"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n",
"[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 6.06 seconds\n",
"[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.10 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.11 seconds.\n",
"41135\n"
]
}
],
"source": [
"async def js_and_css():\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" js_code = [\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"]\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" js_code=js_code,\n",
" # css_selector=\"YOUR_CSS_SELECTOR_HERE\",\n",
" bypass_cache=True\n",
" )\n",
" print(len(result.markdown))\n",
"\n",
"await js_and_css()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cqE_W4coyZQM"
},
"source": [
"### Using a Proxy\n",
"\n",
"Note: You'll need to replace the proxy URL with a working proxy for this example to run successfully."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "QjAyiAGqyZQM"
},
"outputs": [],
"source": [
"async def use_proxy():\n",
" async with AsyncWebCrawler(verbose=True, proxy=\"http://your-proxy-url:port\") as crawler:\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" bypass_cache=True\n",
" )\n",
" print(result.markdown[:500]) # Print first 500 characters\n",
"\n",
"# Uncomment the following line to run the proxy example\n",
"# await use_proxy()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "XTZ88lbayZQN"
},
"source": [
"### Extracting Structured Data with OpenAI\n",
"\n",
"Note: You'll need to set your OpenAI API key as an environment variable for this example to work."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "fIOlDayYyZQN",
"outputId": "cb8359cc-dee0-4762-9698-5dfdcee055b8"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://openai.com/api/pricing/ using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://openai.com/api/pricing/ successfully!\n",
"[LOG] 🚀 Crawling done for https://openai.com/api/pricing/, success: True, time taken: 3.77 seconds\n",
"[LOG] 🚀 Content extracted for https://openai.com/api/pricing/, success: True, time taken: 0.21 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://openai.com/api/pricing/, Strategy: AsyncWebCrawler\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 0\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 1\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 2\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 3\n",
"[LOG] Extracted 4 blocks from URL: https://openai.com/api/pricing/ block index: 3\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 4\n",
"[LOG] Extracted 5 blocks from URL: https://openai.com/api/pricing/ block index: 0\n",
"[LOG] Extracted 1 blocks from URL: https://openai.com/api/pricing/ block index: 4\n",
"[LOG] Extracted 8 blocks from URL: https://openai.com/api/pricing/ block index: 1\n",
"[LOG] Extracted 12 blocks from URL: https://openai.com/api/pricing/ block index: 2\n",
"[LOG] 🚀 Extraction done for https://openai.com/api/pricing/, time taken: 8.55 seconds.\n",
"5029\n"
]
}
],
"source": [
"import os\n",
"from google.colab import userdata\n",
"os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')\n",
"\n",
"class OpenAIModelFee(BaseModel):\n",
" model_name: str = Field(..., description=\"Name of the OpenAI model.\")\n",
" input_fee: str = Field(..., description=\"Fee for input token for the OpenAI model.\")\n",
" output_fee: str = Field(..., description=\"Fee for output token for the OpenAI model.\")\n",
"\n",
"async def extract_openai_fees():\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" result = await crawler.arun(\n",
" url='https://openai.com/api/pricing/',\n",
" word_count_threshold=1,\n",
" extraction_strategy=LLMExtractionStrategy(\n",
" provider=\"openai/gpt-4o\", api_token=os.getenv('OPENAI_API_KEY'),\n",
" schema=OpenAIModelFee.schema(),\n",
" extraction_type=\"schema\",\n",
" instruction=\"\"\"From the crawled content, extract all mentioned model names along with their fees for input and output tokens.\n",
" Do not miss any models in the entire content. One extracted model JSON format should look like this:\n",
" {\"model_name\": \"GPT-4\", \"input_fee\": \"US$10.00 / 1M tokens\", \"output_fee\": \"US$30.00 / 1M tokens\"}.\"\"\"\n",
" ),\n",
" bypass_cache=True,\n",
" )\n",
" print(len(result.extracted_content))\n",
"\n",
"# Uncomment the following line to run the OpenAI extraction example\n",
"await extract_openai_fees()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BypA5YxEyZQN"
},
"source": [
"### Advanced Multi-Page Crawling with JavaScript Execution"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tfkcVQ0b7mw-"
},
"source": [
"## Advanced Multi-Page Crawling with JavaScript Execution\n",
"\n",
"This example demonstrates Crawl4AI's ability to handle complex crawling scenarios, specifically extracting commits from multiple pages of a GitHub repository. The challenge here is that clicking the \"Next\" button doesn't load a new page, but instead uses asynchronous JavaScript to update the content. This is a common hurdle in modern web crawling.\n",
"\n",
"To overcome this, we use Crawl4AI's custom JavaScript execution to simulate clicking the \"Next\" button, and implement a custom hook to detect when new data has loaded. Our strategy involves comparing the first commit's text before and after \"clicking\" Next, waiting until it changes to confirm new data has rendered. This showcases Crawl4AI's flexibility in handling dynamic content and its ability to implement custom logic for even the most challenging crawling tasks."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "qUBKGpn3yZQN",
"outputId": "3e555b6a-ed33-42f4-cce9-499a923fbe17"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
"[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 5.16 seconds\n",
"[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.28 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.28 seconds.\n",
"Page 1: Found 35 commits\n",
"[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
"[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.78 seconds\n",
"[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.90 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.90 seconds.\n",
"Page 2: Found 35 commits\n",
"[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
"[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 2.00 seconds\n",
"[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.74 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.75 seconds.\n",
"Page 3: Found 35 commits\n",
"Successfully crawled 105 commits across 3 pages\n"
]
}
],
"source": [
"import re\n",
"from bs4 import BeautifulSoup\n",
"\n",
"async def crawl_typescript_commits():\n",
" first_commit = \"\"\n",
" async def on_execution_started(page):\n",
" nonlocal first_commit\n",
" try:\n",
" while True:\n",
" await page.wait_for_selector('li.Box-sc-g0xbh4-0 h4')\n",
" commit = await page.query_selector('li.Box-sc-g0xbh4-0 h4')\n",
" commit = await commit.evaluate('(element) => element.textContent')\n",
" commit = re.sub(r'\\s+', '', commit)\n",
" if commit and commit != first_commit:\n",
" first_commit = commit\n",
" break\n",
" await asyncio.sleep(0.5)\n",
" except Exception as e:\n",
" print(f\"Warning: New content didn't appear after JavaScript execution: {e}\")\n",
"\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" crawler.crawler_strategy.set_hook('on_execution_started', on_execution_started)\n",
"\n",
" url = \"https://github.com/microsoft/TypeScript/commits/main\"\n",
" session_id = \"typescript_commits_session\"\n",
" all_commits = []\n",
"\n",
" js_next_page = \"\"\"\n",
" const button = document.querySelector('a[data-testid=\"pagination-next-button\"]');\n",
" if (button) button.click();\n",
" \"\"\"\n",
"\n",
" for page in range(3): # Crawl 3 pages\n",
" result = await crawler.arun(\n",
" url=url,\n",
" session_id=session_id,\n",
" css_selector=\"li.Box-sc-g0xbh4-0\",\n",
" js=js_next_page if page > 0 else None,\n",
" bypass_cache=True,\n",
" js_only=page > 0\n",
" )\n",
"\n",
" assert result.success, f\"Failed to crawl page {page + 1}\"\n",
"\n",
" soup = BeautifulSoup(result.cleaned_html, 'html.parser')\n",
" commits = soup.select(\"li\")\n",
" all_commits.extend(commits)\n",
"\n",
" print(f\"Page {page + 1}: Found {len(commits)} commits\")\n",
"\n",
" await crawler.crawler_strategy.kill_session(session_id)\n",
" print(f\"Successfully crawled {len(all_commits)} commits across 3 pages\")\n",
"\n",
"await crawl_typescript_commits()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EJRnYsp6yZQN"
},
"source": [
"### Using JsonCssExtractionStrategy for Fast Structured Output"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1ZMqIzB_8SYp"
},
"source": [
"The JsonCssExtractionStrategy is a powerful feature of Crawl4AI that allows for precise, structured data extraction from web pages. Here's how it works:\n",
"\n",
"1. You define a schema that describes the pattern of data you're interested in extracting.\n",
"2. The schema includes a base selector that identifies repeating elements on the page.\n",
"3. Within the schema, you define fields, each with its own selector and type.\n",
"4. These field selectors are applied within the context of each base selector element.\n",
"5. The strategy supports nested structures, lists within lists, and various data types.\n",
"6. You can even include computed fields for more complex data manipulation.\n",
"\n",
"This approach allows for highly flexible and precise data extraction, transforming semi-structured web content into clean, structured JSON data. It's particularly useful for extracting consistent data patterns from pages like product listings, news articles, or search results.\n",
"\n",
"For more details and advanced usage, check out the full documentation on the Crawl4AI website."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "trCMR2T9yZQN",
"outputId": "718d36f4-cccf-40f4-8d8c-c3ba73524d16"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n",
"[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 7.00 seconds\n",
"[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.32 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.48 seconds.\n",
"Successfully extracted 11 news teasers\n",
"{\n",
" \"category\": \"Business News\",\n",
" \"headline\": \"NBC ripped up its Olympics playbook for 2024 \\u2014 so far, the new strategy paid off\",\n",
" \"summary\": \"The Olympics have long been key to NBCUniversal. Paris marked the 18th Olympic Games broadcast by NBC in the U.S.\",\n",
" \"time\": \"13h ago\",\n",
" \"image\": {\n",
" \"src\": \"https://media-cldnry.s-nbcnews.com/image/upload/t_focal-200x100,f_auto,q_auto:best/rockcms/2024-09/240903-nbc-olympics-ch-1344-c7a486.jpg\",\n",
" \"alt\": \"Mike Tirico.\"\n",
" },\n",
" \"link\": \"https://www.nbcnews.com/business\"\n",
"}\n"
]
}
],
"source": [
"async def extract_news_teasers():\n",
" schema = {\n",
" \"name\": \"News Teaser Extractor\",\n",
" \"baseSelector\": \".wide-tease-item__wrapper\",\n",
" \"fields\": [\n",
" {\n",
" \"name\": \"category\",\n",
" \"selector\": \".unibrow span[data-testid='unibrow-text']\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"headline\",\n",
" \"selector\": \".wide-tease-item__headline\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"summary\",\n",
" \"selector\": \".wide-tease-item__description\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"time\",\n",
" \"selector\": \"[data-testid='wide-tease-date']\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"image\",\n",
" \"type\": \"nested\",\n",
" \"selector\": \"picture.teasePicture img\",\n",
" \"fields\": [\n",
" {\"name\": \"src\", \"type\": \"attribute\", \"attribute\": \"src\"},\n",
" {\"name\": \"alt\", \"type\": \"attribute\", \"attribute\": \"alt\"},\n",
" ],\n",
" },\n",
" {\n",
" \"name\": \"link\",\n",
" \"selector\": \"a[href]\",\n",
" \"type\": \"attribute\",\n",
" \"attribute\": \"href\",\n",
" },\n",
" ],\n",
" }\n",
"\n",
" extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)\n",
"\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" extraction_strategy=extraction_strategy,\n",
" bypass_cache=True,\n",
" )\n",
"\n",
" assert result.success, \"Failed to crawl the page\"\n",
"\n",
" news_teasers = json.loads(result.extracted_content)\n",
" print(f\"Successfully extracted {len(news_teasers)} news teasers\")\n",
" print(json.dumps(news_teasers[0], indent=2))\n",
"\n",
"await extract_news_teasers()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FnyVhJaByZQN"
},
"source": [
"## Speed Comparison\n",
"\n",
"Let's compare the speed of Crawl4AI with Firecrawl, a paid service. Note that we can't run Firecrawl in this Colab environment, so we'll simulate its performance based on previously recorded data."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "agDD186f3wig"
},
"source": [
"💡 **Note on Speed Comparison:**\n",
"\n",
"The speed test conducted here is running on Google Colab, where the internet speed and performance can vary and may not reflect optimal conditions. When we call Firecrawl's API, we're seeing its best performance, while Crawl4AI's performance is limited by Colab's network speed.\n",
"\n",
"For a more accurate comparison, it's recommended to run these tests on your own servers or computers with a stable and fast internet connection. Despite these limitations, Crawl4AI still demonstrates faster performance in this environment.\n",
"\n",
"If you run these tests locally, you may observe an even more significant speed advantage for Crawl4AI compared to other services."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "F7KwHv8G1LbY"
},
"outputs": [],
"source": [
"!pip install firecrawl"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "91813zILyZQN",
"outputId": "663223db-ab89-4976-b233-05ceca62b19b"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Firecrawl (simulated):\n",
"Time taken: 4.38 seconds\n",
"Content length: 41967 characters\n",
"Images found: 49\n",
"\n",
"Crawl4AI (simple crawl):\n",
"Time taken: 4.22 seconds\n",
"Content length: 18221 characters\n",
"Images found: 49\n",
"\n",
"Crawl4AI (with JavaScript execution):\n",
"Time taken: 9.13 seconds\n",
"Content length: 34243 characters\n",
"Images found: 89\n"
]
}
],
"source": [
"import os\n",
"from google.colab import userdata\n",
"os.environ['FIRECRAWL_API_KEY'] = userdata.get('FIRECRAWL_API_KEY')\n",
"import time\n",
"from firecrawl import FirecrawlApp\n",
"\n",
"async def speed_comparison():\n",
" # Simulated Firecrawl performance\n",
" app = FirecrawlApp(api_key=os.environ['FIRECRAWL_API_KEY'])\n",
" start = time.time()\n",
" scrape_status = app.scrape_url(\n",
" 'https://www.nbcnews.com/business',\n",
" params={'formats': ['markdown', 'html']}\n",
" )\n",
" end = time.time()\n",
" print(\"Firecrawl (simulated):\")\n",
" print(f\"Time taken: {end - start:.2f} seconds\")\n",
" print(f\"Content length: {len(scrape_status['markdown'])} characters\")\n",
" print(f\"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}\")\n",
" print()\n",
"\n",
" async with AsyncWebCrawler() as crawler:\n",
" # Crawl4AI simple crawl\n",
" start = time.time()\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" word_count_threshold=0,\n",
" bypass_cache=True,\n",
" verbose=False\n",
" )\n",
" end = time.time()\n",
" print(\"Crawl4AI (simple crawl):\")\n",
" print(f\"Time taken: {end - start:.2f} seconds\")\n",
" print(f\"Content length: {len(result.markdown)} characters\")\n",
" print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n",
" print()\n",
"\n",
" # Crawl4AI with JavaScript execution\n",
" start = time.time()\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" js_code=[\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"],\n",
" word_count_threshold=0,\n",
" bypass_cache=True,\n",
" verbose=False\n",
" )\n",
" end = time.time()\n",
" print(\"Crawl4AI (with JavaScript execution):\")\n",
" print(f\"Time taken: {end - start:.2f} seconds\")\n",
" print(f\"Content length: {len(result.markdown)} characters\")\n",
" print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n",
"\n",
"await speed_comparison()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OBFFYVJIyZQN"
},
"source": [
"If you run on a local machine with a proper internet speed:\n",
"- Simple crawl: Crawl4AI is typically over 3-4 times faster than Firecrawl.\n",
"- With JavaScript execution: Even when executing JavaScript to load more content (potentially doubling the number of images found), Crawl4AI is still faster than Firecrawl's simple crawl.\n",
"\n",
"Please note that actual performance may vary depending on network conditions and the specific content being crawled."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "A6_1RK1_yZQO"
},
"source": [
"## Conclusion\n",
"\n",
"In this notebook, we've explored the powerful features of Crawl4AI, including:\n",
"\n",
"1. Basic crawling\n",
"2. JavaScript execution and CSS selector usage\n",
"3. Proxy support\n",
"4. Structured data extraction with OpenAI\n",
"5. Advanced multi-page crawling with JavaScript execution\n",
"6. Fast structured output using JsonCssExtractionStrategy\n",
"7. Speed comparison with other services\n",
"\n",
"Crawl4AI offers a fast, flexible, and powerful solution for web crawling and data extraction tasks. Its asynchronous architecture and advanced features make it suitable for a wide range of applications, from simple web scraping to complex, multi-page data extraction scenarios.\n",
"\n",
"For more information and advanced usage, please visit the [Crawl4AI documentation](https://docs.crawl4ai.com/).\n",
"\n",
"Happy crawling!"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -0,0 +1,205 @@
# Network Requests & Console Message Capturing
Crawl4AI can capture all network requests and browser console messages during a crawl, which is invaluable for debugging, security analysis, or understanding page behavior.
## Configuration
To enable network and console capturing, use these configuration options:
```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
# Enable both network request capture and console message capture
config = CrawlerRunConfig(
capture_network_requests=True, # Capture all network requests and responses
capture_console_messages=True # Capture all browser console output
)
```
## Example Usage
```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
async def main():
# Enable both network request capture and console message capture
config = CrawlerRunConfig(
capture_network_requests=True,
capture_console_messages=True
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
config=config
)
if result.success:
# Analyze network requests
if result.network_requests:
print(f"Captured {len(result.network_requests)} network events")
# Count request types
request_count = len([r for r in result.network_requests if r.get("event_type") == "request"])
response_count = len([r for r in result.network_requests if r.get("event_type") == "response"])
failed_count = len([r for r in result.network_requests if r.get("event_type") == "request_failed"])
print(f"Requests: {request_count}, Responses: {response_count}, Failed: {failed_count}")
# Find API calls
api_calls = [r for r in result.network_requests
if r.get("event_type") == "request" and "api" in r.get("url", "")]
if api_calls:
print(f"Detected {len(api_calls)} API calls:")
for call in api_calls[:3]: # Show first 3
print(f" - {call.get('method')} {call.get('url')}")
# Analyze console messages
if result.console_messages:
print(f"Captured {len(result.console_messages)} console messages")
# Group by type
message_types = {}
for msg in result.console_messages:
msg_type = msg.get("type", "unknown")
message_types[msg_type] = message_types.get(msg_type, 0) + 1
print("Message types:", message_types)
# Show errors (often the most important)
errors = [msg for msg in result.console_messages if msg.get("type") == "error"]
if errors:
print(f"Found {len(errors)} console errors:")
for err in errors[:2]: # Show first 2
print(f" - {err.get('text', '')[:100]}")
# Export all captured data to a file for detailed analysis
with open("network_capture.json", "w") as f:
json.dump({
"url": result.url,
"network_requests": result.network_requests or [],
"console_messages": result.console_messages or []
}, f, indent=2)
print("Exported detailed capture data to network_capture.json")
if __name__ == "__main__":
asyncio.run(main())
```
## Captured Data Structure
### Network Requests
The `result.network_requests` field contains a list of dictionaries, each representing a network event with these common fields:
| Field | Description |
|-------|-------------|
| `event_type` | Type of event: `"request"`, `"response"`, or `"request_failed"` |
| `url` | The URL of the request |
| `timestamp` | Unix timestamp when the event was captured |
#### Request Event Fields
```json
{
"event_type": "request",
"url": "https://example.com/api/data.json",
"method": "GET",
"headers": {"User-Agent": "...", "Accept": "..."},
"post_data": "key=value&otherkey=value",
"resource_type": "fetch",
"is_navigation_request": false,
"timestamp": 1633456789.123
}
```
#### Response Event Fields
```json
{
"event_type": "response",
"url": "https://example.com/api/data.json",
"status": 200,
"status_text": "OK",
"headers": {"Content-Type": "application/json", "Cache-Control": "..."},
"from_service_worker": false,
"request_timing": {"requestTime": 1234.56, "receiveHeadersEnd": 1234.78},
"timestamp": 1633456789.456
}
```
#### Failed Request Event Fields
```json
{
"event_type": "request_failed",
"url": "https://example.com/missing.png",
"method": "GET",
"resource_type": "image",
"failure_text": "net::ERR_ABORTED 404",
"timestamp": 1633456789.789
}
```
### Console Messages
The `result.console_messages` field contains a list of dictionaries, each representing a console message with these common fields:
| Field | Description |
|-------|-------------|
| `type` | Message type: `"log"`, `"error"`, `"warning"`, `"info"`, etc. |
| `text` | The message text |
| `timestamp` | Unix timestamp when the message was captured |
#### Console Message Example
```json
{
"type": "error",
"text": "Uncaught TypeError: Cannot read property 'length' of undefined",
"location": "https://example.com/script.js:123:45",
"timestamp": 1633456790.123
}
```
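Because every event carries an `event_type` and a `url`, requests can be correlated with their responses or failures after the crawl. The helper below is a minimal sketch (the function name is illustrative, not part of the library) that reports the final status observed for each URL:
```python
def final_status_by_url(network_requests):
    """Map each URL to the last response status (or failure text) captured for it."""
    status = {}
    for event in network_requests or []:
        if event.get("event_type") == "response":
            status[event["url"]] = event.get("status")
        elif event.get("event_type") == "request_failed":
            status[event["url"]] = event.get("failure_text", "failed")
    return status

# Example: list resources that never returned a 2xx status
# statuses = final_status_by_url(result.network_requests)
# problems = {u: s for u, s in statuses.items() if not (isinstance(s, int) and 200 <= s < 300)}
```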
## Key Benefits
- **Full Request Visibility**: Capture all network activity including:
- Requests (URLs, methods, headers, post data)
- Responses (status codes, headers, timing)
- Failed requests (with error messages)
- **Console Message Access**: View all JavaScript console output:
- Log messages
- Warnings
- Errors with stack traces
- Developer debugging information
- **Debugging Power**: Identify issues such as:
- Failed API calls or resource loading
- JavaScript errors affecting page functionality
- CORS or other security issues
- Hidden API endpoints and data flows
- **Security Analysis**: Detect:
- Unexpected third-party requests
- Data leakage in request payloads
- Suspicious script behavior
- **Performance Insights**: Analyze:
- Request timing data (see the sketch after this list)
- Resource loading patterns
- Potential bottlenecks
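For example, the `request_timing` data attached to response events can be used to spot slow resources. A rough sketch (timing fields come from the browser's report, so treat missing values as unknown):
```python
def slowest_resources(network_requests, top_n=5):
    """Rank responses by the reported time until response headers were received."""
    timings = []
    for event in network_requests or []:
        if event.get("event_type") != "response":
            continue
        timing = event.get("request_timing") or {}
        header_time = timing.get("receiveHeadersEnd")
        if header_time is not None:
            timings.append((header_time, event.get("url")))
    return sorted(timings, reverse=True)[:top_n]
```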
## Use Cases
1. **API Discovery**: Identify hidden endpoints and data flows in single-page applications
2. **Debugging**: Track down JavaScript errors affecting page functionality
3. **Security Auditing**: Detect unwanted third-party requests or data leakage
4. **Performance Analysis**: Identify slow-loading resources
5. **Ad/Tracker Analysis**: Detect and catalog advertising or tracking calls
This capability is especially valuable for complex sites with heavy JavaScript, single-page applications, or when you need to understand the exact communication happening between a browser and servers.
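As a concrete starting point for the security and ad/tracker use cases above, the sketch below (the helper name is illustrative) lists every third-party host contacted during a crawl:
```python
from urllib.parse import urlparse

def third_party_hosts(result, page_url):
    """Collect hosts from captured requests that differ from the crawled page's host."""
    page_host = urlparse(page_url).netloc
    hosts = set()
    for event in result.network_requests or []:
        if event.get("event_type") == "request":
            host = urlparse(event.get("url", "")).netloc
            if host and host != page_host:
                hosts.add(host)
    return sorted(hosts)

# Example (assumes the crawl ran with capture_network_requests=True):
# print(third_party_hosts(result, "https://example.com"))
```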

View File

@@ -15,6 +15,7 @@ class CrawlResult(BaseModel):
    downloaded_files: Optional[List[str]] = None
    screenshot: Optional[str] = None
    pdf : Optional[bytes] = None
    mhtml: Optional[str] = None
    markdown: Optional[Union[str, MarkdownGenerationResult]] = None
    extracted_content: Optional[str] = None
    metadata: Optional[dict] = None
@@ -236,7 +237,16 @@ if result.pdf:
    f.write(result.pdf)
```
### 5.5 **`mhtml`** *(Optional[str])*
**What**: MHTML snapshot of the page if `capture_mhtml=True` in `CrawlerRunConfig`. MHTML (MIME HTML) format preserves the entire web page with all its resources (CSS, images, scripts, etc.) in a single file.
**Usage**:
```python
if result.mhtml:
    with open("page.mhtml", "w", encoding="utf-8") as f:
        f.write(result.mhtml)
```
### 5.6 **`metadata`** *(Optional[dict])*
**What**: Page-level metadata if discovered (title, description, OG data, etc.).
**Usage**:
```python
@@ -271,7 +281,69 @@ for result in results:
---
## 7. Network Requests & Console Messages
When you enable network and console message capturing in `CrawlerRunConfig` using `capture_network_requests=True` and `capture_console_messages=True`, the `CrawlResult` will include these fields:
### 7.1 **`network_requests`** *(Optional[List[Dict[str, Any]]])*
**What**: A list of dictionaries containing information about all network requests, responses, and failures captured during the crawl.
**Structure**:
- Each item has an `event_type` field that can be `"request"`, `"response"`, or `"request_failed"`.
- Request events include `url`, `method`, `headers`, `post_data`, `resource_type`, and `is_navigation_request`.
- Response events include `url`, `status`, `status_text`, `headers`, and `request_timing`.
- Failed request events include `url`, `method`, `resource_type`, and `failure_text`.
- All events include a `timestamp` field.
**Usage**:
```python
if result.network_requests:
# Count different types of events
requests = [r for r in result.network_requests if r.get("event_type") == "request"]
responses = [r for r in result.network_requests if r.get("event_type") == "response"]
failures = [r for r in result.network_requests if r.get("event_type") == "request_failed"]
print(f"Captured {len(requests)} requests, {len(responses)} responses, and {len(failures)} failures")
# Analyze API calls
api_calls = [r for r in requests if "api" in r.get("url", "")]
# Identify failed resources
for failure in failures:
print(f"Failed to load: {failure.get('url')} - {failure.get('failure_text')}")
```
### 7.2 **`console_messages`** *(Optional[List[Dict[str, Any]]])*
**What**: A list of dictionaries containing all browser console messages captured during the crawl.
**Structure**:
- Each item has a `type` field indicating the message type (e.g., `"log"`, `"error"`, `"warning"`, etc.).
- The `text` field contains the actual message text.
- Some messages include `location` information (URL, line, column).
- All messages include a `timestamp` field.
**Usage**:
```python
if result.console_messages:
# Count messages by type
message_types = {}
for msg in result.console_messages:
msg_type = msg.get("type", "unknown")
message_types[msg_type] = message_types.get(msg_type, 0) + 1
print(f"Message type counts: {message_types}")
# Display errors (which are usually most important)
for msg in result.console_messages:
if msg.get("type") == "error":
print(f"Error: {msg.get('text')}")
```
These fields provide deep visibility into the page's network activity and browser console, which is invaluable for debugging, security analysis, and understanding complex web applications.
For more details on network and console capturing, see the [Network & Console Capture documentation](../advanced/network-console-capture.md).
---
## 8. Example: Accessing Everything
```python
async def handle_result(result: CrawlResult):
@@ -304,16 +376,36 @@ async def handle_result(result: CrawlResult):
if result.extracted_content:
print("Structured data:", result.extracted_content)
# Screenshot/PDF/MHTML
if result.screenshot:
print("Screenshot length:", len(result.screenshot))
if result.pdf:
print("PDF bytes length:", len(result.pdf))
if result.mhtml:
print("MHTML length:", len(result.mhtml))
# Network and console capturing
if result.network_requests:
print(f"Network requests captured: {len(result.network_requests)}")
# Analyze request types
req_types = {}
for req in result.network_requests:
if "resource_type" in req:
req_types[req["resource_type"]] = req_types.get(req["resource_type"], 0) + 1
print(f"Resource types: {req_types}")
if result.console_messages:
print(f"Console messages captured: {len(result.console_messages)}")
# Count by message type
msg_types = {}
for msg in result.console_messages:
msg_types[msg.get("type", "unknown")] = msg_types.get(msg.get("type", "unknown"), 0) + 1
print(f"Message types: {msg_types}")
```
---
## 9. Key Points & Future
1. **Deprecated legacy properties of CrawlResult**
- `markdown_v2` - Deprecated in v0.5. Just use `markdown`. It holds the `MarkdownGenerationResult` now!

View File

@@ -141,6 +141,7 @@ If your page is a single-page app with repeated JS updates, set `js_only=True` i
| **`screenshot_wait_for`** | `float or None` | Extra wait time before the screenshot. |
| **`screenshot_height_threshold`** | `int` (~20000) | If the page is taller than this, alternate screenshot strategies are used. |
| **`pdf`** | `bool` (False) | If `True`, returns a PDF in `result.pdf`. |
| **`capture_mhtml`** | `bool` (False) | If `True`, captures an MHTML snapshot of the page in `result.mhtml`. MHTML includes all page resources (CSS, images, etc.) in a single file. |
| **`image_description_min_word_threshold`** | `int` (~50) | Minimum words for an image's alt text or description to be considered valid. |
| **`image_score_threshold`** | `int` (~3) | Filter out low-scoring images. The crawler scores images by relevance (size, context, etc.). |
| **`exclude_external_images`** | `bool` (False) | Exclude images from other domains. |

View File

@@ -136,6 +136,12 @@ class CrawlerRunConfig:
wait_for=None,
screenshot=False,
pdf=False,
capture_mhtml=False,
enable_rate_limiting=False,
rate_limit_config=None,
memory_threshold_percent=70.0,
check_interval=1.0,
max_session_permit=20,
display_mode=None,
verbose=True,
stream=False, # Enable streaming for arun_many()
@@ -170,10 +176,9 @@ class CrawlerRunConfig:
- A CSS or JS expression to wait for before extracting content.
- Common usage: `wait_for="css:.main-loaded"` or `wait_for="js:() => window.loaded === true"`.
7. **`screenshot`**, **`pdf`**, & **`capture_mhtml`**:
- If `True`, captures a screenshot, PDF, or MHTML snapshot after the page is fully loaded.
- The results go to `result.screenshot` (base64), `result.pdf` (bytes), or `result.mhtml` (string).
8. **`verbose`**:
- Logs additional runtime details.
- Overlaps with the browser's verbosity if also set to `True` in `BrowserConfig`.

View File

@@ -26,6 +26,7 @@ class CrawlResult(BaseModel):
    downloaded_files: Optional[List[str]] = None
    screenshot: Optional[str] = None
    pdf : Optional[bytes] = None
    mhtml: Optional[str] = None
    markdown: Optional[Union[str, MarkdownGenerationResult]] = None
    extracted_content: Optional[str] = None
    metadata: Optional[dict] = None
@@ -51,6 +52,7 @@ class CrawlResult(BaseModel):
| **downloaded_files (`Optional[List[str]]`)** | If `accept_downloads=True` in `BrowserConfig`, this lists the filepaths of saved downloads. |
| **screenshot (`Optional[str]`)** | Screenshot of the page (base64-encoded) if `screenshot=True`. |
| **pdf (`Optional[bytes]`)** | PDF of the page if `pdf=True`. |
| **mhtml (`Optional[str]`)** | MHTML snapshot of the page if `capture_mhtml=True`. Contains the full page with all resources. |
| **markdown (`Optional[str or MarkdownGenerationResult]`)** | It holds a `MarkdownGenerationResult`. Over time, this will be consolidated into `markdown`. The generator can provide raw markdown, citations, references, and optionally `fit_markdown`. |
| **extracted_content (`Optional[str]`)** | The output of a structured extraction (CSS/LLM-based) stored as JSON string or other text. |
| **metadata (`Optional[dict]`)** | Additional info about the crawl or extracted data. |
@@ -190,18 +192,27 @@ for img in images:
print("Image URL:", img["src"], "Alt:", img.get("alt")) print("Image URL:", img["src"], "Alt:", img.get("alt"))
``` ```
### 5.3 `screenshot` and `pdf` ### 5.3 `screenshot`, `pdf`, and `mhtml`
If you set `screenshot=True` or `pdf=True` in **`CrawlerRunConfig`**, then: If you set `screenshot=True`, `pdf=True`, or `capture_mhtml=True` in **`CrawlerRunConfig`**, then:
- `result.screenshot` contains a base64-encoded PNG string. - `result.screenshot` contains a base64-encoded PNG string.
- `result.pdf` contains raw PDF bytes (you can write them to a file). - `result.pdf` contains raw PDF bytes (you can write them to a file).
- `result.mhtml` contains the MHTML snapshot of the page as a string (you can write it to a .mhtml file).
```python ```python
# Save the PDF
with open("page.pdf", "wb") as f: with open("page.pdf", "wb") as f:
f.write(result.pdf) f.write(result.pdf)
# Save the MHTML
if result.mhtml:
with open("page.mhtml", "w", encoding="utf-8") as f:
f.write(result.mhtml)
``` ```
The MHTML (MIME HTML) format is particularly useful as it captures the entire web page including all of its resources (CSS, images, scripts, etc.) in a single file, making it perfect for archiving or offline viewing.
### 5.4 `ssl_certificate` ### 5.4 `ssl_certificate`
If `fetch_ssl_certificate=True`, `result.ssl_certificate` holds details about the sites SSL cert, such as issuer, validity dates, etc. If `fetch_ssl_certificate=True`, `result.ssl_certificate` holds details about the sites SSL cert, such as issuer, validity dates, etc.

View File

@@ -4,7 +4,35 @@ In this tutorial, you'll learn how to:
1. Extract links (internal, external) from crawled pages
2. Filter or exclude specific domains (e.g., social media or custom domains)
3. Access and manage media data (especially images) in the crawl result
4. Configure your crawler to exclude or prioritize certain images
### 3.2 Excluding Images
#### Excluding External Images
If you're dealing with heavy pages or want to skip third-party images (advertisements, for example), you can turn on:
```python
crawler_cfg = CrawlerRunConfig(
    exclude_external_images=True
)
```
This setting attempts to discard images from outside the primary domain, keeping only those from the site you're crawling.
#### Excluding All Images
If you want to completely remove all images from the page to maximize performance and reduce memory usage, use:
```python
crawler_cfg = CrawlerRunConfig(
    exclude_all_images=True
)
```
This setting removes all images very early in the processing pipeline, which significantly improves memory efficiency and processing speed. This is particularly useful when:
- You don't need image data in your results
- You're crawling image-heavy pages that cause memory issues
- You want to focus only on text content
- You need to maximize crawling speed
> **Prerequisites**
@@ -271,8 +299,41 @@ Each extracted table contains:
- **`screenshot`**: Set to `True` if you want a full-page screenshot stored as `base64` in `result.screenshot`.
- **`pdf`**: Set to `True` if you want a PDF version of the page in `result.pdf`.
- **`capture_mhtml`**: Set to `True` if you want an MHTML snapshot of the page in `result.mhtml`. This format preserves the entire web page with all its resources (CSS, images, scripts) in a single file, making it perfect for archiving or offline viewing.
- **`wait_for_images`**: If `True`, attempts to wait until images are fully loaded before final extraction.
#### Example: Capturing Page as MHTML
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
async def main():
crawler_cfg = CrawlerRunConfig(
capture_mhtml=True # Enable MHTML capture
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun("https://example.com", config=crawler_cfg)
if result.success and result.mhtml:
# Save the MHTML snapshot to a file
with open("example.mhtml", "w", encoding="utf-8") as f:
f.write(result.mhtml)
print("MHTML snapshot saved to example.mhtml")
else:
print("Failed to capture MHTML:", result.error_message)
if __name__ == "__main__":
asyncio.run(main())
```
The MHTML format is particularly useful because:
- It captures the complete page state including all resources
- It can be opened in most modern browsers for offline viewing
- It preserves the page exactly as it appeared during crawling
- It's a single file, making it easy to store and transfer
---
## 4. Putting It All Together: Link & Media Filtering

View File

@@ -38,6 +38,7 @@ nav:
- "Crawl Dispatcher": "advanced/crawl-dispatcher.md" - "Crawl Dispatcher": "advanced/crawl-dispatcher.md"
- "Identity Based Crawling": "advanced/identity-based-crawling.md" - "Identity Based Crawling": "advanced/identity-based-crawling.md"
- "SSL Certificate": "advanced/ssl-certificate.md" - "SSL Certificate": "advanced/ssl-certificate.md"
- "Network & Console Capture": "advanced/network-console-capture.md"
- Extraction: - Extraction:
- "LLM-Free Strategies": "extraction/no-llm-strategies.md" - "LLM-Free Strategies": "extraction/no-llm-strategies.md"
- "LLM Strategies": "extraction/llm-strategies.md" - "LLM Strategies": "extraction/llm-strategies.md"

20
parameter_updates.txt Normal file
View File

@@ -0,0 +1,20 @@
The file /docs/md_v2/api/parameters.md should be updated to include the new network and console capturing parameters.
Here's what needs to be updated:
1. Change section title from:
```
### G) **Debug & Logging**
```
to:
```
### G) **Debug, Logging & Capturing**
```
2. Add new parameters to the table:
```
| **`capture_network_requests`** | `bool` (False) | Captures all network requests, responses, and failures during the crawl. Available in `result.network_requests`. |
| **`capture_console_messages`** | `bool` (False) | Captures all browser console messages (logs, warnings, errors) during the crawl. Available in `result.console_messages`. |
```
These additions document the new network and console capturing options on CrawlerRunConfig; a minimal usage sketch follows below.
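For quick reference, a minimal configuration sketch using both new flags (the values shown are simply the non-default settings being documented):
```
config = CrawlerRunConfig(
    capture_network_requests=True,   # populates result.network_requests
    capture_console_messages=True    # populates result.console_messages
)
```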

View File

@@ -0,0 +1,489 @@
I want to enhance the `AsyncPlaywrightCrawlerStrategy` to optionally capture network requests and console messages during a crawl, storing them in the final `CrawlResult`.
Here's a breakdown of the proposed changes across the relevant files:
**1. Configuration (`crawl4ai/async_configs.py`)**
* **Goal:** Add flags to `CrawlerRunConfig` to enable/disable capturing.
* **Changes:**
* Add two new boolean attributes to `CrawlerRunConfig`:
* `capture_network_requests: bool = False`
* `capture_console_messages: bool = False`
* Update `__init__`, `from_kwargs`, `to_dict`, and implicitly `clone`/`dump`/`load` to include these new attributes.
```python
# ==== File: crawl4ai/async_configs.py ====
# ... (imports) ...
class CrawlerRunConfig():
# ... (existing attributes) ...
# NEW: Network and Console Capturing Parameters
capture_network_requests: bool = False
capture_console_messages: bool = False
# Experimental Parameters
experimental: Dict[str, Any] = None,
def __init__(
self,
# ... (existing parameters) ...
# NEW: Network and Console Capturing Parameters
capture_network_requests: bool = False,
capture_console_messages: bool = False,
# Experimental Parameters
experimental: Dict[str, Any] = None,
):
# ... (existing assignments) ...
# NEW: Assign new parameters
self.capture_network_requests = capture_network_requests
self.capture_console_messages = capture_console_messages
# Experimental Parameters
self.experimental = experimental or {}
# ... (rest of __init__) ...
@staticmethod
def from_kwargs(kwargs: dict) -> "CrawlerRunConfig":
return CrawlerRunConfig(
# ... (existing kwargs gets) ...
# NEW: Get new parameters
capture_network_requests=kwargs.get("capture_network_requests", False),
capture_console_messages=kwargs.get("capture_console_messages", False),
# Experimental Parameters
experimental=kwargs.get("experimental"),
)
def to_dict(self):
return {
# ... (existing dict entries) ...
# NEW: Add new parameters to dict
"capture_network_requests": self.capture_network_requests,
"capture_console_messages": self.capture_console_messages,
"experimental": self.experimental,
}
# clone(), dump(), load() should work automatically if they rely on to_dict() and from_kwargs()
# or the serialization logic correctly handles all attributes.
```
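A quick way to sanity-check the serialization path touched above is a round trip through `to_dict()` and `from_kwargs()`; a minimal sketch (not an existing test):
```python
# Round-trip sketch: the new flags should survive to_dict() -> from_kwargs()
cfg = CrawlerRunConfig(capture_network_requests=True, capture_console_messages=True)
restored = CrawlerRunConfig.from_kwargs(cfg.to_dict())
assert restored.capture_network_requests is True
assert restored.capture_console_messages is True
```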
**2. Data Models (`crawl4ai/models.py`)**
* **Goal:** Add fields to store the captured data in the response/result objects.
* **Changes:**
* Add `network_requests: Optional[List[Dict[str, Any]]] = None` and `console_messages: Optional[List[Dict[str, Any]]] = None` to `AsyncCrawlResponse`.
* Add the same fields to `CrawlResult`.
```python
# ==== File: crawl4ai/models.py ====
# ... (imports) ...
# ... (Existing dataclasses/models) ...
class AsyncCrawlResponse(BaseModel):
html: str
response_headers: Dict[str, str]
js_execution_result: Optional[Dict[str, Any]] = None
status_code: int
screenshot: Optional[str] = None
pdf_data: Optional[bytes] = None
get_delayed_content: Optional[Callable[[Optional[float]], Awaitable[str]]] = None
downloaded_files: Optional[List[str]] = None
ssl_certificate: Optional[SSLCertificate] = None
redirected_url: Optional[str] = None
# NEW: Fields for captured data
network_requests: Optional[List[Dict[str, Any]]] = None
console_messages: Optional[List[Dict[str, Any]]] = None
class Config:
arbitrary_types_allowed = True
# ... (Existing models like MediaItem, Link, etc.) ...
class CrawlResult(BaseModel):
url: str
html: str
success: bool
cleaned_html: Optional[str] = None
media: Dict[str, List[Dict]] = {}
links: Dict[str, List[Dict]] = {}
downloaded_files: Optional[List[str]] = None
js_execution_result: Optional[Dict[str, Any]] = None
screenshot: Optional[str] = None
pdf: Optional[bytes] = None
mhtml: Optional[str] = None # Added mhtml based on the provided models.py
_markdown: Optional[MarkdownGenerationResult] = PrivateAttr(default=None)
extracted_content: Optional[str] = None
metadata: Optional[dict] = None
error_message: Optional[str] = None
session_id: Optional[str] = None
response_headers: Optional[dict] = None
status_code: Optional[int] = None
ssl_certificate: Optional[SSLCertificate] = None
dispatch_result: Optional[DispatchResult] = None
redirected_url: Optional[str] = None
# NEW: Fields for captured data
network_requests: Optional[List[Dict[str, Any]]] = None
console_messages: Optional[List[Dict[str, Any]]] = None
class Config:
arbitrary_types_allowed = True
# ... (Existing __init__, properties, model_dump for markdown compatibility) ...
# ... (Rest of the models) ...
```
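Because both new fields default to `None`, existing code that constructs a `CrawlResult` without captured data keeps working; a small sketch of the expected default behaviour:
```python
# Defaults sketch: results built without capture data expose None for both fields
result = CrawlResult(url="https://example.com", html="<html></html>", success=True)
assert result.network_requests is None
assert result.console_messages is None
```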
**3. Crawler Strategy (`crawl4ai/async_crawler_strategy.py`)**
* **Goal:** Implement the actual capturing logic within `AsyncPlaywrightCrawlerStrategy._crawl_web`.
* **Changes:**
* Inside `_crawl_web`, initialize empty lists `captured_requests = []` and `captured_console = []`.
* Conditionally attach Playwright event listeners (`page.on(...)`) based on the `config.capture_network_requests` and `config.capture_console_messages` flags.
* Define handler functions for these listeners to extract relevant data and append it to the respective lists. Include timestamps.
* Pass the captured lists to the `AsyncCrawlResponse` constructor at the end of the method.
```python
# ==== File: crawl4ai/async_crawler_strategy.py ====
# ... (imports) ...
import time # Make sure time is imported
class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
# ... (existing methods like __init__, start, close, etc.) ...
async def _crawl_web(
self, url: str, config: CrawlerRunConfig
) -> AsyncCrawlResponse:
"""
Internal method to crawl web URLs with the specified configuration.
Includes optional network and console capturing. # MODIFIED DOCSTRING
"""
config.url = url
response_headers = {}
execution_result = None
status_code = None
redirected_url = url
# Reset downloaded files list for new crawl
self._downloaded_files = []
# Initialize capture lists - IMPORTANT: Reset per crawl
captured_requests: List[Dict[str, Any]] = []
captured_console: List[Dict[str, Any]] = []
# Handle user agent ... (existing code) ...
# Get page for session
page, context = await self.browser_manager.get_page(crawlerRunConfig=config)
# ... (existing code for cookies, navigator overrides, hooks) ...
# --- Setup Capturing Listeners ---
# NOTE: These listeners are attached *before* page.goto()
# Network Request Capturing
if config.capture_network_requests:
async def handle_request_capture(request):
try:
post_data_str = None
try:
# Be cautious with large post data
post_data = request.post_data_buffer
if post_data:
# Attempt to decode, fallback to base64 or size indication
try:
post_data_str = post_data.decode('utf-8', errors='replace')
except UnicodeDecodeError:
post_data_str = f"[Binary data: {len(post_data)} bytes]"
except Exception:
post_data_str = "[Error retrieving post data]"
captured_requests.append({
"event_type": "request",
"url": request.url,
"method": request.method,
"headers": dict(request.headers), # Convert Header dict
"post_data": post_data_str,
"resource_type": request.resource_type,
"is_navigation_request": request.is_navigation_request(),
"timestamp": time.time()
})
except Exception as e:
self.logger.warning(f"Error capturing request details for {request.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "request_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})
async def handle_response_capture(response):
try:
# Avoid capturing full response body by default due to size/security
# security_details = await response.security_details() # Optional: More SSL info
captured_requests.append({
"event_type": "response",
"url": response.url,
"status": response.status,
"status_text": response.status_text,
"headers": dict(response.headers), # Convert Header dict
"from_service_worker": response.from_service_worker,
# "security_details": security_details, # Uncomment if needed
"request_timing": response.request.timing, # Detailed timing info
"timestamp": time.time()
})
except Exception as e:
self.logger.warning(f"Error capturing response details for {response.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "response_capture_error", "url": response.url, "error": str(e), "timestamp": time.time()})
async def handle_request_failed_capture(request):
try:
captured_requests.append({
"event_type": "request_failed",
"url": request.url,
"method": request.method,
"resource_type": request.resource_type,
"failure_text": request.failure.error_text if request.failure else "Unknown failure",
"timestamp": time.time()
})
except Exception as e:
self.logger.warning(f"Error capturing request failed details for {request.url}: {e}", tag="CAPTURE")
captured_requests.append({"event_type": "request_failed_capture_error", "url": request.url, "error": str(e), "timestamp": time.time()})
page.on("request", handle_request_capture)
page.on("response", handle_response_capture)
page.on("requestfailed", handle_request_failed_capture)
# Console Message Capturing
if config.capture_console_messages:
def handle_console_capture(msg):
    try:
        # In the Python API, location/type/text are properties rather than methods
        location = msg.location  # dict with url, lineNumber, columnNumber
        # msg.args are JSHandles; json_value() is async and cannot be awaited in this
        # synchronous handler, so keep a string preview of each argument instead.
        resolved_args = []
        try:
            for arg in msg.args:
                resolved_args.append(str(arg))  # Preview only; full values need `await arg.json_value()`
        except Exception:
            resolved_args.append("[Could not resolve JSHandle args]")
        captured_console.append({
            "type": msg.type,  # e.g., 'log', 'error', 'warning'
            "text": msg.text,
            "args": resolved_args,  # Captured argument previews
            "location": f"{location['url']}:{location['lineNumber']}:{location['columnNumber']}" if location else "N/A",
            "timestamp": time.time()
        })
    except Exception as e:
        self.logger.warning(f"Error capturing console message: {e}", tag="CAPTURE")
        captured_console.append({"type": "console_capture_error", "error": str(e), "timestamp": time.time()})
def handle_pageerror_capture(err):
try:
captured_console.append({
"type": "error", # Consistent type for page errors
"text": err.message,
"stack": err.stack,
"timestamp": time.time()
})
except Exception as e:
self.logger.warning(f"Error capturing page error: {e}", tag="CAPTURE")
captured_console.append({"type": "pageerror_capture_error", "error": str(e), "timestamp": time.time()})
page.on("console", handle_console_capture)
page.on("pageerror", handle_pageerror_capture)
# --- End Setup Capturing Listeners ---
# Set up console logging if requested (Keep original logging logic separate or merge carefully)
if config.log_console:
# ... (original log_console setup using page.on(...) remains here) ...
# This allows logging to screen *and* capturing to the list if both flags are True
def log_consol(msg, console_log_type="debug"):
# ... existing implementation ...
pass # Placeholder for existing code
page.on("console", lambda msg: log_consol(msg, "debug"))
page.on("pageerror", lambda e: log_consol(e, "error"))
try:
# ... (existing code for SSL, downloads, goto, waits, JS execution, etc.) ...
# Get final HTML content
# ... (existing code for selector logic or page.content()) ...
if config.css_selector:
# ... existing selector logic ...
html = f"<div class='crawl4ai-result'>\n" + "\n".join(html_parts) + "\n</div>"
else:
html = await page.content()
await self.execute_hook(
"before_return_html", page=page, html=html, context=context, config=config
)
# Handle PDF and screenshot generation
# ... (existing code) ...
# Define delayed content getter
# ... (existing code) ...
# Return complete response - ADD CAPTURED DATA HERE
return AsyncCrawlResponse(
html=html,
response_headers=response_headers,
js_execution_result=execution_result,
status_code=status_code,
screenshot=screenshot_data,
pdf_data=pdf_data,
get_delayed_content=get_delayed_content,
ssl_certificate=ssl_cert,
downloaded_files=(
self._downloaded_files if self._downloaded_files else None
),
redirected_url=redirected_url,
# NEW: Pass captured data conditionally
network_requests=captured_requests if config.capture_network_requests else None,
console_messages=captured_console if config.capture_console_messages else None,
)
except Exception as e:
raise e # Re-raise the original exception
finally:
# If no session_id is given we should close the page
if not config.session_id:
# Detach listeners before closing to prevent potential errors during close
if config.capture_network_requests:
page.remove_listener("request", handle_request_capture)
page.remove_listener("response", handle_response_capture)
page.remove_listener("requestfailed", handle_request_failed_capture)
if config.capture_console_messages:
page.remove_listener("console", handle_console_capture)
page.remove_listener("pageerror", handle_pageerror_capture)
# Also remove logging listeners if they were attached
if config.log_console:
# Need to figure out how to remove the lambdas if necessary,
# or ensure they don't cause issues on close. Often, it's fine.
pass
await page.close()
# ... (rest of AsyncPlaywrightCrawlerStrategy methods) ...
```
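To see the listener pattern in isolation, here is a self-contained Playwright sketch of the same idea; it is illustrative only (placeholder URL, trimmed event payloads) and not part of the crawler code:
```python
import asyncio
import time
from playwright.async_api import async_playwright

async def capture_demo(url: str = "https://example.com"):
    """Attach request/console listeners before navigation, then return the captured events."""
    captured = []
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        # Listeners are attached before goto(), mirroring the ordering used in _crawl_web above
        page.on("request", lambda req: captured.append(
            {"event_type": "request", "url": req.url, "method": req.method, "timestamp": time.time()}))
        page.on("console", lambda msg: captured.append(
            {"type": msg.type, "text": msg.text, "timestamp": time.time()}))
        await page.goto(url)
        await browser.close()
    return captured

# asyncio.run(capture_demo())
```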
**4. Core Crawler (`crawl4ai/async_webcrawler.py`)**
* **Goal:** Ensure the captured data from `AsyncCrawlResponse` is transferred to the final `CrawlResult`.
* **Changes:**
* In `arun`, when processing a non-cached result (inside the `if not cached_result or not html:` block), after receiving `async_response` and calling `aprocess_html` to get `crawl_result`, copy the `network_requests` and `console_messages` from `async_response` to `crawl_result`.
```python
# ==== File: crawl4ai/async_webcrawler.py ====
# ... (imports) ...
class AsyncWebCrawler:
# ... (existing methods) ...
async def arun(
self,
url: str,
config: CrawlerRunConfig = None,
**kwargs,
) -> RunManyReturn:
# ... (existing setup, cache check) ...
async with self._lock or self.nullcontext():
try:
# ... (existing logging, cache context setup) ...
if cached_result:
# ... (existing cache handling logic) ...
# Note: Captured network/console usually not useful from cache
# Ensure they are None or empty if read from cache, unless stored explicitly
cached_result.network_requests = cached_result.network_requests or None
cached_result.console_messages = cached_result.console_messages or None
# ... (rest of cache logic) ...
# Fetch fresh content if needed
if not cached_result or not html:
t1 = time.perf_counter()
# ... (existing user agent update, robots.txt check) ...
##############################
# Call CrawlerStrategy.crawl #
##############################
async_response = await self.crawler_strategy.crawl(
url,
config=config,
)
# ... (existing assignment of html, screenshot, pdf, js_result from async_response) ...
t2 = time.perf_counter()
# ... (existing logging) ...
###############################################################
# Process the HTML content, Call CrawlerStrategy.process_html #
###############################################################
crawl_result: CrawlResult = await self.aprocess_html(
# ... (existing args) ...
)
# --- Transfer data from AsyncCrawlResponse to CrawlResult ---
crawl_result.status_code = async_response.status_code
crawl_result.redirected_url = async_response.redirected_url or url
crawl_result.response_headers = async_response.response_headers
crawl_result.downloaded_files = async_response.downloaded_files
crawl_result.js_execution_result = js_execution_result
crawl_result.ssl_certificate = async_response.ssl_certificate
# NEW: Copy captured data
crawl_result.network_requests = async_response.network_requests
crawl_result.console_messages = async_response.console_messages
# ------------------------------------------------------------
crawl_result.success = bool(html)
crawl_result.session_id = getattr(config, "session_id", None)
# ... (existing logging) ...
# Update cache if appropriate
if cache_context.should_write() and not bool(cached_result):
# crawl_result now includes network/console data if captured
await async_db_manager.acache_url(crawl_result)
return CrawlResultContainer(crawl_result)
else: # Cached result was used
# ... (existing logging for cache hit) ...
cached_result.success = bool(html)
cached_result.session_id = getattr(config, "session_id", None)
cached_result.redirected_url = cached_result.redirected_url or url
return CrawlResultContainer(cached_result)
except Exception as e:
# ... (existing error handling) ...
return CrawlResultContainer(
CrawlResult(
url=url, html="", success=False, error_message=error_message
)
)
# ... (aprocess_html remains unchanged regarding capture) ...
# ... (arun_many remains unchanged regarding capture) ...
```
**Summary of Changes:**
1. **Configuration:** Added `capture_network_requests` and `capture_console_messages` flags to `CrawlerRunConfig`.
2. **Models:** Added corresponding `network_requests` and `console_messages` fields (List of Dicts) to `AsyncCrawlResponse` and `CrawlResult`.
3. **Strategy:** Implemented conditional event listeners in `AsyncPlaywrightCrawlerStrategy._crawl_web` to capture data into lists when flags are true. Populated these fields in the returned `AsyncCrawlResponse`. Added basic error handling within capture handlers. Added timestamps.
4. **Crawler:** Modified `AsyncWebCrawler.arun` to copy the captured data from `AsyncCrawlResponse` into the final `CrawlResult` for non-cached fetches.
This approach keeps the capturing logic contained within the Playwright strategy, uses clear configuration flags, and integrates the results into the existing data flow. The data format (list of dictionaries) is flexible for storing varied information from requests/responses/console messages.
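From the caller's side, the intended end-to-end usage would look roughly like this once the changes above land (a sketch mirroring the documentation, not a test):
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(
        capture_network_requests=True,
        capture_console_messages=True,
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://example.com", config=config)
        if result.success:
            print(len(result.network_requests or []), "network events captured")
            print(len(result.console_messages or []), "console messages captured")

asyncio.run(main())
```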

213
tests/general/test_mhtml.py Normal file
View File

@@ -0,0 +1,213 @@
# test_mhtml_capture.py
import pytest
import asyncio
import re # For more robust MHTML checks
# Assuming these can be imported directly from the crawl4ai library
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CrawlResult
# A reliable, simple static HTML page for testing
# Using httpbin as it's designed for testing clients
TEST_URL_SIMPLE = "https://httpbin.org/html"
EXPECTED_CONTENT_SIMPLE = "Herman Melville - Moby-Dick"
# A slightly more complex page that might involve JS (good secondary test)
TEST_URL_JS = "https://quotes.toscrape.com/js/"
EXPECTED_CONTENT_JS = "Quotes to Scrape" # Title of the page, which should be present in MHTML
# Removed the custom event_loop fixture as pytest-asyncio provides a default one.
@pytest.mark.asyncio
async def test_mhtml_capture_when_enabled():
"""
Verify that when CrawlerRunConfig has capture_mhtml=True,
the CrawlResult contains valid MHTML content.
"""
# Create a fresh browser config and crawler instance for this test
browser_config = BrowserConfig(headless=True) # Use headless for testing CI/CD
# --- Key: Enable MHTML capture in the run config ---
run_config = CrawlerRunConfig(capture_mhtml=True)
# Create a fresh crawler instance
crawler = AsyncWebCrawler(config=browser_config)
try:
# Start the browser
await crawler.start()
# Perform the crawl with the MHTML-enabled config
result: CrawlResult = await crawler.arun(TEST_URL_SIMPLE, config=run_config)
# --- Assertions ---
assert result is not None, "Crawler should return a result object"
assert result.success is True, f"Crawling {TEST_URL_SIMPLE} should succeed. Error: {result.error_message}"
# 1. Check if the mhtml attribute exists (will fail if CrawlResult not updated)
assert hasattr(result, 'mhtml'), "CrawlResult object must have an 'mhtml' attribute"
# 2. Check if mhtml is populated
assert result.mhtml is not None, "MHTML content should be captured when enabled"
assert isinstance(result.mhtml, str), "MHTML content should be a string"
assert len(result.mhtml) > 500, "MHTML content seems too short, likely invalid" # Basic sanity check
# 3. Check for MHTML structure indicators (more robust than simple string contains)
# MHTML files are multipart MIME messages
assert re.search(r"Content-Type: multipart/related;", result.mhtml, re.IGNORECASE), \
"MHTML should contain 'Content-Type: multipart/related;'"
# Should contain a boundary definition
assert re.search(r"boundary=\"----MultipartBoundary", result.mhtml), \
"MHTML should contain a multipart boundary"
# Should contain the main HTML part
assert re.search(r"Content-Type: text/html", result.mhtml, re.IGNORECASE), \
"MHTML should contain a 'Content-Type: text/html' part"
# 4. Check if the *actual page content* is within the MHTML string
# This confirms the snapshot captured the rendered page
assert EXPECTED_CONTENT_SIMPLE in result.mhtml, \
f"Expected content '{EXPECTED_CONTENT_SIMPLE}' not found within the captured MHTML"
# 5. Ensure standard HTML is still present and correct
assert result.html is not None, "Standard HTML should still be present"
assert isinstance(result.html, str), "Standard HTML should be a string"
assert EXPECTED_CONTENT_SIMPLE in result.html, \
f"Expected content '{EXPECTED_CONTENT_SIMPLE}' not found within the standard HTML"
finally:
# Important: Ensure browser is completely closed even if assertions fail
await crawler.close()
# Help the garbage collector clean up
crawler = None
@pytest.mark.asyncio
async def test_mhtml_capture_when_disabled_explicitly():
"""
Verify that when CrawlerRunConfig explicitly has capture_mhtml=False,
the CrawlResult.mhtml attribute is None.
"""
# Create a fresh browser config and crawler instance for this test
browser_config = BrowserConfig(headless=True)
# --- Key: Explicitly disable MHTML capture ---
run_config = CrawlerRunConfig(capture_mhtml=False)
# Create a fresh crawler instance
crawler = AsyncWebCrawler(config=browser_config)
try:
# Start the browser
await crawler.start()
result: CrawlResult = await crawler.arun(TEST_URL_SIMPLE, config=run_config)
assert result is not None
assert result.success is True, f"Crawling {TEST_URL_SIMPLE} should succeed. Error: {result.error_message}"
# 1. Check attribute existence (important for TDD start)
assert hasattr(result, 'mhtml'), "CrawlResult object must have an 'mhtml' attribute"
# 2. Check mhtml is None
assert result.mhtml is None, "MHTML content should be None when explicitly disabled"
# 3. Ensure standard HTML is still present
assert result.html is not None
assert EXPECTED_CONTENT_SIMPLE in result.html
finally:
# Important: Ensure browser is completely closed even if assertions fail
await crawler.close()
# Help the garbage collector clean up
crawler = None
@pytest.mark.asyncio
async def test_mhtml_capture_when_disabled_by_default():
"""
Verify that if capture_mhtml is not specified (using its default),
the CrawlResult.mhtml attribute is None.
(This assumes the default value for capture_mhtml in CrawlerRunConfig is False)
"""
# Create a fresh browser config and crawler instance for this test
browser_config = BrowserConfig(headless=True)
# --- Key: Use default run config ---
run_config = CrawlerRunConfig() # Do not specify capture_mhtml
# Create a fresh crawler instance
crawler = AsyncWebCrawler(config=browser_config)
try:
# Start the browser
await crawler.start()
result: CrawlResult = await crawler.arun(TEST_URL_SIMPLE, config=run_config)
assert result is not None
assert result.success is True, f"Crawling {TEST_URL_SIMPLE} should succeed. Error: {result.error_message}"
# 1. Check attribute existence
assert hasattr(result, 'mhtml'), "CrawlResult object must have an 'mhtml' attribute"
# 2. Check mhtml is None (assuming default is False)
assert result.mhtml is None, "MHTML content should be None when using default config (assuming default=False)"
# 3. Ensure standard HTML is still present
assert result.html is not None
assert EXPECTED_CONTENT_SIMPLE in result.html
finally:
# Important: Ensure browser is completely closed even if assertions fail
await crawler.close()
# Help the garbage collector clean up
crawler = None
# Optional: Add a test for a JS-heavy page if needed
@pytest.mark.asyncio
async def test_mhtml_capture_on_js_page_when_enabled():
"""
Verify MHTML capture works on a page requiring JavaScript execution.
"""
# Create a fresh browser config and crawler instance for this test
browser_config = BrowserConfig(headless=True)
run_config = CrawlerRunConfig(
capture_mhtml=True,
# Add a small wait or JS execution if needed for the JS page to fully render
# For quotes.toscrape.com/js/, it renders quickly, but a wait might be safer
# wait_for_timeout=2000 # Example: wait up to 2 seconds
js_code="await new Promise(r => setTimeout(r, 500));" # Small delay after potential load
)
# Create a fresh crawler instance
crawler = AsyncWebCrawler(config=browser_config)
try:
# Start the browser
await crawler.start()
result: CrawlResult = await crawler.arun(TEST_URL_JS, config=run_config)
assert result is not None
assert result.success is True, f"Crawling {TEST_URL_JS} should succeed. Error: {result.error_message}"
assert hasattr(result, 'mhtml'), "CrawlResult object must have an 'mhtml' attribute"
assert result.mhtml is not None, "MHTML content should be captured on JS page when enabled"
assert isinstance(result.mhtml, str), "MHTML content should be a string"
assert len(result.mhtml) > 500, "MHTML content from JS page seems too short"
# Check for MHTML structure
assert re.search(r"Content-Type: multipart/related;", result.mhtml, re.IGNORECASE)
assert re.search(r"Content-Type: text/html", result.mhtml, re.IGNORECASE)
# Check for content rendered by JS within the MHTML
assert EXPECTED_CONTENT_JS in result.mhtml, \
f"Expected JS-rendered content '{EXPECTED_CONTENT_JS}' not found within the captured MHTML"
# Check standard HTML too
assert result.html is not None
assert EXPECTED_CONTENT_JS in result.html, \
f"Expected JS-rendered content '{EXPECTED_CONTENT_JS}' not found within the standard HTML"
finally:
# Important: Ensure browser is completely closed even if assertions fail
await crawler.close()
# Help the garbage collector clean up
crawler = None
if __name__ == "__main__":
# Use pytest for async tests
pytest.main(["-xvs", __file__])

View File

@@ -0,0 +1,185 @@
from crawl4ai.async_webcrawler import AsyncWebCrawler
from crawl4ai.async_configs import CrawlerRunConfig, BrowserConfig
import asyncio
import aiohttp
from aiohttp import web
import tempfile
import shutil
import os, sys, time, json
async def start_test_server():
app = web.Application()
async def basic_page(request):
return web.Response(text="""
<!DOCTYPE html>
<html>
<head>
<title>Network Request Test</title>
</head>
<body>
<h1>Test Page for Network Capture</h1>
<p>This page performs network requests and console logging.</p>
<img src="/image.png" alt="Test Image">
<script>
console.log("Basic console log");
console.error("Error message");
console.warn("Warning message");
// Make some XHR requests
const xhr = new XMLHttpRequest();
xhr.open('GET', '/api/data', true);
xhr.send();
// Make a fetch request
fetch('/api/json')
.then(response => response.json())
.catch(error => console.error('Fetch error:', error));
// Trigger an error
setTimeout(() => {
try {
nonExistentFunction();
} catch (e) {
console.error("Caught error:", e);
}
}, 100);
</script>
</body>
</html>
""", content_type="text/html")
async def image(request):
# Return a small 1x1 transparent PNG
return web.Response(body=bytes.fromhex('89504E470D0A1A0A0000000D49484452000000010000000108060000001F15C4890000000D4944415478DA63FAFFFF3F030079DB00018D959DE70000000049454E44AE426082'), content_type="image/png")
async def api_data(request):
return web.Response(text="sample data")
async def api_json(request):
return web.json_response({"status": "success", "message": "JSON data"})
# Register routes
app.router.add_get('/', basic_page)
app.router.add_get('/image.png', image)
app.router.add_get('/api/data', api_data)
app.router.add_get('/api/json', api_json)
runner = web.AppRunner(app)
await runner.setup()
site = web.TCPSite(runner, 'localhost', 8080)
await site.start()
return runner
async def test_network_console_capture():
print("\n=== Testing Network and Console Capture ===\n")
# Start test server
runner = await start_test_server()
try:
browser_config = BrowserConfig(headless=True)
# Test with capture disabled (default)
print("\n1. Testing with capture disabled (default)...")
async with AsyncWebCrawler(config=browser_config) as crawler:
config = CrawlerRunConfig(
wait_until="networkidle", # Wait for network to be idle
)
result = await crawler.arun(url="http://localhost:8080/", config=config)
assert result.network_requests is None, "Network requests should be None when capture is disabled"
assert result.console_messages is None, "Console messages should be None when capture is disabled"
print("✓ Default config correctly returns None for network_requests and console_messages")
# Test with network capture enabled
print("\n2. Testing with network capture enabled...")
async with AsyncWebCrawler(config=browser_config) as crawler:
config = CrawlerRunConfig(
wait_until="networkidle", # Wait for network to be idle
capture_network_requests=True
)
result = await crawler.arun(url="http://localhost:8080/", config=config)
assert result.network_requests is not None, "Network requests should be captured"
print(f"✓ Captured {len(result.network_requests)} network requests")
# Check if we have both requests and responses
request_count = len([r for r in result.network_requests if r.get("event_type") == "request"])
response_count = len([r for r in result.network_requests if r.get("event_type") == "response"])
print(f" - {request_count} requests, {response_count} responses")
# Check if we captured specific resources
urls = [r.get("url") for r in result.network_requests]
has_image = any("/image.png" in url for url in urls)
has_api_data = any("/api/data" in url for url in urls)
has_api_json = any("/api/json" in url for url in urls)
assert has_image, "Should have captured image request"
assert has_api_data, "Should have captured API data request"
assert has_api_json, "Should have captured API JSON request"
print("✓ Captured expected network requests (image, API endpoints)")
# Test with console capture enabled
print("\n3. Testing with console capture enabled...")
async with AsyncWebCrawler(config=browser_config) as crawler:
config = CrawlerRunConfig(
wait_until="networkidle", # Wait for network to be idle
capture_console_messages=True
)
result = await crawler.arun(url="http://localhost:8080/", config=config)
assert result.console_messages is not None, "Console messages should be captured"
print(f"✓ Captured {len(result.console_messages)} console messages")
# Check if we have different types of console messages
message_types = set(msg.get("type") for msg in result.console_messages if "type" in msg)
print(f" - Message types: {', '.join(message_types)}")
# Print all captured messages for debugging
print(" - Captured messages:")
for msg in result.console_messages:
print(f" * Type: {msg.get('type', 'N/A')}, Text: {msg.get('text', 'N/A')}")
# Look for specific messages
messages = [msg.get("text") for msg in result.console_messages if "text" in msg]
has_basic_log = any("Basic console log" in msg for msg in messages)
has_error_msg = any("Error message" in msg for msg in messages)
has_warning_msg = any("Warning message" in msg for msg in messages)
assert has_basic_log, "Should have captured basic console.log message"
assert has_error_msg, "Should have captured console.error message"
assert has_warning_msg, "Should have captured console.warn message"
print("✓ Captured expected console messages (log, error, warning)")
# Test with both captures enabled
print("\n4. Testing with both network and console capture enabled...")
async with AsyncWebCrawler(config=browser_config) as crawler:
config = CrawlerRunConfig(
wait_until="networkidle", # Wait for network to be idle
capture_network_requests=True,
capture_console_messages=True
)
result = await crawler.arun(url="http://localhost:8080/", config=config)
assert result.network_requests is not None, "Network requests should be captured"
assert result.console_messages is not None, "Console messages should be captured"
print(f"✓ Successfully captured both {len(result.network_requests)} network requests and {len(result.console_messages)} console messages")
finally:
await runner.cleanup()
print("\nTest server shutdown")
async def main():
try:
await test_network_console_capture()
print("\n✅ All tests passed successfully!")
except Exception as e:
print(f"\n❌ Test failed: {str(e)}")
raise
if __name__ == "__main__":
asyncio.run(main())