refactor(docs): reorganize documentation structure and update styles

Reorganize documentation into core/advanced/extraction sections for better navigation.
Update terminal theme styles and add rich library for better CLI output.
Remove redundant tutorial files and consolidate content into core sections.
Add personal story to index page for project context.

BREAKING CHANGE: Documentation structure has been significantly reorganized
This commit is contained in:
UncleCode
2025-01-07 20:49:50 +08:00
parent ae376f15fb
commit ca3e33122e
87 changed files with 4869 additions and 8951 deletions

View File

@@ -0,0 +1,327 @@
# Overview of Some Important Advanced Features
(Proxy, PDF, Screenshot, SSL, Headers, & Storage State)
Crawl4AI offers multiple power-user features that go beyond simple crawling. This tutorial covers:
1. **Proxy Usage**
2. **Capturing PDFs & Screenshots**
3. **Handling SSL Certificates**
4. **Custom Headers**
5. **Session Persistence & Local Storage**
> **Prerequisites**
> - You have a basic grasp of [AsyncWebCrawler Basics](../core/simple-crawling.md)
> - You know how to run or configure your Python environment with Playwright installed
---
## 1. Proxy Usage
If you need to route your crawl traffic through a proxy—whether for IP rotation, geo-testing, or privacy—Crawl4AI supports it via `BrowserConfig.proxy_config`.
```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
async def main():
browser_cfg = BrowserConfig(
proxy_config={
"server": "http://proxy.example.com:8080",
"username": "myuser",
"password": "mypass",
},
headless=True
)
crawler_cfg = CrawlerRunConfig(
verbose=True
)
async with AsyncWebCrawler(config=browser_cfg) as crawler:
result = await crawler.arun(
url="https://www.whatismyip.com/",
config=crawler_cfg
)
if result.success:
print("[OK] Page fetched via proxy.")
print("Page HTML snippet:", result.html[:200])
else:
print("[ERROR]", result.error_message)
if __name__ == "__main__":
asyncio.run(main())
```
**Key Points**
- **`proxy_config`** expects a dict with `server` and optional auth credentials.
- Many commercial proxies provide an HTTP/HTTPS “gateway” server that you specify in `server`.
- If your proxy doesn't need auth, omit `username`/`password`.
---
## 2. Capturing PDFs & Screenshots
Sometimes you need a visual record of a page or a PDF “printout.” Crawl4AI can do both in one pass:
```python
import os, asyncio
from base64 import b64decode
from crawl4ai import AsyncWebCrawler, CacheMode
async def main():
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://en.wikipedia.org/wiki/List_of_common_misconceptions",
cache_mode=CacheMode.BYPASS,
pdf=True,
screenshot=True
)
if result.success:
# Save screenshot
if result.screenshot:
with open("wikipedia_screenshot.png", "wb") as f:
f.write(b64decode(result.screenshot))
# Save PDF
if result.pdf:
with open("wikipedia_page.pdf", "wb") as f:
f.write(result.pdf)
print("[OK] PDF & screenshot captured.")
else:
print("[ERROR]", result.error_message)
if __name__ == "__main__":
asyncio.run(main())
```
**Why PDF + Screenshot?**
- Large or complex pages can be slow or error-prone with “traditional” full-page screenshots.
- Exporting a PDF is more reliable for very long pages. Crawl4AI automatically converts the first PDF page into an image if you request both.
**Relevant Parameters**
- **`pdf=True`**: Exports the current page as a PDF (base64-encoded in `result.pdf`).
- **`screenshot=True`**: Creates a screenshot (base64-encoded in `result.screenshot`).
- **`scan_full_page`** or advanced hooking can further refine how the crawler captures content.
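For very long pages, you can combine capture with full-page scanning. A minimal sketch, assuming the `scan_full_page` and `scroll_delay` parameters described in the lazy-loading docs apply here as well:

```python
from crawl4ai import CrawlerRunConfig, CacheMode

# Sketch: scroll the whole page first so lazy content renders, then capture both formats.
long_page_cfg = CrawlerRunConfig(
    pdf=True,              # export the rendered page as a PDF
    screenshot=True,       # also capture a screenshot
    scan_full_page=True,   # scroll top-to-bottom to trigger lazy loading
    scroll_delay=0.5,      # pause (seconds) between scroll steps
    cache_mode=CacheMode.BYPASS,
)
```

Pass this config to `crawler.arun()` exactly as in the example above.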
---
## 3. Handling SSL Certificates
If you need to verify or export a site's SSL certificate—for compliance, debugging, or data analysis—Crawl4AI can fetch it during the crawl:
```python
import asyncio, os
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode
async def main():
tmp_dir = os.path.join(os.getcwd(), "tmp")
os.makedirs(tmp_dir, exist_ok=True)
config = CrawlerRunConfig(
fetch_ssl_certificate=True,
cache_mode=CacheMode.BYPASS
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url="https://example.com", config=config)
if result.success and result.ssl_certificate:
cert = result.ssl_certificate
print("\nCertificate Information:")
print(f"Issuer (CN): {cert.issuer.get('CN', '')}")
print(f"Valid until: {cert.valid_until}")
print(f"Fingerprint: {cert.fingerprint}")
# Export in multiple formats:
cert.to_json(os.path.join(tmp_dir, "certificate.json"))
cert.to_pem(os.path.join(tmp_dir, "certificate.pem"))
cert.to_der(os.path.join(tmp_dir, "certificate.der"))
print("\nCertificate exported to JSON/PEM/DER in 'tmp' folder.")
else:
print("[ERROR] No certificate or crawl failed.")
if __name__ == "__main__":
asyncio.run(main())
```
**Key Points**
- **`fetch_ssl_certificate=True`** triggers certificate retrieval.
- `result.ssl_certificate` includes methods (`to_json`, `to_pem`, `to_der`) for saving in various formats (handy for server config, Java keystores, etc.).
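Once exported, the JSON file can be inspected offline like any other data file. A small sketch (the key names in the exported JSON are an assumption based on the attributes shown above; adjust them to what `to_json` actually writes):

```python
import json

# Load the certificate previously exported with cert.to_json(...)
with open("tmp/certificate.json") as f:
    cert_data = json.load(f)

# Assumed keys mirroring the attributes used above (issuer, valid_until, fingerprint).
print(cert_data.get("issuer"))
print(cert_data.get("valid_until"))
print(cert_data.get("fingerprint"))
```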
---
## 4. Custom Headers
Sometimes you need to set custom headers (e.g., language preferences, authentication tokens, or specialized user-agent strings). You can do this in multiple ways:
```python
import asyncio
from crawl4ai import AsyncWebCrawler
async def main():
# Option 1: Set headers at the crawler strategy level
crawler1 = AsyncWebCrawler(
# The underlying strategy can accept headers in its constructor
crawler_strategy=None # We'll override below for clarity
)
crawler1.crawler_strategy.update_user_agent("MyCustomUA/1.0")
crawler1.crawler_strategy.set_custom_headers({
"Accept-Language": "fr-FR,fr;q=0.9"
})
result1 = await crawler1.arun("https://www.example.com")
print("Example 1 result success:", result1.success)
# Option 2: Pass headers directly to `arun()`
crawler2 = AsyncWebCrawler()
result2 = await crawler2.arun(
url="https://www.example.com",
headers={"Accept-Language": "es-ES,es;q=0.9"}
)
print("Example 2 result success:", result2.success)
if __name__ == "__main__":
asyncio.run(main())
```
**Notes**
- Some sites may react differently to certain headers (e.g., `Accept-Language`).
- If you need advanced user-agent randomization or client hints, see [Identity-Based Crawling (Anti-Bot)](./identity-based-crawling.md) or use `UserAgentGenerator`.
---
## 5. Session Persistence & Local Storage
Crawl4AI can preserve cookies and localStorage so you can continue where you left off—ideal for logging into sites or skipping repeated auth flows.
### 5.1 `storage_state`
```python
import asyncio
from crawl4ai import AsyncWebCrawler
async def main():
storage_dict = {
"cookies": [
{
"name": "session",
"value": "abcd1234",
"domain": "example.com",
"path": "/",
"expires": 1699999999.0,
"httpOnly": False,
"secure": False,
"sameSite": "None"
}
],
"origins": [
{
"origin": "https://example.com",
"localStorage": [
{"name": "token", "value": "my_auth_token"}
]
}
]
}
# Provide the storage state as a dictionary to start "already logged in"
async with AsyncWebCrawler(
headless=True,
storage_state=storage_dict
) as crawler:
result = await crawler.arun("https://example.com/protected")
if result.success:
print("Protected page content length:", len(result.html))
else:
print("Failed to crawl protected page")
if __name__ == "__main__":
asyncio.run(main())
```
### 5.2 Exporting & Reusing State
You can sign in once, export the browser context, and reuse it later—without re-entering credentials.
- **`await context.storage_state(path="my_storage.json")`**: Exports cookies, localStorage, etc. to a file.
- Provide `storage_state="my_storage.json"` on subsequent runs to skip the login step.
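A minimal sketch of the sign-in-once flow using Playwright directly (the login URL, selectors, and credentials below are placeholders, not part of Crawl4AI's API):

```python
import asyncio
from playwright.async_api import async_playwright

async def export_login_state():
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=False)
        context = await browser.new_context()
        page = await context.new_page()
        # Hypothetical login flow - replace the URL and selectors with your site's.
        await page.goto("https://example.com/login")
        await page.fill("input[name='username']", "myuser")
        await page.fill("input[name='password']", "mypass")
        await page.click("button[type='submit']")
        await page.wait_for_load_state("networkidle")
        # Export cookies + localStorage so later crawls can reuse the session.
        await context.storage_state(path="my_storage.json")
        await browser.close()

asyncio.run(export_login_state())
```

After that, `storage_state="my_storage.json"` in your crawler config picks up the saved session without repeating the login.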
**See**: [Detailed session management tutorial](./session-management.md) or [Explanations → Browser Context & Managed Browser](./identity-based-crawling.md) for more advanced scenarios (like multi-step logins, or capturing after interactive pages).
---
## Putting It All Together
Here's a snippet that combines multiple “advanced” features (proxy, PDF, screenshot, SSL, custom headers, and session reuse) into one run. Normally, you'd tailor each setting to your project's needs.
```python
import os, asyncio
from base64 import b64decode
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
async def main():
# 1. Browser config with proxy + headless
browser_cfg = BrowserConfig(
proxy_config={
"server": "http://proxy.example.com:8080",
"username": "myuser",
"password": "mypass",
},
headless=True,
)
# 2. Crawler config with PDF, screenshot, SSL, custom headers, and ignoring caches
crawler_cfg = CrawlerRunConfig(
pdf=True,
screenshot=True,
fetch_ssl_certificate=True,
cache_mode=CacheMode.BYPASS,
headers={"Accept-Language": "en-US,en;q=0.8"},
storage_state="my_storage.json", # Reuse session from a previous sign-in
verbose=True,
)
# 3. Crawl
async with AsyncWebCrawler(config=browser_cfg) as crawler:
result = await crawler.arun(
url = "https://secure.example.com/protected",
config=crawler_cfg
)
if result.success:
print("[OK] Crawled the secure page. Links found:", len(result.links.get("internal", [])))
# Save PDF & screenshot
if result.pdf:
with open("result.pdf", "wb") as f:
f.write(b64decode(result.pdf))
if result.screenshot:
with open("result.png", "wb") as f:
f.write(b64decode(result.screenshot))
# Check SSL cert
if result.ssl_certificate:
print("SSL Issuer CN:", result.ssl_certificate.issuer.get("CN", ""))
else:
print("[ERROR]", result.error_message)
if __name__ == "__main__":
asyncio.run(main())
```
---
## Conclusion & Next Steps
You've now explored several **advanced** features:
- **Proxy Usage**
- **PDF & Screenshot** capturing for large or critical pages
- **SSL Certificate** retrieval & exporting
- **Custom Headers** for language or specialized requests
- **Session Persistence** via storage state
With these power tools, you can build robust scraping workflows that mimic real user behavior, handle secure sites, capture detailed snapshots, and manage sessions across multiple runs—streamlining your entire data collection pipeline.
**Last Updated**: 2025-01-01

View File

@@ -1,136 +0,0 @@
# Content Processing
Crawl4AI provides powerful content processing capabilities that help you extract clean, relevant content from web pages. This guide covers content cleaning, media handling, link analysis, and metadata extraction.
## Media Processing
Crawl4AI provides comprehensive media extraction and analysis capabilities. It automatically detects and processes various types of media elements while maintaining their context and relevance.
### Image Processing
The library handles various image scenarios, including:
- Regular images
- Lazy-loaded images
- Background images
- Responsive images
- Image metadata and context
```python
from crawl4ai.async_configs import CrawlerRunConfig
config = CrawlerRunConfig()
result = await crawler.arun(url="https://example.com", config=config)
for image in result.media["images"]:
# Each image includes rich metadata
print(f"Source: {image['src']}")
print(f"Alt text: {image['alt']}")
print(f"Description: {image['desc']}")
print(f"Context: {image['context']}") # Surrounding text
print(f"Relevance score: {image['score']}") # 0-10 score
```
### Handling Lazy-Loaded Content
Crawl4AI already handles lazy loading for media elements. You can customize the wait time for lazy-loaded content with `CrawlerRunConfig`:
```python
config = CrawlerRunConfig(
wait_for="css:img[data-src]", # Wait for lazy images
delay_before_return_html=2.0 # Additional wait time
)
result = await crawler.arun(url="https://example.com", config=config)
```
### Video and Audio Content
The library extracts video and audio elements with their metadata:
```python
from crawl4ai.async_configs import CrawlerRunConfig
config = CrawlerRunConfig()
result = await crawler.arun(url="https://example.com", config=config)
# Process videos
for video in result.media["videos"]:
print(f"Video source: {video['src']}")
print(f"Type: {video['type']}")
print(f"Duration: {video.get('duration')}")
print(f"Thumbnail: {video.get('poster')}")
# Process audio
for audio in result.media["audios"]:
print(f"Audio source: {audio['src']}")
print(f"Type: {audio['type']}")
print(f"Duration: {audio.get('duration')}")
```
## Link Analysis
Crawl4AI provides sophisticated link analysis capabilities, helping you understand the relationship between pages and identify important navigation patterns.
### Link Classification
The library automatically categorizes links into:
- Internal links (same domain)
- External links (different domains)
- Social media links
- Navigation links
- Content links
```python
from crawl4ai.async_configs import CrawlerRunConfig
config = CrawlerRunConfig()
result = await crawler.arun(url="https://example.com", config=config)
# Analyze internal links
for link in result.links["internal"]:
print(f"Internal: {link['href']}")
print(f"Link text: {link['text']}")
print(f"Context: {link['context']}") # Surrounding text
print(f"Type: {link['type']}") # nav, content, etc.
# Analyze external links
for link in result.links["external"]:
print(f"External: {link['href']}")
print(f"Domain: {link['domain']}")
print(f"Type: {link['type']}")
```
### Smart Link Filtering
Control which links are included in the results with `CrawlerRunConfig`:
```python
config = CrawlerRunConfig(
exclude_external_links=True, # Remove external links
exclude_social_media_links=True, # Remove social media links
exclude_social_media_domains=[ # Custom social media domains
"facebook.com", "twitter.com", "instagram.com"
],
exclude_domains=["ads.example.com"] # Exclude specific domains
)
result = await crawler.arun(url="https://example.com", config=config)
```
## Metadata Extraction
Crawl4AI automatically extracts and processes page metadata, providing valuable information about the content:
```python
from crawl4ai.async_configs import CrawlerRunConfig
config = CrawlerRunConfig()
result = await crawler.arun(url="https://example.com", config=config)
metadata = result.metadata
print(f"Title: {metadata['title']}")
print(f"Description: {metadata['description']}")
print(f"Keywords: {metadata['keywords']}")
print(f"Author: {metadata['author']}")
print(f"Published Date: {metadata['published_date']}")
print(f"Modified Date: {metadata['modified_date']}")
print(f"Language: {metadata['language']}")
```

View File

@@ -0,0 +1,12 @@
# Crawl Dispatcher
We're excited to announce a **Crawl Dispatcher** module that can handle **thousands** of crawling tasks simultaneously. By efficiently managing system resources (memory, CPU, network), this dispatcher ensures high-performance data extraction at scale. It also provides **real-time monitoring** of each crawler's status, memory usage, and overall progress.
Stay tuned—this feature is **coming soon** in an upcoming release of Crawl4AI! For the latest news, keep an eye on our changelogs and follow [@unclecode](https://twitter.com/unclecode) on X.
Below is a **sample** of how the dispatcher's performance monitor might look in action:
![Crawl Dispatcher Performance Monitor](../assets/images/dispatcher.png)
We can't wait to bring you this streamlined, **scalable** approach to multi-URL crawling—**watch this space** for updates!

View File

@@ -0,0 +1,118 @@
# Download Handling in Crawl4AI
This guide explains how to use Crawl4AI to handle file downloads during crawling. You'll learn how to trigger downloads, specify download locations, and access downloaded files.
## Enabling Downloads
To enable downloads, set the `accept_downloads` parameter in the `BrowserConfig` object and pass it to the crawler.
```python
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_configs import BrowserConfig
async def main():
config = BrowserConfig(accept_downloads=True) # Enable downloads globally
async with AsyncWebCrawler(config=config) as crawler:
# ... your crawling logic ...
asyncio.run(main())
```
## Specifying Download Location
Specify the download directory using the `downloads_path` attribute in the `BrowserConfig` object. If not provided, Crawl4AI defaults to creating a "downloads" directory inside the `.crawl4ai` folder in your home directory.
```python
import os
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_configs import BrowserConfig
downloads_path = os.path.join(os.getcwd(), "my_downloads") # Custom download path
os.makedirs(downloads_path, exist_ok=True)
config = BrowserConfig(accept_downloads=True, downloads_path=downloads_path)
async def main():
async with AsyncWebCrawler(config=config) as crawler:
result = await crawler.arun(url="https://example.com")
# ...
```
## Triggering Downloads
Downloads are typically triggered by user interactions on a web page, such as clicking a download button. Use `js_code` in `CrawlerRunConfig` to simulate these actions and `wait_for` to allow sufficient time for downloads to start.
```python
from crawl4ai.async_configs import CrawlerRunConfig
config = CrawlerRunConfig(
js_code="""
const downloadLink = document.querySelector('a[href$=".exe"]');
if (downloadLink) {
downloadLink.click();
}
""",
wait_for=5 # Wait 5 seconds for the download to start
)
result = await crawler.arun(url="https://www.python.org/downloads/", config=config)
```
## Accessing Downloaded Files
The `downloaded_files` attribute of the `CrawlResult` object contains paths to downloaded files.
```python
if result.downloaded_files:
print("Downloaded files:")
for file_path in result.downloaded_files:
print(f"- {file_path}")
file_size = os.path.getsize(file_path)
print(f"- File size: {file_size} bytes")
else:
print("No files downloaded.")
```
## Example: Downloading Multiple Files
```python
import asyncio
import os
from pathlib import Path
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
async def download_multiple_files(url: str, download_path: str):
config = BrowserConfig(accept_downloads=True, downloads_path=download_path)
async with AsyncWebCrawler(config=config) as crawler:
run_config = CrawlerRunConfig(
js_code="""
const downloadLinks = document.querySelectorAll('a[download]');
for (const link of downloadLinks) {
link.click();
// Delay between clicks
await new Promise(r => setTimeout(r, 2000));
}
""",
wait_for=10 # Wait for all downloads to start
)
result = await crawler.arun(url=url, config=run_config)
if result.downloaded_files:
print("Downloaded files:")
for file in result.downloaded_files:
print(f"- {file}")
else:
print("No files downloaded.")
# Usage
download_path = os.path.join(Path.home(), ".crawl4ai", "downloads")
os.makedirs(download_path, exist_ok=True)
asyncio.run(download_multiple_files("https://www.python.org/downloads/windows/", download_path))
```
## Important Considerations
- **Browser Context:** Downloads are managed within the browser context. Ensure `js_code` correctly targets the download triggers on the webpage.
- **Timing:** Use `wait_for` in `CrawlerRunConfig` to manage download timing.
- **Error Handling:** Handle errors to manage failed downloads or incorrect paths gracefully (see the sketch after this list).
- **Security:** Scan downloaded files for potential security threats before use.
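For the error-handling point above, a small defensive sketch that verifies each reported download before you use it (it assumes `result` is a `CrawlResult` from a download-enabled crawl, as in the examples above):

```python
import os

for file_path in (result.downloaded_files or []):
    # Confirm the file actually landed on disk and is non-empty before processing it.
    if os.path.isfile(file_path) and os.path.getsize(file_path) > 0:
        print(f"OK: {file_path} ({os.path.getsize(file_path)} bytes)")
    else:
        print(f"Missing or empty download: {file_path}")
```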
All download-related configuration in this guide uses `BrowserConfig` and `CrawlerRunConfig`, consistent with the rest of the Crawl4AI API.

View File

@@ -1,121 +1,254 @@
# Hooks & Auth for AsyncWebCrawler
# Hooks & Auth in AsyncWebCrawler
Crawl4AI's `AsyncWebCrawler` allows you to customize the behavior of the web crawler using hooks. Hooks are asynchronous functions called at specific points in the crawling process, allowing you to modify the crawler's behavior or perform additional actions. This updated documentation demonstrates how to use hooks, including the new `on_page_context_created` hook, and ensures compatibility with `BrowserConfig` and `CrawlerRunConfig`.
Crawl4AI's **hooks** let you customize the crawler at specific points in the pipeline:
## Example: Using Crawler Hooks with AsyncWebCrawler
1. **`on_browser_created`** – After browser creation.
2. **`on_page_context_created`** – After a new context & page are created.
3. **`before_goto`** – Just before navigating to a page.
4. **`after_goto`** – Right after navigation completes.
5. **`on_user_agent_updated`** – Whenever the user agent changes.
6. **`on_execution_started`** – Once custom JavaScript execution begins.
7. **`before_retrieve_html`** – Just before the crawler retrieves final HTML.
8. **`before_return_html`** – Right before returning the HTML content.
In this example, we'll:
**Important**: Avoid heavy tasks in `on_browser_created` since you don't yet have a page context. If you need to *log in*, do so in **`on_page_context_created`**.
1. Configure the browser and set up authentication when it's created.
2. Apply custom routing and initial actions when the page context is created.
3. Add custom headers before navigating to the URL.
4. Log the current URL after navigation.
5. Perform actions after JavaScript execution.
6. Log the length of the HTML before returning it.
> note "Important Hook Usage Warning"
**Avoid Misusing Hooks**: Do not manipulate page objects in the wrong hook or at the wrong time, as it can crash the pipeline or produce incorrect results. A common mistake is attempting to handle authentication prematurely—such as creating or closing pages in `on_browser_created`.
### Hook Definitions
> **Use the Right Hook for Auth**: If you need to log in or set tokens, use `on_page_context_created`. This ensures you have a valid page/context to work with, without disrupting the main crawling flow.
> **Identity-Based Crawling**: For robust auth, consider identity-based crawling (or passing a session ID) to preserve state. Run your initial login steps in a separate, well-defined process, then feed that session to your main crawl—rather than shoehorning complex authentication into early hooks. Check out [Identity-Based Crawling](../advanced/identity-based-crawling.md) for more details.
> **Be Cautious**: Overwriting or removing elements in the wrong hook can compromise the final crawl. Keep hooks focused on smaller tasks (like route filters, custom headers), and let your main logic (crawling, data extraction) proceed normally.
Below is an example demonstrating the recommended usage.
---
## Example: Using Hooks in AsyncWebCrawler
```python
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from playwright.async_api import Page, Browser, BrowserContext
import json
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
from playwright.async_api import Page, BrowserContext
def log_routing(route):
# Example: block loading images
if route.request.resource_type == "image":
print(f"[HOOK] Blocking image request: {route.request.url}")
asyncio.create_task(route.abort())
else:
asyncio.create_task(route.continue_())
async def on_browser_created(browser: Browser, **kwargs):
print("[HOOK] on_browser_created")
# Example: Set browser viewport size and log in
context = await browser.new_context(viewport={"width": 1920, "height": 1080})
page = await context.new_page()
await page.goto("https://example.com/login")
await page.fill("input[name='username']", "testuser")
await page.fill("input[name='password']", "password123")
await page.click("button[type='submit']")
await page.wait_for_selector("#welcome")
await context.add_cookies([{"name": "auth_token", "value": "abc123", "url": "https://example.com"}])
await page.close()
await context.close()
async def on_page_context_created(context: BrowserContext, page: Page, **kwargs):
print("[HOOK] on_page_context_created")
await context.route("**", log_routing)
async def before_goto(page: Page, context: BrowserContext, **kwargs):
print("[HOOK] before_goto")
await page.set_extra_http_headers({"X-Test-Header": "test"})
async def after_goto(page: Page, context: BrowserContext, **kwargs):
print("[HOOK] after_goto")
print(f"Current URL: {page.url}")
async def on_execution_started(page: Page, context: BrowserContext, **kwargs):
print("[HOOK] on_execution_started")
await page.evaluate("console.log('Custom JS executed')")
async def before_return_html(page: Page, context: BrowserContext, html: str, **kwargs):
print("[HOOK] before_return_html")
print(f"HTML length: {len(html)}")
return page
```
### Using the Hooks with AsyncWebCrawler
```python
async def main():
print("\n🔗 Using Crawler Hooks: Customize AsyncWebCrawler with hooks!")
print("🔗 Hooks Example: Demonstrating recommended usage")
# Configure browser and crawler settings
# 1) Configure the browser
browser_config = BrowserConfig(
headless=True,
viewport_width=1920,
viewport_height=1080
verbose=True
)
# 2) Configure the crawler run
crawler_run_config = CrawlerRunConfig(
js_code="window.scrollTo(0, document.body.scrollHeight);",
wait_for="footer"
wait_for="body",
cache_mode=CacheMode.BYPASS
)
# Initialize crawler
async with AsyncWebCrawler(config=browser_config) as crawler:
crawler.crawler_strategy.set_hook("on_browser_created", on_browser_created)
crawler.crawler_strategy.set_hook("on_page_context_created", on_page_context_created)
crawler.crawler_strategy.set_hook("before_goto", before_goto)
crawler.crawler_strategy.set_hook("after_goto", after_goto)
crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)
crawler.crawler_strategy.set_hook("before_return_html", before_return_html)
# 3) Create the crawler instance
crawler = AsyncWebCrawler(config=browser_config)
# Run the crawler
result = await crawler.arun(url="https://example.com", config=crawler_run_config)
#
# Define Hook Functions
#
print("\n📦 Crawler Hooks Result:")
print(result)
async def on_browser_created(browser, **kwargs):
# Called once the browser instance is created (but no pages or contexts yet)
print("[HOOK] on_browser_created - Browser created successfully!")
# Typically, do minimal setup here if needed
return browser
asyncio.run(main())
async def on_page_context_created(page: Page, context: BrowserContext, **kwargs):
# Called right after a new page + context are created (ideal for auth or route config).
print("[HOOK] on_page_context_created - Setting up page & context.")
# Example 1: Route filtering (e.g., block images)
async def route_filter(route):
if route.request.resource_type == "image":
print(f"[HOOK] Blocking image request: {route.request.url}")
await route.abort()
else:
await route.continue_()
await context.route("**", route_filter)
# Example 2: (Optional) Simulate a login scenario
# (We do NOT create or close pages here, just do quick steps if needed)
# e.g., await page.goto("https://example.com/login")
# e.g., await page.fill("input[name='username']", "testuser")
# e.g., await page.fill("input[name='password']", "password123")
# e.g., await page.click("button[type='submit']")
# e.g., await page.wait_for_selector("#welcome")
# e.g., await context.add_cookies([...])
# Then continue
# Example 3: Adjust the viewport
await page.set_viewport_size({"width": 1080, "height": 600})
return page
async def before_goto(
page: Page, context: BrowserContext, url: str, **kwargs
):
# Called before navigating to each URL.
print(f"[HOOK] before_goto - About to navigate: {url}")
# e.g., inject custom headers
await page.set_extra_http_headers({
"Custom-Header": "my-value"
})
return page
async def after_goto(
page: Page, context: BrowserContext,
url: str, response, **kwargs
):
# Called after navigation completes.
print(f"[HOOK] after_goto - Successfully loaded: {url}")
# e.g., wait for a certain element if we want to verify
try:
await page.wait_for_selector('.content', timeout=1000)
print("[HOOK] Found .content element!")
except:
print("[HOOK] .content not found, continuing anyway.")
return page
async def on_user_agent_updated(
page: Page, context: BrowserContext,
user_agent: str, **kwargs
):
# Called whenever the user agent updates.
print(f"[HOOK] on_user_agent_updated - New user agent: {user_agent}")
return page
async def on_execution_started(page: Page, context: BrowserContext, **kwargs):
# Called after custom JavaScript execution begins.
print("[HOOK] on_execution_started - JS code is running!")
return page
async def before_retrieve_html(page: Page, context: BrowserContext, **kwargs):
# Called before final HTML retrieval.
print("[HOOK] before_retrieve_html - We can do final actions")
# Example: Scroll again
await page.evaluate("window.scrollTo(0, document.body.scrollHeight);")
return page
async def before_return_html(
page: Page, context: BrowserContext, html: str, **kwargs
):
# Called just before returning the HTML in the result.
print(f"[HOOK] before_return_html - HTML length: {len(html)}")
return page
#
# Attach Hooks
#
crawler.crawler_strategy.set_hook("on_browser_created", on_browser_created)
crawler.crawler_strategy.set_hook(
"on_page_context_created", on_page_context_created
)
crawler.crawler_strategy.set_hook("before_goto", before_goto)
crawler.crawler_strategy.set_hook("after_goto", after_goto)
crawler.crawler_strategy.set_hook(
"on_user_agent_updated", on_user_agent_updated
)
crawler.crawler_strategy.set_hook(
"on_execution_started", on_execution_started
)
crawler.crawler_strategy.set_hook(
"before_retrieve_html", before_retrieve_html
)
crawler.crawler_strategy.set_hook(
"before_return_html", before_return_html
)
await crawler.start()
# 4) Run the crawler on an example page
url = "https://example.com"
result = await crawler.arun(url, config=crawler_run_config)
if result.success:
print("\nCrawled URL:", result.url)
print("HTML length:", len(result.html))
else:
print("Error:", result.error_message)
await crawler.close()
if __name__ == "__main__":
asyncio.run(main())
```
### Explanation of Hooks
---
- **`on_browser_created`**: Called when the browser is created. Use this to configure the browser or handle authentication (e.g., logging in and setting cookies).
- **`on_page_context_created`**: Called when a new page context is created. Use this to apply routing, block resources, or inject custom logic before navigating to the URL.
- **`before_goto`**: Called before navigating to the URL. Use this to add custom headers or perform other pre-navigation actions.
- **`after_goto`**: Called after navigation. Use this to verify content or log the URL.
- **`on_execution_started`**: Called after executing custom JavaScript. Use this to perform additional actions.
- **`before_return_html`**: Called before returning the HTML content. Use this to log details or preprocess the content.
## Hook Lifecycle Summary
### Additional Customizations
1. **`on_browser_created`**:
- Browser is up, but **no** pages or contexts yet.
   - Light setup only—don't try to open or close pages here (that belongs in `on_page_context_created`).
- **Resource Management**: Use `on_page_context_created` to block or modify requests (e.g., block images, fonts, or third-party scripts).
- **Dynamic Headers**: Use `before_goto` to add or modify headers dynamically based on the URL.
- **Authentication**: Use `on_browser_created` to handle login processes and set authentication cookies or tokens.
- **Content Analysis**: Use `before_return_html` to analyze or modify the extracted HTML content.
2. **`on_page_context_created`**:
- Perfect for advanced **auth** or route blocking.
   - You have a **page** + **context** ready but haven't navigated to the target URL yet.
These hooks provide powerful customization options for tailoring the crawling process to your needs.
3. **`before_goto`**:
- Right before navigation. Typically used for setting **custom headers** or logging the target URL.
4. **`after_goto`**:
- After page navigation is done. Good place for verifying content or waiting on essential elements.
5. **`on_user_agent_updated`**:
- Whenever the user agent changes (for stealth or different UA modes).
6. **`on_execution_started`**:
- If you set `js_code` or run custom scripts, this runs once your JS is about to start.
7. **`before_retrieve_html`**:
- Just before the final HTML snapshot is taken. Often you do a final scroll or lazy-load triggers here.
8. **`before_return_html`**:
- The last hook before returning HTML to the `CrawlResult`. Good for logging HTML length or minor modifications.
---
## When to Handle Authentication
**Recommended**: Use **`on_page_context_created`** if you need to:
- Navigate to a login page or fill forms
- Set cookies or localStorage tokens
- Block resource routes to avoid ads
This ensures the newly created context is under your control **before** `arun()` navigates to the main URL.
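A minimal login sketch inside `on_page_context_created` (the URL, selectors, and `#dashboard` check are placeholders for your own site):

```python
from playwright.async_api import Page, BrowserContext

async def login_hook(page: Page, context: BrowserContext, **kwargs):
    # Authenticate before arun() navigates to the main URL.
    await page.goto("https://example.com/login")
    await page.fill("input[name='username']", "testuser")
    await page.fill("input[name='password']", "password123")
    await page.click("button[type='submit']")
    await page.wait_for_selector("#dashboard")  # hypothetical post-login element
    return page

# Attach it exactly like the other hooks:
# crawler.crawler_strategy.set_hook("on_page_context_created", login_hook)
```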
---
## Additional Considerations
- **Session Management**: If you want multiple `arun()` calls to reuse a single session, pass `session_id=` in your `CrawlerRunConfig` (see the sketch after this list). Hooks remain the same.
- **Performance**: Hooks can slow down crawling if they do heavy tasks. Keep them concise.
- **Error Handling**: If a hook fails, the overall crawl might fail. Catch exceptions or handle them gracefully.
- **Concurrency**: If you run `arun_many()`, each URL triggers these hooks in parallel. Ensure your hooks are thread/async-safe.
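For the session-management point, a minimal sketch of reusing one session across calls, assuming `session_id` is accepted by `CrawlerRunConfig` as noted above:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def multi_step():
    async with AsyncWebCrawler() as crawler:
        cfg = CrawlerRunConfig(session_id="my_session")  # same id -> same page/context reused
        step1 = await crawler.arun("https://example.com/step-1", config=cfg)
        step2 = await crawler.arun("https://example.com/step-2", config=cfg)
        print(step1.success, step2.success)

asyncio.run(multi_step())
```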
---
## Conclusion
Hooks provide **fine-grained** control over:
- **Browser** creation (light tasks only)
- **Page** and **context** creation (auth, route blocking)
- **Navigation** phases
- **Final HTML** retrieval
Follow the recommended usage:
- **Login** or advanced tasks in `on_page_context_created`
- **Custom headers** or logs in `before_goto` / `after_goto`
- **Scrolling** or final checks in `before_retrieve_html` / `before_return_html`

View File

@@ -0,0 +1,180 @@
# Preserve Your Identity with Crawl4AI
Crawl4AI empowers you to navigate and interact with the web using your **authentic digital identity**, ensuring you're recognized as a human and not mistaken for a bot. This tutorial covers:
1. **Managed Browsers** – The recommended approach for persistent profiles and identity-based crawling.
2. **Magic Mode** – A simplified fallback solution for quick automation without persistent identity.
---
## 1. Managed Browsers: Your Digital Identity Solution
**Managed Browsers** let developers create and use **persistent browser profiles**. These profiles store local storage, cookies, and other session data, letting you browse as your **real self**—complete with logins, preferences, and cookies.
### Key Benefits
- **Authentic Browsing Experience**: Retain session data and browser fingerprints as though you're a normal user.
- **Effortless Configuration**: Once you log in or solve CAPTCHAs in your chosen data directory, you can re-run crawls without repeating those steps.
- **Empowered Data Access**: If you can see the data in your own browser, you can automate its retrieval with your genuine identity.
---
The next section covers **creating a user-data directory** using **Playwright's Chromium** binary rather than a system-wide Chrome/Edge. We'll show how to **locate** that binary and launch it with a `--user-data-dir` argument to set up your profile. You can then point `BrowserConfig.user_data_dir` to that folder for subsequent crawls.
---
## 2. Creating a User Data Directory (Command-Line Approach via Playwright)
If you installed Crawl4AI (which installs Playwright under the hood), you already have a Playwright-managed Chromium on your system. Follow these steps to launch that **Chromium** from your command line, specifying a **custom** data directory:
1. **Find** the Playwright Chromium binary:
- On most systems, installed browsers go under a `~/.cache/ms-playwright/` folder or similar path.
- To see an overview of installed browsers, run:
```bash
python -m playwright install --dry-run
```
or
```bash
playwright install --dry-run
```
(depending on your environment). This shows where Playwright keeps Chromium.
- For instance, you might see a path like:
```
~/.cache/ms-playwright/chromium-1234/chrome-linux/chrome
```
on Linux, or a corresponding folder on macOS/Windows.
2. **Launch** the Playwright Chromium binary with a **custom** user-data directory:
```bash
# Linux example
~/.cache/ms-playwright/chromium-1234/chrome-linux/chrome \
--user-data-dir=/home/<you>/my_chrome_profile
```
```bash
   # macOS example (Playwright's internal binary)
~/Library/Caches/ms-playwright/chromium-1234/chrome-mac/Chromium.app/Contents/MacOS/Chromium \
--user-data-dir=/Users/<you>/my_chrome_profile
```
```powershell
# Windows example (PowerShell/cmd)
"C:\Users\<you>\AppData\Local\ms-playwright\chromium-1234\chrome-win\chrome.exe" ^
--user-data-dir="C:\Users\<you>\my_chrome_profile"
```
**Replace** the path with the actual subfolder indicated in your `ms-playwright` cache structure.
- This **opens** a fresh Chromium with your new or existing data folder.
- **Log into** any sites or configure your browser the way you want.
- **Close** when done—your profile data is saved in that folder.
3. **Use** that folder in **`BrowserConfig.user_data_dir`**:
```python
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
browser_config = BrowserConfig(
headless=True,
use_managed_browser=True,
user_data_dir="/home/<you>/my_chrome_profile",
browser_type="chromium"
)
```
- Next time you run your code, it reuses that folder—**preserving** your session data, cookies, local storage, etc.
---
## 3. Using Managed Browsers in Crawl4AI
Once you have a data directory with your session data, pass it to **`BrowserConfig`**:
```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
async def main():
# 1) Reference your persistent data directory
browser_config = BrowserConfig(
headless=True, # 'True' for automated runs
verbose=True,
use_managed_browser=True, # Enables persistent browser strategy
browser_type="chromium",
user_data_dir="/path/to/my-chrome-profile"
)
# 2) Standard crawl config
crawl_config = CrawlerRunConfig(
wait_for="css:.logged-in-content"
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url="https://example.com/private", config=crawl_config)
if result.success:
print("Successfully accessed private data with your identity!")
else:
print("Error:", result.error_message)
if __name__ == "__main__":
asyncio.run(main())
```
### Workflow
1. **Login** externally (via CLI or your normal Chrome with `--user-data-dir=...`).
2. **Close** that browser.
3. **Use** the same folder in `user_data_dir=` in Crawl4AI.
4. **Crawl** – The site sees your identity as if you're the same user who just logged in.
---
## 4. Magic Mode: Simplified Automation
If you **don't** need a persistent profile or identity-based approach, **Magic Mode** offers a quick way to simulate human-like browsing without storing long-term data.
```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
config=CrawlerRunConfig(
magic=True, # Simplifies a lot of interaction
remove_overlay_elements=True,
page_timeout=60000
)
)
```
**Magic Mode**:
- Simulates a user-like experience
- Randomizes user agent & navigator
- Randomizes interactions & timings
- Masks automation signals
- Attempts pop-up handling
**But** it's no substitute for **true** user-based sessions if you want a fully legitimate identity-based solution.
---
## 5. Comparing Managed Browsers vs. Magic Mode
| Feature | **Managed Browsers** | **Magic Mode** |
|----------------------------|---------------------------------------------------------------|-----------------------------------------------------|
| **Session Persistence** | Full localStorage/cookies retained in user_data_dir | No persistent data (fresh each run) |
| **Genuine Identity** | Real user profile with full rights & preferences | Emulated user-like patterns, but no actual identity |
| **Complex Sites** | Best for login-gated sites or heavy config | Simple tasks, minimal login or config needed |
| **Setup** | External creation of user_data_dir, then use in Crawl4AI | Single-line approach (`magic=True`) |
| **Reliability** | Extremely consistent (same data across runs) | Good for smaller tasks, can be less stable |
---
## 6. Summary
- **Create** your user-data directory by launching Chrome/Chromium externally with `--user-data-dir=/some/path`.
- **Log in** or configure sites as needed, then close the browser.
- **Reference** that folder in `BrowserConfig(user_data_dir="...")` + `use_managed_browser=True`.
- Enjoy **persistent** sessions that reflect your real identity.
- If you only need quick, ephemeral automation, **Magic Mode** might suffice.
**Recommended**: Always prefer a **Managed Browser** for robust, identity-based crawling and simpler interactions with complex sites. Use **Magic Mode** for quick tasks or prototypes where persistent data is unnecessary.
With these approaches, you preserve your **authentic** browsing environment, ensuring the site sees you exactly as a normal user—no repeated logins or wasted time.

View File

@@ -1,156 +0,0 @@
### Preserve Your Identity with Crawl4AI
Crawl4AI empowers you to navigate and interact with the web using your authentic digital identity, ensuring that you are recognized as a human and not mistaken for a bot. This document introduces Managed Browsers, the recommended approach for preserving your rights to access the web, and Magic Mode, a simplified solution for specific scenarios.
---
### Managed Browsers: Your Digital Identity Solution
**Managed Browsers** enable developers to create and use persistent browser profiles. These profiles store local storage, cookies, and other session-related data, allowing you to interact with websites as a recognized user. By leveraging your unique identity, Managed Browsers ensure that your experience reflects your rights as a human browsing the web.
#### Why Use Managed Browsers?
1. **Authentic Browsing Experience**: Managed Browsers retain session data and browser fingerprints, mirroring genuine user behavior.
2. **Effortless Configuration**: Once you interact with the site using the browser (e.g., solving a CAPTCHA), the session data is saved and reused, providing seamless access.
3. **Empowered Data Access**: By using your identity, Managed Browsers empower users to access data they can view on their own screens without artificial restrictions.
#### Steps to Use Managed Browsers
1. **Setup the Browser Configuration**:
```python
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
browser_config = BrowserConfig(
headless=False, # Set to False for initial setup to view browser actions
verbose=True,
user_agent_mode="random",
use_managed_browser=True, # Enables persistent browser sessions
browser_type="chromium",
user_data_dir="/path/to/user_profile_data" # Path to save session data
)
```
2. **Perform an Initial Run**:
- Run the crawler with `headless=False`.
- Manually interact with the site (e.g., solve CAPTCHA or log in).
- The browser session saves cookies, local storage, and other required data.
3. **Subsequent Runs**:
- Switch to `headless=True` for automation.
- The session data is reused, allowing seamless crawling.
#### Example: Extracting Data Using Managed Browsers
```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
async def main():
# Define schema for structured data extraction
schema = {
"name": "Example Data",
"baseSelector": "div.example",
"fields": [
{"name": "title", "selector": "h1", "type": "text"},
{"name": "link", "selector": "a", "type": "attribute", "attribute": "href"}
]
}
# Configure crawler
browser_config = BrowserConfig(
headless=True, # Automate subsequent runs
verbose=True,
use_managed_browser=True,
user_data_dir="/path/to/user_profile_data"
)
crawl_config = CrawlerRunConfig(
extraction_strategy=JsonCssExtractionStrategy(schema),
wait_for="css:div.example" # Wait for the targeted element to load
)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(
url="https://example.com",
config=crawl_config
)
if result.success:
print("Extracted Data:", result.extracted_content)
if __name__ == "__main__":
asyncio.run(main())
```
### Benefits of Managed Browsers Over Other Methods
Managed Browsers eliminate the need for manual detection workarounds by enabling developers to work directly with their identity and user profile data. This approach ensures maximum compatibility with websites and simplifies the crawling process while preserving your right to access data freely.
---
### Magic Mode: Simplified Automation
While Managed Browsers are the preferred approach, **Magic Mode** provides an alternative for scenarios where persistent user profiles are unnecessary or infeasible. Magic Mode automates user-like behavior and simplifies configuration.
#### What Magic Mode Does:
- Simulates human browsing by randomizing interaction patterns and timing.
- Masks browser automation signals.
- Handles cookie popups and modals.
- Modifies navigator properties for enhanced compatibility.
#### Using Magic Mode
```python
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
magic=True # Enables all automation features
)
```
Magic Mode is particularly useful for:
- Quick prototyping when a Managed Browser setup is not available.
- Basic sites requiring minimal interaction or configuration.
#### Example: Combining Magic Mode with Additional Options
```python
async def crawl_with_magic_mode(url: str):
async with AsyncWebCrawler(headless=True) as crawler:
result = await crawler.arun(
url=url,
magic=True,
remove_overlay_elements=True, # Remove popups/modals
page_timeout=60000 # Increased timeout for complex pages
)
return result.markdown if result.success else None
```
### Magic Mode vs. Managed Browsers
While Magic Mode simplifies many tasks, it cannot match the reliability and authenticity of Managed Browsers. By using your identity and persistent profiles, Managed Browsers render Magic Mode largely unnecessary. However, Magic Mode remains a viable fallback for specific situations where user identity is not a factor.
---
### Key Comparison: Managed Browsers vs. Magic Mode
| Feature | **Managed Browsers** | **Magic Mode** |
|-------------------------|------------------------------------------|-------------------------------------|
| **Session Persistence** | Retains cookies and local storage. | No session retention. |
| **Human Interaction** | Uses real user profiles and data. | Simulates human-like patterns. |
| **Complex Sites** | Best suited for heavily configured sites.| Works well with simpler challenges.|
| **Setup Complexity** | Requires initial manual interaction. | Fully automated, one-line setup. |
#### Recommendation:
- Use **Managed Browsers** for reliable, session-based crawling and data extraction.
- Use **Magic Mode** for quick prototyping or when persistent profiles are not required.
---
### Conclusion
- **Use Managed Browsers** to preserve your digital identity and ensure reliable, identity-based crawling with persistent sessions. This approach works seamlessly for even the most complex websites.
- **Leverage Magic Mode** for quick automation or in scenarios where persistent user profiles are not needed.
By combining these approaches, Crawl4AI provides unparalleled flexibility and capability for your crawling needs.

View File

@@ -0,0 +1,104 @@
## Handling Lazy-Loaded Images
Many websites now load images **lazily** as you scroll. If you need to ensure they appear in your final crawl (and in `result.media`), consider:
1. **`wait_for_images=True`** – Wait for images to fully load.
2. **`scan_full_page`** – Force the crawler to scroll the entire page, triggering lazy loads.
3. **`scroll_delay`** – Add small delays between scroll steps.
**Note**: If the site requires multiple “Load More” triggers or complex interactions, see the [Page Interaction docs](../core/page-interaction.md).
### Example: Ensuring Lazy Images Appear
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, BrowserConfig
from crawl4ai.async_configs import CacheMode
async def main():
config = CrawlerRunConfig(
# Force the crawler to wait until images are fully loaded
wait_for_images=True,
# Option 1: If you want to automatically scroll the page to load images
scan_full_page=True, # Tells the crawler to try scrolling the entire page
scroll_delay=0.5, # Delay (seconds) between scroll steps
# Option 2: If the site uses a 'Load More' or JS triggers for images,
# you can also specify js_code or wait_for logic here.
cache_mode=CacheMode.BYPASS,
verbose=True
)
async with AsyncWebCrawler(config=BrowserConfig(headless=True)) as crawler:
result = await crawler.arun("https://www.example.com/gallery", config=config)
if result.success:
images = result.media.get("images", [])
print("Images found:", len(images))
for i, img in enumerate(images[:5]):
print(f"[Image {i}] URL: {img['src']}, Score: {img.get('score','N/A')}")
else:
print("Error:", result.error_message)
if __name__ == "__main__":
asyncio.run(main())
```
**Explanation**:
- **`wait_for_images=True`**
The crawler tries to ensure images have finished loading before finalizing the HTML.
- **`scan_full_page=True`**
Tells the crawler to attempt scrolling from top to bottom. Each scroll step helps trigger lazy loading.
- **`scroll_delay=0.5`**
Pause half a second between each scroll step. Helps the site load images before continuing.
**When to Use**:
- **Lazy-Loading**: If images appear only when the user scrolls into view, `scan_full_page` + `scroll_delay` helps the crawler see them.
- **Heavier Pages**: If a page is extremely long, be mindful that scanning the entire page can be slow. Adjust `scroll_delay` or the max scroll steps as needed.
---
## Combining with Other Link & Media Filters
You can still combine **lazy-load** logic with the usual **exclude_external_images**, **exclude_domains**, or link filtration:
```python
config = CrawlerRunConfig(
wait_for_images=True,
scan_full_page=True,
scroll_delay=0.5,
# Filter out external images if you only want local ones
exclude_external_images=True,
# Exclude certain domains for links
exclude_domains=["spammycdn.com"],
)
```
This approach ensures you see **all** images from the main domain while ignoring external ones, and the crawler physically scrolls the entire page so that lazy-loading triggers.
---
## Tips & Troubleshooting
1. **Long Pages**
- Setting `scan_full_page=True` on extremely long or infinite-scroll pages can be resource-intensive.
- Consider using [hooks](../core/page-interaction.md) or specialized logic to load specific sections or “Load More” triggers repeatedly.
2. **Mixed Image Behavior**
   - Some sites load images in batches as you scroll. If you're missing images, increase your `scroll_delay` or call multiple partial scrolls in a loop with JS code or hooks.
3. **Combining with Dynamic Wait**
   - If the site has a placeholder that only changes to a real image after a certain event, you might do `wait_for="css:img.loaded"` or a custom JS `wait_for` (see the sketch after this list).
4. **Caching**
- If `cache_mode` is enabled, repeated crawls might skip some network fetches. If you suspect caching is missing new images, set `cache_mode=CacheMode.BYPASS` for fresh fetches.
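For the dynamic-wait case above, a small sketch of a JS-based `wait_for` (the `js:` prefix form and the threshold of 10 loaded images are assumptions; tune them to your page):

```python
from crawl4ai import CrawlerRunConfig

config = CrawlerRunConfig(
    # Wait until at least 10 <img> elements report themselves as fully loaded.
    wait_for="js:() => [...document.querySelectorAll('img')].filter(i => i.complete).length >= 10",
    wait_for_images=True,
    scan_full_page=True,
)
```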
---
With **lazy-loading** support, **wait_for_images**, and **scan_full_page** settings, you can capture the entire gallery or feed of images you expect—even if the site only loads them as the user scrolls. Combine these with the standard media filtering and domain exclusion for a complete link & media handling strategy.

View File

@@ -1,52 +0,0 @@
# Magic Mode & Anti-Bot Protection
Crawl4AI provides powerful anti-detection capabilities, with Magic Mode being the simplest and most comprehensive solution.
## Magic Mode
The easiest way to bypass anti-bot protections:
```python
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
magic=True # Enables all anti-detection features
)
```
Magic Mode automatically:
- Masks browser automation signals
- Simulates human-like behavior
- Overrides navigator properties
- Handles cookie consent popups
- Manages browser fingerprinting
- Randomizes timing patterns
## Manual Anti-Bot Options
While Magic Mode is recommended, you can also configure individual anti-detection features:
```python
result = await crawler.arun(
url="https://example.com",
simulate_user=True, # Simulate human behavior
override_navigator=True # Mask automation signals
)
```
Note: When `magic=True` is used, you don't need to set these individual options.
## Example: Handling Protected Sites
```python
async def crawl_protected_site(url: str):
async with AsyncWebCrawler(headless=True) as crawler:
result = await crawler.arun(
url=url,
magic=True,
remove_overlay_elements=True, # Remove popups/modals
page_timeout=60000 # Increased timeout for protection checks
)
return result.markdown if result.success else None
```

View File

@@ -1,188 +0,0 @@
# Creating Browser Instances, Contexts, and Pages
## 1 Introduction
### Overview of Browser Management in Crawl4AI
Crawl4AI's browser management system is designed to provide developers with advanced tools for handling complex web crawling tasks. By managing browser instances, contexts, and pages, Crawl4AI ensures optimal performance, anti-bot measures, and session persistence for high-volume, dynamic web crawling.
### Key Objectives
- **Anti-Bot Handling**:
- Implements stealth techniques to evade detection mechanisms used by modern websites.
- Simulates human-like behavior, such as mouse movements, scrolling, and key presses.
- Supports integration with third-party services to bypass CAPTCHA challenges.
- **Persistent Sessions**:
- Retains session data (cookies, local storage) for workflows requiring user authentication.
- Allows seamless continuation of tasks across multiple runs without re-authentication.
- **Scalable Crawling**:
- Optimized resource utilization for handling thousands of URLs concurrently.
- Flexible configuration options to tailor crawling behavior to specific requirements.
---
## 2 Browser Creation Methods
### Standard Browser Creation
Standard browser creation initializes a browser instance with default or minimal configurations. It is suitable for tasks that do not require session persistence or heavy customization.
#### Features and Limitations
- **Features**:
- Quick and straightforward setup for small-scale tasks.
- Supports headless and headful modes.
- **Limitations**:
- Lacks advanced customization options like session reuse.
- May struggle with sites employing strict anti-bot measures.
#### Example Usage
```python
from crawl4ai import AsyncWebCrawler, BrowserConfig
browser_config = BrowserConfig(browser_type="chromium", headless=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun("https://crawl4ai.com")
print(result.markdown)
```
### Persistent Contexts
Persistent contexts create browser sessions with stored data, enabling workflows that require maintaining login states or other session-specific information.
#### Benefits of Using `user_data_dir`
- **Session Persistence**:
- Stores cookies, local storage, and cache between crawling sessions.
- Reduces overhead for repetitive logins or multi-step workflows.
- **Enhanced Performance**:
- Leverages pre-loaded resources for faster page loading.
- **Flexibility**:
- Adapts to complex workflows requiring user-specific configurations.
#### Example: Setting Up Persistent Contexts
```python
config = BrowserConfig(user_data_dir="/path/to/user/data")
async with AsyncWebCrawler(config=config) as crawler:
result = await crawler.arun("https://crawl4ai.com")
print(result.markdown)
```
### Managed Browser
The `ManagedBrowser` class offers a high-level abstraction for managing browser instances, emphasizing resource management, debugging capabilities, and anti-bot measures.
#### How It Works
- **Browser Process Management**:
- Automates initialization and cleanup of browser processes.
- Optimizes resource usage by pooling and reusing browser instances.
- **Debugging Support**:
- Integrates with debugging tools like Chrome Developer Tools for real-time inspection.
- **Anti-Bot Measures**:
- Implements stealth plugins to mimic real user behavior and bypass bot detection.
#### Features
- **Customizable Configurations**:
- Supports advanced options such as viewport resizing, proxy settings, and header manipulation.
- **Debugging and Logging**:
- Logs detailed browser interactions for debugging and performance analysis.
- **Scalability**:
- Handles multiple browser instances concurrently, scaling dynamically based on workload.
#### Example: Using `ManagedBrowser`
```python
from crawl4ai import AsyncWebCrawler, BrowserConfig
config = BrowserConfig(headless=False, debug_port=9222)
async with AsyncWebCrawler(config=config) as crawler:
result = await crawler.arun("https://crawl4ai.com")
print(result.markdown)
```
---
## 3 Context and Page Management
### Creating and Configuring Browser Contexts
Browser contexts act as isolated environments within a single browser instance, enabling independent browsing sessions with their own cookies, cache, and storage.
#### Customizations
- **Headers and Cookies**:
- Define custom headers to mimic specific devices or browsers.
- Set cookies for authenticated sessions.
- **Session Reuse**:
- Retain and reuse session data across multiple requests.
- Example: Preserve login states for authenticated crawls.
#### Example: Context Initialization
```python
from crawl4ai import CrawlerRunConfig
config = CrawlerRunConfig(headers={"User-Agent": "Crawl4AI/1.0"})
async with AsyncWebCrawler() as crawler:
result = await crawler.arun("https://crawl4ai.com", config=config)
print(result.markdown)
```
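The example above only sets headers. Cookies can be seeded at the browser level as well; the sketch below assumes `BrowserConfig` accepts a `cookies` list of name/value/url dictionaries (verify against your installed version), and the cookie value itself is a placeholder.
```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig

async def main():
    # Assumption: BrowserConfig accepts a `cookies` list applied to new contexts.
    browser_config = BrowserConfig(
        headless=True,
        cookies=[
            {"name": "session_token", "value": "abc123", "url": "https://crawl4ai.com"}
        ],
    )
    run_config = CrawlerRunConfig(session_id="authenticated_session")
    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun("https://crawl4ai.com", config=run_config)
        print(result.markdown)

asyncio.run(main())
```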
### Creating Pages
Pages represent individual tabs or views within a browser context. They are responsible for rendering content, executing JavaScript, and handling user interactions.
#### Key Features
- **IFrame Handling**:
- Extract content from embedded iframes.
- Navigate and interact with nested content.
- **Viewport Customization**:
- Adjust viewport size to match target device dimensions.
- **Lazy Loading**:
- Ensure dynamic elements are fully loaded before extraction.
#### Example: Page Initialization
```python
config = CrawlerRunConfig(viewport_width=1920, viewport_height=1080)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun("https://crawl4ai.com", config=config)
print(result.markdown)
```
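Putting the page-level features together, a sketch of a run configuration that handles iframes and lazily loaded content might look like this; `.content-loaded` is a placeholder selector standing in for whatever readiness marker your target page exposes.
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(
        viewport_width=1280,
        viewport_height=720,
        process_iframes=True,            # fold iframe content into the result
        scan_full_page=True,             # scroll so lazily loaded elements render
        wait_for="css:.content-loaded",  # placeholder readiness selector
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://crawl4ai.com", config=config)
        print(result.markdown)

asyncio.run(main())
```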
---
## 4 Advanced Features and Best Practices
### Debugging and Logging
Remote debugging provides a powerful way to troubleshoot complex crawling workflows.
#### Example: Enabling Remote Debugging
```python
config = BrowserConfig(debug_port=9222)
async with AsyncWebCrawler(config=config) as crawler:
result = await crawler.arun("https://crawl4ai.com")
```
### Anti-Bot Techniques
- **Human Behavior Simulation**:
- Mimic real user actions, such as scrolling, clicking, and typing.
- Example: Use JavaScript to simulate interactions.
- **Captcha Handling**:
- Integrate with third-party services like 2Captcha or AntiCaptcha for automated solving.
#### Example: Simulating User Actions
```python
js_code = """
(async () => {
document.querySelector('input[name="search"]').value = 'test';
document.querySelector('button[type="submit"]').click();
})();
"""
config = CrawlerRunConfig(js_code=[js_code])
async with AsyncWebCrawler() as crawler:
result = await crawler.arun("https://crawl4ai.com", config=config)
```
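Beyond hand-written JavaScript, the run configuration also exposes higher-level anti-detection switches. The sketch below assumes your version of `CrawlerRunConfig` provides `simulate_user` and `override_navigator` alongside `magic`; treat it as an illustration rather than a guaranteed API surface.
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(
        magic=True,               # bundled anti-detection behavior
        simulate_user=True,       # assumed flag: emit human-like input noise
        override_navigator=True,  # assumed flag: mask common automation hints
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://crawl4ai.com", config=config)
        print("Success:", result.success)

asyncio.run(main())
```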
### Optimizations for Performance and Scalability
- **Persistent Contexts**:
- Reuse browser contexts to minimize resource consumption.
- **Concurrent Crawls**:
- Use `arun_many` with a controlled semaphore count for efficient batch processing.
#### Example: Scaling Crawls
```python
urls = ["https://example1.com", "https://example2.com"]
config = CrawlerRunConfig(semaphore_count=10)
async with AsyncWebCrawler() as crawler:
results = await crawler.arun_many(urls, config=config)
for result in results:
print(result.url, result.markdown)
```

View File

@@ -0,0 +1,264 @@
# Optimized Multi-URL Crawling
> **Note**: We're developing a new **executor module** that uses a sophisticated algorithm to dynamically manage multi-URL crawling, optimizing for speed and memory usage. The approaches in this document remain fully valid, but keep an eye on **Crawl4AI**'s upcoming releases for this powerful feature! Follow [@unclecode](https://twitter.com/unclecode) on X and check the changelogs to stay updated.
Crawl4AI's **AsyncWebCrawler** can handle multiple URLs in a single run, which can greatly reduce overhead and speed up crawling. This guide shows how to:
1. **Sequentially** crawl a list of URLs using the **same** session, avoiding repeated browser creation.
2. **Parallel**-crawl subsets of URLs in batches, again reusing the same browser.
When the entire process finishes, you close the browser once—**minimizing** memory and resource usage.
---
## 1. Why Avoid Simple Loops per URL?
If you naively do:
```python
for url in urls:
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url)
```
You end up:
1. Spinning up a **new** browser for each URL
2. Closing it immediately after the single crawl
3. Potentially using a lot of CPU/memory for short-lived browsers
4. Missing out on session reusability if you have login or ongoing states
**Better** approaches ensure you **create** the browser once, then crawl multiple URLs with minimal overhead.
---
## 2. Sequential Crawling with Session Reuse
### 2.1 Overview
1. **One** `AsyncWebCrawler` instance for **all** URLs.
2. **One** session (via `session_id`) so we can preserve local storage or cookies across URLs if needed.
3. The crawler is only closed at the **end**.
**This** is the simplest pattern if your workload is moderate (dozens to a few hundred URLs).
### 2.2 Example Code
```python
import asyncio
from typing import List
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
async def crawl_sequential(urls: List[str]):
print("\n=== Sequential Crawling with Session Reuse ===")
browser_config = BrowserConfig(
headless=True,
# For better performance in Docker or low-memory environments:
extra_args=["--disable-gpu", "--disable-dev-shm-usage", "--no-sandbox"],
)
crawl_config = CrawlerRunConfig(
markdown_generator=DefaultMarkdownGenerator()
)
# Create the crawler (opens the browser)
crawler = AsyncWebCrawler(config=browser_config)
await crawler.start()
try:
session_id = "session1" # Reuse the same session across all URLs
for url in urls:
result = await crawler.arun(
url=url,
config=crawl_config,
session_id=session_id
)
if result.success:
print(f"Successfully crawled: {url}")
# E.g. check markdown length
print(f"Markdown length: {len(result.markdown_v2.raw_markdown)}")
else:
print(f"Failed: {url} - Error: {result.error_message}")
finally:
# After all URLs are done, close the crawler (and the browser)
await crawler.close()
async def main():
urls = [
"https://example.com/page1",
"https://example.com/page2",
"https://example.com/page3"
]
await crawl_sequential(urls)
if __name__ == "__main__":
asyncio.run(main())
```
**Why It's Good**:
- **One** browser launch.
- Minimal memory usage.
- If the site requires login, you can log in once in the `session_id` context and preserve auth across all URLs (see the sketch below).
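A minimal sketch of that login-once pattern is shown below; the login URL, form selector, and `.user-profile` wait condition are placeholders you would swap for your target site's actual markup.
```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig

async def crawl_after_login(urls):
    browser_config = BrowserConfig(headless=True)
    session_id = "authenticated_session"

    crawler = AsyncWebCrawler(config=browser_config)
    await crawler.start()
    try:
        # Step 1: log in once. The login URL, form selector, and the
        # post-login marker below are placeholders for your target site.
        login_config = CrawlerRunConfig(
            session_id=session_id,
            js_code="document.querySelector('form#login').submit();",
            wait_for="css:.user-profile",
        )
        await crawler.arun(url="https://example.com/login", config=login_config)

        # Step 2: reuse the same session (cookies, local storage) for every URL.
        page_config = CrawlerRunConfig(session_id=session_id)
        for url in urls:
            result = await crawler.arun(url=url, config=page_config)
            print(url, "->", "OK" if result.success else result.error_message)
    finally:
        await crawler.close()

if __name__ == "__main__":
    asyncio.run(crawl_after_login([
        "https://example.com/page1",
        "https://example.com/page2",
    ]))
```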
---
## 3. Parallel Crawling with Browser Reuse
### 3.1 Overview
To speed up crawling further, you can crawl multiple URLs in **parallel** (batches or a concurrency limit). The crawler still uses **one** browser, but spawns different sessions (or the same, depending on your logic) for each task.
### 3.2 Example Code
For this example make sure to install the [psutil](https://pypi.org/project/psutil/) package.
```bash
pip install psutil
```
Then you can run the following code:
```python
import os
import sys
import psutil
import asyncio
__location__ = os.path.dirname(os.path.abspath(__file__))
__output__ = os.path.join(__location__, "output")
# Append parent directory to system path
parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(parent_dir)
from typing import List
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
async def crawl_parallel(urls: List[str], max_concurrent: int = 3):
print("\n=== Parallel Crawling with Browser Reuse + Memory Check ===")
# We'll keep track of peak memory usage across all tasks
peak_memory = 0
process = psutil.Process(os.getpid())
def log_memory(prefix: str = ""):
nonlocal peak_memory
current_mem = process.memory_info().rss # in bytes
if current_mem > peak_memory:
peak_memory = current_mem
print(f"{prefix} Current Memory: {current_mem // (1024 * 1024)} MB, Peak: {peak_memory // (1024 * 1024)} MB")
# Minimal browser config
browser_config = BrowserConfig(
headless=True,
verbose=False, # corrected from 'verbos=False'
extra_args=["--disable-gpu", "--disable-dev-shm-usage", "--no-sandbox"],
)
crawl_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
# Create the crawler instance
crawler = AsyncWebCrawler(config=browser_config)
await crawler.start()
try:
# We'll chunk the URLs in batches of 'max_concurrent'
success_count = 0
fail_count = 0
for i in range(0, len(urls), max_concurrent):
batch = urls[i : i + max_concurrent]
tasks = []
for j, url in enumerate(batch):
# Unique session_id per concurrent sub-task
session_id = f"parallel_session_{i + j}"
task = crawler.arun(url=url, config=crawl_config, session_id=session_id)
tasks.append(task)
# Check memory usage prior to launching tasks
log_memory(prefix=f"Before batch {i//max_concurrent + 1}: ")
# Gather results
results = await asyncio.gather(*tasks, return_exceptions=True)
# Check memory usage after tasks complete
log_memory(prefix=f"After batch {i//max_concurrent + 1}: ")
# Evaluate results
for url, result in zip(batch, results):
if isinstance(result, Exception):
print(f"Error crawling {url}: {result}")
fail_count += 1
elif result.success:
success_count += 1
else:
fail_count += 1
print(f"\nSummary:")
print(f" - Successfully crawled: {success_count}")
print(f" - Failed: {fail_count}")
finally:
print("\nClosing crawler...")
await crawler.close()
# Final memory log
log_memory(prefix="Final: ")
print(f"\nPeak memory usage (MB): {peak_memory // (1024 * 1024)}")
async def main():
urls = [
"https://example.com/page1",
"https://example.com/page2",
"https://example.com/page3",
"https://example.com/page4"
]
await crawl_parallel(urls, max_concurrent=2)
if __name__ == "__main__":
asyncio.run(main())
```
**Notes**:
- We **reuse** the same `AsyncWebCrawler` instance for all parallel tasks, launching **one** browser.
- Each parallel sub-task might get its own `session_id` so they don't share cookies/localStorage (unless that's desired).
- We limit concurrency to `max_concurrent=2` or 3 to avoid saturating CPU/memory.
---
## 4. Performance Tips
1. **Extra Browser Args**
- `--disable-gpu`, `--no-sandbox` can help in Docker or restricted environments.
- `--disable-dev-shm-usage` avoids using `/dev/shm` which can be small on some systems.
2. **Session Reuse**
- If your site requires a login or you want to maintain local data across URLs, share the **same** `session_id`.
- If you want isolation (each URL fresh), create unique sessions.
3. **Batching**
- If you have **many** URLs (like thousands), you can do parallel crawling in chunks (like `max_concurrent=5`).
- Use `arun_many()` for a built-in approach if you prefer (a minimal sketch follows this list), but the example above is often more flexible.
4. **Cache**
- If your pages share many resources or you're re-crawling the same domain repeatedly, consider setting `cache_mode=CacheMode.ENABLED` in `CrawlerRunConfig`.
- If you need fresh data each time, keep `cache_mode=CacheMode.BYPASS`.
5. **Hooks**
- You can set up global hooks for each crawler (like to block images) or per-run if you want.
- Keep them consistent if you're reusing sessions.
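As referenced in tip 3, here is a minimal `arun_many()` sketch over the same shared browser; the URLs are placeholders, and the call mirrors the batch-style usage shown earlier in this document.
```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

async def main():
    urls = [f"https://example.com/page{i}" for i in range(1, 11)]
    browser_config = BrowserConfig(headless=True)
    run_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)

    async with AsyncWebCrawler(config=browser_config) as crawler:
        # arun_many() batches the URLs over the single shared browser instance.
        results = await crawler.arun_many(urls=urls, config=run_config)
        for result in results:
            print(result.url, "->", "OK" if result.success else result.error_message)

if __name__ == "__main__":
    asyncio.run(main())
```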
---
## 5. Summary
- **One** `AsyncWebCrawler` + multiple calls to `.arun()` is far more efficient than launching a new crawler per URL.
- **Sequential** approach with a shared session is simple and memory-friendly for moderate sets of URLs.
- **Parallel** approach can speed up large crawls by concurrency, but keep concurrency balanced to avoid overhead.
- Close the crawler once at the end, ensuring the browser is only opened/closed once.
For even more advanced memory optimizations or dynamic concurrency patterns, see future sections on hooking or distributed crawling. The patterns above suffice for the majority of multi-URL scenarios—**giving you speed, simplicity, and minimal resource usage**. Enjoy your optimized crawling!

View File

@@ -1,6 +1,4 @@
# Proxy & Security
Configure proxy settings and enhance security features in Crawl4AI for reliable data extraction.
# Proxy
## Basic Proxy Setup
@@ -58,38 +56,3 @@ async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url=url, config=browser_config)
```
## Custom Headers
Add security-related headers via `BrowserConfig`:
```python
from crawl4ai.async_configs import BrowserConfig
headers = {
"X-Forwarded-For": "203.0.113.195",
"Accept-Language": "en-US,en;q=0.9",
"Cache-Control": "no-cache",
"Pragma": "no-cache"
}
browser_config = BrowserConfig(headers=headers)
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url="https://example.com")
```
## Combining with Magic Mode
For maximum protection, combine proxy with Magic Mode via `CrawlerRunConfig` and `BrowserConfig`:
```python
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
browser_config = BrowserConfig(
proxy="http://proxy.example.com:8080",
headers={"Accept-Language": "en-US"}
)
crawler_config = CrawlerRunConfig(magic=True) # Enable all anti-detection features
async with AsyncWebCrawler(config=browser_config) as crawler:
result = await crawler.arun(url="https://example.com", config=crawler_config)
```

View File

@@ -1,179 +0,0 @@
### Session-Based Crawling for Dynamic Content
In modern web applications, content is often loaded dynamically without changing the URL. Examples include "Load More" buttons, infinite scrolling, or paginated content that updates via JavaScript. Crawl4AI provides session-based crawling capabilities to handle such scenarios effectively.
This guide explores advanced techniques for crawling dynamic content using Crawl4AI's session management features.
---
## Understanding Session-Based Crawling
Session-based crawling allows you to reuse a persistent browser session across multiple actions. This means the same browser tab (or page object) is used throughout, enabling:
1. **Efficient handling of dynamic content** without reloading the page.
2. **JavaScript actions before and after crawling** (e.g., clicking buttons or scrolling).
3. **State maintenance** for authenticated sessions or multi-step workflows.
4. **Faster sequential crawling**, as it avoids reopening tabs or reallocating resources.
**Note:** Session-based crawling is ideal for sequential operations, not parallel tasks.
---
## Basic Concepts
Before diving into examples, here are some key concepts:
- **Session ID**: A unique identifier for a browsing session. Use the same `session_id` across multiple requests to maintain state.
- **BrowserConfig & CrawlerRunConfig**: These configuration objects control browser settings and crawling behavior.
- **JavaScript Execution**: Use `js_code` to perform actions like clicking buttons.
- **CSS Selectors**: Target specific elements for interaction or data extraction.
- **Extraction Strategy**: Define rules to extract structured data.
- **Wait Conditions**: Specify conditions to wait for before proceeding.
---
## Example 1: Basic Session-Based Crawling
A simple example using session-based crawling:
```python
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.cache_context import CacheMode
async def basic_session_crawl():
async with AsyncWebCrawler() as crawler:
session_id = "dynamic_content_session"
url = "https://example.com/dynamic-content"
for page in range(3):
config = CrawlerRunConfig(
url=url,
session_id=session_id,
js_code="document.querySelector('.load-more-button').click();" if page > 0 else None,
css_selector=".content-item",
cache_mode=CacheMode.BYPASS
)
result = await crawler.arun(config=config)
print(f"Page {page + 1}: Found {result.extracted_content.count('.content-item')} items")
await crawler.crawler_strategy.kill_session(session_id)
asyncio.run(basic_session_crawl())
```
This example shows:
1. Reusing the same `session_id` across multiple requests.
2. Executing JavaScript to load more content dynamically.
3. Properly closing the session to free resources.
---
## Advanced Technique 1: Custom Execution Hooks
Use custom hooks to handle complex scenarios, such as waiting for content to load dynamically:
```python
async def advanced_session_crawl_with_hooks():
first_commit = ""
async def on_execution_started(page):
nonlocal first_commit
try:
while True:
await page.wait_for_selector("li.commit-item h4")
commit = await page.query_selector("li.commit-item h4")
commit = (await commit.evaluate("(element) => element.textContent")).strip()
if commit and commit != first_commit:
first_commit = commit
break
await asyncio.sleep(0.5)
except Exception as e:
print(f"Warning: New content didn't appear: {e}")
async with AsyncWebCrawler() as crawler:
session_id = "commit_session"
url = "https://github.com/example/repo/commits/main"
crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)
js_next_page = """document.querySelector('a.pagination-next').click();"""
for page in range(3):
config = CrawlerRunConfig(
url=url,
session_id=session_id,
js_code=js_next_page if page > 0 else None,
css_selector="li.commit-item",
js_only=page > 0,
cache_mode=CacheMode.BYPASS
)
result = await crawler.arun(config=config)
print(f"Page {page + 1}: Found {len(result.extracted_content)} commits")
await crawler.crawler_strategy.kill_session(session_id)
asyncio.run(advanced_session_crawl_with_hooks())
```
This technique ensures new content loads before the next action.
---
## Advanced Technique 2: Integrated JavaScript Execution and Waiting
Combine JavaScript execution and waiting logic for concise handling of dynamic content:
```python
async def integrated_js_and_wait_crawl():
async with AsyncWebCrawler() as crawler:
session_id = "integrated_session"
url = "https://github.com/example/repo/commits/main"
js_next_page_and_wait = """
(async () => {
const getCurrentCommit = () => document.querySelector('li.commit-item h4').textContent.trim();
const initialCommit = getCurrentCommit();
document.querySelector('a.pagination-next').click();
while (getCurrentCommit() === initialCommit) {
await new Promise(resolve => setTimeout(resolve, 100));
}
})();
"""
for page in range(3):
config = CrawlerRunConfig(
url=url,
session_id=session_id,
js_code=js_next_page_and_wait if page > 0 else None,
css_selector="li.commit-item",
js_only=page > 0,
cache_mode=CacheMode.BYPASS
)
result = await crawler.arun(config=config)
print(f"Page {page + 1}: Found {len(result.extracted_content)} commits")
await crawler.crawler_strategy.kill_session(session_id)
asyncio.run(integrated_js_and_wait_crawl())
```
---
## Best Practices for Session-Based Crawling
1. **Unique Session IDs**: Assign descriptive and unique `session_id` values.
2. **Close Sessions**: Always clean up sessions with `kill_session` after use.
3. **Error Handling**: Anticipate and handle errors gracefully.
4. **Respect Websites**: Follow terms of service and robots.txt.
5. **Delays**: Add delays to avoid overwhelming servers.
6. **Optimize JavaScript**: Keep scripts concise for better performance.
7. **Monitor Resources**: Track memory and CPU usage for long sessions.
---
## Conclusion
Session-based crawling in Crawl4AI is a robust solution for handling dynamic content and multi-step workflows. By combining session management, JavaScript execution, and structured extraction strategies, you can effectively navigate and extract data from modern web applications. Always adhere to ethical web scraping practices and respect website policies.

View File

@@ -1,4 +1,4 @@
### Session Management
# Session Management
Session management in Crawl4AI is a powerful feature that allows you to maintain state across multiple requests, making it particularly suitable for handling complex multi-step crawling tasks. It enables you to reuse the same browser tab (or page object) across sequential actions and crawls, which is beneficial for:
@@ -20,8 +20,12 @@ async with AsyncWebCrawler() as crawler:
session_id = "my_session"
# Define configurations
config1 = CrawlerRunConfig(url="https://example.com/page1", session_id=session_id)
config2 = CrawlerRunConfig(url="https://example.com/page2", session_id=session_id)
config1 = CrawlerRunConfig(
url="https://example.com/page1", session_id=session_id
)
config2 = CrawlerRunConfig(
url="https://example.com/page2", session_id=session_id
)
# First request
result1 = await crawler.arun(config=config1)
@@ -54,7 +58,9 @@ async def crawl_dynamic_content():
schema = {
"name": "Commit Extractor",
"baseSelector": "li.Box-sc-g0xbh4-0",
"fields": [{"name": "title", "selector": "h4.markdown-title", "type": "text"}],
"fields": [{
"name": "title", "selector": "h4.markdown-title", "type": "text"
}],
}
extraction_strategy = JsonCssExtractionStrategy(schema)
@@ -87,51 +93,146 @@ async def crawl_dynamic_content():
---
#### Session Best Practices
## Example 1: Basic Session-Based Crawling
1. **Descriptive Session IDs**:
Use meaningful names for session IDs to organize workflows:
```python
session_id = "login_flow_session"
session_id = "product_catalog_session"
```
A simple example using session-based crawling:
2. **Resource Management**:
Always ensure sessions are cleaned up to free resources:
```python
try:
# Your crawling code here
pass
finally:
await crawler.crawler_strategy.kill_session(session_id)
```
```python
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.cache_context import CacheMode
3. **State Maintenance**:
Reuse the session for subsequent actions within the same workflow:
```python
# Step 1: Login
login_config = CrawlerRunConfig(
url="https://example.com/login",
session_id=session_id,
js_code="document.querySelector('form').submit();"
)
await crawler.arun(config=login_config)
async def basic_session_crawl():
async with AsyncWebCrawler() as crawler:
session_id = "dynamic_content_session"
url = "https://example.com/dynamic-content"
# Step 2: Verify login success
dashboard_config = CrawlerRunConfig(
url="https://example.com/dashboard",
session_id=session_id,
wait_for="css:.user-profile" # Wait for authenticated content
)
result = await crawler.arun(config=dashboard_config)
```
for page in range(3):
config = CrawlerRunConfig(
url=url,
session_id=session_id,
js_code="document.querySelector('.load-more-button').click();" if page > 0 else None,
css_selector=".content-item",
cache_mode=CacheMode.BYPASS
)
result = await crawler.arun(config=config)
print(f"Page {page + 1}: Found {result.extracted_content.count('.content-item')} items")
await crawler.crawler_strategy.kill_session(session_id)
asyncio.run(basic_session_crawl())
```
This example shows:
1. Reusing the same `session_id` across multiple requests.
2. Executing JavaScript to load more content dynamically.
3. Properly closing the session to free resources.
---
## Advanced Technique 1: Custom Execution Hooks
> Warning: The next few examples combine several moving parts (hooks, sessions, and JavaScript execution) 😅, so make sure you are comfortable with each piece before you start.
Use custom hooks to handle complex scenarios, such as waiting for content to load dynamically:
```python
async def advanced_session_crawl_with_hooks():
first_commit = ""
async def on_execution_started(page):
nonlocal first_commit
try:
while True:
await page.wait_for_selector("li.commit-item h4")
commit = await page.query_selector("li.commit-item h4")
commit = (await commit.evaluate("(element) => element.textContent")).strip()
if commit and commit != first_commit:
first_commit = commit
break
await asyncio.sleep(0.5)
except Exception as e:
print(f"Warning: New content didn't appear: {e}")
async with AsyncWebCrawler() as crawler:
session_id = "commit_session"
url = "https://github.com/example/repo/commits/main"
crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)
js_next_page = """document.querySelector('a.pagination-next').click();"""
for page in range(3):
config = CrawlerRunConfig(
url=url,
session_id=session_id,
js_code=js_next_page if page > 0 else None,
css_selector="li.commit-item",
js_only=page > 0,
cache_mode=CacheMode.BYPASS
)
result = await crawler.arun(config=config)
print(f"Page {page + 1}: Found {len(result.extracted_content)} commits")
await crawler.crawler_strategy.kill_session(session_id)
asyncio.run(advanced_session_crawl_with_hooks())
```
This technique ensures new content loads before the next action.
---
## Advanced Technique 2: Integrated JavaScript Execution and Waiting
Combine JavaScript execution and waiting logic for concise handling of dynamic content:
```python
async def integrated_js_and_wait_crawl():
async with AsyncWebCrawler() as crawler:
session_id = "integrated_session"
url = "https://github.com/example/repo/commits/main"
js_next_page_and_wait = """
(async () => {
const getCurrentCommit = () => document.querySelector('li.commit-item h4').textContent.trim();
const initialCommit = getCurrentCommit();
document.querySelector('a.pagination-next').click();
while (getCurrentCommit() === initialCommit) {
await new Promise(resolve => setTimeout(resolve, 100));
}
})();
"""
for page in range(3):
config = CrawlerRunConfig(
url=url,
session_id=session_id,
js_code=js_next_page_and_wait if page > 0 else None,
css_selector="li.commit-item",
js_only=page > 0,
cache_mode=CacheMode.BYPASS
)
result = await crawler.arun(config=config)
print(f"Page {page + 1}: Found {len(result.extracted_content)} commits")
await crawler.crawler_strategy.kill_session(session_id)
asyncio.run(integrated_js_and_wait_crawl())
```
---
#### Common Use Cases for Sessions
1. **Authentication Flows**: Login and interact with secured pages.
2. **Pagination Handling**: Navigate through multiple pages.
3. **Form Submissions**: Fill forms, submit, and process results.
4. **Multi-step Processes**: Complete workflows that span multiple actions.
5. **Dynamic Content Navigation**: Handle JavaScript-rendered or event-triggered content.

View File

@@ -0,0 +1,179 @@
# `SSLCertificate` Reference
The **`SSLCertificate`** class encapsulates an SSL certificate's data and allows exporting it in various formats (PEM, DER, JSON, or text). It's used within **Crawl4AI** whenever you set **`fetch_ssl_certificate=True`** in your **`CrawlerRunConfig`**.
## 1. Overview
**Location**: `crawl4ai/ssl_certificate.py`
```python
class SSLCertificate:
"""
Represents an SSL certificate with methods to export in various formats.
Main Methods:
- from_url(url, timeout=10)
- from_file(file_path)
- from_binary(binary_data)
- to_json(filepath=None)
- to_pem(filepath=None)
- to_der(filepath=None)
...
Common Properties:
- issuer
- subject
- valid_from
- valid_until
- fingerprint
"""
```
### Typical Use Case
1. You **enable** certificate fetching in your crawl by:
```python
CrawlerRunConfig(fetch_ssl_certificate=True, ...)
```
2. After `arun()`, if `result.ssl_certificate` is present, it's an instance of **`SSLCertificate`**.
3. You can **read** basic properties (issuer, subject, validity) or **export** them in multiple formats.
---
## 2. Construction & Fetching
### 2.1 **`from_url(url, timeout=10)`**
Manually load an SSL certificate from a given URL (port 443). Typically used internally, but you can call it directly if you want:
```python
cert = SSLCertificate.from_url("https://example.com")
if cert:
print("Fingerprint:", cert.fingerprint)
```
### 2.2 **`from_file(file_path)`**
Load from a file containing certificate data in ASN.1 or DER. Rarely needed unless you have local cert files:
```python
cert = SSLCertificate.from_file("/path/to/cert.der")
```
### 2.3 **`from_binary(binary_data)`**
Initialize from raw binary. E.g., if you captured it from a socket or another source:
```python
cert = SSLCertificate.from_binary(raw_bytes)
```
---
## 3. Common Properties
After obtaining an **`SSLCertificate`** instance (e.g. `result.ssl_certificate` from a crawl), you can read the following fields (a short reading sketch follows the list):
1. **`issuer`** *(dict)*
- E.g. `{"CN": "My Root CA", "O": "..."}`
2. **`subject`** *(dict)*
- E.g. `{"CN": "example.com", "O": "ExampleOrg"}`
3. **`valid_from`** *(str)*
- NotBefore date/time. Often in ASN.1/UTC format.
4. **`valid_until`** *(str)*
- NotAfter date/time.
5. **`fingerprint`** *(str)*
- The SHA-256 digest (lowercase hex).
- E.g. `"d14d2e..."`
---
## 4. Export Methods
Once you have a **`SSLCertificate`** object, you can **export** or **inspect** it:
### 4.1 **`to_json(filepath=None)` → `Optional[str]`**
- Returns a JSON string containing the parsed certificate fields.
- If `filepath` is provided, saves it to disk instead, returning `None`.
**Usage**:
```python
json_data = cert.to_json() # returns JSON string
cert.to_json("certificate.json") # writes file, returns None
```
### 4.2 **`to_pem(filepath=None)` → `Optional[str]`**
- Returns a PEM-encoded string (common for web servers).
- If `filepath` is provided, saves it to disk instead.
```python
pem_str = cert.to_pem() # in-memory PEM string
cert.to_pem("/path/to/cert.pem") # saved to file
```
### 4.3 **`to_der(filepath=None)` → `Optional[bytes]`**
- Returns the original DER (binary ASN.1) bytes.
- If `filepath` is specified, writes the bytes there instead.
```python
der_bytes = cert.to_der()
cert.to_der("certificate.der")
```
### 4.4 (Optional) **`export_as_text()`**
- If you see a method like `export_as_text()`, it typically returns an OpenSSL-style textual representation.
- Not always needed, but can help for debugging or manual inspection.
---
## 5. Example Usage in Crawl4AI
Below is a minimal sample showing how the crawler obtains an SSL cert from a site, then reads or exports it:
```python
import asyncio
import os
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode
async def main():
tmp_dir = "tmp"
os.makedirs(tmp_dir, exist_ok=True)
config = CrawlerRunConfig(
fetch_ssl_certificate=True,
cache_mode=CacheMode.BYPASS
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun("https://example.com", config=config)
if result.success and result.ssl_certificate:
cert = result.ssl_certificate
# 1. Basic Info
print("Issuer CN:", cert.issuer.get("CN", ""))
print("Valid until:", cert.valid_until)
print("Fingerprint:", cert.fingerprint)
# 2. Export
cert.to_json(os.path.join(tmp_dir, "certificate.json"))
cert.to_pem(os.path.join(tmp_dir, "certificate.pem"))
cert.to_der(os.path.join(tmp_dir, "certificate.der"))
if __name__ == "__main__":
asyncio.run(main())
```
---
## 6. Notes & Best Practices
1. **Timeout**: `SSLCertificate.from_url` uses a default **10s** socket connection timeout before wrapping the socket in SSL.
2. **Binary Form**: The certificate is loaded in ASN.1 (DER) form, then re-parsed by `OpenSSL.crypto`.
3. **Validation**: This does **not** validate the certificate chain or trust store. It only fetches and parses.
4. **Integration**: Within Crawl4AI, you typically just set `fetch_ssl_certificate=True` in `CrawlerRunConfig`; the final result's `ssl_certificate` is built automatically.
5. **Export**: If you need to store or analyze a cert, `to_json` and `to_pem` cover most use cases.
---
### Summary
- **`SSLCertificate`** is a convenience class for capturing and exporting the **TLS certificate** from your crawled site(s).
- Common usage is in the **`CrawlResult.ssl_certificate`** field, accessible after setting `fetch_ssl_certificate=True`.
- Offers quick access to essential certificate details (`issuer`, `subject`, `fingerprint`) and is easy to export (PEM, DER, JSON) for further analysis or server usage.
Use it whenever you need **insight** into a site's certificate or require some form of cryptographic or compliance check.