Compare commits


33 Commits

Author SHA1 Message Date
UncleCode
3cf19a1bc2 chore(version): bump version to 0.3.73 2024-11-05 20:05:58 +08:00
UncleCode
67a23c3182 feat(core): Release v0.3.73 with Browser Takeover and Docker Support
Major changes:
- Add browser takeover feature using CDP for authentic browsing
- Implement Docker support with full API server documentation
- Enhance Markdown with tag preservation system
- Improve parallel crawling performance

This release focuses on authenticity and scalability, introducing the ability
to use users' own browsers while providing containerized deployment options.
Breaking changes include modified browser handling and API response structure.

See CHANGELOG.md for detailed migration guide.
2024-11-05 20:04:18 +08:00
UncleCode
c4c6227962 Creating the API server component 2024-11-04 20:33:15 +08:00
UncleCode
e6c914d2fa Refactor version management and remove deprecated gitignore.dev file 2024-11-04 16:51:59 +08:00
UncleCode
be8f4fc59a Merge branch '0.3.73' of https://github.com/unclecode/crawl4ai into 0.3.73 2024-11-04 14:12:07 +08:00
unclecode
fbdf870fbf Update CHANGELOG 2024-11-04 14:10:27 +08:00
UncleCode
7b0cca41b4 Update gitignore 2024-11-04 13:48:26 +08:00
UncleCode
33d0e9ec8c Update dev gitignore 2024-11-04 13:42:37 +08:00
UncleCode
42f1c67ca8 Merge branch '0.3.73' of https://github.com/unclecode/crawl4ai into 0.3.73 2024-11-04 13:39:39 +08:00
UncleCode
e28c49a8fe Refactor .gitignore.dev file: Add ignore patterns for various files and directories 2024-11-04 13:39:38 +08:00
unclecode
54d5a3a259 Improved database management and error handling, updated README instructions, refined .gitignore, enhanced async web crawling capabilities, and updated dependencies. 2024-11-04 13:22:13 +08:00
UncleCode
62a86dbe8d Refactor mission section in README and add mission diagram 2024-10-31 16:38:56 +08:00
UncleCode
492ada0ed4 Add mission diagram to MISSION.md 2024-10-31 15:26:43 +08:00
UncleCode
d8eef02867 Add link to mission statement in README 2024-10-31 15:23:58 +08:00
UncleCode
6c7235d6a7 Add mission.md file 2024-10-31 15:22:00 +08:00
UncleCode
19c3f3efb2 Refactor tutorial markdown files: Update numbering and formatting 2024-10-30 20:58:07 +08:00
UncleCode
e97e8df6ba Update README: Fix typo in project name 2024-10-30 20:45:20 +08:00
UncleCode
cb6f5323ae Update README 2024-10-30 20:44:57 +08:00
UncleCode
47464cedec Update README 2024-10-30 20:42:27 +08:00
UncleCode
982d203d91 Merge branch '0.3.73' 2024-10-30 20:40:09 +08:00
UncleCode
9307c19f35 Update documents, upload new version of quickstart. 2024-10-30 20:39:35 +08:00
UncleCode
e9f7d5e73a Merge branch '0.3.73' 2024-10-30 00:16:49 +08:00
UncleCode
3529c2e732 Add new tutorial documents to the docs folder. 2024-10-30 00:16:18 +08:00
UncleCode
d9e0b7abab Fix README badge 2024-10-28 15:14:16 +08:00
UncleCode
b2800fefc6 Add badges to README 2024-10-28 15:10:12 +08:00
UncleCode
d913e20edc Update Readme 2024-10-28 15:09:37 +08:00
UncleCode
c2a71a5abe Update Docs folder, prepare branch for new version 0.3.73 2024-10-27 19:35:13 +08:00
UncleCode
d61615e0b0 Merge branch '0.3.72' 2024-10-27 19:33:05 +08:00
UncleCode
ac9d83c72f Update gitignore 2024-10-27 19:29:04 +08:00
UncleCode
ff9149b5c9 Merge branch 'main' of https://github.com/unclecode/crawl4ai 2024-10-27 19:28:05 +08:00
UncleCode
32f57c49d6 Merge pull request #194 from IdrisHanafi/feat/customize-crawl-base-directory
Support for custom crawl base directory
2024-10-24 13:09:27 +02:00
Idris Hanafi
a5f627ba1a feat: customize crawl base directory 2024-10-21 17:58:39 -04:00
UncleCode
dbb587d681 Update gitignore 2024-10-17 21:38:48 +08:00
130 changed files with 7786 additions and 73687 deletions

.gitignore (vendored, 2 lines changed)

@@ -206,4 +206,6 @@ git_issues.py
git_issues.md
.tests/
.issues/
.docs/
.issues/

CHANGELOG.md

@@ -1,8 +1,104 @@
# Changelog
# CHANGELOG
## [v0.3.73] - 2024-11-05
### Major Features
- **New Doctor Feature**
- Added comprehensive system diagnostics tool
- Available through package hub and CLI
- Provides automated troubleshooting and system health checks
- Includes detailed reporting of configuration issues
- **Dockerized API Server**
- Released complete Docker implementation for API server
- Added comprehensive documentation for Docker deployment
- Implemented container communication protocols
- Added environment configuration guides
- **Managed Browser Integration**
- Added support for user-controlled browser instances
- Implemented `ManagedBrowser` class for better browser lifecycle management
- Added ability to connect to existing Chrome DevTools Protocol (CDP) endpoints
- Introduced user data directory support for persistent browser profiles
- **Enhanced HTML Processing**
- Added HTML tag preservation feature during markdown conversion
  - Introduced configurable tag preservation system (sketched below)
- Improved pre-tag and code block handling
- Added support for nested preserved tags with attribute retention
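
For example, a minimal sketch of the tag preservation system, using the `CustomHTML2Text` class added elsewhere in this release (the import path is an assumption, not confirmed by this diff):

```python
from crawl4ai.content_scrapping_strategy import CustomHTML2Text  # path assumed

converter = CustomHTML2Text()
# Emit <pre> and <code> blocks as raw HTML (attributes retained) instead of
# converting them to markdown.
converter.update_params(preserve_tags={'pre', 'code'})

html = '<p>Intro</p><pre class="lang-py">print("hi")</pre>'
print(converter.handle(html))
```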
### Improvements
- **Browser Handling**
- Added flag to ignore body visibility for problematic pages
- Improved browser process cleanup and management
- Enhanced temporary directory handling for browser profiles
- Added configurable browser launch arguments
- **Database Management**
- Implemented connection pooling for better performance
- Added retry logic for database operations
- Improved error handling and logging
- Enhanced cleanup procedures for database connections
- **Resource Management**
- Added memory and CPU monitoring
  - Implemented dynamic task slot allocation based on system resources (see the sketch below)
- Added configurable cleanup intervals
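
The allocation policy itself is not shown in this diff; a rough illustrative sketch of what memory- and CPU-aware slot allocation can look like, using `psutil` (which the new Dockerfile installs):

```python
import psutil

def available_task_slots(max_slots: int = 10, mem_per_task_mb: int = 512) -> int:
    """Illustrative only: cap concurrent crawl tasks by free memory and CPU load."""
    free_mb = psutil.virtual_memory().available / (1024 * 1024)
    by_memory = int(free_mb // mem_per_task_mb)
    # Shed one slot per 25% of CPU utilisation, never dropping below one slot.
    by_cpu = max(1, psutil.cpu_count() - int(psutil.cpu_percent(interval=0.1) / 25))
    return max(1, min(max_slots, by_memory, by_cpu))
```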
### Technical Improvements
- **Code Structure**
- Moved version management to dedicated _version.py file
- Improved error handling throughout the codebase
- Enhanced logging system with better error reporting
- Reorganized core components for better maintainability
### Bug Fixes
- Fixed issues with browser process termination
- Improved handling of connection timeouts
- Enhanced error recovery in database operations
- Fixed memory leaks in long-running processes
### Dependencies
- Updated Playwright to v1.47
- Updated core dependencies with more flexible version constraints
- Added new development dependencies for testing
### Breaking Changes
- Changed default browser handling behavior
- Modified database connection management approach
- Updated API response structure for better consistency
## Migration Guide
When upgrading to v0.3.73, be aware of the following changes:
1. Docker Deployment:
- Review Docker documentation for new deployment options
- Update environment configurations as needed
- Check container communication settings
2. If using custom browser management:
   - Update browser initialization code to use the new ManagedBrowser class (see the sketch after this list)
- Review browser cleanup procedures
3. For database operations:
- Check custom database queries for compatibility with new connection pooling
- Update error handling to work with new retry logic
4. Using the Doctor:
- Run doctor command for system diagnostics: `crawl4ai doctor`
- Review generated reports for potential issues
- Follow recommended fixes for any identified problems
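
A minimal sketch of the managed-browser path referenced in step 2 (the `use_managed_browser` and `user_data_dir` kwargs appear in this release; the rest of the wiring here is illustrative):

```python
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_crawler_strategy import AsyncPlaywrightCrawlerStrategy

async def main():
    # Launch a real local browser and attach over CDP instead of letting
    # Playwright spawn its own bundled browser.
    strategy = AsyncPlaywrightCrawlerStrategy(
        use_managed_browser=True,
        user_data_dir="/tmp/chrome-profile",  # persistent profile; temp dir if omitted
        headless=False,
    )
    async with AsyncWebCrawler(crawler_strategy=strategy) as crawler:
        result = await crawler.arun(url="https://example.com")
        print(result.markdown[:200])

asyncio.run(main())
```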
## [2024-11-04 - 13:21:42] Comprehensive Update of Crawl4AI Features and Dependencies
This commit introduces several key enhancements, including improved error handling and robust database operations in `async_database.py`, which now features a connection pool and retry logic for better reliability. Updates to the README.md provide clearer instructions and a better user experience with links to documentation sections. The `.gitignore` file has been refined to include additional directories, while the async web crawler now utilizes a managed browser for more efficient crawling. Furthermore, multiple dependency updates and the introduction of the `CustomHTML2Text` class enhance text extraction capabilities.
## [v0.3.73] - 2024-10-24
### Added
- preserve_tags: Added support for preserving specific HTML tags during markdown conversion.
- Smart overlay removal system in AsyncPlaywrightCrawlerStrategy:
- Automatic removal of popups, modals, and cookie notices
- Detection and removal of fixed/sticky position elements

Dockerfile (new file, 121 lines)

@@ -0,0 +1,121 @@
# syntax=docker/dockerfile:1.4
# Build arguments
ARG PYTHON_VERSION=3.10
# Base stage with system dependencies
FROM python:${PYTHON_VERSION}-slim as base
# Declare ARG variables again within the build stage
ARG INSTALL_TYPE=all
ARG ENABLE_GPU=false
# Platform-specific labels
LABEL maintainer="unclecode"
LABEL description="Crawl4AI - Advanced Web Crawler with AI capabilities"
LABEL version="1.0"
# Environment setup
ENV PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1 \
PIP_NO_CACHE_DIR=1 \
PIP_DISABLE_PIP_VERSION_CHECK=1 \
PIP_DEFAULT_TIMEOUT=100 \
DEBIAN_FRONTEND=noninteractive
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
curl \
wget \
gnupg \
git \
cmake \
pkg-config \
python3-dev \
libjpeg-dev \
libpng-dev \
&& rm -rf /var/lib/apt/lists/*
# Playwright system dependencies for Linux
RUN apt-get update && apt-get install -y --no-install-recommends \
libglib2.0-0 \
libnss3 \
libnspr4 \
libatk1.0-0 \
libatk-bridge2.0-0 \
libcups2 \
libdrm2 \
libdbus-1-3 \
libxcb1 \
libxkbcommon0 \
libx11-6 \
libxcomposite1 \
libxdamage1 \
libxext6 \
libxfixes3 \
libxrandr2 \
libgbm1 \
libpango-1.0-0 \
libcairo2 \
libasound2 \
libatspi2.0-0 \
&& rm -rf /var/lib/apt/lists/*
# GPU support if enabled
RUN if [ "$ENABLE_GPU" = "true" ] ; then \
apt-get update && apt-get install -y --no-install-recommends \
nvidia-cuda-toolkit \
&& rm -rf /var/lib/apt/lists/* ; \
fi
# Create and set working directory
WORKDIR /app
# Copy the entire project
COPY . .
# Install base requirements
RUN pip install --no-cache-dir -r requirements.txt
# Install required libraries for the FastAPI server
RUN pip install fastapi uvicorn psutil
# Install ML dependencies first for better layer caching
RUN if [ "$INSTALL_TYPE" = "all" ] ; then \
pip install --no-cache-dir \
torch \
torchvision \
torchaudio \
scikit-learn \
nltk \
transformers \
tokenizers && \
python -m nltk.downloader punkt stopwords ; \
fi
# Install the package
RUN if [ "$INSTALL_TYPE" = "all" ] ; then \
pip install -e ".[all]" && \
python -m crawl4ai.model_loader ; \
elif [ "$INSTALL_TYPE" = "torch" ] ; then \
pip install -e ".[torch]" ; \
elif [ "$INSTALL_TYPE" = "transformer" ] ; then \
pip install -e ".[transformer]" && \
python -m crawl4ai.model_loader ; \
else \
pip install -e "." ; \
fi
# Install Playwright and browsers
RUN playwright install
# Health check
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
CMD curl -f http://localhost:11235/health || exit 1
# Expose port (matches the uvicorn port below)
EXPOSE 11235
# Start the FastAPI server
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "11235"]

MISSION.md (new file, 46 lines)

@@ -0,0 +1,46 @@
# Mission
![Mission Diagram](./docs/assets/pitch-dark.svg)
### 1. The Data Capitalization Opportunity
We live in an unprecedented era of digital wealth creation. Every day, individuals and enterprises generate massive amounts of valuable digital footprints across various platforms, social media channels, messenger apps, and cloud services. While people can interact with their data within these platforms, there's an immense untapped opportunity to transform this data into true capital assets. Just as physical property became a foundational element of wealth creation, personal and enterprise data has the potential to become a new form of capital on balance sheets.
For individuals, this represents an opportunity to transform their digital activities into valuable assets. For enterprises, their internal communications, team discussions, and collaborative documents contain rich insights that could be structured and valued as intellectual capital. This wealth of information represents an unprecedented opportunity for value creation in the digital age.
### 2. The Potential of Authentic Data
While synthetic data has played a crucial role in AI development, there's an enormous untapped potential in the authentic data generated by individuals and organizations. Every message, document, and interaction contains unique insights and patterns that could enhance AI development. The challenge isn't a lack of data - it's that most authentic human-generated data remains inaccessible for productive use.
By enabling willing participation in data sharing, we can unlock this vast reservoir of authentic human knowledge. This represents an opportunity to enhance AI development with diverse, real-world data that reflects the full spectrum of human experience and knowledge.
## Our Pathway to Data Democracy
### 1. Open-Source Foundation
Our first step is creating an open-source data extraction engine that empowers developers and innovators to build tools for data structuring and organization. This foundation ensures transparency, security, and community-driven development. By making these tools openly available, we enable the technical infrastructure needed for true data ownership and capitalization.
### 2. Data Capitalization Platform
Building on this open-source foundation, we're developing a platform that helps individuals and enterprises transform their digital footprints into structured, valuable assets. This platform will provide the tools and frameworks needed to organize, understand, and value personal and organizational data as true capital assets.
### 3. Creating a Data Marketplace
The final piece is establishing a marketplace where individuals and organizations can willingly share their data assets. This creates opportunities for:
- Individuals to earn equity, revenue, or other forms of value from their data
- Enterprises to access diverse, high-quality data for AI development
- Researchers to work with authentic human-generated data
- Startups to build innovative solutions using real-world data
## Economic Vision: A Shared Data Economy
We envision a future where data becomes a fundamental asset class in a thriving shared economy. This transformation will democratize AI development by enabling willing participation in data sharing, ensuring that the benefits of AI advancement flow back to data creators. Just as property rights revolutionized economic systems, establishing data as a capital asset will create new opportunities for wealth creation and economic participation.
This shared data economy will:
- Enable individuals to capitalize on their digital footprints
- Create new revenue streams for data creators
- Provide AI developers with access to diverse, authentic data
- Foster innovation through broader access to real-world data
- Ensure more equitable distribution of AI's economic benefits
Our vision is to facilitate this transformation from the ground up - starting with open-source tools, progressing to data capitalization platforms, and ultimately creating a thriving marketplace where data becomes a true asset class in a shared economy. This approach ensures that the future of AI is built on a foundation of authentic human knowledge, with benefits flowing back to the individuals and organizations who create and share their valuable data.

README.md (101 lines changed)

@@ -1,6 +1,9 @@
# Crawl4AI (Async Version) 🕷️🤖
# 🔥🕷️ Crawl4AI: LLM Friendly Web Crawler & Scraper
<a href="https://trendshift.io/repositories/11716" target="_blank"><img src="https://trendshift.io/api/badge/repositories/11716" alt="unclecode%2Fcrawl4ai | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
[![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers)
![PyPI - Downloads](https://img.shields.io/pypi/dm/Crawl4AI)
[![GitHub Forks](https://img.shields.io/github/forks/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/network/members)
[![GitHub Issues](https://img.shields.io/github/issues/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/issues)
[![GitHub Pull Requests](https://img.shields.io/github/issues-pr/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/pulls)
@@ -8,6 +11,14 @@
Crawl4AI simplifies asynchronous web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. 🆓🌐
## 🌟 Meet the Crawl4AI Assistant: Your Copilot for Crawling
Use the [Crawl4AI GPT Assistant](https://tinyurl.com/crawl4ai-gpt) as your AI-powered copilot! With this assistant, you can:
- 🧑‍💻 Generate code for complex crawling and extraction tasks
- 💡 Get tailored support and examples
- 📘 Learn Crawl4AI faster with step-by-step guidance
## New in 0.3.72 ✨
- 📄 Fit markdown generation for extracting main article content.
@@ -19,7 +30,7 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it accessible
## Try it Now!
✨ Play around with this [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1REChY6fXQf-EaVYLv0eHEWvzlYxGm0pd?usp=sharing)
✨ Play around with this [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SgRPrByQLzjRfwoRNq1wSGE9nYY_EE8C?usp=sharing)
✨ Visit our [Documentation Website](https://crawl4ai.com/mkdocs/)
@@ -72,11 +83,13 @@ By default, this will install the asynchronous version of Crawl4AI, using Playwright
👉 Note: When you install Crawl4AI, the setup script should automatically install and set up Playwright. However, if you encounter any Playwright-related errors, you can manually install it using one of these methods:
1. Through the command line:
```bash
playwright install
```
2. If the above doesn't work, try this more specific command:
```bash
python -m playwright install chromium
```
@@ -103,9 +116,53 @@ pip install -e .
### Using Docker 🐳
We're in the process of creating Docker images and pushing them to Docker Hub. This will provide an easy way to run Crawl4AI in a containerized environment. Stay tuned for updates!
Crawl4AI is available as Docker images for easy deployment. You can either pull directly from Docker Hub (recommended) or build from the repository.
#### Option 1: Docker Hub (Recommended)
```bash
# Pull and run from Docker Hub (choose one):
docker pull unclecode/crawl4ai:basic # Basic crawling features
docker pull unclecode/crawl4ai:all # Full installation (ML, LLM support)
docker pull unclecode/crawl4ai:gpu # GPU-enabled version
# Run the container
docker run -p 11235:11235 unclecode/crawl4ai:basic # Replace 'basic' with your chosen version
```
#### Option 2: Build from Repository
```bash
# Clone the repository
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai
# Build the image (INSTALL_TYPE options: basic, all)
docker build -t crawl4ai:local \
  --build-arg INSTALL_TYPE=basic \
  .
# Run your local build
docker run -p 11235:11235 crawl4ai:local
```
Quick test (works for both options):
```python
import requests
# Submit a crawl job
response = requests.post(
"http://localhost:11235/crawl",
json={"urls": "https://example.com", "priority": 10}
)
task_id = response.json()["task_id"]
# Get results
result = requests.get(f"http://localhost:11235/task/{task_id}")
```
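
The task may still be running when you first poll it; a small wait loop makes the test more robust (the `status` and `result` field names below are assumptions, not confirmed by this diff):

```python
import time

# Continue from the snippet above: poll until the crawl finishes.
while True:
    task = requests.get(f"http://localhost:11235/task/{task_id}").json()
    if task.get("status") in ("completed", "failed"):  # field name assumed
        break
    time.sleep(1)

print(task.get("result"))  # field name assumed
```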
For advanced configuration, environment variables, and usage examples, see our [Docker Deployment Guide](https://crawl4ai.com/mkdocs/basic/docker-deployment/).
For more detailed installation instructions and options, please refer to our [Installation Guide](https://crawl4ai.com/mkdocs/installation).
## Quick Start 🚀
@@ -235,7 +292,7 @@ if __name__ == "__main__":
asyncio.run(extract_news_teasers())
```
For more advanced usage examples, check out our [Examples](https://crawl4ai.com/mkdocs/full_details/advanced_jsoncss_extraction.md) section in the documentation.
For more advanced usage examples, check out our [Examples](https://crawl4ai.com/mkdocs/extraction/css-advanced/) section in the documentation.
### Extracting Structured Data with OpenAI
@@ -338,7 +395,8 @@ if __name__ == "__main__":
This example demonstrates Crawl4AI's ability to handle complex scenarios where content is loaded asynchronously. It crawls multiple pages of GitHub commits, executing JavaScript to load new content and using custom hooks to ensure data is loaded before proceeding.
For more advanced usage examples, check out our [Examples](https://crawl4ai.com/mkdocs/full_details/session_based_crawling.md) section in the documentation.
For more advanced usage examples, check out our [Examples](https://crawl4ai.com/mkdocs/tutorial/episode_12_Session-Based_Crawling_for_Dynamic_Websites/) section in the documentation.
</details>
## Speed Comparison 🚀
@@ -347,7 +405,7 @@ Crawl4AI is designed with speed as a primary focus. Our goal is to provide the f
We've conducted a speed comparison between Crawl4AI and Firecrawl, a paid service. The results demonstrate Crawl4AI's superior performance:
```
```bash
Firecrawl:
Time taken: 7.02 seconds
Content length: 42074 characters
@@ -365,6 +423,7 @@ Images found: 89
```
As you can see, Crawl4AI outperforms Firecrawl significantly:
- Simple crawl: Crawl4AI is over 4 times faster than Firecrawl.
- With JavaScript execution: Even when executing JavaScript to load more content (doubling the number of images found), Crawl4AI is still faster than Firecrawl's simple crawl.
@@ -392,6 +451,34 @@ For questions, suggestions, or feedback, feel free to reach out:
Happy Crawling! 🕸️🚀
# Mission
Our mission is to unlock the untapped potential of personal and enterprise data in the digital age. In today's world, individuals and organizations generate vast amounts of valuable digital footprints, yet this data remains largely uncapitalized as a true asset.
Our open-source solution empowers developers and innovators to build tools for data extraction and structuring, laying the foundation for a new era of data ownership. By transforming personal and enterprise data into structured, tradeable assets, we're creating opportunities for individuals to capitalize on their digital footprints and for organizations to unlock the value of their collective knowledge.
This democratization of data represents the first step toward a shared data economy, where willing participation in data sharing drives AI advancement while ensuring the benefits flow back to data creators. Through this approach, we're building a future where AI development is powered by authentic human knowledge rather than synthetic alternatives.
![Mission Diagram](./docs/assets/pitch-dark.svg)
For a detailed exploration of our vision, opportunities, and pathway forward, please see our [full mission statement](./MISSION.md).
## Key Opportunities
- **Data Capitalization**: Transform digital footprints into valuable assets that can appear on personal and enterprise balance sheets
- **Authentic Data**: Unlock the vast reservoir of real human insights and knowledge for AI advancement
- **Shared Economy**: Create new value streams where data creators directly benefit from their contributions
## Development Pathway
1. **Open-Source Foundation**: Building transparent, community-driven data extraction tools
2. **Data Capitalization Platform**: Creating tools to structure and value digital assets
3. **Shared Data Marketplace**: Establishing an economic platform for ethical data exchange
For a detailed exploration of our vision, challenges, and solutions, please see our [full mission statement](./MISSION.md).
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=unclecode/crawl4ai&type=Date)](https://star-history.com/#unclecode/crawl4ai&Date)

crawl4ai/__init__.py

@@ -2,8 +2,8 @@
from .async_webcrawler import AsyncWebCrawler
from .models import CrawlResult
__version__ = "0.3.72"
from ._version import __version__
# __version__ = "0.3.73"
__all__ = [
"AsyncWebCrawler",

crawl4ai/_version.py (new file, 2 lines)

@@ -0,0 +1,2 @@
# crawl4ai/_version.py
__version__ = "0.3.73"

(deleted file, 558 lines)

@@ -1,558 +0,0 @@
import asyncio
import base64
import time
from abc import ABC, abstractmethod
from typing import Callable, Dict, Any, List, Optional, Awaitable
import os
from playwright.async_api import async_playwright, Page, Browser, Error
from io import BytesIO
from PIL import Image, ImageDraw, ImageFont
from pathlib import Path
from playwright.async_api import ProxySettings
from pydantic import BaseModel
import hashlib
import json
import uuid
from playwright_stealth import stealth_async
class AsyncCrawlResponse(BaseModel):
html: str
response_headers: Dict[str, str]
status_code: int
screenshot: Optional[str] = None
get_delayed_content: Optional[Callable[[Optional[float]], Awaitable[str]]] = None
class Config:
arbitrary_types_allowed = True
class AsyncCrawlerStrategy(ABC):
@abstractmethod
async def crawl(self, url: str, **kwargs) -> AsyncCrawlResponse:
pass
@abstractmethod
async def crawl_many(self, urls: List[str], **kwargs) -> List[AsyncCrawlResponse]:
pass
@abstractmethod
async def take_screenshot(self, url: str) -> str:
pass
@abstractmethod
def update_user_agent(self, user_agent: str):
pass
@abstractmethod
def set_hook(self, hook_type: str, hook: Callable):
pass
class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
def __init__(self, use_cached_html=False, js_code=None, **kwargs):
self.use_cached_html = use_cached_html
self.user_agent = kwargs.get(
"user_agent",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
"(KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
)
self.proxy = kwargs.get("proxy")
self.headless = kwargs.get("headless", True)
self.browser_type = kwargs.get("browser_type", "chromium")
self.headers = kwargs.get("headers", {})
self.sessions = {}
self.session_ttl = 1800
self.js_code = js_code
self.verbose = kwargs.get("verbose", False)
self.playwright = None
self.browser = None
self.hooks = {
'on_browser_created': None,
'on_user_agent_updated': None,
'on_execution_started': None,
'before_goto': None,
'after_goto': None,
'before_return_html': None,
'before_retrieve_html': None
}
async def __aenter__(self):
await self.start()
return self
async def __aexit__(self, exc_type, exc_val, exc_tb):
await self.close()
async def start(self):
if self.playwright is None:
self.playwright = await async_playwright().start()
if self.browser is None:
browser_args = {
"headless": self.headless,
"args": [
"--disable-gpu",
"--no-sandbox",
"--disable-dev-shm-usage",
"--disable-blink-features=AutomationControlled",
"--disable-infobars",
"--window-position=0,0",
"--ignore-certificate-errors",
"--ignore-certificate-errors-spki-list",
# "--headless=new", # Use the new headless mode
]
}
# Add proxy settings if a proxy is specified
if self.proxy:
proxy_settings = ProxySettings(server=self.proxy)
browser_args["proxy"] = proxy_settings
# Select the appropriate browser based on the browser_type
if self.browser_type == "firefox":
self.browser = await self.playwright.firefox.launch(**browser_args)
elif self.browser_type == "webkit":
self.browser = await self.playwright.webkit.launch(**browser_args)
else:
self.browser = await self.playwright.chromium.launch(**browser_args)
await self.execute_hook('on_browser_created', self.browser)
async def close(self):
if self.browser:
await self.browser.close()
self.browser = None
if self.playwright:
await self.playwright.stop()
self.playwright = None
def __del__(self):
if self.browser or self.playwright:
asyncio.get_event_loop().run_until_complete(self.close())
def set_hook(self, hook_type: str, hook: Callable):
if hook_type in self.hooks:
self.hooks[hook_type] = hook
else:
raise ValueError(f"Invalid hook type: {hook_type}")
async def execute_hook(self, hook_type: str, *args):
hook = self.hooks.get(hook_type)
if hook:
if asyncio.iscoroutinefunction(hook):
return await hook(*args)
else:
return hook(*args)
return args[0] if args else None
def update_user_agent(self, user_agent: str):
self.user_agent = user_agent
def set_custom_headers(self, headers: Dict[str, str]):
self.headers = headers
async def kill_session(self, session_id: str):
if session_id in self.sessions:
context, page, _ = self.sessions[session_id]
await page.close()
await context.close()
del self.sessions[session_id]
def _cleanup_expired_sessions(self):
current_time = time.time()
expired_sessions = [
sid for sid, (_, _, last_used) in self.sessions.items()
if current_time - last_used > self.session_ttl
]
for sid in expired_sessions:
asyncio.create_task(self.kill_session(sid))
async def smart_wait(self, page: Page, wait_for: str, timeout: float = 30000):
wait_for = wait_for.strip()
if wait_for.startswith('js:'):
# Explicitly specified JavaScript
js_code = wait_for[3:].strip()
return await self.csp_compliant_wait(page, js_code, timeout)
elif wait_for.startswith('css:'):
# Explicitly specified CSS selector
css_selector = wait_for[4:].strip()
try:
await page.wait_for_selector(css_selector, timeout=timeout)
except Error as e:
if 'Timeout' in str(e):
raise TimeoutError(f"Timeout after {timeout}ms waiting for selector '{css_selector}'")
else:
raise ValueError(f"Invalid CSS selector: '{css_selector}'")
else:
# Auto-detect based on content
if wait_for.startswith('()') or wait_for.startswith('function'):
# It's likely a JavaScript function
return await self.csp_compliant_wait(page, wait_for, timeout)
else:
# Assume it's a CSS selector first
try:
await page.wait_for_selector(wait_for, timeout=timeout)
except Error as e:
if 'Timeout' in str(e):
raise TimeoutError(f"Timeout after {timeout}ms waiting for selector '{wait_for}'")
else:
# If it's not a timeout error, it might be an invalid selector
# Let's try to evaluate it as a JavaScript function as a fallback
try:
return await self.csp_compliant_wait(page, f"() => {{{wait_for}}}", timeout)
except Error:
raise ValueError(f"Invalid wait_for parameter: '{wait_for}'. "
"It should be either a valid CSS selector, a JavaScript function, "
"or explicitly prefixed with 'js:' or 'css:'.")
async def csp_compliant_wait(self, page: Page, user_wait_function: str, timeout: float = 30000):
wrapper_js = f"""
async () => {{
const userFunction = {user_wait_function};
const startTime = Date.now();
while (true) {{
if (await userFunction()) {{
return true;
}}
if (Date.now() - startTime > {timeout}) {{
throw new Error('Timeout waiting for condition');
}}
await new Promise(resolve => setTimeout(resolve, 100));
}}
}}
"""
try:
await page.evaluate(wrapper_js)
except TimeoutError:
raise TimeoutError(f"Timeout after {timeout}ms waiting for condition")
except Exception as e:
raise RuntimeError(f"Error in wait condition: {str(e)}")
async def process_iframes(self, page):
# Find all iframes
iframes = await page.query_selector_all('iframe')
for i, iframe in enumerate(iframes):
try:
# Add a unique identifier to the iframe
await iframe.evaluate(f'(element) => element.id = "iframe-{i}"')
# Get the frame associated with this iframe
frame = await iframe.content_frame()
if frame:
# Wait for the frame to load
await frame.wait_for_load_state('load', timeout=30000) # 30 seconds timeout
# Extract the content of the iframe's body
iframe_content = await frame.evaluate('() => document.body.innerHTML')
# Generate a unique class name for this iframe
class_name = f'extracted-iframe-content-{i}'
# Replace the iframe with a div containing the extracted content
_iframe = iframe_content.replace('`', '\\`')
await page.evaluate(f"""
() => {{
const iframe = document.getElementById('iframe-{i}');
const div = document.createElement('div');
div.innerHTML = `{_iframe}`;
div.className = '{class_name}';
iframe.replaceWith(div);
}}
""")
else:
print(f"Warning: Could not access content frame for iframe {i}")
except Exception as e:
print(f"Error processing iframe {i}: {str(e)}")
# Return the page object
return page
async def crawl(self, url: str, **kwargs) -> AsyncCrawlResponse:
response_headers = {}
status_code = None
self._cleanup_expired_sessions()
session_id = kwargs.get("session_id")
if session_id:
context, page, _ = self.sessions.get(session_id, (None, None, None))
if not context:
context = await self.browser.new_context(
user_agent=self.user_agent,
viewport={"width": 1920, "height": 1080},
proxy={"server": self.proxy} if self.proxy else None
)
await context.set_extra_http_headers(self.headers)
page = await context.new_page()
self.sessions[session_id] = (context, page, time.time())
else:
context = await self.browser.new_context(
user_agent=self.user_agent,
viewport={"width": 1920, "height": 1080},
proxy={"server": self.proxy} if self.proxy else None
)
await context.set_extra_http_headers(self.headers)
if kwargs.get("override_navigator", False):
# Inject scripts to override navigator properties
await context.add_init_script("""
// Pass the Permissions Test.
const originalQuery = window.navigator.permissions.query;
window.navigator.permissions.query = (parameters) => (
parameters.name === 'notifications' ?
Promise.resolve({ state: Notification.permission }) :
originalQuery(parameters)
);
Object.defineProperty(navigator, 'webdriver', {
get: () => undefined
});
window.navigator.chrome = {
runtime: {},
// Add other properties if necessary
};
Object.defineProperty(navigator, 'plugins', {
get: () => [1, 2, 3, 4, 5],
});
Object.defineProperty(navigator, 'languages', {
get: () => ['en-US', 'en'],
});
Object.defineProperty(document, 'hidden', {
get: () => false
});
Object.defineProperty(document, 'visibilityState', {
get: () => 'visible'
});
""")
page = await context.new_page()
try:
if self.verbose:
print(f"[LOG] 🕸️ Crawling {url} using AsyncPlaywrightCrawlerStrategy...")
if self.use_cached_html:
cache_file_path = os.path.join(
Path.home(), ".crawl4ai", "cache", hashlib.md5(url.encode()).hexdigest()
)
if os.path.exists(cache_file_path):
html = ""
with open(cache_file_path, "r") as f:
html = f.read()
# retrieve response headers and status code from cache
with open(cache_file_path + ".meta", "r") as f:
meta = json.load(f)
response_headers = meta.get("response_headers", {})
status_code = meta.get("status_code")
response = AsyncCrawlResponse(
html=html, response_headers=response_headers, status_code=status_code
)
return response
if not kwargs.get("js_only", False):
await self.execute_hook('before_goto', page)
response = await page.goto("about:blank")
await stealth_async(page)
response = await page.goto(
url, wait_until="domcontentloaded", timeout=kwargs.get("page_timeout", 60000)
)
# await stealth_async(page)
# response = await page.goto("about:blank")
# await stealth_async(page)
# await page.evaluate(f"window.location.href = '{url}'")
await self.execute_hook('after_goto', page)
# Get status code and headers
status_code = response.status
response_headers = response.headers
else:
status_code = 200
response_headers = {}
await page.wait_for_selector('body')
await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
js_code = kwargs.get("js_code", kwargs.get("js", self.js_code))
if js_code:
if isinstance(js_code, str):
await page.evaluate(js_code)
elif isinstance(js_code, list):
for js in js_code:
await page.evaluate(js)
await page.wait_for_load_state('networkidle')
# Check for on execution event
await self.execute_hook('on_execution_started', page)
if kwargs.get("simulate_user", False):
# Simulate user interactions
await page.mouse.move(100, 100)
await page.mouse.down()
await page.mouse.up()
await page.keyboard.press('ArrowDown')
# Handle the wait_for parameter
wait_for = kwargs.get("wait_for")
if wait_for:
try:
await self.smart_wait(page, wait_for, timeout=kwargs.get("page_timeout", 60000))
except Exception as e:
raise RuntimeError(f"Wait condition failed: {str(e)}")
# Update image dimensions
update_image_dimensions_js = """
() => {
return new Promise((resolve) => {
const filterImage = (img) => {
// Filter out images that are too small
if (img.width < 100 && img.height < 100) return false;
// Filter out images that are not visible
const rect = img.getBoundingClientRect();
if (rect.width === 0 || rect.height === 0) return false;
// Filter out images with certain class names (e.g., icons, thumbnails)
if (img.classList.contains('icon') || img.classList.contains('thumbnail')) return false;
// Filter out images with certain patterns in their src (e.g., placeholder images)
if (img.src.includes('placeholder') || img.src.includes('icon')) return false;
return true;
};
const images = Array.from(document.querySelectorAll('img')).filter(filterImage);
let imagesLeft = images.length;
if (imagesLeft === 0) {
resolve();
return;
}
const checkImage = (img) => {
if (img.complete && img.naturalWidth !== 0) {
img.setAttribute('width', img.naturalWidth);
img.setAttribute('height', img.naturalHeight);
imagesLeft--;
if (imagesLeft === 0) resolve();
}
};
images.forEach(img => {
checkImage(img);
if (!img.complete) {
img.onload = () => {
checkImage(img);
};
img.onerror = () => {
imagesLeft--;
if (imagesLeft === 0) resolve();
};
}
});
// Fallback timeout of 5 seconds
setTimeout(() => resolve(), 5000);
});
}
"""
await page.evaluate(update_image_dimensions_js)
# Wait a bit for any onload events to complete
await page.wait_for_timeout(100)
# Process iframes
if kwargs.get("process_iframes", False):
page = await self.process_iframes(page)
await self.execute_hook('before_retrieve_html', page)
# Check if delay_before_return_html is set then wait for that time
delay_before_return_html = kwargs.get("delay_before_return_html")
if delay_before_return_html:
await asyncio.sleep(delay_before_return_html)
html = await page.content()
await self.execute_hook('before_return_html', page, html)
# Check if kwargs has screenshot=True then take screenshot
screenshot_data = None
if kwargs.get("screenshot"):
screenshot_data = await self.take_screenshot(url)
if self.verbose:
print(f"[LOG] ✅ Crawled {url} successfully!")
if self.use_cached_html:
cache_file_path = os.path.join(
Path.home(), ".crawl4ai", "cache", hashlib.md5(url.encode()).hexdigest()
)
with open(cache_file_path, "w", encoding="utf-8") as f:
f.write(html)
# store response headers and status code in cache
with open(cache_file_path + ".meta", "w", encoding="utf-8") as f:
json.dump({
"response_headers": response_headers,
"status_code": status_code
}, f)
async def get_delayed_content(delay: float = 5.0) -> str:
if self.verbose:
print(f"[LOG] Waiting for {delay} seconds before retrieving content for {url}")
await asyncio.sleep(delay)
return await page.content()
response = AsyncCrawlResponse(
html=html,
response_headers=response_headers,
status_code=status_code,
screenshot=screenshot_data,
get_delayed_content=get_delayed_content
)
return response
except Error as e:
raise Error(f"Failed to crawl {url}: {str(e)}")
finally:
if not session_id:
await page.close()
await context.close()
async def crawl_many(self, urls: List[str], **kwargs) -> List[AsyncCrawlResponse]:
semaphore_count = kwargs.get('semaphore_count', 5) # Adjust as needed
semaphore = asyncio.Semaphore(semaphore_count)
async def crawl_with_semaphore(url):
async with semaphore:
return await self.crawl(url, **kwargs)
tasks = [crawl_with_semaphore(url) for url in urls]
results = await asyncio.gather(*tasks, return_exceptions=True)
return [result if not isinstance(result, Exception) else str(result) for result in results]
async def take_screenshot(self, url: str, wait_time=1000) -> str:
async with await self.browser.new_context(user_agent=self.user_agent) as context:
page = await context.new_page()
try:
await page.goto(url, wait_until="domcontentloaded", timeout=30000)
# Wait for a specified time (default is 1 second)
await page.wait_for_timeout(wait_time)
screenshot = await page.screenshot(full_page=True)
return base64.b64encode(screenshot).decode('utf-8')
except Exception as e:
error_message = f"Failed to take screenshot: {str(e)}"
print(error_message)
# Generate an error image
img = Image.new('RGB', (800, 600), color='black')
draw = ImageDraw.Draw(img)
font = ImageFont.load_default()
draw.text((10, 10), error_message, fill=(255, 255, 255), font=font)
buffered = BytesIO()
img.save(buffered, format="JPEG")
return base64.b64encode(buffered.getvalue()).decode('utf-8')
finally:
await page.close()

crawl4ai/async_crawler_strategy.py

@@ -3,7 +3,8 @@ import base64
import time
from abc import ABC, abstractmethod
from typing import Callable, Dict, Any, List, Optional, Awaitable
import os
import os, sys, shutil
import tempfile, subprocess
from playwright.async_api import async_playwright, Page, Browser, Error
from io import BytesIO
from PIL import Image, ImageDraw, ImageFont
@@ -13,6 +14,7 @@ from pydantic import BaseModel
import hashlib
import json
import uuid
from playwright_stealth import StealthConfig, stealth_async
stealth_config = StealthConfig(
@@ -31,6 +33,106 @@ stealth_config = StealthConfig(
)
class ManagedBrowser:
def __init__(self, browser_type: str = "chromium", user_data_dir: Optional[str] = None, headless: bool = False):
self.browser_type = browser_type
self.user_data_dir = user_data_dir
self.headless = headless
self.browser_process = None
self.temp_dir = None
self.debugging_port = 9222
async def start(self) -> str:
"""
Starts the browser process and returns the CDP endpoint URL.
If user_data_dir is not provided, creates a temporary directory.
"""
# Create temp dir if needed
if not self.user_data_dir:
self.temp_dir = tempfile.mkdtemp(prefix="browser-profile-")
self.user_data_dir = self.temp_dir
# Get browser path and args based on OS and browser type
browser_path = self._get_browser_path()
args = self._get_browser_args()
# Start browser process
try:
self.browser_process = subprocess.Popen(
args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
)
await asyncio.sleep(2) # Give browser time to start
return f"http://localhost:{self.debugging_port}"
except Exception as e:
await self.cleanup()
raise Exception(f"Failed to start browser: {e}")
def _get_browser_path(self) -> str:
"""Returns the browser executable path based on OS and browser type"""
if sys.platform == "darwin": # macOS
paths = {
"chromium": "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
"firefox": "/Applications/Firefox.app/Contents/MacOS/firefox",
"webkit": "/Applications/Safari.app/Contents/MacOS/Safari"
}
elif sys.platform == "win32": # Windows
paths = {
"chromium": "C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe",
"firefox": "C:\\Program Files\\Mozilla Firefox\\firefox.exe",
"webkit": None # WebKit not supported on Windows
}
else: # Linux
paths = {
"chromium": "google-chrome",
"firefox": "firefox",
"webkit": None # WebKit not supported on Linux
}
return paths.get(self.browser_type)
def _get_browser_args(self) -> List[str]:
"""Returns browser-specific command line arguments"""
base_args = [self._get_browser_path()]
if self.browser_type == "chromium":
args = [
f"--remote-debugging-port={self.debugging_port}",
f"--user-data-dir={self.user_data_dir}",
]
if self.headless:
args.append("--headless=new")
elif self.browser_type == "firefox":
args = [
"--remote-debugging-port", str(self.debugging_port),
"--profile", self.user_data_dir,
]
if self.headless:
args.append("--headless")
else:
raise NotImplementedError(f"Browser type {self.browser_type} not supported")
return base_args + args
async def cleanup(self):
"""Cleanup browser process and temporary directory"""
if self.browser_process:
try:
self.browser_process.terminate()
await asyncio.sleep(1)
if self.browser_process.poll() is None:
self.browser_process.kill()
except Exception as e:
print(f"Error terminating browser: {e}")
if self.temp_dir and os.path.exists(self.temp_dir):
try:
shutil.rmtree(self.temp_dir)
except Exception as e:
print(f"Error removing temporary directory: {e}")
class AsyncCrawlResponse(BaseModel):
html: str
response_headers: Dict[str, str]
@@ -82,6 +184,9 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
self.playwright = None
self.browser = None
self.sleep_on_close = kwargs.get("sleep_on_close", False)
self.use_managed_browser = kwargs.get("use_managed_browser", False)
self.user_data_dir = kwargs.get("user_data_dir", None)
self.managed_browser = None
self.hooks = {
'on_browser_created': None,
'on_user_agent_updated': None,
@@ -103,36 +208,46 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
if self.playwright is None:
self.playwright = await async_playwright().start()
if self.browser is None:
browser_args = {
"headless": self.headless,
"args": [
"--disable-gpu",
"--no-sandbox",
"--disable-dev-shm-usage",
"--disable-blink-features=AutomationControlled",
"--disable-infobars",
"--window-position=0,0",
"--ignore-certificate-errors",
"--ignore-certificate-errors-spki-list",
# "--headless=new", # Use the new headless mode
]
}
# Add proxy settings if a proxy is specified
if self.proxy:
proxy_settings = ProxySettings(server=self.proxy)
browser_args["proxy"] = proxy_settings
elif self.proxy_config:
proxy_settings = ProxySettings(server=self.proxy_config.get("server"), username=self.proxy_config.get("username"), password=self.proxy_config.get("password"))
browser_args["proxy"] = proxy_settings
# Select the appropriate browser based on the browser_type
if self.browser_type == "firefox":
self.browser = await self.playwright.firefox.launch(**browser_args)
elif self.browser_type == "webkit":
self.browser = await self.playwright.webkit.launch(**browser_args)
if self.use_managed_browser:
# Use managed browser approach
self.managed_browser = ManagedBrowser(
browser_type=self.browser_type,
user_data_dir=self.user_data_dir,
headless=self.headless
)
cdp_url = await self.managed_browser.start()
self.browser = await self.playwright.chromium.connect_over_cdp(cdp_url)
else:
self.browser = await self.playwright.chromium.launch(**browser_args)
browser_args = {
"headless": self.headless,
"args": [
"--disable-gpu",
"--no-sandbox",
"--disable-dev-shm-usage",
"--disable-blink-features=AutomationControlled",
"--disable-infobars",
"--window-position=0,0",
"--ignore-certificate-errors",
"--ignore-certificate-errors-spki-list",
# "--headless=new", # Use the new headless mode
]
}
# Add proxy settings if a proxy is specified
if self.proxy:
proxy_settings = ProxySettings(server=self.proxy)
browser_args["proxy"] = proxy_settings
elif self.proxy_config:
proxy_settings = ProxySettings(server=self.proxy_config.get("server"), username=self.proxy_config.get("username"), password=self.proxy_config.get("password"))
browser_args["proxy"] = proxy_settings
# Select the appropriate browser based on the browser_type
if self.browser_type == "firefox":
self.browser = await self.playwright.firefox.launch(**browser_args)
elif self.browser_type == "webkit":
self.browser = await self.playwright.webkit.launch(**browser_args)
else:
self.browser = await self.playwright.chromium.launch(**browser_args)
await self.execute_hook('on_browser_created', self.browser)
@@ -142,6 +257,9 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
if self.browser:
await self.browser.close()
self.browser = None
if self.managed_browser:
await self.managed_browser.cleanup()
self.managed_browser = None
if self.playwright:
await self.playwright.stop()
self.playwright = None
@@ -399,7 +517,48 @@ class AsyncPlaywrightCrawlerStrategy(AsyncCrawlerStrategy):
status_code = 200
response_headers = {}
await page.wait_for_selector('body')
# Replace the current wait_for_selector line with this more robust check:
try:
# First wait for body to exist, regardless of visibility
await page.wait_for_selector('body', state='attached', timeout=30000)
# Then wait for it to become visible by checking CSS
await page.wait_for_function("""
() => {
const body = document.body;
const style = window.getComputedStyle(body);
return style.display !== 'none' &&
style.visibility !== 'hidden' &&
style.opacity !== '0';
}
""", timeout=30000)
except Error as e:
# If waiting fails, let's try to diagnose the issue
visibility_info = await page.evaluate("""
() => {
const body = document.body;
const style = window.getComputedStyle(body);
return {
display: style.display,
visibility: style.visibility,
opacity: style.opacity,
hasContent: body.innerHTML.length,
classList: Array.from(body.classList)
}
}
""")
if self.verbose:
print(f"Body visibility debug info: {visibility_info}")
# Even if body is hidden, we might still want to proceed
if kwargs.get('ignore_body_visibility', True):
if self.verbose:
print("Proceeding despite hidden body...")
pass
else:
raise Error(f"Body element is hidden: {visibility_info}")
await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
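
The `ignore_body_visibility` flag above defaults to `True`, so hidden-body pages are still crawled; a minimal sketch of opting out (kwargs passthrough from `arun` to the strategy is assumed):

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler() as crawler:
        # Raise instead of crawling pages whose <body> never becomes visible.
        result = await crawler.arun(
            url="https://example.com",
            ignore_body_visibility=False,
        )
        print(result.status_code)

asyncio.run(main())
```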

crawl4ai/async_database.py

@@ -2,18 +2,82 @@ import os
from pathlib import Path
import aiosqlite
import asyncio
from typing import Optional, Tuple
from typing import Optional, Tuple, Dict
from contextlib import asynccontextmanager
import logging
# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
DB_PATH = os.path.join(Path.home(), ".crawl4ai")
os.makedirs(DB_PATH, exist_ok=True)
DB_PATH = os.path.join(DB_PATH, "crawl4ai.db")
class AsyncDatabaseManager:
def __init__(self):
def __init__(self, pool_size: int = 10, max_retries: int = 3):
self.db_path = DB_PATH
self.pool_size = pool_size
self.max_retries = max_retries
self.connection_pool: Dict[int, aiosqlite.Connection] = {}
self.pool_lock = asyncio.Lock()
self.connection_semaphore = asyncio.Semaphore(pool_size)
async def initialize(self):
"""Initialize the database and connection pool"""
await self.ainit_db()
async def cleanup(self):
"""Cleanup connections when shutting down"""
async with self.pool_lock:
for conn in self.connection_pool.values():
await conn.close()
self.connection_pool.clear()
@asynccontextmanager
async def get_connection(self):
"""Connection pool manager"""
async with self.connection_semaphore:
task_id = id(asyncio.current_task())
try:
async with self.pool_lock:
if task_id not in self.connection_pool:
conn = await aiosqlite.connect(
self.db_path,
timeout=30.0
)
await conn.execute('PRAGMA journal_mode = WAL')
await conn.execute('PRAGMA busy_timeout = 5000')
self.connection_pool[task_id] = conn
yield self.connection_pool[task_id]
except Exception as e:
logger.error(f"Connection error: {e}")
raise
finally:
async with self.pool_lock:
if task_id in self.connection_pool:
await self.connection_pool[task_id].close()
del self.connection_pool[task_id]
async def execute_with_retry(self, operation, *args):
"""Execute database operations with retry logic"""
for attempt in range(self.max_retries):
try:
async with self.get_connection() as db:
result = await operation(db, *args)
await db.commit()
return result
except Exception as e:
if attempt == self.max_retries - 1:
logger.error(f"Operation failed after {self.max_retries} attempts: {e}")
raise
await asyncio.sleep(1 * (attempt + 1))  # Linear backoff between attempts
async def ainit_db(self):
async with aiosqlite.connect(self.db_path) as db:
"""Initialize database schema"""
async def _init(db):
await db.execute('''
CREATE TABLE IF NOT EXISTS crawled_data (
url TEXT PRIMARY KEY,
@@ -28,87 +92,101 @@ class AsyncDatabaseManager:
screenshot TEXT DEFAULT ""
)
''')
await db.commit()
await self.execute_with_retry(_init)
await self.update_db_schema()
async def update_db_schema(self):
async with aiosqlite.connect(self.db_path) as db:
# Check if the 'media' column exists
"""Update database schema if needed"""
async def _check_columns(db):
cursor = await db.execute("PRAGMA table_info(crawled_data)")
columns = await cursor.fetchall()
column_names = [column[1] for column in columns]
if 'media' not in column_names:
await self.aalter_db_add_column('media')
# Check for other missing columns and add them if necessary
for column in ['links', 'metadata', 'screenshot']:
if column not in column_names:
await self.aalter_db_add_column(column)
return [column[1] for column in columns]
column_names = await self.execute_with_retry(_check_columns)
for column in ['media', 'links', 'metadata', 'screenshot']:
if column not in column_names:
await self.aalter_db_add_column(column)
async def aalter_db_add_column(self, new_column: str):
try:
async with aiosqlite.connect(self.db_path) as db:
await db.execute(f'ALTER TABLE crawled_data ADD COLUMN {new_column} TEXT DEFAULT ""')
await db.commit()
print(f"Added column '{new_column}' to the database.")
except Exception as e:
print(f"Error altering database to add {new_column} column: {e}")
"""Add new column to the database"""
async def _alter(db):
await db.execute(f'ALTER TABLE crawled_data ADD COLUMN {new_column} TEXT DEFAULT ""')
logger.info(f"Added column '{new_column}' to the database.")
await self.execute_with_retry(_alter)
async def aget_cached_url(self, url: str) -> Optional[Tuple[str, str, str, str, str, str, str, bool, str]]:
"""Retrieve cached URL data"""
async def _get(db):
async with db.execute(
'SELECT url, html, cleaned_html, markdown, extracted_content, success, media, links, metadata, screenshot FROM crawled_data WHERE url = ?',
(url,)
) as cursor:
return await cursor.fetchone()
try:
async with aiosqlite.connect(self.db_path) as db:
async with db.execute('SELECT url, html, cleaned_html, markdown, extracted_content, success, media, links, metadata, screenshot FROM crawled_data WHERE url = ?', (url,)) as cursor:
return await cursor.fetchone()
return await self.execute_with_retry(_get)
except Exception as e:
print(f"Error retrieving cached URL: {e}")
logger.error(f"Error retrieving cached URL: {e}")
return None
async def acache_url(self, url: str, html: str, cleaned_html: str, markdown: str, extracted_content: str, success: bool, media: str = "{}", links: str = "{}", metadata: str = "{}", screenshot: str = ""):
"""Cache URL data with retry logic"""
async def _cache(db):
await db.execute('''
INSERT INTO crawled_data (url, html, cleaned_html, markdown, extracted_content, success, media, links, metadata, screenshot)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(url) DO UPDATE SET
html = excluded.html,
cleaned_html = excluded.cleaned_html,
markdown = excluded.markdown,
extracted_content = excluded.extracted_content,
success = excluded.success,
media = excluded.media,
links = excluded.links,
metadata = excluded.metadata,
screenshot = excluded.screenshot
''', (url, html, cleaned_html, markdown, extracted_content, success, media, links, metadata, screenshot))
try:
async with aiosqlite.connect(self.db_path) as db:
await db.execute('''
INSERT INTO crawled_data (url, html, cleaned_html, markdown, extracted_content, success, media, links, metadata, screenshot)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(url) DO UPDATE SET
html = excluded.html,
cleaned_html = excluded.cleaned_html,
markdown = excluded.markdown,
extracted_content = excluded.extracted_content,
success = excluded.success,
media = excluded.media,
links = excluded.links,
metadata = excluded.metadata,
screenshot = excluded.screenshot
''', (url, html, cleaned_html, markdown, extracted_content, success, media, links, metadata, screenshot))
await db.commit()
await self.execute_with_retry(_cache)
except Exception as e:
print(f"Error caching URL: {e}")
logger.error(f"Error caching URL: {e}")
async def aget_total_count(self) -> int:
"""Get total number of cached URLs"""
async def _count(db):
async with db.execute('SELECT COUNT(*) FROM crawled_data') as cursor:
result = await cursor.fetchone()
return result[0] if result else 0
try:
async with aiosqlite.connect(self.db_path) as db:
async with db.execute('SELECT COUNT(*) FROM crawled_data') as cursor:
result = await cursor.fetchone()
return result[0] if result else 0
return await self.execute_with_retry(_count)
except Exception as e:
print(f"Error getting total count: {e}")
logger.error(f"Error getting total count: {e}")
return 0
async def aclear_db(self):
"""Clear all data from the database"""
async def _clear(db):
await db.execute('DELETE FROM crawled_data')
try:
async with aiosqlite.connect(self.db_path) as db:
await db.execute('DELETE FROM crawled_data')
await db.commit()
await self.execute_with_retry(_clear)
except Exception as e:
print(f"Error clearing database: {e}")
logger.error(f"Error clearing database: {e}")
async def aflush_db(self):
try:
async with aiosqlite.connect(self.db_path) as db:
await db.execute('DROP TABLE IF EXISTS crawled_data')
await db.commit()
except Exception as e:
print(f"Error flushing database: {e}")
"""Drop the entire table"""
async def _flush(db):
await db.execute('DROP TABLE IF EXISTS crawled_data')
try:
await self.execute_with_retry(_flush)
except Exception as e:
logger.error(f"Error flushing database: {e}")
# Create a singleton instance
async_db_manager = AsyncDatabaseManager()
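
A minimal sketch of driving the new retry wrapper directly (module path assumed; the table and columns come from the schema above):

```python
import asyncio
from crawl4ai.async_database import async_db_manager  # module path assumed

async def count_successful_crawls() -> int:
    # The operation receives a pooled aiosqlite connection; execute_with_retry
    # commits on success and retries with a growing delay on failure.
    async def _op(db):
        async with db.execute(
            'SELECT COUNT(*) FROM crawled_data WHERE success = 1'
        ) as cursor:
            row = await cursor.fetchone()
            return row[0] if row else 0
    return await async_db_manager.execute_with_retry(_op)

print(asyncio.run(count_successful_crawls()))
```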

crawl4ai/async_webcrawler.py

@@ -16,7 +16,7 @@ from .utils import (
InvalidCSSSelectorError,
format_html
)
from ._version import __version__ as crawl4ai_version
class AsyncWebCrawler:
def __init__(
@@ -46,9 +46,12 @@ class AsyncWebCrawler:
await self.crawler_strategy.__aexit__(exc_type, exc_val, exc_tb)
async def awarmup(self):
# Print a message for crawl4ai and its version
print(f"[LOG] 🚀 Crawl4AI {crawl4ai_version}")
if self.verbose:
print("[LOG] 🌤️ Warming up the AsyncWebCrawler")
await async_db_manager.ainit_db()
# await async_db_manager.ainit_db()
await async_db_manager.initialize()
await self.arun(
url="https://google.com/",
word_count_threshold=5,
@@ -125,6 +128,7 @@ class AsyncWebCrawler:
verbose,
bool(cached),
async_response=async_response,
bypass_cache=bypass_cache,
**kwargs,
)
crawl_result.status_code = async_response.status_code if async_response else 200
@@ -168,7 +172,6 @@ class AsyncWebCrawler:
]
return await asyncio.gather(*tasks)
async def aprocess_html(
self,
url: str,
@@ -243,7 +246,7 @@ class AsyncWebCrawler:
screenshot = None if not screenshot else screenshot
-        if not is_cached:
+        if not is_cached or kwargs.get("bypass_cache", False) or self.always_by_pass_cache:
await async_db_manager.acache_url(
url,
html,
@@ -274,10 +277,13 @@ class AsyncWebCrawler:
)
async def aclear_cache(self):
-        await async_db_manager.aclear_db()
+        # await async_db_manager.aclear_db()
+        await async_db_manager.cleanup()
async def aflush_cache(self):
await async_db_manager.aflush_db()
async def aget_cache_size(self):
return await async_db_manager.aget_total_count()

View File

@@ -14,12 +14,97 @@ from .utils import (
sanitize_html,
extract_metadata,
InvalidCSSSelectorError,
-    CustomHTML2Text,
+    # CustomHTML2Text,
normalize_url,
is_external_url
)
from .html2text import HTML2Text
class CustomHTML2Text(HTML2Text):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.inside_pre = False
self.inside_code = False
self.preserve_tags = set() # Set of tags to preserve
self.current_preserved_tag = None
self.preserved_content = []
self.preserve_depth = 0
# Configuration options
self.skip_internal_links = False
self.single_line_break = False
self.mark_code = False
self.include_sup_sub = False
self.body_width = 0
self.ignore_mailto_links = True
self.ignore_links = False
self.escape_backslash = False
self.escape_dot = False
self.escape_plus = False
self.escape_dash = False
self.escape_snob = False
def update_params(self, **kwargs):
"""Update parameters and set preserved tags."""
for key, value in kwargs.items():
if key == 'preserve_tags':
self.preserve_tags = set(value)
else:
setattr(self, key, value)
def handle_tag(self, tag, attrs, start):
# Handle preserved tags
if tag in self.preserve_tags:
if start:
if self.preserve_depth == 0:
self.current_preserved_tag = tag
self.preserved_content = []
# Format opening tag with attributes
attr_str = ''.join(f' {k}="{v}"' for k, v in attrs.items() if v is not None)
self.preserved_content.append(f'<{tag}{attr_str}>')
self.preserve_depth += 1
return
else:
self.preserve_depth -= 1
if self.preserve_depth == 0:
self.preserved_content.append(f'</{tag}>')
# Output the preserved HTML block with proper spacing
preserved_html = ''.join(self.preserved_content)
self.o('\n' + preserved_html + '\n')
self.current_preserved_tag = None
return
# If we're inside a preserved tag, collect all content
if self.preserve_depth > 0:
if start:
# Format nested tags with attributes
attr_str = ''.join(f' {k}="{v}"' for k, v in attrs.items() if v is not None)
self.preserved_content.append(f'<{tag}{attr_str}>')
else:
self.preserved_content.append(f'</{tag}>')
return
# Handle pre tags
if tag == 'pre':
if start:
self.o('```\n')
self.inside_pre = True
else:
self.o('\n```')
self.inside_pre = False
elif tag in ["h1", "h2", "h3", "h4", "h5", "h6"]:
pass
else:
super().handle_tag(tag, attrs, start)
def handle_data(self, data, entity_char=False):
"""Override handle_data to capture content within preserved tags."""
if self.preserve_depth > 0:
self.preserved_content.append(data)
return
super().handle_data(data, entity_char)
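# Illustrative usage of the tag preservation system (hypothetical snippet, not
# part of this diff): keep <table> blocks as raw HTML inside the markdown output.
#   h = CustomHTML2Text()
#   h.update_params(preserve_tags={'table'})
#   md = h.handle('<p>Intro</p><table><tr><td>cell</td></tr></table>')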
class ContentScrappingStrategy(ABC):
@abstractmethod
def scrap(self, url: str, html: str, **kwargs) -> Dict[str, Any]:

View File

@@ -1,25 +0,0 @@
{
"_name_or_path": "sentence-transformers/all-MiniLM-L6-v2",
"architectures": [
"BertModel"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 384,
"initializer_range": 0.02,
"intermediate_size": 1536,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 6,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.27.4",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}

Binary file not shown.

View File

@@ -1,7 +0,0 @@
{
"cls_token": "[CLS]",
"mask_token": "[MASK]",
"pad_token": "[PAD]",
"sep_token": "[SEP]",
"unk_token": "[UNK]"
}

File diff suppressed because it is too large

View File

@@ -1,15 +0,0 @@
{
"cls_token": "[CLS]",
"do_basic_tokenize": true,
"do_lower_case": true,
"mask_token": "[MASK]",
"model_max_length": 512,
"never_split": null,
"pad_token": "[PAD]",
"sep_token": "[SEP]",
"special_tokens_map_file": "/Users/hammad/.cache/huggingface/hub/models--sentence-transformers--all-MiniLM-L6-v2/snapshots/7dbbc90392e2f80f3d3c277d6e90027e55de9125/special_tokens_map.json",
"strip_accents": null,
"tokenize_chinese_chars": true,
"tokenizer_class": "BertTokenizer",
"unk_token": "[UNK]"
}

File diff suppressed because it is too large

View File

@@ -178,7 +178,7 @@ def escape_json_string(s):
return s
-class CustomHTML2Text(HTML2Text):
+class CustomHTML2Text_v0(HTML2Text):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.inside_pre = False
@@ -981,6 +981,19 @@ def format_html(html_string):
return soup.prettify()
def normalize_url(href, base_url):
"""Normalize URLs to ensure consistent format"""
from urllib.parse import urljoin, urlparse
# Parse base URL to get components
parsed_base = urlparse(base_url)
if not parsed_base.scheme or not parsed_base.netloc:
raise ValueError(f"Invalid base URL format: {base_url}")
# Use urljoin to handle all cases
normalized = urljoin(base_url, href.strip())
return normalized
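# Illustrative examples (not part of the diff): urljoin resolves absolute and
# relative hrefs against the base URL:
#   normalize_url("/about", "https://example.com/blog/post")  -> "https://example.com/about"
#   normalize_url("page2", "https://example.com/blog/post")   -> "https://example.com/blog/page2"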
def normalize_url_tmp(href, base_url):
"""Normalize URLs to ensure consistent format"""
# Extract protocol and domain from base URL
try:

BIN
docs/.DS_Store vendored

Binary file not shown.

BIN
docs/assets/pitch-dark.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 33 KiB

View File

@@ -0,0 +1,64 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 800 500">
<!-- Background -->
<rect width="800" height="500" fill="#1a1a1a"/>
<!-- Opportunities Section -->
<g transform="translate(50,50)">
<!-- Opportunity 1 Box -->
<rect x="0" y="0" width="300" height="150" rx="10" fill="#1a2d3d" stroke="#64b5f6" stroke-width="2"/>
<text x="150" y="30" text-anchor="middle" font-family="Arial" font-weight="bold" font-size="16" fill="#64b5f6">Data Capitalization Opportunity</text>
<text x="150" y="60" text-anchor="middle" font-family="Arial" font-size="12" fill="#e0e0e0">
<tspan x="150" dy="0">Transform digital footprints into assets</tspan>
<tspan x="150" dy="20">Personal data as capital</tspan>
<tspan x="150" dy="20">Enterprise knowledge valuation</tspan>
<tspan x="150" dy="20">New form of wealth creation</tspan>
</text>
<!-- Opportunity 2 Box -->
<rect x="0" y="200" width="300" height="150" rx="10" fill="#1a2d1a" stroke="#81c784" stroke-width="2"/>
<text x="150" y="230" text-anchor="middle" font-family="Arial" font-weight="bold" font-size="16" fill="#81c784">Authentic Data Potential</text>
<text x="150" y="260" text-anchor="middle" font-family="Arial" font-size="12" fill="#e0e0e0">
<tspan x="150" dy="0">Vast reservoir of real insights</tspan>
<tspan x="150" dy="20">Enhanced AI development</tspan>
<tspan x="150" dy="20">Diverse human knowledge</tspan>
<tspan x="150" dy="20">Willing participation model</tspan>
</text>
</g>
<!-- Development Pathway -->
<g transform="translate(450,50)">
<!-- Step 1 Box -->
<rect x="0" y="0" width="300" height="100" rx="10" fill="#2d1a2d" stroke="#ce93d8" stroke-width="2"/>
<text x="150" y="35" text-anchor="middle" font-family="Arial" font-weight="bold" font-size="16" fill="#ce93d8">1. Open-Source Foundation</text>
<text x="150" y="65" text-anchor="middle" font-family="Arial" font-size="12" fill="#e0e0e0">Data extraction engine &amp; community development</text>
<!-- Step 2 Box -->
<rect x="0" y="125" width="300" height="100" rx="10" fill="#2d1a2d" stroke="#ce93d8" stroke-width="2"/>
<text x="150" y="160" text-anchor="middle" font-family="Arial" font-weight="bold" font-size="16" fill="#ce93d8">2. Data Capitalization Platform</text>
<text x="150" y="190" text-anchor="middle" font-family="Arial" font-size="12" fill="#e0e0e0">Tools to structure &amp; value digital assets</text>
<!-- Step 3 Box -->
<rect x="0" y="250" width="300" height="100" rx="10" fill="#2d1a2d" stroke="#ce93d8" stroke-width="2"/>
<text x="150" y="285" text-anchor="middle" font-family="Arial" font-weight="bold" font-size="16" fill="#ce93d8">3. Shared Data Marketplace</text>
<text x="150" y="315" text-anchor="middle" font-family="Arial" font-size="12" fill="#e0e0e0">Economic platform for data exchange</text>
</g>
<!-- Connecting Arrows -->
<g transform="translate(400,125)">
<path d="M-20,0 L40,0" stroke="#666" stroke-width="2" marker-end="url(#arrowhead)"/>
<path d="M-20,200 L40,200" stroke="#666" stroke-width="2" marker-end="url(#arrowhead)"/>
</g>
<!-- Arrow Marker -->
<defs>
<marker id="arrowhead" markerWidth="10" markerHeight="7" refX="9" refY="3.5" orient="auto">
<polygon points="0 0, 10 3.5, 0 7" fill="#666"/>
</marker>
</defs>
<!-- Vision Box at Bottom -->
<g transform="translate(200,420)">
<rect x="0" y="0" width="400" height="60" rx="10" fill="#2d2613" stroke="#ffd54f" stroke-width="2"/>
<text x="200" y="35" text-anchor="middle" font-family="Arial" font-weight="bold" font-size="16" fill="#ffd54f">Economic Vision: Shared Data Economy</text>
</g>
</svg>

After

Width:  |  Height:  |  Size: 3.8 KiB

View File

@@ -1,157 +0,0 @@
### Extraction Strategies
#### 1. LLMExtractionStrategy
```python
LLMExtractionStrategy(
# Core Parameters
provider: str = DEFAULT_PROVIDER, # LLM provider (e.g., "openai/gpt-4", "huggingface/...", "ollama/...")
api_token: Optional[str] = None, # API token for the provider
instruction: str = None, # Custom instruction for extraction
schema: Dict = None, # Pydantic model schema for structured extraction
extraction_type: str = "block", # Type of extraction: "block" or "schema"
# Chunking Parameters
chunk_token_threshold: int = CHUNK_TOKEN_THRESHOLD, # Maximum tokens per chunk
overlap_rate: float = OVERLAP_RATE, # Overlap between chunks
word_token_rate: float = WORD_TOKEN_RATE, # Conversion rate from words to tokens
apply_chunking: bool = True, # Whether to apply text chunking
# API Configuration
base_url: str = None, # Base URL for API calls
api_base: str = None, # Alternative base URL
extra_args: Dict = {}, # Additional provider-specific arguments
verbose: bool = False # Enable verbose logging
)
```
Usage Example:
```python
class NewsArticle(BaseModel):
title: str
content: str
strategy = LLMExtractionStrategy(
provider="ollama/nemotron",
api_token="your-token",
schema=NewsArticle.schema(),
instruction="Extract news article content with title and main text"
)
result = await crawler.arun(url="https://example.com", extraction_strategy=strategy)
```
#### 2. JsonCssExtractionStrategy
```python
JsonCssExtractionStrategy(
schema: Dict[str, Any], # Schema defining extraction rules
verbose: bool = False # Enable verbose logging
)
# Schema Structure
schema = {
"name": str, # Name of the extraction schema
"baseSelector": str, # CSS selector for base elements
"fields": [
{
"name": str, # Field name
"selector": str, # CSS selector
"type": str, # Field type: "text", "attribute", "html", "regex", "nested", "list", "nested_list"
"attribute": str, # For type="attribute"
"pattern": str, # For type="regex"
"transform": str, # Optional: "lowercase", "uppercase", "strip"
"default": Any, # Default value if extraction fails
"fields": List[Dict], # For nested/list types
}
]
}
```
Usage Example:
```python
schema = {
"name": "News Articles",
"baseSelector": "article.news-item",
"fields": [
{
"name": "title",
"selector": "h1",
"type": "text",
"transform": "strip"
},
{
"name": "date",
"selector": ".date",
"type": "attribute",
"attribute": "datetime"
}
]
}
strategy = JsonCssExtractionStrategy(schema)
result = await crawler.arun(url="https://example.com", extraction_strategy=strategy)
```
#### 3. CosineStrategy
```python
CosineStrategy(
# Content Filtering
semantic_filter: str = None, # Keyword filter for document filtering
word_count_threshold: int = 10, # Minimum words per cluster
sim_threshold: float = 0.3, # Similarity threshold for filtering
# Clustering Parameters
max_dist: float = 0.2, # Maximum distance for clustering
linkage_method: str = 'ward', # Clustering linkage method
top_k: int = 3, # Number of top categories to extract
# Model Configuration
model_name: str = 'sentence-transformers/all-MiniLM-L6-v2', # Embedding model
verbose: bool = False # Enable verbose logging
)
```
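Usage Example (an illustrative sketch; assumes an active `crawler` as in the examples above):
```python
strategy = CosineStrategy(
    semantic_filter="technology products",  # keep clusters related to this topic
    word_count_threshold=10,
    top_k=3
)
result = await crawler.arun(url="https://example.com", extraction_strategy=strategy)
```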
### Chunking Strategies
#### 1. RegexChunking
```python
RegexChunking(
patterns: List[str] = None # List of regex patterns for splitting text
# Default pattern: [r'\n\n']
)
```
Usage Example:
```python
chunker = RegexChunking(patterns=[r'\n\n', r'\.\s+']) # Split on double newlines and sentences
chunks = chunker.chunk(text)
```
#### 2. SlidingWindowChunking
```python
SlidingWindowChunking(
window_size: int = 100, # Size of the window in words
step: int = 50, # Number of words to slide the window
)
```
Usage Example:
```python
chunker = SlidingWindowChunking(window_size=200, step=100)
chunks = chunker.chunk(text) # Creates overlapping chunks of 200 words, moving 100 words at a time
```
#### 3. OverlappingWindowChunking
```python
OverlappingWindowChunking(
window_size: int = 1000, # Size of each chunk in words
overlap: int = 100 # Number of words to overlap between chunks
)
```
Usage Example:
```python
chunker = OverlappingWindowChunking(window_size=500, overlap=50)
chunks = chunker.chunk(text) # Creates 500-word chunks with 50-word overlap
```

View File

@@ -1,175 +0,0 @@
# Features
## Current Features
1. Async-first architecture for high-performance web crawling
2. Built-in anti-bot detection bypass ("magic mode")
3. Multiple browser engine support (Chromium, Firefox, WebKit)
4. Smart session management with automatic cleanup
5. Automatic content cleaning and relevance scoring
6. Built-in markdown generation with formatting preservation
7. Intelligent image scoring and filtering
8. Automatic popup and overlay removal
9. Smart wait conditions (CSS/JavaScript based)
10. Multi-provider LLM integration (OpenAI, HuggingFace, Ollama)
11. Schema-based structured data extraction
12. Automated iframe content processing
13. Intelligent link categorization (internal/external)
14. Multiple chunking strategies for large content
15. Real-time HTML cleaning and sanitization
16. Automatic screenshot capabilities
17. Social media link filtering
18. Semantic similarity-based content clustering
19. Human behavior simulation for anti-bot bypass
20. Proxy support with authentication
21. Automatic resource cleanup
22. Custom CSS selector-based extraction
23. Automatic content relevance scoring ("fit" content)
24. Recursive website crawling capabilities
25. Flexible hook system for customization
26. Built-in caching system
27. Domain-based content filtering
28. Dynamic content handling with JavaScript execution
29. Automatic media content extraction and classification
30. Metadata extraction and processing
31. Customizable HTML to Markdown conversion
32. Token-aware content chunking for LLM processing
33. Automatic response header and status code handling
34. Browser fingerprint customization
35. Multiple extraction strategies (LLM, CSS, Cosine, XPATH)
36. Automatic error image generation for failed screenshots
37. Smart content overlap handling for large texts
38. Built-in rate limiting for batch processing
39. Automatic cookie handling
40. Browser Console logging and debugging capabilities
## Feature Techs
• Browser Management
- Asynchronous browser control
- Multi-browser support (Chromium, Firefox, WebKit)
- Headless mode support
- Browser cleanup and resource management
- Custom browser arguments and configuration
- Context management with `__aenter__` and `__aexit__`
• Session Handling
- Session management with TTL (Time To Live)
- Session reuse capabilities
- Session cleanup for expired sessions
- Session-based context preservation
• Stealth Features
- Playwright stealth configuration
- Navigator properties override
- WebDriver detection evasion
- Chrome app simulation
- Plugin simulation
- Language preferences simulation
- Hardware concurrency simulation
- Media codecs simulation
• Network Features
- Proxy support with authentication
- Custom headers management
- Cookie handling
- Response header capture
- Status code tracking
- Network idle detection
• Page Interaction
- Smart wait functionality for multiple conditions
- CSS selector-based waiting
- JavaScript condition waiting
- Custom JavaScript execution
- User interaction simulation (mouse/keyboard)
- Page scrolling
- Timeout management
- Load state monitoring
• Content Processing
- HTML content extraction
- Iframe processing and content extraction
- Delayed content retrieval
- Content caching
- Cache file management
- HTML cleaning and processing
• Image Handling
- Screenshot capabilities (full page)
- Base64 encoding of screenshots
- Image dimension updating
- Image filtering (size/visibility)
- Error image generation
- Natural width/height preservation
• Overlay Management
- Popup removal
- Cookie notice removal
- Newsletter dialog removal
- Modal removal
- Fixed position element removal
- Z-index based overlay detection
- Visibility checking
• Hook System
- Browser creation hooks
- User agent update hooks
- Execution start hooks
- Navigation hooks (before/after goto)
- HTML retrieval hooks
- HTML return hooks
• Error Handling
- Browser error catching
- Network error handling
- Timeout handling
- Screenshot error recovery
- Invalid selector handling
- General exception management
• Performance Features
- Concurrent URL processing
- Semaphore-based rate limiting
- Async gathering of results
- Resource cleanup
- Memory management
• Debug Features
- Console logging
- Page error logging
- Verbose mode
- Error message generation
- Warning system
• Security Features
- Certificate error handling
- Sandbox configuration
- GPU handling
- CSP (Content Security Policy) compliant waiting
• Configuration
- User agent customization
- Viewport configuration
- Timeout configuration
- Browser type selection
- Proxy configuration
- Header configuration
• Data Models
- Pydantic model for responses
- Type hints throughout code
- Structured response format
- Optional response fields
• File System Integration
- Cache directory management
- File path handling
- Cache metadata storage
- File read/write operations
• Metadata Handling
- Response headers capture
- Status code tracking
- Cache metadata
- Session tracking
- Timestamp management

View File

@@ -1,150 +0,0 @@
### 1. Basic Web Crawling
```python
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url="https://example.com")
print(result.markdown) # Get clean markdown content
print(result.html) # Get raw HTML
print(result.cleaned_html) # Get cleaned HTML
```
### 2. Browser Control Options
- Multiple Browser Support
```python
# Choose between different browser engines
crawler = AsyncWebCrawler(browser_type="firefox") # or "chromium", "webkit"
crawler = AsyncWebCrawler(headless=False) # For visible browser
```
- Proxy Configuration
```python
crawler = AsyncWebCrawler(proxy="http://proxy.example.com:8080")
# Or with authentication
crawler = AsyncWebCrawler(proxy_config={
"server": "http://proxy.example.com:8080",
"username": "user",
"password": "pass"
})
```
### 3. Content Selection & Filtering
- CSS Selector Support
```python
result = await crawler.arun(
url="https://example.com",
css_selector=".main-content" # Extract specific content
)
```
- Content Filtering Options
```python
result = await crawler.arun(
url="https://example.com",
word_count_threshold=10, # Minimum words per block
excluded_tags=['form', 'header'], # Tags to exclude
exclude_external_links=True, # Remove external links
exclude_social_media_links=True, # Remove social media links
exclude_external_images=True # Remove external images
)
```
### 4. Dynamic Content Handling
- JavaScript Execution
```python
result = await crawler.arun(
url="https://example.com",
js_code="window.scrollTo(0, document.body.scrollHeight)" # Execute custom JS
)
```
- Wait Conditions
```python
result = await crawler.arun(
    url="https://example.com",
    wait_for="css:.my-element"  # Wait for element
)
# Or wait for a JavaScript condition instead:
result = await crawler.arun(
    url="https://example.com",
    wait_for="js:() => document.readyState === 'complete'"  # Wait for condition
)
```
### 5. Anti-Bot Protection Handling
```python
result = await crawler.arun(
url="https://example.com",
simulate_user=True, # Simulate human behavior
override_navigator=True, # Mask automation signals
magic=True # Enable all anti-detection features
)
```
### 6. Session Management
```python
session_id = "my_session"
result1 = await crawler.arun(url="https://example.com/page1", session_id=session_id)
result2 = await crawler.arun(url="https://example.com/page2", session_id=session_id)
await crawler.crawler_strategy.kill_session(session_id)
```
### 7. Media Handling
- Screenshot Capture
```python
result = await crawler.arun(
url="https://example.com",
screenshot=True
)
base64_screenshot = result.screenshot
```
- Media Extraction
```python
result = await crawler.arun(url="https://example.com")
print(result.media['images']) # List of images
print(result.media['videos']) # List of videos
print(result.media['audios']) # List of audio files
```
### 8. Structured Data Extraction
- CSS-based Extraction
```python
schema = {
"name": "News Articles",
"baseSelector": "article",
"fields": [
{"name": "title", "selector": "h1", "type": "text"},
{"name": "date", "selector": ".date", "type": "text"}
]
}
extraction_strategy = JsonCssExtractionStrategy(schema)
result = await crawler.arun(
url="https://example.com",
extraction_strategy=extraction_strategy
)
structured_data = json.loads(result.extracted_content)
```
- LLM-based Extraction (Multiple Providers)
```python
class NewsArticle(BaseModel):
title: str
summary: str
strategy = LLMExtractionStrategy(
provider="ollama/nemotron", # or "huggingface/...", "ollama/..."
api_token="your-token",
schema=NewsArticle.schema(),
instruction="Extract news article details..."
)
result = await crawler.arun(
url="https://example.com",
extraction_strategy=strategy
)
```
### 9. Content Cleaning & Processing
```python
result = await crawler.arun(
url="https://example.com",
remove_overlay_elements=True, # Remove popups/modals
process_iframes=True, # Process iframe content
)
print(result.fit_markdown) # Get most relevant content
print(result.fit_html) # Get cleaned HTML
```

View File

@@ -1,457 +0,0 @@
### 1. Basic Web Crawling
Basic web crawling provides the foundation for extracting content from websites. The library supports both simple single-page crawling and recursive website crawling.
```python
# Simple page crawling
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url="https://example.com")
print(result.html) # Raw HTML
print(result.markdown) # Cleaned markdown
print(result.cleaned_html) # Cleaned HTML
# Recursive website crawling
class SimpleWebsiteScraper:
def __init__(self, crawler: AsyncWebCrawler):
self.crawler = crawler
async def scrape(self, start_url: str, max_depth: int):
results = await self.scrape_recursive(start_url, max_depth)
return results
# Usage
async with AsyncWebCrawler() as crawler:
scraper = SimpleWebsiteScraper(crawler)
results = await scraper.scrape("https://example.com", max_depth=2)
```
### 2. Browser Control Options
The library provides extensive control over browser behavior, allowing customization of browser type, headless mode, and proxy settings.
```python
# Browser Type Selection
async with AsyncWebCrawler(
browser_type="firefox", # Options: "chromium", "firefox", "webkit"
headless=False, # For visible browser
verbose=True # Enable logging
) as crawler:
result = await crawler.arun(url="https://example.com")
# Proxy Configuration
async with AsyncWebCrawler(
proxy_config={
"server": "http://proxy.example.com:8080",
"username": "user",
"password": "pass"
},
headers={
"User-Agent": "Custom User Agent",
"Accept-Language": "en-US,en;q=0.9"
}
) as crawler:
result = await crawler.arun(url="https://example.com")
```
### 3. Content Selection & Filtering
The library offers multiple ways to select and filter content, from CSS selectors to word count thresholds.
```python
# CSS Selector and Content Filtering
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
css_selector="article.main-content", # Extract specific content
word_count_threshold=10, # Minimum words per block
excluded_tags=['form', 'header'], # Tags to exclude
exclude_external_links=True, # Remove external links
exclude_social_media_links=True, # Remove social media links
exclude_domains=["pinterest.com", "facebook.com"] # Exclude specific domains
)
# Custom HTML to Text Options
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
html2text={
"escape_dot": False,
"links_each_paragraph": True,
"protect_links": True
}
)
```
### 4. Dynamic Content Handling
The library provides sophisticated handling of dynamic content with JavaScript execution and wait conditions.
```python
# JavaScript Execution and Wait Conditions
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
js_code=[
"window.scrollTo(0, document.body.scrollHeight);",
"document.querySelector('.load-more').click();"
],
wait_for="css:.dynamic-content", # Wait for element
delay_before_return_html=2.0 # Wait after JS execution
)
# Smart Wait Conditions
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
wait_for="""() => {
return document.querySelectorAll('.item').length > 10;
}""",
page_timeout=60000 # 60 seconds timeout
)
```
### 5. Advanced Link Analysis
The library provides comprehensive link analysis capabilities, distinguishing between internal and external links, with options for filtering and processing.
```python
# Basic Link Analysis
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url="https://example.com")
# Access internal and external links
for internal_link in result.links['internal']:
print(f"Internal: {internal_link['href']} - {internal_link['text']}")
for external_link in result.links['external']:
print(f"External: {external_link['href']} - {external_link['text']}")
# Advanced Link Filtering
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
exclude_external_links=True, # Remove all external links
exclude_social_media_links=True, # Remove social media links
exclude_social_media_domains=[ # Custom social media domains
"facebook.com", "twitter.com", "instagram.com"
],
exclude_domains=["pinterest.com"] # Specific domains to exclude
)
```
### 6. Anti-Bot Protection Handling
The library includes sophisticated anti-detection mechanisms to handle websites with bot protection.
```python
# Basic Anti-Detection
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
simulate_user=True, # Simulate human behavior
override_navigator=True # Override navigator properties
)
# Advanced Anti-Detection with Magic Mode
async with AsyncWebCrawler(headless=False) as crawler:
result = await crawler.arun(
url="https://example.com",
magic=True, # Enable all anti-detection features
remove_overlay_elements=True, # Remove popups/modals automatically
# Custom navigator properties
js_code="""
Object.defineProperty(navigator, 'webdriver', {
get: () => undefined
});
"""
)
```
### 7. Session Management
Session management allows maintaining state across multiple requests and handling cookies.
```python
# Basic Session Management
async with AsyncWebCrawler() as crawler:
session_id = "my_session"
# Login
login_result = await crawler.arun(
url="https://example.com/login",
session_id=session_id,
js_code="document.querySelector('form').submit();"
)
# Use same session for subsequent requests
protected_result = await crawler.arun(
url="https://example.com/protected",
session_id=session_id
)
# Clean up session
await crawler.crawler_strategy.kill_session(session_id)
# Advanced Session with Custom Cookies
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
session_id="custom_session",
cookies=[{
"name": "sessionId",
"value": "abc123",
"domain": "example.com"
}]
)
```
### 8. Screenshot and Media Handling
The library provides comprehensive media handling capabilities, including screenshots and media content extraction.
```python
# Screenshot Capture
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
screenshot=True,
screenshot_wait_for=2.0 # Wait before taking screenshot
)
# Save screenshot
if result.screenshot:
with open("screenshot.png", "wb") as f:
f.write(base64.b64decode(result.screenshot))
# Media Extraction
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(url="https://example.com")
# Process images with metadata
for image in result.media['images']:
print(f"Image: {image['src']}")
print(f"Alt text: {image['alt']}")
print(f"Context: {image['desc']}")
print(f"Relevance score: {image['score']}")
# Process videos and audio
for video in result.media['videos']:
print(f"Video: {video['src']}")
for audio in result.media['audios']:
print(f"Audio: {audio['src']}")
```
### 9. Structured Data Extraction & Chunking
The library supports multiple strategies for structured data extraction and content chunking.
```python
# LLM-based Extraction
class NewsArticle(BaseModel):
title: str
content: str
author: str
extraction_strategy = LLMExtractionStrategy(
provider='openai/gpt-4',
api_token="your-token",
schema=NewsArticle.schema(),
instruction="Extract news article details",
chunk_token_threshold=1000,
overlap_rate=0.1
)
# CSS-based Extraction
schema = {
"name": "Product Listing",
"baseSelector": ".product-card",
"fields": [
{
"name": "title",
"selector": "h2",
"type": "text"
},
{
"name": "price",
"selector": ".price",
"type": "text",
"transform": "strip"
}
]
}
css_strategy = JsonCssExtractionStrategy(schema)
# Text Chunking
from crawl4ai.chunking_strategy import OverlappingWindowChunking
chunking_strategy = OverlappingWindowChunking(
window_size=1000,
overlap=100
)
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
extraction_strategy=extraction_strategy,
chunking_strategy=chunking_strategy
)
```
### 10. Content Cleaning & Processing
The library provides extensive content cleaning and processing capabilities, ensuring high-quality output in various formats.
```python
# Basic Content Cleaning
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
remove_overlay_elements=True, # Remove popups/modals
process_iframes=True, # Process iframe content
word_count_threshold=10 # Minimum words per block
)
print(result.cleaned_html) # Clean HTML
print(result.fit_html) # Most relevant HTML content
print(result.fit_markdown) # Most relevant markdown content
# Advanced Content Processing
async with AsyncWebCrawler() as crawler:
result = await crawler.arun(
url="https://example.com",
excluded_tags=['form', 'header', 'footer', 'nav'],
html2text={
"escape_dot": False,
"body_width": 0,
"protect_links": True,
"unicode_snob": True,
"ignore_links": False,
"ignore_images": False,
"ignore_emphasis": False,
"bypass_tables": False,
"ignore_tables": False
}
)
```
### Advanced Usage Patterns
#### 1. Combining Multiple Features
```python
async with AsyncWebCrawler(
browser_type="chromium",
headless=False,
verbose=True
) as crawler:
result = await crawler.arun(
url="https://example.com",
# Anti-bot measures
magic=True,
simulate_user=True,
# Content selection
css_selector="article.main",
word_count_threshold=10,
# Dynamic content handling
js_code="window.scrollTo(0, document.body.scrollHeight);",
wait_for="css:.dynamic-content",
# Content filtering
exclude_external_links=True,
exclude_social_media_links=True,
# Media handling
screenshot=True,
process_iframes=True,
# Content cleaning
remove_overlay_elements=True
)
```
#### 2. Custom Extraction Pipeline
```python
# Define custom schemas and strategies
class Article(BaseModel):
title: str
content: str
date: str
# CSS extraction for initial content
css_schema = {
"name": "Article Extraction",
"baseSelector": "article",
"fields": [
{"name": "title", "selector": "h1", "type": "text"},
{"name": "content", "selector": ".content", "type": "html"},
{"name": "date", "selector": ".date", "type": "text"}
]
}
# LLM processing for semantic analysis
llm_strategy = LLMExtractionStrategy(
provider="ollama/nemotron",
api_token="your-token",
schema=Article.schema(),
instruction="Extract and clean article content"
)
# Chunking strategy for large content
chunking = OverlappingWindowChunking(window_size=1000, overlap=100)
async with AsyncWebCrawler() as crawler:
# First pass: Extract structure
css_result = await crawler.arun(
url="https://example.com",
extraction_strategy=JsonCssExtractionStrategy(css_schema)
)
# Second pass: Semantic processing
llm_result = await crawler.arun(
url="https://example.com",
extraction_strategy=llm_strategy,
chunking_strategy=chunking
)
```
#### 3. Website Crawling with Custom Processing
```python
class CustomWebsiteCrawler:
    def __init__(self, crawler: AsyncWebCrawler):
        self.crawler = crawler
        self.results = {}

    async def process_page(self, url: str) -> Dict:
        result = await self.crawler.arun(
            url=url,
            magic=True,
            word_count_threshold=10,
            exclude_external_links=True,
            process_iframes=True,
            remove_overlay_elements=True
        )
        # Process internal links
        internal_links = [
            link['href'] for link in result.links['internal']
            if self._is_valid_link(link['href'])
        ]
        # Extract media
        media_urls = [img['src'] for img in result.media['images']]
        return {
            'content': result.markdown,
            'links': internal_links,
            'media': media_urls,
            'metadata': result.metadata
        }

    def _is_valid_link(self, href: str) -> bool:
        # Minimal placeholder: keep absolute http(s) links only
        return href.startswith("http")

    async def crawl_website(self, start_url: str, max_depth: int = 2):
        visited = set()
        queue = [(start_url, 0)]
        while queue:
            url, depth = queue.pop(0)
            if depth > max_depth or url in visited:
                continue
            visited.add(url)
            self.results[url] = await self.process_page(url)
            # Queue newly discovered internal links for the next depth level
            for link in self.results[url]['links']:
                queue.append((link, depth + 1))
```

View File

@@ -1,282 +0,0 @@
### AsyncWebCrawler Constructor Parameters
```python
AsyncWebCrawler(
# Core Browser Settings
browser_type: str = "chromium", # Options: "chromium", "firefox", "webkit"
headless: bool = True, # Whether to run browser in headless mode
verbose: bool = False, # Enable verbose logging
# Cache Settings
always_by_pass_cache: bool = False, # Always bypass cache regardless of run settings
base_directory: str = str(Path.home()), # Base directory for cache storage
# Network Settings
proxy: str = None, # Simple proxy URL (e.g., "http://proxy.example.com:8080")
proxy_config: Dict = None, # Advanced proxy settings with auth: {"server": str, "username": str, "password": str}
# Browser Behavior
sleep_on_close: bool = False, # Wait before closing browser
# Other Settings passed to AsyncPlaywrightCrawlerStrategy
user_agent: str = None, # Custom user agent string
headers: Dict[str, str] = {}, # Custom HTTP headers
js_code: Union[str, List[str]] = None, # Default JavaScript to execute
)
```
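Usage Example (illustrative values only; all parameters are optional):
```python
async with AsyncWebCrawler(
    browser_type="chromium",
    headless=True,
    verbose=True
) as crawler:
    result = await crawler.arun(url="https://example.com")
```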
### arun() Method Parameters
```python
arun(
# Core Parameters
url: str, # Required: URL to crawl
# Content Selection
css_selector: str = None, # CSS selector to extract specific content
word_count_threshold: int = MIN_WORD_THRESHOLD, # Minimum words for content blocks
# Cache Control
bypass_cache: bool = False, # Bypass cache for this request
# Session Management
session_id: str = None, # Session identifier for persistent browsing
# Screenshot Options
screenshot: bool = False, # Take page screenshot
screenshot_wait_for: float = None, # Wait time before screenshot
# Content Processing
process_iframes: bool = False, # Process iframe content
remove_overlay_elements: bool = False, # Remove popups/modals
# Anti-Bot/Detection
simulate_user: bool = False, # Simulate human-like behavior
override_navigator: bool = False, # Override navigator properties
magic: bool = False, # Enable all anti-detection features
# Content Filtering
excluded_tags: List[str] = None, # HTML tags to exclude
exclude_external_links: bool = False, # Remove external links
exclude_social_media_links: bool = False, # Remove social media links
exclude_external_images: bool = False, # Remove external images
exclude_social_media_domains: List[str] = None, # Additional social media domains to exclude
remove_forms: bool = False, # Remove all form elements
# JavaScript Handling
js_code: Union[str, List[str]] = None, # JavaScript to execute
js_only: bool = False, # Only execute JavaScript without reloading page
wait_for: str = None, # Wait condition (CSS selector or JS function)
# Page Loading
page_timeout: int = 60000, # Page load timeout in milliseconds
delay_before_return_html: float = None, # Wait before returning HTML
# Debug Options
log_console: bool = False, # Log browser console messages
# Content Format Control
only_text: bool = False, # Extract only text content
keep_data_attributes: bool = False, # Keep data-* attributes in HTML
# Markdown Options
include_links_on_markdown: bool = False, # Include links in markdown output
html2text: Dict = {}, # HTML to text conversion options
# Extraction Strategy
extraction_strategy: ExtractionStrategy = None, # Strategy for structured data extraction
# Advanced Browser Control
user_agent: str = None, # Override user agent for this request
)
```
### Extraction Strategy Parameters
```python
# JsonCssExtractionStrategy
{
"name": str, # Name of extraction schema
"baseSelector": str, # Base CSS selector
"fields": [
{
"name": str, # Field name
"selector": str, # CSS selector
"type": str, # Data type ("text", etc.)
"transform": str = None # Optional transformation
}
]
}
# LLMExtractionStrategy
{
"provider": str, # LLM provider (e.g., "openai/gpt-4", "huggingface/...", "ollama/...")
"api_token": str, # API token
"schema": dict, # Pydantic model schema
"extraction_type": str, # Type of extraction ("schema", etc.)
"instruction": str, # Extraction instruction
"extra_args": dict = None, # Additional provider-specific arguments
"extra_headers": dict = None # Additional HTTP headers
}
```
### HTML to Text Conversion Options (html2text parameter)
```python
{
"escape_dot": bool = True, # Escape dots in text
# Other html2text library options
}
```
### CrawlResult Fields
```python
class CrawlResult(BaseModel):
# Basic Information
url: str # The crawled URL
# Example: "https://example.com"
success: bool # Whether the crawl was successful
# Example: True/False
status_code: Optional[int] # HTTP status code
# Example: 200, 404, 500
# Content Fields
html: str # Raw HTML content
# Example: "<html><body>...</body></html>"
cleaned_html: Optional[str] # HTML after cleaning and processing
# Example: "<article><p>Clean content...</p></article>"
fit_html: Optional[str] # Most relevant HTML content after content cleaning strategy
# Example: "<div><p>Most relevant content...</p></div>"
markdown: Optional[str] # HTML converted to markdown
# Example: "# Title\n\nContent paragraph..."
fit_markdown: Optional[str] # Most relevant content in markdown
# Example: "# Main Article\n\nKey content..."
# Media Content
media: Dict[str, List[Dict]] = {} # Extracted media information
# Example: {
# "images": [
# {
# "src": "https://example.com/image.jpg",
# "alt": "Image description",
# "desc": "Contextual description",
# "score": 5, # Relevance score
# "type": "image"
# }
# ],
# "videos": [
# {
# "src": "https://example.com/video.mp4",
# "alt": "Video title",
# "type": "video",
# "description": "Video context"
# }
# ],
# "audios": [
# {
# "src": "https://example.com/audio.mp3",
# "alt": "Audio title",
# "type": "audio",
# "description": "Audio context"
# }
# ]
# }
# Link Information
links: Dict[str, List[Dict]] = {} # Extracted links
# Example: {
# "internal": [
# {
# "href": "https://example.com/page",
# "text": "Link text",
# "title": "Link title"
# }
# ],
# "external": [
# {
# "href": "https://external.com",
# "text": "External link text",
# "title": "External link title"
# }
# ]
# }
# Extraction Results
extracted_content: Optional[str] # Content from extraction strategy
# Example for JsonCssExtractionStrategy:
# '[{"title": "Article 1", "date": "2024-03-20"}, ...]'
# Example for LLMExtractionStrategy:
# '{"entities": [...], "relationships": [...]}'
# Additional Information
metadata: Optional[dict] = None # Page metadata
# Example: {
# "title": "Page Title",
# "description": "Meta description",
# "keywords": ["keyword1", "keyword2"],
# "author": "Author Name",
# "published_date": "2024-03-20"
# }
screenshot: Optional[str] = None # Base64 encoded screenshot
# Example: "iVBORw0KGgoAAAANSUhEUgAA..."
error_message: Optional[str] = None # Error message if crawl failed
# Example: "Failed to load page: timeout"
session_id: Optional[str] = None # Session identifier
# Example: "session_123456"
response_headers: Optional[dict] = None # HTTP response headers
# Example: {
# "content-type": "text/html",
# "server": "nginx/1.18.0",
# "date": "Wed, 20 Mar 2024 12:00:00 GMT"
# }
```
### Common Usage Patterns:
1. Basic Content Extraction:
```python
result = await crawler.arun(url="https://example.com")
print(result.markdown) # Clean, readable content
print(result.cleaned_html) # Cleaned HTML
```
2. Media Analysis:
```python
result = await crawler.arun(url="https://example.com")
for image in result.media["images"]:
if image["score"] > 3: # High-relevance images
print(f"High-quality image: {image['src']}")
```
3. Link Analysis:
```python
result = await crawler.arun(url="https://example.com")
internal_links = [link["href"] for link in result.links["internal"]]
external_links = [link["href"] for link in result.links["external"]]
```
4. Structured Data Extraction:
```python
result = await crawler.arun(
url="https://example.com",
extraction_strategy=my_strategy
)
structured_data = json.loads(result.extracted_content)
```
5. Error Handling:
```python
result = await crawler.arun(url="https://example.com")
if not result.success:
print(f"Crawl failed: {result.error_message}")
print(f"Status code: {result.status_code}")
```

View File

@@ -1,67 +0,0 @@
1. **E-commerce Product Monitor**
- Scraping product details from multiple e-commerce sites
- Price tracking with structured data extraction
- Handling dynamic content and anti-bot measures
- Features: JsonCssExtraction, session management, anti-bot
2. **News Aggregator & Summarizer**
- Crawling news websites
- Content extraction and summarization
- Topic classification
- Features: LLMExtraction, CosineStrategy, content cleaning
3. **Academic Paper Research Assistant**
- Crawling research papers from academic sites
- Extracting citations and references
- Building knowledge graphs
- Features: structured extraction, link analysis, chunking
4. **Social Media Content Analyzer**
- Handling JavaScript-heavy sites
- Dynamic content loading
- Sentiment analysis integration
- Features: dynamic content handling, session management
5. **Real Estate Market Analyzer**
- Scraping property listings
- Processing image galleries
- Geolocation data extraction
- Features: media handling, structured data extraction
6. **Documentation Site Generator**
- Recursive website crawling
- Markdown generation
- Link validation
- Features: website crawling, content cleaning
7. **Job Board Aggregator**
- Handling pagination
- Structured job data extraction
- Filtering and categorization
- Features: session management, JsonCssExtraction
8. **Recipe Database Builder**
- Schema-based extraction
- Image processing
- Ingredient parsing
- Features: structured extraction, media handling
9. **Travel Blog Content Analyzer**
- Location extraction
- Image and map processing
- Content categorization
- Features: CosineStrategy, media handling
10. **Technical Documentation Scraper**
- API documentation extraction
- Code snippet processing
- Version tracking
- Features: content cleaning, structured extraction
Each example will include:
- Problem description
- Technical requirements
- Complete implementation
- Error handling
- Output processing
- Performance considerations

File diff suppressed because one or more lines are too long

View File

@@ -383,10 +383,11 @@ async def crawl_with_user_simultion():
async with AsyncWebCrawler(verbose=True, headless=True) as crawler:
url = "YOUR-URL-HERE"
result = await crawler.arun(
-        url=url,
+        url=url,
         bypass_cache=True,
-        simulate_user = True,# Causes a series of random mouse movements and clicks to simulate user interaction
-        override_navigator = True # Overrides the navigator object to make it look like a real user
+        magic = True, # Automatically detects and removes overlays, popups, and other elements that block content
+        # simulate_user = True,# Causes a series of random mouse movements and clicks to simulate user interaction
+        # override_navigator = True # Overrides the navigator object to make it look like a real user
)
print(result.markdown)

View File

@@ -0,0 +1,735 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "6yLvrXn7yZQI"
},
"source": [
"# Crawl4AI: Advanced Web Crawling and Data Extraction\n",
"\n",
"Welcome to this interactive notebook showcasing Crawl4AI, an advanced asynchronous web crawling and data extraction library.\n",
"\n",
"- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n",
"- Twitter: [@unclecode](https://twitter.com/unclecode)\n",
"- Website: [https://crawl4ai.com](https://crawl4ai.com)\n",
"\n",
"Let's explore the powerful features of Crawl4AI!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KIn_9nxFyZQK"
},
"source": [
"## Installation\n",
"\n",
"First, let's install Crawl4AI from GitHub:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mSnaxLf3zMog"
},
"outputs": [],
"source": [
"!sudo apt-get update && sudo apt-get install -y libwoff1 libopus0 libwebp6 libwebpdemux2 libenchant1c2a libgudev-1.0-0 libsecret-1-0 libhyphen0 libgdk-pixbuf2.0-0 libegl1 libnotify4 libxslt1.1 libevent-2.1-7 libgles2 libvpx6 libxcomposite1 libatk1.0-0 libatk-bridge2.0-0 libepoxy0 libgtk-3-0 libharfbuzz-icu0"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xlXqaRtayZQK"
},
"outputs": [],
"source": [
"!pip install crawl4ai\n",
"!pip install nest-asyncio\n",
"!playwright install"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qKCE7TI7yZQL"
},
"source": [
"Now, let's import the necessary libraries:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "I67tr7aAyZQL"
},
"outputs": [],
"source": [
"import asyncio\n",
"import nest_asyncio\n",
"from crawl4ai import AsyncWebCrawler\n",
"from crawl4ai.extraction_strategy import JsonCssExtractionStrategy, LLMExtractionStrategy\n",
"import json\n",
"import time\n",
"from pydantic import BaseModel, Field\n",
"\n",
"nest_asyncio.apply()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "h7yR_Rt_yZQM"
},
"source": [
"## Basic Usage\n",
"\n",
"Let's start with a simple crawl example:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "yBh6hf4WyZQM",
"outputId": "0f83af5c-abba-4175-ed95-70b7512e6bcc"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.05 seconds\n",
"[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.05 seconds.\n",
"18102\n"
]
}
],
"source": [
"async def simple_crawl():\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" result = await crawler.arun(url=\"https://www.nbcnews.com/business\")\n",
" print(len(result.markdown))\n",
"await simple_crawl()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9rtkgHI28uI4"
},
"source": [
"💡 By default, **Crawl4AI** caches the result of every URL, so the next time you call it, youll get an instant result. But if you want to bypass the cache, just set `bypass_cache=True`."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MzZ0zlJ9yZQM"
},
"source": [
"## Advanced Features\n",
"\n",
"### Executing JavaScript and Using CSS Selectors"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "gHStF86xyZQM",
"outputId": "34d0fb6d-4dec-4677-f76e-85a1f082829b"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n",
"[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 6.06 seconds\n",
"[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.10 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.11 seconds.\n",
"41135\n"
]
}
],
"source": [
"async def js_and_css():\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" js_code = [\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"]\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" js_code=js_code,\n",
" # css_selector=\"YOUR_CSS_SELECTOR_HERE\",\n",
" bypass_cache=True\n",
" )\n",
" print(len(result.markdown))\n",
"\n",
"await js_and_css()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cqE_W4coyZQM"
},
"source": [
"### Using a Proxy\n",
"\n",
"Note: You'll need to replace the proxy URL with a working proxy for this example to run successfully."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "QjAyiAGqyZQM"
},
"outputs": [],
"source": [
"async def use_proxy():\n",
" async with AsyncWebCrawler(verbose=True, proxy=\"http://your-proxy-url:port\") as crawler:\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" bypass_cache=True\n",
" )\n",
" print(result.markdown[:500]) # Print first 500 characters\n",
"\n",
"# Uncomment the following line to run the proxy example\n",
"# await use_proxy()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "XTZ88lbayZQN"
},
"source": [
"### Extracting Structured Data with OpenAI\n",
"\n",
"Note: You'll need to set your OpenAI API key as an environment variable for this example to work."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "fIOlDayYyZQN",
"outputId": "cb8359cc-dee0-4762-9698-5dfdcee055b8"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://openai.com/api/pricing/ using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://openai.com/api/pricing/ successfully!\n",
"[LOG] 🚀 Crawling done for https://openai.com/api/pricing/, success: True, time taken: 3.77 seconds\n",
"[LOG] 🚀 Content extracted for https://openai.com/api/pricing/, success: True, time taken: 0.21 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://openai.com/api/pricing/, Strategy: AsyncWebCrawler\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 0\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 1\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 2\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 3\n",
"[LOG] Extracted 4 blocks from URL: https://openai.com/api/pricing/ block index: 3\n",
"[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 4\n",
"[LOG] Extracted 5 blocks from URL: https://openai.com/api/pricing/ block index: 0\n",
"[LOG] Extracted 1 blocks from URL: https://openai.com/api/pricing/ block index: 4\n",
"[LOG] Extracted 8 blocks from URL: https://openai.com/api/pricing/ block index: 1\n",
"[LOG] Extracted 12 blocks from URL: https://openai.com/api/pricing/ block index: 2\n",
"[LOG] 🚀 Extraction done for https://openai.com/api/pricing/, time taken: 8.55 seconds.\n",
"5029\n"
]
}
],
"source": [
"import os\n",
"from google.colab import userdata\n",
"os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')\n",
"\n",
"class OpenAIModelFee(BaseModel):\n",
" model_name: str = Field(..., description=\"Name of the OpenAI model.\")\n",
" input_fee: str = Field(..., description=\"Fee for input token for the OpenAI model.\")\n",
" output_fee: str = Field(..., description=\"Fee for output token for the OpenAI model.\")\n",
"\n",
"async def extract_openai_fees():\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" result = await crawler.arun(\n",
" url='https://openai.com/api/pricing/',\n",
" word_count_threshold=1,\n",
" extraction_strategy=LLMExtractionStrategy(\n",
" provider=\"openai/gpt-4o\", api_token=os.getenv('OPENAI_API_KEY'),\n",
" schema=OpenAIModelFee.schema(),\n",
" extraction_type=\"schema\",\n",
" instruction=\"\"\"From the crawled content, extract all mentioned model names along with their fees for input and output tokens.\n",
" Do not miss any models in the entire content. One extracted model JSON format should look like this:\n",
" {\"model_name\": \"GPT-4\", \"input_fee\": \"US$10.00 / 1M tokens\", \"output_fee\": \"US$30.00 / 1M tokens\"}.\"\"\"\n",
" ),\n",
" bypass_cache=True,\n",
" )\n",
" print(len(result.extracted_content))\n",
"\n",
"# Uncomment the following line to run the OpenAI extraction example\n",
"await extract_openai_fees()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BypA5YxEyZQN"
},
"source": [
"### Advanced Multi-Page Crawling with JavaScript Execution"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tfkcVQ0b7mw-"
},
"source": [
"## Advanced Multi-Page Crawling with JavaScript Execution\n",
"\n",
"This example demonstrates Crawl4AI's ability to handle complex crawling scenarios, specifically extracting commits from multiple pages of a GitHub repository. The challenge here is that clicking the \"Next\" button doesn't load a new page, but instead uses asynchronous JavaScript to update the content. This is a common hurdle in modern web crawling.\n",
"\n",
"To overcome this, we use Crawl4AI's custom JavaScript execution to simulate clicking the \"Next\" button, and implement a custom hook to detect when new data has loaded. Our strategy involves comparing the first commit's text before and after \"clicking\" Next, waiting until it changes to confirm new data has rendered. This showcases Crawl4AI's flexibility in handling dynamic content and its ability to implement custom logic for even the most challenging crawling tasks."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "qUBKGpn3yZQN",
"outputId": "3e555b6a-ed33-42f4-cce9-499a923fbe17"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
"[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 5.16 seconds\n",
"[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.28 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.28 seconds.\n",
"Page 1: Found 35 commits\n",
"[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
"[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.78 seconds\n",
"[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.90 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.90 seconds.\n",
"Page 2: Found 35 commits\n",
"[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
"[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 2.00 seconds\n",
"[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.74 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.75 seconds.\n",
"Page 3: Found 35 commits\n",
"Successfully crawled 105 commits across 3 pages\n"
]
}
],
"source": [
"import re\n",
"from bs4 import BeautifulSoup\n",
"\n",
"async def crawl_typescript_commits():\n",
" first_commit = \"\"\n",
" async def on_execution_started(page):\n",
" nonlocal first_commit\n",
" try:\n",
" while True:\n",
" await page.wait_for_selector('li.Box-sc-g0xbh4-0 h4')\n",
" commit = await page.query_selector('li.Box-sc-g0xbh4-0 h4')\n",
" commit = await commit.evaluate('(element) => element.textContent')\n",
" commit = re.sub(r'\\s+', '', commit)\n",
" if commit and commit != first_commit:\n",
" first_commit = commit\n",
" break\n",
" await asyncio.sleep(0.5)\n",
" except Exception as e:\n",
" print(f\"Warning: New content didn't appear after JavaScript execution: {e}\")\n",
"\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" crawler.crawler_strategy.set_hook('on_execution_started', on_execution_started)\n",
"\n",
" url = \"https://github.com/microsoft/TypeScript/commits/main\"\n",
" session_id = \"typescript_commits_session\"\n",
" all_commits = []\n",
"\n",
" js_next_page = \"\"\"\n",
" const button = document.querySelector('a[data-testid=\"pagination-next-button\"]');\n",
" if (button) button.click();\n",
" \"\"\"\n",
"\n",
" for page in range(3): # Crawl 3 pages\n",
" result = await crawler.arun(\n",
" url=url,\n",
" session_id=session_id,\n",
" css_selector=\"li.Box-sc-g0xbh4-0\",\n",
" js=js_next_page if page > 0 else None,\n",
" bypass_cache=True,\n",
" js_only=page > 0\n",
" )\n",
"\n",
" assert result.success, f\"Failed to crawl page {page + 1}\"\n",
"\n",
" soup = BeautifulSoup(result.cleaned_html, 'html.parser')\n",
" commits = soup.select(\"li\")\n",
" all_commits.extend(commits)\n",
"\n",
" print(f\"Page {page + 1}: Found {len(commits)} commits\")\n",
"\n",
" await crawler.crawler_strategy.kill_session(session_id)\n",
" print(f\"Successfully crawled {len(all_commits)} commits across 3 pages\")\n",
"\n",
"await crawl_typescript_commits()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EJRnYsp6yZQN"
},
"source": [
"### Using JsonCssExtractionStrategy for Fast Structured Output"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1ZMqIzB_8SYp"
},
"source": [
"The JsonCssExtractionStrategy is a powerful feature of Crawl4AI that allows for precise, structured data extraction from web pages. Here's how it works:\n",
"\n",
"1. You define a schema that describes the pattern of data you're interested in extracting.\n",
"2. The schema includes a base selector that identifies repeating elements on the page.\n",
"3. Within the schema, you define fields, each with its own selector and type.\n",
"4. These field selectors are applied within the context of each base selector element.\n",
"5. The strategy supports nested structures, lists within lists, and various data types.\n",
"6. You can even include computed fields for more complex data manipulation.\n",
"\n",
"This approach allows for highly flexible and precise data extraction, transforming semi-structured web content into clean, structured JSON data. It's particularly useful for extracting consistent data patterns from pages like product listings, news articles, or search results.\n",
"\n",
"For more details and advanced usage, check out the full documentation on the Crawl4AI website."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "trCMR2T9yZQN",
"outputId": "718d36f4-cccf-40f4-8d8c-c3ba73524d16"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
"[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
"[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n",
"[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n",
"[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 7.00 seconds\n",
"[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.32 seconds\n",
"[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
"[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.48 seconds.\n",
"Successfully extracted 11 news teasers\n",
"{\n",
" \"category\": \"Business News\",\n",
" \"headline\": \"NBC ripped up its Olympics playbook for 2024 \\u2014 so far, the new strategy paid off\",\n",
" \"summary\": \"The Olympics have long been key to NBCUniversal. Paris marked the 18th Olympic Games broadcast by NBC in the U.S.\",\n",
" \"time\": \"13h ago\",\n",
" \"image\": {\n",
" \"src\": \"https://media-cldnry.s-nbcnews.com/image/upload/t_focal-200x100,f_auto,q_auto:best/rockcms/2024-09/240903-nbc-olympics-ch-1344-c7a486.jpg\",\n",
" \"alt\": \"Mike Tirico.\"\n",
" },\n",
" \"link\": \"https://www.nbcnews.com/business\"\n",
"}\n"
]
}
],
"source": [
"async def extract_news_teasers():\n",
" schema = {\n",
" \"name\": \"News Teaser Extractor\",\n",
" \"baseSelector\": \".wide-tease-item__wrapper\",\n",
" \"fields\": [\n",
" {\n",
" \"name\": \"category\",\n",
" \"selector\": \".unibrow span[data-testid='unibrow-text']\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"headline\",\n",
" \"selector\": \".wide-tease-item__headline\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"summary\",\n",
" \"selector\": \".wide-tease-item__description\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"time\",\n",
" \"selector\": \"[data-testid='wide-tease-date']\",\n",
" \"type\": \"text\",\n",
" },\n",
" {\n",
" \"name\": \"image\",\n",
" \"type\": \"nested\",\n",
" \"selector\": \"picture.teasePicture img\",\n",
" \"fields\": [\n",
" {\"name\": \"src\", \"type\": \"attribute\", \"attribute\": \"src\"},\n",
" {\"name\": \"alt\", \"type\": \"attribute\", \"attribute\": \"alt\"},\n",
" ],\n",
" },\n",
" {\n",
" \"name\": \"link\",\n",
" \"selector\": \"a[href]\",\n",
" \"type\": \"attribute\",\n",
" \"attribute\": \"href\",\n",
" },\n",
" ],\n",
" }\n",
"\n",
" extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)\n",
"\n",
" async with AsyncWebCrawler(verbose=True) as crawler:\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" extraction_strategy=extraction_strategy,\n",
" bypass_cache=True,\n",
" )\n",
"\n",
" assert result.success, \"Failed to crawl the page\"\n",
"\n",
" news_teasers = json.loads(result.extracted_content)\n",
" print(f\"Successfully extracted {len(news_teasers)} news teasers\")\n",
" print(json.dumps(news_teasers[0], indent=2))\n",
"\n",
"await extract_news_teasers()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FnyVhJaByZQN"
},
"source": [
"## Speed Comparison\n",
"\n",
"Let's compare the speed of Crawl4AI with Firecrawl, a paid service. Note that we can't run Firecrawl in this Colab environment, so we'll simulate its performance based on previously recorded data."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "agDD186f3wig"
},
"source": [
"💡 **Note on Speed Comparison:**\n",
"\n",
"The speed test conducted here is running on Google Colab, where the internet speed and performance can vary and may not reflect optimal conditions. When we call Firecrawl's API, we're seeing its best performance, while Crawl4AI's performance is limited by Colab's network speed.\n",
"\n",
"For a more accurate comparison, it's recommended to run these tests on your own servers or computers with a stable and fast internet connection. Despite these limitations, Crawl4AI still demonstrates faster performance in this environment.\n",
"\n",
"If you run these tests locally, you may observe an even more significant speed advantage for Crawl4AI compared to other services."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "F7KwHv8G1LbY"
},
"outputs": [],
"source": [
"!pip install firecrawl"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "91813zILyZQN",
"outputId": "663223db-ab89-4976-b233-05ceca62b19b"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Firecrawl (simulated):\n",
"Time taken: 4.38 seconds\n",
"Content length: 41967 characters\n",
"Images found: 49\n",
"\n",
"Crawl4AI (simple crawl):\n",
"Time taken: 4.22 seconds\n",
"Content length: 18221 characters\n",
"Images found: 49\n",
"\n",
"Crawl4AI (with JavaScript execution):\n",
"Time taken: 9.13 seconds\n",
"Content length: 34243 characters\n",
"Images found: 89\n"
]
}
],
"source": [
"import os\n",
"from google.colab import userdata\n",
"os.environ['FIRECRAWL_API_KEY'] = userdata.get('FIRECRAWL_API_KEY')\n",
"import time\n",
"from firecrawl import FirecrawlApp\n",
"\n",
"async def speed_comparison():\n",
" # Simulated Firecrawl performance\n",
" app = FirecrawlApp(api_key=os.environ['FIRECRAWL_API_KEY'])\n",
" start = time.time()\n",
" scrape_status = app.scrape_url(\n",
" 'https://www.nbcnews.com/business',\n",
" params={'formats': ['markdown', 'html']}\n",
" )\n",
" end = time.time()\n",
" print(\"Firecrawl (simulated):\")\n",
" print(f\"Time taken: {end - start:.2f} seconds\")\n",
" print(f\"Content length: {len(scrape_status['markdown'])} characters\")\n",
" print(f\"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}\")\n",
" print()\n",
"\n",
" async with AsyncWebCrawler() as crawler:\n",
" # Crawl4AI simple crawl\n",
" start = time.time()\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" word_count_threshold=0,\n",
" bypass_cache=True,\n",
" verbose=False\n",
" )\n",
" end = time.time()\n",
" print(\"Crawl4AI (simple crawl):\")\n",
" print(f\"Time taken: {end - start:.2f} seconds\")\n",
" print(f\"Content length: {len(result.markdown)} characters\")\n",
" print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n",
" print()\n",
"\n",
" # Crawl4AI with JavaScript execution\n",
" start = time.time()\n",
" result = await crawler.arun(\n",
" url=\"https://www.nbcnews.com/business\",\n",
" js_code=[\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"],\n",
" word_count_threshold=0,\n",
" bypass_cache=True,\n",
" verbose=False\n",
" )\n",
" end = time.time()\n",
" print(\"Crawl4AI (with JavaScript execution):\")\n",
" print(f\"Time taken: {end - start:.2f} seconds\")\n",
" print(f\"Content length: {len(result.markdown)} characters\")\n",
" print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n",
"\n",
"await speed_comparison()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OBFFYVJIyZQN"
},
"source": [
"If you run on a local machine with a proper internet speed:\n",
"- Simple crawl: Crawl4AI is typically over 3-4 times faster than Firecrawl.\n",
"- With JavaScript execution: Even when executing JavaScript to load more content (potentially doubling the number of images found), Crawl4AI is still faster than Firecrawl's simple crawl.\n",
"\n",
"Please note that actual performance may vary depending on network conditions and the specific content being crawled."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "A6_1RK1_yZQO"
},
"source": [
"## Conclusion\n",
"\n",
"In this notebook, we've explored the powerful features of Crawl4AI, including:\n",
"\n",
"1. Basic crawling\n",
"2. JavaScript execution and CSS selector usage\n",
"3. Proxy support\n",
"4. Structured data extraction with OpenAI\n",
"5. Advanced multi-page crawling with JavaScript execution\n",
"6. Fast structured output using JsonCssExtractionStrategy\n",
"7. Speed comparison with other services\n",
"\n",
"Crawl4AI offers a fast, flexible, and powerful solution for web crawling and data extraction tasks. Its asynchronous architecture and advanced features make it suitable for a wide range of applications, from simple web scraping to complex, multi-page data extraction scenarios.\n",
"\n",
"For more information and advanced usage, please visit the [Crawl4AI documentation](https://crawl4ai.com/mkdocs/).\n",
"\n",
"Happy crawling!"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 0
}


@@ -1,141 +0,0 @@
# Core Classes and Functions
## Overview
In this section, we will delve into the core classes and functions that make up the Crawl4AI library. This includes the `WebCrawler` class, various `CrawlerStrategy` classes, `ChunkingStrategy` classes, and `ExtractionStrategy` classes. Understanding these core components will help you leverage the full power of Crawl4AI for your web crawling and data extraction needs.
## WebCrawler Class
The `WebCrawler` class is the main class you'll interact with. It provides the interface for crawling web pages and extracting data.
### Initialization
```python
from crawl4ai import WebCrawler
# Create an instance of WebCrawler
crawler = WebCrawler()
```
### Methods
- **`warmup()`**: Prepares the crawler for use, such as loading necessary models.
- **`run(url: str, **kwargs)`**: Runs the crawler on the specified URL with optional parameters for customization.
```python
crawler.warmup()
result = crawler.run(url="https://www.nbcnews.com/business")
print(result)
```
## CrawlerStrategy Classes
The `CrawlerStrategy` classes define how the web crawling is executed. The base class is `CrawlerStrategy`, which is extended by specific implementations like `LocalSeleniumCrawlerStrategy`.
### CrawlerStrategy Base Class
An abstract base class that defines the interface for different crawler strategies.
```python
from abc import ABC, abstractmethod
from typing import Callable

class CrawlerStrategy(ABC):
    @abstractmethod
    def crawl(self, url: str, **kwargs) -> str:
        pass

    @abstractmethod
    def take_screenshot(self, save_path: str):
        pass

    @abstractmethod
    def update_user_agent(self, user_agent: str):
        pass

    @abstractmethod
    def set_hook(self, hook_type: str, hook: Callable):
        pass
```
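Because the interface is just these four methods, you can supply your own strategy. Here is a hypothetical sketch (not part of the library) of a minimal requests-based strategy that implements the interface above without a browser:
```python
from typing import Callable
import requests
from crawl4ai.crawler_strategy import CrawlerStrategy

class SimpleHTTPCrawlerStrategy(CrawlerStrategy):
    """Hypothetical strategy: fetches raw HTML with requests, so no JavaScript support."""
    def __init__(self):
        self.user_agent = "Mozilla/5.0"
        self.hooks = {}

    def crawl(self, url: str, **kwargs) -> str:
        before = self.hooks.get("before_get_url")
        if before:
            before()  # fire the registered hook, if any
        response = requests.get(url, headers={"User-Agent": self.user_agent}, timeout=30)
        response.raise_for_status()
        return response.text

    def take_screenshot(self, save_path: str):
        raise NotImplementedError("No browser, so screenshots are unsupported.")

    def update_user_agent(self, user_agent: str):
        self.user_agent = user_agent

    def set_hook(self, hook_type: str, hook: Callable):
        self.hooks[hook_type] = hook
```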
### LocalSeleniumCrawlerStrategy Class
A concrete implementation of `CrawlerStrategy` that uses Selenium to crawl web pages.
#### Initialization
```python
from crawl4ai.crawler_strategy import LocalSeleniumCrawlerStrategy
strategy = LocalSeleniumCrawlerStrategy(js_code=["console.log('Hello, world!');"])
```
#### Methods
- **`crawl(url: str, **kwargs)`**: Crawls the specified URL.
- **`take_screenshot(save_path: str)`**: Takes a screenshot of the current page.
- **`update_user_agent(user_agent: str)`**: Updates the user agent for the browser.
- **`set_hook(hook_type: str, hook: Callable)`**: Sets a hook for various events.
```python
result = strategy.crawl("https://www.example.com")
strategy.take_screenshot("screenshot.png")
strategy.update_user_agent("Mozilla/5.0")
strategy.set_hook("before_get_url", lambda: print("About to get URL"))
```
## ChunkingStrategy Classes
The `ChunkingStrategy` classes define how the text from a web page is divided into chunks. Here are a few examples:
### RegexChunking Class
Splits text using regular expressions.
```python
from crawl4ai.chunking_strategy import RegexChunking
chunker = RegexChunking(patterns=[r'\n\n'])
chunks = chunker.chunk("This is a sample text. It will be split into chunks.")
```
### NlpSentenceChunking Class
Uses NLP to split text into sentences.
```python
from crawl4ai.chunking_strategy import NlpSentenceChunking
chunker = NlpSentenceChunking()
chunks = chunker.chunk("This is a sample text. It will be split into sentences.")
```
## ExtractionStrategy Classes
The `ExtractionStrategy` classes define how meaningful content is extracted from the chunks. Here are a few examples:
### CosineStrategy Class
Clusters text chunks based on cosine similarity.
```python
from crawl4ai.extraction_strategy import CosineStrategy
extractor = CosineStrategy(semantic_filter="finance", word_count_threshold=10)
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
```
### LLMExtractionStrategy Class
Uses a Language Model to extract meaningful blocks from HTML.
```python
from crawl4ai.extraction_strategy import LLMExtractionStrategy
extractor = LLMExtractionStrategy(provider='openai', api_token='your_api_token', instruction='Extract only news about AI.')
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
```
## Conclusion
By understanding these core classes and functions, you can customize and extend Crawl4AI to suit your specific web crawling and data extraction needs. Happy crawling! 🕷️🤖


@@ -1,338 +0,0 @@
# Detailed API Documentation
## Overview
This section provides comprehensive documentation for the Crawl4AI API, covering all classes, methods, and their parameters. This guide will help you understand how to utilize the API to its full potential, enabling efficient web crawling and data extraction.
## WebCrawler Class
The `WebCrawler` class is the primary interface for crawling web pages and extracting data.
### Initialization
```python
from crawl4ai import WebCrawler
crawler = WebCrawler()
```
### Methods
#### `warmup()`
Prepares the crawler for use, such as loading necessary models.
```python
crawler.warmup()
```
#### `run(url: str, **kwargs) -> CrawlResult`
Crawls the specified URL and returns the result.
- **Parameters:**
- `url` (str): The URL to crawl.
- `**kwargs`: Additional parameters for customization.
- **Returns:**
- `CrawlResult`: An object containing the crawl result.
- **Example:**
```python
result = crawler.run(url="https://www.nbcnews.com/business")
print(result)
```
### CrawlResult Class
Represents the result of a crawl operation.
- **Attributes:**
- `url` (str): The URL of the crawled page.
- `html` (str): The raw HTML of the page.
- `success` (bool): Whether the crawl was successful.
- `cleaned_html` (Optional[str]): The cleaned HTML.
- `media` (Dict[str, List[Dict]]): Media tags in the page (images, audio, video).
- `links` (Dict[str, List[Dict]]): Links in the page (external, internal).
- `screenshot` (Optional[str]): Base64 encoded screenshot.
- `markdown` (Optional[str]): Extracted content in Markdown format.
- `extracted_content` (Optional[str]): Extracted meaningful content.
- `metadata` (Optional[dict]): Metadata from the page.
- `error_message` (Optional[str]): Error message if any.
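- **Example** (a minimal sketch of inspecting a result; attribute names as listed above):
```python
result = crawler.run(url="https://www.nbcnews.com/business")
if result.success:
    print(result.url)                       # the crawled URL
    print(len(result.cleaned_html or ""))   # size of the cleaned HTML
    print((result.markdown or "")[:200])    # first 200 characters of Markdown
    for image in result.media.get("images", [])[:3]:
        print(image)                        # a few extracted image records
else:
    print(result.error_message)
```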
## CrawlerStrategy Classes
The `CrawlerStrategy` classes define how the web crawling is executed.
### CrawlerStrategy Base Class
An abstract base class for different crawler strategies.
#### Methods
- **`crawl(url: str, **kwargs) -> str`**: Crawls the specified URL.
- **`take_screenshot(save_path: str)`**: Takes a screenshot of the current page.
- **`update_user_agent(user_agent: str)`**: Updates the user agent for the browser.
- **`set_hook(hook_type: str, hook: Callable)`**: Sets a hook for various events.
### LocalSeleniumCrawlerStrategy Class
Uses Selenium to crawl web pages.
#### Initialization
```python
from crawl4ai.crawler_strategy import LocalSeleniumCrawlerStrategy
strategy = LocalSeleniumCrawlerStrategy(js_code=["console.log('Hello, world!');"])
```
#### Methods
- **`crawl(url: str, **kwargs)`**: Crawls the specified URL.
- **`take_screenshot(save_path: str)`**: Takes a screenshot of the current page.
- **`update_user_agent(user_agent: str)`**: Updates the user agent for the browser.
- **`set_hook(hook_type: str, hook: Callable)`**: Sets a hook for various events.
#### Example
```python
result = strategy.crawl("https://www.example.com")
strategy.take_screenshot("screenshot.png")
strategy.update_user_agent("Mozilla/5.0")
strategy.set_hook("before_get_url", lambda: print("About to get URL"))
```
## ChunkingStrategy Classes
The `ChunkingStrategy` classes define how the text from a web page is divided into chunks.
### RegexChunking Class
Splits text using regular expressions.
#### Initialization
```python
from crawl4ai.chunking_strategy import RegexChunking
chunker = RegexChunking(patterns=[r'\n\n'])
```
#### Methods
- **`chunk(text: str) -> List[str]`**: Splits the text into chunks.
#### Example
```python
chunks = chunker.chunk("This is a sample text. It will be split into chunks.")
```
### NlpSentenceChunking Class
Uses NLP to split text into sentences.
#### Initialization
```python
from crawl4ai.chunking_strategy import NlpSentenceChunking
chunker = NlpSentenceChunking()
```
#### Methods
- **`chunk(text: str) -> List[str]`**: Splits the text into sentences.
#### Example
```python
chunks = chunker.chunk("This is a sample text. It will be split into sentences.")
```
### TopicSegmentationChunking Class
Uses the TextTiling algorithm to segment text into topics.
#### Initialization
```python
from crawl4ai.chunking_strategy import TopicSegmentationChunking
chunker = TopicSegmentationChunking(num_keywords=3)
```
#### Methods
- **`chunk(text: str) -> List[str]`**: Splits the text into topic-based segments.
#### Example
```python
chunks = chunker.chunk("This is a sample text. It will be split into topic-based segments.")
```
### FixedLengthWordChunking Class
Splits text into chunks of fixed length based on the number of words.
#### Initialization
```python
from crawl4ai.chunking_strategy import FixedLengthWordChunking
chunker = FixedLengthWordChunking(chunk_size=100)
```
#### Methods
- **`chunk(text: str) -> List[str]`**: Splits the text into fixed-length word chunks.
#### Example
```python
chunks = chunker.chunk("This is a sample text. It will be split into fixed-length word chunks.")
```
### SlidingWindowChunking Class
Uses a sliding window approach to chunk text.
#### Initialization
```python
from crawl4ai.chunking_strategy import SlidingWindowChunking
chunker = SlidingWindowChunking(window_size=100, step=50)
```
#### Methods
- **`chunk(text: str) -> List[str]`**: Splits the text using a sliding window approach.
#### Example
```python
chunks = chunker.chunk("This is a sample text. It will be split using a sliding window approach.")
```
## ExtractionStrategy Classes
The `ExtractionStrategy` classes define how meaningful content is extracted from the chunks.
### NoExtractionStrategy Class
Returns the entire HTML content without any modification.
#### Initialization
```python
from crawl4ai.extraction_strategy import NoExtractionStrategy
extractor = NoExtractionStrategy()
```
#### Methods
- **`extract(url: str, html: str) -> str`**: Returns the HTML content.
#### Example
```python
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
```
### LLMExtractionStrategy Class
Uses a Language Model to extract meaningful blocks from HTML.
#### Initialization
```python
from crawl4ai.extraction_strategy import LLMExtractionStrategy
extractor = LLMExtractionStrategy(provider='openai', api_token='your_api_token', instruction='Extract only news about AI.')
```
#### Methods
- **`extract(url: str, html: str) -> str`**: Extracts meaningful content using the LLM.
#### Example
```python
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
```
### CosineStrategy Class
Clusters text chunks based on cosine similarity.
#### Initialization
```python
from crawl4ai.extraction_strategy import CosineStrategy
extractor = CosineStrategy(semantic_filter="finance", word_count_threshold=10)
```
#### Methods
- **`extract(url: str, html: str) -> str`**: Extracts clusters of text based on cosine similarity.
#### Example
```python
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
```
### TopicExtractionStrategy Class
Uses the TextTiling algorithm to segment HTML content into topics and extract keywords.
#### Initialization
```python
from crawl4ai.extraction_strategy import TopicExtractionStrategy
extractor = TopicExtractionStrategy(num_keywords=3)
```
#### Methods
- **`extract(url: str, html: str) -> str`**: Extracts topic-based segments and keywords.
#### Example
```python
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
```
## Parameters
Here are the common parameters used across various classes and methods:
- **`url`** (str): The URL to crawl.
- **`html`** (str): The HTML content of the page.
- **`user_agent`** (str): The user agent for the HTTP requests.
- **`patterns`** (list): A list of regular expression patterns for chunking.
- **`num_keywords`** (int): Number of keywords for topic extraction.
- **`chunk_size`** (int): Number of words in each chunk.
- **`window_size`** (int): Number of words in the sliding window.
- **`step`** (int): Step size for the sliding window.
- **`semantic_filter`** (str): Keywords for filtering relevant documents.
- **`word_count_threshold`** (int): Minimum number of words per cluster.
- **`max_dist`** (float): Maximum cophenetic distance for clustering.
- **`linkage_method`** (str): Linkage method for hierarchical clustering.
- **`top_k`** (int): Number of top categories to extract.
- **`provider`** (str): Provider for language model completions.
- **`api_token`** (str): API token for the provider.
- **`instruction`** (str): Instruction to guide the LLM extraction.
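For illustration, several of the clustering parameters above come together in a single `CosineStrategy` call (a sketch; the values shown are arbitrary):
```python
from crawl4ai.extraction_strategy import CosineStrategy

extractor = CosineStrategy(
    semantic_filter="finance",   # keywords for filtering relevant documents
    word_count_threshold=10,     # minimum number of words per cluster
    max_dist=0.2,                # maximum cophenetic distance for clustering
    linkage_method="ward",       # linkage method for hierarchical clustering
    top_k=3,                     # number of top categories to extract
)
```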
## Conclusion
This detailed API documentation provides a thorough understanding of the classes, methods, and parameters in the Crawl4AI library. With this knowledge, you can effectively use the API to perform advanced web crawling and data extraction tasks.

Binary file not shown.

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1,6 +0,0 @@
document.addEventListener('DOMContentLoaded', (event) => {
document.querySelectorAll('pre code').forEach((block) => {
hljs.highlightBlock(block);
});
});


@@ -1,153 +0,0 @@
@font-face {
font-family: "Monaco";
font-style: normal;
font-weight: normal;
src: local("Monaco"), url("Monaco.woff") format("woff");
}
:root {
--global-font-size: 16px;
--global-line-height: 1.5em;
--global-space: 10px;
--font-stack: Menlo, Monaco, Lucida Console, Liberation Mono, DejaVu Sans Mono, Bitstream Vera Sans Mono,
Courier New, monospace, serif;
--font-stack: dm, Monaco, Courier New, monospace, serif;
--mono-font-stack: Menlo, Monaco, Lucida Console, Liberation Mono, DejaVu Sans Mono, Bitstream Vera Sans Mono,
Courier New, monospace, serif;
--background-color: #151515; /* Dark background */
--font-color: #eaeaea; /* Light font color for contrast */
--invert-font-color: #151515; /* Dark color for inverted elements */
--primary-color: #1a95e0; /* Primary color can remain the same or be adjusted for better contrast */
--secondary-color: #727578; /* Secondary color for less important text */
--error-color: #ff5555; /* Bright color for errors */
--progress-bar-background: #444; /* Darker background for progress bar */
--progress-bar-fill: #1a95e0; /* Bright color for progress bar fill */
--code-bg-color: #1e1e1e; /* Darker background for code blocks */
--input-style: solid; /* Keeping input style solid */
--block-background-color: #202020; /* Darker background for block elements */
--global-font-color: #eaeaea; /* Light font color for global elements */
--background-color: #222225;
--background-color: #070708;
--page-width: 70em;
--font-color: #e8e9ed;
--invert-font-color: #222225;
--secondary-color: #a3abba;
--secondary-color: #d5cec0;
--tertiary-color: #a3abba;
--primary-color: #09b5a5; /* Updated to the brand color */
--primary-color: #50ffff; /* Updated to the brand color */
--error-color: #ff3c74;
--progress-bar-background: #3f3f44;
--progress-bar-fill: #09b5a5; /* Updated to the brand color */
--code-bg-color: #3f3f44;
--input-style: solid;
--display-h1-decoration: none;
--display-h1-decoration: none;
}
/* body {
background-color: var(--background-color);
color: var(--font-color);
}
a {
color: var(--primary-color);
}
a:hover {
background-color: var(--primary-color);
color: var(--invert-font-color);
}
blockquote::after {
color: #444;
}
pre, code {
background-color: var(--code-bg-color);
color: var(--font-color);
}
.terminal-nav:first-child {
border-bottom: 1px dashed var(--secondary-color);
} */
.terminal-mkdocs-main-content {
line-height: var(--global-line-height);
}
strong,
.highlight {
/* background: url(//s2.svgbox.net/pen-brushes.svg?ic=brush-1&color=50ffff); */
background-color: #50ffff33;
}
.terminal-card > header {
color: var(--font-color);
text-align: center;
background-color: var(--progress-bar-background);
padding: 0.3em 0.5em;
}
.btn.btn-sm {
color: var(--font-color);
padding: 0.2em 0.5em;
font-size: 0.8em;
}
.loading-message {
display: none;
margin-top: 20px;
}
.response-section {
display: none;
padding-top: 20px;
}
.tabs {
display: flex;
flex-direction: column;
}
.tab-list {
display: flex;
padding: 0;
margin: 0;
list-style-type: none;
border-bottom: 1px solid var(--font-color);
}
.tab-item {
cursor: pointer;
padding: 10px;
border: 1px solid var(--font-color);
margin-right: -1px;
border-bottom: none;
}
.tab-item:hover,
.tab-item:focus,
.tab-item:active {
background-color: var(--progress-bar-background);
}
.tab-content {
display: none;
border: 1px solid var(--font-color);
border-top: none;
}
.tab-content:first-of-type {
display: block;
}
.tab-content header {
padding: 0.5em;
display: flex;
justify-content: end;
align-items: center;
background-color: var(--progress-bar-background);
}
.tab-content pre {
margin: 0;
max-height: 300px; overflow: auto; border:none;
}


@@ -1,102 +0,0 @@
# Changelog
## [v0.2.77] - 2024-08-04
Significant improvements in text processing and performance:
- 🚀 **Dependency reduction**: Removed dependency on spaCy model for text chunk labeling in cosine extraction strategy.
- 🤖 **Transformer upgrade**: Implemented text sequence classification using a transformer model for labeling text chunks.
- ⚡ **Performance enhancement**: Improved model loading speed due to removal of spaCy dependency.
- 🔧 **Future-proofing**: Laid groundwork for potential complete removal of spaCy dependency in future versions.
These changes address issue #68 and provide a foundation for faster, more efficient text processing in Crawl4AI.
## [v0.2.76] - 2024-08-02
Major improvements in functionality, performance, and cross-platform compatibility! 🚀
- 🐳 **Docker enhancements**: Significantly improved Dockerfile for easy installation on Linux, Mac, and Windows.
- 🌐 **Official Docker Hub image**: Launched our first official image on Docker Hub for streamlined deployment.
- 🔧 **Selenium upgrade**: Removed dependency on ChromeDriver, now using Selenium's built-in capabilities for better compatibility.
- 🖼️ **Image description**: Implemented ability to generate textual descriptions for extracted images from web pages.
- ⚡ **Performance boost**: Various improvements to enhance overall speed and performance.
A big shoutout to our amazing community contributors:
- [@aravindkarnam](https://github.com/aravindkarnam) for developing the textual description extraction feature.
- [@FractalMind](https://github.com/FractalMind) for creating the first official Docker Hub image and fixing Dockerfile errors.
- [@ketonkss4](https://github.com/ketonkss4) for identifying Selenium's new capabilities, helping us reduce dependencies.
Your contributions are driving Crawl4AI forward! 🙌
## [v0.2.75] - 2024-07-19
Minor improvements for a more maintainable codebase:
- 🔄 Fixed typos in `chunking_strategy.py` and `crawler_strategy.py` to improve code readability
- 🔄 Removed `.test_pads/` directory from `.gitignore` to keep our repository clean and organized
These changes may seem small, but they contribute to a more stable and sustainable codebase. By fixing typos and updating our `.gitignore` settings, we're ensuring that our code is easier to maintain and scale in the long run.
## v0.2.74 - 2024-07-08
A slew of exciting updates to improve the crawler's stability and robustness! 🎉
- 💻 **UTF encoding fix**: Resolved the Windows "charmap" error by adding UTF encoding.
- 🛡️ **Error handling**: Implemented MaxRetryError exception handling in LocalSeleniumCrawlerStrategy.
- 🧹 **Input sanitization**: Improved input sanitization and handled encoding issues in LLMExtractionStrategy.
- 🚮 **Database cleanup**: Removed existing database file and initialized a new one.
## [v0.2.73] - 2024-07-03
💡 In this release, we've bumped the version to v0.2.73 and refreshed our documentation to ensure you have the best experience with our project.
* Added support for websites that need "with-head" mode, crawling with a visible browser head.
* Fixed installation issues in setup.py and the Dockerfile.
* Resolved multiple open issues.
## [v0.2.72] - 2024-06-30
This release brings exciting updates and improvements to our project! 🎉
* 📚 **Documentation Updates**: Our documentation has been revamped to reflect the latest changes and additions.
* 🚀 **New Modes in setup.py**: We've added support for three new modes in setup.py: default, torch, and transformers. This enhances the project's flexibility and usability.
* 🐳 **Docker File Updates**: The Docker file has been updated to ensure seamless compatibility with the new modes and improvements.
* 🕷️ **Temporary Solution for Headless Crawling**: We've implemented a temporary solution to overcome issues with crawling websites in headless mode.
These changes aim to improve the overall user experience, provide more flexibility, and enhance the project's performance. We're thrilled to share these updates with you and look forward to continuing to evolve and improve our project!
## [0.2.71] - 2024-06-26
**Improved Error Handling and Performance** 🚧
* 🚫 Refactored `crawler_strategy.py` to handle exceptions and provide better error messages, making it more robust and reliable.
* 💻 Optimized the `get_content_of_website_optimized` function in `utils.py` for improved performance, reducing potential bottlenecks.
* 💻 Updated `utils.py` with the latest changes, ensuring consistency and accuracy.
* 🚫 Migrated to `ChromeDriverManager` to resolve Chrome driver download issues, providing a smoother user experience.
These changes focus on refining the existing codebase, resulting in a more stable, efficient, and user-friendly experience. With these improvements, you can expect fewer errors and better performance in the crawler strategy and utility functions.
## [0.2.71] - 2024-06-25
### Fixed
- Made the extraction function twice as fast.
## [0.2.6] - 2024-06-22
### Fixed
- Fix issue #19: Update Dockerfile to ensure compatibility across multiple platforms.
## [0.2.5] - 2024-06-18
### Added
- Added five important hooks to the crawler:
- on_driver_created: Called when the driver is ready for initializations.
- before_get_url: Called right before Selenium fetches the URL.
- after_get_url: Called after Selenium fetches the URL.
- before_return_html: Called when the data is parsed and ready.
- on_user_agent_updated: Called when the user changes the user_agent, causing the driver to reinitialize.
- Added an example in `quickstart.py` in the example folder under the docs.
- Enhancement issue #24: Replaced inline HTML tags (e.g., DEL, INS, SUB, ABBR) with textual format for better context handling in LLM.
- Maintaining the semantic context of inline tags (e.g., abbreviation, DEL, INS) for improved LLM-friendliness.
- Updated Dockerfile to ensure compatibility across multiple platforms (Hopefully!).
## [0.2.4] - 2024-06-17
### Fixed
- Fix issue #22: Use MD5 hash for caching HTML files to handle long URLs


@@ -1,12 +0,0 @@
{
"RegexChunking": "### RegexChunking\n\n`RegexChunking` is a text chunking strategy that splits a given text into smaller parts using regular expressions.\nThis is useful for preparing large texts for processing by language models, ensuring they are divided into manageable segments.\n\n#### Constructor Parameters:\n- `patterns` (list, optional): A list of regular expression patterns used to split the text. Default is to split by double newlines (`['\\n\\n']`).\n\n#### Example usage:\n```python\nchunker = RegexChunking(patterns=[r'\\n\\n', r'\\. '])\nchunks = chunker.chunk(\"This is a sample text. It will be split into chunks.\")\n```",
"NlpSentenceChunking": "### NlpSentenceChunking\n\n`NlpSentenceChunking` uses a natural language processing model to chunk a given text into sentences. This approach leverages SpaCy to accurately split text based on sentence boundaries.\n\n#### Constructor Parameters:\n- None.\n\n#### Example usage:\n```python\nchunker = NlpSentenceChunking()\nchunks = chunker.chunk(\"This is a sample text. It will be split into sentences.\")\n```",
"TopicSegmentationChunking": "### TopicSegmentationChunking\n\n`TopicSegmentationChunking` uses the TextTiling algorithm to segment a given text into topic-based chunks. This method identifies thematic boundaries in the text.\n\n#### Constructor Parameters:\n- `num_keywords` (int, optional): The number of keywords to extract for each topic segment. Default is `3`.\n\n#### Example usage:\n```python\nchunker = TopicSegmentationChunking(num_keywords=3)\nchunks = chunker.chunk(\"This is a sample text. It will be split into topic-based segments.\")\n```",
"FixedLengthWordChunking": "### FixedLengthWordChunking\n\n`FixedLengthWordChunking` splits a given text into chunks of fixed length, based on the number of words.\n\n#### Constructor Parameters:\n- `chunk_size` (int, optional): The number of words in each chunk. Default is `100`.\n\n#### Example usage:\n```python\nchunker = FixedLengthWordChunking(chunk_size=100)\nchunks = chunker.chunk(\"This is a sample text. It will be split into fixed-length word chunks.\")\n```",
"SlidingWindowChunking": "### SlidingWindowChunking\n\n`SlidingWindowChunking` uses a sliding window approach to chunk a given text. Each chunk has a fixed length, and the window slides by a specified step size.\n\n#### Constructor Parameters:\n- `window_size` (int, optional): The number of words in each chunk. Default is `100`.\n- `step` (int, optional): The number of words to slide the window. Default is `50`.\n\n#### Example usage:\n```python\nchunker = SlidingWindowChunking(window_size=100, step=50)\nchunks = chunker.chunk(\"This is a sample text. It will be split using a sliding window approach.\")\n```"
}


@@ -1,25 +0,0 @@
# Contact
If you have any questions, suggestions, or feedback, please feel free to reach out to us:
- GitHub: [unclecode](https://github.com/unclecode)
- Twitter: [@unclecode](https://twitter.com/unclecode)
- Website: [crawl4ai.com](https://crawl4ai.com)
## Contributing 🤝
We welcome contributions from the open-source community to help improve Crawl4AI and make it even more valuable for AI enthusiasts and developers. To contribute, please follow these steps:
1. Fork the repository.
2. Create a new branch for your feature or bug fix.
3. Make your changes and commit them with descriptive messages.
4. Push your changes to your forked repository.
5. Submit a pull request to the main repository.
For more information on contributing, please see our [contribution guidelines](https://github.com/unclecode/crawl4ai/blob/main/CONTRIBUTING.md).
## License 📄
Crawl4AI is released under the [Apache 2.0 License](https://github.com/unclecode/crawl4ai/blob/main/LICENSE).
Let's work together to make the web more accessible and useful for AI applications! 💪🌐🤖


@@ -1,231 +0,0 @@
# Interactive Demo for Crawler
<div id="demo">
<form id="crawlForm" class="terminal-form">
<fieldset>
<legend>Enter URL and Options</legend>
<div class="form-group">
<label for="url">Enter URL:</label>
<input type="text" id="url" name="url" required>
</div>
<div class="form-group">
<label for="screenshot">Get Screenshot:</label>
<input type="checkbox" id="screenshot" name="screenshot">
</div>
<div class="form-group">
<button class="btn btn-default" type="submit">Submit</button>
</div>
</fieldset>
</form>
<div id="loading" class="loading-message">
<div class="terminal-alert terminal-alert-primary">Loading... Please wait.</div>
</div>
<section id="response" class="response-section">
<h2>Response</h2>
<div class="tabs">
<ul class="tab-list">
<li class="tab-item" onclick="showTab('markdown')">Markdown</li>
<li class="tab-item" onclick="showTab('cleanedHtml')">Cleaned HTML</li>
<li class="tab-item" onclick="showTab('media')">Media</li>
<li class="tab-item" onclick="showTab('extractedContent')">Extracted Content</li>
<li class="tab-item" onclick="showTab('screenshot')">Screenshot</li>
<li class="tab-item" onclick="showTab('pythonCode')">Python Code</li>
</ul>
<div class="tab-content" id="tab-markdown">
<header>
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('markdownContent')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('markdownContent', 'markdown.md')">Download</button>
</div>
</header>
<pre><code id="markdownContent" class="language-markdown hljs"></code></pre>
</div>
<div class="tab-content" id="tab-cleanedHtml" style="display: none;">
<header >
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('cleanedHtmlContent')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('cleanedHtmlContent', 'cleaned.html')">Download</button>
</div>
</header>
<pre><code id="cleanedHtmlContent" class="language-html hljs"></code></pre>
</div>
<div class="tab-content" id="tab-media" style="display: none;">
<header >
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('mediaContent')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('mediaContent', 'media.json')">Download</button>
</div>
</header>
<pre><code id="mediaContent" class="language-json hljs"></code></pre>
</div>
<div class="tab-content" id="tab-extractedContent" style="display: none;">
<header >
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('extractedContentContent')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('extractedContentContent', 'extracted_content.json')">Download</button>
</div>
</header>
<pre><code id="extractedContentContent" class="language-json hljs"></code></pre>
</div>
<div class="tab-content" id="tab-screenshot" style="display: none;">
<header >
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadImage('screenshotContent', 'screenshot.png')">Download</button>
</div>
</header>
<pre><img id="screenshotContent" /></pre>
</div>
<div class="tab-content" id="tab-pythonCode" style="display: none;">
<header >
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('pythonCode')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('pythonCode', 'example.py')">Download</button>
</div>
</header>
<pre><code id="pythonCode" class="language-python hljs"></code></pre>
</div>
</div>
</section>
<div id="error" class="error-message" style="display: none; margin-top:1em;">
<div class="terminal-alert terminal-alert-error"></div>
</div>
<script>
function showTab(tabId) {
const tabs = document.querySelectorAll('.tab-content');
tabs.forEach(tab => tab.style.display = 'none');
document.getElementById(`tab-${tabId}`).style.display = 'block';
}
function redo(codeBlock, codeText){
codeBlock.classList.remove('hljs');
codeBlock.removeAttribute('data-highlighted');
// Set new code and re-highlight
codeBlock.textContent = codeText;
hljs.highlightBlock(codeBlock);
}
function copyToClipboard(elementId) {
const content = document.getElementById(elementId).textContent;
navigator.clipboard.writeText(content).then(() => {
alert('Copied to clipboard');
});
}
function downloadContent(elementId, filename) {
const content = document.getElementById(elementId).textContent;
const blob = new Blob([content], { type: 'text/plain' });
const url = window.URL.createObjectURL(blob);
const a = document.createElement('a');
a.style.display = 'none';
a.href = url;
a.download = filename;
document.body.appendChild(a);
a.click();
window.URL.revokeObjectURL(url);
document.body.removeChild(a);
}
function downloadImage(elementId, filename) {
const content = document.getElementById(elementId).src;
const a = document.createElement('a');
a.style.display = 'none';
a.href = content;
a.download = filename;
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
}
document.getElementById('crawlForm').addEventListener('submit', function(event) {
event.preventDefault();
document.getElementById('loading').style.display = 'block';
document.getElementById('response').style.display = 'none';
const url = document.getElementById('url').value;
const screenshot = document.getElementById('screenshot').checked;
const data = {
urls: [url],
bypass_cache: false,
word_count_threshold: 5,
screenshot: screenshot
};
fetch('/crawl', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify(data)
})
.then(response => {
if (!response.ok) {
if (response.status === 429) {
return response.json().then(err => {
throw Object.assign(new Error('Rate limit exceeded'), { status: 429, details: err });
});
}
throw new Error('Network response was not ok');
}
return response.json();
})
.then(data => {
data = data.results[0]; // Only one URL is requested
document.getElementById('loading').style.display = 'none';
document.getElementById('response').style.display = 'block';
redo(document.getElementById('markdownContent'), data.markdown);
redo(document.getElementById('cleanedHtmlContent'), data.cleaned_html);
redo(document.getElementById('mediaContent'), JSON.stringify(data.media, null, 2));
redo(document.getElementById('extractedContentContent'), data.extracted_content);
if (screenshot) {
document.getElementById('screenshotContent').src = `data:image/png;base64,${data.screenshot}`;
}
const pythonCode = `
from crawl4ai.web_crawler import WebCrawler
crawler = WebCrawler()
crawler.warmup()
result = crawler.run(
url='${url}',
screenshot=${screenshot}
)
print(result)
`;
redo(document.getElementById('pythonCode'), pythonCode);
document.getElementById('error').style.display = 'none';
})
.catch(error => {
document.getElementById('loading').style.display = 'none';
document.getElementById('error').style.display = 'block';
let errorMessage = 'An unexpected error occurred. Please try again later.';
if (error.status === 429) {
const details = error.details;
if (details.retry_after) {
errorMessage = `Rate limit exceeded. Please wait ${parseFloat(details.retry_after).toFixed(1)} seconds before trying again.`;
} else if (details.reset_at) {
const resetTime = new Date(details.reset_at);
const waitTime = Math.ceil((resetTime - new Date()) / 1000);
errorMessage = `Rate limit exceeded. Please try again after ${waitTime} seconds.`;
} else {
errorMessage = `Rate limit exceeded. Please try again later.`;
}
} else if (error.message) {
errorMessage = error.message;
}
document.querySelector('#error .terminal-alert').textContent = errorMessage;
});
});
</script>
</div>
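The same `/crawl` endpoint the form posts to can also be called directly. A sketch in Python (the host and port are hypothetical; the payload fields mirror the JavaScript above):
```python
import requests

payload = {
    "urls": ["https://www.nbcnews.com/business"],
    "bypass_cache": False,
    "word_count_threshold": 5,
    "screenshot": True,
}

# Hypothetical base URL; point this at wherever the Crawl4AI server is running.
response = requests.post("http://localhost:8000/crawl", json=payload)
response.raise_for_status()
result = response.json()["results"][0]  # only one URL was requested
print(result["markdown"][:200])
```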


@@ -1,100 +0,0 @@
# Hooks & Auth
Crawl4AI allows you to customize the behavior of the web crawler using hooks. Hooks are functions that are called at specific points in the crawling process, allowing you to modify the crawler's behavior or perform additional actions. This example demonstrates how to use various hooks to customize the crawling process.
## Example: Using Crawler Hooks
Let's see how we can customize the crawler using hooks! In this example, we'll:
1. Maximize the browser window and log in to a website when the driver is created.
2. Add a custom header before fetching the URL.
3. Log the current URL after fetching it.
4. Log the length of the HTML before returning it.
### Hook Definitions
```python
from crawl4ai.web_crawler import WebCrawler
from crawl4ai.crawler_strategy import *

def on_driver_created(driver):
    print("[HOOK] on_driver_created")
    # Example customization: maximize the window
    driver.maximize_window()
    # Example customization: logging in to a hypothetical website
    driver.get('https://example.com/login')
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.NAME, 'username'))
    )
    driver.find_element(By.NAME, 'username').send_keys('testuser')
    driver.find_element(By.NAME, 'password').send_keys('password123')
    driver.find_element(By.NAME, 'login').click()
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, 'welcome'))
    )
    # Add a custom cookie
    driver.add_cookie({'name': 'test_cookie', 'value': 'cookie_value'})
    return driver

def before_get_url(driver):
    print("[HOOK] before_get_url")
    # Example customization: add a custom header
    # Enable the Network domain so extra headers can be sent
    driver.execute_cdp_cmd('Network.enable', {})
    # Add a custom header
    driver.execute_cdp_cmd('Network.setExtraHTTPHeaders', {'headers': {'X-Test-Header': 'test'}})
    return driver

def after_get_url(driver):
    print("[HOOK] after_get_url")
    # Example customization: log the URL
    print(driver.current_url)
    return driver

def before_return_html(driver, html):
    print("[HOOK] before_return_html")
    # Example customization: log the length of the HTML
    print(len(html))
    return driver
```
### Using the Hooks with the WebCrawler
```python
print("\n🔗 [bold cyan]Using Crawler Hooks: Let's see how we can customize the crawler using hooks![/bold cyan]", True)
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
crawler_strategy.set_hook('on_driver_created', on_driver_created)
crawler_strategy.set_hook('before_get_url', before_get_url)
crawler_strategy.set_hook('after_get_url', after_get_url)
crawler_strategy.set_hook('before_return_html', before_return_html)
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
crawler.warmup()
result = crawler.run(url="https://example.com")
print("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
print(result)
```
### Explanation
- `on_driver_created`: This hook is called when the Selenium driver is created. In this example, it maximizes the window, logs in to a website, and adds a custom cookie.
- `before_get_url`: This hook is called right before Selenium fetches the URL. In this example, it adds a custom HTTP header.
- `after_get_url`: This hook is called after Selenium fetches the URL. In this example, it logs the current URL.
- `before_return_html`: This hook is called before returning the HTML content. In this example, it logs the length of the HTML content.
### Additional Ideas
- **Add custom headers to requests**: You can add custom headers to the requests using the `before_get_url` hook.
- **Perform safety checks**: Use the hooks to perform safety checks before the crawling process starts.
- **Modify the HTML content**: Use the `before_return_html` hook to modify the HTML content before it is returned.
- **Log additional information**: Use the hooks to log additional information for debugging or monitoring purposes.
By using these hooks, you can customize the behavior of the crawler to suit your specific needs.


@@ -1,29 +0,0 @@
# Examples
Welcome to the examples section of Crawl4AI documentation! In this section, you will find practical examples demonstrating how to use Crawl4AI for various web crawling and data extraction tasks. Each example is designed to showcase different features and capabilities of the library.
## Examples Index
### [LLM Extraction](llm_extraction.md)
This example demonstrates how to use Crawl4AI to extract information using Large Language Models (LLMs). You will learn how to configure the `LLMExtractionStrategy` to get structured data from web pages.
### [JS Execution & CSS Filtering](js_execution_css_filtering.md)
Learn how to execute custom JavaScript code and filter data using CSS selectors. This example shows how to perform complex web interactions and extract specific content from web pages.
### [Hooks & Auth](hooks_auth.md)
This example covers the use of custom hooks for authentication and other pre-crawling tasks. You will see how to set up hooks to modify headers, authenticate sessions, and perform other preparatory actions before crawling.
### [Summarization](summarization.md)
Discover how to use Crawl4AI to summarize web page content. This example demonstrates the summarization capabilities of the library, helping you extract concise information from lengthy web pages.
### [Research Assistant](research_assistant.md)
In this example, Crawl4AI is used as a research assistant to gather and organize information from multiple sources. You will learn how to use various extraction and chunking strategies to compile a comprehensive report.
---
Each example includes detailed explanations and code snippets to help you understand and implement the features in your projects. Click on the links to explore each example and start making the most of Crawl4AI!


@@ -1,44 +0,0 @@
# JS Execution & CSS Filtering
In this example, we'll demonstrate how to use Crawl4AI to execute JavaScript, filter data with CSS selectors, and use a cosine similarity strategy to extract relevant content. This approach is particularly useful when you need to interact with dynamic content on web pages, such as clicking "Load More" buttons.
## Example: Extracting Structured Data
```python
# Import necessary modules
from crawl4ai import WebCrawler
from crawl4ai.chunking_strategy import *
from crawl4ai.extraction_strategy import *
from crawl4ai.crawler_strategy import *
# Define the JavaScript code to click the "Load More" button
js_code = ["""
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
loadMoreButton && loadMoreButton.click();
"""]
crawler = WebCrawler(verbose=True)
crawler.warmup()
# Run the crawler with keyword filtering and CSS selector
result = crawler.run(
    url="https://www.nbcnews.com/business",
    js=js_code,
    css_selector="p",
    extraction_strategy=CosineStrategy(
        semantic_filter="technology",
    ),
)
# Display the extracted result
print(result)
```
### Explanation
1. **JavaScript Execution**: The `js_code` variable contains JavaScript code that simulates clicking a "Load More" button. This is useful for loading additional content dynamically.
2. **CSS Selector**: The `css_selector="p"` parameter ensures that only paragraph (`<p>`) tags are extracted from the web page.
3. **Extraction Strategy**: The `CosineStrategy` is used with a semantic filter for "technology" to extract relevant content based on cosine similarity.
## Try It Yourself
This example demonstrates the power and flexibility of Crawl4AI in handling complex web interactions and extracting meaningful data. You can customize the JavaScript code, CSS selectors, and extraction strategies to suit your specific requirements.


@@ -1,90 +0,0 @@
# LLM Extraction
Crawl4AI allows you to use Language Models (LLMs) to extract structured data or relevant content from web pages. Below are two examples demonstrating how to use LLMExtractionStrategy for different purposes.
## Example 1: Extract Structured Data
In this example, we use the `LLMExtractionStrategy` to extract structured data (model names and their fees) from the OpenAI pricing page.
```python
import os
import time
from crawl4ai.web_crawler import WebCrawler
from crawl4ai.chunking_strategy import *
from crawl4ai.extraction_strategy import *
from crawl4ai.crawler_strategy import *
url = r'https://openai.com/api/pricing/'
crawler = WebCrawler()
crawler.warmup()
from pydantic import BaseModel, Field
class OpenAIModelFee(BaseModel):
model_name: str = Field(..., description="Name of the OpenAI model.")
input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
output_fee: str = Field(..., description="Fee for output token for the OpenAI model.")
result = crawler.run(
    url=url,
    word_count_threshold=1,
    extraction_strategy=LLMExtractionStrategy(
        provider="openai/gpt-4o",
        api_token=os.getenv('OPENAI_API_KEY'),
        schema=OpenAIModelFee.model_json_schema(),
        extraction_type="schema",
        instruction=(
            "From the crawled content, extract all mentioned model names along with their "
            "fees for input and output tokens. Make sure not to miss anything in the entire content. "
            'One extracted model JSON format should look like this: '
            '{ "model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens" }'
        )
    ),
    bypass_cache=True,
)
model_fees = json.loads(result.extracted_content)
print(len(model_fees))

# Make sure the output directory exists before writing
os.makedirs(".data", exist_ok=True)
with open(".data/data.json", "w", encoding="utf-8") as f:
    f.write(result.extracted_content)
```
## Example 2: Extract Relevant Content
In this example, we instruct the LLM to extract only content related to technology from the NBC News business page.
```python
crawler = WebCrawler()
crawler.warmup()

result = crawler.run(
    url="https://www.nbcnews.com/business",
    extraction_strategy=LLMExtractionStrategy(
        provider="openai/gpt-4o",
        api_token=os.getenv('OPENAI_API_KEY'),
        instruction="Extract only content related to technology"
    ),
    bypass_cache=True,
)

# The extracted content is a JSON list of relevant blocks, not model fees
extracted_blocks = json.loads(result.extracted_content)
print(len(extracted_blocks))

os.makedirs(".data", exist_ok=True)
with open(".data/data.json", "w", encoding="utf-8") as f:
    f.write(result.extracted_content)
```
## Customizing LLM Provider
Under the hood, Crawl4AI uses the `litellm` library, which allows you to use any LLM provider you want. Just pass the correct model name and API token.
```python
extraction_strategy=LLMExtractionStrategy(
provider="your_llm_provider/model_name",
api_token="your_api_token",
instruction="Your extraction instruction"
)
```
This flexibility allows you to integrate with various LLM providers and tailor the extraction process to your specific needs.
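For example, here is a sketch of pointing the same strategy at other providers (the model identifiers below are illustrative; check the `litellm` documentation for the exact names your provider expects):

```python
import os
from crawl4ai.extraction_strategy import LLMExtractionStrategy

# A Groq-hosted model, named with litellm's "provider/model" convention
groq_strategy = LLMExtractionStrategy(
    provider="groq/llama3-70b-8192",
    api_token=os.getenv("GROQ_API_KEY"),
    instruction="Extract only content related to technology"
)

# A local model served by Ollama (token is typically unused in this setup)
ollama_strategy = LLMExtractionStrategy(
    provider="ollama/llama3",
    api_token="no-token-needed",
    instruction="Extract only content related to technology"
)
```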

View File

@@ -1,248 +0,0 @@
## Research Assistant Example
This example demonstrates how to build a research assistant using `Chainlit` and `Crawl4AI`. The assistant will be capable of crawling web pages for information and answering questions based on the crawled content. Additionally, it integrates speech-to-text functionality for audio inputs.
### Step-by-Step Guide
1. **Install Required Packages**
Ensure you have the necessary packages installed. You need `chainlit`, `requests`, and `openai`.
```bash
pip install chainlit requests openai
```
2. **Import Libraries**
Import all the necessary modules and initialize the OpenAI client.
```python
import os
import time
from openai import AsyncOpenAI
import chainlit as cl
import re
import requests
from io import BytesIO
from chainlit.element import ElementBased
from concurrent.futures import ThreadPoolExecutor
client = AsyncOpenAI(base_url="https://api.groq.com/openai/v1", api_key=os.getenv("GROQ_API_KEY"))
# Instrument the OpenAI client
cl.instrument_openai()
```
3. **Set Configuration**
Define the model settings for the assistant.
```python
settings = {
    "model": "llama3-8b-8192",
    "temperature": 0.5,
    "max_tokens": 500,
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
}
```
4. **Define Utility Functions**
- **Extract URLs from Text**: Use regex to find URLs in messages.
```python
def extract_urls(text):
    url_pattern = re.compile(r'(https?://\S+)')
    return url_pattern.findall(text)
```
- **Crawl URL**: Send a request to `Crawl4AI` to fetch the content of a URL.
```python
def crawl_url(url):
    data = {
        "urls": [url],
        "include_raw_html": True,
        "word_count_threshold": 10,
        "extraction_strategy": "NoExtractionStrategy",
        "chunking_strategy": "RegexChunking"
    }
    response = requests.post("https://crawl4ai.com/crawl", json=data)
    response_data = response.json()
    return response_data['results'][0]['markdown']
```
5. **Initialize Chat Start Event**
Set up the initial chat message and user session.
```python
@cl.on_chat_start
async def on_chat_start():
    cl.user_session.set("session", {
        "history": [],
        "context": {}
    })
    await cl.Message(
        content="Welcome to the chat! How can I assist you today?"
    ).send()
```
6. **Handle Incoming Messages**
Process user messages, extract URLs, and crawl them concurrently. Update the chat history and system message.
```python
@cl.on_message
async def on_message(message: cl.Message):
    user_session = cl.user_session.get("session")

    # Extract URLs from the user's message and crawl them concurrently
    urls = extract_urls(message.content)
    futures = []
    with ThreadPoolExecutor() as executor:
        for url in urls:
            futures.append(executor.submit(crawl_url, url))
    results = [future.result() for future in futures]

    # Store each crawled page in the session context under a reference number
    for url, result in zip(urls, results):
        ref_number = f"REF_{len(user_session['context']) + 1}"
        user_session["context"][ref_number] = {
            "url": url,
            "content": result
        }

    user_session["history"].append({
        "role": "user",
        "content": message.content
    })

    # Create a system message that includes the crawled context
    context_messages = [
        f'<appendix ref="{ref}">\n{data["content"]}\n</appendix>'
        for ref, data in user_session["context"].items()
    ]
    if context_messages:
        system_message = {
            "role": "system",
            "content": (
                "You are a helpful bot. Use the following context for answering questions. "
                "Refer to the sources using the REF number in square brackets, e.g., [1], only if the source is given in the appendices below.\n\n"
                "If the question requires any information from the provided appendices or context, refer to the sources. "
                "If not, there is no need to add a references section. "
                "At the end of your response, provide a reference section listing the URLs and their REF numbers only if sources from the appendices were used.\n\n"
                + "\n\n".join(context_messages)
            )
        }
    else:
        system_message = {
            "role": "system",
            "content": "You are a helpful assistant."
        }

    msg = cl.Message(content="")
    await msg.send()

    # Stream the response from the LLM
    stream = await client.chat.completions.create(
        messages=[
            system_message,
            *user_session["history"]
        ],
        stream=True,
        **settings
    )
    assistant_response = ""
    async for part in stream:
        if token := part.choices[0].delta.content:
            assistant_response += token
            await msg.stream_token(token)

    # Add the assistant message to the history
    user_session["history"].append({
        "role": "assistant",
        "content": assistant_response
    })
    await msg.update()

    # Append a reference section only if any sources were crawled
    if user_session["context"]:
        reference_section = "\n\nReferences:\n"
        for ref, data in user_session["context"].items():
            reference_section += f"[{ref.split('_')[1]}]: {data['url']}\n"
        msg.content += reference_section
        await msg.update()
```
7. **Handle Audio Input**
Capture and transcribe audio input. Store the audio buffer and transcribe it when the audio ends.
```python
@cl.on_audio_chunk
async def on_audio_chunk(chunk: cl.AudioChunk):
    if chunk.isStart:
        buffer = BytesIO()
        buffer.name = f"input_audio.{chunk.mimeType.split('/')[1]}"
        cl.user_session.set("audio_buffer", buffer)
        cl.user_session.set("audio_mime_type", chunk.mimeType)
    cl.user_session.get("audio_buffer").write(chunk.data)

@cl.step(type="tool")
async def speech_to_text(audio_file):
cli = Groq()
response = await client.audio.transcriptions.create(
model="whisper-large-v3", file=audio_file
)
return response.text
@cl.on_audio_end
async def on_audio_end(elements: list[ElementBased]):
    audio_buffer: BytesIO = cl.user_session.get("audio_buffer")
    audio_buffer.seek(0)
    audio_file = audio_buffer.read()
    audio_mime_type: str = cl.user_session.get("audio_mime_type")

    start_time = time.time()
    transcription = await speech_to_text((audio_buffer.name, audio_file, audio_mime_type))
    end_time = time.time()
    print(f"Transcription took {end_time - start_time} seconds")

    user_msg = cl.Message(
        author="You",
        type="user_message",
        content=transcription
    )
    await user_msg.send()
    await on_message(user_msg)
```
8. **Run the Chat Application**
Start the Chainlit application.
```python
if __name__ == "__main__":
    from chainlit.cli import run_chainlit
    run_chainlit(__file__)
```
### Explanation
- **Libraries and Configuration**: Import necessary libraries and configure the OpenAI client.
- **Utility Functions**: Define functions to extract URLs and crawl them.
- **Chat Start Event**: Initialize chat session and welcome message.
- **Message Handling**: Extract URLs, crawl them concurrently, and update chat history and context.
- **Audio Handling**: Capture, buffer, and transcribe audio input, then process the transcription as text.
- **Running the Application**: Start the Chainlit server to interact with the assistant.
This example showcases how to create an interactive research assistant that can fetch, process, and summarize web content, along with handling audio inputs for a seamless user experience.

View File

@@ -1,108 +0,0 @@
## Summarization Example
This example demonstrates how to use `Crawl4AI` to extract a summary from a web page. The goal is to obtain the title, a detailed summary, a brief summary, and a list of keywords from the given page.
### Step-by-Step Guide
1. **Import Necessary Modules**
First, import the necessary modules and classes.
```python
import os
import time
import json
from crawl4ai.web_crawler import WebCrawler
from crawl4ai.chunking_strategy import *
from crawl4ai.extraction_strategy import *
from crawl4ai.crawler_strategy import *
from pydantic import BaseModel, Field
```
2. **Define the URL to be Crawled**
Set the URL of the web page you want to summarize.
```python
url = r'https://marketplace.visualstudio.com/items?itemName=Unclecode.groqopilot'
```
3. **Initialize the WebCrawler**
Create an instance of the `WebCrawler` and call the `warmup` method.
```python
crawler = WebCrawler()
crawler.warmup()
```
4. **Define the Data Model**
Use Pydantic to define the structure of the extracted data.
```python
class PageSummary(BaseModel):
    title: str = Field(..., description="Title of the page.")
    summary: str = Field(..., description="Summary of the page.")
    brief_summary: str = Field(..., description="Brief summary of the page.")
    keywords: list = Field(..., description="Keywords assigned to the page.")
```
5. **Run the Crawler**
Set up and run the crawler with the `LLMExtractionStrategy`. Provide the necessary parameters, including the schema for the extracted data and the instruction for the LLM.
```python
result = crawler.run(
    url=url,
    word_count_threshold=1,
    extraction_strategy=LLMExtractionStrategy(
        provider="openai/gpt-4o",
        api_token=os.getenv('OPENAI_API_KEY'),
        schema=PageSummary.model_json_schema(),
        extraction_type="schema",
        apply_chunking=False,
        instruction=(
            "From the crawled content, extract the following details: "
            "1. Title of the page "
            "2. Summary of the page, which is a detailed summary "
            "3. Brief summary of the page, which is a paragraph text "
            "4. Keywords assigned to the page, which is a list of keywords. "
            'The extracted JSON format should look like this: '
            '{ "title": "Page Title", "summary": "Detailed summary of the page.", '
            '"brief_summary": "Brief summary in a paragraph.", "keywords": ["keyword1", "keyword2", "keyword3"] }'
        )
    ),
    bypass_cache=True,
)
```
6. **Process the Extracted Data**
Load the extracted content into a JSON object and print it.
```python
page_summary = json.loads(result.extracted_content)
print(page_summary)
```
7. **Save the Extracted Data**
Save the extracted data to a file for further use.
```python
with open(".data/page_summary.json", "w", encoding="utf-8") as f:
f.write(result.extracted_content)
```
### Explanation
- **Importing Modules**: Import the necessary modules, including `WebCrawler` and `LLMExtractionStrategy` from `Crawl4AI`.
- **URL Definition**: Set the URL of the web page you want to crawl and summarize.
- **WebCrawler Initialization**: Create an instance of `WebCrawler` and call the `warmup` method to prepare the crawler.
- **Data Model Definition**: Define the structure of the data you want to extract using Pydantic's `BaseModel`.
- **Crawler Execution**: Run the crawler with the `LLMExtractionStrategy`, providing the schema and detailed instructions for the extraction process.
- **Data Processing**: Load the extracted content into a JSON object and print it to verify the results.
- **Data Saving**: Save the extracted data to a file for further use.
This example demonstrates how to harness the power of `Crawl4AI` to perform advanced web crawling and data extraction tasks with minimal code.

View File

@@ -1,10 +0,0 @@
{
"NoExtractionStrategy": "### NoExtractionStrategy\n\n`NoExtractionStrategy` is a basic extraction strategy that returns the entire HTML content without any modification. It is useful for cases where no specific extraction is required. Only clean html, and amrkdown.\n\n#### Constructor Parameters:\nNone.\n\n#### Example usage:\n```python\nextractor = NoExtractionStrategy()\nextracted_content = extractor.extract(url, html)\n```",
"LLMExtractionStrategy": "### LLMExtractionStrategy\n\n`LLMExtractionStrategy` uses a Language Model (LLM) to extract meaningful blocks or chunks from the given HTML content. This strategy leverages an external provider for language model completions.\n\n#### Constructor Parameters:\n- `provider` (str, optional): The provider to use for the language model completions. Default is `DEFAULT_PROVIDER` (e.g., openai/gpt-4).\n- `api_token` (str, optional): The API token for the provider. If not provided, it will try to load from the environment variable `OPENAI_API_KEY`.\n- `instruction` (str, optional): An instruction to guide the LLM on how to perform the extraction. This allows users to specify the type of data they are interested in or set the tone of the response. Default is `None`.\n\n#### Example usage:\n```python\nextractor = LLMExtractionStrategy(provider='openai', api_token='your_api_token', instruction='Extract only news about AI.')\nextracted_content = extractor.extract(url, html)\n```\n\nBy providing clear instructions, users can tailor the extraction process to their specific needs, enhancing the relevance and utility of the extracted content.",
"CosineStrategy": "### CosineStrategy\n\n`CosineStrategy` uses hierarchical clustering based on cosine similarity to extract clusters of text from the given HTML content. This strategy is suitable for identifying related content sections.\n\n#### Constructor Parameters:\n- `semantic_filter` (str, optional): A string containing keywords for filtering relevant documents before clustering. If provided, documents are filtered based on their cosine similarity to the keyword filter embedding. Default is `None`.\n- `word_count_threshold` (int, optional): Minimum number of words per cluster. Default is `20`.\n- `max_dist` (float, optional): The maximum cophenetic distance on the dendrogram to form clusters. Default is `0.2`.\n- `linkage_method` (str, optional): The linkage method for hierarchical clustering. Default is `'ward'`.\n- `top_k` (int, optional): Number of top categories to extract. Default is `3`.\n- `model_name` (str, optional): The model name for embedding generation. Default is `'BAAI/bge-small-en-v1.5'`.\n\n#### Example usage:\n```python\nextractor = CosineStrategy(semantic_filter='artificial intelligence', word_count_threshold=10, max_dist=0.2, linkage_method='ward', top_k=3, model_name='BAAI/bge-small-en-v1.5')\nextracted_content = extractor.extract(url, html)\n```\n\n#### Cosine Similarity Filtering\n\nWhen a `semantic_filter` is provided, the `CosineStrategy` applies an embedding-based filtering process to select relevant documents before performing hierarchical clustering.",
"TopicExtractionStrategy": "### TopicExtractionStrategy\n\n`TopicExtractionStrategy` uses the TextTiling algorithm to segment the HTML content into topics and extracts keywords for each segment. This strategy is useful for identifying and summarizing thematic content.\n\n#### Constructor Parameters:\n- `num_keywords` (int, optional): Number of keywords to represent each topic segment. Default is `3`.\n\n#### Example usage:\n```python\nextractor = TopicExtractionStrategy(num_keywords=3)\nextracted_content = extractor.extract(url, html)\n```"
}

View File

@@ -1,138 +0,0 @@
# Advanced Features
Crawl4AI offers a range of advanced features that allow you to fine-tune your web crawling and data extraction process. This section will cover some of these advanced features, including taking screenshots, extracting media and links, customizing the user agent, using custom hooks, and leveraging CSS selectors.
## Taking Screenshots 📸
One of the cool features of Crawl4AI is the ability to take screenshots of the web pages you're crawling. This can be particularly useful for visual verification or for capturing the state of dynamic content.
Here's how you can take a screenshot:
```python
from crawl4ai import WebCrawler
import base64
# Create the WebCrawler instance
crawler = WebCrawler()
crawler.warmup()
# Run the crawler with the screenshot parameter
result = crawler.run(url="https://www.nbcnews.com/business", screenshot=True)
# Save the screenshot to a file
with open("screenshot.png", "wb") as f:
f.write(base64.b64decode(result.screenshot))
print("Screenshot saved to 'screenshot.png'!")
```
In this example, we create a `WebCrawler` instance, warm it up, and then run it with the `screenshot` parameter set to `True`. The screenshot is saved as a base64 encoded string in the result, which we then decode and save as a PNG file.
## Extracting Media and Links 🎨🔗
Crawl4AI can extract all media tags (images, audio, and video) and links (both internal and external) from a web page. This feature is useful for collecting multimedia content or analyzing link structures.
Here's an example:
```python
from crawl4ai import WebCrawler
# Create the WebCrawler instance
crawler = WebCrawler()
crawler.warmup()
# Run the crawler
result = crawler.run(url="https://www.nbcnews.com/business")
print("Extracted media:", result.media)
print("Extracted links:", result.links)
```
In this example, the `result` object contains dictionaries for media and links, which you can access and use as needed.
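As a quick sketch of walking these dictionaries (assuming the `images`/`external` keys and `src`/`href` fields described in the Crawl Result section):

```python
# Print every image URL found on the page
for image in result.media.get("images", []):
    print("Image:", image["src"])

# Print every external link together with its anchor text
for link in result.links.get("external", []):
    print("Link:", link["href"], "-", link["text"])
```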
## Customizing the User Agent 🕵️‍♂️
Crawl4AI allows you to set a custom user agent for your HTTP requests. This can help you avoid detection by web servers or simulate different browsing environments.
Here's how to set a custom user agent:
```python
from crawl4ai import WebCrawler
# Create the WebCrawler instance
crawler = WebCrawler()
crawler.warmup()
# Run the crawler with a custom user agent
result = crawler.run(url="https://www.nbcnews.com/business", user_agent="Mozilla/5.0 (compatible; MyCrawler/1.0)")
print("Crawl result:", result)
```
In this example, we specify a custom user agent string when running the crawler.
## Using Custom Hooks 🪝
Hooks are a powerful feature in Crawl4AI that allow you to customize the crawling process at various stages. You can define hooks for actions such as driver initialization, before and after URL fetching, and before returning the HTML.
Here's an example of using hooks:
```python
from crawl4ai import WebCrawler
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Define the hooks
def on_driver_created(driver):
    driver.maximize_window()
    driver.get('https://example.com/login')
    WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.NAME, 'username'))).send_keys('testuser')
    driver.find_element(By.NAME, 'password').send_keys('password123')
    driver.find_element(By.NAME, 'login').click()
    return driver

def before_get_url(driver):
    driver.execute_cdp_cmd('Network.setExtraHTTPHeaders', {'headers': {'X-Test-Header': 'test'}})
    return driver
# Create the WebCrawler instance
crawler = WebCrawler()
crawler.warmup()
# Set the hooks
crawler.set_hook('on_driver_created', on_driver_created)
crawler.set_hook('before_get_url', before_get_url)
# Run the crawler
result = crawler.run(url="https://example.com")
print("Crawl result:", result)
```
In this example, we define hooks to handle driver initialization and custom headers before fetching the URL.
## Using CSS Selectors 🎯
CSS selectors allow you to target specific elements on a web page for extraction. This can be useful for scraping structured content, such as articles or product details.
Here's an example of using a CSS selector:
```python
from crawl4ai import WebCrawler
# Create the WebCrawler instance
crawler = WebCrawler()
crawler.warmup()
# Run the crawler with a CSS selector to extract only H2 tags
result = crawler.run(url="https://www.nbcnews.com/business", css_selector="h2")
print("Extracted H2 tags:", result.extracted_content)
```
In this example, we use the `css_selector` parameter to extract only the H2 tags from the web page.
---
With these advanced features, you can leverage Crawl4AI to perform sophisticated web crawling and data extraction tasks. Whether you need to take screenshots, extract specific elements, customize the crawling process, or set custom headers, Crawl4AI provides the flexibility and power to meet your needs. Happy crawling! 🕷️🚀

View File

@@ -1,133 +0,0 @@
## Chunking Strategies 📚
Crawl4AI provides several powerful chunking strategies to divide text into manageable parts for further processing. Each strategy has unique characteristics and is suitable for different scenarios. Let's explore them one by one.
### RegexChunking
`RegexChunking` splits text using regular expressions. This is ideal for creating chunks based on specific patterns like paragraphs or sentences.
#### When to Use
- Great for structured text with consistent delimiters.
- Suitable for documents where specific patterns (e.g., double newlines, periods) indicate logical chunks.
#### Parameters
- `patterns` (list, optional): Regular expressions used to split the text. Default is to split by double newlines (`['\n\n']`).
#### Example
```python
from crawl4ai.chunking_strategy import RegexChunking
# Define patterns for splitting text
patterns = [r'\n\n', r'\. ']
chunker = RegexChunking(patterns=patterns)
# Sample text
text = "This is a sample text. It will be split into chunks.\n\nThis is another paragraph."
# Chunk the text
chunks = chunker.chunk(text)
print(chunks)
```
### NlpSentenceChunking
`NlpSentenceChunking` uses NLP models to split text into sentences, ensuring accurate sentence boundaries.
#### When to Use
- Ideal for texts where sentence boundaries are crucial.
- Useful for creating chunks that preserve grammatical structures.
#### Parameters
- None.
#### Example
```python
from crawl4ai.chunking_strategy import NlpSentenceChunking
chunker = NlpSentenceChunking()
# Sample text
text = "This is a sample text. It will be split into sentences. Here's another sentence."
# Chunk the text
chunks = chunker.chunk(text)
print(chunks)
```
### TopicSegmentationChunking
`TopicSegmentationChunking` employs the TextTiling algorithm to segment text into topic-based chunks. This method identifies thematic boundaries.
#### When to Use
- Perfect for long documents with distinct topics.
- Useful when preserving topic continuity is more important than maintaining text order.
#### Parameters
- `num_keywords` (int, optional): Number of keywords for each topic segment. Default is `3`.
#### Example
```python
from crawl4ai.chunking_strategy import TopicSegmentationChunking
chunker = TopicSegmentationChunking(num_keywords=3)
# Sample text
text = "This document contains several topics. Topic one discusses AI. Topic two covers machine learning."
# Chunk the text
chunks = chunker.chunk(text)
print(chunks)
```
### FixedLengthWordChunking
`FixedLengthWordChunking` splits text into chunks based on a fixed number of words. This ensures each chunk has approximately the same length.
#### When to Use
- Suitable for processing large texts where uniform chunk size is important.
- Useful when the number of words per chunk needs to be controlled.
#### Parameters
- `chunk_size` (int, optional): Number of words per chunk. Default is `100`.
#### Example
```python
from crawl4ai.chunking_strategy import FixedLengthWordChunking
chunker = FixedLengthWordChunking(chunk_size=10)
# Sample text
text = "This is a sample text. It will be split into chunks of fixed length."
# Chunk the text
chunks = chunker.chunk(text)
print(chunks)
```
### SlidingWindowChunking
`SlidingWindowChunking` uses a sliding window approach to create overlapping chunks. Each chunk has a fixed length, and the window slides by a specified step size.
#### When to Use
- Ideal for creating overlapping chunks to preserve context.
- Useful for tasks where context from adjacent chunks is needed.
#### Parameters
- `window_size` (int, optional): Number of words in each chunk. Default is `100`.
- `step` (int, optional): Number of words to slide the window. Default is `50`.
#### Example
```python
from crawl4ai.chunking_strategy import SlidingWindowChunking
chunker = SlidingWindowChunking(window_size=10, step=5)
# Sample text
text = "This is a sample text. It will be split using a sliding window approach to preserve context."
# Chunk the text
chunks = chunker.chunk(text)
print(chunks)
```
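Any of these chunkers can also be handed directly to the crawler through the `chunking_strategy` parameter of `run` (see the Crawl Request Parameters section). A minimal sketch:

```python
from crawl4ai import WebCrawler
from crawl4ai.chunking_strategy import RegexChunking

crawler = WebCrawler()
crawler.warmup()

# The page text is chunked by double newlines before any extraction runs
result = crawler.run(
    url="https://www.nbcnews.com/business",
    chunking_strategy=RegexChunking(patterns=['\n\n'])
)
print(result.extracted_content)
```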
With these chunking strategies, you can choose the best method to divide your text based on your specific needs. Whether you need precise sentence boundaries, topic-based segmentation, or uniform chunk sizes, Crawl4AI has you covered. Happy chunking! 📝✨

View File

@@ -1,130 +0,0 @@
# Crawl Request Parameters
The `run` function in Crawl4AI is designed to be highly configurable, allowing you to customize the crawling and extraction process to suit your needs. Below are the parameters you can use with the `run` function, along with their descriptions, possible values, and examples.
## Parameters
### url (str)
**Description:** The URL of the webpage to crawl.
**Required:** Yes
**Example:**
```python
url = "https://www.nbcnews.com/business"
```
### word_count_threshold (int)
**Description:** The minimum number of words a block must contain to be considered meaningful. The default value is `5`.
**Required:** No
**Default Value:** `5`
**Example:**
```python
word_count_threshold = 10
```
### extraction_strategy (ExtractionStrategy)
**Description:** The strategy to use for extracting content from the HTML. It must be an instance of `ExtractionStrategy`. If not provided, the default is `NoExtractionStrategy`.
**Required:** No
**Default Value:** `NoExtractionStrategy()`
**Example:**
```python
extraction_strategy = CosineStrategy(semantic_filter="finance")
```
### chunking_strategy (ChunkingStrategy)
**Description:** The strategy to use for chunking the text before processing. It must be an instance of `ChunkingStrategy`. The default value is `RegexChunking()`.
**Required:** No
**Default Value:** `RegexChunking()`
**Example:**
```python
chunking_strategy = NlpSentenceChunking()
```
### bypass_cache (bool)
**Description:** Whether to force a fresh crawl even if the URL has been previously crawled. The default value is `False`.
**Required:** No
**Default Value:** `False`
**Example:**
```python
bypass_cache = True
```
### css_selector (str)
**Description:** The CSS selector to target specific parts of the HTML for extraction. If not provided, the entire HTML will be processed.
**Required:** No
**Default Value:** `None`
**Example:**
```python
css_selector = "div.article-content"
```
### screenshot (bool)
**Description:** Whether to take screenshots of the page. The default value is `False`.
**Required:** No
**Default Value:** `False`
**Example:**
```python
screenshot = True
```
### user_agent (str)
**Description:** The user agent to use for the HTTP requests. If not provided, a default user agent will be used.
**Required:** No
**Default Value:** `None`
**Example:**
```python
user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"
```
### verbose (bool)
**Description:** Whether to enable verbose logging. The default value is `True`.
**Required:** No
**Default Value:** `True`
**Example:**
```python
verbose = True
```
### **kwargs
Additional keyword arguments that can be passed to customize the crawling process further. Some notable options include:
- **only_text (bool):** Whether to extract only text content, excluding HTML tags. Default is `False`.
**Example:**
```python
result = crawler.run(
    url="https://www.nbcnews.com/business",
    css_selector="p",
    only_text=True
)
```
## Example Usage
Here's an example of how to use the `run` function with various parameters:
```python
from crawl4ai import WebCrawler
from crawl4ai.extraction_strategy import CosineStrategy
from crawl4ai.chunking_strategy import NlpSentenceChunking
# Create the WebCrawler instance
crawler = WebCrawler()
crawler.warmup()
# Run the crawler with custom parameters
result = crawler.run(
    url="https://www.nbcnews.com/business",
    word_count_threshold=10,
    extraction_strategy=CosineStrategy(semantic_filter="finance"),
    chunking_strategy=NlpSentenceChunking(),
    bypass_cache=True,
    css_selector="div.article-content",
    screenshot=True,
    user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3",
    verbose=True,
    only_text=True
)
print(result)
```
This example demonstrates how to configure various parameters to customize the crawling and extraction process using Crawl4AI.

View File

@@ -1,120 +0,0 @@
# Crawl Result
The `CrawlResult` class is the heart of Crawl4AI's output, encapsulating all the data extracted from a crawling session. This class contains various fields that store the results of the web crawling and extraction process. Let's break down each field and see what it holds. 🎉
## Class Definition
```python
from typing import Dict, List, Optional
from pydantic import BaseModel

class CrawlResult(BaseModel):
    url: str
    html: str
    success: bool
    cleaned_html: Optional[str] = None
    media: Dict[str, List[Dict]] = {}
    links: Dict[str, List[Dict]] = {}
    screenshot: Optional[str] = None
    markdown: Optional[str] = None
    extracted_content: Optional[str] = None
    metadata: Optional[dict] = None
    error_message: Optional[str] = None
```
## Fields Explanation
### `url: str`
The URL that was crawled. This field simply stores the URL of the web page that was processed.
### `html: str`
The raw HTML content of the web page. This is the unprocessed HTML source as retrieved by the crawler.
### `success: bool`
A flag indicating whether the crawling and extraction were successful. If any error occurs during the process, this will be `False`.
### `cleaned_html: Optional[str]`
The cleaned HTML content of the web page. This field holds the HTML after removing unwanted tags like `<script>`, `<style>`, and others that do not contribute to the useful content.
### `media: Dict[str, List[Dict]]`
A dictionary containing lists of extracted media elements from the web page, categorized into images, videos, and audios. Here's how they are structured:
- **Images**: Each image is represented as a dictionary with `src` (source URL) and `alt` (alternate text).
- **Videos**: Each video is represented similarly with `src` and `alt`.
- **Audios**: Each audio is represented with `src` and `alt`.
```python
media = {
    'images': [
        {'src': 'image_url1', 'alt': 'description1', "type": "image"},
        {'src': 'image_url2', 'alt': 'description2', "type": "image"}
    ],
    'videos': [
        {'src': 'video_url1', 'alt': 'description1', "type": "video"}
    ],
    'audios': [
        {'src': 'audio_url1', 'alt': 'description1', "type": "audio"}
    ]
}
```
### `links: Dict[str, List[Dict]]`
A dictionary containing lists of internal and external links extracted from the web page. Each link is represented as a dictionary with `href` (URL) and `text` (link text).
- **Internal Links**: Links pointing to the same domain.
- **External Links**: Links pointing to different domains.
```python
links = {
    'internal': [
        {'href': 'internal_link1', 'text': 'link_text1'},
        {'href': 'internal_link2', 'text': 'link_text2'}
    ],
    'external': [
        {'href': 'external_link1', 'text': 'link_text1'}
    ]
}
```
### `screenshot: Optional[str]`
A base64-encoded screenshot of the web page. This field stores the screenshot data if the crawling was configured to take a screenshot.
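To persist it, decode the base64 string back into bytes, following the same pattern shown in the Advanced Features section:

```python
import base64

if result.screenshot:
    with open("screenshot.png", "wb") as f:
        f.write(base64.b64decode(result.screenshot))
```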
### `markdown: Optional[str]`
The content of the web page converted to Markdown format. This is useful for generating clean, readable text that retains the structure of the original HTML.
### `extracted_content: Optional[str]`
The content extracted based on the specified extraction strategy. This field holds the meaningful content blocks extracted from the web page, ready for your AI and data processing needs.
### `metadata: Optional[dict]`
A dictionary containing metadata extracted from the web page, such as title, description, keywords, and other meta tags.
### `error_message: Optional[str]`
If an error occurs during crawling, this field will contain the error message, helping you debug and understand what went wrong. 🚨
## Example Usage
Here's a quick example to illustrate how you might use the `CrawlResult` in your code:
```python
from crawl4ai import WebCrawler
# Create the WebCrawler instance
crawler = WebCrawler()
# Run the crawler on a URL
result = crawler.run(url="https://www.example.com")
# Check if the crawl was successful
if result.success:
    print("Crawl succeeded!")
    print("URL:", result.url)
    print("HTML:", result.html[:100])  # Print the first 100 characters of the HTML
    print("Cleaned HTML:", result.cleaned_html[:100])
    print("Media:", result.media)
    print("Links:", result.links)
    print("Screenshot:", result.screenshot)
    print("Markdown:", result.markdown[:100])
    print("Extracted Content:", result.extracted_content)
    print("Metadata:", result.metadata)
else:
    print("Crawl failed with error:", result.error_message)
```
With this setup, you can easily access all the valuable data extracted from the web page and integrate it into your applications. Happy crawling! 🕷️🤖

View File

@@ -1,116 +0,0 @@
## Extraction Strategies 🧠
Crawl4AI offers powerful extraction strategies to derive meaningful information from web content. Let's dive into two of the most important strategies: `CosineStrategy` and `LLMExtractionStrategy`.
### CosineStrategy
`CosineStrategy` uses hierarchical clustering based on cosine similarity to group text chunks into meaningful clusters. This method converts each chunk into its embedding and then clusters the embeddings to form semantically coherent chunks.
#### When to Use
- Ideal for fast, accurate semantic segmentation of text.
- Perfect for scenarios where LLMs might be overkill or too slow.
- Suitable for narrowing down content based on specific queries or keywords.
#### Parameters
- `semantic_filter` (str, optional): Keywords for filtering relevant documents before clustering. Documents are filtered based on their cosine similarity to the keyword filter embedding. Default is `None`.
- `word_count_threshold` (int, optional): Minimum number of words per cluster. Default is `20`.
- `max_dist` (float, optional): Maximum cophenetic distance on the dendrogram to form clusters. Default is `0.2`.
- `linkage_method` (str, optional): Linkage method for hierarchical clustering. Default is `'ward'`.
- `top_k` (int, optional): Number of top categories to extract. Default is `3`.
- `model_name` (str, optional): Model name for embedding generation. Default is `'BAAI/bge-small-en-v1.5'`.
#### Example
```python
from crawl4ai.extraction_strategy import CosineStrategy
from crawl4ai import WebCrawler
crawler = WebCrawler()
crawler.warmup()
# Define extraction strategy
strategy = CosineStrategy(
    semantic_filter="finance economy stock market",
    word_count_threshold=10,
    max_dist=0.2,
    linkage_method='ward',
    top_k=3,
    model_name='BAAI/bge-small-en-v1.5'
)
# Sample URL
url = "https://www.nbcnews.com/business"
# Run the crawler with the extraction strategy
result = crawler.run(url=url, extraction_strategy=strategy)
print(result.extracted_content)
```
### LLMExtractionStrategy
`LLMExtractionStrategy` leverages a Language Model (LLM) to extract meaningful content from HTML. This strategy uses an external provider for LLM completions to perform extraction based on instructions.
#### When to Use
- Suitable for complex extraction tasks requiring nuanced understanding.
- Ideal for scenarios where detailed instructions can guide the extraction process.
- Perfect for extracting specific types of information or content with precise guidelines.
#### Parameters
- `provider` (str, optional): Provider for language model completions (e.g., openai/gpt-4). Default is `DEFAULT_PROVIDER`.
- `api_token` (str, optional): API token for the provider. If not provided, it will try to load from the environment variable `OPENAI_API_KEY`.
- `instruction` (str, optional): Instructions to guide the LLM on how to perform the extraction. Default is `None`.
#### Example Without Instructions
```python
from crawl4ai.extraction_strategy import LLMExtractionStrategy
from crawl4ai import WebCrawler
crawler = WebCrawler()
crawler.warmup()
# Define extraction strategy without instructions
strategy = LLMExtractionStrategy(
    provider='openai',
    api_token='your_api_token'
)
# Sample URL
url = "https://www.nbcnews.com/business"
# Run the crawler with the extraction strategy
result = crawler.run(url=url, extraction_strategy=strategy)
print(result.extracted_content)
```
#### Example With Instructions
```python
from crawl4ai.extraction_strategy import LLMExtractionStrategy
from crawl4ai import WebCrawler
crawler = WebCrawler()
crawler.warmup()
# Define extraction strategy with instructions
strategy = LLMExtractionStrategy(
    provider='openai',
    api_token='your_api_token',
    instruction="Extract only financial news and summarize key points."
)
# Sample URL
url = "https://www.nbcnews.com/business"
# Run the crawler with the extraction strategy
result = crawler.run(url=url, extraction_strategy=strategy)
print(result.extracted_content)
```
#### Use Cases for LLMExtractionStrategy
- Extracting specific data types from structured or semi-structured content (see the sketch after this list).
- Generating summaries, extracting key information, or transforming content into different formats.
- Performing detailed extractions based on custom instructions.
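For the first use case, `LLMExtractionStrategy` can also be combined with a Pydantic schema and `extraction_type="schema"`, as demonstrated in the LLM Extraction example. A minimal sketch (the `NewsItem` model below is purely illustrative):

```python
import os
from pydantic import BaseModel, Field
from crawl4ai import WebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy

# Hypothetical data model for this illustration
class NewsItem(BaseModel):
    headline: str = Field(..., description="Headline of the news item.")
    summary: str = Field(..., description="One-sentence summary of the item.")

crawler = WebCrawler()
crawler.warmup()

result = crawler.run(
    url="https://www.nbcnews.com/business",
    extraction_strategy=LLMExtractionStrategy(
        provider="openai/gpt-4o",
        api_token=os.getenv("OPENAI_API_KEY"),
        schema=NewsItem.model_json_schema(),
        extraction_type="schema",
        instruction="Extract each financial news item as a headline and a one-sentence summary."
    ),
)
print(result.extracted_content)
```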
For more detailed examples, please refer to the [Examples section](../examples/index.md) of the documentation.
---
By choosing the right extraction strategy, you can effectively extract the most relevant and useful information from web content. Whether you need fast, accurate semantic segmentation with `CosineStrategy` or nuanced, instruction-based extraction with `LLMExtractionStrategy`, Crawl4AI has you covered. Happy extracting! 🕵️‍♂️✨

View File

@@ -1,101 +0,0 @@
# Crawl4AI v0.2.77
Welcome to the official documentation for Crawl4AI! 🕷️🤖 Crawl4AI is an open-source Python library designed to simplify web crawling and extract useful information from web pages. This documentation will guide you through the features, usage, and customization of Crawl4AI.
## Try the [Demo](demo.md)
Just try it now and crawl different pages to see how it works. You can enter your own links, inspect the structure of the output, and view the Python sample code for running each crawl. The old demo is still available at [/old_demo](/old) if you want more details.
## Introduction
Crawl4AI has one clear task: to make crawling and data extraction from web pages easy and efficient, especially for large language models (LLMs) and AI applications. Whether you are using it as a REST API or a Python library, Crawl4AI offers a robust and flexible solution.
## Quick Start
Here's a quick example to show you how easy it is to use Crawl4AI:
```python
from crawl4ai import WebCrawler
# Create an instance of WebCrawler
crawler = WebCrawler()
# Warm up the crawler (load necessary models)
crawler.warmup()
# Run the crawler on a URL
result = crawler.run(url="https://www.nbcnews.com/business")
# Print the extracted content
print(result.extracted_content)
```
### Explanation
1. **Importing the Library**: We start by importing the `WebCrawler` class from the `crawl4ai` library.
2. **Creating an Instance**: An instance of `WebCrawler` is created.
3. **Warming Up**: The `warmup()` method prepares the crawler by loading necessary models and settings.
4. **Running the Crawler**: The `run()` method is used to crawl the specified URL and extract meaningful content.
5. **Printing the Result**: The extracted content is printed, showcasing the data extracted from the web page.
## Documentation Structure
This documentation is organized into several sections to help you navigate and find the information you need quickly:
### [Home](index.md)
An introduction to Crawl4AI, including a quick start guide and an overview of the documentation structure.
### [Installation](installation.md)
Instructions on how to install Crawl4AI and its dependencies.
### [Introduction](introduction.md)
A detailed introduction to Crawl4AI, its features, and how it can be used for various web crawling and data extraction tasks.
### [Quick Start](quickstart.md)
A step-by-step guide to get you up and running with Crawl4AI, including installation instructions and basic usage examples.
### [Examples](examples/index.md)
This section contains practical examples demonstrating different use cases of Crawl4AI:
- [LLM Extraction](examples/llm_extraction.md)
- [JS Execution & CSS Filtering](examples/js_execution_css_filtering.md)
- [Hooks & Auth](examples/hooks_auth.md)
- [Summarization](examples/summarization.md)
- [Research Assistant](examples/research_assistant.md)
### [Full Details of Using Crawler](full_details/crawl_request_parameters.md)
Comprehensive details on using the crawler, including:
- [Crawl Request Parameters](full_details/crawl_request_parameters.md)
- [Crawl Result Class](full_details/crawl_result_class.md)
- [Advanced Features](full_details/advanced_features.md)
- [Chunking Strategies](full_details/chunking_strategies.md)
- [Extraction Strategies](full_details/extraction_strategies.md)
### [API Reference](api/core_classes_and_functions.md)
Detailed documentation of the API, covering:
- [Core Classes and Functions](api/core_classes_and_functions.md)
- [Detailed API Documentation](api/detailed_api_documentation.md)
### [Change Log](changelog.md)
A log of all changes, updates, and improvements made to Crawl4AI.
### [Contact](contact.md)
Information on how to get in touch with the developers, report issues, and contribute to the project.
## Get Started
To get started with Crawl4AI, follow the quick start guide above or explore the detailed sections of this documentation. Whether you are a beginner or an advanced user, Crawl4AI has something to offer to make your web crawling and data extraction tasks easier and more efficient.
Happy Crawling! 🕸️🚀

View File

@@ -1,193 +0,0 @@
# Installation 💻
There are three ways to use Crawl4AI:
1. As a library (Recommended).
2. As a local server (Docker) or using the REST API.
3. As a local server (Docker) using the pre-built image from Docker Hub.
## Option 1: Library Installation
You can try this Colab for a quick start: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1sJPAmeLj5PMrg2VgOwMJ2ubGIcK0cJeX#scrollTo=g1RrmI4W_rPk)
Crawl4AI offers flexible installation options to suit various use cases. Choose the option that best fits your needs:
- **Default Installation** (Basic functionality):
```bash
virtualenv venv
source venv/bin/activate
pip install "crawl4ai @ git+https://github.com/unclecode/crawl4ai.git"
```
Use this for basic web crawling and scraping tasks.
- **Installation with PyTorch** (For advanced text clustering):
```bash
virtualenv venv
source venv/bin/activate
pip install "crawl4ai[torch] @ git+https://github.com/unclecode/crawl4ai.git"
```
Choose this if you need the CosineSimilarity cluster strategy.
- **Installation with Transformers** (For summarization and Hugging Face models):
```bash
virtualenv venv
source venv/bin/activate
pip install "crawl4ai[transformer] @ git+https://github.com/unclecode/crawl4ai.git"
```
Opt for this if you require text summarization or plan to use Hugging Face models.
- **Full Installation** (All features):
```bash
virtualenv venv
source venv/bin/activate
pip install "crawl4ai[all] @ git+https://github.com/unclecode/crawl4ai.git"
```
This installs all dependencies for full functionality.
- **Development Installation** (For contributors):
```bash
virtualenv venv
source venv/bin/activate
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai
pip install -e ".[all]"
```
Use this if you plan to modify the source code.
💡 If you installed with the "torch", "transformer", or "all" option, it's recommended to run the following CLI command once to pre-load the required models. This step is optional, but it will boost the crawler's performance and speed; it applies only when you install with one of these optional extras.
```bash
crawl4ai-download-models
```
## Option 2: Using Docker for Local Server
Crawl4AI can be run as a local server using Docker. The Dockerfile supports different installation options to cater to various use cases. Here's how you can build and run the Docker image:
### Default Installation
The default installation includes the basic Crawl4AI package without additional dependencies or pre-downloaded models.
```bash
# For Mac users (M1/M2)
docker build --platform linux/amd64 -t crawl4ai .
# For other users
docker build -t crawl4ai .
# Run the container
docker run -d -p 8000:80 crawl4ai
```
### Full Installation (All Dependencies and Models)
This option installs all dependencies and downloads the models.
```bash
# For Mac users (M1/M2)
docker build --platform linux/amd64 --build-arg INSTALL_OPTION=all -t crawl4ai:all .
# For other users
docker build --build-arg INSTALL_OPTION=all -t crawl4ai:all .
# Run the container
docker run -d -p 8000:80 crawl4ai:all
```
### Torch Installation
This option installs torch-related dependencies and downloads the models.
```bash
# For Mac users (M1/M2)
docker build --platform linux/amd64 --build-arg INSTALL_OPTION=torch -t crawl4ai:torch .
# For other users
docker build --build-arg INSTALL_OPTION=torch -t crawl4ai:torch .
# Run the container
docker run -d -p 8000:80 crawl4ai:torch
```
### Transformer Installation
This option installs transformer-related dependencies and downloads the models.
```bash
# For Mac users (M1/M2)
docker build --platform linux/amd64 --build-arg INSTALL_OPTION=transformer -t crawl4ai:transformer .
# For other users
docker build --build-arg INSTALL_OPTION=transformer -t crawl4ai:transformer .
# Run the container
docker run -d -p 8000:80 crawl4ai:transformer
```
### Notes
- The `--platform linux/amd64` flag is necessary for Mac users with M1/M2 chips to ensure compatibility.
- The `-t` flag tags the image with a name (and optionally a tag in the 'name:tag' format).
- The `-d` flag runs the container in detached mode.
- The `-p 8000:80` flag maps port 8000 on the host to port 80 in the container.
Choose the installation option that best suits your needs. The default installation is suitable for basic usage, while the other options provide additional capabilities for more advanced use cases.
## Option 3: Using the Pre-built Image from Docker Hub
You can use pre-built Crawl4AI images from Docker Hub, which are available for all platforms (Mac, Linux, Windows). We have official images as well as a community-contributed image (Thanks to https://github.com/FractalMind):
### Default Installation
```bash
# Pull the image
docker pull unclecode/crawl4ai:latest
# Run the container
docker run -d -p 8000:80 unclecode/crawl4ai:latest
```
### Community-Contributed Image
A stable version of Crawl4AI is also available, created and maintained by a community member:
```bash
# Pull the community-contributed image
docker pull ryser007/crawl4ai:stable
# Run the container
docker run -d -p 8000:80 ryser007/crawl4ai:stable
```
We'd like to express our gratitude to GitHub user [@FractalMind](https://github.com/FractalMind) for creating and maintaining this stable version of the Crawl4AI Docker image. Community contributions like this are invaluable to the project.
### Testing the Installation
After running the container, you can test if it's working correctly:
- On Mac and Linux:
```bash
curl http://localhost:8000
```
- On Windows (PowerShell):
```powershell
Invoke-WebRequest -Uri http://localhost:8000
```
Or open a web browser and navigate to http://localhost:8000
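Once the server is up, you can send it a crawl request. Here is a minimal sketch in Python, using the same request shape as the hosted API shown in the Research Assistant example (adjust the payload if your server version differs):

```python
import requests

# Same payload fields as the hosted /crawl endpoint
data = {
    "urls": ["https://www.nbcnews.com/business"],
    "include_raw_html": True,
    "word_count_threshold": 10,
    "extraction_strategy": "NoExtractionStrategy",
    "chunking_strategy": "RegexChunking"
}
response = requests.post("http://localhost:8000/crawl", json=data)
print(response.json()["results"][0]["markdown"])
```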

View File

@@ -1,28 +0,0 @@
<h1>Try Our Library</h1>
<form id="apiForm">
<label for="inputField">Enter some input:</label>
<input type="text" id="inputField" name="inputField" required>
<button type="submit">Submit</button>
</form>
<div id="result"></div>
<script>
document.getElementById('apiForm').addEventListener('submit', function(event) {
    event.preventDefault();
    const input = document.getElementById('inputField').value;
    fetch('https://your-api-endpoint.com/api', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({ input: input })
    })
    .then(response => response.json())
    .then(data => {
        document.getElementById('result').textContent = JSON.stringify(data);
    })
    .catch(error => {
        document.getElementById('result').textContent = 'Error: ' + error;
    });
});
</script>

View File

@@ -1,29 +0,0 @@
# Introduction
Welcome to the documentation for Crawl4AI v0.2.5! 🕷️🤖
Crawl4AI is designed to simplify the process of crawling web pages and extracting useful information for large language models (LLMs) and AI applications. Whether you're using it as a REST API, a Python library, or through a Google Colab notebook, Crawl4AI provides powerful features to make web data extraction easier and more efficient.
## Key Features ✨
- **🆓 Completely Free and Open-Source**: Crawl4AI is free to use and open-source, making it accessible for everyone.
- **🤖 LLM-Friendly Output Formats**: Supports JSON, cleaned HTML, and markdown formats.
- **🌍 Concurrent Crawling**: Crawl multiple URLs simultaneously to save time.
- **🎨 Media Extraction**: Extract all media tags including images, audio, and video.
- **🔗 Link Extraction**: Extract all external and internal links from web pages.
- **📚 Metadata Extraction**: Extract metadata from web pages for additional context.
- **🔄 Custom Hooks**: Define custom hooks for authentication, headers, and page modifications before crawling.
- **🕵️ User Agent Support**: Customize the user agent for HTTP requests.
- **🖼️ Screenshot Capability**: Take screenshots of web pages during crawling.
- **📜 JavaScript Execution**: Execute custom JavaScript before crawling.
- **📚 Advanced Chunking and Extraction Strategies**: Utilize topic-based, regex, sentence chunking, cosine clustering, and LLM extraction strategies.
- **🎯 CSS Selector Support**: Extract specific content using CSS selectors.
- **📝 Instruction/Keyword Refinement**: Pass instructions or keywords to refine the extraction process.
Check the [Changelog](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md) for more details.
## Power and Simplicity of Crawl4AI 🚀
Crawl4AI provides an easy way to crawl and extract data from web pages without installing any library. You can use the REST API on our server or run the local server on your machine. For more advanced control, use the Python library to customize your crawling and extraction strategies.
Explore the documentation to learn more about the features, installation process, usage examples, and how to contribute to Crawl4AI. Let's make the web more accessible and useful for AI applications! 💪🌐🤖

View File

@@ -1,204 +0,0 @@
# Quick Start Guide 🚀
Welcome to the Crawl4AI Quickstart Guide! In this tutorial, we'll walk you through the basic usage of Crawl4AI with a friendly and humorous tone. We'll cover everything from basic usage to advanced features like chunking and extraction strategies. Let's dive in! 🌟
## Getting Started 🛠️
First, let's create an instance of `WebCrawler` and call the `warmup()` function. This might take a few seconds the first time you run Crawl4AI, as it loads the required model files.
```python
from crawl4ai import WebCrawler
def create_crawler():
    crawler = WebCrawler(verbose=True)
    crawler.warmup()
    return crawler

crawler = create_crawler()
```
### Basic Usage
Simply provide a URL and let Crawl4AI do the magic!
```python
result = crawler.run(url="https://www.nbcnews.com/business")
print(f"Basic crawl result: {result}")
```
### Taking Screenshots 📸
Let's take a screenshot of the page!
```python
result = crawler.run(url="https://www.nbcnews.com/business", screenshot=True)
with open("screenshot.png", "wb") as f:
f.write(base64.b64decode(result.screenshot))
print("Screenshot saved to 'screenshot.png'!")
```
### Understanding Parameters 🧠
By default, Crawl4AI caches the results of your crawls. This means that subsequent crawls of the same URL will be much faster! Let's see this in action.
First crawl (caches the result):
```python
result = crawler.run(url="https://www.nbcnews.com/business")
print(f"First crawl result: {result}")
```
Force a fresh crawl that bypasses the cache:
```python
result = crawler.run(url="https://www.nbcnews.com/business", bypass_cache=True)
print(f"Second crawl result: {result}")
```
### Adding a Chunking Strategy 🧩
Let's add a chunking strategy: `RegexChunking`! This strategy splits the text based on a given regex pattern.
```python
from crawl4ai.chunking_strategy import RegexChunking
result = crawler.run(
    url="https://www.nbcnews.com/business",
    chunking_strategy=RegexChunking(patterns=["\n\n"])
)
print(f"RegexChunking result: {result}")
```
You can also use `NlpSentenceChunking` which splits the text into sentences using NLP techniques.
```python
from crawl4ai.chunking_strategy import NlpSentenceChunking
result = crawler.run(
    url="https://www.nbcnews.com/business",
    chunking_strategy=NlpSentenceChunking()
)
print(f"NlpSentenceChunking result: {result}")
```
### Adding an Extraction Strategy 🧠
Let's get smarter with an extraction strategy: `CosineStrategy`! This strategy uses cosine similarity to extract semantically similar blocks of text.
```python
from crawl4ai.extraction_strategy import CosineStrategy
result = crawler.run(
    url="https://www.nbcnews.com/business",
    extraction_strategy=CosineStrategy(
        word_count_threshold=10,
        max_dist=0.2,
        linkage_method="ward",
        top_k=3
    )
)
print(f"CosineStrategy result: {result}")
```
You can also pass other parameters like `semantic_filter` to extract specific content.
```python
result = crawler.run(
    url="https://www.nbcnews.com/business",
    extraction_strategy=CosineStrategy(
        semantic_filter="inflation rent prices"
    )
)
print(f"CosineStrategy result with semantic filter: {result}")
```
### Using LLMExtractionStrategy 🤖
Time to bring in the big guns: `LLMExtractionStrategy` without instructions! This strategy uses a large language model to extract relevant information from the web page.
```python
from crawl4ai.extraction_strategy import LLMExtractionStrategy
import os
result = crawler.run(
    url="https://www.nbcnews.com/business",
    extraction_strategy=LLMExtractionStrategy(
        provider="openai/gpt-4o",
        api_token=os.getenv('OPENAI_API_KEY')
    )
)
print(f"LLMExtractionStrategy (no instructions) result: {result}")
```
You can also provide specific instructions to guide the extraction.
```python
result = crawler.run(
    url="https://www.nbcnews.com/business",
    extraction_strategy=LLMExtractionStrategy(
        provider="openai/gpt-4o",
        api_token=os.getenv('OPENAI_API_KEY'),
        instruction="I am interested in only financial news"
    )
)
print(f"LLMExtractionStrategy (with instructions) result: {result}")
```
### Targeted Extraction 🎯
Let's use a CSS selector to extract only H2 tags!
```python
result = crawler.run(
    url="https://www.nbcnews.com/business",
    css_selector="h2"
)
print(f"CSS Selector (H2 tags) result: {result}")
```
### Interactive Extraction 🖱️
Passing JavaScript code to click the 'Load More' button!
```python
js_code = """
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
loadMoreButton && loadMoreButton.click();
"""
result = crawler.run(
    url="https://www.nbcnews.com/business",
    js=js_code
)
print(f"JavaScript Code (Load More button) result: {result}")
```
### Using Crawler Hooks 🔗
Let's see how we can customize the crawler using hooks!
```python
import time
from crawl4ai.web_crawler import WebCrawler
from crawl4ai.crawler_strategy import *
def delay(driver):
    print("Delaying for 5 seconds...")
    time.sleep(5)
    print("Resuming...")

def create_crawler():
    crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
    crawler_strategy.set_hook('after_get_url', delay)
    crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
    crawler.warmup()
    return crawler

crawler = create_crawler()
result = crawler.run(url="https://www.nbcnews.com/business", bypass_cache=True)
```
Check [Hooks](examples/hooks_auth.md) for more examples.
## Congratulations! 🎉
You've made it through the Crawl4AI Quickstart Guide! Now go forth and crawl the web like a pro! 🕸️

View File

@@ -1,141 +0,0 @@
# Core Classes and Functions
## Overview
In this section, we will delve into the core classes and functions that make up the Crawl4AI library. This includes the `WebCrawler` class, various `CrawlerStrategy` classes, `ChunkingStrategy` classes, and `ExtractionStrategy` classes. Understanding these core components will help you leverage the full power of Crawl4AI for your web crawling and data extraction needs.
## WebCrawler Class
The `WebCrawler` class is the main class you'll interact with. It provides the interface for crawling web pages and extracting data.
### Initialization
```python
from crawl4ai import WebCrawler
# Create an instance of WebCrawler
crawler = WebCrawler()
```
### Methods
- **`warmup()`**: Prepares the crawler for use, such as loading necessary models.
- **`run(url: str, **kwargs)`**: Runs the crawler on the specified URL with optional parameters for customization.
```python
crawler.warmup()
result = crawler.run(url="https://www.nbcnews.com/business")
print(result)
```
## CrawlerStrategy Classes
The `CrawlerStrategy` classes define how the web crawling is executed. The base class is `CrawlerStrategy`, which is extended by specific implementations like `LocalSeleniumCrawlerStrategy`.
### CrawlerStrategy Base Class
An abstract base class that defines the interface for different crawler strategies.
```python
from abc import ABC, abstractmethod
from typing import Callable

class CrawlerStrategy(ABC):
    @abstractmethod
    def crawl(self, url: str, **kwargs) -> str:
        pass

    @abstractmethod
    def take_screenshot(self, save_path: str):
        pass

    @abstractmethod
    def update_user_agent(self, user_agent: str):
        pass

    @abstractmethod
    def set_hook(self, hook_type: str, hook: Callable):
        pass
```
### LocalSeleniumCrawlerStrategy Class
A concrete implementation of `CrawlerStrategy` that uses Selenium to crawl web pages.
#### Initialization
```python
from crawl4ai.crawler_strategy import LocalSeleniumCrawlerStrategy
strategy = LocalSeleniumCrawlerStrategy(js_code=["console.log('Hello, world!');"])
```
#### Methods
- **`crawl(url: str, **kwargs)`**: Crawls the specified URL.
- **`take_screenshot(save_path: str)`**: Takes a screenshot of the current page.
- **`update_user_agent(user_agent: str)`**: Updates the user agent for the browser.
- **`set_hook(hook_type: str, hook: Callable)`**: Sets a hook for various events.
```python
result = strategy.crawl("https://www.example.com")
strategy.take_screenshot("screenshot.png")
strategy.update_user_agent("Mozilla/5.0")
strategy.set_hook("before_get_url", lambda driver: print("About to get URL"))
```
## ChunkingStrategy Classes
The `ChunkingStrategy` classes define how the text from a web page is divided into chunks. Here are a few examples:
### RegexChunking Class
Splits text using regular expressions.
```python
from crawl4ai.chunking_strategy import RegexChunking
chunker = RegexChunking(patterns=[r'\n\n'])
chunks = chunker.chunk("This is a sample text. It will be split into chunks.")
```
### NlpSentenceChunking Class
Uses NLP to split text into sentences.
```python
from crawl4ai.chunking_strategy import NlpSentenceChunking
chunker = NlpSentenceChunking()
chunks = chunker.chunk("This is a sample text. It will be split into sentences.")
```
## ExtractionStrategy Classes
The `ExtractionStrategy` classes define how meaningful content is extracted from the chunks. Here are a few examples:
### CosineStrategy Class
Clusters text chunks based on cosine similarity.
```python
from crawl4ai.extraction_strategy import CosineStrategy
extractor = CosineStrategy(semantic_filter="finance", word_count_threshold=10)
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
```
### LLMExtractionStrategy Class
Uses a Language Model to extract meaningful blocks from HTML.
```python
from crawl4ai.extraction_strategy import LLMExtractionStrategy
extractor = LLMExtractionStrategy(provider='openai', api_token='your_api_token', instruction='Extract only news about AI.')
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
```
## Conclusion
By understanding these core classes and functions, you can customize and extend Crawl4AI to suit your specific web crawling and data extraction needs. Happy crawling! 🕷️🤖

View File

@@ -1,338 +0,0 @@
# Detailed API Documentation
## Overview
This section provides comprehensive documentation for the Crawl4AI API, covering all classes, methods, and their parameters. This guide will help you understand how to utilize the API to its full potential, enabling efficient web crawling and data extraction.
## WebCrawler Class
The `WebCrawler` class is the primary interface for crawling web pages and extracting data.
### Initialization
```python
from crawl4ai import WebCrawler
crawler = WebCrawler()
```
### Methods
#### `warmup()`
Prepares the crawler for use, such as loading necessary models.
```python
crawler.warmup()
```
#### `run(url: str, **kwargs) -> CrawlResult`
Crawls the specified URL and returns the result.
- **Parameters:**
- `url` (str): The URL to crawl.
- `**kwargs`: Additional parameters for customization.
- **Returns:**
- `CrawlResult`: An object containing the crawl result.
- **Example:**
```python
result = crawler.run(url="https://www.nbcnews.com/business")
print(result)
```
### CrawlResult Class
Represents the result of a crawl operation.
- **Attributes:**
- `url` (str): The URL of the crawled page.
- `html` (str): The raw HTML of the page.
- `success` (bool): Whether the crawl was successful.
- `cleaned_html` (Optional[str]): The cleaned HTML.
- `media` (Dict[str, List[Dict]]): Media tags in the page (images, audio, video).
- `links` (Dict[str, List[Dict]]): Links in the page (external, internal).
- `screenshot` (Optional[str]): Base64 encoded screenshot.
- `markdown` (Optional[str]): Extracted content in Markdown format.
- `extracted_content` (Optional[str]): Extracted meaningful content.
- `metadata` (Optional[dict]): Metadata from the page.
- `error_message` (Optional[str]): Error message if any.
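To make these attributes concrete, here is a minimal sketch of inspecting a `CrawlResult` after a crawl (field access follows the attribute list above; the URL is only an example):
```python
from crawl4ai import WebCrawler

crawler = WebCrawler()
crawler.warmup()
result = crawler.run(url="https://www.nbcnews.com/business")

if result.success:
    print(f"Crawled: {result.url}")
    print(f"Markdown length: {len(result.markdown or '')}")
    # `links` and `media` are dicts of lists, keyed by category
    print(f"Internal links: {len(result.links.get('internal', []))}")
    print(f"Images found: {len(result.media.get('images', []))}")
else:
    print(f"Crawl failed: {result.error_message}")
```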
## CrawlerStrategy Classes
The `CrawlerStrategy` classes define how the web crawling is executed.
### CrawlerStrategy Base Class
An abstract base class for different crawler strategies.
#### Methods
- **`crawl(url: str, **kwargs) -> str`**: Crawls the specified URL.
- **`take_screenshot(save_path: str)`**: Takes a screenshot of the current page.
- **`update_user_agent(user_agent: str)`**: Updates the user agent for the browser.
- **`set_hook(hook_type: str, hook: Callable)`**: Sets a hook for various events.
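To illustrate the contract, here is a rough sketch of a custom strategy implementing this interface. The `requests`-based fetch and the class name are purely illustrative, not part of the library:
```python
import requests  # illustrative dependency, not required by Crawl4AI
from typing import Callable
from crawl4ai.crawler_strategy import CrawlerStrategy

class SimpleHttpCrawlerStrategy(CrawlerStrategy):
    """Hypothetical strategy that fetches pages over plain HTTP (no browser)."""
    def __init__(self):
        self.hooks = {}
        self.user_agent = "Mozilla/5.0"

    def crawl(self, url: str, **kwargs) -> str:
        # Fetch the raw HTML without driving a browser
        response = requests.get(url, headers={"User-Agent": self.user_agent})
        return response.text

    def take_screenshot(self, save_path: str):
        raise NotImplementedError("No browser available for screenshots")

    def update_user_agent(self, user_agent: str):
        self.user_agent = user_agent

    def set_hook(self, hook_type: str, hook: Callable):
        self.hooks[hook_type] = hook
```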
### LocalSeleniumCrawlerStrategy Class
Uses Selenium to crawl web pages.
#### Initialization
```python
from crawl4ai.crawler_strategy import LocalSeleniumCrawlerStrategy
strategy = LocalSeleniumCrawlerStrategy(js_code=["console.log('Hello, world!');"])
```
#### Methods
- **`crawl(url: str, **kwargs)`**: Crawls the specified URL.
- **`take_screenshot(save_path: str)`**: Takes a screenshot of the current page.
- **`update_user_agent(user_agent: str)`**: Updates the user agent for the browser.
- **`set_hook(hook_type: str, hook: Callable)`**: Sets a hook for various events.
#### Example
```python
result = strategy.crawl("https://www.example.com")
strategy.take_screenshot("screenshot.png")
strategy.update_user_agent("Mozilla/5.0")
strategy.set_hook("before_get_url", lambda driver: print("About to get URL"))
```
## ChunkingStrategy Classes
The `ChunkingStrategy` classes define how the text from a web page is divided into chunks.
### RegexChunking Class
Splits text using regular expressions.
#### Initialization
```python
from crawl4ai.chunking_strategy import RegexChunking
chunker = RegexChunking(patterns=[r'\n\n'])
```
#### Methods
- **`chunk(text: str) -> List[str]`**: Splits the text into chunks.
#### Example
```python
chunks = chunker.chunk("This is a sample text. It will be split into chunks.")
```
### NlpSentenceChunking Class
Uses NLP to split text into sentences.
#### Initialization
```python
from crawl4ai.chunking_strategy import NlpSentenceChunking
chunker = NlpSentenceChunking()
```
#### Methods
- **`chunk(text: str) -> List[str]`**: Splits the text into sentences.
#### Example
```python
chunks = chunker.chunk("This is a sample text. It will be split into sentences.")
```
### TopicSegmentationChunking Class
Uses the TextTiling algorithm to segment text into topics.
#### Initialization
```python
from crawl4ai.chunking_strategy import TopicSegmentationChunking
chunker = TopicSegmentationChunking(num_keywords=3)
```
#### Methods
- **`chunk(text: str) -> List[str]`**: Splits the text into topic-based segments.
#### Example
```python
chunks = chunker.chunk("This is a sample text. It will be split into topic-based segments.")
```
### FixedLengthWordChunking Class
Splits text into chunks of fixed length based on the number of words.
#### Initialization
```python
from crawl4ai.chunking_strategy import FixedLengthWordChunking
chunker = FixedLengthWordChunking(chunk_size=100)
```
#### Methods
- **`chunk(text: str) -> List[str]`**: Splits the text into fixed-length word chunks.
#### Example
```python
chunks = chunker.chunk("This is a sample text. It will be split into fixed-length word chunks.")
```
### SlidingWindowChunking Class
Uses a sliding window approach to chunk text.
#### Initialization
```python
from crawl4ai.chunking_strategy import SlidingWindowChunking
chunker = SlidingWindowChunking(window_size=100, step=50)
```
#### Methods
- **`chunk(text: str) -> List[str]`**: Splits the text using a sliding window approach.
#### Example
```python
chunks = chunker.chunk("This is a sample text. It will be split using a sliding window approach.")
```
## ExtractionStrategy Classes
The `ExtractionStrategy` classes define how meaningful content is extracted from the chunks.
### NoExtractionStrategy Class
Returns the entire HTML content without any modification.
#### Initialization
```python
from crawl4ai.extraction_strategy import NoExtractionStrategy
extractor = NoExtractionStrategy()
```
#### Methods
- **`extract(url: str, html: str) -> str`**: Returns the HTML content.
#### Example
```python
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
```
### LLMExtractionStrategy Class
Uses a Language Model to extract meaningful blocks from HTML.
#### Initialization
```python
from crawl4ai.extraction_strategy import LLMExtractionStrategy
extractor = LLMExtractionStrategy(provider='openai', api_token='your_api_token', instruction='Extract only news about AI.')
```
#### Methods
- **`extract(url: str, html: str) -> str`**: Extracts meaningful content using the LLM.
#### Example
```python
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
```
### CosineStrategy Class
Clusters text chunks based on cosine similarity.
#### Initialization
```python
from crawl4ai.extraction_strategy import CosineStrategy
extractor = CosineStrategy(semantic_filter="finance", word_count_threshold=10)
```
#### Methods
- **`extract(url: str, html: str) -> str`**: Extracts clusters of text based on cosine similarity.
#### Example
```python
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
```
### TopicExtractionStrategy Class
Uses the TextTiling algorithm to segment HTML content into topics and extract keywords.
#### Initialization
```python
from crawl4ai.extraction_strategy import TopicExtractionStrategy
extractor = TopicExtractionStrategy(num_keywords=3)
```
#### Methods
- **`extract(url: str, html: str) -> str`**: Extracts topic-based segments and keywords.
#### Example
```python
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
```
## Parameters
Here are the common parameters used across various classes and methods:
- **`url`** (str): The URL to crawl.
- **`html`** (str): The HTML content of the page.
- **`user_agent`** (str): The user agent for the HTTP requests.
- **`patterns`** (list): A list of regular expression patterns for chunking.
- **`num_keywords`** (int): Number of keywords for topic extraction.
- **`chunk_size`** (int): Number of words in each chunk.
- **`window_size`** (int): Number of words in the sliding window.
- **`step`** (int): Step size for the sliding window.
- **`semantic_filter`** (str): Keywords for filtering relevant documents.
- **`word_count_threshold`** (int): Minimum number of words per cluster.
- **`max_dist`** (float): Maximum cophenetic distance for clustering.
- **`linkage_method`** (str): Linkage method for hierarchical clustering.
- **`top_k`** (int): Number of top categories to extract.
- **`provider`** (str): Provider for language model completions.
- **`api_token`** (str): API token for the provider.
- **`instruction`** (str): Instruction to guide the LLM extraction.
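To see how several of these parameters fit together, here is a sketch of a single `run` call combining clustering and chunking options (the values are arbitrary examples, not recommendations):
```python
from crawl4ai import WebCrawler
from crawl4ai.chunking_strategy import RegexChunking
from crawl4ai.extraction_strategy import CosineStrategy

crawler = WebCrawler()
crawler.warmup()
result = crawler.run(
    url="https://www.nbcnews.com/business",
    chunking_strategy=RegexChunking(patterns=[r'\n\n']),  # patterns
    extraction_strategy=CosineStrategy(
        semantic_filter="finance",    # keywords for filtering relevant content
        word_count_threshold=10,      # minimum words per cluster
        max_dist=0.2,                 # maximum cophenetic distance
        linkage_method="ward",        # hierarchical clustering linkage
        top_k=3,                      # number of top categories to extract
    ),
)
print(result.extracted_content)
```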
## Conclusion
This detailed API documentation provides a thorough understanding of the classes, methods, and parameters in the Crawl4AI library. With this knowledge, you can effectively use the API to perform advanced web crawling and data extraction tasks.

Binary file not shown.

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1,6 +0,0 @@
document.addEventListener('DOMContentLoaded', (event) => {
    document.querySelectorAll('pre code').forEach((block) => {
        hljs.highlightBlock(block);
    });
});

View File

@@ -1,153 +0,0 @@
@font-face {
  font-family: "Monaco";
  font-style: normal;
  font-weight: normal;
  src: local("Monaco"), url("Monaco.woff") format("woff");
}
:root {
  --global-font-size: 16px;
  --global-line-height: 1.5em;
  --global-space: 10px;
  --font-stack: Menlo, Monaco, Lucida Console, Liberation Mono, DejaVu Sans Mono, Bitstream Vera Sans Mono,
    Courier New, monospace, serif;
  --font-stack: dm, Monaco, Courier New, monospace, serif;
  --mono-font-stack: Menlo, Monaco, Lucida Console, Liberation Mono, DejaVu Sans Mono, Bitstream Vera Sans Mono,
    Courier New, monospace, serif;
  --background-color: #151515; /* Dark background */
  --font-color: #eaeaea; /* Light font color for contrast */
  --invert-font-color: #151515; /* Dark color for inverted elements */
  --primary-color: #1a95e0; /* Primary color can remain the same or be adjusted for better contrast */
  --secondary-color: #727578; /* Secondary color for less important text */
  --error-color: #ff5555; /* Bright color for errors */
  --progress-bar-background: #444; /* Darker background for progress bar */
  --progress-bar-fill: #1a95e0; /* Bright color for progress bar fill */
  --code-bg-color: #1e1e1e; /* Darker background for code blocks */
  --input-style: solid; /* Keeping input style solid */
  --block-background-color: #202020; /* Darker background for block elements */
  --global-font-color: #eaeaea; /* Light font color for global elements */
  --background-color: #222225;
  --background-color: #070708;
  --page-width: 70em;
  --font-color: #e8e9ed;
  --invert-font-color: #222225;
  --secondary-color: #a3abba;
  --secondary-color: #d5cec0;
  --tertiary-color: #a3abba;
  --primary-color: #09b5a5; /* Updated to the brand color */
  --primary-color: #50ffff; /* Updated to the brand color */
  --error-color: #ff3c74;
  --progress-bar-background: #3f3f44;
  --progress-bar-fill: #09b5a5; /* Updated to the brand color */
  --code-bg-color: #3f3f44;
  --input-style: solid;
  --display-h1-decoration: none;
}
/* body {
  background-color: var(--background-color);
  color: var(--font-color);
}
a {
  color: var(--primary-color);
}
a:hover {
  background-color: var(--primary-color);
  color: var(--invert-font-color);
}
blockquote::after {
  color: #444;
}
pre, code {
  background-color: var(--code-bg-color);
  color: var(--font-color);
}
.terminal-nav:first-child {
  border-bottom: 1px dashed var(--secondary-color);
} */
.terminal-mkdocs-main-content {
  line-height: var(--global-line-height);
}
strong,
.highlight {
  /* background: url(//s2.svgbox.net/pen-brushes.svg?ic=brush-1&color=50ffff); */
  background-color: #50ffff33;
}
.terminal-card > header {
  color: var(--font-color);
  text-align: center;
  background-color: var(--progress-bar-background);
  padding: 0.3em 0.5em;
}
.btn.btn-sm {
  color: var(--font-color);
  padding: 0.2em 0.5em;
  font-size: 0.8em;
}
.loading-message {
  display: none;
  margin-top: 20px;
}
.response-section {
  display: none;
  padding-top: 20px;
}
.tabs {
  display: flex;
  flex-direction: column;
}
.tab-list {
  display: flex;
  padding: 0;
  margin: 0;
  list-style-type: none;
  border-bottom: 1px solid var(--font-color);
}
.tab-item {
  cursor: pointer;
  padding: 10px;
  border: 1px solid var(--font-color);
  margin-right: -1px;
  border-bottom: none;
}
.tab-item:hover,
.tab-item:focus,
.tab-item:active {
  background-color: var(--progress-bar-background);
}
.tab-content {
  display: none;
  border: 1px solid var(--font-color);
  border-top: none;
}
.tab-content:first-of-type {
  display: block;
}
.tab-content header {
  padding: 0.5em;
  display: flex;
  justify-content: end;
  align-items: center;
  background-color: var(--progress-bar-background);
}
.tab-content pre {
  margin: 0;
  max-height: 300px;
  overflow: auto;
  border: none;
}

View File

@@ -1,102 +0,0 @@
# Changelog
## [v0.2.77] - 2024-08-04
Significant improvements in text processing and performance:
- 🚀 **Dependency reduction**: Removed dependency on spaCy model for text chunk labeling in cosine extraction strategy.
- 🤖 **Transformer upgrade**: Implemented text sequence classification using a transformer model for labeling text chunks.
- ⚡ **Performance enhancement**: Improved model loading speed due to removal of spaCy dependency.
- 🔧 **Future-proofing**: Laid groundwork for potential complete removal of spaCy dependency in future versions.
These changes address issue #68 and provide a foundation for faster, more efficient text processing in Crawl4AI.
## [v0.2.76] - 2024-08-02
Major improvements in functionality, performance, and cross-platform compatibility! 🚀
- 🐳 **Docker enhancements**: Significantly improved Dockerfile for easy installation on Linux, Mac, and Windows.
- 🌐 **Official Docker Hub image**: Launched our first official image on Docker Hub for streamlined deployment.
- 🔧 **Selenium upgrade**: Removed dependency on ChromeDriver, now using Selenium's built-in capabilities for better compatibility.
- 🖼️ **Image description**: Implemented ability to generate textual descriptions for extracted images from web pages.
- ⚡ **Performance boost**: Various improvements to enhance overall speed and performance.
A big shoutout to our amazing community contributors:
- [@aravindkarnam](https://github.com/aravindkarnam) for developing the textual description extraction feature.
- [@FractalMind](https://github.com/FractalMind) for creating the first official Docker Hub image and fixing Dockerfile errors.
- [@ketonkss4](https://github.com/ketonkss4) for identifying Selenium's new capabilities, helping us reduce dependencies.
Your contributions are driving Crawl4AI forward! 🙌
## [v0.2.75] - 2024-07-19
Minor improvements for a more maintainable codebase:
- 🔄 Fixed typos in `chunking_strategy.py` and `crawler_strategy.py` to improve code readability
- 🔄 Removed `.test_pads/` directory from `.gitignore` to keep our repository clean and organized
These changes may seem small, but they contribute to a more stable and sustainable codebase. By fixing typos and updating our `.gitignore` settings, we're ensuring that our code is easier to maintain and scale in the long run.
## v0.2.74 - 2024-07-08
A slew of exciting updates to improve the crawler's stability and robustness! 🎉
- 💻 **UTF encoding fix**: Resolved the Windows "charmap" error by adding UTF encoding.
- 🛡️ **Error handling**: Implemented MaxRetryError exception handling in LocalSeleniumCrawlerStrategy.
- 🧹 **Input sanitization**: Improved input sanitization and handled encoding issues in LLMExtractionStrategy.
- 🚮 **Database cleanup**: Removed existing database file and initialized a new one.
## [v0.2.73] - 2024-07-03
💡 In this release, we've bumped the version to v0.2.73 and refreshed our documentation to ensure you have the best experience with our project.
* Added support for websites that require "with-head" mode, crawling with a visible browser head.
* Fixed installation issues in setup.py and the Dockerfile.
* Resolved multiple reported issues.
## [v0.2.72] - 2024-06-30
This release brings exciting updates and improvements to our project! 🎉
* 📚 **Documentation Updates**: Our documentation has been revamped to reflect the latest changes and additions.
* 🚀 **New Modes in setup.py**: We've added support for three new modes in setup.py: default, torch, and transformers. This enhances the project's flexibility and usability.
* 🐳 **Docker File Updates**: The Docker file has been updated to ensure seamless compatibility with the new modes and improvements.
* 🕷️ **Temporary Solution for Headless Crawling**: We've implemented a temporary solution to overcome issues with crawling websites in headless mode.
These changes aim to improve the overall user experience, provide more flexibility, and enhance the project's performance. We're thrilled to share these updates with you and look forward to continuing to evolve and improve our project!
## [0.2.71] - 2024-06-26
**Improved Error Handling and Performance** 🚧
* 🚫 Refactored `crawler_strategy.py` to handle exceptions and provide better error messages, making it more robust and reliable.
* 💻 Optimized the `get_content_of_website_optimized` function in `utils.py` for improved performance, reducing potential bottlenecks.
* 💻 Updated `utils.py` with the latest changes, ensuring consistency and accuracy.
* 🚫 Migrated to `ChromeDriverManager` to resolve Chrome driver download issues, providing a smoother user experience.
These changes focus on refining the existing codebase, resulting in a more stable, efficient, and user-friendly experience. With these improvements, you can expect fewer errors and better performance in the crawler strategy and utility functions.
## [0.2.71] - 2024-06-25
### Fixed
- Made the extraction function twice as fast.
## [0.2.6] - 2024-06-22
### Fixed
- Fix issue #19: Update Dockerfile to ensure compatibility across multiple platforms.
## [0.2.5] - 2024-06-18
### Added
- Added five important hooks to the crawler:
- on_driver_created: Called when the driver is ready for initializations.
- before_get_url: Called right before Selenium fetches the URL.
- after_get_url: Called after Selenium fetches the URL.
- before_return_html: Called when the data is parsed and ready.
- on_user_agent_updated: Called when the user changes the user_agent, causing the driver to reinitialize.
- Added an example in `quickstart.py` in the example folder under the docs.
- Enhancement issue #24: Replaced inline HTML tags (e.g., DEL, INS, SUB, ABBR) with textual format for better context handling in LLM.
- Maintaining the semantic context of inline tags (e.g., abbreviation, DEL, INS) for improved LLM-friendliness.
- Updated Dockerfile to ensure compatibility across multiple platforms (Hopefully!).
## [0.2.4] - 2024-06-17
### Fixed
- Fix issue #22: Use MD5 hash for caching HTML files to handle long URLs

View File

@@ -1,25 +0,0 @@
# Contact
If you have any questions, suggestions, or feedback, please feel free to reach out to us:
- GitHub: [unclecode](https://github.com/unclecode)
- Twitter: [@unclecode](https://twitter.com/unclecode)
- Website: [crawl4ai.com](https://crawl4ai.com)
## Contributing 🤝
We welcome contributions from the open-source community to help improve Crawl4AI and make it even more valuable for AI enthusiasts and developers. To contribute, please follow these steps:
1. Fork the repository.
2. Create a new branch for your feature or bug fix.
3. Make your changes and commit them with descriptive messages.
4. Push your changes to your forked repository.
5. Submit a pull request to the main repository.
For more information on contributing, please see our [contribution guidelines](https://github.com/unclecode/crawl4ai/blob/main/CONTRIBUTING.md).
## License 📄
Crawl4AI is released under the [Apache 2.0 License](https://github.com/unclecode/crawl4ai/blob/main/LICENSE).
Let's work together to make the web more accessible and useful for AI applications! 💪🌐🤖

View File

@@ -1,231 +0,0 @@
# Interactive Demo for Crawler
<div id="demo">
<form id="crawlForm" class="terminal-form">
<fieldset>
<legend>Enter URL and Options</legend>
<div class="form-group">
<label for="url">Enter URL:</label>
<input type="text" id="url" name="url" required>
</div>
<div class="form-group">
<label for="screenshot">Get Screenshot:</label>
<input type="checkbox" id="screenshot" name="screenshot">
</div>
<div class="form-group">
<button class="btn btn-default" type="submit">Submit</button>
</div>
</fieldset>
</form>
<div id="loading" class="loading-message">
<div class="terminal-alert terminal-alert-primary">Loading... Please wait.</div>
</div>
<section id="response" class="response-section">
<h2>Response</h2>
<div class="tabs">
<ul class="tab-list">
<li class="tab-item" onclick="showTab('markdown')">Markdown</li>
<li class="tab-item" onclick="showTab('cleanedHtml')">Cleaned HTML</li>
<li class="tab-item" onclick="showTab('media')">Media</li>
<li class="tab-item" onclick="showTab('extractedContent')">Extracted Content</li>
<li class="tab-item" onclick="showTab('screenshot')">Screenshot</li>
<li class="tab-item" onclick="showTab('pythonCode')">Python Code</li>
</ul>
<div class="tab-content" id="tab-markdown">
<header>
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('markdownContent')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('markdownContent', 'markdown.md')">Download</button>
</div>
</header>
<pre><code id="markdownContent" class="language-markdown hljs"></code></pre>
</div>
<div class="tab-content" id="tab-cleanedHtml" style="display: none;">
<header >
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('cleanedHtmlContent')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('cleanedHtmlContent', 'cleaned.html')">Download</button>
</div>
</header>
<pre><code id="cleanedHtmlContent" class="language-html hljs"></code></pre>
</div>
<div class="tab-content" id="tab-media" style="display: none;">
<header >
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('mediaContent')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('mediaContent', 'media.json')">Download</button>
</div>
</header>
<pre><code id="mediaContent" class="language-json hljs"></code></pre>
</div>
<div class="tab-content" id="tab-extractedContent" style="display: none;">
<header >
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('extractedContentContent')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('extractedContentContent', 'extracted_content.json')">Download</button>
</div>
</header>
<pre><code id="extractedContentContent" class="language-json hljs"></code></pre>
</div>
<div class="tab-content" id="tab-screenshot" style="display: none;">
<header >
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadImage('screenshotContent', 'screenshot.png')">Download</button>
</div>
</header>
<pre><img id="screenshotContent" /></pre>
</div>
<div class="tab-content" id="tab-pythonCode" style="display: none;">
<header >
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('pythonCode')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('pythonCode', 'example.py')">Download</button>
</div>
</header>
<pre><code id="pythonCode" class="language-python hljs"></code></pre>
</div>
</div>
</section>
<div id="error" class="error-message" style="display: none; margin-top:1em;">
<div class="terminal-alert terminal-alert-error"></div>
</div>
  <script>
    function showTab(tabId) {
      const tabs = document.querySelectorAll('.tab-content');
      tabs.forEach(tab => tab.style.display = 'none');
      document.getElementById(`tab-${tabId}`).style.display = 'block';
    }
    function redo(codeBlock, codeText) {
      codeBlock.classList.remove('hljs');
      codeBlock.removeAttribute('data-highlighted');
      // Set new code and re-highlight
      codeBlock.textContent = codeText;
      hljs.highlightBlock(codeBlock);
    }
    function copyToClipboard(elementId) {
      const content = document.getElementById(elementId).textContent;
      navigator.clipboard.writeText(content).then(() => {
        alert('Copied to clipboard');
      });
    }
    function downloadContent(elementId, filename) {
      const content = document.getElementById(elementId).textContent;
      const blob = new Blob([content], { type: 'text/plain' });
      const url = window.URL.createObjectURL(blob);
      const a = document.createElement('a');
      a.style.display = 'none';
      a.href = url;
      a.download = filename;
      document.body.appendChild(a);
      a.click();
      window.URL.revokeObjectURL(url);
      document.body.removeChild(a);
    }
    function downloadImage(elementId, filename) {
      const content = document.getElementById(elementId).src;
      const a = document.createElement('a');
      a.style.display = 'none';
      a.href = content;
      a.download = filename;
      document.body.appendChild(a);
      a.click();
      document.body.removeChild(a);
    }
    document.getElementById('crawlForm').addEventListener('submit', function(event) {
      event.preventDefault();
      document.getElementById('loading').style.display = 'block';
      document.getElementById('response').style.display = 'none';
      const url = document.getElementById('url').value;
      const screenshot = document.getElementById('screenshot').checked;
      const data = {
        urls: [url],
        bypass_cache: false,
        word_count_threshold: 5,
        screenshot: screenshot
      };
      fetch('/crawl', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json'
        },
        body: JSON.stringify(data)
      })
      .then(response => {
        if (!response.ok) {
          if (response.status === 429) {
            return response.json().then(err => {
              throw Object.assign(new Error('Rate limit exceeded'), { status: 429, details: err });
            });
          }
          throw new Error('Network response was not ok');
        }
        return response.json();
      })
      .then(data => {
        data = data.results[0]; // Only one URL is requested
        document.getElementById('loading').style.display = 'none';
        document.getElementById('response').style.display = 'block';
        redo(document.getElementById('markdownContent'), data.markdown);
        redo(document.getElementById('cleanedHtmlContent'), data.cleaned_html);
        redo(document.getElementById('mediaContent'), JSON.stringify(data.media, null, 2));
        redo(document.getElementById('extractedContentContent'), data.extracted_content);
        if (screenshot) {
          document.getElementById('screenshotContent').src = `data:image/png;base64,${data.screenshot}`;
        }
        // Build the equivalent Python snippet shown in the Python Code tab
        const pythonCode = `
from crawl4ai.web_crawler import WebCrawler

crawler = WebCrawler()
crawler.warmup()

result = crawler.run(
    url='${url}',
    screenshot=${screenshot ? 'True' : 'False'}
)
print(result)
`;
        redo(document.getElementById('pythonCode'), pythonCode);
        document.getElementById('error').style.display = 'none';
      })
      .catch(error => {
        document.getElementById('loading').style.display = 'none';
        document.getElementById('error').style.display = 'block';
        let errorMessage = 'An unexpected error occurred. Please try again later.';
        if (error.status === 429) {
          const details = error.details;
          if (details.retry_after) {
            errorMessage = `Rate limit exceeded. Please wait ${parseFloat(details.retry_after).toFixed(1)} seconds before trying again.`;
          } else if (details.reset_at) {
            const resetTime = new Date(details.reset_at);
            const waitTime = Math.ceil((resetTime - new Date()) / 1000);
            errorMessage = `Rate limit exceeded. Please try again after ${waitTime} seconds.`;
          } else {
            errorMessage = `Rate limit exceeded. Please try again later.`;
          }
        } else if (error.message) {
          errorMessage = error.message;
        }
        document.querySelector('#error .terminal-alert').textContent = errorMessage;
      });
    });
  </script>
</div>

View File

@@ -1,110 +0,0 @@
# Hooks & Auth for AsyncWebCrawler
Crawl4AI's AsyncWebCrawler allows you to customize the behavior of the web crawler using hooks. Hooks are asynchronous functions that are called at specific points in the crawling process, allowing you to modify the crawler's behavior or perform additional actions. This example demonstrates how to use various hooks to customize the asynchronous crawling process.
## Example: Using Crawler Hooks with AsyncWebCrawler
Let's see how we can customize the AsyncWebCrawler using hooks! In this example, we'll:
1. Configure the browser when it's created.
2. Add custom headers before navigating to the URL.
3. Log the current URL after navigation.
4. Perform actions after JavaScript execution.
5. Log the length of the HTML before returning it.
### Hook Definitions
```python
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_crawler_strategy import AsyncPlaywrightCrawlerStrategy
from playwright.async_api import Page, Browser

async def on_browser_created(browser: Browser):
    print("[HOOK] on_browser_created")
    # Example customization: set browser viewport size
    context = await browser.new_context(viewport={'width': 1920, 'height': 1080})
    page = await context.new_page()
    # Example customization: logging in to a hypothetical website
    await page.goto('https://example.com/login')
    await page.fill('input[name="username"]', 'testuser')
    await page.fill('input[name="password"]', 'password123')
    await page.click('button[type="submit"]')
    await page.wait_for_selector('#welcome')
    # Add a custom cookie
    await context.add_cookies([{'name': 'test_cookie', 'value': 'cookie_value', 'url': 'https://example.com'}])
    await page.close()
    await context.close()

async def before_goto(page: Page):
    print("[HOOK] before_goto")
    # Example customization: add custom headers
    await page.set_extra_http_headers({'X-Test-Header': 'test'})

async def after_goto(page: Page):
    print("[HOOK] after_goto")
    # Example customization: log the URL
    print(f"Current URL: {page.url}")

async def on_execution_started(page: Page):
    print("[HOOK] on_execution_started")
    # Example customization: perform actions after JS execution
    await page.evaluate("console.log('Custom JS executed')")

async def before_return_html(page: Page, html: str):
    print("[HOOK] before_return_html")
    # Example customization: log the HTML length
    print(f"HTML length: {len(html)}")
    return page
```
### Using the Hooks with the AsyncWebCrawler
```python
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_crawler_strategy import AsyncPlaywrightCrawlerStrategy

async def main():
    print("\n🔗 Using Crawler Hooks: Let's see how we can customize the AsyncWebCrawler using hooks!")
    crawler_strategy = AsyncPlaywrightCrawlerStrategy(verbose=True)
    crawler_strategy.set_hook('on_browser_created', on_browser_created)
    crawler_strategy.set_hook('before_goto', before_goto)
    crawler_strategy.set_hook('after_goto', after_goto)
    crawler_strategy.set_hook('on_execution_started', on_execution_started)
    crawler_strategy.set_hook('before_return_html', before_return_html)
    async with AsyncWebCrawler(verbose=True, crawler_strategy=crawler_strategy) as crawler:
        result = await crawler.arun(
            url="https://example.com",
            js_code="window.scrollTo(0, document.body.scrollHeight);",
            wait_for="footer"
        )
        print("📦 Crawler Hooks result:")
        print(result)

asyncio.run(main())
```
### Explanation
- `on_browser_created`: This hook is called when the Playwright browser is created. It sets up the browser context, logs in to a website, and adds a custom cookie.
- `before_goto`: This hook is called right before Playwright navigates to the URL. It adds custom HTTP headers.
- `after_goto`: This hook is called after Playwright navigates to the URL. It logs the current URL.
- `on_execution_started`: This hook is called after any custom JavaScript is executed. It performs additional JavaScript actions.
- `before_return_html`: This hook is called before returning the HTML content. It logs the length of the HTML content.
### Additional Ideas
- **Handling authentication**: Use the `on_browser_created` hook to handle login processes or set authentication tokens.
- **Dynamic header modification**: Modify headers based on the target URL or other conditions in the `before_goto` hook (see the sketch below).
- **Content verification**: Use the `after_goto` hook to verify that the expected content is present on the page.
- **Custom JavaScript injection**: Inject and execute custom JavaScript using the `on_execution_started` hook.
- **Content preprocessing**: Modify or analyze the HTML content in the `before_return_html` hook before it's returned.
By using these hooks, you can customize the behavior of the AsyncWebCrawler to suit your specific needs, including handling authentication, modifying requests, and preprocessing content.
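For instance, the dynamic header idea might look like the following sketch, where a small factory binds whatever headers you want before registering the hook. The factory pattern and the header values here are illustrative, not library API:
```python
from playwright.async_api import Page

def make_before_goto(headers: dict):
    """Build a before_goto hook that injects the given headers."""
    async def before_goto(page: Page):
        print(f"[HOOK] before_goto, injecting: {list(headers)}")
        await page.set_extra_http_headers(headers)
    return before_goto

# Register it like any other hook:
# crawler_strategy.set_hook('before_goto', make_before_goto({'X-Test-Header': 'test'}))
```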

View File

@@ -1,33 +0,0 @@
# Examples
Welcome to the examples section of Crawl4AI documentation! In this section, you will find practical examples demonstrating how to use Crawl4AI for various web crawling and data extraction tasks. Each example is designed to showcase different features and capabilities of the library.
## Examples Index
### [LLM Extraction](llm_extraction.md)
This example demonstrates how to use Crawl4AI to extract information using Large Language Models (LLMs). You will learn how to configure the `LLMExtractionStrategy` to get structured data from web pages.
### [JSON CSS Extraction](json_css_extraction.md)
This example demonstrates how to use Crawl4AI to extract structured data without an LLM, focusing purely on page structure. You will learn how to use the `JsonCssExtractionStrategy` to extract data using CSS selectors.
### [JS Execution & CSS Filtering](js_execution_css_filtering.md)
Learn how to execute custom JavaScript code and filter data using CSS selectors. This example shows how to perform complex web interactions and extract specific content from web pages.
### [Hooks & Auth](hooks_auth.md)
This example covers the use of custom hooks for authentication and other pre-crawling tasks. You will see how to set up hooks to modify headers, authenticate sessions, and perform other preparatory actions before crawling.
### [Summarization](summarization.md)
Discover how to use Crawl4AI to summarize web page content. This example demonstrates the summarization capabilities of the library, helping you extract concise information from lengthy web pages.
### [Research Assistant](research_assistant.md)
In this example, Crawl4AI is used as a research assistant to gather and organize information from multiple sources. You will learn how to use various extraction and chunking strategies to compile a comprehensive report.
---
Each example includes detailed explanations and code snippets to help you understand and implement the features in your projects. Click on the links to explore each example and start making the most of Crawl4AI!

View File

@@ -1,104 +0,0 @@
# JS Execution & CSS Filtering with AsyncWebCrawler
In this example, we'll demonstrate how to use Crawl4AI's AsyncWebCrawler to execute JavaScript, filter data with CSS selectors, and use a cosine similarity strategy to extract relevant content. This approach is particularly useful when you need to interact with dynamic content on web pages, such as clicking "Load More" buttons.
## Example: Extracting Structured Data Asynchronously
```python
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.chunking_strategy import RegexChunking
from crawl4ai.extraction_strategy import CosineStrategy
from crawl4ai.async_crawler_strategy import AsyncPlaywrightCrawlerStrategy

async def main():
    # Define the JavaScript code to click the "Load More" button
    js_code = """
    const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
    if (loadMoreButton) {
        loadMoreButton.click();
        // Wait for new content to load
        await new Promise(resolve => setTimeout(resolve, 2000));
    }
    """
    # Define a wait_for function to ensure content is loaded
    wait_for = """
    () => {
        const articles = document.querySelectorAll('article.tease-card');
        return articles.length > 10;
    }
    """
    async with AsyncWebCrawler(verbose=True) as crawler:
        # Run the crawler with keyword filtering and CSS selector
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            js_code=js_code,
            wait_for=wait_for,
            css_selector="article.tease-card",
            extraction_strategy=CosineStrategy(
                semantic_filter="technology",
            ),
            chunking_strategy=RegexChunking(),
        )
        # Display the extracted result
        print(result.extracted_content)

# Run the async function
asyncio.run(main())
```
### Explanation
1. **Asynchronous Execution**: We use `AsyncWebCrawler` with async/await syntax for non-blocking execution.
2. **JavaScript Execution**: The `js_code` variable contains JavaScript code that simulates clicking a "Load More" button and waits for new content to load.
3. **Wait Condition**: The `wait_for` function ensures that the page has loaded more than 10 articles before proceeding with the extraction.
4. **CSS Selector**: The `css_selector="article.tease-card"` parameter ensures that only article cards are extracted from the web page.
5. **Extraction Strategy**: The `CosineStrategy` is used with a semantic filter for "technology" to extract relevant content based on cosine similarity.
6. **Chunking Strategy**: We use `RegexChunking()` to split the content into manageable chunks for processing.
## Advanced Usage: Custom Session and Multiple Requests
For more complex scenarios where you need to maintain state across multiple requests or execute additional JavaScript after the initial page load, you can use a custom session:
```python
async def advanced_crawl():
    async with AsyncWebCrawler(verbose=True) as crawler:
        # Initial crawl with custom session
        result1 = await crawler.arun(
            url="https://www.nbcnews.com/business",
            js_code=js_code,
            wait_for=wait_for,
            css_selector="article.tease-card",
            session_id="business_session"
        )
        # Execute additional JavaScript in the same session
        result2 = await crawler.crawler_strategy.execute_js(
            session_id="business_session",
            js_code="window.scrollTo(0, document.body.scrollHeight);",
            wait_for_js="() => window.innerHeight + window.scrollY >= document.body.offsetHeight"
        )
        # Process results
        print("Initial crawl result:", result1.extracted_content)
        print("Additional JS execution result:", result2.html)

asyncio.run(advanced_crawl())
```
This advanced example demonstrates how to:
1. Use a custom session to maintain state across requests.
2. Execute additional JavaScript after the initial page load.
3. Wait for specific conditions using JavaScript functions.
## Try It Yourself
These examples demonstrate the power and flexibility of Crawl4AI's AsyncWebCrawler in handling complex web interactions and extracting meaningful data asynchronously. You can customize the JavaScript code, CSS selectors, extraction strategies, and waiting conditions to suit your specific requirements.

View File

@@ -1,142 +0,0 @@
# JSON CSS Extraction Strategy with AsyncWebCrawler
The `JsonCssExtractionStrategy` is a powerful feature of Crawl4AI that allows you to extract structured data from web pages using CSS selectors. This method is particularly useful when you need to extract specific data points from a consistent HTML structure, such as tables or repeated elements. Here's how to use it with the AsyncWebCrawler.
## Overview
The `JsonCssExtractionStrategy` works by defining a schema that specifies:
1. A base CSS selector for the repeating elements
2. Fields to extract from each element, each with its own CSS selector
This strategy is fast and efficient, as it doesn't rely on external services like LLMs for extraction.
## Example: Extracting Cryptocurrency Prices from Coinbase
Let's look at an example that extracts cryptocurrency prices from the Coinbase explore page.
```python
import json
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def extract_structured_data_using_css_extractor():
    print("\n--- Using JsonCssExtractionStrategy for Fast Structured Output ---")
    # Define the extraction schema
    schema = {
        "name": "Coinbase Crypto Prices",
        "baseSelector": ".cds-tableRow-t45thuk",
        "fields": [
            {
                "name": "crypto",
                "selector": "td:nth-child(1) h2",
                "type": "text",
            },
            {
                "name": "symbol",
                "selector": "td:nth-child(1) p",
                "type": "text",
            },
            {
                "name": "price",
                "selector": "td:nth-child(2)",
                "type": "text",
            }
        ],
    }
    # Create the extraction strategy
    extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)
    # Use the AsyncWebCrawler with the extraction strategy
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://www.coinbase.com/explore",
            extraction_strategy=extraction_strategy,
            bypass_cache=True,
        )
        assert result.success, "Failed to crawl the page"
        # Parse the extracted content
        crypto_prices = json.loads(result.extracted_content)
        print(f"Successfully extracted {len(crypto_prices)} cryptocurrency prices")
        print(json.dumps(crypto_prices[0], indent=2))
        return crypto_prices

# Run the async function
asyncio.run(extract_structured_data_using_css_extractor())
```
## Explanation of the Schema
The schema defines how to extract the data:
- `name`: A descriptive name for the extraction task.
- `baseSelector`: The CSS selector for the repeating elements (in this case, table rows).
- `fields`: An array of fields to extract from each element:
- `name`: The name to give the extracted data.
- `selector`: The CSS selector to find the specific data within the base element.
- `type`: The type of data to extract (usually "text" for textual content).
## Advantages of JsonCssExtractionStrategy
1. **Speed**: CSS selectors are fast to execute, making this method efficient for large datasets.
2. **Precision**: You can target exactly the elements you need.
3. **Structured Output**: The result is already structured as JSON, ready for further processing.
4. **No External Dependencies**: Unlike LLM-based strategies, this doesn't require any API calls to external services.
## Tips for Using JsonCssExtractionStrategy
1. **Inspect the Page**: Use browser developer tools to identify the correct CSS selectors.
2. **Test Selectors**: Verify your selectors in the browser console before using them in the script.
3. **Handle Dynamic Content**: If the page uses JavaScript to load content, you may need to combine this with JS execution (see the Advanced Usage section).
4. **Error Handling**: Always check the `result.success` flag and handle potential failures (see the sketch below).
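For tip 4, a minimal error-handling wrapper might look like this sketch (the function name is illustrative):
```python
import json

async def safe_extract(crawler, url, strategy):
    """Run a crawl and return parsed JSON, or None on failure."""
    result = await crawler.arun(
        url=url,
        extraction_strategy=strategy,
        bypass_cache=True,
    )
    if not result.success:
        print(f"Crawl failed for {url}: {result.error_message}")
        return None
    return json.loads(result.extracted_content)
```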
## Advanced Usage: Combining with JavaScript Execution
For pages that load data dynamically, you can combine the `JsonCssExtractionStrategy` with JavaScript execution:
```python
async def extract_dynamic_structured_data():
    schema = {
        "name": "Dynamic Crypto Prices",
        "baseSelector": ".crypto-row",
        "fields": [
            {"name": "name", "selector": ".crypto-name", "type": "text"},
            {"name": "price", "selector": ".crypto-price", "type": "text"},
        ]
    }
    js_code = """
    window.scrollTo(0, document.body.scrollHeight);
    await new Promise(resolve => setTimeout(resolve, 2000)); // Wait for 2 seconds
    """
    extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://example.com/crypto-prices",
            extraction_strategy=extraction_strategy,
            js_code=js_code,
            wait_for=".crypto-row:nth-child(20)",  # Wait for 20 rows to load
            bypass_cache=True,
        )
        crypto_data = json.loads(result.extracted_content)
        print(f"Extracted {len(crypto_data)} cryptocurrency entries")

asyncio.run(extract_dynamic_structured_data())
```
This advanced example demonstrates how to:
1. Execute JavaScript to trigger dynamic content loading.
2. Wait for a specific condition (20 rows loaded) before extraction.
3. Extract data from the dynamically loaded content.
By mastering the `JsonCssExtractionStrategy`, you can efficiently extract structured data from a wide variety of web pages, making it a valuable tool in your web scraping toolkit.
For more details on schema definitions and advanced extraction strategies, check out the [Advanced JsonCssExtraction](../full_details/advanced_jsoncss_extraction.md).

View File

@@ -1,179 +0,0 @@
# LLM Extraction with AsyncWebCrawler
Crawl4AI's AsyncWebCrawler allows you to use Language Models (LLMs) to extract structured data or relevant content from web pages asynchronously. Below are two examples demonstrating how to use `LLMExtractionStrategy` for different purposes with the AsyncWebCrawler.
## Example 1: Extract Structured Data
In this example, we use the `LLMExtractionStrategy` to extract structured data (model names and their fees) from the OpenAI pricing page.
```python
import os
import json
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy
from pydantic import BaseModel, Field

class OpenAIModelFee(BaseModel):
    model_name: str = Field(..., description="Name of the OpenAI model.")
    input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
    output_fee: str = Field(..., description="Fee for output token for the OpenAI model.")

async def extract_openai_fees():
    url = 'https://openai.com/api/pricing/'
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url=url,
            word_count_threshold=1,
            extraction_strategy=LLMExtractionStrategy(
                provider="openai/gpt-4o",
                api_token=os.getenv('OPENAI_API_KEY'),
                schema=OpenAIModelFee.model_json_schema(),
                extraction_type="schema",
                instruction="From the crawled content, extract all mentioned model names along with their "
                            "fees for input and output tokens. Make sure not to miss anything in the entire content. "
                            'One extracted model JSON format should look like this: '
                            '{ "model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens" }'
            ),
            bypass_cache=True,
        )
        model_fees = json.loads(result.extracted_content)
        print(f"Number of models extracted: {len(model_fees)}")
        with open(".data/openai_fees.json", "w", encoding="utf-8") as f:
            json.dump(model_fees, f, indent=2)

asyncio.run(extract_openai_fees())
```
## Example 2: Extract Relevant Content
In this example, we instruct the LLM to extract only content related to technology from the NBC News business page.
```python
import os
import json
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy

async def extract_tech_content():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            extraction_strategy=LLMExtractionStrategy(
                provider="openai/gpt-4o",
                api_token=os.getenv('OPENAI_API_KEY'),
                instruction="Extract only content related to technology"
            ),
            bypass_cache=True,
        )
        tech_content = json.loads(result.extracted_content)
        print(f"Number of tech-related items extracted: {len(tech_content)}")
        with open(".data/tech_content.json", "w", encoding="utf-8") as f:
            json.dump(tech_content, f, indent=2)

asyncio.run(extract_tech_content())
```
## Advanced Usage: Combining JS Execution with LLM Extraction
This example demonstrates how to combine JavaScript execution with LLM extraction to handle dynamic content:
```python
async def extract_dynamic_content():
    js_code = """
    const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
    if (loadMoreButton) {
        loadMoreButton.click();
        await new Promise(resolve => setTimeout(resolve, 2000));
    }
    """
    wait_for = """
    () => {
        const articles = document.querySelectorAll('article.tease-card');
        return articles.length > 10;
    }
    """
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            js_code=js_code,
            wait_for=wait_for,
            css_selector="article.tease-card",
            extraction_strategy=LLMExtractionStrategy(
                provider="openai/gpt-4o",
                api_token=os.getenv('OPENAI_API_KEY'),
                instruction="Summarize each article, focusing on technology-related content"
            ),
            bypass_cache=True,
        )
        summaries = json.loads(result.extracted_content)
        print(f"Number of summarized articles: {len(summaries)}")
        with open(".data/tech_summaries.json", "w", encoding="utf-8") as f:
            json.dump(summaries, f, indent=2)

asyncio.run(extract_dynamic_content())
```
## Customizing LLM Provider
Crawl4AI uses the `litellm` library under the hood, which allows you to use any LLM provider you want. Just pass the correct model name and API token:
```python
extraction_strategy=LLMExtractionStrategy(
    provider="your_llm_provider/model_name",
    api_token="your_api_token",
    instruction="Your extraction instruction"
)
```
This flexibility allows you to integrate with various LLM providers and tailor the extraction process to your specific needs.
## Error Handling and Retries
When working with external LLM APIs, it's important to handle potential errors and implement retry logic. Here's an example of how you might do this:
```python
import os
import json
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy
from tenacity import retry, stop_after_attempt, wait_exponential

class LLMExtractionError(Exception):
    pass

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10))
async def extract_with_retry(crawler, url, extraction_strategy):
    try:
        result = await crawler.arun(url=url, extraction_strategy=extraction_strategy, bypass_cache=True)
        return json.loads(result.extracted_content)
    except Exception as e:
        raise LLMExtractionError(f"Failed to extract content: {str(e)}")

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        try:
            content = await extract_with_retry(
                crawler,
                "https://www.example.com",
                LLMExtractionStrategy(
                    provider="openai/gpt-4o",
                    api_token=os.getenv('OPENAI_API_KEY'),
                    instruction="Extract and summarize main points"
                )
            )
            print("Extracted content:", content)
        except LLMExtractionError as e:
            print(f"Extraction failed after retries: {e}")

asyncio.run(main())
```
This example uses the `tenacity` library to implement a retry mechanism with exponential backoff, which can help handle temporary failures or rate limiting from the LLM API.

View File

@@ -1,220 +0,0 @@
# Research Assistant Example with AsyncWebCrawler
This example demonstrates how to build an advanced research assistant using `Chainlit`, `Crawl4AI`'s `AsyncWebCrawler`, and various AI services. The assistant can crawl web pages asynchronously, answer questions based on the crawled content, and handle audio inputs.
## Step-by-Step Guide
1. **Install Required Packages**
Ensure you have the necessary packages installed:
```bash
pip install chainlit groq openai crawl4ai
```
2. **Import Libraries**
```python
import os
import time
import asyncio
from openai import AsyncOpenAI
import chainlit as cl
import re
from io import BytesIO
from chainlit.element import ElementBased
from groq import Groq
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import NoExtractionStrategy
from crawl4ai.chunking_strategy import RegexChunking
client = AsyncOpenAI(base_url="https://api.groq.com/openai/v1", api_key=os.getenv("GROQ_API_KEY"))
# Instrument the OpenAI client
cl.instrument_openai()
```
3. **Set Configuration**
```python
settings = {
    "model": "llama3-8b-8192",
    "temperature": 0.5,
    "max_tokens": 500,
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
}
```
4. **Define Utility Functions**
```python
def extract_urls(text):
    url_pattern = re.compile(r'(https?://\S+)')
    return url_pattern.findall(text)

async def crawl_urls(urls):
    async with AsyncWebCrawler(verbose=True) as crawler:
        results = await crawler.arun_many(
            urls=urls,
            word_count_threshold=10,
            extraction_strategy=NoExtractionStrategy(),
            chunking_strategy=RegexChunking(),
            bypass_cache=True
        )
        return [result.markdown for result in results if result.success]
```
5. **Initialize Chat Start Event**
```python
@cl.on_chat_start
async def on_chat_start():
    cl.user_session.set("session", {
        "history": [],
        "context": {}
    })
    await cl.Message(content="Welcome to the chat! How can I assist you today?").send()
```
6. **Handle Incoming Messages**
```python
@cl.on_message
async def on_message(message: cl.Message):
    user_session = cl.user_session.get("session")
    # Extract URLs from the user's message
    urls = extract_urls(message.content)
    if urls:
        crawled_contents = await crawl_urls(urls)
        for url, content in zip(urls, crawled_contents):
            ref_number = f"REF_{len(user_session['context']) + 1}"
            user_session["context"][ref_number] = {
                "url": url,
                "content": content
            }
    user_session["history"].append({
        "role": "user",
        "content": message.content
    })
    # Create a system message that includes the context
    context_messages = [
        f'<appendix ref="{ref}">\n{data["content"]}\n</appendix>'
        for ref, data in user_session["context"].items()
    ]
    system_message = {
        "role": "system",
        "content": (
            "You are a helpful bot. Use the following context for answering questions. "
            "Refer to the sources using the REF number in square brackets, e.g., [1], only if the source is given in the appendices below.\n\n"
            "If the question requires any information from the provided appendices or context, refer to the sources. "
            "If not, there is no need to add a references section. "
            "At the end of your response, provide a reference section listing the URLs and their REF numbers only if sources from the appendices were used.\n\n"
            + "\n\n".join(context_messages)
        ) if context_messages else "You are a helpful assistant."
    }
    msg = cl.Message(content="")
    await msg.send()
    # Get response from the LLM
    stream = await client.chat.completions.create(
        messages=[system_message, *user_session["history"]],
        stream=True,
        **settings
    )
    assistant_response = ""
    async for part in stream:
        if token := part.choices[0].delta.content:
            assistant_response += token
            await msg.stream_token(token)
    # Add assistant message to the history
    user_session["history"].append({
        "role": "assistant",
        "content": assistant_response
    })
    await msg.update()
    # Append the reference section to the assistant's response
    if user_session["context"]:
        reference_section = "\n\nReferences:\n"
        for ref, data in user_session["context"].items():
            reference_section += f"[{ref.split('_')[1]}]: {data['url']}\n"
        msg.content += reference_section
        await msg.update()
```
7. **Handle Audio Input**
```python
@cl.on_audio_chunk
async def on_audio_chunk(chunk: cl.AudioChunk):
    if chunk.isStart:
        buffer = BytesIO()
        buffer.name = f"input_audio.{chunk.mimeType.split('/')[1]}"
        cl.user_session.set("audio_buffer", buffer)
        cl.user_session.set("audio_mime_type", chunk.mimeType)
    cl.user_session.get("audio_buffer").write(chunk.data)

@cl.step(type="tool")
async def speech_to_text(audio_file):
    response = await client.audio.transcriptions.create(
        model="whisper-large-v3", file=audio_file
    )
    return response.text

@cl.on_audio_end
async def on_audio_end(elements: list[ElementBased]):
    audio_buffer: BytesIO = cl.user_session.get("audio_buffer")
    audio_buffer.seek(0)
    audio_file = audio_buffer.read()
    audio_mime_type: str = cl.user_session.get("audio_mime_type")

    start_time = time.time()
    transcription = await speech_to_text((audio_buffer.name, audio_file, audio_mime_type))
    end_time = time.time()
    print(f"Transcription took {end_time - start_time} seconds")

    user_msg = cl.Message(author="You", type="user_message", content=transcription)
    await user_msg.send()
    await on_message(user_msg)
```
8. **Run the Chat Application**
```python
if __name__ == "__main__":
    from chainlit.cli import run_chainlit
    run_chainlit(__file__)
```
## Explanation
- **Libraries and Configuration**: We import necessary libraries, including `AsyncWebCrawler` from `crawl4ai`.
- **Utility Functions**:
- `extract_urls`: Uses regex to find URLs in messages.
- `crawl_urls`: An asynchronous function that uses `AsyncWebCrawler` to fetch content from multiple URLs concurrently.
- **Chat Start Event**: Initializes the chat session and sends a welcome message.
- **Message Handling**:
- Extracts URLs from user messages.
- Asynchronously crawls the URLs using `AsyncWebCrawler`.
- Updates chat history and context with crawled content.
- Generates a response using the LLM, incorporating the crawled context.
- **Audio Handling**: Captures, buffers, and transcribes audio input, then processes the transcription as text.
- **Running the Application**: Starts the Chainlit server for interaction with the assistant.
## Key Improvements
1. **Asynchronous Web Crawling**: Using `AsyncWebCrawler` allows for efficient, concurrent crawling of multiple URLs.
2. **Improved Context Management**: The assistant now maintains a context of crawled content, allowing for more informed responses.
3. **Dynamic Reference System**: The assistant can refer to specific sources in its responses and provide a reference section.
4. **Seamless Audio Integration**: The ability to handle audio inputs makes the assistant more versatile and user-friendly.
This updated Research Assistant showcases how to create a powerful, interactive tool that can efficiently fetch and process web content, handle various input types, and provide informed responses based on the gathered information.

View File

@@ -1,153 +0,0 @@
# Summarization Example with AsyncWebCrawler
This example demonstrates how to use Crawl4AI's `AsyncWebCrawler` to extract a summary from a web page asynchronously. The goal is to obtain the title, a detailed summary, a brief summary, and a list of keywords from the given page.
## Step-by-Step Guide
1. **Import Necessary Modules**
First, import the necessary modules and classes:
```python
import os
import json
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy
from crawl4ai.chunking_strategy import RegexChunking
from pydantic import BaseModel, Field
```
2. **Define the URL to be Crawled**
Set the URL of the web page you want to summarize:
```python
url = 'https://marketplace.visualstudio.com/items?itemName=Unclecode.groqopilot'
```
3. **Define the Data Model**
Use Pydantic to define the structure of the extracted data:
```python
class PageSummary(BaseModel):
    title: str = Field(..., description="Title of the page.")
    summary: str = Field(..., description="Summary of the page.")
    brief_summary: str = Field(..., description="Brief summary of the page.")
    keywords: list = Field(..., description="Keywords assigned to the page.")
```
4. **Create the Extraction Strategy**
Set up the `LLMExtractionStrategy` with the necessary parameters:
```python
extraction_strategy = LLMExtractionStrategy(
    provider="openai/gpt-4o",
    api_token=os.getenv('OPENAI_API_KEY'),
    schema=PageSummary.model_json_schema(),
    extraction_type="schema",
    apply_chunking=False,
    instruction=(
        "From the crawled content, extract the following details: "
        "1. Title of the page "
        "2. Summary of the page, which is a detailed summary "
        "3. Brief summary of the page, which is a paragraph text "
        "4. Keywords assigned to the page, which is a list of keywords. "
        'The extracted JSON format should look like this: '
        '{ "title": "Page Title", "summary": "Detailed summary of the page.", '
        '"brief_summary": "Brief summary in a paragraph.", "keywords": ["keyword1", "keyword2", "keyword3"] }'
    )
)
```
5. **Define the Async Crawl Function**
Create an asynchronous function to run the crawler:
```python
async def crawl_and_summarize(url):
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url=url,
            word_count_threshold=1,
            extraction_strategy=extraction_strategy,
            chunking_strategy=RegexChunking(),
            bypass_cache=True,
        )
        return result
```
6. **Run the Crawler and Process Results**
Use asyncio to run the crawler and process the results:
```python
async def main():
    result = await crawl_and_summarize(url)
    if result.success:
        page_summary = json.loads(result.extracted_content)
        print("Extracted Page Summary:")
        print(json.dumps(page_summary, indent=2))

        # Save the extracted data (create the output directory first)
        os.makedirs(".data", exist_ok=True)
        with open(".data/page_summary.json", "w", encoding="utf-8") as f:
            json.dump(page_summary, f, indent=2)
        print("Page summary saved to .data/page_summary.json")
    else:
        print(f"Failed to crawl and summarize the page. Error: {result.error_message}")

# Run the async main function
asyncio.run(main())
```
## Explanation
- **Importing Modules**: We import the necessary modules, including `AsyncWebCrawler` and `LLMExtractionStrategy` from Crawl4AI.
- **URL Definition**: We set the URL of the web page to crawl and summarize.
- **Data Model Definition**: We define the structure of the data to extract using Pydantic's `BaseModel`.
- **Extraction Strategy Setup**: We create an instance of `LLMExtractionStrategy` with the schema and detailed instructions for the extraction process.
- **Async Crawl Function**: We define an asynchronous function `crawl_and_summarize` that uses `AsyncWebCrawler` to perform the crawling and extraction.
- **Main Execution**: In the `main` function, we run the crawler, process the results, and save the extracted data.
## Advanced Usage: Crawling Multiple URLs
To demonstrate the power of `AsyncWebCrawler`, here's how you can summarize multiple pages concurrently:
```python
async def crawl_multiple_urls(urls):
    async with AsyncWebCrawler(verbose=True) as crawler:
        tasks = [crawler.arun(
            url=url,
            word_count_threshold=1,
            extraction_strategy=extraction_strategy,
            chunking_strategy=RegexChunking(),
            bypass_cache=True
        ) for url in urls]
        results = await asyncio.gather(*tasks)
        return results

async def main():
    urls = [
        'https://marketplace.visualstudio.com/items?itemName=Unclecode.groqopilot',
        'https://marketplace.visualstudio.com/items?itemName=GitHub.copilot',
        'https://marketplace.visualstudio.com/items?itemName=ms-python.python'
    ]
    results = await crawl_multiple_urls(urls)

    for i, result in enumerate(results):
        if result.success:
            page_summary = json.loads(result.extracted_content)
            print(f"\nSummary for URL {i+1}:")
            print(json.dumps(page_summary, indent=2))
        else:
            print(f"\nFailed to summarize URL {i+1}. Error: {result.error_message}")

asyncio.run(main())
```
This advanced example shows how to use `AsyncWebCrawler` to efficiently summarize multiple web pages concurrently, significantly reducing the total processing time compared to sequential crawling.
By leveraging the asynchronous capabilities of Crawl4AI, you can perform advanced web crawling and data extraction tasks with improved efficiency and scalability.

View File

@@ -1,138 +0,0 @@
# Advanced Features
Crawl4AI offers a range of advanced features that allow you to fine-tune your web crawling and data extraction process. This section will cover some of these advanced features, including taking screenshots, extracting media and links, customizing the user agent, using custom hooks, and leveraging CSS selectors.
## Taking Screenshots 📸
One of the cool features of Crawl4AI is the ability to take screenshots of the web pages you're crawling. This can be particularly useful for visual verification or for capturing the state of dynamic content.
Here's how you can take a screenshot:
```python
from crawl4ai import WebCrawler
import base64

# Create the WebCrawler instance
crawler = WebCrawler()
crawler.warmup()

# Run the crawler with the screenshot parameter
result = crawler.run(url="https://www.nbcnews.com/business", screenshot=True)

# Save the screenshot to a file
with open("screenshot.png", "wb") as f:
    f.write(base64.b64decode(result.screenshot))
print("Screenshot saved to 'screenshot.png'!")
```
In this example, we create a `WebCrawler` instance, warm it up, and then run it with the `screenshot` parameter set to `True`. The screenshot is saved as a base64-encoded string in the result, which we then decode and save as a PNG file.
## Extracting Media and Links 🎨🔗
Crawl4AI can extract all media tags (images, audio, and video) and links (both internal and external) from a web page. This feature is useful for collecting multimedia content or analyzing link structures.
Here's an example:
```python
from crawl4ai import WebCrawler
# Create the WebCrawler instance
crawler = WebCrawler()
crawler.warmup()
# Run the crawler
result = crawler.run(url="https://www.nbcnews.com/business")
print("Extracted media:", result.media)
print("Extracted links:", result.links)
```
In this example, the `result` object contains dictionaries for media and links, which you can access and use as needed.
## Customizing the User Agent 🕵️‍♂️
Crawl4AI allows you to set a custom user agent for your HTTP requests. This can help you avoid detection by web servers or simulate different browsing environments.
Here's how to set a custom user agent:
```python
from crawl4ai import WebCrawler
# Create the WebCrawler instance
crawler = WebCrawler()
crawler.warmup()
# Run the crawler with a custom user agent
result = crawler.run(url="https://www.nbcnews.com/business", user_agent="Mozilla/5.0 (compatible; MyCrawler/1.0)")
print("Crawl result:", result)
```
In this example, we specify a custom user agent string when running the crawler.
## Using Custom Hooks 🪝
Hooks are a powerful feature in Crawl4AI that allow you to customize the crawling process at various stages. You can define hooks for actions such as driver initialization, before and after URL fetching, and before returning the HTML.
Here's an example of using hooks:
```python
from crawl4ai import WebCrawler
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Define the hooks
def on_driver_created(driver):
    driver.maximize_window()
    driver.get('https://example.com/login')
    WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.NAME, 'username'))).send_keys('testuser')
    driver.find_element(By.NAME, 'password').send_keys('password123')
    driver.find_element(By.NAME, 'login').click()
    return driver

def before_get_url(driver):
    driver.execute_cdp_cmd('Network.setExtraHTTPHeaders', {'headers': {'X-Test-Header': 'test'}})
    return driver

# Create the WebCrawler instance
crawler = WebCrawler()
crawler.warmup()

# Set the hooks
crawler.set_hook('on_driver_created', on_driver_created)
crawler.set_hook('before_get_url', before_get_url)

# Run the crawler
result = crawler.run(url="https://example.com")
print("Crawl result:", result)
```
In this example, we define hooks to handle driver initialization and custom headers before fetching the URL.
## Using CSS Selectors 🎯
CSS selectors allow you to target specific elements on a web page for extraction. This can be useful for scraping structured content, such as articles or product details.
Here's an example of using a CSS selector:
```python
from crawl4ai import WebCrawler
# Create the WebCrawler instance
crawler = WebCrawler()
crawler.warmup()
# Run the crawler with a CSS selector to extract only H2 tags
result = crawler.run(url="https://www.nbcnews.com/business", css_selector="h2")
print("Extracted H2 tags:", result.extracted_content)
```
In this example, we use the `css_selector` parameter to extract only the H2 tags from the web page.
---
With these advanced features, you can leverage Crawl4AI to perform sophisticated web crawling and data extraction tasks. Whether you need to take screenshots, extract specific elements, customize the crawling process, or set custom headers, Crawl4AI provides the flexibility and power to meet your needs. Happy crawling! 🕷️🚀

View File

@@ -1,282 +0,0 @@
# Advanced Usage of JsonCssExtractionStrategy
While the basic usage of JsonCssExtractionStrategy is powerful for simple structures, its true potential shines when dealing with complex, nested HTML structures. This section will explore advanced usage scenarios, demonstrating how to extract nested objects, lists, and nested lists.
## Hypothetical Website Example
Let's consider a hypothetical e-commerce website that displays product categories, each containing multiple products. Each product has details, reviews, and related items. This complex structure will allow us to demonstrate various advanced features of JsonCssExtractionStrategy.
Assume the HTML structure looks something like this:
```html
<div class="category">
  <h2 class="category-name">Electronics</h2>
  <div class="product">
    <h3 class="product-name">Smartphone X</h3>
    <p class="product-price">$999</p>
    <div class="product-details">
      <span class="brand">TechCorp</span>
      <span class="model">X-2000</span>
    </div>
    <ul class="product-features">
      <li>5G capable</li>
      <li>6.5" OLED screen</li>
      <li>128GB storage</li>
    </ul>
    <div class="product-reviews">
      <div class="review">
        <span class="reviewer">John D.</span>
        <span class="rating">4.5</span>
        <p class="review-text">Great phone, love the camera!</p>
      </div>
      <div class="review">
        <span class="reviewer">Jane S.</span>
        <span class="rating">5</span>
        <p class="review-text">Best smartphone I've ever owned.</p>
      </div>
    </div>
    <ul class="related-products">
      <li>
        <span class="related-name">Phone Case</span>
        <span class="related-price">$29.99</span>
      </li>
      <li>
        <span class="related-name">Screen Protector</span>
        <span class="related-price">$9.99</span>
      </li>
    </ul>
  </div>
  <!-- More products... -->
</div>
```
Now, let's create a schema to extract this complex structure:
```python
schema = {
    "name": "E-commerce Product Catalog",
    "baseSelector": "div.category",
    "fields": [
        {
            "name": "category_name",
            "selector": "h2.category-name",
            "type": "text"
        },
        {
            "name": "products",
            "selector": "div.product",
            "type": "nested_list",
            "fields": [
                {
                    "name": "name",
                    "selector": "h3.product-name",
                    "type": "text"
                },
                {
                    "name": "price",
                    "selector": "p.product-price",
                    "type": "text"
                },
                {
                    "name": "details",
                    "selector": "div.product-details",
                    "type": "nested",
                    "fields": [
                        {
                            "name": "brand",
                            "selector": "span.brand",
                            "type": "text"
                        },
                        {
                            "name": "model",
                            "selector": "span.model",
                            "type": "text"
                        }
                    ]
                },
                {
                    "name": "features",
                    "selector": "ul.product-features li",
                    "type": "list",
                    "fields": [
                        {
                            "name": "feature",
                            "type": "text"
                        }
                    ]
                },
                {
                    "name": "reviews",
                    "selector": "div.review",
                    "type": "nested_list",
                    "fields": [
                        {
                            "name": "reviewer",
                            "selector": "span.reviewer",
                            "type": "text"
                        },
                        {
                            "name": "rating",
                            "selector": "span.rating",
                            "type": "text"
                        },
                        {
                            "name": "comment",
                            "selector": "p.review-text",
                            "type": "text"
                        }
                    ]
                },
                {
                    "name": "related_products",
                    "selector": "ul.related-products li",
                    "type": "list",
                    "fields": [
                        {
                            "name": "name",
                            "selector": "span.related-name",
                            "type": "text"
                        },
                        {
                            "name": "price",
                            "selector": "span.related-price",
                            "type": "text"
                        }
                    ]
                }
            ]
        }
    ]
}
```
This schema demonstrates several advanced features:
1. **Nested Objects**: The `details` field is a nested object within each product.
2. **Simple Lists**: The `features` field is a simple list of text items.
3. **Nested Lists**: The `products` field is a nested list, where each item is a complex object.
4. **Lists of Objects**: The `reviews` and `related_products` fields are lists of objects.
Let's break down the key concepts:
### Nested Objects
To create a nested object, use `"type": "nested"` and provide a `fields` array for the nested structure:
```python
{
    "name": "details",
    "selector": "div.product-details",
    "type": "nested",
    "fields": [
        {
            "name": "brand",
            "selector": "span.brand",
            "type": "text"
        },
        {
            "name": "model",
            "selector": "span.model",
            "type": "text"
        }
    ]
}
```
### Simple Lists
For a simple list of identical items, use `"type": "list"`:
```python
{
    "name": "features",
    "selector": "ul.product-features li",
    "type": "list",
    "fields": [
        {
            "name": "feature",
            "type": "text"
        }
    ]
}
```
### Nested Lists
For a list of complex objects, use `"type": "nested_list"`:
```python
{
    "name": "products",
    "selector": "div.product",
    "type": "nested_list",
    "fields": [
        # ... fields for each product
    ]
}
```
### Lists of Objects
Similar to nested lists, but typically used for simpler objects within the list:
```python
{
    "name": "related_products",
    "selector": "ul.related-products li",
    "type": "list",
    "fields": [
        {
            "name": "name",
            "selector": "span.related-name",
            "type": "text"
        },
        {
            "name": "price",
            "selector": "span.related-price",
            "type": "text"
        }
    ]
}
```
## Using the Advanced Schema
To use this advanced schema with AsyncWebCrawler:
```python
import json
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def extract_complex_product_data():
    extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)

    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://gist.githubusercontent.com/githubusercontent/2d7b8ba3cd8ab6cf3c8da771ddb36878/raw/1ae2f90c6861ce7dd84cc50d3df9920dee5e1fd2/sample_ecommerce.html",
            extraction_strategy=extraction_strategy,
            bypass_cache=True,
        )

        assert result.success, "Failed to crawl the page"

        product_data = json.loads(result.extracted_content)
        print(json.dumps(product_data, indent=2))

asyncio.run(extract_complex_product_data())
```
This will produce a structured JSON output that captures the complex hierarchy of the product catalog, including nested objects, lists, and nested lists.
## Tips for Advanced Usage
1. **Start Simple**: Begin with a basic schema and gradually add complexity.
2. **Test Incrementally**: Test each part of your schema separately before combining them.
3. **Use Chrome DevTools**: The Element Inspector is invaluable for identifying the correct selectors.
4. **Handle Missing Data**: Use the `default` key in your field definitions to handle cases where data might be missing.
5. **Leverage Transforms**: Use the `transform` key to clean or format extracted data (e.g., converting prices to numbers); tips 4 and 5 are illustrated in the sketch after this list.
6. **Consider Performance**: Very complex schemas might slow down extraction. Balance complexity with performance needs.
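As a minimal illustration of tips 4 and 5, assuming the `default` and `transform` keys behave as described above, a single field definition might look like this:

```python
# A hypothetical price field combining a fallback value with a text transform.
price_field = {
    "name": "price",
    "selector": "p.product-price",
    "type": "text",
    "default": "N/A",      # fallback used when the selector matches nothing (tip 4)
    "transform": "strip",  # clean surrounding whitespace from the raw text (tip 5)
}
```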
By mastering these advanced techniques, you can use JsonCssExtractionStrategy to extract highly structured data from even the most complex web pages, making it a powerful tool for web scraping and data analysis tasks.

View File

@@ -1,133 +0,0 @@
## Chunking Strategies 📚
Crawl4AI provides several powerful chunking strategies to divide text into manageable parts for further processing. Each strategy has unique characteristics and is suitable for different scenarios. Let's explore them one by one.
### RegexChunking
`RegexChunking` splits text using regular expressions. This is ideal for creating chunks based on specific patterns like paragraphs or sentences.
#### When to Use
- Great for structured text with consistent delimiters.
- Suitable for documents where specific patterns (e.g., double newlines, periods) indicate logical chunks.
#### Parameters
- `patterns` (list, optional): Regular expressions used to split the text. Default is to split by double newlines (`['\n\n']`).
#### Example
```python
from crawl4ai.chunking_strategy import RegexChunking
# Define patterns for splitting text
patterns = [r'\n\n', r'\. ']
chunker = RegexChunking(patterns=patterns)
# Sample text
text = "This is a sample text. It will be split into chunks.\n\nThis is another paragraph."
# Chunk the text
chunks = chunker.chunk(text)
print(chunks)
```
### NlpSentenceChunking
`NlpSentenceChunking` uses NLP models to split text into sentences, ensuring accurate sentence boundaries.
#### When to Use
- Ideal for texts where sentence boundaries are crucial.
- Useful for creating chunks that preserve grammatical structures.
#### Parameters
- None.
#### Example
```python
from crawl4ai.chunking_strategy import NlpSentenceChunking
chunker = NlpSentenceChunking()
# Sample text
text = "This is a sample text. It will be split into sentences. Here's another sentence."
# Chunk the text
chunks = chunker.chunk(text)
print(chunks)
```
### TopicSegmentationChunking
`TopicSegmentationChunking` employs the TextTiling algorithm to segment text into topic-based chunks. This method identifies thematic boundaries.
#### When to Use
- Perfect for long documents with distinct topics.
- Useful when preserving topic continuity is more important than maintaining text order.
#### Parameters
- `num_keywords` (int, optional): Number of keywords for each topic segment. Default is `3`.
#### Example
```python
from crawl4ai.chunking_strategy import TopicSegmentationChunking
chunker = TopicSegmentationChunking(num_keywords=3)
# Sample text
text = "This document contains several topics. Topic one discusses AI. Topic two covers machine learning."
# Chunk the text
chunks = chunker.chunk(text)
print(chunks)
```
### FixedLengthWordChunking
`FixedLengthWordChunking` splits text into chunks based on a fixed number of words. This ensures each chunk has approximately the same length.
#### When to Use
- Suitable for processing large texts where uniform chunk size is important.
- Useful when the number of words per chunk needs to be controlled.
#### Parameters
- `chunk_size` (int, optional): Number of words per chunk. Default is `100`.
#### Example
```python
from crawl4ai.chunking_strategy import FixedLengthWordChunking
chunker = FixedLengthWordChunking(chunk_size=10)
# Sample text
text = "This is a sample text. It will be split into chunks of fixed length."
# Chunk the text
chunks = chunker.chunk(text)
print(chunks)
```
### SlidingWindowChunking
`SlidingWindowChunking` uses a sliding window approach to create overlapping chunks. Each chunk has a fixed length, and the window slides by a specified step size.
#### When to Use
- Ideal for creating overlapping chunks to preserve context.
- Useful for tasks where context from adjacent chunks is needed.
#### Parameters
- `window_size` (int, optional): Number of words in each chunk. Default is `100`.
- `step` (int, optional): Number of words to slide the window. Default is `50`.
#### Example
```python
from crawl4ai.chunking_strategy import SlidingWindowChunking
chunker = SlidingWindowChunking(window_size=10, step=5)
# Sample text
text = "This is a sample text. It will be split using a sliding window approach to preserve context."
# Chunk the text
chunks = chunker.chunk(text)
print(chunks)
```
With these chunking strategies, you can choose the best method to divide your text based on your specific needs. Whether you need precise sentence boundaries, topic-based segmentation, or uniform chunk sizes, Crawl4AI has you covered. Happy chunking! 📝✨

View File

@@ -1,179 +0,0 @@
# Crawl Request Parameters for AsyncWebCrawler
The `arun` method in Crawl4AI's `AsyncWebCrawler` is designed to be highly configurable, allowing you to customize the crawling and extraction process to suit your needs. Below are the parameters you can use with the `arun` method, along with their descriptions, possible values, and examples.
## Parameters
### url (str)
**Description:** The URL of the webpage to crawl.
**Required:** Yes
**Example:**
```python
url = "https://www.nbcnews.com/business"
```
### word_count_threshold (int)
**Description:** The minimum number of words a block must contain to be considered meaningful. The default value is defined by `MIN_WORD_THRESHOLD`.
**Required:** No
**Default Value:** `MIN_WORD_THRESHOLD`
**Example:**
```python
word_count_threshold = 10
```
### extraction_strategy (ExtractionStrategy)
**Description:** The strategy to use for extracting content from the HTML. It must be an instance of `ExtractionStrategy`. If not provided, the default is `NoExtractionStrategy`.
**Required:** No
**Default Value:** `NoExtractionStrategy()`
**Example:**
```python
extraction_strategy = CosineStrategy(semantic_filter="finance")
```
### chunking_strategy (ChunkingStrategy)
**Description:** The strategy to use for chunking the text before processing. It must be an instance of `ChunkingStrategy`. The default value is `RegexChunking()`.
**Required:** No
**Default Value:** `RegexChunking()`
**Example:**
```python
chunking_strategy = NlpSentenceChunking()
```
### bypass_cache (bool)
**Description:** Whether to force a fresh crawl even if the URL has been previously crawled. The default value is `False`.
**Required:** No
**Default Value:** `False`
**Example:**
```python
bypass_cache = True
```
### css_selector (str)
**Description:** The CSS selector to target specific parts of the HTML for extraction. If not provided, the entire HTML will be processed.
**Required:** No
**Default Value:** `None`
**Example:**
```python
css_selector = "div.article-content"
```
### screenshot (bool)
**Description:** Whether to take screenshots of the page. The default value is `False`.
**Required:** No
**Default Value:** `False`
**Example:**
```python
screenshot = True
```
### user_agent (str)
**Description:** The user agent to use for the HTTP requests. If not provided, a default user agent will be used.
**Required:** No
**Default Value:** `None`
**Example:**
```python
user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"
```
### verbose (bool)
**Description:** Whether to enable verbose logging. The default value is `True`.
**Required:** No
**Default Value:** `True`
**Example:**
```python
verbose = True
```
### `**kwargs`
Additional keyword arguments that can be passed to customize the crawling process further. Some notable options include:
- **only_text (bool):** Whether to extract only text content, excluding HTML tags. Default is `False`.
- **session_id (str):** A unique identifier for the crawling session. This is useful for maintaining state across multiple requests.
- **js_code (str or list):** JavaScript code to be executed on the page before extraction.
- **wait_for (str):** A CSS selector or JavaScript function to wait for before considering the page load complete.
**Example:**
```python
result = await crawler.arun(
    url="https://www.nbcnews.com/business",
    css_selector="p",
    only_text=True,
    session_id="unique_session_123",
    js_code="window.scrollTo(0, document.body.scrollHeight);",
    wait_for="article.main-article"
)
```
## Example Usage
Here's an example of how to use the `arun` method with various parameters:
```python
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import CosineStrategy
from crawl4ai.chunking_strategy import NlpSentenceChunking

async def main():
    # Create the AsyncWebCrawler instance
    async with AsyncWebCrawler(verbose=True) as crawler:
        # Run the crawler with custom parameters
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            word_count_threshold=10,
            extraction_strategy=CosineStrategy(semantic_filter="finance"),
            chunking_strategy=NlpSentenceChunking(),
            bypass_cache=True,
            css_selector="div.article-content",
            screenshot=True,
            user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3",
            verbose=True,
            only_text=True,
            session_id="business_news_session",
            js_code="window.scrollTo(0, document.body.scrollHeight);",
            wait_for="footer"
        )
        print(result)

# Run the async function
asyncio.run(main())
```
This example demonstrates how to configure various parameters to customize the crawling and extraction process using the asynchronous version of Crawl4AI.
## Additional Asynchronous Methods
The `AsyncWebCrawler` class also provides other useful asynchronous methods:
### arun_many
**Description:** Crawl multiple URLs concurrently.
**Example:**
```python
urls = ["https://example1.com", "https://example2.com", "https://example3.com"]
results = await crawler.arun_many(urls, word_count_threshold=10, bypass_cache=True)
```
### aclear_cache
**Description:** Clear the crawler's cache.
**Example:**
```python
await crawler.aclear_cache()
```
### aflush_cache
**Description:** Completely flush the crawler's cache.
**Example:**
```python
await crawler.aflush_cache()
```
### aget_cache_size
**Description:** Get the current size of the cache.
**Example:**
```python
cache_size = await crawler.aget_cache_size()
print(f"Current cache size: {cache_size}")
```
These asynchronous methods allow for efficient and flexible use of the AsyncWebCrawler in various scenarios.

View File

@@ -1,104 +0,0 @@
# Crawl Result
The `CrawlResult` class is the heart of Crawl4AI's output, encapsulating all the data extracted from a crawling session. This class contains various fields that store the results of the web crawling and extraction process. Let's break down each field and see what it holds. 🎉
## Class Definition
```python
from pydantic import BaseModel
from typing import Dict, List, Optional

class CrawlResult(BaseModel):
    url: str
    html: str
    success: bool
    cleaned_html: Optional[str] = None
    media: Dict[str, List[Dict]] = {}
    links: Dict[str, List[Dict]] = {}
    screenshot: Optional[str] = None
    markdown: Optional[str] = None
    extracted_content: Optional[str] = None
    metadata: Optional[dict] = None
    error_message: Optional[str] = None
    session_id: Optional[str] = None
    responser_headers: Optional[dict] = None
    status_code: Optional[int] = None
```
## Fields Explanation
### `url: str`
The URL that was crawled. This field simply stores the URL of the web page that was processed.
### `html: str`
The raw HTML content of the web page. This is the unprocessed HTML source as retrieved by the crawler.
### `success: bool`
A flag indicating whether the crawling and extraction were successful. If any error occurs during the process, this will be `False`.
### `cleaned_html: Optional[str]`
The cleaned HTML content of the web page. This field holds the HTML after removing unwanted tags like `<script>`, `<style>`, and others that do not contribute to the useful content.
### `media: Dict[str, List[Dict]]`
A dictionary containing lists of extracted media elements from the web page. The media elements are categorized into images, videos, and audios. Here's how they are structured:
- **Images**: Each image is represented as a dictionary with `src` (source URL) and `alt` (alternate text).
- **Videos**: Each video is represented similarly with `src` and `alt`.
- **Audios**: Each audio is represented with `src` and `alt`.
```python
media = {
    'images': [
        {'src': 'image_url1', 'alt': 'description1', "type": "image"},
        {'src': 'image_url2', 'alt': 'description2', "type": "image"}
    ],
    'videos': [
        {'src': 'video_url1', 'alt': 'description1', "type": "video"}
    ],
    'audios': [
        {'src': 'audio_url1', 'alt': 'description1', "type": "audio"}
    ]
}
```
### `links: Dict[str, List[Dict]]`
A dictionary containing lists of internal and external links extracted from the web page. Each link is represented as a dictionary with `href` (URL) and `text` (link text).
- **Internal Links**: Links pointing to the same domain.
- **External Links**: Links pointing to different domains.
```python
links = {
    'internal': [
        {'href': 'internal_link1', 'text': 'link_text1'},
        {'href': 'internal_link2', 'text': 'link_text2'}
    ],
    'external': [
        {'href': 'external_link1', 'text': 'link_text1'}
    ]
}
```
### `screenshot: Optional[str]`
A base64-encoded screenshot of the web page. This field stores the screenshot data if the crawling was configured to take a screenshot.
### `markdown: Optional[str]`
The content of the web page converted to Markdown format. This is useful for generating clean, readable text that retains the structure of the original HTML.
### `extracted_content: Optional[str]`
The content extracted based on the specified extraction strategy. This field holds the meaningful content blocks extracted from the web page, ready for your AI and data processing needs.
### `metadata: Optional[dict]`
A dictionary containing metadata extracted from the web page, such as title, description, keywords, and other meta tags.
### `error_message: Optional[str]`
If an error occurs during crawling, this field will contain the error message, helping you debug and understand what went wrong. 🚨
### `session_id: Optional[str]`
A unique identifier for the crawling session. This can be useful for tracking and managing multiple crawling sessions.
### `responser_headers: Optional[dict]`
A dictionary containing the response headers from the web server. This can provide additional information about the server and the response.
### `status_code: Optional[int]`
The HTTP status code of the response. This indicates the success or failure of the HTTP request (e.g., 200 for success, 404 for not found, etc.).
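Putting these fields together, here is a minimal sketch of how you might inspect a `CrawlResult` after a crawl:

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(url="https://www.nbcnews.com/business")
        if result.success:
            # Each of these fields is documented above
            print("Status code:", result.status_code)
            print("Images found:", len(result.media.get("images", [])))
            print("Internal links:", len(result.links.get("internal", [])))
            print("Markdown preview:", result.markdown[:200])
        else:
            print("Crawl failed:", result.error_message)

asyncio.run(main())
```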

View File

@@ -1,185 +0,0 @@
## Extraction Strategies 🧠
Crawl4AI offers powerful extraction strategies to derive meaningful information from web content. Let's dive into three of the most important strategies: `CosineStrategy`, `LLMExtractionStrategy`, and the new `JsonCssExtractionStrategy`.
### LLMExtractionStrategy
`LLMExtractionStrategy` leverages a Language Model (LLM) to extract meaningful content from HTML. This strategy uses an external provider for LLM completions to perform extraction based on instructions.
#### When to Use
- Suitable for complex extraction tasks requiring nuanced understanding.
- Ideal for scenarios where detailed instructions can guide the extraction process.
- Perfect for extracting specific types of information or content with precise guidelines.
#### Parameters
- `provider` (str, optional): Provider for language model completions (e.g., openai/gpt-4). Default is `DEFAULT_PROVIDER`.
- `api_token` (str, optional): API token for the provider. If not provided, it will try to load from the environment variable `OPENAI_API_KEY`.
- `instruction` (str, optional): Instructions to guide the LLM on how to perform the extraction. Default is `None`.
#### Example Without Instructions
```python
import asyncio
import os
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy
async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        # Define extraction strategy without instructions
        strategy = LLMExtractionStrategy(
            provider='openai',
            api_token=os.getenv('OPENAI_API_KEY')
        )

        # Sample URL
        url = "https://www.nbcnews.com/business"

        # Run the crawler with the extraction strategy
        result = await crawler.arun(url=url, extraction_strategy=strategy)
        print(result.extracted_content)

asyncio.run(main())
```
#### Example With Instructions
```python
import asyncio
import os
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy
async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        # Define extraction strategy with instructions
        strategy = LLMExtractionStrategy(
            provider='openai',
            api_token=os.getenv('OPENAI_API_KEY'),
            instruction="Extract only financial news and summarize key points."
        )

        # Sample URL
        url = "https://www.nbcnews.com/business"

        # Run the crawler with the extraction strategy
        result = await crawler.arun(url=url, extraction_strategy=strategy)
        print(result.extracted_content)

asyncio.run(main())
```
### JsonCssExtractionStrategy
`JsonCssExtractionStrategy` is a powerful tool for extracting structured data from HTML using CSS selectors. It allows you to define a schema that maps CSS selectors to specific fields, enabling precise and efficient data extraction.
#### When to Use
- Ideal for extracting structured data from websites with consistent HTML structures.
- Perfect for scenarios where you need to extract specific elements or attributes from a webpage.
- Suitable for creating datasets from web pages with tabular or list-based information.
#### Parameters
- `schema` (Dict[str, Any]): A dictionary defining the extraction schema, including base selector and field definitions.
#### Example
```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        # Define the extraction schema
        schema = {
            "name": "News Articles",
            "baseSelector": "article.tease-card",
            "fields": [
                {
                    "name": "title",
                    "selector": "h2",
                    "type": "text",
                },
                {
                    "name": "summary",
                    "selector": "div.tease-card__info",
                    "type": "text",
                },
                {
                    "name": "link",
                    "selector": "a",
                    "type": "attribute",
                    "attribute": "href"
                }
            ],
        }

        # Create the extraction strategy
        strategy = JsonCssExtractionStrategy(schema, verbose=True)

        # Sample URL
        url = "https://www.nbcnews.com/business"

        # Run the crawler with the extraction strategy
        result = await crawler.arun(url=url, extraction_strategy=strategy)

        # Parse and print the extracted content
        extracted_data = json.loads(result.extracted_content)
        print(json.dumps(extracted_data, indent=2))

asyncio.run(main())
```
#### Use Cases for JsonCssExtractionStrategy
- Extracting product information from e-commerce websites.
- Gathering news articles and their metadata from news portals.
- Collecting user reviews and ratings from review websites.
- Extracting job listings from job boards.
By choosing the right extraction strategy, you can effectively extract the most relevant and useful information from web content. Whether you need fast, accurate semantic segmentation with `CosineStrategy`, nuanced, instruction-based extraction with `LLMExtractionStrategy`, or precise structured data extraction with `JsonCssExtractionStrategy`, Crawl4AI has you covered. Happy extracting! 🕵️‍♂️✨
For more details on schema definitions and advanced extraction strategies, check out the [Advanced JsonCssExtraction](../full_details/advanced_jsoncss_extraction.md) guide.
### CosineStrategy
`CosineStrategy` uses hierarchical clustering based on cosine similarity to group text chunks into meaningful clusters. This method converts each chunk into its embedding and then clusters them to form semantic chunks.
#### When to Use
- Ideal for fast, accurate semantic segmentation of text.
- Perfect for scenarios where LLMs might be overkill or too slow.
- Suitable for narrowing down content based on specific queries or keywords.
#### Parameters
- `semantic_filter` (str, optional): Keywords for filtering relevant documents before clustering. Documents are filtered based on their cosine similarity to the keyword filter embedding. Default is `None`.
- `word_count_threshold` (int, optional): Minimum number of words per cluster. Default is `20`.
- `max_dist` (float, optional): Maximum cophenetic distance on the dendrogram to form clusters. Default is `0.2`.
- `linkage_method` (str, optional): Linkage method for hierarchical clustering. Default is `'ward'`.
- `top_k` (int, optional): Number of top categories to extract. Default is `3`.
- `model_name` (str, optional): Model name for embedding generation. Default is `'BAAI/bge-small-en-v1.5'`.
#### Example
```python
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import CosineStrategy
async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        # Define extraction strategy
        strategy = CosineStrategy(
            semantic_filter="finance economy stock market",
            word_count_threshold=10,
            max_dist=0.2,
            linkage_method='ward',
            top_k=3,
            model_name='BAAI/bge-small-en-v1.5'
        )

        # Sample URL
        url = "https://www.nbcnews.com/business"

        # Run the crawler with the extraction strategy
        result = await crawler.arun(url=url, extraction_strategy=strategy)
        print(result.extracted_content)

asyncio.run(main())
```

View File

@@ -1,276 +0,0 @@
# Session-Based Crawling for Dynamic Content
In modern web applications, content is often loaded dynamically without changing the URL. Examples include "Load More" buttons, infinite scrolling, or paginated content that updates via JavaScript. To effectively crawl such websites, Crawl4AI provides powerful session-based crawling capabilities.
This guide will explore advanced techniques for crawling dynamic content using Crawl4AI's session management features.
## Understanding Session-Based Crawling
Session-based crawling allows you to maintain a persistent browser session across multiple requests. This is crucial when:
1. The content changes dynamically without URL changes
2. You need to interact with the page (e.g., clicking buttons) between requests
3. The site requires authentication or maintains state across pages
Crawl4AI's `AsyncWebCrawler` class supports session-based crawling through the `session_id` parameter and related methods.
## Basic Concepts
Before diving into examples, let's review some key concepts:
- **Session ID**: A unique identifier for a browsing session. Use the same `session_id` across multiple `arun` calls to maintain state.
- **JavaScript Execution**: Use the `js_code` parameter to execute JavaScript on the page, such as clicking a "Load More" button.
- **CSS Selectors**: Use these to target specific elements for extraction or interaction.
- **Extraction Strategy**: Define how to extract structured data from the page.
- **Wait Conditions**: Specify conditions to wait for before considering the page loaded.
## Example 1: Basic Session-Based Crawling
Let's start with a basic example of session-based crawling:
```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def basic_session_crawl():
    async with AsyncWebCrawler(verbose=True) as crawler:
        session_id = "my_session"
        url = "https://example.com/dynamic-content"

        for page in range(3):
            result = await crawler.arun(
                url=url,
                session_id=session_id,
                js_code="document.querySelector('.load-more-button').click();" if page > 0 else None,
                css_selector=".content-item",
                bypass_cache=True
            )
            # extracted_content is a string, so report its size as a rough
            # indicator that new items were loaded on each pass
            print(f"Page {page + 1}: extracted {len(result.extracted_content or '')} characters")

        await crawler.crawler_strategy.kill_session(session_id)

asyncio.run(basic_session_crawl())
```
This example demonstrates:
1. Using a consistent `session_id` across multiple `arun` calls
2. Executing JavaScript to load more content after the first page
3. Using a CSS selector to extract specific content
4. Properly closing the session after crawling
## Advanced Technique 1: Custom Execution Hooks
Crawl4AI allows you to set custom hooks that execute at different stages of the crawling process. This is particularly useful for handling complex loading scenarios.
Here's an example that waits for new content to appear before proceeding:
```python
import asyncio
from bs4 import BeautifulSoup
from crawl4ai import AsyncWebCrawler

async def advanced_session_crawl_with_hooks():
    first_commit = ""

    async def on_execution_started(page):
        nonlocal first_commit
        try:
            while True:
                await page.wait_for_selector("li.commit-item h4")
                commit = await page.query_selector("li.commit-item h4")
                commit = await commit.evaluate("(element) => element.textContent")
                commit = commit.strip()
                if commit and commit != first_commit:
                    first_commit = commit
                    break
                await asyncio.sleep(0.5)
        except Exception as e:
            print(f"Warning: New content didn't appear after JavaScript execution: {e}")

    async with AsyncWebCrawler(verbose=True) as crawler:
        crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)

        url = "https://github.com/example/repo/commits/main"
        session_id = "commit_session"
        all_commits = []

        js_next_page = """
        const button = document.querySelector('a.pagination-next');
        if (button) button.click();
        """

        for page in range(3):
            result = await crawler.arun(
                url=url,
                session_id=session_id,
                css_selector="li.commit-item",
                js_code=js_next_page if page > 0 else None,
                bypass_cache=True,
                js_only=page > 0
            )

            # extracted_content is a string, so parse the returned HTML to
            # collect the individual commit elements
            soup = BeautifulSoup(result.html, "html.parser")
            commits = soup.select("li.commit-item")
            all_commits.extend(commits)
            print(f"Page {page + 1}: Found {len(commits)} commits")

        await crawler.crawler_strategy.kill_session(session_id)
        print(f"Successfully crawled {len(all_commits)} commits across 3 pages")

asyncio.run(advanced_session_crawl_with_hooks())
```
This technique uses a custom `on_execution_started` hook to ensure new content has loaded before proceeding to the next step.
## Advanced Technique 2: Integrated JavaScript Execution and Waiting
Instead of using separate hooks, you can integrate the waiting logic directly into your JavaScript execution. This approach can be more concise and easier to manage for some scenarios.
Here's an example:
```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def integrated_js_and_wait_crawl():
    async with AsyncWebCrawler(verbose=True) as crawler:
        url = "https://github.com/example/repo/commits/main"
        session_id = "integrated_session"
        all_commits = []

        js_next_page_and_wait = """
        (async () => {
            const getCurrentCommit = () => {
                const commits = document.querySelectorAll('li.commit-item h4');
                return commits.length > 0 ? commits[0].textContent.trim() : null;
            };
            const initialCommit = getCurrentCommit();
            const button = document.querySelector('a.pagination-next');
            if (button) button.click();
            while (true) {
                await new Promise(resolve => setTimeout(resolve, 100));
                const newCommit = getCurrentCommit();
                if (newCommit && newCommit !== initialCommit) {
                    break;
                }
            }
        })();
        """

        schema = {
            "name": "Commit Extractor",
            "baseSelector": "li.commit-item",
            "fields": [
                {
                    "name": "title",
                    "selector": "h4.commit-title",
                    "type": "text",
                    "transform": "strip",
                },
            ],
        }
        extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)

        for page in range(3):
            result = await crawler.arun(
                url=url,
                session_id=session_id,
                css_selector="li.commit-item",
                extraction_strategy=extraction_strategy,
                js_code=js_next_page_and_wait if page > 0 else None,
                js_only=page > 0,
                bypass_cache=True
            )

            commits = json.loads(result.extracted_content)
            all_commits.extend(commits)
            print(f"Page {page + 1}: Found {len(commits)} commits")

        await crawler.crawler_strategy.kill_session(session_id)
        print(f"Successfully crawled {len(all_commits)} commits across 3 pages")

asyncio.run(integrated_js_and_wait_crawl())
```
This approach combines the JavaScript for clicking the "next" button and waiting for new content to load into a single script.
## Advanced Technique 3: Using the `wait_for` Parameter
Crawl4AI provides a `wait_for` parameter that allows you to specify a condition to wait for before considering the page fully loaded. This can be particularly useful for dynamic content.
Here's an example:
```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def wait_for_parameter_crawl():
    async with AsyncWebCrawler(verbose=True) as crawler:
        url = "https://github.com/example/repo/commits/main"
        session_id = "wait_for_session"
        all_commits = []

        js_next_page = """
        const commits = document.querySelectorAll('li.commit-item h4');
        if (commits.length > 0) {
            window.lastCommit = commits[0].textContent.trim();
        }
        const button = document.querySelector('a.pagination-next');
        if (button) button.click();
        """

        wait_for = """() => {
            const commits = document.querySelectorAll('li.commit-item h4');
            if (commits.length === 0) return false;
            const firstCommit = commits[0].textContent.trim();
            return firstCommit !== window.lastCommit;
        }"""

        schema = {
            "name": "Commit Extractor",
            "baseSelector": "li.commit-item",
            "fields": [
                {
                    "name": "title",
                    "selector": "h4.commit-title",
                    "type": "text",
                    "transform": "strip",
                },
            ],
        }
        extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)

        for page in range(3):
            result = await crawler.arun(
                url=url,
                session_id=session_id,
                css_selector="li.commit-item",
                extraction_strategy=extraction_strategy,
                js_code=js_next_page if page > 0 else None,
                wait_for=wait_for if page > 0 else None,
                js_only=page > 0,
                bypass_cache=True
            )

            commits = json.loads(result.extracted_content)
            all_commits.extend(commits)
            print(f"Page {page + 1}: Found {len(commits)} commits")

        await crawler.crawler_strategy.kill_session(session_id)
        print(f"Successfully crawled {len(all_commits)} commits across 3 pages")

asyncio.run(wait_for_parameter_crawl())
```
This technique separates the JavaScript execution (clicking the "next" button) from the waiting condition, providing more flexibility and clarity in some scenarios.
## Best Practices for Session-Based Crawling
1. **Use Unique Session IDs**: Ensure each crawling session has a unique `session_id` to prevent conflicts.
2. **Close Sessions**: Always close sessions using `kill_session` when you're done to free up resources.
3. **Handle Errors**: Implement proper error handling to deal with unexpected situations during crawling.
4. **Respect Website Terms**: Ensure your crawling adheres to the website's terms of service and robots.txt file.
5. **Implement Delays**: Add appropriate delays between requests to avoid overwhelming the target server (see the sketch after this list).
6. **Use Extraction Strategies**: Leverage `JsonCssExtractionStrategy` or other extraction strategies for structured data extraction.
7. **Optimize JavaScript**: Keep your JavaScript execution concise and efficient to improve crawling speed.
8. **Monitor Performance**: Keep an eye on memory usage and crawling speed, especially for long-running sessions.
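As a minimal sketch of practices 2, 3, and 5 in combination (error handling, session cleanup, and polite delays), assuming the hypothetical target URL below:

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def polite_session_crawl():
    async with AsyncWebCrawler(verbose=True) as crawler:
        session_id = "polite_session"
        try:
            for page in range(3):
                result = await crawler.arun(
                    url="https://example.com/dynamic-content",  # hypothetical target
                    session_id=session_id,
                    bypass_cache=True
                )
                if not result.success:
                    print(f"Page {page + 1} failed: {result.error_message}")
                    break
                await asyncio.sleep(1)  # delay between requests (practice 5)
        finally:
            # Always release the session, even if an error occurred (practices 2 and 3)
            await crawler.crawler_strategy.kill_session(session_id)

asyncio.run(polite_session_crawl())
```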
## Conclusion
Session-based crawling with Crawl4AI provides powerful capabilities for handling dynamic content and complex web applications. By leveraging session management, JavaScript execution, and waiting strategies, you can effectively crawl and extract data from a wide range of modern websites.
Remember to use these techniques responsibly and in compliance with website policies and ethical web scraping practices.
For more advanced usage and API details, refer to the Crawl4AI API documentation.

View File

@@ -1,93 +0,0 @@
# Crawl4AI
Welcome to the official documentation for Crawl4AI! 🕷️🤖 Crawl4AI is an open-source Python library designed to simplify web crawling and extract useful information from web pages. This documentation will guide you through the features, usage, and customization of Crawl4AI.
## Introduction
Crawl4AI has one clear task: to make crawling and data extraction from web pages easy and efficient, especially for large language models (LLMs) and AI applications. Whether you are using it as a REST API or a Python library, Crawl4AI offers a robust and flexible solution with full asynchronous support.
## Quick Start
Here's a quick example to show you how easy it is to use Crawl4AI with its new asynchronous capabilities:
```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    # Create an instance of AsyncWebCrawler
    async with AsyncWebCrawler(verbose=True) as crawler:
        # Run the crawler on a URL
        result = await crawler.arun(url="https://www.nbcnews.com/business")

        # Print the extracted content
        print(result.markdown)

# Run the async main function
asyncio.run(main())
```
### Explanation
1. **Importing the Library**: We start by importing the `AsyncWebCrawler` class from the `crawl4ai` library and the `asyncio` module.
2. **Creating an Async Context**: We use an async context manager to create an instance of `AsyncWebCrawler`.
3. **Running the Crawler**: The `arun()` method is used to asynchronously crawl the specified URL and extract meaningful content.
4. **Printing the Result**: The extracted content is printed, showcasing the data extracted from the web page.
5. **Running the Async Function**: We use `asyncio.run()` to execute our async main function.
## Documentation Structure
This documentation is organized into several sections to help you navigate and find the information you need quickly:
### [Home](index.md)
An introduction to Crawl4AI, including a quick start guide and an overview of the documentation structure.
### [Installation](installation.md)
Instructions on how to install Crawl4AI and its dependencies.
### [Introduction](introduction.md)
A detailed introduction to Crawl4AI, its features, and how it can be used for various web crawling and data extraction tasks.
### [Quick Start](quickstart.md)
A step-by-step guide to get you up and running with Crawl4AI, including installation instructions and basic usage examples.
### [Examples](examples/index.md)
This section contains practical examples demonstrating different use cases of Crawl4AI:
- [Structured Data Extraction](examples/json_css_extraction.md)
- [LLM Extraction](examples/llm_extraction.md)
- [JS Execution & CSS Filtering](examples/js_execution_css_filtering.md)
- [Hooks & Auth](examples/hooks_auth.md)
- [Summarization](examples/summarization.md)
- [Research Assistant](examples/research_assistant.md)
### [Full Details of Using Crawler](full_details/crawl_request_parameters.md)
Comprehensive details on using the crawler, including:
- [Crawl Request Parameters](full_details/crawl_request_parameters.md)
- [Crawl Result Class](full_details/crawl_result_class.md)
- [Session Based Crawling](full_details/session_based_crawling.md)
- [Advanced Structured Data Extraction JsonCssExtraction](full_details/advanced_jsoncss_extraction.md)
- [Advanced Features](full_details/advanced_features.md)
- [Chunking Strategies](full_details/chunking_strategies.md)
- [Extraction Strategies](full_details/extraction_strategies.md)
### [Change Log](changelog.md)
A log of all changes, updates, and improvements made to Crawl4AI.
### [Contact](contact.md)
Information on how to get in touch with the developers, report issues, and contribute to the project.
## Get Started
To get started with Crawl4AI, follow the quick start guide above or explore the detailed sections of this documentation. Whether you are a beginner or an advanced user, Crawl4AI has something to offer to make your web crawling and data extraction tasks easier, more efficient, and now fully asynchronous.
Happy Crawling! 🕸️🚀

View File

@@ -1,92 +0,0 @@
# Installation 💻
Crawl4AI offers flexible installation options to suit various use cases. You can install it as a Python package, use it with Docker, or run it as a local server.
## Option 1: Python Package Installation (Recommended)
Crawl4AI is now available on PyPI, making installation easier than ever. Choose the option that best fits your needs:
### Basic Installation
For basic web crawling and scraping tasks:
```bash
pip install crawl4ai
playwright install # Install Playwright dependencies
```
### Installation with PyTorch
For advanced text clustering (including the cosine similarity clustering strategy):
```bash
pip install crawl4ai[torch]
```
### Installation with Transformers
For text summarization and Hugging Face models:
```bash
pip install crawl4ai[transformer]
```
### Full Installation
For all features:
```bash
pip install crawl4ai[all]
```
### Development Installation
For contributors who plan to modify the source code:
```bash
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai
pip install -e ".[all]"
playwright install # Install Playwright dependencies
```
💡 After installation with "torch", "transformer", or "all" options, it's recommended to run the following CLI command to load the required models:
```bash
crawl4ai-download-models
```
This is optional but will boost the performance and speed of the crawler. You only need to do this once after installation.
## Option 2: Using Docker (Coming Soon)
Docker support for Crawl4AI is currently in progress and will be available soon. This will allow you to run Crawl4AI in a containerized environment, ensuring consistency across different systems.
## Option 3: Local Server Installation
For those who prefer to run Crawl4AI as a local server, instructions will be provided once the Docker implementation is complete.
## Verifying Your Installation
After installation, you can verify that Crawl4AI is working correctly by running a simple Python script:
```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(url="https://www.example.com")
        print(result.markdown[:500])  # Print first 500 characters

if __name__ == "__main__":
    asyncio.run(main())
```
This script should successfully crawl the example website and print the first 500 characters of the extracted content.
## Getting Help
If you encounter any issues during installation or usage, please check the [documentation](https://crawl4ai.com/mkdocs/) or raise an issue on the [GitHub repository](https://github.com/unclecode/crawl4ai/issues).
Happy crawling! 🕷️🤖


@@ -1,28 +0,0 @@
<h1>Try Our Library</h1>
<form id="apiForm">
  <label for="inputField">Enter some input:</label>
  <input type="text" id="inputField" name="inputField" required>
  <button type="submit">Submit</button>
</form>
<div id="result"></div>
<script>
  document.getElementById('apiForm').addEventListener('submit', function (event) {
    event.preventDefault();
    const input = document.getElementById('inputField').value;
    fetch('https://your-api-endpoint.com/api', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ input: input })
    })
      .then(response => response.json())
      .then(data => {
        document.getElementById('result').textContent = JSON.stringify(data);
      })
      .catch(error => {
        document.getElementById('result').textContent = 'Error: ' + error;
      });
  });
</script>


@@ -1,29 +0,0 @@
# Introduction
Welcome to the documentation for Crawl4AI v0.2.5! 🕷️🤖
Crawl4AI is designed to simplify the process of crawling web pages and extracting useful information for large language models (LLMs) and AI applications. Whether you're using it as a REST API, a Python library, or through a Google Colab notebook, Crawl4AI provides powerful features to make web data extraction easier and more efficient.
## Key Features ✨
- **🆓 Completely Free and Open-Source**: Crawl4AI is free to use and open-source, making it accessible for everyone.
- **🤖 LLM-Friendly Output Formats**: Supports JSON, cleaned HTML, and markdown formats.
- **🌍 Concurrent Crawling**: Crawl multiple URLs simultaneously to save time.
- **🎨 Media Extraction**: Extract all media tags, including images, audio, and video.
- **🔗 Link Extraction**: Extract all external and internal links from web pages (see the sketch after this list).
- **📚 Metadata Extraction**: Extract metadata from web pages for additional context.
- **🔄 Custom Hooks**: Define custom hooks for authentication, headers, and page modifications before crawling.
- **🕵️ User Agent Support**: Customize the user agent for HTTP requests.
- **🖼️ Screenshot Capability**: Take screenshots of web pages during crawling.
- **📜 JavaScript Execution**: Execute custom JavaScript before crawling.
- **📚 Advanced Chunking and Extraction Strategies**: Utilize topic-based, regex, and sentence chunking, cosine clustering, and LLM-based extraction strategies.
- **🎯 CSS Selector Support**: Extract specific content using CSS selectors.
- **📝 Instruction/Keyword Refinement**: Pass instructions or keywords to refine the extraction process.
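For instance, the media and link extraction features surface on the crawl result roughly like this. This is a minimal sketch using the asynchronous interface from the Quick Start guide, assuming the result object exposes `media` and `links` collections as described in the Crawl Result Class section:
```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://www.example.com")
        # Media and links are returned alongside the markdown/HTML output
        # (attribute layout assumed from the Crawl Result Class docs)
        print(len(result.media.get("images", [])), "images found")
        print(len(result.links.get("internal", [])), "internal links")
        print(len(result.links.get("external", [])), "external links")

asyncio.run(main())
```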
Check the [Changelog](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md) for more details.
## Power and Simplicity of Crawl4AI 🚀
Crawl4AI provides an easy way to crawl and extract data from web pages without installing any library: use the REST API on our server, or run the local server on your own machine, as the sketch below illustrates. For more advanced control, use the Python library to customize your crawling and extraction strategies.
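As a sketch of the REST route (the endpoint URL and payload shape below are illustrative assumptions; consult the API documentation for the actual interface):
```python
import requests

# Hypothetical local server address and payload schema, shown for illustration only;
# check the Crawl4AI API docs for the real endpoint and request format.
response = requests.post(
    "http://localhost:8000/crawl",
    json={"urls": ["https://www.example.com"]},
)
print(response.json())
```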
Explore the documentation to learn more about the features, installation process, usage examples, and how to contribute to Crawl4AI. Let's make the web more accessible and useful for AI applications! 💪🌐🤖


@@ -1,45 +0,0 @@
site_name: Crawl4AI Documentation
site_description: 🔥🕷️ Crawl4AI, Open-source LLM Friendly Web Crawler & Scraper
site_url: https://docs.crawl4ai.com
repo_url: https://github.com/unclecode/crawl4ai
repo_name: unclecode/crawl4ai
docs_dir: docs/md
nav:
- Home: index.md
- First Steps:
- Introduction: introduction.md
- Installation: installation.md
- Quick Start: quickstart.md
- Examples:
- Intro: examples/index.md
- Structured Data Extraction: examples/json_css_extraction.md
- LLM Extraction: examples/llm_extraction.md
- JS Execution & CSS Filtering: examples/js_execution_css_filtering.md
- Hooks & Auth: examples/hooks_auth.md
- Summarization: examples/summarization.md
- Research Assistant: examples/research_assistant.md
- Full Details of Using Crawler:
- Crawl Request Parameters: full_details/crawl_request_parameters.md
- Crawl Result Class: full_details/crawl_result_class.md
- Session Based Crawling: full_details/session_based_crawling.md
- Advanced Features: full_details/advanced_features.md
- Advanced JsonCssExtraction: full_details/advanced_jsoncss_extraction.md
- Chunking Strategies: full_details/chunking_strategies.md
- Extraction Strategies: full_details/extraction_strategies.md
- Miscellaneous:
- Change Log: changelog.md
- Contact: contact.md
theme:
name: terminal
palette: dark
# Extra stylesheet and script assets
extra_css:
- assets/styles.css
- assets/highlight.css
- assets/dmvendor.css
extra_javascript:
- assets/highlight.min.js
- assets/highlight_init.js
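To preview this site locally, something like the following should work, assuming the `terminal` theme is provided by the `mkdocs-terminal` package (a package-name assumption, since the config pins only the theme name):
```bash
pip install mkdocs mkdocs-terminal  # theme package assumed from `name: terminal`
mkdocs serve                        # serves docs/md (per docs_dir) with live reload
```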


@@ -1,285 +0,0 @@
# Quick Start Guide 🚀
Welcome to the Crawl4AI Quickstart Guide! In this tutorial, we'll walk you through the basic usage of Crawl4AI with a friendly and humorous tone. We'll cover everything from basic usage to advanced features like chunking and extraction strategies, all with the power of asynchronous programming. Let's dive in! 🌟
## Getting Started 🛠️
First, let's import the necessary modules and create an instance of `AsyncWebCrawler`. We'll use an async context manager, which handles the setup and teardown of the crawler for us.
```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        # We'll add our crawling code here
        pass

if __name__ == "__main__":
    asyncio.run(main())
```
### Basic Usage
Simply provide a URL and let Crawl4AI do the magic!
```python
async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(url="https://www.nbcnews.com/business")
        print(f"Basic crawl result: {result.markdown[:500]}")  # Print first 500 characters

asyncio.run(main())
```
### Taking Screenshots 📸
Let's take a screenshot of the page!
```python
import base64

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(url="https://www.nbcnews.com/business", screenshot=True)
        with open("screenshot.png", "wb") as f:
            f.write(base64.b64decode(result.screenshot))
        print("Screenshot saved to 'screenshot.png'!")

asyncio.run(main())
```
### Understanding Parameters 🧠
By default, Crawl4AI caches the results of your crawls. This means that subsequent crawls of the same URL will be much faster! Let's see this in action.
```python
async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        # First crawl (caches the result)
        result1 = await crawler.arun(url="https://www.nbcnews.com/business")
        print(f"First crawl result: {result1.markdown[:100]}...")
        # Bypass the cache to force a fresh crawl
        result2 = await crawler.arun(url="https://www.nbcnews.com/business", bypass_cache=True)
        print(f"Second crawl result: {result2.markdown[:100]}...")

asyncio.run(main())
```
### Adding a Chunking Strategy 🧩
Let's add a chunking strategy: `RegexChunking`! This strategy splits the text based on a given regex pattern.
```python
from crawl4ai.chunking_strategy import RegexChunking

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            chunking_strategy=RegexChunking(patterns=["\n\n"])
        )
        print(f"RegexChunking result: {result.extracted_content[:200]}...")

asyncio.run(main())
```
### Adding an Extraction Strategy 🧠
Let's get smarter with an extraction strategy: `JsonCssExtractionStrategy`! This strategy extracts structured data from HTML using CSS selectors.
```python
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
import json

async def main():
    schema = {
        "name": "News Articles",
        "baseSelector": "article.tease-card",
        "fields": [
            {
                "name": "title",
                "selector": "h2",
                "type": "text",
            },
            {
                "name": "summary",
                "selector": "div.tease-card__info",
                "type": "text",
            }
        ],
    }
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            extraction_strategy=JsonCssExtractionStrategy(schema, verbose=True)
        )
        extracted_data = json.loads(result.extracted_content)
        print(f"Extracted {len(extracted_data)} articles")
        print(json.dumps(extracted_data[0], indent=2))

asyncio.run(main())
```
### Using LLMExtractionStrategy 🤖
Time to bring in the big guns: `LLMExtractionStrategy`! This strategy uses a large language model to extract relevant information from the web page.
```python
from crawl4ai.extraction_strategy import LLMExtractionStrategy
import os
from pydantic import BaseModel, Field

class OpenAIModelFee(BaseModel):
    model_name: str = Field(..., description="Name of the OpenAI model.")
    input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
    output_fee: str = Field(..., description="Fee for output token for the OpenAI model.")

async def main():
    if not os.getenv("OPENAI_API_KEY"):
        print("OpenAI API key not found. Skipping this example.")
        return
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://openai.com/api/pricing/",
            word_count_threshold=1,
            extraction_strategy=LLMExtractionStrategy(
                provider="openai/gpt-4o",  # Or use an open-source model like "ollama/nemotron"
                api_token=os.getenv("OPENAI_API_KEY"),  # Pass "no-token" if using Ollama
                schema=OpenAIModelFee.schema(),
                extraction_type="schema",
                instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
                Do not miss any models in the entire content. One extracted model JSON format should look like this:
                {"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}.""",
            ),
            bypass_cache=True,
        )
        print(result.extracted_content)

asyncio.run(main())
```
### Interactive Extraction 🖱️
Let's use JavaScript to interact with the page before extraction!
```python
async def main():
    js_code = """
    const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
    loadMoreButton && loadMoreButton.click();
    """
    wait_for = """() => {
        return Array.from(document.querySelectorAll('article.tease-card')).length > 10;
    }"""
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            js_code=js_code,
            wait_for=wait_for,
            css_selector="article.tease-card",
            bypass_cache=True,
        )
        print(f"JavaScript interaction result: {result.extracted_content[:500]}")

asyncio.run(main())
```
### Advanced Session-Based Crawling with Dynamic Content 🔄
In modern web applications, content is often loaded dynamically without changing the URL. This is common in single-page applications (SPAs) or websites using infinite scrolling. Traditional crawling methods that rely on URL changes won't work here. That's where Crawl4AI's advanced session-based crawling comes in handy!
Here's what makes this approach powerful:
1. **Session Preservation**: By using a `session_id`, we can maintain the state of our crawling session across multiple interactions with the page. This is crucial for navigating through dynamically loaded content.
2. **Asynchronous JavaScript Execution**: We can execute custom JavaScript to trigger content loading or navigation. In this example, we'll click a "Load More" button to fetch the next page of commits.
3. **Dynamic Content Waiting**: The `wait_for` parameter allows us to specify a condition that must be met before considering the page load complete. This ensures we don't extract data before the new content is fully loaded.
Let's see how this works with a real-world example: crawling multiple pages of commits on a GitHub repository. The URL doesn't change as we load more commits, so we'll use these advanced techniques to navigate and extract data.
```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        url = "https://github.com/microsoft/TypeScript/commits/main"
        session_id = "typescript_commits_session"
        all_commits = []
        js_next_page = """
        const button = document.querySelector('a[data-testid="pagination-next-button"]');
        if (button) button.click();
        """
        wait_for = """() => {
            const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
            if (commits.length === 0) return false;
            const firstCommit = commits[0].textContent.trim();
            return firstCommit !== window.lastCommit;
        }"""
        schema = {
            "name": "Commit Extractor",
            "baseSelector": "li.Box-sc-g0xbh4-0",
            "fields": [
                {
                    "name": "title",
                    "selector": "h4.markdown-title",
                    "type": "text",
                    "transform": "strip",
                },
            ],
        }
        extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)
        for page in range(3):  # Crawl 3 pages
            result = await crawler.arun(
                url=url,
                session_id=session_id,
                css_selector="li.Box-sc-g0xbh4-0",
                extraction_strategy=extraction_strategy,
                js_code=js_next_page if page > 0 else None,
                wait_for=wait_for if page > 0 else None,
                js_only=page > 0,
                bypass_cache=True,
                headless=False,
            )
            assert result.success, f"Failed to crawl page {page + 1}"
            commits = json.loads(result.extracted_content)
            all_commits.extend(commits)
            print(f"Page {page + 1}: Found {len(commits)} commits")
        await crawler.crawler_strategy.kill_session(session_id)
        print(f"Successfully crawled {len(all_commits)} commits across 3 pages")

asyncio.run(main())
```
In this example, we crawl multiple pages of commits from a GitHub repository. Because the URL doesn't change as more commits load, we use JavaScript to click the "Load More" button and a `wait_for` condition to confirm the new content has rendered before extraction. This combination lets us navigate and extract data from complex, dynamically loaded web applications with ease!
## Congratulations! 🎉
You've made it through the Crawl4AI Quickstart Guide! Now go forth and crawl the web asynchronously like a pro! 🕸️
Remember, these are just a few examples of what Crawl4AI can do. For more advanced usage, check out our other documentation pages:
- [LLM Extraction](examples/llm_extraction.md)
- [JS Execution & CSS Filtering](examples/js_execution_css_filtering.md)
- [Hooks & Auth](examples/hooks_auth.md)
- [Summarization](examples/summarization.md)
- [Research Assistant](examples/research_assistant.md)
Happy crawling! 🚀


@@ -0,0 +1,35 @@
# Parameter Reference Table
| File Name | Parameter Name | Code Usage | Strategy/Class | Description |
|-----------|---------------|------------|----------------|-------------|
| async_crawler_strategy.py | user_agent | `kwargs.get("user_agent")` | AsyncPlaywrightCrawlerStrategy | User agent string for browser identification |
| async_crawler_strategy.py | proxy | `kwargs.get("proxy")` | AsyncPlaywrightCrawlerStrategy | Proxy server configuration for network requests |
| async_crawler_strategy.py | proxy_config | `kwargs.get("proxy_config")` | AsyncPlaywrightCrawlerStrategy | Detailed proxy configuration including auth |
| async_crawler_strategy.py | headless | `kwargs.get("headless", True)` | AsyncPlaywrightCrawlerStrategy | Whether to run browser in headless mode |
| async_crawler_strategy.py | browser_type | `kwargs.get("browser_type", "chromium")` | AsyncPlaywrightCrawlerStrategy | Type of browser to use (chromium/firefox/webkit) |
| async_crawler_strategy.py | headers | `kwargs.get("headers", {})` | AsyncPlaywrightCrawlerStrategy | Custom HTTP headers for requests |
| async_crawler_strategy.py | verbose | `kwargs.get("verbose", False)` | AsyncPlaywrightCrawlerStrategy | Enable detailed logging output |
| async_crawler_strategy.py | sleep_on_close | `kwargs.get("sleep_on_close", False)` | AsyncPlaywrightCrawlerStrategy | Add delay before closing browser |
| async_crawler_strategy.py | use_managed_browser | `kwargs.get("use_managed_browser", False)` | AsyncPlaywrightCrawlerStrategy | Use managed browser instance |
| async_crawler_strategy.py | user_data_dir | `kwargs.get("user_data_dir", None)` | AsyncPlaywrightCrawlerStrategy | Custom directory for browser profile data |
| async_crawler_strategy.py | session_id | `kwargs.get("session_id")` | AsyncPlaywrightCrawlerStrategy | Unique identifier for browser session |
| async_crawler_strategy.py | override_navigator | `kwargs.get("override_navigator", False)` | AsyncPlaywrightCrawlerStrategy | Override browser navigator properties |
| async_crawler_strategy.py | simulate_user | `kwargs.get("simulate_user", False)` | AsyncPlaywrightCrawlerStrategy | Simulate human-like behavior |
| async_crawler_strategy.py | magic | `kwargs.get("magic", False)` | AsyncPlaywrightCrawlerStrategy | Enable advanced anti-detection features |
| async_crawler_strategy.py | log_console | `kwargs.get("log_console", False)` | AsyncPlaywrightCrawlerStrategy | Log browser console messages |
| async_crawler_strategy.py | js_only | `kwargs.get("js_only", False)` | AsyncPlaywrightCrawlerStrategy | Only execute JavaScript without page load |
| async_crawler_strategy.py | page_timeout | `kwargs.get("page_timeout", 60000)` | AsyncPlaywrightCrawlerStrategy | Timeout for page load in milliseconds |
| async_crawler_strategy.py | ignore_body_visibility | `kwargs.get("ignore_body_visibility", True)` | AsyncPlaywrightCrawlerStrategy | Process page even if body is hidden |
| async_crawler_strategy.py | js_code | `kwargs.get("js_code", kwargs.get("js", self.js_code))` | AsyncPlaywrightCrawlerStrategy | Custom JavaScript code to execute |
| async_crawler_strategy.py | wait_for | `kwargs.get("wait_for")` | AsyncPlaywrightCrawlerStrategy | Wait for specific element/condition |
| async_crawler_strategy.py | process_iframes | `kwargs.get("process_iframes", False)` | AsyncPlaywrightCrawlerStrategy | Extract content from iframes |
| async_crawler_strategy.py | delay_before_return_html | `kwargs.get("delay_before_return_html")` | AsyncPlaywrightCrawlerStrategy | Additional delay before returning HTML |
| async_crawler_strategy.py | remove_overlay_elements | `kwargs.get("remove_overlay_elements", False)` | AsyncPlaywrightCrawlerStrategy | Remove pop-ups and overlay elements |
| async_crawler_strategy.py | screenshot | `kwargs.get("screenshot")` | AsyncPlaywrightCrawlerStrategy | Take page screenshot |
| async_crawler_strategy.py | screenshot_wait_for | `kwargs.get("screenshot_wait_for")` | AsyncPlaywrightCrawlerStrategy | Wait before taking screenshot |
| async_crawler_strategy.py | semaphore_count | `kwargs.get("semaphore_count", 5)` | AsyncPlaywrightCrawlerStrategy | Concurrent request limit |
| async_webcrawler.py | verbose | `kwargs.get("verbose", False)` | AsyncWebCrawler | Enable detailed logging |
| async_webcrawler.py | warmup | `kwargs.get("warmup", True)` | AsyncWebCrawler | Initialize crawler with warmup request |
| async_webcrawler.py | session_id | `kwargs.get("session_id", None)` | AsyncWebCrawler | Session identifier for browser reuse |
| async_webcrawler.py | only_text | `kwargs.get("only_text", False)` | AsyncWebCrawler | Extract only text content |
| async_webcrawler.py | bypass_cache | `kwargs.get("bypass_cache", False)` | AsyncWebCrawler | Skip cache and force fresh crawl |
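Most of these options are plain keyword arguments, so they can be combined on the constructor and on `arun`. The sketch below assumes kwargs are forwarded as the "Code Usage" column suggests; the values shown are illustrative:
```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    # Constructor-level options are passed through to the crawler strategy
    async with AsyncWebCrawler(
        browser_type="chromium",  # chromium / firefox / webkit
        headless=True,
        verbose=True,
    ) as crawler:
        # Per-request options from the table above
        result = await crawler.arun(
            url="https://www.example.com",
            page_timeout=60000,            # milliseconds
            remove_overlay_elements=True,  # strip pop-ups before extraction
            bypass_cache=True,             # skip cache, force a fresh crawl
        )
        print(result.markdown[:200])

asyncio.run(main())
```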

BIN
docs/md_v2/assets/docs.zip Normal file

Binary file not shown.
