refactor: update cache handling in quickstart_async example to use CacheMode enum
README.md · 470 additions, 470 deletions
@@ -29,94 +29,86 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc
## Features ✨

<details open>
<summary>📝 <strong>Markdown Generation</strong></summary>

- 🧹 **Clean Markdown**: Generates clean, structured Markdown with accurate formatting.
- 🎯 **Fit Markdown**: Heuristic-based filtering to remove noise and irrelevant parts for AI-friendly processing.
- 🔗 **Citations and References**: Converts page links into a numbered reference list with clean citations.
- 🛠️ **Custom Strategies**: Users can create their own Markdown generation strategies tailored to specific needs.
- 📚 **BM25 Algorithm**: Employs BM25-based filtering for extracting core information and removing irrelevant content.

</details>

<details open>
<summary>📊 <strong>Structured Data Extraction</strong></summary>

- 🤖 **LLM-Driven Extraction**: Supports all LLMs (open-source and proprietary) for structured data extraction.
- 🧱 **Chunking Strategies**: Implements chunking (topic-based, regex, sentence-level) for targeted content processing.
- 🌌 **Cosine Similarity**: Finds relevant content chunks based on user queries for semantic extraction.
- 🔎 **CSS-Based Extraction**: Fast schema-based data extraction using XPath and CSS selectors.
- 🔧 **Schema Definition**: Define custom schemas for extracting structured JSON from repetitive patterns.

</details>

<details open>
<summary>🌐 <strong>Browser Integration</strong></summary>

- 🖥️ **Managed Browser**: Use user-owned browsers with full control, avoiding bot detection.
- 🔄 **Remote Browser Control**: Connect via the Chrome Developer Tools Protocol for remote, large-scale data extraction.
- 🔒 **Session Management**: Preserve browser states and reuse them for multi-step crawling.
- 🧩 **Proxy Support**: Seamlessly connect to proxies with authentication for secure access.
- ⚙️ **Full Browser Control**: Modify headers, cookies, user agents, and more for tailored crawling setups.
- 🌍 **Multi-Browser Support**: Compatible with Chromium, Firefox, and WebKit.

</details>

<details open>
<summary>🔎 <strong>Crawling & Scraping</strong></summary>

- 🖼️ **Media Support**: Extract images, audio, videos, and responsive image formats like `srcset` and `picture`.
- 🚀 **Dynamic Crawling**: Execute JavaScript and wait for synchronous or asynchronous content before extraction.
- 📸 **Screenshots**: Capture page screenshots during crawling for debugging or analysis.
- 📂 **Raw Data Crawling**: Directly process raw HTML (`raw:`) or local files (`file://`).
- 🔗 **Comprehensive Link Extraction**: Extracts internal links, external links, and embedded iframe content.
- 🛠️ **Customizable Hooks**: Define hooks at every step to customize crawling behavior.
- 💾 **Caching**: Cache data for improved speed and to avoid redundant fetches.
- 📄 **Metadata Extraction**: Retrieve structured metadata from web pages.
- 📡 **IFrame Content Extraction**: Seamless extraction from embedded iframe content.

</details>

<details open>
<summary>🚀 <strong>Deployment</strong></summary>

- 🐳 **Dockerized Setup**: Optimized Docker image with an API server for easy deployment.
- 🔄 **API Gateway**: One-click deployment with secure token authentication for API-based workflows.
- 🌐 **Scalable Architecture**: Designed for mass-scale production and optimized server performance.
- ⚙️ **DigitalOcean Deployment**: Ready-to-deploy configurations for DigitalOcean and similar platforms.

</details>

<details>
<summary>🎯 <strong>Additional Features</strong></summary>

- 🕶️ **Stealth Mode**: Avoid bot detection by mimicking real users.
- 🏷️ **Tag-Based Content Extraction**: Refine crawling based on custom tags, headers, or metadata.
- 🔗 **Link Analysis**: Extract and analyze all links for detailed data exploration.
- 🛡️ **Error Handling**: Robust error management for seamless execution.
- 🔐 **CORS & Static Serving**: Supports filesystem-based caching and cross-origin requests.
- 📖 **Clear Documentation**: Simplified and updated guides for onboarding and advanced usage.
- 🙌 **Community Recognition**: Acknowledges contributors and pull requests for transparency.

</details>
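The caching bullet above corresponds to the `CacheMode` enum that the examples later in this README pass as `cache_mode=` (`CacheMode.ENABLED` and `CacheMode.BYPASS` both appear below). As a conceptual sketch of the semantics only (the enum and helper here are illustrative stand-ins, not the library's implementation):

```python
from enum import Enum

class CacheMode(Enum):
    # Illustrative stand-in for crawl4ai's CacheMode; only the two
    # members used later in this README are modeled here.
    ENABLED = "enabled"  # read from the cache, write fresh fetches back
    BYPASS = "bypass"    # always fetch fresh, leave the cache untouched

def fetch(url, cache, fetcher, mode=CacheMode.ENABLED):
    """Return the page for `url`, honoring the requested cache mode."""
    if mode is CacheMode.ENABLED and url in cache:
        return cache[url]          # cache hit: no network call
    page = fetcher(url)            # cache miss or bypass: really fetch
    if mode is CacheMode.ENABLED:
        cache[url] = page
    return page
```

With `CacheMode.ENABLED`, repeated crawls of the same URL can be served from cache; `CacheMode.BYPASS` forces a fresh fetch, which is what the extraction examples below use.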
## Installation 🛠️

Crawl4AI offers flexible installation options to suit various use cases. You can install it as a Python package or use Docker.

<details open>
<summary>🐍 <strong>Using pip</strong></summary>

Choose the installation option that best fits your needs:

### Basic Installation

For basic web crawling and scraping tasks:

```bash
pip install crawl4ai
```

By default, this will install the asynchronous version of Crawl4AI, using Playwright for web crawling.

👉 **Note**: When you install Crawl4AI, the setup script should automatically install and set up Playwright. However, if you encounter any Playwright-related errors, you can manually install it using one of these methods:

1. Through the command line:
@@ -142,15 +134,19 @@ By default, this will install the asynchronous version of Crawl4AI, using Playwr
This second method has proven to be more reliable in some cases.

---

### Installation with Synchronous Version

The sync version is deprecated and will be removed in future versions. If you need the synchronous version using Selenium:

```bash
pip install crawl4ai[sync]
```

---

### Development Installation

For contributors who plan to modify the source code:
```bash
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai
pip install -e .                    # Basic installation in editable mode
```

Install optional features:

```bash
pip install -e ".[torch]"           # With PyTorch features
pip install -e ".[transformer]"     # With Transformer features
pip install -e ".[sync]"            # With synchronous crawling (Selenium)
pip install -e ".[all]"             # Install all optional features
```

</details>

<details open>
<summary>🚀 <strong>One-Click Deployment</strong></summary>

Deploy your own instance of Crawl4AI with one click:
@@ -179,14 +180,19 @@ Deploy your own instance of Crawl4AI with one click:
The deploy will:
- Set up a Docker container with Crawl4AI
- Configure Playwright and all dependencies
- Start the FastAPI server on port `11235`
- Set up health checks and auto-deployment

</details>

<details open>
<summary>🐳 <strong>Using Docker</strong></summary>

Crawl4AI is available as Docker images for easy deployment. You can either pull directly from Docker Hub (recommended) or build from the repository.

---

### Option 1: Docker Hub (Recommended)

```bash
# Pull and run from Docker Hub (choose one):
docker run --platform linux/arm64 -p 11235:11235 unclecode/crawl4ai:basic
docker run --shm-size=2gb -p 11235:11235 unclecode/crawl4ai:basic
```

---

### Option 2: Build from Repository

```bash
# Clone the repository
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai
```
@@ -226,7 +234,12 @@ docker build -t crawl4ai:local \
```bash
docker run -p 11235:11235 crawl4ai:local
```

---

### Quick Test

Run a quick test (works for both Docker options):

```python
import requests
```
@@ -243,143 +256,149 @@ result = requests.get(f"http://localhost:11235/task/{task_id}")
For advanced configuration, environment variables, and usage examples, see our [Docker Deployment Guide](https://crawl4ai.com/mkdocs/basic/docker-deployment/).
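The body of the quick test is elided in this diff; only the polling endpoint `/task/{task_id}` appears in the hunk context above. As a rough sketch of the request flow, where the submission endpoint name and payload shape are assumptions (check the Docker Deployment Guide for the authoritative format):

```python
import requests

API = "http://localhost:11235"

def make_crawl_payload(url: str, priority: int = 10) -> dict:
    # Payload shape is an assumption, shown for illustration only.
    return {"urls": url, "priority": priority}

def crawl_via_api(url: str) -> dict:
    # Submit the crawl job (endpoint name assumed), then poll the
    # /task/{task_id} endpoint shown in the quick-test hunk above.
    task_id = requests.post(f"{API}/crawl", json=make_crawl_payload(url)).json()["task_id"]
    while True:
        status = requests.get(f"{API}/task/{task_id}").json()
        if status.get("status") == "completed":
            return status["result"]
```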
</details>

## Quick Start 🚀
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CacheMode

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(url="https://www.nbcnews.com/business")
        print(result.markdown_v2.raw_markdown)  # Soon this will change to result.markdown

if __name__ == "__main__":
    asyncio.run(main())
```
## Advanced Usage Examples 🔬

You can check the project structure in the directory [docs/examples](https://github.com/unclecode/crawl4ai/tree/main/docs/examples). There you can find a variety of examples; some popular ones are shared below.

<details open>
<summary>🖥️ <strong>Heuristic Markdown Generation with Clean and Fit Markdown</strong></summary>

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CacheMode
from crawl4ai.content_filter_strategy import BM25ContentFilter
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

async def main():
    async with AsyncWebCrawler(
        headless=True,
        verbose=True,
    ) as crawler:
        result = await crawler.arun(
            url="https://docs.micronaut.io/4.7.6/guide/",
            cache_mode=CacheMode.ENABLED,
            markdown_generator=DefaultMarkdownGenerator(
                content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
            ),
        )
        print(len(result.markdown))
        print(len(result.fit_markdown))
        print(len(result.markdown_v2.fit_markdown))

if __name__ == "__main__":
    asyncio.run(main())
```
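`BM25ContentFilter` above scores content blocks against a query (or against page heuristics when `user_query=None`) and keeps only the relevant ones for `fit_markdown`. To illustrate the idea, here is a minimal textbook BM25 scorer; it is a conceptual sketch, not crawl4ai's implementation:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized doc against the query with BM25."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n          # average doc length
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if tf[t] == 0:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores
```

A content filter with a `bm25_threshold` then simply drops the blocks whose score falls below the threshold.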
</details>
<details open>
<summary>🖥️ <strong>Structured Data Extraction and Executing JavaScript</strong></summary>

```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler, CacheMode
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def main():
    schema = {
        "name": "KidoCode Courses",
        "baseSelector": "section.charge-methodology .w-tab-content > div",
        "fields": [
            {
                "name": "section_title",
                "selector": "h3.heading-50",
                "type": "text",
            },
            {
                "name": "section_description",
                "selector": ".charge-content",
                "type": "text",
            },
            {
                "name": "course_name",
                "selector": ".text-block-93",
                "type": "text",
            },
            {
                "name": "course_description",
                "selector": ".course-content-text",
                "type": "text",
            },
            {
                "name": "course_icon",
                "selector": ".image-92",
                "type": "attribute",
                "attribute": "src"
            }
        ]
    }

    extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)

    async with AsyncWebCrawler(
        headless=False,
        verbose=True
    ) as crawler:

        # Create the JavaScript that clicks through all the tabs
        js_click_tabs = """
        (async () => {
            const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div");
            for(let tab of tabs) {
                // Scroll to the tab
                tab.scrollIntoView();
                tab.click();
                // Wait for content to load and animations to complete
                await new Promise(r => setTimeout(r, 500));
            }
        })();
        """

        result = await crawler.arun(
            url="https://www.kidocode.com/degrees/technology",
            extraction_strategy=extraction_strategy,
            js_code=[js_click_tabs],
            cache_mode=CacheMode.BYPASS
        )

        courses = json.loads(result.extracted_content)
        print(f"Successfully extracted {len(courses)} courses")
        print(json.dumps(courses[0], indent=2))

if __name__ == "__main__":
    asyncio.run(main())
```
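To see what `JsonCssExtractionStrategy` does with a schema like the one above, here is a simplified stand-in built on BeautifulSoup, handling only `text` and `attribute` fields (the real strategy supports more field types, including nested ones):

```python
from bs4 import BeautifulSoup

def extract_with_schema(html: str, schema: dict) -> list:
    """Apply a baseSelector + fields schema to HTML, returning JSON-like dicts."""
    soup = BeautifulSoup(html, "html.parser")
    items = []
    for base in soup.select(schema["baseSelector"]):
        item = {}
        for field in schema["fields"]:
            el = base.select_one(field["selector"])
            if el is None:
                continue  # field missing in this item
            if field["type"] == "text":
                item[field["name"]] = el.get_text(strip=True)
            elif field["type"] == "attribute":
                item[field["name"]] = el.get(field["attribute"])
        items.append(item)
    return items
```

Each element matching `baseSelector` becomes one JSON object, which is why this approach works well for repetitive page patterns.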
</details>
<details open>
<summary>🤖 <strong>Extracting Structured Data with LLMs</strong></summary>
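The diff elides the `OpenAIModelFee` Pydantic model that the example below references via `OpenAIModelFee.schema()`. A minimal sketch consistent with the JSON format shown in the extraction instruction (the field descriptions are illustrative):

```python
from pydantic import BaseModel, Field

class OpenAIModelFee(BaseModel):
    # Field names mirror the JSON format in the extraction instruction below.
    model_name: str = Field(..., description="Name of the OpenAI model.")
    input_fee: str = Field(..., description="Fee for input tokens.")
    output_fee: str = Field(..., description="Fee for output tokens.")
```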
```python
import os
import asyncio
from crawl4ai import AsyncWebCrawler, CacheMode
from crawl4ai.extraction_strategy import LLMExtractionStrategy
from pydantic import BaseModel, Field
@@ -394,6 +413,8 @@ async def main():
        url='https://openai.com/api/pricing/',
        word_count_threshold=1,
        extraction_strategy=LLMExtractionStrategy(
            # Here you can use any provider that the LiteLLM library supports, for instance: ollama/qwen2
            # provider="ollama/qwen2", api_token="no-token",
            provider="openai/gpt-4o", api_token=os.getenv('OPENAI_API_KEY'),
            schema=OpenAIModelFee.schema(),
            extraction_type="schema",
@@ -401,7 +422,7 @@ async def main():
            Do not miss any models in the entire content. One extracted model JSON format should look like this:
            {"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}."""
        ),
        cache_mode=CacheMode.BYPASS,
    )
    print(result.extracted_content)
if __name__ == "__main__":
    asyncio.run(main())
```

</details>

<details open>
<summary>🤖 <strong>Using Your Own Browser with a Custom User Profile</strong></summary>

```python
import os, sys
from pathlib import Path
import asyncio, time
from crawl4ai import AsyncWebCrawler, CacheMode

async def test_news_crawl():
    # Create a persistent user data directory
    user_data_dir = os.path.join(Path.home(), ".crawl4ai", "browser_profile")
    os.makedirs(user_data_dir, exist_ok=True)

    async with AsyncWebCrawler(
        verbose=True,
        headless=True,
        user_data_dir=user_data_dir,
        use_persistent_context=True,
        headers={
            "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
            "Accept-Language": "en-US,en;q=0.5",
            "Accept-Encoding": "gzip, deflate, br",
            "DNT": "1",
            "Connection": "keep-alive",
            "Upgrade-Insecure-Requests": "1",
            "Sec-Fetch-Dest": "document",
            "Sec-Fetch-Mode": "navigate",
            "Sec-Fetch-Site": "none",
            "Sec-Fetch-User": "?1",
            "Cache-Control": "max-age=0",
        }
    ) as crawler:
        url = "ADDRESS_OF_A_CHALLENGING_WEBSITE"

        result = await crawler.arun(
            url,
            cache_mode=CacheMode.BYPASS,
            magic=True,
        )

        print(f"Successfully crawled {url}")
        print(f"Content length: {len(result.markdown)}")
```

</details>
## Speed Comparison 🚀
|
## Speed Comparison 🚀
|
||||||
|
A test was conducted on **[NBC News - Business Section](https://www.nbcnews.com/business)** to compare Crawl4AI and Firecrawl, highlighting Crawl4AI's speed, efficiency, and advanced features.
|
||||||
|
|
||||||
Crawl4AI is designed with speed as a primary focus. Our goal is to provide the fastest possible response with high-quality data extraction, minimizing abstractions between the data and the user.
|
---
|
||||||
|
|
||||||
**`README.md`**

```diff
 We've conducted a speed comparison between Crawl4AI and Firecrawl, a paid service. The results demonstrate Crawl4AI's superior performance:

-```bash
-Firecrawl:
-Time taken: 7.02 seconds
-Content length: 42074 characters
-Images found: 49
-
-Crawl4AI (simple crawl):
-Time taken: 1.60 seconds
-Content length: 18238 characters
-Images found: 49
-
-Crawl4AI (with JavaScript execution):
-Time taken: 4.64 seconds
-Content length: 40869 characters
-Images found: 89
-```
-
-As you can see, Crawl4AI outperforms Firecrawl significantly:
-
-- Simple crawl: Crawl4AI is over 4 times faster than Firecrawl.
-- With JavaScript execution: Even when executing JavaScript to load more content (doubling the number of images found), Crawl4AI is still faster than Firecrawl's simple crawl.
-
-You can find the full comparison code in our repository at `docs/examples/crawl4ai_vs_firecrawl.py`.
+#### Results Summary
+
+| **Method**                   | **Time Taken** | **Markdown Length** | **Fit Markdown**  | **Images Found** |
+|------------------------------|----------------|---------------------|-------------------|------------------|
+| **Firecrawl**                | 6.04 seconds   | 38,382 characters   | -                 | 52               |
+| **Crawl4AI (Simple Crawl)**  | 1.06 seconds   | 42,027 characters   | -                 | 52               |
+| **Crawl4AI (Markdown Plus)** | 1.30 seconds   | 54,342 characters   | 11,119 characters | 52               |
+| **Crawl4AI (JavaScript)**    | 1.56 seconds   | 75,869 characters   | 13,406 characters | 92               |
+
+---
+
+#### Key Takeaways
+
+1. **Superior Speed**: Crawl4AI processes even advanced crawls up to **6x faster** than Firecrawl, with times as low as **1.06 seconds**.
+2. **Rich Content Extraction**: Crawl4AI consistently captures more comprehensive content, producing a **Markdown Plus** output of **54,342 characters**, compared to Firecrawl's **38,382 characters**.
+3. **AI-Optimized Output**: With **Fit Markdown**, Crawl4AI removes noise to produce concise, AI-friendly outputs (**11,119–13,406 characters**) tailored for LLM workflows.
+4. **Dynamic Content Handling**: Using JavaScript execution, Crawl4AI extracted **92 images** and enriched content dynamically loaded via “Load More” buttons—unmatched by Firecrawl.
+
+---
+
+#### Conclusion
+
+Crawl4AI outshines Firecrawl in speed, completeness, and flexibility. Its advanced features, including **Markdown Plus**, **Fit Markdown**, and **dynamic content handling**, make it the ideal choice for AI-ready web crawling. Whether you're targeting rich structured data or handling complex dynamic websites, Crawl4AI delivers unmatched performance and precision.
+
+You can find the full comparison code in our repository at [docs/examples/quickstart_async.py](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/quickstart_async.py).

 ## Documentation 📚
```
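The timings in this comparison are plain wall-clock measurements around each crawl, and "Images Found" is a count of NBC's image-CDN hostname in the resulting Markdown. A minimal sketch of that measurement pattern, with a stand-in coroutine in place of the real `crawler.arun(...)` so it runs without crawl4ai or network access:

```python
import asyncio
import time

async def fake_crawl(url: str) -> str:
    # Stand-in for `result = await crawler.arun(url=url, ...)`; returns Markdown.
    await asyncio.sleep(0.01)
    return "![img](https://media.cldnry.s-nbcnews.com/a.jpg)\n" * 3 + "# Business\n"

async def measure(label: str, url: str) -> dict:
    start = time.time()
    markdown = await fake_crawl(url)
    stats = {
        "label": label,
        "seconds": round(time.time() - start, 2),
        "chars": len(markdown),
        # The comparison script counts the NBC image CDN host as "images found".
        "images": markdown.count("cldnry.s-nbcnews.com"),
    }
    print(f"{label}: {stats['seconds']} s, {stats['chars']} chars, {stats['images']} images")
    return stats

stats = asyncio.run(measure("Crawl4AI (Simple Crawl)", "https://www.nbcnews.com/business"))
```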
**`docs/examples/quickstart_async.py`**

```diff
@@ -13,7 +13,9 @@ import re
 from typing import Dict, List
 from bs4 import BeautifulSoup
 from pydantic import BaseModel, Field
-from crawl4ai import AsyncWebCrawler
+from crawl4ai import AsyncWebCrawler, CacheMode
+from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
+from crawl4ai.content_filter_strategy import BM25ContentFilter
 from crawl4ai.extraction_strategy import (
     JsonCssExtractionStrategy,
     LLMExtractionStrategy,
@@ -51,7 +53,7 @@ async def simple_example_with_running_js_code():
             url="https://www.nbcnews.com/business",
             js_code=js_code,
             # wait_for=wait_for,
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
         )
         print(result.markdown[:500])  # Print first 500 characters

```
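The core of this commit is swapping the boolean `bypass_cache=True` flag for a `CacheMode` enum, which leaves room for cache policies a boolean cannot express. A minimal local sketch of the idea (this `CacheMode` and `resolve_cache_mode` are illustrative stand-ins, not the crawl4ai API):

```python
from enum import Enum
from typing import Optional

class CacheMode(Enum):
    # Illustrative members; an enum can grow further policies
    # (read-only, write-only, ...) without new boolean flags.
    ENABLED = "enabled"
    DISABLED = "disabled"
    BYPASS = "bypass"

def resolve_cache_mode(bypass_cache: bool = False,
                       cache_mode: Optional[CacheMode] = None) -> CacheMode:
    """Map the legacy boolean flag onto the new enum, preferring the enum."""
    if cache_mode is not None:
        return cache_mode
    return CacheMode.BYPASS if bypass_cache else CacheMode.ENABLED

# Old call sites (`bypass_cache=True`) and migrated ones
# (`cache_mode=CacheMode.BYPASS`) resolve to the same policy:
assert resolve_cache_mode(bypass_cache=True) is resolve_cache_mode(cache_mode=CacheMode.BYPASS)
```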
```diff
@@ -61,7 +63,7 @@ async def simple_example_with_css_selector():
         result = await crawler.arun(
             url="https://www.nbcnews.com/business",
             css_selector=".wide-tease-item__description",
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
         )
         print(result.markdown[:500])  # Print first 500 characters

@@ -132,7 +134,7 @@ async def extract_structured_data_using_llm(provider: str, api_token: str = None
                 {"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}.""",
                 extra_args=extra_args
             ),
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
         )
         print(result.extracted_content)

@@ -166,7 +168,7 @@ async def extract_structured_data_using_css_extractor():
         result = await crawler.arun(
             url="https://www.coinbase.com/explore",
             extraction_strategy=extraction_strategy,
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
         )

         assert result.success, "Failed to crawl the page"
@@ -213,7 +215,7 @@ async def crawl_dynamic_content_pages_method_1():
                 session_id=session_id,
                 css_selector="li.Box-sc-g0xbh4-0",
                 js=js_next_page if page > 0 else None,
-                bypass_cache=True,
+                cache_mode=CacheMode.BYPASS,
                 js_only=page > 0,
                 headless=False,
             )
@@ -282,7 +284,7 @@ async def crawl_dynamic_content_pages_method_2():
                 extraction_strategy=extraction_strategy,
                 js_code=js_next_page_and_wait if page > 0 else None,
                 js_only=page > 0,
-                bypass_cache=True,
+                cache_mode=CacheMode.BYPASS,
                 headless=False,
             )

@@ -343,7 +345,7 @@ async def crawl_dynamic_content_pages_method_3():
                 js_code=js_next_page if page > 0 else None,
                 wait_for=wait_for if page > 0 else None,
                 js_only=page > 0,
-                bypass_cache=True,
+                cache_mode=CacheMode.BYPASS,
                 headless=False,
             )

@@ -384,7 +386,7 @@ async def crawl_with_user_simultion():
         url = "YOUR-URL-HERE"
         result = await crawler.arun(
             url=url,
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
            magic = True, # Automatically detects and removes overlays, popups, and other elements that block content
            # simulate_user = True,# Causes a series of random mouse movements and clicks to simulate user interaction
            # override_navigator = True # Overrides the navigator object to make it look like a real user
@@ -408,7 +410,7 @@ async def speed_comparison():
        params={'formats': ['markdown', 'html']}
    )
    end = time.time()
-    print("Firecrawl (simulated):")
+    print("Firecrawl:")
    print(f"Time taken: {end - start:.2f} seconds")
    print(f"Content length: {len(scrape_status['markdown'])} characters")
    print(f"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}")
@@ -420,7 +422,7 @@ async def speed_comparison():
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            word_count_threshold=0,
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
            verbose=False,
        )
        end = time.time()
```
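The dynamic-content hunks all share one pagination pattern: reuse a single `session_id`, run the next-page JavaScript only after the first page, and bypass the cache so every page is fetched fresh. A runnable sketch of that control flow (`crawl_page` is a stand-in for `crawler.arun`, and the selector in `js_next_page` is hypothetical):

```python
import asyncio
from typing import List, Optional

async def crawl_page(session_id: str, js_code: Optional[str], js_only: bool) -> str:
    # Stand-in for `await crawler.arun(session_id=..., js_code=...,
    # js_only=..., cache_mode=CacheMode.BYPASS, ...)`.
    await asyncio.sleep(0)
    return f"js_only={js_only}"

async def crawl_all(pages: int) -> List[str]:
    session_id = "dynamic-listing"  # one browser session reused across pages
    js_next_page = "document.querySelector('a.next')?.click();"
    results = []
    for page in range(pages):
        results.append(await crawl_page(
            session_id=session_id,
            js_code=js_next_page if page > 0 else None,  # no click before page 0
            js_only=page > 0,  # after the first page, only run JS in the live tab
        ))
    return results

pages = asyncio.run(crawl_all(3))
```

The `page > 0` guards are the whole trick: the first request loads the page normally, and every later request just drives JavaScript inside the same session.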
```diff
@@ -430,6 +432,25 @@ async def speed_comparison():
        print(f"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}")
        print()

+        # Crawl4AI with advanced content filtering
+        start = time.time()
+        result = await crawler.arun(
+            url="https://www.nbcnews.com/business",
+            word_count_threshold=0,
+            markdown_generator=DefaultMarkdownGenerator(
+                content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
+            ),
+            cache_mode=CacheMode.BYPASS,
+            verbose=False,
+        )
+        end = time.time()
+        print("Crawl4AI (Markdown Plus):")
+        print(f"Time taken: {end - start:.2f} seconds")
+        print(f"Content length: {len(result.markdown_v2.raw_markdown)} characters")
+        print(f"Fit Markdown: {len(result.markdown_v2.fit_markdown)} characters")
+        print(f"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}")
+        print()
+
        # Crawl4AI with JavaScript execution
        start = time.time()
        result = await crawler.arun(
```
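The new "Markdown Plus" pass plugs a BM25-based filter into the markdown generator: chunks of the page are scored against a query and anything below the threshold is dropped, which is what shrinks the raw Markdown down to the much smaller Fit Markdown. A self-contained sketch of that kind of threshold filter (a toy BM25 over text chunks, not crawl4ai's `BM25ContentFilter`, which can also derive a query from the page when `user_query=None`):

```python
import math
import re
from typing import List

def _tokenize(text: str) -> List[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def bm25_scores(query: str, chunks: List[str], k1: float = 1.2, b: float = 0.75) -> List[float]:
    docs = [_tokenize(c) for c in chunks]
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    terms = _tokenize(query)
    df = {t: sum(t in d for d in docs) for t in terms}  # document frequency
    scores = []
    for d in docs:
        score = 0.0
        for t in terms:
            tf = d.count(t)
            if tf == 0:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores

def filter_chunks(query: str, chunks: List[str], threshold: float = 1.0) -> List[str]:
    # Keep only chunks at or above the threshold, mirroring the
    # `bm25_threshold` knob in the example's BM25ContentFilter.
    return [c for c, s in zip(chunks, bm25_scores(query, chunks)) if s >= threshold]

chunks = [
    "stock market business news today",
    "accept all cookies to continue",
    "business outlook and market news",
]
kept = filter_chunks("business news", chunks, threshold=0.5)
```

Raising the threshold trades recall for noise removal: the cookie-banner chunk scores zero against the query and is the first thing to go.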
```diff
@@ -438,13 +459,17 @@ async def speed_comparison():
                "const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"
            ],
            word_count_threshold=0,
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
+            markdown_generator=DefaultMarkdownGenerator(
+                content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0)
+            ),
            verbose=False,
        )
        end = time.time()
        print("Crawl4AI (with JavaScript execution):")
        print(f"Time taken: {end - start:.2f} seconds")
        print(f"Content length: {len(result.markdown)} characters")
+        print(f"Fit Markdown: {len(result.markdown_v2.fit_markdown)} characters")
        print(f"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}")

    print("\nNote on Speed Comparison:")
@@ -483,7 +508,7 @@ async def generate_knowledge_graph():
        url = "https://paulgraham.com/love.html"
        result = await crawler.arun(
            url=url,
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
            extraction_strategy=extraction_strategy,
            # magic=True
        )
@@ -496,7 +521,7 @@ async def fit_markdown_remove_overlay():
        url = "https://janineintheworld.com/places-to-visit-in-central-mexico"
        result = await crawler.arun(
            url=url,
-            bypass_cache=True,
+            cache_mode=CacheMode.BYPASS,
            word_count_threshold = 10,
            remove_overlay_elements=True,
            screenshot = True
@@ -509,31 +534,31 @@ async def fit_markdown_remove_overlay():


 async def main():
-    await simple_crawl()
-    await simple_example_with_running_js_code()
-    await simple_example_with_css_selector()
-    await use_proxy()
-    await capture_and_save_screenshot("https://www.example.com", os.path.join(__location__, "tmp/example_screenshot.jpg"))
-    await extract_structured_data_using_css_extractor()
+    # await simple_crawl()
+    # await simple_example_with_running_js_code()
+    # await simple_example_with_css_selector()
+    # await use_proxy()
+    # await capture_and_save_screenshot("https://www.example.com", os.path.join(__location__, "tmp/example_screenshot.jpg"))
+    # await extract_structured_data_using_css_extractor()

-    # LLM extraction examples
-    await extract_structured_data_using_llm()
-    await extract_structured_data_using_llm("huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct", os.getenv("HUGGINGFACE_API_KEY"))
-    await extract_structured_data_using_llm("openai/gpt-4o", os.getenv("OPENAI_API_KEY"))
-    await extract_structured_data_using_llm("ollama/llama3.2")
+    # # LLM extraction examples
+    # await extract_structured_data_using_llm()
+    # await extract_structured_data_using_llm("huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct", os.getenv("HUGGINGFACE_API_KEY"))
+    # await extract_structured_data_using_llm("openai/gpt-4o", os.getenv("OPENAI_API_KEY"))
+    # await extract_structured_data_using_llm("ollama/llama3.2")

-    # You always can pass custom headers to the extraction strategy
-    custom_headers = {
-        "Authorization": "Bearer your-custom-token",
-        "X-Custom-Header": "Some-Value"
-    }
-    await extract_structured_data_using_llm(extra_headers=custom_headers)
+    # # You always can pass custom headers to the extraction strategy
+    # custom_headers = {
+    #     "Authorization": "Bearer your-custom-token",
+    #     "X-Custom-Header": "Some-Value"
+    # }
+    # await extract_structured_data_using_llm(extra_headers=custom_headers)

-    # await crawl_dynamic_content_pages_method_1()
-    # await crawl_dynamic_content_pages_method_2()
-    await crawl_dynamic_content_pages_method_3()
+    # # await crawl_dynamic_content_pages_method_1()
+    # # await crawl_dynamic_content_pages_method_2()
+    # await crawl_dynamic_content_pages_method_3()

-    await crawl_custom_browser_type()
+    # await crawl_custom_browser_type()

     await speed_comparison()
```
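After this commit, `main()` leaves only the speed comparison active; the other examples stay in the file as commented-out calls that can be re-enabled one at a time. The entrypoint itself is the usual asyncio pattern, sketched here with a trivial coroutine standing in for the real `speed_comparison()`:

```python
import asyncio

async def speed_comparison() -> str:
    # Stand-in for the example's real speed_comparison() coroutine.
    return "comparison done"

async def main() -> str:
    # await simple_crawl()  # other examples stay available, commented out
    return await speed_comparison()

result = asyncio.run(main())
print(result)
```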