Update all documents
.gitignore
@@ -226,3 +226,5 @@ tree.md
.local
.do
/plans
.codeiumignore
todo/
File diff suppressed because it is too large
@@ -1,7 +1,3 @@
Below is the **updated** guide for the **AsyncWebCrawler** class, reflecting the **new** recommended approach of configuring the browser via **`BrowserConfig`** and each crawl via **`CrawlerRunConfig`**. While the crawler still accepts legacy parameters for backward compatibility, the modern, maintainable way is shown below.

---

# AsyncWebCrawler

The **`AsyncWebCrawler`** is the core class for asynchronous web crawling in Crawl4AI. You typically create it **once**, optionally customize it with a **`BrowserConfig`** (e.g., headless, user agent), then **run** multiple **`arun()`** calls with different **`CrawlerRunConfig`** objects.
@@ -32,14 +28,20 @@ class AsyncWebCrawler:
        Create an AsyncWebCrawler instance.

        Args:
            crawler_strategy: (Advanced) Provide a custom crawler strategy if needed.
            config: A BrowserConfig object specifying how the browser is set up.
            always_bypass_cache: (Deprecated) Use CrawlerRunConfig.cache_mode instead.
            base_directory: Folder for storing caches/logs (if relevant).
            thread_safe: If True, attempts some concurrency safeguards. Usually False.
            **kwargs: Additional legacy or debugging parameters.
            crawler_strategy:
                (Advanced) Provide a custom crawler strategy if needed.
            config:
                A BrowserConfig object specifying how the browser is set up.
            always_bypass_cache:
                (Deprecated) Use CrawlerRunConfig.cache_mode instead.
            base_directory:
                Folder for storing caches/logs (if relevant).
            thread_safe:
                If True, attempts some concurrency safeguards. Usually False.
            **kwargs:
                Additional legacy or debugging parameters.
        """
```
)

### Typical Initialization
@@ -216,8 +218,17 @@ async def main():
        "name": "Articles",
        "baseSelector": "article.post",
        "fields": [
            {"name": "title", "selector": "h2", "type": "text"},
            {"name": "url", "selector": "a", "type": "attribute", "attribute": "href"}
            {
                "name": "title",
                "selector": "h2",
                "type": "text"
            },
            {
                "name": "url",
                "selector": "a",
                "type": "attribute",
                "attribute": "href"
            }
        ]
    }
@@ -33,12 +33,6 @@ Introduced significant improvements to content filtering, multi-threaded environ

Curious about how Crawl4AI has evolved? Check out our [complete changelog](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md) for a detailed history of all versions and updates.

## Categories

- [Technical Deep Dives](/blog/technical) - Coming soon
- [Tutorials & Guides](/blog/tutorials) - Coming soon
- [Community Updates](/blog/community) - Coming soon

## Stay Updated

- Star us on [GitHub](https://github.com/unclecode/crawl4ai)
@@ -6,7 +6,7 @@ In some cases, you need to extract **complex or unstructured** information from
2. Automatically splits content into chunks (if desired) to handle token limits, then combines results.
3. Lets you define a **schema** (like a Pydantic model) or a simpler “block” extraction approach.
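The chunking in step 2 can be pictured with a small standalone sketch. This is a word-based approximation for illustration only: the parameter names here merely mirror the `chunk_token_threshold` / `overlap_rate` options discussed in this guide, and the real splitter inside Crawl4AI is token-based and may behave differently.

```python
def chunk_text(text: str, chunk_size: int = 1200, overlap_rate: float = 0.1) -> list[str]:
    """Split text into word-based chunks whose neighbors share ~overlap_rate of their words."""
    words = text.split()
    # Advance by less than a full chunk so consecutive chunks overlap.
    step = max(1, int(chunk_size * (1 - overlap_rate)))
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks
```

Each chunk is then sent to the LLM separately and the partial results are combined afterwards.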
**Important**: LLM-based extraction can be slower and costlier than schema-based approaches. If your page data is highly structured, consider using [`JsonCssExtractionStrategy`](./json-extraction-basic.md) or [`JsonXPathExtractionStrategy`](./json-extraction-basic.md) first. But if you need AI to interpret or reorganize content, read on!
**Important**: LLM-based extraction can be slower and costlier than schema-based approaches. If your page data is highly structured, consider using [`JsonCssExtractionStrategy`](./no-llm-strategies.md) or [`JsonXPathExtractionStrategy`](./no-llm-strategies.md) first. But if you need AI to interpret or reorganize content, read on!

---
@@ -295,7 +295,7 @@ if __name__ == "__main__":
- Tweak **`chunk_token_threshold`**, **`overlap_rate`**, and **`apply_chunking`** to handle large content efficiently.
- Monitor token usage with `show_usage()`.

If your site’s data is consistent or repetitive, consider [`JsonCssExtractionStrategy`](./json-extraction-basic.md) first for speed and simplicity. But if you need an **AI-driven** approach, `LLMExtractionStrategy` offers a flexible, multi-provider solution for extracting structured JSON from any website.
If your site’s data is consistent or repetitive, consider [`JsonCssExtractionStrategy`](./no-llm-strategies.md) first for speed and simplicity. But if you need an **AI-driven** approach, `LLMExtractionStrategy` offers a flexible, multi-provider solution for extracting structured JSON from any website.

**Next Steps**:
@@ -303,26 +303,18 @@ If your site’s data is consistent or repetitive, consider [`JsonCssExtractionS
   - Try switching the `provider` (e.g., `"ollama/llama2"`, `"openai/gpt-4o"`, etc.) to see differences in speed, accuracy, or cost.
   - Pass different `extra_args` like `temperature`, `top_p`, and `max_tokens` to fine-tune your results.

2. **Combine With Other Strategies**
   - Use [content filters](../../how-to/content-filters.md) like BM25 or Pruning prior to LLM extraction to remove noise and reduce token usage.
   - Apply a [CSS or XPath extraction strategy](./json-extraction-basic.md) first for obvious, structured data, then send only the tricky parts to the LLM.

3. **Performance Tuning**
2. **Performance Tuning**
   - If pages are large, tweak `chunk_token_threshold`, `overlap_rate`, or `apply_chunking` to optimize throughput.
   - Check the usage logs with `show_usage()` to keep an eye on token consumption and identify potential bottlenecks.

4. **Validate Outputs**
3. **Validate Outputs**
   - If using `extraction_type="schema"`, parse the LLM’s JSON with a Pydantic model for a final validation step.
   - Log or handle any parse errors gracefully, especially if the model occasionally returns malformed JSON.
5. **Explore Hooks & Automation**
   - Integrate LLM extraction with [hooks](./hooks-custom.md) for complex pre/post-processing.
4. **Explore Hooks & Automation**
   - Integrate LLM extraction with [hooks](../advanced/hooks-auth.md) for complex pre/post-processing.
   - Use a multi-step pipeline: crawl, filter, LLM-extract, then store or index results for further analysis.

6. **Scale and Deploy**
   - Combine your LLM extraction setup with [Docker or other deployment solutions](./docker-quickstart.md) to run at scale.
   - Monitor memory usage and concurrency if you call LLMs frequently.

**Last Updated**: 2025-01-01

---
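The "Validate Outputs" advice above — handling occasionally malformed LLM JSON gracefully — can be sketched as a small defensive parser. This is a homegrown helper, not a Crawl4AI API; the salvage tactic of grabbing the first `{...}` or `[...]` span is a common heuristic, not guaranteed to recover every response.

```python
import json

def parse_llm_json(raw: str):
    """Parse LLM output as JSON, salvaging a {...} or [...] span if extra text surrounds it."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # The model sometimes wraps the payload in chatty text; try the outermost span.
        for open_ch, close_ch in (("{", "}"), ("[", "]")):
            start, end = raw.find(open_ch), raw.rfind(close_ch)
            if 0 <= start < end:
                try:
                    return json.loads(raw[start:end + 1])
                except json.JSONDecodeError:
                    continue
        return None  # caller logs and skips instead of crashing
```

Feed `result.extracted_content` through a guard like this before handing it to a Pydantic model for the final validation step.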
@@ -193,7 +193,8 @@ This snippet includes categories, products, features, reviews, and related items
schema = {
    "name": "E-commerce Product Catalog",
    "baseSelector": "div.category",
    # (1) We can define optional baseFields if we want to extract attributes from the category container
    # (1) We can define optional baseFields if we want to extract attributes
    # from the category container
    "baseFields": [
        {"name": "data_cat_id", "type": "attribute", "attribute": "data-cat-id"},
    ],
@@ -223,8 +224,16 @@ schema = {
            "selector": "div.product-details",
            "type": "nested",  # single sub-object
            "fields": [
                {"name": "brand", "selector": "span.brand", "type": "text"},
                {"name": "model", "selector": "span.model", "type": "text"}
                {
                    "name": "brand",
                    "selector": "span.brand",
                    "type": "text"
                },
                {
                    "name": "model",
                    "selector": "span.model",
                    "type": "text"
                }
            ]
        },
        {
@@ -240,9 +249,21 @@ schema = {
            "selector": "div.review",
            "type": "nested_list",
            "fields": [
                {"name": "reviewer", "selector": "span.reviewer", "type": "text"},
                {"name": "rating", "selector": "span.rating", "type": "text"},
                {"name": "comment", "selector": "p.review-text", "type": "text"}
                {
                    "name": "reviewer",
                    "selector": "span.reviewer",
                    "type": "text"
                },
                {
                    "name": "rating",
                    "selector": "span.rating",
                    "type": "text"
                },
                {
                    "name": "comment",
                    "selector": "p.review-text",
                    "type": "text"
                }
            ]
        },
        {
@@ -250,8 +271,16 @@ schema = {
            "selector": "ul.related-products li",
            "type": "list",
            "fields": [
                {"name": "name", "selector": "span.related-name", "type": "text"},
                {"name": "price", "selector": "span.related-price", "type": "text"}
                {
                    "name": "name",
                    "selector": "span.related-name",
                    "type": "text"
                },
                {
                    "name": "price",
                    "selector": "span.related-price",
                    "type": "text"
                }
            ]
        }
    ]
@@ -382,7 +411,6 @@ With **JsonCssExtractionStrategy** (or **JsonXPathExtractionStrategy**), you can

**Next Steps**:

- Explore the [Advanced Usage of JSON Extraction](../../explanations/extraction-chunking.md) for deeper details on schema nesting, transformations, or hooking.
- Combine your extracted JSON with advanced filtering or summarization in a second pass if needed.
- For dynamic pages, combine strategies with `js_code` or infinite scroll hooking to ensure all content is loaded.
@@ -1,282 +0,0 @@
# Advanced Usage of JsonCssExtractionStrategy

While the basic usage of JsonCssExtractionStrategy is powerful for simple structures, its true potential shines when dealing with complex, nested HTML structures. This section will explore advanced usage scenarios, demonstrating how to extract nested objects, lists, and nested lists.

## Hypothetical Website Example

Let's consider a hypothetical e-commerce website that displays product categories, each containing multiple products. Each product has details, reviews, and related items. This complex structure will allow us to demonstrate various advanced features of JsonCssExtractionStrategy.

Assume the HTML structure looks something like this:
```html
<div class="category">
  <h2 class="category-name">Electronics</h2>
  <div class="product">
    <h3 class="product-name">Smartphone X</h3>
    <p class="product-price">$999</p>
    <div class="product-details">
      <span class="brand">TechCorp</span>
      <span class="model">X-2000</span>
    </div>
    <ul class="product-features">
      <li>5G capable</li>
      <li>6.5" OLED screen</li>
      <li>128GB storage</li>
    </ul>
    <div class="product-reviews">
      <div class="review">
        <span class="reviewer">John D.</span>
        <span class="rating">4.5</span>
        <p class="review-text">Great phone, love the camera!</p>
      </div>
      <div class="review">
        <span class="reviewer">Jane S.</span>
        <span class="rating">5</span>
        <p class="review-text">Best smartphone I've ever owned.</p>
      </div>
    </div>
    <ul class="related-products">
      <li>
        <span class="related-name">Phone Case</span>
        <span class="related-price">$29.99</span>
      </li>
      <li>
        <span class="related-name">Screen Protector</span>
        <span class="related-price">$9.99</span>
      </li>
    </ul>
  </div>
  <!-- More products... -->
</div>
```
Now, let's create a schema to extract this complex structure:

```python
schema = {
    "name": "E-commerce Product Catalog",
    "baseSelector": "div.category",
    "fields": [
        {
            "name": "category_name",
            "selector": "h2.category-name",
            "type": "text"
        },
        {
            "name": "products",
            "selector": "div.product",
            "type": "nested_list",
            "fields": [
                {
                    "name": "name",
                    "selector": "h3.product-name",
                    "type": "text"
                },
                {
                    "name": "price",
                    "selector": "p.product-price",
                    "type": "text"
                },
                {
                    "name": "details",
                    "selector": "div.product-details",
                    "type": "nested",
                    "fields": [
                        {
                            "name": "brand",
                            "selector": "span.brand",
                            "type": "text"
                        },
                        {
                            "name": "model",
                            "selector": "span.model",
                            "type": "text"
                        }
                    ]
                },
                {
                    "name": "features",
                    "selector": "ul.product-features li",
                    "type": "list",
                    "fields": [
                        {
                            "name": "feature",
                            "type": "text"
                        }
                    ]
                },
                {
                    "name": "reviews",
                    "selector": "div.review",
                    "type": "nested_list",
                    "fields": [
                        {
                            "name": "reviewer",
                            "selector": "span.reviewer",
                            "type": "text"
                        },
                        {
                            "name": "rating",
                            "selector": "span.rating",
                            "type": "text"
                        },
                        {
                            "name": "comment",
                            "selector": "p.review-text",
                            "type": "text"
                        }
                    ]
                },
                {
                    "name": "related_products",
                    "selector": "ul.related-products li",
                    "type": "list",
                    "fields": [
                        {
                            "name": "name",
                            "selector": "span.related-name",
                            "type": "text"
                        },
                        {
                            "name": "price",
                            "selector": "span.related-price",
                            "type": "text"
                        }
                    ]
                }
            ]
        }
    ]
}
```
This schema demonstrates several advanced features:

1. **Nested Objects**: The `details` field is a nested object within each product.
2. **Simple Lists**: The `features` field is a simple list of text items.
3. **Nested Lists**: The `products` field is a nested list, where each item is a complex object.
4. **Lists of Objects**: The `reviews` and `related_products` fields are lists of objects.
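Because the extracted JSON mirrors this nesting, consuming the result is plain dictionary/list traversal. The sample data below is invented to match the schema's shape; real output depends on the page:

```python
catalog = [  # shape implied by the schema above; values are illustrative only
    {
        "category_name": "Electronics",
        "products": [
            {
                "name": "Smartphone X",
                "price": "$999",
                "reviews": [
                    {"reviewer": "John D.", "rating": "4.5", "comment": "Great phone"},
                    {"reviewer": "Jane S.", "rating": "5", "comment": "Best ever"},
                ],
            }
        ],
    }
]

# Walk categories -> products -> reviews and compute an average rating per product.
for category in catalog:
    for product in category["products"]:
        ratings = [float(r["rating"]) for r in product["reviews"]]
        avg_rating = sum(ratings) / len(ratings) if ratings else None
        print(category["category_name"], product["name"], avg_rating)
```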
Let's break down the key concepts:

### Nested Objects

To create a nested object, use `"type": "nested"` and provide a `fields` array for the nested structure:

```python
{
    "name": "details",
    "selector": "div.product-details",
    "type": "nested",
    "fields": [
        {
            "name": "brand",
            "selector": "span.brand",
            "type": "text"
        },
        {
            "name": "model",
            "selector": "span.model",
            "type": "text"
        }
    ]
}
```

### Simple Lists

For a simple list of identical items, use `"type": "list"`:

```python
{
    "name": "features",
    "selector": "ul.product-features li",
    "type": "list",
    "fields": [
        {
            "name": "feature",
            "type": "text"
        }
    ]
}
```
### Nested Lists

For a list of complex objects, use `"type": "nested_list"`:

```python
{
    "name": "products",
    "selector": "div.product",
    "type": "nested_list",
    "fields": [
        # ... fields for each product
    ]
}
```

### Lists of Objects

Similar to nested lists, but typically used for simpler objects within the list:

```python
{
    "name": "related_products",
    "selector": "ul.related-products li",
    "type": "list",
    "fields": [
        {
            "name": "name",
            "selector": "span.related-name",
            "type": "text"
        },
        {
            "name": "price",
            "selector": "span.related-price",
            "type": "text"
        }
    ]
}
```
## Using the Advanced Schema

To use this advanced schema with AsyncWebCrawler:

```python
import json
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def extract_complex_product_data():
    extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)

    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://gist.githubusercontent.com/githubusercontent/2d7b8ba3cd8ab6cf3c8da771ddb36878/raw/1ae2f90c6861ce7dd84cc50d3df9920dee5e1fd2/sample_ecommerce.html",
            extraction_strategy=extraction_strategy,
            bypass_cache=True,
        )

        assert result.success, "Failed to crawl the page"

        product_data = json.loads(result.extracted_content)
        print(json.dumps(product_data, indent=2))

asyncio.run(extract_complex_product_data())
```

This will produce a structured JSON output that captures the complex hierarchy of the product catalog, including nested objects, lists, and nested lists.
## Tips for Advanced Usage

1. **Start Simple**: Begin with a basic schema and gradually add complexity.
2. **Test Incrementally**: Test each part of your schema separately before combining them.
3. **Use Chrome DevTools**: The Element Inspector is invaluable for identifying the correct selectors.
4. **Handle Missing Data**: Use the `default` key in your field definitions to handle cases where data might be missing.
5. **Leverage Transforms**: Use the `transform` key to clean or format extracted data (e.g., converting prices to numbers).
6. **Consider Performance**: Very complex schemas might slow down extraction. Balance complexity with performance needs.
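Tips 4 and 5 can be combined in a single field definition. The exact accepted values for `default` and `transform` depend on your Crawl4AI version, so treat this as a sketch (the `"strip"` transform and the client-side fallback helper are assumptions for illustration):

```python
price_field = {
    "name": "price",
    "selector": "p.product-price",
    "type": "text",
    "default": "N/A",       # value used when the selector matches nothing
    "transform": "strip",   # hypothetical cleanup step, e.g. trimming whitespace
}

def apply_default(record: dict, field: dict) -> dict:
    """Client-side fallback: fill a missing/empty value with the field's default."""
    name = field["name"]
    if not record.get(name):
        record[name] = field.get("default")
    return record
```

Applying the default on your side as well keeps downstream code safe even if the extractor leaves the key out entirely.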
By mastering these advanced techniques, you can use JsonCssExtractionStrategy to extract highly structured data from even the most complex web pages, making it a powerful tool for web scraping and data analysis tasks.
@@ -1,142 +0,0 @@
# JSON CSS Extraction Strategy with AsyncWebCrawler

The `JsonCssExtractionStrategy` is a powerful feature of Crawl4AI that allows you to extract structured data from web pages using CSS selectors. This method is particularly useful when you need to extract specific data points from a consistent HTML structure, such as tables or repeated elements. Here's how to use it with the AsyncWebCrawler.

## Overview

The `JsonCssExtractionStrategy` works by defining a schema that specifies:
1. A base CSS selector for the repeating elements
2. Fields to extract from each element, each with its own CSS selector

This strategy is fast and efficient, as it doesn't rely on external services like LLMs for extraction.

## Example: Extracting Cryptocurrency Prices from Coinbase

Let's look at an example that extracts cryptocurrency prices from the Coinbase explore page.
```python
import json
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def extract_structured_data_using_css_extractor():
    print("\n--- Using JsonCssExtractionStrategy for Fast Structured Output ---")

    # Define the extraction schema
    schema = {
        "name": "Coinbase Crypto Prices",
        "baseSelector": ".cds-tableRow-t45thuk",
        "fields": [
            {
                "name": "crypto",
                "selector": "td:nth-child(1) h2",
                "type": "text",
            },
            {
                "name": "symbol",
                "selector": "td:nth-child(1) p",
                "type": "text",
            },
            {
                "name": "price",
                "selector": "td:nth-child(2)",
                "type": "text",
            }
        ],
    }

    # Create the extraction strategy
    extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)

    # Use the AsyncWebCrawler with the extraction strategy
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://www.coinbase.com/explore",
            extraction_strategy=extraction_strategy,
            bypass_cache=True,
        )

        assert result.success, "Failed to crawl the page"

        # Parse the extracted content
        crypto_prices = json.loads(result.extracted_content)
        print(f"Successfully extracted {len(crypto_prices)} cryptocurrency prices")
        print(json.dumps(crypto_prices[0], indent=2))

        return crypto_prices

# Run the async function
asyncio.run(extract_structured_data_using_css_extractor())
```
## Explanation of the Schema

The schema defines how to extract the data:

- `name`: A descriptive name for the extraction task.
- `baseSelector`: The CSS selector for the repeating elements (in this case, table rows).
- `fields`: An array of fields to extract from each element:
  - `name`: The name to give the extracted data.
  - `selector`: The CSS selector to find the specific data within the base element.
  - `type`: The type of data to extract (usually "text" for textual content).
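Before pointing the strategy at a live page, a quick structural check of the schema dict can catch typos early. This is a homegrown helper for illustration, not a Crawl4AI API, and the rules it enforces are only the conventions described in this guide:

```python
def validate_schema(schema: dict) -> list[str]:
    """Return a list of problems found in an extraction schema dict."""
    problems = []
    for key in ("name", "baseSelector", "fields"):
        if key not in schema:
            problems.append(f"missing top-level key: {key}")
    for i, field in enumerate(schema.get("fields", [])):
        if "name" not in field:
            problems.append(f"fields[{i}] missing 'name'")
        # An attribute-typed field needs to say which attribute to read.
        if field.get("type") == "attribute" and "attribute" not in field:
            problems.append(f"fields[{i}] is type 'attribute' but has no 'attribute' key")
    return problems
```

Running this before each crawl turns a silent empty extraction into an explicit error message.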
## Advantages of JsonCssExtractionStrategy

1. **Speed**: CSS selectors are fast to execute, making this method efficient for large datasets.
2. **Precision**: You can target exactly the elements you need.
3. **Structured Output**: The result is already structured as JSON, ready for further processing.
4. **No External Dependencies**: Unlike LLM-based strategies, this doesn't require any API calls to external services.

## Tips for Using JsonCssExtractionStrategy

1. **Inspect the Page**: Use browser developer tools to identify the correct CSS selectors.
2. **Test Selectors**: Verify your selectors in the browser console before using them in the script.
3. **Handle Dynamic Content**: If the page uses JavaScript to load content, you may need to combine this with JS execution (see the Advanced Usage section).
4. **Error Handling**: Always check the `result.success` flag and handle potential failures.
## Advanced Usage: Combining with JavaScript Execution

For pages that load data dynamically, you can combine the `JsonCssExtractionStrategy` with JavaScript execution:

```python
async def extract_dynamic_structured_data():
    schema = {
        "name": "Dynamic Crypto Prices",
        "baseSelector": ".crypto-row",
        "fields": [
            {"name": "name", "selector": ".crypto-name", "type": "text"},
            {"name": "price", "selector": ".crypto-price", "type": "text"},
        ]
    }

    js_code = """
    window.scrollTo(0, document.body.scrollHeight);
    await new Promise(resolve => setTimeout(resolve, 2000)); // Wait for 2 seconds
    """

    extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)

    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://example.com/crypto-prices",
            extraction_strategy=extraction_strategy,
            js_code=js_code,
            wait_for=".crypto-row:nth-child(20)",  # Wait for 20 rows to load
            bypass_cache=True,
        )

        crypto_data = json.loads(result.extracted_content)
        print(f"Extracted {len(crypto_data)} cryptocurrency entries")

asyncio.run(extract_dynamic_structured_data())
```

This advanced example demonstrates how to:
1. Execute JavaScript to trigger dynamic content loading.
2. Wait for a specific condition (20 rows loaded) before extraction.
3. Extract data from the dynamically loaded content.

By mastering the `JsonCssExtractionStrategy`, you can efficiently extract structured data from a wide variety of web pages, making it a valuable tool in your web scraping toolkit.

For more details on schema definitions and advanced extraction strategies, check out the [Advanced JsonCssExtraction](./css-advanced.md).
@@ -1,179 +0,0 @@
# LLM Extraction with AsyncWebCrawler

Crawl4AI's AsyncWebCrawler allows you to use Language Models (LLMs) to extract structured data or relevant content from web pages asynchronously. Below are two examples demonstrating how to use `LLMExtractionStrategy` for different purposes with the AsyncWebCrawler.

## Example 1: Extract Structured Data

In this example, we use the `LLMExtractionStrategy` to extract structured data (model names and their fees) from the OpenAI pricing page.
```python
import os
import json
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy
from pydantic import BaseModel, Field

class OpenAIModelFee(BaseModel):
    model_name: str = Field(..., description="Name of the OpenAI model.")
    input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
    output_fee: str = Field(..., description="Fee for output token for the OpenAI model.")

async def extract_openai_fees():
    url = 'https://openai.com/api/pricing/'

    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url=url,
            word_count_threshold=1,
            extraction_strategy=LLMExtractionStrategy(
                provider="openai/gpt-4o",  # Or use ollama like provider="ollama/nemotron"
                api_token=os.getenv('OPENAI_API_KEY'),
                schema=OpenAIModelFee.model_json_schema(),
                extraction_type="schema",
                instruction="From the crawled content, extract all mentioned model names along with their "
                "fees for input and output tokens. Make sure not to miss anything in the entire content. "
                'One extracted model JSON format should look like this: '
                '{ "model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens" }'
            ),
            bypass_cache=True,
        )

    model_fees = json.loads(result.extracted_content)
    print(f"Number of models extracted: {len(model_fees)}")

    with open(".data/openai_fees.json", "w", encoding="utf-8") as f:
        json.dump(model_fees, f, indent=2)

asyncio.run(extract_openai_fees())
```
## Example 2: Extract Relevant Content

In this example, we instruct the LLM to extract only content related to technology from the NBC News business page.
```python
import os
import json
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy

async def extract_tech_content():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            extraction_strategy=LLMExtractionStrategy(
                provider="openai/gpt-4o",
                api_token=os.getenv('OPENAI_API_KEY'),
                instruction="Extract only content related to technology"
            ),
            bypass_cache=True,
        )

    tech_content = json.loads(result.extracted_content)
    print(f"Number of tech-related items extracted: {len(tech_content)}")

    with open(".data/tech_content.json", "w", encoding="utf-8") as f:
        json.dump(tech_content, f, indent=2)

asyncio.run(extract_tech_content())
```
## Advanced Usage: Combining JS Execution with LLM Extraction

This example demonstrates how to combine JavaScript execution with LLM extraction to handle dynamic content:

```python
async def extract_dynamic_content():
    js_code = """
    const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
    if (loadMoreButton) {
        loadMoreButton.click();
        await new Promise(resolve => setTimeout(resolve, 2000));
    }
    """

    wait_for = """
    () => {
        const articles = document.querySelectorAll('article.tease-card');
        return articles.length > 10;
    }
    """

    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
            js_code=js_code,
            wait_for=wait_for,
            css_selector="article.tease-card",
            extraction_strategy=LLMExtractionStrategy(
                provider="openai/gpt-4o",
                api_token=os.getenv('OPENAI_API_KEY'),
                instruction="Summarize each article, focusing on technology-related content"
            ),
            bypass_cache=True,
        )

    summaries = json.loads(result.extracted_content)
    print(f"Number of summarized articles: {len(summaries)}")

    with open(".data/tech_summaries.json", "w", encoding="utf-8") as f:
        json.dump(summaries, f, indent=2)

asyncio.run(extract_dynamic_content())
```
## Customizing LLM Provider

Crawl4AI uses the `litellm` library under the hood, which allows you to use any LLM provider you want. Just pass the correct model name and API token:

```python
extraction_strategy=LLMExtractionStrategy(
    provider="your_llm_provider/model_name",
    api_token="your_api_token",
    instruction="Your extraction instruction"
)
```

This flexibility allows you to integrate with various LLM providers and tailor the extraction process to your specific needs.

## Error Handling and Retries

When working with external LLM APIs, it's important to handle potential errors and implement retry logic. Here's an example of how you might do this:
```python
import os
import json
import asyncio
from tenacity import retry, stop_after_attempt, wait_exponential
from crawl4ai import AsyncWebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy

class LLMExtractionError(Exception):
    pass

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10))
async def extract_with_retry(crawler, url, extraction_strategy):
    try:
        result = await crawler.arun(url=url, extraction_strategy=extraction_strategy, bypass_cache=True)
        return json.loads(result.extracted_content)
    except Exception as e:
        raise LLMExtractionError(f"Failed to extract content: {str(e)}")

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        try:
            content = await extract_with_retry(
                crawler,
                "https://www.example.com",
                LLMExtractionStrategy(
                    provider="openai/gpt-4o",
                    api_token=os.getenv('OPENAI_API_KEY'),
                    instruction="Extract and summarize main points"
                )
            )
            print("Extracted content:", content)
        except LLMExtractionError as e:
            print(f"Extraction failed after retries: {e}")

asyncio.run(main())
```

This example uses the `tenacity` library to implement a retry mechanism with exponential backoff, which can help handle temporary failures or rate limiting from the LLM API.
@@ -1,226 +0,0 @@
# Extraction Strategies Overview

Crawl4AI provides powerful extraction strategies to help you get structured data from web pages. Each strategy is designed for specific use cases and offers different approaches to data extraction.

## Available Strategies

### [LLM-Based Extraction](llm.md)

`LLMExtractionStrategy` uses Language Models to extract structured data from web content. This approach is highly flexible and can understand content semantically.
```python
|
||||
from pydantic import BaseModel
|
||||
from crawl4ai.extraction_strategy import LLMExtractionStrategy
|
||||
|
||||
class Product(BaseModel):
|
||||
name: str
|
||||
price: float
|
||||
description: str
|
||||
|
||||
strategy = LLMExtractionStrategy(
|
||||
provider="ollama/llama2",
|
||||
schema=Product.schema(),
|
||||
instruction="Extract product details from the page"
|
||||
)
|
||||
|
||||
result = await crawler.arun(
|
||||
url="https://example.com/product",
|
||||
extraction_strategy=strategy
|
||||
)
|
||||
```
|
||||
|
||||
**Best for:**
|
||||
- Complex data structures
|
||||
- Content requiring interpretation
|
||||
- Flexible content formats
|
||||
- Natural language processing
|
||||
|
||||
### [CSS-Based Extraction](css.md)

`JsonCssExtractionStrategy` extracts data using CSS selectors. This is fast, reliable, and perfect for consistently structured pages.

```python
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

schema = {
    "name": "Product Listing",
    "baseSelector": ".product-card",
    "fields": [
        {"name": "title", "selector": "h2", "type": "text"},
        {"name": "price", "selector": ".price", "type": "text"},
        {"name": "image", "selector": "img", "type": "attribute", "attribute": "src"}
    ]
}

strategy = JsonCssExtractionStrategy(schema)

result = await crawler.arun(
    url="https://example.com/products",
    extraction_strategy=strategy
)
```

**Best for:**

- E-commerce product listings
- News article collections
- Structured content pages
- High-performance needs
### [Cosine Strategy](cosine.md)

`CosineStrategy` uses similarity-based clustering to identify and extract relevant content sections.

```python
from crawl4ai.extraction_strategy import CosineStrategy

strategy = CosineStrategy(
    semantic_filter="product reviews",   # Content focus
    word_count_threshold=10,             # Minimum words per cluster
    sim_threshold=0.3,                   # Similarity threshold
    max_dist=0.2,                        # Maximum cluster distance
    top_k=3                              # Number of top clusters to extract
)

result = await crawler.arun(
    url="https://example.com/reviews",
    extraction_strategy=strategy
)
```

**Best for:**

- Content similarity analysis
- Topic clustering
- Relevant content extraction
- Pattern recognition in text
## Strategy Selection Guide

Choose your strategy based on these factors:

1. **Content Structure**
   - Well-structured HTML → Use CSS Strategy
   - Natural language text → Use LLM Strategy
   - Mixed/Complex content → Use Cosine Strategy

2. **Performance Requirements**
   - Fastest: CSS Strategy
   - Moderate: Cosine Strategy
   - Variable: LLM Strategy (depends on provider)

3. **Accuracy Needs**
   - Highest structure accuracy: CSS Strategy
   - Best semantic understanding: LLM Strategy
   - Best content relevance: Cosine Strategy
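As a quick illustration, the decision guide above can be encoded as a small helper. This is a sketch: the returned strings mirror Crawl4AI's strategy class names, but the two boolean inputs are our own simplification of the guide's criteria.

```python
def pick_strategy(structured_html: bool, needs_semantics: bool) -> str:
    """Toy heuristic mirroring the selection guide above.

    Returns the name of the suggested extraction strategy class.
    """
    if needs_semantics:
        return "LLMExtractionStrategy"      # best semantic understanding
    if structured_html:
        return "JsonCssExtractionStrategy"  # fastest, highest structural accuracy
    return "CosineStrategy"                 # relevance/clustering fallback
```

In practice you would weigh cost and latency too, since LLM-based extraction varies with the provider.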
## Combining Strategies

You can combine strategies for more powerful extraction:

```python
# First use the CSS strategy for the initial structure
css_result = await crawler.arun(
    url="https://example.com",
    extraction_strategy=css_strategy
)

# Then use the LLM strategy for semantic analysis
llm_result = await crawler.arun(
    url="https://example.com",
    extraction_strategy=llm_strategy
)
```
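Once both runs complete, merging the two outputs is up to you. A minimal sketch, assuming each run's `extracted_content` is a JSON object and letting the LLM's semantic fields win on key collisions (the field names here are illustrative, not part of Crawl4AI):

```python
import json

def merge_extractions(css_json: str, llm_json: str) -> dict:
    """Combine structured fields (CSS run) with semantic fields (LLM run).

    Both inputs are the `extracted_content` strings from the two runs.
    On duplicate keys, the LLM result takes precedence.
    """
    structured = json.loads(css_json)
    semantic = json.loads(llm_json)
    return {**structured, **semantic}

merged = merge_extractions(
    '{"title": "Widget", "price": "$9.99"}',
    '{"summary": "A compact widget.", "sentiment": "positive"}',
)
```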
## Common Use Cases

1. **E-commerce Scraping**

   ```python
   # CSS Strategy for product listings
   schema = {
       "name": "Products",
       "baseSelector": ".product",
       "fields": [
           {"name": "name", "selector": ".title", "type": "text"},
           {"name": "price", "selector": ".price", "type": "text"}
       ]
   }
   ```

2. **News Article Extraction**

   ```python
   # LLM Strategy for article content
   class Article(BaseModel):
       title: str
       content: str
       author: str
       date: str

   strategy = LLMExtractionStrategy(
       provider="ollama/llama2",
       schema=Article.schema()
   )
   ```

3. **Content Analysis**

   ```python
   # Cosine Strategy for topic analysis
   strategy = CosineStrategy(
       semantic_filter="technology trends",
       top_k=5
   )
   ```
## Input Formats

All extraction strategies support different input formats to give you more control over how content is processed:

- **markdown** (default): Uses the raw markdown conversion of the HTML content. Best for general text extraction where HTML structure isn't critical.
- **html**: Uses the raw HTML content. Useful when you need to preserve HTML structure or extract data from specific HTML elements.
- **fit_markdown**: Uses the cleaned and filtered markdown content. Best for extracting relevant content while removing noise. Requires a markdown generator with a content filter to be configured.

To specify an input format:

```python
strategy = LLMExtractionStrategy(
    input_format="html",  # or "markdown" or "fit_markdown"
    provider="openai/gpt-4",
    instruction="Extract product information"
)
```

Note: When using "fit_markdown", ensure your CrawlerRunConfig includes a markdown generator with a content filter:

```python
config = CrawlerRunConfig(
    extraction_strategy=strategy,
    markdown_generator=DefaultMarkdownGenerator(
        content_filter=PruningContentFilter()  # Content filter goes here for fit_markdown
    )
)
```

If fit_markdown is requested but not available (no markdown generator or no content filter), the system will automatically fall back to raw markdown with a warning.
## Best Practices

1. **Choose the Right Strategy**
   - Start with CSS for structured data
   - Use LLM for complex interpretation
   - Try Cosine for content relevance

2. **Optimize Performance**
   - Cache LLM results
   - Keep CSS selectors specific
   - Tune similarity thresholds
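For the "cache LLM results" tip, one lightweight approach is to persist extractions keyed by URL so repeated runs skip the API call entirely. This is a standard-library sketch; the cache directory and layout are assumptions for illustration, not a Crawl4AI feature:

```python
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path(".llm_cache")  # hypothetical cache location


def _cache_path(url: str) -> Path:
    # Hash the URL so arbitrary URLs map to safe filenames
    key = hashlib.sha256(url.encode()).hexdigest()
    return CACHE_DIR / f"{key}.json"


def cached_extraction(url: str):
    """Return a previously stored extraction for `url`, or None."""
    path = _cache_path(url)
    if path.exists():
        return json.loads(path.read_text())
    return None


def store_extraction(url: str, data: dict) -> None:
    """Persist an extraction result so later runs can reuse it."""
    CACHE_DIR.mkdir(exist_ok=True)
    _cache_path(url).write_text(json.dumps(data))
```

Check `cached_extraction(url)` before calling `crawler.arun()` with an LLM strategy, and call `store_extraction(url, data)` after a successful extraction.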
3. **Handle Errors**

   ```python
   import json

   result = await crawler.arun(
       url="https://example.com",
       extraction_strategy=strategy
   )

   if not result.success:
       print(f"Extraction failed: {result.error_message}")
   else:
       data = json.loads(result.extracted_content)
   ```

Each strategy has its strengths and optimal use cases. Explore the detailed documentation for each strategy to learn more about their specific features and configurations.