Crawl4AI Content Selection (LLM-Friendly Reference)
Minimal, code-oriented reference for selecting and filtering webpage content using Crawl4AI.
Quick Start
```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def run():
    config = CrawlerRunConfig(css_selector=".main-article")
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://example.com", config=config)
        print(result.markdown)

asyncio.run(run())
```
CSS Selectors
- Use `css_selector="selector"` to target specific content.
```python
config = CrawlerRunConfig(css_selector="article h1, article .content")
result = await crawler.arun(url="...", config=config)
```
Content Filtering
- `word_count_threshold: int`
- `excluded_tags: list of tags`
- `exclude_external_links: bool`
- `exclude_social_media_links: bool`
- `exclude_external_images: bool`
```python
config = CrawlerRunConfig(
    word_count_threshold=10,
    excluded_tags=["form", "header", "footer", "nav"],
    exclude_external_links=True,
    exclude_social_media_links=True,
    exclude_external_images=True
)
```
Iframe Content
- `process_iframes: bool`
- `remove_overlay_elements: bool`
```python
config = CrawlerRunConfig(
    process_iframes=True,
    remove_overlay_elements=True
)
```
LLM-Based Extraction
- Use `LLMExtractionStrategy(provider="...")` with `schema=...` and `instruction="..."`.
```python
from crawl4ai.extraction_strategy import LLMExtractionStrategy
from pydantic import BaseModel

class ArticleContent(BaseModel):
    title: str
    main_points: list[str]
    conclusion: str

strategy = LLMExtractionStrategy(
    provider="ollama/nemotron",
    schema=ArticleContent.schema(),
    instruction="Extract title, points, conclusion"
)
config = CrawlerRunConfig(extraction_strategy=strategy)
```
Pattern-Based Selection (JsonCssExtractionStrategy)
```python
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

schema = {
    "name": "News Articles",
    "baseSelector": "article.news-item",
    "fields": [
        {"name": "headline", "selector": "h2", "type": "text"},
        {"name": "summary", "selector": ".summary", "type": "text"},
        {"name": "category", "selector": ".category", "type": "text"},
        {
            "name": "metadata",
            "type": "nested",
            "fields": [
                {"name": "author", "selector": ".author", "type": "text"},
                {"name": "date", "selector": ".date", "type": "text"}
            ]
        }
    ]
}
config = CrawlerRunConfig(extraction_strategy=JsonCssExtractionStrategy(schema))
```
Combined Example
```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

article_schema = {
    "name": "Article",
    "baseSelector": "article.main",
    "fields": [
        {"name": "title", "selector": "h1", "type": "text"},
        {"name": "content", "selector": ".content", "type": "text"}
    ]
}
config = CrawlerRunConfig(
    extraction_strategy=JsonCssExtractionStrategy(article_schema),
    word_count_threshold=10,
    excluded_tags=["nav", "footer"],
    exclude_external_links=True
)
```