Transform any website into structured data with just a few clicks! The Crawl4AI Assistant Chrome Extension lets you visually select elements on any webpage and automatically generates Python code for web scraping.
Visual Selection
Click on any element to select it - no CSS selectors needed
Schema Builder
Build extraction schemas by clicking on container and field elements
Python Code
Get production-ready Crawl4AI extraction code
Beautiful UI
Draggable toolbar with macOS-style interface
Quick Start
Download the Extension
Get the latest release from GitHub or use the button below
Download Extension (v1.0.1)
Load in Chrome
Navigate to chrome://extensions/ and enable Developer Mode
Load Unpacked
Click "Load unpacked" and select the extracted extension folder
How to Use
Start Schema Builder
Click the extension icon and select "Schema Builder" to begin
Select Container
Click on a container element (e.g., product card, article, listing)
Select Fields
Click on individual fields inside the container and name them
Generate Code
Click "Stop & Generate" to create your Python extraction code
Generated Code Example
```python
import asyncio
import json

from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy


async def extract_products():
    # Schema generated from your visual selection
    schema = {
        "name": "Product Catalog",
        "baseSelector": "div.product-card",  # Container you clicked
        "fields": [
            {
                "name": "title",
                "selector": "h3.product-title",
                "type": "text"
            },
            {
                "name": "price",
                "selector": "span.price",
                "type": "text"
            },
            {
                "name": "description",
                "selector": "p.description",
                "type": "text"
            },
            {
                "name": "image",
                "selector": "img.product-image",
                "type": "attribute",
                "attribute": "src"
            }
        ]
    }

    # Create extraction strategy
    extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)

    # Configure the crawler
    config = CrawlerRunConfig(extraction_strategy=extraction_strategy)

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://example.com/products",
            config=config
        )

        # Parse the extracted data
        products = json.loads(result.extracted_content)
        print(f"Extracted {len(products)} products")

        # Display first product
        if products:
            print(json.dumps(products[0], indent=2))

        return products


# Run the extraction
if __name__ == "__main__":
    asyncio.run(extract_products())
```
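The generated script only prints the results; to keep them, you can write the parsed list to disk yourself. A minimal stdlib-only sketch — the `save_products` helper, the sample record, and the file paths are illustrative, not something the extension emits:

```python
import csv
import json

def save_products(products, json_path="products.json", csv_path="products.csv"):
    """Persist extracted records as JSON and CSV (paths are examples)."""
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(products, f, indent=2, ensure_ascii=False)
    if products:
        with open(csv_path, "w", newline="", encoding="utf-8") as f:
            # Field names come straight from the schema you built visually
            writer = csv.DictWriter(f, fieldnames=products[0].keys())
            writer.writeheader()
            writer.writerows(products)

# Records shaped like the schema above (hypothetical sample data)
sample = [
    {"title": "Widget", "price": "$9.99",
     "description": "A sample product", "image": "/img/widget.png"},
]
save_products(sample)
```

Because each record is a flat dict keyed by your field names, the same list drops straight into `json.dump`, `csv.DictWriter`, or a pandas DataFrame.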
Coming Soon: Even More Power
We're continuously expanding C4AI Assistant with powerful new features to make web scraping even easier:
Run on C4AI Cloud
Execute your extraction directly in the cloud without setting up any local environment. Just click "Run on Cloud" and get your data instantly.
☁️ Instant results • Auto-scaling
Get CrawlResult Without Code
Skip the code generation entirely! Get extracted data directly in the extension as a CrawlResult object, ready to download as JSON.
📊 One-click extraction • No Python needed • Export to JSON/CSV
Smart Schema Suggestions
AI-powered field detection that automatically suggests the most likely data fields on any page, making schema building even faster.
🤖 Auto-detect fields • Smart naming • Pattern recognition
C4A Script Builder
Visual automation script builder for complex interactions - fill forms, click buttons, handle pagination, all without writing code.
🎯 Visual automation • Record & replay • Export as C4A script
🚀 Stay tuned for updates! Follow our GitHub for the latest releases.