From ce7d49484fc097a834d1eac883ecce6f444ceb1e Mon Sep 17 00:00:00 2001 From: UncleCode Date: Thu, 28 Nov 2024 13:06:46 +0800 Subject: [PATCH 1/8] docs: update README for version 0.3.743 with new features, enhancements, and contributor acknowledgments --- README.md | 125 +++++++++++++++++++++++++++++++++++++----------------- 1 file changed, 87 insertions(+), 38 deletions(-) diff --git a/README.md b/README.md index 5ba33dea..16d154b5 100644 --- a/README.md +++ b/README.md @@ -11,20 +11,15 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. πŸ†“πŸŒ -## New in 0.3.74 ✨ +## New in 0.3.743 ✨ -- πŸš€ **Blazing Fast Scraping**: Significantly improved scraping speed. -- πŸ“₯ **Download Manager**: Integrated file crawling, downloading, and tracking within `CrawlResult`. -- πŸ“ **Markdown Strategy**: Flexible system for custom markdown generation and formats. -- πŸ”— **LLM-Friendly Citations**: Auto-converts links to numbered citations with reference lists. -- πŸ”Ž **Markdown Filter**: BM25-based content extraction for cleaner, relevant markdown. -- πŸ–ΌοΈ **Image Extraction**: Supports `srcset`, `picture`, and responsive image formats. -- πŸ—‚οΈ **Local/Raw HTML**: Crawl `file://` paths and raw HTML (`raw:`) directly. -- πŸ€– **Browser Control**: Custom browser setups with stealth integration to bypass bots. -- ☁️ **API & Cache Boost**: CORS, static serving, and enhanced filesystem-based caching. -- 🐳 **API Gateway**: Run as an API service with secure token authentication. -- πŸ› οΈ **Database Upgrades**: Optimized for larger content sets with faster caching. -- πŸ› **Bug Fixes**: Resolved browser context issues, memory leaks, and improved error handling. +πŸš€ **Improved ManagedBrowser Configuration**: Dynamic host and port support for more flexible browser management. +πŸ“ **Enhanced Markdown Generation**: New generator class for better formatting and customization. 
+⚑ **Fast HTML Formatting**: Significantly optimized HTML formatting in the web crawler. +πŸ› οΈ **Utility & Sanitization Upgrades**: Improved sanitization and expanded utility functions for streamlined workflows. +πŸ‘₯ **Acknowledgments**: Added contributor details and pull request acknowledgments for better transparency. +πŸ“– **Documentation Updates**: Clearer usage scenarios and updated guidance for better user onboarding. +πŸ§ͺ **Test Adjustments**: Refined tests to align with recent class name changes. ## Try it Now! @@ -35,31 +30,85 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc ## Features ✨ -- πŸ†“ Completely free and open-source -- πŸš€ Blazing fast performance, outperforming many paid services -- πŸ€– LLM-friendly output formats (JSON, cleaned HTML, markdown) -- 🌐 Multi-browser support (Chromium, Firefox, WebKit) -- 🌍 Supports crawling multiple URLs simultaneously -- 🎨 Extracts and returns all media tags (Images, Audio, and Video) -- πŸ”— Extracts all external and internal links -- πŸ“š Extracts metadata from the page -- πŸ”„ Custom hooks for authentication, headers, and page modifications -- πŸ•΅οΈ User-agent customization -- πŸ–ΌοΈ Takes screenshots of pages with enhanced error handling -- πŸ“œ Executes multiple custom JavaScripts before crawling -- πŸ“Š Generates structured output without LLM using JsonCssExtractionStrategy -- πŸ“š Various chunking strategies: topic-based, regex, sentence, and more -- 🧠 Advanced extraction strategies: cosine clustering, LLM, and more -- 🎯 CSS selector support for precise data extraction -- πŸ“ Passes instructions/keywords to refine extraction -- πŸ”’ Proxy support with authentication for enhanced access -- πŸ”„ Session management for complex multi-page crawling -- 🌐 Asynchronous architecture for improved performance -- πŸ–ΌοΈ Improved image processing with lazy-loading detection -- πŸ•°οΈ Enhanced handling of delayed content loading -- πŸ”‘ Custom headers support for LLM 
interactions -- πŸ–ΌοΈ iframe content extraction for comprehensive analysis -- ⏱️ Flexible timeout and delayed content retrieval options +
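One of the features highlighted in this README is auto-converting page links into numbered citations with a reference list. As a rough, dependency-free sketch of that idea (the regex and function name are illustrative, not Crawl4AI's actual implementation):

```python
import re

def links_to_citations(markdown: str) -> str:
    """Replace [text](url) links with numbered citations and append a reference list."""
    refs = []  # URLs in order of first appearance

    def cite(match):
        text, url = match.group(1), match.group(2)
        if url not in refs:
            refs.append(url)
        return f"{text} [{refs.index(url) + 1}]"

    body = re.sub(r"\[([^\]]+)\]\(([^)\s]+)\)", cite, markdown)
    reference_list = "\n".join(f"[{i + 1}]: {url}" for i, url in enumerate(refs))
    return body + "\n\n## References\n" + reference_list

doc = "See [Crawl4AI](https://github.com/unclecode/crawl4ai) and [docs](https://crawl4ai.com), or [Crawl4AI](https://github.com/unclecode/crawl4ai) again."
print(links_to_citations(doc))
```

Repeated URLs reuse the same citation number, so the reference list stays deduplicated for LLM consumption.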
+πŸš€ Performance & Scalability + +- ⚑ **Blazing Fast Scraping**: Outperforms many paid services with cutting-edge optimization. +- πŸ”„ **Asynchronous Architecture**: Enhanced performance for complex multi-page crawling. +- ⚑ **Dynamic HTML Formatting**: New, fast HTML formatting for streamlined workflows. +- πŸ—‚οΈ **Large Dataset Optimization**: Improved caching for handling massive content sets. + +
+ +
+πŸ”Ž Extraction Capabilities + +- πŸ–ΌοΈ **Comprehensive Media Support**: Extracts images, audio, video, and responsive image formats like `srcset` and `picture`. +- πŸ“š **Advanced Content Chunking**: Topic-based, regex, sentence-level, and cosine clustering strategies. +- 🎯 **Precise Data Extraction**: Supports CSS selectors and keyword-based refinements. +- πŸ”— **All-Inclusive Link Crawling**: Extracts internal and external links. +- πŸ“ **Markdown Generation**: Enhanced markdown generator class for custom, clean, LLM-friendly outputs. +- 🏷️ **Metadata Extraction**: Fetches metadata directly from pages. + +
+ +
+🌐 Browser Integration + +- 🌍 **Multi-Browser Support**: Works with Chromium, Firefox, and WebKit. +- πŸ–₯️ **ManagedBrowser with Dynamic Config**: Flexible host/port control for tailored setups. +- βš™οΈ **Custom Browser Hooks**: Authentication, headers, and page modifications. +- πŸ•ΆοΈ **Stealth Mode**: Bypasses bot detection with advanced techniques. +- πŸ“Έ **Screenshots & JavaScript Execution**: Takes screenshots and executes custom JavaScript before crawling. + +
+ +
+πŸ“ Input/Output Flexibility + +- πŸ“‚ **Local & Raw HTML Crawling**: Directly processes `file://` paths and raw HTML. +- 🌐 **Custom Headers for LLM**: Tailored headers for enhanced AI interactions. +- πŸ› οΈ **Structured Output Options**: Supports JSON, cleaned HTML, and markdown outputs. + +
+ +
+πŸ”§ Utility & Debugging + +- πŸ›‘οΈ **Error Handling**: Robust error management for seamless execution. +- πŸ” **Session Management**: Handles complex, multi-page interactions. +- 🧹 **Utility Functions**: Enhanced sanitization and flexible extraction helpers. +- πŸ•°οΈ **Delayed Content Loading**: Improved handling of lazy-loading and dynamic content. + +
+ +
+πŸ” Security & Accessibility + +- πŸ•΅οΈ **Proxy Support**: Enables authenticated access for restricted pages. +- πŸšͺ **API Gateway**: Deploy as an API service with secure token authentication. +- 🌐 **CORS & Static Serving**: Enhanced support for filesystem-based caching and cross-origin requests. + +
+ +
+🌟 Community & Documentation + +- πŸ™Œ **Contributor Acknowledgments**: Recognition for pull requests and contributions. +- πŸ“– **Clear Documentation**: Simplified and updated for better onboarding and usage. + +
+ +
+🎯 Cutting-Edge Features + +- πŸ› οΈ **BM25-Based Markdown Filtering**: Extracts cleaner, context-relevant markdown. +- πŸ“š **LLM-Friendly Citations**: Auto-converts links to numbered citations with reference lists. +- πŸ“‘ **IFrame Content Extraction**: Comprehensive analysis for embedded content. +- πŸ•°οΈ **Flexible Content Retrieval**: Combines timing-based strategies for reliable extractions. + +
+ ## Installation πŸ› οΈ From d556dada9fb4003b42cf7d619ff44feef478cf2c Mon Sep 17 00:00:00 2001 From: UncleCode Date: Thu, 28 Nov 2024 13:07:33 +0800 Subject: [PATCH 2/8] docs: update README to keep details open for extraction capabilities, browser integration, input/output flexibility, utility & debugging, security & accessibility, community & documentation, and cutting-edge features --- README.md | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/README.md b/README.md index 16d154b5..cd643211 100644 --- a/README.md +++ b/README.md @@ -40,7 +40,7 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc -
+
πŸ”Ž Extraction Capabilities - πŸ–ΌοΈ **Comprehensive Media Support**: Extracts images, audio, video, and responsive image formats like `srcset` and `picture`. @@ -52,7 +52,7 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc
-
+
🌐 Browser Integration - 🌍 **Multi-Browser Support**: Works with Chromium, Firefox, and WebKit. @@ -63,7 +63,7 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc
-
+
πŸ“ Input/Output Flexibility - πŸ“‚ **Local & Raw HTML Crawling**: Directly processes `file://` paths and raw HTML. @@ -72,7 +72,7 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc
-
+
πŸ”§ Utility & Debugging - πŸ›‘οΈ **Error Handling**: Robust error management for seamless execution. @@ -82,7 +82,7 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc
-
+
πŸ” Security & Accessibility - πŸ•΅οΈ **Proxy Support**: Enables authenticated access for restricted pages. @@ -91,7 +91,7 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc
-
+
🌟 Community & Documentation - πŸ™Œ **Contributor Acknowledgments**: Recognition for pull requests and contributions. @@ -99,7 +99,7 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc
-
+
🎯 Cutting-Edge Features - πŸ› οΈ **BM25-Based Markdown Filtering**: Extracts cleaner, context-relevant markdown. From 3abb573142d5588a1fc5790e2731ca8641ca4a95 Mon Sep 17 00:00:00 2001 From: UncleCode Date: Thu, 28 Nov 2024 13:07:59 +0800 Subject: [PATCH 3/8] docs: update README for version 0.3.743 with improved formatting and contributor acknowledgments --- README.md | 12 +++++------- 1 file changed, 5 insertions(+), 7 deletions(-) diff --git a/README.md b/README.md index cd643211..e02d7ef8 100644 --- a/README.md +++ b/README.md @@ -13,13 +13,11 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc ## New in 0.3.743 ✨ -πŸš€ **Improved ManagedBrowser Configuration**: Dynamic host and port support for more flexible browser management. -πŸ“ **Enhanced Markdown Generation**: New generator class for better formatting and customization. -⚑ **Fast HTML Formatting**: Significantly optimized HTML formatting in the web crawler. -πŸ› οΈ **Utility & Sanitization Upgrades**: Improved sanitization and expanded utility functions for streamlined workflows. -πŸ‘₯ **Acknowledgments**: Added contributor details and pull request acknowledgments for better transparency. -πŸ“– **Documentation Updates**: Clearer usage scenarios and updated guidance for better user onboarding. -πŸ§ͺ **Test Adjustments**: Refined tests to align with recent class name changes. +- πŸš€ **Improved ManagedBrowser Configuration**: Dynamic host and port support for more flexible browser management. +- πŸ“ **Enhanced Markdown Generation**: New generator class for better formatting and customization. +- ⚑ **Fast HTML Formatting**: Significantly optimized HTML formatting in the web crawler. +- πŸ› οΈ **Utility & Sanitization Upgrades**: Improved sanitization and expanded utility functions for streamlined workflows. +- πŸ‘₯ **Acknowledgments**: Added contributor details and pull request acknowledgments for better transparency. ## Try it Now! 
From d583aa43ca1404788838820ebfb90d2e8ee8680d Mon Sep 17 00:00:00 2001 From: UncleCode Date: Thu, 28 Nov 2024 15:53:25 +0800 Subject: [PATCH 4/8] refactor: update cache handling in quickstart_async example to use CacheMode enum --- README.md | 470 +++++++++++++++--------------- docs/examples/quickstart_async.py | 95 +++--- 2 files changed, 296 insertions(+), 269 deletions(-) diff --git a/README.md b/README.md index e02d7ef8..5c50cdc5 100644 --- a/README.md +++ b/README.md @@ -29,94 +29,86 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc ## Features ✨
-πŸš€ Performance & Scalability - -- ⚑ **Blazing Fast Scraping**: Outperforms many paid services with cutting-edge optimization. -- πŸ”„ **Asynchronous Architecture**: Enhanced performance for complex multi-page crawling. -- ⚑ **Dynamic HTML Formatting**: New, fast HTML formatting for streamlined workflows. -- πŸ—‚οΈ **Large Dataset Optimization**: Improved caching for handling massive content sets. +πŸ“ Markdown Generation +- 🧹 **Clean Markdown**: Generates clean, structured Markdown with accurate formatting. +- 🎯 **Fit Markdown**: Heuristic-based filtering to remove noise and irrelevant parts for AI-friendly processing. +- πŸ”— **Citations and References**: Converts page links into a numbered reference list with clean citations. +- πŸ› οΈ **Custom Strategies**: Users can create their own Markdown generation strategies tailored to specific needs. +- πŸ“š **BM25 Algorithm**: Employs BM25-based filtering for extracting core information and removing irrelevant content.
-πŸ”Ž Extraction Capabilities +πŸ“Š Structured Data Extraction -- πŸ–ΌοΈ **Comprehensive Media Support**: Extracts images, audio, video, and responsive image formats like `srcset` and `picture`. -- πŸ“š **Advanced Content Chunking**: Topic-based, regex, sentence-level, and cosine clustering strategies. -- 🎯 **Precise Data Extraction**: Supports CSS selectors and keyword-based refinements. -- πŸ”— **All-Inclusive Link Crawling**: Extracts internal and external links. -- πŸ“ **Markdown Generation**: Enhanced markdown generator class for custom, clean, LLM-friendly outputs. -- 🏷️ **Metadata Extraction**: Fetches metadata directly from pages. +- πŸ€– **LLM-Driven Extraction**: Supports all LLMs (open-source and proprietary) for structured data extraction. +- 🧱 **Chunking Strategies**: Implements chunking (topic-based, regex, sentence-level) for targeted content processing. +- 🌌 **Cosine Similarity**: Find relevant content chunks based on user queries for semantic extraction. +- πŸ”Ž **CSS-Based Extraction**: Fast schema-based data extraction using XPath and CSS selectors. +- πŸ”§ **Schema Definition**: Define custom schemas for extracting structured JSON from repetitive patterns.
🌐 Browser Integration -- 🌍 **Multi-Browser Support**: Works with Chromium, Firefox, and WebKit. -- πŸ–₯️ **ManagedBrowser with Dynamic Config**: Flexible host/port control for tailored setups. -- βš™οΈ **Custom Browser Hooks**: Authentication, headers, and page modifications. -- πŸ•ΆοΈ **Stealth Mode**: Bypasses bot detection with advanced techniques. -- πŸ“Έ **Screenshots & JavaScript Execution**: Takes screenshots and executes custom JavaScript before crawling. +- πŸ–₯️ **Managed Browser**: Use user-owned browsers with full control, avoiding bot detection. +- πŸ”„ **Remote Browser Control**: Connect to Chrome Developer Tools Protocol for remote, large-scale data extraction. +- πŸ”’ **Session Management**: Preserve browser states and reuse them for multi-step crawling. +- 🧩 **Proxy Support**: Seamlessly connect to proxies with authentication for secure access. +- βš™οΈ **Full Browser Control**: Modify headers, cookies, user agents, and more for tailored crawling setups. +- 🌍 **Multi-Browser Support**: Compatible with Chromium, Firefox, and WebKit.
-πŸ“ Input/Output Flexibility +πŸ”Ž Crawling & Scraping -- πŸ“‚ **Local & Raw HTML Crawling**: Directly processes `file://` paths and raw HTML. -- 🌐 **Custom Headers for LLM**: Tailored headers for enhanced AI interactions. -- πŸ› οΈ **Structured Output Options**: Supports JSON, cleaned HTML, and markdown outputs. +- πŸ–ΌοΈ **Media Support**: Extract images, audio, videos, and responsive image formats like `srcset` and `picture`. +- πŸš€ **Dynamic Crawling**: Execute JS and wait for async or sync for dynamic content extraction. +- πŸ“Έ **Screenshots**: Capture page screenshots during crawling for debugging or analysis. +- πŸ“‚ **Raw Data Crawling**: Directly process raw HTML (`raw:`) or local files (`file://`). +- πŸ”— **Comprehensive Link Extraction**: Extracts internal, external links, and embedded iframe content. +- πŸ› οΈ **Customizable Hooks**: Define hooks at every step to customize crawling behavior. +- πŸ’Ύ **Caching**: Cache data for improved speed and to avoid redundant fetches. +- πŸ“„ **Metadata Extraction**: Retrieve structured metadata from web pages. +- πŸ“‘ **IFrame Content Extraction**: Seamless extraction from embedded iframe content.
-πŸ”§ Utility & Debugging +πŸš€ Deployment +- 🐳 **Dockerized Setup**: Optimized Docker image with API server for easy deployment. +- πŸ”„ **API Gateway**: One-click deployment with secure token authentication for API-based workflows. +- 🌐 **Scalable Architecture**: Designed for mass-scale production and optimized server performance. +- βš™οΈ **DigitalOcean Deployment**: Ready-to-deploy configurations for DigitalOcean and similar platforms. + +
+ +
+🎯 Additional Features + +- πŸ•ΆοΈ **Stealth Mode**: Avoid bot detection by mimicking real users. +- 🏷️ **Tag-Based Content Extraction**: Refine crawling based on custom tags, headers, or metadata. +- πŸ”— **Link Analysis**: Extract and analyze all links for detailed data exploration. - πŸ›‘οΈ **Error Handling**: Robust error management for seamless execution. -- πŸ” **Session Management**: Handles complex, multi-page interactions. -- 🧹 **Utility Functions**: Enhanced sanitization and flexible extraction helpers. -- πŸ•°οΈ **Delayed Content Loading**: Improved handling of lazy-loading and dynamic content. +- πŸ” **CORS & Static Serving**: Supports filesystem-based caching and cross-origin requests. +- πŸ“– **Clear Documentation**: Simplified and updated guides for onboarding and advanced usage. +- πŸ™Œ **Community Recognition**: Acknowledges contributors and pull requests for transparency.
-
-πŸ” Security & Accessibility - -- πŸ•΅οΈ **Proxy Support**: Enables authenticated access for restricted pages. -- πŸšͺ **API Gateway**: Deploy as an API service with secure token authentication. -- 🌐 **CORS & Static Serving**: Enhanced support for filesystem-based caching and cross-origin requests. - -
- -
-🌟 Community & Documentation - -- πŸ™Œ **Contributor Acknowledgments**: Recognition for pull requests and contributions. -- πŸ“– **Clear Documentation**: Simplified and updated for better onboarding and usage. - -
- -
-🎯 Cutting-Edge Features - -- πŸ› οΈ **BM25-Based Markdown Filtering**: Extracts cleaner, context-relevant markdown. -- πŸ“š **LLM-Friendly Citations**: Auto-converts links to numbered citations with reference lists. -- πŸ“‘ **IFrame Content Extraction**: Comprehensive analysis for embedded content. -- πŸ•°οΈ **Flexible Content Retrieval**: Combines timing-based strategies for reliable extractions. - -
- - ## Installation πŸ› οΈ Crawl4AI offers flexible installation options to suit various use cases. You can install it as a Python package or use Docker. -### Using pip 🐍 +
+🐍 Using pip Choose the installation option that best fits your needs: -#### Basic Installation +### Basic Installation For basic web crawling and scraping tasks: @@ -126,7 +118,7 @@ pip install crawl4ai By default, this will install the asynchronous version of Crawl4AI, using Playwright for web crawling. -πŸ‘‰ Note: When you install Crawl4AI, the setup script should automatically install and set up Playwright. However, if you encounter any Playwright-related errors, you can manually install it using one of these methods: +πŸ‘‰ **Note**: When you install Crawl4AI, the setup script should automatically install and set up Playwright. However, if you encounter any Playwright-related errors, you can manually install it using one of these methods: 1. Through the command line: @@ -142,15 +134,19 @@ By default, this will install the asynchronous version of Crawl4AI, using Playwr This second method has proven to be more reliable in some cases. -#### Installation with Synchronous Version +--- -If you need the synchronous version using Selenium: +### Installation with Synchronous Version + +The sync version is deprecated and will be removed in future versions. If you need the synchronous version using Selenium: ```bash pip install crawl4ai[sync] ``` -#### Development Installation +--- + +### Development Installation For contributors who plan to modify the source code: @@ -159,7 +155,9 @@ git clone https://github.com/unclecode/crawl4ai.git cd crawl4ai pip install -e . # Basic installation in editable mode ``` + Install optional features: + ```bash pip install -e ".[torch]" # With PyTorch features pip install -e ".[transformer]" # With Transformer features @@ -168,7 +166,10 @@ pip install -e ".[sync]" # With synchronous crawling (Selenium) pip install -e ".[all]" # Install all optional features ``` -## One-Click Deployment πŸš€ +
+ +
+πŸš€ One-Click Deployment Deploy your own instance of Crawl4AI with one click: @@ -179,14 +180,19 @@ Deploy your own instance of Crawl4AI with one click: The deploy will: - Set up a Docker container with Crawl4AI - Configure Playwright and all dependencies -- Start the FastAPI server on port 11235 +- Start the FastAPI server on port `11235` - Set up health checks and auto-deployment -### Using Docker 🐳 +
+ +
+🐳 Using Docker Crawl4AI is available as Docker images for easy deployment. You can either pull directly from Docker Hub (recommended) or build from the repository. -#### Option 1: Docker Hub (Recommended) +--- + +### Option 1: Docker Hub (Recommended) ```bash # Pull and run from Docker Hub (choose one): @@ -204,7 +210,9 @@ docker run --platform linux/arm64 -p 11235:11235 unclecode/crawl4ai:basic docker run --shm-size=2gb -p 11235:11235 unclecode/crawl4ai:basic ``` -#### Option 2: Build from Repository +--- + +### Option 2: Build from Repository ```bash # Clone the repository @@ -226,7 +234,12 @@ docker build -t crawl4ai:local \ docker run -p 11235:11235 crawl4ai:local ``` -Quick test (works for both options): +--- + +### Quick Test + +Run a quick test (works for both Docker options): + ```python import requests @@ -243,143 +256,149 @@ result = requests.get(f"http://localhost:11235/task/{task_id}") For advanced configuration, environment variables, and usage examples, see our [Docker Deployment Guide](https://crawl4ai.com/mkdocs/basic/docker-deployment/). +
## Quick Start πŸš€

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CacheMode

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(url="https://www.nbcnews.com/business")
        print(result.markdown_v2.raw_markdown)  # Soon this will change to result.markdown

if __name__ == "__main__":
    asyncio.run(main())
```

## Advanced Usage Examples πŸ”¬

You can browse the project's examples in the [docs/examples](https://github.com/unclecode/crawl4ai/tree/main/docs/examples) directory; some popular examples are shared below.

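The BM25-based filtering this README mentions (`BM25ContentFilter` with a `bm25_threshold`) ranks text chunks by relevance to a query. Independent of Crawl4AI's actual implementation, the core Okapi BM25 scoring can be sketched in plain Python:

```python
import math
from collections import Counter

def bm25_scores(chunks, query, k1=1.5, b=0.75):
    """Score each text chunk against a query with Okapi BM25."""
    docs = [chunk.lower().split() for chunk in chunks]
    avgdl = sum(len(d) for d in docs) / len(docs)  # average chunk length
    n = len(docs)
    scores = []
    for doc in docs:
        tf = Counter(doc)
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for d in docs if term in d)            # document frequency
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)   # smoothed IDF
            f = tf[term]
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(score)
    return scores

chunks = [
    "install crawl4ai with pip and playwright",
    "the quick brown fox jumps over the lazy dog",
    "crawl4ai generates clean markdown for llm pipelines",
]
scores = bm25_scores(chunks, "crawl4ai markdown")
best = chunks[scores.index(max(scores))]
print(best)
```

A content filter built on this idea keeps only the chunks whose score clears a threshold, which is the intuition behind fit markdown.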
+πŸ–₯️ Heuristic Markdown Generation with Clean and Fit Markdown ```python import asyncio -from crawl4ai import AsyncWebCrawler +from crawl4ai import AsyncWebCrawler, CacheMode +from crawl4ai.content_filter_strategy import BM25ContentFilter +from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator async def main(): - async with AsyncWebCrawler(verbose=True) as crawler: - js_code = ["const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"] + async with AsyncWebCrawler( + headless=True, + verbose=True, + ) as crawler: result = await crawler.arun( - url="https://www.nbcnews.com/business", - js_code=js_code, - css_selector=".wide-tease-item__description", - bypass_cache=True + url="https://docs.micronaut.io/4.7.6/guide/", + cache_mode=CacheMode.ENABLED, + markdown_generator=DefaultMarkdownGenerator( + content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0) + ), ) - print(result.extracted_content) + print(len(result.markdown)) + print(len(result.fit_markdown)) + print(len(result.markdown_v2.fit_markdown)) if __name__ == "__main__": asyncio.run(main()) ``` -### Using a Proxy +
+ +
+πŸ–₯️ Structured Data Extraction and Executing JavaScript ```python import asyncio -from crawl4ai import AsyncWebCrawler - -async def main(): - async with AsyncWebCrawler(verbose=True, proxy="http://127.0.0.1:7890") as crawler: - result = await crawler.arun( - url="https://www.nbcnews.com/business", - bypass_cache=True - ) - print(result.markdown) - -if __name__ == "__main__": - asyncio.run(main()) -``` - -### Extracting Structured Data without LLM - -The `JsonCssExtractionStrategy` allows for precise extraction of structured data from web pages using CSS selectors. - -```python -import asyncio -import json -from crawl4ai import AsyncWebCrawler +from crawl4ai import AsyncWebCrawler, CacheMode from crawl4ai.extraction_strategy import JsonCssExtractionStrategy +import json -async def extract_news_teasers(): +async def main(): schema = { - "name": "News Teaser Extractor", - "baseSelector": ".wide-tease-item__wrapper", - "fields": [ - { - "name": "category", - "selector": ".unibrow span[data-testid='unibrow-text']", - "type": "text", - }, - { - "name": "headline", - "selector": ".wide-tease-item__headline", - "type": "text", - }, - { - "name": "summary", - "selector": ".wide-tease-item__description", - "type": "text", - }, - { - "name": "time", - "selector": "[data-testid='wide-tease-date']", - "type": "text", - }, - { - "name": "image", - "type": "nested", - "selector": "picture.teasePicture img", - "fields": [ - {"name": "src", "type": "attribute", "attribute": "src"}, - {"name": "alt", "type": "attribute", "attribute": "alt"}, - ], - }, - { - "name": "link", - "selector": "a[href]", - "type": "attribute", - "attribute": "href", - }, - ], - } + "name": "KidoCode Courses", + "baseSelector": "section.charge-methodology .w-tab-content > div", + "fields": [ + { + "name": "section_title", + "selector": "h3.heading-50", + "type": "text", + }, + { + "name": "section_description", + "selector": ".charge-content", + "type": "text", + }, + { + "name": "course_name", + 
"selector": ".text-block-93", + "type": "text", + }, + { + "name": "course_description", + "selector": ".course-content-text", + "type": "text", + }, + { + "name": "course_icon", + "selector": ".image-92", + "type": "attribute", + "attribute": "src" + } + ] +} extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True) - async with AsyncWebCrawler(verbose=True) as crawler: + async with AsyncWebCrawler( + headless=False, + verbose=True + ) as crawler: + + # Create the JavaScript that handles clicking multiple times + js_click_tabs = """ + (async () => { + const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div"); + + for(let tab of tabs) { + // scroll to the tab + tab.scrollIntoView(); + tab.click(); + // Wait for content to load and animations to complete + await new Promise(r => setTimeout(r, 500)); + } + })(); + """ + result = await crawler.arun( - url="https://www.nbcnews.com/business", - extraction_strategy=extraction_strategy, - bypass_cache=True, + url="https://www.kidocode.com/degrees/technology", + extraction_strategy=JsonCssExtractionStrategy(schema, verbose=True), + js_code=[js_click_tabs], + cache_mode=CacheMode.BYPASS ) - assert result.success, "Failed to crawl the page" + companies = json.loads(result.extracted_content) + print(f"Successfully extracted {len(companies)} companies") + print(json.dumps(companies[0], indent=2)) - news_teasers = json.loads(result.extracted_content) - print(f"Successfully extracted {len(news_teasers)} news teasers") - print(json.dumps(news_teasers[0], indent=2)) if __name__ == "__main__": - asyncio.run(extract_news_teasers()) + asyncio.run(main()) ``` -For more advanced usage examples, check out our [Examples](https://crawl4ai.com/mkdocs/extraction/css-advanced/) section in the documentation. +
-### Extracting Structured Data with OpenAI +
+πŸ€– Extracting Structured Data with LLMs ```python import os import asyncio -from crawl4ai import AsyncWebCrawler +from crawl4ai import AsyncWebCrawler, CacheMode from crawl4ai.extraction_strategy import LLMExtractionStrategy from pydantic import BaseModel, Field @@ -394,6 +413,8 @@ async def main(): url='https://openai.com/api/pricing/', word_count_threshold=1, extraction_strategy=LLMExtractionStrategy( + # Here you can use any provider that Litellm library supports, for instance: ollama/qwen2 + # provider="ollama/qwen2", api_token="no-token", provider="openai/gpt-4o", api_token=os.getenv('OPENAI_API_KEY'), schema=OpenAIModelFee.schema(), extraction_type="schema", @@ -401,7 +422,7 @@ async def main(): Do not miss any models in the entire content. One extracted model JSON format should look like this: {"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}.""" ), - bypass_cache=True, + cache_mode=CacheMode.BYPASS, ) print(result.extracted_content) @@ -409,105 +430,86 @@ if __name__ == "__main__": asyncio.run(main()) ``` -### Session Management and Dynamic Content Crawling +
-Crawl4AI excels at handling complex scenarios, such as crawling multiple pages with dynamic content loaded via JavaScript. Here's an example of crawling GitHub commits across multiple pages: +
+πŸ€– Using You own Browswer with Custome User Profile ```python -import asyncio -import re -from bs4 import BeautifulSoup +import os, sys +from pathlib import Path +import asyncio, time from crawl4ai import AsyncWebCrawler -async def crawl_typescript_commits(): - first_commit = "" - async def on_execution_started(page): - nonlocal first_commit - try: - while True: - await page.wait_for_selector('li.Box-sc-g0xbh4-0 h4') - commit = await page.query_selector('li.Box-sc-g0xbh4-0 h4') - commit = await commit.evaluate('(element) => element.textContent') - commit = re.sub(r'\s+', '', commit) - if commit and commit != first_commit: - first_commit = commit - break - await asyncio.sleep(0.5) - except Exception as e: - print(f"Warning: New content didn't appear after JavaScript execution: {e}") +async def test_news_crawl(): + # Create a persistent user data directory + user_data_dir = os.path.join(Path.home(), ".crawl4ai", "browser_profile") + os.makedirs(user_data_dir, exist_ok=True) - async with AsyncWebCrawler(verbose=True) as crawler: - crawler.crawler_strategy.set_hook('on_execution_started', on_execution_started) - - url = "https://github.com/microsoft/TypeScript/commits/main" - session_id = "typescript_commits_session" - all_commits = [] - - js_next_page = """ - const button = document.querySelector('a[data-testid="pagination-next-button"]'); - if (button) button.click(); - """ - - for page in range(3): # Crawl 3 pages - result = await crawler.arun( - url=url, - session_id=session_id, - css_selector="li.Box-sc-g0xbh4-0", - js=js_next_page if page > 0 else None, - bypass_cache=True, - js_only=page > 0 - ) - - assert result.success, f"Failed to crawl page {page + 1}" - - soup = BeautifulSoup(result.cleaned_html, 'html.parser') - commits = soup.select("li") - all_commits.extend(commits) - - print(f"Page {page + 1}: Found {len(commits)} commits") - - await crawler.crawler_strategy.kill_session(session_id) - print(f"Successfully crawled {len(all_commits)} commits across 3 
pages") - -if __name__ == "__main__": - asyncio.run(crawl_typescript_commits()) + async with AsyncWebCrawler( + verbose=True, + headless=True, + user_data_dir=user_data_dir, + use_persistent_context=True, + headers={ + "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8", + "Accept-Language": "en-US,en;q=0.5", + "Accept-Encoding": "gzip, deflate, br", + "DNT": "1", + "Connection": "keep-alive", + "Upgrade-Insecure-Requests": "1", + "Sec-Fetch-Dest": "document", + "Sec-Fetch-Mode": "navigate", + "Sec-Fetch-Site": "none", + "Sec-Fetch-User": "?1", + "Cache-Control": "max-age=0", + } + ) as crawler: + url = "ADDRESS_OF_A_CHALLENGING_WEBSITE" + + result = await crawler.arun( + url, + cache_mode=CacheMode.BYPASS, + magic=True, + ) + + print(f"Successfully crawled {url}") + print(f"Content length: {len(result.markdown)}") ``` -This example demonstrates Crawl4AI's ability to handle complex scenarios where content is loaded asynchronously. It crawls multiple pages of GitHub commits, executing JavaScript to load new content and using custom hooks to ensure data is loaded before proceeding. - -For more advanced usage examples, check out our [Examples](https://crawl4ai.com/mkdocs/tutorial/episode_12_Session-Based_Crawling_for_Dynamic_Websites/) section in the documentation.
## Speed Comparison πŸš€ +A test was conducted on **[NBC News - Business Section](https://www.nbcnews.com/business)** to compare Crawl4AI and Firecrawl, highlighting Crawl4AI's speed, efficiency, and advanced features. -Crawl4AI is designed with speed as a primary focus. Our goal is to provide the fastest possible response with high-quality data extraction, minimizing abstractions between the data and the user. +--- -We've conducted a speed comparison between Crawl4AI and Firecrawl, a paid service. The results demonstrate Crawl4AI's superior performance: +#### Results Summary -```bash -Firecrawl: -Time taken: 7.02 seconds -Content length: 42074 characters -Images found: 49 +| **Method** | **Time Taken** | **Markdown Length** | **Fit Markdown** | **Images Found** | +|--------------------------------|----------------|----------------------|-------------------|------------------| +| **Firecrawl** | 6.04 seconds | 38,382 characters | - | 52 | +| **Crawl4AI (Simple Crawl)** | 1.06 seconds | 42,027 characters | - | 52 | +| **Crawl4AI (Markdown Plus)** | 1.30 seconds | 54,342 characters | 11,119 characters | 52 | +| **Crawl4AI (JavaScript)** | 1.56 seconds | 75,869 characters | 13,406 characters | 92 | -Crawl4AI (simple crawl): -Time taken: 1.60 seconds -Content length: 18238 characters -Images found: 49 +--- -Crawl4AI (with JavaScript execution): -Time taken: 4.64 seconds -Content length: 40869 characters -Images found: 89 -``` +#### Key Takeaways -As you can see, Crawl4AI outperforms Firecrawl significantly: +1. **Superior Speed**: Crawl4AI processes even advanced crawls up to **6x faster** than Firecrawl, with times as low as **1.06 seconds**. +2. **Rich Content Extraction**: Crawl4AI consistently captures more comprehensive content, producing a **Markdown Plus** output of **54,342 characters**, compared to Firecrawl's **38,382 characters**. +3. 
**AI-Optimized Output**: With **Fit Markdown**, Crawl4AI removes noise to produce concise, AI-friendly outputs (**11,119–13,406 characters**) tailored for LLM workflows. +4. **Dynamic Content Handling**: Using JavaScript execution, Crawl4AI extracted **92 images** and enriched content dynamically loaded via β€œLoad More” buttonsβ€”unmatched by Firecrawl. -- Simple crawl: Crawl4AI is over 4 times faster than Firecrawl. -- With JavaScript execution: Even when executing JavaScript to load more content (doubling the number of images found), Crawl4AI is still faster than Firecrawl's simple crawl. +--- -You can find the full comparison code in our repository at `docs/examples/crawl4ai_vs_firecrawl.py`. +#### Conclusion + +Crawl4AI outshines Firecrawl in speed, completeness, and flexibility. Its advanced features, including **Markdown Plus**, **Fit Markdown**, and **dynamic content handling**, make it the ideal choice for AI-ready web crawling. Whether you're targeting rich structured data or handling complex dynamic websites, Crawl4AI delivers unmatched performance and precision. + +You can find the full comparison code in our repository at [docs/examples/quickstart_async.py](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/quickstart_async.py). 
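
The Fit Markdown figures above come from BM25-style relevance filtering: each text block is scored against query terms, and low-scoring boilerplate is dropped. The toy scorer below sketches the idea only; it is not the library's `BM25ContentFilter`, and the sample chunk texts are made up:

```python
import math
import re

def bm25_filter(chunks, query, k1=1.2, b=0.75, threshold=0.0):
    """Keep text chunks whose Okapi BM25 score against `query` exceeds `threshold`."""
    docs = [re.findall(r"\w+", chunk.lower()) for chunk in chunks]
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    terms = re.findall(r"\w+", query.lower())

    def idf(term):
        # Smoothed inverse document frequency
        df = sum(1 for d in docs if term in d)
        return math.log((n - df + 0.5) / (df + 0.5) + 1)

    kept = []
    for chunk, doc in zip(chunks, docs):
        score = sum(
            idf(t) * doc.count(t) * (k1 + 1)
            / (doc.count(t) + k1 * (1 - b + b * len(doc) / avgdl))
            for t in terms if t in doc
        )
        if score > threshold:
            kept.append(chunk)
    return kept

blocks = [
    "Quarterly earnings rose sharply as cloud revenue grew.",
    "Subscribe to our newsletter and accept all cookies.",
    "Markets rallied after the strong earnings report.",
]
print(bm25_filter(blocks, "earnings revenue"))  # drops the newsletter boilerplate
```

Raising the threshold makes the filter stricter, which is the knob the `bm25_threshold` parameter exposes in the real filter.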
## Documentation πŸ“š diff --git a/docs/examples/quickstart_async.py b/docs/examples/quickstart_async.py index d67a8c30..e50fe456 100644 --- a/docs/examples/quickstart_async.py +++ b/docs/examples/quickstart_async.py @@ -13,7 +13,9 @@ import re from typing import Dict, List from bs4 import BeautifulSoup from pydantic import BaseModel, Field -from crawl4ai import AsyncWebCrawler +from crawl4ai import AsyncWebCrawler, CacheMode +from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator +from crawl4ai.content_filter_strategy import BM25ContentFilter from crawl4ai.extraction_strategy import ( JsonCssExtractionStrategy, LLMExtractionStrategy, @@ -51,7 +53,7 @@ async def simple_example_with_running_js_code(): url="https://www.nbcnews.com/business", js_code=js_code, # wait_for=wait_for, - bypass_cache=True, + cache_mode=CacheMode.BYPASS, ) print(result.markdown[:500]) # Print first 500 characters @@ -61,7 +63,7 @@ async def simple_example_with_css_selector(): result = await crawler.arun( url="https://www.nbcnews.com/business", css_selector=".wide-tease-item__description", - bypass_cache=True, + cache_mode=CacheMode.BYPASS, ) print(result.markdown[:500]) # Print first 500 characters @@ -132,7 +134,7 @@ async def extract_structured_data_using_llm(provider: str, api_token: str = None {"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}.""", extra_args=extra_args ), - bypass_cache=True, + cache_mode=CacheMode.BYPASS, ) print(result.extracted_content) @@ -166,7 +168,7 @@ async def extract_structured_data_using_css_extractor(): result = await crawler.arun( url="https://www.coinbase.com/explore", extraction_strategy=extraction_strategy, - bypass_cache=True, + cache_mode=CacheMode.BYPASS, ) assert result.success, "Failed to crawl the page" @@ -213,7 +215,7 @@ async def crawl_dynamic_content_pages_method_1(): session_id=session_id, css_selector="li.Box-sc-g0xbh4-0", js=js_next_page if page > 0 else None, - 
bypass_cache=True, + cache_mode=CacheMode.BYPASS, js_only=page > 0, headless=False, ) @@ -282,7 +284,7 @@ async def crawl_dynamic_content_pages_method_2(): extraction_strategy=extraction_strategy, js_code=js_next_page_and_wait if page > 0 else None, js_only=page > 0, - bypass_cache=True, + cache_mode=CacheMode.BYPASS, headless=False, ) @@ -343,7 +345,7 @@ async def crawl_dynamic_content_pages_method_3(): js_code=js_next_page if page > 0 else None, wait_for=wait_for if page > 0 else None, js_only=page > 0, - bypass_cache=True, + cache_mode=CacheMode.BYPASS, headless=False, ) @@ -384,7 +386,7 @@ async def crawl_with_user_simultion(): url = "YOUR-URL-HERE" result = await crawler.arun( url=url, - bypass_cache=True, + cache_mode=CacheMode.BYPASS, magic = True, # Automatically detects and removes overlays, popups, and other elements that block content # simulate_user = True,# Causes a series of random mouse movements and clicks to simulate user interaction # override_navigator = True # Overrides the navigator object to make it look like a real user @@ -408,7 +410,7 @@ async def speed_comparison(): params={'formats': ['markdown', 'html']} ) end = time.time() - print("Firecrawl (simulated):") + print("Firecrawl:") print(f"Time taken: {end - start:.2f} seconds") print(f"Content length: {len(scrape_status['markdown'])} characters") print(f"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}") @@ -420,7 +422,7 @@ async def speed_comparison(): result = await crawler.arun( url="https://www.nbcnews.com/business", word_count_threshold=0, - bypass_cache=True, + cache_mode=CacheMode.BYPASS, verbose=False, ) end = time.time() @@ -430,6 +432,25 @@ async def speed_comparison(): print(f"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}") print() + # Crawl4AI with advanced content filtering + start = time.time() + result = await crawler.arun( + url="https://www.nbcnews.com/business", + word_count_threshold=0, + markdown_generator=DefaultMarkdownGenerator( 
+ content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0) + ), + cache_mode=CacheMode.BYPASS, + verbose=False, + ) + end = time.time() + print("Crawl4AI (Markdown Plus):") + print(f"Time taken: {end - start:.2f} seconds") + print(f"Content length: {len(result.markdown_v2.raw_markdown)} characters") + print(f"Fit Markdown: {len(result.markdown_v2.fit_markdown)} characters") + print(f"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}") + print() + # Crawl4AI with JavaScript execution start = time.time() result = await crawler.arun( @@ -438,13 +459,17 @@ async def speed_comparison(): "const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();" ], word_count_threshold=0, - bypass_cache=True, + cache_mode=CacheMode.BYPASS, + markdown_generator=DefaultMarkdownGenerator( + content_filter=BM25ContentFilter(user_query=None, bm25_threshold=1.0) + ), verbose=False, ) end = time.time() print("Crawl4AI (with JavaScript execution):") print(f"Time taken: {end - start:.2f} seconds") print(f"Content length: {len(result.markdown)} characters") + print(f"Fit Markdown: {len(result.markdown_v2.fit_markdown)} characters") print(f"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}") print("\nNote on Speed Comparison:") @@ -483,7 +508,7 @@ async def generate_knowledge_graph(): url = "https://paulgraham.com/love.html" result = await crawler.arun( url=url, - bypass_cache=True, + cache_mode=CacheMode.BYPASS, extraction_strategy=extraction_strategy, # magic=True ) @@ -496,7 +521,7 @@ async def fit_markdown_remove_overlay(): url = "https://janineintheworld.com/places-to-visit-in-central-mexico" result = await crawler.arun( url=url, - bypass_cache=True, + cache_mode=CacheMode.BYPASS, word_count_threshold = 10, remove_overlay_elements=True, screenshot = True @@ -509,31 +534,31 @@ async def fit_markdown_remove_overlay(): async def main(): - await 
simple_crawl() - await simple_example_with_running_js_code() - await simple_example_with_css_selector() - await use_proxy() - await capture_and_save_screenshot("https://www.example.com", os.path.join(__location__, "tmp/example_screenshot.jpg")) - await extract_structured_data_using_css_extractor() + # await simple_crawl() + # await simple_example_with_running_js_code() + # await simple_example_with_css_selector() + # await use_proxy() + # await capture_and_save_screenshot("https://www.example.com", os.path.join(__location__, "tmp/example_screenshot.jpg")) + # await extract_structured_data_using_css_extractor() - # LLM extraction examples - await extract_structured_data_using_llm() - await extract_structured_data_using_llm("huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct", os.getenv("HUGGINGFACE_API_KEY")) - await extract_structured_data_using_llm("openai/gpt-4o", os.getenv("OPENAI_API_KEY")) - await extract_structured_data_using_llm("ollama/llama3.2") + # # LLM extraction examples + # await extract_structured_data_using_llm() + # await extract_structured_data_using_llm("huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct", os.getenv("HUGGINGFACE_API_KEY")) + # await extract_structured_data_using_llm("openai/gpt-4o", os.getenv("OPENAI_API_KEY")) + # await extract_structured_data_using_llm("ollama/llama3.2") - # You always can pass custom headers to the extraction strategy - custom_headers = { - "Authorization": "Bearer your-custom-token", - "X-Custom-Header": "Some-Value" - } - await extract_structured_data_using_llm(extra_headers=custom_headers) + # # You always can pass custom headers to the extraction strategy + # custom_headers = { + # "Authorization": "Bearer your-custom-token", + # "X-Custom-Header": "Some-Value" + # } + # await extract_structured_data_using_llm(extra_headers=custom_headers) - # await crawl_dynamic_content_pages_method_1() - # await crawl_dynamic_content_pages_method_2() - await crawl_dynamic_content_pages_method_3() + # # await 
crawl_dynamic_content_pages_method_1() + # # await crawl_dynamic_content_pages_method_2() + # await crawl_dynamic_content_pages_method_3() - await crawl_custom_browser_type() + # await crawl_custom_browser_type() await speed_comparison() From a69f7a953198df1d9d93420161794aafe3fcffcb Mon Sep 17 00:00:00 2001 From: UncleCode Date: Thu, 28 Nov 2024 16:31:41 +0800 Subject: [PATCH 5/8] fix: correct typo in function documentation for clarity and accuracy --- README.md | 184 +++++++++++++++++++++++++++++++----------------------- 1 file changed, 105 insertions(+), 79 deletions(-) diff --git a/README.md b/README.md index 5c50cdc5..c4ef1bd3 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,7 @@ # πŸ”₯πŸ•·οΈ Crawl4AI: LLM Friendly Web Crawler & Scraper +[✨ Check out what's new in the latest update!](#new-in-03743) + unclecode%2Fcrawl4ai | Trendshift [![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers) @@ -9,26 +11,47 @@ [![GitHub Pull Requests](https://img.shields.io/github/issues-pr/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/pulls) [![License](https://img.shields.io/github/license/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/blob/main/LICENSE) -Crawl4AI simplifies asynchronous web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. πŸ†“πŸŒ +## πŸ”₯ Crawl4AI: Crawl Smarter, Faster, Freely. For AI. -## New in 0.3.743 ✨ +Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant community. It delivers blazing-fast, AI-ready web crawling tailored for LLMs, AI agents, and data pipelines. Open source, flexible, and built for real-time performance, Crawl4AI empowers developers with unmatched speed, precision, and deployment ease. -- πŸš€ **Improved ManagedBrowser Configuration**: Dynamic host and port support for more flexible browser management. 
-- πŸ“ **Enhanced Markdown Generation**: New generator class for better formatting and customization. -- ⚑ **Fast HTML Formatting**: Significantly optimized HTML formatting in the web crawler. -- πŸ› οΈ **Utility & Sanitization Upgrades**: Improved sanitization and expanded utility functions for streamlined workflows. -- πŸ‘₯ **Acknowledgments**: Added contributor details and pull request acknowledgments for better transparency. +[✨ Check out what's new in the latest update!](#new-in-03743) + +## 🧐 Why Crawl4AI? + +1. **Built for LLMs**: Creates **smart, concise Markdown** optimized for applications like Retrieval-Augmented Generation (RAG) and fine-tuning. +2. **Lightning Fast**: Delivers results **6x faster** than competitors with real-time, cost-efficient performance. +3. **Flexible Browser Control**: Offers session management, proxies, and custom hooks for precise, seamless data access. +4. **Heuristic Intelligence**: Leverages **advanced algorithms** to extract data efficiently, reducing reliance on costly language models. +5. **Open Source & Deployable**: 100% open-source with no API keys or registration required-ready for **Docker and cloud integration**. +6. **Thriving Community**: Actively maintained by a vibrant developer community and the **#1 trending GitHub repository** across all languages. -## Try it Now! +## πŸš€ Quick Start -✨ Play around with this [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SgRPrByQLzjRfwoRNq1wSGE9nYY_EE8C?usp=sharing) +1. Install Crawl4AI: +```bash +pip install crawl4ai +``` -✨ Visit our [Documentation Website](https://crawl4ai.com/mkdocs/) +2. 
Run a simple web crawl:
+```python
+import asyncio
+from crawl4ai import AsyncWebCrawler, CacheMode

-## Features ✨

+async def main():
+    async with AsyncWebCrawler(verbose=True) as crawler:
+        result = await crawler.arun(url="https://www.nbcnews.com/business")
+        # Soon this will change to result.markdown
+        print(result.markdown_v2.raw_markdown)

-
+if __name__ == "__main__": + asyncio.run(main()) +``` + +## ✨ Features + +
πŸ“ Markdown Generation - 🧹 **Clean Markdown**: Generates clean, structured Markdown with accurate formatting. @@ -38,7 +61,7 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc - πŸ“š **BM25 Algorithm**: Employs BM25-based filtering for extracting core information and removing irrelevant content.
-
+
πŸ“Š Structured Data Extraction - πŸ€– **LLM-Driven Extraction**: Supports all LLMs (open-source and proprietary) for structured data extraction. @@ -49,7 +72,7 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc
-
+
🌐 Browser Integration - πŸ–₯️ **Managed Browser**: Use user-owned browsers with full control, avoiding bot detection. @@ -61,7 +84,7 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc
-
+
πŸ”Ž Crawling & Scraping - πŸ–ΌοΈ **Media Support**: Extract images, audio, videos, and responsive image formats like `srcset` and `picture`. @@ -76,7 +99,7 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc
-
+
πŸš€ Deployment - 🐳 **Dockerized Setup**: Optimized Docker image with API server for easy deployment. @@ -99,7 +122,54 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc
-## Installation πŸ› οΈ + + +## Try it Now! + +✨ Play around with this [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SgRPrByQLzjRfwoRNq1wSGE9nYY_EE8C?usp=sharing) + +✨ Visit our [Documentation Website](https://crawl4ai.com/mkdocs/) + + +## πŸš€ Speed Comparison + +A test was conducted on **[NBC News - Business Section](https://www.nbcnews.com/business)** to compare Crawl4AI and Firecrawl, highlighting Crawl4AI's speed, efficiency, and advanced features. + +
+πŸ“Š Results Summary + +#### Results Summary + +| **Method** | **Time Taken** | **Markdown Length** | **Fit Markdown** | **Images Found** | +|--------------------------------|----------------|----------------------|-------------------|------------------| +| **Firecrawl** | 6.04 seconds | 38,382 characters | - | 52 | +| **Crawl4AI (Simple Crawl)** | 1.06 seconds | 42,027 characters | - | 52 | +| **Crawl4AI (Markdown Plus)** | 1.30 seconds | 54,342 characters | 11,119 characters | 52 | +| **Crawl4AI (JavaScript)** | 1.56 seconds | 75,869 characters | 13,406 characters | 92 | + +
+ +
+⚑ Key Takeaways + +1. **Superior Speed**: Crawl4AI processes even advanced crawls up to **6x faster** than Firecrawl, with times as low as **1.06 seconds**. +2. **Rich Content Extraction**: Crawl4AI consistently captures more comprehensive content, producing a **Markdown Plus** output of **54,342 characters**, compared to Firecrawl's **38,382 characters**. +3. **AI-Optimized Output**: With **Fit Markdown**, Crawl4AI removes noise to produce concise, AI-friendly outputs (**11,119–13,406 characters**) tailored for LLM workflows. +4. **Dynamic Content Handling**: Using JavaScript execution, Crawl4AI extracted **92 images** and enriched content dynamically loaded via β€œLoad More” buttonsβ€”unmatched by Firecrawl. + +
+ +
+🏁 Conclusion + +Crawl4AI outshines Firecrawl in speed, completeness, and flexibility. Its advanced features, including **Markdown Plus**, **Fit Markdown**, and **dynamic content handling**, make it the ideal choice for AI-ready web crawling. Whether you're targeting rich structured data or handling complex dynamic websites, Crawl4AI delivers unmatched performance and precision. + +You can find the full comparison code in our repository at [docs/examples/quickstart_async.py](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/quickstart_async.py). + +
+ + +## πŸ› οΈ Installation πŸ› οΈ Crawl4AI offers flexible installation options to suit various use cases. You can install it as a Python package or use Docker. @@ -259,27 +329,14 @@ For advanced configuration, environment variables, and usage examples, see our [
-## Quick Start πŸš€
-
-```python
-import asyncio
-from crawl4ai import AsyncWebCrawler, CacheMode
-
-async def main():
-    async with AsyncWebCrawler(verbose=True) as crawler:
-        result = await crawler.arun(url="https://www.nbcnews.com/business")
-        print(result.markdown_v2.raw_markdown) # Soone will be change to result.markdown
-
-if __name__ == "__main__":
-    asyncio.run(main())
-```
-
-## Advanced Usage Examples πŸ”¬
+## πŸ”¬ Advanced Usage Examples πŸ”¬

You can find the project's example scripts in the [docs/examples](https://github.com/unclecode/crawl4ai/tree/main/docs/examples) directory; some popular ones are shared below.
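
As a warm-up, the selector-driven extraction idea behind these examples (and behind `JsonCssExtractionStrategy`) can be sketched with nothing but the standard library. This toy matcher handles only a single class selector and is not Crawl4AI code:

```python
from html.parser import HTMLParser

class ClassTextExtractor(HTMLParser):
    """Collect the text of every element carrying a target CSS class (simplified:
    no void-tag handling, one class selector only)."""
    def __init__(self, target_class):
        super().__init__()
        self.target_class = target_class
        self.depth = 0          # >0 while inside a matching element
        self.items = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if self.depth or self.target_class in classes:
            self.depth += 1
            if self.depth == 1:
                self.items.append("")

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.items[-1] += data

html = '<ul><li class="crypto">BTC $98k</li><li class="fiat">USD</li><li class="crypto">ETH $3k</li></ul>'
p = ClassTextExtractor("crypto")
p.feed(html)
print(p.items)  # ['BTC $98k', 'ETH $3k']
```

The real strategy generalizes this to a JSON schema of base and field selectors, so no LLM is needed for structured extraction.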
-πŸ–₯️ Heuristic Markdown Generation with Clean and Fit Markdown +πŸ“ Heuristic Markdown Generation with Clean and Fit Markdown ```python import asyncio @@ -310,7 +367,7 @@ if __name__ == "__main__":
-πŸ–₯️ Structured Data Extraction and Executing JavaScript +πŸ–₯️ Executing JavaScript & Extract Structured Data without LLMs ```python import asyncio @@ -393,7 +450,7 @@ if __name__ == "__main__":
-πŸ€– Extracting Structured Data with LLMs +πŸ“š Extracting Structured Data with LLMs ```python import os @@ -480,74 +537,43 @@ async def test_news_crawl():
-## Speed Comparison πŸš€ -A test was conducted on **[NBC News - Business Section](https://www.nbcnews.com/business)** to compare Crawl4AI and Firecrawl, highlighting Crawl4AI's speed, efficiency, and advanced features. +## ✨ New in 0.3.743 ---- +- πŸš€ **Improved ManagedBrowser Configuration**: Dynamic host and port support for more flexible browser management. +- πŸ“ **Enhanced Markdown Generation**: New generator class for better formatting and customization. +- ⚑ **Fast HTML Formatting**: Significantly optimized HTML formatting in the web crawler. +- πŸ› οΈ **Utility & Sanitization Upgrades**: Improved sanitization and expanded utility functions for streamlined workflows. +- πŸ‘₯ **Acknowledgments**: Added contributor details and pull request acknowledgments for better transparency. -#### Results Summary -| **Method** | **Time Taken** | **Markdown Length** | **Fit Markdown** | **Images Found** | -|--------------------------------|----------------|----------------------|-------------------|------------------| -| **Firecrawl** | 6.04 seconds | 38,382 characters | - | 52 | -| **Crawl4AI (Simple Crawl)** | 1.06 seconds | 42,027 characters | - | 52 | -| **Crawl4AI (Markdown Plus)** | 1.30 seconds | 54,342 characters | 11,119 characters | 52 | -| **Crawl4AI (JavaScript)** | 1.56 seconds | 75,869 characters | 13,406 characters | 92 | - ---- - -#### Key Takeaways - -1. **Superior Speed**: Crawl4AI processes even advanced crawls up to **6x faster** than Firecrawl, with times as low as **1.06 seconds**. -2. **Rich Content Extraction**: Crawl4AI consistently captures more comprehensive content, producing a **Markdown Plus** output of **54,342 characters**, compared to Firecrawl's **38,382 characters**. -3. **AI-Optimized Output**: With **Fit Markdown**, Crawl4AI removes noise to produce concise, AI-friendly outputs (**11,119–13,406 characters**) tailored for LLM workflows. -4. 
**Dynamic Content Handling**: Using JavaScript execution, Crawl4AI extracted **92 images** and enriched content dynamically loaded via β€œLoad More” buttonsβ€”unmatched by Firecrawl.
-
----
-
-#### Conclusion
-
-Crawl4AI outshines Firecrawl in speed, completeness, and flexibility. Its advanced features, including **Markdown Plus**, **Fit Markdown**, and **dynamic content handling**, make it the ideal choice for AI-ready web crawling. Whether you're targeting rich structured data or handling complex dynamic websites, Crawl4AI delivers unmatched performance and precision.
-
-You can find the full comparison code in our repository at [docs/examples/quickstart_async.py](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/quickstart_async.py).
-
-## Documentation πŸ“š
+## πŸ“– Documentation & Roadmap

For detailed documentation, including installation instructions, advanced features, and API reference, visit our [Documentation Website](https://crawl4ai.com/mkdocs/).

-## Crawl4AI Roadmap πŸ—ΊοΈ
+To see our development plans and upcoming features, check out our [Roadmap](https://github.com/unclecode/crawl4ai/blob/main/ROADMAP.md).

-For detailed information on our development plans and upcoming features, check out our [Roadmap](https://github.com/unclecode/crawl4ai/blob/main/ROADMAP.md).
-
-### Advanced Crawling Systems πŸ”§
- [x] 0. Graph Crawler: Smart website traversal using graph search algorithms for comprehensive nested page extraction
- [ ] 1. Question-Based Crawler: Natural language driven web discovery and content extraction
- [ ] 2. Knowledge-Optimal Crawler: Smart crawling that maximizes knowledge while minimizing data extraction
- [ ] 3. Agentic Crawler: Autonomous system for complex multi-step crawling operations
-
-### Specialized Features πŸ› οΈ
- [ ] 4. Automated Schema Generator: Convert natural language to extraction schemas
- [ ] 5. 
Domain-Specific Scrapers: Pre-configured extractors for common platforms (academic, e-commerce) - [ ] 6. Web Embedding Index: Semantic search infrastructure for crawled content - -### Development Tools πŸ”¨ - [ ] 7. Interactive Playground: Web UI for testing, comparing strategies with AI assistance - [ ] 8. Performance Monitor: Real-time insights into crawler operations - [ ] 9. Cloud Integration: One-click deployment solutions across cloud providers - -### Community & Growth 🌱 - [ ] 10. Sponsorship Program: Structured support system with tiered benefits - [ ] 11. Educational Content: "How to Crawl" video series and interactive tutorials -## Contributing 🀝 +## 🀝 Contributing We welcome contributions from the open-source community. Check out our [contribution guidelines](https://github.com/unclecode/crawl4ai/blob/main/CONTRIBUTING.md) for more information. -## License πŸ“„ +## πŸ“„ License Crawl4AI is released under the [Apache 2.0 License](https://github.com/unclecode/crawl4ai/blob/main/LICENSE). -## Contact πŸ“§ +## πŸ“§ Contact For questions, suggestions, or feedback, feel free to reach out: @@ -558,7 +584,7 @@ For questions, suggestions, or feedback, feel free to reach out: Happy Crawling! πŸ•ΈοΈπŸš€ -# Mission +## πŸ—Ύ Mission Our mission is to unlock the untapped potential of personal and enterprise data in the digital age. In today's world, individuals and organizations generate vast amounts of valuable digital footprints, yet this data remains largely uncapitalized as a true asset. @@ -570,13 +596,13 @@ This democratization of data represents the first step toward a shared data econ For a detailed exploration of our vision, opportunities, and pathway forward, please see our [full mission statement](./MISSION.md). 
-## Key Opportunities +### Key Opportunities - **Data Capitalization**: Transform digital footprints into valuable assets that can appear on personal and enterprise balance sheets - **Authentic Data**: Unlock the vast reservoir of real human insights and knowledge for AI advancement - **Shared Economy**: Create new value streams where data creators directly benefit from their contributions -## Development Pathway +### Development Pathway 1. **Open-Source Foundation**: Building transparent, community-driven data extraction tools 2. **Data Capitalization Platform**: Creating tools to structure and value digital assets From ddfb6707b47b6be786c2115cd7511b3d94d89e7c Mon Sep 17 00:00:00 2001 From: UncleCode Date: Thu, 28 Nov 2024 16:34:08 +0800 Subject: [PATCH 6/8] docs: update README to reflect new branding and improve section headings for clarity --- README.md | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-) diff --git a/README.md b/README.md index c4ef1bd3..ed6892ec 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,4 @@ -# πŸ”₯πŸ•·οΈ Crawl4AI: LLM Friendly Web Crawler & Scraper - -[✨ Check out what's new in the latest update!](#new-in-03743) +# πŸ”₯πŸ•·οΈ Crawl4AI: Crawl Smarter, Faster, Freely. For AI. unclecode%2Fcrawl4ai | Trendshift @@ -11,11 +9,9 @@ [![GitHub Pull Requests](https://img.shields.io/github/issues-pr/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/pulls) [![License](https://img.shields.io/github/license/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/blob/main/LICENSE) -## πŸ”₯ Crawl4AI: Crawl Smarter, Faster, Freely. For AI. - Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant community. It delivers blazing-fast, AI-ready web crawling tailored for LLMs, AI agents, and data pipelines. Open source, flexible, and built for real-time performance, Crawl4AI empowers developers with unmatched speed, precision, and deployment ease. 
-[✨ Check out what's new in the latest update!](#new-in-03743) +[✨ Check out what's new in the latest update!](#recent-updates) ## 🧐 Why Crawl4AI? @@ -537,7 +533,7 @@ async def test_news_crawl():
-## ✨ New in 0.3.743 +## ✨ Recent Updates - πŸš€ **Improved ManagedBrowser Configuration**: Dynamic host and port support for more flexible browser management. - πŸ“ **Enhanced Markdown Generation**: New generator class for better formatting and customization. From 3fda66b85b793655a92b3627599472f4d3279b0b Mon Sep 17 00:00:00 2001 From: UncleCode Date: Thu, 28 Nov 2024 16:36:24 +0800 Subject: [PATCH 7/8] docs: refine README content for clarity and conciseness, improving descriptions and formatting --- README.md | 33 +++++++++++++++------------------ 1 file changed, 15 insertions(+), 18 deletions(-) diff --git a/README.md b/README.md index ed6892ec..7bf4b4a4 100644 --- a/README.md +++ b/README.md @@ -15,13 +15,12 @@ Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant ## 🧐 Why Crawl4AI? -1. **Built for LLMs**: Creates **smart, concise Markdown** optimized for applications like Retrieval-Augmented Generation (RAG) and fine-tuning. -2. **Lightning Fast**: Delivers results **6x faster** than competitors with real-time, cost-efficient performance. -3. **Flexible Browser Control**: Offers session management, proxies, and custom hooks for precise, seamless data access. -4. **Heuristic Intelligence**: Leverages **advanced algorithms** to extract data efficiently, reducing reliance on costly language models. -5. **Open Source & Deployable**: 100% open-source with no API keys or registration required-ready for **Docker and cloud integration**. -6. **Thriving Community**: Actively maintained by a vibrant developer community and the **#1 trending GitHub repository** across all languages. - +1. **Built for LLMs**: Creates smart, concise Markdown optimized for RAG and fine-tuning applications. +2. **Lightning Fast**: Delivers results 6x faster with real-time, cost-efficient performance. +3. **Flexible Browser Control**: Offers session management, proxies, and custom hooks for seamless data access. +4. 
**Heuristic Intelligence**: Uses advanced algorithms for efficient extraction, reducing reliance on costly models. +5. **Open Source & Deployable**: Fully open-source with no API keysβ€”ready for Docker and cloud integration. +6. **Thriving Community**: Actively maintained by a vibrant community and the #1 trending GitHub repository. ## πŸš€ Quick Start @@ -145,7 +144,7 @@ A test was conducted on **[NBC News - Business Section](https://www.nbcnews.com/
-
+
⚑ Key Takeaways 1. **Superior Speed**: Crawl4AI processes even advanced crawls up to **6x faster** than Firecrawl, with times as low as **1.06 seconds**. @@ -155,7 +154,7 @@ A test was conducted on **[NBC News - Business Section](https://www.nbcnews.com/
-
+
🏁 Conclusion Crawl4AI outshines Firecrawl in speed, completeness, and flexibility. Its advanced features, including **Markdown Plus**, **Fit Markdown**, and **dynamic content handling**, make it the ideal choice for AI-ready web crawling. Whether you're targeting rich structured data or handling complex dynamic websites, Crawl4AI delivers unmatched performance and precision. @@ -169,7 +168,7 @@ You can find the full comparison code in our repository at [docs/examples/quicks Crawl4AI offers flexible installation options to suit various use cases. You can install it as a Python package or use Docker. -
+
🐍 Using pip Choose the installation option that best fits your needs: @@ -234,7 +233,7 @@ pip install -e ".[all]" # Install all optional features
-
+
πŸš€ One-Click Deployment Deploy your own instance of Crawl4AI with one click: @@ -251,7 +250,7 @@ The deploy will:
-
+
🐳 Using Docker Crawl4AI is available as Docker images for easy deployment. You can either pull directly from Docker Hub (recommended) or build from the repository. @@ -325,13 +324,11 @@ For advanced configuration, environment variables, and usage examples, see our [
- - ## πŸ”¬ Advanced Usage Examples πŸ”¬ You can check the project structure in the directory [https://github.com/unclecode/crawl4ai/docs/examples](docs/examples). Over there, you can find a variety of examples; here, some popular examples are shared. -
+
πŸ“ Heuristic Markdown Generation with Clean and Fit Markdown ```python @@ -362,7 +359,7 @@ if __name__ == "__main__":
-
+
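The clean/fit markdown split above hinges on relevance filtering: score each text block against a query and drop the noise. Here is a minimal, self-contained sketch of that idea — a toy BM25-style scorer, not Crawl4AI's actual filter classes; the function name and threshold are purely illustrative:

```python
import math
import re

def bm25_lite(blocks, query, k1=1.5, b=0.75, threshold=0.1):
    """Toy BM25-style filter: keep only text blocks relevant to `query`.

    A crude stand-in for the idea behind fit markdown, not the real code.
    """
    q_terms = query.lower().split()
    tokenized = [re.findall(r"\w+", blk.lower()) for blk in blocks]
    n = len(blocks)
    avg_len = sum(len(t) for t in tokenized) / max(n, 1)
    # document frequency of each query term across blocks
    df = {t: sum(1 for toks in tokenized if t in toks) for t in q_terms}
    kept = []
    for blk, toks in zip(blocks, tokenized):
        score = 0.0
        for t in q_terms:
            tf = toks.count(t)
            if tf == 0:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf + k1 * (1 - b + b * len(toks) / avg_len)
            score += idf * tf * (k1 + 1) / norm
        if score > threshold:
            kept.append(blk)
    return kept

blocks = [
    "Apple releases new M4 chip with faster neural engine.",
    "Subscribe to our newsletter for daily updates!",
    "The M4 chip benchmarks show strong AI performance.",
]
# keeps the two M4-related blocks, drops the newsletter boilerplate
print(bm25_lite(blocks, "M4 chip performance"))
```

The real filter works on a parsed DOM and a tuned ranking function, but the keep-or-drop decision per block is the same shape.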
&#10;πŸ–₯️ Executing JavaScript &amp; Extracting Structured Data without LLMs

```python

@@ -445,7 +442,7 @@ if __name__ == "__main__":
-
+
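The LLM-free extraction above is driven by a schema of CSS selectors: one selector marks a repeating record, and per-field selectors pull values out of each record. The core mechanic can be sketched with the standard library alone — a deliberately simplified, hypothetical matcher that only understands class names, unlike the real `JsonCssExtractionStrategy`:

```python
from html.parser import HTMLParser

class CssLiteExtractor(HTMLParser):
    """Toy schema-driven extractor: `base_class` marks one record,
    `field_map` maps field names to the class holding each value."""

    def __init__(self, base_class, field_map):
        super().__init__()
        self.base_class = base_class
        self.field_map = field_map
        self.records = []
        self._current = None   # record being filled
        self._capture = None   # field name whose text we expect next

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if self.base_class in classes:
            self._current = {}  # a new record begins
        elif self._current is not None:
            for name, cls in self.field_map.items():
                if cls in classes:
                    self._capture = name

    def handle_data(self, data):
        if self._capture and self._current is not None:
            self._current[self._capture] = data.strip()
            self._capture = None
            if len(self._current) == len(self.field_map):
                self.records.append(self._current)  # record complete
                self._current = None

html_doc = """
<div class="course"><h3 class="title">Intro to Crawling</h3><span class="price">$49</span></div>
<div class="course"><h3 class="title">Async Python</h3><span class="price">$59</span></div>
"""
p = CssLiteExtractor("course", {"title": "title", "price": "price"})
p.feed(html_doc)
print(p.records)
```

No model call anywhere: once the schema is written, extraction is pure parsing, which is why this path is fast and free.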
πŸ“š Extracting Structured Data with LLMs ```python @@ -485,7 +482,7 @@ if __name__ == "__main__":
-
+
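The LLM-based strategy works the other way around: you describe the target shape, and the model fills it from the page text. Crawl4AI itself takes Pydantic models for this; below is a stdlib-only sketch of the schema-to-prompt step, with all names (`ModelFee`, `build_prompt`) purely illustrative:

```python
import dataclasses
import json

@dataclasses.dataclass
class ModelFee:
    """Target shape for one extracted record (illustrative)."""
    model_name: str
    input_fee: str
    output_fee: str

def to_schema(cls):
    """Turn a dataclass into a minimal JSON-schema-like dict an LLM can follow."""
    names = [f.name for f in dataclasses.fields(cls)]
    return {
        "type": "object",
        "properties": {name: {"type": "string"} for name in names},
        "required": names,
    }

def build_prompt(page_text, cls):
    """Compose the instruction an extraction strategy would send to the model."""
    schema = json.dumps(to_schema(cls), indent=2)
    return (
        "Extract every record matching this JSON schema from the page, "
        f"returning a JSON array:\n{schema}\n\nPAGE:\n{page_text}"
    )

prompt = build_prompt("GPT-4o: $2.50 in / $10.00 out per 1M tokens", ModelFee)
print(prompt)
```

The trade-off versus the CSS path is flexibility for cost: no selectors to maintain, but every crawl pays for model tokens.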
&#10;πŸ€– Using Your Own Browser with a Custom User Profile

```python

From efe93a5f57ebe677cc12dca90549525626a85b98 Mon Sep 17 00:00:00 2001
From: UncleCode
Date: Thu, 28 Nov 2024 16:41:11 +0800
Subject: [PATCH 8/8] docs: enhance README with development TODOs and refine
 mission statement for clarity

---
 README.md | 37 +++++++++++++++++++++----------------
 1 file changed, 21 insertions(+), 16 deletions(-)

diff --git a/README.md b/README.md
index 7bf4b4a4..20395b58 100644
--- a/README.md
+++ b/README.md
@@ -545,6 +545,9 @@ For detailed documentation, including installation instructions, advanced featur
 
 Moreover, to check our development plans and upcoming features, check out our [Roadmap](https://github.com/unclecode/crawl4ai/blob/main/ROADMAP.md).
 
+&#10;
+πŸ“ˆ Development TODOs + - [x] 0. Graph Crawler: Smart website traversal using graph search algorithms for comprehensive nested page extraction - [ ] 1. Question-Based Crawler: Natural language driven web discovery and content extraction - [ ] 2. Knowledge-Optimal Crawler: Smart crawling that maximizes knowledge while minimizing data extraction @@ -558,6 +561,8 @@ Moreover to check our development plans and upcoming features, check out our [Ro - [ ] 10. Sponsorship Program: Structured support system with tiered benefits - [ ] 11. Educational Content: "How to Crawl" video series and interactive tutorials +
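TODO item 0's graph crawler boils down to a breadth-first traversal with a visited set and a depth cap. A minimal sketch of that traversal follows; the link-fetching callable is stubbed with an in-memory site map so nothing here touches the network, and all names are illustrative rather than Crawl4AI's API:

```python
from collections import deque

def graph_crawl(start, get_links, max_depth=2):
    """Breadth-first site traversal: visit each URL once, stop at max_depth.

    `get_links(url)` stands in for fetching a page and returning its outlinks.
    """
    seen = {start}
    order = []
    queue = deque([(start, 0)])
    while queue:
        url, depth = queue.popleft()
        order.append(url)
        if depth == max_depth:
            continue  # depth cap reached; do not expand further
        for link in get_links(url):
            if link not in seen:  # visited set prevents cycles
                seen.add(link)
                queue.append((link, depth + 1))
    return order

# in-memory stand-in for a small site's link structure
site = {
    "/": ["/docs", "/blog"],
    "/docs": ["/docs/install", "/"],
    "/blog": ["/blog/post-1"],
}
print(graph_crawl("/", lambda u: site.get(u, []), max_depth=2))
```

Smarter variants (priority queues, relevance-guided expansion) change the queue discipline, not this basic skeleton.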
+ ## 🀝 Contributing We welcome contributions from the open-source community. Check out our [contribution guidelines](https://github.com/unclecode/crawl4ai/blob/main/CONTRIBUTING.md) for more information. @@ -576,32 +581,32 @@ For questions, suggestions, or feedback, feel free to reach out: Happy Crawling! πŸ•ΈοΈπŸš€ - ## πŸ—Ύ Mission -Our mission is to unlock the untapped potential of personal and enterprise data in the digital age. In today's world, individuals and organizations generate vast amounts of valuable digital footprints, yet this data remains largely uncapitalized as a true asset. +Our mission is to unlock the value of personal and enterprise data by transforming digital footprints into structured, tradeable assets. Crawl4AI empowers individuals and organizations with open-source tools to extract and structure data, fostering a shared data economy. -Our open-source solution empowers developers and innovators to build tools for data extraction and structuring, laying the foundation for a new era of data ownership. By transforming personal and enterprise data into structured, tradeable assets, we're creating opportunities for individuals to capitalize on their digital footprints and for organizations to unlock the value of their collective knowledge. +We envision a future where AI is powered by real human knowledge, ensuring data creators directly benefit from their contributions. By democratizing data and enabling ethical sharing, we are laying the foundation for authentic AI advancement. -This democratization of data represents the first step toward a shared data economy, where willing participation in data sharing drives AI advancement while ensuring the benefits flow back to data creators. Through this approach, we're building a future where AI development is powered by authentic human knowledge rather than synthetic alternatives. +
+πŸ”‘ Key Opportunities + +- **Data Capitalization**: Transform digital footprints into measurable, valuable assets. +- **Authentic AI Data**: Provide AI systems with real human insights. +- **Shared Economy**: Create a fair data marketplace that benefits data creators. -![Mission Diagram](./docs/assets/pitch-dark.svg) +
-For a detailed exploration of our vision, opportunities, and pathway forward, please see our [full mission statement](./MISSION.md). +
+πŸš€ Development Pathway -### Key Opportunities +1. **Open-Source Tools**: Community-driven platforms for transparent data extraction. +2. **Digital Asset Structuring**: Tools to organize and value digital knowledge. +3. **Ethical Data Marketplace**: A secure, fair platform for exchanging structured data. -- **Data Capitalization**: Transform digital footprints into valuable assets that can appear on personal and enterprise balance sheets -- **Authentic Data**: Unlock the vast reservoir of real human insights and knowledge for AI advancement -- **Shared Economy**: Create new value streams where data creators directly benefit from their contributions +For more details, see our [full mission statement](./MISSION.md). +
-### Development Pathway -1. **Open-Source Foundation**: Building transparent, community-driven data extraction tools -2. **Data Capitalization Platform**: Creating tools to structure and value digital assets -3. **Shared Data Marketplace**: Establishing an economic platform for ethical data exchange - -For a detailed exploration of our vision, challenges, and solutions, please see our [full mission statement](./MISSION.md). ## Star History