refactor(docs): reorganize documentation structure and update styles

Reorganize documentation into core/advanced/extraction sections for better navigation. Update terminal theme styles and add the `rich` library for better CLI output. Remove redundant tutorial files and consolidate content into core sections. Add a personal story to the index page for project context.

BREAKING CHANGE: The documentation structure has been significantly reorganized.
# 🚀🤖 Crawl4AI: Open-Source LLM-Friendly Web Crawler & Scraper

Welcome to the official documentation for Crawl4AI! 🕷️🤖 Crawl4AI is an open-source Python library designed to simplify web crawling and extract useful information from web pages. This documentation will guide you through the features, usage, and customization of Crawl4AI.

<div align="center">

<a href="https://trendshift.io/repositories/11716" target="_blank"><img src="https://trendshift.io/api/badge/repositories/11716" alt="unclecode%2Fcrawl4ai | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>

[GitHub Stars](https://github.com/unclecode/crawl4ai/stargazers) · [Forks](https://github.com/unclecode/crawl4ai/network/members) · [PyPI Version](https://badge.fury.io/py/crawl4ai) · [Python Versions](https://pypi.org/project/crawl4ai/) · [Downloads](https://pepy.tech/project/crawl4ai) · [License](https://github.com/unclecode/crawl4ai/blob/main/LICENSE) · [Code Style: black](https://github.com/psf/black) · [Security: bandit](https://github.com/PyCQA/bandit)

</div>

## Introduction

Crawl4AI has one clear task: to make crawling and data extraction from web pages easy and efficient, especially for large language models (LLMs) and AI applications. Whether you are using it as a REST API or a Python library, Crawl4AI offers a robust and flexible solution with full asynchronous support.

## Quick Start

Here's a quick example to show you how easy it is to use Crawl4AI with its asynchronous capabilities:
```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    # Create an instance of AsyncWebCrawler
    async with AsyncWebCrawler(verbose=True) as crawler:
        # Run the crawler on a URL
        result = await crawler.arun(url="https://www.nbcnews.com/business")

        # Print the extracted content
        print(result.markdown)

# Run the async main function
asyncio.run(main())
```

Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant community. It delivers blazing-fast, AI-ready web crawling tailored for large language models, AI agents, and data pipelines. Fully open source, flexible, and built for real-time performance, **Crawl4AI** empowers developers with unmatched speed, precision, and ease of deployment.

---

## My Personal Journey
I've always loved exploring web development, going back to when HTML and JavaScript were barely intertwined. My curiosity drove me into web development, mathematics, AI, and machine learning, always with a close tie to real industrial applications. In 2009–2010, as a postgraduate student, I created platforms to gather and organize published papers for Master's and PhD researchers. Faced with post-grad students' data challenges, I built a helper app to crawl newly published papers and public data. Relying on Internet Explorer and DLL hacks back then was far more cumbersome than today's tooling, but it gave me a long-standing background in data extraction.

Fast-forward to 2023: I needed to fetch web data and transform it into neat **markdown** for my AI pipeline. Every solution I found was either **closed-source**, overpriced, or produced low-quality output. As someone who has built large edu-tech ventures (like KidoCode), I believe **data belongs to the people**. We shouldn't pay $16 just to parse the web's publicly available content. This friction led me to create my own library, **Crawl4AI**, in a matter of days to meet my immediate needs. Unexpectedly, it went **viral**, accumulating thousands of GitHub stars.

## Key Features ✨
- 🆓 Completely free and open-source
- 🚀 Blazing-fast performance, outperforming many paid services
- 🤖 LLM-friendly output formats (JSON, cleaned HTML, markdown)
- 📄 Fit-markdown generation for extracting the main article content
- 🌐 Multi-browser support (Chromium, Firefox, WebKit)
- 🌍 Crawls multiple URLs simultaneously
- 🎨 Extracts and returns all media tags (images, audio, and video)
- 🔗 Extracts all external and internal links
- 📚 Extracts metadata from the page
- 🔄 Custom hooks for authentication, headers, and page modifications
- 🕵️ User-agent customization
- 🖼️ Takes screenshots of pages with enhanced error handling
- 📜 Executes custom JavaScript snippets before crawling
- 📊 Generates structured output without an LLM using JsonCssExtractionStrategy
- 📚 Various chunking strategies: topic-based, regex, sentence, and more
- 🧠 Advanced extraction strategies: cosine clustering, LLM, and more
- 🎯 CSS selector support for precise data extraction
- 📝 Passes instructions/keywords to refine extraction
- 🔒 Proxy support with authentication for enhanced access
- 🔄 Session management for complex multi-page crawling
- 🌐 Asynchronous architecture for improved performance
- 🖼️ Improved image processing with lazy-loading detection
- 🕰️ Enhanced handling of delayed content loading
- 🔑 Custom headers support for LLM interactions
- 🖼️ iframe content extraction for comprehensive analysis
- ⏱️ Flexible timeout and delayed content retrieval options
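The structured-output bullet above deserves a closer look. The snippet below is a deliberately tiny, pure-stdlib sketch of the same idea: collect the text of every element carrying a given CSS class. It is *not* Crawl4AI's `JsonCssExtractionStrategy` implementation; the `ClassTextExtractor` name is made up for illustration.

```python
from html.parser import HTMLParser

# Illustrative only: mimics schema-driven, LLM-free extraction by class name.
# Not the Crawl4AI implementation; a minimal stand-in for the concept.
class ClassTextExtractor(HTMLParser):
    def __init__(self, cls: str):
        super().__init__()
        self.cls, self.depth, self.items = cls, 0, []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1  # track nesting inside a matched element
        elif self.cls in (dict(attrs).get("class") or "").split():
            self.depth = 1
            self.items.append("")  # start collecting a new match

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.items[-1] += data  # accumulate text inside the match

html = '<ul><li class="title">First post</li><li class="title">Second post</li></ul>'
p = ClassTextExtractor("title")
p.feed(html)
print(p.items)  # ['First post', 'Second post']
```

The real strategy goes much further (nested fields, attributes, base selectors), but the core idea is the same: repeated patterns in markup become structured records without any LLM call.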
Now, in **January 2025**, Crawl4AI has surpassed **21,000 stars** and remains the #1 trending repository. It's my way of giving back to the community after benefiting from open source for years, and I'm thrilled that so many of you share that passion. Thank you for being here: join our Discord, file issues, submit PRs, or just spread the word. Let's build the best data extraction, crawling, and scraping library **together**.

---
## What Does Crawl4AI Do?

Crawl4AI is a feature-rich crawler and scraper that aims to:

1. **Generate Clean Markdown**: Perfect for RAG pipelines or direct ingestion into LLMs.
2. **Structured Extraction**: Parse repeated patterns with CSS, XPath, or LLM-based extraction.
3. **Advanced Browser Control**: Hooks, proxies, stealth modes, session re-use—fine-grained control.
4. **High Performance**: Parallel crawling, chunk-based extraction, real-time use cases.
5. **Open Source**: No forced API keys, no paywalls—everyone can access their data.
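Point 4 above, parallel crawling, rests on asyncio's ability to run many fetches concurrently instead of one at a time. Here is a minimal sketch of that pattern; `fetch_page` is a hypothetical stand-in coroutine, not a Crawl4AI API.

```python
import asyncio

# Hypothetical placeholder for a real crawler call (e.g. a crawl of one URL).
async def fetch_page(url: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return f"<html>content of {url}</html>"

async def crawl_many(urls: list[str]) -> list[str]:
    # asyncio.gather schedules all fetches concurrently and
    # returns their results in the same order as the input URLs.
    return await asyncio.gather(*(fetch_page(u) for u in urls))

pages = asyncio.run(crawl_many([
    "https://example.com/a",
    "https://example.com/b",
    "https://example.com/c",
]))
print(len(pages))  # 3
```

Because the waits overlap, total time is close to the slowest single fetch rather than the sum of all fetches, which is what makes real-time, many-URL crawling practical.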
**Core Philosophies**:

- **Democratize Data**: Free to use, transparent, and highly configurable.
- **LLM Friendly**: Minimally processed, well-structured text, images, and metadata, so AI models can easily consume it.

---
## Documentation Structure

To help you get started, we've organized our docs into clear sections:

- **Setup & Installation**
  Basic instructions to install Crawl4AI via pip or Docker.

- **Quick Start**
  A hands-on introduction showing how to run your first crawl, generate Markdown, and perform a simple extraction.

- **Core**
  Deeper guides on single-page crawling, advanced browser/crawler parameters, content filtering, and caching.

- **Advanced**
  Explore link & media handling, lazy loading, hooking & authentication, proxies, session management, and more.

- **Extraction**
  Detailed references for no-LLM (CSS, XPath) vs. LLM-based strategies, chunking, and clustering approaches.

- **API Reference**
  Find the technical specifics of each class and method, including `AsyncWebCrawler`, `arun()`, and `CrawlResult`.
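As a taste of the clustering approaches mentioned under Extraction, cosine similarity over bag-of-words vectors is the core arithmetic behind grouping related chunks. This is a self-contained illustration of the math, not Crawl4AI's cosine strategy itself (which works on proper embeddings):

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    # Dot product over the shared vocabulary
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    # Product of the two vector magnitudes
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

print(round(cosine_similarity("web crawling with python", "python web crawling tools"), 2))  # 0.75
```

Chunks whose pairwise similarity exceeds a threshold can then be merged into topical clusters; swapping word counts for embedding vectors gives the semantic version of the same idea.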
Throughout these sections, you’ll find code samples you can **copy-paste** into your environment. If something is missing or unclear, raise an issue or PR.
---
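In the spirit of copy-paste samples, here is a minimal sliding-window chunker illustrating the simplest of the chunking ideas covered in the Extraction section. It is a sketch of the concept, not the library's chunking classes:

```python
def chunk_text(text: str, size: int = 5, overlap: int = 2) -> list[str]:
    """Split text into word windows of `size` words, adjacent windows sharing `overlap` words."""
    words = text.split()
    step = size - overlap  # how far each window advances
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words) - overlap, 1), step)]

chunks = chunk_text("one two three four five six seven")
print(chunks)  # ['one two three four five', 'four five six seven']
```

Overlap keeps context that straddles a window boundary visible in both neighboring chunks, which matters when downstream extraction or embedding works on one chunk at a time.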
## How You Can Support

- **Star & Fork**: If you find Crawl4AI helpful, star the repo on GitHub or fork it to add your own features.
- **File Issues**: Encounter a bug or missing feature? Let us know by filing an issue, so we can improve.
- **Pull Requests**: Whether it's a small fix, a big feature, or better docs—contributions are always welcome.
- **Join Discord**: Come chat about web scraping, crawling tips, or AI workflows with the community.
- **Spread the Word**: Mention Crawl4AI in your blog posts, talks, or on social media.
**Our mission**: to empower everyone—students, researchers, entrepreneurs, data scientists—to access, parse, and shape the world's data with speed, cost-efficiency, and creative freedom.

---

## Getting Started

1. Install Crawl4AI:

   ```bash
   pip install crawl4ai
   ```

2. Check out our [Quick Start Guide](./core/quickstart.md) to begin crawling web pages.

3. Explore our [examples](https://github.com/unclecode/crawl4ai/tree/main/docs/examples) to see Crawl4AI in action.

## Quick Links
- **[GitHub Repo](https://github.com/unclecode/crawl4ai)**
- **[Installation Guide](./core/installation.md)**
- **[Quick Start](./core/quickstart.md)**
- **[API Reference](./api/async-webcrawler.md)**
- **[Changelog](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md)**
## Support

For questions, suggestions, or issues:

- GitHub Issues: [Report a Bug](https://github.com/unclecode/crawl4ai/issues)
- Twitter: [@unclecode](https://twitter.com/unclecode)
- Website: [crawl4ai.com](https://crawl4ai.com)

Thank you for joining me on this journey. Let's keep building an **open, democratic** approach to data extraction and AI together.

Happy Crawling! 🕸️🚀

— *unclecode, Founder & Maintainer of Crawl4AI*