docs(urls): update documentation URLs to new domain
Update all documentation URLs from crawl4ai.com/mkdocs to docs.crawl4ai.com.
Improve badge styling and layout in the documentation.
Increase the code font size in the documentation CSS.

BREAKING CHANGE: Documentation URLs have changed from crawl4ai.com/mkdocs to docs.crawl4ai.com.
# 🚀🤖 Crawl4AI: Open-Source LLM-Friendly Web Crawler & Scraper

<div class="badges" align="center">

  <p>
    <a href="https://trendshift.io/repositories/11716" target="_blank">
      <img src="https://trendshift.io/api/badge/repositories/11716"
           alt="unclecode%2Fcrawl4ai | Trendshift"
           style="width: 250px; height: 55px;"
           width="250" height="55"/>
    </a>
  </p>

  <p>
    <a href="https://github.com/unclecode/crawl4ai/stargazers">
      <img src="https://img.shields.io/github/stars/unclecode/crawl4ai?style=social"
           alt="GitHub Stars"/>
    </a>
    <a href="https://github.com/unclecode/crawl4ai/network/members">
      <img src="https://img.shields.io/github/forks/unclecode/crawl4ai?style=social"
           alt="GitHub Forks"/>
    </a>
    <a href="https://badge.fury.io/py/crawl4ai">
      <img src="https://badge.fury.io/py/crawl4ai.svg"
           alt="PyPI version"/>
    </a>
  </p>

  <p>
    <a href="https://pypi.org/project/crawl4ai/">
      <img src="https://img.shields.io/pypi/pyversions/crawl4ai"
           alt="Python Version"/>
    </a>
    <a href="https://pepy.tech/project/crawl4ai">
      <img src="https://static.pepy.tech/badge/crawl4ai/month"
           alt="Downloads"/>
    </a>
    <a href="https://github.com/unclecode/crawl4ai/blob/main/LICENSE">
      <img src="https://img.shields.io/github/license/unclecode/crawl4ai"
           alt="License"/>
    </a>
  </p>

</div>
Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant community. It delivers blazing-fast, AI-ready web crawling tailored for large language models, AI agents, and data pipelines. Fully open source, flexible, and built for real-time performance, **Crawl4AI** empowers developers with unmatched speed, precision, and deployment ease.

---

> **Note**: If you're looking for the old documentation, you can access it [here](https://old.docs.crawl4ai.com).

## My Personal Journey

I’ve always loved exploring web development, going back to when HTML and JavaScript were barely intertwined. That curiosity drove me into web development, mathematics, AI, and machine learning, always with a close tie to real industrial applications. In 2009–2010, as a postgraduate student, I created platforms to gather and organize published papers for Master’s and PhD researchers. Faced with post-grad students’ data challenges, I built a helper app to crawl newly published papers and public data. Relying on Internet Explorer and DLL hacks was far more cumbersome than modern tools, so my background in data extraction goes back a long way.

Fast-forward to 2023: I needed to fetch web data and transform it into clean **markdown** for my AI pipeline. Every solution I found was either **closed-source**, overpriced, or produced low-quality output. As someone who has built large edu-tech ventures (like KidoCode), I believe **data belongs to the people**. We shouldn’t pay $16 just to parse the web’s publicly available content. This friction led me to create my own library, **Crawl4AI**, in a matter of days to meet my immediate needs. Unexpectedly, it went **viral**, accumulating thousands of GitHub stars.

Now, in **January 2025**, Crawl4AI has surpassed **21,000 stars** and remains the #1 trending repository. It’s my way of giving back to the community after benefiting from open source for years, and I’m thrilled by how many of you share that passion. Thank you for being here. Join our Discord, file issues, submit PRs, or just spread the word. Let’s build the best data extraction, crawling, and scraping library **together**.

## Quick Start

Here's a quick example to show you how easy it is to use Crawl4AI with its asynchronous capabilities:
```python
import asyncio

from crawl4ai import AsyncWebCrawler


async def main():
    # Create an instance of AsyncWebCrawler
    async with AsyncWebCrawler() as crawler:
        # Run the crawler on a URL
        result = await crawler.arun(url="https://crawl4ai.com")

        # Print the extracted markdown content
        print(result.markdown)


# Run the async main function
asyncio.run(main())
```
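The payoff of the asynchronous design is that many pages can be fetched concurrently. Below is a minimal sketch of that pattern using a hypothetical stand-in coroutine (`fake_arun`) instead of a real crawl, so it runs without network access; in real Crawl4AI code the awaited call would be the crawler's `arun` doing actual I/O.

```python
import asyncio


# Hypothetical stand-in for a crawl: returns fake markdown for a URL.
# A real crawl would perform network I/O here instead of sleeping.
async def fake_arun(url: str) -> str:
    await asyncio.sleep(0.01)  # simulate network latency
    return f"# Markdown for {url}"


async def crawl_many(urls):
    # asyncio.gather runs the coroutines concurrently, so total wall time
    # is roughly one request's latency rather than the sum of all of them.
    return await asyncio.gather(*(fake_arun(u) for u in urls))


results = asyncio.run(crawl_many(["https://a.example", "https://b.example"]))
print(results[0])
```

The same `gather` pattern applies when each coroutine awaits a real crawler call inside an `async with` block.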

---