From e8b4ac60469dd46a05b93a9e47a8b0246ed804cd Mon Sep 17 00:00:00 2001
From: UncleCode
Date: Thu, 9 Jan 2025 16:22:41 +0800
Subject: [PATCH] docs(urls): update documentation URLs to new domain

Update all documentation URLs from crawl4ai.com/mkdocs to docs.crawl4ai.com
Improve badges styling and layout in documentation
Increase code font size in documentation CSS

BREAKING CHANGE: Documentation URLs have changed from crawl4ai.com/mkdocs
to docs.crawl4ai.com
---
 README.md                            |  8 +--
 docs/examples/quickstart_v0.ipynb    |  2 +-
 docs/md_v2/assets/styles.css         | 11 +++-
 docs/md_v2/core/docker-deploymeny.md |  2 +-
 docs/md_v2/index.md                  | 76 +++++++++++++++++++++-------
 5 files changed, 75 insertions(+), 24 deletions(-)

diff --git a/README.md b/README.md
index 0657ab32..68cc10a2 100644
--- a/README.md
+++ b/README.md
@@ -22,7 +22,7 @@ Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant
 
 [✨ Check out latest update v0.4.24x](#-recent-updates)
 
-🎉 **Version 0.4.24x is out!** Major improvements in extraction strategies with enhanced JSON handling, SSL security, and Amazon product extraction. Plus, a completely revamped content filtering system! [Read the release notes →](https://crawl4ai.com/mkdocs/blog)
+🎉 **Version 0.4.24x is out!** Major improvements in extraction strategies with enhanced JSON handling, SSL security, and Amazon product extraction. Plus, a completely revamped content filtering system! [Read the release notes →](https://docs.crawl4ai.com/blog)
 🤓 My Personal Story
@@ -160,7 +160,7 @@ if __name__ == "__main__":
 
 ✨ Play around with this [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SgRPrByQLzjRfwoRNq1wSGE9nYY_EE8C?usp=sharing)
 
-✨ Visit our [Documentation Website](https://crawl4ai.com/mkdocs/)
+✨ Visit our [Documentation Website](https://docs.crawl4ai.com/)
 
 ## Installation 🛠️
 
@@ -276,7 +276,7 @@ task_id = response.json()["task_id"]
 
 result = requests.get(f"http://localhost:11235/task/{task_id}")
 ```
 
-For more examples, see our [Docker Examples](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_example.py). For advanced configuration, environment variables, and usage examples, see our [Docker Deployment Guide](https://crawl4ai.com/mkdocs/basic/docker-deployment/).
+For more examples, see our [Docker Examples](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_example.py). For advanced configuration, environment variables, and usage examples, see our [Docker Deployment Guide](https://docs.crawl4ai.com/basic/docker-deployment/).
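A note on the Docker snippet in the hunk above: it submits a crawl via POST and then fetches `/task/{task_id}` once, but the task may still be queued or running at that point. Below is a minimal polling sketch, assuming only what the snippet shows (the `http://localhost:11235/task/{task_id}` endpoint); the JSON `status` field and its `"completed"`/`"failed"` terminal values are assumptions, not something this patch documents.

```python
import time

def wait_for_task(fetch_status, task_id, timeout=60.0, interval=1.0):
    """Poll fetch_status(task_id) -> dict until the task reaches a terminal state.

    fetch_status is any callable returning the parsed JSON body for a task;
    the "completed"/"failed" status values are an assumption about the API.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        payload = fetch_status(task_id)
        if payload.get("status") in ("completed", "failed"):
            return payload
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

With `requests` this would be used roughly as `wait_for_task(lambda tid: requests.get(f"http://localhost:11235/task/{tid}").json(), task_id)`; passing the fetcher in keeps the helper testable without a running server.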
@@ -498,7 +498,7 @@ Read the full details of this release in our [0.4.24 Release Notes](https://gith
 
 > 🚨 **Documentation Update Alert**: We're undertaking a major documentation overhaul next week to reflect recent updates and improvements. Stay tuned for a more comprehensive and up-to-date guide!
 
-For current documentation, including installation instructions, advanced features, and API reference, visit our [Documentation Website](https://crawl4ai.com/mkdocs/).
+For current documentation, including installation instructions, advanced features, and API reference, visit our [Documentation Website](https://docs.crawl4ai.com/).
 
 To check our development plans and upcoming features, visit our [Roadmap](https://github.com/unclecode/crawl4ai/blob/main/ROADMAP.md).

diff --git a/docs/examples/quickstart_v0.ipynb b/docs/examples/quickstart_v0.ipynb
index 71f23acb..0282aa12 100644
--- a/docs/examples/quickstart_v0.ipynb
+++ b/docs/examples/quickstart_v0.ipynb
@@ -702,7 +702,7 @@
     "\n",
     "Crawl4AI offers a fast, flexible, and powerful solution for web crawling and data extraction tasks. Its asynchronous architecture and advanced features make it suitable for a wide range of applications, from simple web scraping to complex, multi-page data extraction scenarios.\n",
     "\n",
-    "For more information and advanced usage, please visit the [Crawl4AI documentation](https://crawl4ai.com/mkdocs/).\n",
+    "For more information and advanced usage, please visit the [Crawl4AI documentation](https://docs.crawl4ai.com/).\n",
     "\n",
     "Happy crawling!"
 ]
diff --git a/docs/md_v2/assets/styles.css b/docs/md_v2/assets/styles.css
index 1aed2822..ed7fc12e 100644
--- a/docs/md_v2/assets/styles.css
+++ b/docs/md_v2/assets/styles.css
@@ -7,7 +7,7 @@
 :root {
     --global-font-size: 16px;
-    --global-code-font-size: 14px;
+    --global-code-font-size: 16px;
     --global-line-height: 1.5em;
     --global-space: 10px;
     --font-stack: Menlo, Monaco, Lucida Console, Liberation Mono, DejaVu Sans Mono, Bitstream Vera Sans Mono,
@@ -233,4 +233,13 @@ pre {
 .terminal h1, .terminal h2, .terminal h3, .terminal h4, .terminal h5, .terminal h6 {
     text-shadow: 0 0 0px var(--font-color), 0 0 0px var(--font-color), 0 0 0px var(--font-color);
+}
+
+/* Lower max height or width for these images */
+div.badges a {
+    /* no underline */
+    text-decoration: none !important;
+}
+div.badges a > img {
+    width: auto;
 }
\ No newline at end of file
diff --git a/docs/md_v2/core/docker-deploymeny.md b/docs/md_v2/core/docker-deploymeny.md
index 644a525b..a3d0def1 100644
--- a/docs/md_v2/core/docker-deploymeny.md
+++ b/docs/md_v2/core/docker-deploymeny.md
@@ -699,4 +699,4 @@ Content-Type: application/json
 
 GET /task/{task_id}
 ```
 
-For more details, visit the [official documentation](https://crawl4ai.com/mkdocs/).
\ No newline at end of file
+For more details, visit the [official documentation](https://docs.crawl4ai.com/).
\ No newline at end of file
diff --git a/docs/md_v2/index.md b/docs/md_v2/index.md
index a522ea13..250c977d 100644
--- a/docs/md_v2/index.md
+++ b/docs/md_v2/index.md
@@ -1,32 +1,74 @@
 # 🚀🤖 Crawl4AI: Open-Source LLM-Friendly Web Crawler & Scraper
 
-<div>
-<a href="https://trendshift.io/repositories/11716" target="_blank"><img src="https://trendshift.io/api/badge/repositories/11716" alt="unclecode%2Fcrawl4ai | Trendshift" style="width: 250px; height: 55px;" width="250"/></a>
-</div>
+<div class="badges">
+    <a href="https://trendshift.io/repositories/11716" target="_blank">
+        <img src="https://trendshift.io/api/badge/repositories/11716" alt="unclecode%2Fcrawl4ai | Trendshift" style="width: 250px; height: 55px;" width="250"/>
+    </a>
+</div>
+
+<div class="badges">
+    <a href="https://github.com/unclecode/crawl4ai/stargazers">
+        <img src="https://img.shields.io/github/stars/unclecode/crawl4ai?style=social" alt="GitHub Stars">
+    </a>
+    <a href="https://github.com/unclecode/crawl4ai/network/members">
+        <img src="https://img.shields.io/github/forks/unclecode/crawl4ai?style=social" alt="GitHub Forks">
+    </a>
+    <a href="https://badge.fury.io/py/crawl4ai">
+        <img src="https://badge.fury.io/py/crawl4ai.svg" alt="PyPI version">
+    </a>
+</div>
+
+<div class="badges">
+    <a href="https://pypi.org/project/crawl4ai/">
+        <img src="https://img.shields.io/pypi/pyversions/crawl4ai" alt="Python Version">
+    </a>
+    <a href="https://pepy.tech/project/crawl4ai">
+        <img src="https://static.pepy.tech/badge/crawl4ai/month" alt="Downloads">
+    </a>
+    <a href="https://github.com/unclecode/crawl4ai/blob/main/LICENSE">
+        <img src="https://img.shields.io/github/license/unclecode/crawl4ai" alt="License">
+    </a>
+</div>
-[![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers)
-[![GitHub Forks](https://img.shields.io/github/forks/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/network/members)
-[![PyPI version](https://badge.fury.io/py/crawl4ai.svg)](https://badge.fury.io/py/crawl4ai)
-[![Python Version](https://img.shields.io/pypi/pyversions/crawl4ai)](https://pypi.org/project/crawl4ai/)
-[![Downloads](https://static.pepy.tech/badge/crawl4ai/month)](https://pepy.tech/project/crawl4ai)
-[![License](https://img.shields.io/github/license/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/blob/main/LICENSE)
-[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
-[![Security: bandit](https://img.shields.io/badge/security-bandit-yellow.svg)](https://github.com/PyCQA/bandit)
-
 Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant community. It delivers blazing-fast, AI-ready web crawling tailored for large language models, AI agents, and data pipelines. Fully open source, flexible, and built for real-time performance, **Crawl4AI** empowers developers with unmatched speed, precision, and deployment ease.
 
----
+> **Note**: If you're looking for the old documentation, you can access it [here](https://old.docs.crawl4ai.com).
 
-## My Personal Journey
+## Quick Start
 
-I’ve always loved exploring the web development, back from when HTML and JavaScript were hardly intertwined. My curiosity drove me into web development, mathematics, AI, and machine learning, always keeping a close tie to real industrial applications. In 2009–2010, as a postgraduate student, I created platforms to gather and organize published papers for Master’s and PhD researchers. Faced with post-grad students’ data challenges, I built a helper app to crawl newly published papers and public data. Relying on Internet Explorer and DLL hacks was far more cumbersome than modern tools, highlighting my longtime background in data extraction.
+Here's a quick example to show you how easy it is to use Crawl4AI with its asynchronous capabilities:
 
-Fast-forward to 2023: I needed to fetch web data and transform it into neat **markdown** for my AI pipeline. All solutions I found were either **closed-source**, overpriced, or produced low-quality output. As someone who has built large edu-tech ventures (like KidoCode), I believe **data belongs to the people**. We shouldn’t pay $16 just to parse the web’s publicly available content. This friction led me to create my own library, **Crawl4AI**, in a matter of days to meet my immediate needs. Unexpectedly, it went **viral**, accumulating thousands of GitHub stars.
+```python
+import asyncio
+from crawl4ai import AsyncWebCrawler
+
+async def main():
+    # Create an instance of AsyncWebCrawler
+    async with AsyncWebCrawler() as crawler:
+        # Run the crawler on a URL
+        result = await crawler.arun(url="https://crawl4ai.com")
+
+        # Print the extracted content
+        print(result.markdown)
+
+# Run the async main function
+asyncio.run(main())
+```
 
-Now, in **January 2025**, Crawl4AI has surpassed **21,000 stars** and remains the #1 trending repository. It’s my way of giving back to the community after benefiting from open source for years. I’m thrilled by how many of you share that passion. Thank you for being here, join our Discord, file issues, submit PRs, or just spread the word. Let’s build the best data extraction, crawling, and scraping library **together**.
 
 ---
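An editorial footnote to this patch: the commit rewrites each `crawl4ai.com/mkdocs` link to `docs.crawl4ai.com` by hand, file by file. A small sketch of the same rewrite as code (not part of the patch; the regex and helper name are my own) can help catch any straggler links in files the diff missed:

```python
import re

# Matches the legacy documentation prefix, with or without a trailing slash
# or a www. subdomain; everything after the prefix is preserved.
OLD_DOCS = re.compile(r"https?://(?:www\.)?crawl4ai\.com/mkdocs/?")

def migrate_docs_urls(text: str) -> str:
    """Rewrite legacy crawl4ai.com/mkdocs URLs to the new docs.crawl4ai.com domain."""
    return OLD_DOCS.sub("https://docs.crawl4ai.com/", text)
```

Run over a file's contents, this performs exactly the substitution the diff shows, e.g. `https://crawl4ai.com/mkdocs/blog` becomes `https://docs.crawl4ai.com/blog`.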