docs(urls): update documentation URLs to new domain
Update all documentation URLs from crawl4ai.com/mkdocs to docs.crawl4ai.com across README, examples, and documentation files. This change reflects the new documentation hosting domain. Also add todo/ directory to .gitignore.
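The substitution is mechanical: every occurrence of the old path-based docs URL maps to the new subdomain. A minimal sketch of the rewrite rule (the actual tooling used for this commit is not recorded; the function name is illustrative):

```python
import re

# Old docs lived under a path on the main domain; new docs get their own subdomain.
OLD_DOCS = re.compile(r"crawl4ai\.com/mkdocs")
NEW_DOCS = "docs.crawl4ai.com"

def rewrite_docs_urls(text: str) -> str:
    """Rewrite every old-style docs URL in `text` to the new domain."""
    return OLD_DOCS.sub(NEW_DOCS, text)

print(rewrite_docs_urls("https://crawl4ai.com/mkdocs/blog"))
# → https://docs.crawl4ai.com/blog
```

Applied file-by-file over the README, examples, and docs, this yields exactly the one-line URL replacements in the diff below.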
--- a/.gitignore
+++ b/.gitignore
@@ -229,4 +229,5 @@ tree.md
 plans/
 
 # Codeium
 .codeiumignore
+todo/

@@ -23,7 +23,7 @@ Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant
 
 [✨ Check out latest update v0.4.24x](#-recent-updates)
 
-🎉 **Version 0.4.24x is out!** Major improvements in extraction strategies with enhanced JSON handling, SSL security, and Amazon product extraction. Plus, a completely revamped content filtering system! [Read the release notes →](https://crawl4ai.com/mkdocs/blog)
+🎉 **Version 0.4.24x is out!** Major improvements in extraction strategies with enhanced JSON handling, SSL security, and Amazon product extraction. Plus, a completely revamped content filtering system! [Read the release notes →](https://docs.crawl4ai.com/blog)
 
 ## 🧐 Why Crawl4AI?
 
@@ -149,7 +149,7 @@ if __name__ == "__main__":
 
 ✨ Play around with this [](https://colab.research.google.com/drive/1SgRPrByQLzjRfwoRNq1wSGE9nYY_EE8C?usp=sharing)
 
-✨ Visit our [Documentation Website](https://crawl4ai.com/mkdocs/)
+✨ Visit our [Documentation Website](https://docs.crawl4ai.com/)
 
 ## Installation 🛠️
 
@@ -265,7 +265,7 @@ task_id = response.json()["task_id"]
 result = requests.get(f"http://localhost:11235/task/{task_id}")
 ```
 
-For more examples, see our [Docker Examples](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_example.py). For advanced configuration, environment variables, and usage examples, see our [Docker Deployment Guide](https://crawl4ai.com/mkdocs/basic/docker-deployment/).
+For more examples, see our [Docker Examples](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/docker_example.py). For advanced configuration, environment variables, and usage examples, see our [Docker Deployment Guide](https://docs.crawl4ai.com/basic/docker-deployment/).
 
 </details>
 
@@ -487,7 +487,7 @@ Read the full details of this release in our [0.4.24 Release Notes](https://gith
 
 > 🚨 **Documentation Update Alert**: We're undertaking a major documentation overhaul next week to reflect recent updates and improvements. Stay tuned for a more comprehensive and up-to-date guide!
 
-For current documentation, including installation instructions, advanced features, and API reference, visit our [Documentation Website](https://crawl4ai.com/mkdocs/).
+For current documentation, including installation instructions, advanced features, and API reference, visit our [Documentation Website](https://docs.crawl4ai.com/).
 
 To check our development plans and upcoming features, visit our [Roadmap](https://github.com/unclecode/crawl4ai/blob/main/ROADMAP.md).
 

@@ -702,7 +702,7 @@
 "\n",
 "Crawl4AI offers a fast, flexible, and powerful solution for web crawling and data extraction tasks. Its asynchronous architecture and advanced features make it suitable for a wide range of applications, from simple web scraping to complex, multi-page data extraction scenarios.\n",
 "\n",
-"For more information and advanced usage, please visit the [Crawl4AI documentation](https://crawl4ai.com/mkdocs/).\n",
+"For more information and advanced usage, please visit the [Crawl4AI documentation](https://docs.crawl4ai.com/).\n",
 "\n",
 "Happy crawling!"
 ]

@@ -699,4 +699,4 @@ Content-Type: application/json
 GET /task/{task_id}
 ```
 
-For more details, visit the [official documentation](https://crawl4ai.com/mkdocs/).
+For more details, visit the [official documentation](https://docs.crawl4ai.com/).

@@ -132,6 +132,6 @@ This script should successfully crawl the example website and print the first 50
 
 ## Getting Help
 
-If you encounter any issues during installation or usage, please check the [documentation](https://crawl4ai.com/mkdocs/) or raise an issue on the [GitHub repository](https://github.com/unclecode/crawl4ai/issues).
+If you encounter any issues during installation or usage, please check the [documentation](https://docs.crawl4ai.com/) or raise an issue on the [GitHub repository](https://github.com/unclecode/crawl4ai/issues).
 
 Happy crawling! 🕷️🤖