Compare commits

135 Commits

The commits in this comparison, by short SHA1:

e5e6a34e80, 897e766728, 9200a6731d, 61c166ab19, 659c8cd953, 9ee988753d, 8ae6c43ca4, b6713870ef, 40477493d3, efcf3ac6eb,
9e43f7beda, aa9412e1b4, cf6c835e18, e5ecf291f3, 9d0cafcfa6, 7715623430, f5a4e80e2c, 8463aabedf, 7f30144ef2, fa5516aad6,
ca0336af9e, 65ed1aeade, 4d283ab386, 3ff2a0d0e7, 3cd1b3719f, 9926eb9f95, 3abaa82501, 88d8cd8650, a08f21d66c, d58286989c,
b58af3349c, 940df4631f, 685706e0aa, 7b0979e134, 61ae2de841, 5b28eed2c0, f8a11779fe, d11a83c232, 3255c7a3fa, 4756d0a532,
7ba2142363, 96d1eb0d0d, 144cfa0eda, a0dff192ae, 1fffeeedd2, f51b078042, b6023a51fb, 78cfad8b2f, 68b3dff74a, bfc4abd6e8,
8c77a760fc, b9bf8ac9d7, d6182bedd7, 2217904876, 2c2362b4d3, 612ed3fef2, fb2a6d0d04, 19d3d39115, c1413e6916, e7705e661a,
21b110bfd7, 1fcb573909, 0f6c5f5453, 350ca1511b, 539263a8ba, 3f0e265baf, 21e2538e57, 480902bd66, 853b9d59d8, 6d04284c44,
4a50781453, 18561c55ce, 77da48050d, 9a97aacd85, 52daf3936a, 2f246d19f4, 413595542a, 42a5da854d, d1d83a6ef7, 194050705d,
989f8c91c8, edba5fb5e9, faa1defa5c, f7e0cee1b0, b3a0edaa6d, 9c34b30723, 36a5847df5, a19379aa58, 768d048e1c, 94c11a0262,
649b0bfd02, 57a00ec677, aeb2114170, b8d405fddd, b32013cb97, 226a62a3c0, 8e73a482a2, 0533aeb814, aead6de888, 8d82fd4cfe,
8f44db6499, c7553b1280, 8b8683f22e, 774ace6e3b, 4a8f91a0fc, 18c9784b61, e5d401c67c, ae77589a98, ad373c0e19, 51f26d12fe,
f1b60b2016, 8c2dc2b1e4, dc9a44c12a, d9753b6349, a554c0b143, 7381fa95e6, 53d1176d53, 52c4be0696, 13a3b21d19, 5cee084340,
bf00c26a83, 3846648c12, eb6423875f, e3524a10a7, 468dad6169, bc27982992, 57e5decb55, b6319c6f6e, 0a902f562f, 454135856e,
33fddc27ad, ce052a4eb5, b43d77a56b, 1635a92218, 2a8a1b27e1
18 .gitignore (vendored)

@@ -165,6 +165,8 @@ Crawl4AI.egg-info/
Crawl4AI.egg-info/*
crawler_data.db
.vscode/
.tests/
.test_pads/
test_pad.py
test_pad*.py
.data/

@@ -172,3 +174,19 @@ Crawl4AI.egg-info/

requirements0.txt
a.txt

*.sh
.idea
docs/examples/.chainlit/
docs/examples/.chainlit/*
.chainlit/config.toml
.chainlit/translations/en-US.json

local/
.files/

a.txt
.lambda_function.py
ec2*

update_changelog.sh
112 CHANGELOG.md

@@ -1,31 +1,103 @@

# Changelog

All notable changes to this project will be documented in this file.

## [v0.2.77] - 2024-08-04

## [Unreleased]
Significant improvements in text processing and performance:

### Added
- 🔧 Separate Crawl and Extract JSON Semantic Chunk: Enhancing flexibility and efficiency in large-scale web crawling tasks.
- 🔍 Colab Integration: Exploring integration with Google Colab for easy experimentation in a collaborative notebook environment.
- 🎯 XPath and CSS Selector Support: Adding support for selective retrieval of specific elements from web pages.
- 📷 Image Captioning: Incorporating image captioning capabilities to extract meaningful descriptions from images.
- 💾 Embedding Data Generation and Storage: Developing functionalities to generate and store embedding data for each crawled website.
- 🔍 Semantic Search Engine: Building a semantic search engine that fetches content, performs vector search similarity, and generates labeled chunk data based on user queries and URLs.
- 🚀 **Dependency reduction**: Removed dependency on the spaCy model for text chunk labeling in the cosine extraction strategy.
- 🤖 **Transformer upgrade**: Implemented text sequence classification using a transformer model for labeling text chunks.
- ⚡ **Performance enhancement**: Improved model loading speed due to removal of the spaCy dependency.
- 🔧 **Future-proofing**: Laid groundwork for potential complete removal of the spaCy dependency in future versions.

### Changed
- None
These changes address issue #68 and provide a foundation for faster, more efficient text processing in Crawl4AI.

### Deprecated
- None

## [v0.2.76] - 2024-08-02

### Removed
- None
Major improvements in functionality, performance, and cross-platform compatibility! 🚀

- 🐳 **Docker enhancements**: Significantly improved Dockerfile for easy installation on Linux, Mac, and Windows.
- 🌐 **Official Docker Hub image**: Launched our first official image on Docker Hub for streamlined deployment.
- 🔧 **Selenium upgrade**: Removed dependency on ChromeDriver, now using Selenium's built-in capabilities for better compatibility.
- 🖼️ **Image description**: Implemented ability to generate textual descriptions for extracted images from web pages.
- ⚡ **Performance boost**: Various improvements to enhance overall speed and performance.

A big shoutout to our amazing community contributors:
- [@aravindkarnam](https://github.com/aravindkarnam) for developing the textual description extraction feature.
- [@FractalMind](https://github.com/FractalMind) for creating the first official Docker Hub image and fixing Dockerfile errors.
- [@ketonkss4](https://github.com/ketonkss4) for identifying Selenium's new capabilities, helping us reduce dependencies.

Your contributions are driving Crawl4AI forward! 🙌

## [v0.2.75] - 2024-07-19

Minor improvements for a more maintainable codebase:

- 🔄 Fixed typos in `chunking_strategy.py` and `crawler_strategy.py` to improve code readability
- 🔄 Removed the `.test_pads/` directory from `.gitignore` to keep our repository clean and organized

These changes may seem small, but they contribute to a more stable and sustainable codebase. By fixing typos and updating our `.gitignore` settings, we're ensuring that our code is easier to maintain and scale in the long run.

## [v0.2.74] - 2024-07-08
A slew of exciting updates to improve the crawler's stability and robustness! 🎉

- 💻 **UTF encoding fix**: Resolved the Windows "charmap" error by adding UTF encoding.
- 🛡️ **Error handling**: Implemented MaxRetryError exception handling in LocalSeleniumCrawlerStrategy.
- 🧹 **Input sanitization**: Improved input sanitization and handled encoding issues in LLMExtractionStrategy.
- 🚮 **Database cleanup**: Removed the existing database file and initialized a new one.

## [v0.2.73] - 2024-07-03

💡 In this release, we've bumped the version to v0.2.73 and refreshed our documentation to ensure you have the best experience with our project.

* Added support for websites that require "with-head" mode, crawling with a visible browser head.
* Fixed installation issues in setup.py and the Dockerfile.
* Resolved multiple issues.

## [v0.2.72] - 2024-06-30

This release brings exciting updates and improvements to our project! 🎉

* 📚 **Documentation Updates**: Our documentation has been revamped to reflect the latest changes and additions.
* 🚀 **New Modes in setup.py**: We've added support for three new modes in setup.py: default, torch, and transformers. This enhances the project's flexibility and usability.
* 🐳 **Docker File Updates**: The Docker file has been updated to ensure seamless compatibility with the new modes and improvements.
* 🕷️ **Temporary Solution for Headless Crawling**: We've implemented a temporary solution to overcome issues with crawling websites in headless mode.

These changes aim to improve the overall user experience, provide more flexibility, and enhance the project's performance. We're thrilled to share these updates with you and look forward to continuing to evolve and improve our project!

## [0.2.71] - 2024-06-26

**Improved Error Handling and Performance** 🚧

* 🚫 Refactored `crawler_strategy.py` to handle exceptions and provide better error messages, making it more robust and reliable.
* 💻 Optimized the `get_content_of_website_optimized` function in `utils.py` for improved performance, reducing potential bottlenecks.
* 💻 Updated `utils.py` with the latest changes, ensuring consistency and accuracy.
* 🚫 Migrated to `ChromeDriverManager` to resolve Chrome driver download issues, providing a smoother user experience.

These changes focus on refining the existing codebase, resulting in a more stable, efficient, and user-friendly experience. With these improvements, you can expect fewer errors and better performance in the crawler strategy and utility functions.

## [0.2.71] - 2024-06-25
### Fixed
- None
- Sped up the extraction function by 2x.

### Security
- None

## [1.0.0] - YYYY-MM-DD
- Initial release
## [0.2.6] - 2024-06-22
### Fixed
- Fix issue #19: Update Dockerfile to ensure compatibility across multiple platforms.

## [0.2.5] - 2024-06-18
### Added
- Added five important hooks to the crawler (a usage sketch follows this entry):
  - on_driver_created: Called when the driver is ready for initializations.
  - before_get_url: Called right before Selenium fetches the URL.
  - after_get_url: Called after Selenium fetches the URL.
  - before_return_html: Called when the data is parsed and ready.
  - on_user_agent_updated: Called when the user changes the user_agent, causing the driver to reinitialize.
- Added an example in `quickstart.py` in the example folder under the docs.
- Enhancement issue #24: Replaced inline HTML tags (e.g., DEL, INS, SUB, ABBR) with textual format for better context handling in LLM.
- Maintaining the semantic context of inline tags (e.g., abbreviation, DEL, INS) for improved LLM-friendliness.
- Updated Dockerfile to ensure compatibility across multiple platforms (Hopefully!).
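A minimal sketch of how one of these hooks might be attached, assuming the `LocalSeleniumCrawlerStrategy.set_hook` API shown later in this comparison (a hook receives the Selenium driver and must return a `webdriver.Chrome` instance or `None`):

```python
from crawl4ai import WebCrawler
from crawl4ai.crawler_strategy import LocalSeleniumCrawlerStrategy

def before_get_url(driver):
    # Illustrative only: adjust the window size right before the URL is fetched.
    driver.set_window_size(1366, 768)
    return driver

crawler_strategy = LocalSeleniumCrawlerStrategy()
crawler_strategy.set_hook('before_get_url', before_get_url)

crawler = WebCrawler(crawler_strategy=crawler_strategy)
crawler.warmup()
result = crawler.run(url="https://www.nbcnews.com/business")
```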

## [0.2.4] - 2024-06-17
### Fixed
- Fix issue #22: Use MD5 hash for caching HTML files to handle long URLs
31 CONTRIBUTORS.md (new file)

@@ -0,0 +1,31 @@
# Contributors to Crawl4AI

We would like to thank the following people for their contributions to Crawl4AI:

## Core Team

- [Unclecode](https://github.com/unclecode) - Project Creator and Main Developer
- [Nasrin](https://github.com/ntohidi) - Project Manager and Developer

## Community Contributors

- [Aravind Karnam](https://github.com/aravindkarnam) - Developed textual description extraction feature
- [FractalMind](https://github.com/FractalMind) - Created the first official Docker Hub image and fixed Dockerfile errors
- [ketonkss4](https://github.com/ketonkss4) - Identified Selenium's new capabilities, helping reduce dependencies

## Other Contributors

- [Gokhan](https://github.com/gkhngyk)
- [Shiv Kumar](https://github.com/shivkumar0757)
- [QIN2DIM](https://github.com/QIN2DIM)

## Acknowledgements

We also want to thank all the users who have reported bugs, suggested features, or helped in any other way to make Crawl4AI better.

---

If you've contributed to Crawl4AI and your name isn't on this list, please [open a pull request](https://github.com/unclecode/crawl4ai/pulls) with your name, link, and contribution, and we'll review it promptly.

Thank you all for your contributions!
73 Dockerfile

@@ -1,40 +1,67 @@
# Use an official Python runtime as a parent image
FROM python:3.10-slim
# First stage: Build and install dependencies
FROM python:3.10-slim-bookworm

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy the current directory contents into the container at /usr/src/app
COPY . .
# Define build arguments
ARG INSTALL_OPTION=default

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Install dependencies for Chrome and ChromeDriver
RUN apt-get update && apt-get install -y --no-install-recommends \
# Install build dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    wget \
    xvfb \
    unzip \
    git \
    curl \
    gnupg2 \
    unzip \
    gnupg \
    xvfb \
    ca-certificates \
    apt-transport-https \
    software-properties-common \
    && wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list \
    && apt-get update \
    && apt-get install -y google-chrome-stable \
    && rm -rf /var/lib/apt/lists/*
    software-properties-common && \
    rm -rf /var/lib/apt/lists/*

# Set display port and dbus env to avoid hanging
ENV DISPLAY=:99
ENV DBUS_SESSION_BUS_ADDRESS=/dev/null
# Copy the application code
COPY . .

# Install Crawl4AI using the local setup.py with the specified option
# and download models only for torch, transformer, or all options
RUN if [ "$INSTALL_OPTION" = "all" ]; then \
        pip install --no-cache-dir .[all] && \
        crawl4ai-download-models; \
    elif [ "$INSTALL_OPTION" = "torch" ]; then \
        pip install --no-cache-dir .[torch] && \
        crawl4ai-download-models; \
    elif [ "$INSTALL_OPTION" = "transformer" ]; then \
        pip install --no-cache-dir .[transformer] && \
        crawl4ai-download-models; \
    else \
        pip install --no-cache-dir .; \
    fi

# Install Google Chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
    sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list' && \
    apt-get update && \
    apt-get install -y google-chrome-stable

# Set environment to use Chrome properly
ENV CHROME_BIN=/usr/bin/google-chrome \
    DISPLAY=:99 \
    DBUS_SESSION_BUS_ADDRESS=/dev/null \
    PYTHONUNBUFFERED=1

# Ensure the PATH environment variable includes the location of the installed packages
ENV PATH=/opt/conda/bin:$PATH

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV PYTHONUNBUFFERED 1
# Install mkdocs
RUN pip install mkdocs mkdocs-terminal

# Call mkdocs to build the documentation
RUN mkdocs build

# Run uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80", "--workers", "4"]
44 Dockerfile_mac (new file)

@@ -0,0 +1,44 @@
# Use an official Python runtime as a parent image
FROM python:3.10-slim

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy the current directory contents into the container at /usr/src/app
COPY . .

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Install dependencies for Chrome and ChromeDriver
RUN apt-get update && apt-get install -y --no-install-recommends \
    wget \
    xvfb \
    unzip \
    curl \
    gnupg2 \
    ca-certificates \
    apt-transport-https \
    software-properties-common \
    && wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list \
    && apt-get update \
    && apt-get install -y google-chrome-stable \
    && rm -rf /var/lib/apt/lists/* \
    && apt install chromium-chromedriver -y

# Install spacy library using pip
RUN pip install spacy

# Set display port and dbus env to avoid hanging
ENV DISPLAY=:99
ENV DBUS_SESSION_BUS_ADDRESS=/dev/null

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV PYTHONUNBUFFERED 1

# Run uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80", "--workers", "4"]
583 README.md

@@ -1,4 +1,4 @@
# Crawl4AI 🕷️🤖
# Crawl4AI v0.2.77 🕷️🤖

[](https://github.com/unclecode/crawl4ai/stargazers)
[](https://github.com/unclecode/crawl4ai/network/members)
@@ -6,508 +6,197 @@
[](https://github.com/unclecode/crawl4ai/pulls)
[](https://github.com/unclecode/crawl4ai/blob/main/LICENSE)

Crawl4AI has one clear task: to simplify crawling and extract useful information from web pages, making it accessible for large language models (LLMs) and AI applications. 🆓🌐
Crawl4AI simplifies web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. 🆓🌐

<<<<<<< HEAD
## 🚀 New Changes Will be Released Soon
#### [v0.2.77] - 2024-08-02

- 🚀 10x faster!!
- 📜 Execute custom JavaScript before crawling!
- 🤝 Colab friendly!
- 📚 Chunking strategies: topic-based, regex, sentence, and more!
- 🧠 Extraction strategies: cosine clustering, LLM, and more!
- 🎯 CSS selector support
- 📝 Pass instructions/keywords to refine extraction
Major improvements in functionality, performance, and cross-platform compatibility! 🚀

## 🚧 Work in Progress 👷♂️
- 🐳 **Docker enhancements**:
  - Significantly improved Dockerfile for easy installation on Linux, Mac, and Windows.
- 🌐 **Official Docker Hub image**:
  - Launched our first official image on Docker Hub for streamlined deployment (unclecode/crawl4ai).
- 🔧 **Selenium upgrade**:
  - Removed dependency on ChromeDriver, now using Selenium's built-in capabilities for better compatibility.
- 🖼️ **Image description**:
  - Implemented ability to generate textual descriptions for extracted images from web pages.
- ⚡ **Performance boost**:
  - Various improvements to enhance overall speed and performance.

## Try it Now!

- 📷 Image Captioning: Incorporating image captioning capabilities to extract descriptions from images.
- 💾 Embedding Vector Data: Generate and store embedding data for each crawled website.
- 🔍 Semantic Search Engine: Building a semantic search engine that fetches content, performs vector search similarity, and generates labeled chunk data based on user queries and URLs.
=======
[](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)
✨ Play around with this [](https://colab.research.google.com/drive/1sJPAmeLj5PMrg2VgOwMJ2ubGIcK0cJeX?usp=sharing)

## Recent Changes

- 🚀 10x faster!!
- 📜 Execute custom JavaScript before crawling!
- 🤝 Colab friendly!
- 📚 Chunking strategies: topic-based, regex, sentence, and more!
- 🧠 Extraction strategies: cosine clustering, LLM, and more!
- 🎯 CSS selector support
- 📝 Pass instructions/keywords to refine extraction

## Power and Simplicity of Crawl4AI 🚀

To show the simplicity, take a look at this first example:

```python
from crawl4ai import WebCrawler

# Create the WebCrawler instance
crawler = WebCrawler()

# Run the crawler on a URL
result = crawler.run(url="https://www.nbcnews.com/business")
print(result) # {url, html, markdown, extracted_content, metadata}
```

Now let's try a complex task. Below is an example of how you can execute JavaScript, filter data using keywords, and use a CSS selector to extract specific content—all in one go!

1. Instantiate a WebCrawler object.
2. Execute custom JavaScript to click a "Load More" button.
3. Extract semantic chunks of content and filter the data to include only content related to technology.
4. Use a CSS selector to extract only paragraphs (`<p>` tags).

```python
# Import necessary modules
import os
from crawl4ai import WebCrawler
from crawl4ai.chunking_strategy import *
from crawl4ai.extraction_strategy import *
from crawl4ai.crawler_strategy import *

# Define the JavaScript code to click the "Load More" button
js_code = """
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
loadMoreButton && loadMoreButton.click();
"""

# Define the crawling strategy
crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)

# Create the WebCrawler instance with the defined strategy
crawler = WebCrawler(crawler_strategy=crawler_strategy)

# Run the crawler with a cosine extraction strategy and a semantic keyword filter
result = crawler.run(
    url="https://www.nbcnews.com/business",
    extraction_strategy=CosineStrategy(
        semantic_filter="technology",
    ),
)

# Run the crawler with an LLM extraction strategy and a CSS selector
result = crawler.run(
    url="https://www.nbcnews.com/business",
    extraction_strategy=LLMExtractionStrategy(
        provider="openai/gpt-4o",
        api_token=os.getenv('OPENAI_API_KEY'),
        instruction="Extract only content related to technology"
    ),
    css_selector="p"
)

# Display the extracted result
print(result)
```

With Crawl4AI, you can perform advanced web crawling and data extraction tasks with just a few lines of code. This example demonstrates how you can harness the power of Crawl4AI to simplify your workflow and get the data you need efficiently.

---

*Continue reading to learn more about the features, installation process, usage, and more.*

## Table of Contents

1. [Features](#features-)
2. [Installation](#installation-)
3. [REST API/Local Server](#using-the-local-server-or-rest-api-)
4. [Python Library Usage](#python-library-usage-)
5. [Parameters](#parameters-)
6. [Chunking Strategies](#chunking-strategies-)
7. [Extraction Strategies](#extraction-strategies-)
8. [Contributing](#contributing-)
9. [License](#license-)
10. [Contact](#contact-)
>>>>>>> new-release-0.0.2-no-spacy

✨ visit our [Documentation Website](https://crawl4ai.com/mkdocs/)

✨ Check [Demo](https://crawl4ai.com/mkdocs/demo)

## Features ✨

- 🕷️ Efficient web crawling to extract valuable data from websites
- 🆓 Completely free and open-source
- 🤖 LLM-friendly output formats (JSON, cleaned HTML, markdown)
- 🌍 Supports crawling multiple URLs simultaneously
- 🌃 Replaces media tags with ALT text
- 🆓 Completely free to use and open-source
- 📜 Execute custom JavaScript before crawling
- 📚 Chunking strategies: topic-based, regex, sentence, and more
- 🧠 Extraction strategies: cosine clustering, LLM, and more
- 🎨 Extracts and returns all media tags (Images, Audio, and Video)
- 🔗 Extracts all external and internal links
- 📚 Extracts metadata from the page
- 🔄 Custom hooks for authentication, headers, and page modifications before crawling
- 🕵️ User-agent customization
- 🖼️ Takes screenshots of the page
- 📜 Executes multiple custom JavaScripts before crawling
- 📚 Various chunking strategies: topic-based, regex, sentence, and more
- 🧠 Advanced extraction strategies: cosine clustering, LLM, and more
- 🎯 CSS selector support
- 📝 Pass instructions/keywords to refine extraction
- 📝 Passes instructions/keywords to refine extraction

## Installation 💻
# Crawl4AI

There are three ways to use Crawl4AI:
1. As a library (Recommended)
2. As a local server (Docker) or using the REST API
3. As a Google Colab notebook. [](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)
## 🌟 Shoutout to Contributors of v0.2.77!

To install Crawl4AI as a library, follow these steps:
A big thank you to the amazing contributors who've made this release possible:

1. Install the package from GitHub:
- [@aravindkarnam](https://github.com/aravindkarnam) for the new image description feature
- [@FractalMind](https://github.com/FractalMind) for our official Docker Hub image
- [@ketonkss4](https://github.com/ketonkss4) for helping streamline our Selenium setup

Your contributions are driving Crawl4AI forward! 🚀

## Cool Examples 🚀

### Quick Start

```python
from crawl4ai import WebCrawler

# Create an instance of WebCrawler
crawler = WebCrawler()

# Warm up the crawler (load necessary models)
crawler.warmup()

# Run the crawler on a URL
result = crawler.run(url="https://www.nbcnews.com/business")

# Print the extracted content
print(result.markdown)
```

## How to install 🛠

### Using pip 🐍
```bash
virtualenv venv
source venv/bin/activate
pip install "crawl4ai[all] @ git+https://github.com/unclecode/crawl4ai.git"
pip install "crawl4ai @ git+https://github.com/unclecode/crawl4ai.git"
```

💡 It's better to run the following CLI command to load the required models. This is optional, but it will boost the crawler's performance and speed. You only need to do this once.
### Using Docker 🐳

crawl4ai-download-models

2. Alternatively, you can clone the repository and install the package locally:
```bash
virtualenv venv
source venv/bin/activate
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai
pip install -e .[all]
```

3. Use Docker to run the local server:
```bash
docker build -t crawl4ai .
# For Mac users
# For Mac users (M1/M2)
# docker build --platform linux/amd64 -t crawl4ai .
docker build -t crawl4ai .
docker run -d -p 8000:80 crawl4ai
```

For more information about how to run Crawl4AI as a local server, please refer to the [GitHub repository](https://github.com/unclecode/crawl4ai).

### Using Docker Hub 🐳

## Using the Local Server or REST API 🌐

You can also use Crawl4AI through the REST API. This method allows you to send HTTP requests to the Crawl4AI server and receive structured data in response. The base URL for the API is `https://crawl4ai.com/crawl`. If you run the local server, you can use `http://localhost:8000/crawl`. (The port depends on your Docker configuration.)

### Example Usage

To use the REST API, send a POST request to `https://crawl4ai.com/crawl` with the following parameters in the request body.

**Example Request:**
```json
{
    "urls": ["https://www.nbcnews.com/business"],
    "include_raw_html": false,
    "bypass_cache": true,
    "word_count_threshold": 5,
    "extraction_strategy": "CosineStrategy",
    "chunking_strategy": "RegexChunking",
    "css_selector": "p",
    "verbose": true,
    "extraction_strategy_args": {
        "semantic_filter": "finance economy and stock market",
        "word_count_threshold": 20,
        "max_dist": 0.2,
        "linkage_method": "ward",
        "top_k": 3
    },
    "chunking_strategy_args": {
        "patterns": ["\n\n"]
    }
}
```

```bash
docker pull unclecode/crawl4ai:latest
docker run -d -p 8000:80 unclecode/crawl4ai:latest
```

**Example Response:**
```json
{
    "status": "success",
    "data": [
        {
            "url": "https://www.nbcnews.com/business",
            "extracted_content": "...",
            "html": "...",
            "markdown": "...",
            "metadata": {...}
        }
    ]
}
```

For more information about the available parameters and their descriptions, refer to the [Parameters](#parameters-) section.
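As a hedged illustration, the same request can also be sent from Python against a locally running server. This sketch assumes the local endpoint `http://localhost:8000/crawl` shown above and the `requests` package:

```python
import requests

payload = {
    "urls": ["https://www.nbcnews.com/business"],
    "include_raw_html": False,
    "bypass_cache": True,
    "word_count_threshold": 5,
    "extraction_strategy": "CosineStrategy",
    "chunking_strategy": "RegexChunking",
    "css_selector": "p",
}

# POST the crawl request to the local server and read back the structured result
response = requests.post("http://localhost:8000/crawl", json=payload)
data = response.json()
print(data["status"], len(data["data"]))
```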

## Speed-First Design 🚀

Perhaps the most important design principle for this library is speed. We need to ensure it can handle many links and resources in parallel as quickly as possible. By combining this speed with fast LLMs like Groq, the results will be truly amazing.

## Python Library Usage 🚀

🔥 A great way to try out Crawl4AI is to run `quickstart.py` in the `docs/examples` directory. This script demonstrates how to use Crawl4AI to crawl a website and extract content from it.

### Quickstart Guide

Create an instance of WebCrawler and call the `warmup()` function.
```python
import time
from crawl4ai.web_crawler import WebCrawler
crawler = WebCrawler()
crawler.warmup()

start = time.time()
url = r"https://www.nbcnews.com/business"
result = crawler.run(url, word_count_threshold=10, bypass_cache=True)
end = time.time()
print(f"Time taken: {end - start}")
```

### Understanding 'bypass_cache' and 'include_raw_html' parameters
Let's take a look at the measured time for the above code snippet:

First crawl (caches the result):
```python
result = crawler.run(url="https://www.nbcnews.com/business")
```
```bash
[LOG] 🚀 Crawling done, success: True, time taken: 1.3623387813568115 seconds
[LOG] 🚀 Content extracted, success: True, time taken: 0.05715131759643555 seconds
[LOG] 🚀 Extraction, time taken: 0.05750393867492676 seconds.
Time taken: 1.439958095550537
```
Fetching the content from the page took 1.3623 seconds, and extracting the content took 0.0575 seconds. 🚀

### Extract Structured Data from Web Pages 📊

Crawl all OpenAI models and their fees from the official page.

Second crawl (forces a fresh crawl):
```python
result = crawler.run(url="https://www.nbcnews.com/business", bypass_cache=True)
```
💡 Don't forget to set `bypass_cache` to True if you want to try different strategies for the same URL. Otherwise, the cached result will be returned. You can also set `always_by_pass_cache` to True in the constructor to always bypass the cache.
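For instance, a crawler that never reads from the cache can be constructed roughly like this (a small sketch using the constructor flag mentioned above):

```python
from crawl4ai import WebCrawler

# Every call to run() will now bypass the cache
crawler = WebCrawler(always_by_pass_cache=True)
```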
import os
from crawl4ai import WebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy
from pydantic import BaseModel, Field

Crawl result without raw HTML content:
```python
result = crawler.run(url="https://www.nbcnews.com/business", include_raw_html=False)
```
class OpenAIModelFee(BaseModel):
    model_name: str = Field(..., description="Name of the OpenAI model.")
    input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
    output_fee: str = Field(..., description="Fee for output token for the OpenAI model.")

### Adding a chunking strategy: RegexChunking
url = 'https://openai.com/api/pricing/'
crawler = WebCrawler()
crawler.warmup()

Using RegexChunking:
```python
result = crawler.run(
    url="https://www.nbcnews.com/business",
    chunking_strategy=RegexChunking(patterns=["\n\n"])
)
```

Using NlpSentenceChunking:
```python
result = crawler.run(
    url="https://www.nbcnews.com/business",
    chunking_strategy=NlpSentenceChunking()
)
```

### Extraction strategy: CosineStrategy

So far, the extracted content is just the result of chunking. To extract meaningful content, you can use extraction strategies. These strategies cluster consecutive chunks into meaningful blocks, keeping the same order as the text in the HTML. This approach is perfect for use in RAG applications and semantic search queries.

Using CosineStrategy:
```python
result = crawler.run(
    url="https://www.nbcnews.com/business",
    extraction_strategy=CosineStrategy(
        semantic_filter="",
        word_count_threshold=10,
        max_dist=0.2,
        linkage_method="ward",
        top_k=3
    url=url,
    word_count_threshold=1,
    extraction_strategy=LLMExtractionStrategy(
        provider="openai/gpt-4o", api_token=os.getenv('OPENAI_API_KEY'),
        schema=OpenAIModelFee.schema(),
        extraction_type="schema",
        instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
        Do not miss any models in the entire content. One extracted model JSON format should look like this:
        {"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}."""
    ),
    bypass_cache=True,
)
)

print(result.extracted_content)
```

You can set `semantic_filter` to filter relevant documents before clustering. Documents are filtered based on their cosine similarity to the keyword filter embedding.
### Execute JS, Filter Data with CSS Selector, and Clustering

```python
from crawl4ai import WebCrawler
from crawl4ai.extraction_strategy import CosineStrategy

js_code = ["const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();"]

crawler = WebCrawler()
crawler.warmup()

result = crawler.run(
    url="https://www.nbcnews.com/business",
    extraction_strategy=CosineStrategy(
        semantic_filter="finance economy and stock market",
        word_count_threshold=10,
        max_dist=0.2,
        linkage_method="ward",
        top_k=3
    )
    js=js_code,
    css_selector="p",
    extraction_strategy=CosineStrategy(semantic_filter="technology")
)

print(result.extracted_content)
```

### Using LLMExtractionStrategy
## Documentation 📚

Without instructions:
```python
result = crawler.run(
    url="https://www.nbcnews.com/business",
    extraction_strategy=LLMExtractionStrategy(
        provider="openai/gpt-4o",
        api_token=os.getenv('OPENAI_API_KEY')
    )
)
```

With instructions:
```python
result = crawler.run(
    url="https://www.nbcnews.com/business",
    extraction_strategy=LLMExtractionStrategy(
        provider="openai/gpt-4o",
        api_token=os.getenv('OPENAI_API_KEY'),
        instruction="I am interested in only financial news"
    )
)
```

### Targeted extraction using CSS selector

Extract only H2 tags:
```python
result = crawler.run(
    url="https://www.nbcnews.com/business",
    css_selector="h2"
)
```

### Passing JavaScript code to click 'Load More' button

Using JavaScript to click the 'Load More' button:
```python
js_code = """
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
loadMoreButton && loadMoreButton.click();
"""
crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
crawler = WebCrawler(crawler_strategy=crawler_strategy, always_by_pass_cache=True)
result = crawler.run(url="https://www.nbcnews.com/business")
```

## Parameters 📖

| Parameter | Description | Required | Default Value |
|-----------|-------------|----------|---------------|
| `urls` | A list of URLs to crawl and extract data from. | Yes | - |
| `include_raw_html` | Whether to include the raw HTML content in the response. | No | `false` |
| `bypass_cache` | Whether to force a fresh crawl even if the URL has been previously crawled. | No | `false` |
| `word_count_threshold` | The minimum number of words a block must contain to be considered meaningful (minimum value is 5). | No | `5` |
| `extraction_strategy` | The strategy to use for extracting content from the HTML (e.g., "CosineStrategy"). | No | `NoExtractionStrategy` |
| `chunking_strategy` | The strategy to use for chunking the text before processing (e.g., "RegexChunking"). | No | `RegexChunking` |
| `css_selector` | The CSS selector to target specific parts of the HTML for extraction. | No | `None` |
| `verbose` | Whether to enable verbose logging. | No | `true` |
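To see several of these parameters working together, here is a hedged sketch of a single `crawler.run` call that mirrors the REST parameters above in the Python API (combining a word-count threshold, a chunking strategy, a CSS selector, and cache bypassing):

```python
result = crawler.run(
    url="https://www.nbcnews.com/business",
    word_count_threshold=10,                                # ignore blocks with fewer than 10 words
    chunking_strategy=RegexChunking(patterns=["\n\n"]),
    extraction_strategy=CosineStrategy(semantic_filter="finance"),
    css_selector="p",                                       # only consider paragraph tags
    bypass_cache=True,                                      # force a fresh crawl
    verbose=True,
)
print(result.extracted_content)
```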

## Chunking Strategies 📚

### RegexChunking

`RegexChunking` is a text chunking strategy that splits a given text into smaller parts using regular expressions. This is useful for preparing large texts for processing by language models, ensuring they are divided into manageable segments.

**Constructor Parameters:**
- `patterns` (list, optional): A list of regular expression patterns used to split the text. Default is to split by double newlines (`['\n\n']`).

**Example usage:**
```python
chunker = RegexChunking(patterns=[r'\n\n', r'\. '])
chunks = chunker.chunk("This is a sample text. It will be split into chunks.")
```

### NlpSentenceChunking

`NlpSentenceChunking` uses a natural language processing model to chunk a given text into sentences. This approach leverages SpaCy to accurately split text based on sentence boundaries.

**Constructor Parameters:**
- None.

**Example usage:**
```python
chunker = NlpSentenceChunking()
chunks = chunker.chunk("This is a sample text. It will be split into sentences.")
```

### TopicSegmentationChunking

`TopicSegmentationChunking` uses the TextTiling algorithm to segment a given text into topic-based chunks. This method identifies thematic boundaries in the text.

**Constructor Parameters:**
- `num_keywords` (int, optional): The number of keywords to extract for each topic segment. Default is `3`.

**Example usage:**
```python
chunker = TopicSegmentationChunking(num_keywords=3)
chunks = chunker.chunk("This is a sample text. It will be split into topic-based segments.")
```

### FixedLengthWordChunking

`FixedLengthWordChunking` splits a given text into chunks of fixed length, based on the number of words.

**Constructor Parameters:**
- `chunk_size` (int, optional): The number of words in each chunk. Default is `100`.

**Example usage:**
```python
chunker = FixedLengthWordChunking(chunk_size=100)
chunks = chunker.chunk("This is a sample text. It will be split into fixed-length word chunks.")
```

### SlidingWindowChunking

`SlidingWindowChunking` uses a sliding window approach to chunk a given text. Each chunk has a fixed length, and the window slides by a specified step size.

**Constructor Parameters:**
- `window_size` (int, optional): The number of words in each chunk. Default is `100`.
- `step` (int, optional): The number of words to slide the window. Default is `50`.

**Example usage:**
```python
chunker = SlidingWindowChunking(window_size=100, step=50)
chunks = chunker.chunk("This is a sample text. It will be split using a sliding window approach.")
```

## Extraction Strategies 🧠

### NoExtractionStrategy

`NoExtractionStrategy` is a basic extraction strategy that returns the entire HTML content without any modification. It is useful for cases where no specific extraction is required.

**Constructor Parameters:**
None.

**Example usage:**
```python
extractor = NoExtractionStrategy()
extracted_content = extractor.extract(url, html)
```

### LLMExtractionStrategy

`LLMExtractionStrategy` uses a Language Model (LLM) to extract meaningful blocks or chunks from the given HTML content. This strategy leverages an external provider for language model completions.

**Constructor Parameters:**
- `provider` (str, optional): The provider to use for the language model completions. Default is `DEFAULT_PROVIDER` (e.g., openai/gpt-4).
- `api_token` (str, optional): The API token for the provider. If not provided, it will try to load from the environment variable `OPENAI_API_KEY`.
- `instruction` (str, optional): An instruction to guide the LLM on how to perform the extraction. This allows users to specify the type of data they are interested in or set the tone of the response. Default is `None`.

**Example usage:**
```python
extractor = LLMExtractionStrategy(provider='openai', api_token='your_api_token', instruction='Extract only news about AI.')
extracted_content = extractor.extract(url, html)
```

### CosineStrategy

`CosineStrategy` uses hierarchical clustering based on cosine similarity to extract clusters of text from the given HTML content. This strategy is suitable for identifying related content sections.

**Constructor Parameters:**
- `semantic_filter` (str, optional): A string containing keywords for filtering relevant documents before clustering. If provided, documents are filtered based on their cosine similarity to the keyword filter embedding. Default is `None`.
- `word_count_threshold` (int, optional): Minimum number of words per cluster. Default is `20`.
- `max_dist` (float, optional): The maximum cophenetic distance on the dendrogram to form clusters. Default is `0.2`.
- `linkage_method` (str, optional): The linkage method for hierarchical clustering. Default is `'ward'`.
- `top_k` (int, optional): Number of top categories to extract. Default is `3`.
- `model_name` (str, optional): The model name for embedding generation. Default is `'BAAI/bge-small-en-v1.5'`.

**Example usage:**
```python
extractor = CosineStrategy(semantic_filter='finance rental prices', word_count_threshold=10, max_dist=0.2, linkage_method='ward', top_k=3, model_name='BAAI/bge-small-en-v1.5')
extracted_content = extractor.extract(url, html)
```

### TopicExtractionStrategy

`TopicExtractionStrategy` uses the TextTiling algorithm to segment the HTML content into topics and extracts keywords for each segment. This strategy is useful for identifying and summarizing thematic content.

**Constructor Parameters:**
- `num_keywords` (int, optional): Number of keywords to represent each topic segment. Default is `3`.

**Example usage:**
```python
extractor = TopicExtractionStrategy(num_keywords=3)
extracted_content = extractor.extract(url, html)
```

For detailed documentation, including installation instructions, advanced features, and API reference, visit our [Documentation Website](https://crawl4ai.com/mkdocs/).

## Contributing 🤝

We welcome contributions from the open-source community to help improve Crawl4AI and make it even more valuable for AI enthusiasts and developers. To contribute, please follow these steps:

1. Fork the repository.
2. Create a new branch for your feature or bug fix.
3. Make your changes and commit them with descriptive messages.
4. Push your changes to your forked repository.
5. Submit a pull request to the main repository.

For more information on contributing, please see our [contribution guidelines](https://github.com/unclecode/crawl4ai/blob/main/CONTRIBUTING.md).
We welcome contributions from the open-source community. Check out our [contribution guidelines](https://github.com/unclecode/crawl4ai/blob/main/CONTRIBUTING.md) for more information.

## License 📄

@@ -515,10 +204,14 @@ Crawl4AI is released under the [Apache 2.0 License](https://github.com/unclecode

## Contact 📧

If you have any questions, suggestions, or feedback, please feel free to reach out to us:
For questions, suggestions, or feedback, feel free to reach out:

- GitHub: [unclecode](https://github.com/unclecode)
- Twitter: [@unclecode](https://twitter.com/unclecode)
- Website: [crawl4ai.com](https://crawl4ai.com)

Let's work together to make the web more accessible and useful for AI applications! 💪🌐🤖
Happy Crawling! 🕸️🚀

## Star History

[](https://star-history.com/#unclecode/crawl4ai&Date)
chunking_strategy.py

@@ -3,6 +3,7 @@ import re
from collections import Counter
import string
from .model_loader import load_nltk_punkt
from .utils import *

# Define the abstract base class for chunking strategies
class ChunkingStrategy(ABC):
@@ -16,7 +17,7 @@ class ChunkingStrategy(ABC):

# Regex-based chunking
class RegexChunking(ChunkingStrategy):
    def __init__(self, patterns=None):
    def __init__(self, patterns=None, **kwargs):
        if patterns is None:
            patterns = [r'\n\n'] # Default split pattern
        self.patterns = patterns
@@ -32,7 +33,7 @@ class RegexChunking(ChunkingStrategy):

# NLP-based sentence chunking
class NlpSentenceChunking(ChunkingStrategy):
    def __init__(self):
    def __init__(self, **kwargs):
        load_nltk_punkt()
        pass

@@ -52,9 +53,9 @@ class NlpSentenceChunking(ChunkingStrategy):
# Topic-based segmentation using TextTiling
class TopicSegmentationChunking(ChunkingStrategy):

    def __init__(self, num_keywords=3):
    def __init__(self, num_keywords=3, **kwargs):
        import nltk as nl
        self.tokenizer = nl.toknize.TextTilingTokenizer()
        self.tokenizer = nl.tokenize.TextTilingTokenizer()
        self.num_keywords = num_keywords

    def chunk(self, text: str) -> list:
@@ -82,7 +83,7 @@ class TopicSegmentationChunking(ChunkingStrategy):

# Fixed-length word chunks
class FixedLengthWordChunking(ChunkingStrategy):
    def __init__(self, chunk_size=100):
    def __init__(self, chunk_size=100, **kwargs):
        self.chunk_size = chunk_size

    def chunk(self, text: str) -> list:
@@ -91,7 +92,7 @@ class FixedLengthWordChunking(ChunkingStrategy):

# Sliding window chunking
class SlidingWindowChunking(ChunkingStrategy):
    def __init__(self, window_size=100, step=50):
    def __init__(self, window_size=100, step=50, **kwargs):
        self.window_size = window_size
        self.step = step
config.py

@@ -21,7 +21,20 @@ PROVIDER_MODELS = {

# Chunk token threshold
CHUNK_TOKEN_THRESHOLD = 1000
CHUNK_TOKEN_THRESHOLD = 500
OVERLAP_RATE = 0.1
WORD_TOKEN_RATE = 1.3

# Threshold for the minimum number of words in an HTML tag to be considered
MIN_WORD_THRESHOLD = 5
MIN_WORD_THRESHOLD = 1
IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD = 1

# Threshold for the image extraction - range is 1 to 6.
# Images are scored with a point-based system to filter by usefulness. Points are assigned
# to each image based on the following aspects:
# - If either height or width exceeds 150px
# - If image size is greater than 10Kb
# - If the alt property is set
# - If the image format is jpg, png or webp
# - If the image is in the first half of the total images extracted from the page
IMAGE_SCORE_THRESHOLD = 2
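As a rough illustration of how these constants could fit together (an assumption about intent, not the library's actual implementation), the word-to-token rate and chunk threshold suggest a word-count based splitter along these lines:

```python
# Hypothetical sketch: split text into chunks whose estimated token count stays
# under CHUNK_TOKEN_THRESHOLD, using WORD_TOKEN_RATE to convert words to tokens
# and OVERLAP_RATE to carry a small overlap between consecutive chunks.
CHUNK_TOKEN_THRESHOLD = 500
OVERLAP_RATE = 0.1
WORD_TOKEN_RATE = 1.3

def split_by_token_budget(text: str) -> list:
    words = text.split()
    words_per_chunk = int(CHUNK_TOKEN_THRESHOLD / WORD_TOKEN_RATE)
    overlap = int(words_per_chunk * OVERLAP_RATE)
    chunks, start = [], 0
    while start < len(words):
        end = start + words_per_chunk
        chunks.append(" ".join(words[start:end]))
        start = max(end - overlap, start + 1)
    return chunks
```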
|
||||
|
||||
@@ -5,17 +5,58 @@ from selenium.webdriver.common.by import By
|
||||
from selenium.webdriver.support.ui import WebDriverWait
|
||||
from selenium.webdriver.support import expected_conditions as EC
|
||||
from selenium.webdriver.chrome.options import Options
|
||||
from selenium.common.exceptions import InvalidArgumentException
|
||||
from selenium.common.exceptions import InvalidArgumentException, WebDriverException
|
||||
# from selenium.webdriver.chrome.service import Service as ChromeService
|
||||
# from webdriver_manager.chrome import ChromeDriverManager
|
||||
# from urllib3.exceptions import MaxRetryError
|
||||
|
||||
from typing import List
|
||||
from .config import *
|
||||
import logging, time
|
||||
import base64
|
||||
from PIL import Image, ImageDraw, ImageFont
|
||||
from io import BytesIO
|
||||
from typing import List, Callable
|
||||
import requests
|
||||
import os
|
||||
from pathlib import Path
|
||||
from .utils import *
|
||||
|
||||
logger = logging.getLogger('selenium.webdriver.remote.remote_connection')
|
||||
logger.setLevel(logging.WARNING)
|
||||
|
||||
logger_driver = logging.getLogger('selenium.webdriver.common.service')
|
||||
logger_driver.setLevel(logging.WARNING)
|
||||
|
||||
urllib3_logger = logging.getLogger('urllib3.connectionpool')
|
||||
urllib3_logger.setLevel(logging.WARNING)
|
||||
|
||||
# Disable http.client logging
|
||||
http_client_logger = logging.getLogger('http.client')
|
||||
http_client_logger.setLevel(logging.WARNING)
|
||||
|
||||
# Disable driver_finder and service logging
|
||||
driver_finder_logger = logging.getLogger('selenium.webdriver.common.driver_finder')
|
||||
driver_finder_logger.setLevel(logging.WARNING)
|
||||
|
||||
|
||||
|
||||
|
||||
class CrawlerStrategy(ABC):
|
||||
@abstractmethod
|
||||
def crawl(self, url: str, **kwargs) -> str:
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def take_screenshot(self, save_path: str):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def update_user_agent(self, user_agent: str):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def set_hook(self, hook_type: str, hook: Callable):
|
||||
pass
|
||||
|
||||
class CloudCrawlerStrategy(CrawlerStrategy):
|
||||
def __init__(self, use_cached_html = False):
|
||||
@@ -33,60 +74,272 @@ class CloudCrawlerStrategy(CrawlerStrategy):
|
||||
response = requests.post("http://crawl4ai.uccode.io/crawl", json=data)
|
||||
response = response.json()
|
||||
html = response["results"][0]["html"]
|
||||
return html
|
||||
return sanitize_input_encode(html)
|
||||
|
||||
class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
|
||||
def __init__(self, use_cached_html=False, js_code=None):
|
||||
def __init__(self, use_cached_html=False, js_code=None, **kwargs):
|
||||
super().__init__()
|
||||
print("[LOG] 🚀 Initializing LocalSeleniumCrawlerStrategy")
|
||||
self.options = Options()
|
||||
self.options.headless = True
|
||||
if kwargs.get("user_agent"):
|
||||
self.options.add_argument("--user-agent=" + kwargs.get("user_agent"))
|
||||
else:
|
||||
user_agent = kwargs.get("user_agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36")
|
||||
self.options.add_argument(f"--user-agent={user_agent}")
|
||||
self.options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36")
|
||||
|
||||
self.options.headless = kwargs.get("headless", True)
|
||||
if self.options.headless:
|
||||
self.options.add_argument("--headless")
|
||||
|
||||
self.options.add_argument("--disable-gpu")
|
||||
self.options.add_argument("--window-size=1920,1080")
|
||||
self.options.add_argument("--no-sandbox")
|
||||
self.options.add_argument("--disable-dev-shm-usage")
|
||||
self.options.add_argument("--disable-blink-features=AutomationControlled")
|
||||
|
||||
# self.options.add_argument("--disable-dev-shm-usage")
|
||||
self.options.add_argument("--disable-gpu")
|
||||
self.options.add_argument("--disable-extensions")
|
||||
self.options.add_argument("--headless")
|
||||
# self.options.add_argument("--disable-extensions")
|
||||
# self.options.add_argument("--disable-infobars")
|
||||
# self.options.add_argument("--disable-logging")
|
||||
# self.options.add_argument("--disable-popup-blocking")
|
||||
# self.options.add_argument("--disable-translate")
|
||||
# self.options.add_argument("--disable-default-apps")
|
||||
# self.options.add_argument("--disable-background-networking")
|
||||
# self.options.add_argument("--disable-sync")
|
||||
# self.options.add_argument("--disable-features=NetworkService,NetworkServiceInProcess")
|
||||
# self.options.add_argument("--disable-browser-side-navigation")
|
||||
# self.options.add_argument("--dns-prefetch-disable")
|
||||
# self.options.add_argument("--disable-web-security")
|
||||
self.options.add_argument("--log-level=3")
|
||||
self.use_cached_html = use_cached_html
|
||||
self.use_cached_html = use_cached_html
|
||||
self.js_code = js_code
|
||||
self.verbose = kwargs.get("verbose", False)
|
||||
|
||||
# Hooks
|
||||
self.hooks = {
|
||||
'on_driver_created': None,
|
||||
'on_user_agent_updated': None,
|
||||
'before_get_url': None,
|
||||
'after_get_url': None,
|
||||
'before_return_html': None
|
||||
}
|
||||
|
||||
# chromedriver_autoinstaller.install()
|
||||
import chromedriver_autoinstaller
|
||||
self.service = Service(chromedriver_autoinstaller.install())
|
||||
self.driver = webdriver.Chrome(service=self.service, options=self.options)
|
||||
# import chromedriver_autoinstaller
|
||||
# crawl4ai_folder = os.path.join(Path.home(), ".crawl4ai")
|
||||
# driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=self.options)
|
||||
# chromedriver_path = chromedriver_autoinstaller.install()
|
||||
# chromedriver_path = chromedriver_autoinstaller.utils.download_chromedriver()
|
||||
# self.service = Service(chromedriver_autoinstaller.install())
|
||||
|
||||
|
||||
# chromedriver_path = ChromeDriverManager().install()
|
||||
# self.service = Service(chromedriver_path)
|
||||
# self.service.log_path = "NUL"
|
||||
# self.driver = webdriver.Chrome(service=self.service, options=self.options)
|
||||
|
||||
# Use selenium-manager (built into Selenium 4.10.0+)
|
||||
self.service = Service()
|
||||
self.driver = webdriver.Chrome(options=self.options)
|
||||
|
||||
self.driver = self.execute_hook('on_driver_created', self.driver)
|
||||
|
||||
if kwargs.get("cookies"):
|
||||
for cookie in kwargs.get("cookies"):
|
||||
self.driver.add_cookie(cookie)
|
||||
|
||||
|
||||
|
||||
def crawl(self, url: str) -> str:
|
||||
def set_hook(self, hook_type: str, hook: Callable):
|
||||
if hook_type in self.hooks:
|
||||
self.hooks[hook_type] = hook
|
||||
else:
|
||||
raise ValueError(f"Invalid hook type: {hook_type}")
|
||||
|
||||
def execute_hook(self, hook_type: str, *args):
|
||||
hook = self.hooks.get(hook_type)
|
||||
if hook:
|
||||
result = hook(*args)
|
||||
if result is not None:
|
||||
if isinstance(result, webdriver.Chrome):
|
||||
return result
|
||||
else:
|
||||
raise TypeError(f"Hook {hook_type} must return an instance of webdriver.Chrome or None.")
|
||||
# If the hook returns None or there is no hook, return self.driver
|
||||
return self.driver
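The hook registry above accepts only the five keys listed, and execute_hook insists that a hook returns either a webdriver.Chrome instance or None. A short, illustrative sketch of wiring a hook (the timeout value is arbitrary):

def on_driver_created(driver):
    # configure the freshly created driver; returning it keeps it as self.driver
    driver.set_page_load_timeout(30)
    return driver

strategy.set_hook('on_driver_created', on_driver_created)
# set_hook with an unknown key raises ValueError("Invalid hook type: ...")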
|
||||
|
||||
def update_user_agent(self, user_agent: str):
|
||||
self.options.add_argument(f"user-agent={user_agent}")
|
||||
self.driver.quit()
|
||||
self.driver = webdriver.Chrome(service=self.service, options=self.options)
|
||||
self.driver = self.execute_hook('on_user_agent_updated', self.driver)
|
||||
|
||||
def set_custom_headers(self, headers: dict):
|
||||
# Enable Network domain for sending headers
|
||||
self.driver.execute_cdp_cmd('Network.enable', {})
|
||||
# Set extra HTTP headers
|
||||
self.driver.execute_cdp_cmd('Network.setExtraHTTPHeaders', {'headers': headers})
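Because Network.setExtraHTTPHeaders is a Chrome DevTools Protocol command, the headers apply to every request made by this driver session from then on. For example (header values are placeholders):

strategy.set_custom_headers({
    "Accept-Language": "en-US,en;q=0.9",
    "X-Requested-With": "XMLHttpRequest",
})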
|
||||
|
||||
def _ensure_page_load(self, max_checks=6, check_interval=0.01):
|
||||
initial_length = len(self.driver.page_source)
|
||||
|
||||
for ix in range(max_checks):
|
||||
# print(f"Checking page load: {ix}")
|
||||
time.sleep(check_interval)
|
||||
current_length = len(self.driver.page_source)
|
||||
|
||||
if current_length != initial_length:
|
||||
break
|
||||
|
||||
return self.driver.page_source
|
||||
|
||||
def crawl(self, url: str, **kwargs) -> str:
|
||||
# Create md5 hash of the URL
|
||||
import hashlib
|
||||
url_hash = hashlib.md5(url.encode()).hexdigest()
|
||||
|
||||
if self.use_cached_html:
|
||||
cache_file_path = os.path.join(Path.home(), ".crawl4ai", "cache", url.replace("/", "_"))
|
||||
cache_file_path = os.path.join(Path.home(), ".crawl4ai", "cache", url_hash)
|
||||
if os.path.exists(cache_file_path):
|
||||
with open(cache_file_path, "r") as f:
|
||||
return f.read()
|
||||
return sanitize_input_encode(f.read())
|
||||
|
||||
try:
|
||||
self.driver.get(url)
|
||||
self.driver = self.execute_hook('before_get_url', self.driver)
|
||||
if self.verbose:
|
||||
print(f"[LOG] 🕸️ Crawling {url} using LocalSeleniumCrawlerStrategy...")
|
||||
self.driver.get(url) #<html><head></head><body></body></html>
|
||||
|
||||
WebDriverWait(self.driver, 20).until(
|
||||
lambda d: d.execute_script('return document.readyState') == 'complete'
|
||||
)
|
||||
WebDriverWait(self.driver, 10).until(
|
||||
EC.presence_of_all_elements_located((By.TAG_NAME, "html"))
|
||||
EC.presence_of_all_elements_located((By.TAG_NAME, "body"))
|
||||
)
|
||||
|
||||
self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
|
||||
|
||||
self.driver = self.execute_hook('after_get_url', self.driver)
|
||||
html = sanitize_input_encode(self._ensure_page_load()) # self.driver.page_source
|
||||
can_not_be_done_headless = False # Look at my creativity for naming variables
|
||||
|
||||
# TODO: Very ugly approach, but promise to change it!
|
||||
if kwargs.get('bypass_headless', False) or html == "<html><head></head><body></body></html>":
|
||||
print("[LOG] 🙌 Page could not be loaded in headless mode. Trying non-headless mode...")
|
||||
can_not_be_done_headless = True
|
||||
options = Options()
|
||||
options.headless = False
|
||||
# set window size very small
|
||||
options.add_argument("--window-size=5,5")
|
||||
driver = webdriver.Chrome(service=self.service, options=options)
|
||||
driver.get(url)
|
||||
self.driver = self.execute_hook('after_get_url', driver)
|
||||
html = sanitize_input_encode(driver.page_source)
|
||||
driver.quit()
|
||||
|
||||
# Execute JS code if provided
|
||||
if self.js_code:
|
||||
if self.js_code and type(self.js_code) == str:
|
||||
self.driver.execute_script(self.js_code)
|
||||
# Optionally, wait for some condition after executing the JS code
|
||||
WebDriverWait(self.driver, 10).until(
|
||||
lambda driver: driver.execute_script("return document.readyState") == "complete"
|
||||
)
|
||||
elif self.js_code and type(self.js_code) == list:
|
||||
for js in self.js_code:
|
||||
self.driver.execute_script(js)
|
||||
WebDriverWait(self.driver, 10).until(
|
||||
lambda driver: driver.execute_script("return document.readyState") == "complete"
|
||||
)
|
||||
|
||||
html = self.driver.page_source
|
||||
if not can_not_be_done_headless:
|
||||
html = sanitize_input_encode(self.driver.page_source)
|
||||
self.driver = self.execute_hook('before_return_html', self.driver, html)
|
||||
|
||||
# Store in cache
|
||||
cache_file_path = os.path.join(Path.home(), ".crawl4ai", "cache", url.replace("/", "_"))
|
||||
with open(cache_file_path, "w") as f:
|
||||
cache_file_path = os.path.join(Path.home(), ".crawl4ai", "cache", url_hash)
|
||||
with open(cache_file_path, "w", encoding="utf-8") as f:
|
||||
f.write(html)
|
||||
|
||||
if self.verbose:
|
||||
print(f"[LOG] ✅ Crawled {url} successfully!")
|
||||
|
||||
return html
|
||||
except InvalidArgumentException:
|
||||
raise InvalidArgumentException(f"Invalid URL {url}")
|
||||
if not hasattr(e, 'msg'):
|
||||
e.msg = sanitize_input_encode(str(e))
|
||||
raise InvalidArgumentException(f"Failed to crawl {url}: {e.msg}")
|
||||
except WebDriverException as e:
|
||||
# If e does not have a msg attribute, create it and set it to str(e)
|
||||
if not hasattr(e, 'msg'):
|
||||
e.msg = sanitize_input_encode(str(e))
|
||||
raise WebDriverException(f"Failed to crawl {url}: {e.msg}")
|
||||
except Exception as e:
|
||||
raise Exception(f"Failed to crawl {url}: {str(e)}")
|
||||
if not hasattr(e, 'msg'):
|
||||
e.msg = sanitize_input_encode(str(e))
|
||||
raise Exception(f"Failed to crawl {url}: {e.msg}")
|
||||
|
||||
def take_screenshot(self) -> str:
|
||||
try:
|
||||
# Get the dimensions of the page
|
||||
total_width = self.driver.execute_script("return document.body.scrollWidth")
|
||||
total_height = self.driver.execute_script("return document.body.scrollHeight")
|
||||
|
||||
# Set the window size to the dimensions of the page
|
||||
self.driver.set_window_size(total_width, total_height)
|
||||
|
||||
# Take screenshot
|
||||
screenshot = self.driver.get_screenshot_as_png()
|
||||
|
||||
# Open the screenshot with PIL
|
||||
image = Image.open(BytesIO(screenshot))
|
||||
|
||||
# Convert image to RGB mode (this will handle both RGB and RGBA images)
|
||||
rgb_image = image.convert('RGB')
|
||||
|
||||
# Convert to JPEG and compress
|
||||
buffered = BytesIO()
|
||||
rgb_image.save(buffered, format="JPEG", quality=85)
|
||||
img_base64 = base64.b64encode(buffered.getvalue()).decode('utf-8')
|
||||
|
||||
if self.verbose:
|
||||
print(f"[LOG] 📸 Screenshot taken and converted to base64")
|
||||
|
||||
return img_base64
|
||||
except Exception as e:
|
||||
error_message = sanitize_input_encode(f"Failed to take screenshot: {str(e)}")
|
||||
print(error_message)
|
||||
|
||||
# Generate an image with black background
|
||||
img = Image.new('RGB', (800, 600), color='black')
|
||||
draw = ImageDraw.Draw(img)
|
||||
|
||||
# Load a font
|
||||
try:
|
||||
font = ImageFont.truetype("arial.ttf", 40)
|
||||
except IOError:
|
||||
font = ImageFont.load_default()
|
||||
|
||||
# Define text color and wrap the text
|
||||
text_color = (255, 255, 255)
|
||||
max_width = 780
|
||||
wrapped_text = wrap_text(draw, error_message, font, max_width)
|
||||
|
||||
# Calculate text position
|
||||
text_position = (10, 10)
|
||||
|
||||
# Draw the text on the image
|
||||
draw.text(text_position, wrapped_text, fill=text_color, font=font)
|
||||
|
||||
# Convert to base64
|
||||
buffered = BytesIO()
|
||||
img.save(buffered, format="JPEG")
|
||||
img_base64 = base64.b64encode(buffered.getvalue()).decode('utf-8')
|
||||
|
||||
return img_base64
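Since take_screenshot always returns a base64-encoded JPEG, whether it captured the page or fell back to the black error image, a caller can persist it like this (sketch):

import base64

screenshot_b64 = strategy.take_screenshot()
with open("page.jpg", "wb") as f:
    f.write(base64.b64decode(screenshot_b64))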
|
||||
|
||||
def quit(self):
|
||||
self.driver.quit()
|
||||
self.driver.quit()
|
||||
|
||||
@@ -1,13 +1,12 @@
|
||||
import os
|
||||
from pathlib import Path
|
||||
import sqlite3
|
||||
from typing import Optional
|
||||
from typing import Optional, Tuple
|
||||
|
||||
DB_PATH = os.path.join(Path.home(), ".crawl4ai")
|
||||
os.makedirs(DB_PATH, exist_ok=True)
|
||||
DB_PATH = os.path.join(DB_PATH, "crawl4ai.db")
|
||||
|
||||
|
||||
def init_db():
|
||||
global DB_PATH
|
||||
conn = sqlite3.connect(DB_PATH)
|
||||
@@ -19,22 +18,37 @@ def init_db():
|
||||
cleaned_html TEXT,
|
||||
markdown TEXT,
|
||||
extracted_content TEXT,
|
||||
success BOOLEAN
|
||||
success BOOLEAN,
|
||||
media TEXT DEFAULT "{}",
|
||||
links TEXT DEFAULT "{}",
|
||||
metadata TEXT DEFAULT "{}",
|
||||
screenshot TEXT DEFAULT ""
|
||||
)
|
||||
''')
|
||||
conn.commit()
|
||||
conn.close()
|
||||
|
||||
def check_db_path():
|
||||
if not DB_PATH:
|
||||
raise ValueError("Database path is not set or is empty.")
|
||||
|
||||
def get_cached_url(url: str) -> Optional[Tuple[str, str, str, str, str, bool]]:
|
||||
def alter_db_add_screenshot(new_column: str = "media"):
|
||||
check_db_path()
|
||||
try:
|
||||
conn = sqlite3.connect(DB_PATH)
|
||||
cursor = conn.cursor()
|
||||
cursor.execute('SELECT url, html, cleaned_html, markdown, extracted_content, success FROM crawled_data WHERE url = ?', (url,))
|
||||
cursor.execute(f'ALTER TABLE crawled_data ADD COLUMN {new_column} TEXT DEFAULT ""')
|
||||
conn.commit()
|
||||
conn.close()
|
||||
except Exception as e:
|
||||
print(f"Error altering database to add screenshot column: {e}")
|
||||
|
||||
def check_db_path():
|
||||
if not DB_PATH:
|
||||
raise ValueError("Database path is not set or is empty.")
|
||||
|
||||
def get_cached_url(url: str) -> Optional[Tuple[str, str, str, str, str, str, str, bool, str]]:
|
||||
check_db_path()
|
||||
try:
|
||||
conn = sqlite3.connect(DB_PATH)
|
||||
cursor = conn.cursor()
|
||||
cursor.execute('SELECT url, html, cleaned_html, markdown, extracted_content, success, media, links, metadata, screenshot FROM crawled_data WHERE url = ?', (url,))
|
||||
result = cursor.fetchone()
|
||||
conn.close()
|
||||
return result
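With the extended schema, get_cached_url returns a ten-column tuple and cache_url accepts JSON-serialized media, links, and metadata plus the screenshot string. A usage sketch (the html, markdown, and *_dict variables are placeholders):

import json

cache_url(
    url="https://example.com", html=raw_html, cleaned_html=cleaned_html,
    markdown=markdown, extracted_content="[]", success=True,
    media=json.dumps(media_dict), links=json.dumps(links_dict),
    metadata=json.dumps(meta_dict), screenshot="",
)

row = get_cached_url("https://example.com")
if row:
    (url, html, cleaned_html, markdown, extracted_content,
     success, media, links, metadata, screenshot) = row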
|
||||
@@ -42,21 +56,25 @@ def get_cached_url(url: str) -> Optional[Tuple[str, str, str, str, str, bool]]:
|
||||
print(f"Error retrieving cached URL: {e}")
|
||||
return None
|
||||
|
||||
def cache_url(url: str, html: str, cleaned_html: str, markdown: str, extracted_content: str, success: bool):
|
||||
def cache_url(url: str, html: str, cleaned_html: str, markdown: str, extracted_content: str, success: bool, media : str = "{}", links : str = "{}", metadata : str = "{}", screenshot: str = ""):
|
||||
check_db_path()
|
||||
try:
|
||||
conn = sqlite3.connect(DB_PATH)
|
||||
cursor = conn.cursor()
|
||||
cursor.execute('''
|
||||
INSERT INTO crawled_data (url, html, cleaned_html, markdown, extracted_content, success)
|
||||
VALUES (?, ?, ?, ?, ?, ?)
|
||||
INSERT INTO crawled_data (url, html, cleaned_html, markdown, extracted_content, success, media, links, metadata, screenshot)
|
||||
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
|
||||
ON CONFLICT(url) DO UPDATE SET
|
||||
html = excluded.html,
|
||||
cleaned_html = excluded.cleaned_html,
|
||||
markdown = excluded.markdown,
|
||||
extracted_content = excluded.extracted_content,
|
||||
success = excluded.success
|
||||
''', (url, html, cleaned_html, markdown, extracted_content, success))
|
||||
success = excluded.success,
|
||||
media = excluded.media,
|
||||
links = excluded.links,
|
||||
metadata = excluded.metadata,
|
||||
screenshot = excluded.screenshot
|
||||
''', (url, html, cleaned_html, markdown, extracted_content, success, media, links, metadata, screenshot))
|
||||
conn.commit()
|
||||
conn.close()
|
||||
except Exception as e:
|
||||
@@ -95,4 +113,23 @@ def flush_db():
|
||||
conn.commit()
|
||||
conn.close()
|
||||
except Exception as e:
|
||||
print(f"Error flushing database: {e}")
|
||||
print(f"Error flushing database: {e}")
|
||||
|
||||
def update_existing_records(new_column: str = "media", default_value: str = "{}"):
|
||||
check_db_path()
|
||||
try:
|
||||
conn = sqlite3.connect(DB_PATH)
|
||||
cursor = conn.cursor()
|
||||
cursor.execute(f'UPDATE crawled_data SET {new_column} = "{default_value}" WHERE screenshot IS NULL')
|
||||
conn.commit()
|
||||
conn.close()
|
||||
except Exception as e:
|
||||
print(f"Error updating existing records: {e}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
# Delete the existing database file
|
||||
if os.path.exists(DB_PATH):
|
||||
os.remove(DB_PATH)
|
||||
init_db()
|
||||
# alter_db_add_screenshot("COL_NAME")
|
||||
|
||||
|
||||
@@ -3,14 +3,15 @@ from typing import Any, List, Dict, Optional, Union
|
||||
from concurrent.futures import ThreadPoolExecutor, as_completed
|
||||
import json, time
|
||||
# from optimum.intel import IPEXModel
|
||||
from .prompts import PROMPT_EXTRACT_BLOCKS, PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION
|
||||
from .prompts import *
|
||||
from .config import *
|
||||
from .utils import *
|
||||
from functools import partial
|
||||
from .model_loader import *
|
||||
|
||||
|
||||
import math
|
||||
import numpy as np
|
||||
|
||||
|
||||
class ExtractionStrategy(ABC):
|
||||
"""
|
||||
Abstract base class for all extraction strategies.
|
||||
@@ -46,6 +47,7 @@ class ExtractionStrategy(ABC):
|
||||
for future in as_completed(futures):
|
||||
extracted_content.extend(future.result())
|
||||
return extracted_content
|
||||
|
||||
class NoExtractionStrategy(ExtractionStrategy):
|
||||
def extract(self, url: str, html: str, *q, **kwargs) -> List[Dict[str, Any]]:
|
||||
return [{"index": 0, "content": html}]
|
||||
@@ -54,7 +56,9 @@ class NoExtractionStrategy(ExtractionStrategy):
|
||||
return [{"index": i, "tags": [], "content": section} for i, section in enumerate(sections)]
|
||||
|
||||
class LLMExtractionStrategy(ExtractionStrategy):
|
||||
def __init__(self, provider: str = DEFAULT_PROVIDER, api_token: Optional[str] = None, instruction:str = None, **kwargs):
|
||||
def __init__(self,
|
||||
provider: str = DEFAULT_PROVIDER, api_token: Optional[str] = None,
|
||||
instruction:str = None, schema:Dict = None, extraction_type = "block", **kwargs):
|
||||
"""
|
||||
Initialize the strategy with clustering parameters.
|
||||
|
||||
@@ -66,6 +70,18 @@ class LLMExtractionStrategy(ExtractionStrategy):
|
||||
self.provider = provider
|
||||
self.api_token = api_token or PROVIDER_MODELS.get(provider, None) or os.getenv("OPENAI_API_KEY")
|
||||
self.instruction = instruction
|
||||
self.extract_type = extraction_type
|
||||
self.schema = schema
|
||||
if schema:
|
||||
self.extract_type = "schema"
|
||||
|
||||
self.chunk_token_threshold = kwargs.get("chunk_token_threshold", CHUNK_TOKEN_THRESHOLD)
|
||||
self.overlap_rate = kwargs.get("overlap_rate", OVERLAP_RATE)
|
||||
self.word_token_rate = kwargs.get("word_token_rate", WORD_TOKEN_RATE)
|
||||
self.apply_chunking = kwargs.get("apply_chunking", True)
|
||||
if not self.apply_chunking:
|
||||
self.chunk_token_threshold = 1e9
|
||||
|
||||
self.verbose = kwargs.get("verbose", False)
|
||||
|
||||
if not self.api_token:
|
||||
@@ -80,23 +96,27 @@ class LLMExtractionStrategy(ExtractionStrategy):
|
||||
"HTML": escape_json_string(sanitize_html(html)),
|
||||
}
|
||||
|
||||
prompt_with_variables = PROMPT_EXTRACT_BLOCKS
|
||||
if self.instruction:
|
||||
variable_values["REQUEST"] = self.instruction
|
||||
prompt_with_variables = PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION
|
||||
|
||||
if self.extract_type == "schema":
|
||||
variable_values["SCHEMA"] = json.dumps(self.schema, indent=2)
|
||||
prompt_with_variables = PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION
|
||||
|
||||
prompt_with_variables = PROMPT_EXTRACT_BLOCKS if not self.instruction else PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION
|
||||
for variable in variable_values:
|
||||
prompt_with_variables = prompt_with_variables.replace(
|
||||
"{" + variable + "}", variable_values[variable]
|
||||
)
|
||||
|
||||
response = perform_completion_with_backoff(self.provider, prompt_with_variables, self.api_token)
|
||||
response = perform_completion_with_backoff(self.provider, prompt_with_variables, self.api_token) # , json_response=self.extract_type == "schema")
|
||||
try:
|
||||
blocks = extract_xml_data(["blocks"], response.choices[0].message.content)['blocks']
|
||||
blocks = json.loads(blocks)
|
||||
for block in blocks:
|
||||
block['error'] = False
|
||||
except Exception as e:
|
||||
print("Error extracting blocks:", str(e))
|
||||
parsed, unparsed = split_and_parse_json_objects(response.choices[0].message.content)
|
||||
blocks = parsed
|
||||
if unparsed:
|
||||
@@ -111,52 +131,98 @@ class LLMExtractionStrategy(ExtractionStrategy):
|
||||
print("[LOG] Extracted", len(blocks), "blocks from URL:", url, "block index:", ix)
|
||||
return blocks
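With the new schema and extraction_type parameters, passing a schema switches the prompt to PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION. A construction sketch; the provider string and schema are illustrative, and api_token falls back to PROVIDER_MODELS or the OPENAI_API_KEY environment variable when omitted:

strategy = LLMExtractionStrategy(
    provider="openai/gpt-4o-mini",   # illustrative provider id
    instruction="Extract every product name and price.",
    schema={
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "price": {"type": "string"},
        },
    },
)
# schema != None forces strategy.extract_type == "schema"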
|
||||
|
||||
def _merge(self, documents):
|
||||
def _merge(self, documents, chunk_token_threshold, overlap):
|
||||
chunks = []
|
||||
sections = []
|
||||
total_tokens = 0
|
||||
|
||||
# Calculate the total tokens across all documents
|
||||
for document in documents:
|
||||
total_tokens += len(document.split(' ')) * self.word_token_rate
|
||||
|
||||
# Calculate the number of sections needed
|
||||
num_sections = math.floor(total_tokens / chunk_token_threshold)
|
||||
if num_sections < 1:
|
||||
num_sections = 1 # Ensure there is at least one section
|
||||
adjusted_chunk_threshold = total_tokens / num_sections
|
||||
|
||||
total_token_so_far = 0
|
||||
current_chunk = []
|
||||
|
||||
for document in documents:
|
||||
if total_token_so_far < CHUNK_TOKEN_THRESHOLD:
|
||||
chunk = document.split(' ')
|
||||
total_token_so_far += len(chunk) * 1.3
|
||||
chunks.append(document)
|
||||
else:
|
||||
sections.append('\n\n'.join(chunks))
|
||||
chunks = [document]
|
||||
total_token_so_far = len(document.split(' ')) * 1.3
|
||||
|
||||
if chunks:
|
||||
sections.append('\n\n'.join(chunks))
|
||||
tokens = document.split(' ')
|
||||
token_count = len(tokens) * self.word_token_rate
|
||||
|
||||
return sections
|
||||
if total_token_so_far + token_count <= adjusted_chunk_threshold:
|
||||
current_chunk.extend(tokens)
|
||||
total_token_so_far += token_count
|
||||
else:
|
||||
# Ensure to handle the last section properly
|
||||
if len(sections) == num_sections - 1:
|
||||
current_chunk.extend(tokens)
|
||||
continue
|
||||
|
||||
# Add overlap if specified
|
||||
if overlap > 0 and current_chunk:
|
||||
overlap_tokens = current_chunk[-overlap:]
|
||||
current_chunk.extend(overlap_tokens)
|
||||
|
||||
sections.append(' '.join(current_chunk))
|
||||
current_chunk = tokens
|
||||
total_token_so_far = token_count
|
||||
|
||||
# Add the last chunk
|
||||
if current_chunk:
|
||||
sections.append(' '.join(current_chunk))
|
||||
|
||||
return sections
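As a quick check of the arithmetic that run() feeds into _merge below: the overlap is a fixed fraction of the chunk threshold, so with illustrative values (these defaults are not shown in this diff):

chunk_token_threshold = 4096
overlap_rate = 0.1                                   # assumed value, for illustration only
overlap = int(chunk_token_threshold * overlap_rate)  # 409 tokens repeated at each section boundary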
|
||||
|
||||
|
||||
def run(self, url: str, sections: List[str]) -> List[Dict[str, Any]]:
|
||||
"""
|
||||
Process sections sequentially with a delay for rate limiting issues, specifically for LLMExtractionStrategy.
|
||||
"""
|
||||
|
||||
merged_sections = self._merge(sections)
|
||||
merged_sections = self._merge(
|
||||
sections, self.chunk_token_threshold,
|
||||
overlap= int(self.chunk_token_threshold * self.overlap_rate)
|
||||
)
|
||||
extracted_content = []
|
||||
if self.provider.startswith("groq/"):
|
||||
# Sequential processing with a delay
|
||||
for ix, section in enumerate(merged_sections):
|
||||
extracted_content.extend(self.extract(ix, url, section))
|
||||
extract_func = partial(self.extract, url)
|
||||
extracted_content.extend(extract_func(ix, sanitize_input_encode(section)))
|
||||
time.sleep(0.5) # 500 ms delay between each processing
|
||||
else:
|
||||
# Parallel processing using ThreadPoolExecutor
|
||||
# extract_func = partial(self.extract, url)
|
||||
# for ix, section in enumerate(merged_sections):
|
||||
# extracted_content.append(extract_func(ix, section))
|
||||
|
||||
with ThreadPoolExecutor(max_workers=4) as executor:
|
||||
extract_func = partial(self.extract, url)
|
||||
futures = [executor.submit(extract_func, ix, section) for ix, section in enumerate(merged_sections)]
|
||||
futures = [executor.submit(extract_func, ix, sanitize_input_encode(section)) for ix, section in enumerate(merged_sections)]
|
||||
|
||||
for future in as_completed(futures):
|
||||
extracted_content.extend(future.result())
|
||||
try:
|
||||
extracted_content.extend(future.result())
|
||||
except Exception as e:
|
||||
if self.verbose:
|
||||
print(f"Error in thread execution: {e}")
|
||||
# Add error information to extracted_content
|
||||
extracted_content.append({
|
||||
"index": 0,
|
||||
"error": True,
|
||||
"tags": ["error"],
|
||||
"content": str(e)
|
||||
})
|
||||
|
||||
|
||||
return extracted_content
|
||||
|
||||
class CosineStrategy(ExtractionStrategy):
|
||||
def __init__(self, semantic_filter = None, word_count_threshold=10, max_dist=0.2, linkage_method='ward', top_k=3, model_name = 'BAAI/bge-small-en-v1.5', **kwargs):
|
||||
def __init__(self, semantic_filter = None, word_count_threshold=10, max_dist=0.2, linkage_method='ward', top_k=3, model_name = 'sentence-transformers/all-MiniLM-L6-v2', sim_threshold = 0.3, **kwargs):
|
||||
"""
|
||||
Initialize the strategy with clustering parameters.
|
||||
|
||||
@@ -168,53 +234,109 @@ class CosineStrategy(ExtractionStrategy):
|
||||
"""
|
||||
super().__init__()
|
||||
|
||||
import numpy as np
|
||||
|
||||
self.semantic_filter = semantic_filter
|
||||
self.word_count_threshold = word_count_threshold
|
||||
self.max_dist = max_dist
|
||||
self.linkage_method = linkage_method
|
||||
self.top_k = top_k
|
||||
self.sim_threshold = sim_threshold
|
||||
self.timer = time.time()
|
||||
self.verbose = kwargs.get("verbose", False)
|
||||
|
||||
self.buffer_embeddings = np.array([])
|
||||
self.get_embedding_method = "direct"
|
||||
|
||||
self.device = get_device()
|
||||
import torch
|
||||
self.device = torch.device('cpu')
|
||||
|
||||
self.default_batch_size = calculate_batch_size(self.device)
|
||||
|
||||
if model_name == "bert-base-uncased":
|
||||
self.tokenizer, self.model = load_bert_base_uncased()
|
||||
elif model_name == "BAAI/bge-small-en-v1.5":
|
||||
self.tokenizer, self.model = load_bge_small_en_v1_5()
|
||||
if self.verbose:
|
||||
print(f"[LOG] Loading Extraction Model for {self.device.type} device.")
|
||||
|
||||
self.nlp = load_text_multilabel_classifier()
|
||||
# if False and self.device.type == "cpu":
|
||||
# self.model = load_onnx_all_MiniLM_l6_v2()
|
||||
# self.tokenizer = self.model.tokenizer
|
||||
# self.get_embedding_method = "direct"
|
||||
# else:
|
||||
|
||||
self.tokenizer, self.model = load_bge_small_en_v1_5()
|
||||
self.model.to(self.device)
|
||||
self.model.eval()
|
||||
|
||||
self.get_embedding_method = "batch"
|
||||
|
||||
self.buffer_embeddings = np.array([])
|
||||
|
||||
# if model_name == "bert-base-uncased":
|
||||
# self.tokenizer, self.model = load_bert_base_uncased()
|
||||
# self.model.eval() # Ensure the model is in evaluation mode
|
||||
# self.get_embedding_method = "batch"
|
||||
# elif model_name == "BAAI/bge-small-en-v1.5":
|
||||
# self.tokenizer, self.model = load_bge_small_en_v1_5()
|
||||
# self.model.eval() # Ensure the model is in evaluation mode
|
||||
# self.get_embedding_method = "batch"
|
||||
# elif model_name == "sentence-transformers/all-MiniLM-L6-v2":
|
||||
# self.model = load_onnx_all_MiniLM_l6_v2()
|
||||
# self.tokenizer = self.model.tokenizer
|
||||
# self.get_embedding_method = "direct"
|
||||
|
||||
|
||||
if self.verbose:
|
||||
print(f"[LOG] Loading Multilabel Classifier for {self.device.type} device.")
|
||||
|
||||
self.nlp, _ = load_text_multilabel_classifier()
|
||||
# self.default_batch_size = 16 if self.device.type == 'cpu' else 64
|
||||
|
||||
if self.verbose:
|
||||
print(f"[LOG] Model loaded {model_name}, models/reuters, took " + str(time.time() - self.timer) + " seconds")
|
||||
|
||||
def filter_documents_embeddings(self, documents: List[str], semantic_filter: str, threshold: float = 0.5) -> List[str]:
|
||||
def filter_documents_embeddings(self, documents: List[str], semantic_filter: str, at_least_k: int = 20) -> List[str]:
|
||||
"""
|
||||
Filter documents based on the cosine similarity of their embeddings with the semantic_filter embedding.
|
||||
Filter and sort documents based on the cosine similarity of their embeddings with the semantic_filter embedding.
|
||||
|
||||
:param documents: List of text chunks (documents).
|
||||
:param semantic_filter: A string containing the keywords for filtering.
|
||||
:param threshold: Cosine similarity threshold for filtering documents.
|
||||
:return: Filtered list of documents.
|
||||
:param at_least_k: Minimum number of documents to return.
|
||||
:return: List of filtered documents, ensuring at least `at_least_k` documents.
|
||||
"""
|
||||
from sklearn.metrics.pairwise import cosine_similarity
|
||||
|
||||
if not semantic_filter:
|
||||
return documents
|
||||
|
||||
if len(documents) < at_least_k:
|
||||
at_least_k = len(documents) // 2
|
||||
|
||||
from sklearn.metrics.pairwise import cosine_similarity
|
||||
|
||||
# Compute embedding for the keyword filter
|
||||
query_embedding = self.get_embeddings([semantic_filter])[0]
|
||||
|
||||
# Compute embeddings for the docu ments
|
||||
# Compute embeddings for the documents
|
||||
document_embeddings = self.get_embeddings(documents)
|
||||
|
||||
# Calculate cosine similarity between the query embedding and document embeddings
|
||||
similarities = cosine_similarity([query_embedding], document_embeddings).flatten()
|
||||
|
||||
# Filter documents based on the similarity threshold
|
||||
filtered_docs = [doc for doc, sim in zip(documents, similarities) if sim >= threshold]
|
||||
filtered_docs = [(doc, sim) for doc, sim in zip(documents, similarities) if sim >= self.sim_threshold]
|
||||
|
||||
return filtered_docs
|
||||
|
||||
def get_embeddings(self, sentences: List[str], bypass_buffer=True):
|
||||
# If the number of filtered documents is less than at_least_k, sort remaining documents by similarity
|
||||
if len(filtered_docs) < at_least_k:
|
||||
remaining_docs = [(doc, sim) for doc, sim in zip(documents, similarities) if sim < self.sim_threshold]
|
||||
remaining_docs.sort(key=lambda x: x[1], reverse=True)
|
||||
filtered_docs.extend(remaining_docs[:at_least_k - len(filtered_docs)])
|
||||
|
||||
# Extract the document texts from the tuples
|
||||
filtered_docs = [doc for doc, _ in filtered_docs]
|
||||
|
||||
return filtered_docs[:at_least_k]
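A construction sketch for the revised CosineStrategy, using the new sim_threshold and the sentence-transformers default model named above (the filter text is illustrative):

strategy = CosineStrategy(
    semantic_filter="machine learning frameworks",
    word_count_threshold=10,
    sim_threshold=0.3,
    top_k=3,
    verbose=True,
)
# filter_documents_embeddings keeps chunks with cosine similarity >= sim_threshold and,
# when too few pass, tops the list up to at_least_k in descending similarity order.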
|
||||
|
||||
def get_embeddings(self, sentences: List[str], batch_size=None, bypass_buffer=False):
|
||||
"""
|
||||
Get BERT embeddings for a list of sentences.
|
||||
|
||||
@@ -224,19 +346,42 @@ class CosineStrategy(ExtractionStrategy):
|
||||
# if self.buffer_embeddings.any() and not bypass_buffer:
|
||||
# return self.buffer_embeddings
|
||||
|
||||
import torch
|
||||
# Tokenize sentences and convert to tensor
|
||||
encoded_input = self.tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
|
||||
# Compute token embeddings
|
||||
with torch.no_grad():
|
||||
model_output = self.model(**encoded_input)
|
||||
if self.device.type in [ "cpu", "gpu", "cuda", "mps"]:
|
||||
import torch
|
||||
# Tokenize sentences and convert to tensor
|
||||
if batch_size is None:
|
||||
batch_size = self.default_batch_size
|
||||
|
||||
all_embeddings = []
|
||||
for i in range(0, len(sentences), batch_size):
|
||||
batch_sentences = sentences[i:i + batch_size]
|
||||
encoded_input = self.tokenizer(batch_sentences, padding=True, truncation=True, return_tensors='pt')
|
||||
encoded_input = {key: tensor.to(self.device) for key, tensor in encoded_input.items()}
|
||||
|
||||
# Ensure no gradients are calculated
|
||||
with torch.no_grad():
|
||||
model_output = self.model(**encoded_input)
|
||||
|
||||
# Get embeddings from the last hidden state (mean pooling)
|
||||
embeddings = model_output.last_hidden_state.mean(dim=1).cpu().numpy()
|
||||
all_embeddings.append(embeddings)
|
||||
|
||||
# Get embeddings from the last hidden state (mean pooling)
|
||||
embeddings = model_output.last_hidden_state.mean(1)
|
||||
self.buffer_embeddings = embeddings.numpy()
|
||||
return embeddings.numpy()
|
||||
self.buffer_embeddings = np.vstack(all_embeddings)
|
||||
elif self.device.type == "cpu":
|
||||
# self.buffer_embeddings = self.model(sentences)
|
||||
if batch_size is None:
|
||||
batch_size = self.default_batch_size
|
||||
|
||||
all_embeddings = []
|
||||
for i in range(0, len(sentences), batch_size):
|
||||
batch_sentences = sentences[i:i + batch_size]
|
||||
embeddings = self.model(batch_sentences)
|
||||
all_embeddings.append(embeddings)
|
||||
|
||||
self.buffer_embeddings = np.vstack(all_embeddings)
|
||||
return self.buffer_embeddings
|
||||
|
||||
def hierarchical_clustering(self, sentences: List[str]):
|
||||
def hierarchical_clustering(self, sentences: List[str], embeddings = None):
|
||||
"""
|
||||
Perform hierarchical clustering on sentences and return cluster labels.
|
||||
|
||||
@@ -247,7 +392,7 @@ class CosineStrategy(ExtractionStrategy):
|
||||
from scipy.cluster.hierarchy import linkage, fcluster
|
||||
from scipy.spatial.distance import pdist
|
||||
self.timer = time.time()
|
||||
embeddings = self.get_embeddings(sentences, bypass_buffer=False)
|
||||
embeddings = self.get_embeddings(sentences, bypass_buffer=True)
|
||||
# print(f"[LOG] 🚀 Embeddings computed in {time.time() - self.timer:.2f} seconds")
|
||||
# Compute pairwise cosine distances
|
||||
distance_matrix = pdist(embeddings, 'cosine')
|
||||
@@ -311,20 +456,33 @@ class CosineStrategy(ExtractionStrategy):
|
||||
# Convert filtered clusters to a sorted list of dictionaries
|
||||
cluster_list = [{"index": int(idx), "tags" : [], "content": " ".join(filtered_clusters[idx])} for idx in sorted(filtered_clusters)]
|
||||
|
||||
labels = self.nlp([cluster['content'] for cluster in cluster_list])
|
||||
if self.verbose:
|
||||
print(f"[LOG] 🚀 Assign tags using {self.device}")
|
||||
|
||||
for cluster, label in zip(cluster_list, labels):
|
||||
cluster['tags'] = label
|
||||
if self.device.type in ["gpu", "cuda", "mps", "cpu"]:
|
||||
labels = self.nlp([cluster['content'] for cluster in cluster_list])
|
||||
|
||||
for cluster, label in zip(cluster_list, labels):
|
||||
cluster['tags'] = label
|
||||
# elif self.device.type == "cpu":
|
||||
# # Process the text with the loaded model
|
||||
# texts = [cluster['content'] for cluster in cluster_list]
|
||||
# # Batch process texts
|
||||
# docs = self.nlp.pipe(texts, disable=["tagger", "parser", "ner", "lemmatizer"])
|
||||
|
||||
# Process the text with the loaded model
|
||||
# for cluster in cluster_list:
|
||||
# cluster['tags'] = self.nlp(cluster['content'])[0]['label']
|
||||
# doc = self.nlp(cluster['content'])
|
||||
# tok_k = self.top_k
|
||||
# top_categories = sorted(doc.cats.items(), key=lambda x: x[1], reverse=True)[:tok_k]
|
||||
# cluster['tags'] = [cat for cat, _ in top_categories]
|
||||
# for doc, cluster in zip(docs, cluster_list):
|
||||
# tok_k = self.top_k
|
||||
# top_categories = sorted(doc.cats.items(), key=lambda x: x[1], reverse=True)[:tok_k]
|
||||
# cluster['tags'] = [cat for cat, _ in top_categories]
|
||||
|
||||
# for cluster in cluster_list:
|
||||
# doc = self.nlp(cluster['content'])
|
||||
# tok_k = self.top_k
|
||||
# top_categories = sorted(doc.cats.items(), key=lambda x: x[1], reverse=True)[:tok_k]
|
||||
# cluster['tags'] = [cat for cat, _ in top_categories]
|
||||
|
||||
# print(f"[LOG] 🚀 Categorization done in {time.time() - t:.2f} seconds")
|
||||
if self.verbose:
|
||||
print(f"[LOG] 🚀 Categorization done in {time.time() - t:.2f} seconds")
|
||||
|
||||
return cluster_list
|
||||
|
||||
@@ -463,4 +621,4 @@ class ContentSummarizationStrategy(ExtractionStrategy):
|
||||
|
||||
# Sort summaries by the original section index to maintain order
|
||||
summaries.sort(key=lambda x: x[0])
|
||||
return [summary for _, summary in summaries]
|
||||
return [summary for _, summary in summaries]
|
||||
|
||||
@@ -2,9 +2,59 @@ from functools import lru_cache
|
||||
from pathlib import Path
|
||||
import subprocess, os
|
||||
import shutil
|
||||
from crawl4ai.config import MODEL_REPO_BRANCH
|
||||
import tarfile
|
||||
from .model_loader import *
|
||||
import argparse
|
||||
import urllib.request
|
||||
from crawl4ai.config import MODEL_REPO_BRANCH
|
||||
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
|
||||
|
||||
@lru_cache()
|
||||
def get_available_memory(device):
|
||||
import torch
|
||||
if device.type == 'cuda':
|
||||
return torch.cuda.get_device_properties(device).total_memory
|
||||
elif device.type == 'mps':
|
||||
return 48 * 1024 ** 3 # Assume 48GB of unified memory for MPS
|
||||
else:
|
||||
return 0
|
||||
|
||||
@lru_cache()
|
||||
def calculate_batch_size(device):
|
||||
available_memory = get_available_memory(device)
|
||||
|
||||
if device.type == 'cpu':
|
||||
return 16
|
||||
elif device.type in ['cuda', 'mps']:
|
||||
# Adjust these thresholds based on your model size and available memory
|
||||
if available_memory >= 31 * 1024 ** 3: # > 32GB
|
||||
return 256
|
||||
elif available_memory >= 15 * 1024 ** 3: # > 16GB to 32GB
|
||||
return 128
|
||||
elif available_memory >= 8 * 1024 ** 3: # 8GB to 16GB
|
||||
return 64
|
||||
else:
|
||||
return 32
|
||||
else:
|
||||
return 16 # Default batch size
|
||||
|
||||
@lru_cache()
|
||||
def get_device():
|
||||
import torch
|
||||
if torch.cuda.is_available():
|
||||
device = torch.device('cuda')
|
||||
elif torch.backends.mps.is_available():
|
||||
device = torch.device('mps')
|
||||
else:
|
||||
device = torch.device('cpu')
|
||||
return device
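These two helpers are what the extraction strategies use to size their embedding batches; a quick look at how they combine (output depends on the machine running it):

device = get_device()                     # cuda, mps, or cpu
batch_size = calculate_batch_size(device)
print(device.type, batch_size)            # e.g. "cpu 16" or "cuda 128"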
|
||||
|
||||
def set_model_device(model):
|
||||
device = get_device()
|
||||
model.to(device)
|
||||
return model, device
|
||||
|
||||
@lru_cache()
|
||||
def get_home_folder():
|
||||
home_folder = os.path.join(Path.home(), ".crawl4ai")
|
||||
os.makedirs(home_folder, exist_ok=True)
|
||||
@@ -17,6 +67,8 @@ def load_bert_base_uncased():
|
||||
from transformers import BertTokenizer, BertModel, AutoTokenizer, AutoModel
|
||||
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', resume_download=None)
|
||||
model = BertModel.from_pretrained('bert-base-uncased', resume_download=None)
|
||||
model.eval()
|
||||
model, device = set_model_device(model)
|
||||
return tokenizer, model
|
||||
|
||||
@lru_cache()
|
||||
@@ -25,17 +77,62 @@ def load_bge_small_en_v1_5():
|
||||
tokenizer = AutoTokenizer.from_pretrained('BAAI/bge-small-en-v1.5', resume_download=None)
|
||||
model = AutoModel.from_pretrained('BAAI/bge-small-en-v1.5', resume_download=None)
|
||||
model.eval()
|
||||
model, device = set_model_device(model)
|
||||
return tokenizer, model
|
||||
|
||||
@lru_cache()
|
||||
def load_onnx_all_MiniLM_l6_v2():
|
||||
from crawl4ai.onnx_embedding import DefaultEmbeddingModel
|
||||
|
||||
model_path = "models/onnx.tar.gz"
|
||||
model_url = "https://unclecode-files.s3.us-west-2.amazonaws.com/onnx.tar.gz"
|
||||
__location__ = os.path.realpath(
|
||||
os.path.join(os.getcwd(), os.path.dirname(__file__)))
|
||||
download_path = os.path.join(__location__, model_path)
|
||||
onnx_dir = os.path.join(__location__, "models/onnx")
|
||||
|
||||
# Create the models directory if it does not exist
|
||||
os.makedirs(os.path.dirname(download_path), exist_ok=True)
|
||||
|
||||
# Download the tar.gz file if it does not exist
|
||||
if not os.path.exists(download_path):
|
||||
def download_with_progress(url, filename):
|
||||
def reporthook(block_num, block_size, total_size):
|
||||
downloaded = block_num * block_size
|
||||
percentage = 100 * downloaded / total_size
|
||||
if downloaded < total_size:
|
||||
print(f"\rDownloading: {percentage:.2f}% ({downloaded / (1024 * 1024):.2f} MB of {total_size / (1024 * 1024):.2f} MB)", end='')
|
||||
else:
|
||||
print("\rDownload complete!")
|
||||
|
||||
urllib.request.urlretrieve(url, filename, reporthook)
|
||||
|
||||
download_with_progress(model_url, download_path)
|
||||
|
||||
# Extract the tar.gz file if the onnx directory does not exist
|
||||
if not os.path.exists(onnx_dir):
|
||||
with tarfile.open(download_path, "r:gz") as tar:
|
||||
tar.extractall(path=os.path.join(__location__, "models"))
|
||||
|
||||
# remove the tar.gz file
|
||||
os.remove(download_path)
|
||||
|
||||
|
||||
|
||||
model = DefaultEmbeddingModel()
|
||||
return model
|
||||
|
||||
@lru_cache()
|
||||
def load_text_classifier():
|
||||
from transformers import AutoTokenizer, AutoModelForSequenceClassification
|
||||
from transformers import pipeline
|
||||
import torch
|
||||
|
||||
tokenizer = AutoTokenizer.from_pretrained("dstefa/roberta-base_topic_classification_nyt_news")
|
||||
model = AutoModelForSequenceClassification.from_pretrained("dstefa/roberta-base_topic_classification_nyt_news")
|
||||
model.eval()
|
||||
model, device = set_model_device(model)
|
||||
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
|
||||
|
||||
return pipe
|
||||
|
||||
@lru_cache()
|
||||
@@ -45,21 +142,23 @@ def load_text_multilabel_classifier():
|
||||
from scipy.special import expit
|
||||
import torch
|
||||
|
||||
# # Check for available device: CUDA, MPS (for Apple Silicon), or CPU
|
||||
# if torch.cuda.is_available():
|
||||
# device = torch.device("cuda")
|
||||
# elif torch.backends.mps.is_available():
|
||||
# device = torch.device("mps")
|
||||
# else:
|
||||
# device = torch.device("cpu")
|
||||
# # return load_spacy_model(), torch.device("cpu")
|
||||
|
||||
|
||||
MODEL = "cardiffnlp/tweet-topic-21-multi"
|
||||
tokenizer = AutoTokenizer.from_pretrained(MODEL, resume_download=None)
|
||||
model = AutoModelForSequenceClassification.from_pretrained(MODEL, resume_download=None)
|
||||
model.eval()
|
||||
model, device = set_model_device(model)
|
||||
class_mapping = model.config.id2label
|
||||
|
||||
# Check for available device: CUDA, MPS (for Apple Silicon), or CPU
|
||||
if torch.cuda.is_available():
|
||||
device = torch.device("cuda")
|
||||
elif torch.backends.mps.is_available():
|
||||
device = torch.device("mps")
|
||||
else:
|
||||
device = torch.device("cpu")
|
||||
|
||||
model.to(device)
|
||||
|
||||
def _classifier(texts, threshold=0.5, max_length=64):
|
||||
tokens = tokenizer(texts, return_tensors='pt', padding=True, truncation=True, max_length=max_length)
|
||||
tokens = {key: val.to(device) for key, val in tokens.items()} # Move tokens to the selected device
|
||||
@@ -78,7 +177,7 @@ def load_text_multilabel_classifier():
|
||||
|
||||
return batch_labels
|
||||
|
||||
return _classifier
|
||||
return _classifier, device
|
||||
|
||||
@lru_cache()
|
||||
def load_nltk_punkt():
|
||||
@@ -89,6 +188,68 @@ def load_nltk_punkt():
|
||||
nltk.download('punkt')
|
||||
return nltk.data.find('tokenizers/punkt')
|
||||
|
||||
|
||||
@lru_cache()
|
||||
def load_spacy_model():
|
||||
import spacy
|
||||
name = "models/reuters"
|
||||
home_folder = get_home_folder()
|
||||
model_folder = Path(home_folder) / name
|
||||
|
||||
# Check if the model directory already exists
|
||||
if not (model_folder.exists() and any(model_folder.iterdir())):
|
||||
repo_url = "https://github.com/unclecode/crawl4ai.git"
|
||||
branch = MODEL_REPO_BRANCH
|
||||
repo_folder = Path(home_folder) / "crawl4ai"
|
||||
|
||||
print("[LOG] ⏬ Downloading Spacy model for the first time...")
|
||||
|
||||
# Remove existing repo folder if it exists
|
||||
if repo_folder.exists():
|
||||
try:
|
||||
shutil.rmtree(repo_folder)
|
||||
if model_folder.exists():
|
||||
shutil.rmtree(model_folder)
|
||||
except PermissionError:
|
||||
print("[WARNING] Unable to remove existing folders. Please manually delete the following folders and try again:")
|
||||
print(f"- {repo_folder}")
|
||||
print(f"- {model_folder}")
|
||||
return None
|
||||
|
||||
try:
|
||||
# Clone the repository
|
||||
subprocess.run(
|
||||
["git", "clone", "-b", branch, repo_url, str(repo_folder)],
|
||||
stdout=subprocess.DEVNULL,
|
||||
stderr=subprocess.DEVNULL,
|
||||
check=True
|
||||
)
|
||||
|
||||
# Create the models directory if it doesn't exist
|
||||
models_folder = Path(home_folder) / "models"
|
||||
models_folder.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Copy the reuters model folder to the models directory
|
||||
source_folder = repo_folder / "models" / "reuters"
|
||||
shutil.copytree(source_folder, model_folder)
|
||||
|
||||
# Remove the cloned repository
|
||||
shutil.rmtree(repo_folder)
|
||||
|
||||
print("[LOG] ✅ Spacy Model downloaded successfully")
|
||||
except subprocess.CalledProcessError as e:
|
||||
print(f"An error occurred while cloning the repository: {e}")
|
||||
return None
|
||||
except Exception as e:
|
||||
print(f"An error occurred: {e}")
|
||||
return None
|
||||
|
||||
try:
|
||||
return spacy.load(str(model_folder))
|
||||
except Exception as e:
|
||||
print(f"Error loading spacy model: {e}")
|
||||
return None
|
||||
|
||||
def download_all_models(remove_existing=False):
|
||||
"""Download all models required for Crawl4AI."""
|
||||
if remove_existing:
|
||||
@@ -104,12 +265,15 @@ def download_all_models(remove_existing=False):
|
||||
print("[LOG] Existing models removed.")
|
||||
|
||||
# Load each model to trigger download
|
||||
print("[LOG] Downloading BERT Base Uncased...")
|
||||
load_bert_base_uncased()
|
||||
print("[LOG] Downloading BGE Small EN v1.5...")
|
||||
load_bge_small_en_v1_5()
|
||||
# print("[LOG] Downloading BERT Base Uncased...")
|
||||
# load_bert_base_uncased()
|
||||
# print("[LOG] Downloading BGE Small EN v1.5...")
|
||||
# load_bge_small_en_v1_5()
|
||||
# print("[LOG] Downloading ONNX model...")
|
||||
# load_onnx_all_MiniLM_l6_v2()
|
||||
print("[LOG] Downloading text classifier...")
|
||||
load_text_multilabel_classifier
|
||||
_, device = load_text_multilabel_classifier()
|
||||
print(f"[LOG] Text classifier loaded on {device}")
|
||||
print("[LOG] Downloading custom NLTK Punkt model...")
|
||||
load_nltk_punkt()
|
||||
print("[LOG] ✅ All models downloaded successfully.")
|
||||
@@ -124,4 +288,4 @@ def main():
|
||||
download_all_models(remove_existing=args.remove_existing)
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
main()
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
from pydantic import BaseModel, HttpUrl
|
||||
from typing import List
|
||||
from typing import List, Dict, Optional
|
||||
|
||||
class UrlModel(BaseModel):
|
||||
url: HttpUrl
|
||||
@@ -9,8 +9,11 @@ class CrawlResult(BaseModel):
|
||||
url: str
|
||||
html: str
|
||||
success: bool
|
||||
cleaned_html: str = None
|
||||
markdown: str = None
|
||||
extracted_content: str = None
|
||||
metadata: dict = None
|
||||
error_message: str = None
|
||||
cleaned_html: Optional[str] = None
|
||||
media: Dict[str, List[Dict]] = {}
|
||||
links: Dict[str, List[Dict]] = {}
|
||||
screenshot: Optional[str] = None
|
||||
markdown: Optional[str] = None
|
||||
extracted_content: Optional[str] = None
|
||||
metadata: Optional[dict] = None
|
||||
error_message: Optional[str] = None
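The extended CrawlResult keeps url, html, and success required while everything else is optional; media and links default to empty dicts. A sketch of reading the new fields (values illustrative):

result = CrawlResult(
    url="https://example.com", html="<html>...</html>", success=True,
    media={"images": [{"src": "/logo.png", "alt": "logo", "type": "image"}]},
    links={"internal": [], "external": []},
)
for image in result.media.get("images", []):
    print(image["src"], image["alt"])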
|
||||
25
crawl4ai/models/onnx/config.json
Normal file
@@ -0,0 +1,25 @@
|
||||
{
|
||||
"_name_or_path": "sentence-transformers/all-MiniLM-L6-v2",
|
||||
"architectures": [
|
||||
"BertModel"
|
||||
],
|
||||
"attention_probs_dropout_prob": 0.1,
|
||||
"classifier_dropout": null,
|
||||
"gradient_checkpointing": false,
|
||||
"hidden_act": "gelu",
|
||||
"hidden_dropout_prob": 0.1,
|
||||
"hidden_size": 384,
|
||||
"initializer_range": 0.02,
|
||||
"intermediate_size": 1536,
|
||||
"layer_norm_eps": 1e-12,
|
||||
"max_position_embeddings": 512,
|
||||
"model_type": "bert",
|
||||
"num_attention_heads": 12,
|
||||
"num_hidden_layers": 6,
|
||||
"pad_token_id": 0,
|
||||
"position_embedding_type": "absolute",
|
||||
"transformers_version": "4.27.4",
|
||||
"type_vocab_size": 2,
|
||||
"use_cache": true,
|
||||
"vocab_size": 30522
|
||||
}
|
||||
BIN
crawl4ai/models/onnx/model.onnx
Normal file
7
crawl4ai/models/onnx/special_tokens_map.json
Normal file
@@ -0,0 +1,7 @@
|
||||
{
|
||||
"cls_token": "[CLS]",
|
||||
"mask_token": "[MASK]",
|
||||
"pad_token": "[PAD]",
|
||||
"sep_token": "[SEP]",
|
||||
"unk_token": "[UNK]"
|
||||
}
|
||||
30686
crawl4ai/models/onnx/tokenizer.json
Normal file
15
crawl4ai/models/onnx/tokenizer_config.json
Normal file
@@ -0,0 +1,15 @@
|
||||
{
|
||||
"cls_token": "[CLS]",
|
||||
"do_basic_tokenize": true,
|
||||
"do_lower_case": true,
|
||||
"mask_token": "[MASK]",
|
||||
"model_max_length": 512,
|
||||
"never_split": null,
|
||||
"pad_token": "[PAD]",
|
||||
"sep_token": "[SEP]",
|
||||
"special_tokens_map_file": "/Users/hammad/.cache/huggingface/hub/models--sentence-transformers--all-MiniLM-L6-v2/snapshots/7dbbc90392e2f80f3d3c277d6e90027e55de9125/special_tokens_map.json",
|
||||
"strip_accents": null,
|
||||
"tokenize_chinese_chars": true,
|
||||
"tokenizer_class": "BertTokenizer",
|
||||
"unk_token": "[UNK]"
|
||||
}
|
||||
30522
crawl4ai/models/onnx/vocab.txt
Normal file
50
crawl4ai/onnx_embedding.py
Normal file
@@ -0,0 +1,50 @@
|
||||
# A dependency-light way to run the onnx model
|
||||
|
||||
|
||||
import numpy as np
|
||||
from typing import List
|
||||
import os
|
||||
|
||||
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
|
||||
MODEL_ID = "sentence-transformers/all-MiniLM-L6-v2"
|
||||
|
||||
def normalize(v):
|
||||
norm = np.linalg.norm(v, axis=1)
|
||||
norm[norm == 0] = 1e-12
|
||||
return v / norm[:, np.newaxis]
|
||||
|
||||
# Sample implementation of the default sentence-transformers model using ONNX
|
||||
class DefaultEmbeddingModel():
|
||||
|
||||
def __init__(self):
|
||||
from tokenizers import Tokenizer
|
||||
import onnxruntime as ort
|
||||
# max_seq_length = 256, for some reason sentence-transformers uses 256 even though the HF config has a max length of 128
|
||||
# https://github.com/UKPLab/sentence-transformers/blob/3e1929fddef16df94f8bc6e3b10598a98f46e62d/docs/_static/html/models_en_sentence_embeddings.html#LL480
|
||||
self.tokenizer = Tokenizer.from_file(os.path.join(__location__, "models/onnx/tokenizer.json"))
|
||||
self.tokenizer.enable_truncation(max_length=256)
|
||||
self.tokenizer.enable_padding(pad_id=0, pad_token="[PAD]", length=256)
|
||||
self.model = ort.InferenceSession(os.path.join(__location__,"models/onnx/model.onnx"))
|
||||
|
||||
|
||||
def __call__(self, documents: List[str], batch_size: int = 32):
|
||||
all_embeddings = []
|
||||
for i in range(0, len(documents), batch_size):
|
||||
batch = documents[i:i + batch_size]
|
||||
encoded = [self.tokenizer.encode(d) for d in batch]
|
||||
input_ids = np.array([e.ids for e in encoded])
|
||||
attention_mask = np.array([e.attention_mask for e in encoded])
|
||||
onnx_input = {
|
||||
"input_ids": np.array(input_ids, dtype=np.int64),
|
||||
"attention_mask": np.array(attention_mask, dtype=np.int64),
|
||||
"token_type_ids": np.array([np.zeros(len(e), dtype=np.int64) for e in input_ids], dtype=np.int64),
|
||||
}
|
||||
model_output = self.model.run(None, onnx_input)
|
||||
last_hidden_state = model_output[0]
|
||||
# Perform mean pooling with attention weighting
|
||||
input_mask_expanded = np.broadcast_to(np.expand_dims(attention_mask, -1), last_hidden_state.shape)
|
||||
embeddings = np.sum(last_hidden_state * input_mask_expanded, 1) / np.clip(input_mask_expanded.sum(1), a_min=1e-9, a_max=None)
|
||||
embeddings = normalize(embeddings).astype(np.float32)
|
||||
all_embeddings.append(embeddings)
|
||||
return np.concatenate(all_embeddings)
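A minimal usage sketch of the ONNX embedding class, assuming the model files have already been placed in crawl4ai/models/onnx (as load_onnx_all_MiniLM_l6_v2 arranges):

model = DefaultEmbeddingModel()
embeddings = model(["The cat sat on the mat.", "Crawl4AI extracts web content."])
print(embeddings.shape)                             # (2, 384) for all-MiniLM-L6-v2
similarity = float(embeddings[0] @ embeddings[1])   # vectors are L2-normalized, so the dot product is cosine similarity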
|
||||
|
||||
@@ -164,4 +164,41 @@ Please provide your output within <blocks> tags, like this:
|
||||
|
||||
**Make sure to follow the user instruction and extract blocks that align with it.**
|
||||
|
||||
Remember, the output should be a complete, parsable JSON wrapped in <blocks> tags, with no omissions or errors. The JSON objects should semantically break down the content into relevant blocks, maintaining the original order."""
|
||||
Remember, the output should be a complete, parsable JSON wrapped in <blocks> tags, with no omissions or errors. The JSON objects should semantically break down the content into relevant blocks, maintaining the original order."""
|
||||
|
||||
PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION = """Here is the content from the URL:
|
||||
<url>{URL}</url>
|
||||
|
||||
<url_content>
|
||||
{HTML}
|
||||
</url_content>
|
||||
|
||||
The user has made the following request for what information to extract from the above content:
|
||||
|
||||
<user_request>
|
||||
{REQUEST}
|
||||
</user_request>
|
||||
|
||||
<schema_block>
|
||||
{SCHEMA}
|
||||
</schema_block>
|
||||
|
||||
Please carefully read the URL content and the user's request. If the user provided a desired JSON schema in the <schema_block> above, extract the requested information from the URL content according to that schema. If no schema was provided, infer an appropriate JSON schema based on the user's request that will best capture the key information they are looking for.
|
||||
|
||||
Extraction instructions:
|
||||
Return the extracted information as a list of JSON objects, with each object in the list corresponding to a block of content from the URL, in the same order as it appears on the page. Wrap the entire JSON list in <blocks>...</blocks> XML tags.
|
||||
|
||||
Quality Reflection:
|
||||
Before outputting your final answer, double check that the JSON you are returning is complete, containing all the information requested by the user, and is valid JSON that could be parsed by json.loads() with no errors or omissions. The outputted JSON objects should fully match the schema, either provided or inferred.
|
||||
|
||||
Quality Score:
|
||||
After reflecting, score the quality and completeness of the JSON data you are about to return on a scale of 1 to 5. Write the score inside <score> tags.
|
||||
|
||||
Avoid Common Mistakes:
|
||||
- Do NOT add any comments using "//" or "#" in the JSON output. It causes parsing errors.
|
||||
- Make sure the JSON is properly formatted with curly braces, square brackets, and commas in the right places.
|
||||
- Do not miss closing </blocks> tag at the end of the JSON output.
|
||||
- Do not generate Python code to show how to do the task; your task is to extract the information and return it in JSON format.
|
||||
|
||||
Result
|
||||
Output the final list of JSON objects, wrapped in <blocks>...</blocks> XML tags. Make sure to close the tag properly."""
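When LLMExtractionStrategy uses this prompt, the placeholders are filled with simple string replacement, and {SCHEMA} receives json.dumps(self.schema, indent=2). A sketch of the substitution (request and schema illustrative):

prompt = PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION
prompt = prompt.replace("{URL}", url)
prompt = prompt.replace("{HTML}", escape_json_string(sanitize_html(html)))
prompt = prompt.replace("{REQUEST}", "List every article title and author.")
prompt = prompt.replace("{SCHEMA}", json.dumps({"title": "string", "author": "string"}, indent=2))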
|
||||
@@ -10,6 +10,10 @@ from html2text import HTML2Text
|
||||
from .prompts import PROMPT_EXTRACT_BLOCKS
|
||||
from .config import *
|
||||
from pathlib import Path
|
||||
from typing import Dict, Any
|
||||
from urllib.parse import urljoin
|
||||
import requests
|
||||
from requests.exceptions import InvalidSchema
|
||||
|
||||
class InvalidCSSSelectorError(Exception):
|
||||
pass
|
||||
@@ -95,6 +99,16 @@ def sanitize_html(html):
|
||||
|
||||
return sanitized_html
|
||||
|
||||
def sanitize_input_encode(text: str) -> str:
|
||||
"""Sanitize input to handle potential encoding issues."""
|
||||
try:
|
||||
# Attempt to encode and decode as UTF-8 to handle potential encoding issues
|
||||
return text.encode('utf-8', errors='ignore').decode('utf-8')
|
||||
except UnicodeEncodeError as e:
|
||||
print(f"Warning: Encoding issue detected. Some characters may be lost. Error: {e}")
|
||||
# Fall back to ASCII if UTF-8 fails
|
||||
return text.encode('ascii', errors='ignore').decode('ascii')
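In practice this means un-encodable characters such as lone surrogates are silently dropped instead of crashing the crawl, for example:

sanitize_input_encode("caf\u00e9 \ud800 menu")   # -> "café  menu": the stray surrogate is removed, valid UTF-8 is kept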
|
||||
|
||||
def escape_json_string(s):
|
||||
"""
|
||||
Escapes characters in a string to be JSON safe.
|
||||
@@ -151,7 +165,51 @@ class CustomHTML2Text(HTML2Text):
|
||||
|
||||
super().handle_tag(tag, attrs, start)
|
||||
|
||||
def get_content_of_website(html, word_count_threshold = MIN_WORD_THRESHOLD, css_selector = None):
|
||||
def replace_inline_tags(soup, tags, only_text=False):
|
||||
tag_replacements = {
|
||||
'b': lambda tag: f"**{tag.text}**",
|
||||
'i': lambda tag: f"*{tag.text}*",
|
||||
'u': lambda tag: f"__{tag.text}__",
|
||||
'span': lambda tag: f"{tag.text}",
|
||||
'del': lambda tag: f"~~{tag.text}~~",
|
||||
'ins': lambda tag: f"++{tag.text}++",
|
||||
'sub': lambda tag: f"~{tag.text}~",
|
||||
'sup': lambda tag: f"^^{tag.text}^^",
|
||||
'strong': lambda tag: f"**{tag.text}**",
|
||||
'em': lambda tag: f"*{tag.text}*",
|
||||
'code': lambda tag: f"`{tag.text}`",
|
||||
'kbd': lambda tag: f"`{tag.text}`",
|
||||
'var': lambda tag: f"_{tag.text}_",
|
||||
's': lambda tag: f"~~{tag.text}~~",
|
||||
'q': lambda tag: f'"{tag.text}"',
|
||||
'abbr': lambda tag: f"{tag.text} ({tag.get('title', '')})",
|
||||
'cite': lambda tag: f"_{tag.text}_",
|
||||
'dfn': lambda tag: f"_{tag.text}_",
|
||||
'time': lambda tag: f"{tag.text}",
|
||||
'small': lambda tag: f"<small>{tag.text}</small>",
|
||||
'mark': lambda tag: f"=={tag.text}=="
|
||||
}
|
||||
|
||||
replacement_data = [(tag, tag_replacements.get(tag, lambda t: t.text)) for tag in tags]
|
||||
|
||||
for tag_name, replacement_func in replacement_data:
|
||||
for tag in soup.find_all(tag_name):
|
||||
replacement_text = tag.text if only_text else replacement_func(tag)
|
||||
tag.replace_with(replacement_text)
|
||||
|
||||
return soup
|
||||
|
||||
# for tag_name in tags:
|
||||
# for tag in soup.find_all(tag_name):
|
||||
# if not only_text:
|
||||
# replacement_text = tag_replacements.get(tag_name, lambda t: t.text)(tag)
|
||||
# tag.replace_with(replacement_text)
|
||||
# else:
|
||||
# tag.replace_with(tag.text)
|
||||
|
||||
# return soup
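A quick, self-contained illustration of the tag mapping above (sketch):

from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>A <b>bold</b> and <code>inline</code> example</p>", "html.parser")
soup = replace_inline_tags(soup, ['b', 'code'])
print(soup.get_text())   # A **bold** and `inline` example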
|
||||
|
||||
def get_content_of_website(url, html, word_count_threshold = MIN_WORD_THRESHOLD, css_selector = None, **kwargs):
|
||||
try:
|
||||
if not html:
|
||||
return None
|
||||
@@ -170,6 +228,28 @@ def get_content_of_website(html, word_count_threshold = MIN_WORD_THRESHOLD, css_
|
||||
for el in selected_elements:
|
||||
div_tag.append(el)
|
||||
body = div_tag
|
||||
|
||||
links = {
|
||||
'internal': [],
|
||||
'external': []
|
||||
}
|
||||
|
||||
# Extract all internal and external links
|
||||
for a in body.find_all('a', href=True):
|
||||
href = a['href']
|
||||
url_base = url.split('/')[2]
|
||||
if href.startswith('http') and url_base not in href:
|
||||
links['external'].append({
|
||||
'href': href,
|
||||
'text': a.get_text()
|
||||
})
|
||||
else:
|
||||
links['internal'].append(
|
||||
{
|
||||
'href': href,
|
||||
'text': a.get_text()
|
||||
}
|
||||
)
|
||||
|
||||
# Remove script, style, and other tags that don't carry useful content from body
|
||||
for tag in body.find_all(['script', 'style', 'link', 'meta', 'noscript']):
|
||||
@@ -180,6 +260,35 @@ def get_content_of_website(html, word_count_threshold = MIN_WORD_THRESHOLD, css_
|
||||
if tag.name != 'img':
|
||||
tag.attrs = {}
|
||||
|
||||
# Extract all img tags into [{src: '', alt: ''}]
|
||||
media = {
|
||||
'images': [],
|
||||
'videos': [],
|
||||
'audios': []
|
||||
}
|
||||
for img in body.find_all('img'):
|
||||
media['images'].append({
|
||||
'src': img.get('src'),
|
||||
'alt': img.get('alt'),
|
||||
"type": "image"
|
||||
})
|
||||
|
||||
# Extract all video tags into [{src: '', alt: ''}]
|
||||
for video in body.find_all('video'):
|
||||
media['videos'].append({
|
||||
'src': video.get('src'),
|
||||
'alt': video.get('alt'),
|
||||
"type": "video"
|
||||
})
|
||||
|
||||
# Extract all audio tags into [{src: '', alt: ''}]
|
||||
for audio in body.find_all('audio'):
|
||||
media['audios'].append({
|
||||
'src': audio.get('src'),
|
||||
'alt': audio.get('alt'),
|
||||
"type": "audio"
|
||||
})
|
||||
|
||||
# Replace images with their alt text or remove them if no alt text is available
|
||||
for img in body.find_all('img'):
|
||||
alt_text = img.get('alt')
|
||||
@@ -198,6 +307,13 @@ def get_content_of_website(html, word_count_threshold = MIN_WORD_THRESHOLD, css_
|
||||
|
||||
# Replace all "pre" tags with their inner text
|
||||
body = replace_pre_tags_with_text(body)
|
||||
|
||||
# Replace inline tags with their text content
|
||||
body = replace_inline_tags(
|
||||
body,
|
||||
['b', 'i', 'u', 'span', 'del', 'ins', 'sub', 'sup', 'strong', 'em', 'code', 'kbd', 'var', 's', 'q', 'abbr', 'cite', 'dfn', 'time', 'small', 'mark'],
|
||||
only_text=kwargs.get('only_text', False)
|
||||
)
|
||||
|
||||
# Recursively remove empty elements, their parent elements, and elements with word count below threshold
|
||||
def remove_empty_and_low_word_count_elements(node, word_count_threshold):
|
||||
@@ -295,17 +411,293 @@ def get_content_of_website(html, word_count_threshold = MIN_WORD_THRESHOLD, css_
|
||||
markdown = h.handle(cleaned_html)
|
||||
markdown = markdown.replace(' ```', '```')
|
||||
|
||||
try:
|
||||
meta = extract_metadata(html, soup)
|
||||
except Exception as e:
|
||||
print('Error extracting metadata:', str(e))
|
||||
meta = {}
|
||||
|
||||
|
||||
# Return the Markdown content
|
||||
return{
|
||||
'markdown': markdown,
|
||||
'cleaned_html': cleaned_html,
|
||||
'success': True
|
||||
'success': True,
|
||||
'media': media,
|
||||
'links': links,
|
||||
'metadata': meta
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
print('Error processing HTML content:', str(e))
|
||||
raise InvalidCSSSelectorError(f"Invalid CSS selector: {css_selector}") from e
|
||||
|
||||
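A minimal usage sketch for get_content_of_website above; it assumes requests is installed, the URL is a placeholder, and the keys read from the result are the ones in the dictionary returned above.

# Minimal usage sketch (assumptions: requests is installed, URL is a placeholder).
import requests

page_url = "https://example.com"
raw_html = requests.get(page_url).text
result = get_content_of_website(page_url, raw_html, word_count_threshold=5)
if result and result["success"]:
    print(result["markdown"][:200])                      # first part of the Markdown
    print(len(result["links"]["internal"]), "internal links")
    print(len(result["media"]["images"]), "images kept")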
def get_content_of_website_optimized(url: str, html: str, word_count_threshold: int = MIN_WORD_THRESHOLD, css_selector: str = None, **kwargs) -> Dict[str, Any]:
|
||||
if not html:
|
||||
return None
|
||||
|
||||
soup = BeautifulSoup(html, 'html.parser')
|
||||
body = soup.body
|
||||
|
||||
image_description_min_word_threshold = kwargs.get('image_description_min_word_threshold', IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD)
|
||||
|
||||
if css_selector:
|
||||
selected_elements = body.select(css_selector)
|
||||
if not selected_elements:
|
||||
raise InvalidCSSSelectorError(f"Invalid CSS selector, No elements found for CSS selector: {css_selector}")
|
||||
body = soup.new_tag('div')
|
||||
for el in selected_elements:
|
||||
body.append(el)
|
||||
|
||||
links = {'internal': [], 'external': []}
|
||||
media = {'images': [], 'videos': [], 'audios': []}
|
||||
|
||||
def process_image(img, url, index, total_images):
|
||||
# Check whether an image has a valid display style and is not inside undesired HTML elements
|
||||
def is_valid_image(img, parent, parent_classes):
|
||||
style = img.get('style', '')
|
||||
src = img.get('src', '')
|
||||
classes_to_check = ['button', 'icon', 'logo']
|
||||
tags_to_check = ['button', 'input']
|
||||
return all([
|
||||
'display:none' not in style,
|
||||
src,
|
||||
not any(s in var for var in [src, img.get('alt', ''), *parent_classes] for s in classes_to_check),
|
||||
parent.name not in tags_to_check
|
||||
])
|
||||
|
||||
# Score an image for its usefulness
|
||||
def score_image_for_usefulness(img, base_url, index, images_count):
|
||||
# Function to parse image height/width value and units
|
||||
def parse_dimension(dimension):
|
||||
if dimension:
|
||||
match = re.match(r"(\d+)(\D*)", dimension)
|
||||
if match:
|
||||
number = int(match.group(1))
|
||||
unit = match.group(2) or 'px' # Default unit is 'px' if not specified
|
||||
return number, unit
|
||||
return None, None
|
||||
|
||||
# Fetch image file metadata to extract size and extension
|
||||
def fetch_image_file_size(img, base_url):
|
||||
# If src is a relative path, construct the full URL; otherwise it may already be a CDN URL
|
||||
img_url = urljoin(base_url,img.get('src'))
|
||||
try:
|
||||
response = requests.head(img_url)
|
||||
if response.status_code == 200:
|
||||
return response.headers.get('Content-Length',None)
|
||||
else:
|
||||
print(f"Failed to retrieve file size for {img_url}")
|
||||
return None
|
||||
except InvalidSchema as e:
|
||||
return None
|
||||
finally:
|
||||
return
|
||||
|
||||
image_height = img.get('height')
|
||||
height_value, height_unit = parse_dimension(image_height)
|
||||
image_width = img.get('width')
|
||||
width_value, width_unit = parse_dimension(image_width)
|
||||
image_size = 0 #int(fetch_image_file_size(img,base_url) or 0)
|
||||
image_format = os.path.splitext(img.get('src',''))[1].lower()
|
||||
# Remove . from format
|
||||
image_format = image_format.strip('.')
|
||||
score = 0
|
||||
if height_value:
|
||||
if height_unit == 'px' and height_value > 150:
|
||||
score += 1
|
||||
if height_unit in ['%','vh','vmin','vmax'] and height_value >30:
|
||||
score += 1
|
||||
if width_value:
|
||||
if width_unit == 'px' and width_value > 150:
|
||||
score += 1
|
||||
if width_unit in ['%','vh','vmin','vmax'] and width_value >30:
|
||||
score += 1
|
||||
if image_size > 10000:
|
||||
score += 1
|
||||
if img.get('alt') != '':
|
||||
score+=1
|
||||
if any(image_format==format for format in ['jpg','png','webp']):
|
||||
score+=1
|
||||
if index/images_count<0.5:
|
||||
score+=1
|
||||
return score
|
||||
|
||||
# Extract meaningful text for images from closest parent
|
||||
def find_closest_parent_with_useful_text(tag):
|
||||
current_tag = tag
|
||||
while current_tag:
|
||||
current_tag = current_tag.parent
|
||||
# Get the text content of the parent tag
|
||||
if current_tag:
|
||||
text_content = current_tag.get_text(separator=' ',strip=True)
|
||||
# Check if the text content has at least image_description_min_word_threshold words
|
||||
if len(text_content.split()) >= image_description_min_word_threshold:
|
||||
return text_content
|
||||
return None
|
||||
|
||||
if not is_valid_image(img, img.parent, img.parent.get('class', [])):
|
||||
return None
|
||||
score = score_image_for_usefulness(img, url, index, total_images)
|
||||
if score <= IMAGE_SCORE_THRESHOLD:
|
||||
return None
|
||||
return {
|
||||
'src': img.get('src', ''),
|
||||
'alt': img.get('alt', ''),
|
||||
'desc': find_closest_parent_with_useful_text(img),
|
||||
'score': score,
|
||||
'type': 'image'
|
||||
}
|
||||
|
||||
def process_element(element: element.PageElement) -> bool:
|
||||
try:
|
||||
if isinstance(element, NavigableString):
|
||||
if isinstance(element, Comment):
|
||||
element.extract()
|
||||
return False
|
||||
|
||||
if element.name in ['script', 'style', 'link', 'meta', 'noscript']:
|
||||
element.decompose()
|
||||
return False
|
||||
|
||||
keep_element = False
|
||||
|
||||
if element.name == 'a' and element.get('href'):
|
||||
href = element['href']
|
||||
url_base = url.split('/')[2]
|
||||
link_data = {'href': href, 'text': element.get_text()}
|
||||
if href.startswith('http') and url_base not in href:
|
||||
links['external'].append(link_data)
|
||||
else:
|
||||
links['internal'].append(link_data)
|
||||
keep_element = True
|
||||
|
||||
elif element.name == 'img':
|
||||
return True # Always keep image elements
|
||||
|
||||
elif element.name in ['video', 'audio']:
|
||||
media[f"{element.name}s"].append({
|
||||
'src': element.get('src'),
|
||||
'alt': element.get('alt'),
|
||||
'type': element.name
|
||||
})
|
||||
return True # Always keep video and audio elements
|
||||
|
||||
if element.name != 'pre':
|
||||
if element.name in ['b', 'i', 'u', 'span', 'del', 'ins', 'sub', 'sup', 'strong', 'em', 'code', 'kbd', 'var', 's', 'q', 'abbr', 'cite', 'dfn', 'time', 'small', 'mark']:
|
||||
if kwargs.get('only_text', False):
|
||||
element.replace_with(element.get_text())
|
||||
else:
|
||||
element.unwrap()
|
||||
elif element.name != 'img':
|
||||
element.attrs = {}
|
||||
|
||||
# Process children
|
||||
for child in list(element.children):
|
||||
if isinstance(child, NavigableString) and not isinstance(child, Comment):
|
||||
if len(child.strip()) > 0:
|
||||
keep_element = True
|
||||
else:
|
||||
if process_element(child):
|
||||
keep_element = True
|
||||
|
||||
|
||||
# Check word count
|
||||
if not keep_element:
|
||||
word_count = len(element.get_text(strip=True).split())
|
||||
keep_element = word_count >= word_count_threshold
|
||||
|
||||
if not keep_element:
|
||||
element.decompose()
|
||||
|
||||
return keep_element
|
||||
except Exception as e:
|
||||
print('Error processing element:', str(e))
|
||||
return False
|
||||
|
||||
# Process images by filtering and extracting contextual text from the page
|
||||
imgs = body.find_all('img')
|
||||
media['images'] = [
|
||||
result for result in
|
||||
(process_image(img, url, i, len(imgs)) for i, img in enumerate(imgs))
|
||||
if result is not None
|
||||
]
|
||||
|
||||
process_element(body)
|
||||
|
||||
def flatten_nested_elements(node):
|
||||
if isinstance(node, NavigableString):
|
||||
return node
|
||||
if len(node.contents) == 1 and isinstance(node.contents[0], element.Tag) and node.contents[0].name == node.name:
|
||||
return flatten_nested_elements(node.contents[0])
|
||||
node.contents = [flatten_nested_elements(child) for child in node.contents]
|
||||
return node
|
||||
|
||||
body = flatten_nested_elements(body)
|
||||
|
||||
cleaned_html = str(body).replace('\n\n', '\n').replace(' ', ' ')
|
||||
cleaned_html = sanitize_html(cleaned_html)
|
||||
|
||||
h = CustomHTML2Text()
|
||||
h.ignore_links = True
|
||||
markdown = h.handle(cleaned_html)
|
||||
markdown = markdown.replace(' ```', '```')
|
||||
|
||||
try:
|
||||
meta = extract_metadata(html, soup)
|
||||
except Exception as e:
|
||||
print('Error extracting metadata:', str(e))
|
||||
meta = {}
|
||||
|
||||
return {
|
||||
'markdown': markdown,
|
||||
'cleaned_html': cleaned_html,
|
||||
'success': True,
|
||||
'media': media,
|
||||
'links': links,
|
||||
'metadata': meta
|
||||
}
|
||||
|
||||
def extract_metadata(html, soup = None):
|
||||
metadata = {}
|
||||
|
||||
if not html:
|
||||
return metadata
|
||||
|
||||
# Parse HTML content with BeautifulSoup
|
||||
if not soup:
|
||||
soup = BeautifulSoup(html, 'html.parser')
|
||||
|
||||
# Title
|
||||
title_tag = soup.find('title')
|
||||
metadata['title'] = title_tag.string if title_tag else None
|
||||
|
||||
# Meta description
|
||||
description_tag = soup.find('meta', attrs={'name': 'description'})
|
||||
metadata['description'] = description_tag['content'] if description_tag else None
|
||||
|
||||
# Meta keywords
|
||||
keywords_tag = soup.find('meta', attrs={'name': 'keywords'})
|
||||
metadata['keywords'] = keywords_tag['content'] if keywords_tag else None
|
||||
|
||||
# Meta author
|
||||
author_tag = soup.find('meta', attrs={'name': 'author'})
|
||||
metadata['author'] = author_tag['content'] if author_tag else None
|
||||
|
||||
# Open Graph metadata
|
||||
og_tags = soup.find_all('meta', attrs={'property': lambda value: value and value.startswith('og:')})
|
||||
for tag in og_tags:
|
||||
property_name = tag['property']
|
||||
metadata[property_name] = tag['content']
|
||||
|
||||
# Twitter Card metadata
|
||||
twitter_tags = soup.find_all('meta', attrs={'name': lambda value: value and value.startswith('twitter:')})
|
||||
for tag in twitter_tags:
|
||||
property_name = tag['name']
|
||||
metadata[property_name] = tag['content']
|
||||
|
||||
return metadata
|
||||
|
||||
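A small illustrative sketch of extract_metadata on a hypothetical HTML snippet; the snippet is an assumption for demonstration, and the key names follow the code above.

# Illustrative sketch: extract_metadata on a tiny, hypothetical HTML snippet.
sample_html = (
    "<html><head>"
    "<title>Sample</title>"
    "<meta name='description' content='A short demo page'>"
    "<meta property='og:title' content='Sample OG Title'>"
    "</head><body></body></html>"
)
meta = extract_metadata(sample_html)
# Per the code above, expect 'title', 'description', 'keywords', 'author',
# plus any 'og:*' and 'twitter:*' properties that were present.
print(meta.get("title"), meta.get("og:title"))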
def extract_xml_tags(string):
|
||||
tags = re.findall(r'<(\w+)>', string)
|
||||
return list(set(tags))
|
||||
@@ -324,12 +716,16 @@ def extract_xml_data(tags, string):
|
||||
return data
|
||||
|
||||
# Function to perform the completion with exponential backoff
|
||||
def perform_completion_with_backoff(provider, prompt_with_variables, api_token):
|
||||
def perform_completion_with_backoff(provider, prompt_with_variables, api_token, json_response = False):
|
||||
from litellm import completion
|
||||
from litellm.exceptions import RateLimitError
|
||||
max_attempts = 3
|
||||
base_delay = 2 # Base delay in seconds, you can adjust this based on your needs
|
||||
|
||||
extra_args = {}
|
||||
if json_response:
|
||||
extra_args["response_format"] = { "type": "json_object" }
|
||||
|
||||
for attempt in range(max_attempts):
|
||||
try:
|
||||
response =completion(
|
||||
@@ -338,7 +734,8 @@ def perform_completion_with_backoff(provider, prompt_with_variables, api_token):
|
||||
{"role": "user", "content": prompt_with_variables}
|
||||
],
|
||||
temperature=0.01,
|
||||
api_key=api_token
|
||||
api_key=api_token,
|
||||
**extra_args
|
||||
)
|
||||
return response # Return the successful response
|
||||
except RateLimitError as e:
|
||||
@@ -382,7 +779,6 @@ def extract_blocks(url, html, provider = DEFAULT_PROVIDER, api_token = None):
|
||||
for block in blocks:
|
||||
block['error'] = False
|
||||
except Exception as e:
|
||||
print("Error extracting blocks:", str(e))
|
||||
parsed, unparsed = split_and_parse_json_objects(response.choices[0].message.content)
|
||||
blocks = parsed
|
||||
# Append all unparsed segments as one error block; its content is the list of unparsed segments
|
||||
@@ -428,7 +824,6 @@ def extract_blocks_batch(batch_data, provider = "groq/llama3-70b-8192", api_toke
|
||||
blocks = json.loads(blocks)
|
||||
|
||||
except Exception as e:
|
||||
print("Error extracting blocks:", str(e))
|
||||
blocks = [{
|
||||
"index": 0,
|
||||
"tags": ["error"],
|
||||
@@ -483,4 +878,23 @@ def process_sections(url: str, sections: list, provider: str, api_token: str) ->
|
||||
for future in as_completed(futures):
|
||||
extracted_content.extend(future.result())
|
||||
|
||||
return extracted_content
|
||||
return extracted_content
|
||||
|
||||
|
||||
def wrap_text(draw, text, font, max_width):
|
||||
# Wrap the text to fit within the specified width
|
||||
lines = []
|
||||
words = text.split()
|
||||
while words:
|
||||
line = ''
|
||||
while words and draw.textbbox((0, 0), line + words[0], font=font)[2] <= max_width:
|
||||
line += (words.pop(0) + ' ')
|
||||
lines.append(line)
|
||||
return '\n'.join(lines)
|
||||
|
||||
|
||||
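A hedged sketch of wrap_text with Pillow; it assumes Pillow is installed and uses the default bitmap font as a stand-in for a real TrueType font.

# Hedged sketch for wrap_text; assumes Pillow is installed.
from PIL import Image, ImageDraw, ImageFont

canvas = Image.new("RGB", (400, 200), "white")
draw = ImageDraw.Draw(canvas)
font = ImageFont.load_default()   # stand-in for a real TrueType font
wrapped = wrap_text(draw, "A long caption that should be broken into lines", font, max_width=200)
print(wrapped)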
def format_html(html_string):
|
||||
soup = BeautifulSoup(html_string, 'html.parser')
|
||||
return soup.prettify()
|
||||
|
||||
|
||||
|
||||
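A hedged sketch of the new json_response flag on perform_completion_with_backoff above; the provider name and API key are placeholders, and litellm is assumed to be installed.

# Sketch of the new json_response flag; provider name and API key are placeholders.
response = perform_completion_with_backoff(
    provider="groq/llama3-70b-8192",
    prompt_with_variables="Return a JSON object with a single key 'ok' set to true.",
    api_token="YOUR_API_KEY",
    json_response=True,   # adds response_format={"type": "json_object"} to the call
)
print(response.choices[0].message.content)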
357
crawl4ai/web_crawler.back.py
Normal file
@@ -0,0 +1,357 @@
|
||||
import os, time
|
||||
os.environ["TOKENIZERS_PARALLELISM"] = "false"
|
||||
from pathlib import Path
|
||||
|
||||
from .models import UrlModel, CrawlResult
|
||||
from .database import init_db, get_cached_url, cache_url, DB_PATH, flush_db
|
||||
from .utils import *
|
||||
from .chunking_strategy import *
|
||||
from .extraction_strategy import *
|
||||
from .crawler_strategy import *
|
||||
from typing import List
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
from .config import *
|
||||
|
||||
|
||||
class WebCrawler:
|
||||
def __init__(
|
||||
self,
|
||||
# db_path: str = None,
|
||||
crawler_strategy: CrawlerStrategy = None,
|
||||
always_by_pass_cache: bool = False,
|
||||
verbose: bool = False,
|
||||
):
|
||||
# self.db_path = db_path
|
||||
self.crawler_strategy = crawler_strategy or LocalSeleniumCrawlerStrategy(verbose=verbose)
|
||||
self.always_by_pass_cache = always_by_pass_cache
|
||||
|
||||
# Create the .crawl4ai folder in the user's home directory if it doesn't exist
|
||||
self.crawl4ai_folder = os.path.join(Path.home(), ".crawl4ai")
|
||||
os.makedirs(self.crawl4ai_folder, exist_ok=True)
|
||||
os.makedirs(f"{self.crawl4ai_folder}/cache", exist_ok=True)
|
||||
|
||||
# If db_path is not provided, use the default path
|
||||
# if not db_path:
|
||||
# self.db_path = f"{self.crawl4ai_folder}/crawl4ai.db"
|
||||
|
||||
# flush_db()
|
||||
init_db()
|
||||
|
||||
self.ready = False
|
||||
|
||||
def warmup(self):
|
||||
print("[LOG] 🌤️ Warming up the WebCrawler")
|
||||
result = self.run(
|
||||
url='https://crawl4ai.uccode.io/',
|
||||
word_count_threshold=5,
|
||||
extraction_strategy= NoExtractionStrategy(),
|
||||
bypass_cache=False,
|
||||
verbose = False
|
||||
)
|
||||
self.ready = True
|
||||
print("[LOG] 🌞 WebCrawler is ready to crawl")
|
||||
|
||||
def fetch_page(
|
||||
self,
|
||||
url_model: UrlModel,
|
||||
provider: str = DEFAULT_PROVIDER,
|
||||
api_token: str = None,
|
||||
extract_blocks_flag: bool = True,
|
||||
word_count_threshold=MIN_WORD_THRESHOLD,
|
||||
css_selector: str = None,
|
||||
screenshot: bool = False,
|
||||
use_cached_html: bool = False,
|
||||
extraction_strategy: ExtractionStrategy = None,
|
||||
chunking_strategy: ChunkingStrategy = RegexChunking(),
|
||||
**kwargs,
|
||||
) -> CrawlResult:
|
||||
return self.run(
|
||||
url_model.url,
|
||||
word_count_threshold,
|
||||
extraction_strategy or NoExtractionStrategy(),
|
||||
chunking_strategy,
|
||||
bypass_cache=url_model.forced,
|
||||
css_selector=css_selector,
|
||||
screenshot=screenshot,
|
||||
**kwargs,
|
||||
)
|
||||
pass
|
||||
|
||||
def run_old(
|
||||
self,
|
||||
url: str,
|
||||
word_count_threshold=MIN_WORD_THRESHOLD,
|
||||
extraction_strategy: ExtractionStrategy = None,
|
||||
chunking_strategy: ChunkingStrategy = RegexChunking(),
|
||||
bypass_cache: bool = False,
|
||||
css_selector: str = None,
|
||||
screenshot: bool = False,
|
||||
user_agent: str = None,
|
||||
verbose=True,
|
||||
**kwargs,
|
||||
) -> CrawlResult:
|
||||
if user_agent:
|
||||
self.crawler_strategy.update_user_agent(user_agent)
|
||||
extraction_strategy = extraction_strategy or NoExtractionStrategy()
|
||||
extraction_strategy.verbose = verbose
|
||||
# Check that the extraction strategy is an instance of ExtractionStrategy; if not, raise an error
|
||||
if not isinstance(extraction_strategy, ExtractionStrategy):
|
||||
raise ValueError("Unsupported extraction strategy")
|
||||
if not isinstance(chunking_strategy, ChunkingStrategy):
|
||||
raise ValueError("Unsupported chunking strategy")
|
||||
|
||||
# Make sure word_count_threshold is not less than MIN_WORD_THRESHOLD
|
||||
if word_count_threshold < MIN_WORD_THRESHOLD:
|
||||
word_count_threshold = MIN_WORD_THRESHOLD
|
||||
|
||||
# Check cache first
|
||||
if not bypass_cache and not self.always_by_pass_cache:
|
||||
cached = get_cached_url(url)
|
||||
if cached:
|
||||
return CrawlResult(
|
||||
**{
|
||||
"url": cached[0],
|
||||
"html": cached[1],
|
||||
"cleaned_html": cached[2],
|
||||
"markdown": cached[3],
|
||||
"extracted_content": cached[4],
|
||||
"success": cached[5],
|
||||
"media": json.loads(cached[6] or "{}"),
|
||||
"links": json.loads(cached[7] or "{}"),
|
||||
"metadata": json.loads(cached[8] or "{}"), # "metadata": "{}
|
||||
"screenshot": cached[9],
|
||||
"error_message": "",
|
||||
}
|
||||
)
|
||||
|
||||
# Initialize WebDriver for crawling
|
||||
t = time.time()
|
||||
if kwargs.get("js", None):
|
||||
self.crawler_strategy.js_code = kwargs.get("js")
|
||||
html = self.crawler_strategy.crawl(url)
|
||||
base64_image = None
|
||||
if screenshot:
|
||||
base64_image = self.crawler_strategy.take_screenshot()
|
||||
success = True
|
||||
error_message = ""
|
||||
# Extract content from HTML
|
||||
try:
|
||||
result = get_content_of_website(url, html, word_count_threshold, css_selector=css_selector)
|
||||
metadata = extract_metadata(html)
|
||||
if result is None:
|
||||
raise ValueError(f"Failed to extract content from the website: {url}")
|
||||
except InvalidCSSSelectorError as e:
|
||||
raise ValueError(str(e))
|
||||
|
||||
cleaned_html = result.get("cleaned_html", "")
|
||||
markdown = result.get("markdown", "")
|
||||
media = result.get("media", [])
|
||||
links = result.get("links", [])
|
||||
|
||||
# Print a professional LOG-style message showing the time taken and that crawling is done
|
||||
if verbose:
|
||||
print(
|
||||
f"[LOG] 🚀 Crawling done for {url}, success: {success}, time taken: {time.time() - t} seconds"
|
||||
)
|
||||
|
||||
extracted_content = []
|
||||
if verbose:
|
||||
print(f"[LOG] 🔥 Extracting semantic blocks for {url}, Strategy: {extraction_strategy.name}")
|
||||
t = time.time()
|
||||
# Split markdown into sections
|
||||
sections = chunking_strategy.chunk(markdown)
|
||||
# sections = merge_chunks_based_on_token_threshold(sections, CHUNK_TOKEN_THRESHOLD)
|
||||
|
||||
extracted_content = extraction_strategy.run(
|
||||
url, sections,
|
||||
)
|
||||
extracted_content = json.dumps(extracted_content)
|
||||
|
||||
if verbose:
|
||||
print(
|
||||
f"[LOG] 🚀 Extraction done for {url}, time taken: {time.time() - t} seconds."
|
||||
)
|
||||
|
||||
# Cache the result
|
||||
cleaned_html = beautify_html(cleaned_html)
|
||||
cache_url(
|
||||
url,
|
||||
html,
|
||||
cleaned_html,
|
||||
markdown,
|
||||
extracted_content,
|
||||
success,
|
||||
json.dumps(media),
|
||||
json.dumps(links),
|
||||
json.dumps(metadata),
|
||||
screenshot=base64_image,
|
||||
)
|
||||
|
||||
return CrawlResult(
|
||||
url=url,
|
||||
html=html,
|
||||
cleaned_html=cleaned_html,
|
||||
markdown=markdown,
|
||||
media=media,
|
||||
links=links,
|
||||
metadata=metadata,
|
||||
screenshot=base64_image,
|
||||
extracted_content=extracted_content,
|
||||
success=success,
|
||||
error_message=error_message,
|
||||
)
|
||||
|
||||
def fetch_pages(
|
||||
self,
|
||||
url_models: List[UrlModel],
|
||||
provider: str = DEFAULT_PROVIDER,
|
||||
api_token: str = None,
|
||||
extract_blocks_flag: bool = True,
|
||||
word_count_threshold=MIN_WORD_THRESHOLD,
|
||||
use_cached_html: bool = False,
|
||||
css_selector: str = None,
|
||||
screenshot: bool = False,
|
||||
extraction_strategy: ExtractionStrategy = None,
|
||||
chunking_strategy: ChunkingStrategy = RegexChunking(),
|
||||
**kwargs,
|
||||
) -> List[CrawlResult]:
|
||||
extraction_strategy = extraction_strategy or NoExtractionStrategy()
|
||||
def fetch_page_wrapper(url_model, *args, **kwargs):
|
||||
return self.fetch_page(url_model, *args, **kwargs)
|
||||
|
||||
with ThreadPoolExecutor() as executor:
|
||||
results = list(
|
||||
executor.map(
|
||||
fetch_page_wrapper,
|
||||
url_models,
|
||||
[provider] * len(url_models),
|
||||
[api_token] * len(url_models),
|
||||
[extract_blocks_flag] * len(url_models),
|
||||
[word_count_threshold] * len(url_models),
|
||||
[css_selector] * len(url_models),
|
||||
[screenshot] * len(url_models),
|
||||
[use_cached_html] * len(url_models),
|
||||
[extraction_strategy] * len(url_models),
|
||||
[chunking_strategy] * len(url_models),
|
||||
*[kwargs] * len(url_models),
|
||||
)
|
||||
)
|
||||
|
||||
return results
|
||||
|
||||
def run(
|
||||
self,
|
||||
url: str,
|
||||
word_count_threshold=MIN_WORD_THRESHOLD,
|
||||
extraction_strategy: ExtractionStrategy = None,
|
||||
chunking_strategy: ChunkingStrategy = RegexChunking(),
|
||||
bypass_cache: bool = False,
|
||||
css_selector: str = None,
|
||||
screenshot: bool = False,
|
||||
user_agent: str = None,
|
||||
verbose=True,
|
||||
**kwargs,
|
||||
) -> CrawlResult:
|
||||
extraction_strategy = extraction_strategy or NoExtractionStrategy()
|
||||
extraction_strategy.verbose = verbose
|
||||
if not isinstance(extraction_strategy, ExtractionStrategy):
|
||||
raise ValueError("Unsupported extraction strategy")
|
||||
if not isinstance(chunking_strategy, ChunkingStrategy):
|
||||
raise ValueError("Unsupported chunking strategy")
|
||||
|
||||
if word_count_threshold < MIN_WORD_THRESHOLD:
|
||||
word_count_threshold = MIN_WORD_THRESHOLD
|
||||
|
||||
# Check cache first
|
||||
cached = None
|
||||
extracted_content = None
|
||||
if not bypass_cache and not self.always_by_pass_cache:
|
||||
cached = get_cached_url(url)
|
||||
|
||||
if cached:
|
||||
html = cached[1]
|
||||
extracted_content = cached[2]
|
||||
if screenshot:
|
||||
screenshot = cached[9]
|
||||
|
||||
else:
|
||||
if user_agent:
|
||||
self.crawler_strategy.update_user_agent(user_agent)
|
||||
html = self.crawler_strategy.crawl(url)
|
||||
if screenshot:
|
||||
screenshot = self.crawler_strategy.take_screenshot()
|
||||
|
||||
return self.process_html(url, html, extracted_content, word_count_threshold, extraction_strategy, chunking_strategy, css_selector, screenshot, verbose, bool(cached), **kwargs)
|
||||
|
||||
def process_html(
|
||||
self,
|
||||
url: str,
|
||||
html: str,
|
||||
extracted_content: str,
|
||||
word_count_threshold: int,
|
||||
extraction_strategy: ExtractionStrategy,
|
||||
chunking_strategy: ChunkingStrategy,
|
||||
css_selector: str,
|
||||
screenshot: bool,
|
||||
verbose: bool,
|
||||
is_cached: bool,
|
||||
**kwargs,
|
||||
) -> CrawlResult:
|
||||
t = time.time()
|
||||
# Extract content from HTML
|
||||
try:
|
||||
result = get_content_of_website(url, html, word_count_threshold, css_selector=css_selector)
|
||||
metadata = extract_metadata(html)
|
||||
if result is None:
|
||||
raise ValueError(f"Failed to extract content from the website: {url}")
|
||||
except InvalidCSSSelectorError as e:
|
||||
raise ValueError(str(e))
|
||||
|
||||
cleaned_html = result.get("cleaned_html", "")
|
||||
markdown = result.get("markdown", "")
|
||||
media = result.get("media", [])
|
||||
links = result.get("links", [])
|
||||
|
||||
if verbose:
|
||||
print(f"[LOG] 🚀 Crawling done for {url}, success: True, time taken: {time.time() - t} seconds")
|
||||
|
||||
if extracted_content is None:
|
||||
if verbose:
|
||||
print(f"[LOG] 🔥 Extracting semantic blocks for {url}, Strategy: {extraction_strategy.name}")
|
||||
|
||||
sections = chunking_strategy.chunk(markdown)
|
||||
extracted_content = extraction_strategy.run(url, sections)
|
||||
extracted_content = json.dumps(extracted_content)
|
||||
|
||||
if verbose:
|
||||
print(f"[LOG] 🚀 Extraction done for {url}, time taken: {time.time() - t} seconds.")
|
||||
|
||||
screenshot = None if not screenshot else screenshot
|
||||
|
||||
if not is_cached:
|
||||
cache_url(
|
||||
url,
|
||||
html,
|
||||
cleaned_html,
|
||||
markdown,
|
||||
extracted_content,
|
||||
True,
|
||||
json.dumps(media),
|
||||
json.dumps(links),
|
||||
json.dumps(metadata),
|
||||
screenshot=screenshot,
|
||||
)
|
||||
|
||||
return CrawlResult(
|
||||
url=url,
|
||||
html=html,
|
||||
cleaned_html=cleaned_html,
|
||||
markdown=markdown,
|
||||
media=media,
|
||||
links=links,
|
||||
metadata=metadata,
|
||||
screenshot=screenshot,
|
||||
extracted_content=extracted_content,
|
||||
success=True,
|
||||
error_message="",
|
||||
)
|
||||
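For reference, the cached row consumed by run_old and run above is positional; the mapping below is a sketch taken directly from the indexing in the code, not a formal schema.

# Positional layout of a cached row, as read by run_old/run above (sketch, not a schema).
CACHE_FIELDS = [
    "url",                # cached[0]
    "html",               # cached[1]
    "cleaned_html",       # cached[2]
    "markdown",           # cached[3]
    "extracted_content",  # cached[4]
    "success",            # cached[5]
    "media",              # cached[6], JSON string
    "links",              # cached[7], JSON string
    "metadata",           # cached[8], JSON string
    "screenshot",         # cached[9], base64 string
]

def cached_row_to_dict(row):
    return dict(zip(CACHE_FIELDS, row))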
@@ -11,6 +11,8 @@ from .crawler_strategy import *
|
||||
from typing import List
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
from .config import *
|
||||
import warnings
|
||||
warnings.filterwarnings("ignore", message='Field "model_name" has conflict with protected namespace "model_".')
|
||||
|
||||
|
||||
class WebCrawler:
|
||||
@@ -19,9 +21,10 @@ class WebCrawler:
|
||||
# db_path: str = None,
|
||||
crawler_strategy: CrawlerStrategy = None,
|
||||
always_by_pass_cache: bool = False,
|
||||
verbose: bool = False,
|
||||
):
|
||||
# self.db_path = db_path
|
||||
self.crawler_strategy = crawler_strategy or LocalSeleniumCrawlerStrategy()
|
||||
self.crawler_strategy = crawler_strategy or LocalSeleniumCrawlerStrategy(verbose=verbose)
|
||||
self.always_by_pass_cache = always_by_pass_cache
|
||||
|
||||
# Create the .crawl4ai folder in the user's home directory if it doesn't exist
|
||||
@@ -41,16 +44,16 @@ class WebCrawler:
|
||||
def warmup(self):
|
||||
print("[LOG] 🌤️ Warming up the WebCrawler")
|
||||
result = self.run(
|
||||
url='https://crawl4ai.uccode.io/',
|
||||
url='https://google.com/',
|
||||
word_count_threshold=5,
|
||||
extraction_strategy= NoExtractionStrategy(),
|
||||
bypass_cache=False,
|
||||
verbose = False
|
||||
verbose = False,
|
||||
# warmup=True
|
||||
)
|
||||
self.ready = True
|
||||
print("[LOG] 🌞 WebCrawler is ready to crawl")
|
||||
|
||||
|
||||
def fetch_page(
|
||||
self,
|
||||
url_model: UrlModel,
|
||||
@@ -58,6 +61,8 @@ class WebCrawler:
|
||||
api_token: str = None,
|
||||
extract_blocks_flag: bool = True,
|
||||
word_count_threshold=MIN_WORD_THRESHOLD,
|
||||
css_selector: str = None,
|
||||
screenshot: bool = False,
|
||||
use_cached_html: bool = False,
|
||||
extraction_strategy: ExtractionStrategy = None,
|
||||
chunking_strategy: ChunkingStrategy = RegexChunking(),
|
||||
@@ -69,111 +74,12 @@ class WebCrawler:
|
||||
extraction_strategy or NoExtractionStrategy(),
|
||||
chunking_strategy,
|
||||
bypass_cache=url_model.forced,
|
||||
css_selector=css_selector,
|
||||
screenshot=screenshot,
|
||||
**kwargs,
|
||||
)
|
||||
pass
|
||||
|
||||
|
||||
def run(
|
||||
self,
|
||||
url: str,
|
||||
word_count_threshold=MIN_WORD_THRESHOLD,
|
||||
extraction_strategy: ExtractionStrategy = None,
|
||||
chunking_strategy: ChunkingStrategy = RegexChunking(),
|
||||
bypass_cache: bool = False,
|
||||
css_selector: str = None,
|
||||
verbose=True,
|
||||
**kwargs,
|
||||
) -> CrawlResult:
|
||||
extraction_strategy = extraction_strategy or NoExtractionStrategy()
|
||||
extraction_strategy.verbose = verbose
|
||||
# Check that the extraction strategy is an instance of ExtractionStrategy; if not, raise an error
|
||||
if not isinstance(extraction_strategy, ExtractionStrategy):
|
||||
raise ValueError("Unsupported extraction strategy")
|
||||
if not isinstance(chunking_strategy, ChunkingStrategy):
|
||||
raise ValueError("Unsupported chunking strategy")
|
||||
|
||||
# Make sure word_count_threshold is not less than MIN_WORD_THRESHOLD
|
||||
if word_count_threshold < MIN_WORD_THRESHOLD:
|
||||
word_count_threshold = MIN_WORD_THRESHOLD
|
||||
|
||||
# Check cache first
|
||||
if not bypass_cache and not self.always_by_pass_cache:
|
||||
cached = get_cached_url(url)
|
||||
if cached:
|
||||
return CrawlResult(
|
||||
**{
|
||||
"url": cached[0],
|
||||
"html": cached[1],
|
||||
"cleaned_html": cached[2],
|
||||
"markdown": cached[3],
|
||||
"extracted_content": cached[4],
|
||||
"success": cached[5],
|
||||
"error_message": "",
|
||||
}
|
||||
)
|
||||
|
||||
# Initialize WebDriver for crawling
|
||||
t = time.time()
|
||||
html = self.crawler_strategy.crawl(url)
|
||||
success = True
|
||||
error_message = ""
|
||||
# Extract content from HTML
|
||||
try:
|
||||
result = get_content_of_website(html, word_count_threshold, css_selector=css_selector)
|
||||
if result is None:
|
||||
raise ValueError(f"Failed to extract content from the website: {url}")
|
||||
except InvalidCSSSelectorError as e:
|
||||
raise ValueError(str(e))
|
||||
|
||||
cleaned_html = result.get("cleaned_html", html)
|
||||
markdown = result.get("markdown", "")
|
||||
|
||||
# Print a professional LOG-style message showing the time taken and that crawling is done
|
||||
if verbose:
|
||||
print(
|
||||
f"[LOG] 🚀 Crawling done for {url}, success: {success}, time taken: {time.time() - t} seconds"
|
||||
)
|
||||
|
||||
extracted_content = []
|
||||
if verbose:
|
||||
print(f"[LOG] 🔥 Extracting semantic blocks for {url}, Strategy: {extraction_strategy.name}")
|
||||
t = time.time()
|
||||
# Split markdown into sections
|
||||
sections = chunking_strategy.chunk(markdown)
|
||||
# sections = merge_chunks_based_on_token_threshold(sections, CHUNK_TOKEN_THRESHOLD)
|
||||
|
||||
extracted_content = extraction_strategy.run(
|
||||
url, sections,
|
||||
)
|
||||
extracted_content = json.dumps(extracted_content)
|
||||
|
||||
if verbose:
|
||||
print(
|
||||
f"[LOG] 🚀 Extraction done for {url}, time taken: {time.time() - t} seconds."
|
||||
)
|
||||
|
||||
# Cache the result
|
||||
cleaned_html = beautify_html(cleaned_html)
|
||||
cache_url(
|
||||
url,
|
||||
html,
|
||||
cleaned_html,
|
||||
markdown,
|
||||
extracted_content,
|
||||
success,
|
||||
)
|
||||
|
||||
return CrawlResult(
|
||||
url=url,
|
||||
html=html,
|
||||
cleaned_html=cleaned_html,
|
||||
markdown=markdown,
|
||||
extracted_content=extracted_content,
|
||||
success=success,
|
||||
error_message=error_message,
|
||||
)
|
||||
|
||||
def fetch_pages(
|
||||
self,
|
||||
url_models: List[UrlModel],
|
||||
@@ -182,6 +88,8 @@ class WebCrawler:
|
||||
extract_blocks_flag: bool = True,
|
||||
word_count_threshold=MIN_WORD_THRESHOLD,
|
||||
use_cached_html: bool = False,
|
||||
css_selector: str = None,
|
||||
screenshot: bool = False,
|
||||
extraction_strategy: ExtractionStrategy = None,
|
||||
chunking_strategy: ChunkingStrategy = RegexChunking(),
|
||||
**kwargs,
|
||||
@@ -199,6 +107,8 @@ class WebCrawler:
|
||||
[api_token] * len(url_models),
|
||||
[extract_blocks_flag] * len(url_models),
|
||||
[word_count_threshold] * len(url_models),
|
||||
[css_selector] * len(url_models),
|
||||
[screenshot] * len(url_models),
|
||||
[use_cached_html] * len(url_models),
|
||||
[extraction_strategy] * len(url_models),
|
||||
[chunking_strategy] * len(url_models),
|
||||
@@ -207,3 +117,145 @@ class WebCrawler:
|
||||
)
|
||||
|
||||
return results
|
||||
|
||||
def run(
|
||||
self,
|
||||
url: str,
|
||||
word_count_threshold=MIN_WORD_THRESHOLD,
|
||||
extraction_strategy: ExtractionStrategy = None,
|
||||
chunking_strategy: ChunkingStrategy = RegexChunking(),
|
||||
bypass_cache: bool = False,
|
||||
css_selector: str = None,
|
||||
screenshot: bool = False,
|
||||
user_agent: str = None,
|
||||
verbose=True,
|
||||
**kwargs,
|
||||
) -> CrawlResult:
|
||||
try:
|
||||
extraction_strategy = extraction_strategy or NoExtractionStrategy()
|
||||
extraction_strategy.verbose = verbose
|
||||
if not isinstance(extraction_strategy, ExtractionStrategy):
|
||||
raise ValueError("Unsupported extraction strategy")
|
||||
if not isinstance(chunking_strategy, ChunkingStrategy):
|
||||
raise ValueError("Unsupported chunking strategy")
|
||||
|
||||
# if word_count_threshold < MIN_WORD_THRESHOLD:
|
||||
# word_count_threshold = MIN_WORD_THRESHOLD
|
||||
|
||||
word_count_threshold = max(word_count_threshold, 0)
|
||||
|
||||
# Check cache first
|
||||
cached = None
|
||||
screenshot_data = None
|
||||
extracted_content = None
|
||||
if not bypass_cache and not self.always_by_pass_cache:
|
||||
cached = get_cached_url(url)
|
||||
|
||||
if kwargs.get("warmup", True) and not self.ready:
|
||||
return None
|
||||
|
||||
if cached:
|
||||
html = sanitize_input_encode(cached[1])
|
||||
extracted_content = sanitize_input_encode(cached[4])
|
||||
if screenshot:
|
||||
screenshot_data = cached[9]
|
||||
if not screenshot_data:
|
||||
cached = None
|
||||
|
||||
if not cached or not html:
|
||||
if user_agent:
|
||||
self.crawler_strategy.update_user_agent(user_agent)
|
||||
t1 = time.time()
|
||||
html = sanitize_input_encode(self.crawler_strategy.crawl(url, **kwargs))
|
||||
t2 = time.time()
|
||||
if verbose:
|
||||
print(f"[LOG] 🚀 Crawling done for {url}, success: {bool(html)}, time taken: {t2 - t1} seconds")
|
||||
if screenshot:
|
||||
screenshot_data = self.crawler_strategy.take_screenshot()
|
||||
|
||||
|
||||
crawl_result = self.process_html(url, html, extracted_content, word_count_threshold, extraction_strategy, chunking_strategy, css_selector, screenshot_data, verbose, bool(cached), **kwargs)
|
||||
crawl_result.success = bool(html)
|
||||
return crawl_result
|
||||
except Exception as e:
|
||||
if not hasattr(e, "msg"):
|
||||
e.msg = str(e)
|
||||
print(f"[ERROR] 🚫 Failed to crawl {url}, error: {e.msg}")
|
||||
return CrawlResult(url=url, html="", success=False, error_message=e.msg)
|
||||
|
||||
def process_html(
|
||||
self,
|
||||
url: str,
|
||||
html: str,
|
||||
extracted_content: str,
|
||||
word_count_threshold: int,
|
||||
extraction_strategy: ExtractionStrategy,
|
||||
chunking_strategy: ChunkingStrategy,
|
||||
css_selector: str,
|
||||
screenshot: bool,
|
||||
verbose: bool,
|
||||
is_cached: bool,
|
||||
**kwargs,
|
||||
) -> CrawlResult:
|
||||
t = time.time()
|
||||
# Extract content from HTML
|
||||
try:
|
||||
# t1 = time.time()
|
||||
# result = get_content_of_website(url, html, word_count_threshold, css_selector=css_selector, only_text=kwargs.get("only_text", False))
|
||||
# print(f"[LOG] 🚀 Crawling done for {url}, success: True, time taken: {time.time() - t1} seconds")
|
||||
t1 = time.time()
|
||||
result = get_content_of_website_optimized(url, html, word_count_threshold, css_selector=css_selector, only_text=kwargs.get("only_text", False))
|
||||
if verbose:
|
||||
print(f"[LOG] 🚀 Content extracted for {url}, success: True, time taken: {time.time() - t1} seconds")
|
||||
|
||||
if result is None:
|
||||
raise ValueError(f"Failed to extract content from the website: {url}")
|
||||
except InvalidCSSSelectorError as e:
|
||||
raise ValueError(str(e))
|
||||
|
||||
cleaned_html = sanitize_input_encode(result.get("cleaned_html", ""))
|
||||
markdown = sanitize_input_encode(result.get("markdown", ""))
|
||||
media = result.get("media", [])
|
||||
links = result.get("links", [])
|
||||
metadata = result.get("metadata", {})
|
||||
|
||||
if extracted_content is None:
|
||||
if verbose:
|
||||
print(f"[LOG] 🔥 Extracting semantic blocks for {url}, Strategy: {extraction_strategy.name}")
|
||||
|
||||
sections = chunking_strategy.chunk(markdown)
|
||||
extracted_content = extraction_strategy.run(url, sections)
|
||||
extracted_content = json.dumps(extracted_content, indent=4, default=str)
|
||||
|
||||
if verbose:
|
||||
print(f"[LOG] 🚀 Extraction done for {url}, time taken: {time.time() - t} seconds.")
|
||||
|
||||
screenshot = None if not screenshot else screenshot
|
||||
|
||||
if not is_cached:
|
||||
cache_url(
|
||||
url,
|
||||
html,
|
||||
cleaned_html,
|
||||
markdown,
|
||||
extracted_content,
|
||||
True,
|
||||
json.dumps(media),
|
||||
json.dumps(links),
|
||||
json.dumps(metadata),
|
||||
screenshot=screenshot,
|
||||
)
|
||||
|
||||
return CrawlResult(
|
||||
url=url,
|
||||
html=html,
|
||||
cleaned_html=format_html(cleaned_html),
|
||||
markdown=markdown,
|
||||
media=media,
|
||||
links=links,
|
||||
metadata=metadata,
|
||||
screenshot=screenshot,
|
||||
extracted_content=extracted_content,
|
||||
success=True,
|
||||
error_message="",
|
||||
)
|
||||
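A hedged usage sketch of the updated run() signature, using parameters that appear in the diff above (only_text, screenshot, js, bypass_cache); the URL and the JS snippet are placeholders.

# Hedged usage sketch of the updated run(); URL and JS snippet are placeholders.
import base64
from crawl4ai.web_crawler import WebCrawler

crawler = WebCrawler(verbose=True)
crawler.warmup()
result = crawler.run(
    url="https://www.nbcnews.com/business",
    word_count_threshold=5,
    only_text=True,       # keep only the text of inline tags during cleaning
    screenshot=True,      # capture a base64-encoded screenshot
    js="window.scrollTo(0, document.body.scrollHeight);",
    bypass_cache=True,
)
if result.success and result.screenshot:
    with open("screenshot.png", "wb") as f:
        f.write(base64.b64decode(result.screenshot))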
BIN docs/.DS_Store (vendored) Normal file
BIN docs/examples/assets/audio.mp3 Normal file
BIN docs/examples/assets/basic.png Normal file (372 KiB)
BIN docs/examples/assets/cosine_extraction.png Normal file (403 KiB)
BIN docs/examples/assets/css_js.png Normal file (537 KiB)
BIN docs/examples/assets/css_selector.png Normal file (375 KiB)
BIN docs/examples/assets/exec_script.png Normal file (469 KiB)
BIN docs/examples/assets/llm_extraction.png Normal file (477 KiB)
BIN docs/examples/assets/semantic_extraction_cosine.png Normal file (419 KiB)
BIN docs/examples/assets/semantic_extraction_llm.png Normal file (485 KiB)
3
docs/examples/chainlit.md
Normal file
@@ -0,0 +1,3 @@
|
||||
# Welcome to Crawl4AI! 🚀🤖
|
||||
|
||||
Hi there, Developer! 👋 Here is an example of a research pipeline: share a URL in your conversation with any LLM, and the content of the crawled pages will be used as context.
|
||||
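A condensed sketch of the pipeline described above (extract URLs from a message, crawl them, then use the crawled Markdown as LLM context); the endpoint and the message are placeholders, and the complete working version is docs/examples/research_assistant.py further down.

# Condensed sketch of the research pipeline; endpoint and message are placeholders.
import re
import requests

def urls_in(text):
    return re.findall(r'(https?://\S+)', text)

user_message = "Summarize https://example.com for me"
context = []
for u in urls_in(user_message):
    resp = requests.post("https://crawl4ai.com/crawl", json={"urls": [u]})
    context.append(resp.json()["results"][0]["markdown"])

prompt = "\n\n".join(context) + "\n\nQuestion: " + user_message
# 'prompt' is then passed to whichever LLM backs the chat application.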
41
docs/examples/llm_extraction_openai_pricing.py
Normal file
@@ -0,0 +1,41 @@
|
||||
import os
|
||||
import time
|
||||
from crawl4ai.web_crawler import WebCrawler
|
||||
from crawl4ai.chunking_strategy import *
|
||||
from crawl4ai.extraction_strategy import *
|
||||
from crawl4ai.crawler_strategy import *
|
||||
|
||||
url = r'https://openai.com/api/pricing/'
|
||||
|
||||
crawler = WebCrawler()
|
||||
crawler.warmup()
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
class OpenAIModelFee(BaseModel):
|
||||
model_name: str = Field(..., description="Name of the OpenAI model.")
|
||||
input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
|
||||
output_fee: str = Field(..., description="Fee for output token for the OpenAI model.")
|
||||
|
||||
result = crawler.run(
|
||||
url=url,
|
||||
word_count_threshold=1,
|
||||
extraction_strategy= LLMExtractionStrategy(
|
||||
# provider= "openai/gpt-4o", api_token = os.getenv('OPENAI_API_KEY'),
|
||||
provider= "groq/llama-3.1-70b-versatile", api_token = os.getenv('GROQ_API_KEY'),
|
||||
schema=OpenAIModelFee.model_json_schema(),
|
||||
extraction_type="schema",
|
||||
instruction="From the crawled content, extract all mentioned model names along with their "\
|
||||
"fees for input and output tokens. Make sure not to miss anything in the entire content. "\
|
||||
'One extracted model JSON format should look like this: '\
|
||||
'{ "model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens" }'
|
||||
),
|
||||
bypass_cache=True,
|
||||
)
|
||||
|
||||
model_fees = json.loads(result.extracted_content)
|
||||
|
||||
print(len(model_fees))
|
||||
|
||||
with open(".data/data.json", "w", encoding="utf-8") as f:
|
||||
f.write(result.extracted_content)
|
||||
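An optional follow-up sketch: validate each extracted item against the OpenAIModelFee schema defined above; this assumes a Pydantic version that provides model_json_schema() (v2) and simply skips malformed items.

# Follow-up sketch: validate extracted items against the OpenAIModelFee schema above.
validated = []
for item in model_fees:
    try:
        validated.append(OpenAIModelFee(**item))
    except Exception as err:
        print("Skipping malformed item:", err)
print(f"{len(validated)} of {len(model_fees)} items passed validation")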
@@ -12,7 +12,7 @@ console = Console()
|
||||
|
||||
@lru_cache()
|
||||
def create_crawler():
|
||||
crawler = WebCrawler()
|
||||
crawler = WebCrawler(verbose=True)
|
||||
crawler.warmup()
|
||||
return crawler
|
||||
|
||||
@@ -35,10 +35,26 @@ def cprint(message, press_any_key=False):
|
||||
|
||||
def basic_usage(crawler):
|
||||
cprint("🛠️ [bold cyan]Basic Usage: Simply provide a URL and let Crawl4ai do the magic![/bold cyan]")
|
||||
result = crawler.run(url="https://www.nbcnews.com/business")
|
||||
result = crawler.run(url="https://www.nbcnews.com/business", only_text = True)
|
||||
cprint("[LOG] 📦 [bold yellow]Basic crawl result:[/bold yellow]")
|
||||
print_result(result)
|
||||
|
||||
def basic_usage_some_params(crawler):
|
||||
cprint("🛠️ [bold cyan]Basic Usage: Simply provide a URL and let Crawl4ai do the magic![/bold cyan]")
|
||||
result = crawler.run(url="https://www.nbcnews.com/business", word_count_threshold=1, only_text = True)
|
||||
cprint("[LOG] 📦 [bold yellow]Basic crawl result:[/bold yellow]")
|
||||
print_result(result)
|
||||
|
||||
def screenshot_usage(crawler):
|
||||
cprint("\n📸 [bold cyan]Let's take a screenshot of the page![/bold cyan]")
|
||||
result = crawler.run(url="https://www.nbcnews.com/business", screenshot=True)
|
||||
cprint("[LOG] 📦 [bold yellow]Screenshot result:[/bold yellow]")
|
||||
# Save the screenshot to a file
|
||||
with open("screenshot.png", "wb") as f:
|
||||
f.write(base64.b64decode(result.screenshot))
|
||||
cprint("Screenshot saved to 'screenshot.png'!")
|
||||
print_result(result)
|
||||
|
||||
def understanding_parameters(crawler):
|
||||
cprint("\n🧠 [bold cyan]Understanding 'bypass_cache' and 'include_raw_html' parameters:[/bold cyan]")
|
||||
cprint("By default, Crawl4ai caches the results of your crawls. This means that subsequent crawls of the same URL will be much faster! Let's see this in action.")
|
||||
@@ -86,7 +102,7 @@ def add_extraction_strategy(crawler):
|
||||
cprint("CosineStrategy uses cosine similarity to extract semantically similar blocks of text. Let's see it in action!")
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
extraction_strategy=CosineStrategy(word_count_threshold=10, max_dist=0.2, linkage_method="ward", top_k=3)
|
||||
extraction_strategy=CosineStrategy(word_count_threshold=10, max_dist=0.2, linkage_method="ward", top_k=3, sim_threshold = 0.3, verbose=True)
|
||||
)
|
||||
cprint("[LOG] 📦 [bold yellow]CosineStrategy result:[/bold yellow]")
|
||||
print_result(result)
|
||||
@@ -156,14 +172,118 @@ def interactive_extraction(crawler):
|
||||
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
|
||||
loadMoreButton && loadMoreButton.click();
|
||||
"""
|
||||
crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
|
||||
crawler = WebCrawler(crawler_strategy=crawler_strategy, always_by_pass_cache=True)
|
||||
# crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
|
||||
# crawler = WebCrawler(crawler_strategy=crawler_strategy, always_by_pass_cache=True)
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
js = js_code
|
||||
)
|
||||
cprint("[LOG] 📦 [bold yellow]JavaScript Code (Load More button) result:[/bold yellow]")
|
||||
print_result(result)
|
||||
|
||||
def multiple_scrip(crawler):
|
||||
# Passing JavaScript code to interact with the page
|
||||
cprint("\n🖱️ [bold cyan]Let's get interactive: Passing JavaScript code to click 'Load More' button![/bold cyan]", True)
|
||||
cprint("In this example we try to click the 'Load More' button on the page using JavaScript code.")
|
||||
js_code = ["""
|
||||
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
|
||||
loadMoreButton && loadMoreButton.click();
|
||||
"""] * 2
|
||||
# crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
|
||||
# crawler = WebCrawler(crawler_strategy=crawler_strategy, always_by_pass_cache=True)
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
js = js_code
|
||||
)
|
||||
cprint("[LOG] 📦 [bold yellow]JavaScript Code (Load More button) result:[/bold yellow]")
|
||||
print_result(result)
|
||||
|
||||
def using_crawler_hooks(crawler):
|
||||
# Example usage of the hooks for authentication and setting a cookie
|
||||
def on_driver_created(driver):
|
||||
print("[HOOK] on_driver_created")
|
||||
# Example customization: maximize the window
|
||||
driver.maximize_window()
|
||||
|
||||
# Example customization: logging in to a hypothetical website
|
||||
driver.get('https://example.com/login')
|
||||
|
||||
from selenium.webdriver.support.ui import WebDriverWait
|
||||
from selenium.webdriver.common.by import By
|
||||
from selenium.webdriver.support import expected_conditions as EC
|
||||
|
||||
WebDriverWait(driver, 10).until(
|
||||
EC.presence_of_element_located((By.NAME, 'username'))
|
||||
)
|
||||
driver.find_element(By.NAME, 'username').send_keys('testuser')
|
||||
driver.find_element(By.NAME, 'password').send_keys('password123')
|
||||
driver.find_element(By.NAME, 'login').click()
|
||||
WebDriverWait(driver, 10).until(
|
||||
EC.presence_of_element_located((By.ID, 'welcome'))
|
||||
)
|
||||
# Add a custom cookie
|
||||
driver.add_cookie({'name': 'test_cookie', 'value': 'cookie_value'})
|
||||
return driver
|
||||
|
||||
|
||||
def before_get_url(driver):
|
||||
print("[HOOK] before_get_url")
|
||||
# Example customization: add a custom header
|
||||
# Enable Network domain for sending headers
|
||||
driver.execute_cdp_cmd('Network.enable', {})
|
||||
# Add a custom header
|
||||
driver.execute_cdp_cmd('Network.setExtraHTTPHeaders', {'headers': {'X-Test-Header': 'test'}})
|
||||
return driver
|
||||
|
||||
def after_get_url(driver):
|
||||
print("[HOOK] after_get_url")
|
||||
# Example customization: log the URL
|
||||
print(driver.current_url)
|
||||
return driver
|
||||
|
||||
def before_return_html(driver, html):
|
||||
print("[HOOK] before_return_html")
|
||||
# Example customization: log the HTML
|
||||
print(len(html))
|
||||
return driver
|
||||
|
||||
cprint("\n🔗 [bold cyan]Using Crawler Hooks: Let's see how we can customize the crawler using hooks![/bold cyan]", True)
|
||||
|
||||
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
|
||||
crawler_strategy.set_hook('on_driver_created', on_driver_created)
|
||||
crawler_strategy.set_hook('before_get_url', before_get_url)
|
||||
crawler_strategy.set_hook('after_get_url', after_get_url)
|
||||
crawler_strategy.set_hook('before_return_html', before_return_html)
|
||||
|
||||
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
|
||||
crawler.warmup()
|
||||
result = crawler.run(url="https://example.com")
|
||||
|
||||
cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
|
||||
print_result(result= result)
|
||||
|
||||
def using_crawler_hooks_dleay_example(crawler):
|
||||
def delay(driver):
|
||||
print("Delaying for 5 seconds...")
|
||||
time.sleep(5)
|
||||
print("Resuming...")
|
||||
|
||||
def create_crawler():
|
||||
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
|
||||
crawler_strategy.set_hook('after_get_url', delay)
|
||||
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
|
||||
crawler.warmup()
|
||||
return crawler
|
||||
|
||||
cprint("\n🔗 [bold cyan]Using Crawler Hooks: Let's add a delay after fetching the url to make sure entire page is fetched.[/bold cyan]")
|
||||
crawler = create_crawler()
|
||||
result = crawler.run(url="https://google.com", bypass_cache=True)
|
||||
|
||||
cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
|
||||
print_result(result)
|
||||
|
||||
|
||||
|
||||
def main():
|
||||
cprint("🌟 [bold green]Welcome to the Crawl4ai Quickstart Guide! Let's dive into some web crawling fun! 🌐[/bold green]")
|
||||
cprint("⛳️ [bold cyan]First Step: Create an instance of WebCrawler and call the `warmup()` function.[/bold cyan]")
|
||||
@@ -171,15 +291,19 @@ def main():
|
||||
|
||||
crawler = create_crawler()
|
||||
|
||||
crawler.always_by_pass_cache = True
|
||||
basic_usage(crawler)
|
||||
# basic_usage_some_params(crawler)
|
||||
understanding_parameters(crawler)
|
||||
|
||||
crawler.always_by_pass_cache = True
|
||||
screenshot_usage(crawler)
|
||||
add_chunking_strategy(crawler)
|
||||
add_extraction_strategy(crawler)
|
||||
add_llm_extraction_strategy(crawler)
|
||||
targeted_extraction(crawler)
|
||||
interactive_extraction(crawler)
|
||||
multiple_scrip(crawler)
|
||||
|
||||
cprint("\n🎉 [bold green]Congratulations! You've made it through the Crawl4ai Quickstart Guide! Now go forth and crawl the web like a pro! 🕸️[/bold green]")
|
||||
|
||||
|
||||
195
docs/examples/research_assistant.py
Normal file
@@ -0,0 +1,195 @@
|
||||
# Make sure to install the required packages: chainlit and groq
|
||||
import os, time
|
||||
from openai import AsyncOpenAI
|
||||
import chainlit as cl
|
||||
import re
|
||||
import requests
|
||||
from io import BytesIO
|
||||
from chainlit.element import ElementBased
|
||||
from groq import Groq
|
||||
|
||||
# Import threadpools to run the crawl_url function in a separate thread
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
|
||||
client = AsyncOpenAI(base_url="https://api.groq.com/openai/v1", api_key=os.getenv("GROQ_API_KEY"))
|
||||
|
||||
# Instrument the OpenAI client
|
||||
cl.instrument_openai()
|
||||
|
||||
settings = {
|
||||
"model": "llama3-8b-8192",
|
||||
"temperature": 0.5,
|
||||
"max_tokens": 500,
|
||||
"top_p": 1,
|
||||
"frequency_penalty": 0,
|
||||
"presence_penalty": 0,
|
||||
}
|
||||
|
||||
def extract_urls(text):
|
||||
url_pattern = re.compile(r'(https?://\S+)')
|
||||
return url_pattern.findall(text)
|
||||
|
||||
def crawl_url(url):
|
||||
data = {
|
||||
"urls": [url],
|
||||
"include_raw_html": True,
|
||||
"word_count_threshold": 10,
|
||||
"extraction_strategy": "NoExtractionStrategy",
|
||||
"chunking_strategy": "RegexChunking"
|
||||
}
|
||||
response = requests.post("https://crawl4ai.com/crawl", json=data)
|
||||
response_data = response.json()
|
||||
response_data = response_data['results'][0]
|
||||
return response_data['markdown']
|
||||
|
||||
@cl.on_chat_start
|
||||
async def on_chat_start():
|
||||
cl.user_session.set("session", {
|
||||
"history": [],
|
||||
"context": {}
|
||||
})
|
||||
await cl.Message(
|
||||
content="Welcome to the chat! How can I assist you today?"
|
||||
).send()
|
||||
|
||||
@cl.on_message
|
||||
async def on_message(message: cl.Message):
|
||||
user_session = cl.user_session.get("session")
|
||||
|
||||
# Extract URLs from the user's message
|
||||
urls = extract_urls(message.content)
|
||||
|
||||
|
||||
futures = []
|
||||
with ThreadPoolExecutor() as executor:
|
||||
for url in urls:
|
||||
futures.append(executor.submit(crawl_url, url))
|
||||
|
||||
results = [future.result() for future in futures]
|
||||
|
||||
for url, result in zip(urls, results):
|
||||
ref_number = f"REF_{len(user_session['context']) + 1}"
|
||||
user_session["context"][ref_number] = {
|
||||
"url": url,
|
||||
"content": result
|
||||
}
|
||||
|
||||
|
||||
user_session["history"].append({
|
||||
"role": "user",
|
||||
"content": message.content
|
||||
})
|
||||
|
||||
# Create a system message that includes the context
|
||||
context_messages = [
|
||||
f'<appendix ref="{ref}">\n{data["content"]}\n</appendix>'
|
||||
for ref, data in user_session["context"].items()
|
||||
]
|
||||
if context_messages:
|
||||
system_message = {
|
||||
"role": "system",
|
||||
"content": (
|
||||
"You are a helpful bot. Use the following context for answering questions. "
|
||||
"Refer to the sources using the REF number in square brackets, e.g., [1], only if the source is given in the appendices below.\n\n"
|
||||
"If the question requires any information from the provided appendices or context, refer to the sources. "
|
||||
"If not, there is no need to add a references section. "
|
||||
"At the end of your response, provide a reference section listing the URLs and their REF numbers only if sources from the appendices were used.\n\n"
|
||||
"\n\n".join(context_messages)
|
||||
)
|
||||
}
|
||||
else:
|
||||
system_message = {
|
||||
"role": "system",
|
||||
"content": "You are a helpful assistant."
|
||||
}
|
||||
|
||||
|
||||
msg = cl.Message(content="")
|
||||
await msg.send()
|
||||
|
||||
# Get response from the LLM
|
||||
stream = await client.chat.completions.create(
|
||||
messages=[
|
||||
system_message,
|
||||
*user_session["history"]
|
||||
],
|
||||
stream=True,
|
||||
**settings
|
||||
)
|
||||
|
||||
assistant_response = ""
|
||||
async for part in stream:
|
||||
if token := part.choices[0].delta.content:
|
||||
assistant_response += token
|
||||
await msg.stream_token(token)
|
||||
|
||||
# Add assistant message to the history
|
||||
user_session["history"].append({
|
||||
"role": "assistant",
|
||||
"content": assistant_response
|
||||
})
|
||||
await msg.update()
|
||||
|
||||
# Append the reference section to the assistant's response
|
||||
reference_section = "\n\nReferences:\n"
|
||||
for ref, data in user_session["context"].items():
|
||||
reference_section += f"[{ref.split('_')[1]}]: {data['url']}\n"
|
||||
|
||||
msg.content += reference_section
|
||||
await msg.update()
|
||||
|
||||
|
||||
@cl.on_audio_chunk
|
||||
async def on_audio_chunk(chunk: cl.AudioChunk):
|
||||
if chunk.isStart:
|
||||
buffer = BytesIO()
|
||||
# This is required for whisper to recognize the file type
|
||||
buffer.name = f"input_audio.{chunk.mimeType.split('/')[1]}"
|
||||
# Initialize the session for a new audio stream
|
||||
cl.user_session.set("audio_buffer", buffer)
|
||||
cl.user_session.set("audio_mime_type", chunk.mimeType)
|
||||
|
||||
# Write the chunks to a buffer and transcribe the whole audio at the end
|
||||
cl.user_session.get("audio_buffer").write(chunk.data)
|
||||
|
||||
pass
|
||||
|
||||
@cl.step(type="tool")
|
||||
async def speech_to_text(audio_file):
|
||||
cli = Groq()
|
||||
|
||||
response = await client.audio.transcriptions.create(
|
||||
model="whisper-large-v3", file=audio_file
|
||||
)
|
||||
|
||||
return response.text
|
||||
|
||||
|
||||
@cl.on_audio_end
|
||||
async def on_audio_end(elements: list[ElementBased]):
|
||||
# Get the audio buffer from the session
|
||||
audio_buffer: BytesIO = cl.user_session.get("audio_buffer")
|
||||
audio_buffer.seek(0) # Move the file pointer to the beginning
|
||||
audio_file = audio_buffer.read()
|
||||
audio_mime_type: str = cl.user_session.get("audio_mime_type")
|
||||
|
||||
start_time = time.time()
|
||||
whisper_input = (audio_buffer.name, audio_file, audio_mime_type)
|
||||
transcription = await speech_to_text(whisper_input)
|
||||
end_time = time.time()
|
||||
print(f"Transcription took {end_time - start_time} seconds")
|
||||
|
||||
user_msg = cl.Message(
|
||||
author="You",
|
||||
type="user_message",
|
||||
content=transcription
|
||||
)
|
||||
await user_msg.send()
|
||||
await on_message(user_msg)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
from chainlit.cli import run_chainlit
|
||||
run_chainlit(__file__)
|
||||
|
||||
|
||||
64
docs/examples/rest_call.py
Normal file
@@ -0,0 +1,64 @@
|
||||
|
||||
import requests, base64, os
|
||||
|
||||
data = {
|
||||
"urls": ["https://www.nbcnews.com/business"],
|
||||
"screenshot": True,
|
||||
}
|
||||
|
||||
response = requests.post("https://crawl4ai.com/crawl", json=data)
|
||||
result = response.json()['results'][0]
|
||||
print(result.keys())
|
||||
# dict_keys(['url', 'html', 'success', 'cleaned_html', 'media',
|
||||
# 'links', 'screenshot', 'markdown', 'extracted_content',
|
||||
# 'metadata', 'error_message'])
|
||||
with open("screenshot.png", "wb") as f:
|
||||
f.write(base64.b64decode(result['screenshot']))
|
||||
|
||||
# Example of filtering the content using CSS selectors
|
||||
data = {
|
||||
"urls": [
|
||||
"https://www.nbcnews.com/business"
|
||||
],
|
||||
"css_selector": "article",
|
||||
"screenshot": True,
|
||||
}
|
||||
|
||||
# Example of executing a JS script on the page before extracting the content
|
||||
data = {
|
||||
"urls": [
|
||||
"https://www.nbcnews.com/business"
|
||||
],
|
||||
"screenshot": True,
|
||||
'js' : ["""
|
||||
const loadMoreButton = Array.from(document.querySelectorAll('button')).
|
||||
find(button => button.textContent.includes('Load More'));
|
||||
loadMoreButton && loadMoreButton.click();
|
||||
"""]
|
||||
}
|
||||
|
||||
# Example of using a custom extraction strategy
|
||||
data = {
|
||||
"urls": [
|
||||
"https://www.nbcnews.com/business"
|
||||
],
|
||||
"extraction_strategy": "CosineStrategy",
|
||||
"extraction_strategy_args": {
|
||||
"semantic_filter": "inflation rent prices"
|
||||
},
|
||||
}
|
||||
|
||||
# Example of using LLM to extract content
|
||||
data = {
|
||||
"urls": [
|
||||
"https://www.nbcnews.com/business"
|
||||
],
|
||||
"extraction_strategy": "LLMExtractionStrategy",
|
||||
"extraction_strategy_args": {
|
||||
"provider": "groq/llama3-8b-8192",
|
||||
"api_token": os.environ.get("GROQ_API_KEY"),
|
||||
"instruction": """I am interested in only financial news,
|
||||
and translate them in French."""
|
||||
},
|
||||
}
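
# Note: each of the example payloads above can be sent exactly like the first request,
# e.g. response = requests.post("https://crawl4ai.com/crawl", json=data);
# only the first payload is actually posted in this script.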
|
||||
|
||||
46
docs/examples/summarize_page.py
Normal file
@@ -0,0 +1,46 @@
|
||||
import os
|
||||
import time
|
||||
import json
|
||||
from crawl4ai.web_crawler import WebCrawler
|
||||
from crawl4ai.chunking_strategy import *
|
||||
from crawl4ai.extraction_strategy import *
|
||||
from crawl4ai.crawler_strategy import *
|
||||
|
||||
url = r'https://marketplace.visualstudio.com/items?itemName=Unclecode.groqopilot'
|
||||
|
||||
crawler = WebCrawler()
|
||||
crawler.warmup()
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
class PageSummary(BaseModel):
|
||||
title: str = Field(..., description="Title of the page.")
|
||||
summary: str = Field(..., description="Summary of the page.")
|
||||
brief_summary: str = Field(..., description="Brief summary of the page.")
|
||||
keywords: list = Field(..., description="Keywords assigned to the page.")
|
||||
|
||||
result = crawler.run(
|
||||
url=url,
|
||||
word_count_threshold=1,
|
||||
extraction_strategy=LLMExtractionStrategy(
provider="openai/gpt-4o", api_token=os.getenv('OPENAI_API_KEY'),
schema=PageSummary.model_json_schema(),
extraction_type="schema",
apply_chunking=False,
|
||||
instruction="From the crawled content, extract the following details: "\
|
||||
"1. Title of the page "\
|
||||
"2. Summary of the page, which is a detailed summary "\
|
||||
"3. Brief summary of the page, which is a paragraph text "\
|
||||
"4. Keywords assigned to the page, which is a list of keywords. "\
|
||||
'The extracted JSON format should look like this: '\
|
||||
'{ "title": "Page Title", "summary": "Detailed summary of the page.", "brief_summary": "Brief summary in a paragraph.", "keywords": ["keyword1", "keyword2", "keyword3"] }'
|
||||
),
|
||||
bypass_cache=True,
|
||||
)
|
||||
|
||||
page_summary = json.loads(result.extracted_content)
|
||||
|
||||
print(page_summary)
|
||||
|
||||
with open(".data/page_summary.json", "w", encoding="utf-8") as f:
|
||||
f.write(result.extracted_content)
|
||||
281
docs/examples/tmp/chainlit_review.py
Normal file
@@ -0,0 +1,281 @@
|
||||
from openai import AsyncOpenAI
|
||||
from chainlit.types import ThreadDict
|
||||
import chainlit as cl
|
||||
from chainlit.input_widget import Select, Switch, Slider
|
||||
client = AsyncOpenAI()
|
||||
|
||||
# Instrument the OpenAI client
|
||||
cl.instrument_openai()
|
||||
|
||||
settings = {
|
||||
"model": "gpt-3.5-turbo",
|
||||
"temperature": 0.5,
|
||||
"max_tokens": 500,
|
||||
"top_p": 1,
|
||||
"frequency_penalty": 0,
|
||||
"presence_penalty": 0,
|
||||
}
|
||||
|
||||
@cl.action_callback("action_button")
|
||||
async def on_action(action: cl.Action):
|
||||
print("The user clicked on the action button!")
|
||||
|
||||
return "Thank you for clicking on the action button!"
|
||||
|
||||
@cl.set_chat_profiles
|
||||
async def chat_profile():
|
||||
return [
|
||||
cl.ChatProfile(
|
||||
name="GPT-3.5",
|
||||
markdown_description="The underlying LLM model is **GPT-3.5**.",
|
||||
icon="https://picsum.photos/200",
|
||||
),
|
||||
cl.ChatProfile(
|
||||
name="GPT-4",
|
||||
markdown_description="The underlying LLM model is **GPT-4**.",
|
||||
icon="https://picsum.photos/250",
|
||||
),
|
||||
]
|
||||
|
||||
@cl.on_chat_start
|
||||
async def on_chat_start():
|
||||
|
||||
settings = await cl.ChatSettings(
|
||||
[
|
||||
Select(
|
||||
id="Model",
|
||||
label="OpenAI - Model",
|
||||
values=["gpt-3.5-turbo", "gpt-3.5-turbo-16k", "gpt-4", "gpt-4-32k"],
|
||||
initial_index=0,
|
||||
),
|
||||
Switch(id="Streaming", label="OpenAI - Stream Tokens", initial=True),
|
||||
Slider(
|
||||
id="Temperature",
|
||||
label="OpenAI - Temperature",
|
||||
initial=1,
|
||||
min=0,
|
||||
max=2,
|
||||
step=0.1,
|
||||
),
|
||||
Slider(
|
||||
id="SAI_Steps",
|
||||
label="Stability AI - Steps",
|
||||
initial=30,
|
||||
min=10,
|
||||
max=150,
|
||||
step=1,
|
||||
description="Amount of inference steps performed on image generation.",
|
||||
),
|
||||
Slider(
|
||||
id="SAI_Cfg_Scale",
|
||||
label="Stability AI - Cfg_Scale",
|
||||
initial=7,
|
||||
min=1,
|
||||
max=35,
|
||||
step=0.1,
|
||||
description="Influences how strongly your generation is guided to match your prompt.",
|
||||
),
|
||||
Slider(
|
||||
id="SAI_Width",
|
||||
label="Stability AI - Image Width",
|
||||
initial=512,
|
||||
min=256,
|
||||
max=2048,
|
||||
step=64,
|
||||
tooltip="Measured in pixels",
|
||||
),
|
||||
Slider(
|
||||
id="SAI_Height",
|
||||
label="Stability AI - Image Height",
|
||||
initial=512,
|
||||
min=256,
|
||||
max=2048,
|
||||
step=64,
|
||||
tooltip="Measured in pixels",
|
||||
),
|
||||
]
|
||||
).send()
|
||||
|
||||
chat_profile = cl.user_session.get("chat_profile")
|
||||
await cl.Message(
|
||||
content=f"starting chat using the {chat_profile} chat profile"
|
||||
).send()
|
||||
|
||||
print("A new chat session has started!")
|
||||
cl.user_session.set("session", {
|
||||
"history": [],
|
||||
"context": []
|
||||
})
|
||||
|
||||
image = cl.Image(url="https://c.tenor.com/uzWDSSLMCmkAAAAd/tenor.gif", name="cat image", display="inline")
|
||||
|
||||
# Attach the image to the message
|
||||
await cl.Message(
|
||||
content="You are such a good girl, aren't you?!",
|
||||
elements=[image],
|
||||
).send()
|
||||
|
||||
text_content = "Hello, this is a text element."
|
||||
elements = [
|
||||
cl.Text(name="simple_text", content=text_content, display="inline")
|
||||
]
|
||||
|
||||
await cl.Message(
|
||||
content="Check out this text element!",
|
||||
elements=elements,
|
||||
).send()
|
||||
|
||||
elements = [
|
||||
cl.Audio(path="./assets/audio.mp3", display="inline"),
|
||||
]
|
||||
await cl.Message(
|
||||
content="Here is an audio file",
|
||||
elements=elements,
|
||||
).send()
|
||||
|
||||
await cl.Avatar(
|
||||
name="Tool 1",
|
||||
url="https://avatars.githubusercontent.com/u/128686189?s=400&u=a1d1553023f8ea0921fba0debbe92a8c5f840dd9&v=4",
|
||||
).send()
|
||||
|
||||
await cl.Message(
|
||||
content="This message should not have an avatar!", author="Tool 0"
|
||||
).send()
|
||||
|
||||
await cl.Message(
|
||||
content="This message should have an avatar!", author="Tool 1"
|
||||
).send()
|
||||
|
||||
elements = [
|
||||
cl.File(
|
||||
name="quickstart.py",
|
||||
path="./quickstart.py",
|
||||
display="inline",
|
||||
),
|
||||
]
|
||||
|
||||
await cl.Message(
|
||||
content="This message has a file element", elements=elements
|
||||
).send()
|
||||
|
||||
# Sending an action button within a chatbot message
|
||||
actions = [
|
||||
cl.Action(name="action_button", value="example_value", description="Click me!")
|
||||
]
|
||||
|
||||
await cl.Message(content="Interact with this action button:", actions=actions).send()
|
||||
|
||||
# res = await cl.AskActionMessage(
|
||||
# content="Pick an action!",
|
||||
# actions=[
|
||||
# cl.Action(name="continue", value="continue", label="✅ Continue"),
|
||||
# cl.Action(name="cancel", value="cancel", label="❌ Cancel"),
|
||||
# ],
|
||||
# ).send()
|
||||
|
||||
# if res and res.get("value") == "continue":
|
||||
# await cl.Message(
|
||||
# content="Continue!",
|
||||
# ).send()
|
||||
|
||||
# import plotly.graph_objects as go
|
||||
# fig = go.Figure(
|
||||
# data=[go.Bar(y=[2, 1, 3])],
|
||||
# layout_title_text="An example figure",
|
||||
# )
|
||||
# elements = [cl.Plotly(name="chart", figure=fig, display="inline")]
|
||||
|
||||
# await cl.Message(content="This message has a chart", elements=elements).send()
|
||||
|
||||
# Sending a pdf with the local file path
|
||||
# elements = [
|
||||
# cl.Pdf(name="pdf1", display="inline", path="./pdf1.pdf")
|
||||
# ]
|
||||
|
||||
# cl.Message(content="Look at this local pdf!", elements=elements).send()
|
||||
|
||||
@cl.on_settings_update
|
||||
async def setup_agent(settings):
|
||||
print("on_settings_update", settings)
|
||||
|
||||
@cl.on_stop
|
||||
def on_stop():
|
||||
print("The user wants to stop the task!")
|
||||
|
||||
@cl.on_chat_end
|
||||
def on_chat_end():
|
||||
print("The user disconnected!")
|
||||
|
||||
|
||||
@cl.on_chat_resume
|
||||
async def on_chat_resume(thread: ThreadDict):
|
||||
print("The user resumed a previous chat session!")
|
||||
|
||||
|
||||
|
||||
|
||||
# @cl.on_message
|
||||
async def on_message(message: cl.Message):
|
||||
cl.user_session.get("session")["history"].append({
|
||||
"role": "user",
|
||||
"content": message.content
|
||||
})
|
||||
response = await client.chat.completions.create(
|
||||
messages=[
|
||||
{
|
||||
"content": "You are a helpful bot",
|
||||
"role": "system"
|
||||
},
|
||||
*cl.user_session.get("session")["history"]
|
||||
],
|
||||
**settings
|
||||
)
|
||||
|
||||
|
||||
# Add assistant message to the history
|
||||
cl.user_session.get("session")["history"].append({
|
||||
"role": "assistant",
|
||||
"content": response.choices[0].message.content
|
||||
})
|
||||
|
||||
# msg.content = response.choices[0].message.content
|
||||
# await msg.update()
|
||||
|
||||
# await cl.Message(content=response.choices[0].message.content).send()
|
||||
|
||||
@cl.on_message
|
||||
async def on_message(message: cl.Message):
|
||||
cl.user_session.get("session")["history"].append({
|
||||
"role": "user",
|
||||
"content": message.content
|
||||
})
|
||||
|
||||
msg = cl.Message(content="")
|
||||
await msg.send()
|
||||
|
||||
stream = await client.chat.completions.create(
|
||||
messages=[
|
||||
{
|
||||
"content": "You are a helpful bot",
|
||||
"role": "system"
|
||||
},
|
||||
*cl.user_session.get("session")["history"]
|
||||
],
|
||||
stream=True,
|
||||
**settings
|
||||
)
|
||||
|
||||
async for part in stream:
|
||||
if token := part.choices[0].delta.content or "":
|
||||
await msg.stream_token(token)
|
||||
|
||||
# Add assistant message to the history
|
||||
cl.user_session.get("session")["history"].append({
|
||||
"role": "assistant",
|
||||
"content": msg.content
|
||||
})
|
||||
await msg.update()
|
||||
|
||||
if __name__ == "__main__":
|
||||
from chainlit.cli import run_chainlit
|
||||
run_chainlit(__file__)
|
||||
238
docs/examples/tmp/research_assistant_audio_not_completed.py
Normal file
@@ -0,0 +1,238 @@
|
||||
# Make sure to install the required packages: chainlit and groq
|
||||
import os, time
|
||||
from openai import AsyncOpenAI
|
||||
import chainlit as cl
|
||||
import re
|
||||
import requests
|
||||
from io import BytesIO
|
||||
from chainlit.element import ElementBased
|
||||
from groq import Groq
|
||||
|
||||
# Import threadpools to run the crawl_url function in a separate thread
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
|
||||
client = AsyncOpenAI(base_url="https://api.groq.com/openai/v1", api_key=os.getenv("GROQ_API_KEY"))
|
||||
|
||||
# Instrument the OpenAI client
|
||||
cl.instrument_openai()
|
||||
|
||||
settings = {
|
||||
"model": "llama3-8b-8192",
|
||||
"temperature": 0.5,
|
||||
"max_tokens": 500,
|
||||
"top_p": 1,
|
||||
"frequency_penalty": 0,
|
||||
"presence_penalty": 0,
|
||||
}
|
||||
|
||||
def extract_urls(text):
|
||||
url_pattern = re.compile(r'(https?://\S+)')
|
||||
return url_pattern.findall(text)
|
||||
|
||||
def crawl_url(url):
|
||||
data = {
|
||||
"urls": [url],
|
||||
"include_raw_html": True,
|
||||
"word_count_threshold": 10,
|
||||
"extraction_strategy": "NoExtractionStrategy",
|
||||
"chunking_strategy": "RegexChunking"
|
||||
}
|
||||
response = requests.post("https://crawl4ai.com/crawl", json=data)
|
||||
response_data = response.json()
|
||||
response_data = response_data['results'][0]
|
||||
return response_data['markdown']
|
||||
|
||||
@cl.on_chat_start
|
||||
async def on_chat_start():
|
||||
cl.user_session.set("session", {
|
||||
"history": [],
|
||||
"context": {}
|
||||
})
|
||||
await cl.Message(
|
||||
content="Welcome to the chat! How can I assist you today?"
|
||||
).send()
|
||||
|
||||
@cl.on_message
|
||||
async def on_message(message: cl.Message):
|
||||
user_session = cl.user_session.get("session")
|
||||
|
||||
# Extract URLs from the user's message
|
||||
urls = extract_urls(message.content)
|
||||
|
||||
|
||||
futures = []
|
||||
with ThreadPoolExecutor() as executor:
|
||||
for url in urls:
|
||||
futures.append(executor.submit(crawl_url, url))
|
||||
|
||||
results = [future.result() for future in futures]
|
||||
|
||||
for url, result in zip(urls, results):
|
||||
ref_number = f"REF_{len(user_session['context']) + 1}"
|
||||
user_session["context"][ref_number] = {
|
||||
"url": url,
|
||||
"content": result
|
||||
}
|
||||
|
||||
# for url in urls:
|
||||
# # Crawl the content of each URL and add it to the session context with a reference number
|
||||
# ref_number = f"REF_{len(user_session['context']) + 1}"
|
||||
# crawled_content = crawl_url(url)
|
||||
# user_session["context"][ref_number] = {
|
||||
# "url": url,
|
||||
# "content": crawled_content
|
||||
# }
|
||||
|
||||
user_session["history"].append({
|
||||
"role": "user",
|
||||
"content": message.content
|
||||
})
|
||||
|
||||
# Create a system message that includes the context
|
||||
context_messages = [
|
||||
f'<appendix ref="{ref}">\n{data["content"]}\n</appendix>'
|
||||
for ref, data in user_session["context"].items()
|
||||
]
|
||||
if context_messages:
|
||||
system_message = {
|
||||
"role": "system",
|
||||
"content": (
|
||||
"You are a helpful bot. Use the following context for answering questions. "
|
||||
"Refer to the sources using the REF number in square brackets, e.g., [1], only if the source is given in the appendices below.\n\n"
|
||||
"If the question requires any information from the provided appendices or context, refer to the sources. "
|
||||
"If not, there is no need to add a references section. "
|
||||
"At the end of your response, provide a reference section listing the URLs and their REF numbers only if sources from the appendices were used.\n\n"
|
||||
"\n\n".join(context_messages)
|
||||
)
|
||||
}
|
||||
else:
|
||||
system_message = {
|
||||
"role": "system",
|
||||
"content": "You are a helpful assistant."
|
||||
}
|
||||
|
||||
|
||||
msg = cl.Message(content="")
|
||||
await msg.send()
|
||||
|
||||
# Get response from the LLM
|
||||
stream = await client.chat.completions.create(
|
||||
messages=[
|
||||
system_message,
|
||||
*user_session["history"]
|
||||
],
|
||||
stream=True,
|
||||
**settings
|
||||
)
|
||||
|
||||
assistant_response = ""
|
||||
async for part in stream:
|
||||
if token := part.choices[0].delta.content:
|
||||
assistant_response += token
|
||||
await msg.stream_token(token)
|
||||
|
||||
# Add assistant message to the history
|
||||
user_session["history"].append({
|
||||
"role": "assistant",
|
||||
"content": assistant_response
|
||||
})
|
||||
await msg.update()
|
||||
|
||||
# Append the reference section to the assistant's response
|
||||
reference_section = "\n\nReferences:\n"
|
||||
for ref, data in user_session["context"].items():
|
||||
reference_section += f"[{ref.split('_')[1]}]: {data['url']}\n"
|
||||
|
||||
msg.content += reference_section
|
||||
await msg.update()
|
||||
|
||||
|
||||
@cl.on_audio_chunk
|
||||
async def on_audio_chunk(chunk: cl.AudioChunk):
|
||||
if chunk.isStart:
|
||||
buffer = BytesIO()
|
||||
# This is required for whisper to recognize the file type
|
||||
buffer.name = f"input_audio.{chunk.mimeType.split('/')[1]}"
|
||||
# Initialize the session for a new audio stream
|
||||
cl.user_session.set("audio_buffer", buffer)
|
||||
cl.user_session.set("audio_mime_type", chunk.mimeType)
|
||||
|
||||
# Write the chunks to a buffer and transcribe the whole audio at the end
|
||||
cl.user_session.get("audio_buffer").write(chunk.data)
|
||||
|
||||
pass
|
||||
|
||||
@cl.step(type="tool")
|
||||
async def speech_to_text(audio_file):
|
||||
# cli = Groq()  # unused here: the synchronous Groq client is only needed for the commented-out call below
|
||||
|
||||
# response = cli.audio.transcriptions.create(
|
||||
# file=audio_file, #(filename, file.read()),
|
||||
# model="whisper-large-v3",
|
||||
# )
|
||||
|
||||
response = await client.audio.transcriptions.create(
|
||||
model="whisper-large-v3", file=audio_file
|
||||
)
|
||||
|
||||
return response.text
|
||||
|
||||
|
||||
@cl.on_audio_end
|
||||
async def on_audio_end(elements: list[ElementBased]):
|
||||
# Get the audio buffer from the session
|
||||
audio_buffer: BytesIO = cl.user_session.get("audio_buffer")
|
||||
audio_buffer.seek(0) # Move the file pointer to the beginning
|
||||
audio_file = audio_buffer.read()
|
||||
audio_mime_type: str = cl.user_session.get("audio_mime_type")
|
||||
|
||||
# input_audio_el = cl.Audio(
|
||||
# mime=audio_mime_type, content=audio_file, name=audio_buffer.name
|
||||
# )
|
||||
# await cl.Message(
|
||||
# author="You",
|
||||
# type="user_message",
|
||||
# content="",
|
||||
# elements=[input_audio_el, *elements]
|
||||
# ).send()
|
||||
|
||||
# answer_message = await cl.Message(content="").send()
|
||||
|
||||
|
||||
start_time = time.time()
|
||||
whisper_input = (audio_buffer.name, audio_file, audio_mime_type)
|
||||
transcription = await speech_to_text(whisper_input)
|
||||
end_time = time.time()
|
||||
print(f"Transcription took {end_time - start_time} seconds")
|
||||
|
||||
user_msg = cl.Message(
|
||||
author="You",
|
||||
type="user_message",
|
||||
content=transcription
|
||||
)
|
||||
await user_msg.send()
|
||||
await on_message(user_msg)
|
||||
|
||||
# images = [file for file in elements if "image" in file.mime]
|
||||
|
||||
# text_answer = await generate_text_answer(transcription, images)
|
||||
|
||||
# output_name, output_audio = await text_to_speech(text_answer, audio_mime_type)
|
||||
|
||||
# output_audio_el = cl.Audio(
|
||||
# name=output_name,
|
||||
# auto_play=True,
|
||||
# mime=audio_mime_type,
|
||||
# content=output_audio,
|
||||
# )
|
||||
|
||||
# answer_message.elements = [output_audio_el]
|
||||
|
||||
# answer_message.content = transcription
|
||||
# await answer_message.update()
|
||||
|
||||
if __name__ == "__main__":
|
||||
from chainlit.cli import run_chainlit
|
||||
run_chainlit(__file__)
|
||||
|
||||
|
||||
141
docs/md/api/core_classes_and_functions.md
Normal file
@@ -0,0 +1,141 @@
|
||||
# Core Classes and Functions
|
||||
|
||||
## Overview
|
||||
|
||||
In this section, we will delve into the core classes and functions that make up the Crawl4AI library. This includes the `WebCrawler` class, various `CrawlerStrategy` classes, `ChunkingStrategy` classes, and `ExtractionStrategy` classes. Understanding these core components will help you leverage the full power of Crawl4AI for your web crawling and data extraction needs.
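
To give a feel for how these pieces fit together before diving into each one, here is a minimal end-to-end sketch. It only uses arguments that appear in the examples elsewhere in these docs (`word_count_threshold`, `extraction_strategy`); the exact `run` signature may offer more options:

```python
from crawl4ai import WebCrawler
from crawl4ai.extraction_strategy import CosineStrategy

# Create and warm up the crawler (loads any models it needs)
crawler = WebCrawler()
crawler.warmup()

# Crawl a page and cluster its text into finance-related chunks
result = crawler.run(
    url="https://www.nbcnews.com/business",
    word_count_threshold=10,
    extraction_strategy=CosineStrategy(semantic_filter="finance", word_count_threshold=10),
)
print(result.extracted_content)
```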
|
||||
|
||||
## WebCrawler Class
|
||||
|
||||
The `WebCrawler` class is the main class you'll interact with. It provides the interface for crawling web pages and extracting data.
|
||||
|
||||
### Initialization
|
||||
|
||||
```python
|
||||
from crawl4ai import WebCrawler
|
||||
|
||||
# Create an instance of WebCrawler
|
||||
crawler = WebCrawler()
|
||||
```
|
||||
|
||||
### Methods
|
||||
|
||||
- **`warmup()`**: Prepares the crawler for use, such as loading necessary models.
|
||||
- **`run(url: str, **kwargs)`**: Runs the crawler on the specified URL with optional parameters for customization.
|
||||
|
||||
```python
|
||||
crawler.warmup()
|
||||
result = crawler.run(url="https://www.nbcnews.com/business")
|
||||
print(result)
|
||||
```
|
||||
|
||||
## CrawlerStrategy Classes
|
||||
|
||||
The `CrawlerStrategy` classes define how the web crawling is executed. The base class is `CrawlerStrategy`, which is extended by specific implementations like `LocalSeleniumCrawlerStrategy`.
|
||||
|
||||
### CrawlerStrategy Base Class
|
||||
|
||||
An abstract base class that defines the interface for different crawler strategies.
|
||||
|
||||
```python
|
||||
from abc import ABC, abstractmethod
|
||||
|
||||
class CrawlerStrategy(ABC):
|
||||
@abstractmethod
|
||||
def crawl(self, url: str, **kwargs) -> str:
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def take_screenshot(self, save_path: str):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def update_user_agent(self, user_agent: str):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def set_hook(self, hook_type: str, hook: Callable):
|
||||
pass
|
||||
```
|
||||
|
||||
### LocalSeleniumCrawlerStrategy Class
|
||||
|
||||
A concrete implementation of `CrawlerStrategy` that uses Selenium to crawl web pages.
|
||||
|
||||
#### Initialization
|
||||
|
||||
```python
|
||||
from crawl4ai.crawler_strategy import LocalSeleniumCrawlerStrategy
|
||||
|
||||
strategy = LocalSeleniumCrawlerStrategy(js_code=["console.log('Hello, world!');"])
|
||||
```
|
||||
|
||||
#### Methods
|
||||
|
||||
- **`crawl(url: str, **kwargs)`**: Crawls the specified URL.
|
||||
- **`take_screenshot(save_path: str)`**: Takes a screenshot of the current page.
|
||||
- **`update_user_agent(user_agent: str)`**: Updates the user agent for the browser.
|
||||
- **`set_hook(hook_type: str, hook: Callable)`**: Sets a hook for various events.
|
||||
|
||||
```python
|
||||
result = strategy.crawl("https://www.example.com")
|
||||
strategy.take_screenshot("screenshot.png")
|
||||
strategy.update_user_agent("Mozilla/5.0")
|
||||
strategy.set_hook("before_get_url", lambda: print("About to get URL"))
|
||||
```
|
||||
|
||||
## ChunkingStrategy Classes
|
||||
|
||||
The `ChunkingStrategy` classes define how the text from a web page is divided into chunks. Here are a few examples:
|
||||
|
||||
### RegexChunking Class
|
||||
|
||||
Splits text using regular expressions.
|
||||
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import RegexChunking
|
||||
|
||||
chunker = RegexChunking(patterns=[r'\n\n'])
|
||||
chunks = chunker.chunk("This is a sample text. It will be split into chunks.")
|
||||
```
|
||||
|
||||
### NlpSentenceChunking Class
|
||||
|
||||
Uses NLP to split text into sentences.
|
||||
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import NlpSentenceChunking
|
||||
|
||||
chunker = NlpSentenceChunking()
|
||||
chunks = chunker.chunk("This is a sample text. It will be split into sentences.")
|
||||
```
|
||||
|
||||
## ExtractionStrategy Classes
|
||||
|
||||
The `ExtractionStrategy` classes define how meaningful content is extracted from the chunks. Here are a few examples:
|
||||
|
||||
### CosineStrategy Class
|
||||
|
||||
Clusters text chunks based on cosine similarity.
|
||||
|
||||
```python
|
||||
from crawl4ai.extraction_strategy import CosineStrategy
|
||||
|
||||
extractor = CosineStrategy(semantic_filter="finance", word_count_threshold=10)
|
||||
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
|
||||
```
|
||||
|
||||
### LLMExtractionStrategy Class
|
||||
|
||||
Uses a Language Model to extract meaningful blocks from HTML.
|
||||
|
||||
```python
|
||||
from crawl4ai.extraction_strategy import LLMExtractionStrategy
|
||||
|
||||
extractor = LLMExtractionStrategy(provider='openai', api_token='your_api_token', instruction='Extract only news about AI.')
|
||||
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
|
||||
```
|
||||
|
||||
## Conclusion
|
||||
|
||||
By understanding these core classes and functions, you can customize and extend Crawl4AI to suit your specific web crawling and data extraction needs. Happy crawling! 🕷️🤖
|
||||
|
||||
338
docs/md/api/detailed_api_documentation.md
Normal file
@@ -0,0 +1,338 @@
|
||||
# Detailed API Documentation
|
||||
|
||||
## Overview
|
||||
|
||||
This section provides comprehensive documentation for the Crawl4AI API, covering all classes, methods, and their parameters. This guide will help you understand how to utilize the API to its full potential, enabling efficient web crawling and data extraction.
|
||||
|
||||
## WebCrawler Class
|
||||
|
||||
The `WebCrawler` class is the primary interface for crawling web pages and extracting data.
|
||||
|
||||
### Initialization
|
||||
|
||||
```python
|
||||
from crawl4ai import WebCrawler
|
||||
|
||||
crawler = WebCrawler()
|
||||
```
|
||||
|
||||
### Methods
|
||||
|
||||
#### `warmup()`
|
||||
|
||||
Prepares the crawler for use, such as loading necessary models.
|
||||
|
||||
```python
|
||||
crawler.warmup()
|
||||
```
|
||||
|
||||
#### `run(url: str, **kwargs) -> CrawlResult`
|
||||
|
||||
Crawls the specified URL and returns the result.
|
||||
|
||||
- **Parameters:**
|
||||
- `url` (str): The URL to crawl.
|
||||
- `**kwargs`: Additional parameters for customization.
|
||||
|
||||
- **Returns:**
|
||||
- `CrawlResult`: An object containing the crawl result.
|
||||
|
||||
- **Example:**
|
||||
|
||||
```python
|
||||
result = crawler.run(url="https://www.nbcnews.com/business")
|
||||
print(result)
|
||||
```
|
||||
|
||||
### CrawlResult Class
|
||||
|
||||
Represents the result of a crawl operation.
|
||||
|
||||
- **Attributes:**
|
||||
- `url` (str): The URL of the crawled page.
|
||||
- `html` (str): The raw HTML of the page.
|
||||
- `success` (bool): Whether the crawl was successful.
|
||||
- `cleaned_html` (Optional[str]): The cleaned HTML.
|
||||
- `media` (Dict[str, List[Dict]]): Media tags in the page (images, audio, video).
|
||||
- `links` (Dict[str, List[Dict]]): Links in the page (external, internal).
|
||||
- `screenshot` (Optional[str]): Base64 encoded screenshot.
|
||||
- `markdown` (Optional[str]): Extracted content in Markdown format.
|
||||
- `extracted_content` (Optional[str]): Extracted meaningful content.
|
||||
- `metadata` (Optional[dict]): Metadata from the page.
|
||||
- `error_message` (Optional[str]): Error message if any.
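
As a quick illustration (a minimal sketch, assuming `crawler` is a warmed-up `WebCrawler` as shown above), these attributes can be read directly off the returned object:

```python
result = crawler.run(url="https://www.nbcnews.com/business")

if result.success:
    print(result.url)
    print((result.markdown or "")[:200])                  # start of the Markdown output
    print(len(result.links.get("internal", [])), "internal links found")
else:
    print("Crawl failed:", result.error_message)
```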
|
||||
|
||||
## CrawlerStrategy Classes
|
||||
|
||||
The `CrawlerStrategy` classes define how the web crawling is executed.
|
||||
|
||||
### CrawlerStrategy Base Class
|
||||
|
||||
An abstract base class for different crawler strategies.
|
||||
|
||||
#### Methods
|
||||
|
||||
- **`crawl(url: str, **kwargs) -> str`**: Crawls the specified URL.
|
||||
- **`take_screenshot(save_path: str)`**: Takes a screenshot of the current page.
|
||||
- **`update_user_agent(user_agent: str)`**: Updates the user agent for the browser.
|
||||
- **`set_hook(hook_type: str, hook: Callable)`**: Sets a hook for various events.
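
For illustration only, a custom strategy would implement these four methods. The sketch below assumes `CrawlerStrategy` can be imported from `crawl4ai.crawler_strategy` (the module that provides `LocalSeleniumCrawlerStrategy`) and simply returns canned HTML instead of driving a browser:

```python
from typing import Callable

from crawl4ai.crawler_strategy import CrawlerStrategy  # assumed import path


class StaticHTMLStrategy(CrawlerStrategy):
    """Toy strategy that returns a fixed HTML string instead of fetching the page."""

    def __init__(self):
        self.user_agent = "Mozilla/5.0"
        self.hooks = {}

    def crawl(self, url: str, **kwargs) -> str:
        return f"<html><body>Placeholder content for {url}</body></html>"

    def take_screenshot(self, save_path: str):
        pass  # nothing to capture without a real browser

    def update_user_agent(self, user_agent: str):
        self.user_agent = user_agent

    def set_hook(self, hook_type: str, hook: Callable):
        self.hooks[hook_type] = hook
```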
|
||||
|
||||
### LocalSeleniumCrawlerStrategy Class
|
||||
|
||||
Uses Selenium to crawl web pages.
|
||||
|
||||
#### Initialization
|
||||
|
||||
```python
|
||||
from crawl4ai.crawler_strategy import LocalSeleniumCrawlerStrategy
|
||||
|
||||
strategy = LocalSeleniumCrawlerStrategy(js_code=["console.log('Hello, world!');"])
|
||||
```
|
||||
|
||||
#### Methods
|
||||
|
||||
- **`crawl(url: str, **kwargs)`**: Crawls the specified URL.
|
||||
- **`take_screenshot(save_path: str)`**: Takes a screenshot of the current page.
|
||||
- **`update_user_agent(user_agent: str)`**: Updates the user agent for the browser.
|
||||
- **`set_hook(hook_type: str, hook: Callable)`**: Sets a hook for various events.
|
||||
|
||||
#### Example
|
||||
|
||||
```python
|
||||
result = strategy.crawl("https://www.example.com")
|
||||
strategy.take_screenshot("screenshot.png")
|
||||
strategy.update_user_agent("Mozilla/5.0")
|
||||
strategy.set_hook("before_get_url", lambda: print("About to get URL"))
|
||||
```
|
||||
|
||||
## ChunkingStrategy Classes
|
||||
|
||||
The `ChunkingStrategy` classes define how the text from a web page is divided into chunks.
|
||||
|
||||
### RegexChunking Class
|
||||
|
||||
Splits text using regular expressions.
|
||||
|
||||
#### Initialization
|
||||
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import RegexChunking
|
||||
|
||||
chunker = RegexChunking(patterns=[r'\n\n'])
|
||||
```
|
||||
|
||||
#### Methods
|
||||
|
||||
- **`chunk(text: str) -> List[str]`**: Splits the text into chunks.
|
||||
|
||||
#### Example
|
||||
|
||||
```python
|
||||
chunks = chunker.chunk("This is a sample text. It will be split into chunks.")
|
||||
```
|
||||
|
||||
### NlpSentenceChunking Class
|
||||
|
||||
Uses NLP to split text into sentences.
|
||||
|
||||
#### Initialization
|
||||
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import NlpSentenceChunking
|
||||
|
||||
chunker = NlpSentenceChunking()
|
||||
```
|
||||
|
||||
#### Methods
|
||||
|
||||
- **`chunk(text: str) -> List[str]`**: Splits the text into sentences.
|
||||
|
||||
#### Example
|
||||
|
||||
```python
|
||||
chunks = chunker.chunk("This is a sample text. It will be split into sentences.")
|
||||
```
|
||||
|
||||
### TopicSegmentationChunking Class
|
||||
|
||||
Uses the TextTiling algorithm to segment text into topics.
|
||||
|
||||
#### Initialization
|
||||
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import TopicSegmentationChunking
|
||||
|
||||
chunker = TopicSegmentationChunking(num_keywords=3)
|
||||
```
|
||||
|
||||
#### Methods
|
||||
|
||||
- **`chunk(text: str) -> List[str]`**: Splits the text into topic-based segments.
|
||||
|
||||
#### Example
|
||||
|
||||
```python
|
||||
chunks = chunker.chunk("This is a sample text. It will be split into topic-based segments.")
|
||||
```
|
||||
|
||||
### FixedLengthWordChunking Class
|
||||
|
||||
Splits text into chunks of fixed length based on the number of words.
|
||||
|
||||
#### Initialization
|
||||
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import FixedLengthWordChunking
|
||||
|
||||
chunker = FixedLengthWordChunking(chunk_size=100)
|
||||
```
|
||||
|
||||
#### Methods
|
||||
|
||||
- **`chunk(text: str) -> List[str]`**: Splits the text into fixed-length word chunks.
|
||||
|
||||
#### Example
|
||||
|
||||
```python
|
||||
chunks = chunker.chunk("This is a sample text. It will be split into fixed-length word chunks.")
|
||||
```
|
||||
|
||||
### SlidingWindowChunking Class
|
||||
|
||||
Uses a sliding window approach to chunk text.
|
||||
|
||||
#### Initialization
|
||||
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import SlidingWindowChunking
|
||||
|
||||
chunker = SlidingWindowChunking(window_size=100, step=50)
|
||||
```
|
||||
|
||||
#### Methods
|
||||
|
||||
- **`chunk(text: str) -> List[str]`**: Splits the text using a sliding window approach.
|
||||
|
||||
#### Example
|
||||
|
||||
```python
|
||||
chunks = chunker.chunk("This is a sample text. It will be split using a sliding window approach.")
|
||||
```
|
||||
|
||||
## ExtractionStrategy Classes
|
||||
|
||||
The `ExtractionStrategy` classes define how meaningful content is extracted from the chunks.
|
||||
|
||||
### NoExtractionStrategy Class
|
||||
|
||||
Returns the entire HTML content without any modification.
|
||||
|
||||
#### Initialization
|
||||
|
||||
```python
|
||||
from crawl4ai.extraction_strategy import NoExtractionStrategy
|
||||
|
||||
extractor = NoExtractionStrategy()
|
||||
```
|
||||
|
||||
#### Methods
|
||||
|
||||
- **`extract(url: str, html: str) -> str`**: Returns the HTML content.
|
||||
|
||||
#### Example
|
||||
|
||||
```python
|
||||
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
|
||||
```
|
||||
|
||||
### LLMExtractionStrategy Class
|
||||
|
||||
Uses a Language Model to extract meaningful blocks from HTML.
|
||||
|
||||
#### Initialization
|
||||
|
||||
```python
|
||||
from crawl4ai.extraction_strategy import LLMExtractionStrategy
|
||||
|
||||
extractor = LLMExtractionStrategy(provider='openai', api_token='your_api_token', instruction='Extract only news about AI.')
|
||||
```
|
||||
|
||||
#### Methods
|
||||
|
||||
- **`extract(url: str, html: str) -> str`**: Extracts meaningful content using the LLM.
|
||||
|
||||
#### Example
|
||||
|
||||
```python
|
||||
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
|
||||
```
|
||||
|
||||
### CosineStrategy Class
|
||||
|
||||
Clusters text chunks based on cosine similarity.
|
||||
|
||||
#### Initialization
|
||||
|
||||
```python
|
||||
from crawl4ai.extraction_strategy import CosineStrategy
|
||||
|
||||
extractor = CosineStrategy(semantic_filter="finance", word_count_threshold=10)
|
||||
```
|
||||
|
||||
#### Methods
|
||||
|
||||
- **`extract(url: str, html: str) -> str`**: Extracts clusters of text based on cosine similarity.
|
||||
|
||||
#### Example
|
||||
|
||||
```python
|
||||
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
|
||||
```
|
||||
|
||||
### TopicExtractionStrategy Class
|
||||
|
||||
Uses the TextTiling algorithm to segment HTML content into topics and extract keywords.
|
||||
|
||||
#### Initialization
|
||||
|
||||
```python
|
||||
from crawl4ai.extraction_strategy import TopicExtractionStrategy
|
||||
|
||||
extractor = TopicExtractionStrategy(num_keywords=3)
|
||||
```
|
||||
|
||||
#### Methods
|
||||
|
||||
- **`extract(url: str, html: str) -> str`**: Extracts topic-based segments and keywords.
|
||||
|
||||
#### Example
|
||||
|
||||
```python
|
||||
extracted_content = extractor.extract(url="https://www.example.com", html="<html>...</html>")
|
||||
```
|
||||
|
||||
## Parameters
|
||||
|
||||
Here are the common parameters used across various classes and methods:
|
||||
|
||||
- **`url`** (str): The URL to crawl.
|
||||
- **`html`** (str): The HTML content of the page.
|
||||
- **`user_agent`** (str): The user agent for the HTTP requests.
|
||||
- **`patterns`** (list): A list of regular expression patterns for chunking.
|
||||
- **`num_keywords`** (int): Number of keywords for topic extraction.
|
||||
- **`chunk_size`** (int): Number of words in each chunk.
|
||||
- **`window_size`** (int): Number of words in the sliding window.
|
||||
- **`step`** (int): Step size for the sliding window.
|
||||
- **`semantic_filter`** (str): Keywords for filtering relevant documents.
|
||||
- **`word_count_threshold`** (int): Minimum number of words per cluster.
|
||||
- **`max_dist`** (float): Maximum cophenetic distance for clustering.
|
||||
- **`linkage_method`** (str): Linkage method for hierarchical clustering.
|
||||
- **`top_k`** (int): Number of top categories to extract.
|
||||
- **`provider`** (str): Provider for language model completions.
|
||||
- **`api_token`** (str): API token for the provider.
|
||||
- **`instruction`** (str): Instruction to guide the LLM extraction.
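
As a rough sketch of how several of these parameters are combined in practice (the argument names follow the examples above and in `docs/examples/summarize_page.py`; adjust the provider and API key to your own setup):

```python
import os

from crawl4ai import WebCrawler
from crawl4ai.extraction_strategy import LLMExtractionStrategy

crawler = WebCrawler()
crawler.warmup()

result = crawler.run(
    url="https://www.nbcnews.com/business",
    word_count_threshold=10,     # minimum words per content block
    bypass_cache=True,           # force a fresh crawl instead of a cached copy
    extraction_strategy=LLMExtractionStrategy(
        provider="openai/gpt-4o",
        api_token=os.getenv("OPENAI_API_KEY"),
        instruction="Extract only news related to artificial intelligence.",
    ),
)
print(result.extracted_content)
```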
|
||||
|
||||
## Conclusion
|
||||
|
||||
This detailed API documentation provides a thorough understanding of the classes, methods, and parameters in the Crawl4AI library. With this knowledge, you can effectively use the API to perform advanced web crawling and data extraction tasks.
|
||||
BIN
docs/md/assets/DankMono-Bold.woff2
Normal file
BIN
docs/md/assets/DankMono-Italic.woff2
Normal file
BIN
docs/md/assets/DankMono-Regular.woff2
Normal file
BIN
docs/md/assets/Monaco.woff
Normal file
127
docs/md/assets/dmvendor.css
Normal file
0
docs/md/assets/highlight.css
Normal file
1213
docs/md/assets/highlight.min.js
vendored
Normal file
6
docs/md/assets/highlight_init.js
Normal file
@@ -0,0 +1,6 @@
|
||||
document.addEventListener('DOMContentLoaded', (event) => {
|
||||
document.querySelectorAll('pre code').forEach((block) => {
|
||||
hljs.highlightBlock(block);
|
||||
});
|
||||
});
|
||||
|
||||
153
docs/md/assets/styles.css
Normal file
@@ -0,0 +1,153 @@
|
||||
@font-face {
|
||||
font-family: "Monaco";
|
||||
font-style: normal;
|
||||
font-weight: normal;
|
||||
src: local("Monaco"), url("Monaco.woff") format("woff");
|
||||
}
|
||||
|
||||
:root {
|
||||
--global-font-size: 16px;
|
||||
--global-line-height: 1.5em;
|
||||
--global-space: 10px;
|
||||
--font-stack: Menlo, Monaco, Lucida Console, Liberation Mono, DejaVu Sans Mono, Bitstream Vera Sans Mono,
|
||||
Courier New, monospace, serif;
|
||||
--font-stack: dm, Monaco, Courier New, monospace, serif;
|
||||
--mono-font-stack: Menlo, Monaco, Lucida Console, Liberation Mono, DejaVu Sans Mono, Bitstream Vera Sans Mono,
|
||||
Courier New, monospace, serif;
|
||||
|
||||
--background-color: #151515; /* Dark background */
|
||||
--font-color: #eaeaea; /* Light font color for contrast */
|
||||
--invert-font-color: #151515; /* Dark color for inverted elements */
|
||||
--primary-color: #1a95e0; /* Primary color can remain the same or be adjusted for better contrast */
|
||||
--secondary-color: #727578; /* Secondary color for less important text */
|
||||
--error-color: #ff5555; /* Bright color for errors */
|
||||
--progress-bar-background: #444; /* Darker background for progress bar */
|
||||
--progress-bar-fill: #1a95e0; /* Bright color for progress bar fill */
|
||||
--code-bg-color: #1e1e1e; /* Darker background for code blocks */
|
||||
--input-style: solid; /* Keeping input style solid */
|
||||
--block-background-color: #202020; /* Darker background for block elements */
|
||||
--global-font-color: #eaeaea; /* Light font color for global elements */
|
||||
|
||||
--background-color: #222225;
|
||||
|
||||
--background-color: #070708;
|
||||
--page-width: 70em;
|
||||
--font-color: #e8e9ed;
|
||||
--invert-font-color: #222225;
|
||||
--secondary-color: #a3abba;
|
||||
--secondary-color: #d5cec0;
|
||||
--tertiary-color: #a3abba;
|
||||
--primary-color: #09b5a5; /* Updated to the brand color */
|
||||
--primary-color: #50ffff; /* Updated to the brand color */
|
||||
--error-color: #ff3c74;
|
||||
--progress-bar-background: #3f3f44;
|
||||
--progress-bar-fill: #09b5a5; /* Updated to the brand color */
|
||||
--code-bg-color: #3f3f44;
|
||||
--input-style: solid;
|
||||
--display-h1-decoration: none;
|
||||
|
||||
--display-h1-decoration: none;
|
||||
}
|
||||
|
||||
/* body {
|
||||
background-color: var(--background-color);
|
||||
color: var(--font-color);
|
||||
}
|
||||
|
||||
a {
|
||||
color: var(--primary-color);
|
||||
}
|
||||
|
||||
a:hover {
|
||||
background-color: var(--primary-color);
|
||||
color: var(--invert-font-color);
|
||||
}
|
||||
|
||||
blockquote::after {
|
||||
color: #444;
|
||||
}
|
||||
|
||||
pre, code {
|
||||
background-color: var(--code-bg-color);
|
||||
color: var(--font-color);
|
||||
}
|
||||
|
||||
.terminal-nav:first-child {
|
||||
border-bottom: 1px dashed var(--secondary-color);
|
||||
} */
|
||||
|
||||
.terminal-mkdocs-main-content {
|
||||
line-height: var(--global-line-height);
|
||||
}
|
||||
|
||||
strong,
|
||||
.highlight {
|
||||
/* background: url(//s2.svgbox.net/pen-brushes.svg?ic=brush-1&color=50ffff); */
|
||||
background-color: #50ffff33;
|
||||
}
|
||||
|
||||
.terminal-card > header {
|
||||
color: var(--font-color);
|
||||
text-align: center;
|
||||
background-color: var(--progress-bar-background);
|
||||
padding: 0.3em 0.5em;
|
||||
}
|
||||
.btn.btn-sm {
|
||||
color: var(--font-color);
|
||||
padding: 0.2em 0.5em;
|
||||
font-size: 0.8em;
|
||||
}
|
||||
|
||||
.loading-message {
|
||||
display: none;
|
||||
margin-top: 20px;
|
||||
}
|
||||
|
||||
.response-section {
|
||||
display: none;
|
||||
padding-top: 20px;
|
||||
}
|
||||
|
||||
.tabs {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
}
|
||||
.tab-list {
|
||||
display: flex;
|
||||
padding: 0;
|
||||
margin: 0;
|
||||
list-style-type: none;
|
||||
border-bottom: 1px solid var(--font-color);
|
||||
}
|
||||
.tab-item {
|
||||
cursor: pointer;
|
||||
padding: 10px;
|
||||
border: 1px solid var(--font-color);
|
||||
margin-right: -1px;
|
||||
border-bottom: none;
|
||||
}
|
||||
.tab-item:hover,
|
||||
.tab-item:focus,
|
||||
.tab-item:active {
|
||||
background-color: var(--progress-bar-background);
|
||||
}
|
||||
.tab-content {
|
||||
display: none;
|
||||
border: 1px solid var(--font-color);
|
||||
border-top: none;
|
||||
}
|
||||
.tab-content:first-of-type {
|
||||
display: block;
|
||||
}
|
||||
|
||||
.tab-content header {
|
||||
padding: 0.5em;
|
||||
display: flex;
|
||||
justify-content: end;
|
||||
align-items: center;
|
||||
background-color: var(--progress-bar-background);
|
||||
}
|
||||
.tab-content pre {
|
||||
margin: 0;
|
||||
max-height: 300px; overflow: auto; border:none;
|
||||
}
|
||||
102
docs/md/changelog.md
Normal file
@@ -0,0 +1,102 @@
|
||||
# Changelog
|
||||
|
||||
## [v0.2.77] - 2024-08-04
|
||||
|
||||
Significant improvements in text processing and performance:
|
||||
|
||||
- 🚀 **Dependency reduction**: Removed dependency on spaCy model for text chunk labeling in cosine extraction strategy.
|
||||
- 🤖 **Transformer upgrade**: Implemented text sequence classification using a transformer model for labeling text chunks.
|
||||
- ⚡ **Performance enhancement**: Improved model loading speed due to removal of spaCy dependency.
|
||||
- 🔧 **Future-proofing**: Laid groundwork for potential complete removal of spaCy dependency in future versions.
|
||||
|
||||
These changes address issue #68 and provide a foundation for faster, more efficient text processing in Crawl4AI.
|
||||
|
||||
## [v0.2.76] - 2024-08-02
|
||||
|
||||
Major improvements in functionality, performance, and cross-platform compatibility! 🚀
|
||||
|
||||
- 🐳 **Docker enhancements**: Significantly improved Dockerfile for easy installation on Linux, Mac, and Windows.
|
||||
- 🌐 **Official Docker Hub image**: Launched our first official image on Docker Hub for streamlined deployment.
|
||||
- 🔧 **Selenium upgrade**: Removed dependency on ChromeDriver, now using Selenium's built-in capabilities for better compatibility.
|
||||
- 🖼️ **Image description**: Implemented ability to generate textual descriptions for extracted images from web pages.
|
||||
- ⚡ **Performance boost**: Various improvements to enhance overall speed and performance.
|
||||
|
||||
A big shoutout to our amazing community contributors:
|
||||
- [@aravindkarnam](https://github.com/aravindkarnam) for developing the textual description extraction feature.
|
||||
- [@FractalMind](https://github.com/FractalMind) for creating the first official Docker Hub image and fixing Dockerfile errors.
|
||||
- [@ketonkss4](https://github.com/ketonkss4) for identifying Selenium's new capabilities, helping us reduce dependencies.
|
||||
|
||||
Your contributions are driving Crawl4AI forward! 🙌
|
||||
|
||||
## [v0.2.75] - 2024-07-19
|
||||
|
||||
Minor improvements for a more maintainable codebase:
|
||||
|
||||
- 🔄 Fixed typos in `chunking_strategy.py` and `crawler_strategy.py` to improve code readability
|
||||
- 🔄 Removed `.test_pads/` directory from `.gitignore` to keep our repository clean and organized
|
||||
|
||||
These changes may seem small, but they contribute to a more stable and sustainable codebase. By fixing typos and updating our `.gitignore` settings, we're ensuring that our code is easier to maintain and scale in the long run.
|
||||
|
||||
|
||||
## [v0.2.74] - 2024-07-08
|
||||
A slew of exciting updates to improve the crawler's stability and robustness! 🎉
|
||||
|
||||
- 💻 **UTF encoding fix**: Resolved the Windows "charmap" error by adding UTF encoding.
|
||||
- 🛡️ **Error handling**: Implemented MaxRetryError exception handling in LocalSeleniumCrawlerStrategy.
|
||||
- 🧹 **Input sanitization**: Improved input sanitization and handled encoding issues in LLMExtractionStrategy.
|
||||
- 🚮 **Database cleanup**: Removed existing database file and initialized a new one.
|
||||
|
||||
## [v0.2.73] - 2024-07-03
|
||||
|
||||
💡 In this release, we've bumped the version to v0.2.73 and refreshed our documentation to ensure you have the best experience with our project.
|
||||
|
||||
* Added support for websites that require "with-head" mode, crawling with a visible browser head.
* Fixed installation issues with setup.py and the Dockerfile.
* Resolved multiple other issues.
|
||||
|
||||
## [v0.2.72] - 2024-06-30
|
||||
|
||||
This release brings exciting updates and improvements to our project! 🎉
|
||||
|
||||
* 📚 **Documentation Updates**: Our documentation has been revamped to reflect the latest changes and additions.
|
||||
* 🚀 **New Modes in setup.py**: We've added support for three new modes in setup.py: default, torch, and transformers. This enhances the project's flexibility and usability.
|
||||
* 🐳 **Docker File Updates**: The Docker file has been updated to ensure seamless compatibility with the new modes and improvements.
|
||||
* 🕷️ **Temporary Solution for Headless Crawling**: We've implemented a temporary solution to overcome issues with crawling websites in headless mode.
|
||||
|
||||
These changes aim to improve the overall user experience, provide more flexibility, and enhance the project's performance. We're thrilled to share these updates with you and look forward to continuing to evolve and improve our project!
|
||||
|
||||
## [0.2.71] - 2024-06-26
|
||||
|
||||
**Improved Error Handling and Performance** 🚧
|
||||
|
||||
* 🚫 Refactored `crawler_strategy.py` to handle exceptions and provide better error messages, making it more robust and reliable.
|
||||
* 💻 Optimized the `get_content_of_website_optimized` function in `utils.py` for improved performance, reducing potential bottlenecks.
|
||||
* 💻 Updated `utils.py` with the latest changes, ensuring consistency and accuracy.
|
||||
* 🚫 Migrated to `ChromeDriverManager` to resolve Chrome driver download issues, providing a smoother user experience.
|
||||
|
||||
These changes focus on refining the existing codebase, resulting in a more stable, efficient, and user-friendly experience. With these improvements, you can expect fewer errors and better performance in the crawler strategy and utility functions.
|
||||
|
||||
## [0.2.71] - 2024-06-25
|
||||
### Fixed
|
||||
- Speed up twice the extraction function.
|
||||
|
||||
## [0.2.6] - 2024-06-22
|
||||
### Fixed
|
||||
- Fix issue #19: Update Dockerfile to ensure compatibility across multiple platforms.
|
||||
|
||||
## [0.2.5] - 2024-06-18
|
||||
### Added
|
||||
- Added five important hooks to the crawler:
|
||||
- on_driver_created: Called when the driver is ready for initializations.
|
||||
- before_get_url: Called right before Selenium fetches the URL.
|
||||
- after_get_url: Called after Selenium fetches the URL.
|
||||
- before_return_html: Called when the data is parsed and ready.
|
||||
- on_user_agent_updated: Called when the user changes the user_agent, causing the driver to reinitialize.
|
||||
- Added an example in `quickstart.py` in the example folder under the docs.
|
||||
- Enhancement issue #24: Replaced inline HTML tags (e.g., DEL, INS, SUB, ABBR) with textual format for better context handling in LLM.
|
||||
- Maintaining the semantic context of inline tags (e.g., abbreviation, DEL, INS) for improved LLM-friendliness.
|
||||
- Updated Dockerfile to ensure compatibility across multiple platforms (Hopefully!).
|
||||
|
||||
## [0.2.4] - 2024-06-17
|
||||
### Fixed
|
||||
- Fix issue #22: Use MD5 hash for caching HTML files to handle long URLs
|
||||
25
docs/md/contact.md
Normal file
@@ -0,0 +1,25 @@
|
||||
# Contact
|
||||
If you have any questions, suggestions, or feedback, please feel free to reach out to us:
|
||||
|
||||
- GitHub: [unclecode](https://github.com/unclecode)
|
||||
- Twitter: [@unclecode](https://twitter.com/unclecode)
|
||||
- Website: [crawl4ai.com](https://crawl4ai.com)
|
||||
|
||||
|
||||
## Contributing 🤝
|
||||
|
||||
We welcome contributions from the open-source community to help improve Crawl4AI and make it even more valuable for AI enthusiasts and developers. To contribute, please follow these steps:
|
||||
|
||||
1. Fork the repository.
|
||||
2. Create a new branch for your feature or bug fix.
|
||||
3. Make your changes and commit them with descriptive messages.
|
||||
4. Push your changes to your forked repository.
|
||||
5. Submit a pull request to the main repository.
|
||||
|
||||
For more information on contributing, please see our [contribution guidelines](https://github.com/unclecode/crawl4ai/blob/main/CONTRIBUTING.md).
|
||||
|
||||
## License 📄
|
||||
|
||||
Crawl4AI is released under the [Apache 2.0 License](https://github.com/unclecode/crawl4ai/blob/main/LICENSE).
|
||||
|
||||
Let's work together to make the web more accessible and useful for AI applications! 💪🌐🤖
|
||||
231
docs/md/demo.md
Normal file
@@ -0,0 +1,231 @@
|
||||
# Interactive Demo for Crawler
|
||||
<div id="demo">
|
||||
<form id="crawlForm" class="terminal-form">
|
||||
<fieldset>
|
||||
<legend>Enter URL and Options</legend>
|
||||
<div class="form-group">
|
||||
<label for="url">Enter URL:</label>
|
||||
<input type="text" id="url" name="url" required>
|
||||
</div>
|
||||
<div class="form-group">
|
||||
<label for="screenshot">Get Screenshot:</label>
|
||||
<input type="checkbox" id="screenshot" name="screenshot">
|
||||
</div>
|
||||
<div class="form-group">
|
||||
<button class="btn btn-default" type="submit">Submit</button>
|
||||
</div>
|
||||
|
||||
</fieldset>
|
||||
</form>
|
||||
|
||||
<div id="loading" class="loading-message">
|
||||
<div class="terminal-alert terminal-alert-primary">Loading... Please wait.</div>
|
||||
</div>
|
||||
|
||||
<section id="response" class="response-section">
|
||||
<h2>Response</h2>
|
||||
<div class="tabs">
|
||||
<ul class="tab-list">
|
||||
<li class="tab-item" onclick="showTab('markdown')">Markdown</li>
|
||||
<li class="tab-item" onclick="showTab('cleanedHtml')">Cleaned HTML</li>
|
||||
<li class="tab-item" onclick="showTab('media')">Media</li>
|
||||
<li class="tab-item" onclick="showTab('extractedContent')">Extracted Content</li>
|
||||
<li class="tab-item" onclick="showTab('screenshot')">Screenshot</li>
|
||||
<li class="tab-item" onclick="showTab('pythonCode')">Python Code</li>
|
||||
</ul>
|
||||
<div class="tab-content" id="tab-markdown">
|
||||
<header>
|
||||
<div>
|
||||
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('markdownContent')">Copy</button>
|
||||
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('markdownContent', 'markdown.md')">Download</button>
|
||||
</div>
|
||||
</header>
|
||||
<pre><code id="markdownContent" class="language-markdown hljs"></code></pre>
|
||||
</div>
|
||||
|
||||
<div class="tab-content" id="tab-cleanedHtml" style="display: none;">
|
||||
<header >
|
||||
<div>
|
||||
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('cleanedHtmlContent')">Copy</button>
|
||||
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('cleanedHtmlContent', 'cleaned.html')">Download</button>
|
||||
</div>
|
||||
</header>
|
||||
<pre><code id="cleanedHtmlContent" class="language-html hljs"></code></pre>
|
||||
</div>
|
||||
|
||||
<div class="tab-content" id="tab-media" style="display: none;">
|
||||
<header >
|
||||
<div>
|
||||
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('mediaContent')">Copy</button>
|
||||
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('mediaContent', 'media.json')">Download</button>
|
||||
</div>
|
||||
</header>
|
||||
<pre><code id="mediaContent" class="language-json hljs"></code></pre>
|
||||
</div>
|
||||
|
||||
<div class="tab-content" id="tab-extractedContent" style="display: none;">
|
||||
<header >
|
||||
<div>
|
||||
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('extractedContentContent')">Copy</button>
|
||||
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('extractedContentContent', 'extracted_content.json')">Download</button>
|
||||
</div>
|
||||
</header>
|
||||
<pre><code id="extractedContentContent" class="language-json hljs"></code></pre>
|
||||
</div>
|
||||
|
||||
<div class="tab-content" id="tab-screenshot" style="display: none;">
|
||||
<header >
|
||||
<div>
|
||||
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadImage('screenshotContent', 'screenshot.png')">Download</button>
|
||||
</div>
|
||||
</header>
|
||||
<pre><img id="screenshotContent" /></pre>
|
||||
</div>
|
||||
|
||||
<div class="tab-content" id="tab-pythonCode" style="display: none;">
|
||||
<header >
|
||||
<div>
|
||||
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('pythonCode')">Copy</button>
|
||||
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('pythonCode', 'example.py')">Download</button>
|
||||
</div>
|
||||
</header>
|
||||
<pre><code id="pythonCode" class="language-python hljs"></code></pre>
|
||||
</div>
|
||||
</div>
|
||||
</section>
|
||||
|
||||
<div id="error" class="error-message" style="display: none; margin-top:1em;">
|
||||
<div class="terminal-alert terminal-alert-error"></div>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
function showTab(tabId) {
|
||||
const tabs = document.querySelectorAll('.tab-content');
|
||||
tabs.forEach(tab => tab.style.display = 'none');
|
||||
document.getElementById(`tab-${tabId}`).style.display = 'block';
|
||||
}
|
||||
|
||||
function redo(codeBlock, codeText){
|
||||
codeBlock.classList.remove('hljs');
|
||||
codeBlock.removeAttribute('data-highlighted');
|
||||
|
||||
// Set new code and re-highlight
|
||||
codeBlock.textContent = codeText;
|
||||
hljs.highlightBlock(codeBlock);
|
||||
}
|
||||
|
||||
function copyToClipboard(elementId) {
|
||||
const content = document.getElementById(elementId).textContent;
|
||||
navigator.clipboard.writeText(content).then(() => {
|
||||
alert('Copied to clipboard');
|
||||
});
|
||||
}
|
||||
|
||||
function downloadContent(elementId, filename) {
|
||||
const content = document.getElementById(elementId).textContent;
|
||||
const blob = new Blob([content], { type: 'text/plain' });
|
||||
const url = window.URL.createObjectURL(blob);
|
||||
const a = document.createElement('a');
|
||||
a.style.display = 'none';
|
||||
a.href = url;
|
||||
a.download = filename;
|
||||
document.body.appendChild(a);
|
||||
a.click();
|
||||
window.URL.revokeObjectURL(url);
|
||||
document.body.removeChild(a);
|
||||
}
|
||||
|
||||
function downloadImage(elementId, filename) {
|
||||
const content = document.getElementById(elementId).src;
|
||||
const a = document.createElement('a');
|
||||
a.style.display = 'none';
|
||||
a.href = content;
|
||||
a.download = filename;
|
||||
document.body.appendChild(a);
|
||||
a.click();
|
||||
document.body.removeChild(a);
|
||||
}
|
||||
|
||||
document.getElementById('crawlForm').addEventListener('submit', function(event) {
|
||||
event.preventDefault();
|
||||
document.getElementById('loading').style.display = 'block';
|
||||
document.getElementById('response').style.display = 'none';
|
||||
|
||||
const url = document.getElementById('url').value;
|
||||
const screenshot = document.getElementById('screenshot').checked;
|
||||
const data = {
|
||||
urls: [url],
|
||||
bypass_cache: false,
|
||||
word_count_threshold: 5,
|
||||
screenshot: screenshot
|
||||
};
|
||||
|
||||
fetch('/crawl', {
|
||||
method: 'POST',
|
||||
headers: {
|
||||
'Content-Type': 'application/json'
|
||||
},
|
||||
body: JSON.stringify(data)
|
||||
})
|
||||
.then(response => {
|
||||
if (!response.ok) {
|
||||
if (response.status === 429) {
|
||||
return response.json().then(err => {
|
||||
throw Object.assign(new Error('Rate limit exceeded'), { status: 429, details: err });
|
||||
});
|
||||
}
|
||||
throw new Error('Network response was not ok');
|
||||
}
|
||||
return response.json();
|
||||
})
|
||||
.then(data => {
|
||||
data = data.results[0]; // Only one URL is requested
|
||||
document.getElementById('loading').style.display = 'none';
|
||||
document.getElementById('response').style.display = 'block';
|
||||
redo(document.getElementById('markdownContent'), data.markdown);
|
||||
redo(document.getElementById('cleanedHtmlContent'), data.cleaned_html);
|
||||
redo(document.getElementById('mediaContent'), JSON.stringify(data.media, null, 2));
|
||||
redo(document.getElementById('extractedContentContent'), data.extracted_content);
|
||||
if (screenshot) {
|
||||
document.getElementById('screenshotContent').src = `data:image/png;base64,${data.screenshot}`;
|
||||
}
|
||||
const pythonCode = `
|
||||
from crawl4ai.web_crawler import WebCrawler
|
||||
|
||||
crawler = WebCrawler()
|
||||
crawler.warmup()
|
||||
|
||||
result = crawler.run(
|
||||
url='${url}',
|
||||
screenshot=${screenshot}
|
||||
)
|
||||
print(result)
|
||||
`;
|
||||
redo(document.getElementById('pythonCode'), pythonCode);
|
||||
document.getElementById('error').style.display = 'none';
|
||||
})
|
||||
.catch(error => {
|
||||
document.getElementById('loading').style.display = 'none';
|
||||
document.getElementById('error').style.display = 'block';
|
||||
let errorMessage = 'An unexpected error occurred. Please try again later.';
|
||||
|
||||
if (error.status === 429) {
|
||||
const details = error.details;
|
||||
if (details.retry_after) {
|
||||
errorMessage = `Rate limit exceeded. Please wait ${parseFloat(details.retry_after).toFixed(1)} seconds before trying again.`;
|
||||
} else if (details.reset_at) {
|
||||
const resetTime = new Date(details.reset_at);
|
||||
const waitTime = Math.ceil((resetTime - new Date()) / 1000);
|
||||
errorMessage = `Rate limit exceeded. Please try again after ${waitTime} seconds.`;
|
||||
} else {
|
||||
errorMessage = `Rate limit exceeded. Please try again later.`;
|
||||
}
|
||||
} else if (error.message) {
|
||||
errorMessage = error.message;
|
||||
}
|
||||
|
||||
document.querySelector('#error .terminal-alert').textContent = errorMessage;
|
||||
});
|
||||
});
|
||||
</script>
|
||||
</div>
|
||||
100
docs/md/examples/hooks_auth.md
Normal file
@@ -0,0 +1,100 @@
|
||||
# Hooks & Auth
|
||||
|
||||
Crawl4AI allows you to customize the behavior of the web crawler using hooks. Hooks are functions that are called at specific points in the crawling process, allowing you to modify the crawler's behavior or perform additional actions. This example demonstrates how to use various hooks to customize the crawling process.
|
||||
|
||||
## Example: Using Crawler Hooks
|
||||
|
||||
Let's see how we can customize the crawler using hooks! In this example, we'll:
|
||||
|
||||
1. Maximize the browser window and log in to a website when the driver is created.
|
||||
2. Add a custom header before fetching the URL.
|
||||
3. Log the current URL after fetching it.
|
||||
4. Log the length of the HTML before returning it.
|
||||
|
||||
### Hook Definitions
|
||||
|
||||
```python
|
||||
from crawl4ai.web_crawler import WebCrawler
|
||||
from crawl4ai.crawler_strategy import *
|
||||
|
||||
def on_driver_created(driver):
|
||||
print("[HOOK] on_driver_created")
|
||||
# Example customization: maximize the window
|
||||
driver.maximize_window()
|
||||
|
||||
# Example customization: logging in to a hypothetical website
|
||||
driver.get('https://example.com/login')
|
||||
|
||||
from selenium.webdriver.support.ui import WebDriverWait
|
||||
from selenium.webdriver.common.by import By
|
||||
from selenium.webdriver.support import expected_conditions as EC
|
||||
|
||||
WebDriverWait(driver, 10).until(
|
||||
EC.presence_of_element_located((By.NAME, 'username'))
|
||||
)
|
||||
driver.find_element(By.NAME, 'username').send_keys('testuser')
|
||||
driver.find_element(By.NAME, 'password').send_keys('password123')
|
||||
driver.find_element(By.NAME, 'login').click()
|
||||
WebDriverWait(driver, 10).until(
|
||||
EC.presence_of_element_located((By.ID, 'welcome'))
|
||||
)
|
||||
# Add a custom cookie
|
||||
driver.add_cookie({'name': 'test_cookie', 'value': 'cookie_value'})
|
||||
return driver
|
||||
|
||||
|
||||
def before_get_url(driver):
|
||||
print("[HOOK] before_get_url")
|
||||
# Example customization: add a custom header
|
||||
# Enable Network domain for sending headers
|
||||
driver.execute_cdp_cmd('Network.enable', {})
|
||||
# Add a custom header
|
||||
driver.execute_cdp_cmd('Network.setExtraHTTPHeaders', {'headers': {'X-Test-Header': 'test'}})
|
||||
return driver
|
||||
|
||||
def after_get_url(driver):
|
||||
print("[HOOK] after_get_url")
|
||||
# Example customization: log the URL
|
||||
print(driver.current_url)
|
||||
return driver
|
||||
|
||||
def before_return_html(driver, html):
|
||||
print("[HOOK] before_return_html")
|
||||
# Example customization: log the HTML
|
||||
print(len(html))
|
||||
return driver
|
||||
```
|
||||
|
||||
### Using the Hooks with the WebCrawler
|
||||
|
||||
```python
|
||||
print("\n🔗 [bold cyan]Using Crawler Hooks: Let's see how we can customize the crawler using hooks![/bold cyan]", True)
|
||||
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
|
||||
crawler_strategy.set_hook('on_driver_created', on_driver_created)
|
||||
crawler_strategy.set_hook('before_get_url', before_get_url)
|
||||
crawler_strategy.set_hook('after_get_url', after_get_url)
|
||||
crawler_strategy.set_hook('before_return_html', before_return_html)
|
||||
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
|
||||
crawler.warmup()
|
||||
|
||||
result = crawler.run(url="https://example.com")
|
||||
|
||||
print("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
|
||||
print(result)
|
||||
```
|
||||
|
||||
### Explanation
|
||||
|
||||
- `on_driver_created`: This hook is called when the Selenium driver is created. In this example, it maximizes the window, logs in to a website, and adds a custom cookie.
|
||||
- `before_get_url`: This hook is called right before Selenium fetches the URL. In this example, it adds a custom HTTP header.
|
||||
- `after_get_url`: This hook is called after Selenium fetches the URL. In this example, it logs the current URL.
|
||||
- `before_return_html`: This hook is called before returning the HTML content. In this example, it logs the length of the HTML content.
|
||||
|
||||
### Additional Ideas
|
||||
|
||||
- **Add custom headers to requests**: You can add custom headers to the requests using the `before_get_url` hook.
|
||||
- **Perform safety checks**: Use the hooks to perform safety checks before the crawling process starts.
|
||||
- **Modify the HTML content**: Use the `before_return_html` hook to modify the HTML content before it is returned.
|
||||
- **Log additional information**: Use the hooks to log additional information for debugging or monitoring purposes.
|
||||
|
||||
By using these hooks, you can customize the behavior of the crawler to suit your specific needs.
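
For instance, the "perform safety checks" and "log additional information" ideas can be combined in a single `before_return_html` hook. The sketch below only inspects the HTML and logs warnings, so it does not depend on how the crawler uses the hook's return value; the keyword checks and messages are illustrative placeholders, not part of the original example.

```python
def check_page_before_return(driver, html):
    print("[HOOK] before_return_html (safety check)")
    lowered = html.lower()
    # Illustrative checks: warn if the page looks like a block page or a CAPTCHA wall
    if "access denied" in lowered or "captcha" in lowered:
        print(f"[WARN] Page at {driver.current_url} may be blocked")
    print(f"[LOG] HTML length: {len(html)}")
    return driver

crawler_strategy.set_hook('before_return_html', check_page_before_return)
```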
|
||||
29
docs/md/examples/index.md
Normal file
@@ -0,0 +1,29 @@
|
||||
# Examples
|
||||
|
||||
Welcome to the examples section of Crawl4AI documentation! In this section, you will find practical examples demonstrating how to use Crawl4AI for various web crawling and data extraction tasks. Each example is designed to showcase different features and capabilities of the library.
|
||||
|
||||
## Examples Index
|
||||
|
||||
### [LLM Extraction](llm_extraction.md)
|
||||
|
||||
This example demonstrates how to use Crawl4AI to extract information using Large Language Models (LLMs). You will learn how to configure the `LLMExtractionStrategy` to get structured data from web pages.
|
||||
|
||||
### [JS Execution & CSS Filtering](js_execution_css_filtering.md)
|
||||
|
||||
Learn how to execute custom JavaScript code and filter data using CSS selectors. This example shows how to perform complex web interactions and extract specific content from web pages.
|
||||
|
||||
### [Hooks & Auth](hooks_auth.md)
|
||||
|
||||
This example covers the use of custom hooks for authentication and other pre-crawling tasks. You will see how to set up hooks to modify headers, authenticate sessions, and perform other preparatory actions before crawling.
|
||||
|
||||
### [Summarization](summarization.md)
|
||||
|
||||
Discover how to use Crawl4AI to summarize web page content. This example demonstrates the summarization capabilities of the library, helping you extract concise information from lengthy web pages.
|
||||
|
||||
### [Research Assistant](research_assistant.md)
|
||||
|
||||
In this example, Crawl4AI is used as a research assistant to gather and organize information from multiple sources. You will learn how to use various extraction and chunking strategies to compile a comprehensive report.
|
||||
|
||||
---
|
||||
|
||||
Each example includes detailed explanations and code snippets to help you understand and implement the features in your projects. Click on the links to explore each example and start making the most of Crawl4AI!
|
||||
44
docs/md/examples/js_execution_css_filtering.md
Normal file
@@ -0,0 +1,44 @@
|
||||
# JS Execution & CSS Filtering
|
||||
|
||||
In this example, we'll demonstrate how to use Crawl4AI to execute JavaScript, filter data with CSS selectors, and use a cosine similarity strategy to extract relevant content. This approach is particularly useful when you need to interact with dynamic content on web pages, such as clicking "Load More" buttons.
|
||||
|
||||
## Example: Extracting Structured Data
|
||||
|
||||
```python
|
||||
# Import necessary modules
|
||||
from crawl4ai import WebCrawler
|
||||
from crawl4ai.chunking_strategy import *
|
||||
from crawl4ai.extraction_strategy import *
|
||||
from crawl4ai.crawler_strategy import *
|
||||
|
||||
# Define the JavaScript code to click the "Load More" button
|
||||
js_code = ["""
|
||||
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
|
||||
loadMoreButton && loadMoreButton.click();
|
||||
"""]
|
||||
|
||||
crawler = WebCrawler(verbose=True)
|
||||
crawler.warmup()
|
||||
# Run the crawler with keyword filtering and CSS selector
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
js=js_code,
|
||||
css_selector="p",
|
||||
extraction_strategy=CosineStrategy(
|
||||
semantic_filter="technology",
|
||||
),
|
||||
)
|
||||
|
||||
# Display the extracted result
|
||||
print(result)
|
||||
```
|
||||
|
||||
### Explanation
|
||||
|
||||
1. **JavaScript Execution**: The `js_code` variable contains JavaScript code that simulates clicking a "Load More" button. This is useful for loading additional content dynamically.
|
||||
2. **CSS Selector**: The `css_selector="p"` parameter ensures that only paragraph (`<p>`) tags are extracted from the web page.
|
||||
3. **Extraction Strategy**: The `CosineStrategy` is used with a semantic filter for "technology" to extract relevant content based on cosine similarity.
|
||||
|
||||
## Try It Yourself
|
||||
|
||||
This example demonstrates the power and flexibility of Crawl4AI in handling complex web interactions and extracting meaningful data. You can customize the JavaScript code, CSS selectors, and extraction strategies to suit your specific requirements.
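
As a sketch of such customization (assuming the `crawler` and imports from the example above), you could scroll the page instead of clicking a button and filter for finance-related paragraphs. The `window.scrollTo` snippet and the `semantic_filter` value are illustrative choices, not part of the original example.

```python
# Scroll to the bottom to trigger lazy loading, then keep only
# paragraphs that are semantically close to "finance".
js_code = ["window.scrollTo(0, document.body.scrollHeight);"]

result = crawler.run(
    url="https://www.nbcnews.com/business",
    js=js_code,
    css_selector="p",
    extraction_strategy=CosineStrategy(semantic_filter="finance"),
)
print(result.extracted_content)
```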
|
||||
90
docs/md/examples/llm_extraction.md
Normal file
@@ -0,0 +1,90 @@
|
||||
# LLM Extraction
|
||||
|
||||
Crawl4AI allows you to use Language Models (LLMs) to extract structured data or relevant content from web pages. Below are two examples demonstrating how to use LLMExtractionStrategy for different purposes.
|
||||
|
||||
## Example 1: Extract Structured Data
|
||||
|
||||
In this example, we use the `LLMExtractionStrategy` to extract structured data (model names and their fees) from the OpenAI pricing page.
|
||||
|
||||
```python
|
||||
import os
import time
import json
|
||||
from crawl4ai.web_crawler import WebCrawler
|
||||
from crawl4ai.chunking_strategy import *
|
||||
from crawl4ai.extraction_strategy import *
|
||||
from crawl4ai.crawler_strategy import *
|
||||
|
||||
url = r'https://openai.com/api/pricing/'
|
||||
|
||||
crawler = WebCrawler()
|
||||
crawler.warmup()
|
||||
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
class OpenAIModelFee(BaseModel):
|
||||
model_name: str = Field(..., description="Name of the OpenAI model.")
|
||||
input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
|
||||
output_fee: str = Field(..., description="Fee for output token for the OpenAI model.")
|
||||
|
||||
result = crawler.run(
|
||||
url=url,
|
||||
word_count_threshold=1,
|
||||
extraction_strategy= LLMExtractionStrategy(
|
||||
provider= "openai/gpt-4o", api_token = os.getenv('OPENAI_API_KEY'),
|
||||
schema=OpenAIModelFee.model_json_schema(),
|
||||
extraction_type="schema",
|
||||
instruction="From the crawled content, extract all mentioned model names along with their "\
|
||||
"fees for input and output tokens. Make sure not to miss anything in the entire content. "\
|
||||
'One extracted model JSON format should look like this: '\
|
||||
'{ "model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens" }'
|
||||
),
|
||||
bypass_cache=True,
|
||||
)
|
||||
|
||||
model_fees = json.loads(result.extracted_content)
|
||||
|
||||
print(len(model_fees))
|
||||
|
||||
with open(".data/data.json", "w", encoding="utf-8") as f:
|
||||
f.write(result.extracted_content)
|
||||
```
|
||||
|
||||
## Example 2: Extract Relevant Content
|
||||
|
||||
In this example, we instruct the LLM to extract only content related to technology from the NBC News business page.
|
||||
|
||||
```python
|
||||
crawler = WebCrawler()
|
||||
crawler.warmup()
|
||||
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
extraction_strategy=LLMExtractionStrategy(
|
||||
provider="openai/gpt-4o",
|
||||
api_token=os.getenv('OPENAI_API_KEY'),
|
||||
instruction="Extract only content related to technology"
|
||||
),
|
||||
bypass_cache=True,
|
||||
)
|
||||
|
||||
tech_content = json.loads(result.extracted_content)

print(len(tech_content))
|
||||
|
||||
with open(".data/data.json", "w", encoding="utf-8") as f:
|
||||
f.write(result.extracted_content)
|
||||
```
|
||||
|
||||
## Customizing LLM Provider
|
||||
|
||||
Under the hood, Crawl4AI uses the `litellm` library, which allows you to use any LLM provider you want. Just pass the correct model name and API token.
|
||||
|
||||
```python
|
||||
extraction_strategy=LLMExtractionStrategy(
|
||||
provider="your_llm_provider/model_name",
|
||||
api_token="your_api_token",
|
||||
instruction="Your extraction instruction"
|
||||
)
|
||||
```
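
For example, since `litellm` addresses providers with a `provider/model` string, pointing the strategy at Groq's `llama3-8b-8192` model (the same model used in the Research Assistant example) might look like the sketch below. The environment variable name is an assumption; use whatever key your provider expects.

```python
import os
from crawl4ai.extraction_strategy import LLMExtractionStrategy

# Sketch: routing extraction through Groq via litellm's provider/model naming.
strategy = LLMExtractionStrategy(
    provider="groq/llama3-8b-8192",        # litellm-style provider/model string
    api_token=os.getenv("GROQ_API_KEY"),   # assumed environment variable
    instruction="Extract only content related to technology",
)
```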
|
||||
|
||||
This flexibility allows you to integrate with various LLM providers and tailor the extraction process to your specific needs.
|
||||
248
docs/md/examples/research_assistant.md
Normal file
@@ -0,0 +1,248 @@
|
||||
## Research Assistant Example
|
||||
|
||||
This example demonstrates how to build a research assistant using `Chainlit` and `Crawl4AI`. The assistant will be capable of crawling web pages for information and answering questions based on the crawled content. Additionally, it integrates speech-to-text functionality for audio inputs.
|
||||
|
||||
### Step-by-Step Guide
|
||||
|
||||
1. **Install Required Packages**
|
||||
|
||||
Ensure you have the necessary packages installed. You need `chainlit`, `groq`, `requests`, and `openai`.
|
||||
|
||||
```bash
|
||||
pip install chainlit groq requests openai
|
||||
```
|
||||
|
||||
2. **Import Libraries**
|
||||
|
||||
Import all the necessary modules and initialize the OpenAI client.
|
||||
|
||||
```python
|
||||
import os
|
||||
import time
|
||||
from openai import AsyncOpenAI
|
||||
import chainlit as cl
|
||||
import re
|
||||
import requests
|
||||
from io import BytesIO
|
||||
from chainlit.element import ElementBased
|
||||
from groq import Groq
|
||||
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
|
||||
client = AsyncOpenAI(base_url="https://api.groq.com/openai/v1", api_key=os.getenv("GROQ_API_KEY"))
|
||||
|
||||
# Instrument the OpenAI client
|
||||
cl.instrument_openai()
|
||||
```
|
||||
|
||||
3. **Set Configuration**
|
||||
|
||||
Define the model settings for the assistant.
|
||||
|
||||
```python
|
||||
settings = {
|
||||
"model": "llama3-8b-8192",
|
||||
"temperature": 0.5,
|
||||
"max_tokens": 500,
|
||||
"top_p": 1,
|
||||
"frequency_penalty": 0,
|
||||
"presence_penalty": 0,
|
||||
}
|
||||
```
|
||||
|
||||
4. **Define Utility Functions**
|
||||
|
||||
- **Extract URLs from Text**: Use regex to find URLs in messages.
|
||||
|
||||
```python
|
||||
def extract_urls(text):
|
||||
url_pattern = re.compile(r'(https?://\S+)')
|
||||
return url_pattern.findall(text)
|
||||
```
|
||||
|
||||
- **Crawl URL**: Send a request to `Crawl4AI` to fetch the content of a URL.
|
||||
|
||||
```python
|
||||
def crawl_url(url):
|
||||
data = {
|
||||
"urls": [url],
|
||||
"include_raw_html": True,
|
||||
"word_count_threshold": 10,
|
||||
"extraction_strategy": "NoExtractionStrategy",
|
||||
"chunking_strategy": "RegexChunking"
|
||||
}
|
||||
response = requests.post("https://crawl4ai.com/crawl", json=data)
|
||||
response_data = response.json()
|
||||
response_data = response_data['results'][0]
|
||||
return response_data['markdown']
|
||||
```
|
||||
|
||||
5. **Initialize Chat Start Event**
|
||||
|
||||
Set up the initial chat message and user session.
|
||||
|
||||
```python
|
||||
@cl.on_chat_start
|
||||
async def on_chat_start():
|
||||
cl.user_session.set("session", {
|
||||
"history": [],
|
||||
"context": {}
|
||||
})
|
||||
await cl.Message(
|
||||
content="Welcome to the chat! How can I assist you today?"
|
||||
).send()
|
||||
```
|
||||
|
||||
6. **Handle Incoming Messages**
|
||||
|
||||
Process user messages, extract URLs, and crawl them concurrently. Update the chat history and system message.
|
||||
|
||||
```python
|
||||
@cl.on_message
|
||||
async def on_message(message: cl.Message):
|
||||
user_session = cl.user_session.get("session")
|
||||
|
||||
# Extract URLs from the user's message
|
||||
urls = extract_urls(message.content)
|
||||
|
||||
futures = []
|
||||
with ThreadPoolExecutor() as executor:
|
||||
for url in urls:
|
||||
futures.append(executor.submit(crawl_url, url))
|
||||
|
||||
results = [future.result() for future in futures]
|
||||
|
||||
for url, result in zip(urls, results):
|
||||
ref_number = f"REF_{len(user_session['context']) + 1}"
|
||||
user_session["context"][ref_number] = {
|
||||
"url": url,
|
||||
"content": result
|
||||
}
|
||||
|
||||
user_session["history"].append({
|
||||
"role": "user",
|
||||
"content": message.content
|
||||
})
|
||||
|
||||
# Create a system message that includes the context
|
||||
context_messages = [
|
||||
f'<appendix ref="{ref}">\n{data["content"]}\n</appendix>'
|
||||
for ref, data in user_session["context"].items()
|
||||
]
|
||||
if context_messages:
|
||||
system_message = {
|
||||
"role": "system",
|
||||
"content": (
|
||||
"You are a helpful bot. Use the following context for answering questions. "
|
||||
"Refer to the sources using the REF number in square brackets, e.g., [1], only if the source is given in the appendices below.\n\n"
|
||||
"If the question requires any information from the provided appendices or context, refer to the sources. "
|
||||
"If not, there is no need to add a references section. "
|
||||
"At the end of your response, provide a reference section listing the URLs and their REF numbers only if sources from the appendices were used.\n\n"
|
||||
"\n\n".join(context_messages)
|
||||
)
|
||||
}
|
||||
else:
|
||||
system_message = {
|
||||
"role": "system",
|
||||
"content": "You are a helpful assistant."
|
||||
}
|
||||
|
||||
msg = cl.Message(content="")
|
||||
await msg.send()
|
||||
|
||||
# Get response from the LLM
|
||||
stream = await client.chat.completions.create(
|
||||
messages=[
|
||||
system_message,
|
||||
*user_session["history"]
|
||||
],
|
||||
stream=True,
|
||||
**settings
|
||||
)
|
||||
|
||||
assistant_response = ""
|
||||
async for part in stream:
|
||||
if token := part.choices[0].delta.content:
|
||||
assistant_response += token
|
||||
await msg.stream_token(token)
|
||||
|
||||
# Add assistant message to the history
|
||||
user_session["history"].append({
|
||||
"role": "assistant",
|
||||
"content": assistant_response
|
||||
})
|
||||
await msg.update()
|
||||
|
||||
# Append the reference section to the assistant's response
|
||||
reference_section = "\n\nReferences:\n"
|
||||
for ref, data in user_session["context"].items():
|
||||
reference_section += f"[{ref.split('_')[1]}]: {data['url']}\n"
|
||||
|
||||
msg.content += reference_section
|
||||
await msg.update()
|
||||
```
|
||||
|
||||
7. **Handle Audio Input**
|
||||
|
||||
Capture and transcribe audio input. Store the audio buffer and transcribe it when the audio ends.
|
||||
|
||||
```python
|
||||
@cl.on_audio_chunk
|
||||
async def on_audio_chunk(chunk: cl.AudioChunk):
|
||||
if chunk.isStart:
|
||||
buffer = BytesIO()
|
||||
buffer.name = f"input_audio.{chunk.mimeType.split('/')[1]}"
|
||||
cl.user_session.set("audio_buffer", buffer)
|
||||
cl.user_session.set("audio_mime_type", chunk.mimeType)
|
||||
|
||||
cl.user_session.get("audio_buffer").write(chunk.data)
|
||||
|
||||
@cl.step(type="tool")
|
||||
async def speech_to_text(audio_file):
|
||||
cli = Groq()
|
||||
response = await client.audio.transcriptions.create(
|
||||
model="whisper-large-v3", file=audio_file
|
||||
)
|
||||
return response.text
|
||||
|
||||
@cl.on_audio_end
|
||||
async def on_audio_end(elements: list[ElementBased]):
|
||||
audio_buffer: BytesIO = cl.user_session.get("audio_buffer")
|
||||
audio_buffer.seek(0)
|
||||
audio_file = audio_buffer.read()
|
||||
audio_mime_type: str = cl.user_session.get("audio_mime_type")
|
||||
|
||||
start_time = time.time()
|
||||
transcription = await speech_to_text((audio_buffer.name, audio_file, audio_mime_type))
|
||||
end_time = time.time()
|
||||
print(f"Transcription took {end_time - start_time} seconds")
|
||||
|
||||
user_msg = cl.Message(
|
||||
author="You",
|
||||
type="user_message",
|
||||
content=transcription
|
||||
)
|
||||
await user_msg.send()
|
||||
await on_message(user_msg)
|
||||
```
|
||||
|
||||
8. **Run the Chat Application**
|
||||
|
||||
Start the Chainlit application.
|
||||
|
||||
```python
|
||||
if __name__ == "__main__":
|
||||
from chainlit.cli import run_chainlit
|
||||
run_chainlit(__file__)
|
||||
```
|
||||
|
||||
### Explanation
|
||||
|
||||
- **Libraries and Configuration**: Import necessary libraries and configure the OpenAI client.
|
||||
- **Utility Functions**: Define functions to extract URLs and crawl them.
|
||||
- **Chat Start Event**: Initialize chat session and welcome message.
|
||||
- **Message Handling**: Extract URLs, crawl them concurrently, and update chat history and context.
|
||||
- **Audio Handling**: Capture, buffer, and transcribe audio input, then process the transcription as text.
|
||||
- **Running the Application**: Start the Chainlit server to interact with the assistant.
|
||||
|
||||
This example showcases how to create an interactive research assistant that can fetch, process, and summarize web content, along with handling audio inputs for a seamless user experience.
|
||||
108
docs/md/examples/summarization.md
Normal file
@@ -0,0 +1,108 @@
|
||||
## Summarization Example
|
||||
|
||||
This example demonstrates how to use `Crawl4AI` to extract a summary from a web page. The goal is to obtain the title, a detailed summary, a brief summary, and a list of keywords from the given page.
|
||||
|
||||
### Step-by-Step Guide
|
||||
|
||||
1. **Import Necessary Modules**
|
||||
|
||||
First, import the necessary modules and classes.
|
||||
|
||||
```python
|
||||
import os
|
||||
import time
|
||||
import json
|
||||
from crawl4ai.web_crawler import WebCrawler
|
||||
from crawl4ai.chunking_strategy import *
|
||||
from crawl4ai.extraction_strategy import *
|
||||
from crawl4ai.crawler_strategy import *
|
||||
from pydantic import BaseModel, Field
|
||||
```
|
||||
|
||||
2. **Define the URL to be Crawled**
|
||||
|
||||
Set the URL of the web page you want to summarize.
|
||||
|
||||
```python
|
||||
url = r'https://marketplace.visualstudio.com/items?itemName=Unclecode.groqopilot'
|
||||
```
|
||||
|
||||
3. **Initialize the WebCrawler**
|
||||
|
||||
Create an instance of the `WebCrawler` and call the `warmup` method.
|
||||
|
||||
```python
|
||||
crawler = WebCrawler()
|
||||
crawler.warmup()
|
||||
```
|
||||
|
||||
4. **Define the Data Model**
|
||||
|
||||
Use Pydantic to define the structure of the extracted data.
|
||||
|
||||
```python
|
||||
class PageSummary(BaseModel):
|
||||
title: str = Field(..., description="Title of the page.")
|
||||
summary: str = Field(..., description="Summary of the page.")
|
||||
brief_summary: str = Field(..., description="Brief summary of the page.")
|
||||
keywords: list = Field(..., description="Keywords assigned to the page.")
|
||||
```
|
||||
|
||||
5. **Run the Crawler**
|
||||
|
||||
Set up and run the crawler with the `LLMExtractionStrategy`. Provide the necessary parameters, including the schema for the extracted data and the instruction for the LLM.
|
||||
|
||||
```python
|
||||
result = crawler.run(
|
||||
url=url,
|
||||
word_count_threshold=1,
|
||||
extraction_strategy=LLMExtractionStrategy(
|
||||
provider="openai/gpt-4o",
|
||||
api_token=os.getenv('OPENAI_API_KEY'),
|
||||
schema=PageSummary.model_json_schema(),
|
||||
extraction_type="schema",
|
||||
apply_chunking=False,
|
||||
instruction=(
|
||||
"From the crawled content, extract the following details: "
|
||||
"1. Title of the page "
|
||||
"2. Summary of the page, which is a detailed summary "
|
||||
"3. Brief summary of the page, which is a paragraph text "
|
||||
"4. Keywords assigned to the page, which is a list of keywords. "
|
||||
'The extracted JSON format should look like this: '
|
||||
'{ "title": "Page Title", "summary": "Detailed summary of the page.", '
|
||||
'"brief_summary": "Brief summary in a paragraph.", "keywords": ["keyword1", "keyword2", "keyword3"] }'
|
||||
)
|
||||
),
|
||||
bypass_cache=True,
|
||||
)
|
||||
```
|
||||
|
||||
6. **Process the Extracted Data**
|
||||
|
||||
Load the extracted content into a JSON object and print it.
|
||||
|
||||
```python
|
||||
page_summary = json.loads(result.extracted_content)
|
||||
print(page_summary)
|
||||
```
|
||||
|
||||
7. **Save the Extracted Data**
|
||||
|
||||
Save the extracted data to a file for further use.
|
||||
|
||||
```python
|
||||
with open(".data/page_summary.json", "w", encoding="utf-8") as f:
|
||||
f.write(result.extracted_content)
|
||||
```
|
||||
|
||||
### Explanation
|
||||
|
||||
- **Importing Modules**: Import the necessary modules, including `WebCrawler` and `LLMExtractionStrategy` from `Crawl4AI`.
|
||||
- **URL Definition**: Set the URL of the web page you want to crawl and summarize.
|
||||
- **WebCrawler Initialization**: Create an instance of `WebCrawler` and call the `warmup` method to prepare the crawler.
|
||||
- **Data Model Definition**: Define the structure of the data you want to extract using Pydantic's `BaseModel`.
|
||||
- **Crawler Execution**: Run the crawler with the `LLMExtractionStrategy`, providing the schema and detailed instructions for the extraction process.
|
||||
- **Data Processing**: Load the extracted content into a JSON object and print it to verify the results.
|
||||
- **Data Saving**: Save the extracted data to a file for further use.
|
||||
|
||||
This example demonstrates how to harness the power of `Crawl4AI` to perform advanced web crawling and data extraction tasks with minimal code.
|
||||
138
docs/md/full_details/advanced_features.md
Normal file
@@ -0,0 +1,138 @@
|
||||
# Advanced Features
|
||||
|
||||
Crawl4AI offers a range of advanced features that allow you to fine-tune your web crawling and data extraction process. This section will cover some of these advanced features, including taking screenshots, extracting media and links, customizing the user agent, using custom hooks, and leveraging CSS selectors.
|
||||
|
||||
## Taking Screenshots 📸
|
||||
|
||||
One of the cool features of Crawl4AI is the ability to take screenshots of the web pages you're crawling. This can be particularly useful for visual verification or for capturing the state of dynamic content.
|
||||
|
||||
Here's how you can take a screenshot:
|
||||
|
||||
```python
|
||||
from crawl4ai import WebCrawler
|
||||
import base64
|
||||
|
||||
# Create the WebCrawler instance
|
||||
crawler = WebCrawler()
|
||||
crawler.warmup()
|
||||
|
||||
# Run the crawler with the screenshot parameter
|
||||
result = crawler.run(url="https://www.nbcnews.com/business", screenshot=True)
|
||||
|
||||
# Save the screenshot to a file
|
||||
with open("screenshot.png", "wb") as f:
|
||||
f.write(base64.b64decode(result.screenshot))
|
||||
|
||||
print("Screenshot saved to 'screenshot.png'!")
|
||||
```
|
||||
|
||||
In this example, we create a `WebCrawler` instance, warm it up, and then run it with the `screenshot` parameter set to `True`. The screenshot is saved as a base64 encoded string in the result, which we then decode and save as a PNG file.
|
||||
|
||||
## Extracting Media and Links 🎨🔗
|
||||
|
||||
Crawl4AI can extract all media tags (images, audio, and video) and links (both internal and external) from a web page. This feature is useful for collecting multimedia content or analyzing link structures.
|
||||
|
||||
Here's an example:
|
||||
|
||||
```python
|
||||
from crawl4ai import WebCrawler
|
||||
|
||||
# Create the WebCrawler instance
|
||||
crawler = WebCrawler()
|
||||
crawler.warmup()
|
||||
|
||||
# Run the crawler
|
||||
result = crawler.run(url="https://www.nbcnews.com/business")
|
||||
|
||||
print("Extracted media:", result.media)
|
||||
print("Extracted links:", result.links)
|
||||
```
|
||||
|
||||
In this example, the `result` object contains dictionaries for media and links, which you can access and use as needed.
|
||||
|
||||
## Customizing the User Agent 🕵️♂️
|
||||
|
||||
Crawl4AI allows you to set a custom user agent for your HTTP requests. This can help you avoid detection by web servers or simulate different browsing environments.
|
||||
|
||||
Here's how to set a custom user agent:
|
||||
|
||||
```python
|
||||
from crawl4ai import WebCrawler
|
||||
|
||||
# Create the WebCrawler instance
|
||||
crawler = WebCrawler()
|
||||
crawler.warmup()
|
||||
|
||||
# Run the crawler with a custom user agent
|
||||
result = crawler.run(url="https://www.nbcnews.com/business", user_agent="Mozilla/5.0 (compatible; MyCrawler/1.0)")
|
||||
|
||||
print("Crawl result:", result)
|
||||
```
|
||||
|
||||
In this example, we specify a custom user agent string when running the crawler.
|
||||
|
||||
## Using Custom Hooks 🪝
|
||||
|
||||
Hooks are a powerful feature in Crawl4AI that allow you to customize the crawling process at various stages. You can define hooks for actions such as driver initialization, before and after URL fetching, and before returning the HTML.
|
||||
|
||||
Here's an example of using hooks:
|
||||
|
||||
```python
|
||||
from crawl4ai import WebCrawler
|
||||
from selenium.webdriver.common.by import By
|
||||
from selenium.webdriver.support.ui import WebDriverWait
|
||||
from selenium.webdriver.support import expected_conditions as EC
|
||||
|
||||
# Define the hooks
|
||||
def on_driver_created(driver):
|
||||
driver.maximize_window()
|
||||
driver.get('https://example.com/login')
|
||||
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.NAME, 'username'))).send_keys('testuser')
|
||||
driver.find_element(By.NAME, 'password').send_keys('password123')
|
||||
driver.find_element(By.NAME, 'login').click()
|
||||
return driver
|
||||
|
||||
def before_get_url(driver):
|
||||
driver.execute_cdp_cmd('Network.setExtraHTTPHeaders', {'headers': {'X-Test-Header': 'test'}})
|
||||
return driver
|
||||
|
||||
# Create the WebCrawler instance
|
||||
crawler = WebCrawler()
|
||||
crawler.warmup()
|
||||
|
||||
# Set the hooks
|
||||
crawler.set_hook('on_driver_created', on_driver_created)
|
||||
crawler.set_hook('before_get_url', before_get_url)
|
||||
|
||||
# Run the crawler
|
||||
result = crawler.run(url="https://example.com")
|
||||
|
||||
print("Crawl result:", result)
|
||||
```
|
||||
|
||||
In this example, we define hooks to handle driver initialization and custom headers before fetching the URL.
|
||||
|
||||
## Using CSS Selectors 🎯
|
||||
|
||||
CSS selectors allow you to target specific elements on a web page for extraction. This can be useful for scraping structured content, such as articles or product details.
|
||||
|
||||
Here's an example of using a CSS selector:
|
||||
|
||||
```python
|
||||
from crawl4ai import WebCrawler
|
||||
|
||||
# Create the WebCrawler instance
|
||||
crawler = WebCrawler()
|
||||
crawler.warmup()
|
||||
|
||||
# Run the crawler with a CSS selector to extract only H2 tags
|
||||
result = crawler.run(url="https://www.nbcnews.com/business", css_selector="h2")
|
||||
|
||||
print("Extracted H2 tags:", result.extracted_content)
|
||||
```
|
||||
|
||||
In this example, we use the `css_selector` parameter to extract only the H2 tags from the web page.
|
||||
|
||||
---
|
||||
|
||||
With these advanced features, you can leverage Crawl4AI to perform sophisticated web crawling and data extraction tasks. Whether you need to take screenshots, extract specific elements, customize the crawling process, or set custom headers, Crawl4AI provides the flexibility and power to meet your needs. Happy crawling! 🕷️🚀
|
||||
133
docs/md/full_details/chunking_strategies.md
Normal file
@@ -0,0 +1,133 @@
|
||||
## Chunking Strategies 📚
|
||||
|
||||
Crawl4AI provides several powerful chunking strategies to divide text into manageable parts for further processing. Each strategy has unique characteristics and is suitable for different scenarios. Let's explore them one by one.
|
||||
|
||||
### RegexChunking
|
||||
|
||||
`RegexChunking` splits text using regular expressions. This is ideal for creating chunks based on specific patterns like paragraphs or sentences.
|
||||
|
||||
#### When to Use
|
||||
- Great for structured text with consistent delimiters.
|
||||
- Suitable for documents where specific patterns (e.g., double newlines, periods) indicate logical chunks.
|
||||
|
||||
#### Parameters
|
||||
- `patterns` (list, optional): Regular expressions used to split the text. Default is to split by double newlines (`['\n\n']`).
|
||||
|
||||
#### Example
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import RegexChunking
|
||||
|
||||
# Define patterns for splitting text
|
||||
patterns = [r'\n\n', r'\. ']
|
||||
chunker = RegexChunking(patterns=patterns)
|
||||
|
||||
# Sample text
|
||||
text = "This is a sample text. It will be split into chunks.\n\nThis is another paragraph."
|
||||
|
||||
# Chunk the text
|
||||
chunks = chunker.chunk(text)
|
||||
print(chunks)
|
||||
```
|
||||
|
||||
### NlpSentenceChunking
|
||||
|
||||
`NlpSentenceChunking` uses NLP models to split text into sentences, ensuring accurate sentence boundaries.
|
||||
|
||||
#### When to Use
|
||||
- Ideal for texts where sentence boundaries are crucial.
|
||||
- Useful for creating chunks that preserve grammatical structures.
|
||||
|
||||
#### Parameters
|
||||
- None.
|
||||
|
||||
#### Example
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import NlpSentenceChunking
|
||||
|
||||
chunker = NlpSentenceChunking()
|
||||
|
||||
# Sample text
|
||||
text = "This is a sample text. It will be split into sentences. Here's another sentence."
|
||||
|
||||
# Chunk the text
|
||||
chunks = chunker.chunk(text)
|
||||
print(chunks)
|
||||
```
|
||||
|
||||
### TopicSegmentationChunking
|
||||
|
||||
`TopicSegmentationChunking` employs the TextTiling algorithm to segment text into topic-based chunks. This method identifies thematic boundaries.
|
||||
|
||||
#### When to Use
|
||||
- Perfect for long documents with distinct topics.
|
||||
- Useful when preserving topic continuity is more important than maintaining text order.
|
||||
|
||||
#### Parameters
|
||||
- `num_keywords` (int, optional): Number of keywords for each topic segment. Default is `3`.
|
||||
|
||||
#### Example
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import TopicSegmentationChunking
|
||||
|
||||
chunker = TopicSegmentationChunking(num_keywords=3)
|
||||
|
||||
# Sample text
|
||||
text = "This document contains several topics. Topic one discusses AI. Topic two covers machine learning."
|
||||
|
||||
# Chunk the text
|
||||
chunks = chunker.chunk(text)
|
||||
print(chunks)
|
||||
```
|
||||
|
||||
### FixedLengthWordChunking
|
||||
|
||||
`FixedLengthWordChunking` splits text into chunks based on a fixed number of words. This ensures each chunk has approximately the same length.
|
||||
|
||||
#### When to Use
|
||||
- Suitable for processing large texts where uniform chunk size is important.
|
||||
- Useful when the number of words per chunk needs to be controlled.
|
||||
|
||||
#### Parameters
|
||||
- `chunk_size` (int, optional): Number of words per chunk. Default is `100`.
|
||||
|
||||
#### Example
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import FixedLengthWordChunking
|
||||
|
||||
chunker = FixedLengthWordChunking(chunk_size=10)
|
||||
|
||||
# Sample text
|
||||
text = "This is a sample text. It will be split into chunks of fixed length."
|
||||
|
||||
# Chunk the text
|
||||
chunks = chunker.chunk(text)
|
||||
print(chunks)
|
||||
```
|
||||
|
||||
### SlidingWindowChunking
|
||||
|
||||
`SlidingWindowChunking` uses a sliding window approach to create overlapping chunks. Each chunk has a fixed length, and the window slides by a specified step size.
|
||||
|
||||
#### When to Use
|
||||
- Ideal for creating overlapping chunks to preserve context.
|
||||
- Useful for tasks where context from adjacent chunks is needed.
|
||||
|
||||
#### Parameters
|
||||
- `window_size` (int, optional): Number of words in each chunk. Default is `100`.
|
||||
- `step` (int, optional): Number of words to slide the window. Default is `50`.
|
||||
|
||||
#### Example
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import SlidingWindowChunking
|
||||
|
||||
chunker = SlidingWindowChunking(window_size=10, step=5)
|
||||
|
||||
# Sample text
|
||||
text = "This is a sample text. It will be split using a sliding window approach to preserve context."
|
||||
|
||||
# Chunk the text
|
||||
chunks = chunker.chunk(text)
|
||||
print(chunks)
|
||||
```
|
||||
|
||||
With these chunking strategies, you can choose the best method to divide your text based on your specific needs. Whether you need precise sentence boundaries, topic-based segmentation, or uniform chunk sizes, Crawl4AI has you covered. Happy chunking! 📝✨
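
These strategies can also be handed straight to the crawler instead of being used standalone. A minimal sketch, assuming the `RegexChunking` example above and the `chunking_strategy` parameter described in the Crawl Request Parameters section:

```python
from crawl4ai import WebCrawler
from crawl4ai.chunking_strategy import RegexChunking

crawler = WebCrawler()
crawler.warmup()

# The chunking strategy is applied to the page text before extraction.
result = crawler.run(
    url="https://www.nbcnews.com/business",
    chunking_strategy=RegexChunking(patterns=[r'\n\n']),
)
print(result.extracted_content)
```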
|
||||
130
docs/md/full_details/crawl_request_parameters.md
Normal file
@@ -0,0 +1,130 @@
|
||||
# Crawl Request Parameters
|
||||
|
||||
The `run` function in Crawl4AI is designed to be highly configurable, allowing you to customize the crawling and extraction process to suit your needs. Below are the parameters you can use with the `run` function, along with their descriptions, possible values, and examples.
|
||||
|
||||
## Parameters
|
||||
|
||||
### url (str)
|
||||
**Description:** The URL of the webpage to crawl.
|
||||
**Required:** Yes
|
||||
**Example:**
|
||||
```python
|
||||
url = "https://www.nbcnews.com/business"
|
||||
```
|
||||
|
||||
### word_count_threshold (int)
|
||||
**Description:** The minimum number of words a block must contain to be considered meaningful. The default value is `5`.
|
||||
**Required:** No
|
||||
**Default Value:** `5`
|
||||
**Example:**
|
||||
```python
|
||||
word_count_threshold = 10
|
||||
```
|
||||
|
||||
### extraction_strategy (ExtractionStrategy)
|
||||
**Description:** The strategy to use for extracting content from the HTML. It must be an instance of `ExtractionStrategy`. If not provided, the default is `NoExtractionStrategy`.
|
||||
**Required:** No
|
||||
**Default Value:** `NoExtractionStrategy()`
|
||||
**Example:**
|
||||
```python
|
||||
extraction_strategy = CosineStrategy(semantic_filter="finance")
|
||||
```
|
||||
|
||||
### chunking_strategy (ChunkingStrategy)
|
||||
**Description:** The strategy to use for chunking the text before processing. It must be an instance of `ChunkingStrategy`. The default value is `RegexChunking()`.
|
||||
**Required:** No
|
||||
**Default Value:** `RegexChunking()`
|
||||
**Example:**
|
||||
```python
|
||||
chunking_strategy = NlpSentenceChunking()
|
||||
```
|
||||
|
||||
### bypass_cache (bool)
|
||||
**Description:** Whether to force a fresh crawl even if the URL has been previously crawled. The default value is `False`.
|
||||
**Required:** No
|
||||
**Default Value:** `False`
|
||||
**Example:**
|
||||
```python
|
||||
bypass_cache = True
|
||||
```
|
||||
|
||||
### css_selector (str)
|
||||
**Description:** The CSS selector to target specific parts of the HTML for extraction. If not provided, the entire HTML will be processed.
|
||||
**Required:** No
|
||||
**Default Value:** `None`
|
||||
**Example:**
|
||||
```python
|
||||
css_selector = "div.article-content"
|
||||
```
|
||||
|
||||
### screenshot (bool)
|
||||
**Description:** Whether to take screenshots of the page. The default value is `False`.
|
||||
**Required:** No
|
||||
**Default Value:** `False`
|
||||
**Example:**
|
||||
```python
|
||||
screenshot = True
|
||||
```
|
||||
|
||||
### user_agent (str)
|
||||
**Description:** The user agent to use for the HTTP requests. If not provided, a default user agent will be used.
|
||||
**Required:** No
|
||||
**Default Value:** `None`
|
||||
**Example:**
|
||||
```python
|
||||
user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"
|
||||
```
|
||||
|
||||
### verbose (bool)
|
||||
**Description:** Whether to enable verbose logging. The default value is `True`.
|
||||
**Required:** No
|
||||
**Default Value:** `True`
|
||||
**Example:**
|
||||
```python
|
||||
verbose = True
|
||||
```
|
||||
|
||||
### **kwargs
|
||||
Additional keyword arguments that can be passed to customize the crawling process further. Some notable options include:
|
||||
|
||||
- **only_text (bool):** Whether to extract only text content, excluding HTML tags. Default is `False`.
|
||||
|
||||
**Example:**
|
||||
```python
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
css_selector="p",
|
||||
only_text=True
|
||||
)
|
||||
```
|
||||
|
||||
## Example Usage
|
||||
|
||||
Here's an example of how to use the `run` function with various parameters:
|
||||
|
||||
```python
|
||||
from crawl4ai import WebCrawler
|
||||
from crawl4ai.extraction_strategy import CosineStrategy
|
||||
from crawl4ai.chunking_strategy import NlpSentenceChunking
|
||||
|
||||
# Create the WebCrawler instance
|
||||
crawler = WebCrawler()
|
||||
|
||||
# Run the crawler with custom parameters
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
word_count_threshold=10,
|
||||
extraction_strategy=CosineStrategy(semantic_filter="finance"),
|
||||
chunking_strategy=NlpSentenceChunking(),
|
||||
bypass_cache=True,
|
||||
css_selector="div.article-content",
|
||||
screenshot=True,
|
||||
user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3",
|
||||
verbose=True,
|
||||
only_text=True
|
||||
)
|
||||
|
||||
print(result)
|
||||
```
|
||||
|
||||
This example demonstrates how to configure various parameters to customize the crawling and extraction process using Crawl4AI.
|
||||
120
docs/md/full_details/crawl_result_class.md
Normal file
@@ -0,0 +1,120 @@
|
||||
# Crawl Result
|
||||
|
||||
The `CrawlResult` class is the heart of Crawl4AI's output, encapsulating all the data extracted from a crawling session. This class contains various fields that store the results of the web crawling and extraction process. Let's break down each field and see what it holds. 🎉
|
||||
|
||||
## Class Definition
|
||||
|
||||
```python
|
||||
class CrawlResult(BaseModel):
|
||||
url: str
|
||||
html: str
|
||||
success: bool
|
||||
cleaned_html: Optional[str] = None
|
||||
media: Dict[str, List[Dict]] = {}
|
||||
links: Dict[str, List[Dict]] = {}
|
||||
screenshot: Optional[str] = None
|
||||
markdown: Optional[str] = None
|
||||
extracted_content: Optional[str] = None
|
||||
metadata: Optional[dict] = None
|
||||
error_message: Optional[str] = None
|
||||
```
|
||||
|
||||
## Fields Explanation
|
||||
|
||||
### `url: str`
|
||||
The URL that was crawled. This field simply stores the URL of the web page that was processed.
|
||||
|
||||
### `html: str`
|
||||
The raw HTML content of the web page. This is the unprocessed HTML source as retrieved by the crawler.
|
||||
|
||||
### `success: bool`
|
||||
A flag indicating whether the crawling and extraction were successful. If any error occurs during the process, this will be `False`.
|
||||
|
||||
### `cleaned_html: Optional[str]`
|
||||
The cleaned HTML content of the web page. This field holds the HTML after removing unwanted tags like `<script>`, `<style>`, and others that do not contribute to the useful content.
|
||||
|
||||
### `media: Dict[str, List[Dict]]`
|
||||
A dictionary containing lists of extracted media elements from the web page. The media elements are categorized into images, videos, and audio. Here’s how they are structured:
|
||||
|
||||
- **Images**: Each image is represented as a dictionary with `src` (source URL) and `alt` (alternate text).
|
||||
- **Videos**: Each video is represented similarly with `src` and `alt`.
|
||||
- **Audios**: Each audio is represented with `src` and `alt`.
|
||||
|
||||
```python
|
||||
media = {
|
||||
'images': [
|
||||
{'src': 'image_url1', 'alt': 'description1', "type": "image"},
|
||||
{'src': 'image_url2', 'alt': 'description2', "type": "image"}
|
||||
],
|
||||
'videos': [
|
||||
{'src': 'video_url1', 'alt': 'description1', "type": "video"}
|
||||
],
|
||||
'audios': [
|
||||
{'src': 'audio_url1', 'alt': 'description1', "type": "audio"}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
### `links: Dict[str, List[Dict]]`
|
||||
A dictionary containing lists of internal and external links extracted from the web page. Each link is represented as a dictionary with `href` (URL) and `text` (link text).
|
||||
|
||||
- **Internal Links**: Links pointing to the same domain.
|
||||
- **External Links**: Links pointing to different domains.
|
||||
|
||||
```python
|
||||
links = {
|
||||
'internal': [
|
||||
{'href': 'internal_link1', 'text': 'link_text1'},
|
||||
{'href': 'internal_link2', 'text': 'link_text2'}
|
||||
],
|
||||
'external': [
|
||||
{'href': 'external_link1', 'text': 'link_text1'}
|
||||
]
|
||||
}
|
||||
```
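
Since both `media` and `links` are plain dictionaries of lists, you can iterate over them directly. A small sketch (the URL and the filtering criteria are only examples):

```python
from crawl4ai import WebCrawler

crawler = WebCrawler()
crawler.warmup()
result = crawler.run(url="https://www.example.com")

# Pull image sources and external link targets out of the result.
image_urls = [img['src'] for img in result.media.get('images', [])]
external_urls = [link['href'] for link in result.links.get('external', [])]
print(f"{len(image_urls)} images, {len(external_urls)} external links")
```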
|
||||
|
||||
### `screenshot: Optional[str]`
|
||||
A base64-encoded screenshot of the web page. This field stores the screenshot data if the crawling was configured to take a screenshot.
|
||||
|
||||
### `markdown: Optional[str]`
|
||||
The content of the web page converted to Markdown format. This is useful for generating clean, readable text that retains the structure of the original HTML.
|
||||
|
||||
### `extracted_content: Optional[str]`
|
||||
The content extracted based on the specified extraction strategy. This field holds the meaningful content blocks extracted from the web page, ready for your AI and data processing needs.
|
||||
|
||||
### `metadata: Optional[dict]`
|
||||
A dictionary containing metadata extracted from the web page, such as title, description, keywords, and other meta tags.
|
||||
|
||||
### `error_message: Optional[str]`
|
||||
If an error occurs during crawling, this field will contain the error message, helping you debug and understand what went wrong. 🚨
|
||||
|
||||
## Example Usage
|
||||
|
||||
Here's a quick example to illustrate how you might use the `CrawlResult` in your code:
|
||||
|
||||
```python
|
||||
from crawl4ai import WebCrawler
|
||||
|
||||
# Create the WebCrawler instance
|
||||
crawler = WebCrawler()
|
||||
|
||||
# Run the crawler on a URL
|
||||
result = crawler.run(url="https://www.example.com")
|
||||
|
||||
# Check if the crawl was successful
|
||||
if result.success:
|
||||
print("Crawl succeeded!")
|
||||
print("URL:", result.url)
|
||||
print("HTML:", result.html[:100]) # Print the first 100 characters of the HTML
|
||||
print("Cleaned HTML:", result.cleaned_html[:100])
|
||||
print("Media:", result.media)
|
||||
print("Links:", result.links)
|
||||
print("Screenshot:", result.screenshot)
|
||||
print("Markdown:", result.markdown[:100])
|
||||
print("Extracted Content:", result.extracted_content)
|
||||
print("Metadata:", result.metadata)
|
||||
else:
|
||||
print("Crawl failed with error:", result.error_message)
|
||||
```
|
||||
|
||||
With this setup, you can easily access all the valuable data extracted from the web page and integrate it into your applications. Happy crawling! 🕷️🤖
|
||||
116
docs/md/full_details/extraction_strategies.md
Normal file
@@ -0,0 +1,116 @@
|
||||
## Extraction Strategies 🧠
|
||||
|
||||
Crawl4AI offers powerful extraction strategies to derive meaningful information from web content. Let's dive into two of the most important strategies: `CosineStrategy` and `LLMExtractionStrategy`.
|
||||
|
||||
### CosineStrategy
|
||||
|
||||
`CosineStrategy` uses hierarchical clustering based on cosine similarity to group text chunks into meaningful clusters. This method converts each chunk into an embedding and then clusters the embeddings to form semantic chunks.
|
||||
|
||||
#### When to Use
|
||||
- Ideal for fast, accurate semantic segmentation of text.
|
||||
- Perfect for scenarios where LLMs might be overkill or too slow.
|
||||
- Suitable for narrowing down content based on specific queries or keywords.
|
||||
|
||||
#### Parameters
|
||||
- `semantic_filter` (str, optional): Keywords for filtering relevant documents before clustering. Documents are filtered based on their cosine similarity to the keyword filter embedding. Default is `None`.
|
||||
- `word_count_threshold` (int, optional): Minimum number of words per cluster. Default is `20`.
|
||||
- `max_dist` (float, optional): Maximum cophenetic distance on the dendrogram to form clusters. Default is `0.2`.
|
||||
- `linkage_method` (str, optional): Linkage method for hierarchical clustering. Default is `'ward'`.
|
||||
- `top_k` (int, optional): Number of top categories to extract. Default is `3`.
|
||||
- `model_name` (str, optional): Model name for embedding generation. Default is `'BAAI/bge-small-en-v1.5'`.
|
||||
|
||||
#### Example
|
||||
```python
|
||||
from crawl4ai.extraction_strategy import CosineStrategy
|
||||
from crawl4ai import WebCrawler
|
||||
|
||||
crawler = WebCrawler()
|
||||
crawler.warmup()
|
||||
|
||||
# Define extraction strategy
|
||||
strategy = CosineStrategy(
|
||||
semantic_filter="finance economy stock market",
|
||||
word_count_threshold=10,
|
||||
max_dist=0.2,
|
||||
linkage_method='ward',
|
||||
top_k=3,
|
||||
model_name='BAAI/bge-small-en-v1.5'
|
||||
)
|
||||
|
||||
# Sample URL
|
||||
url = "https://www.nbcnews.com/business"
|
||||
|
||||
# Run the crawler with the extraction strategy
|
||||
result = crawler.run(url=url, extraction_strategy=strategy)
|
||||
print(result.extracted_content)
|
||||
```
|
||||
|
||||
### LLMExtractionStrategy
|
||||
|
||||
`LLMExtractionStrategy` leverages a Language Model (LLM) to extract meaningful content from HTML. This strategy uses an external provider for LLM completions to perform extraction based on instructions.
|
||||
|
||||
#### When to Use
|
||||
- Suitable for complex extraction tasks requiring nuanced understanding.
|
||||
- Ideal for scenarios where detailed instructions can guide the extraction process.
|
||||
- Perfect for extracting specific types of information or content with precise guidelines.
|
||||
|
||||
#### Parameters
|
||||
- `provider` (str, optional): Provider for language model completions (e.g., openai/gpt-4). Default is `DEFAULT_PROVIDER`.
|
||||
- `api_token` (str, optional): API token for the provider. If not provided, it will try to load from the environment variable `OPENAI_API_KEY`.
|
||||
- `instruction` (str, optional): Instructions to guide the LLM on how to perform the extraction. Default is `None`.
|
||||
|
||||
#### Example Without Instructions
|
||||
```python
|
||||
from crawl4ai.extraction_strategy import LLMExtractionStrategy
|
||||
from crawl4ai import WebCrawler
|
||||
|
||||
crawler = WebCrawler()
|
||||
crawler.warmup()
|
||||
|
||||
# Define extraction strategy without instructions
|
||||
strategy = LLMExtractionStrategy(
|
||||
provider='openai',
|
||||
api_token='your_api_token'
|
||||
)
|
||||
|
||||
# Sample URL
|
||||
url = "https://www.nbcnews.com/business"
|
||||
|
||||
# Run the crawler with the extraction strategy
|
||||
result = crawler.run(url=url, extraction_strategy=strategy)
|
||||
print(result.extracted_content)
|
||||
```
|
||||
|
||||
#### Example With Instructions
|
||||
```python
|
||||
from crawl4ai.extraction_strategy import LLMExtractionStrategy
|
||||
from crawl4ai import WebCrawler
|
||||
|
||||
crawler = WebCrawler()
|
||||
crawler.warmup()
|
||||
|
||||
# Define extraction strategy with instructions
|
||||
strategy = LLMExtractionStrategy(
|
||||
provider='openai',
|
||||
api_token='your_api_token',
|
||||
instruction="Extract only financial news and summarize key points."
|
||||
)
|
||||
|
||||
# Sample URL
|
||||
url = "https://www.nbcnews.com/business"
|
||||
|
||||
# Run the crawler with the extraction strategy
|
||||
result = crawler.run(url=url, extraction_strategy=strategy)
|
||||
print(result.extracted_content)
|
||||
```
|
||||
|
||||
#### Use Cases for LLMExtractionStrategy
|
||||
- Extracting specific data types from structured or semi-structured content.
|
||||
- Generating summaries, extracting key information, or transforming content into different formats.
|
||||
- Performing detailed extractions based on custom instructions.
|
||||
|
||||
For more detailed examples, please refer to the [Examples section](../examples/index.md) of the documentation.
|
||||
|
||||
---
|
||||
|
||||
By choosing the right extraction strategy, you can effectively extract the most relevant and useful information from web content. Whether you need fast, accurate semantic segmentation with `CosineStrategy` or nuanced, instruction-based extraction with `LLMExtractionStrategy`, Crawl4AI has you covered. Happy extracting! 🕵️♂️✨
|
||||
101
docs/md/index.md
Normal file
@@ -0,0 +1,101 @@
|
||||
# Crawl4AI v0.2.77
|
||||
|
||||
Welcome to the official documentation for Crawl4AI! 🕷️🤖 Crawl4AI is an open-source Python library designed to simplify web crawling and extract useful information from web pages. This documentation will guide you through the features, usage, and customization of Crawl4AI.
|
||||
|
||||
|
||||
## Try the [Demo](demo.md)
|
||||
|
||||
Just try it now and crawl different pages to see how it works. You can set the links, see the structures of the output, and also view the Python sample code on how to run it. The old demo is available at [/old_demo](/old) where you can see more details.
|
||||
|
||||
## Introduction
|
||||
|
||||
Crawl4AI has one clear task: to make crawling and data extraction from web pages easy and efficient, especially for large language models (LLMs) and AI applications. Whether you are using it as a REST API or a Python library, Crawl4AI offers a robust and flexible solution.
|
||||
|
||||
## Quick Start
|
||||
|
||||
Here's a quick example to show you how easy it is to use Crawl4AI:
|
||||
|
||||
```python
|
||||
from crawl4ai import WebCrawler
|
||||
|
||||
# Create an instance of WebCrawler
|
||||
crawler = WebCrawler()
|
||||
|
||||
# Warm up the crawler (load necessary models)
|
||||
crawler.warmup()
|
||||
|
||||
# Run the crawler on a URL
|
||||
result = crawler.run(url="https://www.nbcnews.com/business")
|
||||
|
||||
# Print the extracted content
|
||||
print(result.extracted_content)
|
||||
```
|
||||
|
||||
### Explanation
|
||||
|
||||
1. **Importing the Library**: We start by importing the `WebCrawler` class from the `crawl4ai` library.
|
||||
2. **Creating an Instance**: An instance of `WebCrawler` is created.
|
||||
3. **Warming Up**: The `warmup()` method prepares the crawler by loading necessary models and settings.
|
||||
4. **Running the Crawler**: The `run()` method is used to crawl the specified URL and extract meaningful content.
|
||||
5. **Printing the Result**: The extracted content is printed, showcasing the data extracted from the web page.
|
||||
|
||||
## Documentation Structure
|
||||
|
||||
This documentation is organized into several sections to help you navigate and find the information you need quickly:
|
||||
|
||||
### [Home](index.md)
|
||||
|
||||
An introduction to Crawl4AI, including a quick start guide and an overview of the documentation structure.
|
||||
|
||||
### [Installation](installation.md)
|
||||
|
||||
Instructions on how to install Crawl4AI and its dependencies.
|
||||
|
||||
### [Introduction](introduction.md)
|
||||
|
||||
A detailed introduction to Crawl4AI, its features, and how it can be used for various web crawling and data extraction tasks.
|
||||
|
||||
### [Quick Start](quickstart.md)
|
||||
|
||||
A step-by-step guide to get you up and running with Crawl4AI, including installation instructions and basic usage examples.
|
||||
|
||||
### [Examples](examples/index.md)
|
||||
|
||||
This section contains practical examples demonstrating different use cases of Crawl4AI:
|
||||
|
||||
- [LLM Extraction](examples/llm_extraction.md)
|
||||
- [JS Execution & CSS Filtering](examples/js_execution_css_filtering.md)
|
||||
- [Hooks & Auth](examples/hooks_auth.md)
|
||||
- [Summarization](examples/summarization.md)
|
||||
- [Research Assistant](examples/research_assistant.md)
|
||||
|
||||
### [Full Details of Using Crawler](full_details/crawl_request_parameters.md)
|
||||
|
||||
Comprehensive details on using the crawler, including:
|
||||
|
||||
- [Crawl Request Parameters](full_details/crawl_request_parameters.md)
|
||||
- [Crawl Result Class](full_details/crawl_result_class.md)
|
||||
- [Advanced Features](full_details/advanced_features.md)
|
||||
- [Chunking Strategies](full_details/chunking_strategies.md)
|
||||
- [Extraction Strategies](full_details/extraction_strategies.md)
|
||||
|
||||
### [API Reference](api/core_classes_and_functions.md)
|
||||
|
||||
Detailed documentation of the API, covering:
|
||||
|
||||
- [Core Classes and Functions](api/core_classes_and_functions.md)
|
||||
- [Detailed API Documentation](api/detailed_api_documentation.md)
|
||||
|
||||
### [Change Log](changelog.md)
|
||||
|
||||
A log of all changes, updates, and improvements made to Crawl4AI.
|
||||
|
||||
### [Contact](contact.md)
|
||||
|
||||
Information on how to get in touch with the developers, report issues, and contribute to the project.
|
||||
|
||||
## Get Started
|
||||
|
||||
To get started with Crawl4AI, follow the quick start guide above or explore the detailed sections of this documentation. Whether you are a beginner or an advanced user, Crawl4AI has something to offer to make your web crawling and data extraction tasks easier and more efficient.
|
||||
|
||||
Happy Crawling! 🕸️🚀
|
||||
193
docs/md/installation.md
Normal file
@@ -0,0 +1,193 @@
|
||||
# Installation 💻
|
||||
|
||||
There are three ways to use Crawl4AI:
|
||||
|
||||
1. As a library (Recommended).
|
||||
2. As a local server (Docker), built from source and accessed through the REST API.
|
||||
3. As a local server (Docker), using the pre-built image from Docker Hub.
|
||||
|
||||
## Option 1: Library Installation
|
||||
|
||||
You can try this Colab notebook for a quick start: [Open In Colab](https://colab.research.google.com/drive/1sJPAmeLj5PMrg2VgOwMJ2ubGIcK0cJeX#scrollTo=g1RrmI4W_rPk)
|
||||
|
||||
Crawl4AI offers flexible installation options to suit various use cases. Choose the option that best fits your needs:
|
||||
|
||||
- **Default Installation** (Basic functionality):
|
||||
```bash
|
||||
virtualenv venv
|
||||
source venv/bin/activate
|
||||
pip install "crawl4ai @ git+https://github.com/unclecode/crawl4ai.git"
|
||||
```
|
||||
Use this for basic web crawling and scraping tasks.
|
||||
|
||||
- **Installation with PyTorch** (For advanced text clustering):
|
||||
```bash
|
||||
virtualenv venv
|
||||
source venv/bin/activate
|
||||
pip install "crawl4ai[torch] @ git+https://github.com/unclecode/crawl4ai.git"
|
||||
```
|
||||
Choose this if you need the cosine similarity clustering strategy (`CosineStrategy`).
|
||||
|
||||
- **Installation with Transformers** (For summarization and Hugging Face models):
|
||||
```bash
|
||||
virtualenv venv
|
||||
source venv/bin/activate
|
||||
pip install "crawl4ai[transformer] @ git+https://github.com/unclecode/crawl4ai.git"
|
||||
```
|
||||
Opt for this if you require text summarization or plan to use Hugging Face models.
|
||||
|
||||
- **Full Installation** (All features):
|
||||
```bash
|
||||
virtualenv venv
|
||||
source venv/bin/activate
|
||||
pip install "crawl4ai[all] @ git+https://github.com/unclecode/crawl4ai.git"
|
||||
```
|
||||
This installs all dependencies for full functionality.
|
||||
|
||||
- **Development Installation** (For contributors):
|
||||
```bash
|
||||
virtualenv venv
|
||||
source venv/bin/activate
|
||||
git clone https://github.com/unclecode/crawl4ai.git
|
||||
cd crawl4ai
|
||||
pip install -e ".[all]"
|
||||
```
|
||||
Use this if you plan to modify the source code.
|
||||
|
||||
💡 After installation, if you chose the "torch", "transformer", or "all" option, it's recommended to run the following CLI command to download the required models. This is optional but will boost the crawler's performance and speed. You only need to do it once, and only when you install with one of those extras:
|
||||
```bash
|
||||
crawl4ai-download-models
|
||||
```
|
||||
|
||||
## Option 2: Using Docker for Local Server
|
||||
|
||||
Crawl4AI can be run as a local server using Docker. The Dockerfile supports different installation options to cater to various use cases. Here's how you can build and run the Docker image:
|
||||
|
||||
### Default Installation
|
||||
|
||||
The default installation includes the basic Crawl4AI package without additional dependencies or pre-downloaded models.
|
||||
|
||||
```bash
|
||||
# For Mac users (M1/M2)
|
||||
docker build --platform linux/amd64 -t crawl4ai .
|
||||
|
||||
# For other users
|
||||
docker build -t crawl4ai .
|
||||
|
||||
# Run the container
|
||||
docker run -d -p 8000:80 crawl4ai
|
||||
```
|
||||
|
||||
### Full Installation (All Dependencies and Models)
|
||||
|
||||
This option installs all dependencies and downloads the models.
|
||||
|
||||
```bash
|
||||
# For Mac users (M1/M2)
|
||||
docker build --platform linux/amd64 --build-arg INSTALL_OPTION=all -t crawl4ai:all .
|
||||
|
||||
# For other users
|
||||
docker build --build-arg INSTALL_OPTION=all -t crawl4ai:all .
|
||||
|
||||
# Run the container
|
||||
docker run -d -p 8000:80 crawl4ai:all
|
||||
```
|
||||
|
||||
### Torch Installation
|
||||
|
||||
This option installs torch-related dependencies and downloads the models.
|
||||
|
||||
```bash
|
||||
# For Mac users (M1/M2)
|
||||
docker build --platform linux/amd64 --build-arg INSTALL_OPTION=torch -t crawl4ai:torch .
|
||||
|
||||
# For other users
|
||||
docker build --build-arg INSTALL_OPTION=torch -t crawl4ai:torch .
|
||||
|
||||
# Run the container
|
||||
docker run -d -p 8000:80 crawl4ai:torch
|
||||
```
|
||||
|
||||
### Transformer Installation
|
||||
|
||||
This option installs transformer-related dependencies and downloads the models.
|
||||
|
||||
```bash
|
||||
# For Mac users (M1/M2)
|
||||
docker build --platform linux/amd64 --build-arg INSTALL_OPTION=transformer -t crawl4ai:transformer .
|
||||
|
||||
# For other users
|
||||
docker build --build-arg INSTALL_OPTION=transformer -t crawl4ai:transformer .
|
||||
|
||||
# Run the container
|
||||
docker run -d -p 8000:80 crawl4ai:transformer
|
||||
```
|
||||
|
||||
### Notes
|
||||
|
||||
- The `--platform linux/amd64` flag is necessary for Mac users with M1/M2 chips to ensure compatibility.
|
||||
- The `-t` flag tags the image with a name (and optionally a tag in the 'name:tag' format).
|
||||
- The `-d` flag runs the container in detached mode.
|
||||
- The `-p 8000:80` flag maps port 8000 on the host to port 80 in the container.
|
||||
|
||||
Choose the installation option that best suits your needs. The default installation is suitable for basic usage, while the other options provide additional capabilities for more advanced use cases.
|
||||
|
||||
## Option 3: Using the Pre-built Image from Docker Hub
|
||||
|
||||
You can use pre-built Crawl4AI images from Docker Hub, which are available for all platforms (Mac, Linux, Windows). We have official images as well as a community-contributed image (Thanks to https://github.com/FractalMind):
|
||||
|
||||
### Default Installation
|
||||
|
||||
```bash
|
||||
|
||||
# Pull the image
|
||||
|
||||
docker pull unclecode/crawl4ai:latest
|
||||
|
||||
# Run the container
|
||||
|
||||
docker run -d -p 8000:80 unclecode/crawl4ai:latest
|
||||
|
||||
```
|
||||
|
||||
### Community-Contributed Image
|
||||
|
||||
A stable version of Crawl4AI is also available, created and maintained by a community member:
|
||||
|
||||
```bash
|
||||
|
||||
# Pull the community-contributed image
|
||||
|
||||
docker pull ryser007/crawl4ai:stable
|
||||
|
||||
# Run the container
|
||||
|
||||
docker run -d -p 8000:80 ryser007/crawl4ai:stable
|
||||
|
||||
```
|
||||
|
||||
We'd like to express our gratitude to GitHub user [@FractalMind](https://github.com/FractalMind) for creating and maintaining this stable version of the Crawl4AI Docker image. Community contributions like this are invaluable to the project.
|
||||
|
||||
|
||||
### Testing the Installation
|
||||
|
||||
After running the container, you can test if it's working correctly:
|
||||
|
||||
- On Mac and Linux:
|
||||
|
||||
```bash
|
||||
|
||||
curl http://localhost:8000
|
||||
|
||||
```
|
||||
|
||||
- On Windows (PowerShell):
|
||||
|
||||
```powershell
|
||||
|
||||
Invoke-WebRequest -Uri http://localhost:8000
|
||||
|
||||
```
|
||||
|
||||
Or open a web browser and navigate to http://localhost:8000
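
You can also verify that crawling works end to end by posting a small request to the `/crawl` endpoint. A minimal sketch using Python's `requests` (the payload mirrors the sample shown on the demo page; adjust the URL and strategy name to your needs):

```python
import requests

data = {
    "urls": ["https://www.nbcnews.com/business"],
    "word_count_threshold": 10,
    "extraction_strategy": "NoExtractionStrategy",
}

# The container maps host port 8000 to port 80 inside the container
response = requests.post("http://localhost:8000/crawl", json=data)
print(response.json())
```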
|
||||
|
||||
28
docs/md/interactive_content.html
Normal file
@@ -0,0 +1,28 @@
|
||||
<h1>Try Our Library</h1>
|
||||
<form id="apiForm">
|
||||
<label for="inputField">Enter some input:</label>
|
||||
<input type="text" id="inputField" name="inputField" required>
|
||||
<button type="submit">Submit</button>
|
||||
</form>
|
||||
<div id="result"></div>
|
||||
|
||||
<script>
|
||||
document.getElementById('apiForm').addEventListener('submit', function(event) {
|
||||
event.preventDefault();
|
||||
const input = document.getElementById('inputField').value;
|
||||
fetch('https://your-api-endpoint.com/api', {
|
||||
method: 'POST',
|
||||
headers: {
|
||||
'Content-Type': 'application/json'
|
||||
},
|
||||
body: JSON.stringify({ input: input })
|
||||
})
|
||||
.then(response => response.json())
|
||||
.then(data => {
|
||||
document.getElementById('result').textContent = JSON.stringify(data);
|
||||
})
|
||||
.catch(error => {
|
||||
document.getElementById('result').textContent = 'Error: ' + error;
|
||||
});
|
||||
});
|
||||
</script>
|
||||
29
docs/md/introduction.md
Normal file
@@ -0,0 +1,29 @@
|
||||
# Introduction
|
||||
|
||||
Welcome to the documentation for Crawl4AI v0.2.77! 🕷️🤖
|
||||
|
||||
Crawl4AI is designed to simplify the process of crawling web pages and extracting useful information for large language models (LLMs) and AI applications. Whether you're using it as a REST API, a Python library, or through a Google Colab notebook, Crawl4AI provides powerful features to make web data extraction easier and more efficient.
|
||||
|
||||
## Key Features ✨
|
||||
|
||||
- **🆓 Completely Free and Open-Source**: Crawl4AI is free to use and open-source, making it accessible for everyone.
|
||||
- **🤖 LLM-Friendly Output Formats**: Supports JSON, cleaned HTML, and markdown formats.
|
||||
- **🌍 Concurrent Crawling**: Crawl multiple URLs simultaneously to save time.
|
||||
- **🎨 Media Extraction**: Extract all media tags including images, audio, and video.
|
||||
- **🔗 Link Extraction**: Extract all external and internal links from web pages.
|
||||
- **📚 Metadata Extraction**: Extract metadata from web pages for additional context.
|
||||
- **🔄 Custom Hooks**: Define custom hooks for authentication, headers, and page modifications before crawling.
|
||||
- **🕵️ User Agent Support**: Customize the user agent for HTTP requests.
|
||||
- **🖼️ Screenshot Capability**: Take screenshots of web pages during crawling.
|
||||
- **📜 JavaScript Execution**: Execute custom JavaScript snippets before crawling.
|
||||
- **📚 Advanced Chunking and Extraction Strategies**: Utilize topic-based, regex, and sentence chunking, along with cosine clustering and LLM-based extraction strategies.
|
||||
- **🎯 CSS Selector Support**: Extract specific content using CSS selectors.
|
||||
- **📝 Instruction/Keyword Refinement**: Pass instructions or keywords to refine the extraction process.
|
||||
|
||||
Check the [Changelog](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md) for more details.
|
||||
|
||||
## Power and Simplicity of Crawl4AI 🚀
|
||||
|
||||
Crawl4AI provides an easy way to crawl and extract data from web pages without installing any library: you can use the REST API on our hosted server or run the local server on your own machine. For more advanced control, use the Python library to customize your crawling and extraction strategies.
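
As a quick illustration of the two modes, here is a sketch (the hosted endpoint and payload follow the demo page, and the library calls follow the Quick Start guide):

```python
import requests
from crawl4ai import WebCrawler

# 1) REST API: post one or more URLs to the /crawl endpoint
data = {
    "urls": ["https://www.nbcnews.com/business"],
    "extraction_strategy": "NoExtractionStrategy",
}
print(requests.post("https://crawl4ai.com/crawl", json=data).json())

# 2) Python library: full control over strategies, hooks, and caching
crawler = WebCrawler()
crawler.warmup()
result = crawler.run(url="https://www.nbcnews.com/business")
print(result.markdown)
```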
|
||||
|
||||
Explore the documentation to learn more about the features, installation process, usage examples, and how to contribute to Crawl4AI. Let's make the web more accessible and useful for AI applications! 💪🌐🤖
|
||||
204
docs/md/quickstart.md
Normal file
@@ -0,0 +1,204 @@
|
||||
# Quick Start Guide 🚀
|
||||
|
||||
Welcome to the Crawl4AI Quickstart Guide! In this tutorial, we'll walk you through the basic usage of Crawl4AI with a friendly and humorous tone. We'll cover everything from basic usage to advanced features like chunking and extraction strategies. Let's dive in! 🌟
|
||||
|
||||
## Getting Started 🛠️
|
||||
|
||||
First, let's create an instance of `WebCrawler` and call the `warmup()` function. This might take a few seconds the first time you run Crawl4AI, as it loads the required model files.
|
||||
|
||||
```python
|
||||
from crawl4ai import WebCrawler
|
||||
|
||||
def create_crawler():
|
||||
crawler = WebCrawler(verbose=True)
|
||||
crawler.warmup()
|
||||
return crawler
|
||||
|
||||
crawler = create_crawler()
|
||||
```
|
||||
|
||||
### Basic Usage
|
||||
|
||||
Simply provide a URL and let Crawl4AI do the magic!
|
||||
|
||||
```python
|
||||
result = crawler.run(url="https://www.nbcnews.com/business")
|
||||
print(f"Basic crawl result: {result}")
|
||||
```
|
||||
|
||||
### Taking Screenshots 📸
|
||||
|
||||
Let's take a screenshot of the page!
|
||||
|
||||
```python
|
||||
import base64

result = crawler.run(url="https://www.nbcnews.com/business", screenshot=True)
|
||||
with open("screenshot.png", "wb") as f:
|
||||
f.write(base64.b64decode(result.screenshot))
|
||||
print("Screenshot saved to 'screenshot.png'!")
|
||||
```
|
||||
|
||||
### Understanding Parameters 🧠
|
||||
|
||||
By default, Crawl4AI caches the results of your crawls. This means that subsequent crawls of the same URL will be much faster! Let's see this in action.
|
||||
|
||||
First crawl (caches the result):
|
||||
```python
|
||||
result = crawler.run(url="https://www.nbcnews.com/business")
|
||||
print(f"First crawl result: {result}")
|
||||
```
|
||||
|
||||
Force a fresh crawl (bypassing the cache):
|
||||
```python
|
||||
result = crawler.run(url="https://www.nbcnews.com/business", bypass_cache=True)
|
||||
print(f"Second crawl result: {result}")
|
||||
```
|
||||
|
||||
### Adding a Chunking Strategy 🧩
|
||||
|
||||
Let's add a chunking strategy: `RegexChunking`! This strategy splits the text based on a given regex pattern.
|
||||
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import RegexChunking
|
||||
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
chunking_strategy=RegexChunking(patterns=["\n\n"])
|
||||
)
|
||||
print(f"RegexChunking result: {result}")
|
||||
```
|
||||
|
||||
You can also use `NlpSentenceChunking` which splits the text into sentences using NLP techniques.
|
||||
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import NlpSentenceChunking
|
||||
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
chunking_strategy=NlpSentenceChunking()
|
||||
)
|
||||
print(f"NlpSentenceChunking result: {result}")
|
||||
```
|
||||
|
||||
### Adding an Extraction Strategy 🧠
|
||||
|
||||
Let's get smarter with an extraction strategy: `CosineStrategy`! This strategy uses cosine similarity to extract semantically similar blocks of text.
|
||||
|
||||
```python
|
||||
from crawl4ai.extraction_strategy import CosineStrategy
|
||||
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
extraction_strategy=CosineStrategy(
|
||||
word_count_threshold=10,
|
||||
max_dist=0.2,
|
||||
linkage_method="ward",
|
||||
top_k=3
|
||||
)
|
||||
)
|
||||
print(f"CosineStrategy result: {result}")
|
||||
```
|
||||
|
||||
You can also pass other parameters like `semantic_filter` to extract specific content.
|
||||
|
||||
```python
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
extraction_strategy=CosineStrategy(
|
||||
semantic_filter="inflation rent prices"
|
||||
)
|
||||
)
|
||||
print(f"CosineStrategy result with semantic filter: {result}")
|
||||
```
|
||||
|
||||
### Using LLMExtractionStrategy 🤖
|
||||
|
||||
Time to bring in the big guns: `LLMExtractionStrategy` without instructions! This strategy uses a large language model to extract relevant information from the web page.
|
||||
|
||||
```python
|
||||
from crawl4ai.extraction_strategy import LLMExtractionStrategy
|
||||
import os
|
||||
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
extraction_strategy=LLMExtractionStrategy(
|
||||
provider="openai/gpt-4o",
|
||||
api_token=os.getenv('OPENAI_API_KEY')
|
||||
)
|
||||
)
|
||||
print(f"LLMExtractionStrategy (no instructions) result: {result}")
|
||||
```
|
||||
|
||||
You can also provide specific instructions to guide the extraction.
|
||||
|
||||
```python
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
extraction_strategy=LLMExtractionStrategy(
|
||||
provider="openai/gpt-4o",
|
||||
api_token=os.getenv('OPENAI_API_KEY'),
|
||||
instruction="I am interested in only financial news"
|
||||
)
|
||||
)
|
||||
print(f"LLMExtractionStrategy (with instructions) result: {result}")
|
||||
```
|
||||
|
||||
### Targeted Extraction 🎯
|
||||
|
||||
Let's use a CSS selector to extract only H2 tags!
|
||||
|
||||
```python
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
css_selector="h2"
|
||||
)
|
||||
print(f"CSS Selector (H2 tags) result: {result}")
|
||||
```
|
||||
|
||||
### Interactive Extraction 🖱️
|
||||
|
||||
Passing JavaScript code to click the 'Load More' button!
|
||||
|
||||
```python
|
||||
js_code = """
|
||||
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
|
||||
loadMoreButton && loadMoreButton.click();
|
||||
"""
|
||||
|
||||
result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
js=js_code
|
||||
)
|
||||
print(f"JavaScript Code (Load More button) result: {result}")
|
||||
```
|
||||
|
||||
### Using Crawler Hooks 🔗
|
||||
|
||||
Let's see how we can customize the crawler using hooks!
|
||||
|
||||
```python
|
||||
import time
|
||||
|
||||
from crawl4ai.web_crawler import WebCrawler
|
||||
from crawl4ai.crawler_strategy import *
|
||||
|
||||
def delay(driver):
|
||||
print("Delaying for 5 seconds...")
|
||||
time.sleep(5)
|
||||
print("Resuming...")
|
||||
|
||||
def create_crawler():
|
||||
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
|
||||
crawler_strategy.set_hook('after_get_url', delay)
|
||||
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
|
||||
crawler.warmup()
|
||||
return crawler
|
||||
|
||||
crawler = create_crawler()
|
||||
result = crawler.run(url="https://www.nbcnews.com/business", bypass_cache=True)
|
||||
```
|
||||
|
||||
Check [Hooks & Auth](examples/hooks_auth.md) for more examples.
|
||||
|
||||
## Congratulations! 🎉
|
||||
|
||||
You've made it through the Crawl4AI Quickstart Guide! Now go forth and crawl the web like a pro! 🕸️
|
||||
128
main.py
@@ -2,12 +2,18 @@ import os
|
||||
import importlib
|
||||
import asyncio
|
||||
from functools import lru_cache
|
||||
import logging
|
||||
logging.basicConfig(level=logging.DEBUG)
|
||||
|
||||
from fastapi import FastAPI, HTTPException, Request
|
||||
from fastapi.responses import HTMLResponse, JSONResponse
|
||||
from fastapi.staticfiles import StaticFiles
|
||||
from fastapi.middleware.cors import CORSMiddleware
|
||||
from fastapi.templating import Jinja2Templates
|
||||
from fastapi.exceptions import RequestValidationError
|
||||
from starlette.middleware.base import BaseHTTPMiddleware
|
||||
from starlette.responses import FileResponse
|
||||
from fastapi.responses import RedirectResponse
|
||||
|
||||
from pydantic import BaseModel, HttpUrl
|
||||
from concurrent.futures import ThreadPoolExecutor, as_completed
|
||||
@@ -16,6 +22,15 @@ from typing import List, Optional
|
||||
from crawl4ai.web_crawler import WebCrawler
|
||||
from crawl4ai.database import get_total_count, clear_db
|
||||
|
||||
import time
|
||||
from slowapi import Limiter, _rate_limit_exceeded_handler
|
||||
from slowapi.util import get_remote_address
|
||||
from slowapi.errors import RateLimitExceeded
|
||||
|
||||
# load .env file
|
||||
from dotenv import load_dotenv
|
||||
load_dotenv()
|
||||
|
||||
# Configuration
|
||||
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
|
||||
MAX_CONCURRENT_REQUESTS = 10 # Adjust this to change the maximum concurrent requests
|
||||
@@ -24,6 +39,78 @@ lock = asyncio.Lock()
|
||||
|
||||
app = FastAPI()
|
||||
|
||||
# Initialize rate limiter
|
||||
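# Requests that present the configured ACCESS_TOKEN header are exempt from per-IP rate limiting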
def rate_limit_key_func(request: Request):
|
||||
access_token = request.headers.get("access-token")
|
||||
if access_token == os.environ.get('ACCESS_TOKEN'):
|
||||
return None
|
||||
return get_remote_address(request)
|
||||
|
||||
limiter = Limiter(key_func=rate_limit_key_func)
|
||||
app.state.limiter = limiter
|
||||
|
||||
# Dictionary to store last request times for each client
|
||||
last_request_times = {}
|
||||
last_rate_limit = {}
|
||||
|
||||
|
||||
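# Per-minute request limit, configurable via the ACCESS_PER_MIN env var (defaults to 5)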
def get_rate_limit():
|
||||
limit = os.environ.get('ACCESS_PER_MIN', "5")
|
||||
return f"{limit}/minute"
|
||||
|
||||
# Custom rate limit exceeded handler
|
||||
async def custom_rate_limit_exceeded_handler(request: Request, exc: RateLimitExceeded) -> JSONResponse:
|
||||
if request.client.host not in last_rate_limit or time.time() - last_rate_limit[request.client.host] > 60:
|
||||
last_rate_limit[request.client.host] = time.time()
|
||||
retry_after = 60 - (time.time() - last_rate_limit[request.client.host])
|
||||
reset_at = time.time() + retry_after
|
||||
return JSONResponse(
|
||||
status_code=429,
|
||||
content={
|
||||
"detail": "Rate limit exceeded",
|
||||
"limit": str(exc.limit.limit),
|
||||
"retry_after": retry_after,
|
||||
'reset_at': reset_at,
|
||||
"message": f"You have exceeded the rate limit of {exc.limit.limit}."
|
||||
}
|
||||
)
|
||||
|
||||
app.add_exception_handler(RateLimitExceeded, custom_rate_limit_exceeded_handler)
|
||||
|
||||
|
||||
# Middleware for token-based bypass and per-request limit
|
||||
class RateLimitMiddleware(BaseHTTPMiddleware):
|
||||
async def dispatch(self, request: Request, call_next):
|
||||
SPAN = int(os.environ.get('ACCESS_TIME_SPAN', 10))
|
||||
access_token = request.headers.get("access-token")
|
||||
if access_token == os.environ.get('ACCESS_TOKEN'):
|
||||
return await call_next(request)
|
||||
|
||||
path = request.url.path
|
||||
if path in ["/crawl", "/old"]:
|
||||
client_ip = request.client.host
|
||||
current_time = time.time()
|
||||
|
||||
# Check time since last request
|
||||
if client_ip in last_request_times:
|
||||
time_since_last_request = current_time - last_request_times[client_ip]
|
||||
if time_since_last_request < SPAN:
|
||||
return JSONResponse(
|
||||
status_code=429,
|
||||
content={
|
||||
"detail": "Too many requests",
|
||||
"message": "Rate limit exceeded. Please wait 10 seconds between requests.",
|
||||
"retry_after": max(0, SPAN - time_since_last_request),
|
||||
"reset_at": current_time + max(0, SPAN - time_since_last_request),
|
||||
}
|
||||
)
|
||||
|
||||
last_request_times[client_ip] = current_time
|
||||
|
||||
return await call_next(request)
|
||||
|
||||
app.add_middleware(RateLimitMiddleware)
|
||||
|
||||
# CORS configuration
|
||||
origins = ["*"] # Allow all origins
|
||||
app.add_middleware(
|
||||
@@ -36,12 +123,16 @@ app.add_middleware(
|
||||
|
||||
# Mount the pages directory as a static directory
|
||||
app.mount("/pages", StaticFiles(directory=__location__ + "/pages"), name="pages")
|
||||
app.mount("/mkdocs", StaticFiles(directory="site", html=True), name="mkdocs")
|
||||
site_templates = Jinja2Templates(directory=__location__ + "/site")
|
||||
templates = Jinja2Templates(directory=__location__ + "/pages")
|
||||
# chromedriver_autoinstaller.install() # Ensure chromedriver is installed
|
||||
|
||||
@lru_cache()
|
||||
def get_crawler():
|
||||
# Initialize and return a WebCrawler instance
|
||||
return WebCrawler()
|
||||
crawler = WebCrawler(verbose = True)
|
||||
crawler.warmup()
|
||||
return crawler
|
||||
|
||||
class CrawlRequest(BaseModel):
|
||||
urls: List[str]
|
||||
@@ -54,17 +145,23 @@ class CrawlRequest(BaseModel):
|
||||
chunking_strategy: Optional[str] = "RegexChunking"
|
||||
chunking_strategy_args: Optional[dict] = {}
|
||||
css_selector: Optional[str] = None
|
||||
screenshot: Optional[bool] = False
|
||||
user_agent: Optional[str] = None
|
||||
verbose: Optional[bool] = True
|
||||
|
||||
@app.get("/")
|
||||
def read_root():
|
||||
return RedirectResponse(url="/mkdocs")
|
||||
|
||||
@app.get("/", response_class=HTMLResponse)
|
||||
@app.get("/old", response_class=HTMLResponse)
|
||||
@limiter.limit(get_rate_limit())
|
||||
async def read_index(request: Request):
|
||||
partials_dir = os.path.join(__location__, "pages", "partial")
|
||||
partials = {}
|
||||
|
||||
for filename in os.listdir(partials_dir):
|
||||
if filename.endswith(".html"):
|
||||
with open(os.path.join(partials_dir, filename), "r") as file:
|
||||
with open(os.path.join(partials_dir, filename), "r", encoding="utf8") as file:
|
||||
partials[filename[:-5]] = file.read()
|
||||
|
||||
return templates.TemplateResponse("index.html", {"request": request, **partials})
|
||||
@@ -74,10 +171,9 @@ async def get_total_url_count():
|
||||
count = get_total_count()
|
||||
return JSONResponse(content={"count": count})
|
||||
|
||||
# Add endpoint to clear the DB
|
||||
@app.get("/clear-db")
|
||||
async def clear_database():
|
||||
clear_db()
|
||||
# clear_db()
|
||||
return JSONResponse(content={"message": "Database cleared."})
|
||||
|
||||
def import_strategy(module_name: str, class_name: str, *args, **kwargs):
|
||||
@@ -86,12 +182,16 @@ def import_strategy(module_name: str, class_name: str, *args, **kwargs):
|
||||
strategy_class = getattr(module, class_name)
|
||||
return strategy_class(*args, **kwargs)
|
||||
except ImportError:
|
||||
print("ImportError: Module not found.")
|
||||
raise HTTPException(status_code=400, detail=f"Module {module_name} not found.")
|
||||
except AttributeError:
|
||||
print("AttributeError: Class not found.")
|
||||
raise HTTPException(status_code=400, detail=f"Class {class_name} not found in {module_name}.")
|
||||
|
||||
@app.post("/crawl")
|
||||
@limiter.limit(get_rate_limit())
|
||||
async def crawl_urls(crawl_request: CrawlRequest, request: Request):
|
||||
logging.debug(f"[LOG] Crawl request for URL: {crawl_request.urls}")
|
||||
global current_requests
|
||||
async with lock:
|
||||
if current_requests >= MAX_CONCURRENT_REQUESTS:
|
||||
@@ -99,10 +199,15 @@ async def crawl_urls(crawl_request: CrawlRequest, request: Request):
|
||||
current_requests += 1
|
||||
|
||||
try:
|
||||
logging.debug("[LOG] Loading extraction and chunking strategies...")
|
||||
crawl_request.extraction_strategy_args['verbose'] = True
|
||||
crawl_request.chunking_strategy_args['verbose'] = True
|
||||
|
||||
extraction_strategy = import_strategy("crawl4ai.extraction_strategy", crawl_request.extraction_strategy, **crawl_request.extraction_strategy_args)
|
||||
chunking_strategy = import_strategy("crawl4ai.chunking_strategy", crawl_request.chunking_strategy, **crawl_request.chunking_strategy_args)
|
||||
|
||||
# Use ThreadPoolExecutor to run the synchronous WebCrawler in async manner
|
||||
logging.debug("[LOG] Running the WebCrawler...")
|
||||
with ThreadPoolExecutor() as executor:
|
||||
loop = asyncio.get_event_loop()
|
||||
futures = [
|
||||
@@ -115,6 +220,8 @@ async def crawl_urls(crawl_request: CrawlRequest, request: Request):
|
||||
chunking_strategy,
|
||||
crawl_request.bypass_cache,
|
||||
crawl_request.css_selector,
|
||||
crawl_request.screenshot,
|
||||
crawl_request.user_agent,
|
||||
crawl_request.verbose
|
||||
)
|
||||
for url in crawl_request.urls
|
||||
@@ -126,14 +233,13 @@ async def crawl_urls(crawl_request: CrawlRequest, request: Request):
|
||||
for result in results:
|
||||
result.html = None
|
||||
|
||||
return {"results": [result.dict() for result in results]}
|
||||
return {"results": [result.model_dump() for result in results]}
|
||||
finally:
|
||||
async with lock:
|
||||
current_requests -= 1
|
||||
|
||||
@app.get("/strategies/extraction", response_class=JSONResponse)
|
||||
async def get_extraction_strategies():
|
||||
# Load docs/extraction_strategies.json" and return as JSON response
|
||||
with open(f"{__location__}/docs/extraction_strategies.json", "r") as file:
|
||||
return JSONResponse(content=file.read())
|
||||
|
||||
@@ -141,8 +247,8 @@ async def get_extraction_strategies():
|
||||
async def get_chunking_strategies():
|
||||
with open(f"{__location__}/docs/chunking_strategies.json", "r") as file:
|
||||
return JSONResponse(content=file.read())
|
||||
|
||||
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
import uvicorn
|
||||
uvicorn.run(app, host="0.0.0.0", port=8000)
|
||||
uvicorn.run(app, host="0.0.0.0", port=8888)
|
||||
|
||||
0
middlewares.py
Normal file
42
mkdocs.yml
Normal file
@@ -0,0 +1,42 @@
|
||||
site_name: Crawl4AI Documentation
|
||||
docs_dir: docs/md
|
||||
nav:
|
||||
- Home: index.md
|
||||
- Demo: demo.md # Add this line
|
||||
- First Steps:
|
||||
- Introduction: introduction.md
|
||||
- Installation: installation.md
|
||||
- Quick Start: quickstart.md
|
||||
- Examples:
|
||||
- Intro: examples/index.md
|
||||
- LLM Extraction: examples/llm_extraction.md
|
||||
- JS Execution & CSS Filtering: examples/js_execution_css_filtering.md
|
||||
- Hooks & Auth: examples/hooks_auth.md
|
||||
- Summarization: examples/summarization.md
|
||||
- Research Assistant: examples/research_assistant.md
|
||||
- Full Details of Using Crawler:
|
||||
- Crawl Request Parameters: full_details/crawl_request_parameters.md
|
||||
- Crawl Result Class: full_details/crawl_result_class.md
|
||||
- Advanced Features: full_details/advanced_features.md
|
||||
- Chunking Strategies: full_details/chunking_strategies.md
|
||||
- Extraction Strategies: full_details/extraction_strategies.md
|
||||
- API Reference:
|
||||
- Core Classes and Functions: api/core_classes_and_functions.md
|
||||
- Detailed API Documentation: api/detailed_api_documentation.md
|
||||
- Miscellaneous:
|
||||
- Change Log: changelog.md
|
||||
- Contact: contact.md
|
||||
|
||||
theme:
|
||||
name: terminal
|
||||
palette: dark
|
||||
|
||||
# Add the css/extra.css
|
||||
extra_css:
|
||||
- assets/styles.css
|
||||
- assets/highlight.css
|
||||
- assets/dmvendor.css
|
||||
|
||||
extra_javascript:
|
||||
- assets/highlight.min.js
|
||||
- assets/highlight_init.js
|
||||
58
pages/app.js
@@ -104,11 +104,25 @@ document.getElementById("crawl-btn").addEventListener("click", () => {
|
||||
chunking_strategy: document.getElementById("chunking-strategy-select").value,
|
||||
chunking_strategy_args: {},
|
||||
css_selector: document.getElementById("css-selector").value,
|
||||
screenshot: document.getElementById("screenshot-checkbox").checked,
|
||||
// instruction: document.getElementById("instruction").value,
|
||||
// semantic_filter: document.getElementById("semantic_filter").value,
|
||||
verbose: true,
|
||||
};
|
||||
|
||||
// import requests
|
||||
|
||||
// data = {
|
||||
// "urls": [
|
||||
// "https://www.nbcnews.com/business"
|
||||
// ],
|
||||
// "word_count_threshold": 10,
|
||||
// "extraction_strategy": "NoExtractionStrategy",
|
||||
// }
|
||||
|
||||
// response = requests.post("https://crawl4ai.com/crawl", json=data) # OR local host if your run locally
|
||||
// print(response.json())
|
||||
|
||||
// save api token to local storage
|
||||
localStorage.setItem("api_token", document.getElementById("token-input").value);
|
||||
|
||||
@@ -124,25 +138,61 @@ document.getElementById("crawl-btn").addEventListener("click", () => {
|
||||
document.getElementById("json-result").textContent = JSON.stringify(parsedJson, null, 2);
|
||||
document.getElementById("cleaned-html-result").textContent = result.cleaned_html;
|
||||
document.getElementById("markdown-result").textContent = result.markdown;
|
||||
|
||||
document.getElementById("media-result").textContent = JSON.stringify( result.media, null, 2);
|
||||
if (result.screenshot){
|
||||
const imgElement = document.createElement("img");
|
||||
// Set the src attribute with the base64 data
|
||||
imgElement.src = `data:image/png;base64,${result.screenshot}`;
|
||||
document.getElementById("screenshot-result").innerHTML = "";
|
||||
document.getElementById("screenshot-result").appendChild(imgElement);
|
||||
}
|
||||
|
||||
// Update code examples dynamically
|
||||
const extractionStrategy = data.extraction_strategy;
|
||||
const isLLMExtraction = extractionStrategy === "LLMExtractionStrategy";
|
||||
|
||||
// REMOVE API TOKEN FROM CODE EXAMPLES
|
||||
data.extraction_strategy_args.api_token = "your_api_token";
|
||||
|
||||
if (data.extraction_strategy === "NoExtractionStrategy") {
|
||||
delete data.extraction_strategy_args;
|
||||
delete data.extract_blocks;
|
||||
}
|
||||
|
||||
if (data.chunking_strategy === "RegexChunking") {
|
||||
delete data.chunking_strategy_args;
|
||||
}
|
||||
|
||||
delete data.verbose;
|
||||
|
||||
if (data.css_selector === "") {
|
||||
delete data.css_selector;
|
||||
}
|
||||
|
||||
if (!data.bypass_cache) {
|
||||
delete data.bypass_cache;
|
||||
}
|
||||
|
||||
if (!data.extract_blocks) {
|
||||
delete data.extract_blocks;
|
||||
}
|
||||
|
||||
if (!data.include_raw_html) {
|
||||
delete data.include_raw_html;
|
||||
}
|
||||
|
||||
document.getElementById(
|
||||
"curl-code"
|
||||
).textContent = `curl -X POST -H "Content-Type: application/json" -d '${JSON.stringify({
|
||||
...data,
|
||||
api_token: isLLMExtraction ? "your_api_token" : undefined,
|
||||
}, null, 2)}' http://crawl4ai.com/crawl`;
|
||||
}, null, 2)}' https://crawl4ai.com/crawl`;
|
||||
|
||||
document.getElementById("python-code").textContent = `import requests\n\ndata = ${JSON.stringify(
|
||||
{ ...data, api_token: isLLMExtraction ? "your_api_token" : undefined },
|
||||
null,
|
||||
2
|
||||
)}\n\nresponse = requests.post("http://crawl4ai.com/crawl", json=data) # OR local host if your run locally \nprint(response.json())`;
|
||||
)}\n\nresponse = requests.post("https://crawl4ai.com/crawl", json=data) # OR localhost if you run locally \nprint(response.json())`;
|
||||
|
||||
document.getElementById(
|
||||
"nodejs-code"
|
||||
@@ -150,7 +200,7 @@ document.getElementById("crawl-btn").addEventListener("click", () => {
|
||||
{ ...data, api_token: isLLMExtraction ? "your_api_token" : undefined },
|
||||
null,
|
||||
2
|
||||
)};\n\naxios.post("http://crawl4ai.com/crawl", data) // OR local host if your run locally \n .then(response => console.log(response.data))\n .catch(error => console.error(error));`;
|
||||
)};\n\naxios.post("https://crawl4ai.com/crawl", data) // OR localhost if you run locally \n .then(response => console.log(response.data))\n .catch(error => console.error(error));`;
|
||||
|
||||
document.getElementById(
|
||||
"library-code"
|
||||
|
||||
@@ -50,6 +50,20 @@ crawler.warmup()</code></pre>
|
||||
<div>
|
||||
<pre><code class="language-python">crawler.always_by_pass_cache = True</code></pre>
|
||||
</div>
|
||||
<!-- Step 3.5 Screenshot -->
|
||||
<div class="col-span-2 bg-lime-800 p-2 rounded text-zinc-50">
|
||||
📸
|
||||
<strong>Let's take a screenshot of the page!</strong>
|
||||
</div>
|
||||
<div>
|
||||
<pre><code class="language-python">result = crawler.run(
|
||||
url="https://www.nbcnews.com/business",
|
||||
screenshot=True
|
||||
)
|
||||
with open("screenshot.png", "wb") as f:
|
||||
f.write(base64.b64decode(result.screenshot))</code></pre>
|
||||
</div>
|
||||
|
||||
|
||||
<!-- Step 4 -->
|
||||
<div class="col-span-2 bg-lime-800 p-2 rounded text-zinc-50">
|
||||
@@ -139,13 +153,13 @@ crawler.warmup()</code></pre>
|
||||
</div>
|
||||
<div class="">Using JavaScript to click 'Load More' button:</div>
|
||||
<div>
|
||||
<pre><code class="language-python">js_code = """
|
||||
<pre><code class="language-python">js_code = ["""
|
||||
const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
|
||||
loadMoreButton && loadMoreButton.click();
|
||||
"""
|
||||
crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
|
||||
crawler = WebCrawler(crawler_strategy=crawler_strategy, always_by_pass_cache=True)
|
||||
result = crawler.run(url="https://www.nbcnews.com/business")</code></pre>
|
||||
"""]
|
||||
crawler = WebCrawler(verbose=True, always_by_pass_cache=True)
|
||||
result = crawler.run(url="https://www.nbcnews.com/business", js = js_code)</code></pre>
|
||||
<div class="">Remember that you can pass multiple JavaScript code snippets in the list. They all will be executed in the order they are passed.</div>
|
||||
</div>
|
||||
|
||||
<!-- Conclusion -->
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
<section class="try-it py-8 px-16 pb-20 bg-zinc-900">
|
||||
<section class="try-it py-8 px-16 pb-20 bg-zinc-900 overflow-hidden">
|
||||
<div class="container mx-auto ">
|
||||
<h2 class="text-2xl font-bold mb-4 text-lime-500">Try It Now</h2>
|
||||
<div class="flex gap-4">
|
||||
@@ -20,6 +20,7 @@
|
||||
id="threshold"
|
||||
class="border border-zinc-700 rounded px-4 py-1 bg-zinc-900 text-zinc-300"
|
||||
>
|
||||
<option value="1">1</option>
|
||||
<option value="5">5</option>
|
||||
<option value="10" selected>10</option>
|
||||
<option value="15">15</option>
|
||||
@@ -124,7 +125,11 @@
|
||||
<label for="bypass-cache-checkbox" class="text-lime-500 font-bold">Bypass Cache</label>
|
||||
</div>
|
||||
<div class="flex items-center gap-2">
|
||||
<input type="checkbox" id="extract-blocks-checkbox" checked />
|
||||
<input type="checkbox" id="screenshot-checkbox" checked />
|
||||
<label for="screenshot-checkbox" class="text-lime-500 font-bold">Screenshot</label>
|
||||
</div>
|
||||
<div class="flex items-center gap-2 hidden">
|
||||
<input type="checkbox" id="extract-blocks-checkbox" />
|
||||
<label for="extract-blocks-checkbox" class="text-lime-500 font-bold">Extract Blocks</label>
|
||||
</div>
|
||||
<button id="crawl-btn" class="bg-lime-600 text-black font-bold px-4 py-0 rounded">Crawl</button>
|
||||
@@ -134,7 +139,7 @@
|
||||
<div id="loading" class="hidden">
|
||||
<p class="text-white">Loading... Please wait.</p>
|
||||
</div>
|
||||
<div id="result" class="flex-1">
|
||||
<div id="result" class="flex-1 overflow-x-auto">
|
||||
<div class="tab-buttons flex gap-2">
|
||||
<button class="tab-btn px-4 py-1 text-sm bg-zinc-700 rounded-t text-lime-500" data-tab="json">
|
||||
JSON
|
||||
@@ -148,15 +153,23 @@
|
||||
<button class="tab-btn px-4 py-1 text-sm bg-zinc-700 rounded-t text-lime-500" data-tab="markdown">
|
||||
Markdown
|
||||
</button>
|
||||
<button class="tab-btn px-4 py-1 text-sm bg-zinc-700 rounded-t text-lime-500" data-tab="media">
|
||||
Media
|
||||
</button>
|
||||
<button class="tab-btn px-4 py-1 text-sm bg-zinc-700 rounded-t text-lime-500" data-tab="screenshot">
|
||||
Screenshot
|
||||
</button>
|
||||
</div>
|
||||
<div class="tab-content code bg-zinc-900 p-2 rounded h-full border border-zinc-700 text-sm">
|
||||
<pre class="h-full flex"><code id="json-result" class="language-json"></code></pre>
|
||||
<pre class="hidden h-full flex"><code id="cleaned-html-result" class="language-html"></code></pre>
|
||||
<pre class="hidden h-full flex"><code id="markdown-result" class="language-markdown"></code></pre>
|
||||
<pre class="hidden h-full flex"><code id="media-result" class="language-json"></code></pre>
|
||||
<pre class="hidden h-full flex"><code id="screenshot-result"></code></pre>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div id="code_help" class="flex-1">
|
||||
<div id="code_help" class="flex-1 overflow-x-auto">
|
||||
<div class="tab-buttons flex gap-2">
|
||||
<button class="code-tab-btn px-4 py-1 text-sm bg-zinc-700 rounded-t text-lime-500" data-tab="curl">
|
||||
cURL
|
||||
|
||||
13
requirements.crawl.txt
Normal file
@@ -0,0 +1,13 @@
|
||||
aiohttp
|
||||
aiosqlite
|
||||
bs4
|
||||
fastapi
|
||||
html2text
|
||||
httpx
|
||||
pydantic
|
||||
python-dotenv
|
||||
requests
|
||||
rich
|
||||
selenium
|
||||
uvicorn
|
||||
chromedriver-autoinstaller
|
||||
@@ -1,19 +1,24 @@
|
||||
numpy==1.25.0
|
||||
aiohttp==3.9.5
|
||||
aiosqlite==0.20.0
|
||||
bs4==0.0.2
|
||||
beautifulsoup4==4.12.3
|
||||
fastapi==0.111.0
|
||||
html2text==2024.2.26
|
||||
httpx==0.27.0
|
||||
lazy_import==0.2.2
|
||||
litellm==1.37.11
|
||||
litellm==1.40.17
|
||||
nltk==3.8.1
|
||||
pydantic==2.7.1
|
||||
pydantic==2.7.4
|
||||
python-dotenv==1.0.1
|
||||
requests==2.31.0
|
||||
requests==2.32.3
|
||||
rich==13.7.1
|
||||
scikit-learn==1.4.2
|
||||
selenium==4.20.0
|
||||
uvicorn==0.29.0
|
||||
transformers==4.40.2
|
||||
chromedriver-autoinstaller==0.6.4
|
||||
torch==2.3.0
|
||||
scikit-learn==1.5.0
|
||||
selenium==4.23.1
|
||||
uvicorn==0.30.1
|
||||
transformers==4.41.2
|
||||
# webdriver-manager==4.0.1
|
||||
# chromedriver-autoinstaller==0.6.4
|
||||
torch==2.3.1
|
||||
onnxruntime==1.18.0
|
||||
tokenizers==0.19.1
|
||||
pillow==10.3.0
|
||||
slowapi==0.1.9
|
||||
35
setup.py
@@ -1,31 +1,44 @@
|
||||
from setuptools import setup, find_packages
|
||||
import os
|
||||
from pathlib import Path
|
||||
import shutil
|
||||
|
||||
# Create the .crawl4ai folder in the user's home directory if it doesn't exist
|
||||
# If the folder already exists, remove the cache folder
|
||||
crawl4ai_folder = Path.home() / ".crawl4ai"
|
||||
cache_folder = crawl4ai_folder / "cache"
|
||||
|
||||
if cache_folder.exists():
|
||||
shutil.rmtree(cache_folder)
|
||||
|
||||
crawl4ai_folder.mkdir(exist_ok=True)
|
||||
cache_folder.mkdir(exist_ok=True)
|
||||
|
||||
# Read the requirements from requirements.txt
|
||||
with open("requirements.txt") as f:
|
||||
requirements = f.read().splitlines()
|
||||
|
||||
# Define the requirements for different environments
|
||||
requirements_without_torch = [req for req in requirements if not req.startswith("torch")]
|
||||
requirements_without_transformers = [req for req in requirements if not req.startswith("transformers")]
|
||||
requirements_without_nltk = [req for req in requirements if not req.startswith("nltk")]
|
||||
requirements_without_torch_transformers_nlkt = [req for req in requirements if not req.startswith("torch") and not req.startswith("transformers") and not req.startswith("nltk")]
|
||||
default_requirements = [req for req in requirements if not req.startswith(("torch", "transformers", "onnxruntime", "nltk", "spacy", "tokenizers", "scikit-learn", "numpy"))]
|
||||
torch_requirements = [req for req in requirements if req.startswith(("torch", "nltk", "spacy", "scikit-learn", "numpy"))]
|
||||
transformer_requirements = [req for req in requirements if req.startswith(("transformers", "tokenizers", "onnxruntime"))]
|
||||
|
||||
setup(
|
||||
name="Crawl4AI",
|
||||
version="0.2.0",
|
||||
version="0.2.77",
|
||||
description="🔥🕷️ Crawl4AI: Open-source LLM Friendly Web Crawler & Scrapper",
|
||||
long_description=open("README.md").read(),
|
||||
long_description=open("README.md", encoding="utf-8").read(),
|
||||
long_description_content_type="text/markdown",
|
||||
url="https://github.com/unclecode/crawl4ai",
|
||||
author="Unclecode",
|
||||
author_email="unclecode@kidocode.com",
|
||||
license="MIT",
|
||||
packages=find_packages(),
|
||||
install_requires=requirements_without_torch_transformers_nlkt,
|
||||
install_requires=default_requirements,
|
||||
extras_require={
|
||||
"all": requirements, # Include all requirements
|
||||
"colab": requirements_without_torch, # Exclude torch for Colab
|
||||
"crawl": requirements_without_torch_transformers_nlkt
|
||||
"torch": torch_requirements,
|
||||
"transformer": transformer_requirements,
|
||||
"all": requirements,
|
||||
},
|
||||
entry_points={
|
||||
'console_scripts': [
|
||||
@@ -43,4 +56,4 @@ setup(
|
||||
"Programming Language :: Python :: 3.10",
|
||||
],
|
||||
python_requires=">=3.7",
|
||||
)
|
||||
)
|
||||