Compare commits
60 Commits
| SHA1 |
|---|
| 7afa11a02f |
| dec3d44224 |
| e5e6a34e80 |
| 897e766728 |
| 9200a6731d |
| 61c166ab19 |
| 659c8cd953 |
| 9ee988753d |
| 8ae6c43ca4 |
| b6713870ef |
| 40477493d3 |
| efcf3ac6eb |
| 9e43f7beda |
| aa9412e1b4 |
| cf6c835e18 |
| e5ecf291f3 |
| 9d0cafcfa6 |
| 7715623430 |
| f5a4e80e2c |
| 8463aabedf |
| 7f30144ef2 |
| fa5516aad6 |
| ca0336af9e |
| 65ed1aeade |
| 4d283ab386 |
| 3ff2a0d0e7 |
| 3cd1b3719f |
| 9926eb9f95 |
| 3abaa82501 |
| 88d8cd8650 |
| a08f21d66c |
| d58286989c |
| b58af3349c |
| 940df4631f |
| 685706e0aa |
| 7b0979e134 |
| 61ae2de841 |
| 5b28eed2c0 |
| f8a11779fe |
| d11a83c232 |
| 3255c7a3fa |
| 4756d0a532 |
| 7ba2142363 |
| 96d1eb0d0d |
| 144cfa0eda |
| a0dff192ae |
| 1fffeeedd2 |
| f51b078042 |
| b6023a51fb |
| 78cfad8b2f |
| 68b3dff74a |
| bfc4abd6e8 |
| 8c77a760fc |
| b9bf8ac9d7 |
| d6182bedd7 |
| 2217904876 |
| 2c2362b4d3 |
| 612ed3fef2 |
| fb2a6d0d04 |
| 19d3d39115 |
**.gitignore** (vendored, 12 lines changed)

```diff
@@ -165,6 +165,8 @@ Crawl4AI.egg-info/
 Crawl4AI.egg-info/*
 crawler_data.db
 .vscode/
+.tests/
+.test_pads/
 test_pad.py
 test_pad*.py
 .data/
@@ -181,4 +183,12 @@ docs/examples/.chainlit/*
 .chainlit/translations/en-US.json
 
 local/
 .files/
+
+a.txt
+.lambda_function.py
+ec2*
+
+update_changelog.sh
+test_env/
+tmp/
```
**CHANGELOG.md** (85 lines changed)

```diff
@@ -1,5 +1,90 @@
 # Changelog
+
+## [v0.2.77] - 2024-08-04
+
+Significant improvements in text processing and performance:
+
+- 🚀 **Dependency reduction**: Removed dependency on spaCy model for text chunk labeling in cosine extraction strategy.
+- 🤖 **Transformer upgrade**: Implemented text sequence classification using a transformer model for labeling text chunks.
+- ⚡ **Performance enhancement**: Improved model loading speed due to removal of spaCy dependency.
+- 🔧 **Future-proofing**: Laid groundwork for potential complete removal of spaCy dependency in future versions.
+
+These changes address issue #68 and provide a foundation for faster, more efficient text processing in Crawl4AI.
+
+## [v0.2.76] - 2024-08-02
+
+Major improvements in functionality, performance, and cross-platform compatibility! 🚀
+
+- 🐳 **Docker enhancements**: Significantly improved Dockerfile for easy installation on Linux, Mac, and Windows.
+- 🌐 **Official Docker Hub image**: Launched our first official image on Docker Hub for streamlined deployment.
+- 🔧 **Selenium upgrade**: Removed dependency on ChromeDriver, now using Selenium's built-in capabilities for better compatibility.
+- 🖼️ **Image description**: Implemented ability to generate textual descriptions for extracted images from web pages.
+- ⚡ **Performance boost**: Various improvements to enhance overall speed and performance.
+
+A big shoutout to our amazing community contributors:
+- [@aravindkarnam](https://github.com/aravindkarnam) for developing the textual description extraction feature.
+- [@FractalMind](https://github.com/FractalMind) for creating the first official Docker Hub image and fixing Dockerfile errors.
+- [@ketonkss4](https://github.com/ketonkss4) for identifying Selenium's new capabilities, helping us reduce dependencies.
+
+Your contributions are driving Crawl4AI forward! 🙌
+
+## [v0.2.75] - 2024-07-19
+
+Minor improvements for a more maintainable codebase:
+
+- 🔄 Fixed typos in `chunking_strategy.py` and `crawler_strategy.py` to improve code readability
+- 🔄 Removed `.test_pads/` directory from `.gitignore` to keep our repository clean and organized
+
+These changes may seem small, but they contribute to a more stable and sustainable codebase. By fixing typos and updating our `.gitignore` settings, we're ensuring that our code is easier to maintain and scale in the long run.
+
+## [v0.2.74] - 2024-07-08
+A slew of exciting updates to improve the crawler's stability and robustness! 🎉
+
+- 💻 **UTF encoding fix**: Resolved the Windows "charmap" error by adding UTF encoding.
+- 🛡️ **Error handling**: Implemented MaxRetryError exception handling in LocalSeleniumCrawlerStrategy.
+- 🧹 **Input sanitization**: Improved input sanitization and handled encoding issues in LLMExtractionStrategy.
+- 🚮 **Database cleanup**: Removed existing database file and initialized a new one.
+
+
+## [v0.2.73] - 2024-07-03
+
+💡 In this release, we've bumped the version to v0.2.73 and refreshed our documentation to ensure you have the best experience with our project.
+
+* Supporting website need "with-head" mode to crawl the website with head.
+* Fixing the installation issues for setup.py and dockerfile.
+* Resolve multiple issues.
+
+## [v0.2.72] - 2024-06-30
+
+This release brings exciting updates and improvements to our project! 🎉
+
+* 📚 **Documentation Updates**: Our documentation has been revamped to reflect the latest changes and additions.
+* 🚀 **New Modes in setup.py**: We've added support for three new modes in setup.py: default, torch, and transformers. This enhances the project's flexibility and usability.
+* 🐳 **Docker File Updates**: The Docker file has been updated to ensure seamless compatibility with the new modes and improvements.
+* 🕷️ **Temporary Solution for Headless Crawling**: We've implemented a temporary solution to overcome issues with crawling websites in headless mode.
+
+These changes aim to improve the overall user experience, provide more flexibility, and enhance the project's performance. We're thrilled to share these updates with you and look forward to continuing to evolve and improve our project!
+
+## [0.2.71] - 2024-06-26
+
+**Improved Error Handling and Performance** 🚧
+
+* 🚫 Refactored `crawler_strategy.py` to handle exceptions and provide better error messages, making it more robust and reliable.
+* 💻 Optimized the `get_content_of_website_optimized` function in `utils.py` for improved performance, reducing potential bottlenecks.
+* 💻 Updated `utils.py` with the latest changes, ensuring consistency and accuracy.
+* 🚫 Migrated to `ChromeDriverManager` to resolve Chrome driver download issues, providing a smoother user experience.
+
+These changes focus on refining the existing codebase, resulting in a more stable, efficient, and user-friendly experience. With these improvements, you can expect fewer errors and better performance in the crawler strategy and utility functions.
+
+## [0.2.71] - 2024-06-25
+### Fixed
+- Speed up twice the extraction function.
+
+
+## [0.2.6] - 2024-06-22
+### Fixed
+- Fix issue #19: Update Dockerfile to ensure compatibility across multiple platforms.
+
 ## [0.2.5] - 2024-06-18
 ### Added
 - Added five important hooks to the crawler:
```
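The v0.2.77 entry above replaces the spaCy labeler with a transformer classifier inside the cosine extraction strategy. As a rough orientation, here is a minimal usage sketch of that path; the `WebCrawler` and `CosineStrategy` names and the `crawler.run(extraction_strategy=...)` pattern are taken from the README and `extraction_strategy.py` diffs further down, while the URL and parameter values are purely illustrative.

```python
from crawl4ai import WebCrawler
from crawl4ai.extraction_strategy import CosineStrategy

crawler = WebCrawler()
crawler.warmup()

# CosineStrategy now labels text chunks with a transformer classifier instead of
# a spaCy model, so no spaCy download is needed on this path (see issue #68).
result = crawler.run(
    url="https://www.nbcnews.com/business",
    extraction_strategy=CosineStrategy(
        semantic_filter="business finance",  # assumed keyword filter
        word_count_threshold=10,             # minimum words per chunk
    ),
)
print(result.extracted_content)
```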
**CONTRIBUTORS.md** (new file, 31 lines)

```diff
@@ -0,0 +1,31 @@
+# Contributors to Crawl4AI
+
+We would like to thank the following people for their contributions to Crawl4AI:
+
+## Core Team
+
+- [Unclecode](https://github.com/unclecode) - Project Creator and Main Developer
+- [Nasrin](https://github.com/ntohidi) - Project Manager and Developer
+
+## Community Contributors
+
+- [Aravind Karnam](https://github.com/aravindkarnam) - Developed textual description extraction feature
+- [FractalMind](https://github.com/FractalMind) - Created the first official Docker Hub image and fixed Dockerfile errors
+- [ketonkss4](https://github.com/ketonkss4) - Identified Selenium's new capabilities, helping reduce dependencies
+
+## Other Contributors
+
+- [Gokhan](https://github.com/gkhngyk)
+- [Shiv Kumar](https://github.com/shivkumar0757)
+- [QIN2DIM](https://github.com/QIN2DIM)
+
+
+## Acknowledgements
+
+We also want to thank all the users who have reported bugs, suggested features, or helped in any other way to make Crawl4AI better.
+
+---
+
+If you've contributed to Crawl4AI and your name isn't on this list, please [open a pull request](https://github.com/unclecode/crawl4ai/pulls) with your name, link, and contribution, and we'll review it promptly.
+
+Thank you all for your contributions!
```
**Dockerfile** (55 lines changed)

```diff
@@ -4,6 +4,9 @@ FROM python:3.10-slim-bookworm
 # Set the working directory in the container
 WORKDIR /usr/src/app
 
+# Define build arguments
+ARG INSTALL_OPTION=default
+
 # Install build dependencies
 RUN apt-get update && \
     apt-get install -y --no-install-recommends \
@@ -18,45 +21,47 @@ RUN apt-get update && \
     software-properties-common && \
     rm -rf /var/lib/apt/lists/*
 
-# Install Python dependencies
-COPY requirements.txt .
-RUN pip install --no-cache-dir -r requirements.txt && \
-    pip install --no-cache-dir spacy torch onnxruntime uvicorn && \
-    python -m spacy download en_core_web_sm
-# pip install --no-cache-dir spacy torch torchvision torchaudio onnxruntime uvicorn && \
+# Copy the application code
+COPY . .
 
-# Install Google Chrome and ChromeDriver
+# Install Crawl4AI using the local setup.py with the specified option
+# and download models only for torch, transformer, or all options
+RUN if [ "$INSTALL_OPTION" = "all" ]; then \
+        pip install --no-cache-dir .[all] && \
+        crawl4ai-download-models; \
+    elif [ "$INSTALL_OPTION" = "torch" ]; then \
+        pip install --no-cache-dir .[torch] && \
+        crawl4ai-download-models; \
+    elif [ "$INSTALL_OPTION" = "transformer" ]; then \
+        pip install --no-cache-dir .[transformer] && \
+        crawl4ai-download-models; \
+    else \
+        pip install --no-cache-dir .; \
+    fi
+
+# Install Google Chrome
 RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
     sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list' && \
     apt-get update && \
-    apt-get install -y google-chrome-stable && \
-    wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip && \
-    unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
+    apt-get install -y google-chrome-stable
 
-# Copy the rest of the application code
-COPY . .
-
-# Set environment to use Chrome and ChromeDriver properly
+# Set environment to use Chrome properly
 ENV CHROME_BIN=/usr/bin/google-chrome \
-    CHROMEDRIVER=/usr/local/bin/chromedriver \
     DISPLAY=:99 \
     DBUS_SESSION_BUS_ADDRESS=/dev/null \
     PYTHONUNBUFFERED=1
 
-# pip install -e .[all]
-RUN pip install --no-cache-dir -e .[all]
-
 # Ensure the PATH environment variable includes the location of the installed packages
-ENV PATH /opt/conda/bin:$PATH
+ENV PATH=/opt/conda/bin:$PATH
 
 # Make port 80 available to the world outside this container
 EXPOSE 80
 
-# Download models call cli "crawl4ai-download-models"
-RUN crawl4ai-download-models
-# RUN python crawl4ai/model_loader.py
+# Install mkdocs
+RUN pip install mkdocs mkdocs-terminal
+
+# Call mkdocs to build the documentation
+RUN mkdocs build
 
 # Run uvicorn
 CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80", "--workers", "4"]
```
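A short sketch of how the new `INSTALL_OPTION` build argument might be exercised. The option names (`default`, `torch`, `transformer`, `all`) come from the `RUN if/elif` block above; the image tags and port mapping are illustrative.

```bash
# Default (lightweight) install
docker build -t crawl4ai .

# Install the torch extra and pre-download models at build time
docker build --build-arg INSTALL_OPTION=torch -t crawl4ai:torch .

# Everything: all extras plus model downloads
docker build --build-arg INSTALL_OPTION=all -t crawl4ai:all .

# Run the uvicorn app exposed on port 80 inside the container
docker run -d -p 8000:80 crawl4ai
```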
**README.md** (122 lines changed)

````diff
@@ -1,4 +1,4 @@
-# Crawl4AI v0.2.5 🕷️🤖
+# Crawl4AI v0.2.77 🕷️🤖
 
 [](https://github.com/unclecode/crawl4ai/stargazers)
 [](https://github.com/unclecode/crawl4ai/network/members)
@@ -8,10 +8,28 @@
 Crawl4AI simplifies web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. 🆓🌐
 
+#### [v0.2.77] - 2024-08-02
+
+Major improvements in functionality, performance, and cross-platform compatibility! 🚀
+
+- 🐳 **Docker enhancements**:
+  - Significantly improved Dockerfile for easy installation on Linux, Mac, and Windows.
+- 🌐 **Official Docker Hub image**:
+  - Launched our first official image on Docker Hub for streamlined deployment (unclecode/crawl4ai).
+- 🔧 **Selenium upgrade**:
+  - Removed dependency on ChromeDriver, now using Selenium's built-in capabilities for better compatibility.
+- 🖼️ **Image description**:
+  - Implemented ability to generate textual descriptions for extracted images from web pages.
+- ⚡ **Performance boost**:
+  - Various improvements to enhance overall speed and performance.
+
 ## Try it Now!
 
-- Use as REST API: [](https://colab.research.google.com/drive/1zODYjhemJ5bUmYceWpVoBMVpd0ofzNBZ?usp=sharing)
-- Use as Python library: [](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)
+✨ Play around with this [](https://colab.research.google.com/drive/1sJPAmeLj5PMrg2VgOwMJ2ubGIcK0cJeX?usp=sharing)
+
+✨ visit our [Documentation Website](https://crawl4ai.com/mkdocs/)
+
+✨ Check [Demo](https://crawl4ai.com/mkdocs/demo)
 
 ## Features ✨
 
@@ -30,6 +48,18 @@ Crawl4AI simplifies web crawling and data extraction, making it accessible for l
 - 🎯 CSS selector support
 - 📝 Passes instructions/keywords to refine extraction
 
+# Crawl4AI
+
+## 🌟 Shoutout to Contributors of v0.2.77!
+
+A big thank you to the amazing contributors who've made this release possible:
+
+- [@aravindkarnam](https://github.com/aravindkarnam) for the new image description feature
+- [@FractalMind](https://github.com/FractalMind) for our official Docker Hub image
+- [@ketonkss4](https://github.com/ketonkss4) for helping streamline our Selenium setup
+
+Your contributions are driving Crawl4AI forward! 🚀
+
 ## Cool Examples 🚀
 
 ### Quick Start
@@ -47,9 +77,62 @@ crawler.warmup()
 result = crawler.run(url="https://www.nbcnews.com/business")
 
 # Print the extracted content
-print(result.extracted_content)
+print(result.markdown)
 ```
+
+## How to install 🛠
+
+### Using pip 🐍
+```bash
+virtualenv venv
+source venv/bin/activate
+pip install "crawl4ai @ git+https://github.com/unclecode/crawl4ai.git"
+```
+
+### Using Docker 🐳
+
+```bash
+# For Mac users (M1/M2)
+# docker build --platform linux/amd64 -t crawl4ai .
+docker build -t crawl4ai .
+docker run -d -p 8000:80 crawl4ai
+```
+
+### Using Docker Hub 🐳
+
+```bash
+docker pull unclecode/crawl4ai:latest
+docker run -d -p 8000:80 unclecode/crawl4ai:latest
+```
+
+
+## Speed-First Design 🚀
+
+Perhaps the most important design principle for this library is speed. We need to ensure it can handle many links and resources in parallel as quickly as possible. By combining this speed with fast LLMs like Groq, the results will be truly amazing.
+
+```python
+import time
+from crawl4ai.web_crawler import WebCrawler
+crawler = WebCrawler()
+crawler.warmup()
+
+start = time.time()
+url = r"https://www.nbcnews.com/business"
+result = crawler.run( url, word_count_threshold=10, bypass_cache=True)
+end = time.time()
+print(f"Time taken: {end - start}")
+```
+
+Let's take a look the calculated time for the above code snippet:
+
+```bash
+[LOG] 🚀 Crawling done, success: True, time taken: 1.3623387813568115 seconds
+[LOG] 🚀 Content extracted, success: True, time taken: 0.05715131759643555 seconds
+[LOG] 🚀 Extraction, time taken: 0.05750393867492676 seconds.
+Time taken: 1.439958095550537
+```
+
+Fetching the content from the page took 1.3623 seconds, and extracting the content took 0.0575 seconds. 🚀
+
 ### Extract Structured Data from Web Pages 📊
 
 Crawl all OpenAI models and their fees from the official page.
@@ -58,19 +141,30 @@ Crawl all OpenAI models and their fees from the official page.
 import os
 from crawl4ai import WebCrawler
 from crawl4ai.extraction_strategy import LLMExtractionStrategy
+from pydantic import BaseModel, Field
+
+class OpenAIModelFee(BaseModel):
+    model_name: str = Field(..., description="Name of the OpenAI model.")
+    input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
+    output_fee: str = Field(..., description="Fee for output token ßfor the OpenAI model.")
 
 url = 'https://openai.com/api/pricing/'
 crawler = WebCrawler()
 crawler.warmup()
 
 result = crawler.run(
     url=url,
-    extraction_strategy=LLMExtractionStrategy(
-        provider="openai/gpt-4",
-        api_token=os.getenv('OPENAI_API_KEY'),
-        instruction="Extract all model names and their fees for input and output tokens."
-    ),
-)
+    word_count_threshold=1,
+    extraction_strategy= LLMExtractionStrategy(
+        provider= "openai/gpt-4o", api_token = os.getenv('OPENAI_API_KEY'),
+        schema=OpenAIModelFee.schema(),
+        extraction_type="schema",
+        instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
+        Do not miss any models in the entire content. One extracted model JSON format should look like this:
+        {"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}."""
+    ),
+    bypass_cache=True,
+)
 
 print(result.extracted_content)
 ```
@@ -98,7 +192,7 @@ print(result.extracted_content)
 
 ## Documentation 📚
 
-For detailed documentation, including installation instructions, advanced features, and API reference, visit our [Documentation Website](https://craw4ai.com/mkdocs/).
+For detailed documentation, including installation instructions, advanced features, and API reference, visit our [Documentation Website](https://crawl4ai.com/mkdocs/).
 
 ## Contributing 🤝
 
@@ -117,3 +211,7 @@ For questions, suggestions, or feedback, feel free to reach out:
 - Website: [crawl4ai.com](https://crawl4ai.com)
 
 Happy Crawling! 🕸️🚀
+
+## Star History
+
+[](https://star-history.com/#unclecode/crawl4ai&Date)
````
**chunking_strategy.py**

```diff
@@ -3,6 +3,7 @@ import re
 from collections import Counter
 import string
 from .model_loader import load_nltk_punkt
+from .utils import *
 
 # Define the abstract base class for chunking strategies
 class ChunkingStrategy(ABC):
@@ -54,7 +55,7 @@ class TopicSegmentationChunking(ChunkingStrategy):
 
     def __init__(self, num_keywords=3, **kwargs):
         import nltk as nl
-        self.tokenizer = nl.toknize.TextTilingTokenizer()
+        self.tokenizer = nl.tokenize.TextTilingTokenizer()
        self.num_keywords = num_keywords

    def chunk(self, text: str) -> list:
```
**config.py**

```diff
@@ -27,3 +27,14 @@ WORD_TOKEN_RATE = 1.3
 
 # Threshold for the minimum number of word in a HTML tag to be considered
 MIN_WORD_THRESHOLD = 1
+IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD = 1
+
+# Threshold for the Image extraction - Range is 1 to 6
+# Images are scored based on point based system, to filter based on usefulness. Points are assigned
+# to each image based on the following aspects.
+# If either height or width exceeds 150px
+# If image size is greater than 10Kb
+# If alt property is set
+# If image format is in jpg, png or webp
+# If image is in the first half of the total images extracted from the page
+IMAGE_SCORE_THRESHOLD = 2
```
**crawler_strategy.py**

```diff
@@ -5,8 +5,13 @@ from selenium.webdriver.common.by import By
 from selenium.webdriver.support.ui import WebDriverWait
 from selenium.webdriver.support import expected_conditions as EC
 from selenium.webdriver.chrome.options import Options
-from selenium.common.exceptions import InvalidArgumentException
-import logging
+from selenium.common.exceptions import InvalidArgumentException, WebDriverException
+# from selenium.webdriver.chrome.service import Service as ChromeService
+# from webdriver_manager.chrome import ChromeDriverManager
+# from urllib3.exceptions import MaxRetryError
+
+from .config import *
+import logging, time
 import base64
 from PIL import Image, ImageDraw, ImageFont
 from io import BytesIO
@@ -14,7 +19,7 @@ from typing import List, Callable
 import requests
 import os
 from pathlib import Path
-from .utils import wrap_text
+from .utils import *
 
 logger = logging.getLogger('selenium.webdriver.remote.remote_connection')
 logger.setLevel(logging.WARNING)
@@ -69,7 +74,7 @@ class CloudCrawlerStrategy(CrawlerStrategy):
         response = requests.post("http://crawl4ai.uccode.io/crawl", json=data)
         response = response.json()
         html = response["results"][0]["html"]
-        return html
+        return sanitize_input_encode(html)
 
 class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
     def __init__(self, use_cached_html=False, js_code=None, **kwargs):
@@ -80,14 +85,20 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
         if kwargs.get("user_agent"):
             self.options.add_argument("--user-agent=" + kwargs.get("user_agent"))
         else:
-            # Set user agent
             user_agent = kwargs.get("user_agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36")
             self.options.add_argument(f"--user-agent={user_agent}")
+            self.options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36")
 
-        self.options.add_argument("--no-sandbox")
         self.options.headless = kwargs.get("headless", True)
         if self.options.headless:
             self.options.add_argument("--headless")
+
+        self.options.add_argument("--disable-gpu")
+        self.options.add_argument("--window-size=1920,1080")
+        self.options.add_argument("--no-sandbox")
+        self.options.add_argument("--disable-dev-shm-usage")
+        self.options.add_argument("--disable-blink-features=AutomationControlled")
+
         # self.options.add_argument("--disable-dev-shm-usage")
         self.options.add_argument("--disable-gpu")
         # self.options.add_argument("--disable-extensions")
@@ -118,13 +129,23 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
         }
 
         # chromedriver_autoinstaller.install()
-        import chromedriver_autoinstaller
-        crawl4ai_folder = os.path.join(Path.home(), ".crawl4ai")
-        chromedriver_path = chromedriver_autoinstaller.utils.download_chromedriver(crawl4ai_folder, False)
+        # import chromedriver_autoinstaller
+        # crawl4ai_folder = os.path.join(Path.home(), ".crawl4ai")
+        # driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=self.options)
+        # chromedriver_path = chromedriver_autoinstaller.install()
+        # chromedriver_path = chromedriver_autoinstaller.utils.download_chromedriver()
         # self.service = Service(chromedriver_autoinstaller.install())
-        self.service = Service(chromedriver_path)
-        self.service.log_path = "NUL"
-        self.driver = webdriver.Chrome(service=self.service, options=self.options)
+        # chromedriver_path = ChromeDriverManager().install()
+        # self.service = Service(chromedriver_path)
+        # self.service.log_path = "NUL"
+        # self.driver = webdriver.Chrome(service=self.service, options=self.options)
+
+        # Use selenium-manager (built into Selenium 4.10.0+)
+        self.service = Service()
+        self.driver = webdriver.Chrome(options=self.options)
+
         self.driver = self.execute_hook('on_driver_created', self.driver)
 
         if kwargs.get("cookies"):
@@ -163,8 +184,20 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
             # Set extra HTTP headers
             self.driver.execute_cdp_cmd('Network.setExtraHTTPHeaders', {'headers': headers})
 
-    def crawl(self, url: str) -> str:
+    def _ensure_page_load(self, max_checks=6, check_interval=0.01):
+        initial_length = len(self.driver.page_source)
+
+        for ix in range(max_checks):
+            # print(f"Checking page load: {ix}")
+            time.sleep(check_interval)
+            current_length = len(self.driver.page_source)
+
+            if current_length != initial_length:
+                break
+
+        return self.driver.page_source
+
+    def crawl(self, url: str, **kwargs) -> str:
         # Create md5 hash of the URL
         import hashlib
         url_hash = hashlib.md5(url.encode()).hexdigest()
@@ -173,17 +206,40 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
         cache_file_path = os.path.join(Path.home(), ".crawl4ai", "cache", url_hash)
         if os.path.exists(cache_file_path):
             with open(cache_file_path, "r") as f:
-                return f.read()
+                return sanitize_input_encode(f.read())
 
         try:
             self.driver = self.execute_hook('before_get_url', self.driver)
             if self.verbose:
                 print(f"[LOG] 🕸️ Crawling {url} using LocalSeleniumCrawlerStrategy...")
-            self.driver.get(url)
-            WebDriverWait(self.driver, 10).until(
-                EC.presence_of_all_elements_located((By.TAG_NAME, "html"))
+            self.driver.get(url) #<html><head></head><body></body></html>
+
+            WebDriverWait(self.driver, 20).until(
+                lambda d: d.execute_script('return document.readyState') == 'complete'
             )
+            WebDriverWait(self.driver, 10).until(
+                EC.presence_of_all_elements_located((By.TAG_NAME, "body"))
+            )
+
+            self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
+
             self.driver = self.execute_hook('after_get_url', self.driver)
+            html = sanitize_input_encode(self._ensure_page_load()) # self.driver.page_source
+            can_not_be_done_headless = False # Look at my creativity for naming variables
+
+            # TODO: Very ugly approach, but promise to change it!
+            if kwargs.get('bypass_headless', False) or html == "<html><head></head><body></body></html>":
+                print("[LOG] 🙌 Page could not be loaded in headless mode. Trying non-headless mode...")
+                can_not_be_done_headless = True
+                options = Options()
+                options.headless = False
+                # set window size very small
+                options.add_argument("--window-size=5,5")
+                driver = webdriver.Chrome(service=self.service, options=options)
+                driver.get(url)
+                self.driver = self.execute_hook('after_get_url', driver)
+                html = sanitize_input_encode(driver.page_source)
+                driver.quit()
 
             # Execute JS code if provided
             if self.js_code and type(self.js_code) == str:
@@ -199,12 +255,13 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
                     lambda driver: driver.execute_script("return document.readyState") == "complete"
                 )
 
-            html = self.driver.page_source
+            if not can_not_be_done_headless:
+                html = sanitize_input_encode(self.driver.page_source)
             self.driver = self.execute_hook('before_return_html', self.driver, html)
 
             # Store in cache
             cache_file_path = os.path.join(Path.home(), ".crawl4ai", "cache", url_hash)
-            with open(cache_file_path, "w") as f:
+            with open(cache_file_path, "w", encoding="utf-8") as f:
                 f.write(html)
 
             if self.verbose:
@@ -212,9 +269,18 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
 
             return html
         except InvalidArgumentException:
-            raise InvalidArgumentException(f"Invalid URL {url}")
+            if not hasattr(e, 'msg'):
+                e.msg = sanitize_input_encode(str(e))
+            raise InvalidArgumentException(f"Failed to crawl {url}: {e.msg}")
+        except WebDriverException as e:
+            # If e does nlt have msg attribute create it and set it to str(e)
+            if not hasattr(e, 'msg'):
+                e.msg = sanitize_input_encode(str(e))
+            raise WebDriverException(f"Failed to crawl {url}: {e.msg}")
         except Exception as e:
-            raise Exception(f"Failed to crawl {url}: {str(e)}")
+            if not hasattr(e, 'msg'):
+                e.msg = sanitize_input_encode(str(e))
+            raise Exception(f"Failed to crawl {url}: {e.msg}")
 
     def take_screenshot(self) -> str:
         try:
@@ -231,18 +297,20 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
             # Open the screenshot with PIL
             image = Image.open(BytesIO(screenshot))
 
+            # Convert image to RGB mode (this will handle both RGB and RGBA images)
+            rgb_image = image.convert('RGB')
+
             # Convert to JPEG and compress
             buffered = BytesIO()
-            image.save(buffered, format="JPEG", quality=85)
+            rgb_image.save(buffered, format="JPEG", quality=85)
             img_base64 = base64.b64encode(buffered.getvalue()).decode('utf-8')
 
             if self.verbose:
                 print(f"[LOG] 📸 Screenshot taken and converted to base64")
 
             return img_base64
 
         except Exception as e:
-            error_message = f"Failed to take screenshot: {str(e)}"
+            error_message = sanitize_input_encode(f"Failed to take screenshot: {str(e)}")
             print(error_message)
 
             # Generate an image with black background
@@ -253,7 +321,7 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
             try:
                 font = ImageFont.truetype("arial.ttf", 40)
             except IOError:
-                font = ImageFont.load_default(size=40)
+                font = ImageFont.load_default()
 
             # Define text color and wrap the text
             text_color = (255, 255, 255)
@@ -272,6 +340,6 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
             img_base64 = base64.b64encode(buffered.getvalue()).decode('utf-8')
 
             return img_base64
 
     def quit(self):
         self.driver.quit()
```
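For orientation, a hedged sketch of driving the updated strategy directly. The class name, the `headless` kwarg, and the new `bypass_headless` flag appear in the diff above; the module path `crawl4ai.crawler_strategy` and the target URL are assumptions.

```python
from crawl4ai.crawler_strategy import LocalSeleniumCrawlerStrategy

# Selenium 4.10+ resolves the browser driver itself (selenium-manager),
# so no separate ChromeDriver install is required after this change.
strategy = LocalSeleniumCrawlerStrategy(use_cached_html=False, headless=True)

# bypass_headless=True would force the non-headless fallback added above.
html = strategy.crawl("https://www.nbcnews.com/business", bypass_headless=False)
print(f"Fetched {len(html)} characters of HTML")

strategy.quit()
```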
**database.py**

```diff
@@ -20,7 +20,7 @@ def init_db():
             extracted_content TEXT,
             success BOOLEAN,
             media TEXT DEFAULT "{}",
-            link TEXT DEFAULT "{}",
+            links TEXT DEFAULT "{}",
             metadata TEXT DEFAULT "{}",
             screenshot TEXT DEFAULT ""
         )
@@ -127,6 +127,9 @@ def update_existing_records(new_column: str = "media", default_value: str = "{}"
         print(f"Error updating existing records: {e}")
 
 if __name__ == "__main__":
-    init_db() # Initialize the database if not already initialized
-    alter_db_add_screenshot("metadata") # Add the new column to the table
-    update_existing_records("metadata") # Update existing records to set the new column to an empty string
+    # Delete the existing database file
+    if os.path.exists(DB_PATH):
+        os.remove(DB_PATH)
+    init_db()
+    # alter_db_add_screenshot("COL_NAME")
```
**extraction_strategy.py**

```diff
@@ -9,8 +9,9 @@ from .utils import *
 from functools import partial
 from .model_loader import *
 import math
+
 import numpy as np
 
 
 class ExtractionStrategy(ABC):
     """
     Abstract base class for all extraction strategies.
@@ -100,8 +101,8 @@ class LLMExtractionStrategy(ExtractionStrategy):
             variable_values["REQUEST"] = self.instruction
             prompt_with_variables = PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION
 
-        if self.extract_type == "schema":
-            variable_values["SCHEMA"] = json.dumps(self.schema)
+        if self.extract_type == "schema" and self.schema:
+            variable_values["SCHEMA"] = json.dumps(self.schema, indent=2)
             prompt_with_variables = PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION
 
         for variable in variable_values:
@@ -109,14 +110,13 @@ class LLMExtractionStrategy(ExtractionStrategy):
                 "{" + variable + "}", variable_values[variable]
             )
 
-        response = perform_completion_with_backoff(self.provider, prompt_with_variables, self.api_token)
+        response = perform_completion_with_backoff(self.provider, prompt_with_variables, self.api_token) # , json_response=self.extract_type == "schema")
         try:
             blocks = extract_xml_data(["blocks"], response.choices[0].message.content)['blocks']
             blocks = json.loads(blocks)
             for block in blocks:
                 block['error'] = False
         except Exception as e:
-            print("Error extracting blocks:", str(e))
             parsed, unparsed = split_and_parse_json_objects(response.choices[0].message.content)
             blocks = parsed
             if unparsed:
@@ -192,16 +192,31 @@ class LLMExtractionStrategy(ExtractionStrategy):
             # Sequential processing with a delay
             for ix, section in enumerate(merged_sections):
                 extract_func = partial(self.extract, url)
-                extracted_content.extend(extract_func(ix, section))
+                extracted_content.extend(extract_func(ix, sanitize_input_encode(section)))
                 time.sleep(0.5)  # 500 ms delay between each processing
         else:
             # Parallel processing using ThreadPoolExecutor
+            # extract_func = partial(self.extract, url)
+            # for ix, section in enumerate(merged_sections):
+            #     extracted_content.append(extract_func(ix, section))
+
             with ThreadPoolExecutor(max_workers=4) as executor:
                 extract_func = partial(self.extract, url)
-                futures = [executor.submit(extract_func, ix, section) for ix, section in enumerate(merged_sections)]
+                futures = [executor.submit(extract_func, ix, sanitize_input_encode(section)) for ix, section in enumerate(merged_sections)]
 
                 for future in as_completed(futures):
-                    extracted_content.extend(future.result())
+                    try:
+                        extracted_content.extend(future.result())
+                    except Exception as e:
+                        if self.verbose:
+                            print(f"Error in thread execution: {e}")
+                        # Add error information to extracted_content
+                        extracted_content.append({
+                            "index": 0,
+                            "error": True,
+                            "tags": ["error"],
+                            "content": str(e)
+                        })
 
         return extracted_content
@@ -219,6 +234,8 @@ class CosineStrategy(ExtractionStrategy):
         """
         super().__init__()
 
+        import numpy as np
+
         self.semantic_filter = semantic_filter
         self.word_count_threshold = word_count_threshold
         self.max_dist = max_dist
@@ -232,6 +249,9 @@ class CosineStrategy(ExtractionStrategy):
         self.get_embedding_method = "direct"
 
         self.device = get_device()
+        import torch
+        self.device = torch.device('cpu')
+
         self.default_batch_size = calculate_batch_size(self.device)
 
         if self.verbose:
@@ -244,7 +264,9 @@ class CosineStrategy(ExtractionStrategy):
         # else:
 
         self.tokenizer, self.model = load_bge_small_en_v1_5()
+        self.model.to(self.device)
         self.model.eval()
+
         self.get_embedding_method = "batch"
 
         self.buffer_embeddings = np.array([])
@@ -266,7 +288,7 @@ class CosineStrategy(ExtractionStrategy):
         if self.verbose:
             print(f"[LOG] Loading Multilabel Classifier for {self.device.type} device.")
 
-        self.nlp, self.device = load_text_multilabel_classifier()
+        self.nlp, _ = load_text_multilabel_classifier()
         # self.default_batch_size = 16 if self.device.type == 'cpu' else 64
 
         if self.verbose:
@@ -437,21 +459,21 @@ class CosineStrategy(ExtractionStrategy):
         if self.verbose:
             print(f"[LOG] 🚀 Assign tags using {self.device}")
 
-        if self.device.type in ["gpu", "cuda", "mps"]:
+        if self.device.type in ["gpu", "cuda", "mps", "cpu"]:
             labels = self.nlp([cluster['content'] for cluster in cluster_list])
 
             for cluster, label in zip(cluster_list, labels):
                 cluster['tags'] = label
-        elif self.device == "cpu":
-            # Process the text with the loaded model
-            texts = [cluster['content'] for cluster in cluster_list]
-            # Batch process texts
-            docs = self.nlp.pipe(texts, disable=["tagger", "parser", "ner", "lemmatizer"])
+        # elif self.device.type == "cpu":
+        #     # Process the text with the loaded model
+        #     texts = [cluster['content'] for cluster in cluster_list]
+        #     # Batch process texts
+        #     docs = self.nlp.pipe(texts, disable=["tagger", "parser", "ner", "lemmatizer"])
 
-            for doc, cluster in zip(docs, cluster_list):
-                tok_k = self.top_k
-                top_categories = sorted(doc.cats.items(), key=lambda x: x[1], reverse=True)[:tok_k]
-                cluster['tags'] = [cat for cat, _ in top_categories]
+        #     for doc, cluster in zip(docs, cluster_list):
+        #         tok_k = self.top_k
+        #         top_categories = sorted(doc.cats.items(), key=lambda x: x[1], reverse=True)[:tok_k]
+        #         cluster['tags'] = [cat for cat, _ in top_categories]
 
         # for cluster in cluster_list:
         #     doc = self.nlp(cluster['content'])
```
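The thread-pool change above turns per-section failures into error entries instead of aborting the whole extraction. A small, hypothetical post-processing sketch that separates those entries out; the `error`/`tags`/`content` keys mirror the dict built in the diff, and the sample payload itself is invented.

```python
import json

# Payload shaped like the JSON list of blocks that LLMExtractionStrategy produces;
# the values here are made up for illustration.
extracted_content = json.dumps([
    {"index": 0, "error": False, "tags": ["news"], "content": "Example block text."},
    {"index": 0, "error": True, "tags": ["error"], "content": "rate limit exceeded"},
])

blocks = json.loads(extracted_content)
good = [b for b in blocks if not b.get("error")]
failed = [b for b in blocks if b.get("error")]

print(f"{len(good)} blocks extracted cleanly, {len(failed)} sections failed")
for block in failed:
    # The 'content' field of an error entry carries the exception text.
    print("extraction error:", block["content"])
```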
**model_loader.py**

```diff
@@ -3,9 +3,10 @@ from pathlib import Path
 import subprocess, os
 import shutil
 import tarfile
-from crawl4ai.config import MODEL_REPO_BRANCH
+from .model_loader import *
 import argparse
 import urllib.request
+from crawl4ai.config import MODEL_REPO_BRANCH
 __location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
 
 @lru_cache()
@@ -141,14 +142,15 @@ def load_text_multilabel_classifier():
     from scipy.special import expit
     import torch
 
-    # Check for available device: CUDA, MPS (for Apple Silicon), or CPU
-    if torch.cuda.is_available():
-        device = torch.device("cuda")
-    elif torch.backends.mps.is_available():
-        device = torch.device("mps")
-    else:
-        return load_spacy_model(), torch.device("cpu")
+    # # Check for available device: CUDA, MPS (for Apple Silicon), or CPU
+    # if torch.cuda.is_available():
+    #     device = torch.device("cuda")
+    # elif torch.backends.mps.is_available():
+    #     device = torch.device("mps")
+    # else:
+    #     device = torch.device("cpu")
+    # # return load_spacy_model(), torch.device("cpu")
 
     MODEL = "cardiffnlp/tweet-topic-21-multi"
     tokenizer = AutoTokenizer.from_pretrained(MODEL, resume_download=None)
@@ -192,51 +194,61 @@ def load_spacy_model():
     import spacy
     name = "models/reuters"
     home_folder = get_home_folder()
-    model_folder = os.path.join(home_folder, name)
+    model_folder = Path(home_folder) / name
 
     # Check if the model directory already exists
-    if not (Path(model_folder).exists() and any(Path(model_folder).iterdir())):
+    if not (model_folder.exists() and any(model_folder.iterdir())):
         repo_url = "https://github.com/unclecode/crawl4ai.git"
-        # branch = "main"
         branch = MODEL_REPO_BRANCH
-        repo_folder = os.path.join(home_folder, "crawl4ai")
-        model_folder = os.path.join(home_folder, name)
-        # print("[LOG] ⏬ Downloading Spacy model for the first time...")
+        repo_folder = Path(home_folder) / "crawl4ai"
+        print("[LOG] ⏬ Downloading Spacy model for the first time...")
 
         # Remove existing repo folder if it exists
-        if Path(repo_folder).exists():
-            shutil.rmtree(repo_folder)
-            shutil.rmtree(model_folder)
+        if repo_folder.exists():
+            try:
+                shutil.rmtree(repo_folder)
+                if model_folder.exists():
+                    shutil.rmtree(model_folder)
+            except PermissionError:
+                print("[WARNING] Unable to remove existing folders. Please manually delete the following folders and try again:")
+                print(f"- {repo_folder}")
+                print(f"- {model_folder}")
+                return None
 
         try:
             # Clone the repository
             subprocess.run(
-                ["git", "clone", "-b", branch, repo_url, repo_folder],
+                ["git", "clone", "-b", branch, repo_url, str(repo_folder)],
                 stdout=subprocess.DEVNULL,
                 stderr=subprocess.DEVNULL,
                 check=True
             )
 
             # Create the models directory if it doesn't exist
-            models_folder = os.path.join(home_folder, "models")
-            os.makedirs(models_folder, exist_ok=True)
+            models_folder = Path(home_folder) / "models"
+            models_folder.mkdir(parents=True, exist_ok=True)
 
             # Copy the reuters model folder to the models directory
-            source_folder = os.path.join(repo_folder, "models/reuters")
+            source_folder = repo_folder / "models" / "reuters"
             shutil.copytree(source_folder, model_folder)
 
             # Remove the cloned repository
             shutil.rmtree(repo_folder)
 
-            # Print completion message
-            # print("[LOG] ✅ Spacy Model downloaded successfully")
+            print("[LOG] ✅ Spacy Model downloaded successfully")
         except subprocess.CalledProcessError as e:
             print(f"An error occurred while cloning the repository: {e}")
+            return None
         except Exception as e:
             print(f"An error occurred: {e}")
+            return None
 
-    return spacy.load(model_folder)
+    try:
+        return spacy.load(str(model_folder))
+    except Exception as e:
+        print(f"Error loading spacy model: {e}")
+        return None
 
 def download_all_models(remove_existing=False):
     """Download all models required for Crawl4AI."""
```
@@ -186,7 +186,7 @@ The user has made the following request for what information to extract from the

 Please carefully read the URL content and the user's request. If the user provided a desired JSON schema in the <schema_block> above, extract the requested information from the URL content according to that schema. If no schema was provided, infer an appropriate JSON schema based on the user's request that will best capture the key information they are looking for.

 Extraction instructions:
-Return the extracted information as a list of JSON objects, with each object in the list corresponding to a block of content from the URL, in the same order as it appears on the page. Wrap the entire JSON list in <blocks> tags.
+Return the extracted information as a list of JSON objects, with each object in the list corresponding to a block of content from the URL, in the same order as it appears on the page. Wrap the entire JSON list in <blocks>...</blocks> XML tags.

 Quality Reflection:
 Before outputting your final answer, double check that the JSON you are returning is complete, containing all the information requested by the user, and is valid JSON that could be parsed by json.loads() with no errors or omissions. The outputted JSON objects should fully match the schema, either provided or inferred.

@@ -194,5 +194,11 @@ Before outputting your final answer, double check that the JSON you are returnin
 Quality Score:
 After reflecting, score the quality and completeness of the JSON data you are about to return on a scale of 1 to 5. Write the score inside <score> tags.

+Avoid Common Mistakes:
+- Do NOT add any comments using "//" or "#" in the JSON output. It causes parsing errors.
+- Make sure the JSON is properly formatted with curly braces, square brackets, and commas in the right places.
+- Do not miss the closing </blocks> tag at the end of the JSON output.
+- Do not generate Python code that shows how to do the task; your task is to extract the requested information and return it in JSON format.
+
 Result
-Output the final list of JSON objects, wrapped in <blocks> tags."""
+Output the final list of JSON objects, wrapped in <blocks>...</blocks> XML tags. Make sure to close the tag properly."""
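The prompt above defines a small output protocol: a JSON list wrapped in <blocks>...</blocks> plus a 1-to-5 rating inside <score> tags. As a rough, hypothetical sketch of how a caller could consume such a response (the library itself relies on its own extract_xml_data and split_and_parse_json_objects helpers, which appear further down in this diff), one might write:

```python
import json
import re

def parse_blocks_response(response_text: str):
    """Pull the JSON list out of <blocks>...</blocks> and the optional <score> value."""
    blocks_match = re.search(r"<blocks>(.*?)</blocks>", response_text, re.DOTALL)
    score_match = re.search(r"<score>\s*(\d)\s*</score>", response_text)
    blocks = json.loads(blocks_match.group(1)) if blocks_match else []
    score = int(score_match.group(1)) if score_match else None
    return blocks, score
```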
@@ -10,6 +10,10 @@ from html2text import HTML2Text
 from .prompts import PROMPT_EXTRACT_BLOCKS
 from .config import *
 from pathlib import Path
+from typing import Dict, Any
+from urllib.parse import urljoin
+import requests
+from requests.exceptions import InvalidSchema

 class InvalidCSSSelectorError(Exception):
     pass
@@ -95,6 +99,16 @@ def sanitize_html(html)
     return sanitized_html

+def sanitize_input_encode(text: str) -> str:
+    """Sanitize input to handle potential encoding issues."""
+    try:
+        # Attempt to encode and decode as UTF-8 to handle potential encoding issues
+        return text.encode('utf-8', errors='ignore').decode('utf-8')
+    except UnicodeEncodeError as e:
+        print(f"Warning: Encoding issue detected. Some characters may be lost. Error: {e}")
+        # Fall back to ASCII if UTF-8 fails
+        return text.encode('ascii', errors='ignore').decode('ascii')
+
 def escape_json_string(s):
     """
     Escapes characters in a string to be JSON safe.
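For a quick sense of what the new helper buys you, here is a small, hypothetical round trip (the import path is assumed from the file being patched, which looks like crawl4ai/utils.py): a character that cannot be encoded as UTF-8, such as a lone surrogate, is silently dropped instead of blowing up later when the text is written out.

```python
from crawl4ai.utils import sanitize_input_encode  # assumed import path for the patched module

broken = "caf\udce9"                    # lone surrogate that cannot be encoded as UTF-8
clean = sanitize_input_encode(broken)   # the offending character is dropped
print(clean)                            # -> "caf"
```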
@@ -175,16 +189,25 @@ def replace_inline_tags(soup, tags, only_text=False):
         'small': lambda tag: f"<small>{tag.text}</small>",
         'mark': lambda tag: f"=={tag.text}=="
     }

-    for tag_name in tags:
-        for tag in soup.find_all(tag_name):
-            if not only_text:
-                replacement_text = tag_replacements.get(tag_name, lambda t: t.text)(tag)
-                tag.replace_with(replacement_text)
-            else:
-                tag.replace_with(tag.text)
+    replacement_data = [(tag, tag_replacements.get(tag, lambda t: t.text)) for tag in tags]
+
+    for tag_name, replacement_func in replacement_data:
+        for tag in soup.find_all(tag_name):
+            replacement_text = tag.text if only_text else replacement_func(tag)
+            tag.replace_with(replacement_text)

     return soup

+    # for tag_name in tags:
+    #     for tag in soup.find_all(tag_name):
+    #         if not only_text:
+    #             replacement_text = tag_replacements.get(tag_name, lambda t: t.text)(tag)
+    #             tag.replace_with(replacement_text)
+    #         else:
+    #             tag.replace_with(tag.text)
+
+    # return soup

 def get_content_of_website(url, html, word_count_threshold = MIN_WORD_THRESHOLD, css_selector = None, **kwargs):
     try:
@@ -388,29 +411,262 @@ def get_content_of_website(url, html, word_count_threshold = MIN_WORD_THRESHOLD,
|
|||||||
markdown = h.handle(cleaned_html)
|
markdown = h.handle(cleaned_html)
|
||||||
markdown = markdown.replace(' ```', '```')
|
markdown = markdown.replace(' ```', '```')
|
||||||
|
|
||||||
|
try:
|
||||||
|
meta = extract_metadata(html, soup)
|
||||||
|
except Exception as e:
|
||||||
|
print('Error extracting metadata:', str(e))
|
||||||
|
meta = {}
|
||||||
|
|
||||||
|
|
||||||
# Return the Markdown content
|
# Return the Markdown content
|
||||||
return{
|
return{
|
||||||
'markdown': markdown,
|
'markdown': markdown,
|
||||||
'cleaned_html': cleaned_html,
|
'cleaned_html': cleaned_html,
|
||||||
'success': True,
|
'success': True,
|
||||||
'media': media,
|
'media': media,
|
||||||
'links': links
|
'links': links,
|
||||||
|
'metadata': meta
|
||||||
}
|
}
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
print('Error processing HTML content:', str(e))
|
print('Error processing HTML content:', str(e))
|
||||||
raise InvalidCSSSelectorError(f"Invalid CSS selector: {css_selector}") from e
|
raise InvalidCSSSelectorError(f"Invalid CSS selector: {css_selector}") from e
|
||||||
|
|
||||||
|
def get_content_of_website_optimized(url: str, html: str, word_count_threshold: int = MIN_WORD_THRESHOLD, css_selector: str = None, **kwargs) -> Dict[str, Any]:
|
||||||
|
if not html:
|
||||||
|
return None
|
||||||
|
|
||||||
|
soup = BeautifulSoup(html, 'html.parser')
|
||||||
|
body = soup.body
|
||||||
|
|
||||||
|
image_description_min_word_threshold = kwargs.get('image_description_min_word_threshold', IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD)
|
||||||
|
|
||||||
def extract_metadata(html):
|
if css_selector:
|
||||||
|
selected_elements = body.select(css_selector)
|
||||||
|
if not selected_elements:
|
||||||
|
raise InvalidCSSSelectorError(f"Invalid CSS selector, No elements found for CSS selector: {css_selector}")
|
||||||
|
body = soup.new_tag('div')
|
||||||
|
for el in selected_elements:
|
||||||
|
body.append(el)
|
||||||
|
|
||||||
|
links = {'internal': [], 'external': []}
|
||||||
|
media = {'images': [], 'videos': [], 'audios': []}
|
||||||
|
|
||||||
|
def process_image(img, url, index, total_images):
|
||||||
|
# Check whether the image is actually displayed and not inside an undesired HTML element
|
||||||
|
def is_valid_image(img, parent, parent_classes):
|
||||||
|
style = img.get('style', '')
|
||||||
|
src = img.get('src', '')
|
||||||
|
classes_to_check = ['button', 'icon', 'logo']
|
||||||
|
tags_to_check = ['button', 'input']
|
||||||
|
return all([
|
||||||
|
'display:none' not in style,
|
||||||
|
src,
|
||||||
|
not any(s in var for var in [src, img.get('alt', ''), *parent_classes] for s in classes_to_check),
|
||||||
|
parent.name not in tags_to_check
|
||||||
|
])
|
||||||
|
|
||||||
|
# Score an image for its usefulness
|
||||||
|
def score_image_for_usefulness(img, base_url, index, images_count):
|
||||||
|
# Function to parse image height/width value and units
|
||||||
|
def parse_dimension(dimension):
|
||||||
|
if dimension:
|
||||||
|
match = re.match(r"(\d+)(\D*)", dimension)
|
||||||
|
if match:
|
||||||
|
number = int(match.group(1))
|
||||||
|
unit = match.group(2) or 'px' # Default unit is 'px' if not specified
|
||||||
|
return number, unit
|
||||||
|
return None, None
|
||||||
|
|
||||||
|
# Fetch image file metadata to extract size and extension
|
||||||
|
def fetch_image_file_size(img, base_url):
|
||||||
|
# If src is a relative path, construct the full URL; otherwise it may already be an absolute/CDN URL
|
||||||
|
img_url = urljoin(base_url,img.get('src'))
|
||||||
|
try:
|
||||||
|
response = requests.head(img_url)
|
||||||
|
if response.status_code == 200:
|
||||||
|
return response.headers.get('Content-Length',None)
|
||||||
|
else:
|
||||||
|
print(f"Failed to retrieve file size for {img_url}")
|
||||||
|
return None
|
||||||
|
except InvalidSchema as e:
|
||||||
|
return None
|
||||||
|
finally:
|
||||||
|
return
|
||||||
|
|
||||||
|
image_height = img.get('height')
|
||||||
|
height_value, height_unit = parse_dimension(image_height)
|
||||||
|
image_width = img.get('width')
|
||||||
|
width_value, width_unit = parse_dimension(image_width)
|
||||||
|
image_size = 0 #int(fetch_image_file_size(img,base_url) or 0)
|
||||||
|
image_format = os.path.splitext(img.get('src',''))[1].lower()
|
||||||
|
# Remove . from format
|
||||||
|
image_format = image_format.strip('.')
|
||||||
|
score = 0
|
||||||
|
if height_value:
|
||||||
|
if height_unit == 'px' and height_value > 150:
|
||||||
|
score += 1
|
||||||
|
if height_unit in ['%','vh','vmin','vmax'] and height_value >30:
|
||||||
|
score += 1
|
||||||
|
if width_value:
|
||||||
|
if width_unit == 'px' and width_value > 150:
|
||||||
|
score += 1
|
||||||
|
if width_unit in ['%','vh','vmin','vmax'] and width_value >30:
|
||||||
|
score += 1
|
||||||
|
if image_size > 10000:
|
||||||
|
score += 1
|
||||||
|
if img.get('alt') != '':
|
||||||
|
score+=1
|
||||||
|
if any(image_format==format for format in ['jpg','png','webp']):
|
||||||
|
score+=1
|
||||||
|
if index/images_count<0.5:
|
||||||
|
score+=1
|
||||||
|
return score
|
||||||
|
|
||||||
|
# Extract meaningful text for images from closest parent
|
||||||
|
def find_closest_parent_with_useful_text(tag):
|
||||||
|
current_tag = tag
|
||||||
|
while current_tag:
|
||||||
|
current_tag = current_tag.parent
|
||||||
|
# Get the text content of the parent tag
|
||||||
|
if current_tag:
|
||||||
|
text_content = current_tag.get_text(separator=' ',strip=True)
|
||||||
|
# Check if the text content has at least word_count_threshold
|
||||||
|
if len(text_content.split()) >= image_description_min_word_threshold:
|
||||||
|
return text_content
|
||||||
|
return None
|
||||||
|
|
||||||
|
if not is_valid_image(img, img.parent, img.parent.get('class', [])):
|
||||||
|
return None
|
||||||
|
score = score_image_for_usefulness(img, url, index, total_images)
|
||||||
|
if score <= IMAGE_SCORE_THRESHOLD:
|
||||||
|
return None
|
||||||
|
return {
|
||||||
|
'src': img.get('src', ''),
|
||||||
|
'alt': img.get('alt', ''),
|
||||||
|
'desc': find_closest_parent_with_useful_text(img),
|
||||||
|
'score': score,
|
||||||
|
'type': 'image'
|
||||||
|
}
|
||||||
|
|
||||||
|
def process_element(element: element.PageElement) -> bool:
|
||||||
|
try:
|
||||||
|
if isinstance(element, NavigableString):
|
||||||
|
if isinstance(element, Comment):
|
||||||
|
element.extract()
|
||||||
|
return False
|
||||||
|
|
||||||
|
if element.name in ['script', 'style', 'link', 'meta', 'noscript']:
|
||||||
|
element.decompose()
|
||||||
|
return False
|
||||||
|
|
||||||
|
keep_element = False
|
||||||
|
|
||||||
|
if element.name == 'a' and element.get('href'):
|
||||||
|
href = element['href']
|
||||||
|
url_base = url.split('/')[2]
|
||||||
|
link_data = {'href': href, 'text': element.get_text()}
|
||||||
|
if href.startswith('http') and url_base not in href:
|
||||||
|
links['external'].append(link_data)
|
||||||
|
else:
|
||||||
|
links['internal'].append(link_data)
|
||||||
|
keep_element = True
|
||||||
|
|
||||||
|
elif element.name == 'img':
|
||||||
|
return True # Always keep image elements
|
||||||
|
|
||||||
|
elif element.name in ['video', 'audio']:
|
||||||
|
media[f"{element.name}s"].append({
|
||||||
|
'src': element.get('src'),
|
||||||
|
'alt': element.get('alt'),
|
||||||
|
'type': element.name
|
||||||
|
})
|
||||||
|
return True # Always keep video and audio elements
|
||||||
|
|
||||||
|
if element.name != 'pre':
|
||||||
|
if element.name in ['b', 'i', 'u', 'span', 'del', 'ins', 'sub', 'sup', 'strong', 'em', 'code', 'kbd', 'var', 's', 'q', 'abbr', 'cite', 'dfn', 'time', 'small', 'mark']:
|
||||||
|
if kwargs.get('only_text', False):
|
||||||
|
element.replace_with(element.get_text())
|
||||||
|
else:
|
||||||
|
element.unwrap()
|
||||||
|
elif element.name != 'img':
|
||||||
|
element.attrs = {}
|
||||||
|
|
||||||
|
# Process children
|
||||||
|
for child in list(element.children):
|
||||||
|
if isinstance(child, NavigableString) and not isinstance(child, Comment):
|
||||||
|
if len(child.strip()) > 0:
|
||||||
|
keep_element = True
|
||||||
|
else:
|
||||||
|
if process_element(child):
|
||||||
|
keep_element = True
|
||||||
|
|
||||||
|
|
||||||
|
# Check word count
|
||||||
|
if not keep_element:
|
||||||
|
word_count = len(element.get_text(strip=True).split())
|
||||||
|
keep_element = word_count >= word_count_threshold
|
||||||
|
|
||||||
|
if not keep_element:
|
||||||
|
element.decompose()
|
||||||
|
|
||||||
|
return keep_element
|
||||||
|
except Exception as e:
|
||||||
|
print('Error processing element:', str(e))
|
||||||
|
return False
|
||||||
|
|
||||||
|
# Process images: filter them and extract contextual text from the page
|
||||||
|
imgs = body.find_all('img')
|
||||||
|
media['images'] = [
|
||||||
|
result for result in
|
||||||
|
(process_image(img, url, i, len(imgs)) for i, img in enumerate(imgs))
|
||||||
|
if result is not None
|
||||||
|
]
|
||||||
|
|
||||||
|
process_element(body)
|
||||||
|
|
||||||
|
def flatten_nested_elements(node):
|
||||||
|
if isinstance(node, NavigableString):
|
||||||
|
return node
|
||||||
|
if len(node.contents) == 1 and isinstance(node.contents[0], element.Tag) and node.contents[0].name == node.name:
|
||||||
|
return flatten_nested_elements(node.contents[0])
|
||||||
|
node.contents = [flatten_nested_elements(child) for child in node.contents]
|
||||||
|
return node
|
||||||
|
|
||||||
|
body = flatten_nested_elements(body)
|
||||||
|
|
||||||
|
cleaned_html = str(body).replace('\n\n', '\n').replace(' ', ' ')
|
||||||
|
cleaned_html = sanitize_html(cleaned_html)
|
||||||
|
|
||||||
|
h = CustomHTML2Text()
|
||||||
|
h.ignore_links = True
|
||||||
|
markdown = h.handle(cleaned_html)
|
||||||
|
markdown = markdown.replace(' ```', '```')
|
||||||
|
|
||||||
|
try:
|
||||||
|
meta = extract_metadata(html, soup)
|
||||||
|
except Exception as e:
|
||||||
|
print('Error extracting metadata:', str(e))
|
||||||
|
meta = {}
|
||||||
|
|
||||||
|
return {
|
||||||
|
'markdown': markdown,
|
||||||
|
'cleaned_html': cleaned_html,
|
||||||
|
'success': True,
|
||||||
|
'media': media,
|
||||||
|
'links': links,
|
||||||
|
'metadata': meta
|
||||||
|
}
|
||||||
|
|
||||||
|
def extract_metadata(html, soup = None):
|
||||||
metadata = {}
|
metadata = {}
|
||||||
|
|
||||||
if not html:
|
if not html:
|
||||||
return metadata
|
return metadata
|
||||||
|
|
||||||
# Parse HTML content with BeautifulSoup
|
# Parse HTML content with BeautifulSoup
|
||||||
soup = BeautifulSoup(html, 'html.parser')
|
if not soup:
|
||||||
|
soup = BeautifulSoup(html, 'html.parser')
|
||||||
|
|
||||||
# Title
|
# Title
|
||||||
title_tag = soup.find('title')
|
title_tag = soup.find('title')
|
||||||
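Putting the pieces together, a minimal, hypothetical call to the new content extractor (import path assumed) looks like the sketch below; the returned dictionary carries the markdown, cleaned_html, success, media, links and metadata fields built up above.

```python
from crawl4ai.utils import get_content_of_website_optimized  # assumed import path

# Tiny inline page so the sketch is self-contained; real HTML would come from the crawler strategy.
raw_html = "<html><body><article><h1>Example</h1><p>" + "word " * 20 + "</p></article></body></html>"

result = get_content_of_website_optimized("https://example.com", raw_html, word_count_threshold=5)
if result and result["success"]:
    print(result["markdown"])
    print(result["links"], result["metadata"])
```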
@@ -460,12 +716,16 @@ def extract_xml_data(tags, string):
     return data

 # Function to perform the completion with exponential backoff
-def perform_completion_with_backoff(provider, prompt_with_variables, api_token):
+def perform_completion_with_backoff(provider, prompt_with_variables, api_token, json_response = False):
     from litellm import completion
     from litellm.exceptions import RateLimitError
     max_attempts = 3
     base_delay = 2 # Base delay in seconds, you can adjust this based on your needs

+    extra_args = {}
+    if json_response:
+        extra_args["response_format"] = { "type": "json_object" }
+
     for attempt in range(max_attempts):
         try:
             response =completion(
@@ -474,7 +734,8 @@ def perform_completion_with_backoff(provider, prompt_with_variables, api_token):
                     {"role": "user", "content": prompt_with_variables}
                 ],
                 temperature=0.01,
-                api_key=api_token
+                api_key=api_token,
+                **extra_args
             )
             return response # Return the successful response
         except RateLimitError as e:
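The new json_response flag simply forwards OpenAI-style JSON mode to litellm via response_format. A minimal, hypothetical call built on the signature above (the provider and prompt here are placeholders) would be:

```python
import os

# Hypothetical call; perform_completion_with_backoff lives in the same module being patched.
prompt = "Return a JSON object with keys 'title' and 'summary' for the following page content: ..."
response = perform_completion_with_backoff(
    provider="openai/gpt-4o",
    prompt_with_variables=prompt,
    api_token=os.getenv("OPENAI_API_KEY"),
    json_response=True,  # adds response_format={"type": "json_object"} to the completion call
)
print(response.choices[0].message.content)
```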
@@ -518,7 +779,6 @@ def extract_blocks(url, html, provider = DEFAULT_PROVIDER, api_token = None):
             for block in blocks:
                 block['error'] = False
     except Exception as e:
-        print("Error extracting blocks:", str(e))
         parsed, unparsed = split_and_parse_json_objects(response.choices[0].message.content)
         blocks = parsed
         # Append all unparsed segments as one error block whose content is the list of unparsed segments
@@ -564,7 +824,6 @@ def extract_blocks_batch(batch_data, provider = "groq/llama3-70b-8192", api_toke
|
|||||||
blocks = json.loads(blocks)
|
blocks = json.loads(blocks)
|
||||||
|
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
print("Error extracting blocks:", str(e))
|
|
||||||
blocks = [{
|
blocks = [{
|
||||||
"index": 0,
|
"index": 0,
|
||||||
"tags": ["error"],
|
"tags": ["error"],
|
||||||
@@ -575,7 +834,6 @@ def extract_blocks_batch(batch_data, provider = "groq/llama3-70b-8192", api_toke
|
|||||||
|
|
||||||
return sum(all_blocks, [])
|
return sum(all_blocks, [])
|
||||||
|
|
||||||
|
|
||||||
def merge_chunks_based_on_token_threshold(chunks, token_threshold):
|
def merge_chunks_based_on_token_threshold(chunks, token_threshold):
|
||||||
"""
|
"""
|
||||||
Merges small chunks into larger ones based on the total token threshold.
|
Merges small chunks into larger ones based on the total token threshold.
|
||||||
@@ -621,7 +879,6 @@ def process_sections(url: str, sections: list, provider: str, api_token: str) ->
|
|||||||
|
|
||||||
return extracted_content
|
return extracted_content
|
||||||
|
|
||||||
|
|
||||||
def wrap_text(draw, text, font, max_width):
|
def wrap_text(draw, text, font, max_width):
|
||||||
# Wrap the text to fit within the specified width
|
# Wrap the text to fit within the specified width
|
||||||
lines = []
|
lines = []
|
||||||
@@ -631,4 +888,10 @@ def wrap_text(draw, text, font, max_width):
|
|||||||
while words and draw.textbbox((0, 0), line + words[0], font=font)[2] <= max_width:
|
while words and draw.textbbox((0, 0), line + words[0], font=font)[2] <= max_width:
|
||||||
line += (words.pop(0) + ' ')
|
line += (words.pop(0) + ' ')
|
||||||
lines.append(line)
|
lines.append(line)
|
||||||
return '\n'.join(lines)
|
return '\n'.join(lines)
|
||||||
|
|
||||||
|
def format_html(html_string):
|
||||||
|
soup = BeautifulSoup(html_string, 'html.parser')
|
||||||
|
return soup.prettify()
|
||||||
|
|
||||||
|
|
||||||
|
|||||||
@@ -11,6 +11,8 @@ from .crawler_strategy import *
|
|||||||
from typing import List
|
from typing import List
|
||||||
from concurrent.futures import ThreadPoolExecutor
|
from concurrent.futures import ThreadPoolExecutor
|
||||||
from .config import *
|
from .config import *
|
||||||
|
import warnings
|
||||||
|
warnings.filterwarnings("ignore", message='Field "model_name" has conflict with protected namespace "model_".')
|
||||||
|
|
||||||
|
|
||||||
class WebCrawler:
|
class WebCrawler:
|
||||||
@@ -46,7 +48,8 @@ class WebCrawler:
|
|||||||
word_count_threshold=5,
|
word_count_threshold=5,
|
||||||
extraction_strategy= NoExtractionStrategy(),
|
extraction_strategy= NoExtractionStrategy(),
|
||||||
bypass_cache=False,
|
bypass_cache=False,
|
||||||
verbose = False
|
verbose = False,
|
||||||
|
# warmup=True
|
||||||
)
|
)
|
||||||
self.ready = True
|
self.ready = True
|
||||||
print("[LOG] 🌞 WebCrawler is ready to crawl")
|
print("[LOG] 🌞 WebCrawler is ready to crawl")
|
||||||
@@ -128,36 +131,57 @@ class WebCrawler:
             verbose=True,
             **kwargs,
         ) -> CrawlResult:
-        extraction_strategy = extraction_strategy or NoExtractionStrategy()
-        extraction_strategy.verbose = verbose
-        if not isinstance(extraction_strategy, ExtractionStrategy):
-            raise ValueError("Unsupported extraction strategy")
-        if not isinstance(chunking_strategy, ChunkingStrategy):
-            raise ValueError("Unsupported chunking strategy")
-
-        if word_count_threshold < MIN_WORD_THRESHOLD:
-            word_count_threshold = MIN_WORD_THRESHOLD
-
-        # Check cache first
-        cached = None
-        extracted_content = None
-        if not bypass_cache and not self.always_by_pass_cache:
-            cached = get_cached_url(url)
-
-        if cached:
-            html = cached[1]
-            extracted_content = cached[2]
-            if screenshot:
-                screenshot = cached[9]
-        else:
-            if user_agent:
-                self.crawler_strategy.update_user_agent(user_agent)
-            html = self.crawler_strategy.crawl(url)
-            if screenshot:
-                screenshot = self.crawler_strategy.take_screenshot()
-
-        return self.process_html(url, html, extracted_content, word_count_threshold, extraction_strategy, chunking_strategy, css_selector, screenshot, verbose, bool(cached), **kwargs)
+        try:
+            extraction_strategy = extraction_strategy or NoExtractionStrategy()
+            extraction_strategy.verbose = verbose
+            if not isinstance(extraction_strategy, ExtractionStrategy):
+                raise ValueError("Unsupported extraction strategy")
+            if not isinstance(chunking_strategy, ChunkingStrategy):
+                raise ValueError("Unsupported chunking strategy")
+
+            # if word_count_threshold < MIN_WORD_THRESHOLD:
+            #     word_count_threshold = MIN_WORD_THRESHOLD
+
+            word_count_threshold = max(word_count_threshold, 0)
+
+            # Check cache first
+            cached = None
+            screenshot_data = None
+            extracted_content = None
+            if not bypass_cache and not self.always_by_pass_cache:
+                cached = get_cached_url(url)
+
+            if kwargs.get("warmup", True) and not self.ready:
+                return None
+
+            if cached:
+                html = sanitize_input_encode(cached[1])
+                extracted_content = sanitize_input_encode(cached[4])
+                if screenshot:
+                    screenshot_data = cached[9]
+                    if not screenshot_data:
+                        cached = None
+
+            if not cached or not html:
+                if user_agent:
+                    self.crawler_strategy.update_user_agent(user_agent)
+                t1 = time.time()
+                html = sanitize_input_encode(self.crawler_strategy.crawl(url, **kwargs))
+                t2 = time.time()
+                if verbose:
+                    print(f"[LOG] 🚀 Crawling done for {url}, success: {bool(html)}, time taken: {t2 - t1} seconds")
+                if screenshot:
+                    screenshot_data = self.crawler_strategy.take_screenshot()
+
+            crawl_result = self.process_html(url, html, extracted_content, word_count_threshold, extraction_strategy, chunking_strategy, css_selector, screenshot_data, verbose, bool(cached), **kwargs)
+            crawl_result.success = bool(html)
+            return crawl_result
+        except Exception as e:
+            if not hasattr(e, "msg"):
+                e.msg = str(e)
+            print(f"[ERROR] 🚫 Failed to crawl {url}, error: {e.msg}")
+            return CrawlResult(url=url, html="", success=False, error_message=e.msg)

     def process_html(
         self,
|
|||||||
t = time.time()
|
t = time.time()
|
||||||
# Extract content from HTML
|
# Extract content from HTML
|
||||||
try:
|
try:
|
||||||
result = get_content_of_website(url, html, word_count_threshold, css_selector=css_selector, only_text=kwargs.get("only_text", False))
|
# t1 = time.time()
|
||||||
metadata = extract_metadata(html)
|
# result = get_content_of_website(url, html, word_count_threshold, css_selector=css_selector, only_text=kwargs.get("only_text", False))
|
||||||
|
# print(f"[LOG] 🚀 Crawling done for {url}, success: True, time taken: {time.time() - t1} seconds")
|
||||||
|
t1 = time.time()
|
||||||
|
result = get_content_of_website_optimized(url, html, word_count_threshold, css_selector=css_selector, only_text=kwargs.get("only_text", False))
|
||||||
|
if verbose:
|
||||||
|
print(f"[LOG] 🚀 Content extracted for {url}, success: True, time taken: {time.time() - t1} seconds")
|
||||||
|
|
||||||
if result is None:
|
if result is None:
|
||||||
raise ValueError(f"Failed to extract content from the website: {url}")
|
raise ValueError(f"Failed to extract content from the website: {url}")
|
||||||
except InvalidCSSSelectorError as e:
|
except InvalidCSSSelectorError as e:
|
||||||
raise ValueError(str(e))
|
raise ValueError(str(e))
|
||||||
|
|
||||||
cleaned_html = result.get("cleaned_html", "")
|
cleaned_html = sanitize_input_encode(result.get("cleaned_html", ""))
|
||||||
markdown = result.get("markdown", "")
|
markdown = sanitize_input_encode(result.get("markdown", ""))
|
||||||
media = result.get("media", [])
|
media = result.get("media", [])
|
||||||
links = result.get("links", [])
|
links = result.get("links", [])
|
||||||
|
metadata = result.get("metadata", {})
|
||||||
if verbose:
|
|
||||||
print(f"[LOG] 🚀 Crawling done for {url}, success: True, time taken: {time.time() - t} seconds")
|
|
||||||
|
|
||||||
if extracted_content is None:
|
if extracted_content is None:
|
||||||
if verbose:
|
if verbose:
|
||||||
@@ -197,7 +225,7 @@ class WebCrawler:
|
|||||||
|
|
||||||
sections = chunking_strategy.chunk(markdown)
|
sections = chunking_strategy.chunk(markdown)
|
||||||
extracted_content = extraction_strategy.run(url, sections)
|
extracted_content = extraction_strategy.run(url, sections)
|
||||||
extracted_content = json.dumps(extracted_content)
|
extracted_content = json.dumps(extracted_content, indent=4, default=str)
|
||||||
|
|
||||||
if verbose:
|
if verbose:
|
||||||
print(f"[LOG] 🚀 Extraction done for {url}, time taken: {time.time() - t} seconds.")
|
print(f"[LOG] 🚀 Extraction done for {url}, time taken: {time.time() - t} seconds.")
|
||||||
@@ -217,11 +245,11 @@ class WebCrawler:
|
|||||||
json.dumps(metadata),
|
json.dumps(metadata),
|
||||||
screenshot=screenshot,
|
screenshot=screenshot,
|
||||||
)
|
)
|
||||||
|
|
||||||
return CrawlResult(
|
return CrawlResult(
|
||||||
url=url,
|
url=url,
|
||||||
html=html,
|
html=html,
|
||||||
cleaned_html=cleaned_html,
|
cleaned_html=format_html(cleaned_html),
|
||||||
markdown=markdown,
|
markdown=markdown,
|
||||||
media=media,
|
media=media,
|
||||||
links=links,
|
links=links,
|
||||||
|
|||||||
@@ -21,7 +21,8 @@ result = crawler.run(
|
|||||||
url=url,
|
url=url,
|
||||||
word_count_threshold=1,
|
word_count_threshold=1,
|
||||||
extraction_strategy= LLMExtractionStrategy(
|
extraction_strategy= LLMExtractionStrategy(
|
||||||
provider= "openai/gpt-4o", api_token = os.getenv('OPENAI_API_KEY'),
|
# provider= "openai/gpt-4o", api_token = os.getenv('OPENAI_API_KEY'),
|
||||||
|
provider= "groq/llama-3.1-70b-versatile", api_token = os.getenv('GROQ_API_KEY'),
|
||||||
schema=OpenAIModelFee.model_json_schema(),
|
schema=OpenAIModelFee.model_json_schema(),
|
||||||
extraction_type="schema",
|
extraction_type="schema",
|
||||||
instruction="From the crawled content, extract all mentioned model names along with their "\
|
instruction="From the crawled content, extract all mentioned model names along with their "\
|
||||||
@@ -36,5 +37,5 @@ model_fees = json.loads(result.extracted_content)
|
|||||||
|
|
||||||
print(len(model_fees))
|
print(len(model_fees))
|
||||||
|
|
||||||
with open(".data/data.json", "w") as f:
|
with open(".data/data.json", "w", encoding="utf-8") as f:
|
||||||
f.write(result.extracted_content)
|
f.write(result.extracted_content)
|
||||||
@@ -249,15 +249,40 @@ def using_crawler_hooks(crawler):
|
|||||||
|
|
||||||
cprint("\n🔗 [bold cyan]Using Crawler Hooks: Let's see how we can customize the crawler using hooks![/bold cyan]", True)
|
cprint("\n🔗 [bold cyan]Using Crawler Hooks: Let's see how we can customize the crawler using hooks![/bold cyan]", True)
|
||||||
|
|
||||||
crawler.set_hook('on_driver_created', on_driver_created)
|
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
|
||||||
crawler.set_hook('before_get_url', before_get_url)
|
crawler_strategy.set_hook('on_driver_created', on_driver_created)
|
||||||
crawler.set_hook('after_get_url', after_get_url)
|
crawler_strategy.set_hook('before_get_url', before_get_url)
|
||||||
crawler.set_hook('before_return_html', before_return_html)
|
crawler_strategy.set_hook('after_get_url', after_get_url)
|
||||||
|
crawler_strategy.set_hook('before_return_html', before_return_html)
|
||||||
|
|
||||||
|
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
|
||||||
|
crawler.warmup()
|
||||||
result = crawler.run(url="https://example.com")
|
result = crawler.run(url="https://example.com")
|
||||||
|
|
||||||
cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
|
cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
|
||||||
print_result(result= result)
|
print_result(result= result)
|
||||||
|
|
||||||
|
def using_crawler_hooks_delay_example(crawler):
|
||||||
|
def delay(driver):
|
||||||
|
print("Delaying for 5 seconds...")
|
||||||
|
time.sleep(5)
|
||||||
|
print("Resuming...")
|
||||||
|
|
||||||
|
def create_crawler():
|
||||||
|
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
|
||||||
|
crawler_strategy.set_hook('after_get_url', delay)
|
||||||
|
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
|
||||||
|
crawler.warmup()
|
||||||
|
return crawler
|
||||||
|
|
||||||
|
cprint("\n🔗 [bold cyan]Using Crawler Hooks: Let's add a delay after fetching the url to make sure entire page is fetched.[/bold cyan]")
|
||||||
|
crawler = create_crawler()
|
||||||
|
result = crawler.run(url="https://google.com", bypass_cache=True)
|
||||||
|
|
||||||
|
cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
|
||||||
|
print_result(result)
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
def main():
|
def main():
|
||||||
cprint("🌟 [bold green]Welcome to the Crawl4ai Quickstart Guide! Let's dive into some web crawling fun! 🌐[/bold green]")
|
cprint("🌟 [bold green]Welcome to the Crawl4ai Quickstart Guide! Let's dive into some web crawling fun! 🌐[/bold green]")
|
||||||
|
|||||||
@@ -42,5 +42,5 @@ page_summary = json.loads(result.extracted_content)
|
|||||||
|
|
||||||
print(page_summary)
|
print(page_summary)
|
||||||
|
|
||||||
with open(".data/page_summary.json", "w") as f:
|
with open(".data/page_summary.json", "w", encoding="utf-8") as f:
|
||||||
f.write(result.extracted_content)
|
f.write(result.extracted_content)
|
||||||
|
|||||||
@@ -15,7 +15,6 @@
|
|||||||
--mono-font-stack: Menlo, Monaco, Lucida Console, Liberation Mono, DejaVu Sans Mono, Bitstream Vera Sans Mono,
|
--mono-font-stack: Menlo, Monaco, Lucida Console, Liberation Mono, DejaVu Sans Mono, Bitstream Vera Sans Mono,
|
||||||
Courier New, monospace, serif;
|
Courier New, monospace, serif;
|
||||||
|
|
||||||
|
|
||||||
--background-color: #151515; /* Dark background */
|
--background-color: #151515; /* Dark background */
|
||||||
--font-color: #eaeaea; /* Light font color for contrast */
|
--font-color: #eaeaea; /* Light font color for contrast */
|
||||||
--invert-font-color: #151515; /* Dark color for inverted elements */
|
--invert-font-color: #151515; /* Dark color for inverted elements */
|
||||||
@@ -30,12 +29,16 @@
|
|||||||
--global-font-color: #eaeaea; /* Light font color for global elements */
|
--global-font-color: #eaeaea; /* Light font color for global elements */
|
||||||
|
|
||||||
--background-color: #222225;
|
--background-color: #222225;
|
||||||
|
|
||||||
|
--background-color: #070708;
|
||||||
--page-width: 70em;
|
--page-width: 70em;
|
||||||
--font-color: #e8e9ed;
|
--font-color: #e8e9ed;
|
||||||
--invert-font-color: #222225;
|
--invert-font-color: #222225;
|
||||||
--secondary-color: #a3abba;
|
--secondary-color: #a3abba;
|
||||||
|
--secondary-color: #d5cec0;
|
||||||
--tertiary-color: #a3abba;
|
--tertiary-color: #a3abba;
|
||||||
--primary-color: #09b5a5; /* Updated to the brand color */
|
--primary-color: #09b5a5; /* Updated to the brand color */
|
||||||
|
--primary-color: #50ffff; /* Updated to the brand color */
|
||||||
--error-color: #ff3c74;
|
--error-color: #ff3c74;
|
||||||
--progress-bar-background: #3f3f44;
|
--progress-bar-background: #3f3f44;
|
||||||
--progress-bar-fill: #09b5a5; /* Updated to the brand color */
|
--progress-bar-fill: #09b5a5; /* Updated to the brand color */
|
||||||
@@ -73,11 +76,78 @@ pre, code {
|
|||||||
border-bottom: 1px dashed var(--secondary-color);
|
border-bottom: 1px dashed var(--secondary-color);
|
||||||
} */
|
} */
|
||||||
|
|
||||||
.terminal-mkdocs-main-content{
|
.terminal-mkdocs-main-content {
|
||||||
line-height: var(--global-line-height);
|
line-height: var(--global-line-height);
|
||||||
}
|
}
|
||||||
|
|
||||||
strong, .highlight {
|
strong,
|
||||||
|
.highlight {
|
||||||
/* background: url(//s2.svgbox.net/pen-brushes.svg?ic=brush-1&color=50ffff); */
|
/* background: url(//s2.svgbox.net/pen-brushes.svg?ic=brush-1&color=50ffff); */
|
||||||
background-color: #50ffff33;
|
background-color: #50ffff33;
|
||||||
|
}
|
||||||
|
|
||||||
|
.terminal-card > header {
|
||||||
|
color: var(--font-color);
|
||||||
|
text-align: center;
|
||||||
|
background-color: var(--progress-bar-background);
|
||||||
|
padding: 0.3em 0.5em;
|
||||||
|
}
|
||||||
|
.btn.btn-sm {
|
||||||
|
color: var(--font-color);
|
||||||
|
padding: 0.2em 0.5em;
|
||||||
|
font-size: 0.8em;
|
||||||
|
}
|
||||||
|
|
||||||
|
.loading-message {
|
||||||
|
display: none;
|
||||||
|
margin-top: 20px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.response-section {
|
||||||
|
display: none;
|
||||||
|
padding-top: 20px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.tabs {
|
||||||
|
display: flex;
|
||||||
|
flex-direction: column;
|
||||||
|
}
|
||||||
|
.tab-list {
|
||||||
|
display: flex;
|
||||||
|
padding: 0;
|
||||||
|
margin: 0;
|
||||||
|
list-style-type: none;
|
||||||
|
border-bottom: 1px solid var(--font-color);
|
||||||
|
}
|
||||||
|
.tab-item {
|
||||||
|
cursor: pointer;
|
||||||
|
padding: 10px;
|
||||||
|
border: 1px solid var(--font-color);
|
||||||
|
margin-right: -1px;
|
||||||
|
border-bottom: none;
|
||||||
|
}
|
||||||
|
.tab-item:hover,
|
||||||
|
.tab-item:focus,
|
||||||
|
.tab-item:active {
|
||||||
|
background-color: var(--progress-bar-background);
|
||||||
|
}
|
||||||
|
.tab-content {
|
||||||
|
display: none;
|
||||||
|
border: 1px solid var(--font-color);
|
||||||
|
border-top: none;
|
||||||
|
}
|
||||||
|
.tab-content:first-of-type {
|
||||||
|
display: block;
|
||||||
|
}
|
||||||
|
|
||||||
|
.tab-content header {
|
||||||
|
padding: 0.5em;
|
||||||
|
display: flex;
|
||||||
|
justify-content: end;
|
||||||
|
align-items: center;
|
||||||
|
background-color: var(--progress-bar-background);
|
||||||
|
}
|
||||||
|
.tab-content pre {
|
||||||
|
margin: 0;
|
||||||
|
max-height: 300px; overflow: auto; border:none;
|
||||||
}
|
}
|
||||||
@@ -1,5 +1,89 @@
|
|||||||
# Changelog
|
# Changelog
|
||||||
|
|
||||||
|
## [v0.2.77] - 2024-08-04
|
||||||
|
|
||||||
|
Significant improvements in text processing and performance:
|
||||||
|
|
||||||
|
- 🚀 **Dependency reduction**: Removed dependency on spaCy model for text chunk labeling in cosine extraction strategy.
|
||||||
|
- 🤖 **Transformer upgrade**: Implemented text sequence classification using a transformer model for labeling text chunks.
|
||||||
|
- ⚡ **Performance enhancement**: Improved model loading speed due to removal of spaCy dependency.
|
||||||
|
- 🔧 **Future-proofing**: Laid groundwork for potential complete removal of spaCy dependency in future versions.
|
||||||
|
|
||||||
|
These changes address issue #68 and provide a foundation for faster, more efficient text processing in Crawl4AI.
|
||||||
|
|
||||||
|
## [v0.2.76] - 2024-08-02
|
||||||
|
|
||||||
|
Major improvements in functionality, performance, and cross-platform compatibility! 🚀
|
||||||
|
|
||||||
|
- 🐳 **Docker enhancements**: Significantly improved Dockerfile for easy installation on Linux, Mac, and Windows.
|
||||||
|
- 🌐 **Official Docker Hub image**: Launched our first official image on Docker Hub for streamlined deployment.
|
||||||
|
- 🔧 **Selenium upgrade**: Removed dependency on ChromeDriver, now using Selenium's built-in capabilities for better compatibility.
|
||||||
|
- 🖼️ **Image description**: Implemented ability to generate textual descriptions for extracted images from web pages.
|
||||||
|
- ⚡ **Performance boost**: Various improvements to enhance overall speed and performance.
|
||||||
|
|
||||||
|
A big shoutout to our amazing community contributors:
|
||||||
|
- [@aravindkarnam](https://github.com/aravindkarnam) for developing the textual description extraction feature.
|
||||||
|
- [@FractalMind](https://github.com/FractalMind) for creating the first official Docker Hub image and fixing Dockerfile errors.
|
||||||
|
- [@ketonkss4](https://github.com/ketonkss4) for identifying Selenium's new capabilities, helping us reduce dependencies.
|
||||||
|
|
||||||
|
Your contributions are driving Crawl4AI forward! 🙌
|
||||||
|
|
||||||
|
## [v0.2.75] - 2024-07-19
|
||||||
|
|
||||||
|
Minor improvements for a more maintainable codebase:
|
||||||
|
|
||||||
|
- 🔄 Fixed typos in `chunking_strategy.py` and `crawler_strategy.py` to improve code readability
|
||||||
|
- 🔄 Removed `.test_pads/` directory from `.gitignore` to keep our repository clean and organized
|
||||||
|
|
||||||
|
These changes may seem small, but they contribute to a more stable and sustainable codebase. By fixing typos and updating our `.gitignore` settings, we're ensuring that our code is easier to maintain and scale in the long run.
|
||||||
|
|
||||||
|
|
||||||
|
## v0.2.74 - 2024-07-08
|
||||||
|
A slew of exciting updates to improve the crawler's stability and robustness! 🎉
|
||||||
|
|
||||||
|
- 💻 **UTF encoding fix**: Resolved the Windows \"charmap\" error by adding UTF encoding.
|
||||||
|
- 🛡️ **Error handling**: Implemented MaxRetryError exception handling in LocalSeleniumCrawlerStrategy.
|
||||||
|
- 🧹 **Input sanitization**: Improved input sanitization and handled encoding issues in LLMExtractionStrategy.
|
||||||
|
- 🚮 **Database cleanup**: Removed existing database file and initialized a new one.
|
||||||
|
|
||||||
|
## [v0.2.73] - 2024-07-03
|
||||||
|
|
||||||
|
💡 In this release, we've bumped the version to v0.2.73 and refreshed our documentation to ensure you have the best experience with our project.
|
||||||
|
|
||||||
|
* Support websites that require "with-head" mode to be crawled with a visible browser head.
* Fix installation issues in setup.py and the Dockerfile.
* Resolve multiple issues.
|
||||||
|
|
||||||
|
## [v0.2.72] - 2024-06-30
|
||||||
|
|
||||||
|
This release brings exciting updates and improvements to our project! 🎉
|
||||||
|
|
||||||
|
* 📚 **Documentation Updates**: Our documentation has been revamped to reflect the latest changes and additions.
|
||||||
|
* 🚀 **New Modes in setup.py**: We've added support for three new modes in setup.py: default, torch, and transformers. This enhances the project's flexibility and usability.
|
||||||
|
* 🐳 **Docker File Updates**: The Docker file has been updated to ensure seamless compatibility with the new modes and improvements.
|
||||||
|
* 🕷️ **Temporary Solution for Headless Crawling**: We've implemented a temporary solution to overcome issues with crawling websites in headless mode.
|
||||||
|
|
||||||
|
These changes aim to improve the overall user experience, provide more flexibility, and enhance the project's performance. We're thrilled to share these updates with you and look forward to continuing to evolve and improve our project!
|
||||||
|
|
||||||
|
## [0.2.71] - 2024-06-26
|
||||||
|
|
||||||
|
**Improved Error Handling and Performance** 🚧
|
||||||
|
|
||||||
|
* 🚫 Refactored `crawler_strategy.py` to handle exceptions and provide better error messages, making it more robust and reliable.
|
||||||
|
* 💻 Optimized the `get_content_of_website_optimized` function in `utils.py` for improved performance, reducing potential bottlenecks.
|
||||||
|
* 💻 Updated `utils.py` with the latest changes, ensuring consistency and accuracy.
|
||||||
|
* 🚫 Migrated to `ChromeDriverManager` to resolve Chrome driver download issues, providing a smoother user experience.
|
||||||
|
|
||||||
|
These changes focus on refining the existing codebase, resulting in a more stable, efficient, and user-friendly experience. With these improvements, you can expect fewer errors and better performance in the crawler strategy and utility functions.
|
||||||
|
|
||||||
|
## [0.2.71] - 2024-06-25
|
||||||
|
### Fixed
|
||||||
|
- Doubled the speed of the extraction function.
|
||||||
|
|
||||||
|
## [0.2.6] - 2024-06-22
|
||||||
|
### Fixed
|
||||||
|
- Fix issue #19: Update Dockerfile to ensure compatibility across multiple platforms.
|
||||||
|
|
||||||
## [0.2.5] - 2024-06-18
|
## [0.2.5] - 2024-06-18
|
||||||
### Added
|
### Added
|
||||||
- Added five important hooks to the crawler:
|
- Added five important hooks to the crawler:
|
||||||
|
|||||||
231
docs/md/demo.md
Normal file
231
docs/md/demo.md
Normal file
@@ -0,0 +1,231 @@
|
|||||||
|
# Interactive Demo for the Crawler
|
||||||
|
<div id="demo">
|
||||||
|
<form id="crawlForm" class="terminal-form">
|
||||||
|
<fieldset>
|
||||||
|
<legend>Enter URL and Options</legend>
|
||||||
|
<div class="form-group">
|
||||||
|
<label for="url">Enter URL:</label>
|
||||||
|
<input type="text" id="url" name="url" required>
|
||||||
|
</div>
|
||||||
|
<div class="form-group">
|
||||||
|
<label for="screenshot">Get Screenshot:</label>
|
||||||
|
<input type="checkbox" id="screenshot" name="screenshot">
|
||||||
|
</div>
|
||||||
|
<div class="form-group">
|
||||||
|
<button class="btn btn-default" type="submit">Submit</button>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
</fieldset>
|
||||||
|
</form>
|
||||||
|
|
||||||
|
<div id="loading" class="loading-message">
|
||||||
|
<div class="terminal-alert terminal-alert-primary">Loading... Please wait.</div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<section id="response" class="response-section">
|
||||||
|
<h2>Response</h2>
|
||||||
|
<div class="tabs">
|
||||||
|
<ul class="tab-list">
|
||||||
|
<li class="tab-item" onclick="showTab('markdown')">Markdown</li>
|
||||||
|
<li class="tab-item" onclick="showTab('cleanedHtml')">Cleaned HTML</li>
|
||||||
|
<li class="tab-item" onclick="showTab('media')">Media</li>
|
||||||
|
<li class="tab-item" onclick="showTab('extractedContent')">Extracted Content</li>
|
||||||
|
<li class="tab-item" onclick="showTab('screenshot')">Screenshot</li>
|
||||||
|
<li class="tab-item" onclick="showTab('pythonCode')">Python Code</li>
|
||||||
|
</ul>
|
||||||
|
<div class="tab-content" id="tab-markdown">
|
||||||
|
<header>
|
||||||
|
<div>
|
||||||
|
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('markdownContent')">Copy</button>
|
||||||
|
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('markdownContent', 'markdown.md')">Download</button>
|
||||||
|
</div>
|
||||||
|
</header>
|
||||||
|
<pre><code id="markdownContent" class="language-markdown hljs"></code></pre>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div class="tab-content" id="tab-cleanedHtml" style="display: none;">
|
||||||
|
<header >
|
||||||
|
<div>
|
||||||
|
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('cleanedHtmlContent')">Copy</button>
|
||||||
|
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('cleanedHtmlContent', 'cleaned.html')">Download</button>
|
||||||
|
</div>
|
||||||
|
</header>
|
||||||
|
<pre><code id="cleanedHtmlContent" class="language-html hljs"></code></pre>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div class="tab-content" id="tab-media" style="display: none;">
|
||||||
|
<header >
|
||||||
|
<div>
|
||||||
|
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('mediaContent')">Copy</button>
|
||||||
|
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('mediaContent', 'media.json')">Download</button>
|
||||||
|
</div>
|
||||||
|
</header>
|
||||||
|
<pre><code id="mediaContent" class="language-json hljs"></code></pre>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div class="tab-content" id="tab-extractedContent" style="display: none;">
|
||||||
|
<header >
|
||||||
|
<div>
|
||||||
|
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('extractedContentContent')">Copy</button>
|
||||||
|
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('extractedContentContent', 'extracted_content.json')">Download</button>
|
||||||
|
</div>
|
||||||
|
</header>
|
||||||
|
<pre><code id="extractedContentContent" class="language-json hljs"></code></pre>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div class="tab-content" id="tab-screenshot" style="display: none;">
|
||||||
|
<header >
|
||||||
|
<div>
|
||||||
|
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadImage('screenshotContent', 'screenshot.png')">Download</button>
|
||||||
|
</div>
|
||||||
|
</header>
|
||||||
|
<pre><img id="screenshotContent" /></pre>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<div class="tab-content" id="tab-pythonCode" style="display: none;">
|
||||||
|
<header >
|
||||||
|
<div>
|
||||||
|
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('pythonCode')">Copy</button>
|
||||||
|
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('pythonCode', 'example.py')">Download</button>
|
||||||
|
</div>
|
||||||
|
</header>
|
||||||
|
<pre><code id="pythonCode" class="language-python hljs"></code></pre>
|
||||||
|
</div>
|
||||||
|
</div>
|
||||||
|
</section>
|
||||||
|
|
||||||
|
<div id="error" class="error-message" style="display: none; margin-top:1em;">
|
||||||
|
<div class="terminal-alert terminal-alert-error"></div>
|
||||||
|
</div>
|
||||||
|
|
||||||
|
<script>
|
||||||
|
function showTab(tabId) {
|
||||||
|
const tabs = document.querySelectorAll('.tab-content');
|
||||||
|
tabs.forEach(tab => tab.style.display = 'none');
|
||||||
|
document.getElementById(`tab-${tabId}`).style.display = 'block';
|
||||||
|
}
|
||||||
|
|
||||||
|
function redo(codeBlock, codeText){
|
||||||
|
codeBlock.classList.remove('hljs');
|
||||||
|
codeBlock.removeAttribute('data-highlighted');
|
||||||
|
|
||||||
|
// Set new code and re-highlight
|
||||||
|
codeBlock.textContent = codeText;
|
||||||
|
hljs.highlightBlock(codeBlock);
|
||||||
|
}
|
||||||
|
|
||||||
|
function copyToClipboard(elementId) {
|
||||||
|
const content = document.getElementById(elementId).textContent;
|
||||||
|
navigator.clipboard.writeText(content).then(() => {
|
||||||
|
alert('Copied to clipboard');
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
function downloadContent(elementId, filename) {
|
||||||
|
const content = document.getElementById(elementId).textContent;
|
||||||
|
const blob = new Blob([content], { type: 'text/plain' });
|
||||||
|
const url = window.URL.createObjectURL(blob);
|
||||||
|
const a = document.createElement('a');
|
||||||
|
a.style.display = 'none';
|
||||||
|
a.href = url;
|
||||||
|
a.download = filename;
|
||||||
|
document.body.appendChild(a);
|
||||||
|
a.click();
|
||||||
|
window.URL.revokeObjectURL(url);
|
||||||
|
document.body.removeChild(a);
|
||||||
|
}
|
||||||
|
|
||||||
|
function downloadImage(elementId, filename) {
|
||||||
|
const content = document.getElementById(elementId).src;
|
||||||
|
const a = document.createElement('a');
|
||||||
|
a.style.display = 'none';
|
||||||
|
a.href = content;
|
||||||
|
a.download = filename;
|
||||||
|
document.body.appendChild(a);
|
||||||
|
a.click();
|
||||||
|
document.body.removeChild(a);
|
||||||
|
}
|
||||||
|
|
||||||
|
document.getElementById('crawlForm').addEventListener('submit', function(event) {
|
||||||
|
event.preventDefault();
|
||||||
|
document.getElementById('loading').style.display = 'block';
|
||||||
|
document.getElementById('response').style.display = 'none';
|
||||||
|
|
||||||
|
const url = document.getElementById('url').value;
|
||||||
|
const screenshot = document.getElementById('screenshot').checked;
|
||||||
|
const data = {
|
||||||
|
urls: [url],
|
||||||
|
bypass_cache: false,
|
||||||
|
word_count_threshold: 5,
|
||||||
|
screenshot: screenshot
|
||||||
|
};
|
||||||
|
|
||||||
|
fetch('/crawl', {
|
||||||
|
method: 'POST',
|
||||||
|
headers: {
|
||||||
|
'Content-Type': 'application/json'
|
||||||
|
},
|
||||||
|
body: JSON.stringify(data)
|
||||||
|
})
|
||||||
|
.then(response => {
|
||||||
|
if (!response.ok) {
|
||||||
|
if (response.status === 429) {
|
||||||
|
return response.json().then(err => {
|
||||||
|
throw Object.assign(new Error('Rate limit exceeded'), { status: 429, details: err });
|
||||||
|
});
|
||||||
|
}
|
||||||
|
throw new Error('Network response was not ok');
|
||||||
|
}
|
||||||
|
return response.json();
|
||||||
|
})
|
||||||
|
.then(data => {
|
||||||
|
data = data.results[0]; // Only one URL is requested
|
||||||
|
document.getElementById('loading').style.display = 'none';
|
||||||
|
document.getElementById('response').style.display = 'block';
|
||||||
|
redo(document.getElementById('markdownContent'), data.markdown);
|
||||||
|
redo(document.getElementById('cleanedHtmlContent'), data.cleaned_html);
|
||||||
|
redo(document.getElementById('mediaContent'), JSON.stringify(data.media, null, 2));
|
||||||
|
redo(document.getElementById('extractedContentContent'), data.extracted_content);
|
||||||
|
if (screenshot) {
|
||||||
|
document.getElementById('screenshotContent').src = `data:image/png;base64,${data.screenshot}`;
|
||||||
|
}
|
||||||
|
const pythonCode = `
|
||||||
|
from crawl4ai.web_crawler import WebCrawler
|
||||||
|
|
||||||
|
crawler = WebCrawler()
|
||||||
|
crawler.warmup()
|
||||||
|
|
||||||
|
result = crawler.run(
|
||||||
|
url='${url}',
|
||||||
|
screenshot=${screenshot}
|
||||||
|
)
|
||||||
|
print(result)
|
||||||
|
`;
|
||||||
|
redo(document.getElementById('pythonCode'), pythonCode);
|
||||||
|
document.getElementById('error').style.display = 'none';
|
||||||
|
})
|
||||||
|
.catch(error => {
|
||||||
|
document.getElementById('loading').style.display = 'none';
|
||||||
|
document.getElementById('error').style.display = 'block';
|
||||||
|
let errorMessage = 'An unexpected error occurred. Please try again later.';
|
||||||
|
|
||||||
|
if (error.status === 429) {
|
||||||
|
const details = error.details;
|
||||||
|
if (details.retry_after) {
|
||||||
|
errorMessage = `Rate limit exceeded. Please wait ${parseFloat(details.retry_after).toFixed(1)} seconds before trying again.`;
|
||||||
|
} else if (details.reset_at) {
|
||||||
|
const resetTime = new Date(details.reset_at);
|
||||||
|
const waitTime = Math.ceil((resetTime - new Date()) / 1000);
|
||||||
|
errorMessage = `Rate limit exceeded. Please try again after ${waitTime} seconds.`;
|
||||||
|
} else {
|
||||||
|
errorMessage = `Rate limit exceeded. Please try again later.`;
|
||||||
|
}
|
||||||
|
} else if (error.message) {
|
||||||
|
errorMessage = error.message;
|
||||||
|
}
|
||||||
|
|
||||||
|
document.querySelector('#error .terminal-alert').textContent = errorMessage;
|
||||||
|
});
|
||||||
|
});
|
||||||
|
</script>
|
||||||
|
</div>
|
||||||
@@ -14,6 +14,9 @@ Let's see how we can customize the crawler using hooks! In this example, we'll:
|
|||||||
### Hook Definitions
|
### Hook Definitions
|
||||||
|
|
||||||
```python
|
```python
|
||||||
|
from crawl4ai.web_crawler import WebCrawler
|
||||||
|
from crawl4ai.crawler_strategy import *
|
||||||
|
|
||||||
def on_driver_created(driver):
|
def on_driver_created(driver):
|
||||||
print("[HOOK] on_driver_created")
|
print("[HOOK] on_driver_created")
|
||||||
# Example customization: maximize the window
|
# Example customization: maximize the window
|
||||||
@@ -66,12 +69,13 @@ def before_return_html(driver, html):
|
|||||||
|
|
||||||
```python
|
```python
|
||||||
print("\n🔗 [bold cyan]Using Crawler Hooks: Let's see how we can customize the crawler using hooks![/bold cyan]", True)
|
print("\n🔗 [bold cyan]Using Crawler Hooks: Let's see how we can customize the crawler using hooks![/bold cyan]", True)
|
||||||
crawler = WebCrawler(verbose=True)
|
crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
|
||||||
|
crawler_strategy.set_hook('on_driver_created', on_driver_created)
|
||||||
|
crawler_strategy.set_hook('before_get_url', before_get_url)
|
||||||
|
crawler_strategy.set_hook('after_get_url', after_get_url)
|
||||||
|
crawler_strategy.set_hook('before_return_html', before_return_html)
|
||||||
|
crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
|
||||||
crawler.warmup()
|
crawler.warmup()
|
||||||
crawler.set_hook('on_driver_created', on_driver_created)
|
|
||||||
crawler.set_hook('before_get_url', before_get_url)
|
|
||||||
crawler.set_hook('after_get_url', after_get_url)
|
|
||||||
crawler.set_hook('before_return_html', before_return_html)
|
|
||||||
|
|
||||||
result = crawler.run(url="https://example.com")
|
result = crawler.run(url="https://example.com")
|
||||||
|
|
||||||
|
|||||||
@@ -45,7 +45,7 @@ model_fees = json.loads(result.extracted_content)
|
|||||||
|
|
||||||
print(len(model_fees))
|
print(len(model_fees))
|
||||||
|
|
||||||
with open(".data/data.json", "w") as f:
|
with open(".data/data.json", "w", encoding="utf-8") as f:
|
||||||
f.write(result.extracted_content)
|
f.write(result.extracted_content)
|
||||||
```
|
```
|
||||||
|
|
||||||
@@ -71,7 +71,7 @@ model_fees = json.loads(result.extracted_content)
|
|||||||
|
|
||||||
print(len(model_fees))
|
print(len(model_fees))
|
||||||
|
|
||||||
with open(".data/data.json", "w") as f:
|
with open(".data/data.json", "w", encoding="utf-8") as f:
|
||||||
f.write(result.extracted_content)
|
f.write(result.extracted_content)
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|||||||
@@ -91,7 +91,7 @@ This example demonstrates how to use `Crawl4AI` to extract a summary from a web
|
|||||||
Save the extracted data to a file for further use.
|
Save the extracted data to a file for further use.
|
||||||
|
|
||||||
```python
|
```python
|
||||||
with open(".data/page_summary.json", "w") as f:
|
with open(".data/page_summary.json", "w", encoding="utf-8") as f:
|
||||||
f.write(result.extracted_content)
|
f.write(result.extracted_content)
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|||||||
@@ -1,7 +1,12 @@
|
|||||||
# Crawl4AI Documentation
|
# Crawl4AI v0.2.77
|
||||||
|
|
||||||
Welcome to the official documentation for Crawl4AI! 🕷️🤖 Crawl4AI is an open-source Python library designed to simplify web crawling and extract useful information from web pages. This documentation will guide you through the features, usage, and customization of Crawl4AI.
|
Welcome to the official documentation for Crawl4AI! 🕷️🤖 Crawl4AI is an open-source Python library designed to simplify web crawling and extract useful information from web pages. This documentation will guide you through the features, usage, and customization of Crawl4AI.
|
||||||
|
|
||||||
|
|
||||||
|
## Try the [Demo](demo.md)
|
||||||
|
|
||||||
|
Just try it now: crawl a few different pages to see how it works. You can enter links, inspect the structure of the output, and view sample Python code showing how to run it. The old demo is still available at [/old_demo](/old) if you want more detail.
|
||||||
|
|
||||||
## Introduction
|
## Introduction
|
||||||
|
|
||||||
Crawl4AI has one clear task: to make crawling and data extraction from web pages easy and efficient, especially for large language models (LLMs) and AI applications. Whether you are using it as a REST API or a Python library, Crawl4AI offers a robust and flexible solution.
|
Crawl4AI has one clear task: to make crawling and data extraction from web pages easy and efficient, especially for large language models (LLMs) and AI applications. Whether you are using it as a REST API or a Python library, Crawl4AI offers a robust and flexible solution.
|
||||||
|
|||||||
@@ -1,46 +1,193 @@
 # Installation 💻

 There are three ways to use Crawl4AI:
-1. As a library (Recommended)
-2. As a local server (Docker) or using the REST API
-3. As a Google Colab notebook. [Colab](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)
+1. As a library (Recommended).
+2. As a local server (Docker) or using the REST API.
+3. As a local server (Docker) using the pre-built image from Docker Hub.

-## Library Installation
+## Option 1: Library Installation

-To install Crawl4AI as a library, follow these steps:
+You can try this Colab for a quick start: [Colab](https://colab.research.google.com/drive/1sJPAmeLj5PMrg2VgOwMJ2ubGIcK0cJeX#scrollTo=g1RrmI4W_rPk)

-1. Install the package from GitHub:
+Crawl4AI offers flexible installation options to suit various use cases. Choose the option that best fits your needs:

+- **Default Installation** (Basic functionality):
+```bash
+virtualenv venv
+source venv/bin/activate
+pip install "crawl4ai @ git+https://github.com/unclecode/crawl4ai.git"
 ```
+Use this for basic web crawling and scraping tasks.

+- **Installation with PyTorch** (For advanced text clustering):
+```bash
+virtualenv venv
+source venv/bin/activate
+pip install "crawl4ai[torch] @ git+https://github.com/unclecode/crawl4ai.git"
+```
+Choose this if you need the CosineSimilarity cluster strategy.

+- **Installation with Transformers** (For summarization and Hugging Face models):
+```bash
+virtualenv venv
+source venv/bin/activate
+pip install "crawl4ai[transformer] @ git+https://github.com/unclecode/crawl4ai.git"
+```
+Opt for this if you require text summarization or plan to use Hugging Face models.

+- **Full Installation** (All features):
+```bash
 virtualenv venv
 source venv/bin/activate
 pip install "crawl4ai[all] @ git+https://github.com/unclecode/crawl4ai.git"
 ```
+This installs all dependencies for full functionality.

-💡 Better to run the following CLI-command to load the required models. This is optional, but it will boost the performance and speed of the crawler. You need to do this only once.
-```
-crawl4ai-download-models
-```
-
-2. Alternatively, you can clone the repository and install the package locally:
-```
+- **Development Installation** (For contributors):
+```bash
 virtualenv venv
 source venv/bin/activate
 git clone https://github.com/unclecode/crawl4ai.git
 cd crawl4ai
-pip install -e .[all]
+pip install -e ".[all]"
+```
+Use this if you plan to modify the source code.

+💡 After installation, if you used the "torch", "transformer", or "all" option, it is recommended to run the following CLI command to load the required models. This is optional but will boost the performance and speed of the crawler; you only need to do it once, and it applies only to those installation options.
+```bash
+crawl4ai-download-models
 ```

-## Using Docker for Local Server
+## Option 2: Using Docker for Local Server

+Crawl4AI can be run as a local server using Docker. The Dockerfile supports different installation options to cater to various use cases. Here's how you can build and run the Docker image:

+### Default Installation

+The default installation includes the basic Crawl4AI package without additional dependencies or pre-downloaded models.

+```bash
+# For Mac users (M1/M2)
+docker build --platform linux/amd64 -t crawl4ai .

-3. Use Docker to run the local server:
-```
-# For Mac users
-# docker build --platform linux/amd64 -t crawl4ai .
 # For other users
-# docker build -t crawl4ai .
+docker build -t crawl4ai .

+# Run the container
 docker run -d -p 8000:80 crawl4ai
 ```

-## Using Google Colab
+### Full Installation (All Dependencies and Models)

+This option installs all dependencies and downloads the models.

+```bash
+# For Mac users (M1/M2)
+docker build --platform linux/amd64 --build-arg INSTALL_OPTION=all -t crawl4ai:all .

+# For other users
+docker build --build-arg INSTALL_OPTION=all -t crawl4ai:all .

+# Run the container
+docker run -d -p 8000:80 crawl4ai:all
+```

+### Torch Installation

+This option installs torch-related dependencies and downloads the models.

+```bash
+# For Mac users (M1/M2)
+docker build --platform linux/amd64 --build-arg INSTALL_OPTION=torch -t crawl4ai:torch .

+# For other users
+docker build --build-arg INSTALL_OPTION=torch -t crawl4ai:torch .

+# Run the container
+docker run -d -p 8000:80 crawl4ai:torch
+```

+### Transformer Installation

+This option installs transformer-related dependencies and downloads the models.

+```bash
+# For Mac users (M1/M2)
+docker build --platform linux/amd64 --build-arg INSTALL_OPTION=transformer -t crawl4ai:transformer .

+# For other users
+docker build --build-arg INSTALL_OPTION=transformer -t crawl4ai:transformer .

+# Run the container
+docker run -d -p 8000:80 crawl4ai:transformer
+```

+### Notes

+- The `--platform linux/amd64` flag is necessary for Mac users with M1/M2 chips to ensure compatibility.
+- The `-t` flag tags the image with a name (and optionally a tag in the 'name:tag' format).
+- The `-d` flag runs the container in detached mode.
+- The `-p 8000:80` flag maps port 8000 on the host to port 80 in the container.

+Choose the installation option that best suits your needs. The default installation is suitable for basic usage, while the other options provide additional capabilities for more advanced use cases.

+## Option 3: Using the Pre-built Image from Docker Hub

+You can use pre-built Crawl4AI images from Docker Hub, which are available for all platforms (Mac, Linux, Windows). We have official images as well as a community-contributed image (Thanks to https://github.com/FractalMind):

+### Default Installation

+```bash
+# Pull the image
+docker pull unclecode/crawl4ai:latest

+# Run the container
+docker run -d -p 8000:80 unclecode/crawl4ai:latest
+```

+### Community-Contributed Image

+A stable version of Crawl4AI is also available, created and maintained by a community member:

+```bash
+# Pull the community-contributed image
+docker pull ryser007/crawl4ai:stable

+# Run the container
+docker run -d -p 8000:80 ryser007/crawl4ai:stable
+```

+We'd like to express our gratitude to GitHub user [@FractalMind](https://github.com/FractalMind) for creating and maintaining this stable version of the Crawl4AI Docker image. Community contributions like this are invaluable to the project.

+### Testing the Installation

+After running the container, you can test if it's working correctly:

+- On Mac and Linux:
+```bash
+curl http://localhost:8000
+```

+- On Windows (PowerShell):
+```powershell
+Invoke-WebRequest -Uri http://localhost:8000
+```

+Or open a web browser and navigate to http://localhost:8000

-You can also use Crawl4AI in a Google Colab notebook for easy setup and experimentation. Simply open the following Colab notebook and follow the instructions: [Colab](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)

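Beyond hitting the root URL, the running container exposes the `/crawl` endpoint defined in `main.py` later in this changeset. A minimal sketch of calling it from Python, assuming the port mapping shown above; the payload fields follow the `CrawlRequest` model (`urls` plus optional fields such as `user_agent` and `verbose`), and the exact response shape is not spelled out in this diff:

```python
import requests

# Illustrative request against a locally running container (port mapping -p 8000:80).
payload = {
    "urls": ["https://www.nbcnews.com/business"],
    "verbose": True,
}

response = requests.post("http://localhost:8000/crawl", json=payload, timeout=120)
print(response.status_code)
print(response.json())
```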
28  docs/md/interactive_content.html  (new file)
@@ -0,0 +1,28 @@
+<h1>Try Our Library</h1>
+<form id="apiForm">
+    <label for="inputField">Enter some input:</label>
+    <input type="text" id="inputField" name="inputField" required>
+    <button type="submit">Submit</button>
+</form>
+<div id="result"></div>
+
+<script>
+document.getElementById('apiForm').addEventListener('submit', function(event) {
+    event.preventDefault();
+    const input = document.getElementById('inputField').value;
+    fetch('https://your-api-endpoint.com/api', {
+        method: 'POST',
+        headers: {
+            'Content-Type': 'application/json'
+        },
+        body: JSON.stringify({ input: input })
+    })
+    .then(response => response.json())
+    .then(data => {
+        document.getElementById('result').textContent = JSON.stringify(data);
+    })
+    .catch(error => {
+        document.getElementById('result').textContent = 'Error: ' + error;
+    });
+});
+</script>
@@ -20,18 +20,6 @@ Crawl4AI is designed to simplify the process of crawling web pages and extractin
 - **🎯 CSS Selector Support**: Extract specific content using CSS selectors.
 - **📝 Instruction/Keyword Refinement**: Pass instructions or keywords to refine the extraction process.

-## Recent Changes (v0.2.5) 🌟
-
-- **New Hooks**: Added six important hooks to the crawler:
-  - 🟢 `on_driver_created`: Called when the driver is ready for initializations.
-  - 🔵 `before_get_url`: Called right before Selenium fetches the URL.
-  - 🟣 `after_get_url`: Called after Selenium fetches the URL.
-  - 🟠 `before_return_html`: Called when the data is parsed and ready.
-  - 🟡 `on_user_agent_updated`: Called when the user changes the user agent, causing the driver to reinitialize.
-- **New Example**: Added an example in [`quickstart.py`](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/quickstart.py) in the example folder under the docs.
-- **Improved Semantic Context**: Maintaining the semantic context of inline tags (e.g., abbreviation, DEL, INS) for improved LLM-friendliness.
-- **Dockerfile Update**: Updated Dockerfile to ensure compatibility across multiple platforms.

 Check the [Changelog](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md) for more details.

 ## Power and Simplicity of Crawl4AI 🚀

@@ -176,41 +176,29 @@ print(f"JavaScript Code (Load More button) result: {result}")
 Let's see how we can customize the crawler using hooks!

 ```python
-def on_driver_created(driver):
-    print("[HOOK] on_driver_created")
-    driver.maximize_window()
-    driver.get('https://example.com/login')
-    driver.find_element(By.NAME, 'username').send_keys('testuser')
-    driver.find_element(By.NAME, 'password').send_keys('password123')
-    driver.find_element(By.NAME, 'login').click()
-    driver.add_cookie({'name': 'test_cookie', 'value': 'cookie_value'})
-    return driver
-
-def before_get_url(driver):
-    print("[HOOK] before_get_url")
-    driver.execute_cdp_cmd('Network.enable', {})
-    driver.execute_cdp_cmd('Network.setExtraHTTPHeaders', {'headers': {'X-Test-Header': 'test'}})
-    return driver
-
-def after_get_url(driver):
-    print("[HOOK] after_get_url")
-    print(driver.current_url)
-    return driver
-
-def before_return_html(driver, html):
-    print("[HOOK] before_return_html")
-    print(len(html))
-    return driver
-
-crawler.set_hook('on_driver_created', on_driver_created)
-crawler.set_hook('before_get_url', before_get_url)
-crawler.set_hook('after_get_url', after_get_url)
-crawler.set_hook('before_return_html', before_return_html)
-
-result = crawler.run(url="https://example.com")
-print(f"Crawler Hooks result: {result}")
+import time
+
+from crawl4ai.web_crawler import WebCrawler
+from crawl4ai.crawler_strategy import *
+
+def delay(driver):
+    print("Delaying for 5 seconds...")
+    time.sleep(5)
+    print("Resuming...")
+
+def create_crawler():
+    crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
+    crawler_strategy.set_hook('after_get_url', delay)
+    crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
+    crawler.warmup()
+    return crawler
+
+crawler = create_crawler()
+result = crawler.run(url="https://www.nbcnews.com/business", bypass_cache=True)
 ```

+Check [Hooks](examples/hooks_auth.md) for more examples.

 ## Congratulations! 🎉

 You've made it through the Crawl4AI Quickstart Guide! Now go forth and crawl the web like a pro! 🕸️

107  main.py
@@ -10,6 +10,10 @@ from fastapi.responses import HTMLResponse, JSONResponse
 from fastapi.staticfiles import StaticFiles
 from fastapi.middleware.cors import CORSMiddleware
 from fastapi.templating import Jinja2Templates
+from fastapi.exceptions import RequestValidationError
+from starlette.middleware.base import BaseHTTPMiddleware
+from starlette.responses import FileResponse
+from fastapi.responses import RedirectResponse

 from pydantic import BaseModel, HttpUrl
 from concurrent.futures import ThreadPoolExecutor, as_completed

@@ -18,6 +22,15 @@ from typing import List, Optional
 from crawl4ai.web_crawler import WebCrawler
 from crawl4ai.database import get_total_count, clear_db

+import time
+from slowapi import Limiter, _rate_limit_exceeded_handler
+from slowapi.util import get_remote_address
+from slowapi.errors import RateLimitExceeded
+
+# load .env file
+from dotenv import load_dotenv
+load_dotenv()
+
 # Configuration
 __location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
 MAX_CONCURRENT_REQUESTS = 10  # Adjust this to change the maximum concurrent requests

@@ -26,6 +39,78 @@ lock = asyncio.Lock()

 app = FastAPI()

+# Initialize rate limiter
+def rate_limit_key_func(request: Request):
+    access_token = request.headers.get("access-token")
+    if access_token == os.environ.get('ACCESS_TOKEN'):
+        return None
+    return get_remote_address(request)
+
+limiter = Limiter(key_func=rate_limit_key_func)
+app.state.limiter = limiter
+
+# Dictionary to store last request times for each client
+last_request_times = {}
+last_rate_limit = {}
+
+def get_rate_limit():
+    limit = os.environ.get('ACCESS_PER_MIN', "5")
+    return f"{limit}/minute"
+
+# Custom rate limit exceeded handler
+async def custom_rate_limit_exceeded_handler(request: Request, exc: RateLimitExceeded) -> JSONResponse:
+    if request.client.host not in last_rate_limit or time.time() - last_rate_limit[request.client.host] > 60:
+        last_rate_limit[request.client.host] = time.time()
+    retry_after = 60 - (time.time() - last_rate_limit[request.client.host])
+    reset_at = time.time() + retry_after
+    return JSONResponse(
+        status_code=429,
+        content={
+            "detail": "Rate limit exceeded",
+            "limit": str(exc.limit.limit),
+            "retry_after": retry_after,
+            'reset_at': reset_at,
+            "message": f"You have exceeded the rate limit of {exc.limit.limit}."
+        }
+    )
+
+app.add_exception_handler(RateLimitExceeded, custom_rate_limit_exceeded_handler)
+
+# Middleware for token-based bypass and per-request limit
+class RateLimitMiddleware(BaseHTTPMiddleware):
+    async def dispatch(self, request: Request, call_next):
+        SPAN = int(os.environ.get('ACCESS_TIME_SPAN', 10))
+        access_token = request.headers.get("access-token")
+        if access_token == os.environ.get('ACCESS_TOKEN'):
+            return await call_next(request)
+
+        path = request.url.path
+        if path in ["/crawl", "/old"]:
+            client_ip = request.client.host
+            current_time = time.time()
+
+            # Check time since last request
+            if client_ip in last_request_times:
+                time_since_last_request = current_time - last_request_times[client_ip]
+                if time_since_last_request < SPAN:
+                    return JSONResponse(
+                        status_code=429,
+                        content={
+                            "detail": "Too many requests",
+                            "message": "Rate limit exceeded. Please wait 10 seconds between requests.",
+                            "retry_after": max(0, SPAN - time_since_last_request),
+                            "reset_at": current_time + max(0, SPAN - time_since_last_request),
+                        }
+                    )
+
+            last_request_times[client_ip] = current_time
+
+        return await call_next(request)
+
+app.add_middleware(RateLimitMiddleware)
+
 # CORS configuration
 origins = ["*"]  # Allow all origins
 app.add_middleware(
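Taken together, the limiter and the middleware above define a simple client-facing protocol: requests carrying the shared `access-token` header bypass throttling entirely, while anonymous requests to `/crawl` or `/old` must be spaced out and otherwise receive a 429 payload with `retry_after`/`reset_at` hints. A minimal client-side sketch, assuming a local deployment on port 8000 and that `ACCESS_TOKEN` matches the server's environment:

```python
import os
import time
import requests

BASE_URL = "http://localhost:8000"  # assumed local deployment
payload = {"urls": ["https://example.com"]}

# With the shared token, both the per-minute limiter and the middleware are bypassed.
headers = {"access-token": os.environ.get("ACCESS_TOKEN", "")}
resp = requests.post(f"{BASE_URL}/crawl", json=payload, headers=headers, timeout=120)

# Without the token, back off using the hints carried by a 429 response.
resp = requests.post(f"{BASE_URL}/crawl", json=payload, timeout=120)
if resp.status_code == 429:
    wait = float(resp.json().get("retry_after", 10))
    time.sleep(wait)
    resp = requests.post(f"{BASE_URL}/crawl", json=payload, timeout=120)

print(resp.status_code)
```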
@@ -39,12 +124,15 @@ app.add_middleware(
 # Mount the pages directory as a static directory
 app.mount("/pages", StaticFiles(directory=__location__ + "/pages"), name="pages")
 app.mount("/mkdocs", StaticFiles(directory="site", html=True), name="mkdocs")
+site_templates = Jinja2Templates(directory=__location__ + "/site")
 templates = Jinja2Templates(directory=__location__ + "/pages")
-# chromedriver_autoinstaller.install() # Ensure chromedriver is installed
 @lru_cache()
 def get_crawler():
     # Initialize and return a WebCrawler instance
-    return WebCrawler(verbose = True)
+    crawler = WebCrawler(verbose = True)
+    crawler.warmup()
+    return crawler

 class CrawlRequest(BaseModel):
     urls: List[str]

@@ -61,8 +149,12 @@ class CrawlRequest(BaseModel):
     user_agent: Optional[str] = None
     verbose: Optional[bool] = True

+@app.get("/")
+def read_root():
+    return RedirectResponse(url="/mkdocs")
+
-@app.get("/", response_class=HTMLResponse)
+@app.get("/old", response_class=HTMLResponse)
+@limiter.limit(get_rate_limit())
 async def read_index(request: Request):
     partials_dir = os.path.join(__location__, "pages", "partial")
     partials = {}

@@ -79,7 +171,6 @@ async def get_total_url_count():
     count = get_total_count()
     return JSONResponse(content={"count": count})

-# Add endpoit to clear db
 @app.get("/clear-db")
 async def clear_database():
     # clear_db()

@@ -98,6 +189,7 @@ def import_strategy(module_name: str, class_name: str, *args, **kwargs):
     raise HTTPException(status_code=400, detail=f"Class {class_name} not found in {module_name}.")

 @app.post("/crawl")
+@limiter.limit(get_rate_limit())
 async def crawl_urls(crawl_request: CrawlRequest, request: Request):
     logging.debug(f"[LOG] Crawl request for URL: {crawl_request.urls}")
     global current_requests

@@ -148,7 +240,6 @@ async def crawl_urls(crawl_request: CrawlRequest, request: Request):

 @app.get("/strategies/extraction", response_class=JSONResponse)
 async def get_extraction_strategies():
-    # Load docs/extraction_strategies.json" and return as JSON response
     with open(f"{__location__}/docs/extraction_strategies.json", "r") as file:
         return JSONResponse(content=file.read())

@@ -156,8 +247,8 @@ async def get_extraction_strategies():
 async def get_chunking_strategies():
     with open(f"{__location__}/docs/chunking_strategies.json", "r") as file:
         return JSONResponse(content=file.read())

 if __name__ == "__main__":
     import uvicorn
-    uvicorn.run(app, host="0.0.0.0", port=8080)
+    uvicorn.run(app, host="0.0.0.0", port=8888)

0  middlewares.py  (new, empty file)
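The `get_crawler` change above pairs `@lru_cache()` with an explicit `warmup()`, so the relatively expensive crawler construction and warm-up happen once per process and every request reuses the same instance. A small self-contained sketch of the same caching pattern, with a hypothetical stand-in for the crawler class:

```python
from functools import lru_cache

class FakeCrawler:
    """Hypothetical stand-in for WebCrawler, used only to show the caching behaviour."""
    def __init__(self):
        print("constructing crawler")

    def warmup(self):
        print("warming up (loading models, starting the driver, ...)")

@lru_cache()
def get_crawler():
    crawler = FakeCrawler()
    crawler.warmup()  # paid once per process
    return crawler

# Both calls return the same, already warmed-up instance.
assert get_crawler() is get_crawler()
```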
15  mkdocs.yml
@@ -2,9 +2,11 @@ site_name: Crawl4AI Documentation
 docs_dir: docs/md
 nav:
   - Home: index.md
-  - Introduction: introduction.md
-  - Installation: installation.md
-  - Quick Start: quickstart.md
+  - Demo: demo.md # Add this line
+  - First Steps:
+    - Introduction: introduction.md
+    - Installation: installation.md
+    - Quick Start: quickstart.md
   - Examples:
     - Intro: examples/index.md
     - LLM Extraction: examples/llm_extraction.md

@@ -21,8 +23,9 @@ nav:
   - API Reference:
     - Core Classes and Functions: api/core_classes_and_functions.md
     - Detailed API Documentation: api/detailed_api_documentation.md
-  - Change Log: changelog.md
-  - Contact: contact.md
+  - Miscellaneous:
+    - Change Log: changelog.md
+    - Contact: contact.md

 theme:
   name: terminal

@@ -36,4 +39,4 @@ extra_css:

 extra_javascript:
   - assets/highlight.min.js
   - assets/highlight_init.js

@@ -25,7 +25,7 @@
 <header class="bg-zinc-950 text-lime-500 py-4 flex">

     <div class="mx-auto px-4">
-        <h1 class="text-2xl font-bold">🔥🕷️ Crawl4AI: Web Data for your Thoughts v0.2.5</h1>
+        <h1 class="text-2xl font-bold">🔥🕷️ Crawl4AI: Web Data for your Thoughts</h1>
     </div>
     <div class="mx-auto px-4 flex font-bold text-xl gap-2">
         <span>📊 Total Website Processed</span>

@@ -12,11 +12,13 @@ python-dotenv==1.0.1
 requests==2.32.3
 rich==13.7.1
 scikit-learn==1.5.0
-selenium==4.21.0
+selenium==4.23.1
 uvicorn==0.30.1
 transformers==4.41.2
-chromedriver-autoinstaller==0.6.4
+# webdriver-manager==4.0.1
+# chromedriver-autoinstaller==0.6.4
 torch==2.3.1
 onnxruntime==1.18.0
 tokenizers==0.19.1
 pillow==10.3.0
+slowapi==0.1.9

51  setup.py
@@ -1,55 +1,44 @@
 from setuptools import setup, find_packages
 import os
-import sys
 from pathlib import Path
-import subprocess
-from setuptools.command.install import install
+import shutil

 # Create the .crawl4ai folder in the user's home directory if it doesn't exist
-crawl4ai_folder = os.path.join(Path.home(), ".crawl4ai")
-os.makedirs(crawl4ai_folder, exist_ok=True)
-os.makedirs(f"{crawl4ai_folder}/cache", exist_ok=True)
+# If the folder already exists, remove the cache folder
+crawl4ai_folder = Path.home() / ".crawl4ai"
+cache_folder = crawl4ai_folder / "cache"
+
+if cache_folder.exists():
+    shutil.rmtree(cache_folder)
+
+crawl4ai_folder.mkdir(exist_ok=True)
+cache_folder.mkdir(exist_ok=True)

 # Read the requirements from requirements.txt
 with open("requirements.txt") as f:
     requirements = f.read().splitlines()

-# Read the requirements from requirements.txt
-with open("requirements.crawl.txt") as f:
-    requirements_crawl_only = f.read().splitlines()
-
 # Define the requirements for different environments
-requirements_without_torch = [req for req in requirements if not req.startswith("torch")]
-requirements_without_transformers = [req for req in requirements if not req.startswith("transformers")]
-requirements_without_nltk = [req for req in requirements if not req.startswith("nltk")]
-requirements_without_torch_transformers_nlkt = [req for req in requirements if not req.startswith("torch") and not req.startswith("transformers") and not req.startswith("nltk")]
-requirements_crawl_only = [req for req in requirements if not req.startswith("torch") and not req.startswith("transformers") and not req.startswith("nltk")]
+default_requirements = [req for req in requirements if not req.startswith(("torch", "transformers", "onnxruntime", "nltk", "spacy", "tokenizers", "scikit-learn"))]
+torch_requirements = [req for req in requirements if req.startswith(("torch", "nltk", "spacy", "scikit-learn", "numpy"))]
+transformer_requirements = [req for req in requirements if req.startswith(("transformers", "tokenizers", "onnxruntime"))]

-class CustomInstallCommand(install):
-    """Customized setuptools install command to install spacy without dependencies."""
-    def run(self):
-        install.run(self)
-        subprocess.check_call([os.sys.executable, '-m', 'pip', 'install', 'spacy', '--no-deps'])
-
 setup(
     name="Crawl4AI",
-    version="0.2.5",
+    version="0.2.77",
     description="🔥🕷️ Crawl4AI: Open-source LLM Friendly Web Crawler & Scrapper",
-    long_description=open("README.md").read(),
+    long_description=open("README.md", encoding="utf-8").read(),
     long_description_content_type="text/markdown",
     url="https://github.com/unclecode/crawl4ai",
     author="Unclecode",
     author_email="unclecode@kidocode.com",
     license="MIT",
     packages=find_packages(),
-    install_requires=requirements_without_torch_transformers_nlkt,
+    install_requires=default_requirements,
     extras_require={
-        "all": requirements,  # Include all requirements
-        "colab": requirements_without_torch,  # Exclude torch for Colab
-        "crawl": requirements_crawl_only,  # Include only crawl requirements
-    },
-    cmdclass={
-        'install': CustomInstallCommand,
+        "torch": torch_requirements,
+        "transformer": transformer_requirements,
+        "all": requirements,
     },
     entry_points={
         'console_scripts': [

@@ -67,4 +56,4 @@ setup(
         "Programming Language :: Python :: 3.10",
     ],
     python_requires=">=3.7",
 )
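The extras are now derived by filtering `requirements.txt` on package-name prefixes, so each install option maps to a predictable subset of the pinned dependencies. A small self-contained sketch of that filtering logic, using a shortened requirements list for illustration:

```python
# Illustration only: a trimmed requirements list, filtered the same way setup.py does.
requirements = [
    "requests==2.32.3",
    "scikit-learn==1.5.0",
    "selenium==4.23.1",
    "transformers==4.41.2",
    "torch==2.3.1",
    "onnxruntime==1.18.0",
    "tokenizers==0.19.1",
    "slowapi==0.1.9",
]

default_requirements = [r for r in requirements if not r.startswith(
    ("torch", "transformers", "onnxruntime", "nltk", "spacy", "tokenizers", "scikit-learn"))]
torch_requirements = [r for r in requirements if r.startswith(
    ("torch", "nltk", "spacy", "scikit-learn", "numpy"))]
transformer_requirements = [r for r in requirements if r.startswith(
    ("transformers", "tokenizers", "onnxruntime"))]

print(default_requirements)      # base install: crawling/server dependencies only
print(torch_requirements)        # what "crawl4ai[torch]" adds on top
print(transformer_requirements)  # what "crawl4ai[transformer]" adds on top
```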