Compare commits

41 Commits

Author SHA1 Message Date
unclecode
2101540819 chore: Update version to 0.2.74 in setup.py 2024-07-08 16:30:28 +08:00
unclecode
9d98393606 Prepare branch for release 0.2.74 2024-07-08 16:30:14 +08:00
unclecode
6f99368744 Add UTF encoding to resolve the Windows machine "charmap" error. 2024-07-08 16:18:07 +08:00
unclecode
ea2f83ac10 feat: Add delay after fetching URL in crawler hooks
This commit adds a delay of 5 seconds after fetching the URL in the `after_get_url` hook of the crawler hooks. The delay is implemented using the `time.sleep()` function. This change ensures that the entire page is fetched before proceeding with further actions.
2024-07-08 15:59:59 +08:00
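The hook described above boils down to sleeping after the driver returns, then handing the driver back. A minimal sketch of that pattern — `make_after_get_url_hook` is an illustrative factory, not the library's API; only the `after_get_url` hook name and the driver-in/driver-out convention come from the commit:

```python
import time

def make_after_get_url_hook(delay_seconds=5.0):
    """Build an `after_get_url`-style hook that pauses after the page is fetched,
    giving the page time to finish rendering before the crawler proceeds."""
    def after_get_url(driver):
        time.sleep(delay_seconds)  # fixed delay, as described in the commit
        return driver              # hooks return the (possibly modified) driver
    return after_get_url
```

In the commit the delay is hard-coded to 5 seconds; parameterizing it as above just makes the sketch testable.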
unclecode
7f41ff4a74 The after_get_url hook is executed after getting the URL, allowing for further customization. 2024-07-06 14:28:01 +08:00
unclecode
236bdb4035 feat: Add MaxRetryError exception handling in LocalSeleniumCrawlerStrategy 2024-07-06 14:08:30 +08:00
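`MaxRetryError` is what urllib3 raises when the HTTP connection to the browser driver keeps failing. A sketch of the pattern this commit introduces — catching it and re-raising with a readable message; `fetch_with_driver` and its arguments are illustrative names, not the library's actual signature:

```python
from urllib3.exceptions import MaxRetryError

def fetch_with_driver(get_page, url):
    """Call a page-fetching function and convert connection-retry failures
    into a readable crawl error instead of a raw urllib3 traceback."""
    try:
        return get_page(url)
    except MaxRetryError as e:
        raise RuntimeError(f"Failed to crawl {url}: max retries exceeded ({e})")
```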
unclecode
1368248254 feat: Sanitize input and handle encoding issues in LLMExtractionStrategy 2024-07-05 17:59:26 +08:00
unclecode
b0ec54b9e9 feat: Sanitize input and handle encoding issues in LLMExtractionStrategy 2024-07-05 17:37:25 +08:00
unclecode
fb6ed5f000 feat: Sanitize input and handle encoding issues in LLMExtractionStrategy
This commit modifies the LLMExtractionStrategy class in `extraction_strategy.py` to sanitize input and handle potential encoding issues. The `sanitize_input_encode` function is introduced in `utils.py` to encode and decode the input text as UTF-8 or ASCII, depending on the encoding issues encountered. If an encoding error occurs, the function falls back to ASCII encoding and logs a warning message. This change improves the robustness of the extraction process and ensures that characters are not lost due to encoding issues.
2024-07-05 17:30:58 +08:00
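Based on the description above, the sanitizer round-trips text through UTF-8 and falls back to ASCII when encoding fails. The real implementation lives in `utils.py`; this is a sketch consistent with the commit message, and the exact behavior may differ:

```python
def sanitize_input_encode(text: str) -> str:
    """Round-trip text through UTF-8, dropping bytes that cannot be encoded;
    fall back to ASCII (with a warning) if UTF-8 handling itself fails."""
    try:
        return text.encode("utf-8", errors="ignore").decode("utf-8")
    except UnicodeError:
        print("[WARNING] UTF-8 encoding failed, falling back to ASCII")
        return text.encode("ascii", errors="ignore").decode("ascii")
```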
unclecode
597fe8bdb7 chore: Delete existing database file and initialize new database
This commit deletes the existing database file and initializes a new database in the `crawl4ai/database.py` file. The `os.remove()` function is used to delete the file if it exists, and then the `init_db()` function is called to initialize the new database. This change is necessary to start from a clean database state.
2024-07-05 17:04:57 +08:00
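The delete-then-reinitialize step can be sketched as below. The table schema and the `reset_db` helper name are illustrative; only the `os.remove()` / `init_db()` sequence comes from the commit:

```python
import os
import sqlite3

def init_db(path):
    """Create the crawl cache table if it does not exist (schema is illustrative)."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS crawled_data (url TEXT PRIMARY KEY, html TEXT)"
    )
    conn.commit()
    conn.close()

def reset_db(path):
    """Delete the existing database file, if any, then initialize a fresh one."""
    if os.path.exists(path):
        os.remove(path)  # start from a clean state
    init_db(path)
```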
unclecode
3ff2a0d0e7 Merge branch 'main' of https://github.com/unclecode/crawl4ai 2024-07-03 15:26:47 +08:00
unclecode
3cd1b3719f Bump version to v0.2.73, update documentation, and resolve installation issues 2024-07-03 15:26:43 +08:00
unclecode
9926eb9f95 feat: Bump version to v0.2.73 and update documentation
This commit updates the version number to v0.2.73 and makes corresponding changes in the README.md and Dockerfile.

The Dockerfile now installs the default mode, which resolves many of the installation issues.

Additionally, the installation instructions are updated to include support for different modes. setup.py no longer has a dependency on spaCy.

The change log is also updated to reflect these changes.

Added support for websites that need a with-head (non-headless) browser.
2024-07-03 15:19:22 +08:00
UncleCode
3abaa82501 Merge pull request #37 from shivkumar0757/fix-readme-encoding
@shivkumar0757  Great work! I value your contribution and have merged your pull request. You will be credited in the upcoming changelog. Thank you for your continued support in advancing this library and democratizing open access to crawling for everyone.
2024-07-01 07:31:07 +02:00
unclecode
88d8cd8650 feat: Add page load check for LocalSeleniumCrawlerStrategy
This commit adds a page load check for the LocalSeleniumCrawlerStrategy in the `crawl` method. The `_ensure_page_load` method is introduced to ensure that the page has finished loading before proceeding. This helps to prevent issues with incomplete page sources and improves the reliability of the crawler.
2024-07-01 00:07:32 +08:00
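The check polls the page source and stops early once its length changes from the first snapshot. A standalone sketch of the logic, with `get_page_source` standing in for Selenium's `driver.page_source` so it can run without a browser:

```python
import time

def ensure_page_load(get_page_source, max_checks=6, check_interval=0.01):
    """Poll the page source and stop early once its length differs from the
    first snapshot, i.e. once more content has arrived."""
    initial_length = len(get_page_source())
    for _ in range(max_checks):
        time.sleep(check_interval)
        if len(get_page_source()) != initial_length:
            break
    return get_page_source()
```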
shiv
a08f21d66c Fix UnicodeDecodeError by reading README.md with UTF-8 encoding 2024-06-30 20:27:33 +05:30
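The fix amounts to passing an explicit `encoding` when `setup.py` reads the README for its long description. A sketch, with `read_long_description` as an illustrative helper name:

```python
def read_long_description(path="README.md"):
    """Read the README with an explicit encoding so setup.py does not raise
    UnicodeDecodeError on platforms whose default codec is not UTF-8
    (e.g. cp1252 on Windows)."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()
```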
unclecode
d58286989c UPDATE DOCUMENTS 2024-06-30 00:34:02 +08:00
unclecode
b58af3349c chore: Update installation instructions with support for different modes 2024-06-30 00:22:17 +08:00
unclecode
940df4631f Update ChangeLog 2024-06-30 00:18:40 +08:00
unclecode
685706e0aa Update version, and change log 2024-06-30 00:17:43 +08:00
unclecode
7b0979e134 Update README and Dockerfile 2024-06-30 00:15:43 +08:00
unclecode
61ae2de841 1/ Update setup.py to support the following modes:
- default (the most frequent mode)
- torch
- transformers
- all
2/ Update the Dockerfile
3/ Update the documentation as well
2024-06-30 00:15:29 +08:00
unclecode
5b28eed2c0 Add a temporary solution for when we can't crawl websites in headless mode. 2024-06-29 23:25:50 +08:00
unclecode
f8a11779fe Update change log 2024-06-26 16:48:36 +08:00
unclecode
d11a83c232 ## [0.2.71] 2024-06-26
• Refactored `crawler_strategy.py` to handle exceptions and improve error messages
• Improved `get_content_of_website_optimized` function in `utils.py` for better performance
• Updated `utils.py` with latest changes
• Migrated to `ChromeDriverManager` for resolving Chrome driver download issues
2024-06-26 15:34:15 +08:00
unclecode
3255c7a3fa Update CHANGELOG.md with recent commits 2024-06-26 15:20:34 +08:00
unclecode
4756d0a532 Refactor crawler_strategy.py to handle exceptions and improve error messages 2024-06-26 15:04:33 +08:00
unclecode
7ba2142363 chore: Refactor get_content_of_website_optimized function in utils.py 2024-06-26 14:43:09 +08:00
unclecode
96d1eb0d0d Some updates in utils.py 2024-06-26 13:03:03 +08:00
unclecode
144cfa0eda Switch to ChromeDriverManager due to issues with downloading the Chrome driver 2024-06-26 13:00:17 +08:00
unclecode
a0dff192ae Update README for speed example 2024-06-24 23:06:12 +08:00
unclecode
1fffeeedd2 Update Readme: Showcase the speed 2024-06-24 23:02:08 +08:00
unclecode
f51b078042 Update README example. 2024-06-24 22:54:29 +08:00
unclecode
b6023a51fb Add star chart 2024-06-24 22:47:46 +08:00
unclecode
78cfad8b2f chore: Update version to 0.2.7 and improve extraction function speed 2024-06-24 22:39:56 +08:00
unclecode
68b3dff74a Update CSS 2024-06-23 00:36:03 +08:00
unclecode
bfc4abd6e8 Update documents 2024-06-22 20:57:03 +08:00
unclecode
8c77a760fc Fixed:
- Redirect "/" to mkdocs
2024-06-22 20:54:32 +08:00
unclecode
b9bf8ac9d7 Fix mounting the "/" to mkdocs site folder 2024-06-22 20:41:39 +08:00
unclecode
d6182bedd7 chore:
- Add demo page to the new mkdocs
- Set website home page to mkdocs
2024-06-22 20:36:01 +08:00
unclecode
2217904876 Update .gitignore 2024-06-22 18:12:12 +08:00
28 changed files with 976 additions and 218 deletions

.gitignore

@@ -183,4 +183,8 @@ docs/examples/.chainlit/*
 local/
 .files/
 a.txt
+.lambda_function.py
+ec2*
+update_changelog.sh

CHANGELOG.md

@@ -1,5 +1,49 @@
 # Changelog
+
+## [v0.2.74] - 2024-07-08
+A slew of exciting updates to improve the crawler's stability and robustness! 🎉
+- 💻 **UTF encoding fix**: Resolved the Windows "charmap" error by adding UTF encoding.
+- 🛡️ **Error handling**: Implemented MaxRetryError exception handling in LocalSeleniumCrawlerStrategy.
+- 🧹 **Input sanitization**: Improved input sanitization and handled encoding issues in LLMExtractionStrategy.
+- 🚮 **Database cleanup**: Removed the existing database file and initialized a new one.
+
+## [v0.2.73] - 2024-07-03
+💡 In this release, we've bumped the version to v0.2.73 and refreshed our documentation to ensure you have the best experience with our project.
+* Added support for websites that need "with-head" mode (a visible browser window) to be crawled.
+* Fixed the installation issues in setup.py and the Dockerfile.
+* Resolved multiple issues.
+
+## [v0.2.72] - 2024-06-30
+This release brings exciting updates and improvements to our project! 🎉
+* 📚 **Documentation Updates**: Our documentation has been revamped to reflect the latest changes and additions.
+* 🚀 **New Modes in setup.py**: We've added support for three new modes in setup.py: default, torch, and transformers. This enhances the project's flexibility and usability.
+* 🐳 **Docker File Updates**: The Docker file has been updated to ensure seamless compatibility with the new modes and improvements.
+* 🕷️ **Temporary Solution for Headless Crawling**: We've implemented a temporary solution to overcome issues with crawling websites in headless mode.
+These changes aim to improve the overall user experience, provide more flexibility, and enhance the project's performance. We're thrilled to share these updates with you and look forward to continuing to evolve and improve our project!
+
+## [0.2.71] - 2024-06-26
+**Improved Error Handling and Performance** 🚧
+* 🚫 Refactored `crawler_strategy.py` to handle exceptions and provide better error messages, making it more robust and reliable.
+* 💻 Optimized the `get_content_of_website_optimized` function in `utils.py` for improved performance, reducing potential bottlenecks.
+* 💻 Updated `utils.py` with the latest changes, ensuring consistency and accuracy.
+* 🚫 Migrated to `ChromeDriverManager` to resolve Chrome driver download issues, providing a smoother user experience.
+These changes focus on refining the existing codebase, resulting in a more stable, efficient, and user-friendly experience. With these improvements, you can expect fewer errors and better performance in the crawler strategy and utility functions.
+
+## [0.2.71] - 2024-06-25
+### Fixed
+- Made the extraction function twice as fast.
+
 ## [0.2.6] - 2024-06-22
 ### Fixed
 - Fix issue #19: Update Dockerfile to ensure compatibility across multiple platforms.

Dockerfile

@@ -18,12 +18,11 @@ RUN apt-get update && \
     software-properties-common && \
     rm -rf /var/lib/apt/lists/*
 
-# Install Python dependencies
-COPY requirements.txt .
-RUN pip install --no-cache-dir -r requirements.txt && \
-    pip install --no-cache-dir spacy torch onnxruntime uvicorn && \
-    python -m spacy download en_core_web_sm
+# Copy the application code
+COPY . .
+
+# Install Crawl4AI using the local setup.py (which will use the default installation)
+RUN pip install --no-cache-dir .
+# pip install --no-cache-dir spacy torch torchvision torchaudio onnxruntime uvicorn && \
 
 # Install Google Chrome and ChromeDriver
 RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
@@ -33,9 +32,6 @@ RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key
     wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip && \
     unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
 
-# Copy the rest of the application code
-COPY . .
-
 # Set environment to use Chrome and ChromeDriver properly
 ENV CHROME_BIN=/usr/bin/google-chrome \
     CHROMEDRIVER=/usr/local/bin/chromedriver \
@@ -43,9 +39,6 @@ ENV CHROME_BIN=/usr/bin/google-chrome \
     DBUS_SESSION_BUS_ADDRESS=/dev/null \
     PYTHONUNBUFFERED=1
 
-# pip install -e .[all]
-RUN pip install --no-cache-dir -e .[all]
-
 # Ensure the PATH environment variable includes the location of the installed packages
 ENV PATH /opt/conda/bin:$PATH
@@ -53,15 +46,13 @@ ENV PATH /opt/conda/bin:$PATH
 EXPOSE 80
 
 # Download models call cli "crawl4ai-download-models"
-RUN crawl4ai-download-models
+# RUN crawl4ai-download-models
 
-# Instakk mkdocs
+# Install mkdocs
 RUN pip install mkdocs mkdocs-terminal
 
 # Call mkdocs to build the documentation
 RUN mkdocs build
 
 # Run uvicorn
 CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80", "--workers", "4"]

README.md

@@ -1,4 +1,4 @@
-# Crawl4AI v0.2.6 🕷️🤖
+# Crawl4AI v0.2.74 🕷️🤖
 
 [![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers)
 [![GitHub Forks](https://img.shields.io/github/forks/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/network/members)
@@ -11,7 +11,7 @@ Crawl4AI simplifies web crawling and data extraction, making it accessible for l
 ## Try it Now!
 
 - Use as REST API: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1zODYjhemJ5bUmYceWpVoBMVpd0ofzNBZ?usp=sharing)
-- Use as Python library: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)
+- Use as Python library: This Colab is a bit outdated. I'm updating it with the newest versions, so please refer to the website for the latest documentation. This will be updated in a few days, and you'll have the latest version here. Thank you so much. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)
 
 ✨ visit our [Documentation Website](https://crawl4ai.com/mkdocs/) ✨
@@ -52,6 +52,40 @@ result = crawler.run(url="https://www.nbcnews.com/business")
 print(result.markdown)
 ```
 
+## How to install 🛠
+
+```bash
+virtualenv venv
+source venv/bin/activate
+pip install "crawl4ai @ git+https://github.com/unclecode/crawl4ai.git"
+```
+
+### Speed-First Design 🚀
+
+Perhaps the most important design principle for this library is speed. We need to ensure it can handle many links and resources in parallel as quickly as possible. By combining this speed with fast LLMs like Groq, the results will be truly amazing.
+
+```python
+import time
+from crawl4ai.web_crawler import WebCrawler
+crawler = WebCrawler()
+crawler.warmup()
+
+start = time.time()
+url = r"https://www.nbcnews.com/business"
+result = crawler.run(url, word_count_threshold=10, bypass_cache=True)
+end = time.time()
+print(f"Time taken: {end - start}")
+```
+
+Let's take a look at the measured time for the above code snippet:
+
+```bash
+[LOG] 🚀 Crawling done, success: True, time taken: 1.3623387813568115 seconds
+[LOG] 🚀 Content extracted, success: True, time taken: 0.05715131759643555 seconds
+[LOG] 🚀 Extraction, time taken: 0.05750393867492676 seconds.
+Time taken: 1.439958095550537
+```
+
+Fetching the content from the page took 1.3623 seconds, and extracting the content took 0.0575 seconds. 🚀
+
 ### Extract Structured Data from Web Pages 📊
 
 Crawl all OpenAI models and their fees from the official page.
@@ -60,19 +94,30 @@ Crawl all OpenAI models and their fees from the official page.
 import os
 from crawl4ai import WebCrawler
 from crawl4ai.extraction_strategy import LLMExtractionStrategy
+from pydantic import BaseModel, Field
+
+class OpenAIModelFee(BaseModel):
+    model_name: str = Field(..., description="Name of the OpenAI model.")
+    input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
+    output_fee: str = Field(..., description="Fee for output token for the OpenAI model.")
 
 url = 'https://openai.com/api/pricing/'
 crawler = WebCrawler()
 crawler.warmup()
 
 result = crawler.run(
     url=url,
-    extraction_strategy=LLMExtractionStrategy(
-        provider="openai/gpt-4",
-        api_token=os.getenv('OPENAI_API_KEY'),
-        instruction="Extract all model names and their fees for input and output tokens."
-    ),
+    word_count_threshold=1,
+    extraction_strategy=LLMExtractionStrategy(
+        provider="openai/gpt-4o", api_token=os.getenv('OPENAI_API_KEY'),
+        schema=OpenAIModelFee.schema(),
+        extraction_type="schema",
+        instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
+        Do not miss any models in the entire content. One extracted model JSON format should look like this:
+        {"model_name": "GPT-4", "input_fee": "US$10.00 / 1M tokens", "output_fee": "US$30.00 / 1M tokens"}."""
+    ),
+    bypass_cache=True,
 )
 
 print(result.extracted_content)
 ```
@@ -119,3 +164,7 @@ For questions, suggestions, or feedback, feel free to reach out:
 - Website: [crawl4ai.com](https://crawl4ai.com)
 
 Happy Crawling! 🕸️🚀
+
+## Star History
+
+[![Star History Chart](https://api.star-history.com/svg?repos=unclecode/crawl4ai&type=Date)](https://star-history.com/#unclecode/crawl4ai&Date)

crawl4ai/chunking_strategy.py

@@ -3,6 +3,7 @@ import re
 from collections import Counter
 import string
 from .model_loader import load_nltk_punkt
+from .utils import *
 
 # Define the abstract base class for chunking strategies
 class ChunkingStrategy(ABC):

crawl4ai/crawler_strategy.py

@@ -5,8 +5,13 @@ from selenium.webdriver.common.by import By
 from selenium.webdriver.support.ui import WebDriverWait
 from selenium.webdriver.support import expected_conditions as EC
 from selenium.webdriver.chrome.options import Options
-from selenium.common.exceptions import InvalidArgumentException
-import logging
+from selenium.common.exceptions import InvalidArgumentException, WebDriverException
+from selenium.webdriver.chrome.service import Service as ChromeService
+from webdriver_manager.chrome import ChromeDriverManager
+from urllib3.exceptions import MaxRetryError
+from .config import *
+import logging, time
 import base64
 from PIL import Image, ImageDraw, ImageFont
 from io import BytesIO
@@ -14,7 +19,7 @@ from typing import List, Callable
 import requests
 import os
 from pathlib import Path
-from .utils import wrap_text
+from .utils import *
 
 logger = logging.getLogger('selenium.webdriver.remote.remote_connection')
 logger.setLevel(logging.WARNING)
@@ -69,7 +74,7 @@ class CloudCrawlerStrategy(CrawlerStrategy):
         response = requests.post("http://crawl4ai.uccode.io/crawl", json=data)
         response = response.json()
         html = response["results"][0]["html"]
-        return html
+        return sanitize_input_encode(html)
 
 class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
     def __init__(self, use_cached_html=False, js_code=None, **kwargs):
@@ -80,14 +85,20 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
         if kwargs.get("user_agent"):
             self.options.add_argument("--user-agent=" + kwargs.get("user_agent"))
         else:
+            # Set user agent
             user_agent = kwargs.get("user_agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36")
             self.options.add_argument(f"--user-agent={user_agent}")
+            self.options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36")
+            self.options.add_argument("--no-sandbox")
 
         self.options.headless = kwargs.get("headless", True)
         if self.options.headless:
             self.options.add_argument("--headless")
+        self.options.add_argument("--disable-gpu")
+        self.options.add_argument("--window-size=1920,1080")
+        self.options.add_argument("--no-sandbox")
+        self.options.add_argument("--disable-dev-shm-usage")
+        self.options.add_argument("--disable-blink-features=AutomationControlled")
         # self.options.add_argument("--disable-dev-shm-usage")
         self.options.add_argument("--disable-gpu")
         # self.options.add_argument("--disable-extensions")
@@ -118,10 +129,15 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
         }
 
         # chromedriver_autoinstaller.install()
-        import chromedriver_autoinstaller
-        crawl4ai_folder = os.path.join(Path.home(), ".crawl4ai")
-        chromedriver_path = chromedriver_autoinstaller.utils.download_chromedriver(crawl4ai_folder, False)
+        # import chromedriver_autoinstaller
+        # crawl4ai_folder = os.path.join(Path.home(), ".crawl4ai")
+        # driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=self.options)
+        # chromedriver_path = chromedriver_autoinstaller.install()
+        # chromedriver_path = chromedriver_autoinstaller.utils.download_chromedriver()
         # self.service = Service(chromedriver_autoinstaller.install())
+        chromedriver_path = ChromeDriverManager().install()
         self.service = Service(chromedriver_path)
         self.service.log_path = "NUL"
         self.driver = webdriver.Chrome(service=self.service, options=self.options)
@@ -163,8 +179,20 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
         # Set extra HTTP headers
         self.driver.execute_cdp_cmd('Network.setExtraHTTPHeaders', {'headers': headers})
 
+    def _ensure_page_load(self, max_checks=6, check_interval=0.01):
+        initial_length = len(self.driver.page_source)
+        for ix in range(max_checks):
+            # print(f"Checking page load: {ix}")
+            time.sleep(check_interval)
+            current_length = len(self.driver.page_source)
+            if current_length != initial_length:
+                break
+        return self.driver.page_source
+
-    def crawl(self, url: str) -> str:
+    def crawl(self, url: str, **kwargs) -> str:
         # Create md5 hash of the URL
         import hashlib
         url_hash = hashlib.md5(url.encode()).hexdigest()
@@ -173,17 +201,40 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
         cache_file_path = os.path.join(Path.home(), ".crawl4ai", "cache", url_hash)
         if os.path.exists(cache_file_path):
             with open(cache_file_path, "r") as f:
-                return f.read()
+                return sanitize_input_encode(f.read())
 
         try:
             self.driver = self.execute_hook('before_get_url', self.driver)
             if self.verbose:
                 print(f"[LOG] 🕸️ Crawling {url} using LocalSeleniumCrawlerStrategy...")
-            self.driver.get(url)
-            WebDriverWait(self.driver, 10).until(
-                EC.presence_of_all_elements_located((By.TAG_NAME, "html"))
-            )
-            WebDriverWait(self.driver, 10).until(
-                EC.presence_of_all_elements_located((By.TAG_NAME, "body"))
-            )
-            self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
+            self.driver.get(url)  # <html><head></head><body></body></html>
+
+            WebDriverWait(self.driver, 20).until(
+                lambda d: d.execute_script('return document.readyState') == 'complete'
+            )
+
             self.driver = self.execute_hook('after_get_url', self.driver)
+            html = sanitize_input_encode(self._ensure_page_load())  # self.driver.page_source
+            can_not_be_done_headless = False  # Look at my creativity for naming variables
+
+            # TODO: Very ugly approach, but promise to change it!
+            if kwargs.get('bypass_headless', False) or html == "<html><head></head><body></body></html>":
+                print("[LOG] 🙌 Page could not be loaded in headless mode. Trying non-headless mode...")
+                can_not_be_done_headless = True
+                options = Options()
+                options.headless = False
+                # set window size very small
+                options.add_argument("--window-size=5,5")
+                driver = webdriver.Chrome(service=self.service, options=options)
+                driver.get(url)
+                self.driver = self.execute_hook('after_get_url', driver)
+                html = sanitize_input_encode(driver.page_source)
+                driver.quit()
 
             # Execute JS code if provided
             if self.js_code and type(self.js_code) == str:
@@ -199,12 +250,13 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
                 lambda driver: driver.execute_script("return document.readyState") == "complete"
             )
 
-            html = self.driver.page_source
+            if not can_not_be_done_headless:
+                html = sanitize_input_encode(self.driver.page_source)
             self.driver = self.execute_hook('before_return_html', self.driver, html)
 
             # Store in cache
             cache_file_path = os.path.join(Path.home(), ".crawl4ai", "cache", url_hash)
-            with open(cache_file_path, "w") as f:
+            with open(cache_file_path, "w", encoding="utf-8") as f:
                 f.write(html)
 
             if self.verbose:
@@ -212,9 +264,18 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
             return html
 
-        except InvalidArgumentException:
-            raise InvalidArgumentException(f"Invalid URL {url}")
+        except InvalidArgumentException as e:
+            if not hasattr(e, 'msg'):
+                e.msg = sanitize_input_encode(str(e))
+            raise InvalidArgumentException(f"Failed to crawl {url}: {e.msg}")
+        except WebDriverException as e:
+            # If e does not have a msg attribute, create it and set it to str(e)
+            if not hasattr(e, 'msg'):
+                e.msg = sanitize_input_encode(str(e))
+            raise WebDriverException(f"Failed to crawl {url}: {e.msg}")
         except Exception as e:
-            raise Exception(f"Failed to crawl {url}: {str(e)}")
+            if not hasattr(e, 'msg'):
+                e.msg = sanitize_input_encode(str(e))
+            raise Exception(f"Failed to crawl {url}: {e.msg}")
 
     def take_screenshot(self) -> str:
         try:
@@ -242,7 +303,7 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
             return img_base64
 
         except Exception as e:
-            error_message = f"Failed to take screenshot: {str(e)}"
+            error_message = sanitize_input_encode(f"Failed to take screenshot: {str(e)}")
             print(error_message)
 
             # Generate an image with black background

crawl4ai/database.py

@@ -20,7 +20,7 @@ def init_db():
             extracted_content TEXT,
             success BOOLEAN,
             media TEXT DEFAULT "{}",
-            link TEXT DEFAULT "{}",
+            links TEXT DEFAULT "{}",
             metadata TEXT DEFAULT "{}",
             screenshot TEXT DEFAULT ""
         )
@@ -127,6 +127,9 @@ def update_existing_records(new_column: str = "media", default_value: str = "{}"
         print(f"Error updating existing records: {e}")
 
 if __name__ == "__main__":
-    init_db() # Initialize the database if not already initialized
-    alter_db_add_screenshot("metadata") # Add the new column to the table
-    update_existing_records("metadata") # Update existing records to set the new column to an empty string
+    # Delete the existing database file
+    if os.path.exists(DB_PATH):
+        os.remove(DB_PATH)
+    init_db()
+    # alter_db_add_screenshot("COL_NAME")

crawl4ai/extraction_strategy.py

@@ -10,7 +10,7 @@ from functools import partial
from .model_loader import * from .model_loader import *
import math import math
import numpy as np
class ExtractionStrategy(ABC): class ExtractionStrategy(ABC):
""" """
Abstract base class for all extraction strategies. Abstract base class for all extraction strategies.
@@ -101,7 +101,7 @@ class LLMExtractionStrategy(ExtractionStrategy):
prompt_with_variables = PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION prompt_with_variables = PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION
if self.extract_type == "schema": if self.extract_type == "schema":
            variable_values["SCHEMA"] = json.dumps(self.schema, indent=2)
            prompt_with_variables = PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION
        for variable in variable_values:
@@ -109,14 +109,13 @@ class LLMExtractionStrategy(ExtractionStrategy):
                "{" + variable + "}", variable_values[variable]
            )
        response = perform_completion_with_backoff(self.provider, prompt_with_variables, self.api_token)  # , json_response=self.extract_type == "schema")
        try:
            blocks = extract_xml_data(["blocks"], response.choices[0].message.content)['blocks']
            blocks = json.loads(blocks)
            for block in blocks:
                block['error'] = False
        except Exception as e:
            parsed, unparsed = split_and_parse_json_objects(response.choices[0].message.content)
            blocks = parsed
            if unparsed:
@@ -192,16 +191,31 @@ class LLMExtractionStrategy(ExtractionStrategy):
            # Sequential processing with a delay
            for ix, section in enumerate(merged_sections):
                extract_func = partial(self.extract, url)
                extracted_content.extend(extract_func(ix, sanitize_input_encode(section)))
                time.sleep(0.5)  # 500 ms delay between each processing
        else:
            # Parallel processing using ThreadPoolExecutor
            # extract_func = partial(self.extract, url)
            # for ix, section in enumerate(merged_sections):
            #     extracted_content.append(extract_func(ix, section))
            with ThreadPoolExecutor(max_workers=4) as executor:
                extract_func = partial(self.extract, url)
                futures = [executor.submit(extract_func, ix, sanitize_input_encode(section)) for ix, section in enumerate(merged_sections)]
                for future in as_completed(futures):
                    try:
                        extracted_content.extend(future.result())
                    except Exception as e:
                        if self.verbose:
                            print(f"Error in thread execution: {e}")
                        # Add error information to extracted_content
                        extracted_content.append({
                            "index": 0,
                            "error": True,
                            "tags": ["error"],
                            "content": str(e)
                        })
        return extracted_content
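The parallel branch above converts a failed worker into a structured error block instead of letting one bad section abort the whole batch. A minimal, self-contained sketch of that pattern (`run_parallel` and `demo_extract` are illustrative stand-ins, not library functions):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_parallel(sections, extract):
    """Run extract() over sections in worker threads, converting raised
    exceptions into error blocks instead of aborting the batch."""
    results = []
    with ThreadPoolExecutor(max_workers=4) as executor:
        futures = [executor.submit(extract, ix, s) for ix, s in enumerate(sections)]
        for future in as_completed(futures):
            try:
                results.extend(future.result())
            except Exception as e:
                # A failed section becomes a structured error entry
                results.append({"index": 0, "error": True,
                                "tags": ["error"], "content": str(e)})
    return results

def demo_extract(ix, section):
    # Stand-in for the real LLM extraction call
    if "bad" in section:
        raise ValueError(f"cannot parse section {ix}")
    return [{"index": ix, "error": False, "content": section.upper()}]
```

Because `as_completed` yields futures in finish order, callers that need page order should sort the collected blocks by their `index` afterwards.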
@@ -219,6 +233,8 @@ class CosineStrategy(ExtractionStrategy):
        """
        super().__init__()
        import numpy as np
        self.semantic_filter = semantic_filter
        self.word_count_threshold = word_count_threshold
        self.max_dist = max_dist


@@ -186,7 +186,7 @@ The user has made the following request for what information to extract from the
Please carefully read the URL content and the user's request. If the user provided a desired JSON schema in the <schema_block> above, extract the requested information from the URL content according to that schema. If no schema was provided, infer an appropriate JSON schema based on the user's request that will best capture the key information they are looking for.

Extraction instructions:
Return the extracted information as a list of JSON objects, with each object in the list corresponding to a block of content from the URL, in the same order as it appears on the page. Wrap the entire JSON list in <blocks>...</blocks> XML tags.

Quality Reflection:
Before outputting your final answer, double check that the JSON you are returning is complete, containing all the information requested by the user, and is valid JSON that could be parsed by json.loads() with no errors or omissions. The outputted JSON objects should fully match the schema, either provided or inferred.
@@ -194,5 +194,11 @@ Before outputting your final answer, double check that the JSON you are returnin
Quality Score:
After reflecting, score the quality and completeness of the JSON data you are about to return on a scale of 1 to 5. Write the score inside <score> tags.

Avoid Common Mistakes:
- Do NOT add any comments using "//" or "#" in the JSON output; they cause parsing errors.
- Make sure the JSON is properly formatted with curly braces, square brackets, and commas in the right places.
- Do not miss the closing </blocks> tag at the end of the JSON output.
- Do not generate Python code showing how to do the task; your task is to extract the information and return it in JSON format.

Result
Output the final list of JSON objects, wrapped in <blocks>...</blocks> XML tags. Make sure to close the tag properly."""
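Since the prompt above instructs the model to wrap its JSON list in `<blocks>...</blocks>` tags, the caller has to peel those tags off before `json.loads()`. A minimal parser sketch (hypothetical helper, not the library's `extract_xml_data`):

```python
import json
import re

def parse_blocks(llm_output: str):
    """Pull the JSON list out of <blocks>...</blocks> tags, as the prompt
    above instructs the model to emit. Raises ValueError if the tags are
    missing; json.loads raises if the payload is malformed."""
    match = re.search(r"<blocks>(.*?)</blocks>", llm_output, re.DOTALL)
    if not match:
        raise ValueError("no <blocks>...</blocks> section found")
    return json.loads(match.group(1))

# Example model output, including the <score> tag the prompt also requests
sample = """<score>5</score>
<blocks>
[{"index": 0, "content": "Example block"}]
</blocks>"""
```

The non-greedy `.*?` with `re.DOTALL` keeps the match to the first tag pair even when the payload spans many lines.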


@@ -10,6 +10,7 @@ from html2text import HTML2Text
from .prompts import PROMPT_EXTRACT_BLOCKS
from .config import *
from pathlib import Path
from typing import Dict, Any

class InvalidCSSSelectorError(Exception):
    pass
@@ -95,6 +96,16 @@ def sanitize_html(html):
    return sanitized_html

def sanitize_input_encode(text: str) -> str:
    """Sanitize input to handle potential encoding issues."""
    try:
        # Attempt to encode and decode as UTF-8 to handle potential encoding issues
        return text.encode('utf-8', errors='ignore').decode('utf-8')
    except UnicodeEncodeError as e:
        print(f"Warning: Encoding issue detected. Some characters may be lost. Error: {e}")
        # Fall back to ASCII if UTF-8 fails
        return text.encode('ascii', errors='ignore').decode('ascii')
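A quick check of what the round-trip in `sanitize_input_encode` actually does: for ordinary Unicode text it is lossless, and with `errors='ignore'` the UTF-8 step silently drops only characters that cannot be encoded, such as lone surrogates left behind by a bad decode (with `'ignore'` in place, the `UnicodeEncodeError` fallback rarely fires in practice). A condensed restatement for experimentation:

```python
def sanitize_input_encode(text: str) -> str:
    """Round-trip through UTF-8, silently dropping characters that
    cannot be encoded (e.g. lone surrogates from a bad decode)."""
    return text.encode('utf-8', errors='ignore').decode('utf-8')

clean = sanitize_input_encode("café ✓")     # fully valid Unicode: unchanged
broken = "data\udc80point"                   # lone surrogate, e.g. from a bad decode
repaired = sanitize_input_encode(broken)     # surrogate is dropped
```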
def escape_json_string(s):
    """
    Escapes characters in a string to be JSON safe.
@@ -175,16 +186,25 @@ def replace_inline_tags(soup, tags, only_text=False):
        'small': lambda tag: f"<small>{tag.text}</small>",
        'mark': lambda tag: f"=={tag.text}=="
    }

    replacement_data = [(tag, tag_replacements.get(tag, lambda t: t.text)) for tag in tags]

    for tag_name, replacement_func in replacement_data:
        for tag in soup.find_all(tag_name):
            replacement_text = tag.text if only_text else replacement_func(tag)
            tag.replace_with(replacement_text)

    return soup

    # for tag_name in tags:
    #     for tag in soup.find_all(tag_name):
    #         if not only_text:
    #             replacement_text = tag_replacements.get(tag_name, lambda t: t.text)(tag)
    #             tag.replace_with(replacement_text)
    #         else:
    #             tag.replace_with(tag.text)
    # return soup
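The `replace_inline_tags` rewrite above hoists the per-tag dictionary lookup out of the inner loop by precomputing `(tag, replacement)` pairs once. The same dispatch-table idea in miniature, without BeautifulSoup (`render_inline` and its token list are purely illustrative):

```python
def render_inline(tokens, only_text=False):
    """Apply per-tag replacement functions, precomputed once up front,
    to a list of (tag_name, text) tokens — the same lookup-hoisting done
    in the replace_inline_tags rewrite above."""
    tag_replacements = {
        'b': lambda t: f"**{t}**",
        'i': lambda t: f"*{t}*",
        'mark': lambda t: f"=={t}==",
    }
    # Hoist the dict lookups out of the inner loop: one lookup per tag name
    table = {name: tag_replacements.get(name, lambda t: t)
             for name, _ in tokens}
    return [text if only_text else table[name](text)
            for name, text in tokens]

tokens = [('b', 'bold'), ('i', 'italic'), ('span', 'plain')]
```

Unknown tags fall through to an identity function, mirroring the `lambda t: t.text` default in the real code.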
def get_content_of_website(url, html, word_count_threshold = MIN_WORD_THRESHOLD, css_selector = None, **kwargs):
    try:
@@ -388,29 +408,160 @@ def get_content_of_website(url, html, word_count_threshold = MIN_WORD_THRESHOLD,
        markdown = h.handle(cleaned_html)
        markdown = markdown.replace(' ```', '```')

        try:
            meta = extract_metadata(html, soup)
        except Exception as e:
            print('Error extracting metadata:', str(e))
            meta = {}

        # Return the Markdown content
        return {
            'markdown': markdown,
            'cleaned_html': cleaned_html,
            'success': True,
            'media': media,
            'links': links,
            'metadata': meta
        }
    except Exception as e:
        print('Error processing HTML content:', str(e))
        raise InvalidCSSSelectorError(f"Invalid CSS selector: {css_selector}") from e
def get_content_of_website_optimized(url: str, html: str, word_count_threshold: int = MIN_WORD_THRESHOLD, css_selector: str = None, **kwargs) -> Dict[str, Any]:
    if not html:
        return None

    soup = BeautifulSoup(html, 'html.parser')
    body = soup.body

    if css_selector:
        selected_elements = body.select(css_selector)
        if not selected_elements:
            raise InvalidCSSSelectorError(f"Invalid CSS selector, No elements found for CSS selector: {css_selector}")
        body = soup.new_tag('div')
        for el in selected_elements:
            body.append(el)

    links = {'internal': [], 'external': []}
    media = {'images': [], 'videos': [], 'audios': []}

    def process_element(element: element.PageElement) -> bool:
        try:
            if isinstance(element, NavigableString):
                if isinstance(element, Comment):
                    element.extract()
                return False

            if element.name in ['script', 'style', 'link', 'meta', 'noscript']:
                element.decompose()
                return False

            keep_element = False

            if element.name == 'a' and element.get('href'):
                href = element['href']
                url_base = url.split('/')[2]
                link_data = {'href': href, 'text': element.get_text()}
                if href.startswith('http') and url_base not in href:
                    links['external'].append(link_data)
                else:
                    links['internal'].append(link_data)
                keep_element = True

            elif element.name == 'img':
                media['images'].append({
                    'src': element.get('src'),
                    'alt': element.get('alt'),
                    'type': 'image'
                })
                return True  # Always keep image elements

            elif element.name in ['video', 'audio']:
                media[f"{element.name}s"].append({
                    'src': element.get('src'),
                    'alt': element.get('alt'),
                    'type': element.name
                })
                return True  # Always keep video and audio elements

            if element.name != 'pre':
                if element.name in ['b', 'i', 'u', 'span', 'del', 'ins', 'sub', 'sup', 'strong', 'em', 'code', 'kbd', 'var', 's', 'q', 'abbr', 'cite', 'dfn', 'time', 'small', 'mark']:
                    if kwargs.get('only_text', False):
                        element.replace_with(element.get_text())
                    else:
                        element.unwrap()
                elif element.name != 'img':
                    element.attrs = {}

            # Process children
            for child in list(element.children):
                if isinstance(child, NavigableString) and not isinstance(child, Comment):
                    if len(child.strip()) > 0:
                        keep_element = True
                else:
                    if process_element(child):
                        keep_element = True

            # Check word count
            if not keep_element:
                word_count = len(element.get_text(strip=True).split())
                keep_element = word_count >= word_count_threshold

            if not keep_element:
                element.decompose()

            return keep_element
        except Exception as e:
            print('Error processing element:', str(e))
            return False

    process_element(body)

    def flatten_nested_elements(node):
        if isinstance(node, NavigableString):
            return node
        if len(node.contents) == 1 and isinstance(node.contents[0], element.Tag) and node.contents[0].name == node.name:
            return flatten_nested_elements(node.contents[0])
        node.contents = [flatten_nested_elements(child) for child in node.contents]
        return node

    body = flatten_nested_elements(body)

    cleaned_html = str(body).replace('\n\n', '\n').replace('  ', ' ')
    cleaned_html = sanitize_html(cleaned_html)

    h = CustomHTML2Text()
    h.ignore_links = True
    markdown = h.handle(cleaned_html)
    markdown = markdown.replace(' ```', '```')

    try:
        meta = extract_metadata(html, soup)
    except Exception as e:
        print('Error extracting metadata:', str(e))
        meta = {}

    return {
        'markdown': markdown,
        'cleaned_html': cleaned_html,
        'success': True,
        'media': media,
        'links': links,
        'metadata': meta
    }
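The `flatten_nested_elements` helper inside the optimized extractor above collapses chains of identically named single-child tags (e.g. `<div><div>x</div></div>` becomes one `<div>`). The same recursion shown on a minimal tree type, so it can be tested without BeautifulSoup — `Node` is an illustrative stand-in for bs4's `Tag`:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)

def flatten(node):
    """Collapse chains of same-named single-child nodes, mirroring
    flatten_nested_elements above."""
    if not isinstance(node, Node):
        return node  # plain text passes through untouched
    if len(node.children) == 1 and isinstance(node.children[0], Node) \
            and node.children[0].name == node.name:
        # Skip this wrapper and flatten its sole same-named child
        return flatten(node.children[0])
    node.children = [flatten(c) for c in node.children]
    return node

# <div><div><div>hello</div></div></div>  ->  <div>hello</div>
tree = Node("div", [Node("div", [Node("div", ["hello"])])])
```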
def extract_metadata(html, soup = None):
    metadata = {}
    if not html:
        return metadata

    # Parse HTML content with BeautifulSoup
    if not soup:
        soup = BeautifulSoup(html, 'html.parser')

    # Title
    title_tag = soup.find('title')
@@ -460,12 +611,16 @@ def extract_xml_data(tags, string):
    return data

# Function to perform the completion with exponential backoff
def perform_completion_with_backoff(provider, prompt_with_variables, api_token, json_response = False):
    from litellm import completion
    from litellm.exceptions import RateLimitError
    max_attempts = 3
    base_delay = 2  # Base delay in seconds, you can adjust this based on your needs

    extra_args = {}
    if json_response:
        extra_args["response_format"] = { "type": "json_object" }

    for attempt in range(max_attempts):
        try:
            response = completion(
@@ -474,7 +629,8 @@ def perform_completion_with_backoff(provider, prompt_with_variables, api_token):
                    {"role": "user", "content": prompt_with_variables}
                ],
                temperature=0.01,
                api_key=api_token,
                **extra_args
            )
            return response  # Return the successful response
        except RateLimitError as e:
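Stripped of the litellm call, `perform_completion_with_backoff` follows the standard exponential-backoff shape. A self-contained sketch, assuming the usual `base_delay * 2**attempt` schedule (the hunk above cuts off before the sleep line, so the exact schedule is an assumption); the `sleep` parameter exists only so the demo can record delays instead of waiting:

```python
import time

def with_backoff(fn, max_attempts=3, base_delay=2, sleep=time.sleep):
    """Retry fn() with exponentially growing delays (2s, 4s, ...) between
    attempts; re-raise the error after the final failure."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            sleep(base_delay * (2 ** attempt))

calls = []
def flaky():
    """Fails twice (e.g. rate-limited), then succeeds."""
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("rate limited")
    return "ok"
```

The real function catches only `RateLimitError`; catching bare `Exception` here is a simplification for the sketch.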
@@ -518,7 +674,6 @@ def extract_blocks(url, html, provider = DEFAULT_PROVIDER, api_token = None):
        for block in blocks:
            block['error'] = False
    except Exception as e:
        parsed, unparsed = split_and_parse_json_objects(response.choices[0].message.content)
        blocks = parsed
        # Append all unparsed segments as one error block whose content is the list of unparsed segments
@@ -564,7 +719,6 @@ def extract_blocks_batch(batch_data, provider = "groq/llama3-70b-8192", api_toke
        blocks = json.loads(blocks)
    except Exception as e:
        blocks = [{
            "index": 0,
            "tags": ["error"],
@@ -631,4 +785,11 @@ def wrap_text(draw, text, font, max_width):
        while words and draw.textbbox((0, 0), line + words[0], font=font)[2] <= max_width:
            line += (words.pop(0) + ' ')
        lines.append(line)
    return '\n'.join(lines)

def format_html(html_string):
    soup = BeautifulSoup(html_string, 'html.parser')
    return soup.prettify()


@@ -11,6 +11,8 @@ from .crawler_strategy import *
from typing import List
from concurrent.futures import ThreadPoolExecutor
from .config import *
import warnings

warnings.filterwarnings("ignore", message='Field "model_name" has conflict with protected namespace "model_".')

class WebCrawler:
@@ -46,7 +48,8 @@ class WebCrawler:
            word_count_threshold=5,
            extraction_strategy= NoExtractionStrategy(),
            bypass_cache=False,
            verbose = False,
            # warmup=True
        )
        self.ready = True
        print("[LOG] 🌞 WebCrawler is ready to crawl")
@@ -128,36 +131,57 @@ class WebCrawler:
            verbose=True,
            **kwargs,
    ) -> CrawlResult:
        try:
            extraction_strategy = extraction_strategy or NoExtractionStrategy()
            extraction_strategy.verbose = verbose
            if not isinstance(extraction_strategy, ExtractionStrategy):
                raise ValueError("Unsupported extraction strategy")
            if not isinstance(chunking_strategy, ChunkingStrategy):
                raise ValueError("Unsupported chunking strategy")

            # if word_count_threshold < MIN_WORD_THRESHOLD:
            #     word_count_threshold = MIN_WORD_THRESHOLD
            word_count_threshold = max(word_count_threshold, 0)

            # Check cache first
            cached = None
            screenshot_data = None
            extracted_content = None
            if not bypass_cache and not self.always_by_pass_cache:
                cached = get_cached_url(url)

            if kwargs.get("warmup", True) and not self.ready:
                return None

            if cached:
                html = sanitize_input_encode(cached[1])
                extracted_content = sanitize_input_encode(cached[4])
                if screenshot:
                    screenshot_data = cached[9]
                    if not screenshot_data:
                        cached = None

            if not cached or not html:
                if user_agent:
                    self.crawler_strategy.update_user_agent(user_agent)
                t1 = time.time()
                html = sanitize_input_encode(self.crawler_strategy.crawl(url, **kwargs))
                t2 = time.time()
                if verbose:
                    print(f"[LOG] 🚀 Crawling done for {url}, success: {bool(html)}, time taken: {t2 - t1} seconds")
                if screenshot:
                    screenshot_data = self.crawler_strategy.take_screenshot()

            crawl_result = self.process_html(url, html, extracted_content, word_count_threshold, extraction_strategy, chunking_strategy, css_selector, screenshot_data, verbose, bool(cached), **kwargs)
            crawl_result.success = bool(html)
            return crawl_result
        except Exception as e:
            if not hasattr(e, "msg"):
                e.msg = str(e)
            print(f"[ERROR] 🚫 Failed to crawl {url}, error: {e.msg}")
            return CrawlResult(url=url, html="", success=False, error_message=e.msg)
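The key behavioral change in `run()` above is that exceptions no longer propagate to the caller: every failure becomes a `CrawlResult` with `success=False` and an `error_message`. The pattern in isolation (`FakeResult` and `safe_run` are illustrative stand-ins, not the library's classes):

```python
from dataclasses import dataclass

@dataclass
class FakeResult:
    """Stand-in for CrawlResult, just the fields the pattern needs."""
    url: str
    html: str = ""
    success: bool = False
    error_message: str = ""

def safe_run(url, crawl):
    """Never raise: turn crawl() failures into a result object the caller
    can inspect, mirroring the try/except added to run()."""
    try:
        html = crawl(url)
        return FakeResult(url=url, html=html, success=bool(html))
    except Exception as e:
        return FakeResult(url=url, success=False, error_message=str(e))

def boom(url):
    raise ConnectionError("network down")
```

Callers then branch on `result.success` rather than wrapping every call in their own try/except.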
    def process_html(
        self,
@@ -176,20 +200,24 @@ class WebCrawler:
        t = time.time()
        # Extract content from HTML
        try:
            # t1 = time.time()
            # result = get_content_of_website(url, html, word_count_threshold, css_selector=css_selector, only_text=kwargs.get("only_text", False))
            # print(f"[LOG] 🚀 Crawling done for {url}, success: True, time taken: {time.time() - t1} seconds")
            t1 = time.time()
            result = get_content_of_website_optimized(url, html, word_count_threshold, css_selector=css_selector, only_text=kwargs.get("only_text", False))
            if verbose:
                print(f"[LOG] 🚀 Content extracted for {url}, success: True, time taken: {time.time() - t1} seconds")

            if result is None:
                raise ValueError(f"Failed to extract content from the website: {url}")
        except InvalidCSSSelectorError as e:
            raise ValueError(str(e))

        cleaned_html = sanitize_input_encode(result.get("cleaned_html", ""))
        markdown = sanitize_input_encode(result.get("markdown", ""))
        media = result.get("media", [])
        links = result.get("links", [])
        metadata = result.get("metadata", {})

        if verbose:
            print(f"[LOG] 🚀 Crawling done for {url}, success: True, time taken: {time.time() - t} seconds")

        if extracted_content is None:
            if verbose:
@@ -197,7 +225,7 @@ class WebCrawler:
            sections = chunking_strategy.chunk(markdown)
            extracted_content = extraction_strategy.run(url, sections)
            extracted_content = json.dumps(extracted_content, indent=4, default=str)

        if verbose:
            print(f"[LOG] 🚀 Extraction done for {url}, time taken: {time.time() - t} seconds.")
@@ -217,11 +245,11 @@ class WebCrawler:
            json.dumps(metadata),
            screenshot=screenshot,
        )

        return CrawlResult(
            url=url,
            html=html,
            cleaned_html=format_html(cleaned_html),
            markdown=markdown,
            media=media,
            links=links,

@@ -36,5 +36,5 @@ model_fees = json.loads(result.extracted_content)
print(len(model_fees))

with open(".data/data.json", "w", encoding="utf-8") as f:
    f.write(result.extracted_content)


@@ -249,15 +249,40 @@ def using_crawler_hooks(crawler):
    cprint("\n🔗 [bold cyan]Using Crawler Hooks: Let's see how we can customize the crawler using hooks![/bold cyan]", True)

    crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
    crawler_strategy.set_hook('on_driver_created', on_driver_created)
    crawler_strategy.set_hook('before_get_url', before_get_url)
    crawler_strategy.set_hook('after_get_url', after_get_url)
    crawler_strategy.set_hook('before_return_html', before_return_html)
    crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
    crawler.warmup()

    result = crawler.run(url="https://example.com")

    cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
    print_result(result=result)

def using_crawler_hooks_dleay_example(crawler):
    def delay(driver):
        print("Delaying for 5 seconds...")
        time.sleep(5)
        print("Resuming...")

    def create_crawler():
        crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
        crawler_strategy.set_hook('after_get_url', delay)
        crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
        crawler.warmup()
        return crawler

    cprint("\n🔗 [bold cyan]Using Crawler Hooks: Let's add a delay after fetching the url to make sure entire page is fetched.[/bold cyan]")
    crawler = create_crawler()
    result = crawler.run(url="https://google.com", bypass_cache=True)
    cprint("[LOG] 📦 [bold yellow]Crawler Hooks result:[/bold yellow]")
    print_result(result)
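The mechanism these examples rely on — named extension points registered on the strategy and fired at fixed stages of the crawl — can be sketched independently. `HookableStrategy` below is a made-up stand-in for `LocalSeleniumCrawlerStrategy`, with a fake fetch in place of Selenium:

```python
class HookableStrategy:
    """Minimal hook registry: callers attach functions to named stages,
    and the strategy invokes them as it reaches each stage."""
    STAGES = ("before_get_url", "after_get_url", "before_return_html")

    def __init__(self):
        self.hooks = {stage: None for stage in self.STAGES}

    def set_hook(self, stage, fn):
        if stage not in self.hooks:
            raise ValueError(f"unknown hook stage: {stage}")
        self.hooks[stage] = fn

    def _fire(self, stage, *args):
        if self.hooks[stage]:
            self.hooks[stage](*args)

    def crawl(self, url):
        self._fire("before_get_url", url)
        html = f"<html>{url}</html>"          # stand-in for the real fetch
        self._fire("after_get_url", url)      # e.g. a settle delay goes here
        self._fire("before_return_html", html)
        return html

events = []
strategy = HookableStrategy()
strategy.set_hook("after_get_url", lambda url: events.append(("after", url)))
```

This is why the `after_get_url` delay above works: the hook runs between the fetch and the HTML handoff, giving the page time to finish loading.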
def main():
    cprint("🌟 [bold green]Welcome to the Crawl4ai Quickstart Guide! Let's dive into some web crawling fun! 🌐[/bold green]")


@@ -42,5 +42,5 @@ page_summary = json.loads(result.extracted_content)
print(page_summary)

with open(".data/page_summary.json", "w", encoding="utf-8") as f:
    f.write(result.extracted_content)


@@ -15,7 +15,6 @@
--mono-font-stack: Menlo, Monaco, Lucida Console, Liberation Mono, DejaVu Sans Mono, Bitstream Vera Sans Mono, --mono-font-stack: Menlo, Monaco, Lucida Console, Liberation Mono, DejaVu Sans Mono, Bitstream Vera Sans Mono,
Courier New, monospace, serif; Courier New, monospace, serif;
--background-color: #151515; /* Dark background */ --background-color: #151515; /* Dark background */
--font-color: #eaeaea; /* Light font color for contrast */ --font-color: #eaeaea; /* Light font color for contrast */
--invert-font-color: #151515; /* Dark color for inverted elements */ --invert-font-color: #151515; /* Dark color for inverted elements */
@@ -30,12 +29,16 @@
--global-font-color: #eaeaea; /* Light font color for global elements */ --global-font-color: #eaeaea; /* Light font color for global elements */
--background-color: #222225; --background-color: #222225;
--background-color: #070708;
--page-width: 70em; --page-width: 70em;
--font-color: #e8e9ed; --font-color: #e8e9ed;
--invert-font-color: #222225; --invert-font-color: #222225;
--secondary-color: #a3abba; --secondary-color: #a3abba;
--secondary-color: #d5cec0;
--tertiary-color: #a3abba; --tertiary-color: #a3abba;
--primary-color: #09b5a5; /* Updated to the brand color */ --primary-color: #09b5a5; /* Updated to the brand color */
--primary-color: #50ffff; /* Updated to the brand color */
--error-color: #ff3c74; --error-color: #ff3c74;
--progress-bar-background: #3f3f44; --progress-bar-background: #3f3f44;
--progress-bar-fill: #09b5a5; /* Updated to the brand color */ --progress-bar-fill: #09b5a5; /* Updated to the brand color */
@@ -73,11 +76,78 @@ pre, code {
  border-bottom: 1px dashed var(--secondary-color);
} */

.terminal-mkdocs-main-content {
  line-height: var(--global-line-height);
}

strong,
.highlight {
  /* background: url(//s2.svgbox.net/pen-brushes.svg?ic=brush-1&color=50ffff); */
  background-color: #50ffff33;
}
.terminal-card > header {
color: var(--font-color);
text-align: center;
background-color: var(--progress-bar-background);
padding: 0.3em 0.5em;
}
.btn.btn-sm {
color: var(--font-color);
padding: 0.2em 0.5em;
font-size: 0.8em;
}
.loading-message {
display: none;
margin-top: 20px;
}
.response-section {
display: none;
padding-top: 20px;
}
.tabs {
display: flex;
flex-direction: column;
}
.tab-list {
display: flex;
padding: 0;
margin: 0;
list-style-type: none;
border-bottom: 1px solid var(--font-color);
}
.tab-item {
cursor: pointer;
padding: 10px;
border: 1px solid var(--font-color);
margin-right: -1px;
border-bottom: none;
}
.tab-item:hover,
.tab-item:focus,
.tab-item:active {
background-color: var(--progress-bar-background);
}
.tab-content {
display: none;
border: 1px solid var(--font-color);
border-top: none;
}
.tab-content:first-of-type {
display: block;
}
.tab-content header {
padding: 0.5em;
display: flex;
justify-content: end;
align-items: center;
background-color: var(--progress-bar-background);
}
.tab-content pre {
margin: 0;
max-height: 300px; overflow: auto; border:none;
}


@@ -1,5 +1,47 @@
# Changelog

## [v0.2.74] - 2024-07-08

A slew of exciting updates to improve the crawler's stability and robustness! 🎉

- 💻 **UTF encoding fix**: Resolved the Windows "charmap" error by adding UTF encoding.
- 🛡️ **Error handling**: Implemented MaxRetryError exception handling in LocalSeleniumCrawlerStrategy.
- 🧹 **Input sanitization**: Improved input sanitization and handled encoding issues in LLMExtractionStrategy.
- 🚮 **Database cleanup**: Removed the existing database file and initialized a new one.

## [v0.2.73] - 2024-07-03

💡 In this release, we've bumped the version to v0.2.73 and refreshed our documentation to ensure you have the best experience with our project.

* Added support for websites that need "with-head" mode to be crawled with a visible browser head.
* Fixed installation issues in setup.py and the Dockerfile.
* Resolved multiple issues.

## [v0.2.72] - 2024-06-30

This release brings exciting updates and improvements to our project! 🎉

* 📚 **Documentation Updates**: Our documentation has been revamped to reflect the latest changes and additions.
* 🚀 **New Modes in setup.py**: We've added support for three new modes in setup.py: default, torch, and transformers. This enhances the project's flexibility and usability.
* 🐳 **Docker File Updates**: The Docker file has been updated to ensure seamless compatibility with the new modes and improvements.
* 🕷️ **Temporary Solution for Headless Crawling**: We've implemented a temporary solution to overcome issues with crawling websites in headless mode.

These changes aim to improve the overall user experience, provide more flexibility, and enhance the project's performance. We're thrilled to share these updates with you and look forward to continuing to evolve and improve our project!

## [0.2.71] - 2024-06-26

**Improved Error Handling and Performance** 🚧

* 🚫 Refactored `crawler_strategy.py` to handle exceptions and provide better error messages, making it more robust and reliable.
* 💻 Optimized the `get_content_of_website_optimized` function in `utils.py` for improved performance, reducing potential bottlenecks.
* 💻 Updated `utils.py` with the latest changes, ensuring consistency and accuracy.
* 🚫 Migrated to `ChromeDriverManager` to resolve Chrome driver download issues, providing a smoother user experience.

These changes focus on refining the existing codebase, resulting in a more stable, efficient, and user-friendly experience. With these improvements, you can expect fewer errors and better performance in the crawler strategy and utility functions.

## [0.2.71] - 2024-06-25

### Fixed
- Doubled the speed of the extraction function.

## [0.2.6] - 2024-06-22

### Fixed
- Fix issue #19: Update Dockerfile to ensure compatibility across multiple platforms.

docs/md/demo.md (new file, 198 lines)

@@ -0,0 +1,198 @@
# Interactive Demo for Crawler
<div id="demo">
<form id="crawlForm" class="terminal-form">
<fieldset>
<legend>Enter URL and Options</legend>
<div class="form-group">
<label for="url">Enter URL:</label>
<input type="text" id="url" name="url" required>
</div>
<div class="form-group">
<label for="screenshot">Get Screenshot:</label>
<input type="checkbox" id="screenshot" name="screenshot">
</div>
<div class="form-group">
<button class="btn btn-default" type="submit">Submit</button>
</div>
</fieldset>
</form>
<div id="loading" class="loading-message">
<div class="terminal-alert terminal-alert-primary">Loading... Please wait.</div>
</div>
<section id="response" class="response-section">
<h2>Response</h2>
<div class="tabs">
<ul class="tab-list">
<li class="tab-item" onclick="showTab('markdown')">Markdown</li>
<li class="tab-item" onclick="showTab('cleanedHtml')">Cleaned HTML</li>
<li class="tab-item" onclick="showTab('media')">Media</li>
<li class="tab-item" onclick="showTab('extractedContent')">Extracted Content</li>
<li class="tab-item" onclick="showTab('screenshot')">Screenshot</li>
<li class="tab-item" onclick="showTab('pythonCode')">Python Code</li>
</ul>
<div class="tab-content" id="tab-markdown">
<header>
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('markdownContent')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('markdownContent', 'markdown.md')">Download</button>
</div>
</header>
<pre><code id="markdownContent" class="language-markdown hljs"></code></pre>
</div>
<div class="tab-content" id="tab-cleanedHtml" style="display: none;">
<header >
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('cleanedHtmlContent')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('cleanedHtmlContent', 'cleaned.html')">Download</button>
</div>
</header>
<pre><code id="cleanedHtmlContent" class="language-html hljs"></code></pre>
</div>
<div class="tab-content" id="tab-media" style="display: none;">
<header >
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('mediaContent')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('mediaContent', 'media.json')">Download</button>
</div>
</header>
<pre><code id="mediaContent" class="language-json hljs"></code></pre>
</div>
<div class="tab-content" id="tab-extractedContent" style="display: none;">
<header>
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('extractedContentContent')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('extractedContentContent', 'extracted_content.json')">Download</button>
</div>
</header>
<pre><code id="extractedContentContent" class="language-json hljs"></code></pre>
</div>
<div class="tab-content" id="tab-screenshot" style="display: none;">
<header>
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadImage('screenshotContent', 'screenshot.png')">Download</button>
</div>
</header>
<pre><img id="screenshotContent" /></pre>
</div>
<div class="tab-content" id="tab-pythonCode" style="display: none;">
<header>
<div>
<button class="btn btn-default btn-ghost btn-sm" onclick="copyToClipboard('pythonCode')">Copy</button>
<button class="btn btn-default btn-ghost btn-sm" onclick="downloadContent('pythonCode', 'example.py')">Download</button>
</div>
</header>
<pre><code id="pythonCode" class="language-python hljs"></code></pre>
</div>
</div>
</section>
<script>
function showTab(tabId) {
const tabs = document.querySelectorAll('.tab-content');
tabs.forEach(tab => tab.style.display = 'none');
document.getElementById(`tab-${tabId}`).style.display = 'block';
}
function redo(codeBlock, codeText){
codeBlock.classList.remove('hljs');
codeBlock.removeAttribute('data-highlighted');
// Set new code and re-highlight
codeBlock.textContent = codeText;
hljs.highlightBlock(codeBlock);
}
function copyToClipboard(elementId) {
const content = document.getElementById(elementId).textContent;
navigator.clipboard.writeText(content).then(() => {
alert('Copied to clipboard');
});
}
function downloadContent(elementId, filename) {
const content = document.getElementById(elementId).textContent;
const blob = new Blob([content], { type: 'text/plain' });
const url = window.URL.createObjectURL(blob);
const a = document.createElement('a');
a.style.display = 'none';
a.href = url;
a.download = filename;
document.body.appendChild(a);
a.click();
window.URL.revokeObjectURL(url);
document.body.removeChild(a);
}
function downloadImage(elementId, filename) {
const content = document.getElementById(elementId).src;
const a = document.createElement('a');
a.style.display = 'none';
a.href = content;
a.download = filename;
document.body.appendChild(a);
a.click();
document.body.removeChild(a);
}
document.getElementById('crawlForm').addEventListener('submit', function(event) {
event.preventDefault();
document.getElementById('loading').style.display = 'block';
document.getElementById('response').style.display = 'none';
const url = document.getElementById('url').value;
const screenshot = document.getElementById('screenshot').checked;
const data = {
urls: [url],
bypass_cache: false,
word_count_threshold: 5,
screenshot: screenshot
};
fetch('/crawl', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify(data)
})
.then(response => response.json())
.then(data => {
data = data.results[0]; // Only one URL is requested
document.getElementById('loading').style.display = 'none';
document.getElementById('response').style.display = 'block';
redo(document.getElementById('markdownContent'), data.markdown);
redo(document.getElementById('cleanedHtmlContent'), data.cleaned_html);
redo(document.getElementById('mediaContent'), JSON.stringify(data.media, null, 2));
redo(document.getElementById('extractedContentContent'), data.extracted_content);
if (screenshot) {
document.getElementById('screenshotContent').src = `data:image/png;base64,${data.screenshot}`;
}
const pythonCode = `
from crawl4ai.web_crawler import WebCrawler
crawler = WebCrawler()
crawler.warmup()
result = crawler.run(
url='${url}',
screenshot=${screenshot}
)
print(result)
`;
redo(document.getElementById('pythonCode'), pythonCode);
})
.catch(error => {
document.getElementById('loading').style.display = 'none';
document.getElementById('response').style.display = 'block';
document.getElementById('markdownContent').textContent = 'Error: ' + error;
});
});
</script>
</div>
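The demo form above issues a `POST /crawl` with a JSON body built from the form fields. The same request can be exercised outside the browser; the sketch below mirrors the JavaScript payload in Python. The server address is an assumption (the page posts to a relative `/crawl`), and the response shape (`results[0]` with `markdown`, `cleaned_html`, etc.) is taken from the script above.

```python
import json

def build_crawl_payload(url: str, screenshot: bool = False) -> dict:
    """Build the same JSON body the demo form sends to POST /crawl."""
    return {
        "urls": [url],               # the endpoint accepts a list of URLs
        "bypass_cache": False,
        "word_count_threshold": 5,
        "screenshot": screenshot,
    }

payload = build_crawl_payload("https://example.com", screenshot=True)
body = json.dumps(payload)

# Sending it requires a running server; "http://localhost:8000" is an assumption:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/crawl",
#     data=body.encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     first = json.loads(resp.read())["results"][0]
```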

View File

@@ -14,6 +14,9 @@ Let's see how we can customize the crawler using hooks! In this example, we'll:
 ### Hook Definitions
 ```python
+from crawl4ai.web_crawler import WebCrawler
+from crawl4ai.crawler_strategy import *
+
 def on_driver_created(driver):
     print("[HOOK] on_driver_created")
     # Example customization: maximize the window
@@ -66,12 +69,13 @@ def before_return_html(driver, html):
 ```python
 print("\n🔗 [bold cyan]Using Crawler Hooks: Let's see how we can customize the crawler using hooks![/bold cyan]", True)
-crawler = WebCrawler(verbose=True)
+crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
+crawler_strategy.set_hook('on_driver_created', on_driver_created)
+crawler_strategy.set_hook('before_get_url', before_get_url)
+crawler_strategy.set_hook('after_get_url', after_get_url)
+crawler_strategy.set_hook('before_return_html', before_return_html)
+crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
 crawler.warmup()
-crawler.set_hook('on_driver_created', on_driver_created)
-crawler.set_hook('before_get_url', before_get_url)
-crawler.set_hook('after_get_url', after_get_url)
-crawler.set_hook('before_return_html', before_return_html)
 result = crawler.run(url="https://example.com")

View File

@@ -45,7 +45,7 @@ model_fees = json.loads(result.extracted_content)
 print(len(model_fees))
-with open(".data/data.json", "w") as f:
+with open(".data/data.json", "w", encoding="utf-8") as f:
     f.write(result.extracted_content)
 ```
@@ -71,7 +71,7 @@ model_fees = json.loads(result.extracted_content)
 print(len(model_fees))
-with open(".data/data.json", "w") as f:
+with open(".data/data.json", "w", encoding="utf-8") as f:
     f.write(result.extracted_content)
 ```

View File

@@ -91,7 +91,7 @@ This example demonstrates how to use `Crawl4AI` to extract a summary from a web
 Save the extracted data to a file for further use.
 ```python
-with open(".data/page_summary.json", "w") as f:
+with open(".data/page_summary.json", "w", encoding="utf-8") as f:
     f.write(result.extracted_content)
 ```

View File

@@ -1,7 +1,12 @@
-# Crawl4AI Documentation
+# Crawl4AI v0.2.74
 Welcome to the official documentation for Crawl4AI! 🕷️🤖 Crawl4AI is an open-source Python library designed to simplify web crawling and extract useful information from web pages. This documentation will guide you through the features, usage, and customization of Crawl4AI.
+## Try the [Demo](demo.md)
+Just try it now and crawl different pages to see how it works. You can set the links, see the structures of the output, and also view the Python sample code on how to run it. The old demo is available at [/old_demo](/old) where you can see more details.
 ## Introduction
 Crawl4AI has one clear task: to make crawling and data extraction from web pages easy and efficient, especially for large language models (LLMs) and AI applications. Whether you are using it as a REST API or a Python library, Crawl4AI offers a robust and flexible solution.

View File

@@ -1,39 +1,67 @@
 # Installation 💻
 There are three ways to use Crawl4AI:
 1. As a library (Recommended)
 2. As a local server (Docker) or using the REST API
-3. As a Google Colab notebook. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)
+3. As a Google Colab notebook.
 ## Library Installation
-To install Crawl4AI as a library, follow these steps:
-1. Install the package from GitHub:
+Crawl4AI offers flexible installation options to suit various use cases. Choose the option that best fits your needs:
+- **Default Installation** (Basic functionality):
+```bash
+virtualenv venv
+source venv/bin/activate
+pip install "crawl4ai @ git+https://github.com/unclecode/crawl4ai.git"
 ```
+Use this for basic web crawling and scraping tasks.
+- **Installation with PyTorch** (For advanced text clustering):
+```bash
+virtualenv venv
+source venv/bin/activate
+pip install "crawl4ai[torch] @ git+https://github.com/unclecode/crawl4ai.git"
+```
+Choose this if you need the CosineSimilarity cluster strategy.
+- **Installation with Transformers** (For summarization and Hugging Face models):
+```bash
+virtualenv venv
+source venv/bin/activate
+pip install "crawl4ai[transformer] @ git+https://github.com/unclecode/crawl4ai.git"
+```
+Opt for this if you require text summarization or plan to use Hugging Face models.
+- **Full Installation** (All features):
+```bash
 virtualenv venv
 source venv/bin/activate
 pip install "crawl4ai[all] @ git+https://github.com/unclecode/crawl4ai.git"
 ```
+This installs all dependencies for full functionality.
-💡 Better to run the following CLI-command to load the required models. This is optional, but it will boost the performance and speed of the crawler. You need to do this only once.
-```
-crawl4ai-download-models
-```
-2. Alternatively, you can clone the repository and install the package locally:
-```
+- **Development Installation** (For contributors):
+```bash
 virtualenv venv
 source venv/bin/activate
 git clone https://github.com/unclecode/crawl4ai.git
 cd crawl4ai
-pip install -e .[all]
+pip install -e ".[all]"
 ```
+Use this if you plan to modify the source code.
+💡 After installation, if you have used "torch", "transformer" or "all", it's recommended to run the following CLI command to load the required models. This is optional but will boost the performance and speed of the crawler. You need to do this only once, this is only for when you install using []
+```bash
+crawl4ai-download-models
+```
 ## Using Docker for Local Server
-3. Use Docker to run the local server:
-```
+To run Crawl4AI as a local server using Docker:
+```bash
 # For Mac users
 # docker build --platform linux/amd64 -t crawl4ai .
 # For other users
@@ -43,4 +71,9 @@ docker run -d -p 8000:80 crawl4ai
 ## Using Google Colab
-You can also use Crawl4AI in a Google Colab notebook for easy setup and experimentation. Simply open the following Colab notebook and follow the instructions: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)
+You can also use Crawl4AI in a Google Colab notebook for easy setup and experimentation. Simply open the following Colab notebook and follow the instructions:
+⚠️ This collab is a bit outdated. I'm updating it with the newest versions, so please refer to the website for the latest documentation. This will be updated in a few days, and you'll have the latest version here. Thank you so much.
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)

View File

@@ -0,0 +1,28 @@
+<h1>Try Our Library</h1>
+<form id="apiForm">
+    <label for="inputField">Enter some input:</label>
+    <input type="text" id="inputField" name="inputField" required>
+    <button type="submit">Submit</button>
+</form>
+<div id="result"></div>
+<script>
+document.getElementById('apiForm').addEventListener('submit', function(event) {
+    event.preventDefault();
+    const input = document.getElementById('inputField').value;
+    fetch('https://your-api-endpoint.com/api', {
+        method: 'POST',
+        headers: {
+            'Content-Type': 'application/json'
+        },
+        body: JSON.stringify({ input: input })
+    })
+    .then(response => response.json())
+    .then(data => {
+        document.getElementById('result').textContent = JSON.stringify(data);
+    })
+    .catch(error => {
+        document.getElementById('result').textContent = 'Error: ' + error;
+    });
+});
+</script>

View File

@@ -176,41 +176,29 @@ print(f"JavaScript Code (Load More button) result: {result}")
 Let's see how we can customize the crawler using hooks!
 ```python
-def on_driver_created(driver):
-    print("[HOOK] on_driver_created")
-    driver.maximize_window()
-    driver.get('https://example.com/login')
-    driver.find_element(By.NAME, 'username').send_keys('testuser')
-    driver.find_element(By.NAME, 'password').send_keys('password123')
-    driver.find_element(By.NAME, 'login').click()
-    driver.add_cookie({'name': 'test_cookie', 'value': 'cookie_value'})
-    return driver
-def before_get_url(driver):
-    print("[HOOK] before_get_url")
-    driver.execute_cdp_cmd('Network.enable', {})
-    driver.execute_cdp_cmd('Network.setExtraHTTPHeaders', {'headers': {'X-Test-Header': 'test'}})
-    return driver
-def after_get_url(driver):
-    print("[HOOK] after_get_url")
-    print(driver.current_url)
-    return driver
-def before_return_html(driver, html):
-    print("[HOOK] before_return_html")
-    print(len(html))
-    return driver
-crawler.set_hook('on_driver_created', on_driver_created)
-crawler.set_hook('before_get_url', before_get_url)
-crawler.set_hook('after_get_url', after_get_url)
-crawler.set_hook('before_return_html', before_return_html)
-result = crawler.run(url="https://example.com")
-print(f"Crawler Hooks result: {result}")
+import time
+from crawl4ai.web_crawler import WebCrawler
+from crawl4ai.crawler_strategy import *
+def delay(driver):
+    print("Delaying for 5 seconds...")
+    time.sleep(5)
+    print("Resuming...")
+def create_crawler():
+    crawler_strategy = LocalSeleniumCrawlerStrategy(verbose=True)
+    crawler_strategy.set_hook('after_get_url', delay)
+    crawler = WebCrawler(verbose=True, crawler_strategy=crawler_strategy)
+    crawler.warmup()
+    return crawler
+crawler = create_crawler()
+result = crawler.run(url="https://www.nbcnews.com/business", bypass_cache=True)
 ```
+check [Hooks](examples/hooks_auth.md) for more examples.
 ## Congratulations! 🎉
 You've made it through the Crawl4AI Quickstart Guide! Now go forth and crawl the web like a pro! 🕸️
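The quickstart diff above moves `set_hook` from the crawler to the strategy object, but the underlying pattern is the same: a registry of named hook points that the crawl loop calls at fixed stages. The sketch below is a toy illustration of that pattern, not Crawl4AI's actual implementation; the class and hook names are illustrative.

```python
class HookableStrategy:
    """Toy stand-in for a crawler strategy with named hook points."""
    HOOKS = ("before_get_url", "after_get_url", "before_return_html")

    def __init__(self):
        self._hooks = {name: None for name in self.HOOKS}

    def set_hook(self, name, fn):
        """Register a callback for a known hook point."""
        if name not in self._hooks:
            raise ValueError(f"unknown hook: {name}")
        self._hooks[name] = fn

    def _call(self, name, *args):
        fn = self._hooks[name]
        return fn(*args) if fn else None

    def fetch(self, url):
        self._call("before_get_url", url)
        html = f"<html>{url}</html>"          # pretend we drove a browser here
        self._call("after_get_url", url)       # where a delay() hook would run
        return self._call("before_return_html", html) or html

strategy = HookableStrategy()
events = []
strategy.set_hook("after_get_url", lambda url: events.append(f"visited {url}"))
html = strategy.fetch("https://example.com")
print(events)  # → ['visited https://example.com']
```

Registering on the strategy rather than the crawler keeps hook dispatch next to the code that actually drives the browser, which is what the diff above migrates toward.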

main.py
View File

@@ -10,6 +10,10 @@ from fastapi.responses import HTMLResponse, JSONResponse
 from fastapi.staticfiles import StaticFiles
 from fastapi.middleware.cors import CORSMiddleware
 from fastapi.templating import Jinja2Templates
+from fastapi.exceptions import RequestValidationError
+from starlette.middleware.base import BaseHTTPMiddleware
+from starlette.responses import FileResponse
+from fastapi.responses import RedirectResponse
 from pydantic import BaseModel, HttpUrl
 from concurrent.futures import ThreadPoolExecutor, as_completed
@@ -39,12 +43,15 @@ app.add_middleware(
 # Mount the pages directory as a static directory
 app.mount("/pages", StaticFiles(directory=__location__ + "/pages"), name="pages")
 app.mount("/mkdocs", StaticFiles(directory="site", html=True), name="mkdocs")
+site_templates = Jinja2Templates(directory=__location__ + "/site")
 templates = Jinja2Templates(directory=__location__ + "/pages")
 # chromedriver_autoinstaller.install() # Ensure chromedriver is installed
 @lru_cache()
 def get_crawler():
     # Initialize and return a WebCrawler instance
-    return WebCrawler(verbose = True)
+    crawler = WebCrawler(verbose = True)
+    crawler.warmup()
+    return crawler
 class CrawlRequest(BaseModel):
     urls: List[str]
@@ -61,8 +68,11 @@ class CrawlRequest(BaseModel):
     user_agent: Optional[str] = None
     verbose: Optional[bool] = True
+@app.get("/")
+def read_root():
+    return RedirectResponse(url="/mkdocs")
-@app.get("/", response_class=HTMLResponse)
+@app.get("/old", response_class=HTMLResponse)
 async def read_index(request: Request):
     partials_dir = os.path.join(__location__, "pages", "partial")
     partials = {}
@@ -79,7 +89,6 @@ async def get_total_url_count():
     count = get_total_count()
     return JSONResponse(content={"count": count})
-# Add endpoit to clear db
 @app.get("/clear-db")
 async def clear_database():
     # clear_db()
@@ -148,7 +157,6 @@ async def crawl_urls(crawl_request: CrawlRequest, request: Request):
 @app.get("/strategies/extraction", response_class=JSONResponse)
 async def get_extraction_strategies():
-    # Load docs/extraction_strategies.json" and return as JSON response
     with open(f"{__location__}/docs/extraction_strategies.json", "r") as file:
         return JSONResponse(content=file.read())
@@ -156,8 +164,8 @@ async def get_extraction_strategies():
 async def get_chunking_strategies():
     with open(f"{__location__}/docs/chunking_strategies.json", "r") as file:
         return JSONResponse(content=file.read())
 if __name__ == "__main__":
     import uvicorn
-    uvicorn.run(app, host="0.0.0.0", port=8080)
+    uvicorn.run(app, host="0.0.0.0", port=8888)
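The `get_crawler` change above relies on `@lru_cache()` on a zero-argument function: the first call constructs and warms up the `WebCrawler`, and every later call returns the same cached instance, so the expensive warmup happens once per process. A minimal sketch of that pattern with a stand-in class:

```python
from functools import lru_cache

class ExpensiveClient:
    """Stand-in for WebCrawler: costly to construct, cheap to reuse."""
    instances = 0

    def __init__(self):
        ExpensiveClient.instances += 1
        self.warmed = False

    def warmup(self):
        self.warmed = True

@lru_cache()                      # zero-arg function => one cached instance per process
def get_client() -> ExpensiveClient:
    client = ExpensiveClient()
    client.warmup()               # mirrors get_crawler(): warm up before first use
    return client

a = get_client()
b = get_client()
print(a is b, ExpensiveClient.instances)  # → True 1
```

One caveat with this pattern: the cached instance is shared across all requests, so it must be safe to use concurrently (or guarded by a lock), which is worth keeping in mind for a Selenium-backed crawler behind FastAPI.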

View File

@@ -2,9 +2,11 @@ site_name: Crawl4AI Documentation
 docs_dir: docs/md
 nav:
   - Home: index.md
-  - Introduction: introduction.md
-  - Installation: installation.md
-  - Quick Start: quickstart.md
+  - Demo: demo.md # Add this line
+  - First Steps:
+    - Introduction: introduction.md
+    - Installation: installation.md
+    - Quick Start: quickstart.md
   - Examples:
     - Intro: examples/index.md
     - LLM Extraction: examples/llm_extraction.md
@@ -21,8 +23,9 @@ nav:
   - API Reference:
     - Core Classes and Functions: api/core_classes_and_functions.md
     - Detailed API Documentation: api/detailed_api_documentation.md
-  - Change Log: changelog.md
-  - Contact: contact.md
+  - Miscellaneous:
+    - Change Log: changelog.md
+    - Contact: contact.md
 theme:
   name: terminal
@@ -36,4 +39,4 @@ extra_css:
 extra_javascript:
   - assets/highlight.min.js
   - assets/highlight_init.js

View File

@@ -20,3 +20,4 @@ torch==2.3.1
 onnxruntime==1.18.0
 tokenizers==0.19.1
 pillow==10.3.0
+webdriver-manager==4.0.1

View File

@@ -1,55 +1,44 @@
 from setuptools import setup, find_packages
 import os
-import sys
 from pathlib import Path
 import subprocess
 from setuptools.command.install import install
 # Create the .crawl4ai folder in the user's home directory if it doesn't exist
+# If the folder already exists, remove the cache folder
 crawl4ai_folder = os.path.join(Path.home(), ".crawl4ai")
+if os.path.exists(f"{crawl4ai_folder}/cache"):
+    subprocess.run(["rm", "-rf", f"{crawl4ai_folder}/cache"])
 os.makedirs(crawl4ai_folder, exist_ok=True)
 os.makedirs(f"{crawl4ai_folder}/cache", exist_ok=True)
 # Read the requirements from requirements.txt
 with open("requirements.txt") as f:
     requirements = f.read().splitlines()
-# Read the requirements from requirements.txt
-with open("requirements.crawl.txt") as f:
-    requirements_crawl_only = f.read().splitlines()
 # Define the requirements for different environments
-requirements_without_torch = [req for req in requirements if not req.startswith("torch")]
-requirements_without_transformers = [req for req in requirements if not req.startswith("transformers")]
-requirements_without_nltk = [req for req in requirements if not req.startswith("nltk")]
-requirements_without_torch_transformers_nlkt = [req for req in requirements if not req.startswith("torch") and not req.startswith("transformers") and not req.startswith("nltk")]
-requirements_crawl_only = [req for req in requirements if not req.startswith("torch") and not req.startswith("transformers") and not req.startswith("nltk")]
-class CustomInstallCommand(install):
-    """Customized setuptools install command to install spacy without dependencies."""
-    def run(self):
-        install.run(self)
-        subprocess.check_call([os.sys.executable, '-m', 'pip', 'install', 'spacy', '--no-deps'])
+default_requirements = [req for req in requirements if not req.startswith(("torch", "transformers", "onnxruntime", "nltk", "spacy", "tokenizers", "scikit-learn", "numpy"))]
+torch_requirements = [req for req in requirements if req.startswith(("torch", "nltk", "spacy", "scikit-learn", "numpy"))]
+transformer_requirements = [req for req in requirements if req.startswith(("transformers", "tokenizers", "onnxruntime"))]
 setup(
     name="Crawl4AI",
-    version="0.2.6",
+    version="0.2.74",
     description="🔥🕷️ Crawl4AI: Open-source LLM Friendly Web Crawler & Scrapper",
-    long_description=open("README.md").read(),
+    long_description=open("README.md", encoding="utf-8").read(),
     long_description_content_type="text/markdown",
     url="https://github.com/unclecode/crawl4ai",
     author="Unclecode",
     author_email="unclecode@kidocode.com",
     license="MIT",
     packages=find_packages(),
-    install_requires=requirements_without_torch_transformers_nlkt,
+    install_requires=default_requirements,
     extras_require={
-        "all": requirements, # Include all requirements
-        "colab": requirements_without_torch, # Exclude torch for Colab
-        "crawl": requirements_crawl_only, # Include only crawl requirements
+        "torch": torch_requirements,
+        "transformer": transformer_requirements,
+        "all": requirements,
     },
-    cmdclass={
-        'install': CustomInstallCommand,
-    },
     entry_points={
         'console_scripts': [
@@ -67,4 +56,4 @@ setup(
         "Programming Language :: Python :: 3.10",
     ],
     python_requires=">=3.7",
 )
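The setup.py rewrite above partitions requirements.txt into extras by prefix matching, using the fact that `str.startswith` accepts a tuple of prefixes. The sketch below reproduces that split on a small sample list; the first two pins are hypothetical, the rest are taken from the requirements.txt diff.

```python
requirements = [
    "requests==2.31.0",       # hypothetical pin, for illustration
    "selenium==4.21.0",       # hypothetical pin, for illustration
    "torch==2.3.1",
    "transformers==4.41.0",
    "tokenizers==0.19.1",
    "onnxruntime==1.18.0",
    "nltk==3.8.1",
]

heavy = ("torch", "transformers", "onnxruntime", "nltk", "spacy",
         "tokenizers", "scikit-learn", "numpy")

# str.startswith accepts a tuple, so one pass per bucket is enough.
default_requirements = [r for r in requirements if not r.startswith(heavy)]
torch_requirements = [r for r in requirements
                      if r.startswith(("torch", "nltk", "spacy", "scikit-learn", "numpy"))]
transformer_requirements = [r for r in requirements
                            if r.startswith(("transformers", "tokenizers", "onnxruntime"))]

print(default_requirements)  # → ['requests==2.31.0', 'selenium==4.21.0']
```

One design caveat: prefix matching is approximate; a package like `numpy-quaternion` would be swept into the `numpy` bucket, so exact-name matching on the part before `==` would be stricter if that ever matters.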