Compare commits


16 Commits

Author SHA1 Message Date
Umut CAN
3c6ebb73ae Update web_crawler.py
Improve code efficiency, readability, and maintainability in web_crawler.py
2024-08-30 15:30:06 +03:00
unclecode
e5e6a34e80 ## [v0.2.77] - 2024-08-04
Significant improvements in text processing and performance:

- 🚀 **Dependency reduction**: Removed dependency on spaCy model for text chunk labeling in cosine extraction strategy.
- 🤖 **Transformer upgrade**: Implemented text sequence classification using a transformer model for labeling text chunks.
- **Performance enhancement**: Improved model loading speed due to removal of spaCy dependency.
- 🔧 **Future-proofing**: Laid groundwork for potential complete removal of spaCy dependency in future versions.

These changes address issue #68 and provide a foundation for faster, more efficient text processing in Crawl4AI.
2024-08-04 14:54:18 +08:00
unclecode
897e766728 Update README 2024-08-02 16:04:14 +08:00
unclecode
9200a6731d ## [v0.2.76] - 2024-08-02
Major improvements in functionality, performance, and cross-platform compatibility! 🚀

- 🐳 **Docker enhancements**: Significantly improved Dockerfile for easy installation on Linux, Mac, and Windows.
- 🌐 **Official Docker Hub image**: Launched our first official image on Docker Hub for streamlined deployment (unclecode/crawl4ai).
- 🔧 **Selenium upgrade**: Removed dependency on ChromeDriver, now using Selenium's built-in capabilities for better compatibility.
- 🖼️ **Image description**: Implemented ability to generate textual descriptions for extracted images from web pages.
- **Performance boost**: Various improvements to enhance overall speed and performance.
2024-08-02 16:02:42 +08:00
unclecode
61c166ab19 refactor: Update Crawl4AI version to v0.2.76
This commit updates the Crawl4AI version from v0.2.75 to v0.2.76. The version number is updated in the README.md file. This change ensures consistency and reflects the correct version of the software.
2024-08-02 15:55:53 +08:00
unclecode
659c8cd953 refactor: Update image description minimum word threshold in get_content_of_website_optimized 2024-08-02 15:55:32 +08:00
unclecode
9ee988753d refactor: Update image description minimum word threshold in get_content_of_website_optimized 2024-08-02 14:53:11 +08:00
unclecode
8ae6c43ca4 refactor: Update Dockerfile to install Crawl4AI with specified options 2024-08-01 20:13:06 +08:00
unclecode
b6713870ef refactor: Update Dockerfile to install Crawl4AI with specified options
This commit updates the Dockerfile to install Crawl4AI with the specified options. The `INSTALL_OPTION` build argument is used to determine which additional packages to install. If the option is set to "all", all models will be downloaded. If the option is set to "torch", only torch models will be downloaded. If the option is set to "transformer", only transformer models will be downloaded. If no option is specified, the default installation will be used. This change improves the flexibility and customization of the Crawl4AI installation process.
2024-08-01 17:56:19 +08:00
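The option-to-extras mapping described above can be sketched as a small shell function (pip and model-download calls are replaced with `echo` so the sketch runs anywhere; the real Dockerfile executes them directly):

```shell
# Simulated INSTALL_OPTION dispatch from the Dockerfile; echoes the command
# the real build would run instead of executing pip.
select_install() {
    case "$1" in
        all)         echo "pip install --no-cache-dir .[all] && crawl4ai-download-models" ;;
        torch)       echo "pip install --no-cache-dir .[torch] && crawl4ai-download-models" ;;
        transformer) echo "pip install --no-cache-dir .[transformer] && crawl4ai-download-models" ;;
        *)           echo "pip install --no-cache-dir ." ;;
    esac
}

select_install "${INSTALL_OPTION:-default}"
```

At build time the option is supplied with standard Docker syntax, e.g. `docker build --build-arg INSTALL_OPTION=torch -t crawl4ai .`.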
unclecode
40477493d3 refactor: Remove image format dot in get_content_of_website_optimized
The code change removes the dot from the image format in the `get_content_of_website_optimized` function. This change ensures consistency in the image format and improves the functionality.
2024-07-31 16:15:55 +08:00
Kevin Moturi
efcf3ac6eb Update LocalSeleniumCrawlerStrategy to resolve ChromeDriver version mismatch issue
This resolves the following error, which Windows users have been encountering: `selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 114`
2024-07-31 13:33:09 +08:00
unclecode
9e43f7beda refactor: Temporarily disable fetching image file size in get_content_of_website_optimized
Set the `image_size` variable to 0 in the `get_content_of_website_optimized` function to temporarily disable fetching the image file size. This change addresses performance issues and will be improved in a future update.

Update Dockerfile for Linux users
2024-07-31 13:29:23 +08:00
unclecode
aa9412e1b4 refactor: Set image_size to 0 in get_content_of_website_optimized
The code change sets the `image_size` variable to 0 in the `get_content_of_website_optimized` function. This change is made to temporarily disable fetching the image file size, which was causing performance issues. The image size will be fetched in a future update to improve the functionality.
2024-07-23 13:08:53 +08:00
Aravind Karnam
cf6c835e18 moved score threshold to config.py & replaced the separator for tag.get_text in find_closest_parent_with_useful_text fn from period(.) to space( ) to keep the text more neutral. 2024-07-21 15:18:23 +05:30
Aravind Karnam
e5ecf291f3 Implemented filtering for images and grabbing the contextual text from nearest parent 2024-07-21 15:03:17 +05:30
Aravind Karnam
9d0cafcfa6 fixed import error in model_loader.py 2024-07-21 14:55:58 +05:30
17 changed files with 495 additions and 138 deletions

CHANGELOG.md

@@ -1,5 +1,33 @@
 # Changelog
+## [v0.2.77] - 2024-08-04
+Significant improvements in text processing and performance:
+- 🚀 **Dependency reduction**: Removed dependency on spaCy model for text chunk labeling in cosine extraction strategy.
+- 🤖 **Transformer upgrade**: Implemented text sequence classification using a transformer model for labeling text chunks.
+- **Performance enhancement**: Improved model loading speed due to removal of spaCy dependency.
+- 🔧 **Future-proofing**: Laid groundwork for potential complete removal of spaCy dependency in future versions.
+These changes address issue #68 and provide a foundation for faster, more efficient text processing in Crawl4AI.
+## [v0.2.76] - 2024-08-02
+Major improvements in functionality, performance, and cross-platform compatibility! 🚀
+- 🐳 **Docker enhancements**: Significantly improved Dockerfile for easy installation on Linux, Mac, and Windows.
+- 🌐 **Official Docker Hub image**: Launched our first official image on Docker Hub for streamlined deployment.
+- 🔧 **Selenium upgrade**: Removed dependency on ChromeDriver, now using Selenium's built-in capabilities for better compatibility.
+- 🖼️ **Image description**: Implemented ability to generate textual descriptions for extracted images from web pages.
+- **Performance boost**: Various improvements to enhance overall speed and performance.
+A big shoutout to our amazing community contributors:
+- [@aravindkarnam](https://github.com/aravindkarnam) for developing the textual description extraction feature.
+- [@FractalMind](https://github.com/FractalMind) for creating the first official Docker Hub image and fixing Dockerfile errors.
+- [@ketonkss4](https://github.com/ketonkss4) for identifying Selenium's new capabilities, helping us reduce dependencies.
+Your contributions are driving Crawl4AI forward! 🙌
 ## [v0.2.75] - 2024-07-19
 Minor improvements for a more maintainable codebase:

CONTRIBUTORS.md (new file)

@@ -0,0 +1,31 @@
# Contributors to Crawl4AI
We would like to thank the following people for their contributions to Crawl4AI:
## Core Team
- [Unclecode](https://github.com/unclecode) - Project Creator and Main Developer
- [Nasrin](https://github.com/ntohidi) - Project Manager and Developer
## Community Contributors
- [Aravind Karnam](https://github.com/aravindkarnam) - Developed textual description extraction feature
- [FractalMind](https://github.com/FractalMind) - Created the first official Docker Hub image and fixed Dockerfile errors
- [ketonkss4](https://github.com/ketonkss4) - Identified Selenium's new capabilities, helping reduce dependencies
## Other Contributors
- [Gokhan](https://github.com/gkhngyk)
- [Shiv Kumar](https://github.com/shivkumar0757)
- [QIN2DIM](https://github.com/QIN2DIM)
## Acknowledgements
We also want to thank all the users who have reported bugs, suggested features, or helped in any other way to make Crawl4AI better.
---
If you've contributed to Crawl4AI and your name isn't on this list, please [open a pull request](https://github.com/unclecode/crawl4ai/pulls) with your name, link, and contribution, and we'll review it promptly.
Thank you all for your contributions!

Dockerfile

@@ -4,6 +4,9 @@ FROM python:3.10-slim-bookworm
 # Set the working directory in the container
 WORKDIR /usr/src/app
+# Define build arguments
+ARG INSTALL_OPTION=default
 # Install build dependencies
 RUN apt-get update && \
     apt-get install -y --no-install-recommends \
@@ -21,33 +24,39 @@ RUN apt-get update && \
 # Copy the application code
 COPY . .
-# Install Crawl4AI using the local setup.py (which will use the default installation)
-RUN pip install --no-cache-dir .
+# Install Crawl4AI using the local setup.py with the specified option
+# and download models only for torch, transformer, or all options
+RUN if [ "$INSTALL_OPTION" = "all" ]; then \
+        pip install --no-cache-dir .[all] && \
+        crawl4ai-download-models; \
+    elif [ "$INSTALL_OPTION" = "torch" ]; then \
+        pip install --no-cache-dir .[torch] && \
+        crawl4ai-download-models; \
+    elif [ "$INSTALL_OPTION" = "transformer" ]; then \
+        pip install --no-cache-dir .[transformer] && \
+        crawl4ai-download-models; \
+    else \
+        pip install --no-cache-dir .; \
+    fi
-# Install Google Chrome and ChromeDriver
+# Install Google Chrome
 RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
     sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list' && \
     apt-get update && \
-    apt-get install -y google-chrome-stable && \
-    wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip && \
-    unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
+    apt-get install -y google-chrome-stable
-# Set environment to use Chrome and ChromeDriver properly
+# Set environment to use Chrome properly
 ENV CHROME_BIN=/usr/bin/google-chrome \
-    CHROMEDRIVER=/usr/local/bin/chromedriver \
     DISPLAY=:99 \
     DBUS_SESSION_BUS_ADDRESS=/dev/null \
     PYTHONUNBUFFERED=1
 # Ensure the PATH environment variable includes the location of the installed packages
-ENV PATH /opt/conda/bin:$PATH
+ENV PATH=/opt/conda/bin:$PATH
 # Make port 80 available to the world outside this container
 EXPOSE 80
-# Download models call cli "crawl4ai-download-models"
-# RUN crawl4ai-download-models
 # Install mkdocs
 RUN pip install mkdocs mkdocs-terminal

README.md

@@ -1,4 +1,4 @@
-# Crawl4AI v0.2.75 🕷️🤖
+# Crawl4AI v0.2.77 🕷️🤖
 [![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers)
 [![GitHub Forks](https://img.shields.io/github/forks/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/network/members)
@@ -8,13 +8,29 @@
 Crawl4AI simplifies web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. 🆓🌐
+#### [v0.2.77] - 2024-08-02
+Major improvements in functionality, performance, and cross-platform compatibility! 🚀
+- 🐳 **Docker enhancements**:
+  - Significantly improved Dockerfile for easy installation on Linux, Mac, and Windows.
+- 🌐 **Official Docker Hub image**:
+  - Launched our first official image on Docker Hub for streamlined deployment (unclecode/crawl4ai).
+- 🔧 **Selenium upgrade**:
+  - Removed dependency on ChromeDriver, now using Selenium's built-in capabilities for better compatibility.
+- 🖼️ **Image description**:
+  - Implemented ability to generate textual descriptions for extracted images from web pages.
+- **Performance boost**:
+  - Various improvements to enhance overall speed and performance.
 ## Try it Now!
-- Use as REST API: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1zODYjhemJ5bUmYceWpVoBMVpd0ofzNBZ?usp=sharing)
-- Use as Python library: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)
+✨ Play around with this [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1sJPAmeLj5PMrg2VgOwMJ2ubGIcK0cJeX?usp=sharing)
+This colab is a bit outdated. I'm updating it with the newest versions, so please refer to the website for the latest documentation. This will be updated in a few days, and you'll have the latest version here. Thank you so much.
 ✨ visit our [Documentation Website](https://crawl4ai.com/mkdocs/)
+✨ Check [Demo](https://crawl4ai.com/mkdocs/demo)
 ## Features ✨
 - 🆓 Completely free and open-source
@@ -32,6 +48,18 @@ Crawl4AI simplifies web crawling and data extraction, making it accessible for l
 - 🎯 CSS selector support
 - 📝 Passes instructions/keywords to refine extraction
+# Crawl4AI
+## 🌟 Shoutout to Contributors of v0.2.77!
+A big thank you to the amazing contributors who've made this release possible:
+- [@aravindkarnam](https://github.com/aravindkarnam) for the new image description feature
+- [@FractalMind](https://github.com/FractalMind) for our official Docker Hub image
+- [@ketonkss4](https://github.com/ketonkss4) for helping streamline our Selenium setup
+Your contributions are driving Crawl4AI forward! 🚀
 ## Cool Examples 🚀
 ### Quick Start
@@ -53,13 +81,32 @@ print(result.markdown)
 ```
 ## How to install 🛠
+### Using pip 🐍
 ```bash
 virtualenv venv
 source venv/bin/activate
 pip install "crawl4ai @ git+https://github.com/unclecode/crawl4ai.git"
 ```
-### Speed-First Design 🚀
+### Using Docker 🐳
+```bash
+# For Mac users (M1/M2)
+# docker build --platform linux/amd64 -t crawl4ai .
+docker build -t crawl4ai .
+docker run -d -p 8000:80 crawl4ai
+```
+### Using Docker Hub 🐳
+```bash
+docker pull unclecode/crawl4ai:latest
+docker run -d -p 8000:80 unclecode/crawl4ai:latest
+```
+## Speed-First Design 🚀
 Perhaps the most important design principle for this library is speed. We need to ensure it can handle many links and resources in parallel as quickly as possible. By combining this speed with fast LLMs like Groq, the results will be truly amazing.
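The speed-first principle above boils down to overlapping many I/O-bound page fetches; a minimal, library-agnostic sketch with a stubbed fetch function (Crawl4AI's real crawler strategies are not used here):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> str:
    # Stand-in for a real page fetch; network I/O is where threads overlap.
    return f"<html>content of {url}</html>"

urls = [f"https://example.com/page/{i}" for i in range(8)]

# Crawl many links concurrently: while one thread waits on the network,
# the others keep working.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, urls))

print(len(results))  # 8
```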

config.py

@@ -27,3 +27,14 @@ WORD_TOKEN_RATE = 1.3
 # Threshold for the minimum number of words in an HTML tag to be considered
 MIN_WORD_THRESHOLD = 1
+IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD = 1
+# Threshold for the image extraction - Range is 1 to 6
+# Images are scored with a point-based system, to filter based on usefulness.
+# One point is assigned to an image for each of the following aspects:
+# - Either height or width exceeds 150px
+# - Image size is greater than 10Kb
+# - The alt property is set
+# - Image format is jpg, png or webp
+# - Image is in the first half of the total images extracted from the page
+IMAGE_SCORE_THRESHOLD = 2
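The point system those comments describe can be sketched as a standalone function (a simplified illustration; the project's actual scorer, `score_image_for_usefulness`, also parses CSS units such as `%` and `vh`):

```python
import os

IMAGE_SCORE_THRESHOLD = 2  # from config.py above

def score_image(height_px, width_px, size_bytes, alt, src, index, total):
    """Point-based usefulness score mirroring the criteria in config.py.
    One point each for: height > 150px, width > 150px, size > 10 KB,
    a non-empty alt attribute, a jpg/png/webp extension, and appearing
    in the first half of the images on the page."""
    score = 0
    if height_px and height_px > 150:
        score += 1
    if width_px and width_px > 150:
        score += 1
    if size_bytes > 10_000:
        score += 1
    if alt:
        score += 1
    if os.path.splitext(src)[1].lower().lstrip('.') in ('jpg', 'png', 'webp'):
        score += 1
    if index / total < 0.5:
        score += 1
    return score

# A large, early, captioned PNG clears the threshold; a tiny icon does not.
hero = score_image(400, 600, 52_000, "team photo", "hero.png", 0, 10)
icon = score_image(16, 16, 800, "", "icon.gif", 9, 10)
print(hero > IMAGE_SCORE_THRESHOLD, icon > IMAGE_SCORE_THRESHOLD)  # True False
```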

crawler_strategy.py

@@ -6,9 +6,9 @@ from selenium.webdriver.support.ui import WebDriverWait
 from selenium.webdriver.support import expected_conditions as EC
 from selenium.webdriver.chrome.options import Options
 from selenium.common.exceptions import InvalidArgumentException, WebDriverException
-from selenium.webdriver.chrome.service import Service as ChromeService
-from webdriver_manager.chrome import ChromeDriverManager
-from urllib3.exceptions import MaxRetryError
+# from selenium.webdriver.chrome.service import Service as ChromeService
+# from webdriver_manager.chrome import ChromeDriverManager
+# from urllib3.exceptions import MaxRetryError
 from .config import *
 import logging, time
@@ -137,10 +137,15 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
 # self.service = Service(chromedriver_autoinstaller.install())
-chromedriver_path = ChromeDriverManager().install()
-self.service = Service(chromedriver_path)
-self.service.log_path = "NUL"
-self.driver = webdriver.Chrome(service=self.service, options=self.options)
+# chromedriver_path = ChromeDriverManager().install()
+# self.service = Service(chromedriver_path)
+# self.service.log_path = "NUL"
+# self.driver = webdriver.Chrome(service=self.service, options=self.options)
+# Use selenium-manager (built into Selenium 4.10.0+)
+self.service = Service()
+self.driver = webdriver.Chrome(options=self.options)
 self.driver = self.execute_hook('on_driver_created', self.driver)
 if kwargs.get("cookies"):
@@ -292,7 +297,7 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
 # Open the screenshot with PIL
 image = Image.open(BytesIO(screenshot))
-# Convert image to RGB mode
+# Convert image to RGB mode (this will handle both RGB and RGBA images)
 rgb_image = image.convert('RGB')
 # Convert to JPEG and compress
@@ -304,11 +309,6 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
 print(f"[LOG] 📸 Screenshot taken and converted to base64")
 return img_base64
-except Exception as e:
-    if self.verbose:
-        print(f"[ERROR] Failed to take screenshot: {str(e)}")
-    return ""
 except Exception as e:
 error_message = sanitize_input_encode(f"Failed to take screenshot: {str(e)}")
 print(error_message)
@@ -321,7 +321,7 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
 try:
 font = ImageFont.truetype("arial.ttf", 40)
 except IOError:
-font = ImageFont.load_default(size=40)
+font = ImageFont.load_default()
 # Define text color and wrap the text
 text_color = (255, 255, 255)

extraction_strategy.py

@@ -9,6 +9,7 @@ from .utils import *
 from functools import partial
 from .model_loader import *
 import math
+import numpy as np
 class ExtractionStrategy(ABC):
@@ -248,6 +249,9 @@ class CosineStrategy(ExtractionStrategy):
 self.get_embedding_method = "direct"
 self.device = get_device()
+import torch
+self.device = torch.device('cpu')
 self.default_batch_size = calculate_batch_size(self.device)
 if self.verbose:
@@ -260,7 +264,9 @@ class CosineStrategy(ExtractionStrategy):
 # else:
 self.tokenizer, self.model = load_bge_small_en_v1_5()
+self.model.to(self.device)
 self.model.eval()
 self.get_embedding_method = "batch"
 self.buffer_embeddings = np.array([])
@@ -282,7 +288,7 @@ class CosineStrategy(ExtractionStrategy):
 if self.verbose:
 print(f"[LOG] Loading Multilabel Classifier for {self.device.type} device.")
-self.nlp, self.device = load_text_multilabel_classifier()
+self.nlp, _ = load_text_multilabel_classifier()
 # self.default_batch_size = 16 if self.device.type == 'cpu' else 64
 if self.verbose:
@@ -453,21 +459,21 @@ class CosineStrategy(ExtractionStrategy):
 if self.verbose:
 print(f"[LOG] 🚀 Assign tags using {self.device}")
-if self.device.type in ["gpu", "cuda", "mps"]:
+if self.device.type in ["gpu", "cuda", "mps", "cpu"]:
 labels = self.nlp([cluster['content'] for cluster in cluster_list])
 for cluster, label in zip(cluster_list, labels):
 cluster['tags'] = label
-elif self.device == "cpu":
-    # Process the text with the loaded model
-    texts = [cluster['content'] for cluster in cluster_list]
-    # Batch process texts
-    docs = self.nlp.pipe(texts, disable=["tagger", "parser", "ner", "lemmatizer"])
-    for doc, cluster in zip(docs, cluster_list):
-        tok_k = self.top_k
-        top_categories = sorted(doc.cats.items(), key=lambda x: x[1], reverse=True)[:tok_k]
-        cluster['tags'] = [cat for cat, _ in top_categories]
+# elif self.device.type == "cpu":
+#     # Process the text with the loaded model
+#     texts = [cluster['content'] for cluster in cluster_list]
+#     # Batch process texts
+#     docs = self.nlp.pipe(texts, disable=["tagger", "parser", "ner", "lemmatizer"])
+#     for doc, cluster in zip(docs, cluster_list):
+#         tok_k = self.top_k
+#         top_categories = sorted(doc.cats.items(), key=lambda x: x[1], reverse=True)[:tok_k]
+#         cluster['tags'] = [cat for cat, _ in top_categories]
 # for cluster in cluster_list:
 # doc = self.nlp(cluster['content'])

model_loader.py

@@ -3,9 +3,10 @@ from pathlib import Path
 import subprocess, os
 import shutil
 import tarfile
-from crawl4ai.config import MODEL_REPO_BRANCH
+from .model_loader import *
 import argparse
 import urllib.request
+from crawl4ai.config import MODEL_REPO_BRANCH
 __location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
 @lru_cache()
@@ -141,13 +142,14 @@ def load_text_multilabel_classifier():
 from scipy.special import expit
 import torch
-# Check for available device: CUDA, MPS (for Apple Silicon), or CPU
-if torch.cuda.is_available():
-    device = torch.device("cuda")
-elif torch.backends.mps.is_available():
-    device = torch.device("mps")
-else:
-    return load_spacy_model(), torch.device("cpu")
+# # Check for available device: CUDA, MPS (for Apple Silicon), or CPU
+# if torch.cuda.is_available():
+#     device = torch.device("cuda")
+# elif torch.backends.mps.is_available():
+#     device = torch.device("mps")
+# else:
+#     device = torch.device("cpu")
+#     # return load_spacy_model(), torch.device("cpu")
 MODEL = "cardiffnlp/tweet-topic-21-multi"
@@ -192,51 +194,61 @@ def load_spacy_model():
 import spacy
 name = "models/reuters"
 home_folder = get_home_folder()
-model_folder = os.path.join(home_folder, name)
+model_folder = Path(home_folder) / name
 # Check if the model directory already exists
-if not (Path(model_folder).exists() and any(Path(model_folder).iterdir())):
+if not (model_folder.exists() and any(model_folder.iterdir())):
 repo_url = "https://github.com/unclecode/crawl4ai.git"
-# branch = "main"
 branch = MODEL_REPO_BRANCH
-repo_folder = os.path.join(home_folder, "crawl4ai")
-model_folder = os.path.join(home_folder, name)
+repo_folder = Path(home_folder) / "crawl4ai"
-# print("[LOG] ⏬ Downloading Spacy model for the first time...")
+print("[LOG] ⏬ Downloading Spacy model for the first time...")
 # Remove existing repo folder if it exists
-if Path(repo_folder).exists():
-    shutil.rmtree(repo_folder)
-    shutil.rmtree(model_folder)
+if repo_folder.exists():
+    try:
+        shutil.rmtree(repo_folder)
+        if model_folder.exists():
+            shutil.rmtree(model_folder)
+    except PermissionError:
+        print("[WARNING] Unable to remove existing folders. Please manually delete the following folders and try again:")
+        print(f"- {repo_folder}")
+        print(f"- {model_folder}")
+        return None
 try:
 # Clone the repository
 subprocess.run(
-    ["git", "clone", "-b", branch, repo_url, repo_folder],
+    ["git", "clone", "-b", branch, repo_url, str(repo_folder)],
     stdout=subprocess.DEVNULL,
     stderr=subprocess.DEVNULL,
     check=True
 )
 # Create the models directory if it doesn't exist
-models_folder = os.path.join(home_folder, "models")
-os.makedirs(models_folder, exist_ok=True)
+models_folder = Path(home_folder) / "models"
+models_folder.mkdir(parents=True, exist_ok=True)
 # Copy the reuters model folder to the models directory
-source_folder = os.path.join(repo_folder, "models/reuters")
+source_folder = repo_folder / "models" / "reuters"
 shutil.copytree(source_folder, model_folder)
 # Remove the cloned repository
 shutil.rmtree(repo_folder)
-# Print completion message
-# print("[LOG] ✅ Spacy Model downloaded successfully")
+print("[LOG] ✅ Spacy Model downloaded successfully")
 except subprocess.CalledProcessError as e:
 print(f"An error occurred while cloning the repository: {e}")
+return None
 except Exception as e:
 print(f"An error occurred: {e}")
+return None
-return spacy.load(model_folder)
+try:
+    return spacy.load(str(model_folder))
+except Exception as e:
+    print(f"Error loading spacy model: {e}")
+    return None
 def download_all_models(remove_existing=False):
 """Download all models required for Crawl4AI."""
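The multi-label classifier above imports `expit` (the logistic sigmoid) because multi-label tagging treats each label as an independent yes/no decision: a sigmoid is applied per logit rather than a softmax across labels. A dependency-light sketch with stubbed logits (real scores would come from the transformer model, e.g. cardiffnlp/tweet-topic-21-multi):

```python
import math

def expit(x: float) -> float:
    # Logistic sigmoid, equivalent to scipy.special.expit for scalars
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits, label_names, threshold=0.5):
    """Keep every label whose sigmoid probability clears the threshold."""
    return [name for name, z in zip(label_names, logits) if expit(z) >= threshold]

# Stubbed logits standing in for the transformer classifier's output
print(predict_labels([2.0, -1.5, 0.3], ["news", "sports", "science"]))
# ['news', 'science']
```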

utils.py

@@ -11,6 +11,9 @@ from .prompts import PROMPT_EXTRACT_BLOCKS
from .config import * from .config import *
from pathlib import Path from pathlib import Path
from typing import Dict, Any from typing import Dict, Any
from urllib.parse import urljoin
import requests
from requests.exceptions import InvalidSchema
class InvalidCSSSelectorError(Exception): class InvalidCSSSelectorError(Exception):
pass pass
@@ -436,6 +439,8 @@ def get_content_of_website_optimized(url: str, html: str, word_count_threshold:
soup = BeautifulSoup(html, 'html.parser') soup = BeautifulSoup(html, 'html.parser')
body = soup.body body = soup.body
image_description_min_word_threshold = kwargs.get('image_description_min_word_threshold', IMAGE_DESCRIPTION_MIN_WORD_THRESHOLD)
if css_selector: if css_selector:
selected_elements = body.select(css_selector) selected_elements = body.select(css_selector)
if not selected_elements: if not selected_elements:
@@ -447,6 +452,103 @@ def get_content_of_website_optimized(url: str, html: str, word_count_threshold:
links = {'internal': [], 'external': []} links = {'internal': [], 'external': []}
media = {'images': [], 'videos': [], 'audios': []} media = {'images': [], 'videos': [], 'audios': []}
def process_image(img, url, index, total_images):
#Check if an image has valid display and inside undesired html elements
def is_valid_image(img, parent, parent_classes):
style = img.get('style', '')
src = img.get('src', '')
classes_to_check = ['button', 'icon', 'logo']
tags_to_check = ['button', 'input']
return all([
'display:none' not in style,
src,
not any(s in var for var in [src, img.get('alt', ''), *parent_classes] for s in classes_to_check),
parent.name not in tags_to_check
])
#Score an image for it's usefulness
def score_image_for_usefulness(img, base_url, index, images_count):
# Function to parse image height/width value and units
def parse_dimension(dimension):
if dimension:
match = re.match(r"(\d+)(\D*)", dimension)
if match:
number = int(match.group(1))
unit = match.group(2) or 'px' # Default unit is 'px' if not specified
return number, unit
return None, None
# Fetch image file metadata to extract size and extension
def fetch_image_file_size(img, base_url):
#If src is relative path construct full URL, if not it may be CDN URL
img_url = urljoin(base_url,img.get('src'))
try:
response = requests.head(img_url)
if response.status_code == 200:
return response.headers.get('Content-Length',None)
else:
print(f"Failed to retrieve file size for {img_url}")
return None
except InvalidSchema as e:
return None
finally:
return
image_height = img.get('height')
height_value, height_unit = parse_dimension(image_height)
image_width = img.get('width')
width_value, width_unit = parse_dimension(image_width)
image_size = 0 #int(fetch_image_file_size(img,base_url) or 0)
image_format = os.path.splitext(img.get('src',''))[1].lower()
# Remove . from format
image_format = image_format.strip('.')
score = 0
if height_value:
if height_unit == 'px' and height_value > 150:
score += 1
if height_unit in ['%','vh','vmin','vmax'] and height_value >30:
score += 1
if width_value:
if width_unit == 'px' and width_value > 150:
score += 1
if width_unit in ['%','vh','vmin','vmax'] and width_value >30:
score += 1
if image_size > 10000:
score += 1
if img.get('alt') != '':
score+=1
if any(image_format==format for format in ['jpg','png','webp']):
score+=1
if index/images_count<0.5:
score+=1
return score
# Extract meaningful text for images from closest parent
def find_closest_parent_with_useful_text(tag):
current_tag = tag
while current_tag:
current_tag = current_tag.parent
# Get the text content of the parent tag
if current_tag:
text_content = current_tag.get_text(separator=' ',strip=True)
# Check if the text content has at least word_count_threshold
if len(text_content.split()) >= image_description_min_word_threshold:
return text_content
return None
if not is_valid_image(img, img.parent, img.parent.get('class', [])):
    return None
score = score_image_for_usefulness(img, url, index, total_images)
if score <= IMAGE_SCORE_THRESHOLD:
    return None
return {
    'src': img.get('src', ''),
    'alt': img.get('alt', ''),
    'desc': find_closest_parent_with_useful_text(img),
    'score': score,
    'type': 'image'
}
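The walk-up search that fills the `desc` field can be shown on a toy tree. The `Node` class below is a simplified stand-in for BeautifulSoup tags (a real tag's `get_text()` also aggregates descendant text), used only to illustrate the climb-until-enough-words idea:

```python
class Node:
    # Minimal stand-in for a BeautifulSoup tag: a text payload and a parent link
    def __init__(self, text='', parent=None):
        self.parent = parent
        self._text = text

    def get_text(self):
        return self._text

def find_closest_parent_with_useful_text(tag, min_words=3):
    # Climb the ancestor chain until a node carries at least min_words words
    current = tag
    while current:
        current = current.parent
        if current:
            text = current.get_text()
            if len(text.split()) >= min_words:
                return text
    return None

root = Node('A caption with plenty of surrounding words')
figure = Node('short', parent=root)   # too few words, the search keeps climbing
img = Node('', parent=figure)
print(find_closest_parent_with_useful_text(img))  # → the root's caption text
```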
def process_element(element: element.PageElement) -> bool:
    try:
        if isinstance(element, NavigableString):
@@ -471,11 +573,6 @@ def get_content_of_website_optimized(url: str, html: str, word_count_threshold:
            keep_element = True
        elif element.name == 'img':
            return True  # Always keep image elements
        elif element.name in ['video', 'audio']:
@@ -518,6 +615,14 @@ def get_content_of_website_optimized(url: str, html: str, word_count_threshold:
        print('Error processing element:', str(e))
        return False
# Process images by filtering and extracting contextual text from the page
imgs = body.find_all('img')
media['images'] = [
    result for result in
    (process_image(img, url, i, len(imgs)) for i, img in enumerate(imgs))
    if result is not None
]
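The filtering pattern above — map each item through a function that returns either a record or `None`, then keep only the non-`None` results — is a compact filter-map. A standalone sketch, with a hypothetical `process` standing in for `process_image`:

```python
def process(item):
    # Stand-in for process_image: return a record for items that pass,
    # None for items that should be dropped
    return {'value': item, 'double': item * 2} if item % 2 == 0 else None

items = [1, 2, 3, 4, 5, 6]
results = [
    result for result in
    (process(item) for item in items)   # generator: items processed lazily, one at a time
    if result is not None               # drop the rejected ones in the same pass
]
print([r['value'] for r in results])  # → [2, 4, 6]
```

Using a generator for the inner pass means each item is processed and filtered in one sweep, without building an intermediate list of `None`s.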
process_element(body)

def flatten_nested_elements(node):

View File

@@ -16,40 +16,23 @@ warnings.filterwarnings("ignore", message='Field "model_name" has conflict with
class WebCrawler:
    def __init__(self, crawler_strategy: CrawlerStrategy = None, always_by_pass_cache: bool = False, verbose: bool = False):
        self.crawler_strategy = crawler_strategy or LocalSeleniumCrawlerStrategy(verbose=verbose)
        self.always_by_pass_cache = always_by_pass_cache
        # Create the .crawl4ai folder in the user's home directory if it doesn't exist
        self.crawl4ai_folder = os.path.join(Path.home(), ".crawl4ai")
        os.makedirs(self.crawl4ai_folder, exist_ok=True)
        os.makedirs(f"{self.crawl4ai_folder}/cache", exist_ok=True)
        init_db()
        self.ready = False

    def warmup(self):
        print("[LOG] 🌤️ Warming up the WebCrawler")
        self.run(
            url='https://google.com/',
            word_count_threshold=5,
            extraction_strategy=NoExtractionStrategy(),
            bypass_cache=False,
            verbose=False
        )
        self.ready = True
        print("[LOG] 🌞 WebCrawler is ready to crawl")
@@ -139,12 +122,8 @@ class WebCrawler:
        if not isinstance(chunking_strategy, ChunkingStrategy):
            raise ValueError("Unsupported chunking strategy")
        word_count_threshold = max(word_count_threshold, 0)
        cached = None
        screenshot_data = None
        extracted_content = None
@@ -169,7 +148,7 @@ class WebCrawler:
        html = sanitize_input_encode(self.crawler_strategy.crawl(url, **kwargs))
        t2 = time.time()
        if verbose:
            print(f"[LOG] 🚀 Crawling done for {url}, success: {bool(html)}, time taken: {t2 - t1:.2f} seconds")
        if screenshot:
            screenshot_data = self.crawler_strategy.take_screenshot()
@@ -200,13 +179,10 @@ class WebCrawler:
        t = time.time()
        # Extract content from HTML
        try:
            t1 = time.time()
            result = get_content_of_website_optimized(url, html, word_count_threshold, css_selector=css_selector, only_text=kwargs.get("only_text", False))
            if verbose:
                print(f"[LOG] 🚀 Content extracted for {url}, success: True, time taken: {time.time() - t1:.2f} seconds")
            if result is None:
                raise ValueError(f"Failed to extract content from the website: {url}")
@@ -228,7 +204,7 @@ class WebCrawler:
            extracted_content = json.dumps(extracted_content, indent=4, default=str)
            if verbose:
                print(f"[LOG] 🚀 Extraction done for {url}, time taken: {time.time() - t:.2f} seconds.")
            screenshot = None if not screenshot else screenshot

View File

@@ -21,7 +21,8 @@ result = crawler.run(
    url=url,
    word_count_threshold=1,
    extraction_strategy=LLMExtractionStrategy(
        # provider="openai/gpt-4o", api_token=os.getenv('OPENAI_API_KEY'),
        provider="groq/llama-3.1-70b-versatile", api_token=os.getenv('GROQ_API_KEY'),
        schema=OpenAIModelFee.model_json_schema(),
        extraction_type="schema",
        instruction="From the crawled content, extract all mentioned model names along with their "\

View File

@@ -1,5 +1,33 @@
# Changelog
## [v0.2.77] - 2024-08-04
Significant improvements in text processing and performance:
- 🚀 **Dependency reduction**: Removed dependency on spaCy model for text chunk labeling in cosine extraction strategy.
- 🤖 **Transformer upgrade**: Implemented text sequence classification using a transformer model for labeling text chunks.
- **Performance enhancement**: Improved model loading speed due to removal of spaCy dependency.
- 🔧 **Future-proofing**: Laid groundwork for potential complete removal of spaCy dependency in future versions.
These changes address issue #68 and provide a foundation for faster, more efficient text processing in Crawl4AI.
## [v0.2.76] - 2024-08-02
Major improvements in functionality, performance, and cross-platform compatibility! 🚀
- 🐳 **Docker enhancements**: Significantly improved Dockerfile for easy installation on Linux, Mac, and Windows.
- 🌐 **Official Docker Hub image**: Launched our first official image on Docker Hub for streamlined deployment.
- 🔧 **Selenium upgrade**: Removed dependency on ChromeDriver, now using Selenium's built-in capabilities for better compatibility.
- 🖼️ **Image description**: Implemented ability to generate textual descriptions for extracted images from web pages.
- **Performance boost**: Various improvements to enhance overall speed and performance.
A big shoutout to our amazing community contributors:
- [@aravindkarnam](https://github.com/aravindkarnam) for developing the textual description extraction feature.
- [@FractalMind](https://github.com/FractalMind) for creating the first official Docker Hub image and fixing Dockerfile errors.
- [@ketonkss4](https://github.com/ketonkss4) for identifying Selenium's new capabilities, helping us reduce dependencies.
Your contributions are driving Crawl4AI forward! 🙌
## [v0.2.75] - 2024-07-19
Minor improvements for a more maintainable codebase:

View File

@@ -1,4 +1,4 @@
# Crawl4AI v0.2.77
Welcome to the official documentation for Crawl4AI! 🕷️🤖 Crawl4AI is an open-source Python library designed to simplify web crawling and extract useful information from web pages. This documentation will guide you through the features, usage, and customization of Crawl4AI.

View File

@@ -2,11 +2,13 @@
There are three ways to use Crawl4AI:
1. As a library (Recommended).
2. As a local server (Docker) or using the REST API.
3. As a local server (Docker) using the pre-built image from Docker Hub.
## Option 1: Library Installation
Crawl4AI offers flexible installation options to suit various use cases. Choose the option that best fits your needs:
@@ -57,23 +59,135 @@ Use this if you plan to modify the source code.
crawl4ai-download-models
```
## Option 2: Using Docker for Local Server
Crawl4AI can be run as a local server using Docker. The Dockerfile supports different installation options to cater to various use cases. Here's how you can build and run the Docker image:
### Default Installation
The default installation includes the basic Crawl4AI package without additional dependencies or pre-downloaded models.
```bash
# For Mac users (M1/M2)
docker build --platform linux/amd64 -t crawl4ai .
# For other users
docker build -t crawl4ai .
# Run the container
docker run -d -p 8000:80 crawl4ai
```
### Full Installation (All Dependencies and Models)
This option installs all dependencies and downloads the models.
```bash
# For Mac users (M1/M2)
docker build --platform linux/amd64 --build-arg INSTALL_OPTION=all -t crawl4ai:all .
# For other users
docker build --build-arg INSTALL_OPTION=all -t crawl4ai:all .
# Run the container
docker run -d -p 8000:80 crawl4ai:all
```
### Torch Installation
This option installs torch-related dependencies and downloads the models.
```bash
# For Mac users (M1/M2)
docker build --platform linux/amd64 --build-arg INSTALL_OPTION=torch -t crawl4ai:torch .
# For other users
docker build --build-arg INSTALL_OPTION=torch -t crawl4ai:torch .
# Run the container
docker run -d -p 8000:80 crawl4ai:torch
```
### Transformer Installation
This option installs transformer-related dependencies and downloads the models.
```bash
# For Mac users (M1/M2)
docker build --platform linux/amd64 --build-arg INSTALL_OPTION=transformer -t crawl4ai:transformer .
# For other users
docker build --build-arg INSTALL_OPTION=transformer -t crawl4ai:transformer .
# Run the container
docker run -d -p 8000:80 crawl4ai:transformer
```
### Notes
- The `--platform linux/amd64` flag is necessary for Mac users with M1/M2 chips to ensure compatibility.
- The `-t` flag tags the image with a name (and optionally a tag in the 'name:tag' format).
- The `-d` flag runs the container in detached mode.
- The `-p 8000:80` flag maps port 8000 on the host to port 80 in the container.
Choose the installation option that best suits your needs. The default installation is suitable for basic usage, while the other options provide additional capabilities for more advanced use cases.
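The build/run pairs above can also be captured in a compose file for repeatable local deployments. A minimal sketch, assuming one of the image tags built above (the service name and restart policy are illustrative, not part of the project):

```yaml
services:
  crawl4ai:
    image: crawl4ai          # or crawl4ai:all / crawl4ai:torch / crawl4ai:transformer
    ports:
      - "8000:80"            # host port 8000 -> container port 80
    restart: unless-stopped  # restart the server if it crashes
```

With this file in place, `docker compose up -d` replaces the `docker run` invocation.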
## Option 3: Using the Pre-built Image from Docker Hub
You can use pre-built Crawl4AI images from Docker Hub, which are available for all platforms (Mac, Linux, Windows). We have official images as well as a community-contributed image (thanks to [@FractalMind](https://github.com/FractalMind)):
### Default Installation
```bash
# Pull the image
docker pull unclecode/crawl4ai:latest
# Run the container
docker run -d -p 8000:80 unclecode/crawl4ai:latest
```
### Community-Contributed Image
A stable version of Crawl4AI is also available, created and maintained by a community member:
```bash
# Pull the community-contributed image
docker pull ryser007/crawl4ai:stable
# Run the container
docker run -d -p 8000:80 ryser007/crawl4ai:stable
```
We'd like to express our gratitude to GitHub user [@FractalMind](https://github.com/FractalMind) for creating and maintaining this stable version of the Crawl4AI Docker image. Community contributions like this are invaluable to the project.
### Testing the Installation
After running the container, you can test if it's working correctly:
- On Mac and Linux:
```bash
curl http://localhost:8000
```
- On Windows (PowerShell):
```powershell
Invoke-WebRequest -Uri http://localhost:8000
```
Or open a web browser and navigate to http://localhost:8000

View File

@@ -20,18 +20,6 @@ Crawl4AI is designed to simplify the process of crawling web pages and extractin
- **🎯 CSS Selector Support**: Extract specific content using CSS selectors.
- **📝 Instruction/Keyword Refinement**: Pass instructions or keywords to refine the extraction process.
## Recent Changes (v0.2.5) 🌟
- **New Hooks**: Added six important hooks to the crawler:
- 🟢 `on_driver_created`: Called when the driver is ready for initializations.
- 🔵 `before_get_url`: Called right before Selenium fetches the URL.
- 🟣 `after_get_url`: Called after Selenium fetches the URL.
- 🟠 `before_return_html`: Called when the data is parsed and ready.
- 🟡 `on_user_agent_updated`: Called when the user changes the user agent, causing the driver to reinitialize.
- **New Example**: Added an example in [`quickstart.py`](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/quickstart.py) in the example folder under the docs.
- **Improved Semantic Context**: Maintaining the semantic context of inline tags (e.g., abbreviation, DEL, INS) for improved LLM-friendliness.
- **Dockerfile Update**: Updated Dockerfile to ensure compatibility across multiple platforms.
Check the [Changelog](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md) for more details.
## Power and Simplicity of Crawl4AI 🚀

View File

@@ -12,12 +12,13 @@ python-dotenv==1.0.1
requests==2.32.3
rich==13.7.1
scikit-learn==1.5.0
selenium==4.23.1
uvicorn==0.30.1
transformers==4.41.2
# webdriver-manager==4.0.1
# chromedriver-autoinstaller==0.6.4
torch==2.3.1
onnxruntime==1.18.0
tokenizers==0.19.1
pillow==10.3.0
slowapi==0.1.9

View File

@@ -25,7 +25,7 @@ transformer_requirements = [req for req in requirements if req.startswith(("tran
setup(
    name="Crawl4AI",
    version="0.2.77",
    description="🔥🕷️ Crawl4AI: Open-source LLM Friendly Web Crawler & Scrapper",
    long_description=open("README.md", encoding="utf-8").read(),
    long_description_content_type="text/markdown",