diff --git a/README.md b/README.md
index 90922299..6740b4e2 100644
--- a/README.md
+++ b/README.md
@@ -8,16 +8,90 @@
Crawl4AI is a powerful, free web crawling service designed to extract useful information from web pages and make it accessible for large language models (LLMs) and AI applications. ππ
-## π§ Work in Progress π·ββοΈ
+## Recent Changes
-- π§ Separate Crawl and Extract Semantic Chunk: Enhancing efficiency in large-scale tasks.
-- π Colab Integration: Exploring integration with Google Colab for easy experimentation.
-- π― XPath and CSS Selector Support: Adding support for selective retrieval of specific elements.
-- π· Image Captioning: Incorporating image captioning capabilities to extract descriptions from images.
-- πΎ Embedding Vector Data: Generate and store embedding data for each crawled website.
-- π Semantic Search Engine: Building a semantic search engine that fetches content, performs vector search similarity, and generates labeled chunk data based on user queries and URLs.
+- π 10x faster!!
+- π Execute custom JavaScript before crawling!
+- π€ Colab friendly!
+- π Chunking strategies: topic-based, regex, sentence, and more!
+- π§ Extraction strategies: cosine clustering, LLM, and more!
+- π― CSS selector support
+- π Pass instructions/keywords to refine extraction
+
+## Power and Simplicity of Crawl4AI π
+
+Crawl4AI makes even complex web crawling tasks simple and intuitive. Below is an example of how you can execute JavaScript, filter data using keywords, and use a CSS selector to extract specific content, all in one go!
+
+**Example Task:**
+
+1. Execute custom JavaScript to click a "Load More" button.
+2. Filter the data to include only content related to "technology".
+3. Use a CSS selector to extract only paragraphs (`<p>` tags).
+
+**Example Code:**
+
+```python
+# Import necessary modules
+import os
+
+from crawl4ai import WebCrawler
+from crawl4ai.chunking_strategy import *
+from crawl4ai.extraction_strategy import *
+from crawl4ai.crawler_strategy import *
+
+# Define the JavaScript code to click the "Load More" button
+js_code = """
+const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
+loadMoreButton && loadMoreButton.click();
+"""
+
+# Define the crawling strategy
+crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
+
+# Create the WebCrawler instance with the defined strategy and warm it up
+crawler = WebCrawler(crawler_strategy=crawler_strategy)
+crawler.warmup()
+
+# Run the crawler with keyword filtering (cosine extraction strategy)
+result = crawler.run(
+ url="https://www.example.com",
+ extraction_strategy=CosineStrategy(
+ semantic_filter="technology",
+ ),
+)
+
+# Run the crawler with the LLM extraction strategy and a CSS selector
+result = crawler.run(
+ url="https://www.example.com",
+ extraction_strategy=LLMExtractionStrategy(
+ provider="openai/gpt-4o",
+ api_token=os.getenv('OPENAI_API_KEY'),
+ instruction="Extract only content related to technology"
+ ),
+ css_selector="p"
+)
+
+# Display the extracted result
+print(result)
+```
+
+With Crawl4AI, you can perform advanced web crawling and data extraction tasks with just a few lines of code. This example demonstrates how you can harness the power of Crawl4AI to simplify your workflow and get the data you need efficiently.
+
+---
+
+*Continue reading to learn about the features, installation process, usage, and more.*
+
+
+## Table of Contents
+
+1. [Features](#features)
+2. [Installation](#installation)
+3. [REST API/Local Server](#using-the-local-server-or-rest-api)
+4. [Python Library Usage](#usage)
+5. [Parameters](#parameters)
+6. [Chunking Strategies](#chunking-strategies)
+7. [Extraction Strategies](#extraction-strategies)
+8. [Contributing](#contributing)
+9. [License](#license)
+10. [Contact](#contact)
-For more details, refer to the [CHANGELOG.md](https://github.com/unclecode/crawl4ai/edit/main/CHANGELOG.md) file.
## Features β¨
@@ -26,26 +100,28 @@ For more details, refer to the [CHANGELOG.md](https://github.com/unclecode/crawl
- π Supports crawling multiple URLs simultaneously
- π Replace media tags with ALT.
- π Completely free to use and open-source
-
-## Getting Started π
-
-To get started with Crawl4AI, simply visit our web application at [https://crawl4ai.uccode.io](https://crawl4ai.uccode.io) (Available now!) and enter the URL(s) you want to crawl. The application will process the URLs and provide you with the extracted data in various formats.
+- π Execute custom JavaScript before crawling
+- π Chunking strategies: topic-based, regex, sentence, and more
+- π§ Extraction strategies: cosine clustering, LLM, and more
+- π― CSS selector support
+- π Pass instructions/keywords to refine extraction
## Installation π»
-There are two ways to use Crawl4AI: as a library in your Python projects or as a standalone local server.
-
-### Using Crawl4AI as a Library π
+There are three ways to use Crawl4AI:
+1. As a library (Recommended)
+2. As a local server (Docker) or using the REST API
+3. As a Google Colab notebook: [Open In Colab](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)
To install Crawl4AI as a library, follow these steps:
1. Install the package from GitHub:
-```sh
+```bash
pip install git+https://github.com/unclecode/crawl4ai.git
```
-Alternatively, you can clone the repository and install the package locally:
-```sh
+2. Alternatively, you can clone the repository and install the package locally:
+```bash
virtualenv venv
source venv/bin/activate
git clone https://github.com/unclecode/crawl4ai.git
@@ -53,133 +129,193 @@ cd crawl4ai
pip install -e .
```
-2. Import the necessary modules in your Python script:
-```python
-from crawl4ai.web_crawler import WebCrawler
-from crawl4ai.chunking_strategy import *
-from crawl4ai.extraction_strategy import *
-import os
-
-crawler = WebCrawler()
-crawler.warmup() # IMPORTANT: Warmup the engine before running the first crawl
-
-# Single page crawl
-result = crawler.run(
- url='https://www.nbcnews.com/business',
- word_count_threshold=5, # Minimum word count for a HTML tag to be considered as a worthy block
- chunking_strategy= RegexChunking( patterns = ["\n\n"]), # Default is RegexChunking
- extraction_strategy= CosineStrategy(word_count_threshold=20, max_dist=0.2, linkage_method='ward', top_k=3) # Default is CosineStrategy
- # extraction_strategy= LLMExtractionStrategy(provider= "openai/gpt-4o", api_token = os.getenv('OPENAI_API_KEY')),
- bypass_cache=False,
- extract_blocks =True, # Whether to extract semantical blocks of text from the HTML
- css_selector = "", # Eg: "div.article-body"
- verbose=True,
- include_raw_html=True, # Whether to include the raw HTML content in the response
-)
-
-print(result.model_dump())
-```
-
-Running for the first time will download the chrome driver for selenium. Also creates a SQLite database file `crawler_data.db` in the current directory. This file will store the crawled data for future reference.
-
-The response model is a `CrawlResponse` object that contains the following attributes:
-```python
-class CrawlResult(BaseModel):
- url: str
- html: str
- success: bool
- cleaned_html: str = None
- markdown: str = None
- parsed_json: str = None
- error_message: str = None
-```
-
-### Running Crawl4AI as a Local Server π
-
-To run Crawl4AI as a standalone local server, follow these steps:
-
-1. Clone the repository:
-```sh
-git clone https://github.com/unclecode/crawl4ai.git
-```
-
-2. Navigate to the project directory:
-```sh
-cd crawl4ai
-```
-
-3. Open `crawler/config.py` and set your favorite LLM provider and API token.
-
-4. Build the Docker image:
-```sh
-docker build -t crawl4ai .
-```
- For Mac users, use the following command instead:
-```sh
-docker build --platform linux/amd64 -t crawl4ai .
-```
-
-5. Run the Docker container:
-```sh
+3. Use Docker to build and run the local server:
+```bash
+docker build -t crawl4ai .
+# For Mac users
+# docker build --platform linux/amd64 -t crawl4ai .
docker run -d -p 8000:80 crawl4ai
```
-6. Access the application at `http://localhost:8000`.
+For more information about how to run Crawl4AI as a local server, please refer to the [GitHub repository](https://github.com/unclecode/crawl4ai).
-- CURL Example:
-Set the api_token to your OpenAI API key or any other provider you are using.
-```sh
-curl -X POST -H "Content-Type: application/json" -d '{"urls":["https://techcrunch.com/"],"provider_model":"openai/gpt-3.5-turbo","api_token":"your_api_token","include_raw_html":true,"forced":false,"extract_blocks_flag":false,"word_count_threshold":10}' http://localhost:8000/crawl
-```
-Set `extract_blocks_flag` to True to enable the LLM to generate semantically clustered chunks and return them as JSON. Depending on the model and data size, this may take up to 1 minute. Without this setting, it will take between 5 to 20 seconds.
+## Using the Local Server or REST API π
-- Python Example:
-```python
-import requests
-import os
+You can also use Crawl4AI through the REST API. This method allows you to send HTTP requests to the Crawl4AI server and receive structured data in response. The base URL for the API is `https://crawl4ai.com/crawl`. If you run the local server, you can use `http://localhost:8000/crawl` (the port depends on your Docker configuration).
-data = {
- "urls": [
- "https://www.nbcnews.com/business"
- ],
- "provider_model": "groq/llama3-70b-8192",
- "include_raw_html": true,
- "bypass_cache": false,
- "extract_blocks": true,
- "word_count_threshold": 10,
- "extraction_strategy": "CosineStrategy",
- "chunking_strategy": "RegexChunking",
- "css_selector": "",
- "verbose": true
+### Example Usage
+
+To use the REST API, send a POST request to `https://crawl4ai.com/crawl` with the following parameters in the request body.
+
+**Example Request:**
+```json
+{
+ "urls": ["https://www.example.com"],
+ "include_raw_html": false,
+ "bypass_cache": true,
+ "word_count_threshold": 5,
+ "extraction_strategy": "CosineStrategy",
+ "chunking_strategy": "RegexChunking",
+ "css_selector": "p",
+ "verbose": true,
+ "extraction_strategy_args": {
+ "semantic_filter": "finance economy and stock market",
+ "word_count_threshold": 20,
+ "max_dist": 0.2,
+ "linkage_method": "ward",
+ "top_k": 3
+ },
+ "chunking_strategy_args": {
+ "patterns": ["\n\n"]
+ }
}
-
-response = requests.post("http://crawl4ai.uccode.io/crawl", json=data) # OR http://localhost:8000 if your run locally
-
-if response.status_code == 200:
- result = response.json()["results"][0]
- print("Parsed JSON:")
- print(result["parsed_json"])
- print("\nCleaned HTML:")
- print(result["cleaned_html"])
- print("\nMarkdown:")
- print(result["markdown"])
-else:
- print("Error:", response.status_code, response.text)
```
-This code sends a POST request to the Crawl4AI server running on localhost, specifying the target URL (`http://crawl4ai.uccode.io/crawl`) and the desired options. The server processes the request and returns the crawled data in JSON format.
+**Example Response:**
+```json
+{
+ "status": "success",
+ "data": [
+ {
+ "url": "https://www.example.com",
+ "extracted_content": "...",
+ "html": "...",
+ "markdown": "...",
+ "metadata": {...}
+ }
+ ]
+}
+```
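+
+A minimal sketch of sending the request body above from Python with the `requests` library, assuming the local Docker server from the Installation section is listening on port 8000 (field names follow the examples above):
+
+```python
+import requests
+
+# Request body copied from the example above.
+payload = {
+    "urls": ["https://www.example.com"],
+    "include_raw_html": False,
+    "bypass_cache": True,
+    "word_count_threshold": 5,
+    "extraction_strategy": "CosineStrategy",
+    "chunking_strategy": "RegexChunking",
+    "css_selector": "p",
+    "verbose": True,
+    "extraction_strategy_args": {"semantic_filter": "finance economy and stock market"},
+    "chunking_strategy_args": {"patterns": ["\n\n"]},
+}
+
+response = requests.post("http://localhost:8000/crawl", json=payload)
+response.raise_for_status()
+
+# Field names follow the example response above.
+for item in response.json()["data"]:
+    print(item["url"])
+    print(item["markdown"][:300])
+```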
-The response from the server includes the semantical clusters, cleaned HTML, and markdown representations of the crawled webpage. You can access and use this data in your Python application as needed.
+For more information about the available parameters and their descriptions, refer to the [Parameters](#parameters) section.
-Make sure to replace `"http://localhost:8000/crawl"` with the appropriate server URL if your Crawl4AI server is running on a different host or port.
-Choose the approach that best suits your needs. If you want to integrate Crawl4AI into your existing Python projects, installing it as a library is the way to go. If you prefer to run Crawl4AI as a standalone service and interact with it via API endpoints, running it as a local server using Docker is the recommended approach.
+## Python Library Usage π
-**Make sure to check the config.py tp set required environment variables.**
+### Quickstart Guide
-That's it! You can now integrate Crawl4AI into your Python projects and leverage its web crawling capabilities. π
+Create an instance of WebCrawler and call the `warmup()` function.
+```python
+from crawl4ai import WebCrawler
+
+crawler = WebCrawler()
+crawler.warmup()
+```
-## π Parameters
+### Understanding 'bypass_cache' and 'include_raw_html' parameters
+
+First crawl (caches the result):
+```python
+result = crawler.run(url="https://www.nbcnews.com/business")
+```
+
+Second crawl (force a fresh crawl):
+```python
+result = crawler.run(url="https://www.nbcnews.com/business", bypass_cache=True)
+```
+ π‘ Don't forget to set `bypass_cache` to True if you want to try different strategies for the same URL; otherwise, the cached result will be returned. You can also set `always_by_pass_cache` to True in the constructor to always bypass the cache, as shown in the sketch below.
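+
+A minimal sketch of that constructor flag (the same flag appears in the JavaScript example later in this guide):
+```python
+# Always bypass the cache for every run of this crawler instance.
+crawler = WebCrawler(always_by_pass_cache=True)
+crawler.warmup()
+```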
+
+Crawl result without raw HTML content:
+```python
+result = crawler.run(url="https://www.nbcnews.com/business", include_raw_html=False)
+```
+
+### Adding a chunking strategy: RegexChunking
+
+Using RegexChunking:
+```python
+result = crawler.run(
+ url="https://www.nbcnews.com/business",
+ chunking_strategy=RegexChunking(patterns=["\n\n"])
+)
+```
+
+Using NlpSentenceChunking:
+```python
+result = crawler.run(
+ url="https://www.nbcnews.com/business",
+ chunking_strategy=NlpSentenceChunking()
+)
+```
+
+### Extraction strategy: CosineStrategy
+
+Using CosineStrategy:
+```python
+result = crawler.run(
+ url="https://www.nbcnews.com/business",
+ extraction_strategy=CosineStrategy(
+ semantic_filter="",
+ word_count_threshold=10,
+ max_dist=0.2,
+ linkage_method="ward",
+ top_k=3
+ )
+)
+```
+
+You can set `semantic_filter` to filter relevant documents before clustering. Documents are filtered based on their cosine similarity to the keyword filter embedding.
+
+```python
+result = crawler.run(
+ url="https://www.nbcnews.com/business",
+ extraction_strategy=CosineStrategy(
+ semantic_filter="finance economy and stock market",
+ word_count_threshold=10,
+ max_dist=0.2,
+ linkage_method="ward",
+ top_k=3
+ )
+)
+```
+
+### Using LLMExtractionStrategy
+
+Without instructions:
+```python
+result = crawler.run(
+ url="https://www.nbcnews.com/business",
+ extraction_strategy=LLMExtractionStrategy(
+ provider="openai/gpt-4o",
+ api_token=os.getenv('OPENAI_API_KEY')
+ )
+)
+```
+
+With instructions:
+```python
+result = crawler.run(
+ url="https://www.nbcnews.com/business",
+ extraction_strategy=LLMExtractionStrategy(
+ provider="openai/gpt-4o",
+ api_token=os.getenv('OPENAI_API_KEY'),
+ instruction="I am interested in only financial news"
+ )
+)
+```
+
+### Targeted extraction using CSS selector
+
+Extract only H2 tags:
+```python
+result = crawler.run(
+ url="https://www.nbcnews.com/business",
+ css_selector="h2"
+)
+```
+
+### Passing JavaScript code to click the 'Load More' button
+
+Using JavaScript to click the 'Load More' button:
+```python
+js_code = """
+const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More'));
+loadMoreButton && loadMoreButton.click();
+"""
+crawler_strategy = LocalSeleniumCrawlerStrategy(js_code=js_code)
+crawler = WebCrawler(crawler_strategy=crawler_strategy, always_by_pass_cache=True)
+result = crawler.run(url="https://www.nbcnews.com/business")
+```
+
+## Parameters π
| Parameter | Description | Required | Default Value |
|-----------------------|-------------------------------------------------------------------------------------------------------|----------|---------------------|
@@ -193,49 +329,134 @@ That's it! You can now integrate Crawl4AI into your Python projects and leverage
| `css_selector` | The CSS selector to target specific parts of the HTML for extraction. | No | `None` |
| `verbose` | Whether to enable verbose logging. | No | `true` |
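+
+As an illustrative sketch (values are examples, not defaults), several of these parameters can be combined in a single `crawler.run()` call, reusing the `crawler` instance from the quickstart above:
+
+```python
+result = crawler.run(
+    url="https://www.nbcnews.com/business",
+    word_count_threshold=10,                                   # minimum words per block
+    chunking_strategy=RegexChunking(patterns=["\n\n"]),
+    extraction_strategy=CosineStrategy(semantic_filter="technology"),
+    css_selector="p",                                          # keep only paragraph content
+    bypass_cache=True,                                         # force a fresh crawl
+    include_raw_html=False,
+    verbose=True,
+)
+```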
-## π οΈ Configuration
-Crawl4AI allows you to configure various parameters and settings in the `crawler/config.py` file. Here's an example of how you can adjust the parameters:
+## Chunking Strategies π
+### RegexChunking
+
+`RegexChunking` is a text chunking strategy that splits a given text into smaller parts using regular expressions. This is useful for preparing large texts for processing by language models, ensuring they are divided into manageable segments.
+
+**Constructor Parameters:**
+- `patterns` (list, optional): A list of regular expression patterns used to split the text. Default is to split by double newlines (`['\n\n']`).
+
+**Example usage:**
```python
-import os
-from dotenv import load_dotenv
-
-load_dotenv() # Load environment variables from .env file
-
-# Default provider, ONLY used when the extraction strategy is LLMExtractionStrategy
-DEFAULT_PROVIDER = "openai/gpt-4-turbo"
-
-# Provider-model dictionary, ONLY used when the extraction strategy is LLMExtractionStrategy
-PROVIDER_MODELS = {
- "ollama/llama3": "no-token-needed", # Any model from Ollama no need for API token
- "groq/llama3-70b-8192": os.getenv("GROQ_API_KEY"),
- "groq/llama3-8b-8192": os.getenv("GROQ_API_KEY"),
- "openai/gpt-3.5-turbo": os.getenv("OPENAI_API_KEY"),
- "openai/gpt-4-turbo": os.getenv("OPENAI_API_KEY"),
- "openai/gpt-4o": os.getenv("OPENAI_API_KEY"),
- "anthropic/claude-3-haiku-20240307": os.getenv("ANTHROPIC_API_KEY"),
- "anthropic/claude-3-opus-20240229": os.getenv("ANTHROPIC_API_KEY"),
- "anthropic/claude-3-sonnet-20240229": os.getenv("ANTHROPIC_API_KEY"),
-}
-
-# Chunk token threshold
-CHUNK_TOKEN_THRESHOLD = 1000
-# Threshold for the minimum number of words in an HTML tag to be considered
-MIN_WORD_THRESHOLD = 5
+chunker = RegexChunking(patterns=[r'\n\n', r'\. '])
+chunks = chunker.chunk("This is a sample text. It will be split into chunks.")
```
-In the `crawler/config.py` file, you can:
+### NlpSentenceChunking
-REMEBER: You only need to set the API keys for the providers in case you choose LLMExtractStrategy as the extraction strategy. If you choose CosineStrategy, you don't need to set the API keys.
+`NlpSentenceChunking` uses a natural language processing model to chunk a given text into sentences. This approach leverages SpaCy to accurately split text based on sentence boundaries.
-- Set the default provider using the `DEFAULT_PROVIDER` variable.
-- Add or modify the provider-model dictionary (`PROVIDER_MODELS`) to include your desired providers and their corresponding API keys. Crawl4AI supports various providers such as Groq, OpenAI, Anthropic, and more. You can add any provider supported by LiteLLM, as well as Ollama.
-- Adjust the `CHUNK_TOKEN_THRESHOLD` value to control the splitting of web content into chunks for parallel processing. A higher value means fewer chunks and faster processing, but it may cause issues with weaker LLMs during extraction.
-- Modify the `MIN_WORD_THRESHOLD` value to set the minimum number of words an HTML tag must contain to be considered a meaningful block.
+**Constructor Parameters:**
+- `model` (str, optional): The SpaCy model to use for sentence detection. Default is `'en_core_web_sm'`.
-Make sure to set the appropriate API keys for each provider in the `PROVIDER_MODELS` dictionary. You can either directly provide the API key or use environment variables to store them securely.
+**Example usage:**
+```python
+chunker = NlpSentenceChunking(model='en_core_web_sm')
+chunks = chunker.chunk("This is a sample text. It will be split into sentences.")
+```
-Remember to update the `crawler/config.py` file based on your specific requirements and the providers you want to use with Crawl4AI.
+### TopicSegmentationChunking
+
+`TopicSegmentationChunking` uses the TextTiling algorithm to segment a given text into topic-based chunks. This method identifies thematic boundaries in the text.
+
+**Constructor Parameters:**
+- `num_keywords` (int, optional): The number of keywords to extract for each topic segment. Default is `3`.
+
+**Example usage:**
+```python
+chunker = TopicSegmentationChunking(num_keywords=3)
+chunks = chunker.chunk("This is a sample text. It will be split into topic-based segments.")
+```
+
+### FixedLengthWordChunking
+
+`FixedLengthWordChunking` splits a given text into chunks of fixed length, based on the number of words.
+
+**Constructor Parameters:**
+- `chunk_size` (int, optional): The number of words in each chunk. Default is `100`.
+
+**Example usage:**
+```python
+chunker = FixedLengthWordChunking(chunk_size=100)
+chunks = chunker.chunk("This is a sample text. It will be split into fixed-length word chunks.")
+```
+
+### SlidingWindowChunking
+
+`SlidingWindowChunking` uses a sliding window approach to chunk a given text. Each chunk has a fixed length, and the window slides by a specified step size.
+
+**Constructor Parameters:**
+- `window_size` (int, optional): The number of words in each chunk. Default is `100`.
+- `step` (int, optional): The number of words to slide the window. Default is `50`.
+
+**Example usage:**
+```python
+chunker = SlidingWindowChunking(window_size=100, step=50)
+chunks = chunker.chunk("This is a sample text. It will be split using a sliding window approach.")
+```
+
+## Extraction Strategies π§
+
+### NoExtractionStrategy
+
+`NoExtractionStrategy` is a basic extraction strategy that returns the entire HTML content without any modification. It is useful for cases where no specific extraction is required.
+
+**Constructor Parameters:**
+None.
+
+**Example usage:**
+```python
+extractor = NoExtractionStrategy()
+extracted_content = extractor.extract(url, html)
+```
+
+### LLMExtractionStrategy
+
+`LLMExtractionStrategy` uses a Language Model (LLM) to extract meaningful blocks or chunks from the given HTML content. This strategy leverages an external provider for language model completions.
+
+**Constructor Parameters:**
+- `provider` (str, optional): The provider to use for the language model completions. Default is `DEFAULT_PROVIDER` (e.g., openai/gpt-4).
+- `api_token` (str, optional): The API token for the provider. If not provided, it will try to load from the environment variable `OPENAI_API_KEY`.
+- `instruction` (str, optional): An instruction to guide the LLM on how to perform the extraction. This allows users to specify the type of data they are interested in or set the tone of the response. Default is `None`.
+
+**Example usage:**
+```python
+extractor = LLMExtractionStrategy(provider='openai/gpt-4o', api_token='your_api_token', instruction='Extract only news about AI.')
+extracted_content = extractor.extract(url, html)
+```
+
+### CosineStrategy
+
+`CosineStrategy` uses hierarchical clustering based on cosine similarity to extract clusters of text from the given HTML content. This strategy is suitable for identifying related content sections.
+
+**Constructor Parameters:**
+- `semantic_filter` (str, optional): A string containing keywords for filtering relevant documents before clustering. If provided, documents are filtered based on their cosine similarity to the keyword filter embedding. Default is `None`.
+- `word_count_threshold` (int, optional): Minimum number of words per cluster. Default is `10`.
+- `max_dist` (float, optional): The maximum cophenetic distance on the dendrogram to form clusters. Default is `0.2`.
+- `linkage_method` (str, optional): The linkage method for hierarchical clustering. Default is `'ward'`.
+- `top_k` (int, optional): Number of top categories to extract. Default is `3`.
+- `model_name` (str, optional): The model name for embedding generation. Default is `'BAAI/bge-small-en-v1.5'`.
+
+**Example usage:**
+```python
+extractor = CosineStrategy(semantic_filter='artificial intelligence', word_count_threshold=10, max_dist=0.2, linkage_method='ward', top_k=3, model_name='BAAI/bge-small-en-v1.5')
+extracted_content = extractor.extract(url, html)
+```
+
+### TopicExtractionStrategy
+
+`TopicExtractionStrategy` uses the TextTiling algorithm to segment the HTML content into topics and extracts keywords for each segment. This strategy is useful for identifying and summarizing thematic content.
+
+**Constructor Parameters:**
+- `num_keywords` (int, optional): Number of keywords to represent each topic segment. Default is `3`.
+
+**Example usage:**
+```python
+extractor = TopicExtractionStrategy(num_keywords=3)
+extracted_content = extractor.extract(url, html)
+```
## Contributing π€
@@ -259,5 +480,6 @@ If you have any questions, suggestions, or feedback, please feel free to reach o
- GitHub: [unclecode](https://github.com/unclecode)
- Twitter: [@unclecode](https://twitter.com/unclecode)
+- Website: [crawl4ai.com](https://crawl4ai.com)
Let's work together to make the web more accessible and useful for AI applications! πͺππ€
diff --git a/crawl4ai/chunking_strategy.py b/crawl4ai/chunking_strategy.py
index d6f0e5d5..53e48c68 100644
--- a/crawl4ai/chunking_strategy.py
+++ b/crawl4ai/chunking_strategy.py
@@ -38,7 +38,12 @@ class RegexChunking(ChunkingStrategy):
class NlpSentenceChunking(ChunkingStrategy):
def __init__(self, model='en_core_web_sm'):
import spacy
- self.nlp = spacy.load(model)
+ try:
+ self.nlp = spacy.load(model)
+ except IOError:
+ spacy.cli.download("en_core_web_sm")
+ self.nlp = spacy.load(model)
+ # raise ImportError(f"Spacy model '{model}' not found. Please download the model using 'python -m spacy download {model}'")
def chunk(self, text: str) -> list:
doc = self.nlp(text)
diff --git a/crawl4ai/crawler_strategy.py b/crawl4ai/crawler_strategy.py
index 8d183e38..c1a06072 100644
--- a/crawl4ai/crawler_strategy.py
+++ b/crawl4ai/crawler_strategy.py
@@ -18,15 +18,16 @@ class CrawlerStrategy(ABC):
pass
class CloudCrawlerStrategy(CrawlerStrategy):
- def crawl(self, url: str, use_cached_html = False, css_selector = None) -> str:
+ def __init__(self, use_cached_html = False):
+ super().__init__()
+ self.use_cached_html = use_cached_html
+
+ def crawl(self, url: str) -> str:
data = {
"urls": [url],
- "provider_model": "",
- "api_token": "token",
"include_raw_html": True,
"forced": True,
"extract_blocks": False,
- "word_count_threshold": 10
}
response = requests.post("http://crawl4ai.uccode.io/crawl", json=data)
@@ -35,19 +36,24 @@ class CloudCrawlerStrategy(CrawlerStrategy):
return html
class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
- def __init__(self):
+ def __init__(self, use_cached_html=False, js_code=None):
+ super().__init__()
self.options = Options()
self.options.headless = True
self.options.add_argument("--no-sandbox")
self.options.add_argument("--disable-dev-shm-usage")
+ self.options.add_argument("--disable-gpu")
+ self.options.add_argument("--disable-extensions")
self.options.add_argument("--headless")
+ self.use_cached_html = use_cached_html
+ self.js_code = js_code
# chromedriver_autoinstaller.install()
self.service = Service(chromedriver_autoinstaller.install())
self.driver = webdriver.Chrome(service=self.service, options=self.options)
- def crawl(self, url: str, use_cached_html = False, css_selector = None) -> str:
- if use_cached_html:
+ def crawl(self, url: str) -> str:
+ if self.use_cached_html:
cache_file_path = os.path.join(Path.home(), ".crawl4ai", "cache", url.replace("/", "_"))
if os.path.exists(cache_file_path):
with open(cache_file_path, "r") as f:
@@ -58,6 +64,15 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
WebDriverWait(self.driver, 10).until(
EC.presence_of_all_elements_located((By.TAG_NAME, "html"))
)
+
+ # Execute JS code if provided
+ if self.js_code:
+ self.driver.execute_script(self.js_code)
+ # Optionally, wait for some condition after executing the JS code
+ WebDriverWait(self.driver, 10).until(
+ lambda driver: driver.execute_script("return document.readyState") == "complete"
+ )
+
html = self.driver.page_source
# Store in cache
diff --git a/crawl4ai/database.py b/crawl4ai/database.py
index b2169c84..391d3f4f 100644
--- a/crawl4ai/database.py
+++ b/crawl4ai/database.py
@@ -8,9 +8,9 @@ DB_PATH = os.path.join(Path.home(), ".crawl4ai")
os.makedirs(DB_PATH, exist_ok=True)
DB_PATH = os.path.join(DB_PATH, "crawl4ai.db")
-def init_db(db_path: str):
+def init_db():
global DB_PATH
- conn = sqlite3.connect(db_path)
+ conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute('''
CREATE TABLE IF NOT EXISTS crawled_data (
@@ -18,13 +18,12 @@ def init_db(db_path: str):
html TEXT,
cleaned_html TEXT,
markdown TEXT,
- parsed_json TEXT,
+ extracted_content TEXT,
success BOOLEAN
)
''')
conn.commit()
conn.close()
- DB_PATH = db_path
def check_db_path():
if not DB_PATH:
@@ -35,7 +34,7 @@ def get_cached_url(url: str) -> Optional[Tuple[str, str, str, str, str, bool]]:
try:
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
- cursor.execute('SELECT url, html, cleaned_html, markdown, parsed_json, success FROM crawled_data WHERE url = ?', (url,))
+ cursor.execute('SELECT url, html, cleaned_html, markdown, extracted_content, success FROM crawled_data WHERE url = ?', (url,))
result = cursor.fetchone()
conn.close()
return result
@@ -43,21 +42,21 @@ def get_cached_url(url: str) -> Optional[Tuple[str, str, str, str, str, bool]]:
print(f"Error retrieving cached URL: {e}")
return None
-def cache_url(url: str, html: str, cleaned_html: str, markdown: str, parsed_json: str, success: bool):
+def cache_url(url: str, html: str, cleaned_html: str, markdown: str, extracted_content: str, success: bool):
check_db_path()
try:
conn = sqlite3.connect(DB_PATH)
cursor = conn.cursor()
cursor.execute('''
- INSERT INTO crawled_data (url, html, cleaned_html, markdown, parsed_json, success)
+ INSERT INTO crawled_data (url, html, cleaned_html, markdown, extracted_content, success)
VALUES (?, ?, ?, ?, ?, ?)
ON CONFLICT(url) DO UPDATE SET
html = excluded.html,
cleaned_html = excluded.cleaned_html,
markdown = excluded.markdown,
- parsed_json = excluded.parsed_json,
+ extracted_content = excluded.extracted_content,
success = excluded.success
- ''', (url, html, cleaned_html, markdown, parsed_json, success))
+ ''', (url, html, cleaned_html, markdown, extracted_content, success))
conn.commit()
conn.close()
except Exception as e:
@@ -85,4 +84,15 @@ def clear_db():
conn.commit()
conn.close()
except Exception as e:
- print(f"Error clearing database: {e}")
\ No newline at end of file
+ print(f"Error clearing database: {e}")
+
+def flush_db():
+ check_db_path()
+ try:
+ conn = sqlite3.connect(DB_PATH)
+ cursor = conn.cursor()
+ cursor.execute('DROP TABLE crawled_data')
+ conn.commit()
+ conn.close()
+ except Exception as e:
+ print(f"Error flushing database: {e}")
\ No newline at end of file
diff --git a/crawl4ai/extraction_strategy.py b/crawl4ai/extraction_strategy.py
index 91e44e3f..c9074eb2 100644
--- a/crawl4ai/extraction_strategy.py
+++ b/crawl4ai/extraction_strategy.py
@@ -3,19 +3,20 @@ from typing import Any, List, Dict, Optional, Union
from concurrent.futures import ThreadPoolExecutor, as_completed
import json, time
# from optimum.intel import IPEXModel
-from .prompts import PROMPT_EXTRACT_BLOCKS
+from .prompts import PROMPT_EXTRACT_BLOCKS, PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION
from .config import *
from .utils import *
from functools import partial
from .model_loader import load_bert_base_uncased, load_bge_small_en_v1_5, load_spacy_model
-
-
+from transformers import pipeline
+from sklearn.metrics.pairwise import cosine_similarity
+import numpy as np
class ExtractionStrategy(ABC):
"""
Abstract base class for all extraction strategies.
"""
- def __init__(self):
+ def __init__(self, **kwargs):
self.DEL = "<|DEL|>"
self.name = self.__class__.__name__
@@ -38,12 +39,12 @@ class ExtractionStrategy(ABC):
:param sections: List of sections (strings) to process.
:return: A list of processed JSON blocks.
"""
- parsed_json = []
+ extracted_content = []
with ThreadPoolExecutor() as executor:
futures = [executor.submit(self.extract, url, section, **kwargs) for section in sections]
for future in as_completed(futures):
- parsed_json.extend(future.result())
- return parsed_json
+ extracted_content.extend(future.result())
+ return extracted_content
class NoExtractionStrategy(ExtractionStrategy):
def extract(self, url: str, html: str, *q, **kwargs) -> List[Dict[str, Any]]:
@@ -53,37 +54,41 @@ class NoExtractionStrategy(ExtractionStrategy):
return [{"index": i, "tags": [], "content": section} for i, section in enumerate(sections)]
class LLMExtractionStrategy(ExtractionStrategy):
- def __init__(self, provider: str = DEFAULT_PROVIDER, api_token: Optional[str] = None):
+ def __init__(self, provider: str = DEFAULT_PROVIDER, api_token: Optional[str] = None, instruction:str = None, **kwargs):
"""
Initialize the strategy with clustering parameters.
- :param word_count_threshold: Minimum number of words per cluster.
- :param max_dist: The maximum cophenetic distance on the dendrogram to form clusters.
- :param linkage_method: The linkage method for hierarchical clustering.
+ :param provider: The provider to use for extraction.
+ :param api_token: The API token for the provider.
+ :param instruction: The instruction to use for the LLM model.
"""
super().__init__()
self.provider = provider
self.api_token = api_token or PROVIDER_MODELS.get(provider, None) or os.getenv("OPENAI_API_KEY")
+ self.instruction = instruction
if not self.api_token:
raise ValueError("API token must be provided for LLMExtractionStrategy. Update the config.py or set OPENAI_API_KEY environment variable.")
- def extract(self, url: str, html: str) -> List[Dict[str, Any]]:
- print("[LOG] Extracting blocks from URL:", url)
+ def extract(self, url: str, ix:int, html: str) -> List[Dict[str, Any]]:
+ # print("[LOG] Extracting blocks from URL:", url)
+ print(f"[LOG] Call LLM for {url} - block index: {ix}")
variable_values = {
"URL": url,
"HTML": escape_json_string(sanitize_html(html)),
}
+
+ if self.instruction:
+ variable_values["REQUEST"] = self.instruction
- prompt_with_variables = PROMPT_EXTRACT_BLOCKS
+ prompt_with_variables = PROMPT_EXTRACT_BLOCKS if not self.instruction else PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION
for variable in variable_values:
prompt_with_variables = prompt_with_variables.replace(
"{" + variable + "}", variable_values[variable]
)
response = perform_completion_with_backoff(self.provider, prompt_with_variables, self.api_token)
-
try:
blocks = extract_xml_data(["blocks"], response.choices[0].message.content)['blocks']
blocks = json.loads(blocks)
@@ -101,7 +106,7 @@ class LLMExtractionStrategy(ExtractionStrategy):
"content": unparsed
})
- print("[LOG] Extracted", len(blocks), "blocks from URL:", url)
+ print("[LOG] Extracted", len(blocks), "blocks from URL:", url, "block index:", ix)
return blocks
def _merge(self, documents):
@@ -130,29 +135,30 @@ class LLMExtractionStrategy(ExtractionStrategy):
"""
merged_sections = self._merge(sections)
- parsed_json = []
+ extracted_content = []
if self.provider.startswith("groq/"):
# Sequential processing with a delay
- for section in merged_sections:
- parsed_json.extend(self.extract(url, section))
+ for ix, section in enumerate(merged_sections):
+ extracted_content.extend(self.extract(url, ix, section))
time.sleep(0.5) # 500 ms delay between each processing
else:
# Parallel processing using ThreadPoolExecutor
with ThreadPoolExecutor(max_workers=4) as executor:
extract_func = partial(self.extract, url)
- futures = [executor.submit(extract_func, section) for section in merged_sections]
+ futures = [executor.submit(extract_func, ix, section) for ix, section in enumerate(merged_sections)]
for future in as_completed(futures):
- parsed_json.extend(future.result())
+ extracted_content.extend(future.result())
- return parsed_json
+ return extracted_content
class CosineStrategy(ExtractionStrategy):
- def __init__(self, word_count_threshold=20, max_dist=0.2, linkage_method='ward', top_k=3, model_name = 'BAAI/bge-small-en-v1.5'):
+ def __init__(self, semantic_filter = None, word_count_threshold=10, max_dist=0.2, linkage_method='ward', top_k=3, model_name = 'BAAI/bge-small-en-v1.5', **kwargs):
"""
Initialize the strategy with clustering parameters.
+ :param semantic_filter: A keyword filter for document filtering.
:param word_count_threshold: Minimum number of words per cluster.
:param max_dist: The maximum cophenetic distance on the dendrogram to form clusters.
:param linkage_method: The linkage method for hierarchical clustering.
@@ -163,11 +169,14 @@ class CosineStrategy(ExtractionStrategy):
from transformers import AutoTokenizer, AutoModel
import spacy
+ self.semantic_filter = semantic_filter
self.word_count_threshold = word_count_threshold
self.max_dist = max_dist
self.linkage_method = linkage_method
self.top_k = top_k
self.timer = time.time()
+
+ self.buffer_embeddings = np.array([])
if model_name == "bert-base-uncased":
self.tokenizer, self.model = load_bert_base_uncased()
@@ -177,13 +186,42 @@ class CosineStrategy(ExtractionStrategy):
self.nlp = load_spacy_model()
print(f"[LOG] Model loaded {model_name}, models/reuters, took " + str(time.time() - self.timer) + " seconds")
- def get_embeddings(self, sentences: List[str]):
+
+ def filter_documents_embeddings(self, documents: List[str], semantic_filter: str, threshold: float = 0.5) -> List[str]:
+ """
+ Filter documents based on the cosine similarity of their embeddings with the semantic_filter embedding.
+
+ :param documents: List of text chunks (documents).
+ :param semantic_filter: A string containing the keywords for filtering.
+ :param threshold: Cosine similarity threshold for filtering documents.
+ :return: Filtered list of documents.
+ """
+ if not semantic_filter:
+ return documents
+ # Compute embedding for the keyword filter
+ query_embedding = self.get_embeddings([semantic_filter])[0]
+
+ # Compute embeddings for the documents
+ document_embeddings = self.get_embeddings(documents)
+
+ # Calculate cosine similarity between the query embedding and document embeddings
+ similarities = cosine_similarity([query_embedding], document_embeddings).flatten()
+
+ # Filter documents based on the similarity threshold
+ filtered_docs = [doc for doc, sim in zip(documents, similarities) if sim >= threshold]
+
+ return filtered_docs
+
+ def get_embeddings(self, sentences: List[str], bypass_buffer=True):
"""
Get BERT embeddings for a list of sentences.
:param sentences: List of text chunks (sentences).
:return: NumPy array of embeddings.
"""
+ # if self.buffer_embeddings.any() and not bypass_buffer:
+ # return self.buffer_embeddings
+
import torch
# Tokenize sentences and convert to tensor
encoded_input = self.tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
@@ -193,6 +231,7 @@ class CosineStrategy(ExtractionStrategy):
# Get embeddings from the last hidden state (mean pooling)
embeddings = model_output.last_hidden_state.mean(1)
+ self.buffer_embeddings = embeddings.numpy()
return embeddings.numpy()
def hierarchical_clustering(self, sentences: List[str]):
@@ -206,7 +245,7 @@ class CosineStrategy(ExtractionStrategy):
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist
self.timer = time.time()
- embeddings = self.get_embeddings(sentences)
+ embeddings = self.get_embeddings(sentences, bypass_buffer=False)
# print(f"[LOG] π Embeddings computed in {time.time() - self.timer:.2f} seconds")
# Compute pairwise cosine distances
distance_matrix = pdist(embeddings, 'cosine')
@@ -247,6 +286,12 @@ class CosineStrategy(ExtractionStrategy):
# Assume `html` is a list of text chunks for this strategy
t = time.time()
text_chunks = html.split(self.DEL) # Split by lines or paragraphs as needed
+
+ # Pre-filter documents using embeddings and semantic_filter
+ text_chunks = self.filter_documents_embeddings(text_chunks, self.semantic_filter)
+
+ if not text_chunks:
+ return []
# Perform clustering
labels = self.hierarchical_clustering(text_chunks)
@@ -290,7 +335,7 @@ class CosineStrategy(ExtractionStrategy):
return self.extract(url, self.DEL.join(sections), **kwargs)
class TopicExtractionStrategy(ExtractionStrategy):
- def __init__(self, num_keywords: int = 3):
+ def __init__(self, num_keywords: int = 3, **kwargs):
"""
Initialize the topic extraction strategy with parameters for topic segmentation.
@@ -358,7 +403,7 @@ class TopicExtractionStrategy(ExtractionStrategy):
return self.extract(url, self.DEL.join(sections), **kwargs)
class ContentSummarizationStrategy(ExtractionStrategy):
- def __init__(self, model_name: str = "sshleifer/distilbart-cnn-12-6"):
+ def __init__(self, model_name: str = "sshleifer/distilbart-cnn-12-6", **kwargs):
"""
Initialize the content summarization strategy with a specific model.
diff --git a/crawl4ai/models.py b/crawl4ai/models.py
index b9373f78..c2c2d61e 100644
--- a/crawl4ai/models.py
+++ b/crawl4ai/models.py
@@ -11,5 +11,6 @@ class CrawlResult(BaseModel):
success: bool
cleaned_html: str = None
markdown: str = None
- parsed_json: str = None
+ extracted_content: str = None
+ metadata: dict = None
error_message: str = None
\ No newline at end of file
diff --git a/crawl4ai/prompts.py b/crawl4ai/prompts.py
index be7091bc..e0498ccc 100644
--- a/crawl4ai/prompts.py
+++ b/crawl4ai/prompts.py
@@ -59,7 +59,7 @@ Please provide your output within
[hunk body not recoverable: the tag markup in this prompt, along with the diff of the project's HTML demo page that followed, was stripped during extraction]