Compare commits: image-filt...v0.2.73 (18 commits)
| SHA1 |
|---|
| 3ff2a0d0e7 |
| 3cd1b3719f |
| 9926eb9f95 |
| 3abaa82501 |
| 88d8cd8650 |
| a08f21d66c |
| d58286989c |
| b58af3349c |
| 940df4631f |
| 685706e0aa |
| 7b0979e134 |
| 61ae2de841 |
| 5b28eed2c0 |
| f8a11779fe |
| d11a83c232 |
| 3255c7a3fa |
| 4756d0a532 |
| 7ba2142363 |
.gitignore (vendored): 4 lines changed
@@ -185,4 +185,6 @@ local/
a.txt
.lambda_function.py
ec2*
ec2*

update_changelog.sh
CHANGELOG.md: 35 lines changed
@@ -1,5 +1,40 @@
# Changelog

## [v0.2.73] - 2024-07-03

💡 In this release, we've bumped the version to v0.2.73 and refreshed our documentation to ensure you have the best experience with our project.

* Added a "with-head" mode so that websites which require it can be crawled with a visible (non-headless) browser.
* Fixed installation issues in setup.py and the Dockerfile.
* Resolved multiple issues.

## [v0.2.72] - 2024-06-30

This release brings exciting updates and improvements to our project! 🎉

* 📚 **Documentation Updates**: Our documentation has been revamped to reflect the latest changes and additions.
* 🚀 **New Modes in setup.py**: We've added support for three new modes in setup.py: default, torch, and transformers. This enhances the project's flexibility and usability.
* 🐳 **Docker File Updates**: The Docker file has been updated to ensure seamless compatibility with the new modes and improvements.
* 🕷️ **Temporary Solution for Headless Crawling**: We've implemented a temporary solution to overcome issues with crawling websites in headless mode.

These changes aim to improve the overall user experience, provide more flexibility, and enhance the project's performance. We're thrilled to share these updates with you and look forward to continuing to evolve and improve our project!

## [0.2.71] - 2024-06-26

**Improved Error Handling and Performance** 🚧

* 🚫 Refactored `crawler_strategy.py` to handle exceptions and provide better error messages, making it more robust and reliable.
* 💻 Optimized the `get_content_of_website_optimized` function in `utils.py` for improved performance, reducing potential bottlenecks.
* 💻 Updated `utils.py` with the latest changes, ensuring consistency and accuracy.
* 🚫 Migrated to `ChromeDriverManager` to resolve Chrome driver download issues, providing a smoother user experience.

These changes focus on refining the existing codebase, resulting in a more stable, efficient, and user-friendly experience. With these improvements, you can expect fewer errors and better performance in the crawler strategy and utility functions.

## [0.2.71] - 2024-06-25

### Fixed
- Made the extraction function twice as fast.

## [0.2.6] - 2024-06-22

### Fixed
- Fix issue #19: Update Dockerfile to ensure compatibility across multiple platforms.
Dockerfile: 25 lines changed
@@ -18,12 +18,11 @@ RUN apt-get update && \
software-properties-common && \
rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt && \
pip install --no-cache-dir spacy torch onnxruntime uvicorn && \
python -m spacy download en_core_web_sm
# pip install --no-cache-dir spacy torch torchvision torchaudio onnxruntime uvicorn && \
# Copy the application code
COPY . .

# Install Crawl4AI using the local setup.py (which will use the default installation)
RUN pip install --no-cache-dir .

# Install Google Chrome and ChromeDriver
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \

@@ -33,9 +32,6 @@ RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key
wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip && \
unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/

# Copy the rest of the application code
COPY . .

# Set environment to use Chrome and ChromeDriver properly
ENV CHROME_BIN=/usr/bin/google-chrome \
CHROMEDRIVER=/usr/local/bin/chromedriver \

@@ -43,9 +39,6 @@ ENV CHROME_BIN=/usr/bin/google-chrome \
DBUS_SESSION_BUS_ADDRESS=/dev/null \
PYTHONUNBUFFERED=1

# pip install -e .[all]
RUN pip install --no-cache-dir -e .[all]

# Ensure the PATH environment variable includes the location of the installed packages
ENV PATH /opt/conda/bin:$PATH

@@ -53,15 +46,13 @@ ENV PATH /opt/conda/bin:$PATH
EXPOSE 80

# Download models call cli "crawl4ai-download-models"
RUN crawl4ai-download-models
# RUN crawl4ai-download-models

# Instakk mkdocs
# Install mkdocs
RUN pip install mkdocs mkdocs-terminal

# Call mkdocs to build the documentation
RUN mkdocs build

# Run uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80", "--workers", "4"]

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80", "--workers", "4"]
README.md: 11 lines changed
@@ -1,4 +1,4 @@
# Crawl4AI v0.2.7 🕷️🤖
# Crawl4AI v0.2.73 🕷️🤖

[](https://github.com/unclecode/crawl4ai/stargazers)
[](https://github.com/unclecode/crawl4ai/network/members)

@@ -11,7 +11,7 @@ Crawl4AI simplifies web crawling and data extraction, making it accessible for l
## Try it Now!

- Use as REST API: [](https://colab.research.google.com/drive/1zODYjhemJ5bUmYceWpVoBMVpd0ofzNBZ?usp=sharing)
- Use as Python library: [](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)
- Use as Python library: This collab is a bit outdated. I'm updating it with the newest versions, so please refer to the website for the latest documentation. This will be updated in a few days, and you'll have the latest version here. Thank you so much. [](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)

✨ visit our [Documentation Website](https://crawl4ai.com/mkdocs/)

@@ -52,6 +52,13 @@ result = crawler.run(url="https://www.nbcnews.com/business")
print(result.markdown)
```

## How to install 🛠
```bash
virtualenv venv
source venv/bin/activate
pip install "crawl4ai @ git+https://github.com/unclecode/crawl4ai.git"
```

### Speed-First Design 🚀

Perhaps the most important design principle for this library is speed. We need to ensure it can handle many links and resources in parallel as quickly as possible. By combining this speed with fast LLMs like Groq, the results will be truly amazing.
@@ -5,11 +5,12 @@ from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.options import Options
from selenium.common.exceptions import InvalidArgumentException
from selenium.common.exceptions import InvalidArgumentException, WebDriverException
from selenium.webdriver.chrome.service import Service as ChromeService
from webdriver_manager.chrome import ChromeDriverManager

import logging
from .config import *
import logging, time
import base64
from PIL import Image, ImageDraw, ImageFont
from io import BytesIO

@@ -83,14 +84,20 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
if kwargs.get("user_agent"):
self.options.add_argument("--user-agent=" + kwargs.get("user_agent"))
else:
# Set user agent
user_agent = kwargs.get("user_agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36")
self.options.add_argument(f"--user-agent={user_agent}")

self.options.add_argument("--no-sandbox")
self.options.add_argument(f"--user-agent={user_agent}")
self.options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36")

self.options.headless = kwargs.get("headless", True)
if self.options.headless:
self.options.add_argument("--headless")

self.options.add_argument("--disable-gpu")
self.options.add_argument("--window-size=1920,1080")
self.options.add_argument("--no-sandbox")
self.options.add_argument("--disable-dev-shm-usage")
self.options.add_argument("--disable-blink-features=AutomationControlled")

# self.options.add_argument("--disable-dev-shm-usage")
self.options.add_argument("--disable-gpu")
# self.options.add_argument("--disable-extensions")

@@ -171,8 +178,20 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
# Set extra HTTP headers
self.driver.execute_cdp_cmd('Network.setExtraHTTPHeaders', {'headers': headers})

def _ensure_page_load(self, max_checks=6, check_interval=0.01):
initial_length = len(self.driver.page_source)

for ix in range(max_checks):
# print(f"Checking page load: {ix}")
time.sleep(check_interval)
current_length = len(self.driver.page_source)

if current_length != initial_length:
break

def crawl(self, url: str) -> str:
return self.driver.page_source

def crawl(self, url: str, **kwargs) -> str:
# Create md5 hash of the URL
import hashlib
url_hash = hashlib.md5(url.encode()).hexdigest()

@@ -187,10 +206,30 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
self.driver = self.execute_hook('before_get_url', self.driver)
if self.verbose:
print(f"[LOG] 🕸️ Crawling {url} using LocalSeleniumCrawlerStrategy...")
self.driver.get(url)
WebDriverWait(self.driver, 10).until(
EC.presence_of_all_elements_located((By.TAG_NAME, "html"))
self.driver.get(url) #<html><head></head><body></body></html>

WebDriverWait(self.driver, 20).until(
lambda d: d.execute_script('return document.readyState') == 'complete'
)
WebDriverWait(self.driver, 10).until(
EC.presence_of_all_elements_located((By.TAG_NAME, "body"))
)
self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
html = self._ensure_page_load() # self.driver.page_source
can_not_be_done_headless = False # Look at my creativity for naming variables
# TODO: Very ugly way for now but it works
if not kwargs.get('bypass_headless', False) and html == "<html><head></head><body></body></html>":
print("[LOG] 🙌 Page could not be loaded in headless mode. Trying non-headless mode...")
can_not_be_done_headless = True
options = Options()
options.headless = False
# set window size very small
options.add_argument("--window-size=5,5")
driver = webdriver.Chrome(service=self.service, options=options)
driver.get(url)
html = driver.page_source
driver.quit()

self.driver = self.execute_hook('after_get_url', self.driver)

# Execute JS code if provided

@@ -207,7 +246,8 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):
lambda driver: driver.execute_script("return document.readyState") == "complete"
)

html = self.driver.page_source
if not can_not_be_done_headless:
html = self.driver.page_source
self.driver = self.execute_hook('before_return_html', self.driver, html)

# Store in cache

@@ -220,9 +260,18 @@ class LocalSeleniumCrawlerStrategy(CrawlerStrategy):

return html
except InvalidArgumentException:
raise InvalidArgumentException(f"Invalid URL {url}")
if not hasattr(e, 'msg'):
e.msg = str(e)
raise InvalidArgumentException(f"Failed to crawl {url}: {e.msg}")
except WebDriverException as e:
# If e does nlt have msg attribute create it and set it to str(e)
if not hasattr(e, 'msg'):
e.msg = str(e)
raise WebDriverException(f"Failed to crawl {url}: {e.msg}")
except Exception as e:
raise Exception(f"Failed to crawl {url}: {str(e)}")
if not hasattr(e, 'msg'):
e.msg = str(e)
raise Exception(f"Failed to crawl {url}: {e.msg}")

def take_screenshot(self) -> str:
try:
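The new `crawl(url, **kwargs)` path above retries with a visible (non-headless) browser whenever the headless load comes back as an empty document, unless `bypass_headless` is set. A rough caller-side sketch, assuming the top-level `WebCrawler` import and `warmup()` call used elsewhere in this project's docs, and relying on `run()` forwarding its kwargs to the strategy's `crawl()` (as the `WebCrawler` changes later in this compare show):

```python
# Hedged sketch: names and import path assumed from this compare, not a definitive API.
from crawl4ai import WebCrawler  # assumed import path

crawler = WebCrawler()
crawler.warmup()  # assumed one-time warm-up step; see the self.ready check in the WebCrawler diff

# bypass_headless=True skips the non-headless retry shown in the hunk above.
result = crawler.run(url="https://example.com", bypass_headless=True)
if result and result.success:
    print(result.html[:200])
else:
    print("Crawl failed")
```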
@@ -10,7 +10,7 @@ from functools import partial
from .model_loader import *
import math

import numpy as np

class ExtractionStrategy(ABC):
"""
Abstract base class for all extraction strategies.

@@ -101,7 +101,7 @@ class LLMExtractionStrategy(ExtractionStrategy):
prompt_with_variables = PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION

if self.extract_type == "schema":
variable_values["SCHEMA"] = json.dumps(self.schema)
variable_values["SCHEMA"] = json.dumps(self.schema, indent=2)
prompt_with_variables = PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION

for variable in variable_values:

@@ -109,7 +109,7 @@ class LLMExtractionStrategy(ExtractionStrategy):
"{" + variable + "}", variable_values[variable]
)

response = perform_completion_with_backoff(self.provider, prompt_with_variables, self.api_token)
response = perform_completion_with_backoff(self.provider, prompt_with_variables, self.api_token) # , json_response=self.extract_type == "schema")
try:
blocks = extract_xml_data(["blocks"], response.choices[0].message.content)['blocks']
blocks = json.loads(blocks)

@@ -196,6 +196,10 @@ class LLMExtractionStrategy(ExtractionStrategy):
time.sleep(0.5) # 500 ms delay between each processing
else:
# Parallel processing using ThreadPoolExecutor
# extract_func = partial(self.extract, url)
# for ix, section in enumerate(merged_sections):
# extracted_content.append(extract_func(ix, section))

with ThreadPoolExecutor(max_workers=4) as executor:
extract_func = partial(self.extract, url)
futures = [executor.submit(extract_func, ix, section) for ix, section in enumerate(merged_sections)]

@@ -219,6 +223,8 @@ class CosineStrategy(ExtractionStrategy):
"""
super().__init__()

import numpy as np

self.semantic_filter = semantic_filter
self.word_count_threshold = word_count_threshold
self.max_dist = max_dist
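The hunk above replaces the commented-out sequential loop with a `ThreadPoolExecutor`. A self-contained sketch of the same pattern, with a stand-in `extract` function rather than the strategy's real method:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial

def extract(url, ix, section):
    # Stand-in for LLMExtractionStrategy.extract(url, ix, section)
    return {"index": ix, "url": url, "content": section.upper()}

def extract_sections(url, merged_sections, max_workers=4):
    # Bind the url argument, then fan the sections out across worker threads.
    extract_func = partial(extract, url)
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [executor.submit(extract_func, ix, section)
                   for ix, section in enumerate(merged_sections)]
        return [future.result() for future in futures]

print(extract_sections("https://example.com", ["first section", "second section"]))
```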
@@ -186,7 +186,7 @@ The user has made the following request for what information to extract from the
Please carefully read the URL content and the user's request. If the user provided a desired JSON schema in the <schema_block> above, extract the requested information from the URL content according to that schema. If no schema was provided, infer an appropriate JSON schema based on the user's request that will best capture the key information they are looking for.

Extraction instructions:
Return the extracted information as a list of JSON objects, with each object in the list corresponding to a block of content from the URL, in the same order as it appears on the page. Wrap the entire JSON list in <blocks> tags.
Return the extracted information as a list of JSON objects, with each object in the list corresponding to a block of content from the URL, in the same order as it appears on the page. Wrap the entire JSON list in <blocks>...</blocks> XML tags.

Quality Reflection:
Before outputting your final answer, double check that the JSON you are returning is complete, containing all the information requested by the user, and is valid JSON that could be parsed by json.loads() with no errors or omissions. The outputted JSON objects should fully match the schema, either provided or inferred.

@@ -194,5 +194,11 @@ Before outputting your final answer, double check that the JSON you are returnin
Quality Score:
After reflecting, score the quality and completeness of the JSON data you are about to return on a scale of 1 to 5. Write the score inside <score> tags.

Avoid Common Mistakes:
- Do NOT add any comments using "//" or "#" in the JSON output. It causes parsing errors.
- Make sure the JSON is properly formatted with curly braces, square brackets, and commas in the right places.
- Do not miss closing </blocks> tag at the end of the JSON output.
- Do not generate the Python coee show me how to do the task, this is your task to extract the information and return it in JSON format.

Result
Output the final list of JSON objects, wrapped in <blocks> tags."""
Output the final list of JSON objects, wrapped in <blocks>...</blocks> XML tags. Make sure to close the tag properly."""
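The prompt above asks the model to wrap its JSON list in `<blocks>...</blocks>` tags, which the extraction strategy then pulls out and feeds to `json.loads` (via `extract_xml_data` in the earlier hunk). A minimal sketch of that parsing step, using a plain regex in place of the library helper:

```python
import json
import re

def extract_blocks(response_text: str):
    # Pull the content between <blocks>...</blocks> and parse it as JSON.
    match = re.search(r"<blocks>(.*?)</blocks>", response_text, re.DOTALL)
    if not match:
        raise ValueError("No <blocks>...</blocks> section found in the response")
    return json.loads(match.group(1))

reply = '<blocks>[{"index": 0, "content": "Example block"}]</blocks>'
print(extract_blocks(reply))
```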
@@ -419,7 +419,6 @@ def get_content_of_website(url, html, word_count_threshold = MIN_WORD_THRESHOLD,
print('Error processing HTML content:', str(e))
raise InvalidCSSSelectorError(f"Invalid CSS selector: {css_selector}") from e

def get_content_of_website_optimized(url: str, html: str, word_count_threshold: int = MIN_WORD_THRESHOLD, css_selector: str = None, **kwargs) -> Dict[str, Any]:
if not html:
return None

@@ -438,67 +437,76 @@ def get_content_of_website_optimized(url: str, html: str, word_count_threshold:
links = {'internal': [], 'external': []}
media = {'images': [], 'videos': [], 'audios': []}

def process_element(element: element.PageElement) -> None:
if isinstance(element, NavigableString):
if isinstance(element, Comment):
element.extract()
return
def process_element(element: element.PageElement) -> bool:
try:
if isinstance(element, NavigableString):
if isinstance(element, Comment):
element.extract()
return False

# if not isinstance(element, element.Tag):
# return

if element.name in ['script', 'style', 'link', 'meta', 'noscript']:
element.decompose()
return

if element.name == 'a' and element.get('href'):
href = element['href']
url_base = url.split('/')[2]
link_data = {'href': href, 'text': element.get_text()}
if href.startswith('http') and url_base not in href:
links['external'].append(link_data)
else:
links['internal'].append(link_data)

elif element.name == 'img':
media['images'].append({
'src': element.get('src'),
'alt': element.get('alt'),
'type': 'image'
})
alt_text = element.get('alt')
if alt_text:
element.replace_with(soup.new_string(alt_text))
else:
if element.name in ['script', 'style', 'link', 'meta', 'noscript']:
element.decompose()
return
return False

elif element.name in ['video', 'audio']:
media[f"{element.name}s"].append({
'src': element.get('src'),
'alt': element.get('alt'),
'type': element.name
})
keep_element = False

if element.name != 'pre':
if element.name in ['b', 'i', 'u', 'span', 'del', 'ins', 'sub', 'sup', 'strong', 'em', 'code', 'kbd', 'var', 's', 'q', 'abbr', 'cite', 'dfn', 'time', 'small', 'mark']:
if kwargs.get('only_text', False):
element.replace_with(element.get_text())
if element.name == 'a' and element.get('href'):
href = element['href']
url_base = url.split('/')[2]
link_data = {'href': href, 'text': element.get_text()}
if href.startswith('http') and url_base not in href:
links['external'].append(link_data)
else:
element.unwrap()
elif element.name != 'img':
element.attrs = {}
links['internal'].append(link_data)
keep_element = True

word_count = len(element.get_text(strip=True).split())
if word_count < word_count_threshold:
element.decompose()
return
elif element.name == 'img':
media['images'].append({
'src': element.get('src'),
'alt': element.get('alt'),
'type': 'image'
})
return True # Always keep image elements

for child in list(element.children):
process_element(child)
elif element.name in ['video', 'audio']:
media[f"{element.name}s"].append({
'src': element.get('src'),
'alt': element.get('alt'),
'type': element.name
})
return True # Always keep video and audio elements

if not element.contents and not element.get_text(strip=True):
element.decompose()
if element.name != 'pre':
if element.name in ['b', 'i', 'u', 'span', 'del', 'ins', 'sub', 'sup', 'strong', 'em', 'code', 'kbd', 'var', 's', 'q', 'abbr', 'cite', 'dfn', 'time', 'small', 'mark']:
if kwargs.get('only_text', False):
element.replace_with(element.get_text())
else:
element.unwrap()
elif element.name != 'img':
element.attrs = {}

# Process children
for child in list(element.children):
if isinstance(child, NavigableString) and not isinstance(child, Comment):
if len(child.strip()) > 0:
keep_element = True
else:
if process_element(child):
keep_element = True

# Check word count
if not keep_element:
word_count = len(element.get_text(strip=True).split())
keep_element = word_count >= word_count_threshold

if not keep_element:
element.decompose()

return keep_element
except Exception as e:
print('Error processing element:', str(e))
return False

process_element(body)
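The rewritten `process_element` above now returns a boolean: an element is kept if any descendant is kept or if its own text clears the word-count threshold; otherwise it is decomposed. A trimmed, standalone sketch of that recursion with BeautifulSoup (link/media collection and attribute stripping omitted, so this is an illustration of the pattern, not the library function):

```python
from bs4 import BeautifulSoup, Comment, NavigableString

def process_element(element, word_count_threshold=2):
    # Text nodes: drop comments, keep non-empty strings.
    if isinstance(element, NavigableString):
        if isinstance(element, Comment):
            element.extract()
            return False
        return len(element.strip()) > 0

    # Non-content tags are removed outright.
    if element.name in ['script', 'style', 'link', 'meta', 'noscript']:
        element.decompose()
        return False

    # Keep the element if any child survives...
    keep_element = False
    for child in list(element.children):
        if process_element(child, word_count_threshold):
            keep_element = True

    # ...or if its own text is long enough; otherwise remove it.
    if not keep_element:
        word_count = len(element.get_text(strip=True).split())
        keep_element = word_count >= word_count_threshold
    if not keep_element:
        element.decompose()
    return keep_element

soup = BeautifulSoup(
    "<body><p>A paragraph long enough to keep</p><script>drop_me()</script></body>",
    "html.parser",
)
process_element(soup.body)
print(soup.body)  # the <script> tag is gone, the paragraph remains
```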
@@ -535,7 +543,6 @@ def get_content_of_website_optimized(url: str, html: str, word_count_threshold:
'metadata': meta
}

def extract_metadata(html, soup = None):
metadata = {}

@@ -594,12 +601,16 @@ def extract_xml_data(tags, string):
return data

# Function to perform the completion with exponential backoff
def perform_completion_with_backoff(provider, prompt_with_variables, api_token):
def perform_completion_with_backoff(provider, prompt_with_variables, api_token, json_response = False):
from litellm import completion
from litellm.exceptions import RateLimitError
max_attempts = 3
base_delay = 2 # Base delay in seconds, you can adjust this based on your needs

extra_args = {}
if json_response:
extra_args["response_format"] = { "type": "json_object" }

for attempt in range(max_attempts):
try:
response =completion(

@@ -608,7 +619,8 @@ def perform_completion_with_backoff(provider, prompt_with_variables, api_token):
{"role": "user", "content": prompt_with_variables}
],
temperature=0.01,
api_key=api_token
api_key=api_token,
**extra_args
)
return response # Return the successful response
except RateLimitError as e:
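The new `json_response` flag above simply adds OpenAI-style JSON mode to the LiteLLM call. A condensed sketch of the resulting call, assuming `model=provider` for the argument this hunk does not show and a provider that supports `response_format`:

```python
from litellm import completion

def complete_json(provider, prompt_with_variables, api_token, json_response=False):
    extra_args = {}
    if json_response:
        # Ask the provider for a JSON object instead of free-form text.
        extra_args["response_format"] = {"type": "json_object"}
    return completion(
        model=provider,  # assumption: the provider string doubles as the model name
        messages=[{"role": "user", "content": prompt_with_variables}],
        temperature=0.01,
        api_key=api_token,
        **extra_args,
    )
```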
@@ -11,6 +11,8 @@ from .crawler_strategy import *
from typing import List
from concurrent.futures import ThreadPoolExecutor
from .config import *
import warnings
warnings.filterwarnings("ignore", message='Field "model_name" has conflict with protected namespace "model_".')

class WebCrawler:

@@ -129,47 +131,57 @@ class WebCrawler:
verbose=True,
**kwargs,
) -> CrawlResult:
extraction_strategy = extraction_strategy or NoExtractionStrategy()
extraction_strategy.verbose = verbose
if not isinstance(extraction_strategy, ExtractionStrategy):
raise ValueError("Unsupported extraction strategy")
if not isinstance(chunking_strategy, ChunkingStrategy):
raise ValueError("Unsupported chunking strategy")

if word_count_threshold < MIN_WORD_THRESHOLD:
word_count_threshold = MIN_WORD_THRESHOLD
try:
extraction_strategy = extraction_strategy or NoExtractionStrategy()
extraction_strategy.verbose = verbose
if not isinstance(extraction_strategy, ExtractionStrategy):
raise ValueError("Unsupported extraction strategy")
if not isinstance(chunking_strategy, ChunkingStrategy):
raise ValueError("Unsupported chunking strategy")

# if word_count_threshold < MIN_WORD_THRESHOLD:
# word_count_threshold = MIN_WORD_THRESHOLD

word_count_threshold = max(word_count_threshold, 0)

# Check cache first
cached = None
screenshot_data = None
extracted_content = None
if not bypass_cache and not self.always_by_pass_cache:
cached = get_cached_url(url)

if kwargs.get("warmup", True) and not self.ready:
return None

if cached:
html = cached[1]
extracted_content = cached[4]
if screenshot:
screenshot_data = cached[9]
if not screenshot_data:
cached = None

if not cached or not html:
if user_agent:
self.crawler_strategy.update_user_agent(user_agent)
t1 = time.time()
html = self.crawler_strategy.crawl(url)
t2 = time.time()
if verbose:
print(f"[LOG] 🚀 Crawling done for {url}, success: {bool(html)}, time taken: {t2 - t1} seconds")
if screenshot:
screenshot_data = self.crawler_strategy.take_screenshot()
# Check cache first
cached = None
screenshot_data = None
extracted_content = None
if not bypass_cache and not self.always_by_pass_cache:
cached = get_cached_url(url)

if kwargs.get("warmup", True) and not self.ready:
return None

if cached:
html = cached[1]
extracted_content = cached[4]
if screenshot:
screenshot_data = cached[9]
if not screenshot_data:
cached = None

if not cached or not html:
if user_agent:
self.crawler_strategy.update_user_agent(user_agent)
t1 = time.time()
html = self.crawler_strategy.crawl(url, **kwargs)
t2 = time.time()
if verbose:
print(f"[LOG] 🚀 Crawling done for {url}, success: {bool(html)}, time taken: {t2 - t1} seconds")
if screenshot:
screenshot_data = self.crawler_strategy.take_screenshot()

return self.process_html(url, html, extracted_content, word_count_threshold, extraction_strategy, chunking_strategy, css_selector, screenshot_data, verbose, bool(cached), **kwargs)

crawl_result = self.process_html(url, html, extracted_content, word_count_threshold, extraction_strategy, chunking_strategy, css_selector, screenshot_data, verbose, bool(cached), **kwargs)
crawl_result.success = bool(html)
return crawl_result
except Exception as e:
if not hasattr(e, "msg"):
e.msg = str(e)
print(f"[ERROR] 🚫 Failed to crawl {url}, error: {e.msg}")
return CrawlResult(url=url, html="", success=False, error_message=e.msg)

def process_html(
self,
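With the `try/except` added above, `run()` no longer raises on crawl failures: it returns a `CrawlResult` with `success=False` and an `error_message`, or `None` when called before warm-up. A hedged caller-side sketch of checking that result (import path and `warmup()` assumed, as earlier):

```python
from crawl4ai import WebCrawler  # assumed import path

crawler = WebCrawler()
crawler.warmup()  # assumed warm-up step; run() returns None while self.ready is False

result = crawler.run(url="https://www.nbcnews.com/business")
if result is None:
    print("Crawler not ready yet")
elif not result.success:
    print("Crawl failed:", result.error_message)
else:
    print(result.markdown[:300])
```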
@@ -1,6 +1,36 @@
# Changelog

## [0.2.7] - 2024-06-27
## [v0.2.73] - 2024-07-03

💡 In this release, we've bumped the version to v0.2.73 and refreshed our documentation to ensure you have the best experience with our project.

* Added a "with-head" mode so that websites which require it can be crawled with a visible (non-headless) browser.
* Fixed installation issues in setup.py and the Dockerfile.
* Resolved multiple issues.

## [v0.2.72] - 2024-06-30

This release brings exciting updates and improvements to our project! 🎉

* 📚 **Documentation Updates**: Our documentation has been revamped to reflect the latest changes and additions.
* 🚀 **New Modes in setup.py**: We've added support for three new modes in setup.py: default, torch, and transformers. This enhances the project's flexibility and usability.
* 🐳 **Docker File Updates**: The Docker file has been updated to ensure seamless compatibility with the new modes and improvements.
* 🕷️ **Temporary Solution for Headless Crawling**: We've implemented a temporary solution to overcome issues with crawling websites in headless mode.

These changes aim to improve the overall user experience, provide more flexibility, and enhance the project's performance. We're thrilled to share these updates with you and look forward to continuing to evolve and improve our project!

## [0.2.71] - 2024-06-26

**Improved Error Handling and Performance** 🚧

* 🚫 Refactored `crawler_strategy.py` to handle exceptions and provide better error messages, making it more robust and reliable.
* 💻 Optimized the `get_content_of_website_optimized` function in `utils.py` for improved performance, reducing potential bottlenecks.
* 💻 Updated `utils.py` with the latest changes, ensuring consistency and accuracy.
* 🚫 Migrated to `ChromeDriverManager` to resolve Chrome driver download issues, providing a smoother user experience.

These changes focus on refining the existing codebase, resulting in a more stable, efficient, and user-friendly experience. With these improvements, you can expect fewer errors and better performance in the crawler strategy and utility functions.

## [0.2.71] - 2024-06-25

### Fixed
- Made the extraction function twice as fast.
@@ -1,4 +1,4 @@
# Crawl4AI v0.2.7
# Crawl4AI v0.2.73

Welcome to the official documentation for Crawl4AI! 🕷️🤖 Crawl4AI is an open-source Python library designed to simplify web crawling and extract useful information from web pages. This documentation will guide you through the features, usage, and customization of Crawl4AI.
@@ -1,39 +1,67 @@
# Installation 💻

There are three ways to use Crawl4AI:

1. As a library (Recommended)
2. As a local server (Docker) or using the REST API
3. As a Google Colab notebook. [](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)
3. As a Google Colab notebook.

## Library Installation

To install Crawl4AI as a library, follow these steps:
Crawl4AI offers flexible installation options to suit various use cases. Choose the option that best fits your needs:

1. Install the package from GitHub:
- **Default Installation** (Basic functionality):
```bash
virtualenv venv
source venv/bin/activate
pip install "crawl4ai @ git+https://github.com/unclecode/crawl4ai.git"
```
Use this for basic web crawling and scraping tasks.

- **Installation with PyTorch** (For advanced text clustering):
```bash
virtualenv venv
source venv/bin/activate
pip install "crawl4ai[torch] @ git+https://github.com/unclecode/crawl4ai.git"
```
Choose this if you need the CosineSimilarity cluster strategy.

- **Installation with Transformers** (For summarization and Hugging Face models):
```bash
virtualenv venv
source venv/bin/activate
pip install "crawl4ai[transformer] @ git+https://github.com/unclecode/crawl4ai.git"
```
Opt for this if you require text summarization or plan to use Hugging Face models.

- **Full Installation** (All features):
```bash
virtualenv venv
source venv/bin/activate
pip install "crawl4ai[all] @ git+https://github.com/unclecode/crawl4ai.git"
```
This installs all dependencies for full functionality.

💡 Better to run the following CLI-command to load the required models. This is optional, but it will boost the performance and speed of the crawler. You need to do this only once.
```
crawl4ai-download-models
```

2. Alternatively, you can clone the repository and install the package locally:
```
- **Development Installation** (For contributors):
```bash
virtualenv venv
source venv/bin/activate
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai
pip install -e .[all]
pip install -e ".[all]"
```
Use this if you plan to modify the source code.

💡 After installation, if you have used "torch", "transformer" or "all", it's recommended to run the following CLI command to load the required models. This is optional but will boost the performance and speed of the crawler. You need to do this only once, this is only for when you install using []
```bash
crawl4ai-download-models
```

## Using Docker for Local Server

3. Use Docker to run the local server:
```
To run Crawl4AI as a local server using Docker:

```bash
# For Mac users
# docker build --platform linux/amd64 -t crawl4ai .
# For other users

@@ -43,4 +71,9 @@ docker run -d -p 8000:80 crawl4ai

## Using Google Colab

You can also use Crawl4AI in a Google Colab notebook for easy setup and experimentation. Simply open the following Colab notebook and follow the instructions: [](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)
You can also use Crawl4AI in a Google Colab notebook for easy setup and experimentation. Simply open the following Colab notebook and follow the instructions:

⚠️ This collab is a bit outdated. I'm updating it with the newest versions, so please refer to the website for the latest documentation. This will be updated in a few days, and you'll have the latest version here. Thank you so much.

[](https://colab.research.google.com/drive/1wz8u30rvbq6Scodye9AGCw8Qg_Z8QGsk)
main.py: 2 lines changed
@@ -168,4 +168,4 @@ async def get_chunking_strategies():

if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8080)
uvicorn.run(app, host="0.0.0.0", port=8888)
setup.py: 36 lines changed
@@ -1,6 +1,5 @@
from setuptools import setup, find_packages
import os
import sys
from pathlib import Path
import subprocess
from setuptools.command.install import install

@@ -14,42 +13,27 @@ os.makedirs(f"{crawl4ai_folder}/cache", exist_ok=True)
with open("requirements.txt") as f:
requirements = f.read().splitlines()

# Read the requirements from requirements.txt
with open("requirements.crawl.txt") as f:
requirements_crawl_only = f.read().splitlines()

# Define the requirements for different environments
requirements_without_torch = [req for req in requirements if not req.startswith("torch")]
requirements_without_transformers = [req for req in requirements if not req.startswith("transformers")]
requirements_without_nltk = [req for req in requirements if not req.startswith("nltk")]
requirements_without_torch_transformers_nlkt = [req for req in requirements if not req.startswith("torch") and not req.startswith("transformers") and not req.startswith("nltk")]
requirements_crawl_only = [req for req in requirements if not req.startswith("torch") and not req.startswith("transformers") and not req.startswith("nltk")]

class CustomInstallCommand(install):
"""Customized setuptools install command to install spacy without dependencies."""
def run(self):
install.run(self)
subprocess.check_call([os.sys.executable, '-m', 'pip', 'install', 'spacy', '--no-deps'])
default_requirements = [req for req in requirements if not req.startswith(("torch", "transformers", "onnxruntime", "nltk", "spacy", "tokenizers", "scikit-learn", "numpy"))]
torch_requirements = [req for req in requirements if req.startswith(("torch", "nltk", "spacy", "scikit-learn", "numpy"))]
transformer_requirements = [req for req in requirements if req.startswith(("transformers", "tokenizers", "onnxruntime"))]

setup(
name="Crawl4AI",
version="0.2.7",
version="0.2.73",
description="🔥🕷️ Crawl4AI: Open-source LLM Friendly Web Crawler & Scrapper",
long_description=open("README.md").read(),
long_description=open("README.md", encoding="utf-8").read(),
long_description_content_type="text/markdown",
url="https://github.com/unclecode/crawl4ai",
author="Unclecode",
author_email="unclecode@kidocode.com",
license="MIT",
packages=find_packages(),
install_requires=requirements_without_torch_transformers_nlkt,
install_requires=default_requirements,
extras_require={
"all": requirements, # Include all requirements
"colab": requirements_without_torch, # Exclude torch for Colab
"crawl": requirements_crawl_only, # Include only crawl requirements
},
cmdclass={
'install': CustomInstallCommand,
"torch": torch_requirements,
"transformer": transformer_requirements,
"all": requirements,
},
entry_points={
'console_scripts': [

@@ -67,4 +51,4 @@ setup(
"Programming Language :: Python :: 3.10",
],
python_requires=">=3.7",
)
)