refactor(docs): reorganize documentation structure and update styles
Reorganize documentation into core/advanced/extraction sections for better navigation.

- Update terminal theme styles and add the rich library for better CLI output.
- Remove redundant tutorial files and consolidate content into core sections.
- Add a personal story to the index page for project context.

BREAKING CHANGE: Documentation structure has been significantly reorganized.
.gitattributes (vendored, new file, +5 lines)
@@ -0,0 +1,5 @@
+# Ignore Markdown files for language statistics
+*.md linguist-documentation=true
+
+# Force Python files to be detected
+*.py linguist-language=Python
@@ -24,6 +24,15 @@ Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant

🎉 **Version 0.4.24x is out!** Major improvements in extraction strategies with enhanced JSON handling, SSL security, and Amazon product extraction. Plus, a completely revamped content filtering system! [Read the release notes →](https://crawl4ai.com/mkdocs/blog)

<details>
<summary>📦 <strong>My Personal Story</strong></summary>

I’ve always loved exploring the web, going back to the days when HTML and JavaScript were barely intertwined. My curiosity drove me into web development, mathematics, AI, and machine learning, always with a close tie to real industrial applications. In 2009–2010, as a postgraduate student, I created platforms to gather and organize published papers for Master’s and PhD researchers. Faced with post-grad students’ data challenges, I built a helper app to crawl newly published papers and public data. Relying on Internet Explorer and DLL hacks was far more cumbersome than modern tooling, but it gave me a longtime background in data extraction.

Fast-forward to 2023: I needed to fetch web data and transform it into neat **markdown** for my AI pipeline. Every solution I found was **closed-source**, overpriced, or produced low-quality output. As someone who has built large edu-tech ventures (like KidoCode), I believe **data belongs to the people**. We shouldn’t pay $16 just to parse the web’s publicly available content. This friction led me to create my own library, **Crawl4AI**, in a matter of days to meet my immediate needs. Unexpectedly, it went **viral**, accumulating thousands of GitHub stars.

Now, in **January 2025**, Crawl4AI has surpassed **21,000 stars** and remains the #1 trending repository. It’s my way of giving back to the community after benefiting from open source for years, and I’m thrilled by how many of you share that passion. Thank you for being here. Join our Discord, file issues, submit PRs, or just spread the word. Let’s build the best data extraction, crawling, and scraping library **together**.

</details>

## 🧐 Why Crawl4AI?

1. **Built for LLMs**: Creates smart, concise Markdown optimized for RAG and fine-tuning applications.
@@ -1,2 +1,2 @@
 # crawl4ai/_version.py
-__version__ = "0.4.247"
+__version__ = "0.4.248"
@@ -453,12 +453,7 @@ class BrowserManager:

         return browser_args

-    async def setup_context(
-        self,
-        context: BrowserContext,
-        crawlerRunConfig: CrawlerRunConfig,
-        is_default=False,
-    ):
+    async def setup_context(self, context: BrowserContext, crawlerRunConfig: CrawlerRunConfig = None, is_default=False):
         """
         Set up a browser context with the configured options.

@@ -516,16 +511,17 @@ class BrowserManager:

         # Add default cookie
         await context.add_cookies(
-            [{"name": "cookiesEnabled", "value": "true", "url": crawlerRunConfig.url}]
+            [{"name": "cookiesEnabled", "value": "true", "url": crawlerRunConfig.url if crawlerRunConfig else "https://crawl4ai.com/"}]
         )

         # Handle navigator overrides
-        if (
-            crawlerRunConfig.override_navigator
-            or crawlerRunConfig.simulate_user
-            or crawlerRunConfig.magic
-        ):
-            await context.add_init_script(load_js_script("navigator_overrider"))
+        if crawlerRunConfig:
+            if (
+                crawlerRunConfig.override_navigator
+                or crawlerRunConfig.simulate_user
+                or crawlerRunConfig.magic
+            ):
+                await context.add_init_script(load_js_script("navigator_overrider"))

     async def create_browser_context(self):
         """
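The hunk above makes `crawlerRunConfig` optional and guards every attribute access behind a truthiness check. A minimal standalone sketch of the same pattern, assuming illustrative names (`RunConfig`, `cookie_url`, `needs_override_script` are not the library's API):

```python
from dataclasses import dataclass
from typing import Optional

DEFAULT_URL = "https://crawl4ai.com/"

@dataclass
class RunConfig:
    url: str = DEFAULT_URL
    override_navigator: bool = False

def cookie_url(config: Optional[RunConfig]) -> str:
    # Fall back to a default URL when no config was supplied
    return config.url if config else DEFAULT_URL

def needs_override_script(config: Optional[RunConfig]) -> bool:
    # Guard attribute access: a None config never triggers the script
    return bool(config and config.override_navigator)
```

The key design point is that every dereference of the optional parameter happens behind the guard, so callers that never pass a config keep working.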
@@ -74,279 +74,6 @@ class NoExtractionStrategy(ExtractionStrategy):
    def run(self, url: str, sections: List[str], *q, **kwargs) -> List[Dict[str, Any]]:
        return [{"index": i, "tags": [], "content": section} for i, section in enumerate(sections)]

#######################################################
# Strategies using LLM-based extraction for text data #
#######################################################
class LLMExtractionStrategy(ExtractionStrategy):
    """
    A strategy that uses an LLM to extract meaningful content from the HTML.

    Attributes:
        provider: The provider to use for extraction. It follows the format <provider_name>/<model_name>, e.g., "ollama/llama3.3".
        api_token: The API token for the provider.
        instruction: The instruction to use for the LLM model.
        schema: Pydantic model schema for structured data.
        extraction_type: "block" or "schema".
        chunk_token_threshold: Maximum tokens per chunk.
        overlap_rate: Overlap between chunks.
        word_token_rate: Word-to-token conversion rate.
        apply_chunking: Whether to apply chunking.
        base_url: The base URL for the API request.
        api_base: The base URL for the API request (alias of base_url).
        extra_args: Additional arguments for the API request, such as temperature, max_tokens, etc.
        verbose: Whether to print verbose output.
        usages: List of individual token usages.
        total_usage: Accumulated token usage.
    """
    def __init__(self,
                 provider: str = DEFAULT_PROVIDER, api_token: Optional[str] = None,
                 instruction: str = None, schema: Dict = None, extraction_type="block", **kwargs):
        """
        Initialize the LLM extraction strategy.

        Args:
            provider: The provider to use for extraction. It follows the format <provider_name>/<model_name>, e.g., "ollama/llama3.3".
            api_token: The API token for the provider.
            instruction: The instruction to use for the LLM model.
            schema: Pydantic model schema for structured data.
            extraction_type: "block" or "schema".
            chunk_token_threshold: Maximum tokens per chunk.
            overlap_rate: Overlap between chunks.
            word_token_rate: Word-to-token conversion rate.
            apply_chunking: Whether to apply chunking.
            base_url: The base URL for the API request.
            api_base: The base URL for the API request (alias of base_url).
            extra_args: Additional arguments for the API request, such as temperature, max_tokens, etc.
            verbose: Whether to print verbose output.
        """
        super().__init__(**kwargs)
        self.provider = provider
        self.api_token = api_token or PROVIDER_MODELS.get(provider, "no-token") or os.getenv("OPENAI_API_KEY")
        self.instruction = instruction
        self.extract_type = extraction_type
        self.schema = schema
        if schema:
            self.extract_type = "schema"

        self.chunk_token_threshold = kwargs.get("chunk_token_threshold", CHUNK_TOKEN_THRESHOLD)
        self.overlap_rate = kwargs.get("overlap_rate", OVERLAP_RATE)
        self.word_token_rate = kwargs.get("word_token_rate", WORD_TOKEN_RATE)
        self.apply_chunking = kwargs.get("apply_chunking", True)
        self.base_url = kwargs.get("base_url", None)
        self.api_base = kwargs.get("api_base", kwargs.get("base_url", None))
        self.extra_args = kwargs.get("extra_args", {})
        if not self.apply_chunking:
            self.chunk_token_threshold = 1e9

        self.verbose = kwargs.get("verbose", False)
        self.usages = []  # Store individual usages
        self.total_usage = TokenUsage()  # Accumulated usage

        if not self.api_token:
            raise ValueError("API token must be provided for LLMExtractionStrategy. Update config.py or set the OPENAI_API_KEY environment variable.")
    def extract(self, url: str, ix: int, html: str) -> List[Dict[str, Any]]:
        """
        Extract meaningful blocks or chunks from the given HTML using an LLM.

        How it works:
        1. Construct a prompt with variables.
        2. Make a request to the LLM using the prompt.
        3. Parse the response and extract blocks or chunks.

        Args:
            url: The URL of the webpage.
            ix: Index of the block.
            html: The HTML content of the webpage.

        Returns:
            A list of extracted blocks or chunks.
        """
        if self.verbose:
            print(f"[LOG] Call LLM for {url} - block index: {ix}")

        variable_values = {
            "URL": url,
            "HTML": escape_json_string(sanitize_html(html)),
        }

        prompt_with_variables = PROMPT_EXTRACT_BLOCKS
        if self.instruction:
            variable_values["REQUEST"] = self.instruction
            prompt_with_variables = PROMPT_EXTRACT_BLOCKS_WITH_INSTRUCTION

        if self.extract_type == "schema" and self.schema:
            variable_values["SCHEMA"] = json.dumps(self.schema, indent=2)
            prompt_with_variables = PROMPT_EXTRACT_SCHEMA_WITH_INSTRUCTION

        for variable in variable_values:
            prompt_with_variables = prompt_with_variables.replace(
                "{" + variable + "}", variable_values[variable]
            )

        response = perform_completion_with_backoff(
            self.provider,
            prompt_with_variables,
            self.api_token,
            base_url=self.api_base or self.base_url,
            extra_args=self.extra_args,
        )

        # Track usage
        usage = TokenUsage(
            completion_tokens=response.usage.completion_tokens,
            prompt_tokens=response.usage.prompt_tokens,
            total_tokens=response.usage.total_tokens,
            completion_tokens_details=response.usage.completion_tokens_details.__dict__ if response.usage.completion_tokens_details else {},
            prompt_tokens_details=response.usage.prompt_tokens_details.__dict__ if response.usage.prompt_tokens_details else {},
        )
        self.usages.append(usage)

        # Update totals
        self.total_usage.completion_tokens += usage.completion_tokens
        self.total_usage.prompt_tokens += usage.prompt_tokens
        self.total_usage.total_tokens += usage.total_tokens

        try:
            blocks = extract_xml_data(["blocks"], response.choices[0].message.content)["blocks"]
            blocks = json.loads(blocks)
            for block in blocks:
                block["error"] = False
        except Exception:
            # Fall back to salvaging whatever JSON objects can be parsed
            parsed, unparsed = split_and_parse_json_objects(response.choices[0].message.content)
            blocks = parsed
            if unparsed:
                blocks.append({
                    "index": 0,
                    "error": True,
                    "tags": ["error"],
                    "content": unparsed,
                })

        if self.verbose:
            print("[LOG] Extracted", len(blocks), "blocks from URL:", url, "block index:", ix)
        return blocks
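The placeholder-substitution loop inside `extract` can be shown standalone. This is a sketch; the template text and variable names here are illustrative, not the library's actual prompts:

```python
def fill_prompt(template: str, variables: dict) -> str:
    # Replace each {NAME} placeholder with its value, as extract() does
    for name, value in variables.items():
        template = template.replace("{" + name + "}", value)
    return template

prompt = fill_prompt(
    "Extract blocks from {URL}:\n{HTML}",
    {"URL": "https://example.com", "HTML": "<p>hi</p>"},
)
# prompt == "Extract blocks from https://example.com:\n<p>hi</p>"
```

Note this is plain string replacement, not `str.format`, so braces elsewhere in the template (for example, literal JSON in the HTML) are left untouched.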
    def _merge(self, documents, chunk_token_threshold, overlap):
        """
        Merge documents into sections based on chunk_token_threshold and overlap.
        """
        sections = []
        total_tokens = 0

        # Calculate the total tokens across all documents
        for document in documents:
            total_tokens += len(document.split(' ')) * self.word_token_rate

        # Calculate the number of sections needed
        num_sections = math.floor(total_tokens / chunk_token_threshold)
        if num_sections < 1:
            num_sections = 1  # Ensure there is at least one section
        adjusted_chunk_threshold = total_tokens / num_sections

        total_token_so_far = 0
        current_chunk = []

        for document in documents:
            tokens = document.split(' ')
            token_count = len(tokens) * self.word_token_rate

            if total_token_so_far + token_count <= adjusted_chunk_threshold:
                current_chunk.extend(tokens)
                total_token_so_far += token_count
            else:
                # Route all remaining documents into the final section
                if len(sections) == num_sections - 1:
                    current_chunk.extend(tokens)
                    continue

                # Add overlap if specified
                if overlap > 0 and current_chunk:
                    overlap_tokens = current_chunk[-overlap:]
                    current_chunk.extend(overlap_tokens)

                sections.append(' '.join(current_chunk))
                current_chunk = tokens
                total_token_so_far = token_count

        # Add the last chunk
        if current_chunk:
            sections.append(' '.join(current_chunk))

        return sections
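A simplified, self-contained version of this merging logic (no overlap handling, and assuming the 0.75 word-to-token rate as a default) makes the sectioning behavior easy to verify: estimate total tokens, derive a section count, then pack documents greedily against the adjusted per-section threshold.

```python
import math

def merge_sections(documents, chunk_token_threshold, word_token_rate=0.75):
    # Estimate total tokens across all documents
    total_tokens = sum(len(d.split(' ')) * word_token_rate for d in documents)
    # At least one section; then spread tokens evenly across sections
    num_sections = max(1, math.floor(total_tokens / chunk_token_threshold))
    threshold = total_tokens / num_sections

    sections, current, so_far = [], [], 0.0
    for doc in documents:
        tokens = doc.split(' ')
        count = len(tokens) * word_token_rate
        # Keep filling the current section while it fits, and dump
        # everything remaining into the final section
        if so_far + count <= threshold or len(sections) == num_sections - 1:
            current.extend(tokens)
            so_far += count
        else:
            sections.append(' '.join(current))
            current, so_far = tokens, count
    if current:
        sections.append(' '.join(current))
    return sections
```

With twelve words (≈9 estimated tokens) and a threshold of 4, this yields two sections: the first document alone, then everything else folded into the final section.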
    def run(self, url: str, sections: List[str]) -> List[Dict[str, Any]]:
        """
        Process merged sections: sequentially with a delay when the provider is rate-limited (e.g., Groq), otherwise in parallel.

        Args:
            url: The URL of the webpage.
            sections: List of sections (strings) to process.

        Returns:
            A list of extracted blocks or chunks.
        """
        merged_sections = self._merge(
            sections, self.chunk_token_threshold,
            overlap=int(self.chunk_token_threshold * self.overlap_rate),
        )
        extracted_content = []
        if self.provider.startswith("groq/"):
            # Sequential processing with a delay to respect rate limits
            for ix, section in enumerate(merged_sections):
                extract_func = partial(self.extract, url)
                extracted_content.extend(extract_func(ix, sanitize_input_encode(section)))
                time.sleep(0.5)  # 500 ms delay between requests
        else:
            # Parallel processing using ThreadPoolExecutor
            with ThreadPoolExecutor(max_workers=4) as executor:
                extract_func = partial(self.extract, url)
                futures = [executor.submit(extract_func, ix, sanitize_input_encode(section)) for ix, section in enumerate(merged_sections)]

                for future in as_completed(futures):
                    try:
                        extracted_content.extend(future.result())
                    except Exception as e:
                        if self.verbose:
                            print(f"Error in thread execution: {e}")
                        # Add error information to extracted_content
                        extracted_content.append({
                            "index": 0,
                            "error": True,
                            "tags": ["error"],
                            "content": str(e),
                        })

        return extracted_content
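The error-capturing pattern in the parallel branch — submit every section, collect results with `as_completed`, and convert worker exceptions into error records instead of raising — can be sketched in isolation (`run_parallel` is an illustrative helper, not part of the library):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_parallel(func, items, max_workers=4):
    # Submit every item; a failed worker becomes an error record,
    # so one bad section never aborts the whole batch.
    results = []
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = {executor.submit(func, ix, item): ix for ix, item in enumerate(items)}
        for future in as_completed(futures):
            try:
                results.extend(future.result())
            except Exception as e:
                results.append({"index": futures[future], "error": True, "content": str(e)})
    return results
```

Because `as_completed` yields futures in completion order, results are unordered; callers that need source order can sort on the `index` field afterwards.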
    def show_usage(self) -> None:
        """Print a detailed token usage report showing total and per-request usage."""
        print("\n=== Token Usage Summary ===")
        print(f"{'Type':<15} {'Count':>12}")
        print("-" * 30)
        print(f"{'Completion':<15} {self.total_usage.completion_tokens:>12,}")
        print(f"{'Prompt':<15} {self.total_usage.prompt_tokens:>12,}")
        print(f"{'Total':<15} {self.total_usage.total_tokens:>12,}")

        print("\n=== Usage History ===")
        print(f"{'Request #':<10} {'Completion':>12} {'Prompt':>12} {'Total':>12}")
        print("-" * 48)
        for i, usage in enumerate(self.usages, 1):
            print(f"{i:<10} {usage.completion_tokens:>12,} {usage.prompt_tokens:>12,} {usage.total_tokens:>12,}")

#######################################################
# Strategies using clustering for text data extraction #
#######################################################
@@ -665,6 +392,284 @@ class CosineStrategy(ExtractionStrategy):

        return self.extract(url, self.DEL.join(sections), **kwargs)
#######################################################
# New extraction strategies for JSON-based extraction #
#######################################################
@@ -36,7 +36,7 @@ async def main():
         'domain': '.example.com',
         'path': '/'
     }])
-    await page.set_viewport_size({"width": 1920, "height": 1080})
+    await page.set_viewport_size({"width": 1080, "height": 800})
     return page

 async def on_user_agent_updated(page: Page, context: BrowserContext, user_agent: str, **kwargs):
@@ -1,15 +1,16 @@
-# Advanced Features (Proxy, PDF, Screenshot, SSL, Headers, & Storage State)
+# Overview of Some Important Advanced Features
+(Proxy, PDF, Screenshot, SSL, Headers, & Storage State)

 Crawl4AI offers multiple power-user features that go beyond simple crawling. This tutorial covers:

 1. **Proxy Usage**
 2. **Capturing PDFs & Screenshots**
 3. **Handling SSL Certificates**
 4. **Custom Headers**
 5. **Session Persistence & Local Storage**

 > **Prerequisites**
-> - You have a basic grasp of [AsyncWebCrawler Basics](./async-webcrawler-basics.md)
+> - You have a basic grasp of [AsyncWebCrawler Basics](../core/simple-crawling.md)
 > - You know how to run or configure your Python environment with Playwright installed

---
@@ -84,7 +85,7 @@ async def main():
         # Save PDF
         if result.pdf:
             with open("wikipedia_page.pdf", "wb") as f:
-                f.write(b64decode(result.pdf))
+                f.write(result.pdf)

         print("[OK] PDF & screenshot captured.")
     else:
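This hunk reflects `result.pdf` changing from a base64-encoded string to raw bytes. A defensive save helper that accepts both representations can smooth the migration (a sketch; `save_pdf` is an illustrative helper, not part of the library):

```python
from base64 import b64decode

def save_pdf(data, path):
    # Accept either raw PDF bytes or a base64-encoded string
    raw = b64decode(data) if isinstance(data, str) else data
    with open(path, "wb") as f:
        f.write(raw)
```

Dispatching on `str` vs `bytes` keeps old call sites working while new code passes raw bytes straight through.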
@@ -186,7 +187,7 @@ if __name__ == "__main__":

 **Notes**
 - Some sites may react differently to certain headers (e.g., `Accept-Language`).
-- If you need advanced user-agent randomization or client hints, see [Identity-Based Crawling (Anti-Bot)](./identity-anti-bot.md) or use `UserAgentGenerator`.
+- If you need advanced user-agent randomization or client hints, see [Identity-Based Crawling (Anti-Bot)](./identity-based-crawling.md) or use `UserAgentGenerator`.

---
@@ -246,7 +247,7 @@ You can sign in once, export the browser context, and reuse it later—without r

 - **`await context.storage_state(path="my_storage.json")`**: Exports cookies, localStorage, etc. to a file.
 - Provide `storage_state="my_storage.json"` on subsequent runs to skip the login step.

-**See**: [Detailed session management tutorial](./hooks-custom.md#using-storage_state) or [Explanations → Browser Context & Managed Browser](../../explanations/browser-management.md) for more advanced scenarios (like multi-step logins, or capturing after interactive pages).
+**See**: [Detailed session management tutorial](./session-management.md) or [Explanations → Browser Context & Managed Browser](./identity-based-crawling.md) for more advanced scenarios (like multi-step logins, or capturing after interactive pages).

---
@@ -283,7 +284,10 @@ async def main():

     # 3. Crawl
     async with AsyncWebCrawler(config=browser_cfg) as crawler:
-        result = await crawler.arun("https://secure.example.com/protected", config=crawler_cfg)
+        result = await crawler.arun(
+            url="https://secure.example.com/protected",
+            config=crawler_cfg
+        )

         if result.success:
             print("[OK] Crawled the secure page. Links found:", len(result.links.get("internal", [])))
@@ -318,12 +322,6 @@ You’ve now explored several **advanced** features:

 - **Custom Headers** for language or specialized requests
 - **Session Persistence** via storage state

-**Where to go next**:
-
-- **[Hooks & Custom Code](./hooks-custom.md)**: For multi-step interactions (clicking “Load More,” performing logins, etc.)
-- **[Identity-Based Crawling & Anti-Bot](./identity-anti-bot.md)**: If you need more sophisticated user simulation or stealth.
-- **[Reference → BrowserConfig & CrawlerRunConfig](../../reference/configuration.md)**: Detailed param descriptions for everything you’ve seen here and more.

 With these power tools, you can build robust scraping workflows that mimic real user behavior, handle secure sites, capture detailed snapshots, and manage sessions across multiple runs—streamlining your entire data collection pipeline.

-**Last Updated**: 2024-XX-XX
+**Last Updated**: 2025-01-01

---

# Content Processing

Crawl4AI provides powerful content processing capabilities that help you extract clean, relevant content from web pages. This guide covers content cleaning, media handling, link analysis, and metadata extraction.

## Media Processing

Crawl4AI provides comprehensive media extraction and analysis capabilities. It automatically detects and processes various types of media elements while maintaining their context and relevance.

### Image Processing

The library handles various image scenarios, including:

- Regular images
- Lazy-loaded images
- Background images
- Responsive images
- Image metadata and context

```python
from crawl4ai.async_configs import CrawlerRunConfig

config = CrawlerRunConfig()
result = await crawler.arun(url="https://example.com", config=config)

for image in result.media["images"]:
    # Each image includes rich metadata
    print(f"Source: {image['src']}")
    print(f"Alt text: {image['alt']}")
    print(f"Description: {image['desc']}")
    print(f"Context: {image['context']}")        # Surrounding text
    print(f"Relevance score: {image['score']}")  # 0-10 score
```
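Because each image carries a 0-10 relevance score, you can post-filter the list in plain Python before further processing. A small sketch over the structure shown above (the sample data here is hypothetical):

```python
def filter_images_by_score(images, min_score=5):
    """Keep only images whose relevance score meets the threshold."""
    return [img for img in images if img.get("score", 0) >= min_score]

# Hypothetical sample mimicking result.media["images"]
sample = [
    {"src": "/hero.png", "score": 8},
    {"src": "/spacer.gif", "score": 1},
    {"src": "/chart.svg", "score": 6},
]
kept = filter_images_by_score(sample, min_score=5)
print([img["src"] for img in kept])  # → ['/hero.png', '/chart.svg']
```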

### Handling Lazy-Loaded Content

Crawl4AI already handles lazy loading for media elements. You can customize the wait time for lazy-loaded content with `CrawlerRunConfig`:

```python
config = CrawlerRunConfig(
    wait_for="css:img[data-src]",  # Wait for lazy images
    delay_before_return_html=2.0   # Additional wait time
)
result = await crawler.arun(url="https://example.com", config=config)
```

### Video and Audio Content

The library extracts video and audio elements with their metadata:

```python
from crawl4ai.async_configs import CrawlerRunConfig

config = CrawlerRunConfig()
result = await crawler.arun(url="https://example.com", config=config)

# Process videos
for video in result.media["videos"]:
    print(f"Video source: {video['src']}")
    print(f"Type: {video['type']}")
    print(f"Duration: {video.get('duration')}")
    print(f"Thumbnail: {video.get('poster')}")

# Process audio
for audio in result.media["audios"]:
    print(f"Audio source: {audio['src']}")
    print(f"Type: {audio['type']}")
    print(f"Duration: {audio.get('duration')}")
```

## Link Analysis

Crawl4AI provides sophisticated link analysis capabilities, helping you understand the relationships between pages and identify important navigation patterns.

### Link Classification

The library automatically categorizes links into:

- Internal links (same domain)
- External links (different domains)
- Social media links
- Navigation links
- Content links

```python
from crawl4ai.async_configs import CrawlerRunConfig

config = CrawlerRunConfig()
result = await crawler.arun(url="https://example.com", config=config)

# Analyze internal links
for link in result.links["internal"]:
    print(f"Internal: {link['href']}")
    print(f"Link text: {link['text']}")
    print(f"Context: {link['context']}")  # Surrounding text
    print(f"Type: {link['type']}")        # nav, content, etc.

# Analyze external links
for link in result.links["external"]:
    print(f"External: {link['href']}")
    print(f"Domain: {link['domain']}")
    print(f"Type: {link['type']}")
```

### Smart Link Filtering

Control which links are included in the results with `CrawlerRunConfig`:

```python
config = CrawlerRunConfig(
    exclude_external_links=True,      # Remove external links
    exclude_social_media_links=True,  # Remove social media links
    exclude_social_media_domains=[    # Custom social media domains
        "facebook.com", "twitter.com", "instagram.com"
    ],
    exclude_domains=["ads.example.com"]  # Exclude specific domains
)
result = await crawler.arun(url="https://example.com", config=config)
```
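If you prefer to post-process links yourself, the domain-exclusion behavior can be approximated in plain Python over the link dicts shown earlier. This is a hypothetical sketch, not the library’s internal implementation:

```python
from urllib.parse import urlparse

def exclude_by_domain(links, excluded_domains):
    """Drop links whose host matches (or is a subdomain of) an excluded domain."""
    kept = []
    for link in links:
        host = urlparse(link["href"]).netloc.lower()
        if not any(host == d or host.endswith("." + d) for d in excluded_domains):
            kept.append(link)
    return kept

sample = [
    {"href": "https://example.com/page"},
    {"href": "https://ads.example.com/banner"},
    {"href": "https://blog.example.org/post"},
]
print([l["href"] for l in exclude_by_domain(sample, ["ads.example.com"])])
# → ['https://example.com/page', 'https://blog.example.org/post']
```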

## Metadata Extraction

Crawl4AI automatically extracts and processes page metadata, providing valuable information about the content:

```python
from crawl4ai.async_configs import CrawlerRunConfig

config = CrawlerRunConfig()
result = await crawler.arun(url="https://example.com", config=config)

metadata = result.metadata
print(f"Title: {metadata['title']}")
print(f"Description: {metadata['description']}")
print(f"Keywords: {metadata['keywords']}")
print(f"Author: {metadata['author']}")
print(f"Published Date: {metadata['published_date']}")
print(f"Modified Date: {metadata['modified_date']}")
print(f"Language: {metadata['language']}")
```
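Not every page declares all of these fields, so direct indexing can raise `KeyError`. A defensive pattern, shown on a hypothetical plain dict, is to fall back through `.get()`:

```python
def describe_page(metadata):
    """Build a one-line summary, tolerating missing metadata fields."""
    title = metadata.get("title") or "(untitled)"
    lang = metadata.get("language") or "unknown"
    author = metadata.get("author") or "unknown author"
    return f"{title} [{lang}] by {author}"

print(describe_page({"title": "Example Domain", "language": "en"}))
# → Example Domain [en] by unknown author
```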

---

*docs/md_v2/advanced/crawl-dispatcher.md* (new file)

# Crawl Dispatcher

We’re excited to announce a **Crawl Dispatcher** module that can handle **thousands** of crawling tasks simultaneously. By efficiently managing system resources (memory, CPU, network), this dispatcher ensures high-performance data extraction at scale. It also provides **real-time monitoring** of each crawler’s status, memory usage, and overall progress.

Stay tuned—this feature is **coming soon** in an upcoming release of Crawl4AI! For the latest news, keep an eye on our changelogs and follow [@unclecode](https://twitter.com/unclecode) on X.

Below is a **sample** of how the dispatcher’s performance monitor might look in action:



We can’t wait to bring you this streamlined, **scalable** approach to multi-URL crawling—**watch this space** for updates!

---

```python
# ...(end of the basic download example)
asyncio.run(main())
```

Or, enable it for a specific crawl by using `CrawlerRunConfig`:

```python
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_configs import CrawlerRunConfig

async def main():
    async with AsyncWebCrawler() as crawler:
        config = CrawlerRunConfig(accept_downloads=True)
        result = await crawler.arun(url="https://example.com", config=config)
        # ...
```

## Specifying Download Location

Specify the download directory using the `downloads_path` attribute in the `BrowserConfig` object. If not provided, Crawl4AI defaults to creating a "downloads" directory inside the `.crawl4ai` folder in your home directory.
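Putting the two settings together, a hedged sketch (the directory name is illustrative):

```python
import os
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig

# Illustrative location; if downloads_path is omitted, Crawl4AI
# defaults to a "downloads" folder under ~/.crawl4ai
download_dir = os.path.join(os.path.expanduser("~"), "my_downloads")
os.makedirs(download_dir, exist_ok=True)

browser_config = BrowserConfig(downloads_path=download_dir)
run_config = CrawlerRunConfig(accept_downloads=True)
# Pass these to AsyncWebCrawler(config=browser_config) and crawler.arun(..., config=run_config)
```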

```python
# Fragment from download_multiple_files(url: str, download_path: str)
js_code="""
    const downloadLinks = document.querySelectorAll('a[download]');
    for (const link of downloadLinks) {
        link.click();
        // Delay between clicks
        await new Promise(r => setTimeout(r, 2000));
    }
""",
wait_for=10  # Wait for all downloads to start
```

---

# Hooks & Auth in AsyncWebCrawler

Crawl4AI’s **hooks** let you customize the crawler at specific points in the pipeline:

1. **`on_browser_created`** – After browser creation.
2. **`on_page_context_created`** – After a new context & page are created.
3. **`before_goto`** – Just before navigating to a page.
4. **`after_goto`** – Right after navigation completes.
5. **`on_user_agent_updated`** – Whenever the user agent changes.
6. **`on_execution_started`** – Once custom JavaScript execution begins.
7. **`before_retrieve_html`** – Just before the crawler retrieves final HTML.
8. **`before_return_html`** – Right before returning the HTML content.

**Important**: Avoid heavy tasks in `on_browser_created` since you don’t yet have a page context. If you need to *log in*, do so in **`on_page_context_created`**.

> note "Important Hook Usage Warning"
>
> **Avoid Misusing Hooks**: Do not manipulate page objects in the wrong hook or at the wrong time, as it can crash the pipeline or produce incorrect results. A common mistake is attempting to handle authentication prematurely—such as creating or closing pages in `on_browser_created`.
>
> **Use the Right Hook for Auth**: If you need to log in or set tokens, use `on_page_context_created`. This ensures you have a valid page/context to work with, without disrupting the main crawling flow.
>
> **Identity-Based Crawling**: For robust auth, consider identity-based crawling (or passing a session ID) to preserve state. Run your initial login steps in a separate, well-defined process, then feed that session to your main crawl—rather than shoehorning complex authentication into early hooks. Check out [Identity-Based Crawling](../advanced/identity-based-crawling.md) for more details.
>
> **Be Cautious**: Overwriting or removing elements in the wrong hook can compromise the final crawl. Keep hooks focused on smaller tasks (like route filters, custom headers), and let your main logic (crawling, data extraction) proceed normally.

Below is an example demonstration.

---

## Example: Using Hooks in AsyncWebCrawler

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
from playwright.async_api import Page, BrowserContext

async def main():
    print("🔗 Hooks Example: Demonstrating recommended usage")

    # 1) Configure the browser
    browser_config = BrowserConfig(
        headless=True,
        verbose=True
    )

    # 2) Configure the crawler run
    crawler_run_config = CrawlerRunConfig(
        js_code="window.scrollTo(0, document.body.scrollHeight);",
        wait_for="body",
        cache_mode=CacheMode.BYPASS
    )

    # 3) Create the crawler instance
    crawler = AsyncWebCrawler(config=browser_config)

    #
    # Define Hook Functions
    #

    async def on_browser_created(browser, **kwargs):
        # Called once the browser instance is created (but no pages or contexts yet)
        print("[HOOK] on_browser_created - Browser created successfully!")
        # Typically, do minimal setup here if needed
        return browser

    async def on_page_context_created(page: Page, context: BrowserContext, **kwargs):
        # Called right after a new page + context are created (ideal for auth or route config).
        print("[HOOK] on_page_context_created - Setting up page & context.")

        # Example 1: Route filtering (e.g., block images)
        async def route_filter(route):
            if route.request.resource_type == "image":
                print(f"[HOOK] Blocking image request: {route.request.url}")
                await route.abort()
            else:
                await route.continue_()

        await context.route("**", route_filter)

        # Example 2: (Optional) Simulate a login scenario
        # (We do NOT create or close pages here, just do quick steps if needed)
        # e.g., await page.goto("https://example.com/login")
        # e.g., await page.fill("input[name='username']", "testuser")
        # e.g., await page.fill("input[name='password']", "password123")
        # e.g., await page.click("button[type='submit']")
        # e.g., await page.wait_for_selector("#welcome")
        # e.g., await context.add_cookies([...])
        # Then continue

        # Example 3: Adjust the viewport
        await page.set_viewport_size({"width": 1080, "height": 600})
        return page

    async def before_goto(page: Page, context: BrowserContext, url: str, **kwargs):
        # Called before navigating to each URL.
        print(f"[HOOK] before_goto - About to navigate: {url}")
        # e.g., inject custom headers
        await page.set_extra_http_headers({
            "Custom-Header": "my-value"
        })
        return page

    async def after_goto(page: Page, context: BrowserContext, url: str, response, **kwargs):
        # Called after navigation completes.
        print(f"[HOOK] after_goto - Successfully loaded: {url}")
        # e.g., wait for a certain element if we want to verify
        try:
            await page.wait_for_selector(".content", timeout=1000)
            print("[HOOK] Found .content element!")
        except Exception:
            print("[HOOK] .content not found, continuing anyway.")
        return page

    async def on_user_agent_updated(page: Page, context: BrowserContext, user_agent: str, **kwargs):
        # Called whenever the user agent updates.
        print(f"[HOOK] on_user_agent_updated - New user agent: {user_agent}")
        return page

    async def on_execution_started(page: Page, context: BrowserContext, **kwargs):
        # Called after custom JavaScript execution begins.
        print("[HOOK] on_execution_started - JS code is running!")
        return page

    async def before_retrieve_html(page: Page, context: BrowserContext, **kwargs):
        # Called before final HTML retrieval.
        print("[HOOK] before_retrieve_html - We can do final actions")
        # Example: Scroll again
        await page.evaluate("window.scrollTo(0, document.body.scrollHeight);")
        return page

    async def before_return_html(page: Page, context: BrowserContext, html: str, **kwargs):
        # Called just before returning the HTML in the result.
        print(f"[HOOK] before_return_html - HTML length: {len(html)}")
        return page

    #
    # Attach Hooks
    #

    crawler.crawler_strategy.set_hook("on_browser_created", on_browser_created)
    crawler.crawler_strategy.set_hook("on_page_context_created", on_page_context_created)
    crawler.crawler_strategy.set_hook("before_goto", before_goto)
    crawler.crawler_strategy.set_hook("after_goto", after_goto)
    crawler.crawler_strategy.set_hook("on_user_agent_updated", on_user_agent_updated)
    crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)
    crawler.crawler_strategy.set_hook("before_retrieve_html", before_retrieve_html)
    crawler.crawler_strategy.set_hook("before_return_html", before_return_html)

    await crawler.start()

    # 4) Run the crawler on an example page
    url = "https://example.com"
    result = await crawler.arun(url, config=crawler_run_config)

    if result.success:
        print("\nCrawled URL:", result.url)
        print("HTML length:", len(result.html))
    else:
        print("Error:", result.error_message)

    await crawler.close()

if __name__ == "__main__":
    asyncio.run(main())
```

---

## Hook Lifecycle Summary

1. **`on_browser_created`**:
   - Browser is up, but **no** pages or contexts yet.
   - Light setup only—don’t try to open or close pages here (that belongs in `on_page_context_created`).

2. **`on_page_context_created`**:
   - Perfect for advanced **auth** or route blocking.
   - You have a **page** + **context** ready but haven’t navigated to the target URL yet.

3. **`before_goto`**:
   - Right before navigation. Typically used for setting **custom headers** or logging the target URL.

4. **`after_goto`**:
   - After page navigation is done. A good place for verifying content or waiting on essential elements.

5. **`on_user_agent_updated`**:
   - Whenever the user agent changes (for stealth or different UA modes).

6. **`on_execution_started`**:
   - If you set `js_code` or run custom scripts, this runs once your JS is about to start.

7. **`before_retrieve_html`**:
   - Just before the final HTML snapshot is taken. Often you do a final scroll or trigger lazy loading here.

8. **`before_return_html`**:
   - The last hook before returning HTML to the `CrawlResult`. Good for logging HTML length or minor modifications.

---

## When to Handle Authentication

**Recommended**: Use **`on_page_context_created`** if you need to:

- Navigate to a login page or fill forms
- Set cookies or localStorage tokens
- Block resource routes to avoid ads

This ensures the newly created context is under your control **before** `arun()` navigates to the main URL.

---

## Additional Considerations

- **Session Management**: If you want multiple `arun()` calls to reuse a single session, pass `session_id=` in your `CrawlerRunConfig`. Hooks remain the same.
- **Performance**: Hooks can slow down crawling if they do heavy tasks. Keep them concise.
- **Error Handling**: If a hook fails, the overall crawl might fail. Catch exceptions or handle them gracefully.
- **Concurrency**: If you run `arun_many()`, each URL triggers these hooks in parallel. Ensure your hooks are thread/async-safe.
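On the error-handling point: one way to keep a single failing hook from aborting the whole crawl is to wrap each hook in a guard before attaching it. This helper is hypothetical (not part of the Crawl4AI API), shown as a plain-asyncio sketch:

```python
import asyncio

def safe_hook(hook):
    """Wrap an async hook so exceptions are logged instead of propagated."""
    async def wrapper(*args, **kwargs):
        try:
            return await hook(*args, **kwargs)
        except Exception as exc:
            print(f"[HOOK ERROR] {hook.__name__}: {exc}")
            return None
    return wrapper

# Usage sketch:
# crawler.crawler_strategy.set_hook("after_goto", safe_hook(after_goto))

async def flaky_hook(page=None, **kwargs):
    raise RuntimeError("boom")

print(asyncio.run(safe_hook(flaky_hook)()))  # → None (the error is logged, not raised)
```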

---

## Conclusion

Hooks provide **fine-grained** control over:

- **Browser** creation (light tasks only)
- **Page** and **context** creation (auth, route blocking)
- **Navigation** phases
- **Final HTML** retrieval

Follow the recommended usage:

- **Login** or advanced tasks in `on_page_context_created`
- **Custom headers** or logs in `before_goto` / `after_goto`
- **Scrolling** or final checks in `before_retrieve_html` / `before_return_html`

180
docs/md_v2/advanced/identity-based-crawling.md
Normal file
180
docs/md_v2/advanced/identity-based-crawling.md
Normal file
@@ -0,0 +1,180 @@
|
||||
# Preserve Your Identity with Crawl4AI
|
||||
|
||||
Crawl4AI empowers you to navigate and interact with the web using your **authentic digital identity**, ensuring you’re recognized as a human and not mistaken for a bot. This tutorial covers:
|
||||
|
||||
1. **Managed Browsers** – The recommended approach for persistent profiles and identity-based crawling.
|
||||
2. **Magic Mode** – A simplified fallback solution for quick automation without persistent identity.
|
||||
|
||||
---
|
||||
|
||||
## 1. Managed Browsers: Your Digital Identity Solution
|
||||
|
||||
**Managed Browsers** let developers create and use **persistent browser profiles**. These profiles store local storage, cookies, and other session data, letting you browse as your **real self**—complete with logins, preferences, and cookies.
|
||||
|
||||
### Key Benefits
|
||||
|
||||
- **Authentic Browsing Experience**: Retain session data and browser fingerprints as though you’re a normal user.
|
||||
- **Effortless Configuration**: Once you log in or solve CAPTCHAs in your chosen data directory, you can re-run crawls without repeating those steps.
|
||||
- **Empowered Data Access**: If you can see the data in your own browser, you can automate its retrieval with your genuine identity.
|
||||
|
||||
---
|
||||
|
||||
Below is a **partial update** to your **Managed Browsers** tutorial, specifically the section about **creating a user-data directory** using **Playwright’s Chromium** binary rather than a system-wide Chrome/Edge. We’ll show how to **locate** that binary and launch it with a `--user-data-dir` argument to set up your profile. You can then point `BrowserConfig.user_data_dir` to that folder for subsequent crawls.
|
||||
|
||||
---
|
||||
|
||||
### Creating a User Data Directory (Command-Line Approach via Playwright)
|
||||
|
||||
If you installed Crawl4AI (which installs Playwright under the hood), you already have a Playwright-managed Chromium on your system. Follow these steps to launch that **Chromium** from your command line, specifying a **custom** data directory:
|
||||
|
||||
1. **Find** the Playwright Chromium binary:
|
||||
- On most systems, installed browsers go under a `~/.cache/ms-playwright/` folder or similar path.
|
||||
- To see an overview of installed browsers, run:
|
||||
```bash
|
||||
python -m playwright install --dry-run
|
||||
```
|
||||
or
|
||||
```bash
|
||||
playwright install --dry-run
|
||||
```
|
||||
(depending on your environment). This shows where Playwright keeps Chromium.
|
||||
|
||||
- For instance, you might see a path like:
|
||||
```
|
||||
~/.cache/ms-playwright/chromium-1234/chrome-linux/chrome
|
||||
```
|
||||
on Linux, or a corresponding folder on macOS/Windows.
|
||||
|
||||
2. **Launch** the Playwright Chromium binary with a **custom** user-data directory:
|
||||
```bash
|
||||
# Linux example
|
||||
~/.cache/ms-playwright/chromium-1234/chrome-linux/chrome \
|
||||
--user-data-dir=/home/<you>/my_chrome_profile
|
||||
```
|
||||
```bash
|
||||
# macOS example (Playwright’s internal binary)
|
||||
~/Library/Caches/ms-playwright/chromium-1234/chrome-mac/Chromium.app/Contents/MacOS/Chromium \
|
||||
--user-data-dir=/Users/<you>/my_chrome_profile
|
||||
```
|
||||
```powershell
|
||||
# Windows example (PowerShell/cmd)
|
||||
"C:\Users\<you>\AppData\Local\ms-playwright\chromium-1234\chrome-win\chrome.exe" ^
|
||||
--user-data-dir="C:\Users\<you>\my_chrome_profile"
|
||||
```
|
||||
|
||||
**Replace** the path with the actual subfolder indicated in your `ms-playwright` cache structure.
|
||||
- This **opens** a fresh Chromium with your new or existing data folder.
|
||||
- **Log into** any sites or configure your browser the way you want.
|
||||
- **Close** when done—your profile data is saved in that folder.
|
||||
|
||||
3. **Use** that folder in **`BrowserConfig.user_data_dir`**:
|
||||
```python
|
||||
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
|
||||
|
||||
browser_config = BrowserConfig(
|
||||
headless=True,
|
||||
use_managed_browser=True,
|
||||
user_data_dir="/home/<you>/my_chrome_profile",
|
||||
browser_type="chromium"
|
||||
)
|
||||
```
|
||||
- Next time you run your code, it reuses that folder—**preserving** your session data, cookies, local storage, etc.
|
||||
|
||||
---
|
||||
|
||||
## 3. Using Managed Browsers in Crawl4AI
|
||||
|
||||
Once you have a data directory with your session data, pass it to **`BrowserConfig`**:
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
|
||||
|
||||
async def main():
|
||||
# 1) Reference your persistent data directory
|
||||
browser_config = BrowserConfig(
|
||||
headless=True, # 'True' for automated runs
|
||||
verbose=True,
|
||||
use_managed_browser=True, # Enables persistent browser strategy
|
||||
browser_type="chromium",
|
||||
user_data_dir="/path/to/my-chrome-profile"
|
||||
)
|
||||
|
||||
# 2) Standard crawl config
|
||||
crawl_config = CrawlerRunConfig(
|
||||
wait_for="css:.logged-in-content"
|
||||
)
|
||||
|
||||
async with AsyncWebCrawler(config=browser_config) as crawler:
|
||||
result = await crawler.arun(url="https://example.com/private", config=crawl_config)
|
||||
if result.success:
|
||||
print("Successfully accessed private data with your identity!")
|
||||
else:
|
||||
print("Error:", result.error_message)
|
||||
|
||||
if __name__ == "__main__":
|
||||
asyncio.run(main())
|
||||
```
|
||||
|
||||
### Workflow
|
||||
|
||||
1. **Login** externally (via CLI or your normal Chrome with `--user-data-dir=...`).
|
||||
2. **Close** that browser.
|
||||
3. **Use** the same folder in `user_data_dir=` in Crawl4AI.
|
||||
4. **Crawl** – The site sees your identity as if you’re the same user who just logged in.
|
||||
|
||||
---
|
||||
|
||||
## 4. Magic Mode: Simplified Automation
|
||||
|
||||
If you **don’t** need a persistent profile or identity-based approach, **Magic Mode** offers a quick way to simulate human-like browsing without storing long-term data.
|
||||
|
||||
```python
|
||||
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
|
||||
|
||||
async with AsyncWebCrawler() as crawler:
|
||||
result = await crawler.arun(
|
||||
url="https://example.com",
|
||||
config=CrawlerRunConfig(
|
||||
magic=True, # Simplifies a lot of interaction
|
||||
remove_overlay_elements=True,
|
||||
page_timeout=60000
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
**Magic Mode**:
|
||||
|
||||
- Simulates a user-like experience
|
||||
- Randomizes user agent & navigator
|
||||
- Randomizes interactions & timings
|
||||
- Masks automation signals
|
||||
- Attempts pop-up handling
|
||||
|
||||
**But** it’s no substitute for **true** user-based sessions if you want a fully legitimate identity-based solution.
|
||||
|
||||
---
|
||||
|
||||
## 5. Comparing Managed Browsers vs. Magic Mode

| Feature                 | **Managed Browsers**                                     | **Magic Mode**                                      |
|-------------------------|----------------------------------------------------------|-----------------------------------------------------|
| **Session Persistence** | Full localStorage/cookies retained in user_data_dir      | No persistent data (fresh each run)                 |
| **Genuine Identity**    | Real user profile with full rights & preferences         | Emulated user-like patterns, but no actual identity |
| **Complex Sites**       | Best for login-gated sites or heavy config               | Simple tasks, minimal login or config needed        |
| **Setup**               | External creation of user_data_dir, then use in Crawl4AI | Single-line approach (`magic=True`)                 |
| **Reliability**         | Extremely consistent (same data across runs)             | Good for smaller tasks, can be less stable          |

---

## 6. Summary

- **Create** your user-data directory by launching Chrome/Chromium externally with `--user-data-dir=/some/path`.
- **Log in** or configure sites as needed, then close the browser.
- **Reference** that folder in `BrowserConfig(user_data_dir="...")` + `use_managed_browser=True`.
- Enjoy **persistent** sessions that reflect your real identity.
- If you only need quick, ephemeral automation, **Magic Mode** might suffice.

**Recommended**: Always prefer a **Managed Browser** for robust, identity-based crawling and simpler interactions with complex sites. Use **Magic Mode** for quick tasks or prototypes where persistent data is unnecessary.

With these approaches, you preserve your **authentic** browsing environment, ensuring the site sees you exactly as a normal user—no repeated logins or wasted time.
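The first two summary steps can be sketched as a one-time setup command. The `chromium` binary name and the profile path below are assumptions; substitute your own browser executable and directory:

```python
def build_profile_launch_cmd(user_data_dir: str, binary: str = "chromium") -> list:
    """Build the one-time setup command: open a browser against a persistent
    profile directory, log in manually, then close it. The directory then
    holds the cookies/localStorage that later headless runs will reuse."""
    return [binary, f"--user-data-dir={user_data_dir}"]

cmd = build_profile_launch_cmd("/path/to/user_profile_data")
print(cmd)
# During the interactive setup run you would execute it, e.g. subprocess.run(cmd)
```

After that single interactive run, point `BrowserConfig(user_data_dir=...)` at the same directory for all subsequent headless crawls.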
@@ -1,156 +0,0 @@
### Preserve Your Identity with Crawl4AI

Crawl4AI empowers you to navigate and interact with the web using your authentic digital identity, ensuring that you are recognized as a human and not mistaken for a bot. This document introduces Managed Browsers, the recommended approach for preserving your rights to access the web, and Magic Mode, a simplified solution for specific scenarios.

---

### Managed Browsers: Your Digital Identity Solution

**Managed Browsers** enable developers to create and use persistent browser profiles. These profiles store local storage, cookies, and other session-related data, allowing you to interact with websites as a recognized user. By leveraging your unique identity, Managed Browsers ensure that your experience reflects your rights as a human browsing the web.

#### Why Use Managed Browsers?
1. **Authentic Browsing Experience**: Managed Browsers retain session data and browser fingerprints, mirroring genuine user behavior.
2. **Effortless Configuration**: Once you interact with the site using the browser (e.g., solving a CAPTCHA), the session data is saved and reused, providing seamless access.
3. **Empowered Data Access**: By using your identity, Managed Browsers empower users to access data they can view on their own screens without artificial restrictions.

#### Steps to Use Managed Browsers

1. **Set Up the Browser Configuration**:
   ```python
   from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
   from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

   browser_config = BrowserConfig(
       headless=False,  # Set to False for initial setup to view browser actions
       verbose=True,
       user_agent_mode="random",
       use_managed_browser=True,  # Enables persistent browser sessions
       browser_type="chromium",
       user_data_dir="/path/to/user_profile_data"  # Path to save session data
   )
   ```

2. **Perform an Initial Run**:
   - Run the crawler with `headless=False`.
   - Manually interact with the site (e.g., solve a CAPTCHA or log in).
   - The browser session saves cookies, local storage, and other required data.

3. **Subsequent Runs**:
   - Switch to `headless=True` for automation.
   - The session data is reused, allowing seamless crawling.

#### Example: Extracting Data Using Managed Browsers

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def main():
    # Define schema for structured data extraction
    schema = {
        "name": "Example Data",
        "baseSelector": "div.example",
        "fields": [
            {"name": "title", "selector": "h1", "type": "text"},
            {"name": "link", "selector": "a", "type": "attribute", "attribute": "href"}
        ]
    }

    # Configure crawler
    browser_config = BrowserConfig(
        headless=True,  # Automate subsequent runs
        verbose=True,
        use_managed_browser=True,
        user_data_dir="/path/to/user_profile_data"
    )

    crawl_config = CrawlerRunConfig(
        extraction_strategy=JsonCssExtractionStrategy(schema),
        wait_for="css:div.example"  # Wait for the targeted element to load
    )

    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun(
            url="https://example.com",
            config=crawl_config
        )

        if result.success:
            print("Extracted Data:", result.extracted_content)

if __name__ == "__main__":
    asyncio.run(main())
```

### Benefits of Managed Browsers Over Other Methods
Managed Browsers eliminate the need for manual detection workarounds by enabling developers to work directly with their identity and user profile data. This approach ensures maximum compatibility with websites and simplifies the crawling process while preserving your right to access data freely.

---

### Magic Mode: Simplified Automation

While Managed Browsers are the preferred approach, **Magic Mode** provides an alternative for scenarios where persistent user profiles are unnecessary or infeasible. Magic Mode automates user-like behavior and simplifies configuration.

#### What Magic Mode Does:
- Simulates human browsing by randomizing interaction patterns and timing.
- Masks browser automation signals.
- Handles cookie popups and modals.
- Modifies navigator properties for enhanced compatibility.

#### Using Magic Mode

```python
async with AsyncWebCrawler() as crawler:
    result = await crawler.arun(
        url="https://example.com",
        magic=True  # Enables all automation features
    )
```

Magic Mode is particularly useful for:
- Quick prototyping when a Managed Browser setup is not available.
- Basic sites requiring minimal interaction or configuration.

#### Example: Combining Magic Mode with Additional Options

```python
async def crawl_with_magic_mode(url: str):
    async with AsyncWebCrawler(headless=True) as crawler:
        result = await crawler.arun(
            url=url,
            magic=True,
            remove_overlay_elements=True,  # Remove popups/modals
            page_timeout=60000  # Increased timeout for complex pages
        )

    return result.markdown if result.success else None
```

### Magic Mode vs. Managed Browsers
While Magic Mode simplifies many tasks, it cannot match the reliability and authenticity of Managed Browsers. By using your identity and persistent profiles, Managed Browsers render Magic Mode largely unnecessary. However, Magic Mode remains a viable fallback for specific situations where user identity is not a factor.

---

### Key Comparison: Managed Browsers vs. Magic Mode

| Feature                 | **Managed Browsers**                      | **Magic Mode**                      |
|-------------------------|-------------------------------------------|-------------------------------------|
| **Session Persistence** | Retains cookies and local storage.        | No session retention.               |
| **Human Interaction**   | Uses real user profiles and data.         | Simulates human-like patterns.      |
| **Complex Sites**       | Best suited for heavily configured sites. | Works well with simpler challenges. |
| **Setup Complexity**    | Requires initial manual interaction.      | Fully automated, one-line setup.    |

#### Recommendation:
- Use **Managed Browsers** for reliable, session-based crawling and data extraction.
- Use **Magic Mode** for quick prototyping or when persistent profiles are not required.

---

### Conclusion

- **Use Managed Browsers** to preserve your digital identity and ensure reliable, identity-based crawling with persistent sessions. This approach works seamlessly for even the most complex websites.
- **Leverage Magic Mode** for quick automation or in scenarios where persistent user profiles are not needed.

By combining these approaches, Crawl4AI provides unparalleled flexibility and capability for your crawling needs.
104
docs/md_v2/advanced/lazy-loading.md
Normal file
@@ -0,0 +1,104 @@
## Handling Lazy-Loaded Images

Many websites now load images **lazily** as you scroll. If you need to ensure they appear in your final crawl (and in `result.media`), consider:

1. **`wait_for_images=True`** – Wait for images to fully load.
2. **`scan_full_page`** – Force the crawler to scroll the entire page, triggering lazy loads.
3. **`scroll_delay`** – Add small delays between scroll steps.

**Note**: If the site requires multiple “Load More” triggers or complex interactions, see the [Page Interaction docs](../core/page-interaction.md).

### Example: Ensuring Lazy Images Appear

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, BrowserConfig
from crawl4ai.async_configs import CacheMode

async def main():
    config = CrawlerRunConfig(
        # Force the crawler to wait until images are fully loaded
        wait_for_images=True,

        # Option 1: If you want to automatically scroll the page to load images
        scan_full_page=True,  # Tells the crawler to try scrolling the entire page
        scroll_delay=0.5,     # Delay (seconds) between scroll steps

        # Option 2: If the site uses a 'Load More' or JS triggers for images,
        # you can also specify js_code or wait_for logic here.

        cache_mode=CacheMode.BYPASS,
        verbose=True
    )

    async with AsyncWebCrawler(config=BrowserConfig(headless=True)) as crawler:
        result = await crawler.arun("https://www.example.com/gallery", config=config)

        if result.success:
            images = result.media.get("images", [])
            print("Images found:", len(images))
            for i, img in enumerate(images[:5]):
                print(f"[Image {i}] URL: {img['src']}, Score: {img.get('score', 'N/A')}")
        else:
            print("Error:", result.error_message)

if __name__ == "__main__":
    asyncio.run(main())
```

**Explanation**:

- **`wait_for_images=True`**
  The crawler tries to ensure images have finished loading before finalizing the HTML.
- **`scan_full_page=True`**
  Tells the crawler to attempt scrolling from top to bottom. Each scroll step helps trigger lazy loading.
- **`scroll_delay=0.5`**
  Pauses half a second between scroll steps, giving the site time to load images before continuing.

**When to Use**:

- **Lazy-Loading**: If images appear only when the user scrolls into view, `scan_full_page` + `scroll_delay` helps the crawler see them.
- **Heavier Pages**: If a page is extremely long, be mindful that scanning the entire page can be slow. Adjust `scroll_delay` or the max scroll steps as needed.
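As a back-of-envelope check on that cost, one delay per viewport-sized scroll step gives a rough lower bound on scan time. The page height and viewport values below are illustrative, not defaults:

```python
import math

def estimated_scan_time(page_height_px: int, viewport_px: int, scroll_delay_s: float) -> float:
    """Rough lower bound on full-page scan time: one delay per scroll step.
    Ignores per-step rendering and network time, so real scans take longer."""
    steps = math.ceil(page_height_px / viewport_px)
    return steps * scroll_delay_s

# A 20,000 px tall page viewed through a 1,000 px viewport at scroll_delay=0.5
print(estimated_scan_time(20_000, 1_000, 0.5))  # → 10.0 seconds, before load time
```

If the estimate is unacceptable, lower `scroll_delay` or restrict the crawl to the sections you actually need.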
---

## Combining with Other Link & Media Filters

You can still combine **lazy-load** logic with the usual **exclude_external_images**, **exclude_domains**, or link filtration:

```python
config = CrawlerRunConfig(
    wait_for_images=True,
    scan_full_page=True,
    scroll_delay=0.5,

    # Filter out external images if you only want local ones
    exclude_external_images=True,

    # Exclude certain domains for links
    exclude_domains=["spammycdn.com"],
)
```

This approach ensures you see **all** images from the main domain while ignoring external ones, and the crawler physically scrolls the entire page so that lazy loading triggers.

---
## Tips & Troubleshooting

1. **Long Pages**
   - Setting `scan_full_page=True` on extremely long or infinite-scroll pages can be resource-intensive.
   - Consider using [hooks](../core/page-interaction.md) or specialized logic to load specific sections or “Load More” triggers repeatedly.

2. **Mixed Image Behavior**
   - Some sites load images in batches as you scroll. If you’re missing images, increase your `scroll_delay` or run multiple partial scrolls in a loop with JS code or hooks.

3. **Combining with Dynamic Wait**
   - If the site has a placeholder that only changes to a real image after a certain event, you might use `wait_for="css:img.loaded"` or a custom JS `wait_for`.

4. **Caching**
   - If `cache_mode` is enabled, repeated crawls might skip some network fetches. If you suspect caching is hiding new images, set `cache_mode=CacheMode.BYPASS` for fresh fetches.
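The partial-scroll loop from tip 2 can be sketched as a script you pass through `js_code`. The step count and per-step delay below are assumptions to tune per site; this snippet is illustrative, not a Crawl4AI built-in:

```python
# Illustrative incremental-scroll script to pass via js_code.
scroll_script = """
(async () => {
    const steps = 10;  // number of viewport-sized partial scrolls (assumption)
    for (let i = 0; i < steps; i++) {
        window.scrollBy(0, window.innerHeight);
        await new Promise(r => setTimeout(r, 500));  // give images time to load
    }
})();
"""

# Passed alongside the lazy-load settings, e.g.:
# config = CrawlerRunConfig(wait_for_images=True, js_code=[scroll_script])
```

Raise `steps` (or loop until `scrollY` stops changing) for very tall pages.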
---

With **lazy-loading** support, **wait_for_images**, and **scan_full_page** settings, you can capture the entire gallery or feed of images you expect—even if the site only loads them as the user scrolls. Combine these with the standard media filtering and domain exclusion for a complete link & media handling strategy.
@@ -1,52 +0,0 @@
# Magic Mode & Anti-Bot Protection

Crawl4AI provides powerful anti-detection capabilities, with Magic Mode being the simplest and most comprehensive solution.

## Magic Mode

The easiest way to bypass anti-bot protections:

```python
async with AsyncWebCrawler() as crawler:
    result = await crawler.arun(
        url="https://example.com",
        magic=True  # Enables all anti-detection features
    )
```

Magic Mode automatically:
- Masks browser automation signals
- Simulates human-like behavior
- Overrides navigator properties
- Handles cookie consent popups
- Manages browser fingerprinting
- Randomizes timing patterns

## Manual Anti-Bot Options

While Magic Mode is recommended, you can also configure individual anti-detection features:

```python
result = await crawler.arun(
    url="https://example.com",
    simulate_user=True,      # Simulate human behavior
    override_navigator=True  # Mask automation signals
)
```

Note: When `magic=True` is used, you don't need to set these individual options.

## Example: Handling Protected Sites

```python
async def crawl_protected_site(url: str):
    async with AsyncWebCrawler(headless=True) as crawler:
        result = await crawler.arun(
            url=url,
            magic=True,
            remove_overlay_elements=True,  # Remove popups/modals
            page_timeout=60000  # Increased timeout for protection checks
        )

    return result.markdown if result.success else None
```
@@ -1,188 +0,0 @@
# Creating Browser Instances, Contexts, and Pages

## 1. Introduction

### Overview of Browser Management in Crawl4AI
Crawl4AI's browser management system is designed to provide developers with advanced tools for handling complex web crawling tasks. By managing browser instances, contexts, and pages, Crawl4AI ensures optimal performance, anti-bot measures, and session persistence for high-volume, dynamic web crawling.

### Key Objectives
- **Anti-Bot Handling**:
  - Implements stealth techniques to evade detection mechanisms used by modern websites.
  - Simulates human-like behavior, such as mouse movements, scrolling, and key presses.
  - Supports integration with third-party services to bypass CAPTCHA challenges.
- **Persistent Sessions**:
  - Retains session data (cookies, local storage) for workflows requiring user authentication.
  - Allows seamless continuation of tasks across multiple runs without re-authentication.
- **Scalable Crawling**:
  - Optimized resource utilization for handling thousands of URLs concurrently.
  - Flexible configuration options to tailor crawling behavior to specific requirements.

---

## 2. Browser Creation Methods

### Standard Browser Creation
Standard browser creation initializes a browser instance with default or minimal configurations. It is suitable for tasks that do not require session persistence or heavy customization.

#### Features and Limitations
- **Features**:
  - Quick and straightforward setup for small-scale tasks.
  - Supports headless and headful modes.
- **Limitations**:
  - Lacks advanced customization options like session reuse.
  - May struggle with sites employing strict anti-bot measures.

#### Example Usage
```python
from crawl4ai import AsyncWebCrawler, BrowserConfig

browser_config = BrowserConfig(browser_type="chromium", headless=True)
async with AsyncWebCrawler(config=browser_config) as crawler:
    result = await crawler.arun("https://crawl4ai.com")
    print(result.markdown)
```

### Persistent Contexts
Persistent contexts create browser sessions with stored data, enabling workflows that require maintaining login states or other session-specific information.

#### Benefits of Using `user_data_dir`
- **Session Persistence**:
  - Stores cookies, local storage, and cache between crawling sessions.
  - Reduces overhead for repetitive logins or multi-step workflows.
- **Enhanced Performance**:
  - Leverages pre-loaded resources for faster page loading.
- **Flexibility**:
  - Adapts to complex workflows requiring user-specific configurations.

#### Example: Setting Up Persistent Contexts
```python
config = BrowserConfig(user_data_dir="/path/to/user/data")
async with AsyncWebCrawler(config=config) as crawler:
    result = await crawler.arun("https://crawl4ai.com")
    print(result.markdown)
```

### Managed Browser
The `ManagedBrowser` class offers a high-level abstraction for managing browser instances, emphasizing resource management, debugging capabilities, and anti-bot measures.

#### How It Works
- **Browser Process Management**:
  - Automates initialization and cleanup of browser processes.
  - Optimizes resource usage by pooling and reusing browser instances.
- **Debugging Support**:
  - Integrates with debugging tools like Chrome Developer Tools for real-time inspection.
- **Anti-Bot Measures**:
  - Implements stealth plugins to mimic real user behavior and bypass bot detection.

#### Features
- **Customizable Configurations**:
  - Supports advanced options such as viewport resizing, proxy settings, and header manipulation.
- **Debugging and Logging**:
  - Logs detailed browser interactions for debugging and performance analysis.
- **Scalability**:
  - Handles multiple browser instances concurrently, scaling dynamically based on workload.

#### Example: Using `ManagedBrowser`
```python
from crawl4ai import AsyncWebCrawler, BrowserConfig

config = BrowserConfig(headless=False, debug_port=9222)
async with AsyncWebCrawler(config=config) as crawler:
    result = await crawler.arun("https://crawl4ai.com")
    print(result.markdown)
```

---

## 3. Context and Page Management

### Creating and Configuring Browser Contexts
Browser contexts act as isolated environments within a single browser instance, enabling independent browsing sessions with their own cookies, cache, and storage.

#### Customizations
- **Headers and Cookies**:
  - Define custom headers to mimic specific devices or browsers.
  - Set cookies for authenticated sessions.
- **Session Reuse**:
  - Retain and reuse session data across multiple requests.
  - Example: preserve login states for authenticated crawls.

#### Example: Context Initialization
```python
from crawl4ai import CrawlerRunConfig

config = CrawlerRunConfig(headers={"User-Agent": "Crawl4AI/1.0"})
async with AsyncWebCrawler() as crawler:
    result = await crawler.arun("https://crawl4ai.com", config=config)
    print(result.markdown)
```

### Creating Pages
Pages represent individual tabs or views within a browser context. They are responsible for rendering content, executing JavaScript, and handling user interactions.

#### Key Features
- **IFrame Handling**:
  - Extract content from embedded iframes.
  - Navigate and interact with nested content.
- **Viewport Customization**:
  - Adjust viewport size to match target device dimensions.
- **Lazy Loading**:
  - Ensure dynamic elements are fully loaded before extraction.

#### Example: Page Initialization
```python
config = CrawlerRunConfig(viewport_width=1920, viewport_height=1080)
async with AsyncWebCrawler() as crawler:
    result = await crawler.arun("https://crawl4ai.com", config=config)
    print(result.markdown)
```

---

## 4. Advanced Features and Best Practices

### Debugging and Logging
Remote debugging provides a powerful way to troubleshoot complex crawling workflows.

#### Example: Enabling Remote Debugging
```python
config = BrowserConfig(debug_port=9222)
async with AsyncWebCrawler(config=config) as crawler:
    result = await crawler.arun("https://crawl4ai.com")
```

### Anti-Bot Techniques
- **Human Behavior Simulation**:
  - Mimic real user actions, such as scrolling, clicking, and typing.
  - Example: use JavaScript to simulate interactions.
- **Captcha Handling**:
  - Integrate with third-party services like 2Captcha or AntiCaptcha for automated solving.

#### Example: Simulating User Actions
```python
js_code = """
(async () => {
    document.querySelector('input[name="search"]').value = 'test';
    document.querySelector('button[type="submit"]').click();
})();
"""
config = CrawlerRunConfig(js_code=[js_code])
async with AsyncWebCrawler() as crawler:
    result = await crawler.arun("https://crawl4ai.com", config=config)
```

### Optimizations for Performance and Scalability
- **Persistent Contexts**:
  - Reuse browser contexts to minimize resource consumption.
- **Concurrent Crawls**:
  - Use `arun_many` with a controlled semaphore count for efficient batch processing.

#### Example: Scaling Crawls
```python
urls = ["https://example1.com", "https://example2.com"]
config = CrawlerRunConfig(semaphore_count=10)
async with AsyncWebCrawler() as crawler:
    results = await crawler.arun_many(urls, config=config)
    for result in results:
        print(result.url, result.markdown)
```
264
docs/md_v2/advanced/multi-url-crawling.md
Normal file
@@ -0,0 +1,264 @@
# Optimized Multi-URL Crawling

> **Note**: We’re developing a new **executor module** that uses a sophisticated algorithm to dynamically manage multi-URL crawling, optimizing for speed and memory usage. The approaches in this document remain fully valid, but keep an eye on **Crawl4AI**’s upcoming releases for this powerful feature! Follow [@unclecode](https://twitter.com/unclecode) on X and check the changelogs to stay updated.

Crawl4AI’s **AsyncWebCrawler** can handle multiple URLs in a single run, which can greatly reduce overhead and speed up crawling. This guide shows how to:

1. **Sequentially** crawl a list of URLs using the **same** session, avoiding repeated browser creation.
2. **Parallel**-crawl subsets of URLs in batches, again reusing the same browser.

When the entire process finishes, you close the browser once—**minimizing** memory and resource usage.

---

## 1. Why Avoid Simple Loops per URL?

If you naively do:

```python
for url in urls:
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url)
```

you end up:

1. Spinning up a **new** browser for each URL
2. Closing it immediately after the single crawl
3. Potentially using a lot of CPU/memory for short-lived browsers
4. Missing out on session reuse if you have login or other ongoing state

**Better** approaches ensure you **create** the browser once, then crawl multiple URLs with minimal overhead.

---
## 2. Sequential Crawling with Session Reuse

### 2.1 Overview

1. **One** `AsyncWebCrawler` instance for **all** URLs.
2. **One** session (via `session_id`) so we can preserve local storage or cookies across URLs if needed.
3. The crawler is only closed at the **end**.

**This** is the simplest pattern if your workload is moderate (dozens to a few hundred URLs).

### 2.2 Example Code

```python
import asyncio
from typing import List
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

async def crawl_sequential(urls: List[str]):
    print("\n=== Sequential Crawling with Session Reuse ===")

    browser_config = BrowserConfig(
        headless=True,
        # For better performance in Docker or low-memory environments:
        extra_args=["--disable-gpu", "--disable-dev-shm-usage", "--no-sandbox"],
    )

    crawl_config = CrawlerRunConfig(
        markdown_generator=DefaultMarkdownGenerator()
    )

    # Create the crawler (opens the browser)
    crawler = AsyncWebCrawler(config=browser_config)
    await crawler.start()

    try:
        session_id = "session1"  # Reuse the same session across all URLs
        for url in urls:
            result = await crawler.arun(
                url=url,
                config=crawl_config,
                session_id=session_id
            )
            if result.success:
                print(f"Successfully crawled: {url}")
                # E.g. check markdown length
                print(f"Markdown length: {len(result.markdown_v2.raw_markdown)}")
            else:
                print(f"Failed: {url} - Error: {result.error_message}")
    finally:
        # After all URLs are done, close the crawler (and the browser)
        await crawler.close()

async def main():
    urls = [
        "https://example.com/page1",
        "https://example.com/page2",
        "https://example.com/page3"
    ]
    await crawl_sequential(urls)

if __name__ == "__main__":
    asyncio.run(main())
```

**Why It’s Good**:

- **One** browser launch.
- Minimal memory usage.
- If the site requires login, you can log in once in the `session_id` context and preserve auth across all URLs.

---
## 3. Parallel Crawling with Browser Reuse

### 3.1 Overview

To speed up crawling further, you can crawl multiple URLs in **parallel** (in batches or with a concurrency limit). The crawler still uses **one** browser, but spawns different sessions (or the same, depending on your logic) for each task.

### 3.2 Example Code

For this example, make sure to install the [psutil](https://pypi.org/project/psutil/) package:

```bash
pip install psutil
```

Then you can run the following code:

```python
import os
import sys
import psutil
import asyncio

__location__ = os.path.dirname(os.path.abspath(__file__))
__output__ = os.path.join(__location__, "output")

# Append parent directory to system path
parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(parent_dir)

from typing import List
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

async def crawl_parallel(urls: List[str], max_concurrent: int = 3):
    print("\n=== Parallel Crawling with Browser Reuse + Memory Check ===")

    # We'll keep track of peak memory usage across all tasks
    peak_memory = 0
    process = psutil.Process(os.getpid())

    def log_memory(prefix: str = ""):
        nonlocal peak_memory
        current_mem = process.memory_info().rss  # in bytes
        if current_mem > peak_memory:
            peak_memory = current_mem
        print(f"{prefix} Current Memory: {current_mem // (1024 * 1024)} MB, Peak: {peak_memory // (1024 * 1024)} MB")

    # Minimal browser config
    browser_config = BrowserConfig(
        headless=True,
        verbose=False,
        extra_args=["--disable-gpu", "--disable-dev-shm-usage", "--no-sandbox"],
    )
    crawl_config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)

    # Create the crawler instance
    crawler = AsyncWebCrawler(config=browser_config)
    await crawler.start()

    try:
        # We'll chunk the URLs in batches of 'max_concurrent'
        success_count = 0
        fail_count = 0
        for i in range(0, len(urls), max_concurrent):
            batch = urls[i : i + max_concurrent]
            tasks = []

            for j, url in enumerate(batch):
                # Unique session_id per concurrent sub-task
                session_id = f"parallel_session_{i + j}"
                task = crawler.arun(url=url, config=crawl_config, session_id=session_id)
                tasks.append(task)

            # Check memory usage prior to launching tasks
            log_memory(prefix=f"Before batch {i//max_concurrent + 1}: ")

            # Gather results
            results = await asyncio.gather(*tasks, return_exceptions=True)

            # Check memory usage after tasks complete
            log_memory(prefix=f"After batch {i//max_concurrent + 1}: ")

            # Evaluate results
            for url, result in zip(batch, results):
                if isinstance(result, Exception):
                    print(f"Error crawling {url}: {result}")
                    fail_count += 1
                elif result.success:
                    success_count += 1
                else:
                    fail_count += 1

        print(f"\nSummary:")
        print(f"  - Successfully crawled: {success_count}")
        print(f"  - Failed: {fail_count}")

    finally:
        print("\nClosing crawler...")
        await crawler.close()
        # Final memory log
        log_memory(prefix="Final: ")
        print(f"\nPeak memory usage (MB): {peak_memory // (1024 * 1024)}")

async def main():
    urls = [
        "https://example.com/page1",
        "https://example.com/page2",
        "https://example.com/page3",
        "https://example.com/page4"
    ]
    await crawl_parallel(urls, max_concurrent=2)

if __name__ == "__main__":
    asyncio.run(main())
```

**Notes**:

- We **reuse** the same `AsyncWebCrawler` instance for all parallel tasks, launching **one** browser.
- Each parallel sub-task might get its own `session_id` so they don’t share cookies/localStorage (unless that’s desired).
- We limit concurrency to `max_concurrent=2` or 3 to avoid saturating CPU/memory.

---
|
||||
|
||||
## 4. Performance Tips

1. **Extra Browser Args**
   - `--disable-gpu` and `--no-sandbox` can help in Docker or other restricted environments.
   - `--disable-dev-shm-usage` avoids using `/dev/shm`, which can be small on some systems.

2. **Session Reuse**
   - If your site requires a login or you want to maintain local data across URLs, share the **same** `session_id`.
   - If you want isolation (each URL fresh), create unique sessions.

3. **Batching**
   - If you have **many** URLs (thousands), crawl them in parallel chunks (e.g., `max_concurrent=5`).
   - Use `arun_many()` for a built-in approach if you prefer, but the example above is often more flexible.

4. **Cache**
   - If your pages share many resources or you’re re-crawling the same domain repeatedly, consider setting `cache_mode=CacheMode.ENABLED` in `CrawlerRunConfig`.
   - If you need fresh data each time, keep `cache_mode=CacheMode.BYPASS`.

5. **Hooks**
   - You can set up hooks globally for each crawler (e.g., to block images) or per run.
   - Keep them consistent if you’re reusing sessions.

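The batching tip above can be isolated into a small pure-Python helper; `chunked` is our illustrative name here, not a Crawl4AI API:

```python
from typing import Iterable, List

def chunked(items: List[str], size: int) -> Iterable[List[str]]:
    """Yield consecutive batches of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

urls = [f"https://example.com/page{n}" for n in range(1, 8)]
batches = list(chunked(urls, 3))
print([len(b) for b in batches])  # [3, 3, 1]
```

Each yielded batch can then be fed to `asyncio.gather()` exactly as in the loop above.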
---

## 5. Summary

- **One** `AsyncWebCrawler` + multiple calls to `.arun()` is far more efficient than launching a new crawler per URL.
- A **sequential** approach with a shared session is simple and memory-friendly for moderate sets of URLs.
- A **parallel** approach can speed up large crawls through concurrency, but keep concurrency balanced to avoid overhead.
- Close the crawler once at the end, ensuring the browser is opened and closed only once.

For even more advanced memory optimizations or dynamic concurrency patterns, see future sections on hooks or distributed crawling. The patterns above suffice for the majority of multi-URL scenarios, **giving you speed, simplicity, and minimal resource usage**. Enjoy your optimized crawling!

@@ -1,6 +1,4 @@
# Proxy & Security

Configure proxy settings and enhance security features in Crawl4AI for reliable data extraction.
# Proxy

## Basic Proxy Setup

@@ -58,38 +56,3 @@ async with AsyncWebCrawler(config=browser_config) as crawler:
    result = await crawler.arun(url=url, config=browser_config)
```

## Custom Headers

Add security-related headers via `BrowserConfig`:

```python
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_configs import BrowserConfig

headers = {
    "X-Forwarded-For": "203.0.113.195",
    "Accept-Language": "en-US,en;q=0.9",
    "Cache-Control": "no-cache",
    "Pragma": "no-cache"
}

browser_config = BrowserConfig(headers=headers)
async with AsyncWebCrawler(config=browser_config) as crawler:
    result = await crawler.arun(url="https://example.com")
```

## Combining with Magic Mode

For maximum protection, combine a proxy with Magic Mode via `CrawlerRunConfig` and `BrowserConfig`:

```python
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig

browser_config = BrowserConfig(
    proxy="http://proxy.example.com:8080",
    headers={"Accept-Language": "en-US"}
)
crawler_config = CrawlerRunConfig(magic=True)  # Enable all anti-detection features

async with AsyncWebCrawler(config=browser_config) as crawler:
    result = await crawler.arun(url="https://example.com", config=crawler_config)
```

@@ -1,179 +0,0 @@
### Session-Based Crawling for Dynamic Content

In modern web applications, content is often loaded dynamically without changing the URL. Examples include "Load More" buttons, infinite scrolling, and paginated content that updates via JavaScript. Crawl4AI provides session-based crawling capabilities to handle such scenarios effectively.

This guide explores advanced techniques for crawling dynamic content using Crawl4AI's session management features.

---

## Understanding Session-Based Crawling

Session-based crawling allows you to reuse a persistent browser session across multiple actions. This means the same browser tab (or page object) is used throughout, enabling:

1. **Efficient handling of dynamic content** without reloading the page.
2. **JavaScript actions before and after crawling** (e.g., clicking buttons or scrolling).
3. **State maintenance** for authenticated sessions or multi-step workflows.
4. **Faster sequential crawling**, as it avoids reopening tabs or reallocating resources.

**Note:** Session-based crawling is ideal for sequential operations, not parallel tasks.

---

## Basic Concepts

Before diving into examples, here are some key concepts:

- **Session ID**: A unique identifier for a browsing session. Use the same `session_id` across multiple requests to maintain state.
- **BrowserConfig & CrawlerRunConfig**: Configuration objects that control browser settings and crawling behavior.
- **JavaScript Execution**: Use `js_code` to perform actions like clicking buttons.
- **CSS Selectors**: Target specific elements for interaction or data extraction.
- **Extraction Strategy**: Define rules to extract structured data.
- **Wait Conditions**: Specify conditions to wait for before proceeding.

---

## Example 1: Basic Session-Based Crawling

A simple example using session-based crawling:

```python
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_configs import CrawlerRunConfig
from crawl4ai.cache_context import CacheMode

async def basic_session_crawl():
    async with AsyncWebCrawler() as crawler:
        session_id = "dynamic_content_session"
        url = "https://example.com/dynamic-content"

        for page in range(3):
            config = CrawlerRunConfig(
                url=url,
                session_id=session_id,
                js_code="document.querySelector('.load-more-button').click();" if page > 0 else None,
                css_selector=".content-item",
                cache_mode=CacheMode.BYPASS
            )

            result = await crawler.arun(config=config)
            print(f"Page {page + 1}: Found {result.extracted_content.count('.content-item')} items")

        await crawler.crawler_strategy.kill_session(session_id)

asyncio.run(basic_session_crawl())
```

This example shows:

1. Reusing the same `session_id` across multiple requests.
2. Executing JavaScript to load more content dynamically.
3. Properly closing the session to free resources.

---

## Advanced Technique 1: Custom Execution Hooks

Use custom hooks to handle complex scenarios, such as waiting for content to load dynamically:

```python
async def advanced_session_crawl_with_hooks():
    first_commit = ""

    async def on_execution_started(page):
        nonlocal first_commit
        try:
            while True:
                await page.wait_for_selector("li.commit-item h4")
                commit = await page.query_selector("li.commit-item h4")
                commit = (await commit.evaluate("(element) => element.textContent")).strip()
                if commit and commit != first_commit:
                    first_commit = commit
                    break
                await asyncio.sleep(0.5)
        except Exception as e:
            print(f"Warning: New content didn't appear: {e}")

    async with AsyncWebCrawler() as crawler:
        session_id = "commit_session"
        url = "https://github.com/example/repo/commits/main"
        crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)

        js_next_page = """document.querySelector('a.pagination-next').click();"""

        for page in range(3):
            config = CrawlerRunConfig(
                url=url,
                session_id=session_id,
                js_code=js_next_page if page > 0 else None,
                css_selector="li.commit-item",
                js_only=page > 0,
                cache_mode=CacheMode.BYPASS
            )

            result = await crawler.arun(config=config)
            print(f"Page {page + 1}: Found {len(result.extracted_content)} commits")

        await crawler.crawler_strategy.kill_session(session_id)

asyncio.run(advanced_session_crawl_with_hooks())
```

This technique ensures new content loads before the next action.

---

## Advanced Technique 2: Integrated JavaScript Execution and Waiting

Combine JavaScript execution and waiting logic for concise handling of dynamic content:

```python
async def integrated_js_and_wait_crawl():
    async with AsyncWebCrawler() as crawler:
        session_id = "integrated_session"
        url = "https://github.com/example/repo/commits/main"

        js_next_page_and_wait = """
        (async () => {
            const getCurrentCommit = () => document.querySelector('li.commit-item h4').textContent.trim();
            const initialCommit = getCurrentCommit();
            document.querySelector('a.pagination-next').click();
            while (getCurrentCommit() === initialCommit) {
                await new Promise(resolve => setTimeout(resolve, 100));
            }
        })();
        """

        for page in range(3):
            config = CrawlerRunConfig(
                url=url,
                session_id=session_id,
                js_code=js_next_page_and_wait if page > 0 else None,
                css_selector="li.commit-item",
                js_only=page > 0,
                cache_mode=CacheMode.BYPASS
            )

            result = await crawler.arun(config=config)
            print(f"Page {page + 1}: Found {len(result.extracted_content)} commits")

        await crawler.crawler_strategy.kill_session(session_id)

asyncio.run(integrated_js_and_wait_crawl())
```

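The JavaScript above polls until the first commit title changes. The same poll-until-changed pattern can be sketched as a generic asyncio helper; `wait_for_change` is our own illustrative name, not a Crawl4AI API:

```python
import asyncio

async def wait_for_change(get_value, initial, interval=0.1, timeout=5.0):
    """Poll get_value() until it differs from `initial` or the timeout expires."""
    elapsed = 0.0
    while elapsed < timeout:
        current = get_value()
        if current != initial:
            return current
        await asyncio.sleep(interval)
        elapsed += interval
    raise TimeoutError("value did not change in time")

async def demo():
    state = {"commit": "abc123"}

    async def flip():
        await asyncio.sleep(0.2)
        state["commit"] = "def456"

    flipper = asyncio.create_task(flip())
    new_value = await wait_for_change(lambda: state["commit"], "abc123")
    await flipper
    print(new_value)  # def456

asyncio.run(demo())
```

In a real crawl, `get_value` would be a callable that reads the page state (e.g., via a hook's `page` object).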
---

## Best Practices for Session-Based Crawling

1. **Unique Session IDs**: Assign descriptive and unique `session_id` values.
2. **Close Sessions**: Always clean up sessions with `kill_session` after use.
3. **Error Handling**: Anticipate and handle errors gracefully.
4. **Respect Websites**: Follow the site's terms of service and `robots.txt`.
5. **Delays**: Add delays between requests to avoid overwhelming servers.
6. **Optimize JavaScript**: Keep injected scripts concise for better performance.
7. **Monitor Resources**: Track memory and CPU usage for long-running sessions.

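The delay advice above can be sketched as a small jittered-delay helper; `polite_delay` and its bounds are our own illustrative choices, not a Crawl4AI API:

```python
import asyncio
import random

def polite_delay(base: float = 1.0, jitter: float = 0.5) -> float:
    """Return a delay of base plus or minus jitter seconds, never below zero."""
    return max(0.0, base + random.uniform(-jitter, jitter))

async def demo():
    for _ in range(3):
        delay = polite_delay(base=1.0, jitter=0.5)
        assert 0.5 <= delay <= 1.5
        # In a real crawl you would pause here: await asyncio.sleep(delay)

asyncio.run(demo())
```

Randomizing the pause keeps sequential requests from arriving at perfectly regular intervals.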
---

## Conclusion

Session-based crawling in Crawl4AI is a robust solution for handling dynamic content and multi-step workflows. By combining session management, JavaScript execution, and structured extraction strategies, you can effectively navigate and extract data from modern web applications. Always adhere to ethical web scraping practices and respect website policies.

@@ -1,4 +1,4 @@
### Session Management
# Session Management

Session management in Crawl4AI is a powerful feature that allows you to maintain state across multiple requests, making it particularly suitable for handling complex multi-step crawling tasks. It enables you to reuse the same browser tab (or page object) across sequential actions and crawls, which is beneficial for:

@@ -20,8 +20,12 @@ async with AsyncWebCrawler() as crawler:
    session_id = "my_session"

    # Define configurations
    config1 = CrawlerRunConfig(url="https://example.com/page1", session_id=session_id)
    config2 = CrawlerRunConfig(url="https://example.com/page2", session_id=session_id)
    config1 = CrawlerRunConfig(
        url="https://example.com/page1", session_id=session_id
    )
    config2 = CrawlerRunConfig(
        url="https://example.com/page2", session_id=session_id
    )

    # First request
    result1 = await crawler.arun(config=config1)
@@ -54,7 +58,9 @@ async def crawl_dynamic_content():
    schema = {
        "name": "Commit Extractor",
        "baseSelector": "li.Box-sc-g0xbh4-0",
        "fields": [{"name": "title", "selector": "h4.markdown-title", "type": "text"}],
        "fields": [{
            "name": "title", "selector": "h4.markdown-title", "type": "text"
        }],
    }
    extraction_strategy = JsonCssExtractionStrategy(schema)

@@ -87,51 +93,146 @@ async def crawl_dynamic_content():

---

#### Session Best Practices
## Example 1: Basic Session-Based Crawling

1. **Descriptive Session IDs**:
   Use meaningful names for session IDs to organize workflows:
   ```python
   session_id = "login_flow_session"
   session_id = "product_catalog_session"
   ```
A simple example using session-based crawling:

2. **Resource Management**:
   Always ensure sessions are cleaned up to free resources:
   ```python
   try:
       # Your crawling code here
       pass
   finally:
       await crawler.crawler_strategy.kill_session(session_id)
   ```
```python
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig
from crawl4ai.cache_context import CacheMode

3. **State Maintenance**:
   Reuse the session for subsequent actions within the same workflow:
   ```python
   # Step 1: Login
   login_config = CrawlerRunConfig(
       url="https://example.com/login",
       session_id=session_id,
       js_code="document.querySelector('form').submit();"
   )
   await crawler.arun(config=login_config)
async def basic_session_crawl():
    async with AsyncWebCrawler() as crawler:
        session_id = "dynamic_content_session"
        url = "https://example.com/dynamic-content"

   # Step 2: Verify login success
   dashboard_config = CrawlerRunConfig(
       url="https://example.com/dashboard",
       session_id=session_id,
       wait_for="css:.user-profile"  # Wait for authenticated content
   )
   result = await crawler.arun(config=dashboard_config)
   ```
        for page in range(3):
            config = CrawlerRunConfig(
                url=url,
                session_id=session_id,
                js_code="document.querySelector('.load-more-button').click();" if page > 0 else None,
                css_selector=".content-item",
                cache_mode=CacheMode.BYPASS
            )

            result = await crawler.arun(config=config)
            print(f"Page {page + 1}: Found {result.extracted_content.count('.content-item')} items")

        await crawler.crawler_strategy.kill_session(session_id)

asyncio.run(basic_session_crawl())
```

This example shows:

1. Reusing the same `session_id` across multiple requests.
2. Executing JavaScript to load more content dynamically.
3. Properly closing the session to free resources.

---

## Advanced Technique 1: Custom Execution Hooks

> Note: The next few examples build on one another, so make sure you are comfortable with each part before moving on.

Use custom hooks to handle complex scenarios, such as waiting for content to load dynamically:

```python
async def advanced_session_crawl_with_hooks():
    first_commit = ""

    async def on_execution_started(page):
        nonlocal first_commit
        try:
            while True:
                await page.wait_for_selector("li.commit-item h4")
                commit = await page.query_selector("li.commit-item h4")
                commit = (await commit.evaluate("(element) => element.textContent")).strip()
                if commit and commit != first_commit:
                    first_commit = commit
                    break
                await asyncio.sleep(0.5)
        except Exception as e:
            print(f"Warning: New content didn't appear: {e}")

    async with AsyncWebCrawler() as crawler:
        session_id = "commit_session"
        url = "https://github.com/example/repo/commits/main"
        crawler.crawler_strategy.set_hook("on_execution_started", on_execution_started)

        js_next_page = """document.querySelector('a.pagination-next').click();"""

        for page in range(3):
            config = CrawlerRunConfig(
                url=url,
                session_id=session_id,
                js_code=js_next_page if page > 0 else None,
                css_selector="li.commit-item",
                js_only=page > 0,
                cache_mode=CacheMode.BYPASS
            )

            result = await crawler.arun(config=config)
            print(f"Page {page + 1}: Found {len(result.extracted_content)} commits")

        await crawler.crawler_strategy.kill_session(session_id)

asyncio.run(advanced_session_crawl_with_hooks())
```

This technique ensures new content loads before the next action.

---

## Advanced Technique 2: Integrated JavaScript Execution and Waiting

Combine JavaScript execution and waiting logic for concise handling of dynamic content:

```python
async def integrated_js_and_wait_crawl():
    async with AsyncWebCrawler() as crawler:
        session_id = "integrated_session"
        url = "https://github.com/example/repo/commits/main"

        js_next_page_and_wait = """
        (async () => {
            const getCurrentCommit = () => document.querySelector('li.commit-item h4').textContent.trim();
            const initialCommit = getCurrentCommit();
            document.querySelector('a.pagination-next').click();
            while (getCurrentCommit() === initialCommit) {
                await new Promise(resolve => setTimeout(resolve, 100));
            }
        })();
        """

        for page in range(3):
            config = CrawlerRunConfig(
                url=url,
                session_id=session_id,
                js_code=js_next_page_and_wait if page > 0 else None,
                css_selector="li.commit-item",
                js_only=page > 0,
                cache_mode=CacheMode.BYPASS
            )

            result = await crawler.arun(config=config)
            print(f"Page {page + 1}: Found {len(result.extracted_content)} commits")

        await crawler.crawler_strategy.kill_session(session_id)

asyncio.run(integrated_js_and_wait_crawl())
```

---

#### Common Use Cases for Sessions

1. **Authentication Flows**: Login and interact with secured pages.
2. **Pagination Handling**: Navigate through multiple pages.
3. **Form Submissions**: Fill forms, submit, and process results.
4. **Multi-step Processes**: Complete workflows that span multiple actions.
5. **Dynamic Content Navigation**: Handle JavaScript-rendered or event-triggered content.
1. **Authentication Flows**: Login and interact with secured pages.

2. **Pagination Handling**: Navigate through multiple pages.

3. **Form Submissions**: Fill forms, submit, and process results.

4. **Multi-step Processes**: Complete workflows that span multiple actions.

5. **Dynamic Content Navigation**: Handle JavaScript-rendered or event-triggered content.

docs/md_v2/advanced/ssl-certificate.md (new file, 179 lines)
@@ -0,0 +1,179 @@
# `SSLCertificate` Reference

The **`SSLCertificate`** class encapsulates an SSL certificate’s data and allows exporting it in various formats (PEM, DER, JSON, or text). It’s used within **Crawl4AI** whenever you set **`fetch_ssl_certificate=True`** in your **`CrawlerRunConfig`**.

## 1. Overview

**Location**: `crawl4ai/ssl_certificate.py`

```python
class SSLCertificate:
    """
    Represents an SSL certificate with methods to export in various formats.

    Main Methods:
    - from_url(url, timeout=10)
    - from_file(file_path)
    - from_binary(binary_data)
    - to_json(filepath=None)
    - to_pem(filepath=None)
    - to_der(filepath=None)
    ...

    Common Properties:
    - issuer
    - subject
    - valid_from
    - valid_until
    - fingerprint
    """
```

### Typical Use Case

1. You **enable** certificate fetching in your crawl by:
   ```python
   CrawlerRunConfig(fetch_ssl_certificate=True, ...)
   ```
2. After `arun()`, if `result.ssl_certificate` is present, it’s an instance of **`SSLCertificate`**.
3. You can **read** basic properties (issuer, subject, validity) or **export** them in multiple formats.

---

## 2. Construction & Fetching

### 2.1 **`from_url(url, timeout=10)`**

Manually load an SSL certificate from a given URL (port 443). Typically used internally, but you can call it directly if you want:

```python
cert = SSLCertificate.from_url("https://example.com")
if cert:
    print("Fingerprint:", cert.fingerprint)
```

### 2.2 **`from_file(file_path)`**

Load from a file containing certificate data in ASN.1 or DER form. Rarely needed unless you have local cert files:

```python
cert = SSLCertificate.from_file("/path/to/cert.der")
```

### 2.3 **`from_binary(binary_data)`**

Initialize from raw binary, e.g. if you captured it from a socket or another source:

```python
cert = SSLCertificate.from_binary(raw_bytes)
```

---

## 3. Common Properties

After obtaining an **`SSLCertificate`** instance (e.g. `result.ssl_certificate` from a crawl), you can read:

1. **`issuer`** *(dict)*
   - E.g. `{"CN": "My Root CA", "O": "..."}`
2. **`subject`** *(dict)*
   - E.g. `{"CN": "example.com", "O": "ExampleOrg"}`
3. **`valid_from`** *(str)*
   - NotBefore date/time, often in ASN.1/UTC format.
4. **`valid_until`** *(str)*
   - NotAfter date/time.
5. **`fingerprint`** *(str)*
   - The SHA-256 digest (lowercase hex), e.g. `"d14d2e..."`.

---

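The `fingerprint` above is a SHA-256 digest of the certificate's DER bytes. How such a fingerprint is derived can be sketched with the standard library (illustrative only; the class computes this for you):

```python
import hashlib

def sha256_fingerprint(der_bytes: bytes) -> str:
    """Lowercase hex SHA-256 digest of a certificate's DER encoding."""
    # Illustrative stand-in: a real call would hash cert.to_der().
    return hashlib.sha256(der_bytes).hexdigest()

print(len(sha256_fingerprint(b"\x30\x82\x01\x0a")))  # 64 hex characters
```
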
## 4. Export Methods

Once you have an **`SSLCertificate`** object, you can **export** or **inspect** it:

### 4.1 **`to_json(filepath=None)` → `Optional[str]`**

- Returns a JSON string containing the parsed certificate fields.
- If `filepath` is provided, saves it to disk instead, returning `None`.

**Usage**:
```python
json_data = cert.to_json()        # returns JSON string
cert.to_json("certificate.json")  # writes file, returns None
```

### 4.2 **`to_pem(filepath=None)` → `Optional[str]`**

- Returns a PEM-encoded string (common for web servers).
- If `filepath` is provided, saves it to disk instead.

```python
pem_str = cert.to_pem()           # in-memory PEM string
cert.to_pem("/path/to/cert.pem")  # saved to file
```

### 4.3 **`to_der(filepath=None)` → `Optional[bytes]`**

- Returns the original DER (binary ASN.1) bytes.
- If `filepath` is specified, writes the bytes there instead.

```python
der_bytes = cert.to_der()
cert.to_der("certificate.der")
```

### 4.4 (Optional) **`export_as_text()`**

- If you see a method like `export_as_text()`, it typically returns an OpenSSL-style textual representation.
- Not always needed, but can help for debugging or manual inspection.

---

## 5. Example Usage in Crawl4AI

Below is a minimal sample showing how the crawler obtains an SSL cert from a site, then reads or exports it:

```python
import asyncio
import os
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode

async def main():
    tmp_dir = "tmp"
    os.makedirs(tmp_dir, exist_ok=True)

    config = CrawlerRunConfig(
        fetch_ssl_certificate=True,
        cache_mode=CacheMode.BYPASS
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com", config=config)
        if result.success and result.ssl_certificate:
            cert = result.ssl_certificate
            # 1. Basic info
            print("Issuer CN:", cert.issuer.get("CN", ""))
            print("Valid until:", cert.valid_until)
            print("Fingerprint:", cert.fingerprint)

            # 2. Export
            cert.to_json(os.path.join(tmp_dir, "certificate.json"))
            cert.to_pem(os.path.join(tmp_dir, "certificate.pem"))
            cert.to_der(os.path.join(tmp_dir, "certificate.der"))

if __name__ == "__main__":
    asyncio.run(main())
```

---

## 6. Notes & Best Practices

1. **Timeout**: `SSLCertificate.from_url` uses a default **10s** timeout for the socket connection and SSL wrap.
2. **Binary Form**: The certificate is loaded in ASN.1 (DER) form, then re-parsed by `OpenSSL.crypto`.
3. **Validation**: This does **not** validate the certificate chain or trust store; it only fetches and parses.
4. **Integration**: Within Crawl4AI, you typically just set `fetch_ssl_certificate=True` in `CrawlerRunConfig`; the final result’s `ssl_certificate` is built automatically.
5. **Export**: If you need to store or analyze a cert, `to_json` and `to_pem` are the most universal formats.

---

### Summary

- **`SSLCertificate`** is a convenience class for capturing and exporting the **TLS certificate** from your crawled site(s).
- Common usage is via the **`CrawlResult.ssl_certificate`** field, accessible after setting `fetch_ssl_certificate=True`.
- It offers quick access to essential certificate details (`issuer`, `subject`, `fingerprint`) and is easy to export (PEM, DER, JSON) for further analysis or server usage.

Use it whenever you need **insight** into a site’s certificate or require some form of cryptographic or compliance check.

@@ -1,244 +1,305 @@
# Complete Parameter Guide for arun()
Below is a **revised parameter guide** for **`arun()`** in **AsyncWebCrawler**, reflecting the **new** approach where all parameters are passed via a **`CrawlerRunConfig`** instead of directly to `arun()`. Each section includes example usage in the new style.

The following parameters can be passed to the `arun()` method. They are organized by their primary usage context and functionality.
---

## Core Parameters
# `arun()` Parameter Guide (New Approach)

In Crawl4AI’s **latest** configuration model, nearly all parameters that once went directly to `arun()` are now part of **`CrawlerRunConfig`**. When calling `arun()`, you provide:

```python
await crawler.arun(
    url="https://example.com",     # Required: URL to crawl
    verbose=True,                  # Enable detailed logging
    cache_mode=CacheMode.ENABLED,  # Control cache behavior
    warmup=True                    # Whether to run a warmup check
    url="https://example.com",
    config=my_run_config
)
```

## Cache Control
Below is an organized look at the parameters that can go inside `CrawlerRunConfig`, divided by their functional areas. For **Browser** settings (e.g., `headless`, `browser_type`), see [BrowserConfig](./parameters.md).

---

## 1. Core Usage

```python
from crawl4ai import CacheMode
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode

await crawler.arun(
    cache_mode=CacheMode.ENABLED,  # Normal caching (read/write)
    # Other cache modes:
    # cache_mode=CacheMode.DISABLED    # No caching at all
    # cache_mode=CacheMode.READ_ONLY   # Only read from cache
    # cache_mode=CacheMode.WRITE_ONLY  # Only write to cache
    # cache_mode=CacheMode.BYPASS      # Skip cache for this operation
async def main():
    run_config = CrawlerRunConfig(
        verbose=True,                  # Detailed logging
        cache_mode=CacheMode.ENABLED,  # Use normal read/write cache
        # ... other parameters
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://example.com",
            config=run_config
        )
        print(result.cleaned_html[:500])
```

**Key Fields**:

- `verbose=True` logs each crawl step.
- `cache_mode` decides how to read/write the local crawl cache.

---

## 2. Cache Control

**`cache_mode`** (default: `CacheMode.ENABLED`)
Use a built-in enum from `CacheMode`:

- `ENABLED`: Normal caching; reads if available, writes if missing.
- `DISABLED`: No caching; always refetch pages.
- `READ_ONLY`: Reads from cache only; no new writes.
- `WRITE_ONLY`: Writes to cache but doesn’t read existing data.
- `BYPASS`: Skips reading the cache for this crawl (though it might still write if set up that way).

```python
run_config = CrawlerRunConfig(
    cache_mode=CacheMode.BYPASS
)
```

## Content Processing Parameters
**Additional flags**:

- `bypass_cache=True` acts like `CacheMode.BYPASS`.
- `disable_cache=True` acts like `CacheMode.DISABLED`.
- `no_cache_read=True` acts like `CacheMode.WRITE_ONLY`.
- `no_cache_write=True` acts like `CacheMode.READ_ONLY`.

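The equivalences between those boolean flags and the enum values can be sketched as follows; the `CacheMode` enum and `resolve_cache_mode` here are local stand-ins that mirror the documented behavior, not Crawl4AI's internal code:

```python
from enum import Enum

class CacheMode(Enum):
    """Local stand-in mirroring the documented cache modes."""
    ENABLED = "enabled"
    DISABLED = "disabled"
    READ_ONLY = "read_only"
    WRITE_ONLY = "write_only"
    BYPASS = "bypass"

def resolve_cache_mode(bypass_cache=False, disable_cache=False,
                       no_cache_read=False, no_cache_write=False) -> CacheMode:
    """Translate the legacy boolean flags into a single CacheMode."""
    if disable_cache:
        return CacheMode.DISABLED
    if bypass_cache:
        return CacheMode.BYPASS
    if no_cache_read:
        return CacheMode.WRITE_ONLY
    if no_cache_write:
        return CacheMode.READ_ONLY
    return CacheMode.ENABLED

print(resolve_cache_mode(bypass_cache=True))  # CacheMode.BYPASS
```

The precedence order among the flags is our own assumption for illustration.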
---

## 3. Content Processing & Selection

### 3.1 Text Processing

### Text Processing

```python
await crawler.arun(
    word_count_threshold=10,                 # Minimum words per content block
    image_description_min_word_threshold=5,  # Minimum words for image descriptions
    only_text=False,                         # Extract only text content
    excluded_tags=['form', 'nav'],           # HTML tags to exclude
    keep_data_attributes=False,              # Preserve data-* attributes
run_config = CrawlerRunConfig(
    word_count_threshold=10,    # Ignore text blocks with fewer than 10 words
    only_text=False,            # If True, tries to remove non-text elements
    keep_data_attributes=False  # Keep or discard data-* attributes
)
```

### Content Selection
### 3.2 Content Selection

```python
await crawler.arun(
    css_selector=".main-content",  # CSS selector for content extraction
    remove_forms=True,             # Remove all form elements
    remove_overlay_elements=True,  # Remove popups/modals/overlays
run_config = CrawlerRunConfig(
    css_selector=".main-content",   # Focus on the .main-content region only
    excluded_tags=["form", "nav"],  # Remove entire tag blocks
    remove_forms=True,              # Specifically strip <form> elements
    remove_overlay_elements=True,   # Attempt to remove modals/popups
)
```

### 3.3 Link Handling

```python
run_config = CrawlerRunConfig(
    exclude_external_links=True,          # Remove external links from final content
    exclude_social_media_links=True,      # Remove links to known social sites
    exclude_domains=["ads.example.com"],  # Exclude links to these domains
    exclude_social_media_domains=["facebook.com", "twitter.com"],  # Extend the default list
)
```
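To see what domain-based exclusion amounts to, here is a stdlib-only sketch of the idea behind `exclude_domains`. It is illustrative, not Crawl4AI's actual filtering code, and the helper name is our own:

```python
from urllib.parse import urlparse

# Illustrative sketch: a link is dropped when its host matches (or is a
# subdomain of) an excluded domain. Crawl4AI's real logic may differ.

def is_excluded(url: str, excluded_domains: list) -> bool:
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in excluded_domains)

links = [
    "https://ads.example.com/banner",
    "https://example.com/article",
]
kept = [u for u in links if not is_excluded(u, ["ads.example.com"])]
print(kept)  # -> ['https://example.com/article']
```

Matching on the suffix with a leading dot (`"." + d`) is what keeps `ads.example.com` from accidentally excluding `example.com` itself.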
### 3.4 Media Filtering

```python
run_config = CrawlerRunConfig(
    exclude_external_images=True  # Strip images from other domains
)
```
---

## 4. Page Navigation & Timing

### 4.1 Basic Browser Flow

```python
run_config = CrawlerRunConfig(
    wait_for="css:.dynamic-content",  # Wait for .dynamic-content
    delay_before_return_html=2.0,     # Wait 2s before capturing final HTML
    page_timeout=60000,               # Navigation & script timeout (ms)
)
```
**Key Fields**:

- `wait_for`:
  - `"css:selector"` or
  - `"js:() => boolean"`, e.g. `js:() => document.querySelectorAll('.item').length > 10`.
- `mean_delay` & `max_range`: define random delays for `arun_many()` calls.
- `semaphore_count`: concurrency limit when crawling multiple URLs.
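A plausible reading of `mean_delay` and `max_range` is a base delay plus random jitter between requests. The sketch below is illustrative; the library's exact formula may differ:

```python
import random

# Sketch of how a per-request delay could be drawn from mean_delay and
# max_range; Crawl4AI's internal pacing formula may differ.

def next_delay(mean_delay: float = 0.1, max_range: float = 0.3) -> float:
    return mean_delay + random.uniform(0, max_range)

d = next_delay(0.1, 0.3)
assert 0.1 <= d <= 0.4  # always within [mean_delay, mean_delay + max_range]
```

Randomized pacing like this makes a batch of `arun_many()` requests look less like a fixed-interval bot.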
### 4.2 JavaScript Execution

```python
run_config = CrawlerRunConfig(
    js_code=[
        "window.scrollTo(0, document.body.scrollHeight);",
        "document.querySelector('.load-more')?.click();"
    ],
    js_only=False
)
```

- `js_code` can be a single string or a list of strings.
- `js_only=True` means “I’m continuing in the same session with new JS steps, no new full navigation.”
### 4.3 Anti-Bot

```python
run_config = CrawlerRunConfig(
    magic=True,
    simulate_user=True,
    override_navigator=True
)
```

- `magic=True` tries multiple stealth features.
- `simulate_user=True` mimics mouse movements or random delays.
- `override_navigator=True` fakes some navigator properties (like user-agent checks).

---
## 5. Session Management

**`session_id`**:

```python
run_config = CrawlerRunConfig(
    session_id="my_session123"
)
```

If re-used in subsequent `arun()` calls, the same tab/page context is continued (helpful for multi-step tasks or stateful browsing).

---
## 6. Screenshot, PDF & Media Options

```python
run_config = CrawlerRunConfig(
    screenshot=True,                         # Grab a screenshot as base64
    screenshot_wait_for=1.0,                 # Wait 1s before capturing
    pdf=True,                                # Also produce a PDF
    image_description_min_word_threshold=5,  # If analyzing alt text
    image_score_threshold=3,                 # Filter out low-score images
)
```

**Where they appear**:

- `result.screenshot` → Base64 screenshot string.
- `result.pdf` → Byte array with PDF data.
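Since `result.screenshot` holds a base64 string and `result.pdf` raw bytes, writing them to disk takes only the standard library. A small sketch (the helper name and default file paths here are our own):

```python
import base64
from typing import Optional

# Persist capture artifacts in the shapes documented above:
# a base64-encoded screenshot string and raw PDF bytes.

def save_captures(screenshot_b64: Optional[str], pdf_bytes: Optional[bytes],
                  png_path: str = "page.png", pdf_path: str = "page.pdf") -> None:
    if screenshot_b64:
        with open(png_path, "wb") as f:
            f.write(base64.b64decode(screenshot_b64))
    if pdf_bytes:
        with open(pdf_path, "wb") as f:
            f.write(pdf_bytes)
```

After a crawl you would call it as `save_captures(result.screenshot, result.pdf)`.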
---

## 7. Extraction Strategy

**For advanced data extraction** (CSS- or LLM-based), set `extraction_strategy`:

```python
run_config = CrawlerRunConfig(
    extraction_strategy=my_css_or_llm_strategy
)
```

The extracted data will appear in `result.extracted_content`.

---
## 8. Comprehensive Example

Below is a snippet combining many parameters:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def main():
    # Example schema
    schema = {
        "name": "Articles",
        "baseSelector": "article.post",
        "fields": [
            {"name": "title", "selector": "h2", "type": "text"},
            {"name": "link", "selector": "a", "type": "attribute", "attribute": "href"}
        ]
    }

    run_config = CrawlerRunConfig(
        # Core
        verbose=True,
        cache_mode=CacheMode.ENABLED,

        # Content
        word_count_threshold=10,
        css_selector="main.content",
        excluded_tags=["nav", "footer"],
        exclude_external_links=True,

        # Page & JS
        js_code="document.querySelector('.show-more')?.click();",
        wait_for="css:.loaded-block",
        page_timeout=30000,

        # Extraction
        extraction_strategy=JsonCssExtractionStrategy(schema),

        # Session
        session_id="persistent_session",

        # Media
        screenshot=True,
        pdf=True,

        # Anti-bot
        simulate_user=True,
        magic=True,
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com/posts", config=run_config)
        if result.success:
            print("HTML length:", len(result.cleaned_html))
            print("Extraction JSON:", result.extracted_content)
            if result.screenshot:
                print("Screenshot length:", len(result.screenshot))
            if result.pdf:
                print("PDF bytes length:", len(result.pdf))
        else:
            print("Error:", result.error_message)

if __name__ == "__main__":
    asyncio.run(main())
```

**What we covered**:

1. **Crawling** the main content region, ignoring external links.
2. Running **JavaScript** to click “.show-more”.
3. **Waiting** for “.loaded-block” to appear.
4. Generating a **screenshot** & **PDF** of the final page.
5. Extracting repeated “article.post” elements with a **CSS-based** extraction strategy.
## 9. Best Practices

1. **Use `BrowserConfig` for global browser** settings (headless, user agent).
2. **Use `CrawlerRunConfig`** to handle the **specific** crawl needs: content filtering, caching, JS, screenshots, extraction, etc.
3. Keep your **parameters consistent** in run configs, especially if you're part of a large codebase with multiple crawls.
4. **Limit** large concurrency (`semaphore_count`) if the site or your system can't handle it.
5. For dynamic pages, set `js_code` or `scan_full_page` so you load all content.

---
## 10. Conclusion
All parameters that used to be direct arguments to `arun()` now belong in **`CrawlerRunConfig`**. This approach:

- Makes code **clearer** and **more maintainable**.
- Minimizes confusion about which arguments affect global vs. per-crawl behavior.
- Allows you to create **reusable** config objects for different pages or tasks.

For a **full** reference, check out the [CrawlerRunConfig Docs](./parameters.md).

Happy crawling with your **structured, flexible** config approach!
---

Below is the **updated** guide for the **AsyncWebCrawler** class, reflecting the **new** recommended approach of configuring the browser via **`BrowserConfig`** and each crawl via **`CrawlerRunConfig`**. While the crawler still accepts legacy parameters for backward compatibility, the modern, maintainable way is shown below.

---

# AsyncWebCrawler

The **`AsyncWebCrawler`** is the core class for asynchronous web crawling in Crawl4AI. You typically create it **once**, optionally customize it with a **`BrowserConfig`** (e.g., headless, user agent), then **run** multiple **`arun()`** calls with different **`CrawlerRunConfig`** objects.

**Recommended usage**:

1. **Create** a `BrowserConfig` for global browser settings.
2. **Instantiate** `AsyncWebCrawler(config=browser_config)`.
3. **Use** the crawler in an async context manager (`async with`) or manage start/close manually.
4. **Call** `arun(url, config=crawler_run_config)` for each page you want.

---

## 1. Constructor Overview
```python
class AsyncWebCrawler:
    def __init__(
        self,
        crawler_strategy: Optional[AsyncCrawlerStrategy] = None,
        config: Optional[BrowserConfig] = None,
        always_bypass_cache: bool = False,           # deprecated
        always_by_pass_cache: Optional[bool] = None, # also deprecated
        base_directory: str = ...,
        thread_safe: bool = False,
        **kwargs,
    ):
        """
        Create an AsyncWebCrawler instance.

        Args:
            crawler_strategy: (Advanced) Provide a custom crawler strategy if needed.
            config: A BrowserConfig object specifying how the browser is set up.
            always_bypass_cache: (Deprecated) Use CrawlerRunConfig.cache_mode instead.
            base_directory: Folder for storing caches/logs (if relevant).
            thread_safe: If True, attempts some concurrency safeguards. Usually False.
            **kwargs: Additional legacy or debugging parameters.
        """
```
### Typical Initialization

```python
from crawl4ai import AsyncWebCrawler, BrowserConfig

browser_cfg = BrowserConfig(
    browser_type="chromium",
    headless=True,
    verbose=True
)

crawler = AsyncWebCrawler(config=browser_cfg)
```

**Notes**:

- **Legacy** parameters like `always_bypass_cache` remain for backward compatibility, but prefer to set **caching** in `CrawlerRunConfig`.

---
## 2. Lifecycle: Start/Close or Context Manager

### 2.1 Context Manager (Recommended)

```python
async with AsyncWebCrawler(config=browser_cfg) as crawler:
    result = await crawler.arun("https://example.com")
    # The crawler automatically starts/closes resources
```

When the `async with` block ends, the crawler cleans up (closes the browser, etc.).

### 2.2 Manual Start & Close

```python
crawler = AsyncWebCrawler(config=browser_cfg)
await crawler.start()

result1 = await crawler.arun("https://example.com")
result2 = await crawler.arun("https://another.com")

await crawler.close()
```

Use this style if you have a **long-running** application or need full control of the crawler's lifecycle.
---
## 3. Primary Method: `arun()`

```python
async def arun(
    self,
    url: str,
    config: Optional[CrawlerRunConfig] = None,
    # Legacy parameters for backward compatibility...
) -> CrawlResult:
    ...
```
### 3.1 New Approach

You pass a `CrawlerRunConfig` object that sets up everything about a crawl: content filtering, caching, session reuse, JS code, screenshots, and more.

```python
import asyncio
from crawl4ai import CrawlerRunConfig, CacheMode

run_cfg = CrawlerRunConfig(
    cache_mode=CacheMode.BYPASS,
    css_selector="main.article",
    word_count_threshold=10,
    screenshot=True
)

async with AsyncWebCrawler(config=browser_cfg) as crawler:
    result = await crawler.arun("https://example.com/news", config=run_cfg)
    print("Crawled HTML length:", len(result.cleaned_html))
    if result.screenshot:
        print("Screenshot base64 length:", len(result.screenshot))
```
### 3.2 Legacy Parameters Still Accepted

For **backward** compatibility, `arun()` can still accept direct arguments like `css_selector=...`, `word_count_threshold=...`, etc., but we strongly advise migrating them into a **`CrawlerRunConfig`**.

---
## 4. Helper Methods

### 4.1 `arun_many()`

```python
async def arun_many(
    self,
    urls: List[str],
    config: Optional[CrawlerRunConfig] = None,
    # Legacy parameters...
) -> List[CrawlResult]:
    ...
```

Crawls multiple URLs concurrently. It accepts the same style of `CrawlerRunConfig`. Example:

```python
run_cfg = CrawlerRunConfig(
    # e.g., concurrency, wait_for, caching, extraction, etc.
    semaphore_count=5
)

async with AsyncWebCrawler(config=browser_cfg) as crawler:
    results = await crawler.arun_many(
        urls=["https://example.com", "https://another.com"],
        config=run_cfg
    )
    for r in results:
        print(r.url, ":", len(r.cleaned_html))
```
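The `semaphore_count` limit can be pictured as a classic `asyncio.Semaphore` pattern. This is an illustrative, self-contained sketch of the technique, not `arun_many()`'s actual implementation:

```python
import asyncio

# Sketch of semaphore-bounded concurrency, the pattern behind
# semaphore_count; arun_many()'s real implementation may differ.

async def fetch(url: str, sem: asyncio.Semaphore) -> str:
    async with sem:              # at most N coroutines run this block at once
        await asyncio.sleep(0)   # stand-in for the actual crawl
        return f"done: {url}"

async def crawl_all(urls: list, semaphore_count: int = 5) -> list:
    sem = asyncio.Semaphore(semaphore_count)
    return await asyncio.gather(*(fetch(u, sem) for u in urls))

results = asyncio.run(crawl_all(["https://example.com", "https://another.com"]))
print(results)  # -> ['done: https://example.com', 'done: https://another.com']
```

`asyncio.gather` preserves input order, so results line up with the URL list even though the crawls overlap in time.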
### 4.2 `start()` & `close()`

Allows manual lifecycle usage instead of the context manager:

```python
crawler = AsyncWebCrawler(config=browser_cfg)
await crawler.start()

# Perform multiple operations
resultA = await crawler.arun("https://exampleA.com", config=run_cfg)
resultB = await crawler.arun("https://exampleB.com", config=run_cfg)

await crawler.close()
```
---

## 5. `CrawlResult` Output

Each `arun()` returns a **`CrawlResult`** containing:

- `url`: Final URL (after any redirects).
- `html`: Original HTML.
- `cleaned_html`: Sanitized HTML.
- `markdown_v2` (or future `markdown`): Markdown outputs (raw, fit, etc.).
- `extracted_content`: If an extraction strategy was used (JSON for CSS/LLM strategies).
- `screenshot`, `pdf`: If screenshots/PDFs were requested.
- `media`, `links`: Information about discovered images/links.
- `success`, `error_message`: Status info.

For details, see the [CrawlResult doc](./crawl-result.md).

---
## 6. Quick Example

Below is an example hooking it all together:

```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def main():
    # 1. Browser config
    browser_cfg = BrowserConfig(
        browser_type="firefox",
        headless=False,
        verbose=True
    )

    # 2. Run config
    schema = {
        "name": "Articles",
        "baseSelector": "article.post",
        "fields": [
            {"name": "title", "selector": "h2", "type": "text"},
            {"name": "url", "selector": "a", "type": "attribute", "attribute": "href"}
        ]
    }

    run_cfg = CrawlerRunConfig(
        cache_mode=CacheMode.BYPASS,
        extraction_strategy=JsonCssExtractionStrategy(schema),
        word_count_threshold=15,
        remove_overlay_elements=True,
        wait_for="css:.post"  # Wait for posts to appear
    )

    async with AsyncWebCrawler(config=browser_cfg) as crawler:
        result = await crawler.arun(
            url="https://example.com/blog",
            config=run_cfg
        )

        if result.success:
            print("Cleaned HTML length:", len(result.cleaned_html))
            if result.extracted_content:
                articles = json.loads(result.extracted_content)
                print("Extracted articles:", articles[:2])
        else:
            print("Error:", result.error_message)

asyncio.run(main())
```
**Explanation**:

- We define a **`BrowserConfig`** with Firefox, non-headless mode, and `verbose=True`.
- We define a **`CrawlerRunConfig`** that **bypasses the cache**, uses a **CSS** extraction schema, has a `word_count_threshold=15`, etc.
- We pass them to `AsyncWebCrawler(config=...)` and `arun(url=..., config=...)`.

---
## 7. Best Practices & Migration Notes

1. **Use** `BrowserConfig` for **global** settings about the browser's environment.
2. **Use** `CrawlerRunConfig` for **per-crawl** logic (caching, content filtering, extraction strategies, wait conditions).
3. **Avoid** legacy parameters like `css_selector` or `word_count_threshold` directly in `arun()`. Instead:

```python
run_cfg = CrawlerRunConfig(css_selector=".main-content", word_count_threshold=20)
result = await crawler.arun(url="...", config=run_cfg)
```

4. **Context Manager** usage is simplest unless you want a persistent crawler across many calls.
---
## 8. Summary

**AsyncWebCrawler** is your entry point to asynchronous crawling:

- The **constructor** accepts a **`BrowserConfig`** (or defaults).
- **`arun(url, config=CrawlerRunConfig)`** is the main method for single-page crawls.
- **`arun_many(urls, config=CrawlerRunConfig)`** handles concurrency across multiple URLs.
- For advanced lifecycle control, use `start()` and `close()` explicitly.

**Migration**:

- If you used `AsyncWebCrawler(browser_type="chromium", css_selector="...")`, move browser settings to `BrowserConfig(...)` and content/crawl logic to `CrawlerRunConfig(...)`.

This modular approach ensures your code is **clean**, **scalable**, and **easy to maintain**. For any advanced or rarely used parameters, see the [BrowserConfig docs](../api/parameters.md).
---
# CrawlerRunConfig Parameters Documentation

## Content Processing Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `word_count_threshold` | int | 200 | Minimum word count threshold before processing content |
| `extraction_strategy` | ExtractionStrategy | None | Strategy to extract structured data from crawled pages. When None, uses NoExtractionStrategy |
| `chunking_strategy` | ChunkingStrategy | RegexChunking() | Strategy to chunk content before extraction |
| `markdown_generator` | MarkdownGenerationStrategy | None | Strategy for generating markdown from extracted content |
| `content_filter` | RelevantContentFilter | None | Optional filter to prune irrelevant content |
| `only_text` | bool | False | If True, attempt to extract text-only content where applicable |
| `css_selector` | str | None | CSS selector to extract a specific portion of the page |
| `excluded_tags` | list[str] | [] | List of HTML tags to exclude from processing |
| `keep_data_attributes` | bool | False | If True, retain `data-*` attributes while removing unwanted attributes |
| `remove_forms` | bool | False | If True, remove all `<form>` elements from the HTML |
| `prettiify` | bool | False | If True, apply `fast_format_html` to produce prettified HTML output |
## Caching Parameters
|
||||
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `cache_mode` | CacheMode | None | Defines how caching is handled. Defaults to CacheMode.ENABLED internally |
|
||||
| `session_id` | str | None | Optional session ID to persist browser context and page instance |
|
||||
| `bypass_cache` | bool | False | Legacy parameter, if True acts like CacheMode.BYPASS |
|
||||
| `disable_cache` | bool | False | Legacy parameter, if True acts like CacheMode.DISABLED |
|
||||
| `no_cache_read` | bool | False | Legacy parameter, if True acts like CacheMode.WRITE_ONLY |
|
||||
| `no_cache_write` | bool | False | Legacy parameter, if True acts like CacheMode.READ_ONLY |
|
||||
|
||||
## Page Navigation and Timing Parameters
|
||||
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `wait_until` | str | "domcontentloaded" | The condition to wait for when navigating |
|
||||
| `page_timeout` | int | 60000 | Timeout in milliseconds for page operations like navigation |
|
||||
| `wait_for` | str | None | CSS selector or JS condition to wait for before extracting content |
|
||||
| `wait_for_images` | bool | True | If True, wait for images to load before extracting content |
|
||||
| `delay_before_return_html` | float | 0.1 | Delay in seconds before retrieving final HTML |
|
||||
| `mean_delay` | float | 0.1 | Mean base delay between requests when calling arun_many |
|
||||
| `max_range` | float | 0.3 | Max random additional delay range for requests in arun_many |
|
||||
| `semaphore_count` | int | 5 | Number of concurrent operations allowed |
|
||||
|
||||
## Page Interaction Parameters
|
||||
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `js_code` | str or list[str] | None | JavaScript code/snippets to run on the page |
|
||||
| `js_only` | bool | False | If True, indicates subsequent calls are JS-driven updates |
|
||||
| `ignore_body_visibility` | bool | True | If True, ignore whether the body is visible before proceeding |
|
||||
| `scan_full_page` | bool | False | If True, scroll through the entire page to load all content |
|
||||
| `scroll_delay` | float | 0.2 | Delay in seconds between scroll steps if scan_full_page is True |
|
||||
| `process_iframes` | bool | False | If True, attempts to process and inline iframe content |
|
||||
| `remove_overlay_elements` | bool | False | If True, remove overlays/popups before extracting HTML |
|
||||
| `simulate_user` | bool | False | If True, simulate user interactions for anti-bot measures |
|
||||
| `override_navigator` | bool | False | If True, overrides navigator properties for more human-like behavior |
|
||||
| `magic` | bool | False | If True, attempts automatic handling of overlays/popups |
|
||||
| `adjust_viewport_to_content` | bool | False | If True, adjust viewport according to page content dimensions |
|
||||
|
||||
## Media Handling Parameters
|
||||
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `screenshot` | bool | False | Whether to take a screenshot after crawling |
|
||||
| `screenshot_wait_for` | float | None | Additional wait time before taking a screenshot |
|
||||
| `screenshot_height_threshold` | int | 20000 | Threshold for page height to decide screenshot strategy |
|
||||
| `pdf` | bool | False | Whether to generate a PDF of the page |
|
||||
| `image_description_min_word_threshold` | int | 50 | Minimum words for image description extraction |
|
||||
| `image_score_threshold` | int | 3 | Minimum score threshold for processing an image |
|
||||
| `exclude_external_images` | bool | False | If True, exclude all external images from processing |
|
||||
|
||||
## Link and Domain Handling Parameters
|
||||
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `exclude_social_media_domains` | list[str] | SOCIAL_MEDIA_DOMAINS | List of domains to exclude for social media links |
|
||||
| `exclude_external_links` | bool | False | If True, exclude all external links from the results |
|
||||
| `exclude_social_media_links` | bool | False | If True, exclude links pointing to social media domains |
|
||||
| `exclude_domains` | list[str] | [] | List of specific domains to exclude from results |
|
||||
|
||||
## Debugging and Logging Parameters
|
||||
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `verbose` | bool | True | Enable verbose logging |
|
||||
| `log_console` | bool | False | If True, log console messages from the page |
|
||||
@@ -1,302 +1,330 @@

# `CrawlResult` Reference

The **`CrawlResult`** class encapsulates everything returned after a single crawl operation. It provides the **raw or processed content**, details on links and media, plus optional metadata (like screenshots, PDFs, or extracted JSON).

## Class Definition

**Location**: `crawl4ai/crawler/models.py` (for reference)

```python
class CrawlResult(BaseModel):
    """Result of a web crawling operation."""

    url: str
    html: str
    success: bool
    cleaned_html: Optional[str] = None
    media: Dict[str, List[Dict]] = {}
    links: Dict[str, List[Dict]] = {}
    downloaded_files: Optional[List[str]] = None
    screenshot: Optional[str] = None
    pdf: Optional[bytes] = None
    markdown: Optional[Union[str, MarkdownGenerationResult]] = None
    markdown_v2: Optional[MarkdownGenerationResult] = None
    fit_markdown: Optional[str] = None
    fit_html: Optional[str] = None
    extracted_content: Optional[str] = None
    metadata: Optional[dict] = None
    error_message: Optional[str] = None
    session_id: Optional[str] = None
    response_headers: Optional[dict] = None
    status_code: Optional[int] = None
    ssl_certificate: Optional[SSLCertificate] = None
    ...
```
Below is a **field-by-field** explanation and possible usage patterns.

---

## 1. Basic Crawl Info

### 1.1 **`url`** *(str)*
**What**: The final crawled URL (after any redirects).
**Usage**:
```python
result = await crawler.arun(url="https://example.com")
print(result.url)  # e.g., "https://example.com/"
```
### 1.2 **`success`** *(bool)*
**What**: `True` if the crawl pipeline ended without major errors; `False` otherwise.
**Usage**:
```python
if not result.success:
    print(f"Crawl failed: {result.error_message}")
```
### 1.3 **`status_code`** *(Optional[int])*
**What**: The page’s HTTP status code (e.g., 200, 404).
**Usage**:
```python
if result.status_code == 404:
    print("Page not found!")
```
### 1.4 **`error_message`** *(Optional[str])*
**What**: If `success=False`, a textual description of the failure.
**Usage**:
```python
if not result.success:
    print("Error:", result.error_message)
```
### 1.5 **`session_id`** *(Optional[str])*
**What**: The ID used for reusing a browser context across multiple calls.
**Usage**:
```python
# If you used session_id="login_session" in CrawlerRunConfig, see it here:
print("Session:", result.session_id)
```
### 1.6 **`response_headers`** *(Optional[dict])*
**What**: Final HTTP response headers.
**Usage**:
```python
if result.response_headers:
    print("Server:", result.response_headers.get("Server", "Unknown"))
```
### 1.7 **`ssl_certificate`** *(Optional[SSLCertificate])*
**What**: If `fetch_ssl_certificate=True` in your CrawlerRunConfig, **`result.ssl_certificate`** contains an [**`SSLCertificate`**](../advanced/ssl-certificate.md) object describing the site’s certificate. You can export the cert in multiple formats (PEM/DER/JSON) or access its properties like `issuer`, `subject`, `valid_from`, `valid_until`, etc.
**Usage**:
```python
if result.ssl_certificate:
    print("Issuer:", result.ssl_certificate.issuer)
```
---

## 2. Raw / Cleaned Content

### 2.1 **`html`** *(str)*
**What**: The **original** unmodified HTML from the final page load.
**Usage**:
```python
# Possibly large
print(len(result.html))
```
### 2.2 **`cleaned_html`** *(Optional[str])*
**What**: A sanitized HTML version—scripts, styles, or excluded tags are removed based on your `CrawlerRunConfig`.
**Usage**:
```python
print(result.cleaned_html[:500])  # Show a snippet
```

### 2.3 **`fit_html`** *(Optional[str])*
**What**: If a **content filter** or heuristic (e.g., Pruning/BM25) modifies the HTML, this holds the “fit” or post-filter version.
**When**: Only present if your `markdown_generator` or `content_filter` produces it.
**Usage**:
```python
if result.fit_html:
    print("High-value HTML content:", result.fit_html[:300])
```
---

## 3. Markdown Fields

### 3.1 The Markdown Generation Approach

Crawl4AI can convert HTML→Markdown, optionally including:

- **Raw** markdown
- **Links as citations** (with a references section)
- **Fit** markdown if a **content filter** is used (like Pruning or BM25)

### 3.2 **`markdown_v2`** *(Optional[MarkdownGenerationResult])*
**What**: The **structured** object holding multiple markdown variants. Soon to be consolidated into `markdown`.

**`MarkdownGenerationResult`** includes:

- **`raw_markdown`** *(str)*: The full HTML→Markdown conversion.
- **`markdown_with_citations`** *(str)*: Same markdown, but with link references as academic-style citations.
- **`references_markdown`** *(str)*: The reference list or footnotes at the end.
- **`fit_markdown`** *(Optional[str])*: If content filtering (Pruning/BM25) was applied, the filtered “fit” text.
- **`fit_html`** *(Optional[str])*: The HTML that led to `fit_markdown`.

**Usage**:
```python
if result.markdown_v2:
    md_res = result.markdown_v2
    print("Raw MD:", md_res.raw_markdown[:300])
    print("Citations MD:", md_res.markdown_with_citations[:300])
    print("References:", md_res.references_markdown)
    if md_res.fit_markdown:
        print("Pruned text:", md_res.fit_markdown[:300])
```
### 3.3 **`markdown`** *(Optional[Union[str, MarkdownGenerationResult]])*
**What**: In future versions, `markdown` will fully replace `markdown_v2`. Right now, it might be a `str` or a `MarkdownGenerationResult`.
**Usage**:
```python
# Soon, you might see:
if isinstance(result.markdown, MarkdownGenerationResult):
    print(result.markdown.raw_markdown[:200])
else:
    print(result.markdown)
```
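Because the field can be either a plain string or a structured result during the transition, a small normalizing helper keeps downstream code simple. This is a sketch using duck typing (any object exposing a `raw_markdown` attribute is treated as a generation result), not part of the library API:

```python
def get_raw_markdown(md) -> str:
    """Normalize a markdown field to plain text.

    Accepts None, a str, or any MarkdownGenerationResult-like
    object exposing a `raw_markdown` attribute.
    """
    if md is None:
        return ""
    # MarkdownGenerationResult-like objects carry `raw_markdown`
    raw = getattr(md, "raw_markdown", None)
    if raw is not None:
        return raw
    return str(md)
```

Calling `get_raw_markdown(result.markdown)` then keeps working unchanged whichever shape the field has.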
### 3.4 **`fit_markdown`** *(Optional[str])*
**What**: A direct reference to the final filtered markdown (legacy approach).
**When**: Set if a filter or content strategy explicitly writes there. Usually overshadowed by `markdown_v2.fit_markdown`.
**Usage**:
```python
print(result.fit_markdown)  # Legacy field; prefer result.markdown_v2.fit_markdown
```

**Important**: “Fit” content (in `fit_markdown`/`fit_html`) only exists if you used a **filter** (like **PruningContentFilter** or **BM25ContentFilter**) within a `MarkdownGenerationStrategy`.
---

## 4. Media & Links

### 4.1 **`media`** *(Dict[str, List[Dict]])*
**What**: Contains info about discovered images, videos, or audio. Typical keys: `"images"`, `"videos"`, `"audios"`.
**Common fields** in each item:

- `src` *(str)*: Media URL
- `alt` or `title` *(str)*: Descriptive text
- `score` *(float)*: Relevance score if the crawler’s heuristic found it “important”
- `desc` or `description` *(Optional[str])*: Additional context extracted from surrounding text

**Usage**:
```python
images = result.media.get("images", [])
for img in images:
    if img.get("score", 0) > 5:
        print("High-value image:", img["src"])
```
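The score-based filtering above can be wrapped into a reusable helper. A sketch operating on the plain `media` dict, using only the `images` key and `score` field documented above (the threshold default is our choice):

```python
def top_images(media: dict, min_score: float = 5.0) -> list:
    """Return image entries scoring at or above `min_score`,
    highest score first. Entries without a score count as 0."""
    images = media.get("images", [])
    keep = [img for img in images if img.get("score", 0) >= min_score]
    return sorted(keep, key=lambda img: img.get("score", 0), reverse=True)
```

For example, `top_images(result.media, min_score=7)` yields only the most relevant images, already ranked.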
### 4.2 **`links`** *(Dict[str, List[Dict]])*
**What**: Holds internal and external link data. Usually two keys: `"internal"` and `"external"`.
**Common fields**:

- `href` *(str)*: The link target
- `text` *(str)*: Link text
- `title` *(str)*: Title attribute
- `context` *(str)*: Surrounding text snippet
- `domain` *(str)*: If external, the domain

**Usage**:
```python
for link in result.links["internal"]:
    print(f"Internal link to {link['href']} with text {link['text']}")
```
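For a quick overview of where a page points, you can tally external links per domain. A sketch relying only on the `external` key and `domain` field documented above:

```python
from collections import Counter

def external_domains(links: dict) -> Counter:
    """Count external links per domain ("" if a domain is missing)."""
    return Counter(l.get("domain", "") for l in links.get("external", []))
```

`external_domains(result.links).most_common(5)` then gives the five most-linked external domains.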
---

## 5. Additional Fields

### 5.1 **`extracted_content`** *(Optional[str])*
**What**: If you used an **`extraction_strategy`** (CSS, LLM, etc.), the structured output (JSON).
**Usage**:
```python
import json

if result.extracted_content:
    data = json.loads(result.extracted_content)
    print(data)
```
### 5.2 **`downloaded_files`** *(Optional[List[str]])*
**What**: If `accept_downloads=True` in your `BrowserConfig` plus a `downloads_path`, lists local file paths for downloaded items.
**Usage**:
```python
if result.downloaded_files:
    for file_path in result.downloaded_files:
        print("Downloaded:", file_path)
```
### 5.3 **`screenshot`** *(Optional[str])*
**What**: Base64-encoded screenshot if `screenshot=True` in `CrawlerRunConfig`.
**Usage**:
```python
import base64

if result.screenshot:
    with open("page.png", "wb") as f:
        f.write(base64.b64decode(result.screenshot))
```
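If you want the decoded image in memory instead of on disk, a helper version of the same decode step (pure Python, no assumptions beyond the base64 field documented above):

```python
import base64

def screenshot_bytes(screenshot_b64):
    """Decode the base64 `screenshot` field to raw bytes;
    return None if the field is absent or empty."""
    if not screenshot_b64:
        return None
    return base64.b64decode(screenshot_b64)
```

Handy when passing the image straight to an image library or an upload call.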
### 5.4 **`pdf`** *(Optional[bytes])*
**What**: Raw PDF bytes if `pdf=True` in `CrawlerRunConfig`.
**Usage**:
```python
if result.pdf:
    with open("page.pdf", "wb") as f:
        f.write(result.pdf)
```
### 5.5 **`metadata`** *(Optional[dict])*
**What**: Page-level metadata if discovered (title, description, OG data, etc.).
**Usage**:
```python
if result.metadata:
    print("Title:", result.metadata.get("title"))
    print("Author:", result.metadata.get("author"))
```
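Since `metadata` may be `None` and individual keys may be missing, a defensive accessor avoids sprinkling `or {}` checks everywhere. A sketch (the `default` fallback value is our choice, not library behavior):

```python
def page_title(metadata, default="Untitled"):
    """Safely read the page title from a metadata dict
    that may be None or missing the key entirely."""
    return (metadata or {}).get("title") or default
```

Usage: `title = page_title(result.metadata)` works even on failed crawls.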
---

## 6. Example: Accessing Everything

```python
async def handle_result(result: CrawlResult):
    if not result.success:
        print("Crawl error:", result.error_message)
        return

    # Basic info
    print("Crawled URL:", result.url)
    print("Status code:", result.status_code)

    # HTML
    print("Original HTML size:", len(result.html))
    print("Cleaned HTML size:", len(result.cleaned_html or ""))

    # Markdown output
    if result.markdown_v2:
        print("Raw Markdown:", result.markdown_v2.raw_markdown[:300])
        print("Citations Markdown:", result.markdown_v2.markdown_with_citations[:300])
        if result.markdown_v2.fit_markdown:
            print("Fit Markdown:", result.markdown_v2.fit_markdown[:200])
    else:
        print("Raw Markdown (legacy):", result.markdown[:200] if result.markdown else "N/A")

    # Media & Links
    if "images" in result.media:
        print("Image count:", len(result.media["images"]))
    if "internal" in result.links:
        print("Internal link count:", len(result.links["internal"]))

    # Extraction strategy result
    if result.extracted_content:
        print("Structured data:", result.extracted_content)

    # Screenshot/PDF
    if result.screenshot:
        print("Screenshot length:", len(result.screenshot))
    if result.pdf:
        print("PDF bytes length:", len(result.pdf))
```
---

## 7. Key Points & Future

1. **`markdown_v2` vs `markdown`**
   - Right now, `markdown_v2` is the more robust container (`MarkdownGenerationResult`), providing **raw_markdown**, **markdown_with_citations**, references, plus possible **fit_markdown**.
   - In future versions, everything will unify under **`markdown`**. If you rely on advanced features (citations, fit content), check `markdown_v2`.

2. **Fit Content**
   - **`fit_markdown`** and **`fit_html`** appear only if you used a content filter (like **PruningContentFilter** or **BM25ContentFilter**) inside your **MarkdownGenerationStrategy** or set them directly.
   - If no filter is used, they remain `None`.

3. **References & Citations**
   - If you enable link citations in your `DefaultMarkdownGenerator` (`options={"citations": True}`), you’ll see `markdown_with_citations` plus a **`references_markdown`** block. This helps large language models or academic-like referencing.

4. **Links & Media**
   - `links["internal"]` and `links["external"]` group discovered anchors by domain.
   - `media["images"]` / `["videos"]` / `["audios"]` store extracted media elements with optional scoring or context.

5. **Error Cases**
   - If `success=False`, check `error_message` (e.g., timeouts, invalid URLs).
   - `status_code` might be `None` if we failed before an HTTP response.

Use **`CrawlResult`** to glean all final outputs and feed them into your data pipelines, AI models, or archives. With a properly configured **BrowserConfig** and **CrawlerRunConfig**, the crawler produces robust, structured results in **`CrawlResult`**.
@@ -1,36 +1,226 @@

# 1. **BrowserConfig** – Controlling the Browser

`BrowserConfig` focuses on **how** the browser is launched and behaves. This includes headless mode, proxies, user agents, and other environment tweaks.

```python
from crawl4ai import AsyncWebCrawler, BrowserConfig

browser_cfg = BrowserConfig(
    browser_type="chromium",
    headless=True,
    viewport_width=1280,
    viewport_height=720,
    proxy="http://user:pass@proxy:8080",
    user_agent="Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 Chrome/116.0.0.0 Safari/537.36",
)
```
## 1.1 Parameter Highlights

| **Parameter** | **Type / Default** | **What It Does** |
|-----------------------|----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------|
| **`browser_type`** | `"chromium"`, `"firefox"`, `"webkit"`<br/>*(default: `"chromium"`)* | Which browser engine to use. `"chromium"` is typical for many sites, `"firefox"` or `"webkit"` for specialized tests. |
| **`headless`** | `bool` (default: `True`) | Headless means no visible UI. `False` is handy for debugging. |
| **`viewport_width`** | `int` (default: `1080`) | Initial page width (in px). Useful for testing responsive layouts. |
| **`viewport_height`** | `int` (default: `600`) | Initial page height (in px). |
| **`proxy`** | `str` (default: `None`) | Single-proxy URL if you want all traffic to go through it, e.g. `"http://user:pass@proxy:8080"`. |
| **`proxy_config`** | `dict` (default: `None`) | For advanced or multi-proxy needs, specify details like `{"server": "...", "username": "...", ...}`. |
| **`use_persistent_context`** | `bool` (default: `False`) | If `True`, uses a **persistent** browser context (keep cookies, sessions across runs). Also sets `use_managed_browser=True`. |
| **`user_data_dir`** | `str or None` (default: `None`) | Directory to store user data (profiles, cookies). Must be set if you want permanent sessions. |
| **`ignore_https_errors`** | `bool` (default: `True`) | If `True`, continues despite invalid certificates (common in dev/staging). |
| **`java_script_enabled`** | `bool` (default: `True`) | Disable if you want no JS overhead, or if only static content is needed. |
| **`cookies`** | `list` (default: `[]`) | Pre-set cookies, each a dict like `{"name": "session", "value": "...", "url": "..."}`. |
| **`headers`** | `dict` (default: `{}`) | Extra HTTP headers for every request, e.g. `{"Accept-Language": "en-US"}`. |
| **`user_agent`** | `str` (default: Chrome-based UA) | Your custom or random user agent. `user_agent_mode="random"` can shuffle it. |
| **`light_mode`** | `bool` (default: `False`) | Disables some background features for performance gains. |
| **`text_mode`** | `bool` (default: `False`) | If `True`, tries to disable images/other heavy content for speed. |
| **`use_managed_browser`** | `bool` (default: `False`) | For advanced “managed” interactions (debugging, CDP usage). Typically set automatically if persistent context is on. |
| **`extra_args`** | `list` (default: `[]`) | Additional flags for the underlying browser process, e.g. `["--disable-extensions"]`. |

**Tips**:
- Set `headless=False` to visually **debug** how pages load or how interactions proceed.
- If you need **authentication** storage or repeated sessions, consider `use_persistent_context=True` and specify `user_data_dir`.
- For large pages, you might need a bigger `viewport_width` and `viewport_height` to handle dynamic content.
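Putting the session-related tips together, a hedged sketch of a persistent-profile configuration (the path and header values are placeholders; parameter names come from the table above):

```python
from crawl4ai import BrowserConfig

persistent_cfg = BrowserConfig(
    browser_type="chromium",
    headless=True,
    use_persistent_context=True,            # keep cookies/sessions across runs
    user_data_dir="./my_browser_profile",   # placeholder path; required for persistence
    headers={"Accept-Language": "en-US"},
)
```

Subsequent runs pointed at the same `user_data_dir` reuse the stored cookies and login state.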
---

# 2. **CrawlerRunConfig** – Controlling Each Crawl

While `BrowserConfig` sets up the **environment**, `CrawlerRunConfig` details **how** each **crawl operation** should behave: caching, content filtering, link or domain blocking, timeouts, JavaScript code, etc.

```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

run_cfg = CrawlerRunConfig(
    wait_for="css:.main-content",
    word_count_threshold=15,
    excluded_tags=["nav", "footer"],
    exclude_external_links=True,
)
```
## 2.1 Parameter Highlights

We group them by category.

### A) **Content Processing**

| **Parameter** | **Type / Default** | **What It Does** |
|------------------------------|--------------------------------------|-------------------------------------------------------------------------------------------------|
| **`word_count_threshold`** | `int` (default: ~200) | Skips text blocks below X words. Helps ignore trivial sections. |
| **`extraction_strategy`** | `ExtractionStrategy` (default: None) | If set, extracts structured data (CSS-based, LLM-based, etc.). |
| **`markdown_generator`** | `MarkdownGenerationStrategy` (None) | If you want specialized markdown output (citations, filtering, chunking, etc.). |
| **`content_filter`** | `RelevantContentFilter` (None) | Filters out irrelevant text blocks. E.g., `PruningContentFilter` or `BM25ContentFilter`. |
| **`css_selector`** | `str` (None) | Retains only the part of the page matching this selector. |
| **`excluded_tags`** | `list` (None) | Removes entire tags (e.g. `["script", "style"]`). |
| **`excluded_selector`** | `str` (None) | Like `css_selector` but to exclude. E.g. `"#ads, .tracker"`. |
| **`only_text`** | `bool` (False) | If `True`, tries to extract text-only content. |
| **`prettiify`** | `bool` (False) | If `True`, beautifies final HTML (slower, purely cosmetic). |
| **`keep_data_attributes`** | `bool` (False) | If `True`, preserve `data-*` attributes in cleaned HTML. |
| **`remove_forms`** | `bool` (False) | If `True`, remove all `<form>` elements. |

---
### B) **Caching & Session**

| **Parameter** | **Type / Default** | **What It Does** |
|-------------------------|------------------------|------------------------------------------------------------------------------------------------------------------------------|
| **`cache_mode`** | `CacheMode or None` | Controls how caching is handled (`ENABLED`, `BYPASS`, `DISABLED`, etc.). If `None`, typically defaults to `ENABLED`. |
| **`session_id`** | `str or None` | Assign a unique ID to reuse a single browser session across multiple `arun()` calls. |
| **`bypass_cache`** | `bool` (False) | If `True`, acts like `CacheMode.BYPASS`. |
| **`disable_cache`** | `bool` (False) | If `True`, acts like `CacheMode.DISABLED`. |
| **`no_cache_read`** | `bool` (False) | If `True`, acts like `CacheMode.WRITE_ONLY` (writes cache but never reads). |
| **`no_cache_write`** | `bool` (False) | If `True`, acts like `CacheMode.READ_ONLY` (reads cache but never writes). |

Use these to control whether you read from or write to a local content cache. Handy for large batch crawls or repeated site visits.
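The boolean shorthands above map onto `CacheMode` values. A minimal sketch of that mapping, as a hypothetical helper (the library's actual resolution logic and flag precedence may differ):

```python
def resolve_cache_mode(bypass_cache=False, disable_cache=False,
                       no_cache_read=False, no_cache_write=False):
    """Map the legacy boolean flags to a CacheMode name (illustrative only)."""
    if disable_cache:
        return "DISABLED"    # no reads, no writes
    if bypass_cache:
        return "BYPASS"      # skip the cache for this crawl
    if no_cache_read:
        return "WRITE_ONLY"  # write results, never read
    if no_cache_write:
        return "READ_ONLY"   # read cached results, never write
    return "ENABLED"         # default: normal read/write
```

In practice, prefer passing `cache_mode` directly; the boolean flags exist for backward compatibility.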

---

### C) **Page Navigation & Timing**

| **Parameter** | **Type / Default** | **What It Does** |
|----------------------------|-------------------------|----------------------------------------------------------------------------------------------------------------------|
| **`wait_until`** | `str` (`"domcontentloaded"`) | Condition for navigation to "complete". Often `"networkidle"` or `"domcontentloaded"`. |
| **`page_timeout`** | `int` (60000 ms) | Timeout for page navigation or JS steps. Increase for slow sites. |
| **`wait_for`** | `str or None` | Wait for a CSS (`"css:selector"`) or JS (`"js:() => bool"`) condition before content extraction. |
| **`wait_for_images`** | `bool` (False) | Wait for images to load before finishing. Slows things down if you only want text. |
| **`delay_before_return_html`** | `float` (0.1) | Additional pause (seconds) before the final HTML is captured. Good for last-second updates. |
| **`mean_delay`** and **`max_range`** | `float` (0.1, 0.3) | If you call `arun_many()`, these define random delay intervals between crawls, helping avoid detection or rate limits. |
| **`semaphore_count`** | `int` (5) | Max concurrency for `arun_many()`. Increase if you have resources for parallel crawls. |
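The `wait_for` string carries a `css:` or `js:` prefix that selects the waiting mechanism. A minimal sketch of how such a prefix could be parsed, as a hypothetical helper (not the library's code; treating bare strings as CSS is an assumption here):

```python
def parse_wait_for(condition: str):
    """Split a wait_for string into (kind, expression). Illustrative only."""
    if condition.startswith("css:"):
        return ("css", condition[4:])   # wait for a matching element
    if condition.startswith("js:"):
        return ("js", condition[3:])    # poll a JS predicate until truthy
    # Bare strings are treated as CSS selectors by default (an assumption).
    return ("css", condition)
```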

---

### D) **Page Interaction**

| **Parameter** | **Type / Default** | **What It Does** |
|----------------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------|
| **`js_code`** | `str or list[str]` (None) | JavaScript to run after load. E.g. `"document.querySelector('button')?.click();"`. |
| **`js_only`** | `bool` (False) | If `True`, indicates we're reusing an existing session and only applying JS. No full reload. |
| **`ignore_body_visibility`** | `bool` (True) | Skip checking if `<body>` is visible. Usually best to keep `True`. |
| **`scan_full_page`** | `bool` (False) | If `True`, auto-scrolls the page to load dynamic content (infinite scroll). |
| **`scroll_delay`** | `float` (0.2) | Delay between scroll steps if `scan_full_page=True`. |
| **`process_iframes`** | `bool` (False) | Inlines iframe content for single-page extraction. |
| **`remove_overlay_elements`** | `bool` (False) | Removes potential modals/popups blocking the main content. |
| **`simulate_user`** | `bool` (False) | Simulates user interactions (mouse movements) to avoid bot detection. |
| **`override_navigator`** | `bool` (False) | Overrides `navigator` properties in JS for stealth. |
| **`magic`** | `bool` (False) | Automatic handling of popups/consent banners. Experimental. |
| **`adjust_viewport_to_content`** | `bool` (False) | Resizes the viewport to match the page content height. |

If your page is a single-page app with repeated JS updates, set `js_only=True` in subsequent calls, plus a `session_id` for reusing the same tab.

---

### E) **Media Handling**

| **Parameter** | **Type / Default** | **What It Does** |
|--------------------------------------------|---------------------|-----------------------------------------------------------------------------------------------------------|
| **`screenshot`** | `bool` (False) | Capture a screenshot (base64) in `result.screenshot`. |
| **`screenshot_wait_for`** | `float or None` | Extra wait time before the screenshot. |
| **`screenshot_height_threshold`** | `int` (~20000) | If the page is taller than this, alternate screenshot strategies are used. |
| **`pdf`** | `bool` (False) | If `True`, returns a PDF in `result.pdf`. |
| **`image_description_min_word_threshold`** | `int` (~50) | Minimum words for an image's alt text or description to be considered valid. |
| **`image_score_threshold`** | `int` (~3) | Filter out low-scoring images. The crawler scores images by relevance (size, context, etc.). |
| **`exclude_external_images`** | `bool` (False) | Exclude images from other domains. |
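The scoring and domain checks above can be pictured as a small post-processing step. A toy sketch, assuming a `(src, score)` input shape; the function name and shape are illustrative, not the library's API:

```python
from urllib.parse import urlparse

def filter_images(page_url, images, score_threshold=3, exclude_external_images=False):
    """Keep only images that pass the score and domain checks (toy sketch)."""
    page_domain = urlparse(page_url).netloc
    kept = []
    for src, score in images:
        if score < score_threshold:
            continue  # drop low-relevance images
        if exclude_external_images and urlparse(src).netloc != page_domain:
            continue  # drop images hosted on other domains
        kept.append(src)
    return kept
```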
---

### F) **Link/Domain Handling**

| **Parameter** | **Type / Default** | **What It Does** |
|------------------------------|-------------------------|-----------------------------------------------------------------------------------------------------------------------------|
| **`exclude_social_media_domains`** | `list` (default: common social domains) | A default list (e.g. Facebook, Twitter) that can be extended. Any link to these domains is removed from the final output. |
| **`exclude_external_links`** | `bool` (False) | Removes all links pointing outside the current domain. |
| **`exclude_social_media_links`** | `bool` (False) | Strips links specifically to social sites (like Facebook or Twitter). |
| **`exclude_domains`** | `list` ([]) | Provide a custom list of domains to exclude (like `["ads.com", "trackers.io"]`). |

Use these for link-level content filtering (often to keep crawls "internal" or to remove spammy domains).
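The combined effect of `exclude_domains` and `exclude_external_links` can be sketched as a simple post-processing filter. This is illustrative only; the function name and exact matching semantics are assumptions, not the library's internals:

```python
from urllib.parse import urlparse

def filter_links(page_url, links, exclude_external_links=False, exclude_domains=()):
    """Drop links per the table above (illustrative sketch)."""
    page_domain = urlparse(page_url).netloc
    kept = []
    for link in links:
        domain = urlparse(link).netloc
        if any(domain == d or domain.endswith("." + d) for d in exclude_domains):
            continue  # explicitly blocked domain (including subdomains)
        if exclude_external_links and domain != page_domain:
            continue  # keep the crawl "internal"
        kept.append(link)
    return kept
```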

---

### G) **Debug & Logging**

| **Parameter** | **Type / Default** | **What It Does** |
|----------------|--------------------|---------------------------------------------------------------------------|
| **`verbose`** | `bool` (True) | Prints logs detailing each step of crawling, interactions, or errors. |
| **`log_console`** | `bool` (False) | Logs the page's JavaScript console output if you want deeper JS debugging. |

---

## 2.2 Example Usage
```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

async def main():
    # Configure the browser
    browser_cfg = BrowserConfig(
        headless=False,
        viewport_width=1280,
        viewport_height=720,
        proxy="http://user:pass@myproxy:8080",
        text_mode=True
    )

    # Configure the run
    run_cfg = CrawlerRunConfig(
        cache_mode=CacheMode.BYPASS,
        session_id="my_session",
        css_selector="main.article",
        excluded_tags=["script", "style"],
        exclude_external_links=True,
        wait_for="css:.article-loaded",
        screenshot=True
    )

    async with AsyncWebCrawler(config=browser_cfg) as crawler:
        result = await crawler.arun(
            url="https://example.com/news",
            config=run_cfg
        )
        if result.success:
            print("Final cleaned_html length:", len(result.cleaned_html))
            if result.screenshot:
                print("Screenshot captured (base64, length):", len(result.screenshot))
        else:
            print("Crawl failed:", result.error_message)

if __name__ == "__main__":
    asyncio.run(main())
```

**What's Happening**:

- **`text_mode=True`** avoids loading images and other heavy resources, speeding up the crawl.
- We disable caching (`cache_mode=CacheMode.BYPASS`) to always fetch fresh content.
- We only keep `main.article` content by specifying `css_selector="main.article"`.
- We exclude external links (`exclude_external_links=True`).
- We take a quick screenshot (`screenshot=True`) before finishing.
---

## 3. Putting It All Together

- **Use** `BrowserConfig` for **global** browser settings: engine, headless mode, proxy, user agent.
- **Use** `CrawlerRunConfig` for each crawl's **context**: how to filter content, handle caching, wait for dynamic elements, or run JS.
- **Pass** the `BrowserConfig` to the `AsyncWebCrawler` constructor and the `CrawlerRunConfig` to `arun()`.
| File Name | Parameter Name | Code Usage | Strategy/Class | Description |
|-----------|----------------|------------|----------------|-------------|
| async_crawler_strategy.py | user_agent | `kwargs.get("user_agent")` | AsyncPlaywrightCrawlerStrategy | User agent string for browser identification |
| async_crawler_strategy.py | proxy | `kwargs.get("proxy")` | AsyncPlaywrightCrawlerStrategy | Proxy server configuration for network requests |
| async_crawler_strategy.py | proxy_config | `kwargs.get("proxy_config")` | AsyncPlaywrightCrawlerStrategy | Detailed proxy configuration including auth |
| async_crawler_strategy.py | headless | `kwargs.get("headless", True)` | AsyncPlaywrightCrawlerStrategy | Whether to run browser in headless mode |
| async_crawler_strategy.py | browser_type | `kwargs.get("browser_type", "chromium")` | AsyncPlaywrightCrawlerStrategy | Type of browser to use (chromium/firefox/webkit) |
| async_crawler_strategy.py | headers | `kwargs.get("headers", {})` | AsyncPlaywrightCrawlerStrategy | Custom HTTP headers for requests |
| async_crawler_strategy.py | verbose | `kwargs.get("verbose", False)` | AsyncPlaywrightCrawlerStrategy | Enable detailed logging output |
| async_crawler_strategy.py | sleep_on_close | `kwargs.get("sleep_on_close", False)` | AsyncPlaywrightCrawlerStrategy | Add delay before closing browser |
| async_crawler_strategy.py | use_managed_browser | `kwargs.get("use_managed_browser", False)` | AsyncPlaywrightCrawlerStrategy | Use managed browser instance |
| async_crawler_strategy.py | user_data_dir | `kwargs.get("user_data_dir", None)` | AsyncPlaywrightCrawlerStrategy | Custom directory for browser profile data |
| async_crawler_strategy.py | session_id | `kwargs.get("session_id")` | AsyncPlaywrightCrawlerStrategy | Unique identifier for browser session |
| async_crawler_strategy.py | override_navigator | `kwargs.get("override_navigator", False)` | AsyncPlaywrightCrawlerStrategy | Override browser navigator properties |
| async_crawler_strategy.py | simulate_user | `kwargs.get("simulate_user", False)` | AsyncPlaywrightCrawlerStrategy | Simulate human-like behavior |
| async_crawler_strategy.py | magic | `kwargs.get("magic", False)` | AsyncPlaywrightCrawlerStrategy | Enable advanced anti-detection features |
| async_crawler_strategy.py | log_console | `kwargs.get("log_console", False)` | AsyncPlaywrightCrawlerStrategy | Log browser console messages |
| async_crawler_strategy.py | js_only | `kwargs.get("js_only", False)` | AsyncPlaywrightCrawlerStrategy | Only execute JavaScript without page load |
| async_crawler_strategy.py | page_timeout | `kwargs.get("page_timeout", 60000)` | AsyncPlaywrightCrawlerStrategy | Timeout for page load in milliseconds |
| async_crawler_strategy.py | ignore_body_visibility | `kwargs.get("ignore_body_visibility", True)` | AsyncPlaywrightCrawlerStrategy | Process page even if body is hidden |
| async_crawler_strategy.py | js_code | `kwargs.get("js_code", kwargs.get("js", self.js_code))` | AsyncPlaywrightCrawlerStrategy | Custom JavaScript code to execute |
| async_crawler_strategy.py | wait_for | `kwargs.get("wait_for")` | AsyncPlaywrightCrawlerStrategy | Wait for specific element/condition |
| async_crawler_strategy.py | process_iframes | `kwargs.get("process_iframes", False)` | AsyncPlaywrightCrawlerStrategy | Extract content from iframes |
| async_crawler_strategy.py | delay_before_return_html | `kwargs.get("delay_before_return_html")` | AsyncPlaywrightCrawlerStrategy | Additional delay before returning HTML |
| async_crawler_strategy.py | remove_overlay_elements | `kwargs.get("remove_overlay_elements", False)` | AsyncPlaywrightCrawlerStrategy | Remove pop-ups and overlay elements |
| async_crawler_strategy.py | screenshot | `kwargs.get("screenshot")` | AsyncPlaywrightCrawlerStrategy | Take page screenshot |
| async_crawler_strategy.py | screenshot_wait_for | `kwargs.get("screenshot_wait_for")` | AsyncPlaywrightCrawlerStrategy | Wait before taking screenshot |
| async_crawler_strategy.py | semaphore_count | `kwargs.get("semaphore_count", 5)` | AsyncPlaywrightCrawlerStrategy | Concurrent request limit |
| async_webcrawler.py | verbose | `kwargs.get("verbose", False)` | AsyncWebCrawler | Enable detailed logging |
| async_webcrawler.py | warmup | `kwargs.get("warmup", True)` | AsyncWebCrawler | Initialize crawler with warmup request |
| async_webcrawler.py | session_id | `kwargs.get("session_id", None)` | AsyncWebCrawler | Session identifier for browser reuse |
| async_webcrawler.py | only_text | `kwargs.get("only_text", False)` | AsyncWebCrawler | Extract only text content |
| async_webcrawler.py | bypass_cache | `kwargs.get("bypass_cache", False)` | AsyncWebCrawler | Skip cache and force fresh crawl |
| async_webcrawler.py | cache_mode | `kwargs.get("cache_mode", CacheMode.ENABLE)` | AsyncWebCrawler | Cache handling mode for request |
@@ -218,12 +218,12 @@ result = await crawler.arun(

## Best Practices

1. **Choose the Right Strategy**
   - Use `LLMExtractionStrategy` for complex, unstructured content
   - Use `JsonCssExtractionStrategy` for well-structured HTML
   - Use `CosineStrategy` for content similarity and clustering

2. **Optimize Chunking**
   ```python
   # For long documents
   strategy = LLMExtractionStrategy(
@@ -232,7 +232,7 @@ result = await crawler.arun(
   )
   ```

3. **Handle Errors**
   ```python
   try:
       result = await crawler.arun(
@@ -245,7 +245,7 @@ result = await crawler.arun(
       print(f"Extraction failed: {e}")
   ```

4. **Monitor Performance**
   ```python
   strategy = CosineStrategy(
       verbose=True,  # Enable logging
BIN docs/md_v2/assets/images/dispatcher.png (new file, 476 KiB; binary not shown)
@@ -7,6 +7,7 @@

:root {
  --global-font-size: 16px;
  --global-code-font-size: 14px;
  --global-line-height: 1.5em;
  --global-space: 10px;
  --font-stack: Menlo, Monaco, Lucida Console, Liberation Mono, DejaVu Sans Mono, Bitstream Vera Sans Mono,
@@ -20,6 +21,7 @@
  --invert-font-color: #151515; /* Dark color for inverted elements */
  --primary-color: #1a95e0; /* Primary color can remain the same or be adjusted for better contrast */
  --secondary-color: #727578; /* Secondary color for less important text */
  --secondary-dimmed-color: #8b857a; /* Dimmed secondary color */
  --error-color: #ff5555; /* Bright color for errors */
  --progress-bar-background: #444; /* Darker background for progress bar */
  --progress-bar-fill: #1a95e0; /* Bright color for progress bar fill */
@@ -37,8 +39,9 @@
  --secondary-color: #a3abba;
  --secondary-color: #d5cec0;
  --tertiary-color: #a3abba;
  --primary-color: #09b5a5; /* Updated to the brand color */
  --primary-dimmed-color: #09b5a5; /* Updated to the brand color */
  --primary-color: #50ffff; /* Updated to the brand color */
  --accent-color: rgb(243, 128, 245);
  --error-color: #ff3c74;
  --progress-bar-background: #3f3f44;
  --progress-bar-fill: #09b5a5; /* Updated to the brand color */
@@ -80,10 +83,16 @@ pre, code {
  line-height: var(--global-line-height);
}

strong,
strong {
  /* color : var(--primary-dimmed-color); */
  /* background-color: #50ffff17; */
  text-shadow: 0 0 0px var(--font-color), 0 0 0px var(--font-color);
}

.highlight {
  /* background: url(//s2.svgbox.net/pen-brushes.svg?ic=brush-1&color=50ffff); */
  background-color: #50ffff33;
  background-color: #50ffff17;
}

.terminal-card > header {
@@ -158,3 +167,70 @@ ol li::before {
  /* float: left; */
  /* padding-right: 5px; */
}

/* 8 TERMINAL CSS */

.terminal code {
  font-size: var(--global-code-font-size);
  background: var(--block-background-color);
  /* color: var(--secondary-color); */
  color: var(--primary-dimmed-color);
}

.terminal pre code {
  background: var(--block-background-color);
  color: var(--secondary-color);
}

.hljs-keyword, .hljs-selector-tag, .hljs-built_in, .hljs-name, .hljs-tag {
  color: var(--accent-color);
}
.hljs-string {
  color: var(--primary-dimmed-color);
}
.hljs-comment {
  color: var(--secondary-dimmed-color);
  font-style: italic;
  font-size: 0.9em;
}
.hljs-number {
  color: var(--primary-dimmed-color);
}

.terminal strong > code, .terminal h2 > code, .terminal h3 > code {
  background-color: transparent;
  /* color: var(--font-color); */
  color: var(--primary-dimmed-color);
  text-shadow: none;
}

blockquote {
  background-color: var(--invert-font-color);
  padding: 1em 2em;
  border-left: 2px solid var(--primary-dimmed-color);
}

blockquote::after {
  content: "💡";
  white-space: pre;
  position: absolute;
  top: 1em;
  left: 5px;
  line-height: var(--global-line-height);
  color: #9ca2ab;
}

pre {
  display: block;
  word-break: break-word;
  word-wrap: break-word;
}

.terminal h1 {
  font-size: 2em;
}

.terminal h1, .terminal h2, .terminal h3, .terminal h4, .terminal h5, .terminal h6 {
  text-shadow: 0 0 0px var(--font-color), 0 0 0px var(--font-color), 0 0 0px var(--font-color);
}
@@ -1,208 +0,0 @@
# Browser Configuration

Crawl4AI supports multiple browser engines and offers extensive configuration options for browser behavior.

## Browser Types

Choose from three browser engines:

```python
# Chromium (default)
async with AsyncWebCrawler(browser_type="chromium") as crawler:
    result = await crawler.arun(url="https://example.com")

# Firefox
async with AsyncWebCrawler(browser_type="firefox") as crawler:
    result = await crawler.arun(url="https://example.com")

# WebKit
async with AsyncWebCrawler(browser_type="webkit") as crawler:
    result = await crawler.arun(url="https://example.com")
```

## Basic Configuration

Common browser settings:

```python
async with AsyncWebCrawler(
    headless=True,        # Run in headless mode (no GUI)
    verbose=True,         # Enable detailed logging
    sleep_on_close=False  # No delay when closing the browser
) as crawler:
    result = await crawler.arun(url="https://example.com")
```
## Identity Management

Control how your crawler appears to websites:

```python
# Custom user agent
async with AsyncWebCrawler(
    user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
) as crawler:
    result = await crawler.arun(url="https://example.com")

# Custom headers
headers = {
    "Accept-Language": "en-US,en;q=0.9",
    "Cache-Control": "no-cache"
}
async with AsyncWebCrawler(headers=headers) as crawler:
    result = await crawler.arun(url="https://example.com")
```

## Screenshot Capabilities

Capture page screenshots with enhanced error handling:

```python
result = await crawler.arun(
    url="https://example.com",
    screenshot=True,         # Enable screenshot
    screenshot_wait_for=2.0  # Wait 2 seconds before capture
)

if result.screenshot:  # Base64-encoded image
    import base64
    with open("screenshot.png", "wb") as f:
        f.write(base64.b64decode(result.screenshot))
```
## Timeouts and Waiting

Control page loading behavior:

```python
result = await crawler.arun(
    url="https://example.com",
    page_timeout=60000,              # Page load timeout (ms)
    delay_before_return_html=2.0,    # Wait before content capture
    wait_for="css:.dynamic-content"  # Wait for a specific element
)
```

## JavaScript Execution

Execute custom JavaScript before crawling:

```python
# Single JavaScript command
result = await crawler.arun(
    url="https://example.com",
    js_code="window.scrollTo(0, document.body.scrollHeight);"
)

# Multiple commands
js_commands = [
    "window.scrollTo(0, document.body.scrollHeight);",
    "document.querySelector('.load-more').click();"
]
result = await crawler.arun(
    url="https://example.com",
    js_code=js_commands
)
```
## Proxy Configuration

Use proxies for enhanced access:

```python
# Simple proxy
async with AsyncWebCrawler(
    proxy="http://proxy.example.com:8080"
) as crawler:
    result = await crawler.arun(url="https://example.com")

# Proxy with authentication
proxy_config = {
    "server": "http://proxy.example.com:8080",
    "username": "user",
    "password": "pass"
}
async with AsyncWebCrawler(proxy_config=proxy_config) as crawler:
    result = await crawler.arun(url="https://example.com")
```

## Anti-Detection Features

Enable stealth features to avoid bot detection:

```python
result = await crawler.arun(
    url="https://example.com",
    simulate_user=True,       # Simulate human behavior
    override_navigator=True,  # Mask automation signals
    magic=True                # Enable all anti-detection features
)
```
## Handling Dynamic Content

Configure the browser to handle dynamic content:

```python
# Wait for dynamic content
result = await crawler.arun(
    url="https://example.com",
    wait_for="js:() => document.querySelector('.content').children.length > 10",
    process_iframes=True  # Process iframe content
)

# Handle lazy-loaded images
result = await crawler.arun(
    url="https://example.com",
    js_code="window.scrollTo(0, document.body.scrollHeight);",
    delay_before_return_html=2.0  # Wait for images to load
)
```
## Comprehensive Example

Here's how to combine various browser configurations:

```python
async def crawl_with_advanced_config(url: str):
    async with AsyncWebCrawler(
        # Browser setup
        browser_type="chromium",
        headless=True,
        verbose=True,

        # Identity
        user_agent="Custom User Agent",
        headers={"Accept-Language": "en-US"},

        # Proxy setup
        proxy="http://proxy.example.com:8080"
    ) as crawler:
        result = await crawler.arun(
            url=url,
            # Content handling
            process_iframes=True,
            screenshot=True,

            # Timing
            page_timeout=60000,
            delay_before_return_html=2.0,

            # Anti-detection
            magic=True,
            simulate_user=True,

            # Dynamic content
            js_code=[
                "window.scrollTo(0, document.body.scrollHeight);",
                "document.querySelector('.load-more')?.click();"
            ],
            wait_for="css:.dynamic-content"
        )

        return {
            "content": result.markdown,
            "screenshot": result.screenshot,
            "success": result.success
        }
```
@@ -1,135 +0,0 @@
### Content Selection

Crawl4AI provides multiple ways to select and filter specific content from webpages. Learn how to precisely target the content you need.

#### CSS Selectors

Extract specific content using a `CrawlerRunConfig` with CSS selectors:

```python
from crawl4ai.async_configs import CrawlerRunConfig

config = CrawlerRunConfig(css_selector=".main-article")  # Target main article content
result = await crawler.arun(url="https://crawl4ai.com", config=config)

config = CrawlerRunConfig(css_selector="article h1, article .content")  # Target heading and content
result = await crawler.arun(url="https://crawl4ai.com", config=config)
```
#### Content Filtering

Control content inclusion or exclusion with `CrawlerRunConfig`:

```python
config = CrawlerRunConfig(
    word_count_threshold=10,                            # Minimum words per block
    excluded_tags=['form', 'header', 'footer', 'nav'],  # Excluded tags
    exclude_external_links=True,                        # Remove external links
    exclude_social_media_links=True,                    # Remove social media links
    exclude_external_images=True                        # Remove external images
)

result = await crawler.arun(url="https://crawl4ai.com", config=config)
```

#### Iframe Content

Process iframe content by enabling specific options in `CrawlerRunConfig`:

```python
config = CrawlerRunConfig(
    process_iframes=True,         # Extract iframe content
    remove_overlay_elements=True  # Remove popups/modals that might block iframes
)

result = await crawler.arun(url="https://crawl4ai.com", config=config)
```
#### Structured Content Selection Using LLMs

Leverage LLMs for intelligent content extraction:

```python
import json

from crawl4ai.extraction_strategy import LLMExtractionStrategy
from pydantic import BaseModel
from typing import List

class ArticleContent(BaseModel):
    title: str
    main_points: List[str]
    conclusion: str

strategy = LLMExtractionStrategy(
    provider="ollama/nemotron",
    schema=ArticleContent.schema(),
    instruction="Extract the main article title, key points, and conclusion"
)

config = CrawlerRunConfig(extraction_strategy=strategy)

result = await crawler.arun(url="https://crawl4ai.com", config=config)
article = json.loads(result.extracted_content)
```
#### Pattern-Based Selection

Extract content matching repetitive patterns:

```python
import json

from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

schema = {
    "name": "News Articles",
    "baseSelector": "article.news-item",
    "fields": [
        {"name": "headline", "selector": "h2", "type": "text"},
        {"name": "summary", "selector": ".summary", "type": "text"},
        {"name": "category", "selector": ".category", "type": "text"},
        {
            "name": "metadata",
            "type": "nested",
            "fields": [
                {"name": "author", "selector": ".author", "type": "text"},
                {"name": "date", "selector": ".date", "type": "text"}
            ]
        }
    ]
}

strategy = JsonCssExtractionStrategy(schema)
config = CrawlerRunConfig(extraction_strategy=strategy)

result = await crawler.arun(url="https://crawl4ai.com", config=config)
articles = json.loads(result.extracted_content)
```
#### Comprehensive Example

Combine different selection methods using `CrawlerRunConfig`:

```python
import json

from crawl4ai import AsyncWebCrawler
from crawl4ai.async_configs import CrawlerRunConfig
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def extract_article_content(url: str):
    # Define structured extraction
    article_schema = {
        "name": "Article",
        "baseSelector": "article.main",
        "fields": [
            {"name": "title", "selector": "h1", "type": "text"},
            {"name": "content", "selector": ".content", "type": "text"}
        ]
    }

    # Define configuration
    config = CrawlerRunConfig(
        extraction_strategy=JsonCssExtractionStrategy(article_schema),
        word_count_threshold=10,
        excluded_tags=['nav', 'footer'],
        exclude_external_links=True
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url=url, config=config)
        return json.loads(result.extracted_content)
```
@@ -1,83 +0,0 @@
# Content Filtering in Crawl4AI

This guide explains how to use content filtering strategies in Crawl4AI to extract the most relevant information from crawled web pages. You'll learn how to use the built-in `BM25ContentFilter` and how to create your own custom content filtering strategies.

## Relevant Content Filter

The `RelevantContentFilter` is an abstract class providing a common interface for content filtering strategies. Specific algorithms, like `PruningContentFilter` or `BM25ContentFilter`, inherit from this class and implement the `filter_content` method. This method takes the HTML content as input and returns a list of filtered text blocks.

## Pruning Content Filter

The `PruningContentFilter` removes less relevant nodes based on metrics like text density, link density, and tag importance. Nodes that fall below a defined threshold are pruned, leaving only high-value content.

### Usage
```python
from crawl4ai.async_configs import CrawlerRunConfig
from crawl4ai.content_filter_strategy import PruningContentFilter

config = CrawlerRunConfig(
    content_filter=PruningContentFilter(
        min_word_threshold=5,
        threshold_type='dynamic',
        threshold=0.45
    ),
    fit_markdown=True  # Activates markdown fitting
)

result = await crawler.arun(url="https://example.com", config=config)

if result.success:
    print(f"Cleaned Markdown:\n{result.fit_markdown}")
```
### Parameters
|
||||
|
||||
- **`min_word_threshold`**: (Optional) Minimum number of words a node must contain to be considered relevant. Nodes with fewer words are automatically pruned.
|
||||
- **`threshold_type`**: (Optional, default 'fixed') Controls how pruning thresholds are calculated:
|
||||
- `'fixed'`: Uses a constant threshold value for all nodes.
|
||||
- `'dynamic'`: Adjusts thresholds based on node properties (e.g., tag importance, text/link ratios).
|
||||
- **`threshold`**: (Optional, default 0.48) Base threshold for pruning:
|
||||
- Fixed: Nodes scoring below this value are removed.
|
||||
- Dynamic: This value adjusts based on node characteristics.
|
||||
|
||||
### How It Works
|
||||
|
||||
The algorithm evaluates each node using:
|
||||
- **Text density**: Ratio of text to overall content.
|
||||
- **Link density**: Proportion of text within links.
|
||||
- **Tag importance**: Weights based on HTML tag type (e.g., `<article>`, `<p>`, `<div>`).
|
||||
- **Content quality**: Metrics like text length and structural importance.
|
||||
|
||||
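
To make these metrics concrete, here is a toy scoring function. The tag weights and the way the metrics are combined are assumptions for illustration, not the library's actual implementation:

```python
# Toy illustration of combining the metrics above into a single node score.
# The tag weights and the formula are assumptions, not crawl4ai's real code.
def node_score(text_len: int, total_len: int, link_text_len: int, tag: str) -> float:
    tag_weights = {"article": 1.5, "p": 1.2, "div": 1.0, "aside": 0.6}  # assumed weights
    text_density = text_len / total_len if total_len else 0.0
    link_density = link_text_len / text_len if text_len else 0.0
    # Dense text in important tags scores high; link-heavy nodes are penalized.
    return tag_weights.get(tag, 1.0) * text_density * (1.0 - link_density)
```

A node whose score falls below the configured `threshold` would then be pruned.
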
## BM25 Algorithm

The `BM25ContentFilter` uses the BM25 ranking algorithm to score and extract text chunks based on their relevance to a search query or the page metadata.

### Usage

```python
from crawl4ai.async_configs import CrawlerRunConfig
from crawl4ai.content_filter_strategy import BM25ContentFilter

config = CrawlerRunConfig(
    content_filter=BM25ContentFilter(user_query="fruit nutrition health"),
    fit_markdown=True  # Activates markdown fitting
)

result = await crawler.arun(url="https://example.com", config=config)

if result.success:
    print(f"Filtered Content:\n{result.extracted_content}")
    print(f"\nFiltered Markdown:\n{result.fit_markdown}")
    print(f"\nFiltered HTML:\n{result.fit_html}")
else:
    print("Error:", result.error_message)
```

### Parameters

- **`user_query`**: (Optional) A string representing the search query. If not provided, the filter extracts page metadata (title, description, keywords) and uses it as the query.
- **`bm25_threshold`**: (Optional, default 1.0) Threshold controlling relevance:
  - Higher values return stricter, more relevant results.
  - Lower values allow more lenient filtering.
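
For intuition about how such relevance scores are computed, here is a compact, self-contained Okapi BM25 scorer over pre-tokenized chunks. Crawl4AI's exact variant, tokenization, and smoothing may differ, so treat this as a conceptual sketch only:

```python
import math

# Standard Okapi BM25 over tokenized chunks; crawl4ai's internal variant and
# tokenization may differ, so treat this as a conceptual sketch only.
def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    avgdl = sum(len(d) for d in corpus) / len(corpus)
    n = len(corpus)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)           # document frequency
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1)    # smoothed inverse document frequency
        tf = doc.count(term)                               # term frequency in this chunk
        score += idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score
```

Chunks whose score falls below `bm25_threshold` would then be discarded.
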
@@ -1,137 +0,0 @@

# Installation 💻

Crawl4AI offers flexible installation options to suit various use cases. You can install it as a Python package, use it with Docker, or run it as a local server.

## Option 1: Python Package Installation (Recommended)

Crawl4AI is now available on PyPI, making installation easier than ever. Choose the option that best fits your needs:

### Basic Installation

For basic web crawling and scraping tasks:

```bash
pip install crawl4ai
playwright install  # Install Playwright dependencies
```

### Installation with PyTorch

For advanced text clustering (includes the CosineSimilarity cluster strategy):

```bash
pip install crawl4ai[torch]
```

### Installation with Transformers

For text summarization and Hugging Face models:

```bash
pip install crawl4ai[transformer]
```

### Full Installation

For all features:

```bash
pip install crawl4ai[all]
```

### Development Installation

For contributors who plan to modify the source code:

```bash
git clone https://github.com/unclecode/crawl4ai.git
cd crawl4ai
pip install -e ".[all]"
playwright install  # Install Playwright dependencies
```

💡 After installing with the "torch", "transformer", or "all" options, it's recommended to run the following CLI command to download the required models:

```bash
crawl4ai-download-models
```

This is optional but will boost the performance and speed of the crawler. You only need to do this once after installation.

## Playwright Installation Note for Ubuntu

If you encounter issues with Playwright installation on Ubuntu, you may need to install additional dependencies:

```bash
sudo apt-get install -y \
    libwoff1 \
    libopus0 \
    libwebp7 \
    libwebpdemux2 \
    libenchant-2-2 \
    libgudev-1.0-0 \
    libsecret-1-0 \
    libhyphen0 \
    libgdk-pixbuf2.0-0 \
    libegl1 \
    libnotify4 \
    libxslt1.1 \
    libevent-2.1-7 \
    libgles2 \
    libxcomposite1 \
    libatk1.0-0 \
    libatk-bridge2.0-0 \
    libepoxy0 \
    libgtk-3-0 \
    libharfbuzz-icu0 \
    libgstreamer-gl1.0-0 \
    libgstreamer-plugins-bad1.0-0 \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-bad \
    libxt6 \
    libxaw7 \
    xvfb \
    fonts-noto-color-emoji \
    libfontconfig \
    libfreetype6 \
    xfonts-cyrillic \
    xfonts-scalable \
    fonts-liberation \
    fonts-ipafont-gothic \
    fonts-wqy-zenhei \
    fonts-tlwg-loma-otf \
    fonts-freefont-ttf
```

## Option 2: Using Docker (Coming Soon)

Docker support for Crawl4AI is currently in progress and will be available soon. This will allow you to run Crawl4AI in a containerized environment, ensuring consistency across different systems.

## Option 3: Local Server Installation

For those who prefer to run Crawl4AI as a local server, instructions will be provided once the Docker implementation is complete.

## Verifying Your Installation

After installation, you can verify that Crawl4AI is working correctly by running a simple Python script:

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(url="https://www.example.com")
        print(result.markdown[:500])  # Print first 500 characters

if __name__ == "__main__":
    asyncio.run(main())
```

This script should successfully crawl the example website and print the first 500 characters of the extracted content.

## Getting Help

If you encounter any issues during installation or usage, please check the [documentation](https://crawl4ai.com/mkdocs/) or raise an issue on the [GitHub repository](https://github.com/unclecode/crawl4ai/issues).

Happy crawling! 🕷️🤖
@@ -1,102 +0,0 @@

# Output Formats

Crawl4AI provides multiple output formats to suit different needs, ranging from raw HTML to structured data using LLM or pattern-based extraction, and versatile markdown outputs.

## Basic Formats

```python
result = await crawler.arun(url="https://example.com")

# Access different formats
raw_html = result.html                    # Original HTML
clean_html = result.cleaned_html          # Sanitized HTML
markdown_v2 = result.markdown_v2          # Detailed markdown generation results
fit_md = result.markdown_v2.fit_markdown  # Most relevant content in markdown
```

> **Note**: The `markdown_v2` property will soon be replaced by `markdown`. It is recommended to start transitioning to `markdown` for new implementations.

## Raw HTML

Original, unmodified HTML from the webpage. Useful when you need to:

- Preserve the exact page structure.
- Process HTML with your own tools.
- Debug page issues.

```python
result = await crawler.arun(url="https://example.com")
print(result.html)  # Complete HTML including headers, scripts, etc.
```

## Cleaned HTML

Sanitized HTML with unnecessary elements removed. Automatically:

- Removes scripts and styles.
- Cleans up formatting.
- Preserves semantic structure.

```python
config = CrawlerRunConfig(
    excluded_tags=['form', 'header', 'footer'],  # Additional tags to remove
    keep_data_attributes=False                   # Remove data-* attributes
)
result = await crawler.arun(url="https://example.com", config=config)
print(result.cleaned_html)
```

## Standard Markdown

HTML converted to clean markdown format. This output is useful for:

- Content analysis.
- Documentation.
- Readability.

```python
config = CrawlerRunConfig(
    markdown_generator=DefaultMarkdownGenerator(
        options={"include_links": True}  # Include links in markdown
    )
)
result = await crawler.arun(url="https://example.com", config=config)
print(result.markdown_v2.raw_markdown)  # Standard markdown with links
```

## Fit Markdown

Extract and convert only the most relevant content into markdown format. Best suited for:

- Article extraction.
- Focusing on the main content.
- Removing boilerplate.

To generate `fit_markdown`, use a content filter like `PruningContentFilter`:

```python
from crawl4ai.content_filter_strategy import PruningContentFilter

config = CrawlerRunConfig(
    content_filter=PruningContentFilter(
        threshold=0.7,
        threshold_type="dynamic",
        min_word_threshold=100
    )
)
result = await crawler.arun(url="https://example.com", config=config)
print(result.markdown_v2.fit_markdown)  # Extracted main content in markdown
```

## Markdown with Citations

Generate markdown that includes citations for links. This format is ideal for:

- Creating structured documentation.
- Including references for extracted content.

```python
config = CrawlerRunConfig(
    markdown_generator=DefaultMarkdownGenerator(
        options={"citations": True}  # Enable citations
    )
)
result = await crawler.arun(url="https://example.com", config=config)
print(result.markdown_v2.markdown_with_citations)
print(result.markdown_v2.references_markdown)  # Citations section
```
@@ -1,190 +0,0 @@

# Page Interaction

Crawl4AI provides powerful features for interacting with dynamic webpages, handling JavaScript execution, and managing page events.

## JavaScript Execution

### Basic Execution

```python
from crawl4ai.async_configs import CrawlerRunConfig

# Single JavaScript command
config = CrawlerRunConfig(
    js_code="window.scrollTo(0, document.body.scrollHeight);"
)
result = await crawler.arun(url="https://example.com", config=config)

# Multiple commands
js_commands = [
    "window.scrollTo(0, document.body.scrollHeight);",
    "document.querySelector('.load-more').click();",
    "document.querySelector('#consent-button').click();"
]
config = CrawlerRunConfig(js_code=js_commands)
result = await crawler.arun(url="https://example.com", config=config)
```

## Wait Conditions

### CSS-Based Waiting

Wait for elements to appear:

```python
config = CrawlerRunConfig(wait_for="css:.dynamic-content")  # Wait for element with class 'dynamic-content'
result = await crawler.arun(url="https://example.com", config=config)
```

### JavaScript-Based Waiting

Wait for custom conditions:

```python
# Wait for a number of elements
wait_condition = """() => {
    return document.querySelectorAll('.item').length > 10;
}"""

config = CrawlerRunConfig(wait_for=f"js:{wait_condition}")
result = await crawler.arun(url="https://example.com", config=config)

# Wait for dynamic content to load
wait_for_content = """() => {
    const content = document.querySelector('.content');
    return content && content.innerText.length > 100;
}"""

config = CrawlerRunConfig(wait_for=f"js:{wait_for_content}")
result = await crawler.arun(url="https://example.com", config=config)
```

## Handling Dynamic Content

### Load More Content

Handle infinite scroll or "load more" buttons:

```python
config = CrawlerRunConfig(
    js_code=[
        "window.previousCount = document.querySelectorAll('.item').length;",  # Store current item count
        "window.scrollTo(0, document.body.scrollHeight);",  # Scroll to bottom
        "const loadMore = document.querySelector('.load-more'); if(loadMore) loadMore.click();"  # Click load more
    ],
    wait_for="js:() => document.querySelectorAll('.item').length > window.previousCount"  # Wait for new content
)
result = await crawler.arun(url="https://example.com", config=config)
```

### Form Interaction

Handle forms and inputs:

```python
js_form_interaction = """
document.querySelector('#search').value = 'search term';  // Fill form fields
document.querySelector('form').submit();                  // Submit form
"""

config = CrawlerRunConfig(
    js_code=js_form_interaction,
    wait_for="css:.results"  # Wait for results to load
)
result = await crawler.arun(url="https://example.com", config=config)
```

## Timing Control

### Delays and Timeouts

Control timing of interactions:

```python
config = CrawlerRunConfig(
    page_timeout=60000,           # Page load timeout (ms)
    delay_before_return_html=2.0  # Wait before capturing content
)
result = await crawler.arun(url="https://example.com", config=config)
```

## Complex Interactions Example

Here's an example of handling a dynamic page with multiple interactions:

```python
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig

async def crawl_dynamic_content():
    async with AsyncWebCrawler() as crawler:
        # Initial page load
        config = CrawlerRunConfig(
            js_code="document.querySelector('.cookie-accept')?.click();",  # Handle cookie consent
            wait_for="css:.main-content"
        )
        result = await crawler.arun(url="https://example.com", config=config)

        # Load more content
        session_id = "dynamic_session"  # Keep session for multiple interactions

        for page in range(3):  # Load 3 pages of content
            config = CrawlerRunConfig(
                session_id=session_id,
                js_code=[
                    "window.scrollTo(0, document.body.scrollHeight);",  # Scroll to bottom
                    "window.previousCount = document.querySelectorAll('.item').length;",  # Store item count
                    "document.querySelector('.load-more')?.click();"  # Click load more
                ],
                wait_for="""js:() => {
                    const currentCount = document.querySelectorAll('.item').length;
                    return currentCount > window.previousCount;
                }""",
                js_only=(page > 0)  # Execute JS without reloading page for subsequent interactions
            )
            result = await crawler.arun(url="https://example.com", config=config)
            print(f"Page {page + 1} items:", len(result.cleaned_html))

        # Clean up session
        await crawler.crawler_strategy.kill_session(session_id)
```

## Using with Extraction Strategies

Combine page interaction with structured extraction:

```python
from typing import List
from pydantic import BaseModel
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy, LLMExtractionStrategy
from crawl4ai.async_configs import CrawlerRunConfig

# Pattern-based extraction after interaction
schema = {
    "name": "Dynamic Items",
    "baseSelector": ".item",
    "fields": [
        {"name": "title", "selector": "h2", "type": "text"},
        {"name": "description", "selector": ".desc", "type": "text"}
    ]
}

config = CrawlerRunConfig(
    js_code="window.scrollTo(0, document.body.scrollHeight);",
    wait_for="css:.item:nth-child(10)",  # Wait for 10 items
    extraction_strategy=JsonCssExtractionStrategy(schema)
)
result = await crawler.arun(url="https://example.com", config=config)

# Or use an LLM to analyze dynamic content
class ContentAnalysis(BaseModel):
    topics: List[str]
    summary: str

config = CrawlerRunConfig(
    js_code="document.querySelector('.show-more').click();",
    wait_for="css:.full-content",
    extraction_strategy=LLMExtractionStrategy(
        provider="ollama/nemotron",
        schema=ContentAnalysis.schema(),
        instruction="Analyze the full content"
    )
)
result = await crawler.arun(url="https://example.com", config=config)
```
@@ -1,172 +0,0 @@

# Quick Start Guide 🚀

Welcome to the Crawl4AI Quickstart Guide! In this tutorial, we'll walk you through the basic usage of Crawl4AI, covering everything from initial setup to advanced features like chunking and extraction strategies, using asynchronous programming. Let's dive in! 🌟

---

## Getting Started 🛠️

Set up your environment with `BrowserConfig` and create an `AsyncWebCrawler` instance.

```python
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai.async_configs import BrowserConfig

async def main():
    browser_config = BrowserConfig(verbose=True)
    async with AsyncWebCrawler(config=browser_config) as crawler:
        # Add your crawling logic here
        pass

if __name__ == "__main__":
    asyncio.run(main())
```

---

### Basic Usage

Provide a URL and let Crawl4AI do the work!

```python
from crawl4ai.async_configs import CrawlerRunConfig

async def main():
    browser_config = BrowserConfig(verbose=True)
    crawl_config = CrawlerRunConfig(url="https://www.nbcnews.com/business")
    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun(config=crawl_config)
        print(f"Basic crawl result: {result.markdown[:500]}")  # Print first 500 characters

if __name__ == "__main__":
    asyncio.run(main())
```

---

### Taking Screenshots 📸

Capture and save webpage screenshots with `CrawlerRunConfig`:

```python
from crawl4ai.async_configs import CacheMode

async def capture_and_save_screenshot(url: str, output_path: str):
    browser_config = BrowserConfig(verbose=True)
    crawl_config = CrawlerRunConfig(
        url=url,
        screenshot=True,
        cache_mode=CacheMode.BYPASS
    )
    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun(config=crawl_config)

        if result.success and result.screenshot:
            import base64
            screenshot_data = base64.b64decode(result.screenshot)
            with open(output_path, 'wb') as f:
                f.write(screenshot_data)
            print(f"Screenshot saved successfully to {output_path}")
        else:
            print("Failed to capture screenshot")
```

---

### Browser Selection 🌐

Choose from multiple browser engines using `BrowserConfig`:

```python
from crawl4ai.async_configs import BrowserConfig

# Use Firefox
firefox_config = BrowserConfig(browser_type="firefox", verbose=True, headless=True)
async with AsyncWebCrawler(config=firefox_config) as crawler:
    result = await crawler.arun(config=CrawlerRunConfig(url="https://www.example.com"))

# Use WebKit
webkit_config = BrowserConfig(browser_type="webkit", verbose=True, headless=True)
async with AsyncWebCrawler(config=webkit_config) as crawler:
    result = await crawler.arun(config=CrawlerRunConfig(url="https://www.example.com"))

# Use Chromium (default)
chromium_config = BrowserConfig(verbose=True, headless=True)
async with AsyncWebCrawler(config=chromium_config) as crawler:
    result = await crawler.arun(config=CrawlerRunConfig(url="https://www.example.com"))
```

---

### User Simulation 🎭

Simulate real user behavior to bypass detection:

```python
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig

browser_config = BrowserConfig(verbose=True, headless=True)
crawl_config = CrawlerRunConfig(
    url="YOUR-URL-HERE",
    cache_mode=CacheMode.BYPASS,
    simulate_user=True,      # Random mouse movements and clicks
    override_navigator=True  # Makes the browser appear like a real user
)
async with AsyncWebCrawler(config=browser_config) as crawler:
    result = await crawler.arun(config=crawl_config)
```

---

### Understanding Parameters 🧠

Explore caching and forcing fresh crawls:

```python
async def main():
    browser_config = BrowserConfig(verbose=True)

    async with AsyncWebCrawler(config=browser_config) as crawler:
        # First crawl (uses cache)
        result1 = await crawler.arun(config=CrawlerRunConfig(url="https://www.nbcnews.com/business"))
        print(f"First crawl result: {result1.markdown[:100]}...")

        # Force fresh crawl
        result2 = await crawler.arun(
            config=CrawlerRunConfig(url="https://www.nbcnews.com/business", cache_mode=CacheMode.BYPASS)
        )
        print(f"Second crawl result: {result2.markdown[:100]}...")

if __name__ == "__main__":
    asyncio.run(main())
```

---

### Adding a Chunking Strategy 🧩

Split content into chunks using `RegexChunking`:

```python
from crawl4ai.chunking_strategy import RegexChunking

async def main():
    browser_config = BrowserConfig(verbose=True)
    crawl_config = CrawlerRunConfig(
        url="https://www.nbcnews.com/business",
        chunking_strategy=RegexChunking(patterns=["\n\n"])
    )
    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun(config=crawl_config)
        print(f"RegexChunking result: {result.extracted_content[:200]}...")

if __name__ == "__main__":
    asyncio.run(main())
```
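
Conceptually, pattern-based chunking just splits the extracted text on the configured patterns. A dependency-free sketch (not the library's actual `RegexChunking` code, which may normalize or post-process chunks differently) looks like this:

```python
import re

# Conceptual sketch of pattern-based chunking: split on any of the given
# patterns and drop empty chunks. RegexChunking's actual behavior may differ.
def regex_chunk(text, patterns=("\n\n",)):
    combined = "|".join(re.escape(p) for p in patterns)
    return [chunk for chunk in re.split(combined, text) if chunk.strip()]
```
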

---

### Advanced Features and Configurations

For advanced examples (LLM strategies, knowledge graphs, pagination handling), ensure all code aligns with the `BrowserConfig` and `CrawlerRunConfig` pattern shown above.
@@ -34,9 +34,9 @@ sequenceDiagram

**Benefits for Developers and Users**

1. **Fine-Grained Control**: Instead of predefining all logic upfront, you can dynamically guide the crawler in response to actual data and conditions encountered mid-crawl.
2. **Real-Time Insights**: Monitor progress, errors, or network bottlenecks as they happen, without waiting for the entire crawl to finish.
3. **Enhanced Collaboration**: Different team members or automated systems can watch the same crawl events and provide input, making the crawling process more adaptive and intelligent.

**Next Steps**

@@ -72,9 +72,9 @@ Two big upgrades here:

### 🔠 **Use Cases You’ll Love**

1. **Authenticated Crawls**: Login once, export your storage state, and reuse it across multiple requests without the headache.
2. **Long-page Screenshots**: Perfect for blogs, e-commerce pages, or any endless-scroll website.
3. **PDF Export**: Create professional-looking page PDFs in seconds.

---

248
docs/md_v2/core/browser-crawler-config.md
Normal file
@@ -0,0 +1,248 @@

# Browser & Crawler Configuration (Quick Overview)

Crawl4AI’s flexibility stems from two key classes:

1. **`BrowserConfig`** – Dictates **how** the browser is launched and behaves (e.g., headless or visible, proxy, user agent).
2. **`CrawlerRunConfig`** – Dictates **how** each **crawl** operates (e.g., caching, extraction, timeouts, JavaScript code to run, etc.).

In most examples, you create **one** `BrowserConfig` for the entire crawler session, then pass a **fresh** or re-used `CrawlerRunConfig` whenever you call `arun()`. This tutorial shows the most commonly used parameters. If you need advanced or rarely used fields, see the [Configuration Parameters](../api/parameters.md).

---

## 1. BrowserConfig Essentials

```python
class BrowserConfig:
    def __init__(
        self,
        browser_type="chromium",
        headless=True,
        proxy_config=None,
        viewport_width=1080,
        viewport_height=600,
        verbose=True,
        use_persistent_context=False,
        user_data_dir=None,
        cookies=None,
        headers=None,
        user_agent=None,
        text_mode=False,
        light_mode=False,
        extra_args=None,
        # ... other advanced parameters omitted here
    ):
        ...
```

### Key Fields to Note

1. **`browser_type`**
   - Options: `"chromium"`, `"firefox"`, or `"webkit"`.
   - Defaults to `"chromium"`.
   - If you need a different engine, specify it here.

2. **`headless`**
   - `True`: Runs the browser in headless mode (invisible browser).
   - `False`: Runs the browser in visible mode, which helps with debugging.

3. **`proxy_config`**
   - A dictionary with fields like:
     ```json
     {
         "server": "http://proxy.example.com:8080",
         "username": "...",
         "password": "..."
     }
     ```
   - Leave as `None` if a proxy is not required.

4. **`viewport_width` & `viewport_height`**:
   - The initial window size.
   - Some sites behave differently with smaller or bigger viewports.

5. **`verbose`**:
   - If `True`, prints extra logs.
   - Handy for debugging.

6. **`use_persistent_context`**:
   - If `True`, uses a **persistent** browser profile, storing cookies/local storage across runs.
   - Typically also set `user_data_dir` to point to a folder.

7. **`cookies`** & **`headers`**:
   - If you want to start with specific cookies or add universal HTTP headers, set them here.
   - E.g. `cookies=[{"name": "session", "value": "abc123", "domain": "example.com"}]`.

8. **`user_agent`**:
   - Custom User-Agent string. If `None`, a default is used.
   - You can also set `user_agent_mode="random"` for randomization (if you want to fight bot detection).

9. **`text_mode`** & **`light_mode`**:
   - `text_mode=True` disables images, possibly speeding up text-only crawls.
   - `light_mode=True` turns off certain background features for performance.

10. **`extra_args`**:
    - Additional flags for the underlying browser.
    - E.g. `["--disable-extensions"]`.

**Minimal Example**:

```python
from crawl4ai import AsyncWebCrawler, BrowserConfig

browser_conf = BrowserConfig(
    browser_type="firefox",
    headless=False,
    text_mode=True
)

async with AsyncWebCrawler(config=browser_conf) as crawler:
    result = await crawler.arun("https://example.com")
    print(result.markdown[:300])
```

---

## 2. CrawlerRunConfig Essentials

```python
class CrawlerRunConfig:
    def __init__(
        self,
        word_count_threshold=200,
        extraction_strategy=None,
        markdown_generator=None,
        cache_mode=None,
        js_code=None,
        wait_for=None,
        screenshot=False,
        pdf=False,
        verbose=True,
        # ... other advanced parameters omitted
    ):
        ...
```

### Key Fields to Note

1. **`word_count_threshold`**:
   - The minimum word count before a block is considered.
   - If your site has lots of short paragraphs or items, you can lower it.

2. **`extraction_strategy`**:
   - Where you plug in JSON-based extraction (CSS, LLM, etc.).
   - If `None`, no structured extraction is done (only raw/cleaned HTML + markdown).

3. **`markdown_generator`**:
   - E.g., `DefaultMarkdownGenerator(...)`, controlling how HTML→Markdown conversion is done.
   - If `None`, a default approach is used.

4. **`cache_mode`**:
   - Controls caching behavior (`ENABLED`, `BYPASS`, `DISABLED`, etc.).
   - If `None`, a default level of caching applies; specify `CacheMode.ENABLED` explicitly if you need it.

5. **`js_code`**:
   - A string or list of JS strings to execute.
   - Great for “Load More” buttons or user interactions.

6. **`wait_for`**:
   - A CSS or JS expression to wait for before extracting content.
   - Common usage: `wait_for="css:.main-loaded"` or `wait_for="js:() => window.loaded === true"`.

7. **`screenshot`** & **`pdf`**:
   - If `True`, captures a screenshot or PDF after the page is fully loaded.
   - The results go to `result.screenshot` (base64) or `result.pdf` (bytes).

8. **`verbose`**:
   - Logs additional runtime details.
   - Overlaps with the browser’s verbosity if also set to `True` in `BrowserConfig`.

**Minimal Example**:

```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

crawl_conf = CrawlerRunConfig(
    js_code="document.querySelector('button#loadMore')?.click()",
    wait_for="css:.loaded-content",
    screenshot=True
)

async with AsyncWebCrawler() as crawler:
    result = await crawler.arun(url="https://example.com", config=crawl_conf)
    print(result.screenshot[:100])  # Base64-encoded PNG snippet
```

---

## 3. Putting It All Together

In a typical scenario, you define **one** `BrowserConfig` for your crawler session, then create **one or more** `CrawlerRunConfig` depending on each call’s needs:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def main():
    # 1) Browser config: headless, bigger viewport, no proxy
    browser_conf = BrowserConfig(
        headless=True,
        viewport_width=1280,
        viewport_height=720
    )

    # 2) Example extraction strategy
    schema = {
        "name": "Articles",
        "baseSelector": "div.article",
        "fields": [
            {"name": "title", "selector": "h2", "type": "text"},
            {"name": "link", "selector": "a", "type": "attribute", "attribute": "href"}
        ]
    }
    extraction = JsonCssExtractionStrategy(schema)

    # 3) Crawler run config: skip cache, use extraction
    run_conf = CrawlerRunConfig(
        extraction_strategy=extraction,
        cache_mode=CacheMode.BYPASS
    )

    async with AsyncWebCrawler(config=browser_conf) as crawler:
        # 4) Execute the crawl
        result = await crawler.arun(url="https://example.com/news", config=run_conf)

        if result.success:
            print("Extracted content:", result.extracted_content)
        else:
            print("Error:", result.error_message)

if __name__ == "__main__":
    asyncio.run(main())
```
|
||||
---

## 4. Next Steps

For a **detailed list** of available parameters (including advanced ones), see:

- [BrowserConfig and CrawlerRunConfig Reference](../api/parameters.md)

You can explore topics like:

- **Custom Hooks & Auth** (Inject JavaScript or handle login forms).
- **Session Management** (Re-use pages, preserve state across multiple calls).
- **Magic Mode** or **Identity-based Crawling** (Fight bot detection by simulating user behavior).
- **Advanced Caching** (Fine-tune read/write cache modes).

---

## 5. Conclusion

**BrowserConfig** and **CrawlerRunConfig** give you straightforward ways to define:

- **Which** browser to launch, how it should run, and any proxy or user agent needs.
- **How** each crawl should behave—caching, timeouts, JavaScript code, extraction strategies, etc.

Use them together for **clear, maintainable** code, and when you need more specialized behavior, check out the advanced parameters in the [reference docs](../api/parameters.md). Happy crawling!
@@ -49,7 +49,8 @@ from crawl4ai import AsyncWebCrawler, CacheMode
from crawl4ai.async_configs import CrawlerRunConfig

async def use_proxy():
    # Use CacheMode in CrawlerRunConfig
    config = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)
    async with AsyncWebCrawler(verbose=True) as crawler:
        result = await crawler.arun(
            url="https://www.nbcnews.com/business",
@@ -72,10 +73,3 @@ if __name__ == "__main__":

| `disable_cache=True` | `cache_mode=CacheMode.DISABLED` |
| `no_cache_read=True` | `cache_mode=CacheMode.WRITE_ONLY` |
| `no_cache_write=True` | `cache_mode=CacheMode.READ_ONLY` |
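The mapping above is mechanical. As a library-independent illustration (the helper below is hypothetical, not part of Crawl4AI's API), the deprecated boolean flags translate to cache mode names like this:

```python
def legacy_flags_to_cache_mode(bypass_cache=False, disable_cache=False,
                               no_cache_read=False, no_cache_write=False):
    """Translate deprecated boolean flags to the CacheMode name they map to."""
    if disable_cache:
        return "DISABLED"
    if bypass_cache:
        return "BYPASS"
    if no_cache_read:
        return "WRITE_ONLY"   # skip reading the cache, but still write to it
    if no_cache_write:
        return "READ_ONLY"    # read from the cache, never write
    return "ENABLED"

print(legacy_flags_to_cache_mode(disable_cache=True))  # DISABLED
```

In real code you would pass the corresponding `CacheMode` member to `CrawlerRunConfig(cache_mode=...)` rather than a string.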

## Suppressing Deprecation Warnings

If you need time to migrate, you can temporarily suppress deprecation warnings:

```python
# In your config.py
SHOW_DEPRECATION_WARNINGS = False
```
332	docs/md_v2/core/content-selection.md	Normal file
@@ -0,0 +1,332 @@
# Content Selection

Crawl4AI provides multiple ways to **select**, **filter**, and **refine** the content from your crawls. Whether you need to target a specific CSS region, exclude entire tags, filter out external links, or remove certain domains and images, **`CrawlerRunConfig`** offers a wide range of parameters.

Below, we show how to configure these parameters and combine them for precise control.

---

## 1. CSS-Based Selection

A straightforward way to **limit** your crawl results to a certain region of the page is **`css_selector`** in **`CrawlerRunConfig`**:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(
        # e.g., first 30 items from Hacker News
        css_selector=".athing:nth-child(-n+30)"
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://news.ycombinator.com/newest",
            config=config
        )
        print("Partial HTML length:", len(result.cleaned_html))

if __name__ == "__main__":
    asyncio.run(main())
```
**Result**: Only elements matching that selector remain in `result.cleaned_html`.

---

## 2. Content Filtering & Exclusions

### 2.1 Basic Overview

```python
config = CrawlerRunConfig(
    # Content thresholds
    word_count_threshold=10,   # Minimum words per block

    # Tag exclusions
    excluded_tags=['form', 'header', 'footer', 'nav'],

    # Link filtering
    exclude_external_links=True,
    exclude_social_media_links=True,
    # Block entire domains
    exclude_domains=["adtrackers.com", "spammynews.org"],
    exclude_social_media_domains=["facebook.com", "twitter.com"],

    # Media filtering
    exclude_external_images=True
)
```
**Explanation**:

- **`word_count_threshold`**: Ignores text blocks under X words. Helps skip trivial blocks like short nav or disclaimers.
- **`excluded_tags`**: Removes entire tags (`<form>`, `<header>`, `<footer>`, etc.).
- **Link Filtering**:
  - `exclude_external_links`: Strips out external links and may remove them from `result.links`.
  - `exclude_social_media_links`: Removes links pointing to known social media domains.
  - `exclude_domains`: A custom list of domains to block if discovered in links.
  - `exclude_social_media_domains`: A curated list (override or add to it) for social media sites.
- **Media Filtering**:
  - `exclude_external_images`: Discards images not hosted on the same domain as the main page (or its subdomains).
By default, when you set `exclude_social_media_links=True`, the following social media domains are excluded:

```python
[
    'facebook.com',
    'twitter.com',
    'x.com',
    'linkedin.com',
    'instagram.com',
    'pinterest.com',
    'tiktok.com',
    'snapchat.com',
    'reddit.com',
]
```
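Conceptually, this kind of domain exclusion is hostname suffix matching, so subdomains are caught as well. A minimal, library-independent sketch of the idea (this is an illustration of how such filtering typically works, not Crawl4AI's exact implementation):

```python
from urllib.parse import urlparse

EXCLUDED = {"facebook.com", "twitter.com", "x.com", "linkedin.com",
            "instagram.com", "pinterest.com", "tiktok.com",
            "snapchat.com", "reddit.com"}

def is_excluded(url: str) -> bool:
    """True if the URL's host is an excluded domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in EXCLUDED)

print(is_excluded("https://www.facebook.com/some-page"))  # True
print(is_excluded("https://example.com/about"))           # False
```

Note that a plain substring check would wrongly match hosts like `notfacebook.com`; the suffix test avoids that.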

### 2.2 Example Usage

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode

async def main():
    config = CrawlerRunConfig(
        css_selector="main.content",
        word_count_threshold=10,
        excluded_tags=["nav", "footer"],
        exclude_external_links=True,
        exclude_social_media_links=True,
        exclude_domains=["ads.com", "spammytrackers.net"],
        exclude_external_images=True,
        cache_mode=CacheMode.BYPASS
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://news.ycombinator.com", config=config)
        print("Cleaned HTML length:", len(result.cleaned_html))

if __name__ == "__main__":
    asyncio.run(main())
```

**Note**: If these parameters remove too much, reduce or disable them accordingly.
---

## 3. Handling Iframes

Some sites embed content in `<iframe>` tags. If you want that inline:

```python
config = CrawlerRunConfig(
    # Merge iframe content into the final output
    process_iframes=True,
    remove_overlay_elements=True
)
```

**Usage**:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(
        process_iframes=True,
        remove_overlay_elements=True
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://example.org/iframe-demo",
            config=config
        )
        print("Iframe-merged length:", len(result.cleaned_html))

if __name__ == "__main__":
    asyncio.run(main())
```
---

## 4. Structured Extraction Examples

You can combine content selection with a more advanced extraction strategy. For instance, a **CSS-based** or **LLM-based** extraction strategy can run on the filtered HTML.

### 4.1 Pattern-Based with `JsonCssExtractionStrategy`

```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def main():
    # Minimal schema for repeated items
    schema = {
        "name": "News Items",
        "baseSelector": "tr.athing",
        "fields": [
            {"name": "title", "selector": "a.storylink", "type": "text"},
            {
                "name": "link",
                "selector": "a.storylink",
                "type": "attribute",
                "attribute": "href"
            }
        ]
    }

    config = CrawlerRunConfig(
        # Content filtering
        excluded_tags=["form", "header"],
        exclude_domains=["adsite.com"],

        # CSS selection or entire page
        css_selector="table.itemlist",

        # No caching for demonstration
        cache_mode=CacheMode.BYPASS,

        # Extraction strategy
        extraction_strategy=JsonCssExtractionStrategy(schema)
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://news.ycombinator.com/newest",
            config=config
        )
        data = json.loads(result.extracted_content)
        print("Sample extracted item:", data[:1])  # Show first item

if __name__ == "__main__":
    asyncio.run(main())
```
### 4.2 LLM-Based Extraction

```python
import asyncio
import json
from pydantic import BaseModel, Field
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.extraction_strategy import LLMExtractionStrategy

class ArticleData(BaseModel):
    headline: str
    summary: str

async def main():
    llm_strategy = LLMExtractionStrategy(
        provider="openai/gpt-4",
        api_token="sk-YOUR_API_KEY",
        schema=ArticleData.schema(),
        extraction_type="schema",
        instruction="Extract 'headline' and a short 'summary' from the content."
    )

    config = CrawlerRunConfig(
        exclude_external_links=True,
        word_count_threshold=20,
        extraction_strategy=llm_strategy
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://news.ycombinator.com", config=config)
        article = json.loads(result.extracted_content)
        print(article)

if __name__ == "__main__":
    asyncio.run(main())
```

Here, the crawler:

- Filters out external links (`exclude_external_links=True`).
- Ignores very short text blocks (`word_count_threshold=20`).
- Passes the final HTML to your LLM strategy for an AI-driven parse.
---

## 5. Comprehensive Example

Below is a short function that unifies **CSS selection**, **exclusion** logic, and a pattern-based extraction, demonstrating how you can fine-tune your final data:

```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def extract_main_articles(url: str):
    schema = {
        "name": "ArticleBlock",
        "baseSelector": "div.article-block",
        "fields": [
            {"name": "headline", "selector": "h2", "type": "text"},
            {"name": "summary", "selector": ".summary", "type": "text"},
            {
                "name": "metadata",
                "type": "nested",
                "fields": [
                    {"name": "author", "selector": ".author", "type": "text"},
                    {"name": "date", "selector": ".date", "type": "text"}
                ]
            }
        ]
    }

    config = CrawlerRunConfig(
        # Keep only #main-content
        css_selector="#main-content",

        # Filtering
        word_count_threshold=10,
        excluded_tags=["nav", "footer"],
        exclude_external_links=True,
        exclude_domains=["somebadsite.com"],
        exclude_external_images=True,

        # Extraction
        extraction_strategy=JsonCssExtractionStrategy(schema),

        cache_mode=CacheMode.BYPASS
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url=url, config=config)
        if not result.success:
            print(f"Error: {result.error_message}")
            return None
        return json.loads(result.extracted_content)

async def main():
    articles = await extract_main_articles("https://news.ycombinator.com/newest")
    if articles:
        print("Extracted Articles:", articles[:2])  # Show first 2

if __name__ == "__main__":
    asyncio.run(main())
```
**Why This Works**:

- **CSS** scoping with `#main-content`.
- Multiple **exclude_** parameters to remove domains, external images, etc.
- A **JsonCssExtractionStrategy** to parse repeated article blocks.

---

## 6. Conclusion

By mixing **css_selector** scoping, **content filtering** parameters, and advanced **extraction strategies**, you can precisely **choose** which data to keep. Key parameters in **`CrawlerRunConfig`** for content selection include:

1. **`css_selector`** – Basic scoping to an element or region.
2. **`word_count_threshold`** – Skip short blocks.
3. **`excluded_tags`** – Remove entire HTML tags.
4. **`exclude_external_links`**, **`exclude_social_media_links`**, **`exclude_domains`** – Filter out unwanted links or domains.
5. **`exclude_external_images`** – Remove images from external sources.
6. **`process_iframes`** – Merge iframe content if needed.

Combine these with structured extraction (CSS, LLM-based, or others) to build powerful crawls that yield exactly the content you want, from raw or cleaned HTML up to sophisticated JSON structures. For more detail, see [Configuration Reference](../api/parameters.md). Enjoy curating your data to the max!
246	docs/md_v2/core/crawler-result.md	Normal file
@@ -0,0 +1,246 @@
# Crawl Result and Output

When you call `arun()` on a page, Crawl4AI returns a **`CrawlResult`** object containing everything you might need—raw HTML, a cleaned version, optional screenshots or PDFs, structured extraction results, and more. This document explains those fields and how they map to different output types.

---

## 1. The `CrawlResult` Model

Below is the core schema. Each field captures a different aspect of the crawl’s result:

```python
class MarkdownGenerationResult(BaseModel):
    raw_markdown: str
    markdown_with_citations: str
    references_markdown: str
    fit_markdown: Optional[str] = None
    fit_html: Optional[str] = None

class CrawlResult(BaseModel):
    url: str
    html: str
    success: bool
    cleaned_html: Optional[str] = None
    media: Dict[str, List[Dict]] = {}
    links: Dict[str, List[Dict]] = {}
    downloaded_files: Optional[List[str]] = None
    screenshot: Optional[str] = None
    pdf: Optional[bytes] = None
    markdown: Optional[Union[str, MarkdownGenerationResult]] = None
    markdown_v2: Optional[MarkdownGenerationResult] = None
    extracted_content: Optional[str] = None
    metadata: Optional[dict] = None
    error_message: Optional[str] = None
    session_id: Optional[str] = None
    response_headers: Optional[dict] = None
    status_code: Optional[int] = None
    ssl_certificate: Optional[SSLCertificate] = None

    class Config:
        arbitrary_types_allowed = True
```
### Table: Key Fields in `CrawlResult`

| Field (Name & Type) | Description |
|-------------------------------------------|-----------------------------------------------------------------------------------------------------|
| **url (`str`)** | The final or actual URL crawled (in case of redirects). |
| **html (`str`)** | Original, unmodified page HTML. Good for debugging or custom processing. |
| **success (`bool`)** | `True` if the crawl completed without major errors, else `False`. |
| **cleaned_html (`Optional[str]`)** | Sanitized HTML with scripts/styles removed; can exclude tags if configured via `excluded_tags` etc. |
| **media (`Dict[str, List[Dict]]`)** | Extracted media info (images, audio, etc.), each with attributes like `src`, `alt`, `score`, etc. |
| **links (`Dict[str, List[Dict]]`)** | Extracted link data, split by `internal` and `external`. Each link usually has `href`, `text`, etc. |
| **downloaded_files (`Optional[List[str]]`)** | If `accept_downloads=True` in `BrowserConfig`, this lists the filepaths of saved downloads. |
| **screenshot (`Optional[str]`)** | Screenshot of the page (base64-encoded) if `screenshot=True`. |
| **pdf (`Optional[bytes]`)** | PDF of the page if `pdf=True`. |
| **markdown (`Optional[str or MarkdownGenerationResult]`)** | For now, `markdown_v2` holds a `MarkdownGenerationResult`. Over time, this will be consolidated into `markdown`. The generator can provide raw markdown, citations, references, and optionally `fit_markdown`. |
| **markdown_v2 (`Optional[MarkdownGenerationResult]`)** | Legacy field for detailed markdown output. This will be replaced by `markdown` soon. |
| **extracted_content (`Optional[str]`)** | The output of a structured extraction (CSS/LLM-based) stored as JSON string or other text. |
| **metadata (`Optional[dict]`)** | Additional info about the crawl or extracted data. |
| **error_message (`Optional[str]`)** | If `success=False`, contains a short description of what went wrong. |
| **session_id (`Optional[str]`)** | The ID of the session used for multi-page or persistent crawling. |
| **response_headers (`Optional[dict]`)** | HTTP response headers, if captured. |
| **status_code (`Optional[int]`)** | HTTP status code (e.g., 200 for OK). |
| **ssl_certificate (`Optional[SSLCertificate]`)** | SSL certificate info if `fetch_ssl_certificate=True`. |
---

## 2. HTML Variants

### `html`: Raw HTML

Crawl4AI preserves the exact HTML as `result.html`. Useful for:

- Debugging page issues or checking the original content.
- Performing your own specialized parse if needed.
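As one illustration of a specialized parse (library-independent; the sample HTML below stands in for a real `result.html`), Python's stdlib `html.parser` can pull out whatever the built-in cleaners don't target:

```python
from html.parser import HTMLParser

class TitleCollector(HTMLParser):
    """Collect the text of every <h2> tag from raw HTML."""
    def __init__(self):
        super().__init__()
        self.in_h2 = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_h2 = False

    def handle_data(self, data):
        if self.in_h2 and data.strip():
            self.titles.append(data.strip())

raw_html = "<div><h2>First</h2><p>body</p><h2>Second</h2></div>"  # stand-in for result.html
parser = TitleCollector()
parser.feed(raw_html)
print(parser.titles)  # ['First', 'Second']
```

For anything more involved, a dedicated parser such as BeautifulSoup or lxml is usually more convenient.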

### `cleaned_html`: Sanitized

If you specify any cleanup or exclusion parameters in `CrawlerRunConfig` (like `excluded_tags`, `remove_forms`, etc.), you’ll see the result here:

```python
config = CrawlerRunConfig(
    excluded_tags=["form", "header", "footer"],
    keep_data_attributes=False
)
result = await crawler.arun("https://example.com", config=config)
print(result.cleaned_html)  # Freed of forms, header, footer, data-* attributes
```
---

## 3. Markdown Generation

### 3.1 `markdown_v2` (Legacy) vs `markdown`

- **`markdown_v2`**: The current location for detailed markdown output, returning a **`MarkdownGenerationResult`** object.
- **`markdown`**: Eventually, we’re merging these fields. For now, you might see `result.markdown_v2` used widely in code examples.

**`MarkdownGenerationResult`** Fields:

| Field | Description |
|-------------------------|--------------------------------------------------------------------------------|
| **raw_markdown** | The basic HTML→Markdown conversion. |
| **markdown_with_citations** | Markdown including inline citations that reference links at the end. |
| **references_markdown** | The references/citations themselves (if `citations=True`). |
| **fit_markdown** | The filtered/“fit” markdown if a content filter was used. |
| **fit_html** | The filtered HTML that generated `fit_markdown`. |
### 3.2 Basic Example with a Markdown Generator

```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

config = CrawlerRunConfig(
    markdown_generator=DefaultMarkdownGenerator(
        options={"citations": True, "body_width": 80}  # e.g. pass html2text style options
    )
)
result = await crawler.arun(url="https://example.com", config=config)

md_res = result.markdown_v2  # or eventually 'result.markdown'
print(md_res.raw_markdown[:500])
print(md_res.markdown_with_citations)
print(md_res.references_markdown)
```

**Note**: If you use a filter like `PruningContentFilter`, you’ll get `fit_markdown` and `fit_html` as well.
---

## 4. Structured Extraction: `extracted_content`

If you run a JSON-based extraction strategy (CSS, XPath, LLM, etc.), the structured data is **not** stored in `markdown`—it’s placed in **`result.extracted_content`** as a JSON string (or sometimes plain text).

### Example: CSS Extraction with `raw://` HTML

```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def main():
    schema = {
        "name": "Example Items",
        "baseSelector": "div.item",
        "fields": [
            {"name": "title", "selector": "h2", "type": "text"},
            {"name": "link", "selector": "a", "type": "attribute", "attribute": "href"}
        ]
    }
    raw_html = "<div class='item'><h2>Item 1</h2><a href='https://example.com/item1'>Link 1</a></div>"

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="raw://" + raw_html,
            config=CrawlerRunConfig(
                cache_mode=CacheMode.BYPASS,
                extraction_strategy=JsonCssExtractionStrategy(schema)
            )
        )
        data = json.loads(result.extracted_content)
        print(data)

if __name__ == "__main__":
    asyncio.run(main())
```
Here:

- `url="raw://..."` passes the HTML content directly, no network requests.
- The **CSS** extraction strategy populates `result.extracted_content` with the JSON array `[{"title": "...", "link": "..."}]`.

---

## 5. More Fields: Links, Media, and More

### 5.1 `links`

A dictionary, typically with `"internal"` and `"external"` lists. Each entry might have `href`, `text`, `title`, etc. This is automatically captured if you haven’t disabled link extraction.

```python
print(result.links["internal"][:3])  # Show first 3 internal links
```

### 5.2 `media`

Similarly, a dictionary with `"images"`, `"audio"`, `"video"`, etc. Each item could include `src`, `alt`, `score`, and more, if your crawler is set to gather them.

```python
images = result.media.get("images", [])
for img in images:
    print("Image URL:", img["src"], "Alt:", img.get("alt"))
```

### 5.3 `screenshot` and `pdf`

If you set `screenshot=True` or `pdf=True` in **`CrawlerRunConfig`**, then:

- `result.screenshot` contains a base64-encoded PNG string.
- `result.pdf` contains raw PDF bytes (you can write them to a file).

```python
with open("page.pdf", "wb") as f:
    f.write(result.pdf)
```
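Since the screenshot arrives base64-encoded, writing it to disk takes one extra decode step. A small helper sketch (the dummy bytes below stand in for a real `result.screenshot`):

```python
import base64

def save_screenshot(b64_png: str, path: str) -> int:
    """Decode a base64 PNG string and write it to disk; returns bytes written."""
    data = base64.b64decode(b64_png)
    with open(path, "wb") as f:
        f.write(data)
    return len(data)

# Stand-in for result.screenshot (a real one decodes to a full PNG image)
fake_screenshot = base64.b64encode(b"\x89PNG\r\n\x1a\nfake").decode()
print(save_screenshot(fake_screenshot, "page.png"))
```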

### 5.4 `ssl_certificate`

If `fetch_ssl_certificate=True`, `result.ssl_certificate` holds details about the site’s SSL cert, such as issuer, validity dates, etc.

---

## 6. Accessing These Fields

After you run:

```python
result = await crawler.arun(url="https://example.com", config=some_config)
```

Check any field:

```python
if result.success:
    print(result.status_code, result.response_headers)
    print("Links found:", len(result.links.get("internal", [])))
    if result.markdown_v2:
        print("Markdown snippet:", result.markdown_v2.raw_markdown[:200])
    if result.extracted_content:
        print("Structured JSON:", result.extracted_content)
else:
    print("Error:", result.error_message)
```

**Remember**: Use `result.markdown_v2` for now. It will eventually become `result.markdown`.

---

## 7. Next Steps

- **Markdown Generation**: Dive deeper into how to configure `DefaultMarkdownGenerator` and various filters.
- **Content Filtering**: Learn how to use `BM25ContentFilter` and `PruningContentFilter`.
- **Session & Hooks**: If you want to manipulate the page or preserve state across multiple `arun()` calls, see the hooking or session docs.
- **LLM Extraction**: For complex or unstructured content requiring AI-driven parsing, check the LLM-based strategies doc.

**Enjoy** exploring all that `CrawlResult` offers—whether you need raw HTML, sanitized output, markdown, or fully structured data, Crawl4AI has you covered!
@@ -512,7 +512,7 @@ request = {

### Complete Examples

1. **Advanced News Crawling**
```python
request = {
    "urls": "https://www.nbcnews.com/business",
@@ -529,7 +529,7 @@ request = {
}
```

2. **Anti-Detection Configuration**
```python
request = {
    "urls": "https://example.com",
@@ -545,7 +545,7 @@ request = {
}
```

3. **LLM Extraction with Custom Parameters**
```python
request = {
    "urls": "https://openai.com/pricing",
@@ -567,7 +567,7 @@ request = {
}
```

4. **Session-Based Dynamic Content**
```python
request = {
    "urls": "https://example.com",
@@ -584,7 +584,7 @@ request = {
}
```

5. **Screenshot with Custom Timing**
```python
request = {
    "urls": "https://example.com",
```
@@ -624,19 +624,19 @@ request = {

### Common Issues

1. **Connection Refused**
```
Error: Connection refused at localhost:11235
```
Solution: Ensure the container is running and ports are properly mapped.

2. **Resource Limits**
```
Error: No available slots
```
Solution: Increase MAX_CONCURRENT_TASKS or container resources.

3. **GPU Access**
```
Error: GPU not found
```
@@ -656,17 +656,17 @@ docker logs [container_id]

## Best Practices 🌟

1. **Resource Management**
   - Set appropriate memory and CPU limits
   - Monitor resource usage via health endpoint
   - Use basic version for simple crawling tasks

2. **Scaling**
   - Use multiple containers for high load
   - Implement proper load balancing
   - Monitor performance metrics

3. **Security**
   - Use environment variables for sensitive data
   - Implement proper network isolation
   - Regular security updates
248	docs/md_v2/core/fit-markdown.md	Normal file
@@ -0,0 +1,248 @@
# Fit Markdown with Pruning & BM25

**Fit Markdown** is a specialized **filtered** version of your page’s markdown, focusing on the most relevant content. By default, Crawl4AI converts the entire HTML into a broad **raw_markdown**. With fit markdown, we apply a **content filter** algorithm (e.g., **Pruning** or **BM25**) to remove or rank low-value sections—such as repetitive sidebars, shallow text blocks, or irrelevancies—leaving a concise textual “core.”

---

## 1. How “Fit Markdown” Works

### 1.1 The `content_filter`

In **`CrawlerRunConfig`**, you can specify a **`content_filter`** to shape how content is pruned or ranked before final markdown generation. A filter’s logic is applied **before** or **during** the HTML→Markdown process, producing:

- **`result.markdown_v2.raw_markdown`** (unfiltered)
- **`result.markdown_v2.fit_markdown`** (filtered or “fit” version)
- **`result.markdown_v2.fit_html`** (the corresponding HTML snippet that produced `fit_markdown`)

> **Note**: We’re currently storing the result in `markdown_v2`, but eventually we’ll unify it as `result.markdown`.

### 1.2 Common Filters

1. **PruningContentFilter** – Scores each node by text density, link density, and tag importance, discarding those below a threshold.
2. **BM25ContentFilter** – Focuses on textual relevance using BM25 ranking, especially useful if you have a specific user query (e.g., “machine learning” or “food nutrition”).

---

## 2. PruningContentFilter

**Pruning** discards less relevant nodes based on **text density, link density, and tag importance**. It’s a heuristic-based approach—if certain sections appear too “thin” or too “spammy,” they’re pruned.

### 2.1 Usage Example

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.content_filter_strategy import PruningContentFilter
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

async def main():
    # Step 1: Create a pruning filter
    prune_filter = PruningContentFilter(
        # Lower → more content retained, higher → more content pruned
        threshold=0.45,
        # "fixed" or "dynamic"
        threshold_type="dynamic",
        # Ignore nodes with <5 words
        min_word_threshold=5
    )

    # Step 2: Insert it into a Markdown Generator
    md_generator = DefaultMarkdownGenerator(content_filter=prune_filter)

    # Step 3: Pass it to CrawlerRunConfig
    config = CrawlerRunConfig(
        markdown_generator=md_generator
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://news.ycombinator.com",
            config=config
        )

        if result.success:
            # 'fit_markdown' is your pruned content, focusing on "denser" text
            print("Raw Markdown length:", len(result.markdown_v2.raw_markdown))
            print("Fit Markdown length:", len(result.markdown_v2.fit_markdown))
        else:
            print("Error:", result.error_message)

if __name__ == "__main__":
    asyncio.run(main())
```
### 2.2 Key Parameters

- **`min_word_threshold`** (int): If a block has fewer words than this, it’s pruned.
- **`threshold_type`** (str):
  - `"fixed"` → each node must exceed `threshold` (0–1).
  - `"dynamic"` → node scoring adjusts according to tag type, text/link density, etc.
- **`threshold`** (float, default ~0.48): The base or “anchor” cutoff.

**Algorithmic Factors**:

- **Text density** – Encourages blocks that have a higher ratio of text to overall content.
- **Link density** – Penalizes sections that are mostly links.
- **Tag importance** – e.g., an `<article>` or `<p>` might be more important than a `<div>`.
- **Structural context** – If a node is deeply nested or in a suspected sidebar, it might be deprioritized.
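
For intuition, the text and link density heuristics above can be sketched roughly like this (an illustrative approximation; the library’s actual scoring logic, tag weights, and dynamic thresholds are more nuanced):

```python
# Illustrative approximation of the pruning heuristics; the library's real
# scoring (tag weights, structural context, dynamic thresholds) differs.
def node_score(tag: str, text: str, link_text: str, html_len: int) -> float:
    words = text.split()
    if not words:
        return 0.0
    link_words = link_text.split()
    text_density = len(words) / max(html_len, 1)   # text vs. raw HTML size
    link_density = len(link_words) / len(words)    # fraction of words inside links
    tag_weight = {"article": 1.5, "p": 1.2, "div": 0.8}.get(tag, 1.0)
    return tag_weight * text_density * (1.0 - min(link_density, 1.0))

# A dense paragraph outscores a link-heavy nav block:
para = node_score("p", "A long paragraph with plenty of real content words.", "", 120)
nav = node_score("div", "Home About Contact", "Home About Contact", 120)
print(para > nav)  # → True
```

Blocks whose score falls below the configured `threshold` would be pruned.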

---

## 3. BM25ContentFilter

**BM25** is a classical text ranking algorithm often used in search engines. If you have a **user query** or rely on page metadata to derive a query, BM25 can identify which text chunks best match that query.

### 3.1 Usage Example

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.content_filter_strategy import BM25ContentFilter
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

async def main():
    # 1) A BM25 filter with a user query
    bm25_filter = BM25ContentFilter(
        user_query="startup fundraising tips",
        # Adjust for stricter or looser results
        bm25_threshold=1.2
    )

    # 2) Insert into a Markdown Generator
    md_generator = DefaultMarkdownGenerator(content_filter=bm25_filter)

    # 3) Pass to crawler config
    config = CrawlerRunConfig(
        markdown_generator=md_generator
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://news.ycombinator.com",
            config=config
        )
        if result.success:
            print("Fit Markdown (BM25 query-based):")
            print(result.markdown_v2.fit_markdown)
        else:
            print("Error:", result.error_message)

if __name__ == "__main__":
    asyncio.run(main())
```

### 3.2 Parameters

- **`user_query`** (str, optional): E.g. `"machine learning"`. If blank, the filter tries to glean a query from page metadata.
- **`bm25_threshold`** (float, default 1.0):
  - Higher → fewer chunks but more relevant.
  - Lower → more inclusive.

> In more advanced scenarios, you might see parameters like `use_stemming`, `case_sensitive`, or `priority_tags` to refine how text is tokenized or weighted.
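
For intuition, a standard BM25 score for a chunk against a query looks like the sketch below (the library’s tokenization, parameters, and thresholding may differ from this simplified version):

```python
import math

# Standard BM25 formula with common defaults (k1=1.5, b=0.75); shown only
# to illustrate how bm25_threshold relates to chunk relevance.
def bm25_score(query: str, chunk: str, all_chunks: list, k1=1.5, b=0.75) -> float:
    doc = chunk.lower().split()
    avg_len = sum(len(c.split()) for c in all_chunks) / len(all_chunks)
    score = 0.0
    for term in query.lower().split():
        tf = doc.count(term)
        df = sum(1 for c in all_chunks if term in c.lower().split())
        idf = math.log((len(all_chunks) - df + 0.5) / (df + 0.5) + 1)
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avg_len))
    return score

chunks = ["tips for startup fundraising rounds", "weather forecast for tomorrow"]
scores = [bm25_score("startup fundraising tips", c, chunks) for c in chunks]
print(scores[0] > scores[1])  # → True
```

Chunks scoring below `bm25_threshold` are excluded from the fit output.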

---

## 4. Accessing the “Fit” Output

After the crawl, your “fit” content is found in **`result.markdown_v2.fit_markdown`**. In future versions, it will be **`result.markdown.fit_markdown`**. Meanwhile:

```python
fit_md = result.markdown_v2.fit_markdown
fit_html = result.markdown_v2.fit_html
```

If the content filter is **BM25**, you might see additional logic or references in `fit_markdown` that highlight relevant segments. If it’s **Pruning**, the text is typically well-cleaned but not necessarily matched to a query.

---

## 5. Code Patterns Recap

### 5.1 Pruning

```python
prune_filter = PruningContentFilter(
    threshold=0.5,
    threshold_type="fixed",
    min_word_threshold=10
)
md_generator = DefaultMarkdownGenerator(content_filter=prune_filter)
config = CrawlerRunConfig(markdown_generator=md_generator)
# => result.markdown_v2.fit_markdown
```

### 5.2 BM25

```python
bm25_filter = BM25ContentFilter(
    user_query="health benefits fruit",
    bm25_threshold=1.2
)
md_generator = DefaultMarkdownGenerator(content_filter=bm25_filter)
config = CrawlerRunConfig(markdown_generator=md_generator)
# => result.markdown_v2.fit_markdown
```

---

## 6. Combining with “word_count_threshold” & Exclusions

Remember you can also specify:

```python
config = CrawlerRunConfig(
    word_count_threshold=10,
    excluded_tags=["nav", "footer", "header"],
    exclude_external_links=True,
    markdown_generator=DefaultMarkdownGenerator(
        content_filter=PruningContentFilter(threshold=0.5)
    )
)
```

Thus, **multi-level** filtering occurs:

1. The crawler’s `excluded_tags` are removed from the HTML first.
2. The content filter (Pruning, BM25, or custom) prunes or ranks the remaining text blocks.
3. The final “fit” content is generated in `result.markdown_v2.fit_markdown`.

---

## 7. Custom Filters

If you need a different approach (like a specialized ML model or site-specific heuristics), you can create a new class inheriting from `RelevantContentFilter` and implement `filter_content(html)`. Then inject it into your **markdown generator**:

```python
from bs4 import BeautifulSoup
from crawl4ai.content_filter_strategy import RelevantContentFilter

class MyCustomFilter(RelevantContentFilter):
    def filter_content(self, html, min_word_threshold=None):
        # Illustrative heuristic: keep only paragraphs with enough words.
        # Swap in your own model or site-specific logic as needed.
        soup = BeautifulSoup(html, "html.parser")
        min_words = min_word_threshold or 10
        return [
            str(p) for p in soup.find_all("p")
            if len(p.get_text().split()) >= min_words
        ]
```

**Steps**:

1. Subclass `RelevantContentFilter`.
2. Implement `filter_content(...)`.
3. Use it in your `DefaultMarkdownGenerator(content_filter=MyCustomFilter(...))`.

---

## 8. Final Thoughts

**Fit Markdown** is a crucial feature for:

- **Summaries**: Quickly get the important text from a cluttered page.
- **Search**: Combine with **BM25** to produce content relevant to a query.
- **AI Pipelines**: Filter out boilerplate so LLM-based extraction or summarization runs on denser text.

**Key Points**:

- **PruningContentFilter**: Great if you just want the “meatiest” text without a user query.
- **BM25ContentFilter**: Perfect for query-based extraction or searching.
- Combine with **`excluded_tags`, `exclude_external_links`, `word_count_threshold`** to refine your final “fit” text.
- Fit markdown ends up in **`result.markdown_v2.fit_markdown`**; eventually **`result.markdown.fit_markdown`** in future versions.

With these tools, you can **zero in** on the text that truly matters, ignoring spammy or boilerplate content, and produce a concise, relevant “fit markdown” for your AI or data pipelines. Happy pruning and searching!

**Last Updated**: 2025-01-01

docs/md_v2/core/installation.md
@@ -0,0 +1,129 @@

# Installation & Setup (2023 Edition)

## 1. Basic Installation

```bash
pip install crawl4ai
```

This installs the **core** Crawl4AI library along with essential dependencies. **No** advanced features (like transformers or PyTorch) are included yet.

## 2. Initial Setup & Diagnostics

### 2.1 Run the Setup Command

After installing, call:

```bash
crawl4ai-setup
```

**What does it do?**

- Installs or updates required Playwright browsers (Chromium, Firefox, etc.)
- Performs OS-level checks (e.g., missing libs on Linux)
- Confirms your environment is ready to crawl

### 2.2 Diagnostics

Optionally, you can run **diagnostics** to confirm everything is functioning:

```bash
crawl4ai-doctor
```

This command attempts to:

- Check Python version compatibility
- Verify Playwright installation
- Inspect environment variables or library conflicts

If any issues arise, follow its suggestions (e.g., installing additional system packages) and re-run `crawl4ai-setup`.

---

## 3. Verifying Installation: A Simple Crawl (Skip this step if you already ran `crawl4ai-doctor`)

Below is a minimal Python script demonstrating a **basic** crawl. It uses our new **`BrowserConfig`** and **`CrawlerRunConfig`** for clarity, though no custom settings are passed in this example:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://www.example.com",
        )
        print(result.markdown[:300])  # Show the first 300 characters of extracted text

if __name__ == "__main__":
    asyncio.run(main())
```

**Expected** outcome:

- A headless browser session loads `example.com`
- Crawl4AI returns ~300 characters of markdown.

If errors occur, rerun `crawl4ai-doctor` or manually ensure Playwright is installed correctly.

---

## 4. Advanced Installation (Optional)

**Warning**: Only install these **if you truly need them**. They bring in larger dependencies, including big models, which can increase disk usage and memory load significantly.

### 4.1 Torch, Transformers, or All

- **Text Clustering (Torch)**

  ```bash
  pip install crawl4ai[torch]
  crawl4ai-setup
  ```

  Installs PyTorch-based features (e.g., cosine similarity or advanced semantic chunking).

- **Transformers**

  ```bash
  pip install crawl4ai[transformer]
  crawl4ai-setup
  ```

  Adds Hugging Face-based summarization or generation strategies.

- **All Features**

  ```bash
  pip install crawl4ai[all]
  crawl4ai-setup
  ```

#### (Optional) Pre-Fetching Models

```bash
crawl4ai-download-models
```

This step caches large models locally (if needed). **Only do this** if your workflow requires them.

---

## 5. Docker (Experimental)

We provide a **temporary** Docker approach for testing. **It’s not stable and may break** with future releases. A major Docker revamp is planned for a stable release in 2025 Q1. If you still want to try:

```bash
docker pull unclecode/crawl4ai:basic
docker run -p 11235:11235 unclecode/crawl4ai:basic
```

You can then make POST requests to `http://localhost:11235/crawl` to perform crawls. **Production usage** is discouraged until our new Docker approach is ready (planned in Jan or Feb 2025).
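
A request might look like the sketch below. The `urls` field in the JSON body is an assumption for illustration; check your container version’s documentation for the exact request schema:

```python
import json

# Hypothetical helper that assembles the POST request for the experimental
# Docker API; the real payload schema may differ between image versions.
def build_crawl_request(urls, host="http://localhost:11235"):
    endpoint = f"{host}/crawl"
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"urls": urls})
    return endpoint, headers, body

endpoint, headers, body = build_crawl_request(["https://www.example.com"])
print(endpoint)  # → http://localhost:11235/crawl
print(body)      # → {"urls": ["https://www.example.com"]}
```

Send `body` with any HTTP client (e.g., `requests.post(endpoint, headers=headers, data=body)`).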

---

## 6. Local Server Mode (Legacy)

Some older docs mention running Crawl4AI as a local server. This approach has been **partially replaced** by the new Docker-based prototype and upcoming stable server release. You can experiment, but expect major changes. Official local server instructions will arrive once the new Docker architecture is finalized.

---

## Summary

1. **Install** with `pip install crawl4ai` and run `crawl4ai-setup`.
2. **Diagnose** with `crawl4ai-doctor` if you see errors.
3. **Verify** by crawling `example.com` with minimal `BrowserConfig` + `CrawlerRunConfig`.
4. **Advanced** features (Torch, Transformers) are **optional**—avoid them if you don’t need them (they significantly increase resource usage).
5. **Docker** is **experimental**—use at your own risk until the stable version is released.
6. **Local server** references in older docs are largely deprecated; a new solution is in progress.

**Got questions?** Check [GitHub issues](https://github.com/unclecode/crawl4ai/issues) for updates or ask the community!

@@ -1,8 +1,4 @@
Below is a **draft** of the **“Link & Media Analysis”** tutorial. It demonstrates how to access and filter links, handle domain restrictions, and manage media (especially images) using Crawl4AI’s configuration options. Feel free to adjust examples and text to match your exact workflow or preferences.

---

# Link & Media Analysis
# Link & Media

In this tutorial, you’ll learn how to:

@@ -12,7 +8,7 @@ In this tutorial, you’ll learn how to:
4. Configure your crawler to exclude or prioritize certain images

> **Prerequisites**
> - You have completed or are familiar with the [AsyncWebCrawler Basics](./async-webcrawler-basics.md) tutorial.
> - You have completed or are familiar with the [AsyncWebCrawler Basics](../core/simple-crawling.md) tutorial.
> - You can run Crawl4AI in your environment (Playwright, Python, etc.).

---

@@ -37,7 +33,9 @@ async with AsyncWebCrawler() as crawler:
    if result.success:
        internal_links = result.links.get("internal", [])
        external_links = result.links.get("external", [])
        print(f"Found {len(internal_links)} internal links, {len(external_links)} external links.")
        print(f"Found {len(internal_links)} internal links.")
        print(f"Found {len(external_links)} external links.")
        print(f"Found {len(result.media)} media items.")

        # Each link is typically a dictionary with fields like:
        # { "href": "...", "text": "...", "title": "...", "base_domain": "..." }
@@ -259,37 +257,20 @@ if __name__ == "__main__":

## 5. Common Pitfalls & Tips

1. **Conflicting Flags**:
   - `exclude_external_links=True` but then also specifying `exclude_social_media_links=True` is typically fine, but understand that the first setting already discards *all* external links. The second becomes somewhat redundant.
   - `exclude_external_images=True` but want to keep some external images? Currently no partial domain-based setting for images, so you might need a custom approach or hook logic.

2. **Relevancy Scores**:
   - If your version of Crawl4AI or your scraping strategy includes an `img["score"]`, it’s typically a heuristic based on size, position, or content analysis. Evaluate carefully if you rely on it.

3. **Performance**:
   - Excluding certain domains or external images can speed up your crawl, especially for large, media-heavy pages.
   - If you want a “full” link map, do *not* exclude them. Instead, you can post-filter in your own code.

4. **Social Media Lists**:
   - `exclude_social_media_links=True` typically references an internal list of known social domains like Facebook, Twitter, LinkedIn, etc. If you need to add or remove from that list, look for library settings or a local config file (depending on your version).

---

## 6. Next Steps

Now that you understand how to manage **Link & Media Analysis**, you can:

- Fine-tune which links are stored or discarded in your final results
- Control which images (or other media) appear in `result.media`
- Filter out entire domains or social media platforms to keep your dataset relevant

**Recommended Follow-Ups**:

- **[Advanced Features (Proxy, PDF, Screenshots)](./advanced-features.md)**: If you want to capture screenshots or save the page as a PDF for archival or debugging.
- **[Hooks & Custom Code](./hooks-custom.md)**: For more specialized logic, such as automated “infinite scroll” or repeated “Load More” button clicks.
- **Reference**: Check out [CrawlerRunConfig Reference](../../reference/configuration.md) for a comprehensive parameter list.

**Last updated**: 2024-XX-XX

---

**That’s it for Link & Media Analysis!** You’re now equipped to filter out unwanted sites and zero in on the images and videos that matter for your project.
@@ -14,7 +14,10 @@ from crawl4ai.async_configs import CrawlerRunConfig
async def crawl_web():
    config = CrawlerRunConfig(bypass_cache=True)
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://en.wikipedia.org/wiki/apple", config=config)
        result = await crawler.arun(
            url="https://en.wikipedia.org/wiki/apple",
            config=config
        )
        if result.success:
            print("Markdown Content:")
            print(result.markdown)
@@ -1,7 +1,3 @@
Below is a **draft** of the **Markdown Generation Basics** tutorial that incorporates your current Crawl4AI design and terminology. It introduces the default markdown generator, explains the concept of content filters (BM25 and Pruning), and covers the `MarkdownGenerationResult` object in a coherent, step-by-step manner. Adjust parameters or naming as needed to align with your actual codebase.

---

# Markdown Generation Basics

One of Crawl4AI’s core features is generating **clean, structured markdown** from web pages. Originally built to solve the problem of extracting only the “actual” content and discarding boilerplate or noise, Crawl4AI’s markdown system remains one of its biggest draws for AI workflows.

@@ -13,7 +9,7 @@ In this tutorial, you’ll learn:
3. The difference between raw markdown (`result.markdown`) and filtered markdown (`fit_markdown`)

> **Prerequisites**
> - You’ve completed or read [AsyncWebCrawler Basics](./async-webcrawler-basics.md) to understand how to run a simple crawl.
> - You’ve completed or read [AsyncWebCrawler Basics](../core/simple-crawling.md) to understand how to run a simple crawl.
> - You know how to configure `CrawlerRunConfig`.

---

@@ -45,7 +41,7 @@ if __name__ == "__main__":
```

**What’s happening?**
- `CrawlerRunConfig(markdown_generator=DefaultMarkdownGenerator())` instructs Crawl4AI to convert the final HTML into markdown at the end of each crawl.
- `CrawlerRunConfig( markdown_generator = DefaultMarkdownGenerator() )` instructs Crawl4AI to convert the final HTML into markdown at the end of each crawl.
- The resulting markdown is accessible via `result.markdown`.

---

@@ -166,8 +162,8 @@ prune_filter = PruningContentFilter(

- **`threshold`**: Score boundary. Blocks below this score get removed.
- **`threshold_type`**:
  - `"fixed"`: Straight comparison (`score >= threshold` keeps the block).
  - `"dynamic"`: The filter adjusts threshold in a data-driven manner.
- **`min_word_threshold`**: Discard blocks under N words as likely too short or unhelpful.

**When to Use PruningContentFilter**

@@ -180,11 +176,11 @@ prune_filter = PruningContentFilter(

When a content filter is active, the library produces two forms of markdown inside `result.markdown_v2` or (if using the simplified field) `result.markdown`:

1. **`raw_markdown`**: The full unfiltered markdown.
2. **`fit_markdown`**: A “fit” version where the filter has removed or trimmed noisy segments.

**Note**:
- In earlier examples, you may see references to `result.markdown_v2`. Depending on your library version, you might access `result.markdown`, `result.markdown_v2`, or an object named `MarkdownGenerationResult`. The idea is the same: you’ll have a raw version and a filtered (“fit”) version if a filter is used.
> In earlier examples, you may see references to `result.markdown_v2`. Depending on your library version, you might access `result.markdown`, `result.markdown_v2`, or an object named `MarkdownGenerationResult`. The idea is the same: you’ll have a raw version and a filtered (“fit”) version if a filter is used.

```python
import asyncio
@@ -251,8 +247,8 @@ Below is a **revised section** under “Combining Filters (BM25 + Pruning)” th

You might want to **prune out** noisy boilerplate first (with `PruningContentFilter`), and then **rank what’s left** against a user query (with `BM25ContentFilter`). You don’t have to crawl the page twice. Instead:

1. **First pass**: Apply `PruningContentFilter` directly to the raw HTML from `result.html` (the crawler’s downloaded HTML).
2. **Second pass**: Take the pruned HTML (or text) from step 1, and feed it into `BM25ContentFilter`, focusing on a user query.

### Two-Pass Example

@@ -296,7 +292,8 @@ async def main():
        language="english"
    )

    bm25_chunks = bm25_filter.filter_content(pruned_html)  # returns a list of text chunks
    # returns a list of text chunks
    bm25_chunks = bm25_filter.filter_content(pruned_html)

    if not bm25_chunks:
        print("Nothing matched the BM25 query after pruning.")
@@ -317,10 +314,10 @@ if __name__ == "__main__":

### What’s Happening?

1. **Raw HTML**: We crawl once and store the raw HTML in `result.html`.
2. **PruningContentFilter**: Takes HTML + optional parameters. It extracts blocks of text or partial HTML, removing headings/sections deemed “noise.” It returns a **list of text chunks**.
3. **Combine or Transform**: We join these pruned chunks back into a single HTML-like string. (Alternatively, you could store them in a list for further logic—whatever suits your pipeline.)
4. **BM25ContentFilter**: We feed the pruned string into `BM25ContentFilter` with a user query. This second pass further narrows the content to chunks relevant to “machine learning.”

**No Re-Crawling**: We used `raw_html` from the first pass, so there’s no need to run `arun()` again—**no second network request**.

@@ -340,19 +337,19 @@ If your codebase or pipeline design allows applying multiple filters in one pass

## 8. Common Pitfalls & Tips

1. **No Markdown Output?**
   - Make sure the crawler actually retrieved HTML. If the site is heavily JS-based, you may need to enable dynamic rendering or wait for elements.
   - Check if your content filter is too aggressive. Lower thresholds or disable the filter to see if content reappears.

2. **Performance Considerations**
   - Very large pages with multiple filters can be slower. Consider `cache_mode` to avoid re-downloading.
   - If your final use case is LLM ingestion, consider summarizing further or chunking big texts.

3. **Take Advantage of `fit_markdown`**
   - Great for RAG pipelines, semantic search, or any scenario where extraneous boilerplate is unwanted.
   - Still verify the textual quality—some sites have crucial data in footers or sidebars.

4. **Adjusting `html2text` Options**
   - If you see lots of raw HTML slipping into the text, turn on `escape_html`.
   - If code blocks look messy, experiment with `mark_code` or `handle_code_in_pre`.

@@ -367,16 +364,6 @@ In this **Markdown Generation Basics** tutorial, you learned to:
- Distinguish between raw and filtered markdown (`fit_markdown`).
- Leverage the `MarkdownGenerationResult` object to handle different forms of output (citations, references, etc.).

**Where to go from here**:

- **[Extracting JSON (No LLM)](./json-extraction-basic.md)**: If you need structured data instead of markdown, check out the library’s JSON extraction strategies.
- **[Advanced Features](./advanced-features.md)**: Combine markdown generation with proxies, PDF exports, and more.
- **[Explanations → Content Filters vs. Extraction Strategies](../../explanations/extraction-chunking.md)**: Dive deeper into how filters differ from chunking or semantic extraction.

Now you can produce high-quality Markdown from any website, focusing on exactly the content you need—an essential step for powering AI models, summarization pipelines, or knowledge-base queries.

**Last Updated**: 2024-XX-XX

---

That’s it for **Markdown Generation Basics**! Enjoy generating clean, noise-free markdown for your LLM workflows, content archives, or research.
**Last Updated**: 2025-01-01

docs/md_v2/core/page-interaction.md
@@ -0,0 +1,343 @@

# Page Interaction

Crawl4AI provides powerful features for interacting with **dynamic** webpages, handling JavaScript execution, waiting for conditions, and managing multi-step flows. By combining **js_code**, **wait_for**, and certain **CrawlerRunConfig** parameters, you can:

1. Click “Load More” buttons
2. Fill forms and submit them
3. Wait for elements or data to appear
4. Reuse sessions across multiple steps

Below is a quick overview of how to do it.

---

## 1. JavaScript Execution

### Basic Execution

**`js_code`** in **`CrawlerRunConfig`** accepts either a single JS string or a list of JS snippets.
**Example**: We’ll scroll to the bottom of the page, then optionally click a “Load More” button.

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    # Single JS command
    config = CrawlerRunConfig(
        js_code="window.scrollTo(0, document.body.scrollHeight);"
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://news.ycombinator.com",  # Example site
            config=config
        )
        print("Crawled length:", len(result.cleaned_html))

    # Multiple commands
    js_commands = [
        "window.scrollTo(0, document.body.scrollHeight);",
        # 'More' link on Hacker News
        "document.querySelector('a.morelink')?.click();",
    ]
    config = CrawlerRunConfig(js_code=js_commands)

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://news.ycombinator.com",  # Another pass
            config=config
        )
        print("After scroll+click, length:", len(result.cleaned_html))

if __name__ == "__main__":
    asyncio.run(main())
```

**Relevant `CrawlerRunConfig` params**:
- **`js_code`**: A string or list of strings with JavaScript to run after the page loads.
- **`js_only`**: If set to `True` on subsequent calls, indicates we’re continuing an existing session without a new full navigation.
- **`session_id`**: If you want to keep the same page across multiple calls, specify an ID.

---

## 2. Wait Conditions

### 2.1 CSS-Based Waiting

Sometimes, you just want to wait for a specific element to appear. For example:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(
        # Wait for at least 30 items on Hacker News
        wait_for="css:.athing:nth-child(30)"
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://news.ycombinator.com",
            config=config
        )
        print("We have at least 30 items loaded!")
        # Rough check
        print("Total items in HTML:", result.cleaned_html.count("athing"))

if __name__ == "__main__":
    asyncio.run(main())
```

**Key param**:
- **`wait_for="css:..."`**: Tells the crawler to wait until that CSS selector is present.

### 2.2 JavaScript-Based Waiting

For more complex conditions (e.g., waiting for content length to exceed a threshold), prefix `js:`:

```python
wait_condition = """() => {
    const items = document.querySelectorAll('.athing');
    return items.length > 50;  // Wait for at least 51 items
}"""

config = CrawlerRunConfig(wait_for=f"js:{wait_condition}")
```

**Behind the Scenes**: Crawl4AI keeps polling the JS function until it returns `true` or a timeout occurs.
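
Conceptually, the behavior is a poll-until-true loop like the sketch below (illustrative only; the real implementation evaluates your JS expression inside the browser page):

```python
import time

# Illustrative poll-until-true loop; not the actual implementation.
def wait_for_condition(check, timeout=30.0, interval=0.1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    raise TimeoutError("wait_for condition not met before timeout")

counter = {"n": 0}
def fake_condition():
    counter["n"] += 1
    return counter["n"] >= 3  # becomes true on the third poll

print(wait_for_condition(fake_condition, timeout=5.0, interval=0.01))  # → True
```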

---

## 3. Handling Dynamic Content

Many modern sites require **multiple steps**: scrolling, clicking “Load More,” or updating via JavaScript. Below are typical patterns.

### 3.1 Load More Example (Hacker News “More” Link)

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    # Step 1: Load initial Hacker News page
    config = CrawlerRunConfig(
        wait_for="css:.athing:nth-child(30)"  # Wait for 30 items
    )
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://news.ycombinator.com",
            config=config
        )
        print("Initial items loaded.")

        # Step 2: Let's scroll and click the "More" link
        load_more_js = [
            "window.scrollTo(0, document.body.scrollHeight);",
            # The "More" link at page bottom
            "document.querySelector('a.morelink')?.click();"
        ]

        next_page_conf = CrawlerRunConfig(
            js_code=load_more_js,
            wait_for="""js:() => {
                return document.querySelectorAll('.athing').length > 30;
            }""",
            # Mark that we do not re-navigate, but run JS in the same session:
            js_only=True,
            session_id="hn_session"
        )

        # Re-use the same crawler session
        result2 = await crawler.arun(
            url="https://news.ycombinator.com",  # same URL but continuing session
            config=next_page_conf
        )
        total_items = result2.cleaned_html.count("athing")
        print("Items after load-more:", total_items)

if __name__ == "__main__":
    asyncio.run(main())
```

**Key params**:
|
||||
- **`session_id="hn_session"`**: Keep the same page across multiple calls to `arun()`.
|
||||
- **`js_only=True`**: We’re not performing a full reload, just applying JS in the existing page.
|
||||
- **`wait_for`** with `js:`: Wait for item count to grow beyond 30.

---

### 3.2 Form Interaction

If the site has a search or login form, you can fill the fields and submit them with **`js_code`**. For instance, if GitHub had a local search form:

```python
js_form_interaction = """
document.querySelector('#your-search').value = 'TypeScript commits';
document.querySelector('form').submit();
"""

config = CrawlerRunConfig(
    js_code=js_form_interaction,
    wait_for="css:.commit"
)
result = await crawler.arun(url="https://github.com/search", config=config)
```

**In reality**: Replace the IDs and classes with the real site’s form selectors.

---

## 4. Timing Control

1. **`page_timeout`** (ms): Overall page-load or script-execution time limit.
2. **`delay_before_return_html`** (seconds): Wait an extra moment before capturing the final HTML.
3. **`mean_delay`** & **`max_range`**: If you call `arun_many()` with multiple URLs, these add a random pause between requests.

**Example**:

```python
config = CrawlerRunConfig(
    page_timeout=60000,  # 60s limit
    delay_before_return_html=2.5
)
```
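
To see why randomized pauses help, here is a small sketch of how a mean-plus-jitter delay could be computed. `politeness_delay` is a hypothetical helper for illustration, not part of Crawl4AI’s API, and the exact jitter formula Crawl4AI uses may differ:

```python
import random

def politeness_delay(mean_delay: float, max_range: float) -> float:
    # A base pause plus up to `max_range` seconds of random jitter, so
    # consecutive requests are not evenly spaced (which looks robotic).
    return mean_delay + random.uniform(0, max_range)

delays = [politeness_delay(0.5, 1.0) for _ in range(5)]
print(delays)  # each value falls between 0.5 and 1.5 seconds
```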

---

## 5. Multi-Step Interaction Example

Below is a simplified script that performs multiple “Load More” clicks on GitHub’s TypeScript commits page. It **re-uses** the same session to accumulate new commits each time, and it includes the relevant **`CrawlerRunConfig`** parameters you’d rely on.

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

async def multi_page_commits():
    browser_cfg = BrowserConfig(
        headless=False,  # Visible for demonstration
        verbose=True
    )
    session_id = "github_ts_commits"

    base_wait = """js:() => {
        const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
        return commits.length > 0;
    }"""

    # Step 1: Load initial commits
    config1 = CrawlerRunConfig(
        wait_for=base_wait,
        session_id=session_id,
        cache_mode=CacheMode.BYPASS,
        # Not using js_only yet since this is the first load
    )

    async with AsyncWebCrawler(config=browser_cfg) as crawler:
        result = await crawler.arun(
            url="https://github.com/microsoft/TypeScript/commits/main",
            config=config1
        )
        print("Initial commits loaded. Count:", result.cleaned_html.count("commit"))

        # Step 2: For subsequent pages, run JS to click 'Next Page' if it exists
        js_next_page = """
        const selector = 'a[data-testid="pagination-next-button"]';
        const button = document.querySelector(selector);
        if (button) button.click();
        """

        # Wait until new commits appear
        wait_for_more = """js:() => {
            const commits = document.querySelectorAll('li.Box-sc-g0xbh4-0 h4');
            if (!window.firstCommit && commits.length > 0) {
                window.firstCommit = commits[0].textContent;
                return false;
            }
            // If the top commit changes, we have new commits
            const topNow = commits[0]?.textContent.trim();
            return topNow && topNow !== window.firstCommit;
        }"""

        for page in range(2):  # two more "Next" pages
            config_next = CrawlerRunConfig(
                session_id=session_id,
                js_code=js_next_page,
                wait_for=wait_for_more,
                js_only=True,  # Continue from the open tab
                cache_mode=CacheMode.BYPASS
            )
            result2 = await crawler.arun(
                url="https://github.com/microsoft/TypeScript/commits/main",
                config=config_next
            )
            print(f"Page {page + 2} commits count:", result2.cleaned_html.count("commit"))

        # Optionally kill the session
        await crawler.crawler_strategy.kill_session(session_id)

async def main():
    await multi_page_commits()

if __name__ == "__main__":
    asyncio.run(main())
```

**Key Points**:

- **`session_id`**: Keeps the same page open.
- **`js_code`** + **`wait_for`** + **`js_only=True`**: Partial refreshes that wait for new commits to appear.
- **`cache_mode=CacheMode.BYPASS`**: Ensures we always see fresh data at each step.
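
The `wait_for_more` script detects progress by remembering the first commit title and firing once it changes. The same idea in plain Python (a sketch for illustration only; `top_item_changed` is a hypothetical name):

```python
def top_item_changed(remembered, items):
    # Returns (new_remembered, changed). On the first call we only record
    # the top item; afterwards we report True once the top item differs.
    if not items:
        return remembered, False
    top = items[0].strip()
    if remembered is None:
        return top, False
    return remembered, top != remembered

state, changed = top_item_changed(None, ["fix: parser bug"])
print(changed)  # first observation only records the top item
state, changed = top_item_changed(state, ["feat: new API"])
print(changed)  # top item changed, so new content has loaded
```

This trick is handy whenever a page replaces its list in place, since the element count alone may not change between pages.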

---

## 6. Combine Interaction with Extraction

Once dynamic content is loaded, you can attach an **`extraction_strategy`** (like `JsonCssExtractionStrategy` or `LLMExtractionStrategy`). For example:

```python
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

schema = {
    "name": "Commits",
    "baseSelector": "li.Box-sc-g0xbh4-0",
    "fields": [
        {"name": "title", "selector": "h4.markdown-title", "type": "text"}
    ]
}
config = CrawlerRunConfig(
    session_id="ts_commits_session",
    js_code=js_next_page,
    wait_for=wait_for_more,
    extraction_strategy=JsonCssExtractionStrategy(schema)
)
```

When done, check `result.extracted_content` for the JSON.

---

## 7. Relevant `CrawlerRunConfig` Parameters

Below are the key interaction-related parameters in `CrawlerRunConfig`. For a full list, see [Configuration Parameters](../api/parameters.md).

- **`js_code`**: JavaScript to run after the initial load.
- **`js_only`**: If `True`, no new page navigation—only JS in the existing session.
- **`wait_for`**: CSS (`"css:..."`) or JS (`"js:..."`) expression to wait for.
- **`session_id`**: Reuse the same page across calls.
- **`cache_mode`**: Whether to read/write from the cache or bypass it.
- **`remove_overlay_elements`**: Remove certain popups automatically.
- **`simulate_user`, `override_navigator`, `magic`**: Anti-bot or “human-like” interactions.
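
As a quick illustration, several of these flags can be combined in one run config. This is a sketch (values and combinations are illustrative; tune them for your target site):

```python
from crawl4ai import CrawlerRunConfig, CacheMode

# A run config combining the interaction-related flags listed above:
# keep one session alive, strip overlays, and enable "human-like" behavior.
config = CrawlerRunConfig(
    session_id="my_session",        # reuse the same page across calls
    remove_overlay_elements=True,   # auto-dismiss popups/overlays
    simulate_user=True,             # human-like interaction patterns
    override_navigator=True,        # mask common automation fingerprints
    cache_mode=CacheMode.BYPASS,    # always fetch fresh content
)
```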

---

## 8. Conclusion

Crawl4AI’s **page interaction** features let you:

1. **Execute JavaScript** for scrolling, clicks, or form filling.
2. **Wait** for CSS or custom JS conditions before capturing data.
3. **Handle** multi-step flows (like “Load More”) with partial reloads or persistent sessions.
4. Combine with **structured extraction** for dynamic sites.

With these tools, you can scrape modern, interactive webpages confidently. For advanced hooking, user simulation, or in-depth config, check the [API reference](../api/parameters.md) or related advanced docs. Happy scripting!

362
docs/md_v2/core/quickstart.md
Normal file
@@ -0,0 +1,362 @@

# Getting Started with Crawl4AI

Welcome to **Crawl4AI**, an open-source LLM-friendly Web Crawler & Scraper. In this tutorial, you’ll:

1. Run your **first crawl** using minimal configuration.
2. Generate **Markdown** output (and learn how it’s influenced by content filters).
3. Experiment with a simple **CSS-based extraction** strategy.
4. See a glimpse of **LLM-based extraction** (including open-source and closed-source model options).
5. Crawl a **dynamic** page that loads content via JavaScript.

---

## 1. Introduction

Crawl4AI provides:

- An asynchronous crawler, **`AsyncWebCrawler`**.
- Configurable browser and run settings via **`BrowserConfig`** and **`CrawlerRunConfig`**.
- Automatic HTML-to-Markdown conversion via **`DefaultMarkdownGenerator`** (supports optional filters).
- Multiple extraction strategies (LLM-based or “traditional” CSS/XPath-based).

By the end of this guide, you’ll have performed a basic crawl, generated Markdown, tried out two extraction strategies, and crawled a dynamic page that uses “Load More” buttons or JavaScript updates.

---

## 2. Your First Crawl

Here’s a minimal Python script that creates an **`AsyncWebCrawler`**, fetches a webpage, and prints the first 300 characters of its Markdown output:

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com")
        print(result.markdown[:300])  # Print first 300 chars

if __name__ == "__main__":
    asyncio.run(main())
```

**What’s happening?**

- **`AsyncWebCrawler`** launches a headless browser (Chromium by default).
- It fetches `https://example.com`.
- Crawl4AI automatically converts the HTML into Markdown.

You now have a simple, working crawl!

---

## 3. Basic Configuration (Light Introduction)

Crawl4AI’s crawler can be heavily customized using two main classes:

1. **`BrowserConfig`**: Controls browser behavior (headless or full UI, user agent, JavaScript toggles, etc.).
2. **`CrawlerRunConfig`**: Controls how each crawl runs (caching, extraction, timeouts, hooking, etc.).

Below is an example with minimal usage:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

async def main():
    browser_conf = BrowserConfig(headless=True)  # or False to see the browser
    run_conf = CrawlerRunConfig(
        cache_mode=CacheMode.BYPASS
    )

    async with AsyncWebCrawler(config=browser_conf) as crawler:
        result = await crawler.arun(
            url="https://example.com",
            config=run_conf
        )
        print(result.markdown)

if __name__ == "__main__":
    asyncio.run(main())
```

> IMPORTANT: By default, the cache mode is `CacheMode.ENABLED`, so to get fresh content you need to set it to `CacheMode.BYPASS`.

We’ll explore more advanced config in later tutorials (like enabling proxies, PDF output, multi-tab sessions, etc.). For now, just note how you pass these objects to manage crawling.

---

## 4. Generating Markdown Output

By default, Crawl4AI automatically generates Markdown from each crawled page. However, the exact output depends on whether you specify a **markdown generator** or **content filter**.

- **`result.markdown`**: The direct HTML-to-Markdown conversion.
- **`result.markdown.fit_markdown`**: The same content after applying any configured **content filter** (e.g., `PruningContentFilter`).

### Example: Using a Filter with `DefaultMarkdownGenerator`

```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode
from crawl4ai.content_filter_strategy import PruningContentFilter
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

md_generator = DefaultMarkdownGenerator(
    content_filter=PruningContentFilter(threshold=0.4, threshold_type="fixed")
)

config = CrawlerRunConfig(
    cache_mode=CacheMode.BYPASS,
    markdown_generator=md_generator
)

async with AsyncWebCrawler() as crawler:
    result = await crawler.arun("https://news.ycombinator.com", config=config)
    print("Raw Markdown length:", len(result.markdown.raw_markdown))
    print("Fit Markdown length:", len(result.markdown.fit_markdown))
```

**Note**: If you do **not** specify a content filter or markdown generator, you’ll typically see only the raw Markdown. `PruningContentFilter` may add around `50ms` of processing time. We’ll dive deeper into these strategies in a dedicated **Markdown Generation** tutorial.
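
Conceptually, a pruning filter scores each block of the page and drops low-value ones. Below is a toy, dependency-free sketch of that idea; it is **not** `PruningContentFilter`’s real algorithm, just an illustration of threshold-based pruning (`prune_blocks` and the scoring rule are invented for this example):

```python
def prune_blocks(blocks, threshold=0.4):
    # Toy scorer: the fraction of words longer than 3 characters serves as
    # a crude "information density" signal; blocks scoring below the
    # threshold are pruned.
    kept = []
    for block in blocks:
        words = block.split()
        if not words:
            continue
        score = sum(len(w) > 3 for w in words) / len(words)
        if score >= threshold:
            kept.append(block)
    return kept

blocks = [
    "Subscribe now and hit the bell",  # mostly short filler words
    "Crawl4AI converts rendered HTML into Markdown suitable for pipelines",
]
print(prune_blocks(blocks, threshold=0.6))
```

The real filter uses a richer relevance model, but the shape is the same: score, compare to `threshold`, keep or drop.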

---

## 5. Simple Data Extraction (CSS-based)

Crawl4AI can also extract structured data (JSON) using CSS or XPath selectors. Below is a minimal CSS-based example:

```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, CacheMode
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def main():
    schema = {
        "name": "Example Items",
        "baseSelector": "div.item",
        "fields": [
            {"name": "title", "selector": "h2", "type": "text"},
            {"name": "link", "selector": "a", "type": "attribute", "attribute": "href"}
        ]
    }

    raw_html = "<div class='item'><h2>Item 1</h2><a href='https://example.com/item1'>Link 1</a></div>"

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="raw://" + raw_html,
            config=CrawlerRunConfig(
                cache_mode=CacheMode.BYPASS,
                extraction_strategy=JsonCssExtractionStrategy(schema)
            )
        )
        # The JSON output is stored in 'extracted_content'
        data = json.loads(result.extracted_content)
        print(data)

if __name__ == "__main__":
    asyncio.run(main())
```

**Why is this helpful?**

- Great for repetitive page structures (e.g., item listings, articles).
- No AI usage or costs.
- The crawler returns a JSON string you can parse or store.

> Tip: You can pass raw HTML to the crawler instead of a URL by prefixing the HTML with `raw://`.

---

## 6. Simple Data Extraction (LLM-based)

For more complex or irregular pages, a language model can parse text intelligently into a structure you define. Crawl4AI supports **open-source** or **closed-source** providers:

- **Open-Source Models** (e.g., `ollama/llama3.3`, no token required)
- **OpenAI Models** (e.g., `openai/gpt-4`, requires `api_token`)
- Or any provider supported by the underlying library

Below is an example using **open-source** style (no token) and closed-source:

```python
import os
import json
import asyncio
from typing import Dict
from pydantic import BaseModel, Field
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
from crawl4ai.extraction_strategy import LLMExtractionStrategy

class OpenAIModelFee(BaseModel):
    model_name: str = Field(..., description="Name of the OpenAI model.")
    input_fee: str = Field(..., description="Fee for input token for the OpenAI model.")
    output_fee: str = Field(
        ..., description="Fee for output token for the OpenAI model."
    )

async def extract_structured_data_using_llm(
    provider: str, api_token: str = None, extra_headers: Dict[str, str] = None
):
    print(f"\n--- Extracting Structured Data with {provider} ---")

    if api_token is None and provider != "ollama":
        print(f"API token is required for {provider}. Skipping this example.")
        return

    browser_config = BrowserConfig(headless=True)

    extra_args = {"temperature": 0, "top_p": 0.9, "max_tokens": 2000}
    if extra_headers:
        extra_args["extra_headers"] = extra_headers

    crawler_config = CrawlerRunConfig(
        cache_mode=CacheMode.BYPASS,
        word_count_threshold=1,
        page_timeout=80000,
        extraction_strategy=LLMExtractionStrategy(
            provider=provider,
            api_token=api_token,
            schema=OpenAIModelFee.model_json_schema(),
            extraction_type="schema",
            instruction="""From the crawled content, extract all mentioned model names along with their fees for input and output tokens.
            Do not miss any models in the entire content.""",
            extra_args=extra_args,
        ),
    )

    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun(
            url="https://openai.com/api/pricing/", config=crawler_config
        )
        print(result.extracted_content)

if __name__ == "__main__":
    # Use ollama with llama3.3
    # asyncio.run(
    #     extract_structured_data_using_llm(
    #         provider="ollama/llama3.3", api_token="no-token"
    #     )
    # )

    asyncio.run(
        extract_structured_data_using_llm(
            provider="openai/gpt-4o", api_token=os.getenv("OPENAI_API_KEY")
        )
    )
```

**What’s happening?**

- We define a Pydantic schema (`OpenAIModelFee`) describing the fields we want.
- The LLM extraction strategy uses that schema and your instructions to transform raw text into structured JSON.
- Depending on the **provider** and **api_token**, you can use local models or a remote API.

---

## 7. Dynamic Content Example

Some sites require multiple “page clicks” or dynamic JavaScript updates. Below is an example that runs JavaScript to click through every tab on a course page, waits for each panel to render, and then extracts structured data, using **`BrowserConfig`** and **`CrawlerRunConfig`**:

```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def extract_structured_data_using_css_extractor():
    print("\n--- Using JsonCssExtractionStrategy for Fast Structured Output ---")
    schema = {
        "name": "KidoCode Courses",
        "baseSelector": "section.charge-methodology .w-tab-content > div",
        "fields": [
            {
                "name": "section_title",
                "selector": "h3.heading-50",
                "type": "text",
            },
            {
                "name": "section_description",
                "selector": ".charge-content",
                "type": "text",
            },
            {
                "name": "course_name",
                "selector": ".text-block-93",
                "type": "text",
            },
            {
                "name": "course_description",
                "selector": ".course-content-text",
                "type": "text",
            },
            {
                "name": "course_icon",
                "selector": ".image-92",
                "type": "attribute",
                "attribute": "src",
            },
        ],
    }

    browser_config = BrowserConfig(headless=True, java_script_enabled=True)

    js_click_tabs = """
    (async () => {
        const tabs = document.querySelectorAll("section.charge-methodology .tabs-menu-3 > div");
        for(let tab of tabs) {
            tab.scrollIntoView();
            tab.click();
            await new Promise(r => setTimeout(r, 500));
        }
    })();
    """

    crawler_config = CrawlerRunConfig(
        cache_mode=CacheMode.BYPASS,
        extraction_strategy=JsonCssExtractionStrategy(schema),
        js_code=[js_click_tabs],
    )

    async with AsyncWebCrawler(config=browser_config) as crawler:
        result = await crawler.arun(
            url="https://www.kidocode.com/degrees/technology", config=crawler_config
        )

        courses = json.loads(result.extracted_content)
        print(f"Successfully extracted {len(courses)} course sections")
        print(json.dumps(courses[0], indent=2))

async def main():
    await extract_structured_data_using_css_extractor()

if __name__ == "__main__":
    asyncio.run(main())
```

**Key Points**:

- **`BrowserConfig(headless=True, java_script_enabled=True)`**: Runs the browser invisibly with JavaScript enabled; set `headless=False` to watch the tabs being clicked.
- **`js_code`**: An async snippet that scrolls to and clicks each tab, pausing 500 ms so the panel content can load.
- **`JsonCssExtractionStrategy(schema)`**: Extracts structured JSON from the fully rendered page.
- **`cache_mode=CacheMode.BYPASS`**: Ensures we extract from fresh content rather than a cached copy.

---

## 8. Next Steps

Congratulations! You have:

1. Performed a basic crawl and printed Markdown.
2. Used **content filters** with a markdown generator.
3. Extracted JSON via **CSS** or **LLM** strategies.
4. Handled **dynamic** pages with JavaScript triggers.

If you’re ready for more, check out:

- **Installation**: A deeper dive into advanced installs, Docker usage (experimental), or optional dependencies.
- **Hooks & Auth**: Learn how to run custom JavaScript or handle logins with cookies, local storage, etc.
- **Deployment**: Explore ephemeral testing in Docker or plan for the upcoming stable Docker release.
- **Browser Management**: Delve into user simulation, stealth modes, and concurrency best practices.

Crawl4AI is a powerful, flexible tool. Enjoy building out your scrapers, data pipelines, or AI-driven extraction flows. Happy crawling!
|
||||
@@ -1,133 +1,144 @@
|
||||
## Chunking Strategies 📚
|
||||
# Chunking Strategies
|
||||
Chunking strategies are critical for dividing large texts into manageable parts, enabling effective content processing and extraction. These strategies are foundational in cosine similarity-based extraction techniques, which allow users to retrieve only the most relevant chunks of content for a given query. Additionally, they facilitate direct integration into RAG (Retrieval-Augmented Generation) systems for structured and scalable workflows.
|
||||
|
||||
Crawl4AI provides several powerful chunking strategies to divide text into manageable parts for further processing. Each strategy has unique characteristics and is suitable for different scenarios. Let's explore them one by one.
|
||||
### Why Use Chunking?
|
||||
1. **Cosine Similarity and Query Relevance**: Prepares chunks for semantic similarity analysis.
|
||||
2. **RAG System Integration**: Seamlessly processes and stores chunks for retrieval.
|
||||
3. **Structured Processing**: Allows for diverse segmentation methods, such as sentence-based, topic-based, or windowed approaches.
|
||||
|
||||
### RegexChunking
|
||||
### Methods of Chunking
|
||||
|
||||
`RegexChunking` splits text using regular expressions. This is ideal for creating chunks based on specific patterns like paragraphs or sentences.
|
||||
#### 1. Regex-Based Chunking
|
||||
Splits text based on regular expression patterns, useful for coarse segmentation.
|
||||
|
||||
#### When to Use
|
||||
- Great for structured text with consistent delimiters.
|
||||
- Suitable for documents where specific patterns (e.g., double newlines, periods) indicate logical chunks.
|
||||
|
||||
#### Parameters
|
||||
- `patterns` (list, optional): Regular expressions used to split the text. Default is to split by double newlines (`['\n\n']`).
|
||||
|
||||
#### Example
|
||||
**Code Example**:
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import RegexChunking
|
||||
class RegexChunking:
|
||||
def __init__(self, patterns=None):
|
||||
self.patterns = patterns or [r'\n\n'] # Default pattern for paragraphs
|
||||
|
||||
# Define patterns for splitting text
|
||||
patterns = [r'\n\n', r'\. ']
|
||||
chunker = RegexChunking(patterns=patterns)
|
||||
def chunk(self, text):
|
||||
paragraphs = [text]
|
||||
for pattern in self.patterns:
|
||||
paragraphs = [seg for p in paragraphs for seg in re.split(pattern, p)]
|
||||
return paragraphs
|
||||
|
||||
# Sample text
|
||||
text = "This is a sample text. It will be split into chunks.\n\nThis is another paragraph."
|
||||
# Example Usage
|
||||
text = """This is the first paragraph.
|
||||
|
||||
# Chunk the text
|
||||
chunks = chunker.chunk(text)
|
||||
print(chunks)
|
||||
This is the second paragraph."""
|
||||
chunker = RegexChunking()
|
||||
print(chunker.chunk(text))
|
||||
```
|
||||
|
||||
### NlpSentenceChunking
|
||||
#### 2. Sentence-Based Chunking
|
||||
Divides text into sentences using NLP tools, ideal for extracting meaningful statements.
|
||||
|
||||
`NlpSentenceChunking` uses NLP models to split text into sentences, ensuring accurate sentence boundaries.
|
||||
|
||||
#### When to Use
|
||||
- Ideal for texts where sentence boundaries are crucial.
|
||||
- Useful for creating chunks that preserve grammatical structures.
|
||||
|
||||
#### Parameters
|
||||
- None.
|
||||
|
||||
#### Example
|
||||
**Code Example**:
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import NlpSentenceChunking
|
||||
from nltk.tokenize import sent_tokenize
|
||||
|
||||
class NlpSentenceChunking:
|
||||
def chunk(self, text):
|
||||
sentences = sent_tokenize(text)
|
||||
return [sentence.strip() for sentence in sentences]
|
||||
|
||||
# Example Usage
|
||||
text = "This is sentence one. This is sentence two."
|
||||
chunker = NlpSentenceChunking()
|
||||
|
||||
# Sample text
|
||||
text = "This is a sample text. It will be split into sentences. Here's another sentence."
|
||||
|
||||
# Chunk the text
|
||||
chunks = chunker.chunk(text)
|
||||
print(chunks)
|
||||
print(chunker.chunk(text))
|
||||
```
|
||||
|
||||
### TopicSegmentationChunking
|
||||
#### 3. Topic-Based Segmentation
|
||||
Uses algorithms like TextTiling to create topic-coherent chunks.
|
||||
|
||||
`TopicSegmentationChunking` employs the TextTiling algorithm to segment text into topic-based chunks. This method identifies thematic boundaries.
|
||||
|
||||
#### When to Use
|
||||
- Perfect for long documents with distinct topics.
|
||||
- Useful when preserving topic continuity is more important than maintaining text order.
|
||||
|
||||
#### Parameters
|
||||
- `num_keywords` (int, optional): Number of keywords for each topic segment. Default is `3`.
|
||||
|
||||
#### Example
|
||||
**Code Example**:
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import TopicSegmentationChunking
|
||||
from nltk.tokenize import TextTilingTokenizer
|
||||
|
||||
chunker = TopicSegmentationChunking(num_keywords=3)
|
||||
class TopicSegmentationChunking:
|
||||
def __init__(self):
|
||||
self.tokenizer = TextTilingTokenizer()
|
||||
|
||||
# Sample text
|
||||
text = "This document contains several topics. Topic one discusses AI. Topic two covers machine learning."
|
||||
def chunk(self, text):
|
||||
return self.tokenizer.tokenize(text)
|
||||
|
||||
# Chunk the text
|
||||
chunks = chunker.chunk(text)
|
||||
print(chunks)
|
||||
# Example Usage
|
||||
text = """This is an introduction.
|
||||
This is a detailed discussion on the topic."""
|
||||
chunker = TopicSegmentationChunking()
|
||||
print(chunker.chunk(text))
|
||||
```
|
||||
|
||||
### FixedLengthWordChunking
|
||||
#### 4. Fixed-Length Word Chunking
|
||||
Segments text into chunks of a fixed word count.
|
||||
|
||||
`FixedLengthWordChunking` splits text into chunks based on a fixed number of words. This ensures each chunk has approximately the same length.
|
||||
|
||||
#### When to Use
|
||||
- Suitable for processing large texts where uniform chunk size is important.
|
||||
- Useful when the number of words per chunk needs to be controlled.
|
||||
|
||||
#### Parameters
|
||||
- `chunk_size` (int, optional): Number of words per chunk. Default is `100`.
|
||||
|
||||
#### Example
|
||||
**Code Example**:
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import FixedLengthWordChunking
|
||||
class FixedLengthWordChunking:
|
||||
def __init__(self, chunk_size=100):
|
||||
self.chunk_size = chunk_size
|
||||
|
||||
chunker = FixedLengthWordChunking(chunk_size=10)
|
||||
def chunk(self, text):
|
||||
words = text.split()
|
||||
return [' '.join(words[i:i + self.chunk_size]) for i in range(0, len(words), self.chunk_size)]
|
||||
|
||||
# Sample text
|
||||
text = "This is a sample text. It will be split into chunks of fixed length."
|
||||
|
||||
# Chunk the text
|
||||
chunks = chunker.chunk(text)
|
||||
print(chunks)
|
||||
# Example Usage
|
||||
text = "This is a long text with many words to be chunked into fixed sizes."
|
||||
chunker = FixedLengthWordChunking(chunk_size=5)
|
||||
print(chunker.chunk(text))
|
||||
```
|
||||
|
||||
### SlidingWindowChunking
|
||||
#### 5. Sliding Window Chunking
|
||||
Generates overlapping chunks for better contextual coherence.
|
||||
|
||||
`SlidingWindowChunking` uses a sliding window approach to create overlapping chunks. Each chunk has a fixed length, and the window slides by a specified step size.
|
||||
|
||||
#### When to Use
|
||||
- Ideal for creating overlapping chunks to preserve context.
|
||||
- Useful for tasks where context from adjacent chunks is needed.
|
||||
|
||||
#### Parameters
|
||||
- `window_size` (int, optional): Number of words in each chunk. Default is `100`.
|
||||
- `step` (int, optional): Number of words to slide the window. Default is `50`.
|
||||
|
||||
#### Example
|
||||
**Code Example**:
|
||||
```python
|
||||
from crawl4ai.chunking_strategy import SlidingWindowChunking
|
||||
class SlidingWindowChunking:
|
||||
def __init__(self, window_size=100, step=50):
|
||||
self.window_size = window_size
|
||||
self.step = step
|
||||
|
||||
chunker = SlidingWindowChunking(window_size=10, step=5)
|
||||
def chunk(self, text):
|
||||
words = text.split()
|
||||
chunks = []
|
||||
for i in range(0, len(words) - self.window_size + 1, self.step):
|
||||
chunks.append(' '.join(words[i:i + self.window_size]))
|
||||
return chunks
|
||||
|
||||
# Sample text
|
||||
text = "This is a sample text. It will be split using a sliding window approach to preserve context."
|
||||
|
||||
# Chunk the text
|
||||
chunks = chunker.chunk(text)
|
||||
print(chunks)
|
||||
# Example Usage
|
||||
text = "This is a long text to demonstrate sliding window chunking."
|
||||
chunker = SlidingWindowChunking(window_size=5, step=2)
|
||||
print(chunker.chunk(text))
|
||||
```
|
||||
|
||||
With these chunking strategies, you can choose the best method to divide your text based on your specific needs. Whether you need precise sentence boundaries, topic-based segmentation, or uniform chunk sizes, Crawl4AI has you covered. Happy chunking! 📝✨
|
||||
### Combining Chunking with Cosine Similarity
|
||||
To enhance the relevance of extracted content, chunking strategies can be paired with cosine similarity techniques. Here’s an example workflow:
|

**Code Example**:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class CosineSimilarityExtractor:
    def __init__(self, query):
        self.query = query
        self.vectorizer = TfidfVectorizer()

    def find_relevant_chunks(self, chunks):
        vectors = self.vectorizer.fit_transform([self.query] + chunks)
        similarities = cosine_similarity(vectors[0:1], vectors[1:]).flatten()
        return [(chunks[i], similarities[i]) for i in range(len(chunks))]

# Example workflow (SlidingWindowChunking comes from the previous example)
text = """This is a sample document. It has multiple sentences.
We are testing chunking and similarity."""

chunker = SlidingWindowChunking(window_size=5, step=3)
chunks = chunker.chunk(text)
query = "testing chunking"
extractor = CosineSimilarityExtractor(query)
relevant_chunks = extractor.find_relevant_chunks(chunks)

print(relevant_chunks)
```

### Parameter Details

1. **semantic_filter**
   - Sets the target topic or content type
   - Use keywords relevant to your desired content
   - Example: "technical specifications", "user reviews", "pricing information"

2. **sim_threshold**
   - Controls how similar content must be to be grouped together
   - Higher values (e.g., 0.8) mean stricter matching
   - Lower values (e.g., 0.3) allow more variation

   ```python
   strategy = CosineStrategy(sim_threshold=0.3)
   ```

3. **word_count_threshold**
   - Filters out short content blocks
   - Helps eliminate noise and irrelevant content

   ```python
   strategy = CosineStrategy(word_count_threshold=50)
   ```

4. **top_k**
   - Number of top content clusters to return
   - Higher values return more diverse content

## Best Practices

1. **Adjust Thresholds Iteratively**
   - Start with default values
   - Adjust based on results
   - Monitor clustering quality

2. **Choose Appropriate Word Count Thresholds**
   - Higher for articles (100+)
   - Lower for reviews/comments (20+)
   - Medium for product descriptions (50+)

3. **Optimize Performance**

   ```python
   strategy = CosineStrategy(
       word_count_threshold=10,  # Filter early
       # ...
   )
   ```

4. **Handle Different Content Types**

   ```python
   # For mixed content pages
   strategy = CosineStrategy(
       # ...
   )
   ```
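To build intuition for how `sim_threshold` groups content, here is a small self-contained sketch (plain Python, not Crawl4AI internals) that clusters text blocks by cosine similarity over bag-of-words vectors — blocks at or above the threshold join an existing group, the rest start new ones:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_blocks(blocks, sim_threshold=0.3):
    # Greedily assign each block to the first group whose seed is similar enough.
    groups = []
    for block in blocks:
        vec = Counter(block.lower().split())
        for group in groups:
            if cosine(vec, group["seed"]) >= sim_threshold:
                group["blocks"].append(block)
                break
        else:
            groups.append({"seed": vec, "blocks": [block]})
    return [g["blocks"] for g in groups]

blocks = [
    "pricing information for the pro plan",
    "pricing details for the free plan",
    "company history and founding story",
]
print(group_blocks(blocks, sim_threshold=0.3))
```

Raising `sim_threshold` toward 0.8 would force the two pricing blocks into separate groups; lowering it merges more aggressively.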

---

# Extracting JSON (LLM)

In some cases, you need to extract **complex or unstructured** information from a webpage that a simple CSS/XPath schema cannot easily parse. Or you want **AI**-driven insights, classification, or summarization. For these scenarios, Crawl4AI provides an **LLM-based extraction strategy** that:

## 2. Provider-Agnostic via LightLLM

Crawl4AI uses a “provider string” (e.g., `"openai/gpt-4o"`, `"ollama/llama2.0"`, `"aws/titan"`) to identify your LLM. **Any** model that LightLLM supports is fair game. You just provide:

- **`provider`**: The `<provider>/<model_name>` identifier (e.g., `"openai/gpt-4"`, `"ollama/llama2"`, `"huggingface/google-flan"`, etc.).
- **`api_token`**: If needed (for OpenAI, HuggingFace, etc.); local models or Ollama might not require it.

This means you **aren’t locked** into a single LLM vendor. Switch or experiment as needed.

### 3.1 Flow

1. **Chunking** (optional): The HTML or markdown is split into smaller segments if it’s very long (based on `chunk_token_threshold`, overlap, etc.).
2. **Prompt Construction**: For each chunk, the library forms a prompt that includes your **`instruction`** (and possibly schema or examples).
3. **LLM Inference**: Each chunk is sent to the model in parallel or sequentially (depending on your concurrency).
4. **Combining**: The results from each chunk are merged and parsed into JSON.

### 3.2 `extraction_type`

Below is an overview of important LLM extraction parameters. All are typically set inside `LLMExtractionStrategy(...)`. You then put that strategy in your `CrawlerRunConfig(..., extraction_strategy=...)`.

1. **`provider`** (str): e.g., `"openai/gpt-4"`, `"ollama/llama2"`.
2. **`api_token`** (str): The API key or token for that model. May not be needed for local models.
3. **`schema`** (dict): A JSON schema describing the fields you want. Usually generated by `YourModel.model_json_schema()`.
4. **`extraction_type`** (str): `"schema"` or `"block"`.
5. **`instruction`** (str): Prompt text telling the LLM what you want extracted. E.g., “Extract these fields as a JSON array.”
6. **`chunk_token_threshold`** (int): Maximum tokens per chunk. If your content is huge, you can break it up for the LLM.
7. **`overlap_rate`** (float): Overlap ratio between adjacent chunks. E.g., `0.1` means 10% of each chunk is repeated to preserve context continuity.
8. **`apply_chunking`** (bool): Set `True` to chunk automatically. If you want a single pass, set `False`.
9. **`input_format`** (str): Determines **which** crawler result is passed to the LLM. Options include:
   - `"markdown"`: The raw markdown (default).
   - `"fit_markdown"`: The filtered “fit” markdown if you used a content filter.
   - `"html"`: The cleaned or raw HTML.
10. **`extra_args`** (dict): Additional LLM parameters like `temperature`, `max_tokens`, `top_p`, etc.
11. **`show_usage()`**: A method you can call to print out usage info (token usage per chunk, total cost if known).
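The original example for this section is not reproduced here. As a hedged sketch of how the parameters above fit together, the keyword arguments might be assembled like this (all values are illustrative; in real code they are passed to Crawl4AI's `LLMExtractionStrategy(...)`):

```python
import json

# Hypothetical product schema, similar to what YourModel.model_json_schema() would emit.
product_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "price": {"type": "number"},
    },
    "required": ["name", "price"],
}

# Keyword arguments as they might be handed to LLMExtractionStrategy(...).
llm_strategy_kwargs = {
    "provider": "openai/gpt-4",
    "api_token": "YOUR_API_TOKEN",   # may be unnecessary for local models
    "schema": product_schema,
    "extraction_type": "schema",
    "instruction": "Extract product name and price as a JSON array.",
    "chunk_token_threshold": 1400,
    "overlap_rate": 0.1,
    "apply_chunking": True,
    "input_format": "markdown",
    "extra_args": {"temperature": 0.0, "max_tokens": 800},
}

print(json.dumps(llm_strategy_kwargs["schema"], indent=2))
```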
### 6.1 `chunk_token_threshold`

If your page is large, you might exceed your LLM’s context window. **`chunk_token_threshold`** sets the approximate max tokens per chunk. The library calculates the word→token ratio using `word_token_rate` (often ~0.75 by default). If chunking is enabled (`apply_chunking=True`), the text is split into segments.

### 6.2 `overlap_rate`
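To make the interplay of these settings concrete, here is a plain-Python sketch of the arithmetic (an approximation, not the library’s internal code): the token budget is converted to a word budget via the word→token rate, and `overlap_rate` determines how many words adjacent chunks share:

```python
def chunk_words(words, chunk_token_threshold=100, word_token_rate=0.75, overlap_rate=0.1):
    # Approximate words per chunk: token budget times words-per-token rate.
    words_per_chunk = max(1, int(chunk_token_threshold * word_token_rate))
    overlap = int(words_per_chunk * overlap_rate)
    step = max(1, words_per_chunk - overlap)
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(words[start:start + words_per_chunk])
        if start + words_per_chunk >= len(words):
            break
    return chunks

text = " ".join(f"w{i}" for i in range(200))
chunks = chunk_words(text.split(), chunk_token_threshold=100, overlap_rate=0.1)
print(len(chunks), [len(c) for c in chunks])
```

With a 100-token budget, each chunk holds about 75 words, and adjacent chunks repeat roughly 7 words of context.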

## 10. Best Practices & Caveats

1. **Cost & Latency**: LLM calls can be slow or expensive. Consider chunking or smaller coverage if you only need partial data.
2. **Model Token Limits**: If your page + instruction exceed the context window, chunking is essential.
3. **Instruction Engineering**: Well-crafted instructions can drastically improve output reliability.
4. **Schema Strictness**: `"schema"` extraction tries to parse the model output as JSON. If the model returns invalid JSON, partial extraction might happen, or you might get an error.
5. **Parallel vs. Serial**: The library can process multiple chunks in parallel, but you must watch out for rate limits on certain providers.
6. **Check Output**: Sometimes, an LLM might omit fields or produce extraneous text. You may want to post-validate with Pydantic or do additional cleanup.

---

**Next Steps**:

1. **Experiment with Different Providers**
   - Try switching the `provider` (e.g., `"ollama/llama2"`, `"openai/gpt-4o"`, etc.) to see differences in speed, accuracy, or cost.
   - Pass different `extra_args` like `temperature`, `top_p`, and `max_tokens` to fine-tune your results.

2. **Combine With Other Strategies**
   - Use [content filters](../../how-to/content-filters.md) like BM25 or Pruning prior to LLM extraction to remove noise and reduce token usage.
   - Apply a [CSS or XPath extraction strategy](./json-extraction-basic.md) first for obvious, structured data, then send only the tricky parts to the LLM.

3. **Performance Tuning**
   - If pages are large, tweak `chunk_token_threshold`, `overlap_rate`, or `apply_chunking` to optimize throughput.
   - Check the usage logs with `show_usage()` to keep an eye on token consumption and identify potential bottlenecks.

4. **Validate Outputs**
   - If using `extraction_type="schema"`, parse the LLM’s JSON with a Pydantic model for a final validation step.
   - Log or handle any parse errors gracefully, especially if the model occasionally returns malformed JSON.

5. **Explore Hooks & Automation**
   - Integrate LLM extraction with [hooks](./hooks-custom.md) for complex pre/post-processing.
   - Use a multi-step pipeline: crawl, filter, LLM-extract, then store or index results for further analysis.

6. **Scale and Deploy**
   - Combine your LLM extraction setup with [Docker or other deployment solutions](./docker-quickstart.md) to run at scale.
   - Monitor memory usage and concurrency if you call LLMs frequently.
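The validation step described above can be sketched with the standard library alone (a Pydantic model gives you richer checks; the field names here are hypothetical):

```python
import json

REQUIRED_FIELDS = {"name", "price"}  # hypothetical schema fields

def validate_llm_output(raw: str):
    """Parse the model's JSON and keep only well-formed records."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        print(f"Malformed JSON from LLM: {exc}")
        return []
    valid, rejected = [], []
    for rec in records:
        if isinstance(rec, dict) and REQUIRED_FIELDS <= rec.keys():
            valid.append(rec)
        else:
            rejected.append(rec)
    if rejected:
        print(f"Dropped {len(rejected)} incomplete record(s)")
    return valid

raw = '[{"name": "Widget", "price": 9.99}, {"name": "Gadget"}]'
print(validate_llm_output(raw))
```

Logging the rejected records, as done here, makes it easy to spot when the model starts omitting fields.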

**Last Updated**: 2025-01-01

---

**Why avoid LLM for basic extractions?**

1. **Faster & Cheaper**: No API calls or GPU overhead.
2. **Lower Carbon Footprint**: LLM inference can be energy-intensive. A well-defined schema is practically carbon-free.
3. **Precise & Repeatable**: CSS/XPath selectors do exactly what you specify. LLM outputs can vary or hallucinate.
4. **Scales Readily**: For thousands of pages, schema-based extraction runs quickly and in parallel.

Below, we’ll explore how to craft these schemas and use them with **JsonCssExtractionStrategy** (or **JsonXPathExtractionStrategy** if you prefer XPath). We’ll also highlight advanced features like **nested fields** and **base element attributes**.

A schema defines:

1. A **base selector** that identifies each “container” element on the page (e.g., a product row, a blog post card).
2. **Fields** describing which CSS/XPath selectors to use for each piece of data you want to capture (text, attribute, HTML block, etc.).
3. **Nested** or **list** types for repeated or hierarchical structures.

For example, if you have a list of products, each one might have a name, price, reviews, and “related products.” This approach is faster and more reliable than an LLM for consistent, structured pages.
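As a sketch of what such a schema looks like for that product example (the selector strings and field names are illustrative, not taken from a real page):

```python
# Hypothetical schema for a product listing page.
schema = {
    "name": "Products",
    "baseSelector": "div.product-card",  # one match per product container
    "fields": [
        {"name": "title", "selector": "h2.title", "type": "text"},
        {"name": "price", "selector": "span.price", "type": "text"},
        {"name": "url", "selector": "a.details", "type": "attribute", "attribute": "href"},
        {
            "name": "reviews",  # repeated structure: a list of nested objects
            "selector": "div.review",
            "type": "list",
            "fields": [
                {"name": "author", "selector": ".author", "type": "text"},
                {"name": "rating", "selector": ".rating", "type": "text"},
            ],
        },
    ],
}

print(f"{schema['name']}: {len(schema['fields'])} top-level fields")
```

The dict is then passed to `JsonCssExtractionStrategy(schema)` (or the XPath variant with XPath selectors).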

**Key Points**:

1. **`JsonXPathExtractionStrategy`** is used instead of `JsonCssExtractionStrategy`.
2. **`baseSelector`** and each field’s `"selector"` use **XPath** instead of CSS.
3. **`raw://`** lets us pass `dummy_html` with no real network request—handy for local testing.
4. Everything (including the extraction strategy) is in **`CrawlerRunConfig`**.

That’s how you keep the config self-contained, illustrate **XPath** usage, and demonstrate the **raw** scheme for direct HTML input—all while avoiding the old approach of passing `extraction_strategy` directly to `arun()`.

## 4. Why “No LLM” Is Often Better

1. **Zero Hallucination**: Schema-based extraction doesn’t guess text. It either finds it or not.
2. **Guaranteed Structure**: The same schema yields consistent JSON across many pages, so your downstream pipeline can rely on stable keys.
3. **Speed**: LLM-based extraction can be 10–1000x slower for large-scale crawling.
4. **Scalable**: Adding or updating a field is a matter of adjusting the schema, not re-tuning a model.

**When might you consider an LLM?** Possibly if the site is extremely unstructured or you want AI summarization. But always try a schema approach first for repeated or consistent data patterns.

## 7. Tips & Best Practices

1. **Inspect the DOM** in Chrome DevTools or Firefox’s Inspector to find stable selectors.
2. **Start Simple**: Verify you can extract a single field. Then add complexity like nested objects or lists.
3. **Test** your schema on partial HTML or a test page before a big crawl.
4. **Combine with JS Execution** if the site loads content dynamically. You can pass `js_code` or `wait_for` in `CrawlerRunConfig`.
5. **Look at Logs** when `verbose=True`: if your selectors are off or your schema is malformed, it’ll often show warnings.
6. **Use baseFields** if you need attributes from the container element (e.g., `href`, `data-id`), especially for the “parent” item.
7. **Performance**: For large pages, make sure your selectors are as narrow as possible.

---

**Remember**: For repeated, structured data, you don’t need to pay for or wait on an LLM. A well-crafted schema plus CSS or XPath gets you the data faster, cleaner, and cheaper—**the real power** of Crawl4AI.

**Last Updated**: 2025-01-01

---

This schema demonstrates several advanced features:

1. **Nested Objects**: The `details` field is a nested object within each product.
2. **Simple Lists**: The `features` field is a simple list of text items.
3. **Nested Lists**: The `products` field is a nested list, where each item is a complex object.
4. **Lists of Objects**: The `reviews` and `related_products` fields are lists of objects.

Let's break down the key concepts:

## Tips for Advanced Usage

1. **Start Simple**: Begin with a basic schema and gradually add complexity.
2. **Test Incrementally**: Test each part of your schema separately before combining them.
3. **Use Chrome DevTools**: The Element Inspector is invaluable for identifying the correct selectors.
4. **Handle Missing Data**: Use the `default` key in your field definitions to handle cases where data might be missing.
5. **Leverage Transforms**: Use the `transform` key to clean or format extracted data (e.g., converting prices to numbers).
6. **Consider Performance**: Very complex schemas might slow down extraction. Balance complexity with performance needs.

By mastering these advanced techniques, you can use JsonCssExtractionStrategy to extract highly structured data from even the most complex web pages, making it a powerful tool for web scraping and data analysis tasks.

## Advantages of JsonCssExtractionStrategy

1. **Speed**: CSS selectors are fast to execute, making this method efficient for large datasets.
2. **Precision**: You can target exactly the elements you need.
3. **Structured Output**: The result is already structured as JSON, ready for further processing.
4. **No External Dependencies**: Unlike LLM-based strategies, this doesn't require any API calls to external services.

## Tips for Using JsonCssExtractionStrategy

1. **Inspect the Page**: Use browser developer tools to identify the correct CSS selectors.
2. **Test Selectors**: Verify your selectors in the browser console before using them in the script.
3. **Handle Dynamic Content**: If the page uses JavaScript to load content, you may need to combine this with JS execution (see the Advanced Usage section).
4. **Error Handling**: Always check the `result.success` flag and handle potential failures.
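The error-handling pattern from tip 4 can be sketched with a stand-in result object (in real code, `result` comes from `crawler.arun(...)`, which also carries other fields):

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class CrawlResult:
    # Minimal stand-in for the result object returned by crawler.arun(...)
    success: bool
    extracted_content: Optional[str] = None
    error_message: Optional[str] = None

def handle(result: CrawlResult):
    # Always check success before touching the payload.
    if not result.success:
        print(f"Crawl failed: {result.error_message}")
        return None
    return json.loads(result.extracted_content)

ok = CrawlResult(success=True, extracted_content='[{"name": "Widget"}]')
bad = CrawlResult(success=False, error_message="timeout")

print(handle(ok))
print(handle(bad))
```

Returning `None` (or raising) on failure keeps downstream code from parsing an empty or missing payload.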
## Advanced Usage: Combining with JavaScript Execution

Choose your strategy based on these factors:

1. **Content Structure**
   - Well-structured HTML → Use CSS Strategy
   - Natural language text → Use LLM Strategy
   - Mixed/Complex content → Use Cosine Strategy

2. **Performance Requirements**
   - Fastest: CSS Strategy
   - Moderate: Cosine Strategy
   - Variable: LLM Strategy (depends on provider)

3. **Accuracy Needs**
   - Highest structure accuracy: CSS Strategy
   - Best semantic understanding: LLM Strategy
   - Best content relevance: Cosine Strategy

## Common Use Cases

1. **E-commerce Scraping**

   ```python
   # CSS Strategy for product listings
   schema = {
       # ...
   }
   ```

2. **News Article Extraction**

   ```python
   # LLM Strategy for article content
   class Article(BaseModel):
       # ...
   ```

3. **Content Analysis**

   ```python
   # Cosine Strategy for topic analysis
   strategy = CosineStrategy(
       # ...
   )
   ```

If fit_markdown is requested but not available (no markdown generator or content filter), the strategy falls back accordingly.

## Best Practices

1. **Choose the Right Strategy**
   - Start with CSS for structured data
   - Use LLM for complex interpretation
   - Try Cosine for content relevance

2. **Optimize Performance**
   - Cache LLM results
   - Keep CSS selectors specific
   - Tune similarity thresholds

3. **Handle Errors**

   ```python
   result = await crawler.arun(
       url="https://example.com",
       # ...
   )
   ```
@@ -1,113 +1,93 @@
|
||||
# Crawl4AI
|
||||
# 🚀🤖 Crawl4AI: Open-Source LLM-Friendly Web Crawler & Scraper
|
||||
|
||||
Welcome to the official documentation for Crawl4AI! 🕷️🤖 Crawl4AI is an open-source Python library designed to simplify web crawling and extract useful information from web pages. This documentation will guide you through the features, usage, and customization of Crawl4AI.
|
||||
<div align="center">
|
||||
|
||||
## Introduction
|
||||
<a href="https://trendshift.io/repositories/11716" target="_blank"><img src="https://trendshift.io/api/badge/repositories/11716" alt="unclecode%2Fcrawl4ai | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
|
||||
|
||||
Crawl4AI has one clear task: to make crawling and data extraction from web pages easy and efficient, especially for large language models (LLMs) and AI applications. Whether you are using it as a REST API or a Python library, Crawl4AI offers a robust and flexible solution with full asynchronous support.
|
||||
</div>
|
||||
|
||||
## Quick Start
|
||||
[](https://github.com/unclecode/crawl4ai/stargazers)
|
||||
[](https://github.com/unclecode/crawl4ai/network/members)
|
||||
[](https://badge.fury.io/py/crawl4ai)
|
||||
[](https://pypi.org/project/crawl4ai/)
|
||||
[](https://pepy.tech/project/crawl4ai)
|
||||
[](https://github.com/unclecode/crawl4ai/blob/main/LICENSE)
|
||||
[](https://github.com/psf/black)
|
||||
[](https://github.com/PyCQA/bandit)
|
||||
|
||||
Here's a quick example to show you how easy it is to use Crawl4AI with its asynchronous capabilities:
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
from crawl4ai import AsyncWebCrawler
|
||||
Crawl4AI is the #1 trending GitHub repository, actively maintained by a vibrant community. It delivers blazing-fast, AI-ready web crawling tailored for large language models, AI agents, and data pipelines. Fully open source, flexible, and built for real-time performance, **Crawl4AI** empowers developers with unmatched speed, precision, and deployment ease.
|
||||
|
||||
async def main():
|
||||
# Create an instance of AsyncWebCrawler
|
||||
async with AsyncWebCrawler(verbose=True) as crawler:
|
||||
# Run the crawler on a URL
|
||||
result = await crawler.arun(url="https://www.nbcnews.com/business")
|
||||
---
|
||||
|
||||
# Print the extracted content
|
||||
print(result.markdown)
|
||||
## My Personal Journey
|
||||
|
||||
# Run the async main function
|
||||
asyncio.run(main())
|
||||
```
|
||||
I’ve always loved exploring the web development, back from when HTML and JavaScript were hardly intertwined. My curiosity drove me into web development, mathematics, AI, and machine learning, always keeping a close tie to real industrial applications. In 2009–2010, as a postgraduate student, I created platforms to gather and organize published papers for Master’s and PhD researchers. Faced with post-grad students’ data challenges, I built a helper app to crawl newly published papers and public data. Relying on Internet Explorer and DLL hacks was far more cumbersome than modern tools, highlighting my longtime background in data extraction.
|
||||
|
||||
## Key Features ✨
|
||||
Fast-forward to 2023: I needed to fetch web data and transform it into neat **markdown** for my AI pipeline. All solutions I found were either **closed-source**, overpriced, or produced low-quality output. As someone who has built large edu-tech ventures (like KidoCode), I believe **data belongs to the people**. We shouldn’t pay $16 just to parse the web’s publicly available content. This friction led me to create my own library, **Crawl4AI**, in a matter of days to meet my immediate needs. Unexpectedly, it went **viral**, accumulating thousands of GitHub stars.
|
||||
|
||||
- 🆓 Completely free and open-source
- 🚀 Blazing fast performance, outperforming many paid services
- 🤖 LLM-friendly output formats (JSON, cleaned HTML, markdown)
- 📄 Fit markdown generation for extracting main article content
- 🌐 Multi-browser support (Chromium, Firefox, WebKit)
- 🌍 Supports crawling multiple URLs simultaneously
- 🎨 Extracts and returns all media tags (Images, Audio, and Video)
- 🔗 Extracts all external and internal links
- 📚 Extracts metadata from the page
- 🔄 Custom hooks for authentication, headers, and page modifications
- 🕵️ User-agent customization
- 🖼️ Takes screenshots of pages with enhanced error handling
- 📜 Executes multiple custom JavaScripts before crawling
- 📊 Generates structured output without LLM using JsonCssExtractionStrategy
- 📚 Various chunking strategies: topic-based, regex, sentence, and more
- 🧠 Advanced extraction strategies: cosine clustering, LLM, and more
- 🎯 CSS selector support for precise data extraction
- 📝 Passes instructions/keywords to refine extraction
- 🔒 Proxy support with authentication for enhanced access
- 🔄 Session management for complex multi-page crawling
- 🌐 Asynchronous architecture for improved performance
- 🖼️ Improved image processing with lazy-loading detection
- 🕰️ Enhanced handling of delayed content loading
- 🔑 Custom headers support for LLM interactions
- 🖼️ iframe content extraction for comprehensive analysis
- ⏱️ Flexible timeout and delayed content retrieval options
Now, in **January 2025**, Crawl4AI has surpassed **21,000 stars** and remains the #1 trending repository. It’s my way of giving back to the community after benefiting from open source for years. I’m thrilled by how many of you share that passion. Thank you for being here. Join our Discord, file issues, submit PRs, or just spread the word. Let’s build the best data extraction, crawling, and scraping library **together**.
---
## What Does Crawl4AI Do?

Crawl4AI is a feature-rich crawler and scraper that aims to:
1. **Generate Clean Markdown**: Perfect for RAG pipelines or direct ingestion into LLMs.
2. **Structured Extraction**: Parse repeated patterns with CSS, XPath, or LLM-based extraction.
3. **Advanced Browser Control**: Hooks, proxies, stealth modes, session re-use—fine-grained control.
4. **High Performance**: Parallel crawling, chunk-based extraction, real-time use cases.
5. **Open Source**: No forced API keys, no paywalls—everyone can access their data.
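The chunk-based extraction in point 4 comes down to splitting long text into pieces before processing. A minimal sentence-based chunker in the spirit of the library's chunking strategies (an illustrative sketch, not the library's actual implementation):

```python
import re

def sentence_chunks(text: str, max_sentences: int = 2) -> list[str]:
    """Split text into sentences, then group them into fixed-size chunks."""
    # Naive sentence split on ., !, or ? followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]
    return [
        " ".join(sentences[i:i + max_sentences])
        for i in range(0, len(sentences), max_sentences)
    ]

text = "Crawl4AI is fast. It outputs markdown. Chunking helps LLMs. Short chunks fit context windows."
print(sentence_chunks(text))  # two chunks of two sentences each
```

Each chunk can then be fed to an extraction strategy or embedded independently for retrieval.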
**Core Philosophies**:

- **Democratize Data**: Free to use, transparent, and highly configurable.
- **LLM Friendly**: Minimally processed, well-structured text, images, and metadata, so AI models can easily consume it.
---
## Documentation Structure

Our documentation is organized into several sections:
### Basic Usage

- [Installation](basic/installation.md)
- [Quick Start](basic/quickstart.md)
- [Simple Crawling](basic/simple-crawling.md)
- [Browser Configuration](basic/browser-config.md)
- [Content Selection](basic/content-selection.md)
- [Output Formats](basic/output-formats.md)
- [Page Interaction](basic/page-interaction.md)
- **Setup & Installation**
  Basic instructions to install Crawl4AI via pip or Docker.
- **Quick Start**
  A hands-on introduction showing how to do your first crawl, generate Markdown, and do a simple extraction.
- **Core**
  Deeper guides on single-page crawling, advanced browser/crawler parameters, content filtering, and caching.
- **Advanced**
  Explore link & media handling, lazy loading, hooking & authentication, proxies, session management, and more.
- **Extraction**
  Detailed references for no-LLM (CSS, XPath) vs. LLM-based strategies, chunking, and clustering approaches.
- **API Reference**
  Find the technical specifics of each class and method, including `AsyncWebCrawler`, `arun()`, and `CrawlResult`.
### Advanced Features

- [Magic Mode](advanced/magic-mode.md)
- [Session Management](advanced/session-management.md)
- [Hooks & Authentication](advanced/hooks-auth.md)
- [Proxy & Security](advanced/proxy-security.md)
- [Content Processing](advanced/content-processing.md)

Throughout these sections, you’ll find code samples you can **copy-paste** into your environment. If something is missing or unclear, raise an issue or PR.
### Extraction & Processing

- [Extraction Strategies Overview](extraction/overview.md)
- [LLM Integration](extraction/llm.md)
- [CSS-Based Extraction](extraction/css.md)
- [Cosine Strategy](extraction/cosine.md)
- [Chunking Strategies](extraction/chunking.md)

---
### API Reference

- [AsyncWebCrawler](api/async-webcrawler.md)
- [CrawlResult](api/crawl-result.md)
- [Extraction Strategies](api/strategies.md)
- [arun() Method Parameters](api/arun.md)
### Examples

- Coming soon!

## How You Can Support

- **Star & Fork**: If you find Crawl4AI helpful, star the repo on GitHub or fork it to add your own features.
- **File Issues**: Encounter a bug or missing feature? Let us know by filing an issue, so we can improve.
- **Pull Requests**: Whether it’s a small fix, a big feature, or better docs, contributions are always welcome.
- **Join Discord**: Come chat about web scraping, crawling tips, or AI workflows with the community.
- **Spread the Word**: Mention Crawl4AI in your blog posts, talks, or on social media.
## Getting Started

**Our mission**: to empower everyone—students, researchers, entrepreneurs, data scientists—to access, parse, and shape the world’s data with speed, cost-efficiency, and creative freedom.

1. Install Crawl4AI:

   ```bash
   pip install crawl4ai
   ```

2. Check out our [Quick Start Guide](core/quickstart.md) to begin crawling web pages.

3. Explore our [examples](https://github.com/unclecode/crawl4ai/tree/main/docs/examples) to see Crawl4AI in action.

---

## Quick Links
- **[GitHub Repo](https://github.com/unclecode/crawl4ai)**
- **[Installation Guide](./core/installation.md)**
- **[Quick Start](./core/quickstart.md)**
- **[API Reference](./api/async-webcrawler.md)**
- **[Changelog](https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md)**
## Support

For questions, suggestions, or issues:

- GitHub Issues: [Report a Bug](https://github.com/unclecode/crawl4ai/issues)
- Twitter: [@unclecode](https://twitter.com/unclecode)
- Website: [crawl4ai.com](https://crawl4ai.com)

Thank you for joining me on this journey. Let’s keep building an **open, democratic** approach to data extraction and AI together.

Happy Crawling! 🕸️🚀

— *Unclecode, Founder & Maintainer of Crawl4AI*
# Crawl4AI
## Episode 1: Introduction to Crawl4AI and Basic Installation

### Quick Intro

Walk through installation from PyPI, setup, and verification. Show how to install with options like `torch` or `transformer` for advanced capabilities.

Here's a condensed outline of the **Installation and Setup** video content:

---

1) **Introduction to Crawl4AI**: Briefly explain that Crawl4AI is a powerful tool for web scraping, data extraction, and content processing, with customizable options for various needs.

2) **Installation Overview**:

   - **Basic Install**: Run `pip install crawl4ai` and `playwright install` (to set up browser dependencies).
   - **Optional Advanced Installs**:
     - `pip install crawl4ai[torch]` - Adds PyTorch for clustering.
     - `pip install crawl4ai[transformer]` - Adds support for LLM-based extraction.
     - `pip install crawl4ai[all]` - Installs all features for complete functionality.

3) **Verifying the Installation**:

   - Walk through a simple test script to confirm the setup:

     ```python
     import asyncio
     from crawl4ai import AsyncWebCrawler

     async def main():
         async with AsyncWebCrawler(verbose=True) as crawler:
             result = await crawler.arun(url="https://www.example.com")
             print(result.markdown[:500])  # Show first 500 characters

     asyncio.run(main())
     ```

   - Explain that this script initializes the crawler and runs it on a test URL, displaying part of the extracted content to verify functionality.

4) **Important Tips**:

   - **Run** `playwright install` **after installation** to set up dependencies.
   - **For full performance** on text-related tasks, run `crawl4ai-download-models` after installing with `[torch]`, `[transformer]`, or `[all]` options.
   - If you encounter issues, refer to the documentation or GitHub issues.

5) **Wrap Up**:

   - Introduce the next topic in the series, which will cover Crawl4AI's browser configuration options (like choosing between `chromium`, `firefox`, and `webkit`).

---

This structure provides a concise, effective guide to get viewers up and running with Crawl4AI in minutes.
## Episode 2: Overview of Advanced Features

### Quick Intro

A general overview of advanced features like hooks, CSS selectors, and JSON CSS extraction.

Here's a condensed outline for an **Overview of Advanced Features** video covering Crawl4AI's powerful customization and extraction options:

---

### **Overview of Advanced Features**

1) **Introduction to Advanced Features**:

   - Briefly introduce Crawl4AI’s advanced tools, which let users go beyond basic crawling to customize and fine-tune their scraping workflows.

2) **Taking Screenshots**:

   - Explain the screenshot capability for capturing page state and verifying content.
   - **Example**:

     ```python
     result = await crawler.arun(url="https://www.example.com", screenshot=True)
     ```

   - Mention that screenshots are saved as a base64 string in `result`, allowing easy decoding and saving.
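Decoding that base64 string back into an image file takes only a few lines. A minimal helper (the `result.screenshot` attribute in the usage comment follows the library's result object; the helper itself is plain stdlib):

```python
import base64

def save_screenshot(b64_data: str, path: str) -> int:
    """Decode a base64-encoded screenshot and write it to disk.

    Returns the number of bytes written.
    """
    raw = base64.b64decode(b64_data)
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)

# Usage (assuming `result` came from crawler.arun(..., screenshot=True)):
# save_screenshot(result.screenshot, "page.png")
```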
3) **Media and Link Extraction**:

   - Demonstrate how to pull all media (images, videos) and links (internal and external) from a page for deeper analysis or content gathering.
   - **Example**:

     ```python
     result = await crawler.arun(url="https://www.example.com")
     print("Media:", result.media)
     print("Links:", result.links)
     ```

4) **Custom User Agent**:

   - Show how to set a custom user agent to disguise the crawler or simulate specific devices/browsers.
   - **Example**:

     ```python
     result = await crawler.arun(
         url="https://www.example.com",
         user_agent="Mozilla/5.0 (compatible; MyCrawler/1.0)"
     )
     ```

5) **Custom Hooks for Enhanced Control**:

   - Briefly cover how to use hooks, which allow custom actions like setting headers or handling login during the crawl.
   - **Example**: Setting a custom header with the `before_get_url` hook.

     ```python
     async def before_get_url(page):
         await page.set_extra_http_headers({"X-Test-Header": "test"})
     ```
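Conceptually, hooks are a registry of named async callbacks that the crawler awaits at fixed points in the page lifecycle. A minimal sketch of that pattern (illustrative only, not Crawl4AI's actual internals; the hook here mutates a plain dict instead of a real page object):

```python
import asyncio

class HookRegistry:
    """Minimal async hook registry: register named callbacks, fire them later."""

    def __init__(self):
        self._hooks = {}

    def set_hook(self, name, callback):
        self._hooks[name] = callback

    async def fire(self, name, *args):
        hook = self._hooks.get(name)
        if hook is not None:
            return await hook(*args)

async def demo():
    registry = HookRegistry()
    headers = {}

    async def before_get_url(page_headers):
        # Stands in for page.set_extra_http_headers(...)
        page_headers["X-Test-Header"] = "test"

    registry.set_hook("before_get_url", before_get_url)
    await registry.fire("before_get_url", headers)  # fired just before fetching
    return headers

print(asyncio.run(demo()))  # {'X-Test-Header': 'test'}
```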
6) **CSS Selectors for Targeted Extraction**:

   - Explain the use of CSS selectors to extract specific elements, ideal for structured data like articles or product details.
   - **Example**:

     ```python
     result = await crawler.arun(url="https://www.example.com", css_selector="h2")
     print("H2 Tags:", result.extracted_content)
     ```

7) **Crawling Inside Iframes**:

   - Mention how enabling `process_iframes=True` allows extracting content within iframes, useful for sites with embedded content or ads.
   - **Example**:

     ```python
     result = await crawler.arun(url="https://www.example.com", process_iframes=True)
     ```

8) **Wrap-Up**:

   - Summarize these advanced features and how they allow users to customize every part of their web scraping experience.
   - Tease upcoming videos where each feature will be explored in detail.

---

This covers each advanced feature with a brief example, providing a useful overview to prepare viewers for the more in-depth videos.
## Episode 3: Browser Configurations & Headless Crawling

### Quick Intro

Explain browser options (`chromium`, `firefox`, `webkit`) and settings for headless mode, caching, and verbose logging.

Here’s a streamlined outline for the **Browser Configurations & Headless Crawling** video:

---

### **Browser Configurations & Headless Crawling**

1) **Overview of Browser Options**:

   - Crawl4AI supports three browser engines:
     - **Chromium** (default) - Highly compatible.
     - **Firefox** - Great for specialized use cases.
     - **WebKit** - Lightweight, ideal for basic needs.
   - **Example**:

     ```python
     # Using Chromium (default)
     crawler = AsyncWebCrawler(browser_type="chromium")

     # Using Firefox
     crawler = AsyncWebCrawler(browser_type="firefox")

     # Using WebKit
     crawler = AsyncWebCrawler(browser_type="webkit")
     ```

2) **Headless Mode**:

   - Headless mode runs the browser without a visible GUI, making it faster and less resource-intensive.
   - To enable or disable:

     ```python
     # Headless mode (default is True)
     crawler = AsyncWebCrawler(headless=True)

     # Disable headless mode for debugging
     crawler = AsyncWebCrawler(headless=False)
     ```

3) **Verbose Logging**:

   - Use `verbose=True` to get detailed logs for each action, useful for debugging:

     ```python
     crawler = AsyncWebCrawler(verbose=True)
     ```

4) **Running a Basic Crawl with Configuration**:

   - Example of a simple crawl with custom browser settings:

     ```python
     async with AsyncWebCrawler(browser_type="firefox", headless=True, verbose=True) as crawler:
         result = await crawler.arun(url="https://www.example.com")
         print(result.markdown[:500])  # Show first 500 characters
     ```

   - This example uses Firefox in headless mode with logging enabled, demonstrating the flexibility of Crawl4AI’s setup.

5) **Recap & Next Steps**:

   - Recap the power of selecting different browsers and running headless mode for speed and efficiency.
   - Tease the next video: **Proxy & Security Settings** for navigating blocked or restricted content and protecting IP identity.

---

This breakdown covers browser configuration essentials in Crawl4AI, providing users with practical steps to optimize their scraping setup.
## Episode 4: Advanced Proxy and Security Settings

### Quick Intro

Showcase proxy configurations (HTTP, SOCKS5, authenticated proxies). Demo: Use rotating proxies and set custom headers to avoid IP blocking and enhance security.

Here’s a focused outline for the **Proxy and Security Settings** video:

---

### **Proxy & Security Settings**

1) **Why Use Proxies in Web Crawling**:

   - Proxies are essential for bypassing IP-based restrictions, improving anonymity, and managing rate limits.
   - Crawl4AI supports simple proxies, authenticated proxies, and proxy rotation for robust web scraping.

2) **Basic Proxy Setup**:

   - **Using a Simple Proxy**:

     ```python
     # HTTP proxy
     crawler = AsyncWebCrawler(proxy="http://proxy.example.com:8080")

     # SOCKS proxy
     crawler = AsyncWebCrawler(proxy="socks5://proxy.example.com:1080")
     ```

3) **Authenticated Proxies**:

   - Use `proxy_config` for proxies requiring a username and password:

     ```python
     proxy_config = {
         "server": "http://proxy.example.com:8080",
         "username": "user",
         "password": "pass"
     }
     crawler = AsyncWebCrawler(proxy_config=proxy_config)
     ```

4) **Rotating Proxies**:

   - Rotating proxies help avoid IP bans by switching IP addresses for each request:

     ```python
     async def get_next_proxy():
         # Define proxy rotation logic here
         return {"server": "http://next.proxy.com:8080"}

     async with AsyncWebCrawler() as crawler:
         for url in urls:
             proxy = await get_next_proxy()
             crawler.update_proxy(proxy)
             result = await crawler.arun(url=url)
     ```

   - This setup switches the proxy for each request, enhancing security and access.

5) **Custom Headers for Additional Security**:

   - Set custom headers to mask the crawler’s identity and avoid detection:

     ```python
     headers = {
         "X-Forwarded-For": "203.0.113.195",
         "Accept-Language": "en-US,en;q=0.9",
         "Cache-Control": "no-cache",
         "Pragma": "no-cache"
     }
     crawler = AsyncWebCrawler(headers=headers)
     ```

6) **Combining Proxies with Magic Mode for Anti-Bot Protection**:

   - For sites with aggressive bot detection, combine `proxy` settings with `magic=True`:

     ```python
     async with AsyncWebCrawler(
         proxy="http://proxy.example.com:8080",
         headers={"Accept-Language": "en-US"}
     ) as crawler:
         result = await crawler.arun(
             url="https://example.com",
             magic=True  # Enables anti-detection features
         )
     ```

   - **Magic Mode** automatically enables user simulation, random timing, and browser property masking.

7) **Wrap Up & Next Steps**:

   - Summarize the importance of proxies and anti-detection in accessing restricted content and avoiding bans.
   - Tease the next video: **JavaScript Execution and Handling Dynamic Content** for working with interactive and dynamically loaded pages.

---

This outline provides a practical guide to setting up proxies and security configurations, empowering users to navigate restricted sites while staying undetected.
## Episode 5: JavaScript Execution and Dynamic Content Handling

### Quick Intro

Explain JavaScript code injection with examples (e.g., simulating scrolling, clicking ‘load more’). Demo: Extract content from a page that uses dynamic loading with lazy-loaded images.

Here’s a focused outline for the **JavaScript Execution and Dynamic Content Handling** video:

---

### **JavaScript Execution & Dynamic Content Handling**

1) **Why JavaScript Execution Matters**:

   - Many modern websites load content dynamically via JavaScript, requiring special handling to access all elements.
   - Crawl4AI can execute JavaScript on pages, enabling it to interact with elements like “load more” buttons, infinite scrolls, and content that appears only after certain actions.

2) **Basic JavaScript Execution**:

   - Use `js_code` to execute JavaScript commands on a page:

     ```python
     # Scroll to bottom of the page
     result = await crawler.arun(
         url="https://example.com",
         js_code="window.scrollTo(0, document.body.scrollHeight);"
     )
     ```

   - This command scrolls to the bottom, triggering any lazy-loaded or dynamically added content.

3) **Multiple Commands & Simulating Clicks**:

   - Combine multiple JavaScript commands to interact with elements like “load more” buttons:

     ```python
     js_commands = [
         "window.scrollTo(0, document.body.scrollHeight);",
         "document.querySelector('.load-more').click();"
     ]
     result = await crawler.arun(
         url="https://example.com",
         js_code=js_commands
     )
     ```

   - This script scrolls down and then clicks the “load more” button, useful for loading additional content blocks.

4) **Waiting for Dynamic Content**:

   - Use `wait_for` to ensure the page loads specific elements before proceeding:

     ```python
     result = await crawler.arun(
         url="https://example.com",
         js_code="window.scrollTo(0, document.body.scrollHeight);",
         wait_for="css:.dynamic-content"  # Wait for elements with class `.dynamic-content`
     )
     ```

   - This example waits until elements with `.dynamic-content` are loaded, helping to capture content that appears after JavaScript actions.
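Conceptually, `wait_for` is a poll-until-true loop with a timeout. A minimal, library-agnostic sketch of that idea (illustrative only, not Crawl4AI's actual implementation; the in-memory `items` list stands in for DOM nodes):

```python
import asyncio

async def wait_for_condition(predicate, timeout=5.0, interval=0.01):
    """Poll an async predicate until it returns True, or raise on timeout."""
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while not await predicate():
        if loop.time() >= deadline:
            raise TimeoutError("condition not met within timeout")
        await asyncio.sleep(interval)

async def demo():
    items = []

    async def producer():
        # Simulates a page appending ".item" nodes over time.
        for i in range(12):
            await asyncio.sleep(0.01)
            items.append(i)

    async def enough_items():
        # Mirrors wait_for="js:() => document.querySelectorAll('.item').length > 10"
        return len(items) > 10

    task = asyncio.create_task(producer())
    await wait_for_condition(enough_items)
    await task
    return len(items)

print(asyncio.run(demo()))  # 12
```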
5) **Handling Complex Dynamic Content (e.g., Infinite Scroll)**:

   - Combine JavaScript execution with conditional waiting to handle infinite scrolls or paginated content:

     ```python
     result = await crawler.arun(
         url="https://example.com",
         js_code=[
             "window.scrollTo(0, document.body.scrollHeight);",
             "const loadMore = document.querySelector('.load-more'); if (loadMore) loadMore.click();"
         ],
         wait_for="js:() => document.querySelectorAll('.item').length > 10"  # Wait until more than 10 items are loaded
     )
     ```

   - This example scrolls and clicks "load more" repeatedly, waiting each time for a specified number of items to load.

6) **Complete Example: Dynamic Content Handling with Extraction**:

   - Full example demonstrating a dynamic load and content extraction in one process:

     ```python
     async with AsyncWebCrawler() as crawler:
         result = await crawler.arun(
             url="https://example.com",
             js_code=[
                 "window.scrollTo(0, document.body.scrollHeight);",
                 "document.querySelector('.load-more').click();"
             ],
             wait_for="css:.main-content",
             css_selector=".main-content"
         )
         print(result.markdown[:500])  # Output the main content extracted
     ```

7) **Wrap Up & Next Steps**:

   - Recap how JavaScript execution allows access to dynamic content, enabling powerful interactions.
   - Tease the next video: **Content Cleaning and Fit Markdown** to show how Crawl4AI can extract only the most relevant content from complex pages.

---

This outline explains how to handle dynamic content and JavaScript-based interactions effectively, enabling users to scrape and interact with complex, modern websites.
## Episode 6: Magic Mode and Anti-Bot Protection

### Quick Intro

Highlight `Magic Mode` and anti-bot features like user simulation, navigator overrides, and timing randomization. Demo: Access a site with anti-bot protection and show how `Magic Mode` seamlessly handles it.

Here’s a concise outline for the **Magic Mode and Anti-Bot Protection** video:

---

### **Magic Mode & Anti-Bot Protection**

1) **Why Anti-Bot Protection is Important**:

   - Many websites use bot detection mechanisms to block automated scraping. Crawl4AI’s anti-detection features help avoid IP bans, CAPTCHAs, and access restrictions.
   - **Magic Mode** is a one-step solution to enable a range of anti-bot features without complex configuration.

2) **Enabling Magic Mode**:

   - Simply set `magic=True` to activate Crawl4AI’s full anti-bot suite:

     ```python
     result = await crawler.arun(
         url="https://example.com",
         magic=True  # Enables all anti-detection features
     )
     ```

   - This enables a blend of stealth techniques, including masking automation signals, randomizing timings, and simulating real user behavior.

3) **What Magic Mode Does Behind the Scenes**:

   - **User Simulation**: Mimics human actions like mouse movements and scrolling.
   - **Navigator Overrides**: Hides signals that indicate an automated browser.
   - **Timing Randomization**: Adds random delays to simulate natural interaction patterns.
   - **Cookie Handling**: Accepts and manages cookies dynamically to avoid triggers from cookie pop-ups.
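Timing randomization is easy to picture: instead of acting at fixed intervals, each delay is jittered within a range so interaction timing looks organic. A small illustrative sketch (not Crawl4AI's actual code):

```python
import random

def jittered_delay(base: float = 1.0, jitter: float = 0.5) -> float:
    """Return a randomized delay around `base`, within [base - jitter, base + jitter]."""
    return base + random.uniform(-jitter, jitter)

# Five successive "think times" between simulated user actions:
delays = [round(jittered_delay(1.0, 0.5), 3) for _ in range(5)]
print(delays)  # five values, each between 0.5 and 1.5 seconds
```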
4) **Manual Anti-Bot Options (If Not Using Magic Mode)**:

   - For granular control, you can configure individual settings without Magic Mode:

     ```python
     result = await crawler.arun(
         url="https://example.com",
         simulate_user=True,      # Enables human-like behavior
         override_navigator=True  # Hides automation fingerprints
     )
     ```

   - **Use Cases**: This approach allows more specific adjustments when certain anti-bot features are needed but others are not.

5) **Combining Proxies with Magic Mode**:

   - To avoid rate limits or IP blocks, combine Magic Mode with a proxy:

     ```python
     async with AsyncWebCrawler(
         proxy="http://proxy.example.com:8080",
         headers={"Accept-Language": "en-US"}
     ) as crawler:
         result = await crawler.arun(
             url="https://example.com",
             magic=True  # Full anti-detection
         )
     ```

   - This setup maximizes stealth by pairing anti-bot detection with IP obfuscation.

6) **Example of Anti-Bot Protection in Action**:

   - Full example with Magic Mode and proxies to scrape a protected page:

     ```python
     async with AsyncWebCrawler() as crawler:
         result = await crawler.arun(
             url="https://example.com/protected-content",
             magic=True,
             proxy="http://proxy.example.com:8080",
             wait_for="css:.content-loaded"  # Wait for the main content to load
         )
         print(result.markdown[:500])  # Display first 500 characters of the content
     ```

   - This example ensures seamless access to protected content by combining anti-detection and waiting for full content load.

7) **Wrap Up & Next Steps**:

   - Recap the power of Magic Mode and anti-bot features for handling restricted websites.
   - Tease the next video: **Content Cleaning and Fit Markdown** to show how to extract clean and focused content from a page.

---

This outline shows users how to easily avoid bot detection and access restricted content, demonstrating both the power and simplicity of Magic Mode in Crawl4AI.
## Episode 7: Content Cleaning and Fit Markdown

### Quick Intro

Explain content cleaning options, including `fit_markdown` to keep only the most relevant content. Demo: Extract and compare regular vs. fit markdown from a news site or blog.

Here’s a streamlined outline for the **Content Cleaning and Fit Markdown** video:

---

### **Content Cleaning & Fit Markdown**

1) **Overview of Content Cleaning in Crawl4AI**:

   - Explain that web pages often include extra elements like ads, navigation bars, footers, and popups.
   - Crawl4AI’s content cleaning features help extract only the main content, reducing noise and enhancing readability.

2) **Basic Content Cleaning Options**:

   - **Removing Unwanted Elements**: Exclude specific HTML tags, like forms or navigation bars:

     ```python
     result = await crawler.arun(
         url="https://example.com",
         word_count_threshold=10,        # Filter out blocks with fewer than 10 words
         excluded_tags=['form', 'nav'],  # Exclude specific tags
         remove_overlay_elements=True    # Remove popups and modals
     )
     ```

   - This example extracts content while excluding forms, navigation, and modal overlays, ensuring clean results.

3) **Fit Markdown for Main Content Extraction**:

   - **What is Fit Markdown**: Uses advanced analysis to identify the most relevant content (ideal for articles, blogs, and documentation).
   - **How it Works**: Analyzes content density, removes boilerplate elements, and maintains formatting for a clear output.
   - **Example**:

     ```python
     result = await crawler.arun(url="https://example.com")
     main_content = result.fit_markdown  # Extracted main content
     print(main_content[:500])  # Display first 500 characters
     ```

   - Fit Markdown is especially helpful for long-form content like news articles or blog posts.

4) **Comparing Fit Markdown with Regular Markdown**:

   - **Fit Markdown** returns the primary content without extraneous elements.
   - **Regular Markdown** includes all extracted text in markdown format.
   - Example to show the difference:

     ```python
     all_content = result.markdown       # Full markdown
     main_content = result.fit_markdown  # Only the main content

     print(f"All Content Length: {len(all_content)}")
     print(f"Main Content Length: {len(main_content)}")
     ```

   - This comparison shows the effectiveness of Fit Markdown in focusing on essential content.

5) **Media and Metadata Handling with Content Cleaning**:

   - **Media Extraction**: Crawl4AI captures images and videos with metadata like alt text, descriptions, and relevance scores:

     ```python
     for image in result.media["images"]:
         print(f"Source: {image['src']}, Alt Text: {image['alt']}, Relevance Score: {image['score']}")
     ```

   - **Use Case**: Useful for saving only relevant images or videos from an article or content-heavy page.

6) **Example of Clean Content Extraction in Action**:

   - Full example extracting cleaned content and Fit Markdown:

     ```python
     async with AsyncWebCrawler() as crawler:
         result = await crawler.arun(
             url="https://example.com",
             word_count_threshold=10,
             excluded_tags=['nav', 'footer'],
             remove_overlay_elements=True
         )
         print(result.fit_markdown[:500])  # Show main content
     ```

   - This example demonstrates content cleaning with settings for filtering noise and focusing on the core text.

7) **Wrap Up & Next Steps**:

   - Summarize the power of Crawl4AI’s content cleaning features and Fit Markdown for capturing clean, relevant content.
   - Tease the next video: **Link Analysis and Smart Filtering** to focus on analyzing and filtering links within crawled pages.

---

This outline covers Crawl4AI’s content cleaning features and the unique benefits of Fit Markdown, showing users how to retrieve focused, high-quality content from web pages.
## Episode 8: Media Handling: Images, Videos, and Audio

### Quick Intro

Showcase Crawl4AI’s media extraction capabilities, including lazy-loaded media and metadata. Demo: Crawl a multimedia page, extract images, and show metadata (alt text, context, relevance score).

Here’s a clear and focused outline for the **Media Handling: Images, Videos, and Audio** video:

---

### **Media Handling: Images, Videos, and Audio**

1) **Overview of Media Extraction in Crawl4AI**:

   - Crawl4AI can detect and extract different types of media (images, videos, and audio) along with useful metadata.
   - This functionality is essential for gathering visual content from multimedia-heavy pages like e-commerce sites, news articles, and social media feeds.

2) **Image Extraction and Metadata**:

   - Crawl4AI captures images with detailed metadata, including:
     - **Source URL**: The direct URL to the image.
     - **Alt Text**: Image description if available.
     - **Relevance Score**: A score (0–10) indicating how relevant the image is to the main content.
     - **Context**: Text surrounding the image on the page.
   - **Example**:

     ```python
     result = await crawler.arun(url="https://example.com")

     for image in result.media["images"]:
         print(f"Source: {image['src']}")
         print(f"Alt Text: {image['alt']}")
         print(f"Relevance Score: {image['score']}")
         print(f"Context: {image['context']}")
     ```

   - This example shows how to access each image’s metadata, making it easy to filter for the most relevant visuals.
|
||||
|
||||
3) **Handling Lazy-Loaded Images**:
|
||||
|
||||
- Crawl4AI automatically supports lazy-loaded images, which are commonly used to optimize webpage loading.
|
||||
- **Example with Wait for Lazy-Loaded Content**:
|
||||
```python
|
||||
result = await crawler.arun(
|
||||
url="https://example.com",
|
||||
wait_for="css:img[data-src]", # Wait for lazy-loaded images
|
||||
delay_before_return_html=2.0 # Allow extra time for images to load
|
||||
)
|
||||
```
|
||||
- This setup waits for lazy-loaded images to appear, ensuring they are fully captured.
|
||||
|
||||
4) **Video Extraction and Metadata**:
|
||||
|
||||
- Crawl4AI captures video elements, including:
|
||||
- **Source URL**: The video’s direct URL.
|
||||
- **Type**: Format of the video (e.g., MP4).
|
||||
- **Thumbnail**: A poster or thumbnail image if available.
|
||||
- **Duration**: Video length, if metadata is provided.
|
||||
- **Example**:
|
||||
```python
|
||||
for video in result.media["videos"]:
|
||||
print(f"Video Source: {video['src']}")
|
||||
print(f"Type: {video['type']}")
|
||||
print(f"Thumbnail: {video.get('poster')}")
|
||||
print(f"Duration: {video.get('duration')}")
|
||||
```
|
||||
- This allows users to gather video content and relevant details for further processing or analysis.
|
||||
|
||||
5) **Audio Extraction and Metadata**:
|
||||
|
||||
- Audio elements can also be extracted, with metadata like:
|
||||
- **Source URL**: The audio file’s direct URL.
|
||||
- **Type**: Format of the audio file (e.g., MP3).
|
||||
- **Duration**: Length of the audio, if available.
|
||||
- **Example**:
|
||||
```python
|
||||
for audio in result.media["audios"]:
|
||||
print(f"Audio Source: {audio['src']}")
|
||||
print(f"Type: {audio['type']}")
|
||||
print(f"Duration: {audio.get('duration')}")
|
||||
```
|
||||
- Useful for sites with podcasts, sound bites, or other audio content.
|
||||
|
||||
6) **Filtering Media by Relevance**:
|
||||
|
||||
- Use metadata like relevance score to filter only the most useful media content:
|
||||
```python
|
||||
relevant_images = [img for img in result.media["images"] if img['score'] > 5]
|
||||
```
|
||||
- This is especially helpful for content-heavy pages where you only want media directly related to the main content.
|
||||
|
||||
7) **Example: Full Media Extraction with Content Filtering**:
|
||||
|
||||
- Full example extracting images, videos, and audio along with filtering by relevance:
|
||||
```python
|
||||
async with AsyncWebCrawler() as crawler:
|
||||
result = await crawler.arun(
|
||||
url="https://example.com",
|
||||
word_count_threshold=10, # Filter content blocks for relevance
|
||||
exclude_external_images=True # Only keep internal images
|
||||
)
|
||||
|
||||
# Display media summaries
|
||||
print(f"Relevant Images: {len(relevant_images)}")
|
||||
print(f"Videos: {len(result.media['videos'])}")
|
||||
print(f"Audio Clips: {len(result.media['audios'])}")
|
||||
```
|
||||
- This example shows how to capture and filter various media types, focusing on what’s most relevant.
|
||||
|
||||
8) **Wrap Up & Next Steps**:
|
||||
|
||||
- Recap the comprehensive media extraction capabilities, emphasizing how metadata helps users focus on relevant content.
|
||||
- Tease the next video: **Link Analysis and Smart Filtering** to explore how Crawl4AI handles internal, external, and social media links for more focused data gathering.
|
||||
|
||||
---
|
||||
|
||||
This outline provides users with a complete guide to handling images, videos, and audio in Crawl4AI, using metadata to enhance relevance and precision in multimedia extraction.
|
||||
@@ -1,95 +0,0 @@
# Crawl4AI

## Episode 9: Link Analysis and Smart Filtering

### Quick Intro
Walk through internal and external link classification, social media link filtering, and custom domain exclusion. Demo: Analyze links on a website, focusing on internal navigation vs. external or ad links.

Here’s a focused outline for the **Link Analysis and Smart Filtering** video:

---

### **Link Analysis & Smart Filtering**

1) **Importance of Link Analysis in Web Crawling**:

   - Explain that web pages often contain numerous links, including internal links, external links, social media links, and ads.
   - Crawl4AI’s link analysis and filtering options help extract only relevant links, enabling more targeted and efficient crawls.

2) **Automatic Link Classification**:

   - Crawl4AI categorizes links automatically into internal, external, and social media links.
   - **Example**:
     ```python
     result = await crawler.arun(url="https://example.com")

     # Access internal and external links
     internal_links = result.links["internal"]
     external_links = result.links["external"]

     # Print first few links for each type
     print("Internal Links:", internal_links[:3])
     print("External Links:", external_links[:3])
     ```

3) **Filtering Out Unwanted Links**:

   - **Exclude External Links**: Remove all links pointing to external sites.
   - **Exclude Social Media Links**: Filter out social media domains like Facebook or Twitter.
   - **Example**:
     ```python
     result = await crawler.arun(
         url="https://example.com",
         exclude_external_links=True,     # Remove external links
         exclude_social_media_links=True  # Remove social media links
     )
     ```

4) **Custom Domain Filtering**:

   - **Exclude Specific Domains**: Filter links from particular domains, e.g., ad sites.
   - **Custom Social Media Domains**: Add additional social media domains if needed.
   - **Example**:
     ```python
     result = await crawler.arun(
         url="https://example.com",
         exclude_domains=["ads.com", "trackers.com"],
         exclude_social_media_domains=["facebook.com", "linkedin.com"]
     )
     ```

5) **Accessing Link Context and Metadata**:

   - Crawl4AI provides additional metadata for each link, including its text, type (e.g., navigation or content), and surrounding context.
   - **Example**:
     ```python
     for link in result.links["internal"]:
         print(f"Link: {link['href']}, Text: {link['text']}, Context: {link['context']}")
     ```
   - **Use Case**: Helps users understand the relevance of links based on where they are placed on the page (e.g., navigation vs. article content).

6) **Example of Comprehensive Link Filtering and Analysis**:

   - Full example combining link filtering, metadata access, and contextual information:
     ```python
     async with AsyncWebCrawler() as crawler:
         result = await crawler.arun(
             url="https://example.com",
             exclude_external_links=True,
             exclude_social_media_links=True,
             exclude_domains=["ads.com"],
             css_selector=".main-content"  # Focus only on main content area
         )
         for link in result.links["internal"]:
             print(f"Internal Link: {link['href']}, Text: {link['text']}, Context: {link['context']}")
     ```
   - This example filters unnecessary links, keeping only internal and relevant links from the main content area.

7) **Wrap Up & Next Steps**:

   - Summarize the benefits of link filtering for efficient crawling and relevant content extraction.
   - Tease the next video: **Custom Headers, Identity Management, and User Simulation** to explain how to configure identity settings and simulate user behavior for stealthier crawls.

---

This outline provides a practical overview of Crawl4AI’s link analysis and filtering features, helping users target only essential links while eliminating distractions.
@@ -1,93 +0,0 @@
# Crawl4AI

## Episode 10: Custom Headers, Identity, and User Simulation

### Quick Intro
Teach how to use custom headers, user-agent strings, and simulate real user interactions. Demo: Set custom user-agent and headers to access a site that blocks typical crawlers.

Here’s a concise outline for the **Custom Headers, Identity Management, and User Simulation** video:

---

### **Custom Headers, Identity Management, & User Simulation**

1) **Why Customize Headers and Identity in Crawling**:

   - Websites often track request headers and browser properties to detect bots. Customizing headers and managing identity help make requests appear more human, improving access to restricted sites.

2) **Setting Custom Headers**:

   - Customize HTTP headers to mimic genuine browser requests or meet site-specific requirements:
     ```python
     headers = {
         "Accept-Language": "en-US,en;q=0.9",
         "X-Requested-With": "XMLHttpRequest",
         "Cache-Control": "no-cache"
     }
     crawler = AsyncWebCrawler(headers=headers)
     ```
   - **Use Case**: Customize the `Accept-Language` header to simulate local user settings, or `Cache-Control` to bypass cache for fresh content.

3) **Setting a Custom User Agent**:

   - Some websites block requests from common crawler user agents. Setting a custom user-agent string helps bypass these restrictions:
     ```python
     crawler = AsyncWebCrawler(
         user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
     )
     ```
   - **Tip**: Use user-agent strings from popular browsers (e.g., Chrome, Firefox) to improve access and reduce detection risks.

4) **User Simulation for Human-like Behavior**:

   - Enable `simulate_user=True` to mimic natural user interactions, such as random timing and simulated mouse movements:
     ```python
     result = await crawler.arun(
         url="https://example.com",
         simulate_user=True  # Simulates human-like behavior
     )
     ```
   - **Behavioral Effects**: Adds subtle variations in interactions, making the crawler harder to detect on bot-protected sites.

5) **Navigator Overrides and Magic Mode for Full Identity Masking**:

   - Use `override_navigator=True` to mask automation indicators like `navigator.webdriver`, which websites check to detect bots:
     ```python
     result = await crawler.arun(
         url="https://example.com",
         override_navigator=True  # Masks bot-related signals
     )
     ```
   - **Combining with Magic Mode**: For a complete anti-bot setup, combine these identity options with `magic=True` for maximum protection:
     ```python
     async with AsyncWebCrawler() as crawler:
         result = await crawler.arun(
             url="https://example.com",
             magic=True,                # Enables all anti-bot detection features
             user_agent="Custom-Agent"  # Custom agent with Magic Mode
         )
     ```
   - This setup includes all anti-detection techniques like navigator masking, random timing, and user simulation.

6) **Example: Comprehensive Setup for Identity Management**:

   - A full example combining custom headers, user-agent, and user simulation for a realistic browsing profile:
     ```python
     async with AsyncWebCrawler(
         headers={"Accept-Language": "en-US", "Cache-Control": "no-cache"},
         user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0",
         simulate_user=True
     ) as crawler:
         result = await crawler.arun(url="https://example.com/secure-page")
         print(result.markdown[:500])  # Display extracted content
     ```
   - This example enables detailed customization for evading detection and accessing protected pages smoothly.

7) **Wrap Up & Next Steps**:

   - Recap the value of headers, user-agent customization, and simulation in bypassing bot detection.
   - Tease the next video: **Extraction Strategies: JSON CSS, LLM, and Cosine** to dive into structured data extraction methods for high-quality content retrieval.

---

This outline equips users with tools for managing crawler identity and human-like behavior, essential for accessing bot-protected or restricted websites.
@@ -1,186 +0,0 @@
Here’s a detailed outline for the **JSON-CSS Extraction Strategy** video, covering all key aspects and supported structures in Crawl4AI:

---

### **10.1 JSON-CSS Extraction Strategy**

#### **1. Introduction to JSON-CSS Extraction**
- JSON-CSS Extraction is used for pulling structured data from pages with repeated patterns, like product listings, article feeds, or directories.
- This strategy allows defining a schema with CSS selectors and data fields, making it easy to capture nested, list-based, or singular elements.

#### **2. Basic Schema Structure**
- **Schema Fields**: The schema has two main components:
  - `baseSelector`: A CSS selector to locate the main elements you want to extract (e.g., each article or product block).
  - `fields`: Defines the data fields for each element, supporting various data types and structures.

#### **3. Simple Field Extraction**
- **Example HTML**:
  ```html
  <div class="product">
    <h2 class="title">Sample Product</h2>
    <span class="price">$19.99</span>
    <p class="description">This is a sample product.</p>
  </div>
  ```
- **Schema**:
  ```python
  schema = {
      "baseSelector": ".product",
      "fields": [
          {"name": "title", "selector": ".title", "type": "text"},
          {"name": "price", "selector": ".price", "type": "text"},
          {"name": "description", "selector": ".description", "type": "text"}
      ]
  }
  ```
- **Explanation**: Each field captures text content from specified CSS selectors within each `.product` element.

#### **4. Supported Field Types: Text, Attribute, HTML, Regex**
- **Field Type Options**:
  - `text`: Extracts visible text.
  - `attribute`: Captures an HTML attribute (e.g., `src`, `href`).
  - `html`: Extracts the raw HTML of an element.
  - `regex`: Allows regex patterns to extract part of the text.

- **Example HTML** (including an image):
  ```html
  <div class="product">
    <h2 class="title">Sample Product</h2>
    <img class="product-image" src="image.jpg" alt="Product Image">
    <span class="price">$19.99</span>
    <p class="description">Limited time offer.</p>
  </div>
  ```
- **Schema**:
  ```python
  schema = {
      "baseSelector": ".product",
      "fields": [
          {"name": "title", "selector": ".title", "type": "text"},
          {"name": "image_url", "selector": ".product-image", "type": "attribute", "attribute": "src"},
          {"name": "price", "selector": ".price", "type": "regex", "pattern": r"\$(\d+\.\d+)"},
          {"name": "description_html", "selector": ".description", "type": "html"}
      ]
  }
  ```
- **Explanation**:
  - `attribute`: Extracts the `src` attribute from `.product-image`.
  - `regex`: Extracts the numeric part from `$19.99`.
  - `html`: Retrieves the full HTML of the description element.
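As a quick illustration of how a `regex` field behaves conceptually (a sketch of the idea, not Crawl4AI's internal implementation), the pattern's first capture group becomes the field value:

```python
import re

# Hypothetical text content of the .price element.
price_text = "$19.99"

# The schema's pattern r"\$(\d+\.\d+)": the first capture group is kept.
match = re.search(r"\$(\d+\.\d+)", price_text)
price = match.group(1) if match else None
print(price)  # → 19.99
```

Note that the dollar sign is matched but not captured, so only the numeric part survives into the extracted field.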
#### **5. Nested Field Extraction**
- **Use Case**: Useful when content contains sub-elements, such as an article with author details within it.
- **Example HTML**:
  ```html
  <div class="article">
    <h1 class="title">Sample Article</h1>
    <div class="author">
      <span class="name">John Doe</span>
      <span class="bio">Writer and editor</span>
    </div>
  </div>
  ```
- **Schema**:
  ```python
  schema = {
      "baseSelector": ".article",
      "fields": [
          {"name": "title", "selector": ".title", "type": "text"},
          {"name": "author", "type": "nested", "selector": ".author", "fields": [
              {"name": "name", "selector": ".name", "type": "text"},
              {"name": "bio", "selector": ".bio", "type": "text"}
          ]}
      ]
  }
  ```
- **Explanation**:
  - `nested`: Extracts `name` and `bio` within `.author`, grouping the author details in a single `author` object.

#### **6. List and Nested List Extraction**
- **List**: Extracts multiple elements matching the selector as a list.
- **Nested List**: Allows lists within lists, useful for items with sub-lists (e.g., specifications for each product).
- **Example HTML**:
  ```html
  <div class="product">
    <h2 class="title">Product with Features</h2>
    <ul class="features">
      <li class="feature">Feature 1</li>
      <li class="feature">Feature 2</li>
      <li class="feature">Feature 3</li>
    </ul>
  </div>
  ```
- **Schema**:
  ```python
  schema = {
      "baseSelector": ".product",
      "fields": [
          {"name": "title", "selector": ".title", "type": "text"},
          {"name": "features", "type": "list", "selector": ".features .feature", "fields": [
              {"name": "feature", "type": "text"}
          ]}
      ]
  }
  ```
- **Explanation**:
  - `list`: Captures each `.feature` item within `.features`, outputting an array of features under the `features` field.

#### **7. Transformations for Field Values**
- Transformations allow you to modify extracted values (e.g., converting to lowercase).
- Supported transformations: `lowercase`, `uppercase`, `strip`.
- **Example HTML**:
  ```html
  <div class="product">
    <h2 class="title">Special Product</h2>
  </div>
  ```
- **Schema**:
  ```python
  schema = {
      "baseSelector": ".product",
      "fields": [
          {"name": "title", "selector": ".title", "type": "text", "transform": "uppercase"}
      ]
  }
  ```
- **Explanation**: The `transform` property changes the `title` to uppercase, useful for standardized outputs.

#### **8. Full JSON-CSS Extraction Example**
- Combining all elements in a single schema example for a comprehensive crawl:
- **Example HTML**:
  ```html
  <div class="product">
    <h2 class="title">Featured Product</h2>
    <img class="product-image" src="product.jpg">
    <span class="price">$99.99</span>
    <p class="description">Best product of the year.</p>
    <ul class="features">
      <li class="feature">Durable</li>
      <li class="feature">Eco-friendly</li>
    </ul>
  </div>
  ```
- **Schema**:
  ```python
  schema = {
      "baseSelector": ".product",
      "fields": [
          {"name": "title", "selector": ".title", "type": "text", "transform": "uppercase"},
          {"name": "image_url", "selector": ".product-image", "type": "attribute", "attribute": "src"},
          {"name": "price", "selector": ".price", "type": "regex", "pattern": r"\$(\d+\.\d+)"},
          {"name": "description", "selector": ".description", "type": "html"},
          {"name": "features", "type": "list", "selector": ".features .feature", "fields": [
              {"name": "feature", "type": "text"}
          ]}
      ]
  }
  ```
- **Explanation**: This schema captures and transforms each aspect of the product, illustrating the JSON-CSS strategy’s versatility for structured extraction.

#### **9. Wrap Up & Next Steps**
- Summarize JSON-CSS Extraction’s flexibility for structured, pattern-based extraction.
- Tease the next video: **10.2 LLM Extraction Strategy**, focusing on using language models to extract data based on intelligent content analysis.

---

This outline covers each JSON-CSS Extraction option in Crawl4AI, with practical examples and schema configurations, making it a thorough guide for users.
@@ -1,153 +0,0 @@
# Crawl4AI

## Episode 11: Extraction Strategies: JSON CSS, LLM, and Cosine

### Quick Intro
Introduce JSON CSS Extraction Strategy for structured data, LLM Extraction Strategy for intelligent parsing, and Cosine Strategy for clustering similar content. Demo: Use JSON CSS to scrape product details from an e-commerce site.

Here’s a comprehensive outline for the **LLM Extraction Strategy** video, covering key details and example applications.

---

### **10.2 LLM Extraction Strategy**

#### **1. Introduction to LLM Extraction Strategy**
- The LLM Extraction Strategy leverages language models to interpret and extract structured data from complex web content.
- Unlike traditional CSS selectors, this strategy uses natural language instructions and schemas to guide the extraction, ideal for unstructured or diverse content.
- Supports **OpenAI**, **Azure OpenAI**, **HuggingFace**, and **Ollama** models, enabling flexibility with both proprietary and open-source providers.

#### **2. Key Components of LLM Extraction Strategy**
- **Provider**: Specifies the LLM provider (e.g., OpenAI, HuggingFace, Azure).
- **API Token**: Required for most providers, except Ollama (local LLM model).
- **Instruction**: Custom extraction instructions sent to the model, providing flexibility in how the data is structured and extracted.
- **Schema**: Optional; defines structured fields to organize extracted data into JSON format.
- **Extraction Type**: Supports `"block"` for simpler text blocks or `"schema"` when a structured output format is required.
- **Chunking Parameters**: Breaks down large documents, with options to adjust chunk size and overlap rate for more accurate extraction across lengthy texts.

#### **3. Basic Extraction Example: OpenAI Model Pricing**
- **Goal**: Extract model names and their input and output fees from the OpenAI pricing page.
- **Schema Definition**:
  - **Model Name**: Text for model identification.
  - **Input Fee**: Token cost for input processing.
  - **Output Fee**: Token cost for output generation.

- **Schema**:
  ```python
  class OpenAIModelFee(BaseModel):
      model_name: str = Field(..., description="Name of the OpenAI model.")
      input_fee: str = Field(..., description="Fee for input tokens for the OpenAI model.")
      output_fee: str = Field(..., description="Fee for output tokens for the OpenAI model.")
  ```

- **Example Code**:
  ```python
  async def extract_openai_pricing():
      async with AsyncWebCrawler() as crawler:
          result = await crawler.arun(
              url="https://openai.com/api/pricing/",
              extraction_strategy=LLMExtractionStrategy(
                  provider="openai/gpt-4o",
                  api_token=os.getenv("OPENAI_API_KEY"),
                  schema=OpenAIModelFee.schema(),
                  extraction_type="schema",
                  instruction="Extract model names and fees for input and output tokens from the page."
              ),
              cache_mode=CacheMode.BYPASS
          )
          print(result.extracted_content)
  ```

- **Explanation**:
  - The extraction strategy combines a schema and a detailed instruction to guide the LLM in capturing structured data.
  - Each model’s name, input fee, and output fee are extracted in JSON format.

#### **4. Knowledge Graph Extraction Example**
- **Goal**: Extract entities and their relationships from a document for use in a knowledge graph.
- **Schema Definition**:
  - **Entities**: Individual items with descriptions (e.g., people, organizations).
  - **Relationships**: Connections between entities, including descriptions and relationship types.

- **Schema**:
  ```python
  class Entity(BaseModel):
      name: str
      description: str

  class Relationship(BaseModel):
      entity1: Entity
      entity2: Entity
      description: str
      relation_type: str

  class KnowledgeGraph(BaseModel):
      entities: List[Entity]
      relationships: List[Relationship]
  ```

- **Example Code**:
  ```python
  async def extract_knowledge_graph():
      extraction_strategy = LLMExtractionStrategy(
          provider="azure/gpt-4o-mini",
          api_token=os.getenv("AZURE_API_KEY"),
          schema=KnowledgeGraph.schema(),
          extraction_type="schema",
          instruction="Extract entities and relationships from the content to build a knowledge graph."
      )
      async with AsyncWebCrawler() as crawler:
          result = await crawler.arun(
              url="https://example.com/some-article",
              extraction_strategy=extraction_strategy,
              cache_mode=CacheMode.BYPASS
          )
          print(result.extracted_content)
  ```

- **Explanation**:
  - In this setup, the LLM extracts entities and their relationships based on the schema and instruction.
  - The schema organizes results into a JSON-based knowledge graph format.

#### **5. Key Settings in LLM Extraction**
- **Chunking Options**:
  - For long pages, set `chunk_token_threshold` to specify the maximum token count per section.
  - Adjust `overlap_rate` to control the overlap between chunks, useful for contextual consistency.
- **Example**:
  ```python
  extraction_strategy = LLMExtractionStrategy(
      provider="openai/gpt-4",
      api_token=os.getenv("OPENAI_API_KEY"),
      chunk_token_threshold=3000,
      overlap_rate=0.2,  # 20% overlap between chunks
      instruction="Extract key insights and relationships."
  )
  ```
- This setup ensures that longer texts are divided into manageable chunks with slight overlap, enhancing the quality of extraction.
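The overlapped-chunking idea can be sketched in a few lines of plain Python (an illustrative toy, not Crawl4AI's actual chunker; the token list stands in for real tokenization). Each chunk starts before the previous one ends, so content that straddles a boundary still appears whole in some chunk:

```python
def chunk_tokens(tokens, chunk_size, overlap_rate):
    # Advance by chunk_size minus the overlapped portion.
    step = max(1, int(chunk_size * (1 - overlap_rate)))
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break  # Last chunk reached the end of the document.
    return chunks

tokens = [f"t{i}" for i in range(10)]
chunks = chunk_tokens(tokens, chunk_size=4, overlap_rate=0.25)
print(chunks)  # → [['t0', 't1', 't2', 't3'], ['t3', 't4', 't5', 't6'], ['t6', 't7', 't8', 't9']]
```

With `overlap_rate=0.25`, each 4-token chunk shares its last token with the next chunk's first, which is the consistency property the setting buys you.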
#### **6. Flexible Provider Options for LLM Extraction**
- **Using Proprietary Models**: OpenAI, Azure, and HuggingFace provide robust language models, often suited for complex or detailed extractions.
- **Using Open-Source Models**: Ollama and other open-source models can be deployed locally, suitable for offline or cost-effective extraction.
- **Example Call**:
  ```python
  await extract_structured_data_using_llm("huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct", os.getenv("HUGGINGFACE_API_KEY"))
  await extract_structured_data_using_llm("openai/gpt-4o", os.getenv("OPENAI_API_KEY"))
  await extract_structured_data_using_llm("ollama/llama3.2")
  ```

#### **7. Complete Example of LLM Extraction Setup**
- Code to run both the OpenAI pricing and knowledge graph extractions, using various providers:
  ```python
  async def main():
      await extract_openai_pricing()
      await extract_knowledge_graph()

  if __name__ == "__main__":
      asyncio.run(main())
  ```

#### **8. Wrap Up & Next Steps**
- Recap the power of LLM extraction for handling unstructured or complex data extraction tasks.
- Tease the next video: **10.3 Cosine Similarity Strategy** for clustering similar content based on semantic similarity.

---

This outline explains LLM Extraction in Crawl4AI, with examples showing how to extract structured data using custom schemas and instructions. It demonstrates flexibility with multiple providers, ensuring practical application for different use cases.
@@ -1,136 +0,0 @@
# Crawl4AI

## Episode 11: Extraction Strategies: JSON CSS, LLM, and Cosine

### Quick Intro
Introduce JSON CSS Extraction Strategy for structured data, LLM Extraction Strategy for intelligent parsing, and Cosine Strategy for clustering similar content. Demo: Use JSON CSS to scrape product details from an e-commerce site.

Here’s a structured outline for the **Cosine Similarity Strategy** video, covering key concepts, configuration, and a practical example.

---

### **10.3 Cosine Similarity Strategy**

#### **1. Introduction to Cosine Similarity Strategy**
- The Cosine Similarity Strategy clusters content by semantic similarity, offering an efficient alternative to LLM-based extraction, especially when speed is a priority.
- Ideal for grouping similar sections of text, this strategy is well-suited for pages with content sections that may need to be classified or tagged, like news articles, product descriptions, or reviews.

#### **2. Key Configuration Options**
- **semantic_filter**: A keyword-based filter to focus on relevant content.
- **word_count_threshold**: Minimum number of words per cluster, filtering out shorter, less meaningful clusters.
- **max_dist**: Maximum allowable distance between elements in clusters, impacting cluster tightness.
- **linkage_method**: Method for hierarchical clustering, such as `'ward'` (for well-separated clusters).
- **top_k**: Specifies the number of top categories for each cluster.
- **model_name**: Defines the model for embeddings, such as `sentence-transformers/all-MiniLM-L6-v2`.
- **sim_threshold**: Minimum similarity threshold for filtering, allowing control over cluster relevance.

#### **3. How Cosine Similarity Clustering Works**
- **Step 1**: Embeddings are generated for each text section, transforming them into vectors that capture semantic meaning.
- **Step 2**: Hierarchical clustering groups similar sections based on cosine similarity, forming clusters with related content.
- **Step 3**: Clusters are filtered based on word count, removing those below the `word_count_threshold`.
- **Step 4**: Each cluster is then categorized with tags, if enabled, providing context to each grouped content section.
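The heart of Steps 1–2 can be sketched with plain Python. This is a toy: the 4-dimensional vectors stand in for real sentence-transformer embeddings, and a greedy single pass stands in for the hierarchical clustering the actual strategy uses:

```python
import math

def cosine_sim(a, b):
    # Cosine similarity: dot product divided by the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical "embeddings" for three page sections.
sections = {
    "markets up this quarter": [0.9, 0.1, 0.0, 0.2],
    "tech stocks keep growing": [0.8, 0.2, 0.1, 0.3],
    "teams prepare for the season": [0.0, 0.9, 0.1, 0.0],
}

# Greedy grouping: join the first cluster whose members are all
# within sim_threshold of this section, else start a new cluster.
sim_threshold = 0.8
clusters = []
for name, vec in sections.items():
    for cluster in clusters:
        if all(cosine_sim(vec, sections[m]) >= sim_threshold for m in cluster):
            cluster.append(name)
            break
    else:
        clusters.append([name])

print(clusters)
```

The two finance-flavored vectors point in nearly the same direction and end up in one cluster, while the sports-flavored vector starts its own; `sim_threshold` plays the same gatekeeping role as in `CosineStrategy`.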
#### **4. Example Use Case: Clustering Blog Article Sections**
|
||||
- **Goal**: Group related sections of a blog or news page to identify distinct topics or discussion areas.
|
||||
- **Example HTML Sections**:
|
||||
```text
|
||||
"The economy is showing signs of recovery, with markets up this quarter.",
|
||||
"In the sports world, several major teams are preparing for the upcoming season.",
|
||||
"New advancements in AI technology are reshaping the tech landscape.",
|
||||
"Market analysts are optimistic about continued growth in tech stocks."
|
||||
```
|
||||
|
||||
- **Code Setup**:
|
||||
```python
|
||||
async def extract_blog_sections():
|
||||
extraction_strategy = CosineStrategy(
|
||||
word_count_threshold=15,
|
||||
max_dist=0.3,
|
||||
sim_threshold=0.2,
|
||||
model_name="sentence-transformers/all-MiniLM-L6-v2",
|
||||
top_k=2
|
||||
)
|
||||
async with AsyncWebCrawler() as crawler:
|
||||
url = "https://example.com/blog-page"
|
||||
result = await crawler.arun(
|
||||
url=url,
|
||||
extraction_strategy=extraction_strategy,
|
||||
cache_mode=CacheMode.BYPASS
|
||||
)
|
||||
print(result.extracted_content)
|
||||
```
|
||||
|
||||
- **Explanation**:
|
||||
- **word_count_threshold**: Ensures only clusters with meaningful content are included.
|
||||
- **sim_threshold**: Filters out clusters with low similarity, focusing on closely related sections.
|
||||
- **top_k**: Selects top tags, useful for identifying main topics.
|
||||
|
||||
#### **5. Applying Semantic Filtering with Cosine Similarity**

- **Semantic Filter**: Filters sections based on relevance to a specific keyword, such as “technology” for tech articles.
- **Example Code**:

  ```python
  extraction_strategy = CosineStrategy(
      semantic_filter="technology",
      word_count_threshold=10,
      max_dist=0.25,
      model_name="sentence-transformers/all-MiniLM-L6-v2"
  )
  ```

- **Explanation**:
  - **semantic_filter**: Only sections with high similarity to the “technology” keyword will be included in the clustering, making it easy to focus on specific topics within a mixed-content page.
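Conceptually, the semantic filter embeds the keyword, scores each section against it, and keeps only sections that clear a threshold. The sketch below uses crude bag-of-words counts in place of real sentence embeddings; `semantic_filter` and its threshold here are illustrative helpers, not library internals.

```python
import math
from collections import Counter

def bow_cosine(text_a, text_b):
    # Cosine similarity over bag-of-words counts (a crude stand-in for
    # sentence-transformer embeddings)
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(c * c for c in a.values()))
            * math.sqrt(sum(c * c for c in b.values())))
    return dot / norm if norm else 0.0

def semantic_filter(sections, keyword, threshold=0.1):
    # Keep only sections whose similarity to the keyword clears the threshold
    return [s for s in sections if bow_cosine(s, keyword) >= threshold]

sections = [
    "new technology in AI technology labs",
    "the football season starts soon",
]
print(semantic_filter(sections, "technology"))
```

With real embeddings the same pattern holds: a section about tech stocks would score well against “technology” even without sharing the literal word.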
#### **6. Clustering Product Reviews by Similarity**

- **Goal**: Organize product reviews by themes, such as “price,” “quality,” or “durability.”
- **Example Reviews**:

  ```text
  "The quality of this product is outstanding and well worth the price.",
  "I found the product to be durable but a bit overpriced.",
  "Great value for the money and long-lasting.",
  "The build quality is good, but I expected a lower price point."
  ```

- **Code Setup**:

  ```python
  async def extract_product_reviews():
      extraction_strategy = CosineStrategy(
          word_count_threshold=20,
          max_dist=0.35,
          sim_threshold=0.25,
          model_name="sentence-transformers/all-MiniLM-L6-v2"
      )
      async with AsyncWebCrawler() as crawler:
          url = "https://example.com/product-reviews"
          result = await crawler.arun(
              url=url,
              extraction_strategy=extraction_strategy,
              cache_mode=CacheMode.BYPASS
          )
          print(result.extracted_content)
  ```

- **Explanation**:
  - This configuration clusters similar reviews, grouping feedback by common themes and helping businesses understand customer sentiment around particular product aspects.

#### **7. Performance Advantages of the Cosine Strategy**

- **Speed**: The Cosine Similarity Strategy is faster than LLM-based extraction because it doesn’t rely on API calls to external LLMs.
- **Local Processing**: The strategy runs locally with pre-trained sentence embeddings, ideal for high-throughput scenarios where cost and latency are concerns.
- **Comparison**: With a well-optimized local model, this method can cluster large datasets quickly, making it suitable for tasks requiring rapid, repeated analysis.

#### **8. Full Code Example for Clustering News Articles**

- **Code**:

  ```python
  import asyncio

  async def main():
      await extract_blog_sections()
      await extract_product_reviews()

  if __name__ == "__main__":
      asyncio.run(main())
  ```

#### **9. Wrap Up & Next Steps**

- Recap the efficiency and effectiveness of Cosine Similarity for clustering related content quickly.
- Close with a reminder of Crawl4AI’s flexibility across extraction strategies, and prompt users to experiment with different settings to optimize clustering for their specific content.

---

This outline covers the Cosine Similarity Strategy’s speed and effectiveness, with examples that showcase its potential for clustering various content types efficiently.
# Crawl4AI

## Episode 12: Session-Based Crawling for Dynamic Websites

### Quick Intro
Show session management for handling websites with multiple pages or actions (like “load more” buttons). Demo: Crawl a paginated content page, persisting session data across multiple requests.

Here’s a detailed outline for the **Session-Based Crawling for Dynamic Websites** video, explaining why sessions are necessary, how to use them, and providing practical examples and a visual diagram to illustrate the concept.

---

### **11. Session-Based Crawling for Dynamic Websites**

#### **1. Introduction to Session-Based Crawling**

- **What is Session-Based Crawling**: Session-based crawling maintains a continuous browsing session across multiple page states, allowing the crawler to interact with a page and retrieve content that loads dynamically or in response to user interactions.
- **Why It’s Needed**:
  - On static pages, all content is available directly from a single URL.
  - On dynamic websites, content often loads progressively or in response to user actions (e.g., clicking “load more,” submitting forms, scrolling).
  - Session-based crawling simulates these user actions, capturing content that would otherwise remain hidden until specific actions are taken.
#### **2. Conceptual Diagram for Session-Based Crawling**

```mermaid
graph TD
    Start[Start Session] --> S1["Initial State (S1)"]
    S1 -->|Crawl| Content1[Extract Content S1]
    S1 -->|Action: Click Load More| S2[State S2]
    S2 -->|Crawl| Content2[Extract Content S2]
    S2 -->|Action: Scroll Down| S3[State S3]
    S3 -->|Crawl| Content3[Extract Content S3]
    S3 -->|Action: Submit Form| S4[Final State]
    S4 -->|Crawl| Content4[Extract Content S4]
    Content4 --> End[End Session]
```

- **Explanation of Diagram**:
  - **Start**: Initializes the session and opens the starting URL.
  - **State Transitions**: Each action (e.g., clicking “load more,” scrolling) transitions to a new state, where additional content becomes available.
  - **Session Persistence**: Keeps the same browsing session active, preserving state and allowing a sequence of actions to unfold.
  - **End**: After reaching the final state, the session ends, and all accumulated content has been extracted.

#### **3. Key Components of Session-Based Crawling in Crawl4AI**

- **Session ID**: A unique identifier that maintains state across requests, allowing the crawler to “remember” previous actions.
- **JavaScript Execution**: Executes JavaScript commands (e.g., clicks, scrolls) to simulate interactions.
- **Wait Conditions**: Ensures the crawler waits for content to load in each state before moving on.
- **Sequential State Transitions**: By defining actions and wait conditions between states, the crawler can navigate through the page as a user would.
#### **4. Basic Session Example: Multi-Step Content Loading**

- **Goal**: Crawl an article feed that requires several “load more” clicks to display additional content.
- **Code**:

  ```python
  async def crawl_article_feed():
      async with AsyncWebCrawler() as crawler:
          session_id = "feed_session"

          for page in range(3):
              result = await crawler.arun(
                  url="https://example.com/articles",
                  session_id=session_id,
                  js_code="document.querySelector('.load-more-button').click();" if page > 0 else None,
                  wait_for="css:.article",
                  css_selector=".article"  # Target article elements
              )
              print(f"Page {page + 1}: Extracted {len(result.extracted_content)} characters")
  ```

- **Explanation**:
  - **session_id**: Ensures all requests share the same browsing state.
  - **js_code**: Clicks the “load more” button after the initial page load, expanding content on each iteration.
  - **wait_for**: Ensures articles have loaded after each click before extraction.
#### **5. Advanced Example: E-Commerce Product Search with Filter Selection**

- **Goal**: Interact with filters on an e-commerce page to extract products based on selected criteria.
- **Example Steps**:
  1. **State 1**: Load the main product page.
  2. **State 2**: Apply a filter (e.g., “On Sale”) by selecting a checkbox.
  3. **State 3**: Scroll to load additional products and capture updated results.

- **Code**:

  ```python
  async def extract_filtered_products():
      async with AsyncWebCrawler() as crawler:
          session_id = "product_session"

          # Step 1: Open product page
          result = await crawler.arun(
              url="https://example.com/products",
              session_id=session_id,
              wait_for="css:.product-item"
          )

          # Step 2: Apply filter (e.g., "On Sale")
          result = await crawler.arun(
              url="https://example.com/products",
              session_id=session_id,
              js_code="document.querySelector('#sale-filter-checkbox').click();",
              wait_for="css:.product-item"
          )

          # Step 3: Scroll to load additional products
          for _ in range(2):  # Scroll down twice
              result = await crawler.arun(
                  url="https://example.com/products",
                  session_id=session_id,
                  js_code="window.scrollTo(0, document.body.scrollHeight);",
                  wait_for="css:.product-item"
              )
              print(f"Loaded {len(result.extracted_content)} characters after scroll")
  ```

- **Explanation**:
  - **State Persistence**: Each action (filter selection and scroll) builds on the previous session state.
  - **Multiple Interactions**: Combines clicking a filter with scrolling, demonstrating how the session preserves these actions.
#### **6. Key Benefits of Session-Based Crawling**

- **Accessing Hidden Content**: Retrieves data that loads only after user actions.
- **Simulating User Behavior**: Handles interactive elements such as “load more” buttons, dropdowns, and filters.
- **Maintaining Continuity Across States**: Enables a sequential process, moving logically from one state to the next and capturing all desired content without reloading the initial state each time.

#### **7. Additional Configuration Tips**

- **Manage Session End**: Always conclude the session after the final state to release resources.
- **Optimize with Wait Conditions**: Use `wait_for` to ensure complete loading before each extraction.
- **Handle Errors**: Include error handling for interactions that may fail, ensuring robustness across state transitions.

#### **8. Complete Code Example: Multi-Step Session Workflow**

- **Example**:

  ```python
  import asyncio

  async def main():
      await crawl_article_feed()
      await extract_filtered_products()

  if __name__ == "__main__":
      asyncio.run(main())
  ```

#### **9. Wrap Up & Next Steps**

- Recap the usefulness of session-based crawling for dynamic content extraction.
- Tease the next video: **Hooks and Custom Workflow with AsyncWebCrawler**, covering advanced customization options for further control over the crawling process.

---

This outline covers session-based crawling from both a conceptual and practical perspective, helping users understand its importance, configure it effectively, and use it to handle complex dynamic content.
# Crawl4AI

## Episode 13: Chunking Strategies for Large Text Processing

### Quick Intro
Explain Regex, NLP, and Fixed-Length chunking, and when to use each. Demo: Chunk a large article or document for processing by topics or sentences.

Here’s a structured outline for the **Chunking Strategies for Large Text Processing** video, explaining each strategy and when to use it, emphasizing how chunking works within extraction and why it’s crucial for effective data aggregation, with examples to illustrate.

---
### **12. Chunking Strategies for Large Text Processing**

#### **1. Introduction to Chunking in Crawl4AI**

- **What is Chunking**: Chunking is the process of dividing large text into manageable sections, or “chunks,” enabling efficient processing in extraction tasks.
- **Why It’s Needed**:
  - When processing large text, feeding it directly into an extraction function (call it `F(x)`) can exceed memory or token limits.
  - Chunking breaks `x` (the text) into smaller pieces that are processed sequentially or in parallel by the extraction function, with the final result being an aggregation of each chunk’s processed output.
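The chunk, process, aggregate flow described above can be sketched generically. The function and parameter names here are illustrative, not Crawl4AI APIs: `chunk_fn` is any splitting strategy and `extract_fn` is any per-chunk extraction function `F`.

```python
def process_large_text(text, chunk_fn, extract_fn):
    # Split the text, run the extraction function on each piece,
    # then return the per-chunk results for aggregation
    chunks = chunk_fn(text)
    return [extract_fn(c) for c in chunks]

# Toy example: chunk by paragraph, "extract" the word count of each chunk
text = "first paragraph here\n\nsecond one"
result = process_large_text(
    text,
    chunk_fn=lambda t: t.split("\n\n"),
    extract_fn=lambda c: len(c.split()),
)
print(result)  # [3, 2]
```

Every strategy below only changes `chunk_fn`; the processing and aggregation steps stay the same.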
#### **2. Key Chunking Strategies and Use Cases**

- Crawl4AI offers various chunking strategies to suit different text structures, chunk sizes, and processing requirements.
- **Choosing a Strategy**: Select based on the type of text (e.g., articles, transcripts) and extraction needs (e.g., simple splitting or context-sensitive processing).

#### **3. Strategy 1: Regex-Based Chunking**

- **Description**: Uses regular expressions to split text based on specified patterns (e.g., paragraphs or section breaks).
- **Use Case**: Ideal for dividing text by paragraphs or larger logical blocks where sections are clearly separated by line breaks or punctuation.
- **Example**:
  - **Pattern**: `r'\n\n'` for double line breaks.

  ```python
  chunker = RegexChunking(patterns=[r'\n\n'])
  text_chunks = chunker.chunk(long_text)
  print(text_chunks)  # Output: list of paragraphs
  ```

- **Pros**: Flexible for pattern-based chunking.
- **Cons**: Limited to text with consistent formatting.
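`RegexChunking` ships with Crawl4AI; for intuition, a minimal stand-in might look like the class below. This is an illustrative sketch, not the library’s actual implementation.

```python
import re

class SimpleRegexChunking:
    def __init__(self, patterns=None):
        # Default to splitting on blank lines (paragraph boundaries)
        self.patterns = patterns or [r"\n\n"]

    def chunk(self, text):
        chunks = [text]
        for pattern in self.patterns:
            # Re-split every existing chunk on the next pattern,
            # dropping empty or whitespace-only pieces
            chunks = [piece for c in chunks
                      for piece in re.split(pattern, c) if piece.strip()]
        return chunks

chunker = SimpleRegexChunking()
print(chunker.chunk("Paragraph one.\n\nParagraph two.\n\nParagraph three."))
```

Passing several patterns splits progressively finer, e.g. first on blank lines, then on sentence-ending punctuation.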
#### **4. Strategy 2: NLP Sentence-Based Chunking**

- **Description**: Uses NLP to split text by sentences, ensuring grammatically complete segments.
- **Use Case**: Useful for extracting individual statements, such as in news articles, quotes, or legal text.
- **Example**:

  ```python
  chunker = NlpSentenceChunking()
  sentence_chunks = chunker.chunk(long_text)
  print(sentence_chunks)  # Output: list of sentences
  ```

- **Pros**: Maintains sentence structure, ideal for tasks needing semantic completeness.
- **Cons**: May create very small chunks, which could limit contextual extraction.

#### **5. Strategy 3: Topic-Based Segmentation Using TextTiling**

- **Description**: Segments text into topics using TextTiling, identifying topic shifts and key segments.
- **Use Case**: Ideal for long articles, reports, or essays where each section covers a different topic.
- **Example**:

  ```python
  chunker = TopicSegmentationChunking(num_keywords=3)
  topic_chunks = chunker.chunk_with_topics(long_text)
  print(topic_chunks)  # Output: list of topic segments with keywords
  ```

- **Pros**: Groups related content, preserving topical coherence.
- **Cons**: Depends on identifiable topic shifts, which may not be present in all texts.
#### **6. Strategy 4: Fixed-Length Word Chunking**

- **Description**: Splits text into chunks based on a fixed number of words.
- **Use Case**: Ideal for text where an exact segment size is required, such as processing word-limited documents for LLMs.
- **Example**:

  ```python
  chunker = FixedLengthWordChunking(chunk_size=100)
  word_chunks = chunker.chunk(long_text)
  print(word_chunks)  # Output: list of 100-word chunks
  ```

- **Pros**: Ensures uniform chunk sizes, suitable for token-based extraction limits.
- **Cons**: May split sentences, affecting semantic coherence.
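Fixed-length chunking reduces to slicing the word list at regular offsets. A minimal sketch (the function name is illustrative, not the library API):

```python
def fixed_length_chunks(text, chunk_size=100):
    # Slice the word list every chunk_size words; the last chunk may be shorter
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

print(fixed_length_chunks("one two three four five", chunk_size=2))
# ['one two', 'three four', 'five']
```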
#### **7. Strategy 5: Sliding Window Chunking**

- **Description**: Uses a fixed window size with a step, creating overlapping chunks to maintain context.
- **Use Case**: Useful when context must carry across sections, as with documents whose neighboring sections depend on one another.
- **Example**:

  ```python
  chunker = SlidingWindowChunking(window_size=100, step=50)
  window_chunks = chunker.chunk(long_text)
  print(window_chunks)  # Output: list of overlapping word chunks
  ```

- **Pros**: Retains context across adjacent chunks, ideal for complex semantic extraction.
- **Cons**: Overlap increases data size, potentially impacting processing time.
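The window-and-step mechanics can be sketched in a few lines. With `window_size=4` and `step=2`, each chunk shares two words with its neighbor. The function name is illustrative, not the library API:

```python
def sliding_window_chunks(text, window_size=100, step=50):
    # Slide a fixed-size window over the word list, advancing by `step`
    # words each time, so consecutive chunks overlap
    words = text.split()
    if len(words) <= window_size:
        return [" ".join(words)]
    return [
        " ".join(words[i:i + window_size])
        for i in range(0, len(words) - window_size + 1, step)
    ]

print(sliding_window_chunks("a b c d e f", window_size=4, step=2))
# ['a b c d', 'c d e f']
```

Setting `step` equal to `window_size` removes the overlap and recovers plain fixed-length chunking.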
#### **8. Strategy 6: Overlapping Window Chunking**

- **Description**: Similar to sliding windows, but with an explicitly defined overlap, allowing chunks to share content at their edges.
- **Use Case**: Suitable for long texts with essential overlapping information, such as research articles or medical records.
- **Example**:

  ```python
  chunker = OverlappingWindowChunking(window_size=1000, overlap=100)
  overlap_chunks = chunker.chunk(long_text)
  print(overlap_chunks)  # Output: list of chunks with the defined overlap
  ```

- **Pros**: Allows controlled overlap for consistent content coverage across chunks.
- **Cons**: Redundant data in overlapping areas may increase computation.
#### **9. Practical Example: Using Chunking with an Extraction Strategy**

- **Goal**: Combine chunking with an extraction strategy to process large text effectively.
- **Example Code**:

  ```python
  from crawl4ai.extraction_strategy import LLMExtractionStrategy

  async def extract_large_text():
      # Initialize chunker and extraction strategy
      chunker = FixedLengthWordChunking(chunk_size=200)
      extraction_strategy = LLMExtractionStrategy(provider="openai/gpt-4", api_token="your_api_token")

      # Split text into chunks
      text_chunks = chunker.chunk(large_text)

      async with AsyncWebCrawler() as crawler:
          for chunk in text_chunks:
              result = await crawler.arun(
                  url="https://example.com",
                  extraction_strategy=extraction_strategy,
                  content=chunk
              )
              print(result.extracted_content)
  ```

- **Explanation**:
  - `chunker.chunk()`: Divides `large_text` into smaller segments based on the chosen strategy.
  - `extraction_strategy`: Processes each chunk separately; the results are then aggregated to form the final output.

#### **10. Choosing the Right Chunking Strategy**

- **Text Structure**: If the text has clear sections (e.g., paragraphs, topics), use Regex or Topic Segmentation.
- **Extraction Needs**: If context is crucial, consider Sliding or Overlapping Window Chunking.
- **Processing Constraints**: For word-limited extractions (e.g., LLMs with token limits), Fixed-Length Word Chunking is often most effective.

#### **11. Wrap Up & Next Steps**

- Recap the benefits of each chunking strategy and when to use them in extraction workflows.
- Tease the next video: **Hooks and Custom Workflow with AsyncWebCrawler**, focusing on customizing crawler behavior with hooks for a fine-tuned extraction process.

---

This outline provides a complete understanding of chunking strategies, explaining each method’s strengths and best-use scenarios to help users process large texts effectively in Crawl4AI.
# Crawl4AI

## Episode 14: Hooks and Custom Workflow with AsyncWebCrawler

### Quick Intro
Cover hooks (`on_browser_created`, `before_goto`, `after_goto`) to add custom workflows. Demo: Use hooks to add custom cookies or headers, log HTML, or trigger specific events on page load.

Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCrawler** video, covering each hook’s purpose, usage, and example implementations.

---

### **13. Hooks and Custom Workflow with AsyncWebCrawler**

#### **1. Introduction to Hooks in Crawl4AI**

- **What are Hooks**: Hooks are customizable entry points in the crawling process that allow users to inject custom actions or logic at specific stages.
- **Why Use Hooks**:
  - They enable fine-grained control over the crawling workflow.
  - Useful for performing additional tasks (e.g., logging, modifying headers) dynamically during the crawl.
  - Hooks provide the flexibility to adapt the crawler to complex site structures or unique project needs.
#### **2. Overview of Available Hooks**

- Crawl4AI offers seven key hooks to modify and control different stages of the crawling lifecycle:
  - `on_browser_created`
  - `on_user_agent_updated`
  - `on_execution_started`
  - `before_goto`
  - `after_goto`
  - `before_return_html`
  - `before_retrieve_html`
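To build intuition for how `set_hook` dispatches, a hook system can be modeled as a name-to-callback registry. The class below is a simplified, synchronous sketch for illustration only; Crawl4AI’s real hooks are registered via `crawler.crawler_strategy.set_hook` and may be async.

```python
class HookRegistry:
    HOOK_NAMES = {
        "on_browser_created", "on_user_agent_updated", "on_execution_started",
        "before_goto", "after_goto", "before_return_html", "before_retrieve_html",
    }

    def __init__(self):
        self._hooks = {}

    def set_hook(self, name, fn):
        # Reject unknown hook names early instead of failing silently
        if name not in self.HOOK_NAMES:
            raise ValueError(f"Unknown hook: {name}")
        self._hooks[name] = fn

    def fire(self, name, *args):
        # Call the registered hook, if any, and return its result
        hook = self._hooks.get(name)
        return hook(*args) if hook else None

registry = HookRegistry()
registry.set_hook("before_goto", lambda url: f"navigating to {url}")
print(registry.fire("before_goto", "https://example.com"))
# navigating to https://example.com
```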
#### **3. Hook-by-Hook Explanation and Examples**

---

##### **Hook 1: `on_browser_created`**

- **Purpose**: Triggered right after the browser instance is created.
- **Use Case**:
  - Initializing browser-specific settings or performing setup actions.
  - Configuring browser extensions or scripts before any page is opened.
- **Example**:

  ```python
  async def log_browser_creation(browser):
      print("Browser instance created:", browser)

  crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation)
  ```

- **Explanation**: This hook logs the browser creation event, useful for tracking when a new browser instance starts.

---

##### **Hook 2: `on_user_agent_updated`**

- **Purpose**: Called whenever the user agent string is updated.
- **Use Case**:
  - Modifying the user agent based on page requirements, e.g., switching to a mobile user agent for mobile-only pages.
- **Example**:

  ```python
  def update_user_agent(user_agent):
      print(f"User Agent Updated: {user_agent}")

  crawler.crawler_strategy.set_hook('on_user_agent_updated', update_user_agent)
  crawler.update_user_agent("Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)")
  ```

- **Explanation**: This hook provides a callback every time the user agent changes, helpful for debugging or dynamically altering user-agent settings based on conditions.

---

##### **Hook 3: `on_execution_started`**

- **Purpose**: Called right before the crawler begins any interaction (e.g., JavaScript execution, clicks).
- **Use Case**:
  - Performing setup actions, such as inserting cookies or initiating custom scripts.
- **Example**:

  ```python
  async def log_execution_start(page):
      print("Execution started on page:", page.url)

  crawler.crawler_strategy.set_hook('on_execution_started', log_execution_start)
  ```

- **Explanation**: Logs the start of any major interaction on the page, ideal when you want to monitor each interaction.

---
##### **Hook 4: `before_goto`**

- **Purpose**: Triggered before navigating to a new URL with `page.goto()`.
- **Use Case**:
  - Modifying request headers or setting up conditions right before the page loads.
  - Adding headers or dynamically adjusting options for specific URLs.
- **Example**:

  ```python
  async def modify_headers_before_goto(page):
      await page.set_extra_http_headers({"X-Custom-Header": "CustomValue"})
      print("Custom headers set before navigation")

  crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto)
  ```

- **Explanation**: This hook allows injecting headers or altering settings based on the page’s needs, particularly useful for pages with custom requirements.

---

##### **Hook 5: `after_goto`**

- **Purpose**: Executed immediately after a page has loaded (after `page.goto()`).
- **Use Case**:
  - Checking the loaded page state, modifying the DOM, or performing post-navigation actions (e.g., scrolling).
- **Example**:

  ```python
  async def post_navigation_scroll(page):
      await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
      print("Scrolled to the bottom after navigation")

  crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll)
  ```

- **Explanation**: This hook scrolls to the bottom of the page after loading, which can help trigger dynamically added content such as infinite-scroll elements.

---

##### **Hook 6: `before_return_html`**

- **Purpose**: Called right before HTML content is retrieved and returned.
- **Use Case**:
  - Removing overlays or cleaning up the page for a cleaner HTML extraction.
- **Example**:

  ```python
  async def remove_advertisements(page, html):
      await page.evaluate("document.querySelectorAll('.ad-banner').forEach(el => el.remove());")
      print("Advertisements removed before returning HTML")

  crawler.crawler_strategy.set_hook('before_return_html', remove_advertisements)
  ```

- **Explanation**: The hook removes ad banners from the HTML before it’s retrieved, ensuring a cleaner data extraction.

---

##### **Hook 7: `before_retrieve_html`**

- **Purpose**: Runs right before Crawl4AI initiates HTML retrieval.
- **Use Case**:
  - Finalizing any page adjustments (e.g., setting timers, waiting for specific elements).
- **Example**:

  ```python
  async def wait_for_content_before_retrieve(page):
      await page.wait_for_selector('.main-content')
      print("Main content loaded, ready to retrieve HTML")

  crawler.crawler_strategy.set_hook('before_retrieve_html', wait_for_content_before_retrieve)
  ```

- **Explanation**: This hook waits for the main content to load before retrieving the HTML, ensuring that all essential content is captured.
#### **4. Setting Hooks in Crawl4AI**

- **How to Set Hooks**:
  - Use `set_hook` to define a custom function for each hook.
  - Each hook function can be asynchronous (useful for actions like waiting or retrieving async data).
- **Example Setup**:

  ```python
  crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation)
  crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto)
  crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll)
  ```

#### **5. Complete Example: Using Hooks for a Customized Crawl Workflow**

- **Goal**: Log each key step, set custom headers before navigation, and clean up the page before retrieving HTML.
- **Example Code**:

  ```python
  async def custom_crawl():
      async with AsyncWebCrawler() as crawler:
          # Set hooks for the custom workflow
          crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation)
          crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto)
          crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll)
          crawler.crawler_strategy.set_hook('before_return_html', remove_advertisements)

          # Perform the crawl
          url = "https://example.com"
          result = await crawler.arun(url=url)
          print(result.html)  # Display or process the HTML
  ```

#### **6. Benefits of Using Hooks in Custom Crawling Workflows**

- **Enhanced Control**: Hooks offer precise control over each stage, allowing adjustments based on content and structure.
- **Efficient Modifications**: Avoid reloading or restarting the session; hooks can alter actions dynamically.
- **Context-Sensitive Actions**: Hooks enable custom logic tailored to specific pages or sections, maximizing extraction quality.

#### **7. Wrap Up & Next Steps**

- Recap how hooks empower customized workflows in Crawl4AI, enabling flexibility at every stage.
- Tease the next video: **Automating Post-Processing with Crawl4AI**, covering automated steps after data extraction.

---

This outline provides a thorough understanding of hooks, their practical applications, and examples for customizing the crawling workflow in Crawl4AI.
Below is a sample Markdown file (`tutorials/async-webcrawler-basics.md`) illustrating how you might teach new users the fundamentals of `AsyncWebCrawler`. This tutorial builds on the **Getting Started** section by introducing key configuration parameters and the structure of the crawl result. Feel free to adjust the code snippets, wording, or format to match your style.

---

# AsyncWebCrawler Basics

In this tutorial, you’ll learn how to:

1. Create and configure an `AsyncWebCrawler` instance
2. Understand the `CrawlResult` object returned by `arun()`
3. Use basic `BrowserConfig` and `CrawlerRunConfig` options to tailor your crawl

> **Prerequisites**
> - You’ve already completed the [Getting Started](./getting-started.md) tutorial (or have equivalent knowledge).
> - You have **Crawl4AI** installed and configured with Playwright.

---
## 1. What is `AsyncWebCrawler`?

`AsyncWebCrawler` is the central class for running asynchronous crawling operations in Crawl4AI. It manages browser sessions, handles dynamic pages (if needed), and provides you with a structured result object for each crawl. Essentially, it’s your high-level interface for collecting page data.

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com")
        print(result)

asyncio.run(main())
```

---
## 2. Creating a Basic `AsyncWebCrawler` Instance

Below is a simple code snippet showing how to create and use `AsyncWebCrawler`. This goes one step beyond the minimal example you saw in [Getting Started](./getting-started.md).

```python
import asyncio
from crawl4ai import AsyncWebCrawler
from crawl4ai import BrowserConfig, CrawlerRunConfig

async def main():
    # 1. Set up configuration objects (optional if you want defaults)
    browser_config = BrowserConfig(
        browser_type="chromium",
        headless=True,
        verbose=True
    )
    crawler_config = CrawlerRunConfig(
        page_timeout=30000,  # 30 seconds
        wait_for_images=True,
        verbose=True
    )

    # 2. Initialize AsyncWebCrawler with your chosen browser config
    async with AsyncWebCrawler(config=browser_config) as crawler:
        # 3. Run a single crawl
        url_to_crawl = "https://example.com"
        result = await crawler.arun(url=url_to_crawl, config=crawler_config)

        # 4. Inspect the result
        if result.success:
            print(f"Successfully crawled: {result.url}")
            print(f"HTML length: {len(result.html)}")
            print(f"Markdown snippet: {result.markdown[:200]}...")
        else:
            print(f"Failed to crawl {result.url}. Error: {result.error_message}")

if __name__ == "__main__":
    asyncio.run(main())
```

### Key Points

1. **`BrowserConfig`** is optional, but it’s the place to specify browser-related settings (e.g., `headless`, `browser_type`).
2. **`CrawlerRunConfig`** deals with how you want the crawler to behave for this particular run (timeouts, waiting for images, etc.).
3. **`arun()`** is the main method for crawling a single URL. We’ll see how `arun_many()` works in later tutorials.

---
## 3. Understanding `CrawlResult`

When you call `arun()`, you get back a `CrawlResult` object containing all the relevant data from that crawl attempt. Some common fields include:

```python
class CrawlResult(BaseModel):
    url: str
    html: str
    success: bool
    cleaned_html: Optional[str] = None
    media: Dict[str, List[Dict]] = {}
    links: Dict[str, List[Dict]] = {}
    screenshot: Optional[str] = None  # base64-encoded screenshot if requested
    pdf: Optional[bytes] = None  # binary PDF data if requested
    markdown: Optional[Union[str, MarkdownGenerationResult]] = None
    markdown_v2: Optional[MarkdownGenerationResult] = None
    error_message: Optional[str] = None
    # ... plus other fields like status_code, ssl_certificate, extracted_content, etc.
```

### Commonly Used Fields

- **`success`**: `True` if the crawl succeeded, `False` otherwise.
- **`html`**: The raw HTML (or final rendered state if JavaScript was executed).
- **`markdown` / `markdown_v2`**: The automatically generated Markdown representation of the page.
- **`media`**: A dictionary with lists of extracted images, videos, or audio elements.
- **`links`**: A dictionary with lists of “internal” and “external” link objects.
- **`error_message`**: If `success` is `False`, this often contains a description of the error.

**Example**:

```python
if result.success:
    print("Page Title or snippet of HTML:", result.html[:200])
    if result.markdown:
        print("Markdown snippet:", result.markdown[:200])
    print("Links found:", len(result.links.get("internal", [])), "internal links")
else:
    print("Error crawling:", result.error_message)
```

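Because `links` and `media` are plain dictionaries of lists, quick post-crawl stats need no extra library support. A small sketch (the crawl result is mocked here with hand-written dicts; the shapes follow the field descriptions above):

```python
# Sketch: summarize the `links` and `media` dicts from a CrawlResult-like
# object. The sample data below is made up for illustration.
def summarize(links: dict, media: dict) -> dict:
    return {
        "internal_links": len(links.get("internal", [])),
        "external_links": len(links.get("external", [])),
        "images": len(media.get("images", [])),
    }

links = {"internal": [{"href": "/about"}], "external": [{"href": "https://example.org"}]}
media = {"images": [{"src": "/logo.png"}, {"src": "/hero.jpg"}]}
print(summarize(links, media))
```
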
---

## 4. Relevant Basic Parameters

Below are a few `BrowserConfig` and `CrawlerRunConfig` parameters you might tweak early on. We’ll cover more advanced ones (like proxies, PDF, or screenshots) in later tutorials.

### 4.1 `BrowserConfig` Essentials

| Parameter | Description | Default |
|--------------------|-----------------------------------------------------------|----------------|
| `browser_type` | Which browser engine to use: `"chromium"`, `"firefox"`, `"webkit"` | `"chromium"` |
| `headless` | Run the browser with no UI window. If `False`, you see the browser. | `True` |
| `verbose` | Print extra logs for debugging. | `True` |
| `java_script_enabled` | Toggle JavaScript. When `False`, you might speed up loads but lose dynamic content. | `True` |

### 4.2 `CrawlerRunConfig` Essentials

| Parameter | Description | Default |
|-----------------------|--------------------------------------------------------------|--------------------|
| `page_timeout` | Maximum time in ms to wait for the page to load or scripts. | `30000` (30s) |
| `wait_for_images` | Wait for images to fully load. Good for accurate rendering. | `True` |
| `css_selector` | Target only certain elements for extraction. | `None` |
| `excluded_tags` | Skip certain HTML tags (like `nav`, `footer`, etc.) | `None` |
| `verbose` | Print logs for debugging. | `True` |

> **Tip**: Don’t worry if you see lots of parameters. You’ll learn them gradually in later tutorials.

---

## 5. Windows-Specific Configuration

When using AsyncWebCrawler on Windows, you might encounter a `NotImplementedError` from `asyncio.create_subprocess_exec`. This is a known Windows-specific issue: the `SelectorEventLoop` (the default in some Python versions and environments) doesn't support subprocess operations.

To resolve this, Crawl4AI provides a utility function that configures Windows to use the `ProactorEventLoop`. Call it before running any async operations:

```python
from crawl4ai.utils import configure_windows_event_loop

# Call this before any async operations if you're on Windows
configure_windows_event_loop()

# Your AsyncWebCrawler code here
```

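If you prefer not to depend on the helper, the standard library offers the same fix directly. This is a stdlib-only sketch; the assumption (not verified against the helper's source) is that `configure_windows_event_loop()` essentially switches the event loop policy:

```python
import asyncio
import sys

# On Windows, select the Proactor policy so subprocess support is available.
# This branch is a no-op on other platforms.
if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())

async def main():
    # Your AsyncWebCrawler code would run here.
    return "ok"

print(asyncio.run(main()))
```
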
---

## 6. Putting It All Together

Here’s a slightly more in-depth example that shows off a few key config parameters at once:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig

async def main():
    browser_cfg = BrowserConfig(
        browser_type="chromium",
        headless=True,
        java_script_enabled=True,
        verbose=False
    )

    crawler_cfg = CrawlerRunConfig(
        page_timeout=30000,  # wait up to 30 seconds
        wait_for_images=True,
        css_selector=".article-body",  # only extract content under this CSS selector
        verbose=True
    )

    async with AsyncWebCrawler(config=browser_cfg) as crawler:
        result = await crawler.arun("https://news.example.com", config=crawler_cfg)

        if result.success:
            print("[OK] Crawled:", result.url)
            print("HTML length:", len(result.html))
            print("Extracted Markdown:", result.markdown_v2.raw_markdown[:300])
        else:
            print("[ERROR]", result.error_message)

if __name__ == "__main__":
    asyncio.run(main())
```

**Key Observations**:
- `css_selector=".article-body"` ensures we only focus on the main content region.
- `page_timeout=30000` helps if the site is slow.
- We turned off `verbose` logs for the browser but kept them on for the crawler config.

---

## 7. Next Steps

- **Smart Crawling Techniques**: Learn to handle iframes, advanced caching, and selective extraction in the [next tutorial](./smart-crawling.md).
- **Hooks & Custom Code**: See how to inject custom logic before and after navigation in a dedicated [Hooks Tutorial](./hooks-custom.md).
- **Reference**: For a complete list of every parameter in `BrowserConfig` and `CrawlerRunConfig`, check out the [Reference section](../../reference/configuration.md).

---

## Summary

You now know the basics of **AsyncWebCrawler**:
- How to create it with optional browser/crawler configs
- How `arun()` works for single-page crawls
- Where to find your crawled data in `CrawlResult`
- A handful of frequently used configuration parameters

From here, you can refine your crawler to handle more advanced scenarios, like focusing on specific content or dealing with dynamic elements. Let’s move on to **[Smart Crawling Techniques](./smart-crawling.md)** to learn how to handle iframes, advanced caching, and more.

---

**Last updated**: 2024-XX-XX

Keep exploring! If you get stuck, remember to check out the [How-To Guides](../../how-to/) for targeted solutions or the [Explanations](../../explanations/) for deeper conceptual background.

@@ -1,271 +0,0 @@
# Deploying with Docker (Quickstart)

> **⚠️ WARNING: Experimental & Legacy**
> Our current Docker solution for Crawl4AI is **not stable** and **will be discontinued** soon. A more robust Docker/orchestration strategy is in development, with a planned stable release in **2025**. If you choose to use this Docker approach, please proceed cautiously and avoid production deployment without thorough testing.

Crawl4AI is **open-source** and under **active development**. We appreciate your interest, but strongly recommend you make **informed decisions** if you need a production environment. Expect breaking changes in future versions.

---

## 1. Installation & Environment Setup (Outside Docker)

Before we jump into Docker usage, here’s a quick reminder (from the legacy docs) of how to install Crawl4AI locally. For **non-Docker** deployments or local development:

```bash
# 1. Install the package
pip install crawl4ai
crawl4ai-setup

# 2. Install playwright dependencies (all browsers or specific ones)
playwright install --with-deps
# or
playwright install --with-deps chromium
# or
playwright install --with-deps chrome
```

**Testing** your installation:

```bash
# Visible browser test
python -c "from playwright.sync_api import sync_playwright; p = sync_playwright().start(); browser = p.chromium.launch(headless=False); page = browser.new_page(); page.goto('https://example.com'); input('Press Enter to close...')"
```

---

## 2. Docker Overview

This Docker approach allows you to run a **Crawl4AI** service via REST API. You can:

1. **POST** a request (e.g., URLs, extraction config)
2. **Retrieve** your results from a task-based endpoint

> **Note**: This Docker solution is **temporary**. We plan a more robust, stable Docker approach in the near future. For now, you can experiment, but do not rely on it for mission-critical production.

---

## 3. Pulling and Running the Image

### Basic Run

```bash
docker pull unclecode/crawl4ai:basic
docker run -p 11235:11235 unclecode/crawl4ai:basic
```

This starts a container on port `11235`. You can `POST` requests to `http://localhost:11235/crawl`.

### Using an API Token

```bash
docker run -p 11235:11235 \
  -e CRAWL4AI_API_TOKEN=your_secret_token \
  unclecode/crawl4ai:basic
```

If **`CRAWL4AI_API_TOKEN`** is set, you must include `Authorization: Bearer <token>` in your requests. Otherwise, the service is open to anyone.

---

## 4. Docker Compose for Multi-Container Workflows

You can also use **Docker Compose** to manage multiple services. Below is an **experimental** snippet:

```yaml
version: '3.8'

services:
  crawl4ai:
    image: unclecode/crawl4ai:basic
    ports:
      - "11235:11235"
    environment:
      - CRAWL4AI_API_TOKEN=${CRAWL4AI_API_TOKEN:-}
      - OPENAI_API_KEY=${OPENAI_API_KEY:-}
      # Additional env variables as needed
    volumes:
      - /dev/shm:/dev/shm
```

To run:

```bash
docker-compose up -d
```

And to stop:

```bash
docker-compose down
```

**Troubleshooting**:

- **Check logs**: `docker-compose logs -f crawl4ai`
- **Remove orphan containers**: `docker-compose down --remove-orphans`
- **Remove networks**: `docker network rm <network_name>`

---

## 5. Making Requests to the Container

**Base URL**: `http://localhost:11235`

### Example: Basic Crawl

```python
import requests

task_request = {
    "urls": "https://example.com",
    "priority": 10
}

response = requests.post("http://localhost:11235/crawl", json=task_request)
task_id = response.json()["task_id"]

# Poll for status
status_url = f"http://localhost:11235/task/{task_id}"
status = requests.get(status_url).json()
print(status)
```

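The snippet above checks the task status only once; in practice you poll until the task finishes. Here is a small transport-agnostic helper sketch: it takes any callable that returns the `/task/{task_id}` JSON dict, so it works with `requests` or any other HTTP client. The terminal status `"completed"` appears in the examples below; treating `"failed"` as terminal is an assumption about the API.

```python
import time

def wait_for_task(fetch_status, timeout=60.0, interval=1.0):
    """Poll `fetch_status()` (a callable returning the task-status JSON dict)
    until the task reaches a terminal status or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status.get("status") in ("completed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("Task did not finish in time")

# Fake fetcher for illustration: the task completes on the third poll.
responses = iter([
    {"status": "pending"},
    {"status": "processing"},
    {"status": "completed", "result": {"html": "<html>...</html>"}},
])
final = wait_for_task(lambda: next(responses), timeout=5.0, interval=0.01)
print(final["status"])
```

With a real service you would pass `lambda: requests.get(status_url).json()` as the fetcher.
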
If you used an API token, do:

```python
headers = {"Authorization": "Bearer your_secret_token"}
response = requests.post(
    "http://localhost:11235/crawl",
    headers=headers,
    json=task_request
)
```

---

## 6. Docker + New Crawler Config Approach

### Using `BrowserConfig` & `CrawlerRunConfig` in Requests

The Docker-based solution can accept **crawler configurations** in the request JSON. (Older docs may show direct parameters; embedding them in `crawler_params` or `extra` aligns with the new approach.) For example:

```python
import requests

request_data = {
    "urls": "https://www.nbcnews.com/business",
    "crawler_params": {
        "headless": True,
        "browser_type": "chromium",
        "verbose": True,
        "page_timeout": 30000,
        # ... any other BrowserConfig-like fields
    },
    "extra": {
        "word_count_threshold": 50,
        "bypass_cache": True
    }
}

response = requests.post("http://localhost:11235/crawl", json=request_data)
task_id = response.json()["task_id"]
```

This is the recommended style if you want to replicate `BrowserConfig` and `CrawlerRunConfig` settings in Docker mode.

---

## 7. Example: JSON Extraction in Docker

```python
import requests
import json

# Define a schema for CSS extraction
schema = {
    "name": "Coinbase Crypto Prices",
    "baseSelector": ".cds-tableRow-t45thuk",
    "fields": [
        {
            "name": "crypto",
            "selector": "td:nth-child(1) h2",
            "type": "text"
        },
        {
            "name": "symbol",
            "selector": "td:nth-child(1) p",
            "type": "text"
        },
        {
            "name": "price",
            "selector": "td:nth-child(2)",
            "type": "text"
        }
    ]
}

request_data = {
    "urls": "https://www.coinbase.com/explore",
    "extraction_config": {
        "type": "json_css",
        "params": {"schema": schema}
    },
    "crawler_params": {
        "headless": True,
        "verbose": True
    }
}

resp = requests.post("http://localhost:11235/crawl", json=request_data)
task_id = resp.json()["task_id"]

# Poll for status
status = requests.get(f"http://localhost:11235/task/{task_id}").json()
if status["status"] == "completed":
    extracted_content = status["result"]["extracted_content"]
    data = json.loads(extracted_content)
    print("Extracted:", len(data), "entries")
else:
    print("Task still in progress or failed.")
```

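Once `extracted_content` is parsed, a quick sanity check that every entry carries the fields declared in the schema can catch selector drift early. A sketch (the sample data below is made up; the field names come from the schema above):

```python
import json

# Field names declared in the extraction schema above.
schema_fields = {"crypto", "symbol", "price"}

# Stand-in for status["result"]["extracted_content"] — illustrative values only.
extracted_content = json.dumps([
    {"crypto": "Bitcoin", "symbol": "BTC", "price": "$97,000"},
    {"crypto": "Ethereum", "symbol": "ETH", "price": "$3,100"},
])

data = json.loads(extracted_content)
# Indices of entries missing one or more schema fields.
missing = [i for i, entry in enumerate(data) if not schema_fields <= entry.keys()]
print(f"{len(data)} entries, {len(missing)} with missing fields")
```
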
---

## 8. Why This Docker Is Temporary

**We are building a new, stable approach**:

- The current Docker container is **experimental** and might break with future releases.
- We plan a stable release in **2025** with a more robust API, versioning, and orchestration.
- If you use this Docker in production, do so at your own risk and be prepared for **breaking changes**.

**Community**: Because Crawl4AI is open-source, you can track progress or contribute to the new Docker approach. Check the [GitHub repository](https://github.com/unclecode/crawl4ai) for roadmaps and updates.

---

## 9. Known Limitations & Next Steps

1. **Not Production-Ready**: This Docker approach lacks extensive security, logging, or advanced config for large-scale usage.
2. **Ongoing Changes**: Expect API changes. The official stable version is targeted for **2025**.
3. **LLM Integrations**: Docker images get large if you bundle GPU support or multiple model providers. We might unify these in a future build.
4. **Performance**: For concurrency or large crawls, you may need to tune resources (memory, CPU) and watch out for ephemeral storage.
5. **Version Pinning**: If you must deploy, pin your Docker tag to a specific version (e.g., `:basic-0.3.7`) to avoid surprise updates.

### Next Steps

- **Watch the Repository**: For announcements on the new Docker architecture.
- **Experiment**: Use this Docker for test or dev environments, but keep an eye out for breakage.
- **Contribute**: If you have ideas or improvements, open a PR or discussion.
- **Check Roadmaps**: See our [GitHub issues](https://github.com/unclecode/crawl4ai/issues) or [Roadmap doc](https://github.com/unclecode/crawl4ai/blob/main/ROADMAP.md) to find upcoming releases.

---

## 10. Summary

**Deploying with Docker** can simplify running Crawl4AI as a service. However:

- **This Docker** approach is **legacy** and subject to removal/overhaul.
- For production, please weigh the risks carefully.
- A more detailed “new Docker approach” is coming in **2025**.

We hope this guide helps you quickly spin up Crawl4AI in Docker for **experimental** use. Stay tuned for the fully supported version!

@@ -1,272 +0,0 @@
# Getting Started with Crawl4AI

Welcome to **Crawl4AI**, an open-source, LLM-friendly web crawler and scraper. In this tutorial, you’ll:

1. **Install** Crawl4AI (both via pip and Docker, with notes on platform challenges).
2. Run your **first crawl** using minimal configuration.
3. Generate **Markdown** output (and learn how it’s influenced by content filters).
4. Experiment with a simple **CSS-based extraction** strategy.
5. See a glimpse of **LLM-based extraction** (including open-source and closed-source model options).

---

## 1. Introduction

Crawl4AI provides:
- An asynchronous crawler, **`AsyncWebCrawler`**.
- Configurable browser and run settings via **`BrowserConfig`** and **`CrawlerRunConfig`**.
- Automatic HTML-to-Markdown conversion via **`DefaultMarkdownGenerator`** (supports additional filters).
- Multiple extraction strategies (LLM-based or “traditional” CSS/XPath-based).

By the end of this guide, you’ll have installed Crawl4AI, performed a basic crawl, generated Markdown, and tried out two extraction strategies.

---

## 2. Installation

### 2.1 Python + Playwright

#### Basic Pip Installation

```bash
pip install crawl4ai
crawl4ai-setup

# Verify your installation
crawl4ai-doctor
```

If you encounter any browser-related issues, you can install the browsers manually:
```bash
python -m playwright install --with-deps chrome chromium
```

- **`crawl4ai-setup`** installs and configures Playwright (Chromium by default).

We cover advanced installation and Docker in the [Installation](#installation) section.

---

## 3. Your First Crawl

Here’s a minimal Python script that creates an **`AsyncWebCrawler`**, fetches a webpage, and prints the first 300 characters of its Markdown output:

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com")
        print(result.markdown[:300])  # Print first 300 chars

if __name__ == "__main__":
    asyncio.run(main())
```

**What’s happening?**
- **`AsyncWebCrawler`** launches a headless browser (Chromium by default).
- It fetches `https://example.com`.
- Crawl4AI automatically converts the HTML into Markdown.

You now have a simple, working crawl!

---

## 4. Basic Configuration (Light Introduction)

Crawl4AI’s crawler can be heavily customized using two main classes:

1. **`BrowserConfig`**: Controls browser behavior (headless or full UI, user agent, JavaScript toggles, etc.).
2. **`CrawlerRunConfig`**: Controls how each crawl runs (caching, extraction, timeouts, hooking, etc.).

Below is an example with minimal usage:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig, CacheMode

async def main():
    browser_conf = BrowserConfig(headless=True)  # or False to see the browser
    run_conf = CrawlerRunConfig(cache_mode=CacheMode.BYPASS)

    async with AsyncWebCrawler(config=browser_conf) as crawler:
        result = await crawler.arun(
            url="https://example.com",
            config=run_conf
        )
        print(result.markdown)

if __name__ == "__main__":
    asyncio.run(main())
```

We’ll explore more advanced config in later tutorials (like enabling proxies, PDF output, multi-tab sessions, etc.). For now, just note how you pass these objects to manage crawling.

---

## 5. Generating Markdown Output

By default, Crawl4AI automatically generates Markdown from each crawled page. However, the exact output depends on whether you specify a **markdown generator** or **content filter**.

- **`result.markdown`**:
  The direct HTML-to-Markdown conversion.
- **`result.markdown.fit_markdown`**:
  The same content after applying any configured **content filter** (e.g., `PruningContentFilter`).

### Example: Using a Filter with `DefaultMarkdownGenerator`

```python
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.content_filter_strategy import PruningContentFilter
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

md_generator = DefaultMarkdownGenerator(
    content_filter=PruningContentFilter(threshold=0.4, threshold_type="fixed")
)

config = CrawlerRunConfig(markdown_generator=md_generator)

async with AsyncWebCrawler() as crawler:
    result = await crawler.arun("https://news.ycombinator.com", config=config)
    print("Raw Markdown length:", len(result.markdown.raw_markdown))
    print("Fit Markdown length:", len(result.markdown.fit_markdown))
```

**Note**: If you do **not** specify a content filter or markdown generator, you’ll typically see only the raw Markdown. We’ll dive deeper into these strategies in a dedicated **Markdown Generation** tutorial.

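To build intuition for what a content filter does, here is a conceptual sketch — this is **not** `PruningContentFilter`'s actual algorithm, just the general idea: score each content block, drop blocks below a threshold, and the "fit" output is whatever survives:

```python
def fit_blocks(blocks, threshold=0.4):
    # Score each block by a crude density proxy (its word count relative to
    # the largest block) and keep blocks at or above the threshold.
    # Purely illustrative — not the library's real scoring.
    max_words = max(len(b.split()) for b in blocks)
    return [b for b in blocks if len(b.split()) / max_words >= threshold]

blocks = [
    "Subscribe now",  # short, boilerplate-like
    "Hacker News is a social news website focusing on computer science.",
    "Login",          # short, boilerplate-like
]
print(fit_blocks(blocks))
```
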
---

## 6. Simple Data Extraction (CSS-based)

Crawl4AI can also extract structured data (JSON) using CSS or XPath selectors. Below is a minimal CSS-based example:

```python
import asyncio
import json
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

async def main():
    schema = {
        "name": "Example Items",
        "baseSelector": "div.item",
        "fields": [
            {"name": "title", "selector": "h2", "type": "text"},
            {"name": "link", "selector": "a", "type": "attribute", "attribute": "href"}
        ]
    }

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://example.com/items",
            config=CrawlerRunConfig(
                extraction_strategy=JsonCssExtractionStrategy(schema)
            )
        )
        # The JSON output is stored in 'extracted_content'
        data = json.loads(result.extracted_content)
        print(data)

if __name__ == "__main__":
    asyncio.run(main())
```

**Why is this helpful?**
- Great for repetitive page structures (e.g., item listings, articles).
- No AI usage or costs.
- The crawler returns a JSON string you can parse or store.

---

## 7. Simple Data Extraction (LLM-based)

For more complex or irregular pages, a language model can parse text intelligently into a structure you define. Crawl4AI supports **open-source** or **closed-source** providers:

- **Open-Source Models** (e.g., `ollama/llama3.3`, `no_token`)
- **OpenAI Models** (e.g., `openai/gpt-4`, requires `api_token`)
- Or any provider supported by the underlying library

Below is an example using **open-source** style (no token) and closed-source:

```python
import os
import asyncio
from pydantic import BaseModel, Field
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.extraction_strategy import LLMExtractionStrategy

class PricingInfo(BaseModel):
    model_name: str = Field(..., description="Name of the AI model")
    input_fee: str = Field(..., description="Fee for input tokens")
    output_fee: str = Field(..., description="Fee for output tokens")

async def main():
    # 1) Open-source usage: no token required
    llm_strategy_open_source = LLMExtractionStrategy(
        provider="ollama/llama3.3",  # or "any-other-local-model"
        api_token="no_token",        # for local models, no API key is typically required
        schema=PricingInfo.schema(),
        extraction_type="schema",
        instruction="""
        From this page, extract all AI model pricing details in JSON format.
        Each entry should have 'model_name', 'input_fee', and 'output_fee'.
        """,
        temperature=0
    )

    # 2) Closed-source usage: API key for OpenAI, for example
    openai_token = os.getenv("OPENAI_API_KEY", "sk-YOUR_API_KEY")
    llm_strategy_openai = LLMExtractionStrategy(
        provider="openai/gpt-4",
        api_token=openai_token,
        schema=PricingInfo.schema(),
        extraction_type="schema",
        instruction="""
        From this page, extract all AI model pricing details in JSON format.
        Each entry should have 'model_name', 'input_fee', and 'output_fee'.
        """,
        temperature=0
    )

    # We'll demo the open-source approach here
    config = CrawlerRunConfig(extraction_strategy=llm_strategy_open_source)

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://example.com/pricing",
            config=config
        )
        print("LLM-based extraction JSON:", result.extracted_content)

if __name__ == "__main__":
    asyncio.run(main())
```

**What’s happening?**
- We define a Pydantic schema (`PricingInfo`) describing the fields we want.
- The LLM extraction strategy uses that schema and your instructions to transform raw text into structured JSON.
- Depending on the **provider** and **api_token**, you can use local models or a remote API.

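If you're curious what the strategy actually receives, `PricingInfo.schema()` yields a standard JSON Schema dict. A hand-written equivalent, abridged for illustration (the exact output depends on your pydantic version):

```python
# Hand-written (abridged) approximation of what PricingInfo.schema()
# produces — for illustration only, not the library's exact output.
pricing_schema = {
    "title": "PricingInfo",
    "type": "object",
    "properties": {
        "model_name": {"type": "string", "description": "Name of the AI model"},
        "input_fee": {"type": "string", "description": "Fee for input tokens"},
        "output_fee": {"type": "string", "description": "Fee for output tokens"},
    },
    "required": ["model_name", "input_fee", "output_fee"],
}
print(sorted(pricing_schema["properties"]))
```
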
---

## 8. Next Steps

Congratulations! You have:
1. Installed Crawl4AI (via pip, with Docker as an option).
2. Performed a simple crawl and printed Markdown.
3. Seen how adding a **markdown generator** + **content filter** can produce “fit” Markdown.
4. Experimented with **CSS-based** extraction for repetitive data.
5. Learned the basics of **LLM-based** extraction (open-source and closed-source).

If you are ready for more, check out:

- **Installation**: Learn more on how to install Crawl4AI and set up Playwright.
- **Focus on Configuration**: Learn to customize browser settings, caching modes, advanced timeouts, etc.
- **Markdown Generation Basics**: Dive deeper into content filtering and “fit markdown” usage.
- **Dynamic Pages & Hooks**: Tackle sites with “Load More” buttons, login forms, or JavaScript complexities.
- **Deployment**: Run Crawl4AI in Docker containers and scale across multiple nodes.
- **Explanations & How-To Guides**: Explore browser contexts, identity-based crawling, hooking, performance, and more.

Crawl4AI is a powerful tool for extracting data and generating Markdown from virtually any website. Enjoy exploring, and we hope you build amazing AI-powered applications with it!

@@ -1,527 +0,0 @@
# Crawl4AI Quick Start Guide: Your All-in-One AI-Ready Web Crawling & AI Integration Solution

Crawl4AI, the **#1 trending GitHub repository**, streamlines web content extraction into AI-ready formats. Perfect for AI assistants, semantic search engines, or data pipelines, Crawl4AI transforms raw HTML into structured Markdown or JSON effortlessly. Integrate with LLMs, open-source models, or your own retrieval-augmented generation workflows.

**What Crawl4AI is not:**

Crawl4AI is not a replacement for traditional web scraping libraries, Selenium, or Playwright. It's not designed as a general-purpose web automation tool. Instead, Crawl4AI has a specific, focused goal:

- To generate perfect, AI-friendly data (particularly for LLMs) from web content
- To maximize speed and efficiency in data extraction and processing
- To operate at scale, from Raspberry Pi to cloud infrastructures

Crawl4AI is engineered with a "scale-first" mindset, aiming to handle millions of links while maintaining exceptional performance. It's highly efficient and fast, optimized to:

1. Transform raw web content into structured, LLM-ready formats (Markdown/JSON)
2. Implement intelligent extraction strategies to reduce reliance on costly API calls
3. Provide a streamlined pipeline for AI data preparation and ingestion

In essence, Crawl4AI bridges the gap between web content and AI systems, focusing on delivering high-quality, processed data rather than offering broad web automation capabilities.

**Key Links:**

- **Website:** [https://crawl4ai.com](https://crawl4ai.com)
- **GitHub:** [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)
- **Colab Notebook:** [Try on Google Colab](https://colab.research.google.com/drive/1SgRPrByQLzjRfwoRNq1wSGE9nYY_EE8C?usp=sharing)
- **Quickstart Code Example:** [quickstart_async.config.py](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/quickstart_async.config.py)
- **Examples Folder:** [Crawl4AI Examples](https://github.com/unclecode/crawl4ai/tree/main/docs/examples)

---

## Table of Contents

- [Crawl4AI Quick Start Guide: Your All-in-One AI-Ready Web Crawling \& AI Integration Solution](#crawl4ai-quick-start-guide-your-all-in-one-ai-ready-web-crawling--ai-integration-solution)
  - [Table of Contents](#table-of-contents)
  - [1. Introduction \& Key Concepts](#1-introduction--key-concepts)
  - [2. Installation \& Environment Setup](#2-installation--environment-setup)
    - [Test Your Installation](#test-your-installation)
  - [3. Core Concepts \& Configuration](#3-core-concepts--configuration)
  - [4. Basic Crawling \& Simple Extraction](#4-basic-crawling--simple-extraction)
  - [5. Markdown Generation \& AI-Optimized Output](#5-markdown-generation--ai-optimized-output)
  - [6. Structured Data Extraction (CSS, XPath, LLM)](#6-structured-data-extraction-css-xpath-llm)
  - [7. Advanced Extraction: LLM \& Open-Source Models](#7-advanced-extraction-llm--open-source-models)
  - [8. Page Interactions, JS Execution, \& Dynamic Content](#8-page-interactions-js-execution--dynamic-content)
  - [9. Media, Links, \& Metadata Handling](#9-media-links--metadata-handling)
  - [10. Authentication \& Identity Preservation](#10-authentication--identity-preservation)
    - [Manual Setup via User Data Directory](#manual-setup-via-user-data-directory)
    - [Using `storage_state`](#using-storage_state)
  - [11. Proxy \& Security Enhancements](#11-proxy--security-enhancements)
  - [12. Screenshots, PDFs \& File Downloads](#12-screenshots-pdfs--file-downloads)
  - [13. Caching \& Performance Optimization](#13-caching--performance-optimization)
  - [14. Hooks for Custom Logic](#14-hooks-for-custom-logic)
  - [15. Dockerization \& Scaling](#15-dockerization--scaling)
  - [16. Troubleshooting \& Common Pitfalls](#16-troubleshooting--common-pitfalls)
  - [17. Comprehensive End-to-End Example](#17-comprehensive-end-to-end-example)
  - [18. Further Resources \& Community](#18-further-resources--community)

---

## 1. Introduction & Key Concepts

Crawl4AI transforms websites into structured, AI-friendly data. It efficiently handles large-scale crawling, integrates with both proprietary and open-source LLMs, and optimizes content for semantic search or RAG pipelines.

**Quick Test:**

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def test_run():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com")
        print(result.markdown)

asyncio.run(test_run())
```

If you see Markdown output, everything is working!

**More info:** [See /docs/introduction](#) or [1_introduction.ex.md](https://github.com/unclecode/crawl4ai/blob/main/introduction.ex.md)

---

## 2. Installation & Environment Setup

```bash
# Install the package
pip install crawl4ai
crawl4ai-setup

# Install Playwright with system dependencies (recommended)
playwright install --with-deps  # Installs all browsers

# Or install specific browsers:
playwright install --with-deps chrome  # Recommended for Colab/Linux
playwright install --with-deps firefox
playwright install --with-deps webkit
playwright install --with-deps chromium

# Keep Playwright updated periodically
playwright install
```

> **Note**: For Google Colab and some Linux environments, use `chrome` instead of `chromium` - it tends to work more reliably.

### Test Your Installation

Try these one-liners:

```bash
# Visible browser test
python -c "from playwright.sync_api import sync_playwright; p = sync_playwright().start(); browser = p.chromium.launch(headless=False); page = browser.new_page(); page.goto('https://example.com'); input('Press Enter to close...')"

# Headless test (for servers/CI)
python -c "from playwright.sync_api import sync_playwright; p = sync_playwright().start(); browser = p.chromium.launch(headless=True); page = browser.new_page(); page.goto('https://example.com'); print(f'Title: {page.title()}'); browser.close()"
```

You should see a browser window (in the visible test) loading example.com. If you get errors, try Firefox with `playwright install --with-deps firefox`.

**Try in Colab:**
[Open Colab Notebook](https://colab.research.google.com/drive/1SgRPrByQLzjRfwoRNq1wSGE9nYY_EE8C?usp=sharing)

**More info:** [See /docs/configuration](#) or [2_configuration.md](https://github.com/unclecode/crawl4ai/blob/main/configuration.md)

---

## 3. Core Concepts & Configuration

Use `AsyncWebCrawler`, `CrawlerRunConfig`, and `BrowserConfig` to control crawling.

**Example config:**

```python
from crawl4ai.async_configs import BrowserConfig, CrawlerRunConfig

browser_config = BrowserConfig(
    headless=True,
    verbose=True,
    viewport_width=1080,
    viewport_height=600,
    text_mode=False,
    ignore_https_errors=True,
    java_script_enabled=True
)

run_config = CrawlerRunConfig(
    css_selector="article.main",
    word_count_threshold=50,
    excluded_tags=['nav', 'footer'],
    exclude_external_links=True,
    wait_for="css:.article-loaded",
    page_timeout=60000,
    delay_before_return_html=1.0,
    mean_delay=0.1,
    max_range=0.3,
    process_iframes=True,
    remove_overlay_elements=True,
    js_code="""
    (async () => {
        window.scrollTo(0, document.body.scrollHeight);
        await new Promise(r => setTimeout(r, 2000));
        document.querySelector('.load-more')?.click();
    })();
    """
)

# Cache modes: ENABLED, DISABLED, BYPASS, READ_ONLY, WRITE_ONLY
# run_config.cache_mode = CacheMode.ENABLED
```

**Prefixes:**

- `http://` or `https://` for live pages
- `file://local.html` for local HTML files
- `raw:<html>` for raw HTML strings
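
These prefixes determine how the crawler interprets the target string. As an illustration only (this is not the library's internal code), the dispatch logic amounts to:

```python
def classify_target(target: str) -> str:
    """Classify a crawl target by its prefix (illustrative helper, not a library API)."""
    if target.startswith(("http://", "https://")):
        return "live"
    if target.startswith("file://"):
        return "local"
    if target.startswith("raw:"):
        return "raw"
    raise ValueError(f"Unrecognized crawl target: {target!r}")

print(classify_target("raw:<html><body>Hi</body></html>"))  # raw
```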

**More info:** [See /docs/async_webcrawler](#) or [3_async_webcrawler.ex.md](https://github.com/unclecode/crawl4ai/blob/main/async_webcrawler.ex.md)

---

## 4. Basic Crawling & Simple Extraction

```python
async with AsyncWebCrawler(config=browser_config) as crawler:
    result = await crawler.arun("https://news.example.com/article", config=run_config)
    print(result.markdown)  # Basic markdown content
```

**More info:** [See /docs/browser_context_page](#) or [4_browser_context_page.ex.md](https://github.com/unclecode/crawl4ai/blob/main/browser_context_page.ex.md)

---

## 5. Markdown Generation & AI-Optimized Output

After crawling, `result.markdown_v2` provides:

- `raw_markdown`: Unfiltered markdown
- `markdown_with_citations`: Links as references at the bottom
- `references_markdown`: A separate list of reference links
- `fit_markdown`: Filtered, relevant markdown (e.g., after BM25)
- `fit_html`: The HTML used to produce `fit_markdown`

**Example:**

```python
print("RAW:", result.markdown_v2.raw_markdown[:200])
print("CITED:", result.markdown_v2.markdown_with_citations[:200])
print("REFERENCES:", result.markdown_v2.references_markdown)
print("FIT MARKDOWN:", result.markdown_v2.fit_markdown)
```

For AI training, `fit_markdown` focuses on the most relevant content.

**More info:** [See /docs/markdown_generation](#) or [5_markdown_generation.ex.md](https://github.com/unclecode/crawl4ai/blob/main/markdown_generation.ex.md)

---

## 6. Structured Data Extraction (CSS, XPath, LLM)

Extract JSON data without LLMs:

**CSS:**

```python
from crawl4ai.extraction_strategy import JsonCssExtractionStrategy

schema = {
    "name": "Products",
    "baseSelector": ".product",
    "fields": [
        {"name": "title", "selector": "h2", "type": "text"},
        {"name": "price", "selector": ".price", "type": "text"}
    ]
}
run_config.extraction_strategy = JsonCssExtractionStrategy(schema)
```

**XPath:**

```python
from crawl4ai.extraction_strategy import JsonXPathExtractionStrategy

xpath_schema = {
    "name": "Articles",
    "baseSelector": "//div[@class='article']",
    "fields": [
        {"name": "headline", "selector": ".//h1", "type": "text"},
        {"name": "summary", "selector": ".//p[@class='summary']", "type": "text"}
    ]
}
run_config.extraction_strategy = JsonXPathExtractionStrategy(xpath_schema)
```
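
With either strategy, the extracted records arrive as a JSON string on `result.extracted_content`, which you can parse with the standard library. A sketch using a hypothetical extraction result (the string below stands in for a real crawl's output):

```python
import json

# Hypothetical value of result.extracted_content after a crawl
extracted_content = '[{"title": "Widget", "price": "$9.99"}, {"title": "Gadget", "price": "$19.99"}]'

products = json.loads(extracted_content)
for product in products:
    print(product["title"], "-", product["price"])
```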

**More info:** [See /docs/extraction_strategies](#) or [7_extraction_strategies.ex.md](https://github.com/unclecode/crawl4ai/blob/main/extraction_strategies.ex.md)

---

## 7. Advanced Extraction: LLM & Open-Source Models

Use `LLMExtractionStrategy` for complex tasks. It works with OpenAI or open-source models (e.g., via Ollama).

```python
from pydantic import BaseModel
from crawl4ai.extraction_strategy import LLMExtractionStrategy

class TravelData(BaseModel):
    destination: str
    attractions: list

run_config.extraction_strategy = LLMExtractionStrategy(
    provider="ollama/nemotron",
    schema=TravelData.schema(),
    instruction="Extract destination and top attractions."
)
```

**More info:** [See /docs/extraction_strategies](#) or [7_extraction_strategies.ex.md](https://github.com/unclecode/crawl4ai/blob/main/extraction_strategies.ex.md)

---

## 8. Page Interactions, JS Execution, & Dynamic Content

Insert `js_code` and use `wait_for` to ensure content loads. Example:

```python
run_config.js_code = """
(async () => {
    document.querySelector('.load-more')?.click();
    await new Promise(r => setTimeout(r, 2000));
})();
"""
run_config.wait_for = "css:.item-loaded"
```

**More info:** [See /docs/page_interaction](#) or [11_page_interaction.md](https://github.com/unclecode/crawl4ai/blob/main/page_interaction.md)

---

## 9. Media, Links, & Metadata Handling

- `result.media["images"]`: List of images with `src`, `score`, and `alt`; the score indicates relevance.
- `result.media["videos"]` and `result.media["audios"]` hold media info in the same shape.
- `result.links["internal"]`, `result.links["external"]`, `result.links["social"]`: Categorized links; each link has `href`, `text`, `context`, and `type`.
- `result.metadata`: Title, description, keywords, author.

**Example:**

```python
# Images
for img in result.media["images"]:
    print("Image:", img["src"], "Score:", img["score"], "Alt:", img.get("alt", "N/A"))

# Links
for link in result.links["external"]:
    print("External Link:", link["href"], "Text:", link["text"])

# Metadata
print("Page Title:", result.metadata["title"])
print("Description:", result.metadata["description"])
```

**More info:** [See /docs/content_selection](#) or [8_content_selection.ex.md](https://github.com/unclecode/crawl4ai/blob/main/content_selection.ex.md)

---

## 10. Authentication & Identity Preservation

### Manual Setup via User Data Directory

1. **Open Chrome with a custom user data dir:**

   ```bash
   "C:\Program Files\Google\Chrome\Application\chrome.exe" --user-data-dir="C:\MyChromeProfile"
   ```

   On macOS:

   ```bash
   "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --user-data-dir="/Users/username/ChromeProfiles/MyProfile"
   ```

2. **Log in to sites, solve CAPTCHAs, adjust settings manually.**
   The browser saves cookies/localStorage in that directory.

3. **Use `user_data_dir` in `BrowserConfig`:**

   ```python
   browser_config = BrowserConfig(
       headless=True,
       user_data_dir="/Users/username/ChromeProfiles/MyProfile"
   )
   ```

   Now the crawler starts with those cookies, sessions, etc.

### Using `storage_state`

Alternatively, export and reuse storage states:

```python
browser_config = BrowserConfig(
    headless=True,
    storage_state="mystate.json"  # Pre-saved state
)
```

No repeated logins needed.
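
For reference, a storage-state file is plain JSON describing cookies and per-origin storage. A minimal hand-written example (the field set is abridged and illustrative; files exported by Playwright contain more detail):

```python
import json

state = {
    "cookies": [
        {"name": "session", "value": "abcd1234", "domain": "example.com", "path": "/"}
    ],
    "origins": []  # per-origin localStorage entries would go here
}

with open("mystate.json", "w") as f:
    json.dump(state, f, indent=2)
```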

**More info:** [See /docs/storage_state](#) or [16_storage_state.md](https://github.com/unclecode/crawl4ai/blob/main/storage_state.md)

---

## 11. Proxy & Security Enhancements

Use `proxy_config` for authenticated proxies:

```python
browser_config.proxy_config = {
    "server": "http://proxy.example.com:8080",
    "username": "proxyuser",
    "password": "proxypass"
}
```

Combine with `headers` or `ignore_https_errors` as needed.

**More info:** [See /docs/proxy_security](#) or [14_proxy_security.md](https://github.com/unclecode/crawl4ai/blob/main/proxy_security.md)

---

## 12. Screenshots, PDFs & File Downloads

Enable `screenshot=True` or `pdf=True` in `CrawlerRunConfig`:

```python
run_config.screenshot = True
run_config.pdf = True
```

After crawling:

```python
if result.screenshot:
    with open("page.png", "wb") as f:
        f.write(result.screenshot)

if result.pdf:
    with open("page.pdf", "wb") as f:
        f.write(result.pdf)
```

**File Downloads:**

```python
browser_config.accept_downloads = True
browser_config.downloads_path = "./downloads"
run_config.js_code = """document.querySelector('a.download')?.click();"""

# After crawl:
print("Downloaded files:", result.downloaded_files)
```

**More info:** [See /docs/screenshot_and_pdf_export](#) or [15_screenshot_and_pdf_export.md](https://github.com/unclecode/crawl4ai/blob/main/screenshot_and_pdf_export.md).
Also [10_file_download.md](https://github.com/unclecode/crawl4ai/blob/main/file_download.md)

---

## 13. Caching & Performance Optimization

Set `cache_mode` to reuse fetch results:

```python
from crawl4ai import CacheMode

run_config.cache_mode = CacheMode.ENABLED
```

Adjust delays, increase concurrency, or use `text_mode=True` for faster extraction.
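
For crawling many URLs concurrently, the usual asyncio pattern is a semaphore bounding the number of simultaneous crawls. A sketch with a stand-in coroutine (in real code the body would await `crawler.arun(url, config=run_config)` instead):

```python
import asyncio

async def fetch(url: str) -> str:
    await asyncio.sleep(0)  # stand-in for an actual crawler.arun(url) call
    return f"crawled:{url}"

async def crawl_many(urls, max_concurrent: int = 5):
    sem = asyncio.Semaphore(max_concurrent)

    async def bounded(url):
        async with sem:
            return await fetch(url)

    return await asyncio.gather(*(bounded(u) for u in urls))

results = asyncio.run(crawl_many(["https://a.example", "https://b.example"]))
print(results)
```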

**More info:** [See /docs/cache_modes](#) or [9_cache_modes.md](https://github.com/unclecode/crawl4ai/blob/main/cache_modes.md)

---

## 14. Hooks for Custom Logic

Hooks let you run custom code at specific points in the crawl lifecycle, without having to create pages manually inside `on_browser_created`.

Use `on_page_context_created` to apply routing or modify page contexts before the URL is crawled:

**Example Hook:**

```python
async def on_page_context_created_hook(context, page, **kwargs):
    # Block all images to speed up load
    await context.route("**/*.{png,jpg,jpeg}", lambda route: route.abort())
    print("[HOOK] Image requests blocked")

async with AsyncWebCrawler(config=browser_config) as crawler:
    crawler.crawler_strategy.set_hook("on_page_context_created", on_page_context_created_hook)
    result = await crawler.arun("https://imageheavy.example.com", config=run_config)
    print("Crawl finished with images blocked.")
```

This hook is clean and doesn’t create a separate page itself—it just modifies the current context/page setup.

**More info:** [See /docs/hooks_auth](#) or [13_hooks_auth.md](https://github.com/unclecode/crawl4ai/blob/main/hooks_auth.md)

---

## 15. Dockerization & Scaling

Use Docker images:

- AMD64 basic:

  ```bash
  docker pull unclecode/crawl4ai:basic-amd64
  docker run -p 11235:11235 unclecode/crawl4ai:basic-amd64
  ```

- ARM64 for M1/M2:

  ```bash
  docker pull unclecode/crawl4ai:basic-arm64
  docker run -p 11235:11235 unclecode/crawl4ai:basic-arm64
  ```

- GPU support:

  ```bash
  docker pull unclecode/crawl4ai:gpu-amd64
  docker run --gpus all -p 11235:11235 unclecode/crawl4ai:gpu-amd64
  ```

Scale with load balancers or Kubernetes.
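
To pin this down in one place, a minimal `docker-compose.yml` sketch (the image tag and port come from the commands above; the replica count and memory limit are illustrative assumptions, not project defaults):

```yaml
services:
  crawl4ai:
    image: unclecode/crawl4ai:basic-amd64
    ports:
      - "11235:11235"
    deploy:
      replicas: 2        # illustrative: scale out behind your load balancer
      resources:
        limits:
          memory: 4g     # illustrative: headless browsers are memory-hungry
```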

**More info:** [See the Docker instructions in the README, and /docs/proxy_security for proxy setup](#)

---

## 16. Troubleshooting & Common Pitfalls

- Empty results? Relax filters and double-check your selectors.
- Timeouts? Increase `page_timeout` or refine `wait_for`.
- CAPTCHAs? Use `user_data_dir` or `storage_state` after solving them manually.
- JS errors? Try headful mode (`headless=False`) for debugging.

Check the [examples](https://github.com/unclecode/crawl4ai/tree/main/docs/examples) and [quickstart_async.config.py](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/quickstart_async.config.py) for more code.

---

## 17. Comprehensive End-to-End Example

Combine hooks, JS execution, PDF saving, and LLM extraction—see [quickstart_async.config.py](https://github.com/unclecode/crawl4ai/blob/main/docs/examples/quickstart_async.config.py) for a full example.

---

## 18. Further Resources & Community

- **Docs:** [https://crawl4ai.com](https://crawl4ai.com)
- **Issues & PRs:** [https://github.com/unclecode/crawl4ai/issues](https://github.com/unclecode/crawl4ai/issues)

Follow [@unclecode](https://x.com/unclecode) for news & community updates.

**Happy Crawling!**
Leverage Crawl4AI to feed your AI models with clean, structured web data today.

---

# Hooks & Custom Code

Crawl4AI supports a **hook** system that lets you run your own Python code at specific points in the crawling pipeline. By injecting logic into these hooks, you can automate tasks like:

- **Authentication** (log in before navigating)
- **Content manipulation** (modify HTML, inject scripts, etc.)
- **Session or browser configuration** (e.g., adjusting user agents, local storage)
- **Custom data collection** (scrape extra details or track state at each stage)

In this tutorial, you’ll learn about:

1. What hooks are available
2. How to attach code to each hook
3. Practical examples (auth flows, user agent changes, content manipulation, etc.)

> **Prerequisites**
> - Familiarity with [AsyncWebCrawler Basics](./async-webcrawler-basics.md).
> - Comfort with Python async/await.

---

## 1. Overview of Available Hooks

| Hook Name | Called When / Purpose | Context / Objects Provided |
|--------------------------|-----------------------------------------------------------------|-----------------------------------------------------|
| **`on_browser_created`** | Immediately after the browser is launched, but **before** any page or context is created. | **Browser** object only (no `page` yet). Use it for broad browser-level config. |
| **`on_page_context_created`** | Right after a new page context is created. Perfect for setting default timeouts, injecting scripts, etc. | Typically provides `page` and `context`. |
| **`on_user_agent_updated`** | Whenever the user agent changes. For advanced user agent logic or additional header updates. | Typically provides `page` and the updated user agent string. |
| **`on_execution_started`** | Right before your main crawling logic runs (before rendering the page). Good for one-time setup or variable initialization. | Typically provides `page`, possibly `context`. |
| **`before_goto`** | Right before navigating to the URL (i.e., `page.goto(...)`). Great for setting cookies, altering the URL, or hooking in authentication steps. | Typically provides `page`, `context`, and `goto_params`. |
| **`after_goto`** | Immediately after navigation completes, but before scraping. For post-login checks or initial content adjustments. | Typically provides `page`, `context`, `response`. |
| **`before_retrieve_html`** | Right before retrieving or finalizing the page’s HTML content. Good for in-page manipulation (e.g., removing ads or disclaimers). | Typically provides `page` or the final HTML reference. |
| **`before_return_html`** | Just before the HTML is returned to the crawler pipeline. Last chance to alter or sanitize content. | Typically provides the final HTML or a `page`. |

### A Note on `on_browser_created` (the browser-only hook)

- **No `page`** object is available because no page context exists yet. You can, however, set up browser-wide properties.
- For example, you might control Chrome DevTools Protocol (CDP) sessions or advanced browser flags here.

---

## 2. Registering Hooks

You can attach hooks by calling:

```python
crawler.crawler_strategy.set_hook("hook_name", your_hook_function)
```

or by passing a `hooks` dictionary to `AsyncWebCrawler` or your strategy constructor:

```python
hooks = {
    "before_goto": my_before_goto_hook,
    "after_goto": my_after_goto_hook,
    # ... etc.
}
async with AsyncWebCrawler(hooks=hooks) as crawler:
    ...
```

### Hook Signature

Each hook is a function (async or sync, depending on your usage) that receives **certain parameters**—most often `page`, `context`, or custom arguments relevant to that stage. The library then awaits or calls your hook before continuing.
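
Conceptually, the registration-and-dispatch mechanics look like this (a toy sketch for intuition, not the real crawler-strategy internals):

```python
import asyncio

class MiniStrategy:
    """Toy sketch of hook registration and dispatch."""

    def __init__(self):
        self._hooks = {}

    def set_hook(self, name, fn):
        self._hooks[name] = fn

    async def execute_hook(self, name, *args, **kwargs):
        hook = self._hooks.get(name)
        if hook is None:
            return None  # unregistered hooks are simply skipped
        if asyncio.iscoroutinefunction(hook):
            return await hook(*args, **kwargs)
        return hook(*args, **kwargs)  # sync hooks are called directly

async def demo():
    strategy = MiniStrategy()
    strategy.set_hook("before_goto", lambda page, context: f"hooked:{page}")
    return await strategy.execute_hook("before_goto", "page-1", "ctx-1")

print(asyncio.run(demo()))  # hooked:page-1
```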

---

## 3. Real-Life Examples

Below are concrete scenarios where hooks come in handy.

---

### 3.1 Authentication Before Navigation

One of the most frequent tasks is logging in or applying authentication **before** the crawler navigates to a URL (so that the user is recognized immediately).

#### Using `before_goto`

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig

async def before_goto_auth_hook(page, context, goto_params, **kwargs):
    """
    Example: Set cookies or localStorage to simulate login.
    This hook runs right before page.goto() is called.
    """
    # Example: Insert cookie-based auth or local storage data
    # (You could also do more complex actions, like fill forms if you already have a 'page' open.)
    print("[HOOK] Setting auth data before goto.")
    await context.add_cookies([
        {
            "name": "session",
            "value": "abcd1234",
            "domain": "example.com",
            "path": "/"
        }
    ])
    # Optionally manipulate goto_params if needed:
    # goto_params["url"] = goto_params["url"] + "?debug=1"

async def main():
    hooks = {
        "before_goto": before_goto_auth_hook
    }

    browser_cfg = BrowserConfig(headless=True)
    crawler_cfg = CrawlerRunConfig()

    async with AsyncWebCrawler(config=browser_cfg, hooks=hooks) as crawler:
        result = await crawler.arun(url="https://example.com/protected", config=crawler_cfg)
        if result.success:
            print("[OK] Logged in and fetched protected page.")
        else:
            print("[ERROR]", result.error_message)

if __name__ == "__main__":
    asyncio.run(main())
```

**Key Points**

- `before_goto` receives `page`, `context`, and `goto_params`, so you can add cookies, localStorage, or even change the URL itself.
- If you need to run a real login flow (submitting forms), consider `on_browser_created` or `on_page_context_created` if you want to do it once at the start.

---

### 3.2 Setting Up the Browser in `on_browser_created`

If you need to do advanced browser-level configuration (e.g., hooking into the Chrome DevTools Protocol, adjusting command-line flags, etc.), you’ll use `on_browser_created`. No `page` is available yet, but you can set up the **browser** instance itself.

```python
async def on_browser_created_hook(browser, **kwargs):
    """
    Runs immediately after the browser is created, before any pages.
    'browser' here is a Playwright Browser object.
    """
    print("[HOOK] Browser created. Setting up custom stuff.")
    # Possibly connect to DevTools or create an incognito context
    # Example (pseudo-code):
    # devtools_url = await browser.new_context(devtools=True)

# Usage:
async with AsyncWebCrawler(hooks={"on_browser_created": on_browser_created_hook}) as crawler:
    ...
```

---

### 3.3 Adjusting Page or Context in `on_page_context_created`

If you’d like to set default timeouts or inject scripts right after a page context is spun up:

```python
async def on_page_context_created_hook(page, context, **kwargs):
    print("[HOOK] Page context created. Setting default timeouts or scripts.")
    page.set_default_timeout(20000)  # 20 seconds (not awaitable; returns None)
    # Possibly inject a script or set the user locale

# Usage:
hooks = {
    "on_page_context_created": on_page_context_created_hook
}
```

---

### 3.4 Dynamically Updating User Agents

`on_user_agent_updated` fires whenever the strategy updates the user agent. For instance, you might want to set certain cookies or log the change for debugging:

```python
async def on_user_agent_updated_hook(page, context, new_ua, **kwargs):
    print(f"[HOOK] User agent updated to {new_ua}")
    # Maybe add a custom header based on the new UA
    await context.set_extra_http_headers({"X-UA-Source": new_ua})

hooks = {
    "on_user_agent_updated": on_user_agent_updated_hook
}
```

---

### 3.5 Initializing State with `on_execution_started`

`on_execution_started` runs before your main crawling logic. It’s a good place for short, one-time setup tasks (like clearing old caches or storing a timestamp).

```python
async def on_execution_started_hook(page, context, **kwargs):
    print("[HOOK] Execution started. Setting a start timestamp or logging.")
    context.set_default_navigation_timeout(45000)  # 45s if your site is slow

hooks = {
    "on_execution_started": on_execution_started_hook
}
```

---

### 3.6 Post-Processing with `after_goto`

After the crawler finishes navigating (i.e., the page has presumably loaded), you can do additional checks or manipulations—like verifying you’re on the right page, or removing interstitials:

```python
async def after_goto_hook(page, context, response, **kwargs):
    """
    Called right after page.goto() finishes, but before the crawler extracts HTML.
    """
    if response and response.ok:
        print("[HOOK] After goto. Status:", response.status)
        # Maybe remove popups or check if we landed on a login-failure page.
        await page.evaluate("""() => {
            const popup = document.querySelector(".annoying-popup");
            if (popup) popup.remove();
        }""")
    else:
        print("[HOOK] Navigation might have failed: no response, or status not OK.")

hooks = {
    "after_goto": after_goto_hook
}
```

---

### 3.7 Last-Minute Modifications in `before_retrieve_html` or `before_return_html`

Sometimes you need to tweak the page or the raw HTML right before it’s captured.

```python
async def before_retrieve_html_hook(page, context, **kwargs):
    """
    Modify the DOM just before the crawler finalizes the HTML.
    """
    print("[HOOK] Removing adverts before capturing HTML.")
    await page.evaluate("""() => {
        const ads = document.querySelectorAll(".ad-banner");
        ads.forEach(ad => ad.remove());
    }""")

async def before_return_html_hook(page, context, html, **kwargs):
    """
    'html' is the near-finished HTML string. Return an updated string if you like.
    """
    # For example, remove personal data or certain tags from the final text
    print("[HOOK] Sanitizing final HTML.")
    sanitized_html = html.replace("PersonalInfo:", "[REDACTED]")
    return sanitized_html

hooks = {
    "before_retrieve_html": before_retrieve_html_hook,
    "before_return_html": before_return_html_hook
}
```

**Note**: If you want to make last-second changes in `before_return_html`, you can manipulate the `html` string directly. Return a new string if you want to override it.

---

## 4. Putting It All Together

You can combine multiple hooks in a single run. For instance:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig

async def on_browser_created_hook(browser, **kwargs):
    print("[HOOK] Browser is up, no page yet. Good for broad config.")

async def before_goto_auth_hook(page, context, goto_params, **kwargs):
    print("[HOOK] Adding cookies for auth.")
    await context.add_cookies([{"name": "session", "value": "abcd1234", "domain": "example.com"}])

async def after_goto_log_hook(page, context, response, **kwargs):
    if response:
        print("[HOOK] after_goto: Status code:", response.status)

async def main():
    hooks = {
        "on_browser_created": on_browser_created_hook,
        "before_goto": before_goto_auth_hook,
        "after_goto": after_goto_log_hook
    }

    browser_cfg = BrowserConfig(headless=True)
    crawler_cfg = CrawlerRunConfig(verbose=True)

    async with AsyncWebCrawler(config=browser_cfg, hooks=hooks) as crawler:
        result = await crawler.arun("https://example.com/protected", config=crawler_cfg)
        if result.success:
            print("[OK] Protected page length:", len(result.html))
        else:
            print("[ERROR]", result.error_message)

if __name__ == "__main__":
    asyncio.run(main())
```

This example:

1. **`on_browser_created`** sets up the brand-new browser instance.
2. **`before_goto`** injects an auth cookie before the page is accessed.
3. **`after_goto`** logs the resulting HTTP status code.

---

## 5. Common Pitfalls & Best Practices

1. **Hook order**: If multiple hooks do overlapping tasks (e.g., two `before_goto` hooks), be mindful of conflicts or repeated logic.
2. **Async vs. sync**: Some hooks may be used in a synchronous or asynchronous style. Confirm your function signature; if the crawler expects `async`, define `async def`.
3. **Mutating `goto_params`**: `goto_params` is a dict that eventually goes to Playwright’s `page.goto()`. Changing the `url` or adding extra fields can be powerful but can also lead to confusion. Document your changes carefully.
4. **Browser vs. page vs. context**: Not all hooks have both `page` and `context`. For example, `on_browser_created` only has access to **`browser`**.
5. **Avoid Overdoing It**: Hooks are powerful but can lead to complexity. If you find yourself writing massive code inside a hook, consider if a separate “how-to” function with a simpler approach might suffice.
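
On point 1, if two pieces of logic must run at the same stage, one option is to compose them into a single hook before registering it (a generic asyncio pattern sketch, not a library feature):

```python
import asyncio

def compose_hooks(*hooks):
    """Combine several async hooks into one that runs them in order."""
    async def combined(*args, **kwargs):
        result = None
        for hook in hooks:
            result = await hook(*args, **kwargs)
        return result  # the last hook's return value wins
    return combined

async def set_cookies(page, context, **kwargs):
    return "cookies-set"

async def log_navigation(page, context, **kwargs):
    return "navigation-logged"

before_goto = compose_hooks(set_cookies, log_navigation)
print(asyncio.run(before_goto("page-1", "ctx-1")))  # navigation-logged
```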
---

## Conclusion & Next Steps

**Hooks** let you bend Crawl4AI to your will:

- **Authentication** (cookies, localStorage) with `before_goto`
- **Browser-level config** with `on_browser_created`
- **Page or context config** with `on_page_context_created`
- **Content modifications** before capturing HTML (`before_retrieve_html` or `before_return_html`)

**Where to go next**:

- **[Identity-Based Crawling & Anti-Bot](./identity-anti-bot.md)**: Combine hooks with advanced user simulation to avoid bot detection.
- **[Reference → AsyncPlaywrightCrawlerStrategy](../../reference/browser-strategies.md)**: Learn more about how hooks are implemented under the hood.
- **[How-To Guides](../../how-to/)**: Check short, specific recipes for tasks like scraping multiple pages with repeated "Load More" clicks.

With the hook system, you have near-complete control over the browser's lifecycle: setting up environment variables, customizing user agents, or manipulating the HTML. Enjoy the freedom to create sophisticated, fully customized crawling pipelines!

**Last Updated**: 2024-XX-XX

@@ -1,227 +0,0 @@
The following tutorial builds on the **"AsyncWebCrawler Basics"** tutorial and focuses on three main points:

1. **Advanced usage of CSS selectors** (e.g., partial extraction, exclusions)
2. **Handling iframes** (if relevant for your workflow)
3. **Waiting for dynamic content** using `wait_for`, including the new `css:` and `js:` prefixes

---

# Smart Crawling Techniques

In the previous tutorial ([AsyncWebCrawler Basics](./async-webcrawler-basics.md)), you learned how to create an `AsyncWebCrawler` instance, run a basic crawl, and inspect the `CrawlResult`. Now it's time to explore some of the **targeted crawling** features that let you:

1. Select specific parts of a webpage using CSS selectors
2. Exclude or ignore certain page elements
3. Wait for dynamic content to load using `wait_for` (with `css:` or `js:` rules)
4. (Optionally) Handle iframes if your target site embeds additional content

> **Prerequisites**
> - You've read or completed [AsyncWebCrawler Basics](./async-webcrawler-basics.md).
> - You have a working environment for Crawl4AI (Playwright installed, etc.).
---

## 1. Targeting Specific Elements with CSS Selectors

### 1.1 Simple CSS Selector Usage

Let's say you only need to crawl the main article content of a news page. By setting `css_selector` in `CrawlerRunConfig`, your final HTML or Markdown output focuses on that region. For example:
```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig

async def main():
    browser_cfg = BrowserConfig(headless=True)
    crawler_cfg = CrawlerRunConfig(
        css_selector=".article-body",    # Only capture .article-body content
        excluded_tags=["nav", "footer"]  # Optional: skip big nav & footer sections
    )

    async with AsyncWebCrawler(config=browser_cfg) as crawler:
        result = await crawler.arun(
            url="https://news.example.com/story/12345",
            config=crawler_cfg
        )
        if result.success:
            print("[OK] Extracted content length:", len(result.html))
        else:
            print("[ERROR]", result.error_message)

if __name__ == "__main__":
    asyncio.run(main())
```
**Key Parameters**:

- **`css_selector`**: Tells the crawler to focus on `.article-body`.
- **`excluded_tags`**: Tells the crawler to skip specific HTML tags altogether (e.g., `nav` or `footer`).

**Tip**: For extremely noisy pages, you can refine the output further with `excluded_selector`, which takes a CSS selector for elements you want removed from the final output.
### 1.2 Excluding Content with `excluded_selector`

If you want to remove certain sections within `.article-body` (like "related stories" sidebars), set:

```python
CrawlerRunConfig(
    css_selector=".article-body",
    excluded_selector=".related-stories, .ads-banner"
)
```

This combination grabs the main article content while filtering out sidebars or ads.
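To build intuition for what keep-plus-exclude selection does, here is a toy, self-contained sketch using only Python's standard-library `html.parser`. It is not how Crawl4AI implements selection; it merely illustrates the semantics: keep text under one class, drop text under another.

```python
from html.parser import HTMLParser

class SelectorFilter(HTMLParser):
    """Toy filter: keep text inside `keep_class`, drop text inside `drop_class`.

    Assumes well-formed, paired tags (no unclosed void elements) -- this is an
    illustration of the idea, not a production HTML filter.
    """

    def __init__(self, keep_class: str, drop_class: str):
        super().__init__()
        self.keep_class = keep_class
        self.drop_class = drop_class
        self.keep_depth = 0  # > 0 while inside a kept element
        self.drop_depth = 0  # > 0 while inside an excluded element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.drop_depth or self.drop_class in classes:
            self.drop_depth += 1
        elif self.keep_depth or self.keep_class in classes:
            self.keep_depth += 1

    def handle_endtag(self, tag):
        if self.drop_depth:
            self.drop_depth -= 1
        elif self.keep_depth:
            self.keep_depth -= 1

    def handle_data(self, data):
        if self.keep_depth and not self.drop_depth:
            self.chunks.append(data.strip())

sample = """
<div class="article-body">
  <p>Main story text.</p>
  <div class="related-stories"><p>Ignore me.</p></div>
  <p>More story text.</p>
</div>
<footer>Site footer.</footer>
"""

f = SelectorFilter(keep_class="article-body", drop_class="related-stories")
f.feed(sample)
text = " ".join(chunk for chunk in f.chunks if chunk)
print(text)  # Main story text. More story text.
```

The sidebar and footer text never reach the output, which is exactly the effect `css_selector` plus `excluded_selector` gives you at the crawler level.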
---

## 2. Handling Iframes

Some sites embed extra content via `<iframe>` elements (for example, embedded videos or external forms). If you want the crawler to traverse these iframes and merge their content into the final HTML or Markdown, set:

```python
crawler_cfg = CrawlerRunConfig(
    process_iframes=True
)
```

- **`process_iframes=True`**: Tells the crawler (specifically the underlying Playwright strategy) to recursively fetch iframe content and integrate it into `result.html` and `result.markdown`.

**Warning**: Not all sites allow iframes to be crawled (some cross-origin policies might block it). If you see partial or missing data, check the domain policy or the logs for warnings.
---

## 3. Waiting for Dynamic Content

Many modern sites load content dynamically (e.g., after user interaction or asynchronously). Crawl4AI helps you wait for specific conditions before capturing the final HTML. Let's look at `wait_for`.

### 3.1 `wait_for` Basics

In `CrawlerRunConfig`, `wait_for` can be a simple CSS selector or a JavaScript condition. Under the hood, Crawl4AI uses `smart_wait` to interpret what you provide.

```python
crawler_cfg = CrawlerRunConfig(
    wait_for="css:.main-article-loaded",
    page_timeout=30000
)
```

**Example**: `css:.main-article-loaded` means "Wait for an element with the class `.main-article-loaded` to appear in the DOM." If it doesn't appear within 30 seconds (`page_timeout=30000` milliseconds), you'll get a timeout.
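As a rough mental model, the wait is a poll-until-true loop: the condition is checked repeatedly until it becomes truthy or the timeout elapses. The sketch below is a simplified illustration of that idea, not the actual `smart_wait` implementation.

```python
import asyncio
import inspect
import time

async def wait_until(condition, timeout_ms=30000, poll_ms=100):
    """Simplified sketch of a poll-until-true wait loop with a timeout."""
    deadline = time.monotonic() + timeout_ms / 1000
    while time.monotonic() < deadline:
        result = condition()
        if inspect.isawaitable(result):  # allow sync or async conditions
            result = await result
        if result:
            return True
        await asyncio.sleep(poll_ms / 1000)
    raise TimeoutError(f"wait_for condition not met within {timeout_ms} ms")

# A stand-in condition that becomes true on the third check.
state = {"checks": 0}

def ready():
    state["checks"] += 1
    return state["checks"] >= 3

asyncio.run(wait_until(ready, timeout_ms=2000, poll_ms=10))
print("condition met after", state["checks"], "checks")
```

In a real crawl the condition is evaluated against the live page (a selector lookup or your JavaScript function), but the timeout behavior is the same: if the condition never turns truthy, the crawl fails with a timeout error.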
### 3.2 Using Explicit Prefixes

The **`js:`** and **`css:`** prefixes explicitly tell the crawler which approach to use:

- **`wait_for="css:.comments-section"`** → Wait for `.comments-section` to appear
- **`wait_for="js:() => document.querySelectorAll('.comments').length > 5"`** → Wait until there are at least 6 comment elements

**Code Example**:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    config = CrawlerRunConfig(
        wait_for="js:() => document.querySelectorAll('.dynamic-items li').length >= 10",
        page_timeout=20000  # 20s
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(
            url="https://example.com/async-list",
            config=config
        )
        if result.success:
            print("[OK] Dynamic items loaded. HTML length:", len(result.html))
        else:
            print("[ERROR]", result.error_message)

if __name__ == "__main__":
    asyncio.run(main())
```
### 3.3 Fallback Logic

If you **don't** prefix `js:` or `css:`, Crawl4AI tries to detect whether your string looks like a CSS selector or a JavaScript snippet. It first attempts a CSS selector; if that fails, it evaluates the string as a JavaScript function. This can be convenient but can also lead to confusion if the library guesses incorrectly. It's often best to be explicit:

- **`"css:.my-selector"`** → Force CSS
- **`"js:() => myAppState.isReady()"`** → Force JavaScript

**What Should My JavaScript Return?**

- A function that returns `true` once the condition is met (and `false` while it isn't).
- The function can be sync or async, but note that the crawler wraps it in an async loop, polling until it returns `true` or the timeout is reached.
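The prefix handling can be pictured as a small dispatcher. The sketch below is an illustration of that idea, not Crawl4AI's actual parsing code; the fallback heuristic here is deliberately naive.

```python
def classify_wait_for(condition: str):
    """Illustrative dispatcher for wait_for strings (not the real implementation)."""
    if condition.startswith("css:"):
        return ("css", condition[4:].strip())
    if condition.startswith("js:"):
        return ("js", condition[3:].strip())
    # Naive fallback: strings that look like JS functions are treated as JS;
    # anything else is tried as a CSS selector first.
    stripped = condition.strip()
    if stripped.startswith(("()", "function", "async ")):
        return ("js", stripped)
    return ("css", stripped)

print(classify_wait_for("css:.main-article-loaded"))
print(classify_wait_for("js:() => document.querySelectorAll('.comments').length > 5"))
print(classify_wait_for(".comments-section"))
```

Notice that an ambiguous string like `".comments-section"` falls through to the CSS branch; with an explicit prefix there is nothing to guess.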
---

## 4. Example: Targeted Crawl with Iframes & Wait-For

Below is a more advanced snippet combining these features:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, BrowserConfig, CrawlerRunConfig

async def main():
    browser_cfg = BrowserConfig(headless=True)
    crawler_cfg = CrawlerRunConfig(
        css_selector=".main-content",
        process_iframes=True,
        wait_for="css:.loaded-indicator",   # Wait for .loaded-indicator to appear
        excluded_tags=["script", "style"],  # Remove script/style tags
        page_timeout=30000,
        verbose=True
    )

    async with AsyncWebCrawler(config=browser_cfg) as crawler:
        result = await crawler.arun(
            url="https://example.com/iframe-heavy",
            config=crawler_cfg
        )
        if result.success:
            print("[OK] Crawled with iframes. Length of final HTML:", len(result.html))
        else:
            print("[ERROR]", result.error_message)

if __name__ == "__main__":
    asyncio.run(main())
```
**What's Happening**:

1. **`css_selector=".main-content"`** → Focus only on `.main-content` for final extraction.
2. **`process_iframes=True`** → Recursively handle `<iframe>` content.
3. **`wait_for="css:.loaded-indicator"`** → Don't extract until the page shows `.loaded-indicator`.
4. **`excluded_tags=["script", "style"]`** → Remove script and style tags for a cleaner result.

---

## 5. Common Pitfalls & Tips

1. **Be Explicit**: Using the `"js:"` or `"css:"` prefixes spares you headaches when the library would otherwise guess incorrectly.
2. **Timeouts**: If the site never triggers your wait condition, a `TimeoutError` can occur. Check your logs or use `verbose=True` for more clues.
3. **Infinite Scroll**: If you have repeated "load more" loops, you might use [Hooks & Custom Code](./hooks-custom.md) or add your own JavaScript for repeated scrolling.
4. **Iframes**: Some iframes are cross-origin or protected. In those cases, you might not be able to read their content. Check your logs for permission errors.
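For pitfall 3, one common approach is to inject a small clicking/scrolling script before extraction. The snippet below is a hypothetical sketch: the `.load-more` selector is made up, and the commented-out wiring assumes your Crawl4AI version exposes a `js_code` parameter, so treat it as an illustration rather than a recipe.

```python
# Hypothetical "load more" loop, expressed as a JavaScript snippet you could
# inject before extraction. The '.load-more' selector is an assumption.
load_more_js = """
(async () => {
    for (let i = 0; i < 5; i++) {
        const btn = document.querySelector('.load-more');
        if (!btn) break;
        btn.click();
        await new Promise(resolve => setTimeout(resolve, 1000));
    }
})();
"""

# Sketch of wiring it up (commented out; parameter names depend on your version):
# config = CrawlerRunConfig(
#     js_code=[load_more_js],
#     wait_for="js:() => document.querySelectorAll('.item').length >= 50",
# )
```

Pairing the injected loop with a `wait_for` condition ensures extraction only happens once the extra items have actually rendered.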
---
## 6. Summary & Next Steps

With these **Smart Crawling Techniques** you can:

- Precisely target or exclude content using CSS selectors.
- Automatically wait for dynamic elements to load using `wait_for`.
- Merge iframe content into your main page result.

### Where to Go Next?

- **[Link & Media Analysis](./link-media-analysis.md)**: Dive deeper into analyzing extracted links and media items.
- **[Hooks & Custom Code](./hooks-custom.md)**: Learn how to implement repeated actions like infinite scroll or login sequences using hooks.
- **Reference**: For an exhaustive list of parameters and advanced usage, see [CrawlerRunConfig Reference](../../reference/configuration.md).

If you run into issues or want to see real examples from other users, check the [How-To Guides](../../how-to/) or raise a question on GitHub.

**Last updated**: 2024-XX-XX

---

That's it for **Smart Crawling Techniques**! You're now equipped to handle complex pages that rely on dynamic loading, custom CSS selectors, and iframe embedding.
mkdocs copy.yml (Normal file, 94 lines)
@@ -0,0 +1,94 @@
site_name: Crawl4AI Documentation
site_description: 🔥🕷️ Crawl4AI, Open-source LLM Friendly Web Crawler & Scrapper
site_url: https://docs.crawl4ai.com
repo_url: https://github.com/unclecode/crawl4ai
repo_name: unclecode/crawl4ai
docs_dir: docs/md_v2

nav:
  - Home: 'index.md'
  - 'Installation': 'basic/installation.md'
  - 'Docker Deplotment': 'basic/docker-deploymeny.md'
  - 'Quick Start': 'basic/quickstart.md'
  - Changelog & Blog:
      - 'Blog Home': 'blog/index.md'
      - 'Latest (0.4.1)': 'blog/releases/0.4.1.md'
      - 'Changelog': 'https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md'

  - Core:
      - 'Simple Crawling': 'basic/simple-crawling.md'
      - 'Crawler Result': 'basic/crawler-result.md'
      - 'Crawler & Browser Params': 'basic/browser-crawler-config.md'
      - 'Markdown Generation': 'basic/markdown-generation.md'
      - 'Fit Markdown': 'basic/fit-markdown.md'
      - 'Page Interaction': 'basic/page-interaction.md'
      - 'Content Selection': 'basic/content-selection.md'
      - 'Cache Modes': 'basic/cache-modes.md'
      - 'Local files & Raw HTML': 'basic/local-files.md'
      - 'File Downloading': 'basic/file-downloading.md'

  - Advanced:
      - 'Link & Media Handling': 'advanced/link-media.md'
      - 'Hooks & Auth': 'advanced/hooks-auth.md'
      - 'Lazy Loading': 'advanced/lazy-loading.md'
      - 'Proxy & Security': 'advanced/proxy-security.md'
      - 'Session Management': 'advanced/session-management.md'
      - 'Session Management (Advanced)': 'advanced/session-management-advanced.md'

  - Extraction:
      - 'Overview': 'extraction/overview.md'
      - 'LLM Strategy': 'extraction/llm.md'
      - 'Json-CSS Extractor Basic': 'extraction/css.md'
      - 'Json-CSS Extractor Advanced': 'extraction/css-advanced.md'
      - 'Cosine Strategy': 'extraction/cosine.md'
      - 'Chunking': 'extraction/chunking.md'

  - API Reference:
      - 'Parameters Table': 'api/parameters.md'
      - 'AsyncWebCrawler': 'api/async-webcrawler.md'
      - 'AsyncWebCrawler.arun()': 'api/arun.md'
      - 'CrawlResult': 'api/crawl-result.md'
      - 'Strategies': 'api/strategies.md'

  - Tutorial:
      - '1. Getting Started': 'tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md'
      - '2. Advanced Features': 'tutorial/episode_02_Overview_of_Advanced_Features.md'
      - '3. Browser Setup': 'tutorial/episode_03_Browser_Configurations_&_Headless_Crawling.md'
      - '4. Proxy Settings': 'tutorial/episode_04_Advanced_Proxy_and_Security_Settings.md'
      - '5. Dynamic Content': 'tutorial/episode_05_JavaScript_Execution_and_Dynamic_Content_Handling.md'
      - '6. Magic Mode': 'tutorial/episode_06_Magic_Mode_and_Anti-Bot_Protection.md'
      - '7. Content Cleaning': 'tutorial/episode_07_Content_Cleaning_and_Fit_Markdown.md'
      - '8. Media Handling': 'tutorial/episode_08_Media_Handling_Images_Videos_and_Audio.md'
      - '9. Link Analysis': 'tutorial/episode_09_Link_Analysis_and_Smart_Filtering.md'
      - '10. User Simulation': 'tutorial/episode_10_Custom_Headers,_Identity,_and_User_Simulation.md'
      - '11.1. JSON CSS': 'tutorial/episode_11_1_Extraction_Strategies_JSON_CSS.md'
      - '11.2. LLM Strategy': 'tutorial/episode_11_2_Extraction_Strategies_LLM.md'
      - '11.3. Cosine Strategy': 'tutorial/episode_11_3_Extraction_Strategies_Cosine.md'
      - '12. Session Crawling': 'tutorial/episode_12_Session-Based_Crawling_for_Dynamic_Websites.md'
      - '13. Text Chunking': 'tutorial/episode_13_Chunking_Strategies_for_Large_Text_Processing.md'
      - '14. Custom Workflows': 'tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md'

theme:
  name: terminal
  palette: dark

markdown_extensions:
  - pymdownx.highlight:
      anchor_linenums: true
  - pymdownx.inlinehilite
  - pymdownx.snippets
  - pymdownx.superfences
  - admonition
  - pymdownx.details
  - attr_list
  - tables

extra_css:
  - assets/styles.css
  - assets/highlight.css
  - assets/dmvendor.css

extra_javascript:
  - assets/highlight.min.js
  - assets/highlight_init.js
mkdocs.yml (97 lines)
@@ -1,5 +1,5 @@
site_name: Crawl4AI Documentation
site_description: 🔥🕷️ Crawl4AI, Open-source LLM Friendly Web Crawler & Scrapper
site_description: 🚀🤖 Crawl4AI, Open-source LLM-Friendly Web Crawler & Scraper
site_url: https://docs.crawl4ai.com
repo_url: https://github.com/unclecode/crawl4ai
repo_name: unclecode/crawl4ai
@@ -7,67 +7,50 @@ docs_dir: docs/md_v2
nav:
  - Home: 'index.md'
  - 'Installation': 'basic/installation.md'
  - 'Docker Deplotment': 'basic/docker-deploymeny.md'
  - 'Quick Start': 'basic/quickstart.md'
  - Changelog & Blog:
      - 'Blog Home': 'blog/index.md'
      - 'Latest (0.4.1)': 'blog/releases/0.4.1.md'
      - 'Changelog': 'https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md'

  - Basic:
      - 'Simple Crawling': 'basic/simple-crawling.md'
      - 'Output Formats': 'basic/output-formats.md'
      - 'Browser Configuration': 'basic/browser-config.md'
      - 'Page Interaction': 'basic/page-interaction.md'
      - 'Content Selection': 'basic/content-selection.md'
      - 'Cache Modes': 'basic/cache-modes.md'

  - Setup & Installation:
      - "Installation": "core/installation.md"
      - "Docker Deployment": "core/docker-deploymeny.md"
      - "Quick Start": "core/quickstart.md"
  - "Blog & Changelog":
      - "Blog Home": "blog/index.md"
      - "Changelog": "https://github.com/unclecode/crawl4ai/blob/main/CHANGELOG.md"
  - Core:
      - "Simple Crawling": "core/simple-crawling.md"
      - "Crawler Result": "core/crawler-result.md"
      - "Browser & Crawler Config": "core/browser-crawler-config.md"
      - "Markdown Generation": "core/markdown-generation.md"
      - "Fit Markdown": "core/fit-markdown.md"
      - "Page Interaction": "core/page-interaction.md"
      - "Content Selection": "core/content-selection.md"
      - "Cache Modes": "core/cache-modes.md"
      - "Local Files & Raw HTML": "core/local-files.md"
      - "Link & Media": "core/link-media.md"
  - Advanced:
      - 'Content Processing': 'advanced/content-processing.md'
      - 'Magic Mode': 'advanced/magic-mode.md'
      - 'Hooks & Auth': 'advanced/hooks-auth.md'
      - 'Proxy & Security': 'advanced/proxy-security.md'
      - 'Session Management': 'advanced/session-management.md'
      - 'Session Management (Advanced)': 'advanced/session-management-advanced.md'

      - "Overview": "advanced/advanced-features.md"
      - "File Downloading": "advanced/file-downloading.md"
      - "Lazy Loading": "advanced/lazy-loading.md"
      - "Hooks & Auth": "advanced/hooks-auth.md"
      - "Proxy & Security": "advanced/proxy-security.md"
      - "Session Management": "advanced/session-management.md"
      - "Multi-URL Crawling": "advanced/multi-url-crawling.md"
      - "Crawl Dispatcher": "advanced/crawl-dispatcher.md"
      - "Identity Based Crawling": "advanced/identity-based-crawling.md"
      - "SSL Certificate": "advanced/ssl-certificate.md"
  - Extraction:
      - 'Overview': 'extraction/overview.md'
      - 'LLM Strategy': 'extraction/llm.md'
      - 'Json-CSS Extractor Basic': 'extraction/css.md'
      - 'Json-CSS Extractor Advanced': 'extraction/css-advanced.md'
      - 'Cosine Strategy': 'extraction/cosine.md'
      - 'Chunking': 'extraction/chunking.md'

      - "LLM-Free Strategies": "extraction/no-llm-strategies.md"
      - "LLM Strategies": "extraction/llm-strategies.md"
      - "Clustering Strategies": "extraction/clustring-strategies.md"
      - "Chunking": "extraction/chunking.md"
  - API Reference:
      - 'Parameters Table': 'api/parameters.md'
      - 'AsyncWebCrawler': 'api/async-webcrawler.md'
      - 'AsyncWebCrawler.arun()': 'api/arun.md'
      - 'CrawlResult': 'api/crawl-result.md'
      - 'Strategies': 'api/strategies.md'

  - Tutorial:
      - '1. Getting Started': 'tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md'
      - '2. Advanced Features': 'tutorial/episode_02_Overview_of_Advanced_Features.md'
      - '3. Browser Setup': 'tutorial/episode_03_Browser_Configurations_&_Headless_Crawling.md'
      - '4. Proxy Settings': 'tutorial/episode_04_Advanced_Proxy_and_Security_Settings.md'
      - '5. Dynamic Content': 'tutorial/episode_05_JavaScript_Execution_and_Dynamic_Content_Handling.md'
      - '6. Magic Mode': 'tutorial/episode_06_Magic_Mode_and_Anti-Bot_Protection.md'
      - '7. Content Cleaning': 'tutorial/episode_07_Content_Cleaning_and_Fit_Markdown.md'
      - '8. Media Handling': 'tutorial/episode_08_Media_Handling_Images_Videos_and_Audio.md'
      - '9. Link Analysis': 'tutorial/episode_09_Link_Analysis_and_Smart_Filtering.md'
      - '10. User Simulation': 'tutorial/episode_10_Custom_Headers,_Identity,_and_User_Simulation.md'
      - '11.1. JSON CSS': 'tutorial/episode_11_1_Extraction_Strategies_JSON_CSS.md'
      - '11.2. LLM Strategy': 'tutorial/episode_11_2_Extraction_Strategies_LLM.md'
      - '11.3. Cosine Strategy': 'tutorial/episode_11_3_Extraction_Strategies_Cosine.md'
      - '12. Session Crawling': 'tutorial/episode_12_Session-Based_Crawling_for_Dynamic_Websites.md'
      - '13. Text Chunking': 'tutorial/episode_13_Chunking_Strategies_for_Large_Text_Processing.md'
      - '14. Custom Workflows': 'tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md'

      - "AsyncWebCrawler": "api/async-webcrawler.md"
      - "arun()": "api/arun.md"
      - "Browser & Crawler Config": "api/parameters.md"
      - "CrawlResult": "api/crawl-result.md"
      - "Strategies": "api/strategies.md"

theme:
  name: terminal
  palette: dark
  name: 'terminal'
  palette: 'dark'

markdown_extensions:
  - pymdownx.highlight:
@@ -33,7 +33,8 @@ dependencies = [
    "psutil>=6.1.1",
    "nltk>=3.9.1",
    "playwright",
    "aiofiles"
    "aiofiles",
    "rich>=13.9.4",
]
classifiers = [
    "Development Status :: 3 - Alpha",
@@ -19,3 +19,4 @@ pydantic>=2.10
pyOpenSSL>=24.3.0
psutil>=6.1.1
nltk>=3.9.1
rich>=13.9.4