feat: Add advanced link head extraction with three-layer scoring system (#1)
Squashed commit from the feature/link-extractor branch implementing comprehensive link analysis:

- Extract HTML head content from discovered links with parallel processing
- Three-layer scoring: Intrinsic (URL quality), Contextual (BM25), and Total scores
- New `LinkExtractionConfig` class for type-safe configuration
- Pattern-based filtering for internal/external links
- Comprehensive documentation and examples

@@ -105,7 +105,366 @@ result.links = {

---
## 2. Advanced Link Head Extraction & Scoring

Ever wanted to not just extract links, but also get the actual content (title, description, metadata) from the linked pages, and score them for relevance? That is exactly what Link Head Extraction does: it fetches the `<head>` section from each discovered link and scores the results using multiple algorithms.

### 2.1 Why Link Head Extraction?

When you crawl a page, you get hundreds of links, but which ones are actually valuable? Link Head Extraction answers this by:

1. **Fetching head content** from each link (title, description, meta tags)
2. **Scoring links intrinsically** based on URL quality, text relevance, and context
3. **Scoring links contextually** using the BM25 algorithm when you provide a search query
4. **Combining scores intelligently** to give you a final relevance ranking
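Once extraction finishes, each link dict carries its scores, so the final ranking is an ordinary sort. A minimal sketch using hypothetical link dicts shaped like `result.links["internal"]` entries:

```python
# Hypothetical link dicts shaped like Crawl4AI's output entries.
links = [
    {"href": "https://example.com/docs/api", "total_score": 0.82},
    {"href": "https://example.com/privacy", "total_score": 0.11},
    {"href": "https://example.com/guide", "total_score": 0.64},
]

# Rank by total score, treating a missing or None score as 0.
ranked = sorted(links, key=lambda l: l.get("total_score") or 0, reverse=True)

for link in ranked:
    print(f"{link['total_score']:.2f}  {link['href']}")
```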
### 2.2 Complete Working Example

Here's a full example you can copy, paste, and run immediately:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig
from crawl4ai.async_configs import LinkExtractionConfig

async def extract_link_heads_example():
    """
    Complete example showing link head extraction with scoring.
    This will crawl a documentation site and extract head content from internal links.
    """

    # Configure link head extraction
    config = CrawlerRunConfig(
        # Enable link head extraction with detailed configuration
        link_extraction_config=LinkExtractionConfig(
            include_internal=True,   # Extract from internal links
            include_external=False,  # Skip external links for this example
            max_links=10,            # Limit to 10 links for demo
            concurrency=5,           # Process 5 links simultaneously
            timeout=10,              # 10 second timeout per link
            query="API documentation guide",  # Query for contextual scoring
            score_threshold=0.3,     # Only include links scoring above 0.3
            verbose=True             # Show detailed progress
        ),
        # Enable intrinsic scoring (URL quality, text relevance)
        score_links=True,
        # Keep output clean
        only_text=True,
        verbose=True
    )

    async with AsyncWebCrawler() as crawler:
        # Crawl a documentation site (great for testing)
        result = await crawler.arun("https://docs.python.org/3/", config=config)

        if result.success:
            print(f"✅ Successfully crawled: {result.url}")
            print(f"📄 Page title: {result.metadata.get('title', 'No title')}")

            # Access links (now enhanced with head data and scores)
            internal_links = result.links.get("internal", [])
            external_links = result.links.get("external", [])

            print(f"\n🔗 Found {len(internal_links)} internal links")
            print(f"🌍 Found {len(external_links)} external links")

            # Count links with head data
            links_with_head = [link for link in internal_links
                               if link.get("head_data") is not None]
            print(f"🧠 Links with head data extracted: {len(links_with_head)}")

            # Show the top 3 scoring links
            print("\n🏆 Top 3 Links with Full Scoring:")
            for i, link in enumerate(links_with_head[:3]):
                print(f"\n{i+1}. {link['href']}")
                print(f"   Link Text: '{link.get('text', 'No text')[:50]}...'")

                # Show all three score types
                intrinsic = link.get('intrinsic_score')
                contextual = link.get('contextual_score')
                total = link.get('total_score')

                if intrinsic is not None:
                    print(f"   📊 Intrinsic Score: {intrinsic:.2f}/10.0 (URL quality & context)")
                if contextual is not None:
                    print(f"   🎯 Contextual Score: {contextual:.3f} (BM25 relevance to query)")
                if total is not None:
                    print(f"   ⭐ Total Score: {total:.3f} (combined final score)")

                # Show extracted head data
                head_data = link.get("head_data", {})
                if head_data:
                    title = head_data.get("title", "No title")
                    description = head_data.get("meta", {}).get("description", "No description")

                    print(f"   📰 Title: {title[:60]}...")
                    if description:
                        print(f"   📝 Description: {description[:80]}...")

                # Show extraction status
                status = link.get("head_extraction_status", "unknown")
                print(f"   ✅ Extraction Status: {status}")
        else:
            print(f"❌ Crawl failed: {result.error_message}")

# Run the example
if __name__ == "__main__":
    asyncio.run(extract_link_heads_example())
```
**Expected Output:**
```
✅ Successfully crawled: https://docs.python.org/3/
📄 Page title: 3.13.5 Documentation
🔗 Found 53 internal links
🌍 Found 1 external links
🧠 Links with head data extracted: 10

🏆 Top 3 Links with Full Scoring:

1. https://docs.python.org/3.15/
   Link Text: 'Python 3.15 (in development)...'
   📊 Intrinsic Score: 4.17/10.0 (URL quality & context)
   🎯 Contextual Score: 1.000 (BM25 relevance to query)
   ⭐ Total Score: 5.917 (combined final score)
   📰 Title: 3.15.0a0 Documentation...
   📝 Description: The official Python documentation...
   ✅ Extraction Status: valid
```

### 2.3 Configuration Deep Dive

The `LinkExtractionConfig` class supports these options:

```python
from crawl4ai.async_configs import LinkExtractionConfig

link_extraction_config = LinkExtractionConfig(
    # BASIC SETTINGS
    verbose=True,             # Show detailed logs (recommended for learning)

    # LINK FILTERING
    include_internal=True,    # Include same-domain links
    include_external=True,    # Include different-domain links
    max_links=50,             # Maximum links to process (prevents overload)

    # PATTERN FILTERING
    include_patterns=[        # Only process links matching these patterns
        "*/docs/*",
        "*/api/*",
        "*/reference/*"
    ],
    exclude_patterns=[        # Skip links matching these patterns
        "*/login*",
        "*/admin*"
    ],

    # PERFORMANCE SETTINGS
    concurrency=10,           # How many links to process simultaneously
    timeout=5,                # Seconds to wait per link

    # RELEVANCE SCORING
    query="machine learning API",  # Query for BM25 contextual scoring
    score_threshold=0.3,      # Only include links above this score
)
```
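The patterns above are glob-style. Purely as an illustration (Crawl4AI's exact matching rules are internal and may differ), the same include/exclude logic can be sketched with Python's `fnmatch`:

```python
from fnmatch import fnmatch

def matches(url: str, include=None, exclude=None) -> bool:
    """Glob-style URL filtering sketch; Crawl4AI's actual
    matching semantics may differ from fnmatch."""
    # An include list means: the URL must match at least one pattern.
    if include and not any(fnmatch(url, p) for p in include):
        return False
    # An exclude list means: the URL must match none of the patterns.
    if exclude and any(fnmatch(url, p) for p in exclude):
        return False
    return True

include = ["*/docs/*", "*/api/*", "*/reference/*"]
exclude = ["*/login*", "*/admin*"]

print(matches("https://site.com/docs/intro", include, exclude))  # True
print(matches("https://site.com/docs/login", include, exclude))  # False (excluded)
```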
### 2.4 Understanding the Three Score Types

Each extracted link gets three different scores:

#### 1. **Intrinsic Score (0-10)** - URL and Content Quality
Based on URL structure, link text quality, and page context:

```python
# High intrinsic score indicators:
# ✅ Clean URL structure (docs.python.org/api/reference)
# ✅ Meaningful link text ("API Reference Guide")
# ✅ Relevant to page context
# ✅ Not buried deep in navigation

# Low intrinsic score indicators:
# ❌ Random URLs (site.com/x7f9g2h)
# ❌ No link text or generic text ("Click here")
# ❌ Unrelated to page content
```
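The real intrinsic scorer is internal to Crawl4AI. Purely as an illustration of the signals listed above, here is a hypothetical heuristic on the same 0-10 scale (not the actual algorithm):

```python
import re

GENERIC_TEXT = {"click here", "here", "read more", "link", "more"}

def toy_intrinsic_score(href: str, text: str) -> float:
    """Hypothetical heuristic, NOT Crawl4AI's algorithm: rewards
    clean paths and meaningful anchor text, penalizes random
    tokens and generic text, clamped to [0, 10]."""
    score = 5.0
    path = href.split("://", 1)[-1]
    segments = [s for s in path.split("/")[1:] if s]
    # Clean, word-like path segments raise the score.
    if segments and all(re.fullmatch(r"[a-z0-9-]+", s) for s in segments):
        score += 2.0
    # Random-looking tokens (letter/digit soup) lower it.
    if any(re.fullmatch(r"(?=.*\d)[a-z0-9]{6,}", s) for s in segments):
        score -= 3.0
    words = text.strip().lower()
    # Generic or empty anchor text hurts; multi-word text helps.
    if words in GENERIC_TEXT or not words:
        score -= 2.0
    elif len(words.split()) >= 2:
        score += 1.0
    return max(0.0, min(10.0, score))

print(toy_intrinsic_score("https://docs.python.org/api/reference", "API Reference Guide"))
print(toy_intrinsic_score("https://site.com/x7f9g2h", "Click here"))
```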
#### 2. **Contextual Score (0-1)** - BM25 Relevance to Query
Only available when you provide a `query`. Uses the BM25 algorithm against head content:

```python
# Example: query = "machine learning tutorial"
# High contextual score: Link to "Complete Machine Learning Guide"
# Low contextual score: Link to "Privacy Policy"
```
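To make the idea concrete, here is a minimal, self-contained BM25 sketch scoring link titles against a query. Crawl4AI's implementation and normalization may differ; this just shows why a relevant title outranks an unrelated one:

```python
import math
from collections import Counter

def bm25_scores(query: str, docs: list, k1: float = 1.5, b: float = 0.75) -> list:
    """Minimal BM25 over whitespace-tokenized documents."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avgdl = sum(len(t) for t in tokenized) / n
    q_terms = query.lower().split()
    # Document frequency per query term.
    df = {t: sum(1 for doc in tokenized if t in doc) for t in q_terms}
    scores = []
    for doc in tokenized:
        tf = Counter(doc)
        s = 0.0
        for t in q_terms:
            if df[t] == 0:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            num = tf[t] * (k1 + 1)
            den = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            s += idf * num / den
        scores.append(s)
    return scores

heads = [
    "Complete Machine Learning Guide",
    "Privacy Policy",
    "Machine Learning Tutorial for Beginners",
]
print(bm25_scores("machine learning tutorial", heads))
```

The privacy-policy title shares no terms with the query, so it scores zero, while the tutorial title matches all three terms and ranks highest.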
#### 3. **Total Score** - Smart Combination
Intelligently combines intrinsic and contextual scores with fallbacks:

```python
# When both scores available: (intrinsic * 0.3) + (contextual * 0.7)
# When only intrinsic: uses intrinsic score
# When only contextual: uses contextual score
# When neither: not calculated
```
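The fallback logic can be sketched directly. The 0.3/0.7 weights are taken from this section's description; treat them as illustrative rather than a guaranteed match for the library's internals:

```python
def total_score(intrinsic=None, contextual=None):
    """Combine scores as described above, with fallbacks when
    only one score (or neither) is available."""
    if intrinsic is not None and contextual is not None:
        return intrinsic * 0.3 + contextual * 0.7
    if intrinsic is not None:
        return intrinsic
    if contextual is not None:
        return contextual
    return None  # Neither score available: not calculated

print(round(total_score(4.17, 1.0), 3))
print(total_score(intrinsic=4.17))
print(total_score())
```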
### 2.5 Practical Use Cases

#### Use Case 1: Research Assistant
Find the most relevant documentation pages:

```python
async def research_assistant():
    config = CrawlerRunConfig(
        link_extraction_config=LinkExtractionConfig(
            include_internal=True,
            include_external=True,
            include_patterns=["*/docs/*", "*/tutorial/*", "*/guide/*"],
            query="machine learning neural networks",
            max_links=20,
            score_threshold=0.5,  # Only high-relevance links
            verbose=True
        ),
        score_links=True
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://scikit-learn.org/", config=config)

        if result.success:
            # Get high-scoring links (treat a None score as 0)
            good_links = [link for link in result.links.get("internal", [])
                          if (link.get("total_score") or 0) > 0.7]

            print(f"🎯 Found {len(good_links)} highly relevant links:")
            for link in good_links[:5]:
                print(f"⭐ {link['total_score']:.3f} - {link['href']}")
                print(f"   {link.get('head_data', {}).get('title', 'No title')}")
```
#### Use Case 2: Content Discovery
Find all API endpoints and references:

```python
async def api_discovery():
    config = CrawlerRunConfig(
        link_extraction_config=LinkExtractionConfig(
            include_internal=True,
            include_patterns=["*/api/*", "*/reference/*"],
            exclude_patterns=["*/deprecated/*"],
            max_links=100,
            concurrency=15,
            verbose=False  # Clean output
        ),
        score_links=True
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://docs.example-api.com/", config=config)

        if result.success:
            api_links = result.links.get("internal", [])

            # Group by endpoint type
            endpoints = {}
            for link in api_links:
                if link.get("head_data"):
                    title = link["head_data"].get("title", "")
                    if "GET" in title:
                        endpoints.setdefault("GET", []).append(link)
                    elif "POST" in title:
                        endpoints.setdefault("POST", []).append(link)

            for method, links in endpoints.items():
                print(f"\n{method} Endpoints ({len(links)}):")
                for link in links[:3]:
                    print(f"  • {link['href']}")
```
#### Use Case 3: Link Quality Analysis
Analyze website structure and content quality:

```python
async def quality_analysis():
    config = CrawlerRunConfig(
        link_extraction_config=LinkExtractionConfig(
            include_internal=True,
            max_links=200,
            concurrency=20,
        ),
        score_links=True
    )

    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://your-website.com/", config=config)

        if result.success:
            links = result.links.get("internal", [])

            # Analyze intrinsic scores (treat a None score as 0)
            scores = [link.get('intrinsic_score') or 0 for link in links]
            avg_score = sum(scores) / len(scores) if scores else 0

            print("📊 Link Quality Analysis:")
            print(f"   Average intrinsic score: {avg_score:.2f}/10.0")
            print(f"   High quality links (>7.0): {len([s for s in scores if s > 7.0])}")
            print(f"   Low quality links (<3.0): {len([s for s in scores if s < 3.0])}")

            # Find problematic links
            bad_links = [link for link in links
                         if (link.get('intrinsic_score') or 0) < 2.0]

            if bad_links:
                print("\n⚠️ Links needing attention:")
                for link in bad_links[:5]:
                    score = link.get('intrinsic_score') or 0
                    print(f"   {link['href']} (score: {score:.1f})")
```
### 2.6 Performance Tips

1. **Start Small**: Begin with `max_links=10` to understand the feature
2. **Use Patterns**: Filter with `include_patterns` to focus on relevant sections
3. **Adjust Concurrency**: Higher concurrency is faster but uses more resources
4. **Set Timeouts**: Use `timeout=5` to prevent hanging on slow sites
5. **Use Score Thresholds**: Filter out low-quality links with `score_threshold`
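Behind a `concurrency` setting, the usual pattern is a semaphore bounding simultaneous fetches plus a per-task timeout. A generic asyncio sketch of that pattern (not Crawl4AI's internals; `fetch_head` is a hypothetical stand-in for a real HTTP fetch):

```python
import asyncio

async def fetch_head(url: str) -> str:
    # Stand-in for a real HTTP fetch of a page's <head> section.
    await asyncio.sleep(0.01)
    return f"<head for {url}>"

async def fetch_all(urls, concurrency=5, timeout=10):
    # A semaphore caps how many fetches run at once.
    sem = asyncio.Semaphore(concurrency)

    async def bounded(url):
        async with sem:
            try:
                return await asyncio.wait_for(fetch_head(url), timeout)
            except asyncio.TimeoutError:
                return None  # Treat a slow link like a failed extraction

    return await asyncio.gather(*(bounded(u) for u in urls))

results = asyncio.run(fetch_all([f"https://example.com/page{i}" for i in range(10)]))
print(len(results))  # 10
```

Raising `concurrency` shortens the wall-clock time but multiplies simultaneous connections, which is why the tips above suggest tuning it alongside `timeout`.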
### 2.7 Troubleshooting

**No head data extracted?**
```python
# Check your configuration:
config = CrawlerRunConfig(
    link_extraction_config=LinkExtractionConfig(
        verbose=True  # ← Enable to see what's happening
    )
)
```

**Scores showing as None?**
```python
# Make sure scoring is enabled:
config = CrawlerRunConfig(
    score_links=True,  # ← Enable intrinsic scoring
    link_extraction_config=LinkExtractionConfig(
        query="your search terms"  # ← For contextual scoring
    )
)
```

**Process taking too long?**
```python
# Optimize performance:
link_extraction_config = LinkExtractionConfig(
    max_links=20,                       # ← Reduce the number of links
    concurrency=10,                     # ← Increase parallelism
    timeout=3,                          # ← Use a shorter timeout
    include_patterns=["*/important/*"]  # ← Focus on key areas
)
```
---

## 3. Domain Filtering

Some websites contain hundreds of third-party or affiliate links. You can filter out certain domains at **crawl time** by configuring the crawler. The most relevant parameters in `CrawlerRunConfig` are:

@@ -114,7 +473,7 @@ Some websites contain hundreds of third-party or affiliate links. You can filter

- **`exclude_social_media_links`**: If `True`, automatically skip known social platforms.
- **`exclude_domains`**: Provide a list of custom domains you want to exclude (e.g., `["spammyads.com", "tracker.net"]`).

### 3.1 Example: Excluding External & Social Media Links

```python
import asyncio
@@ -143,7 +502,7 @@ if __name__ == "__main__":
    asyncio.run(main())
```

### 3.2 Example: Excluding Specific Domains

If you want to let external links in, but specifically exclude a domain (e.g., `suspiciousads.com`), do this:

@@ -157,9 +516,9 @@ This approach is handy when you still want external links but need to block cert

---

## 4. Media Extraction

### 4.1 Accessing `result.media`

By default, Crawl4AI collects images, audio, video URLs, and data tables it finds on the page. These are stored in `result.media`, a dictionary keyed by media type (e.g., `images`, `videos`, `audio`, `tables`).

@@ -237,7 +596,7 @@ Depending on your Crawl4AI version or scraping strategy, these dictionaries can

With these details, you can easily filter out or focus on certain images (for instance, ignoring images with very low scores or from a different domain), or gather metadata for analytics.

### 4.2 Excluding External Images

If you're dealing with heavy pages or want to skip third-party images (advertisements, for example), you can turn on: