diff --git a/README.md b/README.md
index 5f867cc3..6c8a5a2a 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,9 @@
-# Crawl4AI (Async Version) 🕷️🤖
+# 🔥🕷️ Crawl4AI: LLM Friendly Web Crawler & Scraper
+
+
[](https://github.com/unclecode/crawl4ai/stargazers)
+
[](https://github.com/unclecode/crawl4ai/network/members)
[](https://github.com/unclecode/crawl4ai/issues)
[](https://github.com/unclecode/crawl4ai/pulls)
@@ -8,6 +11,12 @@
Crawl4AI simplifies asynchronous web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. 🆓🌐
+## 🌟 Meet the Crawl4AI Assistant: Your Copilot for Crawling
+Use the [Crawl4AI GPT Assistant](https://tinyurl.com/crawl4ai-gpt) as your AI-powered copilot! With this assistant, you can:
+- 🧑‍💻 Generate code for complex crawling and extraction tasks
+- 💡 Get tailored support and examples
+- 📘 Learn Crawl4AI faster with step-by-step guidance
+
## New in 0.3.72 ✨
- 📄 Fit markdown generation for extracting main article content.
@@ -19,7 +28,7 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc
## Try it Now!
-✨ Play around with this [](https://colab.research.google.com/drive/1REChY6fXQf-EaVYLv0eHEWvzlYxGm0pd?usp=sharing)
+✨ Play around with this [](https://colab.research.google.com/drive/1SgRPrByQLzjRfwoRNq1wSGE9nYY_EE8C?usp=sharing)
✨ Visit our [Documentation Website](https://crawl4ai.com/mkdocs/)
diff --git a/docs/examples/quickstart.ipynb b/docs/examples/quickstart.ipynb
index 71f23acb..4751dec8 100644
--- a/docs/examples/quickstart.ipynb
+++ b/docs/examples/quickstart.ipynb
@@ -1,735 +1,664 @@
{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "6yLvrXn7yZQI"
- },
- "source": [
- "# Crawl4AI: Advanced Web Crawling and Data Extraction\n",
- "\n",
- "Welcome to this interactive notebook showcasing Crawl4AI, an advanced asynchronous web crawling and data extraction library.\n",
- "\n",
- "- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n",
- "- Twitter: [@unclecode](https://twitter.com/unclecode)\n",
- "- Website: [https://crawl4ai.com](https://crawl4ai.com)\n",
- "\n",
- "Let's explore the powerful features of Crawl4AI!"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "KIn_9nxFyZQK"
- },
- "source": [
- "## Installation\n",
- "\n",
- "First, let's install Crawl4AI from GitHub:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "mSnaxLf3zMog"
- },
- "outputs": [],
- "source": [
- "!sudo apt-get update && sudo apt-get install -y libwoff1 libopus0 libwebp6 libwebpdemux2 libenchant1c2a libgudev-1.0-0 libsecret-1-0 libhyphen0 libgdk-pixbuf2.0-0 libegl1 libnotify4 libxslt1.1 libevent-2.1-7 libgles2 libvpx6 libxcomposite1 libatk1.0-0 libatk-bridge2.0-0 libepoxy0 libgtk-3-0 libharfbuzz-icu0"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "xlXqaRtayZQK"
- },
- "outputs": [],
- "source": [
- "!pip install crawl4ai\n",
- "!pip install nest-asyncio\n",
- "!playwright install"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "qKCE7TI7yZQL"
- },
- "source": [
- "Now, let's import the necessary libraries:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {
- "id": "I67tr7aAyZQL"
- },
- "outputs": [],
- "source": [
- "import asyncio\n",
- "import nest_asyncio\n",
- "from crawl4ai import AsyncWebCrawler\n",
- "from crawl4ai.extraction_strategy import JsonCssExtractionStrategy, LLMExtractionStrategy\n",
- "import json\n",
- "import time\n",
- "from pydantic import BaseModel, Field\n",
- "\n",
- "nest_asyncio.apply()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "h7yR_Rt_yZQM"
- },
- "source": [
- "## Basic Usage\n",
- "\n",
- "Let's start with a simple crawl example:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "yBh6hf4WyZQM",
- "outputId": "0f83af5c-abba-4175-ed95-70b7512e6bcc"
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
- "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
- "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.05 seconds\n",
- "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.05 seconds.\n",
- "18102\n"
- ]
- }
- ],
- "source": [
- "async def simple_crawl():\n",
- " async with AsyncWebCrawler(verbose=True) as crawler:\n",
- " result = await crawler.arun(url=\"https://www.nbcnews.com/business\")\n",
- " print(len(result.markdown))\n",
- "await simple_crawl()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "9rtkgHI28uI4"
- },
- "source": [
- "💡 By default, **Crawl4AI** caches the result of every URL, so the next time you call it, you’ll get an instant result. But if you want to bypass the cache, just set `bypass_cache=True`."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "MzZ0zlJ9yZQM"
- },
- "source": [
- "## Advanced Features\n",
- "\n",
- "### Executing JavaScript and Using CSS Selectors"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "gHStF86xyZQM",
- "outputId": "34d0fb6d-4dec-4677-f76e-85a1f082829b"
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
- "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
- "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n",
- "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n",
- "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 6.06 seconds\n",
- "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.10 seconds\n",
- "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
- "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.11 seconds.\n",
- "41135\n"
- ]
- }
- ],
- "source": [
- "async def js_and_css():\n",
- " async with AsyncWebCrawler(verbose=True) as crawler:\n",
- " js_code = [\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"]\n",
- " result = await crawler.arun(\n",
- " url=\"https://www.nbcnews.com/business\",\n",
- " js_code=js_code,\n",
- " # css_selector=\"YOUR_CSS_SELECTOR_HERE\",\n",
- " bypass_cache=True\n",
- " )\n",
- " print(len(result.markdown))\n",
- "\n",
- "await js_and_css()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "cqE_W4coyZQM"
- },
- "source": [
- "### Using a Proxy\n",
- "\n",
- "Note: You'll need to replace the proxy URL with a working proxy for this example to run successfully."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "QjAyiAGqyZQM"
- },
- "outputs": [],
- "source": [
- "async def use_proxy():\n",
- " async with AsyncWebCrawler(verbose=True, proxy=\"http://your-proxy-url:port\") as crawler:\n",
- " result = await crawler.arun(\n",
- " url=\"https://www.nbcnews.com/business\",\n",
- " bypass_cache=True\n",
- " )\n",
- " print(result.markdown[:500]) # Print first 500 characters\n",
- "\n",
- "# Uncomment the following line to run the proxy example\n",
- "# await use_proxy()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "XTZ88lbayZQN"
- },
- "source": [
- "### Extracting Structured Data with OpenAI\n",
- "\n",
- "Note: You'll need to set your OpenAI API key as an environment variable for this example to work."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 14,
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "fIOlDayYyZQN",
- "outputId": "cb8359cc-dee0-4762-9698-5dfdcee055b8"
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
- "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
- "[LOG] 🕸️ Crawling https://openai.com/api/pricing/ using AsyncPlaywrightCrawlerStrategy...\n",
- "[LOG] ✅ Crawled https://openai.com/api/pricing/ successfully!\n",
- "[LOG] 🚀 Crawling done for https://openai.com/api/pricing/, success: True, time taken: 3.77 seconds\n",
- "[LOG] 🚀 Content extracted for https://openai.com/api/pricing/, success: True, time taken: 0.21 seconds\n",
- "[LOG] 🔥 Extracting semantic blocks for https://openai.com/api/pricing/, Strategy: AsyncWebCrawler\n",
- "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 0\n",
- "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 1\n",
- "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 2\n",
- "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 3\n",
- "[LOG] Extracted 4 blocks from URL: https://openai.com/api/pricing/ block index: 3\n",
- "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 4\n",
- "[LOG] Extracted 5 blocks from URL: https://openai.com/api/pricing/ block index: 0\n",
- "[LOG] Extracted 1 blocks from URL: https://openai.com/api/pricing/ block index: 4\n",
- "[LOG] Extracted 8 blocks from URL: https://openai.com/api/pricing/ block index: 1\n",
- "[LOG] Extracted 12 blocks from URL: https://openai.com/api/pricing/ block index: 2\n",
- "[LOG] 🚀 Extraction done for https://openai.com/api/pricing/, time taken: 8.55 seconds.\n",
- "5029\n"
- ]
- }
- ],
- "source": [
- "import os\n",
- "from google.colab import userdata\n",
- "os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')\n",
- "\n",
- "class OpenAIModelFee(BaseModel):\n",
- " model_name: str = Field(..., description=\"Name of the OpenAI model.\")\n",
- " input_fee: str = Field(..., description=\"Fee for input token for the OpenAI model.\")\n",
- " output_fee: str = Field(..., description=\"Fee for output token for the OpenAI model.\")\n",
- "\n",
- "async def extract_openai_fees():\n",
- " async with AsyncWebCrawler(verbose=True) as crawler:\n",
- " result = await crawler.arun(\n",
- " url='https://openai.com/api/pricing/',\n",
- " word_count_threshold=1,\n",
- " extraction_strategy=LLMExtractionStrategy(\n",
- " provider=\"openai/gpt-4o\", api_token=os.getenv('OPENAI_API_KEY'),\n",
- " schema=OpenAIModelFee.schema(),\n",
- " extraction_type=\"schema\",\n",
- " instruction=\"\"\"From the crawled content, extract all mentioned model names along with their fees for input and output tokens.\n",
- " Do not miss any models in the entire content. One extracted model JSON format should look like this:\n",
- " {\"model_name\": \"GPT-4\", \"input_fee\": \"US$10.00 / 1M tokens\", \"output_fee\": \"US$30.00 / 1M tokens\"}.\"\"\"\n",
- " ),\n",
- " bypass_cache=True,\n",
- " )\n",
- " print(len(result.extracted_content))\n",
- "\n",
- "# Uncomment the following line to run the OpenAI extraction example\n",
- "await extract_openai_fees()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "BypA5YxEyZQN"
- },
- "source": [
- "### Advanced Multi-Page Crawling with JavaScript Execution"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "tfkcVQ0b7mw-"
- },
- "source": [
- "## Advanced Multi-Page Crawling with JavaScript Execution\n",
- "\n",
- "This example demonstrates Crawl4AI's ability to handle complex crawling scenarios, specifically extracting commits from multiple pages of a GitHub repository. The challenge here is that clicking the \"Next\" button doesn't load a new page, but instead uses asynchronous JavaScript to update the content. This is a common hurdle in modern web crawling.\n",
- "\n",
- "To overcome this, we use Crawl4AI's custom JavaScript execution to simulate clicking the \"Next\" button, and implement a custom hook to detect when new data has loaded. Our strategy involves comparing the first commit's text before and after \"clicking\" Next, waiting until it changes to confirm new data has rendered. This showcases Crawl4AI's flexibility in handling dynamic content and its ability to implement custom logic for even the most challenging crawling tasks."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 11,
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "qUBKGpn3yZQN",
- "outputId": "3e555b6a-ed33-42f4-cce9-499a923fbe17"
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
- "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
- "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
- "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
- "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 5.16 seconds\n",
- "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.28 seconds\n",
- "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
- "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.28 seconds.\n",
- "Page 1: Found 35 commits\n",
- "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
- "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
- "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.78 seconds\n",
- "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.90 seconds\n",
- "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
- "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.90 seconds.\n",
- "Page 2: Found 35 commits\n",
- "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
- "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
- "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 2.00 seconds\n",
- "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.74 seconds\n",
- "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
- "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.75 seconds.\n",
- "Page 3: Found 35 commits\n",
- "Successfully crawled 105 commits across 3 pages\n"
- ]
- }
- ],
- "source": [
- "import re\n",
- "from bs4 import BeautifulSoup\n",
- "\n",
- "async def crawl_typescript_commits():\n",
- " first_commit = \"\"\n",
- " async def on_execution_started(page):\n",
- " nonlocal first_commit\n",
- " try:\n",
- " while True:\n",
- " await page.wait_for_selector('li.Box-sc-g0xbh4-0 h4')\n",
- " commit = await page.query_selector('li.Box-sc-g0xbh4-0 h4')\n",
- " commit = await commit.evaluate('(element) => element.textContent')\n",
- " commit = re.sub(r'\\s+', '', commit)\n",
- " if commit and commit != first_commit:\n",
- " first_commit = commit\n",
- " break\n",
- " await asyncio.sleep(0.5)\n",
- " except Exception as e:\n",
- " print(f\"Warning: New content didn't appear after JavaScript execution: {e}\")\n",
- "\n",
- " async with AsyncWebCrawler(verbose=True) as crawler:\n",
- " crawler.crawler_strategy.set_hook('on_execution_started', on_execution_started)\n",
- "\n",
- " url = \"https://github.com/microsoft/TypeScript/commits/main\"\n",
- " session_id = \"typescript_commits_session\"\n",
- " all_commits = []\n",
- "\n",
- " js_next_page = \"\"\"\n",
- " const button = document.querySelector('a[data-testid=\"pagination-next-button\"]');\n",
- " if (button) button.click();\n",
- " \"\"\"\n",
- "\n",
- " for page in range(3): # Crawl 3 pages\n",
- " result = await crawler.arun(\n",
- " url=url,\n",
- " session_id=session_id,\n",
- " css_selector=\"li.Box-sc-g0xbh4-0\",\n",
- " js=js_next_page if page > 0 else None,\n",
- " bypass_cache=True,\n",
- " js_only=page > 0\n",
- " )\n",
- "\n",
- " assert result.success, f\"Failed to crawl page {page + 1}\"\n",
- "\n",
- " soup = BeautifulSoup(result.cleaned_html, 'html.parser')\n",
- " commits = soup.select(\"li\")\n",
- " all_commits.extend(commits)\n",
- "\n",
- " print(f\"Page {page + 1}: Found {len(commits)} commits\")\n",
- "\n",
- " await crawler.crawler_strategy.kill_session(session_id)\n",
- " print(f\"Successfully crawled {len(all_commits)} commits across 3 pages\")\n",
- "\n",
- "await crawl_typescript_commits()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "EJRnYsp6yZQN"
- },
- "source": [
- "### Using JsonCssExtractionStrategy for Fast Structured Output"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "1ZMqIzB_8SYp"
- },
- "source": [
- "The JsonCssExtractionStrategy is a powerful feature of Crawl4AI that allows for precise, structured data extraction from web pages. Here's how it works:\n",
- "\n",
- "1. You define a schema that describes the pattern of data you're interested in extracting.\n",
- "2. The schema includes a base selector that identifies repeating elements on the page.\n",
- "3. Within the schema, you define fields, each with its own selector and type.\n",
- "4. These field selectors are applied within the context of each base selector element.\n",
- "5. The strategy supports nested structures, lists within lists, and various data types.\n",
- "6. You can even include computed fields for more complex data manipulation.\n",
- "\n",
- "This approach allows for highly flexible and precise data extraction, transforming semi-structured web content into clean, structured JSON data. It's particularly useful for extracting consistent data patterns from pages like product listings, news articles, or search results.\n",
- "\n",
- "For more details and advanced usage, check out the full documentation on the Crawl4AI website."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 12,
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "trCMR2T9yZQN",
- "outputId": "718d36f4-cccf-40f4-8d8c-c3ba73524d16"
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
- "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
- "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n",
- "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n",
- "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 7.00 seconds\n",
- "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.32 seconds\n",
- "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
- "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.48 seconds.\n",
- "Successfully extracted 11 news teasers\n",
- "{\n",
- " \"category\": \"Business News\",\n",
- " \"headline\": \"NBC ripped up its Olympics playbook for 2024 \\u2014 so far, the new strategy paid off\",\n",
- " \"summary\": \"The Olympics have long been key to NBCUniversal. Paris marked the 18th Olympic Games broadcast by NBC in the U.S.\",\n",
- " \"time\": \"13h ago\",\n",
- " \"image\": {\n",
- " \"src\": \"https://media-cldnry.s-nbcnews.com/image/upload/t_focal-200x100,f_auto,q_auto:best/rockcms/2024-09/240903-nbc-olympics-ch-1344-c7a486.jpg\",\n",
- " \"alt\": \"Mike Tirico.\"\n",
- " },\n",
- " \"link\": \"https://www.nbcnews.com/business\"\n",
- "}\n"
- ]
- }
- ],
- "source": [
- "async def extract_news_teasers():\n",
- " schema = {\n",
- " \"name\": \"News Teaser Extractor\",\n",
- " \"baseSelector\": \".wide-tease-item__wrapper\",\n",
- " \"fields\": [\n",
- " {\n",
- " \"name\": \"category\",\n",
- " \"selector\": \".unibrow span[data-testid='unibrow-text']\",\n",
- " \"type\": \"text\",\n",
- " },\n",
- " {\n",
- " \"name\": \"headline\",\n",
- " \"selector\": \".wide-tease-item__headline\",\n",
- " \"type\": \"text\",\n",
- " },\n",
- " {\n",
- " \"name\": \"summary\",\n",
- " \"selector\": \".wide-tease-item__description\",\n",
- " \"type\": \"text\",\n",
- " },\n",
- " {\n",
- " \"name\": \"time\",\n",
- " \"selector\": \"[data-testid='wide-tease-date']\",\n",
- " \"type\": \"text\",\n",
- " },\n",
- " {\n",
- " \"name\": \"image\",\n",
- " \"type\": \"nested\",\n",
- " \"selector\": \"picture.teasePicture img\",\n",
- " \"fields\": [\n",
- " {\"name\": \"src\", \"type\": \"attribute\", \"attribute\": \"src\"},\n",
- " {\"name\": \"alt\", \"type\": \"attribute\", \"attribute\": \"alt\"},\n",
- " ],\n",
- " },\n",
- " {\n",
- " \"name\": \"link\",\n",
- " \"selector\": \"a[href]\",\n",
- " \"type\": \"attribute\",\n",
- " \"attribute\": \"href\",\n",
- " },\n",
- " ],\n",
- " }\n",
- "\n",
- " extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)\n",
- "\n",
- " async with AsyncWebCrawler(verbose=True) as crawler:\n",
- " result = await crawler.arun(\n",
- " url=\"https://www.nbcnews.com/business\",\n",
- " extraction_strategy=extraction_strategy,\n",
- " bypass_cache=True,\n",
- " )\n",
- "\n",
- " assert result.success, \"Failed to crawl the page\"\n",
- "\n",
- " news_teasers = json.loads(result.extracted_content)\n",
- " print(f\"Successfully extracted {len(news_teasers)} news teasers\")\n",
- " print(json.dumps(news_teasers[0], indent=2))\n",
- "\n",
- "await extract_news_teasers()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "FnyVhJaByZQN"
- },
- "source": [
- "## Speed Comparison\n",
- "\n",
- "Let's compare the speed of Crawl4AI with Firecrawl, a paid service. Note that we can't run Firecrawl in this Colab environment, so we'll simulate its performance based on previously recorded data."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "agDD186f3wig"
- },
- "source": [
- "💡 **Note on Speed Comparison:**\n",
- "\n",
- "The speed test conducted here is running on Google Colab, where the internet speed and performance can vary and may not reflect optimal conditions. When we call Firecrawl's API, we're seeing its best performance, while Crawl4AI's performance is limited by Colab's network speed.\n",
- "\n",
- "For a more accurate comparison, it's recommended to run these tests on your own servers or computers with a stable and fast internet connection. Despite these limitations, Crawl4AI still demonstrates faster performance in this environment.\n",
- "\n",
- "If you run these tests locally, you may observe an even more significant speed advantage for Crawl4AI compared to other services."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {
- "id": "F7KwHv8G1LbY"
- },
- "outputs": [],
- "source": [
- "!pip install firecrawl"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "metadata": {
- "colab": {
- "base_uri": "https://localhost:8080/"
- },
- "id": "91813zILyZQN",
- "outputId": "663223db-ab89-4976-b233-05ceca62b19b"
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Firecrawl (simulated):\n",
- "Time taken: 4.38 seconds\n",
- "Content length: 41967 characters\n",
- "Images found: 49\n",
- "\n",
- "Crawl4AI (simple crawl):\n",
- "Time taken: 4.22 seconds\n",
- "Content length: 18221 characters\n",
- "Images found: 49\n",
- "\n",
- "Crawl4AI (with JavaScript execution):\n",
- "Time taken: 9.13 seconds\n",
- "Content length: 34243 characters\n",
- "Images found: 89\n"
- ]
- }
- ],
- "source": [
- "import os\n",
- "from google.colab import userdata\n",
- "os.environ['FIRECRAWL_API_KEY'] = userdata.get('FIRECRAWL_API_KEY')\n",
- "import time\n",
- "from firecrawl import FirecrawlApp\n",
- "\n",
- "async def speed_comparison():\n",
- " # Simulated Firecrawl performance\n",
- " app = FirecrawlApp(api_key=os.environ['FIRECRAWL_API_KEY'])\n",
- " start = time.time()\n",
- " scrape_status = app.scrape_url(\n",
- " 'https://www.nbcnews.com/business',\n",
- " params={'formats': ['markdown', 'html']}\n",
- " )\n",
- " end = time.time()\n",
- " print(\"Firecrawl (simulated):\")\n",
- " print(f\"Time taken: {end - start:.2f} seconds\")\n",
- " print(f\"Content length: {len(scrape_status['markdown'])} characters\")\n",
- " print(f\"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}\")\n",
- " print()\n",
- "\n",
- " async with AsyncWebCrawler() as crawler:\n",
- " # Crawl4AI simple crawl\n",
- " start = time.time()\n",
- " result = await crawler.arun(\n",
- " url=\"https://www.nbcnews.com/business\",\n",
- " word_count_threshold=0,\n",
- " bypass_cache=True,\n",
- " verbose=False\n",
- " )\n",
- " end = time.time()\n",
- " print(\"Crawl4AI (simple crawl):\")\n",
- " print(f\"Time taken: {end - start:.2f} seconds\")\n",
- " print(f\"Content length: {len(result.markdown)} characters\")\n",
- " print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n",
- " print()\n",
- "\n",
- " # Crawl4AI with JavaScript execution\n",
- " start = time.time()\n",
- " result = await crawler.arun(\n",
- " url=\"https://www.nbcnews.com/business\",\n",
- " js_code=[\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"],\n",
- " word_count_threshold=0,\n",
- " bypass_cache=True,\n",
- " verbose=False\n",
- " )\n",
- " end = time.time()\n",
- " print(\"Crawl4AI (with JavaScript execution):\")\n",
- " print(f\"Time taken: {end - start:.2f} seconds\")\n",
- " print(f\"Content length: {len(result.markdown)} characters\")\n",
- " print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n",
- "\n",
- "await speed_comparison()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "OBFFYVJIyZQN"
- },
- "source": [
- "If you run on a local machine with a proper internet speed:\n",
- "- Simple crawl: Crawl4AI is typically over 3-4 times faster than Firecrawl.\n",
- "- With JavaScript execution: Even when executing JavaScript to load more content (potentially doubling the number of images found), Crawl4AI is still faster than Firecrawl's simple crawl.\n",
- "\n",
- "Please note that actual performance may vary depending on network conditions and the specific content being crawled."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "A6_1RK1_yZQO"
- },
- "source": [
- "## Conclusion\n",
- "\n",
- "In this notebook, we've explored the powerful features of Crawl4AI, including:\n",
- "\n",
- "1. Basic crawling\n",
- "2. JavaScript execution and CSS selector usage\n",
- "3. Proxy support\n",
- "4. Structured data extraction with OpenAI\n",
- "5. Advanced multi-page crawling with JavaScript execution\n",
- "6. Fast structured output using JsonCssExtractionStrategy\n",
- "7. Speed comparison with other services\n",
- "\n",
- "Crawl4AI offers a fast, flexible, and powerful solution for web crawling and data extraction tasks. Its asynchronous architecture and advanced features make it suitable for a wide range of applications, from simple web scraping to complex, multi-page data extraction scenarios.\n",
- "\n",
- "For more information and advanced usage, please visit the [Crawl4AI documentation](https://crawl4ai.com/mkdocs/).\n",
- "\n",
- "Happy crawling!"
- ]
- }
- ],
- "metadata": {
- "colab": {
- "provenance": []
- },
- "kernelspec": {
- "display_name": "venv",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.10.13"
- }
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "0cba38e5",
+ "metadata": {},
+ "source": [
+ "# Crawl4AI 🕷️🤖\n",
+    "\n",
+ "\n",
+ "[](https://github.com/unclecode/crawl4ai/stargazers)\n",
+ "\n",
+ "[](https://github.com/unclecode/crawl4ai/network/members)\n",
+ "[](https://github.com/unclecode/crawl4ai/issues)\n",
+ "[](https://github.com/unclecode/crawl4ai/pulls)\n",
+ "[](https://github.com/unclecode/crawl4ai/blob/main/LICENSE)\n",
+ "\n",
+ "Crawl4AI simplifies asynchronous web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. 🆓🌐\n",
+ "\n",
+ "- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n",
+ "- Twitter: [@unclecode](https://twitter.com/unclecode)\n",
+ "- Website: [https://crawl4ai.com](https://crawl4ai.com)\n",
+ "\n",
+ "## 🌟 Meet the Crawl4AI Assistant: Your Copilot for Crawling\n",
+ "Use the [Crawl4AI GPT Assistant](https://tinyurl.com/crawl4ai-gpt) as your AI-powered copilot! With this assistant, you can:\n",
+    "- 🧑‍💻 Generate code for complex crawling and extraction tasks\n",
+ "- 💡 Get tailored support and examples\n",
+ "- 📘 Learn Crawl4AI faster with step-by-step guidance"
+ ]
},
- "nbformat": 4,
- "nbformat_minor": 0
+ {
+ "cell_type": "markdown",
+ "id": "41de6458",
+ "metadata": {},
+ "source": [
+ "### **Quickstart with Crawl4AI**"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1380e951",
+ "metadata": {},
+ "source": [
+ "#### 1. **Installation**\n",
+    "Install Crawl4AI and the necessary dependencies:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "05fecfad",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# %%capture\n",
+ "!pip install crawl4ai\n",
+ "!pip install nest_asyncio\n",
+ "!playwright install "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "2c2a74c8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import asyncio\n",
+ "import nest_asyncio\n",
+ "nest_asyncio.apply()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f3c558d7",
+ "metadata": {},
+ "source": [
+ "#### 2. **Basic Setup and Simple Crawl**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "003376f3",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 1.49 seconds\n",
+ "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.10 seconds\n",
+ "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
+ "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.10 seconds.\n",
+ "IE 11 is not supported. For an optimal experience visit our site on another browser.\n",
+ "\n",
+ "[Morning Rundown: Trump and Harris' vastly different closing pitches, why Kim Jong Un is helping Russia, and an ancient city is discovered by accident](https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973)[](https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973)\n",
+ "\n",
+ "Skip to Content\n",
+ "\n",
+ "[NBC News Logo](https://www.nbcnews.com)\n",
+ "\n",
+ "Spon\n"
+ ]
+ }
+ ],
+ "source": [
+ "import asyncio\n",
+ "from crawl4ai import AsyncWebCrawler\n",
+ "\n",
+ "async def simple_crawl():\n",
+ " async with AsyncWebCrawler() as crawler:\n",
+ " result = await crawler.arun(\n",
+ " url=\"https://www.nbcnews.com/business\",\n",
+ " bypass_cache=True # By default this is False, meaning the cache will be used\n",
+ " )\n",
+ " print(result.markdown[:500]) # Print the first 500 characters\n",
+ " \n",
+ "asyncio.run(simple_crawl())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "da9b4d50",
+ "metadata": {},
+ "source": [
+ "#### 3. **Dynamic Content Handling**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "5bb8c1e4",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
+ "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
+ "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n",
+ "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n",
+ "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 4.52 seconds\n",
+ "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.15 seconds\n",
+ "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
+ "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.15 seconds.\n",
+ "IE 11 is not supported. For an optimal experience visit our site on another browser.\n",
+ "\n",
+ "[Morning Rundown: Trump and Harris' vastly different closing pitches, why Kim Jong Un is helping Russia, and an ancient city is discovered by accident](https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973)[](https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973)\n",
+ "\n",
+ "Skip to Content\n",
+ "\n",
+ "[NBC News Logo](https://www.nbcnews.com)\n",
+ "\n",
+ "Spon\n"
+ ]
+ }
+ ],
+ "source": [
+ "async def crawl_dynamic_content():\n",
+ " # You can use wait_for to wait for a condition to be met before returning the result\n",
+ " # wait_for = \"\"\"() => {\n",
+ " # return Array.from(document.querySelectorAll('article.tease-card')).length > 10;\n",
+ " # }\"\"\"\n",
+ "\n",
+ " # wait_for can be also just a css selector\n",
+ " # wait_for = \"article.tease-card:nth-child(10)\"\n",
+ "\n",
+ " async with AsyncWebCrawler(verbose=True) as crawler:\n",
+ " js_code = [\n",
+ " \"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"\n",
+ " ]\n",
+ " result = await crawler.arun(\n",
+ " url=\"https://www.nbcnews.com/business\",\n",
+ " js_code=js_code,\n",
+ " # wait_for=wait_for,\n",
+ " bypass_cache=True,\n",
+ " )\n",
+ " print(result.markdown[:500]) # Print first 500 characters\n",
+ "\n",
+ "asyncio.run(crawl_dynamic_content())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "86febd8d",
+ "metadata": {},
+ "source": [
+ "#### 4. **Content Cleaning and Fit Markdown**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8e8ab01f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "async def clean_content():\n",
+ " async with AsyncWebCrawler() as crawler:\n",
+ " result = await crawler.arun(\n",
+ " url=\"https://janineintheworld.com/places-to-visit-in-central-mexico\",\n",
+ " excluded_tags=['nav', 'footer', 'aside'],\n",
+ " remove_overlay_elements=True,\n",
+ " word_count_threshold=10,\n",
+ " bypass_cache=True\n",
+ " )\n",
+ " full_markdown_length = len(result.markdown)\n",
+ " fit_markdown_length = len(result.fit_markdown)\n",
+ " print(f\"Full Markdown Length: {full_markdown_length}\")\n",
+ " print(f\"Fit Markdown Length: {fit_markdown_length}\")\n",
+ " print(result.fit_markdown[:1000])\n",
+ " \n",
+ "\n",
+ "asyncio.run(clean_content())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "55715146",
+ "metadata": {},
+ "source": [
+ "#### 5. **Link Analysis and Smart Filtering**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "id": "2ae47c69",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 0.93 seconds\n",
+ "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.11 seconds\n",
+ "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
+ "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.11 seconds.\n",
+ "Found 107 internal links\n",
+ "Found 58 external links\n",
+ "Href: https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973\n",
+ "Text: Morning Rundown: Trump and Harris' vastly different closing pitches, why Kim Jong Un is helping Russia, and an ancient city is discovered by accident\n",
+ "\n",
+ "Href: https://www.nbcnews.com\n",
+ "Text: NBC News Logo\n",
+ "\n",
+ "Href: https://www.nbcnews.com/politics/2024-election/live-blog/kamala-harris-donald-trump-rally-election-live-updates-rcna177529\n",
+ "Text: 2024 Election\n",
+ "\n",
+ "Href: https://www.nbcnews.com/politics\n",
+ "Text: Politics\n",
+ "\n",
+ "Href: https://www.nbcnews.com/us-news\n",
+ "Text: U.S. News\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "async def link_analysis():\n",
+ " async with AsyncWebCrawler() as crawler:\n",
+ " result = await crawler.arun(\n",
+ " url=\"https://www.nbcnews.com/business\",\n",
+ " bypass_cache=True,\n",
+ " exclude_external_links=True,\n",
+ " exclude_social_media_links=True,\n",
+ " # exclude_domains=[\"facebook.com\", \"twitter.com\"]\n",
+ " )\n",
+ " print(f\"Found {len(result.links['internal'])} internal links\")\n",
+ " print(f\"Found {len(result.links['external'])} external links\")\n",
+ "\n",
+ " for link in result.links['internal'][:5]:\n",
+ " print(f\"Href: {link['href']}\\nText: {link['text']}\\n\")\n",
+ " \n",
+ "\n",
+ "asyncio.run(link_analysis())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "80cceef3",
+ "metadata": {},
+ "source": [
+ "#### 6. **Media Handling**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 25,
+ "id": "1fed7f99",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 1.42 seconds\n",
+ "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.11 seconds\n",
+ "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
+ "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.12 seconds.\n",
+ "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-762x508,f_auto,q_auto:best/rockcms/2024-10/241023-NM-Chilccare-jg-27b982.jpg, Alt: , Score: 4\n",
+ "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-80x80,f_auto,q_auto:best/rockcms/2024-10/241030-china-ev-electric-mb-0746-cae05c.jpg, Alt: Volkswagen Workshop in Hefei, Score: 5\n",
+ "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-80x80,f_auto,q_auto:best/rockcms/2024-10/241029-nyc-subway-sandwich-2021-ac-922p-a92374.jpg, Alt: A sub is prepared at a Subway restaurant in Manhattan, New York City, Score: 5\n",
+ "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-80x80,f_auto,q_auto:best/rockcms/2024-10/241029-suv-gravity-ch-1618-752415.jpg, Alt: The Lucid Gravity car., Score: 5\n",
+ "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-80x80,f_auto,q_auto:best/rockcms/2024-10/241029-dearborn-michigan-f-150-ford-ranger-trucks-assembly-line-ac-426p-614f0b.jpg, Alt: Ford Introduces new F-150 And Ranger Trucks At Their Dearborn Plant, Score: 5\n"
+ ]
+ }
+ ],
+ "source": [
+ "async def media_handling():\n",
+ " async with AsyncWebCrawler() as crawler:\n",
+ " result = await crawler.arun(\n",
+ " url=\"https://www.nbcnews.com/business\", \n",
+ " bypass_cache=True,\n",
+ " exclude_external_images=False,\n",
+ " screenshot=True\n",
+ " )\n",
+ " for img in result.media['images'][:5]:\n",
+ " print(f\"Image URL: {img['src']}, Alt: {img['alt']}, Score: {img['score']}\")\n",
+ " \n",
+ "asyncio.run(media_handling())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9290499a",
+ "metadata": {},
+ "source": [
+ "#### 7. **Using Hooks for Custom Workflow**"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9d069c2b",
+ "metadata": {},
+ "source": [
+ "Hooks in Crawl4AI allow you to run custom logic at specific stages of the crawling process. This can be invaluable for scenarios like setting custom headers, logging activities, or processing content before it is returned. Below is an example of a basic workflow using a hook, followed by a complete list of available hooks and explanations on their usage."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 27,
+ "id": "bc4d2fc8",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[Hook] Preparing to navigate...\n",
+ "[LOG] 🚀 Crawling done for https://crawl4ai.com, success: True, time taken: 3.49 seconds\n",
+ "[LOG] 🚀 Content extracted for https://crawl4ai.com, success: True, time taken: 0.03 seconds\n",
+ "[LOG] 🔥 Extracting semantic blocks for https://crawl4ai.com, Strategy: AsyncWebCrawler\n",
+ "[LOG] 🚀 Extraction done for https://crawl4ai.com, time taken: 0.03 seconds.\n",
+ "[Crawl4AI Documentation](https://docs.crawl4ai.com/)\n",
+ "\n",
+ " * [ Home ](.)\n",
+ " * [ Installation ](basic/installation/)\n",
+ " * [ Quick Start ](basic/quickstart/)\n",
+ " * [ Search ](#)\n",
+ "\n",
+ "\n",
+ "\n",
+ " * Home\n",
+ " * [Installation](basic/installation/)\n",
+ " * [Quick Start](basic/quickstart/)\n",
+ " * Basic\n",
+ " * [Simple Crawling](basic/simple-crawling/)\n",
+ " * [Output Formats](basic/output-formats/)\n",
+ " * [Browser Configuration](basic/browser-config/)\n",
+ " * [Page Interaction](basic/page-interaction/)\n",
+ " * [Content Selection](basic/con\n"
+ ]
+ }
+ ],
+ "source": [
+ "async def custom_hook_workflow():\n",
+ " async with AsyncWebCrawler() as crawler:\n",
+ " # Set a 'before_goto' hook to run custom code just before navigation\n",
+ " crawler.crawler_strategy.set_hook(\"before_goto\", lambda page: print(\"[Hook] Preparing to navigate...\"))\n",
+ " \n",
+ " # Perform the crawl operation\n",
+ " result = await crawler.arun(\n",
+ " url=\"https://crawl4ai.com\",\n",
+ " bypass_cache=True\n",
+ " )\n",
+ " print(result.markdown[:500]) # Display the first 500 characters\n",
+ "\n",
+ "asyncio.run(custom_hook_workflow())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3ff45e21",
+ "metadata": {},
+ "source": [
+ "List of available hooks and examples for each stage of the crawling process:\n",
+ "\n",
+ "- **on_browser_created**\n",
+ " ```python\n",
+ " async def on_browser_created_hook(browser):\n",
+ " print(\"[Hook] Browser created\")\n",
+ " ```\n",
+ "\n",
+ "- **before_goto**\n",
+ " ```python\n",
+ " async def before_goto_hook(page):\n",
+ " await page.set_extra_http_headers({\"X-Test-Header\": \"test\"})\n",
+ " ```\n",
+ "\n",
+ "- **after_goto**\n",
+ " ```python\n",
+ " async def after_goto_hook(page):\n",
+ " print(f\"[Hook] Navigated to {page.url}\")\n",
+ " ```\n",
+ "\n",
+ "- **on_execution_started**\n",
+ " ```python\n",
+ " async def on_execution_started_hook(page):\n",
+ " print(\"[Hook] JavaScript execution started\")\n",
+ " ```\n",
+ "\n",
+ "- **before_return_html**\n",
+ " ```python\n",
+ " async def before_return_html_hook(page, html):\n",
+ " print(f\"[Hook] HTML length: {len(html)}\")\n",
+ " ```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2d56ebb1",
+ "metadata": {},
+ "source": [
+ "#### 8. **Session-Based Crawling**\n",
+ "\n",
+ "When to Use Session-Based Crawling: \n",
+ "Session-based crawling is especially beneficial when navigating through multi-page content where each page load needs to maintain the same session context. For instance, in cases where a “Next Page” button must be clicked to load subsequent data, the new data often replaces the previous content. Here, session-based crawling keeps the browser state intact across each interaction, allowing for sequential actions within the same session.\n",
+ "\n",
+ "Example: Multi-Page Navigation Using JavaScript\n",
+ "In this example, we’ll navigate through multiple pages by clicking a \"Next Page\" button. After each page load, we extract the new content and repeat the process."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e7bfebae",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "async def multi_page_session_crawl():\n",
+ " async with AsyncWebCrawler() as crawler:\n",
+ " session_id = \"page_navigation_session\"\n",
+ " url = \"https://example.com/paged-content\"\n",
+ "\n",
+ " for page_number in range(1, 4):\n",
+ " result = await crawler.arun(\n",
+ " url=url,\n",
+ " session_id=session_id,\n",
+ " js_code=\"document.querySelector('.next-page-button').click();\" if page_number > 1 else None,\n",
+ " css_selector=\".content-section\",\n",
+ " bypass_cache=True\n",
+ " )\n",
+ " print(f\"Page {page_number} Content:\")\n",
+ " print(result.markdown[:500]) # Print first 500 characters\n",
+ "\n",
+ "# asyncio.run(multi_page_session_crawl())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ad32a778",
+ "metadata": {},
+ "source": [
+ "#### 9. **Using Extraction Strategies**\n",
+ "\n",
+ "**LLM Extraction**\n",
+ "\n",
+ "This example demonstrates how to use language model-based extraction to retrieve structured data from a pricing page on OpenAI’s site."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 31,
+ "id": "3011a7c5",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "--- Extracting Structured Data with openai/gpt-4o-mini ---\n",
+ "[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
+ "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
+ "[LOG] 🕸️ Crawling https://openai.com/api/pricing/ using AsyncPlaywrightCrawlerStrategy...\n",
+ "[LOG] ✅ Crawled https://openai.com/api/pricing/ successfully!\n",
+ "[LOG] 🚀 Crawling done for https://openai.com/api/pricing/, success: True, time taken: 1.29 seconds\n",
+ "[LOG] 🚀 Content extracted for https://openai.com/api/pricing/, success: True, time taken: 0.13 seconds\n",
+ "[LOG] 🔥 Extracting semantic blocks for https://openai.com/api/pricing/, Strategy: AsyncWebCrawler\n",
+ "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 0\n",
+ "[LOG] Extracted 26 blocks from URL: https://openai.com/api/pricing/ block index: 0\n",
+ "[LOG] 🚀 Extraction done for https://openai.com/api/pricing/, time taken: 15.12 seconds.\n",
+ "[{'model_name': 'gpt-4o', 'input_fee': '$2.50 / 1M input tokens', 'output_fee': '$10.00 / 1M output tokens', 'error': False}, {'model_name': 'gpt-4o-2024-08-06', 'input_fee': '$2.50 / 1M input tokens', 'output_fee': '$10.00 / 1M output tokens', 'error': False}, {'model_name': 'gpt-4o-audio-preview', 'input_fee': '$2.50 / 1M input tokens', 'output_fee': '$10.00 / 1M output tokens', 'error': False}, {'model_name': 'gpt-4o-audio-preview-2024-10-01', 'input_fee': '$2.50 / 1M input tokens', 'output_fee': '$10.00 / 1M output tokens', 'error': False}, {'model_name': 'gpt-4o-2024-05-13', 'input_fee': '$5.00 / 1M input tokens', 'output_fee': '$15.00 / 1M output tokens', 'error': False}]\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "/Users/unclecode/devs/crawl4ai/venv/lib/python3.10/site-packages/pydantic/main.py:347: UserWarning: Pydantic serializer warnings:\n",
+ " Expected `PromptTokensDetails` but got `dict` - serialized value may not be as expected\n",
+ " return self.__pydantic_serializer__.to_python(\n"
+ ]
+ }
+ ],
+ "source": [
+ "from crawl4ai.extraction_strategy import LLMExtractionStrategy\n",
+ "from pydantic import BaseModel, Field\n",
+ "import os, json\n",
+ "\n",
+ "class OpenAIModelFee(BaseModel):\n",
+ " model_name: str = Field(..., description=\"Name of the OpenAI model.\")\n",
+ " input_fee: str = Field(..., description=\"Fee for input token for the OpenAI model.\")\n",
+ " output_fee: str = Field(\n",
+ " ..., description=\"Fee for output token for the OpenAI model.\"\n",
+ " )\n",
+ "\n",
+ "async def extract_structured_data_using_llm(provider: str, api_token: str = None, extra_headers: dict = None):\n",
+ " print(f\"\\n--- Extracting Structured Data with {provider} ---\")\n",
+ " \n",
+ " # Skip if API token is missing (for providers that require it)\n",
+ " if api_token is None and provider != \"ollama\":\n",
+ " print(f\"API token is required for {provider}. Skipping this example.\")\n",
+ " return\n",
+ "\n",
+ " extra_args = {\"extra_headers\": extra_headers} if extra_headers else {}\n",
+ "\n",
+ " async with AsyncWebCrawler(verbose=True) as crawler:\n",
+ " result = await crawler.arun(\n",
+ " url=\"https://openai.com/api/pricing/\",\n",
+ " word_count_threshold=1,\n",
+ " extraction_strategy=LLMExtractionStrategy(\n",
+ " provider=provider,\n",
+ " api_token=api_token,\n",
+ " schema=OpenAIModelFee.schema(),\n",
+ " extraction_type=\"schema\",\n",
+                "                instruction=\"\"\"Extract all model names along with fees for input and output tokens.\n",
+                "                Output example: {model_name: 'GPT-4', input_fee: 'US$10.00 / 1M tokens', output_fee: 'US$30.00 / 1M tokens'}.\"\"\",\n",
+ " **extra_args\n",
+ " ),\n",
+ " bypass_cache=True,\n",
+ " )\n",
+ " print(json.loads(result.extracted_content)[:5])\n",
+ "\n",
+ "# Usage:\n",
+ "await extract_structured_data_using_llm(\"openai/gpt-4o-mini\", os.getenv(\"OPENAI_API_KEY\"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6532db9d",
+ "metadata": {},
+ "source": [
+ "**Cosine Similarity Strategy**\n",
+ "\n",
+ "This strategy uses semantic clustering to extract relevant content based on contextual similarity, which is helpful when extracting related sections from a single topic."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 32,
+ "id": "ec079108",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[LOG] Loading Extraction Model for mps device.\n",
+ "[LOG] Loading Multilabel Classifier for mps device.\n",
+ "[LOG] Model loaded sentence-transformers/all-MiniLM-L6-v2, models/reuters, took 5.193778038024902 seconds\n",
+ "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156, success: True, time taken: 1.37 seconds\n",
+ "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156, success: True, time taken: 0.07 seconds\n",
+ "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156, Strategy: AsyncWebCrawler\n",
+ "[LOG] 🚀 Assign tags using mps\n",
+ "[LOG] 🚀 Categorization done in 0.55 seconds\n",
+ "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156, time taken: 6.63 seconds.\n",
+                "[{'index': 1, 'tags': ['news_&_social_concern'], 'content': \"McDonald's 2024 combo: Inflation, a health crisis and a side of politics # McDonald's 2024 combo: Inflation, a health crisis and a side of politics\"}, {'index': 2, 'tags': ['business_&_entrepreneurs', 'news_&_social_concern'], 'content': 'Like many major brands, McDonald’s raked in big profits as the economy reopened from the pandemic. In October 2022, [executives were boasting](https://www.cnbc.com/2022/10/27/mcdonalds-mcd-earnings-q3-2022.html) that they’d been raising prices without crimping traffic, even as competitors began to warn that some customers were closing their wallets after inflation peaked above 9% that summer. Still, the U.S. had repeatedly dodged a much-forecast recession, and [Americans kept spending on nonessentials](https://www.nbcnews.com/business/economy/year-peak-inflation-travel-leisure-mostly-cost-less-rcna92760) like travel and dining out — despite regularly relaying to pollsters their dismal views of an otherwise solid economy. Even so, 64% of consumers said they noticed price increases at quick-service restaurants in September, more than at any other type of venue, according to a survey by Datassential, a food and beverage market researcher. Politicians are still drawing attention to fast-food costs, too, as the election season barrels toward a tumultuous finish. A group of Democratic senators this month [denounced McDonald’s for menu prices](https://www.nbcnews.com/news/us-news/democratic-senators-slam-mcdonalds-menu-price-hikes-rcna176380) that they said outstripped inflation, accusing the company of looking to profit “at the expense of people’s ability to put food on the table.” The financial results come toward the end of a humbling year for the nearly $213 billion restaurant chain, whose shares remained steady on the heels of its latest earnings. \n",
+                "Kempczinski [sought to reassure investors](https://www.cnbc.com/2024/10/29/mcdonalds-e-coli-outbreak-ceo-comments.html) that [the E. coli outbreak](https://www.nbcnews.com/health/health-news/illnesses-linked-mcdonalds-e-coli-outbreak-rise-75-cdc-says-rcna177260), linked to Quarter Pounder burgers, was under control after the health crisis temporarily dented the company’s stock and caused U.S. foot traffic to drop nearly 10% in the days afterward, according to estimates by Gordon Haskett financial researchers. The fast-food giant [reported Tuesday](https://www.cnbc.com/2024/10/29/mcdonalds-mcd-earnings-q3-2024.html) that it had reversed its recent U.S. sales drop, posting a 0.3% uptick in the third quarter. Foot traffic was still down slightly, but the company said its summer of discounts was paying off. But by early this year, [photos of eye-watering menu prices](https://x.com/sam_learner/status/1681367351143301129) at some McDonald’s locations — including an $18 Big Mac combo at a Connecticut rest stop from July 2023 — went viral, bringing diners’ long-simmering frustrations to a boiling point that the company couldn’t ignore. On an earnings call in April, Kempczinski acknowledged that foot traffic had fallen. “We will stay laser-focused on providing an unparalleled experience with simple, everyday value and affordability that our consumers can count on as they continue to be mindful about their spending,” CEO Chris Kempczinski [said in a statement](https://www.prnewswire.com/news-releases/mcdonalds-reports-third-quarter-2024-results-302289216.html?Fds-Load-Behavior=force-external) alongside the earnings report.'}, {'index': 3, 'tags': ['food_&_dining', 'news_&_social_concern'], 'content': 'McDonald’s has had some success leaning into discounts this year. \n",
+                "Eric Thayer / Bloomberg via Getty Images file'}, {'index': 4, 'tags': ['business_&_entrepreneurs', 'food_&_dining', 'news_&_social_concern'], 'content': 'McDonald’s has faced a customer revolt over pricey Big Macs, an unsolicited cameo in election-season crossfire, and now an E. coli outbreak — just as the company had been luring customers back with more affordable burgers. Despite a difficult quarter, McDonald’s looks resilient in the face of various pressures, analysts say — something the company shares with U.S. consumers overall. “Consumers continue to be even more discriminating with every dollar that they spend,” he said at the time. Going forward, McDonald’s would be “laser-focused” on affordability. “McDonald’s has also done a good job of embedding the brand in popular culture to enhance its relevance and meaning around fun and family. But it also needed to modify the product line to meet the expectations of a consumer who is on a tight budget,” he said. “The thing that McDonald’s had struggled with, and why I think we’re seeing kind of an inflection point, is a value proposition,” Senatore said. “McDonald’s menu price increases had run ahead of a lot of its restaurant peers. … Consumers are savvy enough to know that.” For many consumers, the fast-food giant’s menus serve as an informal gauge of the economy overall, said Sara Senatore, a Bank of America analyst covering restaurants. “The spotlight is always on McDonald’s because it’s so big” and something of a “bellwether,” she said. McDonald’s didn’t respond to requests for comment.'}, {'index': 5, 'tags': ['business_&_entrepreneurs', 'food_&_dining'], 'content': 'Mickey D’s’ $5 meal deal, which it launched in late June to jumpstart slumping sales, has given the company an appealing price point to advertise nationwide, Senatore said, speculating that it could open the door to a new permanent value offering. \n",
+                "But before that promotion rolled out, the company’s reputation as a low-cost option had taken a bruising hit.'}]\n"
+ ]
+ }
+ ],
+ "source": [
+ "from crawl4ai.extraction_strategy import CosineStrategy\n",
+ "\n",
+ "async def cosine_similarity_extraction():\n",
+ " async with AsyncWebCrawler() as crawler:\n",
+ " strategy = CosineStrategy(\n",
+ " word_count_threshold=10,\n",
+                "            max_dist=0.2, # Maximum cophenetic distance on the dendrogram to form clusters\n",
+ " linkage_method=\"ward\", # Linkage method for hierarchical clustering (ward, complete, average, single)\n",
+                "            top_k=3, # Number of top categories to extract\n",
+ " sim_threshold=0.3, # Similarity threshold for clustering\n",
+ " semantic_filter=\"McDonald's economic impact, American consumer trends\", # Keywords to filter the content semantically using embeddings\n",
+ " verbose=True\n",
+ " )\n",
+ " \n",
+ " result = await crawler.arun(\n",
+ " url=\"https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156\",\n",
+ " extraction_strategy=strategy\n",
+ " )\n",
+ " print(json.loads(result.extracted_content)[:5])\n",
+ "\n",
+ "asyncio.run(cosine_similarity_extraction())\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ff423629",
+ "metadata": {},
+ "source": [
+ "#### 10. **Conclusion and Next Steps**\n",
+ "\n",
+                "You’ve explored core features of Crawl4AI, including dynamic content handling, link analysis, and advanced extraction strategies. Visit our documentation for a deeper look at everything Crawl4AI offers.\n",
+ "\n",
+ "- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n",
+ "- Twitter: [@unclecode](https://twitter.com/unclecode)\n",
+ "- Website: [https://crawl4ai.com](https://crawl4ai.com)\n",
+ "\n",
+ "Happy Crawling with Crawl4AI! 🕷️🤖\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d34c1d35",
+ "metadata": {},
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "venv",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
}
diff --git a/docs/examples/quickstart_async.py b/docs/examples/quickstart_async.py
index 02b5f8bb..9c57f57d 100644
--- a/docs/examples/quickstart_async.py
+++ b/docs/examples/quickstart_async.py
@@ -383,10 +383,11 @@ async def crawl_with_user_simultion():
async with AsyncWebCrawler(verbose=True, headless=True) as crawler:
url = "YOUR-URL-HERE"
result = await crawler.arun(
- url=url,
+ url=url,
bypass_cache=True,
- simulate_user = True,# Causes a series of random mouse movements and clicks to simulate user interaction
- override_navigator = True # Overrides the navigator object to make it look like a real user
+ magic = True, # Automatically detects and removes overlays, popups, and other elements that block content
+ # simulate_user = True,# Causes a series of random mouse movements and clicks to simulate user interaction
+ # override_navigator = True # Overrides the navigator object to make it look like a real user
)
print(result.markdown)
diff --git a/docs/examples/quickstart_v0.ipynb b/docs/examples/quickstart_v0.ipynb
new file mode 100644
index 00000000..71f23acb
--- /dev/null
+++ b/docs/examples/quickstart_v0.ipynb
@@ -0,0 +1,735 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "6yLvrXn7yZQI"
+ },
+ "source": [
+ "# Crawl4AI: Advanced Web Crawling and Data Extraction\n",
+ "\n",
+ "Welcome to this interactive notebook showcasing Crawl4AI, an advanced asynchronous web crawling and data extraction library.\n",
+ "\n",
+ "- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n",
+ "- Twitter: [@unclecode](https://twitter.com/unclecode)\n",
+ "- Website: [https://crawl4ai.com](https://crawl4ai.com)\n",
+ "\n",
+ "Let's explore the powerful features of Crawl4AI!"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "KIn_9nxFyZQK"
+ },
+ "source": [
+ "## Installation\n",
+ "\n",
+ "First, let's install Crawl4AI from GitHub:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "mSnaxLf3zMog"
+ },
+ "outputs": [],
+ "source": [
+ "!sudo apt-get update && sudo apt-get install -y libwoff1 libopus0 libwebp6 libwebpdemux2 libenchant1c2a libgudev-1.0-0 libsecret-1-0 libhyphen0 libgdk-pixbuf2.0-0 libegl1 libnotify4 libxslt1.1 libevent-2.1-7 libgles2 libvpx6 libxcomposite1 libatk1.0-0 libatk-bridge2.0-0 libepoxy0 libgtk-3-0 libharfbuzz-icu0"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "xlXqaRtayZQK"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install crawl4ai\n",
+ "!pip install nest-asyncio\n",
+ "!playwright install"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "qKCE7TI7yZQL"
+ },
+ "source": [
+ "Now, let's import the necessary libraries:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "id": "I67tr7aAyZQL"
+ },
+ "outputs": [],
+ "source": [
+ "import asyncio\n",
+ "import nest_asyncio\n",
+ "from crawl4ai import AsyncWebCrawler\n",
+ "from crawl4ai.extraction_strategy import JsonCssExtractionStrategy, LLMExtractionStrategy\n",
+ "import json\n",
+ "import time\n",
+ "from pydantic import BaseModel, Field\n",
+ "\n",
+ "nest_asyncio.apply()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "h7yR_Rt_yZQM"
+ },
+ "source": [
+ "## Basic Usage\n",
+ "\n",
+ "Let's start with a simple crawl example:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "yBh6hf4WyZQM",
+ "outputId": "0f83af5c-abba-4175-ed95-70b7512e6bcc"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
+ "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
+ "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.05 seconds\n",
+ "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.05 seconds.\n",
+ "18102\n"
+ ]
+ }
+ ],
+ "source": [
+ "async def simple_crawl():\n",
+ " async with AsyncWebCrawler(verbose=True) as crawler:\n",
+ " result = await crawler.arun(url=\"https://www.nbcnews.com/business\")\n",
+ " print(len(result.markdown))\n",
+ "await simple_crawl()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "9rtkgHI28uI4"
+ },
+ "source": [
+ "💡 By default, **Crawl4AI** caches the result of every URL, so the next time you call it, you’ll get an instant result. But if you want to bypass the cache, just set `bypass_cache=True`."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "MzZ0zlJ9yZQM"
+ },
+ "source": [
+ "## Advanced Features\n",
+ "\n",
+ "### Executing JavaScript and Using CSS Selectors"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "gHStF86xyZQM",
+ "outputId": "34d0fb6d-4dec-4677-f76e-85a1f082829b"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
+ "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
+ "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n",
+ "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n",
+ "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 6.06 seconds\n",
+ "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.10 seconds\n",
+ "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
+ "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.11 seconds.\n",
+ "41135\n"
+ ]
+ }
+ ],
+ "source": [
+ "async def js_and_css():\n",
+ " async with AsyncWebCrawler(verbose=True) as crawler:\n",
+ " js_code = [\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"]\n",
+ " result = await crawler.arun(\n",
+ " url=\"https://www.nbcnews.com/business\",\n",
+ " js_code=js_code,\n",
+ " # css_selector=\"YOUR_CSS_SELECTOR_HERE\",\n",
+ " bypass_cache=True\n",
+ " )\n",
+ " print(len(result.markdown))\n",
+ "\n",
+ "await js_and_css()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "cqE_W4coyZQM"
+ },
+ "source": [
+ "### Using a Proxy\n",
+ "\n",
+ "Note: You'll need to replace the proxy URL with a working proxy for this example to run successfully."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "QjAyiAGqyZQM"
+ },
+ "outputs": [],
+ "source": [
+ "async def use_proxy():\n",
+ " async with AsyncWebCrawler(verbose=True, proxy=\"http://your-proxy-url:port\") as crawler:\n",
+ " result = await crawler.arun(\n",
+ " url=\"https://www.nbcnews.com/business\",\n",
+ " bypass_cache=True\n",
+ " )\n",
+ " print(result.markdown[:500]) # Print first 500 characters\n",
+ "\n",
+ "# Uncomment the following line to run the proxy example\n",
+ "# await use_proxy()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "XTZ88lbayZQN"
+ },
+ "source": [
+ "### Extracting Structured Data with OpenAI\n",
+ "\n",
+ "Note: You'll need to set your OpenAI API key as an environment variable for this example to work."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "fIOlDayYyZQN",
+ "outputId": "cb8359cc-dee0-4762-9698-5dfdcee055b8"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
+ "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
+ "[LOG] 🕸️ Crawling https://openai.com/api/pricing/ using AsyncPlaywrightCrawlerStrategy...\n",
+ "[LOG] ✅ Crawled https://openai.com/api/pricing/ successfully!\n",
+ "[LOG] 🚀 Crawling done for https://openai.com/api/pricing/, success: True, time taken: 3.77 seconds\n",
+ "[LOG] 🚀 Content extracted for https://openai.com/api/pricing/, success: True, time taken: 0.21 seconds\n",
+ "[LOG] 🔥 Extracting semantic blocks for https://openai.com/api/pricing/, Strategy: AsyncWebCrawler\n",
+ "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 0\n",
+ "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 1\n",
+ "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 2\n",
+ "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 3\n",
+ "[LOG] Extracted 4 blocks from URL: https://openai.com/api/pricing/ block index: 3\n",
+ "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 4\n",
+ "[LOG] Extracted 5 blocks from URL: https://openai.com/api/pricing/ block index: 0\n",
+ "[LOG] Extracted 1 blocks from URL: https://openai.com/api/pricing/ block index: 4\n",
+ "[LOG] Extracted 8 blocks from URL: https://openai.com/api/pricing/ block index: 1\n",
+ "[LOG] Extracted 12 blocks from URL: https://openai.com/api/pricing/ block index: 2\n",
+ "[LOG] 🚀 Extraction done for https://openai.com/api/pricing/, time taken: 8.55 seconds.\n",
+ "5029\n"
+ ]
+ }
+ ],
+ "source": [
+ "import os\n",
+ "from google.colab import userdata\n",
+ "os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')\n",
+ "\n",
+ "class OpenAIModelFee(BaseModel):\n",
+ " model_name: str = Field(..., description=\"Name of the OpenAI model.\")\n",
+ " input_fee: str = Field(..., description=\"Fee for input token for the OpenAI model.\")\n",
+ " output_fee: str = Field(..., description=\"Fee for output token for the OpenAI model.\")\n",
+ "\n",
+ "async def extract_openai_fees():\n",
+ " async with AsyncWebCrawler(verbose=True) as crawler:\n",
+ " result = await crawler.arun(\n",
+ " url='https://openai.com/api/pricing/',\n",
+ " word_count_threshold=1,\n",
+ " extraction_strategy=LLMExtractionStrategy(\n",
+ " provider=\"openai/gpt-4o\", api_token=os.getenv('OPENAI_API_KEY'),\n",
+ " schema=OpenAIModelFee.schema(),\n",
+ " extraction_type=\"schema\",\n",
+ " instruction=\"\"\"From the crawled content, extract all mentioned model names along with their fees for input and output tokens.\n",
+ " Do not miss any models in the entire content. One extracted model JSON format should look like this:\n",
+ " {\"model_name\": \"GPT-4\", \"input_fee\": \"US$10.00 / 1M tokens\", \"output_fee\": \"US$30.00 / 1M tokens\"}.\"\"\"\n",
+ " ),\n",
+ " bypass_cache=True,\n",
+ " )\n",
+ " print(len(result.extracted_content))\n",
+ "\n",
+    "await extract_openai_fees()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "BypA5YxEyZQN"
+ },
+ "source": [
+ "### Advanced Multi-Page Crawling with JavaScript Execution"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "tfkcVQ0b7mw-"
+ },
+ "source": [
+ "This example demonstrates Crawl4AI's ability to handle complex crawling scenarios, specifically extracting commits from multiple pages of a GitHub repository. The challenge here is that clicking the \"Next\" button doesn't load a new page, but instead uses asynchronous JavaScript to update the content. This is a common hurdle in modern web crawling.\n",
+ "\n",
+ "To overcome this, we use Crawl4AI's custom JavaScript execution to simulate clicking the \"Next\" button, and implement a custom hook to detect when new data has loaded. Our strategy involves comparing the first commit's text before and after \"clicking\" Next, waiting until it changes to confirm new data has rendered. This showcases Crawl4AI's flexibility in handling dynamic content and its ability to implement custom logic for even the most challenging crawling tasks."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "qUBKGpn3yZQN",
+ "outputId": "3e555b6a-ed33-42f4-cce9-499a923fbe17"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
+ "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
+ "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
+ "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
+ "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 5.16 seconds\n",
+ "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.28 seconds\n",
+ "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
+ "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.28 seconds.\n",
+ "Page 1: Found 35 commits\n",
+ "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
+ "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
+ "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.78 seconds\n",
+ "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.90 seconds\n",
+ "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
+ "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.90 seconds.\n",
+ "Page 2: Found 35 commits\n",
+ "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n",
+ "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n",
+ "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 2.00 seconds\n",
+ "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.74 seconds\n",
+ "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n",
+ "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.75 seconds.\n",
+ "Page 3: Found 35 commits\n",
+ "Successfully crawled 105 commits across 3 pages\n"
+ ]
+ }
+ ],
+ "source": [
+ "import re\n",
+ "from bs4 import BeautifulSoup\n",
+ "\n",
+ "async def crawl_typescript_commits():\n",
+ " first_commit = \"\"\n",
+ " async def on_execution_started(page):\n",
+ " nonlocal first_commit\n",
+ " try:\n",
+ " while True:\n",
+ " await page.wait_for_selector('li.Box-sc-g0xbh4-0 h4')\n",
+ " commit = await page.query_selector('li.Box-sc-g0xbh4-0 h4')\n",
+ " commit = await commit.evaluate('(element) => element.textContent')\n",
+ " commit = re.sub(r'\\s+', '', commit)\n",
+ " if commit and commit != first_commit:\n",
+ " first_commit = commit\n",
+ " break\n",
+ " await asyncio.sleep(0.5)\n",
+ " except Exception as e:\n",
+ " print(f\"Warning: New content didn't appear after JavaScript execution: {e}\")\n",
+ "\n",
+ " async with AsyncWebCrawler(verbose=True) as crawler:\n",
+ " crawler.crawler_strategy.set_hook('on_execution_started', on_execution_started)\n",
+ "\n",
+ " url = \"https://github.com/microsoft/TypeScript/commits/main\"\n",
+ " session_id = \"typescript_commits_session\"\n",
+ " all_commits = []\n",
+ "\n",
+ " js_next_page = \"\"\"\n",
+ " const button = document.querySelector('a[data-testid=\"pagination-next-button\"]');\n",
+ " if (button) button.click();\n",
+ " \"\"\"\n",
+ "\n",
+ " for page in range(3): # Crawl 3 pages\n",
+ " result = await crawler.arun(\n",
+ " url=url,\n",
+ " session_id=session_id,\n",
+ " css_selector=\"li.Box-sc-g0xbh4-0\",\n",
+ " js=js_next_page if page > 0 else None,\n",
+ " bypass_cache=True,\n",
+ " js_only=page > 0\n",
+ " )\n",
+ "\n",
+ " assert result.success, f\"Failed to crawl page {page + 1}\"\n",
+ "\n",
+ " soup = BeautifulSoup(result.cleaned_html, 'html.parser')\n",
+ " commits = soup.select(\"li\")\n",
+ " all_commits.extend(commits)\n",
+ "\n",
+ " print(f\"Page {page + 1}: Found {len(commits)} commits\")\n",
+ "\n",
+ " await crawler.crawler_strategy.kill_session(session_id)\n",
+ " print(f\"Successfully crawled {len(all_commits)} commits across 3 pages\")\n",
+ "\n",
+ "await crawl_typescript_commits()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "EJRnYsp6yZQN"
+ },
+ "source": [
+ "### Using JsonCssExtractionStrategy for Fast Structured Output"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1ZMqIzB_8SYp"
+ },
+ "source": [
+ "The JsonCssExtractionStrategy is a powerful feature of Crawl4AI that allows for precise, structured data extraction from web pages. Here's how it works:\n",
+ "\n",
+ "1. You define a schema that describes the pattern of data you're interested in extracting.\n",
+ "2. The schema includes a base selector that identifies repeating elements on the page.\n",
+ "3. Within the schema, you define fields, each with its own selector and type.\n",
+ "4. These field selectors are applied within the context of each base selector element.\n",
+ "5. The strategy supports nested structures, lists within lists, and various data types.\n",
+ "6. You can even include computed fields for more complex data manipulation.\n",
+ "\n",
+ "This approach allows for highly flexible and precise data extraction, transforming semi-structured web content into clean, structured JSON data. It's particularly useful for extracting consistent data patterns from pages like product listings, news articles, or search results.\n",
+ "\n",
+ "For more details and advanced usage, check out the full documentation on the Crawl4AI website."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "trCMR2T9yZQN",
+ "outputId": "718d36f4-cccf-40f4-8d8c-c3ba73524d16"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[LOG] 🌤️ Warming up the AsyncWebCrawler\n",
+ "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n",
+ "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n",
+ "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n",
+ "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 7.00 seconds\n",
+ "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.32 seconds\n",
+ "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n",
+ "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.48 seconds.\n",
+ "Successfully extracted 11 news teasers\n",
+ "{\n",
+ " \"category\": \"Business News\",\n",
+ " \"headline\": \"NBC ripped up its Olympics playbook for 2024 \\u2014 so far, the new strategy paid off\",\n",
+ " \"summary\": \"The Olympics have long been key to NBCUniversal. Paris marked the 18th Olympic Games broadcast by NBC in the U.S.\",\n",
+ " \"time\": \"13h ago\",\n",
+ " \"image\": {\n",
+ " \"src\": \"https://media-cldnry.s-nbcnews.com/image/upload/t_focal-200x100,f_auto,q_auto:best/rockcms/2024-09/240903-nbc-olympics-ch-1344-c7a486.jpg\",\n",
+ " \"alt\": \"Mike Tirico.\"\n",
+ " },\n",
+ " \"link\": \"https://www.nbcnews.com/business\"\n",
+ "}\n"
+ ]
+ }
+ ],
+ "source": [
+ "async def extract_news_teasers():\n",
+ " schema = {\n",
+ " \"name\": \"News Teaser Extractor\",\n",
+ " \"baseSelector\": \".wide-tease-item__wrapper\",\n",
+ " \"fields\": [\n",
+ " {\n",
+ " \"name\": \"category\",\n",
+ " \"selector\": \".unibrow span[data-testid='unibrow-text']\",\n",
+ " \"type\": \"text\",\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"headline\",\n",
+ " \"selector\": \".wide-tease-item__headline\",\n",
+ " \"type\": \"text\",\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"summary\",\n",
+ " \"selector\": \".wide-tease-item__description\",\n",
+ " \"type\": \"text\",\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"time\",\n",
+ " \"selector\": \"[data-testid='wide-tease-date']\",\n",
+ " \"type\": \"text\",\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"image\",\n",
+ " \"type\": \"nested\",\n",
+ " \"selector\": \"picture.teasePicture img\",\n",
+ " \"fields\": [\n",
+ " {\"name\": \"src\", \"type\": \"attribute\", \"attribute\": \"src\"},\n",
+ " {\"name\": \"alt\", \"type\": \"attribute\", \"attribute\": \"alt\"},\n",
+ " ],\n",
+ " },\n",
+ " {\n",
+ " \"name\": \"link\",\n",
+ " \"selector\": \"a[href]\",\n",
+ " \"type\": \"attribute\",\n",
+ " \"attribute\": \"href\",\n",
+ " },\n",
+ " ],\n",
+ " }\n",
+ "\n",
+ " extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)\n",
+ "\n",
+ " async with AsyncWebCrawler(verbose=True) as crawler:\n",
+ " result = await crawler.arun(\n",
+ " url=\"https://www.nbcnews.com/business\",\n",
+ " extraction_strategy=extraction_strategy,\n",
+ " bypass_cache=True,\n",
+ " )\n",
+ "\n",
+ " assert result.success, \"Failed to crawl the page\"\n",
+ "\n",
+ " news_teasers = json.loads(result.extracted_content)\n",
+ " print(f\"Successfully extracted {len(news_teasers)} news teasers\")\n",
+ " print(json.dumps(news_teasers[0], indent=2))\n",
+ "\n",
+ "await extract_news_teasers()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "FnyVhJaByZQN"
+ },
+ "source": [
+ "## Speed Comparison\n",
+ "\n",
+    "Let's compare the speed of Crawl4AI with Firecrawl, a paid service. The cell below calls Firecrawl's API directly (you'll need a FIRECRAWL_API_KEY in your Colab user data) and times the same crawl with Crawl4AI."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "agDD186f3wig"
+ },
+ "source": [
+ "💡 **Note on Speed Comparison:**\n",
+ "\n",
+    "The speed test here runs on Google Colab, where network speed varies and may not reflect optimal conditions. Firecrawl crawls on its own servers, so calling its API shows its best performance, while Crawl4AI crawls from within Colab and is limited by Colab's network speed.\n",
+ "\n",
+ "For a more accurate comparison, it's recommended to run these tests on your own servers or computers with a stable and fast internet connection. Despite these limitations, Crawl4AI still demonstrates faster performance in this environment.\n",
+ "\n",
+ "If you run these tests locally, you may observe an even more significant speed advantage for Crawl4AI compared to other services."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "id": "F7KwHv8G1LbY"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install firecrawl"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "91813zILyZQN",
+ "outputId": "663223db-ab89-4976-b233-05ceca62b19b"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Firecrawl (simulated):\n",
+ "Time taken: 4.38 seconds\n",
+ "Content length: 41967 characters\n",
+ "Images found: 49\n",
+ "\n",
+ "Crawl4AI (simple crawl):\n",
+ "Time taken: 4.22 seconds\n",
+ "Content length: 18221 characters\n",
+ "Images found: 49\n",
+ "\n",
+ "Crawl4AI (with JavaScript execution):\n",
+ "Time taken: 9.13 seconds\n",
+ "Content length: 34243 characters\n",
+ "Images found: 89\n"
+ ]
+ }
+ ],
+ "source": [
+ "import os\n",
+ "from google.colab import userdata\n",
+ "os.environ['FIRECRAWL_API_KEY'] = userdata.get('FIRECRAWL_API_KEY')\n",
+ "import time\n",
+ "from firecrawl import FirecrawlApp\n",
+ "\n",
+ "async def speed_comparison():\n",
+    "    # Firecrawl performance, timed via its API\n",
+ " app = FirecrawlApp(api_key=os.environ['FIRECRAWL_API_KEY'])\n",
+ " start = time.time()\n",
+ " scrape_status = app.scrape_url(\n",
+ " 'https://www.nbcnews.com/business',\n",
+ " params={'formats': ['markdown', 'html']}\n",
+ " )\n",
+ " end = time.time()\n",
+ " print(\"Firecrawl (simulated):\")\n",
+ " print(f\"Time taken: {end - start:.2f} seconds\")\n",
+ " print(f\"Content length: {len(scrape_status['markdown'])} characters\")\n",
+ " print(f\"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}\")\n",
+ " print()\n",
+ "\n",
+ " async with AsyncWebCrawler() as crawler:\n",
+ " # Crawl4AI simple crawl\n",
+ " start = time.time()\n",
+ " result = await crawler.arun(\n",
+ " url=\"https://www.nbcnews.com/business\",\n",
+ " word_count_threshold=0,\n",
+ " bypass_cache=True,\n",
+ " verbose=False\n",
+ " )\n",
+ " end = time.time()\n",
+ " print(\"Crawl4AI (simple crawl):\")\n",
+ " print(f\"Time taken: {end - start:.2f} seconds\")\n",
+ " print(f\"Content length: {len(result.markdown)} characters\")\n",
+ " print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n",
+ " print()\n",
+ "\n",
+ " # Crawl4AI with JavaScript execution\n",
+ " start = time.time()\n",
+ " result = await crawler.arun(\n",
+ " url=\"https://www.nbcnews.com/business\",\n",
+ " js_code=[\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"],\n",
+ " word_count_threshold=0,\n",
+ " bypass_cache=True,\n",
+ " verbose=False\n",
+ " )\n",
+ " end = time.time()\n",
+ " print(\"Crawl4AI (with JavaScript execution):\")\n",
+ " print(f\"Time taken: {end - start:.2f} seconds\")\n",
+ " print(f\"Content length: {len(result.markdown)} characters\")\n",
+ " print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n",
+ "\n",
+ "await speed_comparison()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "OBFFYVJIyZQN"
+ },
+ "source": [
+    "If you run this on a local machine with a fast, stable connection:\n",
+    "- Simple crawl: Crawl4AI is typically 3-4 times faster than Firecrawl.\n",
+    "- With JavaScript execution: even while executing JavaScript to load more content (roughly doubling the number of images found), Crawl4AI remains faster than Firecrawl's simple crawl.\n",
+ "\n",
+ "Please note that actual performance may vary depending on network conditions and the specific content being crawled."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "A6_1RK1_yZQO"
+ },
+ "source": [
+ "## Conclusion\n",
+ "\n",
+ "In this notebook, we've explored the powerful features of Crawl4AI, including:\n",
+ "\n",
+ "1. Basic crawling\n",
+ "2. JavaScript execution and CSS selector usage\n",
+ "3. Proxy support\n",
+ "4. Structured data extraction with OpenAI\n",
+ "5. Advanced multi-page crawling with JavaScript execution\n",
+ "6. Fast structured output using JsonCssExtractionStrategy\n",
+ "7. Speed comparison with other services\n",
+ "\n",
+ "Crawl4AI offers a fast, flexible, and powerful solution for web crawling and data extraction tasks. Its asynchronous architecture and advanced features make it suitable for a wide range of applications, from simple web scraping to complex, multi-page data extraction scenarios.\n",
+ "\n",
+ "For more information and advanced usage, please visit the [Crawl4AI documentation](https://crawl4ai.com/mkdocs/).\n",
+ "\n",
+ "Happy crawling!"
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "venv",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
diff --git a/docs/md_v2/assets/docs.zip b/docs/md_v2/assets/docs.zip
new file mode 100644
index 00000000..6b28c0a8
Binary files /dev/null and b/docs/md_v2/assets/docs.zip differ
diff --git a/docs/md_v2/assets/styles.css b/docs/md_v2/assets/styles.css
index f103474f..68a93f5d 100644
--- a/docs/md_v2/assets/styles.css
+++ b/docs/md_v2/assets/styles.css
@@ -150,4 +150,11 @@ strong,
.tab-content pre {
margin: 0;
max-height: 300px; overflow: auto; border:none;
+}
+
+ol li::before {
+ content: counters(item, ".") ". ";
+ counter-increment: item;
+ /* float: left; */
+ /* padding-right: 5px; */
}
\ No newline at end of file
diff --git a/docs/md_v2/extraction/extraction_strategies.md b/docs/md_v2/extraction/extraction_strategies.md
deleted file mode 100644
index 2b29a081..00000000
--- a/docs/md_v2/extraction/extraction_strategies.md
+++ /dev/null
@@ -1,185 +0,0 @@
-## Extraction Strategies 🧠
-
-Crawl4AI offers powerful extraction strategies to derive meaningful information from web content. Let's dive into three of the most important strategies: `CosineStrategy`, `LLMExtractionStrategy`, and the new `JsonCssExtractionStrategy`.
-
-### LLMExtractionStrategy
-
-`LLMExtractionStrategy` leverages a Language Model (LLM) to extract meaningful content from HTML. This strategy uses an external provider for LLM completions to perform extraction based on instructions.
-
-#### When to Use
-- Suitable for complex extraction tasks requiring nuanced understanding.
-- Ideal for scenarios where detailed instructions can guide the extraction process.
-- Perfect for extracting specific types of information or content with precise guidelines.
-
-#### Parameters
-- `provider` (str, optional): Provider for language model completions (e.g., openai/gpt-4). Default is `DEFAULT_PROVIDER`.
-- `api_token` (str, optional): API token for the provider. If not provided, it will try to load from the environment variable `OPENAI_API_KEY`.
-- `instruction` (str, optional): Instructions to guide the LLM on how to perform the extraction. Default is `None`.
-
-#### Example Without Instructions
-```python
-import asyncio
-import os
-from crawl4ai import AsyncWebCrawler
-from crawl4ai.extraction_strategy import LLMExtractionStrategy
-
-async def main():
- async with AsyncWebCrawler(verbose=True) as crawler:
- # Define extraction strategy without instructions
- strategy = LLMExtractionStrategy(
- provider='openai',
- api_token=os.getenv('OPENAI_API_KEY')
- )
-
- # Sample URL
- url = "https://www.nbcnews.com/business"
-
- # Run the crawler with the extraction strategy
- result = await crawler.arun(url=url, extraction_strategy=strategy)
- print(result.extracted_content)
-
-asyncio.run(main())
-```
-
-#### Example With Instructions
-```python
-import asyncio
-import os
-from crawl4ai import AsyncWebCrawler
-from crawl4ai.extraction_strategy import LLMExtractionStrategy
-
-async def main():
- async with AsyncWebCrawler(verbose=True) as crawler:
- # Define extraction strategy with instructions
- strategy = LLMExtractionStrategy(
- provider='openai',
- api_token=os.getenv('OPENAI_API_KEY'),
- instruction="Extract only financial news and summarize key points."
- )
-
- # Sample URL
- url = "https://www.nbcnews.com/business"
-
- # Run the crawler with the extraction strategy
- result = await crawler.arun(url=url, extraction_strategy=strategy)
- print(result.extracted_content)
-
-asyncio.run(main())
-```
-
-### JsonCssExtractionStrategy
-
-`JsonCssExtractionStrategy` is a powerful tool for extracting structured data from HTML using CSS selectors. It allows you to define a schema that maps CSS selectors to specific fields, enabling precise and efficient data extraction.
-
-#### When to Use
-- Ideal for extracting structured data from websites with consistent HTML structures.
-- Perfect for scenarios where you need to extract specific elements or attributes from a webpage.
-- Suitable for creating datasets from web pages with tabular or list-based information.
-
-#### Parameters
-- `schema` (Dict[str, Any]): A dictionary defining the extraction schema, including base selector and field definitions.
-
-#### Example
-```python
-import asyncio
-import json
-from crawl4ai import AsyncWebCrawler
-from crawl4ai.extraction_strategy import JsonCssExtractionStrategy
-
-async def main():
- async with AsyncWebCrawler(verbose=True) as crawler:
- # Define the extraction schema
- schema = {
- "name": "News Articles",
- "baseSelector": "article.tease-card",
- "fields": [
- {
- "name": "title",
- "selector": "h2",
- "type": "text",
- },
- {
- "name": "summary",
- "selector": "div.tease-card__info",
- "type": "text",
- },
- {
- "name": "link",
- "selector": "a",
- "type": "attribute",
- "attribute": "href"
- }
- ],
- }
-
- # Create the extraction strategy
- strategy = JsonCssExtractionStrategy(schema, verbose=True)
-
- # Sample URL
- url = "https://www.nbcnews.com/business"
-
- # Run the crawler with the extraction strategy
- result = await crawler.arun(url=url, extraction_strategy=strategy)
-
- # Parse and print the extracted content
- extracted_data = json.loads(result.extracted_content)
- print(json.dumps(extracted_data, indent=2))
-
-asyncio.run(main())
-```
-
-#### Use Cases for JsonCssExtractionStrategy
-- Extracting product information from e-commerce websites.
-- Gathering news articles and their metadata from news portals.
-- Collecting user reviews and ratings from review websites.
-- Extracting job listings from job boards.
-
-By choosing the right extraction strategy, you can effectively extract the most relevant and useful information from web content. Whether you need fast, accurate semantic segmentation with `CosineStrategy`, nuanced, instruction-based extraction with `LLMExtractionStrategy`, or precise structured data extraction with `JsonCssExtractionStrategy`, Crawl4AI has you covered. Happy extracting! 🕵️♂️✨
-
-For more details on schema definitions and advanced extraction strategies, check out the[Advanced JsonCssExtraction](./css-advanced.md).
-
-
-### CosineStrategy
-
-`CosineStrategy` uses hierarchical clustering based on cosine similarity to group text chunks into meaningful clusters. This method converts each chunk into its embedding and then clusters them to form semantical chunks.
-
-#### When to Use
-- Ideal for fast, accurate semantic segmentation of text.
-- Perfect for scenarios where LLMs might be overkill or too slow.
-- Suitable for narrowing down content based on specific queries or keywords.
-
-#### Parameters
-- `semantic_filter` (str, optional): Keywords for filtering relevant documents before clustering. Documents are filtered based on their cosine similarity to the keyword filter embedding. Default is `None`.
-- `word_count_threshold` (int, optional): Minimum number of words per cluster. Default is `20`.
-- `max_dist` (float, optional): Maximum cophenetic distance on the dendrogram to form clusters. Default is `0.2`.
-- `linkage_method` (str, optional): Linkage method for hierarchical clustering. Default is `'ward'`.
-- `top_k` (int, optional): Number of top categories to extract. Default is `3`.
-- `model_name` (str, optional): Model name for embedding generation. Default is `'BAAI/bge-small-en-v1.5'`.
-
-#### Example
-```python
-import asyncio
-from crawl4ai import AsyncWebCrawler
-from crawl4ai.extraction_strategy import CosineStrategy
-
-async def main():
- async with AsyncWebCrawler(verbose=True) as crawler:
- # Define extraction strategy
- strategy = CosineStrategy(
- semantic_filter="finance economy stock market",
- word_count_threshold=10,
- max_dist=0.2,
- linkage_method='ward',
- top_k=3,
- model_name='BAAI/bge-small-en-v1.5'
- )
-
- # Sample URL
- url = "https://www.nbcnews.com/business"
-
- # Run the crawler with the extraction strategy
- result = await crawler.arun(url=url, extraction_strategy=strategy)
- print(result.extracted_content)
-
-asyncio.run(main())
-```
diff --git a/docs/md_v2/tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md b/docs/md_v2/tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md
new file mode 100644
index 00000000..f19d19f8
--- /dev/null
+++ b/docs/md_v2/tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md
@@ -0,0 +1,50 @@
+# Crawl4AI
+
+## Episode 1: Introduction to Crawl4AI and Basic Installation
+
+### Quick Intro
+Walk through installation from PyPI, setup, and verification. Show how to install with options like `torch` or `transformer` for advanced capabilities.
+
+Here's a condensed outline of the **Installation and Setup** video content:
+
+---
+
+1. **Introduction to Crawl4AI**: Briefly explain that Crawl4AI is a powerful tool for web scraping, data extraction, and content processing, with customizable options for various needs.
+
+2. **Installation Overview**:
+
+ - **Basic Install**: Run `pip install crawl4ai` and `playwright install` (to set up browser dependencies).
+
+ - **Optional Advanced Installs**:
+ - `pip install crawl4ai[torch]` - Adds PyTorch for clustering.
+ - `pip install crawl4ai[transformer]` - Adds support for LLM-based extraction.
+ - `pip install crawl4ai[all]` - Installs all features for complete functionality.
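+
+   - **Shell tip** (general `pip` behavior, not specific to Crawl4AI): on zsh, quote the extras so the square brackets aren't expanded as a glob pattern, e.g.:
+
+     ```bash
+     pip install "crawl4ai[all]"
+     playwright install
+     ```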
+
+3. **Verifying the Installation**:
+
+ - Walk through a simple test script to confirm the setup:
+ ```python
+ import asyncio
+ from crawl4ai import AsyncWebCrawler
+
+ async def main():
+ async with AsyncWebCrawler(verbose=True) as crawler:
+ result = await crawler.arun(url="https://www.example.com")
+ print(result.markdown[:500]) # Show first 500 characters
+
+ asyncio.run(main())
+ ```
+ - Explain that this script initializes the crawler and runs it on a test URL, displaying part of the extracted content to verify functionality.
+
+4. **Important Tips**:
+
+ - **Run** `playwright install` **after installation** to set up dependencies.
+ - **For full performance** on text-related tasks, run `crawl4ai-download-models` after installing with `[torch]`, `[transformer]`, or `[all]` options.
+ - If you encounter issues, refer to the documentation or GitHub issues.
+
+5. **Wrap Up**:
+ - Introduce the next topic in the series, which will cover Crawl4AI's browser configuration options (like choosing between `chromium`, `firefox`, and `webkit`).
+
+---
+
+This structure provides a concise, effective guide to get viewers up and running with Crawl4AI in minutes.
\ No newline at end of file
diff --git a/docs/md_v2/tutorial/episode_02_Overview_of_Advanced_Features.md b/docs/md_v2/tutorial/episode_02_Overview_of_Advanced_Features.md
new file mode 100644
index 00000000..f2216b4c
--- /dev/null
+++ b/docs/md_v2/tutorial/episode_02_Overview_of_Advanced_Features.md
@@ -0,0 +1,78 @@
+# Crawl4AI
+
+## Episode 2: Overview of Advanced Features
+
+### Quick Intro
+A general overview of advanced features like hooks, CSS selectors, and JSON CSS extraction.
+
+Here's a condensed outline for an **Overview of Advanced Features** video covering Crawl4AI's powerful customization and extraction options:
+
+---
+
+### **Overview of Advanced Features**
+
+1. **Introduction to Advanced Features**:
+
+ - Briefly introduce Crawl4AI’s advanced tools, which let users go beyond basic crawling to customize and fine-tune their scraping workflows.
+
+2. **Taking Screenshots**:
+
+ - Explain the screenshot capability for capturing page state and verifying content.
+ - **Example**:
+ ```python
+ result = await crawler.arun(url="https://www.example.com", screenshot=True)
+ ```
+   - Mention that the screenshot is returned as a base64-encoded string in `result.screenshot`, allowing easy decoding and saving.
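A minimal sketch for decoding and saving that screenshot (this helper is illustrative, not part of the library; it assumes the base64 string lives in `result.screenshot`):

```python
import base64

def save_screenshot(b64_data: str, path: str) -> bytes:
    """Decode a base64-encoded screenshot and write the image bytes to disk."""
    png_bytes = base64.b64decode(b64_data)
    with open(path, "wb") as f:
        f.write(png_bytes)
    return png_bytes
```

After a crawl with `screenshot=True`, you would call `save_screenshot(result.screenshot, "page.png")`.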
+
+3. **Media and Link Extraction**:
+
+ - Demonstrate how to pull all media (images, videos) and links (internal and external) from a page for deeper analysis or content gathering.
+ - **Example**:
+ ```python
+ result = await crawler.arun(url="https://www.example.com")
+ print("Media:", result.media)
+ print("Links:", result.links)
+ ```
+
+4. **Custom User Agent**:
+
+ - Show how to set a custom user agent to disguise the crawler or simulate specific devices/browsers.
+ - **Example**:
+ ```python
+ result = await crawler.arun(url="https://www.example.com", user_agent="Mozilla/5.0 (compatible; MyCrawler/1.0)")
+ ```
+
+5. **Custom Hooks for Enhanced Control**:
+
+ - Briefly cover how to use hooks, which allow custom actions like setting headers or handling login during the crawl.
+   - **Example**: Setting a custom header with the `before_goto` hook.
+     ```python
+     async def before_goto(page):
+         await page.set_extra_http_headers({"X-Test-Header": "test"})
+
+     # Register the hook on the crawler strategy before crawling
+     crawler.crawler_strategy.set_hook("before_goto", before_goto)
+     ```
+
+6. **CSS Selectors for Targeted Extraction**:
+
+ - Explain the use of CSS selectors to extract specific elements, ideal for structured data like articles or product details.
+ - **Example**:
+ ```python
+ result = await crawler.arun(url="https://www.example.com", css_selector="h2")
+ print("H2 Tags:", result.extracted_content)
+ ```
+
+7. **Crawling Inside Iframes**:
+
+ - Mention how enabling `process_iframes=True` allows extracting content within iframes, useful for sites with embedded content or ads.
+ - **Example**:
+ ```python
+ result = await crawler.arun(url="https://www.example.com", process_iframes=True)
+ ```
+
+8. **Wrap-Up**:
+
+ - Summarize these advanced features and how they allow users to customize every part of their web scraping experience.
+ - Tease upcoming videos where each feature will be explored in detail.
+
+---
+
+This covers each advanced feature with a brief example, providing a useful overview to prepare viewers for the more in-depth videos.
\ No newline at end of file
diff --git a/docs/md_v2/tutorial/episode_03_Browser_Configurations_&_Headless_Crawling.md b/docs/md_v2/tutorial/episode_03_Browser_Configurations_&_Headless_Crawling.md
new file mode 100644
index 00000000..100c4983
--- /dev/null
+++ b/docs/md_v2/tutorial/episode_03_Browser_Configurations_&_Headless_Crawling.md
@@ -0,0 +1,63 @@
+# Crawl4AI
+
+## Episode 3: Browser Configurations & Headless Crawling
+
+### Quick Intro
+Explain browser options (`chromium`, `firefox`, `webkit`) and settings for headless mode, caching, and verbose logging.
+
+Here’s a streamlined outline for the **Browser Configurations & Headless Crawling** video:
+
+---
+
+### **Browser Configurations & Headless Crawling**
+
+1. **Overview of Browser Options**:
+ - Crawl4AI supports three browser engines:
+ - **Chromium** (default) - Highly compatible.
+ - **Firefox** - Great for specialized use cases.
+     - **WebKit** - Lightweight, ideal for basic needs.
+ - **Example**:
+ ```python
+ # Using Chromium (default)
+ crawler = AsyncWebCrawler(browser_type="chromium")
+
+ # Using Firefox
+ crawler = AsyncWebCrawler(browser_type="firefox")
+
+ # Using WebKit
+ crawler = AsyncWebCrawler(browser_type="webkit")
+ ```
+
+2. **Headless Mode**:
+ - Headless mode runs the browser without a visible GUI, making it faster and less resource-intensive.
+ - To enable or disable:
+ ```python
+ # Headless mode (default is True)
+ crawler = AsyncWebCrawler(headless=True)
+
+ # Disable headless mode for debugging
+ crawler = AsyncWebCrawler(headless=False)
+ ```
+
+3. **Verbose Logging**:
+ - Use `verbose=True` to get detailed logs for each action, useful for debugging:
+ ```python
+ crawler = AsyncWebCrawler(verbose=True)
+ ```
+
+4. **Running a Basic Crawl with Configuration**:
+ - Example of a simple crawl with custom browser settings:
+ ```python
+ async with AsyncWebCrawler(browser_type="firefox", headless=True, verbose=True) as crawler:
+ result = await crawler.arun(url="https://www.example.com")
+ print(result.markdown[:500]) # Show first 500 characters
+ ```
+ - This example uses Firefox in headless mode with logging enabled, demonstrating the flexibility of Crawl4AI’s setup.
+
+5. **Recap & Next Steps**:
+ - Recap the power of selecting different browsers and running headless mode for speed and efficiency.
+ - Tease the next video: **Proxy & Security Settings** for navigating blocked or restricted content and protecting IP identity.
+
+---
+
+This breakdown covers browser configuration essentials in Crawl4AI, providing users with practical steps to optimize their scraping setup.
\ No newline at end of file
diff --git a/docs/md_v2/tutorial/episode_04_Advanced_Proxy_and_Security_Settings.md b/docs/md_v2/tutorial/episode_04_Advanced_Proxy_and_Security_Settings.md
new file mode 100644
index 00000000..9f45a939
--- /dev/null
+++ b/docs/md_v2/tutorial/episode_04_Advanced_Proxy_and_Security_Settings.md
@@ -0,0 +1,83 @@
+# Crawl4AI
+
+## Episode 4: Advanced Proxy and Security Settings
+
+### Quick Intro
+Showcase proxy configurations (HTTP, SOCKS5, authenticated proxies). Demo: Use rotating proxies and set custom headers to avoid IP blocking and enhance security.
+
+Here’s a focused outline for the **Proxy and Security Settings** video:
+
+---
+
+### **Proxy & Security Settings**
+
+1. **Why Use Proxies in Web Crawling**:
+ - Proxies are essential for bypassing IP-based restrictions, improving anonymity, and managing rate limits.
+ - Crawl4AI supports simple proxies, authenticated proxies, and proxy rotation for robust web scraping.
+
+2. **Basic Proxy Setup**:
+ - **Using a Simple Proxy**:
+ ```python
+ # HTTP proxy
+ crawler = AsyncWebCrawler(proxy="http://proxy.example.com:8080")
+
+ # SOCKS proxy
+ crawler = AsyncWebCrawler(proxy="socks5://proxy.example.com:1080")
+ ```
+
+3. **Authenticated Proxies**:
+ - Use `proxy_config` for proxies requiring a username and password:
+ ```python
+ proxy_config = {
+ "server": "http://proxy.example.com:8080",
+ "username": "user",
+ "password": "pass"
+ }
+ crawler = AsyncWebCrawler(proxy_config=proxy_config)
+ ```
+
+4. **Rotating Proxies**:
+ - Rotating proxies helps avoid IP bans by switching IP addresses for each request:
+ ```python
+ async def get_next_proxy():
+ # Define proxy rotation logic here
+ return {"server": "http://next.proxy.com:8080"}
+
+ async with AsyncWebCrawler() as crawler:
+ for url in urls:
+ proxy = await get_next_proxy()
+ crawler.update_proxy(proxy)
+ result = await crawler.arun(url=url)
+ ```
+   - This setup switches to a fresh proxy for each request, reducing the chance of IP bans and improving access.
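The `get_next_proxy` placeholder above can be filled in with a simple round-robin rotator. A sketch, using a hypothetical proxy pool (the server URLs are placeholders):

```python
from itertools import cycle

# Hypothetical pool; replace with your own proxy servers or a provider API.
PROXY_POOL = cycle([
    {"server": "http://proxy-a.example.com:8080"},
    {"server": "http://proxy-b.example.com:8080"},
    {"server": "http://proxy-c.example.com:8080"},
])

async def get_next_proxy():
    # Round-robin rotation; a production rotator might also health-check
    # proxies or drop ones that repeatedly fail.
    return next(PROXY_POOL)
```

Because `cycle` never exhausts, the pool wraps around after the last entry, so every request gets a proxy.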
+
+5. **Custom Headers for Additional Security**:
+ - Set custom headers to mask the crawler’s identity and avoid detection:
+ ```python
+ headers = {
+ "X-Forwarded-For": "203.0.113.195",
+ "Accept-Language": "en-US,en;q=0.9",
+ "Cache-Control": "no-cache",
+ "Pragma": "no-cache"
+ }
+ crawler = AsyncWebCrawler(headers=headers)
+ ```
+
+6. **Combining Proxies with Magic Mode for Anti-Bot Protection**:
+ - For sites with aggressive bot detection, combine `proxy` settings with `magic=True`:
+ ```python
+ async with AsyncWebCrawler(proxy="http://proxy.example.com:8080", headers={"Accept-Language": "en-US"}) as crawler:
+ result = await crawler.arun(
+ url="https://example.com",
+ magic=True # Enables anti-detection features
+ )
+ ```
+ - **Magic Mode** automatically enables user simulation, random timing, and browser property masking.
+
+7. **Wrap Up & Next Steps**:
+ - Summarize the importance of proxies and anti-detection in accessing restricted content and avoiding bans.
+ - Tease the next video: **JavaScript Execution and Handling Dynamic Content** for working with interactive and dynamically loaded pages.
+
+---
+
+This outline provides a practical guide to setting up proxies and security configurations, empowering users to navigate restricted sites while staying undetected.
\ No newline at end of file
diff --git a/docs/md_v2/tutorial/episode_05_JavaScript_Execution_and_Dynamic_Content_Handling.md b/docs/md_v2/tutorial/episode_05_JavaScript_Execution_and_Dynamic_Content_Handling.md
new file mode 100644
index 00000000..a9e7bb94
--- /dev/null
+++ b/docs/md_v2/tutorial/episode_05_JavaScript_Execution_and_Dynamic_Content_Handling.md
@@ -0,0 +1,90 @@
+# Crawl4AI
+
+## Episode 5: JavaScript Execution and Dynamic Content Handling
+
+### Quick Intro
+Explain JavaScript code injection with examples (e.g., simulating scrolling, clicking ‘load more’). Demo: Extract content from a page that uses dynamic loading with lazy-loaded images.
+
+Here’s a focused outline for the **JavaScript Execution and Dynamic Content Handling** video:
+
+---
+
+### **JavaScript Execution & Dynamic Content Handling**
+
+1. **Why JavaScript Execution Matters**:
+ - Many modern websites load content dynamically via JavaScript, requiring special handling to access all elements.
+ - Crawl4AI can execute JavaScript on pages, enabling it to interact with elements like “load more” buttons, infinite scrolls, and content that appears only after certain actions.
+
+2. **Basic JavaScript Execution**:
+ - Use `js_code` to execute JavaScript commands on a page:
+ ```python
+ # Scroll to bottom of the page
+ result = await crawler.arun(
+ url="https://example.com",
+ js_code="window.scrollTo(0, document.body.scrollHeight);"
+ )
+ ```
+ - This command scrolls to the bottom, triggering any lazy-loaded or dynamically added content.
+
+3. **Multiple Commands & Simulating Clicks**:
+ - Combine multiple JavaScript commands to interact with elements like “load more” buttons:
+ ```python
+ js_commands = [
+ "window.scrollTo(0, document.body.scrollHeight);",
+ "document.querySelector('.load-more').click();"
+ ]
+ result = await crawler.arun(
+ url="https://example.com",
+ js_code=js_commands
+ )
+ ```
+ - This script scrolls down and then clicks the “load more” button, useful for loading additional content blocks.
+
+4. **Waiting for Dynamic Content**:
+ - Use `wait_for` to ensure the page loads specific elements before proceeding:
+ ```python
+ result = await crawler.arun(
+ url="https://example.com",
+ js_code="window.scrollTo(0, document.body.scrollHeight);",
+ wait_for="css:.dynamic-content" # Wait for elements with class `.dynamic-content`
+ )
+ ```
+ - This example waits until elements with `.dynamic-content` are loaded, helping to capture content that appears after JavaScript actions.
+
+5. **Handling Complex Dynamic Content (e.g., Infinite Scroll)**:
+ - Combine JavaScript execution with conditional waiting to handle infinite scrolls or paginated content:
+ ```python
+ result = await crawler.arun(
+ url="https://example.com",
+ js_code=[
+ "window.scrollTo(0, document.body.scrollHeight);",
+ "const loadMore = document.querySelector('.load-more'); if (loadMore) loadMore.click();"
+ ],
+ wait_for="js:() => document.querySelectorAll('.item').length > 10" # Wait until 10 items are loaded
+ )
+ ```
+   - This example scrolls, clicks "load more" if the button is present, and then waits until more than 10 items have loaded before returning.
+
+6. **Complete Example: Dynamic Content Handling with Extraction**:
+ - Full example demonstrating a dynamic load and content extraction in one process:
+ ```python
+ async with AsyncWebCrawler() as crawler:
+ result = await crawler.arun(
+ url="https://example.com",
+ js_code=[
+ "window.scrollTo(0, document.body.scrollHeight);",
+ "document.querySelector('.load-more').click();"
+ ],
+ wait_for="css:.main-content",
+ css_selector=".main-content"
+ )
+ print(result.markdown[:500]) # Output the main content extracted
+ ```
+
+7. **Wrap Up & Next Steps**:
+ - Recap how JavaScript execution allows access to dynamic content, enabling powerful interactions.
+ - Tease the next video: **Content Cleaning and Fit Markdown** to show how Crawl4AI can extract only the most relevant content from complex pages.
+
+---
+
+This outline explains how to handle dynamic content and JavaScript-based interactions effectively, enabling users to scrape and interact with complex, modern websites.
\ No newline at end of file
diff --git a/docs/md_v2/tutorial/episode_06_Magic_Mode_and_Anti-Bot_Protection.md b/docs/md_v2/tutorial/episode_06_Magic_Mode_and_Anti-Bot_Protection.md
new file mode 100644
index 00000000..6703457c
--- /dev/null
+++ b/docs/md_v2/tutorial/episode_06_Magic_Mode_and_Anti-Bot_Protection.md
@@ -0,0 +1,79 @@
+# Crawl4AI
+
+## Episode 6: Magic Mode and Anti-Bot Protection
+
+### Quick Intro
+Highlight `Magic Mode` and anti-bot features like user simulation, navigator overrides, and timing randomization. Demo: Access a site with anti-bot protection and show how `Magic Mode` seamlessly handles it.
+
+Here’s a concise outline for the **Magic Mode and Anti-Bot Protection** video:
+
+---
+
+### **Magic Mode & Anti-Bot Protection**
+
+1. **Why Anti-Bot Protection is Important**:
+ - Many websites use bot detection mechanisms to block automated scraping. Crawl4AI’s anti-detection features help avoid IP bans, CAPTCHAs, and access restrictions.
+ - **Magic Mode** is a one-step solution to enable a range of anti-bot features without complex configuration.
+
+2. **Enabling Magic Mode**:
+ - Simply set `magic=True` to activate Crawl4AI’s full anti-bot suite:
+ ```python
+ result = await crawler.arun(
+ url="https://example.com",
+ magic=True # Enables all anti-detection features
+ )
+ ```
+ - This enables a blend of stealth techniques, including masking automation signals, randomizing timings, and simulating real user behavior.
+
+3. **What Magic Mode Does Behind the Scenes**:
+ - **User Simulation**: Mimics human actions like mouse movements and scrolling.
+ - **Navigator Overrides**: Hides signals that indicate an automated browser.
+ - **Timing Randomization**: Adds random delays to simulate natural interaction patterns.
+ - **Cookie Handling**: Accepts and manages cookies dynamically to avoid triggers from cookie pop-ups.
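As a toy illustration of the timing-randomization idea (this is not Crawl4AI's internal code, just a sketch of the technique), random delays between actions can look like:

```python
import asyncio
import random

async def human_like_delay(min_s: float = 0.5, max_s: float = 2.0) -> float:
    """Sleep for a random interval to mimic natural pacing between actions."""
    delay = random.uniform(min_s, max_s)
    await asyncio.sleep(delay)
    return delay
```

Calling this between page interactions breaks up the perfectly regular timing that bot detectors look for.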
+
+4. **Manual Anti-Bot Options (If Not Using Magic Mode)**:
+ - For granular control, you can configure individual settings without Magic Mode:
+ ```python
+ result = await crawler.arun(
+ url="https://example.com",
+ simulate_user=True, # Enables human-like behavior
+ override_navigator=True # Hides automation fingerprints
+ )
+ ```
+ - **Use Cases**: This approach allows more specific adjustments when certain anti-bot features are needed but others are not.
+
+5. **Combining Proxies with Magic Mode**:
+ - To avoid rate limits or IP blocks, combine Magic Mode with a proxy:
+ ```python
+ async with AsyncWebCrawler(
+ proxy="http://proxy.example.com:8080",
+ headers={"Accept-Language": "en-US"}
+ ) as crawler:
+ result = await crawler.arun(
+ url="https://example.com",
+ magic=True # Full anti-detection
+ )
+ ```
+ - This setup maximizes stealth by pairing anti-bot detection with IP obfuscation.
+
+6. **Example of Anti-Bot Protection in Action**:
+ - Full example with Magic Mode and proxies to scrape a protected page:
+ ```python
+ async with AsyncWebCrawler() as crawler:
+ result = await crawler.arun(
+ url="https://example.com/protected-content",
+ magic=True,
+ proxy="http://proxy.example.com:8080",
+ wait_for="css:.content-loaded" # Wait for the main content to load
+ )
+ print(result.markdown[:500]) # Display first 500 characters of the content
+ ```
+ - This example ensures seamless access to protected content by combining anti-detection and waiting for full content load.
+
+7. **Wrap Up & Next Steps**:
+ - Recap the power of Magic Mode and anti-bot features for handling restricted websites.
+ - Tease the next video: **Content Cleaning and Fit Markdown** to show how to extract clean and focused content from a page.
+
+---
+
+This outline shows users how to easily avoid bot detection and access restricted content, demonstrating both the power and simplicity of Magic Mode in Crawl4AI.
\ No newline at end of file
diff --git a/docs/md_v2/tutorial/episode_07_Content_Cleaning_and_Fit_Markdown.md b/docs/md_v2/tutorial/episode_07_Content_Cleaning_and_Fit_Markdown.md
new file mode 100644
index 00000000..ce7d5222
--- /dev/null
+++ b/docs/md_v2/tutorial/episode_07_Content_Cleaning_and_Fit_Markdown.md
@@ -0,0 +1,82 @@
+# Crawl4AI
+
+## Episode 7: Content Cleaning and Fit Markdown
+
+### Quick Intro
+Explain content cleaning options, including `fit_markdown` to keep only the most relevant content. Demo: Extract and compare regular vs. fit markdown from a news site or blog.
+
+Here’s a streamlined outline for the **Content Cleaning and Fit Markdown** video:
+
+---
+
+### **Content Cleaning & Fit Markdown**
+
+1. **Overview of Content Cleaning in Crawl4AI**:
+ - Explain that web pages often include extra elements like ads, navigation bars, footers, and popups.
+ - Crawl4AI’s content cleaning features help extract only the main content, reducing noise and enhancing readability.
+
+2. **Basic Content Cleaning Options**:
+ - **Removing Unwanted Elements**: Exclude specific HTML tags, like forms or navigation bars:
+ ```python
+ result = await crawler.arun(
+ url="https://example.com",
+ word_count_threshold=10, # Filter out blocks with fewer than 10 words
+ excluded_tags=['form', 'nav'], # Exclude specific tags
+ remove_overlay_elements=True # Remove popups and modals
+ )
+ ```
+ - This example extracts content while excluding forms, navigation, and modal overlays, ensuring clean results.
+
+3. **Fit Markdown for Main Content Extraction**:
+ - **What is Fit Markdown**: Uses advanced analysis to identify the most relevant content (ideal for articles, blogs, and documentation).
+ - **How it Works**: Analyzes content density, removes boilerplate elements, and maintains formatting for a clear output.
+ - **Example**:
+ ```python
+ result = await crawler.arun(url="https://example.com")
+ main_content = result.fit_markdown # Extracted main content
+ print(main_content[:500]) # Display first 500 characters
+ ```
+ - Fit Markdown is especially helpful for long-form content like news articles or blog posts.
+
+4. **Comparing Fit Markdown with Regular Markdown**:
+ - **Fit Markdown** returns the primary content without extraneous elements.
+ - **Regular Markdown** includes all extracted text in markdown format.
+ - Example to show the difference:
+ ```python
+ all_content = result.markdown # Full markdown
+ main_content = result.fit_markdown # Only the main content
+
+ print(f"All Content Length: {len(all_content)}")
+ print(f"Main Content Length: {len(main_content)}")
+ ```
+ - This comparison shows the effectiveness of Fit Markdown in focusing on essential content.
+
+5. **Media and Metadata Handling with Content Cleaning**:
+ - **Media Extraction**: Crawl4AI captures images and videos with metadata like alt text, descriptions, and relevance scores:
+ ```python
+ for image in result.media["images"]:
+ print(f"Source: {image['src']}, Alt Text: {image['alt']}, Relevance Score: {image['score']}")
+ ```
+ - **Use Case**: Useful for saving only relevant images or videos from an article or content-heavy page.
+
+6. **Example of Clean Content Extraction in Action**:
+ - Full example extracting cleaned content and Fit Markdown:
+ ```python
+ async with AsyncWebCrawler() as crawler:
+ result = await crawler.arun(
+ url="https://example.com",
+ word_count_threshold=10,
+ excluded_tags=['nav', 'footer'],
+ remove_overlay_elements=True
+ )
+ print(result.fit_markdown[:500]) # Show main content
+ ```
+ - This example demonstrates content cleaning with settings for filtering noise and focusing on the core text.
+
+7. **Wrap Up & Next Steps**:
+ - Summarize the power of Crawl4AI’s content cleaning features and Fit Markdown for capturing clean, relevant content.
+ - Tease the next video: **Link Analysis and Smart Filtering** to focus on analyzing and filtering links within crawled pages.
+
+---
+
+This outline covers Crawl4AI’s content cleaning features and the unique benefits of Fit Markdown, showing users how to retrieve focused, high-quality content from web pages.
\ No newline at end of file
diff --git a/docs/md_v2/tutorial/episode_08_Media_Handling:_Images,_Videos,_and_Audio.md b/docs/md_v2/tutorial/episode_08_Media_Handling:_Images,_Videos,_and_Audio.md
new file mode 100644
index 00000000..c3a724e2
--- /dev/null
+++ b/docs/md_v2/tutorial/episode_08_Media_Handling:_Images,_Videos,_and_Audio.md
@@ -0,0 +1,108 @@
+# Crawl4AI
+
+## Episode 8: Media Handling: Images, Videos, and Audio
+
+### Quick Intro
+Showcase Crawl4AI’s media extraction capabilities, including lazy-loaded media and metadata. Demo: Crawl a multimedia page, extract images, and show metadata (alt text, context, relevance score).
+
+Here’s a clear and focused outline for the **Media Handling: Images, Videos, and Audio** video:
+
+---
+
+### **Media Handling: Images, Videos, and Audio**
+
+1. **Overview of Media Extraction in Crawl4AI**:
+ - Crawl4AI can detect and extract different types of media (images, videos, and audio) along with useful metadata.
+ - This functionality is essential for gathering visual content from multimedia-heavy pages like e-commerce sites, news articles, and social media feeds.
+
+2. **Image Extraction and Metadata**:
+ - Crawl4AI captures images with detailed metadata, including:
+ - **Source URL**: The direct URL to the image.
+ - **Alt Text**: Image description if available.
+ - **Relevance Score**: A score (0–10) indicating how relevant the image is to the main content.
+ - **Context**: Text surrounding the image on the page.
+ - **Example**:
+ ```python
+ result = await crawler.arun(url="https://example.com")
+
+ for image in result.media["images"]:
+ print(f"Source: {image['src']}")
+ print(f"Alt Text: {image['alt']}")
+ print(f"Relevance Score: {image['score']}")
+ print(f"Context: {image['context']}")
+ ```
+ - This example shows how to access each image’s metadata, making it easy to filter for the most relevant visuals.
+
+3. **Handling Lazy-Loaded Images**:
+ - Crawl4AI automatically supports lazy-loaded images, which are commonly used to optimize webpage loading.
+ - **Example with Wait for Lazy-Loaded Content**:
+ ```python
+ result = await crawler.arun(
+ url="https://example.com",
+ wait_for="css:img[data-src]", # Wait for lazy-loaded images
+ delay_before_return_html=2.0 # Allow extra time for images to load
+ )
+ ```
+ - This setup waits for lazy-loaded images to appear, ensuring they are fully captured.
+
+4. **Video Extraction and Metadata**:
+ - Crawl4AI captures video elements, including:
+ - **Source URL**: The video’s direct URL.
+ - **Type**: Format of the video (e.g., MP4).
+ - **Thumbnail**: A poster or thumbnail image if available.
+ - **Duration**: Video length, if metadata is provided.
+ - **Example**:
+ ```python
+ for video in result.media["videos"]:
+ print(f"Video Source: {video['src']}")
+ print(f"Type: {video['type']}")
+ print(f"Thumbnail: {video.get('poster')}")
+ print(f"Duration: {video.get('duration')}")
+ ```
+ - This allows users to gather video content and relevant details for further processing or analysis.
+
+5. **Audio Extraction and Metadata**:
+ - Audio elements can also be extracted, with metadata like:
+ - **Source URL**: The audio file’s direct URL.
+ - **Type**: Format of the audio file (e.g., MP3).
+ - **Duration**: Length of the audio, if available.
+ - **Example**:
+ ```python
+ for audio in result.media["audios"]:
+ print(f"Audio Source: {audio['src']}")
+ print(f"Type: {audio['type']}")
+ print(f"Duration: {audio.get('duration')}")
+ ```
+ - Useful for sites with podcasts, sound bites, or other audio content.
+
+6. **Filtering Media by Relevance**:
+ - Use metadata like relevance score to filter only the most useful media content:
+ ```python
+ relevant_images = [img for img in result.media["images"] if img['score'] > 5]
+ ```
+ - This is especially helpful for content-heavy pages where you only want media directly related to the main content.
+
+7. **Example: Full Media Extraction with Content Filtering**:
+ - Full example extracting images, videos, and audio along with filtering by relevance:
+ ```python
+ async with AsyncWebCrawler() as crawler:
+ result = await crawler.arun(
+ url="https://example.com",
+ word_count_threshold=10, # Filter content blocks for relevance
+ exclude_external_images=True # Only keep internal images
+ )
+
+            # Filter images by relevance score, then display media summaries
+            relevant_images = [img for img in result.media["images"] if img['score'] > 5]
+            print(f"Relevant Images: {len(relevant_images)}")
+ print(f"Videos: {len(result.media['videos'])}")
+ print(f"Audio Clips: {len(result.media['audios'])}")
+ ```
+ - This example shows how to capture and filter various media types, focusing on what’s most relevant.
+
+8. **Wrap Up & Next Steps**:
+ - Recap the comprehensive media extraction capabilities, emphasizing how metadata helps users focus on relevant content.
+ - Tease the next video: **Link Analysis and Smart Filtering** to explore how Crawl4AI handles internal, external, and social media links for more focused data gathering.
+
+---
+
+This outline provides users with a complete guide to handling images, videos, and audio in Crawl4AI, using metadata to enhance relevance and precision in multimedia extraction.
\ No newline at end of file
diff --git a/docs/md_v2/tutorial/episode_09_Link_Analysis_and_Smart_Filtering.md b/docs/md_v2/tutorial/episode_09_Link_Analysis_and_Smart_Filtering.md
new file mode 100644
index 00000000..82af6b9a
--- /dev/null
+++ b/docs/md_v2/tutorial/episode_09_Link_Analysis_and_Smart_Filtering.md
@@ -0,0 +1,88 @@
+# Crawl4AI
+
+## Episode 9: Link Analysis and Smart Filtering
+
+### Quick Intro
+Walk through internal and external link classification, social media link filtering, and custom domain exclusion. Demo: Analyze links on a website, focusing on internal navigation vs. external or ad links.
+
+Here’s a focused outline for the **Link Analysis and Smart Filtering** video:
+
+---
+
+### **Link Analysis & Smart Filtering**
+
+1. **Importance of Link Analysis in Web Crawling**:
+ - Explain that web pages often contain numerous links, including internal links, external links, social media links, and ads.
+ - Crawl4AI’s link analysis and filtering options help extract only relevant links, enabling more targeted and efficient crawls.
+
+2. **Automatic Link Classification**:
+ - Crawl4AI categorizes links automatically into internal, external, and social media links.
+ - **Example**:
+ ```python
+ result = await crawler.arun(url="https://example.com")
+
+ # Access internal and external links
+ internal_links = result.links["internal"]
+ external_links = result.links["external"]
+
+ # Print first few links for each type
+ print("Internal Links:", internal_links[:3])
+ print("External Links:", external_links[:3])
+ ```
+
+3. **Filtering Out Unwanted Links**:
+ - **Exclude External Links**: Remove all links pointing to external sites.
+ - **Exclude Social Media Links**: Filter out social media domains like Facebook or Twitter.
+ - **Example**:
+ ```python
+ result = await crawler.arun(
+ url="https://example.com",
+ exclude_external_links=True, # Remove external links
+ exclude_social_media_links=True # Remove social media links
+ )
+ ```
+
+4. **Custom Domain Filtering**:
+ - **Exclude Specific Domains**: Filter links from particular domains, e.g., ad sites.
+ - **Custom Social Media Domains**: Add additional social media domains if needed.
+ - **Example**:
+ ```python
+ result = await crawler.arun(
+ url="https://example.com",
+ exclude_domains=["ads.com", "trackers.com"],
+ exclude_social_media_domains=["facebook.com", "linkedin.com"]
+ )
+ ```
+
+5. **Accessing Link Context and Metadata**:
+ - Crawl4AI provides additional metadata for each link, including its text, type (e.g., navigation or content), and surrounding context.
+ - **Example**:
+ ```python
+ for link in result.links["internal"]:
+ print(f"Link: {link['href']}, Text: {link['text']}, Context: {link['context']}")
+ ```
+ - **Use Case**: Helps users understand the relevance of links based on where they are placed on the page (e.g., navigation vs. article content).
+
+6. **Example of Comprehensive Link Filtering and Analysis**:
+ - Full example combining link filtering, metadata access, and contextual information:
+ ```python
+ async with AsyncWebCrawler() as crawler:
+ result = await crawler.arun(
+ url="https://example.com",
+ exclude_external_links=True,
+ exclude_social_media_links=True,
+ exclude_domains=["ads.com"],
+ css_selector=".main-content" # Focus only on main content area
+ )
+ for link in result.links["internal"]:
+ print(f"Internal Link: {link['href']}, Text: {link['text']}, Context: {link['context']}")
+ ```
+ - This example filters unnecessary links, keeping only internal and relevant links from the main content area.
+
+7. **Wrap Up & Next Steps**:
+ - Summarize the benefits of link filtering for efficient crawling and relevant content extraction.
+ - Tease the next video: **Custom Headers, Identity Management, and User Simulation** to explain how to configure identity settings and simulate user behavior for stealthier crawls.
+
+---
+
+This outline provides a practical overview of Crawl4AI’s link analysis and filtering features, helping users target only essential links while eliminating distractions.
\ No newline at end of file
diff --git a/docs/md_v2/tutorial/episode_10_Custom_Headers,_Identity,_and_User_Simulation.md b/docs/md_v2/tutorial/episode_10_Custom_Headers,_Identity,_and_User_Simulation.md
new file mode 100644
index 00000000..92af4f2e
--- /dev/null
+++ b/docs/md_v2/tutorial/episode_10_Custom_Headers,_Identity,_and_User_Simulation.md
@@ -0,0 +1,86 @@
+# Crawl4AI
+
+## Episode 10: Custom Headers, Identity, and User Simulation
+
+### Quick Intro
+Teach how to use custom headers, user-agent strings, and simulate real user interactions. Demo: Set custom user-agent and headers to access a site that blocks typical crawlers.
+
+Here’s a concise outline for the **Custom Headers, Identity Management, and User Simulation** video:
+
+---
+
+### **Custom Headers, Identity Management, & User Simulation**
+
+1. **Why Customize Headers and Identity in Crawling**:
+ - Websites often track request headers and browser properties to detect bots. Customizing headers and managing identity help make requests appear more human, improving access to restricted sites.
+
+2. **Setting Custom Headers**:
+ - Customize HTTP headers to mimic genuine browser requests or meet site-specific requirements:
+ ```python
+ headers = {
+ "Accept-Language": "en-US,en;q=0.9",
+ "X-Requested-With": "XMLHttpRequest",
+ "Cache-Control": "no-cache"
+ }
+ crawler = AsyncWebCrawler(headers=headers)
+ ```
+ - **Use Case**: Customize the `Accept-Language` header to simulate local user settings, or `Cache-Control` to bypass cache for fresh content.
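As a quick illustration of the `Accept-Language` use case, here is a small stdlib-only helper that builds locale-specific headers (the helper name, locale format, and q-values are illustrative, not part of Crawl4AI's API):

```python
# Build headers that simulate a user with a given locale. The helper
# name and the q-value weighting are illustrative choices.
def localized_headers(locale: str) -> dict:
    language = locale.split("-")[0]
    return {
        "Accept-Language": f"{locale},{language};q=0.9,en;q=0.5",
        "Cache-Control": "no-cache",  # bypass caches for fresh content
    }

print(localized_headers("de-DE"))
# → {'Accept-Language': 'de-DE,de;q=0.9,en;q=0.5', 'Cache-Control': 'no-cache'}
```

The returned dict is what you would pass as `headers=` in the constructor shown above.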
+
+3. **Setting a Custom User Agent**:
+ - Some websites block requests from common crawler user agents. Setting a custom user agent string helps bypass these restrictions:
+ ```python
+ crawler = AsyncWebCrawler(
+ user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
+ )
+ ```
+ - **Tip**: Use user-agent strings from popular browsers (e.g., Chrome, Firefox) to improve access and reduce detection risks.
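One lightweight way to apply this tip is to rotate through a small pool of real browser user-agent strings between crawls (a stdlib-only sketch; the pool contents and the `pick_user_agent` helper are illustrative):

```python
import random

# A small pool of realistic desktop user-agent strings. These are
# examples; in a real deployment, keep the pool in sync with current
# browser releases.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 "
    "(KHTML, like Gecko) Version/14.1 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0",
]

def pick_user_agent(pool=USER_AGENTS):
    """Return a randomly chosen user-agent string for the next crawl."""
    return random.choice(pool)

# Hypothetical usage, mirroring the constructor shown above:
# crawler = AsyncWebCrawler(user_agent=pick_user_agent())
```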
+
+4. **User Simulation for Human-like Behavior**:
+ - Enable `simulate_user=True` to mimic natural user interactions, such as random timing and simulated mouse movements:
+ ```python
+ result = await crawler.arun(
+ url="https://example.com",
+ simulate_user=True # Simulates human-like behavior
+ )
+ ```
+ - **Behavioral Effects**: Adds subtle variations in interactions, making the crawler harder to detect on bot-protected sites.
+
+5. **Navigator Overrides and Magic Mode for Full Identity Masking**:
+ - Use `override_navigator=True` to mask automation indicators like `navigator.webdriver`, which websites check to detect bots:
+ ```python
+ result = await crawler.arun(
+ url="https://example.com",
+ override_navigator=True # Masks bot-related signals
+ )
+ ```
+ - **Combining with Magic Mode**: For a complete anti-bot setup, combine these identity options with `magic=True` for maximum protection:
+ ```python
+ async with AsyncWebCrawler() as crawler:
+ result = await crawler.arun(
+ url="https://example.com",
+ magic=True, # Enables all anti-bot detection features
+ user_agent="Custom-Agent", # Custom agent with Magic Mode
+ )
+ ```
+ - This setup includes all anti-detection techniques like navigator masking, random timing, and user simulation.
+
+6. **Example: Comprehensive Setup for Identity Management**:
+ - A full example combining custom headers, user-agent, and user simulation for a realistic browsing profile:
+ ```python
+ async with AsyncWebCrawler(
+ headers={"Accept-Language": "en-US", "Cache-Control": "no-cache"},
+ user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0",
+ simulate_user=True
+ ) as crawler:
+ result = await crawler.arun(url="https://example.com/secure-page")
+ print(result.markdown[:500]) # Display extracted content
+ ```
+ - This example enables detailed customization for evading detection and accessing protected pages smoothly.
+
+7. **Wrap Up & Next Steps**:
+ - Recap the value of headers, user-agent customization, and simulation in bypassing bot detection.
+ - Tease the next video: **Extraction Strategies: JSON CSS, LLM, and Cosine** to dive into structured data extraction methods for high-quality content retrieval.
+
+---
+
+This outline equips users with tools for managing crawler identity and human-like behavior, essential for accessing bot-protected or restricted websites.
\ No newline at end of file
diff --git a/docs/md_v2/tutorial/episode_11_1_Extraction_Strategies:_JSON_CSS.md b/docs/md_v2/tutorial/episode_11_1_Extraction_Strategies:_JSON_CSS.md
new file mode 100644
index 00000000..a8a357af
--- /dev/null
+++ b/docs/md_v2/tutorial/episode_11_1_Extraction_Strategies:_JSON_CSS.md
@@ -0,0 +1,186 @@
+Here’s a detailed outline for the **JSON-CSS Extraction Strategy** video, covering all key aspects and supported structures in Crawl4AI:
+
+---
+
+### **11.1 JSON-CSS Extraction Strategy**
+
+#### **1. Introduction to JSON-CSS Extraction**
+ - JSON-CSS Extraction is used for pulling structured data from pages with repeated patterns, like product listings, article feeds, or directories.
+ - This strategy allows defining a schema with CSS selectors and data fields, making it easy to capture nested, list-based, or singular elements.
+
+#### **2. Basic Schema Structure**
+ - **Schema Fields**: The schema has two main components:
+ - `baseSelector`: A CSS selector to locate the main elements you want to extract (e.g., each article or product block).
+ - `fields`: Defines the data fields for each element, supporting various data types and structures.
+
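Putting those two components together, a minimal schema might look like this (the `name`, selectors, and field names here are illustrative):

```python
# A minimal JSON-CSS schema: `baseSelector` locates each repeated block
# (e.g. one product card), and `fields` maps CSS selectors inside that
# block to named output values.
product_schema = {
    "name": "Products",
    "baseSelector": ".product",
    "fields": [
        {"name": "title", "selector": ".title", "type": "text"},
        {"name": "price", "selector": ".price", "type": "text"},
    ],
}

# Each extracted item will be a dict with one key per field:
field_names = [f["name"] for f in product_schema["fields"]]
print(field_names)  # → ['title', 'price']
```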
+#### **3. Simple Field Extraction**
+ - **Example HTML**:
+     ```html
+     <p class="description">This is a sample product.</p>
+     <div class="product">
+       <span class="price">$19.99</span>
+       <span class="note">Limited time offer.</span>
+     </div>
+     <div class="product">
+       <span class="price">$99.99</span>
+       <span class="note">Best product of the year.</span>
+     </div>