diff --git a/README.md b/README.md index bcb20270..9e937aab 100644 --- a/README.md +++ b/README.md @@ -25,12 +25,9 @@ Use the [Crawl4AI GPT Assistant](https://tinyurl.com/crawl4ai-gpt) as your AI-po - 💾 Improved caching system for better performance - ⚡ Optimized batch processing with automatic rate limiting -Try new features in this colab notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1L6LJ3KlplhJdUy3Wcry6pstnwRpCJ3yB?usp=sharing) - - ## Try it Now! -✨ Play around with this [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1REChY6fXQf-EaVYLv0eHEWvzlYxGm0pd?usp=sharing) +✨ Play around with this [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SgRPrByQLzjRfwoRNq1wSGE9nYY_EE8C?usp=sharing) ✨ Visit our [Documentation Website](https://crawl4ai.com/mkdocs/) diff --git a/docs/examples/quickstart.ipynb b/docs/examples/quickstart.ipynb index 71f23acb..4751dec8 100644 --- a/docs/examples/quickstart.ipynb +++ b/docs/examples/quickstart.ipynb @@ -1,735 +1,664 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "6yLvrXn7yZQI" - }, - "source": [ - "# Crawl4AI: Advanced Web Crawling and Data Extraction\n", - "\n", - "Welcome to this interactive notebook showcasing Crawl4AI, an advanced asynchronous web crawling and data extraction library.\n", - "\n", - "- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n", - "- Twitter: [@unclecode](https://twitter.com/unclecode)\n", - "- Website: [https://crawl4ai.com](https://crawl4ai.com)\n", - "\n", - "Let's explore the powerful features of Crawl4AI!" 
- ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "KIn_9nxFyZQK" - }, - "source": [ - "## Installation\n", - "\n", - "First, let's install Crawl4AI from GitHub:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "mSnaxLf3zMog" - }, - "outputs": [], - "source": [ - "!sudo apt-get update && sudo apt-get install -y libwoff1 libopus0 libwebp6 libwebpdemux2 libenchant1c2a libgudev-1.0-0 libsecret-1-0 libhyphen0 libgdk-pixbuf2.0-0 libegl1 libnotify4 libxslt1.1 libevent-2.1-7 libgles2 libvpx6 libxcomposite1 libatk1.0-0 libatk-bridge2.0-0 libepoxy0 libgtk-3-0 libharfbuzz-icu0" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "xlXqaRtayZQK" - }, - "outputs": [], - "source": [ - "!pip install crawl4ai\n", - "!pip install nest-asyncio\n", - "!playwright install" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "qKCE7TI7yZQL" - }, - "source": [ - "Now, let's import the necessary libraries:" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": { - "id": "I67tr7aAyZQL" - }, - "outputs": [], - "source": [ - "import asyncio\n", - "import nest_asyncio\n", - "from crawl4ai import AsyncWebCrawler\n", - "from crawl4ai.extraction_strategy import JsonCssExtractionStrategy, LLMExtractionStrategy\n", - "import json\n", - "import time\n", - "from pydantic import BaseModel, Field\n", - "\n", - "nest_asyncio.apply()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "h7yR_Rt_yZQM" - }, - "source": [ - "## Basic Usage\n", - "\n", - "Let's start with a simple crawl example:" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "yBh6hf4WyZQM", - "outputId": "0f83af5c-abba-4175-ed95-70b7512e6bcc" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", - "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", - "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.05 seconds\n", - "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.05 seconds.\n", - "18102\n" - ] - } - ], - "source": [ - "async def simple_crawl():\n", - " async with AsyncWebCrawler(verbose=True) as crawler:\n", - " result = await crawler.arun(url=\"https://www.nbcnews.com/business\")\n", - " print(len(result.markdown))\n", - "await simple_crawl()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "9rtkgHI28uI4" - }, - "source": [ - "💡 By default, **Crawl4AI** caches the result of every URL, so the next time you call it, you’ll get an instant result. But if you want to bypass the cache, just set `bypass_cache=True`." 
- ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "MzZ0zlJ9yZQM" - }, - "source": [ - "## Advanced Features\n", - "\n", - "### Executing JavaScript and Using CSS Selectors" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "gHStF86xyZQM", - "outputId": "34d0fb6d-4dec-4677-f76e-85a1f082829b" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", - "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", - "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n", - "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n", - "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 6.06 seconds\n", - "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.10 seconds\n", - "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", - "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.11 seconds.\n", - "41135\n" - ] - } - ], - "source": [ - "async def js_and_css():\n", - " async with AsyncWebCrawler(verbose=True) as crawler:\n", - " js_code = [\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"]\n", - " result = await crawler.arun(\n", - " url=\"https://www.nbcnews.com/business\",\n", - " js_code=js_code,\n", - " # css_selector=\"YOUR_CSS_SELECTOR_HERE\",\n", - " bypass_cache=True\n", - " )\n", - " print(len(result.markdown))\n", - "\n", - "await js_and_css()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "cqE_W4coyZQM" - }, - "source": [ - "### Using a Proxy\n", - "\n", - "Note: You'll need to replace the proxy URL with a working proxy for this example to run successfully." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "QjAyiAGqyZQM" - }, - "outputs": [], - "source": [ - "async def use_proxy():\n", - " async with AsyncWebCrawler(verbose=True, proxy=\"http://your-proxy-url:port\") as crawler:\n", - " result = await crawler.arun(\n", - " url=\"https://www.nbcnews.com/business\",\n", - " bypass_cache=True\n", - " )\n", - " print(result.markdown[:500]) # Print first 500 characters\n", - "\n", - "# Uncomment the following line to run the proxy example\n", - "# await use_proxy()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "XTZ88lbayZQN" - }, - "source": [ - "### Extracting Structured Data with OpenAI\n", - "\n", - "Note: You'll need to set your OpenAI API key as an environment variable for this example to work." 
- ] - }, - { - "cell_type": "code", - "execution_count": 14, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "fIOlDayYyZQN", - "outputId": "cb8359cc-dee0-4762-9698-5dfdcee055b8" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", - "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", - "[LOG] 🕸️ Crawling https://openai.com/api/pricing/ using AsyncPlaywrightCrawlerStrategy...\n", - "[LOG] ✅ Crawled https://openai.com/api/pricing/ successfully!\n", - "[LOG] 🚀 Crawling done for https://openai.com/api/pricing/, success: True, time taken: 3.77 seconds\n", - "[LOG] 🚀 Content extracted for https://openai.com/api/pricing/, success: True, time taken: 0.21 seconds\n", - "[LOG] 🔥 Extracting semantic blocks for https://openai.com/api/pricing/, Strategy: AsyncWebCrawler\n", - "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 0\n", - "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 1\n", - "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 2\n", - "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 3\n", - "[LOG] Extracted 4 blocks from URL: https://openai.com/api/pricing/ block index: 3\n", - "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 4\n", - "[LOG] Extracted 5 blocks from URL: https://openai.com/api/pricing/ block index: 0\n", - "[LOG] Extracted 1 blocks from URL: https://openai.com/api/pricing/ block index: 4\n", - "[LOG] Extracted 8 blocks from URL: https://openai.com/api/pricing/ block index: 1\n", - "[LOG] Extracted 12 blocks from URL: https://openai.com/api/pricing/ block index: 2\n", - "[LOG] 🚀 Extraction done for https://openai.com/api/pricing/, time taken: 8.55 seconds.\n", - "5029\n" - ] - } - ], - "source": [ - "import os\n", - "from google.colab import userdata\n", - "os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')\n", - "\n", - "class OpenAIModelFee(BaseModel):\n", - " model_name: str = Field(..., description=\"Name of the OpenAI model.\")\n", - " input_fee: str = Field(..., description=\"Fee for input token for the OpenAI model.\")\n", - " output_fee: str = Field(..., description=\"Fee for output token for the OpenAI model.\")\n", - "\n", - "async def extract_openai_fees():\n", - " async with AsyncWebCrawler(verbose=True) as crawler:\n", - " result = await crawler.arun(\n", - " url='https://openai.com/api/pricing/',\n", - " word_count_threshold=1,\n", - " extraction_strategy=LLMExtractionStrategy(\n", - " provider=\"openai/gpt-4o\", api_token=os.getenv('OPENAI_API_KEY'),\n", - " schema=OpenAIModelFee.schema(),\n", - " extraction_type=\"schema\",\n", - " instruction=\"\"\"From the crawled content, extract all mentioned model names along with their fees for input and output tokens.\n", - " Do not miss any models in the entire content. 
One extracted model JSON format should look like this:\n", - " {\"model_name\": \"GPT-4\", \"input_fee\": \"US$10.00 / 1M tokens\", \"output_fee\": \"US$30.00 / 1M tokens\"}.\"\"\"\n", - " ),\n", - " bypass_cache=True,\n", - " )\n", - " print(len(result.extracted_content))\n", - "\n", - "# Uncomment the following line to run the OpenAI extraction example\n", - "await extract_openai_fees()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "BypA5YxEyZQN" - }, - "source": [ - "### Advanced Multi-Page Crawling with JavaScript Execution" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "tfkcVQ0b7mw-" - }, - "source": [ - "## Advanced Multi-Page Crawling with JavaScript Execution\n", - "\n", - "This example demonstrates Crawl4AI's ability to handle complex crawling scenarios, specifically extracting commits from multiple pages of a GitHub repository. The challenge here is that clicking the \"Next\" button doesn't load a new page, but instead uses asynchronous JavaScript to update the content. This is a common hurdle in modern web crawling.\n", - "\n", - "To overcome this, we use Crawl4AI's custom JavaScript execution to simulate clicking the \"Next\" button, and implement a custom hook to detect when new data has loaded. Our strategy involves comparing the first commit's text before and after \"clicking\" Next, waiting until it changes to confirm new data has rendered. This showcases Crawl4AI's flexibility in handling dynamic content and its ability to implement custom logic for even the most challenging crawling tasks." - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "qUBKGpn3yZQN", - "outputId": "3e555b6a-ed33-42f4-cce9-499a923fbe17" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", - "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", - "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n", - "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n", - "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 5.16 seconds\n", - "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.28 seconds\n", - "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n", - "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.28 seconds.\n", - "Page 1: Found 35 commits\n", - "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n", - "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n", - "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.78 seconds\n", - "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.90 seconds\n", - "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n", - "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.90 seconds.\n", - "Page 2: Found 35 commits\n", - "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using 
AsyncPlaywrightCrawlerStrategy...\n", - "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n", - "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 2.00 seconds\n", - "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.74 seconds\n", - "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n", - "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.75 seconds.\n", - "Page 3: Found 35 commits\n", - "Successfully crawled 105 commits across 3 pages\n" - ] - } - ], - "source": [ - "import re\n", - "from bs4 import BeautifulSoup\n", - "\n", - "async def crawl_typescript_commits():\n", - " first_commit = \"\"\n", - " async def on_execution_started(page):\n", - " nonlocal first_commit\n", - " try:\n", - " while True:\n", - " await page.wait_for_selector('li.Box-sc-g0xbh4-0 h4')\n", - " commit = await page.query_selector('li.Box-sc-g0xbh4-0 h4')\n", - " commit = await commit.evaluate('(element) => element.textContent')\n", - " commit = re.sub(r'\\s+', '', commit)\n", - " if commit and commit != first_commit:\n", - " first_commit = commit\n", - " break\n", - " await asyncio.sleep(0.5)\n", - " except Exception as e:\n", - " print(f\"Warning: New content didn't appear after JavaScript execution: {e}\")\n", - "\n", - " async with AsyncWebCrawler(verbose=True) as crawler:\n", - " crawler.crawler_strategy.set_hook('on_execution_started', on_execution_started)\n", - "\n", - " url = \"https://github.com/microsoft/TypeScript/commits/main\"\n", - " session_id = \"typescript_commits_session\"\n", - " all_commits = []\n", - "\n", - " js_next_page = \"\"\"\n", - " const button = document.querySelector('a[data-testid=\"pagination-next-button\"]');\n", - " if (button) button.click();\n", - " \"\"\"\n", - "\n", - " for page in range(3): # Crawl 3 pages\n", - " result = await crawler.arun(\n", - " url=url,\n", - " session_id=session_id,\n", - " css_selector=\"li.Box-sc-g0xbh4-0\",\n", - " js=js_next_page if page > 0 else None,\n", - " bypass_cache=True,\n", - " js_only=page > 0\n", - " )\n", - "\n", - " assert result.success, f\"Failed to crawl page {page + 1}\"\n", - "\n", - " soup = BeautifulSoup(result.cleaned_html, 'html.parser')\n", - " commits = soup.select(\"li\")\n", - " all_commits.extend(commits)\n", - "\n", - " print(f\"Page {page + 1}: Found {len(commits)} commits\")\n", - "\n", - " await crawler.crawler_strategy.kill_session(session_id)\n", - " print(f\"Successfully crawled {len(all_commits)} commits across 3 pages\")\n", - "\n", - "await crawl_typescript_commits()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "EJRnYsp6yZQN" - }, - "source": [ - "### Using JsonCssExtractionStrategy for Fast Structured Output" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "1ZMqIzB_8SYp" - }, - "source": [ - "The JsonCssExtractionStrategy is a powerful feature of Crawl4AI that allows for precise, structured data extraction from web pages. Here's how it works:\n", - "\n", - "1. You define a schema that describes the pattern of data you're interested in extracting.\n", - "2. The schema includes a base selector that identifies repeating elements on the page.\n", - "3. Within the schema, you define fields, each with its own selector and type.\n", - "4. 
These field selectors are applied within the context of each base selector element.\n", - "5. The strategy supports nested structures, lists within lists, and various data types.\n", - "6. You can even include computed fields for more complex data manipulation.\n", - "\n", - "This approach allows for highly flexible and precise data extraction, transforming semi-structured web content into clean, structured JSON data. It's particularly useful for extracting consistent data patterns from pages like product listings, news articles, or search results.\n", - "\n", - "For more details and advanced usage, check out the full documentation on the Crawl4AI website." - ] - }, - { - "cell_type": "code", - "execution_count": 12, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "trCMR2T9yZQN", - "outputId": "718d36f4-cccf-40f4-8d8c-c3ba73524d16" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", - "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", - "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n", - "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n", - "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 7.00 seconds\n", - "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.32 seconds\n", - "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", - "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.48 seconds.\n", - "Successfully extracted 11 news teasers\n", - "{\n", - " \"category\": \"Business News\",\n", - " \"headline\": \"NBC ripped up its Olympics playbook for 2024 \\u2014 so far, the new strategy paid off\",\n", - " \"summary\": \"The Olympics have long been key to NBCUniversal. 
Paris marked the 18th Olympic Games broadcast by NBC in the U.S.\",\n", - " \"time\": \"13h ago\",\n", - " \"image\": {\n", - " \"src\": \"https://media-cldnry.s-nbcnews.com/image/upload/t_focal-200x100,f_auto,q_auto:best/rockcms/2024-09/240903-nbc-olympics-ch-1344-c7a486.jpg\",\n", - " \"alt\": \"Mike Tirico.\"\n", - " },\n", - " \"link\": \"https://www.nbcnews.com/business\"\n", - "}\n" - ] - } - ], - "source": [ - "async def extract_news_teasers():\n", - " schema = {\n", - " \"name\": \"News Teaser Extractor\",\n", - " \"baseSelector\": \".wide-tease-item__wrapper\",\n", - " \"fields\": [\n", - " {\n", - " \"name\": \"category\",\n", - " \"selector\": \".unibrow span[data-testid='unibrow-text']\",\n", - " \"type\": \"text\",\n", - " },\n", - " {\n", - " \"name\": \"headline\",\n", - " \"selector\": \".wide-tease-item__headline\",\n", - " \"type\": \"text\",\n", - " },\n", - " {\n", - " \"name\": \"summary\",\n", - " \"selector\": \".wide-tease-item__description\",\n", - " \"type\": \"text\",\n", - " },\n", - " {\n", - " \"name\": \"time\",\n", - " \"selector\": \"[data-testid='wide-tease-date']\",\n", - " \"type\": \"text\",\n", - " },\n", - " {\n", - " \"name\": \"image\",\n", - " \"type\": \"nested\",\n", - " \"selector\": \"picture.teasePicture img\",\n", - " \"fields\": [\n", - " {\"name\": \"src\", \"type\": \"attribute\", \"attribute\": \"src\"},\n", - " {\"name\": \"alt\", \"type\": \"attribute\", \"attribute\": \"alt\"},\n", - " ],\n", - " },\n", - " {\n", - " \"name\": \"link\",\n", - " \"selector\": \"a[href]\",\n", - " \"type\": \"attribute\",\n", - " \"attribute\": \"href\",\n", - " },\n", - " ],\n", - " }\n", - "\n", - " extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)\n", - "\n", - " async with AsyncWebCrawler(verbose=True) as crawler:\n", - " result = await crawler.arun(\n", - " url=\"https://www.nbcnews.com/business\",\n", - " extraction_strategy=extraction_strategy,\n", - " bypass_cache=True,\n", - " )\n", - "\n", - " assert result.success, \"Failed to crawl the page\"\n", - "\n", - " news_teasers = json.loads(result.extracted_content)\n", - " print(f\"Successfully extracted {len(news_teasers)} news teasers\")\n", - " print(json.dumps(news_teasers[0], indent=2))\n", - "\n", - "await extract_news_teasers()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "FnyVhJaByZQN" - }, - "source": [ - "## Speed Comparison\n", - "\n", - "Let's compare the speed of Crawl4AI with Firecrawl, a paid service. Note that we can't run Firecrawl in this Colab environment, so we'll simulate its performance based on previously recorded data." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "agDD186f3wig" - }, - "source": [ - "💡 **Note on Speed Comparison:**\n", - "\n", - "The speed test conducted here is running on Google Colab, where the internet speed and performance can vary and may not reflect optimal conditions. When we call Firecrawl's API, we're seeing its best performance, while Crawl4AI's performance is limited by Colab's network speed.\n", - "\n", - "For a more accurate comparison, it's recommended to run these tests on your own servers or computers with a stable and fast internet connection. Despite these limitations, Crawl4AI still demonstrates faster performance in this environment.\n", - "\n", - "If you run these tests locally, you may observe an even more significant speed advantage for Crawl4AI compared to other services." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "F7KwHv8G1LbY" - }, - "outputs": [], - "source": [ - "!pip install firecrawl" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "91813zILyZQN", - "outputId": "663223db-ab89-4976-b233-05ceca62b19b" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Firecrawl (simulated):\n", - "Time taken: 4.38 seconds\n", - "Content length: 41967 characters\n", - "Images found: 49\n", - "\n", - "Crawl4AI (simple crawl):\n", - "Time taken: 4.22 seconds\n", - "Content length: 18221 characters\n", - "Images found: 49\n", - "\n", - "Crawl4AI (with JavaScript execution):\n", - "Time taken: 9.13 seconds\n", - "Content length: 34243 characters\n", - "Images found: 89\n" - ] - } - ], - "source": [ - "import os\n", - "from google.colab import userdata\n", - "os.environ['FIRECRAWL_API_KEY'] = userdata.get('FIRECRAWL_API_KEY')\n", - "import time\n", - "from firecrawl import FirecrawlApp\n", - "\n", - "async def speed_comparison():\n", - " # Simulated Firecrawl performance\n", - " app = FirecrawlApp(api_key=os.environ['FIRECRAWL_API_KEY'])\n", - " start = time.time()\n", - " scrape_status = app.scrape_url(\n", - " 'https://www.nbcnews.com/business',\n", - " params={'formats': ['markdown', 'html']}\n", - " )\n", - " end = time.time()\n", - " print(\"Firecrawl (simulated):\")\n", - " print(f\"Time taken: {end - start:.2f} seconds\")\n", - " print(f\"Content length: {len(scrape_status['markdown'])} characters\")\n", - " print(f\"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}\")\n", - " print()\n", - "\n", - " async with AsyncWebCrawler() as crawler:\n", - " # Crawl4AI simple crawl\n", - " start = time.time()\n", - " result = await crawler.arun(\n", - " url=\"https://www.nbcnews.com/business\",\n", - " word_count_threshold=0,\n", - " bypass_cache=True,\n", - " verbose=False\n", - " )\n", - " end = time.time()\n", - " print(\"Crawl4AI (simple crawl):\")\n", - " print(f\"Time taken: {end - start:.2f} seconds\")\n", - " print(f\"Content length: {len(result.markdown)} characters\")\n", - " print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n", - " print()\n", - "\n", - " # Crawl4AI with JavaScript execution\n", - " start = time.time()\n", - " result = await crawler.arun(\n", - " url=\"https://www.nbcnews.com/business\",\n", - " js_code=[\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"],\n", - " word_count_threshold=0,\n", - " bypass_cache=True,\n", - " verbose=False\n", - " )\n", - " end = time.time()\n", - " print(\"Crawl4AI (with JavaScript execution):\")\n", - " print(f\"Time taken: {end - start:.2f} seconds\")\n", - " print(f\"Content length: {len(result.markdown)} characters\")\n", - " print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n", - "\n", - "await speed_comparison()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "OBFFYVJIyZQN" - }, - "source": [ - "If you run on a local machine with a proper internet speed:\n", - "- Simple crawl: Crawl4AI is typically over 3-4 times faster than Firecrawl.\n", - "- With JavaScript execution: Even when executing JavaScript to load more content (potentially doubling the number of images found), Crawl4AI is still faster than Firecrawl's simple crawl.\n", - "\n", 
- "Please note that actual performance may vary depending on network conditions and the specific content being crawled." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "A6_1RK1_yZQO" - }, - "source": [ - "## Conclusion\n", - "\n", - "In this notebook, we've explored the powerful features of Crawl4AI, including:\n", - "\n", - "1. Basic crawling\n", - "2. JavaScript execution and CSS selector usage\n", - "3. Proxy support\n", - "4. Structured data extraction with OpenAI\n", - "5. Advanced multi-page crawling with JavaScript execution\n", - "6. Fast structured output using JsonCssExtractionStrategy\n", - "7. Speed comparison with other services\n", - "\n", - "Crawl4AI offers a fast, flexible, and powerful solution for web crawling and data extraction tasks. Its asynchronous architecture and advanced features make it suitable for a wide range of applications, from simple web scraping to complex, multi-page data extraction scenarios.\n", - "\n", - "For more information and advanced usage, please visit the [Crawl4AI documentation](https://crawl4ai.com/mkdocs/).\n", - "\n", - "Happy crawling!" - ] - } - ], - "metadata": { - "colab": { - "provenance": [] - }, - "kernelspec": { - "display_name": "venv", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.13" - } + "cells": [ + { + "cell_type": "markdown", + "id": "0cba38e5", + "metadata": {}, + "source": [ + "# Crawl4AI 🕷️🤖\n", + "\"unclecode%2Fcrawl4ai\n", + "\n", + "[![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers)\n", + "![PyPI - Downloads](https://img.shields.io/pypi/dm/Crawl4AI)\n", + "[![GitHub Forks](https://img.shields.io/github/forks/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/network/members)\n", + "[![GitHub Issues](https://img.shields.io/github/issues/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/issues)\n", + "[![GitHub Pull Requests](https://img.shields.io/github/issues-pr/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/pulls)\n", + "[![License](https://img.shields.io/github/license/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/blob/main/LICENSE)\n", + "\n", + "Crawl4AI simplifies asynchronous web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. 🆓🌐\n", + "\n", + "- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n", + "- Twitter: [@unclecode](https://twitter.com/unclecode)\n", + "- Website: [https://crawl4ai.com](https://crawl4ai.com)\n", + "\n", + "## 🌟 Meet the Crawl4AI Assistant: Your Copilot for Crawling\n", + "Use the [Crawl4AI GPT Assistant](https://tinyurl.com/crawl4ai-gpt) as your AI-powered copilot! With this assistant, you can:\n", + "- 🧑‍💻 Generate code for complex crawling and extraction tasks\n", + "- 💡 Get tailored support and examples\n", + "- 📘 Learn Crawl4AI faster with step-by-step guidance" + ] }, - "nbformat": 4, - "nbformat_minor": 0 + { + "cell_type": "markdown", + "id": "41de6458", + "metadata": {}, + "source": [ + "### **Quickstart with Crawl4AI**" + ] + }, + { + "cell_type": "markdown", + "id": "1380e951", + "metadata": {}, + "source": [ + "#### 1. 
**Installation**\n", + "Install Crawl4AI and necessary dependencies:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "05fecfad", + "metadata": {}, + "outputs": [], + "source": [ + "# %%capture\n", + "!pip install crawl4ai\n", + "!pip install nest_asyncio\n", + "!playwright install " + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "2c2a74c8", + "metadata": {}, + "outputs": [], + "source": [ + "import asyncio\n", + "import nest_asyncio\n", + "nest_asyncio.apply()" + ] + }, + { + "cell_type": "markdown", + "id": "f3c558d7", + "metadata": {}, + "source": [ + "#### 2. **Basic Setup and Simple Crawl**" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "003376f3", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 1.49 seconds\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.10 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.10 seconds.\n", + "IE 11 is not supported. For an optimal experience visit our site on another browser.\n", + "\n", + "[Morning Rundown: Trump and Harris' vastly different closing pitches, why Kim Jong Un is helping Russia, and an ancient city is discovered by accident](https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973)[](https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973)\n", + "\n", + "Skip to Content\n", + "\n", + "[NBC News Logo](https://www.nbcnews.com)\n", + "\n", + "Spon\n" + ] + } + ], + "source": [ + "import asyncio\n", + "from crawl4ai import AsyncWebCrawler\n", + "\n", + "async def simple_crawl():\n", + " async with AsyncWebCrawler() as crawler:\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\",\n", + " bypass_cache=True # By default this is False, meaning the cache will be used\n", + " )\n", + " print(result.markdown[:500]) # Print the first 500 characters\n", + " \n", + "asyncio.run(simple_crawl())" + ] + }, + { + "cell_type": "markdown", + "id": "da9b4d50", + "metadata": {}, + "source": [ + "#### 3. **Dynamic Content Handling**" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "5bb8c1e4", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", + "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", + "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n", + "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 4.52 seconds\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.15 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.15 seconds.\n", + "IE 11 is not supported. 
For an optimal experience visit our site on another browser.\n", + "\n", + "[Morning Rundown: Trump and Harris' vastly different closing pitches, why Kim Jong Un is helping Russia, and an ancient city is discovered by accident](https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973)[](https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973)\n", + "\n", + "Skip to Content\n", + "\n", + "[NBC News Logo](https://www.nbcnews.com)\n", + "\n", + "Spon\n" + ] + } + ], + "source": [ + "async def crawl_dynamic_content():\n", + " # You can use wait_for to wait for a condition to be met before returning the result\n", + " # wait_for = \"\"\"() => {\n", + " # return Array.from(document.querySelectorAll('article.tease-card')).length > 10;\n", + " # }\"\"\"\n", + "\n", + " # wait_for can also be just a CSS selector\n", + " # wait_for = \"article.tease-card:nth-child(10)\"\n", + "\n", + " async with AsyncWebCrawler(verbose=True) as crawler:\n", + " js_code = [\n", + " \"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"\n", + " ]\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\",\n", + " js_code=js_code,\n", + " # wait_for=wait_for,\n", + " bypass_cache=True,\n", + " )\n", + " print(result.markdown[:500]) # Print first 500 characters\n", + "\n", + "asyncio.run(crawl_dynamic_content())" + ] + }, + { + "cell_type": "markdown", + "id": "86febd8d", + "metadata": {}, + "source": [ + "#### 4. **Content Cleaning and Fit Markdown**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8e8ab01f", + "metadata": {}, + "outputs": [], + "source": [ + "async def clean_content():\n", + " async with AsyncWebCrawler() as crawler:\n", + " result = await crawler.arun(\n", + " url=\"https://janineintheworld.com/places-to-visit-in-central-mexico\",\n", + " excluded_tags=['nav', 'footer', 'aside'],\n", + " remove_overlay_elements=True,\n", + " word_count_threshold=10,\n", + " bypass_cache=True\n", + " )\n", + " full_markdown_length = len(result.markdown)\n", + " fit_markdown_length = len(result.fit_markdown)\n", + " print(f\"Full Markdown Length: {full_markdown_length}\")\n", + " print(f\"Fit Markdown Length: {fit_markdown_length}\")\n", + " print(result.fit_markdown[:1000])\n", + " \n", + "\n", + "asyncio.run(clean_content())" + ] + }, + { + "cell_type": "markdown", + "id": "55715146", + "metadata": {}, + "source": [ + "#### 5. 
**Link Analysis and Smart Filtering**" + ] + }, + { + "cell_type": "code", + "execution_count": 23, + "id": "2ae47c69", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 0.93 seconds\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.11 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.11 seconds.\n", + "Found 107 internal links\n", + "Found 58 external links\n", + "Href: https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973\n", + "Text: Morning Rundown: Trump and Harris' vastly different closing pitches, why Kim Jong Un is helping Russia, and an ancient city is discovered by accident\n", + "\n", + "Href: https://www.nbcnews.com\n", + "Text: NBC News Logo\n", + "\n", + "Href: https://www.nbcnews.com/politics/2024-election/live-blog/kamala-harris-donald-trump-rally-election-live-updates-rcna177529\n", + "Text: 2024 Election\n", + "\n", + "Href: https://www.nbcnews.com/politics\n", + "Text: Politics\n", + "\n", + "Href: https://www.nbcnews.com/us-news\n", + "Text: U.S. News\n", + "\n" + ] + } + ], + "source": [ + "\n", + "async def link_analysis():\n", + " async with AsyncWebCrawler() as crawler:\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\",\n", + " bypass_cache=True,\n", + " exclude_external_links=True,\n", + " exclude_social_media_links=True,\n", + " # exclude_domains=[\"facebook.com\", \"twitter.com\"]\n", + " )\n", + " print(f\"Found {len(result.links['internal'])} internal links\")\n", + " print(f\"Found {len(result.links['external'])} external links\")\n", + "\n", + " for link in result.links['internal'][:5]:\n", + " print(f\"Href: {link['href']}\\nText: {link['text']}\\n\")\n", + " \n", + "\n", + "asyncio.run(link_analysis())" + ] + }, + { + "cell_type": "markdown", + "id": "80cceef3", + "metadata": {}, + "source": [ + "#### 6. 
**Media Handling**" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "id": "1fed7f99", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 1.42 seconds\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.11 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.12 seconds.\n", + "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-762x508,f_auto,q_auto:best/rockcms/2024-10/241023-NM-Chilccare-jg-27b982.jpg, Alt: , Score: 4\n", + "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-80x80,f_auto,q_auto:best/rockcms/2024-10/241030-china-ev-electric-mb-0746-cae05c.jpg, Alt: Volkswagen Workshop in Hefei, Score: 5\n", + "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-80x80,f_auto,q_auto:best/rockcms/2024-10/241029-nyc-subway-sandwich-2021-ac-922p-a92374.jpg, Alt: A sub is prepared at a Subway restaurant in Manhattan, New York City, Score: 5\n", + "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-80x80,f_auto,q_auto:best/rockcms/2024-10/241029-suv-gravity-ch-1618-752415.jpg, Alt: The Lucid Gravity car., Score: 5\n", + "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-80x80,f_auto,q_auto:best/rockcms/2024-10/241029-dearborn-michigan-f-150-ford-ranger-trucks-assembly-line-ac-426p-614f0b.jpg, Alt: Ford Introduces new F-150 And Ranger Trucks At Their Dearborn Plant, Score: 5\n" + ] + } + ], + "source": [ + "async def media_handling():\n", + " async with AsyncWebCrawler() as crawler:\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\", \n", + " bypass_cache=True,\n", + " exclude_external_images=False,\n", + " screenshot=True\n", + " )\n", + " for img in result.media['images'][:5]:\n", + " print(f\"Image URL: {img['src']}, Alt: {img['alt']}, Score: {img['score']}\")\n", + " \n", + "asyncio.run(media_handling())" + ] + }, + { + "cell_type": "markdown", + "id": "9290499a", + "metadata": {}, + "source": [ + "#### 7. **Using Hooks for Custom Workflow**" + ] + }, + { + "cell_type": "markdown", + "id": "9d069c2b", + "metadata": {}, + "source": [ + "Hooks in Crawl4AI allow you to run custom logic at specific stages of the crawling process. This can be invaluable for scenarios like setting custom headers, logging activities, or processing content before it is returned. Below is an example of a basic workflow using a hook, followed by a complete list of available hooks and explanations on their usage." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 27, + "id": "bc4d2fc8", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[Hook] Preparing to navigate...\n", + "[LOG] 🚀 Crawling done for https://crawl4ai.com, success: True, time taken: 3.49 seconds\n", + "[LOG] 🚀 Content extracted for https://crawl4ai.com, success: True, time taken: 0.03 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://crawl4ai.com, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://crawl4ai.com, time taken: 0.03 seconds.\n", + "[Crawl4AI Documentation](https://docs.crawl4ai.com/)\n", + "\n", + " * [ Home ](.)\n", + " * [ Installation ](basic/installation/)\n", + " * [ Quick Start ](basic/quickstart/)\n", + " * [ Search ](#)\n", + "\n", + "\n", + "\n", + " * Home\n", + " * [Installation](basic/installation/)\n", + " * [Quick Start](basic/quickstart/)\n", + " * Basic\n", + " * [Simple Crawling](basic/simple-crawling/)\n", + " * [Output Formats](basic/output-formats/)\n", + " * [Browser Configuration](basic/browser-config/)\n", + " * [Page Interaction](basic/page-interaction/)\n", + " * [Content Selection](basic/con\n" + ] + } + ], + "source": [ + "async def custom_hook_workflow():\n", + " async with AsyncWebCrawler() as crawler:\n", + " # Set a 'before_goto' hook to run custom code just before navigation\n", + " crawler.crawler_strategy.set_hook(\"before_goto\", lambda page: print(\"[Hook] Preparing to navigate...\"))\n", + " \n", + " # Perform the crawl operation\n", + " result = await crawler.arun(\n", + " url=\"https://crawl4ai.com\",\n", + " bypass_cache=True\n", + " )\n", + " print(result.markdown[:500]) # Display the first 500 characters\n", + "\n", + "asyncio.run(custom_hook_workflow())" + ] + }, + { + "cell_type": "markdown", + "id": "3ff45e21", + "metadata": {}, + "source": [ + "List of available hooks and examples for each stage of the crawling process:\n", + "\n", + "- **on_browser_created**\n", + " ```python\n", + " async def on_browser_created_hook(browser):\n", + " print(\"[Hook] Browser created\")\n", + " ```\n", + "\n", + "- **before_goto**\n", + " ```python\n", + " async def before_goto_hook(page):\n", + " await page.set_extra_http_headers({\"X-Test-Header\": \"test\"})\n", + " ```\n", + "\n", + "- **after_goto**\n", + " ```python\n", + " async def after_goto_hook(page):\n", + " print(f\"[Hook] Navigated to {page.url}\")\n", + " ```\n", + "\n", + "- **on_execution_started**\n", + " ```python\n", + " async def on_execution_started_hook(page):\n", + " print(\"[Hook] JavaScript execution started\")\n", + " ```\n", + "\n", + "- **before_return_html**\n", + " ```python\n", + " async def before_return_html_hook(page, html):\n", + " print(f\"[Hook] HTML length: {len(html)}\")\n", + " ```" + ] + }, + { + "cell_type": "markdown", + "id": "2d56ebb1", + "metadata": {}, + "source": [ + "#### 8. **Session-Based Crawling**\n", + "\n", + "When to Use Session-Based Crawling: \n", + "Session-based crawling is especially beneficial when navigating through multi-page content where each page load needs to maintain the same session context. For instance, in cases where a “Next Page” button must be clicked to load subsequent data, the new data often replaces the previous content. 
Here, session-based crawling keeps the browser state intact across each interaction, allowing for sequential actions within the same session.\n", + "\n", + "Example: Multi-Page Navigation Using JavaScript\n", + "In this example, we’ll navigate through multiple pages by clicking a \"Next Page\" button. After each page load, we extract the new content and repeat the process." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e7bfebae", + "metadata": {}, + "outputs": [], + "source": [ + "async def multi_page_session_crawl():\n", + " async with AsyncWebCrawler() as crawler:\n", + " session_id = \"page_navigation_session\"\n", + " url = \"https://example.com/paged-content\"\n", + "\n", + " for page_number in range(1, 4):\n", + " result = await crawler.arun(\n", + " url=url,\n", + " session_id=session_id,\n", + " js_code=\"document.querySelector('.next-page-button').click();\" if page_number > 1 else None,\n", + " css_selector=\".content-section\",\n", + " bypass_cache=True\n", + " )\n", + " print(f\"Page {page_number} Content:\")\n", + " print(result.markdown[:500]) # Print first 500 characters\n", + "\n", + "# asyncio.run(multi_page_session_crawl())" + ] + }, + { + "cell_type": "markdown", + "id": "ad32a778", + "metadata": {}, + "source": [ + "#### 9. **Using Extraction Strategies**\n", + "\n", + "**LLM Extraction**\n", + "\n", + "This example demonstrates how to use language model-based extraction to retrieve structured data from a pricing page on OpenAI’s site." + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "id": "3011a7c5", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "--- Extracting Structured Data with openai/gpt-4o-mini ---\n", + "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", + "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", + "[LOG] 🕸️ Crawling https://openai.com/api/pricing/ using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://openai.com/api/pricing/ successfully!\n", + "[LOG] 🚀 Crawling done for https://openai.com/api/pricing/, success: True, time taken: 1.29 seconds\n", + "[LOG] 🚀 Content extracted for https://openai.com/api/pricing/, success: True, time taken: 0.13 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://openai.com/api/pricing/, Strategy: AsyncWebCrawler\n", + "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 0\n", + "[LOG] Extracted 26 blocks from URL: https://openai.com/api/pricing/ block index: 0\n", + "[LOG] 🚀 Extraction done for https://openai.com/api/pricing/, time taken: 15.12 seconds.\n", + "[{'model_name': 'gpt-4o', 'input_fee': '$2.50 / 1M input tokens', 'output_fee': '$10.00 / 1M output tokens', 'error': False}, {'model_name': 'gpt-4o-2024-08-06', 'input_fee': '$2.50 / 1M input tokens', 'output_fee': '$10.00 / 1M output tokens', 'error': False}, {'model_name': 'gpt-4o-audio-preview', 'input_fee': '$2.50 / 1M input tokens', 'output_fee': '$10.00 / 1M output tokens', 'error': False}, {'model_name': 'gpt-4o-audio-preview-2024-10-01', 'input_fee': '$2.50 / 1M input tokens', 'output_fee': '$10.00 / 1M output tokens', 'error': False}, {'model_name': 'gpt-4o-2024-05-13', 'input_fee': '$5.00 / 1M input tokens', 'output_fee': '$15.00 / 1M output tokens', 'error': False}]\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/Users/unclecode/devs/crawl4ai/venv/lib/python3.10/site-packages/pydantic/main.py:347: UserWarning: Pydantic serializer warnings:\n", + " Expected `PromptTokensDetails` but got `dict` - 
serialized value may not be as expected\n", + " return self.__pydantic_serializer__.to_python(\n" ] } ], "source": [ "from crawl4ai.extraction_strategy import LLMExtractionStrategy\n", "from pydantic import BaseModel, Field\n", "import os, json\n", "\n", "class OpenAIModelFee(BaseModel):\n", " model_name: str = Field(..., description=\"Name of the OpenAI model.\")\n", " input_fee: str = Field(..., description=\"Fee for input token for the OpenAI model.\")\n", " output_fee: str = Field(\n", " ..., description=\"Fee for output token for the OpenAI model.\"\n", " )\n", "\n", "async def extract_structured_data_using_llm(provider: str, api_token: str = None, extra_headers: dict = None):\n", " print(f\"\\n--- Extracting Structured Data with {provider} ---\")\n", " \n", " # Skip if API token is missing (for providers that require it)\n", " if api_token is None and provider != \"ollama\":\n", " print(f\"API token is required for {provider}. Skipping this example.\")\n", " return\n", "\n", " extra_args = {\"extra_headers\": extra_headers} if extra_headers else {}\n", "\n", " async with AsyncWebCrawler(verbose=True) as crawler:\n", " result = await crawler.arun(\n", " url=\"https://openai.com/api/pricing/\",\n", " word_count_threshold=1,\n", " extraction_strategy=LLMExtractionStrategy(\n", " provider=provider,\n", " api_token=api_token,\n", " schema=OpenAIModelFee.schema(),\n", " extraction_type=\"schema\",\n", " instruction=\"\"\"Extract all model names along with their fees for input and output tokens. One extracted model JSON format should look like this:\n", " {model_name: 'GPT-4', input_fee: 'US$10.00 / 1M tokens', output_fee: 'US$30.00 / 1M tokens'}.\"\"\",\n", " **extra_args\n", " ),\n", " bypass_cache=True,\n", " )\n", " print(json.loads(result.extracted_content)[:5])\n", "\n", "# Usage:\n", "await extract_structured_data_using_llm(\"openai/gpt-4o-mini\", os.getenv(\"OPENAI_API_KEY\"))" ] }, { "cell_type": "markdown", "id": "6532db9d", "metadata": {}, "source": [ "**Cosine Similarity Strategy**\n", "\n", "This strategy uses semantic clustering to extract relevant content based on contextual similarity, which is helpful when extracting related sections from a single topic."
+ ] + }, + { + "cell_type": "code", + "execution_count": 32, + "id": "ec079108", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] Loading Extraction Model for mps device.\n", + "[LOG] Loading Multilabel Classifier for mps device.\n", + "[LOG] Model loaded sentence-transformers/all-MiniLM-L6-v2, models/reuters, took 5.193778038024902 seconds\n", + "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156, success: True, time taken: 1.37 seconds\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156, success: True, time taken: 0.07 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Assign tags using mps\n", + "[LOG] 🚀 Categorization done in 0.55 seconds\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156, time taken: 6.63 seconds.\n", + "[{'index': 1, 'tags': ['news_&_social_concern'], 'content': \"McDonald's 2024 combo: Inflation, a health crisis and a side of politics # McDonald's 2024 combo: Inflation, a health crisis and a side of politics\"}, {'index': 2, 'tags': ['business_&_entrepreneurs', 'news_&_social_concern'], 'content': 'Like many major brands, McDonald’s raked in big profits as the economy reopened from the pandemic. In October 2022, [executives were boasting](https://www.cnbc.com/2022/10/27/mcdonalds-mcd-earnings-q3-2022.html) that they’d been raising prices without crimping traffic, even as competitors began to warn that some customers were closing their wallets after inflation peaked above 9% that summer. Still, the U.S. had repeatedly dodged a much-forecast recession, and [Americans kept spending on nonessentials](https://www.nbcnews.com/business/economy/year-peak-inflation-travel-leisure-mostly-cost-less-rcna92760) like travel and dining out — despite regularly relaying to pollsters their dismal views of an otherwise solid economy. Even so, 64% of consumers said they noticed price increases at quick-service restaurants in September, more than at any other type of venue, according to a survey by Datassential, a food and beverage market researcher. Politicians are still drawing attention to fast-food costs, too, as the election season barrels toward a tumultuous finish. A group of Democratic senators this month [denounced McDonald’s for menu prices](https://www.nbcnews.com/news/us-news/democratic-senators-slam-mcdonalds-menu-price-hikes-rcna176380) that they said outstripped inflation, accusing the company of looking to profit “at the expense of people’s ability to put food on the table.” The financial results come toward the end of a humbling year for the nearly $213 billion restaurant chain, whose shares remained steady on the heels of its latest earnings. Kempczinski [sought to reassure investors](https://www.cnbc.com/2024/10/29/mcdonalds-e-coli-outbreak-ceo-comments.html) that [the E. coli outbreak](https://www.nbcnews.com/health/health-news/illnesses-linked-mcdonalds-e-coli-outbreak-rise-75-cdc-says-rcna177260), linked to Quarter Pounder burgers, was under control after the health crisis temporarily dented the company’s stock and caused U.S. 
foot traffic to drop nearly 10% in the days afterward, according to estimates by Gordon Haskett financial researchers. The fast-food giant [reported Tuesday](https://www.cnbc.com/2024/10/29/mcdonalds-mcd-earnings-q3-2024.html) that it had reversed its recent U.S. sales drop, posting a 0.3% uptick in the third quarter. Foot traffic was still down slightly, but the company said its summer of discounts was paying off. But by early this year, [photos of eye-watering menu prices](https://x.com/sam_learner/status/1681367351143301129) at some McDonald’s locations — including an $18 Big Mac combo at a Connecticut rest stop from July 2023 — went viral, bringing diners’ long-simmering frustrations to a boiling point that the company couldn’t ignore. On an earnings call in April, Kempczinski acknowledged that foot traffic had fallen. “We will stay laser-focused on providing an unparalleled experience with simple, everyday value and affordability that our consumers can count on as they continue to be mindful about their spending,” CEO Chris Kempczinski [said in a statement](https://www.prnewswire.com/news-releases/mcdonalds-reports-third-quarter-2024-results-302289216.html?Fds-Load-Behavior=force-external) alongside the earnings report.'}, {'index': 3, 'tags': ['food_&_dining', 'news_&_social_concern'], 'content': '![mcdonalds drive-thru economy fast food](https://media-cldnry.s-nbcnews.com/image/upload/t_fit-760w,f_auto,q_auto:best/rockcms/2024-10/241024-los-angeles-mcdonalds-drive-thru-ac-1059p-cfc311.jpg)McDonald’s has had some success leaning into discounts this year. Eric Thayer / Bloomberg via Getty Images file'}, {'index': 4, 'tags': ['business_&_entrepreneurs', 'food_&_dining', 'news_&_social_concern'], 'content': 'McDonald’s has faced a customer revolt over pricey Big Macs, an unsolicited cameo in election-season crossfire, and now an E. coli outbreak — just as the company had been luring customers back with more affordable burgers. Despite a difficult quarter, McDonald’s looks resilient in the face of various pressures, analysts say — something the company shares with U.S. consumers overall. “Consumers continue to be even more discriminating with every dollar that they spend,” he said at the time. Going forward, McDonald’s would be “laser-focused” on affordability. “McDonald’s has also done a good job of embedding the brand in popular culture to enhance its relevance and meaning around fun and family. But it also needed to modify the product line to meet the expectations of a consumer who is on a tight budget,” he said. “The thing that McDonald’s had struggled with, and why I think we’re seeing kind of an inflection point, is a value proposition,” Senatore said. “McDonald’s menu price increases had run ahead of a lot of its restaurant peers. … Consumers are savvy enough to know that.” For many consumers, the fast-food giant’s menus serve as an informal gauge of the economy overall, said Sara Senatore, a Bank of America analyst covering restaurants. “The spotlight is always on McDonald’s because it’s so big” and something of a “bellwether,” she said. McDonald’s didn’t respond to requests for comment.'}, {'index': 5, 'tags': ['business_&_entrepreneurs', 'food_&_dining'], 'content': 'Mickey D’s’ $5 meal deal, which it launched in late June to jumpstart slumping sales, has given the company an appealing price point to advertise nationwide, Senatore said, speculating that it could open the door to a new permanent value offering. 
But before that promotion rolled out, the company’s reputation as a low-cost option had taken a bruising hit.'}]\n" + ] + } + ], + "source": [ + "from crawl4ai.extraction_strategy import CosineStrategy\n", + "\n", + "async def cosine_similarity_extraction():\n", + " async with AsyncWebCrawler() as crawler:\n", + " strategy = CosineStrategy(\n", + " word_count_threshold=10,\n", + " max_dist=0.2, # Maximum distance between two words\n", + " linkage_method=\"ward\", # Linkage method for hierarchical clustering (ward, complete, average, single)\n", + " top_k=3, # Number of top keywords to extract\n", + " sim_threshold=0.3, # Similarity threshold for clustering\n", + " semantic_filter=\"McDonald's economic impact, American consumer trends\", # Keywords to filter the content semantically using embeddings\n", + " verbose=True\n", + " )\n", + " \n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156\",\n", + " extraction_strategy=strategy\n", + " )\n", + " print(json.loads(result.extracted_content)[:5])\n", + "\n", + "asyncio.run(cosine_similarity_extraction())\n" + ] + }, + { + "cell_type": "markdown", + "id": "ff423629", + "metadata": {}, + "source": [ + "#### 10. **Conclusion and Next Steps**\n", + "\n", + "You’ve explored core features of Crawl4AI, including dynamic content handling, link analysis, and advanced extraction strategies. Visit our documentation for further details on using Crawl4AI’s extensive features.\n", + "\n", + "- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n", + "- Twitter: [@unclecode](https://twitter.com/unclecode)\n", + "- Website: [https://crawl4ai.com](https://crawl4ai.com)\n", + "\n", + "Happy Crawling with Crawl4AI! 🕷️🤖\n" + ] + }, + { + "cell_type": "markdown", + "id": "d34c1d35", + "metadata": {}, + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.13" + } + }, + "nbformat": 4, + "nbformat_minor": 5 } diff --git a/docs/examples/quickstart_v0.ipynb b/docs/examples/quickstart_v0.ipynb new file mode 100644 index 00000000..71f23acb --- /dev/null +++ b/docs/examples/quickstart_v0.ipynb @@ -0,0 +1,735 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "6yLvrXn7yZQI" + }, + "source": [ + "# Crawl4AI: Advanced Web Crawling and Data Extraction\n", + "\n", + "Welcome to this interactive notebook showcasing Crawl4AI, an advanced asynchronous web crawling and data extraction library.\n", + "\n", + "- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n", + "- Twitter: [@unclecode](https://twitter.com/unclecode)\n", + "- Website: [https://crawl4ai.com](https://crawl4ai.com)\n", + "\n", + "Let's explore the powerful features of Crawl4AI!" 
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "KIn_9nxFyZQK" + }, + "source": [ + "## Installation\n", + "\n", + "First, let's install Crawl4AI from GitHub:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "mSnaxLf3zMog" + }, + "outputs": [], + "source": [ + "!sudo apt-get update && sudo apt-get install -y libwoff1 libopus0 libwebp6 libwebpdemux2 libenchant1c2a libgudev-1.0-0 libsecret-1-0 libhyphen0 libgdk-pixbuf2.0-0 libegl1 libnotify4 libxslt1.1 libevent-2.1-7 libgles2 libvpx6 libxcomposite1 libatk1.0-0 libatk-bridge2.0-0 libepoxy0 libgtk-3-0 libharfbuzz-icu0" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "xlXqaRtayZQK" + }, + "outputs": [], + "source": [ + "!pip install crawl4ai\n", + "!pip install nest-asyncio\n", + "!playwright install" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "qKCE7TI7yZQL" + }, + "source": [ + "Now, let's import the necessary libraries:" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "id": "I67tr7aAyZQL" + }, + "outputs": [], + "source": [ + "import asyncio\n", + "import nest_asyncio\n", + "from crawl4ai import AsyncWebCrawler\n", + "from crawl4ai.extraction_strategy import JsonCssExtractionStrategy, LLMExtractionStrategy\n", + "import json\n", + "import time\n", + "from pydantic import BaseModel, Field\n", + "\n", + "nest_asyncio.apply()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "h7yR_Rt_yZQM" + }, + "source": [ + "## Basic Usage\n", + "\n", + "Let's start with a simple crawl example:" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "yBh6hf4WyZQM", + "outputId": "0f83af5c-abba-4175-ed95-70b7512e6bcc" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", + "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.05 seconds\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.05 seconds.\n", + "18102\n" + ] + } + ], + "source": [ + "async def simple_crawl():\n", + " async with AsyncWebCrawler(verbose=True) as crawler:\n", + " result = await crawler.arun(url=\"https://www.nbcnews.com/business\")\n", + " print(len(result.markdown))\n", + "await simple_crawl()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "9rtkgHI28uI4" + }, + "source": [ + "💡 By default, **Crawl4AI** caches the result of every URL, so the next time you call it, you’ll get an instant result. But if you want to bypass the cache, just set `bypass_cache=True`." 
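To make the caching note above concrete, here is a minimal sketch using only the `AsyncWebCrawler.arun()` API shown in this notebook (the URL is just the example page used throughout): the second call is served from the local cache, and `bypass_cache=True` forces a fresh fetch.

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def cache_demo():
    async with AsyncWebCrawler(verbose=True) as crawler:
        url = "https://www.nbcnews.com/business"
        first = await crawler.arun(url=url)                     # fetched live, then cached
        cached = await crawler.arun(url=url)                    # served from the cache, near-instant
        fresh = await crawler.arun(url=url, bypass_cache=True)  # cache skipped, fetched again
        print(len(first.markdown), len(cached.markdown), len(fresh.markdown))

asyncio.run(cache_demo())
```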
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "MzZ0zlJ9yZQM" + }, + "source": [ + "## Advanced Features\n", + "\n", + "### Executing JavaScript and Using CSS Selectors" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "gHStF86xyZQM", + "outputId": "34d0fb6d-4dec-4677-f76e-85a1f082829b" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", + "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", + "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n", + "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 6.06 seconds\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.10 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.11 seconds.\n", + "41135\n" + ] + } + ], + "source": [ + "async def js_and_css():\n", + " async with AsyncWebCrawler(verbose=True) as crawler:\n", + " js_code = [\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"]\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\",\n", + " js_code=js_code,\n", + " # css_selector=\"YOUR_CSS_SELECTOR_HERE\",\n", + " bypass_cache=True\n", + " )\n", + " print(len(result.markdown))\n", + "\n", + "await js_and_css()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "cqE_W4coyZQM" + }, + "source": [ + "### Using a Proxy\n", + "\n", + "Note: You'll need to replace the proxy URL with a working proxy for this example to run successfully." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "QjAyiAGqyZQM" + }, + "outputs": [], + "source": [ + "async def use_proxy():\n", + " async with AsyncWebCrawler(verbose=True, proxy=\"http://your-proxy-url:port\") as crawler:\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\",\n", + " bypass_cache=True\n", + " )\n", + " print(result.markdown[:500]) # Print first 500 characters\n", + "\n", + "# Uncomment the following line to run the proxy example\n", + "# await use_proxy()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "XTZ88lbayZQN" + }, + "source": [ + "### Extracting Structured Data with OpenAI\n", + "\n", + "Note: You'll need to set your OpenAI API key as an environment variable for this example to work." 
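For readers not on Colab (where `google.colab.userdata` is unavailable), a hedged sketch of supplying the key via the environment instead; the key value below is a placeholder, not a real credential.

```python
import os

# Prefer exporting the key in your shell before starting Python:
#   export OPENAI_API_KEY="sk-..."
# For quick local experiments you can set it for the current process only:
os.environ.setdefault("OPENAI_API_KEY", "sk-your-key-here")  # placeholder value
```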
+ ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "fIOlDayYyZQN", + "outputId": "cb8359cc-dee0-4762-9698-5dfdcee055b8" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", + "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", + "[LOG] 🕸️ Crawling https://openai.com/api/pricing/ using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://openai.com/api/pricing/ successfully!\n", + "[LOG] 🚀 Crawling done for https://openai.com/api/pricing/, success: True, time taken: 3.77 seconds\n", + "[LOG] 🚀 Content extracted for https://openai.com/api/pricing/, success: True, time taken: 0.21 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://openai.com/api/pricing/, Strategy: AsyncWebCrawler\n", + "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 0\n", + "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 1\n", + "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 2\n", + "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 3\n", + "[LOG] Extracted 4 blocks from URL: https://openai.com/api/pricing/ block index: 3\n", + "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 4\n", + "[LOG] Extracted 5 blocks from URL: https://openai.com/api/pricing/ block index: 0\n", + "[LOG] Extracted 1 blocks from URL: https://openai.com/api/pricing/ block index: 4\n", + "[LOG] Extracted 8 blocks from URL: https://openai.com/api/pricing/ block index: 1\n", + "[LOG] Extracted 12 blocks from URL: https://openai.com/api/pricing/ block index: 2\n", + "[LOG] 🚀 Extraction done for https://openai.com/api/pricing/, time taken: 8.55 seconds.\n", + "5029\n" + ] + } + ], + "source": [ + "import os\n", + "from google.colab import userdata\n", + "os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')\n", + "\n", + "class OpenAIModelFee(BaseModel):\n", + " model_name: str = Field(..., description=\"Name of the OpenAI model.\")\n", + " input_fee: str = Field(..., description=\"Fee for input token for the OpenAI model.\")\n", + " output_fee: str = Field(..., description=\"Fee for output token for the OpenAI model.\")\n", + "\n", + "async def extract_openai_fees():\n", + " async with AsyncWebCrawler(verbose=True) as crawler:\n", + " result = await crawler.arun(\n", + " url='https://openai.com/api/pricing/',\n", + " word_count_threshold=1,\n", + " extraction_strategy=LLMExtractionStrategy(\n", + " provider=\"openai/gpt-4o\", api_token=os.getenv('OPENAI_API_KEY'),\n", + " schema=OpenAIModelFee.schema(),\n", + " extraction_type=\"schema\",\n", + " instruction=\"\"\"From the crawled content, extract all mentioned model names along with their fees for input and output tokens.\n", + " Do not miss any models in the entire content. 
One extracted model JSON format should look like this:\n", +                    {\"model_name\": \"GPT-4\", \"input_fee\": \"US$10.00 / 1M tokens\", \"output_fee\": \"US$30.00 / 1M tokens\"}.\"\"\"\n", +                ),\n", +                bypass_cache=True,\n", +            )\n", +            print(len(result.extracted_content))\n", +    "\n", +    "# Run the OpenAI extraction example\n", +    "await extract_openai_fees()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "BypA5YxEyZQN" + }, + "source": [ + "### Advanced Multi-Page Crawling with JavaScript Execution" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "tfkcVQ0b7mw-" + }, + "source": [ + "This example demonstrates Crawl4AI's ability to handle complex crawling scenarios, specifically extracting commits from multiple pages of a GitHub repository. The challenge here is that clicking the \"Next\" button doesn't load a new page, but instead uses asynchronous JavaScript to update the content. This is a common hurdle in modern web crawling.\n", + "\n", + "To overcome this, we use Crawl4AI's custom JavaScript execution to simulate clicking the \"Next\" button, and implement a custom hook to detect when new data has loaded. Our strategy involves comparing the first commit's text before and after \"clicking\" Next, waiting until it changes to confirm new data has rendered. This showcases Crawl4AI's flexibility in handling dynamic content and its ability to implement custom logic for even the most challenging crawling tasks." + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "qUBKGpn3yZQN", + "outputId": "3e555b6a-ed33-42f4-cce9-499a923fbe17" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", + "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", + "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n", + "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 5.16 seconds\n", + "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.28 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.28 seconds.\n", + "Page 1: Found 35 commits\n", + "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n", + "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.78 seconds\n", + "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.90 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.90 seconds.\n", + "Page 2: Found 35 commits\n", + "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using 
AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n", + "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 2.00 seconds\n", + "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.74 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.75 seconds.\n", + "Page 3: Found 35 commits\n", + "Successfully crawled 105 commits across 3 pages\n" + ] + } + ], + "source": [ + "import re\n", + "from bs4 import BeautifulSoup\n", + "\n", + "async def crawl_typescript_commits():\n", + " first_commit = \"\"\n", + " async def on_execution_started(page):\n", + " nonlocal first_commit\n", + " try:\n", + " while True:\n", + " await page.wait_for_selector('li.Box-sc-g0xbh4-0 h4')\n", + " commit = await page.query_selector('li.Box-sc-g0xbh4-0 h4')\n", + " commit = await commit.evaluate('(element) => element.textContent')\n", + " commit = re.sub(r'\\s+', '', commit)\n", + " if commit and commit != first_commit:\n", + " first_commit = commit\n", + " break\n", + " await asyncio.sleep(0.5)\n", + " except Exception as e:\n", + " print(f\"Warning: New content didn't appear after JavaScript execution: {e}\")\n", + "\n", + " async with AsyncWebCrawler(verbose=True) as crawler:\n", + " crawler.crawler_strategy.set_hook('on_execution_started', on_execution_started)\n", + "\n", + " url = \"https://github.com/microsoft/TypeScript/commits/main\"\n", + " session_id = \"typescript_commits_session\"\n", + " all_commits = []\n", + "\n", + " js_next_page = \"\"\"\n", + " const button = document.querySelector('a[data-testid=\"pagination-next-button\"]');\n", + " if (button) button.click();\n", + " \"\"\"\n", + "\n", + " for page in range(3): # Crawl 3 pages\n", + " result = await crawler.arun(\n", + " url=url,\n", + " session_id=session_id,\n", + " css_selector=\"li.Box-sc-g0xbh4-0\",\n", + " js=js_next_page if page > 0 else None,\n", + " bypass_cache=True,\n", + " js_only=page > 0\n", + " )\n", + "\n", + " assert result.success, f\"Failed to crawl page {page + 1}\"\n", + "\n", + " soup = BeautifulSoup(result.cleaned_html, 'html.parser')\n", + " commits = soup.select(\"li\")\n", + " all_commits.extend(commits)\n", + "\n", + " print(f\"Page {page + 1}: Found {len(commits)} commits\")\n", + "\n", + " await crawler.crawler_strategy.kill_session(session_id)\n", + " print(f\"Successfully crawled {len(all_commits)} commits across 3 pages\")\n", + "\n", + "await crawl_typescript_commits()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "EJRnYsp6yZQN" + }, + "source": [ + "### Using JsonCssExtractionStrategy for Fast Structured Output" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "1ZMqIzB_8SYp" + }, + "source": [ + "The JsonCssExtractionStrategy is a powerful feature of Crawl4AI that allows for precise, structured data extraction from web pages. Here's how it works:\n", + "\n", + "1. You define a schema that describes the pattern of data you're interested in extracting.\n", + "2. The schema includes a base selector that identifies repeating elements on the page.\n", + "3. Within the schema, you define fields, each with its own selector and type.\n", + "4. 
These field selectors are applied within the context of each base selector element.\n", + "5. The strategy supports nested structures, lists within lists, and various data types.\n", + "6. You can even include computed fields for more complex data manipulation.\n", + "\n", + "This approach allows for highly flexible and precise data extraction, transforming semi-structured web content into clean, structured JSON data. It's particularly useful for extracting consistent data patterns from pages like product listings, news articles, or search results.\n", + "\n", + "For more details and advanced usage, check out the full documentation on the Crawl4AI website." + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "trCMR2T9yZQN", + "outputId": "718d36f4-cccf-40f4-8d8c-c3ba73524d16" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", + "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", + "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n", + "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 7.00 seconds\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.32 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.48 seconds.\n", + "Successfully extracted 11 news teasers\n", + "{\n", + " \"category\": \"Business News\",\n", + " \"headline\": \"NBC ripped up its Olympics playbook for 2024 \\u2014 so far, the new strategy paid off\",\n", + " \"summary\": \"The Olympics have long been key to NBCUniversal. 
Paris marked the 18th Olympic Games broadcast by NBC in the U.S.\",\n", +    "  \"time\": \"13h ago\",\n", +    "  \"image\": {\n", +    "    \"src\": \"https://media-cldnry.s-nbcnews.com/image/upload/t_focal-200x100,f_auto,q_auto:best/rockcms/2024-09/240903-nbc-olympics-ch-1344-c7a486.jpg\",\n", +    "    \"alt\": \"Mike Tirico.\"\n", +    "  },\n", +    "  \"link\": \"https://www.nbcnews.com/business\"\n", +    "}\n" + ] + } + ], + "source": [ + "async def extract_news_teasers():\n", + "    schema = {\n", + "        \"name\": \"News Teaser Extractor\",\n", + "        \"baseSelector\": \".wide-tease-item__wrapper\",\n", + "        \"fields\": [\n", + "            {\n", + "                \"name\": \"category\",\n", + "                \"selector\": \".unibrow span[data-testid='unibrow-text']\",\n", + "                \"type\": \"text\",\n", + "            },\n", + "            {\n", + "                \"name\": \"headline\",\n", + "                \"selector\": \".wide-tease-item__headline\",\n", + "                \"type\": \"text\",\n", + "            },\n", + "            {\n", + "                \"name\": \"summary\",\n", + "                \"selector\": \".wide-tease-item__description\",\n", + "                \"type\": \"text\",\n", + "            },\n", + "            {\n", + "                \"name\": \"time\",\n", + "                \"selector\": \"[data-testid='wide-tease-date']\",\n", + "                \"type\": \"text\",\n", + "            },\n", + "            {\n", + "                \"name\": \"image\",\n", + "                \"type\": \"nested\",\n", + "                \"selector\": \"picture.teasePicture img\",\n", + "                \"fields\": [\n", + "                    {\"name\": \"src\", \"type\": \"attribute\", \"attribute\": \"src\"},\n", + "                    {\"name\": \"alt\", \"type\": \"attribute\", \"attribute\": \"alt\"},\n", + "                ],\n", + "            },\n", + "            {\n", + "                \"name\": \"link\",\n", + "                \"selector\": \"a[href]\",\n", + "                \"type\": \"attribute\",\n", + "                \"attribute\": \"href\",\n", + "            },\n", + "        ],\n", + "    }\n", + "\n", + "    extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)\n", + "\n", + "    async with AsyncWebCrawler(verbose=True) as crawler:\n", + "        result = await crawler.arun(\n", + "            url=\"https://www.nbcnews.com/business\",\n", + "            extraction_strategy=extraction_strategy,\n", + "            bypass_cache=True,\n", + "        )\n", + "\n", + "        assert result.success, \"Failed to crawl the page\"\n", + "\n", + "        news_teasers = json.loads(result.extracted_content)\n", + "        print(f\"Successfully extracted {len(news_teasers)} news teasers\")\n", + "        print(json.dumps(news_teasers[0], indent=2))\n", + "\n", + "await extract_news_teasers()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "FnyVhJaByZQN" + }, + "source": [ + "## Speed Comparison\n", + "\n", + "Let's compare the speed of Crawl4AI with Firecrawl, a paid service. Firecrawl is called here through its hosted API (you'll need a `FIRECRAWL_API_KEY`), so its timings reflect the remote service, while Crawl4AI runs inside this Colab environment." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "agDD186f3wig" + }, + "source": [ + "💡 **Note on Speed Comparison:**\n", + "\n", + "The speed test conducted here is running on Google Colab, where the internet speed and performance can vary and may not reflect optimal conditions. When we call Firecrawl's API, we're seeing its best performance, while Crawl4AI's performance is limited by Colab's network speed.\n", + "\n", + "For a more accurate comparison, it's recommended to run these tests on your own servers or computers with a stable and fast internet connection. Despite these limitations, Crawl4AI still demonstrates faster performance in this environment.\n", + "\n", + "If you run these tests locally, you may observe an even more significant speed advantage for Crawl4AI compared to other services." 
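If you benchmark locally, a single timed run can be skewed by one slow network round-trip. A small sketch along these lines (same `arun()` parameters as the comparison cell that follows) averages several fresh crawls for a steadier number:

```python
import asyncio
import time
from crawl4ai import AsyncWebCrawler

async def average_crawl_time(url: str, runs: int = 3) -> float:
    # Time several non-cached crawls and average them, so one slow
    # round-trip does not dominate the comparison.
    timings = []
    async with AsyncWebCrawler() as crawler:
        for _ in range(runs):
            start = time.perf_counter()
            await crawler.arun(url=url, bypass_cache=True, verbose=False)
            timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

avg = asyncio.run(average_crawl_time("https://www.nbcnews.com/business"))
print(f"Average over 3 runs: {avg:.2f} seconds")
```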
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "F7KwHv8G1LbY" + }, + "outputs": [], + "source": [ + "!pip install firecrawl" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "91813zILyZQN", + "outputId": "663223db-ab89-4976-b233-05ceca62b19b" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Firecrawl (hosted API):\n", + "Time taken: 4.38 seconds\n", + "Content length: 41967 characters\n", + "Images found: 49\n", + "\n", + "Crawl4AI (simple crawl):\n", + "Time taken: 4.22 seconds\n", + "Content length: 18221 characters\n", + "Images found: 49\n", + "\n", + "Crawl4AI (with JavaScript execution):\n", + "Time taken: 9.13 seconds\n", + "Content length: 34243 characters\n", + "Images found: 89\n" + ] + } + ], + "source": [ + "import os\n", + "from google.colab import userdata\n", + "os.environ['FIRECRAWL_API_KEY'] = userdata.get('FIRECRAWL_API_KEY')\n", + "import time\n", + "from firecrawl import FirecrawlApp\n", + "\n", + "async def speed_comparison():\n", + "    # Firecrawl, timed via its hosted API\n", + "    app = FirecrawlApp(api_key=os.environ['FIRECRAWL_API_KEY'])\n", + "    start = time.time()\n", + "    scrape_status = app.scrape_url(\n", + "        'https://www.nbcnews.com/business',\n", + "        params={'formats': ['markdown', 'html']}\n", + "    )\n", + "    end = time.time()\n", + "    print(\"Firecrawl (hosted API):\")\n", + "    print(f\"Time taken: {end - start:.2f} seconds\")\n", + "    print(f\"Content length: {len(scrape_status['markdown'])} characters\")\n", + "    print(f\"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}\")\n", + "    print()\n", + "\n", + "    async with AsyncWebCrawler() as crawler:\n", + "        # Crawl4AI simple crawl\n", + "        start = time.time()\n", + "        result = await crawler.arun(\n", + "            url=\"https://www.nbcnews.com/business\",\n", + "            word_count_threshold=0,\n", + "            bypass_cache=True,\n", + "            verbose=False\n", + "        )\n", + "        end = time.time()\n", + "        print(\"Crawl4AI (simple crawl):\")\n", + "        print(f\"Time taken: {end - start:.2f} seconds\")\n", + "        print(f\"Content length: {len(result.markdown)} characters\")\n", + "        print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n", + "        print()\n", + "\n", + "        # Crawl4AI with JavaScript execution\n", + "        start = time.time()\n", + "        result = await crawler.arun(\n", + "            url=\"https://www.nbcnews.com/business\",\n", + "            js_code=[\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"],\n", + "            word_count_threshold=0,\n", + "            bypass_cache=True,\n", + "            verbose=False\n", + "        )\n", + "        end = time.time()\n", + "        print(\"Crawl4AI (with JavaScript execution):\")\n", + "        print(f\"Time taken: {end - start:.2f} seconds\")\n", + "        print(f\"Content length: {len(result.markdown)} characters\")\n", + "        print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n", + "\n", + "await speed_comparison()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "OBFFYVJIyZQN" + }, + "source": [ + "If you run on a local machine with a fast, stable connection:\n", + "- Simple crawl: Crawl4AI is typically over 3-4 times faster than Firecrawl.\n", + "- With JavaScript execution: Even when executing JavaScript to load more content (potentially doubling the number of images found), Crawl4AI is still faster than Firecrawl's simple crawl.\n", + "\n", 
+ "Please note that actual performance may vary depending on network conditions and the specific content being crawled." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "A6_1RK1_yZQO" + }, + "source": [ + "## Conclusion\n", + "\n", + "In this notebook, we've explored the powerful features of Crawl4AI, including:\n", + "\n", + "1. Basic crawling\n", + "2. JavaScript execution and CSS selector usage\n", + "3. Proxy support\n", + "4. Structured data extraction with OpenAI\n", + "5. Advanced multi-page crawling with JavaScript execution\n", + "6. Fast structured output using JsonCssExtractionStrategy\n", + "7. Speed comparison with other services\n", + "\n", + "Crawl4AI offers a fast, flexible, and powerful solution for web crawling and data extraction tasks. Its asynchronous architecture and advanced features make it suitable for a wide range of applications, from simple web scraping to complex, multi-page data extraction scenarios.\n", + "\n", + "For more information and advanced usage, please visit the [Crawl4AI documentation](https://crawl4ai.com/mkdocs/).\n", + "\n", + "Happy crawling!" + ] + } + ], + "metadata": { + "colab": { + "provenance": [] + }, + "kernelspec": { + "display_name": "venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.13" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} diff --git a/docs/md_v2/assets/styles.css b/docs/md_v2/assets/styles.css index f103474f..68a93f5d 100644 --- a/docs/md_v2/assets/styles.css +++ b/docs/md_v2/assets/styles.css @@ -150,4 +150,11 @@ strong, .tab-content pre { margin: 0; max-height: 300px; overflow: auto; border:none; +} + +ol li::before { + content: counters(item, ".") ". "; + counter-increment: item; + /* float: left; */ + /* padding-right: 5px; */ } \ No newline at end of file diff --git a/docs/md_v2/tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md b/docs/md_v2/tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md index f2b1ace1..f19d19f8 100644 --- a/docs/md_v2/tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md +++ b/docs/md_v2/tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md @@ -9,17 +9,19 @@ Here's a condensed outline of the **Installation and Setup** video content: --- -1. **Introduction to Crawl4AI**: - - Briefly explain that Crawl4AI is a powerful tool for web scraping, data extraction, and content processing, with customizable options for various needs. +1 **Introduction to Crawl4AI**: Briefly explain that Crawl4AI is a powerful tool for web scraping, data extraction, and content processing, with customizable options for various needs. -2. **Installation Overview**: +2 **Installation Overview**: + - **Basic Install**: Run `pip install crawl4ai` and `playwright install` (to set up browser dependencies). + - **Optional Advanced Installs**: - `pip install crawl4ai[torch]` - Adds PyTorch for clustering. - `pip install crawl4ai[transformer]` - Adds support for LLM-based extraction. - `pip install crawl4ai[all]` - Installs all features for complete functionality. -3. 
**Verifying the Installation**: +3 **Verifying the Installation**: + - Walk through a simple test script to confirm the setup: ```python import asyncio @@ -34,12 +36,13 @@ Here's a condensed outline of the **Installation and Setup** video content: ``` - Explain that this script initializes the crawler and runs it on a test URL, displaying part of the extracted content to verify functionality. -4. **Important Tips**: +4 **Important Tips**: + - **Run** `playwright install` **after installation** to set up dependencies. - **For full performance** on text-related tasks, run `crawl4ai-download-models` after installing with `[torch]`, `[transformer]`, or `[all]` options. - If you encounter issues, refer to the documentation or GitHub issues. -5. **Wrap Up**: +5 **Wrap Up**: - Introduce the next topic in the series, which will cover Crawl4AI's browser configuration options (like choosing between `chromium`, `firefox`, and `webkit`). --- diff --git a/docs/md_v2/tutorial/episode_02_Overview_of_Advanced_Features.md b/docs/md_v2/tutorial/episode_02_Overview_of_Advanced_Features.md index e9844a7c..f2216b4c 100644 --- a/docs/md_v2/tutorial/episode_02_Overview_of_Advanced_Features.md +++ b/docs/md_v2/tutorial/episode_02_Overview_of_Advanced_Features.md @@ -11,10 +11,12 @@ Here's a condensed outline for an **Overview of Advanced Features** video coveri ### **Overview of Advanced Features** -1. **Introduction to Advanced Features**: +1 **Introduction to Advanced Features**: + - Briefly introduce Crawl4AI’s advanced tools, which let users go beyond basic crawling to customize and fine-tune their scraping workflows. -2. **Taking Screenshots**: +2 **Taking Screenshots**: + - Explain the screenshot capability for capturing page state and verifying content. - **Example**: ```python @@ -22,7 +24,8 @@ Here's a condensed outline for an **Overview of Advanced Features** video coveri ``` - Mention that screenshots are saved as a base64 string in `result`, allowing easy decoding and saving. -3. **Media and Link Extraction**: +3 **Media and Link Extraction**: + - Demonstrate how to pull all media (images, videos) and links (internal and external) from a page for deeper analysis or content gathering. - **Example**: ```python @@ -31,14 +34,16 @@ Here's a condensed outline for an **Overview of Advanced Features** video coveri print("Links:", result.links) ``` -4. **Custom User Agent**: +4 **Custom User Agent**: + - Show how to set a custom user agent to disguise the crawler or simulate specific devices/browsers. - **Example**: ```python result = await crawler.arun(url="https://www.example.com", user_agent="Mozilla/5.0 (compatible; MyCrawler/1.0)") ``` -5. **Custom Hooks for Enhanced Control**: +5 **Custom Hooks for Enhanced Control**: + - Briefly cover how to use hooks, which allow custom actions like setting headers or handling login during the crawl. - **Example**: Setting a custom header with `before_get_url` hook. ```python @@ -46,7 +51,8 @@ Here's a condensed outline for an **Overview of Advanced Features** video coveri await page.set_extra_http_headers({"X-Test-Header": "test"}) ``` -6. **CSS Selectors for Targeted Extraction**: +6 **CSS Selectors for Targeted Extraction**: + - Explain the use of CSS selectors to extract specific elements, ideal for structured data like articles or product details. - **Example**: ```python @@ -54,14 +60,16 @@ Here's a condensed outline for an **Overview of Advanced Features** video coveri print("H2 Tags:", result.extracted_content) ``` -7. 
**Crawling Inside Iframes**: +7 **Crawling Inside Iframes**: + - Mention how enabling `process_iframes=True` allows extracting content within iframes, useful for sites with embedded content or ads. - **Example**: ```python result = await crawler.arun(url="https://www.example.com", process_iframes=True) ``` -8. **Wrap-Up**: +8 **Wrap-Up**: + - Summarize these advanced features and how they allow users to customize every part of their web scraping experience. - Tease upcoming videos where each feature will be explored in detail. diff --git a/docs/md_v2/tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md b/docs/md_v2/tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md index 11b9be7d..87a3d217 100644 --- a/docs/md_v2/tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md +++ b/docs/md_v2/tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md @@ -42,7 +42,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra async def log_browser_creation(browser): print("Browser instance created:", browser) - crawler.set_hook('on_browser_created', log_browser_creation) + crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation) ``` - **Explanation**: This hook logs the browser creation event, useful for tracking when a new browser instance starts. @@ -57,7 +57,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra def update_user_agent(user_agent): print(f"User Agent Updated: {user_agent}") - crawler.set_hook('on_user_agent_updated', update_user_agent) + crawler.crawler_strategy.set_hook('on_user_agent_updated', update_user_agent) crawler.update_user_agent("Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)") ``` - **Explanation**: This hook provides a callback every time the user agent changes, helpful for debugging or dynamically altering user agent settings based on conditions. @@ -73,7 +73,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra async def log_execution_start(page): print("Execution started on page:", page.url) - crawler.set_hook('on_execution_started', log_execution_start) + crawler.crawler_strategy.set_hook('on_execution_started', log_execution_start) ``` - **Explanation**: Logs the start of any major interaction on the page, ideal for cases where you want to monitor each interaction. @@ -90,7 +90,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra await page.set_extra_http_headers({"X-Custom-Header": "CustomValue"}) print("Custom headers set before navigation") - crawler.set_hook('before_goto', modify_headers_before_goto) + crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto) ``` - **Explanation**: This hook allows injecting headers or altering settings based on the page’s needs, particularly useful for pages with custom requirements. @@ -106,7 +106,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra await page.evaluate("window.scrollTo(0, document.body.scrollHeight)") print("Scrolled to the bottom after navigation") - crawler.set_hook('after_goto', post_navigation_scroll) + crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll) ``` - **Explanation**: This hook scrolls to the bottom of the page after loading, which can help load dynamically added content like infinite scroll elements. 
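Building on the `after_goto` scroll hook above, here is a hedged sketch for infinite-scroll pages; it assumes the `crawler_strategy.set_hook` registration this diff establishes, and the URL is a placeholder. Scrolling in a few steps with short pauses gives lazily loaded items time to render before the HTML is captured.

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def infinite_scroll_crawl():
    async with AsyncWebCrawler() as crawler:
        async def post_navigation_scroll(page):
            # after_goto: scroll in steps, pausing so lazily loaded content can render.
            for _ in range(3):
                await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
                await page.wait_for_timeout(500)  # Playwright's built-in wait, in ms

        crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll)
        result = await crawler.arun(url="https://example.com", bypass_cache=True)
        print(len(result.markdown))

asyncio.run(infinite_scroll_crawl())
```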
@@ -122,7 +122,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra await page.evaluate("document.querySelectorAll('.ad-banner').forEach(el => el.remove());") print("Advertisements removed before returning HTML") - crawler.set_hook('before_return_html', remove_advertisements) + crawler.crawler_strategy.set_hook('before_return_html', remove_advertisements) ``` - **Explanation**: The hook removes ad banners from the HTML before it’s retrieved, ensuring a cleaner data extraction. @@ -138,7 +138,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra await page.wait_for_selector('.main-content') print("Main content loaded, ready to retrieve HTML") - crawler.set_hook('before_retrieve_html', wait_for_content_before_retrieve) + crawler.crawler_strategy.set_hook('before_retrieve_html', wait_for_content_before_retrieve) ``` - **Explanation**: This hook waits for the main content to load before retrieving the HTML, ensuring that all essential content is captured. @@ -148,9 +148,9 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra - Each hook function can be asynchronous (useful for actions like waiting or retrieving async data). - **Example Setup**: ```python - crawler.set_hook('on_browser_created', log_browser_creation) - crawler.set_hook('before_goto', modify_headers_before_goto) - crawler.set_hook('after_goto', post_navigation_scroll) + crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation) + crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto) + crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll) ``` #### **5. Complete Example: Using Hooks for a Customized Crawl Workflow** @@ -160,10 +160,10 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra async def custom_crawl(): async with AsyncWebCrawler() as crawler: # Set hooks for custom workflow - crawler.set_hook('on_browser_created', log_browser_creation) - crawler.set_hook('before_goto', modify_headers_before_goto) - crawler.set_hook('after_goto', post_navigation_scroll) - crawler.set_hook('before_return_html', remove_advertisements) + crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation) + crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto) + crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll) + crawler.crawler_strategy.set_hook('before_return_html', remove_advertisements) # Perform the crawl url = "https://example.com" diff --git a/docs/md_v2/tutorial/tutorial.md b/docs/md_v2/tutorial/tutorial.md index 4e90484d..5621744d 100644 --- a/docs/md_v2/tutorial/tutorial.md +++ b/docs/md_v2/tutorial/tutorial.md @@ -771,9 +771,11 @@ Here’s a concise outline for the **Custom Headers, Identity Management, and Us async with AsyncWebCrawler( headers={"Accept-Language": "en-US", "Cache-Control": "no-cache"}, user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0", - simulate_user=True ) as crawler: - result = await crawler.arun(url="https://example.com/secure-page") + result = await crawler.arun( + url="https://example.com/secure-page", + simulate_user=True + ) print(result.markdown[:500]) # Display extracted content ``` - This example enables detailed customization for evading detection and accessing protected pages smoothly. 
@@ -1576,7 +1578,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra async def log_browser_creation(browser): print("Browser instance created:", browser) - crawler.set_hook('on_browser_created', log_browser_creation) + crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation) ``` - **Explanation**: This hook logs the browser creation event, useful for tracking when a new browser instance starts. @@ -1591,7 +1593,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra def update_user_agent(user_agent): print(f"User Agent Updated: {user_agent}") - crawler.set_hook('on_user_agent_updated', update_user_agent) + crawler.crawler_strategy.set_hook('on_user_agent_updated', update_user_agent) crawler.update_user_agent("Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)") ``` - **Explanation**: This hook provides a callback every time the user agent changes, helpful for debugging or dynamically altering user agent settings based on conditions. @@ -1607,7 +1609,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra async def log_execution_start(page): print("Execution started on page:", page.url) - crawler.set_hook('on_execution_started', log_execution_start) + crawler.crawler_strategy.set_hook('on_execution_started', log_execution_start) ``` - **Explanation**: Logs the start of any major interaction on the page, ideal for cases where you want to monitor each interaction. @@ -1624,7 +1626,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra await page.set_extra_http_headers({"X-Custom-Header": "CustomValue"}) print("Custom headers set before navigation") - crawler.set_hook('before_goto', modify_headers_before_goto) + crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto) ``` - **Explanation**: This hook allows injecting headers or altering settings based on the page’s needs, particularly useful for pages with custom requirements. @@ -1640,7 +1642,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra await page.evaluate("window.scrollTo(0, document.body.scrollHeight)") print("Scrolled to the bottom after navigation") - crawler.set_hook('after_goto', post_navigation_scroll) + crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll) ``` - **Explanation**: This hook scrolls to the bottom of the page after loading, which can help load dynamically added content like infinite scroll elements. @@ -1656,7 +1658,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra await page.evaluate("document.querySelectorAll('.ad-banner').forEach(el => el.remove());") print("Advertisements removed before returning HTML") - crawler.set_hook('before_return_html', remove_advertisements) + crawler.crawler_strategy.set_hook('before_return_html', remove_advertisements) ``` - **Explanation**: The hook removes ad banners from the HTML before it’s retrieved, ensuring a cleaner data extraction. 
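The hunks above show hook bodies and registration lines in isolation; for context, a minimal self-contained sketch of the corrected pattern (hook registered on `crawler.crawler_strategy`, as this diff establishes; the URL and header are placeholders):

```python
import asyncio
from crawl4ai import AsyncWebCrawler

async def custom_header_crawl():
    async with AsyncWebCrawler() as crawler:
        async def modify_headers_before_goto(page):
            # before_goto: inject extra headers right before navigation.
            await page.set_extra_http_headers({"X-Custom-Header": "CustomValue"})

        # Register on the strategy object -- crawler.set_hook(...) is the old
        # API that this change replaces throughout the docs.
        crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto)

        result = await crawler.arun(url="https://example.com", bypass_cache=True)
        print(result.markdown[:200])

asyncio.run(custom_header_crawl())
```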
@@ -1672,7 +1674,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra await page.wait_for_selector('.main-content') print("Main content loaded, ready to retrieve HTML") - crawler.set_hook('before_retrieve_html', wait_for_content_before_retrieve) + crawler.crawler_strategy.set_hook('before_retrieve_html', wait_for_content_before_retrieve) ``` - **Explanation**: This hook waits for the main content to load before retrieving the HTML, ensuring that all essential content is captured. @@ -1682,9 +1684,9 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra - Each hook function can be asynchronous (useful for actions like waiting or retrieving async data). - **Example Setup**: ```python - crawler.set_hook('on_browser_created', log_browser_creation) - crawler.set_hook('before_goto', modify_headers_before_goto) - crawler.set_hook('after_goto', post_navigation_scroll) + crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation) + crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto) + crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll) ``` #### **5. Complete Example: Using Hooks for a Customized Crawl Workflow** @@ -1694,10 +1696,10 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra async def custom_crawl(): async with AsyncWebCrawler() as crawler: # Set hooks for custom workflow - crawler.set_hook('on_browser_created', log_browser_creation) - crawler.set_hook('before_goto', modify_headers_before_goto) - crawler.set_hook('after_goto', post_navigation_scroll) - crawler.set_hook('before_return_html', remove_advertisements) + crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation) + crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto) + crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll) + crawler.crawler_strategy.set_hook('before_return_html', remove_advertisements) # Perform the crawl url = "https://example.com" diff --git a/docs/nootbooks/Crawl4AI_v0.3.72_Release_Announcement.ipynb b/docs/notebooks/Crawl4AI_v0.3.72_Release_Announcement.ipynb similarity index 100% rename from docs/nootbooks/Crawl4AI_v0.3.72_Release_Announcement.ipynb rename to docs/notebooks/Crawl4AI_v0.3.72_Release_Announcement.ipynb diff --git a/mkdocs.yml b/mkdocs.yml index 52fdd579..ddcad318 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -33,29 +33,30 @@ nav: - 'Cosine Strategy': 'extraction/cosine.md' - 'Chunking': 'extraction/chunking.md' - - Tutorial: - - 'Episode 1: Introduction to Crawl4AI and Basic Installation': 'tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md' - - 'Episode 2: Overview of Advanced Features': 'tutorial/episode_02_Overview_of_Advanced_Features.md' - - 'Episode 3: Browser Configurations & Headless Crawling': 'tutorial/episode_03_Browser_Configurations_&_Headless_Crawling.md' - - 'Episode 4: Advanced Proxy and Security Settings': 'tutorial/episode_04_Advanced_Proxy_and_Security_Settings.md' - - 'Episode 5: JavaScript Execution and Dynamic Content Handling': 'tutorial/episode_05_JavaScript_Execution_and_Dynamic_Content_Handling.md' - - 'Episode 6: Magic Mode and Anti-Bot Protection': 'tutorial/episode_06_Magic_Mode_and_Anti-Bot_Protection.md' - - 'Episode 7: Content Cleaning and Fit Markdown': 'tutorial/episode_07_Content_Cleaning_and_Fit_Markdown.md' - - 'Episode 8: Media Handling: Images, Videos, and Audio': 'tutorial/episode_08_Media_Handling:_Images,_Videos,_and_Audio.md' 
- - 'Episode 9: Link Analysis and Smart Filtering': 'tutorial/episode_09_Link_Analysis_and_Smart_Filtering.md' - - 'Episode 10: Custom Headers, Identity, and User Simulation': 'tutorial/episode_10_Custom_Headers,_Identity,_and_User_Simulation.md' - - 'Episode 11.1: Extraction Strategies: JSON CSS': 'tutorial/episode_11_1_Extraction_Strategies:_JSON_CSS.md' - - 'Episode 11.2: Extraction Strategies: LLM': 'tutorial/episode_11_2_Extraction_Strategies:_LLM.md' - - 'Episode 11.3: Extraction Strategies: Cosine': 'tutorial/episode_11_3_Extraction_Strategies:_Cosine.md' - - 'Episode 12: Session-Based Crawling for Dynamic Websites': 'tutorial/episode_12_Session-Based_Crawling_for_Dynamic_Websites.md' - - 'Episode 13: Chunking Strategies for Large Text Processing': 'tutorial/episode_13_Chunking_Strategies_for_Large_Text_Processing.md' - - 'Episode 14: Hooks and Custom Workflow with AsyncWebCrawler': 'tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md' - - API Reference: - 'AsyncWebCrawler': 'api/async-webcrawler.md' - 'AsyncWebCrawler.arun()': 'api/arun.md' - 'CrawlResult': 'api/crawl-result.md' - 'Strategies': 'api/strategies.md' + + - Tutorial: + - '1. Getting Started': 'tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md' + - '2. Advanced Features': 'tutorial/episode_02_Overview_of_Advanced_Features.md' + - '3. Browser Setup': 'tutorial/episode_03_Browser_Configurations_&_Headless_Crawling.md' + - '4. Proxy Settings': 'tutorial/episode_04_Advanced_Proxy_and_Security_Settings.md' + - '5. Dynamic Content': 'tutorial/episode_05_JavaScript_Execution_and_Dynamic_Content_Handling.md' + - '6. Magic Mode': 'tutorial/episode_06_Magic_Mode_and_Anti-Bot_Protection.md' + - '7. Content Cleaning': 'tutorial/episode_07_Content_Cleaning_and_Fit_Markdown.md' + - '8. Media Handling': 'tutorial/episode_08_Media_Handling:_Images,_Videos,_and_Audio.md' + - '9. Link Analysis': 'tutorial/episode_09_Link_Analysis_and_Smart_Filtering.md' + - '10. User Simulation': 'tutorial/episode_10_Custom_Headers,_Identity,_and_User_Simulation.md' + - '11.1. JSON CSS': 'tutorial/episode_11_1_Extraction_Strategies:_JSON_CSS.md' + - '11.2. LLM Strategy': 'tutorial/episode_11_2_Extraction_Strategies:_LLM.md' + - '11.3. Cosine Strategy': 'tutorial/episode_11_3_Extraction_Strategies:_Cosine.md' + - '12. Session Crawling': 'tutorial/episode_12_Session-Based_Crawling_for_Dynamic_Websites.md' + - '13. Text Chunking': 'tutorial/episode_13_Chunking_Strategies_for_Large_Text_Processing.md' + - '14. Custom Workflows': 'tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md' + theme: name: terminal @@ -79,4 +80,4 @@ extra_css: extra_javascript: - assets/highlight.min.js - - assets/highlight_init.js + - assets/highlight_init.js \ No newline at end of file