Release prep (#749)

* fix: Update export of URLPatternFilter

* chore: Add dependency for cchardet in requirements

* docs: Update example for deep crawl in release note for v0.5

* docs: update the example for memory dispatcher

* docs: updated example for crawl strategies

* refactor: Removed wrapping in the if __name__ == "__main__" block since this is a markdown file.

* chore: removed cchardet from dependency list, since unclecode is planning to remove it

* docs: updated the example for proxy rotation to a working example

* feat: Introduced ProxyConfig param (see the proxy sketch after this list)

* Add tutorial for deep crawl & update contributor list for bug fixes in Feb alpha-1

* chore: update and test new dependencies

* feat: Make PyPDF2 a conditional dependency (see the import sketch after this list)

* updated tutorial and release note for v0.5

* docs: update docs for deep crawl, and fix a typo in docker-deployment markdown filename

* refactor:
  1. Deprecate markdown_v2
  2. Make markdown backward compatible so it behaves as a string when needed (see the string-compatibility sketch after this list)
  3. Fix LlmConfig usage in the CLI
  4. Deprecate markdown_v2 in the CLI
  5. Update AsyncWebCrawler for changes in CrawlResult

* fix: Bug in serialisation of markdown in acache_url

* refactor: Added deprecation errors for accessing fit_html and fit_markdown directly on the result; access them via result.markdown instead (see the usage sketch after this list)

* fix: remove deprecated markdown_v2 from docker

* refactor: remove deprecated fit_markdown and fit_html from result

* refactor: fix cache retrieval for markdown as a string

* chore: update all docs, examples, and tests with deprecation announcements for markdown_v2, fit_html, and fit_markdown
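
A minimal sketch of how the new ProxyConfig param might be used. The import location, the field names (server, username, password), and the proxy_config keyword on CrawlerRunConfig are assumptions drawn from the commit summary, not confirmed API:

```python
# Hedged sketch: the ProxyConfig import path, its fields, and the
# proxy_config keyword are assumptions, not confirmed API.
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, ProxyConfig

async def crawl_via_proxy():
    proxy = ProxyConfig(
        server="http://proxy.example.com:8080",  # placeholder proxy endpoint
        username="user",                         # optional credentials
        password="pass",
    )
    config = CrawlerRunConfig(proxy_config=proxy)
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://example.com", config=config)
        print(result.markdown.raw_markdown[:200])

asyncio.run(crawl_via_proxy())
```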
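
The PyPDF2 change makes the import optional. The sketch below shows the usual conditional-dependency pattern; the function name and error message are illustrative, not the project's actual code:

```python
# Illustrative conditional-dependency pattern; not the library's actual code.
try:
    import PyPDF2  # only needed for PDF processing
    PDF_AVAILABLE = True
except ImportError:
    PyPDF2 = None
    PDF_AVAILABLE = False

def extract_pdf_text(path: str) -> str:
    """Read text from a PDF, failing with a clear hint if PyPDF2 is missing."""
    if not PDF_AVAILABLE:
        raise ImportError(
            "PDF support requires the optional PyPDF2 dependency: pip install PyPDF2"
        )
    with open(path, "rb") as f:
        reader = PyPDF2.PdfReader(f)
        return "\n".join(page.extract_text() or "" for page in reader.pages)
```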
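
For the markdown backward-compatibility point, one common way to let a rich result object still behave as a string is a str subclass that carries extra fields. This is an illustrative pattern, not necessarily the project's implementation:

```python
# Illustrative pattern only: a str subclass that also carries structured fields.
class StringCompatibleMarkdown(str):
    def __new__(cls, raw_markdown: str, fit_markdown: str = "", fit_html: str = ""):
        obj = super().__new__(cls, raw_markdown)
        obj.raw_markdown = raw_markdown
        obj.fit_markdown = fit_markdown
        obj.fit_html = fit_html
        return obj

md = StringCompatibleMarkdown("# Title\nBody text", fit_markdown="# Title")
print(md[:7])           # slices like a plain string -> "# Title"
print(md.fit_markdown)  # still exposes the structured fields
```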
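
The deprecation of result.fit_markdown / result.fit_html and the new access path are visible in the notebook diff below. A short usage sketch (the URL is a placeholder, and fit_markdown may be empty unless a content filter is configured):

```python
# Usage sketch mirroring the diff below; the URL is a placeholder.
import asyncio
from crawl4ai import AsyncWebCrawler

async def demo_markdown_access():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun(url="https://www.nbcnews.com/business")
        print(result.markdown[:500])               # backward compatible: acts like a string
        print(result.markdown.raw_markdown[:500])  # explicit raw markdown
        print(result.markdown.fit_markdown[:500])  # replaces deprecated result.fit_markdown

asyncio.run(demo_markdown_access())
```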
Authored by Aravind on 2025-02-28 17:23:35 +05:30, committed by GitHub
parent 3a87b4e43b
commit a9e24307cc
38 changed files with 2040 additions and 326 deletions

@@ -80,7 +80,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "003376f3",
"metadata": {},
"outputs": [
@@ -114,7 +114,7 @@
" url=\"https://www.nbcnews.com/business\",\n",
" bypass_cache=True # By default this is False, meaning the cache will be used\n",
" )\n",
" print(result.markdown[:500]) # Print the first 500 characters\n",
" print(result.markdown.raw_markdown[:500]) # Print the first 500 characters\n",
" \n",
"asyncio.run(simple_crawl())"
]
@@ -129,7 +129,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": null,
"id": "5bb8c1e4",
"metadata": {},
"outputs": [
@@ -177,7 +177,7 @@
" # wait_for=wait_for,\n",
" bypass_cache=True,\n",
" )\n",
" print(result.markdown[:500]) # Print first 500 characters\n",
" print(result.markdown.raw_markdown[:500]) # Print first 500 characters\n",
"\n",
"asyncio.run(crawl_dynamic_content())"
]
@@ -206,11 +206,11 @@
" word_count_threshold=10,\n",
" bypass_cache=True\n",
" )\n",
" full_markdown_length = len(result.markdown)\n",
" fit_markdown_length = len(result.fit_markdown)\n",
" full_markdown_length = len(result.markdown.raw_markdown)\n",
" fit_markdown_length = len(result.markdown.fit_markdown)\n",
" print(f\"Full Markdown Length: {full_markdown_length}\")\n",
" print(f\"Fit Markdown Length: {fit_markdown_length}\")\n",
" print(result.fit_markdown[:1000])\n",
" print(result.markdown.fit_markdown[:1000])\n",
" \n",
"\n",
"asyncio.run(clean_content())"
@@ -342,7 +342,7 @@
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": null,
"id": "bc4d2fc8",
"metadata": {},
"outputs": [
@@ -387,7 +387,7 @@
" url=\"https://crawl4ai.com\",\n",
" bypass_cache=True\n",
" )\n",
" print(result.markdown[:500]) # Display the first 500 characters\n",
" print(result.markdown.raw_markdown[:500]) # Display the first 500 characters\n",
"\n",
"asyncio.run(custom_hook_workflow())"
]
@@ -465,7 +465,7 @@
" bypass_cache=True\n",
" )\n",
" print(f\"Page {page_number} Content:\")\n",
" print(result.markdown[:500]) # Print first 500 characters\n",
" print(result.markdown.raw_markdown[:500]) # Print first 500 characters\n",
"\n",
"# asyncio.run(multi_page_session_crawl())"
]