From d913e20edcd8583087e212df46ebab1ff4f30a94 Mon Sep 17 00:00:00 2001
From: UncleCode
Date: Mon, 28 Oct 2024 15:09:37 +0800
Subject: [PATCH 1/8] Update Readme

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 5f867cc3..f6552009 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,8 @@
 # Crawl4AI (Async Version) 🕷️🤖
 
+unclecode%2Fcrawl4ai | Trendshift
 [![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers)
+![PyPI - Downloads](https://img.shields.io/pypi/dm/AutoRAG)
 [![GitHub Forks](https://img.shields.io/github/forks/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/network/members)
 [![GitHub Issues](https://img.shields.io/github/issues/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/issues)
 [![GitHub Pull Requests](https://img.shields.io/github/issues-pr/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/pulls)

From b2800fefc65439b1f737ea602f98c8d94f2784c3 Mon Sep 17 00:00:00 2001
From: UncleCode
Date: Mon, 28 Oct 2024 15:10:12 +0800
Subject: [PATCH 2/8] Add badges to README

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index 5f867cc3..f6552009 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,8 @@
 # Crawl4AI (Async Version) 🕷️🤖
 
+unclecode%2Fcrawl4ai | Trendshift
 [![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers)
+![PyPI - Downloads](https://img.shields.io/pypi/dm/AutoRAG)
 [![GitHub Forks](https://img.shields.io/github/forks/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/network/members)
 [![GitHub Issues](https://img.shields.io/github/issues/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/issues)
 [![GitHub Pull Requests](https://img.shields.io/github/issues-pr/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/pulls)

From d9e0b7abab9bbc8bb9640d0d0b548301f18717ae Mon Sep 17 00:00:00 2001
From: UncleCode
Date: Mon, 28 Oct 2024 15:14:16 +0800
Subject: [PATCH 3/8] Fix README badge

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index f6552009..ea60c83e 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 
 unclecode%2Fcrawl4ai | Trendshift
 [![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers)
-![PyPI - Downloads](https://img.shields.io/pypi/dm/AutoRAG)
+![PyPI - Downloads](https://img.shields.io/pypi/dm/Crawl4AI)
 [![GitHub Forks](https://img.shields.io/github/forks/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/network/members)
 [![GitHub Issues](https://img.shields.io/github/issues/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/issues)
 [![GitHub Pull Requests](https://img.shields.io/github/issues-pr/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/pulls)

From 3529c2e73208707f78df0873d18db2ba4573ec63 Mon Sep 17 00:00:00 2001
From: UncleCode
Date: Wed, 30 Oct 2024 00:16:18 +0800
Subject: [PATCH 4/8] Update new tutorial documents and added to the docs
 folder.
---
 README.md                                     |   11 +-
 docs/examples/quickstart_async.py             |    7 +-
 docs/md_v2/assets/docs.zip                    |  Bin 0 -> 64674 bytes
 .../md_v2/extraction/extraction_strategies.md |  185 --
 ...tion_to_Crawl4AI_and_Basic_Installation.md |   47 +
 ...pisode_02_Overview_of_Advanced_Features.md |   70 +
 ...wser_Configurations_&_Headless_Crawling.md |   63 +
 ...04_Advanced_Proxy_and_Security_Settings.md |   83 +
 ..._Execution_and_Dynamic_Content_Handling.md |   90 +
 ...e_06_Magic_Mode_and_Anti-Bot_Protection.md |   79 +
 ...de_07_Content_Cleaning_and_Fit_Markdown.md |   82 +
 ...dia_Handling:_Images,_Videos,_and_Audio.md |  108 ++
 ...de_09_Link_Analysis_and_Smart_Filtering.md |   88 +
 ..._Headers,_Identity,_and_User_Simulation.md |   86 +
 ...de_11_1_Extraction_Strategies:_JSON_CSS.md |  186 ++
 ...episode_11_2_Extraction_Strategies:_LLM.md |  153 ++
 ...sode_11_3_Extraction_Strategies:_Cosine.md |  136 ++
 ...ion-Based_Crawling_for_Dynamic_Websites.md |  140 ++
 ...ng_Strategies_for_Large_Text_Processing.md |  138 ++
 ...nd_Custom_Workflow_with_AsyncWebCrawler.md |  185 ++
 docs/md_v2/tutorial/tutorial.md               | 1719 +++++++++++++++++
 ...rawl4AI_v0.3.72_Release_Announcement.ipynb |  235 +++
 mkdocs.yml                                    |   21 +-
 23 files changed, 3721 insertions(+), 191 deletions(-)
 create mode 100644 docs/md_v2/assets/docs.zip
 delete mode 100644 docs/md_v2/extraction/extraction_strategies.md
 create mode 100644 docs/md_v2/tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md
 create mode 100644 docs/md_v2/tutorial/episode_02_Overview_of_Advanced_Features.md
 create mode 100644 docs/md_v2/tutorial/episode_03_Browser_Configurations_&_Headless_Crawling.md
 create mode 100644 docs/md_v2/tutorial/episode_04_Advanced_Proxy_and_Security_Settings.md
 create mode 100644 docs/md_v2/tutorial/episode_05_JavaScript_Execution_and_Dynamic_Content_Handling.md
 create mode 100644 docs/md_v2/tutorial/episode_06_Magic_Mode_and_Anti-Bot_Protection.md
 create mode 100644 docs/md_v2/tutorial/episode_07_Content_Cleaning_and_Fit_Markdown.md
 create mode 100644 docs/md_v2/tutorial/episode_08_Media_Handling:_Images,_Videos,_and_Audio.md
 create mode 100644 docs/md_v2/tutorial/episode_09_Link_Analysis_and_Smart_Filtering.md
 create mode 100644 docs/md_v2/tutorial/episode_10_Custom_Headers,_Identity,_and_User_Simulation.md
 create mode 100644 docs/md_v2/tutorial/episode_11_1_Extraction_Strategies:_JSON_CSS.md
 create mode 100644 docs/md_v2/tutorial/episode_11_2_Extraction_Strategies:_LLM.md
 create mode 100644 docs/md_v2/tutorial/episode_11_3_Extraction_Strategies:_Cosine.md
 create mode 100644 docs/md_v2/tutorial/episode_12_Session-Based_Crawling_for_Dynamic_Websites.md
 create mode 100644 docs/md_v2/tutorial/episode_13_Chunking_Strategies_for_Large_Text_Processing.md
 create mode 100644 docs/md_v2/tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md
 create mode 100644 docs/md_v2/tutorial/tutorial.md
 create mode 100644 docs/nootbooks/Crawl4AI_v0.3.72_Release_Announcement.ipynb

diff --git a/README.md b/README.md
index f6552009..bcb20270 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 
 unclecode%2Fcrawl4ai | Trendshift
 [![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers)
-![PyPI - Downloads](https://img.shields.io/pypi/dm/AutoRAG)
+![PyPI - Downloads](https://img.shields.io/pypi/dm/Crawl4AI)
 [![GitHub Forks](https://img.shields.io/github/forks/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/network/members)
 [![GitHub Issues](https://img.shields.io/github/issues/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/issues)
 [![GitHub Pull Requests](https://img.shields.io/github/issues-pr/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/pulls)
@@ -10,6 +10,12 @@
 
 Crawl4AI simplifies asynchronous web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. 🆓🌐
 
+## 🌟 Meet the Crawl4AI Assistant: Your Copilot for Crawling
+Use the [Crawl4AI GPT Assistant](https://tinyurl.com/crawl4ai-gpt) as your AI-powered copilot! With this assistant, you can:
+- 🧑‍💻 Generate code for complex crawling and extraction tasks
+- 💡 Get tailored support and examples
+- 📘 Learn Crawl4AI faster with step-by-step guidance
+
 ## New in 0.3.72 ✨
 
 - 📄 Fit markdown generation for extracting main article content.
@@ -19,6 +25,9 @@ Crawl4AI simplifies asynchronous web crawling and data extraction, making it acc
 - 💾 Improved caching system for better performance
 - ⚡ Optimized batch processing with automatic rate limiting
 
+Try new features in this colab notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1L6LJ3KlplhJdUy3Wcry6pstnwRpCJ3yB?usp=sharing)
+
+
 ## Try it Now! ✨
 
 Play around with this [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1REChY6fXQf-EaVYLv0eHEWvzlYxGm0pd?usp=sharing)

diff --git a/docs/examples/quickstart_async.py b/docs/examples/quickstart_async.py
index 02b5f8bb..9c57f57d 100644
--- a/docs/examples/quickstart_async.py
+++ b/docs/examples/quickstart_async.py
@@ -383,10 +383,11 @@ async def crawl_with_user_simultion():
     async with AsyncWebCrawler(verbose=True, headless=True) as crawler:
         url = "YOUR-URL-HERE"
         result = await crawler.arun(
-            url=url,
+            url=url, 
             bypass_cache=True,
-            simulate_user = True,# Causes a series of random mouse movements and clicks to simulate user interaction
-            override_navigator = True # Overrides the navigator object to make it look like a real user
+            magic = True, # Automatically detects and removes overlays, popups, and other elements that block content
+            # simulate_user = True,# Causes a series of random mouse movements and clicks to simulate user interaction
+            # override_navigator = True # Overrides the navigator object to make it look like a real user
         )
         print(result.markdown)
 

diff --git a/docs/md_v2/assets/docs.zip b/docs/md_v2/assets/docs.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6b28c0a85e3db6904f7f6cfe71d3af24de7bc80f
GIT binary patch
literal 64674
[base85-encoded binary payload for docs.zip omitted; stream truncated]
zVyUXQFGPuMpjW38z|7;!6vGk+b{@K@A_fT3On%YJ_rPtJ`PFdigax?u1x6K9WwQ7^3PZ&0?=U!=k_|Bul9kKtH9!wgL6x;QC@WV`Un7?^eF)(A^@#=?}+waaF5`E2^)8s8zXW_OwWpg&?G3 zrK#A8QQtd3a>{UJxh_u($>ZkJuSS2ZbrP&>d)q(>ybTz+PoHTNR>e4=KzP~+Lk>d& zZ9lbAn1}hDK>Q(LZ=0PIB$Pb2W0##@2E>0^MIX?Ve#z+QyB!xsWrDt?S6$vfhvon) zwI1Bj_{iPq(XFVu3$?2NP->xbzO82F>k#^2dQtQ?hylva-in9)E{Mba@c?0x(`_58 zwMn-hiicyo!2FHza?l4uuBex!iaI7ZKN>plU#UMwr|1y0TdY`p&5qc`b{b89jYfW* zFaE+O<=+=!EU=@IWGH(sMS5A%=JgMi%B6l#KPxnBqi}@!&_O=Z?NZ1UlrMkBeJ}zyy zz(k$Yd378pN2xM5-GqsRb)T=?d)@{4zNS#x-y^@^%kbf?fsNn7p>N>zm>gRIZ`iXM->J)E;HobJkORUh@l@DS znCJM=3RH-Vl4|FTpHG~8kCHUqE0gSo{V%42!X=7H1KNR93U1mp%Li{R(<^7AFRNzl z2`{Iig4wF@D_c6Ss*N?2Ir^_|HbFpvbis6pv{c+p650At;5-oz74NO2%*;SMZt#_X zySoUc%j>sDF1u$v*tXAbqA?`5hA`;%>)EuQk7i0}C)y)PBOb9!;?vHnFL2{xm*(};EPt1JqV8f9EhCd;NI6ygW%Ov z-#$=%P9oz*o%i`xygg{waQ>$D+ ze;HJD)u*9^6s%!w!pigJi&Zje+Z}Jf*}#>dHX6xDqJKs_kwY|AHeB{5DgTM(ce!US zht*~D7gw{(w!rk{X9@Kj7_f0#Q131bRMVb)lr+UmT*xbqH5uE1eK(YfsD8{nZfc|8k&lQ(Pz&d(}eC-OLqVM-R&|3UZrA( zT<(J#C`vu0RnzjxTU>gm4)r-yCk0-^mVEe8!i;SuXHl$U@M;|eXgIZp0U|f$R3|}R zGyq7PN(q?|$brKbM>H z4%M`y(oj5uRs#MeRXjvviOfyg1HIv4B9n!+1oIGgBN)f)Gs+ujRUl>w#>ZsGYV?II ze<6VZ!^Cw!FUatSj7s0 zc4259<4G1kOt+(7|tKzsuiNCYYhy%fbfI>}ew!tOw&uy`2W+|O5{jBV6HN@)Bqywl1s+J>NJR44 zz)Uw`CG(GFpO#^zL=2eL4HdbsZVlM=Wcg%LsJCDNkdzb!yMV;+)s9A!-WYL|&h!ci~) zJdaz#ERX08%KD+g+bYQ?P^K)PGW0Hs79COh{z#OMso>XSYoFN=&lyW~!-0Uq!c1!+ z&3e(8gOY+&?7kx^u&ZXbu+N?y${8*;0Yy>qHqxwl*EOY#P2X?JXqIXS48|DCChe%2 zDva@%*+*6V&Je;IGE-&>b&x22OI4YVe`z0Ni63oAx2jz}bt24~wW%}f3I<$mgw_QZ z*JU{9NW?PbZOoQs41vMz=8H7qpm+@vFFUkujb@dGL(F5jlFinkJaoU6B4PYIv@VYu zmPytc*wXz<=t?L(rFX+FmZ_a#4Px~N7G4tcgwb4*7%x7Fb-zfYie^xfVx`dqM54s+4)o-4c zGZrkWu?&|ZwkwDiztNyjII{Ggut2gKtzskvcrI}T2BeP&*or$ydIFs~HDMcLXI@+ekB*`H)FNBsXOTO(>wy*7s zat4e-3cK5Ds9tSSSf!v<8#$vcN#+)_wTIPi&vQ+v*V%$9JxMIG#az6E^jvwRESWtj zSfP|V8lBV*e0x%?$*KYk*oMFa{*FC1n}{W=tQo*>P>IP>8jrAGDZF!r*-wwUMWa}7&gZ4&T3TYdCQlW*D5j9XQL9AV(MVFu>ACcw%@}cpc1P~v z*13k47r#E~@U7LisTtfesR7+V7^>oxmCOCtk@@?m^ZS02wb$+b_*?^prz^Ouy4-bc 
zumT(>W-8-9vEW8{Z)&_IO6EVdz* zhX73CJDu;nUKusB<1ZWa+t=oFVM7E4bkEaAGzk?Vv>2N2I0floH9{zujR4Q0NL*sG zT%7-S_0Y|I^kBpW)n7t%byj#9g;OCzazds!HxMFblW`#uM{*H;fbhLKvyCGOXQA?kmV#cha4MaJ@Cr+4UO=IdkU>&!7yb2jV~KTC@_#ZW-=$X|fhIGoMg-JyA_ zhzk}a8)6cqp#r!^HgHNfaMDIsv8ISBdt()J3erc^E&$U zpk$_(VX^=v)7m=VQZf1px8~0)l0roZ?g-Bk*l02M-Z-tDmh^OXrg|-kRA9f z2r1kTZvY}S={s3FKQ%MTySOeJ)f4^9uqo4${)|#3$t+=wuyX$VDQ=4)*I`t`wH=0V(fipSh&^(IVbyCrSXCJWJA7|n4UEz1rq6x-~A<(DN;;B2P_*JVZ zL$-4HGX-JIBfYXUk`e0HkKx_|gRDqQ4nYT! z8!{dmv{|$G_pP;|*MIzZ8M$Aw0SvssU5H@vYTJ=1-9#N)ZPC?z{A2i_SPIFQ5PpkL z3-6Y1;{bLqg^Bcs8cp@e7^sE{gGE74R8R6rA>WvYm2z!|nLbx#8WOgc-r%e8_A>}mtTV7d!8;RFOL3l7h}!p|U{2~-#( zxj=a4kGIV6z<=g+NMv#6hvyC!n^|JO_dbRo6fZWicJjv{Ozq^~iJiy7bh3f@iuKGF zm2sRd&k*zx6bYA~48%=cq&K$kj*%BI!lCwUtfKDa6jWnubdWAtP}34T z5*jwApBxr;KF&B81OjYhW`B0U=!DwG)1pBoZ+dMxFHx=^DV6~~xcy(8w^A`gGk|Z~#H3b}@xB57 z^J{so7of~t0Ra9}YX8@-H3|G*zgAWO008Vizt-O3f9%>HaeOoQe{L=8|8LjUaDUrv zjVAV8)v1fOH(e*0{IS{Xc@h9`m)$%au#IN7us4VG6se{D_pgqZ7Fh%GchAoBHA^pX ziD`b7%*@Qp^vo+2x524n=19)aSK)C?Iyq<{tVG|frOb_bwmq3Tr8;npD)*FnhDw3f zoqD~He4{A62_sKDBMK@&7&dVATqt~FZawMC20Lv zvnJz%a)fqi9%xjQcfzh6ifDfg0z6m|;yh>ORUhuyzu)r~U-v9vj*YP&SOuHf$90wF);VoTaSJ`;$UJ-s#f5~9&NOC}q zeGcmwgoC{!D_xqhDMQlHurOq9K!0PjM3k|D1(F}K%CSOkhzJl=&&G5=sz=tJmB%dS z7BI|tpx1}*60Lu&P@G0}OYN3oz}#ec9(Ym0#1PJNWsB#Fa)uLpW%<7E>ZXF#MMQCj z)o=Zx*T`~Itl%RQ0)2oOHvCiq_S)DQ|7KcHu$NVR=^iWf@LXAF^_%-GOUX-+hVcCj z0?lm!=hp0!T3`Bn-|cNw5kvg~Q~e!t8{#;`MGD%bNeR1pTtH&hts3^Tsttel^Zv?c zmtIAxfTp*Gv!!6*6iyW4#;vo^{gw0=N-zhm6B${G-fdRD*-yvC{cGywJUl8K{0Pg< z1;Q9Nb82=yi+PRP(~x@Im(i~`ntlOqGiwRyo>@uFXfHd(tg9J>%AualU&=oTaPEz; zafB>p0wS;g_*-2+RHe>AhM=;e^4EKYnkNjZ#c;jHRbEnSQ`kWp{VPvxW(PLFfzj)h8z;(yBDR>Gdq|OZfBO<&~WrCU&86g3GHCEko zNzPRndv9Xb3a&K`z3SEl4>!A|X`l&GwXx<4Dq_y#40>lsTT38!mM@#^x8_3CM$(w4 z-Y2RqD6p-p&)zTNb@bg)|NOde@{9~w4l_W!i-ze?E)Gc0z+0FM_UujsfVD(QV!7gt z8S3EbJ%0HDZx<9se7zsVtOx`uX5s!yVj!Hl<=PC(2YFX9RX&j#H?tUB&JVp-M+AUE z+nekQYT=kVUGNspgUtTiXLw6L6}x9oVl3uf3o-EK)E9@dQL(RGbc?rs%+wL}=&6SN 
z;!2JS{#X%D-o%WLh3N+RV(kN|bdJ=|)Q?mT>Kyp#{br^ujHy8`C8J)lEN3(jMsz||Hi=<{7K*Fu- zX3OHFRx!(|Wuel6gj9d=*b=l1 zsXL80u}7sC2vv?LWInIb+WKVb$7>cFXnZ;kb|Jcj%1!w&}4(!}Bo z1c*U%*MgQ?sZ)mD_JkIdLadn}661G*=vDC*IR~`FUKctTuQGsQoC{E>j+h}1@R14^ zLo9=uZ1sm!#`s(7RI^!mG*TkDA1t}U8S?y8ln-tqEvCwhQXv+2-Bsz?F%xLns&a!n zxU0}3sSOT@aBz2>{kw^!|NGE_8KWVXfN)0X5D2y$zFBlPJsd20=9)wko5k;Wv8r>W z}MvZ)6rf zB(#H+^wSWas%`p%$|co+Bo&U=N0$9TB9?gCx*$H*>g412Cf)(Fm2M7EF5ZaS@!9|D!pC@_Pu=)^N| zl-oG#$KKh%-KLsdps%F{0r(?PvYJ!_S2=p6+tGrx#N_gmuJ9te4o`V`nA7VS2cBiQ zz@nN4OX75XgLDY#%sB%W#Jn}#sV)D4Qg$OE1+&-_my|1n?4q!oWo9!f^0+gER9^R- z4mM>J5yxR{$qM;e3iG8Y{s9x0KNT$F*hn2ASG=#wa*4T26F`zKk58T>n?I+&0u=!^dA3F!~Ah?UX8=yPlL>80v>f z*Rk0^e1{Z+ksQ}WZ5rh4{9GE7a}{450zkuR9t4ZZ4pR9KA;e#)0&H+c8bfk=Ba>5{ zZSH`|19(AQLqA}fG%Q?XS3A{N3NfZ1Gx~OQGHGcIX z$Gw4gTrFHU`G9;P@-}H|{UQ{j-(K_QaH+frfFaCW^6V(w@diCs^+LiSUuaVWV~vOK z)aHNJx^zi_M?(T2a%}p;oeC4TPAw*rdT;3V{p%^OT%l~qb?>Nr3kxnTY{|%&wUPIh z_2UN1#DMJS=e}o%6oG5+)*r?`Lg!WMM-U1sXupq9Ndq^NRq|ImyhRs1p_7EPs+r7I zlNcL58i7)CGElfDX6p5baD1>99MW?!5XGUd2jIw!E4AxXV=LCL!{(B-C2NU`uT?9V zfeInObO;I5KczZV%yH)MVc#po5pX7=Q5kLOq+}wwHpI($o%2~`Wt6O%1L8+txl;24 z>E90i1Bd#9aE?EJ_gxQ!#BCq{$Oz<44P!}wV#gK0yei< zT;h1^rHzg?RtYWT2R*mRiYLK=qUDQ2af`}N63P|xDPr#?Qrk&CFZjFkz+g0tW4$`$ zmZQNTAl?_NtsnoJ=^8i{d62KF3qw6F>lgWEs0OtHloO^q|66m`CT*CnwE|mKl1UQt zN!Z0gIGvyQqkooC9jkARx?i^MIqu6g`h~V$z58i-;HU6wfQ9q{IhXZ01&4^fW$ZN?h-(A01tVNnNb^Ht4dd!^XG&LAW|QLv8dwmWeX9Mo-uP8*n&+$FJhWjH;2!T(z_x3Q*bIBb~noF!C2ly+d*2A{)htpXZm-V zup&&a)Nk^az1=3UF!56`Me;L?0IVTRIix*`Z0MPHj9rX_(AW{<+KeM}MBBYrxf-Dh)tY}LK>7v{78Mie+%h)!Bl4n6`VE+8?y7}g*YHW}qO^D!L`$qR&MRafM91Lmtx$btN!!Q6-Z>cMz0JtJ zxxr!-lip>;=OzQEdD?amy86{YDxa~AGjPHGq9W|-vsV=|*#%G+ynO<>CwMvDZv)S0 z9t{6frH)5BgZu0gIaza8jD*=#%_l==oU6rYYg@Myt-sgYWY4uan9X(PNc=7H?+w=9 zmJUUjw+}byHD?I`6FklG-M7yGFX`iM8Qx zlFuPh)XO_Cp&aJA#fr`op{}p3C9Ijg7N#icjw)$gmai{(%MpvR;+5~6JN2(mVAX$* z95+uzl6jrn11?h3{#;N}nu|2FGBT`xHjX>wn_CKB+MA6Obt0{q+OqpI#-4f@y6gTK z%aC4|8KT_`#^tBT3lG|g&}o@)3iuPdv!5R@{IO}F355k7_>SA`|1A*D_w7Zz(oeTA 
zTeoM}#g(#3Soz2y|=r&dmzZeu^d&r==MvgGUvE5VKv1I9u&tea!*Yp|g% zeOHF8I+dXE2Vh%T85N$0Z|MSgPSZ9cMmz^1R@H)%=ce7ZXiD!l*%fZF+RLnlMl;zJBJhj*R)X98W0L$E_M1lv! z0&Csqk9nuL?YLyTCrp5pO&-C~6-3I=4mwv(FNDz+7iehEkr=rLD$vC!VPqs{;;;8f z`WJ{kGlr13^Ub-~J+*y;YO>3zV^Hjzm`pT>gkr_$Cf05mMx~;X&iN;XQOypz{RI^+ z&cDF65W86n^B@PIisXPT$kS+zwzU=TAIcck=DWB_Qh4OGs*eElN~fv)Z}*y?H`f%0 z`{Y#}*7RzZ1Gp)FDnDQe;sMKU!8brTu(FUt7}|52avkwgzIy2$#GBRS;_oXpfjfvpkMFOq+S68~lDV|`3LohHDRJ_!5Cg2t1T%!?p*i_ZgR3EONuLr-zY}Sf4$ucFWuG=&#% zF<>aSk-7v3hFbH~Ei_{BUC(<@;tJgZNepu4scnqO;J}@*prXcrnPxC1o zmMe`QxjIVVMH;wOv(*!6sVcQ%0_i%s#cUtFi@``B1;Wb9pH-W;;otD`7)5Gp{Tkx= zz)8-8A~G8^4B+y(wS1KTmv9?09IB0E^xu9GxL++pWNVB=Cx0vo0_x?Otc*8C?cyy{ zya8L8Ewa_nZ89=Ypv(exMAF?Z0W7wwmQP@C$}6l|VV^yabbk00km)~ic@LTNslK^k zV_jyoVV^Rv;$VB)aV;57RmELA7H^8TTusX-8!a%{b-f6g40useMGlBP_Na};#Yq$JLX7aww_tjm5eDk zhvb%(9JzIpfrd$BN?S^?vyyo)x(eh=7X?c@71F^|K;zN?%WB47U=njh7{7_MpP47} zH2d#KS9MW@Q~auiH~srKYZyF%=~l3k?ki*hALhBn2Uz0W5ktm#H%7Nq!1bd=HWIl>6{z);st(3+=d?c0 zPAA5C#w)}BxISR?;O-6hua^D-qu;_@v+Um3b7h&4aYQ(Cx5MwZCxe z+_CtV))qw@j{ZPS0LK)btZU!0*e9g6R^zJ!-}3cNEIV&~KD&2Ak$zpp)Pht^8c!LFTRzTL6kghI4d z^E(5*@g&)mL6E8afRI#s(^+<(kaBBeOg6bjGzRWsGL9Z~FKL^}?@H1wVeva4dX7-V z?Z^vZv3>41UogRo(FkOS%@sE;QJk}C1@;tF3weX*ZBj#V7DZr``;KhhmCNXekF3X1 z?yK%-HO+eZ9Rp_*qsuhf{{4*OpnCi~GUHg<-H7hxij_F>%5L27XtSKaPs+a446!f! z#1-FM3bt{qdwenl#}X;K+`IUV!Cr311H$q2sv2q?*AXi#Syf*ULg#TrQXSd))9mgC zWo(6gn>>7722ovv&Fe!lhQP0z-dcDvwkN;|!721$CpHF%DSAQ&fKNm2E_3q0!%tP-cI0VB7L|E?&197p8 zK}{t0@P^8cdxJ<^4tUeksoFtc?-lE->YwFXdwuI~5S0olx_;_%!ahzLY!T>CDtBKXxD|>3r1V7@rsawLeLQ~e(3QdL)#<-9! 
z5J;B*YgX8?0TKa!d>FS`fVc7~vLmg^$_m@%XvPKrBE3;X7j2oWO3o)o4>7M?k`$@Z z19^nm0k>Qawe+FvnZl)xC_G$T$fQ~o$xO2PsRCLp5k+ZdSoE0zb;|uAKS@)DbV~dk z6K2)DAk-eNED6<@ScFF6=B-l4J?$U~t_vp@7g{viRf;nGH<3g#N)xk2q;XkL%{B5`xv^nOf)B^PJG_d$5u3U|go_6SlAkJZ(-XsN$WGN5NDoN19sE*#- z&1*FLh6)Khy}*+P5Qcw-9Z`c~3i59z+?@gI5^YH^&9 zifq!mI{1-jE^AbX^lNkjS04=8lV|SfR&bze1iRWY!9Lf)#1o|0&3p%{nvoO1Q6N1b{HEU7)dWwRadzmY z7WadZU2WO}I$_J~5(yHti4yfc9+sl0yOyLp?FAY{`e`cZD|Pa$654|-935HuOfF-^l+acLJM;m_JuCVy*8VW^ zKJwA0eWG<69b)z;KZ$=G2$VZ`GXU8T^lg7{7*ouu0>dicE664>pBF&YTL1tEkv5Q{ z`K*0}w~X`!4-X+TDHY~h=LkWO3ilnH@SEeLv%{!YIA7C(2^4z8m{4?S9uEY`>|&L` zBAx4;B#xMb$hgf>x(L8 zkG|STNV|Z(+I`S?c24BfSP%Yjm6WKQV0K~`b-juHa?~K``9(cq@YA5Y+h1qDuIUIh zaRd*KR(`nKCr*o3$fz06CLlOyLny|)8UD*};aHfph;*zU+uSi(FbfCWVozWXz8ETjc^VKq3hc00uR-Rg`we0&1FC z<}Rj8Xx{r4&hwX%ceBD2oC}Tw44Of5aTD%He#GFQ2c~#>STiYP5{M-G)In5vP~MBp z1FC_#Yov2YUL$JZUA8O2L>^+2a{Rbw_}b$ZQre0A!&suPSq}hItCYzQ)rmZM|D0JS zNN1gdSO6b@T?+H%!$GMb617+VWaf9W78qum(Fu@<@nLA%a9S(HfVN+NTf0p&K?hWt z)0mI$d9bGxpWVVbcJahasX711BjM;Pf*MzD?bCE-uL8tIIO60=m-vX7Kb6@N1 z1WbMpk2NEq^+s90y263h1dh7(;7C(TFTtxMv?vO{Vn$;nb_ z_DS^qTwJBok~Xr=igK0;<0ksa;vx?W)KKY!G{?v>m6&0ti-AcNNKm4yF#1tVV_nX) z41kGQSM5I?;UzHZvwzQzh0||<7|vjsl}Q>aVG0Hojz6hi05T~dP)EOtOEBl@Q2_0U zO8#^*{Rv89Uq}L>rRtKzzn^_ zFCV0+m-0^dtUXEaJo#;7XG>Hb!4w#J13Yt;TTAB{J^-`Ot(yQn-MFKDb5{sMtPWni| z%WKL4hxKo-?M(q1G85tsuGw8``26n^dUbam03CUYz)kKh&;hhTFgcXMGLWJsaS-`8 zMZz@J1!2P33MW751Y0Xl^LGyb8P0>1;0rJv&P4&BKO{tX|J|9@5~#k1;a=`}T23r} zz$!IdM*Vro>-D|nFsKA$BFU3w4Q=$FDiGBYxP8y^Yn0t=h8sykpOhdCCnnMF7IkxD(6Y2P zl+wJnnZzMq)xz5r6~GR(Yk_L)^e8HIQPXN}JeRPa%&dL=?_lW+5Lyz_g>j6nYC9<_ zh}xw*&E|X4_qy;7zd&#Fcmt3!Cr~I>79GIjWy_iox+u9xCKLeA00j!ibyxsSeZfPp zIJz%aVvn3=w`Ty~#3h-txOXxJ@QkB5Y3yaYVkXtx=3%Ujpsb7Un1udA^hr~-WLcge zqveu0W(6Q%z&lU@FF`r6&s6eA3kbE@m}qYgkh;=eR6#7AmU0VvCsOV2SC(%HPhXb; z-op}$^pT&1F6oWz7Ngpe)6m`1RpaabCbfY1%4<0%Q5|QiegBi?h?@+La8_w6&54mDC zBoIBXA1%=W%-tEV0X#cl!2WVmwb4Z|i=1VgO<5#nZz1}}o4JA{f^QHgB)U2Ktzw&h zU4_fRL7V*@Ck*_u0^TYSsErwzalIE(+!aPJ+=jSCsS7>5H1}Fvzl?joZJvtjGNZN$ 
z>;<7;V7e(+!!?eT=pp8#bt^V&ElLA75EAp%z7?Od7E_;O2;kRVlTV6mnKXa^8uH2K zV5gtjt^Ry{c>;b2*v-DJT;AjPT)#=v!~rsVkVK#9Ca+@iqDK8&C7w)ABQR!z@xWqD zopH_&C$4xg9p~rxsigrIrK5a`K}d-vF>lJctmajC*VnPHl0~TP%rS(iOlXFN5;7C% z84OiKjp#A+2bXeplJbGE<3^L*M<7!Ip|WAbOKrg|tjYqnWpp|6j0PpQ;!{j*W8oag zqb{;65x@(Ab4D|yl5F4*e%zTxk?Mz>IlXXMe!PRZ+jKxPyClbg?VUA?!EIPcemfw8BM0UNpGWOOr=wSR2~bftzrfn<{|SXfS{rucTeAI zkH`H#M=q%I@W)zS$I@oXGLoYjm1pFe;(~hl2U~(uY6MNCHRzSi=sBU(Ho6fgZAMhi z2$stpG~+SA2Em-tv1Ft4kvwIl%M;^JmtNGw(^I=84G;K|dmWKzbI<4V?e}I~%0>Q+ zI`K}(SuoUA3wbeObqOan(Ec|mgQ@l^o@tuYB8>BhV>n=iV7!H8@7w3J>zq6l< z2sI8FB@4`4MT#L%&Mg87Ul{K-w2ANqI#X|(i+-r*@L-bavL_Y8jKHYf-^a|N0EvN~ z;SQ*sokQKh|E1~(u)O2Uyn(z%#gkGv?E(mm7!j!8Na-P(foNTmQH1H1dKv|MuV4dR zV?Q<~z&XIk$4PRI?qoT%G@cArdq?X4Kcj4x1v{iPYL-%4BN*$Xu_?0q#YD0c>uJ`u z#$BZwhW`avd6;)JO2el;m|-(S?TSs}k`6=gXr0(WINx{fu=U&myF9R~aKAD;SgPMN zk|Clqh2FT(Zv;w~Df`YES=vsKPSreVxSSU#dfxz*@hen|=rZ>NrG_GwKfkno?ws-~ z$8=;J4n~Zlx#AAvm3j893M`Z_8xVi!AR-a9aW2&1fwk;_S?sF4U7G~Rm1Xo3)C$7P ziqje@HYM(vI(~N9iR459Xm}E0``$P~PGNU<${>lH+GcK(Xy0ZGKKtm%pEsGn8)18g z3x_J7 zYdu&D^ACHA_zB$;0%*i0Aw3m-p_XLVbUh^SuW7)mPssp4C$JY#{7|L!7#Ou^ILh$} zevn>)0PbFhfViUqyncMjKbrtH{w9EKk86WZOU1?46)fEvm5Y{qpcYBtUyz7@Lq7qP z@x*C$qLpXagG)fPW0YXDC-^r)tTagWb^2kK+DXK4XoKOY3>gN*GK}SaaZ^1&{sFmlHgKyCewsbD3K8o4mKHFR<27kg?};1%~$37pN?hH~qy6UDK0k zVn;jG5&z{CSfQJ;cbD%lY|)3Oh!CVg{U;|U<<>CdAi~RvRCq^AL3!7Ac`qFHM8b5| z>4TubVS{?^B%p*$5N3vq0wI=+h(`_QvSh|ydCoZ{ArlY<2c&N~QM2Z#1l?zd+eWTb zbN6Is?P!a9Wq)O5%mG@suP~>z4&%5DAlF}3bn}L(>6F_E za^*;*b>#9wH=4X7Pwn?_`0Lk3YS(`P1k(6E+fQYl7R~j-=2&!#D$Zb`r~W zvrmWfD#1Gy;2@`Y7?!e6?8|#m*h_tv0ta3jW1{Gs+m&s_x(8g-n{*`};l@QVLIH1%wfaeg-14{_+YRmC0dBB^_Rls_g zSU3~ekfaj7Ya{}6;RlMd&YW#0!QZ02b_!40^L=RMk*E&N*Plm^DTF1$G7|>s8K1^M zN#Wd8iRyZ2dC2tBlF%+mt&NrxHIK1jL_+Qe3HFX2_dW#_+o$*pe@t`LxMDOVqN1Px7_c_>En{9OPM^jR8L{= z!T-vm&~T=FJKd#Ry|1R7gm9eWZX&b;_0uq+y_Q$?Kw%XTKQM!y0R9f{(Azp$HDVUh zD&pLd8x$0r80eCIBATX$QsvF2lE*XX@`_hRG{uXa>O8ShTYVqLt+s-BQ~{%+*Bz 
zzV-s7+}ykCH=8J@f*(RR&M<&prWr0!CLtYY_&V|of;+u__S+9-3Gk%uQp@a0h-;O+7B&%ZNYHNxB1i4wbhp=}kg8;LQ( zju!6U^|e_)7P5QFWHWy??uu}9@~SQ$m{zvoVuCWs{|J9Wpa+aWeo60Y0Y0Q0}A2wG*Ep%z@+s2!dosn2#c(p8XB@2BCO3We@W z#Eb(_)FxcZgmZIB+8w6xmXfo}G8k%&Dm?k2Fr&j#mOEd&diR(~HH5Lmc7QLlVs`6rw(sr`>1J?z5r ztZ6{SCppV=CEj*M%iCZ(lhWiSTs5IlVi+aYgmd5*-`9W8l8K$`KOO2Y0W7Xy z=?9#Xs_qGt84o0;oBzkSDW8{r2f_va(}yqs3yr9~U;&;ZcjwJM(?lKNH;)9CoDA&< zfnA!6rYtMloN5T|yMC-=)i5;eO1apiLvaGO!pcnUjjKMkTjRX$a#D8~-&!lXc5p`{ zxY;;$&cgeMs%4Mh2GD98;vWgxd))8+qwkDq7cm8M0`7?;G{@p6fOvY-EJBgF&PTfEBHNb2 z+DhsxhU!IcFOxS6pXgkr=xbNc`%oLI(?z!pxx~kKnEA?@c)_md@tiO`l|%9so^|yL zf5mXDNgVccciRaY7Bn4+kkb~4@bCNDfgWE;wo7!yT4|(ySl|N?M;rmw zjF_fW3tf;^9I#6R)I>3=tv1UbnXg1O*kW?-ES8>LrnZ7 z+&j}kGvf0%c`YkVMs~eo8sO&}qyjJTdj+!zs#+;f@y8=eXMYjPN3TF@A@(a%1N>!p zqV=jR1NDJiNV!TCpnZPX6TdEbg3>3$hQ@Fy?|ic#@vYDC4(B->2thpqS^+2!E<+$8 z&U@*mnn`B=zV4j3571v>b9}8tbuC_!{Td(n*sE$|oxdN6rGB zvH=^|nVM+J$t?)hp(ijhtO5&u3^&HX(5-blHWIR!V&jEB_cmG-i$Av(27 z2XZe4knoPSIxsVgQW<&k_&?SbdUP|bPjEb>yP#~|yH{N>DAlf z$s`B7%hcXFPJ$*x;mF+e`3XF5c`oQ8Kxk#=v`DazLSKl7suw-~Gq?#!I-tp|mkoC<-0p~c9_2ypxY3;r<}>P+uq$F;KE2`@BOwF=FZ zZbqK&SQk8cJE6YUD%p}Q3&vW)9sCBk!HZ+3Mm30yuawf;4h>3Uhg$W+Fb~tCp|Q$qV39mJjinSnwL2Vr5U2l#c1f+?O1T+`~FPJfM59Yrj`Ws&t+6wU0A|1cMQ% z5pRN3R3<~cAcc`2T#KaSCX$_VLoxLh)A$%W2EY*c7nsz;d zwwG3_@Af3M;XJR{Pbc^jd>BDG%+5F@X)3W&fv+A8f)b$HuP*;B_Zh3{_}<&a?%)p^ zsde%)17CzUH9Sn2N%LlR#h;?3gQ8!DsG7N!C^|2~U)`w=Z#&Muk6^j-7Lf{-;1WFh zFbRi-l~Ha6a8RmH7Jj!QE4&u$&go?qCN+z1d~T<sD*q*%nLyg+v8?M^pBzuw5Jp4JJzkR1ifDi57n&fgUgakpO#`*rQu1;(<(@ z$NX)NVhZGrV=^X>6Mlfu*s-BIeaXOrzAdoThdWr&)r7IEFwrXlp36HCdeliuhXmxv z<{aUL#x;?ae(0CeD(hE*fd^DeW^h&vy{AsSZqJ8@J9o#|OZSH;GCh}%&jQFY+KCo3 z1FFU>*$N$WdNK}RludDHn}{!K5tLw*K-NL^wuax z--Tzj*`=i)KHQx(eC*=#hKI4W*ExwVY-?o@r8Cz2fOn^O9)5@dVe(t!hl2JYJs1vL z!`JB0L0Ym2MYtcFA%=4+*0taqO$`50lUJNX{l_X067`5O#?*=g0lyEf#^?((#dU2` zfez`()yO`V5<_`h@-w}?4|$VQAijdXD$BIBJ&l~T0~Ao^c$a{>TormV98OnW;GS4# z8()h}qma^ym=wI5K)zYhJh^2&YI8aR8f8nu^mS>yZCafJLnatuUEaHq+uesiH>q^b zwtL2JY)J$Q1^aAYR&3q1M 
zBEH1(E#wP7o}aQv`-Fy?DpwuMtB=X9vz7(Q;-WEAG+a265p1vcK3T{>4pWO6bfo58 z{(SAwsK9-*;%VJb;l*pT(mR~fk=D?kc9FB2@O9S@W_EAIE_P^!2GtA@B!@oW&?{w@ z8>ikJxVdnsy1^Hv3fS%xX{ONt(kO?gWXd@^VQ(q6ZTi%ZGF}! zvWc+?ADllPf62cMy?uXN91hs)QRB7mVME<3)Y-LVBkgs1FQ*VT_Z0g?Ij< z4$(_rO^CbpN^Lw+Lnd{MC7A0MKakQDJI%94g5A zPLXSfu1STPGlWd2Y@SX~B{uFxlThF872HUafXcBl<(ZTp2i|Nbgu zp=UMW$Z6DXYsG{Teo!iY8*mIPADtRK8Co=Nbi$Bih*z8N?hj9-i4@GNEZ+P>QQl-6 z0~0{eZbwNl!}j{L(8#95tC2LfY_0APApUX+wTo5Jm5Xr)BiVm1*s7!}ZdzDi!d5am$y73hT4fdJ3O4eU&jkqLv(ps39r?A2l7!2 zBbsq{emw4>Ez%^}l5|rEb%U=&Uofo^Zgww}ON+ITud+QQSbpm(5yM^mbgr-pWr_j> zp|`mPx$V(GPy{)+V|Y6G$D)IdOF(w6ymtD+R@`YHYn+RrTqm5v%DbwpUU**!#_gRB*BA?yIhN)c@DC&sLEr|i&#GI5e z$M@xD;hI|QwBo9f6%nhWVnE5aNrS}lb?z^6ekOeq)+netZuYVZQYiSvk?KU)bo*(` zSWTRi6p;eN2_Ih^Bnof**sOV+m4?(C9fJq3>QE|%IPkX8wmWR2Tn&TmUTt?da5xKa zyVVC&E}IJaiza@-jmd7&)Exqtl4P32ic3?&FuR1RWrs3C+I_2BgL~lKV)ZY37j~59 znD|yXy+q30r22ai#_ue??Ll>fqS*~EaB?gUMj3^O1DrQJ_fk+LLyvBVlTr&paT^s( zhZ=Uwo4|u}CAeW=WCD3iTrwP7Ulh^bsl70Y>zh%Ty4s`-71g=l4wula=yps63#IA0 zTdL5m@t3%(F3PMa-<`6!!p1zxMN1&9^la4dReR4fh0^q#dn}$&ECrM~<^+jO7umdC zkEbbkIYd*=6-6KE*w6Yu0a{Uw)1+*1%p@XH8Za-Exb`y;hzW3uP)D)HdhA=0G3v zvhho-;b1Uv&x@08<)0n6sy=2YSSPPAi0czbL+WRZ?=9I$hhV5Y*Rq#h3WPee{S8fAX2!3_mdmcWc8;nN4&X{0M1lD87!hP{APdM zpDpZ#8|^MXb19X!ZtixH2F#LbHnA>SkH)ss8)c`wy^sztHG{CIJBW52O0O0ISpg1z4RW0svtA z0#+vO&W;8~&K7pIdQQJf&L(CSCQknYT;2ZyS1`Y@74-iTxZ2UVa@-P4{hlj3FDR8P zP7AheJu;VGn{3QsC)%Qv!jz*tD&)_X_yYt8hCY!^@v{TI{?*o^_2HIhm8*|$FjvyG z02_!x!m%-3>6qXzswbN_00Bz}76_kFeZ-Lt|u`+4fuH~cZ|Zya+u9IWlK!EQ z5ao~=;gEr@D?d&>8AU!Gk{A3Z5HCeR zV8t-RfEjN}nlxk_IOA0#`81b3VZ@d8xXdHlbiSk_JAygEu(|)oqh@t=HPxMt>>-I% zxXR_=(}(cE14kgwf}P}_lCh)WsBh?s>H0HsKSCVAb>N^< zpQH(E4$0%)WT*6~7U^yVfkWi}Z%7WD0l&?j>9To)>8427VXbt|E$D^FLVrE~BO z@ZI7aQZGk99ghl>aD%dYF$>Cd#~bxULld`oKr3lpm7HsTwtuA^(@p#vi>keypOX8w z#+zCl85!|bkX|ZgnO(%3Ms@7M_sA>@3xzG(;GT)(OtUs1%TM{u=to~i&2+jppX$aA z$7i~glIYkW8phv<`7o%@D6MB-y zhtA6l&tLwBPc$K5PGO5B!kGb-P}nzRBL#(jXai=q2m_Vi!#I%#&v10{O`vXTNnC1ga7YEK6O(nuVgqs0gAIZuqiL55p` 
zX+Dn0km-BTuo;Nzx+z!nVbBxz)Fv~ud@BhErMNiKqSDT|OQ}AEQC|hOY#v#v}@E8#VJ8@0H}PUfZO~YRCWz!2CFG3PAKCYGF=M;=Cf)AK|Ki}e5X7i z$&s}?Im(JWP!p5aIr(ABku^|5W=s&h&_TR;X&YBWSd2^$M|(S~ZEf0XpEqUT%n82Gsj7%%qcwZ_=g^SlZX|UEks~#Yvmyg1e@zap! zwAmZfF-DlIQGh-S!(eMr0yU6n7cZ`Ef9L*Dopl(#ql$2@V1Rjb=^-)gk7pvhK!8PhI<~%@2e}9 ztb;Lwr7y1KV?Y7t<|3=@g3!ZAtKUMRvH1gTAWi%Yj2@h$YY{F-3S1w9_j>?VRMtk7 zCSrV}EIn6XhDTATBOPaZm60^_?y)wiF9Is3Je*9Uy`V|eVSpIUtY?n2ebk# zA9@tO|1s_)r7j;R1Pfp@p-NKQi$-i;TBf+Q0CnyZ<2MRbl08n*yZ#e^o?dF$9ezt*{&&)b}oj= z$DR!)Gk%dTLNe%IC6MiYkW-1*I{db=lQ>pDLK83ScjN(|^v)UR*Q8*HCE$5>7dxdJ zqUL411Z`HyO7+L(RuQ_{&4ZaBqz;R!C3Mej@lb>E!T}Xn)<%tJFTRw^gaL*tF;JKC zPgr1u6*ph!(cFXOe^+O3oCjBo<%FBAu66AOn7;y^sLg~@>;Sc#pG6Vt%bh^@eG!Ne zi!xq0gNB>;`{5{R-QiZg2He&8$VXo0oSB&nq739QMuFd0C`>;N<={?S?m;hZ#1=$S zY|^6we?g;OF8W>YWqJayK=Z^QNr=}m-|@%diXp?`^UoaaWOpPHOfbAFrDC?U7h8l@ z&|3aOm0`%t56$2cZV-;c(GJfge8W%iYaGf&0R&gsTCb>yfQM69%$FR2YdZ`SVu)43 z?*ntC-(H>uE^NBGDYV>tbfvMhvw#$_u~>lKoB5Ozda}GR{fAo`viqJ_-lh&Wk|btV zS?t+w)CVGg+gP*6yxse+$-b+MsE~#nvvy>uzw2_6h;>WGtoN0ImcjlGaApkutAT|0f2V$w@6GGm}k^&l%$XpNFxpx9;3VC#xy6@FTM~K}C>;llaWL<@c z=ZKMs!b5`Kk33??$2VFLJc)(#8Cg)TN*QU>5V>gy2Q`QuRAs8vr2y&$=(>7M7f*nM zk7f%_>Zv5Ed3x2amDlynNg-5(w2KNBY@td)d0;UDTXrCZSA!7F7fsCKkP5{Z(ttFe z(3}@3h1~8Y-6DzY(8CSO4XnUvLtJfd zW_XL|tpXEW#Tc~@cuElJWP>V;(8{KK+Y+FONP6Rx?rM!Tt1kx1Tbt<^3cfb`Dy1f!f<@VXs_e|pm(Gmd{ zMsSFD2IG#g>?(%!N2elAWl}+l1Fh58(?P|8g463%+pnb`)Vxg?=%>m^*wabl1ZvoXZwC9utuT2LM2R^ zOY?~$3F9*mjf{#aOD=_gz)zJ0fTy@)4@%6DI^ZCFzvOol#iB6-%uHIL$|Mjyqk{Q{ zx=dJndwaYIKk6USI%>>B)Qv;v;Obk9O3fn7zvZX}qr@z>K~?4+EBRlnI<9o17?pNT zER3@z&xC%=wraA07#|AMLqhj#JGjKO{^2-76?t2!pz(P?9Ae3)HNk4^|7#9HTJMAk-N0eAdq~?djAwnqsn*)$HVtL9ty~@tg?2 z7zjAh(L?1(Ym}7&32&av0vw&?=_*{76a`ob=Qw#eH=l3Ibfr4y!i_Szy21CEs?cc( z^muf&COA#HY8K2Yf+T{zA$t_Kv1oK>^NB5qvd~H8f^q=VD_;8=7>R;==j*=U+b-E> zD^IEdH}Fg3s#GElt7h39+AaD@Vt47sdzMj`->EH zSqcw-e7`Hemy(EhzyE#CMC8bqpsA@}dJxK*xxafBZ(AinZU&*5(tm(@$)R9Z9(r{a zI6omPLxnJKF`TIwJV)l1rzDTym_jI!Irc{s89Xj?ja0O>s 
zw_J19G~e=6N&5SZN!cksa_*`G(Npk#&+qGbP0sDKa@o)5Ag=ebTrH&eGFI&o(BD1(&K-a7yNzYDT zKTSJ4uei;_orx)Q0*Q=diLbBS?~ajoZv3t7y%QvcYS%*blS&)M!u*v!4j z>aXiA=ge^QT&a9d3#8|NO<}u5C5}#Qlf1P$t$7ZLEHbR;{!ft|EC9UYpL-b^F6Q22 zw;$$ntXJQqIb>~r>e`>5S)cHcH-tTN#^;8=f2nQBIAahSyH!(pP39*Va8FLk$kFntFHo0t%X|3`mSCuU$JzqJqq{yRux4c zhP!3z0`k`B?IRQ6fkH5gTM;R-TJ8&AwN}J{Pd-{L+QSOJ8X2iLp**h9eY0gzgl&i@ z*^?L&fcyMsJw)Rw8IQyR{lM+~n3ElSte0}X69YMI){Jb=RGF51=2j+K8ryT1OF;?s z4IVF@{+7aFjA}2j!YNR_q0Tfq;1@d7p<57$I_0IluhP$&ZsNGdR%qNyFhyXAw-~&n z&#|5FZ9Ca))h+Yc5U`yWunc`+^E#P%t5mQg_4d9<73oXuZc87j%OAL7`?Tclzn2Bv zwc!8>MQ_0w6xFff2p(m{>G_YP4q@4HcqyA)uPh)FEN8t|3zVcY>cvhehxuL|>&{*~ zM6QTv7L>;jcSyKp73#&#gLUqx;;??*tsz;4P}=cC$MAl%fE55uv|)~HHw`@4>FPox z!l~usDD{?>UY38c6_u{`XN@+HPc#sG*#+OP0v2S6H7=~_rN3y-{C#^HTQ$Qq8S%&B zLRYOX+k|XEx#3X0QnlmY{ufJR29JjW6}_%Y&g@DU9>E^83)!epEccf~9_H#^a2Zn! zUXK3IlydoolckcKh>^kMN*;(Yq}6>u+-0^#LP|_&&s~ktt|veDJ`pc;f>~VfpohY zv$MAed^6aIO=zep(^({uRa_%+-tquy4{7d^6%>XB`Dl$S^XX%x6S7L+V5Yz>3ASS- znMG)A#=>Z-wKRq_lB!BdxFO+4b94FnFYH1H%Xo7HEf*J%k%%O6x=l;v>ARaCxz`jxqGEHIn8OV+LuI>HzuS>j9@^H9v6{QM{Mql#D}Z7Qzq@kjxb8 zhbx+|QYzZsk|y44s74(ZQId3I&$Ne3G2_UjE*l9Dhf)P=-F{%>caCCs;quN~pQD#wSf!-@_%E9vTsJ_cHZmICxMAvz1c_qt)qU^;7nQR6~0Hl38 zTL9V=;%Q^mh~Xryj%Q~RzAuX>;{$x)T?C&ePBeTVS;5bWnwJ4Pr~Bt|6&ct3>1;0D z%b4C@u(&+#UFc&qgV$pN1nx~(_`&-%y}y4$6?qfAj_m!y*eKviV!^!HT~&|9N~J?W z&2GhtMz-f34FIqrAys=+gG~*G#v?Tyh2)y`6fOb&&VwQ$#TmIucD*l~25p)0*((D2 zAf+~W>^4wc9U!vEd)M!+Y`7|19>CO|4cCT5Tl`8@mCk!P9nQL%-0T>DeRa{g;)mp}`aE1K0j7{udzU=9;o{kAPB-nlcIF6`A_QCZKN8Nd+3rSqmF zlIy8oEgkeO5UPC=-w4fAKMcyd@KB*cCg&VHE?j_K6B8|rrX_*vqvSSh4YcUFcUdM^ zPLYtgE{gpVaLStVJXjTfjn$B#OB8B8P@1BV=nt3NGDg6~dZWuY%+VS>>FgpAIO)%( zu_*IgEG_?F7O92cg{&^Nof0c|g|nIOfz0D(yzBmr=$0pULJMvtZ2nC03Wl3~31=sO zX>*@s0!`Pc?E>B3P-jY8cuhoa-Px!8afdt>mMLam3&RPR9d)`-+Sc`?!IclQ&P>76}ERgg&WB;ZY@X4GVgR;`N2W|p17$` z6d!itD~-`aHKHfeh_JSdv#)ygCd}vTNlSGl!>$^cnB`0RcWc~iQ`IC@P0~o=jF3Ko z#9s)mK~BJEgt;kM5N6tX{4ZasnFkA{L$9O?rM(KO0|NRqKi9mDcM^Ezx{BC$H~Ftk zLqC3Aj>Z;Q5L#GW;fYpp*Ru6bpA@O^R*ob 
z>}>)o_N_GQ&BfRL@+Nl-eO{6rSv@?i6BD;5ko5qH3^gu@;8~W5BP|8GNMpE~8Dzya zj&Y>8Nf#2irqUY zOX5hl+5OqL0ooBXRYi!UDW!t+C(A5tmz3mXaaT+kTH|%QcKEHT52EY_;y|o00N52s z7NrW+lHH$Di+^8G+x2RW?IU#$jPq20zG59fy?0$(cPo?5;-ggy37MEsD@q{Q;bI86 zYda|2A@#dRS)^ zOSt+rHEa4Eh_{&vD98d?tNc@``-#u)wzjsmKhXe@AXWCX#uuC<@-g6KX1c?(`7|`h zp^Wt*+q>TGGx`NY9RRsmTGO{6i`^A)duFP%pnJ5(f%@70vG9EF?r0GeuY?e{<)OzY zYCMaX25IMS;|R^R__V^?la^=ihJjB}(9Mn9GoZBibU2xwm}~VmrjCWpVZBh`oex3_ z2N;x|L$BsvbDz!vD?lB)+e>zDKb#Yq*+vT0Y;nxi&F2-q@yt(65^(HD~*ZL(lxK4F^-!f*D-{hCZ|5s%MlUe6J_$crp(w@`Bgm5i;-4!(UOz7EE#aYC2Ldd+pUHgJ{Oc?*j-Eeh3w0$j81Qu9Amh;>!1wj)c1U z)e%kq>tPKqT3Q1-+R6mgxzmD<+xG8W7Oo%wPPjxdRpYuKLII5k)6|||A1P9qx0=pT zD!lFwy0XqYE=_Vg2{mN)|O>G zt#hl}bXb?sgH{-L5t48T-X*{#Fq3c)*4MN~K!}WtwC`HIT=dFRj0^dSpeN(+3?fg; zHE@zm;E`1oJuSqF%Y_@!9N?X(MZ3IRZwkik<##{ zvyGcpizQnYz^-7eX4x*_Zq@kv+baaqC1coI!>mAxxGX5qxOid{)H>~ZBcyqN$+;g|yEvebRZ(wTe1rhWchDn+VS zEe~nRYo7FIE&P3*OjhE{1lU)8Y&iGNB(Z(YE%rkyH^A<<) zZ;x+(j&)Wl&z<@ShM9QaQra||ee0J`+yHkrfDGcX*qk2mo zx=}06jpRg-3PEtP2S{Uxi30y7ydt(H zTWLe2V#Q%Cw`NMkSAf+t7rFmBTi3wgrOL*c@jD!t{LPQ)+jv-el7}F~11Rd5GSI)m zDDs17?rDiOK!(reyj@mgRkCw$iSsq>>+T#!Y2qEkky*r@;BhQ7v|WCWqF!&WCr=38 z^Sd}Yzja%7rqJiF{u5hSaGXMIxDNiZ1EnY!2!C)J+D(Mr7^GF=X>CopyJ_|BY*nuO z8PGs|a=1F&93eI(^B#ywGnZ3_@8$$x8ccx{lSk*<2sYk$Ogn`9Ad0)|eR8p#)_t1Y zNw1Mv=R12~juCE#EREPH-(w4>K1YKZNi2 zq9*%Gf9HJr+OBSHBXJvEL}QSlRSxSUaj8>~@^)yQ?XX`?z7bdK0wh@QYG#>~-LBIg z!lLD?E-&_BPlOLFQ2KjwTEj#w#}s+-Tmi@6Zi%J|0o>A;1iGk4zsPJB@s1?z(%WlR zq~)S((ruWYT#|Ee%}gG<^)k+0uyo7)#hQFp2L+?U`o|f^D>zLP`y)6~X-j~M3XGn9hjR7!d|j2=g>WM9Ds1+Z}Fz_}QBr}MaQ2k(1mpv+(L;~Y@N zgiRJ1{n6t~W7Ik)-&eqKYZX)W?%H~`mndOU)Z;_`%nCDLD^cW;l?PGj_6c-Qw1(w9 zUe{no!)=4sp_0Dny$@uEqUE}jO6=tl-BpG`_^|@A;nX))C|?@!th}WknT~J)ZV2VC zL_q+OD}}k+cc?nDj8RZKxz}Ml%vj&Dx!$%@o3cdF57FUCmR%M7AkhipE_s#s4ikcC!8KwWnJ%f`Cn<9I?e}m#E76z1Ury ze#j)hCvH|fapt*Uf1y&uN&Zu22y#)Nsj*T3AqBHS8*75L2NKQ5qj@!E^JJ4*LU4N( zVK0+fpwU(z-ejp-;Nk|FMczAD$Yi&hRn$8XPCyQhN*s2N_3a~>Q}d1$7$t-;F2}!7 zkA-5A8M)Nn{H)pK;{E~#qoH{;?y+vW4rk0e#5lOfXQ2d)nt;Rw5&bNb18#~a6;$^{ 
zp);c@4>53(U#`%*G^IuX!|)h9?ZUk}hN8Ah#)NyV6YW_>67@)TzbnKN`>|lvF1>tDAZ|j8F^)81SFq{^wGedoM`%D! z+i0?;h1BOB-_0MZl58Je=hrwZ(||?(_vs5!sUTO0GP!3EoF5?Vxqq`--Jo!zhHl=b zQEhlw&x1olPZOfA3YP6u*~|ya#znc-1r-m!CY8sbSWC$Bc}r&2B#`_fKXY-QuD+t8Xgh{^r0OG!kR zRL?~5Djl=Ooj+U>NGPB8-LH%DbVhlYWksNSRbVPRV=Tc(Ue0MA&$~cOuY0juDTgy= zz_Sx{=PV1#Xr@VJH$O4>sHq7|0w7OGVlY#fx(PYNv&WMtdDCQx>5TEwbgIL}&aI_k zV#j-w@<>(8^LTutl3=Awt9=Td51*%!W`arSz75&v;I z(R|*8@6@n_ebsFH<;EK1rQEMKsi3c!NNFn3E1EMLkw-r9co)Y09(p^$lTF|)LK%V=F3JFDp#hje)kCyD$~J8Xfm-fhotVvqas^eXwco2N&QxK%v$Q2^9nxo0jEX zEqQ&}`S{J`*UNkm7NT8AZRYmMWzQd_C+ zmEd|*3KtIBTy&%cm8+M|-9y~lsajX2k>LZr1;t_N1LI1$AB=7!0{dWuXC>dp@4g|T=B^I$w(E+=IKeJI3i$U;B${Fmc#tJew7_&as zWr%Ph+eB_oS=RJOgoAa6{RIy5iYjyQ=8Aq=Q%D1Hp!&~D$1_{;Dz<(rX~lP}xU`x; z#Zw+?MzScubwOQJ^_>zC3+_bgZD6AtYGYU_rS`-8yx{>m6`f2trM)T7qm((T6JAKM z_qZMMERWdLk^EH;8`Br?ipf#`!PO>gh33-L7G9P{t!n^m$!7u}6C?4hvrJM-o}zKN zQWar2SJgIcU8j$92D>x+o|3s1+m8Js#w7d&AX{etNpBlU2ybA~DWYP>qjf=r;hvX` z&adkAdz7vp97DHYG}~+ToBX7^bx9W7MtO;(@*E5xwJlpm870Fib517J(5{!kl7@IO zDfFkZD@S!3!(}WgGkYu7wqq@TeF8Q~=0X~ZvojEFKt&nG41PK_BE_Rw3T=jvRRuG)Xvj{h0O0b^;ub^kv}JpT7@{V$R2f1`x|C9?g`C=TGS1^EA* v&-TBo{)Zv{@4x*&4Do+PaX`Tm!2fS3QC 5] + ``` + - This is especially helpful for content-heavy pages where you only want media directly related to the main content. + +7. **Example: Full Media Extraction with Content Filtering**: + - Full example extracting images, videos, and audio along with filtering by relevance: + ```python + async with AsyncWebCrawler() as crawler: + result = await crawler.arun( + url="https://example.com", + word_count_threshold=10, # Filter content blocks for relevance + exclude_external_images=True # Only keep internal images + ) + + # Display media summaries + print(f"Relevant Images: {len(relevant_images)}") + print(f"Videos: {len(result.media['videos'])}") + print(f"Audio Clips: {len(result.media['audios'])}") + ``` + - This example shows how to capture and filter various media types, focusing on what’s most relevant. + +8. 
8. **Wrap Up & Next Steps**:
   - Recap the comprehensive media extraction capabilities, emphasizing how metadata helps users focus on relevant content.
   - Tease the next video: **Link Analysis and Smart Filtering** to explore how Crawl4AI handles internal, external, and social media links for more focused data gathering.

---

This outline provides users with a complete guide to handling images, videos, and audio in Crawl4AI, using metadata to enhance relevance and precision in multimedia extraction.
\ No newline at end of file
diff --git a/docs/md_v2/tutorial/episode_09_Link_Analysis_and_Smart_Filtering.md b/docs/md_v2/tutorial/episode_09_Link_Analysis_and_Smart_Filtering.md
new file mode 100644
index 00000000..82af6b9a
--- /dev/null
+++ b/docs/md_v2/tutorial/episode_09_Link_Analysis_and_Smart_Filtering.md
@@ -0,0 +1,88 @@

# Crawl4AI

## Episode 9: Link Analysis and Smart Filtering

### Quick Intro
Walk through internal and external link classification, social media link filtering, and custom domain exclusion. Demo: Analyze links on a website, focusing on internal navigation vs. external or ad links.

Here’s a focused outline for the **Link Analysis and Smart Filtering** video:

---

### **Link Analysis & Smart Filtering**

1. **Importance of Link Analysis in Web Crawling**:
   - Explain that web pages often contain numerous links, including internal links, external links, social media links, and ads.
   - Crawl4AI’s link analysis and filtering options help extract only relevant links, enabling more targeted and efficient crawls.

2. **Automatic Link Classification**:
   - Crawl4AI categorizes links automatically into internal, external, and social media links.
   - **Example**:
     ```python
     result = await crawler.arun(url="https://example.com")

     # Access internal and external links
     internal_links = result.links["internal"]
     external_links = result.links["external"]

     # Print first few links for each type
     print("Internal Links:", internal_links[:3])
     print("External Links:", external_links[:3])
     ```

3. **Filtering Out Unwanted Links**:
   - **Exclude External Links**: Remove all links pointing to external sites.
   - **Exclude Social Media Links**: Filter out social media domains like Facebook or Twitter.
   - **Example**:
     ```python
     result = await crawler.arun(
         url="https://example.com",
         exclude_external_links=True,     # Remove external links
         exclude_social_media_links=True  # Remove social media links
     )
     ```

4. **Custom Domain Filtering**:
   - **Exclude Specific Domains**: Filter links from particular domains, e.g., ad sites.
   - **Custom Social Media Domains**: Add additional social media domains if needed.
   - **Example**:
     ```python
     result = await crawler.arun(
         url="https://example.com",
         exclude_domains=["ads.com", "trackers.com"],
         exclude_social_media_domains=["facebook.com", "linkedin.com"]
     )
     ```

5. **Accessing Link Context and Metadata**:
   - Crawl4AI provides additional metadata for each link, including its text, type (e.g., navigation or content), and surrounding context.
   - **Example**:
     ```python
     for link in result.links["internal"]:
         print(f"Link: {link['href']}, Text: {link['text']}, Context: {link['context']}")
     ```
   - **Use Case**: Helps users understand the relevance of links based on where they are placed on the page (e.g., navigation vs. article content).
6. **Example of Comprehensive Link Filtering and Analysis**:
   - Full example combining link filtering, metadata access, and contextual information:
     ```python
     async with AsyncWebCrawler() as crawler:
         result = await crawler.arun(
             url="https://example.com",
             exclude_external_links=True,
             exclude_social_media_links=True,
             exclude_domains=["ads.com"],
             css_selector=".main-content"  # Focus only on main content area
         )
         for link in result.links["internal"]:
             print(f"Internal Link: {link['href']}, Text: {link['text']}, Context: {link['context']}")
     ```
   - This example filters unnecessary links, keeping only internal and relevant links from the main content area.

7. **Wrap Up & Next Steps**:
   - Summarize the benefits of link filtering for efficient crawling and relevant content extraction.
   - Tease the next video: **Custom Headers, Identity Management, and User Simulation** to explain how to configure identity settings and simulate user behavior for stealthier crawls.

---

This outline provides a practical overview of Crawl4AI’s link analysis and filtering features, helping users target only essential links while eliminating distractions.
\ No newline at end of file
diff --git a/docs/md_v2/tutorial/episode_10_Custom_Headers,_Identity,_and_User_Simulation.md b/docs/md_v2/tutorial/episode_10_Custom_Headers,_Identity,_and_User_Simulation.md
new file mode 100644
index 00000000..92af4f2e
--- /dev/null
+++ b/docs/md_v2/tutorial/episode_10_Custom_Headers,_Identity,_and_User_Simulation.md
@@ -0,0 +1,86 @@

# Crawl4AI

## Episode 10: Custom Headers, Identity, and User Simulation

### Quick Intro
Teach how to use custom headers, user-agent strings, and simulate real user interactions. Demo: Set custom user-agent and headers to access a site that blocks typical crawlers.

Here’s a concise outline for the **Custom Headers, Identity Management, and User Simulation** video:

---

### **Custom Headers, Identity Management, & User Simulation**
**Why Customize Headers and Identity in Crawling**: + - Websites often track request headers and browser properties to detect bots. Customizing headers and managing identity help make requests appear more human, improving access to restricted sites. + +2. **Setting Custom Headers**: + - Customize HTTP headers to mimic genuine browser requests or meet site-specific requirements: + ```python + headers = { + "Accept-Language": "en-US,en;q=0.9", + "X-Requested-With": "XMLHttpRequest", + "Cache-Control": "no-cache" + } + crawler = AsyncWebCrawler(headers=headers) + ``` + - **Use Case**: Customize the `Accept-Language` header to simulate local user settings, or `Cache-Control` to bypass cache for fresh content. + +3. **Setting a Custom User Agent**: + - Some websites block requests from common crawler user agents. Setting a custom user agent string helps bypass these restrictions: + ```python + crawler = AsyncWebCrawler( + user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36" + ) + ``` + - **Tip**: Use user-agent strings from popular browsers (e.g., Chrome, Firefox) to improve access and reduce detection risks. + +4. **User Simulation for Human-like Behavior**: + - Enable `simulate_user=True` to mimic natural user interactions, such as random timing and simulated mouse movements: + ```python + result = await crawler.arun( + url="https://example.com", + simulate_user=True # Simulates human-like behavior + ) + ``` + - **Behavioral Effects**: Adds subtle variations in interactions, making the crawler harder to detect on bot-protected sites. + +5. 
**Navigator Overrides and Magic Mode for Full Identity Masking**: + - Use `override_navigator=True` to mask automation indicators like `navigator.webdriver`, which websites check to detect bots: + ```python + result = await crawler.arun( + url="https://example.com", + override_navigator=True # Masks bot-related signals + ) + ``` + - **Combining with Magic Mode**: For a complete anti-bot setup, combine these identity options with `magic=True` for maximum protection: + ```python + async with AsyncWebCrawler() as crawler: + result = await crawler.arun( + url="https://example.com", + magic=True, # Enables all anti-bot detection features + user_agent="Custom-Agent", # Custom agent with Magic Mode + ) + ``` + - This setup includes all anti-detection techniques like navigator masking, random timing, and user simulation. + +6. **Example: Comprehensive Setup for Identity Management**: + - A full example combining custom headers, user-agent, and user simulation for a realistic browsing profile: + ```python + async with AsyncWebCrawler( + headers={"Accept-Language": "en-US", "Cache-Control": "no-cache"}, + user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0", + simulate_user=True + ) as crawler: + result = await crawler.arun(url="https://example.com/secure-page") + print(result.markdown[:500]) # Display extracted content + ``` + - This example enables detailed customization for evading detection and accessing protected pages smoothly. + +7. **Wrap Up & Next Steps**: + - Recap the value of headers, user-agent customization, and simulation in bypassing bot detection. + - Tease the next video: **Extraction Strategies: JSON CSS, LLM, and Cosine** to dive into structured data extraction methods for high-quality content retrieval. + +--- + +This outline equips users with tools for managing crawler identity and human-like behavior, essential for accessing bot-protected or restricted websites. 
\ No newline at end of file diff --git a/docs/md_v2/tutorial/episode_11_1_Extraction_Strategies:_JSON_CSS.md b/docs/md_v2/tutorial/episode_11_1_Extraction_Strategies:_JSON_CSS.md new file mode 100644 index 00000000..a8a357af --- /dev/null +++ b/docs/md_v2/tutorial/episode_11_1_Extraction_Strategies:_JSON_CSS.md @@ -0,0 +1,186 @@ +Here’s a detailed outline for the **JSON-CSS Extraction Strategy** video, covering all key aspects and supported structures in Crawl4AI: + +--- + +### **10.1 JSON-CSS Extraction Strategy** + +#### **1. Introduction to JSON-CSS Extraction** + - JSON-CSS Extraction is used for pulling structured data from pages with repeated patterns, like product listings, article feeds, or directories. + - This strategy allows defining a schema with CSS selectors and data fields, making it easy to capture nested, list-based, or singular elements. + +#### **2. Basic Schema Structure** + - **Schema Fields**: The schema has two main components: + - `baseSelector`: A CSS selector to locate the main elements you want to extract (e.g., each article or product block). + - `fields`: Defines the data fields for each element, supporting various data types and structures. + +#### **3. Simple Field Extraction** + - **Example HTML**: + ```html +
+ <div class="product">
+   <h2 class="title">Sample Product</h2>
+   <span class="price">$19.99</span>
+   <p class="description">This is a sample product.</p>
+ </div>
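+ <!-- Hypothetical second block (not from the original example): JSON-CSS extraction
+      produces one result object per baseSelector match, so a page with several
+      .product elements yields a list of extracted objects -->
+ <div class="product">
+   <h2 class="title">Another Product</h2>
+   <span class="price">$24.99</span>
+   <p class="description">A second sample product.</p>
+ </div>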
+ ``` + - **Schema**: + ```python + schema = { + "baseSelector": ".product", + "fields": [ + {"name": "title", "selector": ".title", "type": "text"}, + {"name": "price", "selector": ".price", "type": "text"}, + {"name": "description", "selector": ".description", "type": "text"} + ] + } + ``` + - **Explanation**: Each field captures text content from specified CSS selectors within each `.product` element. + +#### **4. Supported Field Types: Text, Attribute, HTML, Regex** + - **Field Type Options**: + - `text`: Extracts visible text. + - `attribute`: Captures an HTML attribute (e.g., `src`, `href`). + - `html`: Extracts the raw HTML of an element. + - `regex`: Allows regex patterns to extract part of the text. + + - **Example HTML** (including an image): + ```html +
+ <div class="product">
+   <h2 class="title">Sample Product</h2>
+   <img class="product-image" src="product.jpg" alt="Product Image">
+   <span class="price">$19.99</span>
+   <p class="description">Limited time offer.</p>
+ </div>
+ ``` + - **Schema**: + ```python + schema = { + "baseSelector": ".product", + "fields": [ + {"name": "title", "selector": ".title", "type": "text"}, + {"name": "image_url", "selector": ".product-image", "type": "attribute", "attribute": "src"}, + {"name": "price", "selector": ".price", "type": "regex", "pattern": r"\$(\d+\.\d+)"}, + {"name": "description_html", "selector": ".description", "type": "html"} + ] + } + ``` + - **Explanation**: + - `attribute`: Extracts the `src` attribute from `.product-image`. + - `regex`: Extracts the numeric part from `$19.99`. + - `html`: Retrieves the full HTML of the description element. + +#### **5. Nested Field Extraction** + - **Use Case**: Useful when content contains sub-elements, such as an article with author details within it. + - **Example HTML**: + ```html +
+ <div class="article">
+   <h1 class="title">Sample Article</h1>
+   <div class="author">
+     <span class="name">John Doe</span>
+     <p class="bio">Writer and editor</p>
+   </div>
+ </div>
+ ``` + - **Schema**: + ```python + schema = { + "baseSelector": ".article", + "fields": [ + {"name": "title", "selector": ".title", "type": "text"}, + {"name": "author", "type": "nested", "selector": ".author", "fields": [ + {"name": "name", "selector": ".name", "type": "text"}, + {"name": "bio", "selector": ".bio", "type": "text"} + ]} + ] + } + ``` + - **Explanation**: + - `nested`: Extracts `name` and `bio` within `.author`, grouping the author details in a single `author` object. + +#### **6. List and Nested List Extraction** + - **List**: Extracts multiple elements matching the selector as a list. + - **Nested List**: Allows lists within lists, useful for items with sub-lists (e.g., specifications for each product). + - **Example HTML**: + ```html +
+ <div class="product">
+   <h2 class="title">Product with Features</h2>
+   <ul class="features">
+     <li class="feature">Feature 1</li>
+     <li class="feature">Feature 2</li>
+     <li class="feature">Feature 3</li>
+   </ul>
+ </div>
+ ``` + - **Schema**: + ```python + schema = { + "baseSelector": ".product", + "fields": [ + {"name": "title", "selector": ".title", "type": "text"}, + {"name": "features", "type": "list", "selector": ".features .feature", "fields": [ + {"name": "feature", "type": "text"} + ]} + ] + } + ``` + - **Explanation**: + - `list`: Captures each `.feature` item within `.features`, outputting an array of features under the `features` field. + +#### **7. Transformations for Field Values** + - Transformations allow you to modify extracted values (e.g., converting to lowercase). + - Supported transformations: `lowercase`, `uppercase`, `strip`. + - **Example HTML**: + ```html +
+ <div class="product">
+   <h2 class="title">Special Product</h2>
+ </div>
+ ``` + - **Schema**: + ```python + schema = { + "baseSelector": ".product", + "fields": [ + {"name": "title", "selector": ".title", "type": "text", "transform": "uppercase"} + ] + } + ``` + - **Explanation**: The `transform` property changes the `title` to uppercase, useful for standardized outputs. + +#### **8. Full JSON-CSS Extraction Example** + - Combining all elements in a single schema example for a comprehensive crawl: + - **Example HTML**: + ```html +
+ <div class="product">
+   <h2 class="title">Featured Product</h2>
+   <img class="product-image" src="featured.jpg" alt="Featured Product">
+   <span class="price">$99.99</span>
+   <p class="description">Best product of the year.</p>
+   <ul class="features">
+     <li class="feature">Durable</li>
+     <li class="feature">Eco-friendly</li>
+   </ul>
+ </div>
+ ``` + - **Schema**: + ```python + schema = { + "baseSelector": ".product", + "fields": [ + {"name": "title", "selector": ".title", "type": "text", "transform": "uppercase"}, + {"name": "image_url", "selector": ".product-image", "type": "attribute", "attribute": "src"}, + {"name": "price", "selector": ".price", "type": "regex", "pattern": r"\$(\d+\.\d+)"}, + {"name": "description", "selector": ".description", "type": "html"}, + {"name": "features", "type": "list", "selector": ".features .feature", "fields": [ + {"name": "feature", "type": "text"} + ]} + ] + } + ``` + - **Explanation**: This schema captures and transforms each aspect of the product, illustrating the JSON-CSS strategy’s versatility for structured extraction. + +#### **9. Wrap Up & Next Steps** + - Summarize JSON-CSS Extraction’s flexibility for structured, pattern-based extraction. + - Tease the next video: **10.2 LLM Extraction Strategy**, focusing on using language models to extract data based on intelligent content analysis. + +--- + +This outline covers each JSON-CSS Extraction option in Crawl4AI, with practical examples and schema configurations, making it a thorough guide for users. \ No newline at end of file diff --git a/docs/md_v2/tutorial/episode_11_2_Extraction_Strategies:_LLM.md b/docs/md_v2/tutorial/episode_11_2_Extraction_Strategies:_LLM.md new file mode 100644 index 00000000..900c32f2 --- /dev/null +++ b/docs/md_v2/tutorial/episode_11_2_Extraction_Strategies:_LLM.md @@ -0,0 +1,153 @@ +# Crawl4AI + +## Episode 11: Extraction Strategies: JSON CSS, LLM, and Cosine + +### Quick Intro +Introduce JSON CSS Extraction Strategy for structured data, LLM Extraction Strategy for intelligent parsing, and Cosine Strategy for clustering similar content. Demo: Use JSON CSS to scrape product details from an e-commerce site. + +Here’s a comprehensive outline for the **LLM Extraction Strategy** video, covering key details and example applications. 
+ +--- + +### **10.2 LLM Extraction Strategy** + +#### **1. Introduction to LLM Extraction Strategy** + - The LLM Extraction Strategy leverages language models to interpret and extract structured data from complex web content. + - Unlike traditional CSS selectors, this strategy uses natural language instructions and schemas to guide the extraction, ideal for unstructured or diverse content. + - Supports **OpenAI**, **Azure OpenAI**, **HuggingFace**, and **Ollama** models, enabling flexibility with both proprietary and open-source providers. + +#### **2. Key Components of LLM Extraction Strategy** + - **Provider**: Specifies the LLM provider (e.g., OpenAI, HuggingFace, Azure). + - **API Token**: Required for most providers, except Ollama (local LLM model). + - **Instruction**: Custom extraction instructions sent to the model, providing flexibility in how the data is structured and extracted. + - **Schema**: Optional, defines structured fields to organize extracted data into JSON format. + - **Extraction Type**: Supports `"block"` for simpler text blocks or `"schema"` when a structured output format is required. + - **Chunking Parameters**: Breaks down large documents, with options to adjust chunk size and overlap rate for more accurate extraction across lengthy texts. + +#### **3. Basic Extraction Example: OpenAI Model Pricing** + - **Goal**: Extract model names and their input and output fees from the OpenAI pricing page. + - **Schema Definition**: + - **Model Name**: Text for model identification. + - **Input Fee**: Token cost for input processing. + - **Output Fee**: Token cost for output generation. 
+ + - **Schema**: + ```python + class OpenAIModelFee(BaseModel): + model_name: str = Field(..., description="Name of the OpenAI model.") + input_fee: str = Field(..., description="Fee for input token for the OpenAI model.") + output_fee: str = Field(..., description="Fee for output token for the OpenAI model.") + ``` + + - **Example Code**: + ```python + async def extract_openai_pricing(): + async with AsyncWebCrawler() as crawler: + result = await crawler.arun( + url="https://openai.com/api/pricing/", + extraction_strategy=LLMExtractionStrategy( + provider="openai/gpt-4o", + api_token=os.getenv("OPENAI_API_KEY"), + schema=OpenAIModelFee.schema(), + extraction_type="schema", + instruction="Extract model names and fees for input and output tokens from the page." + ), + bypass_cache=True + ) + print(result.extracted_content) + ``` + + - **Explanation**: + - The extraction strategy combines a schema and detailed instruction to guide the LLM in capturing structured data. + - Each model’s name, input fee, and output fee are extracted in a JSON format. + +#### **4. Knowledge Graph Extraction Example** + - **Goal**: Extract entities and their relationships from a document for use in a knowledge graph. + - **Schema Definition**: + - **Entities**: Individual items with descriptions (e.g., people, organizations). + - **Relationships**: Connections between entities, including descriptions and relationship types. 
+ + - **Schema**: + ```python + class Entity(BaseModel): + name: str + description: str + + class Relationship(BaseModel): + entity1: Entity + entity2: Entity + description: str + relation_type: str + + class KnowledgeGraph(BaseModel): + entities: List[Entity] + relationships: List[Relationship] + ``` + + - **Example Code**: + ```python + async def extract_knowledge_graph(): + extraction_strategy = LLMExtractionStrategy( + provider="azure/gpt-4o-mini", + api_token=os.getenv("AZURE_API_KEY"), + schema=KnowledgeGraph.schema(), + extraction_type="schema", + instruction="Extract entities and relationships from the content to build a knowledge graph." + ) + async with AsyncWebCrawler() as crawler: + result = await crawler.arun( + url="https://example.com/some-article", + extraction_strategy=extraction_strategy, + bypass_cache=True + ) + print(result.extracted_content) + ``` + + - **Explanation**: + - In this setup, the LLM extracts entities and their relationships based on the schema and instruction. + - The schema organizes results into a JSON-based knowledge graph format. + +#### **5. Key Settings in LLM Extraction** + - **Chunking Options**: + - For long pages, set `chunk_token_threshold` to specify maximum token count per section. + - Adjust `overlap_rate` to control the overlap between chunks, useful for contextual consistency. + - **Example**: + ```python + extraction_strategy = LLMExtractionStrategy( + provider="openai/gpt-4", + api_token=os.getenv("OPENAI_API_KEY"), + chunk_token_threshold=3000, + overlap_rate=0.2, # 20% overlap between chunks + instruction="Extract key insights and relationships." + ) + ``` + - This setup ensures that longer texts are divided into manageable chunks with slight overlap, enhancing the quality of extraction. + +#### **6. Flexible Provider Options for LLM Extraction** + - **Using Proprietary Models**: OpenAI, Azure, and HuggingFace provide robust language models, often suited for complex or detailed extractions. 
+ - **Using Open-Source Models**: Ollama and other open-source models can be deployed locally, suitable for offline or cost-effective extraction. + - **Example Call**: + ```python + await extract_structured_data_using_llm("huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct", os.getenv("HUGGINGFACE_API_KEY")) + await extract_structured_data_using_llm("openai/gpt-4o", os.getenv("OPENAI_API_KEY")) + await extract_structured_data_using_llm("ollama/llama3.2") + ``` + +#### **7. Complete Example of LLM Extraction Setup** + - Code to run both the OpenAI pricing and Knowledge Graph extractions, using various providers: + ```python + async def main(): + await extract_openai_pricing() + await extract_knowledge_graph() + + if __name__ == "__main__": + asyncio.run(main()) + ``` + +#### **8. Wrap Up & Next Steps** + - Recap the power of LLM extraction for handling unstructured or complex data extraction tasks. + - Tease the next video: **10.3 Cosine Similarity Strategy** for clustering similar content based on semantic similarity. + +--- + +This outline explains LLM Extraction in Crawl4AI, with examples showing how to extract structured data using custom schemas and instructions. It demonstrates flexibility with multiple providers, ensuring practical application for different use cases. \ No newline at end of file diff --git a/docs/md_v2/tutorial/episode_11_3_Extraction_Strategies:_Cosine.md b/docs/md_v2/tutorial/episode_11_3_Extraction_Strategies:_Cosine.md new file mode 100644 index 00000000..61e210e4 --- /dev/null +++ b/docs/md_v2/tutorial/episode_11_3_Extraction_Strategies:_Cosine.md @@ -0,0 +1,136 @@ +# Crawl4AI + +## Episode 11: Extraction Strategies: JSON CSS, LLM, and Cosine + +### Quick Intro +Introduce JSON CSS Extraction Strategy for structured data, LLM Extraction Strategy for intelligent parsing, and Cosine Strategy for clustering similar content. Demo: Use JSON CSS to scrape product details from an e-commerce site. 
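The Cosine Strategy above rests on one simple piece of math: cosine similarity is the normalized dot product of two embedding vectors, scoring near 1 when they point the same way and near 0 when they are unrelated. A minimal, dependency-free sketch of that computation (illustrative only; Crawl4AI applies it to real sentence-transformer embeddings, not toy vectors):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: dot product of the vectors divided by the product of their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": parallel vectors score ~1.0, orthogonal vectors score 0.0
print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # ~1.0 (same direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))            # 0.0 (orthogonal)
```

Because the score depends only on direction, a long and a short section about the same topic can still land in the same cluster.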
+ +Here’s a structured outline for the **Cosine Similarity Strategy** video, covering key concepts, configuration, and a practical example. + +--- + +### **10.3 Cosine Similarity Strategy** + +#### **1. Introduction to Cosine Similarity Strategy** + - The Cosine Similarity Strategy clusters content by semantic similarity, offering an efficient alternative to LLM-based extraction, especially when speed is a priority. + - Ideal for grouping similar sections of text, this strategy is well-suited for pages with content sections that may need to be classified or tagged, like news articles, product descriptions, or reviews. + +#### **2. Key Configuration Options** + - **semantic_filter**: A keyword-based filter to focus on relevant content. + - **word_count_threshold**: Minimum number of words per cluster, filtering out shorter, less meaningful clusters. + - **max_dist**: Maximum allowable distance between elements in clusters, impacting cluster tightness. + - **linkage_method**: Method for hierarchical clustering, such as `'ward'` (for well-separated clusters). + - **top_k**: Specifies the number of top categories for each cluster. + - **model_name**: Defines the model for embeddings, such as `sentence-transformers/all-MiniLM-L6-v2`. + - **sim_threshold**: Minimum similarity threshold for filtering, allowing control over cluster relevance. + +#### **3. How Cosine Similarity Clustering Works** + - **Step 1**: Embeddings are generated for each text section, transforming them into vectors that capture semantic meaning. + - **Step 2**: Hierarchical clustering groups similar sections based on cosine similarity, forming clusters with related content. + - **Step 3**: Clusters are filtered based on word count, removing those below the `word_count_threshold`. + - **Step 4**: Each cluster is then categorized with tags, if enabled, providing context to each grouped content section. + +#### **4. 
Example Use Case: Clustering Blog Article Sections** + - **Goal**: Group related sections of a blog or news page to identify distinct topics or discussion areas. + - **Example HTML Sections**: + ```text + "The economy is showing signs of recovery, with markets up this quarter.", + "In the sports world, several major teams are preparing for the upcoming season.", + "New advancements in AI technology are reshaping the tech landscape.", + "Market analysts are optimistic about continued growth in tech stocks." + ``` + + - **Code Setup**: + ```python + async def extract_blog_sections(): + extraction_strategy = CosineStrategy( + word_count_threshold=15, + max_dist=0.3, + sim_threshold=0.2, + model_name="sentence-transformers/all-MiniLM-L6-v2", + top_k=2 + ) + async with AsyncWebCrawler() as crawler: + url = "https://example.com/blog-page" + result = await crawler.arun( + url=url, + extraction_strategy=extraction_strategy, + bypass_cache=True + ) + print(result.extracted_content) + ``` + + - **Explanation**: + - **word_count_threshold**: Ensures only clusters with meaningful content are included. + - **sim_threshold**: Filters out clusters with low similarity, focusing on closely related sections. + - **top_k**: Selects top tags, useful for identifying main topics. + +#### **5. Applying Semantic Filtering with Cosine Similarity** + - **Semantic Filter**: Filters sections based on relevance to a specific keyword, such as “technology” for tech articles. + - **Example Code**: + ```python + extraction_strategy = CosineStrategy( + semantic_filter="technology", + word_count_threshold=10, + max_dist=0.25, + model_name="sentence-transformers/all-MiniLM-L6-v2" + ) + ``` + - **Explanation**: + - **semantic_filter**: Only sections with high similarity to the “technology” keyword will be included in the clustering, making it easy to focus on specific topics within a mixed-content page. + +#### **6. 
Clustering Product Reviews by Similarity** + - **Goal**: Organize product reviews by themes, such as “price,” “quality,” or “durability.” + - **Example Reviews**: + ```text + "The quality of this product is outstanding and well worth the price.", + "I found the product to be durable but a bit overpriced.", + "Great value for the money and long-lasting.", + "The build quality is good, but I expected a lower price point." + ``` + + - **Code Setup**: + ```python + async def extract_product_reviews(): + extraction_strategy = CosineStrategy( + word_count_threshold=20, + max_dist=0.35, + sim_threshold=0.25, + model_name="sentence-transformers/all-MiniLM-L6-v2" + ) + async with AsyncWebCrawler() as crawler: + url = "https://example.com/product-reviews" + result = await crawler.arun( + url=url, + extraction_strategy=extraction_strategy, + bypass_cache=True + ) + print(result.extracted_content) + ``` + + - **Explanation**: + - This configuration clusters similar reviews, grouping feedback by common themes, helping businesses understand customer sentiments around particular product aspects. + +#### **7. Performance Advantages of Cosine Strategy** + - **Speed**: The Cosine Similarity Strategy is faster than LLM-based extraction, as it doesn’t rely on API calls to external LLMs. + - **Local Processing**: The strategy runs locally with pre-trained sentence embeddings, ideal for high-throughput scenarios where cost and latency are concerns. + - **Comparison**: With a well-optimized local model, this method can perform clustering on large datasets quickly, making it suitable for tasks requiring rapid, repeated analysis. + +#### **8. Full Code Example for Clustering News Articles** + - **Code**: + ```python + async def main(): + await extract_blog_sections() + await extract_product_reviews() + + if __name__ == "__main__": + asyncio.run(main()) + ``` + +#### **9. 
Wrap Up & Next Steps** + - Recap the efficiency and effectiveness of Cosine Similarity for clustering related content quickly. + - Close with a reminder of Crawl4AI’s flexibility across extraction strategies, and prompt users to experiment with different settings to optimize clustering for their specific content. + +--- + +This outline covers Cosine Similarity Strategy’s speed and effectiveness, providing examples that showcase its potential for clustering various content types efficiently. \ No newline at end of file diff --git a/docs/md_v2/tutorial/episode_12_Session-Based_Crawling_for_Dynamic_Websites.md b/docs/md_v2/tutorial/episode_12_Session-Based_Crawling_for_Dynamic_Websites.md new file mode 100644 index 00000000..d1ab813d --- /dev/null +++ b/docs/md_v2/tutorial/episode_12_Session-Based_Crawling_for_Dynamic_Websites.md @@ -0,0 +1,140 @@ +# Crawl4AI + +## Episode 12: Session-Based Crawling for Dynamic Websites + +### Quick Intro +Show session management for handling websites with multiple pages or actions (like “load more” buttons). Demo: Crawl a paginated content page, persisting session data across multiple requests. + +Here’s a detailed outline for the **Session-Based Crawling for Dynamic Websites** video, explaining why sessions are necessary, how to use them, and providing practical examples and a visual diagram to illustrate the concept. + +--- + +### **11. Session-Based Crawling for Dynamic Websites** + +#### **1. Introduction to Session-Based Crawling** + - **What is Session-Based Crawling**: Session-based crawling maintains a continuous browsing session across multiple page states, allowing the crawler to interact with a page and retrieve content that loads dynamically or based on user interactions. + - **Why It’s Needed**: + - In static pages, all content is available directly from a single URL. + - In dynamic websites, content often loads progressively or based on user actions (e.g., clicking “load more,” submitting forms, scrolling). 
+ - Session-based crawling helps simulate user actions, capturing content that is otherwise hidden until specific actions are taken. + +#### **2. Conceptual Diagram for Session-Based Crawling** + + ```mermaid + graph TD + Start[Start Session] --> S1["Initial State (S1)"] + S1 -->|Crawl| Content1[Extract Content S1] + S1 -->|Action: Click Load More| S2[State S2] + S2 -->|Crawl| Content2[Extract Content S2] + S2 -->|Action: Scroll Down| S3[State S3] + S3 -->|Crawl| Content3[Extract Content S3] + S3 -->|Action: Submit Form| S4[Final State] + S4 -->|Crawl| Content4[Extract Content S4] + Content4 --> End[End Session] + ``` + + - **Explanation of Diagram**: + - **Start**: Initializes the session and opens the starting URL. + - **State Transitions**: Each action (e.g., clicking “load more,” scrolling) transitions to a new state, where additional content becomes available. + - **Session Persistence**: Keeps the same browsing session active, preserving the state and allowing for a sequence of actions to unfold. + - **End**: After reaching the final state, the session ends, and all accumulated content has been extracted. + +#### **3. Key Components of Session-Based Crawling in Crawl4AI** + - **Session ID**: A unique identifier to maintain the state across requests, allowing the crawler to “remember” previous actions. + - **JavaScript Execution**: Executes JavaScript commands (e.g., clicks, scrolls) to simulate interactions. + - **Wait Conditions**: Ensures the crawler waits for content to load in each state before moving on. + - **Sequential State Transitions**: By defining actions and wait conditions between states, the crawler can navigate through the page as a user would. + +#### **4. Basic Session Example: Multi-Step Content Loading** + - **Goal**: Crawl an article feed that requires several “load more” clicks to display additional content.
+ - **Code**: + ```python + async def crawl_article_feed(): + async with AsyncWebCrawler() as crawler: + session_id = "feed_session" + + for page in range(3): + result = await crawler.arun( + url="https://example.com/articles", + session_id=session_id, + js_code="document.querySelector('.load-more-button').click();" if page > 0 else None, + wait_for="css:.article", + css_selector=".article" # Target article elements + ) + print(f"Page {page + 1}: Extracted {len(result.extracted_content)} articles") + ``` + - **Explanation**: + - **session_id**: Ensures all requests share the same browsing state. + - **js_code**: Clicks the “load more” button after the initial page load, expanding content on each iteration. + - **wait_for**: Ensures articles have loaded after each click before extraction. + +#### **5. Advanced Example: E-Commerce Product Search with Filter Selection** + - **Goal**: Interact with filters on an e-commerce page to extract products based on selected criteria. + - **Example Steps**: + 1. **State 1**: Load the main product page. + 2. **State 2**: Apply a filter (e.g., “On Sale”) by selecting a checkbox. + 3. **State 3**: Scroll to load additional products and capture updated results. 
+ + - **Code**: + ```python + async def extract_filtered_products(): + async with AsyncWebCrawler() as crawler: + session_id = "product_session" + + # Step 1: Open product page + result = await crawler.arun( + url="https://example.com/products", + session_id=session_id, + wait_for="css:.product-item" + ) + + # Step 2: Apply filter (e.g., "On Sale") + result = await crawler.arun( + url="https://example.com/products", + session_id=session_id, + js_code="document.querySelector('#sale-filter-checkbox').click();", + wait_for="css:.product-item" + ) + + # Step 3: Scroll to load additional products + for _ in range(2): # Scroll down twice + result = await crawler.arun( + url="https://example.com/products", + session_id=session_id, + js_code="window.scrollTo(0, document.body.scrollHeight);", + wait_for="css:.product-item" + ) + print(f"Loaded {len(result.extracted_content)} products after scroll") + ``` + - **Explanation**: + - **State Persistence**: Each action (filter selection and scroll) builds on the previous session state. + - **Multiple Interactions**: Combines clicking a filter with scrolling, demonstrating how the session preserves these actions. + +#### **6. Key Benefits of Session-Based Crawling** + - **Accessing Hidden Content**: Retrieves data that loads only after user actions. + - **Simulating User Behavior**: Handles interactive elements such as “load more” buttons, dropdowns, and filters. + - **Maintaining Continuity Across States**: Enables a sequential process, moving logically from one state to the next, capturing all desired content without reloading the initial state each time. + +#### **7. Additional Configuration Tips** + - **Manage Session End**: Always conclude the session after the final state to release resources. + - **Optimize with Wait Conditions**: Use `wait_for` to ensure complete loading before each extraction. 
+ - **Handling Errors in Session-Based Crawling**: Include error handling for interactions that may fail, ensuring robustness across state transitions. + +#### **8. Complete Code Example: Multi-Step Session Workflow** + - **Example**: + ```python + async def main(): + await crawl_article_feed() + await extract_filtered_products() + + if __name__ == "__main__": + asyncio.run(main()) + ``` + +#### **9. Wrap Up & Next Steps** + - Recap the usefulness of session-based crawling for dynamic content extraction. + - Tease the next video: **Hooks and Custom Workflow with AsyncWebCrawler** to cover advanced customization options for further control over the crawling process. + +--- + +This outline covers session-based crawling from both a conceptual and practical perspective, helping users understand its importance, configure it effectively, and use it to handle complex dynamic content. \ No newline at end of file diff --git a/docs/md_v2/tutorial/episode_13_Chunking_Strategies_for_Large_Text_Processing.md b/docs/md_v2/tutorial/episode_13_Chunking_Strategies_for_Large_Text_Processing.md new file mode 100644 index 00000000..eda07e8b --- /dev/null +++ b/docs/md_v2/tutorial/episode_13_Chunking_Strategies_for_Large_Text_Processing.md @@ -0,0 +1,138 @@ +# Crawl4AI + +## Episode 13: Chunking Strategies for Large Text Processing + +### Quick Intro +Explain Regex, NLP, and Fixed-Length chunking, and when to use each. Demo: Chunk a large article or document for processing by topics or sentences. + +Here’s a structured outline for the **Chunking Strategies for Large Text Processing** video, explaining each strategy, when to use it, and how chunking works within extraction to support effective data aggregation. + +--- + +### **12. Chunking Strategies for Large Text Processing** + +#### **1.
Introduction to Chunking in Crawl4AI** + - **What is Chunking**: Chunking is the process of dividing large text into manageable sections or “chunks,” enabling efficient processing in extraction tasks. + - **Why It’s Needed**: + - When processing large text, feeding it directly into an extraction function (like `F(x)`) can overwhelm memory or token limits. + - Chunking breaks down `x` (the text) into smaller pieces, which are processed sequentially or in parallel by the extraction function, with the final result being an aggregation of all chunks’ processed output. + +#### **2. Key Chunking Strategies and Use Cases** + - Crawl4AI offers various chunking strategies to suit different text structures, chunk sizes, and processing requirements. + - **Choosing a Strategy**: Select based on the type of text (e.g., articles, transcripts) and extraction needs (e.g., simple splitting or context-sensitive processing). + +#### **3. Strategy 1: Regex-Based Chunking** + - **Description**: Uses regular expressions to split text based on specified patterns (e.g., paragraphs or section breaks). + - **Use Case**: Ideal for dividing text by paragraphs or larger logical blocks where sections are clearly separated by line breaks or punctuation. + - **Example**: + - **Pattern**: `r'\n\n'` for double line breaks. + ```python + chunker = RegexChunking(patterns=[r'\n\n']) + text_chunks = chunker.chunk(long_text) + print(text_chunks) # Output: List of paragraphs + ``` + - **Pros**: Flexible for pattern-based chunking. + - **Cons**: Limited to text with consistent formatting. + +#### **4. Strategy 2: NLP Sentence-Based Chunking** + - **Description**: Uses NLP to split text by sentences, ensuring grammatically complete segments. + - **Use Case**: Useful for extracting individual statements, such as in news articles, quotes, or legal text. 
+ - **Example**: + ```python + chunker = NlpSentenceChunking() + sentence_chunks = chunker.chunk(long_text) + print(sentence_chunks) # Output: List of sentences + ``` + - **Pros**: Maintains sentence structure, ideal for tasks needing semantic completeness. + - **Cons**: May create very small chunks, which could limit contextual extraction. + +#### **5. Strategy 3: Topic-Based Segmentation Using TextTiling** + - **Description**: Segments text into topics using TextTiling, identifying topic shifts and key segments. + - **Use Case**: Ideal for long articles, reports, or essays where each section covers a different topic. + - **Example**: + ```python + chunker = TopicSegmentationChunking(num_keywords=3) + topic_chunks = chunker.chunk_with_topics(long_text) + print(topic_chunks) # Output: List of topic segments with keywords + ``` + - **Pros**: Groups related content, preserving topical coherence. + - **Cons**: Depends on identifiable topic shifts, which may not be present in all texts. + +#### **6. Strategy 4: Fixed-Length Word Chunking** + - **Description**: Splits text into chunks based on a fixed number of words. + - **Use Case**: Ideal for text where exact segment size is required, such as processing word-limited documents for LLMs. + - **Example**: + ```python + chunker = FixedLengthWordChunking(chunk_size=100) + word_chunks = chunker.chunk(long_text) + print(word_chunks) # Output: List of 100-word chunks + ``` + - **Pros**: Ensures uniform chunk sizes, suitable for token-based extraction limits. + - **Cons**: May split sentences, affecting semantic coherence. + +#### **7. Strategy 5: Sliding Window Chunking** + - **Description**: Uses a fixed window size with a step, creating overlapping chunks to maintain context. + - **Use Case**: Useful for maintaining context across sections, as with documents where context is needed for neighboring sections. 
+ - **Example**: + ```python + chunker = SlidingWindowChunking(window_size=100, step=50) + window_chunks = chunker.chunk(long_text) + print(window_chunks) # Output: List of overlapping word chunks + ``` + - **Pros**: Retains context across adjacent chunks, ideal for complex semantic extraction. + - **Cons**: Overlap increases data size, potentially impacting processing time. + +#### **8. Strategy 6: Overlapping Window Chunking** + - **Description**: Similar to sliding windows but with a defined overlap, allowing chunks to share content at the edges. + - **Use Case**: Suitable for handling long texts with essential overlapping information, like research articles or medical records. + - **Example**: + ```python + chunker = OverlappingWindowChunking(window_size=1000, overlap=100) + overlap_chunks = chunker.chunk(long_text) + print(overlap_chunks) # Output: List of overlapping chunks with defined overlap + ``` + - **Pros**: Allows controlled overlap for consistent content coverage across chunks. + - **Cons**: Redundant data in overlapping areas may increase computation. + +#### **9. Practical Example: Using Chunking with an Extraction Strategy** + - **Goal**: Combine chunking with an extraction strategy to process large text effectively. 
+ - **Example Code**: + ```python + from crawl4ai.extraction_strategy import LLMExtractionStrategy + + async def extract_large_text(): + # Initialize chunker and extraction strategy + chunker = FixedLengthWordChunking(chunk_size=200) + extraction_strategy = LLMExtractionStrategy(provider="openai/gpt-4", api_token="your_api_token") + + # Split text into chunks + text_chunks = chunker.chunk(large_text) + + async with AsyncWebCrawler() as crawler: + for chunk in text_chunks: + result = await crawler.arun( + url="https://example.com", + extraction_strategy=extraction_strategy, + content=chunk + ) + print(result.extracted_content) + ``` + + - **Explanation**: + - `chunker.chunk()`: Divides the `large_text` into smaller segments based on the chosen strategy. + - `extraction_strategy`: Processes each chunk separately, and results are then aggregated to form the final output. + +#### **10. Choosing the Right Chunking Strategy** + - **Text Structure**: If text has clear sections (e.g., paragraphs, topics), use Regex or Topic Segmentation. + - **Extraction Needs**: If context is crucial, consider Sliding or Overlapping Window Chunking. + - **Processing Constraints**: For word-limited extractions (e.g., LLMs with token limits), Fixed-Length Word Chunking is often most effective. + +#### **11. Wrap Up & Next Steps** + - Recap the benefits of each chunking strategy and when to use them in extraction workflows. + - Tease the next video: **Hooks and Custom Workflow with AsyncWebCrawler**, focusing on customizing crawler behavior with hooks for a fine-tuned extraction process. + +--- + +This outline provides a complete understanding of chunking strategies, explaining each method’s strengths and best-use scenarios to help users process large texts effectively in Crawl4AI. 
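As a footnote to Strategies 5 and 6, the windowing logic itself is plain index arithmetic over a token list. Here is a minimal pure-Python sketch, assuming simple whitespace tokenization; it is an illustration of the idea, not the library's implementation:

```python
def sliding_window_chunks(text: str, window_size: int, step: int) -> list[str]:
    """Split text into overlapping word windows of `window_size`, advancing `step` words at a time."""
    words = text.split()
    if len(words) <= window_size:
        return [" ".join(words)]
    chunks = [
        " ".join(words[start:start + window_size])
        for start in range(0, len(words) - window_size + 1, step)
    ]
    # Add a final window if the stride skipped trailing words
    if (len(words) - window_size) % step != 0:
        chunks.append(" ".join(words[-window_size:]))
    return chunks
```

With `step < window_size` adjacent chunks overlap (the Sliding/Overlapping Window strategies); with `step == window_size` the chunks are disjoint, which is the Fixed-Length behavior.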
\ No newline at end of file diff --git a/docs/md_v2/tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md b/docs/md_v2/tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md new file mode 100644 index 00000000..11b9be7d --- /dev/null +++ b/docs/md_v2/tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md @@ -0,0 +1,185 @@ +# Crawl4AI + +## Episode 14: Hooks and Custom Workflow with AsyncWebCrawler + +### Quick Intro +Cover hooks (`on_browser_created`, `before_goto`, `after_goto`) to add custom workflows. Demo: Use hooks to add custom cookies or headers, log HTML, or trigger specific events on page load. + +Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCrawler** video, covering each hook’s purpose, usage, and example implementations. + +--- + +### **13. Hooks and Custom Workflow with AsyncWebCrawler** + +#### **1. Introduction to Hooks in Crawl4AI** + - **What are Hooks**: Hooks are customizable entry points in the crawling process that allow users to inject custom actions or logic at specific stages. + - **Why Use Hooks**: + - They enable fine-grained control over the crawling workflow. + - Useful for performing additional tasks (e.g., logging, modifying headers) dynamically during the crawl. + - Hooks provide the flexibility to adapt the crawler to complex site structures or unique project needs. + +#### **2. Overview of Available Hooks** + - Crawl4AI offers seven key hooks to modify and control different stages in the crawling lifecycle: + - `on_browser_created` + - `on_user_agent_updated` + - `on_execution_started` + - `before_goto` + - `after_goto` + - `before_return_html` + - `before_retrieve_html` + +#### **3. Hook-by-Hook Explanation and Examples** + +--- + +##### **Hook 1: `on_browser_created`** + - **Purpose**: Triggered right after the browser instance is created. + - **Use Case**: + - Initializing browser-specific settings or performing setup actions. 
+ - Configuring browser extensions or scripts before any page is opened. + - **Example**: + ```python + async def log_browser_creation(browser): + print("Browser instance created:", browser) + + crawler.set_hook('on_browser_created', log_browser_creation) + ``` + - **Explanation**: This hook logs the browser creation event, useful for tracking when a new browser instance starts. + +--- + +##### **Hook 2: `on_user_agent_updated`** + - **Purpose**: Called whenever the user agent string is updated. + - **Use Case**: + - Modifying the user agent based on page requirements, e.g., changing to a mobile user agent for mobile-only pages. + - **Example**: + ```python + def update_user_agent(user_agent): + print(f"User Agent Updated: {user_agent}") + + crawler.set_hook('on_user_agent_updated', update_user_agent) + crawler.update_user_agent("Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)") + ``` + - **Explanation**: This hook provides a callback every time the user agent changes, helpful for debugging or dynamically altering user agent settings based on conditions. + +--- + +##### **Hook 3: `on_execution_started`** + - **Purpose**: Called right before the crawler begins any interaction (e.g., JavaScript execution, clicks). + - **Use Case**: + - Performing setup actions, such as inserting cookies or initiating custom scripts. + - **Example**: + ```python + async def log_execution_start(page): + print("Execution started on page:", page.url) + + crawler.set_hook('on_execution_started', log_execution_start) + ``` + - **Explanation**: Logs the start of any major interaction on the page, ideal for cases where you want to monitor each interaction. + +--- + +##### **Hook 4: `before_goto`** + - **Purpose**: Triggered before navigating to a new URL with `page.goto()`. + - **Use Case**: + - Modifying request headers or setting up conditions right before the page loads. + - Adding headers or dynamically adjusting options for specific URLs. 
+ - **Example**: + ```python + async def modify_headers_before_goto(page): + await page.set_extra_http_headers({"X-Custom-Header": "CustomValue"}) + print("Custom headers set before navigation") + + crawler.set_hook('before_goto', modify_headers_before_goto) + ``` + - **Explanation**: This hook allows injecting headers or altering settings based on the page’s needs, particularly useful for pages with custom requirements. + +--- + +##### **Hook 5: `after_goto`** + - **Purpose**: Executed immediately after a page has loaded (after `page.goto()`). + - **Use Case**: + - Checking the loaded page state, modifying the DOM, or performing post-navigation actions (e.g., scrolling). + - **Example**: + ```python + async def post_navigation_scroll(page): + await page.evaluate("window.scrollTo(0, document.body.scrollHeight)") + print("Scrolled to the bottom after navigation") + + crawler.set_hook('after_goto', post_navigation_scroll) + ``` + - **Explanation**: This hook scrolls to the bottom of the page after loading, which can help load dynamically added content like infinite scroll elements. + +--- + +##### **Hook 6: `before_return_html`** + - **Purpose**: Called right before HTML content is retrieved and returned. + - **Use Case**: + - Removing overlays or cleaning up the page for a cleaner HTML extraction. + - **Example**: + ```python + async def remove_advertisements(page, html): + await page.evaluate("document.querySelectorAll('.ad-banner').forEach(el => el.remove());") + print("Advertisements removed before returning HTML") + + crawler.set_hook('before_return_html', remove_advertisements) + ``` + - **Explanation**: The hook removes ad banners from the HTML before it’s retrieved, ensuring a cleaner data extraction. + +--- + +##### **Hook 7: `before_retrieve_html`** + - **Purpose**: Runs right before Crawl4AI initiates HTML retrieval. + - **Use Case**: + - Finalizing any page adjustments (e.g., setting timers, waiting for specific elements). 
+ - **Example**: + ```python + async def wait_for_content_before_retrieve(page): + await page.wait_for_selector('.main-content') + print("Main content loaded, ready to retrieve HTML") + + crawler.set_hook('before_retrieve_html', wait_for_content_before_retrieve) + ``` + - **Explanation**: This hook waits for the main content to load before retrieving the HTML, ensuring that all essential content is captured. + +#### **4. Setting Hooks in Crawl4AI** + - **How to Set Hooks**: + - Use `set_hook` to define a custom function for each hook. + - Each hook function can be asynchronous (useful for actions like waiting or retrieving async data). + - **Example Setup**: + ```python + crawler.set_hook('on_browser_created', log_browser_creation) + crawler.set_hook('before_goto', modify_headers_before_goto) + crawler.set_hook('after_goto', post_navigation_scroll) + ``` + +#### **5. Complete Example: Using Hooks for a Customized Crawl Workflow** + - **Goal**: Log each key step, set custom headers before navigation, and clean up the page before retrieving HTML. + - **Example Code**: + ```python + async def custom_crawl(): + async with AsyncWebCrawler() as crawler: + # Set hooks for custom workflow + crawler.set_hook('on_browser_created', log_browser_creation) + crawler.set_hook('before_goto', modify_headers_before_goto) + crawler.set_hook('after_goto', post_navigation_scroll) + crawler.set_hook('before_return_html', remove_advertisements) + + # Perform the crawl + url = "https://example.com" + result = await crawler.arun(url=url) + print(result.html) # Display or process HTML + ``` + +#### **6. Benefits of Using Hooks in Custom Crawling Workflows** + - **Enhanced Control**: Hooks offer precise control over each stage, allowing adjustments based on content and structure. + - **Efficient Modifications**: Avoid reloading or restarting the session; hooks can alter actions dynamically. 
+ - **Context-Sensitive Actions**: Hooks enable custom logic tailored to specific pages or sections, maximizing extraction quality. + +#### **7. Wrap Up & Next Steps** + - Recap how hooks empower customized workflows in Crawl4AI, enabling flexibility at every stage. + - Tease the next video: **Automating Post-Processing with Crawl4AI**, covering automated steps after data extraction. + +--- + +This outline provides a thorough understanding of hooks, their practical applications, and examples for customizing the crawling workflow in Crawl4AI. \ No newline at end of file diff --git a/docs/md_v2/tutorial/tutorial.md b/docs/md_v2/tutorial/tutorial.md new file mode 100644 index 00000000..4e90484d --- /dev/null +++ b/docs/md_v2/tutorial/tutorial.md @@ -0,0 +1,1719 @@ +# Crawl4AI + +## Episode 1: Introduction to Crawl4AI and Basic Installation + +### Quick Intro +Walk through installation from PyPI, setup, and verification. Show how to install with options like `torch` or `transformer` for advanced capabilities. + +Here's a condensed outline of the **Installation and Setup** video content: + +--- + +1. **Introduction to Crawl4AI**: + - Briefly explain that Crawl4AI is a powerful tool for web scraping, data extraction, and content processing, with customizable options for various needs. + +2. **Installation Overview**: + - **Basic Install**: Run `pip install crawl4ai` and `playwright install` (to set up browser dependencies). + - **Optional Advanced Installs**: + - `pip install crawl4ai[torch]` - Adds PyTorch for clustering. + - `pip install crawl4ai[transformer]` - Adds support for LLM-based extraction. + - `pip install crawl4ai[all]` - Installs all features for complete functionality. + +3. 
**Verifying the Installation**: + - Walk through a simple test script to confirm the setup: + ```python + import asyncio + from crawl4ai import AsyncWebCrawler + + async def main(): + async with AsyncWebCrawler(verbose=True) as crawler: + result = await crawler.arun(url="https://www.example.com") + print(result.markdown[:500]) # Show first 500 characters + + asyncio.run(main()) + ``` + - Explain that this script initializes the crawler and runs it on a test URL, displaying part of the extracted content to verify functionality. + +4. **Important Tips**: + - **Run** `playwright install` **after installation** to set up dependencies. + - **For full performance** on text-related tasks, run `crawl4ai-download-models` after installing with `[torch]`, `[transformer]`, or `[all]` options. + - If you encounter issues, refer to the documentation or GitHub issues. + +5. **Wrap Up**: + - Introduce the next topic in the series, which will cover Crawl4AI's browser configuration options (like choosing between `chromium`, `firefox`, and `webkit`). + +--- + +This structure provides a concise, effective guide to get viewers up and running with Crawl4AI in minutes.# Crawl4AI + +## Episode 2: Overview of Advanced Features + +### Quick Intro +A general overview of advanced features like hooks, CSS selectors, and JSON CSS extraction. + +Here's a condensed outline for an **Overview of Advanced Features** video covering Crawl4AI's powerful customization and extraction options: + +--- + +### **Overview of Advanced Features** + +1. **Introduction to Advanced Features**: + - Briefly introduce Crawl4AI’s advanced tools, which let users go beyond basic crawling to customize and fine-tune their scraping workflows. + +2. **Taking Screenshots**: + - Explain the screenshot capability for capturing page state and verifying content. 
+ - **Example**: + ```python + result = await crawler.arun(url="https://www.example.com", screenshot=True) + ``` + - Mention that screenshots are saved as a base64 string in `result`, allowing easy decoding and saving. + +3. **Media and Link Extraction**: + - Demonstrate how to pull all media (images, videos) and links (internal and external) from a page for deeper analysis or content gathering. + - **Example**: + ```python + result = await crawler.arun(url="https://www.example.com") + print("Media:", result.media) + print("Links:", result.links) + ``` + +4. **Custom User Agent**: + - Show how to set a custom user agent to disguise the crawler or simulate specific devices/browsers. + - **Example**: + ```python + result = await crawler.arun(url="https://www.example.com", user_agent="Mozilla/5.0 (compatible; MyCrawler/1.0)") + ``` + +5. **Custom Hooks for Enhanced Control**: + - Briefly cover how to use hooks, which allow custom actions like setting headers or handling login during the crawl. + - **Example**: Setting a custom header with the `before_goto` hook. + ```python + async def before_goto(page): + await page.set_extra_http_headers({"X-Test-Header": "test"}) + ``` + +6. **CSS Selectors for Targeted Extraction**: + - Explain the use of CSS selectors to extract specific elements, ideal for structured data like articles or product details. + - **Example**: + ```python + result = await crawler.arun(url="https://www.example.com", css_selector="h2") + print("H2 Tags:", result.extracted_content) + ``` + +7. **Crawling Inside Iframes**: + - Mention how enabling `process_iframes=True` allows extracting content within iframes, useful for sites with embedded content or ads. + - **Example**: + ```python + result = await crawler.arun(url="https://www.example.com", process_iframes=True) + ``` + +8. **Wrap-Up**: + - Summarize these advanced features and how they allow users to customize every part of their web scraping experience.
+ - Tease upcoming videos where each feature will be explored in detail. + +--- + +This covers each advanced feature with a brief example, providing a useful overview to prepare viewers for the more in-depth videos.# Crawl4AI + +## Episode 3: Browser Configurations & Headless Crawling + +### Quick Intro +Explain browser options (`chromium`, `firefox`, `webkit`) and settings for headless mode, caching, and verbose logging. + +Here’s a streamlined outline for the **Browser Configurations & Headless Crawling** video: + +--- + +### **Browser Configurations & Headless Crawling** + +1. **Overview of Browser Options**: + - Crawl4AI supports three browser engines: + - **Chromium** (default) - Highly compatible. + - **Firefox** - Great for specialized use cases. + - **Webkit** - Lightweight, ideal for basic needs. + - **Example**: + ```python + # Using Chromium (default) + crawler = AsyncWebCrawler(browser_type="chromium") + + # Using Firefox + crawler = AsyncWebCrawler(browser_type="firefox") + + # Using WebKit + crawler = AsyncWebCrawler(browser_type="webkit") + ``` + +2. **Headless Mode**: + - Headless mode runs the browser without a visible GUI, making it faster and less resource-intensive. + - To enable or disable: + ```python + # Headless mode (default is True) + crawler = AsyncWebCrawler(headless=True) + + # Disable headless mode for debugging + crawler = AsyncWebCrawler(headless=False) + ``` + +3. **Verbose Logging**: + - Use `verbose=True` to get detailed logs for each action, useful for debugging: + ```python + crawler = AsyncWebCrawler(verbose=True) + ``` + +4. 
**Running a Basic Crawl with Configuration**: + - Example of a simple crawl with custom browser settings: + ```python + async with AsyncWebCrawler(browser_type="firefox", headless=True, verbose=True) as crawler: + result = await crawler.arun(url="https://www.example.com") + print(result.markdown[:500]) # Show first 500 characters + ``` + - This example uses Firefox in headless mode with logging enabled, demonstrating the flexibility of Crawl4AI’s setup. + +5. **Recap & Next Steps**: + - Recap the power of selecting different browsers and running headless mode for speed and efficiency. + - Tease the next video: **Proxy & Security Settings** for navigating blocked or restricted content and protecting IP identity. + +--- + +This breakdown covers browser configuration essentials in Crawl4AI, providing users with practical steps to optimize their scraping setup.# Crawl4AI + +## Episode 4: Advanced Proxy and Security Settings + +### Quick Intro +Showcase proxy configurations (HTTP, SOCKS5, authenticated proxies). Demo: Use rotating proxies and set custom headers to avoid IP blocking and enhance security. + +Here’s a focused outline for the **Proxy and Security Settings** video: + +--- + +### **Proxy & Security Settings** + +1. **Why Use Proxies in Web Crawling**: + - Proxies are essential for bypassing IP-based restrictions, improving anonymity, and managing rate limits. + - Crawl4AI supports simple proxies, authenticated proxies, and proxy rotation for robust web scraping. + +2. **Basic Proxy Setup**: + - **Using a Simple Proxy**: + ```python + # HTTP proxy + crawler = AsyncWebCrawler(proxy="http://proxy.example.com:8080") + + # SOCKS proxy + crawler = AsyncWebCrawler(proxy="socks5://proxy.example.com:1080") + ``` + +3. 
**Authenticated Proxies**: + - Use `proxy_config` for proxies requiring a username and password: + ```python + proxy_config = { + "server": "http://proxy.example.com:8080", + "username": "user", + "password": "pass" + } + crawler = AsyncWebCrawler(proxy_config=proxy_config) + ``` + +4. **Rotating Proxies**: + - Rotating proxies helps avoid IP bans by switching IP addresses for each request: + ```python + async def get_next_proxy(): + # Define proxy rotation logic here + return {"server": "http://next.proxy.com:8080"} + + async with AsyncWebCrawler() as crawler: + for url in urls: + proxy = await get_next_proxy() + crawler.update_proxy(proxy) + result = await crawler.arun(url=url) + ``` + - This setup periodically switches the proxy for enhanced security and access. + +5. **Custom Headers for Additional Security**: + - Set custom headers to mask the crawler’s identity and avoid detection: + ```python + headers = { + "X-Forwarded-For": "203.0.113.195", + "Accept-Language": "en-US,en;q=0.9", + "Cache-Control": "no-cache", + "Pragma": "no-cache" + } + crawler = AsyncWebCrawler(headers=headers) + ``` + +6. **Combining Proxies with Magic Mode for Anti-Bot Protection**: + - For sites with aggressive bot detection, combine `proxy` settings with `magic=True`: + ```python + async with AsyncWebCrawler(proxy="http://proxy.example.com:8080", headers={"Accept-Language": "en-US"}) as crawler: + result = await crawler.arun( + url="https://example.com", + magic=True # Enables anti-detection features + ) + ``` + - **Magic Mode** automatically enables user simulation, random timing, and browser property masking. + +7. **Wrap Up & Next Steps**: + - Summarize the importance of proxies and anti-detection in accessing restricted content and avoiding bans. + - Tease the next video: **JavaScript Execution and Handling Dynamic Content** for working with interactive and dynamically loaded pages. 
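One concrete way to flesh out the `get_next_proxy` placeholder from step 4 is a round-robin pool built on `itertools.cycle`. This is a minimal sketch with placeholder proxy URLs, not a prescription for how rotation must be done:

```python
import itertools

class ProxyPool:
    """Round-robin over a fixed list of proxy servers, one per request."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_proxy(self) -> dict:
        # Return the {"server": ...} shape used by the rotation loop in step 4
        return {"server": next(self._cycle)}

pool = ProxyPool([
    "http://proxy-a.example.com:8080",
    "http://proxy-b.example.com:8080",
])
```

Each call to `pool.next_proxy()` returns the next server, wrapping around when the list is exhausted; for proxies that need credentials, return the full `proxy_config` dict from step 3 instead.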
+ +--- + +This outline provides a practical guide to setting up proxies and security configurations, empowering users to navigate restricted sites while staying undetected.# Crawl4AI + +## Episode 5: JavaScript Execution and Dynamic Content Handling + +### Quick Intro +Explain JavaScript code injection with examples (e.g., simulating scrolling, clicking ‘load more’). Demo: Extract content from a page that uses dynamic loading with lazy-loaded images. + +Here’s a focused outline for the **JavaScript Execution and Dynamic Content Handling** video: + +--- + +### **JavaScript Execution & Dynamic Content Handling** + +1. **Why JavaScript Execution Matters**: + - Many modern websites load content dynamically via JavaScript, requiring special handling to access all elements. + - Crawl4AI can execute JavaScript on pages, enabling it to interact with elements like “load more” buttons, infinite scrolls, and content that appears only after certain actions. + +2. **Basic JavaScript Execution**: + - Use `js_code` to execute JavaScript commands on a page: + ```python + # Scroll to bottom of the page + result = await crawler.arun( + url="https://example.com", + js_code="window.scrollTo(0, document.body.scrollHeight);" + ) + ``` + - This command scrolls to the bottom, triggering any lazy-loaded or dynamically added content. + +3. **Multiple Commands & Simulating Clicks**: + - Combine multiple JavaScript commands to interact with elements like “load more” buttons: + ```python + js_commands = [ + "window.scrollTo(0, document.body.scrollHeight);", + "document.querySelector('.load-more').click();" + ] + result = await crawler.arun( + url="https://example.com", + js_code=js_commands + ) + ``` + - This script scrolls down and then clicks the “load more” button, useful for loading additional content blocks. + +4. 
**Waiting for Dynamic Content**: + - Use `wait_for` to ensure the page loads specific elements before proceeding: + ```python + result = await crawler.arun( + url="https://example.com", + js_code="window.scrollTo(0, document.body.scrollHeight);", + wait_for="css:.dynamic-content" # Wait for elements with class `.dynamic-content` + ) + ``` + - This example waits until elements with `.dynamic-content` are loaded, helping to capture content that appears after JavaScript actions. + +5. **Handling Complex Dynamic Content (e.g., Infinite Scroll)**: + - Combine JavaScript execution with conditional waiting to handle infinite scrolls or paginated content: + ```python + result = await crawler.arun( + url="https://example.com", + js_code=[ + "window.scrollTo(0, document.body.scrollHeight);", + "const loadMore = document.querySelector('.load-more'); if (loadMore) loadMore.click();" + ], + wait_for="js:() => document.querySelectorAll('.item').length > 10" # Wait until 10 items are loaded + ) + ``` + - This example scrolls and clicks "load more" repeatedly, waiting each time for a specified number of items to load. + +6. **Complete Example: Dynamic Content Handling with Extraction**: + - Full example demonstrating a dynamic load and content extraction in one process: + ```python + async with AsyncWebCrawler() as crawler: + result = await crawler.arun( + url="https://example.com", + js_code=[ + "window.scrollTo(0, document.body.scrollHeight);", + "document.querySelector('.load-more').click();" + ], + wait_for="css:.main-content", + css_selector=".main-content" + ) + print(result.markdown[:500]) # Output the main content extracted + ``` + +7. **Wrap Up & Next Steps**: + - Recap how JavaScript execution allows access to dynamic content, enabling powerful interactions. + - Tease the next video: **Content Cleaning and Fit Markdown** to show how Crawl4AI can extract only the most relevant content from complex pages. 
+ +--- + +This outline explains how to handle dynamic content and JavaScript-based interactions effectively, enabling users to scrape and interact with complex, modern websites.# Crawl4AI + +## Episode 6: Magic Mode and Anti-Bot Protection + +### Quick Intro +Highlight `Magic Mode` and anti-bot features like user simulation, navigator overrides, and timing randomization. Demo: Access a site with anti-bot protection and show how `Magic Mode` seamlessly handles it. + +Here’s a concise outline for the **Magic Mode and Anti-Bot Protection** video: + +--- + +### **Magic Mode & Anti-Bot Protection** + +1. **Why Anti-Bot Protection is Important**: + - Many websites use bot detection mechanisms to block automated scraping. Crawl4AI’s anti-detection features help avoid IP bans, CAPTCHAs, and access restrictions. + - **Magic Mode** is a one-step solution to enable a range of anti-bot features without complex configuration. + +2. **Enabling Magic Mode**: + - Simply set `magic=True` to activate Crawl4AI’s full anti-bot suite: + ```python + result = await crawler.arun( + url="https://example.com", + magic=True # Enables all anti-detection features + ) + ``` + - This enables a blend of stealth techniques, including masking automation signals, randomizing timings, and simulating real user behavior. + +3. **What Magic Mode Does Behind the Scenes**: + - **User Simulation**: Mimics human actions like mouse movements and scrolling. + - **Navigator Overrides**: Hides signals that indicate an automated browser. + - **Timing Randomization**: Adds random delays to simulate natural interaction patterns. + - **Cookie Handling**: Accepts and manages cookies dynamically to avoid triggers from cookie pop-ups. + +4. 
**Manual Anti-Bot Options (If Not Using Magic Mode)**: + - For granular control, you can configure individual settings without Magic Mode: + ```python + result = await crawler.arun( + url="https://example.com", + simulate_user=True, # Enables human-like behavior + override_navigator=True # Hides automation fingerprints + ) + ``` + - **Use Cases**: This approach allows more specific adjustments when certain anti-bot features are needed but others are not. + +5. **Combining Proxies with Magic Mode**: + - To avoid rate limits or IP blocks, combine Magic Mode with a proxy: + ```python + async with AsyncWebCrawler( + proxy="http://proxy.example.com:8080", + headers={"Accept-Language": "en-US"} + ) as crawler: + result = await crawler.arun( + url="https://example.com", + magic=True # Full anti-detection + ) + ``` + - This setup maximizes stealth by pairing anti-bot detection with IP obfuscation. + +6. **Example of Anti-Bot Protection in Action**: + - Full example with Magic Mode and proxies to scrape a protected page: + ```python + async with AsyncWebCrawler() as crawler: + result = await crawler.arun( + url="https://example.com/protected-content", + magic=True, + proxy="http://proxy.example.com:8080", + wait_for="css:.content-loaded" # Wait for the main content to load + ) + print(result.markdown[:500]) # Display first 500 characters of the content + ``` + - This example ensures seamless access to protected content by combining anti-detection and waiting for full content load. + +7. **Wrap Up & Next Steps**: + - Recap the power of Magic Mode and anti-bot features for handling restricted websites. + - Tease the next video: **Content Cleaning and Fit Markdown** to show how to extract clean and focused content from a page. 
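The timing randomization that Magic Mode applies (step 3) can be approximated by hand when composing your own `js_code` action sequences. The helper below is an illustrative sketch of the idea, not Crawl4AI's internal logic:

```python
import random

def human_delay(base: float = 1.0, jitter: float = 0.5) -> float:
    """A pause length near `base` seconds, randomized so actions have no fixed cadence."""
    return base + random.uniform(-jitter, jitter)

# e.g. await asyncio.sleep(human_delay()) between manual clicks or scrolls
```

With `magic=True` you do not need this; it is only relevant when driving interactions manually with the granular options above.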
+ +--- + +This outline shows users how to easily avoid bot detection and access restricted content, demonstrating both the power and simplicity of Magic Mode in Crawl4AI.# Crawl4AI + +## Episode 7: Content Cleaning and Fit Markdown + +### Quick Intro +Explain content cleaning options, including `fit_markdown` to keep only the most relevant content. Demo: Extract and compare regular vs. fit markdown from a news site or blog. + +Here’s a streamlined outline for the **Content Cleaning and Fit Markdown** video: + +--- + +### **Content Cleaning & Fit Markdown** + +1. **Overview of Content Cleaning in Crawl4AI**: + - Explain that web pages often include extra elements like ads, navigation bars, footers, and popups. + - Crawl4AI’s content cleaning features help extract only the main content, reducing noise and enhancing readability. + +2. **Basic Content Cleaning Options**: + - **Removing Unwanted Elements**: Exclude specific HTML tags, like forms or navigation bars: + ```python + result = await crawler.arun( + url="https://example.com", + word_count_threshold=10, # Filter out blocks with fewer than 10 words + excluded_tags=['form', 'nav'], # Exclude specific tags + remove_overlay_elements=True # Remove popups and modals + ) + ``` + - This example extracts content while excluding forms, navigation, and modal overlays, ensuring clean results. + +3. **Fit Markdown for Main Content Extraction**: + - **What is Fit Markdown**: Uses advanced analysis to identify the most relevant content (ideal for articles, blogs, and documentation). + - **How it Works**: Analyzes content density, removes boilerplate elements, and maintains formatting for a clear output. + - **Example**: + ```python + result = await crawler.arun(url="https://example.com") + main_content = result.fit_markdown # Extracted main content + print(main_content[:500]) # Display first 500 characters + ``` + - Fit Markdown is especially helpful for long-form content like news articles or blog posts. + +4. 
**Comparing Fit Markdown with Regular Markdown**: + - **Fit Markdown** returns the primary content without extraneous elements. + - **Regular Markdown** includes all extracted text in markdown format. + - Example to show the difference: + ```python + all_content = result.markdown # Full markdown + main_content = result.fit_markdown # Only the main content + + print(f"All Content Length: {len(all_content)}") + print(f"Main Content Length: {len(main_content)}") + ``` + - This comparison shows the effectiveness of Fit Markdown in focusing on essential content. + +5. **Media and Metadata Handling with Content Cleaning**: + - **Media Extraction**: Crawl4AI captures images and videos with metadata like alt text, descriptions, and relevance scores: + ```python + for image in result.media["images"]: + print(f"Source: {image['src']}, Alt Text: {image['alt']}, Relevance Score: {image['score']}") + ``` + - **Use Case**: Useful for saving only relevant images or videos from an article or content-heavy page. + +6. **Example of Clean Content Extraction in Action**: + - Full example extracting cleaned content and Fit Markdown: + ```python + async with AsyncWebCrawler() as crawler: + result = await crawler.arun( + url="https://example.com", + word_count_threshold=10, + excluded_tags=['nav', 'footer'], + remove_overlay_elements=True + ) + print(result.fit_markdown[:500]) # Show main content + ``` + - This example demonstrates content cleaning with settings for filtering noise and focusing on the core text. + +7. **Wrap Up & Next Steps**: + - Summarize the power of Crawl4AI’s content cleaning features and Fit Markdown for capturing clean, relevant content. + - Tease the next video: **Link Analysis and Smart Filtering** to focus on analyzing and filtering links within crawled pages. 
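To build intuition for the cleaning described in this outline, here is a toy version of a word-count plus link-density heuristic. It is a simplified sketch, not Crawl4AI's actual Fit Markdown algorithm; `clean_blocks` and its input format are invented for illustration:

```python
def clean_blocks(blocks, word_count_threshold=10, max_link_ratio=0.5):
    """Keep only text blocks that look like main content.

    Each block is a (text, link_word_count) pair: the block's text and
    how many of its words belong to hyperlinks.
    """
    kept = []
    for text, link_words in blocks:
        words = len(text.split())
        if words < word_count_threshold:
            continue  # too short: likely a menu item, button, or caption
        if link_words / words > max_link_ratio:
            continue  # mostly link text: likely navigation or a footer
        kept.append(text)
    return kept

blocks = [
    ("Home About Contact", 3),  # nav bar: short and all links
    ("This long paragraph explains the article topic in enough "
     "detail to clear the word-count threshold easily.", 0),
]
print(clean_blocks(blocks))  # only the article paragraph survives
```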
+ +--- + +This outline covers Crawl4AI’s content cleaning features and the unique benefits of Fit Markdown, showing users how to retrieve focused, high-quality content from web pages.# Crawl4AI + +## Episode 8: Media Handling: Images, Videos, and Audio + +### Quick Intro +Showcase Crawl4AI’s media extraction capabilities, including lazy-loaded media and metadata. Demo: Crawl a multimedia page, extract images, and show metadata (alt text, context, relevance score). + +Here’s a clear and focused outline for the **Media Handling: Images, Videos, and Audio** video: + +--- + +### **Media Handling: Images, Videos, and Audio** + +1. **Overview of Media Extraction in Crawl4AI**: + - Crawl4AI can detect and extract different types of media (images, videos, and audio) along with useful metadata. + - This functionality is essential for gathering visual content from multimedia-heavy pages like e-commerce sites, news articles, and social media feeds. + +2. **Image Extraction and Metadata**: + - Crawl4AI captures images with detailed metadata, including: + - **Source URL**: The direct URL to the image. + - **Alt Text**: Image description if available. + - **Relevance Score**: A score (0–10) indicating how relevant the image is to the main content. + - **Context**: Text surrounding the image on the page. + - **Example**: + ```python + result = await crawler.arun(url="https://example.com") + + for image in result.media["images"]: + print(f"Source: {image['src']}") + print(f"Alt Text: {image['alt']}") + print(f"Relevance Score: {image['score']}") + print(f"Context: {image['context']}") + ``` + - This example shows how to access each image’s metadata, making it easy to filter for the most relevant visuals. + +3. **Handling Lazy-Loaded Images**: + - Crawl4AI automatically supports lazy-loaded images, which are commonly used to optimize webpage loading. 
+ - **Example with Wait for Lazy-Loaded Content**: + ```python + result = await crawler.arun( + url="https://example.com", + wait_for="css:img[data-src]", # Wait for lazy-loaded images + delay_before_return_html=2.0 # Allow extra time for images to load + ) + ``` + - This setup waits for lazy-loaded images to appear, ensuring they are fully captured. + +4. **Video Extraction and Metadata**: + - Crawl4AI captures video elements, including: + - **Source URL**: The video’s direct URL. + - **Type**: Format of the video (e.g., MP4). + - **Thumbnail**: A poster or thumbnail image if available. + - **Duration**: Video length, if metadata is provided. + - **Example**: + ```python + for video in result.media["videos"]: + print(f"Video Source: {video['src']}") + print(f"Type: {video['type']}") + print(f"Thumbnail: {video.get('poster')}") + print(f"Duration: {video.get('duration')}") + ``` + - This allows users to gather video content and relevant details for further processing or analysis. + +5. **Audio Extraction and Metadata**: + - Audio elements can also be extracted, with metadata like: + - **Source URL**: The audio file’s direct URL. + - **Type**: Format of the audio file (e.g., MP3). + - **Duration**: Length of the audio, if available. + - **Example**: + ```python + for audio in result.media["audios"]: + print(f"Audio Source: {audio['src']}") + print(f"Type: {audio['type']}") + print(f"Duration: {audio.get('duration')}") + ``` + - Useful for sites with podcasts, sound bites, or other audio content. + +6. **Filtering Media by Relevance**: + - Use metadata like relevance score to filter only the most useful media content: + ```python + relevant_images = [img for img in result.media["images"] if img['score'] > 5] + ``` + - This is especially helpful for content-heavy pages where you only want media directly related to the main content. + +7. 
**Example: Full Media Extraction with Content Filtering**:
+ - Full example extracting images, videos, and audio along with filtering by relevance:
+ ```python
+ async with AsyncWebCrawler() as crawler:
+     result = await crawler.arun(
+         url="https://example.com",
+         word_count_threshold=10,       # Filter content blocks for relevance
+         exclude_external_images=True   # Only keep internal images
+     )
+
+     # Keep only images with a high relevance score (see section 6)
+     relevant_images = [img for img in result.media["images"] if img["score"] > 5]
+
+     # Display media summaries
+     print(f"Relevant Images: {len(relevant_images)}")
+     print(f"Videos: {len(result.media['videos'])}")
+     print(f"Audio Clips: {len(result.media['audios'])}")
+ ```
+ - This example shows how to capture and filter various media types, focusing on what’s most relevant.
+
+8. **Wrap Up & Next Steps**:
+ - Recap the comprehensive media extraction capabilities, emphasizing how metadata helps users focus on relevant content.
+ - Tease the next video: **Link Analysis and Smart Filtering** to explore how Crawl4AI handles internal, external, and social media links for more focused data gathering.
+
+---
+
+This outline provides users with a complete guide to handling images, videos, and audio in Crawl4AI, using metadata to enhance relevance and precision in multimedia extraction.
+
+# Crawl4AI
+
+## Episode 9: Link Analysis and Smart Filtering
+
+### Quick Intro
+Walk through internal and external link classification, social media link filtering, and custom domain exclusion. Demo: Analyze links on a website, focusing on internal navigation vs. external or ad links.
+
+Here’s a focused outline for the **Link Analysis and Smart Filtering** video:
+
+---
+
+### **Link Analysis & Smart Filtering**
+
+1. **Importance of Link Analysis in Web Crawling**:
+ - Explain that web pages often contain numerous links, including internal links, external links, social media links, and ads.
+ - Crawl4AI’s link analysis and filtering options help extract only relevant links, enabling more targeted and efficient crawls.
+
+2. 

**Automatic Link Classification**: + - Crawl4AI categorizes links automatically into internal, external, and social media links. + - **Example**: + ```python + result = await crawler.arun(url="https://example.com") + + # Access internal and external links + internal_links = result.links["internal"] + external_links = result.links["external"] + + # Print first few links for each type + print("Internal Links:", internal_links[:3]) + print("External Links:", external_links[:3]) + ``` + +3. **Filtering Out Unwanted Links**: + - **Exclude External Links**: Remove all links pointing to external sites. + - **Exclude Social Media Links**: Filter out social media domains like Facebook or Twitter. + - **Example**: + ```python + result = await crawler.arun( + url="https://example.com", + exclude_external_links=True, # Remove external links + exclude_social_media_links=True # Remove social media links + ) + ``` + +4. **Custom Domain Filtering**: + - **Exclude Specific Domains**: Filter links from particular domains, e.g., ad sites. + - **Custom Social Media Domains**: Add additional social media domains if needed. + - **Example**: + ```python + result = await crawler.arun( + url="https://example.com", + exclude_domains=["ads.com", "trackers.com"], + exclude_social_media_domains=["facebook.com", "linkedin.com"] + ) + ``` + +5. **Accessing Link Context and Metadata**: + - Crawl4AI provides additional metadata for each link, including its text, type (e.g., navigation or content), and surrounding context. + - **Example**: + ```python + for link in result.links["internal"]: + print(f"Link: {link['href']}, Text: {link['text']}, Context: {link['context']}") + ``` + - **Use Case**: Helps users understand the relevance of links based on where they are placed on the page (e.g., navigation vs. article content). + +6. 
**Example of Comprehensive Link Filtering and Analysis**: + - Full example combining link filtering, metadata access, and contextual information: + ```python + async with AsyncWebCrawler() as crawler: + result = await crawler.arun( + url="https://example.com", + exclude_external_links=True, + exclude_social_media_links=True, + exclude_domains=["ads.com"], + css_selector=".main-content" # Focus only on main content area + ) + for link in result.links["internal"]: + print(f"Internal Link: {link['href']}, Text: {link['text']}, Context: {link['context']}") + ``` + - This example filters unnecessary links, keeping only internal and relevant links from the main content area. + +7. **Wrap Up & Next Steps**: + - Summarize the benefits of link filtering for efficient crawling and relevant content extraction. + - Tease the next video: **Custom Headers, Identity Management, and User Simulation** to explain how to configure identity settings and simulate user behavior for stealthier crawls. + +--- + +This outline provides a practical overview of Crawl4AI’s link analysis and filtering features, helping users target only essential links while eliminating distractions.# Crawl4AI + +## Episode 10: Custom Headers, Identity, and User Simulation + +### Quick Intro +Teach how to use custom headers, user-agent strings, and simulate real user interactions. Demo: Set custom user-agent and headers to access a site that blocks typical crawlers. + +Here’s a concise outline for the **Custom Headers, Identity Management, and User Simulation** video: + +--- + +### **Custom Headers, Identity Management, & User Simulation** + +1. **Why Customize Headers and Identity in Crawling**: + - Websites often track request headers and browser properties to detect bots. Customizing headers and managing identity help make requests appear more human, improving access to restricted sites. + +2. 
**Setting Custom Headers**: + - Customize HTTP headers to mimic genuine browser requests or meet site-specific requirements: + ```python + headers = { + "Accept-Language": "en-US,en;q=0.9", + "X-Requested-With": "XMLHttpRequest", + "Cache-Control": "no-cache" + } + crawler = AsyncWebCrawler(headers=headers) + ``` + - **Use Case**: Customize the `Accept-Language` header to simulate local user settings, or `Cache-Control` to bypass cache for fresh content. + +3. **Setting a Custom User Agent**: + - Some websites block requests from common crawler user agents. Setting a custom user agent string helps bypass these restrictions: + ```python + crawler = AsyncWebCrawler( + user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36" + ) + ``` + - **Tip**: Use user-agent strings from popular browsers (e.g., Chrome, Firefox) to improve access and reduce detection risks. + +4. **User Simulation for Human-like Behavior**: + - Enable `simulate_user=True` to mimic natural user interactions, such as random timing and simulated mouse movements: + ```python + result = await crawler.arun( + url="https://example.com", + simulate_user=True # Simulates human-like behavior + ) + ``` + - **Behavioral Effects**: Adds subtle variations in interactions, making the crawler harder to detect on bot-protected sites. + +5. 
**Navigator Overrides and Magic Mode for Full Identity Masking**: + - Use `override_navigator=True` to mask automation indicators like `navigator.webdriver`, which websites check to detect bots: + ```python + result = await crawler.arun( + url="https://example.com", + override_navigator=True # Masks bot-related signals + ) + ``` + - **Combining with Magic Mode**: For a complete anti-bot setup, combine these identity options with `magic=True` for maximum protection: + ```python + async with AsyncWebCrawler() as crawler: + result = await crawler.arun( + url="https://example.com", + magic=True, # Enables all anti-bot detection features + user_agent="Custom-Agent", # Custom agent with Magic Mode + ) + ``` + - This setup includes all anti-detection techniques like navigator masking, random timing, and user simulation. + +6. **Example: Comprehensive Setup for Identity Management**: + - A full example combining custom headers, user-agent, and user simulation for a realistic browsing profile: + ```python + async with AsyncWebCrawler( + headers={"Accept-Language": "en-US", "Cache-Control": "no-cache"}, + user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0", + simulate_user=True + ) as crawler: + result = await crawler.arun(url="https://example.com/secure-page") + print(result.markdown[:500]) # Display extracted content + ``` + - This example enables detailed customization for evading detection and accessing protected pages smoothly. + +7. **Wrap Up & Next Steps**: + - Recap the value of headers, user-agent customization, and simulation in bypassing bot detection. + - Tease the next video: **Extraction Strategies: JSON CSS, LLM, and Cosine** to dive into structured data extraction methods for high-quality content retrieval. 
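One common refinement of the identity setup above is rotating through a small pool of realistic browser profiles so repeated crawls don't present an identical fingerprint. A minimal sketch (the user-agent pool and `pick_identity` helper are illustrative, not part of Crawl4AI's API):

```python
import random

# A small pool of realistic desktop browser user agents (illustrative).
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 "
    "(KHTML, like Gecko) Version/17.0 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:120.0) Gecko/20100101 Firefox/120.0",
]

def pick_identity():
    """Return a (user_agent, headers) pair for one crawl session."""
    user_agent = random.choice(USER_AGENTS)
    headers = {
        "Accept-Language": "en-US,en;q=0.9",
        "Cache-Control": "no-cache",
    }
    return user_agent, headers

ua, headers = pick_identity()
print(ua.split(" ")[0], headers["Accept-Language"])
```

The resulting pair could then be passed to `AsyncWebCrawler(user_agent=ua, headers=headers)` as shown earlier in the outline.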
+ +--- + +This outline equips users with tools for managing crawler identity and human-like behavior, essential for accessing bot-protected or restricted websites.Here’s a detailed outline for the **JSON-CSS Extraction Strategy** video, covering all key aspects and supported structures in Crawl4AI: + +--- + +### **10.1 JSON-CSS Extraction Strategy** + +#### **1. Introduction to JSON-CSS Extraction** + - JSON-CSS Extraction is used for pulling structured data from pages with repeated patterns, like product listings, article feeds, or directories. + - This strategy allows defining a schema with CSS selectors and data fields, making it easy to capture nested, list-based, or singular elements. + +#### **2. Basic Schema Structure** + - **Schema Fields**: The schema has two main components: + - `baseSelector`: A CSS selector to locate the main elements you want to extract (e.g., each article or product block). + - `fields`: Defines the data fields for each element, supporting various data types and structures. + +#### **3. Simple Field Extraction** + - **Example HTML**: + ```html +
+ <div class="product">
+   <h2 class="title">Sample Product</h2>
+   <span class="price">$19.99</span>
+   <p class="description">This is a sample product.</p>
+ </div>
+ ``` + - **Schema**: + ```python + schema = { + "baseSelector": ".product", + "fields": [ + {"name": "title", "selector": ".title", "type": "text"}, + {"name": "price", "selector": ".price", "type": "text"}, + {"name": "description", "selector": ".description", "type": "text"} + ] + } + ``` + - **Explanation**: Each field captures text content from specified CSS selectors within each `.product` element. + +#### **4. Supported Field Types: Text, Attribute, HTML, Regex** + - **Field Type Options**: + - `text`: Extracts visible text. + - `attribute`: Captures an HTML attribute (e.g., `src`, `href`). + - `html`: Extracts the raw HTML of an element. + - `regex`: Allows regex patterns to extract part of the text. + + - **Example HTML** (including an image): + ```html +
+ <div class="product">
+   <h2 class="title">Sample Product</h2>
+   <img class="product-image" src="product.jpg" alt="Product Image">
+   <span class="price">$19.99</span>
+   <p class="description">Limited time offer.</p>
+ </div>
+ ``` + - **Schema**: + ```python + schema = { + "baseSelector": ".product", + "fields": [ + {"name": "title", "selector": ".title", "type": "text"}, + {"name": "image_url", "selector": ".product-image", "type": "attribute", "attribute": "src"}, + {"name": "price", "selector": ".price", "type": "regex", "pattern": r"\$(\d+\.\d+)"}, + {"name": "description_html", "selector": ".description", "type": "html"} + ] + } + ``` + - **Explanation**: + - `attribute`: Extracts the `src` attribute from `.product-image`. + - `regex`: Extracts the numeric part from `$19.99`. + - `html`: Retrieves the full HTML of the description element. + +#### **5. Nested Field Extraction** + - **Use Case**: Useful when content contains sub-elements, such as an article with author details within it. + - **Example HTML**: + ```html +
+ <div class="article">
+   <h1 class="title">Sample Article</h1>
+   <div class="author">
+     <span class="name">John Doe</span>
+     <span class="bio">Writer and editor</span>
+   </div>
+ </div>
+ ``` + - **Schema**: + ```python + schema = { + "baseSelector": ".article", + "fields": [ + {"name": "title", "selector": ".title", "type": "text"}, + {"name": "author", "type": "nested", "selector": ".author", "fields": [ + {"name": "name", "selector": ".name", "type": "text"}, + {"name": "bio", "selector": ".bio", "type": "text"} + ]} + ] + } + ``` + - **Explanation**: + - `nested`: Extracts `name` and `bio` within `.author`, grouping the author details in a single `author` object. + +#### **6. List and Nested List Extraction** + - **List**: Extracts multiple elements matching the selector as a list. + - **Nested List**: Allows lists within lists, useful for items with sub-lists (e.g., specifications for each product). + - **Example HTML**: + ```html +
+ <div class="product">
+   <h2 class="title">Product with Features</h2>
+   <ul class="features">
+     <li class="feature">Feature 1</li>
+     <li class="feature">Feature 2</li>
+     <li class="feature">Feature 3</li>
+   </ul>
+ </div>
+ ``` + - **Schema**: + ```python + schema = { + "baseSelector": ".product", + "fields": [ + {"name": "title", "selector": ".title", "type": "text"}, + {"name": "features", "type": "list", "selector": ".features .feature", "fields": [ + {"name": "feature", "type": "text"} + ]} + ] + } + ``` + - **Explanation**: + - `list`: Captures each `.feature` item within `.features`, outputting an array of features under the `features` field. + +#### **7. Transformations for Field Values** + - Transformations allow you to modify extracted values (e.g., converting to lowercase). + - Supported transformations: `lowercase`, `uppercase`, `strip`. + - **Example HTML**: + ```html +
+ <div class="product">
+   <h2 class="title">Special Product</h2>
+ </div>
+ ``` + - **Schema**: + ```python + schema = { + "baseSelector": ".product", + "fields": [ + {"name": "title", "selector": ".title", "type": "text", "transform": "uppercase"} + ] + } + ``` + - **Explanation**: The `transform` property changes the `title` to uppercase, useful for standardized outputs. + +#### **8. Full JSON-CSS Extraction Example** + - Combining all elements in a single schema example for a comprehensive crawl: + - **Example HTML**: + ```html +
+ <div class="product">
+   <h2 class="title">Featured Product</h2>
+   <img class="product-image" src="featured.jpg" alt="">
+   <span class="price">$99.99</span>
+   <p class="description">Best product of the year.</p>
+   <ul class="features">
+     <li class="feature">Durable</li>
+     <li class="feature">Eco-friendly</li>
+   </ul>
+ </div>
+ ``` + - **Schema**: + ```python + schema = { + "baseSelector": ".product", + "fields": [ + {"name": "title", "selector": ".title", "type": "text", "transform": "uppercase"}, + {"name": "image_url", "selector": ".product-image", "type": "attribute", "attribute": "src"}, + {"name": "price", "selector": ".price", "type": "regex", "pattern": r"\$(\d+\.\d+)"}, + {"name": "description", "selector": ".description", "type": "html"}, + {"name": "features", "type": "list", "selector": ".features .feature", "fields": [ + {"name": "feature", "type": "text"} + ]} + ] + } + ``` + - **Explanation**: This schema captures and transforms each aspect of the product, illustrating the JSON-CSS strategy’s versatility for structured extraction. + +#### **9. Wrap Up & Next Steps** + - Summarize JSON-CSS Extraction’s flexibility for structured, pattern-based extraction. + - Tease the next video: **10.2 LLM Extraction Strategy**, focusing on using language models to extract data based on intelligent content analysis. + +--- + +This outline covers each JSON-CSS Extraction option in Crawl4AI, with practical examples and schema configurations, making it a thorough guide for users.# Crawl4AI + +## Episode 11: Extraction Strategies: JSON CSS, LLM, and Cosine + +### Quick Intro +Introduce JSON CSS Extraction Strategy for structured data, LLM Extraction Strategy for intelligent parsing, and Cosine Strategy for clustering similar content. Demo: Use JSON CSS to scrape product details from an e-commerce site. + +Here’s a comprehensive outline for the **LLM Extraction Strategy** video, covering key details and example applications. + +--- + +### **10.2 LLM Extraction Strategy** + +#### **1. Introduction to LLM Extraction Strategy** + - The LLM Extraction Strategy leverages language models to interpret and extract structured data from complex web content. 
+ - Unlike traditional CSS selectors, this strategy uses natural language instructions and schemas to guide the extraction, ideal for unstructured or diverse content. + - Supports **OpenAI**, **Azure OpenAI**, **HuggingFace**, and **Ollama** models, enabling flexibility with both proprietary and open-source providers. + +#### **2. Key Components of LLM Extraction Strategy** + - **Provider**: Specifies the LLM provider (e.g., OpenAI, HuggingFace, Azure). + - **API Token**: Required for most providers, except Ollama (local LLM model). + - **Instruction**: Custom extraction instructions sent to the model, providing flexibility in how the data is structured and extracted. + - **Schema**: Optional, defines structured fields to organize extracted data into JSON format. + - **Extraction Type**: Supports `"block"` for simpler text blocks or `"schema"` when a structured output format is required. + - **Chunking Parameters**: Breaks down large documents, with options to adjust chunk size and overlap rate for more accurate extraction across lengthy texts. + +#### **3. Basic Extraction Example: OpenAI Model Pricing** + - **Goal**: Extract model names and their input and output fees from the OpenAI pricing page. + - **Schema Definition**: + - **Model Name**: Text for model identification. + - **Input Fee**: Token cost for input processing. + - **Output Fee**: Token cost for output generation. 
+ + - **Schema**: + ```python + class OpenAIModelFee(BaseModel): + model_name: str = Field(..., description="Name of the OpenAI model.") + input_fee: str = Field(..., description="Fee for input token for the OpenAI model.") + output_fee: str = Field(..., description="Fee for output token for the OpenAI model.") + ``` + + - **Example Code**: + ```python + async def extract_openai_pricing(): + async with AsyncWebCrawler() as crawler: + result = await crawler.arun( + url="https://openai.com/api/pricing/", + extraction_strategy=LLMExtractionStrategy( + provider="openai/gpt-4o", + api_token=os.getenv("OPENAI_API_KEY"), + schema=OpenAIModelFee.schema(), + extraction_type="schema", + instruction="Extract model names and fees for input and output tokens from the page." + ), + bypass_cache=True + ) + print(result.extracted_content) + ``` + + - **Explanation**: + - The extraction strategy combines a schema and detailed instruction to guide the LLM in capturing structured data. + - Each model’s name, input fee, and output fee are extracted in a JSON format. + +#### **4. Knowledge Graph Extraction Example** + - **Goal**: Extract entities and their relationships from a document for use in a knowledge graph. + - **Schema Definition**: + - **Entities**: Individual items with descriptions (e.g., people, organizations). + - **Relationships**: Connections between entities, including descriptions and relationship types. 
+ + - **Schema**: + ```python + class Entity(BaseModel): + name: str + description: str + + class Relationship(BaseModel): + entity1: Entity + entity2: Entity + description: str + relation_type: str + + class KnowledgeGraph(BaseModel): + entities: List[Entity] + relationships: List[Relationship] + ``` + + - **Example Code**: + ```python + async def extract_knowledge_graph(): + extraction_strategy = LLMExtractionStrategy( + provider="azure/gpt-4o-mini", + api_token=os.getenv("AZURE_API_KEY"), + schema=KnowledgeGraph.schema(), + extraction_type="schema", + instruction="Extract entities and relationships from the content to build a knowledge graph." + ) + async with AsyncWebCrawler() as crawler: + result = await crawler.arun( + url="https://example.com/some-article", + extraction_strategy=extraction_strategy, + bypass_cache=True + ) + print(result.extracted_content) + ``` + + - **Explanation**: + - In this setup, the LLM extracts entities and their relationships based on the schema and instruction. + - The schema organizes results into a JSON-based knowledge graph format. + +#### **5. Key Settings in LLM Extraction** + - **Chunking Options**: + - For long pages, set `chunk_token_threshold` to specify maximum token count per section. + - Adjust `overlap_rate` to control the overlap between chunks, useful for contextual consistency. + - **Example**: + ```python + extraction_strategy = LLMExtractionStrategy( + provider="openai/gpt-4", + api_token=os.getenv("OPENAI_API_KEY"), + chunk_token_threshold=3000, + overlap_rate=0.2, # 20% overlap between chunks + instruction="Extract key insights and relationships." + ) + ``` + - This setup ensures that longer texts are divided into manageable chunks with slight overlap, enhancing the quality of extraction. + +#### **6. Flexible Provider Options for LLM Extraction** + - **Using Proprietary Models**: OpenAI, Azure, and HuggingFace provide robust language models, often suited for complex or detailed extractions. 
+ - **Using Open-Source Models**: Ollama and other open-source models can be deployed locally, suitable for offline or cost-effective extraction.
+ - **Example Call**:
+ ```python
+ await extract_structured_data_using_llm("huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct", os.getenv("HUGGINGFACE_API_KEY"))
+ await extract_structured_data_using_llm("openai/gpt-4o", os.getenv("OPENAI_API_KEY"))
+ await extract_structured_data_using_llm("ollama/llama3.2")
+ ```
+
+#### **7. Complete Example of LLM Extraction Setup**
+ - Code to run both the OpenAI pricing and Knowledge Graph extractions, using various providers:
+ ```python
+ async def main():
+     await extract_openai_pricing()
+     await extract_knowledge_graph()
+
+ if __name__ == "__main__":
+     asyncio.run(main())
+ ```
+
+#### **8. Wrap Up & Next Steps**
+ - Recap the power of LLM extraction for handling unstructured or complex data extraction tasks.
+ - Tease the next video: **10.3 Cosine Similarity Strategy** for clustering similar content based on semantic similarity.
+
+---
+
+This outline explains LLM Extraction in Crawl4AI, with examples showing how to extract structured data using custom schemas and instructions. It demonstrates flexibility with multiple providers, ensuring practical application for different use cases.
+
+Here’s a structured outline for the **Cosine Similarity Strategy** video, covering key concepts, configuration, and a practical example.
+
+---
+
+### **10.3 Cosine Similarity Strategy**
+
+#### **1. 
Introduction to Cosine Similarity Strategy** + - The Cosine Similarity Strategy clusters content by semantic similarity, offering an efficient alternative to LLM-based extraction, especially when speed is a priority. + - Ideal for grouping similar sections of text, this strategy is well-suited for pages with content sections that may need to be classified or tagged, like news articles, product descriptions, or reviews. + +#### **2. Key Configuration Options** + - **semantic_filter**: A keyword-based filter to focus on relevant content. + - **word_count_threshold**: Minimum number of words per cluster, filtering out shorter, less meaningful clusters. + - **max_dist**: Maximum allowable distance between elements in clusters, impacting cluster tightness. + - **linkage_method**: Method for hierarchical clustering, such as `'ward'` (for well-separated clusters). + - **top_k**: Specifies the number of top categories for each cluster. + - **model_name**: Defines the model for embeddings, such as `sentence-transformers/all-MiniLM-L6-v2`. + - **sim_threshold**: Minimum similarity threshold for filtering, allowing control over cluster relevance. + +#### **3. How Cosine Similarity Clustering Works** + - **Step 1**: Embeddings are generated for each text section, transforming them into vectors that capture semantic meaning. + - **Step 2**: Hierarchical clustering groups similar sections based on cosine similarity, forming clusters with related content. + - **Step 3**: Clusters are filtered based on word count, removing those below the `word_count_threshold`. + - **Step 4**: Each cluster is then categorized with tags, if enabled, providing context to each grouped content section. + +#### **4. Example Use Case: Clustering Blog Article Sections** + - **Goal**: Group related sections of a blog or news page to identify distinct topics or discussion areas. 
+ - **Example HTML Sections**: + ```text + "The economy is showing signs of recovery, with markets up this quarter.", + "In the sports world, several major teams are preparing for the upcoming season.", + "New advancements in AI technology are reshaping the tech landscape.", + "Market analysts are optimistic about continued growth in tech stocks." + ``` + + - **Code Setup**: + ```python + async def extract_blog_sections(): + extraction_strategy = CosineStrategy( + word_count_threshold=15, + max_dist=0.3, + sim_threshold=0.2, + model_name="sentence-transformers/all-MiniLM-L6-v2", + top_k=2 + ) + async with AsyncWebCrawler() as crawler: + url = "https://example.com/blog-page" + result = await crawler.arun( + url=url, + extraction_strategy=extraction_strategy, + bypass_cache=True + ) + print(result.extracted_content) + ``` + + - **Explanation**: + - **word_count_threshold**: Ensures only clusters with meaningful content are included. + - **sim_threshold**: Filters out clusters with low similarity, focusing on closely related sections. + - **top_k**: Selects top tags, useful for identifying main topics. + +#### **5. Applying Semantic Filtering with Cosine Similarity** + - **Semantic Filter**: Filters sections based on relevance to a specific keyword, such as “technology” for tech articles. + - **Example Code**: + ```python + extraction_strategy = CosineStrategy( + semantic_filter="technology", + word_count_threshold=10, + max_dist=0.25, + model_name="sentence-transformers/all-MiniLM-L6-v2" + ) + ``` + - **Explanation**: + - **semantic_filter**: Only sections with high similarity to the “technology” keyword will be included in the clustering, making it easy to focus on specific topics within a mixed-content page. + +#### **6. 
Clustering Product Reviews by Similarity** + - **Goal**: Organize product reviews by themes, such as “price,” “quality,” or “durability.” + - **Example Reviews**: + ```text + "The quality of this product is outstanding and well worth the price.", + "I found the product to be durable but a bit overpriced.", + "Great value for the money and long-lasting.", + "The build quality is good, but I expected a lower price point." + ``` + + - **Code Setup**: + ```python + async def extract_product_reviews(): + extraction_strategy = CosineStrategy( + word_count_threshold=20, + max_dist=0.35, + sim_threshold=0.25, + model_name="sentence-transformers/all-MiniLM-L6-v2" + ) + async with AsyncWebCrawler() as crawler: + url = "https://example.com/product-reviews" + result = await crawler.arun( + url=url, + extraction_strategy=extraction_strategy, + bypass_cache=True + ) + print(result.extracted_content) + ``` + + - **Explanation**: + - This configuration clusters similar reviews, grouping feedback by common themes, helping businesses understand customer sentiments around particular product aspects. + +#### **7. Performance Advantages of Cosine Strategy** + - **Speed**: The Cosine Similarity Strategy is faster than LLM-based extraction, as it doesn’t rely on API calls to external LLMs. + - **Local Processing**: The strategy runs locally with pre-trained sentence embeddings, ideal for high-throughput scenarios where cost and latency are concerns. + - **Comparison**: With a well-optimized local model, this method can perform clustering on large datasets quickly, making it suitable for tasks requiring rapid, repeated analysis. + +#### **8. Full Code Example for Clustering News Articles** + - **Code**: + ```python + async def main(): + await extract_blog_sections() + await extract_product_reviews() + + if __name__ == "__main__": + asyncio.run(main()) + ``` + +#### **9. 
Wrap Up & Next Steps** + - Recap the efficiency and effectiveness of Cosine Similarity for clustering related content quickly. + - Close with a reminder of Crawl4AI’s flexibility across extraction strategies, and prompt users to experiment with different settings to optimize clustering for their specific content. + +--- + +This outline covers Cosine Similarity Strategy’s speed and effectiveness, providing examples that showcase its potential for clustering various content types efficiently.# Crawl4AI + +## Episode 12: Session-Based Crawling for Dynamic Websites + +### Quick Intro +Show session management for handling websites with multiple pages or actions (like “load more” buttons). Demo: Crawl a paginated content page, persisting session data across multiple requests. + +Here’s a detailed outline for the **Session-Based Crawling for Dynamic Websites** video, explaining why sessions are necessary, how to use them, and providing practical examples and a visual diagram to illustrate the concept. + +--- + +### **11. Session-Based Crawling for Dynamic Websites** + +#### **1. Introduction to Session-Based Crawling** + - **What is Session-Based Crawling**: Session-based crawling maintains a continuous browsing session across multiple page states, allowing the crawler to interact with a page and retrieve content that loads dynamically or based on user interactions. + - **Why It’s Needed**: + - In static pages, all content is available directly from a single URL. + - In dynamic websites, content often loads progressively or based on user actions (e.g., clicking “load more,” submitting forms, scrolling). + - Session-based crawling helps simulate user actions, capturing content that is otherwise hidden until specific actions are taken. + +#### **2. 
Conceptual Diagram for Session-Based Crawling** + + ```mermaid + graph TD + Start[Start Session] --> S1["Initial State (S1)"] + S1 -->|Crawl| Content1[Extract Content S1] + S1 -->|Action: Click Load More| S2[State S2] + S2 -->|Crawl| Content2[Extract Content S2] + S2 -->|Action: Scroll Down| S3[State S3] + S3 -->|Crawl| Content3[Extract Content S3] + S3 -->|Action: Submit Form| S4[Final State] + S4 -->|Crawl| Content4[Extract Content S4] + Content4 --> End[End Session] + ``` + + - **Explanation of Diagram**: + - **Start**: Initializes the session and opens the starting URL. + - **State Transitions**: Each action (e.g., clicking “load more,” scrolling) transitions to a new state, where additional content becomes available. + - **Session Persistence**: Keeps the same browsing session active, preserving the state and allowing for a sequence of actions to unfold. + - **End**: After reaching the final state, the session ends, and all accumulated content has been extracted. + +#### **3. Key Components of Session-Based Crawling in Crawl4AI** + - **Session ID**: A unique identifier to maintain the state across requests, allowing the crawler to “remember” previous actions. + - **JavaScript Execution**: Executes JavaScript commands (e.g., clicks, scrolls) to simulate interactions. + - **Wait Conditions**: Ensures the crawler waits for content to load in each state before moving on. + - **Sequential State Transitions**: By defining actions and wait conditions between states, the crawler can navigate through the page as a user would. + +#### **4. Basic Session Example: Multi-Step Content Loading** + - **Goal**: Crawl an article feed that requires several “load more” clicks to display additional content.
+ - **Code**: + ```python + async def crawl_article_feed(): + async with AsyncWebCrawler() as crawler: + session_id = "feed_session" + + for page in range(3): + result = await crawler.arun( + url="https://example.com/articles", + session_id=session_id, + js_code="document.querySelector('.load-more-button').click();" if page > 0 else None, + wait_for="css:.article", + css_selector=".article" # Target article elements + ) + print(f"Page {page + 1}: Extracted {len(result.extracted_content)} articles") + ``` + - **Explanation**: + - **session_id**: Ensures all requests share the same browsing state. + - **js_code**: Clicks the “load more” button after the initial page load, expanding content on each iteration. + - **wait_for**: Ensures articles have loaded after each click before extraction. + +#### **5. Advanced Example: E-Commerce Product Search with Filter Selection** + - **Goal**: Interact with filters on an e-commerce page to extract products based on selected criteria. + - **Example Steps**: + 1. **State 1**: Load the main product page. + 2. **State 2**: Apply a filter (e.g., “On Sale”) by selecting a checkbox. + 3. **State 3**: Scroll to load additional products and capture updated results. 
+ + - **Code**: + ```python + async def extract_filtered_products(): + async with AsyncWebCrawler() as crawler: + session_id = "product_session" + + # Step 1: Open product page + result = await crawler.arun( + url="https://example.com/products", + session_id=session_id, + wait_for="css:.product-item" + ) + + # Step 2: Apply filter (e.g., "On Sale") + result = await crawler.arun( + url="https://example.com/products", + session_id=session_id, + js_code="document.querySelector('#sale-filter-checkbox').click();", + wait_for="css:.product-item" + ) + + # Step 3: Scroll to load additional products + for _ in range(2): # Scroll down twice + result = await crawler.arun( + url="https://example.com/products", + session_id=session_id, + js_code="window.scrollTo(0, document.body.scrollHeight);", + wait_for="css:.product-item" + ) + print(f"Loaded {len(result.extracted_content)} products after scroll") + ``` + - **Explanation**: + - **State Persistence**: Each action (filter selection and scroll) builds on the previous session state. + - **Multiple Interactions**: Combines clicking a filter with scrolling, demonstrating how the session preserves these actions. + +#### **6. Key Benefits of Session-Based Crawling** + - **Accessing Hidden Content**: Retrieves data that loads only after user actions. + - **Simulating User Behavior**: Handles interactive elements such as “load more” buttons, dropdowns, and filters. + - **Maintaining Continuity Across States**: Enables a sequential process, moving logically from one state to the next, capturing all desired content without reloading the initial state each time. + +#### **7. Additional Configuration Tips** + - **Manage Session End**: Always conclude the session after the final state to release resources. + - **Optimize with Wait Conditions**: Use `wait_for` to ensure complete loading before each extraction. 
+ - **Handling Errors in Session-Based Crawling**: Include error handling for interactions that may fail, ensuring robustness across state transitions. + +#### **8. Complete Code Example: Multi-Step Session Workflow** + - **Example**: + ```python + async def main(): + await crawl_article_feed() + await extract_filtered_products() + + if __name__ == "__main__": + asyncio.run(main()) + ``` + +#### **9. Wrap Up & Next Steps** + - Recap the usefulness of session-based crawling for dynamic content extraction. + - Tease the next video: **Hooks and Custom Workflow with AsyncWebCrawler** to cover advanced customization options for further control over the crawling process. + +--- + +This outline covers session-based crawling from both a conceptual and practical perspective, helping users understand its importance, configure it effectively, and use it to handle complex dynamic content.# Crawl4AI + +## Episode 13: Chunking Strategies for Large Text Processing + +### Quick Intro +Explain Regex, NLP, and Fixed-Length chunking, and when to use each. Demo: Chunk a large article or document for processing by topics or sentences. + +Here’s a structured outline for the **Chunking Strategies for Large Text Processing** video, explaining each strategy, when to use it, and showing how chunking supports effective data aggregation within extraction. + +--- + +### **12. Chunking Strategies for Large Text Processing** + +#### **1. Introduction to Chunking in Crawl4AI** + - **What is Chunking**: Chunking is the process of dividing large text into manageable sections or “chunks,” enabling efficient processing in extraction tasks. + - **Why It’s Needed**: + - When processing large text, feeding it directly into an extraction function (like `F(x)`) can overwhelm memory or token limits.
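As a plain-Python sketch of that chunk-then-aggregate idea — the function names here are illustrative stand-ins for this outline, not Crawl4AI APIs, and the "extraction function" is a trivial word counter:

```python
def chunk_by_words(text, chunk_size):
    """Split text into chunks of at most chunk_size words."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]

def extract(chunk):
    # Stand-in for the real extraction function F(x); here it just counts words.
    return {"words": len(chunk.split())}

def extract_large_text(text, chunk_size=5):
    # F is applied per chunk; the per-chunk results are aggregated at the end.
    results = [extract(c) for c in chunk_by_words(text, chunk_size)]
    return {"words": sum(r["words"] for r in results)}

text = "one two three four five six seven eight nine ten eleven twelve"
print(chunk_by_words(text, 5))   # three chunks: 5 + 5 + 2 words
print(extract_large_text(text))  # {'words': 12}
```

Each real chunking strategy below is a different implementation of `chunk_by_words`; the surrounding apply-and-aggregate loop stays the same.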
+ - Chunking breaks down `x` (the text) into smaller pieces, which are processed sequentially or in parallel by the extraction function, with the final result being an aggregation of all chunks’ processed output. + +#### **2. Key Chunking Strategies and Use Cases** + - Crawl4AI offers various chunking strategies to suit different text structures, chunk sizes, and processing requirements. + - **Choosing a Strategy**: Select based on the type of text (e.g., articles, transcripts) and extraction needs (e.g., simple splitting or context-sensitive processing). + +#### **3. Strategy 1: Regex-Based Chunking** + - **Description**: Uses regular expressions to split text based on specified patterns (e.g., paragraphs or section breaks). + - **Use Case**: Ideal for dividing text by paragraphs or larger logical blocks where sections are clearly separated by line breaks or punctuation. + - **Example**: + - **Pattern**: `r'\n\n'` for double line breaks. + ```python + chunker = RegexChunking(patterns=[r'\n\n']) + text_chunks = chunker.chunk(long_text) + print(text_chunks) # Output: List of paragraphs + ``` + - **Pros**: Flexible for pattern-based chunking. + - **Cons**: Limited to text with consistent formatting. + +#### **4. Strategy 2: NLP Sentence-Based Chunking** + - **Description**: Uses NLP to split text by sentences, ensuring grammatically complete segments. + - **Use Case**: Useful for extracting individual statements, such as in news articles, quotes, or legal text. + - **Example**: + ```python + chunker = NlpSentenceChunking() + sentence_chunks = chunker.chunk(long_text) + print(sentence_chunks) # Output: List of sentences + ``` + - **Pros**: Maintains sentence structure, ideal for tasks needing semantic completeness. + - **Cons**: May create very small chunks, which could limit contextual extraction. + +#### **5. 
Strategy 3: Topic-Based Segmentation Using TextTiling** + - **Description**: Segments text into topics using TextTiling, identifying topic shifts and key segments. + - **Use Case**: Ideal for long articles, reports, or essays where each section covers a different topic. + - **Example**: + ```python + chunker = TopicSegmentationChunking(num_keywords=3) + topic_chunks = chunker.chunk_with_topics(long_text) + print(topic_chunks) # Output: List of topic segments with keywords + ``` + - **Pros**: Groups related content, preserving topical coherence. + - **Cons**: Depends on identifiable topic shifts, which may not be present in all texts. + +#### **6. Strategy 4: Fixed-Length Word Chunking** + - **Description**: Splits text into chunks based on a fixed number of words. + - **Use Case**: Ideal for text where exact segment size is required, such as processing word-limited documents for LLMs. + - **Example**: + ```python + chunker = FixedLengthWordChunking(chunk_size=100) + word_chunks = chunker.chunk(long_text) + print(word_chunks) # Output: List of 100-word chunks + ``` + - **Pros**: Ensures uniform chunk sizes, suitable for token-based extraction limits. + - **Cons**: May split sentences, affecting semantic coherence. + +#### **7. Strategy 5: Sliding Window Chunking** + - **Description**: Uses a fixed window size with a step, creating overlapping chunks to maintain context. + - **Use Case**: Useful for maintaining context across sections, as with documents where context is needed for neighboring sections. + - **Example**: + ```python + chunker = SlidingWindowChunking(window_size=100, step=50) + window_chunks = chunker.chunk(long_text) + print(window_chunks) # Output: List of overlapping word chunks + ``` + - **Pros**: Retains context across adjacent chunks, ideal for complex semantic extraction. + - **Cons**: Overlap increases data size, potentially impacting processing time. + +#### **8. 
Strategy 6: Overlapping Window Chunking** + - **Description**: Similar to sliding windows but with a defined overlap, allowing chunks to share content at the edges. + - **Use Case**: Suitable for handling long texts with essential overlapping information, like research articles or medical records. + - **Example**: + ```python + chunker = OverlappingWindowChunking(window_size=1000, overlap=100) + overlap_chunks = chunker.chunk(long_text) + print(overlap_chunks) # Output: List of overlapping chunks with defined overlap + ``` + - **Pros**: Allows controlled overlap for consistent content coverage across chunks. + - **Cons**: Redundant data in overlapping areas may increase computation. + +#### **9. Practical Example: Using Chunking with an Extraction Strategy** + - **Goal**: Combine chunking with an extraction strategy to process large text effectively. + - **Example Code**: + ```python + from crawl4ai.extraction_strategy import LLMExtractionStrategy + + async def extract_large_text(): + # Initialize chunker and extraction strategy + chunker = FixedLengthWordChunking(chunk_size=200) + extraction_strategy = LLMExtractionStrategy(provider="openai/gpt-4", api_token="your_api_token") + + # Split text into chunks + text_chunks = chunker.chunk(large_text) + + async with AsyncWebCrawler() as crawler: + for chunk in text_chunks: + result = await crawler.arun( + url="https://example.com", + extraction_strategy=extraction_strategy, + content=chunk + ) + print(result.extracted_content) + ``` + + - **Explanation**: + - `chunker.chunk()`: Divides the `large_text` into smaller segments based on the chosen strategy. + - `extraction_strategy`: Processes each chunk separately, and results are then aggregated to form the final output. + +#### **10. Choosing the Right Chunking Strategy** + - **Text Structure**: If text has clear sections (e.g., paragraphs, topics), use Regex or Topic Segmentation. 
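To make the window mechanics of the sliding/overlapping strategies above concrete, here is a plain-Python sketch — a toy stand-in written for this outline, not the Crawl4AI chunker classes:

```python
def sliding_window_chunks(text, window_size, step):
    """Overlapping word windows: each chunk starts `step` words after the last."""
    words = text.split()
    if window_size >= len(words):
        return [" ".join(words)]  # text shorter than one window: single chunk
    return [
        " ".join(words[i:i + window_size])
        for i in range(0, len(words) - window_size + 1, step)
    ]

sample = " ".join(str(i) for i in range(10))  # "0 1 2 ... 9"
for chunk in sliding_window_chunks(sample, window_size=4, step=2):
    print(chunk)
# Consecutive chunks share window_size - step = 2 words of context
```

When `step` equals `window_size` this degenerates to fixed-length chunking; smaller steps trade redundancy for preserved context across chunk boundaries.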
+ - **Extraction Needs**: If context is crucial, consider Sliding or Overlapping Window Chunking. + - **Processing Constraints**: For word-limited extractions (e.g., LLMs with token limits), Fixed-Length Word Chunking is often most effective. + +#### **11. Wrap Up & Next Steps** + - Recap the benefits of each chunking strategy and when to use them in extraction workflows. + - Tease the next video: **Hooks and Custom Workflow with AsyncWebCrawler**, focusing on customizing crawler behavior with hooks for a fine-tuned extraction process. + +--- + +This outline provides a complete understanding of chunking strategies, explaining each method’s strengths and best-use scenarios to help users process large texts effectively in Crawl4AI.# Crawl4AI + +## Episode 14: Hooks and Custom Workflow with AsyncWebCrawler + +### Quick Intro +Cover hooks (`on_browser_created`, `before_goto`, `after_goto`) to add custom workflows. Demo: Use hooks to add custom cookies or headers, log HTML, or trigger specific events on page load. + +Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCrawler** video, covering each hook’s purpose, usage, and example implementations. + +--- + +### **13. Hooks and Custom Workflow with AsyncWebCrawler** + +#### **1. Introduction to Hooks in Crawl4AI** + - **What are Hooks**: Hooks are customizable entry points in the crawling process that allow users to inject custom actions or logic at specific stages. + - **Why Use Hooks**: + - They enable fine-grained control over the crawling workflow. + - Useful for performing additional tasks (e.g., logging, modifying headers) dynamically during the crawl. + - Hooks provide the flexibility to adapt the crawler to complex site structures or unique project needs. + +#### **2. 
Overview of Available Hooks** + - Crawl4AI offers seven key hooks to modify and control different stages in the crawling lifecycle: + - `on_browser_created` + - `on_user_agent_updated` + - `on_execution_started` + - `before_goto` + - `after_goto` + - `before_return_html` + - `before_retrieve_html` + +#### **3. Hook-by-Hook Explanation and Examples** + +--- + +##### **Hook 1: `on_browser_created`** + - **Purpose**: Triggered right after the browser instance is created. + - **Use Case**: + - Initializing browser-specific settings or performing setup actions. + - Configuring browser extensions or scripts before any page is opened. + - **Example**: + ```python + async def log_browser_creation(browser): + print("Browser instance created:", browser) + + crawler.set_hook('on_browser_created', log_browser_creation) + ``` + - **Explanation**: This hook logs the browser creation event, useful for tracking when a new browser instance starts. + +--- + +##### **Hook 2: `on_user_agent_updated`** + - **Purpose**: Called whenever the user agent string is updated. + - **Use Case**: + - Modifying the user agent based on page requirements, e.g., changing to a mobile user agent for mobile-only pages. + - **Example**: + ```python + def update_user_agent(user_agent): + print(f"User Agent Updated: {user_agent}") + + crawler.set_hook('on_user_agent_updated', update_user_agent) + crawler.update_user_agent("Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)") + ``` + - **Explanation**: This hook provides a callback every time the user agent changes, helpful for debugging or dynamically altering user agent settings based on conditions. + +--- + +##### **Hook 3: `on_execution_started`** + - **Purpose**: Called right before the crawler begins any interaction (e.g., JavaScript execution, clicks). + - **Use Case**: + - Performing setup actions, such as inserting cookies or initiating custom scripts. 
+ - **Example**: + ```python + async def log_execution_start(page): + print("Execution started on page:", page.url) + + crawler.set_hook('on_execution_started', log_execution_start) + ``` + - **Explanation**: Logs the start of any major interaction on the page, ideal for cases where you want to monitor each interaction. + +--- + +##### **Hook 4: `before_goto`** + - **Purpose**: Triggered before navigating to a new URL with `page.goto()`. + - **Use Case**: + - Modifying request headers or setting up conditions right before the page loads. + - Adding headers or dynamically adjusting options for specific URLs. + - **Example**: + ```python + async def modify_headers_before_goto(page): + await page.set_extra_http_headers({"X-Custom-Header": "CustomValue"}) + print("Custom headers set before navigation") + + crawler.set_hook('before_goto', modify_headers_before_goto) + ``` + - **Explanation**: This hook allows injecting headers or altering settings based on the page’s needs, particularly useful for pages with custom requirements. + +--- + +##### **Hook 5: `after_goto`** + - **Purpose**: Executed immediately after a page has loaded (after `page.goto()`). + - **Use Case**: + - Checking the loaded page state, modifying the DOM, or performing post-navigation actions (e.g., scrolling). + - **Example**: + ```python + async def post_navigation_scroll(page): + await page.evaluate("window.scrollTo(0, document.body.scrollHeight)") + print("Scrolled to the bottom after navigation") + + crawler.set_hook('after_goto', post_navigation_scroll) + ``` + - **Explanation**: This hook scrolls to the bottom of the page after loading, which can help load dynamically added content like infinite scroll elements. + +--- + +##### **Hook 6: `before_return_html`** + - **Purpose**: Called right before HTML content is retrieved and returned. + - **Use Case**: + - Removing overlays or cleaning up the page for a cleaner HTML extraction. 
+ - **Example**: + ```python + async def remove_advertisements(page, html): + await page.evaluate("document.querySelectorAll('.ad-banner').forEach(el => el.remove());") + print("Advertisements removed before returning HTML") + + crawler.set_hook('before_return_html', remove_advertisements) + ``` + - **Explanation**: The hook removes ad banners from the HTML before it’s retrieved, ensuring a cleaner data extraction. + +--- + +##### **Hook 7: `before_retrieve_html`** + - **Purpose**: Runs right before Crawl4AI initiates HTML retrieval. + - **Use Case**: + - Finalizing any page adjustments (e.g., setting timers, waiting for specific elements). + - **Example**: + ```python + async def wait_for_content_before_retrieve(page): + await page.wait_for_selector('.main-content') + print("Main content loaded, ready to retrieve HTML") + + crawler.set_hook('before_retrieve_html', wait_for_content_before_retrieve) + ``` + - **Explanation**: This hook waits for the main content to load before retrieving the HTML, ensuring that all essential content is captured. + +#### **4. Setting Hooks in Crawl4AI** + - **How to Set Hooks**: + - Use `set_hook` to define a custom function for each hook. + - Each hook function can be asynchronous (useful for actions like waiting or retrieving async data). + - **Example Setup**: + ```python + crawler.set_hook('on_browser_created', log_browser_creation) + crawler.set_hook('before_goto', modify_headers_before_goto) + crawler.set_hook('after_goto', post_navigation_scroll) + ``` + +#### **5. Complete Example: Using Hooks for a Customized Crawl Workflow** + - **Goal**: Log each key step, set custom headers before navigation, and clean up the page before retrieving HTML. 
+ - **Example Code**: + ```python + async def custom_crawl(): + async with AsyncWebCrawler() as crawler: + # Set hooks for custom workflow + crawler.set_hook('on_browser_created', log_browser_creation) + crawler.set_hook('before_goto', modify_headers_before_goto) + crawler.set_hook('after_goto', post_navigation_scroll) + crawler.set_hook('before_return_html', remove_advertisements) + + # Perform the crawl + url = "https://example.com" + result = await crawler.arun(url=url) + print(result.html) # Display or process HTML + ``` + +#### **6. Benefits of Using Hooks in Custom Crawling Workflows** + - **Enhanced Control**: Hooks offer precise control over each stage, allowing adjustments based on content and structure. + - **Efficient Modifications**: Avoid reloading or restarting the session; hooks can alter actions dynamically. + - **Context-Sensitive Actions**: Hooks enable custom logic tailored to specific pages or sections, maximizing extraction quality. + +#### **7. Wrap Up & Next Steps** + - Recap how hooks empower customized workflows in Crawl4AI, enabling flexibility at every stage. + - Tease the next video: **Automating Post-Processing with Crawl4AI**, covering automated steps after data extraction. + +--- + +This outline provides a thorough understanding of hooks, their practical applications, and examples for customizing the crawling workflow in Crawl4AI. \ No newline at end of file diff --git a/docs/nootbooks/Crawl4AI_v0.3.72_Release_Announcement.ipynb b/docs/nootbooks/Crawl4AI_v0.3.72_Release_Announcement.ipynb new file mode 100644 index 00000000..053bc6c5 --- /dev/null +++ b/docs/nootbooks/Crawl4AI_v0.3.72_Release_Announcement.ipynb @@ -0,0 +1,235 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# 🚀 Crawl4AI v0.3.72 Release Announcement\n", + "\n", + "Welcome to the new release of **Crawl4AI v0.3.72**! This notebook highlights the latest features and demonstrates how they work in real-time. 
Follow along to see each feature in action!\n", + "\n", + "### What’s New?\n", + "- ✨ `Fit Markdown`: Extracts only the main content from articles and blogs\n", + "- 🛡️ **Magic Mode**: Comprehensive anti-bot detection bypass\n", + "- 🌐 **Multi-browser support**: Switch between Chromium, Firefox, WebKit\n", + "- 🔍 **Knowledge Graph Extraction**: Generate structured graphs of entities & relationships from any URL\n", + "- 🤖 **Crawl4AI GPT Assistant**: Chat directly with our AI assistant for help, code generation, and faster learning (available [here](https://tinyurl.com/your-gpt-assistant-link))\n", + "\n", + "---\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 📥 Setup\n", + "To start, we'll install `Crawl4AI` along with Playwright and `nest_asyncio` to ensure compatibility with Colab’s asynchronous environment." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Install Crawl4AI and dependencies\n", + "!pip install crawl4ai\n", + "!playwright install\n", + "!pip install nest_asyncio" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Import nest_asyncio and apply it to allow asyncio in Colab\n", + "import nest_asyncio\n", + "nest_asyncio.apply()\n", + "\n", + "print('Setup complete!')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "## ✨ Feature 1: `Fit Markdown`\n", + "Extracts only the main content from articles and blog pages, removing sidebars, ads, and other distractions.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import asyncio\n", + "from crawl4ai import AsyncWebCrawler\n", + "\n", + "async def fit_markdown_demo():\n", + " async with AsyncWebCrawler() as crawler:\n", + " result = await crawler.arun(url=\"https://janineintheworld.com/places-to-visit-in-central-mexico\")\n", + " 
print(result.fit_markdown) # Shows main content in Markdown format\n", + "\n", + "# Run the demo\n", + "await fit_markdown_demo()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "## 🛡️ Feature 2: Magic Mode\n", + "Magic Mode bypasses anti-bot detection to make crawling more reliable on protected websites.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "async def magic_mode_demo():\n", + " async with AsyncWebCrawler() as crawler:\n", + " result = await crawler.arun(\n", + " url=\"https://www.reuters.com/markets/us/global-markets-view-usa-pix-2024-08-29/\",\n", + " magic=True # Enables magic mode\n", + " )\n", + " print(result.markdown) # Shows the full content in Markdown format\n", + "\n", + "# Run the demo\n", + "await magic_mode_demo()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "## 🌐 Feature 3: Multi-Browser Support\n", + "Crawl4AI now supports Chromium, Firefox, and WebKit. Here’s how to specify Firefox for a crawl.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "async def multi_browser_demo():\n", + " async with AsyncWebCrawler(browser_type=\"firefox\") as crawler: # Using Firefox instead of default Chromium\n", + " result = await crawler.arun(url=\"https://crawl4ai.com\")\n", + " print(result.markdown) # Shows content extracted using Firefox\n", + "\n", + "# Run the demo\n", + "await multi_browser_demo()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "## ✨ Put them all together\n", + "\n", + "Let's combine all the features to extract the main content from a blog post, bypass anti-bot detection, and generate a knowledge graph from the extracted content."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from crawl4ai.extraction_strategy import LLMExtractionStrategy\n", + "from pydantic import BaseModel\n", + "import json, os\n", + "from typing import List\n", + "\n", + "# Define classes for the knowledge graph structure\n", + "class Landmark(BaseModel):\n", + " name: str\n", + " description: str\n", + " activities: list[str] # E.g., visiting, sightseeing, relaxing\n", + "\n", + "class City(BaseModel):\n", + " name: str\n", + " description: str\n", + " landmarks: list[Landmark]\n", + " cultural_highlights: list[str] # E.g., food, music, traditional crafts\n", + "\n", + "class TravelKnowledgeGraph(BaseModel):\n", + " cities: list[City] # Central Mexican cities to visit\n", + "\n", + "async def combined_demo():\n", + " # Define the knowledge graph extraction strategy\n", + " strategy = LLMExtractionStrategy(\n", + " # provider=\"ollama/nemotron\",\n", + " provider='openai/gpt-4o-mini', # Or any other provider, including Ollama and open source models\n", + " api_token=os.getenv('OPENAI_API_KEY'), # In case of Ollama just pass \"no-token\"\n", + " schema=TravelKnowledgeGraph.schema(),\n", + " instruction=(\n", + " \"Extract cities, landmarks, and cultural highlights for places to visit in Central Mexico. 
\"\n", + " \"For each city, list main landmarks with descriptions and activities, as well as cultural highlights.\"\n", + " )\n", + " )\n", + "\n", + " # Set up the AsyncWebCrawler with multi-browser support, Magic Mode, and Fit Markdown\n", + " async with AsyncWebCrawler(browser_type=\"firefox\") as crawler:\n", + " result = await crawler.arun(\n", + " url=\"https://janineintheworld.com/places-to-visit-in-central-mexico\",\n", + " extraction_strategy=strategy,\n", + " bypass_cache=True,\n", + " magic=True\n", + " )\n", + " \n", + " # Display main article content in Fit Markdown format\n", + " print(\"Extracted Main Content:\\n\", result.fit_markdown)\n", + " \n", + " # Display extracted knowledge graph of cities, landmarks, and cultural highlights\n", + " if result.extracted_content:\n", + " travel_graph = json.loads(result.extracted_content)\n", + " print(\"\\nExtracted Knowledge Graph:\\n\", json.dumps(travel_graph, indent=2))\n", + "\n", + "# Run the combined demo\n", + "await combined_demo()\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "---\n", + "\n", + "## 🤖 Crawl4AI GPT Assistant\n", + "Chat with the Crawl4AI GPT Assistant for code generation, support, and learning Crawl4AI faster. Try it out [here](https://tinyurl.com/crawl4ai-gpt)!" 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "name": "python", + "version": "3.9" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/mkdocs.yml b/mkdocs.yml index 30136c61..52fdd579 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -33,13 +33,30 @@ nav: - 'Cosine Strategy': 'extraction/cosine.md' - 'Chunking': 'extraction/chunking.md' + - Tutorial: + - 'Episode 1: Introduction to Crawl4AI and Basic Installation': 'tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md' + - 'Episode 2: Overview of Advanced Features': 'tutorial/episode_02_Overview_of_Advanced_Features.md' + - 'Episode 3: Browser Configurations & Headless Crawling': 'tutorial/episode_03_Browser_Configurations_&_Headless_Crawling.md' + - 'Episode 4: Advanced Proxy and Security Settings': 'tutorial/episode_04_Advanced_Proxy_and_Security_Settings.md' + - 'Episode 5: JavaScript Execution and Dynamic Content Handling': 'tutorial/episode_05_JavaScript_Execution_and_Dynamic_Content_Handling.md' + - 'Episode 6: Magic Mode and Anti-Bot Protection': 'tutorial/episode_06_Magic_Mode_and_Anti-Bot_Protection.md' + - 'Episode 7: Content Cleaning and Fit Markdown': 'tutorial/episode_07_Content_Cleaning_and_Fit_Markdown.md' + - 'Episode 8: Media Handling: Images, Videos, and Audio': 'tutorial/episode_08_Media_Handling:_Images,_Videos,_and_Audio.md' + - 'Episode 9: Link Analysis and Smart Filtering': 'tutorial/episode_09_Link_Analysis_and_Smart_Filtering.md' + - 'Episode 10: Custom Headers, Identity, and User Simulation': 'tutorial/episode_10_Custom_Headers,_Identity,_and_User_Simulation.md' + - 'Episode 11.1: Extraction Strategies: JSON CSS': 'tutorial/episode_11_1_Extraction_Strategies:_JSON_CSS.md' + - 'Episode 11.2: Extraction Strategies: LLM': 'tutorial/episode_11_2_Extraction_Strategies:_LLM.md' + - 'Episode 11.3: 
Extraction Strategies: Cosine': 'tutorial/episode_11_3_Extraction_Strategies:_Cosine.md' + - 'Episode 12: Session-Based Crawling for Dynamic Websites': 'tutorial/episode_12_Session-Based_Crawling_for_Dynamic_Websites.md' + - 'Episode 13: Chunking Strategies for Large Text Processing': 'tutorial/episode_13_Chunking_Strategies_for_Large_Text_Processing.md' + - 'Episode 14: Hooks and Custom Workflow with AsyncWebCrawler': 'tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md' + - API Reference: - 'AsyncWebCrawler': 'api/async-webcrawler.md' - 'AsyncWebCrawler.arun()': 'api/arun.md' - 'CrawlResult': 'api/crawl-result.md' - 'Strategies': 'api/strategies.md' - theme: name: terminal palette: dark @@ -62,4 +79,4 @@ extra_css: extra_javascript: - assets/highlight.min.js - - assets/highlight_init.js \ No newline at end of file + - assets/highlight_init.js From 9307c19f356eefc0c16e34c097e845afebd36f9e Mon Sep 17 00:00:00 2001 From: UncleCode Date: Wed, 30 Oct 2024 20:39:35 +0800 Subject: [PATCH 5/8] Update documents, upload new version of quickstart. 
--- README.md | 5 +- docs/examples/quickstart.ipynb | 1393 ++++++++--------- docs/examples/quickstart_v0.ipynb | 735 +++++++++ docs/md_v2/assets/styles.css | 7 + ...tion_to_Crawl4AI_and_Basic_Installation.md | 15 +- ...pisode_02_Overview_of_Advanced_Features.md | 24 +- ...nd_Custom_Workflow_with_AsyncWebCrawler.md | 28 +- docs/md_v2/tutorial/tutorial.md | 34 +- ...rawl4AI_v0.3.72_Release_Announcement.ipynb | 0 mkdocs.yml | 39 +- 10 files changed, 1481 insertions(+), 799 deletions(-) create mode 100644 docs/examples/quickstart_v0.ipynb rename docs/{nootbooks => notebooks}/Crawl4AI_v0.3.72_Release_Announcement.ipynb (100%) diff --git a/README.md b/README.md index bcb20270..9e937aab 100644 --- a/README.md +++ b/README.md @@ -25,12 +25,9 @@ Use the [Crawl4AI GPT Assistant](https://tinyurl.com/crawl4ai-gpt) as your AI-po - 💾 Improved caching system for better performance - ⚡ Optimized batch processing with automatic rate limiting -Try new features in this colab notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1L6LJ3KlplhJdUy3Wcry6pstnwRpCJ3yB?usp=sharing) - - ## Try it Now! 
-✨ Play around with this [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1REChY6fXQf-EaVYLv0eHEWvzlYxGm0pd?usp=sharing) +✨ Play around with this [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SgRPrByQLzjRfwoRNq1wSGE9nYY_EE8C?usp=sharing) ✨ Visit our [Documentation Website](https://crawl4ai.com/mkdocs/) diff --git a/docs/examples/quickstart.ipynb b/docs/examples/quickstart.ipynb index 71f23acb..4751dec8 100644 --- a/docs/examples/quickstart.ipynb +++ b/docs/examples/quickstart.ipynb @@ -1,735 +1,664 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": { - "id": "6yLvrXn7yZQI" - }, - "source": [ - "# Crawl4AI: Advanced Web Crawling and Data Extraction\n", - "\n", - "Welcome to this interactive notebook showcasing Crawl4AI, an advanced asynchronous web crawling and data extraction library.\n", - "\n", - "- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n", - "- Twitter: [@unclecode](https://twitter.com/unclecode)\n", - "- Website: [https://crawl4ai.com](https://crawl4ai.com)\n", - "\n", - "Let's explore the powerful features of Crawl4AI!" 
- ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "KIn_9nxFyZQK" - }, - "source": [ - "## Installation\n", - "\n", - "First, let's install Crawl4AI from GitHub:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "mSnaxLf3zMog" - }, - "outputs": [], - "source": [ - "!sudo apt-get update && sudo apt-get install -y libwoff1 libopus0 libwebp6 libwebpdemux2 libenchant1c2a libgudev-1.0-0 libsecret-1-0 libhyphen0 libgdk-pixbuf2.0-0 libegl1 libnotify4 libxslt1.1 libevent-2.1-7 libgles2 libvpx6 libxcomposite1 libatk1.0-0 libatk-bridge2.0-0 libepoxy0 libgtk-3-0 libharfbuzz-icu0" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "xlXqaRtayZQK" - }, - "outputs": [], - "source": [ - "!pip install crawl4ai\n", - "!pip install nest-asyncio\n", - "!playwright install" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "qKCE7TI7yZQL" - }, - "source": [ - "Now, let's import the necessary libraries:" - ] - }, - { - "cell_type": "code", - "execution_count": 1, - "metadata": { - "id": "I67tr7aAyZQL" - }, - "outputs": [], - "source": [ - "import asyncio\n", - "import nest_asyncio\n", - "from crawl4ai import AsyncWebCrawler\n", - "from crawl4ai.extraction_strategy import JsonCssExtractionStrategy, LLMExtractionStrategy\n", - "import json\n", - "import time\n", - "from pydantic import BaseModel, Field\n", - "\n", - "nest_asyncio.apply()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "h7yR_Rt_yZQM" - }, - "source": [ - "## Basic Usage\n", - "\n", - "Let's start with a simple crawl example:" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "yBh6hf4WyZQM", - "outputId": "0f83af5c-abba-4175-ed95-70b7512e6bcc" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", - "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", 
- "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.05 seconds\n", - "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.05 seconds.\n", - "18102\n" - ] - } - ], - "source": [ - "async def simple_crawl():\n", - " async with AsyncWebCrawler(verbose=True) as crawler:\n", - " result = await crawler.arun(url=\"https://www.nbcnews.com/business\")\n", - " print(len(result.markdown))\n", - "await simple_crawl()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "9rtkgHI28uI4" - }, - "source": [ - "💡 By default, **Crawl4AI** caches the result of every URL, so the next time you call it, you’ll get an instant result. But if you want to bypass the cache, just set `bypass_cache=True`." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "MzZ0zlJ9yZQM" - }, - "source": [ - "## Advanced Features\n", - "\n", - "### Executing JavaScript and Using CSS Selectors" - ] - }, - { - "cell_type": "code", - "execution_count": 3, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "gHStF86xyZQM", - "outputId": "34d0fb6d-4dec-4677-f76e-85a1f082829b" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", - "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", - "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n", - "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n", - "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 6.06 seconds\n", - "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.10 seconds\n", - "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", - "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.11 seconds.\n", - "41135\n" - ] - } - ], - "source": [ - "async def js_and_css():\n", 
- " async with AsyncWebCrawler(verbose=True) as crawler:\n", - " js_code = [\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"]\n", - " result = await crawler.arun(\n", - " url=\"https://www.nbcnews.com/business\",\n", - " js_code=js_code,\n", - " # css_selector=\"YOUR_CSS_SELECTOR_HERE\",\n", - " bypass_cache=True\n", - " )\n", - " print(len(result.markdown))\n", - "\n", - "await js_and_css()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "cqE_W4coyZQM" - }, - "source": [ - "### Using a Proxy\n", - "\n", - "Note: You'll need to replace the proxy URL with a working proxy for this example to run successfully." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "QjAyiAGqyZQM" - }, - "outputs": [], - "source": [ - "async def use_proxy():\n", - " async with AsyncWebCrawler(verbose=True, proxy=\"http://your-proxy-url:port\") as crawler:\n", - " result = await crawler.arun(\n", - " url=\"https://www.nbcnews.com/business\",\n", - " bypass_cache=True\n", - " )\n", - " print(result.markdown[:500]) # Print first 500 characters\n", - "\n", - "# Uncomment the following line to run the proxy example\n", - "# await use_proxy()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "XTZ88lbayZQN" - }, - "source": [ - "### Extracting Structured Data with OpenAI\n", - "\n", - "Note: You'll need to set your OpenAI API key as an environment variable for this example to work." 
- ] - }, - { - "cell_type": "code", - "execution_count": 14, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "fIOlDayYyZQN", - "outputId": "cb8359cc-dee0-4762-9698-5dfdcee055b8" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", - "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", - "[LOG] 🕸️ Crawling https://openai.com/api/pricing/ using AsyncPlaywrightCrawlerStrategy...\n", - "[LOG] ✅ Crawled https://openai.com/api/pricing/ successfully!\n", - "[LOG] 🚀 Crawling done for https://openai.com/api/pricing/, success: True, time taken: 3.77 seconds\n", - "[LOG] 🚀 Content extracted for https://openai.com/api/pricing/, success: True, time taken: 0.21 seconds\n", - "[LOG] 🔥 Extracting semantic blocks for https://openai.com/api/pricing/, Strategy: AsyncWebCrawler\n", - "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 0\n", - "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 1\n", - "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 2\n", - "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 3\n", - "[LOG] Extracted 4 blocks from URL: https://openai.com/api/pricing/ block index: 3\n", - "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 4\n", - "[LOG] Extracted 5 blocks from URL: https://openai.com/api/pricing/ block index: 0\n", - "[LOG] Extracted 1 blocks from URL: https://openai.com/api/pricing/ block index: 4\n", - "[LOG] Extracted 8 blocks from URL: https://openai.com/api/pricing/ block index: 1\n", - "[LOG] Extracted 12 blocks from URL: https://openai.com/api/pricing/ block index: 2\n", - "[LOG] 🚀 Extraction done for https://openai.com/api/pricing/, time taken: 8.55 seconds.\n", - "5029\n" - ] - } - ], - "source": [ - "import os\n", - "from google.colab import userdata\n", - "os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')\n", - "\n", - "class 
OpenAIModelFee(BaseModel):\n", - " model_name: str = Field(..., description=\"Name of the OpenAI model.\")\n", - " input_fee: str = Field(..., description=\"Fee for input token for the OpenAI model.\")\n", - " output_fee: str = Field(..., description=\"Fee for output token for the OpenAI model.\")\n", - "\n", - "async def extract_openai_fees():\n", - " async with AsyncWebCrawler(verbose=True) as crawler:\n", - " result = await crawler.arun(\n", - " url='https://openai.com/api/pricing/',\n", - " word_count_threshold=1,\n", - " extraction_strategy=LLMExtractionStrategy(\n", - " provider=\"openai/gpt-4o\", api_token=os.getenv('OPENAI_API_KEY'),\n", - " schema=OpenAIModelFee.schema(),\n", - " extraction_type=\"schema\",\n", - " instruction=\"\"\"From the crawled content, extract all mentioned model names along with their fees for input and output tokens.\n", - " Do not miss any models in the entire content. One extracted model JSON format should look like this:\n", - " {\"model_name\": \"GPT-4\", \"input_fee\": \"US$10.00 / 1M tokens\", \"output_fee\": \"US$30.00 / 1M tokens\"}.\"\"\"\n", - " ),\n", - " bypass_cache=True,\n", - " )\n", - " print(len(result.extracted_content))\n", - "\n", - "# Uncomment the following line to run the OpenAI extraction example\n", - "await extract_openai_fees()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "BypA5YxEyZQN" - }, - "source": [ - "### Advanced Multi-Page Crawling with JavaScript Execution" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "tfkcVQ0b7mw-" - }, - "source": [ - "## Advanced Multi-Page Crawling with JavaScript Execution\n", - "\n", - "This example demonstrates Crawl4AI's ability to handle complex crawling scenarios, specifically extracting commits from multiple pages of a GitHub repository. The challenge here is that clicking the \"Next\" button doesn't load a new page, but instead uses asynchronous JavaScript to update the content. 
This is a common hurdle in modern web crawling.\n", - "\n", - "To overcome this, we use Crawl4AI's custom JavaScript execution to simulate clicking the \"Next\" button, and implement a custom hook to detect when new data has loaded. Our strategy involves comparing the first commit's text before and after \"clicking\" Next, waiting until it changes to confirm new data has rendered. This showcases Crawl4AI's flexibility in handling dynamic content and its ability to implement custom logic for even the most challenging crawling tasks." - ] - }, - { - "cell_type": "code", - "execution_count": 11, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "qUBKGpn3yZQN", - "outputId": "3e555b6a-ed33-42f4-cce9-499a923fbe17" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", - "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", - "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n", - "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n", - "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 5.16 seconds\n", - "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.28 seconds\n", - "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n", - "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.28 seconds.\n", - "Page 1: Found 35 commits\n", - "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n", - "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n", - "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.78 
seconds\n", - "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.90 seconds\n", - "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n", - "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.90 seconds.\n", - "Page 2: Found 35 commits\n", - "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n", - "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n", - "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 2.00 seconds\n", - "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.74 seconds\n", - "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n", - "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.75 seconds.\n", - "Page 3: Found 35 commits\n", - "Successfully crawled 105 commits across 3 pages\n" - ] - } - ], - "source": [ - "import re\n", - "from bs4 import BeautifulSoup\n", - "\n", - "async def crawl_typescript_commits():\n", - " first_commit = \"\"\n", - " async def on_execution_started(page):\n", - " nonlocal first_commit\n", - " try:\n", - " while True:\n", - " await page.wait_for_selector('li.Box-sc-g0xbh4-0 h4')\n", - " commit = await page.query_selector('li.Box-sc-g0xbh4-0 h4')\n", - " commit = await commit.evaluate('(element) => element.textContent')\n", - " commit = re.sub(r'\\s+', '', commit)\n", - " if commit and commit != first_commit:\n", - " first_commit = commit\n", - " break\n", - " await asyncio.sleep(0.5)\n", - " except Exception as e:\n", - " print(f\"Warning: New content didn't appear after JavaScript execution: {e}\")\n", - "\n", - " async with 
AsyncWebCrawler(verbose=True) as crawler:\n", - " crawler.crawler_strategy.set_hook('on_execution_started', on_execution_started)\n", - "\n", - " url = \"https://github.com/microsoft/TypeScript/commits/main\"\n", - " session_id = \"typescript_commits_session\"\n", - " all_commits = []\n", - "\n", - " js_next_page = \"\"\"\n", - " const button = document.querySelector('a[data-testid=\"pagination-next-button\"]');\n", - " if (button) button.click();\n", - " \"\"\"\n", - "\n", - " for page in range(3): # Crawl 3 pages\n", - " result = await crawler.arun(\n", - " url=url,\n", - " session_id=session_id,\n", - " css_selector=\"li.Box-sc-g0xbh4-0\",\n", - " js=js_next_page if page > 0 else None,\n", - " bypass_cache=True,\n", - " js_only=page > 0\n", - " )\n", - "\n", - " assert result.success, f\"Failed to crawl page {page + 1}\"\n", - "\n", - " soup = BeautifulSoup(result.cleaned_html, 'html.parser')\n", - " commits = soup.select(\"li\")\n", - " all_commits.extend(commits)\n", - "\n", - " print(f\"Page {page + 1}: Found {len(commits)} commits\")\n", - "\n", - " await crawler.crawler_strategy.kill_session(session_id)\n", - " print(f\"Successfully crawled {len(all_commits)} commits across 3 pages\")\n", - "\n", - "await crawl_typescript_commits()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "EJRnYsp6yZQN" - }, - "source": [ - "### Using JsonCssExtractionStrategy for Fast Structured Output" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "1ZMqIzB_8SYp" - }, - "source": [ - "The JsonCssExtractionStrategy is a powerful feature of Crawl4AI that allows for precise, structured data extraction from web pages. Here's how it works:\n", - "\n", - "1. You define a schema that describes the pattern of data you're interested in extracting.\n", - "2. The schema includes a base selector that identifies repeating elements on the page.\n", - "3. Within the schema, you define fields, each with its own selector and type.\n", - "4. 
These field selectors are applied within the context of each base selector element.\n", - "5. The strategy supports nested structures, lists within lists, and various data types.\n", - "6. You can even include computed fields for more complex data manipulation.\n", - "\n", - "This approach allows for highly flexible and precise data extraction, transforming semi-structured web content into clean, structured JSON data. It's particularly useful for extracting consistent data patterns from pages like product listings, news articles, or search results.\n", - "\n", - "For more details and advanced usage, check out the full documentation on the Crawl4AI website." - ] - }, - { - "cell_type": "code", - "execution_count": 12, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "trCMR2T9yZQN", - "outputId": "718d36f4-cccf-40f4-8d8c-c3ba73524d16" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", - "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", - "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n", - "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n", - "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 7.00 seconds\n", - "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.32 seconds\n", - "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", - "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.48 seconds.\n", - "Successfully extracted 11 news teasers\n", - "{\n", - " \"category\": \"Business News\",\n", - " \"headline\": \"NBC ripped up its Olympics playbook for 2024 \\u2014 so far, the new strategy paid off\",\n", - " \"summary\": \"The Olympics have long been key to NBCUniversal. 
Paris marked the 18th Olympic Games broadcast by NBC in the U.S.\",\n", - " \"time\": \"13h ago\",\n", - " \"image\": {\n", - " \"src\": \"https://media-cldnry.s-nbcnews.com/image/upload/t_focal-200x100,f_auto,q_auto:best/rockcms/2024-09/240903-nbc-olympics-ch-1344-c7a486.jpg\",\n", - " \"alt\": \"Mike Tirico.\"\n", - " },\n", - " \"link\": \"https://www.nbcnews.com/business\"\n", - "}\n" - ] - } - ], - "source": [ - "async def extract_news_teasers():\n", - " schema = {\n", - " \"name\": \"News Teaser Extractor\",\n", - " \"baseSelector\": \".wide-tease-item__wrapper\",\n", - " \"fields\": [\n", - " {\n", - " \"name\": \"category\",\n", - " \"selector\": \".unibrow span[data-testid='unibrow-text']\",\n", - " \"type\": \"text\",\n", - " },\n", - " {\n", - " \"name\": \"headline\",\n", - " \"selector\": \".wide-tease-item__headline\",\n", - " \"type\": \"text\",\n", - " },\n", - " {\n", - " \"name\": \"summary\",\n", - " \"selector\": \".wide-tease-item__description\",\n", - " \"type\": \"text\",\n", - " },\n", - " {\n", - " \"name\": \"time\",\n", - " \"selector\": \"[data-testid='wide-tease-date']\",\n", - " \"type\": \"text\",\n", - " },\n", - " {\n", - " \"name\": \"image\",\n", - " \"type\": \"nested\",\n", - " \"selector\": \"picture.teasePicture img\",\n", - " \"fields\": [\n", - " {\"name\": \"src\", \"type\": \"attribute\", \"attribute\": \"src\"},\n", - " {\"name\": \"alt\", \"type\": \"attribute\", \"attribute\": \"alt\"},\n", - " ],\n", - " },\n", - " {\n", - " \"name\": \"link\",\n", - " \"selector\": \"a[href]\",\n", - " \"type\": \"attribute\",\n", - " \"attribute\": \"href\",\n", - " },\n", - " ],\n", - " }\n", - "\n", - " extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)\n", - "\n", - " async with AsyncWebCrawler(verbose=True) as crawler:\n", - " result = await crawler.arun(\n", - " url=\"https://www.nbcnews.com/business\",\n", - " extraction_strategy=extraction_strategy,\n", - " bypass_cache=True,\n", - " )\n", - "\n", - " assert 
result.success, \"Failed to crawl the page\"\n", - "\n", - " news_teasers = json.loads(result.extracted_content)\n", - " print(f\"Successfully extracted {len(news_teasers)} news teasers\")\n", - " print(json.dumps(news_teasers[0], indent=2))\n", - "\n", - "await extract_news_teasers()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "FnyVhJaByZQN" - }, - "source": [ - "## Speed Comparison\n", - "\n", - "Let's compare the speed of Crawl4AI with Firecrawl, a paid service. Note that we can't run Firecrawl in this Colab environment, so we'll simulate its performance based on previously recorded data." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "agDD186f3wig" - }, - "source": [ - "💡 **Note on Speed Comparison:**\n", - "\n", - "The speed test conducted here is running on Google Colab, where the internet speed and performance can vary and may not reflect optimal conditions. When we call Firecrawl's API, we're seeing its best performance, while Crawl4AI's performance is limited by Colab's network speed.\n", - "\n", - "For a more accurate comparison, it's recommended to run these tests on your own servers or computers with a stable and fast internet connection. Despite these limitations, Crawl4AI still demonstrates faster performance in this environment.\n", - "\n", - "If you run these tests locally, you may observe an even more significant speed advantage for Crawl4AI compared to other services." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "id": "F7KwHv8G1LbY" - }, - "outputs": [], - "source": [ - "!pip install firecrawl" - ] - }, - { - "cell_type": "code", - "execution_count": 4, - "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "91813zILyZQN", - "outputId": "663223db-ab89-4976-b233-05ceca62b19b" - }, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Firecrawl (simulated):\n", - "Time taken: 4.38 seconds\n", - "Content length: 41967 characters\n", - "Images found: 49\n", - "\n", - "Crawl4AI (simple crawl):\n", - "Time taken: 4.22 seconds\n", - "Content length: 18221 characters\n", - "Images found: 49\n", - "\n", - "Crawl4AI (with JavaScript execution):\n", - "Time taken: 9.13 seconds\n", - "Content length: 34243 characters\n", - "Images found: 89\n" - ] - } - ], - "source": [ - "import os\n", - "from google.colab import userdata\n", - "os.environ['FIRECRAWL_API_KEY'] = userdata.get('FIRECRAWL_API_KEY')\n", - "import time\n", - "from firecrawl import FirecrawlApp\n", - "\n", - "async def speed_comparison():\n", - " # Simulated Firecrawl performance\n", - " app = FirecrawlApp(api_key=os.environ['FIRECRAWL_API_KEY'])\n", - " start = time.time()\n", - " scrape_status = app.scrape_url(\n", - " 'https://www.nbcnews.com/business',\n", - " params={'formats': ['markdown', 'html']}\n", - " )\n", - " end = time.time()\n", - " print(\"Firecrawl (simulated):\")\n", - " print(f\"Time taken: {end - start:.2f} seconds\")\n", - " print(f\"Content length: {len(scrape_status['markdown'])} characters\")\n", - " print(f\"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}\")\n", - " print()\n", - "\n", - " async with AsyncWebCrawler() as crawler:\n", - " # Crawl4AI simple crawl\n", - " start = time.time()\n", - " result = await crawler.arun(\n", - " url=\"https://www.nbcnews.com/business\",\n", - " word_count_threshold=0,\n", - " bypass_cache=True,\n", - " 
verbose=False\n", - " )\n", - " end = time.time()\n", - " print(\"Crawl4AI (simple crawl):\")\n", - " print(f\"Time taken: {end - start:.2f} seconds\")\n", - " print(f\"Content length: {len(result.markdown)} characters\")\n", - " print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n", - " print()\n", - "\n", - " # Crawl4AI with JavaScript execution\n", - " start = time.time()\n", - " result = await crawler.arun(\n", - " url=\"https://www.nbcnews.com/business\",\n", - " js_code=[\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"],\n", - " word_count_threshold=0,\n", - " bypass_cache=True,\n", - " verbose=False\n", - " )\n", - " end = time.time()\n", - " print(\"Crawl4AI (with JavaScript execution):\")\n", - " print(f\"Time taken: {end - start:.2f} seconds\")\n", - " print(f\"Content length: {len(result.markdown)} characters\")\n", - " print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n", - "\n", - "await speed_comparison()" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "OBFFYVJIyZQN" - }, - "source": [ - "If you run on a local machine with a proper internet speed:\n", - "- Simple crawl: Crawl4AI is typically over 3-4 times faster than Firecrawl.\n", - "- With JavaScript execution: Even when executing JavaScript to load more content (potentially doubling the number of images found), Crawl4AI is still faster than Firecrawl's simple crawl.\n", - "\n", - "Please note that actual performance may vary depending on network conditions and the specific content being crawled." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "id": "A6_1RK1_yZQO" - }, - "source": [ - "## Conclusion\n", - "\n", - "In this notebook, we've explored the powerful features of Crawl4AI, including:\n", - "\n", - "1. Basic crawling\n", - "2. JavaScript execution and CSS selector usage\n", - "3. 
Proxy support\n", - "4. Structured data extraction with OpenAI\n", - "5. Advanced multi-page crawling with JavaScript execution\n", - "6. Fast structured output using JsonCssExtractionStrategy\n", - "7. Speed comparison with other services\n", - "\n", - "Crawl4AI offers a fast, flexible, and powerful solution for web crawling and data extraction tasks. Its asynchronous architecture and advanced features make it suitable for a wide range of applications, from simple web scraping to complex, multi-page data extraction scenarios.\n", - "\n", - "For more information and advanced usage, please visit the [Crawl4AI documentation](https://crawl4ai.com/mkdocs/).\n", - "\n", - "Happy crawling!" - ] - } - ], - "metadata": { - "colab": { - "provenance": [] - }, - "kernelspec": { - "display_name": "venv", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.13" - } + "cells": [ + { + "cell_type": "markdown", + "id": "0cba38e5", + "metadata": {}, + "source": [ + "# Crawl4AI 🕷️🤖\n", + "
\"unclecode%2Fcrawl4ai\n", + "\n", + "[![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers)\n", + "![PyPI - Downloads](https://img.shields.io/pypi/dm/Crawl4AI)\n", + "[![GitHub Forks](https://img.shields.io/github/forks/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/network/members)\n", + "[![GitHub Issues](https://img.shields.io/github/issues/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/issues)\n", + "[![GitHub Pull Requests](https://img.shields.io/github/issues-pr/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/pulls)\n", + "[![License](https://img.shields.io/github/license/unclecode/crawl4ai)](https://github.com/unclecode/crawl4ai/blob/main/LICENSE)\n", + "\n", + "Crawl4AI simplifies asynchronous web crawling and data extraction, making it accessible for large language models (LLMs) and AI applications. 🆓🌐\n", + "\n", + "- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n", + "- Twitter: [@unclecode](https://twitter.com/unclecode)\n", + "- Website: [https://crawl4ai.com](https://crawl4ai.com)\n", + "\n", + "## 🌟 Meet the Crawl4AI Assistant: Your Copilot for Crawling\n", + "Use the [Crawl4AI GPT Assistant](https://tinyurl.com/crawl4ai-gpt) as your AI-powered copilot! With this assistant, you can:\n", + "- 🧑‍💻 Generate code for complex crawling and extraction tasks\n", + "- 💡 Get tailored support and examples\n", + "- 📘 Learn Crawl4AI faster with step-by-step guidance" + ] }, - "nbformat": 4, - "nbformat_minor": 0 + { + "cell_type": "markdown", + "id": "41de6458", + "metadata": {}, + "source": [ + "### **Quickstart with Crawl4AI**" + ] + }, + { + "cell_type": "markdown", + "id": "1380e951", + "metadata": {}, + "source": [ + "#### 1. 
**Installation**\n", + "Install Crawl4AI and necessary dependencies:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "05fecfad", + "metadata": {}, + "outputs": [], + "source": [ + "# %%capture\n", + "!pip install crawl4ai\n", + "!pip install nest_asyncio\n", + "!playwright install " + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "2c2a74c8", + "metadata": {}, + "outputs": [], + "source": [ + "import asyncio\n", + "import nest_asyncio\n", + "nest_asyncio.apply()" + ] + }, + { + "cell_type": "markdown", + "id": "f3c558d7", + "metadata": {}, + "source": [ + "#### 2. **Basic Setup and Simple Crawl**" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "003376f3", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 1.49 seconds\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.10 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.10 seconds.\n", + "IE 11 is not supported. 
For an optimal experience visit our site on another browser.\n", + "\n", + "[Morning Rundown: Trump and Harris' vastly different closing pitches, why Kim Jong Un is helping Russia, and an ancient city is discovered by accident](https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973)[](https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973)\n", + "\n", + "Skip to Content\n", + "\n", + "[NBC News Logo](https://www.nbcnews.com)\n", + "\n", + "Spon\n" + ] + } + ], + "source": [ + "import asyncio\n", + "from crawl4ai import AsyncWebCrawler\n", + "\n", + "async def simple_crawl():\n", + " async with AsyncWebCrawler() as crawler:\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\",\n", + " bypass_cache=True # By default this is False, meaning the cache will be used\n", + " )\n", + " print(result.markdown[:500]) # Print the first 500 characters\n", + " \n", + "asyncio.run(simple_crawl())" + ] + }, + { + "cell_type": "markdown", + "id": "da9b4d50", + "metadata": {}, + "source": [ + "#### 3. 
**Dynamic Content Handling**" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "5bb8c1e4", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", + "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", + "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n", + "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 4.52 seconds\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.15 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.15 seconds.\n", + "IE 11 is not supported. For an optimal experience visit our site on another browser.\n", + "\n", + "[Morning Rundown: Trump and Harris' vastly different closing pitches, why Kim Jong Un is helping Russia, and an ancient city is discovered by accident](https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973)[](https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973)\n", + "\n", + "Skip to Content\n", + "\n", + "[NBC News Logo](https://www.nbcnews.com)\n", + "\n", + "Spon\n" + ] + } + ], + "source": [ + "async def crawl_dynamic_content():\n", + " # You can use wait_for to wait for a condition to be met before returning the result\n", + " # wait_for = \"\"\"() => {\n", + " # return Array.from(document.querySelectorAll('article.tease-card')).length > 10;\n", + " # }\"\"\"\n", + "\n", + " # wait_for can be also just a css selector\n", + " # wait_for = \"article.tease-card:nth-child(10)\"\n", + "\n", + " async with AsyncWebCrawler(verbose=True) as crawler:\n", + " js_code = [\n", + " \"const loadMoreButton = 
Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"\n", + " ]\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\",\n", + " js_code=js_code,\n", + " # wait_for=wait_for,\n", + " bypass_cache=True,\n", + " )\n", + " print(result.markdown[:500]) # Print first 500 characters\n", + "\n", + "asyncio.run(crawl_dynamic_content())" + ] + }, + { + "cell_type": "markdown", + "id": "86febd8d", + "metadata": {}, + "source": [ + "#### 4. **Content Cleaning and Fit Markdown**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8e8ab01f", + "metadata": {}, + "outputs": [], + "source": [ + "async def clean_content():\n", + " async with AsyncWebCrawler() as crawler:\n", + " result = await crawler.arun(\n", + " url=\"https://janineintheworld.com/places-to-visit-in-central-mexico\",\n", + " excluded_tags=['nav', 'footer', 'aside'],\n", + " remove_overlay_elements=True,\n", + " word_count_threshold=10,\n", + " bypass_cache=True\n", + " )\n", + " full_markdown_length = len(result.markdown)\n", + " fit_markdown_length = len(result.fit_markdown)\n", + " print(f\"Full Markdown Length: {full_markdown_length}\")\n", + " print(f\"Fit Markdown Length: {fit_markdown_length}\")\n", + " print(result.fit_markdown[:1000])\n", + " \n", + "\n", + "asyncio.run(clean_content())" + ] + }, + { + "cell_type": "markdown", + "id": "55715146", + "metadata": {}, + "source": [ + "#### 5. 
**Link Analysis and Smart Filtering**" + ] + }, + { + "cell_type": "code", + "execution_count": 23, + "id": "2ae47c69", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 0.93 seconds\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.11 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.11 seconds.\n", + "Found 107 internal links\n", + "Found 58 external links\n", + "Href: https://www.nbcnews.com/news/harris-speech-ellipse-ancient-mayan-city-morning-rundown-rcna177973\n", + "Text: Morning Rundown: Trump and Harris' vastly different closing pitches, why Kim Jong Un is helping Russia, and an ancient city is discovered by accident\n", + "\n", + "Href: https://www.nbcnews.com\n", + "Text: NBC News Logo\n", + "\n", + "Href: https://www.nbcnews.com/politics/2024-election/live-blog/kamala-harris-donald-trump-rally-election-live-updates-rcna177529\n", + "Text: 2024 Election\n", + "\n", + "Href: https://www.nbcnews.com/politics\n", + "Text: Politics\n", + "\n", + "Href: https://www.nbcnews.com/us-news\n", + "Text: U.S. 
News\n", + "\n" + ] + } + ], + "source": [ + "\n", + "async def link_analysis():\n", + " async with AsyncWebCrawler() as crawler:\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\",\n", + " bypass_cache=True,\n", + " exclude_external_links=True,\n", + " exclude_social_media_links=True,\n", + " # exclude_domains=[\"facebook.com\", \"twitter.com\"]\n", + " )\n", + " print(f\"Found {len(result.links['internal'])} internal links\")\n", + " print(f\"Found {len(result.links['external'])} external links\")\n", + "\n", + " for link in result.links['internal'][:5]:\n", + " print(f\"Href: {link['href']}\\nText: {link['text']}\\n\")\n", + " \n", + "\n", + "asyncio.run(link_analysis())" + ] + }, + { + "cell_type": "markdown", + "id": "80cceef3", + "metadata": {}, + "source": [ + "#### 6. **Media Handling**" + ] + }, + { + "cell_type": "code", + "execution_count": 25, + "id": "1fed7f99", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 1.42 seconds\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.11 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.12 seconds.\n", + "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-762x508,f_auto,q_auto:best/rockcms/2024-10/241023-NM-Chilccare-jg-27b982.jpg, Alt: , Score: 4\n", + "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-80x80,f_auto,q_auto:best/rockcms/2024-10/241030-china-ev-electric-mb-0746-cae05c.jpg, Alt: Volkswagen Workshop in Hefei, Score: 5\n", + "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-80x80,f_auto,q_auto:best/rockcms/2024-10/241029-nyc-subway-sandwich-2021-ac-922p-a92374.jpg, Alt: A sub is prepared at 
a Subway restaurant in Manhattan, New York City, Score: 5\n", + "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-80x80,f_auto,q_auto:best/rockcms/2024-10/241029-suv-gravity-ch-1618-752415.jpg, Alt: The Lucid Gravity car., Score: 5\n", + "Image URL: https://media-cldnry.s-nbcnews.com/image/upload/t_focal-80x80,f_auto,q_auto:best/rockcms/2024-10/241029-dearborn-michigan-f-150-ford-ranger-trucks-assembly-line-ac-426p-614f0b.jpg, Alt: Ford Introduces new F-150 And Ranger Trucks At Their Dearborn Plant, Score: 5\n" + ] + } + ], + "source": [ + "async def media_handling():\n", + " async with AsyncWebCrawler() as crawler:\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\", \n", + " bypass_cache=True,\n", + " exclude_external_images=False,\n", + " screenshot=True\n", + " )\n", + " for img in result.media['images'][:5]:\n", + " print(f\"Image URL: {img['src']}, Alt: {img['alt']}, Score: {img['score']}\")\n", + " \n", + "asyncio.run(media_handling())" + ] + }, + { + "cell_type": "markdown", + "id": "9290499a", + "metadata": {}, + "source": [ + "#### 7. **Using Hooks for Custom Workflow**" + ] + }, + { + "cell_type": "markdown", + "id": "9d069c2b", + "metadata": {}, + "source": [ + "Hooks in Crawl4AI allow you to run custom logic at specific stages of the crawling process. This can be invaluable for scenarios like setting custom headers, logging activities, or processing content before it is returned. Below is an example of a basic workflow using a hook, followed by a complete list of available hooks and explanations on their usage." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 27, + "id": "bc4d2fc8", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[Hook] Preparing to navigate...\n", + "[LOG] 🚀 Crawling done for https://crawl4ai.com, success: True, time taken: 3.49 seconds\n", + "[LOG] 🚀 Content extracted for https://crawl4ai.com, success: True, time taken: 0.03 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://crawl4ai.com, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://crawl4ai.com, time taken: 0.03 seconds.\n", + "[Crawl4AI Documentation](https://docs.crawl4ai.com/)\n", + "\n", + " * [ Home ](.)\n", + " * [ Installation ](basic/installation/)\n", + " * [ Quick Start ](basic/quickstart/)\n", + " * [ Search ](#)\n", + "\n", + "\n", + "\n", + " * Home\n", + " * [Installation](basic/installation/)\n", + " * [Quick Start](basic/quickstart/)\n", + " * Basic\n", + " * [Simple Crawling](basic/simple-crawling/)\n", + " * [Output Formats](basic/output-formats/)\n", + " * [Browser Configuration](basic/browser-config/)\n", + " * [Page Interaction](basic/page-interaction/)\n", + " * [Content Selection](basic/con\n" + ] + } + ], + "source": [ + "async def custom_hook_workflow():\n", + " async with AsyncWebCrawler() as crawler:\n", + " # Set a 'before_goto' hook to run custom code just before navigation\n", + " crawler.crawler_strategy.set_hook(\"before_goto\", lambda page: print(\"[Hook] Preparing to navigate...\"))\n", + " \n", + " # Perform the crawl operation\n", + " result = await crawler.arun(\n", + " url=\"https://crawl4ai.com\",\n", + " bypass_cache=True\n", + " )\n", + " print(result.markdown[:500]) # Display the first 500 characters\n", + "\n", + "asyncio.run(custom_hook_workflow())" + ] + }, + { + "cell_type": "markdown", + "id": "3ff45e21", + "metadata": {}, + "source": [ + "List of available hooks and examples for each stage of the crawling process:\n", + "\n", + "- **on_browser_created**\n", + " 
```python\n", + " async def on_browser_created_hook(browser):\n", + " print(\"[Hook] Browser created\")\n", + " ```\n", + "\n", + "- **before_goto**\n", + " ```python\n", + " async def before_goto_hook(page):\n", + " await page.set_extra_http_headers({\"X-Test-Header\": \"test\"})\n", + " ```\n", + "\n", + "- **after_goto**\n", + " ```python\n", + " async def after_goto_hook(page):\n", + " print(f\"[Hook] Navigated to {page.url}\")\n", + " ```\n", + "\n", + "- **on_execution_started**\n", + " ```python\n", + " async def on_execution_started_hook(page):\n", + " print(\"[Hook] JavaScript execution started\")\n", + " ```\n", + "\n", + "- **before_return_html**\n", + " ```python\n", + " async def before_return_html_hook(page, html):\n", + " print(f\"[Hook] HTML length: {len(html)}\")\n", + " ```" + ] + }, + { + "cell_type": "markdown", + "id": "2d56ebb1", + "metadata": {}, + "source": [ + "#### 8. **Session-Based Crawling**\n", + "\n", + "When to Use Session-Based Crawling: \n", + "Session-based crawling is especially beneficial when navigating through multi-page content where each page load needs to maintain the same session context. For instance, in cases where a “Next Page” button must be clicked to load subsequent data, the new data often replaces the previous content. Here, session-based crawling keeps the browser state intact across each interaction, allowing for sequential actions within the same session.\n", + "\n", + "Example: Multi-Page Navigation Using JavaScript\n", + "In this example, we’ll navigate through multiple pages by clicking a \"Next Page\" button. After each page load, we extract the new content and repeat the process." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e7bfebae", + "metadata": {}, + "outputs": [], + "source": [ + "async def multi_page_session_crawl():\n", + " async with AsyncWebCrawler() as crawler:\n", + " session_id = \"page_navigation_session\"\n", + " url = \"https://example.com/paged-content\"\n", + "\n", + " for page_number in range(1, 4):\n", + " result = await crawler.arun(\n", + " url=url,\n", + " session_id=session_id,\n", + " js_code=\"document.querySelector('.next-page-button').click();\" if page_number > 1 else None,\n", + " css_selector=\".content-section\",\n", + " bypass_cache=True\n", + " )\n", + " print(f\"Page {page_number} Content:\")\n", + " print(result.markdown[:500]) # Print first 500 characters\n", + "\n", + "# asyncio.run(multi_page_session_crawl())" + ] + }, + { + "cell_type": "markdown", + "id": "ad32a778", + "metadata": {}, + "source": [ + "#### 9. **Using Extraction Strategies**\n", + "\n", + "**LLM Extraction**\n", + "\n", + "This example demonstrates how to use language model-based extraction to retrieve structured data from a pricing page on OpenAI’s site." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 31, + "id": "3011a7c5", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "--- Extracting Structured Data with openai/gpt-4o-mini ---\n", + "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", + "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", + "[LOG] 🕸️ Crawling https://openai.com/api/pricing/ using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://openai.com/api/pricing/ successfully!\n", + "[LOG] 🚀 Crawling done for https://openai.com/api/pricing/, success: True, time taken: 1.29 seconds\n", + "[LOG] 🚀 Content extracted for https://openai.com/api/pricing/, success: True, time taken: 0.13 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://openai.com/api/pricing/, Strategy: AsyncWebCrawler\n", + "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 0\n", + "[LOG] Extracted 26 blocks from URL: https://openai.com/api/pricing/ block index: 0\n", + "[LOG] 🚀 Extraction done for https://openai.com/api/pricing/, time taken: 15.12 seconds.\n", + "[{'model_name': 'gpt-4o', 'input_fee': '$2.50 / 1M input tokens', 'output_fee': '$10.00 / 1M output tokens', 'error': False}, {'model_name': 'gpt-4o-2024-08-06', 'input_fee': '$2.50 / 1M input tokens', 'output_fee': '$10.00 / 1M output tokens', 'error': False}, {'model_name': 'gpt-4o-audio-preview', 'input_fee': '$2.50 / 1M input tokens', 'output_fee': '$10.00 / 1M output tokens', 'error': False}, {'model_name': 'gpt-4o-audio-preview-2024-10-01', 'input_fee': '$2.50 / 1M input tokens', 'output_fee': '$10.00 / 1M output tokens', 'error': False}, {'model_name': 'gpt-4o-2024-05-13', 'input_fee': '$5.00 / 1M input tokens', 'output_fee': '$15.00 / 1M output tokens', 'error': False}]\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "/Users/unclecode/devs/crawl4ai/venv/lib/python3.10/site-packages/pydantic/main.py:347: UserWarning: Pydantic serializer warnings:\n", + " 
Expected `PromptTokensDetails` but got `dict` - serialized value may not be as expected\n", + " return self.__pydantic_serializer__.to_python(\n" + ] + } + ], + "source": [ + "from crawl4ai.extraction_strategy import LLMExtractionStrategy\n", + "from pydantic import BaseModel, Field\n", + "import os, json\n", + "\n", + "class OpenAIModelFee(BaseModel):\n", + " model_name: str = Field(..., description=\"Name of the OpenAI model.\")\n", + " input_fee: str = Field(..., description=\"Fee for input token for the OpenAI model.\")\n", + " output_fee: str = Field(\n", + " ..., description=\"Fee for output token for the OpenAI model.\"\n", + " )\n", + "\n", + "async def extract_structured_data_using_llm(provider: str, api_token: str = None, extra_headers: dict = None):\n", + " print(f\"\\n--- Extracting Structured Data with {provider} ---\")\n", + " \n", + " # Skip if API token is missing (for providers that require it)\n", + " if api_token is None and provider != \"ollama\":\n", + " print(f\"API token is required for {provider}. 
Skipping this example.\")\n", + " return\n", + "\n", + " extra_args = {\"extra_headers\": extra_headers} if extra_headers else {}\n", + "\n", + " async with AsyncWebCrawler(verbose=True) as crawler:\n", + " result = await crawler.arun(\n", + " url=\"https://openai.com/api/pricing/\",\n", + " word_count_threshold=1,\n", + " extraction_strategy=LLMExtractionStrategy(\n", + " provider=provider,\n", + " api_token=api_token,\n", + " schema=OpenAIModelFee.schema(),\n", + " extraction_type=\"schema\",\n", + " instruction=\"\"\"Extract all model names along with fees for input and output tokens. One extracted model should look like:\n", + " {model_name: 'GPT-4', input_fee: 'US$10.00 / 1M tokens', output_fee: 'US$30.00 / 1M tokens'}.\"\"\",\n", + " **extra_args\n", + " ),\n", + " bypass_cache=True,\n", + " )\n", + " print(json.loads(result.extracted_content)[:5])\n", + "\n", + "# Usage:\n", + "await extract_structured_data_using_llm(\"openai/gpt-4o-mini\", os.getenv(\"OPENAI_API_KEY\"))" + ] + }, + { + "cell_type": "markdown", + "id": "6532db9d", + "metadata": {}, + "source": [ + "**Cosine Similarity Strategy**\n", + "\n", + "This strategy uses semantic clustering to extract relevant content based on contextual similarity, which is helpful when extracting related sections from a single topic." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 32, + "id": "ec079108", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] Loading Extraction Model for mps device.\n", + "[LOG] Loading Multilabel Classifier for mps device.\n", + "[LOG] Model loaded sentence-transformers/all-MiniLM-L6-v2, models/reuters, took 5.193778038024902 seconds\n", + "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156, success: True, time taken: 1.37 seconds\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156, success: True, time taken: 0.07 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Assign tags using mps\n", + "[LOG] 🚀 Categorization done in 0.55 seconds\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156, time taken: 6.63 seconds.\n", + "[{'index': 1, 'tags': ['news_&_social_concern'], 'content': \"McDonald's 2024 combo: Inflation, a health crisis and a side of politics # McDonald's 2024 combo: Inflation, a health crisis and a side of politics\"}, {'index': 2, 'tags': ['business_&_entrepreneurs', 'news_&_social_concern'], 'content': 'Like many major brands, McDonald’s raked in big profits as the economy reopened from the pandemic. In October 2022, [executives were boasting](https://www.cnbc.com/2022/10/27/mcdonalds-mcd-earnings-q3-2022.html) that they’d been raising prices without crimping traffic, even as competitors began to warn that some customers were closing their wallets after inflation peaked above 9% that summer. Still, the U.S. 
had repeatedly dodged a much-forecast recession, and [Americans kept spending on nonessentials](https://www.nbcnews.com/business/economy/year-peak-inflation-travel-leisure-mostly-cost-less-rcna92760) like travel and dining out — despite regularly relaying to pollsters their dismal views of an otherwise solid economy. Even so, 64% of consumers said they noticed price increases at quick-service restaurants in September, more than at any other type of venue, according to a survey by Datassential, a food and beverage market researcher. Politicians are still drawing attention to fast-food costs, too, as the election season barrels toward a tumultuous finish. A group of Democratic senators this month [denounced McDonald’s for menu prices](https://www.nbcnews.com/news/us-news/democratic-senators-slam-mcdonalds-menu-price-hikes-rcna176380) that they said outstripped inflation, accusing the company of looking to profit “at the expense of people’s ability to put food on the table.” The financial results come toward the end of a humbling year for the nearly $213 billion restaurant chain, whose shares remained steady on the heels of its latest earnings. Kempczinski [sought to reassure investors](https://www.cnbc.com/2024/10/29/mcdonalds-e-coli-outbreak-ceo-comments.html) that [the E. coli outbreak](https://www.nbcnews.com/health/health-news/illnesses-linked-mcdonalds-e-coli-outbreak-rise-75-cdc-says-rcna177260), linked to Quarter Pounder burgers, was under control after the health crisis temporarily dented the company’s stock and caused U.S. foot traffic to drop nearly 10% in the days afterward, according to estimates by Gordon Haskett financial researchers. The fast-food giant [reported Tuesday](https://www.cnbc.com/2024/10/29/mcdonalds-mcd-earnings-q3-2024.html) that it had reversed its recent U.S. sales drop, posting a 0.3% uptick in the third quarter. Foot traffic was still down slightly, but the company said its summer of discounts was paying off. 
But by early this year, [photos of eye-watering menu prices](https://x.com/sam_learner/status/1681367351143301129) at some McDonald’s locations — including an $18 Big Mac combo at a Connecticut rest stop from July 2023 — went viral, bringing diners’ long-simmering frustrations to a boiling point that the company couldn’t ignore. On an earnings call in April, Kempczinski acknowledged that foot traffic had fallen. “We will stay laser-focused on providing an unparalleled experience with simple, everyday value and affordability that our consumers can count on as they continue to be mindful about their spending,” CEO Chris Kempczinski [said in a statement](https://www.prnewswire.com/news-releases/mcdonalds-reports-third-quarter-2024-results-302289216.html?Fds-Load-Behavior=force-external) alongside the earnings report.'}, {'index': 3, 'tags': ['food_&_dining', 'news_&_social_concern'], 'content': '![mcdonalds drive-thru economy fast food](https://media-cldnry.s-nbcnews.com/image/upload/t_fit-760w,f_auto,q_auto:best/rockcms/2024-10/241024-los-angeles-mcdonalds-drive-thru-ac-1059p-cfc311.jpg)McDonald’s has had some success leaning into discounts this year. Eric Thayer / Bloomberg via Getty Images file'}, {'index': 4, 'tags': ['business_&_entrepreneurs', 'food_&_dining', 'news_&_social_concern'], 'content': 'McDonald’s has faced a customer revolt over pricey Big Macs, an unsolicited cameo in election-season crossfire, and now an E. coli outbreak — just as the company had been luring customers back with more affordable burgers. Despite a difficult quarter, McDonald’s looks resilient in the face of various pressures, analysts say — something the company shares with U.S. consumers overall. “Consumers continue to be even more discriminating with every dollar that they spend,” he said at the time. Going forward, McDonald’s would be “laser-focused” on affordability. 
“McDonald’s has also done a good job of embedding the brand in popular culture to enhance its relevance and meaning around fun and family. But it also needed to modify the product line to meet the expectations of a consumer who is on a tight budget,” he said. “The thing that McDonald’s had struggled with, and why I think we’re seeing kind of an inflection point, is a value proposition,” Senatore said. “McDonald’s menu price increases had run ahead of a lot of its restaurant peers. … Consumers are savvy enough to know that.” For many consumers, the fast-food giant’s menus serve as an informal gauge of the economy overall, said Sara Senatore, a Bank of America analyst covering restaurants. “The spotlight is always on McDonald’s because it’s so big” and something of a “bellwether,” she said. McDonald’s didn’t respond to requests for comment.'}, {'index': 5, 'tags': ['business_&_entrepreneurs', 'food_&_dining'], 'content': 'Mickey D’s’ $5 meal deal, which it launched in late June to jumpstart slumping sales, has given the company an appealing price point to advertise nationwide, Senatore said, speculating that it could open the door to a new permanent value offering. 
But before that promotion rolled out, the company’s reputation as a low-cost option had taken a bruising hit.'}]\n" + ] + } + ], + "source": [ + "from crawl4ai.extraction_strategy import CosineStrategy\n", + "\n", + "async def cosine_similarity_extraction():\n", + " async with AsyncWebCrawler() as crawler:\n", + " strategy = CosineStrategy(\n", + " word_count_threshold=10,\n", + " max_dist=0.2, # Maximum distance between two words\n", + " linkage_method=\"ward\", # Linkage method for hierarchical clustering (ward, complete, average, single)\n", + " top_k=3, # Number of top keywords to extract\n", + " sim_threshold=0.3, # Similarity threshold for clustering\n", + " semantic_filter=\"McDonald's economic impact, American consumer trends\", # Keywords to filter the content semantically using embeddings\n", + " verbose=True\n", + " )\n", + " \n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business/consumer/how-mcdonalds-e-coli-crisis-inflation-politics-reflect-american-story-rcna177156\",\n", + " extraction_strategy=strategy\n", + " )\n", + " print(json.loads(result.extracted_content)[:5])\n", + "\n", + "asyncio.run(cosine_similarity_extraction())\n" + ] + }, + { + "cell_type": "markdown", + "id": "ff423629", + "metadata": {}, + "source": [ + "#### 10. **Conclusion and Next Steps**\n", + "\n", + "You’ve explored core features of Crawl4AI, including dynamic content handling, link analysis, and advanced extraction strategies. Visit our documentation for further details on using Crawl4AI’s extensive features.\n", + "\n", + "- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n", + "- Twitter: [@unclecode](https://twitter.com/unclecode)\n", + "- Website: [https://crawl4ai.com](https://crawl4ai.com)\n", + "\n", + "Happy Crawling with Crawl4AI! 
🕷️🤖\n" + ] + }, + { + "cell_type": "markdown", + "id": "d34c1d35", + "metadata": {}, + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.13" + } + }, + "nbformat": 4, + "nbformat_minor": 5 } diff --git a/docs/examples/quickstart_v0.ipynb b/docs/examples/quickstart_v0.ipynb new file mode 100644 index 00000000..71f23acb --- /dev/null +++ b/docs/examples/quickstart_v0.ipynb @@ -0,0 +1,735 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "6yLvrXn7yZQI" + }, + "source": [ + "# Crawl4AI: Advanced Web Crawling and Data Extraction\n", + "\n", + "Welcome to this interactive notebook showcasing Crawl4AI, an advanced asynchronous web crawling and data extraction library.\n", + "\n", + "- GitHub Repository: [https://github.com/unclecode/crawl4ai](https://github.com/unclecode/crawl4ai)\n", + "- Twitter: [@unclecode](https://twitter.com/unclecode)\n", + "- Website: [https://crawl4ai.com](https://crawl4ai.com)\n", + "\n", + "Let's explore the powerful features of Crawl4AI!" 
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "KIn_9nxFyZQK" + }, + "source": [ + "## Installation\n", + "\n", + "First, let's install Crawl4AI from GitHub:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "mSnaxLf3zMog" + }, + "outputs": [], + "source": [ + "!sudo apt-get update && sudo apt-get install -y libwoff1 libopus0 libwebp6 libwebpdemux2 libenchant1c2a libgudev-1.0-0 libsecret-1-0 libhyphen0 libgdk-pixbuf2.0-0 libegl1 libnotify4 libxslt1.1 libevent-2.1-7 libgles2 libvpx6 libxcomposite1 libatk1.0-0 libatk-bridge2.0-0 libepoxy0 libgtk-3-0 libharfbuzz-icu0" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "xlXqaRtayZQK" + }, + "outputs": [], + "source": [ + "!pip install crawl4ai\n", + "!pip install nest-asyncio\n", + "!playwright install" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "qKCE7TI7yZQL" + }, + "source": [ + "Now, let's import the necessary libraries:" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "metadata": { + "id": "I67tr7aAyZQL" + }, + "outputs": [], + "source": [ + "import asyncio\n", + "import nest_asyncio\n", + "from crawl4ai import AsyncWebCrawler\n", + "from crawl4ai.extraction_strategy import JsonCssExtractionStrategy, LLMExtractionStrategy\n", + "import json\n", + "import time\n", + "from pydantic import BaseModel, Field\n", + "\n", + "nest_asyncio.apply()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "h7yR_Rt_yZQM" + }, + "source": [ + "## Basic Usage\n", + "\n", + "Let's start with a simple crawl example:" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "yBh6hf4WyZQM", + "outputId": "0f83af5c-abba-4175-ed95-70b7512e6bcc" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", + "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", 
+ "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.05 seconds\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.05 seconds.\n", + "18102\n" + ] + } + ], + "source": [ + "async def simple_crawl():\n", + " async with AsyncWebCrawler(verbose=True) as crawler:\n", + " result = await crawler.arun(url=\"https://www.nbcnews.com/business\")\n", + " print(len(result.markdown))\n", + "await simple_crawl()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "9rtkgHI28uI4" + }, + "source": [ + "💡 By default, **Crawl4AI** caches the result of every URL, so the next time you call it, you’ll get an instant result. But if you want to bypass the cache, just set `bypass_cache=True`." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "MzZ0zlJ9yZQM" + }, + "source": [ + "## Advanced Features\n", + "\n", + "### Executing JavaScript and Using CSS Selectors" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "gHStF86xyZQM", + "outputId": "34d0fb6d-4dec-4677-f76e-85a1f082829b" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", + "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", + "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n", + "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 6.06 seconds\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.10 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.11 seconds.\n", + "41135\n" + ] + } + ], + "source": [ + "async def js_and_css():\n", 
+ " async with AsyncWebCrawler(verbose=True) as crawler:\n", + " js_code = [\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"]\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\",\n", + " js_code=js_code,\n", + " # css_selector=\"YOUR_CSS_SELECTOR_HERE\",\n", + " bypass_cache=True\n", + " )\n", + " print(len(result.markdown))\n", + "\n", + "await js_and_css()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "cqE_W4coyZQM" + }, + "source": [ + "### Using a Proxy\n", + "\n", + "Note: You'll need to replace the proxy URL with a working proxy for this example to run successfully." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "QjAyiAGqyZQM" + }, + "outputs": [], + "source": [ + "async def use_proxy():\n", + " async with AsyncWebCrawler(verbose=True, proxy=\"http://your-proxy-url:port\") as crawler:\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\",\n", + " bypass_cache=True\n", + " )\n", + " print(result.markdown[:500]) # Print first 500 characters\n", + "\n", + "# Uncomment the following line to run the proxy example\n", + "# await use_proxy()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "XTZ88lbayZQN" + }, + "source": [ + "### Extracting Structured Data with OpenAI\n", + "\n", + "Note: You'll need to set your OpenAI API key as an environment variable for this example to work." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "fIOlDayYyZQN", + "outputId": "cb8359cc-dee0-4762-9698-5dfdcee055b8" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", + "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", + "[LOG] 🕸️ Crawling https://openai.com/api/pricing/ using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://openai.com/api/pricing/ successfully!\n", + "[LOG] 🚀 Crawling done for https://openai.com/api/pricing/, success: True, time taken: 3.77 seconds\n", + "[LOG] 🚀 Content extracted for https://openai.com/api/pricing/, success: True, time taken: 0.21 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://openai.com/api/pricing/, Strategy: AsyncWebCrawler\n", + "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 0\n", + "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 1\n", + "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 2\n", + "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 3\n", + "[LOG] Extracted 4 blocks from URL: https://openai.com/api/pricing/ block index: 3\n", + "[LOG] Call LLM for https://openai.com/api/pricing/ - block index: 4\n", + "[LOG] Extracted 5 blocks from URL: https://openai.com/api/pricing/ block index: 0\n", + "[LOG] Extracted 1 blocks from URL: https://openai.com/api/pricing/ block index: 4\n", + "[LOG] Extracted 8 blocks from URL: https://openai.com/api/pricing/ block index: 1\n", + "[LOG] Extracted 12 blocks from URL: https://openai.com/api/pricing/ block index: 2\n", + "[LOG] 🚀 Extraction done for https://openai.com/api/pricing/, time taken: 8.55 seconds.\n", + "5029\n" + ] + } + ], + "source": [ + "import os\n", + "from google.colab import userdata\n", + "os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')\n", + "\n", + "class 
OpenAIModelFee(BaseModel):\n", + "    model_name: str = Field(..., description=\"Name of the OpenAI model.\")\n", + "    input_fee: str = Field(..., description=\"Fee for input token for the OpenAI model.\")\n", + "    output_fee: str = Field(..., description=\"Fee for output token for the OpenAI model.\")\n", + "\n", + "async def extract_openai_fees():\n", + "    async with AsyncWebCrawler(verbose=True) as crawler:\n", + "        result = await crawler.arun(\n", + "            url='https://openai.com/api/pricing/',\n", + "            word_count_threshold=1,\n", + "            extraction_strategy=LLMExtractionStrategy(\n", + "                provider=\"openai/gpt-4o\", api_token=os.getenv('OPENAI_API_KEY'),\n", + "                schema=OpenAIModelFee.schema(),\n", + "                extraction_type=\"schema\",\n", + "                instruction=\"\"\"From the crawled content, extract all mentioned model names along with their fees for input and output tokens.\n", + "                Do not miss any models in the entire content. One extracted model JSON format should look like this:\n", + "                {\"model_name\": \"GPT-4\", \"input_fee\": \"US$10.00 / 1M tokens\", \"output_fee\": \"US$30.00 / 1M tokens\"}.\"\"\"\n", + "            ),\n", + "            bypass_cache=True,\n", + "        )\n", + "        print(len(result.extracted_content))\n", + "\n", + "# Run the OpenAI extraction example (requires OPENAI_API_KEY to be set)\n", + "await extract_openai_fees()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "BypA5YxEyZQN" + }, + "source": [ + "### Advanced Multi-Page Crawling with JavaScript Execution" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "tfkcVQ0b7mw-" + }, + "source": [ + "\n", + "\n", + "This example demonstrates Crawl4AI's ability to handle complex crawling scenarios, specifically extracting commits from multiple pages of a GitHub repository. The challenge here is that clicking the \"Next\" button doesn't load a new page, but instead uses asynchronous JavaScript to update the content.
This is a common hurdle in modern web crawling.\n", + "\n", + "To overcome this, we use Crawl4AI's custom JavaScript execution to simulate clicking the \"Next\" button, and implement a custom hook to detect when new data has loaded. Our strategy involves comparing the first commit's text before and after \"clicking\" Next, waiting until it changes to confirm new data has rendered. This showcases Crawl4AI's flexibility in handling dynamic content and its ability to implement custom logic for even the most challenging crawling tasks." + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "qUBKGpn3yZQN", + "outputId": "3e555b6a-ed33-42f4-cce9-499a923fbe17" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", + "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", + "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n", + "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 5.16 seconds\n", + "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.28 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.28 seconds.\n", + "Page 1: Found 35 commits\n", + "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n", + "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.78 
seconds\n", + "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.90 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.90 seconds.\n", + "Page 2: Found 35 commits\n", + "[LOG] 🕸️ Crawling https://github.com/microsoft/TypeScript/commits/main using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://github.com/microsoft/TypeScript/commits/main successfully!\n", + "[LOG] 🚀 Crawling done for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 2.00 seconds\n", + "[LOG] 🚀 Content extracted for https://github.com/microsoft/TypeScript/commits/main, success: True, time taken: 0.74 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://github.com/microsoft/TypeScript/commits/main, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://github.com/microsoft/TypeScript/commits/main, time taken: 0.75 seconds.\n", + "Page 3: Found 35 commits\n", + "Successfully crawled 105 commits across 3 pages\n" + ] + } + ], + "source": [ + "import re\n", + "from bs4 import BeautifulSoup\n", + "\n", + "async def crawl_typescript_commits():\n", + " first_commit = \"\"\n", + " async def on_execution_started(page):\n", + " nonlocal first_commit\n", + " try:\n", + " while True:\n", + " await page.wait_for_selector('li.Box-sc-g0xbh4-0 h4')\n", + " commit = await page.query_selector('li.Box-sc-g0xbh4-0 h4')\n", + " commit = await commit.evaluate('(element) => element.textContent')\n", + " commit = re.sub(r'\\s+', '', commit)\n", + " if commit and commit != first_commit:\n", + " first_commit = commit\n", + " break\n", + " await asyncio.sleep(0.5)\n", + " except Exception as e:\n", + " print(f\"Warning: New content didn't appear after JavaScript execution: {e}\")\n", + "\n", + " async with 
AsyncWebCrawler(verbose=True) as crawler:\n", + " crawler.crawler_strategy.set_hook('on_execution_started', on_execution_started)\n", + "\n", + " url = \"https://github.com/microsoft/TypeScript/commits/main\"\n", + " session_id = \"typescript_commits_session\"\n", + " all_commits = []\n", + "\n", + " js_next_page = \"\"\"\n", + " const button = document.querySelector('a[data-testid=\"pagination-next-button\"]');\n", + " if (button) button.click();\n", + " \"\"\"\n", + "\n", + " for page in range(3): # Crawl 3 pages\n", + " result = await crawler.arun(\n", + " url=url,\n", + " session_id=session_id,\n", + " css_selector=\"li.Box-sc-g0xbh4-0\",\n", + " js=js_next_page if page > 0 else None,\n", + " bypass_cache=True,\n", + " js_only=page > 0\n", + " )\n", + "\n", + " assert result.success, f\"Failed to crawl page {page + 1}\"\n", + "\n", + " soup = BeautifulSoup(result.cleaned_html, 'html.parser')\n", + " commits = soup.select(\"li\")\n", + " all_commits.extend(commits)\n", + "\n", + " print(f\"Page {page + 1}: Found {len(commits)} commits\")\n", + "\n", + " await crawler.crawler_strategy.kill_session(session_id)\n", + " print(f\"Successfully crawled {len(all_commits)} commits across 3 pages\")\n", + "\n", + "await crawl_typescript_commits()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "EJRnYsp6yZQN" + }, + "source": [ + "### Using JsonCssExtractionStrategy for Fast Structured Output" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "1ZMqIzB_8SYp" + }, + "source": [ + "The JsonCssExtractionStrategy is a powerful feature of Crawl4AI that allows for precise, structured data extraction from web pages. Here's how it works:\n", + "\n", + "1. You define a schema that describes the pattern of data you're interested in extracting.\n", + "2. The schema includes a base selector that identifies repeating elements on the page.\n", + "3. Within the schema, you define fields, each with its own selector and type.\n", + "4. 
These field selectors are applied within the context of each base selector element.\n", + "5. The strategy supports nested structures, lists within lists, and various data types.\n", + "6. You can even include computed fields for more complex data manipulation.\n", + "\n", + "This approach allows for highly flexible and precise data extraction, transforming semi-structured web content into clean, structured JSON data. It's particularly useful for extracting consistent data patterns from pages like product listings, news articles, or search results.\n", + "\n", + "For more details and advanced usage, check out the full documentation on the Crawl4AI website." + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "trCMR2T9yZQN", + "outputId": "718d36f4-cccf-40f4-8d8c-c3ba73524d16" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "[LOG] 🌤️ Warming up the AsyncWebCrawler\n", + "[LOG] 🌞 AsyncWebCrawler is ready to crawl\n", + "[LOG] 🕸️ Crawling https://www.nbcnews.com/business using AsyncPlaywrightCrawlerStrategy...\n", + "[LOG] ✅ Crawled https://www.nbcnews.com/business successfully!\n", + "[LOG] 🚀 Crawling done for https://www.nbcnews.com/business, success: True, time taken: 7.00 seconds\n", + "[LOG] 🚀 Content extracted for https://www.nbcnews.com/business, success: True, time taken: 0.32 seconds\n", + "[LOG] 🔥 Extracting semantic blocks for https://www.nbcnews.com/business, Strategy: AsyncWebCrawler\n", + "[LOG] 🚀 Extraction done for https://www.nbcnews.com/business, time taken: 0.48 seconds.\n", + "Successfully extracted 11 news teasers\n", + "{\n", + " \"category\": \"Business News\",\n", + " \"headline\": \"NBC ripped up its Olympics playbook for 2024 \\u2014 so far, the new strategy paid off\",\n", + " \"summary\": \"The Olympics have long been key to NBCUniversal. 
Paris marked the 18th Olympic Games broadcast by NBC in the U.S.\",\n", + " \"time\": \"13h ago\",\n", + " \"image\": {\n", + " \"src\": \"https://media-cldnry.s-nbcnews.com/image/upload/t_focal-200x100,f_auto,q_auto:best/rockcms/2024-09/240903-nbc-olympics-ch-1344-c7a486.jpg\",\n", + " \"alt\": \"Mike Tirico.\"\n", + " },\n", + " \"link\": \"https://www.nbcnews.com/business\"\n", + "}\n" + ] + } + ], + "source": [ + "async def extract_news_teasers():\n", + " schema = {\n", + " \"name\": \"News Teaser Extractor\",\n", + " \"baseSelector\": \".wide-tease-item__wrapper\",\n", + " \"fields\": [\n", + " {\n", + " \"name\": \"category\",\n", + " \"selector\": \".unibrow span[data-testid='unibrow-text']\",\n", + " \"type\": \"text\",\n", + " },\n", + " {\n", + " \"name\": \"headline\",\n", + " \"selector\": \".wide-tease-item__headline\",\n", + " \"type\": \"text\",\n", + " },\n", + " {\n", + " \"name\": \"summary\",\n", + " \"selector\": \".wide-tease-item__description\",\n", + " \"type\": \"text\",\n", + " },\n", + " {\n", + " \"name\": \"time\",\n", + " \"selector\": \"[data-testid='wide-tease-date']\",\n", + " \"type\": \"text\",\n", + " },\n", + " {\n", + " \"name\": \"image\",\n", + " \"type\": \"nested\",\n", + " \"selector\": \"picture.teasePicture img\",\n", + " \"fields\": [\n", + " {\"name\": \"src\", \"type\": \"attribute\", \"attribute\": \"src\"},\n", + " {\"name\": \"alt\", \"type\": \"attribute\", \"attribute\": \"alt\"},\n", + " ],\n", + " },\n", + " {\n", + " \"name\": \"link\",\n", + " \"selector\": \"a[href]\",\n", + " \"type\": \"attribute\",\n", + " \"attribute\": \"href\",\n", + " },\n", + " ],\n", + " }\n", + "\n", + " extraction_strategy = JsonCssExtractionStrategy(schema, verbose=True)\n", + "\n", + " async with AsyncWebCrawler(verbose=True) as crawler:\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\",\n", + " extraction_strategy=extraction_strategy,\n", + " bypass_cache=True,\n", + " )\n", + "\n", + " assert 
result.success, \"Failed to crawl the page\"\n", + "\n", + "        news_teasers = json.loads(result.extracted_content)\n", + "        print(f\"Successfully extracted {len(news_teasers)} news teasers\")\n", + "        print(json.dumps(news_teasers[0], indent=2))\n", + "\n", + "await extract_news_teasers()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "FnyVhJaByZQN" + }, + "source": [ + "## Speed Comparison\n", + "\n", + "Let's compare the speed of Crawl4AI with Firecrawl, a paid service. Note: the comparison below calls Firecrawl's API directly, so you'll need a Firecrawl API key (read from `FIRECRAWL_API_KEY`) for this section to run." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "agDD186f3wig" + }, + "source": [ + "💡 **Note on Speed Comparison:**\n", + "\n", + "The speed test conducted here runs on Google Colab, where internet speed and performance can vary and may not reflect optimal conditions. When we call Firecrawl's API, we're seeing its best performance, while Crawl4AI's performance is limited by Colab's network speed.\n", + "\n", + "For a more accurate comparison, it's recommended to run these tests on your own servers or computers with a stable and fast internet connection. Despite these limitations, Crawl4AI still demonstrates faster performance in this environment.\n", + "\n", + "If you run these tests locally, you may observe an even more significant speed advantage for Crawl4AI compared to other services."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "F7KwHv8G1LbY" + }, + "outputs": [], + "source": [ + "!pip install firecrawl" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "91813zILyZQN", + "outputId": "663223db-ab89-4976-b233-05ceca62b19b" + }, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Firecrawl (simulated):\n", + "Time taken: 4.38 seconds\n", + "Content length: 41967 characters\n", + "Images found: 49\n", + "\n", + "Crawl4AI (simple crawl):\n", + "Time taken: 4.22 seconds\n", + "Content length: 18221 characters\n", + "Images found: 49\n", + "\n", + "Crawl4AI (with JavaScript execution):\n", + "Time taken: 9.13 seconds\n", + "Content length: 34243 characters\n", + "Images found: 89\n" + ] + } + ], + "source": [ + "import os\n", + "from google.colab import userdata\n", + "os.environ['FIRECRAWL_API_KEY'] = userdata.get('FIRECRAWL_API_KEY')\n", + "import time\n", + "from firecrawl import FirecrawlApp\n", + "\n", + "async def speed_comparison():\n", + " # Simulated Firecrawl performance\n", + " app = FirecrawlApp(api_key=os.environ['FIRECRAWL_API_KEY'])\n", + " start = time.time()\n", + " scrape_status = app.scrape_url(\n", + " 'https://www.nbcnews.com/business',\n", + " params={'formats': ['markdown', 'html']}\n", + " )\n", + " end = time.time()\n", + " print(\"Firecrawl (simulated):\")\n", + " print(f\"Time taken: {end - start:.2f} seconds\")\n", + " print(f\"Content length: {len(scrape_status['markdown'])} characters\")\n", + " print(f\"Images found: {scrape_status['markdown'].count('cldnry.s-nbcnews.com')}\")\n", + " print()\n", + "\n", + " async with AsyncWebCrawler() as crawler:\n", + " # Crawl4AI simple crawl\n", + " start = time.time()\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\",\n", + " word_count_threshold=0,\n", + " bypass_cache=True,\n", + " 
verbose=False\n", + " )\n", + " end = time.time()\n", + " print(\"Crawl4AI (simple crawl):\")\n", + " print(f\"Time taken: {end - start:.2f} seconds\")\n", + " print(f\"Content length: {len(result.markdown)} characters\")\n", + " print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n", + " print()\n", + "\n", + " # Crawl4AI with JavaScript execution\n", + " start = time.time()\n", + " result = await crawler.arun(\n", + " url=\"https://www.nbcnews.com/business\",\n", + " js_code=[\"const loadMoreButton = Array.from(document.querySelectorAll('button')).find(button => button.textContent.includes('Load More')); loadMoreButton && loadMoreButton.click();\"],\n", + " word_count_threshold=0,\n", + " bypass_cache=True,\n", + " verbose=False\n", + " )\n", + " end = time.time()\n", + " print(\"Crawl4AI (with JavaScript execution):\")\n", + " print(f\"Time taken: {end - start:.2f} seconds\")\n", + " print(f\"Content length: {len(result.markdown)} characters\")\n", + " print(f\"Images found: {result.markdown.count('cldnry.s-nbcnews.com')}\")\n", + "\n", + "await speed_comparison()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "OBFFYVJIyZQN" + }, + "source": [ + "If you run on a local machine with a proper internet speed:\n", + "- Simple crawl: Crawl4AI is typically over 3-4 times faster than Firecrawl.\n", + "- With JavaScript execution: Even when executing JavaScript to load more content (potentially doubling the number of images found), Crawl4AI is still faster than Firecrawl's simple crawl.\n", + "\n", + "Please note that actual performance may vary depending on network conditions and the specific content being crawled." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "A6_1RK1_yZQO" + }, + "source": [ + "## Conclusion\n", + "\n", + "In this notebook, we've explored the powerful features of Crawl4AI, including:\n", + "\n", + "1. Basic crawling\n", + "2. JavaScript execution and CSS selector usage\n", + "3. 
Proxy support\n", + "4. Structured data extraction with OpenAI\n", + "5. Advanced multi-page crawling with JavaScript execution\n", + "6. Fast structured output using JsonCssExtractionStrategy\n", + "7. Speed comparison with other services\n", + "\n", + "Crawl4AI offers a fast, flexible, and powerful solution for web crawling and data extraction tasks. Its asynchronous architecture and advanced features make it suitable for a wide range of applications, from simple web scraping to complex, multi-page data extraction scenarios.\n", + "\n", + "For more information and advanced usage, please visit the [Crawl4AI documentation](https://crawl4ai.com/mkdocs/).\n", + "\n", + "Happy crawling!" + ] + } + ], + "metadata": { + "colab": { + "provenance": [] + }, + "kernelspec": { + "display_name": "venv", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.13" + } + }, + "nbformat": 4, + "nbformat_minor": 0 +} diff --git a/docs/md_v2/assets/styles.css b/docs/md_v2/assets/styles.css index f103474f..68a93f5d 100644 --- a/docs/md_v2/assets/styles.css +++ b/docs/md_v2/assets/styles.css @@ -150,4 +150,11 @@ strong, .tab-content pre { margin: 0; max-height: 300px; overflow: auto; border:none; +} + +ol li::before { + content: counters(item, ".") ". 
"; + counter-increment: item; + /* float: left; */ + /* padding-right: 5px; */ } \ No newline at end of file diff --git a/docs/md_v2/tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md b/docs/md_v2/tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md index f2b1ace1..f19d19f8 100644 --- a/docs/md_v2/tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md +++ b/docs/md_v2/tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md @@ -9,17 +9,19 @@ Here's a condensed outline of the **Installation and Setup** video content: --- -1. **Introduction to Crawl4AI**: - - Briefly explain that Crawl4AI is a powerful tool for web scraping, data extraction, and content processing, with customizable options for various needs. +1 **Introduction to Crawl4AI**: Briefly explain that Crawl4AI is a powerful tool for web scraping, data extraction, and content processing, with customizable options for various needs. -2. **Installation Overview**: +2 **Installation Overview**: + - **Basic Install**: Run `pip install crawl4ai` and `playwright install` (to set up browser dependencies). + - **Optional Advanced Installs**: - `pip install crawl4ai[torch]` - Adds PyTorch for clustering. - `pip install crawl4ai[transformer]` - Adds support for LLM-based extraction. - `pip install crawl4ai[all]` - Installs all features for complete functionality. -3. **Verifying the Installation**: +3 **Verifying the Installation**: + - Walk through a simple test script to confirm the setup: ```python import asyncio @@ -34,12 +36,13 @@ Here's a condensed outline of the **Installation and Setup** video content: ``` - Explain that this script initializes the crawler and runs it on a test URL, displaying part of the extracted content to verify functionality. -4. **Important Tips**: +4 **Important Tips**: + - **Run** `playwright install` **after installation** to set up dependencies. 
- **For full performance** on text-related tasks, run `crawl4ai-download-models` after installing with `[torch]`, `[transformer]`, or `[all]` options. - If you encounter issues, refer to the documentation or GitHub issues. -5. **Wrap Up**: +5 **Wrap Up**: - Introduce the next topic in the series, which will cover Crawl4AI's browser configuration options (like choosing between `chromium`, `firefox`, and `webkit`). --- diff --git a/docs/md_v2/tutorial/episode_02_Overview_of_Advanced_Features.md b/docs/md_v2/tutorial/episode_02_Overview_of_Advanced_Features.md index e9844a7c..f2216b4c 100644 --- a/docs/md_v2/tutorial/episode_02_Overview_of_Advanced_Features.md +++ b/docs/md_v2/tutorial/episode_02_Overview_of_Advanced_Features.md @@ -11,10 +11,12 @@ Here's a condensed outline for an **Overview of Advanced Features** video coveri ### **Overview of Advanced Features** -1. **Introduction to Advanced Features**: +1 **Introduction to Advanced Features**: + - Briefly introduce Crawl4AI’s advanced tools, which let users go beyond basic crawling to customize and fine-tune their scraping workflows. -2. **Taking Screenshots**: +2 **Taking Screenshots**: + - Explain the screenshot capability for capturing page state and verifying content. - **Example**: ```python @@ -22,7 +24,8 @@ Here's a condensed outline for an **Overview of Advanced Features** video coveri ``` - Mention that screenshots are saved as a base64 string in `result`, allowing easy decoding and saving. -3. **Media and Link Extraction**: +3 **Media and Link Extraction**: + - Demonstrate how to pull all media (images, videos) and links (internal and external) from a page for deeper analysis or content gathering. - **Example**: ```python @@ -31,14 +34,16 @@ Here's a condensed outline for an **Overview of Advanced Features** video coveri print("Links:", result.links) ``` -4. 
**Custom User Agent**: +4 **Custom User Agent**: + - Show how to set a custom user agent to disguise the crawler or simulate specific devices/browsers. - **Example**: ```python result = await crawler.arun(url="https://www.example.com", user_agent="Mozilla/5.0 (compatible; MyCrawler/1.0)") ``` -5. **Custom Hooks for Enhanced Control**: +5 **Custom Hooks for Enhanced Control**: + - Briefly cover how to use hooks, which allow custom actions like setting headers or handling login during the crawl. - **Example**: Setting a custom header with `before_get_url` hook. ```python @@ -46,7 +51,8 @@ Here's a condensed outline for an **Overview of Advanced Features** video coveri await page.set_extra_http_headers({"X-Test-Header": "test"}) ``` -6. **CSS Selectors for Targeted Extraction**: +6 **CSS Selectors for Targeted Extraction**: + - Explain the use of CSS selectors to extract specific elements, ideal for structured data like articles or product details. - **Example**: ```python @@ -54,14 +60,16 @@ Here's a condensed outline for an **Overview of Advanced Features** video coveri print("H2 Tags:", result.extracted_content) ``` -7. **Crawling Inside Iframes**: +7 **Crawling Inside Iframes**: + - Mention how enabling `process_iframes=True` allows extracting content within iframes, useful for sites with embedded content or ads. - **Example**: ```python result = await crawler.arun(url="https://www.example.com", process_iframes=True) ``` -8. **Wrap-Up**: +8 **Wrap-Up**: + - Summarize these advanced features and how they allow users to customize every part of their web scraping experience. - Tease upcoming videos where each feature will be explored in detail. 
diff --git a/docs/md_v2/tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md b/docs/md_v2/tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md index 11b9be7d..87a3d217 100644 --- a/docs/md_v2/tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md +++ b/docs/md_v2/tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md @@ -42,7 +42,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra async def log_browser_creation(browser): print("Browser instance created:", browser) - crawler.set_hook('on_browser_created', log_browser_creation) + crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation) ``` - **Explanation**: This hook logs the browser creation event, useful for tracking when a new browser instance starts. @@ -57,7 +57,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra def update_user_agent(user_agent): print(f"User Agent Updated: {user_agent}") - crawler.set_hook('on_user_agent_updated', update_user_agent) + crawler.crawler_strategy.set_hook('on_user_agent_updated', update_user_agent) crawler.update_user_agent("Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)") ``` - **Explanation**: This hook provides a callback every time the user agent changes, helpful for debugging or dynamically altering user agent settings based on conditions. @@ -73,7 +73,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra async def log_execution_start(page): print("Execution started on page:", page.url) - crawler.set_hook('on_execution_started', log_execution_start) + crawler.crawler_strategy.set_hook('on_execution_started', log_execution_start) ``` - **Explanation**: Logs the start of any major interaction on the page, ideal for cases where you want to monitor each interaction. 
@@ -90,7 +90,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra await page.set_extra_http_headers({"X-Custom-Header": "CustomValue"}) print("Custom headers set before navigation") - crawler.set_hook('before_goto', modify_headers_before_goto) + crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto) ``` - **Explanation**: This hook allows injecting headers or altering settings based on the page’s needs, particularly useful for pages with custom requirements. @@ -106,7 +106,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra await page.evaluate("window.scrollTo(0, document.body.scrollHeight)") print("Scrolled to the bottom after navigation") - crawler.set_hook('after_goto', post_navigation_scroll) + crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll) ``` - **Explanation**: This hook scrolls to the bottom of the page after loading, which can help load dynamically added content like infinite scroll elements. @@ -122,7 +122,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra await page.evaluate("document.querySelectorAll('.ad-banner').forEach(el => el.remove());") print("Advertisements removed before returning HTML") - crawler.set_hook('before_return_html', remove_advertisements) + crawler.crawler_strategy.set_hook('before_return_html', remove_advertisements) ``` - **Explanation**: The hook removes ad banners from the HTML before it’s retrieved, ensuring a cleaner data extraction. 
@@ -138,7 +138,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra
         await page.wait_for_selector('.main-content')
         print("Main content loaded, ready to retrieve HTML")
 
-    crawler.set_hook('before_retrieve_html', wait_for_content_before_retrieve)
+    crawler.crawler_strategy.set_hook('before_retrieve_html', wait_for_content_before_retrieve)
     ```
   - **Explanation**: This hook waits for the main content to load before retrieving the HTML, ensuring that all essential content is captured.
 
@@ -148,9 +148,9 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra
   - Each hook function can be asynchronous (useful for actions like waiting or retrieving async data).
 - **Example Setup**:
   ```python
-  crawler.set_hook('on_browser_created', log_browser_creation)
-  crawler.set_hook('before_goto', modify_headers_before_goto)
-  crawler.set_hook('after_goto', post_navigation_scroll)
+  crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation)
+  crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto)
+  crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll)
   ```
 
 #### **5. Complete Example: Using Hooks for a Customized Crawl Workflow**
@@ -160,10 +160,10 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra
 async def custom_crawl():
     async with AsyncWebCrawler() as crawler:
         # Set hooks for custom workflow
-        crawler.set_hook('on_browser_created', log_browser_creation)
-        crawler.set_hook('before_goto', modify_headers_before_goto)
-        crawler.set_hook('after_goto', post_navigation_scroll)
-        crawler.set_hook('before_return_html', remove_advertisements)
+        crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation)
+        crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto)
+        crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll)
+        crawler.crawler_strategy.set_hook('before_return_html', remove_advertisements)
 
         # Perform the crawl
         url = "https://example.com"
diff --git a/docs/md_v2/tutorial/tutorial.md b/docs/md_v2/tutorial/tutorial.md
index 4e90484d..5621744d 100644
--- a/docs/md_v2/tutorial/tutorial.md
+++ b/docs/md_v2/tutorial/tutorial.md
@@ -771,9 +771,11 @@ Here’s a concise outline for the **Custom Headers, Identity Management, and Us
   async with AsyncWebCrawler(
       headers={"Accept-Language": "en-US", "Cache-Control": "no-cache"},
       user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0",
-      simulate_user=True
   ) as crawler:
-      result = await crawler.arun(url="https://example.com/secure-page")
+      result = await crawler.arun(
+          url="https://example.com/secure-page",
+          simulate_user=True
+      )
       print(result.markdown[:500])  # Display extracted content
   ```
 - This example enables detailed customization for evading detection and accessing protected pages smoothly.
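The hunk above moves `simulate_user` off the `AsyncWebCrawler` constructor and onto the individual `arun()` call. The general shape of that change — per-call options taking precedence over instance-level defaults — can be sketched with a small stub (illustrative only; `StubCrawler` and its option-merging are assumptions, not the real crawl4ai implementation):

```python
import asyncio

# Illustrative stub only -- NOT the real crawl4ai AsyncWebCrawler.
# It mirrors the pattern in the hunk above: options passed to arun()
# override defaults given to the constructor.
class StubCrawler:
    def __init__(self, **defaults):
        self.defaults = defaults  # instance-wide settings (headers, user_agent, ...)

    async def arun(self, url, **overrides):
        # Merge: per-call overrides win over constructor defaults.
        options = {**self.defaults, **overrides}
        return {"url": url, "options": options}

async def main():
    crawler = StubCrawler(user_agent="Mozilla/5.0", simulate_user=False)
    # simulate_user is now supplied per request, as in the patched tutorial.
    return await crawler.arun("https://example.com/secure-page", simulate_user=True)

result = asyncio.run(main())
print(result["options"]["simulate_user"])  # True: the per-call value wins
```

Scoping behaviour like `simulate_user` to the call rather than the crawler lets one crawler instance serve both protected and unprotected pages.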
@@ -1576,7 +1578,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra
     async def log_browser_creation(browser):
         print("Browser instance created:", browser)
 
-    crawler.set_hook('on_browser_created', log_browser_creation)
+    crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation)
     ```
   - **Explanation**: This hook logs the browser creation event, useful for tracking when a new browser instance starts.
 
@@ -1591,7 +1593,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra
     def update_user_agent(user_agent):
         print(f"User Agent Updated: {user_agent}")
 
-    crawler.set_hook('on_user_agent_updated', update_user_agent)
+    crawler.crawler_strategy.set_hook('on_user_agent_updated', update_user_agent)
     crawler.update_user_agent("Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)")
     ```
   - **Explanation**: This hook provides a callback every time the user agent changes, helpful for debugging or dynamically altering user agent settings based on conditions.
 
@@ -1607,7 +1609,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra
     async def log_execution_start(page):
         print("Execution started on page:", page.url)
 
-    crawler.set_hook('on_execution_started', log_execution_start)
+    crawler.crawler_strategy.set_hook('on_execution_started', log_execution_start)
     ```
   - **Explanation**: Logs the start of any major interaction on the page, ideal for cases where you want to monitor each interaction.
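The `on_user_agent_updated` hunk pairs hook registration on `crawler.crawler_strategy` with a call to `crawler.update_user_agent(...)` that stays on the crawler itself. The implied flow — the crawler stores the new value, then fires the callback held by its strategy — can be sketched with stubs (class names and internals are assumptions drawn from the diff context, not the real crawl4ai classes):

```python
# Stub sketch of the on_user_agent_updated flow implied by the hunk above.
# Hooks live on a strategy object; update_user_agent() remains a crawler
# method that notifies the registered callback. Illustrative only.
class StubCrawlerStrategy:
    def __init__(self):
        self.hooks = {"on_user_agent_updated": None}

    def set_hook(self, name, callback):
        self.hooks[name] = callback

class StubCrawler:
    def __init__(self):
        self.crawler_strategy = StubCrawlerStrategy()
        self.user_agent = "default-agent"

    def update_user_agent(self, user_agent):
        # Store the new value, then notify the hook held by the strategy.
        self.user_agent = user_agent
        hook = self.crawler_strategy.hooks.get("on_user_agent_updated")
        if hook:
            hook(user_agent)

updates = []
crawler = StubCrawler()
crawler.crawler_strategy.set_hook("on_user_agent_updated", updates.append)
crawler.update_user_agent("Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X)")
print(updates)  # the callback observed the new user agent string
```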
@@ -1624,7 +1626,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra
         await page.set_extra_http_headers({"X-Custom-Header": "CustomValue"})
         print("Custom headers set before navigation")
 
-    crawler.set_hook('before_goto', modify_headers_before_goto)
+    crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto)
     ```
   - **Explanation**: This hook allows injecting headers or altering settings based on the page’s needs, particularly useful for pages with custom requirements.
 
@@ -1640,7 +1642,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra
         await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
         print("Scrolled to the bottom after navigation")
 
-    crawler.set_hook('after_goto', post_navigation_scroll)
+    crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll)
     ```
   - **Explanation**: This hook scrolls to the bottom of the page after loading, which can help load dynamically added content like infinite scroll elements.
 
@@ -1656,7 +1658,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra
         await page.evaluate("document.querySelectorAll('.ad-banner').forEach(el => el.remove());")
         print("Advertisements removed before returning HTML")
 
-    crawler.set_hook('before_return_html', remove_advertisements)
+    crawler.crawler_strategy.set_hook('before_return_html', remove_advertisements)
     ```
   - **Explanation**: The hook removes ad banners from the HTML before it’s retrieved, ensuring a cleaner data extraction.
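Every one of these hunks makes the same mechanical change: `set_hook()` is called on `crawler.crawler_strategy` rather than on the crawler itself. A minimal sketch of that delegation (stub classes; the hook-name list is taken from the diff context, and none of this is the real crawl4ai implementation):

```python
# Minimal sketch of the delegation these hunks introduce: hooks are
# registered on the crawler's strategy object, not on the crawler.
# Stub classes only; hook names are assumptions from the diff context.
class StubCrawlerStrategy:
    VALID_HOOKS = {
        "on_browser_created",
        "before_goto",
        "after_goto",
        "before_return_html",
        "before_retrieve_html",
    }

    def __init__(self):
        self.hooks = {}

    def set_hook(self, name, callback):
        # Reject unknown hook points so typos fail loudly at registration.
        if name not in self.VALID_HOOKS:
            raise ValueError(f"Unknown hook: {name}")
        self.hooks[name] = callback

class StubCrawler:
    def __init__(self):
        # The crawler exposes its strategy; callers register hooks on it.
        self.crawler_strategy = StubCrawlerStrategy()

async def modify_headers_before_goto(page):
    await page.set_extra_http_headers({"X-Custom-Header": "CustomValue"})

crawler = StubCrawler()
crawler.crawler_strategy.set_hook("before_goto", modify_headers_before_goto)
```

Keeping the hook table on the strategy means swapping strategies swaps the whole hook lifecycle, which is presumably why the tutorial's examples were updated this way.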
@@ -1672,7 +1674,7 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra
         await page.wait_for_selector('.main-content')
         print("Main content loaded, ready to retrieve HTML")
 
-    crawler.set_hook('before_retrieve_html', wait_for_content_before_retrieve)
+    crawler.crawler_strategy.set_hook('before_retrieve_html', wait_for_content_before_retrieve)
     ```
   - **Explanation**: This hook waits for the main content to load before retrieving the HTML, ensuring that all essential content is captured.
 
@@ -1682,9 +1684,9 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra
   - Each hook function can be asynchronous (useful for actions like waiting or retrieving async data).
 - **Example Setup**:
   ```python
-  crawler.set_hook('on_browser_created', log_browser_creation)
-  crawler.set_hook('before_goto', modify_headers_before_goto)
-  crawler.set_hook('after_goto', post_navigation_scroll)
+  crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation)
+  crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto)
+  crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll)
   ```
 
 #### **5. Complete Example: Using Hooks for a Customized Crawl Workflow**
@@ -1694,10 +1696,10 @@ Here’s a detailed outline for the **Hooks and Custom Workflow with AsyncWebCra
 async def custom_crawl():
     async with AsyncWebCrawler() as crawler:
         # Set hooks for custom workflow
-        crawler.set_hook('on_browser_created', log_browser_creation)
-        crawler.set_hook('before_goto', modify_headers_before_goto)
-        crawler.set_hook('after_goto', post_navigation_scroll)
-        crawler.set_hook('before_return_html', remove_advertisements)
+        crawler.crawler_strategy.set_hook('on_browser_created', log_browser_creation)
+        crawler.crawler_strategy.set_hook('before_goto', modify_headers_before_goto)
+        crawler.crawler_strategy.set_hook('after_goto', post_navigation_scroll)
+        crawler.crawler_strategy.set_hook('before_return_html', remove_advertisements)
 
         # Perform the crawl
         url = "https://example.com"
diff --git a/docs/nootbooks/Crawl4AI_v0.3.72_Release_Announcement.ipynb b/docs/notebooks/Crawl4AI_v0.3.72_Release_Announcement.ipynb
similarity index 100%
rename from docs/nootbooks/Crawl4AI_v0.3.72_Release_Announcement.ipynb
rename to docs/notebooks/Crawl4AI_v0.3.72_Release_Announcement.ipynb
diff --git a/mkdocs.yml b/mkdocs.yml
index 52fdd579..ddcad318 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -33,29 +33,30 @@ nav:
     - 'Cosine Strategy': 'extraction/cosine.md'
     - 'Chunking': 'extraction/chunking.md'
 
-  - Tutorial:
-    - 'Episode 1: Introduction to Crawl4AI and Basic Installation': 'tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md'
-    - 'Episode 2: Overview of Advanced Features': 'tutorial/episode_02_Overview_of_Advanced_Features.md'
-    - 'Episode 3: Browser Configurations & Headless Crawling': 'tutorial/episode_03_Browser_Configurations_&_Headless_Crawling.md'
-    - 'Episode 4: Advanced Proxy and Security Settings': 'tutorial/episode_04_Advanced_Proxy_and_Security_Settings.md'
-    - 'Episode 5: JavaScript Execution and Dynamic Content Handling': 'tutorial/episode_05_JavaScript_Execution_and_Dynamic_Content_Handling.md'
-    - 'Episode 6: Magic Mode and Anti-Bot Protection': 'tutorial/episode_06_Magic_Mode_and_Anti-Bot_Protection.md'
-    - 'Episode 7: Content Cleaning and Fit Markdown': 'tutorial/episode_07_Content_Cleaning_and_Fit_Markdown.md'
-    - 'Episode 8: Media Handling: Images, Videos, and Audio': 'tutorial/episode_08_Media_Handling:_Images,_Videos,_and_Audio.md'
-    - 'Episode 9: Link Analysis and Smart Filtering': 'tutorial/episode_09_Link_Analysis_and_Smart_Filtering.md'
-    - 'Episode 10: Custom Headers, Identity, and User Simulation': 'tutorial/episode_10_Custom_Headers,_Identity,_and_User_Simulation.md'
-    - 'Episode 11.1: Extraction Strategies: JSON CSS': 'tutorial/episode_11_1_Extraction_Strategies:_JSON_CSS.md'
-    - 'Episode 11.2: Extraction Strategies: LLM': 'tutorial/episode_11_2_Extraction_Strategies:_LLM.md'
-    - 'Episode 11.3: Extraction Strategies: Cosine': 'tutorial/episode_11_3_Extraction_Strategies:_Cosine.md'
-    - 'Episode 12: Session-Based Crawling for Dynamic Websites': 'tutorial/episode_12_Session-Based_Crawling_for_Dynamic_Websites.md'
-    - 'Episode 13: Chunking Strategies for Large Text Processing': 'tutorial/episode_13_Chunking_Strategies_for_Large_Text_Processing.md'
-    - 'Episode 14: Hooks and Custom Workflow with AsyncWebCrawler': 'tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md'
-
   - API Reference:
     - 'AsyncWebCrawler': 'api/async-webcrawler.md'
     - 'AsyncWebCrawler.arun()': 'api/arun.md'
     - 'CrawlResult': 'api/crawl-result.md'
     - 'Strategies': 'api/strategies.md'
+
+  - Tutorial:
+    - '1. Getting Started': 'tutorial/episode_01_Introduction_to_Crawl4AI_and_Basic_Installation.md'
+    - '2. Advanced Features': 'tutorial/episode_02_Overview_of_Advanced_Features.md'
+    - '3. Browser Setup': 'tutorial/episode_03_Browser_Configurations_&_Headless_Crawling.md'
+    - '4. Proxy Settings': 'tutorial/episode_04_Advanced_Proxy_and_Security_Settings.md'
+    - '5. Dynamic Content': 'tutorial/episode_05_JavaScript_Execution_and_Dynamic_Content_Handling.md'
+    - '6. Magic Mode': 'tutorial/episode_06_Magic_Mode_and_Anti-Bot_Protection.md'
+    - '7. Content Cleaning': 'tutorial/episode_07_Content_Cleaning_and_Fit_Markdown.md'
+    - '8. Media Handling': 'tutorial/episode_08_Media_Handling:_Images,_Videos,_and_Audio.md'
+    - '9. Link Analysis': 'tutorial/episode_09_Link_Analysis_and_Smart_Filtering.md'
+    - '10. User Simulation': 'tutorial/episode_10_Custom_Headers,_Identity,_and_User_Simulation.md'
+    - '11.1. JSON CSS': 'tutorial/episode_11_1_Extraction_Strategies:_JSON_CSS.md'
+    - '11.2. LLM Strategy': 'tutorial/episode_11_2_Extraction_Strategies:_LLM.md'
+    - '11.3. Cosine Strategy': 'tutorial/episode_11_3_Extraction_Strategies:_Cosine.md'
+    - '12. Session Crawling': 'tutorial/episode_12_Session-Based_Crawling_for_Dynamic_Websites.md'
+    - '13. Text Chunking': 'tutorial/episode_13_Chunking_Strategies_for_Large_Text_Processing.md'
+    - '14. Custom Workflows': 'tutorial/episode_14_Hooks_and_Custom_Workflow_with_AsyncWebCrawler.md'
+
 theme:
   name: terminal
@@ -79,4 +80,4 @@ extra_css:
 
 extra_javascript:
   - assets/highlight.min.js
-  - assets/highlight_init.js
+  - assets/highlight_init.js
\ No newline at end of file

From 47464cedec3fa5df1fffff6e0046547390c992df Mon Sep 17 00:00:00 2001
From: UncleCode
Date: Wed, 30 Oct 2024 20:42:27 +0800
Subject: [PATCH 6/8] Update README

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 9e937aab..2e54d537 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-# Crawl4AI (Async Version) 🕷️🤖
+# 🔥🕷️ Crawl4AI: Open-source LLM Friendly Web Crawler & Scrapper
 
 unclecode%2Fcrawl4ai | Trendshift
 [![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers)

From cb6f5323aed20bdd6e6b115c4cf6a4ed33b71c70 Mon Sep 17 00:00:00 2001
From: UncleCode
Date: Wed, 30 Oct 2024 20:44:57 +0800
Subject: [PATCH 7/8] Update README

---
 README.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 2e54d537..d783034c 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,5 @@
-# 🔥🕷️ Crawl4AI: Open-source LLM Friendly Web Crawler & Scrapper
+# 🔥🕷️ Crawl4AI:LLM Friendly Web Crawler & Scrapper
+
 unclecode%2Fcrawl4ai | Trendshift
 [![GitHub Stars](https://img.shields.io/github/stars/unclecode/crawl4ai?style=social)](https://github.com/unclecode/crawl4ai/stargazers)

From e97e8df6bae4ee2cd69ceec26d30530998bb0174 Mon Sep 17 00:00:00 2001
From: UncleCode
Date: Wed, 30 Oct 2024 20:45:20 +0800
Subject: [PATCH 8/8] Update README: Fix typo in project name

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index d783034c..6c8a5a2a 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-# 🔥🕷️ Crawl4AI:LLM Friendly Web Crawler & Scrapper
+# 🔥🕷️ Crawl4AI: LLM Friendly Web Crawler & Scrapper
 
 unclecode%2Fcrawl4ai | Trendshift