diff --git a/README.md b/README.md
index 9675ddae..aba3d118 100644
--- a/README.md
+++ b/README.md
@@ -258,6 +258,8 @@ result = crawler.run(
 ### Extraction strategy: CosineStrategy
 
+So far, the extracted content is just the result of chunking. To extract meaningful content, you can use extraction strategies. These strategies cluster consecutive chunks into meaningful blocks, keeping the same order as the text in the HTML. This approach is well suited to RAG applications and semantic search queries.
+
 Using CosineStrategy:
 ```python
 result = crawler.run(
@@ -368,11 +370,11 @@ chunks = chunker.chunk("This is a sample text. It will be split into chunks.")
 `NlpSentenceChunking` uses a natural language processing model to chunk a given text into sentences. This approach leverages SpaCy to accurately split text based on sentence boundaries.
 
 **Constructor Parameters:**
-- `model` (str, optional): The SpaCy model to use for sentence detection. Default is `'en_core_web_sm'`.
+- None.
 
 **Example usage:**
 ```python
-chunker = NlpSentenceChunking(model='en_core_web_sm')
+chunker = NlpSentenceChunking()
 chunks = chunker.chunk("This is a sample text. It will be split into sentences.")
 ```
 
diff --git a/docs/chunking_strategies.json b/docs/chunking_strategies.json
index ec855cc7..b0d2a6bc 100644
--- a/docs/chunking_strategies.json
+++ b/docs/chunking_strategies.json
@@ -1,7 +1,7 @@
 {
 "RegexChunking": "### RegexChunking\n\n`RegexChunking` is a text chunking strategy that splits a given text into smaller parts using regular expressions.\nThis is useful for preparing large texts for processing by language models, ensuring they are divided into manageable segments.\n\n#### Constructor Parameters:\n- `patterns` (list, optional): A list of regular expression patterns used to split the text. Default is to split by double newlines (`['\\n\\n']`).\n\n#### Example usage:\n```python\nchunker = RegexChunking(patterns=[r'\\n\\n', r'\\. '])\nchunks = chunker.chunk(\"This is a sample text. It will be split into chunks.\")\n```",
-"NlpSentenceChunking": "### NlpSentenceChunking\n\n`NlpSentenceChunking` uses a natural language processing model to chunk a given text into sentences. This approach leverages SpaCy to accurately split text based on sentence boundaries.\n\n#### Constructor Parameters:\n- `model` (str, optional): The SpaCy model to use for sentence detection. Default is `'en_core_web_sm'`.\n\n#### Example usage:\n```python\nchunker = NlpSentenceChunking(model='en_core_web_sm')\nchunks = chunker.chunk(\"This is a sample text. It will be split into sentences.\")\n```",
+"NlpSentenceChunking": "### NlpSentenceChunking\n\n`NlpSentenceChunking` uses a natural language processing model to chunk a given text into sentences. This approach leverages SpaCy to accurately split text based on sentence boundaries.\n\n#### Constructor Parameters:\n- None.\n\n#### Example usage:\n```python\nchunker = NlpSentenceChunking()\nchunks = chunker.chunk(\"This is a sample text. It will be split into sentences.\")\n```",
 "TopicSegmentationChunking": "### TopicSegmentationChunking\n\n`TopicSegmentationChunking` uses the TextTiling algorithm to segment a given text into topic-based chunks. This method identifies thematic boundaries in the text.\n\n#### Constructor Parameters:\n- `num_keywords` (int, optional): The number of keywords to extract for each topic segment. Default is `3`.\n\n#### Example usage:\n```python\nchunker = TopicSegmentationChunking(num_keywords=3)\nchunks = chunker.chunk(\"This is a sample text. It will be split into topic-based segments.\")\n```",
diff --git a/pages/tmp.html b/pages/tmp.html
index 190afd98..7c924676 100644
--- a/pages/tmp.html
+++ b/pages/tmp.html
@@ -236,12 +236,11 @@ chunks = chunker.chunk("This is a sample text. It will be split into chunks.")

 Constructor Parameters:
 
 Example usage:
 
-                chunker = NlpSentenceChunking(model='en_core_web_sm')
+                chunker = NlpSentenceChunking()
 chunks = chunker.chunk("This is a sample text. It will be split into sentences.")
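For context on the chunking API this PR documents: the `RegexChunking` entry is left unchanged by the diff, and its documented behavior (split text on a list of regex patterns, defaulting to double newlines) can be sketched in plain Python. This is a minimal illustrative toy based only on the docstrings above, not crawl4ai's actual implementation:

```python
import re

class RegexChunking:
    """Toy sketch of the documented RegexChunking behavior.

    Splits text on each pattern in turn; default splits on
    double newlines, per the docs. Not the real library code.
    """

    def __init__(self, patterns=None):
        self.patterns = patterns or [r'\n\n']  # documented default

    def chunk(self, text):
        chunks = [text]
        for pattern in self.patterns:
            next_chunks = []
            for c in chunks:
                # keep only non-empty pieces after splitting
                next_chunks.extend(p for p in re.split(pattern, c) if p.strip())
            chunks = next_chunks
        return chunks

chunker = RegexChunking(patterns=[r'\. '])
print(chunker.chunk("This is a sample text. It will be split into chunks."))
# → ['This is a sample text', 'It will be split into chunks.']
```

Note that `NlpSentenceChunking`, by contrast, delegates sentence splitting to SpaCy, which is why (per this PR) it no longer takes a `model` parameter.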