docs: update text in llm-strategies.md to reflect new changes in LlmConfig
@@ -20,11 +20,17 @@ In some cases, you need to extract **complex or unstructured** information from
## 2. Provider-Agnostic via LiteLLM
You can use `LlmConfig` to quickly configure multiple LLM variations and experiment with them to find the optimal one for your use case. You can read more about `LlmConfig` [here](/api/parameters).
```python
import os

llmConfig = LlmConfig(provider="openai/gpt-4o-mini", api_token=os.getenv("OPENAI_API_KEY"))
```
Crawl4AI uses a “provider string” (e.g., `"openai/gpt-4o"`, `"ollama/llama2.0"`, `"aws/titan"`) to identify your LLM. **Any** model that LiteLLM supports is fair game. You just provide:
- **`provider`**: The `<provider>/<model_name>` identifier (e.g., `"openai/gpt-4"`, `"ollama/llama2"`, `"huggingface/google-flan"`, etc.).
- **`api_token`**: If needed (for OpenAI, HuggingFace, etc.); local models or Ollama might not require it.
- **`api_base`** (optional): If your provider has a custom endpoint.
- **`base_url`** (optional): If your provider has a custom endpoint.
This means you **aren’t locked** into a single LLM vendor. Switch or experiment easily.
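Since every model is addressed by the same `<provider>/<model_name>` convention, switching vendors is a one-string change. As a rough illustration of the convention itself (the helper below is hypothetical, for illustration only, and not part of Crawl4AI or LiteLLM):

```python
def split_provider_string(provider: str) -> tuple[str, str]:
    """Split a LiteLLM-style provider string into (vendor, model_name)."""
    vendor, _, model = provider.partition("/")
    if not vendor or not model:
        raise ValueError(f"expected '<provider>/<model_name>', got {provider!r}")
    return vendor, model

# The same code path works for any vendor LiteLLM supports:
for p in ["openai/gpt-4o", "ollama/llama2", "aws/titan"]:
    print(split_provider_string(p))
```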
@@ -52,20 +58,19 @@ For structured data, `"schema"` is recommended. You provide `schema=YourPydantic
Below is an overview of important LLM extraction parameters. All are typically set inside `LLMExtractionStrategy(...)`. You then put that strategy in your `CrawlerRunConfig(..., extraction_strategy=...)`.
1. **`llmConfig`** (LlmConfig): The LLM configuration, specifying the provider (e.g., `"openai/gpt-4"`, `"ollama/llama2"`) and credentials.
2. **`schema`** (dict): A JSON schema describing the fields you want. Usually generated by `YourModel.model_json_schema()`.
3. **`extraction_type`** (str): `"schema"` or `"block"`.
4. **`instruction`** (str): Prompt text telling the LLM what you want extracted. E.g., “Extract these fields as a JSON array.”
5. **`chunk_token_threshold`** (int): Maximum tokens per chunk. If your content is huge, you can break it up for the LLM.
6. **`overlap_rate`** (float): Overlap ratio between adjacent chunks. E.g., `0.1` means 10% of each chunk is repeated to preserve context continuity.
7. **`apply_chunking`** (bool): Set `True` to chunk automatically. If you want a single pass, set `False`.
8. **`input_format`** (str): Determines **which** crawler result is passed to the LLM. Options include:
- `"markdown"`: The raw markdown (default).
- `"fit_markdown"`: The filtered “fit” markdown if you used a content filter.
- `"html"`: The cleaned or raw HTML.
9. **`extra_args`** (dict): Additional LLM parameters like `temperature`, `max_tokens`, `top_p`, etc.
10. **`show_usage()`**: A method you can call to print out usage info (token usage per chunk, total cost if known).
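To make the chunking parameters concrete, here is a simplified sketch of how `chunk_token_threshold` and `overlap_rate` interact. This is an illustration only, not Crawl4AI's internal chunking code:

```python
def chunk_tokens(tokens: list, chunk_token_threshold: int, overlap_rate: float) -> list[list]:
    """Split tokens into chunks of at most chunk_token_threshold tokens,
    where each chunk repeats the tail of the previous one for context."""
    overlap = int(chunk_token_threshold * overlap_rate)
    step = chunk_token_threshold - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_token_threshold])
        if start + chunk_token_threshold >= len(tokens):
            break
    return chunks

# 10 tokens, chunks of 4, 25% overlap -> each chunk repeats 1 token of the previous:
print(chunk_tokens(list(range(10)), chunk_token_threshold=4, overlap_rate=0.25))
```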
**Example**:
@@ -233,8 +238,7 @@ class KnowledgeGraph(BaseModel):
async def main():
# LLM extraction strategy
llm_strat = LLMExtractionStrategy(
llmConfig=LlmConfig(provider="openai/gpt-4", api_token=os.getenv('OPENAI_API_KEY')),
schema=KnowledgeGraph.model_json_schema(),
extraction_type="schema",
instruction="Extract entities and relationships from the content. Return valid JSON.",
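The diff truncates the surrounding example, but the `KnowledgeGraph` Pydantic model it references might look like the following sketch (the field names are illustrative assumptions, not taken from the diff):

```python
from pydantic import BaseModel

class Entity(BaseModel):
    name: str
    description: str

class Relationship(BaseModel):
    entity1: Entity
    entity2: Entity
    description: str

class KnowledgeGraph(BaseModel):
    entities: list[Entity]
    relationships: list[Relationship]

# model_json_schema() produces the dict passed as `schema=` above:
schema = KnowledgeGraph.model_json_schema()
print(sorted(schema["properties"]))
```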