Add token usage tracking to generate_schema / agenerate_schema
generate_schema can make up to 5 internal LLM calls (field inference, schema generation, validation retries) with no way to track token consumption. Add an optional `usage: TokenUsage = None` parameter that accumulates prompt/completion/total tokens across all calls in-place.

- _infer_target_json: accept and populate the usage accumulator
- agenerate_schema: track usage after every aperform_completion call in the retry loop; forward usage to _infer_target_json
- generate_schema (sync): forward usage to agenerate_schema

Fully backward-compatible: omitting usage changes nothing.
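The in-place accumulation described above can be sketched as follows. This is a minimal illustration, not the crawl4ai implementation: the `TokenUsage` field names and the `record_usage` helper are assumptions (the real class lives in `crawl4ai.models` and may differ).

```python
from dataclasses import dataclass

# Hypothetical minimal shape of the TokenUsage accumulator (assumed fields;
# the real class is crawl4ai.models.TokenUsage and may differ).
@dataclass
class TokenUsage:
    prompt_tokens: int = 0
    completion_tokens: int = 0
    total_tokens: int = 0

def record_usage(usage, prompt_tokens, completion_tokens):
    """Accumulate one LLM call's token counts in-place; no-op when usage is None."""
    if usage is None:
        return  # caller opted out of tracking -- fully backward-compatible
    usage.prompt_tokens += prompt_tokens
    usage.completion_tokens += completion_tokens
    usage.total_tokens += prompt_tokens + completion_tokens

usage = TokenUsage()
record_usage(usage, 120, 30)   # e.g. a field-inference call
record_usage(usage, 400, 250)  # e.g. a schema-generation call
print(usage.total_tokens)      # → 800
```

Because the accumulator is mutated in place, one `TokenUsage` instance passed through `_infer_target_json` and each retry-loop completion ends up holding the combined totals.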
@@ -4537,6 +4537,20 @@ xpath_schema = JsonXPathExtractionStrategy.generate_schema(
# Use the generated schema for fast, repeated extractions
strategy = JsonCssExtractionStrategy(css_schema)
```
### Token Usage Tracking
`generate_schema` may make multiple LLM calls internally (field inference, generation, validation retries). Track total token consumption by passing a `TokenUsage` accumulator:
```python
from crawl4ai.models import TokenUsage
usage = TokenUsage()
schema = JsonCssExtractionStrategy.generate_schema(
url="https://example.com/products",
query="extract product name and price",
usage=usage,
)
print(f"Total tokens: {usage.total_tokens}")
```
The `usage` parameter is optional and fully backward-compatible. Both `generate_schema` (sync) and `agenerate_schema` (async) support it.
### LLM Provider Options
1. **OpenAI GPT-4 (`openai/gpt4o`)**
- Default provider