Implement hierarchical configuration for LLM parameters with support for:
- Temperature control (0.0-2.0) to adjust response creativity
- Custom base_url for proxy servers and alternative endpoints
- 4-tier priority: request params > provider env > global env > defaults

Add helper functions in utils.py, update API schemas and handlers, support environment variables (LLM_TEMPERATURE, OPENAI_TEMPERATURE, etc.), and provide comprehensive documentation with examples.
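The 4-tier priority order could be implemented along these lines. This is a minimal sketch, not the actual utils.py code: the function name `resolve_llm_param`, the `DEFAULTS` table, and the `provider/model` string format are illustrative assumptions.

```python
import os

# Hypothetical defaults (tier 4); the real defaults live in config.yml.
DEFAULTS = {"temperature": 1.0, "base_url": None}

def resolve_llm_param(param: str, provider: str, request_value=None):
    """Resolve an LLM parameter using the 4-tier priority:
    request params > provider env > global env > defaults."""
    # Tier 1: an explicit request parameter wins outright.
    if request_value is not None:
        return request_value
    # Tier 2: provider-specific env var, e.g. OPENAI_TEMPERATURE
    # for a provider string like "openai/gpt-4o-mini".
    provider_name = provider.split("/")[0].upper()
    value = os.getenv(f"{provider_name}_{param.upper()}")
    if value is not None:
        return float(value) if param == "temperature" else value
    # Tier 3: global env var, e.g. LLM_TEMPERATURE.
    value = os.getenv(f"LLM_{param.upper()}")
    if value is not None:
        return float(value) if param == "temperature" else value
    # Tier 4: hard-coded default.
    return DEFAULTS[param]
```

With `OPENAI_TEMPERATURE=0.5` and `LLM_TEMPERATURE=0.7` set, a request to an OpenAI model with no explicit temperature would resolve to 0.5, while a Groq request would fall back to the global 0.7.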
# LLM Provider Keys
OPENAI_API_KEY=your_openai_key_here
DEEPSEEK_API_KEY=your_deepseek_key_here
ANTHROPIC_API_KEY=your_anthropic_key_here
GROQ_API_KEY=your_groq_key_here
TOGETHER_API_KEY=your_together_key_here
MISTRAL_API_KEY=your_mistral_key_here
GEMINI_API_TOKEN=your_gemini_key_here

# Optional: Override the default LLM provider
# Examples: "openai/gpt-4", "anthropic/claude-3-opus", "deepseek/chat", etc.
# If not set, uses the provider specified in config.yml (default: openai/gpt-4o-mini)
# LLM_PROVIDER=anthropic/claude-3-opus

# Optional: Global LLM temperature setting (0.0-2.0)
# Controls randomness in responses. Lower = more focused, Higher = more creative
# LLM_TEMPERATURE=0.7

# Optional: Global custom API base URL
# Use this to point to custom endpoints or proxy servers
# LLM_BASE_URL=https://api.custom.com/v1

# Optional: Provider-specific temperature overrides
# These take precedence over the global LLM_TEMPERATURE
# OPENAI_TEMPERATURE=0.5
# ANTHROPIC_TEMPERATURE=0.3
# GROQ_TEMPERATURE=0.8

# Optional: Provider-specific base URL overrides
# Use for provider-specific proxy endpoints
# OPENAI_BASE_URL=https://custom-openai.company.com/v1
# GROQ_BASE_URL=https://custom-groq.company.com/v1
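Since the stated temperature range is 0.0-2.0, the API handlers presumably validate user-supplied values. A minimal sketch of such a check, assuming a reject-on-out-of-range policy (the function name `validate_temperature` is illustrative, not the actual handler code):

```python
def validate_temperature(value: float) -> float:
    """Reject temperatures outside the supported 0.0-2.0 range."""
    if not 0.0 <= value <= 2.0:
        raise ValueError(f"temperature must be between 0.0 and 2.0, got {value}")
    return value
```

An alternative design would be to silently clamp out-of-range values into the interval, but rejecting with an explicit error surfaces misconfigured env vars instead of hiding them.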