fix: increase LLM timeout 30s→60s, fix Gateway request format, add Ollama optimization guide
- Fixed Gateway request format: 'prompt' → 'message' field name
- Increased LLM provider timeout from 30s to 60s
- Added OLLAMA-OPTIMIZATION.md with performance tips
- DAARWIZZ now responds (slowly, but it works)
@@ -25,7 +25,7 @@ class LLMProvider(Provider):
         base_url: str,
         model: str,
         api_key: Optional[str] = None,
-        timeout_s: int = 30,
+        timeout_s: int = 60,
         max_tokens: int = 1024,
         temperature: float = 0.2,
         provider_type: str = "openai",  # "openai" or "ollama"
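A minimal sketch of the two fixes described in the commit message: the Gateway request body now uses the 'message' field instead of 'prompt', and the request timeout matches the new 60-second provider default so slow local Ollama models have time to respond. The helper name, endpoint path, and payload shape beyond the renamed field are assumptions for illustration, not taken from the actual Gateway source.

```python
def build_chat_request(base_url: str, text: str, timeout_s: int = 60) -> dict:
    """Assemble the pieces of a Gateway chat request after this fix.

    Hypothetical helper: only the 'message' field name and the 60s
    timeout come from the commit; everything else is illustrative.
    """
    return {
        "url": f"{base_url}/chat",    # endpoint path is an assumption
        "json": {"message": text},    # field renamed from "prompt"
        "timeout": timeout_s,         # raised from 30s to 60s
    }

req = build_chat_request("http://localhost:8080", "hello")
```

With the old 'prompt' key the Gateway rejected the request, so renaming the field is what made DAARWIZZ respond at all; the longer timeout only keeps slow Ollama generations from being cut off.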