fix: increase LLM timeout 30s→60s, fix Gateway request format, add Ollama optimization guide

- Fixed Gateway: 'prompt' → 'message' field name
- Increased LLM provider timeout from 30s to 60s
- Added OLLAMA-OPTIMIZATION.md with performance tips
- DAARWIZZ now responds (slowly, but it works)
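The Gateway fix is a field rename in the request payload. A minimal sketch of the corrected request builder, assuming a JSON body sent to the Gateway's chat endpoint (the function name and the `session_id` field are illustrative, not taken from the codebase):

```python
def build_gateway_request(user_text: str, session_id: str) -> dict:
    """Build the JSON payload for the Gateway chat endpoint.

    Before the fix the user text was sent under "prompt"; the Gateway
    expects it under "message", so requests silently failed to carry
    the user's input.
    """
    return {
        "message": user_text,  # was: "prompt"
        "session_id": session_id,
    }
```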
This commit is contained in:
Ivan Tytar
2025-11-15 17:46:35 +01:00
parent 36770c5c92
commit 03d3d6ecc4
3 changed files with 94 additions and 3 deletions


@@ -25,7 +25,7 @@ class LLMProvider(Provider):
         base_url: str,
         model: str,
         api_key: Optional[str] = None,
-        timeout_s: int = 30,
+        timeout_s: int = 60,
         max_tokens: int = 1024,
         temperature: float = 0.2,
         provider_type: str = "openai",  # "openai" or "ollama"
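With the new default, callers get the 60s timeout automatically but can still override it per instance. A hypothetical stand-in mirroring only the constructor parameters shown in the diff (the class body and the example URL/model are assumptions):

```python
from typing import Optional


class LLMProvider:
    """Minimal stand-in for the provider constructor shown in the diff."""

    def __init__(
        self,
        base_url: str,
        model: str,
        api_key: Optional[str] = None,
        timeout_s: int = 60,  # raised from 30s for slow local Ollama models
        max_tokens: int = 1024,
        temperature: float = 0.2,
        provider_type: str = "openai",  # "openai" or "ollama"
    ) -> None:
        self.base_url = base_url
        self.model = model
        self.api_key = api_key
        self.timeout_s = timeout_s
        self.max_tokens = max_tokens
        self.temperature = temperature
        self.provider_type = provider_type


# Default timeout applies unless explicitly overridden:
provider = LLMProvider(
    base_url="http://localhost:11434",  # assumed local Ollama endpoint
    model="llama3",                     # illustrative model name
    provider_type="ollama",
)
```

Keeping the timeout a constructor parameter (rather than a hard-coded constant) lets slow local models get a longer budget without affecting hosted providers.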