Agent Runtime Policy (NODE1)
Purpose
Single policy for runtime model selection and orchestration behavior.
Agent Classes
- top_level: Telegram-facing orchestrators with optional CrewAI teams.
- internal: infrastructure/service agents (not Telegram-facing by default).
Model Policy
- Top-level agents: `cloud_deepseek` primary + `cloud_mistral` fallback.
- Exception: `sofiia` uses `cloud_grok` primary + `cloud_deepseek` fallback.
- Internal agents: local-first (`ollama` profiles).
  - monitor: `qwen2_5_3b_service`.
  - devtools: local by default; cloud override only for explicitly heavy tasks.
  - comfy: no chat LLM profile (tool/service execution path).
Orchestration Policy (CrewAI)
- Top-level agents are direct-LLM first for simple requests.
- CrewAI is on-demand for complex/detailed requests (`force_detailed`/`requires_complex_reasoning`).
- Fast path must cap active subagents to 2-3 roles.
- Final user response is always formed by the top-level agent.
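The dispatch order above can be sketched as follows. Names such as `handle_request` and the request-dict shape are assumptions for illustration, not the project's real interface; only the two complexity flags and the role cap come from the policy.

```python
MAX_FAST_PATH_ROLES = 3  # fast path caps active subagents to 2-3 roles

def needs_crew(request: dict) -> bool:
    # CrewAI is engaged only on demand, via explicit complexity flags.
    return bool(request.get("force_detailed") or request.get("requires_complex_reasoning"))

def handle_request(request: dict, available_roles: list[str]) -> dict:
    if not needs_crew(request):
        # Simple requests go direct to the top-level agent's LLM.
        return {"path": "direct", "roles": []}
    # Complex requests spin up a capped CrewAI team; the top-level
    # agent still forms the final user response either way.
    return {"path": "crew", "roles": available_roles[:MAX_FAST_PATH_ROLES]}
```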
Routing Constraints
- Do not use code-level hard overrides for cloud/local policy if it can be expressed in `router-config.yml`.
- Prefer deterministic routing rules per agent over generic fallbacks.
Source of Truth
- Canonical intent: `config/agent_registry.yml`.
- Runtime implementation: `services/router/router-config.yml`.
- Any policy change must be reflected in both until router config generation is fully automated.
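Until generation is automated, drift between the two files can be caught with a simple check. A minimal sketch, assuming each config parses to a per-agent mapping with a `primary` key; in practice the real files would be loaded with a YAML parser, and the key names here are hypothetical.

```python
def find_policy_drift(registry: dict, router: dict) -> list[str]:
    """Report agents whose primary model differs between
    config/agent_registry.yml (intent) and router-config.yml (runtime)."""
    drift = []
    for agent, intent in registry.items():
        runtime = router.get(agent, {})
        if intent.get("primary") != runtime.get("primary"):
            drift.append(agent)
    return sorted(drift)
```

Running this in CI on every policy change would enforce the "reflected in both" rule mechanically.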