node2: fix Sofiia routing determinism + Node Capabilities Service
Bug fixes:
- Bug A: GROK_API_KEY env mismatch — router expected GROK_API_KEY but only
XAI_API_KEY was present. Added GROK_API_KEY=${XAI_API_KEY} alias in compose.
- Bug B: 'grok' profile missing in router-config.node2.yml — added cloud_grok
profile (provider: grok, model: grok-2-1212). Sofiia now has
default_llm=cloud_grok with fallback_llm=local_default_coder.
- Bug C: Router silently defaulted to cloud DeepSeek when profile was unknown.
Now falls back to agent.fallback_llm or local_default_coder with WARNING log.
Hardcoded Ollama URL (172.18.0.1) replaced with config-driven base_url.
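The fallback behavior from Bug C can be sketched in a few lines; the profile table and `resolve_profile` helper below are illustrative shapes, not the actual services/router/main.py code:

```python
import logging

log = logging.getLogger("router")

# Illustrative profile table mirroring router-config.node2.yml (names assumed).
LLM_PROFILES = {
    "cloud_grok": {"provider": "grok", "model": "grok-2-1212"},
    "local_default_coder": {"provider": "ollama", "model": "qwen3:14b"},
}

def resolve_profile(requested: str, agent: dict) -> dict:
    """Resolve an LLM profile without ever defaulting to a hardcoded cloud provider."""
    if requested in LLM_PROFILES:
        return LLM_PROFILES[requested]
    # Unknown profile: warn loudly and fall back to the agent's local fallback,
    # instead of silently picking cloud DeepSeek as before.
    fallback = agent.get("fallback_llm", "local_default_coder")
    log.warning("Unknown llm_profile %r; falling back to %r", requested, fallback)
    return LLM_PROFILES[fallback]

sofiia = {"default_llm": "cloud_grok", "fallback_llm": "local_default_coder"}
profile = resolve_profile("grok", sofiia)  # "grok" is not a profile name -> local fallback
```

With this shape, a typo'd or missing profile degrades to a local model and leaves a WARNING in the logs, which is what makes the routing deterministic.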
New service: Node Capabilities Service (NCS)
- services/node-capabilities/ — FastAPI microservice exposing live model
inventory from Ollama, Swapper, and llama-server.
- GET /capabilities — canonical JSON with served_models[] and inventory_only[]
- GET /capabilities/models — flat list of served models
- POST /capabilities/refresh — force cache refresh
- Cache TTL 15s, bound to 127.0.0.1:8099
- services/router/capabilities_client.py — async client with TTL cache
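On the router side, a TTL-cached async client in the spirit of capabilities_client.py might look like the sketch below; the class name, the injected `fetch` transport, and the demo payload are assumptions, not the real implementation:

```python
import asyncio
import time
from typing import Any, Awaitable, Callable

class CapabilitiesClient:
    """TTL-cached async client for the Node Capabilities Service (sketch only)."""

    def __init__(self, fetch: Callable[[str], Awaitable[dict]], ttl: float = 15.0,
                 base_url: str = "http://127.0.0.1:8099"):
        # The transport (aiohttp, httpx, ...) is injected so the cache logic
        # stays independent of any particular HTTP library.
        self._fetch = fetch
        self._ttl = ttl
        self._base_url = base_url
        self._cache: dict[str, Any] | None = None
        self._stamp = 0.0

    async def capabilities(self) -> dict:
        # Serve from cache while it is younger than the TTL (15s in the service).
        if self._cache is None or time.monotonic() - self._stamp > self._ttl:
            self._cache = await self._fetch(self._base_url + "/capabilities")
            self._stamp = time.monotonic()
        return self._cache

async def demo() -> tuple[int, dict]:
    calls = 0
    async def fake_fetch(url: str) -> dict:
        nonlocal calls
        calls += 1
        return {"served_models": ["qwen3:14b"], "inventory_only": []}
    client = CapabilitiesClient(fake_fetch, ttl=15.0)
    caps = await client.capabilities()
    await client.capabilities()  # second call within the TTL hits the cache
    return calls, caps

calls, caps = asyncio.run(demo())
```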
Artifacts:
- ops/node2_models_audit.md — 3-layer model view (served/disk/cloud)
- ops/node2_models_audit.yml — machine-readable audit
- ops/node2_capabilities_example.json — sample NCS output (14 served models)
Made-with: Cursor
ops/node2_capabilities_example.json (new file; diff suppressed because one or more lines are too long)

ops/node2_models_audit.md (new file, +125 lines)
@@ -0,0 +1,125 @@
# NODA2 Model Audit — Three-Layer View

**Date:** 2026-02-27
**Node:** MacBook Pro M4 Max, 64GB unified memory

---

## Layer 1: Served by Runtime (routing-eligible)

These are models the router can actively select and invoke.

### Ollama (12 models, port 11434)

| Model | Type | Size | Status | Note |
|-------|------|------|--------|------|
| qwen3.5:35b-a3b | LLM (MoE) | 9.3 GB | idle | PRIMARY reasoning |
| qwen3:14b | LLM | 9.3 GB | idle | Default local |
| gemma3:latest | LLM | 3.3 GB | idle | Fast small |
| glm-4.7-flash:32k | LLM | 19 GB | idle | Long-context |
| glm-4.7-flash:q4_K_M | LLM | 19 GB | idle | **DUPLICATE** |
| llava:13b | Vision | 8.0 GB | idle | P0 fallback |
| mistral-nemo:12b | LLM | 7.1 GB | idle | old |
| deepseek-coder:33b | Code | 18.8 GB | idle | Heavy code |
| deepseek-r1:70b | LLM | 42.5 GB | idle | Very heavy reasoning |
| starcoder2:3b | Code | 1.7 GB | idle | Fast code |
| phi3:latest | LLM | 2.2 GB | idle | Small general |
| gpt-oss:latest | LLM | 13.8 GB | idle | old |
### Swapper (port 8890)

| Model | Type | Status |
|-------|------|--------|
| llava-13b | Vision | unloaded |

### llama-server (port 11435)

| Model | Type | Note |
|-------|------|------|
| Qwen3.5-35B-A3B-Q4_K_M.gguf | LLM | **DUPLICATE** of Ollama |
### Cloud APIs

| Provider | Model | API Key | Active |
|----------|-------|---------|--------|
| Grok (xAI) | grok-2-1212 | `GROK_API_KEY` ✅ | **Sofiia primary** |
| DeepSeek | deepseek-chat | `DEEPSEEK_API_KEY` ✅ | Other agents |
| Mistral | mistral-large | `MISTRAL_API_KEY` | Not configured |
---

## Layer 2: Installed on Disk (not served)

These are on disk but NOT reachable by router/swapper.

| Model | Type | Size | Location | Status |
|-------|------|------|----------|--------|
| whisper-large-v3-turbo (MLX) | STT | 1.5 GB | HF cache | Ready, not integrated |
| Kokoro-82M-bf16 (MLX) | TTS | 0.35 GB | HF cache | Ready, not integrated |
| MiniCPM-V-4_5 | Vision | 16 GB | HF cache | Not serving |
| Qwen3-VL-32B-Instruct | Vision | 123 GB | Cursor worktree | R&D artifact |
| Jan-v2-VL-med-Q8_0 | Vision | 9.2 GB | Jan AI | Not running |
| Qwen2.5-7B-Instruct | LLM | 14 GB | HF cache | Idle |
| Qwen2.5-1.5B-Instruct | LLM | 2.9 GB | HF cache | Idle |
| flux2-dev-Q8_0 | Image gen | 33 GB | ComfyUI | Offline |
| ltx-2-19b-distilled | Video gen | 25 GB | ComfyUI | Offline |
| SDXL-base-1.0 | Image gen | 72 GB | hf_models | Legacy |
| FLUX.2-dev (Aquiles) | Image gen | 105 GB | HF cache | ComfyUI |
---

## Layer 3: Sofiia Routing (after fix)

### Before fix (broken)

```
agent_registry: llm_profile=grok
  → router looks up "grok" in node2 config → NOT FOUND
  → llm_profile = {} → provider defaults to "deepseek" (hardcoded)
  → tries DEEPSEEK_API_KEY → may work (nondeterministic)
  → XAI_API_KEY exists but mapped as "XAI_API_KEY", not "GROK_API_KEY"
```

### After fix (deterministic)

```
agent_registry: llm_profile=grok

router-config.node2.yml:
  agents.sofiia.default_llm  = cloud_grok
  agents.sofiia.fallback_llm = local_default_coder
  llm_profiles.cloud_grok = {provider: grok, model: grok-2-1212, base_url: https://api.x.ai}

docker-compose: GROK_API_KEY=${XAI_API_KEY} (aliased)

Chain:
1. Sofiia request → router resolves cloud_grok
2. provider=grok → GROK_API_KEY present → xAI API → grok-2-1212
3. If Grok fails → fallback_llm=local_default_coder → qwen3:14b (Ollama)
4. If unknown profile → WARNING logged, falls back to fallback_llm=local_default_coder (local), NOT cloud silently
```

---

## Fixes Applied in This Commit

| Bug | Fix | File |
|-----|-----|------|
| A: GROK_API_KEY not in env | Added `GROK_API_KEY=${XAI_API_KEY}` | docker-compose.node2-sofiia.yml |
| B: No `grok` profile | Added `cloud_grok` profile | router-config.node2.yml |
| B: Sofiia → wrong profile | `agents.sofiia.default_llm = cloud_grok` | router-config.node2.yml |
| C: Silent cloud fallback | Unknown profile → local default + WARNING | services/router/main.py |
| C: Hardcoded Ollama URL | `172.18.0.1:11434` → dynamic from config | services/router/main.py |
| — | Node Capabilities Service | services/node-capabilities/ |
---

## Node Capabilities Service

New microservice providing live model inventory at `GET /capabilities`:
- Collects from Ollama, Swapper, llama-server
- Returns canonical JSON with `served_models[]` and `inventory_only[]`
- Cache TTL: 15s
- Port: 127.0.0.1:8099
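The `served_models[]` / `inventory_only[]` split above can be expressed as a pure merge step; the field names follow this section, while the helper function and sample inventories are hypothetical:

```python
import json
import time

def build_capabilities(ollama: list[str], swapper: list[str],
                       llama_server: list[str], disk_only: list[str]) -> dict:
    """Merge runtime inventories into a /capabilities-style payload (sketch).

    Only models actually served by a runtime are routing-eligible; anything
    that exists merely on disk lands in inventory_only.
    """
    served = sorted(set(ollama) | set(swapper) | set(llama_server))
    return {
        "generated_at": time.time(),
        "served_models": served,                                  # routing-eligible
        "inventory_only": sorted(set(disk_only) - set(served)),   # on disk, not served
    }

caps = build_capabilities(
    ollama=["qwen3:14b", "llava:13b"],
    swapper=["llava-13b"],
    llama_server=["Qwen3.5-35B-A3B-Q4_K_M.gguf"],
    disk_only=["Kokoro-82M-bf16", "qwen3:14b"],
)
payload = json.dumps(caps)  # canonical JSON body served by the endpoint
```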

Verification:

```bash
curl -s http://localhost:8099/capabilities | jq '.served_models | length'
# Expected: 14
```

ops/node2_models_audit.yml (new file, +76 lines)
@@ -0,0 +1,76 @@
# NODA2 Model Audit — Three-layer view
# Date: 2026-02-27
# Source: Node Capabilities Service + manual disk scan

# ─── LAYER 1: SERVED BY RUNTIME (routing-eligible) ───────────────────────────
served_by_runtime:
  ollama:
    base_url: http://host.docker.internal:11434
    version: "0.17.1"
    models:
      - {name: "qwen3.5:35b-a3b", type: llm, size_gb: 9.3, params: "14.8B MoE"}
      - {name: "qwen3:14b", type: llm, size_gb: 9.3, params: "14B"}
      - {name: "gemma3:latest", type: llm, size_gb: 3.3, params: "4B"}
      - {name: "glm-4.7-flash:32k", type: llm, size_gb: 19.0, params: "~32B"}
      - {name: "glm-4.7-flash:q4_K_M", type: llm, size_gb: 19.0, note: "DUPLICATE of :32k"}
      - {name: "llava:13b", type: vision, size_gb: 8.0, params: "13B"}
      - {name: "mistral-nemo:12b", type: llm, size_gb: 7.1, note: "old"}
      - {name: "deepseek-coder:33b", type: code, size_gb: 18.8, params: "33B"}
      - {name: "deepseek-r1:70b", type: llm, size_gb: 42.5, params: "70B"}
      - {name: "starcoder2:3b", type: code, size_gb: 1.7}
      - {name: "phi3:latest", type: llm, size_gb: 2.2}
      - {name: "gpt-oss:latest", type: llm, size_gb: 13.8, note: "old"}

  swapper:
    base_url: http://swapper-service:8890
    active_model: null
    vision_models:
      - {name: "llava-13b", type: vision, size_gb: 8.0, status: unloaded}
    llm_models_count: 9

  llama_server:
    base_url: http://host.docker.internal:11435
    models:
      - {name: "Qwen3.5-35B-A3B-Q4_K_M.gguf", type: llm, note: "DUPLICATE of ollama qwen3.5:35b-a3b"}

# ─── LAYER 2: INSTALLED ON DISK (not served, not for routing) ────────────────
installed_on_disk:
  hf_cache:
    - {name: "whisper-large-v3-turbo-asr-fp16", type: stt, size_gb: 1.5, backend: mlx, ready: true}
    - {name: "Kokoro-82M-bf16", type: tts, size_gb: 0.35, backend: mlx, ready: true}
    - {name: "MiniCPM-V-4_5", type: vision, size_gb: 16.0, backend: hf, ready: false}
    - {name: "Qwen2.5-7B-Instruct", type: llm, size_gb: 14.0, backend: hf}
    - {name: "Qwen2.5-1.5B-Instruct", type: llm, size_gb: 2.9, backend: hf}
    - {name: "FLUX.2-dev (Aquiles)", type: image_gen, size_gb: 105.0, backend: comfyui}

  cursor_worktree:
    - {name: "Qwen3-VL-32B-Instruct", type: vision, size_gb: 123.0, path: "~/.cursor/worktrees/.../models/"}

  jan_ai:
    - {name: "Jan-v2-VL-med-Q8_0", type: vision, size_gb: 9.2, path: "~/Library/Application Support/Jan/"}

  llama_cpp_models:
    - {name: "Qwen3.5-35B-A3B-Q4_K_M.gguf", type: llm, size_gb: 20.0, note: "DUPLICATE, served by llama-server"}

  comfyui:
    - {name: "flux2-dev-Q8_0.gguf", type: image_gen, size_gb: 33.0}
    - {name: "ltx-2-19b-distilled-fp8.safetensors", type: video_gen, size_gb: 25.0}
    - {name: "z_image_turbo_bf16.safetensors", type: image_gen, size_gb: 11.0}
    - {name: "SDXL-base-1.0", type: image_gen, size_gb: 72.0, note: "legacy"}

  hf_models_dir:
    - {name: "stabilityai_sdxl_base_1.0", type: image_gen, size_gb: 72.0, note: "legacy"}

# ─── LAYER 3: CLOUD / EXTERNAL APIs ──────────────────────────────────────────
cloud_apis:
  - {name: "grok-2-1212", provider: grok, api_key_env: "GROK_API_KEY", active: true}
  - {name: "deepseek-chat", provider: deepseek, api_key_env: "DEEPSEEK_API_KEY", active: true}
  - {name: "mistral-large-latest", provider: mistral, api_key_env: "MISTRAL_API_KEY", active: false}

# ─── SOFIIA ROUTING CHAIN (after fix) ────────────────────────────────────────
sofiia_routing:
  agent_registry: "llm_profile: grok"
  router_config: "agents.sofiia.default_llm: cloud_grok → provider=grok, model=grok-2-1212"
  fallback: "fallback_llm: local_default_coder → qwen3:14b (Ollama)"
  env_mapping: "XAI_API_KEY → GROK_API_KEY (aliased in compose)"
  deterministic: true