P3.1: GPU/Queue-aware routing — NCS metrics + scoring-based model selection

NCS (services/node-capabilities/metrics.py):
- NodeLoad: inflight_jobs, queue_depth, concurrency_limit, estimated_wait_ms,
  cpu_load_1m, mem_pressure (macOS + Linux), rtt_ms_to_hub
- RuntimeLoad: per-runtime healthy, p50_ms, p95_ms from rolling 50-sample window
- POST /capabilities/report_latency for node-worker → NCS reporting
- NCS fetches worker metrics via NODE_WORKER_URL

Node Worker:
- GET /metrics endpoint (inflight, concurrency, latency buffers)
- Latency tracking per job type (llm/vision) with rolling buffer
- Fire-and-forget latency reporting to NCS after each successful job

Router (model_select v3):
- score_candidate(): wait + model_latency + cross_node_penalty + prefer_bonus
- LOCAL_THRESHOLD_MS=250: prefer local if within threshold of remote
- ModelSelection.score field for observability
- Structured [score] logs with chosen node, model, and score breakdown

Tests: 19 new (12 scoring + 7 NCS metrics), 36 total pass
Docs: ops/runbook_p3_1.md, ops/CHANGELOG_FABRIC.md

No breaking changes to JobRequest/JobResponse or capabilities schema.

Made-with: Cursor
Apple committed 2026-02-27 02:55:44 -08:00
commit a605b8c43e (parent c4b94a327d)
11 changed files with 706 additions and 40 deletions

ops/CHANGELOG_FABRIC.md (new file)

@@ -0,0 +1,54 @@
# Agent Fabric Layer — Changelog
## v0.3 — P3.1 GPU/Queue-aware Routing (2026-02-27)
### NCS (Node Capabilities Service)
- **NEW** `metrics.py` module: NodeLoad + RuntimeLoad collection (see the shape sketch below)
- Capabilities payload now includes `node_load` and `runtime_load`
- `node_load`: inflight_jobs, queue_depth, concurrency_limit, estimated_wait_ms, cpu_load_1m, mem_pressure, rtt_ms_to_hub
- `runtime_load`: per-runtime healthy status, p50_ms, p95_ms from rolling window
- **NEW** `POST /capabilities/report_latency` — accepts latency reports from node-worker
- NCS fetches worker metrics via `NODE_WORKER_URL` env
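For reference, a minimal sketch of the two payload shapes named above. The field names come from this changelog; the dataclass layout, types, and defaults are assumptions, not the actual `metrics.py` implementation:

```python
# Hypothetical sketch of the NodeLoad / RuntimeLoad shapes.
# Field names follow the changelog; types and defaults are assumed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class NodeLoad:
    inflight_jobs: int = 0
    queue_depth: int = 0
    concurrency_limit: int = 1
    estimated_wait_ms: float = 0.0
    cpu_load_1m: Optional[float] = None   # e.g. os.getloadavg()[0]
    mem_pressure: Optional[float] = None  # None where unavailable (e.g. Docker)
    rtt_ms_to_hub: Optional[float] = None

@dataclass
class RuntimeLoad:
    healthy: bool = True
    p50_ms: Optional[float] = None  # from the rolling 50-sample window
    p95_ms: Optional[float] = None
```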
### Node Worker
- **NEW** `GET /metrics` endpoint: inflight_jobs, concurrency_limit, last_latencies_llm/vision
- Latency tracking: rolling buffer of last 50 latencies per type (see the sketch below)
- Fire-and-forget latency reporting to NCS after each successful job
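A minimal sketch of the rolling-buffer idea: `deque(maxlen=50)` drops the oldest sample automatically, and p50/p95 are read from a sorted copy. The buffer structure and percentile math here are assumptions, not the worker's actual code:

```python
# Hypothetical rolling latency buffer with p50/p95 readout.
from collections import deque
from typing import Optional

_latencies = {
    "llm": deque(maxlen=50),     # keeps only the last 50 samples
    "vision": deque(maxlen=50),
}

def record_latency(job_type: str, latency_ms: float) -> None:
    _latencies[job_type].append(latency_ms)

def percentile(job_type: str, q: float) -> Optional[float]:
    samples = sorted(_latencies[job_type])
    if not samples:
        return None
    idx = min(int(q * len(samples)), len(samples) - 1)
    return samples[idx]

# percentile("llm", 0.50) -> p50_ms; percentile("llm", 0.95) -> p95_ms
```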
### Router (model_select v3)
- **NEW** `score_candidate()` function: wait + model_latency + cross_node_penalty + prefer_bonus
- Selection uses scoring instead of simple local-first ordering
- `LOCAL_THRESHOLD_MS = 250`: prefer local if within threshold of remote
- `ModelSelection.score` field added
- Structured log format: `[score] agent=X type=Y chosen=LOCAL:node/model score=N`
### Tests
- 12 scoring tests (local wins, remote wins, exclude, breaker, type filter, prefer list, cross penalty, wait, threshold)
- 7 NCS metrics tests (latency stats, cpu load, mem pressure, node load, runtime load)
### No Breaking Changes
- JobRequest/JobResponse envelope unchanged
- Existing capabilities fields preserved
- All new fields are optional/additive
---
## v0.2 — P2.2+P2.3 NATS Offload (2026-02-26)
- Node Worker service (NATS offload executor)
- offload_client.py (circuit breaker, retries, deadline)
- model_select with exclude_nodes + force_local
- Router /infer remote offload path
## v0.1 — P2 Global Capabilities (2026-02-26)
- Node Capabilities Service (NCS) on each node
- global_capabilities_client.py (NATS scatter-gather discovery)
- model_select v2 (multi-node aware)
- NATS wildcard discovery: node.*.capabilities.get
## v0.0 — P1 NCS-first Selection (2026-02-26)
- capabilities_client.py (single-node HTTP)
- model_select v1 (profile → NCS → static fallback)
- Grok API integration fix

ops/runbook_p3_1.md (new file)

@@ -0,0 +1,77 @@
# P3.1 — GPU/Queue-aware Routing Runbook
## What Changed
NCS now exposes **runtime health and load metrics** alongside model inventory.
Router uses a **scoring function** to pick the fastest node+model combo.
Node-worker reports latencies back to NCS for p50/p95 calculation.
## Verification Commands
### 1. NCS capabilities with load metrics
```bash
curl -s http://127.0.0.1:8099/capabilities | jq '.node_load'
```
Expected: `inflight_jobs`, `estimated_wait_ms`, `cpu_load_1m`, `mem_pressure`
### 2. Runtime load (p50/p95)
```bash
curl -s http://127.0.0.1:8099/capabilities | jq '.runtime_load'
```
Expected: per-runtime `p50_ms`, `p95_ms` after some traffic
### 3. Node-worker metrics
```bash
curl -s http://127.0.0.1:8109/metrics | jq
```
Expected: `inflight_jobs`, `concurrency_limit`, `last_latencies_llm`
### 4. NATS capabilities (includes metrics)
```bash
nats req node.node2.capabilities.get '{}'
```
### 5. Router scoring logs
```bash
docker logs dagi-router-node2 2>&1 | grep '\[score\]'
```
Expected: `chosen=LOCAL:nodeX/modelY score=NNN`
### 6. Report latency manually
```bash
curl -s -X POST http://127.0.0.1:8099/capabilities/report_latency \
-H "Content-Type: application/json" \
-d '{"runtime":"ollama","type":"llm","latency_ms":450}'
```
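The worker's fire-and-forget report sends the same payload. A hypothetical sketch; the `NCS_URL` variable, thread-based dispatch, and use of `requests` are assumptions, not the worker's actual code:

```python
# Hypothetical fire-and-forget latency reporter. Errors are swallowed
# on purpose: a slow or unreachable NCS must never block a job.
import os
import threading
import requests

NCS_URL = os.environ.get("NCS_URL", "http://127.0.0.1:8099")

def report_latency(runtime: str, job_type: str, latency_ms: float) -> None:
    payload = {"runtime": runtime, "type": job_type, "latency_ms": latency_ms}

    def _send() -> None:
        try:
            requests.post(f"{NCS_URL}/capabilities/report_latency",
                          json=payload, timeout=2)
        except requests.RequestException:
            pass  # best effort only

    threading.Thread(target=_send, daemon=True).start()
```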
## Scoring Formula
```
score = wait + model_latency + cross_node_penalty + prefer_bonus
wait = node_load.estimated_wait_ms (0 if idle)
model_latency = model_p50_ms or runtime p50_ms or 1500 (default)
cross_node_penalty = 0 if local, else rtt_ms * 2
prefer_bonus = -1000 for first prefer match, -900 for second, etc.
```
If `best_local_score <= best_remote_score + LOCAL_THRESHOLD_MS` (250 ms), the router prefers local.
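Taken together, a minimal Python sketch of a `score_candidate()` that follows this formula. The dict shapes, parameter list, and flattened `runtime_load` lookup are illustrative assumptions, not the router's actual code:

```python
# Hypothetical scorer implementing the formula above (lower is better).
LOCAL_THRESHOLD_MS = 250
DEFAULT_MODEL_LATENCY_MS = 1500

def score_candidate(node: dict, model: dict, is_local: bool,
                    prefer: list) -> float:
    load = node.get("node_load") or {}
    wait = load.get("estimated_wait_ms") or 0
    # Best latency estimate available, falling back to the default.
    model_latency = (model.get("p50_ms")
                     or (node.get("runtime_load") or {}).get("p50_ms")
                     or DEFAULT_MODEL_LATENCY_MS)
    cross_node_penalty = 0 if is_local else (load.get("rtt_ms_to_hub") or 0) * 2
    name = model.get("name")
    # -1000 for the first prefer match, -900 for the second, and so on.
    prefer_bonus = -1000 + 100 * prefer.index(name) if name in prefer else 0
    return wait + model_latency + cross_node_penalty + prefer_bonus

# Tie-break: pick local when
#   best_local_score <= best_remote_score + LOCAL_THRESHOLD_MS
```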
## Estimated Wait Formula
```
if inflight_jobs < concurrency_limit:
    estimated_wait_ms = 0
else:
    estimated_wait_ms = (inflight_jobs - concurrency_limit + 1) * p50_ms
```
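Worked example: with `inflight_jobs=5`, `concurrency_limit=4`, and `p50_ms=800`, the node is one job over its limit, so `estimated_wait_ms = (5 - 4 + 1) * 800 = 1600`.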
## Troubleshooting
| Symptom | Check | Fix |
|---------|-------|-----|
| NCS shows `p50=null` | No traffic yet | Send test requests |
| `estimated_wait_ms` always 0 | Inflight < limit | Expected if not saturated |
| `mem_pressure=null` | Container lacks `memory_pressure` | Expected in Docker |
| Scoring always picks local | Remote score higher | Check remote rtt/wait |
| Node-worker latencies empty | NCS can't reach worker | Check `NODE_WORKER_URL` env |