Files
microdao-daarion/services/node-worker/idempotency.py
Apple c4b94a327d P2.2+P2.3: NATS offload node-worker + router offload integration
Node Worker (services/node-worker/):
- NATS subscriber for node.{NODE_ID}.llm.request / vision.request
- Canonical JobRequest/JobResponse envelope (Pydantic)
- Idempotency cache (TTL 10min) with inflight dedup
- Deadline enforcement (DEADLINE_EXCEEDED on expired jobs)
- Concurrency limiter (semaphore, returns busy)
- Ollama + Swapper vision providers
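The deadline-enforcement behavior listed above can be sketched as follows. This is a minimal illustration, not the worker's actual code: the field names (`job_id`, `payload`, `deadline`) and the helper `check_deadline` are hypothetical, and the real `JobRequest` is a Pydantic model rather than a dataclass.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class JobRequest:
    # Hypothetical envelope fields; the real JobRequest is a Pydantic model.
    job_id: str
    payload: dict
    deadline: Optional[float] = None  # absolute deadline, epoch seconds

def check_deadline(req: JobRequest, now: Optional[float] = None) -> Optional[str]:
    """Return "DEADLINE_EXCEEDED" if the job's deadline has passed, else None."""
    now = now if now is not None else time.time()
    if req.deadline is not None and now > req.deadline:
        return "DEADLINE_EXCEEDED"
    return None
```

The worker would run this check before dispatching to a provider, so jobs that expired while queued in NATS are rejected without wasting inference time.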

Router offload (services/router/offload_client.py):
- NATS req/reply with configurable retries
- Circuit breaker per node+type (3 fails/60s → open 120s)
- Concurrency semaphore for remote requests
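A per-(node, type) breaker with the stated thresholds (3 failures within 60 s opens the circuit for 120 s) could look like the sketch below. The class and method names are assumptions for illustration; the actual `offload_client.py` implementation is not shown here.

```python
import time
from collections import defaultdict

FAIL_THRESHOLD = 3     # failures needed to trip the breaker
FAIL_WINDOW = 60.0     # seconds: failures must occur within this window
OPEN_DURATION = 120.0  # seconds: how long the circuit stays open

class CircuitBreaker:
    """Per-(node, job_type) breaker: 3 failures within 60s opens it for 120s."""

    def __init__(self):
        self._failures = defaultdict(list)  # key -> recent failure timestamps
        self._open_until = {}               # key -> epoch when circuit re-closes

    def is_open(self, node: str, job_type: str, now: float = None) -> bool:
        now = now if now is not None else time.time()
        return now < self._open_until.get((node, job_type), 0.0)

    def record_failure(self, node: str, job_type: str, now: float = None):
        now = now if now is not None else time.time()
        key = (node, job_type)
        # Keep only failures still inside the sliding window, then add this one.
        window = [t for t in self._failures[key] if now - t < FAIL_WINDOW]
        window.append(now)
        self._failures[key] = window
        if len(window) >= FAIL_THRESHOLD:
            self._open_until[key] = now + OPEN_DURATION
            self._failures[key] = []

    def record_success(self, node: str, job_type: str):
        self._failures.pop((node, job_type), None)
```

Keying on (node, type) means an open `llm` circuit on one node does not block `vision` requests to the same node or `llm` requests to other nodes.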

Model selection (services/router/model_select.py):
- exclude_nodes parameter for circuit-broken nodes
- force_local flag for fallback re-selection
- Integrated circuit breaker state awareness
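How `exclude_nodes` and `force_local` interact can be sketched as below. The signature and the `"local"` sentinel are hypothetical simplifications; the real `model_select.py` presumably scores candidates rather than taking the first match.

```python
from typing import Iterable, Optional

def select_node(candidates: Iterable[str],
                exclude_nodes: frozenset = frozenset(),
                force_local: bool = False) -> Optional[str]:
    """Pick the first eligible node.

    exclude_nodes drops circuit-broken nodes from consideration;
    force_local restricts the choice to the local node for fallback re-selection.
    """
    for node in candidates:
        if node in exclude_nodes:
            continue
        if force_local and node != "local":
            continue
        return node
    return None  # nothing eligible
```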

Router /infer pipeline:
- Remote offload path when NCS selects remote node
- Automatic fallback: exclude failed node → force_local re-select
- Deadline propagation from router to node-worker
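The fallback flow above can be sketched end-to-end. All names here (`infer_with_fallback`, the injected `offload`, `local_infer`, and `breaker_open` callables) are hypothetical stand-ins for the router's actual pipeline:

```python
def infer_with_fallback(candidates, offload, local_infer, breaker_open):
    """Sketch of the /infer fallback: skip circuit-broken nodes, try remote
    offload, and on failure exclude the failed node and fall back to local."""
    excluded = set()
    for node in candidates:
        if node == "local":
            return local_infer()
        if node in excluded or breaker_open(node):
            continue
        try:
            return offload(node)
        except Exception:
            excluded.add(node)  # exclude the failed node, try the next candidate
    return local_infer()  # force_local fallback when no remote node succeeds
```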

Tests: 17 unit tests (idempotency, deadline, circuit breaker)
Docs: ops/offload_routing.md (subjects, envelope, verification)
Made-with: Cursor
2026-02-27 02:44:05 -08:00


"""Idempotency cache + inflight dedup for job execution."""
import asyncio
import logging
import time
from typing import Dict, Optional, Tuple
from models import JobResponse
logger = logging.getLogger("idempotency")
CACHE_TTL = 600 # 10 min for successful results
TIMEOUT_TTL = 30 # 30s for timeout results
class IdempotencyStore:
def __init__(self, max_size: int = 10_000):
self._cache: Dict[str, Tuple[JobResponse, float]] = {}
self._inflight: Dict[str, asyncio.Future] = {}
self._max_size = max_size
def get(self, key: str) -> Optional[JobResponse]:
entry = self._cache.get(key)
if not entry:
return None
resp, expires = entry
if time.time() > expires:
self._cache.pop(key, None)
return None
cached = resp.model_copy()
cached.cached = True
return cached
def put(self, key: str, resp: JobResponse):
ttl = TIMEOUT_TTL if resp.status == "timeout" else CACHE_TTL
self._cache[key] = (resp, time.time() + ttl)
self._evict_if_needed()
def _evict_if_needed(self):
if len(self._cache) <= self._max_size:
return
now = time.time()
expired = [k for k, (_, exp) in self._cache.items() if now > exp]
for k in expired:
self._cache.pop(k, None)
if len(self._cache) > self._max_size:
oldest = sorted(self._cache, key=lambda k: self._cache[k][1])
for k in oldest[:len(self._cache) - self._max_size]:
self._cache.pop(k, None)
async def acquire_inflight(self, key: str) -> Optional[asyncio.Future]:
"""If another coroutine is already processing this key, return its future.
Otherwise register this coroutine as the processor and return None."""
if key in self._inflight:
return self._inflight[key]
fut: asyncio.Future = asyncio.get_event_loop().create_future()
self._inflight[key] = fut
return None
def complete_inflight(self, key: str, resp: JobResponse):
fut = self._inflight.pop(key, None)
if fut and not fut.done():
fut.set_result(resp)
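The inflight-dedup pattern used by `acquire_inflight` / `complete_inflight` is worth seeing in motion. The self-contained demo below (names and the simulated-work delay are invented for illustration) shows the invariant: for concurrent requests with the same key, exactly one caller does the work and the rest await its future.

```python
import asyncio

async def dedup_demo():
    """First caller for a key does the work; concurrent callers await its future."""
    inflight: dict = {}  # key -> asyncio.Future
    calls = 0

    async def run_job(key: str) -> str:
        nonlocal calls
        if key in inflight:
            return await inflight[key]  # duplicate: wait for the first caller
        fut = asyncio.get_running_loop().create_future()
        inflight[key] = fut             # register as the processor for this key
        calls += 1
        await asyncio.sleep(0.01)       # simulated work
        result = f"result-for-{key}"
        inflight.pop(key)
        fut.set_result(result)          # wake up every waiting duplicate
        return result

    results = await asyncio.gather(*(run_job("k1") for _ in range(5)))
    return results, calls
```

Because asyncio is single-threaded and there is no `await` between the membership check and the future registration, the check-then-register sequence is effectively atomic, which is why `IdempotencyStore` needs no lock.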