
NODA2 Audit Report

Date: 2026-02-27
Node: MacBook Pro M4 Max (Apple Silicon)
Auditor: Sofiia (Cursor session)


Executive Summary

NODA2 is a MacBook Pro M4 Max with 64 GB RAM running the full DAARION.city dev stack. The NATS leafnode is connected to NODA1 (rtt=58ms). The core stack (router, gateway, memory, swapper, qdrant) is healthy. Critical gap: the vision pipeline is broken (/vision/models is empty, qwen3-vl:8b is not installed). Sofiia controls NODA1 via an SSH root password, which is a SECURITY risk. node-ops-worker is not implemented.


Part A — Runtime Inventory

Hardware

| Parameter | Value |
|---|---|
| CPU | Apple M4 Max |
| RAM | 64 GB (unified) |
| Storage free | 634 GB / 1.8 TB |
| OS | macOS 26.3 (Darwin arm64) |

Docker Containers (12)

| Container | Port | Status |
|---|---|---|
| dagi-router-node2 | 9102→8000 | healthy |
| dagi-gateway-node2 | 9300 | healthy (14 agents) |
| dagi-nats-node2 | 4222, 8222 | running (leafnode→NODA1) |
| dagi-memory-service-node2 | 8000 | healthy |
| dagi-qdrant-node2 | 6333-6334 | healthy |
| swapper-service-node2 | 8890 | healthy |
| dagi-postgres-node2 | 5433→5432 | healthy |
| dagi-neo4j-node2 | 7474, 7687 | healthy |
| sofiia-console | 8002 | ⚠️ running (no healthcheck) |
| open-webui | 8080 | healthy (v0.7.2) |
| dagi-postgres | 5432 | healthy |
| dagi-redis | 6379 | healthy |

Non-Docker Services

| Process | Port | Description |
|---|---|---|
| ollama | 11434 | System daemon, 11 models |
| llama-server | 11435 | llama.cpp server, Qwen3.5-35B-A3B |
| gitea | 3000 | Self-hosted Git (v1.25.3) |
| spacebot | 19898 | Telegram bot → sofiia-console BFF |
| opencode | 3456 | OpenCode.app AI coding tool |

NATS Leafnode Status

```
NODA2 (spoke) ──58ms──> NODA1 144.76.224.179:7422 (hub)
Cross-node pub/sub: PASS (node.test.hello confirmed)
```
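The leafnode link can be re-verified at any time through the NATS HTTP monitoring port (8222). A minimal sketch, assuming the standard `/leafz` monitoring endpoint and its `leafs` array (field names per NATS monitoring docs; adjust for your server version). The helper only parses the response, demonstrated here on a sample payload:

```python
import json
from urllib.request import urlopen

def leafnode_summary(leafz: dict) -> str:
    """Summarize a NATS /leafz monitoring response."""
    leafs = leafz.get("leafs") or []
    if not leafs:
        return "no leafnode connections"
    first = leafs[0]
    return f'{len(leafs)} leafnode(s), first remote: {first.get("ip", "?")}:{first.get("port", "?")}'

# Live check (run on NODA2):
# leafz = json.load(urlopen("http://localhost:8222/leafz"))

# Sample payload shaped like a /leafz response (illustrative values):
sample = {"leafnodes": 1, "leafs": [{"ip": "144.76.224.179", "port": 7422}]}
print(leafnode_summary(sample))  # → 1 leafnode(s), first remote: 144.76.224.179:7422
```

An empty `leafs` array on NODA2 would mean the spoke has lost its link to the NODA1 hub.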

Qdrant Collections

| Collection | Points |
|---|---|
| sofiia_messages | 2 |
| sofiia_docs | 0 |
| sofiia_memory_items | 0 |
| sofiia_user_context | 0 |
| memories | 0 |
| messages | 0 |
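The near-empty collections (cf. PART-04 below) can be confirmed via Qdrant's REST API. A sketch assuming the standard `GET /collections/{name}` endpoint; the counting logic is demonstrated on a sample response rather than a live call:

```python
import json
from urllib.request import urlopen

def points_count(info: dict) -> int:
    # Qdrant GET /collections/{name} returns {"result": {"points_count": N, ...}}
    return info["result"]["points_count"]

# Live check (on NODA2):
# info = json.load(urlopen("http://localhost:6333/collections/sofiia_messages"))

# Sample response shaped like Qdrant's collection info:
sample = {"result": {"status": "green", "points_count": 2}}
print(points_count(sample))  # → 2
```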

Part B — Sofiia Agent Inventory

Registry Entry

agent_id: sofiia
display_name: Sophia
class: top_level
canonical_role: Chief AI Architect & Technical Sovereign
visibility: private
telegram: enabled (whitelist)
prompt_file: gateway-bot/sofiia_prompt.txt (1579 lines)

Runtime

  • Gateway: dagi-gateway-node2:9300 — registered alongside 13 other agents
  • Router: dagi-router-node2:9102 — NODE_ID=NODA2, nats_connected=true
  • Console UI: http://localhost:8002 — Python process, HTML UI
    • NODES_NODA1_ROUTER_URL=http://144.76.224.179:9102
    • NODES_NODA2_ROUTER_URL=http://router:8000
    • ROUTER_URL=http://router:8000
    • ⚠️ NODES_NODA1_SSH_PASSWORD=[secret present] — SECURITY RISK
  • Spacebot: Telegram → sofiia-console BFF (http://localhost:8002/api)

Node Control (current state)

  • Mechanism: SSH as root with a password (in the sofiia-console env)
  • What exists: NODES_NODA1_SSH_PASSWORD in the Docker env
  • What's missing: NATS node-ops-worker, command allowlist, audit log

Part C — Models Audit

Ollama (port 11434)

| Model | Type | Size | Status |
|---|---|---|---|
| qwen3.5:35b-a3b | LLM (MoE) | 9.3 GB | available (14.8B active) |
| qwen3:14b | LLM | 9.3 GB | available |
| gemma3:latest | LLM | 3.3 GB | available |
| glm-4.7-flash:32k | LLM | 19 GB | available (32k context) |
| glm-4.7-flash:q4_K_M | LLM | 19 GB | available |
| llava:13b | Vision | 8.0 GB | available (LLaVA+CLIP) |
| mistral-nemo:12b | LLM | 7.1 GB | available |
| deepseek-coder:33b | Code | 18 GB | available |
| deepseek-r1:70b | LLM (Reasoning) | 42 GB | available |
| starcoder2:3b | Code | 1.7 GB | available |
| phi3:latest | LLM | 2.2 GB | available |
| gpt-oss:latest | LLM | 13 GB | available |
| qwen3-vl:8b | Vision | ~8 GB | NOT INSTALLED |

llama-server (port 11435, llama.cpp)

| Model | Type | Note |
|---|---|---|
| Qwen3.5-35B-A3B-Q4_K_M.gguf | LLM | Same as Ollama qwen3.5:35b-a3b — DUPLICATE |

Swapper (port 8890)

| Endpoint | Status |
|---|---|
| /health | {"status":"healthy","active_model":"qwen3-14b"} |
| /models | 200 (9 configured, 1 loaded) |
| /vision/models | ⚠️ 200 but empty list |
| /stt/models | 200 |
| /tts/models | 200 |
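A quick sweep over the swapper model endpoints makes the empty `/vision/models` response easy to spot. A sketch, assuming each endpoint returns a JSON list of model names (the actual swapper response schema may differ); the flagging logic is shown on sample responses:

```python
import json
from urllib.request import urlopen

ENDPOINTS = ["/models", "/vision/models", "/stt/models", "/tts/models"]

def flag_empty(responses: dict) -> list:
    """Return endpoints that answered with an empty model list."""
    return [ep for ep, models in responses.items() if not models]

# Live sweep (on NODA2):
# responses = {ep: json.load(urlopen(f"http://localhost:8890{ep}")) for ep in ENDPOINTS}

# Sample responses mirroring the audit above:
sample = {
    "/models": ["qwen3:14b"],
    "/vision/models": [],
    "/stt/models": ["whisper"],
    "/tts/models": ["xtts"],
}
print(flag_empty(sample))  # → ['/vision/models']
```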

Swapper models configured (from swapper_config_node2.yaml):

  • Loaded: qwen3:14b
  • Unloaded: gpt-oss, phi3, qwen3.5:35b-a3b, glm-4.7-flash, deepseek-coder:33b, deepseek-r1:70b
  • ⚠️ NOT installed: gemma2:27b, qwen2.5-coder:32b

Capabilities Summary

vision_models:    [llava:13b]          # legacy, available; qwen3-vl:8b recommended
text_models:      [qwen3.5:35b-a3b, qwen3:14b, glm-4.7-flash, gemma3, mistral-nemo, deepseek-r1:70b]
code_models:      [deepseek-coder:33b, starcoder2:3b]
embedding_models: [unknown - check memory-service]
stt_models:       [whisper via swapper - details TBD]
tts_models:       [xtts/coqui via swapper - details TBD]

Part D — Findings

P0 — Immediate

| ID | Issue |
|---|---|
| FAIL-01 | Vision pipeline broken: /vision/models=[], qwen3-vl:8b not installed, llava:13b not in swapper config |

P1 — This week

| ID | Issue |
|---|---|
| FAIL-02 | node-ops-worker not implemented — Sofiia controls NODA1 via an SSH root password |
| FAIL-03 | router-config.yml: 172.17.0.1:11434 (Linux bridge) — should be host.docker.internal:11434 |
| SEC-01 | SSH password in the sofiia-console Docker env |
| SEC-03 | NODA2 ports bound to 0.0.0.0 with no firewall |

P2 — Next sprint

| ID | Issue |
|---|---|
| PART-03 | llama-server:11435 duplicates Ollama — waste of memory |
| PART-04 | Qdrant memory collections empty — memory/RAG is not in use |
| PART-02 | Swapper config lists gemma2:27b and qwen2.5-coder:32b — not installed |
| - | Cross-node vision routing NODA1→NODA2 via NATS not implemented |

Recommended Action Plan

Step 1 (P0): Fix vision

```
# On NODA2:
ollama pull qwen3-vl:8b

# Add a vision section to swapper_config_node2.yaml:
# vision_models:
#   - name: qwen3-vl:8b
#     type: vision
#     priority: high
```
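After the pull, the install can be verified against the Ollama tags API. A sketch assuming the standard `GET /api/tags` response shape (`{"models": [{"name": ...}]}`), demonstrated on a sample payload:

```python
import json
from urllib.request import urlopen

def has_model(tags: dict, name: str) -> bool:
    # Ollama GET /api/tags returns {"models": [{"name": "..."}]}
    return any(m["name"] == name for m in tags.get("models", []))

# Live check (on NODA2):
# tags = json.load(urlopen("http://localhost:11434/api/tags"))

sample = {"models": [{"name": "qwen3-vl:8b"}, {"name": "llava:13b"}]}
print(has_model(sample, "qwen3-vl:8b"))  # → True
```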

Step 2 (P1): Fix router-config.yml

```
sed -i '' 's|http://172.17.0.1:11434|http://host.docker.internal:11434|g' router-config.yml
docker restart dagi-router-node2
```
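A small guard can confirm the substitution took and no Linux-bridge address is left behind. A sketch; the check is shown on an inline sample rather than the real file:

```python
def bridge_refs(config_text: str) -> list:
    """Return config lines still pointing at the Docker Linux bridge (172.17.0.1)."""
    return [line.strip() for line in config_text.splitlines() if "172.17.0.1" in line]

# Live check: bridge_refs(open("router-config.yml").read()) should return []

sample = "ollama_url: http://host.docker.internal:11434\n"
print(bridge_refs(sample))  # → []
```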

Step 3 (P1): Implement node-ops-worker

```
services/node_ops_worker/
  main.py      — NATS subscriber
  allowlist.py — command allowlist
  metrics.py   — ops_requests_total, ops_errors_total
Subjects:
  node.noda1.ops.request / node.noda2.ops.request
```
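The allowlist half of the worker can be sketched independently of the NATS wiring. A minimal sketch: the `ALLOWED` set is hypothetical, and the `serve` coroutine assumes the nats-py client; only the subject name comes from the layout above.

```python
import asyncio
import json
import shlex
import subprocess

# Hypothetical allowlist — the real one would live in allowlist.py
ALLOWED = {"uptime", "df"}

def is_allowed(command: str) -> bool:
    """Permit only commands whose executable is on the allowlist."""
    parts = shlex.split(command)
    return bool(parts) and parts[0] in ALLOWED

async def serve():
    # Assumed nats-py usage; subject mirrors the layout above
    import nats
    nc = await nats.connect("nats://localhost:4222")

    async def handle(msg):
        req = json.loads(msg.data)
        if not is_allowed(req.get("command", "")):
            await msg.respond(b'{"error": "command not in allowlist"}')
            return
        out = subprocess.run(shlex.split(req["command"]), capture_output=True, text=True)
        await msg.respond(json.dumps({"stdout": out.stdout}).encode())

    await nc.subscribe("node.noda2.ops.request", cb=handle)
    await asyncio.Event().wait()  # run forever

# To run on NODA2: asyncio.run(serve())

print(is_allowed("uptime -p"))  # → True
print(is_allowed("rm -rf /"))  # → False
```

Denied commands are answered rather than dropped, so the caller on NODA1 gets an explicit refusal it can log.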

Step 4 (P1): Remove the SSH password from env

```
# Remove NODES_NODA1_SSH_PASSWORD from docker-compose.node2.yml
# Add SSH key-based auth or switch to NATS node-ops
```

Canonical NODA2 Endpoints (for NODA1 routing)

| Service | Internal | Via NATS |
|---|---|---|
| Ollama LLM | http://host.docker.internal:11434 | node.noda2.llm.request (TBD) |
| Ollama Vision | http://host.docker.internal:11434 | node.noda2.vision.request (TBD) |
| Swapper | http://host.docker.internal:8890 | node.noda2.swapper.request (TBD) |
| Router | http://host.docker.internal:9102 | via NATS messaging |