Router (main.py):
- When DSML is detected in the 2nd LLM response after tool execution,
make a 3rd LLM call with an explicit synthesis prompt instead of
returning raw tool results to the user
- Falls back to format_tool_calls_for_response only if 3rd call fails
Router (tool_manager.py):
- Added _strip_think_tags() helper for <think>...</think> removal
from DeepSeek reasoning artifacts
Gateway (http_api.py):
- Strip <think>...</think> tags before sending to Telegram
- Strip DSML/XML-like markup (function_calls, invoke, parameter tags)
- Ensure empty text after stripping gets "..." fallback
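The Gateway-side sanitisation could look like this (the tag names come from this commit; the function name and exact regex are our sketch, not the actual implementation):

```python
import re

# Assumed tag set, per this commit: DSML/XML-like tool-call markup plus
# <think> reasoning blocks, none of which should reach Telegram.
_MARKUP_RE = re.compile(
    r"<think>.*?</think>"
    r"|</?(?:function_calls|invoke|parameter)[^>]*>",
    re.DOTALL | re.IGNORECASE,
)

def sanitize_for_telegram(text: str) -> str:
    cleaned = _MARKUP_RE.sub("", text).strip()
    # If stripping removed everything, fall back to "..." so Telegram
    # never receives an empty message body.
    return cleaned or "..."
```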
Deployed to NODE1 and verified services running.
Co-authored-by: Cursor <cursoragent@cursor.com>
- Fixed unquoted `helion` variable reference to string literal `"helion"`
in tool_manager.py search_memories fallback
- Replaced `[Контекст пам'яті]` with `[INTERNAL MEMORY - do NOT repeat
to user]` in all 3 injection points in main.py
- Verified: Senpai now responds without Helion contamination or memory
brief leaking
Tested and deployed on NODE1.
Co-authored-by: Cursor <cursoragent@cursor.com>
1. YAML structure bug: Senpai was in `policies:` instead of `agents:`
in router-config.yml. Router couldn't find Senpai config → no routing
rule → fallback to local model.
2. tool_manager agent_id not passed: memory_search and graph_query
tools were called without agent_id → defaulted to "helion" →
ALL agents' tool calls searched Helion's Qdrant collections.
Fixed: agent_id now flows from main.py → execute_tool → _memory_search.
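The shape of the bug and the fix can be sketched as follows (signatures and bodies are illustrative stand-ins, not the real ones):

```python
# Before (assumed shape): agent_id silently defaulted to "helion", so every
# agent's memory_search hit Helion's Qdrant collections.
def _memory_search_old(query: str, agent_id: str = "helion") -> str:
    return f"searching {agent_id}_memories for {query!r}"

# After: agent_id is threaded explicitly from main.py -> execute_tool ->
# _memory_search, so each agent queries its own collections.
def _memory_search(query: str, agent_id: str) -> str:
    return f"searching {agent_id}_memories for {query!r}"

def execute_tool(tool: str, args: dict, agent_id: str) -> str:
    if tool == "memory_search":
        return _memory_search(args["query"], agent_id=agent_id)
    raise ValueError(f"unknown tool: {tool}")
```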
3. Config not mounted: router-config.yml was baked into Docker image,
host changes had no effect. Added volume mount in docker-compose.
Also added:
- Sofiia agent config + routing rule (was completely missing)
- Senpai routing rule: cloud_deepseek (was falling to local qwen3:8b)
- Anti-echo instruction for memory brief injection
Deployed and verified on NODE1: Senpai now searches senpai_* collections.
Co-authored-by: Cursor <cursoragent@cursor.com>
Brand commands (~290 lines):
- Code was trapped inside `if reply_to_message:` block (unreachable)
- Moved to feature flag: ENABLE_BRAND_COMMANDS=true to activate
- Zero re-indentation: 8sp code naturally fits as feature flag body
- Helper functions (_brand_*, _artifact_*) unchanged
Memory LLM Summary:
- Replace placeholder with real DeepSeek API integration
- Structured output: summary, goals, decisions, open_questions, next_steps, key_facts
- Graceful fallback if API key not set or call fails
- Added MEMORY_DEEPSEEK_API_KEY config
- Ukrainian output language
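A hypothetical sketch of the "graceful fallback" shape: the structured keys match this commit, but the endpoint URL, model name, prompt, and helper name are assumptions:

```python
import json
import os
import urllib.request

# Keys per this commit's structured output.
_EMPTY = {"summary": "", "goals": [], "decisions": [],
          "open_questions": [], "next_steps": [], "key_facts": []}

def summarize_memory(text: str) -> dict:
    api_key = os.environ.get("MEMORY_DEEPSEEK_API_KEY")
    if not api_key:
        return dict(_EMPTY)          # no key configured: placeholder output
    try:
        req = urllib.request.Request(
            "https://api.deepseek.com/chat/completions",
            data=json.dumps({
                "model": "deepseek-chat",
                "messages": [{"role": "user", "content": f"Summarize:\n{text}"}],
            }).encode(),
            headers={"Authorization": f"Bearer {api_key}",
                     "Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            body = json.load(resp)
        return json.loads(body["choices"][0]["message"]["content"])
    except Exception:
        return dict(_EMPTY)          # API failure: degrade, don't crash
```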
Deployed and verified on NODE1.
Co-authored-by: Cursor <cursoragent@cursor.com>
1. thread_has_agent_participation (SOWA Priority 11):
- New function has_agent_chat_participation() in behavior_policy.py
- Checks if agent responded to ANY user in this chat within 30min
- When active + user asks question/imperative → agent responds
- Different from per-user conversation_context (Priority 12)
- Wired into both detect_explicit_request() and analyze_message()
2. ACK reply_to_message_id:
- When SOWA sends ACK ("NUTRA тут"), it now replies to the user's
message instead of sending a standalone message
- Better UX: visually linked to what the user wrote
- Uses allow_sending_without_reply=True for safety
Known issue (not fixed - too risky):
- Lines 1368-1639 in http_api.py are dead code (brand commands /бренд)
at incorrect indentation level (8 spaces, inside unreachable block)
- These commands never worked on NODE1, fixing 260 lines of indentation
carries regression risk — deferred to separate cleanup PR
Co-authored-by: Cursor <cursoragent@cursor.com>
When a user replies to an agent's message in Telegram groups,
it is now treated as a direct mention (SOWA FULL response).
Implementation:
- Detect reply_to_message.from.is_bot in Gateway webhook handler
- Verify bot_id matches this agent's token (multi-agent safe)
- Pass is_reply_to_agent=True to detect_explicit_request() and
analyze_message() (SOWA v2.2)
- Add is_reply_to_agent to Router metadata for analytics
SOWA already had Priority 3 logic for reply_to_agent → FULL,
it was just never wired up (had TODO placeholders with False).
Edge cases handled:
- Only triggers when reply is to THIS agent's bot (not other bots)
- Reply to forwarded messages: won't trigger (from.is_bot would be
the original sender, not the bot)
- Works alongside existing DM, mention, and training group rules
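The webhook-side check can be sketched as follows; the Telegram update fields (`reply_to_message`, `from.is_bot`, `from.id`) are real Bot API fields, but the helper name and dict-based handling are ours:

```python
def is_reply_to_this_agent(message: dict, this_bot_id: int) -> bool:
    """True only when the reply targets THIS agent's bot account."""
    replied = message.get("reply_to_message") or {}
    sender = replied.get("from") or {}
    # Must be a bot AND specifically this agent's bot (multi-agent safe).
    # For forwarded messages, `from` is the original sender, so a reply to
    # a forward won't match unless the original sender was this bot.
    return bool(sender.get("is_bot")) and sender.get("id") == this_bot_id
```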
Co-authored-by: Cursor <cursoragent@cursor.com>
CI:
- python-services-ci now only runs on main branch (not feature branches)
- Install deps with lock fallback (if lock file is stale, install without it)
Cursor rules:
- New project-context.mdc (alwaysApply: true) — gives AI full project
context immediately in every new chat
- Updated noda1-operations.mdc: alwaysApply: true, fixed container names
(dagi-router-node1, not dagi-staging-router)
This ensures that when opening a new Cursor chat in this workspace,
the AI already knows: project structure, NODE1 server details, all 13
agents, SSH credentials location, and key documentation paths.
Co-authored-by: Cursor <cursoragent@cursor.com>
SOWA fixes:
- Add Russian variants for all agents (сэнпай, хелион, друид, etc.)
- Add missing sofiia agent to AGENT_NAME_VARIANTS
- Add /senpai, /sofiia command prefixes
Vision denial fix (all 13 agents):
- Add explicit rule: "Never say you can't see/analyze images"
- Agents have Vision API via Swapper (qwen3-vl-8b)
- When vision model describes a photo, the follow-up text model (DeepSeek)
must not deny having seen it
Root cause: NUTRA correctly analyzed a photo via vision model, but when
asked a follow-up question, DeepSeek (text model) responded "I cannot
see images" because the system prompt lacked the denial prevention rule.
Co-authored-by: Cursor <cursoragent@cursor.com>
Complete snapshot of /opt/microdao-daarion/ from NODE1 (144.76.224.179).
This represents the actual running production code that has diverged
significantly from the previous main branch.
Key changes from old main:
- Gateway (http_api.py): expanded from ~40KB to 164KB with full agent support
- Router: new /v1/agents/{id}/infer endpoint with vision + DeepSeek routing
- Behavior Policy: SOWA v2.2 (3-level: FULL/ACK/SILENT)
- Agent Registry: config/agent_registry.yml as single source of truth
- 13 agents configured (was 3)
- Memory service integration
- CrewAI teams and roles
Excluded from snapshot: venv/, .env, data/, backups, .tgz archives
Co-authored-by: Cursor <cursoragent@cursor.com>
NODA1 agents now:
- Don't respond to broadcasts/posters/announcements without direct mention
- Don't respond to media (photo/link) without explicit question
- Keep responses short (1-2 sentences by default)
- No emoji, no "ready to help", no self-promotion
Added:
- behavior_policy.py: detect_directed_to_agent(), detect_broadcast_intent(), should_respond()
- behavior_policy_v1.txt: unified policy block for all prompts
- Pre-LLM check in http_api.py: skip Router call if should_respond=False
- NO_OUTPUT handling: don't send to Telegram if LLM returns empty
- Updated all 9 agent prompts with Behavior Policy v1
- Unit and E2E tests for 5 acceptance cases
- Added TRAINING_GROUP_IDS constant for Agent Preschool group
- Gateway now adds "[РЕЖИМ НАВЧАННЯ]" prefix for training groups
- Agents will respond to all messages in training groups
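The pre-LLM check could be sketched like this (function names are from this commit; the decision internals are heavily simplified stand-ins):

```python
# Simplified pre-LLM gate: skip the Router call entirely when the policy
# says not to respond. The detect_* bodies below are illustrative only.
def detect_directed_to_agent(text: str, agent_name: str) -> bool:
    return agent_name.lower() in text.lower()

def detect_broadcast_intent(text: str) -> bool:
    return text.startswith(("📢", "ANNOUNCEMENT:", "[broadcast]"))

def should_respond(text: str, agent_name: str, is_training_group: bool) -> bool:
    if is_training_group:
        return True  # agents answer everything in the Agent Preschool group
    if detect_broadcast_intent(text):
        return False
    return detect_directed_to_agent(text, agent_name)
```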
Co-authored-by: Cursor <cursoragent@cursor.com>
All agents now respond to all messages in the training group
"Agent Preschool Daarion.city" without requiring mentions.
Updated prompts: helion, daarwizz, greenfood, nutra, agromatrix, druid
Co-authored-by: Cursor <cursoragent@cursor.com>
Helion now only responds in groups when:
- Mentioned by name/username
- Asked a direct question about Energy Union
Previously, Helion responded to all messages in groups.
Co-authored-by: Cursor <cursoragent@cursor.com>
- Use stdlib urllib.request instead of requests library
- requests was not installed in the router image, causing healthcheck
to always fail with "ModuleNotFoundError: No module named 'requests'"
- Increase start_period to 30s and retries to 5 for stability
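A stdlib-only healthcheck might look like this (the URL and function name are assumptions; the point is that `urllib.request` ships with Python while `requests` does not):

```python
import urllib.request

def check_health(url: str, timeout: float = 5.0) -> bool:
    """Healthcheck using only the stdlib: the router image lacks `requests`."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False
```

In the compose healthcheck this would be invoked with something like `python -c`, exiting non-zero on failure so Docker marks the container unhealthy.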
Co-authored-by: Cursor <cursoragent@cursor.com>
- docker-compose.node1.yml: Add network aliases (router, gateway,
memory-service, qdrant, nats, neo4j) to eliminate manual
`docker network connect --alias` commands
- docker-compose.node1.yml: ROUTER_URL now uses env variable with
fallback: ${ROUTER_URL:-http://router:8000}
- docker-compose.node1.yml: Increase router healthcheck start_period
to 30s and retries to 5
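The relevant compose fragment might look like this (service and alias names are from this commit; the network name `dagi-net` and surrounding keys are assumptions, with unrelated settings elided):

```yaml
services:
  router:
    networks:
      dagi-net:
        aliases: [router]        # Gateway resolves http://router:8000
    healthcheck:
      start_period: 30s
      retries: 5
  gateway:
    environment:
      # env override with an in-network default
      ROUTER_URL: ${ROUTER_URL:-http://router:8000}
```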
- .gitignore: Add noda1-credentials.local.mdc (local-only SSH creds)
- scripts/node1/verify_agents.sh: Improved output with agent list
- docs: Add NODA1-AGENT-VERIFICATION.md, NODA1-AGENT-ARCHITECTURE.md,
NODA1-VERIFICATION-REPORT-2026-02-03.md
- config/README.md: How to add new agents
- .cursor/rules/, .cursor/skills/: NODA1 operations skill for Cursor
Root cause fixed: Gateway could not resolve 'router' DNS name when
Router container was named 'dagi-staging-router' without alias.
Co-authored-by: Cursor <cursoragent@cursor.com>
- When to connect agents: after infrastructure setup
- DAGI Router: ready for deployment on NODE1/NODE3
- Swapper Service: ready for deployment on NODE1/NODE3
- Logging: everything is recorded (GitHub, Gitea, GitLab)
- NODE1 check: clean, no incidents found
Recommended order of actions included.
- K8s deployment for DAGI Router (NODE1)
- K8s deployment for Swapper Service (NODE1)
- ConfigMaps for configurations
- Services (ClusterIP + NodePort)
- NATS JetStream integration
- Updated DEPLOYMENT-PLAN.md with concrete instructions
TODO: Create equivalents for NODE3
- Answers to questions about connecting agents
- Installation plan for DAGI Router on NODE1/NODE3
- Installation plan for Swapper Service on NODE1/NODE3
- Logging check (GitLab, Gitea, GitHub)
- NODE1 incident check (clean)
Status:
- DAGI Router: running on NODE2, needed on NODE1/NODE3
- Swapper Service: running on NODE2, needed on NODE1/NODE3
- Agents: connect after infrastructure setup
- Atomic generation of all secrets (generate-all-secrets.sh)
- Auth enforcement check (enforce-auth.sh)
- Updated full-flow test (must-pass)
- Prometheus alerting rules for the Memory Module
- Matrix alerts bridge (alerts into the ops room)
- Policy engine documentation for memory
Ready for production deployment!
- JWT middleware for FastAPI
- JWT token generation/verification
- Scripts for generating Qdrant API keys
- Scripts for generating NATS operator JWT
- Auth implementation plan
TODO: Add JWT to endpoints, NATS nkeys config, Qdrant API key config
- Automatic stream creation at worker startup
- Check for existing streams before creating them
- Support for all 4 streams (MM_ONLINE, MM_OFFLINE, MM_WRITE, MM_EVENTS)
This resolves the DNS issue in the K8s Job
- Matrix Client (connection and sync)
- RBAC Checker (permission checks via Postgres)
- Job Creator (creates jobs from commands)
- NATS Publisher (publishes jobs to streams)
- K8s deployment
- README with documentation
Commands: !embed, !retrieve, !summarize
TODO: Real integration with the Matrix homeserver, result statuses
- NATS runs in standalone mode (1 replica)
- Fixed server_name via initContainer
- Created a K8s Job for stream creation (via Python)
- Created the create-streams.py script
TODO: Create streams via worker-daemon or after fixing DNS in the Job
- Removed all passwords and API keys from documents
- Replaced them with references to Vault
- Closed the NodePort for Memory Service (internal only)
- Created SECURITY-ROTATION-PLAN.md
- Created ARCHITECTURE-150-NODES.md (plan for 150 nodes)
- Updated config.py (removed hardcoded Cohere key)
- NODE3 added to the K3s cluster as a worker (llm80-che-1-1)
- Memory Service runs in K8s on NODE3 (pod: memory-service-node3-*)
- Docker container stopped and removed
- Updated MEMORY-MODULE-STATUS.md v3.1.0
- NODE1: Memory Service in K8s (port 30800) ✅
- NODE2: Memory Service in Docker (port 8001) ✅
- NODE3: Memory Service in Docker (port 8001) ✅
- All nodes: Cohere API configured for embeddings ✅
- NODE2: ComfyUI verified (macOS App, port 8000) ✅
- Updated MEMORY-MODULE-STATUS.md v3.0.0