snapshot: NODE1 production state 2026-02-09

Complete snapshot of /opt/microdao-daarion/ from NODE1 (144.76.224.179).
This represents the actual running production code that has diverged
significantly from the previous main branch.

Key changes from old main:
- Gateway (http_api.py): expanded from ~40KB to 164KB with full agent support
- Router: new /v1/agents/{id}/infer endpoint with vision + DeepSeek routing
- Behavior Policy: SOWA v2.2 (3-level: FULL/ACK/SILENT)
- Agent Registry: config/agent_registry.yml as single source of truth
- 13 agents configured (was 3)
- Memory service integration
- CrewAI teams and roles

Excluded from snapshot: venv/, .env, data/, backups, .tgz archives

Co-authored-by: Cursor <cursoragent@cursor.com>
Author: Apple
Date: 2026-02-09 08:46:46 -08:00
parent 134c044c21
commit ef3473db21
9473 changed files with 408933 additions and 2769877 deletions


@@ -1,125 +0,0 @@
# E2E RAG Pipeline Test
End-to-end test of the full pipeline: PARSER → RAG → Router (Memory + RAG).
## Preparation
1. Start all the services:
```bash
docker-compose up -d parser-service rag-service router memory-service city-db
```
2. Verify that the services are running:
```bash
curl http://localhost:9400/health # PARSER
curl http://localhost:9500/health # RAG
curl http://localhost:9102/health # Router
curl http://localhost:8000/health # Memory
```
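The four checks above can be wrapped in one loop. This is a minimal sketch: the service labels are just for readable output, and the ports are the same ones mapped above.

```bash
# Probe each service's /health endpoint and print OK/FAIL per service.
for entry in "9400 PARSER" "9500 RAG" "9102 Router" "8000 Memory"; do
  set -- $entry  # $1 = port, $2 = label
  if curl -fsS --max-time 2 "http://localhost:$1/health" >/dev/null 2>&1; then
    echo "OK   $2"
  else
    echo "FAIL $2"
  fi
done
```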
## Test 1: Ingest Document
```bash
curl -X POST http://localhost:9400/ocr/ingest \
-F "file=@tests/fixtures/parsed_json_example.json" \
-F "dao_id=daarion" \
-F "doc_id=microdao-tokenomics-2025-11"
```
**Expected result:**
```json
{
"dao_id": "daarion",
"doc_id": "microdao-tokenomics-2025-11",
"pages_processed": 2,
"rag_ingested": true,
"raw_json": { ... }
}
```
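Follow-up steps can be gated on `rag_ingested` with jq. The payload below is a hand-written sample mirroring the expected result above; in practice it comes from the curl call.

```bash
# Sample ingest response (same field names as the expected result above)
INGEST_RESPONSE='{"dao_id":"daarion","doc_id":"microdao-tokenomics-2025-11","pages_processed":2,"rag_ingested":true}'
if [ "$(echo "$INGEST_RESPONSE" | jq -r '.rag_ingested')" = "true" ]; then
  echo "ingest ok: $(echo "$INGEST_RESPONSE" | jq -r '.doc_id')"
fi
```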
## Test 2: Query RAG Service Directly
```bash
curl -X POST http://localhost:9500/query \
-H "Content-Type: application/json" \
-d '{
"dao_id": "daarion",
"question": "Поясни токеноміку microDAO і роль стейкінгу"
}'
```
**Expected result:**
```json
{
"answer": "MicroDAO використовує токен μGOV...",
"citations": [
{
"doc_id": "microdao-tokenomics-2025-11",
"page": 1,
"section": "Токеноміка MicroDAO",
"excerpt": "..."
}
],
"documents": [...]
}
```
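The same checks the automated script performs can be sketched against a trimmed sample of this response (sample values only, taken from the expected result above):

```bash
# Sample RAG response trimmed to the fields the assertions use
RAG_RESPONSE='{"answer":"MicroDAO використовує токен μGOV...","citations":[{"doc_id":"microdao-tokenomics-2025-11","page":1}]}'
ANSWER=$(echo "$RAG_RESPONSE" | jq -r '.answer')
# "// []" guards against a null citations field
CITATIONS_COUNT=$(echo "$RAG_RESPONSE" | jq '.citations // [] | length')
echo "answer length=${#ANSWER}, citations=${CITATIONS_COUNT}"
```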
## Test 3: Query via Router (Memory + RAG)
```bash
curl -X POST http://localhost:9102/route \
-H "Content-Type: application/json" \
-d '{
"mode": "rag_query",
"dao_id": "daarion",
"user_id": "test-user",
"payload": {
"question": "Поясни токеноміку microDAO і роль стейкінгу"
}
}'
```
**Expected result:**
```json
{
"ok": true,
"provider_id": "llm_local_qwen3_8b",
"data": {
"text": "Відповідь з урахуванням Memory + RAG...",
"citations": [...]
},
"metadata": {
"memory_used": true,
"rag_used": true,
"documents_retrieved": 5,
"citations_count": 3
}
}
```
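A quick sanity check on the Router envelope can combine the boolean flags with jq. The payload below is a hand-written sample following the expected result above:

```bash
# Sample Router envelope trimmed to the flags being checked
ROUTER_RESPONSE='{"ok":true,"data":{"text":"..."},"metadata":{"memory_used":true,"rag_used":true,"citations_count":3}}'
# true only when the call succeeded AND RAG was actually used
OK=$(echo "$ROUTER_RESPONSE" | jq '.ok and .metadata.rag_used')
echo "router ok+rag: $OK"
```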
## Automated E2E Test
Run the script:
```bash
./tests/e2e_rag_pipeline.sh
```
The script runs all three steps automatically.
## Troubleshooting
### RAG Service cannot find documents
- Verify the document was indexed successfully: `rag_ingested: true`
- Check the RAG Service logs: `docker-compose logs rag-service`
- Verify that `dao_id` matches between ingest and query
### Router returns an error
- Verify that `mode="rag_query"` is handled correctly
- Check the Router logs: `docker-compose logs router`
- Verify that the RAG and Memory services are reachable from the Router
### Memory context is empty
- Verify that the Memory Service is running
- Verify that `user_id` and `dao_id` are correct
- Memory may be empty for a new user (this is expected)


@@ -1,101 +0,0 @@
#!/bin/bash
# E2E test script for RAG pipeline: ingest → query
set -e
PARSER_URL="${PARSER_URL:-http://localhost:9400}"
RAG_URL="${RAG_URL:-http://localhost:9500}"
ROUTER_URL="${ROUTER_URL:-http://localhost:9102}"
echo "=== E2E RAG Pipeline Test ==="
echo ""
# Step 1: Ingest document
echo "Step 1: Ingesting document via /ocr/ingest..."
INGEST_RESPONSE=$(curl -s -X POST "${PARSER_URL}/ocr/ingest" \
-F "file=@tests/fixtures/parsed_json_example.json" \
-F "dao_id=daarion" \
-F "doc_id=microdao-tokenomics-2025-11")
echo "Ingest response:"
echo "$INGEST_RESPONSE" | jq '.'
echo ""
DOC_ID=$(echo "$INGEST_RESPONSE" | jq -r '.doc_id')
RAG_INGESTED=$(echo "$INGEST_RESPONSE" | jq -r '.rag_ingested')
if [ "$RAG_INGESTED" != "true" ]; then
echo "ERROR: Document was not ingested into RAG"
exit 1
fi
echo "✓ Document ingested: doc_id=${DOC_ID}"
echo ""
# Step 2: Query via RAG Service directly
echo "Step 2: Querying RAG Service directly..."
RAG_QUERY_RESPONSE=$(curl -s -X POST "${RAG_URL}/query" \
-H "Content-Type: application/json" \
-d '{
"dao_id": "daarion",
"question": "Поясни токеноміку microDAO і роль стейкінгу"
}')
echo "RAG query response:"
echo "$RAG_QUERY_RESPONSE" | jq '.'
echo ""
ANSWER=$(echo "$RAG_QUERY_RESPONSE" | jq -r '.answer')
CITATIONS_COUNT=$(echo "$RAG_QUERY_RESPONSE" | jq '.citations // [] | length')
if [ -z "$ANSWER" ] || [ "$ANSWER" == "null" ]; then
echo "ERROR: Empty answer from RAG Service"
exit 1
fi
if [ "$CITATIONS_COUNT" -eq 0 ]; then
echo "WARNING: No citations returned"
fi
echo "✓ RAG query successful: answer length=${#ANSWER}, citations=${CITATIONS_COUNT}"
echo ""
# Step 3: Query via Router (mode="rag_query")
echo "Step 3: Querying via Router (mode=rag_query)..."
ROUTER_QUERY_RESPONSE=$(curl -s -X POST "${ROUTER_URL}/route" \
-H "Content-Type: application/json" \
-d '{
"mode": "rag_query",
"dao_id": "daarion",
"user_id": "test-user",
"payload": {
"question": "Поясни токеноміку microDAO і роль стейкінгу"
}
}')
echo "Router query response:"
echo "$ROUTER_QUERY_RESPONSE" | jq '.'
echo ""
ROUTER_OK=$(echo "$ROUTER_QUERY_RESPONSE" | jq -r '.ok')
ROUTER_TEXT=$(echo "$ROUTER_QUERY_RESPONSE" | jq -r '.data.text // .data.answer // ""')
ROUTER_CITATIONS=$(echo "$ROUTER_QUERY_RESPONSE" | jq '.data.citations // .metadata.citations // []')
if [ "$ROUTER_OK" != "true" ]; then
echo "ERROR: Router query failed"
exit 1
fi
if [ -z "$ROUTER_TEXT" ] || [ "$ROUTER_TEXT" == "null" ]; then
echo "ERROR: Empty answer from Router"
exit 1
fi
ROUTER_CITATIONS_COUNT=$(echo "$ROUTER_CITATIONS" | jq 'length')
echo "✓ Router query successful: answer length=${#ROUTER_TEXT}, citations=${ROUTER_CITATIONS_COUNT}"
echo ""
echo "=== E2E Test Complete ==="
echo "All steps passed successfully!"


@@ -1,237 +0,0 @@
#!/usr/bin/env python3
"""
RAG Evaluation Script
Tests RAG quality with fixed questions and saves results
"""
import json
import csv
import time
import sys
from pathlib import Path
from typing import Dict, Any
from datetime import datetime
import httpx
# Configuration
RAG_URL = "http://localhost:9500"
ROUTER_URL = "http://localhost:9102"
DAO_ID = "daarion"
# Test questions
TEST_QUESTIONS = [
{
"id": "q1",
"question": "Яка роль стейкінгу в microDAO?",
"expected_doc_ids": ["microdao-tokenomics"],
"category": "tokenomics"
},
{
"id": "q2",
"question": "Які основні фази roadmap розгортання?",
"expected_doc_ids": ["roadmap", "deployment"],
"category": "roadmap"
},
{
"id": "q3",
"question": "Поясни архітектуру DAARION.city",
"expected_doc_ids": ["architecture", "whitepaper"],
"category": "architecture"
},
{
"id": "q4",
"question": "Як працює система ролей та RBAC?",
"expected_doc_ids": ["rbac", "roles"],
"category": "rbac"
},
{
"id": "q5",
"question": "Що таке μGOV токен і навіщо він потрібен?",
"expected_doc_ids": ["microdao-tokenomics", "tokenomics"],
"category": "tokenomics"
}
]
async def test_rag_query(question: Dict[str, Any], dao_id: str) -> Dict[str, Any]:
"""Test single RAG query"""
async with httpx.AsyncClient(timeout=60.0) as client:
start_time = time.time()
response = await client.post(
f"{RAG_URL}/query",
json={
"dao_id": dao_id,
"question": question["question"],
"top_k": 5
}
)
elapsed = time.time() - start_time
response.raise_for_status()
data = response.json()
# Extract metrics
metrics = data.get("metrics", {})
citations = data.get("citations", [])
answer = data.get("answer", "")
# Check if expected doc_ids are found
found_doc_ids = [c.get("doc_id", "") for c in citations]
expected_found = any(
expected_id in found_doc_id
for expected_id in question["expected_doc_ids"]
for found_doc_id in found_doc_ids
)
return {
"question_id": question["id"],
"question": question["question"],
"category": question["category"],
"answer": answer,
"answer_length": len(answer),
"citations_count": len(citations),
"citations": citations,
"doc_ids_found": found_doc_ids,
"expected_doc_found": expected_found,
"query_time_seconds": elapsed,
"metrics": metrics,
"timestamp": datetime.utcnow().isoformat()
}
async def test_router_query(question: Dict[str, Any], dao_id: str, user_id: str = "test-user") -> Dict[str, Any]:
"""Test query via Router (Memory + RAG)"""
async with httpx.AsyncClient(timeout=60.0) as client:
start_time = time.time()
response = await client.post(
f"{ROUTER_URL}/route",
json={
"mode": "rag_query",
"dao_id": dao_id,
"user_id": user_id,
"payload": {
"question": question["question"]
}
}
)
elapsed = time.time() - start_time
response.raise_for_status()
data = response.json()
# Extract data
answer = data.get("data", {}).get("text", "")
citations = data.get("data", {}).get("citations", []) or data.get("metadata", {}).get("citations", [])
metadata = data.get("metadata", {})
return {
"question_id": question["id"],
"question": question["question"],
"category": question["category"],
"answer": answer,
"answer_length": len(answer),
"citations_count": len(citations),
"citations": citations,
"memory_used": metadata.get("memory_used", False),
"rag_used": metadata.get("rag_used", False),
"query_time_seconds": elapsed,
"metadata": metadata,
"timestamp": datetime.utcnow().isoformat()
}
async def run_evaluation(output_dir: Path = Path("tests/rag_eval_results")):
"""Run full evaluation"""
output_dir.mkdir(parents=True, exist_ok=True)
timestamp = datetime.utcnow().strftime("%Y%m%d_%H%M%S")
# Test RAG Service directly
print("Testing RAG Service directly...")
rag_results = []
for question in TEST_QUESTIONS:
print(f" Testing: {question['question'][:50]}...")
try:
result = await test_rag_query(question, DAO_ID)
rag_results.append(result)
print(f" ✓ Found {result['citations_count']} citations, expected doc: {result['expected_doc_found']}")
except Exception as e:
print(f" ✗ Error: {e}")
rag_results.append({
"question_id": question["id"],
"error": str(e)
})
# Test Router (Memory + RAG)
print("\nTesting Router (Memory + RAG)...")
router_results = []
for question in TEST_QUESTIONS:
print(f" Testing: {question['question'][:50]}...")
try:
result = await test_router_query(question, DAO_ID)
router_results.append(result)
print(f" ✓ Answer length: {result['answer_length']}, citations: {result['citations_count']}")
except Exception as e:
print(f" ✗ Error: {e}")
router_results.append({
"question_id": question["id"],
"error": str(e)
})
# Save results
results_file = output_dir / f"rag_eval_{timestamp}.json"
with open(results_file, "w", encoding="utf-8") as f:
json.dump({
"rag_service_results": rag_results,
"router_results": router_results,
"timestamp": timestamp,
"dao_id": DAO_ID
}, f, indent=2, ensure_ascii=False)
# Save CSV summary
csv_file = output_dir / f"rag_eval_{timestamp}.csv"
with open(csv_file, "w", newline="", encoding="utf-8") as f:
writer = csv.writer(f)
writer.writerow([
"Question ID", "Question", "Category",
"RAG Citations", "RAG Expected Found", "RAG Time (s)",
"Router Citations", "Router Memory Used", "Router Time (s)",
"Answer Length"
])
for rag_res, router_res in zip(rag_results, router_results):
writer.writerow([
rag_res.get("question_id", ""),
rag_res.get("question", ""),
rag_res.get("category", ""),
rag_res.get("citations_count", 0),
rag_res.get("expected_doc_found", False),
rag_res.get("query_time_seconds", 0),
router_res.get("citations_count", 0),
router_res.get("memory_used", False),
router_res.get("query_time_seconds", 0),
router_res.get("answer_length", 0)
])
print(f"\n✓ Results saved:")
print(f" JSON: {results_file}")
print(f" CSV: {csv_file}")
# Print summary
print("\n=== Summary ===")
rag_avg_time = sum(r.get("query_time_seconds", 0) for r in rag_results) / len(rag_results)
router_avg_time = sum(r.get("query_time_seconds", 0) for r in router_results) / len(router_results)
print(f"RAG Service: avg time={rag_avg_time:.2f}s")
print(f"Router: avg time={router_avg_time:.2f}s")
print(f"Expected docs found: {sum(1 for r in rag_results if r.get('expected_doc_found', False))}/{len(rag_results)}")
if __name__ == "__main__":
import asyncio
asyncio.run(run_evaluation())


@@ -1,326 +0,0 @@
"""
Tests for Agent System Prompts Runtime API
Тести для Agent System Prompts MVP v2:
- Runtime prompts API
- build_system_prompt function
- Prompts status check API
"""
import pytest
from typing import Any, Dict, Optional
# Mock functions for testing without database
def build_system_prompt_from_parts(
    prompts: Dict[str, str],
    agent_info: Optional[Dict[str, Any]] = None,
    context: Optional[Dict[str, Any]] = None
) -> str:
"""Build system prompt from parts (mock implementation for testing)"""
parts = []
# Core prompt (required)
if prompts.get("core"):
parts.append(prompts["core"])
elif agent_info:
agent_name = agent_info.get("display_name") or agent_info.get("name") or "Agent"
agent_kind = agent_info.get("kind") or "assistant"
parts.append(
f"You are {agent_name}, an AI {agent_kind} in DAARION.city ecosystem. "
f"Be helpful, accurate, and follow ethical guidelines."
)
else:
parts.append("You are an AI assistant. Be helpful and accurate.")
# Governance rules
if prompts.get("governance"):
parts.append("\n\n## Governance\n" + prompts["governance"])
# Safety guidelines
if prompts.get("safety"):
parts.append("\n\n## Safety Guidelines\n" + prompts["safety"])
# Tools instructions
if prompts.get("tools"):
parts.append("\n\n## Tools & Capabilities\n" + prompts["tools"])
# Context additions
if context:
context_lines = []
if context.get("node"):
node = context["node"]
context_lines.append(f"- **Node**: {node.get('name', 'Unknown')}")
if context.get("district"):
district = context["district"]
context_lines.append(f"- **District**: {district.get('name', 'Unknown')}")
if context.get("microdao"):
microdao = context["microdao"]
context_lines.append(f"- **MicroDAO**: {microdao.get('name', 'Unknown')}")
if context_lines:
parts.append("\n\n## Current Context\n" + "\n".join(context_lines))
return "\n".join(parts)
class TestBuildSystemPrompt:
"""Tests for build_system_prompt function"""
def test_core_only(self):
"""Test with only core prompt"""
prompts = {
"core": "You are DAARWIZZ, the global orchestrator.",
"safety": None,
"governance": None,
"tools": None
}
result = build_system_prompt_from_parts(prompts)
assert "DAARWIZZ" in result
assert "orchestrator" in result
assert "## Safety" not in result
assert "## Governance" not in result
def test_full_prompts(self):
"""Test with all prompt types"""
prompts = {
"core": "You are DAARWIZZ, the global orchestrator of DAARION.city.",
"safety": "Never execute irreversible actions without confirmation.",
"governance": "Coordinate with district leads for resource allocation.",
"tools": "Use agent_delegate to delegate tasks."
}
result = build_system_prompt_from_parts(prompts)
assert "DAARWIZZ" in result
assert "## Safety Guidelines" in result
assert "irreversible" in result
assert "## Governance" in result
assert "district leads" in result
assert "## Tools" in result
assert "agent_delegate" in result
def test_fallback_without_core(self):
"""Test fallback when no core prompt provided"""
prompts = {
"core": None,
"safety": "Be safe",
"governance": None,
"tools": None
}
agent_info = {
"name": "TestAgent",
"display_name": "Test Agent",
"kind": "coordinator"
}
result = build_system_prompt_from_parts(prompts, agent_info)
assert "Test Agent" in result
assert "coordinator" in result
assert "## Safety Guidelines" in result
assert "Be safe" in result
def test_with_context(self):
"""Test prompt with runtime context"""
prompts = {
"core": "You are a node agent.",
"safety": None,
"governance": None,
"tools": None
}
context = {
"node": {"name": "NODE1", "environment": "production"},
"district": {"name": "ENERGYUNION"},
"microdao": {"name": "DAARION"}
}
result = build_system_prompt_from_parts(prompts, context=context)
assert "node agent" in result
assert "## Current Context" in result
assert "NODE1" in result
assert "ENERGYUNION" in result
assert "DAARION" in result
def test_prompt_order(self):
"""Test that prompts are assembled in correct order"""
prompts = {
"core": "CORE_MARKER",
"safety": "SAFETY_MARKER",
"governance": "GOVERNANCE_MARKER",
"tools": "TOOLS_MARKER"
}
result = build_system_prompt_from_parts(prompts)
# Check order: core → governance → safety → tools
core_pos = result.find("CORE_MARKER")
gov_pos = result.find("GOVERNANCE_MARKER")
safety_pos = result.find("SAFETY_MARKER")
tools_pos = result.find("TOOLS_MARKER")
assert core_pos < gov_pos < safety_pos < tools_pos
class TestRuntimePromptsFormat:
"""Tests for runtime prompts response format"""
def test_response_structure(self):
"""Test expected response structure"""
expected_keys = {"agent_id", "has_prompts", "prompts"}
# Mock response
response = {
"agent_id": "agent-daarwizz",
"has_prompts": True,
"prompts": {
"core": "You are DAARWIZZ...",
"safety": "Safety rules...",
"governance": None,
"tools": None
}
}
assert set(response.keys()) == expected_keys
assert response["has_prompts"] is True
assert "core" in response["prompts"]
assert "safety" in response["prompts"]
assert "governance" in response["prompts"]
assert "tools" in response["prompts"]
def test_has_prompts_when_core_exists(self):
"""Test has_prompts is True when core exists"""
prompts = {"core": "Some core prompt", "safety": None, "governance": None, "tools": None}
has_prompts = prompts.get("core") is not None
assert has_prompts is True
def test_has_prompts_when_core_missing(self):
"""Test has_prompts is False when core is None"""
prompts = {"core": None, "safety": "Safety only", "governance": None, "tools": None}
has_prompts = prompts.get("core") is not None
assert has_prompts is False
class TestPromptsStatusBatch:
"""Tests for batch prompts status check"""
def test_status_response_format(self):
"""Test batch status response format"""
agent_ids = ["agent-daarwizz", "agent-devtools", "agent-unknown"]
# Mock response
response = {
"status": {
"agent-daarwizz": True,
"agent-devtools": True,
"agent-unknown": False
}
}
assert "status" in response
assert isinstance(response["status"], dict)
assert all(aid in response["status"] for aid in agent_ids)
assert all(isinstance(v, bool) for v in response["status"].values())
class TestNodeAgentPrompts:
"""Tests for Node Agent specific prompts"""
def test_node_guardian_prompt_content(self):
"""Test Node Guardian has appropriate content markers"""
guardian_core = """Ти — Node Guardian для НОДА1 (Hetzner GEX44 Production).
Твоя місія: забезпечувати стабільну роботу продакшн-інфраструктури DAARION.city."""
assert "Node Guardian" in guardian_core
assert "НОДА1" in guardian_core
assert "Production" in guardian_core or "production" in guardian_core.lower()
def test_node_guardian_safety_rules(self):
"""Test Node Guardian safety rules"""
guardian_safety = """Ніколи не виконуй деструктивні команди без підтвердження.
Не розкривай чутливу інформацію (паролі, API ключі).
При невизначеності — ескалюй до людини."""
assert "деструктивні" in guardian_safety
assert "підтвердження" in guardian_safety
assert "ескалюй" in guardian_safety
class TestAgentCoverage:
"""Tests for agent prompts coverage requirements"""
REQUIRED_AGENTS = [
# City / Core
"agent-daarwizz",
"agent-microdao-orchestrator",
"agent-devtools",
# District / MicroDAO
"agent-greenfood",
"agent-helion",
"agent-soul",
"agent-druid",
"agent-nutra",
"agent-eonarch",
"agent-clan",
"agent-yaromir",
"agent-monitor",
# Node Agents
"monitor-node1",
"monitor-node2",
"node-steward-node1",
"node-steward-node2"
]
def test_required_agents_list(self):
"""Test required agents are defined"""
assert len(self.REQUIRED_AGENTS) == 16
assert "agent-daarwizz" in self.REQUIRED_AGENTS
assert "monitor-node1" in self.REQUIRED_AGENTS
assert "monitor-node2" in self.REQUIRED_AGENTS
# Integration tests (require running services)
class TestIntegration:
"""Integration tests - skip if services not available"""
@pytest.mark.skip(reason="Requires running services")
async def test_fetch_runtime_prompts(self):
"""Test fetching runtime prompts from API"""
import httpx
async with httpx.AsyncClient() as client:
response = await client.get(
"http://localhost:7001/internal/agents/agent-daarwizz/prompts/runtime"
)
assert response.status_code == 200
data = response.json()
assert data["agent_id"] == "agent-daarwizz"
assert "prompts" in data
@pytest.mark.skip(reason="Requires running services")
async def test_fetch_system_prompt(self):
"""Test fetching full system prompt from API"""
import httpx
async with httpx.AsyncClient() as client:
response = await client.get(
"http://localhost:7001/internal/agents/agent-daarwizz/system-prompt"
)
assert response.status_code == 200
data = response.json()
assert data["agent_id"] == "agent-daarwizz"
assert "system_prompt" in data
assert len(data["system_prompt"]) > 100
if __name__ == "__main__":
pytest.main([__file__, "-v"])


@@ -1,446 +1,304 @@
"""
Tests for Behavior Policy v1: Silent-by-default + Short-first + Media-no-comment
Tests for Behavior Policy v2.1
Aligned with Global System Prompt v2.1 — FINAL
Tests cover:
- Broadcast detection
- Imperative/explicit request detection
- Mention detection
- SOWA: DM always responds
- SOWA: Mentioned + explicit_request -> responds
- SOWA: Broadcast without mention -> NO_OUTPUT
- SOWA: Media/link without request -> NO_OUTPUT
- SOWA: Media + mention -> responds
- SOWA: Bare mention in public -> NO_OUTPUT (v2.1 anti-spam)
- SOWA: Bare mention in DM -> responds
- SOWA: Question without mention in topic -> NO_OUTPUT
- URL detection
- Reply to agent
"""
import pytest
import sys
from pathlib import Path

# Add gateway-bot to path
sys.path.insert(0, str(Path(__file__).parent.parent / "gateway-bot"))
from behavior_policy import (
    detect_agent_mention,
    detect_any_agent_mention,
    detect_command,
    detect_question,
    detect_imperative,
    detect_broadcast_intent,
    detect_short_note,
    detect_media_question,
    detect_url,
    detect_explicit_request,
    analyze_message,
    should_respond,
    is_no_output_response,
    NO_OUTPUT,
    TRAINING_GROUP_IDS,
)
# ========================================
# Unit Tests: detect_agent_mention
# ========================================
class TestDetectAgentMention:
def test_helion_mention_exact(self):
assert detect_agent_mention("Helion, що ти думаєш?", "helion") is True
def test_helion_mention_lowercase(self):
assert detect_agent_mention("helion допоможи", "helion") is True
def test_helion_mention_ukrainian(self):
assert detect_agent_mention("Хеліон, як справи?", "helion") is True
def test_helion_mention_at(self):
assert detect_agent_mention("@energyunionBot глянь", "helion") is True
def test_helion_no_mention(self):
assert detect_agent_mention("Привіт всім", "helion") is False
def test_daarwizz_mention(self):
assert detect_agent_mention("@DAARWIZZBot поясни", "daarwizz") is True
def test_daarwizz_no_mention(self):
assert detect_agent_mention("Helion допоможи", "daarwizz") is False
class TestDetectAnyAgentMention:
def test_helion_detected(self):
assert detect_any_agent_mention("Helion, скажи") == "helion"
def test_daarwizz_detected(self):
assert detect_any_agent_mention("@DAARWIZZBot допоможи") == "daarwizz"
def test_no_agent(self):
assert detect_any_agent_mention("Привіт всім") is None
# ========================================
# Unit Tests: detect_command
# ========================================
class TestDetectCommand:
def test_ask_command(self):
assert detect_command("/ask що таке DAO?") is True
def test_helion_command(self):
assert detect_command("/helion покажи") is True
def test_brand_command(self):
assert detect_command("/бренд_інтейк https://example.com") is True
def test_no_command(self):
assert detect_command("Привіт, як справи?") is False
def test_slash_in_middle(self):
assert detect_command("Дивись https://example.com/path") is False
# ========================================
# Unit Tests: detect_question
# ========================================
class TestDetectQuestion:
def test_question_mark(self):
assert detect_question("Що це таке?") is True
def test_question_word_start(self):
assert detect_question("Як це працює") is True
def test_question_word_чому(self):
assert detect_question("Чому так") is True
def test_english_question(self):
assert detect_question("What is this?") is True
def test_no_question(self):
assert detect_question("Добре") is False
def test_statement(self):
assert detect_question("Я згоден з цим") is False
# ========================================
# Unit Tests: detect_imperative
# ========================================
class TestDetectImperative:
def test_поясни(self):
assert detect_imperative("Поясни мені це") is True
def test_зроби(self):
assert detect_imperative("Зроби аналіз") is True
def test_допоможи(self):
assert detect_imperative("Допоможи з цим") is True
def test_after_mention(self):
assert detect_imperative("@Helion поясни") is True
def test_no_imperative(self):
assert detect_imperative("Привіт") is False
# ===== Broadcast Detection =====


class TestBroadcastDetection:
    def test_time_pattern(self):
        assert detect_broadcast_intent("20:00 Вебінар")

    def test_time_date_pattern(self):
        assert detect_broadcast_intent("20:00 10.02 Deployed")

    def test_emoji_start(self):
        assert detect_broadcast_intent("\u26a1 Оновлення системи")

    def test_announcement_word(self):
        assert detect_broadcast_intent("увага всім! нова версія")

    def test_url_only(self):
        assert detect_broadcast_intent("https://example.com")

    def test_normal_message_not_broadcast(self):
        assert not detect_broadcast_intent("Допоможи з налаштуванням")
# ========================================
# Unit Tests: detect_short_note
# ========================================


class TestDetectShortNote:
    def test_checkmark_only(self):
        assert detect_short_note("✅") is True

    def test_time_checkmark(self):
        assert detect_short_note("20:00 ✅") is True

    def test_ok(self):
        assert detect_short_note("ok") is True

    def test_plus(self):
        assert detect_short_note("+") is True

    def test_normal_message(self):
        assert detect_short_note("Привіт, як справи?") is False

    def test_empty(self):
        assert detect_short_note("") is True


# ===== Imperative Detection =====
class TestImperativeDetection:
def test_ua_imperative(self):
assert detect_imperative("Допоможи з налаштуванням")
def test_ua_analyze(self):
assert detect_imperative("Проаналізуй логи")
def test_en_fix(self):
assert detect_imperative("Fix this bug")
def test_en_explain(self):
assert detect_imperative("Explain how it works")
def test_no_imperative(self):
assert not detect_imperative("Просто повідомлення")
# ========================================
# Unit Tests: detect_media_question
# ========================================
class TestDetectMediaQuestion:
def test_question_in_caption(self):
assert detect_media_question("Що на цьому фото?") is True
def test_imperative_in_caption(self):
assert detect_media_question("Опиши це зображення") is True
def test_no_question(self):
assert detect_media_question("") is False
def test_just_hashtag(self):
assert detect_media_question("#photo") is False
# ===== Explicit Request Detection (Gateway level) =====


class TestExplicitRequestDetection:
def test_imperative_triggers(self):
assert detect_explicit_request("Допоможи з налаштуванням")
def test_question_with_mention(self):
assert detect_explicit_request("Що це?", mentioned_agents=['helion'])
def test_question_in_dm(self):
assert detect_explicit_request("Що це?", is_dm=True)
def test_question_without_context(self):
assert not detect_explicit_request("Що це?")
def test_no_question_no_imperative(self):
assert not detect_explicit_request("Просто текст без питання")
# ========================================
# Unit Tests: is_no_output_response
# ========================================
class TestIsNoOutputResponse:
def test_empty_string(self):
assert is_no_output_response("") is True
def test_whitespace(self):
assert is_no_output_response(" ") is True
def test_no_output_marker(self):
assert is_no_output_response("__NO_OUTPUT__") is True
def test_no_output_lowercase(self):
assert is_no_output_response("no_output") is True
def test_normal_response(self):
assert is_no_output_response("Ось моя відповідь") is False
def test_dots_only(self):
assert is_no_output_response("...") is True
# ===== Question Detection =====


class TestQuestionDetection:
def test_with_question_mark(self):
assert detect_question("Що це таке?")
def test_without_question_mark(self):
assert not detect_question("Що нового в проекті")
# ========================================
# Integration Tests: analyze_message / should_respond
# ========================================


class TestAnalyzeMessage:
    """Test main decision logic"""
    def test_training_group_always_respond(self):
        decision = analyze_message(
            text="Привіт всім",
            agent_id="helion",
            chat_id="-1003556680911",  # Training group
        )
        assert decision.should_respond is True
        assert decision.reason == "training_group"


# ===== Mention Detection =====


class TestMentionDetection:
def test_helion_at_mention(self):
assert detect_agent_mention("@Helion що на постері?", "helion")
def test_helion_name(self):
assert detect_agent_mention("Helion, допоможи", "helion")
def test_ua_name(self):
assert detect_agent_mention("хеліон, що скажеш?", "helion")
def test_no_mention(self):
assert not detect_agent_mention("Хто знає відповідь?", "helion")
# ===== URL Detection =====
class TestURLDetection:
def test_https(self):
assert detect_url("Check https://example.com")
def test_www(self):
assert detect_url("Visit www.github.com")
def test_telegram(self):
assert detect_url("Join t.me/channel")
def test_telegram_me(self):
assert detect_url("Link: telegram.me/bot")
def test_no_link(self):
assert not detect_url("No link here")
# ===== SOWA: DM Always Responds =====
class TestSOWADMAlwaysResponds:
def test_dm_responds(self):
        decision = analyze_message(
            text="Привіт",
            agent_id="helion",
            chat_id="123",
            user_id="456",
            is_private_chat=True,
        )
        assert decision.should_respond
assert decision.reason == "private_chat"
# ===== SOWA: Mentioned + Request =====


class TestSOWAMentionedWithRequest:
    def test_mentioned_with_request(self):
        decision = analyze_message(
            text="@Helion що змінилось у v2.0?",
            agent_id="helion",
            chat_id="-100123",
            user_id="456",
            is_private_chat=False,
            payload_explicit_request=True,
        )
        assert decision.should_respond
        assert decision.reason == "mentioned_with_request"
# ===== SOWA: Broadcast Without Mention =====


class TestSOWABroadcastNoMention:
    def test_broadcast_no_mention(self):
        resp, reason = should_respond(
            text="\u26a1 Оновлення: релізимо v2.0",
            agent_id="helion",
            chat_id="-100123",
            user_id="456",
            is_private_chat=False,
        )
        assert not resp
        assert reason == "broadcast_not_directed"
# ===== SOWA: Media Without Request =====
class TestSOWAMediaWithoutRequest:
def test_media_no_request(self):
resp, reason = should_respond(
text="",
agent_id="helion",
chat_id="group123",
chat_id="-100123",
user_id="456",
has_media=True,
media_caption="",
is_private_chat=False,
payload_explicit_request=False,
)
assert decision.should_respond is False
assert decision.reason == "media_no_question"
def test_media_with_question_respond(self):
assert not resp
assert reason == "media_or_link_without_request"
# ===== SOWA: Media + Mention =====
class TestSOWAMediaWithMention:
def test_media_with_mention(self):
decision = analyze_message(
text="",
text="@Helion що на фото?",
agent_id="helion",
chat_id="group123",
chat_id="-100123",
user_id="456",
has_media=True,
media_caption="Що на цьому фото?",
media_caption="@Helion що на фото?",
is_private_chat=False,
payload_explicit_request=True,
)
assert decision.should_respond is True
assert decision.reason == "media_with_question"
def test_question_no_mention_silent(self):
"""General question without mention = don't respond in groups"""
decision = analyze_message(
text="Як це працює?",
agent_id="helion",
chat_id="group123",
)
assert decision.should_respond is False
assert decision.reason == "question_no_mention"
def test_addressed_to_other_agent(self):
decision = analyze_message(
text="@DAARWIZZBot поясни DAO",
agent_id="helion",
chat_id="group123",
)
assert decision.should_respond is False
assert "other_agent" in decision.reason
assert decision.should_respond
assert decision.reason == "mentioned_with_request"
# ========================================
# E2E Test Cases (from requirements)
# ========================================
# ===== v2.1: Bare Mention in Public = NO_OUTPUT =====
class TestE2ECases:
"""
Test cases from the requirements document.
"""
def test_case_1_poster_no_question(self):
"""Case 1: Постер у каналі без питання → (нічого)"""
respond, reason = should_respond(
text="",
class TestSOWABareMentionPublicNoResponse:
def test_bare_mention_public(self):
resp, reason = should_respond(
text="@Helion",
agent_id="helion",
chat_id="channel123",
has_media=True,
media_caption="",
chat_id="-100123",
user_id="456",
is_private_chat=False,
payload_explicit_request=False,
)
assert respond is False
assert reason == "media_no_question"
def test_case_2_timing_no_question(self):
"""Case 2: Таймінг без питання → (нічого)"""
respond, reason = should_respond(
text="20:00 10.02 ✅",
agent_id="helion",
chat_id="group123",
)
assert respond is False
# Either short_note or broadcast pattern matches
def test_case_3_direct_request_with_photo(self):
"""Case 3: @Helion що на цьому постері? коротко + image → respond"""
respond, reason = should_respond(
text="",
agent_id="helion",
chat_id="group123",
has_media=True,
media_caption="@Helion що на цьому постері? коротко",
)
# Should respond because there's a question in caption
assert respond is True
assert reason == "media_with_question"
def test_case_4_link_no_question(self):
"""Case 4: Посилання без питання → (нічого)"""
respond, reason = should_respond(
text="https://t.me/energyunionofficial/123",
agent_id="helion",
chat_id="group123",
has_media=True, # Link treated as media
media_caption="https://t.me/energyunionofficial/123",
)
assert respond is False
assert reason == "media_no_question"
def test_case_5_link_with_question(self):
"""Case 5: @DAARWIZZ глянь посилання і скажи 3 тези + link → respond"""
respond, reason = should_respond(
text="@DAARWIZZBot глянь посилання і скажи 3 тези https://example.com",
agent_id="daarwizz",
chat_id="group123",
has_media=True,
media_caption="@DAARWIZZBot глянь посилання і скажи 3 тези https://example.com",
)
# Should respond because there's a direct mention + question
assert respond is True
assert reason == "media_with_question"
assert not resp
assert reason == "bare_mention_public_topic"
# ========================================
# Edge Cases
# ========================================
# ===== v2.1: Bare Mention in DM = Responds =====
class TestEdgeCases:
def test_empty_text(self):
respond, reason = should_respond(
text="",
class TestSOWABareMentionDMResponds:
def test_bare_mention_dm(self):
resp, reason = should_respond(
text="@Helion",
agent_id="helion",
chat_id="group123",
chat_id="123",
user_id="456",
is_private_chat=True,
payload_explicit_request=False,
)
assert respond is False
def test_none_text(self):
respond, reason = should_respond(
text=None,
assert resp
assert reason == "private_chat"
# ===== SOWA: Question Without Mention in Topic =====
class TestSOWAQuestionWithoutMentionInTopic:
def test_question_no_mention_topic(self):
resp, reason = should_respond(
text="Хто знає чому падає сервер?",
agent_id="helion",
chat_id="group123",
chat_id="-100123",
user_id="456",
is_private_chat=False,
payload_explicit_request=False,
)
assert respond is False
def test_mixed_agents_mention(self):
"""When multiple agents mentioned, each should handle their own"""
# Helion should respond to Helion mention
respond_helion, _ = should_respond(
text="Helion та DAARWIZZ, допоможіть",
assert not resp
assert reason == "not_directed_to_agent"
# ===== SOWA: Reply to Agent =====
class TestSOWAReplyToAgent:
def test_reply_to_agent(self):
resp, reason = should_respond(
text="А що з цим робити?",
agent_id="helion",
chat_id="group123",
chat_id="-100123",
user_id="456",
is_private_chat=False,
is_reply_to_agent=True,
)
assert respond_helion is True
# DAARWIZZ should also respond
respond_daarwizz, _ = should_respond(
text="Helion та DAARWIZZ, допоможіть",
agent_id="daarwizz",
chat_id="group123",
)
assert respond_daarwizz is True
def test_question_to_specific_agent(self):
"""Question directed to another agent"""
respond, reason = should_respond(
text="@greenfoodliveBot як справи?",
agent_id="helion",
chat_id="group123",
)
assert respond is False
assert "other_agent" in reason
assert resp
assert reason == "reply_to_agent"
# ===== Short Notes =====
class TestShortNotes:
def test_checkmark(self):
assert detect_short_note("\u2705")
def test_ok(self):
assert detect_short_note("ok")
def test_plus(self):
assert detect_short_note("+")
def test_normal_message(self):
assert not detect_short_note("Це нормальне повідомлення")
if __name__ == "__main__":
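The expectations encoded by the SOWA tests above can be collapsed into a small decision gate. The sketch below is reconstructed from the test assertions only — the real `should_respond` in the behavior-policy module certainly carries more rules (broadcast patterns, short-note detection, per-agent aliases) — but it shows the priority order the tests imply: DM > reply-to-agent > mention+request > bare mention > media without caption > silence.

```python
def sowa_should_respond_sketch(text, agent_id, is_private_chat=False,
                               has_media=False, media_caption="",
                               is_reply_to_agent=False,
                               payload_explicit_request=False):
    """Minimal SOWA-style gate inferred from the test expectations (a sketch,
    not the production implementation)."""
    text = text or ""
    mentioned = agent_id.lower() in text.lower()
    if is_private_chat:
        return True, "private_chat"            # DMs always get a reply
    if is_reply_to_agent:
        return True, "reply_to_agent"          # continuing a thread with the agent
    if mentioned and payload_explicit_request:
        return True, "mentioned_with_request"  # @mention plus an actual request
    if mentioned:
        return False, "bare_mention_public_topic"  # bare @mention stays silent in public
    if has_media and not media_caption:
        return False, "media_or_link_without_request"
    return False, "not_directed_to_agent"
```

The ordering matters: a bare `@Helion` in a DM still responds because the private-chat rule fires before the bare-mention rule.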


@@ -1,280 +0,0 @@
"""
DAGI Router API Tests
Tests for the endpoints:
- GET /internal/node/{node_id}/dagi-router/agents
- GET /internal/node/{node_id}/metrics/current
- POST /internal/node/{node_id}/dagi-audit/run
- POST /internal/node/{node_id}/dagi-router/phantom/sync
- POST /internal/node/{node_id}/dagi-router/stale/mark
"""
import pytest
import httpx
# Test configuration
CITY_SERVICE_URL = "http://localhost:7001"
NODE1_ID = "node-1-hetzner-gex44"
NODE2_ID = "node-2-macbook-m4max"
# ============================================================================
# Fixtures
# ============================================================================
@pytest.fixture
def client():
    """HTTP client for the tests"""
return httpx.Client(base_url=CITY_SERVICE_URL, timeout=30.0)
@pytest.fixture
def node_ids():
    """Node IDs used in the tests"""
return [NODE1_ID, NODE2_ID]
# ============================================================================
# DAGI Router Agents Tests
# ============================================================================
class TestDAGIRouterAgents:
    """Tests for GET /internal/node/{node_id}/dagi-router/agents"""
    def test_get_agents_returns_valid_response(self, client):
        """The endpoint returns a valid structure"""
response = client.get(f"/city/internal/node/{NODE1_ID}/dagi-router/agents")
assert response.status_code == 200
data = response.json()
        # Check the response structure
        assert "node_id" in data
        assert "summary" in data
        assert "agents" in data
        # Check the summary
summary = data["summary"]
assert "active" in summary
assert "phantom" in summary
assert "stale" in summary
assert "router_total" in summary
assert "system_total" in summary
# Types
assert isinstance(summary["active"], int)
assert isinstance(summary["phantom"], int)
assert isinstance(data["agents"], list)
    def test_get_agents_for_unknown_node(self, client):
        """The endpoint returns an empty response for an unknown node"""
        response = client.get("/city/internal/node/unknown-node-id/dagi-router/agents")
        # Must return 200 with an empty list, not 404
assert response.status_code == 200
data = response.json()
assert data["agents"] == []
assert data["summary"]["active"] == 0
def test_agents_have_required_fields(self, client):
        """Agents include all required fields"""
response = client.get(f"/city/internal/node/{NODE1_ID}/dagi-router/agents")
assert response.status_code == 200
data = response.json()
if data["agents"]:
agent = data["agents"][0]
# Required fields
assert "id" in agent
assert "name" in agent
assert "status" in agent
# Status must be valid
assert agent["status"] in ["active", "phantom", "stale", "error"]
# ============================================================================
# Node Metrics Tests
# ============================================================================
class TestNodeMetrics:
    """Tests for GET /internal/node/{node_id}/metrics/current"""
    def test_get_metrics_returns_valid_response(self, client):
        """The endpoint returns a valid structure"""
response = client.get(f"/city/internal/node/{NODE1_ID}/metrics/current")
assert response.status_code == 200
data = response.json()
# Required fields
assert "node_id" in data
assert data["node_id"] == NODE1_ID
# Metric fields
assert "cpu_cores" in data
assert "cpu_usage" in data
assert "gpu_model" in data
assert "gpu_memory_total" in data
assert "gpu_memory_used" in data
assert "ram_total" in data
assert "ram_used" in data
assert "disk_total" in data
assert "disk_used" in data
assert "agent_count_router" in data
assert "agent_count_system" in data
    def test_get_metrics_for_unknown_node(self, client):
        """The endpoint returns a minimal response for an unknown node"""
        response = client.get("/city/internal/node/unknown-node-id/metrics/current")
        # Must return 200 with a minimal response
assert response.status_code == 200
data = response.json()
assert data["node_id"] == "unknown-node-id"
def test_metrics_have_numeric_values(self, client):
        """Metrics carry numeric values"""
response = client.get(f"/city/internal/node/{NODE1_ID}/metrics/current")
assert response.status_code == 200
data = response.json()
# All numeric fields should be numbers
numeric_fields = [
"cpu_cores", "cpu_usage",
"gpu_memory_total", "gpu_memory_used",
"ram_total", "ram_used",
"disk_total", "disk_used",
"agent_count_router", "agent_count_system"
]
for field in numeric_fields:
assert isinstance(data[field], (int, float)), f"{field} should be numeric"
# ============================================================================
# DAGI Audit Tests
# ============================================================================
class TestDAGIAudit:
    """Tests for POST /internal/node/{node_id}/dagi-audit/run"""
    def test_run_audit_returns_valid_response(self, client):
        """POST audit returns a valid structure"""
response = client.post(f"/city/internal/node/{NODE1_ID}/dagi-audit/run")
assert response.status_code == 200
data = response.json()
assert "status" in data
assert data["status"] == "completed"
assert "summary" in data
assert "message" in data
# Summary fields
summary = data["summary"]
assert "router_total" in summary
assert "db_total" in summary
assert "active_count" in summary
assert "phantom_count" in summary
assert "stale_count" in summary
    def test_get_audit_summary(self, client):
        """GET audit summary returns data"""
        response = client.get(f"/city/internal/node/{NODE1_ID}/dagi-audit")
        # May return 200 with data or null
assert response.status_code == 200
data = response.json()
if data:
assert "node_id" in data
assert "timestamp" in data
assert "active_count" in data
# ============================================================================
# Phantom/Stale Sync Tests
# ============================================================================
class TestPhantomStaleSync:
    """Tests for the phantom/stale sync endpoints"""
    def test_phantom_sync_empty_list(self, client):
        """Sync with an empty list does not fail"""
response = client.post(
f"/city/internal/node/{NODE1_ID}/dagi-router/phantom/sync",
json={"agent_ids": []}
)
assert response.status_code == 200
data = response.json()
assert data["status"] == "completed"
assert data["created_count"] == 0
def test_stale_mark_empty_list(self, client):
        """Marking stale with an empty list does not fail"""
response = client.post(
f"/city/internal/node/{NODE1_ID}/dagi-router/stale/mark",
json={"agent_ids": []}
)
assert response.status_code == 200
data = response.json()
assert data["status"] == "completed"
assert data["marked_count"] == 0
# ============================================================================
# Integration Tests
# ============================================================================
class TestIntegration:
    """Integration tests"""
    def test_full_audit_flow(self, client):
        """Full cycle: audit → get agents → get metrics"""
# 1. Run audit
audit_response = client.post(f"/city/internal/node/{NODE1_ID}/dagi-audit/run")
assert audit_response.status_code == 200
# 2. Get agents
agents_response = client.get(f"/city/internal/node/{NODE1_ID}/dagi-router/agents")
assert agents_response.status_code == 200
agents_data = agents_response.json()
# 3. Get metrics
metrics_response = client.get(f"/city/internal/node/{NODE1_ID}/metrics/current")
assert metrics_response.status_code == 200
        # 4. Verify consistency: the audit just ran, so the status counts
        # reported by the agents endpoint should match the audit summary
        audit_data = audit_response.json()
        audit_summary = audit_data["summary"]
        agents_summary = agents_data["summary"]
        assert agents_summary["active"] == audit_summary["active_count"]
        assert agents_summary["phantom"] == audit_summary["phantom_count"]
        assert agents_summary["stale"] == audit_summary["stale_count"]
def test_both_nodes_accessible(self, client, node_ids):
        """Both nodes are reachable through the API"""
for node_id in node_ids:
response = client.get(f"/city/internal/node/{node_id}/metrics/current")
assert response.status_code == 200
data = response.json()
assert data["node_id"] == node_id
# ============================================================================
# Run tests
# ============================================================================
if __name__ == "__main__":
pytest.main([__file__, "-v", "--tb=short"])

tests/test_dictionary.py Normal file

@@ -0,0 +1,31 @@
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).resolve().parents[1] / 'packages' / 'agromatrix-tools'))
from agromatrix_tools import tool_dictionary
from agromatrix_tools.normalize import parse_quantity, convert
def test_synonym_match():
res = tool_dictionary.normalize_crop('озима пшениця', trace_id='t1')
assert res['status'] == 'ok'
assert res['normalized_id'] == 'crop_wheat_winter'
def test_fuzzy_suggestions():
res = tool_dictionary.normalize_operation('посив', trace_id='t2')
assert res['suggestions']
def test_pending_unknown():
res = tool_dictionary.normalize_field('невідоме поле', trace_id='t3')
assert res['status'] == 'pending'
def test_unit_conversion():
    value, unit = parse_quantity('2,5 т')
    assert value == 2.5
    assert unit in ['т', 't']
    units = [{'id': 't', 'to_base': {'base': 'kg', 'factor': 1000}}, {'id': 'kg'}]
    # 2.5 t -> 2500 kg
    assert convert(2.5, 't', 'kg', units) == 2500
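The unit test above exercises a base-factor table: each unit may declare `{'to_base': {'base': <unit>, 'factor': <float>}}`. A generic conversion over that schema can be sketched as follows — this mirrors the structure the test passes in, not necessarily the internals of `agromatrix_tools.normalize.convert`:

```python
def convert_sketch(value, from_id, to_id, units):
    """Convert between units via their declared base factors.

    `units` follows the test's schema: a list of dicts with 'id' and an
    optional 'to_base' = {'base': <unit id>, 'factor': <multiplier>}.
    """
    by_id = {u['id']: u for u in units}

    def to_base(v, uid):
        # Reduce a value to its base unit (a unit with no 'to_base' is its own base)
        tb = by_id[uid].get('to_base')
        return (v * tb['factor'], tb['base']) if tb else (v, uid)

    v_from, base_from = to_base(value, from_id)
    per_target, base_to = to_base(1.0, to_id)  # how much base one target unit is
    if base_from != base_to:
        raise ValueError(f"incompatible units: {from_id} -> {to_id}")
    return v_from / per_target
```

With the table from the test, `2.5 t` reduces to `2500 kg`, and dividing by the target's base factor (1.0 for `kg`) yields the converted value.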


@@ -0,0 +1,29 @@
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).resolve().parents[1] / 'packages' / 'agromatrix-tools'))
from agromatrix_tools import tool_dictionary_review as review
def test_list_pending_open():
items = review.list_pending(limit=5)
assert isinstance(items, list)
def test_reject_pending():
    # Nothing to reject when the pending queue is empty
    items = review.list_pending(limit=1)
    if not items:
        return
    ref = items[0]['pending_ref']
    res = review.reject_pending(ref, 'test')
    assert res['decision'] == 'rejected'
def test_apply_idempotent():
    # Applying resolutions twice must succeed without raising
    review.apply_resolutions()
    review.apply_resolutions()


@@ -1,336 +0,0 @@
"""
Infrastructure Smoke Tests
Basic API tests to verify the system after a deploy.
They run as part of the deploy pipeline or manually.
Usage:
pytest tests/test_infra_smoke.py -v
pytest tests/test_infra_smoke.py -v --base-url http://localhost:7001
"""
import os
import pytest
import requests
# Configuration
BASE_URL = os.getenv("CITY_SERVICE_URL", "http://daarion-city-service:7001")
TIMEOUT = 10
# Node IDs
NODE1_ID = "node-1-hetzner-gex44"
NODE2_ID = "node-2-macbook-m4max"
def pytest_addoption(parser):
"""Add command line options"""
parser.addoption(
"--base-url",
action="store",
default=BASE_URL,
help="Base URL of city-service API"
)
@pytest.fixture
def base_url(request):
"""Get base URL from command line or environment"""
return request.config.getoption("--base-url") or BASE_URL
@pytest.fixture
def api_client(base_url):
"""Create API client session"""
    session = requests.Session()
    # NOTE: requests.Session has no timeout attribute; the timeout is passed per request below
class Client:
def __init__(self, base_url: str, session: requests.Session):
self.base_url = base_url.rstrip("/")
self.session = session
def get(self, path: str) -> requests.Response:
return self.session.get(f"{self.base_url}{path}", timeout=TIMEOUT)
def post(self, path: str, json: dict) -> requests.Response:
return self.session.post(f"{self.base_url}{path}", json=json, timeout=TIMEOUT)
return Client(base_url, session)
# ==============================================================================
# Health Checks
# ==============================================================================
class TestHealthChecks:
"""Basic health check tests"""
def test_healthz_endpoint(self, api_client):
"""Test /healthz returns 200 and status ok"""
response = api_client.get("/healthz")
assert response.status_code == 200, f"Health check failed: {response.text}"
data = response.json()
assert data.get("status") == "ok", f"Unhealthy status: {data}"
def test_public_nodes_endpoint(self, api_client):
"""Test /public/nodes returns node list"""
response = api_client.get("/public/nodes")
assert response.status_code == 200, f"Nodes endpoint failed: {response.text}"
data = response.json()
assert "items" in data, "Response missing 'items' key"
assert "total" in data, "Response missing 'total' key"
# ==============================================================================
# Node Metrics Tests
# ==============================================================================
class TestNodeMetrics:
"""Node metrics tests"""
@pytest.mark.parametrize("node_id", [NODE1_ID, NODE2_ID])
def test_node_metrics_endpoint(self, api_client, node_id):
"""Test node metrics endpoint returns data"""
response = api_client.get(f"/internal/node/{node_id}/metrics/current")
assert response.status_code == 200, f"Node metrics failed for {node_id}: {response.text}"
data = response.json()
# Check required fields
assert "node_id" in data, "Missing node_id"
assert "agent_count_router" in data, "Missing agent_count_router"
assert "agent_count_system" in data, "Missing agent_count_system"
def test_node1_has_agents(self, api_client):
"""Test NODE1 has at least 1 agent in router"""
response = api_client.get(f"/internal/node/{NODE1_ID}/metrics/current")
if response.status_code != 200:
pytest.skip(f"NODE1 metrics not available: {response.status_code}")
data = response.json()
agent_count = data.get("agent_count_router", 0)
assert agent_count >= 1, f"NODE1 has {agent_count} agents in router, expected >= 1"
def test_node2_has_agents(self, api_client):
"""Test NODE2 has at least 1 agent in system"""
response = api_client.get(f"/internal/node/{NODE2_ID}/metrics/current")
if response.status_code != 200:
pytest.skip(f"NODE2 metrics not available: {response.status_code}")
data = response.json()
agent_count = data.get("agent_count_system", 0)
assert agent_count >= 1, f"NODE2 has {agent_count} agents in system, expected >= 1"
# ==============================================================================
# Node Agents Tests
# ==============================================================================
class TestNodeAgents:
"""Node agents (Guardian/Steward) tests"""
@pytest.mark.parametrize("node_id", [NODE1_ID, NODE2_ID])
def test_node_agents_endpoint(self, api_client, node_id):
"""Test node agents endpoint returns data"""
response = api_client.get(f"/internal/node/{node_id}/agents")
assert response.status_code == 200, f"Node agents failed for {node_id}: {response.text}"
data = response.json()
assert "node_id" in data, "Missing node_id"
assert "total" in data, "Missing total"
assert "agents" in data, "Missing agents list"
def test_node1_has_guardian(self, api_client):
"""Test NODE1 has Node Guardian"""
response = api_client.get(f"/internal/node/{NODE1_ID}/agents")
if response.status_code != 200:
pytest.skip(f"NODE1 agents not available: {response.status_code}")
data = response.json()
guardian = data.get("guardian")
assert guardian is not None, "NODE1 missing Node Guardian"
assert guardian.get("id"), "Guardian has no ID"
def test_node1_has_steward(self, api_client):
"""Test NODE1 has Node Steward"""
response = api_client.get(f"/internal/node/{NODE1_ID}/agents")
if response.status_code != 200:
pytest.skip(f"NODE1 agents not available: {response.status_code}")
data = response.json()
steward = data.get("steward")
assert steward is not None, "NODE1 missing Node Steward"
assert steward.get("id"), "Steward has no ID"
def test_node2_has_guardian(self, api_client):
"""Test NODE2 has Node Guardian"""
response = api_client.get(f"/internal/node/{NODE2_ID}/agents")
if response.status_code != 200:
pytest.skip(f"NODE2 agents not available: {response.status_code}")
data = response.json()
guardian = data.get("guardian")
assert guardian is not None, "NODE2 missing Node Guardian"
# ==============================================================================
# DAGI Router Tests
# ==============================================================================
class TestDAGIRouter:
"""DAGI Router tests"""
@pytest.mark.parametrize("node_id", [NODE1_ID, NODE2_ID])
def test_dagi_router_agents_endpoint(self, api_client, node_id):
"""Test DAGI Router agents endpoint returns data"""
response = api_client.get(f"/internal/node/{node_id}/dagi-router/agents")
# May return empty if no audit yet
if response.status_code == 404:
pytest.skip(f"DAGI Router not configured for {node_id}")
assert response.status_code == 200, f"DAGI Router failed for {node_id}: {response.text}"
data = response.json()
assert "node_id" in data, "Missing node_id"
assert "summary" in data, "Missing summary"
assert "agents" in data, "Missing agents list"
def test_node1_router_has_agents(self, api_client):
"""Test NODE1 DAGI Router has agents"""
response = api_client.get(f"/internal/node/{NODE1_ID}/dagi-router/agents")
if response.status_code != 200:
pytest.skip(f"NODE1 DAGI Router not available: {response.status_code}")
data = response.json()
summary = data.get("summary", {})
router_total = summary.get("router_total", 0)
# Warn but don't fail - router may not be configured
if router_total == 0:
pytest.skip("NODE1 DAGI Router has 0 agents (may not be configured)")
assert router_total >= 1, f"DAGI Router has {router_total} agents, expected >= 1"
# ==============================================================================
# Core Agents Tests
# ==============================================================================
class TestCoreAgents:
"""Core agents tests"""
def test_prompts_status_endpoint(self, api_client):
"""Test prompts status batch endpoint"""
agent_ids = ["agent-daarwizz", "agent-devtools", "agent-soul"]
response = api_client.post("/internal/agents/prompts/status", {"agent_ids": agent_ids})
assert response.status_code == 200, f"Prompts status failed: {response.text}"
data = response.json()
assert "status" in data, "Missing status in response"
assert isinstance(data["status"], dict), "Status should be a dict"
def test_daarwizz_runtime_prompt(self, api_client):
"""Test DAARWIZZ has runtime prompt"""
# Try both possible slugs
for agent_id in ["agent-daarwizz", "daarwizz"]:
response = api_client.get(f"/internal/agents/{agent_id}/prompts/runtime")
if response.status_code == 200:
data = response.json()
if data.get("has_prompts"):
assert data.get("prompts", {}).get("core"), "DAARWIZZ missing core prompt"
return
pytest.skip("DAARWIZZ agent not found or no prompts configured")
def test_runtime_system_prompt_endpoint(self, api_client):
"""Test runtime system prompt endpoint works"""
response = api_client.get("/internal/agents/agent-daarwizz/system-prompt")
if response.status_code == 404:
pytest.skip("DAARWIZZ agent not found")
assert response.status_code == 200, f"System prompt failed: {response.text}"
data = response.json()
assert "agent_id" in data, "Missing agent_id"
assert "system_prompt" in data, "Missing system_prompt"
assert len(data.get("system_prompt", "")) > 10, "System prompt too short"
# ==============================================================================
# Integration Tests
# ==============================================================================
class TestIntegration:
"""End-to-end integration tests"""
def test_node_to_agents_flow(self, api_client):
"""Test full flow: node → agents → prompts"""
# Get node
response = api_client.get(f"/internal/node/{NODE1_ID}/agents")
if response.status_code != 200:
pytest.skip(f"NODE1 not available: {response.status_code}")
data = response.json()
agents = data.get("agents", [])
if not agents:
pytest.skip("No agents found for NODE1")
# Get first agent's prompts
agent = agents[0]
agent_id = agent.get("id")
response = api_client.get(f"/internal/agents/{agent_id}/prompts/runtime")
# Should return successfully even if no prompts
assert response.status_code == 200, f"Agent prompts failed for {agent_id}: {response.text}"
def test_public_nodes_have_metrics(self, api_client):
"""Test public nodes endpoint includes metrics"""
response = api_client.get("/public/nodes")
assert response.status_code == 200
data = response.json()
items = data.get("items", [])
if not items:
pytest.skip("No nodes in system")
# Check first node has metrics
node = items[0]
# Should have metrics object after our changes
if "metrics" in node:
metrics = node["metrics"]
assert "cpu_cores" in metrics or "ram_total" in metrics, "Metrics object empty"
# ==============================================================================
# Run as script
# ==============================================================================
if __name__ == "__main__":
pytest.main([__file__, "-v"])


@@ -0,0 +1,32 @@
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).resolve().parents[1] / 'packages' / 'agromatrix-tools'))
from agromatrix_tools import tool_operation_plan as op
def test_create_plan():
plan_id = op.create_plan({
'scope': {'field_ids': ['field_001'], 'crop_ids': ['crop_wheat_winter'], 'date_window': {'start': '2026-02-08', 'end': '2026-02-15'}},
'tasks': [{'operation_id': 'op_sowing', 'planned_date': '2026-02-10', 'priority': 'normal', 'assignee': 'team'}]
}, trace_id='t1')
assert plan_id
plan = op.get_plan(plan_id)
assert plan['status'] == 'planned'
def test_update_and_status():
plan_id = op.create_plan({'scope': {'field_ids': ['field_001'], 'crop_ids': [], 'date_window': {'start':'2026-02-08','end':'2026-02-10'}}, 'tasks': []}, trace_id='t2')
op.update_plan(plan_id, {'source': 'api'})
op.set_status(plan_id, 'scheduled')
plan = op.get_plan(plan_id)
assert plan['status'] == 'scheduled'
def test_invalid_transition():
    plan_id = op.create_plan({'scope': {'field_ids': ['field_001'], 'crop_ids': [], 'date_window': {'start':'2026-02-08','end':'2026-02-10'}}, 'tasks': []}, trace_id='t3')
    # The original try/except swallowed the `assert False` (AssertionError is an
    # Exception), so the test could never fail; try/except/else avoids that.
    try:
        op.set_status(plan_id, 'closed')
    except Exception:
        pass  # expected: 'planned' -> 'closed' is not a valid transition
    else:
        raise AssertionError('transition planned -> closed should have failed')
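The status tests imply a small state machine: `planned → scheduled` is legal, while jumping straight from `planned` to `closed` is rejected. A sketch of such a transition guard is below; the intermediate states (`in_progress`, `done`, `cancelled`) are assumptions — the real `tool_operation_plan` module may define a different set:

```python
# Hypothetical transition table inferred from the tests; only the
# planned->scheduled (allowed) and planned->closed (rejected) edges
# are confirmed by the assertions above.
ALLOWED_TRANSITIONS = {
    'planned': {'scheduled', 'cancelled'},
    'scheduled': {'in_progress', 'cancelled'},
    'in_progress': {'done'},
    'done': {'closed'},
}

def set_status_sketch(current, new):
    """Return the new status, or raise on an illegal transition."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"invalid transition: {current} -> {new}")
    return new
```

Encoding transitions as a dict keeps the legality check to one lookup and makes the policy easy to audit.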


@@ -0,0 +1,65 @@
import sys
from pathlib import Path
root = Path(__file__).resolve().parents[1]
sys.path.insert(0, str(root))
sys.path.insert(0, str(root / 'packages' / 'agromatrix-tools'))
from crews.agromatrix_crew import operator_commands as oc
def test_parse():
cmd = oc.parse_operator_command('/pending --limit 5 --category field')
assert cmd['cmd'] == 'pending'
def test_gating(monkeypatch):
monkeypatch.setenv('AGX_OPERATOR_IDS', '123')
assert oc.is_operator('123', None)
assert not oc.is_operator('999', None)
def test_pending_list_flags(monkeypatch):
monkeypatch.setenv('AGX_OPERATOR_IDS', '1')
res = oc.route_operator_command('/pending --limit 5 --category unit', '1', None)
assert res['status'] == 'ok'
def test_approve_apply_guard(monkeypatch):
monkeypatch.setenv('AGX_OPERATOR_IDS', '1')
# no apply allowed by default
res = oc.route_operator_command('/approve pending.jsonl:1 map_to crop_wheat_winter --apply', '1', None)
assert res['summary'] == 'apply_not_allowed'
def test_whoami_operator(monkeypatch):
monkeypatch.setenv('AGX_OPERATOR_IDS', '1')
res = oc.route_operator_command('/whoami', '1', '2')
assert res['status'] == 'ok'
assert 'user_id: 1' in res['summary']
assert 'chat_id: 2' in res['summary']
def test_whoami_non_operator(monkeypatch):
monkeypatch.setenv('AGX_OPERATOR_IDS', '1')
res = oc.route_operator_command('/whoami', '9', '2')
assert res['status'] == 'error'
def test_pending_show(monkeypatch):
monkeypatch.setenv('AGX_OPERATOR_IDS', '1')
def _fake_detail(_ref):
return {
'ref': 'pending.jsonl:1',
'category': 'crop',
'raw_term': 'пшениця',
'ts': '2026-01-01T00:00:00Z',
'suggestions': [{'id': 'crop_wheat', 'score': 0.92}],
'status': 'approved',
'decision': 'approved',
'reason': 'match'
}
monkeypatch.setattr(oc.review, 'get_pending_detail', _fake_detail)
res = oc.route_operator_command('/pending_show pending.jsonl:1', '1', None)
assert res['status'] == 'ok'
assert 'ref: pending.jsonl:1' in res['summary']
assert 'suggestions:' in res['summary']
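The operator-command tests above exercise a `/cmd arg --flag value` grammar (`/pending --limit 5 --category field`, `/approve <ref> map_to <id> --apply`). A minimal parser for that shape can be sketched with `shlex`; this is an illustration of the grammar the tests imply, not the actual `parse_operator_command` in `crews.agromatrix_crew.operator_commands`:

```python
import shlex

def parse_operator_command_sketch(line):
    """Parse '/cmd arg --flag value' into {'cmd', 'args', 'flags'}.

    Flags without a following value (e.g. '--apply') become boolean True.
    """
    tokens = shlex.split(line)
    cmd = tokens[0].lstrip('/')
    args, flags = [], {}
    i = 1
    while i < len(tokens):
        tok = tokens[i]
        if tok.startswith('--'):
            key = tok[2:]
            # A flag consumes the next token as its value unless it is another flag
            if i + 1 < len(tokens) and not tokens[i + 1].startswith('--'):
                flags[key] = tokens[i + 1]
                i += 2
            else:
                flags[key] = True
                i += 1
        else:
            args.append(tok)
            i += 1
    return {'cmd': cmd, 'args': args, 'flags': flags}
```

For example, `/approve pending.jsonl:1 map_to crop_wheat_winter --apply` parses into positional args plus a boolean `apply` flag, which is where the `apply_not_allowed` guard in the tests hooks in.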

tests/test_spreadsheet.py Normal file

@@ -0,0 +1,13 @@
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).resolve().parents[1] / 'packages' / 'agromatrix-tools'))
from agromatrix_tools import tool_spreadsheet
def test_create_and_read(tmp_path):
path = tmp_path / "test.xlsx"
spec = {"sheets": [{"name": "Data", "data": [["A", "B"], [1, 2]]}]}
tool_spreadsheet.create_workbook(str(path), spec)
result = tool_spreadsheet.read_range(str(path), "Data", "A1:B2")
    assert result["values"][0] == ["A", "B"]
    assert result["values"][1] == [1, 2]
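The spreadsheet test drives a `create_workbook(path, spec)` / `read_range(path, sheet, range)` pair. One plausible backing for that interface is openpyxl; the sketch below is an assumption about the implementation, shown only to make the spec format (`{'sheets': [{'name', 'data'}]}`) concrete:

```python
def create_workbook_sketch(path, spec):
    """Write {'sheets': [{'name': str, 'data': rows}]} to an .xlsx file
    (openpyxl-based sketch; the real tool_spreadsheet may differ)."""
    from openpyxl import Workbook  # third-party dependency, assumed available
    wb = Workbook()
    wb.remove(wb.active)  # drop the default empty sheet
    for sheet in spec['sheets']:
        ws = wb.create_sheet(sheet['name'])
        for row in sheet['data']:
            ws.append(row)
    wb.save(path)

def read_range_sketch(path, sheet_name, cell_range):
    """Return {'values': rows} for an A1-style range like 'A1:B2'."""
    from openpyxl import load_workbook
    ws = load_workbook(path)[sheet_name]
    return {'values': [[cell.value for cell in row] for row in ws[cell_range]]}
```

Lazy imports keep the module importable when openpyxl is not installed, which matches how an optional spreadsheet tool would typically be packaged.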