snapshot: NODE1 production state 2026-02-09

Complete snapshot of /opt/microdao-daarion/ from NODE1 (144.76.224.179).
This represents the actual running production code that has diverged
significantly from the previous main branch.

Key changes from old main:
- Gateway (http_api.py): expanded from ~40KB to 164KB with full agent support
- Router: new /v1/agents/{id}/infer endpoint with vision + DeepSeek routing
- Behavior Policy: SOWA v2.2 (3-level: FULL/ACK/SILENT)
- Agent Registry: config/agent_registry.yml as single source of truth
- 13 agents configured (was 3)
- Memory service integration
- CrewAI teams and roles

Excluded from snapshot: venv/, .env, data/, backups, .tgz archives

Co-authored-by: Cursor <cursoragent@cursor.com>
Author: Apple
Date: 2026-02-09 08:46:46 -08:00
Commit: ef3473db21 (parent: 134c044c21)
9473 changed files with 408933 additions and 2769877 deletions


@@ -0,0 +1,53 @@
# Agent Scheduler

Cron-based task scheduler for DAARION agents.

## Scheduled Tasks

| Task | Agent | Schedule | Description |
|------|-------|----------|-------------|
| daily_health_check | all | 9:00 AM daily | Health check for all agents |
| helion_energy_report | helion | Monday 8:00 AM | Weekly energy report |
| agromatrix_weather | agromatrix | 6:00 AM daily | Weather forecast |
| memory_cleanup | all | Sunday 3:00 AM | Clean up old memories |

## Usage

```bash
# Run scheduler
python agent_scheduler.py
# Or via Docker
docker compose -f docker-compose.node1.yml up -d agent-scheduler
```

## Configuration

Environment variables:
- `GATEWAY_URL` - Gateway service URL
- `MEMORY_SERVICE_URL` - Memory service URL
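
For example, to point a locally run scheduler at the production services (these values are the fallback defaults hard-coded in `agent_scheduler.py`):

```bash
# Defaults from agent_scheduler.py; override to target other environments
export GATEWAY_URL="http://dagi-gateway-node1:9300"
export MEMORY_SERVICE_URL="http://memory-service:8000"
```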

## Adding Tasks

Edit the `SCHEDULED_TASKS` list in `agent_scheduler.py`:

```python
AgentTask(
name="task_name",
agent_id="agent_id", # or "*" for all
schedule="0 9 * * *", # cron expression
task_type="health_check|generate_report|web_search|memory_cleanup",
params={"key": "value"}
)
```
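
As a self-contained illustration (the `AgentTask` class here mirrors the constructor in `agent_scheduler.py`; `nightly_summary` is a made-up task, not one that ships with the scheduler):

```python
# Standalone sketch of registering a new task. The class below mirrors the
# AgentTask signature from agent_scheduler.py; "nightly_summary" is hypothetical.
from typing import Dict, Optional

class AgentTask:
    def __init__(self, name: str, agent_id: str, schedule: str,
                 task_type: str, params: Optional[Dict] = None):
        self.name = name
        self.agent_id = agent_id
        self.schedule = schedule
        self.task_type = task_type
        self.params = params or {}

SCHEDULED_TASKS = [
    # Every day at 22:00, ask the helion agent for a report
    AgentTask("nightly_summary", "helion", "0 22 * * *",
              "generate_report", {"report_type": "daily_summary"}),
]
```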

## Status

Currently: **Not deployed** (requires adding a Docker service)

## TODO

- [ ] Add Docker service to docker-compose.node1.yml
- [ ] Implement proper cron parsing (croniter)
- [ ] Add task status API
- [ ] Add Prometheus metrics
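
A sketch of what the first TODO item might look like in `docker-compose.node1.yml`; the build context, environment values, and restart policy here are assumptions, not the deployed configuration:

```yaml
services:
  agent-scheduler:
    build: .
    command: python agent_scheduler.py
    environment:
      - GATEWAY_URL=http://dagi-gateway-node1:9300
      - MEMORY_SERVICE_URL=http://memory-service:8000
    restart: unless-stopped
```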


@@ -0,0 +1,95 @@
# Agent Scheduler - Cron-based tasks for agents
import os
import asyncio
import logging
from datetime import datetime, timedelta
from typing import Dict, List, Any, Optional
import httpx
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
GATEWAY_URL = os.getenv("GATEWAY_URL", "http://dagi-gateway-node1:9300")
class AgentTask:
    def __init__(self, name: str, agent_id: str, schedule: str,
                 task_type: str, params: Optional[Dict] = None):
        self.name = name
        self.agent_id = agent_id
        self.schedule = schedule
        self.task_type = task_type
        self.params = params or {}
        self.last_run: Optional[datetime] = None
        self.next_run: Optional[datetime] = None

SCHEDULED_TASKS = [
    AgentTask("daily_health_check", "*", "0 9 * * *", "health_check"),
    AgentTask("helion_energy_report", "helion", "0 8 * * 1", "generate_report", {"report_type": "energy_weekly"}),
    AgentTask("agromatrix_weather", "agromatrix", "0 6 * * *", "web_search", {"query": "weather forecast Ukraine"}),
    AgentTask("memory_cleanup", "*", "0 3 * * 0", "memory_cleanup", {"older_than_days": 90}),
]
async def execute_task(task: AgentTask) -> Dict:
    logger.info(f"Executing task: {task.name} for agent: {task.agent_id}")
    try:
        async with httpx.AsyncClient(timeout=60.0) as client:
            if task.task_type == "health_check":
                resp = await client.get(f"{GATEWAY_URL}/health")
                return {"status": "ok" if resp.status_code == 200 else "error"}
            elif task.task_type in ("generate_report", "web_search"):
                # Both route through the gateway's per-agent internal task endpoint
                resp = await client.post(
                    f"{GATEWAY_URL}/{task.agent_id}/internal/task",
                    json={"task": task.task_type, "params": task.params}
                )
                return resp.json() if resp.status_code == 200 else {"error": resp.status_code}
            elif task.task_type == "memory_cleanup":
                memory_url = os.getenv("MEMORY_SERVICE_URL", "http://memory-service:8000")
                resp = await client.post(f"{memory_url}/admin/cleanup", json=task.params)
                return resp.json() if resp.status_code == 200 else {"error": resp.status_code}
            else:
                return {"error": f"Unknown task type: {task.task_type}"}
    except Exception as e:
        logger.error(f"Task execution failed: {e}")
        return {"error": str(e)}
async def run_scheduler():
    logger.info("Agent Scheduler started")
    while True:
        now = datetime.now()
        for task in SCHEDULED_TASKS:
            if should_run(task, now):
                result = await execute_task(task)
                task.last_run = now
                logger.info(f"Task {task.name} completed: {result}")
        await asyncio.sleep(60)
def _field_matches(field: str, value: int) -> bool:
    return field == "*" or int(field) == value

def should_run(task: AgentTask, now: datetime) -> bool:
    # Debounce so the 60s poll loop cannot re-fire a task within the same hour
    if task.last_run and (now - task.last_run) < timedelta(hours=1):
        return False
    # Minimal cron match on minute/hour/day-of-week; croniter is the planned fix
    minute, hour, _dom, _month, dow = task.schedule.split()
    return (_field_matches(minute, now.minute) and _field_matches(hour, now.hour)
            and _field_matches(dow, (now.weekday() + 1) % 7))  # cron: Sunday = 0
if __name__ == "__main__":
    asyncio.run(run_scheduler())