docs: add node1 runbooks, consolidation artifacts, and maintenance scripts
@@ -1,8 +1,19 @@

# 🏗️ NODA1 Production Stack

**Version:** 2.2
**Last Updated:** 2026-02-11
**Status:** Production (drift-controlled) ✅

## 🔎 Current Reality (2026-02-11)

- Deploy root: `/opt/microdao-daarion` (single runtime root)
- Drift control: `/opt/microdao-daarion/ops/drift-check.sh` → expected `DRIFT_CHECK: OK`
- Gateway: `agents_count=13` (user-facing)
- Router: 15 active agents (13 user-facing + 2 internal)
- Internal routing defaults:
  - `monitor` → local (`swapper+ollama`, `qwen3-8b`)
  - `devtools` → local (`swapper+ollama`, `qwen3-8b`) + conditional cloud fallback for heavy task types
- Memory service: `/health` and `/stats` return `200`
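These facts can be re-verified from the node itself. A hedged sketch follows: the gateway/router ports and the `DRIFT_CHECK: OK` output line are taken from this document; adjust if your stack differs.

```bash
#!/usr/bin/env bash
# Sketch: re-verify the "Current Reality" facts above on NODA1.
set -euo pipefail

# Pure helper: succeeds only if the drift script reported OK.
drift_ok() {
  grep -q 'DRIFT_CHECK: OK' <<<"$1"
}

check_node() {
  curl -fsS http://127.0.0.1:9300/health >/dev/null   # gateway
  curl -fsS http://127.0.0.1:9102/health >/dev/null   # router
  out="$(/opt/microdao-daarion/ops/drift-check.sh)"
  if drift_ok "$out"; then echo "drift: OK"; else echo "drift: FAIL"; exit 1; fi
}

# Run only when explicitly requested, so the helpers can be sourced/tested.
if [[ "${RUN_LIVE:-0}" == "1" ]]; then check_node; fi
```

Run with `RUN_LIVE=1` on the node; without it the script only defines the helpers.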

## 📍 Node Information

@@ -70,15 +81,16 @@

| Prometheus | 9090 | prometheus | ✅ |
| Grafana | 3030 | grafana | ✅ |

## 🤖 Telegram Bots (13 user-facing)

1. ✅ **DAARWIZZ** - Main orchestrator
2. ✅ **Helion** - Energy Union AI
3. ✅ **GREENFOOD** - Agriculture assistant
4. ✅ **AgroMatrix** - Agro analytics
5. ✅ **NUTRA** - Nutrition advisor
6. ✅ **Druid** - Legal assistant
7. ⚠️ **Alateya** - (token not configured)

User-facing agents currently in the production gateway:
`daarwizz`, `helion`, `alateya`, `druid`, `nutra`, `agromatrix`, `greenfood`, `clan`, `eonarch`, `yaromir`, `soul`, `senpai`, `sofiia`.

Quick check:

```bash
curl -sS http://localhost:9300/health
```

## 📊 Health Check Endpoints

101 NODA1-SAFE-DEPLOY.md Normal file
@@ -0,0 +1,101 @@

# NODA1 Safe Deploy (Canonical Workflow)

**Goal:** keep laptop ↔ GitHub ↔ NODA1 ↔ the actual Docker stack in sync so that we:

- do not break the working production;
- do not spawn "invisible" branches;
- do not end up with `unrelated history` on the server;
- keep a single canonical state of code and documentation.

**Canonical truth:** `origin/main` (GitHub). Everything else is either runtime (Docker) or secrets kept out of git.

---

## 1) Directory Roles on NODA1

On NODA1 we use **a single deploy root**:

- `/opt/microdao-daarion` — **canonical deployment checkout** (runtime source of truth).
- `/root/microdao-daarion` — not a runtime tree (marker/historical artifacts); do not use it for deploys.

Goal: no duplicated runtime trees and no deployment drift.

---

## 2) Golden Rules (do not break)

1. Do not edit code/docs "by hand on prod", except in an emergency.
2. Do not create branches on the server; the server is not a place for development.
3. Always use the same docker compose project name: `-p microdao-daarion`.
4. Secrets (tokens/passwords) are never committed; keep `*.example` files plus a short note on where the real values live on the server.

---
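Rule 3 is easy to violate by hand. A tiny wrapper (a sketch, not part of the repo) pins both the project name and the compose file:

```bash
# Sketch: enforce Golden Rule 3 with a wrapper around docker compose.
# The pure helper builds the fixed arguments; the wrapper forwards the rest.
compose_args() {
  echo "-p microdao-daarion -f /opt/microdao-daarion/docker-compose.node1.yml"
}

dcn1() {
  # shellcheck disable=SC2046
  docker compose $(compose_args) "$@"
}

# usage: dcn1 ps ; dcn1 build router ; dcn1 up -d --no-deps router
```

Add it to the server's shell profile if useful; every invocation then uses the same project name automatically.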

## 3) The Normal (Correct) Change Cycle

### A) On the laptop

1. Code/docs changes → PR/merge into `origin/main`.
2. After the merge: update `PROJECT-MASTER-INDEX.md` if services/ports/paths changed.

### B) On NODA1: code sync (zero downtime)

```bash
ssh root@144.76.224.179
cd /opt/microdao-daarion
git fetch origin
git pull --ff-only
git rev-parse --short HEAD
```

### C) Deploying a single service (minimal risk)

Example: router

```bash
cd /opt/microdao-daarion

docker compose -p microdao-daarion -f docker-compose.node1.yml build router
docker compose -p microdao-daarion -f docker-compose.node1.yml up -d --no-deps --force-recreate router

curl -fsS http://127.0.0.1:9102/health
```

Same for the gateway:

```bash
docker compose -p microdao-daarion -f docker-compose.node1.yml build gateway
docker compose -p microdao-daarion -f docker-compose.node1.yml up -d --no-deps --force-recreate gateway

curl -fsS http://127.0.0.1:9300/health
```

---
## 4) Runtime Snapshot (verifying the "real architecture")

After a deploy (or before one), take a snapshot:

```bash
cd /opt/microdao-daarion
./scripts/node1/snapshot_node1.sh > "/opt/backups/node1_snapshot_$(date +%Y%m%d-%H%M%S).txt"
/opt/microdao-daarion/ops/drift-check.sh
```

This tells you:

- which git commit is actually deployed;
- which containers/images/health checks are active;
- the basic health endpoints.

---
## 5) If `git pull` conflicts appear on NODA1 again

Do not `rebase` in the production directory.

The correct path:

1. `cd /opt/microdao-daarion && git fetch origin`
2. Check local edits: `git status --short`
3. If there is an emergency hotfix on the server: archive the diff and move it into a PR.
4. Then `git pull --ff-only` and deploy only from `/opt/microdao-daarion`.
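Steps 2-4 above can be sketched as one guarded script. The hotfix archive directory is an assumption; the repo path and git flow follow this runbook.

```bash
#!/usr/bin/env bash
# Sketch: archive a server-side hotfix diff, then fast-forward safely.
set -euo pipefail

# Pure helper: timestamped archive name (UTC), easy to test in isolation.
hotfix_archive_name() {
  printf 'hotfix_%s.diff' "$(date -u +%Y%m%d-%H%M%S)"
}

sync_prod() {
  local repo="${1:-/opt/microdao-daarion}" outdir="${2:-/opt/backups/hotfixes}"
  git -C "$repo" fetch origin
  if [[ -n "$(git -C "$repo" status --short)" ]]; then
    mkdir -p "$outdir"
    git -C "$repo" diff > "$outdir/$(hotfix_archive_name)"   # preserve for a PR
    git -C "$repo" stash                                     # clean tree for --ff-only
  fi
  git -C "$repo" pull --ff-only
}

if [[ "${RUN_LIVE:-0}" == "1" ]]; then sync_prod "$@"; fi
```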
@@ -1,7 +1,7 @@

# NODA1 Agent Architecture

**Date:** 2026-02-11
**Version:** 2.1.0

## Overview

@@ -23,7 +23,7 @@ NODA1 uses a unified agent system

│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ agent_registry.json (generated from config/agent_registry.yml) │ │
│ │ ─────────────────────────────────────────────────────────────────── │ │
│ │ 13 Telegram Agents: │ │
│ │ • daarwizz (Meta-Orchestrator, Digital Mayor) │ │
│ │ • helion (Energy Research, Energy Union) │ │
│ │ • alateya (R&D Lab OS, Interdisciplinary Research) │ │
@@ -35,6 +35,8 @@ NODA1 uses a unified agent system

│ │ • eonarch (Consciousness Evolution) │ │
│ │ • yaromir (Private Tech Lead) [whitelist] │ │
│ │ • soul (Spiritual Mentor) │ │
│ │ • senpai (Trading Advisor) │ │
│ │ • sofiia (Chief AI Architect) │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │
│ Features: │
@@ -45,12 +47,12 @@ NODA1 uses a unified agent system

│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ ROUTER (dagi-router-node1:9102 -> :8000) │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ router_agents.json (generated from config/agent_registry.yml) │ │
│ │ ─────────────────────────────────────────────────────────────────── │ │
│ │ 15 Agents with routing: │ │
│ │ • 13 top-level (user-facing) │ │
│ │ • 2 internal (monitor, devtools) │ │
│ │ │ │
│ │ Per agent: keywords[], domains[], llm_profile, visibility │ │
@@ -104,7 +106,7 @@ config/agent_registry.yml ←── the SINGLE source of truth

## Agents by Class

### TOP-LEVEL (User-facing, 13 agents)

| ID | Display | Telegram | Visibility | Domain |
|----|---------|----------|------------|--------|
@@ -119,6 +121,8 @@ config/agent_registry.yml ←── the SINGLE source of truth

| `eonarch` | EONARCH | public | public | Consciousness |
| `yaromir` | YAROMIR | whitelist | private | Tech Lead |
| `soul` | SOUL | public | public | Spiritual |
| `senpai` | SENPAI | public | public | Trading |
| `sofiia` | SOFIIA | public | public | AI Architecture |

### INTERNAL (Service agents, 2 agents)

@@ -131,7 +135,7 @@ config/agent_registry.yml ←── the SINGLE source of truth

## Tools per Agent (Standard Stack)

All 13 top-level agents have access to the standard stack:

| Tool | Description |
|------|-------------|
@@ -276,4 +280,3 @@ if use_crew:

- `services/router/crewai_client.py` - ready
- `config/crewai_agents.json` - generated from registry

@@ -57,6 +57,11 @@ bash scripts/docs/docs_backup.sh --dry-run

bash scripts/docs/docs_backup.sh --apply
```

7. Local scheduler (daily, no auto-push):
```bash
bash scripts/docs/install_local_cron.sh --schedule "17 9 * * *"
```

## Runtime-First Facts (must re-check each session)

1. NODE1 branch/SHA:
34 docs/architecture_inventory/00_EXEC_SUMMARY.md Normal file
@@ -0,0 +1,34 @@

# DAGI / microdao Architecture Inventory — Executive Summary

Generated: 2026-02-16
Scope basis: `docs/SESSION_STARTER.md` + repository scan + operator clarifications (current thread decisions).

## Executive Findings (Updated)
- Canonical deployment authority is **per-node compose manifests**, not a single universal file.
  - NODE1: `docker-compose.node1.yml`
  - NODE3: `docker-compose.node3.yml`
  - Staging: `docker-compose.staging.yml` (+ overrides)
- The drift-check policy should run per node against its local compose stack (services/images/ports/volumes/networks/env-nonsecret/healthcheck).
- `ingest-service`, `parser-pipeline`, `control-plane` are currently treated as **not active on NODE1**; the catalog now uses lifecycle statuses (`DEPLOYED`, `DEFINED`, `PLANNED/EXTERNAL`).
- The canonical NATS run subject policy is:
  - publish: `agent.run.requested.{agent_id}`
  - subscribe: `agent.run.requested.*`
  with a dual-subscribe/publish migration window.
- The proxy policy is fixed at the architectural level: a single owner for 80/443 in production (nginx **or** Caddy); the second proxy is internal-only or disabled.

## Priority Contradictions To Resolve
1. Memory API contract mismatch (`/store` in docs vs `/memories`/`/events`/`/retrieve` in code).
2. NATS run subject shape mismatch (legacy flat subject still in worker/producer paths).
3. Proxy ownership conflict (nginx vs Caddy) in runtime runbooks.
4. Architecture diagrams include services not currently deployed on NODE1.

## Source pointers
- `docs/SESSION_STARTER.md`
- `docker-compose.node1.yml`
- `docker-compose.node3.yml`
- `docker-compose.staging.yml`
- `services/crewai-worker/main.py`
- `services/router/main.py`
- `ops/nginx/node1-api.conf`
- `docs/OPENAPI_CONTRACTS.md`
- `docs/ARCHITECTURE_DIAGRAM.md`
45 docs/architecture_inventory/01_SERVICE_CATALOG.md Normal file
@@ -0,0 +1,45 @@

# Service Catalog

## Lifecycle Model (Canonical)
- `DEPLOYED`: in the node compose and intended to be running at runtime.
- `DEFINED`: code/manifest exists but is not active for the target runtime.
- `PLANNED/EXTERNAL`: a documented contract exists but the service is deployed elsewhere or not yet introduced.

## NODE1 Service Status

| Service | Status | Purpose | Runtime | Ports | Source |
|---|---|---|---|---|---|
| gateway | DEPLOYED | BFF/webhooks ingress | Python/FastAPI | 9300 | `docker-compose.node1.yml`, `gateway-bot/app.py` |
| router | DEPLOYED | Routing/tool orchestration | Python/FastAPI | 9102->8000 | `docker-compose.node1.yml`, `services/router/main.py` |
| swapper-service | DEPLOYED | Multimodal model runtime | Python/FastAPI+CUDA | 8890,8891 | `docker-compose.node1.yml` |
| memory-service | DEPLOYED | Memory API | Python/FastAPI | 8000 | `docker-compose.node1.yml`, `services/memory-service/app/main.py` |
| artifact-registry | DEPLOYED | Artifact jobs/metadata | Python/FastAPI | 9220 | `docker-compose.node1.yml` |
| rag-service | DEPLOYED | RAG ingest/query | Python/FastAPI | 9500 | `docker-compose.node1.yml` |
| render-pptx-worker | DEPLOYED | NATS pptx worker | Node.js | internal | `docker-compose.node1.yml` |
| render-pdf-worker | DEPLOYED | NATS pdf worker | Python | internal | `docker-compose.node1.yml` |
| index-doc-worker | DEPLOYED | NATS doc index worker | Python | internal | `docker-compose.node1.yml` |
| market-data-service | DEPLOYED | Market feed producer | Python | 8893->8891 | `docker-compose.node1.yml` |
| senpai-md-consumer | DEPLOYED | Market data consumer/features | Python | 8892 | `docker-compose.node1.yml` |
| nats | DEPLOYED | Event bus | NATS | 4222 | `docker-compose.node1.yml` |
| dagi-postgres | DEPLOYED | Relational DB | Postgres | 5432 | `docker-compose.node1.yml` |
| qdrant | DEPLOYED | Vector DB | Qdrant | 6333,6334 | `docker-compose.node1.yml` |
| neo4j | DEPLOYED | Graph DB | Neo4j | 7474,7687 | `docker-compose.node1.yml` |
| redis | DEPLOYED | Cache | Redis | 6379 | `docker-compose.node1.yml` |
| minio | DEPLOYED | Object storage | MinIO | 9000,9001 | `docker-compose.node1.yml` |
| ingest-service | DEFINED | Attachment ingest API | Python/FastAPI | 8100 (code) | `services/ingest-service/*` |
| parser-pipeline | DEFINED | Attachment parser worker/API | Python/FastAPI | 8101 (code) | `services/parser-pipeline/*` |
| control-plane | DEFINED | Policy/config service | Python/FastAPI | 9200 (code) | `services/control-plane/*` |

## NODE3 Service Status
- DEPLOYED: `dagi-router-node3`, `swapper-service-node3`, `comfy-agent` (`docker-compose.node3.yml`).

## Staging Status
- DEPLOYED in staging manifests: router, gateway, swapper, memory, qdrant, neo4j, redis, nats, control-plane, crewai-service, crewai-worker, vision-encoder.

## Source pointers
- `docker-compose.node1.yml`
- `docker-compose.node3.yml`
- `docker-compose.staging.yml`
- `services/ingest-service/main.py`
- `services/parser-pipeline/main.py`
- `services/control-plane/main.py`
36 docs/architecture_inventory/02_TOOL_CATALOG.md Normal file
@@ -0,0 +1,36 @@

# Tool Catalog (Agent-Callable)

Tool contracts are defined primarily in `services/router/tool_manager.py` and filtered by `services/router/agent_tools_config.py`.

## Tools
| Tool | Contract location | Execution path | AuthZ scope | Timeout/Retry | Logging |
|---|---|---|---|---|---|
| `memory_search` | `services/router/tool_manager.py` (`TOOL_DEFINITIONS`) | `_memory_search` to memory API | per-agent via `is_tool_allowed` | router HTTP client default 60s | router logger |
| `graph_query` | same | `_graph_query` (graph backend) | per-agent | API-call bounded | router logger |
| `web_search` | same | `_web_search` (search backend) | per-agent | varies, httpx | router logger |
| `web_extract` | same | `_web_extract` | per-agent | varies | router logger |
| `crawl4ai_scrape` | same | `_crawl4ai_scrape` (crawl4ai endpoint) | per-agent | external call timeout | router logger |
| `remember_fact` | same | `_remember_fact` (memory write) | per-agent | API-call bounded | router logger |
| `image_generate` | same | `_image_generate` (swapper image API) | per-agent | 120-300s paths in router/tool manager | router logger |
| `comfy_generate_image` | same | `_comfy_generate_image` + poll | per-agent specialized | `timeout_s` default 180 | router logger |
| `comfy_generate_video` | same | `_comfy_generate_video` + poll | per-agent specialized | `timeout_s` default 300 | router logger |
| `tts_speak` | same | `_tts_speak` (swapper TTS API) | per-agent | API timeout bounded | router logger |
| `presentation_create` | same | `_presentation_create` via artifact-registry | per-agent | async job pattern | router logger |
| `presentation_status` | same | `_presentation_status` | per-agent | short HTTP call | router logger |
| `presentation_download` | same | `_presentation_download` | per-agent | short HTTP call | router logger |
| `file_tool` | same | `_file_tool` action multiplexer | per-agent | per-action (`timeout_sec` bounded for djvu/pdf ops) | router logger |
| `market_data` | same | `_market_data` reads market service + senpai consumer | senpai specialized (+ others if granted) | short HTTP calls (8-10s) | router logger |

## Authorization Model
- Tool exposure is allowlist-based per agent in `services/router/agent_tools_config.py`.
- `FULL_STANDARD_STACK` applies to all top-level agents.
- Specialized tools (e.g., `market_data`, `comfy_generate_*`) are attached by agent ID.
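
The allowlist model can be illustrated with a small sketch. This is bash for consistency with the other snippets in this repo's docs; the real implementation is the Python in `agent_tools_config.py`, and the per-agent grants below are illustrative assumptions only.

```bash
# Sketch of per-agent tool allowlists: a shared standard stack plus
# specialized tools attached by agent ID (grants here are examples only).
FULL_STANDARD_STACK="memory_search graph_query web_search web_extract remember_fact"

tools_for_agent() {
  case "$1" in
    senpai) echo "$FULL_STANDARD_STACK market_data" ;;   # example specialized grant
    *)      echo "$FULL_STANDARD_STACK" ;;
  esac
}

is_tool_allowed() {   # usage: is_tool_allowed <agent_id> <tool>
  local t
  for t in $(tools_for_agent "$1"); do
    if [[ "$t" == "$2" ]]; then return 0; fi
  done
  return 1
}
```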

## Known Risk/Gap
- The tool manager config loader reads a hardcoded `helion` section for endpoint extraction (`_load_tools_config`) and may not reflect all agent-specific endpoint overrides.

## Source pointers
- `services/router/tool_manager.py`
- `services/router/agent_tools_config.py`
- `services/router/main.py`
- `config/agent_registry.yml`
44 docs/architecture_inventory/03_DATAFLOWS.md Normal file
@@ -0,0 +1,44 @@

# Data Flows and Event Model

## Primary Request Flow
```mermaid
flowchart LR
  U[User] --> G[Gateway]
  G --> R[Router]
  R --> TM[Tool Manager]
  TM --> SW[Swapper]
  TM --> MEM[Memory Service]
  TM --> AR[Artifact Registry]
  R --> LLM[Cloud or Local LLM]
  R --> G
  G --> U
```

## Canonical NATS Run Subject Policy
- Canonical publish: `agent.run.requested.{agent_id}`
- Canonical subscribe: `agent.run.requested.*`
- Keep `run_id`, `trace_id`, `tenant_id` in the payload.

## Migration Plan (No Downtime)
1. Consumers subscribe to both: `agent.run.requested` and `agent.run.requested.*`.
2. Producers publish to the new canonical subject and temporarily duplicate to the legacy one.
3. Remove the legacy publish after metrics confirm no consumers need it.
4. Remove the legacy subscribe after legacy traffic reaches zero.
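
The migration window can be sketched with the `nats` CLI (the real producers/consumers use client libraries; the subjects follow this document):

```bash
# Sketch: dual-publish during the subject migration window.
legacy_subject="agent.run.requested"

canonical_subject_for() {   # pure helper: canonical namespaced subject
  printf 'agent.run.requested.%s' "$1"
}

publish_run() {   # usage: publish_run <agent_id> <json_payload>
  nats pub "$(canonical_subject_for "$1")" "$2"
  nats pub "$legacy_subject" "$2"   # drop once legacy traffic reaches zero
}

# Consumers during the window subscribe to both:
#   nats sub 'agent.run.requested.*'
#   nats sub 'agent.run.requested'
```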

## Runtime Subject Inventory (Current vs Target)
- Current in code: `agent.run.requested` (subscriber in `services/crewai-worker/main.py`).
- Target canonical: `agent.run.requested.{agent_id}` with a wildcard consumer.
- Artifact jobs are already namespaced (`artifact.job.<job_type>.requested`).
- The attachment pipeline uses typed subjects (`attachment.created.{type}`, `attachment.parsed.{type}`).

## Ingest/Parser Placement Note
- On NODE1: currently **not active** in compose.
- The flow remains architecturally defined and must be bound to a concrete deploy location (node/host/manifest) before being marked DEPLOYED.

## Source pointers
- `services/crewai-worker/main.py`
- `services/router/main.py`
- `services/artifact-registry/app/main.py`
- `services/ingest-service/main.py`
- `services/parser-pipeline/main.py`
- `docs/NATS_SUBJECT_MAP.md`
47 docs/architecture_inventory/04_RUNTIME_AND_DEPLOYMENT.md Normal file
@@ -0,0 +1,47 @@

# Runtime and Deployment

## Authoritative Compose Policy (Canonical)
Authoritative configuration is **per-node manifests**.
- NODE1: `docker-compose.node1.yml`
- NODE3: `docker-compose.node3.yml`
- Staging: `docker-compose.staging.yml` (+ override)

`docker-compose.yml` is non-authoritative for production drift checks (local/legacy/node2-like context).

## Drift-Check Policy
The drift check runs per node and compares:
- service list / images / tags
- ports / volumes / networks
- env vars (non-secret subset)
- healthcheck definitions

Recommended structure:
- `ops/compose/production/` for canonical links/copies
- `ops/drift-check.sh` with a `NODE_ROLE=node1|node3|staging` resolver
- a timer/cron per node (or a central orchestrator via SSH)
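
A minimal sketch of the `NODE_ROLE` resolver (file names are from this repo; the comparison shown is a simplified declared-vs-running services diff, not the full field-by-field check listed above):

```bash
# Sketch: resolve the authoritative compose file from NODE_ROLE.
compose_for_role() {
  case "$1" in
    node1)   echo "docker-compose.node1.yml" ;;
    node3)   echo "docker-compose.node3.yml" ;;
    staging) echo "docker-compose.staging.yml" ;;
    *)       echo "unknown NODE_ROLE: $1" >&2; return 1 ;;
  esac
}

drift_check() {
  local f
  f="$(compose_for_role "${NODE_ROLE:?set NODE_ROLE=node1|node3|staging}")"
  # Simplified comparison: declared services vs running services.
  if diff <(docker compose -p microdao-daarion -f "$f" config --services | sort) \
          <(docker compose -p microdao-daarion -f "$f" ps --services --status running | sort); then
    echo "DRIFT_CHECK: OK"
  else
    echo "DRIFT_CHECK: DRIFT"
    return 1
  fi
}
```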

## Proxy Ownership Policy
- Exactly one edge proxy owns `80/443` in production.
- The second proxy must be disabled or internal-only (`127.0.0.1` / private network).
- Current repo evidence: an nginx edge config exists (`ops/nginx/node1-api.conf`), Caddy exists for the integration UI use case (`infra/compose/Caddyfile`), and runtime docs describe the conflict history.

## Node Runtime Notes
- NODE1: full primary stack and data layer in `docker-compose.node1.yml`.
- NODE3: GPU edge services with a dependency on NODE1 NATS/S3 endpoints.
- Staging: separate internal network and an override that removes most host-exposed ports.

## Quickstart (Operational)
1. Select the node role and authoritative compose file(s).
2. Ensure the required network exists (`dagi-network` for NODE1/NODE3 external mode).
3. Start the infra core, then app services, per the node compose.
4. Run per-node health and drift checks.
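
The four steps can be sketched for NODE1 (the infra-core service list is assumed from the service catalog in this inventory; verify against the actual compose file before use):

```bash
#!/usr/bin/env bash
# Sketch: NODE1 quickstart following the four steps above.
set -euo pipefail

# Pure helper: infra-core services (assumed from the NODE1 service catalog).
infra_core_services() {
  echo "nats dagi-postgres qdrant neo4j redis minio"
}

quickstart_node1() {
  cd /opt/microdao-daarion
  docker network inspect dagi-network >/dev/null 2>&1 \
    || docker network create dagi-network                        # step 2
  # shellcheck disable=SC2046
  docker compose -p microdao-daarion -f docker-compose.node1.yml \
    up -d $(infra_core_services)                                 # step 3: infra first
  docker compose -p microdao-daarion -f docker-compose.node1.yml up -d
  ./ops/drift-check.sh                                           # step 4
}

if [[ "${RUN_LIVE:-0}" == "1" ]]; then quickstart_node1; fi
```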

## Source pointers
- `docker-compose.node1.yml`
- `docker-compose.node3.yml`
- `docker-compose.staging.yml`
- `docker-compose.staging.override.yml`
- `docker-compose.yml`
- `ops/nginx/node1-api.conf`
- `infra/compose/Caddyfile`
- `docs/NODA1-MEMORY-RUNBOOK.md`
54 docs/architecture_inventory/05_SECURITY_AND_ACCESS.md Normal file
@@ -0,0 +1,54 @@

# Security and Access

## Secrets Handling (Redacted)
- Secrets are loaded from `.env`, `.env.local`, service `.env` files, and compose environment blocks.
- Sensitive values were detected in tracked files; this inventory redacts all such values as `<REDACTED>`.
- Example secret-bearing keys (redacted): `*_TOKEN`, `*_API_KEY`, `POSTGRES_PASSWORD`, `JWT_SECRET`, `MINIO_*`, `NATS_URL` credentials.

## AuthN/AuthZ
- Internal service auth patterns exist (`service_auth.py` modules, JWT-related env in staging).
- Tool-level authorization is a per-agent allowlist in `services/router/agent_tools_config.py`.
- Policy/control-plane endpoints are defined in `services/control-plane/main.py` (`/policy`, `/quotas`, `/config`), but service deployment is environment-dependent.

## NATS Access Controls
- `nats/nats.conf` defines accounts and publish/subscribe permissions (`router`, `worker`, `gateway`, `memory`, `system`).
- The security hardening doc flags pending actions (e.g., rotate defaults, enforce the config at runtime).

## Network/Firewall Hardening
- Firewall script: `ops/hardening/apply-node1-firewall.sh`.
- Fail2ban nginx jails: `ops/hardening/fail2ban-nginx.conf`.
- The nginx edge config includes rate limiting and connection limiting.

## Privacy / Data Governance
- Privacy and retention docs are present: `docs/PRIVACY_GATE.md`, `docs/DATA_RETENTION_POLICY.md`, `docs/MEMORY_API_POLICY.md`.
- The memory schema includes PII/consent/account-linking structures (`migrations/046`, `049`, `052`).
- The KYC schema stores attestation status and explicitly avoids raw PII fields.

## E2EE / Threat Model References
- Security architecture references are present in docs and consolidated runtime snapshots; no complete formal threat model file with that exact title was found in the active root docs.

## Redaction Register (locations)
- `.env`
- `.env.example`
- `.env.local`
- `docker-compose.node1.yml`
- `docker-compose.staging.yml`
- `docker-compose.staging.override.yml`
- `docker-compose.backups.yml`
- `services/memory-service/.env`
- `services/market-data-service/.env`
- `services/ai-security-agent/.env.example`

## Source pointers
- `nats/nats.conf`
- `services/router/agent_tools_config.py`
- `services/control-plane/main.py`
- `ops/nginx/node1-api.conf`
- `ops/hardening/apply-node1-firewall.sh`
- `ops/hardening/fail2ban-nginx.conf`
- `docs/SECURITY_HARDENING_SUMMARY.md`
- `docs/PRIVACY_GATE.md`
- `docs/DATA_RETENTION_POLICY.md`
- `migrations/046_memory_service_full_schema.sql`
- `migrations/049_memory_v3_human_memory_model.sql`
- `migrations/052_account_linking_schema.sql`
46 docs/architecture_inventory/06_OBSERVABILITY_AND_BACKUPS.md Normal file
@@ -0,0 +1,46 @@

# Observability and Backups

## Observability Stack
- Prometheus config: `monitoring/prometheus/prometheus.yml`.
- Scrapes: prometheus itself, `agent-e2e-prober`, `gateway`, `router`, `qdrant`, `grafana`.
- Alert rules: `monitoring/prometheus/rules/node1.rules.yml`.
- Grafana provisioning and dashboards:
  - datasources: `monitoring/grafana/provisioning/datasources/prometheus.yml`
  - dashboards: `monitoring/grafana/dashboards/*.json`
  - alerting: `monitoring/grafana/provisioning/alerting/alerts.yml`
- Loki/OTel/Tempo/Jaeger: no active compose evidence in this repo's current manifests.

## Service-Level Telemetry
- The router exposes `/metrics` (`services/router/main.py`).
- The gateway exposes a metrics endpoint (compose monitors `/metrics`).
- The SenpAI consumer has Prometheus metrics in code (`senpai_nats_connected`, reconnect counters).
- The prober exports metrics on `9108`.

## Backup and DR
### Data backups
- Scheduled Postgres backup container: `docker-compose.backups.yml` (`SCHEDULE: @every 6h`, keep days/weeks/months).
- Full backup script: `scripts/backup/backup_all.sh` (Postgres dump + Qdrant snapshots + Neo4j dump + metadata file).
- Restore validation script: `scripts/restore/restore_test.sh`.

### Documentation backups
- `scripts/docs/docs_backup.sh` creates timestamped archives with retention rotation.
- `scripts/docs/install_local_cron.sh` installs a locally managed cron block for docs maintenance.

## DR Readiness Notes
- Backup script metadata and the restore script provide reproducible path checks.
- The compose-based backup path uses the host bind `/opt/backups/postgres:/backups` (a host-level storage requirement).
- Runbooks report a prior backup-image version mismatch; compose currently pins the backup image to `:16`.

## Source pointers
- `monitoring/prometheus/prometheus.yml`
- `monitoring/prometheus/rules/node1.rules.yml`
- `monitoring/grafana/provisioning/datasources/prometheus.yml`
- `monitoring/grafana/provisioning/alerting/alerts.yml`
- `monitoring/grafana/dashboards/nats_memory.json`
- `docker-compose.backups.yml`
- `scripts/backup/backup_all.sh`
- `scripts/restore/restore_test.sh`
- `scripts/docs/docs_backup.sh`
- `scripts/docs/install_local_cron.sh`
- `docs/NODA1-MEMORY-RUNBOOK.md`
- `docs/NODA1-TECHBORGS-PATCHES.md`
@@ -0,0 +1,28 @@

# Open Questions and Assumptions

## Decisions Confirmed in This Session
1. Production authority is per-node compose manifests (NODE1/NODE3/Staging), not a merged universal compose.
2. `ingest-service`, `parser-pipeline`, `control-plane` are currently not active in the NODE1 runtime.
3. The canonical run subject policy is namespaced: `agent.run.requested.{agent_id}` with a wildcard subscriber strategy.
4. Production must have a single edge proxy owner for 80/443.

## Remaining Open Items
1. The exact deployment location (host/node/manifest) for `ingest-service`, `parser-pipeline`, `control-plane` when they become DEPLOYED.
2. The final production proxy owner choice (nginx or Caddy) for the canonical runtime profile.
3. The timeline/PR sequence for completing the NATS subject migration and the legacy deprecation cutoff.
4. Services referenced but missing in the default compose (`node-registry`, `city-service`, `agent-cabinet-service`, `devtools-backend`, `orchestrator`, `microdao`): same-repo future work vs external repo ownership.

## Verification Artifacts Required to Close Items
- A node-specific compose/service unit link proving the deployment location.
- A runtime `docker ps` + health snapshot for each node.
- NATS stream/consumer metrics showing legacy subject traffic = 0 before deprecation.
- Proxy port ownership verification (`80/443`) on the runtime host.
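
The last verification above can be sketched with `ss` (output field layout varies by distro, so treat the parsing as an assumption):

```bash
#!/usr/bin/env bash
# Sketch: verify that exactly one process owns each edge port (80/443).
set -euo pipefail

# Pure helper: count non-empty lines in a string.
count_nonempty() {
  awk 'NF' <<<"$1" | grep -c . || true
}

owners_of_port() {   # prints unique process names listening on a port
  ss -ltnp "sport = :$1" 2>/dev/null \
    | grep -o 'users:(("[^"]*"' | cut -d'"' -f2 | sort -u || true
}

check_edge_ports() {
  local port n
  for port in 80 443; do
    n="$(count_nonempty "$(owners_of_port "$port")")"
    echo "port $port: $n owning process(es)"   # expect exactly 1 in production
  done
}

if [[ "${RUN_LIVE:-0}" == "1" ]]; then check_edge_ports; fi
```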

## Source pointers
- `docs/SESSION_STARTER.md`
- `docker-compose.node1.yml`
- `docker-compose.node3.yml`
- `docker-compose.staging.yml`
- `services/crewai-worker/main.py`
- `ops/nginx/node1-api.conf`
- `infra/compose/Caddyfile`
26 docs/architecture_inventory/inventory_datastores.md Normal file
@@ -0,0 +1,26 @@
|
||||
# Datastore Inventory
|
||||
|
||||
| db | purpose | location | backup | retention | source |
|
||||
|---|---|---|---|---|---|
|
||||
| PostgreSQL (`dagi-postgres`) | transactional data, memory tables, rag db | container + volume (`postgres_data_node1` / staging variant) | compose backup container + `scripts/backup/backup_all.sh` | backup keep days/weeks/months in backup compose | `docker-compose.node1.yml`, `docker-compose.backups.yml`, `scripts/backup/backup_all.sh` |
|
||||
| Qdrant | vector embeddings/index | container + volume (`qdrant-data-node1`) | snapshot copy in backup script | not explicit in policy files | `docker-compose.node1.yml`, `scripts/backup/backup_all.sh` |
|
||||
| Neo4j | graph memory/relations | container + volumes (`neo4j-data-node1`, logs) | `neo4j-admin dump` in backup script | not explicit in policy files | `docker-compose.node1.yml`, `scripts/backup/backup_all.sh` |
|
||||
| Redis | cache/session/state | container + volume (`redis-data-node1`) | no dedicated backup script in repo | not explicit | `docker-compose.node1.yml` |
|
||||
| MinIO | object artifacts (ppt/pdf/doc assets, comfy outputs) | container + volume (`minio-data-node1`) | implicit host-level volume backup only (no dedicated script found) | not explicit | `docker-compose.node1.yml`, `services/artifact-registry/app/main.py` |
|
||||
| SQLite (`market-data-service`) | local market event storage | `/data/market_data.db` inside market-data volume | host volume snapshot/manual | not explicit | `services/market-data-service/app/config.py`, `docker-compose.node1.yml` |
|
||||
| NATS JetStream store | event stream persistence | `/data/jetstream` volume | via NATS data volume backup strategy (manual/host-level) | stream-level policies in docs | `nats/nats.conf`, `docker-compose.node1.yml` |
|
||||
|
||||
## Schema / Collections Pointers

- Memory schemas/migrations:
  - `services/memory-service/migrations/001_create_memory_tables.sql`
  - `migrations/046_memory_service_full_schema.sql`
  - `migrations/049_memory_v3_human_memory_model.sql`
  - `migrations/052_account_linking_schema.sql`
- Market data SQLAlchemy models: `services/market-data-service/app/db/schema.py`
- RAG API/index model contracts: `services/rag-service/app/models.py`
- Qdrant collection access via memory/RAG service code:
  - `services/memory-service/app/vector_store.py`
  - `services/rag-service/app/document_store.py`

## Redaction Note

Any credentials discovered in datastore DSNs or password fields are represented as `<REDACTED>` in inventory narratives.
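The `<REDACTED>` convention above can be applied mechanically when copying DSNs into inventory narratives. A minimal `sed` sketch; the DSN below is a made-up example for illustration, not a real credential from this stack:

```shell
#!/usr/bin/env sh
# Redact the password portion of a URL-style DSN, e.g.
#   postgresql://user:secret@host:5432/db -> postgresql://user:<REDACTED>@host:5432/db
# Hypothetical sample DSN (illustration only).
dsn='postgresql://dagi:supersecret@dagi-postgres:5432/memory'
echo "$dsn" | sed -E 's#(://[^:/@]+:)[^@/]+@#\1<REDACTED>@#'
# prints: postgresql://dagi:<REDACTED>@dagi-postgres:5432/memory
```

The same filter works for Redis and Neo4j URI schemes, since it keys on the `scheme://user:password@host` shape rather than any one database.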
docs/architecture_inventory/inventory_nats_topics.csv (Normal file, 18 lines)
@@ -0,0 +1,18 @@
subject,publisher(s),subscriber(s),purpose,source
agent.run.requested,router/scripts,crewai-worker (legacy),legacy run subject (to be deprecated),services/crewai-worker/main.py
agent.run.requested.{agent_id},router/producers (target),workers via wildcard subscription,canonical run subject,docs/architecture_inventory/03_DATAFLOWS.md
agent.run.requested.*,n/a (subscription pattern),crewai-worker/router workers,canonical consumer wildcard,docs/architecture_inventory/03_DATAFLOWS.md
agent.run.completed.{agent_id},crewai-worker,router/gateway/ops,run completion,services/crewai-worker/main.py
agent.run.failed.dlq,crewai-worker/scripts,dlq replay tooling,failed run dead-letter,services/crewai-worker/main.py
attachment.created.{type},ingest-service,parser-pipeline,file ingestion events,services/ingest-service/main.py
attachment.parsed.{type},parser-pipeline,index/memory workflows,parsed artifact events,services/parser-pipeline/main.py
artifact.job.render_pptx.requested,artifact-registry,render-pptx-worker,presentation render job,services/artifact-registry/app/main.py
artifact.job.render_pdf.requested,artifact-registry,render-pdf-worker,pdf render job,services/render-pdf-worker/app/main.py
artifact.job.index_doc.requested,artifact-registry,index-doc-worker,document index job,services/index-doc-worker/app/main.py
md.events.{type}.{symbol},market-data-service,senpai-md-consumer,market data normalized events,services/market-data-service/app/consumers/nats_output.py
senpai.features.{symbol},senpai-md-consumer,router/tool clients,feature stream,services/senpai-md-consumer/senpai/md_consumer/publisher.py
senpai.signals.{symbol},senpai-md-consumer,strategy/alert consumers,signal stream,services/senpai-md-consumer/senpai/md_consumer/publisher.py
senpai.alerts,senpai-md-consumer,ops/strategy consumers,alert stream,services/senpai-md-consumer/senpai/md_consumer/publisher.py
agent.invoke.comfy,router/agents,comfy-agent,comfy invocation bus,services/comfy-agent/app/config.py
comfy.request.image,router/agents,comfy-agent,direct image generation bus,services/comfy-agent/app/config.py
comfy.request.video,router/agents,comfy-agent,direct video generation bus,services/comfy-agent/app/config.py
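The `agent.run.requested.*` row relies on NATS subject-matching semantics: `*` matches exactly one dot-separated token, and `>` matches one or more trailing tokens. A small self-contained shell sketch of those rules (this is an illustration of the semantics, not the NATS client itself):

```shell
#!/usr/bin/env sh
# Token-wise NATS-style subject matching:
#   '*' matches exactly one token, '>' matches the rest of the subject.
subject_matches() {
  pattern=$1 subject=$2
  while [ -n "$pattern" ] || [ -n "$subject" ]; do
    p=${pattern%%.*}; s=${subject%%.*}
    case $p in
      '>') return 0 ;;                 # '>' swallows all remaining tokens
      '*') [ -n "$s" ] || return 1 ;;  # '*' requires exactly one token here
      *)   [ "$p" = "$s" ] || return 1 ;;
    esac
    # Advance to the next token (or empty when exhausted).
    [ "$pattern" = "$p" ] && pattern= || pattern=${pattern#*.}
    [ "$subject" = "$s" ] && subject= || subject=${subject#*.}
  done
  return 0
}

subject_matches 'agent.run.requested.*' 'agent.run.requested.helion' && echo match
subject_matches 'agent.run.requested.*' 'agent.run.completed.helion' || echo no-match
# prints: match / no-match
```

This is why the canonical per-agent subjects `agent.run.requested.{agent_id}` all land on one wildcard consumer, while `agent.run.completed.*` traffic does not.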
docs/architecture_inventory/inventory_ports.csv (Normal file, 40 lines)
@@ -0,0 +1,40 @@
port,protocol,service,exposure public/internal,node/env,source
80,tcp,nginx api edge,public,node1,ops/nginx/node1-api.conf
443,tcp,nginx tls edge (commented config),public,node1,ops/nginx/node1-api.conf
8080,tcp,nginx admin local bind,internal,node1,ops/nginx/node1-api.conf
9102,tcp,router,public,node1,docker-compose.node1.yml
9300,tcp,gateway,public,node1,docker-compose.node1.yml
8890,tcp,swapper-service,public,node1,docker-compose.node1.yml
8891,tcp,swapper-service metrics,public,node1,docker-compose.node1.yml
11235,tcp,crawl4ai,public,node1,docker-compose.node1.yml
4222,tcp,nats client port,public,node1,docker-compose.node1.yml
9000,tcp,minio api,public,node1,docker-compose.node1.yml
9001,tcp,minio console,public,node1,docker-compose.node1.yml
9220,tcp,artifact-registry,public,node1,docker-compose.node1.yml
9500,tcp,rag-service,public,node1,docker-compose.node1.yml
9210,tcp,brand-registry,public,node1,docker-compose.node1.yml
9211,tcp,brand-intake,public,node1,docker-compose.node1.yml
9212,tcp,presentation-renderer,public,node1,docker-compose.node1.yml
8000,tcp,memory-service,public,node1,docker-compose.node1.yml
5432,tcp,dagi-postgres,public,node1,docker-compose.node1.yml
6333,tcp,qdrant http,public,node1,docker-compose.node1.yml
6334,tcp,qdrant grpc,public,node1,docker-compose.node1.yml
7474,tcp,neo4j http,public,node1,docker-compose.node1.yml
7687,tcp,neo4j bolt,public,node1,docker-compose.node1.yml
6379,tcp,redis,public,node1,docker-compose.node1.yml
8001,tcp,vision-encoder,public,node1,docker-compose.node1.yml
9108,tcp,agent-e2e-prober metrics,public,node1,docker-compose.node1.yml
8893,tcp,market-data-service host map,public,node1,docker-compose.node1.yml
8892,tcp,senpai-md-consumer,public,node1,docker-compose.node1.yml
9102,tcp,dagi-router-node3,public,node3,docker-compose.node3.yml
8890,tcp,swapper-service-node3,public,node3,docker-compose.node3.yml
8891,tcp,swapper-service-node3 metrics,public,node3,docker-compose.node3.yml
8880,tcp,comfy-agent,public,node3,docker-compose.node3.yml
4222,tcp,nats,internal,staging,docker-compose.staging.yml
8000,tcp,memory-service (expose),internal,staging,docker-compose.staging.yml
8081,tcp,thingsboard,public,integration,infra/compose/docker-compose.yml
1883,tcp,mqtt,public,integration,infra/compose/docker-compose.yml
9001,tcp,mqtt websocket,public,integration,infra/compose/docker-compose.yml
8222,tcp,nats monitor,public,integration,infra/compose/docker-compose.yml
8800,tcp,integration-service,public,integration,infra/compose/docker-compose.yml
18080,tcp,farmos ui caddy bind localhost,internal,integration,infra/compose/docker-compose.yml
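Several host ports recur across environments in this CSV (for example 9102 on node1 and node3, 9001 for MinIO console and MQTT websocket). Reuse across nodes is harmless, but it is worth flagging mechanically before consolidating compose files onto one host. An `awk` sketch over a short inline excerpt of the CSV (the excerpt is hand-picked for illustration, not the full 40-line file):

```shell
#!/usr/bin/env sh
# List ports that appear in more than one row of the inventory,
# together with the node/env column for each occurrence.
cat > /tmp/ports_excerpt.csv <<'CSV'
port,protocol,service,exposure public/internal,node/env,source
9102,tcp,router,public,node1,docker-compose.node1.yml
9300,tcp,gateway,public,node1,docker-compose.node1.yml
9001,tcp,minio console,public,node1,docker-compose.node1.yml
9102,tcp,dagi-router-node3,public,node3,docker-compose.node3.yml
9001,tcp,mqtt websocket,public,integration,infra/compose/docker-compose.yml
CSV
awk -F, 'NR>1 { n[$1]++; who[$1]=who[$1] $5 " " }
         END { for (p in n) if (n[p]>1) print p": "who[p] }' /tmp/ports_excerpt.csv
```

Run against the real `inventory_ports.csv`, the same one-liner gives a quick collision report per planned host.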
docs/architecture_inventory/inventory_services.csv (Normal file, 37 lines)
@@ -0,0 +1,37 @@
service,type,runtime,port(s),deps,image,compose_file,node/env
router,api,python-fastapi,"8000 (host 9102)","memory-service;swapper-service;vision-encoder;nats",build:./services/router,docker-compose.node1.yml,node1
swapper-service,api,python-fastapi+cuda,"8890;8891(metrics)","crawl4ai;gpu",build:./services/swapper-service,docker-compose.node1.yml,node1
crawl4ai,api,container,"11235",none,unclecode/crawl4ai@sha256:4d8b...,docker-compose.node1.yml,node1
gateway,bff,python-fastapi,"9300","router;memory-service",build:./gateway-bot,docker-compose.node1.yml,node1
nats,bus,nats-jetstream,"4222","nats-data-node1",nats:2.10-alpine,docker-compose.node1.yml,node1
minio,object-store,minio,"9000;9001",minio-data-node1,minio/minio@sha256:14cea...,docker-compose.node1.yml,node1
artifact-registry,api,python-fastapi,"9220","dagi-postgres;minio;nats",build:./services/artifact-registry,docker-compose.node1.yml,node1
rag-service,api,python-fastapi,"9500",dagi-postgres,build:./services/rag-service,docker-compose.node1.yml,node1
render-pptx-worker,worker,nodejs,internal,"nats;artifact-registry;minio",build:./services/render-pptx-worker,docker-compose.node1.yml,node1
render-pdf-worker,worker,python,internal,"nats;artifact-registry;minio",build:./services/render-pdf-worker,docker-compose.node1.yml,node1
index-doc-worker,worker,python,internal,"nats;artifact-registry;rag-service;minio",build:./services/index-doc-worker,docker-compose.node1.yml,node1
brand-registry,api,python-fastapi,"9210",brand-registry-data-node1,build:./services/brand-registry,docker-compose.node1.yml,node1
brand-intake,api,python-fastapi,"9211","brand-registry;BrandMap.yaml",build:./services/brand-intake,docker-compose.node1.yml,node1
presentation-renderer,api,python-fastapi,"9212","brand-registry;presentation-data",build:./services/presentation-renderer,docker-compose.node1.yml,node1
memory-service,api,python-fastapi,"8000","dagi-postgres;qdrant",build:./services/memory-service,docker-compose.node1.yml,node1
dagi-postgres,datastore,postgres,"5432",postgres_data_node1,pgvector/pgvector:pg16,docker-compose.node1.yml,node1
qdrant,datastore,qdrant,"6333;6334",qdrant-data-node1,qdrant/qdrant:v1.7.4,docker-compose.node1.yml,node1
neo4j,datastore,neo4j,"7474;7687","neo4j-data;neo4j-logs",neo4j:5.15-community,docker-compose.node1.yml,node1
redis,cache,redis,"6379",redis-data-node1,redis:7-alpine,docker-compose.node1.yml,node1
vision-encoder,api,python-fastapi,"8001",qdrant,build:./services/vision-encoder,docker-compose.node1.yml,node1
agent-e2e-prober,ops,python,"9108",gateway,build:./services/agent-e2e-prober,docker-compose.node1.yml,node1
market-data-service,streaming,python,"8891 (host 8893)",nats,build:./services/market-data-service,docker-compose.node1.yml,node1
senpai-md-consumer,streaming,python,"8892","nats;market-data-service",build:./services/senpai-md-consumer,docker-compose.node1.yml,node1
postgres-backup,backup,container,internal,dagi-postgres,prodrigestivill/postgres-backup-local:16,docker-compose.backups.yml,node1
dagi-router-node3,api,python-fastapi,"8000 (host 9102)",remote-nats,build:./services/router,docker-compose.node3.yml,node3
swapper-service-node3,api,python-fastapi+cuda,"8890;8891",host-ollama,build:./services/swapper-service,docker-compose.node3.yml,node3
comfy-agent,api+worker,python-fastapi,"8880","comfyui;nats;s3",build:./services/comfy-agent,docker-compose.node3.yml,node3
router,api,python-fastapi,internal,"memory-service;swapper;vision;nats",build:./services/router,docker-compose.staging.yml,staging
gateway,bff,python-fastapi,internal,"router;memory-service",build:./gateway-bot,docker-compose.staging.yml,staging
swapper-service,api,python-fastapi+cuda,internal,crawl4ai,build:./services/swapper-service,docker-compose.staging.yml,staging
memory-service,api,python-fastapi,"8000 (expose, internal)","qdrant;postgres",build:./services/memory-service,docker-compose.staging.yml,staging
qdrant,datastore,qdrant,internal,qdrant-data-staging,qdrant/qdrant:v1.7.4,docker-compose.staging.yml,staging
neo4j,datastore,neo4j,internal,neo4j-data-staging,neo4j:5.15-community,docker-compose.staging.yml,staging
redis,cache,redis,internal,redis-data-staging,redis:7-alpine,docker-compose.staging.yml,staging
nats,bus,nats-jetstream,internal,nats-data-staging,nats:latest,docker-compose.staging.yml,staging
control-plane,api,python-fastapi,internal,none,control-plane:latest,docker-compose.staging.yml,staging
docs/architecture_inventory/inventory_tools.json (Normal file, 137 lines)
@@ -0,0 +1,137 @@
[
  {
    "tool_name": "memory_search",
    "description": "Search memory context via Memory API",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.memory_search",
    "authz_scope": "agent allowlist via services/router/agent_tools_config.py",
    "timeout": "httpx client default 60s",
    "retries": "none explicit",
    "service_owner": "router"
  },
  {
    "tool_name": "graph_query",
    "description": "Graph query tool for Neo4j-backed knowledge",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.graph_query",
    "authz_scope": "agent allowlist",
    "timeout": "bounded HTTP/driver call",
    "retries": "none explicit",
    "service_owner": "router"
  },
  {
    "tool_name": "web_search",
    "description": "Web search tool",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.web_search",
    "authz_scope": "agent allowlist",
    "timeout": "service/http defaults",
    "retries": "none explicit",
    "service_owner": "router"
  },
  {
    "tool_name": "web_extract",
    "description": "Extract text from URL",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.web_extract",
    "authz_scope": "agent allowlist",
    "timeout": "service/http defaults",
    "retries": "none explicit",
    "service_owner": "router"
  },
  {
    "tool_name": "crawl4ai_scrape",
    "description": "Deep scraping via Crawl4AI",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.crawl4ai_scrape",
    "authz_scope": "agent allowlist",
    "timeout": "external call timeout",
    "retries": "none explicit",
    "service_owner": "router"
  },
  {
    "tool_name": "remember_fact",
    "description": "Persist user fact into memory",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.remember_fact",
    "authz_scope": "agent allowlist",
    "timeout": "service/http defaults",
    "retries": "none explicit",
    "service_owner": "router"
  },
  {
    "tool_name": "image_generate",
    "description": "Image generation through swapper",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.image_generate",
    "authz_scope": "agent allowlist",
    "timeout": "120-300s paths",
    "retries": "soft retry gate in router",
    "service_owner": "router"
  },
  {
    "tool_name": "comfy_generate_image",
    "description": "ComfyUI image generation via comfy-agent",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.comfy_generate_image",
    "authz_scope": "agent specialized allowlist",
    "timeout": "timeout_s default 180",
    "retries": "poll loop until timeout",
    "service_owner": "router/comfy-agent"
  },
  {
    "tool_name": "comfy_generate_video",
    "description": "ComfyUI video generation via comfy-agent",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.comfy_generate_video",
    "authz_scope": "agent specialized allowlist",
    "timeout": "timeout_s default 300",
    "retries": "poll loop until timeout",
    "service_owner": "router/comfy-agent"
  },
  {
    "tool_name": "tts_speak",
    "description": "Text-to-speech",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.tts_speak",
    "authz_scope": "agent allowlist",
    "timeout": "http bounded",
    "retries": "none explicit",
    "service_owner": "router/swapper"
  },
  {
    "tool_name": "presentation_create",
    "description": "Create presentation artifact job",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.presentation_create",
    "authz_scope": "agent allowlist",
    "timeout": "async job submit",
    "retries": "job-level",
    "service_owner": "router/artifact-registry"
  },
  {
    "tool_name": "presentation_status",
    "description": "Check presentation job status",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.presentation_status",
    "authz_scope": "agent allowlist",
    "timeout": "short HTTP",
    "retries": "none explicit",
    "service_owner": "router/artifact-registry"
  },
  {
    "tool_name": "presentation_download",
    "description": "Download presentation artifact",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.presentation_download",
    "authz_scope": "agent allowlist",
    "timeout": "short HTTP",
    "retries": "none explicit",
    "service_owner": "router/artifact-registry"
  },
  {
    "tool_name": "file_tool",
    "description": "Action-based file generation/update/export tool",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.file_tool",
    "authz_scope": "agent allowlist",
    "timeout": "action-specific; djvu/pdf ops bounded by timeout_sec",
    "retries": "none explicit",
    "service_owner": "router"
  },
  {
    "tool_name": "market_data",
    "description": "Read market price/features/signals for SenpAI stack",
    "schema_path": "services/router/tool_manager.py#TOOL_DEFINITIONS.market_data",
    "authz_scope": "specialized (senpai + configured agents)",
    "timeout": "8-10s HTTP defaults in implementation",
    "retries": "none explicit",
    "service_owner": "router/market-data-service/senpai-md-consumer"
  }
]
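Every entry in the inventory carries the same field set, so a cheap structural check is to count key occurrences before committing edits to the file. A shell sketch over a two-entry excerpt (the excerpt is trimmed for illustration; field names match the real file):

```shell
#!/usr/bin/env sh
# Sanity-check an inventory_tools.json excerpt: every tool object
# should carry both a tool_name and a service_owner key.
cat > /tmp/tools_excerpt.json <<'JSON'
[
  {"tool_name": "memory_search", "service_owner": "router"},
  {"tool_name": "market_data", "service_owner": "router/market-data-service/senpai-md-consumer"}
]
JSON
names=$(grep -c '"tool_name"' /tmp/tools_excerpt.json)
owners=$(grep -c '"service_owner"' /tmp/tools_excerpt.json)
[ "$names" -eq "$owners" ] && echo "OK: $names tools, all owned"
```

A count mismatch points at an entry missing its `service_owner` (or a stray duplicate key) without needing a JSON parser on the box.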
@@ -1 +1 @@
-/Users/apple/github-projects/microdao-daarion/docs/backups/docs_backup_20260216-022549.tar.gz
+/Users/apple/github-projects/microdao-daarion/docs/backups/docs_backup_20260218-091700.tar.gz
docs/consolidation/INTEGRATIONS_STATUS_20260216-021520.md (Normal file, 63 lines)
@@ -0,0 +1,63 @@
# Integrations Bootstrap Status

Generated: 2026-02-16 10:15:20 UTC

Repo:

## Gitea

- **gitea_http**: OK — http_code=200 (http://127.0.0.1:3000)
- **gitea_git_remote**: OK — gitea http://localhost:3000/daarion-admin/microdao-daarion.git;

## GitHub

- **gh_auth**: OK — IvanTytar (keyring)
- **github_git_remote**: OK — origin git@github.com:IvanTytar/microdao-daarion.git;

## Jupyter

- **jupyter_cli**: DEGRADED — jupyter not found in PATH
- **notebooks_dir**: OK — /Users/apple/notebooks (ipynb_count=7)

## Pieces

- **pieces_extension**: OK — cursor extensions matched=1
- **pieces_data_dir**: INFO — /Users/apple/Library/Application Support/Pieces not found

## Next

- Run docs sync dry-run (captured output):

      Detected remotes:
        github: origin
        gitea : gitea
      Targets: github,gitea
      Commit message: docs: sync consolidation and session starter
      [dry-run] git add docs/SESSION_STARTER.md docs/consolidation/README.md docs/consolidation/SOURCES.md docs/consolidation/docs_registry_curated.csv PROJECT-MASTER-INDEX.md
      [dry-run] git diff --cached --name-status
      [dry-run] git push origin feat/md-pipeline-senpai-consumer
      [dry-run] git push gitea feat/md-pipeline-senpai-consumer
      Done.

- Apply sync to remotes (captured output):

      Detected remotes:
        github: origin
        gitea : gitea
      Targets: github,gitea
      Commit message: docs: sync consolidation and session starter
      [feat/md-pipeline-senpai-consumer de3bd8c] docs: sync consolidation and session starter
      Committer: Apple <apple@MacBook-Pro.local>
      Your name and email address were configured automatically based
      on your username and hostname. Please check that they are accurate.
      You can suppress this message by setting them explicitly:

          git config --global user.name "Your Name"
          git config --global user.email you@example.com

      After doing this, you may fix the identity used for this commit with:

          git commit --amend --reset-author

      5 files changed, 225 insertions(+), 3 deletions(-)
      create mode 100644 docs/SESSION_STARTER.md
      create mode 100644 docs/consolidation/README.md
      create mode 100644 docs/consolidation/SOURCES.md
      create mode 100644 docs/consolidation/docs_registry_curated.csv
      Done.
docs/consolidation/INTEGRATIONS_STATUS_20260216-021559.md (Normal file, 59 lines)
@@ -0,0 +1,59 @@
# Integrations Bootstrap Status

Generated: 2026-02-16 10:15:59 UTC

Repo:

## Gitea

- **gitea_http**: OK — http_code=200 (http://127.0.0.1:3000)
- **gitea_git_remote**: OK — gitea http://localhost:3000/daarion-admin/microdao-daarion.git;

## GitHub

- **gh_auth**: OK — IvanTytar (keyring)
- **github_git_remote**: OK — origin git@github.com:IvanTytar/microdao-daarion.git;

## Jupyter

- **jupyter_cli**: DEGRADED — jupyter not found in PATH
- **notebooks_dir**: OK — /Users/apple/notebooks (ipynb_count=7)

## Pieces

- **pieces_extension**: OK — cursor extensions matched=1
- **pieces_data_dir**: INFO — /Users/apple/Library/Application Support/Pieces not found

## Next

- Run docs sync dry-run (captured output):

      Detected remotes:
        github: origin
        gitea : gitea
      Targets: github,gitea
      Commit message: docs: sync consolidation and session starter
      [dry-run] git add docs/SESSION_STARTER.md docs/consolidation/README.md docs/consolidation/SOURCES.md docs/consolidation/docs_registry_curated.csv docs/consolidation/INTEGRATIONS_STATUS_LATEST.md PROJECT-MASTER-INDEX.md
      [dry-run] git diff --cached --name-status
      [dry-run] git push origin feat/md-pipeline-senpai-consumer
      [dry-run] git push gitea feat/md-pipeline-senpai-consumer
      Done.

- Apply sync to remotes (captured output):

      Detected remotes:
        github: origin
        gitea : gitea
      Targets: github,gitea
      Commit message: docs: sync consolidation and session starter
      [feat/md-pipeline-senpai-consumer b962d4a] docs: sync consolidation and session starter
      Committer: Apple <apple@MacBook-Pro.local>
      Your name and email address were configured automatically based
      on your username and hostname. Please check that they are accurate.
      You can suppress this message by setting them explicitly:

          git config --global user.name "Your Name"
          git config --global user.email you@example.com

      After doing this, you may fix the identity used for this commit with:

          git commit --amend --reset-author

      1 file changed, 63 insertions(+)
      create mode 100644 docs/consolidation/INTEGRATIONS_STATUS_LATEST.md
      Done.
docs/consolidation/INTEGRATIONS_STATUS_20260216-021638.md (Normal file, 31 lines)
@@ -0,0 +1,31 @@
# Integrations Bootstrap Status

Generated: 2026-02-16 10:16:38 UTC

Repo: /Users/apple/github-projects/microdao-daarion

## Gitea

- **gitea_http**: OK — http_code=200 (http://127.0.0.1:3000)
- **gitea_git_remote**: OK — gitea http://localhost:3000/daarion-admin/microdao-daarion.git;

## GitHub

- **gh_auth**: OK — IvanTytar (keyring)
- **github_git_remote**: OK — origin git@github.com:IvanTytar/microdao-daarion.git;

## Jupyter

- **jupyter_cli**: DEGRADED — jupyter not found in PATH
- **notebooks_dir**: OK — /Users/apple/notebooks (ipynb_count=7)

## Pieces

- **pieces_extension**: OK — cursor extensions matched=1
- **pieces_data_dir**: INFO — /Users/apple/Library/Application Support/Pieces not found

## Next

- Run docs sync dry-run: bash scripts/docs/docs_sync.sh --dry-run
- Apply sync to remotes: bash scripts/docs/docs_sync.sh --apply --targets github,gitea
docs/consolidation/INTEGRATIONS_STATUS_20260216-021658.md (Normal file, 31 lines)
@@ -0,0 +1,31 @@
# Integrations Bootstrap Status

Generated: 2026-02-16 10:16:58 UTC

Repo: /Users/apple/github-projects/microdao-daarion

## Gitea

- **gitea_http**: OK — http_code=200 (http://127.0.0.1:3000)
- **gitea_git_remote**: OK — gitea http://localhost:3000/daarion-admin/microdao-daarion.git;

## GitHub

- **gh_auth**: OK — IvanTytar (keyring)
- **github_git_remote**: OK — origin git@github.com:IvanTytar/microdao-daarion.git;

## Jupyter

- **jupyter_cli**: DEGRADED — jupyter not found in PATH
- **notebooks_dir**: OK — /Users/apple/notebooks (ipynb_count=7)

## Pieces

- **pieces_extension**: OK — cursor extensions matched=1
- **pieces_data_dir**: INFO — /Users/apple/Library/Application Support/Pieces not found

## Next

- Run docs sync dry-run: bash scripts/docs/docs_sync.sh --dry-run
- Apply sync to remotes: bash scripts/docs/docs_sync.sh --apply --targets github,gitea
docs/consolidation/INTEGRATIONS_STATUS_20260216-021721.md (Normal file, 31 lines)
@@ -0,0 +1,31 @@
# Integrations Bootstrap Status

Generated: 2026-02-16 10:17:21 UTC

Repo: /Users/apple/github-projects/microdao-daarion

## Gitea

- **gitea_http**: OK - http_code=200 (http://127.0.0.1:3000)
- **gitea_git_remote**: OK - gitea http://localhost:3000/daarion-admin/microdao-daarion.git;

## GitHub

- **gh_auth**: OK - IvanTytar (keyring)
- **github_git_remote**: OK - origin git@github.com:IvanTytar/microdao-daarion.git;

## Jupyter

- **jupyter_cli**: DEGRADED - jupyter not found in PATH
- **notebooks_dir**: OK - /Users/apple/notebooks (ipynb_count=7)

## Pieces

- **pieces_extension**: OK - cursor extensions matched=1
- **pieces_data_dir**: INFO - /Users/apple/Library/Application Support/Pieces not found

## Next

- Run docs sync dry-run: bash scripts/docs/docs_sync.sh --dry-run
- Apply sync to remotes: bash scripts/docs/docs_sync.sh --apply --targets github,gitea
docs/consolidation/INTEGRATIONS_STATUS_20260216-022015.md (Normal file, 31 lines)
@@ -0,0 +1,31 @@
# Integrations Bootstrap Status

Generated: 2026-02-16 10:20:15 UTC

Repo: /Users/apple/github-projects/microdao-daarion

## Gitea

- **gitea_http**: OK - http_code=200 (http://127.0.0.1:3000)
- **gitea_git_remote**: OK - gitea http://localhost:3000/daarion-admin/microdao-daarion.git;

## GitHub

- **gh_auth**: OK - IvanTytar (keyring)
- **github_git_remote**: OK - origin git@github.com:IvanTytar/microdao-daarion.git;

## Jupyter

- **jupyter_cli**: DEGRADED - jupyter not found in PATH
- **notebooks_dir**: OK - /Users/apple/notebooks (ipynb_count=7)

## Pieces

- **pieces_extension**: OK - cursor extensions matched=1
- **pieces_data_dir**: INFO - /Users/apple/Library/Application Support/Pieces not found

## Next

- Run docs sync dry-run: bash scripts/docs/docs_sync.sh --dry-run
- Apply sync to remotes: bash scripts/docs/docs_sync.sh --apply --targets github,gitea
docs/consolidation/INTEGRATIONS_STATUS_20260216-022019.md (Normal file, 31 lines)
@@ -0,0 +1,31 @@
# Integrations Bootstrap Status

Generated: 2026-02-16 10:20:19 UTC

Repo: /Users/apple/github-projects/microdao-daarion

## Gitea

- **gitea_http**: OK - http_code=200 (http://127.0.0.1:3000)
- **gitea_git_remote**: OK - gitea http://localhost:3000/daarion-admin/microdao-daarion.git;

## GitHub

- **gh_auth**: OK - IvanTytar (keyring)
- **github_git_remote**: OK - origin git@github.com:IvanTytar/microdao-daarion.git;

## Jupyter

- **jupyter_cli**: DEGRADED - jupyter not found in PATH
- **notebooks_dir**: OK - /Users/apple/notebooks (ipynb_count=7)

## Pieces

- **pieces_extension**: OK - cursor extensions matched=1
- **pieces_data_dir**: INFO - /Users/apple/Library/Application Support/Pieces not found

## Next

- Run docs sync dry-run: bash scripts/docs/docs_sync.sh --dry-run
- Apply sync to remotes: bash scripts/docs/docs_sync.sh --apply --targets github,gitea
docs/consolidation/INTEGRATIONS_STATUS_20260216-022424.md (Normal file, 31 lines)
@@ -0,0 +1,31 @@
# Integrations Bootstrap Status

Generated: 2026-02-16 10:24:24 UTC

Repo: /Users/apple/github-projects/microdao-daarion

## Gitea

- **gitea_http**: OK - http_code=200 (http://127.0.0.1:3000)
- **gitea_git_remote**: OK - gitea http://localhost:3000/daarion-admin/microdao-daarion.git;

## GitHub

- **gh_auth**: OK - IvanTytar (keyring)
- **github_git_remote**: OK - origin git@github.com:IvanTytar/microdao-daarion.git;

## Jupyter

- **jupyter_cli**: DEGRADED - jupyter not found in PATH
- **notebooks_dir**: OK - /Users/apple/notebooks (ipynb_count=7)

## Pieces

- **pieces_extension**: OK - cursor extensions matched=1
- **pieces_data_dir**: INFO - /Users/apple/Library/Application Support/Pieces not found

## Next

- Run docs sync dry-run: bash scripts/docs/docs_sync.sh --dry-run
- Apply sync to remotes: bash scripts/docs/docs_sync.sh --apply --targets github,gitea
docs/consolidation/INTEGRATIONS_STATUS_20260216-022549.md (Normal file, 31 lines)
@@ -0,0 +1,31 @@
# Integrations Bootstrap Status

Generated: 2026-02-16 10:25:49 UTC

Repo: /Users/apple/github-projects/microdao-daarion

## Gitea

- **gitea_http**: OK - http_code=200 (http://127.0.0.1:3000)
- **gitea_git_remote**: OK - gitea http://localhost:3000/daarion-admin/microdao-daarion.git;

## GitHub

- **gh_auth**: OK - IvanTytar (keyring)
- **github_git_remote**: OK - origin git@github.com:IvanTytar/microdao-daarion.git;

## Jupyter

- **jupyter_cli**: DEGRADED - jupyter not found in PATH
- **notebooks_dir**: OK - /Users/apple/notebooks (ipynb_count=7)

## Pieces

- **pieces_extension**: OK - cursor extensions matched=1
- **pieces_data_dir**: INFO - /Users/apple/Library/Application Support/Pieces not found

## Next

- Run docs sync dry-run: bash scripts/docs/docs_sync.sh --dry-run
- Apply sync to remotes: bash scripts/docs/docs_sync.sh --apply --targets github,gitea
docs/consolidation/INTEGRATIONS_STATUS_20260216-023603.md (Normal file, 31 lines)
@@ -0,0 +1,31 @@
# Integrations Bootstrap Status

Generated: 2026-02-16 10:36:03 UTC

Repo: /Users/apple/github-projects/microdao-daarion

## Gitea

- **gitea_http**: OK - http_code=200 (http://127.0.0.1:3000)
- **gitea_git_remote**: OK - gitea http://localhost:3000/daarion-admin/microdao-daarion.git;

## GitHub

- **gh_auth**: OK - IvanTytar (keyring)
- **github_git_remote**: OK - origin git@github.com:IvanTytar/microdao-daarion.git;

## Jupyter

- **jupyter_cli**: DEGRADED - jupyter not found in PATH
- **notebooks_dir**: OK - /Users/apple/notebooks (ipynb_count=7)

## Pieces

- **pieces_extension**: OK - cursor extensions matched=1
- **pieces_data_dir**: INFO - /Users/apple/Library/Application Support/Pieces not found

## Next

- Run docs sync dry-run: bash scripts/docs/docs_sync.sh --dry-run
- Apply sync to remotes: bash scripts/docs/docs_sync.sh --apply --targets github,gitea
@@ -1,6 +1,6 @@
 # Integrations Bootstrap Status

-Generated: 2026-02-16 10:25:49 UTC
+Generated: 2026-02-16 10:36:03 UTC

 Repo: /Users/apple/github-projects/microdao-daarion