microdao-daarion/docs/architecture_inventory/04_RUNTIME_AND_DEPLOYMENT.md

Runtime and Deployment

Authoritative Compose Policy (Canonical)

Authoritative configuration lives in the per-node compose manifests:

  • NODE1: docker-compose.node1.yml
  • NODE3: docker-compose.node3.yml
  • Staging: docker-compose.staging.yml (+ override)

docker-compose.yml is non-authoritative for production drift checks; it covers a local/legacy/node2-like context.

Drift-Check Policy

The drift check runs per node, comparing the authoritative manifest against the deployed state across:

  • service list / images / tags
  • ports / volumes / networks
  • env vars (non-secret subset)
  • healthcheck definitions
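The comparison core can be sketched as an order-insensitive diff, assuming declared and observed state have each been flattened to one `service image` line per file (e.g. rendered via `docker compose config` and captured via `docker ps --format '{{.Names}} {{.Image}}'`); the helper name is illustrative, not from the repo:

```shell
#!/usr/bin/env sh
# drift_diff: order-insensitive diff of declared vs observed state.
# $1: file rendered from the authoritative per-node manifest
# $2: file captured from the running node
drift_diff() {
  d=$(mktemp); o=$(mktemp)
  sort "$1" >"$d"
  sort "$2" >"$o"
  if diff -u "$d" "$o"; then rc=0; else rc=$?; fi   # non-zero exit signals drift
  rm -f "$d" "$o"
  return "$rc"
}
```

The same flattening works for ports, volumes, env vars, and healthchecks: render each field set to sorted lines, then diff.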

Recommended structure:

  • ops/compose/production/ for canonical links/copies
  • ops/drift-check.sh with NODE_ROLE=node1|node3|staging resolver
  • timer/cron per node (or central orchestrator via SSH)
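The NODE_ROLE resolver in ops/drift-check.sh could start from a mapping like this minimal sketch (the function name is illustrative; the file names follow the compose policy above):

```shell
#!/usr/bin/env sh
# Map NODE_ROLE=node1|node3|staging to its authoritative compose file(s).
compose_files_for_role() {
  case "$1" in
    node1)   echo "docker-compose.node1.yml" ;;
    node3)   echo "docker-compose.node3.yml" ;;
    staging) echo "docker-compose.staging.yml docker-compose.staging.override.yml" ;;
    *)       echo "unknown NODE_ROLE: $1" >&2; return 1 ;;
  esac
}

# ops/drift-check.sh would then render each resolved file, e.g.:
#   for f in $(compose_files_for_role "$NODE_ROLE"); do
#     docker compose -f "$f" config
#   done
```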

Proxy Ownership Policy

  • Exactly one edge proxy owns 80/443 in production.
  • Any second proxy must be disabled or kept internal-only (bound to 127.0.0.1 or a private network).
  • Current repo evidence: an nginx edge config exists (ops/nginx/node1-api.conf), Caddy is present for the integration-UI use case (infra/compose/Caddyfile), and runtime docs describe a history of port conflicts.
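One way to audit the single-owner rule is to inspect listening sockets. This sketch parses `ss -H -ltnp`-style output; the helper name, and treating loopback binds as the allowed internal-only case, are assumptions:

```shell
#!/usr/bin/env sh
# list_public_edge_owners: read `ss -H -ltnp` lines on stdin and print
# the unique process names bound to 80/443 on non-loopback addresses.
# The policy holds when exactly one process name is printed.
list_public_edge_owners() {
  awk '$4 ~ /:(80|443)$/ && $4 !~ /^(127\.0\.0\.1|\[::1\])/ {
         if (match($NF, /"[^"]+"/))
           print substr($NF, RSTART + 1, RLENGTH - 2)
       }' | sort -u
}

# Usage on a node:
#   ss -H -ltnp | list_public_edge_owners
```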

Node Runtime Notes

  • NODE1: full primary stack and data layer, defined in docker-compose.node1.yml.
  • NODE3: GPU edge services that depend on the NODE1 NATS/S3 endpoints.
  • Staging: a separate internal network plus an override that removes most host-exposed ports.

Quickstart (Operational)

  1. Select the node role and its authoritative compose file(s).
  2. Ensure the required external network exists (dagi-network for NODE1/NODE3 external mode).
  3. Start the infra core, then the app services, per the node's compose file.
  4. Run per-node health and drift checks.
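The four steps could be scripted roughly as below for NODE1, as a hedged sketch: the drift-check path follows the recommended structure above, and the docker steps are guarded so the script is a no-op where docker is absent:

```shell
#!/usr/bin/env sh
set -eu

# 1. Node role and authoritative manifest (NODE1 shown).
COMPOSE="docker compose -f docker-compose.node1.yml"
NETWORK="dagi-network"

if command -v docker >/dev/null 2>&1; then
  # 2. Ensure the required external network exists.
  docker network inspect "$NETWORK" >/dev/null 2>&1 \
    || docker network create "$NETWORK"

  # 3. Start services; the per-node manifest is assumed to encode
  #    infra-core -> app ordering via depends_on/healthchecks.
  $COMPOSE up -d

  # 4. Per-node health view, then the drift check.
  $COMPOSE ps
  NODE_ROLE=node1 ./ops/drift-check.sh || echo "drift detected" >&2
fi
```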

Source pointers

  • docker-compose.node1.yml
  • docker-compose.node3.yml
  • docker-compose.staging.yml
  • docker-compose.staging.override.yml
  • docker-compose.yml
  • ops/nginx/node1-api.conf
  • infra/compose/Caddyfile
  • docs/NODA1-MEMORY-RUNBOOK.md