## Runtime and Deployment
### Authoritative Compose Policy (Canonical)

Authoritative configuration lives in per-node manifests:

- NODE1: `docker-compose.node1.yml`
- NODE3: `docker-compose.node3.yml`
- Staging: `docker-compose.staging.yml` (+ override)

`docker-compose.yml` is non-authoritative for production drift checks; it serves a local/legacy/node2-like context.
### Drift-Check Policy

The drift check runs per node and compares:

- service list / images / tags
- ports / volumes / networks
- env vars (non-secret subset)
- healthcheck definitions

Recommended structure:

- `ops/compose/production/` for canonical links/copies
- `ops/drift-check.sh` with a `NODE_ROLE=node1|node3|staging` resolver
- a timer/cron per node (or a central orchestrator via SSH)
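The `NODE_ROLE` resolver recommended above can be sketched as a small POSIX-sh function. Only the compose file names come from this document; the drift-comparison commands in the comment (snapshot path, `image:` field projection) are illustrative assumptions, not the actual contents of `ops/drift-check.sh`.

```shell
#!/bin/sh
# Hypothetical sketch of the NODE_ROLE resolver inside ops/drift-check.sh.
set -eu

resolve_compose_files() {
  case "$1" in
    node1)   echo "docker-compose.node1.yml" ;;
    node3)   echo "docker-compose.node3.yml" ;;
    staging) echo "docker-compose.staging.yml docker-compose.staging.override.yml" ;;
    *)       echo "unknown NODE_ROLE: $1" >&2; return 2 ;;
  esac
}

# A drift check could then render the canonical model and diff drift-relevant
# fields (e.g. image tags) against a snapshot taken at deploy time, roughly:
#   for f in $(resolve_compose_files "$NODE_ROLE"); do args="$args -f $f"; done
#   docker compose $args config | grep -E '^ *image:' | sort > current.txt
#   diff -u snapshot.txt current.txt
resolve_compose_files "${NODE_ROLE:-node1}"
```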
### Proxy Ownership Policy

- Exactly one edge proxy owns `80/443` in production.
- The second proxy must be disabled or internal-only (`127.0.0.1` / private network).
- Current repo evidence: an nginx edge config exists (`ops/nginx/node1-api.conf`), Caddy exists for the integration UI use case (`infra/compose/Caddyfile`), and the runtime docs describe the conflict history.
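The single-owner rule can be verified mechanically. The helper below is a hypothetical sketch, not an existing repo script: fed `docker ps --format '{{.Names}} {{.Ports}}'` output, it prints the containers that publish 80/443 on all interfaces, so in production exactly one name (the nginx edge) should come back; a loopback-only proxy is correctly ignored.

```shell
# Hypothetical helper: report containers publishing host ports 80/443
# on all interfaces (0.0.0.0 or ::). Loopback binds are not flagged.
edge_port_owners() {
  grep -E '(0\.0\.0\.0|::):(80|443)->' | awk '{print $1}' | sort -u
}

# Example with sample `docker ps` output: nginx owns 80/443, Caddy is
# loopback-only and therefore not reported.
printf '%s\n' \
  'nginx-edge 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp' \
  'caddy 127.0.0.1:8080->80/tcp' \
  | edge_port_owners
# → nginx-edge
```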
### Node Runtime Notes

- NODE1: full primary stack and data layer in `docker-compose.node1.yml`.
- NODE3: GPU edge services that depend on the NODE1 NATS/S3 endpoints.
- Staging: separate internal network, plus an override that removes most host-exposed ports.
### Quickstart (Operational)

- Select the node role and its authoritative compose file(s).
- Ensure the required network exists (`dagi-network` for NODE1/NODE3 external mode).
- Start the infra core, then the app services, per the node's compose file.
- Run the per-node health and drift checks.
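The quickstart steps can be sketched as a dry-run function that prints the command sequence for a role instead of executing it. The compose file names and `dagi-network` come from this document; the exact flags and the `ops/drift-check.sh` invocation are assumptions for illustration.

```shell
# Hypothetical dry-run quickstart: prints the command sequence for a node role.
quickstart() {
  role="$1"
  case "$role" in
    node1)   files="-f docker-compose.node1.yml" ;;
    node3)   files="-f docker-compose.node3.yml" ;;
    staging) files="-f docker-compose.staging.yml -f docker-compose.staging.override.yml" ;;
    *)       echo "unknown role: $role" >&2; return 2 ;;
  esac
  # NODE1/NODE3 external mode expects the shared network to pre-exist;
  # staging uses its own internal network from the override.
  [ "$role" = staging ] || echo "docker network create dagi-network"
  echo "docker compose $files up -d"
  echo "NODE_ROLE=$role ops/drift-check.sh"
}

quickstart node3
# → docker network create dagi-network
# → docker compose -f docker-compose.node3.yml up -d
# → NODE_ROLE=node3 ops/drift-check.sh
```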
### Source pointers

- `docker-compose.node1.yml`
- `docker-compose.node3.yml`
- `docker-compose.staging.yml`
- `docker-compose.staging.override.yml`
- `docker-compose.yml`
- `ops/nginx/node1-api.conf`
- `infra/compose/Caddyfile`
- `docs/NODA1-MEMORY-RUNBOOK.md`