Added NODE3 - AI/ML Workstation Specification

Hardware:
- CPU: AMD Ryzen Threadripper PRO 5975WX (32 cores / 64 threads, 3.6 GHz base clock)
- RAM: 128GB DDR4
- GPU: NVIDIA GeForce RTX 3090 24GB GDDR6X
  - 10496 CUDA cores
  - CUDA 13.0, Driver 580.95.05
- Storage: Samsung SSD 990 PRO 4TB NVMe
  - Root: 100GB (27% used)
  - Available for expansion: 3.5TB

System:
- Hostname: llm80-che-1-1
- IP: 80.77.35.151:33147
- OS: Ubuntu 24.04.3 LTS (Noble Numbat)
- Container Runtime: MicroK8s + containerd
- Uptime: 24/7

Security Status: ✅ CLEAN (verified 2026-01-09)
- No crypto miners detected
- 0 zombie processes
- CPU load: 0.17 (very low)
- GPU utilization: 0% (ready for workloads)

Services Running:
- Port 3000 - unknown service (needs investigation)
- Port 8080 - unknown service (needs investigation)
- Port 11434 - Ollama (localhost only)
- Ports 27017/27019 - MongoDB (localhost only)
- Port 16443 - Kubernetes API
- Ports 10248-10259, 25000 - Kubernetes services

Recommended Use Cases:
- 🤖 Large LLM inference (Llama 70B, Qwen 72B, Mixtral 8x22B)
- 🧠 Model training and fine-tuning
- 🎨 Stable Diffusion XL image generation
- 🔬 AI/ML research and experimentation
- 🚀 Kubernetes-based AI service orchestration

Files Updated:
- INFRASTRUCTURE.md v2.4.0
- docs/infrastructure_quick_ref.ipynb v2.3.0

NODE3 is now the most powerful node in the infrastructure:
- Most CPU cores: 32c/64t (vs 16c M4 Max)
- Most RAM: 128GB (vs 64GB)
- Dedicated GPU: RTX 3090 24GB VRAM
- Largest storage: 4TB NVMe (vs 2TB)

Co-Authored-By: Warp <agent@warp.dev>
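The two listeners flagged above as needing investigation (ports 3000 and 8080) can be triaged before workloads are scheduled. A minimal sketch, assuming it is run locally on NODE3; the port-to-service map is copied from the spec and is illustrative, not a live inventory:

```python
import socket

# Ports listed in the spec above; 3000 and 8080 are the unidentified ones.
PORTS = {
    3000: "unknown",
    8080: "unknown",
    11434: "Ollama",
    16443: "Kubernetes API",
    27017: "MongoDB",
}

def is_listening(port: int, host: str = "127.0.0.1", timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in sorted(PORTS):
        state = "LISTENING" if is_listening(port) else "closed"
        print(f"port {port:5d} ({PORTS[port]}): {state}")
```

This only confirms which ports are open; mapping an open port to its owning process (e.g. `sudo ss -tlnp` on the host) is what actually identifies the unknown services.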