From Docker Compose to Kubernetes: Migrating a Real Homelab Stack
A practical account of migrating 15+ self-hosted services to K3s — including AMD GPU passthrough, WiFi camera routing, custom monitoring, and a feudal Japan dashboard.
Introduction
Most Kubernetes tutorials start with a todo app and end before things get complicated. This isn't that article.
This is the story of migrating a real homelab — 15+ production-grade services including AI camera detection with AMD ROCm GPU acceleration, Home Assistant with hardware integrations, Jellyfin with hardware video transcoding, and a custom Discord alerting pipeline — from Docker Compose to K3s running on a single machine.
Everything in this post happened on real hardware. Every problem described is a problem that actually happened. Every fix is the fix that actually worked.
The Stack Before Migration
The homelab ran on a single Ubuntu machine (manupa-hn-wx9x, 192.168.1.9) using Docker Compose. Services included:
| Category | Services |
|---|---|
| Smart Home | Home Assistant, NanoMQ (MQTT broker) |
| Surveillance | Frigate (AMD ROCm GPU object detection), frigate-telegram bot |
| Media | Jellyfin |
| Monitoring | Netdata, Uptime Kuma |
| Tools | Stirling PDF, tldraw, FileBrowser, qBittorrent |
| Archiving | Archive Team Warrior |
| Privacy | Snowflake Proxy (Tor bridge) |
| AI | Open WebUI |
Why migrate? The honest answer: Docker Compose works fine until you want to scale a single service independently, pin resource limits per container, get structured health alerting, or reproduce the entire stack from code in under 10 minutes. Kubernetes gives you all of that.
Why K3s
Full Kubernetes (kubeadm) adds significant operational overhead for a homelab. K3s is Rancher's lightweight distribution — a single binary, installs as a systemd service, and ships with:
- Traefik — ingress controller (takes port 80)
- local-path provisioner — dynamic PVC storage in /var/lib/rancher/k3s/storage/
- CoreDNS — service discovery
- Flannel — pod networking (VXLAN overlay, 10.42.0.0/16)
- Built-in containerd — no separate Docker daemon needed
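If one of the bundled components clashes with something already on the machine (here, Traefik claims port 80 while Nginx stays in front of the stack), K3s can be told to skip it via its server config file. A minimal sketch — disabling Traefik is an option, not what this migration did:

```
# /etc/rancher/k3s/config.yaml — read by the K3s service on start
disable:
  - traefik
```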
Installation:
```shell
curl -sfL https://get.k3s.io | sh -
```
One command. 30 seconds. Production-grade Kubernetes cluster on your desk.
Architecture
```
manupa-hn-wx9x (192.168.1.9)
┌─────────────────────────────────────┐
│ homelab namespace                   │
│ ├── Frigate (GPU, hostNetwork)      │
│ ├── Jellyfin (GPU, hostNetwork)     │
│ ├── Home Assistant (hostNetwork)    │
│ ├── Netdata (DaemonSet)             │
│ ├── Uptime Kuma, FileBrowser        │
│ ├── qBittorrent, Stirling PDF       │
│ ├── tldraw, Archive Warrior         │
│ ├── Snowflake Proxy                 │
│ ├── Open WebUI, Open Terminal       │
│ └── Homepage Dashboard              │
│                                     │
│ monitoring namespace                │
│ ├── Prometheus + Alertmanager       │
│ ├── Grafana                         │
│ ├── Node Exporter (DaemonSet)       │
│ └── Kube State Metrics              │
│                                     │
│ Still on Docker                     │
│ ├── frigate-telegram                │
│ └── NanoMQ (MQTT)                   │
└─────────────────────────────────────┘
                  │
        Nginx reverse proxy
    (original ports → NodePorts)
```
All manifests managed with Kustomize (kubectl apply -k), stored in /home/manupa/Docker/k8s/.
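A kustomization.yaml at the root of that directory ties the manifests together so the whole stack applies in one command. A minimal sketch — the per-service file names here are illustrative, not the actual repo layout:

```
# /home/manupa/Docker/k8s/kustomization.yaml (illustrative layout)
namespace: homelab
resources:
  - jellyfin/deployment.yaml
  - jellyfin/pv.yaml
  - frigate/deployment.yaml
```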
Storage Strategy
The simplest approach for a single-node cluster with existing data: hostPath PersistentVolumes pointing directly at existing directories.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-config-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: [ReadWriteOnce]
  hostPath:
    path: /home/manupa/Docker/jellyfin/config
```
No data migration. No downtime. Existing files immediately available to pods. For new services with no existing data, the K3s local-path StorageClass handles dynamic provisioning automatically.
The trade-off: hostPath volumes are node-specific. When you add a second node, pods using hostPath must be pinned to the node where the data lives via nodeSelector. This is manageable but plan for shared storage (NFS or Longhorn) if you want true pod mobility later.
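A pod can't reference a PersistentVolume directly; each hostPath PV needs a matching PersistentVolumeClaim. A sketch of the claim that would pair with the Jellyfin PV above (the claim name is illustrative):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-config-pvc
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""          # empty string: bind to the pre-created PV, skip dynamic provisioning
  volumeName: jellyfin-config-pv
  resources:
    requests:
      storage: 10Gi
```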
The Tricky Bits
1. AMD GPU Passthrough for Frigate (ROCm)
Frigate's AMD ROCm GPU access requires three things in the pod spec: a privileged security context, ROCm-related environment variables, and the host's GPU device nodes mounted into the container:

```yaml
securityContext:
  privileged: true
env:
  - name: LIBVA_DRIVER_NAME
    value: "radeonsi"
  - name: HSA_ENABLE_SDMA
    value: "0"
volumeMounts:
  - name: dev-kfd
    mountPath: /dev/kfd
  - name: dev-dri
    mountPath: /dev/dri
volumes:
  - name: dev-kfd
    hostPath:
      path: /dev/kfd
  - name: dev-dri
    hostPath:
      path: /dev/dri
```
2. WiFi Hotspot vs. Flannel CIDR Conflict
K3s Flannel uses 10.42.0.0/16 by default. The WiFi hotspot on this machine also auto-assigned itself 10.42.x.x. Every pod lost network access after K3s installed.
Fix: Change the hotspot subnet to something that doesn't conflict:
```shell
sudo nmcli connection modify Hotspot ipv4.addresses 10.50.0.1/24
```
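This kind of overlap is easy to check before assigning a subnet. A minimal sketch using Python's standard ipaddress module:

```python
import ipaddress

def subnets_overlap(a: str, b: str) -> bool:
    """Return True if the two CIDR ranges share any addresses."""
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# The original hotspot range collided with Flannel's pod CIDR:
print(subnets_overlap("10.42.0.0/16", "10.42.5.0/24"))  # True
# The fixed hotspot range does not:
print(subnets_overlap("10.42.0.0/16", "10.50.0.0/24"))  # False
```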
3. Frigate Cameras on the Hotspot (10.50.0.x)
The WiFi cameras live on the hotspot subnet (10.50.0.x), which Flannel's pod network has no route to — so a pod-networked Frigate couldn't reach its camera streams.
Fix: hostNetwork: true on the Frigate pod. The pod uses the host's network stack directly, which has a route to 10.50.0.x via the hotspot interface.

```yaml
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
```
4. /dev/shm Running Out (Frigate)
Frigate's shared memory usage hit 71% of the 500Mi memory-backed emptyDir that backed /dev/shm. Swapping it for a hostPath volume removed the ceiling:

```yaml
# After — disk-backed, no size limit
volumes:
  - name: dshm
    hostPath:
      path: /tmp/frigate-shm
      type: DirectoryOrCreate
```
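The volume still has to be mounted at /dev/shm inside the container for Frigate to use it. A sketch of the matching mount (the container name is an assumption):

```
containers:
  - name: frigate
    volumeMounts:
      - name: dshm
        mountPath: /dev/shm
```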
Monitoring and Alerting
Stack
```
Node Exporter (DaemonSet) ──┐
Kube State Metrics          ├──▶ Prometheus ──▶ Alertmanager ──▶ Discord
K8s API / cAdvisor        ──┘        │
                                     ▼
                                  Grafana
```
Discord Integration
```yaml
receivers:
  - name: discord
    discord_configs:
      - webhook_url: 'https://discord.com/api/webhooks/...'
        title: >-
          {{ if eq .Status "firing" }}🔥{{ else }}✅{{ end }}
          [{{ .Status | toUpper }}] {{ .GroupLabels.alertname }}
        send_resolved: true
```
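Alertmanager only routes what Prometheus fires, so the other half of the pipeline is a rules file. As an illustration (the group and alert names here are assumptions, not the stack's actual rules), a rule that fires when any scrape target goes down for five minutes:

```
groups:
  - name: homelab
    rules:
      - alert: TargetDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.job }} on {{ $labels.instance }} is down"
```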
The Homepage Dashboard
Homepage (gethomepage/homepage) serves as the central entry point. The default dark theme was replaced with a custom feudal Japan / sumi-e (墨絵) aesthetic:
- Background: deep ink-wash gradient with ambient radial glows in bamboo green and sakura rose
- Cards: dark lacquer panels with barely-visible gold borders, 2px lift on hover
- Group headers: antique gold, ultra-light weight, torii-bar underline
- Typography: system serif stack (Hiragino Mincho ProN → Yu Mincho → Georgia)
- Scrollbar: 3px thin, gold-tinted
Multi-Node Expansion
Joining a second machine is one command on the new node, pointed at the existing server (the join token lives in /var/lib/rancher/k3s/server/node-token on the server):

```shell
curl -sfL https://get.k3s.io | \
  K3S_URL=https://192.168.1.9:6443 \
  K3S_TOKEN=<token> \
  sh -
```
The node labelling pattern:

```shell
kubectl label node manupa-hn-wx9x role=primary gpu=amd
kubectl label node manupa role=worker
```
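With those labels in place, the GPU- and hostPath-bound workloads can be pinned to the primary node via a nodeSelector in the pod template. A minimal sketch:

```
# In the pod template of GPU / hostPath-bound Deployments:
spec:
  template:
    spec:
      nodeSelector:
        gpu: amd
        role: primary
```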
Lessons Learned
- Plan your storage before you plan your services: If multi-node is in your future, set up NFS or Longhorn first.
- hostNetwork is not a bad word: In a homelab, some services genuinely need it (mDNS/SSDP, WiFi hotspots).
- GPU access in Kubernetes is not scary: a privileged securityContext, a couple of environment variables, and the host's /dev/kfd and /dev/dri mounted in. It works much the same as in Docker.
- Alertmanager is better than dashboards for homelab ops: You want Discord to tell you when something breaks at 2am.
- YAML sprawl is real but manageable: Kustomize keeps it organized.
- K3s is genuinely production-grade: For a homelab, you give up almost nothing vs. full Kubernetes.
Closing
The migration took a weekend of focused work. The result is a homelab that is documented, version-controlled, observable, scalable, and recoverable.
If you're running a homelab on Docker Compose and wondering whether Kubernetes is worth the learning curve — it is. Start with K3s. Start with one service. The rest follows naturally.
Stack: K3s v1.34.5 · Ubuntu 25.10 · AMD ROCm · Flannel CNI · Traefik · Kustomize
Services: Home Assistant · Frigate · Jellyfin · Prometheus · Grafana · Alertmanager · Netdata · Uptime Kuma · FileBrowser · Stirling PDF · tldraw · qBittorrent · Archive Warrior · Snowflake Proxy · Open WebUI · Open Terminal · Homepage