See Through Walls with a $9 Microcontroller



Deploying WiFi DensePose on Kubernetes.

"How I deployed a real-time WiFi-based human sensing system on a homelab K3s cluster with a $9 ESP32-S3, live pose estimation, and RTSP camera fusion."

WiFi signals pass through walls. When a person moves — or even breathes — those signals scatter differently. What if you could read that scattering pattern and reconstruct what happened on the other side?

That's exactly what RuView does. Built on research from Carnegie Mellon's DensePose From WiFi paper, RuView is an open-source edge AI system that turns commodity WiFi signals into real-time human pose estimation, vital sign monitoring, and presence detection — all without a single pixel of video.

I took it a step further: deployed it on Kubernetes, wired up a live ESP32-S3 sensor, and fused the WiFi signal data with an RTSP camera feed for dual-modal pose estimation. Here's how.


The Hardware: $9 and a WiFi Router

The entire sensing hardware cost me $9:

  • 1x ESP32-S3 ($9) — a dual-core microcontroller with WiFi that exposes Channel State Information (CSI). CSI gives you per-subcarrier amplitude and phase data — 56+ data points per WiFi frame, 20 times per second. That's the raw material for sensing.

Standard consumer WiFi only gives you RSSI (a single signal strength number). CSI is like going from a thermometer to a thermal camera — instead of one number, you get a detailed map of how the signal is being affected by everything in the room.
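To make the thermometer-vs-thermal-camera analogy concrete, here is a minimal Python sketch of what the pipeline consumes: each subcarrier arrives as a complex number whose magnitude and angle are the amplitude and phase. The three hard-coded samples are invented for illustration; a real frame carries 56+ of them, and real ESP32 CSI arrives as packed integer pairs rather than Python complex numbers.

```python
import cmath

# Hypothetical CSI frame: one complex sample per subcarrier.
# (Three samples stand in for the 56+ a real frame carries.)
csi = [complex(3, 4), complex(0, 2), complex(-1, -1)]

amplitudes = [abs(s) for s in csi]      # per-subcarrier signal strength
phases = [cmath.phase(s) for s in csi]  # per-subcarrier phase shift

print(amplitudes)  # [5.0, 2.0, 1.4142135623730951]
```

RSSI would collapse all of this into a single number; CSI keeps the full per-frequency picture, which is what makes sensing possible.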

Note: I also had three ESP32-C3s sitting around, but those are single-core RISC-V chips that can't handle the DSP pipeline. The S3's dual-core Xtensa is required — one core captures CSI interrupts while the other runs signal processing.


The Software Stack

RuView's Rust sensing server processes the signal chain:

terminal
ESP32 CSI (UDP) → Hampel outlier rejection → SpotFi phase correction
    → Fresnel zone modeling → FFT vital sign extraction
    → AI backbone (RuVector attention networks)
    → 17 body keypoints + breathing rate + heart rate + presence

At 54,000 frames/sec throughput in Rust, this is fast enough to process live data from multiple sensors with headroom to spare. The server exposes a REST API, WebSocket stream, and a full browser UI.
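The first stage of that chain, Hampel outlier rejection, is simple enough to sketch in a few lines of Python. This is a generic textbook Hampel filter, not RuView's Rust implementation; the window size and threshold are illustrative defaults.

```python
import statistics

def hampel(samples, window=5, n_sigmas=3.0):
    """Replace outliers with the local median (Hampel identifier)."""
    k = 1.4826  # scale factor: MAD -> std-dev for Gaussian data
    out = list(samples)
    half = window // 2
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        win = samples[lo:hi]
        med = statistics.median(win)
        mad = statistics.median(abs(x - med) for x in win)
        if mad and abs(samples[i] - med) > n_sigmas * k * mad:
            out[i] = med  # spike: overwrite with the local median
    return out

# A CSI amplitude burst gets flattened; normal samples pass through
print(hampel([1.0, 1.1, 0.9, 50.0, 1.0, 1.2, 0.8]))
```

Running it on the sample above replaces the 50.0 spike with the local median while leaving the ordinary jitter untouched, which is exactly what you want before phase correction: one corrupted frame shouldn't ripple through the whole pipeline.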


Flashing the ESP32-S3

The firmware ships as pre-built binaries in GitHub Releases. Flashing takes 30 seconds:

Bash
pip install esptool
python -m esptool --chip esp32s3 --port COM7 --baud 460800 \
  write_flash --flash_mode dio --flash_size 8MB \
  0x0 bootloader.bin \
  0x8000 partition-table.bin \
  0xd000 ota_data_initial.bin \
  0x10000 esp32-csi-node.bin

Then provision it with your WiFi credentials and the IP of your server:

Bash
python provision.py --port COM7 \
  --ssid "MyWiFi" --password "secret" \
  --target-ip 192.168.1.9

The ESP32 connects to WiFi and starts streaming CSI frames over UDP to port 5005. No internet needed after provisioning — everything stays local.
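On the receiving side, the ingest boils down to a UDP socket bound to port 5005. Here is a minimal Python stand-in for that step; the real server parses a specific frame format, while this sketch just hands back raw datagrams and the sender's address.

```python
import socket

def open_csi_socket(port=5005):
    """Bind a UDP socket on all interfaces, like the sensing server does."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    return sock

def read_frame(sock):
    """Block until one datagram (one CSI frame) arrives."""
    data, addr = sock.recvfrom(4096)
    return addr[0], data  # (sender IP, raw frame bytes)
```

Each WiFi frame's CSI report fits in a single datagram, so there's no stream reassembly to worry about; a lost packet is just a lost frame, which the downstream filters tolerate.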


Containerizing for Kubernetes

The project includes a multi-stage Dockerfile that compiles the Rust server and bundles the UI into a minimal Debian image:

Dockerfile
FROM rust:1.85-bookworm AS builder
WORKDIR /build
COPY rust-port/wifi-densepose-rs/ ./
RUN cargo build --release -p wifi-densepose-sensing-server \
    && strip target/release/sensing-server

FROM debian:bookworm-slim
COPY --from=builder /build/target/release/sensing-server /app/
COPY ui/ /app/ui/
EXPOSE 3000/tcp 3001/tcp 5005/udp
CMD ["/app/sensing-server", "--source", "auto", "--ui-path", "/app/ui", "--bind-addr", "0.0.0.0"]

Built and pushed to GHCR:

Bash
docker build -f docker/Dockerfile.rust -t ghcr.io/zektopic/ruview:k8s-poc .
docker push ghcr.io/zektopic/ruview:k8s-poc

The K8s Deployment

The deployment has one unusual requirement: UDP hostPort. The ESP32 sends raw CSI frames to a specific IP:port, so the pod needs to receive those packets directly on the host's network interface without kube-proxy NAT:

YAML
ports:
- containerPort: 5005
  protocol: UDP
  hostPort: 5005    # ESP32 sends directly to host

This means the pod must be pinned to a specific node (nodeName) — if it moves, the ESP32 would be sending to the wrong IP. For a homelab this is fine; for production you'd use a DaemonSet or a LoadBalancer with UDP support.

The HTTP API and WebSocket get standard NodePort services:

YAML
type: NodePort
ports:
- name: http
  port: 3000
  nodePort: 30900

An nginx reverse proxy ties it all together, handling WebSocket upgrades with proper timeout settings so the live data stream doesn't drop.
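The WebSocket-specific part of that nginx config looks roughly like this; the upstream address and location path are illustrative, not the exact values from my setup:

```nginx
location /ws {
    proxy_pass http://192.168.1.9:30901;     # WebSocket NodePort (illustrative)
    proxy_http_version 1.1;                  # required for Upgrade
    proxy_set_header Upgrade $http_upgrade;  # forward the WS handshake
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;                # don't kill quiet live streams
    proxy_send_timeout 3600s;
}
```

Without the Upgrade/Connection headers nginx speaks plain HTTP/1.0 to the upstream and the handshake fails; without the long timeouts, nginx drops the connection after its default 60s of read inactivity.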


What It Looks Like

The Observatory UI is the star — a cinematic Three.js dashboard with five holographic panels:

  • Subcarrier Manifold: live heatmap of all 56+ WiFi subcarriers, showing frequency-selective fading across the channel.
  • Vital Signs Oracle: breathing rate (6-30 BPM) & heart rate (40-120 BPM) from phase variations.
  • Presence Heatmap: room-level signal field showing where people are located.
  • Phase Constellation: complex-plane plot of CSI phase, revealing movement patterns.
  • Convergence Engine: signal processing pipeline metrics and overall health.
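The Vital Signs Oracle's core trick, picking the dominant frequency inside a physiological band, can be sketched with a naive DFT in Python. RuView does this with an FFT in Rust; this version trades speed for readability, and the synthetic "breathing" signal is invented for the demo.

```python
import math

def breathing_rate_bpm(samples, fs=20.0, lo_bpm=6, hi_bpm=30):
    """Return the BPM of the strongest frequency in the breathing band."""
    n = len(samples)
    mean = sum(samples) / n
    best_bpm, best_power = None, 0.0
    for k in range(1, n // 2):
        bpm = (k * fs / n) * 60.0
        if not (lo_bpm <= bpm <= hi_bpm):
            continue  # only search the physiological band
        re = sum((s - mean) * math.cos(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        im = sum((s - mean) * math.sin(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        power = re * re + im * im
        if power > best_power:
            best_bpm, best_power = bpm, power
    return best_bpm

# Synthetic phase trace: 0.25 Hz breathing (15 BPM), sampled at 20 Hz
sig = [math.sin(2 * math.pi * 0.25 * t / 20.0) for t in range(400)]
print(breathing_rate_bpm(sig))  # → 15.0
```

With 20 seconds of data the frequency bins are 3 BPM apart, which is why vital-sign extraction needs a sliding window of many seconds rather than instantaneous readings.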

The Pose Fusion view goes further — it overlays WiFi-derived pose estimation onto a live camera feed. I connected my RTSP camera through Frigate's go2rtc, which already handles RTSP-to-HLS transcoding. The browser loads the HLS stream alongside the CSI data, and the fusion engine cross-correlates video motion with WiFi signal changes.
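The cross-correlation step is conceptually simple: if the video's frame-to-frame motion energy and the CSI variance rise and fall together, the two modalities are watching the same activity. Here is a zero-lag Pearson-style sketch in Python; the real fusion engine presumably also searches over time lags, and the sample traces below are invented.

```python
import math

def zero_lag_correlation(a, b):
    """Pearson correlation between two equal-length motion signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)  # 1.0 = perfectly in sync

# Invented per-interval traces: video motion energy vs. CSI variance
video_motion = [0.0, 0.0, 1.0, 3.0, 2.0, 0.0]
csi_variance = [0.1, 0.1, 0.9, 2.8, 2.1, 0.2]
print(zero_lag_correlation(video_motion, csi_variance))
```

A correlation near 1.0 says the WiFi-derived pose and the camera are tracking the same person, which is what makes the overlay trustworthy.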


Real Data, Real Results

With the ESP32-S3 powered on and placed in my office, the system immediately detected:

  • Presence: true with confidence ~0.78
  • Motion level: cycling between present_moving, present_still, and active
  • Person count: 1 (estimated from CSI subcarrier patterns)
  • 64 subcarriers streaming at 20 Hz

All through the wall, with no camera in the room.

JSON Response
{
  "classification": {
    "confidence": 0.78,
    "motion_level": "present_moving",
    "presence": true
  },
  "estimated_persons": 1,
  "features": {
    "breathing_band_power": 34.27,
    "motion_band_power": 61.17,
    "spectral_power": 158.92
  }
}

Privacy by Design

This is the compelling part. There is no camera in the sensing loop. The ESP32 captures WiFi signal disturbances — amplitude and phase changes caused by human bodies scattering radio waves.

There are no images, no video frames, no biometric data stored. The "sensing" is fundamentally different from surveillance.

For applications like elderly care monitoring, hospital patient tracking, or smart building occupancy — where cameras raise serious privacy and regulatory concerns — WiFi sensing sidesteps the problem entirely.


What's Next

  • Multi-node mesh: adding 3-6 ESP32-S3 nodes for full 360-degree room coverage with multistatic fusion.
  • ESP32-C6 + mmWave: pairing the C6 with a Seeed MR60BHA2 60 GHz sensor for clinical-grade vital signs.
  • Edge WASM modules: running the 65 already-implemented edge intelligence modules directly on the ESP32 as tiny WASM binaries (fall detection, sleep monitoring) with zero cloud dependency.
  • Training pipeline: recording labeled CSI sessions to train the adaptive classifier for room-specific signal characteristics.


Try It Yourself

The fastest path to a working system:

Bash
# 1. Docker (simulated data, no hardware)
docker run -p 3000:3000 ghcr.io/zektopic/ruview:k8s-poc
# Open http://localhost:3000/ui/

# 2. With ESP32-S3 hardware (~$9)
# Flash firmware, provision WiFi, run server with --source auto

# 3. Full K8s deployment
# See the deployment guide for complete instructions

The entire system — firmware, server, UI, signal processing, neural networks — is open source under MIT. One $9 microcontroller and some WiFi signals. That's all it takes to give a room spatial awareness.

From Docker Compose to Kubernetes: Migrating a Real Homelab Stack


A practical account of migrating 15+ self-hosted services to K3s — including AMD GPU passthrough, WiFi camera routing, custom monitoring, and a feudal Japan dashboard.


Introduction

Most Kubernetes tutorials start with a todo app and end before things get complicated. This isn't that article.

This is the story of migrating a real homelab — 15+ production-grade services including AI camera detection with AMD ROCm GPU acceleration, Home Assistant with hardware integrations, Jellyfin with hardware video transcoding, and a custom Discord alerting pipeline — from Docker Compose to K3s running on a single machine.

[Screenshot: homelab status overview]

Everything in this post happened on real hardware. Every problem described is a problem that actually happened. Every fix is the fix that actually worked.


The Stack Before Migration

The homelab ran on a single Ubuntu machine (manupa-hn-wx9x, 192.168.1.9) using Docker Compose. Services included:

  • Smart Home: Home Assistant, NanoMQ (MQTT broker)
  • Surveillance: Frigate (AMD ROCm GPU object detection), frigate-telegram bot
  • Media: Jellyfin
  • Monitoring: Netdata, Uptime Kuma
  • Tools: Stirling PDF, tldraw, FileBrowser, qBittorrent
  • Archiving: Archive Team Warrior
  • Privacy: Snowflake Proxy (Tor bridge)
  • AI: Open WebUI
Why migrate? The honest answer: Docker Compose works fine until you want to scale a single service independently, pin resource limits per container, get structured health alerting, or reproduce the entire stack from code in under 10 minutes. Kubernetes gives you all of that.


Why K3s

Full Kubernetes (kubeadm) adds significant operational overhead for a homelab. K3s is Rancher's lightweight distribution — a single binary, installs as a systemd service, and ships with:

  • Traefik — ingress controller (takes port 80)
  • local-path provisioner — dynamic PVC storage in /var/lib/rancher/k3s/storage/
  • CoreDNS — service discovery
  • Flannel — pod networking (VXLAN overlay, 10.42.0.0/16)
  • Built-in containerd — no separate Docker daemon needed

Installation:

curl -sfL https://get.k3s.io | sh -

One command. 30 seconds. Production-grade Kubernetes cluster on your desk.


Architecture

                    manupa-hn-wx9x (192.168.1.9)
                    ┌─────────────────────────────────────┐
                    │                                     │
                    │  homelab namespace                  │
                    │  ├── Frigate (GPU, hostNetwork)     │
                    │  ├── Jellyfin (GPU, hostNetwork)    │
                    │  ├── Home Assistant (hostNetwork)   │
                    │  ├── Netdata (DaemonSet)            │
                    │  ├── Uptime Kuma, FileBrowser       │
                    │  ├── qBittorrent, Stirling PDF      │
                    │  ├── tldraw, Archive Warrior        │
                    │  ├── Snowflake Proxy                │
                    │  ├── Open WebUI, Open Terminal      │
                    │  └── Homepage Dashboard             │
                    │                                     │
                    │  monitoring namespace               │
                    │  ├── Prometheus + Alertmanager      │
                    │  ├── Grafana                        │
                    │  ├── Node Exporter (DaemonSet)      │
                    │  └── Kube State Metrics             │
                    │                                     │
                    │  Still on Docker                    │
                    │  ├── frigate-telegram               │
                    │  └── NanoMQ (MQTT)                  │
                    └─────────────────────────────────────┘
                                      │
                             Nginx reverse proxy
                             (original ports → NodePorts)

All manifests managed with Kustomize (kubectl apply -k), stored in /home/manupa/Docker/k8s/.

[Screenshot: Kubernetes cluster overview]

Storage Strategy

The simplest approach for a single-node cluster with existing data: hostPath PersistentVolumes pointing directly at existing directories.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-config-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: [ReadWriteOnce]
  hostPath:
    path: /home/manupa/Docker/jellyfin/config

No data migration. No downtime. Existing files immediately available to pods. For new services with no existing data, the K3s local-path StorageClass handles dynamic provisioning automatically.

The trade-off: hostPath volumes are node-specific. When you add a second node, pods using hostPath must be pinned to the node where the data lives via nodeSelector. This is manageable but plan for shared storage (NFS or Longhorn) if you want true pod mobility later.


The Tricky Bits

1. AMD GPU Passthrough for Frigate (ROCm)

Frigate's AMD ROCm GPU access requires three things in the pod spec:

securityContext:
  privileged: true
env:
  - name: LIBVA_DRIVER_NAME
    value: "radeonsi"
  - name: HSA_ENABLE_SDMA
    value: "0"
volumes:
  - name: dev-kfd
    hostPath:
      path: /dev/kfd
  - name: dev-dri
    hostPath:
      path: /dev/dri

2. WiFi Hotspot vs. Flannel CIDR Conflict

K3s Flannel uses 10.42.0.0/16 by default. The WiFi hotspot on this machine also auto-assigned itself 10.42.x.x. Every pod lost network access after K3s installed.

Fix: Change the hotspot subnet to something that doesn't conflict:

sudo nmcli connection modify Hotspot ipv4.addresses 10.50.0.1/24
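You can check for this class of conflict up front with Python's ipaddress module. The /24 masks for the hotspot are my assumption; the 10.42.0.0/16 Flannel CIDR is the K3s default.

```python
import ipaddress

flannel = ipaddress.ip_network("10.42.0.0/16")  # K3s Flannel default pod CIDR
hotspot = ipaddress.ip_network("10.42.0.0/24")  # what the hotspot grabbed (assumed /24)
fixed   = ipaddress.ip_network("10.50.0.0/24")  # post-fix hotspot subnet

print(flannel.overlaps(hotspot))  # True  -> pods lose connectivity
print(flannel.overlaps(fixed))    # False -> safe
```

Worth running against every subnet your router, VPN, and hotspot hand out before installing K3s; the failure mode (pods silently losing network) is much harder to diagnose after the fact.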

3. Frigate Cameras on the Hotspot (10.50.0.x)

The WiFi cameras live on the hotspot subnet, which pods on the Flannel overlay network can't reach. Fix: hostNetwork: true on the Frigate pod. The pod uses the host's network stack directly, which has a route to 10.50.0.x via the hotspot interface.

spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet

4. /dev/shm Running Out (Frigate)

Frigate's shared memory usage hit 71% of the allocated 500Mi emptyDir limit.

# After — disk-backed, no size limit
volumes:
  - name: dshm
    hostPath:
      path: /tmp/frigate-shm
      type: DirectoryOrCreate

Monitoring and Alerting

Stack

Node Exporter (DaemonSet) ──┐
Kube State Metrics          ├──▶ Prometheus ──▶ Alertmanager ──▶ Discord
K8s API / cAdvisor          ┘        │
                                     ▼
                                  Grafana
[Screenshot: monitoring and Grafana details]

Discord Integration

receivers:
  - name: discord
    discord_configs:
      - webhook_url: 'https://discord.com/api/webhooks/...'
        title: >-
          {{ if eq .Status "firing" }}🔥{{ else }}✅{{ end }}
          [{{ .Status | toUpper }}] {{ .GroupLabels.alertname }}
        send_resolved: true

The Homepage Dashboard

Homepage (gethomepage/homepage) serves as the central entry point. The default dark theme was replaced with a custom feudal Japan / sumi-e (墨絵) aesthetic:

  • Background: deep ink-wash gradient with ambient radial glows in bamboo green and sakura rose
  • Cards: dark lacquer panels with barely-visible gold borders, 2px lift on hover
  • Group headers: antique gold, ultra-light weight, torii-bar underline
  • Typography: system serif stack (Hiragino Mincho ProN → Yu Mincho → Georgia)
  • Scrollbar: 3px thin, gold-tinted
[Screenshot: sumi-e themed dashboard]

Multi-Node Expansion

curl -sfL https://get.k3s.io | \
  K3S_URL=https://192.168.1.9:6443 \
  K3S_TOKEN=<token> \
  sh -

The node labelling pattern:

kubectl label node manupa-hn-wx9x role=primary gpu=amd
kubectl label node manupa role=worker

Lessons Learned

  1. Plan your storage before you plan your services: If multi-node is in your future, set up NFS or Longhorn first.
  2. hostNetwork is not a bad word: In a homelab, some services genuinely need it (mDNS/SSDP, WiFi hotspots).
  3. GPU access in Kubernetes is not scary: Four YAML fields. It works the same as in Docker.
  4. Alertmanager is better than dashboards for homelab ops: You want Discord to tell you when something breaks at 2am.
  5. YAML sprawl is real but manageable: Kustomize keeps it organized.
  6. K3s is genuinely production-grade: For a homelab, you give up almost nothing vs. full Kubernetes.
[Screenshot: final setup details]

Closing

The migration took a weekend of focused work. The result is a homelab that is documented, version-controlled, observable, scalable, and recoverable.

If you're running a homelab on Docker Compose and wondering whether Kubernetes is worth the learning curve — it is. Start with K3s. Start with one service. The rest follows naturally.

Stack: K3s v1.34.5 · Ubuntu 25.10 · AMD ROCm · Flannel CNI · Traefik · Kustomize
Services: Home Assistant · Frigate · Jellyfin · Prometheus · Grafana · Alertmanager · Netdata · Uptime Kuma · FileBrowser · Stirling PDF · tldraw · qBittorrent · Archive Warrior · Snowflake Proxy · Open WebUI · Open Terminal · Homepage