Horus / Neper / Maat
Home GitOps cluster
Bare-metal Kubernetes on 4× Raspberry Pi 4 with Flux, Cilium, Tailscale, an in-cluster Zot registry, and MinIO. The infrastructure layer of the optimisation work; same patterns I apply to bigger clusters at work.
4 Raspberry Pi 4 nodes
ARM64 end-to-end
Flux GitOps reconciler
0 public ports
What it is
A real production-style Kubernetes cluster running at home, on four Raspberry Pi 4 boards.
Bare-metal kubeadm v1.35, Cilium CNI, local-path-provisioner +
an in-cluster MinIO for S3-compatible storage, Zot as a private container registry backed
by a MinIO bucket. Flux watches a Git repository and reconciles every manifest in
clusters/production/ — the cluster's state is its repository.
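A minimal sketch of that reconciliation loop, as a Flux Kustomization pointing at the repo path above. Only `clusters/production/` comes from this page; the resource and repository names are illustrative:

```yaml
# Flux watches the GitRepository and applies everything under the path.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: cluster-apps          # illustrative name
  namespace: flux-system
spec:
  interval: 5m                # re-reconcile at least every 5 minutes
  path: ./clusters/production
  prune: true                 # delete resources removed from Git
  sourceRef:
    kind: GitRepository
    name: flux-system         # assumed: the bootstrap repo object
```

With `prune: true`, deleting a manifest from Git deletes the object from the cluster, which is what makes "the cluster's state is its repository" literally true.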
External access is brokered exclusively by the Tailscale Operator: MagicDNS plus automatically-issued TLS, zero public ports. The cluster touches the public internet only in the sense that its operator can reach it from anywhere on the tailnet.
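Exposing an app this way is a per-Service change. A hedged sketch, assuming the Tailscale Operator is installed; the app name and ports are made up for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: neper-web                  # hypothetical Service name
  annotations:
    tailscale.com/hostname: neper  # MagicDNS name on the tailnet
spec:
  type: LoadBalancer
  loadBalancerClass: tailscale     # handled by the operator, not a cloud LB
  selector:
    app: neper-web
  ports:
    - port: 443
      targetPort: 8000
```

The operator provisions a tailnet device for the Service, so reachability and authentication both collapse into tailnet membership.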
Why it exists
Two reasons. First, it hosts apps I actually use: a recipe platform
(Neper: Astro + FastAPI), an artist's website
(Laura Ubbesen: Astro + nginx), and a knowledge graph experiment
(Maat). Second, it's where I test infrastructure ideas at real scale
without paying cloud invoice prices: the multi-arch build pipeline, the registry
topology, the GitOps reconciliation patterns — all the things you can't really learn
from a single-node minikube.
The interesting parts
- ARM64 cross-compilation from x86_64. Every image needs to be `linux/arm64`. The build host is x86, so the pipeline uses QEMU user-mode emulation under `podman build --platform=linux/arm64`, then pushes to the in-cluster Zot. Build → push → Flux pulls → pods restart. End-to-end in under two minutes.
- HTTP registry inside the cluster. Zot speaks plain HTTP on port 5000, which is fine inside a cluster but takes some convincing on the kubelet side. The bootstrap script `bootstrap/configure-zot-registry-on-nodes.sh` wires up `certs.d` entries and `/etc/hosts` on every node so containerd can pull from the cluster-internal DNS name without TLS gymnastics.
- No public ingress. No nginx-on-public-IP, no Let's Encrypt dance. The Tailscale Operator turns Kubernetes Services into Tailscale hosts, so the apps are reachable only on the tailnet. Authentication is the tailnet itself.
- Storage is split. Application PVCs use `local-path-provisioner` (each pod's data lives on its node). Anything that wants S3 semantics — backups, registry blobs, model artefacts — goes to MinIO.
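The containerd side of the plain-HTTP registry (second bullet) boils down to a `hosts.toml` drop-in per node. A sketch under the assumption that containerd's `config_path` points at `/etc/containerd/certs.d`; the registry DNS name is illustrative:

```toml
# /etc/containerd/certs.d/zot.zot.svc.cluster.local:5000/hosts.toml
# "http://" in the server URL is what tells containerd not to expect TLS.
server = "http://zot.zot.svc.cluster.local:5000"

[host."http://zot.zot.svc.cluster.local:5000"]
  capabilities = ["pull", "resolve"]
  skip_verify = true
```

Paired with an `/etc/hosts` entry so the cluster-internal name resolves from the node itself, the kubelet pulls from Zot like any other registry.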
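The storage split in the last bullet is visible in the PVC spec: anything node-local just names the `local-path` storage class. A minimal sketch with an invented claim name and size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: neper-data            # hypothetical app volume
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path  # provisioned on the pod's own node
  resources:
    requests:
      storage: 5Gi
```

Workloads wanting S3 semantics skip PVCs entirely and talk to MinIO over its S3 API instead.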
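The build-and-push leg of the first bullet condenses to a couple of commands. A sketch, not the actual pipeline: the registry hostname and image path are invented, and it assumes QEMU binfmt handlers are already registered on the x86 host:

```shell
# Illustrative registry/image names; the real in-cluster DNS name differs.
IMAGE="zot.zot.svc.cluster.local:5000/apps/neper:$(git rev-parse --short HEAD)"

# QEMU user-mode emulation lets an x86 host run arm64 build steps.
podman build --platform=linux/arm64 -t "$IMAGE" .

# Zot speaks plain HTTP, so TLS verification is switched off for the push.
podman push --tls-verify=false "$IMAGE"

# From here Flux takes over: the new tag lands in the GitOps repo,
# Flux reconciles, and the pods restart. No kubectl apply anywhere.
```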
What I learned doing it
ARM64 is mature now. The decade of "everything works on amd64, half of it breaks on ARM" is over. With QEMU + buildx + a real ARM target to deploy to, the build chain is as reliable as anything I'd run on EKS, at a tiny fraction of the bill.
The bigger lesson is that building the operator setup right once compounds across every subsequent project. Adding a new app is now: write the manifest, point it at a Zot image tag, push to the GitOps repo. No DNS dance, no ingress controller surgery, no rebuilding mental models. The same patterns apply directly when these projects move to a managed cluster — only the underlying infrastructure name changes.
Related work
Engineer on the optimisation arc
Provstiskyen: performance work on a decade-old SaaS
Profiled and fixed the cold-start path on a 44,000-line R Shiny production app: 50-second logins down to 18, and 35-minute deploys down to 80 seconds, all on the existing codebase. The full rewrite that came later was made possible by a year of targeted optimisation work first.
Optimization Fullstack DevOps
This site
Tachyon
The same haversine kernel walked from a naïve pandas `.apply` through C++, Rust, Zig SIMD, and finally an analyzer-driven V7 in Zig that reads its own compiled assembly to land at 150 GB/s, plus a WebGPU compute lab in the browser. End-to-end demo of the optimisation work I do for clients.
Optimization DevOps Fullstack