

Home GitOps cluster

Bare-metal Kubernetes on 4× Raspberry Pi 4 with Flux, Cilium, Tailscale, an in-cluster Zot registry, and MinIO. The infrastructure layer of the optimisation work; same patterns I apply to bigger clusters at work.

DevOps · Fullstack
Kubernetes · Flux · Cilium · Tailscale · MinIO · Zot · ARM64

  • 4 Raspberry Pi 4 nodes
  • ARM64 end-to-end
  • Flux as GitOps reconciler
  • 0 public ports
What it is

A real production-style Kubernetes cluster running at home, on four Raspberry Pi 4 boards. Bare-metal kubeadm v1.35, Cilium CNI, local-path-provisioner + an in-cluster MinIO for S3-compatible storage, Zot as a private container registry backed by a MinIO bucket. Flux watches a Git repository and reconciles every manifest in clusters/production/ — the cluster's state is its repository.
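The Flux side is a standard bootstrap against that repository, with clusters/production/ as the reconciled path. A minimal sketch, assuming a personal GitHub-hosted repo; the owner and repository names are placeholders, only the path comes from this project:

    # One-time bootstrap: installs the Flux controllers and points them at the repo.
    flux bootstrap github \
      --owner=<github-user> \
      --repository=<cluster-repo> \
      --branch=main \
      --path=clusters/production \
      --personal

    # After that the repo is the source of truth; a manual reconcile only skips
    # the polling interval.
    flux reconcile kustomization flux-system --with-source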

External access is brokered exclusively by the Tailscale Operator: MagicDNS names plus automatically issued TLS certificates, zero public ports. The cluster is "reachable from anywhere" only in the sense that its operator can get to it over the tailnet; nothing is exposed to the public internet.
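A minimal sketch of how a Service ends up on the tailnet, assuming the Tailscale Operator is installed; the service name, namespace, and ports are placeholders, while loadBalancerClass: tailscale and the tailscale.com/hostname annotation follow the operator's documented conventions:

    # Hypothetical app Service exposed through the Tailscale Operator instead of
    # a public ingress. The operator creates a tailnet device for it; MagicDNS
    # and TLS come from Tailscale, not from the cluster.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: neper
      namespace: apps
      annotations:
        tailscale.com/hostname: neper      # MagicDNS name on the tailnet
    spec:
      type: LoadBalancer
      loadBalancerClass: tailscale         # handled by the operator, no public IP
      selector:
        app: neper
      ports:
        - port: 80
          targetPort: 8080
    EOF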

Why it exists

Two reasons. First, it hosts apps I actually use: a recipe platform (Neper: Astro + FastAPI), an artist's website (Laura Ubbesen: Astro + nginx), and a knowledge graph experiment (Maat). Second, it's where I test infrastructure ideas at real scale without paying cloud invoice prices: the multi-arch build pipeline, the registry topology, the GitOps reconciliation patterns — all the things you can't really learn from a single-node minikube.

The interesting parts

  • ARM64 cross-compilation from x86_64. Every image needs to be linux/arm64. The build host is x86, so the pipeline uses QEMU user-mode emulation under podman build --platform=linux/arm64, then pushes to the in-cluster Zot (sketched after this list). Build → push → Flux pulls → pods restart. End-to-end in under two minutes.
  • HTTP registry inside the cluster. Zot speaks plain HTTP on port 5000, which is fine inside a cluster but takes some convincing on the container-runtime side. The bootstrap script bootstrap/configure-zot-registry-on-nodes.sh wires up certs.d entries and /etc/hosts on every node so containerd can pull from the cluster-internal DNS name without TLS gymnastics (rough shape shown after this list).
  • No public ingress. No nginx-on-public-IP, no Let's Encrypt dance. The Tailscale Operator turns Kubernetes Services into Tailscale hosts, so the apps are reachable only on the tailnet. Authentication is the tailnet itself.
  • Storage is split. Application PVCs use local-path-provisioner (each pod's data lives on its node). Anything that wants S3 semantics — backups, registry blobs, model artefacts — goes to MinIO.
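Two sketches of how the build and registry pieces fit together. First, the cross-build and push, assuming QEMU binfmt handlers are already registered on the x86_64 host (e.g. via qemu-user-static); ZOT stands in for whatever address the in-cluster registry is reachable at from the build machine:

    # Cross-build for the Pi nodes and push to the plain-HTTP Zot.
    ZOT=<zot-address>:5000                      # placeholder
    podman build --platform=linux/arm64 -t "$ZOT/neper-api:latest" .
    podman push --tls-verify=false "$ZOT/neper-api:latest"
    # Flux picks up the new image on its next reconcile and the pods roll.

Second, the rough shape of what the per-node bootstrap script configures so containerd accepts the HTTP registry; the exact contents of bootstrap/configure-zot-registry-on-nodes.sh aren't reproduced here, the registry hostname is a placeholder, and this assumes containerd's registry config_path points at /etc/containerd/certs.d:

    # Tell containerd that this registry is plain HTTP and skip TLS verification.
    REGISTRY=<zot-cluster-dns-name>:5000        # placeholder
    sudo mkdir -p "/etc/containerd/certs.d/$REGISTRY"
    cat <<EOF | sudo tee "/etc/containerd/certs.d/$REGISTRY/hosts.toml"
    server = "http://$REGISTRY"

    [host."http://$REGISTRY"]
      capabilities = ["pull", "resolve"]
      skip_verify = true
    EOF
    # The script also adds an /etc/hosts entry so the cluster-internal name
    # resolves on the node itself.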

What I learned doing it

ARM64 is mature now. The decade of "everything works on amd64, half of it breaks on ARM" is over. With QEMU + buildx + a real ARM target to deploy to, the build chain is as reliable as anything I'd run on EKS, with a tiny fraction of the bill.

The bigger lesson is that building the operator setup right once compounds across every subsequent project. Adding a new app is now: write the manifest, point it at a Zot image tag, push to the GitOps repo. No DNS dance, no ingress controller surgery, no rebuilding mental models. The same patterns apply directly when these projects move to a managed cluster — only the underlying infrastructure name changes.
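In practice "write the manifest" is one small commit under clusters/production/. A hypothetical example, just to show the shape; the apps/ subdirectory, names, namespace, and registry address are all placeholders:

    # New app = one manifest pointing at a Zot-hosted image tag, committed to the repo.
    cat > clusters/production/apps/example-app.yaml <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
      namespace: apps
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
            - name: app
              image: <zot-cluster-dns-name>:5000/example-app:v0.1.0
              ports:
                - containerPort: 8080
    EOF
    git add clusters/production/apps/example-app.yaml
    git commit -m "Add example-app"
    git push    # Flux picks it up on its next reconcile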

Related work