A complete, opinionated, two-stack homelab: DNS · ad-blocking · media · torrents · smart home · automation · dashboards · file sharing · zero-trust remote access. All self-hosted, all in one repo, designed for any Linux server.
🐳 Docker Stack · ☸️ k3s Stack · ⚙️ Ansible · 🤝 Contributing
Most homelab projects pick a side: either "here's my docker-compose.yml collection" or "here's my Helm-charted k3s cluster". This repo refuses to choose, because both have a place in a serious homelab:
- Docker Compose is unbeatable for trying things out: clone, edit env vars, `docker compose up`. Done in ninety seconds.
- Kubernetes (k3s) is unbeatable for running things long-term: declarative state, GitOps reconciliation, sealed secrets, real ingress, real RBAC.
So this repo gives you both, side by side, with the same set of services modelled twice: once the easy way, once the production way. Pick a service, prototype it in `docker/`, then promote the working configuration to `k3s/` once you trust it.
Every choice is benchmarked for an 8 GB homelab server (tested on Raspberry Pi 5, x86_64 mini-PCs, and cloud VMs). Everything is reproducible from a clean git clone. Nothing depends on a SaaS, a paid plan, or an undocumented click in someone's WebUI.
| | 🐳 Docker Stack | ☸️ k3s Stack |
|---|---|---|
| Best for | Prototyping · single-service experiments · learning · "let me try X for an evening" | Production · GitOps · long-running workloads · multi-service composition |
| Deploy unit | `docker compose up -d` per service | `kubectl apply -k` per app, then ArgoCD reconciles |
| Source of truth | `docker-compose.yml` + `.env` files | YAML manifests + SealedSecrets in git |
| Networking | Bridge networks + host port bindings | Traefik IngressRoute + LoadBalancer (klipper-lb) |
| TLS / certificates | Manual (or Nginx Proxy Manager UI) | cert-manager + Let's Encrypt, automatic renewal |
| Secrets | `.env` files (git-ignored) | SealedSecrets (encrypted, safe to commit) |
| Updates | `docker compose pull && up -d` | `git push` → ArgoCD auto-syncs |
| Rollback | Edit compose / re-pull old tag | `git revert` → ArgoCD un-applies |
| Recovery | Re-run `setup.sh`, restore bind mounts | `cluster-restore.sh` + PVCs |
| Per-service docs | 27 self-contained READMEs | 14 self-contained READMEs |
| Resource overhead | Just the Docker daemon (~50 MB RAM) | k3s control plane (~500 MB RAM) |
| Service count | 27 | 14 (and growing) |
TL;DR: Use the Docker stack to try things out. Promote what works to the k3s stack and let ArgoCD run it for you. They share the same Pi, the same Pi-hole DNS, and the same Twingate connector.
The big picture: two deployment paths (manual compose up / GitOps), two ingress paths (LAN via Pi-hole DNS / WAN via Twingate or Cloudflare), and one Pi running everything. No port-forwarding, no SaaS in the critical path.
```mermaid
graph TB
    %% ─── HEADERS (rendered as banner nodes) ───
    H1>"<b>① WHO USES IT</b>"]
    H2>"<b>② INTERNET SERVICES</b>"]
    H3>"<b>③ HOME EDGE</b> · no inbound port-forward, ever"]
    H4>"<b>④ THE PI</b> · two stacks, one box"]
    H5>"<b>⑤ SELF-HOSTED WORKLOADS</b>"]

    %% ─── TIER 1 · users ───
    Dev[👨‍💻 <b>Developer</b><br/>writes manifests<br/>+ compose files]
    Remote[📱 <b>Remote user</b><br/>phone · laptop<br/>any network]
    LAN[🏠 <b>LAN user</b><br/>desktop · TV · IoT<br/>same Wi-Fi]

    %% ─── TIER 2 · internet services ───
    Repo[(📁 <b>GitHub repo</b><br/>source of truth)]
    Actions[⚙️ <b>GitHub Actions</b><br/>regenerate READMEs<br/>validate frontmatter]
    CF[(☁️ <b>Cloudflare DNS</b><br/>your-domain.tld<br/>→ Twingate edge)]
    TGEdge[🛡️ <b>Twingate Edge</b><br/>identity-aware proxy<br/>no open inbound port]

    %% ─── TIER 3 · home edge ───
    Router[🌐 <b>Home Router</b><br/>NAT · DHCP only]
    TGConn[🛡️ <b>Twingate Connector</b><br/>outbound TCP/443 only<br/>punches no holes]
    Pihole[🛡️ <b>Pi-hole</b><br/>LAN DNS · ad-block<br/>*.lan → 192.168.x.x]

    %% ─── TIER 4 · the Pi ───
    Docker[🐳 <b>Docker stack</b><br/>27 services · prototyping<br/>docker compose + setup.sh<br/>NPM for TLS / reverse-proxy]
    Argo[🔄 <b>ArgoCD</b><br/>GitOps controller<br/>pulls main every 3 min]
    K3s[☸️ <b>k3s cluster</b><br/>14 apps · production<br/>Traefik IngressRoute<br/>cert-manager · SealedSecrets]

    %% ─── TIER 5 · self-hosted workloads (auto-generated) ───
    W1[🎬 <b>Media</b><br/>Jellyfin · Plex]
    W2[💡 <b>Dashboards</b><br/>Dashy · Homarr · Homepage]
    W3[🤖 <b>Automation</b><br/>Home Assistant · n8n]
    W4[📁 <b>Files & Sync</b><br/>FileBrowser · Nextcloud · ownCloud · Pydio · Rclone · +2 more]
    W5[🧲 <b>Downloads</b><br/>Aria2 · BitComet · Deluge · qBittorrent]
    W6[📈 <b>Monitoring</b><br/>Dashdot · Netdata · Portainer]
    W7[🛠️ <b>Dev tooling</b><br/>Gitea · GitLab · LocalStack]

    %% ─── HEADER ANCHORS (invisible) ───
    H1 ~~~ Dev
    H2 ~~~ Repo
    H3 ~~~ Router
    H4 ~~~ Docker
    H5 ~~~ W1

    %% ─── FLOWS · GitOps lane (purple, thick) ───
    Dev == "git push" ==> Repo
    Repo -- webhook --> Actions
    Actions -. "auto-commit<br/>regenerated docs" .-> Repo
    Repo == "pull every 3 min" ==> Argo
    Argo == "kubectl apply" ==> K3s

    %% ─── FLOWS · manual Docker deploy ───
    Dev -. "ssh + ./setup.sh" .-> Docker

    %% ─── FLOWS · remote access lane (orange) ───
    Remote --> CF --> TGEdge
    TGEdge -. "encrypted tunnel" .-> TGConn
    TGConn --> Docker
    TGConn --> K3s

    %% ─── FLOWS · LAN access lane (green) ───
    LAN --> Router --> Pihole
    Pihole --> Docker
    Pihole --> K3s

    %% ─── FLOWS · stacks → workloads ───
    Docker --> W1 & W2 & W3 & W4 & W5 & W6 & W7
    K3s --> W1 & W2 & W3 & W4 & W5 & W6 & W7

    %% ─── STYLES ───
    classDef header fill:#263238,stroke:#263238,color:#ffffff,font-size:18px,font-weight:bold
    classDef user fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px,color:#000
    classDef internet fill:#fff3e0,stroke:#ef6c00,stroke-width:2px,color:#000
    classDef edge fill:#fce4ec,stroke:#c2185b,stroke-width:2px,color:#000
    classDef stack fill:#e3f2fd,stroke:#1565c0,stroke-width:3px,color:#000
    classDef gitops fill:#f3e5f5,stroke:#6a1b9a,stroke-width:3px,color:#000
    classDef workload fill:#fffde7,stroke:#f9a825,stroke-width:2px,color:#000
    class H1,H2,H3,H4,H5 header
    class Dev,Remote,LAN user
    class Repo,Actions,CF,TGEdge internet
    class Router,TGConn,Pihole edge
    class Docker,K3s stack
    class Argo gitops
    class W1,W2,W3,W4,W5,W6,W7 workload
```
How to read this diagram:
| Path | Route | What flows |
|---|---|---|
| 🟣 GitOps (purple, thick) | Dev → GitHub → ArgoCD → k3s | A `git push` reconciles into the cluster automatically; no SSH, no kubectl |
| 🟠 Remote access (orange) | Remote → Cloudflare → Twingate edge → Twingate connector → stack | Identity-aware, outbound-only, works behind CGNAT |
| 🟢 LAN access (green) | LAN → Pi-hole → stack | Pure-DNS routing; no router config, no certs needed for `*.lan` |
| 🔵 The Pi (blue) | Hosts both stacks side-by-side | Docker for tinkering, k3s for production; same workloads, different lifecycles |
A single server sits behind a normal home router. No port forwarding is required: remote access flows through the Twingate connector, while the LAN gets DNS-level ad blocking and an internal `*.home.your-domain.tld` domain served by Traefik (k3s) or Nginx Proxy Manager (Docker).
🆕 New to all this? Jump to 🌐 DNS & TLS – beginner to pro for a step-by-step walkthrough of how to actually point a hostname at your server: from `/etc/hosts` on a single laptop, to LAN-wide Pi-hole, to a real Cloudflare A-record pointing at a private IP, to mkcert, and finally to Let's Encrypt with the DNS-01 challenge.
```bash
git clone https://github.com/Thre4dripper/Home-Server-Lab.git
cd Home-Server-Lab/docker/<service>   # e.g. docker/jellyfin
./setup.sh
```

Every Docker service is a self-contained folder with `docker-compose.yml`, `setup.sh` and a per-service `README.md`. The setup script handles env-file scaffolding, directory creation and `docker compose up -d`. → See docker/README.md for the full catalog and detailed walkthrough.
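The repo's own `setup.sh` scripts aren't reproduced here, but the pattern they follow can be sketched roughly like this (file names and steps are illustrative, not the exact script shipped per service):

```shell
#!/usr/bin/env sh
# Illustrative sketch of a per-service setup script: scaffold env file,
# create the bind-mount directory, then start the stack.
set -eu

SERVICE_DIR="${SERVICE_DIR:-.}"
cd "$SERVICE_DIR"

# 1. Scaffold .env from the template on first run (never clobber an edited one)
if [ ! -f .env ] && [ -f .env.example ]; then
  cp .env.example .env
  echo "Created .env -- edit it before first start"
fi

# 2. Create the bind-mount directory the compose file expects
mkdir -p data

# 3. Bring the service up (skipped gracefully when Docker or the compose
#    file is absent, so the sketch stays safe to run anywhere)
if command -v docker >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
  docker compose up -d
else
  echo "docker or docker-compose.yml not found; skipping 'docker compose up -d'"
fi
```

The important property is idempotence: re-running the script never overwrites an existing `.env`.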
```bash
# 1. Install k3s (single-node, with default Traefik + klipper-lb)
curl -sfL https://get.k3s.io | sh -

# 2. Bootstrap the cluster
git clone https://github.com/Thre4dripper/Home-Server-Lab.git
cd Home-Server-Lab/k3s
kubectl apply -f base/namespaces/
kubectl apply -k infra/sealed-secrets/
kubectl apply -k infra/traefik/
kubectl apply -k infra/cert-manager/
kubectl apply -k infra/argocd/

# 3. Hand the keys to ArgoCD (one ApplicationSet → one Application per app)
kubectl apply -f infra/argocd/root-app.yaml

# Done. From here, every commit to k3s/apps/** is auto-deployed.
```

→ See k3s/README.md for the full bootstrap order, secrets workflow and service catalog.
```bash
cd Home-Server-Lab/ansible
ansible-playbook -i inventory.yml site.yml
```

Installs Docker, k3s, the Sealed Secrets controller and friends on a fresh Pi; then you're ready for Option A or B.
Both stacks publish auto-generated catalog pages with mermaid diagrams and per-category tables:
| Stack | Catalog | Services | Categories |
|---|---|---|---|
| 🐳 Docker | docker/README.md → | 27 ready-to-run Compose stacks | 7 |
| ☸️ k3s | k3s/README.md → | 14 GitOps-managed Kubernetes apps | 8 |
Both pages regenerate automatically from per-service README.md frontmatter via GitHub Actions; see Automation.
- 🔒 Privacy first: Your data, your hardware, your rules. No SaaS dependencies, no telemetry, no third-party clouds in the critical path.
- 🏗️ Production-grade patterns: Real ingress, real secrets management, real GitOps, even on a Pi. The k3s stack is structured exactly the way you'd structure a small production cluster.
- 📦 Resource-efficient: Every service has a benchmarked RAM/CPU footprint. The full Docker catalog runs comfortably on an 8 GB server (single-board, mini-PC, or VM).
- 🧪 Reproducible from zero: `git clone` → bootstrap → working homelab. No undocumented manual clicks. No "oh, you also need to…".
- 📖 Self-documenting: Every service carries machine-readable YAML frontmatter. The catalog pages, mermaid diagrams and category tables are derived from that frontmatter, so they cannot drift out of sync with reality.
- 🎓 Educational: Each per-service README is structured to teach: Why this service · How it's wired · What can go wrong · How to fix it.
| Layer | Docker stack | k3s stack |
|---|---|---|
| Container runtime | Docker Engine | containerd (via k3s) |
| Orchestration | docker compose | k3s (Kubernetes 1.28+) |
| Ingress / proxy | Nginx Proxy Manager (nginx-ui) | Traefik (built into k3s) |
| Load balancer | Host port bindings | klipper-lb (built into k3s) |
| TLS | Manual / Let's Encrypt via NPM | cert-manager + Let's Encrypt |
| Secrets | `.env` files (git-ignored) | Bitnami SealedSecrets (encrypted in git) |
| Deployment automation | Per-service `setup.sh` | Per-app `setup.sh` + ArgoCD |
| Remote access | Twingate connector container | Twingate connector Pod |
| DNS | Pi-hole container | Pi-hole Pod (`hostNetwork: true`) |
| CI | GitHub Actions (README + frontmatter) | GitHub Actions (README + frontmatter) |
The reference deployment is an 8 GB homelab server with SSD storage running Debian 12 / Ubuntu 22.04+ (64-bit). Tested on Raspberry Pi 5, Intel NUC, and Hetzner cloud VMs.
| Component | Minimum | Recommended | Notes |
|---|---|---|---|
| CPU | Quad-core 1.5 GHz (ARM64 or x86_64) | 4-core 2.0 GHz+ | Both stacks are multi-arch where underlying images support it |
| RAM | 4 GB | 8 GB | k3s adds ~500 MB baseline; full Docker catalog needs 6 GB+ |
| Storage | 32 GB | 256 GB+ NVMe / SSD | SSD strongly recommended; move `/var/lib/{docker,rancher}` to it for best I/O |
| Network | 100 Mbit Ethernet | Gigabit Ethernet | Wired strongly recommended for Home Assistant / multicast discovery |
| Power | Stable power supply | UPS recommended | Sudden power loss can corrupt Docker overlays / k3s etcd |
| Profile | Services | Total RAM | Storage | Stack |
|---|---|---|---|---|
| Minimal | Pi-hole + Portainer + Homepage | ~400 MB | 16 GB | 🐳 Docker |
| Media hub | + Jellyfin + qBittorrent + FileBrowser | ~2 GB | 256 GB+ | 🐳 Docker |
| Smart home | + Home Assistant + n8n + Mosquitto | ~3 GB | 32 GB | 🐳 Docker |
| Production cluster | k3s + Traefik + ArgoCD + 8–10 apps | ~4 GB | 128 GB+ | ☸️ k3s |
| Full lab | Both stacks side-by-side | ~6–7 GB | 256 GB+ | 🐳 + ☸️ |
"How do I get `https://jellyfin.home` to actually work in my browser?" Every homelab tutorial skips this. Here is the full progression, from a five-minute hack on one laptop, all the way to publicly trusted certificates on a wildcard domain that resolves to a private IP.
Each level builds on the last. You don't need the next level until the current one starts hurting. Pick the lowest one that still covers your needs.
```
http://192.168.1.42:8096   → Jellyfin
http://192.168.1.42:9000   → Portainer
```
- ✅ Zero setup. Works on day one.
- ❌ Ugly URLs, no TLS, no friendly names, breaks if DHCP changes the IP.
- 👉 Use when: you're testing a service for an hour and never coming back.
Reserve the Pi's IP in your router's DHCP leases; it costs nothing and stops every URL from breaking the day your router reboots.
Edit `/etc/hosts` (Linux/macOS) or `C:\Windows\System32\drivers\etc\hosts` (Windows):

```
192.168.1.42  jellyfin.lan portainer.lan homepage.lan
```

Now `http://jellyfin.lan:8096` works on that one machine.
- ✅ Zero infra, instant.
- ❌ Per-device. Doesn't help your phone, your TV, or guests.
- 👉 Use when: you're the only user and you only care about your laptop.
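Repeating this edit by hand gets old quickly; an idempotent append loop keeps the file clean. A sketch (the `HOSTS_FILE` variable and the names are illustrative; point it at `/etc/hosts` with sudo for real use):

```shell
# Idempotently add name → IP entries to a hosts-format file.
set -eu
HOSTS_FILE="${HOSTS_FILE:-./hosts.test}"   # use /etc/hosts for real
SERVER_IP="192.168.1.42"
touch "$HOSTS_FILE"

for name in jellyfin.lan portainer.lan homepage.lan; do
  # append only if the name is not already present
  if ! grep -qw "$name" "$HOSTS_FILE"; then
    printf '%s\t%s\n' "$SERVER_IP" "$name" >> "$HOSTS_FILE"
  fi
done
```

Run it twice and the file is unchanged the second time, which is exactly what you want from anything that edits `/etc/hosts`.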
Run `docker/pihole` (or `k3s/apps/pihole`), then point your router's DHCP at the server's IP as the network DNS server. Now every device on your LAN (phone, TV, IoT, guests) resolves the names you define.
In `dns-entries.conf` (already wired into the Pi-hole compose file):

```
192.168.1.42  jellyfin.lan
192.168.1.42  portainer.lan
192.168.1.42  *.home.lan
```
- ✅ Whole LAN, including phones and TVs. Plus network-wide ad blocking as a bonus.
- ❌ Still no TLS; browsers will scream `Not Secure` and disable half their features (clipboard, service workers, mic/camera).
- ❌ Doesn't work outside your house.
- 👉 Use when: you have multiple devices and don't yet care about HTTPS.
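Before reloading Pi-hole it's worth sanity-checking the entries file. A minimal, hypothetical validator for the `IP hostname` format shown above (not part of the repo; it only checks the rough shape of each line):

```shell
# Sketch: sanity-check a dns-entries.conf before reloading Pi-hole.
set -eu
CONF="${CONF:-dns-entries.conf}"

# demo content so the sketch is self-contained (remove in real use)
cat > "$CONF" <<'EOF'
192.168.1.42 jellyfin.lan
192.168.1.42 portainer.lan
EOF

bad=0
while read -r ip name; do
  # crude IPv4 shape check: only digits and dots, no leading/trailing dot
  case "$ip" in
    *[!0-9.]*|.*|*.) bad=1 ;;
  esac
  # every line must carry a hostname
  [ -n "$name" ] || bad=1
done < "$CONF"

[ "$bad" -eq 0 ] && echo "dns-entries.conf looks OK"
```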
Most people think "Cloudflare DNS" means "exposed to the internet." It doesn't have to. DNS is just name → IP resolution; it doesn't care whether the IP is public or private. This is the single most useful trick in homelabbing.
In your Cloudflare dashboard for `your-domain.tld`:

```
Type  Name    Content        Proxy
A     home    192.168.1.42   ☁️ DNS only (grey cloud)
A     *.home  192.168.1.42   ☁️ DNS only (grey cloud)
```
Yes, a public DNS record pointing at 192.168.1.42. From the public internet that IP is unroutable, so the record is harmless. From inside your LAN, however, the name resolves and traffic stays local. Result: you get a real, properly delegated domain (`jellyfin.home.your-domain.tld`) that works on every device on your LAN, without editing hosts files, without running Pi-hole local DNS overrides, and, crucially, the same hostnames will keep working for the TLS-via-Let's-Encrypt step below.
- ✅ Real domain, no per-device config, no internal DNS server needed.
- ✅ Sets you up perfectly for Level 5 (Let's Encrypt DNS-01).
- ❌ Anyone who queries public DNS can see that you have a host called `jellyfin.home.your-domain.tld` pointing at an RFC-1918 IP. (Fine: it's a private IP, they can't reach it.)
- ❌ Outside your LAN the names resolve to a useless private IP. You still need Twingate / WireGuard / etc. for actual remote access.
- 👉 Use when: you own a domain and want professional-looking hostnames without running a DNS server.
> 🧠 Why this is brilliant: it makes "remote access" and "local access" use the exact same hostname. From your couch, `jellyfin.home.your-domain.tld` resolves to `192.168.1.42` and goes direct. From a coffee shop with Twingate connected, the resolver returns the same IP and Twingate transparently tunnels you to the LAN. One URL, two paths, zero config drift.
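The reason the public record is harmless is RFC 1918: `10.0.0.0/8`, `172.16.0.0/12` and `192.168.0.0/16` are reserved private ranges that internet routers will not deliver to. A quick glob-based sketch (a hypothetical helper that approximates those CIDRs; not part of the repo):

```shell
# Classify an IPv4 address as RFC-1918 private or public.
is_private_ip() {
  case "$1" in
    10.*|192.168.*) return 0 ;;                     # 10/8 and 192.168/16
    172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 0 ;;  # 172.16/12
    *) return 1 ;;
  esac
}

is_private_ip 192.168.1.42 && echo "private -- unroutable from the internet"
is_private_ip 8.8.8.8 || echo "public"
```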
Now you have nice names, but browsers still complain. The cheapest fix is mkcert: it generates a local certificate authority and installs it into your OS trust store. Certs signed by it are trusted on the machines where you ran `mkcert -install`.
```bash
# On the server, once
mkcert -install
mkcert "*.home.your-domain.tld" home.your-domain.tld
# → home.your-domain.tld+1.pem + home.your-domain.tld+1-key.pem
```

Drop those files into Nginx Proxy Manager (Docker stack) or into a Kubernetes Secret consumed by Traefik (k3s stack). Browsers on machines that trust the mkcert root CA now show the green padlock.
- ✅ Real HTTPS, real green padlock, full Web-API access (clipboard, service workers, WebRTC).
- ✅ Free, offline, works for any hostname including `*.lan`.
- ❌ You must install the mkcert root CA on every device that should trust the cert. Doable for laptops, painful for Smart TVs, IoT devices and visitors' phones.
- 👉 Use when: you want HTTPS for yourself and don't want to wrestle with public certificate authorities yet.
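On the k3s side, "a Kubernetes Secret consumed by Traefik" can be sketched like this. Resource names, namespace, and the port are illustrative, and the manifest assumes Traefik's CRDs are installed (they are in stock k3s):

```yaml
# Hypothetical example -- load the mkcert pair into a TLS secret, then
# reference it from an IngressRoute. Create the secret from the files:
#   kubectl -n default create secret tls mkcert-wildcard \
#     --cert="home.your-domain.tld+1.pem" \
#     --key="home.your-domain.tld+1-key.pem"
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: jellyfin
  namespace: default
spec:
  entryPoints: [websecure]
  routes:
    - match: Host(`jellyfin.home.your-domain.tld`)
      kind: Rule
      services:
        - name: jellyfin
          port: 8096
  tls:
    secretName: mkcert-wildcard   # the secret created above
```

The repo's actual manifests may differ; this only shows where the mkcert files plug in.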
The grown-up version. cert-manager (k3s) or Nginx Proxy Manager (Docker) requests a real Let's Encrypt certificate for `*.home.your-domain.tld` using the DNS-01 challenge: Let's Encrypt asks you to prove ownership of the domain by adding a TXT record, cert-manager calls Cloudflare's API to add it, Let's Encrypt verifies, and the certificate is issued.
The killer feature: because DNS-01 doesn't require Let's Encrypt to connect to your service, it works perfectly when:
- The hostname resolves to a private IP (Level 3 setup) ✅
- You have no port forwarding ✅
- You're behind CGNAT ✅
- You want a wildcard cert (HTTP-01 doesn't support wildcards) ✅
In k3s, this is one ClusterIssuer and one Certificate resource; see `k3s/infra/cert-manager`. In Docker, it's the "Let's Encrypt" tab in Nginx Proxy Manager with the Cloudflare DNS provider plugin.
```yaml
# k3s ClusterIssuer (excerpt)
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
```

- ✅ Real, publicly trusted certs. Works on every device, every browser, every visitor's phone, no manual trust install.
- ✅ Auto-renews every 60 days. Set and forget.
- ✅ Combined with Level 3, your `https://jellyfin.home.your-domain.tld` URL is identical from your couch, your phone over Twingate, and a friend's laptop you handed access to.
- ❌ Requires a real domain (~$10/year) and a Cloudflare account (free).
- 👉 Use when: you've graduated. This is the "production" answer.
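A Certificate resource to pair with such a ClusterIssuer might look like the following sketch. The names (`wildcard-home`, `letsencrypt-prod`) are assumptions for illustration, not necessarily what `k3s/infra/cert-manager` uses:

```yaml
# Hypothetical companion to the ClusterIssuer excerpt: one wildcard
# Certificate; cert-manager keeps the referenced secret issued and renewed.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-home
  namespace: default
spec:
  secretName: wildcard-home-tls     # TLS secret consumed by the ingress
  issuerRef:
    name: letsencrypt-prod          # the ClusterIssuer's name (assumed)
    kind: ClusterIssuer
  commonName: "*.home.your-domain.tld"
  dnsNames:
    - "*.home.your-domain.tld"
    - home.your-domain.tld
```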
DNS + TLS solves "what name and what cert," not "how do bytes get from the coffee shop to my Pi." For that you have three sane options:
| Option | How it works | Trade-off |
|---|---|---|
| Port-forward + Let's Encrypt HTTP-01 | Open 80/443 on the router, point them at the server | Simple, but exposes the server directly to the internet. Don't do this unless you know what you're doing. |
| Cloudflare Tunnel | Daemon on the server makes outbound connection to Cloudflare; CF terminates TLS and proxies in | Free tier, no router config, but all traffic flows through Cloudflare |
| Twingate / Tailscale / Headscale | Identity-aware mesh VPN; outbound-only connector on the server | What this repo uses. No port forwarding, no SaaS in the data path (only signaling) |
This repo ships docker/twingate and k3s/apps/twingate precisely because Twingate's connector model composes cleanly with the Level 3 + Level 5 setup above: same hostname, real cert, no inbound port, identity-checked at the edge.
| You are... | Stop at level | Why |
|---|---|---|
| Tinkering on one laptop for an evening | 0–1 | Don't waste an hour on infra you'll throw away |
| Multi-device household, LAN only | 2 | Pi-hole alone solves the friendly-name problem and blocks ads |
| You own a domain, want HTTPS for yourself | 3 + 4 | Cloudflare DNS + mkcert is the lowest-effort secure setup |
| You want it to "just work" for everyone forever | 3 + 5 | Real cert, real domain, zero per-device setup |
| You also want remote access | 3 + 5 + 6 | Twingate connector container is already in this repo |
This is a homelab, not a production SaaS, but the patterns used here are real:
- ✅ No inbound port forwarding: remote access flows through the Twingate connector (outbound-only TCP/443 to the Twingate edge).
- ✅ Secrets never in git plaintext: Docker uses `.env` files (git-ignored); k3s uses Bitnami SealedSecrets, which are encrypted with the cluster's public key and only decryptable inside the cluster.
- ✅ TLS everywhere: k3s ingress is fronted by cert-manager + Let's Encrypt; the Docker stack uses Nginx Proxy Manager with the same provider.
- ✅ Network segmentation: Docker isolates each stack on its own bridge network; k3s isolates by namespace, with `NetworkPolicy` available where needed.
- ✅ Least-privilege RBAC: k3s service accounts (e.g. Homepage's Kubernetes widget) are bound to read-only ClusterRoles, never `cluster-admin` (except Portainer, which is opt-in and called out).
- ✅ DNS-level filtering: Pi-hole blocks ads, trackers and known-malicious domains for every device on the LAN.
- ✅ Resource limits: every Pod has CPU + memory requests/limits to prevent one runaway container from OOM-killing the host.
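For reference, "requests/limits" on a Pod container looks like this; the numbers below are representative, not the repo's tuned values:

```yaml
# Representative container spec fragment. Requests guide scheduling;
# limits are a hard cap -- exceeding the memory limit OOM-kills only
# this container, so the host (and every other service) stays safe.
containers:
  - name: jellyfin
    image: jellyfin/jellyfin:latest
    resources:
      requests:
        cpu: 100m          # 0.1 CPU reserved for scheduling decisions
        memory: 256Mi
      limits:
        cpu: "1"           # throttled above one core
        memory: 1Gi        # OOM-killed above this threshold
```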
What this does not give you out of the box:
- ❌ DDoS protection (you're not on the public internet)
- ❌ WAF / app-layer firewall (overkill for a homelab; add CrowdSec if you want it)
- ❌ Hardware-attested boot (Pi limitation)
→ Every per-service README has its own Troubleshooting and Hardening notes sections.
This repo is itself GitOps: the documentation, the catalog and the diagrams are all reconciled from the source of truth, which is each service's README.md frontmatter.
| Workflow | Trigger | Effect |
|---|---|---|
| `update-readme.yml` | Any per-service README change in `docker/*` or `k3s/apps/*` | Regenerates all three catalogs (`docker/README.md`, `k3s/README.md` and the root `README.md`) in a single matrix job |
| `validate-metadata.yml` | PRs touching any service README | Validates frontmatter schema for both stacks (required fields, allowed categories, valid icons) |
| `security-scan.yml` | Every push + PR + weekly cron | gitleaks (fast secret scan) + trufflehog (verified credentials, deep history) + Trivy (filesystem CVEs + IaC misconfigs) |
| Dependabot | Weekly | PRs for GitHub Actions, pip packages, n8n Dockerfile bumps |
| Renovate | Continuous | PRs for Docker image tags, Helm charts, k8s manifests, Ansible tool versions; minor/patch auto-merged after CI |
The matrix-based generator is a single workflow file that runs `update-docker-readme.py`, `update-k3s-readme.py` and `update-global-readme.py` in parallel and commits/pushes (or PR-comments) any regenerated catalog. Inside the root README, only the segments wrapped in `<!-- AUTOGEN:* -->` markers are touched; every other line is yours.
Add a service → write its README with the right frontmatter → push → the catalog updates itself.
🔒 Pre-commit hooks block secrets before they hit git. Install once with `pip install pre-commit && pre-commit install`; see SECURITY.md for the full incident-response playbook (rotate → purge history → re-clone).
```
Home-Server-Lab/
├── README.md                ← you are here
├── docker/                  🐳 Docker Compose stack – <!-- AUTOGEN:DOCKER_COUNT -->27<!-- /AUTOGEN:DOCKER_COUNT --> services
│   ├── README.md            auto-generated catalog + mermaid
│   └── <service>/           docker-compose.yml + setup.sh + README.md (frontmatter)
├── k3s/                     ☸️ k3s + ArgoCD stack – <!-- AUTOGEN:K3S_COUNT -->14<!-- /AUTOGEN:K3S_COUNT --> apps
│   ├── README.md            auto-generated catalog + mermaid + bootstrap docs
│   ├── base/                shared namespaces
│   ├── infra/               Traefik · SealedSecrets · cert-manager · ArgoCD
│   ├── apps/<service>/      manifests + setup.sh + README.md (frontmatter)
│   └── scripts/             shared helpers (_app-ctl.sh, seal.sh, db-user.sh, …)
├── ansible/                 ⚙️ Bare-metal & host bootstrap (Docker, k3s, sealed-secrets)
└── .github/
    ├── scripts/             update-docker-readme.py · update-k3s-readme.py · validate-service.py
    └── workflows/           update-readme.yml · validate-metadata.yml
```
Why both Docker AND k3s? Isn't that redundant?
No; they serve different purposes. The Docker stack is for trying things; the k3s stack is for running them. You'll spin up a service in Docker for an afternoon to learn how it works, then promote it to k3s once you trust the configuration. Removing one would force every experiment through the production deployment path, which is friction you don't want when you're tinkering.
Do I need to run both?
No. They're independent. The Docker stack works on any Linux host with Docker. The k3s stack works on any host with k3s (or full k8s). Pick one or both.
Why k3s and not full Kubernetes?
k3s is full Kubernetes: same APIs, same kubectl, same manifests. It just ships as a single binary, replaces etcd with SQLite by default, and is built for ARM/edge. Everything in `k3s/apps/` would work unchanged on EKS / GKE / AKS / k0s / minikube.
Can I run this on x86_64 / Intel / AMD?
Yes. Every image used here ships multi-arch manifests (linux/amd64 + linux/arm64). Tested on Raspberry Pi 5, Intel NUCs, and x86_64 VMs; nothing forces a specific architecture.
How do I add my own service?
Pick a stack, copy the closest existing service folder, edit the manifests/compose file, write a README with the required frontmatter, push. The catalog regenerates itself. Full guide in CONTRIBUTING.md.
How do I expose a service to the public internet?
You don't need to. Twingate is the recommended path: it's outbound-only, identity-aware and works through CGNAT. If you really want public exposure, both stacks support it: Docker via Nginx Proxy Manager + Let's Encrypt, k3s via Traefik + cert-manager + a router port-forward.
What about backups?
- Docker: every service uses bind-mounted volumes under `<service>/data/`. A `tar.gz` of the repo + `data/` folders is your backup.
- k3s: PVCs use the `Retain` reclaim policy. For logical DB backups see `k3s/databases/README.md`. For full cluster snapshots, the `cluster-restore.sh` helper exists.
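The Docker-stack backup described above fits in a couple of lines of shell. This sketch bundles every service's `data/` folder into one dated archive; paths are illustrative, and the demo-scaffold lines exist only to keep the sketch self-contained:

```shell
# Archive all Docker-stack data/ folders into one dated tarball.
set -eu
BACKUP_DIR="${BACKUP_DIR:-./backups}"
SRC="${SRC:-./docker}"
mkdir -p "$BACKUP_DIR"

# demo scaffold so the sketch runs anywhere (remove in real use)
mkdir -p "$SRC/jellyfin/data"
echo demo > "$SRC/jellyfin/data/file.txt"

stamp="$(date +%Y%m%d)"
tar -czf "$BACKUP_DIR/docker-data-$stamp.tar.gz" -C "$SRC" .
echo "wrote $BACKUP_DIR/docker-data-$stamp.tar.gz"
```

Pair it with a cron entry (or a systemd timer) and an off-box copy and you have the whole Docker-side story.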
Does this work behind CGNAT / on a phone hotspot / on a hostile network?
Yes; that's exactly why Twingate is the recommended remote-access path. It only requires outbound HTTPS.
Contributions are welcome: adding a service, fixing a manifest, improving docs, sharing benchmarks.
- See CONTRIBUTING.md for the full workflow.
- See CODE_OF_CONDUCT.md for community standards.
The TL;DR for adding a service:
🐳 Add a Docker service

```bash
mkdir docker/my-app && cd docker/my-app
# create docker-compose.yml, setup.sh, README.md (with frontmatter)
git add . && git commit -m "feat(docker): add my-app"
git push   # docker/README.md regenerates automatically
```

Required frontmatter fields: `name`, `category`, `purpose`, `description`, `icon`, `features`, `resource_usage`. See any existing service for an example.
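A frontmatter block satisfying those required fields might look like the following; every value is illustrative, and the allowed `category` and `icon` values are enforced by the validation workflow, so copy from an existing service:

```yaml
---
name: my-app
category: monitoring              # must be one of the allowed categories
purpose: One-line summary shown in the catalog table
description: >
  A short paragraph used verbatim by the generated catalog pages
  and the per-category tables.
icon: my-app.png                  # illustrative; format per validation rules
features:
  - First notable feature
  - Second notable feature
resource_usage: ~150 MB RAM idle  # benchmarked footprint
---
```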
☸️ Add a k3s service

```bash
cd k3s
./scripts/new-service.sh my-app
# fill in manifests + write apps/my-app/README.md with frontmatter
git add . && git commit -m "feat(k3s): add my-app"
git push   # k3s/README.md regenerates and ArgoCD deploys
```

Required frontmatter fields: `name`, `category`, `purpose`, `description`, `icon`, `namespace`, `components`, `features`, `resource_usage`.
MIT: do whatever you want, just keep the notice.
Built on the shoulders of giants:
- The self-hosted community: awesome-selfhosted, r/selfhosted, r/homelab
- k3s, for making real Kubernetes possible on a Pi
- ArgoCD, for GitOps that actually works
- Bitnami SealedSecrets, for letting secrets live in git, safely
- Traefik + cert-manager, for ingress that just works
- Twingate, for zero-trust remote access without port forwarding
- The Raspberry Pi Foundation, for affordable, capable single-board computers that started the homelab revolution
- And every open-source project listed in the catalogs; none of this exists without them
If this repo helped you build something cool, ⭐ star it; it's the best way to help others find it.
Built one container, one manifest and one Pi at a time.