## Background
`run.Manager.cleanOrphanNetworks` (added in the fix for #315) runs at `NewManagerWithOptions` time and removes moat-managed container networks whose run dirs no longer exist. It's gated behind `ManagerOptions.ReapOrphanNetworks`, so it only fires from the `moat run` and `moat clean` paths — read-only commands skip it to avoid the per-invocation cost.
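For orientation, the current gating shape looks roughly like this. This is a minimal sketch: `ManagerOptions.ReapOrphanNetworks`, `NewManagerWithOptions`, and `cleanOrphanNetworks` are named in the issue, while the struct layouts and the stubbed sweep body are assumptions for illustration.

```go
package main

import "fmt"

// ManagerOptions mirrors the gating described above; only `moat run`
// and `moat clean` set ReapOrphanNetworks, so read-only commands skip
// the sweep cost. Field layout is illustrative, not the real type.
type ManagerOptions struct {
	ReapOrphanNetworks bool
}

type Manager struct {
	opts ManagerOptions
}

// cleanOrphanNetworks stands in for the real sweep, which removes
// moat-managed networks whose run dirs no longer exist on disk.
func (m *Manager) cleanOrphanNetworks() {
	fmt.Println("sweeping orphan networks")
}

// NewManagerWithOptions performs the sweep at construction time,
// but only when the caller opted in.
func NewManagerWithOptions(opts ManagerOptions) *Manager {
	m := &Manager{opts: opts}
	if opts.ReapOrphanNetworks {
		m.cleanOrphanNetworks()
	}
	return m
}

func main() {
	NewManagerWithOptions(ManagerOptions{ReapOrphanNetworks: true})
}
```

The per-invocation cost described below falls out of this shape: every opted-in CLI process pays for a full sweep at construction.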
## Why move it
The proxy daemon is a more natural home for the reaper:
- The daemon already outlives individual CLI invocations (auto-shuts down after 5 minutes idle).
- Sweep cost happens once per daemon lifetime instead of once per `moat run`.
- The daemon already tracks active runs via run-token registration — it has authoritative knowledge of which networks are alive without scanning `~/.moat/runs/` on disk.
- A periodic background sweep (e.g., every 10 minutes) would catch leaks from hard process kills (`go test -timeout`, SIGKILL) without waiting for the next user `moat run`.
- CLI startup latency would no longer scale with orphan count on Apple containers.
## Proposed work
- Add a periodic reaper goroutine to the daemon (`internal/daemon/`).
- Cross-reference moat-managed networks against the daemon's in-memory run registry (more authoritative than the disk run dirs), with the disk run dirs as a fallback for runs that haven't registered yet.
- Remove the `ReapOrphanNetworks` plumbing in `run.ManagerOptions` once the daemon owns this fully.
- Optionally expose `moat doctor` or `moat cleanup networks` as a user-facing escape hatch for forced reaping.
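The cross-reference step can be sketched as pure set logic: a network is an orphan only if its run is in neither the in-memory registry nor the disk fallback. The function and parameter names here are hypothetical; only the registry-plus-disk-fallback rule comes from the issue.

```go
package main

import "fmt"

// orphanNetworks returns the run IDs of moat-managed networks that no
// live registration (authoritative) or disk run dir (fallback for runs
// not yet registered) accounts for.
func orphanNetworks(networks []string, registry, diskRuns map[string]bool) []string {
	var orphans []string
	for _, id := range networks {
		if registry[id] {
			continue // daemon knows this run is alive
		}
		if diskRuns[id] {
			continue // run dir still exists; run may simply not have registered yet
		}
		orphans = append(orphans, id)
	}
	return orphans
}

func main() {
	nets := []string{"run-a", "run-b", "run-c"}
	registry := map[string]bool{"run-a": true} // registered via run token
	disk := map[string]bool{"run-b": true}     // run dir on disk, not yet registered
	fmt.Println(orphanNetworks(nets, registry, disk)) // [run-c]
}
```

Checking the registry first avoids a disk scan for the common case, which is exactly the authority ordering the bullet above proposes.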
## Architectural rationale
From the architecture review of the #315 fix:
> Orphan cleanup is a daemon responsibility, not a per-CLI-instance responsibility. Sweeping from the CLI-launched daemon at startup, and periodically (e.g., every N minutes), is a better fit than sweeping on every `moat run`/`moat list`.
## Out of scope
The bounded `networkCreateTimeout` in `internal/container/runtime.go` should remain regardless — it's defense-in-depth for when the runtime itself is unresponsive, independent of how/when reaping happens.