A self-hosted, zero-cost AI development stack built around local LLMs. Run powerful language models on your own hardware — no token spend, no cloud dependency. Free-tier cloud models are available as an optional supplement.
Built around Ollama, OpenCode, and a lightweight smart router that automatically selects the right model for each request.
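The routing idea can be sketched roughly as follows. This is a minimal illustration only; the heuristics, thresholds, and model tags below are assumptions for demonstration, not the project's actual routing logic:

```python
def pick_model(prompt: str) -> str:
    """Pick an Ollama model tag based on a rough estimate of request type.

    Hypothetical heuristic: code-looking prompts go to a code-tuned model,
    very long prompts to a larger model, everything else to a small default.
    """
    # Crude markers suggesting the prompt contains or asks about code.
    code_markers = ("```", "def ", "class ", "import ", "#include", "fn ")
    if any(marker in prompt for marker in code_markers):
        return "qwen2.5-coder:7b"  # assumed code-tuned local model
    if len(prompt) > 2000:
        return "llama3.1:70b"      # assumed larger model for long inputs
    return "llama3.1:8b"           # assumed small default local model


print(pick_model("What is the capital of France?"))  # → llama3.1:8b
```

A real router would likely also weigh context length, tool-use needs, and whether a free-tier cloud model should handle the request; the chosen tag would then be passed to Ollama's API as the `model` field.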
MIT — use freely, contributions welcome.