# synapse

A local-first Personal AI Infrastructure built around OpenCode and Ollama
synapse is an open-source project inspired by Daniel Miessler's Personal AI Infrastructure (PAI), adapted to work exclusively with local LLMs served by Ollama and driven through OpenCode, a terminal-based coding agent. It follows a few core principles:
- Local-first: No dependency on proprietary SaaS models (Claude, GPT, Copilot)
- Declarative: Git is the source of truth, no runtime state
- Boring by design: Static structure, explicit configuration, predictable behavior
- Reproducible: Anyone can clone and run from Git alone
## Features

- 🤖 Role-based agents: 5 specialized agents (code-writer, code-reviewer, researcher, documentarian, orchestrator)
- 🧠 Local LLMs: All models run on Ollama (separate hardware supported)
- 📚 Skill modules: Reusable workflows for common tasks
- 🔒 Permission system: Explicit tool access control per agent
- 🎯 Model-to-role mapping: Static, declarative model assignments
- ✅ Validation hooks: Fail-fast startup validation
## Prerequisites

- Ollama running on local or remote hardware
- Python 3.8+ with pip
- Git
Pull these models to match the default role mappings:

```bash
ollama pull mistral     # code-writer
ollama pull llama2      # code-reviewer
ollama pull mixtral     # orchestrator, researcher
ollama pull codellama   # documentarian
```

## Installation

### Quick Install

```bash
git clone https://github.com/yourusername/synapse.git
cd synapse
./install.sh
```

The installation script will:
- Check prerequisites (Python 3.8+, pip)
- Install dependencies
- Set up configuration directory
- Validate Ollama connection
- Guide you through next steps
### Manual Install

```bash
git clone https://github.com/yourusername/synapse.git
cd synapse
pip install -r requirements.txt
```

Edit models/models.yaml to point to your Ollama server:

```yaml
ollama:
  endpoint: "http://localhost:11434"  # Change if Ollama is on a different host
  timeout: 120
  verify_ssl: true
```

Validate the setup:

```bash
./bin/synapse validate
# or
./bin/validate-setup.sh
```

You should see:

```
✅ Ollama connection validated
✅ All 5 role models available
```
## Command-Line Usage

```bash
# List available agents
./bin/synapse agents

# Show detailed agent info
./bin/synapse show code-writer

# Test a model
./bin/synapse test code-writer "Write a hello world function in Python"

# Check permissions
./bin/synapse check-permission code-writer "Read(file.txt)"

# Export agents for OpenCode
./bin/synapse export /path/to/opencode/agents

# Validate setup
./bin/synapse validate
```

Test the model connection through the Ollama connector:

```bash
# List available models
./lib/ollama_connector.py list

# Test generation with a role
./lib/ollama_connector.py generate code-writer "Write a hello world function in Python"
```

Run the lifecycle hooks directly:

```bash
./hooks/startup-validation.py
./hooks/model-resolution.py code-writer
```

## Architecture
```
┌───────────────────────┐
│         User          │
└───────────┬───────────┘
            │
            ▼
┌───────────────────────┐
│       OpenCode        │  (Local workstation)
│  Terminal Interface   │
└───────────┬───────────┘
            │
            │ Invoke agent by role
            ▼
┌───────────────────────┐
│   Agent Definition    │  (agents/*.md)
│   - code-writer       │
│   - code-reviewer     │
│   - researcher        │
│   - documentarian     │
│   - orchestrator      │
└───────────┬───────────┘
            │
            │ Resolve role → model
            ▼
┌───────────────────────┐
│     Model Mapping     │  (models/models.yaml)
│ code-writer  → mistral│
│ code-reviewer → llama2│
└───────────┬───────────┘
            │
            │ HTTP API call
            ▼
┌───────────────────────┐
│        Ollama         │  (Dedicated hardware)
│    Multiple Models    │
└───────────────────────┘
```
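The last hop in the diagram is a plain HTTP request to Ollama's REST API. As a rough sketch (assuming the default mistral mapping and a local Ollama instance; the exact payload synapse builds may differ), the call made on behalf of the code-writer role looks something like this:

```bash
# Hedged sketch: a direct request to Ollama's /api/generate endpoint,
# roughly equivalent to what the connector does for the code-writer role.
curl -s http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Write a hello world function in Python",
  "stream": false
}'
```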
### Agents

Located in agents/, each agent has:

- YAML frontmatter: metadata (name, model, permissions, color)
- Markdown body: persona, responsibilities, constraints
Example (agents/code-writer.md):

```markdown
---
name: code-writer
description: Generates new code with precision
model: mistral
permissions:
  allow:
    - Read(*)
    - Create(*)
    - Write(*)
---

# Code Writer

You are a code-writer agent...
```

### Models

models/models.yaml defines (see the sketch after this list):
- Ollama endpoint configuration
- Model declarations (provider, name, capabilities)
- Role-to-model mappings
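Putting the three parts together, a minimal models.yaml has roughly this shape (the field names follow the snippets shown elsewhere in this README; the context_window value is illustrative only):

```yaml
ollama:
  endpoint: "http://localhost:11434"
  timeout: 120

models:
  mistral:
    provider: ollama
    model_name: "mistral:latest"
    roles:
      - code-writer
    context_window: 8192   # illustrative value, not a recommendation

role_mapping:
  code-writer: mistral
```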
### Skills

Located in skills/, each skill defines:
- Purpose and prerequisites
- Step-by-step process
- Agent assignment
- Quality checklist
- Example usage
### Hooks

Located in hooks/, lifecycle event handlers:

- startup-validation.py: Validate Ollama connection
- model-resolution.py: Resolve role to model
- permission-check.sh: Enforce tool permissions
## Agent Roles

### code-writer

Generates new code from specifications. No architectural authority.

- Can: Write functions, classes, modules, implement features
- Cannot: Make architecture decisions, review code, write docs

### code-reviewer

Reviews code changes for quality and correctness. Read-only access.

- Can: Review diffs, identify bugs, flag security issues
- Cannot: Modify code, commit changes

### researcher

Gathers context and synthesizes findings.

- Can: Search codebase, analyze patterns, summarize information
- Cannot: Write code, make decisions, implement solutions

### documentarian

Creates clear, comprehensive documentation.

- Can: Write READMEs, API docs, guides, tutorials
- Cannot: Write production code, make architectural decisions

### orchestrator

Coordinates multi-agent workflows for complex tasks.

- Can: Break down tasks, delegate to agents, coordinate handoffs
- Cannot: Write code directly, review code, write documentation
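These Can/Cannot boundaries are backed by the permission system, and the check-permission command shown earlier can be used to sanity-check them. A quick illustration (the file name is just an example and the output format may vary):

```bash
# code-writer declares Read/Create/Write permissions in its frontmatter
./bin/synapse check-permission code-writer "Write(app.py)"

# code-reviewer is read-only, so a write request should be denied
./bin/synapse check-permission code-reviewer "Write(app.py)"
```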
## Skill Modules

Essential skills included:
- code-generation: Generate new code from specifications
- code-review: Systematic code quality review
- codebase-research: Find existing patterns and conventions
- documentation-generation: Create READMEs and guides
- task-orchestration: Coordinate multi-agent workflows
See skills/ directory for detailed workflow definitions.
## Usage Examples

```bash
# Use code-writer to implement a feature
opencode "Implement user authentication function" --agent code-writer
```

```bash
# Use orchestrator to coordinate complex workflow
opencode "Add authentication to the API" --agent orchestrator

# Orchestrator will:
# 1. Use researcher to find existing patterns
# 2. Use code-writer to implement middleware
# 3. Use code-reviewer to validate security
# 4. Use documentarian to write setup guide
```

```bash
# Use code-reviewer for pull request review
opencode "Review the changes in PR #123" --agent code-reviewer
```

## Extending synapse

### Adding a New Agent

- Create `agents/your-agent.md` with YAML frontmatter and description
- Add role mapping in `models/models.yaml`
- Define permissions in agent frontmatter

### Adding a New Skill

- Create `skills/your-skill.md` with process definition (see the sketch below)
- Specify agent assignment and triggers
- Document prerequisites and quality checklist
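A minimal sketch of such a skill file, assuming it mirrors the agent-file layout of frontmatter plus a markdown body (the exact fields and section names should follow the existing files in skills/):

```markdown
---
name: your-skill
agent: code-writer        # agent assignment (illustrative)
---

# Your Skill

## Purpose and Prerequisites
What this skill is for and what must exist before running it.

## Process
1. Step one
2. Step two

## Quality Checklist
- [ ] Output meets the stated purpose

## Example Usage
opencode "..." --agent code-writer
```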
### Changing Models

Edit models/models.yaml:

```yaml
role_mapping:
  code-writer: deepseek-coder    # Change from mistral
  code-reviewer: codellama       # Change from llama2
```

Then add model definition:

```yaml
models:
  deepseek-coder:
    provider: ollama
    model_name: "deepseek-coder:latest"
    roles:
      - code-writer
    context_window: 16384
```

## Troubleshooting

```
❌ Ollama connection failed: Connection refused
```
Solution: Ensure Ollama is running:
```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama if needed
ollama serve
```

```
⚠️ Some models are not available:
  - code-writer (mistral:latest)
```

Solution: Pull the missing model:

```bash
ollama pull mistral
```

```
❌ Configuration not found: models.yaml not found
```

Solution: Ensure you're running from the repository root, or set an absolute path in the scripts.
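For example, assuming the repository was cloned to ~/synapse:

```bash
cd ~/synapse            # run commands from the repository root
./bin/synapse validate
```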
## Repository Structure

```
.
├── agents/                       # Agent role definitions
│   ├── code-writer.md
│   ├── code-reviewer.md
│   ├── researcher.md
│   ├── documentarian.md
│   └── orchestrator.md
├── models/
│   └── models.yaml               # Model configuration and mappings
├── config/
│   └── settings.yaml             # Central settings
├── skills/                       # Reusable skill workflows
│   ├── code-generation.md
│   ├── code-review.md
│   ├── codebase-research.md
│   ├── documentation-generation.md
│   └── task-orchestration.md
├── hooks/                        # Lifecycle event handlers
│   ├── startup-validation.py
│   ├── model-resolution.py
│   └── permission-check.sh
├── lib/
│   └── ollama_connector.py       # Ollama API integration
├── workflows/                    # Example workflow compositions
├── docs/                         # Additional documentation
├── requirements.txt              # Python dependencies
└── README.md
```
## Contributing

This is a reference implementation. Contributions that maintain the "boring by design" philosophy are welcome:
- Prefer explicit over implicit
- Prefer static over dynamic
- Prefer deletion over addition
- Keep it reproducible
## Acknowledgments

This project is inspired by Daniel Miessler's Personal AI Infrastructure (PAI) and builds on OpenCode and Ollama.

## License

See the LICENSE file.
## Non-Goals

This project explicitly does not aim to:
- Replace IDEs or full AI platforms
- Build GUIs, web UIs, or desktop apps
- Support cloud-only deployments
- Abstract away LLM behavior with "magic"
- Chase benchmark performance
If it feels impressive, it's probably wrong. This is intentionally boring infrastructure.
If you can reproduce it from Git alone, we succeeded.