synapse

A local-first Personal AI Infrastructure built around OpenCode and Ollama

synapse is an open-source project inspired by Daniel Miessler's Personal AI Infrastructure (pai), adapted to work exclusively with local LLMs served by Ollama and driven through OpenCode, a terminal-based coding agent.

Core Philosophy

  • Local-first: No dependency on proprietary SaaS models (Claude, GPT, Copilot)
  • Declarative: Git is the source of truth, no runtime state
  • Boring by design: Static structure, explicit configuration, predictable behavior
  • Reproducible: Anyone can clone and run from Git alone

Features

  • πŸ€– Role-based agents: 5 specialized agents (code-writer, code-reviewer, researcher, documentarian, orchestrator)
  • 🧠 Local LLMs: All models run on Ollama (separate hardware supported)
  • πŸ“ Skill modules: Reusable workflows for common tasks
  • πŸ”’ Permission system: Explicit tool access control per agent
  • 🎯 Model-to-role mapping: Static, declarative model assignments
  • βœ… Validation hooks: Fail-fast startup validation

Prerequisites

Required

  • Ollama running on local or remote hardware
  • Python 3.8+ with pip
  • Git

Recommended Ollama Models

Pull these models to match the default role mappings:

ollama pull mistral      # code-writer
ollama pull llama2       # code-reviewer
ollama pull mixtral      # orchestrator, researcher
ollama pull codellama    # documentarian

Installation

Quick Install (Recommended)

git clone https://github.com/yourusername/synapse.git
cd synapse
./install.sh

The installation script will:

  • Check prerequisites (Python 3.8+, pip)
  • Install dependencies
  • Set up configuration directory
  • Validate Ollama connection
  • Guide you through next steps

Manual Installation

1. Clone the repository

git clone https://github.com/yourusername/synapse.git
cd synapse

2. Install Python dependencies

pip install -r requirements.txt

3. Configure Ollama endpoint

Edit models/models.yaml to point to your Ollama server:

ollama:
  endpoint: "http://localhost:11434"  # Change if Ollama runs on a different host
  timeout: 120
  verify_ssl: true

4. Validate configuration

./bin/synapse validate
# or
./bin/validate-setup.sh

You should see:

βœ… Ollama connection validated
βœ… All 5 role models available

Quick Start

Using the CLI

# List available agents
./bin/synapse agents

# Show detailed agent info
./bin/synapse show code-writer

# Test a model
./bin/synapse test code-writer "Write a hello world function in Python"

# Check permissions
./bin/synapse check-permission code-writer "Read(file.txt)"

# Export agents for OpenCode
./bin/synapse export /path/to/opencode/agents

Direct Integration

# Validate setup
./bin/synapse validate

# Test the model connection: list available models
./lib/ollama_connector.py list

# Test generation with a role
./lib/ollama_connector.py generate code-writer "Write a hello world function in Python"

# Validate startup
./hooks/startup-validation.py

# Resolve an agent role to its model
./hooks/model-resolution.py code-writer

Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                         User                            β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                     β”‚
                     β–Ό
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚      OpenCode        β”‚  (Local workstation)
          β”‚  Terminal Interface  β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                     β”‚
                     β”‚ Invoke agent by role
                     β–Ό
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚   Agent Definition   β”‚  (agents/*.md)
          β”‚   - code-writer      β”‚
          β”‚   - code-reviewer    β”‚
          β”‚   - researcher       β”‚
          β”‚   - documentarian    β”‚
          β”‚   - orchestrator     β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                     β”‚
                     β”‚ Resolve role β†’ model
                     β–Ό
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚   Model Mapping      β”‚  (models/models.yaml)
          β”‚   code-writer β†’ mistral
          β”‚   code-reviewer β†’ llama2
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                     β”‚
                     β”‚ HTTP API call
                     β–Ό
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚       Ollama         β”‚  (Dedicated hardware)
          β”‚   Multiple Models    β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
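
For illustration, the whole pipeline fits in a few lines of Python. This is a hedged sketch, not the shipped lib/ollama_connector.py: it assumes PyYAML and requests are installed and that models/models.yaml uses the ollama, models, and role_mapping keys shown later in this README.

# Sketch: resolve a role to its model and call Ollama's /api/generate endpoint
import requests
import yaml

config = yaml.safe_load(open("models/models.yaml"))
endpoint = config["ollama"]["endpoint"]                  # e.g. "http://localhost:11434"

role = "code-writer"
model_key = config["role_mapping"][role]                 # e.g. "mistral"
model_name = config["models"][model_key]["model_name"]   # e.g. "mistral:latest"

# stream=False makes Ollama return a single JSON object with a "response" field
resp = requests.post(
    f"{endpoint}/api/generate",
    json={"model": model_name, "prompt": "Write a hello world function in Python", "stream": False},
    timeout=config["ollama"].get("timeout", 120),
)
resp.raise_for_status()
print(resp.json()["response"])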

Configuration

Agent Definitions

Located in agents/, each agent has:

  • YAML frontmatter: metadata (name, model, permissions, color)
  • Markdown body: persona, responsibilities, constraints

Example (agents/code-writer.md):

---
name: code-writer
description: Generates new code with precision
model: mistral
permissions:
  allow:
    - Read(*)
    - Create(*)
    - Write(*)
---

# Code Writer

You are a code-writer agent...
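
To make the structure concrete, here is a hedged sketch of loading such a file and checking a tool call against its allow list. It assumes PyYAML and treats patterns like Read(*) as plain glob patterns; it is not the project's actual loader or permission-check.sh.

# Sketch: split YAML frontmatter from the Markdown persona and check a permission
import fnmatch
import yaml

def load_agent(path):
    text = open(path).read()
    # Frontmatter sits between the first two "---" markers; the rest is the persona
    _, frontmatter, persona = text.split("---", 2)
    return yaml.safe_load(frontmatter), persona.strip()

meta, persona = load_agent("agents/code-writer.md")

def is_allowed(meta, tool_call):
    # Assumption: allow entries such as "Read(*)" behave like glob patterns
    return any(fnmatch.fnmatch(tool_call, pattern) for pattern in meta["permissions"]["allow"])

print(meta["model"])                          # mistral
print(is_allowed(meta, "Read(file.txt)"))     # True
print(is_allowed(meta, "Delete(file.txt)"))   # False

This mirrors what ./bin/synapse check-permission code-writer "Read(file.txt)" reports from the CLI.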

Model Configuration

models/models.yaml defines:

  • Ollama endpoint configuration
  • Model declarations (provider, name, capabilities)
  • Role-to-model mappings

Skills

Located in skills/, each skill defines:

  • Purpose and prerequisites
  • Step-by-step process
  • Agent assignment
  • Quality checklist
  • Example usage

Hooks

Located in hooks/, lifecycle event handlers:

  • startup-validation.py: Validate Ollama connection (sketched below)
  • model-resolution.py: Resolve role to model
  • permission-check.sh: Enforce tool permissions
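
As a rough illustration of the startup check (a sketch only, not the shipped hook; it assumes requests and PyYAML), validation amounts to comparing the models Ollama reports via its /api/tags endpoint against every model referenced in the role mapping:

# Sketch: fail fast if any mapped model is missing from the Ollama server
import requests
import yaml

config = yaml.safe_load(open("models/models.yaml"))
endpoint = config["ollama"]["endpoint"]

# /api/tags lists locally available models, e.g. "mistral:latest"
tags = requests.get(f"{endpoint}/api/tags", timeout=10).json()
available = {m["name"] for m in tags["models"]}

missing = [
    f"{role} ({config['models'][key]['model_name']})"
    for role, key in config["role_mapping"].items()
    if config["models"][key]["model_name"] not in available
]

if missing:
    raise SystemExit("⚠️  Some models are not available:\n  - " + "\n  - ".join(missing))
print("βœ… Ollama connection validated")
print(f"βœ… All {len(config['role_mapping'])} role models available")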

Agent Roles

code-writer

Generates new code from specifications. No architectural authority.

Can: Write functions, classes, modules, implement features
Cannot: Make architecture decisions, review code, write docs

code-reviewer

Reviews code changes for quality and correctness. Read-only access.

Can: Review diffs, identify bugs, flag security issues
Cannot: Modify code, commit changes

researcher

Gathers context and synthesizes findings.

Can: Search codebase, analyze patterns, summarize information
Cannot: Write code, make decisions, implement solutions

documentarian

Creates clear, comprehensive documentation.

Can: Write READMEs, API docs, guides, tutorials
Cannot: Write production code, make architectural decisions

orchestrator

Coordinates multi-agent workflows for complex tasks.

Can: Break down tasks, delegate to agents, coordinate handoffs
Cannot: Write code directly, review code, write documentation

Skills

Essential skills included:

  • code-generation: Generate new code from specifications
  • code-review: Systematic code quality review
  • codebase-research: Find existing patterns and conventions
  • documentation-generation: Create READMEs and guides
  • task-orchestration: Coordinate multi-agent workflows

See skills/ directory for detailed workflow definitions.

Usage Patterns

Simple Task (Single Agent)

# Use code-writer to implement a feature
opencode "Implement user authentication function" --agent code-writer

Multi-Step Task (Orchestrator)

# Use orchestrator to coordinate complex workflow
opencode "Add authentication to the API" --agent orchestrator

# Orchestrator will:
# 1. Use researcher to find existing patterns
# 2. Use code-writer to implement middleware
# 3. Use code-reviewer to validate security
# 4. Use documentarian to write setup guide

Review Changes

# Use code-reviewer for pull request review
opencode "Review the changes in PR #123" --agent code-reviewer

Customization

Add a New Agent

  1. Create agents/your-agent.md with YAML frontmatter and description
  2. Add role mapping in models/models.yaml
  3. Define permissions in agent frontmatter

Add a New Skill

  1. Create skills/your-skill.md with process definition
  2. Specify agent assignment and triggers
  3. Document prerequisites and quality checklist

Use Different Models

Edit models/models.yaml:

role_mapping:
  code-writer: deepseek-coder  # Change from mistral
  code-reviewer: codellama      # Change from llama2

Then add model definition:

models:
  deepseek-coder:
    provider: ollama
    model_name: "deepseek-coder:latest"
    roles:
      - code-writer
    context_window: 16384

Troubleshooting

Connection Failed

❌ Ollama connection failed: Connection refused

Solution: Ensure Ollama is running:

# Check if Ollama is running
curl http://localhost:11434/api/tags

# Start Ollama if needed
ollama serve

Models Unavailable

⚠️  Some models are not available:
  - code-writer (mistral:latest)

Solution: Pull the missing model:

ollama pull mistral

Configuration Not Found

❌ Configuration not found: models.yaml not found

Solution: Ensure you're running from the repository root, or set an absolute path in the scripts.

Project Structure

.
β”œβ”€β”€ agents/               # Agent role definitions
β”‚   β”œβ”€β”€ code-writer.md
β”‚   β”œβ”€β”€ code-reviewer.md
β”‚   β”œβ”€β”€ researcher.md
β”‚   β”œβ”€β”€ documentarian.md
β”‚   └── orchestrator.md
β”œβ”€β”€ models/
β”‚   └── models.yaml      # Model configuration and mappings
β”œβ”€β”€ config/
β”‚   └── settings.yaml    # Central settings
β”œβ”€β”€ skills/              # Reusable skill workflows
β”‚   β”œβ”€β”€ code-generation.md
β”‚   β”œβ”€β”€ code-review.md
β”‚   β”œβ”€β”€ codebase-research.md
β”‚   β”œβ”€β”€ documentation-generation.md
β”‚   └── task-orchestration.md
β”œβ”€β”€ hooks/               # Lifecycle event handlers
β”‚   β”œβ”€β”€ startup-validation.py
β”‚   β”œβ”€β”€ model-resolution.py
β”‚   └── permission-check.sh
β”œβ”€β”€ lib/
β”‚   └── ollama_connector.py  # Ollama API integration
β”œβ”€β”€ workflows/           # Example workflow compositions
β”œβ”€β”€ docs/                # Additional documentation
β”œβ”€β”€ requirements.txt     # Python dependencies
└── README.md

Contributing

This is a reference implementation. Contributions that maintain the "boring by design" philosophy are welcome:

  • Prefer explicit over implicit
  • Prefer static over dynamic
  • Prefer deletion over addition
  • Keep it reproducible

Inspiration

This project is inspired by:

  • pai by Daniel Miessler
  • OpenCode terminal-based coding agent
  • Ollama local LLM runtime

License

See LICENSE file.

Non-Goals

This project explicitly does not aim to:

  • Replace IDEs or full AI platforms
  • Build GUIs, web UIs, or desktop apps
  • Support cloud-only deployments
  • Abstract away LLM behavior with "magic"
  • Chase benchmark performance

If it feels impressive, it's probably wrong. This is intentionally boring infrastructure.


If you can reproduce it from Git alone, we succeeded.
