Persistent memory and code intelligence for AI coding agents.
nano-brain solves the #1 problem with AI assistants: they forget everything between sessions.
nano-brain automatically ingests your AI sessions, notes, and codebase, indexes everything with full-text search + vector embeddings + knowledge graph, serves memories via 22 MCP tools, and learns which memories matter most to you over time.
- Hybrid search pipeline — BM25 + vector + RRF fusion + VoyageAI neural reranking with 6 ranking signals
- Code intelligence — symbol graph, call flow detection, impact analysis, change detection via Tree-sitter AST
- Automatic data ingestion — session harvesting (2min poll), file watching (chokidar), codebase indexing
- Multi-workspace isolation — per-workspace SQLite databases, cross-workspace search with `--scope=all`
- Flexible embedding providers — VoyageAI, Ollama, OpenAI-compatible
- Dual vector stores — Qdrant (production) or sqlite-vec (embedded)
- Privacy-first — 100% local processing option, your code never leaves your machine
- MCP + CLI — stdio/HTTP/SSE transports for local or containerized environments
- Automatic corruption recovery — detects & recovers from database corruption on startup with zero user intervention
- Self-learning system — Thompson Sampling tunes search parameters, preference learning personalizes results
- Knowledge graph — LLM-extracted entities and relationships, graph traversal, temporal queries
- Memory intelligence — LLM categorization, entity pruning, proactive suggestions
User Query
│
▼
┌─────────────────┐
│ Query Expansion │ ← (currently stubbed, planned)
│ (optional) │ generates 2-3 query variants
└────────┬────────┘
│
┌────┴────┐
▼ ▼
┌────────┐ ┌──────────┐
│ BM25 │ │ Vector │
│ (FTS5) │ │ (Qdrant │
│ │ │ or │
│ │ │ sqlite- │
│ │ │ vec) │
└───┬────┘ └────┬─────┘
│ │
▼ ▼
┌─────────────────┐
│ RRF Fusion │ ← k=60, original query 2× weight
│ │
└────────┬────────┘
│
▼
┌─────────────────┐
│ PageRank Boost │ ← Centrality from file dependency graph
│ │ weight: 0.1 (default)
└────────┬────────┘
│
▼
┌─────────────────┐
│ Supersede │ ← 0.3× demotion for replaced documents
│ Demotion │
└────────┬────────┘
│
▼
┌─────────────────┐
│ Neural Reranking│ ← VoyageAI rerank-2.5-lite
│ (optional) │
└────────┬────────┘
│
▼
┌─────────────────┐
│ Position-Aware │ ← top 3: 75/25, 4-10: 60/40, 11+: 40/60
│ Blending │ (RRF weight / rerank weight)
└────────┬────────┘
│
▼
Final Results
Memory Write
│
▼
┌─────────────────┐
│ Save to File │ → ~/.nano-brain/memory/
│ Hash + DB Insert │ → documents + content tables
│ FTS5 Index │ → documents_fts (auto-trigger)
└────────┬────────┘
│
┌────┴────┐
▼ ▼
┌────────┐ ┌──────────┐
│Keyword │ │ Async │ (fire-and-forget)
│Categorize│ │ Processes│
│auto:* │ │ │
└────────┘ ├──────────┤
│ LLM │ → llm:* tags
│ Categorize│
├──────────┤
│ Entity │ → knowledge graph
│ Extract │
└──────────┘
memory_search — BM25 only (fast, exact keyword matching)
memory_vsearch — Vector only (semantic similarity via embeddings)
memory_query — Full hybrid pipeline with 6 ranking signals:
- BM25 full-text scoring — SQLite FTS5 with porter stemming
- Vector cosine similarity — Qdrant or sqlite-vec embeddings
- RRF fusion — k=60, original query weighted 2×
- PageRank centrality boost — from file dependency graph (weight: 0.1)
- Supersede demotion — 0.3× penalty for replaced documents
- VoyageAI neural reranking — rerank-2.5-lite with position-aware blending:
- Top 3 results: 75% RRF / 25% rerank
- Ranks 4-10: 60% RRF / 40% rerank
- Ranks 11+: 40% RRF / 60% rerank
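The fusion and blending steps above can be sketched in a few lines of TypeScript. This is an illustrative sketch, not nano-brain's actual internals; the function names and data shapes are assumptions, but the constants (k=60, the 75/25, 60/40, 40/60 blend ratios) come straight from the list above.

```typescript
type Ranked = { id: string; rank: number };

// Reciprocal Rank Fusion: score(d) = sum over result lists of weight / (k + rank(d)).
// With the original query weighted 2x, its list gets weight 2 and variants get 1.
function rrfFuse(lists: Ranked[][], weights: number[], k = 60): Map<string, number> {
  const scores = new Map<string, number>();
  lists.forEach((list, i) => {
    for (const { id, rank } of list) {
      scores.set(id, (scores.get(id) ?? 0) + weights[i] / (k + rank));
    }
  });
  return scores;
}

// Position-aware blend of the RRF score with the neural rerank score:
// top 3 lean on RRF (75/25), ranks 4-10 blend 60/40, ranks 11+ trust the reranker (40/60).
function blend(position: number, rrfScore: number, rerankScore: number): number {
  const [wRrf, wRerank] =
    position <= 3 ? [0.75, 0.25] : position <= 10 ? [0.6, 0.4] : [0.4, 0.6];
  return wRrf * rrfScore + wRerank * rerankScore;
}
```

The same document appearing in both the BM25 and vector lists accumulates score from each, which is what lets hybrid retrieval outrank either signal alone.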
Query expansion generates 2-3 query variants before search. The pipeline supports it, but no expansion provider is currently active.
Built on Tree-sitter AST parsing for TypeScript, JavaScript, and Python:
code_context — 360° view of a code symbol:
- Direct callers and callees
- Transitive call flows (upstream/downstream)
- File location, definition, and references
- Centrality score (PageRank) and cluster membership
code_impact — Change impact analysis:
- Upstream dependencies (what calls this?)
- Downstream dependencies (what does this call?)
- BFS traversal with configurable depth
- Risk assessment for refactoring
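The traversal behind the impact analysis can be sketched as a plain BFS over the caller → callee edge list with a depth cap. The edge shape and function name here are illustrative, not the actual `code_impact` implementation:

```typescript
type Edge = { caller: string; callee: string };

// Downstream impact: everything reachable from `start` by following
// caller -> callee edges, up to maxDepth hops. Swapping the edge direction
// (callee -> caller) would give the upstream set instead.
function downstream(edges: Edge[], start: string, maxDepth: number): Set<string> {
  const reached = new Set<string>();
  let frontier = [start];
  for (let depth = 0; depth < maxDepth && frontier.length > 0; depth++) {
    const next: string[] = [];
    for (const sym of frontier) {
      for (const e of edges) {
        if (e.caller === sym && !reached.has(e.callee)) {
          reached.add(e.callee);
          next.push(e.callee);
        }
      }
    }
    frontier = next;
  }
  return reached;
}
```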
code_detect_changes — Map git diff to affected symbols:
- Parses `git diff` output
- Identifies modified symbols via Tree-sitter
- Returns symbol names, types, and file locations
- Scope: `staged`, `unstaged`, or `all`
memory_focus — File dependency context:
- Import/export graph for a file
- Centrality score (PageRank)
- Cluster membership (Louvain algorithm)
- Direct dependencies and dependents
memory_graph_stats — Dependency graph overview:
- Total files, symbols, edges
- Cycle detection
- Clustering coefficient
- Top central files
Symbol tracking — Cross-repo symbol queries:
- Redis keys, PubSub channels
- MySQL tables, columns
- API endpoints (Express, FastAPI)
- Bull/BullMQ queues
- GraphQL types, queries, mutations
All data sources are indexed automatically:
Session harvesting — Converts OpenCode JSON sessions into searchable markdown:
- Polls `~/.opencode/sessions/` every 2 minutes
- Extracts user queries, assistant responses, tool calls
- Incremental append (hash-based deduplication)
File watching — Monitors collections for changes:
- Chokidar watches configured directories
- Dirty-flag tracking for incremental updates
- Reindexes every 5 minutes if changes detected
Codebase indexing — Tree-sitter AST → symbol graph:
- Parses TS/JS/Python files
- Extracts functions, classes, methods, variables
- Builds call graph (caller → callee edges)
- Computes PageRank centrality
- Detects clusters via Louvain algorithm
- Identifies call flows (entry points → leaf functions)
Incremental behavior:
- Hash-based file skipping (SHA-256 content addressing)
- Adaptive embedding backoff (exponential retry)
- Batch processing for large codebases
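The hash-based skip check above amounts to comparing a file's SHA-256 digest against the digest stored at the last indexing run. A minimal sketch (the in-memory Map stands in for the real store; names are illustrative):

```typescript
import { createHash } from "node:crypto";

// Digest of the last indexed version of each file, keyed by path.
const lastIndexed = new Map<string, string>();

// Returns true only when the content actually changed since the last run,
// so unchanged files are skipped without re-parsing or re-embedding.
function shouldReindex(path: string, content: string): boolean {
  const digest = createHash("sha256").update(content).digest("hex");
  if (lastIndexed.get(path) === digest) return false; // unchanged, skip
  lastIndexed.set(path, digest);
  return true;
}
```

Because the digest is over content rather than mtime, a `touch` or checkout that leaves bytes identical still skips the file.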
nano-brain runs 9 background jobs to keep your memory fresh and intelligent:
| Job | Interval | What It Does |
|---|---|---|
| File reindex | 5 min | Watch collections, reindex changed files |
| Session harvest | 2 min | Convert OpenCode sessions → searchable markdown |
| Embedding | 60s (adaptive) | Generate vector embeddings for new docs |
| Learning cycle | 10 min | Thompson Sampling + preference weight updates |
| Consolidation | 1 hour | LLM summarizes related memories |
| Importance | 30 min | Rescore document importance from usage |
| Sequence analysis | 30 min | Detect query patterns for proactive suggestions |
| Pruning (soft) | 6 hours | Soft-delete contradicted/orphan entities |
| Pruning (hard) | 7 days | Permanently delete old soft-deleted entities |
Heading-aware markdown chunking that respects document structure:
- Target size: 900 tokens (~3600 characters)
- Overlap: 15% between chunks (~540 characters)
- Respects boundaries: Code fences, headings, paragraphs
- Break point scoring: h1=100, h2=90, h3=80, code-fence=80, hr=60, blank-line=40
- Content-addressed storage: SHA-256 hash deduplication
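The break-point scoring can be sketched as a small scoring function over candidate lines; the weights are the ones listed above, while the regexes and the selection helper are simplifying assumptions:

```typescript
// Score a line as a potential chunk boundary: higher means a better split point.
function breakScore(line: string): number {
  if (/^# /.test(line)) return 100;              // h1 heading
  if (/^## /.test(line)) return 90;              // h2 heading
  if (/^### /.test(line)) return 80;             // h3 heading
  if (/^```/.test(line)) return 80;              // code-fence boundary
  if (/^(-{3,}|\*{3,})$/.test(line)) return 60;  // horizontal rule
  if (line.trim() === "") return 40;             // blank line
  return 0;                                      // mid-paragraph: avoid
}

// Within a window of lines near the ~900-token target, pick the
// highest-scoring boundary (first one wins on ties).
function bestBreak(lines: string[]): number {
  let best = 0, bestScore = -1;
  lines.forEach((line, i) => {
    const s = breakScore(line);
    if (s > bestScore) { bestScore = s; best = i; }
  });
  return best;
}
```

Scoring boundaries rather than splitting at a fixed offset is what keeps code fences and headings intact inside a single chunk.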
SQLite (via better-sqlite3):
- `documents` — metadata, content, embeddings
- `chunks` — heading-aware markdown chunks (900 tokens, 15% overlap)
- `fts_index` — FTS5 virtual table with porter stemming
- `vec_index` — sqlite-vec extension (cosine distance)
- `symbols` — code symbols (functions, classes, variables)
- `call_edges` — caller → callee relationships
- `file_deps` — import/export graph
- `clusters` — Louvain clustering results
- `flows` — detected call flows (entry → leaf)
Qdrant (optional, production vector store):
- Included in the `nano-brain docker start` compose stack, or managed standalone via `qdrant up/down/status` commands
- Automatic migration from sqlite-vec
- Verification and cleanup tools
Embedding providers:
- VoyageAI — voyage-code-3 (1024 dims, code-optimized)
- Ollama — local models (nomic-embed-text, etc.)
- OpenAI-compatible — Azure, LM Studio, custom endpoints
Reranking:
- VoyageAI — rerank-2.5-lite (neural reranking)
Storage management:
- Per-workspace SQLite databases (isolated)
- Content-addressed storage (SHA-256 deduplication)
- Retention policies (maxSize budget, auto-cleanup)
- Disk space checks before indexing
nano-brain uses 18 SQLite tables organized into 5 functional groups:
| Table | Purpose |
|---|---|
| Core Documents | |
| `documents` | Document metadata, content, embeddings |
| `chunks` | Heading-aware markdown chunks (900 tokens, 15% overlap) |
| `content` | Raw content storage (content-addressed) |
| Search Indexes | |
| `documents_fts` | FTS5 full-text search index (porter stemming) |
| `vec_index` | sqlite-vec vector index (cosine distance) |
| Code Intelligence | |
| `symbols` | Code symbols (functions, classes, variables) |
| `call_edges` | Caller → callee relationships |
| `file_deps` | Import/export graph |
| `clusters` | Louvain clustering results |
| `flows` | Detected call flows (entry → leaf) |
| Knowledge Graph | |
| `entities` | LLM-extracted entities (people, concepts, tools) |
| `relationships` | Entity-to-entity connections |
| Learning & Intelligence | |
| `telemetry` | Search queries, results, expand feedback |
| `bandit_variants` | Thompson Sampling search parameter tuning |
| `config_versions` | Search config version history |
| `consolidations` | LLM-generated memory summaries |
| `query_sequences` | Query pattern detection for proactive suggestions |
| `category_preferences` | Per-workspace category weights from expand patterns |
Why corruption happens: SQLite databases can become corrupted due to:
- Unexpected process termination during write operations
- Filesystem crashes or power loss during WAL (Write-Ahead Log) checkpoint
- Disk I/O errors or hardware faults
- Rare race conditions in concurrent access (even with better-sqlite3 serialization)
Automatic corruption detection:
nano-brain automatically detects database corruption on startup via PRAGMA integrity_check:
- Runs before any database operations in `createStore()`
- Checks database file integrity without modifying data
- Takes 50-500ms depending on database size
Automatic recovery: When corruption is detected:
- Backup corrupted file — Renamed to `.corrupted.{ISO-timestamp}` for forensics/recovery
- Clear WAL/SHM files — Removes Write-Ahead Log and shared memory files
- Initialize fresh database — Creates clean SQLite database from scratch
- Verify fresh database — Runs integrity check to confirm recovery succeeded
- Emit metric — `database_corruption_detected` counter for monitoring/alerting
Why this works: The database is a cache/index — all data is re-derivable from source files. Recovery involves:
- Session harvesting (re-ingests from session logs)
- Codebase reindexing (rescan source files)
- Memory re-embedding (regenerates vectors)
- Call graph rebuilding (reparses symbols)
Automatic restart with launchd:
On macOS, nano-brain runs as a launchd service (com.tamlh.nano-brain):
- If corruption causes a fatal error, process exits
- launchd automatically restarts it after 10-second throttle
- On restart, `checkAndRecoverDB()` detects corruption and recovers
- Service comes back online automatically with a fresh database
Installation (macOS):
```bash
# Copy plist to launchd directory
cp ~/.config/nano-brain/launchd/com.tamlh.nano-brain.plist ~/Library/LaunchAgents/

# Load the service
launchctl load ~/Library/LaunchAgents/com.tamlh.nano-brain.plist

# Check status
launchctl list | grep nano-brain
```
Monitoring: Check for corruption metrics in your monitoring/alerting system:
- Counter: `database_corruption_detected`
- Alert threshold: > 3 events per 24 hours (indicates an underlying hardware/filesystem issue)
Troubleshooting: If corruption happens frequently:
- Check system logs for disk I/O errors: `log stream --predicate 'eventMessage contains[c] "I/O error"'`
- Verify filesystem health: `diskutil verifyVolume /` (macOS)
- Check disk space: `df -h ~/.nano-brain/`
- Review database size: `ls -lh ~/.nano-brain/index.db`
- Consider moving the database to a different drive if corruption persists
Forensics: Corrupted backups are kept for analysis:
- Located at: `~/.nano-brain/index.db.corrupted.{ISO-timestamp}`
- Last 5 backups are kept by default; older ones are auto-cleaned
- File size hints at the nature of the corruption (truncated vs. intact)
| Tool | Description |
|---|---|
| `memory_search` | BM25 keyword search (fast, exact matching) |
| `memory_vsearch` | Semantic vector search (embeddings) |
| `memory_query` | Full hybrid search (BM25 + vector + RRF + reranking) |
| `memory_get` | Retrieve document by path or docid (`#abc123`) |
| `memory_multi_get` | Batch retrieve by glob pattern |
| Tool | Description |
|---|---|
| `memory_write` | Write to daily log (supports tags, supersedes) |
| `memory_tags` | List all tags with document counts |
| `memory_status` | Index health, collections, model status, graph stats |
| `memory_update` | Trigger reindex of all collections |
| `memory_consolidate` | Trigger LLM memory consolidation |
| `memory_suggestions` | Proactive next-query predictions based on patterns |
| Tool | Description |
|---|---|
| `code_context` | 360° view of a code symbol (callers, callees, flows, centrality) |
| `code_impact` | Change impact analysis (upstream/downstream BFS) |
| `code_detect_changes` | Map git diff to affected symbols (staged/unstaged/all) |
| `memory_index_codebase` | Index codebase files in current workspace (Tree-sitter AST) |
| Tool | Description |
|---|---|
| `memory_focus` | File dependency context (imports/exports, centrality, cluster) |
| `memory_graph_stats` | Dependency graph overview (files, symbols, edges, cycles) |
| `memory_symbols` | Cross-repo symbol query (Redis, MySQL, API endpoints, queues) |
| `memory_impact` | Cross-repo impact analysis (writers vs readers) |
| `memory_graph_query` | BFS traversal from entity through knowledge graph |
| `memory_related` | Find related memories via entity graph connections |
| `memory_timeline` | Temporal view of entity changes over time |
```bash
# Install globally
npm install -g nano-brain

# Initialize (creates config, indexes codebase, generates embeddings)
npx nano-brain init --root=/path/to/your/project

# Check everything is working
npx nano-brain status
```
The simplest way to run nano-brain as a persistent service:
```bash
# Start nano-brain + Qdrant containers
npx nano-brain docker start

# Check status
npx nano-brain docker status

# Stop
npx nano-brain docker stop
```
This runs the bundled docker-compose.yml, which starts nano-brain (HTTP/SSE on port 3100) and Qdrant (ports 6333/6334). A config.default.yml is included as a starting template — copy it to `~/.nano-brain/config.yml` and customize.
Environment variables:
- `NANO_BRAIN_APP` — Path to nano-brain source directory (default: package install location)
- `NANO_BRAIN_HOME` — Path to the `~/.nano-brain` data directory (default: `~/.nano-brain`)
- `NANO_BRAIN_WORKSPACE` — Path to your project workspace to index (mounted read-only and passed as `--root`)
HTTP API (available when Docker is running):
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| `/api/status` | GET | Index health, collections, model status |
| `/api/query` | POST | Hybrid search (body: `{query, tags, scope, limit}`) |
| `/api/search` | POST | BM25 keyword search (body: `{query, limit}`) |
| `/api/write` | POST | Write memory (body: `{content, tags, supersedes}`) |
| `/api/reindex` | POST | Trigger reindex (body: `{root}`) |
| `/mcp` | — | MCP endpoint (for AI agent integration) |
| `/sse` | — | SSE transport (for MCP remote clients) |
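A hybrid search against the HTTP API might look like the following. The endpoint and body fields are the ones from the table above; the helper names and the localhost:3100 address are illustrative (the port comes from the Docker setup described earlier):

```typescript
// Build the /api/query request body; field names follow the endpoint table.
function buildQueryBody(query: string, tags: string[] = [], limit = 10): string {
  return JSON.stringify({ query, tags, scope: "current", limit });
}

// POST the query to a running nano-brain server (Node 18+ global fetch).
async function hybridQuery(query: string, tags: string[] = []): Promise<unknown> {
  const res = await fetch("http://localhost:3100/api/query", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildQueryBody(query, tags),
  });
  if (!res.ok) throw new Error(`query failed: ${res.status}`);
  return res.json();
}
```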
Add to your AI agent's MCP config (e.g., ~/.config/opencode/opencode.json):
Docker mode (recommended):
```json
{
  "mcp": {
    "nano-brain": {
      "type": "remote",
      "url": "http://host.docker.internal:3100/mcp",
      "enabled": true
    }
  }
}
```
Start the server via Docker:
```bash
npx nano-brain docker start    # Start nano-brain + Qdrant containers
npx nano-brain docker status   # Check if running
npx nano-brain docker stop     # Stop containers
```
Local mode (stdio, for development):
```json
{
  "mcp": {
    "nano-brain": {
      "type": "local",
      "command": ["npx", "nano-brain", "mcp"],
      "enabled": true
    }
  }
}
```
Claude Code (`.mcp.json`):
```json
{
  "mcpServers": {
    "nano-brain": {
      "command": "npx",
      "args": ["mcp-remote", "http://localhost:3100/sse"]
    }
  }
}
```
Create `~/.nano-brain/config.yml` (auto-generated by init):
```yaml
# Collections (directories to index)
collections:
  memory:
    path: ~/.nano-brain/memory
    pattern: "**/*.md"
    update: auto
  sessions:
    path: ~/.nano-brain/sessions
    pattern: "**/*.md"
    update: auto

# Vector store (qdrant or sqlite-vec)
vector:
  provider: qdrant
  url: http://localhost:6333
  collection: nano_brain
  # OR: provider: sqlite-vec (embedded, no external service)

# Embedding provider
embedding:
  provider: openai               # 'ollama' or 'openai' (OpenAI-compatible)
  url: https://api.voyageai.com  # VoyageAI, Azure, LM Studio, etc.
  model: voyage-code-3
  apiKey: ${VOYAGE_API_KEY}
  dimensions: 1024               # Output dimensions (default: 1024); omit to use model default
  # OR: provider: ollama, url: http://localhost:11434, model: nomic-embed-text

# Reranker (uses embedding.apiKey if not set separately)
reranker:
  model: rerank-2.5-lite
  # apiKey: ${VOYAGE_API_KEY}    # optional, falls back to embedding.apiKey

# Codebase indexing
codebase:
  enabled: true
  languages: [typescript, javascript, python]
  exclude: [node_modules, dist, build, .git]
  maxFileSize: 1048576           # 1MB

# File watcher
watcher:
  enabled: true
  debounce: 300                  # ms
  reindexInterval: 300           # seconds (5 minutes)

# Search configuration
search:
  rrf_k: 60
  top_k: 30
  expansion:
    enabled: true
    weight: 1
  reranking:
    enabled: true
    blending:
      top3:
        rrf: 0.75
        rerank: 0.25
      mid:
        rrf: 0.60
        rerank: 0.40
      tail:
        rrf: 0.40
        rerank: 0.60
  centrality_weight: 0.1
  supersede_demotion: 0.3

# Polling intervals
intervals:
  sessionHarvest: 120            # seconds (2 minutes)
  healthCheck: 60                # seconds

# Storage management
storage:
  maxSize: 10737418240           # 10GB
  retention:
    sessions: 90                 # days
    logs: 30                     # days

# Workspaces
workspaces:
  isolation: true                # Per-workspace SQLite databases
  defaultScope: current          # or 'all' for cross-workspace search

# Entity pruning (Memory Intelligence v2)
pruning:
  enabled: true
  interval_ms: 21600000          # 6 hours
  contradicted_ttl_days: 30
  orphan_ttl_days: 90
  batch_size: 100
  hard_delete_after_days: 30

# LLM categorization (Memory Intelligence v2)
categorization:
  llm_enabled: true
  confidence_threshold: 0.6
  max_content_length: 2000

# Preference learning (Memory Intelligence v2)
preferences:
  enabled: true
  min_queries: 20
  weight_min: 0.5
  weight_max: 2.0
  baseline_expand_rate: 0.1

# Logging
logging:
  level: info                    # debug, info, warn, error
  file: ~/.nano-brain/logs/nano-brain.log
  maxSize: 10485760              # 10MB
  maxFiles: 5
```
Data directory layout (`~/.nano-brain/`):
~/.nano-brain/
├── config.yml # Configuration
├── data/ # SQLite databases (per-workspace)
├── memory/ # Curated notes
├── sessions/ # Harvested sessions
└── logs/ # Application logs
```bash
nano-brain init                  # Full initialization (config, index, embed, AGENTS.md)
nano-brain init --root=/path     # Initialize for specific project
nano-brain status                # Show index health, collections, model status
```
```bash
nano-brain mcp                                    # Start MCP server (stdio)
nano-brain mcp --http --port=3100 --host=0.0.0.0  # Start MCP server (HTTP/SSE)
```
```bash
nano-brain serve                 # Start SSE server as background daemon (port 3100)
nano-brain serve status          # Check if server is running
nano-brain serve stop            # Stop the daemon
nano-brain serve --foreground    # Run in foreground (for debugging)
nano-brain serve --port=8080     # Custom port
```
```bash
nano-brain docker start          # Start nano-brain + Qdrant via docker compose
nano-brain docker status         # Check container status
nano-brain docker stop           # Stop containers
```
The `docker start` command runs the bundled docker-compose.yml, which starts:
- nano-brain — HTTP/SSE server on port 3100 (node:22-slim)
- Qdrant — Vector store on ports 6333/6334
Environment variables for volume mounts:
- `NANO_BRAIN_APP` — Path to nano-brain source (default: current directory)
- `NANO_BRAIN_HOME` — Path to data directory (default: `~/.nano-brain`)
- `NANO_BRAIN_WORKSPACE` — Path to your project workspace to index as codebase (mounted read-only, passed as `--root`)
```bash
nano-brain search "query"                  # BM25 keyword search
nano-brain vsearch "query"                 # Vector semantic search
nano-brain query "query"                   # Hybrid search (BM25 + vector + reranking)
nano-brain query "query" --tags=bug,fix    # Filter by tags
nano-brain query "query" --scope=all       # Cross-workspace search
```
```bash
nano-brain write "content"                 # Write to daily log
nano-brain write "content" --tags=decision,architecture
nano-brain write "content" --supersedes=abc123   # Mark as replacement
nano-brain get <path>                      # Retrieve document by path
nano-brain get "#abc123"                   # Retrieve by docid
nano-brain tags                            # List all tags with counts
```
```bash
nano-brain update                          # Reindex all collections
nano-brain index-codebase                  # Index codebase in current workspace
nano-brain reset --confirm                 # Reset all data (requires confirmation)
nano-brain reset --dry-run                 # Preview what would be deleted
```
```bash
nano-brain collection add <name> <path>    # Add collection
nano-brain collection remove <name>        # Remove collection
nano-brain collection list                 # List collections
```
```bash
nano-brain rm --list                       # List all workspaces
nano-brain rm <workspace> --dry-run        # Preview what would be deleted
nano-brain rm <workspace>                  # Remove workspace and all its data
# <workspace> can be: absolute path, hash prefix, or workspace name
```
Note: If using `nano-brain docker start`, Qdrant is already included in the compose stack. These commands manage a standalone Qdrant container separately.
```bash
nano-brain qdrant up         # Start Qdrant Docker container
nano-brain qdrant down       # Stop Qdrant container
nano-brain qdrant status     # Check Qdrant status
nano-brain qdrant migrate    # Migrate from sqlite-vec to Qdrant
nano-brain qdrant verify     # Verify Qdrant data integrity
nano-brain qdrant activate   # Switch to Qdrant (update config)
nano-brain qdrant cleanup    # Remove orphaned vectors
```
```bash
nano-brain cache clear                     # Clear all caches
nano-brain cache clear --type=embeddings   # Clear specific cache type
nano-brain cache stats                     # Show cache statistics
```
```bash
nano-brain bench                           # Run default benchmark suite
nano-brain bench --suite=search            # Run specific suite
nano-brain bench --iterations=100 --json --save
nano-brain bench --compare=baseline.json   # Compare with baseline
```
```bash
nano-brain logs                      # Show recent logs (last 50 lines)
nano-brain logs -f                   # Tail logs in real-time
nano-brain logs -n 100               # Show last 100 lines
nano-brain logs --date=2026-03-01    # Show log for specific date
nano-brain logs --clear              # Delete all log files
nano-brain logs path                 # Print log directory path
```
src/
├── index.ts # CLI entry point
├── server.ts # MCP server (22+ tools, stdio/HTTP/SSE)
├── store.ts # SQLite storage (FTS5 + sqlite-vec)
├── storage.ts # Storage management (retention, disk space)
├── vector-store.ts # Vector store abstraction (Qdrant + sqlite-vec)
├── search.ts # Hybrid search pipeline (RRF, reranking, blending)
├── chunker.ts # Heading-aware markdown chunking
├── collections.ts # YAML config, collection scanning
├── embeddings.ts # Embedding providers (VoyageAI, Ollama, OpenAI-compatible)
├── reranker.ts # VoyageAI reranker
├── expansion.ts # Query expansion (interface only, no active provider)
├── harvester.ts # OpenCode session → markdown converter
├── watcher.ts # File watcher (chokidar, dirty flags)
├── codebase.ts # Codebase indexing orchestrator
├── treesitter.ts # Tree-sitter AST parsing
├── symbols.ts # Symbol extraction (functions, classes, variables)
├── graph.ts # File dependency graph (imports/exports)
├── symbol-graph.ts # Symbol call graph (caller → callee)
├── flow-detection.ts # Call flow detection (entry → leaf)
├── types.ts # TypeScript interfaces
└── providers/ # Vector store implementations
├── qdrant.ts # Qdrant vector store
└── sqlite-vec.ts # sqlite-vec vector store
bin/
└── cli.js # CLI wrapper
test/
└── *.test.ts # 760+ tests (vitest)
SKILL.md # AI agent routing instructions (auto-loaded by OpenCode)
AGENTS_SNIPPET.md # Optional project-level AGENTS.md managed block
- TypeScript + Node.js (via tsx)
- better-sqlite3 + sqlite-vec for embedded storage
- Qdrant for production vector store (optional)
- Tree-sitter for AST parsing (TS, JS, Python)
- @modelcontextprotocol/sdk for MCP server (stdio/HTTP/SSE transports)
- chokidar for file watching
- vitest for testing (760+ tests)
Embeddings:
- VoyageAI — voyage-code-3 (1024 dims, code-optimized, recommended)
- Ollama — nomic-embed-text, mxbai-embed-large, etc. (local, free)
- OpenAI-compatible — Azure OpenAI, LM Studio, custom endpoints
Reranking:
- VoyageAI — rerank-2.5-lite (neural reranking, recommended)
Query Expansion:
- Pipeline support exists but no active provider. The interface is ready for future integration.
| | nano-brain | Mem0 / OpenMemory | Zep / Graphiti | OMEGA | Letta (MemGPT) | Claude Native |
|---|---|---|---|---|---|---|
| Search | Hybrid (BM25 + vector + 6 ranking signals) | Vector only | Graph traversal + vector | Semantic + BM25 | Agent-managed | Text file read |
| Storage | SQLite + Qdrant (optional) | PostgreSQL + Qdrant | Neo4j | SQLite | PostgreSQL / SQLite | Flat text files |
| MCP Tools | 22+ | 4-9 | 9-10 | 12 | 7 | 0 |
| Code Intelligence | Yes (Tree-sitter AST, symbol graph, impact analysis) | No | No | No | No | No |
| Codebase Indexing | Yes (AST → symbols → call graph → flows) | No | No | No | No | No |
| Session Recall | Yes (auto-harvests past sessions) | No | No | No | No | Limited (CLAUDE.md) |
| Query Expansion | Pipeline ready (no active provider) | No | No | No | No | No |
| Neural Reranking | Yes (VoyageAI rerank-2.5-lite) | No | No | No | No | No |
| Local-First | Yes (Ollama + sqlite-vec) | Requires OpenAI API key | Requires Docker + Neo4j | Yes | Yes | Yes |
| Cloud Option | Yes (VoyageAI, OpenAI-compatible) | Cloud API (OpenAI) | Cloud API | Local ONNX | Cloud API | None |
| Privacy | 100% local option available | Cloud API calls | Cloud or self-host | 100% local | Self-host or cloud | Local files |
| Dependencies | SQLite + embedding API (+ optional Qdrant) | Docker + PostgreSQL + Qdrant + OpenAI key | Docker + Neo4j | SQLite + ONNX | PostgreSQL | None |
| Pricing | Free (open source, MIT) | Free tier / Pro $249/mo | Free self-host / Cloud $25-475/mo | Free (Apache-2.0) | Free (Apache-2.0) | Free (with Claude) |
| GitHub Stars | New | ~47K | ~23K | ~25 | ~21K | N/A |
- 6-signal hybrid search — BM25 + vector + RRF + PageRank + supersede + neural reranking in a single pipeline
- Code intelligence — Tree-sitter AST parsing, symbol graph, call flow detection, impact analysis
- Codebase indexing — index your source files with structural boundary detection, not just conversations
- Session recall — automatically harvests and indexes past AI coding sessions
- Flexible deployment — 100% local (Ollama + sqlite-vec) or cloud (VoyageAI + Qdrant)
- Privacy-first — local processing option, your code never leaves your machine
Consider an alternative if:
- You need a knowledge graph with temporal reasoning (Zep/Graphiti)
- You want a full agent framework, not just memory (Letta)
- You need cloud-hosted memory shared across teams (Mem0 Cloud)
- You only need basic session notes (Claude native memory)
nano-brain ships with a SKILL.md that teaches AI agents when and how to use memory tools. When loaded as an OpenCode skill, agents automatically:
- Check memory before starting work — recall past decisions, patterns, and context
- Save context after completing work — persist key decisions and debugging insights
- Route queries to the right search tool — BM25 for exact terms, vector for concepts, hybrid for best quality
- Use code intelligence — understand symbol relationships, assess change impact, detect affected code
The skill file at SKILL.md provides routing rules, trigger phrases, tool selection guides, and integration patterns. It's automatically loaded when any agent references the nano-brain skill.
For project-level integration, AGENTS_SNIPPET.md provides a managed block that can be injected into a project's AGENTS.md:
```bash
npx nano-brain init --root=/path/to/project
```
This adds a managed block to your project's AGENTS.md with quick reference tables for CLI commands and MCP tools (if available).
See SKILL.md for full routing rules and AGENTS_SNIPPET.md for the project-level snippet.
nano-brain includes an adaptive self-learning system that improves search quality over time.
Add to your config.yml:
```yaml
telemetry:
  enabled: true                # Log search queries and feedback (default: true)
  retention_days: 90           # How long to keep telemetry data
learning:
  enabled: true                # Enable adaptive search tuning
  update_interval_ms: 600000   # Cold-path update interval (10 min)
consolidation:
  enabled: true                # Enable memory consolidation
  interval_ms: 3600000         # Consolidation interval (60 min)
  model: gpt-4o-mini           # LLM model for consolidation
  endpoint: https://api.openai.com/v1   # OpenAI-compatible endpoint
  apiKey: sk-...               # API key (not required for Ollama)
  provider: openai             # 'openai' (default) or 'ollama'
extraction:
  enabled: true                # Enable fact extraction from sessions
  model: gpt-4o-mini           # LLM model for extraction
  endpoint: https://api.openai.com/v1
  apiKey: sk-...
  maxFactsPerSession: 20       # Max facts to extract per session
importance:
  enabled: true                # Enable importance scoring
  weight: 0.1                  # Importance boost weight
  decay_half_life_days: 30
intents:
  enabled: true                # Enable query intent classification
```
- Telemetry: Every search query is logged with results and timing. When you expand a result, it's recorded as positive feedback.
- Thompson Sampling: Search parameters (rrf_k, centrality_weight) are automatically tuned using multi-armed bandits based on expand feedback.
- Consolidation: Periodically, an LLM reviews recent memories and finds connections, generating consolidated insights.
- Importance Scoring: Documents accessed frequently get boosted in search results. Unused documents decay over time.
- Intent Classification: Queries are classified by intent (lookup, explanation, architecture, recall) and routed to optimized search configs.
- `nano-brain learning rollback [version_id]` — View or roll back to a previous config version
- `nano-brain consolidate` — Trigger a manual consolidation cycle
- `memory_consolidate` — Trigger consolidation manually
- `memory_consolidation_status` — View consolidation queue stats and recent logs
- `memory_importance` — View document importance scores
- `memory_status` — View learning system status (telemetry records, bandit variants, config version)
nano-brain includes three advanced intelligence features that make your memory smarter over time.
Automatically cleans up outdated or contradicted information from the knowledge graph.
How it works:
- Background job runs every 6 hours
- Soft-deletes contradicted entities after 30 days (when newer information supersedes them)
- Soft-deletes orphan entities after 90 days (entities with no relationships)
- Hard-deletes soft-deleted entities after 30-day retention period
- Processes in batches of 100 to avoid SQLite lock contention
Configuration:
```yaml
pruning:
  enabled: true
  interval_ms: 21600000        # 6 hours
  contradicted_ttl_days: 30    # How long before contradicted entities are soft-deleted
  orphan_ttl_days: 90          # How long before orphan entities are soft-deleted
  batch_size: 100              # Max entities to process per cycle
  hard_delete_after_days: 30   # Retention period for soft-deleted entities
```
Why this matters: Prevents your knowledge graph from accumulating stale information. When you update a decision or pattern, the old version is automatically pruned after the TTL expires.
Adds semantic categorization to your memories using an LLM, going beyond simple keyword matching.
How it works:
- Runs asynchronously after the keyword categorizer (fire-and-forget)
- Uses the same LLM provider configured for consolidation (same endpoint/model/apiKey)
- Adds tags prefixed with `llm:` (e.g., `llm:architecture-decision`, `llm:debugging-insight`)
- Only processes documents under 2000 characters (configurable)
- Requires confidence threshold of 0.6 or higher (configurable)
7 semantic categories:
- `llm:architecture-decision` — System design choices, trade-offs, patterns
- `llm:debugging-insight` — Root cause analysis, fix patterns, gotchas
- `llm:tool-config` — Setup instructions, configuration patterns
- `llm:pattern` — Reusable code patterns, best practices
- `llm:preference` — User preferences, workflow choices
- `llm:context` — Background information, explanations
- `llm:workflow` — Process documentation, how-to guides
Configuration:
```yaml
categorization:
  llm_enabled: true
  confidence_threshold: 0.6   # Minimum confidence to apply a category
  max_content_length: 2000    # Skip documents longer than this
```
Why this matters: Enables semantic filtering like `--tags=llm:architecture-decision` to find all design decisions, even if they don't use the word "architecture."
Learns which types of memories you expand most often and boosts them in future searches.
How it works:
- Tracks expand events per category per workspace
- Calculates category weights based on expand rate vs baseline (10% default)
- Applies weights as multipliers in search scoring (0.5× to 2.0× range)
- Cold start: uses neutral weights until 20 queries collected
- Updates every 10 minutes via learning cycle background job
Example: If you expand llm:debugging-insight memories 30% of the time (vs 10% baseline), those memories get a 1.5× boost in future searches.
Configuration:
```yaml
preferences:
  enabled: true
  min_queries: 20             # Minimum queries before personalization kicks in
  weight_min: 0.5             # Minimum category weight (demotion)
  weight_max: 2.0             # Maximum category weight (boost)
  baseline_expand_rate: 0.1   # Expected expand rate (10%)
```
Why this matters: Your memory system adapts to your workflow. If you're debugging, debugging insights automatically surface higher. If you're designing, architecture decisions get prioritized.
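One plausible weight mapping, shown here purely as a sketch: scale linearly with the expand rate's deviation from baseline, then clamp to the configured range. This formula is reverse-engineered from the 30% → 1.5× example above and is an assumption; nano-brain's actual learning module may compute weights differently.

```typescript
// Hypothetical category-weight mapping (NOT the confirmed internal formula):
// weight = 1 + ((rate - baseline) / baseline) * 0.25, clamped to [weightMin, weightMax].
function categoryWeight(
  expandRate: number,
  baseline = 0.1,
  weightMin = 0.5,
  weightMax = 2.0,
): number {
  const raw = 1 + ((expandRate - baseline) / baseline) * 0.25;
  return Math.min(weightMax, Math.max(weightMin, raw));
}
```

Under this mapping a category expanded at the baseline rate keeps a neutral 1.0× weight, a 30% expand rate lands near the 1.5× boost from the example, and extreme rates saturate at the clamp bounds.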
nano-brain can predict what you'll need next based on your query patterns.
- Query chains: Groups of related queries within 5-minute windows are detected
- Clustering: Similar queries are grouped into semantic clusters using embeddings
- Transition learning: The system learns which clusters follow which (e.g., "auth questions" → "token refresh questions")
- Predictions: When you ask about topic A, nano-brain predicts you'll need topic B next
```yaml
proactive:
  enabled: true
  chain_timeout_ms: 300000         # 5 min window for grouping queries
  min_queries_for_prediction: 50   # Minimum queries before predictions activate
  max_suggestions: 5
  confidence_threshold: 0.3
  cluster_count: 50
  analysis_interval_ms: 1800000    # Rebuild every 30 min
```
- `memory_suggestions` — Get predicted next queries based on current context
  - `context` (optional): Current query or topic
  - `workspace` (optional): Workspace path
  - `limit` (optional): Max suggestions (default 3)
LLM provider unreachable: Check the endpoint URL and network connectivity. For Ollama, ensure `ollama serve` is running.
Invalid API key: Verify `apiKey` in config or the `CONSOLIDATION_API_KEY` env var. Ollama doesn't require an API key.
Empty LLM responses: Check that the model name is correct. For Ollama, run `ollama list` to see available models.
Consolidation not running: Ensure `consolidation.enabled: true` and check `memory_consolidation_status` for queue state.
MIT