# BetterDB Memory

Persistent, semantic memory for Claude Code sessions — powered by Valkey.
Every time you start a new Claude Code session, context is lost. BetterDB Memory automatically captures what you did, embeds it as vectors in Valkey, and retrieves relevant history at the start of each new session.
## Prerequisites

- Bun runtime — required (the CLI and all hooks run on Bun, not Node)
- Claude Code installed
- Valkey 8.0+ with the Search module
## Quick start

```sh
# 1. Copy .env.example and fill in your settings
cp .env.example .env

# 2. Install
bunx @betterdb/memory install
```

The install will:
- Compile native hook binaries to `~/.betterdb/bin/`
- Register 4 lifecycle hooks with Claude Code
- Register the MCP server for mid-conversation tools
- Create the Valkey search index
- Save your `.env` values to `~/.betterdb/memory.json` for runtime use
### Don't have Valkey?

The setup skill will offer to spin up a Valkey instance in Docker for you. Or run it manually:
```sh
# Via CLI
bunx @betterdb/memory docker-valkey

# Or directly with Docker
docker run -d --name betterdb-valkey -p 6379:6379 -v betterdb-valkey-data:/data valkey/valkey-search:8 valkey-server --save 60 1
```

## Hooks

| Hook | What it does |
|---|---|
| SessionStart | Retrieves relevant memories via vector search, injects as context |
| PostToolUse | Records every tool call to a temp JSONL file |
| Stop | Summarizes the session, embeds it, stores in Valkey |
| PreToolUse | Surfaces file-specific history when accessing known files |
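The PostToolUse → Stop handoff above can be sketched as a JSONL round trip: each tool call is appended as one JSON line, and the Stop hook reads the file back before summarizing. The record fields below are illustrative, not the actual schema BetterDB Memory writes.

```typescript
// Hypothetical tool-call record; the real schema may differ.
interface ToolCallRecord {
  tool: string;
  input: unknown;
  at: string; // ISO timestamp
}

// PostToolUse side: serialize one record as a single JSONL line.
function toJsonlLine(rec: ToolCallRecord): string {
  return JSON.stringify(rec) + "\n";
}

// Stop side: parse the accumulated JSONL back into records, skipping blanks.
function parseJsonl(text: string): ToolCallRecord[] {
  return text
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as ToolCallRecord);
}

const log =
  toJsonlLine({ tool: "Read", input: { path: "src/index.ts" }, at: "2025-01-01T00:00:00Z" }) +
  toJsonlLine({ tool: "Bash", input: { cmd: "bun test" }, at: "2025-01-01T00:01:00Z" });

console.log(parseJsonl(log).length); // 2
```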
## MCP tools

Claude can use these mid-conversation:

- `search_context` — Semantic search over past sessions
- `store_insight` — Save a decision, pattern, or warning
- `list_open_threads` — Show unresolved items
- `forget` — Delete a specific memory
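Under the hood, a `search_context` lookup plausibly maps to a KNN query against the Valkey search index. The sketch below assumes a float32 vector field named `embedding` — the field name and query shape are assumptions, not the tool's documented schema.

```typescript
// Build an FT.SEARCH KNN query string (standard search-module syntax).
function buildKnnQuery(k: number): string {
  return `*=>[KNN ${k} @embedding $vec AS score]`;
}

// Vectors are passed to FT.SEARCH as a raw little-endian float32 blob.
function toFloat32Blob(vec: number[]): Buffer {
  return Buffer.from(new Float32Array(vec).buffer);
}

// A client call might then look like (not executed here):
// client.sendCommand(["FT.SEARCH", "betterdb-memory-index", buildKnnQuery(5),
//   "PARAMS", "2", "vec", toFloat32Blob(queryEmbedding),
//   "SORTBY", "score", "DIALECT", "2"]);
```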
## CLI commands

```sh
bunx @betterdb/memory install        # Set up hooks + MCP server
bunx @betterdb/memory status         # Check health
bunx @betterdb/memory uninstall      # Remove everything
bunx @betterdb/memory maintain       # Run aging/compression manually
bunx @betterdb/memory docker-valkey  # Manage Docker Valkey container
```

## Configuration

Copy `.env.example` to `.env` and fill in your values before running `bunx @betterdb/memory install`. They get saved to `~/.betterdb/memory.json` and used by the compiled binaries at runtime.
| Variable | Default | Description |
|---|---|---|
| `BETTERDB_VALKEY_URL` | `redis://localhost:6379` | Valkey connection URL |
| `BETTERDB_VALKEY_INDEX_NAME` | `betterdb-memory-index` | Valkey search index name |
| `BETTERDB_EMBED_DIM` | `1024` | Embedding dimensions |
| `BETTERDB_MAX_CONTEXT_MEMORIES` | `5` | Memories injected per session |
| `BETTERDB_CONTEXT_FILE` | `.betterdb_context.md` | Context injection file |
| `BETTERDB_ALLOW_REMOTE_FALLBACK` | `true` | Fall back to remote APIs if local models unavailable |
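As a rough sketch of how the defaults above could be applied at runtime — the helper below is illustrative, not BetterDB Memory's actual config loader:

```typescript
// Read an environment variable, falling back to the table's default.
function envOr(name: string, fallback: string): string {
  return process.env[name] ?? fallback;
}

const config = {
  valkeyUrl: envOr("BETTERDB_VALKEY_URL", "redis://localhost:6379"),
  indexName: envOr("BETTERDB_VALKEY_INDEX_NAME", "betterdb-memory-index"),
  embedDim: Number(envOr("BETTERDB_EMBED_DIM", "1024")),
  maxContextMemories: Number(envOr("BETTERDB_MAX_CONTEXT_MEMORIES", "5")),
  allowRemoteFallback: envOr("BETTERDB_ALLOW_REMOTE_FALLBACK", "true") === "true",
};
```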
| Variable | Default | Description |
|---|---|---|
| `BETTERDB_EMBED_PROVIDER` | auto-detect | Force embed provider: `ollama`, `voyage`, `openai`, `groq`, `together` |
| `BETTERDB_SUMMARIZE_PROVIDER` | auto-detect | Force summarize provider: `ollama`, `anthropic`, `openai`, `groq`, `together` |
| `BETTERDB_EMBED_MODEL` | `mxbai-embed-large` | Ollama embedding model name |
| `BETTERDB_SUMMARIZE_MODEL` | `mistral:7b` | Ollama summarization model name |
| `BETTERDB_OLLAMA_URL` | `http://localhost:11434` | Ollama API URL |
At least one embedding provider and one summarization provider must be available. Ollama is free and local; the others require API keys.
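The auto-detect rule implied above can be sketched as: honor a forced provider, otherwise prefer local Ollama, and fall back to a remote provider only when one has an API key and remote fallback is allowed. The function and option names are illustrative, not the package's internals.

```typescript
type Provider = "ollama" | "voyage" | "openai" | "groq" | "together";

// Hypothetical provider selection; mirrors the documented precedence only.
function pickEmbedProvider(opts: {
  forced?: Provider;            // BETTERDB_EMBED_PROVIDER, if set
  ollamaReachable: boolean;     // is a local Ollama model available?
  remoteWithKey?: Provider;     // first remote provider with an API key
  allowRemoteFallback: boolean; // BETTERDB_ALLOW_REMOTE_FALLBACK
}): Provider | null {
  if (opts.forced) return opts.forced;
  if (opts.ollamaReachable) return "ollama";
  if (opts.allowRemoteFallback && opts.remoteWithKey) return opts.remoteWithKey;
  return null; // no embedding provider available
}
```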
| Variable | Provider | Used for |
|---|---|---|
| `ANTHROPIC_API_KEY` | Anthropic | Summarization only (no embeddings) |
| `VOYAGE_API_KEY` | Voyage AI | Embeddings only |
| `OPENAI_API_KEY` | OpenAI | Embeddings + summarization |
| `GROQ_API_KEY` | Groq | Embeddings + summarization |
| `TOGETHER_API_KEY` | Together AI | Embeddings + summarization |
| Variable | Default | Description |
|---|---|---|
| `BETTERDB_DECAY_RATE` | `0.95` | Memory importance decay per day |
| `BETTERDB_COMPRESS_THRESHOLD` | `0.3` | Importance threshold for compression |
| `BETTERDB_DISTILL_MIN_SESSIONS` | `5` | Min sessions before knowledge distillation |
| `BETTERDB_AGING_INTERVAL_HOURS` | `6` | Hours between automatic aging runs |
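A minimal sketch of the aging math these knobs suggest — the exact formula is an assumption: importance is multiplied by the decay rate once per elapsed day, and memories below the threshold become compression candidates.

```typescript
// Hypothetical exponential decay: importance * rate^days.
function decayedImportance(importance: number, rate: number, days: number): number {
  return importance * Math.pow(rate, days);
}

// A memory below the threshold is a candidate for compression.
function shouldCompress(importance: number, threshold = 0.3): boolean {
  return importance < threshold;
}

// With the defaults (rate 0.95, threshold 0.3), a memory that starts at
// importance 1.0 crosses the threshold after about 24 days: 0.95^24 ≈ 0.292.
```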
## License

MIT