Multi-resolution codebase context for AI coding agents.
Generate a pyramid of context at three zoom levels — a scannable L0 index in your agent config, L1 summaries on disk, and L2 detail files per module — so AI agents can understand your entire codebase without wasting context tokens.
Based on StrongDM's pyramid summaries pattern for AI agent codebase comprehension. See Prior Art for details.
```bash
npx pyramid-context
```

This scans `./src`, generates L0/L1/L2 context, and injects the L0 index into AGENTS.md.
| Level | Detail | Location | Purpose |
|---|---|---|---|
| L0 | 2-3 word tag per file | AGENTS.md / CLAUDE.md | Scan the whole codebase at a glance |
| L1 | One sentence per file | `.context/L1.md` | Decide which files to read deeper |
| L2 | Exports, imports, deps | `.context/{path}.md` | Understand a module without reading source |
L2 files are generated per source file (e.g. .context/analyzers/typescript.ts.md), so agents can pull only the modules they need into context rather than loading the whole codebase.
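As an illustration (the exact paths depend on your source tree), the generated `.context/` directory for a small project might look like:

```
.context/
├── L1.md                      # one-sentence summary per file
├── agent/
│   ├── loop.ts.md             # L2 detail for src/agent/loop.ts
│   └── runner.ts.md           # L2 detail for src/agent/runner.ts
└── analyzers/
    └── typescript.ts.md       # L2 detail for src/analyzers/typescript.ts
```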
Commit .context/ to git — the generated context is useful for anyone (human or AI) reading the codebase. Re-run pyramid-context when source changes to keep it fresh.
```bash
npm install -g pyramid-context   # global
npm install -D pyramid-context   # devDependency
npx pyramid-context              # one-shot
```

```bash
# Generate pyramid (default: scans ./src)
pyramid-context

# Custom source directory
pyramid-context --src lib

# Explicit target file
pyramid-context --target AGENTS.md

# Language filter
pyramid-context --lang ts,js

# Use LLM for better descriptions
pyramid-context --llm

# Check if pyramid is stale (for CI / hooks)
pyramid-context --check

# Dry run
pyramid-context --dry-run

# Initialize project
pyramid-context init
```

Optional `.pyramidrc.json`:
```json
{
  "src": ["src", "lib"],
  "targets": ["AGENTS.md", "CLAUDE.md"],
  "exclude": ["**/*.test.ts", "**/*.spec.ts"],
  "contextDir": ".context",
  "llm": {
    "enabled": true,
    "provider": "anthropic",
    "apiKeyEnv": "ANTHROPIC_API_KEY",
    "model": "claude-haiku-4-5-20251001"
  }
}
```

When `--llm` is passed (or `llm.enabled` is set in config), pyramid-context calls an LLM to generate higher-quality L0 tags and L1 summaries. Without an API key it silently falls back to heuristic analysis.
| Field | Default | Description |
|---|---|---|
| `llm.provider` | `"anthropic"` | `"anthropic"` or `"openai"` |
| `llm.apiKeyEnv` | `"ANTHROPIC_API_KEY"` | Environment variable holding the API key. Restricted to `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `LITELLM_API_KEY`. |
| `llm.model` | `"claude-haiku-4-5-20251001"` | Model ID to use |
| `llm.baseUrl` | (provider default) | Custom API base URL |
LLM results are cached in the manifest — unchanged files won't be re-sent on subsequent runs.
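As an illustration of the caching idea (a minimal sketch, not the package's actual manifest format, which SPEC.md defines; the field names here are hypothetical):

```typescript
import { createHash } from "node:crypto";

// Hypothetical manifest shape: each file's content hash is stored next to
// its generated summaries, so unchanged files skip re-analysis (and LLM calls).
type ManifestEntry = { hash: string; l0: string; l1: string };
type Manifest = Record<string, ManifestEntry>;

function hashContent(source: string): string {
  return createHash("sha256").update(source).digest("hex");
}

// A file is stale when it is new or its content hash no longer matches.
function isStale(manifest: Manifest, path: string, source: string): boolean {
  const entry = manifest[path];
  return entry === undefined || entry.hash !== hashContent(source);
}
```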
A single pipe-delimited line optimized for minimal tokens:
```
[Codebase Map]|root:./src|L1:.context/L1.md|L2:.context/{path}.md|agent/{loop.ts:goal poller,runner.ts:LLM tool loop}|tools/{bash.ts:shell exec,wiki.ts:shared wiki}
```
For a 30-file project, the entire L0 index fits in ~300 tokens.
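To show how cheap the format is to consume, here is a hypothetical parser for the directory groups in an L0 line (`parseL0Index` is not part of the package API, just an illustration of the `dir/{file:tag,...}` convention):

```typescript
// Parse the directory groups of an L0 index line into { "dir/file": tag }.
// Illustrative only; the authoritative format is specified in SPEC.md.
function parseL0Index(line: string): Record<string, string> {
  const entries: Record<string, string> = {};
  for (const field of line.split("|")) {
    // Directory groups look like "agent/{loop.ts:goal poller,runner.ts:LLM tool loop}"
    const m = field.match(/^(.+)\/\{(.+)\}$/);
    if (!m) continue; // skip header fields like "root:./src"
    const [, dir, body] = m;
    for (const pair of body.split(",")) {
      const [file, tag] = pair.split(":");
      entries[`${dir}/${file}`] = tag;
    }
  }
  return entries;
}

const index = parseL0Index(
  "[Codebase Map]|root:./src|agent/{loop.ts:goal poller,runner.ts:LLM tool loop}"
);
console.log(index["agent/loop.ts"]); // "goal poller"
```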
| Language | Extensions | Export Detection | Import Detection |
|---|---|---|---|
| TypeScript | `.ts`, `.tsx` | `export function/class/interface/type/const` | `from '...'` |
| JavaScript | `.js`, `.jsx`, `.mjs`, `.cjs` | `export`, `module.exports` | `from '...'`, `require()` |
| Python | `.py` | Top-level `def`, `class`, `__all__` | `import`, `from ... import` |
| Go | `.go` | Capitalized names | `import "..."` |
| Rust | `.rs` | `pub fn/struct/enum/trait` | `use ...` |
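To illustrate what regex-driven export detection looks like for the TypeScript row, here is a simplified stand-in (the authoritative patterns live in SPEC.md; this regex is an assumption, not the package's actual one):

```typescript
// Simplified export detector for TypeScript sources.
// Matches "export [async] function/class/interface/type/const/let/enum Name".
const EXPORT_RE =
  /^export\s+(?:async\s+)?(?:function|class|interface|type|const|let|enum)\s+([A-Za-z_$][\w$]*)/gm;

function detectExports(source: string): string[] {
  return [...source.matchAll(EXPORT_RE)].map((m) => m[1]);
}

const names = detectExports(
  "export function generatePyramid() {}\nexport const VERSION = '1.0'\n"
);
console.log(names); // ["generatePyramid", "VERSION"]
```

A regex pass like this misses re-exports and `export { ... }` lists, which is one reason the heuristic analysis is a fallback rather than the preferred LLM path.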
```json
{
  "hooks": {
    "Stop": [{
      "hooks": [{
        "type": "command",
        "command": "npx pyramid-context --check && exit 0 || npx pyramid-context"
      }]
    }]
  }
}
```

```bash
npx pyramid-context --check 2>/dev/null || npx pyramid-context
git add AGENTS.md CLAUDE.md .context/ 2>/dev/null
```

```ts
import { generatePyramid, injectIntoFile } from 'pyramid-context'

const pyramid = await generatePyramid({
  src: ['src'],
  languages: ['typescript'],
})

// pyramid.files — array of { path, l0, l1, l2, hash }
// pyramid.l0Index — pipe-delimited string
// pyramid.l1Content — markdown string
// pyramid.staleCount — number of changed files
```

The full specification lives in SPEC.md — it describes every output format, algorithm, and edge case in enough detail to reimplement pyramid-context in any language.
The fastest way to port it: give SPEC.md to an AI coding agent and ask it to implement the spec in your language of choice.
```
Here is a specification for a codebase context generator.
Please implement it in [Python/Go/Rust/etc].

<paste SPEC.md contents>
```
The spec is self-contained — it covers the output formats (L0/L1/L2), the analyzer regex patterns for each language, the injection algorithm, the manifest format, and the security considerations. No knowledge of this TypeScript implementation is needed.
| Tool | Approach | Limitation |
|---|---|---|
| Vercel `@next/codemod agents-md` | Pipe-delimited docs index | Next.js-specific |
| Aider `.aider.repo-map` | AST-based repo map | Aider-specific, requires tree-sitter |
| StrongDM pyramid summaries | Multi-resolution LLM summaries | Internal technique, not packaged |
pyramid-context combines the best ideas: Vercel's surgical injection + StrongDM's multi-resolution pyramid + universal agent support + automatic code analysis.
MIT