hashcode

Cursor uses AI to edit code β€” we use AI to edit AI's context. πŸͺ†

Features β€’ Architecture β€’ Install β€’ Roadmap β€’ Join us β€’ FAQ

English | δΈ­ζ–‡

Electron 37 Python 3.10+ React 19 TypeScript MIT License Alpha

Warning

Alpha Status: hashcode is in early development. There are incomplete features and known bugs. We're sharing this project because we believe the direction is worth exploring, not because it's finished. Testing, feedback, and contributions are very welcome.

Join us

If you want to test hashcode, contribute ideas, report bugs, or build this direction together, email us at 3455744878@qq.com.

Codex Version

The Codex version is now available. If you use Codex and want the same context editing workflow, check out codex-context-editor-proxy.


πŸ€” The Problem

Your conversations with AI constantly shift direction. The topic has moved on, but previous discussions and tool outputs are still sitting in the context. Sure, /compact can compress things, but it's too blunt β€” you can't choose what stays and what goes.

The real problem isn't that the context is "too long". It's that you have zero control over it.

This happens all the time in real usage:

  • πŸ”„ Topic switching β€” You just finished a debugging session with AI and moved on to a substantive project discussion, but the context is still dominated by the bug fix. Continuing risks aggressive compression and attention degradation. You want to stay in flow, yet you can't precisely compress just the earlier content.
  • πŸ“¦ Irrelevant content buildup β€” After 30 turns, tool outputs from much earlier are completely irrelevant now. They're still hogging the context and slowing the model down, and you don't even know exactly where they are.
  • πŸ” Context diagnostics β€” The context fills up surprisingly fast and you want to know why. Traditional tools only tell you "how much window is left" β€” they can't help you pinpoint which nodes are eating space or fix them.

What if you could see, edit, and version-control your AI's context like source code?

That's what hashcode does.

πŸ’‘ The Idea

If Cursor can use AI to edit your code, why can't we use AI to edit AI's context?

Cursor:    AI  β†’  edits  β†’  Code
                                         
Us:        AI  β†’  edits  β†’  AI's Context  πŸͺ†

hashcode is the first desktop client that:

  1. Visualizes the entire context your main model actually consumes β€” as a structured Context Map, not a chat log.
  2. Deploys a second AI model to precisely edit your context β€” you decide what to keep, what to delete, what to compress, instead of handing it off to a blunt compact command.
  3. Version-controls every edit, so you can roll back to any previous context state.

One AI doing the thinking. Another AI grooming what the first one sees β€” under your control. πŸͺ†


🌟 Core Features

πŸ“ Context Map β€” See Everything Your Model Sees

context-map

The right sidebar turns the raw transcript into a structured, scrollable map:

  • Numbered nodes β€” #1 #2 #3 ... β€” each user/assistant turn is one node
  • Token weight colors β€” 🟒 normal / 🟑 heavy / πŸ”΄ very heavy β€” instantly spot bloat
  • Minimap β€” bird's-eye overview with a draggable viewport rectangle, like VS Code's minimap
  • Expand on click β€” collapsed by default, expand any node to see full markdown or tool call details
  • Multi-select β€” Ctrl+Click or drag the gutter to select nodes for the AI editor
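As a rough sketch of the token-weight coloring above (the thresholds here are invented for illustration; hashcode's actual cutoffs may differ):

```python
# Illustrative only: bucket a node's token count into the map's
# traffic-light colors. Thresholds are assumptions, not hashcode's values.

def weight_color(tokens: int, heavy: int = 1_000, very_heavy: int = 4_000) -> str:
    """Return 'green', 'yellow', or 'red' for a node's token weight."""
    if tokens >= very_heavy:
        return "red"
    if tokens >= heavy:
        return "yellow"
    return "green"

# Color every node in a small context map
nodes = {1: 120, 2: 2_500, 3: 8_000}
colors = {node_id: weight_color(t) for node_id, t in nodes.items()}
# colors == {1: 'green', 2: 'yellow', 3: 'red'}
```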

πŸͺ† Context Workbench β€” AI Editing AI's Context

workbench-suggest

workbench-manual

The right panel is the command center for the second AI model, with four tabs:

| Tab | What It Does |
| --- | --- |
| πŸ’‘ Suggest | Auto-analyzes your context: which nodes are bloated, which tool outputs are redundant |
| ✏️ Manual | Chat with the context model β€” "compress nodes #4-7" or "delete the weather tool output" |
| βͺ Restore | Browse every context revision, click to restore any previous version |
| βš™οΈ Settings | Configure the context model independently (different model, different provider) |

πŸ”§ Precision Editing Tools

The context model has surgical tools to modify individual items inside the context:

| Tool | What It Does | Example |
| --- | --- | --- |
| get_node_details | Inspect a node's full protocol-layer items | "Show me what's inside node #4" |
| delete_item | Remove a specific item from a node | "Delete the shell output from node #6" |
| replace_item | Rewrite an item with new content | "Replace the verbose tool output with a summary" |
| compress_item | AI-compress an item, preserving its type | "Compress the function_call_output in node #3" |
| compress_nodes | Merge multiple nodes into one summary node | "Summarize nodes #2-5 into a single node" |
| delete_nodes | Remove entire nodes from context | "Drop nodes #1-3, they're no longer relevant" |
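For intuition, the node-level tools above boil down to list transformations over the transcript. A minimal sketch, assuming each node is a dict with an `id` (the real tools operate on protocol-layer items and are more involved):

```python
# Hypothetical models of delete_nodes and compress_nodes; these only show
# the shape of the edits, not hashcode's actual implementation.

def delete_nodes(context: list[dict], ids: set[int]) -> list[dict]:
    """Drop whole nodes from the context by their map numbers."""
    return [node for node in context if node["id"] not in ids]

def compress_nodes(context: list[dict], ids: set[int], summary: str) -> list[dict]:
    """Replace a run of nodes with one summary node, placed where the
    first node in the run used to be."""
    out, merged = [], False
    for node in context:
        if node["id"] in ids:
            if not merged:
                out.append({"id": min(ids), "role": "summary", "content": summary})
                merged = True
        else:
            out.append(node)
    return out

ctx = [{"id": i, "role": "user", "content": f"turn {i}"} for i in range(1, 6)]
ctx = compress_nodes(ctx, {2, 3, 4}, "turns 2-4 discussed the weather API")
# ctx now contains nodes 1, 2 (the summary), and 5
```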

βͺ Version Control for Context

Every edit round creates a revision β€” a full snapshot of your context state:

Revision #1  ← "Compressed weather tool outputs"        [Restore]
Revision #2  ← "Deleted redundant shell commands"       [Restore] ← Active
Revision #3  ← "Merged nodes #2-5 into summary"         [Restore]
  • Linear rollback β€” click any revision to restore
  • Undo restore β€” changed your mind? One-click undo (until your next action)
  • Full snapshots β€” no patches, no merge conflicts. Every revision is a complete copy
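The full-snapshot model is the simplest possible version control: every commit deep-copies the whole context, and restore just hands a copy back. A sketch (class and method names are ours, not hashcode's):

```python
import copy

class RevisionStore:
    """Full-snapshot revisions: no patches, no merge conflicts."""

    def __init__(self):
        self._revisions: list[tuple[str, list]] = []

    def commit(self, label: str, context: list) -> int:
        """Snapshot the entire context; returns the revision index."""
        self._revisions.append((label, copy.deepcopy(context)))
        return len(self._revisions) - 1

    def restore(self, index: int) -> list:
        """Return a fresh copy of an earlier state (snapshot stays intact)."""
        return copy.deepcopy(self._revisions[index][1])

store = RevisionStore()
ctx = [{"id": 1, "content": "hello"}]
rev = store.commit("initial", ctx)
ctx.clear()                      # later edits can't corrupt the snapshot
restored = store.restore(rev)    # [{'id': 1, 'content': 'hello'}]
```

The trade-off is memory for simplicity: a deep copy per revision is wasteful compared to diffs, but restore is O(1) lookups with zero merge logic.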

πŸ”Œ Multi-Provider Support

Connect to any LLM provider β€” for both your main model and your context editor model:

| Provider | Protocol | Status |
| --- | --- | --- |
| OpenAI | Responses API | βœ… Built-in |
| Claude | Messages API | βœ… Built-in |
| Gemini | GenerateContent API | βœ… Built-in |
| Custom | Chat Completions | βœ… Any OpenAI-compatible endpoint |

Mix and match: use GPT for chatting, Claude for context editing. Each provider has independent API key and base URL configuration.
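Conceptually, the settings reduce to two fully independent provider entries, something like this (all key and model names invented for illustration; hashcode's actual settings schema may differ):

```python
# Hypothetical shape of the two independent model configs.
settings = {
    "main_model": {
        "provider": "openai",
        "protocol": "responses",
        "model": "gpt-4o",
        "api_key": "sk-...",
        "base_url": "https://api.openai.com/v1",
    },
    "context_model": {
        "provider": "claude",
        "protocol": "messages",
        "model": "claude-3-5-sonnet",
        "api_key": "sk-ant-...",
        "base_url": "https://api.anthropic.com",
    },
}
```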

🎨 Desktop Client

  • Native window: Electron desktop app, Windows supported (macOS / Linux planned)
  • Three-panel layout: sidebar β†’ chat β†’ context map + workbench
  • Dark theme: deep blacks, no eye strain, designed for long sessions
  • Streaming responses: both main chat and context model stream in real-time
  • File attachments: drag & drop images and files into the chat
  • Markdown rendering: full GFM, syntax highlighting, Mermaid diagrams
  • Project workspaces: organize conversations by project, with file tree browsing

πŸ—οΈ Architecture

The "Two-Model" Architecture

  1. Main Model β€” the AI you chat with. It reads/writes files, runs commands, answers questions.
  2. Context Model β€” a separate AI that only sees the main model's context. It can analyze, compress, and restructure it.

They never run in parallel on the same session. When the context model is editing, the main chat is paused (and vice versa). This prevents conflicting writes.
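The mutual exclusion described above is essentially a per-session lock. In spirit (our simplification, not hashcode's actual code):

```python
import threading

class SessionGate:
    """Ensures the main agent and the context agent never write
    the same session's transcript concurrently."""

    def __init__(self):
        self._lock = threading.Lock()
        self.active: str | None = None

    def run(self, agent: str, work):
        # Whichever agent arrives second blocks until the first finishes,
        # so there is never more than one writer on the transcript.
        with self._lock:
            self.active = agent
            try:
                return work()
            finally:
                self.active = None

gate = SessionGate()
result = gate.run("context", lambda: "compressed nodes #2-5")
```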

Tech Stack

| Layer | Technology | Why |
| --- | --- | --- |
| Desktop Shell | Electron 37 | Cross-platform native window, Python backend auto-managed |
| Frontend | React 19 + TypeScript + Vite | Fast dev, type safety, modern DX |
| Backend | Python (child process) | Zero-framework, minimal deps, lifecycle managed by Electron |
| LLM Runtime | Custom agent_runtime | Provider-agnostic adapter layer (OpenAI / Claude / Gemini) |
| Storage | Local SQLite + JSON settings (user data dir) | Single-file local database, data stays local |
| Streaming | Server-Sent Events (SSE) | Real-time token streaming |
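For reference, SSE token streaming is just newline-delimited `data:` frames over a kept-open HTTP response. A minimal shape (not hashcode's exact wire format):

```python
import json

def sse_events(token_chunks):
    """Yield each model token as a Server-Sent Events frame."""
    for chunk in token_chunks:
        yield f"data: {json.dumps({'delta': chunk})}\n\n"
    yield "data: [DONE]\n\n"   # sentinel so the client knows the stream ended

frames = list(sse_events(["Hel", "lo"]))
# frames[0] == 'data: {"delta": "Hel"}\n\n'
```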

How It Runs

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                  Electron Main Process                β”‚
β”‚                                                      β”‚
β”‚   app.whenReady()                                    β”‚
β”‚     β”œβ”€β”€ 1. Find an available port                    β”‚
β”‚     β”œβ”€β”€ 2. Spawn Python child process (web_server)   β”‚
β”‚     β”œβ”€β”€ 3. Wait for backend /api/init to be ready    β”‚
β”‚     └── 4. Create BrowserWindow β†’ load frontend      β”‚
β”‚                                                      β”‚
β”‚   app.on('before-quit')                              β”‚
β”‚     └── Kill Python child process                    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
           β”‚
    β”Œβ”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”     HTTP + SSE     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚  Renderer   β”‚ ◄────────────────► β”‚ Python       β”‚
    β”‚  React App  β”‚                    β”‚ Backend      β”‚
    β”‚             β”‚                    β”‚              β”‚
    β”‚ Β· Chat      β”‚                    β”‚ Β· Main Agent β”‚
    β”‚ Β· Ctx Map   β”‚                    β”‚ Β· Ctx Agent  β”‚
    β”‚ Β· Workbench β”‚                    β”‚ Β· State Mgr  β”‚
    β”‚ Β· Settings  β”‚                    β”‚ Β· SQLite     β”‚
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

πŸ“¦ Installation

Option 1: Run from Source (Developers)

Prerequisites

  • Python 3.10+
  • Node.js 18+
  • At least one LLM API key (OpenAI / Anthropic / Google)
git clone https://github.com/YOUR_USERNAME/context-editor-agent.git
cd context-editor-agent

# Python environment
python -m venv .venv
# Windows:
.venv\Scripts\Activate.ps1
# macOS / Linux:
source .venv/bin/activate
pip install -r requirements.txt

# Frontend dependencies
npm install

# Launch the Electron desktop client
npm run dev:electron

Configure your API key in the Settings page after launching.

Option 2: Build Installer

# Build Windows .exe installer
npm run dist:win

The installer is generated in the release/ directory. Double-click it to install and run.


πŸ—ΊοΈ Roadmap

βœ… Done (v0.1 β€” Current)

  • Main chat with streaming responses
  • Context Map with minimap and node selection
  • Context Workbench (Suggest / Manual / Restore / Settings)
  • Working snapshot + atomic commit lifecycle
  • Revision history with linear rollback & undo-restore
  • Multi-provider support (OpenAI, Claude, Gemini, Custom)
  • Context model tools: inspect, delete, replace, compress, summarize
  • File attachments in chat
  • Project workspace with file tree
  • Full markdown rendering with syntax highlighting
  • Electron desktop client + Windows installer

πŸ”œ Next

  • Context monitor model that auto-evaluates importance and maintains context
  • Expand capabilities with more mainstream agent features

❓ FAQ

How is this different from Claude Code / Codex's /compact?

compact is a black box β€” you don't know what it compressed or what it kept, and you can't roll it back. It solves "context too long", but it doesn't solve "there's stuff in my context I don't want".

hashcode is about context freedom: you can see how many tokens each node takes, precisely delete a specific tool output, compress a few old conversation turns, or clean up earlier content in one sentence when switching topics β€” and roll back anytime. Not brute-force compression. Precise control.

Does the context model actually modify what the main model sees?

Yes. When the context model compresses or deletes items, those changes are committed to the canonical transcript. The next time the main model responds, it sees the edited version. This is not a UI trick β€” it's actual context engineering.

Does this reduce cache hit rates?

Theoretically yes, but only briefly: an agent makes frequent API calls, and each context edit invalidates the cache at most once; the trimmed context is re-cached on the very next call. That one-time cost is far cheaper than carrying useless context around indefinitely.
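A back-of-envelope comparison makes this concrete (all numbers invented; assume cached input tokens cost about 10% of uncached):

```python
# Illustrative arithmetic only: one cache rebuild after an edit vs.
# dragging dead tokens through every remaining turn.

CACHED, UNCACHED = 0.1, 1.0          # assumed relative input-token prices
live, dead, turns = 10_000, 20_000, 30

# Never edit: every turn re-reads live + dead tokens at the cached rate.
keep_bloat = (live + dead) * CACHED * turns                  # 90,000 units

# Edit once: one cache miss on the trimmed context, cached thereafter.
edit_once = live * UNCACHED + live * CACHED * (turns - 1)    # 39,000 units
```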


🀝 Contributing

This project is in early alpha. Contributions, ideas, and bug reports are very welcome!

  1. Fork the repo
  2. Create a feature branch (git checkout -b feature/amazing-thing)
  3. Commit your changes
  4. Push and open a Pull Request


πŸͺ† AI editing AI β€” it's turtles all the way down.

If you find this project interesting, consider giving it a ⭐ β€” it helps others discover it.
