docs: add Headroom community provider for context compression#13866

Open
chopratejas wants to merge 2 commits into vercel:main from chopratejas:add-headroom-community-provider

Conversation

@chopratejas

Background

Headroom is an open-source context compression library for LLM applications. It compresses tool outputs, search results, database queries, and other large context before it reaches the model — typically saving 70-90% of input tokens while preserving accuracy.

The TypeScript SDK (headroom-ai) is published on npm with zero dependencies. It provides a compress() function and a headroomMiddleware() for wrapLanguageModel.
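To make the idea concrete, here is a minimal, self-contained sketch of what structural context compression can look like. This is not the `headroom-ai` implementation or its API; the function name, options, and truncation strategy below are illustrative assumptions only.

```typescript
// Hypothetical sketch of context compression: shrink large tool output
// before it reaches the model by truncating long arrays and strings.
// NOT the headroom-ai implementation; names and behavior are assumptions.
type CompressOptions = { maxArrayItems?: number; maxStringLength?: number };

function compressContext(value: unknown, opts: CompressOptions = {}): unknown {
  const { maxArrayItems = 5, maxStringLength = 200 } = opts;
  if (typeof value === "string") {
    // Truncate long strings, noting how much was dropped.
    return value.length > maxStringLength
      ? value.slice(0, maxStringLength) +
          ` …[${value.length - maxStringLength} chars omitted]`
      : value;
  }
  if (Array.isArray(value)) {
    // Keep only the first few items of large arrays.
    const head = value.slice(0, maxArrayItems).map((v) => compressContext(v, opts));
    return value.length > maxArrayItems
      ? [...head, `…[${value.length - maxArrayItems} items omitted]`]
      : head;
  }
  if (value !== null && typeof value === "object") {
    // Recurse into plain objects, preserving keys.
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) => [
        k,
        compressContext(v, opts),
      ]),
    );
  }
  return value;
}
```

A library like Headroom applies far more sophisticated strategies (the PR description mentions preserving accuracy, not just truncating), but the shape is the same: a pure transform over context data, applied before the model call.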

No existing community provider or middleware addresses context compression. This fills a gap alongside caching (Upstash), observability (Helicone/PostHog), memory (Mem0/Hindsight), and routing (Portkey/OpenRouter).

Summary

4 files added/modified:

  1. content/providers/03-community-providers/51-headroom.mdx — Community provider page with setup, usage (compress(), headroomMiddleware(), streaming, multi-provider), compression details, and configuration
  2. content/cookbook/01-next/123-context-compression-middleware.mdx — Cookbook recipe for building a context compression middleware in Next.js agentic apps
  3. content/docs/02-foundations/02-providers-and-models.mdx — Added Headroom to community providers list
  4. content/docs/03-ai-sdk-core/40-middleware.mdx — Added Headroom to Community Middleware section
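The middleware integration presumably hooks into the AI SDK's `wrapLanguageModel` via a `transformParams`-style step. The sketch below mirrors that shape with hand-rolled stand-in types so it is self-contained; `Params`, `Middleware`, and `naiveCompress` are placeholders of my own, not the AI SDK's or Headroom's actual exports.

```typescript
// Stand-in types mirroring the shape of an AI SDK-style middleware.
// Illustrative only: the real SDK and headroom-ai export their own types.
type Params = { prompt: string };
type Middleware = {
  transformParams: (args: { params: Params }) => Promise<Params>;
};

// Hypothetical compression step: collapse runs of whitespace in the prompt.
function naiveCompress(text: string): string {
  return text.replace(/\s+/g, " ").trim();
}

// The middleware rewrites params before the underlying model sees them.
const compressionMiddleware: Middleware = {
  transformParams: async ({ params }) => ({
    ...params,
    prompt: naiveCompress(params.prompt),
  }),
};
```

The key design point is that compression lives entirely in the params transform, so the same middleware works unchanged across providers and with streaming, which matches the multi-provider and streaming usage the provider page documents.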

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • I have reviewed this pull request (self-review)

Future Work

  • Headroom as a built-in middleware option (similar to extractReasoningMiddleware)
  • prepareStep integration for progressive compression in multi-step agents

Related Issues

N/A — new community provider addition.

Add Headroom — context compression middleware for the AI SDK.
Compresses tool outputs, search results, and large context before
it reaches the model. 70-90% token savings on structured data,
with accuracy preserved.

- Community provider page with setup, usage, and configuration
- Cookbook recipe: context compression middleware for Next.js agents
- Added to community middleware section in middleware docs
- Added to community providers list in foundations docs

npm package: headroom-ai (published, zero dependencies)
GitHub: https://github.com/chopratejas/headroom
Labels added by @tigent bot on Mar 27, 2026: ai/provider (related to a provider package; must be assigned together with at least one `provider/*` label), documentation (improvements or additions to documentation), provider/community.