
AxonOps Schema Registry


Drop-in Confluent Replacement with Extra REST APIs, Enterprise Security, Multi-Backend Storage, and Built-in MCP Server


Getting Started | Documentation | API Reference | MCP Server | Report Issue


Overview

AxonOps Schema Registry is a schema registry for Apache Kafka® that goes beyond Schema Registry compatibility — it gives you every Confluent Schema Registry REST API (including Enterprise-only endpoints like contexts, data contracts, CSFLE encryption, and exporters) plus a large set of additional REST APIs for schema analysis, quality scoring, field search, and admin management — all under the Apache 2.0 license. It also ships with a built-in Model Context Protocol (MCP) server that lets AI assistants like Claude, Cursor, and VS Code Copilot work directly with your schemas through natural language.

Unlike Confluent Schema Registry, which uses Kafka itself (a special _schemas topic) as its storage backend, AxonOps Schema Registry does not require Kafka for storage -- it uses standard databases (PostgreSQL, MySQL, or Cassandra) while remaining fully API-compatible with the Confluent Schema Registry REST API, serializers, and client libraries.

100% Free & Open Source

Apache 2.0 Licensed -- No hidden costs -- No premium tiers -- No license keys


New to schema registries? Read the Fundamentals guide to understand what a schema registry is, why it matters, and how it fits into an event-driven architecture. Ready to design your schemas? See Best Practices for patterns, naming conventions, and evolution strategies.

Why AxonOps Schema Registry?

We built AxonOps Schema Registry because we believe schema management should be simple to deploy, open by default, and packed with the features teams actually need -- without requiring a commercial license or a fleet of infrastructure to support it.

  • Drop-in Compatible -- works with existing Confluent serializers for Java, Go, and Python. Migrate by changing one URL; your producers and consumers keep working.
  • No Kafka Required -- stores schemas in PostgreSQL, MySQL, or Cassandra instead of Kafka. One fewer moving part in your stack, and you choose the database your team already knows.
  • Single Binary, Tiny Footprint -- ships as a ~50 MB statically-linked binary with no runtime dependencies. Runs comfortably on a Raspberry Pi, a side-car container, or a fleet of VMs.
  • Enterprise Security Built In -- six authentication methods (Basic, API Keys, JWT, LDAP/AD, OIDC, mTLS), four RBAC roles, token-bucket rate limiting, and auto-reloading TLS -- all configured through YAML, no plugins required.
  • Enterprise Audit Logging -- every write operation produces a structured audit event with actor, target, outcome, and change-integrity hashes. Events are delivered simultaneously to any combination of stdout, rotating files, syslog (RFC 5424 over TCP/TLS), and webhooks (Splunk HEC, Elasticsearch, and more) -- in JSON or CEF format, with Prometheus metrics for delivery health. Everything ships in the single binary; no external log infrastructure needed.
  • Data Contracts -- attach metadata tags, domain rules, migration rules, and encoding rules to your schemas. Config-level defaults merge with per-subject overrides in a three-layer hierarchy, giving platform teams governance without slowing down individual squads.
  • Client-Side Field Encryption (CSFLE) -- built-in DEK/KEK registry with HashiCorp Vault and OpenBao Transit integration, compatible with Confluent's CSFLE serializers. Encrypt sensitive fields before they leave the producer, with key rotation and versioned DEKs handled automatically.
  • Schema Intelligence -- 26 analysis endpoints (REST and MCP) for field search with fuzzy and regex matching, type search, structural similarity detection, quality scoring, complexity grading, cross-schema pattern detection, compatibility-aware evolution suggestions, and multi-step migration planning. Useful in CI/CD gates, code reviews, and day-to-day schema exploration.
  • AI-Ready (MCP Server) -- the first schema registry with a built-in Model Context Protocol server. AI assistants like Claude, Cursor, VS Code Copilot, and Windsurf can design schemas, check compatibility, score quality, and plan migrations through natural conversation -- backed by 107 tools, 47 resources, and 33 guided prompts. Server-side guardrails keep AI access safe: a global read-only mode, tool allow/deny lists, five permission presets (readonly through full), 14 fine-grained permission scopes, and two-phase confirmations for destructive operations -- all enforced at the server, not the client.
  • Multi-Datacenter Ready -- pair with Cassandra for active-active deployments that replicate schemas across data centers with no leader election and no coordination overhead.
  • Cloud Native -- health checks, Prometheus metrics, graceful shutdown, and automatic database migrations. Designed for Kubernetes from the start.
  • Strict Spec Compliance -- catches invalid Avro, Protobuf, and JSON Schemas at registration time rather than letting them surface at runtime. Fewer surprises in production. (details)
  • Open Source, All Inclusive -- every feature listed above ships under the Apache 2.0 license. There is no "Enterprise Edition" gate. Community contributions, issues, and pull requests are welcome.

Feature Comparison

Comparison based on upstream/default configurations. Third-party plugins may extend capabilities.

Feature AxonOps Confluent OSS Confluent Enterprise Karapace
License Apache 2.0 Confluent Community Commercial Apache 2.0
Language Go Java Java Python
API Compatibility Full N/A N/A Full
Avro
Protobuf
JSON Schema
Schema References
All 7 Compat Modes
Storage: Kafka
Storage: PostgreSQL
Storage: MySQL
Storage: Cassandra
No Kafka Dependency
Basic Auth ✅ ³ ⚠️ ⁴
API Keys
LDAP/AD ⚠️ ³
OIDC/OAuth2 ✅ ³
mTLS
RBAC ⚠️ Limited
Enterprise Audit Logging ❌ ⁶
Rate Limiting
Prometheus Metrics
REST Proxy Separate Separate
Schema Validation
Strict Spec Compliance ⚠️ Partial
Data Contracts
Multi-Tenant Contexts
DEK Registry (CSFLE)
KMS Providers 2 + 3 ¹
Exporter API ²
Extra REST APIs ⁵
MCP Server (AI) ✅ (Beta)
Single Binary
Memory Footprint ~50MB ~500MB+ ~500MB+ ~200MB+

¹ HashiCorp Vault and OpenBao Transit are production-ready. AWS KMS, Azure Key Vault, and GCP KMS support is coming soon.

² Confluent-compatible exporter management API for schema replication configuration. AxonOps stores exporter definitions; active cross-registry replication requires an external agent.

³ Confluent OSS authentication requires Java JAAS LoginModule configuration. AxonOps provides all authentication methods as built-in features with simple YAML configuration -- no Java runtime, no external plugins, no license keys.

⁴ Karapace uses its own ACL-based credential mechanism rather than standard HTTP Basic Authentication.

⁵ Additional AxonOps REST APIs beyond the Schema Registry compatible surface: schema analysis and quality scoring, field/type search, similarity detection, compatibility suggestions, migration planning, registry statistics, user and API key admin, self-service account management, and built-in API documentation. See AxonOps Extensions.

⁶ Confluent Platform does not include built-in audit log outputs. Audit events are written exclusively to internal Kafka topics (docs); delivering events to file, syslog, Splunk, or any external destination requires deploying separate Kafka Connect sink connectors and additional infrastructure. AxonOps provides native multi-output delivery — stdout, file (with rotation), syslog (RFC 5424 over TCP/TLS), and webhook (Splunk HEC, Elasticsearch, etc.) — built into the single binary.

In short: AxonOps gives you every Confluent Schema Registry REST API (Community + Enterprise) plus many additional REST endpoints and a built-in MCP server — all under the Apache 2.0 license, in a single ~50 MB binary, with no Kafka dependency for storage. You get Enterprise-grade capabilities (data contracts, client-side encryption, RBAC, audit logging, multi-tenant contexts, rate limiting) and advanced schema analysis, quality scoring, field search, similarity detection, and AI-assisted schema management that no other registry offers. If you need enterprise support, AxonOps offers commercial support plans.

Quick Start

# Start with Docker (in-memory storage, no database required)
docker run -d -p 8081:8081 ghcr.io/axonops/axonops-schema-registry:latest

# Verify
curl http://localhost:8081/

# Register a schema
curl -X POST http://localhost:8081/subjects/users-value/versions \
  -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  -d '{"schema": "{\"type\": \"record\", \"name\": \"User\", \"fields\": [{\"name\": \"id\", \"type\": \"int\"}, {\"name\": \"name\", \"type\": \"string\"}]}"}'

# Check compatibility
curl -X POST http://localhost:8081/compatibility/subjects/users-value/versions/latest \
  -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  -d '{"schema": "{\"type\": \"record\", \"name\": \"User\", \"fields\": [{\"name\": \"id\", \"type\": \"int\"}, {\"name\": \"name\", \"type\": \"string\"}, {\"name\": \"email\", \"type\": [\"null\", \"string\"], \"default\": null}]}"}'

See the Getting Started guide for Kafka client integration examples in Java, Go, and Python.


Features

Schema Management

  • Multi-Format -- Avro, Protocol Buffers (proto2/proto3), JSON Schema
  • Schema References -- cross-subject dependencies for all three schema types
  • 7 Compatibility Modes -- NONE, BACKWARD, FORWARD, FULL, and transitive variants
  • Normalization -- canonical form generation for content-addressed deduplication
  • Soft Delete -- recoverable deletion with permanent delete option
  • Multi-Tenant Contexts -- namespace isolation with independent schema IDs, subjects, compatibility config, and modes per context (docs)
  • Data Contracts -- schema metadata (tags, properties, sensitive fields), rule sets (domain rules, migration rules, encoding rules), and config-level defaults/overrides with 3-layer merge (docs)
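The data-contract fields above ride along in the normal schema-registration payload. The sketch below shows the general shape of such a payload; the field names (`metadata`, `ruleSet`, `domainRules`, and the rule attributes) follow the Confluent-compatible data-contracts API as commonly documented, so treat them as an assumption and verify against the Data Contracts docs before use:

```python
import json

# Illustrative register payload attaching contract metadata and a domain rule
# to an Avro schema. Field names follow the Confluent-compatible data-contracts
# shape (an assumption here -- confirm against the Data Contracts docs).
payload = {
    "schemaType": "AVRO",
    "schema": json.dumps({
        "type": "record", "name": "User",
        "fields": [{"name": "ssn", "type": "string"}],
    }),
    "metadata": {
        "properties": {"owner": "identity-team"},   # free-form key/value metadata
        "tags": {"User.ssn": ["PII"]},              # tags attached to a field path
        "sensitive": ["User.ssn"],                  # fields flagged as sensitive
    },
    "ruleSet": {
        "domainRules": [{
            "name": "checkSsnFormat",
            "kind": "CONDITION",
            "mode": "WRITE",
            "type": "CEL",
            "expr": "message.ssn.matches('^[0-9]{3}-[0-9]{2}-[0-9]{4}$')",
        }],
    },
}

print(json.dumps(payload, indent=2))
```

This JSON body would be POSTed to /subjects/{subject}/versions with the usual schema-registry content type; config-level defaults then merge underneath any per-subject overrides.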

Encryption

  • DEK Registry -- Client-Side Field Level Encryption (CSFLE) with KEK/DEK management, compatible with Confluent's Enterprise CSFLE feature (docs)
  • KMS Providers -- HashiCorp Vault and OpenBao Transit for production use. AWS KMS, Azure Key Vault, and GCP KMS coming soon.
  • Exporter API -- Confluent-compatible exporter management API for schema replication configuration (docs)

Storage Backends

  • PostgreSQL -- Production. ACID transactions with row-level locking.
  • MySQL -- Production. ACID transactions with SELECT ... FOR UPDATE.
  • Cassandra 5+ -- Distributed / HA. Lightweight transactions (LWT) + SAI indexes.
  • Memory -- Development. Mutex-based, no persistence.

Note: The Cassandra storage backend requires Cassandra 5.0 or later. Earlier versions are not supported.

Auth storage can optionally be separated into HashiCorp Vault.

Security

  • Authentication -- Basic Auth, API Keys, JWT, LDAP/AD, OIDC, mTLS
  • Authorization -- RBAC with 4 built-in roles (super_admin, admin, developer, readonly)
  • Rate Limiting -- Token bucket algorithm, per-client or per-endpoint
  • Enterprise Audit Logging -- Multi-output delivery (stdout, file with rotation, syslog RFC 5424/TLS, webhook), JSON and CEF formats, Prometheus metrics
  • TLS -- Auto-reload certificates, configurable minimum version, mutual TLS
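The token-bucket rate limiting mentioned above can be modeled in a few lines. This is an illustrative sketch of the algorithm, not the registry's actual implementation; the rate and capacity numbers are made up for the example:

```python
import time

class TokenBucket:
    """Token bucket: refills `rate` tokens per second, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start full so an initial burst is allowed
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False                    # request is throttled (HTTP 429 territory)

bucket = TokenBucket(rate=10, capacity=5)    # 10 req/s sustained, bursts of 5
results = [bucket.allow() for _ in range(7)]
print(results)   # first 5 allowed (the burst), remainder throttled
```

Per-client limiting is just a map from client identity to one such bucket; per-endpoint limiting keys the map by route instead.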

Operations

  • Prometheus Metrics -- 19 metrics covering requests, schemas, compatibility, storage, cache, auth, and rate limiting
  • Health Checks -- GET / for load balancer and Kubernetes probes
  • Swagger UI -- Built-in interactive API documentation at GET /docs
  • Graceful Shutdown -- Clean connection draining on SIGTERM/SIGINT
  • Database Migrations -- Automatic schema creation and upgrades

MCP Server (AI-Assisted Schema Management) — Beta

Beta: The MCP server is fully functional and under active development. APIs, tool names, and configuration options may change before the stable 1.0 release.

AxonOps is the first schema registry with a built-in Model Context Protocol server, enabling AI assistants to work directly with your schema registry through natural language. Instead of manually writing REST calls or navigating documentation, developers can ask their AI assistant to design schemas, check compatibility, score quality, plan migrations, and explore the registry — all through conversation.

  • Tools -- full registry CRUD, schema analysis, quality scoring, migration planning, and admin operations
  • Resources -- direct data access for AI clients (static and templated)
  • Prompts -- guided workflows for schema design, evolution, compatibility troubleshooting, encryption setup, and more
  • Security -- bearer token auth, origin validation, read-only mode, tool policies, and two-phase confirmations for destructive operations
  • Compatible with -- Claude Desktop, Claude Code, Cursor, VS Code Copilot, Windsurf, and any MCP-compatible client
  • Schema Intelligence -- 9 deterministic analysis tools that give AI assistants deep insight into your registry: field search across all schemas (with fuzzy and regex matching), type search, structural similarity detection (Jaccard index), quality scoring (naming, docs, type safety, evolution readiness), complexity grading, cross-schema pattern detection, compatibility-aware evolution suggestions, and multi-step migration planning
  • Also available as REST -- all analysis capabilities are exposed as REST endpoints in addition to MCP, for use in CI/CD pipelines and custom tooling
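The structural-similarity tool above is based on the Jaccard index. A toy version over field-name sets shows the idea; the real analysis compares richer structure (types, nesting, references), so treat this as a sketch only:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard index: |A ∩ B| / |A ∪ B| -- 1.0 means identical sets."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical field-name sets for three registered schemas.
user_v1 = {"id", "name", "email"}
user_v2 = {"id", "name", "email", "created_at"}
order   = {"order_id", "total", "currency"}

print(round(jaccard(user_v1, user_v2), 2))  # 0.75 -- likely the same entity
print(jaccard(user_v1, order))              # 0.0  -- structurally unrelated
```

A similarity score near 1.0 flags probable duplicates or near-duplicates across subjects, which is exactly the signal a CI/CD gate or schema review wants surfaced.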

See the MCP Server Guide for configuration, client setup, and the full MCP API Reference.


Architecture

AxonOps Schema Registry is a single stateless binary that connects to any supported storage backend. There is no leader election and no inter-instance coordination -- database-level constraints handle concurrency.

  • Single instance -- one binary, one database connection. Suitable for development or low-traffic production.
  • High availability -- multiple stateless instances behind a load balancer with database-level locking (PostgreSQL/MySQL).
  • Multi-datacenter -- active-active across datacenters using Cassandra's native cross-DC replication and lightweight transactions.

See Deployment for detailed architecture diagrams, topology options, and production configuration.


API Compatibility

AxonOps Schema Registry implements the full Confluent Schema Registry REST API v1 -- including Enterprise-only features that Confluent charges for -- plus additional AxonOps extensions:

Confluent Compatible (Community)

These endpoints are compatible with the free/open-source Confluent Schema Registry:

  • Schemas -- retrieve by ID, list, query types
  • Subjects -- register, list versions, delete, lookup
  • Config -- global and per-subject compatibility levels
  • Mode -- global and per-subject read/write modes
  • Compatibility -- test schema compatibility without registering
  • Metadata -- cluster ID, server version
  • Health -- liveness, readiness, and startup probes

Confluent Compatible (Enterprise)

These endpoints require a Confluent Enterprise license in Confluent Platform. AxonOps includes them free under Apache 2.0:

  • Import -- bulk-import schemas preserving original IDs
  • Contexts -- multi-tenant schema isolation with independent schema IDs, subjects, and config
  • Exporters -- Schema Linking compatible exporter management
  • DEK Registry -- Client-Side Field Level Encryption (CSFLE) with KEK/DEK management

AxonOps Extensions

These endpoints are unique to AxonOps Schema Registry -- not available in any version of Confluent:

  • Analysis -- schema validation, normalization, quality scoring, field/type search, similarity detection, compatibility suggestions, statistics, diff, export, and migration planning (each with context-scoped variants)
  • Admin -- user and API key management with built-in RBAC
  • Account -- self-service profile and password management
  • Documentation -- built-in Swagger UI and OpenAPI spec serving

See the full API Reference, including the API Compatibility Reference section, for detailed endpoint listings.

Serializer & Client Compatibility

  • All serializers -- compatible with Confluent's Avro, Protobuf, and JSON Schema serializers
  • All client libraries -- works with confluent-kafka-go, confluent-kafka-python, and Java Kafka clients
  • Error format -- HTTP status codes and error response JSON match Confluent behavior

Known differences:

  • Contexts -- Both Confluent and AxonOps support contexts for multi-tenancy. Subjects can be qualified with a context prefix (e.g., :.mycontext:my-subject), and schema IDs are unique within each context. AxonOps also supports URL prefix routing (/contexts/.mycontext/subjects/...) as an alternative. See the Contexts guide for full documentation.
  • Cluster coordination -- Confluent uses Kafka's group protocol for leader election between registry instances. AxonOps instances are fully stateless with no leader election -- database-level constraints (transactions, LWTs) handle coordination instead.

Strict Specification Compliance

AxonOps Schema Registry enforces Avro, Protobuf, and JSON Schema specifications more faithfully than Confluent Schema Registry. This catches invalid schemas at registration time -- before they enter your pipeline and cause failures during serialization, deserialization, or code generation.

Schema Fingerprinting and Deduplication

AxonOps uses specification-correct canonical forms for schema fingerprinting, producing better deduplication than Confluent's raw-string approach.

  • Avro Parsing Canonical Form -- AxonOps follows the Avro spec's Parsing Canonical Form (PCF), stripping doc, aliases, and order from the fingerprint; Confluent includes them. Two schemas that differ only in documentation or field-ordering hints are logically identical, so AxonOps correctly assigns them the same global ID, avoiding unnecessary schema proliferation.
  • JSON Schema key ordering -- AxonOps normalizes JSON key order before fingerprinting; Confluent hashes the raw JSON string, so {"type":"object","properties":...} and {"properties":...,"type":"object"} get different IDs. JSON objects are unordered by specification (RFC 8259), so AxonOps correctly treats key-reordered schemas as identical.
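The key-ordering difference is easy to demonstrate. The sketch below is illustrative only -- it is not AxonOps code, and it uses SHA-256 over a sorted-keys re-serialization as a stand-in for whatever canonicalization the registry actually performs:

```python
import hashlib
import json

def fingerprint(schema_json: str, normalize: bool = True) -> str:
    """Hash a JSON schema document, optionally normalizing key order first."""
    if normalize:
        # Re-serialize with sorted keys: per RFC 8259, object key order
        # carries no meaning, so this yields one canonical byte string.
        schema_json = json.dumps(
            json.loads(schema_json), sort_keys=True, separators=(",", ":")
        )
    return hashlib.sha256(schema_json.encode()).hexdigest()

a = '{"type":"object","properties":{"id":{"type":"integer"}}}'
b = '{"properties":{"id":{"type":"integer"}},"type":"object"}'  # keys reordered

print(fingerprint(a, normalize=False) == fingerprint(b, normalize=False))  # False: raw-string hashing
print(fingerprint(a) == fingerprint(b))                                    # True: normalized
```

The raw-string approach stores the reordered schema under a second global ID; the normalized approach deduplicates the two to one.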

Stricter Schema Validation

Confluent accepts several schemas that violate their respective specifications. AxonOps rejects them at registration time with a 422 error, preventing invalid schemas from entering the registry.

  • Avro: invalid default type (e.g., "default": "not_a_number" on an int field) -- AxonOps rejects (422); Confluent accepts (200). Avro spec: "A default value for this field, only used when reading instances that lack this field for schema evolution purposes. [...] The value type must match the field's schema type."
  • Avro: enum with empty symbols ("symbols": []) -- AxonOps rejects (422); Confluent accepts (200). Avro spec: "symbols: a JSON array, listing symbols, as JSON strings. All symbols in an enum must be unique." An empty array produces an unusable enum type with no valid values.
  • Avro: fixed with size 0 ("size": 0) -- AxonOps rejects (422); Confluent accepts (200). Avro spec: "size: an integer, specifying the number of bytes per value." A zero-byte fixed type is meaningless and will fail during serialization.
  • Protobuf: duplicate field numbers (two fields with the same number in one message) -- AxonOps rejects (422); Confluent accepts (200). Protobuf spec: "Each field in the message definition has a unique number." Duplicate field numbers produce ambiguous wire-format encoding.
  • Protobuf: unresolvable imports (import "nonexistent/file.proto") -- AxonOps rejects (422); Confluent accepts (200). Protobuf spec: imports must resolve to a known .proto file. An unresolvable import will fail at compile time in any language.
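To make the first check concrete, here is a toy version of the kind of spec-level test a strict registry can run at registration time. It covers only the int-default case from the list above; the real validation covers the full Avro specification:

```python
def check_avro_record(schema: dict) -> list[str]:
    """Toy spec check: int fields must carry int defaults.
    (Illustrative only -- real validation is far more comprehensive.)"""
    errors = []
    for field in schema.get("fields", []):
        if field.get("type") == "int" and "default" in field:
            d = field["default"]
            # bool is a subclass of int in Python, so exclude it explicitly.
            if not isinstance(d, int) or isinstance(d, bool):
                errors.append(
                    f'field {field["name"]!r}: default {d!r} does not match type "int"'
                )
    return errors

bad = {
    "type": "record", "name": "User",
    "fields": [{"name": "age", "type": "int", "default": "not_a_number"}],
}
print(check_avro_record(bad))   # one error: default is a string on an int field
```

A registry that runs checks like this returns 422 at registration time, so the broken default never reaches a consumer's deserializer.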

JSON Schema Draft-07 Boolean Root Schemas

AxonOps supports boolean root schemas (true and false as standalone schemas), which are valid in JSON Schema Draft-07 but uncommon. true accepts any instance, false rejects all instances.
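The semantics of boolean schemas are simple enough to state in code. This minimal sketch handles only the boolean case (object schemas are out of scope), purely to illustrate the Draft-07 rule:

```python
def validates(schema, instance) -> bool:
    """Draft-07 boolean-schema semantics: True accepts any instance,
    False rejects every instance. Object schemas are not handled here."""
    if schema is True:
        return True
    if schema is False:
        return False
    raise NotImplementedError("object schemas are beyond this sketch")

print(validates(True, {"any": "value"}))   # True: the `true` schema accepts everything
print(validates(False, 42))                # False: the `false` schema rejects everything
```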

Impact on Migration

If you are migrating from Confluent and have schemas that contain the invalid patterns listed above, those schemas will be rejected by AxonOps during import. This is by design -- it surfaces latent problems in your schema definitions. You should fix the invalid schemas before migrating.

For the fingerprinting differences, schemas that Confluent stored as separate global IDs (because they differ only in doc, aliases, order, or JSON key ordering) will be correctly deduplicated to a single global ID in AxonOps.


Schema Registry Ecosystem

AxonOps Schema Registry exists within a healthy ecosystem of Kafka schema registry implementations. Confluent created the original Schema Registry and defined the REST API that has become the industry standard — every Kafka serializer and client library speaks this API. We are grateful for that foundational contribution.

AxonOps Schema Registry builds on this standard by implementing the full Confluent Schema Registry REST API (Community and Enterprise editions), adding extra REST APIs for schema analysis and administration, and including a built-in MCP server for AI-assisted workflows — all under the Apache 2.0 license.

We built this project to make schema registry an integral part of any Kafka deployment, without limitations from licensing or costs. Whether you use AxonOps or not, this project is freely available as part of our commitment to the open-source Kafka community.

See the Ecosystem Guide for a detailed comparison of Confluent, Karapace, Apicurio, and AxonOps, and guidance on choosing the right registry for your needs.


Documentation

Guide Description
Fundamentals What is a schema registry, core concepts, and how it fits into Kafka
Best Practices Schema design patterns, naming conventions, evolution strategies, and common mistakes
Getting Started Run the registry and register your first schemas in five minutes
Installation Docker, APT, YUM, binary, Kubernetes, and from-source installation
Configuration Complete YAML reference with all fields, defaults, and environment variables
Storage Backends PostgreSQL, MySQL, Cassandra, and in-memory backend setup and tuning
Schema Types Avro, Protobuf, and JSON Schema support with reference examples
Compatibility All 7 compatibility modes with per-type rules and configuration
Contexts Multi-tenancy via contexts: namespace isolation, qualified subjects, URL routing
Data Contracts Metadata, rule sets, config defaults/overrides, and governance policies
API Reference All REST endpoints with parameters, examples, and compatibility reference
Authentication All 6 auth methods, RBAC, user management, and admin CLI
Security TLS, rate limiting, credential storage, and hardening checklist
Auditing Enterprise audit logging with multi-output delivery, CEF format, and Prometheus metrics
Deployment Architecture diagrams, topologies, Docker Compose, Kubernetes manifests, systemd, and health checks
Monitoring Prometheus metrics, alerting rules, structured logging, and Grafana queries
Migration Migrating from Confluent Schema Registry with preserved schema IDs
Testing Strategy Testing philosophy, all test layers, how to run and write tests
Development Building from source, running the test suite, and contributing
Encryption DEK Registry, Client-Side Field Level Encryption (CSFLE), and KMS providers
Exporters Schema Linking via exporter management API
MCP Server AI-assisted schema management via Model Context Protocol
MCP API Reference Auto-generated reference for all MCP tools, resources, and prompts
Ecosystem Schema registry ecosystem overview, comparisons, and choosing the right registry
Troubleshooting Common issues, diagnostic commands, and error code reference

Development

Building from Source

git clone https://github.com/axonops/axonops-schema-registry.git
cd axonops-schema-registry
make build

Running Tests

# Unit tests
make test

# Integration tests (requires Docker)
make test-integration

# BDD tests
make test-bdd

# All tests with coverage
make test-coverage

See the Development guide for the full build, test, and contribution workflow.

Contributing

We welcome contributions from the community. Please read the Development guide before submitting pull requests. It covers:

  • Code conventions and project structure
  • Testing philosophy and how to write tests
  • Step-by-step developer workflows
  • How to update the API and regenerate documentation

Community & Support

If you find AxonOps Schema Registry useful, please consider giving us a star!


License

Apache License 2.0 -- see LICENSE for details.


Acknowledgements

This project stands on the shoulders of exceptional open-source work. We are grateful to:

  • Confluent — for creating the original Schema Registry, defining the REST API that became the industry standard, and advancing the Kafka ecosystem. Every Kafka serializer and client library speaks the API that Confluent designed, and this project would not exist without that foundational contribution.
  • Apache Kafka — for the event streaming platform at the heart of it all. The Kafka community's commitment to open standards and interoperability is what makes projects like this possible.
  • Model Context Protocol — for the open protocol that enables AI assistants to interact with developer tools. MCP is transforming how developers work with infrastructure, and we are proud to be among the first schema registries to adopt it.
  • Apache Avro, Protocol Buffers, and JSON Schema — for the serialization formats and schema languages that make schema-driven development possible.

We also thank the maintainers of the core Go libraries that power this project.


Legal Notices

This project may contain trademarks or logos for projects, products, or services. Any use of third-party trademarks or logos is subject to those third parties' policies.

  • AxonOps is a registered trademark of AxonOps Limited.
  • Apache, Apache Cassandra, Cassandra, Apache Kafka, and Kafka are either registered trademarks or trademarks of the Apache Software Foundation or its subsidiaries in Canada, the United States, and/or other countries.
  • Confluent is a registered trademark of Confluent, Inc., an IBM company.

Made with ❤️ by the AxonOps team

Copyright © 2026 AxonOps Limited
