Every AI decision deserves proof. We're building it.
Proof = CID content hash + HMAC-chained audit + Ed25519 signatures – verifiable by anyone outside your system.
Open-source AI agent framework with tamper-proof memory, cryptographic audit trail, and trust scoring. HIPAA, SOC2, GDPR, EU AI Act evidence-ready. Python, TypeScript, Docker.
Get Started · See It Work · Why Now · The Gap · vs Others · Docs
🐍 Python – `pip install connector-agent-oss`

```python
from connector_agent_oss import Connector
import os

c = Connector("deepseek", "deepseek-chat", os.environ["DEEPSEEK_API_KEY"])
result = c.agent("bot", "You are helpful").run("Hello!", "user:alice")
```

That's it. 3 lines. Every response now includes:

- `result.cid` → tamper-proof content hash (CIDv1, SHA2-256)
- `result.trust` → kernel-verified trust score, 0–100
- `result.audit_count` → HMAC-chained, Ed25519-signed audit entries
📦 npm – `npm install @connector_oss/connector`

```typescript
import { Connector } from '@connector_oss/connector'

const c = new Connector({ llm: 'deepseek:deepseek-chat', apiKey: process.env.DEEPSEEK_API_KEY })
await c.remember('pid:bot', 'Patient has fever', 'nurse')
```

🐳 Docker – `docker run adminumesh3011/connector-oss`

```bash
docker run -p 8080:8080 -e DEEPSEEK_API_KEY=sk-... adminumesh3011/connector-oss
curl http://localhost:8080/health   # → {"status": "ok"}
```

No Rust toolchain needed. Prebuilt native binaries for Linux, macOS, Windows.
1-minute demo – YAML config · Knowledge injection · Tool use · Pipeline · Attack simulation · Trust scoring
Every response comes back with proof:
```
┌────────────────────────────────────────────────────────────────────────┐
│  result = agent.run("Diagnose this patient", "patient:P-001")          │
├────────────────────────────────────────────────────────────────────────┤
│                                                                        │
│  result.text          "Based on the symptoms, likely diagnosis..."     │
│  result.trust         94                                               │
│  result.trust_grade   "A+"                                             │
│  result.cid           "bafy...k7q2"     ← tamper-proof content hash    │
│  result.namespace     "patient:P-001"                                  │
│  result.audit_count   3                 ← HMAC-chained, Ed25519-signed │
│  result.comply("hipaa") → { passed: true, evidence: [...] }            │
│                                                                        │
│  Every field is kernel-verified. Nothing is self-reported.             │
└────────────────────────────────────────────────────────────────────────┘
```
The CID is a content hash. If anyone changes the data, the hash breaks. If the audit chain is tampered with, the HMAC breaks. If a signature is forged, Ed25519 catches it. Math, not trust.
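A minimal sketch of why that works, using plain SHA-256 from the Python standard library. Connector's real packets use CIDv1 over DAG-CBOR, so this shows the principle, not the wire format:

```python
# Illustration only: content addressing with plain SHA-256.
# Connector uses CIDv1 (a multihash over DAG-CBOR bytes), but the
# tamper-evidence property is the same: change the bytes, break the hash.
import hashlib

record = b'{"patient": "P-001", "diagnosis": "likely angina"}'
cid_like = hashlib.sha256(record).hexdigest()

tampered = b'{"patient": "P-001", "diagnosis": "no findings"}'
assert hashlib.sha256(tampered).hexdigest() != cid_like  # tampering detected
```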
This isn't theoretical. It's regulation, with deadlines and fines.
| When | What | Source |
|---|---|---|
| Aug 2, 2026 | EU AI Act: high-risk AI rules take effect. Requires audit trails, risk documentation, and evidence for regulators. Fines: up to €15M / 3% of global revenue for non-compliance; €35M / 7% for prohibited practices. | EU Commission timeline · Article 99 |
| Feb 1, 2026 | Colorado AI Act (SB 205): deployers of high-risk AI must document decision-making, maintain audit trails, and protect consumers from algorithmic discrimination. | Colorado Legislature |
| Dec 2024 | Italy fined OpenAI €15M for GDPR violations in AI data processing – the first major AI-specific GDPR enforcement. | Reuters |
| Sep 2024 | FTC launched "Operation AI Comply" – enforcement against deceptive AI claims; "there is no AI exemption from the laws on the books." | FTC press release |
| 2025 | ISACA: "Agentic AI breaks traditional audit models" – autonomous agents create decisions that can't be traced by existing governance tools. | ISACA |
| Jul 2024 | NIST AI 600-1: Generative AI Risk Management Profile – sets expectations for AI documentation, provenance, and accountability. | NIST |

"Every action taken by an AI system should be logged via an audit trail that captures who initiated the action – whether human, application, or AI agent – along with the reason for it." – ISACA, 2025
Every AI framework today can call an LLM. None of them can prove what happened after. That's the gap Connector fills.
AI agents are making consequential decisions: diagnosing patients, approving loans, flagging fraud. But when an auditor asks "prove what your AI did and why", today's frameworks have nothing:
```
Current state of AI agent frameworks (2026)
├── ✅ Great at calling LLMs
├── ✅ Great at chaining agents
├── ❌ No tamper-proof memory → data can be altered after the fact
├── ❌ No cryptographic audit → logs are self-reported and mutable
├── ❌ No compliance evidence → auditors get checkbox PDFs, not proof
├── ❌ No trust scoring → "it said confidence=0.95" – who verified?
└── ❌ No way to answer: "Who approved this? What did the AI see?"
```
When a healthcare AI makes a decision about a patient, who proves what it saw, what it decided, and why?
When a finance AI flags a transaction, where's the immutable evidence for the auditor?
When an AI agent elevates its own permissions to complete a task, where's the tamper-proof record of who approved it? (ISACA discusses autonomous permission elevation and approval traceability as a growing governance gap – source)
In regulated environments, this gap is becoming a compliance liability. Every memory packet gets a CID. Every action gets an Ed25519 signature. Every chain gets HMAC verification. Compliance evidence comes from real cryptographic proof, not self-assessments.
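For intuition, here is a rough sketch of the chaining idea built with nothing but the Python standard library. The key handling and entry shape are invented for the example and are not Connector's internal format:

```python
# Illustration only: each entry's MAC covers the previous entry's MAC,
# so editing or deleting any entry invalidates every MAC after it.
import hashlib
import hmac
import json

KEY = b"audit-chain-key"  # held by the kernel, not the application

def append(chain, entry):
    prev_mac = chain[-1]["mac"] if chain else "genesis"
    payload = json.dumps(entry, sort_keys=True) + prev_mac
    mac = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
    chain.append({"entry": entry, "mac": mac})

def verify(chain):
    prev_mac = "genesis"
    for link in chain:
        payload = json.dumps(link["entry"], sort_keys=True) + prev_mac
        expected = hmac.new(KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, link["mac"]):
            return False
        prev_mac = link["mac"]
    return True

chain = []
append(chain, {"actor": "agent:doctor", "action": "read", "ns": "patient:P-001"})
append(chain, {"actor": "agent:doctor", "action": "write", "ns": "patient:P-001"})
assert verify(chain)
chain[0]["entry"]["action"] = "delete"  # rewrite history...
assert not verify(chain)                # ...and the chain breaks
```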
These frameworks are excellent at what they do. Connector doesn't replace them; it adds the accountability layer that regulated industries now require.
| | LangChain | CrewAI | OpenAI SDK | Connector |
|---|---|---|---|---|
| Tamper-proof memory | ❌ | ❌ | ❌ | ✅ CID-addressed |
| Cryptographic audit trail | ❌ | ❌ | ❌ | ✅ Ed25519 + HMAC |
| HIPAA / SOC2 / GDPR | ❌ | ❌ | ❌ | ✅ From real evidence |
| Trust score per response | ❌ | ❌ | ❌ | ✅ 0–100, kernel-verified |
| Non-bypassable policies | ❌ | ❌ | ❌ | ✅ 5-layer guard |
| Multi-cell federation | ❌ | ❌ | ❌ | ✅ BFT consensus |
| Works with any LLM | ✅ | ✅ | ✅ | ✅ DeepSeek, OpenAI, Anthropic, local |
| Lines for simplest agent | ~8 | ~12 | ~6 | ~3 |
**LangChain**

```python
from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent, AgentType

llm = ChatOpenAI(model="gpt-4")
agent = initialize_agent(
    tools=[],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
result = agent.run("Diagnose patient")
print(result)
# just a string – no proof
```

❌ No trust score · ❌ No audit trail · ❌ No CID · ❌ No compliance

**CrewAI**

```python
from crewai import Agent, Task, Crew

doctor = Agent(
    role="Doctor",
    goal="Diagnose patient",
    backstory="Physician",  # required by CrewAI
    llm="gpt-4",
)
task = Task(
    description="Diagnose",
    expected_output="Diagnosis",  # required by CrewAI
    agent=doctor,
)
crew = Crew(agents=[doctor], tasks=[task])
result = crew.kickoff()
print(result)
# just a string – no proof
```

❌ No trust score · ❌ No audit trail · ❌ No CID · ❌ No compliance

**Connector OSS**

```python
from connector_agent_oss import Connector

c = Connector("openai", "gpt-4", api_key)
r = c.agent("doctor", "Diagnose.") \
     .run("Diagnose patient", "patient:1")
print(r.text)           # response
print(r.trust)          # 80 – kernel-verified
print(r.cid)            # bafy...k7q2
print(r.is_verified())  # True
# trust + audit + CID = FREE
```

✅ Trust score · ✅ HMAC audit trail · ✅ CID content hash · ✅ HIPAA/SOC2 ready
3 lines. Same effort as competitors. But every response comes with cryptographic proof, trust scoring, and a full audit trail – for free.
| | Feature | How it works |
|---|---|---|
| 🔒 | Tamper-proof memory | Every memory packet → CIDv1 (SHA2-256 of DAG-CBOR) |
| 📊 | Trust score 0–100 | Kernel-computed from audit integrity, not self-reported |
| 📜 | Full audit trail | HMAC-chained, Ed25519-signed, exportable |
| 🏥 | Compliance reports | HIPAA, SOC2, GDPR, EU AI Act – from real evidence |
| 🧠 | Knowledge graph + RAG | Built-in entity extraction and retrieval |
| 🔗 | Multi-agent pipelines | DAG orchestration with saga rollback |
| 🌐 | Federation | BFT consensus across organizations |
| 🛡️ | Policy firewall | Non-bypassable, 5-layer, per-request enforcement |
c = Connector.from_config("hospital.yaml") # comply=[hipaa]
triage = c.agent("triage", "Classify patients by urgency 1-5.")
doctor = c.agent("doctor", "Diagnose based on triage data.")
t = triage.run("45M, chest pain 2h, BP 158/95", "patient:P-001")
d = doctor.run(f"Patient: {t.text}", "patient:P-001")
print(f"Trust: {d.trust}/100 ({d.trust_grade})") # 94/100 (A+)
print(f"CID: {d.cid}") # Immutable proof of this decisionFinance β Fraud Detection
c = Connector.from_config("finance.yaml") # comply=[soc2, gdpr]
result = c.agent("fraud_analyzer", "Analyze transactions.").run(
"Transaction: $4,200 at 3:47 AM, Lagos. Cardholder in New York.",
"user:card-8821"
)
print(f"CID: {result.cid}") # Immutable audit evidence for regulatorsMulti-Agent Pipeline
```python
pipe = c.pipeline("support")
pipe.agent("triage", "Classify tickets")
pipe.agent("resolver", "Find answers")
pipe.route("triage -> resolver")
pipe.hipaa()

result = pipe.run("My account is locked", user="user:bob")
```

**YAML Config – HIPAA system in 15 lines**
```yaml
connector:
  provider: deepseek
  model: deepseek-chat
  api_key: ${DEEPSEEK_API_KEY}
  storage: sqlite:./data.db
  comply: [hipaa, soc2]

security:
  signing: true
  data_classification: PHI

firewall:
  preset: hipaa

agents:
  triage: { instructions: "Classify patients by urgency 1-5." }
  doctor: { instructions: "Diagnose based on triage.", memory_from: [triage] }
```

28 Rust crates · 3 workspaces · 1,857 tests · 0 failures – click to expand
```
┌──────────────────────────────────────────────────────────────────────────────┐
│                                  SDK Layer                                   │
│  Python (PyO3 ~140 fn)     TypeScript (NAPI-RS ~35 methods + HTTP fallback)  │
├──────────────────────────────────────────────────────────────────────────────┤
│ Ring 4 – connector-api     (Connector · AgentBuilder · PipelineBuilder)      │
│ Ring 3 – connector-engine  (61 modules: firewall, policy, trust, routing)    │
├──────────────────────────────────────────────────────────────────────────────┤
│ Ring 1 – VAC Memory Kernel           │ Ring 2 – AAPI Action Kernel           │
│   MemoryKernel · 29 syscalls         │   VAKYA grammar (8 slots, 15 verbs)   │
│   MemPacket (CID-addressed)          │   Ed25519 signing · capability tokens │
│   KnotEngine · Prolly tree           │   SagaCoordinator · FederatedPolicy   │
├──────────────────────────────────────────────────────────────────────────────┤
│ Ring 0 – Cryptographic Foundation                                            │
│   CIDv1 · Ed25519 · HMAC-SHA256 · Noise_IK · ML-DSA-65 · Prolly Merkle       │
├──────────────────────────────────────────────────────────────────────────────┤
│ Connector Protocol – CP/1.0 (7 layers, 120 capabilities)                     │
│   Bridges: ANP · A2A · ACP · MCP · SCITT                                     │
└──────────────────────────────────────────────────────────────────────────────┘
```
Build everything locally
```bash
git clone https://github.com/GlobalSushrut/connector-oss.git && cd connector-oss

# Test (1,857 tests)
cd connector && cargo test && cd ..  # 1,194 tests
cd vac && cargo test && cd ..        # 492 tests
cd aapi && cargo test && cd ..       # 171 tests

# Python SDK
cd sdks/python && pip install maturin && maturin develop --release && cd ../..

# TypeScript SDK
cd sdks/typescript && npm install && npm run build && cd ../..

# Docker
docker build -t connector-oss .
```

CI publishes automatically on git tag `v*`:
| Package | Install |
|---|---|
| `connector-agent-oss` | `pip install connector-agent-oss` |
| `@connector_oss/connector` | `npm i @connector_oss/connector` |
| `connector-oss` | `docker pull adminumesh3011/connector-oss` |
35 docs from crypto to deployment · QUICKSTART.md · ARCHITECTURE.md · CHANGELOG.md
| If you're building... | Connector gives you... |
|---|---|
| Healthcare AI (HIPAA) | Tamper-proof patient data memory, audit trail for every AI decision, compliance evidence |
| Financial AI (SOC2) | Immutable transaction audit, cryptographic proof for regulators, fraud detection pipeline |
| Legal AI (GDPR) | Right-to-erasure support, data provenance, EU AI Act compliance reports |
| Enterprise AI agents | Non-bypassable policy firewall, RBAC, trust scoring, deterministic guardrails |
| Multi-agent systems | Shared memory with isolation, DAG pipelines, saga rollback, BFT federation |
| Any AI agent in production | Accountability, observability, and "math not trust" verification |
Connector is the accountability layer that the AI industry doesn't have yet. It works alongside LangChain, CrewAI, and OpenAI SDK – or as a standalone framework.
Looking for an alternative to existing AI frameworks? Here's how Connector compares:
- LangChain alternative with compliance – LangChain chains LLMs but has no audit trail, no tamper-proof memory, no compliance evidence. Connector adds all of that.
- CrewAI alternative with HIPAA – CrewAI orchestrates agent crews but has no cryptographic verification. Connector gives you the same multi-agent capability plus provenance.
- Mem0 alternative with cryptographic proof – Mem0 provides AI memory but relies on LLM-based verification. Connector uses CID content-addressing and Ed25519 signatures: math, not AI.
- OpenAI Agents SDK alternative for regulated industries – OpenAI's SDK doesn't provide audit trails or compliance reports. Connector wraps any LLM (including OpenAI) with full accountability.
- Dify / Flowise / n8n alternative for enterprise – Visual workflow tools lack security primitives. Connector provides the trust infrastructure underneath.
What is tamper-proof memory for AI agents?
Tamper-proof memory means every piece of data an AI agent reads, writes, or decides is content-addressed using CID (Content Identifier) hashes. If anyone changes the data after the fact, the hash breaks. The audit chain uses HMAC-SHA256 and Ed25519 digital signatures, making it mathematically impossible to alter history without detection. This is the same principle behind Git and IPFS.
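As a concrete illustration of the signature half, here is Ed25519 sign-and-verify with the off-the-shelf `cryptography` package (`pip install cryptography`). Connector does the equivalent inside the kernel; the entry bytes here are invented:

```python
# Illustration only: anyone holding the public key can verify the entry
# without trusting the system that produced it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

entry = b"agent:doctor wrote to patient:P-001"
signature = private_key.sign(entry)

public_key.verify(signature, entry)  # valid: returns without raising
try:
    public_key.verify(signature, entry + b"!")  # altered content
except InvalidSignature:
    print("tamper detected")
```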
How does Connector help with HIPAA compliance for AI?
Connector provides HIPAA compliance evidence from real cryptographic audit trails – not checkbox self-assessments. Every AI agent interaction with patient data is logged with an immutable CID, signed with Ed25519, and chained with HMAC. The `comply("hipaa")` method generates compliance reports that auditors can independently verify. Data isolation is enforced at the kernel level with namespace-based access control.
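A sketch of consuming that report, based only on the `passed` and `evidence` fields shown in the result box earlier in this README; the full report shape may differ:

```python
# Assumes `result` from one of the examples above. Dict-style access
# follows the { passed: ..., evidence: [...] } summary shown earlier,
# which may differ from the SDK's actual return type.
report = result.comply("hipaa")
if not report["passed"]:
    raise RuntimeError("HIPAA evidence incomplete for this response")
for item in report["evidence"]:
    print(item)  # independently verifiable, CID-addressed entries
```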
Can I use Connector with OpenAI, Anthropic, DeepSeek, or local LLMs?
Yes. Connector supports 15+ LLM providers out of the box: OpenAI, Anthropic, DeepSeek, Google Gemini, Groq, Together, Mistral, Cohere, Fireworks, Perplexity, OpenRouter, Ollama, LM Studio, vLLM, and any OpenAI-compatible endpoint. The trust and audit layer works identically regardless of which LLM you use.
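Since the constructor takes the provider and model as plain strings (see the quickstart), switching providers is a one-line change. The non-DeepSeek model names and env vars below are illustrative, not a verified list:

```python
import os
from connector_agent_oss import Connector

# Provider/model strings other than the DeepSeek pair documented above
# are examples only.
for provider, model, key_env in [
    ("deepseek", "deepseek-chat", "DEEPSEEK_API_KEY"),
    ("openai", "gpt-4", "OPENAI_API_KEY"),
    ("anthropic", "claude-3-5-sonnet", "ANTHROPIC_API_KEY"),
]:
    c = Connector(provider, model, os.environ[key_env])
    r = c.agent("bot", "You are helpful").run("Hello!", "user:alice")
    print(provider, r.trust, r.cid)  # same proof fields from every provider
```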
How is this different from just logging AI responses?
Logging is self-reported and mutable: anyone with database access can alter logs. Connector's audit trail is cryptographically chained: each entry includes an HMAC of the previous entry, making the entire chain tamper-evident. Every memory packet has a CID (content hash), and every action is Ed25519-signed. An auditor can independently verify the entire chain without trusting the system that produced it.
Does Connector work with existing AI frameworks like LangChain or CrewAI?
Yes. Connector provides adapters for LangChain, CrewAI, and OpenAI Agents SDK. You can use Connector as the memory and compliance layer underneath your existing agent framework, or use Connector's built-in agent and pipeline system directly.
What is an AI agent trust score?
Connector computes a trust score (0–100) for every AI agent response. Unlike self-reported confidence scores, this is kernel-verified from audit chain integrity, memory provenance, policy compliance, and cryptographic verification. A score of 90+ means the response has full CID grounding, complete audit trail, and valid signatures.
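A sketch of gating on that score before acting, using the result fields shown earlier; `handle` and `escalate_to_human` are hypothetical application callbacks, and the 90 threshold follows the sentence above:

```python
# Assumes `c` from the quickstart. The callbacks are placeholders.
def handle(text): print("acting on:", text)
def escalate_to_human(result): print("needs human review:", result.cid)

r = c.agent("doctor", "Diagnose.").run("Diagnose patient", "patient:P-001")
if r.trust >= 90 and r.is_verified():
    handle(r.text)        # full CID grounding, valid signatures
else:
    escalate_to_human(r)  # low trust or broken verification
```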
Is Connector suitable for SOC2 audits?
Yes. Connector generates SOC2 compliance evidence from real cryptographic data – CID-addressed memory, Ed25519-signed audit entries, and HMAC-chained logs. The `comply("soc2")` method produces exportable reports that map directly to SOC2 Trust Service Criteria (security, availability, processing integrity, confidentiality, privacy).
Can Connector handle multi-agent AI systems?
Yes. Connector supports multi-agent pipelines with DAG orchestration, shared memory with namespace isolation, inter-agent communication, saga rollback for failure recovery, and BFT (Byzantine Fault Tolerant) consensus for multi-organization federation. Each agent gets its own memory namespace with configurable access control.
AI agent framework · tamper-proof AI memory · AI audit trail · HIPAA compliant AI · SOC2 AI compliance · GDPR AI agent · EU AI Act framework · AI agent trust score · cryptographic audit trail · AI agent governance · AI agent observability · LangChain alternative · CrewAI alternative · Mem0 alternative · secure AI agent framework · enterprise AI agent · healthcare AI framework · financial AI compliance · AI decision provenance · multi-agent orchestration · AI agent accountability · deterministic AI guardrails · AI agent memory framework · regulated AI infrastructure · open source AI compliance
See CONTRIBUTING.md. Issues and PRs welcome.
License: Apache-2.0 – LICENSE
pip install connector-agent-oss · npm i @connector_oss/connector · docker pull adminumesh3011/connector-oss
Built by Umesh Adhikari
