

Connector OSS Logo

πŸ” Connector OSS β€” Make Every AI Decision Provable

Your AI agents are making decisions. Can you prove what they did?

Every AI decision deserves proof. We're building it.

Proof = CID content hash + HMAC-chained audit + Ed25519 signatures β€” verifiable by anyone outside your system.

Open-source AI agent framework with tamper-proof memory, cryptographic audit trail, and trust scoring. HIPAA, SOC2, GDPR, EU AI Act evidence-ready. Python, TypeScript, Docker.


GitHub stars Β  PyPI Β  npm Β  Docker


Get Started Β· See It Work Β· Why Now Β· The Gap Β· vs Others Β· Docs


πŸš€ Get Started

pip install connector-agent-oss
from connector_agent_oss import Connector
import os

c = Connector("deepseek", "deepseek-chat", os.environ["DEEPSEEK_API_KEY"])
result = c.agent("bot", "You are helpful").run("Hello!", "user:alice")

That's it. 3 lines. Every response now includes:

  • result.cid β€” tamper-proof content hash (CIDv1, SHA2-256)
  • result.trust β€” kernel-verified trust score, 0–100
  • result.audit_count β€” HMAC-chained, Ed25519-signed audit entries
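A minimal way to act on these fields in code (a sketch only: the 70 threshold and the escalate_to_human handler are illustrative, not part of the SDK; is_verified() is shown in the comparison below):

if not result.is_verified():           # audit chain + signatures check out
    raise RuntimeError("audit chain failed verification")
if result.trust < 70:                  # kernel-verified trust score, 0-100
    escalate_to_human(result.cid)      # hypothetical handler in your app
print(result.cid, result.trust, result.audit_count)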
πŸ“¦ npm β€” npm install @connector_oss/connector
import { Connector } from '@connector_oss/connector'
const c = new Connector({ llm: 'deepseek:deepseek-chat', apiKey: process.env.DEEPSEEK_API_KEY })
await c.remember('pid:bot', 'Patient has fever', 'nurse')
🐳 Docker β€” docker run adminumesh3011/connector-oss
docker run -p 8080:8080 -e DEEPSEEK_API_KEY=sk-... adminumesh3011/connector-oss
curl http://localhost:8080/health   # β†’ {"status": "ok"}

No Rust toolchain needed. Prebuilt native binaries for Linux, macOS, Windows.


πŸ‘€ See It Work

1-minute demo β€” YAML config Β· Knowledge injection Β· Tool use Β· Pipeline Β· Attack simulation Β· Trust scoring

Connector OSS Demo

πŸ“– View full YAML config, Python code, and raw output β†’

Every response comes back with proof:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  result = agent.run("Diagnose this patient", "patient:P-001")       β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                                                                      β”‚
β”‚  result.text          "Based on the symptoms, likely diagnosis..."   β”‚
β”‚  result.trust         94                                             β”‚
β”‚  result.trust_grade   "A+"                                           β”‚
β”‚  result.cid           "bafy...k7q2"   ← tamper-proof content hash   β”‚
β”‚  result.namespace     "patient:P-001"                                β”‚
β”‚  result.audit_count   3               ← HMAC-chained, Ed25519-signedβ”‚
β”‚  result.comply("hipaa")  β†’ { passed: true, evidence: [...] }        β”‚
β”‚                                                                      β”‚
β”‚  Every field is kernel-verified. Nothing is self-reported.           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

The CID is a content hash. If anyone changes the data, the hash breaks. If the audit chain is tampered with, the HMAC breaks. If a signature is forged, Ed25519 catches it. Math, not trust.
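To see this concretely, here is a standalone sketch of the HMAC-chaining idea in plain Python. The entry format and key handling are illustrative, not Connector's internal format; the point is that editing any entry invalidates every tag after it:

import hmac, hashlib, json

def chain_tag(key: bytes, prev_tag: bytes, entry: dict) -> bytes:
    # Each tag covers the previous tag plus the current entry.
    msg = prev_tag + json.dumps(entry, sort_keys=True).encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_chain(key: bytes, entries: list, tags: list) -> bool:
    prev = b"\x00" * 32
    for entry, tag in zip(entries, tags):
        prev = chain_tag(key, prev, entry)
        if not hmac.compare_digest(prev, tag):
            return False
    return True

key = b"demo-key"                                # illustrative only
entries = [{"n": 1, "action": "remember"}, {"n": 2, "action": "run"}]
tags, prev = [], b"\x00" * 32
for e in entries:
    prev = chain_tag(key, prev, e)
    tags.append(prev)

print(verify_chain(key, entries, tags))          # True
entries[0]["action"] = "tampered"                # alter history after the fact
print(verify_chain(key, entries, tags))          # False: detected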


⏰ Why Now

This isn't theoretical. It's regulation β€” with deadlines and fines.

| When | What | Source |
|------|------|--------|
| Aug 2, 2026 | EU AI Act: high-risk AI rules take effect. Requires audit trails, risk documentation, and evidence for regulators. Fines: up to €15M / 3% of global revenue for non-compliance; €35M / 7% for prohibited practices. | EU Commission timeline · Article 99 |
| Feb 1, 2026 | Colorado AI Act (SB 205): deployers of high-risk AI must document decision-making, maintain audit trails, and protect consumers from algorithmic discrimination. | Colorado Legislature |
| Dec 2024 | Italy fined OpenAI €15M for GDPR violations in AI data processing — the first major AI-specific GDPR enforcement. | Reuters |
| Sep 2024 | FTC launched "Operation AI Comply" — enforcement against deceptive AI claims; "there is no AI exemption from the laws on the books." | FTC press release |
| 2025 | ISACA: "Agentic AI breaks traditional audit models" — autonomous agents create decisions that can't be traced by existing governance tools. | ISACA |
| Jul 2024 | NIST AI 600-1: Generative AI Risk Management Profile — sets expectations for AI documentation, provenance, and accountability. | NIST |

"Every action taken by an AI system should be logged via an audit trail that captures who initiated the action β€” whether human, application, or AI agent β€” along with the reason for it." β€” ISACA, 2025

Every AI framework today can call an LLM. None of them can prove what happened after. That's the gap Connector fills.


πŸ”₯ The Accountability Gap

AI agents are making consequential decisions β€” diagnosing patients, approving loans, flagging fraud. But when an auditor asks "prove what your AI did and why", today's frameworks have nothing:

Current state of AI agent frameworks (2026)
β”œβ”€β”€ βœ… Great at calling LLMs
β”œβ”€β”€ βœ… Great at chaining agents
β”œβ”€β”€ ❌ No tamper-proof memory    ← data can be altered after the fact
β”œβ”€β”€ ❌ No cryptographic audit     ← logs are self-reported and mutable
β”œβ”€β”€ ❌ No compliance evidence     ← auditors get checkbox PDFs, not proof
β”œβ”€β”€ ❌ No trust scoring           ← "it said confidence=0.95" β€” who verified?
└── ❌ No way to answer: "Who approved this? What did the AI see?"

When a healthcare AI makes a decision about a patient, who proves what it saw, what it decided, and why?

When a finance AI flags a transaction, where's the immutable evidence for the auditor?

When an AI agent elevates its own permissions to complete a task, where's the tamper-proof record of who approved it? (ISACA discusses autonomous permission elevation and approval traceability as a growing governance gap β€” source)

In regulated environments, this gap is becoming a compliance liability. Connector closes it: every memory packet gets a CID, every action gets an Ed25519 signature, and every chain gets HMAC verification. Compliance evidence comes from real cryptographic proof, not self-assessments.
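The signature half of that claim is checkable from outside the system. A standalone sketch using the widely used Python cryptography package (this is the generic Ed25519 API, not Connector's own):

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

sk = Ed25519PrivateKey.generate()
pk = sk.public_key()

entry = b'{"action": "elevate_permissions", "approved_by": "user:alice"}'
sig = sk.sign(entry)                       # sign the audit entry

pk.verify(sig, entry)                      # authentic: verify() returns silently
try:
    pk.verify(sig, entry.replace(b"alice", b"mallory"))
except InvalidSignature:
    print("forged approval detected")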


⚑ Connector vs Everything Else

These frameworks are excellent at what they do. Connector doesn't replace them β€” it adds the accountability layer that regulated industries now require.

| | LangChain | CrewAI | OpenAI SDK | Connector |
|---|---|---|---|---|
| Tamper-proof memory | ❌ | ❌ | ❌ | ✅ CID-addressed |
| Cryptographic audit trail | ❌ | ❌ | ❌ | ✅ Ed25519 + HMAC |
| HIPAA / SOC2 / GDPR | ❌ | ❌ | ❌ | ✅ From real evidence |
| Trust score per response | ❌ | ❌ | ❌ | ✅ 0–100, kernel-verified |
| Non-bypassable policies | ❌ | ❌ | ❌ | ✅ 5-layer guard |
| Multi-cell federation | ❌ | ❌ | ❌ | ✅ BFT consensus |
| Works with any LLM | ✅ | ✅ | ❌ | ✅ DeepSeek, OpenAI, Anthropic, local |
| Lines for simplest agent | ~8 | ~12 | ~6 | ~3 |

Same effort, 10x more proof

LangChain:

from langchain_openai import ChatOpenAI
from langchain.agents import initialize_agent
from langchain.agents import AgentType

llm = ChatOpenAI(model="gpt-4")
agent = initialize_agent(
    tools=[],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
result = agent.run("Diagnose patient")
print(result)
# just a string β€” no proof

CrewAI:

from crewai import Agent, Task, Crew

doctor = Agent(
    role="Doctor",
    goal="Diagnose patient",
    llm="gpt-4"
)
task = Task(
    description="Diagnose",
    agent=doctor
)
crew = Crew(agents=[doctor], tasks=[task])
result = crew.kickoff()
print(result)
# just a string β€” no proof

Connector OSS:

from connector_agent_oss import Connector

c = Connector("openai", "gpt-4", api_key)
r = c.agent("doctor", "Diagnose.") \
     .run("Diagnose patient", "patient:1")

print(r.text)        # response
print(r.trust)       # 80 β€” kernel-verified
print(r.cid)         # bafy...k7q2
print(r.is_verified()) # True
# trust + audit + CID = FREE

LangChain / CrewAI:  ❌ No trust score · ❌ No audit trail · ❌ No CID · ❌ No compliance
Connector OSS:       ✅ Trust score · ✅ HMAC audit trail · ✅ CID content hash · ✅ HIPAA/SOC2 ready

3 lines. Same effort as competitors. But every response comes with cryptographic proof, trust scoring, and a full audit trail β€” for free.


πŸ’‘ What You Get β€” Zero Config

| Feature | How it works |
|---|---|
| 🔒 Tamper-proof memory | Every memory packet → CIDv1 (SHA2-256 of DAG-CBOR) |
| 📊 Trust score 0–100 | Kernel-computed from audit integrity, not self-reported |
| 📋 Full audit trail | HMAC-chained, Ed25519-signed, exportable |
| 🏥 Compliance reports | HIPAA, SOC2, GDPR, EU AI Act — from real evidence |
| 🧠 Knowledge graph + RAG | Built-in entity extraction and retrieval |
| 🔀 Multi-agent pipelines | DAG orchestration with saga rollback |
| 🌐 Federation | BFT consensus across organizations |
| 🛡️ Policy firewall | Non-bypassable, 5-layer, per-request enforcement |

πŸ—οΈ Real-World Examples

Healthcare β€” HIPAA ER Triage

c = Connector.from_config("hospital.yaml")  # comply=[hipaa]
triage = c.agent("triage", "Classify patients by urgency 1-5.")
doctor = c.agent("doctor", "Diagnose based on triage data.")

t = triage.run("45M, chest pain 2h, BP 158/95", "patient:P-001")
d = doctor.run(f"Patient: {t.text}", "patient:P-001")
print(f"Trust: {d.trust}/100 ({d.trust_grade})")  # 94/100 (A+)
print(f"CID: {d.cid}")  # Immutable proof of this decision
Finance β€” Fraud Detection
c = Connector.from_config("finance.yaml")  # comply=[soc2, gdpr]
result = c.agent("fraud_analyzer", "Analyze transactions.").run(
    "Transaction: $4,200 at 3:47 AM, Lagos. Cardholder in New York.",
    "user:card-8821"
)
print(f"CID: {result.cid}")  # Immutable audit evidence for regulators

Multi-Agent Pipeline

pipe = c.pipeline("support")
pipe.agent("triage", "Classify tickets")
pipe.agent("resolver", "Find answers")
pipe.route("triage -> resolver")
pipe.hipaa()
result = pipe.run("My account is locked", user="user:bob")
YAML Config β€” HIPAA system in 15 lines
connector:
  provider: deepseek
  model: deepseek-chat
  api_key: ${DEEPSEEK_API_KEY}
  storage: sqlite:./data.db
  comply: [hipaa, soc2]
  security:
    signing: true
    data_classification: PHI
  firewall:
    preset: hipaa
agents:
  triage: { instructions: "Classify patients by urgency 1-5." }
  doctor: { instructions: "Diagnose based on triage.", memory_from: [triage] }
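Consuming that file reuses the from_config constructor from the healthcare example. A sketch (the agent instructions are re-passed here to mirror that example, though the YAML already defines them):

from connector_agent_oss import Connector

c = Connector.from_config("hospital.yaml")   # comply, firewall, agents from YAML
t = c.agent("triage", "Classify patients by urgency 1-5.") \
     .run("45M, chest pain 2h, BP 158/95", "patient:P-001")
print(t.trust, t.cid)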

β†’ Full YAML Dictionary


πŸ›οΈ Architecture

28 Rust crates · 3 workspaces · 1,857 tests · 0 failures
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  SDK Layer                                                                   β”‚
β”‚  Python (PyO3 ~140 fn)     TypeScript (NAPI-RS ~35 methods + HTTP fallback) β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Ring 4 β€” connector-api  (Connector Β· AgentBuilder Β· PipelineBuilder)       β”‚
β”‚  Ring 3 β€” connector-engine  (61 modules: firewall, policy, trust, routing)  β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Ring 1 β€” VAC Memory Kernel     β”‚  Ring 2 β€” AAPI Action Kernel              β”‚
β”‚  MemoryKernel Β· 29 syscalls     β”‚  VAKYA grammar (8 slots, 15 verbs)        β”‚
β”‚  MemPacket (CID-addressed)      β”‚  Ed25519 signing Β· capability tokens      β”‚
β”‚  KnotEngine Β· Prolly tree       β”‚  SagaCoordinator Β· FederatedPolicy        β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Ring 0 β€” Cryptographic Foundation                                           β”‚
β”‚  CIDv1 Β· Ed25519 Β· HMAC-SHA256 Β· Noise_IK Β· ML-DSA-65 Β· Prolly Merkle    β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Connector Protocol β€” CP/1.0  (7 layers, 120 capabilities)                  β”‚
β”‚  Bridges: ANP Β· A2A Β· ACP Β· MCP Β· SCITT                                    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

β†’ Full architecture: ARCHITECTURE.md


πŸ“¦ Install from Source

Build everything locally
git clone https://github.com/GlobalSushrut/connector-oss.git && cd connector-oss

# Test (1,857 tests)
cd connector && cargo test && cd ..   # 1,194 tests
cd vac && cargo test && cd ..         # 492 tests
cd aapi && cargo test && cd ..        # 171 tests

# Python SDK
cd sdks/python && pip install maturin && maturin develop --release && cd ../..

# TypeScript SDK
cd sdks/typescript && npm install && npm run build && cd ../..

# Docker
docker build -t connector-oss .

CI publishes automatically on git tag v*:

| Package | Install |
|---|---|
| connector-agent-oss | pip install connector-agent-oss |
| @connector_oss/connector | npm i @connector_oss/connector |
| connector-oss | docker pull adminumesh3011/connector-oss |

πŸ“š Documentation

β†’ 35 docs from crypto to deployment Β· QUICKSTART.md Β· ARCHITECTURE.md Β· CHANGELOG.md


🎯 Who Is This For?

| If you're building... | Connector gives you... |
|---|---|
| Healthcare AI (HIPAA) | Tamper-proof patient data memory, audit trail for every AI decision, compliance evidence |
| Financial AI (SOC2) | Immutable transaction audit, cryptographic proof for regulators, fraud detection pipeline |
| Legal AI (GDPR) | Right-to-erasure support, data provenance, EU AI Act compliance reports |
| Enterprise AI agents | Non-bypassable policy firewall, RBAC, trust scoring, deterministic guardrails |
| Multi-agent systems | Shared memory with isolation, DAG pipelines, saga rollback, BFT federation |
| Any AI agent in production | Accountability, observability, and "math not trust" verification |

Connector is the accountability layer that the AI industry doesn't have yet. It works alongside LangChain, CrewAI, and OpenAI SDK β€” or as a standalone framework.


πŸ”„ Alternatives & Comparisons

Looking for an alternative to existing AI frameworks? Here's how Connector compares:

  • LangChain alternative with compliance β€” LangChain chains LLMs but has no audit trail, no tamper-proof memory, no compliance evidence. Connector adds all of that.
  • CrewAI alternative with HIPAA β€” CrewAI orchestrates agent crews but has no cryptographic verification. Connector gives you the same multi-agent capability plus provenance.
  • Mem0 alternative with cryptographic proof β€” Mem0 provides AI memory but relies on LLM-based verification. Connector uses CID content-addressing and Ed25519 signatures β€” math, not AI.
  • OpenAI Agents SDK alternative for regulated industries β€” OpenAI's SDK doesn't provide audit trails or compliance reports. Connector wraps any LLM (including OpenAI) with full accountability.
  • Dify / Flowise / n8n alternative for enterprise β€” Visual workflow tools lack security primitives. Connector provides the trust infrastructure underneath.

❓ FAQ

What is tamper-proof memory for AI agents?

Tamper-proof memory means every piece of data an AI agent reads, writes, or decides is content-addressed using CID (Content Identifier) hashes. If anyone changes the data after the fact, the hash breaks. The audit chain uses HMAC-SHA256 and Ed25519 digital signatures, making it computationally infeasible to alter history without detection. This is the same principle behind Git and IPFS.
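The principle in a few lines of plain Python (a real CIDv1 wraps the SHA2-256 digest in multihash and multibase framing, which this sketch omits):

import hashlib

def content_id(data: bytes) -> str:
    # Simplified stand-in for a CID: just the SHA2-256 digest.
    return hashlib.sha256(data).hexdigest()

record = b'{"patient": "P-001", "decision": "admit"}'
cid = content_id(record)

tampered = record.replace(b"admit", b"discharge")
print(content_id(tampered) == cid)   # False: any change breaks the hash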

How does Connector help with HIPAA compliance for AI?

Connector provides HIPAA compliance evidence from real cryptographic audit trails β€” not checkbox self-assessments. Every AI agent interaction with patient data is logged with an immutable CID, signed with Ed25519, and chained with HMAC. The comply("hipaa") method generates compliance reports that auditors can independently verify. Data isolation is enforced at the kernel level with namespace-based access control.

Can I use Connector with OpenAI, Anthropic, DeepSeek, or local LLMs?

Yes. Connector supports 15+ LLM providers out of the box: OpenAI, Anthropic, DeepSeek, Google Gemini, Groq, Together, Mistral, Cohere, Fireworks, Perplexity, OpenRouter, Ollama, LM Studio, vLLM, and any OpenAI-compatible endpoint. The trust and audit layer works identically regardless of which LLM you use.
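Switching providers changes only the constructor arguments; both forms below appear elsewhere in this README:

import os
from connector_agent_oss import Connector

c1 = Connector("deepseek", "deepseek-chat", os.environ["DEEPSEEK_API_KEY"])
c2 = Connector("openai", "gpt-4", os.environ["OPENAI_API_KEY"])  # same proof layer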

How is this different from just logging AI responses?

Logging is self-reported and mutable β€” anyone with database access can alter logs. Connector's audit trail is cryptographically chained: each entry includes an HMAC of the previous entry, making the entire chain tamper-evident. Every memory packet has a CID (content hash), and every action is Ed25519-signed. An auditor can independently verify the entire chain without trusting the system that produced it.

Does Connector work with existing AI frameworks like LangChain or CrewAI?

Yes. Connector provides adapters for LangChain, CrewAI, and OpenAI Agents SDK. You can use Connector as the memory and compliance layer underneath your existing agent framework, or use Connector's built-in agent and pipeline system directly.

What is an AI agent trust score?

Connector computes a trust score (0–100) for every AI agent response. Unlike self-reported confidence scores, this is kernel-verified from audit chain integrity, memory provenance, policy compliance, and cryptographic verification. A score of 90+ means the response has full CID grounding, complete audit trail, and valid signatures.

Is Connector suitable for SOC2 audits?

Yes. Connector generates SOC2 compliance evidence from real cryptographic data β€” CID-addressed memory, Ed25519-signed audit entries, and HMAC-chained logs. The comply("soc2") method produces exportable reports that map directly to SOC2 Trust Service Criteria (security, availability, processing integrity, confidentiality, privacy).

Can Connector handle multi-agent AI systems?

Yes. Connector supports multi-agent pipelines with DAG orchestration, shared memory with namespace isolation, inter-agent communication, saga rollback for failure recovery, and BFT (Byzantine Fault Tolerant) consensus for multi-organization federation. Each agent gets its own memory namespace with configurable access control.


🏷️ Keywords

AI agent framework Β· tamper-proof AI memory Β· AI audit trail Β· HIPAA compliant AI Β· SOC2 AI compliance Β· GDPR AI agent Β· EU AI Act framework Β· AI agent trust score Β· cryptographic audit trail Β· AI agent governance Β· AI agent observability Β· LangChain alternative Β· CrewAI alternative Β· Mem0 alternative Β· secure AI agent framework Β· enterprise AI agent Β· healthcare AI framework Β· financial AI compliance Β· AI decision provenance Β· multi-agent orchestration Β· AI agent accountability Β· deterministic AI guardrails Β· AI agent memory framework Β· regulated AI infrastructure Β· open source AI compliance


🀝 Contributing

See CONTRIBUTING.md. Issues and PRs welcome.

License: Apache-2.0 β€” LICENSE


Found this useful? Help others find it too.


Star on GitHub


Share on X Β  Share on LinkedIn Β  Share on Reddit Β  Submit to HN


pip install connector-agent-oss Β· npm i @connector_oss/connector Β· docker pull adminumesh3011/connector-oss

Built by Umesh Adhikari
