
Prasad Thiriveedi

I build governed AI, analytics, and data platforms that turn operational, telemetry, and workflow data into production decision-support systems, analytics workflows, and enterprise AI applications.

Building Meridian (governed RAG control plane) and aiPolaris (multi-agent orchestration), with a focus on information retrieval, analytics workflows, operational telemetry, and governed AI execution.


What I'm Building

AgentCore PR Agent | Assessment-built reference workflow for governed AI developer automation. Demonstrates LangGraph orchestration, MCP-style tool execution, reflection/retry recovery, HITL approval gates, guardrails, audit logging, tests, and design documentation.


GovEvidence AI | Microsoft Agent-a-thon Level 3 entry: a governed multi-agent compliance workflow built on Azure AI Foundry with grounded retrieval, confidence-gated refusal, Entra-governed identity, auditability, and HITL approval gates.


Meridian Live — Control plane for enterprise agent execution in regulated environments. Includes telemetry analytics, retrieval quality monitoring, confidence scoring, structured evaluation workflows, and operational observability for production AI systems.

Deterministic retrieval · Explicit refusal semantics · Citation validation · Structured telemetry

Prevents the compliance failures and audit gaps that surface when RAG systems are deployed without governance.

Validated through real-world agent workflows, including failure diagnosis and controlled execution under production-like conditions.

AI systems fail from architectural ambiguity, not model weakness.
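The confidence-gated refusal described above can be sketched roughly as follows. This is a minimal illustration, not Meridian's actual API: the function name, the `(passage, source_id, score)` retrieval shape, and the threshold value are all hypothetical.

```python
def answer_or_refuse(query, retrieve, threshold=0.75):
    """Confidence-gated refusal (illustrative sketch).

    Answer only when retrieval clears the confidence threshold, and
    always attach the citation; otherwise refuse explicitly instead
    of generating an ungrounded answer.
    """
    hits = retrieve(query)  # hypothetical: list of (passage, source_id, score)
    if not hits or max(score for _, _, score in hits) < threshold:
        # Explicit refusal semantics: a structured refusal, not a hallucination
        return {"status": "refused", "reason": "insufficient grounded evidence"}
    passage, source, score = max(hits, key=lambda h: h[2])
    # Structured telemetry fields: every answer carries its evidence and score
    return {"status": "answered", "answer": passage,
            "citation": source, "confidence": score}
```

The design point is that refusal is a first-class, auditable outcome with a machine-readable reason, rather than an absence of output.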


AgentBond — Capability-based enforcement layer for agent delegation and tool access.

Issues scoped, non-redelegable tokens that bind:

  • allowed tools
  • resource boundaries
  • time constraints (TTL)

Every action is validated at execution time: signature · scope · policy · audit

Prevents confused-deputy problems and limits blast radius even under orchestrator compromise.

Forms the hard trust boundary between agent intent and system execution.
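In rough terms, the capability-token model above looks like this. This is a stdlib-only sketch using HMAC-signed JSON as a stand-in for AgentBond's scoped JWTs; all names and the claim layout are hypothetical, and a real deployment would use asymmetric keys.

```python
import hashlib, hmac, json, time

SECRET = b"demo-signing-key"  # hypothetical; production would use asymmetric keys

def issue_token(agent, tools, resources, ttl_s):
    """Issue a scoped, non-redelegable capability token (illustrative only)."""
    claims = {
        "sub": agent,
        "tools": sorted(tools),          # allowed tools
        "resources": sorted(resources),  # resource boundaries
        "exp": time.time() + ttl_s,      # time constraint (TTL)
        "delegable": False,              # non-redelegable
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def validate(token, tool, resource, audit_log):
    """Validate at execution time: signature, scope, policy; append an audit record."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        token["sig"], hmac.new(SECRET, payload, hashlib.sha256).hexdigest())
    c = token["claims"]
    allowed = (ok_sig and time.time() < c["exp"]
               and tool in c["tools"] and resource in c["resources"])
    audit_log.append({"agent": c["sub"], "tool": tool,
                      "resource": resource, "allowed": allowed})
    return allowed
```

Because validation happens at execution time against the token itself, a compromised orchestrator cannot widen an agent's reach beyond what the token binds: out-of-scope calls fail closed and still land in the audit log.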


aiPolaris (Featured Product) — Regulated AI agent orchestration stalls on compliance gaps, audit failures, and capability boundaries that aren't enforced until production. aiPolaris prevents that from the first commit, transforming workflow telemetry, retrieval activity, latency, and quality signals into interpretable analytics and operational insights.


Next: Enterprise Agentic RAG — LangGraph multi-agent system with Graph API, ADLS Gen2, Azure AI Search, Entra ID auth, and GCCH-scoped Terraform. Built to demonstrate the full delivery process from intake to ATO-ready release records.


Dead Letter Oracle — MCP-based agent system for diagnosing and safely replaying failed messages with governed execution and audit traceability.
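The "safely replaying" part can be sketched as a replay gate: a failed message is re-published only when its diagnosis is known, its schema checks out, and a human has approved it, with every decision audited. This is a minimal illustration; the class name, field names, and approval mechanism are assumptions, not Dead Letter Oracle's actual interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class ReplayGate:
    """Governed replay for dead-lettered messages (illustrative sketch)."""
    required_fields: set                       # expected message schema
    audit: list = field(default_factory=list)  # append-only decision trail

    def replay(self, msg, diagnosis, approved, send):
        # Schema check: the repaired message must carry all required fields
        schema_ok = self.required_fields <= set(msg)
        # Governed execution: diagnosis and HITL approval are both mandatory
        allowed = schema_ok and diagnosis is not None and approved
        self.audit.append({"id": msg.get("id"), "diagnosis": diagnosis,
                           "approved": approved, "replayed": allowed})
        if allowed:
            send(msg)  # controlled re-publish to the source queue
        return allowed
```

Keeping the audit record unconditional, including for rejected replays, is what makes post-incident traceability possible.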


How I Work

I prefer measurable hypotheses, interpretable outputs, structured telemetry, and traceable analytics over opaque automation. Regulated AI initiatives stall because there is no delivery process — not because the engineering is wrong. Every engagement runs the same five-phase loop: intake, parallel execution, integration, delivery, and continuous operations. The compliance evidence accumulates as the system is built, not after.

| Role mode | What it produces |
| --- | --- |
| Business Analyst | Use cases, system boundary doc, acceptance criteria |
| ML / AI Engineer | Agent graph, eval harness, prompt versioning |
| Data Engineer | Graph API connectors, ADLS pipeline, AI Search index |
| Data Scientist / Analytics | Feature engineering, telemetry analytics, retrieval evaluation, operational KPIs |
| DevOps / MLOps | Terraform (commercial + GCCH), CI/CD, release records |
| Security | Threat model, NIST control mapping, SAST gates |

Stack

| Domain | Technologies |
| --- | --- |
| AI Orchestration | LangGraph, Semantic Kernel, AutoGen, MCP tool servers |
| Retrieval | Azure AI Search, pgvector, Chroma, RAG pipelines |
| LLMs | Azure OpenAI, Claude (Opus/Sonnet), Ollama (local) |
| Data | Graph API, ADLS Gen2, Azure Data Factory, Kafka, Databricks, Delta Lake, PySpark, Synapse SQL |
| Analytics | Python, Pandas, SQL, PySpark, Databricks SQL, Power BI |
| Backend | Python, FastAPI, C#/.NET Core, TypeScript, gRPC |
| Cloud & Infra | Azure (GCCH-ready), AWS, Kubernetes, Terraform, AKS |
| Compliance | NIST 800-53, FedRAMP, ATO-ready, active secret clearance |

Selected AI Certifications & Programs

  • Microsoft Agent Master: Agent-a-thon Level 3
  • Anthropic: Claude API & MCP Development
  • Microsoft: AI agent fundamentals with Azure AI Foundry & Cognitive Services
  • AWS: Generative AI & AI Agents with Amazon Bedrock
  • AWS: Security Governance at Scale
  • Microsoft: Azure Cognitive Services
  • Databricks: Fundamentals
  • Databricks: Platform Architecture
  • Microsoft: Fabric / Synapse

Philosophy

Control precedes generation. Observability precedes scale. Governance precedes automation. Data quality, observability, and interpretable analytics are foundational to trustworthy AI systems.

I design systems where failure modes are explicit, validated, and controlled before production.


Representative Scenarios

  • Feature engineering and analytics workflows for governed AI systems
  • DLQ failure diagnosis and governed replay
  • Schema mismatch detection with validation loops
  • Controlled tool execution via MCP enforcement boundary
  • Agent decision traceability with audit reconstruction

LinkedIn

Pinned

  1. meridian-studio (TypeScript): Operator UI for the Meridian governed AI platform — RAG, Ops Copilot, Runtime Provisioning

  2. meridian-infra (HCL): Terraform infrastructure provisioning for the Meridian governed AI platform

  3. aiPolaris (Python): Federal-grade multi-agent orchestration — LangGraph DAG, LangChain LCEL, capability sandboxing, full audit trail, GCCH-ready Terraform

  4. dead-letter-oracle (Python): Governed MCP agent for DLQ incident resolution with closed-loop reasoning, multi-factor governance, and BlackBox reasoning trace

  5. agentbond (Python): Zero-trust capability delegation for MCP multi-agent systems. Solves the confused-deputy problem with scoped JWT tokens, deterministic enforcement, and full audit trail.

  6. agentcore-pr-agent (Python): Governed agentic runtime and gateway — LangGraph workflows, MCP-style tools, HITL, guardrails