Agent Indoctrination – AI Safety, Bias, Fairness, Ethics & Compliance Testing Framework 🚀
Updated Nov 25, 2025 · Python
Recon-Level Audit of Claude 4 – Obfuscated, Ethical & Technically Precise
Independent AI Safety & Defensive Tooling Engineering. Minimal, cryptographically verifiable outer-layer governance tools for frontier LLMs and multimodal systems.
An auditing framework to evaluate LLMs in local government reporting. Compares AI-generated headlines and topic prioritization against professional journalistic standards. Submitted to CHI 2026.
This report presents a meta-audit of a 7-turn interaction between an Operator and a frontier LLM concerning the integration of security and file-system robustness features into an LLM governance framework.
AI agent that transforms existing codebases in place: no migrations, no rewrites, with changes applied directly to production code.
🐙 Ethical red-team audit of Claude 4 with clear introspection and policy visibility. Includes JSON data and Python tooling; Mermaid diagrams map model behavior.