Detailed catalogue of AI security controls across all three AISECA maturity tiers. Each control includes a description, implementation guidance, and mapping to NIST GenAI Risk Domains.
Status: Draft | Framework Version: 0.1
## Tier 1

### AI Asset Inventory
- Description: Catalogue all AI tools, models, and integrations across the organisation
- Implementation: Maintain a living register of all AI systems including vendor, data flows, users, and risk classification
- NIST Mapping: GOVERN 1.1, MAP 1.1
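The living register described above can be sketched as a simple typed record. The field names (`vendor`, `data_flows`, `risk_classification`) follow the control text; the entry values and the `AIAsset` name are illustrative assumptions, not an official AISECA schema:

```python
from dataclasses import dataclass, field

# Hypothetical minimal register entry; fields mirror the control's
# "vendor, data flows, users, and risk classification" wording.
@dataclass
class AIAsset:
    name: str
    vendor: str
    data_flows: list[str] = field(default_factory=list)  # e.g. "ticket text -> model"
    users: list[str] = field(default_factory=list)
    risk_classification: str = "unassessed"              # e.g. low/medium/high

# A living register is then just a queryable collection of entries.
register = [
    AIAsset("support-chatbot", "VendorX",
            data_flows=["ticket text -> model"],
            users=["support-team"],
            risk_classification="medium"),
]

high_risk = [a.name for a in register if a.risk_classification == "high"]
```

Keeping the register as structured data (rather than a spreadsheet of free text) is what makes queries like "all high-risk assets" trivial to automate.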
### Acceptable Use Policy
- Description: Establish clear guidelines for how AI tools may be used
- Implementation: Define permitted and prohibited uses, data handling requirements, and escalation procedures
- NIST Mapping: GOVERN 1.3, GOVERN 2.1
### Access Control
- Description: Role-based access to AI systems and their training data
- Implementation: Implement RBAC with least-privilege principles for all AI system access
- NIST Mapping: GOVERN 1.4, MANAGE 2.1
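A minimal sketch of the RBAC check, assuming hypothetical role and permission names. Least privilege falls out of the default-deny shape: anything not explicitly granted is refused:

```python
# Illustrative role-to-permission grants; a real deployment would source
# these from the organisation's identity provider, not a literal dict.
ROLE_PERMISSIONS = {
    "data-scientist": {"model:query", "training-data:read"},
    "ml-engineer":    {"model:query", "model:deploy"},
    "auditor":        {"logs:read"},
}

def is_allowed(role: str, permission: str) -> bool:
    # Unknown roles get the empty set, i.e. no access (default deny).
    return permission in ROLE_PERMISSIONS.get(role, set())
```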
### AI Risk Assessment
- Description: Map AI deployments to NIST GenAI Risk Domains
- Implementation: Assess each AI deployment against risk domains, document findings, assign risk owners
- NIST Mapping: MAP 1.1, MAP 2.1
### Data Classification
- Description: Classify data flowing into and out of AI systems
- Implementation: Apply organisational data classification standards to all AI data flows
- NIST Mapping: MAP 3.1, MANAGE 1.1
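One way to apply an existing classification standard to AI data flows is a rule pass over the payload before it reaches the model. The labels and keyword rules below are placeholder assumptions; in practice you would reuse the organisation's existing classification engine rather than keyword matching:

```python
# Hypothetical classification rules, ordered most to least sensitive.
RULES = [
    ("restricted",   ["password", "secret", "api key"]),
    ("confidential", ["salary", "customer record"]),
]

def classify(text: str) -> str:
    """Return the first matching label, or a default for unmatched content."""
    lowered = text.lower()
    for label, keywords in RULES:
        if any(k in lowered for k in keywords):
            return label
    return "internal"
```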
### Vendor Security Criteria
- Description: Security evaluation standards for AI vendor selection
- Implementation: Define minimum security requirements for AI vendors including data handling, model transparency, and incident response
- NIST Mapping: GOVERN 5.1, MAP 5.1
## Tier 2

### Prompt Injection Defence
- Description: Input validation and sanitisation for all AI-facing interfaces
- Implementation: Deploy input filtering, context isolation, and injection detection mechanisms
- NIST Mapping: MANAGE 2.2, MANAGE 4.1
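A heuristic pattern screen is the simplest layer of the mechanisms above. The patterns here are illustrative only; as the control notes, real defences combine filtering with context isolation and model-based injection classifiers, since pattern lists alone are easy to evade:

```python
import re

# Example signatures of common injection phrasing (assumed, not exhaustive).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"system prompt", re.I),
]

def screen_input(user_text: str) -> bool:
    """Return True if the input looks safe, False if it should be blocked."""
    return not any(p.search(user_text) for p in INJECTION_PATTERNS)
```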
### Data Loss Prevention
- Description: Monitor and prevent sensitive data exfiltration through AI
- Implementation: Implement DLP controls on AI inputs and outputs, including PII detection and blocking
- NIST Mapping: MANAGE 2.2, MANAGE 3.1
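A sketch of the PII detection-and-blocking step, applied symmetrically to inputs and outputs. The two patterns (email address, US-style SSN) are placeholder examples; production DLP uses the organisation's full detector set:

```python
import re

# Illustrative PII detectors keyed by label.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

Whether to redact, block the whole message, or merely alert is a policy decision per classification level; redaction is shown here because it preserves the rest of the interaction.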
### Output Scanning
- Description: Real-time scanning of AI outputs for policy violations
- Implementation: Automated scanning of AI-generated content against organisational policies
- NIST Mapping: MEASURE 2.1, MANAGE 4.1
### Policy Verification
- Description: Continuous verification of AI systems against defined policies
- Implementation: Automated policy enforcement with alerting and reporting
- NIST Mapping: GOVERN 1.5, MEASURE 3.1
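The enforcement loop can be sketched as a set of named compliance checks run against each system's configuration; a non-empty result is what drives the alerting and reporting. The check names and config keys are assumptions for illustration:

```python
# Each check returns True when the system complies with the policy.
def check_logging_enabled(config: dict) -> bool:
    return bool(config.get("audit_logging", False))

def check_dlp_enabled(config: dict) -> bool:
    return bool(config.get("dlp", False))

CHECKS = {
    "audit-logging": check_logging_enabled,
    "dlp":           check_dlp_enabled,
}

def verify(config: dict) -> list[str]:
    """Return the names of failed checks; a non-empty list should alert."""
    return [name for name, check in CHECKS.items() if not check(config)]
```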
### Incident Response
- Description: AI-specific incident response procedures and escalation
- Implementation: Documented playbooks for AI-specific incidents including model compromise, data leakage, and adversarial attacks
- NIST Mapping: MANAGE 4.1, MANAGE 4.2
### Audit Logging
- Description: Comprehensive logging of all AI interactions for forensics
- Implementation: Immutable audit logs capturing all AI system interactions, access events, and configuration changes
- NIST Mapping: MEASURE 2.1, MANAGE 3.2
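One common way to make logs tamper-evident is a hash chain: each entry commits to the hash of the previous one, so any retroactive edit breaks verification from that point on. This sketch shows only the integrity-check idea; "immutable" in practice also requires append-only storage the writer cannot rewrite:

```python
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    """Append an event, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def chain_intact(log: list[dict]) -> bool:
    """Recompute every hash; any edited entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps({"event": entry["event"], "prev": prev},
                             sort_keys=True)
        if entry["prev"] != prev or \
           entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```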
## Tier 3

### AI Red Teaming
- Description: Adversarial testing of AI systems by internal or external teams
- Implementation: Regular red team engagements targeting AI-specific attack vectors
- NIST Mapping: MEASURE 2.2, MEASURE 3.2
### Model Monitoring
- Description: Drift detection and behavioural anomaly identification
- Implementation: Continuous monitoring for model drift, output degradation, and unexpected behavioural changes
- NIST Mapping: MEASURE 1.1, MEASURE 2.1
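A minimal drift check on a scalar output metric, assuming a recorded baseline window: flag when the recent mean moves more than a chosen number of baseline standard deviations. Real monitoring uses richer distributional statistics; this illustrates the alerting principle only:

```python
from statistics import mean, stdev

def drifted(baseline: list[float], recent: list[float],
            threshold: float = 3.0) -> bool:
    """True when the recent mean sits more than `threshold` baseline
    standard deviations away from the baseline mean."""
    base_mean, base_sd = mean(baseline), stdev(baseline)
    if base_sd == 0:
        return mean(recent) != base_mean
    return abs(mean(recent) - base_mean) > threshold * base_sd
```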
### Threat Intelligence
- Description: Feed AI-specific threat intelligence into the organisation's defensive posture
- Implementation: Subscribe to and operationalise AI-specific threat intelligence feeds
- NIST Mapping: MAP 5.2, MANAGE 4.1
### Feedback Loops
- Description: Continuous improvement cycles driven by security events and testing
- Implementation: Structured feedback mechanisms from incidents, testing, and monitoring into control refinement
- NIST Mapping: MEASURE 3.3, MANAGE 4.3
### Adaptive Controls
- Description: Evolve controls based on new attack vectors and research
- Implementation: Quarterly control review cycles incorporating new threats, research, and operational experience
- NIST Mapping: GOVERN 1.5, MANAGE 4.3
### Benchmarking
- Description: Compare maturity and controls against peer organisations
- Implementation: Participate in AISECA benchmarking programme to measure relative maturity
- NIST Mapping: GOVERN 6.1, MEASURE 3.3
To suggest new controls or refinements, open a pull request or issue. See CONTRIBUTING.md.
Released under CC BY 4.0.
AISECA -- AI Security Alliance | aiseca.org | GitHub