SOC 2 Doesn't Cover AI. Here's What Does.

Your company just passed its SOC 2 Type II audit. The report is clean. The controls are documented. The board is reassured. And none of it addresses the AI models running in your production environment.

This is the compliance gap that most organizations don't see until it's too late. SOC 2 — the Trust Services Criteria framework maintained by the AICPA — was designed to evaluate controls around information security, availability, processing integrity, confidentiality, and privacy. It was built for a world of deterministic software systems, structured data, and well-defined processing logic. It was not built for probabilistic models that learn from data, drift over time, and produce outputs that cannot be fully predicted by their operators.

That distinction matters enormously when a regulator, a plaintiff's attorney, or a board committee asks: "What governance framework covers our AI systems?" Pointing to your SOC 2 report is not an answer. It's an exposure.

Where SOC 2 Falls Short

SOC 2's Trust Services Criteria operate at the infrastructure and application layer. They address whether access controls are in place, whether data is encrypted in transit, whether system availability meets defined targets. These are necessary controls. They are also wholly insufficient for AI governance.

Here's what SOC 2 does not cover:

Model risk. SOC 2 has no criteria for evaluating whether an AI model is performing within acceptable parameters. It does not address model drift, where a model's accuracy degrades over time as the underlying data distribution changes. It does not require monitoring of model outputs for bias, hallucination, or factual accuracy. A company can have a perfectly clean SOC 2 report while running a credit-scoring model that has drifted into discriminatory territory. (A minimal sketch of this kind of drift check follows this list.)

Training data governance. The provenance, quality, and representativeness of training data are foundational to AI risk. SOC 2's data controls focus on confidentiality and access — who can see the data, whether it's encrypted, whether it's retained per policy. They do not address whether the data used to train a model was representative of the population the model serves, whether it was lawfully obtained, or whether it contains embedded biases that propagate into model outputs.

Explainability and transparency. Regulators and courts increasingly expect organizations to explain how AI systems arrive at decisions that affect individuals. The EU AI Act mandates transparency for high-risk systems. The CFPB requires adverse action notices that explain automated credit decisions. SOC 2 has no criteria for model interpretability or explanation capabilities. You can be SOC 2 compliant and completely unable to explain why your model denied a loan application.

Human oversight design. SOC 2 does not evaluate whether human review processes are meaningfully designed into AI workflows. It does not ask whether a human reviewer has the information, training, and authority to override an AI recommendation. It does not assess whether escalation paths exist when models produce anomalous outputs.

AI-specific incident response. When an AI model produces harmful outputs at scale — biased hiring recommendations, hallucinated medical advice, incorrect financial calculations — the incident response requirements differ fundamentally from those for a data breach. SOC 2's incident response criteria were designed for security events, not model failures.
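To make the model-risk gap concrete, here is a minimal sketch of the kind of drift check SOC 2 never asks for: a Population Stability Index (PSI) comparison between a feature's training-time distribution and recent production traffic. The feature, the synthetic data, and the alert thresholds are illustrative assumptions, not requirements of any framework.

```python
# A minimal drift-monitoring sketch. The PSI thresholds below are a
# common rule of thumb, not a mandate from SOC 2, NIST, or ISO.
import numpy as np

def population_stability_index(baseline: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a training-time sample and a recent production sample."""
    # Bin edges come from the baseline so both samples share one grid.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf   # capture out-of-range live values
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)   # avoid log(0) on empty bins
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Illustrative data: a feature whose production distribution has shifted.
rng = np.random.default_rng(seed=0)
training_sample = rng.normal(0.0, 1.0, size=10_000)     # distribution at training time
production_sample = rng.normal(0.4, 1.2, size=10_000)   # shifted live traffic

psi = population_stability_index(training_sample, production_sample)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant.
if psi > 0.25:
    print(f"ALERT: significant drift (PSI = {psi:.3f}); escalate for model review")
```

In practice a check like this runs on a schedule for every monitored feature and model score, and its alerts feed the human escalation paths discussed above. Nothing in the Trust Services Criteria requires any of it.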

What NIST AI RMF Covers

The NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023, is the most comprehensive U.S. government framework for managing AI risk. Unlike SOC 2, it was designed specifically for AI systems and their unique characteristics.

The framework is organized around four core functions: Govern, Map, Measure, and Manage.

Govern establishes the organizational structures and policies for AI risk management. This includes defining roles and responsibilities, establishing risk tolerances, creating accountability mechanisms, and ensuring that AI governance is integrated into enterprise risk management. This is the organizational layer that SOC 2 never touches for AI.

Map focuses on understanding the context in which AI systems operate. It requires organizations to identify the intended purposes and potential misuses of AI systems, understand the populations affected, assess the regulatory landscape, and document the limitations of AI systems. This contextual analysis is critical for AI risk and entirely absent from SOC 2.

Measure addresses the quantitative and qualitative assessment of AI risks. This includes evaluating model performance across different populations, testing for bias and fairness, assessing reliability and robustness, and monitoring for drift. These are the model-level evaluations that no infrastructure-focused framework can provide. (A short sketch of one such group-level check follows this overview.)

Manage covers the ongoing treatment of identified risks, including prioritization, mitigation strategies, and continuous monitoring. It explicitly addresses the need for human oversight, escalation procedures, and mechanisms for affected individuals to seek recourse.
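To illustrate what Measure looks like in practice, here is a minimal sketch of one group-level evaluation: comparing a model's selection rates across demographic groups and flagging disparity against the EEOC four-fifths rule of thumb. The groups, data, and threshold are illustrative assumptions; real programs evaluate many metrics per group.

```python
# A minimal sketch of a group-level fairness check. The 80% threshold
# is the EEOC four-fifths rule of thumb, used here as an illustration.
import numpy as np

def selection_rates(decisions: np.ndarray, groups: np.ndarray) -> dict:
    """Share of positive model decisions per demographic group."""
    return {g: float(decisions[groups == g].mean()) for g in np.unique(groups)}

def disparate_impact_ratio(rates: dict) -> float:
    """Lowest group selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

# Illustrative synthetic decisions for two groups, A and B.
rng = np.random.default_rng(seed=1)
groups = rng.choice(["A", "B"], size=5_000)
decisions = (rng.random(5_000) < np.where(groups == "A", 0.30, 0.22)).astype(int)

rates = selection_rates(decisions, groups)
ratio = disparate_impact_ratio(rates)
print(rates, f"ratio = {ratio:.2f}")
if ratio < 0.8:   # four-fifths rule of thumb
    print("FLAG: selection-rate disparity exceeds the four-fifths threshold")
```

The same pattern extends to error rates, false-positive rates, and calibration, each computed per group rather than in aggregate.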

The AI RMF is voluntary, not a compliance mandate. But it is rapidly becoming the de facto standard that regulators, auditors, and courts reference when evaluating whether an organization's AI governance is reasonable. When NIST speaks, standards of care follow.

How ISO 42001 Fills the Gap

ISO/IEC 42001:2023 is the first international standard for AI management systems. Published in December 2023, it provides a certifiable framework for establishing, implementing, maintaining, and continually improving an AI management system within an organization.

Where NIST AI RMF provides guidance, ISO 42001 provides structure. It follows the familiar ISO management system model (shared with ISO 27001 for information security and ISO 9001 for quality), making it straightforward to integrate with existing compliance programs. For organizations already ISO 27001 certified, the path to ISO 42001 is architecturally familiar.

Key elements of ISO 42001 that address the SOC 2 gap:

AI policy and objectives. The standard requires a documented AI policy that addresses the responsible development and use of AI, along with measurable objectives. This creates the governance layer that SOC 2 lacks for AI.

AI risk assessment. ISO 42001 mandates a systematic approach to identifying and evaluating AI-specific risks, including risks to individuals and groups affected by AI systems, risks arising from the AI system lifecycle (development, deployment, operation, decommissioning), and risks specific to the data used in AI systems.

AI system impact assessment. The standard requires organizations to assess the potential impacts of AI systems on individuals, groups, and society. This goes well beyond SOC 2's focus on data subjects and into the broader consequences of automated decision-making.

Controls for AI lifecycle. ISO 42001 Annex B provides an extensive set of controls covering data quality, model development, testing and validation, deployment, monitoring, and retirement. These are purpose-built for AI and have no equivalent in the SOC 2 Trust Services Criteria.

The Practical Path for Compliance Officers

If you are responsible for compliance and your current framework is SOC 2 alone, here is the sequence of actions that closes the AI governance gap without requiring you to discard your existing program:

Step 1: Acknowledge the gap formally. Brief your risk committee or board on the fact that SOC 2 does not cover AI model governance. This is not a criticism of SOC 2; it's a factual statement about scope. Document the presentation and the board's acknowledgment. This is now part of your governance record.

Step 2: Inventory your AI systems. You cannot assess risk for systems you haven't identified. Catalog every AI and ML system in production, including third-party AI embedded in vendor platforms. For each system, document: what it does, what data it uses, who it affects, who owns it, and what controls currently exist. This inventory is the foundation for every AI governance framework. (A minimal schema sketch follows these steps.)

Step 3: Map to NIST AI RMF. Use the NIST AI RMF as your risk assessment framework. For each AI system in your inventory, walk through the Map and Measure functions. Identify the risks. Assess the current state of controls. Document the gaps. This gives you a structured, defensible risk assessment that regulators and courts will recognize.

Step 4: Implement ISO 42001 controls incrementally. You do not need to pursue ISO 42001 certification immediately. But the controls in Annex B provide a practical checklist for AI governance. Prioritize controls based on the risk assessment from Step 3. Start with the AI systems that are highest-risk: those affecting individuals, those in regulated domains, those with the largest blast radius if they fail. (The sketch after these steps shows one crude way to score that prioritization.)

Step 5: Integrate into your existing compliance rhythm. AI governance should not be a separate, parallel compliance program. It should be integrated into your existing risk management cadence. Add AI risk to your risk committee's charter. Include AI system reviews in your quarterly compliance assessments. Make AI governance part of your vendor risk management program.
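To ground Steps 2 and 4, here is a minimal sketch of what an inventory record and a risk-based triage might look like, assuming a simple in-house registry. Every field name, risk factor, and weight is an illustrative assumption to calibrate against your own risk appetite; a spreadsheet serves the same purpose.

```python
# A minimal sketch of an AI system inventory (Step 2) and a crude
# risk-based triage for control work (Step 4). All names, fields,
# and weights are illustrative assumptions, not framework mandates.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    purpose: str                  # what it does
    data_sources: list[str]       # what data it uses
    affected_parties: list[str]   # who it affects
    owner: str                    # who owns it
    third_party: bool             # embedded in a vendor platform?
    existing_controls: list[str] = field(default_factory=list)
    affects_individuals: bool = False
    regulated_domain: bool = False
    blast_radius: int = 1         # 1 internal tool .. 3 customer-facing at scale

def risk_score(record: AISystemRecord) -> int:
    """Additive triage score: higher means remediate first."""
    return ((3 if record.affects_individuals else 0)
            + (3 if record.regulated_domain else 0)
            + record.blast_radius)

inventory = [
    AISystemRecord(
        name="credit-scoring-v3",
        purpose="Scores consumer loan applications",
        data_sources=["bureau data", "application form"],
        affected_parties=["loan applicants"],
        owner="Lending Analytics",
        third_party=False,
        existing_controls=["access control", "encryption at rest"],
        affects_individuals=True,
        regulated_domain=True,
        blast_radius=3,
    ),
    AISystemRecord(
        name="internal-doc-search",
        purpose="Semantic search over internal documentation",
        data_sources=["internal wiki"],
        affected_parties=["employees"],
        owner="IT Platform",
        third_party=True,
    ),
]

# Work the highest-risk systems first.
for rec in sorted(inventory, key=risk_score, reverse=True):
    print(f"{rec.name}: priority score {risk_score(rec)}")
```

The tooling matters less than the discipline: every production system gets a record, an owner, and a score that sets the order of Annex B control work.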

The Board Conversation

For directors reading this: the question to ask your compliance team is not "are we SOC 2 compliant?" It's "what framework governs our AI systems?" If the answer is SOC 2, or if the answer is silence, you have a gap that is both a governance liability and an opportunity to lead.

The organizations that move first to adopt AI-specific governance frameworks — NIST AI RMF, ISO 42001, and the emerging sector-specific requirements — will not only reduce their legal exposure. They will build the institutional muscle for AI governance that becomes a competitive advantage as regulation tightens.

SOC 2 was the right answer for the right era. For AI, the era has changed. The frameworks have arrived. The question is whether your organization will adopt them proactively or be forced to adopt them reactively, in the aftermath of an incident, under the scrutiny of regulators and plaintiffs' counsel.

Ritesh Vajariya

Founder, NEUBoard | CEO, AI Guru

LinkedIn →

Want to assess your board's AI governance readiness?

Schedule a Confidential Scorecard Briefing