The 5 AI Governance Questions Every Board Should Be Asking in 2026
Most boards have discussed AI. Few have governed it. The difference between the two is not enthusiasm or awareness — it's specificity. A board that has "discussed AI" might have heard a presentation from the CTO, nodded along to a strategy deck, or added "artificial intelligence" to a risk register. A board that governs AI can answer concrete questions about what is deployed, who is accountable, and what evidence exists that oversight is happening.
In 2026, with the EU AI Act in enforcement, SEC disclosure expectations tightening, and Caremark-style oversight liability extending to technology risk, the gap between discussion and governance is legal exposure. Here are the five questions that close it.
Question 1: What AI Is Actually Deployed Across the Organization?
This is the foundational question, and the one most boards cannot answer. Not because the information doesn't exist, but because no one has been asked to compile it.
AI is no longer a discrete initiative run by a data science team. It is embedded everywhere. Your CRM uses AI for lead scoring. Your HR platform uses AI for resume screening. Your customer service runs on AI chatbots. Your developers use AI code assistants. Your finance team uses AI for forecasting. Your legal department may be using AI for contract review. Each of these systems carries its own risk profile — bias, accuracy, data privacy, vendor dependency — and most of them were adopted without board awareness.
The question is not theoretical. Under the EU AI Act, organizations deploying high-risk AI systems must maintain documentation and comply with specific governance obligations. Under U.S. securities law, material AI risks require disclosure. You cannot disclose risks you haven't identified, and you cannot identify risks in systems you don't know about.
What a good answer looks like: A documented inventory of all AI systems in production and in development, including third-party AI embedded in vendor platforms. For each system: what it does, what data it processes, what decisions it influences, what population it affects, who owns it operationally, and when it was last reviewed. This inventory should be updated quarterly and presented to the risk committee.
What a bad answer looks like: "Our CTO is on top of it." "We have a data science team." "We're still figuring that out."
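What does such an inventory entry look like in practice? The sketch below, in Python, is one illustrative shape: the field names and the 90-day review window are assumptions, not a standard, and a real program would adapt them to its own risk taxonomy.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in the enterprise AI inventory (illustrative schema)."""
    name: str                   # e.g. "customer service chatbot"
    purpose: str                # what the system does
    data_processed: str         # what data it consumes
    decisions_influenced: str   # what decisions it shapes
    affected_population: str    # who it affects: customers, applicants, staff
    operational_owner: str      # a named individual, not a team
    third_party: bool           # embedded in a vendor platform?
    last_reviewed: date | None = None

def overdue(record: AISystemRecord, today: date, max_age_days: int = 90) -> bool:
    """Flag entries that have missed the assumed quarterly review cadence."""
    return record.last_reviewed is None or (today - record.last_reviewed).days > max_age_days

record = AISystemRecord("Customer service chatbot", "answers billing questions",
                        "customer account data", "refund eligibility guidance",
                        "retail customers", "J. Doe, VP Service Ops", third_party=True)
print(overdue(record, date.today()))  # True: never reviewed
```

Even a record this simple forces the questions that matter: a named owner instead of a team, a review date instead of a vague assurance.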
Question 2: Who Owns the Risk?
In too many organizations, AI risk ownership is distributed to the point of dissolution. The engineering team builds the models. The business unit deploys them. The legal team reviews contracts. The compliance team monitors regulations. The risk team maintains the risk register. No single person or committee has clear, documented accountability for AI risk at the enterprise level.
This is not a coordination problem. It's a governance failure. When an AI system produces biased outputs, causes a regulatory violation, or creates a material business disruption, the first question in any investigation or litigation will be: who was responsible for overseeing this? If the answer requires assembling a committee to determine who should have been responsible, the governance structure has already failed the Caremark test.
What a good answer looks like: A named executive (Chief Risk Officer, Chief AI Officer, or equivalent) with documented responsibility for AI risk management across the enterprise. A board committee (risk, audit, or technology) with AI oversight explicitly in its charter. A defined reporting cadence — quarterly at minimum — from the executive to the committee. Written terms of reference that specify escalation triggers: when does an AI issue become a board-level issue?
What a bad answer looks like: "It's a cross-functional responsibility." "Our CISO handles it as part of technology risk." "We haven't formalized that yet."
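Escalation triggers in particular work best when they are written down as data rather than left as prose in a charter, because data can be versioned, reviewed, and audited. The sketch below is hypothetical; the triggers, routing, and deadlines are placeholders a real policy would replace.

```python
# Hypothetical escalation policy expressed as reviewable data.
# Levels and deadlines are illustrative placeholders, not advice.
ESCALATION_TRIGGERS = [
    {"trigger": "performance or fairness metric breaches approved threshold",
     "escalate_to": "AI risk owner (named executive)", "within_hours": 24},
    {"trigger": "AI output affects a regulated decision or protected class",
     "escalate_to": "board risk committee", "within_hours": 72},
    {"trigger": "regulator inquiry, reportable incident, or material disruption",
     "escalate_to": "full board", "within_hours": 24},
]

for rule in ESCALATION_TRIGGERS:
    print(f'{rule["trigger"]} -> {rule["escalate_to"]} within {rule["within_hours"]}h')
```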
Question 3: What Vendor AI Are We Exposed To?
Most enterprise AI exposure does not come from models built in-house. It comes from vendors. The AI in your Salesforce instance. The AI in your Workday platform. The AI in your AWS services. The AI in your Microsoft 365 suite. These systems make decisions that affect your customers, your employees, and your compliance posture — and your organization bears the risk.
This is the oversight gap that regulators are increasingly focused on. The EU AI Act explicitly holds "deployers" responsible for high-risk AI systems, even when those systems are built by third-party providers. The CFPB has made clear that financial institutions cannot outsource compliance obligations to vendors. If your vendor's AI model discriminates, your organization faces the enforcement action.
Vendor AI governance requires more than checking a box during procurement. It requires understanding what AI capabilities are embedded in vendor platforms, how those capabilities affect your operations, what controls the vendor has in place, and what your contractual rights are regarding model changes, data usage, and incident notification.
What a good answer looks like: An extension of the AI inventory that includes all material vendor AI systems. Vendor contracts that address AI-specific provisions: model change notification, data usage limitations, audit rights, incident response requirements, and liability allocation for AI failures. A vendor AI risk assessment process integrated into procurement and periodic vendor reviews.
What a bad answer looks like: "We trust our vendors." "That's covered in our standard vendor risk assessment." "We didn't know our vendors were using AI."
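The contractual provisions listed above can be tracked the same way as the inventory itself. A minimal sketch, with an illustrative checklist that counsel would adapt to actual contract language:

```python
from dataclasses import dataclass

@dataclass
class VendorAIProvisions:
    """Contract terms to verify per vendor AI system (illustrative checklist)."""
    model_change_notification: bool   # advance notice of material model changes
    data_usage_limits: bool           # our data not used for vendor training without consent
    audit_rights: bool                # audit access or third-party assurance reports
    incident_notification: bool       # defined AI incident notification window
    liability_allocation: bool        # responsibility for AI failures is assigned

    def gaps(self) -> list[str]:
        """List the provisions still missing from the signed contract."""
        return [term for term, present in vars(self).items() if not present]

contract = VendorAIProvisions(True, True, False, True, False)
print(contract.gaps())  # ['audit_rights', 'liability_allocation']
```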
Question 4: How Would We Know If Something Went Wrong?
AI failures are not like server outages. They can be subtle, persistent, and invisible to standard monitoring. A model that gradually drifts toward biased outputs doesn't trigger an alert. A chatbot that provides incorrect medical or financial information doesn't crash. A hiring algorithm that systematically disadvantages a protected class doesn't generate an error log.
This means traditional IT monitoring is insufficient. Organizations need AI-specific monitoring that evaluates model performance, output quality, and fairness metrics on an ongoing basis. They need escalation paths that route AI issues to people with the authority and technical understanding to act. And they need incident response plans that account for the unique characteristics of AI failures — which may require model retraining, output review, and affected-party notification rather than a server restart.
What a good answer looks like: Documented monitoring protocols for each material AI system, including performance thresholds, drift detection, and fairness metrics. A defined escalation path from the operational team to the risk owner to the board committee, with specific triggers for each level. An AI-specific incident response plan that has been tabletop-tested. A process for notifying affected parties when AI systems produce harmful outputs.
What a bad answer looks like: "Our IT team monitors everything." "We haven't had any incidents." "We'd figure it out."
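To make "drift detection and fairness metrics" concrete, the sketch below uses the population stability index for drift and the demographic parity difference for fairness. Both are common choices, but the metrics, the sample data, and the 0.2 and 0.1 thresholds are illustrative assumptions, not recommendations.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned score distributions (each list sums to 1).
    A common rule of thumb treats PSI > 0.2 as significant drift."""
    eps = 1e-6  # guard against empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

def demographic_parity_difference(rates: dict[str, float]) -> float:
    """Gap between the highest and lowest positive-outcome rate across groups."""
    return max(rates.values()) - min(rates.values())

# Illustrative data: score distribution at approval time vs. this quarter,
# and positive-outcome rates by group. All values are placeholders.
baseline = [0.25, 0.25, 0.25, 0.25]
current  = [0.40, 0.30, 0.20, 0.10]
selection_rates = {"group_a": 0.31, "group_b": 0.18}

if population_stability_index(baseline, current) > 0.2:
    print("Drift threshold breached: escalate to risk owner")
if demographic_parity_difference(selection_rates) > 0.1:
    print("Fairness threshold breached: escalate to risk owner")
```

The point for the board is not the arithmetic; it is that each material system has named metrics, approved thresholds, and a defined route when a threshold is breached.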
Question 5: Can We Demonstrate Governance Under Scrutiny?
This is the question that ties everything together, and it is the one that matters most in litigation, regulatory examination, and insurance underwriting. The question is not whether the board has good intentions regarding AI oversight. It's whether the board can prove, under adversarial scrutiny, that it was exercising oversight.
Proof means documentation. Board minutes that reflect substantive discussion of AI risks, not just a line item that says "AI was discussed." Committee reports that present AI risk metrics, not just strategy updates. Written risk assessments with findings and action items. Audit trails that show when AI systems were reviewed and by whom. Evidence that management reported to the board and that the board asked questions, set expectations, and followed up.
In a Caremark claim, the court evaluates whether the board implemented a reasonable reporting system and whether the board actually monitored the information that system produced. Both elements require documentation. A board that governed AI wisely but left no paper trail is, for legal purposes, a board that didn't govern AI at all.
What a good answer looks like: A governance record that includes quarterly AI risk reports to the board committee, documented risk assessments for each material AI system, written policies on AI development and deployment, meeting minutes that reflect substantive AI oversight discussions, and an audit trail of decisions made and actions taken.
What a bad answer looks like: "We've had lots of conversations about this." "Our board is very engaged on AI." "We're working on formalizing our approach."
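One way to keep that audit trail usable under scrutiny is to store each oversight action as a structured, append-only record rather than scattering it across documents. A minimal sketch with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # frozen: entries are appended, never edited
class OversightLogEntry:
    """One board-level AI oversight action (illustrative record)."""
    entry_date: date
    body: str           # e.g. "risk committee"
    system: str         # which AI system was discussed
    action: str         # question asked, expectation set, follow-up requested
    evidence: str       # pointer to minutes, report, or risk assessment
    follow_up_due: date | None = None
```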
Moving from Discussion to Governance
These five questions are not a framework unto themselves. They are a diagnostic. If your board can answer all five with specificity and documentation, you have governance. If your board cannot, you have discussion, and in a regulatory and litigation environment that is accelerating, discussion is not a defense.
The path from discussion to governance does not require a massive initiative. It requires four concrete actions: build the AI inventory, assign ownership, establish a reporting cadence, and document everything. These can be accomplished in weeks, not months. The boards that act now are building the evidentiary foundation that will separate them from the boards that will, eventually, be asked to explain why they weren't paying attention.
The time for general AI awareness at the board level has passed. What's needed now is specific, documented, defensible governance. These five questions tell you whether you have it.
Want to assess your board's AI governance readiness?
Schedule a Confidential Scorecard Briefing