Perspective #060 Governance & Leadership

The board said yes. To what, exactly?

"The board approved the AI budget. It has not yet designed the AI governance."

62%
Of directors now set aside agenda time for AI discussions.
(NACD 2025 Public Company Board Practices Survey, 201 respondents, July 2025)
12%
Of Fortune 100 boards disclosed AI governance training in 2025 proxy statements.
(EY Center for Board Matters, 2025)

The Fusion Equation

Performance × Responsibility = Value
Performance: Board Oversight · Responsibility: AI Accountability
"Today's boardroom is at a strategic inflection point. The rapid convergence of cyber risk, AI disruption, and economic volatility demands a new level of board fluency and foresight." — Peter Gleason, President & CEO, NACD, Board Practices Survey, July 2025

The core tension

Awareness of AI is not governance of AI. The board that has discussed it has not necessarily designed accountability for it.

The governance architecture is not a compliance exercise. It is the structure that makes AI deployment defensible to regulators, investors, and the courts simultaneously.

The analytical depth

NACD (2025): 62% of directors now set aside agenda time for AI discussions. Yet only 14% discuss AI at every meeting, and 45% have yet to add AI to their agenda. Awareness and oversight are not the same thing.

EY (2025): only 12% of Fortune 100 boards disclosed AI governance training in proxy statements. Deloitte (2026): only 1 in 5 companies has a mature governance model for autonomous AI agents.

The EU AI Act applies from 2 August 2026. High-risk AI systems require board-level accountability, human oversight documentation, and explainability standards.

The board that cannot name the three highest-risk AI systems in its organization, describe their decision scope, and identify the accountable officer is not yet governing AI.

ING / Gartner
AI Governance Architecture · Board-Level Oversight · 40% Project Cancellation Risk
40%
Agentic AI projects that will be canceled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls. The leading cause of cancellation is not model failure. It is governance failure: unclear ownership, undefined escalation paths, and absent accountability structures discovered only after deployment. (Gartner Press Release, June 25, 2025)
Board AI governance is not a committee charter and a quarterly briefing. It requires four structural elements: (1) a taxonomy of AI systems by risk level, with named owners for each; (2) a decision scope definition for each deployed system, distinguishing what the system decides autonomously from what requires human approval; (3) an escalation protocol, tested and role-assigned, that fires before a crisis; (4) an explainability standard for high-risk decisions, meeting both internal accountability and external regulatory requirements. The EU AI Act encodes these four elements as legal obligations from 2 August 2026. The board that has not designed this architecture is not compliant.
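The four structural elements above can be made concrete as a board-level risk register. The sketch below is purely illustrative: the class names, risk tiers, and example systems are assumptions, not ING's or Gartner's actual model, and the EU AI Act does not prescribe any particular data structure. What it shows is the discipline the text describes: every system carries a named owner, an explicit split between autonomous and human-approved decisions, and an escalation contact, with high-risk entries rejected outright if ownership is missing.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    """Simplified risk taxonomy (illustrative, not the AI Act's legal categories)."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class AISystem:
    """One entry in the board-level AI risk register."""
    name: str
    risk_tier: RiskTier
    accountable_owner: str        # a named officer, not a committee
    autonomous_scope: list        # decisions the system may take alone
    human_approval_scope: list    # decisions requiring human sign-off
    escalation_contact: str       # role that fires before a crisis

@dataclass
class RiskRegister:
    systems: list = field(default_factory=list)

    def register(self, system: AISystem) -> None:
        # A high-risk system without a named owner is a governance gap,
        # not a data-entry problem: reject it at registration time.
        if system.risk_tier is RiskTier.HIGH and not system.accountable_owner:
            raise ValueError(
                f"High-risk system '{system.name}' has no accountable owner"
            )
        self.systems.append(system)

    def top_risks(self, n: int = 3) -> list:
        """The highest-risk systems the board should be able to name."""
        order = {RiskTier.HIGH: 0, RiskTier.LIMITED: 1, RiskTier.MINIMAL: 2}
        return sorted(self.systems, key=lambda s: order[s.risk_tier])[:n]

# Hypothetical example entry, for illustration only:
register = RiskRegister()
register.register(AISystem(
    name="credit-scoring-v2",
    risk_tier=RiskTier.HIGH,
    accountable_owner="CDAO",
    autonomous_scope=["pre-screening"],
    human_approval_scope=["final credit decision"],
    escalation_contact="Chief Risk Officer",
))
```

The design choice worth noting is that the ownership check lives at registration, not in a quarterly review: the governance question is answered before deployment, which is exactly the inversion the paragraph above argues for.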
"ING provides one of the clearest current models of board-level AI governance. The bank publishes a board-level AI risk report that covers specific systems, specific incidents, and specific mitigation actions. Not AI as a category of risk. Specific deployed systems, named and governed. The CDAO maintains direct accountability to the board. This is the architecture the EU AI Act will require from August 2026. ING has built it. Most European financial institutions are still building awareness."
Performance
Board Oversight
The board that governs AI well moves faster than the board that governs it poorly. Governance is not a drag on AI deployment velocity; it is the architecture that makes deployment defensible. Boards with clear AI oversight frameworks can approve deployments faster than their peers, because they have already answered the questions that otherwise emerge as crises post-launch: who is accountable, what is the decision scope, what is the escalation path, and how is the decision explained. Governance is not compliance overhead. It is the operational infrastructure for responsible AI deployment at scale.
Responsibility
AI Accountability
Every legal framework governing AI was designed for organizations, not algorithms. When an AI system makes a consequential decision in a high-risk category, the EU AI Act, the AI Liability Directive, and common law tort all land on the same answer: the deploying organization is accountable. The board that cannot answer the governance questions on deployment day will answer them on liability day. The accountability architecture is not optional. It is the condition under which the AI strategy is legally and commercially defensible. The governance investment is the performance investment. There is no trade-off to be made.

Download the full case

PDF · 7 slides · Free access

Let's discuss this
Unresolved tensions
Which board committee owns AI governance and how does it avoid overloading audit?
Can boards maintain meaningful human oversight of agentic AI at operational speed?
How does AI governance become a valuation signal rather than a compliance cost?
By Fabrice Macarty

Does this case resonate?

The board that cannot name the risk cannot govern it. The EU AI Act applies from 2 August 2026. Build the architecture before it builds itself.

Start the conversation