Perspective #064 Technology & Governance

When the Agent Decides: Who Answers?

"Delegation removes the human from the workflow. It does not remove the organisation from the liability."

15%
Of enterprise decisions will be made autonomously by 2028, up from 0% in 2024.
(Gartner, June 2025)
1 in 5
Companies has a mature governance model for autonomous AI agents.
(Deloitte, 2026 State of AI in Enterprise, 3,235 leaders, 24 countries)

The Fusion Equation

Performance × Responsibility = Value
Performance
Decision Velocity
Responsibility
Accountability Architecture
"If you have machine-to-machine agents able to make judgments and take actions, you have to permission them and make sure you have the right guardrails around what they can and can't do, and how many autonomous decisions they can make." — Teresa Heitsenrether, CDAO, JPMorgan Chase, American Banker, October 2025
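The permissioning Heitsenrether describes can be made concrete. A minimal sketch, assuming a hypothetical guardrail object (the class name, action names, and quota field are illustrative, not JPMorgan's actual framework): each agent carries an explicit set of permitted actions and a cap on autonomous decisions before escalation.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPermissions:
    """Hypothetical guardrail config: which actions an agent may take,
    and how many autonomous decisions it may make before a human must step in."""
    allowed_actions: set
    max_autonomous_decisions: int
    decisions_made: int = 0

    def authorize(self, action: str) -> bool:
        if action not in self.allowed_actions:
            return False  # outside the agent's permissioned scope
        if self.decisions_made >= self.max_autonomous_decisions:
            return False  # quota exhausted: escalate to a human
        self.decisions_made += 1
        return True

perms = AgentPermissions(allowed_actions={"flag_fraud", "reorder_stock"},
                         max_autonomous_decisions=100)
print(perms.authorize("flag_fraud"))     # True: within scope and quota
print(perms.authorize("execute_trade"))  # False: never permissioned
```

The point of the sketch is that both dimensions of the quote (what the agent can do, and how many decisions it can make) are enforced in code before any action runs, not audited after the fact.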

The core tension

You cannot delegate the decision and retain only the gain. You retain the liability too.

The governance architecture is not a constraint on the performance model. It is the legal and commercial condition under which the performance model is defensible.

The analytical depth

The EU AI Act, the AI Liability Directive, and common law tort confirm one principle: the operator answers. Delegating the decision does not transfer the liability.

Gartner (June 2025): at least 15% of day-to-day enterprise decisions will be made autonomously by 2028, up from 0% in 2024. The transition is not gradual. It is structural.

Deloitte (2026): only 1 in 5 companies has a mature governance model for autonomous AI agents. Survey of 3,235 senior leaders across 24 countries. The oversight gap is the largest unpriced risk in enterprise AI deployment.

The accountability gap arrives when the human-in-the-loop becomes a rubber stamp, approving agent decisions without real review. The structure exists on paper. The accountability does not.

JPMorgan Chase
AI Decision Governance · 300+ Use Cases · $1.5B+ Business Value
450+
AI and ML use cases in production at JPMorgan Chase by 2025, targeting 1,000 by 2026. Ranked #1 in AI adoption among large global banks by Evident for three consecutive years. Each use case governed through a firmwide accountability framework with defined decision boundaries, explainability standards, and escalation protocols embedded before deployment. (Emerj / JPM 2025; Evident AI Index 2025; Heitsenrether, American Banker, Oct 2025)
In 2024, Air Canada's customer service chatbot promised a passenger a bereavement fare refund that Air Canada's own policy did not permit. Air Canada argued that the chatbot was a separate legal entity and that the airline was not responsible for its representations. The British Columbia Civil Resolution Tribunal rejected this argument entirely: Air Canada was responsible for all information provided by its chatbot. The ruling established the precedent that deploying an agent does not transfer liability away from the operator; it amplifies it. Operator liability for an agent's decisions is now legally precedented, and any AI deployment made without a governance architecture carries that liability unmanaged.
"JPMorgan Chase's model is the clearest example of governed AI at scale. The CDAO sits on the Operating Committee and reports directly to Jamie Dimon. Governance is not an IT function. It is a board-level accountability structure. Decision boundaries are defined per use case. Explainability is built in. The agent was permissioned to flag, not to act: a distinction most organisations deploying agentic AI have not yet made."
Performance
Decision Velocity
The agent decides faster, more consistently, and at lower unit cost than any human. It does not experience decision fatigue. It does not hedge to protect its career. For high-volume, bounded decisions (credit thresholds, inventory reorders, fraud flags, pricing adjustments), the performance case is structurally unassailable. The question is not whether to deploy it. It is where, at what autonomy level, and under what oversight.
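The "where, at what autonomy level, and under what oversight" question can be sketched as a routing policy. This is an illustrative two-tier model, not a framework from the source: the decision types, the `POLICY` table, and the `route` function are hypothetical, and the tiers mirror the flag-versus-act distinction described in the JPMorgan case.

```python
from enum import Enum

class Autonomy(Enum):
    FLAG = "flag"  # agent may only recommend; a human acts
    ACT = "act"    # agent may execute within its defined bounds

# Hypothetical policy table: an autonomy level assigned per decision type,
# set before deployment rather than inferred at runtime.
POLICY = {
    "inventory_reorder": Autonomy.ACT,
    "fraud_flag": Autonomy.ACT,
    "loan_approval": Autonomy.FLAG,  # consequential: a human decides
    "claim_denial": Autonomy.FLAG,
}

def route(decision_type: str, recommendation: str) -> str:
    """Execute bounded low-risk decisions; escalate everything else."""
    level = POLICY.get(decision_type, Autonomy.FLAG)  # unknown types default to human review
    if level is Autonomy.ACT:
        return f"executed: {recommendation}"
    return f"flagged for human review: {recommendation}"

print(route("inventory_reorder", "reorder 500 units"))
print(route("loan_approval", "approve at 6.2%"))
```

The design choice worth noting is the default: a decision type absent from the policy table is escalated, not executed, so the performance model can only run where the accountability architecture has explicitly cleared it.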
Responsibility
Accountability Architecture
Every legal framework governing corporate accountability was designed for human decision-makers. Directors are liable. Executives are accountable. Contracts require informed parties. When an AI agent makes a consequential decision (denying a claim, executing a trade, flagging a candidate, approving a loan), the law looks for a human to hold responsible. The EU AI Liability Directive, the AI Act's high-risk provisions, and common law tort all land on the same conclusion: the operator is accountable. The governance architecture is not optional. It is the condition under which the strategy is defensible.

Download the full case

PDF · 7 slides · Free access

Let's discuss this
Unresolved tensions
Can explainability requirements be met for agentic AI?
Who is accountable when a pipeline of agents decides collectively?
Does meaningful human oversight survive the performance pressure to remove it?
By Fabrice Macarty

Does this case resonate?

Delegation is the strategy. Governance architecture is the condition under which the strategy survives.

Start the conversation
Access the Full Case
Please provide your details below. We will instantly email you a secure link to download the complete study.