THE DECISION STAKES
The AI did not sign it. You did.
"It is no answer to say that the citation came from an AI tool. Counsel bears personal responsibility for every authority placed before this court."
— Justice Saini, UK High Court, Ayinde v Haringey & Al-Haroun v Qatar National Bank, April 2025
AI hallucination liability is not a technology risk. It is a process risk. The four controls below are not technical. They are operational.
Every AI output that reaches a client, a regulator, a court, or a board carries your signature. The verification posture you build now determines whether an error becomes a correction or a sanction.
Four moves. One decision you can defend.
01
VERIFY
Implement mandatory human verification for any AI output that reaches an external party or informs a material decision. Define verification as a process step, not a recommendation. Log who verified, what they checked, and when.
Courts and regulators do not accept 'I did not check.' Verification is not a quality step. It is the liability transfer mechanism from the model to the professional.
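What that log can look like in practice: a minimal sketch of a verification record, in Python, assuming a simple in-house log. The field names and values are illustrative, not a standard schema.

    # Illustrative only: a minimal verification record for one AI output.
    # Field names and values are assumptions, not a prescribed schema.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class VerificationRecord:
        output_id: str               # which AI output was checked
        verifier: str                # who verified it
        checks_performed: list[str]  # what they checked
        verified_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    record = VerificationRecord(
        output_id="brief-2025-014",
        verifier="a.counsel@example.com",
        checks_performed=[
            "every cited case located in the official reporter",
            "quotations compared against source text",
        ],
    )

Who, what, when: the three fields a court or regulator will ask for.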
02
SCOPE
Define in writing which tasks AI can perform autonomously, which require human review before use, and which are out of scope entirely. High-stakes outputs: legal, regulatory, clinical, financial. Exclude or gate these explicitly.
A scoped AI use policy protects the organization and gives practitioners clear decision rules. An undefined scope means every use of AI is a personal liability judgment call.
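One way to make those decision rules operational is to encode the written scope as a lookup the team can consult, or wire into tooling, before an output is used. A minimal sketch in Python; the task names and the three tiers are assumptions, not a recommended taxonomy.

    # Illustrative only: decision rules from a written AI scope policy.
    # Task names and tier labels are assumptions, not a standard.
    AI_SCOPE = {
        "meeting_summary":    "autonomous",       # may be used without review
        "marketing_copy":     "review_required",  # human review before use
        "legal_brief":        "out_of_scope",     # excluded entirely
        "clinical_note":      "out_of_scope",
        "financial_analysis": "review_required",
    }

    def gate(task: str) -> str:
        """Return the policy tier for a task; unknown tasks get the strictest tier."""
        return AI_SCOPE.get(task, "out_of_scope")

    assert gate("legal_brief") == "out_of_scope"
    assert gate("unlisted_task") == "out_of_scope"  # undefined scope defaults to excluded

The default is the point: a task the policy has not named falls to the strictest tier rather than to an individual's judgment call.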
03
DOCUMENT
Build an audit trail of AI-generated content: which tool, which prompt, which output, who reviewed it, what changes were made before use. Retain it for a minimum of 4 years to align with California's civil rights AI recordkeeping requirements.
Documentation is the only defense if an AI error surfaces later. Without it, you cannot prove what the model produced and what your team verified. The absence of a trail is itself evidence of negligence.
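A minimal sketch of what that trail could look like as an append-only log, in Python. The file name, fields, and helper are illustrative assumptions, not a compliance standard.

    # Illustrative only: append one audit entry per AI-assisted work product.
    # File name and field names are assumptions, not a compliance standard.
    import json
    from datetime import datetime, timezone

    def log_ai_use(tool, prompt, raw_output, reviewer, changes_made,
                   path="ai_audit_log.jsonl"):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,                  # which tool
            "prompt": prompt,              # which prompt
            "raw_output": raw_output,      # what the model produced
            "reviewer": reviewer,          # who reviewed it
            "changes_made": changes_made,  # what was changed before use
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

Append-only, timestamped, one entry per output: enough to show later what the model produced and what your team changed before it was used.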
04
INSURE
Review professional indemnity and D&O policies for AI-generated error coverage. Most policies written before 2024 do not cover AI hallucination claims explicitly. Negotiate AI-specific clauses at renewal. Disclose AI use to insurers where required.
The enterprise that has verified, scoped, and documented its AI use is insurable at standard rates. The one that has not is either uncovered or overexposed.
Noland v. Land of the Free
California Court of Appeal, 2nd District. September 2025. Published opinion. AI hallucination in appellate brief.
21/23
Citations in the appellate brief that were fabricated by AI: 21 out of 23. The court imposed a $10,000 individual fine, the highest individual sanction of its kind at the time of the ruling. The opinion was designated for publication, meaning it now serves as binding precedent in California.
(California Court of Appeal, 2nd District, September 2025)
The case itself was straightforward: an employment dispute, summary judgment affirmed on appeal. What the court could not ignore was the brief: 21 of 23 citations were AI hallucinations. The cases did not exist. The quotes were invented. The court did not sanction the AI tool. It sanctioned the attorney who submitted the brief without verification. The California Judicial Council followed with guidelines requiring courts to either ban generative AI or adopt a formal AI use policy by December 15, 2025. The opinion is now cited in every AI governance discussion across professional services, legal, and enterprise risk.
The court did not rule that AI is unreliable. It ruled that using AI without verification is incompatible with professional responsibility. That ruling now applies beyond law firms.