
Your AI Governance Chart Has One Name on It in Court

Curated by David deBoisblanc, Duczer East
Financial Services & KYC/AML · 3 min read · May 6, 2026 · Duczer East Insights

Courts are starting to signal what risk officers already suspect: when an AI system causes harm, the bank that deployed it owns the liability, not the vendor that built it.

A recent CIO feature on AI accountability lays out a dynamic that should land squarely on the desk of every Chief Risk Officer. As AI moves from pilot to production, organizations are building governance models that distribute responsibility across legal, compliance, IT, risk, and the business. The model looks reasonable on a slide. It collapses under legal scrutiny.

Attorney Jessica Eaves Mathews, quoted in the piece, makes the point bluntly: when something goes wrong, the algorithm does not appear in court. The humans who developed, deployed, or used it do, and the deploying organization is almost always first in line. "We bought it from a vendor" may not hold up as a defense. The underlying principle is older than AI itself: liability follows the party best positioned to prevent the harm, and in practice that is the bank integrating the system into a real-world decision.

For a Chief Risk Officer in Financial Services, this reframes the AI question entirely. The risk is no longer model performance in the abstract. It is whether, on the day a regulator or plaintiff's counsel asks what the agent did, why it did it, what data it touched, and who authorized it to act, the institution can produce a coherent answer in hours rather than weeks. Distributed governance does not produce that answer. It produces finger-pointing across functions while the clock runs.

The institutions that will defend themselves successfully are the ones treating agent activity the same way they treat trader activity: identity, entitlements, a full audit trail, real-time monitoring, and a governed data layer where every decision can be reconstructed end to end. Agent Access Management is becoming the regulatory equivalent of segregation of duties. The cost of building it now is a rounding error against a single consent order or class action tied to an undefendable AI decision. The window in which "we are still figuring out our governance model" is an acceptable answer to a regulator is closing. Mathews expects a small number of high-profile cases to define the terrain, and the institutions caught in those cases will set the standard everyone else is measured against.

Curated Article
AI is spreading decision-making, but not accountability
https://www.cio.com/article/4160986/ai-is-spreading-decision-making-but-not-accountability.html

Would you like to discuss the ideas raised here?

Duczer East is recognized for deep work in data-centric AI, agentic systems, and enterprise integration. Happy to compare notes on any of the points raised — no pitch, just a conversation.

Duczer East — Where Data Engineering Meets Agentic AI
