Courts are starting to signal what risk officers already suspect: when an AI system causes harm, the bank that deployed it owns the liability, not the vendor that built it.
A recent CIO feature on AI accountability lays out a dynamic that should land squarely on the desk of every Chief Risk Officer. As AI moves from pilot to production, organizations are building governance models that distribute responsibility across legal, compliance, IT, risk, and the business. The model looks reasonable on a slide. It collapses under legal scrutiny. Attorney Jessica Eaves Mathews, quoted in the piece, makes the point bluntly: when something goes wrong, the algorithm does not appear in court. The humans who developed, deployed, or used it do, and the deploying organization is almost always first in line. "We bought it from a vendor" may not hold up as a defense. The underlying principle is older than AI itself: liability follows the party best positioned to prevent the harm, which in practice means the bank integrating the system into a real-world decision.
For a Chief Risk Officer in Financial Services, this reframes the AI question entirely. The risk is no longer model performance in the abstract. It is whether, on the day a regulator or plaintiff's counsel asks what the agent did, why it did it, what data it touched, and who authorized it to act, the institution can produce a coherent answer in hours rather than weeks. Distributed governance does not produce that answer. It produces finger-pointing across four functions while the clock runs.

The institutions that will defend themselves successfully are the ones treating agent activity the same way they treat trader activity: identity, entitlements, full audit trail, real-time monitoring, and a governed data layer where every decision can be reconstructed end to end. Agent Access Management is becoming the regulatory equivalent of segregation of duties.

The cost of building it now is a rounding error against a single consent order or class action tied to an undefendable AI decision. The window in which "we are still figuring out our governance model" is an acceptable answer to a regulator is closing. Mathews expects a small number of high-profile cases to define the terrain, and the institutions caught in those cases will set the standard everyone else is measured against.
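To make the trader-desk analogy concrete, here is a minimal sketch of what an agent audit trail might look like in code. Everything in it is illustrative: the `ENTITLEMENTS` table, the `AgentActionRecord` schema, and the agent and action names are hypothetical stand-ins, not any institution's actual controls. The point is only that identity, entitlement checks, and an append-only record are a few dozen lines, not a multi-year program.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Hypothetical entitlement table: which agent identities may take which actions,
# analogous to trading limits and permissioned instruments on a trading desk.
ENTITLEMENTS = {
    "credit-agent-01": {"score_application", "request_documents"},
}

@dataclass
class AgentActionRecord:
    """One immutable entry answering: what did the agent do, on what data,
    under whose authority, and was it permitted."""
    agent_id: str          # identity: which agent acted
    action: str            # what it did
    data_touched: list     # which records it read or wrote
    authorized_by: str     # who granted the entitlement
    allowed: bool          # outcome of the entitlement check
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

AUDIT_LOG: list = []  # in practice, an append-only governed data store

def perform_action(agent_id: str, action: str,
                   data_touched: list, authorized_by: str) -> bool:
    """Check entitlements, then record the attempt either way.
    Denied actions are logged too: the trail must be complete."""
    allowed = action in ENTITLEMENTS.get(agent_id, set())
    AUDIT_LOG.append(
        AgentActionRecord(agent_id, action, data_touched, authorized_by, allowed)
    )
    return allowed

# Every action, permitted or denied, leaves a reconstructable trail.
perform_action("credit-agent-01", "score_application",
               ["app-4471"], "model-risk-committee")
perform_action("credit-agent-01", "approve_loan",           # not entitled
               ["app-4471"], "model-risk-committee")

for rec in AUDIT_LOG:
    print(json.dumps(asdict(rec)))
```

The design choice that matters is the one regulators probe first: the denied action is logged with the same fidelity as the permitted one, so "what did the agent attempt and who authorized it" can be answered in hours, not reconstructed across four functions in weeks.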