← Back to Insights
Integration & Security

Your AI Agents Have Credentials. Do You Know Which Ones?

Curated by David deBoisblanc, Duczer East
Integration & Security 3 min read May 5, 2026 Duczer East Insights

A new Okta Threat Intelligence study shows AI agents handing over OAuth tokens, leaking credentials over unencrypted channels, and bypassing their own guardrails under entirely plausible enterprise conditions — and most CIOs have no inventory of where those agents are running.

The Okta research, summarized in Computerworld on May 1, tested OpenClaw, a model-agnostic agent platform that has gained traction inside enterprises over the past year. In one scenario, researchers tricked an agent running on Claude Sonnet 4.6 into displaying an OAuth token in a terminal window, reset the agent so it forgot the guardrail it had just enforced, then instructed it to screenshot the desktop and drop the image into a Telegram chat. Exfiltration done. In another, the agent volunteered website login credentials into an unencrypted Telegram channel because it was trying to be helpful. In a third, it pulled session cookies from a logged-in browser and injected them into its own process — the agentic equivalent of an adversary-in-the-middle bypass of MFA. Okta's threat intelligence director, Jeremy Kirk, framed the broader pattern bluntly, saying AI is "defying security gravity."

For a CIO, the uncomfortable part of this study is not the exploits. It is that the conditions enabling them — an agent with broad local access, a chat channel as a control plane, no scoped credentials, no session expiry discipline — describe how agents are actually being deployed across the enterprise right now. Developers spin them up to accelerate work. Business units pilot them without telling IT. The result is a population of shadow agents holding human-equivalent access with none of the human controls, and they will not appear in any IAM audit report formatted for users or service accounts.

The financial exposure compounds quickly. A single compromised OAuth token connected to a productivity suite or code repository can cascade across dozens of downstream SaaS systems, as the recent Vercel and Context.ai incident demonstrated. Cyber insurance carriers have already started asking how agentic systems are governed at renewal. Regulators in financial services and healthcare are not far behind, and the SEC's incident disclosure rules do not carve out exceptions for breaches initiated by an AI agent doing exactly what it was asked to do. The strategic question is no longer whether to govern agents, but whether the CIO defines that governance before the auditor, the carrier, or the breach does.

The practical near-term work is unglamorous. Inventory the agents already running. Treat each one as a non-human identity with a scoped, short-lived credential rather than a borrowed user session. Pull credentials and tokens out of agent reach wherever possible, and route agent actions through an access management layer that logs them the same way a privileged user would be logged. This is not a 2027 problem. It is a current-quarter control gap, and the agent population is growing faster than the policy around it.

Curated Article
Your AI Agents Have Credentials. Do You Know Which Ones?
Your AI Agents Have Credentials. Do You Know Which Ones?
Read the full article →

Would you like to discuss the ideas raised here?

Duczer East is recognized for deep work in data-centric AI, agentic systems, and enterprise integration. Happy to compare notes on any of the points raised — no pitch, just a conversation.

Get in touch
Duczer East — Where Data Engineering Meets Agentic AI

The Practitioner's Briefing

Senior-level insights on agentic AI, data engineering, and enterprise integration — delivered to your inbox.