The Biggest Security Risk at RSA 2026: Autonomous AI Agents and the New Identity Crisis
The biggest threat at RSA this year isn’t on the expo floor. It’s the autonomous agent your team spun up last quarter that still has standing access to production.
Every year, the Moscone Center gives us a reading on where this industry is heading. If 2024 was the year of AI experimentation and 2025 was the year of the pilot program, then 2026 is the year we have to answer for all of it. RSA Conference 2026’s theme, “Power of Community,” couldn’t be more fitting—because our community now includes entities that don’t have a badge, a pulse, or a manager to call when something goes wrong.
Here’s where I think we need to focus.
The Agentic Shift: Autonomy Without Accountability
We need to stop talking about AI as a tool and start talking about it as an actor. The industry has moved well past chatbots and copilots. We are now deploying agentic AI—autonomous systems that call APIs, query databases, trigger workflows, and make decisions on behalf of the organization without a human in the loop for every action (Saviynt, 2026).
That is a fundamentally different risk model than anything traditional IAM was designed to handle.
The problem I keep seeing in the field is what I’d call the governance deficit. A developer or a data team provisions an AI agent, gives it the credentials it needs to do its job, and moves on. That agent inherits broad, standing permissions. It doesn’t go on vacation. It doesn’t change roles. It doesn’t trigger an HR offboarding workflow. It just runs—often long after the original use case has evolved or been abandoned entirely. Saviynt’s 2026 identity security trends report flags this exact pattern: the rise of “shadow agents” operating outside the visibility of security teams (Saviynt, 2026).
If that doesn’t keep you up at night, it should.
Non-Human Identity Is Your Largest Unmanaged Attack Surface
Here is the number that should anchor every identity conversation at RSA this year: in modern cloud-native environments, non-human identities—service accounts, API keys, bots, and AI agents—outnumber human users by as much as 45 to 1 (CDW, 2026).
Let that ratio sink in. We have spent decades and billions of dollars building identity programs around the human user. Password policies, MFA, privileged access management, user access reviews: all of it architected for people. Meanwhile, machine identities have quietly become the dominant population in our environments, and in too many organizations they are barely governed at all.
CyberArk’s research underscores the consequence: orphaned credentials and unmanaged API keys have become a primary breach vector (CyberArk, 2025). These aren’t theoretical risks. These are the footholds that attackers are actively exploiting right now. Every unmanaged service account is a door we forgot to lock. Every AI agent with stale permissions is a privilege escalation waiting to happen.
The industry is starting to rally around the concept of an Identity Fabric—a unified architecture that provides automated lifecycle management for every identity in the environment, human or otherwise (Optimal IdM, 2026). I think that’s directionally right. But architecture alone won’t save us. We need a strategic framework to drive the decisions, and I believe that framework already exists.
Zero Trust Is the Operating Model for the Agentic Era
I’ve been a proponent of Zero Trust for years, and I’ll admit that the term has been diluted by marketing to the point where it risks meaning nothing. But strip away the vendor pitches and get back to the core principles—never trust, always verify; least privilege; assume breach—and you have exactly the mental model we need for governing AI identities.
Think about it this way. Traditional IAM implicitly trusted identities once they were inside the perimeter and authenticated. Zero Trust challenged that assumption for human users. Now we need to extend that same challenge to every non-human identity, and especially to autonomous agents.
What does Zero Trust look like when applied to agentic AI? It means several things.
First, it means continuous verification.
An AI agent should not authenticate once and operate indefinitely. Its access should be re-evaluated based on context: what is it doing right now, does that align with its defined purpose, and has anything about its environment changed? Healthcare Info Security’s 2026 predictions highlight this exact gap—AI is breaking legacy identity and data security models precisely because those models assumed static, human-patterned access (Healthcare Info Security, 2026).
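To make that concrete, here is a minimal sketch of per-request verification. Everything in it is an assumption for illustration, not a reference to any product: the AgentPolicy record, the scope strings, and the fifteen-minute credential age are all invented. The point is the shape of the check: authorization is re-evaluated on every call, against the agent's declared purpose, rather than granted once at login.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical policy record for an agent; all field names are illustrative.
@dataclass
class AgentPolicy:
    agent_id: str
    purpose: str                 # the task the agent was provisioned for
    allowed_actions: set[str]    # actions consistent with that purpose
    max_token_age: timedelta     # how long before re-authentication is forced

def verify_request(policy: AgentPolicy, action: str,
                   token_issued_at: datetime) -> bool:
    """Re-evaluate access on every call instead of trusting a past login."""
    # 1. Does the requested action align with the agent's defined purpose?
    if action not in policy.allowed_actions:
        return False
    # 2. Has the credential aged past the point where context may have changed?
    if datetime.now(timezone.utc) - token_issued_at > policy.max_token_age:
        return False  # force re-authentication and re-authorization
    return True

policy = AgentPolicy(
    agent_id="summarizer-01",
    purpose="summarize meeting notes",
    allowed_actions={"read:transcripts", "write:summaries"},
    max_token_age=timedelta(minutes=15),
)

# A drifted request, however it was induced, is denied outright.
assert not verify_request(policy, "read:customer_db",
                          datetime.now(timezone.utc))
```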
Second, it means just-in-time access.
AI agents should not hold standing privileges. They should request access for a specific task, receive the minimum permissions necessary, and have those permissions revoked the moment the task is complete (Saviynt, 2026). If we’ve learned anything from privileged access management, it’s that standing access is standing risk. That principle applies tenfold to autonomous systems that operate at machine speed.
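Here is one way that pattern might look in code. The in-memory grant store below is a stand-in; in a real deployment the issue and revoke calls would go to your secrets manager or cloud IAM. The shape is what matters: the credential exists only for the duration of the task, and revocation happens even if the task fails.

```python
import secrets
from contextlib import contextmanager

# Hypothetical in-memory grant store; a real deployment would sit in front
# of a secrets manager or cloud IAM, not a Python dict.
ACTIVE_GRANTS: dict[str, set[str]] = {}

@contextmanager
def just_in_time_access(agent_id: str, scopes: set[str]):
    """Grant the minimum scopes for one task, then revoke unconditionally."""
    token = secrets.token_urlsafe(16)
    ACTIVE_GRANTS[token] = scopes
    try:
        yield token                     # the agent uses this token for one task
    finally:
        ACTIVE_GRANTS.pop(token, None)  # revoked even if the task raised

with just_in_time_access("report-bot", {"read:sales_db"}) as tok:
    assert ACTIVE_GRANTS[tok] == {"read:sales_db"}

# Outside the block the credential no longer exists: no standing access.
assert not ACTIVE_GRANTS
```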
Third, it means microsegmentation of identity.
Not all AI agents are equal. An agent that summarizes meeting notes has a fundamentally different risk profile than one that queries a customer database or modifies infrastructure configurations. Our identity architectures need to reflect that granularity. Broad role-based access is not sufficient when the “user” can execute thousands of actions per minute without human review.
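As a sketch of what that granularity could look like, consider a deny-by-default map from each agent to its own narrow segment. The agent names and scope strings below are invented for illustration; a production policy engine would also enforce rate limits, which matter when the caller operates at machine speed.

```python
# Illustrative segments only; the names and scopes are assumptions, not a
# standard. Each agent gets a narrow slice, never a broad shared role.
SEGMENTS: dict[str, set[str]] = {
    "notes-summarizer": {"read:transcripts", "write:summaries"},
    "support-agent":    {"read:tickets", "write:ticket_replies"},
    "infra-agent":      {"read:configs"},  # writes only via JIT grants
}

def authorize(agent_id: str, scope: str) -> bool:
    """Deny by default: an agent can touch only its own segment."""
    return scope in SEGMENTS.get(agent_id, set())

assert authorize("notes-summarizer", "read:transcripts")
assert not authorize("notes-summarizer", "read:customer_db")  # wrong segment
```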
Fourth, and perhaps most importantly, it means assuming breach.
We need to operate under the assumption that an AI agent’s credentials will be compromised, that its behavior will drift, or that its underlying model will be manipulated. Prompt injection is not a theoretical attack; it is an active threat vector that can lead to unauthorized privilege escalation if identity controls are not resilient to it (NIST, 2026b). Building our identity controls with this assumption baked in changes everything about how we architect, monitor, and respond.
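One modest but useful expression of assume-breach is behavioral drift detection. The sketch below is illustrative only: the baseline, the threshold, and the action labels are assumptions, and in production the baseline would come from your logging pipeline rather than a literal. The logic captures the idea, though: a compromised or prompt-injected agent tends to reveal itself through actions outside its historical profile.

```python
from collections import Counter

# Hypothetical baseline: the action mix observed during normal operation.
# In production this would come from your SIEM, not a hard-coded set.
BASELINE = {"read:transcripts", "write:summaries"}

def detect_drift(recent_actions: list[str], threshold: float = 0.1) -> bool:
    """Flag an agent whose recent behavior deviates from its baseline.

    Assume breach: actions outside the historical profile are treated as a
    signal worth investigating, not as noise.
    """
    counts = Counter(recent_actions)
    off_profile = sum(n for a, n in counts.items() if a not in BASELINE)
    return off_profile / max(len(recent_actions), 1) > threshold

# Two customer-table reads amid eight normal actions trip the alarm.
actions = ["read:transcripts"] * 8 + ["read:customer_db"] * 2
assert detect_drift(actions)
```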
NIST Is Giving Us the Roadmap
For those of us who like to anchor our strategies in frameworks rather than hype, NIST has delivered. The Cybersecurity Framework Profile for Artificial Intelligence (NIST IR 8596), released earlier this year, provides structured guidance for mapping AI-specific risks to the controls we already know from the Cybersecurity Framework (NIST, 2026b).
What I find most valuable about this profile is that it doesn’t treat AI governance as a separate discipline. It integrates AI risk into the existing framework functions: Govern, Identify, Protect, Detect, Respond, Recover. That’s critical, because AI governance should not be a parallel program; it should be woven into the security program you already operate.
The profile’s emphasis on identity integrity is particularly relevant. It calls out the need to ensure that AI-specific threats like prompt injection and model manipulation don’t result in unauthorized access or privilege escalation. It also pushes on supply chain transparency—understanding the provenance of the data and models your AI agents consume—and international alignment, which matters enormously for organizations operating across regulatory jurisdictions (NIST, 2026b; NIST, 2026c).
Meanwhile, the NIST National Cybersecurity Center of Excellence is actively working on guidance for AI agent identity and authorization, recognizing that existing authentication and authorization models were not designed for entities that dynamically interact with tools, data sources, and services in unpredictable patterns (NIST NCCoE, 2026). This is the kind of foundational work that will shape how we build identity architectures for the next decade.
The Strategic Imperative: Treat AI Agents Like Privileged Identities
If I could leave RSA attendees with one operational takeaway, it would be this: treat every AI agent like a privileged identity. Not because every agent has privileged access today, but because the autonomous nature of these systems means they carry privileged risk.
That means full lifecycle management—provisioning, periodic access review, behavioral monitoring, and deprovisioning. It means auditability—every action an AI agent takes should be attributable and reviewable. It means governance integration—AI agents should be subject to the same policy frameworks and compliance requirements as your most sensitive human accounts.
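To illustrate the auditability requirement, here is a minimal sketch of what a single attributable record might contain. The field names are assumptions, not a standard; what matters is that every action maps back to a specific agent, a specific credential grant, and a named human who is accountable for the agent's behavior.

```python
import json
from datetime import datetime, timezone

def audit_record(agent_id: str, owner: str, action: str,
                 grant_token: str, outcome: str) -> str:
    """Emit one attributable, reviewable record per agent action.

    The fields are illustrative; the requirement they encode is that every
    action traces to an agent, a credential, and an accountable human.
    """
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "accountable_owner": owner,   # the human answerable for this agent
        "action": action,
        "grant": grant_token,         # ties the action back to a JIT grant
        "outcome": outcome,
    })

print(audit_record("report-bot", "jane.doe@example.com",
                   "read:sales_db", "tok-abc123", "allowed"))
```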
And it means asking the hard questions that most organizations are still avoiding. Do we have a complete inventory of every non-human identity in our environment? Do we know which AI agents are active, what they have access to, and who is accountable for their behavior? Can we detect when an agent’s behavior deviates from its intended purpose? Do our biometric and authentication strategies account for the generative AI threat to liveness detection and identity verification (The Hacker News, 2026)?
If you can’t answer those questions with confidence, you have work to do before the next board meeting.
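If you want a concrete place to start on the inventory question, the sketch below uses AWS as one example environment (boto3 and valid credentials required; other clouds have analogous APIs). The 90-day cutoff is an arbitrary assumption, and this covers only IAM roles: API keys, OAuth clients, and agents registered outside your cloud provider each need their own sweep.

```python
from datetime import datetime, timedelta, timezone
import boto3

iam = boto3.client("iam")
stale_cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        # list_roles omits last-used data, so fetch each role individually.
        detail = iam.get_role(RoleName=role["RoleName"])["Role"]
        last_used = detail.get("RoleLastUsed", {}).get("LastUsedDate")
        if last_used is None or last_used < stale_cutoff:
            print(f"REVIEW: {role['Arn']} last used: {last_used}")
```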
Looking Ahead From the Expo Floor
RSA Conference 2026 will be full of vendors claiming to solve these problems. Some of them will be genuine. Many will be wrapping incremental features in “AI governance” packaging. The signal-to-noise ratio on the expo floor is always a challenge.
My advice: look past the slide decks and ask how a solution addresses the full identity lifecycle for non-human entities. Ask how it integrates with your Zero Trust architecture. Ask how it handles just-in-time access for autonomous agents. Ask what happens when an agent’s credentials are compromised, or when its behavior drifts outside its intended scope.
The organizations that will navigate this era successfully are the ones that recognize a fundamental truth: identity is no longer just an IT function or a compliance checkbox. It is the control plane for the modern enterprise. And as the population of that enterprise shifts from predominantly human to predominantly autonomous, our identity strategies must shift with it.
Art Ocain is the Executive Director of Operations at Airiam, where he leads cybersecurity strategy and operations for organizations navigating complex threat landscapes.
Contact us to discuss how AI can help your business grow smarter instead of just bigger.

References
- CDW. (2026, January 30). 5 IAM trends to watch in 2026 (and how to prepare for them). https://www.cdw.com/content/cdw/en/articles/security/5-iam-trends-watch-2026-how-prepare-them.html
- CyberArk. (2025, December 4). AI agents and identity risks: How security will shift in 2026. https://www.cyberark.com/resources/blog/ai-agents-and-identity-risks-how-security-will-shift-in-2026
- Healthcare Info Security. (2026). 2026 predictions: AI breaking identity and data security. https://www.healthcareinfosecurity.com/blogs/2026-predictions-ai-breaking-identity-data-security-p-4042
- National Institute of Standards and Technology. (2026b). Cybersecurity framework profile for artificial intelligence (Cyber AI Profile) (NIST IR 8596). https://csrc.nist.gov/pubs/ir/8596/iprd
- National Institute of Standards and Technology. (2026c, March 6). AI standards webinar: International landscape and ITL priorities. https://www.nist.gov/artificial-intelligence/ai-standards
- NIST National Cybersecurity Center of Excellence. (2026). Accelerating the adoption of software and AI agent identity and authorization.
- Optimal IdM. (2026). 2026 identity and access management trends: Embracing a passwordless, AI-driven future. https://optimalidm.com/resources/blog/2026-iam-trends/
- Saviynt. (2026). 2026 identity security trends & predictions. https://saviynt.com/hubfs/2026%20Identity%20Security%20Trends%20-%20Saviynt.pdf
- The Hacker News. (2026, February 2). 9 identity security predictions for 2026. https://thehackernews.com/expert-insights/2026/02/9-identity-security-predictions-for-2026.html