RSAC 2026 recap: Agentic AI is the new attack surface — and the new front line of defence
Posted by Neha Jawale
RSAC 2026 ran 23–26 March at the Moscone Center in San Francisco — 32 keynotes, 570+ sessions, and 600+ exhibitors. The theme was unmistakable: agentic AI has moved from experimentation to production, and the security model must move with it. Here is what we saw, and what it means for IT and security leaders.
Agentic AI is the new attack surface
The core message across RSAC 2026 was that agentic AI changes the nature of risk. Generative AI produces content. Agentic AI takes action — it reasons, plans, accesses tools, and executes tasks with minimal human oversight. Security models built for software that responds to inputs are not equipped for software that acts on intent.
Increased attack surface due to the boom in AI, GenAI and agentic AI
From what we observed, adoption has moved significantly faster than governance. AI agents are active in most enterprise environments — many built outside formal IT processes, running with permissions that were never designed to expire, and operating without full security visibility. This is what makes the boom in AI, GenAI, and agentic AI a compounding risk problem: each new agent is a potential entry point through over-privileged access, prompt manipulation, or tool misuse.
The RSAC Innovation Sandbox reflected this sharply. Eight of the ten finalists addressed AI agent security directly, and the Most Innovative Startup 2026 award went to an AI governance platform that gives enterprise teams continuous visibility into agentic AI behaviour. That capability barely existed as a dedicated category two years ago.
The AI SOC — from reactive to proactive
Security operations centres are under real strain — alert volumes have outgrown analyst capacity, and the speed at which threats now move has shortened response windows to the point where human-only workflows cannot keep pace. What we saw at RSAC 2026 was the industry's practical answer: the agentic SOC.
AI SOC agent space: The next evolution in security operations
The agentic SOC uses specialised AI agents to handle repeatable SOC tasks — triage, detection logic, SOP matching, guided response — so human analysts can focus on decisions requiring judgement and accountability. From what we observed across vendor announcements and sessions, purpose-built SOC agents are already moving from concept into release cycles this year. The urgency was clear: defenders need to match the speed at which threats operate, and agentic capabilities inside the SOC are becoming a present operational requirement, not a future aspiration.
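As an illustrative sketch (not any vendor's implementation), the division of labour in an agentic SOC can be expressed as a routing rule: the agent closes only the alerts it is both confident about and that are low-severity, and escalates everything else to a human analyst. The thresholds and the `Alert` fields here are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int        # 1 (low) .. 5 (critical)
    confidence: float    # agent's confidence in its own triage verdict

def triage(alert: Alert, confidence_floor: float = 0.9, severity_ceiling: int = 2) -> str:
    """Route an alert: repeatable, low-risk work stays with the agent;
    judgement and accountability stay with humans."""
    if alert.confidence >= confidence_floor and alert.severity <= severity_ceiling:
        return "auto-resolve"
    return "escalate-to-analyst"

print(triage(Alert("edr", severity=1, confidence=0.97)))   # auto-resolve
print(triage(Alert("edr", severity=4, confidence=0.97)))   # escalate-to-analyst
```

The key design choice is that the escalation path, not the automation path, is the default whenever either threshold is missed.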
SOC in AI and GenAI — the intelligence layer
Running AI inside your SOC and securing the AI deployed across your enterprise are related but distinct challenges. RSAC 2026 addressed both — and the architecture that kept surfacing for the latter is worth understanding.
SOC in AI/GenAI: Building the visibility layer your organisation needs
The organisations ahead of this problem are building a dedicated intelligence layer for their AI estate. From what we saw, it consistently involves four components:
- Telemetry layer: Captures prompts, reasoning chains, tool calls, and safety events at the point of execution, not retrospectively
- AI data lake: Stores interaction logs, execution graphs, and evaluation scores as a durable, auditable record
- Intelligence and control layer: AI monitoring AI, detecting behavioural drift, scoring risk, and triggering containment before damage occurs
- Governance dashboard: Surfaces cost per decision, model reliability, alignment scores, and incident velocity for leadership visibility
Without this visibility, organisations cannot answer the questions regulators and boards are starting to ask: what decisions are your AI systems making, who is accountable, and can you show the reasoning? AI observability is becoming a governance requirement, not an engineering preference.
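A minimal sketch of the telemetry-layer idea: instrument each agent tool at the point of execution so every invocation lands in an append-only log, a stand-in here for the AI data lake. The names (`telemetry_log`, `record_tool_call`, the dummy `dns_lookup` tool) are illustrative, not a product API.

```python
import functools
import json
import time

telemetry_log: list[str] = []   # stand-in for an append-only AI data lake

def record_tool_call(tool_name: str):
    """Wrap an agent tool so every invocation is captured as it happens,
    not reconstructed retrospectively."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            event = {"ts": time.time(), "tool": tool_name,
                     "args": repr(args), "kwargs": repr(kwargs)}
            result = fn(*args, **kwargs)
            event["result"] = repr(result)
            telemetry_log.append(json.dumps(event))  # durable, auditable record
            return result
        return wrapper
    return decorator

@record_tool_call("dns_lookup")
def dns_lookup(host: str) -> str:
    return "203.0.113.7"     # dummy tool for illustration

dns_lookup("example.com")
print(len(telemetry_log))    # 1
```

In a real deployment the log line would also carry the prompt and reasoning-chain identifiers so the governance dashboard can join cost, reliability, and incident data per decision.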
Securing AI and agentic AI end to end
Securing agentic AI means protecting systems that act on intent across a full lifecycle — from data and model training through to runtime behaviour and accountability. Traditional application security was not built for that scope.
Securing AI and agentic AI: The five gaps most organisations have not yet closed
These five gaps came up consistently across RSAC 2026 sessions and conversations:
- Agent inventory: most organisations lack a complete picture of every agent running in their environment, who built it, and who owns it.
- Persistent permissions: agents are frequently deployed with access that does not expire, creating ongoing exposure that grows with every new deployment.
- Behavioural monitoring: real-time detection of agent deviation from intended purpose is still an unsolved problem for most security teams.
- Prompt injection defence: agents processing external data are vulnerable to manipulation, and most deployments lack dedicated protection.
- Audit trails: the ability to trace every agent action back to a human-accountable owner, and reconstruct reasoning on demand, is absent in most environments today.
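The first, second, and fifth gaps can be closed with the same primitive: an inventory record that binds each agent to a human owner and holds only time-bound permissions. A hedged sketch, with illustrative names throughout:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentRecord:
    agent_id: str
    owner: str                      # human-accountable owner of this agent
    permissions: dict[str, datetime] = field(default_factory=dict)  # scope -> expiry

    def grant(self, scope: str, ttl_minutes: int) -> None:
        """Permissions are time-bound by construction, never persistent."""
        self.permissions[scope] = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)

    def is_allowed(self, scope: str) -> bool:
        expiry = self.permissions.get(scope)
        return expiry is not None and datetime.now(timezone.utc) < expiry

inventory: dict[str, AgentRecord] = {}
agent = AgentRecord("invoice-bot-01", owner="finance-team-lead")
agent.grant("read:invoices", ttl_minutes=30)
inventory[agent.agent_id] = agent

print(agent.is_allowed("read:invoices"))   # True (within the 30-minute window)
print(agent.is_allowed("write:payments"))  # False (never granted)
```

Because every record carries an `owner`, any action an agent takes can be traced back to an accountable human by a single inventory lookup.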
Zero trust for the agentic workforce
Zero trust as a principle has not changed. What has changed is who — or what — it needs to apply to. The conversation at RSAC 2026 made clear that zero trust must now extend to non-human identities with the same rigour applied to human users and devices.
Zero trust for the agentic workforce: From access control to action control
AI agents operate continuously, chain tools across systems, and cross trust boundaries in ways existing IAM and PAM solutions were never designed to govern. Non-human identities already outnumber human ones in many enterprise environments, yet most identity controls were built for a world where every identity belonged to a person. What we heard repeatedly at RSAC was a call to move from access control to action control — observing, logging, and applying policy to every step of an agent's execution, not just at the point of authentication. Permissions need to be task-specific and time-bound, not persistent.
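The shift from access control to action control can be sketched as a gateway that evaluates and logs every individual step an agent takes, rather than trusting it after a single authentication. The class and scope names here are assumptions for illustration:

```python
from datetime import datetime, timezone

class ActionGateway:
    """Authenticating once is not enough: each action is checked against
    policy and written to an audit log with a timestamp and verdict."""
    def __init__(self, allowed_actions: set[str]):
        self.allowed_actions = allowed_actions
        self.audit_log: list[tuple[str, str, str]] = []

    def execute(self, agent_id: str, action: str) -> bool:
        verdict = "allow" if action in self.allowed_actions else "deny"
        ts = datetime.now(timezone.utc).isoformat()
        self.audit_log.append((ts, f"{agent_id}:{action}", verdict))
        return verdict == "allow"

gateway = ActionGateway(allowed_actions={"read:tickets", "post:summary"})
print(gateway.execute("support-bot", "read:tickets"))    # True, and logged
print(gateway.execute("support-bot", "delete:tickets"))  # False, logged and denied
```

Note that denied actions are logged too: the audit trail records what the agent attempted, not just what it was permitted to do.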
Zero-Trust-as-Code — enforcing policy at machine speed
Zero trust is a strategy. Zero-Trust-as-Code (ZTaC) is how you operationalise it when agents move faster than any manual review process can follow.
Zero-Trust-as-Code (ZTaC): What it looked like at RSAC 2026
Agents are goal-oriented — they will find a path to complete a task, often one that bypasses controls designed with human behaviour in mind. Static policies reviewed after the fact cannot keep pace. ZTaC encodes enforceable, versioned policy that deploys alongside the agent itself, enforcing controls at the point of action without depending on human approval at each step. What we observed at RSAC was concrete movement in this direction — runtime policy enforcement at the agent execution layer, with near-instant permission revocation and auditable records generated automatically. For CISOs and CIOs, ZTaC is the operational bridge between having a zero-trust strategy and actually enforcing it in an agentic environment.
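A minimal sketch of the ZTaC pattern, assuming nothing about any particular product: policy is versioned data that ships with the agent, evaluation is default-deny, every verdict produces an audit record, and revocation is a one-line change rather than a redeploy. The policy schema and names are illustrative.

```python
# Versioned policy that deploys alongside the agent (illustrative schema)
POLICY = {
    "version": "2026-03-01",
    "default": "deny",                       # default-deny: unknown actions are blocked
    "rules": {"read:crm": "allow", "send:email": "allow"},
}

audit_trail: list[dict] = []

def enforce(policy: dict, agent_id: str, action: str) -> bool:
    """Evaluate the policy at the point of action and record the verdict."""
    verdict = policy["rules"].get(action, policy["default"])
    audit_trail.append({"agent": agent_id, "action": action,
                        "verdict": verdict, "policy_version": policy["version"]})
    return verdict == "allow"

def revoke(policy: dict, action: str) -> None:
    """Near-instant revocation: flip one rule; no human approval loop, no redeploy."""
    policy["rules"][action] = "deny"

print(enforce(POLICY, "sales-agent", "send:email"))  # True
revoke(POLICY, "send:email")
print(enforce(POLICY, "sales-agent", "send:email"))  # False; both verdicts audited
```

In practice teams often express this layer in a dedicated policy language such as Rego rather than application code, but the contract is the same: versioned rules, default-deny, machine-speed revocation, automatic audit.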
What RSAC 2026 means for IT and security leaders
RSAC 2026 made one gap unmistakable: the distance between where most organisations are today and where their security posture needs to be. Agentic AI is operational across most enterprise environments. The frameworks and tooling to secure it are available now. What is missing is organisational prioritisation.
Five questions to drive your next planning cycle
Use these as the basis for an honest assessment of where your organisation stands today:
- Do you have a complete inventory of every AI agent in your environment — including those built outside IT's direct control?
- Is each agent mapped to a human owner, with task-specific permissions that expire after use?
- Have you extended zero trust controls to non-human identities, not just users and devices?
- Is your SOC equipped to detect AI-native threats — prompt injection, behavioural drift, lateral movement at machine speed?
- Are your governance policies encoded and enforced at runtime, or are they documented intentions that agents can route around?
Organisations that treat agentic AI security as a present operational requirement — building agent inventory, behavioural monitoring, agentic zero trust, and ZTaC into their architecture now — will manage risk more effectively and move faster as the agentic workforce continues to scale.
Where this leaves you
RSAC 2026 confirmed that agentic AI is already operating inside enterprises, often with more autonomy and access than security teams realise. Organisations that win will be the ones that treat agents as first-class security entities: inventoried, identity-bound, least-privileged, continuously monitored, and governed at runtime rather than reviewed after the fact.
This shift requires practical execution. Opcito helps enterprises operationalise agent-aware security across cloud and AI environments, covering visibility, policy enforcement, and SOC integration at production scale. If you're assessing how to secure agentic workloads without slowing innovation, contact Opcito's experts.