Meeting Opcito at RSAC 2026? This is what we bring to the table
Posted By
Neha Jawale
RSA Conference 2026 runs March 23–26 at the Moscone Center in San Francisco — one of the largest cybersecurity events on the calendar. For Opcito, attending is a natural extension of the work we already do. Several of our existing customers are among RSAC 2026 exhibitors, showcasing security products that our engineering teams helped build. We are not showing up to learn what the security market looks like. We already know.
What has changed is the conversation. This year, the dominant challenge is not defending existing infrastructure — it is securing an entirely new class of technology being deployed faster than the security architecture around it is being built.
This blog covers what Opcito is seeing in the market, what we are building, and who we are looking to meet in San Francisco.
A decade of security product engineering expertise across the full stack
Security product engineering is a specific discipline. Most engineering partners understand software. Fewer understand the security domain deeply enough to make architecture decisions that hold up under adversarial conditions, compliance scrutiny, and real-world pressure — not just in a controlled environment.
Opcito has spent a decade doing exactly that — building and operationalizing security products across IAM, PAM, network security, cloud security, CNAPP, DSPM, data security, encryption, and software supply chain security, for ISVs ranging from stealth-mode startups to large global enterprises. These are not capability claims. They are the outcome of actual engineering engagements.
Nitin Singhvi, our VP of Engineering, has spent two decades inside this stack. When he sits down with a security ISV at RSAC, there is no ramp-up. The domain is familiar ground.
Meet Nitin Singhvi at
RSAC 2026 San Francisco
Software supply chain security — still underestimated
Recent reports highlight an exponential growth of supply chain attacks, with a disproportionate share originating from open-source dependencies. This is a problem that lives inside the build pipeline — where dependencies are resolved, artifacts are assembled, and code ships — and most organizations focus their security investment everywhere except there.
How Opcito approaches software supply chain security in CI/CD pipelines
Our approach embeds supply chain security into the engineering workflow rather than treating it as a periodic audit. The work includes:
- SBOM generation in CI/CD pipelines: Automatically generated as a byproduct of the build process, not produced manually as a compliance artifact — ensuring the SBOM reflects what is actually in the build.
- Automated dependency scanning: Continuous scanning against known vulnerability databases, with remediation workflows integrated into the pipeline.
- Build pipeline hardening: Signed commits, hardened artifact repositories, build provenance tracking, and prevention of unauthorized pipeline modifications.
- Continuous compliance monitoring: Policy enforcement and compliance checks running continuously, so compliance posture is visible at all times.
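The pipeline-gate idea behind the first two bullets can be sketched in a few lines of Python. This is an illustrative sketch only: the SBOM shape loosely follows CycloneDX, and the component names and the toy vulnerability dictionary are hypothetical. A real pipeline would generate the SBOM with a tool such as Syft and pull vulnerability data from a feed such as OSV or the NVD.

```python
# Minimal CI/CD gate sketch: flag SBOM components with known-vulnerable
# versions and fail the build if any are found. The SBOM shape loosely
# follows CycloneDX; the vulnerability "database" here is a toy dict.

def flagged_components(sbom: dict, vuln_db: dict) -> list:
    """Return SBOM components whose (name, version) appears in vuln_db."""
    findings = []
    for comp in sbom.get("components", []):
        if comp["version"] in vuln_db.get(comp["name"], set()):
            findings.append({"component": comp["name"],
                             "version": comp["version"]})
    return findings


def ci_gate_passes(sbom: dict, vuln_db: dict) -> bool:
    """The gate passes only when no component is flagged."""
    return not flagged_components(sbom, vuln_db)
```

Because the SBOM is produced by the build itself, the same check doubles as compliance evidence: the gate result and the findings list can be archived per build with no manual step.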
Now that AI components — pre-trained models, model weights, datasets, fine-tuning artifacts — are entering the software supply chain, the attack surface has expanded in ways traditional SBOM and dependency scanning were not designed to handle. This is a problem we are actively working on with security ISVs building tooling in this space.
The central problem at RSAC 2026: GenAI and Agentic AI security architecture
GenAI and Agentic AI are being deployed at scale — inside enterprise workflows, inside security products, and inside systems making decisions without waiting for human input. The security implications are significant and the industry does not yet have a settled playbook for addressing them.
The most consistent mistake we see is treating AI as an afterthought rather than a foundational element of the security architecture. Teams build fast, deploy faster, and ask the security question afterward — by which point the core architecture is already locked in. A decade ago, the same mistake was made with DevSecOps. Security was bolted on at the end of development cycles and DevSecOps emerged as the industry's answer. The lesson was expensive to learn. It is being repeated now with AI.
Shadow AI compounds this further. Most employees have access to tools like ChatGPT, and knowingly or unknowingly, sensitive and regulated data is submitted to these systems daily. Managing shadow AI risk requires infrastructure-level controls such as proxy agents, AI firewalls, and governance frameworks that enforce policy at the point of interaction, not after the fact. Among the dominant RSAC 2026 cybersecurity trends, the shift from securing static systems to securing autonomous ones stands out as the most consequential and the least solved.
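Point-of-interaction enforcement can be sketched as a policy proxy sitting between employees and an external GenAI service. This is a hedged illustration, not a product design: the sensitive-data patterns, decision values, and log shape are all hypothetical, and a production AI firewall would combine far richer classification with policy lookup.

```python
import re

# Sketch of a policy-enforcing proxy between employees and an external
# GenAI service: classify the outbound prompt, block on sensitive data,
# and log every decision for compliance. Patterns are illustrative.

SENSITIVE = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

audit_log = []  # interaction log for compliance review


def proxy_prompt(user: str, prompt: str) -> str:
    """Return 'forwarded' or 'blocked'; record the decision either way."""
    hits = [name for name, pattern in SENSITIVE.items()
            if pattern.search(prompt)]
    decision = "blocked" if hits else "forwarded"
    audit_log.append({"user": user, "decision": decision, "hits": hits})
    return decision
```

The key design point is that the decision happens before the prompt leaves the network boundary, so policy holds even for tools the security team never approved.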
Why Agentic AI security is architecturally different
An agent is a multi-layered software system. Each layer — data, model, orchestration, tool use, output — introduces its own security dimensions. Securing Agentic AI systems in production is not an extension of securing a traditional web application or REST API. The AI security architecture is different and the approach has to match it.
GenAI, Agentic AI, RAG, and MCP — distinct architectures, distinct risk profiles
- GenAI applications introduce prompt injection, jailbreaking, data leakage through model outputs, and insecure context handling. GenAI security engineering here requires input validation, output filtering, and controls over what context the model can access.
- Agentic AI systems plan and execute sequences of actions — web browsing, code execution, API calls, database queries. Every tool the agent can invoke is a potential attack vector. Authorization at the tool level, not just the session level, is a hard requirement.
- RAG pipelines introduce a retrieval layer between the user query and the model response. RAG pipeline security vulnerabilities include embedding poisoning, indirect prompt injection through retrieved documents, and data leakage where the retrieval layer does not enforce access controls correctly.
- MCP-based systems connect models to tools and data sources through a protocol layer. MCP security architecture and tool authorization are critical — the primary risks are unauthorized tool invocations, privilege escalation through tool chaining, and inadequate scoping of tool permissions.
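To make the tool-level authorization requirement concrete, here is a minimal Python sketch in which every tool invocation is checked against the agent's granted scopes, independent of any session-level permission. The scope strings and tool registry are illustrative and not tied to any specific MCP implementation.

```python
# Sketch: authorization enforced per tool call, not per session.
# Tool names and scope strings are illustrative.

class AuthorizationError(Exception):
    pass


TOOL_SCOPES = {
    "query_db": "data:read",     # each tool declares the scope it needs
    "update_db": "data:write",
    "run_code": "compute:exec",
}


def invoke_tool(agent_scopes: set, tool: str) -> str:
    """Check the tool's required scope before dispatching the call."""
    required = TOOL_SCOPES.get(tool)
    if required is None or required not in agent_scopes:
        raise AuthorizationError(f"agent lacks scope for tool {tool!r}")
    # ...dispatch to the real tool implementation here...
    return f"{tool}: authorized"
```

An agent holding only `data:read` can plan whatever sequence it likes; the write and execute paths still refuse it at the point of invocation.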
AI security posture management’s distinct problem
DSPM addresses data posture across cloud environments. AI security posture management (AI-SPM) addresses something different — continuous visibility and control of AI assets across the enterprise: models in use, agent configurations, data flows into and out of AI systems, and policy compliance across AI-driven workflows.
AI security posture management for enterprises is becoming a requirement in the same way CSPM became a requirement as cloud adoption scaled. We are working with organizations on integrating AI-SPM into enterprise security workflows — covering model inventory, access governance for AI systems, and runtime monitoring of agentic behavior.
Human-in-the-loop AI architecture — an engineering principle, not a feature
Fully autonomous AI systems — particularly those with access to sensitive data or high-impact workflows — introduce risk that automation alone cannot manage. Human-in-the-loop (HITL) is a deliberate architectural principle that determines where human judgment is required in a workflow and ensures AI actions in high-stakes contexts are reviewable, auditable, and reversible.
Human-in-the-loop design for Agentic AI in practice
For low-stakes GenAI applications, the HITL requirement is minimal. For Agentic AI systems making decisions about access control or security policy enforcement, HITL is non-negotiable. Opcito defines HITL requirements at the architecture stage — before implementation begins — identifying which decision points require human review and ensuring the system degrades gracefully when review is unavailable rather than defaulting to autonomous action.
Building the system first and inserting human oversight later produces HITL implementations that are superficial, easily bypassed, and not trusted by the people they are supposed to involve.
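A minimal sketch of the principle, assuming a simple two-tier risk model (the action names and return values are hypothetical): high-stakes actions require a human decision, and when no reviewer is available the system blocks rather than falling back to autonomous execution.

```python
# HITL defined at the architecture level: high-stakes actions route
# through human review, and the system fails closed (blocks) when no
# reviewer is available. Action names and tiers are illustrative.

HIGH_STAKES = {"revoke_access", "change_security_policy"}


def execute(action: str, reviewer=None) -> str:
    """reviewer is a callable returning True/False, or None if absent."""
    if action in HIGH_STAKES:
        if reviewer is None:
            return "blocked"   # degrade gracefully: never default to autonomy
        return "executed" if reviewer(action) else "rejected"
    return "executed"          # low-stakes path needs no review
```

The decision tiers are fixed at design time, which is what makes the review step auditable instead of a bypassable afterthought.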
DevSecOps for AI systems — embedding security into AI engineering workflows
DevSecOps is core to how Opcito works, and it is a consistent gap in the organizations we engage with. Engineering teams face real pressure: faster release cycles, more automation, greater infrastructure complexity, and now AI-generated code entering codebases. Layering compliance, Zero Trust enforcement, and audit requirements on top creates friction, and under that friction security either slows delivery or gets bypassed. Neither is acceptable.
AI-specific controls in DevSecOps pipelines that Opcito addresses
- AI security controls in CI/CD pipelines: Traditional SAST, DAST, and SCA tooling does not fully cover vulnerability classes introduced by LLM-generated code. Integrating AI-specific validation into the pipeline — output validation, license compliance, detection of insecure LLM output patterns — is a capability most DevSecOps implementations do not yet have.
- Compliance automation: SOC 2, ISO 27001, HIPAA, PCI DSS — most teams handle compliance evidence manually. Opcito automates compliance evidence collection as a byproduct of the CI/CD pipeline, so audit readiness is continuous.
- IaC security: IaC scanning, policy enforcement using Open Policy Agent (OPA), and drift detection are foundational to a secure cloud posture. Misconfiguration in IaC is one of the most common and most preventable sources of cloud security incidents.
- Zero Trust at the engineering layer: Implementing Zero Trust effectively requires changes at the application and infrastructure layer — service mesh configuration, workload identity, least-privilege policy enforcement, and continuous verification across distributed systems.
- Training and upskilling engineering teams: Secure development practices do not spread through policy documents. We work with engineering teams to build secure development capability in-house — covering threat modelling, secure code review, and AI-specific security practices for teams working with LLM-generated code.
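As one concrete example of the first bullet, a pipeline step can screen incoming diffs for patterns common in insecure LLM-generated code before traditional SAST runs. The three rules below are a small illustrative set, not a complete ruleset.

```python
import re

# Illustrative pre-SAST screen for insecure patterns often seen in
# LLM-generated code. A real pipeline would pair this with full
# SAST/SCA coverage; these rules are examples only.

INSECURE_PATTERNS = {
    "hardcoded credential": re.compile(r"(?i)(api_key|password)\s*=\s*['\"]"),
    "eval on dynamic input": re.compile(r"\beval\s*\("),
    "subprocess with shell=True": re.compile(r"shell\s*=\s*True"),
}


def scan_diff(diff_text: str) -> list:
    """Return the names of all insecure patterns found in the diff."""
    return [name for name, pattern in INSECURE_PATTERNS.items()
            if pattern.search(diff_text)]
```

Running this on the diff rather than the whole tree keeps the feedback tied to the change that introduced the pattern, which is what makes remediation fit the release cycle instead of fighting it.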
These are the conversations worth having at RSAC 2026, beyond the sessions on the official agenda.
What Opcito is currently building
Our GenAI and Agentic AI security work spans live, production-grade engagements. The engagements below show where RSAC 2026 security innovations are actually being built: not announced on a stage, but engineered in production. Without naming clients:
- Data-layer security for GenAI and Agentic AI: Securing the data ingestion and retrieval layer of a GenAI-powered enterprise system — access control at the retrieval layer, data classification before ingestion into the vector store, and monitoring for anomalous retrieval patterns indicating indirect prompt injection.
- MCP server development for a security ISV: Building and securing an MCP-based architecture with defined trust boundaries, scoped tool permissions, and audit logging. The core challenge is preventing privilege escalation through tool chaining — where individually permitted tool calls combine to produce a capability that should not be authorized.
- Proxy agents and AI firewall solutions: Enforcing policy at the infrastructure layer between employees and external GenAI systems — data classification enforcement, prevention of sensitive data submission, interaction logging for compliance, and shadow AI risk management across the organization.
- Unified security platform for GenAI and Agentic AI assets: An end-to-end platform covering model validation, input filtering, runtime monitoring, output controls, and incident response workflows for AI-generated actions — with AI governance and risk controls embedded across every layer.
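The tool-chaining risk called out in the MCP engagement can be illustrated with a toy rule set: each tool call may be individually permitted, yet certain combinations within one agent run amount to a capability (here, exfiltration) that should never be authorized. The rule and tool names are hypothetical.

```python
# Toy illustration of chain-level authorization: block combinations of
# individually permitted tools that together form a forbidden capability.
# Rule and tool names are hypothetical.

FORBIDDEN_CHAINS = [
    ({"read_secrets", "http_post"}, "exfiltration"),       # read + send out
    ({"write_file", "run_code"}, "arbitrary execution"),   # drop + execute
]


def chain_violation(tools_called: set):
    """Return the forbidden capability formed by tools_called, or None."""
    for combo, capability in FORBIDDEN_CHAINS:
        if combo <= tools_called:
            return capability
    return None
```

Per-call scope checks miss this class of escalation by construction; the check has to run over the accumulated set of tools invoked in the run.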
Who should meet Opcito at RSAC 2026
If any of the following describes your situation, it is worth carving time out of your RSAC 2026 schedule to meet us.
Security ISVs building for GenAI and Agentic AI
You are building a security product across any layer of the GenAI or Agentic AI security architecture — data security, model security, guardrails, RAG pipeline security, MCP architecture, or AI-SPM. We are already building here and the domain knowledge is already in the room.
Enterprises deploying GenAI or Agentic AI without a security foundation
Shadow AI risk, unsecured agent deployments, unhardened RAG pipelines, or no AI governance framework yet. These are problems worth solving before something goes wrong, not after.
Engineering and security leaders dealing with DevSecOps at scale
Compliance is still manual, security is still a release gate, or AI-generated code is entering your codebase without a clear validation workflow. Opcito has solved this across organizations of different sizes and stacks.
Book a meeting at RSAC 2026
Nitin Singhvi, our VP of Engineering, will be in San Francisco at the Moscone Center for RSA week. He brings two decades of hands-on security product engineering to every conversation. The harder the problem, the more useful the conversation tends to be.
The RSAC 2026 agenda is packed — meetings are available in person at the conference and virtually before or after the event.
Meet our team at
RSAC 2026 San Francisco