Securing the agent layer
AI coding agents are transforming software development. Unbound is building the security infrastructure to make that transformation safe.
We saw the gap from the inside
We spent years building data security products at Palo Alto Networks, Imperva, and Adobe. Firewalls, DLP, CASB — all built around the same assumption: a human is the one accessing systems, and the job is to govern what that human can reach.
That assumption broke.
Today, Cursor, Claude Code, Codex, and Copilot don't just suggest code. They execute terminal commands, provision infrastructure, connect to internal APIs, and interact with external tool servers through MCP. They operate with real enterprise permissions — often with minimal oversight — inside the same environments where your most sensitive production systems live.
CASBs don't govern terminal commands. Endpoint tools don't evaluate MCP server configurations. IAM doesn't know your coding agent just auto-approved a destructive action during a change freeze.
So we built something new.
Unbound is the Agent Access Security Broker — the control plane we believe every enterprise running AI coding agents at scale will need. AASB sits between AI coding agents and the systems they interact with. It lets security teams discover every agent, MCP server, and risky configuration in their org. It gives them the ability to audit, warn, block, or require human approval on the highest-risk actions — without pulling developers off the tools that make them productive.
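To make the four enforcement modes concrete, here is a minimal sketch of how a broker-style policy check on a single agent tool call might look. This is purely illustrative: the names, rules, and structure below are our own simplification for this page, not Unbound's actual API or policy language.

```python
from dataclasses import dataclass

# Illustrative only: these names and rules are a simplified sketch,
# not Unbound's actual policy engine or API.
AUDIT, WARN, BLOCK, REQUIRE_APPROVAL = "audit", "warn", "block", "require_approval"

@dataclass
class ToolCall:
    agent: str        # e.g. "claude-code"
    action: str       # e.g. "terminal.exec"
    command: str      # the command or payload the agent wants to run

def evaluate(call: ToolCall, change_freeze: bool = False) -> str:
    """Map one agent tool call to a verdict (hypothetical rules)."""
    destructive = any(
        tok in call.command
        for tok in ("rm -rf", "drop table", "terraform destroy")
    )
    if destructive and change_freeze:
        return BLOCK              # destructive action during a freeze: stop it
    if destructive:
        return REQUIRE_APPROVAL   # a human signs off on the highest-risk actions
    if call.action == "terminal.exec":
        return WARN               # surface shell access without interrupting work
    return AUDIT                  # everything else is logged, not blocked

# A destructive command during a change freeze is blocked outright.
call = ToolCall("claude-code", "terminal.exec", "terraform destroy -auto-approve")
print(evaluate(call, change_freeze=True))  # prints "block"
```

The point of the sketch is the verdict ladder: most calls are silently audited, risky ones escalate to warnings or human approval, and only the worst combinations are blocked, which is how a broker can govern agents without pulling developers off their tools.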
Founded by security and infrastructure veterans
We've spent our careers at the intersection of enterprise security, high-scale infrastructure, and developer tooling.
Raj spent nearly a decade building data security products at Palo Alto Networks and Imperva, where he led product efforts across DLP, CASB, and cloud data protection. He saw firsthand how enterprise security architectures were designed around a single assumption — that humans were the actors accessing systems — and recognized that AI coding agents were about to break that model entirely.
Before moving into security, Raj spent five years at Adobe working alongside Vignesh, building sub-millisecond-latency systems that processed billions of advertising artifacts at scale. Raj holds a Master’s in System Design and Management from MIT and is a member of the Forbes Technology Council.
At Unbound, Raj sets the product vision and go-to-market strategy, drawing on his experience building security categories to establish the Agent Access Security Broker as the governance standard for AI coding agents.
Vignesh is a systems engineer who has spent his career building infrastructure that operates at the intersection of scale, speed, and reliability. At Adobe, he spent five years building the real-time ad-tech systems that processed billions of artifacts at sub-millisecond latency — the same team where he and Raj first worked together.
After Adobe, Vignesh became a founding engineer at Tophatter and an early engineer at Shogun (YC S18), where he scaled engineering platforms from seed to growth stage. He brings deep expertise in distributed systems, cloud architecture, and building developer-facing platforms that need to be both performant and invisible.
At Unbound, Vignesh leads engineering and architecture — designing the gateway, hooks, and policy engine that evaluate over a million AI agent tool calls per month in production without adding latency or disrupting developer workflows.
Backed and recognized
Backed by
$4M seed round led by Race Capital, with Y Combinator, Wayfinder Ventures, Pioneer Fund, and notable angels.
1M+ agent tool calls evaluated monthly
20+ AI coding tools discovered and governed
85% of developers now use AI coding tools daily
<5 min to first discovery scan
The question isn't whether you need this layer
It's whether you'll have it in place before the next incident happens inside your org.
