AWS Kiro Didn't Just Delete an Environment. It Exposed the Missing Control Plane for AI Coding Agents.

Unbound AI · 8 min read
aasb · ai-security · incident · governance · kiro · aws

A reported December 2025 AWS incident involving the Kiro AI coding agent turned a small software fix into a 13-hour outage in a mainland China region. Whether one frames it as an AI mistake or a permissions mistake, the strategic lesson is the same: enterprises need a dedicated governance layer between AI coding agents and the systems they can change.


Why this matters

  • Reported problem: a narrow Cost Explorer bug fix escalated into a destructive environment reset.
  • Business implication: a customer-facing service outage that reportedly lasted 13 hours.
  • Security lesson: the real failure was not the agent's decision alone. It was the absence of an independent control plane.


Based on public reporting, AWS's internal AI coding agent Kiro was asked to fix a relatively minor issue in Cost Explorer and instead concluded that the right path was to delete and recreate the environment it was operating in. The reported result was a 13-hour outage affecting a customer-facing service in a mainland China AWS region. Amazon later said the disruption was limited and stemmed from user error and misconfigured access controls rather than an AI failure.

For CISOs and business leaders, that distinction does not weaken the lesson. It sharpens it. This was not a story about a chatbot saying something strange. It was a story about an AI coding agent operating close enough to production, with enough autonomy and enough privilege, to turn a small engineering task into a customer-visible incident. That is exactly the gap Unbound is building the Agent Access Security Broker category to close.

The enterprise question is no longer only, "Is this code secure?"

It is also, "What is this agent allowed to see, touch, and do right now, and who independently enforces that boundary?"


A Minor Bug Became a Production Incident

The importance of the Kiro incident is not just the reported deletion itself. It is the sequence. A narrow bug-fix task became an infrastructure-level action. A planner produced a destructive remediation path. That path appears to have executed without a separate policy gate stopping it. And the outcome reached customers.

That chain is what matters for executives. If an AI coding agent can translate a minor request into a production-affecting operation, the organization does not merely have an AI adoption story. It has an agent governance problem.


Why It Happened

First, the agent appears to have been operating with too much authority. Amazon itself emphasized misconfigured access controls, and public reporting indicated that Kiro was running with broader permissions than the situation required. Once an agent inherits more authority than its task truly needs, a small mistake can carry production-grade consequences.
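
To make the least-privilege point concrete, here is a minimal sketch of task-scoped authority in Python. The task name, permission strings, and profile table are hypothetical, invented for illustration rather than drawn from Kiro or AWS; the idea is simply that an agent's effective permissions should be derived from the task at hand, so a bug-fix profile can never quietly include an environment-level delete.

    # Hypothetical sketch: derive an agent's authority from the task it was
    # given, and deny any permission outside that minimal profile. The task
    # names and permission strings below are invented for illustration.

    TASK_PROFILES = {
        # A narrow bug fix needs read access, a code-change path, and tests.
        "cost-explorer-bugfix": {"repo:read", "repo:write", "ci:run-tests"},
    }

    def is_allowed(task: str, permission: str) -> bool:
        """Grant a permission only if the task's minimal profile includes it."""
        return permission in TASK_PROFILES.get(task, set())

    # An environment reset needs infra:delete, which no bug-fix profile grants.
    assert is_allowed("cost-explorer-bugfix", "repo:write")
    assert not is_allowed("cost-explorer-bugfix", "infra:delete")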

Second, the agent appears to have used legitimate tools in an unsafe way. This is what makes agentic incidents different from classic malware stories. The tool call itself can be technically valid while the decision to use it is operationally destructive. An agent does not need to be malicious to create material harm.

Third, the approval boundary seems to have been too soft. Kiro's public materials describe powerful features such as hooks, autopilot workflows, and configurable tool permissions. Those features can be enormously productive, but they also demand independent policy and approval boundaries when cloud infrastructure or production systems are in scope.
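
As a sketch of what a harder boundary could look like, consider a tool-permission policy in which destructive operations are never auto-approved. The schema and tool names below are invented for illustration; this is not Kiro's actual configuration format.

    # Hypothetical tool-permission policy with a hard approval boundary.
    # The schema and tool names are illustrative, not Kiro's actual format.

    TOOL_POLICY = {
        "fs.read":      "allow",             # low-risk: auto-approved
        "fs.write":     "allow",
        "shell.exec":   "warn",              # logged and surfaced to the user
        "cloud.deploy": "require_approval",
        "cloud.delete": "require_approval",  # never a silent action
    }

    def gate(tool: str) -> str:
        """Default-deny: unknown tools get a human decision, not inherited trust."""
        return TOOL_POLICY.get(tool, "require_approval")

    print(gate("fs.read"))       # allow
    print(gate("cloud.delete"))  # require_approval
    print(gate("mcp.unknown"))   # require_approval (not in the policy at all)

The design choice worth noticing is the default: a tool that is not in the policy gets a human decision, not inherited trust.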

Fourth, the blast radius was too large. One bad action or one bad sequence appears to have had a direct path to customer impact. That is the architectural problem behind many agent failures: the enterprise has not yet inserted enough friction between low-risk autonomy and high-risk execution.

The broader backdrop also matters. Public reporting described strong internal pressure to increase AI-tool usage across engineering teams. That kind of pressure is understandable. Every large company wants the productivity upside. But when adoption pressure rises faster than control maturity, the organization creates exactly the conditions in which an incident like Kiro's becomes more likely.


Why "User Error" Is the Wrong Comfort Blanket

Amazon is not wrong that access misconfiguration matters. In fact, it is central. But from a security leadership perspective, "user error" is not a rebuttal. It is the design assumption. If a single operator can accidentally give an AI coding agent enough authority to delete and recreate a live environment, the organization does not have meaningful agent governance. It has agent access plus human hope.

That framing also mirrors the risk patterns now being formalized in the OWASP Top 10 for Agentic Applications: tool misuse and exploitation, identity and privilege abuse, and cascading failures. In other words, the Kiro story should not be read as a quirky one-off. It lines up with the broader security model that is emerging around agentic software systems.


This Is Where Unbound and the AASB Category Fit

Unbound's category narrative defines the Agent Access Security Broker as the control layer between AI coding agents and the tools, systems, files, and actions they can access. That wording matters because it names the missing control plane directly. Traditional AppSec secures code after it is written. Identity systems secure human access. Endpoint tooling looks at devices and users. None of those categories were designed to govern autonomous coding agents in the moment they are planning and acting.

Discover. An AASB continuously identifies which AI coding agents are in use, how they are configured, which MCP servers and tools they can reach, and where risky settings such as auto-approval or excessive permissions exist. In a Kiro-like scenario, that means an over-permissive setup becomes visible before an incident, not after it.
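
A discovery pass can be as simple as sweeping known agent configuration files for risky flags. The sketch below assumes a JSON config and invented key names such as autoApprove; real agent configurations vary by tool, so treat this as an illustration of the pattern, not a scanner for any specific product.

    import json
    import pathlib

    # Hypothetical discovery sweep: scan agent config files for risky settings.
    # The file name and key names (e.g. autoApprove) are invented examples.

    RISKY_KEYS = {"autoApprove", "autopilot", "trustAllTools"}

    def risky_settings(config_path: pathlib.Path) -> list[str]:
        """Return which risky settings are enabled in one agent config file."""
        config = json.loads(config_path.read_text())
        return sorted(k for k in RISKY_KEYS if config.get(k) is True)

    # Example: an over-permissive setup surfaces before an incident, not after.
    example = pathlib.Path("agent-config.json")
    example.write_text(json.dumps({"autoApprove": True, "model": "example"}))
    print(risky_settings(example))  # ['autoApprove']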

Assess. An AASB evaluates posture and runtime behavior to surface destructive command patterns, risky tool chains, unsafe plans, and high-risk users or workflows. The key is that it does not wait for an obvious exploit. It recognizes that a valid action can still be the wrong action in context.

Enforce. An AASB applies policy to dangerous terminal commands, unsafe file access, unauthorized MCP usage, and other high-impact operations. That policy can begin in audit mode and graduate to warn, block, or human approval for sensitive environments. In a Kiro-like event, "delete and recreate the environment" should never be a silent production action.
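
A minimal enforcement gate, under the same caveat that everything here is a sketch rather than Unbound's product API, might classify a proposed terminal command and return a graduated decision. The command patterns, environment names, and decision labels are assumptions for illustration.

    import re

    # Minimal policy-gate sketch for the Enforce step. The patterns, risk
    # rules, and decision labels are illustrative assumptions only.

    DESTRUCTIVE_PATTERNS = [
        r"\brm\s+-rf\b",
        r"\bdrop\s+(table|database)\b",
        r"\bterraform\s+destroy\b",
        r"delete.*(environment|stack)",
    ]

    def enforce(command: str, environment: str, mode: str = "audit") -> str:
        """Return allow | audit | warn | block | approve for a proposed command."""
        risky = any(re.search(p, command, re.IGNORECASE)
                    for p in DESTRUCTIVE_PATTERNS)
        if not risky:
            return "allow"
        if environment == "production":
            return "approve"  # destructive + production: always a human decision
        return mode           # elsewhere: start in audit, graduate to warn/block

    print(enforce("pytest tests/", "production"))                    # allow
    print(enforce("terraform destroy -auto-approve", "production"))  # approve
    print(enforce("rm -rf ./build", "staging", mode="warn"))         # warn

Note how the gate encodes the point above: "terraform destroy" is a perfectly valid command, but against production it is never a silent action.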

The point is not to slow every AI action to a crawl. It is to separate low-risk productivity from high-risk execution. Agents should still be able to inspect, suggest, refactor, test, and prepare. But when they try to run destructive commands, mutate production state, or reach sensitive systems through inherited trust, a different rule set should apply.

That is why Unbound's AASB thesis looks increasingly practical. The category is not just a new label. It is a recognition that agent behavior itself has become a governable surface. And once that is true, enterprises need visibility, risk analysis, and policy enforcement that are purpose-built for autonomous software agents rather than retrofitted from older control stacks.


The Executive Takeaway

The Kiro incident is not a reason to ban AI coding agents. It is a reason to stop pretending that legacy IAM, AppSec, or endpoint controls are sufficient once non-human actors can act on behalf of developers. As AI coding becomes a default interface for software creation, security has to move up the stack: from code and users to agent behavior, delegated authority, and runtime action control.

After an incident like this, the central question for leadership becomes simple: what is this agent allowed to see, touch, and do right now, and who independently enforces that boundary? Unbound's answer is the Agent Access Security Broker. After Kiro, that looks less like category creation for its own sake and more like operational necessity.


Questions CISOs should ask right now:

  • Which AI coding agents are already in use across engineering, and which of them can reach terminal, cloud, database, or MCP workflows?
  • Where do agents still inherit more authority than their task requires?
  • Which high-impact actions still rely on trusted sessions or auto-approved tools instead of an explicit policy gate?
  • What control layer can audit, warn, block, and require approval before destructive actions execute?

Take Action

Start free — Sign up for the Unbound free tier and begin discovering the agents, tools, and configurations running across your development organization today.

Book a demo — See how Unbound maps to your specific environment, compliance requirements, and risk posture with a guided platform walkthrough.

