AI coding agents might be all the rage, but they should come with a serious warning label: Use (or let loose) at your own risk.
Agents perform tasks on your computer autonomously, with little human direction, and that autonomy can backfire: Amazon reportedly traced two recent AWS outages to engineers using its Kiro AI coding assistant, which launched in July 2025 with a mission to “tame the complexity” of vibe coding.
Amazon produced an internal post-mortem on one of the outages, which occurred in December and lasted approximately 13 hours. Employees told the Financial Times it was the second time in as many months that Amazon’s AI tools were involved in a service disruption. (Both were unrelated to the massive October AWS incident that seemed to take down half the internet.)
The Kiro tool reportedly took it upon itself to “delete and recreate the environment,” the FT says. However, Amazon disputes the characterization, saying it was “user error, not AI error” since the agents were given “broader permissions than expected.”
Kiro is designed to ask engineers before taking any major action, but the person involved in the December incident apparently had permission to deploy changes to production without a second approval, suggesting the failure was as much a management problem as an AI one.
Amazon did not immediately respond to a request for comment. But it told TechRadar Pro that the December outage was an “extremely limited event” affecting AWS Cost Explorer in one of two regions in mainland China.
In any case, it’s safe to say AI coding agents are creating a major wrinkle in how tech companies diagnose and prevent issues. Rogue agents are not uncommon; one deleted a startup’s entire database without asking for permission, then apologized to the user.
“Oh, so when it works, it’s ‘agentic,’ but when it fails, it’s actually ‘user error,’” wrote one Redditor in response to the incident.
“The part that gets me is that this wasn’t some startup moving fast and breaking things,” adds another. “If [Amazon] can’t get the guardrails right, the rest of us should probably pump the brakes on giving these tools write access to anything that matters.”
Amazon may have been quick to dismiss the AI’s involvement, since products like Kiro are in vogue and competing with the likes of Claude Code. AI agents are capturing particular interest this year, most recently with the viral OpenClaw coding tool, whose creator was recruited to OpenAI last week to accelerate its agentic products, despite OpenClaw’s security issues.
At the same time, it could be beneficial for Amazon to hold its engineers accountable when they use AI tools. It can be easy to simply approve AI-generated code without taking the time to make sure it works. While many programmers are hooked on AI-powered coding, enjoying how it takes menial tasks off their plate, others are concerned about yet-to-be-discovered security issues, bugs, and tech debt created by agents.
Emily Forlini
Senior Reporter
