Anthropic AI Agents Tackle Code Bugs Humans Miss
10 Mar
Summary
- Multi-agent AI system reviews code for bugs.
- Pricing is $15-$25 per review, aiming for quality over speed.
- Anthropic is suing the Trump administration over a Pentagon blacklisting.

Anthropic has introduced Code Review, an advanced multi-agent AI system integrated into Claude Code. This new feature dispatches multiple AI agents to meticulously examine each pull request for bugs, aiming to catch issues that human reviewers might miss. The system is available in research preview for Team and Enterprise customers.
Code Review works by having multiple AI agents search for bugs independently and then cross-verify one another's findings to improve accuracy. The system dynamically scales its analysis to the pull request's complexity, deploying more agents for larger or more intricate changes. Anthropic positions the tool as insurance against costly production bugs, with pricing set at $15 to $25 per review.
The launch coincides with other major developments for Anthropic. The company has filed a lawsuit challenging a Pentagon blacklisting that designated it a national security risk. Separately, Microsoft announced a partnership embedding Claude into its Microsoft 365 Copilot platform, expanding Anthropic's commercial reach.
Anthropic's internal testing indicates high accuracy, with engineers marking fewer than 1% of the system's findings as incorrect. The tool is intended to augment rather than replace human reviewers, freeing them to focus on higher-level architectural decisions. The company also reported a 200% increase in AI-assisted code output per engineer over the past year, underscoring the need for more scalable review tools.