AI Finds 500+ Hidden Code Vulnerabilities
23 Feb
Summary
- Advanced AI model discovered over 500 high-severity code vulnerabilities.
- Flaws persisted for decades despite expert review and extensive testing.
- New AI-powered tool rapidly identifies complex logic and access control issues.

Anthropic's most advanced AI model, Claude Opus 4.6, has identified over 500 high-severity security vulnerabilities in widely used open-source codebases. These flaws had persisted for decades, evading extensive expert review and fuzzing. The new Claude Code Security tool, launched as a limited research preview on February 20, 2026, uses reasoning-based analysis to detect issues missed by traditional pattern-matching scanners such as CodeQL. The approach mimics how human security researchers work: rather than matching known bug signatures, the model traces data flow and reasons about business logic to uncover complex vulnerabilities.
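To illustrate the distinction, here is a toy sketch (the function and scenario are hypothetical, not taken from any audited codebase) of an access-control flaw that a pattern-matching scanner has nothing to flag on: no dangerous API is called, and the bug lives entirely in the logic a reasoning-based reviewer would have to infer from intent.

```python
def can_withdraw(requester_id: str, account_owner: str,
                 amount: int, balance: int) -> bool:
    """Hypothetical authorization check (illustrative only).

    Intent: the requester must own the account AND the balance must
    cover the amount. Bug: `or` instead of `and` lets ANY requester
    withdraw whenever the balance is sufficient.
    """
    if requester_id == account_owner or amount <= balance:
        return True
    return False

# No unsafe sink, no known bug signature -- yet an attacker who does
# not own the account is authorized as long as funds are available:
assert can_withdraw("attacker", "alice", 10, 100) is True
```

A signature-based rule set cannot flag this, because every individual operation is benign; spotting it requires a hypothesis about what the check was *meant* to enforce.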
Anthropic demonstrated the capability by analyzing Ghostscript, OpenSC, and CGIF, uncovering buffer overflows, algorithmic edge cases, and flaws surfaced through commit-history analysis. The AI also generated working proofs-of-concept for vulnerabilities that testing with 100% branch coverage would not trigger. These findings mark a significant shift in code security, from pattern matching to hypothesis generation, and demand correspondingly robust human and technical controls.
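The branch-coverage claim is worth unpacking: branch coverage guarantees every conditional path runs, not that any path runs with a value that triggers the bug. A minimal sketch (hypothetical code, emulating C's unsigned 32-bit wrap-around in Python; not from the audited projects):

```python
def record_buffer_size(count: int) -> int:
    """Hypothetical C-style size computation: `count` records of 8 bytes,
    with unsigned 32-bit wrap-around emulated via the mask."""
    return (count * 8) & 0xFFFFFFFF  # wraps to a tiny value for count >= 2**29

def accept_header(count: int) -> bool:
    """Both branches below are covered by trivially small inputs,
    so a test suite can reach 100% branch coverage without ever
    exercising the overflow."""
    if count <= 0:                        # branch A: covered by count = 0
        return False
    size = record_buffer_size(count)      # branch B: covered by count = 1
    # Bug: for count >= 2**29 the computed size wraps, so a later
    # allocation of `size` bytes is far too small for `count` records.
    return size > 0

assert accept_header(0) is False   # covers branch A
assert accept_header(1) is True    # covers branch B -- 100% branch coverage
# The value-dependent flaw appears only at an extreme input:
assert record_buffer_size(2**29) == 0  # allocation size collapses to zero
```

Finding this class of bug requires hypothesizing which input *values* break an invariant, which is exactly the hypothesis-generation shift the article describes.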
The dual-use nature of this technology presents a challenge: the same reasoning that finds vulnerabilities can also aid attackers. Anthropic is deliberately limiting this release to enterprise and team customers, with open-source maintainers eligible for expedited access. Safeguards include multi-stage self-verification, human approval of patches, and internal detection probes. The company frames the goal as tipping the scales toward defenders while acknowledging the potential for misuse, underscoring the need for formal governance frameworks around such powerful AI tools.
