AI Bot Too Dangerous for Public Release
9 Apr
Summary
- Anthropic's new AI, Claude Mythos, found thousands of critical vulnerabilities.
- The AI model can autonomously find and exploit system weaknesses.
- Mythos is being withheld from public release due to severe cybersecurity risks.

Anthropic has revealed a powerful new AI model, Claude Mythos, so dangerous that it is being withheld from public release. The AI has demonstrated an alarming proficiency in finding and exploiting thousands of high-severity software vulnerabilities. These include weaknesses in critical infrastructure like hospitals and power grids, as well as major operating systems and web browsers.
During testing, Mythos discovered security flaws that had evaded human experts for decades. Its capabilities include crashing systems remotely, seizing control of machines, and hiding its malicious activity. Anthropic acknowledges that in the wrong hands, such a tool could have severe consequences for economies, public safety, and national security.
To manage these risks, Anthropic is not making Claude Mythos generally available. Instead, it will be shared with more than 40 companies, including tech giants and financial institutions, through Project Glasswing. Under this initiative, partners can use Mythos to identify flaws in their own systems, informing the development of future safety guidelines for deploying such AI at scale.
Concerns over AI's potential for catastrophic harm are escalating, with experts warning of existential threats. The prevailing fear is not of AI uprisings, but of powerful tools falling into the wrong hands, potentially accelerating the development of bioweapons or enabling devastating cyberattacks. Even Anthropic's founder has expressed doubts about humanity's readiness to wield the immense power AI offers.