UK Regulators Eye Anthropic's AI for Security Flaws
12 Apr
Summary
- UK regulators are discussing Anthropic's new AI model with banks.
- The AI model, Claude Mythos Preview, can find critical system vulnerabilities.
- The US Treasury Secretary has also met with Wall Street banks about the AI's risks.

UK financial authorities are urgently engaging with the National Cyber Security Centre and major British banks to evaluate the cybersecurity risks posed by Anthropic's latest AI model. Officials from the Bank of England, the Financial Conduct Authority, and HM Treasury are working together to identify potential vulnerabilities in critical IT systems. These concerns are heightened by Anthropic's claim that its new model, Claude Mythos Preview, has already uncovered thousands of severe vulnerabilities across major operating systems and web browsers, some of which had gone undetected for decades.
The discussions in the UK mirror recent engagements in the United States, where Treasury Secretary Scott Bessent convened Wall Street bank leaders to address the model's sophisticated capability for detecting exploitable cybersecurity weaknesses. Anthropic itself has warned that such advanced capabilities could soon spread beyond actors committed to safe deployment, potentially leading to severe consequences for economies, public safety, and national security. These ramifications are now on the agenda of the UK's Cross Market Operational Resilience Group, a forum in which regulators and financial services firms address threats to the sector.