AI Cybersecurity: GPT-5.5 Matches Mythos, Challenges Marketing
1 May
Summary
- GPT-5.5's overall performance on cyber evaluations is comparable to Mythos Preview's.
- New research shows GPT-5.5 marginally surpassed Mythos Preview on expert cyber tasks.
- OpenAI CEO criticizes 'fear-based marketing' regarding AI cybersecurity risks.

Recent findings from the UK's AI Security Institute (AISI) reveal that OpenAI's GPT-5.5 has achieved performance levels comparable to Anthropic's Mythos Preview on cybersecurity evaluations.
AISI tested both models on 95 Capture the Flag challenges. GPT-5.5 achieved a 71.4% success rate on expert tasks, marginally exceeding Mythos Preview's 68.6%. The model also demonstrated significant progress on a simulated 32-step data-extraction attack.
Despite these capabilities, GPT-5.5, like previous models, still fails the "Cooling Tower" simulation, which models the disruption of power-plant control software. This suggests that general AI advances, rather than model-specific breakthroughs, are driving these cybersecurity gains.
OpenAI CEO Sam Altman has publicly critiqued what he terms "fear-based marketing" in the AI industry. He suggests that high-profile warnings about certain AI models' dangers may be exaggerated for commercial gain, while acknowledging that genuinely dangerous models will inevitably be released.
OpenAI has implemented programs such as Trusted Access for Cyber to let security researchers study frontier models. Limited releases of specialized variants, including GPT-5.4-Cyber and the upcoming GPT-5.5-Cyber, are being managed through this trusted-access list for defensive purposes.