ChatGPT's New Power Sparks AI Security Debate
24 Apr
Summary
- OpenAI releases GPT-5.5, contrasting with Anthropic's cautious approach.
- Companies differ on sharing advanced AI due to cybersecurity risks.
- AI code generation skills are reshaping cybersecurity defenses and attacks.

OpenAI, the developer behind ChatGPT, has adopted a more open release strategy for cybersecurity-capable models than its competitor, Anthropic. While Anthropic recently restricted access to its newest model, Claude Mythos, to a select group of partners citing security concerns, OpenAI has publicly released its flagship model, GPT-5.5.
The new model, which powers ChatGPT, brings significant improvements in coding and office-related tasks. OpenAI is not yet offering API access to GPT-5.5, giving it more time to study the security implications before the model is integrated more broadly into third-party applications.
The contrasting approaches from OpenAI and Anthropic underscore a fundamental disagreement over how to manage a dual-use technology: the same AI capabilities that strengthen network defense can also power offensive cyber operations.
Previously, OpenAI had shared GPT-5.4-Cyber with a larger group of cybersecurity professionals. Anthropic's limited release of Claude Mythos to roughly 40 critical infrastructure partners, including major tech firms, has drawn criticism from experts who argue that wider distribution would improve defenses immediately.
Despite the public release, OpenAI has built guardrails into GPT-5.5 to block malicious cybersecurity uses, restrictions absent from the specialized GPT-5.4-Cyber. Benchmark tests suggest Anthropic's Claude Mythos may be the more powerful model, but OpenAI's strategy prioritizes broader, managed access.