AI Giants Join US Security Deep Dive
6 May
Summary
- US enlists Google DeepMind, xAI, and Microsoft for AI risk assessments.
- Scientists focus on demonstrable risks like cyberattacks and weapon development.
- Previous reviews uncovered ways to bypass AI safety mechanisms.

The Trump administration has broadened its initiative granting U.S. government scientists access to unreleased artificial intelligence models for risk assessment. The program now includes major players such as Google DeepMind, xAI, and Microsoft; OpenAI and Anthropic were already collaborating with the U.S. Center for AI Standards and Innovation (CAISI).
U.S. scientists are prioritizing "demonstrable risks." These include the potential for advanced AI models to be leveraged for cyberattacks against critical infrastructure or used by adversaries to develop chemical or biological weapons. They also seek to prevent the corruption of data essential for training American AI models.
Companies involved are providing access to proprietary models and data for rigorous testing. OpenAI is submitting GPT-5.5-Cyber, a cybersecurity-focused variant of its models, for evaluation, while Microsoft is developing shared datasets and workflows for the assessments. Google DeepMind will likewise supply access to its advanced models and data for review.
Past reviews have already yielded critical insights. For instance, Anthropic identified and patched vulnerabilities that allowed safety mechanisms to be circumvented. Similarly, OpenAI discovered exploits in its ChatGPT Agent that could have enabled sophisticated actors to bypass security measures and impersonate users.
This expanded effort builds on earlier agreements under which companies including Meta, Amazon, and Inflection AI committed to independent expert reviews of their models for biosecurity and cybersecurity risks. The CAISI team, which operated under a different name during former President Joe Biden's tenure, has also released voluntary guidelines and is now developing guidance to help critical infrastructure providers test their AI systems.