Microsoft, Google, xAI give US access to AI models for security testing
Summary
Microsoft, Google, and xAI will let the US government test their new artificial intelligence (AI) models for national security risks. This allows officials to study the models' behavior and identify potential threats before the tools are widely deployed.
Key Facts
- The Center for AI Standards and Innovation (CAISI) at the Department of Commerce announced the agreement.
- The US government can evaluate AI models before they are launched publicly.
- The partnership stems from a pledge President Donald Trump’s administration made in July.
- Microsoft will work with government scientists to test AI for unexpected behaviors using shared data and testing methods.
- The effort aims to identify risks like cyberattacks or military misuse from advanced AI systems such as Anthropic’s Mythos.
- CAISI has already conducted over 40 tests on cutting-edge AI models, sometimes with safety limits disabled to find vulnerabilities.
- The Pentagon has also reached agreements with seven tech firms, including Microsoft and Google, to deploy AI systems on classified defense networks.
- Anthropic is not part of the Pentagon agreement due to a legal dispute with the Trump administration over AI ethics and safety.
This is a fact-based summary from The Actual News.