US announces deals with tech firms for national security review of AI models before release
Summary
The US government has reached agreements with Google DeepMind, Microsoft, and xAI to review their new AI models before they are released to the public. The reviews aim to assess the capabilities of these AI systems and protect national security by identifying risks related to cybersecurity and other threats.
Key Facts
- The Center for AI Standards and Innovation (CAISI), part of the US Department of Commerce, leads the review process.
- The deals give the government early access to AI models so it can check for potential security risks.
- CAISI has already reviewed over 40 AI models, including some that are not publicly released.
- Earlier, OpenAI and Anthropic made similar agreements with the US government.
- Risks include cybersecurity breaches, biosecurity issues, and potential misuse for chemical weapons.
- Some powerful new AI models, such as Anthropic’s Mythos, have had limited rollouts due to safety concerns.
- Microsoft also made a similar agreement with the UK government’s AI Security Institute.
- The goal is to ensure AI models are tested for safety and security before wide release.
This is a fact-based summary from The Actual News.