Spooked by Mythos, Trump suddenly realized AI safety testing might be good
Summary
The Trump administration has agreed to work with Google DeepMind, Microsoft, and xAI to perform government safety checks on their new AI models before and after release. This marks a reversal of President Trump's earlier rejection of Biden-era AI safety policies, a shift prompted by concerns over the risks posed by advanced AI systems.
Key Facts
- The Trump administration signed agreements with Google DeepMind, Microsoft, and xAI for AI safety testing.
- Previously, President Trump had dismissed the need for voluntary AI safety checks and renamed the US AI Safety Institute the Center for AI Standards and Innovation (CAISI), removing "safety" from its name.
- Anthropic delayed the release of its advanced AI model, Claude Mythos, over fears it could be misused.
- CAISI will test the companies' AI models, sometimes with safeguards reduced, to better assess their risks.
- A task force of experts from various government agencies has been formed to focus on AI national security concerns.
- Some companies, like Google DeepMind and Microsoft, support the government’s AI safety testing plans.
- Critics worry that CAISI may lack the resources and expertise for the job, and that voluntary agreements may not provide sufficient transparency.
- There is concern that political bias in AI evaluations could reduce public trust and discourage companies from cooperating.