The Actual News

Just the Facts, from multiple news sources.

Anthropic: No "kill switch" for AI in classified settings

Summary

Anthropic says it cannot monitor, control, or shut down its AI models once the Pentagon begins using them. The Pentagon has labeled Anthropic a supply chain risk amid a dispute over the restrictions the company places on military uses of its AI.

Key Facts

  • Anthropic filed a document in federal court saying it has no way to see, control, or shut down its AI after deployment by the Pentagon.
  • The Pentagon called Anthropic a supply chain risk, concerned about the company's involvement in sensitive military uses.
  • Anthropic’s policies forbid using its AI, called Claude, for autonomous weapons or mass surveillance.
  • The Pentagon rejected these restrictions, leading to a legal dispute.
  • A D.C. appeals court denied Anthropic’s request to pause the supply chain risk designation, while a California judge granted a related request.
  • Because of the court decisions, Anthropic cannot get new Pentagon contracts but can work with other government agencies.
  • The Trump administration is trying to use Anthropic’s new AI model, Mythos, across federal agencies, raising cybersecurity concerns.
  • A court hearing on this issue is set for May 19.