Summary
The Pentagon is considering labeling the AI company Anthropic a "supply chain risk." As a first step, it has asked defense contractors such as Boeing and Lockheed Martin to assess how dependent they are on Anthropic's AI model, Claude. The move stems from a dispute over restrictions on using the model for military purposes.
Key Facts
- The Pentagon has contacted Boeing and Lockheed Martin to evaluate their dependence on Anthropic's AI model, Claude.
- Anthropic places safeguards on its AI to prevent its use for mass surveillance and autonomous weapons.
- Defense Secretary Pete Hegseth has given Anthropic a deadline to meet the Pentagon's conditions.
- If Anthropic does not comply, the Pentagon may use the Defense Production Act to modify Claude's usage or label Anthropic as a supply chain risk.
- Claude is currently the only AI model deployed on the military's classified systems.
- The supply chain risk label is typically reserved for companies based in adversarial countries, making its potential application to a U.S. firm unusual.
- Anthropic's CEO, Dario Amodei, has previously raised concerns about AI use in surveillance and autonomous weapons.
- Anthropic has been growing, securing new funding and expanding into major business operations.