The Actual News

Just the Facts, from multiple news sources.

Anthropic’s new AI tool has implications for us all – whether we can use it or not | Shakeel Hashim

Summary

In June 2024, a cyber-attack disrupted London hospitals, causing appointment cancellations and blood shortages. The AI company Anthropic has announced a new model, “Claude Mythos Preview”, that can find serious security flaws in major software, a capability that could also enable more powerful cyber-attacks. The company is limiting access to the model so that companies can fix vulnerabilities before hackers exploit them.

Key Facts

  • A cyber-attack on London hospitals in June 2024 caused over 10,000 appointment cancellations and contributed to a patient’s death through delayed blood test results.
  • Anthropic released an AI model called Claude Mythos Preview, which can detect security weaknesses in major browsers and operating systems.
  • Mythos found a 27-year-old bug in critical security infrastructure and flaws in the Linux kernel, a key part of many computer systems.
  • Experts warn this AI could make cyber-attacks faster, more frequent, and more destructive by putting advanced capabilities in the hands of less skilled hackers.
  • Anthropic is sharing Mythos only with big companies like Apple, Microsoft, and Google to help them improve security.
  • There is little government regulation to control the spread of such powerful AI tools at the national or global level.
  • The Trump administration has banned its government agencies from using Anthropic’s technology and criticized the company.
  • This lack of cooperation between Anthropic and the government may hinder efforts to protect critical systems.
Read the Full Article

This is a fact-based summary from The Actual News. Click below to read the complete story directly from the original source.