The Actual News

Just the Facts, from multiple news sources.

The next phase of AI cybersecurity still needs humans

Summary

New AI models from Anthropic and OpenAI are showing strong abilities to find security bugs in software, but they still need skilled humans to check and manage their work. Companies such as Microsoft, Cisco, and Palo Alto Networks are using these AI tools to discover more vulnerabilities faster, but they emphasize that human experts remain critical for interpreting results and avoiding mistakes.

Key Facts

  • Anthropic’s Mythos Preview and OpenAI’s GPT-5.5-Cyber can find many software bugs across different operating systems.
  • Using these AI systems, Palo Alto Networks found 75 bugs, compared with the 5-10 bugs it usually finds per month.
  • Microsoft’s AI security system found 16 new problems in Windows networking and authentication.
  • Cisco released an open-source guide for using advanced AI models in cybersecurity.
  • AI models sometimes produce false positives: incorrect or exaggerated warnings about security issues.
  • Human security researchers help validate AI findings and reduce errors.
  • Some AI tools are better at finding bugs than at confirming whether those bugs can actually be exploited.
  • Experts say AI models are like a brain without a body; skilled humans are needed to control and guide the AI effectively.
This is a fact-based summary from The Actual News.