Exclusive: Future OpenAI models likely to pose "high" cybersecurity risk, OpenAI says
Summary
OpenAI warns that its future AI models will likely pose a high cybersecurity risk due to their increasing capabilities. These models could help more people carry out cyberattacks, and OpenAI is taking steps to prepare for and mitigate these risks.
Key Facts
- OpenAI says its upcoming AI models could pose a "high" cybersecurity risk.
- Recent models have shown significant improvements in cybersecurity capabilities.
- In a security test, GPT-5 scored 27% while GPT-5.1-Codex-Max scored 76%, showing rapid improvement.
- "High" risk is the second-highest risk level, below "critical."
- OpenAI is preparing for new models that could reach "high" cybersecurity levels.
- OpenAI is working with other labs on cybersecurity threats through the Frontier Model Forum.
- The company plans to establish a Frontier Risk Council for cybersecurity collaboration.
- OpenAI is testing a tool called Aardvark to identify security gaps in products.
This is a fact-based summary from The Actual News.