The Actual News

Just the Facts, from multiple news sources.

Anthropic's AI downgrade stings power users

Summary

Anthropic’s AI model Claude has shown a drop in performance, according to users who report less accurate and less detailed answers. Anthropic says it changed Claude’s default reasoning level but denies reducing quality to save computing power or to focus on its new model, Mythos.

Key Facts

  • Users on online forums complain that Claude handles complex tasks worse than before.
  • Some believe Claude was intentionally weakened (“nerfed”) to reduce costs or support testing Mythos.
  • Anthropic says users can adjust Claude’s effort level between a faster, less capable mode and a slower, more intelligent one.
  • An AI analyst reviewed Claude and agreed there were changes, but disputed the more extreme claims of secret downgrades.
  • People may be noticing flaws more because they got used to Claude’s earlier performance, a process called habituation.
  • Power users rely on stable AI quality for work like coding and research, so the issue matters to them.
  • Top AI companies like Anthropic often reserve the best performance for paying customers or experimental projects.
  • Anthropic has moved big customers to usage-based pricing, in which they pay according to how much they use the AI, linking quality to cost.

Source Information