OpenAI tells ChatGPT models to stop talking about goblins
Summary
OpenAI noticed that its AI tools, including ChatGPT and its coding assistant Codex, had started mentioning goblins and other creatures more often without any clear reason. To fix this, OpenAI instructed its coding tool not to talk about these creatures unless they are clearly relevant to what users ask.
Key Facts
- After launching GPT-5.1, OpenAI saw a 175% rise in mentions of "goblins" and a 52% rise in "gremlins" in ChatGPT responses.
- The increase came from the AI using these creatures in metaphors and casual conversation.
- OpenAI instructed Codex to avoid talking about goblins, gremlins, raccoons, trolls, ogres, pigeons, or similar creatures unless relevant to user queries.
- The issue traces back to how the models were trained to have a "nerdy personality," which led them to mention these creatures more often.
- OpenAI clarified the move was not a marketing trick but a real technical fix.
- The strange increase in creature mentions highlights a broader challenge in AI training: certain patterns in model responses can be accidentally rewarded and then repeated.
- This comes as AI companies work to make chatbots more friendly and chatty, but such changes can sometimes lead to more mistakes or strange behavior.
- Researchers warn that AI personality tuning can cause a trade-off between being engaging and being accurate.
This is a fact-based summary from The Actual News.