Summary
A study found that AI assistants such as ChatGPT often give incorrect responses to questions about news events. About 45% of their answers contained at least one significant issue, most often in sourcing and accuracy.
Key Facts
- A study by the European Broadcasting Union and the BBC tested the AI assistants ChatGPT, Google’s Gemini, Microsoft’s Copilot, and Perplexity.
- The study assessed more than 2,700 responses from these assistants.
- 45% of the responses contained at least one significant issue.
- Sourcing errors were the most common, appearing in 31% of responses.
- Accuracy issues were found in 20% of the responses.
- Gemini performed worst, with significant issues in 76% of its answers, driven largely by sourcing problems.
- Cited examples of errors included inaccurate claims about Czechia and false information about Pope Francis.
- AI companies such as OpenAI and Google did not comment on the study.