Summary
A study by Oxford University examined how effectively large language models (LLMs), such as ChatGPT, give medical advice. The study found that these AI systems often provide mixed or incorrect information, which can be risky for people relying on them for health decisions. The researchers say these AI tools need more testing to ensure they are safe to use in healthcare.
Key Facts
- Oxford University researchers conducted the study with 1,300 participants.
- Participants were divided into two groups to compare AI advice with traditional sources.
- AI systems, like ChatGPT, often gave a mix of correct and incorrect medical advice.
- The study found that AI sometimes failed to understand user questions accurately.
- Researchers say that relying on AI for medical advice can be dangerous without professional guidance.
- Over a fifth of Americans have followed inaccurate AI medical advice.
- AI systems need rigorous testing, similar to clinical trials for new medications, before deployment in healthcare.
- AI tools could be manipulated to give false information, raising concerns about misinformation.