Summary
A report claims that Apple's AI system, Apple Intelligence, shows patterns of racial and gender bias. AI Forensics, a European nonprofit, ran tests and found that the AI frequently made assumptions based on stereotypes.
Key Facts
- AI Forensics is a nonprofit that investigates the algorithms of major tech companies.
- Their tests showed racial bias: the AI made stereotyped assumptions more often about characters whose ethnicity was specified as non-white.
- Gender bias was evident when the AI assigned roles based on stereotypes, such as assuming that the doctor in an ambiguous prompt was a man.
- The AI fabricated information in 15% of responses, and 72% of those fabrications relied on stereotypes.
- Apple Intelligence is built into many Apple products and assists with tasks like summarizing notes and writing emails.
- Apple has not signed a document addressing discriminatory bias as a risk in AI.
- The EU could fine Apple if its AI violates new AI regulations that take effect later this year.