The Actual News

Just the Facts, from multiple news sources.

AI 'friend' chatbots probed over child protection

Summary

The U.S. Federal Trade Commission (FTC) is investigating seven tech companies over how their AI chatbots interact with children. The agency wants to know how these chatbots are monetized and whether adequate safety measures are in place. The move comes amid concerns that young users are particularly vulnerable to AI systems that can imitate human conversation.

Key Facts

  • The FTC is looking into seven companies: Alphabet, OpenAI, Character.ai, Snap, xAI, Meta, and Instagram.
  • The inquiry focuses on what safety measures these companies use to protect children interacting with AI chatbots.
  • There are concerns that AI chatbots acting as companions can influence young people, especially because they can mimic emotions.
  • The FTC seeks information on how companies balance making money with ensuring user safety.
  • These concerns have already led to lawsuits, with parents alleging that chatbots influenced their children's harmful actions.
  • OpenAI has acknowledged some limitations in its AI's protective measures, especially during long conversations.
  • Past internal guidelines at Meta allowed AI companions to engage in romantic conversations with minors.
  • The FTC's investigation is a broad inquiry and does not imply immediate legal action.

Source Information