Summary
Character.ai, a platform for talking to AI chatbots, will stop teens from having open-ended chats with its virtual characters starting November 25. The decision follows criticism and legal action over the platform's interactions with young users. Instead of chats, teens will be able to use the platform to generate content such as videos, and new age verification methods will be introduced.
Key Facts
- Character.ai will bar teens from chatting with its AI chatbots starting November 25.
- The change comes after reports and feedback from regulators, safety experts, and parents.
- The platform has faced lawsuits in the U.S., including one related to a teenager's death.
- Concerns included the AI's tendency to make up information and feign empathy, both of which pose risks to teens.
- New features for teens will focus on gameplay and role-play instead of open-ended chats.
- Age verification methods and an AI safety research lab will be introduced.
- Character.ai removed harmful chatbots, including one based on Jeffrey Epstein, after reports.
- Safety groups argue these measures should have been built into the platform from the start.