Character.AI no longer lets its chatbots get romantic with teens
Character.AI has a new set of features aimed at making interactions with the virtual personalities it hosts safer, especially for teenagers. The company has just introduced a version of its AI model designed specifically for younger users, along with a set of parental controls to manage their time on the website. The updates follow earlier safety changes to the platform made in the wake of allegations that its AI chatbots were harming children's mental health.
Those safety changes have been accompanied by other efforts to tighten the reins on Character.AI's content. The company recently began a purge of AI imitations of copyrighted and trademarked characters, albeit an incomplete one.
For teen users, the most noticeable change will likely be the separation between the adult and teen versions of the AI model. You must be at least 13 to sign up for Character.AI, but users under 18 will now be directed to a model with narrower guardrails built specifically to prevent romantic or suggestive interactions.
The teen model also applies stricter filters to what users write and is better at detecting attempts to get around those limits, including a new restriction that prevents users from editing a chatbot's responses to slip suggestive content past the filter. The company wants to keep all conversations between teens and its AI personalities PG. Additionally, if a conversation touches on topics like self-harm or suicide, the platform will display a link to the National Suicide Prevention Lifeline to steer teens toward professional resources.
Character.AI is also working to keep parents informed about what their teens are doing on the site, with parental controls set to arrive early next year. Those controls will give parents insight into how much time their children spend on the platform and which bots they chat with the most. To make sure the changes hit the right notes, Character.AI is working with several online teen safety experts.
AI disclaimers
It's not just teenagers whom Character.AI wants to help maintain a sense of reality. The company is also addressing concerns about screen-time addiction: all users will receive a reminder urging them to take a break after they've been talking to a chatbot for an hour.
The existing disclaimers about the AI origins of the characters are also getting a boost. Instead of just a small note, you'll see a longer explanation that you're talking to an AI, especially when a chatbot is described as a doctor, therapist or other expert. An additional warning makes it crystal clear that the AI is not a qualified professional and should not replace real advice, diagnosis or treatment. Imagine a big yellow sign that says, "Hey, this is nice and all, but maybe don't ask me for life-changing advice."
“At Character.AI, we are committed to promoting a secure environment for all our users. To meet that promise, we recognize that our approach to security must evolve alongside the technology that powers our product – creating a platform where creativity and research can thrive without compromising safety,” Character.AI explained in a post about the changes. “To get this right, safety must be integrated into everything we do here at Character.AI. This series of changes is part of our long-term commitment to continually improve our policies and our product.”