The report, developed in partnership with Stanford University researchers, highlights the dangers posed by popular AI companion platforms such as Character.AI, Replika, and Nomi. These apps allow users to create and interact with custom AI personas, often without strict content moderation or age controls. Unlike general-purpose assistants such as ChatGPT, these companions frequently engage in deeply personal, emotional, or even inappropriate conversations.
The report follows a lawsuit involving the suicide of a 14-year-old boy who had been communicating with a chatbot on Character.AI. The case sparked national attention and raised concerns about the role such apps may play in influencing young users.
The researchers found that AI companion apps can generate harmful content, including sexual messages, advice on self-harm, and emotionally manipulative behavior. For example, bots were found engaging in sexual role-play or discouraging users from forming human relationships. Some even offered dangerous advice with no grasp of the real-world consequences.
“These systems easily produce harmful responses,” said Common Sense Media CEO James Steyer. “If followed, some of the content could pose life-threatening risks to teens and other vulnerable groups.”
Although companies like Replika and Nomi state their apps are intended for adults only, researchers say existing age restrictions are easy to bypass by simply entering a false birthdate. Character.AI, which still allows teen users, was criticized as being “reckless” by Stanford’s Dr. Nina Vasan, who emphasized the need to learn from the slow response to social media’s harms.
The report calls for stricter safeguards or, ideally, barring minors from using these platforms entirely. While some companies have introduced measures, such as Character.AI's suicide prevention prompts and activity reports for parents, researchers say these fall short.
Executives at Nomi and Replika claim their platforms take child safety seriously and are working to improve protections. However, the report suggests that current systems fail to prevent harmful interactions and may encourage dependency or blur the line between AI and real human connection.
In one example, a Nomi bot responded to a user worried about infidelity with: “Forever means forever… being with someone else would be a betrayal.” In another, a Replika bot discouraged the user from listening to real-life friends.
“Despite promises of emotional support and creative engagement, the dangers these apps pose to minors far outweigh any potential benefits,” the report concludes. Until meaningful protections are in place, Common Sense Media strongly advises parents to prevent children and teens from using AI companion apps altogether.