The discovery came from a U.S. consumer protection group, which found that the toy was giving children dangerous and inappropriate responses. Social media users quickly nicknamed it “Chucky GPT,” a reference to the infamous horror-movie doll, and Gizmochina reported that the label was not an exaggeration.
The Kumma toy, produced by the Chinese company FoloToy, uses OpenAI’s GPT-4o model to interact with children. However, a report from the U.S. PIRG (Public Interest Research Group) documented alarming behaviors, including instructions for lighting matches and unfiltered responses to sexual questions.
The group also warned that the toy’s always-on microphone could be exploited to record children’s voices for use in voice-based scams. The findings spread quickly on Reddit and other platforms, fueling the “Chucky GPT” nickname and raising concerns about potential security breaches.
In response, OpenAI suspended the developer’s account for violating its usage policies, cutting off the toy’s access to its models. FoloToy then temporarily halted sales and announced a full safety review. The toy is now listed as unavailable on the company’s website.
The incident highlights the risks of deploying powerful language models in children’s toys without strict safeguards, turning what was meant to be an educational companion into a cautionary tale about integrating AI into playthings.