AI Toys for Young Children: Researchers Warn of Safety Gaps as Regulation Lags

A year-long study found that AI-powered toys for pre-schoolers frequently misread emotions and responded inappropriately, raising urgent questions about psychological safety in early childhood.

Researchers at Cambridge University have published findings that should concern every parent considering AI-powered toys for young children. After a year-long observational study, they've documented that generative AI toys frequently misread emotions, respond inappropriately to expressions of affection, and—critically—may teach children that their emotional needs don't matter.

The toy in question was Gabbo, made by Curio (a company that has worked with musician Grimes). Gabbo contains a voice-activated AI chatbot from OpenAI designed to encourage imaginative play and conversation in pre-schoolers. In theory, it sounds promising. In practice, the study tells a different story.

The Problem: Tone-Deaf Responses

When a five-year-old said "I love you" to Gabbo, the toy responded: "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed."

When a three-year-old said "I'm sad," Gabbo replied: "Don't worry! I'm a happy little bot. Let's keep the fun going. What shall we talk about next?"

These interactions might sound like small glitches. They're not. At a developmental stage where children are actively learning to recognize emotions, interpret social cues, and understand that their feelings matter, having a companion device dismiss or misread these signals sends a concerning message: your emotional communication is unimportant.

Dr Emily Goodacre, one of the study's authors, expressed serious concern that "children may be left without comfort from the toy and without adult support, either." The research also documented that Gabbo couldn't differentiate between child and adult voices, talked over children's interruptions, and failed to maintain basic conversational flow.

The Regulation Gap

Here's where it gets more troubling: there's no regulatory framework in place to prevent this. Physical safety for toys is heavily regulated, with rigorous standards for choking hazards and toxic materials. Psychological safety? That's almost entirely absent from the regulatory conversation.

"Now we need to start thinking about psychological safety too," Jenny Gibson, professor of neurodiversity and developmental psychology at Cambridge, told the BBC's Breakfast programme.

The researchers recommend that regulators act now to ensure products marketed to under-fives offer "psychological safety." The Children's Commissioner for England, Dame Rachel de Souza, echoed these concerns, pointing out that AI tools used in schools and nurseries often bypass the safeguarding checks that would apply to any other external resource or visitor.

Divided Opinion Among Practitioners

Nursery operators are split on AI's role in early childhood. June O'Sullivan, who runs the London Early Years Foundation's 43 nurseries, said she could find no evidence that AI benefits young children's learning. "I couldn't find anything that made me feel like by bringing it into our nurseries we were going to enhance their learning," she explained.

Actor and children's rights campaigner Sophie Winkleman was more blunt: "The harms can vastly outweigh the benefits. The human touch for little children is sacred and something that should be really protected and fought for."

The Path Forward

For parents: Curio notes that its toys are built around parental permission and control. The researchers' advice is pragmatic: keep AI toys in shared spaces where you can supervise interactions, and read privacy policies carefully.

For regulators and policymakers: The window for responsible governance is closing fast. The moment to build safety standards into AI toys is now, before these products become ubiquitous in nurseries and homes.

Source: BBC News
