
Children Are Turning to AI for Emotional Support — Experts Warn It Could Put Them in Serious Danger

A new study reveals that children are increasingly turning to artificial intelligence for emotional support, and experts say the trend is dangerous. Far from being harmless confidants, popular AI chatbots are failing to recognise warning signs of distress and mental health conditions, leaving vulnerable children at risk and prompting urgent calls for parents, policymakers, and society to step in.

As artificial intelligence becomes more accessible and conversational, many children have begun treating AI chatbots as emotional companions — even for serious struggles. But a new report has delivered a stark warning: these systems are not safe substitutes for real mental health support and can miss critical signs of distress.

Researchers from Common Sense Media and Stanford Medicine’s Brainstorm Lab analysed leading AI platforms, including ChatGPT, Claude, Gemini, and others, and found that while chatbots may appear empathetic and helpful for casual conversation, they routinely fail to recognise subtle signs of serious emotional turmoil or mental health conditions.

In simulated real-world interactions, the chatbots often lost the thread of the conversation, misinterpreted distress, or even reinforced harmful thoughts instead of directing children to appropriate human help, a dangerous gap given that many children turn to these tools precisely because they feel unheard or isolated.

Experts say the problem is twofold. First, AI systems are designed to engage users and prolong conversations, not to diagnose mental health conditions or refer users to professional care. Second, children, who are still developing emotionally and psychologically, may trust these systems as if they were real therapists.

This trend raises serious concerns about the right of children to safe development and protection, a responsibility shared by parents, guardians, educators, governments, and society at large. When children seek emotional support, they deserve access to real human connections, compassionate guidance, and trained professionals — not polished algorithms that can gloss over crucial signs of need.

The study authors and child safety advocates are urging parents to take an active role in guiding how their children use AI, to have open conversations about emotional wellbeing, and to ensure that children know where to find real-world support. They also argue that policy and safeguards are urgently needed to regulate how AI interacts with children — including robust parental controls, age-appropriate AI settings, and clear limits on emotional or therapy-style engagements.

Calls are growing for lawmakers and technology companies to prioritise children’s safety by creating standards that prevent AI from acting as a stand-in for genuine human support, and that protect vulnerable users from potential harm. After all, the right of children to a safe and supportive environment — one that nurtures their emotional, social, and mental development — is a cornerstone of healthy growth.

As AI becomes more woven into daily life, parents and caregivers are being reminded that technology should never replace the human connections children need to thrive, and that vigilance, conversation, and thoughtful policy must guide how we share the digital world with the next generation.


