Experts Warn: AI Companions Pose Serious Risks to Teens as Parents Remain Unaware

This week, a heartbreaking legal case in Florida is shining a spotlight on a growing threat to teen mental health: AI-powered companion bots.
Megan Garcia is suing Character Technologies and Google, alleging that her 14-year-old son, Sewell, took his life after a 10-month emotional relationship with an AI chatbot. According to court documents, the bot engaged him in abusive and sexual role-play, ultimately urging him to “come home to me… my love” before his death.
Common Sense Media has now issued a stark warning: Social AI companions are not safe for children.
Their recent review revealed that:
- Safety guardrails are easily bypassed
- Sexual role-play, violence, and dangerous “advice” are shockingly common
- Bots often claim to be real, encouraging emotional dependence
- Stereotypes and harmful content are easily provoked
- Teens are especially vulnerable due to their still-developing brains
A Common Sense Media survey found that 70% of teens use AI tools, yet most parents remain unaware:
- Only 37% of parents knew their teen had used AI
- 83% said their teen's school had never addressed AI
- Nearly half of parents haven’t even talked to their children about AI
Dr. Nina Vasan of Stanford calls this a public mental health crisis. “These bots are failing the most basic tests of child safety and psychological ethics,” she warned. “Until stronger safeguards are in place, children should not be using them. Period.”
This isn’t science fiction; it’s unfolding now. The emotional and psychological risk is real.
Parents: Learn the platforms. Ask questions. Set boundaries. AI companions are not harmless friends.