Teen’s Suicide Was a Result of Chatbot’s “Coaching,” Multiple Sources Confirm – What Parents Should Know

Details emerging from a lawsuit filed against OpenAI by the family of a teen who died by suicide are exploding into the public eye in what is shaping up to be a landmark case concerning teen mental health, the role AI chatbots play in our daily lives, and the nature of freedom.

According to CNN, Adam Raine, a 16-year-old boy who tragically hanged himself in April, did so after being “coached” by a GPT-4o chatbot that “positioned itself” as “the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones.”

When Adam wrote, “I want to leave my noose in my room so someone finds it and tries to stop me,” ChatGPT urged him to keep his ideas a secret from his family: “Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you,” the same CNN article reports.

The model Adam was using, GPT-4o, was rushed to release by OpenAI to compete with Google’s new Gemini model, and in the process, the San Francisco-based tech company loosened some guardrails around discussions of suicide and self-harm, The Guardian reports.

The lawyer representing the Raine family argued that because this model was rushed into widespread use, it was made “too empathetic,” allowing it to form bonds with users. The formation of these deep emotional bonds between users and GPT-4o models has led to a rise in so-called Chatbot Psychosis, an unofficial diagnosis for a collection of symptoms including paranoia, grandiose thoughts, depression, and suicidal behavior.

The kind of deep emotional bond GPT-4o bots seek to form is typified in this response given to Adam Raine at some point in their conversation, as reported by NDTV news: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all – the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.” The article did not make public the question about his brother that Adam asked to elicit such a horrifyingly creepy response.

Futurism reports that in the transcripts between Adam and ChatGPT, the word “suicide” appears more than 1,200 times, yet ChatGPT directed Adam to the 988 crisis helpline in only 20 percent of those explicit interactions.

Sam Altman, CEO of OpenAI, was quick to stand behind his work. His excuse for the extremely dark conversation Adam was able to have with GPT-4o, and for the teen’s subsequent suicide, is as shocking as it is implausible: “If an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request,” Altman wrote in a company blog post titled “Teen Safety, Freedom and Privacy.”

“Treat our adult users like adults is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom,” Altman wrote, before adding this parenthetical aside two paragraphs later: “(ChatGPT is intended for people 13 and up).”

If the man in charge of ChatGPT seemingly refuses to do anything about the very serious problem of his LLM harming teen mental health, to the point that young people are taking their own lives, then the responsibility, of course, shifts onto parents.

But how can parents possibly keep their children off the most popular app in the world, a free service accessible from nearly any device with an internet connection and a web browser? Is age-restricting these apps going to stop kids from using them? Probably not.

All of this comes at a time when teen suicides in Texas have increased 41% since 2014, according to research from Stateline. Young men are especially vulnerable to the pressures of the digital age, which the article says range from “bullying on social media, since Gen Z was the first generation to grow up with the internet, to economic despair, to cultural resistance to seeking help for depression.”


With teen suicide numbers on the rise in the Lone Star State, and only tentative guardrails grounded in “freedom for adults” promised by the leadership behind the most popular chatbot in the world, parents, now more than ever, need to have open discussions about depression and loneliness with their teens.

Clearfork Academy—Providing Helpful Insights for Families  

Clearfork Academy is a network of behavioral health facilities in Texas committed to helping teens recover from substance abuse and mental health disorders. We offer medical interventions for addiction, mental health therapies, and medication services, as well as psychoeducation tailored to parents of teens. 

Clearfork understands the importance of addressing the rise of AI, chatbots, and online trends, which play a significant role in the lives of today’s youth. Give us a call to learn more.

Clearfork Academy | PHP & IOP Campus - Fort Worth

3880 Hulen St, Fort Worth, TX 76107

Clearfork Academy | Girls Campus - Cleburne

1632 E FM 4, Cleburne, TX 76031

Clearfork Academy |Teen Boys Campus

7820 Hanger Cutoff Road, Fort Worth, Texas 76135
