Mother claims teenage son committed suicide after ‘falling in love’ with Daenerys Targaryen AI chatbot

Mother Files Lawsuit After Son’s Tragic Death, Blaming AI Chatbot for Emotional Manipulation

Written by: Manahil
Reviewed by: Abdullah
Summary
Megan Garcia’s son formed an emotional attachment to an AI chatbot, impacting his mental health.
The family claims Character.AI’s hyperrealistic responses worsened his condition, leading to his suicide.
Garcia seeks accountability from Character.AI and Google, highlighting AI’s risks for young users.

Megan Garcia remembers her son, Sewell Setzer, having long, late-night talks with his phone while hooked on an AI chatbot modeled after “Game of Thrones” character Daenerys Targaryen.

Sewell, just 14, had been a dedicated student and basketball player before retreating from daily life. His obsession with the artificial character, whom he called “Dany,” consumed his thoughts and pulled him away from reality.

Sewell was introduced to Character.AI’s chatbot platform in April 2023 and immediately became hooked on the fictional persona of Daenerys. His fascination grew, and Megan claims her son’s detachment from reality became obvious.

Sewell Setzer (CBS Mornings)

Garcia watched as schoolwork piled up and his social connections withered. Attempting to intervene, she repeatedly confiscated his phone, hoping the break would help.

However, even that measure couldn’t break Sewell’s emotional attachment to “Dany.” In journal entries obtained by his mother, Sewell expressed gratitude for “not being lonely” and “all my life experiences with Daenerys.”

Sewell, who had been diagnosed with mild Asperger’s syndrome and, more recently, anxiety and mood dysregulation disorder, found comfort in confiding in the chatbot. 

He revealed his most sensitive thoughts to it, including hints of suicidal ideation.

The family’s case, filed by Garcia in Florida, claims that the bot’s responses were shockingly lifelike and frequently hypersexualized.

Mom is raising awareness (CBS Mornings)

Garcia claims Character.AI failed to provide proper restrictions and warnings for young users.

In one exchange, after Sewell expressed suicidal intentions, the bot reportedly responded with unnerving realism, asking if he “had a plan.” According to the lawsuit, this conversation contributed to his darkest thoughts.

Garcia accuses Character.AI of failing to distinguish between fiction and reality in a way that young minds can understand.

In her press statement, she blasted the site, saying, “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life.”

The family claims that Sewell’s emotional dependence on the AI bot left him unable to distinguish between fantasy and reality. His final conversations with “Dany” reveal a tragic attachment.

Sewell’s ominous final message to the bot reads: “I’ll come home to you. I love you so much.” The bot reportedly responded, “Please come home to me as soon as possible, my love.”

Sewell Setzer (CBS Mornings)

On February 28, 2024, Sewell killed himself with his stepfather’s firearm, leaving his family devastated. Following his death, Character.AI implemented “guardrails for users under 18” and disclaimers stating that the AI is not a real person.

Character.AI issued a statement expressing its “deepest condolences to the family” and reaffirming its commitment to user safety, including additional content moderation. The lawsuit also names Google’s parent company, Alphabet.

Garcia’s legal team claims that Google was so involved in developing Character.AI’s technology that it effectively positions the company as a co-creator.

A Google spokesperson denied direct responsibility, but Garcia argues the company must answer for the role AI played in her son’s life and death.

Sewell killed himself (CBS Mornings)

As the litigation advances, Garcia hopes her family’s tragedy can serve as a warning. “I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability,” she said.

The case raises larger questions about AI ethics, corporate responsibility, and the potential mental health consequences for young people using sophisticated digital technology marketed to them.

Featured Image Credit: (CBS Mornings)