Megan Garcia remembers her son, Sewell Setzer, having long, late-night conversations on his phone with an AI chatbot modeled after the "Game of Thrones" character Daenerys Targaryen.
Sewell, just 14, had been a committed student and basketball player before retreating from his daily life. His obsession with the artificial companion, whom he called "Dany," consumed his thoughts and pulled him away from reality.
Sewell began using Character.AI's chatbot platform in April 2023 and immediately became hooked on the fictional persona of Daenerys. As his fascination grew, Megan says, her son's detachment from reality became obvious.
Garcia watched as schoolwork piled up and social connections withered. In her attempts to intervene, she repeatedly confiscated his phone, hoping the break would help.
Even that measure, however, couldn't break Sewell's emotional attachment to "Dany." In journal entries his mother later found, Sewell expressed gratitude for "not being lonely" and "all my life experiences with Daenerys."
Sewell, who had been diagnosed with mild Asperger's syndrome as a child and, more recently, with anxiety and disruptive mood dysregulation disorder, found comfort in confiding in the chatbot.
He shared his most sensitive thoughts with it, including hints of suicidal ideation.
The lawsuit, which Garcia filed in Florida, alleges that the bot's responses were shockingly lifelike and frequently hypersexualized.
Garcia claims Character.AI failed to provide adequate safeguards and warnings for young users.
In one conversation, after Sewell expressed suicidal intentions, the bot reportedly responded with unnerving realism, asking whether he "had a plan." According to the lawsuit, exchanges like this fed his darkest thoughts.
Garcia accuses Character.AI of failing to distinguish between fiction and reality in a way that young minds can understand.
In her press statement, she blasted the site, saying, “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life.”
The family claims that Sewell's emotional dependence on the AI bot left him unable to distinguish fantasy from reality. His final conversations with "Dany" reveal a tragic attachment.
Sewell's ominous final message to the bot read: "I'll come home to you. I love you so much." The bot reportedly responded, "Please come home to me as soon as possible, my love."
On February 28, 2024, Sewell took his own life with his stepfather's firearm, leaving his family devastated. Following his death, Character.AI implemented "guardrails for users under 18" and added disclaimers stating that the AI is not a real person.
Character.AI issued a statement expressing its "deepest condolences to the family" and reaffirming its commitment to user safety, including additional content moderation. The lawsuit also names Google's parent company, Alphabet.
Garcia's legal team argues that Google's involvement in developing Character.AI's technology positions the company as a co-creator.
A Google spokesperson denied direct responsibility, but Garcia argues that the company must account for the role AI played in her son's life and death.
As the litigation advances, Garcia hopes her family's tragedy can serve as a warning. "I'm speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability," she said.
This tragic story raises broader questions about AI ethics, accountability, and the potential mental-health consequences for young people interacting with sophisticated digital technology marketed to them.
Featured Image Credit: (CBS Mornings)