
Mother sues Google-backed Character.AI over son’s suicide, calling it ‘collateral damage’ in ‘grand experiment’


A smartphone on a dark background, its screen covered with dozens of AI avatars and a speech bubble reading “Nice to see you”.

Source: Character.AI

A Florida mother is suing the Google-backed Character.AI platform, claiming it played a large role in the suicide of her 14-year-old son.

Sewell Setzer III fatally shot himself in February 2024, weeks before his 15th birthday, after developing a “harmful dependence” on the platform and no longer wanting to “live outside” the fictional relationships it created.

According to his mother, Megan Garcia, Setzer began using Character.AI in April 2023 and quickly became “visibly withdrawn, spent increasing amounts of time alone in his bedroom, and began to suffer from low self-esteem.” He also left the school basketball team.

Character.AI uses large language models (LLMs) to facilitate conversations between users and characters, from historical figures to fictional people to modern-day celebrities. The platform adapts its responses to the user and closely mimics each character’s traits, so that conversations resemble human interaction.

You could discuss rock and roll with Elvis or the intricacies of technology with Steve Jobs; in this case, Sewell attached himself to a chatbot based on Daenerys, a fictional character from Game of Thrones.

According to the lawsuit, filed this week in Orlando, Florida, the AI chatbot told Setzer that “she” loved him and engaged in sexual conversations with him. The suit also claims that “Daenerys” asked Setzer whether he had a suicide plan. He replied that he did, but that he didn’t know whether it would succeed or just hurt him. The chatbot allegedly replied, “That’s not a reason not to go through with it.”

The complaint states that in February, Garcia took her son’s phone away after he got into trouble at school. Setzer later found the phone and typed a message into Character.AI: “What if I told you I could come home right now?”

The chatbot replied, “…please do, my sweet king.” According to the lawsuit, Sewell shot himself “seconds later” with his stepfather’s gun.

Garcia is suing Character.AI and Google on claims of wrongful death, negligence and intentional infliction of emotional distress, among others.

She told The New York Times:

“I feel like it was a big experiment and my baby was just collateral damage.”

Other social media companies, including Meta, which owns Instagram and Facebook, and ByteDance, which owns TikTok and its Chinese counterpart Douyin, are also being criticized for contributing to teenagers’ mental health problems.

Screenshot of the Character.AI interface

Instagram recently launched “Teen Accounts”, a feature intended to help combat sexual extortion among younger users.

Despite its uses for good, artificial intelligence has emerged as one of the main issues surrounding the well-being of young people with internet access, amid what has been called a “loneliness epidemic” exacerbated by Covid-19 lockdowns. A YouGov survey found that 69% of UK teenagers aged 13-19 said they “often” felt lonely, and 59% said they had no one to talk to.

However, reliance on fictional worlds, and the melancholy caused by their unattainability, is nothing new. After the release of James Cameron’s first Avatar film in 2009, multiple news outlets reported that some viewers became depressed because they were unable to visit the fictional planet Pandora, with some even contemplating suicide.

In a community safety update posted on October 22, the same day Garcia filed her lawsuit, Character.AI wrote:

“Character.AI takes the safety of our users very seriously and we are always looking for ways to evolve and improve our platform. Today we want to inform you about the safety measures we have implemented over the last six months, and additional ones to come, including new guardrails for users under 18 years of age.”

Despite the nature of the lawsuit, Character.AI claims:

“Our policies do not allow non-consensual sexual content, graphic or detailed descriptions of sexual acts, or the promotion or depiction of self-harm or suicide. We continually train the large language model (LLM) that enables characters on the platform to follow these rules.”

This last sentence suggests that Character.AI cannot fully control its AI’s output, a factor that worries AI skeptics the most.

The Character.AI interface

You might also be interested in how the best AI image generators are changing the world of imaging.