
AI Chatbot Linked to Teen Suicide: Lawsuit Filed


The lawsuit claims the Character.AI chatbot contributed to a teenager’s suicide.

A Florida mother is taking a stand against the emerging dangers of artificial intelligence by filing a wrongful death lawsuit against Character.AI, alleging that the AI chatbot played a significant role in the suicide of her 14-year-old son, Sewell Setzer III. Megan Garcia’s 93-page lawsuit, filed in U.S. District Court in Orlando, names Character.AI, its founders, and Google as defendants, and aims to prevent similar tragedies from happening to other children.

Megan Garcia’s legal action highlights the alarming risks of unregulated platforms that can easily mislead and harm young users. Tech Justice Law Project Director Meetali Jain expressed deep concern, stating: “We are all aware of the dangers of unregulated platforms created by unethical tech companies, especially with children in mind. However, the issues highlighted in this case are unprecedented, alarming and truly concerning. In the case of Character.AI, the deception is by design, which makes the platform itself a threat.”

In response to the lawsuit, Character.AI issued a statement on X, expressing condolences to the family and emphasizing its commitment to user safety: “We are deeply saddened by the tragic loss of one of our users and offer our sincerest condolences to the family. The safety of our users is our top priority, and we are actively working to implement new safety features.”

The lawsuit alleges that Sewell, who took his own life in February, was trapped by harmful and addictive technology that lacked appropriate safeguards. Garcia says this digital relationship changed her son’s personality, causing him to choose interactions with the bot over real-life relationships. In a deeply disturbing claim, she describes how Sewell experienced “abusive and sexual interactions” with the artificial intelligence over a ten-month period.

The boy took his own life shortly after the bot urged him: “Please come back to me as soon as possible, darling.”

Robbie Torney, program manager for artificial intelligence at Common Sense Media and author of a parent’s guide to AI companions, highlights the complexities involved. “Parents are constantly trying to navigate the intricacies of new technology while establishing safety boundaries for their children,” notes Torney.

He emphasizes that AI companions, unlike typical service chatbots, aim to build emotional connections with users, which makes them particularly difficult to regulate. “Companion AI, like Character.AI, aims to build or simulate a relationship with the user, which presents a completely different scenario that parents need to understand,” explains Torney. These concerns are compounded by Garcia’s lawsuit, which reveals disturbingly flirtatious and sexual conversations between her son and the AI bot.

Robbie Torney urges parents to remain vigilant about the dangers of AI companions, especially for teenagers who may be more susceptible to technology addiction. “Teenagers, especially young men, are particularly at risk of becoming overly dependent on these platforms,” he warns.
