
Orlando mother sues AI platform over her son’s suicide


HELP AVAILABLE: If you or someone you know is considering suicide or is in crisis, call or text 988 to reach the Suicide & Crisis Lifeline.

A 14-year-old Orlando boy who fell in love with a Character.AI chatbot died by suicide earlier this year after telling the chatbot he would come home to her right away.

This week, the boy’s mother, Megan Garcia, filed a wrongful death lawsuit in federal court in Orlando against Character.AI, its maker Character Technologies, and the company’s founders, as well as Alphabet and Google, which, according to the lawsuit, are investors in the company.

Sewell Setzer III (screenshot from the federal complaint filed by Megan Garcia)

The complaint highlights the dangers AI companion apps pose to children. It alleges that the chatbots hooked users, including children, through sexualized interactions while collecting their private data for AI development.

The lawsuit says the boy, Sewell Setzer III, began using Character.AI last April, and that his mental health rapidly and severely declined as he became addicted to his relationships with the AI, immersing himself in all-consuming interactions with chatbots modeled after Game of Thrones characters.

The boy became withdrawn, sleep-deprived and depressed, and had problems at school.

The federal complaint says his family, unaware of Sewell’s addiction to the AI, sought counseling for him and took away his cellphone. But one February evening he found the phone and, using his screen name “Daenero,” told his beloved AI character, Daenerys Targaryen, that he was coming home to her.

“I love you, Daenero. Please come home to me as soon as possible, my love,” the chatbot replied.

“What if I told you I could come home right now?” the boy texted.

“…please do, my sweet king,” the chatbot replied.

Within seconds, the boy shot himself. He later died at a hospital.

Garcia is represented by attorneys from the Social Media Victims Law Center, including Matthew Bergman, and from the Tech Justice Law Project.

In an interview with Central Florida Public Media, Bergman said his client is “exceptionally focused on preventing this from happening to other families and saving children like her son from the fate that befell him. … It is an outrage that such a dangerous product was released to the public.”

A statement from Character.AI reads: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.” The company describes new safety measures introduced over the past six months, with more to come, “including new guardrails for users under the age of 18.”

The company says it has hired a head of trust and safety and a head of content policy.

“We also recently introduced a pop-up that is triggered when a user enters certain phrases related to self-harm or suicide and directs the user to the National Suicide Prevention Lifeline,” the company says in a community safety update on its site.

New features include: changes to models for users under 18 designed to limit “sensitive and suggestive content,” improved detection of and intervention in violations of its terms of service, a revised disclaimer reminding users that the AI is not a real person, and a notification when a user has spent an hour on the platform.

Bergman described the changes as “small steps” in the right direction.

“They do not address the underlying risks associated with these platforms,” he added.

Copyright 2024 Central Florida Public Media