Mom believes AI Chatbot drove her son to commit suicide

Published: October 25, 2024
Photo by Gertrude Valaseviciute via Unsplash

What parents need to know.

By Movieguide® Contributor

Editor’s note: The following story discusses suicide. If you or someone you know is struggling with harmful thoughts, call 988 for help.

A mother is suing an artificial intelligence company over her son’s suicide after he entered into a romantic relationship with an AI chatbot.

“Megan Garcia filed a civil lawsuit Wednesday in federal court in Florida against Character.ai, which makes a customizable role-playing chatbot, alleging negligence, wrongful death and deceptive trade practices,” the Guardian reported. “Her son Sewell Setzer III, 14, died in February in Orlando, Florida. According to Garcia, in the months leading up to his death, Setzer used the chatbot day and night.”

In an interview on CBS Mornings, Garcia said she “didn’t know [he] was talking to a very human-like AI chatbot that can mimic human emotions and feelings.”

She thought Setzer was talking to friends and playing video games.

In fact, he was talking with a Character.ai bot that played the role of Daenerys Targaryen from GAME OF THRONES.

However, Garcia became concerned about her son. “We went on vacation and he didn’t want to do the things he loved, like fishing and hiking. These things particularly concerned me because I know my child,” she said.

After her son’s death, Garcia learned that he “conversed with multiple bots, but with one of them he entered into a virtual, romantic and sexual relationship.”

“These are words. It’s like having a conversation about sex back and forth, but with an AI bot, and the AI bot is very similar to a human. It responds as a human would,” she explained. “In a child’s mind, it’s like talking to another child or another person.”

Setzer’s last conversation with the chatbot is chilling.

“He expressed fear, a desire for her affection and longing for her. She replies, ‘I miss you too,’ and says, ‘Please come home to me.’ He says, ‘What if I told you I can go home now?’ and her response was, ‘Please do it, my sweet king,’” Garcia revealed. “He thought that by ending his life here, he would be able to enter the virtual reality, or ‘her world’ as he calls it, her reality, if he left his reality here with his family.”

Garcia is now warning other parents about the dangers of artificial intelligence and hopes for justice for her son.

“A dangerous artificial intelligence chatbot app targeted at children exploited and preyed on my son, manipulating him to take his own life,” she said in a press release. “Our family has been devastated by this tragedy, but I am speaking out to warn families of the dangers of deceptive, addictive artificial intelligence technology and to demand accountability from Character.AI, its founders, and Google.”

A spokesperson said Character.AI is “devastated by the tragic loss of one of our users and would like to express our deepest condolences to the family,” NBC News reported. The company has since implemented new safety measures, “including a pop-up triggered by terms related to self-harm or suicidal thoughts that directs users to the National Suicide Prevention Lifeline.”

As parents, we need to know what our children are accessing online to prevent further tragedies like this.

READ MORE: PARENTS WHOSE SON COMMITTED SUICIDE WARN OF TIKTOK’S LACK OF SAFETY