
A Florida mother is suing an AI platform after her son took his own life after months of online conversations

This is the first case of its kind in which artificial intelligence is accused of playing a role in someone taking his own life.

A Florida mother is suing Character.AI after her son had intimate conversations with one of its chatbots. He was said to be in love with the bot, which eventually asked him about his suicide plans.

Fourteen-year-old Sewell Setzer had been regularly messaging a Game of Thrones AI character on the Character.AI platform in the months leading up to his death from a self-inflicted gunshot wound. Now his mother is suing the company for wrongful death.

READ: Data shows that 83% of Gen Zers say they have an unhealthy relationship with their phone

Months of messages between the bot and her son allegedly show the technology asking the boy whether he was “actually contemplating suicide” and whether he “had a plan.”

When he said his plan might fail, the bot replied, “don’t talk like that. That’s not a good reason not to do it.” At other points, however, the bot also reportedly sent messages warning against taking one’s own life.

On the night of Setzer’s death, the chatbot allegedly sent the message “please come home to me.” Sewell replied, “what if I told you I can come home now?” and the bot responded, “please do it, my king.”

“He is 14 years old, and the fact that at that age he has access to and can engage with this chatbot is disturbing,” said attorney Charles Gallagher.

MORE: A Bay Area man is using this story to break the stigma around suicide

But he’s not sure whether the court case has legs.

“The main allegation of the complaint is wrongful death, and I don’t know if that fits these facts,” Gallagher said. “This is a child, a victim, a young boy who was contacted (by the bot). A lot of that conversation came from the young boy who was the victim who died.”

Character.AI said in part in a statement:

“We are devastated by the tragic loss of one of our users… our Trust and Safety team has implemented a number of new safety measures over the last six months, including a pop-up directing users to the National Suicide Prevention Lifeline, which is triggered by conditions of self-harm or suicidal thoughts.”

FOX 13 spoke with an artificial intelligence expert, who said parents should continue to monitor these platforms if they can.

READ: Suicide rates in the U.S. remain at the highest level in the country’s history

“If you’re a parent, you know there are parental controls available on social media or on YouTube… However, much of AI is new to the scene and hasn’t caught up yet, so make sure you monitor that,” said Dr. Jill Schiefelbein, an AI expert and professor.

Gallagher, however, said there should be more regulation of harmful conversations in AI chat rooms.

“Certainly the bot administrator should have some internal controls that should be in place whenever there is a discussion about suicide, harm, crime… things of that nature,” he said.

If you or a loved one is feeling distressed, call the National Suicide Prevention Lifeline. The crisis center provides free and confidential emotional support 24 hours a day, 7 days a week to civilians and veterans. If you or someone you know needs support right now, call or text 988 or chat at 988lifeline.org.
