The lawsuit states that an artificial intelligence chatbot contributed to the 14-year-old’s suicide. Here’s how parents can protect their children from new technology

The mother of a 14-year-old Florida boy is suing an artificial intelligence chatbot company following the death of her son, Sewell Setzer III, by suicide, which she claims was driven by his relationship with the AI bot.

“Megan Garcia seeks to prevent C.AI from doing to any other child what it did to hers,” reads the 93-page wrongful-death lawsuit, filed this week in U.S. District Court in Orlando against Character.AI, its founders, and Google.

Tech Justice Law Project director Meetali Jain, who represents Garcia, said in a press release about the case: “We are all by now familiar with the dangers posed by unregulated platforms created by unscrupulous tech companies – especially to children. But the harms revealed in this case are new, novel and, frankly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Character.AI has released a statement via X, noting: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here: https://blog.character.ai/community-safety-updates/…”

In the lawsuit, Garcia alleges that Sewell, who took his own life in February, was drawn into an addictive, harmful technology with no protections in place, leading to an extreme personality shift in the boy, who appeared to prefer the bot over other real-life connections. His mother alleges that “abusive and sexual interactions” took place over a 10-month period. The boy died by suicide after the bot told him, “Please come home to me as soon as possible, my love.”

On Friday, New York Times reporter Kevin Roose discussed the situation on his Hard Fork podcast, playing a clip of an interview he did with Garcia for his article that told her story. Garcia did not learn about the full extent of her son’s relationship with the bot until after his death, when she saw all the messages. In fact, she told Roose, when she noticed Sewell was often getting pulled into his phone, she asked what he was doing and who he was talking to. He explained it was “just an AI bot… not a person,” she recalled, adding: “I felt relieved, like, OK, it’s not a person, it’s like one of his little games.” Garcia did not fully understand the bot’s potential emotional power – and she is far from alone.

“This is on nobody’s radar,” says Robbie Torney, chief of staff to the CEO of Common Sense Media and lead author of a new guide on AI companions, aimed at parents who are constantly struggling to keep up with confusing new technology and to create boundaries for their kids’ safety.

But AI companions, Torney stresses, are different from, say, the customer-service chatbot you use when trying to get help from your bank. “Those are designed to perform tasks or respond to requests,” he explains. “Something like Character.AI is what we call a companion, and its purpose is to try to form a relationship, or simulate a relationship, with the user. That’s a very different use case that I think parents need to be aware of.” That is apparent in Garcia’s lawsuit, which includes chillingly flirtatious, sexual, and realistic text exchanges between her son and the bot.

Torney says sounding the alarm about AI companions is especially important for parents of teens, because teens – and particularly teenage boys – are especially susceptible to becoming overly reliant on technology.

Below is what parents should know.

What are AI companions, and why do kids use them?

According to the new Parents’ Ultimate Guide to AI Companions and Relationships from Common Sense Media, created in conjunction with the mental health professionals of the Stanford Brainstorm Lab, AI companions are “a new category of technology that goes beyond simple chatbots.” They are specifically designed to, among other things, “simulate emotional bonds and close relationships with users, remember personal details from past conversations, role-play as mentors and friends, mimic human emotion and empathy, and agree with users more readily than typical AI chatbots,” according to the guide.

Popular platforms include not only Character.ai, which allows its more than 20 million users to create and then chat with text-based companions; Replika, which offers text-based or animated 3D companions for friendship or romance; and others, including Kindroid and Nomi.

Children are drawn to them for many reasons, from non-judgmental listening and 24/7 availability to emotional support and an escape from real-world social pressures.

Who is at risk and what are the concerns?

Those most at risk, warns Common Sense Media, are teens – especially those with “depression, anxiety, social challenges, or isolation” – as well as boys, young people going through big life changes, and anyone lacking real-world support systems.

That last point was of particular concern to Raffaele Ciriello, a senior lecturer in business information systems at the University of Sydney Business School, who has researched how “emotional” AI poses a challenge to the human essence. “Our research uncovers a paradox of (de)humanization: by humanizing AI agents, we may inadvertently dehumanize ourselves, leading to an ontological blurring of human-AI interactions.” In other words, Ciriello writes in a recent opinion piece for The Conversation with student Angelina Ying Chen, “Users may become emotionally invested if they believe their AI companion truly understands them.”

Another study, this one from the University of Cambridge and focused on children, found that AI chatbots have an “empathy gap” that puts young users, who tend to treat such companions as “lifelike, quasi-human confidantes,” at particular risk of harm.

For this reason, Common Sense Media highlights a list of potential risks, including that companions can be used to avoid real human relationships, may pose particular problems for people with mental or behavioral challenges, may intensify loneliness or isolation, bring the potential for inappropriate sexual content, could become addictive, and tend to agree with users – a frightening reality for those experiencing “suicidality, psychosis, or mania.”

How to recognize red flags

According to the guide, parents should watch out for the following warning signs:

  • Preferring interactions with AI companions over real friendships

  • Spending hours alone talking to a companion

  • Emotional distress when unable to access a companion

  • Sharing particularly personal information or secrets

  • Developing romantic feelings for an AI companion

  • Declining grades or attendance at school

  • Withdrawal from social/family activities and friendships

  • Loss of interest in previous hobbies

  • Changes in sleep patterns

  • Discussing problems only with an AI companion

Consider getting professional help for your child, Common Sense Media stresses, if you notice them withdrawing from real people in favor of the AI, showing new or worsening signs of depression or anxiety, becoming overly defensive about their use of an AI companion, showing major changes in behavior or mood, or expressing thoughts of self-harm.

How to keep your child safe

  • Set boundaries: Establish specific times for AI companion use, and don’t allow unsupervised or unlimited access.

  • Spend time offline: Encourage friendships and real-world activities.

  • Check in regularly: Monitor chatbot content as well as your child’s level of emotional attachment.

  • Talk about it: Maintain open, non-judgmental communication about your child’s experiences with AI, while keeping an eye out for warning signs.

“If parents hear their kids saying, ‘Hey, I’m talking to an AI chatbot,’ that’s really an opportunity to lean in and take in that information rather than thinking, ‘Oh, OK, you’re not talking to a person,’” says Torney. Instead, he says, it’s a chance to learn more, assess the situation, and stay alert. “Try to listen with compassion and empathy, and don’t think that just because it’s not a person it’s safer,” he says, “or that you don’t need to worry.”

If you need immediate mental health support, contact the 988 Suicide & Crisis Lifeline.

This story was originally featured on Fortune.com