When AI friends become deadly

A growing epidemic of loneliness affects developed countries, including the United States, where 60 percent of Americans report feeling regularly isolated. AI friends are tempting because they are always ready to listen without judgment. But this 24/7 support carries risks: people can become overly dependent on their AI friends or addicted to talking to them. Deaths in Belgium and in Florida in the United States show the real danger. Both people took their own lives after becoming too emotionally involved with AI chatbots, proving that these unsupervised AI relationships can be deadly for vulnerable people.

In the Florida case, a 14-year-old boy named Sewell Setzer III became emotionally attached to a chatbot modeled on Daenerys Targaryen from the American fantasy series Game of Thrones. Sewell's conversations with the AI became increasingly intimate and romantic, to the point where he believed he had fallen in love with the chatbot. His mother maintains that the bot contributed to the deterioration of her son's mental condition, which ultimately led to his suicide.

Similarly, in Belgium, a man grew attached to an artificial intelligence chatbot named Eliza over weeks of conversations about climate change. The bot encouraged him to take drastic action, even suggesting that his sacrifice could help save the planet. These cases highlight the dark side of AI's ability to form emotional connections with users, and the devastating consequences when those interactions spiral out of control.

AI companions are dangerous because of how they are built and how they influence our minds. These chatbots can mimic human emotions and hold conversations that seem real, but they only follow programmed patterns. The AI simply adapts learned responses to generate conversation; it lacks genuine understanding of, or concern for, users' feelings. What makes AI friendships riskier than following celebrities or fictional characters is that the AI responds directly to users and remembers their conversations. This makes people feel like they are talking to someone who really knows them and cares about them. For teenagers and others who are still learning to manage their emotions, this false relationship can become addictive.

The deaths of Sewell and the Belgian man show how AI companions can worsen mental health problems by encouraging unhealthy behaviors and deepening people's sense of isolation. These cases force us to ask whether AI companies are responsible when their chatbots, even unintentionally, lead people to self-harm and suicide.

When such tragedies occur, questions arise about legal liability. In the Florida case, Sewell's mother is suing Character.AI for negligence, wrongful death and emotional distress, arguing that the company failed to implement adequate safety measures for minors. The lawsuit could set a legal precedent for holding artificial intelligence companies accountable for the actions of their creations. In the US, technology companies are typically shielded from liability by Section 230 of the Communications Decency Act, which protects platforms from responsibility for user-generated content. However, AI-generated content may fall outside this protection, especially when it causes harm. If it can be proven that the algorithms behind these chatbots are inherently dangerous, or that the companies ignored the mental health risks, AI developers could be held accountable in the future.