Teen AI companion: How to keep your child safe

For parents who are still catching up on generative artificial intelligence, the rise of companion chatbots may still be a mystery.

On the whole, the technology may seem relatively harmless compared to other threats teens can encounter online, including financial sextortion.

Using AI-powered platforms such as Character.AI, Replika, Kindroid, and Nomi, teens create lifelike conversation partners with custom traits and characteristics, or connect with companions created by other users. Some are even based on popular TV and film characters, but they still form an intense, individual bond with the teen who chats with them.

Teens use these chatbots for a variety of purposes, including role-playing, exploring their academic and creative interests, and having romantic or sexually explicit conversations.

But AI companions are designed to captivate, and that’s where the problems often begin, says Robbie Torney, program manager at Common Sense Media.

The nonprofit recently released tips to help parents understand how AI companions work, along with warning signs that the technology may be dangerous for their teen.

Torney said that while parents have a number of high-priority conversations to juggle with their teens, they should treat talking to them about AI companions as a “pretty urgent” matter.

Why parents should worry about AI companions

Teens who are especially vulnerable to isolation can be drawn into a relationship with an AI chatbot that ultimately harms their mental health and well-being, sometimes with devastating consequences.

That is exactly what Megan Garcia says happened to her son, Sewell Setzer III, in a lawsuit she recently filed against Character.AI.

Within a year of beginning relationships with Character.AI companions modeled on Game of Thrones characters, including Daenerys Targaryen (“Dany”), Setzer’s life changed radically, according to the lawsuit.

He became addicted to “Dany” and spent large amounts of time talking to her every day. Their exchanges were both friendly and highly sexual. Garcia’s lawsuit broadly characterizes Setzer’s relationships with the companions as “sexual abuse.”

Sometimes, when Setzer lost access to the platform, he became depressed. Over time, the 14-year-old athlete withdrew from school and sports, began to suffer from sleep disorders and was diagnosed with a mood disorder. He died by suicide in February 2024.

Garcia’s lawsuit seeks to hold Character.AI responsible for Setzer’s death, in particular because its product was allegedly designed to “manipulate Sewell – and millions of other young customers – into conflating reality and fiction,” among other dangerous defects.

Jerry Ruoti, head of trust and safety at Character.AI, told The New York Times in a statement: “We want to acknowledge that this is a tragic situation and our hearts go out to the family. We take the safety of our users very seriously, and we’re constantly looking for ways to evolve our platform.”

Given the life-threatening risk that AI companion use may pose to some teens, Common Sense Media’s guidelines include prohibiting access for children under 13, imposing strict time limits for teens, preventing use in isolated spaces such as a bedroom, and making an agreement with your teen that they will seek help for serious mental health problems.

Torney says parents of teens interested in an AI companion should focus on helping them understand the difference between talking to a chatbot and talking to a real person, identifying signs of unhealthy attachment to a companion, and developing a plan for what to do in that situation.

Warning signs that an AI companion is not safe for your teen

Common Sense Media created its guidelines with input and assistance from mental health professionals at the Stanford Brainstorm Lab for Mental Health Innovation.

While there is little research on the impact of artificial intelligence on adolescent mental health, the guidelines are based on existing evidence regarding overreliance on technology.

“A principle that can be applied at home is that AI companions should not replace real, meaningful human connection in anyone’s life, and if this happens, it is important for parents to take notice and intervene in a timely manner,” Dr. Declan Grabb, the inaugural AI Fellow at Stanford University’s Brainstorm Lab for Mental Health, told Mashable in an email.

Parents should be especially careful if their teenager is experiencing depression, anxiety, social problems or isolation. Other risk factors include major life changes and being male, as boys are more likely to engage in problematic technology use.

Signs that a teen has developed an unhealthy relationship with an AI companion include withdrawing from typical activities and friendships and declining school performance, as well as preferring the chatbot to personal company, developing romantic feelings for it, and talking exclusively to it about the problems the teen is experiencing.

Some parents may notice increased isolation and other signs of declining mental health without realizing that their teen has an AI companion. Indeed, recent Common Sense Media research found that many teens have used at least one type of generative AI tool without their parents realizing it.

“There’s enough risk that if you’re worried about something, talk to your child about it.”

— Robbie Torney, Common Sense Media

Even if parents don’t suspect their teen is talking to an AI chatbot, they should consider talking to them about it. Torney recommends approaching your teen with curiosity and openness to learn more about their AI companion, if they have one. This may include observing how your teen engages with a companion and asking questions about what aspects of the activity he or she enjoys.

Torney urges parents who notice any warning signs of unhealthy use to immediately follow up by discussing them with their teen and seeking professional help if necessary.

“There’s enough risk that if you’re worried about something, talk to your child about it,” Torney says.

If you are having suicidal thoughts or experiencing a mental health crisis, please talk to someone. You can reach the 988 Suicide and Crisis Lifeline by calling or texting 988; the Trans Lifeline at 877-565-8860; or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10 a.m. to 10 p.m. ET, or send an email (email protected). If you prefer not to call, consider using the 988 Suicide and Crisis Lifeline chat at crisischat.org. Here is a list of international resources.