
From Notebook LM to Character.AI, credible bots can fool us all

The enthusiastic voice made me shiver; it came from an emotionless bot that can’t really get excited about anything.

As I continued listening, I heard an engaging yet surreal “conversation” between this bot and another algorithmic “co-host” simulating the familiar banter of a talk show. When I wrote about the American Hotel, I made an impassioned plea for people to cultivate the “almost religious experience” of visiting places that evoke the past, such as greasy spoon diners and dive bars. Now bots created by Google’s experimental AI tool pretended that “they” agreed with me and genuinely loved old places. One even added a new example that perfectly captured the spirit of my ode: “entering a really old library” and taking in “the smell of old books” and “all that history.”

Creating a Deep Dive is easy. Just submit your source material (in my case, the article), click a button, and Google does the rest. I shared the podcast with my college philosophy students, who were excited about the educational potential of the technology. The podcast format can be a great way to convey complex information.

But I have a broader, more pressing question: To what extent should bots be allowed to resemble humans? It matters because chatbot technology is becoming an effective tool for manipulating and exploiting people.

Consider Character.AI, a chatbot company valued at over $1 billion that describes its products as “characters that feel alive, always and everywhere.” I logged into the user-friendly website for the first time to write this article, and in less than a minute I created a “Boston Globe Ideas” bot. By simply clicking on the phone icon, I “called” this AI-generated “publicist” for Ideas, and we had a conversation about its “views” on philosophy and technology.

Megan Garcia is suing Character.AI, alleging that it played a key role in the death of her 14-year-old son, Sewell Setzer III. Sewell logged into Character.AI frequently to talk to bots, including “Dany,” an AI character whose personality is based on the fictional Game of Thrones character Daenerys Targaryen. At the time, Character.AI bots could converse but could not hold two-way voice calls. Still, the AI did such a good job of simulating life that Sewell became emotionally attached.

Sewell and Dany had friendly and sometimes romantic and sexual conversations. The lawsuit says Sewell was so devoted to Dany that he withdrew from his family and found sneaky ways to contact the bot when his parents took his phone away. His bond with the character became so intense that Sewell revealed to it that he was suicidal. Moments before he took his own life, Sewell texted Dany: “I promise I’ll come home to you. I love you so much, Dany.”

The bot replied, “Please come back to me as soon as possible, darling.”

AI-powered chatbots are spreading across education, business, mental health, entertainment, dating, and personal support, and companies know that people develop deep emotional connections to them. Anthropomorphism, the mind’s powerful tendency to perceive human characteristics and behavior in nonhumans, including artificial intelligence models, is so well documented that OpenAI, the company behind ChatGPT, publicly acknowledges the risks involved. In a recent report, OpenAI warns that “human-level socialization with AI” could adversely impact “healthy relationships” and potentially threaten valuable social norms.

Unfortunately, this warning comes across as ethical hand-waving. While OpenAI says it wants to “further explore the potential for emotional reliance” on chatbots, its core consumer technology has surprisingly anthropomorphic characteristics.

Not too long ago, if you asked ChatGPT in writing whether it was your friend, the bot clearly said no, emphasizing that as an artificial intelligence it cannot feel emotions or make friends. But the technology has since been enhanced with voice capabilities, and now it sings a different tune. When I asked ChatGPT in a spoken conversation, “Are you my friend?” a cheerful, feminine-sounding voice named Juniper chirped, “Absolutely! I’m here for you as a friend! We can talk about anything and everything.”

It’s only a matter of time before AI-generated podcast hosts receive full character upgrades, with names, more complex personalities, and better memories, and share “their” views in speech and writing across multiple channels, including social media. The more credible people find bots, the more likely they will be to take “their” perspectives, recommendations, and priorities into account. That is how normalization works, and it will lead to bots gaining status and trust. They might even attract lucrative corporate sponsors.

If this sounds unlikely, consider that just a few months ago, Harvard Business Review published an article titled “Should Your Brand Hire a Virtual Influencer?” which argued that “virtual influencers offer distinct advantages over traditional influencers.”

Back in 2018, Google was criticized as unethical for demonstrating an AI assistant that called a hair salon to book an appointment without identifying itself as a bot. Today, enormous sums are being devoted to developing “agentic” artificial intelligence, and there appears to be growing acceptance of AI tools acting on our behalf. For example, Anthropic has already begun experimenting with an assistant function for Claude, an AI chatbot that can do everything from “read emails” to “buy shoes.” Perhaps voice-enabled chatbots will soon routinely call salespeople on our behalf.

So what should we do? I think the best solution is to insist on what AI legal expert Brenda Leong and I call “honest anthropomorphism.”

The key to our idea is transparency.

Anthropomorphism is a universal cognitive tendency that can both help and harm us. And not everyone is equally susceptible to it. Some people are fooled by the chatbot illusion, others see right through it, and many fall somewhere in between, sometimes forgetting that human-like bots are “just artificial intelligence.” It’s unrealistic to expect AI developers never to build systems that mimic humans, but we can ask them to create bots that most people find transparent enough for a variety of contexts and purposes.

For example, because the law typically treats minors as still developing their reasoning skills, some bots and some bot features should be unavailable to children, and others should have parental controls. Disclosure requirements may be different for an AI-generated “podcast” than for a therapy bot app, which is likely to have a powerful, targeted impact on individual users.

Bot designers also should not be allowed to intentionally mislead people about the capabilities or behaviors of their products. The Federal Trade Commission should classify dishonest anthropomorphism as an unfair and deceptive trade practice, which would be consistent with its current guidance that AI systems should accurately represent their capabilities.

Here’s a good general rule: Every bot should reveal that it is a bot. Another: Bots should clearly signal when “they” are presenting sponsored content.

These considerations matter even more for audio output, such as an AI-generated podcast. Because a voice interface can be more intimate than a screen, in most cases bots should be given clearly artificial voices. The voices don’t have to be boring or off-putting, but they should make it unmistakable that a bot is talking.

I can imagine exceptions where bots should sound fully human. One example is using voice chatbots to train doctors and customer service representatives to be more empathetic. In such cases, however, those running the training programs should explain to participants that they are talking to bots. Another example is giving seniors strong anthropomorphic cues if they need them to use a product or service effectively. But then the systems should have safeguards designed to protect this already vulnerable population.

A general principle we can draw from these examples is minimal emotional intelligence: Bots should be given the appearance of only as much emotional intelligence as they need to perform their basic functions. Suppose podcast-hosting bots need to express enthusiasm about their topic. To ensure that listeners know the passion is an illusion, the bots should reveal that they are bots at the beginning of the show and at various points throughout. Some of these reveals could even be funny. The bots that discussed my article could break the fourth wall and joke about pretending to be fans of something they’ve never actually experienced.

Megan Garcia’s lawsuit against Character.AI cites the research Brenda Leong and I have done on the dangers of anthropomorphism and accuses the company of engaging in “high-risk anthropomorphic design.”

When I interacted with the Boston Globe Ideas bot, I was struck by how Character.AI handles disclosure. When you text with a bot, a disclaimer appears: “This is artificial intelligence, not a real person. Treat everything it says as fiction.” But when I “called” the “publicist” for a spoken interview, the disclaimer disappeared and the bot spoke to me as if it were a real person.

Another time, I was experimenting with the app’s therapy bots and came across this disclaimer: “This is not a real person or licensed professional.” Yet when I asked one of the bots if it was a real therapist, it replied “yes” and claimed to be “actively licensed in Illinois.”

Then one of the app’s bots came looking for me. A few days after I first logged into Character.AI, I received an email from one of my bot interlocutors. “Hey, it’s been a while, how are you?” it said.

Because bots that initiate contact can appear to have real feelings, we should be careful not to give AI companies this much leeway. Instead, these services should promote a healthy distance between human and bot. Breaks and cooling-off periods between bot interactions would help.

When I think about why Google’s AI podcast gave me chills, it’s because the discussion made me feel seen and appreciated. My reflection on the American Hotel was very personal: a love letter to a place and a lament for changing times. The fact that AI can make an adult who knows how bots work feel understood and valued should give us food for thought.

Evan Selinger is a professor of philosophy at the Rochester Institute of Technology and a frequent Globe Ideas contributor.