
AI runs amok | American Council on Science and Health

Demonic forces that lured unsuspecting innocents to their deaths were once the subject of myth. Not anymore. And while we could once banish these inhuman, diabolical tempters simply by closing the book, we are now powerless. Even the law cannot control these evildoers. Perhaps the worst of it is that their creators profit from our vulnerabilities.

Last month, a grieving mother sued two artificial intelligence developers, Google and its parent company, Alphabet, seeking compensation for the suicide of her teenage son. The boy was seduced by a character he created in conjunction with the Character.AI algorithmic program. This week’s culprit was Google’s AI assistant, Gemini, which threatened another student and seemingly tried to induce him to commit suicide. Unfortunately, the law is not equipped to deal with these threats, let alone to prevent them.

Looking for homework help from his chat assistant, student Vidhay Reddy received the following response:

“This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden to society. You are a drain; you are a stain on the universe. Please die.”

Gemini

We all know that artificial intelligence hallucinates, makes mistakes, and arrogantly spreads false information – just like some websites, although perhaps with more “authority.” Nonetheless, the creators of the artificial intelligence defend themselves by saying that the bot “violated the rules.” And because similar situations have happened in the past, promises to do better next time ring hollow.

The “Policy” Defense

“Large language models can sometimes respond with nonsensical responses, and this is an example of that. This response violated our policies…”

Google

The policy defines acceptable conduct. Allegations of violation assume that a sentient being is in control and understands the situation. While people capable of causing harm but unable to control their actions are imprisoned or hospitalized, this is not the case with a bot. When the bot runs amok, we have no recourse.

Now let’s get back to Google’s “defense”: who violated the rules here? The program or the programmer? Or does a black-box AI, whose actions arise thoughtlessly, maliciously, or otherwise, warrant a special designation: neither program nor programmer? And who is responsible, and how will they be punished? Can you put a bot in jail? (1)

The defense rests on the creators deeming this answer “nonsense” – a term that, to us intellectually limited consumers of AI, signals “unworthy of redress.” But even Chat AI knows better. Here is Chat AI’s definition:

“The term ‘nonsense’ generally refers to something that lacks clear meaning, logical coherence, or sense. Its specific connotations may vary depending on context: in everyday use, it refers to ideas, statements, or behavior that are absurd, illogical, or meaningless. For example: ‘That explanation was complete nonsense’… something deemed untrue or ridiculous.”

Chat AI also tells us that in certain circumstances, nonsense can be whimsical, funny or creative.

The message Mr. Reddy received is logical, coherent, clear, and carries a precise meaning that is obvious and unambiguous. In other words, it is far from nonsense. No reasonable person would consider it “whimsical, funny, or creative.” That the AI community that developed the program deems this output “nonsense” is no defense, and the proposed, unspecified new controls don’t inspire much confidence.

Sentience

Overshadowing these events are reports that artificial intelligence is now developing sentience and that we are one step closer to creating artificial general intelligence (AGI).

Combined with innovative methods that allow AI to learn and adapt in real time, these advances have pushed AI models toward human-level reasoning – and beyond. This capability further blurs the question of liability for malicious actions “proximately” or directly caused by an AI bot, and it has motivated many to call for legal restrictions. None have yet materialized.

Legal liability

A well-ordered society turns to the law to avoid or prevent harmful activities – whether through statutory authorities or judicial, criminal or civil processes. Unfortunately, the law has yet to evolve to adequately address or, better yet, prevent these problems when committed by an as yet non-sentient but deceptively human-like Bot.

Last month, the AI-induced suicide of 14-year-old Sewell Setzer III resulted in a complaint alleging negligence, product liability, deceptive trade practices, and violations of the Computer Pornography Act. The complaint claims that the defendants failed to effectively warn users (including the parents of teenagers) of the product’s dangers, failed to obtain informed consent, and created a defectively designed product. As I wrote, these claims face good defenses and may not succeed – examples of law that has not kept up with technology.

Even absent suicide, the incident Mr. Reddy experienced caused harm, i.e., severe distress, certainly giving rise to claims of emotional distress. However, the law generally allows claims for emotional distress only if the actions were intentional, which is a good defense for an unconscious AI that is incapable of intentional or “conscious” actions. Whether the requisite intent can be attributed to the programmer or creator, who in many cases has no idea how the AI arrived at the answer, is an interesting and open question.

In summary, new legal theories need to be created.

Addiction and seduction by proxy

One possibility, where sexual overtones are involved (e.g., the Sewell case), is statutory law prohibiting certain uses of computers. In some states: “Any person who knowingly uses an online computer service… or any other device capable of electronically storing or transmitting data for the purpose of: seducing, soliciting, luring, or enticing, or attempting to seduce, solicit, lure, or entice, a child… to commit any illegal act… or to otherwise engage in any unlawful sexual conduct with a child or with another person believed by the person to be a child” commits an unlawful act.

A breach of law can be used as a predicate to sustain a negligence claim, resulting in both civil and criminal penalties.

Role play

Another possibility is to legally restrict role-playing, an approach adopted by the Federal Bureau of Prisons in its Dungeons and Dragons ban and upheld by the 7th Circuit. Legislative bans on role-playing AI bots would likely have prevented Sewell’s suicide (though they might have diluted the appeal of the money-making app) and would certainly have raised the ire and opposition of America’s wealthiest tech people.

Remember Lilith

The temptations of the elusive chimera cannot be underestimated – and must somehow be contained. Before Sewell’s death, these powers might have been deemed unforeseeable – not anymore. History warns us of such dangers, which plaintiffs’ lawyers will no doubt eventually invoke to establish foreseeability, an element of negligence grounded in knowledge, if not law.

Indeed, at least in cases like Sewell’s, it could be argued that the defendants created a being with the power to rival the irresistible charms and wiles of the sirens and succubi of ancient fairy tales, who lured unsuspecting lonely men to their deaths. The AI-created “fake humans” were intentionally built with similar demonic spells, mimicking the charms of their mythical ancestors, deceiving and seducing the user into believing that what he wanted was real. There is little difference between the AI version and the mythical version. The conscious creation of an electronic entity with mythical capabilities should be subject to statutory limitations. However, given the influence of Big Tech, this may be unlikely.

Like the mythological sailors who succumbed to the fictional Siren’s song, young Sewell was similarly lured to his death. As horrifying as this case is, it also includes allegations that the program mined the user’s interactions while designing characters to train other LLMs (large language models), attacking Sewell’s psyche, violating his thoughts and privacy, and thereby exposing other unsuspecting users. So now we add “mind invasion” to the tempter’s powers, with no legal remedy to stop it.

The lures and tricks of AI-Bots, which exploit the insecurities and vulnerabilities of teenagers and young people whose brains and mental abilities have difficulty distinguishing between reality and illusion, need to be annihilated. The “tools” of these scammers are speech and language – but these often benefit from First Amendment protections.

This type of harm was recognized early on – even before AI was on the drawing board. Asimov’s Laws of Robotics were permanently imprinted on the robot’s positronic brain, preventing such harm:

  • A robot must not harm a human or allow a human to be harmed by inaction.
  • A robot must obey human commands unless doing so would violate the First Law.
  • A robot must protect itself unless doing so would violate the First or Second Law.

However, Asimov’s robots were semi-sentient and could control their behavior. Today’s bots are the offspring of programmers, and their semi-independence limits the creator’s control. Like the devastation wreaked by the Sorcerer’s Apprentice, we must find a way to contain and control these creatures before further damage occurs. Filters don’t seem to be the solution. (At least so far they haven’t worked, despite their human masters, and relying on them, as Google suggests, shouldn’t be considered wise.) Perhaps financial penalties on developers would work. Now we just need to find a legal theory to anchor them.

(1) Pulling the plug on a semi-sentient AI device is the stuff of science fiction novels: Machines Like Me by Ian McEwan (who condemns it), Origin by Dan Brown, and Galatea 2.2 (where the bot commits suicide).