Stanford Professor Allegedly Used Fake AI-Generated Citations in Legal Argument About AI Harms


A prominent Stanford professor is accused of including AI-generated fake citations in a legal argument about the dangers of deepfakes.

Minnesota, like California, has proposed a law that would impose legal restrictions on the use of deepfakes during elections. Professor Jeff Hancock, founder and director of the Stanford Social Media Lab, submitted a declaration in support of the bill, the Minnesota Reformer reports.

However, journalists and law professors were unable to locate some of the research cited in the argument, such as the study “Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Disinformation Acceptance.”

Some commentators believe this may mean that part of the argument was generated by artificial intelligence, suggesting it could be an example of an “AI hallucination,” in which an AI model such as ChatGPT simply fabricates information that does not exist.

Opponents of the new Minnesota law argued that these potential “AI hallucinations” make the professor’s legal argument less credible. A court filing on behalf of conservative Republican Rep. Mary Franson said the unverifiable citations “call into question the entire document.”

Professor Hancock is a well-known figure in the field of misinformation research. One of his TED talks, “The Future of Lying,” has received over 1.5 million views on YouTube, and he is also featured in a Netflix documentary on misinformation.

This isn’t the first time fake AI-generated legal citations have caused problems. In June 2023, Reuters reported that two New York lawyers were fined $5,000 after submitting legal documentation that the court found had been generated by OpenAI’s ChatGPT.

Professor Hancock has not yet publicly responded to the allegations against him.

Perhaps not surprisingly, legal arguments about the dangers of deepfakes in elections are currently under intense scrutiny. Elon Musk’s X has filed a comparable lawsuit challenging California’s Defending Democracy from Deepfake Deception Act of 2024, which also places restrictions on the creation and sharing of misinformation during election season, arguing that such restrictions infringe on First Amendment rights.