
Fake quotes show Alaska education official relied on generative artificial intelligence, raising broader questions

ANCHORAGE, Alaska (Alaska Beacon) – The state’s top education official relied on generative artificial intelligence to draft a proposed policy on cell phone use in Alaska schools, resulting in a state document that cited nonexistent academic research, the Alaska Beacon’s Claire Stremple reports.

The document did not disclose that artificial intelligence was used in its preparation. At least some of the AI-generated false information made its way to members of the state’s Board of Education and Early Development.

Policymakers in education and other government bodies rely on well-vetted research. The commissioner’s use of fake, AI-generated content highlights the state’s lack of policy on the use of AI tools, while public trust depends on knowing that the sources used to make government decisions are not only accurate but real.

A department spokesman first called the fake sources “placeholders.” They were cited throughout the resolution posted on the department’s website ahead of this month’s state board of education meeting in the Matanuska-Susitna Borough.

Later, state Education Commissioner Deena Bishop said the document was part of the first draft she had created using generative artificial intelligence. She said she realized her mistake before the meeting and sent corrected citations to board members. The board adopted the resolution.

However, erroneous references and other traces of so-called “AI hallucinations” remain in a revised document later distributed by the department, which Bishop said was the version the board voted on.

The resolution directs DEED to develop a model cell phone restriction policy. The version of the resolution posted on the state’s website cited purported research articles that cannot be found at the listed web addresses and whose titles do not appear in broader internet search results.

Four of the six citations in the document appear to be studies published in scientific journals, but they are fake. The journals cited by the state exist, but the titles referenced by the department were not printed in the issues listed. Instead, the links lead to articles on entirely different topics.

Ellie Pavlick, an assistant professor of computer science and linguistics at Brown University and a Google DeepMind researcher, reviewed the citations and found they resembled other fake citations generated by artificial intelligence.

“This is exactly the type of pattern you see with AI-hallucinated citations,” she said.

Hallucination is a term used when an artificial intelligence system produces misleading or false information, usually because the model does not have enough data or makes incorrect assumptions.

“It’s very typical to see these fake citations that refer to a real journal, sometimes even a real person, a credible name, but that don’t correspond to a real article,” she said. “It’s just the kind of citation pattern you’d expect from a language model; at least, we’ve seen them do things like that.”

The reference section of the document contains URLs that link to research articles on unrelated topics. Instead of the article “Cell Phone Ban Improves Student Performance: Evidence from a Quasi-Experiment” in the journal Computers in Human Behavior, the state’s URL linked to “Sex-Related Behavior on Facebook,” a different article in that publication. A search for the cited title yielded no results. The same was true for two studies the state said could be found in the Journal of Educational Psychology.

After the Alaska Beacon asked the department about the fake research, officials updated the document online. When asked whether the department had used artificial intelligence, spokesman Bryan Zadalis said the citations were merely filler until the correct information could be inserted.

“Many of the sources listed were placeholders during the development process while final sources were critiqued, compared and updated. This is a process many of us have grown accustomed to working with,” he wrote in an email Friday.

Bishop later said this was a first draft that had been published in error. She said she had used generative artificial intelligence to prepare the document and had corrected the errors.

However, traces of AI-generated content can still be found throughout the updated document, which Bishop said was the version the board reviewed and voted on.

For example, the updated document still refers readers to a fictitious 2019 American Psychological Association study to support the resolution’s claim that “students in schools with cell phone restrictions showed lower levels of stress and higher academic performance.” The new citation leads to a study about mental health, not academic performance. Notably, that study found no direct correlation between cell phone use and depression or loneliness.

Although the claim is not properly sourced in the document, research does exist showing that smartphones affect course comprehension and well-being – but among college students, not teenagers. Melissa DiMartino, a researcher and professor at New York Tech who published that study, said that although she has not studied the effects of cell phones on adolescents, she believes her findings would only be amplified in that population.

“Their brains are still developing. They are very malleable. And if you look at the research on smartphones, a lot of it mirrors research on substance addiction or other types of addictive behaviors,” she said.

She said the hardest part of actually studying adolescents, as the titles of the state’s fake studies purported to do, is that researchers must obtain permission from schools to conduct research on students.

The department updated the document online Friday after multiple inquiries from the Alaska Beacon about where the sources came from. The updated reference list replaced a citation of a nonexistent article in the more than century-old Journal of Educational Psychology with a real article from the Malaysian Online Journal of Educational Technology.

Bishop said there was “nothing nefarious” about the errors and the incident did not cause any noticeable damage.

But the fake citations show how AI-generated misinformation can influence state policy, especially when high-level government officials use the technology as an editorial shortcut, creating errors that end up in public documents and official resolutions.

A statement from a department spokesman suggests that the use of such “placeholders” is not unusual for the department. This type of error can easily recur if the placeholders are routinely AI-generated content.

Pavlick, the artificial intelligence expert, said the situation points to a broader concern about where people get their information and how misinformation spreads.

“I think there’s also a real problem, especially when people in power take advantage of it, because of this kind of degradation of trust that already exists, right?” she said. “Once information is revealed to be false a few times, whether intentionally or not, it becomes easy to dismiss everything as false.”

In this case, scientific articles, a long-accepted way of supporting arguments with research, data and facts, are being called into question, which may erode the extent to which they remain a trusted source.

“I think a lot of people think of AI as a replacement for search because it feels similar in some ways: you sit at a computer, type something into a text box and get answers,” she said.

She pointed to a court case last year in which a lawyer used an artificial intelligence chatbot to write a legal brief. The chatbot cited fake cases, which the lawyer then submitted in court, prompting the judge to consider sanctioning him. Pavlick said those errors reminded her of what happened with the DEED document.

She said it was disturbing that the technology had become so widely used without a corresponding increase in public understanding of how it works.

“I don’t know whose responsibility it really is – it probably falls more on us, the AI community, right, to do a better job of educating people, because it’s hard to blame people for not understanding, for not realizing that they have to treat it differently from other search tools and other technologies,” she said.

She said improving AI literacy is one way to prevent misuse of the technology, but there are no widely recognized best practices for how to do so.

“I think we’re going to see more examples of this kind of situation play out before the country and the world become a little more wary of its effects,” she said.

Editor’s note: This story is republished with permission from the Alaska Beacon.