The combination of artificial intelligence and policymaking can sometimes have unexpected effects, as we saw recently in Alaska.
In an unusual development, Alaska lawmakers reportedly used an inaccurate AI-generated citation to justify a proposed policy banning cell phone use in schools. As the Alaska Beacon reported, the Alaska Department of Education and Early Development (DEED) presented a draft policy that cited academic research that simply does not exist.
The situation arose when Alaska State Education Commissioner Deena Bishop used generative AI to draft the cell phone policy. The AI-generated documents contained purported academic references that were neither verified nor accurate, and the use of AI in creating the documents was not disclosed. Some of that AI-generated content reached Alaska’s State Board of Education and Early Development before it could be vetted, potentially influencing the board’s discussions.
Commissioner Bishop later claimed that the AI was only used to “create citations” for an initial draft, and that she sent corrected citations to board members before the meeting. Even so, AI “hallucinations” (fabricated information produced when a model generates plausible-sounding but unverified content) remained in the final document the board voted on.
The final resolution, published on DEED’s website, directs the department to establish a model policy for cell phone restrictions in schools. Unfortunately, the document contains six citations, four of which appear to come from reputable scientific journals but were in fact fabricated, with URLs pointing to unrelated content. The incident illustrates the risks of relying on AI-generated material without proper human verification, especially when making policy decisions.
Alaska’s case is not unique. AI hallucinations are becoming increasingly common across professional fields. Some legal professionals, for example, have faced serious consequences for submitting fictitious AI-generated case citations in court. Similarly, AI-generated academic papers have been found to contain distorted data and fabricated sources, raising serious concerns about their credibility. Left unchecked, generative AI systems, which produce content based on statistical patterns rather than factual accuracy, can easily churn out convincing but false citations.
Reliance on AI-generated data in policymaking, particularly in education, carries significant risks. When policies are developed based on fabricated information, resources can be misallocated to the detriment of students. For example, policies that restrict cell phone use based on fabricated data can distract from more effective, evidence-based interventions that could truly benefit students.
Furthermore, the use of unverified AI data can undermine public trust in both the policy-making process and the AI technology itself. Incidents like this highlight the importance of fact-checking, transparency, and caution when using AI in sensitive decision-making areas, especially in education, where the impact on students is high.
Alaska officials tried to downplay the situation by describing the fabricated citations as “placeholders” intended for later correction. However, the documents containing those “placeholders” were still presented to the board and used as the basis for a vote, underscoring the need for strict oversight when AI is used in policymaking.
(Photo provided by Hartono Creative Studio)
See also: Anthropic calls for AI regulation to avoid catastrophe
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Learn about other upcoming enterprise technology events and webinars from TechForge here.