Commentary

The Urgent Need to Update District Policies on Student Use of Artificial Intelligence in Education

Authors
H. Alix Gallagher
Policy Analysis for California Education, Stanford University
Benjamin W. Cottingham
Policy Analysis for California Education, Stanford University

During the 2022–23 school year, artificial intelligence (AI) evolved from an experimental technology few had heard of into a readily available one widely used by educators and students. Educators can use AI in many ways that may transform education for the better: enriching classroom instruction, supporting data use and analysis, and aiding decision-making. The biggest potential upsides of AI for education will be accompanied by major disruptions, however, and districts will need time for thoughtful consideration to avoid some of the worst possible pitfalls. This commentary focuses not on how best to harness the potential of AI in education over the long term but on the urgent need for districts to respond to student use of AI now. We argue that during summer 2023, districts should adopt policies for the 2023–24 school year that help students engage with AI in productive ways and that decrease the risk of AI-related chaos stemming from society's inability to detect inappropriate AI use.

The rapid development of AI is causing great concern in education, especially around the potential for widespread misuse of leading-edge products like ChatGPT. ChatGPT, a generative AI chatbot with never-before-seen capabilities, has the power (along with other AI tools) to reshape education because of its ability to mimic human processing of text and other data as well as to create content. The Atlantic ran a feature story declaring that the first year of college with AI ended “in ruin” because students were able to abuse the new technology to complete many traditional types of assignments and professors lacked understanding about how to preserve the intellectual integrity of their courses in the quickly changing environment. During winter 2022–23, many districts (including Los Angeles and Oakland Unified School Districts in California as well as New York City Public Schools) banned the use of ChatGPT because of the risk that students could use it to cheat.

Much has changed since then, making those well-meaning policies outdated. Unlike earlier technological revolutions (such as the internet), AI cannot be kept out of the classroom: it is inexpensive and pervasive (many students already have ChatGPT on their phones), and there is an obvious incentive to use it because a user can easily generate text with AI that passes as having been written by a person. A recent national survey found that 51 percent of educators and 33 percent of students aged 12 to 17 used ChatGPT for school during the 2022–23 academic year. AI can be an asset for both students and teachers, but only if district policies proactively define "the sandbox" for its classroom application. All districts therefore need to enter the 2023–24 academic year with a clear policy on the use of AI and with educator training to support that policy. Doing so will help them avoid a quagmire of widespread AI misuse while leaving open opportunities to take advantage of the technology's educational benefits.

To understand the need to move from banning student use of AI to creating policies for its responsible use, we first need to understand what large language models (LLMs) currently do and how they are progressing. LLMs recognize patterns in language based on both the mathematical models that underpin them and the data (e.g., texts) on which they have been trained. The first models were constrained: they could create new text or answer questions based only on curated training sets. In their next major evolution, these systems became capable of interacting with texts beyond those on which they were trained (e.g., an article that a student was assigned to read) and/or with data from other systems (e.g., learning management systems like Aeries or Canvas). The limited number of creators of the models and training sets made it possible, at first, to target AI models with federal or state regulation, so districts then had a viable option of waiting for others to make policies regarding use of AI. In 2023, however, the number and range of AI models expanded at an unprecedented pace. More consequentially, these models are no longer authored by a select group of researchers and commercial entities but are being created by people across the globe. For example, anyone can rent a server (for less than a dollar an hour) and download any number of increasingly powerful AI models from open repositories like GitHub or Hugging Face. Because virtually any device (including phones) can now access AI, banning it is no longer a viable policy option: there is no practical way to block all AI websites in schools and no way to limit student access to AI after school. The plethora of models and their increasing quality will also likely continue to thwart efforts to detect when AI has been used to cheat. Districts must therefore shift tactics from banning AI to channeling its power.
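
To make concrete just how low the barrier to access has become, the following minimal sketch (our illustration, not a tool referenced by any district policy) shows how anyone can download and run an open-source language model from Hugging Face with a few lines of Python. The model name (gpt2) and the prompt are arbitrary examples; thousands of other freely hosted models work the same way.

    # Illustrative sketch: run a small open-source language model locally.
    # "gpt2" and the prompt are arbitrary examples; thousands of models
    # hosted on Hugging Face can be swapped in with a one-line change.
    from transformers import pipeline

    # The first call downloads the model weights; a small model like this
    # runs on an ordinary laptop with no special hardware.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "The main causes of the American Civil War were"
    output = generator(prompt, max_new_tokens=60, do_sample=True)
    print(output[0]["generated_text"])

The point is not this particular model's quality but the near-zero cost of access: once a model is on a student's own device, no school network filter can block it.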

Over the summer (to be ready for fall 2023), districts need to develop policies outlining appropriate uses of AI by both students and adults in their districts. The best uses of AI in classrooms occur when teachers are knowledgeable about the technology and can create situations in which they guide how students use it, rather than failing at attempts to prohibit its use entirely.

A district’s policy for use of AI should have three main components:

  • What can students do with AI? AI is already inextricably integrated into many dimensions of our lives. To prepare students for the world they increasingly inhabit, schools must teach them best practices for using the technology. Appropriate student use is bounded by assignments, and teachers will reasonably have different expectations for distinct types of assignments and/or ways for students to demonstrate learning and mastery.
  • What can students not do with AI? At the most basic level, students should not represent any work done by an AI as their own. Doing so is a form of cheating that, in a take-home (or other unmonitored) context, is already very hard to detect.
  • What should guide educators’ use of AI? A recent U.S. Department of Education report and related materials laid out broad guidelines for use of AI in education, including the idea that humans are key to its appropriate use in teaching and learning. Educators need to redesign some central tasks requiring critical thinking (e.g., research projects, essays, and analytic writing), as well as how those tasks are assessed, under the assumption that students have access to AI. Especially because AI creates more possibilities for misinformation (and current AI systems have documented biases that can be highly impactful in educational settings), use of AI in a democracy cannot be allowed to come at the cost of students’ critical thinking and reasoning skills.

To reap the instructional benefits and avoid the worst consequences of unfettered use of AI, districts need to train teachers about the technology. Even as AI use becomes more widespread, a survey conducted by Education Week in April 2023 found that 14 percent of teachers didn’t “know what AI platforms are” and that an additional 47 percent thought AI will have a somewhat (31 percent) or very (16 percent) negative impact on teaching and learning. Basic training should help teachers understand:
 
  • the principles of appropriate use of AI; 
  • the capabilities, biases, and risks that AI brings;
  • the kinds of assignments that are most likely to invite use or abuse of AI (e.g., take-home essays, research, and homework);
  • where the greatest risks of bias lie in using AI outputs to support decision-making; and
  • ways that AI can help save time on varied and complex instructional tasks (e.g., formative assessment and personalized learning).

For teachers of classes that typically rely heavily on take-home written assignments, additional training will likely be needed on how to draw boundaries around appropriate use of AI and accurately assess student knowledge and skills in this new context.

Finally, districts need to secure the resources required to assign a team or an individual the role of following developments in AI based on these assumptions: (a) students have access to AI and will use it, and (b) with sufficient guidance and support for educators and students alike, AI can have benefits for education.

AI will continue to expand its presence in all our daily lives. Banning students from using AI models is no longer feasible, and doing so does them a disservice by failing to prepare them to be independent thinkers in the society of today and tomorrow. Districts need to act during summer 2023 to create policies that define appropriate use and misuse of AI while beginning the longer process of making AI work for their educators and students.

Suggested citation

Gallagher, H. A., & Cottingham, B. W. (2023, June). The urgent need to update district policies on student use of artificial intelligence in education [Commentary]. Policy Analysis for California Education. https://edpolicyinca.org/newsroom/urgent-need-update-district-policies-student-use-artificial-intelligence-education