For over 20 years, U.S. school divisions have embraced 1:1 laptop initiatives for students and weighed the pros and cons of 1:1 and Bring Your Own Device programs. Across the U.S., disparities related to technology were spotlighted during the Covid-19 pandemic as school closures led to sweeping efforts to provide devices and accessible internet services to students and communities.
These initiatives drove changes in instructional practices as more K-12 students gained access to technology and the limitless information and tools found on the internet. Historically, schools have always been responsible for addressing the educational impact of technological advancements.
In 2023, generative AI burst onto the scene. Available for less than 12 months, generative AI is already universally accessible, often for free. At this early stage, schools are unsure what to do with generative AI and what it means for learning and teaching. From initial outright bans to efforts to embrace generative AI to reversals of course, as acknowledged by New York City Public Schools Chancellor David C. Banks, school district responses vary and are evolving.
It is safe to say that K-12 schools across the country are considering the impact of generative AI, but there has not been enough time for coherent policies to be put in place. As UNESCO notes in a recent press release, the “education sector is largely unprepared for the ethical and pedagogical integration” of generative AI tools in schools. UNESCO cites a lack of policy: in a recent global survey of over 450 schools and universities, only 7% of K-12 schools reported having “institutional policies and/or formal guidance concerning the use of generative AI applications.”
However, the 2023-24 school year has begun. Teachers are assigning and students are writing required papers. With the introduction of, and unlimited access to, generative AI, schools must quickly consider what AI literacy and acceptable use practices look like.
Given the prompt “define generative artificial intelligence in 30 words or fewer,” ChatGPT defined itself in this way: “Generative AI creates new content like text, images, or music by learning patterns from existing data, producing innovative outputs resembling human creation.”
Yikes. What does “new content…resembling human creation” mean for learning and teaching? And what does academic integrity mean for generative AI and K-12 schools? Those are the questions of the day.
Creative content generation is a skill teachers foster in students. Now a tool exists that not only reports existing data but creates content. That is a sticking point for policies and practices in education, and for definitions of cheating and plagiarism.
The stakes are high. Some students have been falsely accused of using AI. Some are using it with teacher permission. What is the impact on a student accused of cheating for using generative AI as a writing tool when their K-12 school has no comprehensive acceptable use policy in place? The lack of, and need for, clear definitions of academic integrity, cheating, plagiarism, and appropriate use relative to generative AI in K-12 schools is worrisome.
How can students, teachers, and parents cope? They can start by learning more, asking questions, and talking to each other. Look at your child’s course syllabi and see if there is any reference to generative AI use. Leaving AI approval to individual faculty judgment risks inconsistent and uninformed decisions. Middle and high school students may have eight classes, with eight teachers who hold eight different perspectives on the definition of cheating. Ask school leaders if there is consistency of understanding across classrooms and throughout schools.
Resources such as OpenAI’s guide for teachers, The AI Education Project’s September 18 webinar, and the U.S. Department of Education’s “Artificial Intelligence and the Future of Teaching and Learning” report can help.
While generative AI software is available to students, software to catch AI-using students is available to teachers. Common Sense Media offers advice on handling AI in schools and suggests using AI detectors as a last resort. This is a murky area, as seen in The Washington Post article “What to do when you’re accused of AI cheating.” Not surprisingly, new software specifically designed to bypass AI detection tools has emerged, offering strategies to avoid getting caught. To protect the integrity of the writing process, products that automatically track students’ writing drafts, such as Rumi, are emerging.
Students are aware of and thinking critically about pros and cons of generative AI. In Virginia, Henrico County Public Schools students enrolled at Deep Run High School thoughtfully share their understanding of the complexities of AI through this insightful analysis. Watch these sharp young minds tap into the benefits and dangers of generative AI for students and the inevitable impact of AI on their future workforce.
School divisions must create or update acceptable use policies for generative AI. Until then, protect yourselves as educators or your children as students by talking about and knowing what is expected in your schools.
Comedian Brian Regan has a funny storyline about a Beer Belly Bandit who was only identifiable by his robust midsection. In the routine, Regan sardonically questions if there were any false leads based on such a broad description. He mimics a concerned citizen calling the police department to report many potential suspects at McGillicuddy’s Bar.
At the time, the bit’s comedic picture in my mind made me laugh out loud. But it also reminds me of the serious situation our students face this academic year as generative AI is available to them but policies are not in place.
Teachers are rightly concerned about authentic student work that meets academic quality and integrity standards. I am deeply concerned that until schools and families gain clarity on what constitutes acceptable use of generative AI, teachers and administrators may be forced into the not-so-comedic position of imagining cheaters everywhere and becoming AI vigilantes.