The Dangers of Not Teaching Students How to Use AI Responsibly
"If students do not learn how to use AI and other technologies appropriately in safe school settings where mistakes are an expected part of the learning process, then they may make mistakes when learning how to use these technologies as adults in higher-stakes settings like the workplace." - Bryan Christ
Generative artificial intelligence has disrupted the classroom, leaving many educators feeling that the only immediate, well-intentioned choice is to ban the technology from assignments and academic spaces. We spoke with Bryan Christ, a PhD candidate at the University of Virginia School of Data Science, about why taking away students' ability to use large language models like ChatGPT does more harm than good, and why it is important for students to advocate for their right to use these tools in the classroom.
Q: What are the consequences of not teaching students how to use AI responsibly?
Bryan: I think the biggest educational consequence of not teaching students how to use AI responsibly is that they might use it to circumvent rather than support the learning process. For example, students might use AI to solve their homework problems without trying to solve them on their own first. In this way, they offload the learning process to AI rather than using it to support the learning process by, for example, having it give them a hint when they are stuck on a problem.
This could then set a dangerous precedent for students that AI can complete tasks entirely for them without their intervention or review, a habit that could creep into their future roles in the workforce. These students would then offload tasks to AI in their jobs rather than using it as a tool to support their productivity. This could lead them to submit AI-generated work without first reviewing it, allowing critical errors to go unnoticed when models inevitably make mistakes. Such unreviewed, low-quality output is often referred to as AI slop.
Q: There seems to be a general consensus that outright banning specific technologies or implementing age restrictions on social platforms will eliminate cheating, procrastination, and behavioral problems, and help address mental health concerns.
This seems like an easy solution that sweeps a larger, neglected issue under the rug. What do you think that issue is, and what are ways it can be addressed in the immediate future?
Bryan: I think these approaches sweep the larger issue of teaching students how to use technology appropriately under the rug. If students do not learn how to use AI and other technologies appropriately in safe school settings where mistakes are an expected part of the learning process, then they may make mistakes when learning how to use these technologies as adults in higher-stakes settings like the workplace. It is unreasonable to expect that students will not use these technologies in the future, so it is critical to teach them how to do so appropriately.
Q: How can AI help students with learning disabilities?
Bryan: There are many ways AI could help students with learning disabilities. One way is that teachers can use it as a tool to support differentiation, or adapting instruction to individual student needs. For example, teachers could use AI to quickly convert an assignment into a different format for a student with a learning disability or to generate practice problems for students with learning disabilities that are aligned with their current readiness levels.
Students with learning disabilities can also use AI on their own to support their learning. Some examples include voice chatting with AI to learn class concepts, using AI to read a document aloud, using AI to break down complex concepts into manageable chunks, using AI to generate practice problems, or using AI to convert learning materials into a different modality like a podcast or visual diagram.
Q: How can schools and educators safely and effectively implement AI into the classroom without compromising students’ ability to be creative?
Bryan: The best way for schools and educators to safely and effectively implement AI into the classroom is to find ways to use it as a tool to supplement rather than circumvent the learning process.
One example is to have students use AI tools like Khanmigo or ChatGPT Study Mode that are designed to support their learning by giving targeted hints or instruction when solving problems rather than giving the answer. Such tools can support teachers by providing individualized instruction and tutoring at a scale a single teacher could not achieve alone.
Another example would be using AI to automatically customize practice problems to students' interests and current ability levels, which is known to support learning outcomes.
A third example would be to use AI as a tool to foster creativity itself by having students use it to learn more about topics they are interested in while practicing learning skills like reading comprehension. For example, teachers (or students) could use AI to generate reading passages and associated comprehension questions or learning activities about things their students are interested in like space or sports. In these examples and other ways, we can empower teachers and students to use AI to support creativity and learning rather than circumvent it.
Q: Some students have run into an issue where they are permitted to use generative AI in one class but banned from using it in another. Would it be beneficial for a school or university-wide policy to be implemented? Or do you see another way collaboration could be used to combat the confusion that arises in this scenario?
Bryan: I think the most important thing is for schools to be very clear with students about their expectations around AI to minimize the chances that students use AI in a way that would be deemed inappropriate or cheating. While school-wide policies can be helpful, it can be hard to craft one policy that works for every class and learning situation. I think a better approach is to define what responsible use of AI looks like at a high level for the whole school and then let teachers decide what that looks like in their individual classrooms.
For example, a school could decide that responsible use of AI means students do not use it to write first drafts of their work, while allowing individual teachers latitude to decide whether AI can be used to help students refine their ideas, based on the specific learning goals of individual assignments. Individual class policies can be confusing for students at times, but teachers can minimize this confusion by being very clear about appropriate uses of AI for each learning activity.
Q: How do you think we can eliminate the stigma around AI in education while still addressing ethical and environmental concerns?
Bryan: I think the best way to eliminate stigma around AI in education is to provide real-life examples of effective use cases. Teachers and schools are correct to be concerned that AI could be used to circumvent the learning process and plagiarize content but should also be shown how it can support learning outcomes. One way to do this is by conducting research into effective uses of AI in education, which is a burgeoning area of research. For example, in a recent study we released, we found that while students performed similarly on AI-generated and human-written math problems, they consistently preferred AI-generated problems that were customized to their interests, directly demonstrating a practical way teachers could use AI to support learning.
It is also important to be clear with students that AI tools, like all technology, have real ethical and environmental implications. For example, teachers could show students how AI can facilitate plagiarism when its outputs draw on uncited sources or on copyrighted data the models were trained on. Teachers could also teach students about the environmental impact of the technology in terms of electricity and water consumption.
Q: On a larger scale, students aren’t allowed to be involved in decision making around generative AI that directly impacts their educational experience. How can they get involved in their schools or universities and make sure their voices are heard?
Bryan: While teachers and schools may make decisions around generative AI without consulting students, that doesn't necessarily mean they aren't open to student feedback or to learning how these policies affect students. Often, educators make policy decisions without students simply because it is more convenient, or because they assume students might not be interested in helping shape these policies. Most educators are well-intentioned and generally open to student feedback. I would recommend that students who are interested in helping shape generative AI policies talk to each other and to their teachers. Often, all it takes is a few interested students for schools to open lines of communication between decision makers and students.
Learn more about the importance of involving students in decisions and conversations about AI policy and usage in academic environments:
Students in the Driver’s Seat: Establishing Collaborative Cultures of Technology Governance at Universities (p.53) by Celia Calhoun, Ella Duus, Desiree Ho, Owen Kitzmann, and Mona Sloane
Biden’s AI executive order underlines need for student technology councils by UVA Associate Professor of Data Science and Media Studies Mona Sloane
As a PhD candidate at the University of Virginia School of Data Science, Bryan Christ focuses on developing and advancing methodology for applying artificial intelligence (AI) and machine learning to mathematical reasoning and education, particularly testing, assessment, and curriculum development. As a former teacher and educational nonprofit leader with a social science background, Bryan took a nontraditional path to data science, but one that gives him a deep understanding of how data science and AI can be appropriately, ethically, and effectively applied to support lifelong education and social good more broadly. Some of his research projects include:
- Developing a method to isolate math-specific parameters in Large Language Models (LLMs), which allows improvements to math reasoning without catastrophic forgetting.
- Developing methodology and synthetic data to train LLMs as age-appropriate, educational, and customizable math word problem generators for K-8 students.
- A combined approach of using LLMs and community-based participatory research to develop psychologically valid scales to measure understudied psychological phenomena, including internalized ableism.
- Using machine learning models to explore and predict the impact of previous educational experiences on current life outcomes for adults with a disability.
- Conducting statistical analyses to validate the use of educational/psychological tests with diverse populations, including factor analysis, invariance, and normative data generation.
Bryan is motivated to provide all individuals with the customized educational opportunities and supports they need to reach their full potential and thrive. Bryan is a triple Hoo and holds a PhD in Data Science, an MPP, and a B.S.Ed. from UVA. Starting in the spring, Bryan will work as an Applied Scientist at Microsoft and be a lecturer in the Online MSDS program.