AI in Education Series Part 5: Developing Ethical and Proficient AI Users
For the past few weeks, we’ve been diving deep into the report created by Leslie Eaves, SREB’s director of project-based learning, and her subcommittee of the SREB Commission on AI in Education. This report offers guidance for using AI in K-12 classrooms, structured around four key pillars.
We started with an introduction to the topic, then broke down each pillar in subsequent episodes:
- Pillar One: Creating cognitively demanding tasks
- Pillar Two: Streamlining teacher work and planning time through AI
- Pillar Three: Personalizing learning with AI
Today, we’re focusing on Pillar Four: Ethical and Proficient Student Use of AI.
Why Ethical and Proficient Use Matters
One of the first things that Leslie and I discussed was why this pillar matters. Why do we want to create ethical and proficient AI users?
Leslie explained that it’s becoming increasingly clear that AI will be a significant part of the professional world. Students need to understand when and how to use AI appropriately: when to trust its results and when to be skeptical. Ideally, this development process should begin as early as possible, preparing students for postsecondary education and future careers where AI will undoubtedly play a role.
If we want our students to be successful in the job market or in any of their future pursuits, then we need to help them develop the tools that they will need to get there. AI is going to be one of those skills.
(This would also be a good place to point out that another one of the commission’s subcommittee groups is currently working on a skills report that will help with that, but more on that later.)
A big question I had for Leslie was why it isn’t enough to create proficient users. We can all agree that students need to use AI effectively for their careers, but why the emphasis on ethics?
Leslie brought up an old programming adage: “Garbage in equals garbage out.” With AI, it’s “data in equals data out.” What data do you want your students developing?
We also discussed several ethical concerns we want to make students aware of:
- Bias: AI systems can inherit biases from the human decisions and outcomes in the data they’re trained on.
- Source Trust: It’s often unclear where AI gets its information, making it difficult to gauge the trustworthiness and appropriateness of the information for a given use.
- Circumventing Learning: Leslie expressed concern about students using AI to bypass the learning process. If AI can produce an answer that gets a good grade without students engaging in thoughtful research, critical thinking or problem-solving, they miss out on developing essential cognitive skills.
- Hallucinations: AI can “hallucinate,” creating information that appears legitimate but isn’t based on real sources. Taking AI’s output as gospel without deeper thought could lead to dangerous or unintended consequences.
To demonstrate why this emphasis on ethics matters, I shared a story about a lawyer who used AI to write a brief, only to find that all the cases it cited were fabricated. This led to professional embarrassment and real professional consequences. Effective and ethical use go hand-in-hand.
AI can sound incredibly confident, and as humans, we’re wired to notice that confidence, which can prevent us from critically evaluating the information.
Simply prompting AI to write a paper and turning it in without review is neither effective nor ethical. So, we want to focus on ethics because proficiency and ethical use often go together.
We want students to use AI as a tool for good, one that supports their work and leads to positive outcomes while minimizing negative ones.
I likened AI to rhetoric in philosophy: is it good or evil? Many philosophers conclude that rhetoric is a tool, and its morality depends on the intent of the user. Leslie agreed, seeing AI similarly as a tool whose impact depends on how it’s used.
Strategies for Teachers and Schools
Once we established that there are many reasons educators want to help students become proficient and ethical users of AI tools, we moved our discussion to how teachers and schools could go about doing that.
Here are some of the things that Leslie suggested teachers and school leaders could try:
- District and School-Wide Conversations: The first crucial step is for adults in the building to discuss what ethical AI use looks like from both an adult and student perspective. These conversations don’t need to be perfect or immediately solve every issue, but they need to happen and be revisited regularly. This prevents confusion for students who might experience vastly different AI policies from class to class.
- Rethink Assignments and Learning Journeys: If an assignment can be simply fed into an AI tool for a complete and correct answer, the assignment itself might need improvement. It’s about changing the assignment and, by extension, the entire “learning story” or “learning journey” that students undertake to reach the desired outcome. This pillar goes hand-in-hand with Pillar One on creating cognitively demanding tasks.
- Develop Critical Media Literacy: Students need to be trained to recognize when AI-generated content from others is inaccurate. This builds upon existing critical media literacy skills but takes them to a new level. We need to teach students to approach all media resources with a skeptical eye, checking references and ensuring the conclusions drawn are legitimate. As Leslie pointed out, this is crucial not just for news and video, but also for technical reports and scientific journal articles in science and career tech education.
- Address the Root Causes of Misuse: When students resort to cheating or plagiarism with AI, it’s important to question why. Is it laziness, lack of confidence in their writing or opinions, or a misunderstanding of how to cite sources properly? Each reason requires a different strategy. Leslie shared a story about her daughter using AI because the provided resources didn’t answer the teacher’s questions, highlighting a potential mismatch between resources and assignments.
- Involve Students in Ethics Discussions: Leslie is a big proponent of involving students in the processes by which they will be judged. Creating student-led or student-involved AI ethics committees can empower them to take ownership of ethical AI use and even hold their peers accountable. This also trains them for ethical considerations beyond K-12 education.
First Steps for Educators
For those planning lessons for the upcoming school year, the first step, as Leslie emphasized, is to have those crucial conversations at your school about what “ethics and AI” means in your specific context. Don’t shy away from it because it’s messy; instead, establish a plan for revising your ethical stance or policy over time.
What does “ethics and AI” mean to you and your school?
Secondly, revisit Pillar One. If you plan to embed AI in your classroom, consider how you will do so in a way that trains students to use it ethically and effectively within the learning process. This might involve re-examining your units of study and lesson plans to strategically place AI where it makes the most sense for student development.
Final Thoughts
This conversation with Leslie was incredibly insightful, and it really underscores the importance of intentional, thoughtful integration of AI in education.
Next week, during our Making Schools Work conference in New Orleans, we’ll be launching “Podcast 2.0” with a full week of episodes. As part of this, we’ll release a bonus AI episode where Leslie and I discuss the checklist she created to help schools vet, find, implement and review AI tools.
I’m really looking forward to it, and I hope to see some of you in New Orleans!
More Support
This conversation is part of our five-part podcast and blog series based on SREB’s Guidance for the Use of AI in the K12 Classroom. Each episode focuses on a different pillar and is designed to stand on its own, whether or not you’ve read the report.
- Listen to the podcast
- Watch the video
- Download the full free report that inspired this series
- Download our AI Tool Procurement, Implementation and Evaluation Checklist
Finally, keep up with the latest from our Commission on AI in Education by signing up for our newsletter.
And make sure to come back next week, when we talk about tips on picking the right AI tool for your school.