Teaching
AI in your classroom: a practical guide for teachers
Let’s start with reality. Your students are using AI. Whether your school has a policy or not. Whether you’ve addressed it in class or not. ChatGPT, Claude, Gemini - they’re using these tools at home, on their phones, for their homework.
Pretending otherwise doesn’t help anyone.
The question isn’t whether AI will be part of your students’ education. It already is. The question is whether you’ll shape how they use it or leave them to figure it out alone.
What AI can actually do
Before deciding how to handle AI in your classroom, it helps to understand what these tools actually are.
Large language models like ChatGPT are pattern-matching systems trained on massive amounts of text. They’re remarkably good at producing human-sounding responses. They can explain concepts, answer questions, write essays, solve math problems, and generate creative content.
They’re also confidently wrong sometimes. They invent facts - citations, quotes, statistics - with the same assured tone they use when they’re right. They struggle with certain kinds of multi-step reasoning. They have no actual understanding - just sophisticated pattern matching.
This combination of capability and limitation is exactly why students need guidance using them.
Starting the conversation
The worst thing you can do is ignore AI entirely. Students notice when adults avoid topics, and they draw their own conclusions about what that avoidance means.
Consider having an explicit conversation with your class about AI:
What it is. A tool, not a person. Not intelligent in the way humans are intelligent. Good at some things, bad at others.
What it’s good for. Explaining concepts you don’t understand. Brainstorming ideas. Getting feedback on drafts. Helping with coding or technical problems.
What it’s bad for. Original thinking. Factual accuracy without verification. Anything where the learning is in the struggle itself.
Where your class stands. Be explicit about what’s allowed and what’s not for your assignments. Students shouldn’t have to guess.
Designing assignments that work
Here’s where things get practical. Some assignment types are essentially AI-proof. Others became obsolete the day ChatGPT launched.
Assignments that still work
In-class writing. When you watch students write, you know the work is theirs. Final exams, timed essays, written responses during class - these still test what students actually know.
Process documentation. Ask for outlines, drafts, revision notes, reflections on the writing process. AI can generate a final product instantly. It’s harder to fake a documented journey.
Personal reflection. Essays grounded in specific personal experience - “write about a time you faced a difficult choice” - are harder for AI to generate convincingly.
Oral assessment. Presentations, discussions, oral exams. When a student has to explain and defend their thinking in real time, you learn what they actually understand.
Collaborative work. Group projects where students must engage with each other’s ideas in observable ways.
Assignments that struggle
Generic essays. “Write 500 words about the theme of justice in To Kill a Mockingbird” produces nearly identical AI output regardless of which student prompts it.
Research reports on common topics. Anything where the expected output is mainly a summary of information that’s already widely available.
Problem sets with standard answers. Math homework, grammar exercises, comprehension questions - AI handles these effortlessly.
This doesn’t mean these assignment types are useless. But if you’re assigning them, you should expect AI assistance and design grading accordingly.
Teaching with AI, not against it
Here’s an approach that’s worked for some teachers: make AI use explicit and guided rather than forbidden.
Use AI for feedback. Have students get AI feedback on a draft before submitting. Require them to document what feedback they received and how they incorporated it. This teaches critical evaluation of AI suggestions.
Analyze AI outputs together. Generate an essay with AI and critique it as a class. Where is it generic? Where does it miss nuance? Where is it factually questionable? This builds exactly the critical thinking skills students need.
Compare approaches. Have students complete a task, then ask AI to do the same task, then compare results. What did the human do better? What did AI do better? Why?
Make AI a learning tool. Struggling with a concept? Ask AI to explain it three different ways. Can’t figure out where you went wrong on a problem? Ask AI to walk through the steps. This is how many adults actually use AI in their work.
The honesty piece
Whatever approach you take, build in honesty norms.
Some teachers require a simple disclosure at the bottom of assignments: “AI tools used: none” or “AI tools used: ChatGPT for brainstorming initial ideas.”
The goal isn’t to eliminate AI use. It’s to make it visible and discussable. When AI use is acknowledged rather than hidden, you can teach students to use it well.
Students who learn to be transparent about AI assistance now will be better prepared for workplaces where appropriate AI use will be expected and disclosed.
When students push back
Some students will test boundaries. Some will use AI in ways that violate your policies. This isn’t new - students have always found ways to cut corners.
When it happens, focus on the learning conversation rather than just on punishment. Why did they make that choice? What did they actually learn from the assignment? What would they do differently?
The goal is preparing students for a world where AI exists, not a world where it doesn’t.
What your school should provide
If your administration hasn’t given you clear guidance on AI, you should ask for it. Teachers shouldn’t be making policy individually - that creates inconsistency that confuses students and parents.
You need:
- Clear policies about what AI use is permitted
- Guidance on designing AI-appropriate assessments
- Support when academic integrity issues arise
- Permission to experiment and adapt
If you’re in St. Louis and your school needs help developing this guidance, that’s exactly the kind of work we do with ISSL schools.