Your Students Are Using AI. Stop Panicking... Start Teaching
Gershom Aitchison | Headmaster, Education Incorporated
Your students are using AI. That’s a fact, not a crisis.
It’s also not new. When Google Translate arrived, Afrikaans teachers panicked. When Grammarly hit the market, English departments spiralled. When calculators entered maths classrooms, there were genuine debates about whether students would forget how to do long division. Spoiler: some did. The ones whose teachers adapted didn’t.
Every generation of educators faces the same question when new technology enters the classroom: is this a threat or an opportunity? The answer has always been the same. It depends entirely on how we teach with it.
The only question that matters: who is doing the thinking?
Here’s the distinction that should anchor every AI policy in every school: technology as a tool versus technology as a crutch.
A crutch replaces cognitive work. A tool amplifies it.
This isn’t a new principle. Cognitive engagement is the prerequisite for learning — full stop. It always has been. A student who copies notes from the board without thinking isn’t learning. A student who sits in a group discussion but never processes the argument isn’t learning. The medium is irrelevant. If the student’s brain isn’t doing the work, no learning is taking place. Technology doesn’t change this rule. It just makes it easier to violate — and easier to enforce, if you design for it.
And here’s what makes this urgent: the world our students are walking into doesn’t need people who can produce outputs. It needs people who can think critically, solve novel problems, and direct powerful tools toward meaningful ends. Those are the transferable skills that matter — and every single one of them requires cognitive engagement to develop. If we allow AI to bypass that engagement, we’re not just failing to teach content. We’re failing to build the capacities students will actually need.
When a student pastes an essay prompt into ChatGPT and submits the output, that’s a crutch. When a student uses AI to stress-test their argument, identify gaps in their reasoning, and refine their thesis — that’s a tool. The technology is identical. The cognitive engagement is worlds apart.
Jain and Kiran (2025) call this “cognitive offloading” — the process by which students delegate thinking to AI rather than using it to extend their own capacity. Their research warns that without intentional pedagogical design, generative AI defaults to being a labour-saving device rather than a learning amplifier. Razzak (2025) echoes this in a study examining whether generative AI enhances or hinders intellectual growth, concluding that the outcome depends almost entirely on how the technology is integrated into learning tasks.
This isn’t an AI problem. It’s an instructional design problem.
Three markers of cognitive engagement with AI
At Education Incorporated, we frame productive AI use around three observable behaviours, each grounded in the higher-order thinking that Bloom’s Taxonomy demands and that the Instructional Core insists upon.
1. Critical Thinking — The student corrects, disagrees with, or questions the AI.
If a student accepts AI output without scrutiny, they are not thinking. If they push back — identifying errors, challenging assumptions, evaluating the quality of reasoning — they are operating at the analysis and evaluation levels of Bloom’s revised taxonomy. Lubbe, Marais and Kruger (2025) make a compelling case that AI, when paired with Bloom’s framework and deliberate assessment design, can cultivate independent thinkers rather than dependent ones. The key is that the task must require critical evaluation, not merely permit it.
2. Creativity — The student solves problems, improves prompts, and generates original ideas.
Effective AI use is iterative. A student who crafts a single prompt and accepts whatever comes back is passive. A student who refines their prompts, synthesises multiple outputs, and builds something new from the interaction is creating. Adnans, Riantama and Silalahi (n.d.) found that when Bloom’s Taxonomy is deliberately applied as a framework for AI interaction, students demonstrate measurably stronger higher-order thinking skills. Jackson (2025) extends this further, proposing that teaching students to construct “higher-order prompts” aligned to Bloom’s taxonomy develops precisely the analytical and creative skills we claim to value.
3. Metacognition — The student thinks about how they think.
This is the most overlooked and arguably most important marker. You cannot use AI effectively for complex tasks if you do not understand your own process of thinking, learning, and planning. To get useful output from AI, you must be able to articulate what you know, what you don’t, and how you intend to bridge the gap. You must, in effect, teach the AI how to help you — and that requires genuine self-awareness about your own cognition.
Tezer (2025) frames metacognitive engagement as foundational to meaningful AI-supported learning, arguing that without it, AI interaction remains superficial. Levin, Marom and Kojukhov (2025) go further, positioning the metacognitive challenge as the central issue in AI and education — not plagiarism, not policy, not detection software. Rajkumar, Kamalaveni and Kesavan (n.d.) put it plainly in their title: Metacognition and AI: Teaching Students to Think About Thinking. If we are not teaching this, we are not preparing students for anything.
The Instructional Core tells us where to focus
The principle is straightforward: learning happens in the interaction between the student, the teacher, and the content. If AI enters that triangle and the student steps out of it — if the AI is doing the cognitive work while the student watches — then no learning is occurring. The content hasn’t changed. The student has simply been removed from the equation.
And here the Instructional Core gives us two principles that answer the accountability question before it’s even asked. First: if you can’t see it, it isn’t there. If a teacher cannot observe cognitive engagement happening — in the classroom, in the process, in the interaction — then as far as learning is concerned, it isn’t happening. That’s true whether the student copied from a friend, zoned out during a lecture, or pasted a prompt into ChatGPT. You don’t need detection software. You need proximity to the learning. Second: the accountability is in the task. If the task itself demands cognitive engagement — if it requires the student to think, argue, create, reflect — then the teacher has built accountability into the work, not bolted it on afterwards with a plagiarism checker.
But if AI enters the triangle and raises the ceiling of what the student can engage with — more complex problems, richer analysis, deeper creative work — then the Instructional Core is strengthened, not undermined. Bloom’s Two Sigma Problem (Bloom, 1984) showed us decades ago that one-to-one tutoring produces a two standard deviation improvement in student performance. But the value of that finding isn’t the headline stat — it’s why it works. The gains come from mastery learning, individualised learning paths, and the kind of immediate, responsive feedback that only a tutor sitting beside you could previously provide. AI, used properly, is the closest we have ever come to scaling those specific mechanisms. It can meet a student where they are, adapt to their pace, give feedback in the moment, and support the iterative cycle of attempt, correction, and improvement that drives deep learning. Used improperly, it shortcuts every one of those processes and leaves the student exactly where they started.
A challenge to every teacher writing an AI policy
Stop writing policies that focus on regulation and restriction. I understand the impulse. It feels safer to ban, to detect, to police. But consider what that communicates: we don’t trust you to think, so we’re going to make sure the machine can’t think for you.
Instead, build practices that teach students how to engage cognitively with technology to enhance their learning. Teach them to extract immediate, valuable feedback from AI and to iterate their corrections and improvements on the spot. That cycle — attempt, feedback, correction, improvement — is the gold standard for learning. It always has been. AI just makes it possible at a speed and scale we’ve never had access to before. The question is whether we teach students to use that cycle, or whether we lock them out of it entirely because we’re afraid of what happens when they don’t.
Ahmedtelba (2025) argues for a critical integration approach — one that analyses cognitive engagement patterns, distinguishes between naïve adoption and meaningful use, and redesigns assessment around long-term skill development. This is the work. Not detection. Not restriction. Critical thinking, creativity, metacognition, and actually doing the work.
If a student can complete your assignment entirely with AI and get full marks, the problem is not the student. The problem is the assignment.
The bottom line
AI is in your classroom whether you like it or not. The question is not whether students will use it. The question is whether they will cognitively engage with it — that is, learn while doing so.
Teach them to argue with it.
Teach them to build with it.
Teach them to think about their own thinking while they use it.
That’s not a technology strategy. That’s just good teaching.
Gershom Aitchison is the Headmaster of Education Incorporated, a PedTech school grounded in Bloom’s Two Sigma Problem and the principles of the Instructional Core. He writes about the intersection of pedagogy and technology — and why the teaching always comes first.
References
Adnans, A.A., Riantama, D. and Silalahi, A.D.K. (n.d.) ‘Redefining Creativity in AI Era: Bloom’s Taxonomy Framework and Peer Interaction Elevate Higher-Order Thinking Skills’. Available at: SSRN 5415322.
Ahmedtelba, E. (2025) ‘Critical Integration of Generative AI in Higher Education: Cognitive, Pedagogical, and Ethical Perspectives’, London Journal of Research in Humanities and Social Sciences.
Bloom, B.S. (1984) ‘The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring’, Educational Researcher, 13(6), pp. 4–16.
City, E.A., Elmore, R.F., Fiarman, S.E. and Teitel, L. (2009) Instructional Rounds in Education: A Network Approach to Improving Teaching and Learning. Cambridge, MA: Harvard Education Press.
Jackson, J. (2025) ‘Higher order prompting: Applying Bloom’s revised taxonomy to the use of large language models in higher education’, Studies in Technology Enhanced Learning, 4(1).
Jain, N. and Kiran, M. (2025) ‘Rethinking Education in the Age of Generative AI: Cognitive Offloading, Assessment Reform, and Institutional Adaptation’, ResearchGate.
Levin, I., Marom, M. and Kojukhov, A. (2025) ‘Rethinking AI in Education: Highlighting the Metacognitive Challenge’, BRAIN. Broad Research in Artificial Intelligence and Neuroscience.
Lubbe, A., Marais, E. and Kruger, D. (2025) ‘Cultivating independent thinkers: The triad of artificial intelligence, Bloom’s taxonomy and critical thinking in assessment pedagogy’, Education and Information Technologies. Springer.
Rajkumar, M.R., Kamalaveni, M.K. and Kesavan, M.A. (n.d.) ‘Metacognition and AI: Teaching Students to Think About Thinking’, in AI in Education. Available at: ResearchGate.
Razzak, N.A. (2025) ‘The Double-Edged Sword of Generative AI: Enhancing or Hindering Students’ Intellectual Growth?’, IEEE International Conference on Emerging Trends in Information Technology and Engineering.
Tezer, M. (2025) ‘Metacognitive engagement in AI-supported learning: Frameworks, challenges, and transformations’, IntechOpen.



