This is such a divisive topic. The more I use AI, and the better I get at using it, the more I like it. It’s imperative, though, that we continue to ensure that we’re the ones in charge of AI. Just as with social media, I think you need to be using it for the right reasons.
That raises the question: what are the right reasons? Who gets to define them? I’ve spoken in the past about how I think AI should be used: as a time-saver. It should help us to be more productive so that we have more time for what’s important to us. The purpose of AI, therefore, should be wellness: cutting down on time spent doing things that you’d rather not be doing.
I wrote recently about how I’m using AI in teaching, covering QuestionWell and good ol’ ChatGPT. Long story short: they save me time through the production of model answers and learning checks, two essential parts of what it means to be a teacher. If you want a more detailed breakdown of my process, follow this link, but be sure to read to the end of this one first. A couple of days before that post went live, I was at a CPD session at work that gave me some dedicated time to engage with generative AI and gain a more secure understanding of how to use it.
How could I say no? Below are my experiences in that seminar and the interesting ethical questions that were raised. If you’re most interested in the ethics of AI in teaching, skip ahead to the end.

What is Generative AI?
Generative AI, or Gen AI for short, is artificial intelligence that can generate new content in response to human input. At its highest level, Gen AI can hold ‘conversations’ with users – the form most of the internet is now familiar with, given the recent popularity of ChatGPT.
All of this and more was explained by the National Institute of Teaching (NIOT), who were holding the CPD session. They had a software expert in to break down the technicalities of AI and its various incarnations, and it was at this point that my experiences with ChatGPT really levelled up. Whenever I’d used the service prior to this session, I would always have to contextualise the AI with information about who I am, my role, what I do for a living, and what I’m hoping to achieve.
That is now a thing of the past.
We were taught that you can actually add context to your ChatGPT profile, and that it will remember this for each of your interactions. If you use it for multiple purposes – say, for teaching and for blogging – then it’s a good idea to set up multiple profiles. This can be done easily using a second Google account. It’s such a time-saver and helps the software to tailor its responses to you, Mr Hamilton the Teacher, rather than you, random person, who happens to be a teacher.
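As an aside for anyone who likes to script these things: if you work through the API rather than the web interface, the rough equivalent of that profile is a standing system message sent with every request. Below is a minimal sketch, assuming the official openai Python library; the model name and the profile wording are purely illustrative, not what NIOT showed us.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# The standing context – the API equivalent of setting up your ChatGPT
# profile once. The wording here is illustrative only.
TEACHER_PROFILE = (
    "You are assisting a UK secondary school Teacher of History who "
    "teaches GCSE and A-Level and wants concise, classroom-ready output."
)

def ask(question: str) -> str:
    """Send a one-off question with the teacher profile attached."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute whichever you use
        messages=[
            {"role": "system", "content": TEACHER_PROFILE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Suggest three hinge questions on the causes of the First World War."))
```

Swap the system message and you have your second ‘profile’ – one for teaching, one for blogging – without needing a second Google account.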
Level up your AI solutions
So, I’ve used ChatGPT for model answers and learning checks in the past. To see how we could try something different, NIOT split us into two groups – teachers and professional services (mainly IT Support at this session) – so that we could each focus on problems specific to our roles.
I was lucky to have one of my History colleagues present, so we bounced ideas off each other about how this could be used in our department. Eventually, we came to the idea of uploading a GCSE mark scheme to the software, having already told it about my context as a Teacher of History, and asking it to produce more specific model answers. On the premium version of ChatGPT, you can upload PDFs, which is easier than copying and pasting what you need.
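If you’re on the free tier, or you’d rather script this, one workaround is to extract the mark scheme’s text yourself and include it in the prompt. Here’s a minimal sketch of that idea, assuming the pypdf and openai libraries; the file name, model name and question number are all hypothetical.

```python
# pip install openai pypdf
from openai import OpenAI
from pypdf import PdfReader

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# Hypothetical file name – any GCSE mark scheme PDF would do.
reader = PdfReader("gcse_history_mark_scheme.pdf")
mark_scheme = "\n".join(page.extract_text() or "" for page in reader.pages)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whichever you use
    messages=[
        {"role": "system", "content": "You assist a UK Teacher of History."},
        {
            "role": "user",
            "content": (
                "Using the GCSE mark scheme below, write a model answer to "
                "question 1 that a strong 16-year-old could realistically "
                "produce under exam conditions.\n\n" + mark_scheme
            ),
        },
    ],
)
print(response.choices[0].message.content)
```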
Was it accurate?
No.
Not even close.
There was far too much detail for what we’d expect a student to write by hand under timed conditions. So, as our mentor suggested, we fed it another prompt, asking that the answer be written as if by a 16-year-old. This time, it wasn’t detailed enough. There was some back-and-forth – a bit like a tug-of-war – until it eventually produced something usable.
I know what you’re thinking: it had taken longer to get the AI to produce an answer than it would have taken a human. While students would write this particular answer in about 12 minutes, a teacher could probably type it out in under 5.
However, it got me thinking: if we can train ourselves to write prompts good enough to be shared around our department, then, theoretically, every member of staff should be able to produce a model answer within a couple of seconds. There would be some work initially, sure, but once the work was done, it would be done forever – and that is the case with most resources in this day and age. I didn’t have any GCSE work to hand, but one suggestion was that I train the software by feeding it some of the students’ answers, alongside the mark scheme, and telling it what mark I would give each one and why, to give ChatGPT a better idea of what a good answer looks like.
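That suggestion is essentially what’s known as few-shot prompting: show the model a handful of marked examples before asking for new output. A rough sketch of what that might look like – the answers, marks and reasons here are placeholders:

```python
# Few-shot calibration sketch – the graded examples are placeholders.
from openai import OpenAI

client = OpenAI()

graded_examples = [
    {"answer": "<student answer A>", "mark": "14/20",
     "reason": "Secure knowledge, but the analysis drifts from the question."},
    {"answer": "<student answer B>", "mark": "18/20",
     "reason": "Sustained judgement supported by precise evidence."},
]

# Flatten the marked examples into a block the model can learn from.
examples_text = "\n\n".join(
    f"Answer: {ex['answer']}\nMark awarded: {ex['mark']}\nWhy: {ex['reason']}"
    for ex in graded_examples
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": "You assist a UK Teacher of History."},
        {
            "role": "user",
            "content": (
                "Here are answers I have marked, with the marks I gave and "
                "why:\n\n" + examples_text +
                "\n\nNow write a model answer that would earn full marks, "
                "pitched at the same level as these students."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```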
Then we levelled up even further. A-Level marking is trickier than GCSE, and each script takes longer to mark as a result. It can be useful to compare students’ answers so that you know you’re marking consistently. This can absolutely be done without any AI involved, but if the software is good enough, I’d be able to use it to fine-tune my own judgements – and do it quickly. See, we’re not aiming to replace teachers; we’re aiming to revolutionise their traditional roles. Maybe this is the revolution we’d need to cut a teacher’s work week down to four days…
Once again, I input the mark scheme and some student responses, and asked the software to rank them against the mark scheme, with justification for its judgements. Granted, it marked both slightly too harshly, but our mentor had another solution: tell the software. Tell it that you would have given response A a high level 5 and response B a low level 6, and it will once again correct itself.
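In scripting terms, that correction step is just a continuation of the conversation: you append the model’s ranking and your own judgement to the message history and ask again. A sketch, with the mark scheme and responses elided as placeholders:

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumed model name

messages = [
    {"role": "system", "content": "You assist a UK A-Level History teacher."},
    {"role": "user", "content": (
        "Rank responses A and B against this mark scheme, with "
        "justification:\n\n<mark scheme>\n\nResponse A: <...>\n\n"
        "Response B: <...>"
    )},
]
first = client.chat.completions.create(model=MODEL, messages=messages)
print(first.choices[0].message.content)

# Feed the ranking back with your own judgement so the model recalibrates.
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})
messages.append({"role": "user", "content": (
    "Too harsh: I would give response A a high level 5 and response B a "
    "low level 6. Re-rank with that calibration in mind."
)})
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)
```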
I did then ask ChatGPT to write a model answer in the style of these responses, but it didn’t quite do what I wanted it to. By the time I’d got to this stage, the session was nearly over – so more trial and error is needed. AI is only as good as its users, so continual professional development in the sector is clearly the way forward.

AI’s teaching problem
‘My question is: who would get to decide who is allowed to use ChatGPT?’
I was perplexed by that question. My colleague expanded on it, and her reasoning raised some interesting questions. As a profession, we have to be good at lesson planning. It’s the bread and butter of what we do. Being able to plan, and re-plan – especially during a lesson – is essential in ensuring that our learners have ‘got it’. If we’re not good at planning, has our training really succeeded?
I ask these questions because, next to me, was another teacher who had used ChatGPT to plan an entire scheme of work, which came with learning checks, resources and hinge questions. Of course, we in the room all had plenty of teaching experience – at four years, I think I must have been the least experienced there. Everybody else had been teaching for at least ten years. We all know how to plan a good lesson, and using AI to help is just another great way to save time and promote a healthier work-life balance.
But what about new teachers? What about trainee teachers who decide to rely entirely on AI-produced schemes of work? When I last wrote about AI in teaching a couple of weeks ago, I said that it was really important to continue to have your own input. The AI-generated posts that I had experimented with in the past screamed, ‘I have been produced without any help from a real person.’ Clearly, if we’re using AI, we need to add our own twists. We need to continue to use our own professional judgement to assess whether the artificial intelligence has produced something usable.
My solution? Let the teachers who are new to the profession use AI. Let them use ChatGPT to produce as much as they need or want to. But we need to make sure that we’re integrating high-quality training into PGCE courses to teach people how to use it properly. I can’t tell you how many pieces of student work I’ve marked that are so obviously AI-generated. Teach people how to use it from the start of their teaching careers – and how to use it well – and it becomes just another tool in our teaching arsenal.
What’s the point in preventing teachers, regardless of experience level, from using something to save them time, for the sake of preserving the supposed sanctity of teacher-produced lessons? I will never forget my training year – it was the hardest year of my life from a work-life balance point of view. In my opinion, we need to make teacher training easier, and use technological innovations to do so, especially if the government wants to actually hit its recruitment targets for once.
AI’s ethical problem
If the machine has produced something, is it yours? In theory, yes. Digital laws haven’t caught up with these innovations, so we’re hard pressed to say definitively that a piece of AI content is your intellectual property – but right now, in theory, it is yours.
But where has the AI learned to write these pieces of content? That’s right: from other sources. Every AI image you’ve seen comes from a model trained on other pieces of art. You could argue that the level of copying that takes place constitutes plagiarism – or could you? Do we not all learn from the stories that we read and the ideas that we absorb? It’s tricky.
For model answers, I’m not too concerned. Generally, these will always follow a formula for students to achieve high marks, so they’re always going to be written in a certain way. In my opinion, there’s nothing uniquely original in a model answer.
The first ethical question concerns lesson plans and resources. Look on TES and you’ll find thousands of resources made by teachers (including my free video guides), available for cold, hard cash. That is their intellectual property; there’s no denying it. By using AI (not necessarily ChatGPT), you’ll be producing content that could be sold. You could make a profit – from work that is, in theory, based on the intellectual property of others.
On the other hand, consider my colleague who created the scheme of work mentioned earlier. She said that, had she done it all independently, it would easily have taken upwards of three hours.
That’s huge. Is competing with resources that teachers would otherwise sell a worthy sacrifice? Maybe. I’m not sure. What I am sure about is that there’s a clear need for the law to catch up with such a significant development in this software. It has the potential to revolutionise teaching, and I’m really excited to continue learning about how it can be used to cut down on my workload, but there are equally clear hurdles and questions emerging.
And I don’t know if I can answer them.
If you liked that, you’ll love…
- The First 90 Days: How to survive (and thrive) as a new Head of History
- 7 Brutal Questions to Course-Correct your Life before 2026
- Stop wasting time and start teaching: How to supercharge Google Forms with Brisk AI and Gemini
- Automating Google Classroom: 8 features that save teachers hours each week
- What I learned from tracking my food intake for a month