The expanding range of AI tools available to educators presents both opportunities and challenges. These tools can reduce the time spent planning lessons and personalizing learning. However, teachers acknowledge the potential risks, as shown by the questions posed during our recent webinar on using the AI tools in Google Workspace for Education.
"What are the risks associated with developing lesson plans and other resources using AI tools, and how can they be mitigated?"
The main risk is inaccurate or biased output, so any response created by an AI tool should be verified. AI tools can fabricate plausible-sounding content, known as hallucinations, and these errors can appear both in material produced by teachers and in work submitted by pupils. AI tools will also reflect any biases and inaccuracies present in their training data. It's essential to remember that large language models (LLMs) are typically trained on content up to a cut-off date, which can make them untrustworthy on topics that have changed since. For example, we asked two AI tools for particular information about UK politics and received two different answers – only one of them was correct.
The instructions given to generative AI tools such as Google Gemini are called prompts, and carefully crafted prompts help mitigate the risk of errors when creating material. A good starting point is to tell the tool what you want, who it is for and why you want it. Prompting can also be a multi-stage process: the first prompt produces an initial output, which you then improve by adding more information or giving further, more specific instructions.
Tips for writing effective prompts (combined into a single example in the sketch after this list):
- Specify a persona, character or skill set for the AI tool to adopt, such as “You are an expert in lesson planning”.
- Tell the AI tool how long the output should be – “Create a worksheet with 10 questions”.
- Tell the AI tool what type of output you want – “Create a worksheet with 10 multiple choice questions, each with four possible answers”.
- Give the AI tool the chance to ask for clarification rather than rely on assumptions – “Ask me questions before you answer”.
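The same principles apply whether you type prompts into Gemini directly or call it programmatically. As a minimal illustrative sketch – not something covered in the webinar – the Python below combines the tips into a single prompt and then refines the first draft in a second stage, using Google's `google-generativeai` SDK. The model name, API key placeholder and worksheet topic are assumptions for the example.

```python
# A minimal sketch, not RM's or Google's recommended workflow.
# Assumes `pip install google-generativeai` and an API key from
# Google AI Studio; the model name is illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key
model = genai.GenerativeModel("gemini-1.5-flash")

# One prompt combining the tips: persona, audience, purpose,
# output type and length, plus an invitation to ask questions.
prompt = (
    "You are an expert in lesson planning. "                 # persona
    "Create a worksheet for Year 8 pupils "                  # who it is for
    "revising photosynthesis before an end-of-topic test. "  # why
    "It should have 10 multiple-choice questions, "          # type and length
    "each with four possible answers. "
    "Ask me questions before you answer "                    # avoid assumptions
    "if anything is unclear."
)

# Multi-stage prompting: start a chat so follow-up instructions
# can refine the first draft without repeating the context.
chat = model.start_chat()
first_draft = chat.send_message(prompt)
print(first_draft.text)

# A second-stage refinement building on the initial output.
revised = chat.send_message(
    "Make the last three questions harder, and add an answer key at the end."
)
print(revised.text)
```

However the output is produced, the earlier caveat still applies: verify it before it reaches a classroom.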
AI also requires us to rethink how we test understanding. We can no longer assess understanding solely from the output a pupil submits, because we cannot be sure it is their own work, and existing AI checkers are being outpaced by the output of newer AI systems.
Another risk is becoming over-reliant on AI tools. Just because you can create lesson plans and other material using AI tools doesn't mean it's a good idea to rely on them all the time. No AI tool can know your pupils as well as you do, and it cannot differentiate or adapt content to cater for the different learning styles present in a class.
Mitigations against over-reliance include:
- spending time crafting and refining the prompts you give the AI tool, including information that provides context for the resource you are creating
- using AI-generated materials as a starting point for idea generation, adapting the output to your individual circumstances
- adding your own teaching style, personality and pedagogical approaches to the lesson
- inviting colleagues to critically assess the output from the AI tool and collaborate on creating material.
By using AI tools you will develop a sense of where their strengths and weaknesses lie, allowing you to deploy them appropriately to help manage workload.
"Is there a recommendation for schools/trusts to set up a policy related to AI?"
Yes, there is. Start from the context of your school or trust and your existing acceptable use policies for IT.
The Joint Council for Qualifications has recently updated its guidance on the use of AI tools in assessments. The guidance contains valuable information that a school or trust could incorporate into a policy on pupils' use of AI.
When considering the wider context of AI use, Education Data Hub’s AI guidance for schools sets out four steps to developing good governance around AI use.
- Balance necessity, and the excitement of being involved in a new way of working, against risk – ensuring that you set out legal, commercial, security and ethical requirements.
- Establish clear roles and responsibilities. An important difference brought about by AI decision-making is that it can be less clear who is accountable for decisions that affect individuals; accountability must not be lost when a decision is made using AI.
- Establish processes to identify necessity and risk, put risk management in place, and monitor all of this over time.
- Ensure that the use of AI integrates with existing processes for identifying and reporting fraud or cyber risk – when using AI, the risk is incoming (for example, AI-enabled phishing) as well as outgoing.
In addition, it’s important to decide which AI tools can be used in various school contexts and keep the list under review. This article by Leigh Academies Trust CEO Simon Beamish explains how the trust evaluated certain tools to assess each one’s ability to make an impact in its schools.
Find out more
Over 100 people attended the recent webinar to hear Dave Fitzpatrick of RM’s Education Consultant team explain how busy teachers can use the tools to save time when creating lesson resources, practice sets and more.
You can watch a recording of the webinar here. And keep an eye out for more webinars about AI and other topics on our events page and social media channels.