The government has told school leaders how to use AI. What does the guidance actually say?

Most primary school leaders have been waiting for a clear signal from government before committing to AI in any meaningful way. That signal has now arrived.

The Department for Education has published a Leadership Toolkit on the safe and effective use of AI in education, developed with the Chiltern Learning Trust and the Chartered College of Teaching and funded directly by the DfE.

The toolkit is not a short briefing note. It covers how leaders should approach AI across their whole setting, how to factor it into a wider digital strategy, and what conditions need to be in place before any AI tool is introduced. Its central principle is that human oversight is non-negotiable: no decision that could adversely affect a pupil’s outcomes should be based on AI output alone.

Why this matters to primary schools

For primary schools in particular, the toolkit arrives at a useful moment. Many headteachers and school business managers have been using AI tools informally for the past 18 months, often without a policy framework in place or a clear sense of where the limits sit. The toolkit provides that framework and, importantly, it names specific use cases that are considered appropriate: writing letters to parents, planning lessons, analysing budgets, generating tender documents, and adapting materials for pupils with SEND.

What makes the toolkit practical rather than purely cautionary is its workload framing. The DfE is explicit that a positive impact on staff workload is one of the primary reasons to introduce an AI tool in the first place. This matters for school leaders who have felt pressure to adopt AI without a clear sense of what justified adoption looks like. The answer, according to this guidance, is time saved and outcomes maintained or improved.

The toolkit also addresses CPD, noting that involving all staff in training on AI can improve efficiency across the whole organisation. For many primary schools, this is the piece most likely to be underestimated. The tools themselves are relatively accessible; knowing how to use them well, and how to judge their output critically, is the harder problem.

It is the responsibility of leaders to ensure that any AI tools introduced in their setting are appropriate, safe, and have the correct safeguards in place.

Source: DfE Leadership Toolkit, safe and effective use of AI in education

What this could mean in practice

The requirement for human oversight is likely to have practical implications for how AI is used in report writing and assessment in particular. Tools that help teachers draft written comments are appropriate. Tools that generate a pupil’s assessment outcome without review are not. The line is not always obvious, and school leaders will need to draw it clearly in their AI use policy before tools are rolled out to staff.

The toolkit’s emphasis on data privacy and security will also require attention. Many of the AI tools currently in use across primary schools sit outside the school’s formal procurement process. They were introduced by individual teachers or administrators and may have no formal data processing agreement in place. The DfE is clear that it is the headteacher’s and governing body’s responsibility to ensure every tool in use has been properly assessed. That may mean some tools currently in use need to be reviewed or retired.

There is also a longer-term planning angle. The DfE is clearly moving towards a position where AI use in schools is expected to be systematic and documented rather than incidental. Schools that build a clear, defensible AI policy now, with proper connections to their safeguarding and data protection frameworks, will be in a stronger position as Ofsted continues to develop its approach to digital strategy in inspections.

How we think about this at iTCHYROBOT

All of the systems we build for primary schools are based on a simple principle: AI can assist, but it doesn't decide. It drafts, suggests, and structures, but a human reviews and approves every output.

That is how our VIP Platform handles parent communications. It's how we approach letter generation. And it's how we design any workflow that carries risk or requires accountability. The same applies to our automated website audits: they are assistants, offering a guide rather than a verdict.

There is one area where we are deliberately uncompromising: pupil data. Our systems are designed so that AI does not have access to it, full stop. That boundary is not a temporary safeguard; it’s a permanent design choice. As expectations around AI evolve, that kind of clarity becomes more important.

What's useful about the DfE Leadership Toolkit is that it does not introduce something new. It formalises what well-run schools are already doing: setting boundaries, defining responsibilities, and treating AI as part of a controlled process rather than a standalone solution.

The toolkit also reinforces a point we hear consistently from school business managers. Ad hoc tools picked up to solve isolated problems tend to create fragmentation, not efficiency. The real gains come from joined-up systems where data flows are understood and human checkpoints are built in by design. That is the direction we are taking the iTCHYROBOT platform: introducing AI assistants that handle repetitive and time-consuming tasks so that school staff get that time back, without creating new compliance risk in the process.

The DfE toolkit is a useful starting point, not because it tells you what to adopt, but because it makes clear the conditions any tool needs to meet.

That’s a conversation worth having early, before habits form and systems become harder to change.

Resources

Leadership Toolkit – Transcript

Using AI in education: support for school and college leaders – GOV.UK