Generative AI is impressive, no doubt about it. We’ve gone from typing keywords into search bars to chatting with robots that sound eerily like that enthusiastic trainee who really wants to help but might just be making things up as they go. Tools like ChatGPT, Claude, or Gemini now give us answers in full sentences rather than a list of websites to review. They throw in a citation or two, and even ask if there’s anything else we need. They’re polite, (over)confident… and occasionally (or, it appears, frequently) dead wrong.
AI has moved beyond helping us search. It now summarises 300-page policies into snack-sized quotes, turns dense research into podcasts, and can generate a talking-head video in minutes using platforms like HeyGen. It’s convenient. It’s clever. It’s also often disconnected from reality.
Here’s the issue we keep finding: it has no understanding of what it’s saying. None! Nada! Not a clue…
Parrots, but make it tech
Imagine teaching a parrot to say “Your website meets all current Ofsted compliance standards.” Now imagine relying on that parrot to produce your annual website audit for the governors. That’s what you could be doing when you trust generative AI without backing it up with a good dose of fact-checking.
These tools are trained to mimic. They don’t know if the content in your SEND policy is correct, if your phone number is still right, or if the member of staff listed as safeguarding lead or main office contact left three months ago. They look for patterns in words and mirror them back with the confidence of that friend who always sounds right in a pub quiz but never actually is!
I recently sat with an Executive Head considering an AI-based website auditing tool. The pitch was classic tech optimism: save time, get instant reports, impress the governors. Lovely stuff. But after just five minutes of looking through the AI’s glowing assessment of the school site, we noticed a few… issues.
- The SENDCo had changed – not updated on the site.
- The main phone number was incorrect.
- Lunch and finishing/collection times were wrong.
- Uniform and policies? Let’s just say some were more “historical artefact” than “current information.”
The AI didn’t spot the majority of that. It caught some review dates but not all; it saw that the pages existed and had content on them, so it ticked the boxes and moved on.
The illusion of intelligence
This isn’t just anecdotal. Research backs it up. A study from Apple researchers, The Illusion of Thinking, found that even the smartest “thinking” AIs fall apart when faced with complex tasks. Give them a logic puzzle like the Tower of Hanoi and they manage only up to a point; beyond it, the whole thing collapses faster than a Year 6 trying to explain their missing homework. Even when you give the AI the exact algorithm, it still messes it up. Not for lack of effort; it’s just not built to actually understand what it’s doing.
It mimics. It doesn’t think. You wouldn’t hand your fire drill planning to a parrot just because it can shout “Exit via the main hall!” in a convincing tone, would you?
Use with caution, not blind faith
I love tech. I’ll happily use AI; it’s brilliant at helping. But giving it the reins on statutory audits, compliance checks, or public-facing content without any human review?
That’s a big nope.
And yes, autonomous “agentic” AI tools are coming that will promise to read the latest news or your Facebook posts and update your website with AI-generated content. While that is technically possible, remember the classic case of Apple’s AI news summaries going rogue and being described as “out of control” (Apple urged to withdraw ‘out of control’ AI news alerts – BBC News).
AI needs human validation. Every time. The truth is that, charming as it is, it often has less understanding of the task at hand than the pupil trying to answer the question.
As marketing teams sell us the virtues of “Large Reasoning Models” augmenting and enhancing the “Large Language Models,” expect another round of hype telling us that AI can now think. It can’t! Well, not yet. Maybe someday, but not someday soon.
Until then? Treat it like what it is: a very clever, very confident parrot.
If you have any questions about how AI can assist your school, please do get in touch. There are a number of useful areas where AI can assist and complement the work in schools without detracting from reliability.