Why Children Think AI is Magic: Critical Lessons From Sue Sentance

October 08 2025 | Thought Leadership


AI literacy isn't just about knowing how to use the latest tools. It's about developing critical thinking, maintaining agency over technology, and ensuring young people can navigate an AI-integrated world with confidence rather than blind trust.

Many organisations focus on teaching AI skills, but few address the fundamental misconceptions that prevent truly effective engagement with these systems. Sue Sentance, Director of the Raspberry Pi Computing Education Research Centre at the University of Cambridge, offers a sobering perspective. Through her research with hundreds of young people, she reveals why our current approach to AI education is failing and what we must do differently.

The Shocking Reality: Kids Can't Explain AI

When researchers asked nearly 500 young people aged 11-16 a simple question - "What do you think AI is?" - the results were alarming.

Most respondents, primarily 12-14-year-olds, couldn't articulate what AI actually is. Despite using AI systems daily, they lacked the vocabulary to discuss the technology meaningfully. Worse, many held dangerous misconceptions, believing AI is "a bit like the internet" or that it possesses human-like consciousness.

As Sentance explains, "Young people didn't really have a language to talk about AI. They actually didn't know. They sort of have got something in their head about it's there, it's everything, I'm interfacing with it, but not able to say anything about what we meant by AI."

This vocabulary gap isn't just academic - it's creating a generation that sees AI as magical rather than mechanical, leading them to surrender critical thinking when they need it most.

The Three Buckets of AI Literacy

Real AI literacy requires more than tool proficiency. Sentance identifies three essential areas young people must master.

First is awareness - understanding where AI systems are being used, what happens to personal data, and recognising when AI is helpful versus harmful. Second is confident usage - being able to apply AI as an assistant for meaningful purposes, not just because the technology exists. Third is technical understanding - grasping something about how AI works, even at a basic level.

"I do believe that in order to be a good user of AI, you need to understand something about the technology and how it works. That might be at different levels and might be just tiny, but that knowledge of AI at some level will help you be a more effective and critically engaged user," Sentance notes.

This framework challenges the common assumption that digital natives automatically understand digital technology. Using AI and understanding AI are fundamentally different skills.

Breaking the Speed Learning Trap

Perhaps most importantly, Sentance pushes back against the obsession with faster learning that dominates AI education discussions.

The pressure to rapidly adopt AI tools and cram more content into curricula misses the point entirely. Humans don't learn effectively at machine speed - we need time to think, reflect, and form independent opinions.

"We don't learn fast as humans. We need time to sort of think and reflect and form our own opinions. And that's the purpose of education," Sentance explains. "This push to use AI to be more productive, to cram more in, can be not effective because we just end up with lots of stuff that we haven't really thought through."

This insight has profound implications for workplace AI adoption. Organisations rushing to implement AI tools without allowing time for deep understanding may create employees who can operate systems but cannot evaluate outputs, identify limitations, or maintain critical oversight.

The Path Forward

The solution isn't to slow down AI adoption but to prioritise depth over speed. Educational institutions and organisations must focus on building genuine understanding rather than surface-level familiarity.

This means teaching AI literacy contextually - within specific domains and applications rather than as abstract concepts. It requires interdisciplinary approaches that connect AI to real-world problems students and employees will actually face.

Most critically, it demands that we preserve human agency in AI interactions. When people understand that AI systems are tools rather than oracles, they maintain the confidence to question outputs, recognise limitations, and use technology purposefully rather than passively.

Building AI Fluency That Matters

The challenge of AI education extends far beyond schools into every organisation adopting these technologies. The same misconceptions affecting children - seeing AI as magical, human-like, or infallible - plague adult users too.

At Cambridge Spark, we help organisations develop comprehensive AI literacy programmes that address these fundamental challenges. From technical training that demystifies AI systems to leadership education that builds strategic thinking around AI adoption, our approach emphasises critical engagement over blind implementation.

The future belongs to those who can think with AI, not just think like AI. Explore our programmes and discover how we can support your transformation.
