A few weeks ago, my nine-year-old asked me whether ChatGPT was smarter than a teacher.

It’s the kind of question that could go badly in about fifteen different directions. You could end up in a thirty-minute lecture about neural networks that you don’t fully understand yourself. You could dismiss the question — “Oh, it’s just a program, don’t worry about it” — and miss a genuine teaching moment. You could wax philosophical about consciousness and intelligence until your child’s eyes glaze over and they go back to Minecraft.

I’ve had this conversation, or versions of it, many times. Here’s what I’ve learned about how to have it well.

The Foundational Principle: Questions Over Lectures

Children learn best through conversation, not lectures. The goal of a good AI conversation isn’t to transmit all the information — it’s to build your child’s thinking by engaging their own curiosity.

This means: answer their questions, but answer with questions too. “What do you think?” is often the most useful thing you can say. “That’s interesting — why do you think that?” is almost always better than a two-minute explanation.

You don’t need to be an expert. You need to be curious alongside them.

Ages 3–5: “It Follows Instructions”

Children this young are not ready for concepts like machine learning or neural networks. But they’re absolutely ready to understand that some machines follow instructions and some machines learn.

What to say: “Some computers have a helper inside that follows instructions. When you talk to the speaker in the kitchen, it follows the instruction ‘when someone says my name, listen for what they want, then try to help.’ It doesn’t really understand — it’s following very good instructions, like a very good recipe.”

What they can handle:

  • Computers do what you tell them
  • They don’t have feelings (address this early — children often project feelings onto voice assistants)
  • Sometimes they make mistakes because they didn’t understand what you meant
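If you’re a parent who codes, the “following very good instructions” idea can be made literal in a few lines of Python. This is a toy sketch, not how real voice assistants work — the phrases and responses are invented for illustration:

```python
# A toy "voice assistant" that only follows instructions: no learning,
# no understanding, just a rule book written by a person.

RULES = {
    "what time is it": "It is snack time.",
    "tell me a joke": "Why did the robot cross the road? It was programmed to.",
    "play music": "Playing your favorite song!",
}

def assistant(heard: str) -> str:
    """Look the phrase up in the rule book; there is no thinking involved."""
    phrase = heard.lower().strip("?!. ")
    # If the exact phrase isn't in the rules, the assistant is stuck --
    # which is why it "makes mistakes when it didn't understand what you meant."
    return RULES.get(phrase, "Sorry, I don't know that instruction.")

print(assistant("What time is it?"))  # matches a rule exactly
print(assistant("What's the time?"))  # same meaning, no matching rule: it fails
```

The second call fails even though a child would consider it the same question — a concrete way to see that the machine matches instructions rather than understanding meaning.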

What to watch for: Children this age often become attached to voice assistants. It’s worth gently reinforcing that Alexa or Siri isn’t a friend, doesn’t have feelings, and doesn’t remember them between conversations. Not to be harsh — but to protect the child from forming expectations that will eventually confuse them.

Ages 6–8: “It Learns from Examples”

By age six or seven, children can handle the idea that AI learns, not just follows.

The key conversation: “How do you think the phone knows what a cat looks like when you take a photo?”

Wait for their answer. Then: “The phone was shown millions of photos of cats, with each one labeled ‘cat.’ After seeing so many examples, it got very good at recognizing cats on its own. It learned from examples — kind of like how you learned to recognize letters.”

Great follow-up questions:

  • “What if it had only been shown fluffy cats? Would it recognize a hairless cat?”
  • “What if someone had accidentally labeled a dog ‘cat’ in the training photos? Would that confuse it?”
  • “Do you think it actually knows what a cat is, or does it just recognize the pattern?”
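For parents who want to see “learning from examples” in miniature, here is a sketch of one of the simplest learning methods there is: nearest-neighbor classification. The animals and numbers are made up for illustration — real photo recognition uses millions of examples and far more complex models, but the principle is the same:

```python
# "Learning from labeled examples" in miniature: 1-nearest-neighbor.
# Each animal is described by two invented numbers: (ear pointiness, fluffiness).

examples = [
    ((9, 8), "cat"), ((8, 9), "cat"), ((9, 7), "cat"),  # pointy-eared and fluffy
    ((3, 6), "dog"), ((2, 7), "dog"), ((4, 5), "dog"),  # floppier ears
]

def classify(animal):
    """Predict the label of the closest labeled example -- that's the 'learning'."""
    def distance(example):
        (x, y), _ = example
        return (x - animal[0]) ** 2 + (y - animal[1]) ** 2
    _, label = min(examples, key=distance)
    return label

print(classify((8, 8)))  # close to the cat examples -> "cat"
print(classify((3, 5)))  # close to the dog examples -> "dog"

# The follow-up questions in action: a hairless cat (pointy ears, not fluffy)
# sits far from every fluffy-cat example, so the prediction barely squeaks by --
# and a single mislabeled training photo could flip it.
print(classify((9, 1)))
```

This also makes the follow-up questions concrete: if every training cat is fluffy, an unfluffy cat lands near the edge of what the model has seen, and one mislabeled example can change the answer.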

The activity to try this week: Play the “Instructions Game” — have your child write step-by-step instructions for making a sandwich while you (the “robot”) follow them literally.

Ages 9–11: “The Data is the Message”

Children at this age can handle the more uncomfortable truth: AI is only as good as the data it learned from. And data reflects the real world, including its biases.

Start with a story: “There was an AI trained to tell wolves from dogs in photos. It got really good at it. But researchers discovered something weird — it wasn’t really recognizing wolves. It was recognizing snow. Most wolf photos had snowy backgrounds. So the AI learned ‘snow = wolf’ instead of actually learning what wolves look like.”

Then ask: “What should the researchers have done differently? What if a camera on a snowy hiking trail used this AI to identify wildlife?”

The bigger point to make: “AI learns what’s in its training data. If the data is biased — which means it doesn’t represent everyone fairly — the AI will be biased too. Not on purpose. Just because that’s what it learned.”
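The wolf-and-snow story can also be shown in miniature. The sketch below — with invented photos and features — builds a “classifier” that simply picks whichever single yes/no feature best separates the training photos. Because snow happens to correlate perfectly with wolves in the training set, snow is what it learns:

```python
# The wolf/snow story in miniature. Each "photo" is a set of yes/no features,
# plus its true label. Features and data are invented for illustration.

training = [
    ({"snow": True,  "yellow_eyes": True},  "wolf"),
    ({"snow": True,  "yellow_eyes": False}, "wolf"),
    ({"snow": True,  "yellow_eyes": True},  "wolf"),
    ({"snow": False, "yellow_eyes": False}, "dog"),
    ({"snow": False, "yellow_eyes": True},  "dog"),
    ({"snow": False, "yellow_eyes": False}, "dog"),
]

def best_feature(data):
    """Pick the single feature that best predicts 'wolf' on the training set."""
    def accuracy(feature):
        return sum(photo[feature] == (label == "wolf") for photo, label in data)
    return max(["snow", "yellow_eyes"], key=accuracy)

feature = best_feature(training)
print(feature)  # "snow" -- it separates the training photos perfectly

def predict(photo):
    return "wolf" if photo[feature] else "dog"

# A wolf photographed on grass: no snow, so the shortcut fails.
print(predict({"snow": False, "yellow_eyes": True}))  # "dog" -- wrong!
```

The model isn’t malicious or broken — it did exactly what it was asked, on the data it was given. That is the whole point of the story: fix the data, not the blame.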

Where this goes: Children this age are ready to hear that AI has been used in ways that treated some people unfairly — in hiring, in banking, in criminal justice. Keep it factual, not scary. The goal is critical awareness, not distrust.

Ages 12–14: “Ask the Hard Questions”

Teenagers are ready for genuine ethical complexity. The best conversations at this age aren’t about explaining AI — they’re about thinking through hard problems together.

Questions that actually generate good conversation:

  • “If a company uses AI to decide who gets hired, and the AI was trained mostly on historically successful candidates — who were mostly men — does the company have a responsibility to fix that? How?”
  • “If an AI wrote an essay that a student submitted as their own, who is responsible for the ideas in it?”
  • “Should AI systems that make decisions about bail or sentencing be required to explain their reasoning? What if they can’t?”

You don’t need answers to these questions. The conversation is the point.

One thing to be honest about: At this age, teenagers can tell when adults are BSing them. If you don’t know something, say so. “I don’t actually know how that works — want to figure it out together?” is a more useful response than a confident but incorrect explanation. It also models the intellectual humility that is genuinely one of the most important skills for navigating a world with increasingly powerful AI.

Ages 15–18: “What Does This Mean for Your Life?”

Older teenagers are starting to think about careers, futures, and the shape of their adult world. The AI conversation shifts from “how does it work” to “what does this mean for me?”

Be honest but not alarming: AI will change many careers. Some jobs that exist today won’t exist in ten years. But new jobs will emerge, just as they have with every previous technological shift. The question isn’t whether to be afraid — it’s what skills remain distinctly valuable when a lot of cognitive tasks can be automated.

Skills worth discussing:

  • Judgment under uncertainty
  • Creative problem-framing (not just solving)
  • Ethical reasoning
  • Human connection and relationship
  • The ability to ask the right questions (not just answer them)

The most important thing to say: “The people who will thrive aren’t the ones who compete with AI at what AI is good at. They’re the ones who know how to direct AI, who understand its limitations, and who bring something irreplaceable to the table.”

Then ask: “What do you think you bring to the table that a computer couldn’t?”

When the Conversation Gets Hard

Sometimes the AI conversation will go somewhere uncomfortable. A teenager might say “then what’s the point of school if AI can do all the work?” A younger child might ask “will robots take my dad’s job?”

These are fair questions. Answer them honestly.

“School isn’t just about information — it’s about building your mind. AI can produce information; it can’t build your judgment.”

“AI is changing a lot of jobs, including some that people we love do. We’re all trying to figure out what that means. The most honest thing I can tell you is that nobody knows exactly how it’ll go, but people who understand AI will be better positioned than people who don’t.”

These conversations build trust. And they model something important: that uncertainty is not a reason to stop thinking. It’s a reason to keep asking questions.


Looking for structured activities to complement these conversations? Our activity library has hands-on projects for every age group.
