The Idea

AI bias is not a glitch. It’s a consequence of training systems on data that reflects — and sometimes amplifies — the biases that already exist in the world. Understanding this is essential for any young person who will live and work in a world shaped by AI.

This activity isn’t about making children afraid of AI. It’s about helping them understand that AI is a human product: made by humans, trained on human data, and subject to human error and human prejudice. That means it can be held accountable, and it can be changed.

Background (Read Together First)

Before starting, read and discuss this story:

In 2018, Amazon scrapped a recruiting AI tool after discovering it systematically downgraded resumes from women. The system had been trained on ten years of Amazon’s hiring decisions — and since historically most successful candidates were men, the AI learned that “male” was a positive signal.

The algorithm didn’t “decide” to be sexist. It did what machine learning always does: it found patterns in the training data. The pattern was real. The problem was that the pattern reflected historical bias, not capability.
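
If anyone in the family is comfortable with a little Python, you can watch this happen in miniature. The sketch below uses entirely made-up hiring records, not Amazon’s data; the “model” simply scores each resume feature by how often it co-occurred with a past hire:

    from collections import defaultdict

    # Hypothetical historical hiring records: (resume features, was hired?)
    history = [
        ({"male", "python"}, True),
        ({"male", "java"}, True),
        ({"male", "python"}, True),
        ({"female", "python"}, False),  # similar skills, not hired
        ({"female", "java"}, True),
        ({"male", "java"}, False),
    ]

    # "Train": score each feature by how often it co-occurred with a hire.
    seen, hired = defaultdict(int), defaultdict(int)
    for features, outcome in history:
        for f in features:
            seen[f] += 1
            hired[f] += outcome

    for f in sorted(seen):
        print(f"{f:8} hire rate: {hired[f] / seen[f]:.2f}")

    # "male" scores 0.75 and "female" scores 0.50 -- not because of
    # capability, but because the history the model learned from was skewed.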

Ask: “What should Amazon have done differently? Is there any way to fix this kind of problem?”

Investigation 1: The Wolf-in-Snow Problem (15 minutes)

This is one of the most famous examples in AI bias research.

The story: Researchers trained an AI to distinguish wolves from dogs. It achieved very high accuracy. But when they analyzed what the AI was actually using to make its decisions, they found something surprising: it was mostly detecting snow. Most wolf photos in the training set had snowy backgrounds. The AI learned “snow = wolf” rather than actually learning what wolves look like.
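
For coding families, here is the same shortcut in miniature. The photos, features, and labels below are all invented; the point is that the laziest rule consistent with skewed training data can ignore the animal entirely:

    # Each "photo" is reduced to two invented features.
    # In this training set, snow and wolf always co-occur.
    train = [  # (snowy background?, wolf-like anatomy?, label)
        (1, 1, "wolf"),
        (1, 1, "wolf"),
        (0, 1, "dog"),  # a husky on grass
        (0, 0, "dog"),
    ]

    # The laziest rule that fits this data perfectly ignores the animal:
    def shortcut_model(snowy, anatomy):
        return "wolf" if snowy else "dog"

    # 100% accuracy on the training set...
    print(all(shortcut_model(s, a) == y for s, a, y in train))  # True

    # ...but it breaks the moment the background changes:
    print(shortcut_model(0, 1))  # a wolf on grass -> "dog" (wrong)
    print(shortcut_model(1, 0))  # a dog in snow   -> "wolf" (wrong)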

Investigate:

  1. Why did this happen? What was wrong with the training data?
  2. How would you fix it? What would better training data look like?
  3. What if this wolf-detection AI were used in a real application — say, wildlife protection cameras? What could go wrong?
  4. Can you think of a situation where “learning the wrong thing” in an AI system could cause real harm to real people?

Investigation 2: Facial Recognition and Race (20 minutes)

The evidence: Multiple independent studies, including research by MIT’s Joy Buolamwini, have found that commercial facial recognition systems have significantly higher error rates for darker-skinned faces, especially darker-skinned women. Some systems misidentified dark-skinned women 35% of the time, while misidentifying light-skinned men less than 1% of the time.
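
One technical lesson hiding in those numbers: a single overall accuracy figure can conceal a huge per-group gap. This sketch uses invented results, loosely shaped like the disparities above, to show why testing each group separately matters:

    # Hypothetical results for 200 test photos. Each record:
    # (group, identified correctly?)
    results = (
        [("light-skinned men", True)] * 99 + [("light-skinned men", False)] * 1
        + [("dark-skinned women", True)] * 65 + [("dark-skinned women", False)] * 35
    )

    overall = sum(ok for _, ok in results) / len(results)
    print(f"overall accuracy: {overall:.0%}")  # 82% -- sounds respectable

    for group in ("light-skinned men", "dark-skinned women"):
        subset = [ok for g, ok in results if g == group]
        print(f"{group}: {sum(subset) / len(subset):.0%} accurate")

    # 99% for one group, 65% for the other: the headline number hides
    # a 35% error rate for the people the system fails most.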

Why does this happen?

  • Training datasets were not representative — they contained more images of lighter-skinned faces
  • The teams building these systems were not diverse, making it less likely anyone would notice or prioritize the problem
  • These systems were deployed and sold commercially before the bias was publicly documented

Discuss:

  1. If you were a police department considering using facial recognition, what would you want to know about its accuracy before using it?
  2. What harm could a 35% error rate cause in a high-stakes decision?
  3. Who do you think should be responsible for testing AI systems for bias before they’re deployed?
  4. Is it possible to build an AI system that’s completely unbiased? Why or why not?

Research activity: Look up “Joy Buolamwini AI bias” together and read the summary of her findings. Note that she founded the Algorithmic Justice League specifically to address this problem. Discuss: what does it mean that a researcher had to found a nonprofit to get the AI industry to take bias seriously?

Investigation 3: Design Your Own Fairness Test (15 minutes)

Imagine you’re testing an AI system that decides whether a job applicant should move to the next round of interviews.

Design a fairness test:

  1. What groups would you want to check? (Gender, race, age, disability, hometown, school name?)
  2. What outcome would tell you the system is fair? (Same acceptance rate? Same accuracy rate?)
  3. How would you get the test data you’d need?
  4. What would you do if you found the system was biased — would you fix it, scrap it, or disclose it?

Write your answers down. There’s no single right answer, but articulating the reasoning matters.
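
If you want to try the test itself in code, here is a minimal Python sketch of the two candidate standards from question 2, run on invented data. “Advance” is the system’s decision; “qualified” is the ground truth you would need to collect for question 3:

    # A minimal fairness check on a labeled test set. All data is invented.
    applicants = [
        # (group, system says advance?, actually qualified?)
        ("group A", True,  True),
        ("group A", True,  False),
        ("group A", False, False),
        ("group A", True,  True),
        ("group B", True,  True),
        ("group B", False, True),
        ("group B", False, False),
        ("group B", False, True),
    ]

    for group in ("group A", "group B"):
        rows = [(adv, qual) for g, adv, qual in applicants if g == group]
        advance_rate = sum(adv for adv, _ in rows) / len(rows)
        accuracy = sum(adv == qual for adv, qual in rows) / len(rows)
        print(f"{group}: advance rate {advance_rate:.0%}, accuracy {accuracy:.0%}")

    # group A: advance rate 75%, accuracy 75%
    # group B: advance rate 25%, accuracy 50%
    # Question 2 is really asking which numbers should match: equal advance
    # rates and equal accuracy are different standards, and a system can
    # satisfy one while failing the other.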

The AI Connection

Pull together the threads:

“AI bias is a civil rights issue. When systems that make decisions about loans, jobs, bail, college admissions, or medical diagnoses are biased, that bias falls hardest on people who are already disadvantaged. The stakes are not abstract.”

“The solution isn’t to not use AI. The solution is:

  • Diverse training data that represents the people the system will affect
  • Diverse teams that can notice problems others might miss
  • Testing before deployment, especially for high-stakes decisions
  • Transparency about what systems can and can’t do
  • Accountability — someone has to be responsible when an AI system causes harm”

Resources to Explore Further

  • Algorithmic Justice League (ajl.org) — Joy Buolamwini’s organization documenting and advocating against AI bias
  • AI Now Institute — research on the social implications of AI
  • ProPublica’s “Machine Bias” investigation — a Pulitzer Prize-finalist investigation into racial bias in COMPAS, a risk-assessment algorithm used in sentencing and bail decisions

Ready for more?

Explore all activities in the library, or find ones matched to your child's age.
