The Idea

Every streaming service, social media platform, and e-commerce site has recommendation systems. These systems track what you interact with, find users who behave similarly to you, and recommend what those similar users liked. This is called collaborative filtering, and it’s one of the most influential algorithms in daily life.

This game makes collaborative filtering physical. Players rate movies (or songs, or products), the “algorithm” (a designated player) finds similarities, and recommendations are made. The results reveal exactly why recommendation systems are powerful — and why they create filter bubbles.

Setup (10 minutes)

Create your content library. Write 25–30 movie titles on individual index cards. Include variety: blockbuster action, indie films, documentaries, animated kids’ movies, foreign films, classic Hollywood, recent releases. The more varied, the better.

Spread the cards face-up on the table.

Each player gets a rating sheet with two columns: Movie Title | My Rating (1–5 stars). Players rate every movie they’ve actually seen and leave blank any they haven’t.

Give 5–7 minutes for individual rating. No discussing ratings yet.

Round 1: Basic Recommendation

Collect all rating sheets. The designated “algorithm” player now has everyone’s ratings.

Pick one player to receive recommendations. Their job is to say nothing and wait.

The “algorithm” does this:

  1. Look at Player A’s highest-rated movies (4–5 stars)
  2. Find another player whose 4–5 star movies overlap most with Player A’s
  3. Look at what that similar player liked that Player A hasn’t seen
  4. Recommend the top 3

The “algorithm” announces: “Based on your taste, I recommend [titles] because people who like what you like also loved these.”
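The four steps the “algorithm” player performs can be sketched in a few lines of Python. This is a minimal illustration, not a production recommender; the player names, movie titles, and ratings below are invented for the example.

```python
def loved(ratings, player):
    """Movies the player rated 4-5 stars (step 1)."""
    return {m for m, r in ratings[player].items() if r >= 4}

def recommend(ratings, target, top_n=3):
    # Step 2: find the other player whose loved movies overlap most with the target's.
    others = [p for p in ratings if p != target]
    similar = max(others, key=lambda p: len(loved(ratings, p) & loved(ratings, target)))
    # Step 3: what the similar player loved that the target hasn't seen at all.
    unseen = loved(ratings, similar) - set(ratings[target])
    # Step 4: recommend the top picks, highest-rated first.
    return sorted(unseen, key=lambda m: ratings[similar][m], reverse=True)[:top_n]

# Invented ratings: player -> {movie: stars}
ratings = {
    "A": {"Heat": 5, "Alien": 4, "Up": 2},
    "B": {"Heat": 5, "Alien": 5, "Blade Runner": 5, "Her": 4},
    "C": {"Up": 5, "Coco": 5},
}
print(recommend(ratings, "A"))  # B overlaps most with A, so B's unseen favorites win
```

Player B shares two 4–5 star movies with Player A, so B’s favorites that A hasn’t seen become the recommendations — exactly the hand computation the “algorithm” player does at the table.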

Ask Player A: Do these recommendations feel right? Are you surprised? Would you actually watch these?

Round 2: The Engagement Trap

Now play a second round — but the algorithm has a different objective.

New rule: The algorithm now maximizes for “engagement.” It still recommends movies rated 4–5 stars, but among those it favors the ones players spent the most time discussing. (To gather this signal: when players rate, ask each to say one sentence about every film they gave 4–5 stars; the algorithm notes which movies drew the most commentary.)

The recommendations shift. Now they skew toward movies that provoke strong reactions — not necessarily the “best” movies, but the ones that got people talking.
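The change in objective is a one-line change in the scoring function. In this sketch, “engagement” is approximated by a made-up count of how many sentences of table talk each movie generated; titles and numbers are invented.

```python
def engagement_score(stars, talk_sentences):
    # Only 4-5 star movies qualify; among those, more discussion wins.
    return talk_sentences if stars >= 4 else 0

# Invented data: movie -> (stars, sentences of discussion it generated)
favorites = {"Heat": (5, 1), "Blade Runner": (5, 4), "Her": (4, 6)}
ranked = sorted(favorites, key=lambda m: engagement_score(*favorites[m]), reverse=True)
print(ranked)  # discussion-heavy picks jump ahead of quietly loved ones
```

Note that "Heat" was rated highest but discussed least, so it drops to the bottom: same ratings, different objective, different recommendations.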

Ask the group:

  • Are these recommendations different from Round 1?
  • Which set of recommendations feels more like what you’d actually see on Netflix?
  • Which set is “better” — the ones you’d genuinely love, or the ones you’d spend the most time on?

This is the core tension: optimizing for engagement is not the same as optimizing for enjoyment or wellbeing. Netflix wants you to keep watching. That’s not always the same as showing you what’s best for you.

Round 3: Filter Bubbles

For the final round, remove all cards from the table except those in the genre of the most-recommended movies so far.

Ask players to rate this reduced selection. Notice what happens: the recommendations become even more similar. Players who liked thrillers get more thrillers. Players who liked dramas get more dramas.

The filter bubble effect: “If you only see recommendations based on what you’ve liked before, you never discover things you might love but haven’t tried yet. Your algorithm ‘world’ gets smaller and smaller.”
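The shrinking-world loop can be simulated in a few lines. This is a toy model, assuming a tiny invented catalog with one genre label per movie and a rule that each round keeps only cards matching the most-recommended genre.

```python
from collections import Counter

# Invented catalog: movie -> genre
catalog = {"Heat": "thriller", "Alien": "thriller", "Se7en": "thriller",
           "Up": "animation", "Coco": "animation", "Her": "drama"}
liked = {"thriller", "drama"}  # the player starts with two tastes

for rnd in range(1, 4):
    # Recommend everything matching the player's current tastes.
    picks = [m for m, g in catalog.items() if g in liked]
    # Remove every card that wasn't recommended, as in Round 3.
    catalog = {m: catalog[m] for m in picks}
    # The single most-recommended genre dominates the next round.
    liked = {Counter(catalog[m] for m in picks).most_common(1)[0][0]}
    print(f"round {rnd}: {sorted(picks)}")
```

The catalog shrinks from six cards to three, and the drama disappears after one round: nothing outside the dominant genre can ever be recommended again.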

Ask: “What have you seen on Netflix or YouTube that surprised you — something you wouldn’t have chosen yourself but loved? How do you think you found it?”

Discussion Questions

On how it works:

  • “Why do you think YouTube shows you related videos in a chain? Is that good or bad?”
  • “Spotify has a ‘Discover Weekly’ playlist designed to show you things outside your usual taste. Why would they build that?”

On the implications:

  • “If everyone on social media only sees content that matches their existing beliefs, what happens over time?”
  • “Does a recommendation algorithm have values? It optimizes for something — what?”
  • “What would a ‘good’ recommendation algorithm optimize for, if not pure engagement?”

On your own behavior:

  • “Do you ever notice that social media shows you more of something after you interact with it once?”
  • “Have you ever deliberately broken your feed by liking things very different from your normal content?”

The AI Connection

Collaborative filtering is one of the most commercially successful applications of machine learning. It powers hundreds of billions of dollars in commerce and shapes cultural consumption for billions of people.

The game reveals several key concepts:

  • Similarity measurement: How does the algorithm decide who is “like” you? The answer shapes everything.
  • Optimization target: What the algorithm maximizes for determines what you see. Different targets → different recommendations → different effects on users.
  • Cold start problem: New users have no rating history, so the algorithm has nothing to go on. How do services handle new users?
  • Filter bubbles: Recommending what you already like keeps you comfortable — but narrow.
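The first bullet is worth making concrete: two reasonable similarity metrics can disagree about who is most “like” you. Below, a raw overlap count (used in the game) is compared with cosine similarity, a standard alternative; all ratings are invented for illustration.

```python
import math

def overlap(a, b):
    """Raw count of shared 4-5 star movies (the game's metric)."""
    return len({m for m in a if a[m] >= 4} & {m for m in b if b[m] >= 4})

def cosine(a, b):
    """Cosine similarity over the movies both players rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[m] * b[m] for m in shared)
    return dot / (math.sqrt(sum(a[m] ** 2 for m in shared))
                  * math.sqrt(sum(b[m] ** 2 for m in shared)))

target = {"Heat": 5, "Alien": 5, "Up": 1}
x = {"Heat": 4, "Alien": 4, "Up": 5}  # shares two favorites, but disagrees on "Up"
y = {"Heat": 5, "Up": 1}              # fewer shared favorites, identical opinions
print(overlap(target, x), overlap(target, y))  # overlap says x is more similar
print(cosine(target, x), cosine(target, y))    # cosine says y is a perfect match
```

Overlap counts shared favorites and picks x; cosine compares the shape of the ratings and picks y, who agrees with the target on every shared movie. The metric you choose decides whose taste gets copied.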

The hardest question, worth sitting with: Is it the algorithm’s job to show you what you’d choose, or what would be good for you? Who should decide?
