Ages 15–18

Future Shapers

"You'll work alongside AI for your entire career. The question is whether you direct it or it directs you."

Teenagers aged 15–18 are genuinely poised to shape how AI develops and how society responds to it. They’ll be the engineers, ethicists, journalists, policymakers, and artists navigating the next two decades of AI development. The conversation at this age should match those stakes: not catastrophizing, not dismissing, but genuinely engaging with the hardest questions.

Prompt Engineering: A Real Skill Worth Learning Now

Knowing how to communicate with AI systems is already a marketable skill and will become more so. More importantly, learning to prompt well forces a kind of precision and clarity of thinking that is inherently valuable.

The core insight of prompt engineering: AI systems don’t read minds. The quality of the output is directly determined by the quality of the input. Vague prompt → vague output. Specific, contextualized, constrained prompt → useful output.

Key prompting principles teenagers can learn and practice:

Give context. “Write a college essay” produces generic output. “Write a college essay for a first-generation student applying to computer science programs, focusing on how growing up helping my parents navigate bureaucracy in a second language taught me to think like a systems debugger” produces something worth reading.

Specify constraints. Length, format, tone, what to include, what to avoid, who the audience is.

Iterate. The first output is a draft. Follow up with: “Make this more conversational,” “The third paragraph is too long, tighten it,” “The ending feels weak — what are three alternative endings?”

Know when AI is the wrong tool. For anything requiring current knowledge, precise citations, personal experience, or genuine emotional intelligence, AI is unreliable or inappropriate. Knowing this saves time and prevents embarrassing errors.
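The first three principles can be made concrete with a small helper that assembles a structured prompt from its parts. This is a sketch, not any vendor’s API: the function and its field names (`context`, `constraints`, `audience`) are invented here purely to illustrate the habit of stating context, constraints, and audience explicitly.

```python
def build_prompt(task, context="", constraints=None, audience=""):
    """Assemble a structured prompt from task, context, constraints, audience.

    Illustrative only -- these field names are not part of any AI provider's
    API. The point is to make 'give context, specify constraints' a habit.
    """
    parts = [f"Task: {task}"]
    if context:
        parts.append(f"Context: {context}")
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if audience:
        parts.append(f"Audience: {audience}")
    return "\n\n".join(parts)

# A vague prompt vs. a specific, contextualized, constrained one:
vague = build_prompt("Write a college essay")
specific = build_prompt(
    "Write a college essay",
    context=("First-generation student applying to computer science "
             "programs; helping my parents navigate bureaucracy in a second "
             "language taught me to think like a systems debugger"),
    constraints=["650 words or fewer", "conversational tone",
                 "concrete anecdotes, no cliches"],
    audience="Admissions readers who skim hundreds of essays",
)
```

Pasting the `specific` version into a chat interface and then iterating on the draft (“tighten the third paragraph,” “give me three alternative endings”) exercises all four principles at once.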

AI and Academic Integrity

This is an urgent, unresolved conversation, and teenagers deserve to be part of it honestly rather than receiving blanket rules that feel arbitrary.

The core tension: AI can produce text that looks like a student’s work but isn’t. Schools are struggling to respond. Detection tools are unreliable and disproportionately flag non-native English speakers. Blanket bans are unenforceable. The policy landscape is genuinely unsettled.

What’s clear, regardless of policy:

  • Using AI to avoid developing your own thinking stunts something valuable. Writing badly is how you learn to write well. Working through a difficult proof is how you develop mathematical intuition. If AI does the struggle for you, you don’t develop.
  • Transparency is always safer than concealment. If you’re using AI, be upfront about it. Most educators would rather know and discuss it than discover it later.
  • The skills AI can replicate are not the ones that matter most. Producing a coherent five-paragraph essay is table stakes. Analysis, synthesis, original argument, and the ability to defend your thinking in conversation — these are what higher education and employers actually care about, and they can’t be faked.

Have the conversation at home: “How are you thinking about using AI for schoolwork? What feels like it crosses a line for you?”

The Career Landscape

Teenagers are understandably anxious about what AI means for their futures. Some honesty here:

AI will displace significant volumes of work in many fields over the next two decades. Translation, legal document review, radiology reads, customer service, basic coding, financial analysis, copywriting — all of these are being partially or substantially automated. This doesn’t mean those careers disappear, but it means the nature of the work shifts significantly.

The skills that remain most valuable are consistently human:

  • Judgment under genuine uncertainty (not optimizing for a clear objective, but deciding what the objective should be)
  • Social and emotional intelligence (building trust, navigating conflict, leading people through change)
  • Creative synthesis (connecting ideas across domains in ways that haven’t been connected before)
  • Ethical reasoning in high-stakes, ambiguous situations
  • Asking the right questions — what problem are we actually trying to solve?

The fields most resistant to full automation are those that require all of the above, especially in combination: skilled trades, healthcare, education, social work, leadership, creative direction, fundamental research.

What teenagers can do now: Learn to use AI tools fluently and critically. Build things. Develop genuine depth in at least one domain. Cultivate the skills AI can’t replicate.

Deepfakes and Information Integrity

Synthetic media — AI-generated images, video, and audio that appear to show real people saying or doing things they didn’t — is one of the most serious near-term threats to democratic discourse. Teenagers need to understand this not just as consumers of information but as future citizens and possibly future creators.

The tools for generating realistic synthetic media are now accessible and cheap. The tools for detecting it are lagging behind. This asymmetry matters.

What teenagers should know:

  • Synthetic media exists and is increasingly convincing, and no detection tool is currently reliable.
  • The most powerful tool against deepfakes is skepticism before emotion. If a video makes you feel something strongly, that’s the moment to pause and verify. Strong emotion is exactly when our critical thinking degrades.
  • Source literacy matters more than ever. Where did this come from? Who is claiming it’s real? What is their incentive? Can I find this from multiple independent sources?
  • Creating deepfakes of real people without their consent has serious ethical and legal implications. Even for humor or satire, the norms are still forming and the harms are real.

Building Real Things

Teenagers who can build real projects with AI tools have a meaningful advantage. This means:

  • Using AI APIs (OpenAI, Anthropic, Google) to build actual tools
  • Understanding enough about how these systems work to use them responsibly
  • Learning to evaluate output critically rather than accepting it
  • Building a portfolio of real work, not hypothetical skills

The “Prompt Engineer Challenge” activity on this site is a starting point. From there: try building a simple chatbot, automating a repetitive task, or using AI to analyze something you genuinely care about. Real projects teach things tutorials can’t.
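The “simple chatbot” project above can be sketched as a short conversation loop. The class below is a minimal skeleton, not a specific vendor’s code: it keeps history in the role/content message format that major chat APIs (OpenAI, Anthropic) expect, and `echo_model` is a stub you would replace with a real SDK call once you have an API key.

```python
class ChatSession:
    """Minimal chatbot skeleton: tracks conversation history as a list of
    {"role": ..., "content": ...} dicts, the format most chat APIs expect.

    The model function passed to respond() is a stand-in; in a real project
    it would wrap a provider SDK call (e.g. a chat-completion endpoint).
    """

    def __init__(self, system_prompt):
        self.system_prompt = system_prompt
        self.messages = []  # alternating user/assistant turns

    def respond(self, model_reply_fn, user_text):
        # Record the user turn, get a reply, record the assistant turn.
        self.messages.append({"role": "user", "content": user_text})
        reply = model_reply_fn(self.system_prompt, self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

# Stub model so the sketch runs without a key or network: it just echoes.
def echo_model(system_prompt, messages):
    return f"(echo) {messages[-1]['content']}"

session = ChatSession("You are a homework helper that asks guiding questions.")
print(session.respond(echo_model, "What is prompt engineering?"))
```

Swapping the echo stub for a real model call turns this into a working tool, and the history list is exactly what you would send as the `messages` parameter. Small, real, and yours: that is the kind of project that teaches what tutorials can’t.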