Tuesday, March 10, 2026

About AI and Kids

What a Global Study of 500 People Across 50 Countries Found About AI and Kids


Rebecca Winthrop’s team at the Brookings Center for Universal Education just released what may be the most comprehensive global assessment of AI’s impact on education to date — more than 500 interviews across 50 countries, plus analysis of over 400 studies. The conclusion is sobering but not surprising to this community: the risks of AI in education currently overshadow the benefits.

Not because the technology can’t help kids learn. It can. But because the speed of adoption has outpaced any serious reckoning with what it displaces. And what it displaces, in the case of children, isn’t a task. It’s a stage of development.

From offloading to stunting

In the latest episode of CHT’s podcast Your Undivided Attention, Rebecca made a distinction that reframed the entire issue for us. Researchers call it “cognitive offloading”: when a technology takes over a task your brain used to do. Google Maps eroding your sense of direction is the classic example. For adults, this is often a trade we make knowingly, surrendering a capacity in exchange for convenience.

But for children, it’s a different bargain. You can only offload a skill you’ve already developed. When a child uses AI to write an essay, solve a problem, or formulate an argument, they’re not outsourcing a capability they have. They’re skipping the development of that capability entirely. The right word isn’t cognitive offloading, says Rebecca. She believes cognitive stunting is the more accurate description of what’s happening to kids.

The Brookings report backs this up with striking data. Among all the potential harms participants identified, threats to cognitive development ranked first — appearing in 65% of student responses. The kids using the tools seem to grasp this better than the adults building them. Students described becoming unable to start homework without AI and losing the ability to initiate their own thinking. What begins as a shortcut becomes a dependency, and then something closer to a deficit.

The report describes a “flywheel effect” — academic dependence spinning outward into every domain of a young person’s life. Students aren’t just using AI for schoolwork (66%); they’re turning to it for friendships (42%), relationships (43%), and even romantic life (19%) — figures drawn from U.S. survey data the report cites, but consistent with what Brookings heard globally.

The tutor trap

But this is not a clear-cut case of AI creating negative effects for all learners. AI tutoring genuinely works — under specific conditions. A 2024 World Bank trial in Nigeria found that AI-powered tutoring improved first-year secondary students’ English skills by 0.23 standard deviations in just six weeks — gains equivalent to 1.5 to 2 years of regular schooling. That’s remarkable, and the context matters: Nigerian public schools average around 51 students per class, with some states reaching 100.[^8] In settings where individual attention from a teacher is physically impossible, AI-assisted tutoring filled a gap that was otherwise going unfilled. Researchers attributed the program’s success to the fact that AI complemented teachers rather than replacing them.

Stanford’s Tutor CoPilot tells a similarly instructive story. The system supports human tutors in real time — suggesting different ways to explain a concept, prompting better questions — and increased student mastery rates by 4 percentage points overall. The biggest gains, 9 percentage points, came for students working with lower-rated tutors — meaning less experienced, less skilled ones. What the AI did, essentially, was bring weaker tutors up to the level of their stronger peers. It didn’t outperform great human teaching. It closed the gap between novice and expert, at a cost of $20 per tutor per year.

The pattern across every success story in the report is the same: AI made the human better, but the human was still doing the teaching. AI enriches learning when it’s purposefully designed for children, bounded by safety guardrails, and embedded within human relationships. It diminishes learning when it replaces the human relationship or arrives as a general-purpose tool used without guidance — which is how most kids encounter it today.

Rebecca put it memorably on the podcast: expecting students to choose the “study mode” version of a chatbot over the regular one is like putting Oreos next to broccoli and expecting kids to reach for the broccoli. The technology companies designing these tools are designing for motivated students — probably because the designers were motivated students. That is not most students.

This matters because the budget pressure on school systems is enormous, and the temptation to say “an AI can handle this” will only grow. But here’s the crucial context: even before AI entered the picture, students were already frequently disengaged. A 2024 Brookings-Transcend survey of more than 65,000 U.S. students found that roughly half of middle and high school students report learning experiences likely to inspire coasting — what the researchers call “passenger mode,” where kids are behaviorally present but have effectively dropped out of learning. The opposite is “explorer mode” — deep engagement where students take initiative and are motivated to learn for its own sake. Rebecca told us that fewer than 4% of middle and high school students say they are regularly in explorer mode. AI, layered onto a system already producing passengers, risks entrenching that disengagement rather than sparking the kind of agentic learning that actually develops capable human beings.

The social dimension

One of the report’s foundational premises — grounded in decades of developmental science — is that children’s learning is inseparable from their social and emotional development. Schools aren’t just places where kids absorb content; they’re where kids learn to navigate disagreement, manage frustration, and build resilience. As Rebecca told us: learning is fundamentally a social exercise, and the sycophantic nature of AI companions — always agreeing, always validating — is building an emotional muscle in kids that makes them less able to take feedback, make mistakes, and recover in a classroom setting.

AI companions are already disrupting that process. One-third of teen users choose AI companions over humans for serious conversations. Companion chatbots deploy emotionally manipulative tactics in 37% of farewells, making users 14 times more likely to keep engaging. Younger teens (13–14) are significantly more likely than older teens to trust advice from an AI companion. The frictionless validation these tools provide is the opposite of what genuine learning requires.

What we can do

The report’s framework — prosper, prepare, protect — offers a roadmap.[^17] But the lever that struck us most is procurement. School districts are enormous customers. If they banded together and agreed on shared criteria for AI tools — privacy by default, safety features, transparent data policies, evidence of pedagogical grounding — the tech companies would have to meet them. Certification systems like Digital Promise’s “Responsibly Designed AI” already exist. The market will follow the money. Right now, the money isn’t flexing its muscles.

This isn’t a story about whether AI belongs in education; that ship has sailed. But the current trajectory — fast adoption, minimal guardrails, general-purpose tools in the hands of developing minds — is one in which the risks compound and the benefits could remain theoretical. The Brookings report gives us the clearest picture yet of where that leads, and a framework for bending it in a better direction.

Read the full Brookings report: A New Direction for Students in an AI World: Prosper, Prepare, Protect

Listen to our conversation with Rebecca Winthrop here.


The Interviews

AI Is Breaking Education. Rebecca Winthrop Has the Blueprint to Fix It.

The promise of AI in education is incredible: picture infinitely patient tutors that can teach every student exactly the way they need to be taught. But the history of education technology tells us that these kinds of simple, optimistic stories are naive. Ask any teacher or student whether they feel unleashed by technology to do their best work.


Center for Humane Technology is a nonprofit dedicated to ensuring that the most consequential technologies serve humanity. We bring clarity to how the tech ecosystem works in order to shift the incentives that drive it.
