We need to separate learning how to use technology, from using technology to learn.
They are not at all the same thing.
While the First Lady shares her dreams of robots replacing teachers, educators are picking up the pieces of the technology already in their classrooms.
Over the years, the promises of technology in the classroom have been recycled with each new wave of technology:
“I believe that the motion picture is destined to revolutionize our educational system and that in a few years it will supplant largely, if not entirely, the use of textbooks. I should say that on the average we get about two percent efficiency out of schoolbooks as they are written today. The education of the future, as I see it, will be conducted through the medium of the motion picture… where it should be possible to obtain one hundred percent efficiency.” — Thomas Edison, 1922
“Today, we really are on the verge of a watershed moment, when a new generation of Web-based tools could help assess the learning of each student while delivering personalized instruction that builds on specific strengths and addresses individual weaknesses. Unfortunately, if the nation—the education community, in particular—fails to embrace this new technology, we risk squandering an extraordinary opportunity to create a renaissance in teaching and learning.” — Education Week, 2012
In 2015 the Chan Zuckerberg Initiative launched with an explicit promise to develop software “that understands how you learn best and where you need to focus.” They even publicly declared that they could take an average student to the 98th percentile. CZI has since poured well over $100 million into “personalized learning” initiatives, with little to nothing to show for it.
Sound familiar? Every generation of EdTech promises the same thing — personalization at scale, equity through access, the end of the one-size-fits-all classroom — and frames it as finally (!) technologically achievable. While the mechanism has changed over time (radio, TV, the internet, computers, tablets, MOOCs, AI), the promises are nearly identical from generation to generation.
What we see over the decades is a transactionalization and commodification of education, wrapped in the language of “personalization.” Nearly 30 years into the internet era, we should all know by now that “personalized” means that you and your data are now the commodity.
Rupert Murdoch once described K-12 education as “... a $500 billion sector in the U.S. alone that is waiting desperately to be transformed by big breakthroughs that extend the reach of great teaching.” To investors and the tech industry, education was an untapped market waiting to be monetized. So… who exactly was “waiting desperately”…?
Today, almost 90% of schools provide children with internet-connected devices for the purpose of “learning,” and many schools use 10-15 different EdTech products and apps that parents, students, and teachers then spend hours trying to navigate. (Stop and think about how many apps you need to access just to engage with your kids’ learning.)
To parents, students, and educators, the promise is “personalization.” To the market, the promise is efficiency. But learning is an inherently arduous, inefficient, and repetitive process — that’s what makes learning stick. There’s no hack. The goal of technology in education (and, I would argue, in most spaces) should not be “efficiency”; it should be effectiveness. EdTech advocates will argue that effectiveness IS the goal, but we’ve seen enough to know that even if it is in theory, that is not at all what is happening in practice.
We’ve seen this movie before.
The promises of technology as a transformative force in the classroom have not only failed to materialize, they also seem to be counterproductive. Could technology be helpful? Sure. Are there places doing it well? Probably. But for the most part, what we’ve seen is technology deployed as a replacement for human capital — a replacement that in practice becomes an additional burden that teachers and staff have to carry, while degrading the experience of students. (Next time you’re around a teacher, ask them how much of their time they spend triaging IT issues or screen drama.)
The truth is that no matter what the POTENTIAL of a technology is, potential breaks down in implementation.
Unlike generative AI, the digital products currently in classrooms (laptops, iPads, and the internet) were known and understood when they were brought in. Maybe not by all teachers, but the people who built and managed the technology understood how and why it worked. (They were what scholars refer to as “adoption technologies.”) They entered the picture slowly, and their performance was fairly consistent, reliable, and predictable, with challenges largely stemming from keeping kids on task and preventing them from accessing particular kinds of content. (Which, by all measures, has proven nearly impossible, considering that kids regularly play games, access social media, watch YouTube, and even view porn on their school-issued devices.)
Generative AI is completely different. It has arrived at breakneck speed, it is neither consistent nor reliable, and even the people who create it don’t fully understand how it works. Even when it DOES turn out to be useful, there are countless conditions necessary to achieve that utility. Using it effectively is complex even for grown adults who are proficient with technology, and even then there are debates about its utility and limitations.
Generative AI also presents new and more complex risks that are less visible than playing video games and watching YouTube, and are (in my view) far more worrisome. I’m talking about issues like cognitive offloading, the reduction in critical thinking, and even a dulling of engagement that is visible even in Ivy League seminars.
So why exactly are we rushing to integrate this into education? I regret to inform you that I don’t have a definitive answer. I suspect it’s some combination of false marketing, magical thinking, and transactional decisions that prioritize operational “efficiency” over the very naturally inefficient process of learning. But what I can offer is a suggestion:
We need to separate learning how to use technology, from using technology to learn.
These are two totally different things, and they require different approaches and different conditions for success.
Teaching kids how to use new technology makes total sense… it’s a thing in the world, and they can and should understand that it exists and how to use it effectively. Sure, we can debate when that is educationally appropriate (kids don’t get calculators until they know basic arithmetic, for example), but it is a reasonable thing to propose. To do this, you need people who understand the technology teaching them about it, and providing opportunities to use it to accomplish something.
However, using technology as a vehicle for learning is an entirely different proposition. I’m not arguing that it’s never useful — but I will say that there are no “hacks” to learning. Maybe there are places in the workforce where it helps to shave off time or automate repetitive tasks… But REPETITION IS HOW HUMANS LEARN. Learning is an inherently tedious and laborious process. Sometimes it’s interesting and exciting, but much of the time it’s frustrating or F*CKING BORING… especially in grade school, when you’d rather be playing with your friends.
Even in the best of circumstances, AI turns learning into a hollow transaction.
The bottom line is that teaching through technology, especially at lower grade levels, is essentially an outsourcing exercise. It’s about replacing human capital… either because schools need to reduce costs, or because there’s a shortage of the necessary humans and they need an alternative. Either way, it’s not in the classroom because it’s helping kids learn.
But Emily, we need “digital literacy!”
If by “digital literacy” you mean learning to use computers and the internet to find and navigate information, create artifacts, and understand how the technology works… ok. This makes sense. But using technology to teach kids foundational knowledge (like reading, writing, and math) is NOT “digital literacy” — it is an illusion of learning that stunts actual literacy.
At the risk of being annoyingly repetitive here… Being digitally literate means knowing how to USE technology to accomplish something. It does not mean digital products teach you how to read, write, and do math. It’s like saying you can teach arithmetic using calculators. Are calculators bad? No. Can they be helpful? Of course. Can a 2nd grader learn basic arithmetic through a calculator? Of course not. You do calculations with a calculator in high school BECAUSE YOU F*CKING LEARNED THE BASICS FIRST, and can now do higher level math because the concepts exist in your brain.
Kids learn by doing. This is not my opinion; it is the commonly held position of experts in child psychology and development. They are little scientists, and they learn by EXPERIENCING the world… struggling, failing, and exploring. Giving them what (some) adults imagine is a “hack” to learning strips them of the very experiences that build knowledge, competence, and confidence.
But Emily! If we don’t do the AI our kids will get left behind!
Since this is an achievement-oriented question… who do you think will get into and graduate from MIT? The kid who can use AI, or the one who can design, build, and think outside of the box? Kids who become dependent on these products will live their lives in the box “AI” creates for them. Dependency strips them of agency. It strips them of the ability to produce their own thoughts, arrive at their own conclusions, and develop their own views and voice. The notion of being “left behind” is just parents buying the hype. Don’t buy it.
Justin Reich, Professor of Digital Media at MIT and author of this book, said that he has…
“never encountered an example of a school system – a country, state or municipality – that rapidly adopted a new digital technology and saw durable benefits for their students. The first districts to encourage students to bring mobile phones to class did not better prepare youth for the future than schools that took a more cautious approach. There is no evidence that the first countries to connect their classrooms to the internet stand apart in economic growth, educational attainment or citizen well-being.”
Read that again. “There is no evidence that the first countries to connect their classrooms to the internet stand apart…”
All of that said, unfortunately we are not the ones making these decisions. In many cases, schools are being pressured to integrate AI into the classroom for any number of reasons. So what can we do?
Draw a distinction between helping kids learn how to USE a technology, and using technology to learn.
Remind school officials that they have been sold this promise before, and they are currently picking up the pieces. What exactly makes them think things will go differently this time? Especially considering that, unlike laptops, iPads, and the internet, generative AI is neither consistent nor reliable, and even the people who create it don’t fully understand how it works.
Ask: WHAT IS THE GOAL? Is it closing operational gaps? What does success look like? What are the indicators of success, and how will you track them?
I appreciate guides like this one by MAMA, but “What is the goal?” and “What are the indicators of success?” should precede any other conversation. Skipping these questions accepts that generative AI in the classroom is a foregone conclusion — it should not be.
This is the second time around that we’ve been sold a tall tale about the potential of technology to transform education! and personalize learning for all kids! and democratize learning! We are picking up the pieces now… We fell for it once; let’s not fall for it again.
More on tech in the classroom…
A look at the total lack of evidence behind i-Ready — one of the most widely used ‘adaptive tutoring’ tools in U.S. classrooms — also by Jared Cooney Horvath.
The recent NY Times article about “Chromebook Remorse.”
This NY Times article diving into the relationship between tech and test scores.
A thoughtful look at what we can learn from past edtech failures.
Emily Cherkin’s essay (based on a talk) about the impact of technology on childhood and education.
This helpful article by Stephen Fitzpatrick that explains the difference between “adoption” technologies and “arrival” technologies.
Brain Snacks
And, since we are basically talking about the “enshittification” of education, this brilliant campaign by the Norwegian government is worth a watch:
Cited & mentioned here: Chalkbeat, Konrad Lawson, MAMA, Emily Cherkin, The Conversation, The New York Times, Jared Cooney Horvath, Justin Reich, Stephen Fitzpatrick, Futurism, EdWeek.