ICYMI // April 13
(In Case You Missed It) Stuff happening in tech that is relevant to your kids, classrooms, and lives.
Here’s what got my attention last week…
The man who invented the infinite scroll (Aza Raskin) sat down with Planet Money’s The Indicator to talk about what platforms should actually do to give people their attention back. Spoiler: he wants to kill what he created.
The Indicator also reported that wholesale electricity prices have increased 267% in areas near data centers over the past five years. They talk about the local communities and people being affected… and pushing back.
AI content scanners are being used to supercharge book banning. A tool called Blockade uses AI to scan books for content that could trigger parental objection, and it is being used to increase both the volume and the speed of book challenges.
In my first weekly roundup I flagged that a company is turning Zoom meetings into podcasts, but there’s an update. 404 Media kept digging and found that this also includes meetings of anonymous recovery programs… recorded and posted publicly.
AI is rapidly flattening and homogenizing discussion and thought, even in Ivy League seminar classes. As one Yale student put it, “everyone now kind of sounds the same.” This absolutely tracks with my own experience and observations in class over the past year. It is also validated by a new study on the homogenizing effect of LLMs on human expression and thought. The implications are massive: what we are talking about is essentially the manufacturing of a singular version of “truth” at a scale that even Orwell couldn’t imagine, but would recognize instantly.
New UCLA research finds that AI is removing the very thing that builds real learning. They conducted large-scale experiments involving fractions and reading comprehension, and found (among other things) that “after just ∼10 minutes of AI-assisted problem-solving, people who lost access to the AI performed worse and gave up more frequently than those who never used it.” The finding cuts to the heart of what education is actually for: not just producing correct answers, but building the capacity to arrive at them. They flag that because current AI systems are “optimized only for short-term helpfulness” they “risk eroding the very human capabilities they are meant to support.” Now consider what this means for your kids’ education. Full paper here. *Thank you to Erika Hall for flagging this.
Google’s AI Overviews are providing misinformation on a massive scale. An analysis conducted by AI startup Oumi found that AI Overviews are accurate about 91% of the time. That might sound tolerable, but the analysis highlights that Google processes roughly five trillion searches a year, meaning it serves up tens of millions of wrong answers every hour, and hundreds of thousands every minute. Google (of course) called the analysis flawed, but even Google’s own internal testing found that Gemini 3 produced incorrect information 28% of the time. In case it needs emphasis: Google holds over 90% of the global search engine market (and that includes schools / education), and AI Overviews sit at the top of every search results page. *Futurism highlighted that studies have also found that people are blindly trusting AI, with one report finding that “only 8 percent of users actually double checked an AI’s answer,” and another finding that “users still listened to AI when it gave them the wrong answer nearly 80 percent of the time.”
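If you want to sanity-check those “per hour” and “per minute” figures yourself, here’s a rough back-of-envelope calculation (mine, not Oumi’s or Google’s). It assumes every one of the ~5 trillion annual searches surfaces an AI Overview and that the ~9% miss rate applies uniformly, so treat it as an upper-bound sketch rather than a precise count.

```python
# Back-of-envelope check on the "wrong answers per hour / per minute" claim.
# Assumptions (mine): every search shows an AI Overview, and the ~9% inaccuracy
# rate from the Oumi analysis applies uniformly across all of them.

searches_per_year = 5_000_000_000_000   # ~5 trillion searches/year (Google's public figure)
error_rate = 1 - 0.91                   # Oumi analysis: accurate ~91% of the time

wrong_per_year = searches_per_year * error_rate
wrong_per_hour = wrong_per_year / (365 * 24)
wrong_per_minute = wrong_per_hour / 60

print(f"~{wrong_per_year:,.0f} wrong answers per year")     # ~450 billion
print(f"~{wrong_per_hour:,.0f} wrong answers per hour")      # ~51 million
print(f"~{wrong_per_minute:,.0f} wrong answers per minute")  # ~856 thousand
```

Even if only a fraction of searches actually trigger an Overview, the order of magnitude stays alarming.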
Sources & mentions this week: Futurism, 404, Erika Hall, NPR’s Planet Money


