ICYMI // May 9
(In Case You Missed It) Stuff happening in tech that is relevant to your kids, classrooms, and lives.
Here’s what caught my attention this week…
Canvas (basically a core operating system for classrooms) was hacked this week, throwing schools into chaos in the last stretch of the year. One researcher called it “the biggest student data privacy disaster in history.” This highlights the risk of building a single point of failure into the center of education, and raises questions about the degree to which education is mediated by (and dependent on) profit-driven technology. You can read my thoughts here.
A widely-cited study claiming ChatGPT improves student learning has been retracted due to “discrepancies in the meta-analysis.”
New research from the Rithm Project surveyed 2,383 young people about how AI is shaping their relationships, and the biggest predictor of high-risk AI use isn’t screen time… it’s loneliness. Teens who feel they can’t be real with the people around them are the most likely to turn to AI for emotional support. Over half of those using AI characters for companionship report feeling they have no one else. All the more reason to nurture friendships, independence, confidence, and relationships in which kids can really be themselves.
Amazon has been using AI pricing algorithms to raise prices across the entire internet. The FTC’s antitrust case alleges Amazon’s systems monitor competitors’ algorithms in real time and manipulate them into raising prices too.
Flock (the surveillance company that sells camera networks to police departments) accessed cameras inside a children’s gymnastics room, a playground, a school, a Jewish community center, and a pool as a sales demo for other police departments. A resident discovered this by pulling Flock’s access logs via a public records request, and Flock confirmed it. And in the final gut punch… the city council was informed, but renewed the contract anyway… 🤷🏻‍♀️
Speaking of surveillance… Meta cannot resist the siren call of surveilling its own employees. The company recently told U.S. staff that it is installing software (called the Model Capability Initiative) on their computers to record everything they do — and there is no opt-out. Baratunde Thurston lays it out here in his classic incisive tone: “I’ve been watching this man run the same playbook for twenty years. He monetized our friendships, then shoved fake AI friends in our feeds when the real ones dried up. He patented our ghosts so he could keep us posting after we die. He has shown contempt for every user, every family, every regulator in his path. Now he’s turning that same machine on his own people.” He goes on to remind us that AI didn’t make these decisions — people did. We (our companies, governments, organizations, schools…) can make different choices.
And in good / hopeful news… after 12 years as a tech columnist at the Washington Post and WSJ, Geoffrey Fowler is helping launch the new Youth AI Safety Institute at Common Sense Media to test the AI products shaping childhood. And while I wish we were on a timeline that didn’t require any of this, having Geoffrey shape this new institute is a good thing. I’m excited to see where it goes.
And since tomorrow is Mother’s Day… this beautiful ad/video is circulating again and it gets me every time ❤️ Hits harder right now as I am thinking about how to coach/guide kids to navigate the kinds of things we talk about here.
Sources & mentions this week: 404 Media, The Rithm Project, Spark & Stitch Institute, The Indicator, Washington Monthly, Nature/Retraction Watch, Baratunde Thurston, Life With Machines, Geoffrey Fowler


