Walking into 2026 with clear eyes.
A snapshot of my thinking on kids + tech as we start the new year.
The issues surrounding kids and technology have become impossible to ignore, and everyone suddenly has an opinion. Parents are exhausted. Schools are overwhelmed. And into that exhaustion and overwhelm comes a parade of experts, consultants, and entrepreneurs ready to sell them exactly what they want to hear: Wait until 16. Use this monitoring software. Sign this pledge. Follow these 5 steps. It’s simple, it’s actionable, it’s reassuring. And while it’s better than nothing, and there can absolutely be positive results, none of these solve the problem — they are all bandaids addressing the symptoms. (Which again, is better than nothing, and often all we can do.)
Here’s what we can be sure of at this point: there is no silver bullet, and there is no *single* place to point a finger. The problems we’re wrestling with are complex and multi-faceted, and at the root of them are really 🤬 up incentives. There are no easy answers. That’s why I started this project… to try to gain some clarity with people who have spent their careers creating or studying the technologies that have become part of the social infrastructure we are now raising our kids in.
In this spirit, below is a snapshot of my thinking as we start the new year. Specifically, what we can be sure we know about tech at this point.
What we can be sure of.
0. Technology is neither the entire problem, nor the entire solution.
Yes, these products are designed to addict us, and we are not exactly being given an “opt out” from emerging tech like AI. But eliminating technology is an easy answer to a complex problem — and there are no easy answers to complex problems, especially when industry-wide financial incentives collide with human behavior and culture.
1. There is no analogy for what parents are facing.
Apparently there’s a strong narrative circulating that parents are somehow to blame for their kids’ experiences on digital platforms. I appreciated Joseph Gordon-Levitt’s response to this. In short: Sure, parents can and should be engaged and aware, and should guide their kids as they learn to use various products. At the same time, there is only so much that parents can do to keep their children safe on platforms that are quite literally designed to suck them in. Yes.
I’ve heard countless analogies for how parents can think about preparing their kids for life online — riding bikes, skiing, navigating cities, swimming. I have actually found these quite helpful in thinking through how to build muscles for navigating what is an inherently complex, risky, and increasingly dangerous environment. But the analogies don’t hold.
Let’s go with the swimming analogy. Kids can be taught to swim. It is a learnable skill, and once you know how to swim, staying safe is mostly about making smart, safe decisions. You can send them to the best swim schools, with the best coaches. You can supervise, and they can demonstrate competence. But learning to swim only partially helps, because what they will actually be jumping into is not really a swimming pool — it’s a sabotage pool that detects when you are tired and then spins up whirlpools and currents designed specifically for your body, with the sole purpose of keeping you in the pool forever. The pool is designed to suck you to the bottom and keep you there. How exactly are parents supposed to “train” their kids for that?
Parents can do everything right, but the products that our public officials are not only allowing—but ACCELERATING—onto the market are designed to be an all-consuming vortex that even well-adjusted adults are getting sucked into. And this all assumes (1) that parents understand the technology well enough to manage it and teach their kids, and (2) that they actually have the TIME to manage and oversee their kids’ devices… many parents work crazy hours and literally cannot.
Anyone who suggests parenting is the problem here can, respectfully, fuck the fuck off.
2. “Child safety” features are a ruse.
Study after study shows that safety features are not a reliable solution. Safety features might reduce exposure, but they don’t eliminate it. We need to be crystal clear about this. Like it or not, you cannot expect companies to prevent your kids from seeing particular kinds of content. It does not work like movie ratings or network TV, largely because the companies face no real legal constraints.
A recent study of Instagram Teen accounts found that 30 out of 47 safety tools for teens on Instagram were “substantially ineffective or no longer exist.” The study found that teens were still being shown “content that was in violation of Instagram’s own rules, including posts describing “demeaning sexual acts” as well as autocompleting suggestions for search terms promoting suicide, self-harm or eating disorders.”
Safety features are not about children, or safety. They are half-hearted attempts to check boxes that allow companies to say “look, we are trying!” This can create a false sense of confidence for parents.
3. “Social media” means too many things to be helpful.
The term “social media” is used to describe a lot of platforms that are very different from each other — from TikTok and Instagram to Reddit and Discord to streaming platforms. In our November interview, Amanda Lenhart pointed out that “when we say ‘social media,’ we mean things that are so different, they should not actually be categorized together.” She went on:
“TikTok is a short algorithmic video feed that is so optimized to what you like and watch that it basically knows what you like better than you do. It’s incredibly attention-holding, more so than pretty much anything else that we have in the market right now. This algorithmically generated sort of hyper focused feed is one aspect of social media. And then there’s the more social ones, right? Things like Snap, Discord, even Reddit. Though I think the fact that we might even include Reddit in that—it’s really different. Then where do we put Twitch? So Twitch is a streaming platform and there’s other streaming platforms that sort of sprang out of gaming.”
At its broadest, “social media” means any interactive digital platform where people can create a profile, connect with others, and share and consume content. That now describes… countless very different platforms.
4. YouTube is a gateway platform.
YouTube is an entry point to all sorts of other media and social media platforms. Not only do a huge majority of kids use YouTube, the ages skew startlingly young, with 2-, 3-, and 4-year-olds on the platform. YouTube often flies under the radar in the social media conversation, but here’s what it does: it lets users post videos of any length, lets them connect with each other and comment on videos, and algorithmically feeds videos (with short-form as the priority) that keep kids in an endless stream of video content. Oh, and videos also point off-platform to other websites and platforms.
It’s unclear to me why YouTube seems to get a pass.
5. The problem isn’t a specific product — it’s user-generated content and algorithmically mediated experiences.
A lot of the conversations about kids and tech are focused on a particular product. But the problem isn’t a specific product, it’s what powers them and how they work, which is (1) user-generated content (the WHAT), and (2) algorithmically mediated experiences (the HOW).
User-generated content = content created by a platform’s own users rather than by the company. This is the WHAT on a platform—what people see, share, and interact with. It could be text-based posts (like what you find on Facebook, X, or Reddit), video (like those on YouTube or TikTok), or photos (like those on Instagram).
Algorithmically mediated experiences = the HOW on a platform. Algorithms are basically rules for computers that invisibly shape our experiences, showing us or guiding us towards certain content, products, or people. They often show up in social media “feeds” or recommendation engines on everything from Amazon and YouTube to Netflix and whatever newspaper you read. Every algorithm is optimized for (aka designed to prioritize) something specific. In consumer-facing products, that something is “engagement,” which is just a nicer way of saying “attention.” What an algorithm is programmed to prioritize drives all other behavior.
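To make “optimized for engagement” concrete, here is a minimal, hypothetical sketch of a feed ranker. The posts, the numbers, and the scoring weights are all invented for illustration; real systems use thousands of signals, but the core logic is the same: rank by whatever best predicts attention.

```python
# Hypothetical sketch of an engagement-optimized feed. All names and numbers are invented.
posts = [
    {"title": "Local library opens a new wing", "predicted_watch_seconds": 12, "predicted_comments": 1},
    {"title": "Outrage-bait argument clip",     "predicted_watch_seconds": 95, "predicted_comments": 40},
    {"title": "A friend's vacation photos",     "predicted_watch_seconds": 20, "predicted_comments": 3},
]

def engagement_score(post):
    # The only thing being maximized is predicted attention: not accuracy, not well-being.
    return post["predicted_watch_seconds"] + 2 * post["predicted_comments"]

# Whatever is most likely to hold your attention goes to the top of the feed.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["title"])
```

Swap that scoring function for one that prioritizes something else and you get a completely different feed; that one line is where the incentives live.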
So stop thinking about specific products and start thinking: Is this thing based on user-generated content? And is what I’m seeing here determined by an algorithm?
6. Everything we are seeing and worrying about online is happening exponentially with AI.
Technology exponentially scales age-old problems. Abuse, hate, racism, lies, and fraud — none of it is new. What is new is the unprecedented speed and reach, which multiply the harm. Might kids attempt suicide, or harass their classmates, or spread revenge porn without a chatbot’s assistance? Sure. But these products have removed every grain of friction, facilitating and even encouraging the most grievous behaviors at an instantaneous, global scale.
And now they are rapidly being integrated into physical children’s toys.
7. AI is *already* integrated into everything you and your kids use.
AI is not necessarily a standalone product that you go to and intentionally use for a specific task; it is already integrated into all sorts of products, ranging from social platforms to Amazon. And of course, Google Classroom.
8. AI is not a 🤬 “calculator.”
You know how we know that? Calculators ARE NOT WRONG. 2 + 2 always = 4.
Calculators don’t give you answers that depend on what they were trained on, or on their vibes for the day.
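For the sake of contrast, a toy sketch (the “model” and its probabilities below are entirely made up): a calculator computes the one correct answer every time, while a language model samples from learned probabilities, so the same question can come back with different answers.

```python
import random

# A calculator is deterministic: same input, same correct answer, every single time.
def calculator(a, b):
    return a + b

print(calculator(2, 2))  # always 4

# A language model is probabilistic: it samples a plausible-looking continuation.
# These probabilities are invented for illustration; real models learn theirs from data.
toy_next_token_probs = {"4": 0.90, "5": 0.06, "22": 0.04}

def toy_model_answer(prompt="2 + 2 ="):
    tokens, weights = zip(*toy_next_token_probs.items())
    return random.choices(tokens, weights=weights)[0]

print(toy_model_answer())  # usually "4", occasionally not
```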
9. LLMs are not search engines. They are prediction engines.
LLMs can be useful when the person using them knows enough about what they’re asking to be able to (1) assess and validate the responses, and (2) do something with the response. Otherwise, they are treating the LLM like a search engine and trusting the results as they come in. That is, at best, not effective.
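A toy illustration of the difference (the corpus and both functions are invented; no real search or model library is involved): search retrieves things that actually exist, while a language model strings together statistically plausible next words, with no built-in notion of whether the result is true.

```python
import random
from collections import defaultdict

# A made-up "training corpus" of a few tiny sentences.
corpus = "the cat sat on the mat . the dog sat on the rug . the cat chased the dog .".split()

# Search-engine behavior: return sentences that actually contain the query.
def toy_search(query):
    sentences, current = [], []
    for word in corpus:
        current.append(word)
        if word == ".":
            sentences.append(" ".join(current))
            current = []
    return [s for s in sentences if query in s.split()]

# Language-model behavior: learn which word tends to follow which, then predict forward.
next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

def toy_generate(start="the", length=8):
    words = [start]
    for _ in range(length):
        words.append(random.choice(next_words[words[-1]]))  # a plausible next word
    return " ".join(words)

print(toy_search("dog"))    # only sentences that really exist in the corpus
print(toy_generate("the"))  # fluent-sounding, but possibly a sentence nobody ever wrote
```

Neither toy is the real thing, but the asymmetry holds at scale: a search result points to something that exists; a generated answer only has to sound like something that exists.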
10. Corporations gonna corporation.
You might be surprised by this one, but I don’t blame companies or CEOs. I am disgusted by them and the choices they have been making. However… companies are behaving exactly the way we would expect companies to behave when there are no constraints or consequences.
There’s really only one entity that can impose real constraints and/or consequences: the government. But for reasons I will never fully understand, our elected officials have given a hall pass to the entire tech industry to develop garbage technology that atrophies our brains, divides our communities, and promises to help our kids with homework while coaching them to suicide.
We continue to focus “solutions” on what we can see (which are the symptoms) instead of ripping out the roots of the problem, because that feels insurmountable — it is not. There was a time when kids worked in sweatshops. There was a time when products marketed for teething infants secretly contained morphine. There was a time when a 12-year-old could walk into a store and buy a pack of cigarettes and a case of beer. These practices stopped not because companies saw the light, but because elected officials (and the public) decided it was unacceptable.
The existence of digital products that addict and manipulate us (and create child porn on public global platforms) is not inevitable — it is a choice. It is a choice by companies, a choice by investors, and a choice by governments. It is also a choice for them to be integrated into products that we already use, and it is absofuckinglutely a choice for them to be targeted at children.
And it’s not *our* choice. Parents aren’t choosing this. Teachers aren’t choosing this. Doctors aren’t choosing this. They are all begging for action, and being met by hollow statements and box-checking exercises.
We forget that these products all need us (and our data, and our time, and our attention) far more than we need them. So here’s to reclaiming our agency in 2026, and to our public officials maybe growing a pair.
Brain Snacks
And my absolute favorite… probably top 5 of all time.





