In Los Angeles this March, a jury did something US courts have long refused to do: it treated the feed itself as the harm. It felt like vindication, victory even, to those of us who are critical of big tech's outsized influence on every aspect of our lives. But what this moment calls for is cautious optimism, caution even, rather than celebration.
Jurors found Meta and Google negligent for the way Instagram and YouTube are designed, not for any particular piece of content the 20‑year‑old plaintiff, identified as Kaley/KGM, happened to see on them. They awarded her $6 million in compensatory and punitive damages and explicitly described these platforms as deliberately addictive “machines” that harmed her mental health.
This is more than a sympathetic jury and a moving story. It is the first time a US jury has effectively treated major social platforms as defective consumer products whose design – infinite scroll, notifications, algorithmic recommendations – can be a “substantial factor” in harming young users. In doing so, the case skirted the traditional shield of Section 230 by focusing not on user‑generated content, but on product design and failure to warn.
For critics of big tech, and I am one of them, that sounds like justice delayed finally arriving. I was happy.
Briefly.
But if we are not careful, the legal and policy response to this big tobacco moment will harden the rapidly enshittified internet we already have: centralized, identity‑hungry, and surveillance‑driven. These are precisely the conditions that made these products so powerful in the first place.
From Bad Content to Bad Machines
For nearly three decades, legal debates about platforms have orbited around content: who is responsible for extremist propaganda, self‑harm photos, misinformation. Section 230 in the US enshrined the idea that platforms are not publishers of third‑party speech. Even when courts and regulators pushed, they pushed on content moderation, not on the underlying machine.
The Kaley verdict is a reorientation of this conversation. Jurors heard company documents and expert testimony describing Instagram and YouTube as “addiction machines” designed to maximize engagement, time‑on‑site and data extraction from children who were never supposed to be there in the first place.
They found negligence not only in failing to keep under‑13s off the platforms, but in failing to warn about the risks of the core design itself.
This shift from “we hosted bad content” to “we built a dangerous machine” matters. It opens the door to product‑liability‑style reasoning that could travel, in principle, to other design patterns: streaks, loot boxes, recommendation systems, dark patterns in onboarding. It also resonates with developments outside the US, where the EU’s Digital Services Act is already scrutinizing addictive design at the level of interface and recommender algorithms. Earlier this year, the European Commission issued preliminary findings that TikTok’s reliance on infinite scroll and weak “screen time breaks” breaches its duty to mitigate addictive design risks under the DSA, and told the company to change “the basic design of its service”.
But if the machine is on trial, the question becomes: what kind of machine do we build next?
“Addiction” as Legal Story and Medical Dispute
In both law and media, the Kaley verdict has been framed as proof that social media is simply addictive and toxic to teens. The courtroom narrative is clean: a straight line can be drawn from a vulnerable child to the manipulative machine.
The scientific picture is messier.
On one side, the 2026 World Happiness Report carries a chapter by Jonathan Haidt and Zachary Rausch arguing that there is now “overwhelming evidence” that social media is harming adolescents at a scale large enough to shift population‑level mental health, drawing on seven lines of evidence ranging from cross‑sectional studies to natural experiments. The authors argue that ordinary use – often five or more hours a day – functions as a product safety failure, especially for girls.
As they put it: “We further argue that when these lines of evidence are considered alongside the timing, scope, and cross-national trends in adolescent well-being and mental health, they can help answer a second question: was the rapid adoption of always-available social media by adolescents in the early 2010s a substantial contributor to the population-level increases in mental illness that emerged by the mid-2010s in many Western nations? We call this the ‘historical trends question’. We draw on our findings about the vast scale of harm uncovered while answering the product safety question to argue that the answer to the historical trends question is ‘yes’.”
On the other, another chapter in the same report, by Helliwell and colleagues, emphasizes that the relationship between youth well-being and internet use is more nuanced: some types of online activity (communication, learning, content creation) correlate with higher life satisfaction, while heavy social media and gaming correlate with lower well-being, particularly at extreme usage levels and in English‑speaking countries. They caution that youth well-being trends cannot be reduced to a single cause.
In other words: there is strong evidence of risk and harm, but causality, dose, and mechanism are still contested.
Safety as a Pretext for More Surveillance
Politicians around the world have not waited for the science to settle. They have moved quickly to do something about youth and social media – and the measures they are choosing tell us a lot about the political economy of the internet they are entrenching.
In Australia, world‑first social media age restrictions now require major platforms – Facebook, Instagram, TikTok, X, YouTube, Snapchat, Threads, Reddit, Kick, Twitch – to take “reasonable steps” to prevent under‑16s from having accounts, backed by fines of up to A$49.5 million for non‑compliance.
In practice, platforms are expected to deploy multiple age assurance technologies: ID checks, facial or voice analysis, behavioral age inference.
Children and parents themselves are not fined; the pressure is entirely on platforms to ramp up identity and behavioral surveillance in order to demonstrate diligence.
In the US, California’s Digital Age Assurance Act pushes the same logic down into the operating system itself. From January 2027, OS vendors are required to collect an age or age bracket at account setup and expose it via an API so that app stores and online services can query a system‑level age signal.
The law is written broadly enough that free and open‑source operating systems – Debian, Fedora, BSDs, Pop!_OS – are, on paper, on the hook alongside Apple and Microsoft.
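To make that concrete, here is a minimal sketch, in TypeScript, of what such a system‑level age signal could look like from an app developer's point of view. The type names, bracket boundaries, and the getSystemAgeSignal function are hypothetical illustrations of my own, not anything defined in the Act or shipped by any OS vendor; the point is only that every application gains a standard way to ask the operating system how old its user is.

```typescript
// Hypothetical sketch only: names, bracket boundaries, and API are illustrative,
// not defined by the Digital Age Assurance Act or implemented by any OS vendor.

// The law speaks of an "age or age bracket" collected at OS account setup.
type AgeBracket = "under13" | "13to15" | "16to17" | "18plus";

interface AgeSignal {
  bracket: AgeBracket;                     // declared when the OS account was created
  source: "self-declared" | "parent-set";  // the signal itself implies no verified ID
}

// Stub standing in for the OS-level call, so the sketch is self-contained.
async function getSystemAgeSignal(): Promise<AgeSignal> {
  return { bracket: "13to15", source: "parent-set" };
}

// An app or app store consuming the signal to gate engagement features.
async function configureFeed(): Promise<void> {
  const { bracket } = await getSystemAgeSignal();
  if (bracket !== "18plus") {
    // e.g. disable autoplay, infinite scroll, late-night notifications
    console.log(`Restricted mode on: OS reports age bracket "${bracket}"`);
  }
}

configureFeed();
```

Even in this toy form the design choice is visible: the operating system becomes an identity oracle that every application can consult.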
Layer these developments on top of each other and a pattern emerges: won't somebody please think of the children?
We've heard this moral argument before, with video games, heavy metal, rap. What happens next is history rhyming:
- pushing age‑verification and age‑bracketing ever deeper into the stack – from app sign-up forms, to OS APIs, to network‑level checks;
- incentivizing large platforms and OS vendors to collect, infer, and share more information about who we are and how old we are;
- creating compliance burdens that small, decentralized, or non‑profit projects can barely navigate, effectively nudging regulators and industry towards a small club of compliant, centralized providers.
Safety becomes the moral language through which a more identity‑locked, surveilled, and centralized internet is made to feel inevitable.
Regulators Discover “Addictive Design” – But For Whom?
The EU’s preliminary findings on TikTok’s addictive design under the DSA are a good example of this ambivalence. On one level, it is encouraging to see regulators finally target infinite scroll, frictionless autoplay, and weak screen‑time nudges as systemic risks requiring product changes, not simply more content moderation. The Commission is, at least in principle, saying: design patterns that exploit compulsive behavior and harm children can be unlawful. This is a good start. Unfortunately, that's where the good news ends.
Notice who is legible to this kind of regulation. The DSA presumes large, centralized platforms with access to vast behavioral data, capable of implementing complex risk‑assessment and age‑assurance regimes. The Australian and Californian laws do the same.
A federated social network run by a school, a youth center, or a community collective cannot cheaply plug into this machinery. A small FOSS OS project has neither the lawyers nor the telemetry to play at this table.
The risk is that addictive design becomes another compliance rubric that only the biggest players can afford to satisfy, while everyone else is either chilled out of existence or forced to rely on the same proprietary identity infrastructure.
The Missing Imagination: Community‑Run, Free and Open Alternatives
The saddest thing about this moment is how narrow the mainstream imagination of alternatives remains. The policy menu is filled with bans, curfews, and ID checks for the same extractive platforms. There is little serious talk of changing the infrastructure.
Yet we know from both history and present practice that other models are possible. Schools and libraries have run moderated online communities for decades. Federated platforms like Mastodon and Matrix, for all their flaws, show that it is possible to have social networks that are not controlled by a single profit‑maximizing entity. Community‑run game servers, forums, and fan communities have long been youth‑driven spaces with their own norms of care and accountability. My first years on the internet, circa 2001–2003, were spent in such forums. Social media trampled such online communities during its first decade.
A genuinely emancipatory response to the Kaley verdict would start from a different question: given that these products have now been recognized, in court, as dangerous by design, how do we:
- treat them like other dangerous consumer products – with warnings, design constraints, and liability – without making biometric and behavioral surveillance the price of entry to the digital world;
- redirect public money, regulation, and cultural attention towards building non‑exploitative, commons‑based digital spaces for young people;
- lower the barriers for schools, municipalities, youth groups, and co‑ops to run their own FOSS‑based platforms, with public funding and legal safe harbors, rather than locking them into corporate clouds that must, by their nature, maximize engagement.
This is where free and open source software is not just a licensing detail but a political stance. An internet where young people’s social lives unfold on community‑run, auditable, forkable software – hosted by institutions that have a duty of care, not a duty to shareholders – is not a utopian fantasy. It is not merely a design choice.
It is a political choice.
Builders, Regulators, and the Rest of Us
For those who build technology, the Kaley verdict is a warning shot: engagement is no longer a neutral metric. If a design pattern is optimized to keep a 10‑year‑old scrolling past bedtime, courts may increasingly treat that as a defect, not an achievement. Engineers, designers, and product managers now have to think like people who might one day be cross‑examined about why they shipped this infinite scroll, this notification scheme, this recommender.
For regulators, the temptation will be to double down on what already feels familiar: more age gates, more identity checks, more compliance dashboards for big platforms and OS vendors. It is politically safer to demand better seat belts from the existing car companies than to fund buses, bike lanes, or public trains. But if all we do is wrap the same addictive machines in ever tighter rings of surveillance and control, we will have saved some children from some harms at the price of deepening structural dependence on the very firms whose incentives created the crisis.
The LA jury has told us, in the blunt language of damages and negligence, that the machine is the problem. The real task now is to ensure that the fix is not simply a more paternalistic, more identity‑hungry version of the same machine, but an opening for something else: community‑run, free and open infrastructures where young people can be online without being harvested.
That is a harder story to tell in a courtroom. But it is the story the rest of us – parents, educators, coders, writers, legislators – will have to write.