Sunday, 19 April 2026

Won’t Somebody Think of the Children? Why Big Tech’s ‘Tobacco Moment’ Isn’t What It Seems

In Los Angeles this March, a jury did something US courts have long refused to do: it treated the feed itself as the harm. It felt like vindication, victory even, to those of us who are critical of big tech's outsized influence on every aspect of our lives. But there is a need for cautious optimism, caution even, instead of celebration.

Jurors found Meta and Google negligent for the way Instagram and YouTube are designed, not for any particular piece of content the 20‑year‑old plaintiff (identified as Kaley/KGM) happened to see on them. They awarded her $6 million in compensatory and punitive damages and explicitly described these platforms as deliberately addictive “machines” that harmed her mental health.

This is more than a sympathetic jury and a moving story. It is the first time a US jury has effectively treated major social platforms as defective consumer products whose design – infinite scroll, notifications, algorithmic recommendations – can be a “substantial factor” in harming young users. In doing so, the case skirted the traditional shield of Section 230 by focusing not on user‑generated content, but on product design and failure to warn.

For critics of big tech, and I am one of them, that sounds like justice delayed finally arriving. I was happy.

Briefly.

But if we are not careful, the legal and policy response to this big tobacco moment will harden the rapidly enshittified internet we already have: centralized, identity‑hungry, and surveillance‑driven. These are precisely the conditions that made these products so powerful in the first place.

From Bad Content to Bad Machines

For nearly three decades, legal debates about platforms have orbited around content: who is responsible for extremist propaganda, self‑harm photos, misinformation. Section 230 in the US enshrined the idea that platforms are not publishers of third‑party speech. Even when courts and regulators pushed, they pushed on content moderation, not on the underlying machine.

The Kaley verdict is a reorientation of this conversation. Jurors heard company documents and expert testimony describing Instagram and YouTube as “addiction machines” designed to maximize engagement, time‑on‑site, and data extraction from children who were never supposed to be there in the first place.

They found negligence not only in failing to keep under‑13s off the platforms, but in failing to warn about the risks of the core design itself.

This shift from “we hosted bad content” to “we built a dangerous machine” matters. It opens the door to product‑liability‑style reasoning that could travel, in principle, to other design patterns: streaks, loot boxes, recommendation systems, dark patterns in onboarding. It also resonates with developments outside the US, where the EU’s Digital Services Act is already scrutinizing addictive design at the level of interface and recommender algorithms. Earlier this year, the European Commission issued preliminary findings that TikTok’s reliance on infinite scroll and weak “screen time breaks” breaches its duty to mitigate addictive design risks under the DSA, and told the company to change “the basic design of its service”.

But if the machine is on trial, the question becomes: what kind of machine do we build next?

“Addiction” as Legal Story and Medical Dispute

In both law and media, the Kaley verdict has been framed as proof that social media is simply addictive and toxic to teens. The courtroom narrative is clean: a straight line from a vulnerable child to the manipulative machine.

The scientific picture is messier.

On one side, the 2026 World Happiness Report carries a chapter by Jonathan Haidt and Zachary Rausch arguing that there is now “overwhelming evidence” that social media is harming adolescents at a scale large enough to shift population‑level mental health, drawing on seven lines of evidence ranging from cross‑sectional studies to natural experiments. The authors argue that ordinary use – often five or more hours a day – functions as a product safety failure, especially for girls.

As they put it: “We further argue that when these lines of evidence are considered alongside the timing, scope, and cross-national trends in adolescent well-being and mental health, they can help answer a second question: was the rapid adoption of always-available social media by adolescents in the early 2010s a substantial contributor to the population-level increases in mental illness that emerged by the mid 2010s in many Western nations? We call this the ‘historical trends question’. We draw on our findings about the vast scale of harm uncovered while answering the product safety question to argue that the answer to the historical trends question is ‘yes’.”

On the other, another chapter in the same report, by Helliwell and colleagues, emphasizes that the relationship between youth well-being and internet use is more nuanced: some types of online activity (communication, learning, content creation) correlate with higher life satisfaction, while heavy social media and gaming correlate with lower well-being, particularly at extreme usage levels and in English‑speaking countries. They caution that youth well-being trends cannot be reduced to a single cause.

In other words: there is strong evidence of risk and harm, but causality, dose, and mechanism are still contested.

Safety as a Pretext for More Surveillance

Politicians around the world have not waited for the science to settle. They have moved quickly to do something about youth and social media – and the measures they are choosing tell us a lot about the political economy of the internet they are entrenching.

In Australia, world‑first social media age restrictions now require major platforms – Facebook, Instagram, TikTok, X, YouTube, Snapchat, Threads, Reddit, Kick, Twitch – to take “reasonable steps” to prevent under‑16s from having accounts, backed by fines of up to A$49.5 million for non‑compliance.

In practice, they are expected to deploy multiple age assurance technologies: ID checks, facial or voice analysis, behavioral age inference.

Children and parents themselves are not fined; the pressure is entirely on platforms to ramp up identity and behavioral surveillance in order to demonstrate diligence.

In the US, California’s Digital Age Assurance Act pushes the same logic down into the operating system itself. From January 2027, OS vendors are required to collect an age or age bracket at account setup and expose it via an API so that app stores and online services can query a system‑level age signal.
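What would querying that signal look like? Here is a minimal sketch, with the caveat that AB 1043 mandates only that such a signal exist; every name, bracket boundary, and function shape below is my illustration, not the statute's interface:

```python
from enum import Enum

class AgeBracket(Enum):
    # Bracket boundaries are illustrative; the statute's exact
    # cut-offs may differ.
    UNDER_13 = "under_13"
    TEEN_13_15 = "13_15"
    TEEN_16_17 = "16_17"
    ADULT = "18_plus"

def request_age_signal(requesting_app: str) -> AgeBracket:
    """Hypothetical OS-level call an app store or service would make.

    The law mandates a queryable signal, not this name or signature.
    The structural point stands regardless: every lookup routes
    through the OS vendor, which learns which apps are asking
    about the user, and when.
    """
    # A real implementation would read the age or bracket collected
    # at account setup; this stub just returns a fixed value.
    return AgeBracket.ADULT

print(request_age_signal("example.social.app"))  # AgeBracket.ADULT
```

Even this toy version makes the asymmetry visible: the query is trivial for an app to make, but answering it honestly requires the OS vendor to know, and retain, something about who you are.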

The law is written broadly enough that free and open‑source operating systems – Debian, Fedora, BSDs, Pop!_OS – are, on paper, on the hook alongside Apple and Microsoft.

System76’s CEO, writing about this wave of laws in Colorado and California, warns that the effect is to turn OS vendors into identity brokers and gatekeepers, and to “encourage children to lie” about their age for fear of being confined to a “nerfed internet”.

Layer these developments on top of each other and a pattern emerges: won't somebody please think of the children?

We've heard this moral argument before: with video games, heavy metal, rap. What happens next is history rhyming:

  • pushing age‑verification and age‑bracketing ever deeper into the stack – from app sign-up forms, to OS APIs, to network‑level checks;
  • incentivising large platforms and OS vendors to collect, infer, and share more information about who we are and how old we are;
  • creating compliance burdens that small, decentralized, or non‑profit projects can barely navigate, effectively nudging regulators and industry towards a small club of compliant, centralized providers.

Safety becomes the moral language through which a more identity‑locked, surveilled, and centralized internet is made to feel inevitable.

Regulators Discover “Addictive Design” – But For Whom?

The EU’s preliminary findings on TikTok’s addictive design under the DSA are a good example of this ambivalence. On one level, it is encouraging to see regulators finally target infinite scroll, frictionless autoplay, and weak screen time nudges as systemic risks requiring product changes, not simply more content moderation. The Commission is, at least in principle, saying: design patterns that exploit compulsive behavior and harm children can be unlawful. This is a good start. Unfortunately, that's where the good news ends.

Notice who is legible to this kind of regulation. The DSA presumes large, centralized platforms with access to vast behavioral data, capable of implementing complex risk‑assessment and age‑assurance regimes. The Australian and Californian laws do the same.

A federated social network run by a school, a youth center, or a community collective cannot cheaply plug into this machinery. A small FOSS OS project has neither the lawyers nor the telemetry to play at this table.

The risk is that “addictive design” becomes another compliance rubric that only the biggest players can afford to satisfy, while everyone else is either chilled out of existence or forced to rely on the same proprietary identity infrastructure.

The Missing Imagination: Community‑Run, Free and Open Alternatives

The saddest thing about this moment is how narrow the mainstream imagination of alternatives remains. The policy menu is filled with bans, curfews, and ID checks for the same extractive platforms. There is little serious talk of changing the infrastructure.

Yet we know from both history and present practice that other models are possible. Schools and libraries have run moderated online communities for decades. Federated platforms like Mastodon and Matrix, for all their flaws, show that it is possible to have social networks that are not controlled by a single profit‑maximizing entity. Community‑run game servers, forums, and fan communities have long been youth‑driven spaces with their own norms of care and accountability. My first years on the internet, circa 2001-2003, were spent in such forums. Social media trampled such online communities during its first decade.

A genuinely emancipatory response to the Kaley verdict would start from a different question: given that these products have now been recognized, in court, as dangerous by design, how do we:

  • treat them like other dangerous consumer products – with warnings, design constraints, and liability – without making biometric and behavioral surveillance the price of entry to the digital world;
  • redirect public money, regulation, and cultural attention towards building non‑exploitative, commons‑based digital spaces for young people;
  • lower the barriers for schools, municipalities, youth groups, and co‑ops to run their own FOSS‑based platforms, with public funding and legal safe harbors, rather than locking them into corporate clouds that must, by their nature, maximize engagement.

This is where free and open source software is not just a licensing detail but a political stance. An internet where young people’s social lives unfold on community‑run, auditable, forkable software – hosted by institutions that have a duty of care, not a duty to shareholders – is not a utopian fantasy. It is not merely a design choice.

It is a political choice.

Builders, Regulators, and the Rest of Us

For those who build technology, the Kaley verdict is a warning shot: engagement is no longer a neutral metric. If a design pattern is optimized to keep a 10‑year‑old scrolling past bedtime, courts may increasingly treat that as a defect, not an achievement. Engineers, designers, and product managers now have to think like people who might one day be cross‑examined about why they shipped this infinite scroll, this notification scheme, this recommender.

For regulators, the temptation will be to double down on what already feels familiar: more age gates, more identity checks, more compliance dashboards for big platforms and OS vendors. It is politically safer to demand better seat-belts from the existing car companies than to fund buses, bike lanes, or public trains. But if all we do is wrap the same addictive machines in ever tighter rings of surveillance and control, we will have saved some children from some harms at the price of deepening structural dependence on the very firms whose incentives created the crisis.

The LA jury has told us, in the blunt language of damages and negligence, that the machine is the problem. The real task now is to ensure that the fix is not simply a more paternalistic, more identity‑hungry version of the same machine, but an opening for something else: community‑run, free and open infrastructures where young people can be online without being harvested.

That is a harder story to tell in a courtroom. But it is the story the rest of us – parents, educators, coders, writers, legislators – will have to write.



from It's FOSS https://ift.tt/FAlrsJB
via IFTTT

Friday, 17 April 2026

21-year-old Polish Woman Fixed a 20-year-old Linux Bug!

Okay, not a bug in the Linux kernel, but one that had existed in the Enlightenment window manager E16 since 2006, when Kamila Szewczyk was barely a year old.

Kamila, now a 21-year-old graduate student at Saarland University in Germany, daily drives a window manager that predates most of her classmates. That alone is a fun fact.

But what makes it remarkable is that she didn't just use it, she dug into its decades-old codebase, found a bug that had been hiding there since 2006, and fixed it.

What is Enlightenment E16, again?

Kamila's Enlightenment E16 desktop

For the uninitiated, Enlightenment is a window manager for Linux, the software responsible for drawing and managing the windows on your screen. It first appeared in 1997, making it older than a significant portion of today's Linux user base. E16, the version Kamila uses, arrived in 1999 and quickly gained a reputation for being highly customizable and visually impressive, at a time when most Linux desktops were far more utilitarian.

Enlightenment is not as well known as KDE or GNOME, and even LXDE has broader name recognition today. But it has a small, dedicated following and can be found in niche distributions like Pentoo or Bodhi Linux. Bodhi actually uses Moksha, a fork of Enlightenment, as its default desktop.

Over time, the Enlightenment team began a complete rewrite of the project using a new modular framework called EFL (Enlightenment Foundation Libraries). That rewrite took over a decade and eventually became E17, released in December 2012. E17 evolved from a simple window manager into a full desktop shell with modern compositing and improved hardware support.

But not everyone followed. A portion of the community stuck with E16, continuing to maintain and develop it independently. It reached the 1.0 milestone and, as of 2024, the latest release is version 1.0.30. It is very much alive, just quietly so.

Kamila is part of that quiet community.

The accidental bug discovery

She wasn't hunting for bugs. She was doing something mundane: preparing lecture slides for a course she teaches as a graduate student. She had a couple of PDFs typeset in LaTeX, opened one of them in Atril, a document viewer, and her entire desktop froze.

It wasn't a one-off glitch. The freeze was reproducible, which is both frustrating and, for a developer, oddly exciting. A reproducible bug is a bug you can actually chase down. So she did.

After digging through the codebase, Kamila traced the freeze back to the way E16 handled overly long file names.

When a window title was too long and needed to be truncated, the algorithm responsible for doing so had no iteration limit. So it would spin indefinitely, locking up the desktop entirely. The bug had been sitting there, dormant, since 2006, probably waiting for exactly the right set of circumstances to surface.
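Here is a minimal Python sketch of that failure pattern, assuming a fixed glyph width for simplicity; E16 itself is written in C, and this is an illustration, not her actual patch:

```python
CHAR_W = 8        # pretend every glyph is 8 px wide (illustrative)
ELLIPSIS = "..."

def truncate_buggy(title: str, max_px: int) -> str:
    # The bug pattern: the loop has no iteration cap and silently
    # assumes each pass makes progress. Once title is empty,
    # title[:-1] is still "", so if the ellipsis alone is wider
    # than max_px the condition never turns false -- an infinite
    # loop on the drawing path, freezing the whole desktop.
    while (len(title) + len(ELLIPSIS)) * CHAR_W > max_px:
        title = title[:-1]
    return title + ELLIPSIS

def truncate_fixed(title: str, max_px: int) -> str:
    # The fix: loop only while there is still something to strip,
    # so termination is guaranteed even for absurd inputs.
    while title and (len(title) + len(ELLIPSIS)) * CHAR_W > max_px:
        title = title[:-1]
    return title + ELLIPSIS

print(truncate_fixed("a-very-long-lecture-filename.pdf", 120))
# 'a-very-long-...' -- 15 characters, fits in 120 px
```

Whatever the language, the fix is the same in spirit: make the loop's exit depend on something that provably shrinks.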

She patched it and the fix is available on her blog. I hope she made a pull request to the original codebase as well.

Why this story matters

On the surface, this is a niche story about an obscure window manager that most Linux users have never touched. But look a little closer and it is something more than that.

Kamila was born in 2004; the bug she fixed entered the codebase two years later. She grew up, went to university, became a graduate student and a teacher, and the bug just sat there, in a codebase maintained by a handful of enthusiasts, waiting. It took someone who actually uses E16 as a daily driver to finally stumble onto it and care enough to fix it.

That is the true open source spirit. Not a big company, not a bounty program, not a CVE filing. Just a person, their computer, a frozen desktop, and the curiosity to figure out why.

There are people who have been maintaining this codebase for decades. There are people who still use it. And every now and then, one of those users catches something no one else did and quietly makes the software a little better before moving on with their day.

That's not a small thing. That's the whole point.

Source: The Register



from It's FOSS https://ift.tt/IlZ4qSa
via IFTTT

Cal.com Goes Closed Source Because "AI Can Easily Exploit Open Source Software"

AI has been a mixed bag for the open source world. Some developers are using it to write faster, catch bugs, and review patches more efficiently. Others are watching the same tools get turned against the codebases they maintain.

Cal.com, a popular open source scheduling platform and one of the more well-known self-hostable alternatives to Calendly, has found itself in the second camp. After five years as an open source project, the company has announced that it is switching to a closed-source model, citing the growing threat of AI-powered vulnerability scanning.

What happened?

The co-founder of Cal.com, Bailey Pumfleet, has explained why they went down this path, saying that AI has changed what it takes to exploit an application. Previously, finding vulnerabilities required real expertise and some serious time investment.

But today, an AI model can be directed towards a public repo and do the same job systematically without needing much manual labor.

He also cited a specific case to back this up, where AI tooling reportedly found a 27-year-old vulnerability in the BSD kernel and had working exploits ready within hours.

📋
I think Bailey has misattributed the above occurrence, as the 27-year-old bug was found in OpenBSD, thanks to Claude Mythos, and has since been patched.

But, yeah, closed source it is. 😅

Another thing worth knowing is that the production codebase had already been drifting away from what was publicly available. Core systems like authentication and data handling had both gone through significant rewrites, making the public repo and what actually runs in production two fairly different things by the time this announcement came.

Does it make sense?

Cal.com isn't wrong that AI can be used to hunt for vulnerabilities in open source code. That's documented and real. But the provided argument treats AI purely as an attacker's tool, which is a selective reading of the situation.

Take the Linux kernel, for example. We recently covered how Greg Kroah-Hartman, the Linux stable kernel maintainer, has been running what looks like AI-assisted fuzzing on the kernel through a branch he calls "clanker," using it to identify bugs and patch them proactively.

There's even an official policy in place that governs the use of such AI tools for contributions.

Then there's the older argument that closing your source doesn't actually make you more secure. It just means fewer eyes on the code. Open source projects benefit from anyone, anywhere, being able to spot and report problems.

Heartbleed and Log4Shell were both found by external researchers precisely because the code was auditable. This shows that a private codebase doesn't prevent vulnerabilities; it just reduces the chances of catching them before someone with bad intentions does.

What's next?

For self-hosters and developers, Cal.diy is what's on offer. It's available now under the MIT license, with the documentation covering installation via Docker, Vercel, Railway, Render, and a handful of other platforms.

The project is described as "strictly recommended for personal, non-production use," with a "use at your own risk" disclaimer throughout. It is community-maintained, with no official backing from Cal.com.

Feature-wise, Cal.diy covers the personal scheduling essentials like event types, calendar integrations, video conferencing, webhooks, and API access.

But a fair bit is missing. Teams, Organizations, SAML SSO, SCIM directory sync, Workflows, Routing Forms, and the Insights Dashboard are all absent from the community edition.

If you're running Cal.com for anything commercial, the Cal.diy documentation steers you back to the paid product pretty explicitly, saying that "for any commercial and enterprise-ready scheduling infrastructure, use Cal.com."

All of that made me wonder whether AI was the catalyst or the perfect scapegoat for a closed-source transition. Anyway, I like yapping like this every so often; don't mind me.



from It's FOSS https://ift.tt/6yPsNXn
via IFTTT

Russian Baikal CPUs Are Losing Their Place in the Linux Kernel

Support for Russian Baikal CPUs is being pulled from the Linux kernel. Work has begun in the Linux 7.1 cycle to remove driver code and device tree bindings for Baikal SoC hardware, with more patches already lined up to follow.

The first removal came with the ATA pull for Linux 7.1-rc1, merged by Linus Torvalds on April 15. It dropped the Baikal bt1-ahci DT binding and stripped Baikal-specific code from the ahci_dwc driver, with the ATA maintainer, Niklas Cassel, noting that upstreaming for the SoC "is not going to be finalized."

Image: the Linux kernel archive mirror, searched for “baikal”, listing the related changes.
You can browse the LKML to track Baikal's removal.

Furthermore, the code had been sitting unmaintained for some time. Serge Semin, who contributed the bulk of Baikal's kernel support over the years, was among roughly a dozen Russian developers removed from the kernel MAINTAINERS file in 2024.

With no one left to maintain it and the hardware itself rare even within Russia, there appears to be no rationale for keeping the code around.

Some background info

The Baikal line of CPUs is the work of Baikal Electronics, which was founded in January 2012 as a spinoff of T-Platforms, a Russian supercomputer company.

It started with a MIPS-based chip for embedded applications, then pivoted to ARM for its later processors, all manufactured at TSMC. The plan was to supply Russian state-owned enterprises with domestically produced CPUs as an alternative to Intel and AMD.

But Russia's 2022 invasion of Ukraine ended that. Sanctions cut off TSMC access, 150,000 Baikal-M units already manufactured were seized in Taiwan, and ARM production licenses were lost. The company filed for bankruptcy in August 2023.

It did not stay down. By the end of 2024, Baikal had shipped a total of 85,000 processors since its founding, and it began serial production of the Baikal-U1000, a RISC-V microcontroller, in September 2025 (source in Russian).

The current lineup consists of the Baikal-T (MIPS), Baikal-M and Baikal-S (ARM), and the Baikal-U (RISC-V).

Those already running Linux on Baikal hardware will need to stay on Linux 6.18 LTS or earlier, as newer kernel versions are dropping the support.


Suggested Read 📖: The Linux Kernel is Finally Letting Go of i486 CPU Support



from It's FOSS https://ift.tt/eEmDLxT
via IFTTT

Thursday, 16 April 2026

Privacy Email Service Tuta Now Also Has Cloud Storage with Quantum-Resistant Encryption

Privacy in 2026 is a bit of a joke. Governments have turned surveillance into standard operating procedure, and Big Tech companies treat your personal data like a free-for-all buffet, helping themselves, then selling the leftovers to data brokers who do the same.

That's pushed people toward privacy-first alternatives, and quite a few companies have stepped up to meet that demand. Tuta is one of the more recognizable names in that space, offering encrypted mail and calendar services to over 10 million users worldwide.

Now, the company is looking to round out its ecosystem with the one piece that's been missing: an encrypted cloud storage solution.

A haven for your files?

Tuta first laid the groundwork for this back in July 2023, when it announced the PQDrive project with backing from the German government. The initiative had received €1.5 million in funding through the KMU-innovativ program, a grant scheme that supports small and medium enterprises in research and development.

The goal was clear from the very beginning: to build a cloud storage service secured with post-quantum encryption, not just conventional algorithms.

To get there, Tuta partnered with the University of Wuppertal, which handled key research tasks including testing cryptographic algorithms and figuring out how to deduplicate encrypted data without punching holes in the security model.

All that effort has now produced a product ready for real-world testing. Starting today, Tuta Drive enters closed beta, with select users receiving early access to put it through its paces ahead of a public release.

It is an end-to-end encrypted cloud storage service that fits directly into Tuta's existing ecosystem alongside mail and calendar. Everything you store gets encrypted without any action needed on your end, and the zero-knowledge architecture means Tuta has no technical ability to read your files or share them with anyone else.

The encryption underpinning Drive is the same TutaCrypt protocol Tuta already uses for its mail service. It combines classical and quantum-resistant algorithms in a hybrid approach, so even if a quantum computer cracks one layer down the line, it still has to contend with the other.
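The combining step behind such hybrid schemes is easy to sketch. The following stdlib-only Python snippet shows the general pattern, not TutaCrypt's actual construction: two shared secrets, one classical and one post-quantum, feed a single key derivation, so an attacker who breaks only one layer still lacks half of the input keying material.

```python
import hashlib
import hmac
import secrets

def hkdf_sha256(salt: bytes, ikm: bytes, info: bytes) -> bytes:
    # Minimal single-block HKDF (RFC 5869): extract, then one
    # expand round, yielding 32 bytes of output keying material.
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

# Stand-ins for the two key agreements. In a real hybrid scheme
# these would be, e.g., an X25519 shared secret and a post-quantum
# KEM (Kyber/ML-KEM) shared secret.
classical_secret = secrets.token_bytes(32)
post_quantum_secret = secrets.token_bytes(32)

# The hybrid step: both secrets feed one KDF. Breaking only the
# classical layer (say, with a future quantum computer) still
# leaves the attacker without the post-quantum half of the input.
session_key = hkdf_sha256(
    salt=b"hybrid-kdf-demo",
    ikm=classical_secret + post_quantum_secret,
    info=b"drive-file-encryption",
)
print(session_key.hex())
```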

And the service is hosted in Germany, which brings strict GDPR protections into play on top of the technical safeguards.

Arne Möhle, CEO of Tuta, announced the launch, commenting:

With Tuta Drive, we are taking the next step towards offering a full private digital workspace.

Today, more than ten million citizens and businesses, including journalists, whistleblowers and activists use Tuta Mail as an alternative to insecure email offered by mainstream providers.

Adding an encrypted cloud storage to Tuta will enable them to also store their files securely.

Test run

We were given early access to the closed beta ahead of its rollout today, and here's a look at what Tuta Drive is like right now.

The interface is minimal, which is fine. You get a familiar sidebar and a top bar that shows you the server connection status and houses quick-switch buttons for Mail, Contacts, Calendar, and Drive.

First, I uploaded two videos to see how Tuta Drive would handle them. Here, the upload speeds were noticeably slow when connected over a VPN, though that's more or less expected. Without an active VPN connection, file uploads were fast.

Moving those files to a new folder afterward was straightforward using the "Move" option from the right-click context menu. Drag and drop works too, and I could manually select specific files without any issues. Cut and paste for moving files around also worked well.

When uploading multiple files at once, a progress list appears, which is handy. The one catch is that you can't scroll through it to check which file is currently being processed, which was a bummer.

Screenshot: a long upload progress list on the right side of the Tuta Drive closed beta.

Files are shown with appropriate icons depending on type, so images, videos, and audio all get their own visual treatment. Folders display a cat emoji where the folder size info should probably appear, which looks like a work-in-progress placeholder more than anything else.

Screenshot: many different file types, each with its own icon, in the Tuta Drive closed beta.

If you upload something by mistake or decide a file isn't worth keeping, you can delete it promptly either from the right-click context menu or by hitting Delete on your keyboard. The "Trash" page then gives you the choice to either restore it if it was a wrong call or permanently delete it if you're sure.

That said, folder uploads aren't supported yet, and the keyboard shortcut support is lacking. Ctrl+A to select everything in a folder, for instance, does nothing. No search tool either; those are the kinds of gaps that user feedback tends to sort out quickly.

Seeing that this is a closed beta, I am confident that the Tuta folks will listen to what people say about their newest offering and act accordingly.


💬 Would you give Tuta Drive a shot, or are you too committed to Proton Drive or other cloud solutions to even look its way?



from It's FOSS https://ift.tt/kZGFoPN
via IFTTT

Can You Identify The Fake Linux Distros From The Real Ones?

Not all distros are created equal.

In fact, not all distros are created at all.

This quiz is simple. You'll be presented with a few Linux distros and their details. The twist is that they might not be a real thing. They could just be a figment of my imagination.

Of course, this is valid only at the time when I created this quiz. The way we move in the Linux world, there could be some new distros coming up right after I publish this quiz 😃

🚧
Some browsers block the JavaScript-based quiz units. Disable your ad blocker to enjoy the quizzes and puzzles.


from It's FOSS https://ift.tt/Pn3Yb9V
via IFTTT

Oh No! Now A Federal Bill Wants OS-Level Age Verification for Everyone in the USA

The U.S. has been quietly building up a set of state-level laws that push operating system providers into the age verification plague.

California's AB 1043, signed in October 2025, requires OS providers to collect age data at account setup and pipe it to apps through a real-time API. It kicks in on January 1, 2027.

Colorado is working on something nearly identical. SB26-051 (which we covered when it was still a proposal) passed the state Senate 28-7 on March 3, 2026, and is now waiting on a House vote to become law there too.

However, these are just state-level laws. A new federal bill, H.R.8250, introduced on April 13, 2026, by Rep. Josh Gottheimer, with Rep. Elise M. Stefanik signing on as cosponsor, has us intrigued.

Screenshot: the proposed H.R.8250 bill as listed on congress.gov.

The official title of the bill reads, "To require operating system providers to verify the age of any user of an operating system, and for other purposes." But that's a mouthful; the short version is "Parents Decide Act."

If you go by the full title, the bill is pretty self-explanatory: it would require every operating system provider to verify the age of anyone who wants to use their OS, and, vaguely enough, to do so for any “other purposes.”

It has been referred to the House Committee on Energy and Commerce and currently sits at step one (Introduced) of five in the legislative process. No bill text has been published; there's no summary, no subject tags, and no related bills attached to it.

That means right now, the only thing formally known about H.R.8250 is its title, its sponsors, and where it got sent.

But wait, do you… 👇

Want more details?

Screenshot: the press release titled “Release: Gottheimer Announces Bipartisan ‘Parents Decide Act’ to Protect Kids Online.”

Gottheimer's office published a press release on April 2, 2026, announcing the bill 11 days before it was formally introduced. That press release was unavailable for a while, but it is now back up.

According to the announcement, the bill would require OS developers to verify user age at device setup, allow parents to set content controls right there, and have those settings flow through to apps and platforms on the device.

Apple and Google were the companies Gottheimer named as the intended targets, with the framing centered entirely around phones and tablets.

But here's where it gets interesting for anyone outside the Apple and Google ecosystem. Gottheimer's press release framed this entirely around commercial mobile platforms. The official bill title, as you saw earlier, does not.

If the bill text matches the breadth of that title, Linux distributions and other open source operating platforms would sit squarely within its scope. And a federal bill passing would mean one nationwide compliance requirement replacing the current state-by-state situation.

The representative's announcement also highlighted support from several groups.

Evidently, things are getting more absurd with each passing day, and I can't wait for the day when access to anything electronic is locked behind a gate, guarded by the most decent and righteous upholders of the law. /s


💬 If you are looking for a conversation surrounding this, our forum is the place to be!



from It's FOSS https://ift.tt/t0nKs3V
via IFTTT