Tuesday, 05 May 2026

Should You Be Worried About Copy Fail Linux Exploitation?

📋 TLDR:
- A 9-year-old bug was discovered recently.
- The vulnerability is already patched in the Linux kernel.
- Any unprivileged local user could gain root access by running a small Python script.
- Not much of a bother for regular desktop Linux users who keep their systems updated.
- Could be problematic for cloud servers and containers if the kernel is not updated.

A logic flaw that sat quietly in the Linux kernel since 2017 has finally been found and disclosed. For a brief window, it let any unprivileged local user on a Linux system escalate to root with a script smaller than most config files.

The flaw is in a kernel subsystem that lets regular programs tap into built-in cryptographic functions. By feeding it file data in a specific way, an attacker can get the kernel to quietly overwrite 4 bytes of any file's in-memory copy.

The actual file on disk stays intact the whole time, so any tool checking file integrity will see nothing wrong. The exploit is just a 732-byte Python script that doesn't require any additional dependencies or compilation.
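The subsystem in question is the Linux kernel crypto userspace API, reached through AF_ALG sockets (the algif_aead module named in the mitigation further down belongs to it). As a rough, benign illustration of how any unprivileged program can reach kernel crypto code, here is a sketch using the hash path rather than the vulnerable AEAD path; nothing below is from the exploit itself, and the function name is my own:

```python
import hashlib
import socket

def kernel_sha256(data: bytes) -> bytes:
    """Hash `data` using the kernel's crypto code via an AF_ALG socket."""
    with socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0) as algo:
        algo.bind(('hash', 'sha256'))   # select the kernel's sha256 transform
        op, _ = algo.accept()           # operation socket for this transform
        with op:
            op.sendall(data)
            return op.recv(32)          # sha256 digests are 32 bytes

# AF_ALG exists only on Linux; skip gracefully elsewhere or if it's blocked
if hasattr(socket, 'AF_ALG'):
    try:
        assert kernel_sha256(b'hello') == hashlib.sha256(b'hello').digest()
    except OSError:
        pass  # e.g. seccomp or a kernel without the userspace crypto API
```

No special privileges are needed for any of this, which is exactly why a flaw in this interface is reachable by any local user.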

The vulnerability is tracked as CVE-2026-31431, goes by the name "Copy Fail," and was discovered by researchers at Theori using their AI security research tool, Xint Code.

The security researchers tested it on Ubuntu 24.04 LTS, Amazon Linux 2023, RHEL 10.1, and SUSE 16, getting root on all four with the exact same script each time.

Theori reported the issue to the Linux kernel security team on March 23 and received acknowledgment the next day, with a patch proposed and reviewed by March 25. The fix was committed to mainline on April 1, the CVE was assigned on April 22, and public disclosure followed on April 29.

Who needs to worry, and who doesn't?

this picture shows six categories with different risk ratings for various linux setups

According to the Copy Fail website hosted by Theori, the risk level varies quite a bit depending on how you run Linux.

At the top are multi-tenant Linux hosts, Kubernetes and container clusters, CI runners and build farms, and cloud SaaS environments running user-supplied code.

These all get a "High" risk rating. Containers and cloud workloads are especially exposed because the Linux page cache, the part of memory this exploit corrupts, is shared across the entire host, container boundaries included.

A compromised container can take down the whole node, and a bad pull request run on a shared CI runner could hand an attacker root on that machine.

Standard Linux servers where only the team running it has shell access get a "Medium" rating, whereas personal desktops and laptops are at the bottom with a "Lower" risk rating.

Copy Fail needs local code execution to work, so it won't get anyone in remotely by itself. If malware is already running on your machine, this could be used to escalate to root, but that's a bigger problem either way.

The fix is to patch the kernel, and most major distros have updates out or on the way. If patching isn't immediately possible, Theori recommends blacklisting the algif_aead kernel module as a stopgap (run the following as root):

echo "install algif_aead /bin/false" > /etc/modprobe.d/disable-algif-aead.conf

rmmod algif_aead 2>/dev/null
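To confirm the stopgap took effect, or to gauge exposure before patching, note that a loaded (or built-in) kernel module shows up as a directory under /sys/module. A minimal check, with the helper name being my own:

```python
import os
import platform

def module_loaded(name: str, sys_module_dir: str = '/sys/module') -> bool:
    """A loaded (or built-in) kernel module appears as a directory under /sys/module."""
    return os.path.isdir(os.path.join(sys_module_dir, name))

# On a mitigated system this should report False after the rmmod above
print('kernel:', platform.release())
print('algif_aead loaded:', module_loaded('algif_aead'))
```

Note that algif_aead can be auto-loaded on demand, so an empty lsmod alone is not proof of safety; the modprobe.d blacklist is what prevents it from coming back.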

As of this writing, Microsoft has noted that exploitation remains "limited and primarily observed in proof-of-concept testing," so there's no confirmed mass-scale campaign just yet.

That said, CISA, the US cybersecurity agency, has added Copy Fail to its Known Exploited Vulnerabilities (KEV) catalog, ordering US federal agencies to patch their Linux systems by May 15.

It also urged other organizations to treat it as a priority regardless of whether the federal deadline applies to them.


Suggested Read 📖: VS Code Was Adding Copilot as a Git Co-Author Without Telling Anyone



from It's FOSS https://ift.tt/9n8YOpW
via IFTTT

Go Away Microsoft! The Netherlands is Quietly Building Its Own GitHub Replacement

Back in November 2025, Jan Vlug, a software engineer who writes for the Dutch government's developer portal, put out a detailed blog post recommending which Git forge the Netherlands should adopt for its governmental source code hosting needs.

His post came at a time when the Ministry of the Interior (BZK) was already setting up a dedicated Git instance, and the platform decision was still open.

Currently, the Dutch government's code is spread across GitHub and GitLab, neither of which is under government oversight.

GitHub got ruled out first because it's proprietary software, which directly conflicts with the government's own policy of preferring open source when options are equally suitable.

GitLab made it further in the evaluation but didn't survive it either. The issue was its open-core model, where the Community Edition is genuinely free software but the Enterprise Edition is not.

The solution

this cropped screenshot of the forgejo official website shows a bunch of text and buttons on the left, on the right is the project's squirrel mascot

Forgejo came out on top due to its fully free and open source nature. Licensed under GPLv3+ and governed by Codeberg e.V., a democratic nonprofit, it has no enterprise tier, proprietary upsell, or vendor lock-in problems.

On April 24, 2026, code.overheid.nl had its soft launch, with developer advocate Tom Ootes writing about it on developer.overheid.nl. He framed it as a collective project to build something together rather than ship something finished.

The platform is a self-hosted Forgejo instance, running on Dutch government infrastructure managed by SSC-ICT (DAWO). It's free for all government organizations and is built around the following goals.

Open source development with proper Git tooling, including pull requests, issue tracking, and code reviews; government-wide collaboration to reduce duplicate development across agencies; and sovereignty through full control over the hosting environment.

This initiative is still in the pilot phase, with the rollout being kept deliberately gradual.

Not every government organization can sign up yet, and the idea is to build it alongside the developers who will actually use it, with early participants encouraged to file issues and open pull requests on the platform itself.

What's already in?

The platform is live and already hosts some content. The most notable presence is Kiesraad, the Dutch Electoral Council, which has pushed several election-related repositories including Abacus, the software used for vote counting and seat distribution, and e-KS, an electronic candidate nomination system.

The Ministry of the Interior (BZK) has the DAWO project (their digital autonomous workplace initiative) on there, along with a DigiD source code release published under a freedom of information ruling.

On the organization side, the list of who has joined since the April 24 soft launch is telling. Multiple national ministries are already on the platform: Finance, Foreign Affairs, Agriculture, and Interior.

Several major municipalities have also signed up, including The Hague, Utrecht, Leiden, and Arnhem. For a platform still in pilot with no formal launch announcement, that's a fairly significant roster.


Suggested Read 📖: France's Linux Move



from It's FOSS https://ift.tt/SbinL7f
via IFTTT

Typical Microsoft! Turns Out VS Code Was Adding Copilot as a Git Co-Author Without Telling Anyone

VS Code has been quietly appending a Co-authored-by: Copilot line to users' git commits, including ones written entirely without Copilot's involvement.

The culprit behind this, git.addAICoAuthor, is a feature that was introduced in VS Code 1.110 back in March. It is designed to tag commits with a Copilot co-author trailer when AI-generated code is involved, and it launched with off as the default.

So far so good, right? 🙂

That changed in April, when Courtney Webster, a Product Manager at Microsoft, submitted a pull request that changed one thing: the default value of git.addAICoAuthor, from off to all.

The PR was reviewed and merged by VS Code team member Dmitriy Vasyura on April 16, without a release note or any kind of user-facing notification.

a github screenshot that shows a merged pull request titled, "enabling ai co author by default"

The all setting is the broadest option available for git.addAICoAuthor, adding the Copilot trailer to every commit involving any AI interaction, including inline completions.

With the default flipped to all, anyone who had not manually configured the setting was suddenly getting Copilot credited in their git history.

Things got messier from there. Developers reported that the credit info (trailer) was appearing even with chat.disableAIFeatures set to true. The trailer is also appended after the commit finalizes, not appearing in VS Code's commit message editor beforehand, so there was no window to catch and remove it before it showed up in git history.

One developer replaced Copilot's generated commit message with one they wrote themselves, committed, and still found the Copilot co-author line sitting in their log.
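Because the trailer lands after the commit finalizes, the only place to catch it is the history itself. A hypothetical helper (the function name is mine, not from VS Code, and the exact identity string VS Code appends may differ) that flags the trailer in a commit message:

```python
def has_copilot_trailer(message: str) -> bool:
    """True if any trailer-style line in a commit message credits Copilot as co-author."""
    for raw in message.splitlines():
        line = raw.strip().lower()
        if line.startswith('co-authored-by:') and 'copilot' in line:
            return True
    return False

# Hypothetical example message; placeholder email, not the real trailer value
example = 'Fix typo\n\nCo-authored-by: Copilot <copilot@example.com>'
print(has_copilot_trailer(example))   # -> True
```

Pairing something like this with `git log --format=%B` would let you sweep an existing history for commits that silently picked up the trailer.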

But fret not, as the fix has been delivered.

The Fix

a small screenshot that shows an apology post by dmitriy vasyura who had enabled copilot attribution by default on vscode
Dmitriy's apology over on Hacker News.

Dmitriy, the VS Code team member who merged the original PR, came forward on Hacker News over the weekend under the username dmitriv, specifically to address the fallout.

Identifying himself as the person who approved the change, Dmitriy said that he was sorry for mistakenly turning on this feature by default without sufficient scrutiny.

He also clarified the following before conspiracy theories could start emerging:

There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code. As folks mentioned here - many similar tools do this as well.

The fix, now live on VS Code's GitHub repo as PR #313931, reverts git.addAICoAuthor back to off by default and corrects the detection issue that caused the trailer to appear even when Copilot was not in use.

You can expect this change to land with the upcoming VS Code 1.119 release.

Via: The Register



from It's FOSS https://ift.tt/q24DyJw
via IFTTT

Monday, 04 May 2026

A Free Open Source Mobile Dev Hackathon Is Coming to the Netherlands on May 16

OpenSource Science B.V., better known as OS-SCi, is a Netherlands-based institution with a pretty specific focus: training the next generation of developers exclusively on Free and Open Source Software (FOSS).

They run bachelor's programs, modular courses, and student projects with partners that include LPI, UBports, the Rust Foundation, and the Python Institute.

If the outfit still sounds unfamiliar, you are not alone. OS-SCi doesn't get a lot of coverage, even in FOSS circles. They are primarily education-focused, operating out of their Tilburg headquarters and working with universities to integrate open source into formal curricula.

They also run FOSSTech, a separate arm that delivers open source IT solutions to organizations. So it's not purely a school; there's a consultancy side to them too.

All that context matters, because OS-SCi is about to host something that might interest quite a few of you.

OS-SCi refers to this event interchangeably as "Lomiri Tech Meeting," "Lomiri CodeFest," and "Lomiri Hackathon" across their own websites. We use "Lomiri Tech Meeting" throughout this article.

As always, independently verify event details and the organizer before attending.

Lomiri Tech Meeting

cropped screenshot of the lomiri tech meeting registration page on os-sci's website

The Lomiri Tech Meeting is a two-day, free hackathon aimed at students who want to get hands-on with open source mobile development. The focus is building apps for Lomiri and Ubuntu Touch, the mobile OS maintained by UBports.

Two keynote speakers are confirmed. Mike Gabriel, the project leader behind Lomiri's user interface, will be speaking. So will Erik Mols, who will use the event to announce the Lomiri Bounty Program, a new initiative that would offer students real-world incentives to contribute to the Ubuntu Touch ecosystem.

Every student who attends will be given free copies of Lomiri App Development Level 1 and Level 2. These are from a three-volume series that covers the platform's foundational concepts along with advanced procedures.

Beyond the keynotes and books, the event is built around hands-on app development sessions guided by experts. The goal here seems to be that attendees leave having actually built something and not just sat through some presentations.

Event Details and Registration

The Lomiri Tech Meeting is open to students of all experience levels and will run from May 16 to 17, 2026, kicking off at 10:00 AM each day and wrapping up at 4:00 PM on the 17th.

The venue is OS-SCi's headquarters at Spoorlaan 400, Tilburg, Netherlands. You can find it on OpenStreetMap and Google Maps.

It sits very close to Tilburg's main train station (Station Tilburg), which makes it fairly straightforward to reach by rail. The building also appears to have some level of wheelchair accessibility, but I recommend confirming that directly with OS-SCi before making travel arrangements.

You can register for free on the official event page.



from It's FOSS https://ift.tt/4ubI2hv
via IFTTT

What Are Linux Mint HWE ISOs and Do You Actually Need One?

Earlier this year, the Linux Mint project announced a significant shift in how it shipped releases, hinting at a longer cycle.

Project lead Clement Lefebvre had pointed out that the existing pattern of a new release every six months, on top of maintaining LMDE, was leaving the team spending more time on testing and release management than on actual development.

By March 2026, a decision had been made, with Linux Mint 23 now targeted for a Christmas 2026 release, making it the longest gap between major releases the project has seen.

The next release will be based on Ubuntu 26.04 LTS, will drop the long-used Ubiquity installer in favor of the live installer from LMDE, and ship with a functional Wayland session.

But the thing is, a longer wait works fine for existing users on a supported install. The problem is anyone trying to install Mint on very new hardware, where Linux 6.14 in the January ISO may not have the support they need.

To tackle that, the developers have introduced a new ISO that looks to improve compatibility with newer hardware.

What's happening?

a cropped screenshot of the hwe isos page on linux mint's official website

Linux Mint has published HWE ISO images for Linux Mint 22.3, where HWE stands for Hardware Enablement, and these are distinct from the regular Mint 22.3 ISOs. These ISOs ship with Linux kernel 6.17 instead of the 6.14 kernel the original images came with.

Don't think that these are new releases; rather, the HWE bit ensures that you get support for newer hardware, while the underlying system remains Linux Mint 22.3 in every way, fully put through the Mint team's QA process.

The team also plans to keep this up, publishing fresh HWE ISOs each time a newer kernel lands in the package base. So the waiting period until Mint 23 in December should not leave new hardware users without a compatible release.

For context on kernels in the 22.x series: Mint 22 and 22.1 shipped with the 6.8 LTS kernel, while Mint 22.2 and 22.3 moved to the HWE track, starting at 6.14 and now sitting at 6.17. Both tracks get security updates and are actively maintained.

If you are already running an up-to-date Linux Mint 22.3 install, you are likely already on kernel 6.17 and do not need to touch the HWE ISO. The images are primarily useful at the installation stage, for machines where the regular ISO will not boot or install cleanly due to hardware compatibility concerns.
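Whether the HWE ISO matters for a given machine therefore comes down to comparing the running kernel against 6.17 (the threshold from this article). A minimal sketch, with the helper name being my own:

```python
import platform

def kernel_at_least(release: str, minimum: tuple = (6, 17)) -> bool:
    """Compare a `uname -r` style string, e.g. '6.14.0-32-generic', against a minimum."""
    base = release.split('-')[0]                        # drop the distro suffix
    parts = tuple(int(p) for p in base.split('.')[:2])  # (major, minor)
    return parts >= minimum

# True means the HWE ISO offers nothing your install doesn't already have
print(platform.release(), '->', kernel_at_least(platform.release()))
```

This is only a version comparison, of course; the real test is whether the regular ISO boots and installs cleanly on your hardware.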

Who's this for?

Very new laptops and desktops with components that require a kernel newer than 6.14 will need the HWE ISOs. Of course, if the regular ISO works fine on your machine, then there is no reason to look at the HWE version.

There is also an important caveat worth flagging before you reach for one. The Linux Mint folks specifically call out NVIDIA, Broadcom, and VirtualBox users as instances where things can get complicated since proprietary and third-party modules can run into compatibility problems on newer kernels.

The HWE ISOs are listed on a dedicated page, which currently shows the Linux Mint 22.3 HWE ISO with Linux 6.17. The page includes an extensive list of mirror links across North America, Europe, Asia, Africa, and South America, so a fast download source should not be hard to find.



from It's FOSS https://ift.tt/Yyd4bPa
via IFTTT

Saturday, 02 May 2026

Ubuntu’s Official Flavour List Is Shrinking, And That’s Not a Bad Thing

Choice is one of the hallmarks of Linux, to the point that both “distro fever” and “distro fatigue” are alive in equal measure. Historically, Ubuntu has been known for much the same: different strokes for the wide range of folks who make Ubuntu their Linux home. Many of us see this wide selection of choices as a plus, and with good reason: we get to pick and choose our exact experience and tailor it to our needs.

Ubuntu’s flavour ecosystem has long reflected this ethos rather well: Don't want GNOME? Use Kubuntu. Need something lighter? You can choose Xubuntu or Lubuntu. Need something more specialised? Take your pick of Edubuntu, Ubuntu Studio, and others. On paper, it’s the Linux philosophy of choice perfected.

But there comes a point where adding more official flavours stops feeling like a strength, and starts raising a more uncomfortable question: how many of these options still make sense as official Ubuntu projects? Because fewer official flavours is healthier than keeping an inflated list of under-resourced projects alive just for the sake of it. We need less scattering, and more mattering.

Choice itself isn’t the problem: clarity is

A screenshot of the official Ubuntu flavors page, showing the available flavours
There are currently 10 official flavours listed on the "Ubuntu flavors" page

Before I continue, it’s important for me to clarify one thing: I’m not arguing against choice itself. I’m making a case for greater clarity. Choice properly *applied*, not just translated to availability. After all, choice is one of the very reasons we choose Linux over other options. We want the ecosystem within the ecosystem, and we’d be lost without this flexibility.

Ubuntu is still arguably the best-known Linux distribution outside the Linux community itself. For many people, it is the first distro they hear of, the first one they search for, and often the first one they install. That visibility matters, and it can carry almost anything to a higher echelon just by association. It also means Ubuntu has a different responsibility from smaller, newer, and more niche projects.

Without meaning, choice gets noisy

It’s important to understand that the problem isn’t that Ubuntu offers different flavours overall, but that some of these choices can be difficult to justify, maintain, and uniquely define over time. A newcomer landing on the Ubuntu Flavours page isn’t thinking about packaging work, release engineering, or maintainer burnout. They’re thinking something much simpler: Which one am I supposed to choose?

The more crowded that menu becomes, the more likely it is that the answer starts to feel murky, especially if the defining characteristics of any particular flavour are harder to distinguish from another. This doesn’t mean that Ubuntu should strip away all variety and become a one-size-fits-all distro. That would only violate the key foundation that’s made Ubuntu successful in the first place.

What it does mean is that the official choices should “just make sense” overall.

“Official” carries a greater set of expectations

When most users hear the word “official”, it introduces connotations and ideas that can’t (or at least shouldn’t) be easily ignored. This is where the conversation becomes less about desktop preferences and default apps, and more about polish, user experience, and sustainability.

An “official” Ubuntu flavour isn’t just a remix with its own logo and download page. Ubuntu’s own “RecognizedFlavors” wiki page makes it clear that recognised flavours are expected to have maintainers, participate in the official release cycle, follow QA coordination and bug tracking, and must have developers with the right access and experience to help keep things working across the release cycle. That’s a lot more than building a custom ISO that some people might like.

Being blessed with “official status” is not a free ride

A screenshot of the official Kubuntu website
The official Kubuntu website (https://kubuntu.org/)

The same page also makes it clear that Canonical does not simply take care of everything for these official flavours. There are limits around testing, upgrades, packages outside the main Ubuntu images, along with broader support obligations. So while flavours benefit from the wider Ubuntu base, being official still comes with a real maintenance burden.

This matters because community resources aren’t infinite, no matter how large or passionate the community. There are only so many developers, packagers, testers, documenters, and maintainers actually doing the work that makes a distro possible, and some actually devote their time and investment to multiple projects at a time. Every additional official flavour draws from this limited pool of active contributors, and demands a greater collective attention.

Therefore, it’s about way more than just giving end users more options. It’s another project (or set of projects) that needs ongoing attention, another arena for upstreams to pay attention to, another release that needs to be kept healthy, and another experience that has to reflect the image of Ubuntu well. Once you look at it that way, just adding more flavours sounds way less appealing without adding the robust backing needed to make them possible.

Passion and enthusiasm don’t automatically become maintenance

Ubuntu GNOME is no more, despite providing a purer GNOME experience (image source: Wikimedia, CC0)

One of the harder realities in open source is that a passionate user base does not automatically become a strong maintainer base. That’s not an indictment against users either; most people are genuinely better positioned to be good users than contributors, maintainers, packagers, developers, testers, documenters, or advocates. Even in dedicated communities, not everyone has the skills, time, or financial stability to support a project in those ways. This remains true when we talk about Ubuntu flavours as well.

A current example is Ubuntu MATE. In March 2026, project lead Martin Wimpress said his involvement in the project was coming to a close and asked for new maintainers to step up in his stead. Ubuntu MATE still has a clear identity and a loyal community, but loyalty isn’t the same thing as leadership or maintainership. The frank reality is that if too few people are able to carry the technical and organisational burden, even a respected official flavour can start to feel the strain. That tension is visible in the current release cycle too, with the Ubuntu 26.04 LTS release notes linking nine official flavour release-note pages rather than ten. Why? Because Ubuntu MATE is missing from the list.

It’s not just Ubuntu MATE either: the Lubuntu team has openly said it has less development manpower than before, while Ubuntu Unity says 26.04 is a regular release because key milestones were missed.

All told, the broader point is clear: adding more official flavours doesn’t magically create more maintainers. It just spreads limited labour across more projects, and as Ubuntu continues to spread its wings even further, that labour is not getting any lighter.

Some flavours clearly earn their place

Ubuntu Studio has long proven itself a worthy offering

Another key point to acknowledge here is that not every flavour adds the same kind of value for users. Some have a very clear reason to exist, such as providing a streamlined experience for a popular desktop environment that would otherwise be diminished by mixing it into a standard Ubuntu desktop base. A few examples of these include Kubuntu (for KDE Plasma), Xubuntu (for Xfce), and Lubuntu (for LXQt). Edubuntu and Ubuntu Studio serve a different kind of need, but both clearly establish themselves as necessary based on their defined purposes.

Strong purpose at the core matters because maintaining a flavour isn’t getting easier. Ubuntu’s flavour teams must keep up with the parent distro’s changes and innovations, including installer changes, release engineering, and core infrastructural changes whether to the OS itself or to build systems and other “backroom” aspects of the distro. So while the loss of convenience for some users can be understandably frustrating, it’s important to remember that it’s a balance of choice versus the reality of making these choices possible.

And this is really the distinction Ubuntu as a whole is forced to care about: not whether a flavour can exist at all, but whether it still makes sense for it to become (and remain) official long term.

Ubuntu can’t be the lab for everything

Ubuntu has often served as a proving ground for the Linux desktop as a whole. To some degree, other distributions have caught up and even surpassed it in many ways (take Fedora, for instance), but this doesn’t change the fact that the historical framing still holds strong.

Ubuntu-based efforts have helped push things forward on the Linux desktop more than once, and projects like KDE Neon have shown how the Ubuntu base can be a solid foundation for showcasing where a desktop stack is heading. This history of experimentation and iteration without compromising stability and quality has been key to Ubuntu’s success, and is one of the reasons Ubuntu has been chosen as the base for projects like Mint, Zorin and others.

There must be boundaries

Ubuntu itself can’t be the long-term official home for every experiment, budding desktop environment, or project that once felt promising but no longer has the momentum or community investment to justify and sustain its titular brand. At some point, definition, focus, and purpose must take precedence over ideas and good intentions.

Ubuntu has long positioned itself around usability, polish, and accessibility, and that legacy must mean something in the long run. “Linux for human beings” only works as an ethos if official experiences carrying the Ubuntu name are a true reflection of that very concept.

A poorly maintained, lagging, or half-broken flavour isn't a sign of openness. It's a sign that the core idea may not be enough to carry the weight of its own success.

A smaller official line-up can actually be a stronger one

I don’t see Ubuntu’s shrinking official flavour list as bad news. If anything, I see it as a difficult but necessary correction (just don’t call me Thanos, I don’t think he was right). A leaner, better-focused, better-supported line-up of official flavours is healthier than an extensive list held together by fatigue and goodwill. It’s better for users, because the choices are clearer and more likely to be finely polished. It’s better for maintainers, because their efforts aren’t being spread quite so thin.

Most importantly, it’s better for Ubuntu as a whole, because Ubuntu is as much a product and a brand as it is an idea. Besides, this is Linux, and there will always be remixes, spins, experiments, and community projects that blow our minds with what’s possible, even if they don’t always last as long as others. Choice isn’t going away, nor should it. But the official Ubuntu flavour list will only be better off not being the umbrella for everything.



from It's FOSS https://ift.tt/2UhedEa
via IFTTT

Thursday, 30 April 2026

Microsoft Marks 45 Years of DOS by Open-Sourcing Its Oldest-Known Source Code

Before Microsoft became the company that shipped Windows to corporate desks around the world, it had to start somewhere. That somewhere was a scrappy little operating system written by one guy at Seattle Computer Products.

Tim Paterson built what he initially called QDOS, short for Quick and Dirty Operating System, in 1980. Intel's 8086 chip was out, but CP/M, the dominant OS of the time, had no 8086 support. He wrote something to fill that gap, modeling the CP/M API so existing software would run on it.

Microsoft bought the rights to 86-DOS for just under $100,000, shipped it to IBM as PC DOS 1.0 in August 1981, and retained the rights to sell the same OS to other PC manufacturers as MS-DOS.

That single deal set Microsoft on the path to dominating personal computing for the next two decades.

Fast forward to now

a cropped screenshot that shows paterson listings on github with a picture of tim paterson visible in the middle

On April 28, the 45th anniversary of 86-DOS 1.00, Microsoft published a blog post announcing that the earliest known DOS source code is now publicly available on GitHub, under the MIT license.

And the story behind it is an interesting one. Tim did not hand over a tidy source archive; instead, what he kept were physical assembler printouts and stacks of continuous-feed paper from 1981 that he had held onto over the decades.

Getting those into usable shape took effort, with historians Yufeng Gao and Rich Cini having to locate, scan, and transcribe the DOS-related portions into compilable code.

What's included are the 86-DOS 1.00 kernel, several development snapshots of the PC-DOS 1.00 kernel, utilities like CHKDSK, and the assembler Paterson used to write the OS itself.

Who's this for?

Honestly, seeing Microsoft open up old code is not that surprising anymore. 6502 BASIC went open source in September 2025. MS-DOS 4.0 in 2024. MS-DOS 1.25 and 2.0 back in 2018. There is a clear pattern at this point.

If you are into retro computing or low-level systems work, this is genuinely worth digging into. The source code is compilable, and you will need a copy of Seattle Computer Products' ASM assembler, which you can pull from any 86-DOS or early MS-DOS release.

The GitHub repository's README has the necessary steps for you to follow.


Suggested Read 📖: Someone Turned a PS5 Into a Linux Gaming PC



from It's FOSS https://ift.tt/JMwDqIT
via IFTTT