Tuesday, 28 April 2026

LVFS Has Turned Up the Heat on Vendors Who Won't Contribute

The Linux Vendor Firmware Service, or LVFS, is what makes firmware updates on Linux not a nightmare. Hardware vendors upload their firmware directly to it, and users get those updates delivered through fwupd and tools like GNOME Software.

According to official estimates, the project has shipped over 140 million updates from 150 vendors and is a requirement for most consumer-facing Original Equipment Manufacturers (OEMs), Original Design Manufacturers (ODMs), and Independent BIOS Vendors (IBVs).

But the project is approaching a dilemma that most open source projects of its scale eventually face: how to remain a sustainable undertaking in the long term. 🗓📈

They need support

[Image: placeholder screenshot of the LVFS dashboard with demo uploaded firmware data visible.]

Right now, the Linux Foundation covers all of LVFS' hosting costs, and Red Hat funds Richard Hughes, the project's only full-time developer. Richard, along with a handful of part-time contributors, keeps over 20,000 firmware files in circulation.

Their sustainability plan flags some key issues that come with being this understaffed.

The project has no dedicated security response team, its sole maintainer has no backup, and the volume of critical work keeps growing with no one new stepping in to help.

Security vulnerabilities get handled on a best-effort basis (yikes ☠️), and very few companies are supporting fwupd core or the LVFS web service. You could call it a tragedy of the commons where everyone depends on it, but almost no one is paying for it.

The plan was published in August 2025, and LVFS has been rolling out restrictions in phases. Fair-use download utilization graphs had already been added to vendor pages in April 2025, fair-use upload tracking came in July, and sponsorship tiers opened up in August 2025.

The April 2026 phase kicked in at the start of this month and has been live for nearly four weeks now. Any firmware page where a vendor is crossing 50,000 monthly downloads now shows an overquota warning.

[Image: screenshot of LVFS' new overquota warning. Courtesy of Richard Hughes.]

Vendors below the "Startup" sponsorship level have also lost access to detailed per-firmware analytics. In August, custom LVFS API access will be cut for non-Startup vendors, with automated upload limits following in December.

How can you help?

[Image: table showing the Premier, Startup, and Associate sponsorship tiers for the LVFS project.]

LVFS is looking for vendors who use its infrastructure to pitch in. Presently, only two hold Startup sponsor status: Framework Computer and the Open Source Firmware Foundation.

What they actually need is either two full-time software engineers or $400,000 to fund the hires through the Linux Foundation, plus a separate $30,000 for hosting. The sponsorship tiers are as follows:

  • Premier: $100,000 per year
  • Startup: $10,000 per year (under 99 employees)
  • Associate: Free, but only available to registered non-profits, academic institutions, and government entities.

Both Premier and Startup tiers require an LF Silver Membership (page 28) on top of the listed fees. There is no free option for commercial hardware vendors. Alternatively, vendors can contribute a full-time engineer to work on LVFS or fwupd directly.


Suggested Read 📖: Will You Pay $119 For An Open Source KVM?



from It's FOSS https://ift.tt/CJPv5Zq
via IFTTT

Hackers Hijacked a GitHub Actions Workflow to Push Malicious Code to PyPI

We have been routinely seeing open source projects getting hit by malicious actors with varying degrees of sophistication. Developers are often left scrambling to push out fixes in such situations.

As to why they get targeted, their attack surface is wide, maintainer bandwidth is limited, and one bad package can quietly reach thousands of users before anyone even notices.

When something slips through, developers have to yank releases, rotate credentials, and piece together what got out.

We now have a similar situation where Elementary Data's OSS Python CLI was compromised. And if you had the affected version installed, then you have some cleanup to do.

How it happened

The attack came down to a flaw in one of Elementary's GitHub Actions workflows. It was set up in a way that passed text from a PR comment directly into a shell command, so whatever the comment contained, the runner would execute.
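This is the well-known script-injection pattern in GitHub Actions. A hypothetical workflow illustrating it (not Elementary's actual file) could look like this; the `${{ }}` expression is expanded into the script text before the shell ever runs, so a crafted comment becomes code:

```yaml
# Illustrative sketch only, not Elementary's actual workflow.
on: issue_comment

jobs:
  vulnerable:
    runs-on: ubuntu-latest
    steps:
      # DANGEROUS: the comment body is interpolated into the script
      # before bash sees it, so a comment containing `$(...)` or
      # backticks runs arbitrary commands on the runner.
      - run: echo "Comment was: ${{ github.event.comment.body }}"

  safer:
    runs-on: ubuntu-latest
    steps:
      # Safer pattern: route untrusted input through an environment
      # variable, so the shell treats it as data, not script text.
      - env:
          COMMENT: ${{ github.event.comment.body }}
        run: echo "Comment was: $COMMENT"
```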

At 22:10 UTC on April 24, the attacker posted a malicious comment on a pull request. The workflow ran it as code, handing over access to the runner's secrets, including the PyPI publish token and the GITHUB_TOKEN.

With those in hand, they created the branches and pull requests needed to stage a release, then kicked off Elementary's release workflow. By 22:20 UTC, elementary-data 0.23.3 was live on PyPI. A malicious Docker image was pushed four minutes later.

Who got hit

Only users who installed elementary-data 0.23.3 (now removed) from PyPI are affected, as well as anyone who pulled the compromised Docker image during the attack window.

However, Elementary Cloud, the Elementary dbt package, and every other version of the CLI are all unaffected. That said, if you were running 0.23.3, the exposure is serious. The malware had access to anything the environment could reach.

The remedy

First, check your installed version:

pip show elementary-data | grep Version

If it shows 0.23.3, get rid of it and install the clean version:

pip uninstall elementary-data
pip install elementary-data==0.23.4

Update your requirements files and lockfiles to reflect that too.
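For example, if a requirements.txt pinned the bad release, the pin just needs to move to the patched one (the file layout here is illustrative):

```
# requirements.txt
elementary-data==0.23.4  # previously ==0.23.3
```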

You should also check for a marker file the malware leaves behind. If it's there, the payload ran on that machine:

  • Linux/macOS: /tmp/.trinny-security-update
  • Windows: %TEMP%\.trinny-security-update
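On Linux/macOS, that check is easy to script. A minimal sketch, using the marker path from Elementary's advisory:

```shell
#!/bin/sh
# Print "compromised" if the malware's marker file exists, "clean" otherwise.
check_marker() {
    if [ -f "$1" ]; then
        echo "compromised"    # the payload ran here: rotate all credentials
    else
        echo "clean"
    fi
}

check_marker "/tmp/.trinny-security-update"
```

On Windows, look for the %TEMP% path listed above instead.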

If you find it, rotate every credential that environment had access to, and get your security team looking for any suspicious activity on those credentials.

On their end, Elementary pulled 0.23.3 from PyPI, GitHub, and the Docker registry on April 25.

They also decommissioned the compromised workflow, audited the rest of their GitHub Actions for the same type of vulnerability, regenerated all affected secrets, and moved to OIDC authentication.

They are currently working with an Israeli cybersecurity firm to conduct an investigation and step up their protection against such attacks going forward.



from It's FOSS https://ift.tt/v4Vzh7U
via IFTTT

Linux is Getting a New Default Folder in Your Home Directory

If you are using a rolling release distro like Arch, you might have noticed that your home directory has a new member: a folder called "Projects".

For as long as I can remember, Linux has had a set of default folders under the home directory. Usually they are Documents, Music, Pictures, Videos, and Downloads. Templates, Desktop, and Public folders are also there.

Now we have a new addition in the form of "Projects".

Projects directory for your... well... projects

[Image: new Projects directory in Linux.]

The purpose of the Projects directory is simple. It gives you a place to keep your project files, the kind of files that do not necessarily go in Documents, Music, Pictures and Videos. For example, your coding projects, your 3D printing and CAD projects etc.

Why it is more than 'just another directory'

The addition of a standard Projects directory is not just about keeping your home folder organized. It has bigger implications.

For starters, it gives applications a predictable place to store project-related files. Just as image-related apps often default to the Pictures folder and video tools save into Videos, development tools, CAD software, and hardware design suites could use Projects as their natural default.

This can also improve interoperability between tools. An IDE could offer to create repositories in Projects by default. Build tools could assume a sensible project workspace, and installation guides or README instructions could refer to a common location instead of telling users to create arbitrary folders like ~/dev, ~/code, or ~/projects.
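Until toolkit support lands, an application could resolve the location the same way xdg-user-dirs records it. A minimal Python sketch, assuming the new entry uses the key name XDG_PROJECTS_DIR in ~/.config/user-dirs.dirs (consistent with existing keys like XDG_DOCUMENTS_DIR, but an assumption on my part):

```python
import os
import re

def xdg_projects_dir() -> str:
    """Resolve the Projects directory from ~/.config/user-dirs.dirs.

    The XDG_PROJECTS_DIR key name is hypothetical here, following the
    naming of XDG_DOCUMENTS_DIR etc.; falls back to ~/Projects when the
    config file or the key is absent.
    """
    home = os.path.expanduser("~")
    cfg = os.path.join(home, ".config", "user-dirs.dirs")
    try:
        with open(cfg) as f:
            for line in f:
                m = re.match(r'\s*XDG_PROJECTS_DIR="(.+)"', line)
                if m:
                    # Values in this file use a literal $HOME prefix.
                    return m.group(1).replace("$HOME", home)
    except OSError:
        pass
    return os.path.join(home, "Projects")
```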

Sandboxed applications such as Flatpak apps may also benefit because a standardized location is easier to recognize and grant permissions for.

Not to forget, backup tools and synchronization services could treat the Projects directory as a meaningful category of data, the same as Documents or Pictures.

In other words, this is not 'just another directory'. It enables better desktop workflows, and a small standardization like this may quietly improve usability across the Linux desktop over time.

This was an 11-year-old "request"

And interestingly, this isn’t a brand new idea. The concept has existed for over a decade.

Actually, the request to include a standard Projects directory was created in 2014. The reasoning from the original request still holds up today:

Currently XDG user dirs does not specify a directory for environments of projects. For software projects these usually include source code, version control, compiled binaries, test artefacts and downloaded dependencies. As they are much more than downloads and usually kept indefinitely, they do not fit in there. The benefit of defining a projects folder would be that when writing a README or install script for a project, one could automatically download the source to the user defined location, set up the build environment and install from there.

As with several instances in the recent past, GNOME/Freedesktop/KDE are paying attention to decade-old requests and implementing some of them.

💡
Don't like the new Projects directory? Just delete it. The xdg-user-dirs utility will not try to create it again; the configured location simply falls back to your home directory.

Power users who want more control can edit the ~/.config/user-dirs.dirs configuration file to control what goes where.
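For reference, the entries in that file look like the lines below; the XDG_PROJECTS_DIR key name follows the existing naming pattern but is my assumption:

```shell
# ~/.config/user-dirs.dirs (excerpt)
XDG_DOCUMENTS_DIR="$HOME/Documents"
XDG_PROJECTS_DIR="$HOME/Projects"
```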

The road ahead for this change

This new standard directory arrived with the release of xdg-user-dirs version 0.20. As I mentioned earlier, people using rolling release distros might already have this change. You can see a screenshot from EndeavourOS:

[Image: new Projects directory shown in a terminal.]

As GNOME contributor Matthias notes, support in GLib will be added in the coming months so that Flatpak, desktops, and applications can make use of the Projects directory.

I am looking forward to it. You?

I have always created a dev directory in my home directory. This is where my coding related project files are located. It's better than keeping them under Documents because technically, these are not documents.

I think that I am not the only one who does this. I guess most of us have a projects or dev directory under Home.

Including a standard Projects directory is a good move. Not only does it remove the guesswork about where to keep such files, various applications can also take advantage of this new directory.

It may look like a small addition, but standardizing something many Linux users already do can improve workflows, application behavior, and even documentation over time. For a simple extra folder, “Projects” could have a surprisingly large impact.



from It's FOSS https://ift.tt/RkDj37J
via IFTTT

Ubuntu is Going Big on AI (But Not The Copilot Kind You Dread)

AI has been creeping into everything, and the Linux ecosystem is no exception. Over the last couple of years, local AI has gone from a niche curiosity to something people can actually run on their machines.

On the user side, tools like Ollama and LM Studio have made it surprisingly straightforward to pull open-weight models and run them locally without requiring a cloud subscription.

For enterprises, solutions like RHEL AI and SUSE Linux Enterprise Server have been catering to organizations that want AI woven into their infrastructure.

Now, it looks like Canonical is jumping on the bandwagon as Ubuntu moves towards AI. Before you start calling it Ubuslop or something along those lines, take a look at how they are going about this.

What's happening?

[Image: cropped screenshot of an Ubuntu Discourse post by jnsgruk (Jon Seager) laying out the Ubuntu AI roadmap.]

Jon Seager, VP of Engineering at Canonical, has published a post on Ubuntu Discourse laying out the company's AI roadmap. The short version is that AI is coming to Ubuntu; it will be local-first, and it will be built around open-weight models and open source tooling.

He laid out a framework distinguishing between two kinds of AI features: implicit and explicit. Implicit AI is about making existing OS features smarter in the background, without requiring users to learn anything new or interact with anything that looks like AI.

He gave the examples of speech-to-text and text-to-speech, both of which can be improved using local inference with open source inference tools and open weight models, running entirely on-device.

Explicit AI features are a different story. These are the more obviously AI-centric, agentic workflows that could automate troubleshooting, create documents or applications, and run scheduled maintenance on a fleet of machines.

Jon also gave everyone a glimpse of what this could look like in practice:

Imagine being able to ask your Linux machine to troubleshoot a Wi-Fi connection issue, or to stand up an open source software forge that’s pre-configured, secured, and reachable over TLS.
One could easily imagine using such a capability as a gateway for controlling your Linux machine from other devices through a variety of mediums - be that a mobile app, text messaging, voice commands or otherwise.

The delivery mechanism for all of this is inference snaps. Rather than asking users to wrangle separate tools, sift through Hugging Face, and figure out which model format works on their hardware, Canonical wants a simple snap install to handle everything, with hardware-optimized builds served based on your silicon.

And since snaps carry the same confinement rules as everything else in the ecosystem, the models are sandboxed and cannot freely reach into your files or data.

Makes sense, I guess?

The local inference approach is what makes this worth paying attention to. The default is not a cloud call to some API that logs your prompts and charges you per token. It runs on your hardware, stays there, and does not require signing up for anything.

Of course, cloud and external services are still an option, but only as a fallback for people who specifically need them, not the assumed path. That is a bigger deal than it sounds, btw.

Most AI integration announcements from Big Tech players start from the opposite assumption—cloud first, local maybe someday.

Should you be worried?

When Linux and AI are mentioned in the same breath, your mind might naturally draw a comparison to Microsoft's infamous Copilot offering, where the default experience is cloud, the model is proprietary, and half the features quietly require a Microsoft account.

What Jon is proposing keeps the user-facing, agentic stuff strictly opt-in. The implicit features would run quietly in the background and improve things you already use. Nobody is bolting a chatbot sidebar into GNOME and calling it a "productivity feature."

But, as things go with roadmaps, decisions shift under pressure and user expectations change over time. I suggest keeping a close watch on how things develop for the rest of the year.



from It's FOSS https://ift.tt/D7ZWkeo
via IFTTT

Microsoft Might Be Rebasing Azure on Fedora Linux

Microsoft's in-house Linux distribution, Azure Linux, may be heading for a significant overhaul. According to a recent report, the tech giant is exploring the idea of rebasing Azure Linux on Fedora, which would be a notable shift in how the distribution is built and maintained.

Azure Linux, which longtime followers of the Linux space may know better as CBL-Mariner, has been Microsoft's internal Linux OS powering Azure services, WSL, Azure Local, and more since 2020.

It is already RPM-based, so a move toward Fedora would not be a complete departure from its existing foundation, but it would represent a serious overhaul.

[Image: cropped screenshot of the Fedora ELN SIG meeting on April 21, showing the conversation between Neal Gompa (conan_kudo) and Yaakov Selkowitz (yselkowitz).]

It all came to light from the recent Fedora ELN SIG meeting on April 21. During a discussion about a proposed Fedora change to build x86-64-v3 packages for Fedora 45, it was pointed out that Microsoft is one of the driving forces behind this proposal.

The change proposal was put forward jointly by Kyle Gospodnetich, a Linux engineer at Microsoft, alongside Lleyton Gray and Owen Zimmerman of Fyra Labs. Microsoft's interest appears to be tied to Azure Linux's need for x86-64-v3 performance gains, which is part of what is driving the rebase idea.

Fyra Labs, which is reportedly launching its own cloud service and wants x86-64-v3 support for Fedora and Ultramarine Linux, is also co-authoring the proposal.

It is worth noting that the x86-64-v3 change proposal still needs to be approved by the Fedora Engineering Steering Committee (FESCo) before anything moves forward.

There had also been talk of forking the whole distribution to achieve this, but Microsoft was steered toward working within the Fedora ecosystem instead.

On the surface, all of this sounds promising. As a Redditor pointed out, since Fedora is effectively upstream Red Hat Enterprise Linux, the move makes logical sense on paper.

That said, Microsoft has a history of open source commitments that lose steam over time. If they can properly follow through here, contribute meaningfully to Fedora, and not treat it as a one-way resource tap, this could genuinely be good for the ecosystem.

All of that is a big if, but it is worth watching closely.


Suggested Read 📖: Ubuntu 26.04 LTS is Here



from It's FOSS https://ift.tt/XEF6gSb
via IFTTT

Monday, 27 April 2026

MinIO Is Done With Open Source, What Are Your Options?

The MinIO GitHub repository was recently archived on April 25, 2026. But the thing is, it had been archived before, back in February, then briefly unarchived, and now it's locked again. Whether MinIO flips the switch again is anyone's guess, but it doesn't really matter at this point.

The message has been clear ever since they put the project in maintenance mode.

MinIO is one of the most widely used self-hosted object storage solutions out there. It is S3-compatible, lightweight, and runs as a single binary, integrating with pretty much everything in the cloud-native stack.

It's the kind of project you deploy once and forget about. Unless something breaks or a massive shift happens, like the move away from open source. Then you're left scrambling for alternatives.

How we got here

This didn't happen overnight. MinIO has been walking away from its open source community for well over a year.

It started in May 2025, when MinIO shipped a breaking release that removed most management features from the community edition's web UI, along with external IDP logins via LDAP and OIDC, moving them to their enterprise product.

Then in October 2025, MinIO stopped publishing Docker images and pre-built binaries for the community edition entirely. Users who needed to patch a CVE that dropped the same month couldn't just pull an updated image and had to build from source instead.

By December 3, 2025, the other shoe dropped when Harshavardhana, the co-founder of MinIO, pushed a commit to the repo's README declaring maintenance mode.

And then the repo was first archived in February 2026 and again in April 2026.

What replaces it

If you're running MinIO in production today, your existing deployment still works. But you are on software that will get no new features, no compatibility updates, and no guaranteed security patches.

MinIO's official stance is to redirect people towards their proprietary AIStor solution, but that's a hard pass for anyone who prefers to stay open source. You don't need to go down that path.

Here are three open source alternatives worth looking at. 👇

SeaweedFS

[Image: cropped screenshot of the SeaweedFS GitHub repository.]

SeaweedFS is a distributed storage system written in Go, built around Facebook's Haystack architecture and a few other systems. It handles S3-compatible object storage alongside blobs and files.

Its main advantage is O(1) disk seeks regardless of how many objects you store, making it particularly strong when dealing with billions of small files.

Before you ask, it is Apache 2.0-licensed, production-ready, and the closest thing to a drop-in MinIO replacement available right now.

Garage

[Image: cropped screenshot of the Garage webpage.]

Garage is a Rust-based object storage system built by Deuxfleurs, a French small-scale self-hosting service provider. It's designed specifically for geo-distributed deployments on modest hardware; think multiple physical locations rather than a single high-performance data center.

It has a small footprint, is simple to operate, and is well-suited for self-hosters and small organizations. Not only that, but it is available under the AGPLv3 license.

RustFS

[Image: cropped screenshot of the RustFS webpage.]

RustFS is the newest player in this space, written in Rust and released under the Apache 2.0 license. It positions itself as a direct MinIO successor, claims 2.3x faster performance than MinIO for small object payloads, includes a management console out of the box, and supports migration from existing MinIO deployments.

The catch here is that it's still in alpha. It is worth keeping on your radar if you are planning a migration over the coming months, though early adopters could take a gamble on it now.

In the end, MinIO's pivot to enterprise-only commercial software is its call to make. The open source community, for its part, is moving on.



from It's FOSS https://ift.tt/6Gw3qrE
via IFTTT

Thursday, 23 April 2026

Kubuntu 26.04 LTS Drops X11 Support and Goes All in On Wayland

Kubuntu is one of the longest-running Ubuntu flavors and also one of the more sensible ones to recommend.

It ships the KDE Plasma desktop on top of an Ubuntu base and is maintained by a volunteer team that tracks the KDE release cycle closely and works to get the latest Plasma builds into each release.

If you want the KDE experience without leaving the Ubuntu ecosystem, Kubuntu is the cleanest way to get there.

Anyhow, with Ubuntu 26.04 LTS officially out, Kubuntu 26.04 LTS is also here alongside it, and here's what this release has to offer.

Kubuntu 26.04 LTS: What's Fresh?

[Image: Kubuntu 26.04 LTS desktop with a fastfetch output on the right and the app launcher on the left.]

Same as its Ubuntu base, this Kubuntu release is powered by Linux kernel 7.0, which brings notable hardware support and storage upgrades over the kernel that shipped with Kubuntu 24.04 LTS.

Intel Arc users get considerably more detailed temperature data through the hardware monitoring interface, now covering memory controller, PCIe, and individual VRAM channel readings rather than just a single GPU core temperature.

XFS picks up an autonomous self-healing daemon that watches for metadata failures and I/O errors in real time and kicks off repairs without taking the filesystem offline. Rust support also officially moves from experimental to stable in this kernel.

[Image: Kubuntu 26.04 LTS desktop with the About This System page open.]

For the desktop environment, Plasma 6.6 is included, which offers many improvements.

You get OCR functionality in Spectacle, the screenshot tool; Plasma Setup, a first-run wizard that handles user account creation; and the option to have virtual desktops appear only on your primary screen.

Then there's the removal of the X11 session, which is not installed by default and will not be supported by the Kubuntu team going forward. Wayland is now the only officially supported session, and the plasma-session-x11 package remains available in the Ubuntu archive for anyone who needs it.

A few other changes from the Ubuntu base are also worth knowing about: sudo-rs, the Rust rewrite of the classic sudo tool, is now the default sudo provider in this release.

The NTSYNC kernel driver is included too. It handles WinNT sync primitive emulation at the kernel level rather than pushing that work into user space, which improves performance for Windows games running through Wine or Proton.

NVIDIA laptop users get Dynamic Boost enabled by default, which shifts power automatically between the CPU and GPU based on what the workload demands.

You also get a set of updated applications and tooling:

  • APT 3.2
  • Qt 6.10.2
  • Firefox 149 (Snap)
  • LibreOffice 26.2.2
  • KDE Frameworks 6.24.0
  • KDE Applications 25.12.3

📥 Get Kubuntu 26.04 LTS

You will find this Kubuntu LTS release on the official website and on the Ubuntu release portal. Existing users can follow the upgrade guide for 24.04 to get the release.

If you face any issues, you can ask for help in our forum or the Kubuntu forum.



from It's FOSS https://ift.tt/Va2B4nf
via IFTTT