Thursday, 16 April 2026

Privacy Email Service Tuta Now Also Has Cloud Storage with Quantum-Resistant Encryption

Privacy in 2026 is a bit of a joke. Governments have turned surveillance into standard operating procedure, and Big Tech companies treat your personal data like a free-for-all buffet, helping themselves, then selling the leftovers to data brokers who do the same.

That's pushed people toward privacy-first alternatives, and quite a few companies have stepped up to meet that demand. Tuta is one of the more recognizable names in that space, offering encrypted mail and calendar services to over 10 million users worldwide.

Now, the company is looking to round out its ecosystem with the one piece that's been missing: an encrypted cloud storage solution.

A haven for your files?

Tuta first laid the groundwork for this back in July 2023, when it announced the PQDrive project with backing from the German government. The initiative had received €1.5 million in funding through the KMU-innovativ program, a grant scheme that supports small and medium enterprises in research and development.

The goal was clear from the very beginning: to build a cloud storage service secured with post-quantum encryption, not just conventional algorithms.

To get there, Tuta partnered with the University of Wuppertal, which handled key research tasks including testing cryptographic algorithms and figuring out how to deduplicate encrypted data without punching holes in the security model.

All that effort has now produced a product ready for real-world testing. Starting today, Tuta Drive enters closed beta, with select users receiving early access to put it through its paces ahead of a public release.

It is an end-to-end encrypted cloud storage service that fits directly into Tuta's existing ecosystem alongside mail and calendar. Everything you store gets encrypted without any action needed on your end, and the zero-knowledge architecture means Tuta has no technical ability to read your files or share them with anyone else.

The encryption underpinning Drive is the same TutaCrypt protocol Tuta already uses for its mail service. It combines classical and quantum-resistant algorithms in a hybrid approach, so even if a quantum computer cracks one layer down the line, it still has to contend with the other.
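The core idea of a hybrid construction can be sketched in a few lines. This is a conceptual model only, not TutaCrypt's actual construction; the function name is hypothetical, and a plain hash stands in for a proper KDF:

```python
import hashlib
import os

# Conceptual sketch, NOT TutaCrypt itself: a hybrid scheme derives the
# working key from BOTH a classical and a post-quantum key exchange, so
# an attacker has to break both algorithms to recover the key.
def derive_hybrid_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    # Feed both shared secrets through a KDF; a bare SHA-256 stands in
    # here for a real KDF such as HKDF.
    return hashlib.sha256(classical_secret + pq_secret).digest()

# Two independently negotiated 32-byte shared secrets (simulated).
key = derive_hybrid_key(os.urandom(32), os.urandom(32))
```

Even if, say, the classical exchange is broken by a quantum computer down the line, the derived key still depends on the post-quantum secret the attacker does not have.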

And the service is hosted in Germany, which brings strict GDPR protections into play on top of the technical safeguards.

Arne Möhle, CEO of Tuta, commented on the launch:

With Tuta Drive, we are taking the next step towards offering a full private digital workspace.

Today, more than ten million citizens and businesses, including journalists, whistleblowers and activists use Tuta Mail as an alternative to insecure email offered by mainstream providers.

Adding an encrypted cloud storage to Tuta will enable them to also store their files securely.

Test run

We were given early access to the closed beta ahead of its rollout today, and here's a look at what Tuta Drive is like right now.

The interface is minimal, which is fine. You get a familiar sidebar and a top bar that shows you the server connection status and houses quick-switch buttons for Mail, Contacts, Calendar, and Drive.

First, I uploaded two videos to see how Tuta Drive would handle them. Here, the upload speeds were noticeably slow when connected over a VPN, though that's more or less expected. Without an active VPN connection, file uploads were fast.

Moving those files to a new folder afterward was straightforward using the "Move" option from the right-click context menu. Drag and drop works too, and I could manually select specific files without any issues. Cut and paste for moving files around also worked well.

When uploading multiple files at once, a progress list appears, which is handy. The one catch is that you can't scroll through it to check which file is currently being processed, which was a bummer.

screenshot of tuta drive closed beta showing a long upload progress list on the right

Files are shown with appropriate icons depending on type, so images, videos, and audio all get their own visual treatment. Folders display a cat emoji where the folder size info should probably appear, which looks like a work-in-progress placeholder more than anything else.

many different file types are shown in this screenshot of tuta drive closed beta

If you upload something by mistake or decide a file isn't worth keeping, you can delete it either from the right-click context menu or by hitting Delete on your keyboard. The "Trash" page then gives you the choice to either restore it if it was a wrong call or permanently delete it if you're sure.

That said, folder uploads aren't supported yet, and the keyboard shortcut support is lacking. Ctrl+A to select everything in a folder, for instance, does nothing. No search tool either; those are the kinds of gaps that user feedback tends to sort out quickly.

Given that this is a closed beta, I am confident that the Tuta folks will listen to what people say about their newest offering and act accordingly.


💬 Would you give Tuta Drive a shot, or are you too committed to Proton Drive or other cloud solutions to even look its way?



from It's FOSS https://ift.tt/kZGFoPN
via IFTTT

Can You Identify The Fake Linux Distros From The Real Ones?

Not all distros are created equal.

In fact, not all distros are created at all.

This quiz is simple. You'll be presented with a few Linux distros and their details. The twist is that they might not be a real thing. They could just be a figment of my imagination.

Of course, this is valid only at the time I created this quiz. The way things move in the Linux world, some new distros could come up right after I publish it 😃

🚧
Some browsers block the JavaScript-based quiz units. Disable your ad blocker to enjoy the quizzes and puzzles.


from It's FOSS https://ift.tt/Pn3Yb9V
via IFTTT

Oh No! Now A Federal Bill Wants OS-Level Age Verification for Everyone in the USA

The U.S. has been quietly building up a set of state-level laws that push operating system providers into the age verification plague.

California's AB 1043, signed in October 2025, requires OS providers to collect age data at account setup and pipe it to apps through a real-time API. It kicks in on January 1, 2027.

Colorado is working on something nearly identical. SB26-051 (which we covered when it was still a proposal) passed the state Senate 28-7 on March 3, 2026, and is now waiting on a House vote to become law there too.

However, these are just state-level laws. A new federal bill, H.R.8250, introduced on April 13, 2026, by Rep. Josh Gottheimer, with Rep. Elise M. Stefanik signing on as cosponsor, has us intrigued.

a cropped screenshot of the congress.gov website that shows the proposed h.r.8250 bill

The official title of the bill reads, "To require operating system providers to verify the age of any user of an operating system, and for other purposes." But that's a mouthful; the short version is "Parents Decide Act."

If you go by the full title, the bill is pretty self-explanatory: it would require every operating system provider to verify the age of anyone using its OS, and, vaguely enough, do so for any "other purposes."

It has been referred to the House Committee on Energy and Commerce and currently sits at step one (Introduced) of five in the legislative process. No bill text has been published; there's no summary, no subject tags, and no related bills attached to it.

That means right now, the only thing formally known about H.R.8250 is its title, its sponsors, and where it got sent.

But wait, do you… 👇

Want more details?

this cropped screenshot shows a blog titled, "release: gottheimer announces bipartisan "parents decide act" to protect kids online."

Gottheimer's office published a press release on April 2, 2026, announcing the bill 11 days before it was formally introduced. That press release was unavailable for a while, but it is now back up.

According to the announcement, the bill would require OS developers to verify user age at device setup, allow parents to set content controls right there, and have those settings flow through to apps and platforms on the device.

Apple and Google were the companies Gottheimer named as the intended targets, with the framing centered entirely around phones and tablets.

But here's where it gets interesting for anyone outside the Apple and Google ecosystem. Gottheimer's press release framed this entirely around commercial mobile platforms. The official bill title, as you saw earlier, does not.

If the bill text matches the breadth of that title, Linux distributions and other open source operating platforms would sit squarely within its scope. And a federal bill passing would mean one nationwide compliance requirement replacing the current state-by-state situation.

The representative also highlighted several groups voicing support for the bill.

Evidently, things are getting more absurd with each passing day, and I can't wait for the day when access to anything electronic is locked behind a gate, guarded by the most decent and righteous upholders of the law. /s


💬 If you are looking for a conversation surrounding this, our forum is the place to be!



from It's FOSS https://ift.tt/t0nKs3V
via IFTTT

Wednesday, 15 April 2026

A PHP Dev Just Solved a 20+ Year-Old KDE Plasma Problem No One Else Would

Back in 2005, a bug report was filed by Kjetil Kjernsmo, then running KDE 3.3.2 on Debian Stable. He wanted the ability to have each connected screen show a different virtual desktop independently, rather than having all displays switch as one unit.

Over the years, over 15 duplicate reports piled onto the original as more people ran into the same wall. And that's not a surprise, because multi-monitor setups have become increasingly common.

The technical reason this issue stayed open for so long comes down to X11. Implementing the feature there would have required violating the EWMH specification, which has no concept of multiple virtual desktops being active at the same time.

The KWin maintainer Martin Flöser had said as much in 2013, effectively ruling it out for the entire KDE 4.x series. The only realistic path was through Wayland, and that path needed someone willing to actually walk it.

Someone finally did. The feature has now landed in KWin's master branch and is slated for introduction in Plasma 6.7.

How was this accomplished?

Video courtesy of Hynek Schlindenbuch.

The merge request was opened by Hynek Schlindenbuch, a developer with no prior KDE contributions.

Each screen now independently tracks which virtual desktop it is showing. Any desktop can appear on any screen, and the same one can be shown on multiple screens at once. Windows belong to a specific screen, even if they visually span two, and can be assigned to one or more virtual desktops.

A window stays visible when its screen is showing one of those desktops. Keyboard shortcuts only switch the desktop on the currently active screen, not across all of them at once.

Unlike Hyprland, switching to a desktop does not pull focus to that desktop's screen. Hynek made that choice deliberately.

VirtualDesktopManager tracks the current desktop separately for each output, and switching all screens together remains the default, with per-output switching available as an opt-in via settings.
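A toy model of that per-output bookkeeping might look like the following. The class and method names are illustrative, not KWin's actual C++ API, but the shape of the state is the same: one current desktop per output, with all-screens switching as the default and per-output switching as an opt-in:

```python
# Toy model of per-screen virtual desktops (illustrative names only,
# not KWin's real API).
class DesktopManager:
    def __init__(self, outputs, desktops, per_output=False):
        self.per_output = per_output  # opt-in, mirroring the KWin default
        # Each output independently tracks which desktop it is showing.
        self.current = {out: desktops[0] for out in outputs}

    def switch(self, output, desktop):
        if self.per_output:
            # Only the given screen changes desktop.
            self.current[output] = desktop
        else:
            # Default behavior: all screens switch together.
            for out in self.current:
                self.current[out] = desktop

mgr = DesktopManager(["DP-1", "HDMI-1"], ["one", "two"], per_output=True)
mgr.switch("DP-1", "two")
# DP-1 now shows desktop "two" while HDMI-1 stays on "one".
```

With `per_output=False`, the same `switch` call would move both screens at once, which is exactly the long-standing behavior the original bug report complained about.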

Keep in mind that this fix is Wayland only. X11 was left out intentionally since it relies on the EWMH protocol, and with X11 support being dropped in Plasma 6.8 anyway, that is a less significant shortcoming than it sounds.

If you were curious about Hynek, he is a full-time PHP programmer with over six years of experience. His C++ background going into this project was minimal, and he had no experience with Qt or CMake and had only set up KDE Plasma on an old laptop a few months before opening the merge request.

The motivation for this was his plan to move to Wayland for fractional scaling support, but the missing per-screen desktop functionality was blocking his switch to Plasma.

See how a lone open source developer's initiative changes things for the rest of us? 🙃



from It's FOSS https://ift.tt/3T8kUSy
via IFTTT

Monday, 13 April 2026

An Open Source Dev Has Put Together a Fix for AMD GPU's VRAM Mismanagement on Linux

Natalie Vock (pixelcluster), a developer who works on low-level Linux code and as an independent contractor for Valve, has published a fix for a VRAM management problem that has been making life difficult for Linux gamers on AMD GPUs with 8GB of VRAM or less.

She has put together a combination of kernel patches and userspace utilities that stop background apps from stealing VRAM away from whatever game you're playing.

The underlying issue is that when VRAM runs out, the kernel driver has no way to tell which memory matters more. A game and a browser tab look identical from the driver's perspective, so when something has to give, game memory often takes the hit.

It then ends up in GTT, a chunk of system RAM that the GPU can access, but over the PCIe bus rather than directly.

The fix is built on the dmem cgroup controller that she co-developed with Maarten Lankhorst from Intel and Maxime Ripard from Red Hat. It is already in the mainline Linux kernel, and it lets the driver treat foreground apps as higher priority when handing out VRAM.

That alone was not enough, though. Natalie has also written six kernel patches to fix a specific gap where VRAM pressure would cause new memory allocations to skip those protections entirely and end up in GTT anyway.

Two userspace utilities handle the rest: dmemcg-booster sets up the groundwork so the kernel protections actually activate, and a fork of KDE Plasma's Foreground Booster keeps track of which app is in the foreground so it gets first dibs on VRAM.

What this means for Linux gamers

Instead of performance slowly degrading over a session, games should now hold steady for as long as their own VRAM usage stays within budget. Natalie notes that most modern titles tend to stay within 8GB, so owners of 8GB GPUs should be in a much better spot with today's games.

While this applies to any GPU running the amdgpu driver, Intel GPUs on the xe driver have the necessary kernel support too, though real-world testing there is still pending.

Additionally, the developer has submitted a patch for nouveau, the open source NVIDIA driver.

How to get it

🚧
The developer warns that things could break if you install the patches. Proceed with caution, especially on production machines.

The six kernel patches are not in the mainline kernel, so getting them requires some extra steps depending on your setup. CachyOS users on Linux 7.0rc7-2 or later are already covered.

On other Arch-based distros, both utilities are in the AUR. For the kernel side, you can either pull the CachyOS kernel package from the repository or install linux-dmemcg from the AUR, which compiles Natalie's development branch.

The six patch files are also linked directly in the announcement blog for anyone who wants to apply them to a custom kernel build.

For those not on an Arch-based system, the realistic options are applying the patches manually to a self-compiled kernel or waiting for your distro to pick them up. Natalie has said her post will be updated if and when the work gets packaged by other distributions.


Suggested Read 📖: The Linux 7.0 Release is Here!



from It's FOSS https://ift.tt/HQyImrq
via IFTTT

AI Code Gets Approved in the Linux Kernel… But With Strings Attached

The Linux kernel project has spent quite some time navigating the use of AI tools, and the response usually has been somewhere between "figure it out yourself" and "we'll get back to you."

Late last year, at the 2025 Maintainers Summit, Sasha Levin pushed for some documented consensus. What came out of it: human accountability for patches is non-negotiable, purely machine-generated submissions are not welcome, and tool use must be disclosed.

He promised to put something in writing without committing to enforce it, and that work has now shipped with Linux 7.0.

What is it?

The new document is called AI Coding Assistants and lives in the kernel's process docs alongside the rest of the contribution guidelines. The short version is that AI-assisted contributions still need to comply with GPL-2.0-only; AI agents cannot add Signed-off-by tags; and patches that had AI help should carry an "Assisted-by" tag.

The Developer Certificate of Origin (DCO) exists precisely so that a human is accountable for every patch. AI assistance does not change that hard requirement.

Basically, the human submitter reviews everything the AI produced, confirms it meets licensing requirements, and puts their own name on it with an appropriate mention that AI was used.

The Assisted-by tag format is Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2], where one or more tools can be listed. The document gives Assisted-by: Claude:claude-3-opus coccinelle sparse as an example.
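In practice, the tag sits alongside the usual trailers at the end of a commit message. This is a hypothetical patch for illustration; only the Assisted-by line is taken from the document's example:

```
mm: fix example off-by-one in boundary check

(commit description goes here)

Signed-off-by: Jane Developer <jane@example.com>
Assisted-by: Claude:claude-3-opus coccinelle sparse
```

The Signed-off-by line still has to come from the human submitter; the AI agent only gets credited via the Assisted-by trailer.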

Back then, Linus was not even convinced a dedicated tag was necessary and suggested the changelog body would do the job. But now, the kernel community seems to have settled on the tag anyway.

It's already in use

We covered this earlier in the week, but Greg Kroah-Hartman (GKH) seems to have had AI-assisted fuzzing running in his kernel tree for a while now, in a branch called "clanker." He started with the ksmbd and SMB code, found some potential issues, and submitted fixes with a note telling reviewers to verify everything independently before trusting any of it.

That is just about the workflow the new policy was written around. AI surfaces issues, a human with decades of kernel experience decides what is real, writes the fix, and takes responsibility. GKH being the one doing it is not a surprise given he is the stable kernel maintainer and has probably dealt with more bad patches than the others.

Other projects have gone in a different direction. Gentoo banned AI-generated contributions entirely in 2024, with its council citing copyright risk, code quality, and ethical concerns.

NetBSD's commit guidelines put LLM-generated code in the "tainted code" category, requiring written approval from the core developers before any of it goes in.

In contrast, Linux is not banning anything. Whether that turns out to be the sensible call or just a lenient one will depend on how seriously people actually take the "a human reviewed this" part.


Suggested Read 📖: Is a Clanker Being Used in Linux Development?



from It's FOSS https://ift.tt/mkPCbEt
via IFTTT

Sunday, 12 April 2026

Linux Kernel 7.0 is Out With Improvements Across the Board for Intel, AMD, and Storage

The development of the Linux kernel moves fast, and the 7.0 release is no exception. Around the same time as this release, a patch queued for Linux 7.1 has kicked off what will eventually be the end of i486 CPU support in the kernel.

But that's a story for another time. For now, let's focus on what Linux 7.0 brings to the table.

Head penguin Linus Torvalds had this to say about the release:

The last week of the release continued the same "lots of small fixes" trend, but it all really does seem pretty benign, so I've tagged the final 7.0 and pushed it out.

I suspect it's a lot of AI tool use that will keep finding corner cases for us for a while, so this may be the "new normal" at least for a while. Only time will tell.

This coverage is based on the detailed reporting from Phoronix.

Linux Kernel 7.0: What's New?

The release is here, and before getting into the improvements, there is one thing worth getting out of the way first.

This is not a long-term support release. If your priority is stability and extended maintenance, this is not the kernel to land on. Instead, you could opt for Linux kernel 6.18, which is supported until December 2028.

Intel Upgrades

Linux 6.19 already added audio support for Intel Nova Lake S, but the standard Nova Lake (NVL) variant was left out. That's fixed in 7.0, and the difference between the two in terms of specs is mainly in core count (4 vs. 2).

Intel Arc users get something useful too. The Xe driver now exposes a lot more temperature data through the HWMON interface. Previously you got a single GPU core reading; now you get shutdown, critical, and max temperature limits, plus memory controller, PCIe, and individual vRAM channel temperatures.

Panther Lake also gets GSC firmware loading and Protected Xe Path (PXP) support.

And lastly, Diamond Rapids (the upcoming Xeon successor to Granite Rapids) gets NTB driver support, which handles high-speed data transfers between separate systems over PCIe. It is expected to be helpful for distributed storage and cluster setups.

AMD Refinements

While the Zen 6 series of CPUs is still a while out, the kernel is already getting ready for it. Linux 7.0 merges perf events and metrics support for AMD Zen 6, covering performance counters for branch prediction, L1 and L2 cache activity, TLB activity, and uncore events like UMC command activity.

All of that is mainly useful for developers and admins doing performance profiling ahead of launch, and not something the average user will notice.

For virtualization, KVM picks up support for AMD ERAPS (Enhanced Return Address Predictor Security), a Zen 5 security feature. In VM scenarios, this bumps the Return Stack Buffer from 32 to 64 entries, letting guests make full use of the larger RSB.

AMD is also laying the groundwork for next-gen GPU hardware in 7.0, enabling new graphics IP blocks for what looks like an upcoming RDNA 4 successor and another RDNA 3.5 variant.

There are also hints of deeper NPU integration with future Radeon hardware, but AMD hasn't announced anything yet, so exact product details remain a mystery for now.

Better Storage Handling

XFS gets one of the more interesting additions of this release: autonomous self-healing. A new xfs_healer daemon, managed by systemd, watches for metadata failures and I/O errors in real time and triggers repairs automatically while the filesystem stays mounted.

Btrfs picks up direct I/O support for block sizes larger than the kernel page size, falling back to buffered I/O when the data profile has duplication. There's also an experimental remap-tree feature, which introduces a translation layer for logical block addresses that lets the filesystem handle relocations and copy-on-write operations without physically moving or rewriting blocks.
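As a rough mental model of the remap-tree idea (purely illustrative, not Btrfs's actual on-disk format or API), relocation becomes a map update instead of a data rewrite:

```python
# Toy model of a logical-to-physical remap layer (illustrative only,
# not Btrfs's real remap-tree structure).
class RemapTree:
    def __init__(self):
        self.remap = {}  # logical address -> new physical address

    def relocate(self, logical, new_physical):
        # Relocation just updates the map; the block data itself is
        # never physically moved or rewritten.
        self.remap[logical] = new_physical

    def resolve(self, logical):
        # Addresses without a remap entry fall through unchanged.
        return self.remap.get(logical, logical)

tree = RemapTree()
tree.relocate(4096, 1_048_576)
```

The win is that operations like balancing or copy-on-write only need to touch the translation layer, while every existing reference to the logical address keeps working.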

EXT4 sees better write performance for concurrent direct I/O writes to multiple files by deferring the splitting of unwritten extents to I/O completion. It also avoids unnecessary cache invalidation and forced ordered writes when appending with delayed allocation.

Miscellaneous Changes

Wrapping up this section, we have some other notable changes that made it into this release:

  • RISC-V gains user-space control-flow integrity (CFI) support.
  • WiFi 8 Ultra-High Reliability (UHR) groundwork lands in the networking stack.
  • Security bug report documentation gets an overhaul to help AI tools send more actionable reports.
  • Rust support is officially no longer experimental, with the kernel team formally declaring it is here to stay.
  • ASUS motherboards, including the Pro WS TRX50-SAGE WIFI A and ROG MAXIMUS X HERO, now have working sensor support.

Installing Linux Kernel 7.0

As always, those on rolling distros like Arch Linux, as well as Fedora and its derivatives, will get this new release very soon. If you are on a distro like Debian, Linux Mint, Ubuntu, or MX Linux, you will most likely not receive this upgrade through regular updates.

If that doesn't work for you, you can always install the latest mainline Linux kernel on your Ubuntu setup. This goes without saying, but it is risky; if you end up borking your system, we are not to blame for it.



from It's FOSS https://ift.tt/0I2kviY
via IFTTT