Friday, 17 April 2026

Cal.com Goes Closed Source Because "AI Can Easily Exploit Open Source Software"

AI has been a mixed bag for the open source world. Some developers are using it to write code faster, catch bugs, and review patches more efficiently. Others are watching the same tools get turned against the codebases they maintain.

Cal.com, a popular open source scheduling platform and one of the more well-known self-hostable alternatives to Calendly, has found itself in the second camp. After five years as an open source project, the company has announced that it is switching to a closed-source model, citing the growing threat of AI-powered vulnerability scanning.

What happened?

Cal.com co-founder Bailey Pumfleet has explained why the company went down this path, saying that AI has changed what it takes to exploit an application. Previously, finding vulnerabilities required real expertise and a serious time investment.

But today, an AI model can be pointed at a public repo and do the same job systematically, without needing much manual labor.

He also cited a specific case to back this up, where AI tooling reportedly found a 27-year-old vulnerability in the BSD kernel and had working exploits ready within hours.

📋
I think Bailey has misattributed this occurrence, as the 27-year-old bug was found in OpenBSD, thanks to Claude Mythos, and has since been patched.

But, yeah, closed source it is. 😅

Another thing worth knowing is that the production codebase had already been drifting away from what was publicly available. Core systems like authentication and data handling had both gone through significant rewrites, making the public repo and what actually runs in production two fairly different things by the time this announcement came.

Does it make sense?

Cal.com isn't wrong that AI can be used to hunt for vulnerabilities in open source code. That's documented and real. But the provided argument treats AI purely as an attacker's tool, which is a selective reading of the situation.

Take the Linux kernel, for example. We recently covered how Greg Kroah-Hartman, the Linux stable kernel maintainer, has been running what looks like AI-assisted fuzzing on the kernel through a branch he calls "clanker," using it to identify bugs and patch them proactively.

There's even an official policy in place that governs the use of such AI tools for contributions.

Then there's the older argument that closing your source doesn't actually make you more secure. It just means fewer eyes on the code. Open source projects benefit from anyone, anywhere, being able to spot and report problems.

Heartbleed and Log4Shell were both found by external researchers precisely because the code was auditable. This just shows us that a private codebase doesn't prevent vulnerabilities; it just reduces the chances of catching them before someone with bad intentions does.

What's next?

For self-hosters and developers, Cal.diy is what's on offer. It's available now under the MIT license, with the documentation covering installation via Docker, Vercel, Railway, Render, and a handful of other platforms.

The project is described as "strictly recommended for personal, non-production use," with a "use at your own risk" disclaimer throughout. It is community-maintained, with no official backing from Cal.com.

Feature-wise, Cal.diy covers the personal scheduling essentials like event types, calendar integrations, video conferencing, webhooks, and API access.

But a fair bit is missing. Teams, Organizations, SAML SSO, SCIM directory sync, Workflows, Routing Forms, and the Insights Dashboard are all absent from the community edition.

If you're running Cal.com for anything commercial, the Cal.diy documentation steers you back to the paid product pretty explicitly, saying that "for any commercial and enterprise-ready scheduling infrastructure, use Cal.com."

All of that made me wonder whether AI was the catalyst or just the perfect scapegoat for a closed-source transition. Anyway, I like yapping like this every so often; don't mind me.




Russian Baikal CPUs Are Losing Their Place in the Linux Kernel

Support for Russian Baikal CPUs is being pulled from the Linux kernel. Work has begun in the Linux 7.1 cycle to remove driver code and device tree bindings for Baikal SoC hardware, with more patches already lined up to follow.

The first removal came with the ATA pull for Linux 7.1-rc1, merged by Linus Torvalds on April 15. It dropped the Baikal bt1-ahci DT binding and stripped Baikal-specific code from the ahci_dwc driver, with the ATA maintainer, Niklas Cassel, noting that upstreaming for the SoC "is not going to be finalized."

[Screenshot: the Linux kernel archive searched for "baikal", listing the related removal changes.]
You can browse the LKML to track Baikal's removal.

Furthermore, the code had been sitting unmaintained for some time. Serge Semin, who contributed the bulk of Baikal's kernel support over the years, was among roughly a dozen Russian developers removed from the kernel MAINTAINERS file in 2024.

With no one left to maintain it and the hardware itself rare even within Russia, there appears to be no rationale for keeping the code around.

Some background info

The Baikal line of CPUs is the work of Baikal Electronics, which was founded in January 2012 as a spinoff of T-Platforms, a Russian supercomputer company.

It started with a MIPS-based chip for embedded applications, then pivoted to ARM for its later processors, all manufactured at TSMC. The plan was to supply Russian state-owned enterprises with domestically produced CPUs as an alternative to Intel and AMD.

But Russia's 2022 invasion of Ukraine ended that. Sanctions cut off TSMC access, 150,000 Baikal-M units already manufactured were seized in Taiwan, and ARM production licenses were lost. The company filed for bankruptcy in August 2023.

It did not stay down. By the end of 2024, Baikal had shipped a total of 85,000 processors since its founding, and in September 2025 it began serial production of the Baikal-U1000, a RISC-V microcontroller (source in Russian).

The current lineup consists of the Baikal-T (MIPS), Baikal-M and Baikal-S (ARM), and the Baikal-U (RISC-V).

Those already running Linux on Baikal hardware will need to stay on Linux 6.18 LTS or earlier, as newer kernel versions are dropping the support.


Suggested Read 📖: The Linux Kernel is Finally Letting Go of i486 CPU Support




Thursday, 16 April 2026

Privacy Email Service Tuta Now Also Has Cloud Storage with Quantum-Resistant Encryption

Privacy in 2026 is a bit of a joke. Governments have turned surveillance into standard operating procedure, and Big Tech companies treat your personal data like a free-for-all buffet, helping themselves, then selling the leftovers to data brokers who do the same.

That's pushed people toward privacy-first alternatives, and quite a few companies have stepped up to meet that demand. Tuta is one of the more recognizable names in that space, offering encrypted mail and calendar services to over 10 million users worldwide.

Now, the company is looking to round out its ecosystem with the one piece that's been missing: an encrypted cloud storage solution.

A haven for your files?

Tuta first laid the groundwork for this back in July 2023, when it announced the PQDrive project with backing from the German government. The initiative had received €1.5 million in funding through the KMU-innovativ program, a grant scheme that supports small and medium enterprises in research and development.

The goal was clear from the very beginning. It was to build a cloud storage service secured with post-quantum encryption, not just conventional algorithms.

To get there, Tuta partnered with the University of Wuppertal, which handled key research tasks including testing cryptographic algorithms and figuring out how to deduplicate encrypted data without punching holes in the security model.

All that effort has now produced a product ready for real-world testing. Starting today, Tuta Drive enters closed beta, with select users receiving early access to put it through its paces ahead of a public release.

It is an end-to-end encrypted cloud storage service that fits directly into Tuta's existing ecosystem alongside mail and calendar. Everything you store gets encrypted without any action needed on your end, and the zero-knowledge architecture means Tuta has no technical ability to read your files or share them with anyone else.

The encryption underpinning Drive is the same TutaCrypt protocol Tuta already uses for its mail service. It combines classical and quantum-resistant algorithms in a hybrid approach, so even if a quantum computer cracks one layer down the line, it still has to contend with the other.
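If you're curious what that hybrid approach looks like in practice, here is a minimal Python sketch of the general pattern (to be clear, this is not Tuta's TutaCrypt code): a classical X25519 exchange and a stand-in for a post-quantum KEM secret both feed a single key derivation step, so an attacker would have to break both layers. The post-quantum secret is mocked with random bytes purely for illustration.

    # Illustrative only: the general hybrid (classical + post-quantum) pattern,
    # NOT Tuta's TutaCrypt implementation. Requires the 'cryptography' package.
    import os

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Classical part: an X25519 Diffie-Hellman exchange.
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()
    classical_secret = alice.exchange(bob.public_key())

    # Post-quantum part: placeholder for a shared secret from an ML-KEM/Kyber
    # style key encapsulation mechanism (mocked here with random bytes).
    pq_secret = os.urandom(32)

    # Hybrid derivation: both secrets feed a single KDF, so breaking only the
    # classical or only the post-quantum layer is not enough to get the key.
    file_key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"hybrid-demo",
    ).derive(classical_secret + pq_secret)

    print(file_key.hex())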

And, the service is hosted in Germany, which brings strict GDPR protections into play on top of the technical safeguards.

Arne Möhle, CEO of Tuta, announced the launch with the following comment:

With Tuta Drive, we are taking the next step towards offering a full private digital workspace.

Today, more than ten million citizens and businesses, including journalists, whistleblowers and activists use Tuta Mail as an alternative to insecure email offered by mainstream providers.

Adding an encrypted cloud storage to Tuta will enable them to also store their files securely.

Test run

We were given early access to the closed beta ahead of its rollout today, and here's a look at what Tuta Drive is like right now.

The interface is minimal, which is fine. You get a familiar sidebar and a top bar that shows you the server connection status and houses quick-switch buttons for Mail, Contacts, Calendar, and Drive.

First, I uploaded two videos to see how Tuta Drive would handle them. Here, the upload speeds were noticeably slow when connected over a VPN, though that's more or less expected. Without an active VPN connection, file uploads were fast.

Moving those files to a new folder afterward was straightforward using the "Move" option from the right-click context menu. Drag and drop works too, and I could manually select specific files without any issues. Cut and paste for moving files around also worked well.

When uploading multiple files at once, a progress list appears, which is handy. The one catch is that you can't scroll through it to check which file is currently being processed, which was a bummer.

[Screenshot: Tuta Drive closed beta with a long upload progress list on the right.]

Files are shown with appropriate icons depending on type, so images, videos, and audio all get their own visual treatment. Folders display a cat emoji where the folder size info should probably appear, which looks like a work-in-progress placeholder more than anything else.

[Screenshot: Tuta Drive closed beta showing many different file types, each with its own icon.]

If you upload something by mistake or decide a file isn't worth keeping, you can delete it right away, either from the right-click context menu or by hitting Delete on your keyboard. The "Trash" page then gives you the choice to either restore it if it was a wrong call or permanently delete it if you're sure.

That said, folder uploads aren't supported yet, and the keyboard shortcut support is lacking. Ctrl+A to select everything in a folder, for instance, does nothing. No search tool either; those are the kinds of gaps that user feedback tends to sort out quickly.

Seeing that this is a closed beta, I am confident that the Tuta folks will listen to what people say about their newest offering and act accordingly.


💬 Would you give Tuta Drive a shot, or are you too committed to Proton Drive or other cloud solutions to even look its way?




Can You Identify The Fake Linux Distros From The Real Ones?

Not all distros are created equal.

In fact, not all distros are created at all.

This quiz is simple. You'll be presented with a few Linux distros and their details. The twist is that they might not be a real thing. They could just be a figment of my imagination.

Of course, this is valid only as of the time I created this quiz. The way things move in the Linux world, some new distro could pop up right after I publish it 😃

🚧
Some browsers and ad blockers block the JavaScript-based quiz. Disable your ad blocker to enjoy the quizzes and puzzles.



Oh No! Now A Federal Bill Wants OS-Level Age Verification for Everyone in the USA

The U.S. has been quietly building up a set of state-level laws that push operating system providers into the age verification plague.

California's AB 1043, signed in October 2025, requires OS providers to collect age data at account setup and pipe it to apps through a real-time API. It kicks in on January 1, 2027.

Colorado is working on something nearly identical. SB26-051 (which we covered when it was still a proposal) passed the state Senate 28-7 on March 3, 2026, and is now waiting on a House vote to become law there too.

However, these are just state-level laws. A new federal bill, H.R.8250, introduced on April 13, 2026, by Rep. Josh Gottheimer, with Rep. Elise M. Stefanik signing on as cosponsor, has us intrigued.

[Screenshot: the congress.gov entry for the proposed H.R.8250 bill.]

The official title of the bill reads, "To require operating system providers to verify the age of any user of an operating system, and for other purposes." But that's a mouthful; the short version is "Parents Decide Act."

If you go by the full title, the bill is pretty self-explanatory: it would require every operating system provider to verify the age of anyone who wants to use its OS, and, vaguely enough, do so "for other purposes."

It has been referred to the House Committee on Energy and Commerce and currently sits at step one (Introduced) of five in the legislative process. No bill text has been published; there's no summary, no subject tags, and no related bills attached to it.

That means right now, the only thing formally known about H.R.8250 is its title, its sponsors, and where it got sent.

But wait, do you… 👇

Want more details?

[Screenshot: a press release titled "RELEASE: Gottheimer Announces Bipartisan 'Parents Decide Act' to Protect Kids Online."]

Gottheimer's office published a press release on April 2, 2026, announcing the bill 11 days before it was formally introduced. That press release was unavailable for a while, but it is now back up.

According to the announcement, the bill would require OS developers to verify user age at device setup, allow parents to set content controls right there, and have those settings flow through to apps and platforms on the device.

Apple and Google were the companies Gottheimer named as the intended targets, with the framing centered entirely around phones and tablets.

But here's where it gets interesting for anyone outside the Apple and Google ecosystem: the press release confines its framing to commercial mobile platforms, while the official bill title, as you saw earlier, does not.

If the bill text matches the breadth of that title, Linux distributions and other open source operating platforms would sit squarely within its scope. And a federal bill passing would mean one nationwide compliance requirement replacing the current state-by-state situation.

The representative also pointed to several groups that have voiced support for the bill.

Evidently, things are getting more absurd with each passing day, and I can't wait for the day when access to anything electronic is locked behind a gate, guarded by the most decent and righteous upholders of the law. /s


💬 If you are looking for a conversation surrounding this, our forum is the place to be!




Wednesday, 15 April 2026

A PHP Dev Just Solved a 20+ Year-Old KDE Plasma Problem No One Else Would

Back in 2005, a bug report was filed by Kjetil Kjernsmo, then running KDE 3.3.2 on Debian Stable. He wanted the ability to have each connected screen show a different virtual desktop independently, rather than having all displays switch as one unit.

Over the years, more than 15 duplicate reports piled onto the original as more people ran into the same wall. And that's not a surprise, because multi-monitor setups have become increasingly common.

The technical reason why this issue stayed open this long comes down to X11. Implementing it there would have required violating the EWMH specification, which has no concept of multiple virtual desktops being active at the same time.

The KWin maintainer Martin Flöser had said as much in 2013, effectively ruling it out for the entire KDE 4.x series. The only realistic path was through Wayland, and that path needed someone willing to actually walk it.

Someone finally did. The feature has now landed in KWin's master branch and is set for a Plasma 6.7 introduction.

How was this accomplished?

Video courtesy of Hynek Schlindenbuch.

The merge request was opened by Hynek Schlindenbuch, a developer with no prior KDE contributions.

Each screen now independently tracks which virtual desktop it is showing. Any desktop can appear on any screen, and the same one can be shown on multiple screens at once. Windows belong to a specific screen, even if they visually span two, and can be assigned to one or more virtual desktops.

A window stays visible when its screen is showing one of those desktops. Keyboard shortcuts only switch the desktop on the currently active screen, not across all of them at once.

Unlike Hyprland, switching to a desktop does not pull focus to that desktop's screen. Hynek made that choice deliberately.

VirtualDesktopManager tracks the current desktop separately for each output, and switching all screens together remains the default, with per-output switching available as an opt-in via settings.
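As a rough mental model of that behaviour, here is a toy Python sketch (not KWin's actual C++ code; the class and names are invented for illustration). Per-output tracking boils down to a map from output to current desktop, with a switch applied either to one screen or to all of them depending on the opt-in setting.

    # Toy model of per-output virtual desktop tracking; not KWin code.
    class PerOutputDesktops:
        def __init__(self, outputs, desktops, per_output_switching=False):
            self.per_output_switching = per_output_switching  # opt-in, off by default
            # Every output starts out showing the first desktop.
            self.current = {out: desktops[0] for out in outputs}

        def switch(self, desktop, active_output):
            if self.per_output_switching:
                # Only the active screen changes; the others keep their desktop,
                # and focus stays put (unlike Hyprland's behaviour).
                self.current[active_output] = desktop
            else:
                # Default: all screens switch together, as before.
                for out in self.current:
                    self.current[out] = desktop

        def window_visible(self, window_desktops, window_output):
            # A window is visible when its own screen shows one of its desktops.
            return self.current[window_output] in window_desktops

    screens = PerOutputDesktops(["DP-1", "HDMI-A-1"], ["Work", "Chat", "Media"],
                                per_output_switching=True)
    screens.switch("Chat", active_output="DP-1")
    print(screens.current)                           # DP-1 on Chat, HDMI-A-1 on Work
    print(screens.window_visible({"Chat"}, "DP-1"))  # True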

Keep in mind that this fix is Wayland only. X11 was left out intentionally since it relies on the EWMH protocol, and with X11 support being dropped in Plasma 6.8 anyway, that is a less significant shortcoming than it sounds.

If you were curious about Hynek, he is a full-time PHP programmer with over six years of experience. His C++ background going into this project was minimal, and he had no experience with Qt or CMake and had only set up KDE Plasma on an old laptop a few months before opening the merge request.

The motivation was his plan to move to Wayland for fractional scaling support, but the missing per-screen desktop functionality was blocking that switch on Plasma.

See how a lone open source developer's initiative changes things for the rest of us? 🙃




Monday, 13 April 2026

An Open Source Dev Has Put Together a Fix for AMD GPU VRAM Mismanagement on Linux

Natalie Vock (pixelcluster), a developer who works on low-level Linux code and as an independent contractor for Valve, has published a fix for a VRAM management problem that has been making life difficult for Linux gamers on AMD GPUs with 8GB of VRAM or less.

She has put together a combination of kernel patches and userspace utilities that stop background apps from stealing VRAM away from whatever game you're playing.

The underlying issue is that when VRAM runs out, the kernel driver has no way to tell which memory matters more. A game and a browser tab look identical from the driver's perspective, so when something has to give, game memory often takes the hit.

It then ends up in GTT, a chunk of system RAM that the GPU can access, but over the PCIe bus rather than directly.

The fix is built on the dmem cgroup controller that she co-developed with Maarten Lankhorst from Intel and Maxime Ripard from Red Hat. It is already in the mainline Linux kernel, and it lets the driver treat foreground apps as higher priority when handing out VRAM.

That alone was not enough, though. Natalie has also written six kernel patches to fix a specific gap where VRAM pressure would cause new memory allocations to skip those protections entirely and end up in GTT anyway.

Two userspace utilities handle the rest: dmemcg-booster sets up the groundwork so the kernel protections actually activate, and a fork of KDE Plasma's Foreground Booster keeps track of which app is in the foreground so it gets first dibs on VRAM.
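Conceptually, the booster side amounts to putting the game into its own cgroup and giving that cgroup a VRAM reservation via the dmem controller. The sketch below is only an assumption-laden illustration of that idea, not dmemcg-booster's actual code; the dmem.low file name and the region key format are taken from my reading of the upstream dmem cgroup documentation, so verify them against your kernel's docs before trying anything like this, and note that it needs root.

    # Rough illustration only; NOT Natalie's dmemcg-booster. File names such as
    # dmem.low and the "drm/<device>/vram0" region key are assumptions based on
    # the upstream dmem cgroup docs. Needs a cgroup v2 mount and root privileges.
    from pathlib import Path

    CGROUP_ROOT = Path("/sys/fs/cgroup")
    GAME_CGROUP = CGROUP_ROOT / "game.scope"

    def protect_game_vram(game_pid, region="drm/0000:03:00.0/vram0",
                          low_bytes=6 * 1024 ** 3):
        # Enable the dmem controller for child cgroups of the root.
        (CGROUP_ROOT / "cgroup.subtree_control").write_text("+dmem")
        GAME_CGROUP.mkdir(exist_ok=True)
        # Best-effort protection: try to keep ~6 GiB of this region in VRAM.
        (GAME_CGROUP / "dmem.low").write_text(f"{region} {low_bytes}")
        # Move the game process into the protected cgroup.
        (GAME_CGROUP / "cgroup.procs").write_text(str(game_pid))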

What this means for Linux gamers

Instead of performance slowly degrading over a session, games should now hold steady for as long as their own VRAM usage stays within budget. Natalie notes that most modern titles tend to stay within 8GB, so owners of 8GB GPUs should be in a much better spot with today's games.

While this applies to any GPU running the amdgpu driver, Intel GPUs on the xe driver have the necessary kernel support too, though real-world testing there is still pending.

Additionally, the developer has submitted a patch for nouveau, the open source NVIDIA driver.

How to get it

🚧
The developer warns that things could break if you install the patches. Proceed with caution, especially on production machines.

The six kernel patches are not in the mainline kernel, so getting them requires some extra steps depending on your setup. CachyOS users on Linux 7.0rc7-2 or later are already covered.

On other Arch-based distros, both utilities are in the AUR. For the kernel side, you can either pull the CachyOS kernel package from the repository or install linux-dmemcg from the AUR, which compiles Natalie's development branch.

The six patch files are also linked directly in the announcement blog for anyone who wants to apply them to a custom kernel build.

For those not on an Arch-based system, the realistic options are applying the patches manually to a self-compiled kernel or waiting for your distro to pick them up. Natalie has said her post will be updated if and when the work gets packaged by other distributions.


Suggested Read 📖: The Linux 7.0 Release is Here!


