Monday, 11 May 2026

Linux is Getting a Kill Switch!

Sasha Levin, NVIDIA engineer and co-maintainer of the stable and long-term support kernel trees, has proposed a new patch that adds a mechanism called killswitch to the Linux kernel.

It's pitched as a way for system administrators to disable a vulnerable kernel function on a running system, and the timing of it isn't a coincidence either. The patch follows the rising risk of Linux Privilege Escalation (LPE) vulnerabilities like Copy Fail and Dirty Frag.

What is it?

A cropped screenshot of a post by Sasha Levin on the Linux Kernel Mailing List regarding his killswitch proposal.

The Linux kernel is built out of many thousands of small functions, each handling a specific job, like processing a network packet, opening a file, or talking to a USB device. When a security flaw shows up in one of these functions, the proper fix is to patch the code and ship a new kernel.

killswitch takes a blunter, "must exterminate" approach: the admin gives the kernel a function name and a return value. From that point on, the function still gets called by whatever was calling it, but it simply hands back that value and returns. The actual code inside never runs.
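As a loose analogy in shell (an illustration of the idea only, not how the kernel patch is implemented), it is like redefining a function so that callers still invoke it as before, but only ever get a fixed return value back:

```shell
# Illustration only, not kernel code: replace a function's body with a stub
# that returns a fixed value, while callers keep calling it as before.
af_alg_sendmsg() { echo "doing real crypto work"; }

# "engage af_alg_sendmsg -1": swap in a stub that returns immediately.
# Shell exit statuses are 8-bit, so -1 surfaces as 255 here.
af_alg_sendmsg() { return 255; }

# The call site is unchanged, but the original body never runs:
af_alg_sendmsg || echo "caller got error: $?"
```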

In practice, that means a single line at the terminal:

echo "engage af_alg_sendmsg -1" \
        > /sys/kernel/security/killswitch/control

After this, every program trying to send data through AF_ALG (the kernel cryptography interface Copy Fail also exploited) gets an error back. Whatever bug sat in af_alg_sendmsg is now unreachable because the function never actually executes.

The effect kicks in across every CPU core immediately, and it lasts until the admin disengages it or the system reboots. Engaging anything requires root privileges.

There's also a boot parameter version killswitch=fn1=val,fn2=val,..., for cases where an operator needs to apply the mitigation across a whole fleet of machines through the bootloader.
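As a sketch of what that might look like (hypothetical, since the patch is not merged and no shipping kernel understands this parameter yet), reusing the af_alg_sendmsg example from earlier in a GRUB configuration:

```shell
# /etc/default/grub — hypothetical fragment, assuming the proposed
# killswitch=fn1=val,fn2=val,... syntax; no released kernel supports this yet.
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash killswitch=af_alg_sendmsg=-1"

# Then regenerate the bootloader configuration, e.g. on Debian/Ubuntu:
#   sudo update-grub
```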

Sasha points at AF_ALG, ksmbd, nftables, vsock, and ax25 as good candidates for this patch, saying that:

For most users, the cost of "this socket family stops working for the day" is
much smaller than the cost of running a known vulnerable kernel until the fix
lands.

Any risks?

The biggest catch is that killswitch doesn't fix anything. It just stops the function from running. Anything in userspace that depends on that function stops working too, for as long as the killswitch stays engaged.

Engaging it also taints the kernel, which is its way of marking that the running code isn't pure upstream Linux anymore. A new flag (H, bit 20) gets set the moment any killswitch is active, and it persists across disengage all the way to the next reboot.

Any crash that happens afterward carries an H in its banner, which acts as a signal to the Linux maintainers triaging the bug that the image was modified. The patch also dedicates a whole section called "Choosing the right target," warning operators not to pick the wrong function.

Someone on Reddit has described this as "a security feature that may be worse than the vulnerability," and many people agree with that sentiment.

AI was involved

There's one more thing worth flagging. At the bottom of the patch, just above the diff, sits an Assisted-by: Claude:claude-opus-4-7 line, marking it as one Sasha put together with help from Anthropic's AI assistant.

That isn't a one-off. The Assisted-by tag itself comes from a kernel policy, which Sasha had a hand in shaping. It lays out how AI-assisted contributions should be attributed in commit messages, alongside configuration files for tools like Claude, Copilot, and Cursor.

It also fits a wider pattern. Greg Kroah-Hartman has been running his own AI fuzzer against the kernel, and the first bugs it surfaced were in ksmbd, which is one of the subsystems Sasha specifically called out as a killswitch candidate.

As things stand, killswitch is just a patch on the Linux Kernel Mailing List, not merged into mainline, and not in any released kernel or distro. Whether it lands and in what form depends on the upcoming review cycle.


Suggested Read 📖: A Clanker is Being Used to Carry Out AI Fuzzing in Linux



from It's FOSS https://ift.tt/TPSK5tX
via IFTTT

Restricted by the West, Huawei's Open Source HarmonyOS Now Powers 55 Million Devices

In recent times, Huawei has been China's strongest answer to Apple's dominance of the personal tech market. Smartphones, laptops, tablets: they do it all, and at a level on par with the best companies in the business.

This dynamic has not been around forever; it is a more recent shift, as China has seen a huge surge in the usage of domestic software, such as operating systems and databases. The credit goes to a steady improvement in the polish and ease of use of these technologies.

Yu Chengdong, Huawei’s Executive Director, reported that as of the end of March 2026, HarmonyOS was running on more than 55 million devices, adding 23 million in just under six months. That is a massive number, and market giants who wish to maintain their dominance have every reason to be worried about the competition.

HarmonyOS

Much of HarmonyOS's popularity can be traced to the improvements Huawei shipped in its latest release, HarmonyOS 6, such as:

  • Fluid animations that make the phone feel smoother, along with new lighting and glass-like UI elements.
  • Measurable performance improvements from the Ark engine (the APIs that handle multimedia and graphics, especially gaming and video), along with better battery life.
  • New AI assistance tools, including scheduling and camera features that configure depth and focus in one touch.
  • The AI-driven StarShield Security Architecture, with added focus on anti-scam features and privacy improvements.
  • Wider native adaptations of third-party apps.
  • Huawei's growing range of affordable and feature-rich mobile devices.

These are only some of the elements that HarmonyOS and Huawei owe their success to. But what does it mean for the open-source community?

How does it impact FOSS communities?

HarmonyOS is based on OpenHarmony, an open source mobile operating system base that is similar to Android's AOSP but independent from it. This mass adoption of HarmonyOS encourages more contributions to OpenHarmony and drives growth in related projects, such as EulerOS.

The catch, however, is that HarmonyOS still relies on a lot of proprietary layers, including the aforementioned Ark engine and the UI elements. Added to that, the largely Chinese-language documentation makes it inaccessible to a big chunk of the world's population.

Some people are critical of Huawei's more unusual tactics, such as spamming open source projects with requests to support HarmonyOS. Critics see it as intrusive, while others find it acceptable, as Huawei is only using whatever means it can to bring more apps and services into the fold of its OS.

A new challenger?

A new contender challenging the Apple/Google duopoly is actually good news for consumers, especially with more affordable devices, as it keeps all the makers involved on their toes.

This benchmark of 55 million devices indicates that there is room beyond iOS and Android in the mobile operating system space. Of course, this success is tightly coupled to Huawei's impressive hardware.

Let me know what your thoughts are on Huawei's milestone in the comments. Cheers!



from It's FOSS https://ift.tt/NAvbs7V
via IFTTT

Sunday, 10 May 2026

I Moved My Photos from OneDrive to Ente Photos, and I'm Not Going Back

Backing up photos and videos is something most people think about only after losing something they can't get back. Local storage is still the most secure option, as long as the files are encrypted and access to the storage medium is under your control.

The catch is that local storage doesn't help much when you need to pull up a file on the go. Being away from home and needing quick access to a specific photo or video is the kind of situation it fails to handle.

Services from big tech players like Google Photos and Microsoft OneDrive fill that gap well enough, and for a while, OneDrive was my go-to.

But, seeing how aggressively Microsoft has pushed its Copilot offering across its product lineup, I thought to myself, 'it won't be long before some new Copilot-powered feature is rolled out that messes around with images and videos.'

That's why I went searching for new options. Initially, Proton Drive (partner link) looked like an option, but I wanted something outside the Proton ecosystem, so I ended up on Ente Photos.

What is Ente Photos?

Screenshot of Ente showing the welcome page, asking the user to upload their first photo or import their folders.
Btw, that's Ducky, Ente's mascot.

It is an open source, end-to-end encrypted photo storage service that was started in 2020. The goal, as the Ente team puts it, is to help people preserve their memories with privacy without relying on services that treat your data as a resource to be mined.

Beyond Photos, they also offer Auth, a cross-platform two-factor authentication app that backs up your 2FA secrets in an encrypted format, and Locker, which is aimed at storing sensitive documents and files securely.

We got the chance to speak with Vishnu Mohandas, the founder of Ente, back in 2024, where he told us about his vision for building a privacy-respecting alternative to the likes of Google Photos and iCloud Photos.

Quite some time has passed since, and I finally got around to trying it out and, in the process, moved away from yet another Big Tech service.

I made the switch

Before I could do anything, I had to get my files from OneDrive, and boy does Microsoft keep things painfully slow there. Downloading files from the web version of OneDrive meant it would zip the folders first, then begin the download process.

That wouldn't have worked for me, as I had over 200 GB of files to download.

Windows 11 File Explorer showing many folders, all with green checkmarks; the OneDrive app's backup progress interface is visible in the bottom-right.

To fix the painfully slow downloads, I had to install the OneDrive client on Windows 11 and configure it to keep a local copy of the files on my device. This got me my files much faster than the website, and the download was complete in a few hours.

These were a mix of different file types, most of which ended up on an external hard disk, with the photos and videos kept separate for the move to Ente Photos.

Moving on to the migration, I already had an Ente account, so I logged in and picked the 200 GB paid plan, which cost me ₹4788 annually. Keep in mind that Ente charges in USD/EUR globally, so what you end up paying in your local currency will depend on conversion rates and your payment method.

There is also a free tier that offers 10 GB of storage permanently, which is a good way to test things out before committing.

After everything was set, I started exploring the Linux client on my Fedora Workstation daily driver.

The sidebar menu showed me how much storage quota I had (200 GB ofc), along with buttons to access uncategorized content, hidden content, the trash, my account, any watched folders, a tool to free up space, and the preferences menu.

The preferences menu had options to change the interface language, set the theme between System, Light, and Dark, and enable the Machine Learning feature, which lets Ente Photos run on-device processing for face recognition and other ML-powered features.

There were also options to make the client run at startup, point it to a custom domain for self-hosted setups, and configure the app lock.

I then began the file uploads, which took a very long time. There were 21,000+ items to upload, and the slowness mostly came down to Ente Photos encrypting the files on-device before sending them to the servers, which adds overhead that services like OneDrive simply don't have to deal with.

It is the price you pay for actual end-to-end encryption, and honestly, a fair one. 🤷

Though, I did miss the folder-based organization that OneDrive had. As a general-purpose cloud storage service, OneDrive lets you build out a full folder and subfolder hierarchy for any type of file, whereas Ente's offering focuses more on photos and videos.

It did ask me during the initial upload whether I wanted separate albums, but I mistakenly went with the single album option.

Tracking active uploads was working as expected. Ente Photos shows a dedicated interface element in the bottom-right corner that, when clicked, breaks down everything: currently uploading files, successful uploads, ignored uploads where matching files were already found, unsupported files, and failed uploads.

Screenshot of the Ente Photos Linux client showing the file download feature, with its button on the top-right and a download progress interface element on the bottom-left.

I could also select multiple images and videos to share them as a link with others, favorite them, fix the timestamps, edit the location, download them, archive them, hide them, or even delete them.

I used the download option, and it worked as expected, with a slightly slow processing time because I was connected to a VPN. Overall, the Linux client didn't disappoint, and doesn't feel like an afterthought. It feels like something that was built for the platform.

The mobile app

I then moved to testing the Android app for Ente Photos on an Android 16-powered smartphone, and the experience over there was on par with the OneDrive client. Or even better, I would say, as the interface didn't feel overwhelming or jam-packed.

I just had to configure which folders I needed backed up, and Ente Photos did the rest. It runs silently in the background, backing up new content as it is added, and there's even an option to back up only newly added content, preventing unnecessary clutter from ending up on your cloud storage.

There's also a private sharing option that lets you generate end-to-end encrypted links to any album; recipients can open a public link without needing an Ente account.

You can also password-protect the link and set it to expire after a certain period of time.

The mobile app also has a handy search function (even the Linux client had that; I forgot to test it lol) and the machine learning features, which are disabled by default, so you are always in control of whether Ente runs any on-device processing on your photos.

As of writing this, the mobile app has been extremely reliable in my day-to-day use, and its memories feature is like the cherry on top, giving me a nice trip down memory lane every now and then.

But keep this in mind…

Ente Technologies, Inc. is incorporated in the United States, its servers are located in Europe, and its Indian operations run through a subsidiary registered in Bangalore. They operate in three different regions, with three different sets of rules to follow.

In practice, that means Ente can be compelled by authorities in any of those regions to hand over account metadata, things like your identity, billing information, and access logs.

Your files are a different matter entirely. End-to-end encryption (E2EE) means the files are encrypted on your device before they ever reach the servers, so not even Ente can read them.
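The principle can be sketched with openssl as a stand-in (an illustration only, not Ente's actual cryptography): files are encrypted with a secret that never leaves your device, so the server only ever stores ciphertext.

```shell
# Illustration of client-side encryption; Ente's real scheme differs.
plain="$(mktemp)"
cipher="$(mktemp)"
echo "holiday photo bytes" > "$plain"

# Encrypt on-device; the passphrase stands in for a key only you hold.
openssl enc -aes-256-cbc -pbkdf2 -pass pass:device-only-secret \
    -in "$plain" -out "$cipher"

# The server would only ever receive "$cipher"; reading the file back
# requires the on-device secret:
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:device-only-secret -in "$cipher"

rm -f "$plain" "$cipher"
```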

For me, that is what the switch ultimately came down to.



from It's FOSS https://ift.tt/KZE9UrR
via IFTTT

After Ubuntu, Now Fedora is Jumping Onto the AI Bandwagon With Dedicated AI Developer Desktops

It is getting harder for Linux distributions to stay neutral on AI. Between enterprise-grade solutions like RHEL AI and the steady rise of local inference tools, the pressure to take a position has been building for a while.

Canonical recently made theirs clear, moving Ubuntu toward a local-first AI approach built around open-weight models and open source inference tooling, keeping everything on-device rather than routing it through a cloud subscription.

Now, Fedora has voted on an initiative called Fedora AI Developer Desktop that will spawn AI-flavored Fedora Atomic Desktops.

So Fedora and AI, huh?

A cropped screenshot of a Discourse post by Gordon Messmer, a member of the Fedora packaging team.

The proposal came from Gordon Messmer, a contributor on the packaging team, at the end of March, and the Fedora Council has since voted on it with a unanimous +6.

Currently, a lazy consensus period is the last thing standing before it is fully official, with Jef Spaleta, the Fedora Project Leader, acting as the Executive Sponsor (to keep things moving).

The goal here is to make AI development on Fedora less painful by introducing better tooling and packaging. It also aims to offer a smoother experience for users running AI applications and a dedicated space for developers to get their work in front of people who might actually use it.

Also worth knowing is that this initiative is not about adding AI tools to Fedora's existing lineup of Editions or system images. Moreover, none of the resulting images will come pre-configured to connect to remote AI services or monitor how you use your system.

On the technical side, the proposal calls for building an LTS kernel to provide a more stable foundation, alongside bundling user-friendly tools like Goose CLI and Podman Desktop to cover common AI backend workflows.

As for the images this initiative will deliver, there are three planned. The base image, targeting accelerated AI/ML workloads without any proprietary components, will be published as a Fedora Spin.

Two Fedora Remixes follow, one with CUDA runtime support and one with the full CUDA toolkit, the latter of which has some licensing issues that the project will have to tackle.

And before you ask, the developers are planning for a Fedora 45 release timeline for those, which is a few months away in October.

Why though?

Fedora has a habit of being first. Wayland as default, PipeWire, and Flatpak all landed in Fedora before they became the norm across the broader Linux ecosystem. Sitting out the AI wave entirely would be a strange departure from that track record, and probably not a wise one.

Jef, the project leader, has already laid out his rationale, arguing that AI-assisted development is already normalizing upstream. Fedora, he argues, is better off being in that conversation, pushing toward local-first and more ethical tooling, than watching from the sidelines while others set the direction.

However, not everyone is on board. Fernando F. Mancera, a long-time Fedora contributor, withdrew from the project entirely in response, writing "I do not think we can move this forward in a community way. The present situation in Fedora is clearly not for me."

There are many such disagreements in the thread, ranging from concerns about chasing an AI hype cycle to deeper objections around the NVIDIA/CUDA components and whether the community was brought along properly.

Where Fedora is headed is not that much of a stretch, as the Linux kernel already allows AI-assisted contributions, with an "Assisted-by" tag and clear human accountability as the guardrails. And Ubuntu's AI roadmap, as you already know, is moving in the same local-first direction.

But whether the project can pull this off cleanly while keeping community expectations and its own philosophy intact remains to be seen.



from It's FOSS https://ift.tt/xaoHGtf
via IFTTT

Saturday, 09 May 2026

Good Job Dell and Lenovo! Hope Others Follow You

Only last week, we were talking about how LVFS, the firmware update service for Linux, had turned up the heat on vendors who didn't contribute their fair share.

To tackle that, the project has been going through a phased restrictions rollout that includes things like introducing fair-use download utilization graphs and removing detailed per-firmware analytics.

But that obviously wouldn't solve their lack of funding.

Luckily, two vendors have stepped up. Lenovo and Dell have both signed on as Premier sponsors for LVFS, each putting in $100,000 a year to help fund the project going forward.

this picture shows the may 2026 sponsors, the premier sponsors include dell and lenovo, the startup sponsors are framework and open source firmware foundation, and the engineering support sponsors are the linux foundation and red hat

They are also the first to reach this tier. Before now, only Framework Computer and the Open Source Firmware Foundation were on as Startup sponsors, contributing $10,000 a year.

Premier is the highest level of financial commitment any vendor can make to the project.

The news was announced yesterday, with the LVFS homepage already reflecting it. Between the two of them, that's $200,000 a year going into a project that had been running almost entirely on the goodwill of the Linux Foundation and Red Hat.

Richard Hughes, the lone full-time developer at LVFS, wrapped up the announcement by saying:

With the huge industry support from Lenovo and Dell (and our existing sponsors of Framework, OSFF, and of course both the Linux Foundation and Red Hat) we can build this ecosystem stronger and higher than before; we can continue the great work we’ve done long into the future.

Where's everyone else?

It's not a coincidence that the first Premier sponsors are also two of the most Linux-invested OEMs in the industry. Lenovo, one of the largest PC vendors around, ships Ubuntu on laptops, desktops, and workstations worldwide and has over 700 Ubuntu-certified devices to its name.

Dell isn't far behind, with 140+ certified configurations and partnerships with Canonical, Red Hat, and SUSE.

These certified devices are the result of Canonical and the OEM's engineers actively collaborating to verify that the hardware runs Ubuntu reliably, covering things like drivers, firmware, and general day-to-day compatibility.

Brands that think Linux is some niche thing are ignorant at best and apathetic at worst. 🙂

The platform hasn't been niche for a long time, and the argument that Linux users don't represent a significant enough market to justify any investment stopped making sense years ago.

The vendors still treating LVFS like a free service they have no obligation to support should probably pay attention to what comes next. API access gets cut for non-Startup vendors in August. Automated upload limits follow in December.



from It's FOSS https://ift.tt/gh5Fzts
via IFTTT

Dirty Frag is a New Linux Exploit That Grants Root, and There's No Proper Patch Yet

It has not been a week since we came across Copy Fail, the exploit that took advantage of an old logic flaw to escalate a local user to root, handing them all kinds of access over a system they shouldn't have.

A security researcher, Hyunwoo Kim (v4bel), has reported a new Linux kernel privilege escalation threat. This one is called Dirty Frag, and the disclosure of it has not gone as planned.

Hyunwoo had set a five-day embargo after submitting details to the linux-distros mailing list, but an unnamed third party published the exploit publicly the same day, and that was that.

A working exploit is now out in the open; most distros have no patch, and the algif_aead blacklist you may have applied for Copy Fail does nothing against this.

What is Dirty Frag?

Like Copy Fail, Dirty Frag modifies the in-memory copy of a system file without touching the version on disk. Every subsequent read of that file sees the corrupted copy, and nothing on the filesystem looks wrong.

Dirty Frag does this through two separate flaws. The first, xfrm-ESP Page-Cache Write (CVE-2026-43284), targets /usr/bin/su, replacing its in-memory copy with one that hands out a root shell.

The second, RxRPC Page-Cache Write (CVE-2026-43500), goes after /etc/passwd and empties the root password field. PAM accepts the blank entry and lets a root login through.

More importantly, they are chained because neither works on every system alone. The first needs a user namespace, which some Ubuntu AppArmor setups block. The second does not have that requirement, but the rxrpc.ko module it relies on is absent from most distros' default builds.

Ubuntu is one of the few that does ship it, though. Together, the two cover every major distro.

What can you do?

Most distros have nothing out yet, except perhaps AlmaLinux, which is one step ahead of the others with patched kernels already in its testing repository. For everyone else, the immediate option is blacklisting the three modules involved:

sh -c "printf 'install esp4 /bin/false\ninstall esp6 /bin/false\ninstall rxrpc /bin/false\n' > /etc/modprobe.d/dirtyfrag.conf; rmmod esp4 esp6 rxrpc 2>/dev/null; echo 3 > /proc/sys/vm/drop_caches; true"

Doing so also clears the page cache, getting rid of any tampering that may have already happened. Hyunwoo also recommends updating the kernel and rebooting as soon as your distro has a patch out.
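Unpacked, that one-liner does three things: it writes a modprobe blacklist (the install directive makes modprobe run /bin/false instead of loading each module), unloads any already-loaded copies, and drops the page cache. The blacklist format can be sanity-checked against a temporary file, without touching the real /etc/modprobe.d:

```shell
# Sketch: write the same blacklist rules to a temp file to show the format.
# On a real system they go to /etc/modprobe.d/dirtyfrag.conf (as root).
conf="$(mktemp)"
printf 'install esp4 /bin/false\ninstall esp6 /bin/false\ninstall rxrpc /bin/false\n' > "$conf"

# Confirm all three modules are covered:
for m in esp4 esp6 rxrpc; do
    grep -q "^install $m /bin/false$" "$conf" && echo "$m: blacklisted"
done
rm -f "$conf"
```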

Update: Canonical has some mitigation guidelines for Ubuntu users.



from It's FOSS https://ift.tt/CB7q6vI
via IFTTT

Thursday, 07 May 2026

Yazi is the Terminal-based File Manager I Didn't Know I Needed

There are two kinds of Linux users. Those who live in the comfort of the GUI and those who live in the adventurous world of the terminal.

I am neither of the two.

I prefer the comfort of GUI and I jump into the terminal when required or when I am in the mood to explore something.

This article is the result of one such adventure where I tried a file manager in the terminal.

Yes! A file explorer in the terminal. If you are surprised, let me tell you that terminal-based file managers have been around since forever.

Instead of the usual ls and cd command combination, these tools let you browse and interact with files in a slightly more comfortable way.

I explored one such file explorer called Yazi and it impressed me enough to cover it here on It's FOSS.

I even made a video on it. You may watch the video or read the article, whichever you prefer.

What is Yazi, again?

Yazi is a terminal-based file manager packed with features. The first time I used it, I honestly wondered why I hadn’t started earlier. For those curious, it’s written in Rust. I am not sure if you'll love it or hate it for that 😉

Yazi File Manager

Here’s what stands out in Yazi:

  • Full asynchronous support; CPU tasks are spread across multiple threads
  • Built-in support for multiple image protocols
  • Built-in code highlighting and image encoding
  • Scrollable previews
  • Powerful file search and manipulation tools

I'll share my experience with Yazi and the features I explored. Honestly, if you spend plenty of time in the terminal, you won't even feel the need to open a graphical file manager like Nautilus or Nemo.

But first, let's see how to install it.

Installing Yazi on Linux

Yazi is available in the official repositories of Arch Linux, Void Linux, OpenSUSE Tumbleweed, and more.

On Arch Linux, install it along with the dependencies and tools that let you make full use of Yazi:

sudo pacman -S yazi ffmpeg 7zip jq poppler fd ripgrep fzf zoxide resvg imagemagick

Ubuntu users can install the Snap version:

sudo snap install yazi --classic

If your distribution doesn’t provide Yazi in its repositories, use the official binary release.

Don't forget to install the additional packages that give Yazi all those powerful features. On Debian/Ubuntu-based systems, that would be:

sudo apt install ffmpeg 7zip jq poppler-utils fd-find ripgrep fzf zoxide imagemagick

After that, download the official binary, give it execute permission, and run it.

🚧
Since this is a terminal tool, you should be comfortable using the terminal and commands. I won't explain each step or command in detail; I presume you would already know these things.

Post install setup

After installing Yazi, add a small wrapper script so you can cd into the directory you were browsing when quitting Yazi.

Open your ~/.bashrc or ~/.zshrc and add:

function y() {
	# Temp file where Yazi will record the directory it was last browsing
	local tmp="$(mktemp -t "yazi-cwd.XXXXXX")" cwd
	command yazi "$@" --cwd-file="$tmp"
	# cd into that directory, but only if it exists and differs from $PWD
	IFS= read -r -d '' cwd < "$tmp"
	[ "$cwd" != "$PWD" ] && [ -d "$cwd" ] && builtin cd -- "$cwd"
	rm -f -- "$tmp"
}

Save the file and restart your shell. Now you can launch Yazi just by typing y.

Features that make Yazi cool

Let’s look at the features that make Yazi stand out.

The cd magic

In the previous section, we added a wrapper function so you can open Yazi with y. But here’s the interesting part.

When you’re inside Yazi, navigate to any directory and press q. You’ll quit Yazi and automatically cd into that directory in your terminal.

Entering directories in terminal while using Yazi file manager.

Don’t want to change directories? Press Q instead. You’ll exit Yazi and stay in the original directory.

Get image and file previews

If you installed the required dependencies, Yazi can preview most file types in a dedicated pane on the right side. It even provides proper syntax highlighting for code files.

If your terminal supports image protocols, such as Kitty or Ghostty, you can preview images directly in the sidebar.

A small clip showing image preview in Yazi

You can also preview the contents of tar and zip archives without extracting them.

Below is a small screenshot of Yazi listing the files residing inside an archive, like tar or zip.
Archive Preview

Previewing a text file? Press J (Shift + j) to scroll down. Press K (Shift + k) to scroll up.

A small clip showing file preview scroll in Yazi file manager.

Switch to directories by searching for it

Changing directories in Yazi becomes fast once you know the right key combination.

Press g followed by space. A small launcher will appear. Enter an absolute or relative path; your choice.

As you type, Yazi suggests matching directories.

A small clip showing entering into directories in Yazi file manager.

Yazi supports two search methods: one using fd and another using ripgrep.

Press s to search files by name. This uses fd, a modern alternative to the traditional find command.

A small clip showing the working of fd search.

To search by file content, press S. This uses ripgrep, a modern replacement for the grep command.

You can cancel a search anytime with Ctrl+s.

A small clip showing the working of ripgrep search in Yazi file manager.

Bulk rename files with ease

Renaming multiple files doesn’t get easier than this.

First, select files using the Space key. A selected file stays selected even if you change directories. You can select files from anywhere in your system.

Once done, press r. Yazi will open all selected filenames in your default terminal text editor ($EDITOR).

Edit the names as needed. Just be careful not to alter directory paths if you selected files across different folders.

When you’re done, save and exit. In Vim, that’s :wq.

That’s it. All selected files are renamed instantly.

A small clip showing renaming files in Yazi file manager.
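Conceptually, editor-based bulk renaming boils down to pairing the original names with the edited names and moving each pair that changed. A rough sketch of that idea (an illustration only, not Yazi's actual implementation), run against a throwaway directory:

```shell
# Sketch of the bulk-rename idea; not Yazi's actual code.
dir="$(mktemp -d)"
touch "$dir/IMG_001.png" "$dir/IMG_002.png"

# The names Yazi would hand to $EDITOR, and the names after you edit them:
printf 'IMG_001.png\nIMG_002.png\n'         > "$dir/.before"
printf 'vacation-01.png\nvacation-02.png\n' > "$dir/.after"

# Pair old and new names line by line, renaming whatever changed:
paste "$dir/.before" "$dir/.after" | while read -r old new; do
    [ "$old" = "$new" ] || mv "$dir/$old" "$dir/$new"
done

ls "$dir"   # now lists the renamed files
rm -rf "$dir"
```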

The fuzzy search and zoxide

Yazi includes a fuzzy search mode powered by fzf.

In a directory with many files and you can’t remember the exact name? Press z. Start typing something close to the filename and Yazi will narrow it down.

A small clip showing the working of fzf search in Yazi file manager.

If you use zoxide instead of the traditional cd command, you’ll like this even more. Press Z to jump to directories tracked by zoxide. Just make sure zoxide is properly set up first.

A multi tabbed interface

Yazi supports tabs. Press t to open a new tab.

Each tab gets a number. Switch between them using the associated number keys.

A small clip showing creating and switching tabs.

To close a tab, press Ctrl+C.

It has more to offer

These are just some of the features I found most useful. Since Yazi is a file manager, it naturally supports standard operations like copy, paste, and path handling.

It also includes a visual mode for file selection and an interactive file open menu similar to an “Open With” context menu.

Feeling lost among keybindings? Press F1 for a full in-app help view. Once you find what you need, press Esc to return.

Do you find it useful?

I’ve been slowly building a full TUI workflow: Helix as my editor, Glow to preview Markdown, and now Yazi filling the last missing piece as my file manager.

I don’t expect to move away from a GUI setup entirely, but Yazi makes that thought feel achievable. It fits naturally into the way I work.

If you like exploring TUI tools, give it a try and see if this is something you would like to use on a regular basis. Do share your experience in the comments.



from It's FOSS https://ift.tt/cnvKMR6
via IFTTT