Wednesday, 25 September 2024

FOSS Weekly #24.39: New File Manager, Webcam Issue in Ubuntu, GNOME 47 Release and More Linux Stuff

There is a new file manager in the Linux town.

from It's FOSS https://ift.tt/4yqZJAK
via IFTTT

No Camera Found? Getting the Camera App to Work in Ubuntu 24.04


Ubuntu 24.04 ships with GNOME's new Camera app, and it doesn't work out of the box. It simply fails to recognize built-in or external webcams.

When you open the Camera app, it shows the "No Camera Found. Connect a camera device" message.

This may make you doubt your machine's hardware, but it is likely a software issue, as the Camera app does not work by default in Ubuntu 24.04.


Several workarounds have been suggested for this problem on various Ubuntu forums. However, the one that worked for me was shared by an It's FOSS reader, Jack.

Here it is.

Fixing the issue

The trick here is to add yourself to the video group. You can use the usermod command for this purpose. Fortunately or unfortunately, this is a command line fix.

🚧
Type the commands as they are, or copy-paste them into the terminal. The -a part is of utmost importance.

Open a terminal in Ubuntu (use Ctrl+Alt+T shortcut) and run the following command:

sudo usermod -aG video $USER

If this is your first time using sudo, you should know that it asks for a password. Enter the password of the account you use to log into the system. Nothing is displayed on the screen while you type; that's normal in the UNIX/Linux world. Just type the password blindly and press enter.

There is no success message or output for the command.

💡 The usermod command modifies a user account. With -G, you are telling it to modify the groups the user belongs to. The -a option is crucial because it appends a new group to the user's existing groups. If you don't use -a, the user will only belong to the group you specify (video here), and that would be catastrophic, as you would no longer be able to use sudo and function as before.

You may have to log out or restart the system before the changes take effect.
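
To confirm the change after logging back in, you can list your group memberships with the standard groups command; video should appear in the output:

groups $USER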

When you start the Camera app again, it should work now.

That's me thinking why Ubuntu won't fix these widespread issues

Conclusion

I still prefer the good old Cheese camera app.


If you want, you can install it using this command:

sudo apt install cheese

It is also available from the app store, but the Snap version gets priority there, and that one doesn't work very well either.

Each Ubuntu release has bugs; 24.04 is just buggier. I don't understand how this prevalent bug made it into a long-term support release, and why it has not been fixed even after the first point release, 24.04.1.

I know we are not going to get answers, but at least this kind of troubleshooting helps us explore otherwise uncharted territory.



from It's FOSS https://ift.tt/3RYs706
via IFTTT

Sunday, 22 September 2024

Generative AI & LLMs: How are They Different or Similar?


Generative AI and Large Language Models (LLMs) are often used interchangeably, but while they share some similarities, they differ significantly in purpose, architecture, and capabilities.

In this article, I'll break down the difference between the two, explore the broader implications of generative AI, and critically examine the challenges and limitations of both technologies.

What is Generative AI?

Generative AI refers to a class of AI systems designed to create new content, whether it's text, images, music, or even video, based on patterns learned from existing data.

How Generative AI Works

At its core, Generative AI functions by learning patterns from vast amounts of data, such as images, text, or sounds.

The process involves feeding the AI huge datasets, allowing it to "understand" these patterns deeply enough to recreate something similar but entirely original.

The "generative" aspect means the AI doesn’t just recognize or classify information; it produces something new from scratch. Here’s how:

1. Neural Networks

Generative AI uses neural networks, which are algorithms inspired by how the human brain works.

These networks consist of layers of artificial neurons, each responsible for processing data.

Neural networks can be trained to recognize patterns in data and then generate new data that follows those patterns.

2. Recurrent Neural Networks (RNNs)

For tasks that involve sequences, like generating text or music, Recurrent Neural Networks (RNNs) are often used.

RNNs are a type of neural network designed to process sequential data by keeping a sort of "memory" of what came before.

For example, when generating a sentence, RNNs remember the words that were previously generated, allowing them to craft coherent sentences rather than random strings of words.

3. Generative Adversarial Networks (GANs)

GANs work by pitting two neural networks against each other.

One network, the Generator, creates content (like an image), while the other network, the Discriminator, judges whether that content looks real or fake.

The Generator learns from the feedback of the Discriminator, gradually improving until it can produce content that’s indistinguishable from real data.

This method is particularly effective in generating high-quality images and videos.
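
To make the adversarial loop concrete, here is a minimal sketch in Python (assuming the PyTorch library; the tiny networks and the fake 1-D "real data" are purely illustrative, not any production model):

import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" samples drawn from N(3, 0.5)
    fake = generator(torch.randn(64, 8))    # Generator turns random noise into samples

    # Train the Discriminator: label real samples 1, generated samples 0
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the Generator: try to make the Discriminator output 1 on fakes
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near 3, like the real data
print(generator(torch.randn(5, 8)).detach())

Image and video GANs follow the same loop, just with convolutional networks in place of these toy ones.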

Examples of Generative AI

  • Image generators :
    • DALL-E: It can generate highly detailed images from textual descriptions, demonstrating its ability to understand and translate language into visual form.
    • Stable Diffusion: It allows users to generate a wide range of images, from realistic portraits to fantastical landscapes.
  • Music generators:
    • Udio: This AI tool can create original music compositions in various styles, from classical to electronic.
    • Jukebox: Another notable music generator, Jukebox is capable of generating realistic-sounding music in different genres and even imitating specific artists.
  • Video tools:
    • Runway: This versatile platform offers a suite of tools for video editing, animation, and generation. It can be used to create everything from simple animations to complex visual effects.
    • Topaz Video AI: This software specializes in enhancing and restoring video footage, using AI to improve quality, reduce noise, and even increase resolution.

What Are Large Language Models (LLMs)?

Large Language Models (LLMs) are a specialized form of artificial intelligence designed to understand and generate human language with remarkable proficiency.

Unlike general generative AI, which can create a variety of content, LLMs focus specifically on processing and producing text, making them integral to tasks like translation, summarization, and conversational AI.

How LLMs Work

At their core, LLMs leverage Natural Language Processing (NLP), a branch of AI dedicated to understanding and interpreting human language. The process begins with tokenization:

Tokenization

This involves breaking down a sentence into smaller units, typically words or subwords. These units are called tokens in LLM terms.

For instance, the sentence "I love AI" might be tokenized as ["I", "love", "AI"]. These tokens serve as the building blocks for the model's understanding.
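
As a quick illustration, here is how you could inspect this in Python with OpenAI's tiktoken library (one tokenizer among many; other libraries work similarly):

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # encoding used by several OpenAI models
ids = enc.encode("I love AI")
print(ids)                                   # the integer IDs the model actually sees
print([enc.decode([i]) for i in ids])        # the text chunk behind each token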

What are Tokens in LLMs?
Let’s clear some of LLM jargon and learn more about tokens.
Transformers

LLMs typically use an architecture called transformers, a model that revolutionized natural language processing.

They work by analyzing relationships between words and their contexts in massive datasets.

In simple terms, think of them as supercharged auto-complete functions capable of writing essays, answering complex questions, or summarizing articles.
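
For a hands-on feel, you can run a small transformer locally with the Hugging Face transformers library (assuming it and a backend like PyTorch are installed; the GPT-2 model here is just an illustrative choice):

from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")
result = generate("The future of open source is", max_new_tokens=20)
print(result[0]["generated_text"])   # prompt plus the model's continuation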

Examples of LLMs

  • Text Generation:
    • GPT-3: One of the most well-known LLMs. It is capable of generating human-like text, from essays to poetry.
    • GPT-4: A more advanced successor with further improvements, such as memory that allows it to maintain and access information from previous conversations.
    • Gemini: A notable LLM by Google, which focuses on enhancing text generation and understanding.

Generative AI and LLMs: A Unique Bond

Now that you're familiar with the basics of generative AI and large language models (LLMs), let's explore the transformative potential when these technologies are combined.

Here are some ideas:

Content Creation

For all the writer folks like me who may have hit writer's block, the combination of LLMs and generative AI enables the creation of unique, contextually relevant content across various media: text, images, and even music.

Talk to your documents

A fascinating real-world use case is how businesses and individuals can now scan documents and interact with them.

You could ask specific questions about the content, generate summaries, or request further insights without compromising privacy.

This approach is particularly valuable in fields where data confidentiality is crucial, such as law, healthcare, or education.

We have covered one such project called PrivateGPT.

Enhanced Chatbots and Virtual Assistants

No one likes the generic response of customer service chatbots. The combination of LLMs and generative AI can power advanced chatbots that handle complex queries more naturally.

For instance, an LLM might help a virtual assistant understand a customer’s needs, while generative AI crafts detailed and engaging responses.

Open-source projects like Rasa, a customizable chatbot framework, have made this technology accessible for businesses looking for privacy and flexibility.

Improved Translation and Localization

When combined, LLMs and generative AI can significantly improve translation accuracy and cultural sensitivity.

For example, an LLM could handle the linguistic nuances of a language like Arabic, while generative AI produces culturally relevant images or content for the same audience.

Open-source projects like Marian NMT, a translation toolkit, and Unbabel's Tower, an LLM, show promise in this area.

Still, challenges remain, especially in dealing with idiomatic expressions or regional dialects, where AI can stumble.

Challenges and Limitations

Both Generative AI and LLMs face significant challenges, many of which raise concerns about their real-world applications:

Bias

Generative AI and LLMs learn from the data they are trained on. If the training data contains biases (e.g., discriminatory language or stereotypes), the AI will reflect those biases in its output.

Google halts Gemini's image generation due to historical inaccuracies

This issue is especially problematic for LLMs, which generate text based on internet-sourced data, much of which contains inherent biases.

Hallucinations

A unique problem for LLMs is "hallucination," where the model generates false or nonsensical information with unwarranted confidence.

While generative AI might create something visually incoherent that's easy to detect (like a distorted image), an LLM might subtly present incorrect information in a way that appears entirely plausible, making it harder to spot.

Resource Intensiveness

Training both generative AI and LLMs requires vast computational resources. It’s not just about processing power, but also storage and energy.

Source: Towards Data Science

This raises concerns about the environmental impact of large-scale AI training.

Ethical Concerns

The ability of generative AI to produce near-perfect imitations of images, voices, and even personalities poses ethical questions.

How do we differentiate between AI-generated and human-made content? With LLMs, the question becomes: how do we prevent the spread of misinformation or misuse of AI for malicious purposes?

ChatGPT's reply after 'Sky' voice faces scrutiny over Scarlett Johansson comparison

Key Takeaways

The way generative AI and LLMs complement each other is mind-blowing. Whether it's generating vivid imagery from simple text or creating human-like conversations, the possibilities seem endless.

However, one of my biggest concerns is that companies are training their models on user data without explicit permission.

This practice raises serious privacy issues: if everything we do online is being fed into AI, what's left that's truly personal or private? It feels like we're inching closer to a world where data ownership becomes a relic of the past.



from It's FOSS https://ift.tt/TsP7gHr
via IFTTT

Saturday, 21 September 2024

What is a Compute Module? Why should you care about it?


As your projects move from tinkering to serious development, the limitations of typical Raspberry Pi boards, or any SBC, become quite obvious.

Can they truly deliver the flexibility and performance needed for a compact, scalable design?

Many developers face this question when moving from prototypes to market-ready products.

In this article, we'll dive into what compute modules are and how they can fit into your next project.

What are Compute Modules?

In the world of embedded systems and custom hardware solutions, compute modules represent a distinct class of computing devices.

At their core, compute modules are essentially stripped-down versions of full-fledged single-board computers (SBCs).

They house the essential processing components such as the CPU, memory, and storage but leave out many of the peripherals and I/O ports found on standard SBCs.

Image Source: Raspberry Pi | CM4 without IO Ports

This design choice is not merely about minimization; it reflects a broader intent to offer greater flexibility and integration potential.

While a compute module doesn't come with all the usual bells and whistles of a standard Raspberry Pi board, like HDMI ports or USB connectors, it provides the core processing power you need.

This setup is ideal for creating specialized devices or products where every inch of space and every ounce of processing power must be optimally utilized.

For example, see this industrial panel by Chipsee:


It is actually based on the Raspberry Pi CM4 module. And as you can see in the image below, a full-fledged Raspberry Pi would NOT have been suitable in this case. The compute module offers the flexibility of expanding and customizing the hardware.

Image Credit: Pallav Agrawal
📋
In simpler terms, a compute module is like a powerful brain that you can embed into your own custom-designed hardware.

Drawbacks of Compute Modules

To some tinkerers, they may seem an odd choice, especially considering they’re often priced similarly to fully-featured SBCs.

Unlike their more comprehensive counterparts, compute modules can lack built-in I/O ports and network capabilities, necessitating additional boards for setup.

Image Source: Raspberry Pi | IO Board for CM4

This additional complexity and cost can make them appear less appealing at first glance.

Yet, for those who need the flexibility to design custom hardware solutions and scale their projects, these modules offer a valuable advantage.

Why should you care about a compute module?

Compute modules might seem like a niche component, but their versatility makes them invaluable in a range of real-life applications.

Here’s why you should pay attention to them and how they can be effectively utilized:

1. Cluster Computing

For those delving into high-performance computing or distributed systems, compute modules offer an efficient way to build compact, powerful clusters.

Imagine setting up a mini data center using compute modules—each module acting as a node in your cluster, providing significant computing power without occupying too much physical space.

This setup is perfect for tasks like data processing, simulations, or even small-scale cloud computing environments.

If you are interested, you can have a look at this startup called Compute Blade.

Image Source: Ivan Kuleshov

2. Home Automation

In home automation, where every device needs to be both powerful and unobtrusive, compute modules provide a flexible solution.

With compute modules, you can build a smart home hub that controls lighting, security, and climate: a central unit that integrates seamlessly into your home, managing multiple systems from a single compact device.

Our pick: Home Assistant Yellow, which has a Raspberry Pi Compute Module 4 at its heart.

Image Source: Home Assistant

3. Handheld Devices

Compute modules also shine in the realm of handheld devices.

Whether it’s a custom handheld gaming console or a specialized field tool for professionals, compute modules offer the processing power you need in a small form factor.

They can be embedded into devices where space is limited, yet performance requirements are high, delivering a robust computing solution in a portable package.

For instance, check out this amazing project called Retro Lite by StonedEdge.

Image Source: StonedEdge | It is inspired by Nintendo Switch but contains a Pi CM4 inside

4. Industrial Automation

For industrial applications, where ruggedness and customization are crucial, compute modules offer a way to build specialized controllers and monitoring systems.

For instance, a compute module could be embedded in a custom-built control panel that manages factory equipment, providing the necessary computational power while allowing for a tailored interface with sensors and actuators.

Here's an industrial touch panel solution by Comfile Technology, using a Pi CM4 with some other tweaks:

Image Source: Comfile Technology

A Taste of Different Flavors: Pi and Beyond

Compute modules aren’t just limited to the Raspberry Pi ecosystem.

While Raspberry Pi's Compute Module series is the most popular, there are other contenders in the market that offer similar flexibility and customization options.

Let’s take a look at a few of the most prominent compute modules available today:

1. Raspberry Pi Compute Modules

These are perhaps the most well-known and widely used compute modules.


From the original Compute Module 1 (CM1) to the more powerful and versatile Compute Module 4 (CM4), Raspberry Pi’s lineup offers a range of processing power, memory configurations, and connectivity options.

The CM4, in particular, is highly favored for its compact size, onboard eMMC storage, and the ability to support wireless connectivity via a custom carrier board.

Raspberry Pi’s compute modules are known for their adaptability and reliability, making them the go-to option for industrial automation, IoT, and embedded systems.

2. Orange Pi Compute Modules

Orange Pi provides a similar alternative to Raspberry Pi’s offerings, with its own compute modules designed for specific industrial and development purposes.

Image Source: Orange Pi

The Orange Pi CM4 is a direct competitor to the Raspberry Pi CM4, offering a balance of affordability and performance.

These boards are often favored for their cost-effectiveness and flexibility in hardware customization, particularly for developers working with lower budgets.

3. Nvidia Jetson Modules

For AI and machine learning applications, Nvidia Jetson compute modules take the cake.

Image Source: Nvidia

These modules are designed with GPU-intensive tasks in mind and are often used for edge AI, robotics, and smart video analytics.

While they’re more expensive than Raspberry Pi or Orange Pi modules, their raw processing power and ability to handle complex algorithms make them a standout option in specialized fields.

4. ArmSoM Compute Module 5


The ArmSoM CM5 is a powerful replacement for the Raspberry Pi CM4. It has the Rockchip RK3576 system-on-chip, which combines a quad-core Cortex-A72 @ 2.2 GHz, a quad-core Cortex-A53 @ 1.8 GHz, and a dedicated NEON co-processor.

In addition to that, it also has an integrated Mali GPU and an 8 TOPS NPU.

Final Thoughts

In the end, compute modules give you the flexibility to create custom hardware solutions in a compact form, making them great for projects like IoT and automation.

They allow you to design exactly what you need, unlike standard SBCs.

But they have some downsides. They’re priced like full SBCs, lack built-in I/O or networking, and need extra boards, which adds cost and complexity.

While this can be frustrating for hobbyists, the customization options often make it worth the effort.

Have you used a compute module, or seen one in action? It might be worth exploring for your next project!



from It's FOSS https://ift.tt/aKN1pLF
via IFTTT

Thursday, 19 September 2024

Convert and Transfer PDFs and eBooks to Kindle Using Calibre


So, you bought a new e-book from a platform like Humble Bundle or other online stores.

And, now you want to transfer it to your Amazon Kindle. Fret not, I got you 😉

Kindle allows you to sideload books, and there are two popular ways to do so: either use Calibre or send the book to your Kindle email.

💡
It is easier to send ebooks to Kindle via email.

You can find your Kindle email by going to Settings → All Settings → My Account → Send-to-Kindle Email.

Once done, compose an email using your regular email account. Attach the ebook to the email and send it to your Kindle email address.

But the book must be in a supported format like MOBI or PDF.

Here, we focus on Calibre, as it lets you adjust the book's cover and formatting, and even convert the file to a supported format.

Now, let me highlight the steps you need to follow to transfer any e-book to Kindle using Calibre. If you do not have it installed, follow our guide here to set it up first:

Install the Latest Calibre on Ubuntu
Calibre is a free and open source e-book software. Here’s how you can install it on Ubuntu Linux.

Transfer any eBook to Kindle using Calibre

When you get an eBook from a third-party platform instead of the Amazon marketplace, you may have to change the file format to make it work on Kindle. For example, most books elsewhere are available in EPUB format, which is supported by other e-readers but not by Kindle.

So it is essential to know the file format first; based on that, you can decide whether the book you intend to transfer requires conversion to another format.

Step 0: Know the file format

Kindle supports various eBook formats, but for the most part, you'll be dealing with the following four:

  • EPUB (not supported by Kindle; needs to be converted to a supported format)
  • MOBI
  • azw3
  • PDF

If the file you want to transfer to Kindle is MOBI, azw3, or PDF, you can directly use it without conversion.

But if you have an EPUB file, then you'd need to convert it to MOBI or azw3.

Here's how to do that:

Step 1: Import a book to Calibre

To import a book to Calibre, first launch the Calibre software and click on the Add books button. It will open the file manager, where you choose the books to add:


Soon, you will see the titles in your Calibre program:


Step 2: Convert eBook to supported file format

If the book you want to send to Kindle is not supported (probably an ePub format), then you can refer to this section where I'll show you how to convert that to a supported file format.

You can identify the file format by selecting an eBook from Calibre and on the right-hand side, you will find Formats where the book format is mentioned:


To convert the eBook(s), first select one or more unsupported eBooks and hit the Convert books option:


Now, go to Page setup, choose your Kindle model from the Output profile, select MOBI as the output format, and finally hit the OK button:


Once finished, you will see the converted file format along with the previous file format:

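If you prefer the terminal, Calibre also ships a command-line converter called ebook-convert; a typical invocation (the file names here are illustrative) looks like this:

ebook-convert my-book.epub my-book.mobi --output-profile kindle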

Step 3: Send eBook(s) to Kindle

To send eBooks to your Kindle device, first plug your Kindle into your system; Calibre will detect the device and show a Kindle icon on the top bar.

Now, select one or more eBooks and hit the Send to device button:


That's it!

Wrapping Up...

It is incredibly easy to convert an eBook for Kindle, or for any other device you have, using the Calibre e-book program. It is an excellent open-source program that every eBook reader should have, whether you are on Linux or another platform.

💬 How do you convert eBooks to read them on your Kindle device? Or do you stick to the Amazon marketplace, or perhaps a Kindle subscription? Let me know about it!



from It's FOSS https://ift.tt/UWLNpo0
via IFTTT

Wednesday, 18 September 2024

FOSS Weekly #24.38: Arch Experience, Kernel 6.11, Mint vs Ubuntu and More Linux Stuff


Last week, I shared the Linux online course as our new project. Based on the feedback I received from some students, I have divided the course into smaller chapters, which makes it easier to learn at your own pace. Enjoy 😄

Linux For DevOps - Courses by Linux Handbook
Welcome to the “Linux for DevOps” course! In the fast-paced world of DevOps, proficiency in Linux is not just a skill but a necessity. Whether you are new to Linux or looking to deepen your skills, this course will guide you through essential concepts, command-line operations, and system administration tasks…

💬 Let's see what else you get in this edition

  • A new Linux kernel release.
  • An open-source competitor to Apple Intelligence.
  • Ubuntu introducing a new authentication method.
  • Linux Mint announcing some interesting changes.
  • And other Linux news, videos and, of course, memes!

📰 Linux News

Wow! This Open-Source AI Platform Aims to Challenge Apple’s Intelligence Tech
Exactly what we need to compete with Apple’s tech locked inside its walled garden.


from It's FOSS https://ift.tt/kjPuRqc
via IFTTT

Tuesday, 17 September 2024

Apt, DNF, Zypper, Pip, Cargo, XYZ! App Rules Them All


Managing packages in Linux varies depending on the distribution. All the Debian-based distros use APT as the package management app, Fedora uses DNF, and openSUSE depends on the Zypper package manager.

Recently, distro-agnostic package managers like Snap, Flatpak, and AppImage have made the scene even more fragmented.

More package managers mean more commands to get familiar with.

If that's what you feel as you distrohop around the Linux-verse, I have a new app for you. It's called app 😬

App, a cross-platform package management assistant written in Go, can help you in this scenario. Basically, it is a wrapper for package managers and offers the same commands on all supported distributions. It means you don't need to memorize per-distro package management commands.

That's not all. App keeps a record of installed applications in its config file. So, if you move distros, just use the previous config file to install the same packages in the new distro.

🚧
In my opinion, you should stick to the official package managers, especially if you are managing critical infrastructure. Don't experiment with such tools if you get overwhelmed easily and don't like to troubleshoot.

Install App Package Management Assistant

App provides an installation script that works on any Linux distribution. Open a terminal and run the command below:

bash <(curl -sL https://hkdb.github.io/app/getapp.sh)
Install App Utility

To update the app to the latest version, use:

app -m app update
Update App Utility

What happens behind the scenes

The App utility has its own, simpler syntax. Since it uses the same syntax across distributions, you only need to remember App's commands.

When you install an app using the app command, it records the action. Be it an installation or removal of the package, the information is stored inside the ~/.config/app directory.

Now, when you want to migrate to another distribution, you can copy this directory and paste it inside the ~/.config folder of the new distribution to start installing packages. We will see the process in detail in the next section.
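
For instance, copying the configuration to the new machine could be as simple as this (the hostname is illustrative):

rsync -a ~/.config/app user@new-machine:~/.config/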

📋
For the App to work, you need to install the packages using the app command. Only then are the package details stored and restored on a new system.

Basic App utility commands

Let's see some of the important commands you need to remember while using this tool.

Enable/Disable necessary package managers

By default, package managers like Flatpak, Snap, AppImage, Yay, Pip, Go, and Cargo are disabled.

You can enable them using the general syntax:

app -m <package-manager> enable

For example, to enable Flatpak support,

app -m flatpak enable
Enable Flatpak Support

To disable an enabled package manager, use the command:

app -m <package-manager> disable
Disable a Package Manager

Search for packages

In order to search for packages using the App utility, do the following:

app search <package-name>
Search for packages in official repos

To search for a package with another package manager, such as Flatpak (which searches Flathub), specify the package manager.

app -m <package-manager> search <package-name>

For example, if I am searching for Blackbox terminal emulator in Flatpak, I will run:

app -m flatpak search blackbox
Search packages in Flatpak Flathub

Install a package

To install a package on your system, run the command:

app install <package-name>

Let's say I want to install fortune. So I will run:

app install fortune
Install a package

Install apps from other package manager

To install an app as a Flatpak, use the command:

app -m flatpak install <package-name>
Install a Flatpak package

This is the general formula, so we can install packages from other package managers using:

app -m <package-manager> install <package-name>

Install an AppImage

First, go to the directory where the AppImage file was downloaded. Then, to install the AppImage file:

app -m appimage install <appimage-file-name>

I have an AppImage of Standard Notes app. So, I will open my Downloads directory in a terminal and run:

app -m appimage install standard-notes-3.195.1-linux-x86_64.AppImage
Install an AppImage File

The official page has a detailed description of all supported package managers and their respective installation methods.

💡
The project has neat documentation on installing apps like the Brave browser, which needs its own repo configuration.

List installed packages

You can list the packages that were installed using App from official repos by using the command:

app history

Similarly, to list all the installed packages from a particular repository, run:

app -m <package-manager> history
App install history

Remove a package

To remove a package, you can use:

app remove <package-name>
Remove a Package

For other package managers, use:

app -m <package-manager> remove <package-name>
Remove a Flatpak Package

If you are on a Debian-based distribution, you can use purge as well.

app purge <package-name>

Similarly, App supports autoremove for native package managers.

app autoremove

Upgrade packages

To upgrade all the packages installed on the system, use the commands:

app update
app upgrade all

At the same time, the App allows individual package manager upgrades as well. For this, use the syntax:

app -m <package-manager> upgrade

Add PPA in Ubuntu

On an Ubuntu-based system, you can use a PPA to install additional packages that are not available in the native Ubuntu repositories, or whose versions in the repos are old.

So, if you are using the App for installing packages, you need to use the syntax:

app add-repo ppa:graphics-drivers/ppa

Restoring the packages in another distro

🚧
While restoring packages across distributions, keep in mind that not all platforms use the same package names.

Now, when you want to hop to a new distro, or want to replicate the current distribution, all you need to do is follow the steps below.

📋
You need to set up the extra package managers like Flatpak, Snap, etc. before you start restoring.

First, install the App utility on the new system, as mentioned in the first section.

Now, copy the ~/.config/app directory to the new system. Then open a terminal and run:

app -r all

That's it. Your packages will be installed on the new system.

🚧
Restoring Flatpak apps on a new system asked me to select an app from a huge list. When I inspected the ~/.config/app/packages/flatpak.json file, there was an extra space at the beginning of a line.
I removed that space and reran the command to restore the Flatpak apps successfully.

You can restore individual package managers separately by using the command:

app -r <package-manager>

Conclusion

This seems like a useful tool if you distrohop quite often. You can keep a list of the packages you install regularly and install them with less effort on a new system.

There are a few tools like Nala that try to provide similar features.

By the way, we are also offering Linux courses through our other portal. Something you might be interested in.

Linux For DevOps - Courses by Linux Handbook
Welcome to the “Linux for DevOps” course! In the fast-paced world of DevOps, proficiency in Linux is not just a skill but a necessity. Whether you are new to Linux or looking to deepen your skills, this course will guide you through essential concepts, command-line operations, and system administration tasks…


from It's FOSS https://ift.tt/lidFGL3
via IFTTT

Sunday, 15 September 2024

What are Tokens in LLMs? A Beginner’s Guide


If you’ve been following our articles on Large Language Models (LLMs) or digging into AI, you’ve probably come across the term token more than a few times. But what exactly is a "token," and why does everyone keep talking about it?

It's one of those buzzwords that gets thrown around a lot, yet few people stop to explain it in a way that’s actually understandable.

And here’s the catch - without a solid grasp of what tokens are, you’re missing a key piece of how these models function.

In fact, tokens are at the core of how LLMs process and generate text. If you’ve ever wondered why an AI seems to stumble over certain words or phrases, tokenization is often the culprit.

So, let’s cut through the jargon and explore why tokens are so essential to how LLMs operate.

What are tokens?

A token in Large Language Models is basically a chunk of text that the model reads and understands.

It can be as short as a single letter, a part of a word, or a whole word. Think of it as the unit of language that an AI model uses to process information.

Instead of reading entire sentences in one go, it breaks them down into these little digestible pieces - tokens.

In simpler words:

Imagine you're trying to teach a child a new language. You'd start with the basics: letters, words, and simple sentences.

Language models work in a similar way. They break down text into smaller, manageable units called tokens.

💡
I used Tiktokenizer, which is a handy tool for visualizing and understanding how text is tokenized by different models.

For example, the sentence "The quick brown fox jumps over the lazy dog" could be tokenized as follows:

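With a GPT-style tokenizer, each common word typically becomes one token. You can check this in Python using the tiktoken library (one tokenizer among many; the exact split varies by model):

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("The quick brown fox jumps over the lazy dog")
print([enc.decode([i]) for i in ids])
# Common English words usually map to one token each, e.g.:
# ['The', ' quick', ' brown', ' fox', ' jumps', ' over', ' the', ' lazy', ' dog']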

How do language models use tokens?

Once a text is tokenized, a language model can analyze each token to understand its meaning and context. This allows the model to:

  • Understand the meaning: The model can recognize patterns and relationships between tokens, helping it understand the overall meaning of a text.
  • Generate text: By analyzing the tokens and their relationships, the model can generate new text, such as completing a sentence, writing a paragraph, or even composing an entire article.

Tokenization Methods

When we talk about tokenization in the context of Large Language Models (LLMs), it's important to understand that different methods are used to split text into tokens. Let's walk through the most common approaches used today:

1. Word-Level Tokenization

This is the simplest approach where the text is split by spaces and punctuation. Each word becomes its own token.

Example: Original text: "I love programming." Tokens: ["I", "love", "programming", "."]

While this is straightforward, it can be inefficient.

For example, "running" and "runner" are treated as separate tokens even though they share a root.

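A naive word-level tokenizer can be approximated in one line of Python (a simplification; real tokenizers handle many more edge cases):

import re

print(re.findall(r"\w+|[^\w\s]", "I love programming."))
# ['I', 'love', 'programming', '.']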

2. Subword-Level Tokenization

Subword tokenization breaks words into smaller, meaningful units, which makes it more efficient.

It’s great for handling words with common prefixes or suffixes and can split rare or misspelled words into known subwords.

Two popular algorithms are Byte Pair Encoding (BPE) and WordPiece.

Example (with BPE): Original text: "underestimate" Tokens: ["und", "erest", "imate"]


In this case, BPE breaks down "underestimate" into smaller units that can be used in other words, making it easier to handle variations and misspellings.

3. Character-Level Tokenization

This method splits text into individual characters.

It’s very flexible and can handle any text, including non-standard or misspelled words.

However, it can be less efficient for longer texts because the model deals with many more tokens.

Example: Original text: "cat" Tokens: ["c", "a", "t"]

Character-level tokenization is useful for extreme flexibility but often results in more tokens, which can be computationally heavier.

4. Byte-Level Tokenization

Byte-level tokenization splits text into bytes rather than characters or words.

This method is especially useful for multilingual texts and languages that don’t use the Latin alphabet, like Chinese or Arabic.

It’s also important for cases where the exact representation of the text is crucial.
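You can see why this matters by looking at the raw bytes of a non-ASCII string in Python; a single character can span several bytes:

print(list("café".encode("utf-8")))
# [99, 97, 102, 195, 169] - the 'é' takes two bytes (195, 169)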

Token Limit

A token limit refers to the maximum number of tokens an LLM can process in a single input, including both the input text and the generated output.

Think of it as a buffer—there’s only so much data the model can hold and process at once. When you exceed this limit, the model will either stop processing or truncate the input.

For example, GPT-3 can handle up to 4096 tokens, while GPT-4 can process up to 8192 or even 32,768 tokens, depending on the version.

This means that everything in the interaction, from the prompt you send to the model’s response, must fit within that limit.
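
As a rough sketch of what this means in practice, you can count tokens before sending a prompt (assuming the tiktoken library and GPT-3's 4096-token limit):

import tiktoken

LIMIT = 4096                                  # e.g., GPT-3's context window
enc = tiktoken.get_encoding("cl100k_base")

prompt = "Summarize the following article: ..."
used = len(enc.encode(prompt))
print(f"{used} tokens used, {LIMIT - used} left for the response")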

Why Do Token Limits Matter?

  • Contextual Understanding: LLMs rely on previous tokens to generate contextually accurate and coherent responses.
    • If the model reaches its token limit, it loses the context beyond that point, which can result in less coherent or incomplete outputs.
  • Input Truncation: If your input exceeds the token limit, the model will cut off part of the input, often starting from the beginning or end. This can lead to a loss of crucial information and affect the quality of the response.
  • Output Limitation: If your input uses up most of the token limit, the model will have fewer tokens left to generate a response.
    • For example, if you send a prompt that consumes 3900 tokens in GPT-3, the model only has 196 tokens left to provide a response, which might not be enough for more complex queries.

Conclusion

Tokens are essential for understanding how LLMs function.

While it may seem trivial at first, tokens influence everything from how efficiently a model processes language to its overall performance in different tasks and languages.

Personally, I believe there’s room for improvement. LLMs still struggle with nuances in non-English languages or code, and tokenization plays a huge part in that.

I’d love to hear your thoughts - drop a comment below and let me know how you think advancements in tokenization could affect language models' ability to handle complex or multilingual text!



from It's FOSS https://ift.tt/fUwbrnH
via IFTTT