Category: Linux EN

  • A Digital Noah’s Ark: How UrBackup and TrueNAS Protect Your Data


    In today’s digital world, our lives—both personal and professional—are recorded as data. From family photos to crucial company databases, losing this information can be catastrophic. Despite this, many individuals and businesses still treat backups as an afterthought. Here is a complete guide to building a powerful, automated, and secure backup system using free tools: UrBackup, TrueNAS Scale, and Tailscale.

    Why You Need a Plan B: The Importance of Backups

    Imagine one morning your laptop’s hard drive refuses to work. Or the server that runs your business falls victim to a ransomware attack, and all your files are encrypted. These aren’t scenarios from science-fiction films but everyday reality. Hardware failure, human error, malicious software, or even theft—the threats are numerous.

    A backup is your insurance policy. It’s the only way to quickly and painlessly recover valuable data in the event of a disaster, minimising downtime and financial loss. Without it, rebuilding lost information is often impossible or astronomically expensive.

    The Golden Rule: The 3-2-1 Strategy

    In the world of data security, there is a simple but incredibly effective principle known as the 3-2-1 strategy. It states that you should have:

    • THREE copies of your data (the original and two backups).
    • On TWO different storage media (e.g., a disk in your computer and a disk in a NAS server).
    • ONE copy in a different location (off-site), in case of fire, flood, or theft at your main location.

    Having three copies of your data drastically reduces the risk of losing them all simultaneously. If one drive fails, you have a second. If the entire office is destroyed, you have a copy in the cloud or at home.

    A Common Misconception: Why RAID is NOT a Backup

    Many NAS server users mistakenly believe that a RAID configuration absolves them of the responsibility to make backups. This is a dangerous error.

    RAID (Redundant Array of Independent Disks) is a technology that provides redundancy and high availability, not data security. It protects against the physical failure of a hard drive. Depending on the configuration (e.g., RAID 1, RAID 5, RAID 6, or their RAID-Z equivalents in TrueNAS), the array can survive the failure of one or even two drives at the same time, allowing them to be replaced without data loss or system downtime.

    However, RAID will not protect you from:

    • Human error: An accidental file deletion is instantly replicated across all drives in the array.
    • Ransomware attack: Encrypted files are immediately synchronised across all drives.
    • Power failure or RAID controller failure: This can lead to the corruption of the entire array.
    • Theft or natural disaster: Losing the entire device means losing all the data.

    Remember: Redundancy protects against hardware failure; a backup protects against data loss.


    Your Backup Hub: UrBackup on TrueNAS Scale

    Creating a robust backup system doesn’t have to involve expensive subscriptions. An ideal solution is to combine the TrueNAS Scale operating system with the UrBackup application.

    • TrueNAS Scale: A powerful, free operating system for building NAS servers. It is based on Linux and offers advanced features such as the ZFS file system and support for containerised applications.
    • UrBackup: An open-source, client-server software for creating backups. It is extremely efficient and flexible, allowing for the backup of both individual files and entire disk images.

    The TrueNAS Protective Shield: ZFS Snapshots

    One of the most powerful features of TrueNAS, resulting from its use of the ZFS file system, is snapshots. A snapshot is an instantly created, read-only image of the entire file system at a specific moment. It works like freezing your data in time.

    Why is this so important in the context of ransomware?

    When ransomware attacks and encrypts files on a network share, these changes affect the “live” version of the data. However, previously taken snapshots remain untouched and unchanged because they are inherently read-only. In the event of an attack, you can restore the entire dataset to its pre-infection state in seconds, completely nullifying the effects of the attack.

    You can configure TrueNAS to automatically create snapshots (e.g., every hour) and retain them for a specified period. This creates an additional, incredibly powerful layer of protection that perfectly complements the backups performed by UrBackup.
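
    The same mechanism can also be exercised manually from the TrueNAS shell with standard ZFS commands (a minimal sketch; the pool name tank and the dataset backups are placeholders):

    # Create a snapshot by hand (pool "tank" and dataset "backups" are examples):
    sudo zfs snapshot tank/backups@before-cleanup

    # List all snapshots of that dataset:
    sudo zfs list -t snapshot -r tank/backups

    # Roll the dataset back to a snapshot (discards all changes made after it):
    sudo zfs rollback -r tank/backups@before-cleanup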

    Advantages and Disadvantages of the Solution

    Advantages:

    • Full control and privacy: Your data is stored on your own hardware.
    • No licence fees: The software is completely free.
    • Exceptional efficiency: Incremental backups save space and time.
    • Flexibility: Supports Windows, macOS, Linux, physical servers, and VPS.
    • Disk image backups: Ability to perform a “bare-metal restore” of an entire system.

    Disadvantages:

    • Requires your own hardware: You need to have a NAS server.
    • Initial configuration: Requires some technical knowledge.
    • Full responsibility: The user is responsible for the security and operation of the server.

    Step-by-Step: Installation and Configuration

    1. Installing UrBackup on TrueNAS Scale

    1. Log in to the TrueNAS web interface.
    2. Navigate to the Apps section.
    3. Search for the UrBackup application and click Install.
    4. In the most important configuration step, you must specify the path where backups will be stored (e.g., /mnt/YourPool/backups).
    5. After the installation is complete, start the application and go to its web interface.

    2. Basic Server Configuration

    In the UrBackup interface, go to Settings. The most important initial options are:

    • Path to backup storage: This should already be set during installation.
    • Backup intervals: Set how often incremental backups (e.g., every few hours) and full backups (e.g., every few weeks) should be performed.
    • Email settings: Configure email notifications to receive reports on the status of your backups.

    3. Installing the Client on Computers

    The process of adding a computer to the backup system consists of two stages: registering it on the server and installing the software on the client machine.

    a) Adding a new client on the server:

    1. In the UrBackup interface, go to the Status tab.
    2. Click the blue + Add new client button.
    3. Select the option Add new internet/active client. This is recommended as it works both on the local network and over the internet (e.g., via Tailscale).
    4. Enter a unique name for the new client (e.g., “Annas-Laptop” or “Web-Server”) and click Add client.

    b) Installing the software on the client machine:

    1. After adding the client on the server, while still on the Status tab, you will see buttons to Download client for Windows and Download client for Linux.
    2. Click the appropriate button and select the name of the client you just added from the drop-down list.
    3. Download the prepared installation file (.exe or .sh). It is already fully configured to connect to your server.
    4. Run the installer on the client computer and follow the instructions.

    After a few minutes, the new client should connect to the server and appear on the list with an “Online” status, ready for its first backup.
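
    On a Linux client, the final step typically looks like this (the installer’s filename is illustrative; use the name of the file you actually downloaded):

    # Make the pre-configured installer executable and run it with root privileges:
    chmod +x urbackup-client-installer.sh
    sudo ./urbackup-client-installer.sh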

    Security Above All: Tailscale Enters the Scene

    How can you securely back up computers located outside your local network? The ideal solution is Tailscale. It creates a secure, private network (a mesh VPN) between all your devices, regardless of their location.

    Why use Tailscale with UrBackup?

    • Simplicity: Installation and configuration take minutes.
    • “Zero Trust” Security: Every connection is end-to-end encrypted.
    • Stable IP Addresses: Each device receives a static IP address from the 100.x.x.x range, which does not change even when the device moves to a different physical location.
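
    Joining a machine to your tailnet takes only a few commands on Debian/Ubuntu, using Tailscale’s official install script:

    # Install Tailscale:
    curl -fsSL https://tailscale.com/install.sh | sh

    # Authenticate this device and join your tailnet:
    sudo tailscale up

    # Display the device's stable Tailscale IPv4 address (from the 100.x.x.x range):
    tailscale ip -4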

    What to do if the IP address changes?

    If for some reason you need to change the IP address of the UrBackup server (e.g., after switching from another VPN to Tailscale), the procedure is simple:

    1. Update the address on the UrBackup server: In Settings -> Internet/Active Clients, enter the new, correct server address (e.g., urbackup://100.x.x.x).
    2. Download the updated installer: On the Status tab, click Download client, select the offline client from the list, and download a new installation script for it.
    3. Run the installer on the client: Running the new installer will automatically update the configuration on the client machine.

    Managing and Monitoring Backups

    The UrBackup interface provides all the necessary tools to supervise the system.

    • Status: The main dashboard showing a list of all clients, their online/offline status, and the status of their last backup.
    • Activities: A live view of currently running operations, such as file indexing or data transfer.
    • Backups: A list of all completed backups for each client, with the ability to browse and restore files.
    • Logs: A detailed record of all events, errors, and warnings—an invaluable tool for diagnosing problems.
    • Statistics: Charts and tables showing disk space usage by individual clients over time.

    Backing Up Databases: Do It Right!

    Never back up a database by simply copying its files from the disk while the service is running! This risks creating an inconsistent copy that will be useless. The correct method is to perform a “dump” using tools like mysqldump or mariadb-dump.

    Method 1: All Databases to a Single File

    A simple approach, ideal for small environments.

    Command: mysqldump --all-databases -u [user] -p[password] > /path/to/backup/all_databases.sql
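
    To load such a dump back into the server, feed the file to the mysql client (adjust the user and path to your environment):

    # Restore all databases from the combined dump:
    mysql -u root -p < /path/to/backup/all_databases.sql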

    Method 2: Each Database to a Separate File (Recommended)

    A more flexible solution. The script below will automatically save each database to a separate, compressed file. It should be run periodically (e.g., via cron) just before the scheduled backup by UrBackup.

    #!/bin/bash
    
    # Abort a pipeline with a non-zero exit status if any command in it fails,
    # so a mysqldump error is not masked by a successful gzip:
    set -o pipefail
    
    # Configuration
    BACKUP_DIR="/var/backups/mysql"
    DB_USER="root"
    DB_PASS="YourSuperSecretPassword"
    
    # Check if user and password are provided
    if [ -z "$DB_USER" ] || [ -z "$DB_PASS" ]; then
        echo "Error: DB_USER or DB_PASS variables are not set in the script."
        exit 1
    fi
    
    # Create backup directory if it doesn't exist
    mkdir -p "$BACKUP_DIR"
    
    # Get a list of all databases, excluding system databases
    DATABASES=$(mysql -u "$DB_USER" -p"$DB_PASS" -e "SHOW DATABASES;" | tr -d " " | grep -vE "(Database|information_schema|performance_schema|mysql|sys)")
    
    # Loop through each database
    for db in $DATABASES; do
        echo "Dumping database: $db"
        # Perform the dump and compress on the fly
        mysqldump -u "$DB_USER" -p"$DB_PASS" --databases "$db" | gzip > "$BACKUP_DIR/$db-$(date +%Y-%m-%d).sql.gz"
        if [ $? -eq 0 ]; then
            echo "Dump of database $db completed successfully."
        else
            echo "Error during dump of database $db."
        fi
    done
    
    # Optional: Remove old backups (older than 7 days)
    find "$BACKUP_DIR" -type f -name "*.sql.gz" -mtime +7 -exec rm {} \;
    
    echo "Backup process for all databases has finished."
    

    Your Digital Fortress

    Having a solid, automated backup strategy is not a luxury but an absolute necessity. Combining the power of TrueNAS Scale with its ZFS snapshots, the flexibility of UrBackup, and the security of Tailscale allows you to build a multi-layered, enterprise-class defence system at zero software cost.

    It is an investment of time that provides priceless peace of mind. Remember, however, that no system is entirely maintenance-free. Regularly monitoring logs, checking email reports, and, most importantly, periodically performing test restores of your files are the final, crucial elements that turn a good backup system into an impregnable fortress protecting your most valuable asset—your data.

  • Your Private Cloud: Reclaiming Control of Your Data with Nextcloud and TrueNAS


    In an era where our digital lives are scattered across the servers of tech giants, a growing number of people are seeking digital sovereignty. They want to decide where their most valuable data is stored, from family photos to confidential documents. The answer to this need is Nextcloud, a powerful open-source platform that allows you to create your own fully functional equivalent of Google Drive or Dropbox. When combined with a robust data storage system like TrueNAS, it becomes the foundation of a truly private cloud. Let’s walk through the process, from the initial decision to synchronising the final file.

    The Foundation: A Solid Installation

    Choosing a platform to host Nextcloud is crucial. TrueNAS SCALE, based on Linux, offers a solid environment for running applications in isolated containers, ensuring both security and stability. The installation process, though automated, presents the administrator with several important questions that will define the capabilities of the future cloud.

    The first step is to enhance the basic installation with additional packages. These are not random add-ons, but tools that will breathe life into your stored files:

    • ffmpeg: A digital translator for video and audio files. Without it, your library of holiday films would be just a collection of silent icons. With it, Nextcloud generates thumbnails and previews, allowing you to quickly see the content.
    • libreoffice: Enables the generation of previews for office documents. Essential for glancing at the contents of a .docx or .xlsx file without having to download it.
    • ocrmypdf & Tesseract: A duo that transforms static scans into intelligent, searchable documents. After adding a language pack—in our case, the crucial Polish one—the system automatically recognises text in PDF files, turning Nextcloud into a powerful document archive.
    • smbclient: A bridge to the Windows world. It allows you to connect existing network shares to Nextcloud, integrating the cloud with the rest of your home infrastructure.

    Each of these choices is an investment in future functionality. It is equally important to ensure the system runs like a well-oiled machine. This is where the background job mechanism, known as Cron, comes into play. Setting it to run cyclically every 5 minutes (*/5 * * * *) is an industry standard, guaranteeing that notifications arrive on time and temporary files are regularly cleared.
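
    On a classic self-hosted installation, the same schedule corresponds to a single crontab entry run as the web server user (the path is illustrative; the TrueNAS app manages this internally):

    # Run Nextcloud background jobs every 5 minutes:
    */5 * * * * php -f /var/www/nextcloud/cron.php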

    Configuration: The Digital Fortress and Its Address

    After installing the basic components, it’s time to configure the network and data storage. This is where we decide how our cloud will be visible to the world and where our data will physically reside.

    For most home applications, the default network settings are sufficient. However, the key element is security. Accessing the cloud via the unsecured http:// protocol is like leaving the door to a vault wide open. The solution is to enable HTTPS encryption by assigning an SSL certificate. TrueNAS offers simple tools for generating both self-signed certificates (ideal for testing on a local network) and fully trusted certificates from Let’s Encrypt, if you own a domain.

    SSL Configuration for Nextcloud on TrueNAS using Cloudflare and Nginx Proxy Manager

    Securing your Nextcloud instance with an SSL certificate is absolutely crucial. Not only does it protect your data in transit, but it also builds trust and enables the use of many client applications that require an encrypted HTTPS connection. In this guide, we will show you how to easily configure a completely free and automatically renewing SSL certificate for your domain using the powerful combination of Cloudflare and Nginx Proxy Manager.

    Initial Assumptions

    Before we begin, let’s ensure you have the following ready:

    • A working instance of Nextcloud on TrueNAS, accessible via a local IP address and port (e.g., 192.168.1.50:30027).
    • The Nginx Proxy Manager application installed and running on TrueNAS.
    • Your own registered domain (e.g., mydomain.com).
    • A free account on Cloudflare, with your domain connected to it.

    Step 1: DNS Configuration in Cloudflare

    The first step is to point the subdomain you want to use for Nextcloud (e.g., cloud.mydomain.com) to your home network’s public IP address.

    1. Log in to your Cloudflare dashboard and select your domain.
    2. Navigate to the DNS -> Records tab.
    3. Click Add record and create a new A record:
    • Type: A
    • Name: Enter the subdomain name, e.g., cloud.
    • IPv4 address: Enter your network’s public IP address.
    • Proxy status: Turn off the orange cloud (set to DNS only). This is crucial while generating the certificate so that Nginx Proxy Manager can verify the domain without issue. After a successful configuration, we can re-enable the Cloudflare proxy.

    Step 2: Creating a Proxy Host in Nginx Proxy Manager

    Now that the domain points to our server, it’s time to configure Nginx Proxy Manager to manage the traffic and the SSL certificate.

    1. Log in to the Nginx Proxy Manager web interface.
    2. Go to Hosts -> Proxy Hosts and click Add Proxy Host.
    3. Fill in the details in the Details tab:
    • Domain Names: Enter the full name of your subdomain, e.g., cloud.mydomain.com.
    • Scheme: http
    • Forward Hostname/IP: Enter the local IP address of your Nextcloud application, e.g., 192.168.1.50.
    • Forward Port: Enter the port your Nextcloud is listening on, e.g., 30027.
    • Tick the Block Common Exploits option to increase security.
    4. Navigate to the SSL tab:
    • From the SSL Certificate dropdown list, select “Request a new SSL Certificate”.
    • Enable the Force SSL option. This will automatically redirect all traffic from HTTP to secure HTTPS.
    • Enable HTTP/2 Support for better performance.
    • Accept the Let’s Encrypt Terms of Service by ticking “I Agree to the Let’s Encrypt Terms of Service”.
    5. Click the Save button.

    At this point, Nginx Proxy Manager will connect to the Let’s Encrypt servers, automatically perform the verification of your domain, and if everything proceeds successfully, it will download and install the SSL certificate.

    Step 3: Verification and Final Steps

    After a few moments, you should be able to access your domain https://cloud.mydomain.com in your browser. If everything has been configured correctly, you will see the Nextcloud login page with a green padlock in the address bar, indicating that your connection is fully encrypted.

    The final step is to return to your Cloudflare dashboard (Step 1) and enable the orange cloud (Proxied) for your DNS record. This will give you an additional layer of protection and performance offered by the Cloudflare CDN.

    Congratulations! Your Nextcloud instance is now secure and accessible from anywhere in the world under your own professional-looking domain.

    The next pillar is data storage. The default option, ixVolume, allows the TrueNAS system to automatically manage dedicated spaces for application files, user data, and the database. This approach ensures order and security. The temptation to mount an entire data pool as “additional storage” is great, but it is a path to nowhere—it leads to organisational chaos and potential security vulnerabilities. A much better practice is to mount only specific, existing datasets, such as media or music.

    Even with the best configuration, an obstacle may appear. The most common one is the “Access through untrusted domain” message. This is not an error, but a testament to Nextcloud’s commitment to security. The system demands that we explicitly declare which addresses (IPs or domains) we will use to connect to it. The solution requires some detective work: finding the config.php file and adding the trusted addresses to it. In newer versions of TrueNAS, this file is often hidden in a non-standard location, such as /mnt/.ix-apps/, which requires patience and familiarity with the system console.
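
    For reference, the relevant fragment of config.php looks roughly like this (the addresses and array indices are examples; keep the existing entries and append your own):

    'trusted_domains' =>
      array (
        0 => 'localhost',
        1 => '192.168.1.50:30027',
        2 => 'cloud.mydomain.com',
      ),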


    The Gateway to the Cloud: Synchronisation at Your Fingertips

    Once the server is ready, it’s time to open the doors to it from our devices. Nextcloud offers clients for all popular platforms: from desktop computers to smartphones. In the world of Linux, we face a choice: download the universal AppImage file directly from the creators or use the modern Flatpak package system.


    Although AppImage offers simplicity and portability, Flatpak wins in daily use. It provides full system integration, automatic updates, and, most importantly, runs in an isolated environment (sandbox), which significantly increases the level of security.

    The client authorisation process is a model of the modern approach. Instead of entering a password directly into the application, we are redirected to a browser, where we log in on our own trusted site. After a successful login, the server sends a special token back to the application, which authorises the connection. It’s simple, fast, and secure.

    The final step is to decide what to synchronise. We can choose to fully synchronise all data or, if disk space is limited, select only the most important folders. After clicking “Connect,” the magic happens—files from the server begin to flow to our local drive, and an icon in the system tray informs us of the progress.

    Configuring the Nextcloud Desktop Client

    After installing the Nextcloud client application on your computer, the next step is to connect it to your account on the server. This process is simple and secure, as it uses your web browser for authorisation, meaning your password is not entered directly into the application.

    Step 1: Initiating the Connection and Authorising in the Browser

    1. When you launch the client for the first time, you will be prompted to enter the address of your Nextcloud server (e.g., https://cloud.mydomain.com).
    2. After entering it, the application will automatically open a new tab in your default web browser.
    3. You will see a screen asking you to connect to your account. This is a security mechanism that informs you that an application (in this case, Desktop Client – Linux) is trying to access your account.
    4. Click the blue “Log in” button to continue.
    5. You will then be redirected to the standard Nextcloud login window. Enter your username (or email) and password, just as you do when logging in through the website.
    6. After a successful login, Nextcloud will confirm that the authorisation was successful and the client has been successfully connected to your account.
    7. You can now close this browser window and return to the client application.

    Step 2: Local Synchronisation Settings

    The desktop application will now display the final configuration screen, where you can define how files should be synchronised. Pay attention to the following options:

    • Remote Account: Ensure the account name and server address are correct.
    • Local Folder: By default, the client will create a Nextcloud folder in your home directory. You can choose a different location by clicking “Choose different folder”.
    • Sync Options:
    • Synchronize everything from server: The default and recommended option, which will download all files and folders from the server.
    • Choose what to sync: Allows for selective synchronisation. You can choose only the folders you want to have on your computer.

    After making your selection, click the “Connect” button.

    Step 3: Completion and Working with the Client

    That’s it! Your client is now configured. The initial synchronisation process will begin, and its progress will be visible in the main application window and via the icon in the system tray.

    In the main application window, you can now view recent activity, server notifications, and manually force a synchronisation by clicking “Sync now”. From now on, any file you add or modify in the local Nextcloud folder will be automatically synchronised with the server and other connected devices.

    More Than Just Files: An Ecosystem of Applications

    The true power of Nextcloud lies not just in file synchronisation. It lies in its ecosystem, which allows you to transform a simple data storage into a comprehensive platform for work and communication. The built-in app store offers hundreds of free extensions. Here are a few worth installing right from the start:

    • Nextcloud Office: Thanks to integration with Collabora Online or ONLYOFFICE, Nextcloud gains the ability to edit text documents, spreadsheets, and presentations in real-time, becoming a viable alternative to Google Docs or Microsoft 365.
    • Deck: A simple but powerful project management tool in the style of Kanban boards. Ideal for organising personal tasks and teamwork.
    • Calendar & Contacts: A fully-fledged calendar and address book with the ability to synchronise via standard CalDAV and CardDAV protocols.
    • Photos: Much more than a simple photo viewer. The application can automatically categorise images based on recognised objects, create albums, and display photos on a map.
    • Notes: A minimalist application for creating and synchronising notes in Markdown format.

    Installing and configuring your own Nextcloud is a journey that requires attention and making a few key decisions. However, the reward is priceless: full control over your own data, independence from external providers, and a platform that you can shape and expand as you wish. This is not just technology—it is a manifesto of digital freedom.

  • WireGuard on TrueNAS Scale: How to Build a Secure and Efficient Bridge Between Your Local Network and VPS Servers


    In today’s digital world, where remote work and distributed infrastructure are becoming the norm, secure access to network resources is not so much a luxury as an absolute necessity. Virtual Private Networks (VPNs) have long been the answer to these needs, yet traditional solutions can be complicated and slow. Enter WireGuard—a modern VPN protocol that is revolutionising the way we think about secure tunnels. Combined with the power of the TrueNAS Scale system and the simplicity of the WG-Easy application, we can create an exceptionally efficient and easy-to-manage solution.

    This article is a comprehensive guide that will walk you through the process of configuring a secure WireGuard VPN tunnel step by step. We will connect a TrueNAS Scale server, running on your home or company network, with a fleet of public VPS servers. Our goal is to create intelligent “split-tunnel” communication, ensuring that only necessary traffic is routed through the VPN, thereby maintaining maximum internet connection performance.

    What Is WireGuard and Why Is It a Game-Changer?

    Before we delve into the technical configuration, it’s worth understanding why WireGuard is gaining such immense popularity. Designed from the ground up with simplicity and performance in mind, it represents a breath of fresh air compared to older, more cumbersome protocols like OpenVPN or IPsec.

    The main advantages of WireGuard include:

    • Minimalism and Simplicity: The WireGuard source code consists of just a few thousand lines, in contrast to the hundreds of thousands for its competitors. This not only facilitates security audits but also significantly reduces the potential attack surface.
    • Unmatched Performance: By operating at the kernel level of the operating system and utilising modern cryptography, WireGuard offers significantly higher transfer speeds and lower latency. In practice, this means smoother access to files and services.
    • Modern Cryptography: WireGuard uses the latest, proven cryptographic algorithms such as ChaCha20, Poly1305, Curve25519, BLAKE2s, and SipHash24, ensuring the highest level of security.
    • Ease of Configuration: The model, based on the exchange of public keys similar to SSH, is far more intuitive than the complicated certificate management found in other VPN systems.

    The Power of TrueNAS Scale and the Convenience of WG-Easy

    TrueNAS Scale is a modern, free operating system for building network-attached storage (NAS) servers, based on the solid foundations of Linux. Its greatest advantage is its support for containerised applications (Docker/Kubernetes), which allows for easy expansion of its functionality. Running a WireGuard server directly on a device that is already operating 24/7 and storing our data is an extremely energy- and cost-effective solution.

    This is where the WG-Easy application comes in—a graphical user interface that transforms the process of managing a WireGuard server from editing configuration files in a terminal to simple clicks in a web browser. Thanks to WG-Easy, we can create profiles for new devices in moments, generate their configurations, and monitor the status of connections.

    Step 1: Designing the Network Architecture – The Foundation of Stability

    Before we launch any software, we must create a solid plan. Correctly designing the topology and IP addressing is the key to a stable and secure solution.

    The “Hub-and-Spoke” Model: Your Command Centre

    Our network will operate based on a “hub-and-spoke” model.

    • Hub: The central point (server) of our network will be TrueNAS Scale. All other devices will connect to it.
    • Spokes: Our VPS servers will be the clients (peers), or the “spokes” connected to the central hub.

    In this model, all communication flows through the TrueNAS server by default. This means that for one VPS to communicate with another, the traffic must pass through the central hub.

    To avoid chaos, we will create a dedicated subnet for our virtual network. In this guide, we will use 10.8.0.0/24.

    Device Role      | Host Identifier | VPN IP Address
    Server (Hub)     | TrueNAS-Scale   | 10.8.0.1
    Client 1 (Spoke) | VPS1            | 10.8.0.2
    Client 2 (Spoke) | VPS2            | 10.8.0.3
    Client 3 (Spoke) | VPS3            | 10.8.0.4

    The Fundamental Rule: One Client, One Identity

    A tempting thought arises: is it possible to create a single configuration file for all VPS servers? Absolutely not. This would be a breach of a fundamental WireGuard security principle. Identity in this network is not based on a username and password, but on a unique pair of cryptographic keys. Using the same configuration on multiple machines is like giving the same house key to many different people—the server would be unable to distinguish between them, which would lead to routing chaos and a security breakdown.

    Step 2: Prerequisite – Opening the Gateway to the World

    The most common pitfall when configuring a home server is forgetting about the router. Your TrueNAS server is on a local area network (LAN) and has a private IP address (e.g., 192.168.0.13), which makes it invisible from the internet. For the VPS servers to connect to it, you must configure port forwarding on your router.

    You need to create a rule that directs packets arriving from the internet on a specific port straight to your TrueNAS server.

    • Protocol: UDP (WireGuard uses UDP exclusively)
    • External Port: 51820 (the standard WireGuard port)
    • Internal IP Address: The IP address of your TrueNAS server on the LAN
    • Internal Port: 51820

    Without this rule, your VPN server will never work.

    Step 3: Hub Configuration – Launching the Server on TrueNAS

    Launch the WG-Easy application on your TrueNAS server. The configuration process boils down to creating a separate profile for each client (each VPS server).

    Click “New” and fill in the form for the first VPS, paying special attention to the fields below:

    Field Name in WG-Easy | Example Value (for VPS1) | Explanation
    Name | VPS1-Public | A readable label to help you identify the client.
    IPv4 Address | 10.8.0.2 | A unique IP address for this VPS within the VPN, according to our plan.
    Allowed IPs | 192.168.0.0/24, 10.8.0.0/24 | The heart of the “split-tunnel” configuration. It tells the client (VPS) to send through the tunnel only traffic destined for your local network (LAN) and for other devices on the VPN. All other traffic (e.g., to Google) takes the standard route.
    Server Allowed IPs | 10.8.0.2/32 | A critical security setting. It informs the TrueNAS server to accept packets from this client only from its assigned IP address. The /32 mask prevents IP spoofing.
    Persistent Keepalive | 25 | An instruction for the client to send a small “keep-alive” packet every 25 seconds, which prevents the connection from being dropped by routers and firewalls along the way.

    After filling in the fields, save the configuration. Repeat this process for each subsequent VPS server, remembering to assign them consecutive IP addresses (10.8.0.3, 10.8.0.4, etc.).

    Once you save the profile, WG-Easy will generate a .conf configuration file for you. Treat this file like a password—it contains the client’s private key! Download it and prepare to upload it to the VPS server.
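
    For orientation, a client file generated with the settings above looks roughly like this (keys are replaced with placeholders):

    [Interface]
    PrivateKey = <VPS1-private-key>
    Address = 10.8.0.2/24

    [Peer]
    PublicKey = <server-public-key>
    PresharedKey = <optional-preshared-key>
    AllowedIPs = 192.168.0.0/24, 10.8.0.0/24
    Endpoint = <your-public-address>:51820
    PersistentKeepalive = 25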

    Step 4: Spoke Configuration – Activating Clients on the VPS Servers

    Now it’s time to bring our “spokes” to life. Assuming your VPS servers are running Linux (e.g., Debian/Ubuntu), the process is very straightforward.

    1. Install WireGuard tools:
      sudo apt update && sudo apt install wireguard-tools -y
    2. Upload and secure the configuration file: Copy the previously downloaded wg0.conf file to the /etc/wireguard/ directory on the VPS server. Then, change its permissions so that only the administrator can read it:
      # On the VPS server:
      sudo mv /path/to/your/wg0.conf /etc/wireguard/wg0.conf
      sudo chmod 600 /etc/wireguard/wg0.conf
    3. Start the tunnel: Use a simple command to activate the connection. The interface name (wg0) is derived from the configuration file name.
      sudo wg-quick up wg0
    4. Ensure automatic start-up: To have the VPN tunnel start automatically after every server reboot, enable the corresponding system service:
      sudo systemctl enable wg-quick@wg0.service

    Repeat these steps on each VPS server, using the unique configuration file generated for each one.

    Step 5: Verification and Diagnostics – Checking if Everything Works

    After completing the configuration, it’s time for the final test.

    Checking the Connection Status

    On both the TrueNAS server and each VPS, execute the command:

    sudo wg show

    Look for two key pieces of information in the output:

    • latest handshake: This should show a recent time (e.g., “a few seconds ago”). This is proof that the client and server have successfully connected.
    • transfer: received and sent values greater than zero indicate that data is actually flowing through the tunnel.

    The Final Test: Validating the “Split-Tunnel”

    This is the test that will confirm we have achieved our main goal. Log in to one of the VPS servers and perform the following tests:

    1. Test connectivity within the VPN: Try to ping the TrueNAS server using its VPN and LAN addresses.
      ping 10.8.0.1       # VPN address of the TrueNAS server
      ping 192.168.0.13  # LAN address of the TrueNAS server (use your own)

      If you receive replies, it means that traffic to your local network is being correctly routed through the tunnel.
    2. Test the path to the internet: Use the traceroute tool to check the route packets take to a public website.
      traceroute google.com

      The result of this command is crucial. The first “hop” on the route must be the default gateway address of your VPS hosting provider, not the address of your VPN server (10.8.0.1). If this is the case—congratulations! Your “split-tunnel” configuration is working perfectly.

    Troubleshooting Common Problems

    • No “handshake”: The most common cause is a connection issue. Double-check the UDP port 51820 forwarding configuration on your router, as well as any firewalls in the path (on TrueNAS, on the VPS, and in your cloud provider’s panel).
    • There is a “handshake”, but ping doesn’t work: The problem usually lies in the Allowed IPs configuration. Ensure the server has the correct client VPN address entered (e.g., 10.8.0.2/32), and the client has the networks it’s trying to reach in its configuration (e.g., 192.168.0.0/24).
    • All traffic is going through the VPN (full-tunnel): This means that in the client’s configuration file, under the [Peer] section, the Allowed IPs field is set to 0.0.0.0/0. Correct this setting in the WG-Easy interface, download the new configuration file, and update it on the client.
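
    Two quick checks that often help when there is no handshake (assuming the ufw firewall and the interface name wg0):

    # On the machine that accepts connections, allow WireGuard's UDP port through ufw:
    sudo ufw allow 51820/udp

    # Show the time of the last handshake for every peer (a value of 0 means
    # the peer has never connected):
    sudo wg show wg0 latest-handshakes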

    Creating your own secure and efficient VPN server based on TrueNAS Scale and WireGuard is well within reach. It is a powerful solution that not only enhances security but also gives you complete control over your network infrastructure.

  • Nginx Proxy Manager on TrueNAS Scale: Installation, Configuration, and Troubleshooting


    Section 1: Introduction: Simplifying Home Lab Access with Nginx Proxy Manager on TrueNAS Scale

    Modern home labs have evolved from simple setups into complex ecosystems running dozens of services, from media servers like Plex or Jellyfin, to home automation systems such as Home Assistant, to personal clouds and password managers. Managing access to each of these services, each operating on a unique combination of an IP address and port number, quickly becomes impractical, inconvenient, and, most importantly, insecure. Exposing multiple ports to the outside world increases the attack surface and complicates maintaining a consistent security policy.

    The solution to this problem, employed for years in corporate environments, is the implementation of a central gateway or a single point of entry for all incoming traffic. In networking terminology, this role is fulfilled by a reverse proxy. This is an intermediary server that receives all requests from clients and then, based on the domain name, directs them to the appropriate service running on the internal network. Such an architecture not only simplifies access, allowing the use of easy-to-remember addresses (e.g., jellyfin.mydomain.co.uk instead of 192.168.1.50:8096), but also forms a key component of a security strategy.

    In this context, two technologies are gaining particular popularity among enthusiasts: TrueNAS Scale and Nginx Proxy Manager. TrueNAS Scale, based on the Debian Linux system, has transformed the traditional NAS (Network Attached Storage) device into a powerful, hyper-converged infrastructure (HCI) platform, capable of natively running containerised applications and virtual machines. In turn, Nginx Proxy Manager (NPM) is a tool that democratises reverse proxy technology. It provides a user-friendly, graphical interface for the powerful but complex-to-configure Nginx server, making advanced features, such as automatic SSL certificate management, accessible without needing to edit configuration files from the command line.

    This article provides a comprehensive overview of the process of deploying Nginx Proxy Manager on the TrueNAS Scale platform. The aim is not only to present “how-to” instructions but, above all, to explain why each step is necessary. The analysis will begin with an in-depth discussion of both technologies and their interactions. Then, a detailed installation process will be carried out, considering platform-specific challenges and their solutions, including the well-known issue of the application getting stuck in the “Deploying” state. Subsequently, using the practical example of a Jellyfin media server, the configuration of a proxy host will be demonstrated, along with advanced security options. The report will conclude with a summary of the benefits and suggest further steps to fully leverage the potential of this powerful duo.

    Nginx Proxy Manager Login Page

    Section 2: Tool Analysis: Nginx Proxy Manager and the TrueNAS Scale Application Ecosystem

    Understanding the fundamental principles of how Nginx Proxy Manager works and the architecture in which it is deployed—the TrueNAS Scale application system—is crucial for successful installation, effective configuration, and, most importantly, efficient troubleshooting. These two components, though designed to work together, each have their own unique characteristics, the ignorance of which is the most common cause of failure.

    Subsection 2.1: Understanding Nginx Proxy Manager (NPM)

    At the core of NPM’s functionality lies the concept of a reverse proxy, which is fundamental to modern network architecture. Understanding how it works allows one to appreciate the value that NPM brings.

    Definition and Functions of a Reverse Proxy

    A reverse proxy is a server that acts as an intermediary on the server side. Unlike a traditional (forward) proxy, which acts on behalf of the client, a reverse proxy acts on behalf of the server (or a group of servers). It receives requests from clients on the internet and forwards them to the appropriate servers on the local network that actually store the content. To an external client, the reverse proxy is the only visible point of contact; the internal network structure remains hidden.

    The key benefits of this solution are:

    • Security: Hiding the internal network topology and the actual IP addresses of application servers significantly hinders direct attacks on these services.
    • Centralised SSL/TLS Management (SSL Termination): Instead of configuring SSL certificates on each of a dozen application servers, you can manage them in one place—on the reverse proxy. Traffic encryption and decryption (SSL Termination) occurs at the proxy server, which offloads the backend servers.
    • Load Balancing: In more advanced scenarios, a reverse proxy can distribute traffic among multiple identical application servers, ensuring high availability and service scalability.
    • Simplified Access: It allows access to multiple services through standard ports 80 (HTTP) and 443 (HTTPS) using different subdomains, eliminating the need to remember and open multiple ports.

    NPM as a Management Layer

    It should be emphasised that Nginx Proxy Manager is not a new web server competing with Nginx. It is a management application, built on the open-source Nginx, which serves as a graphical user interface (GUI) for its reverse proxy functions. Instead of manually editing complex Nginx configuration files, the user can perform the same operations with a few clicks in an intuitive web interface.

    The main features that have contributed to NPM’s popularity are:

    • Graphical User Interface: Based on the Tabler framework, the interface is clear and easy to use, which drastically lowers the entry barrier for users who are not Nginx experts.
    • SSL Automation: Built-in integration with Let’s Encrypt allows for the automatic, free generation of SSL certificates and their periodic renewal. This is one of the most important and appreciated features.
    • Docker-based Deployment: NPM is distributed as a ready-to-use Docker image, which makes its installation on any platform that supports containers extremely simple.
    • Access Management: The tool offers features for creating Access Control Lists (ACLs) and managing users with different permission levels, allowing for granular control over access to individual services.

    Comparison: NPM vs. Traditional Nginx

    The choice between Nginx Proxy Manager and manual Nginx configuration is a classic trade-off between simplicity and flexibility. The table below outlines the key differences between these two approaches.

    Aspect | Nginx Proxy Manager | Traditional Nginx
    Management Interface | Graphical user interface (GUI) simplifying configuration. | Command-line interface (CLI) and editing of text files; requires technical knowledge.
    SSL Configuration | Fully automated generation and renewal of Let’s Encrypt certificates. | Manual configuration using tools like Certbot; greater control.
    Learning Curve | Low; ideal for beginners and hobbyists. | Steep; requires understanding of Nginx directives and web server architecture.
    Flexibility | Limited to features available in the GUI; advanced rules can be difficult to implement. | Full flexibility and the ability to create highly customised, complex configurations.
    Scalability / Target User | Ideal for home labs and small-to-medium deployments: hobbyists, small business owners, home lab users. | A better choice for large-scale, high-load corporate environments: systems administrators, DevOps engineers, developers.

    This table clearly shows that NPM is a tool strategically tailored to the needs of its target audience—home lab enthusiasts. These users consciously sacrifice some advanced flexibility for the significant benefits of ease of use and speed of deployment.

    Nginx Proxy Manager Dashboard

    Subsection 2.2: Application Architecture in TrueNAS Scale

    To understand why installing NPM on TrueNAS Scale can encounter specific problems, it is necessary to know how this platform manages applications. It is not a typical Docker environment.

    Foundations: Linux and Hyper-convergence

    A key architectural change in TrueNAS Scale compared to its predecessor, TrueNAS CORE, was the switch from the FreeBSD operating system to Debian, a Linux distribution. This decision opened the door to native support for technologies that have dominated the cloud and containerisation world, primarily Docker containers and KVM-based virtualisation. As a result, TrueNAS Scale became a hyper-converged platform, combining storage, computing, and virtualisation functions.

    The Application System

    Applications are distributed through Catalogs, which function as repositories. These catalogs are further divided into so-called “trains,” which define the stability and source of the applications:

    • stable: The default train for official, iXsystems-tested applications.
    • enterprise: Applications verified for business use.
    • community: Applications created and maintained by the community. This is where Nginx Proxy Manager is located by default.
    • test: Applications in the development phase.

    NPM’s inclusion in the community catalog means that while it is easily accessible, its technical support relies on the community, not directly on the manufacturer of TrueNAS.

    Storage Management for Applications

    Before any application can be installed, TrueNAS Scale requires the user to specify a ZFS pool that will be dedicated to storing application data. When an application is installed, its data (configuration, databases, etc.) must be saved somewhere persistently. TrueNAS Scale offers several options here, but the default and recommended for simplicity is ixVolume.

    ixVolume is a special type of volume that automatically creates a dedicated, system-managed ZFS dataset within the selected application pool. This dataset is isolated, and the system assigns it very specific permissions. By default, the owner of this dataset becomes the system user apps with a user ID (UID) of 568 and a group ID (GID) of 568. The running application container also operates with the permissions of this very user.

    This is the crux of the problem. The standard Docker image for Nginx Proxy Manager contains startup scripts (e.g., those from Certbot, the certificate handling tool) that, on first run, attempt to change the owner (chown) of data directories, such as /data or /etc/letsencrypt, to ensure they have the correct permissions. When the NPM container starts within the sandboxed TrueNAS application environment, its startup script, running as the unprivileged apps user (UID 568), tries to execute the chown operation on the ixVolume. This operation fails because the apps user is not the owner of the parent directories and does not have permission to change the owner of files on a volume managed by K3s. This permission error causes the container’s startup script to halt, and the container itself never reaches the “running” state, which manifests in the TrueNAS Scale interface as an endless “Deploying” status.
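
    The failure is easy to picture: the container’s startup script effectively runs something like the session below, and an unprivileged user may not change the ownership of files it does not control (an illustrative sketch, not the script’s literal contents):

    # Running inside the container as the unprivileged user apps (UID 568):
    chown -R 0:0 /data /etc/letsencrypt
    # chown: changing ownership of '/data': Operation not permitted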

    Section 3: Installing and Configuring Nginx Proxy Manager on TrueNAS Scale

    The process of installing Nginx Proxy Manager on TrueNAS Scale is straightforward, provided that attention is paid to a few key configuration parameters that are often a source of problems. The following step-by-step instructions will guide you through this process, highlighting the critical decisions that need to be made.

    Step 1: Preparing TrueNAS Scale

    Before proceeding with the installation of any application, you must ensure that the application service in TrueNAS Scale is configured correctly.

    1. Log in to the TrueNAS Scale web interface.
    2. Navigate to the Apps section.
    3. If the service is not yet configured, the system will prompt you to select a ZFS pool to be used for storing all application data. Select the appropriate pool and save the settings. After a moment, the service status should change to “Running”.

    Step 2: Finding the Application

    Nginx Proxy Manager is available in the official community catalog.

    1. In the Apps section, go to the Discover tab.
    2. In the search box, type nginx-proxy-manager.
    3. The application should appear in the results. Ensure it comes from the community catalog.
    4. Click the Install button to proceed to the configuration screen.

    Step 3: Key Configuration Parameters

    The installation screen presents many options. Most of them can be left with their default values, but a few sections require special attention.

    Application Name

    In the Application Name field, enter a name for the installation, for example, nginx-proxy-manager. This name will be used to identify the application in the system.

    Network Configuration

    This is the most important and most problematic stage of the configuration. By default, the TrueNAS Scale management interface uses the standard web ports: 80 for HTTP and 443 for HTTPS. Since Nginx Proxy Manager, to act as a gateway for all web traffic, should also listen on these ports, a direct conflict arises. There are two main strategies to solve this problem, each with its own set of trade-offs.

    • Strategy A (Recommended): Change TrueNAS Scale Ports
      This method is considered the “cleanest” from NPM’s perspective because it allows it to operate as it was designed.
    1. Cancel the NPM installation and go to System Settings -> General. In the GUI SSL/TLS Certificate section, change the Web Interface HTTP Port to a custom one, e.g., 880, and the Web Interface HTTPS Port to, e.g., 8443.
    2. Save the changes. From this point on, access to the TrueNAS Scale interface will be available at http://<truenas-ip-address>:880 or https://<truenas-ip-address>:8443.
    3. Return to the NPM installation and in the Network Configuration section, assign the HTTP Port to 80 and the HTTPS Port to 443.
    • Advantages: NPM runs on standard ports, which simplifies configuration and eliminates the need for port translation on the router.
    • Disadvantages: It changes the fundamental way of accessing the NAS itself. In rare cases, as noted on forums, this can cause unforeseen side effects, such as problems with SSH connections between TrueNAS systems.
    • Strategy B (Alternative): Use High Ports for NPM
      This method is less invasive to the TrueNAS configuration itself but shifts the complexity to the router level.
    1. In the NPM configuration, under the Network Configuration section, leave the TrueNAS ports unchanged and assign high, unused ports to NPM, e.g., 30080 for HTTP and 30443 for HTTPS. TrueNAS Scale reserves ports below 9000 for the system, so you should choose values above this threshold.
    2. After installing NPM, configure port forwarding on your edge router so that incoming internet traffic on port 80 is directed to port 30080 of the TrueNAS IP address, and traffic from port 443 is directed to port 30443.
    • Advantages: The TrueNAS Scale configuration remains untouched.
    • Disadvantages: Requires additional configuration on the router. Each proxied service will require explicit forwarding, which can be confusing.

    The ideal solution would be to assign a dedicated IP address on the local network to NPM (e.g., using macvlan technology), which would completely eliminate the port conflict. Unfortunately, the graphical interface of the application installer in TrueNAS Scale does not provide this option in a simple way.

    Storage Configuration

    To ensure that the NPM configuration, including created proxy hosts and SSL certificates, survives updates or application redeployments, you must configure persistent storage.

    1. In the Storage Configuration section, configure two volumes.
    2. For Nginx Proxy Manager Data Storage (path /data) and Nginx Proxy Manager Certs Storage (path /etc/letsencrypt), select the ixVolume type.
    3. Leave these defaults in place; TrueNAS will then create dedicated ZFS datasets for the configuration and certificates, independent of the application container itself.

    Step 4: First Run and Securing the Application

    After configuring the above parameters (and possibly applying the fixes from Section 4), click Install. After a few moments, the application should transition to the “Running” state.

    1. Access to the NPM interface is available at http://<truenas-ip-address>:PORT, where PORT is the WebUI port configured during installation (defaults to 81 inside the container but is mapped to a higher port, e.g., 30020, if the TrueNAS ports were not changed).
    2. The default login credentials are:
    • Email: admin@example.com
    • Password: changeme
    3. Upon first login, the system will immediately prompt you to change these details. This is an absolutely crucial security step and must be done immediately.

    Section 4: Troubleshooting the “Deploying” Issue: Diagnosis and Repair of Installation Errors

    One of the most frequently encountered and frustrating problems when deploying Nginx Proxy Manager on TrueNAS Scale is the situation where the application gets permanently stuck in the “Deploying” state after installation. The user waits, refreshes the page, but the status never changes to “Running”. Viewing the container logs often does not provide a clear answer. This problem is not a bug in NPM itself but, as diagnosed earlier, a symptom of a fundamental permission conflict between the generic container and the specific, secured environment in TrueNAS Scale.

    Nginx Proxy Manager Log

    Problem Description and Root Cause

    After clicking the “Install” button in the application wizard, TrueNAS Scale begins the deployment process. In the background, the Docker image is downloaded, ixVolumes are created, and the container is started with the specified configuration. The startup script inside the NPM container attempts to perform maintenance operations, including changing the owner of key directories. Because the container is running as a user with limited permissions (apps, UID 568) on a file system it does not fully control, this operation fails. The script halts its execution, and the container never signals to the system that it is ready to work. Consequently, from the perspective of the TrueNAS interface, the application remains forever in the deployment phase.

    Fortunately, thanks to the work of the community and developers, there are proven and effective solutions to this problem. Interestingly, the evolution of these solutions perfectly illustrates the dynamics of open-source software development.

    Solution 1: Using an Environment Variable (Recommended Method)

    This is the modern, precise, and most secure solution to the problem. It was introduced by the creators of the NPM container specifically in response to problems reported by users of platforms like TrueNAS Scale. Instead of escalating permissions, the container is instructed to skip the problematic step.

    To implement this solution:

    1. During the application installation (or while editing it if it has already been created and is stuck), navigate to the Application Configuration section.
    2. Find the Nginx Proxy Manager Configuration subsection and click Add next to Additional Environment Variables.
    3. Configure the new environment variable as follows:
    • Variable Name: SKIP_CERTBOT_OWNERSHIP
    • Variable Value: true
    4. Save the configuration and install or update the application.

    Adding this flag informs the Certbot startup script inside the container to skip the chown (change owner) step for its configuration files. The script proceeds, the container starts correctly and reports readiness, and the application transitions to the “Running” state. This is the recommended method for all newer versions of TrueNAS Scale (Dragonfish, Electric Eel, and later).
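
    For reference, the same fix applies to any Docker deployment of NPM, not just TrueNAS. A minimal sketch using docker run, where the host ports and volume paths are illustrative assumptions only:

    # Run NPM with the Certbot ownership step skipped (illustrative ports and paths)
    docker run -d --name nginx-proxy-manager \
      -e SKIP_CERTBOT_OWNERSHIP=true \
      -p 30021:80 -p 30022:443 -p 30020:81 \
      -v /mnt/tank/apps/npm/data:/data \
      -v /mnt/tank/apps/npm/letsencrypt:/etc/letsencrypt \
      jc21/nginx-proxy-manager:latest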

    Solution 2: Changing the User to Root (Historical Method)

    This solution was the first one discovered by the community. It is a more “brute force” method that solves the problem by granting the container full permissions. Although effective, it is considered less elegant and potentially less secure from the perspective of the principle of least privilege.

    To implement this solution:

    1. During the installation or editing of the application, navigate to the User and Group Configuration section.
    2. Change the value in the User ID field from the default 568 to 0.
    3. Leave the Group ID unchanged or also set it to 0.
    4. Save the configuration and deploy the application.

    Setting the User ID to 0 causes the process inside the container to run with root user permissions. The root user has unlimited permissions, so the problematic chown operation executes flawlessly, and the container starts correctly. This method was particularly necessary in older versions of TrueNAS Scale (e.g., Dragonfish) and is documented as a working workaround. Although it still works, the environment variable method is preferred as it does not require escalating permissions for the entire container.

    Verification

    Regardless of the chosen method, after saving the changes and redeploying the application, you should observe its status in the Apps -> Installed tab. After a short while, the status should change from “Deploying” to “Running”, which means the problem has been successfully resolved and Nginx Proxy Manager is ready for configuration.

    Section 5: Practical Application: Securing a Jellyfin Media Server

    Theory and correct installation are just the beginning. The true power of Nginx Proxy Manager is revealed in practice when we start using it to manage access to our services. Jellyfin, a popular, free media server, is an excellent example to demonstrate this process, as its full functionality depends on one, often overlooked, setting in the proxy configuration. The following guide assumes that Jellyfin is already installed and running on the local network, accessible at IP_ADDRESS:PORT (e.g., 192.168.1.10:8096).

    Step 1: DNS Configuration

    Before NPM can direct traffic, the outside world needs to know where to send it.

    1. Log in to your domain’s management panel (e.g., at your domain registrar or DNS provider like Cloudflare).
    2. Create a new A record.
    3. In the Name (or Host) field, enter the subdomain that will be used to access Jellyfin (e.g., jellyfin).
    4. In the Value (or Points to) field, enter the public IP address of your home network (your router).

    Step 2: Obtaining an SSL Certificate in NPM

    Securing the connection with HTTPS is crucial. NPM makes this process trivial, especially when using the DNS Challenge method, which is more secure as it does not require opening any ports on your router.

    1. In the NPM interface, go to SSL Certificates and click Add SSL Certificate, then select Let’s Encrypt.
    2. In the Domain Names field, enter your subdomain, e.g., jellyfin.yourdomain.com. You can also generate a wildcard certificate at this stage (e.g., *.yourdomain.com), which will match all subdomains.
    3. Enable the Use a DNS Challenge option.
    4. From the DNS Provider list, select your DNS provider (e.g., Cloudflare).
    5. In the Credentials File Content field, paste the API token obtained from your DNS provider. For Cloudflare, you need to generate a token with permissions to edit the DNS zone (Zone: DNS: Edit).
    6. Accept the Let’s Encrypt terms of service and save the form. After a moment, NPM will use the API to temporarily add a TXT record in your DNS, which proves to Let’s Encrypt that you own the domain. The certificate will be generated and saved.
    Nginx Proxy Manager SSL
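
    NPM delegates DNS challenges to Certbot's DNS plugins, so for Cloudflare the Credentials File Content field typically expects the certbot-dns-cloudflare format. A sketch with a placeholder token value:

    # Cloudflare API token with Zone / DNS / Edit permission (placeholder value)
    dns_cloudflare_api_token = 0123456789abcdef0123456789abcdef01234567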

    Step 3: Creating a Proxy Host

    This is the heart of the configuration, where we link the domain, the certificate, and the internal service.

    1. In NPM, go to Hosts -> Proxy Hosts and click Add Proxy Host.
    2. A form with several tabs will open.

    “Details” Tab

    • Domain Names: Enter the full domain name that was configured in DNS, e.g., jellyfin.yourdomain.com.
    • Scheme: Select http, as the communication between NPM and Jellyfin on the local network is typically not encrypted.
    • Forward Hostname / IP: Enter the local IP address of the server where Jellyfin is running, e.g., 192.168.1.10.
    • Forward Port: Enter the port on which Jellyfin is listening, e.g., 8096.
    • Websocket Support: This is an absolutely critical setting. You must tick this option. Jellyfin makes extensive use of WebSocket technology for real-time communication, for example, to update playback status on the dashboard or for the Syncplay feature to work. Without WebSocket support enabled, the Jellyfin main page will load correctly, but many key features will not work, leading to difficult-to-diagnose problems.
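
    For context, ticking Websocket Support makes NPM emit the standard Nginx upgrade-handling directives for this host. A rough sketch of what the generated configuration includes (you do not need to add this manually):

    # Standard Nginx WebSocket upgrade handling
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";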

    “SSL” Tab

    • SSL Certificate: From the drop-down list, select the certificate generated in the previous step for the Jellyfin domain.
    • Force SSL: Enable this option to automatically redirect all HTTP connections to secure HTTPS.
    • HTTP/2 Support: Enabling this option can improve page loading performance.

    After configuring both tabs, save the proxy host.

    Step 4: Testing

    After saving the configuration, Nginx will reload its settings in the background. It should now be possible to open a browser and enter the address https://jellyfin.yourdomain.com. You should see the Jellyfin login page, and the connection should be secured with an SSL certificate (a padlock icon will be visible in the address bar).

    Subsection 5.1: Advanced Security Hardening (Optional)

    The default configuration is fully functional, but to enhance security, you can add extra HTTP headers that instruct the browser on how to behave. To do this, edit the created proxy host and go to the Advanced tab. In the Custom Nginx Configuration field, you can paste additional directives.

    It’s worth noting that NPM has a quirk: add_header directives added directly in this field may not be applied. A safer approach is to create a Custom Location for the path / and paste the headers in its configuration field.

    The following overview presents the recommended security headers, with the exact directive to use and deployment notes for each.

    • Strict-Transport-Security: forces the browser to communicate exclusively over HTTPS for a specified period.
      Recommended value: add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
      Notes: Deploy with caution. It is wise to start with a lower max-age and remove preload until you are certain about the configuration.

    • X-Frame-Options: protects against “clickjacking” attacks by preventing the page from being embedded in an <iframe> on another site.
      Recommended value: add_header X-Frame-Options "SAMEORIGIN" always;
      Notes: SAMEORIGIN allows embedding only within the same domain.

    • X-Content-Type-Options: prevents attacks related to the browser misinterpreting MIME types (“MIME-sniffing”).
      Recommended value: add_header X-Content-Type-Options "nosniff" always;
      Notes: This is a standard and safe setting.

    • Referrer-Policy: controls what referrer information is sent during navigation.
      Recommended value: add_header 'Referrer-Policy' 'origin-when-cross-origin';
      Notes: A good compromise between privacy and usability.

    • X-XSS-Protection: a historical header intended to protect against Cross-Site Scripting (XSS) attacks.
      Recommended value: add_header X-XSS-Protection "0" always;
      Notes: The header is obsolete and can create new attack vectors; modern browsers have better, built-in mechanisms. It is recommended to explicitly disable it (0).

    Applying these headers provides an additional layer of defence and is considered good practice in securing web applications. However, it is critical to use up-to-date recommendations, as in the case of X-XSS-Protection, where blindly copying it from older guides could weaken security.
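
    Put together, this is roughly what you might paste into the Custom Location configuration field for the / path, using the values recommended above (adjust the HSTS max-age and preload flag as noted):

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header 'Referrer-Policy' 'origin-when-cross-origin';
    add_header X-XSS-Protection "0" always;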

    Section 6: Conclusions and Next Steps

    Combining Nginx Proxy Manager with the TrueNAS Scale platform creates an incredibly powerful and flexible environment for managing a home lab. As demonstrated in this report, this synergy allows for centralised access management, a drastic simplification of the deployment and maintenance of SSL/TLS security, and a professionalisation of the way users interact with their self-hosted services. The key to success, however, is not just blindly following instructions, but above all, understanding the fundamental principles of how both technologies work. The awareness that applications in TrueNAS Scale operate within a restrictive ecosystem is essential for effectively diagnosing and resolving specific problems, such as the “Deploying” stall error.

    Summary of Strategic Benefits

    Deploying NPM on TrueNAS Scale brings tangible benefits:

    • Centralisation and Simplicity: All incoming requests are managed from a single, intuitive panel, eliminating the chaos of multiple IP addresses and ports.
    • Enhanced Security: Automation of SSL certificates, hiding the internal network topology, and the ability to implement advanced security headers create a solid first line of defence.
    • Professional Appearance and Convenience: Using easy-to-remember, personalised subdomains (e.g., media.mydomain.co.uk) instead of technical IP addresses significantly improves the user experience.

    Recommendations and Next Steps

    After successfully deploying Nginx Proxy Manager and securing your first application, it is worth exploring its further capabilities to fully utilise the tool’s potential.

    • Explore Access Lists: NPM allows for the creation of Access Control Lists (ACLs), which can restrict access to specific proxy hosts based on the source IP address. This is an extremely useful feature for securing administrative panels. For example, you can create a rule that allows access to the TrueNAS Scale interface or the NPM panel itself only from IP addresses on the local network, blocking any access attempts from the outside.
    • Backup Strategy: The Nginx Proxy Manager configuration, stored in the ixVolume, is a critical asset. Its loss would mean having to reconfigure all proxy hosts and certificates. TrueNAS Scale offers built-in tools for automating backups. You should configure a Periodic Snapshot Task for the dataset containing the NPM application data (ix-applications/releases/nginx-proxy-manager) to regularly create snapshots of its state.
    • Securing Other Applications: The knowledge gained during the Jellyfin configuration is universal. It can now be applied to secure virtually any other web service running in your home lab, such as Home Assistant, a file server, a personal password manager (e.g., Vaultwarden, which is a Bitwarden implementation), or the AdGuard Home ad-blocking system. Remember to enable the Websocket Support option for any application that requires real-time communication.
    • Monitoring and Diagnostics: The NPM interface provides access logs and error logs for each proxy host. Regularly reviewing these logs can help in diagnosing access problems, identifying unauthorised connection attempts, and optimising the configuration.

    Mastering Nginx Proxy Manager on TrueNAS Scale is an investment that pays for itself many times over in the form of increased security, convenience, and control over your digital ecosystem. It is another step on the journey from a simple user to a conscious architect of your own home infrastructure.

  • Your Server is Secure: A Guide to Permanently Blocking Attacks

    Your Server is Secure: A Guide to Permanently Blocking Attacks

    A Permanent IP Blacklist with Fail2ban, UFW, and Ipset

    Introduction: Beyond Temporary Protection

    In the digital world, where server attacks are a daily occurrence, merely reacting is not enough. Although tools like Fail2ban provide a basic line of defence, their temporary blocks leave a loophole—persistent attackers can return and try again after the ban expires. This article provides a detailed guide to building a fully automated, two-layer system that turns ephemeral bans into permanent, global blocks. The combination of Fail2ban, UFW, and the powerful Ipset tool creates a mechanism that permanently protects your server from known repeat offenders.

    Layer One: Reaction with Fail2ban

    Every response starts with Fail2ban. This daemon monitors log files (e.g., SSH or Apache logs) for patterns indicating break-in attempts, such as repeated failed logins. When it detects such activity, it immediately blocks the attacker’s IP address by adding it to the firewall rules for a defined period (e.g., 10 minutes or 30 days). This is an effective but short-term response.

    Layer Two: Persistence with UFW and Ipset

    For a ban to become permanent, we need a more robust, centralised method of managing IP addresses. This is where UFW and Ipset come in.

    What is Ipset?

    Ipset is a Linux kernel extension that allows you to manage sets of IP addresses, networks, or ports. It is a much more efficient solution than adding thousands of individual rules to a firewall. Instead, the firewall can refer to an entire set with a single rule.

    Ipset Installation and Configuration

    The first step is to install Ipset on your system. We use standard package managers for this.

    sudo apt update
    sudo apt install ipset

    Next, we create two sets: blacklist for individual IPv4 addresses and blacklist_v6 for IPv6 (created as hash:net, so it can hold single addresses as well as whole networks).

    sudo ipset create blacklist hash:ip hashsize 4096
    sudo ipset create blacklist_v6 hash:net family inet6 hashsize 4096

    The hashsize parameter sets the initial size of the set’s hash table (it grows automatically as entries are added); the related maxelem parameter caps the total number of entries. Sizing these sensibly is crucial for performance.
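
    You can verify the sets work before wiring them into the firewall. A quick sanity check, using 203.0.113.5 (an address from the TEST-NET-3 documentation range) as a stand-in:

    # Add a test address, confirm membership, then remove it
    sudo ipset add blacklist 203.0.113.5
    sudo ipset test blacklist 203.0.113.5   # should report the address is in set blacklist
    sudo ipset del blacklist 203.0.113.5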

    Integrating Ipset with the UFW Firewall

    For UFW to start using our sets, we must add the appropriate commands to its rules. We edit the UFW configuration files, adding rules that block traffic originating from addresses contained in our Ipset sets. For IPv4, we edit /etc/ufw/before.rules:

    sudo nano /etc/ufw/before.rules

    Immediately after *filter and :ufw-before-input [0:0], add:

    # Rules for the permanent blacklist (ipset)
    # Block any incoming traffic from IP addresses in the 'blacklist' set (IPv4)
    -A ufw-before-input -m set --match-set blacklist src -j DROP

    For IPv6, we edit /etc/ufw/before6.rules:

    sudo nano /etc/ufw/before6.rules

    Immediately after *filter and :ufw6-before-input [0:0], add:

    # Rules for the permanent blacklist (ipset) IPv6
    # Block any incoming traffic from IP addresses in the 'blacklist_v6' set
    -A ufw6-before-input -m set --match-set blacklist_v6 src -j DROP

    After adding the rules, we reload UFW for them to take effect:

    sudo ufw reload
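
    To confirm the rules are actually active in the kernel, you can list the UFW input chains and look for the match-set entries:

    sudo iptables -L ufw-before-input -n | grep match-set
    sudo ip6tables -L ufw6-before-input -n | grep match-set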

    Script for Automatic Blacklist Updates

    The core of the system is a script that acts as a bridge between Fail2ban and Ipset. Its job is to collect banned addresses, ensure they are unique, and synchronise them with the Ipset sets.

    Create the script file:

    sudo nano /usr/local/bin/update-blacklist.sh

    Below is the content of the script. It works in several steps:

    1. Creates a temporary, unique list of IP addresses from Fail2ban logs and the existing blacklist.
    2. Creates temporary Ipset sets.
    3. Reads addresses from the unique list and adds them to the appropriate temporary sets (distinguishing between IPv4 and IPv6).
    4. Atomically swaps the old Ipset sets with the new, temporary ones, minimising the risk of protection gaps.
    5. Destroys the old, temporary sets.
    6. Returns a summary of the number of blocked addresses.

    #!/bin/bash

    BLACKLIST_FILE="/etc/fail2ban/blacklist.local"
    IPSET_NAME_V4="blacklist"
    IPSET_NAME_V6="blacklist_v6"

    touch "$BLACKLIST_FILE"

    # Create a unique list of banned IPs from the log and the existing blacklist file
    (grep 'Ban' /var/log/fail2ban.log | awk '{print $NF}' && cat "$BLACKLIST_FILE") | sort -u > "$BLACKLIST_FILE.tmp"
    mv "$BLACKLIST_FILE.tmp" "$BLACKLIST_FILE"

    # Create temporary ipsets (-exist suppresses errors if they are left over from a failed run)
    sudo ipset -exist create "${IPSET_NAME_V4}_tmp" hash:ip hashsize 4096
    sudo ipset -exist create "${IPSET_NAME_V6}_tmp" hash:net family inet6 hashsize 4096

    # Add IPs to the temporary sets (IPv6 addresses contain a colon)
    while IFS= read -r ip; do
        if [[ "$ip" == *":"* ]]; then
            sudo ipset add "${IPSET_NAME_V6}_tmp" "$ip"
        else
            sudo ipset add "${IPSET_NAME_V4}_tmp" "$ip"
        fi
    done < "$BLACKLIST_FILE"

    # Atomically swap the temporary sets with the active ones
    sudo ipset swap "${IPSET_NAME_V4}_tmp" "$IPSET_NAME_V4"
    sudo ipset swap "${IPSET_NAME_V6}_tmp" "$IPSET_NAME_V6"

    # Destroy the temporary sets
    sudo ipset destroy "${IPSET_NAME_V4}_tmp"
    sudo ipset destroy "${IPSET_NAME_V6}_tmp"

    # Count the number of entries
    COUNT_V4=$(sudo ipset list "$IPSET_NAME_V4" | wc -l)
    COUNT_V6=$(sudo ipset list "$IPSET_NAME_V6" | wc -l)

    # Subtract header lines from count
    COUNT_V4=$((COUNT_V4 - 7))
    COUNT_V6=$((COUNT_V6 - 7))

    # Ensure count is not negative
    [ "$COUNT_V4" -lt 0 ] && COUNT_V4=0
    [ "$COUNT_V6" -lt 0 ] && COUNT_V6=0

    echo "Blacklist and ipset updated. Blocked IPv4: $COUNT_V4, Blocked IPv6: $COUNT_V6"
    exit 0

    After creating the script, give it execute permissions:

    sudo chmod +x /usr/local/bin/update-blacklist.sh
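
    It is worth running the script once by hand to confirm everything is wired up correctly; the counts will depend on your Fail2ban history:

    sudo /usr/local/bin/update-blacklist.sh
    # Expected output along the lines of:
    # Blacklist and ipset updated. Blocked IPv4: 42, Blocked IPv6: 3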

    Automation and Persistence After a Reboot

    To run the script without intervention, we use a cron schedule. Open the crontab editor for the root user and add a rule to run the script every hour:

    sudo crontab -e

    Add this line:

    0 * * * * /usr/local/bin/update-blacklist.sh

    Or to run it once a day at 6 a.m.:

    0 6 * * * /usr/local/bin/update-blacklist.sh

    The final, crucial step is to ensure the Ipset sets survive a reboot, as they are stored in RAM by default. We create a systemd service that will save their state before the server shuts down and load it again on startup.

    sudo nano /etc/systemd/system/ipset-persistent.service

    [Unit]
    Description=Saves and restores ipset sets on boot/shutdown
    Before=network-pre.target
    ConditionFileNotEmpty=/etc/ipset.rules

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/bash -c "/sbin/ipset -exist create blacklist hash:ip; /sbin/ipset -exist create blacklist_v6 hash:net family inet6; /sbin/ipset -exist restore < /etc/ipset.rules"
    ExecStop=/bin/bash -c "/sbin/ipset save > /etc/ipset.rules"

    [Install]
    WantedBy=multi-user.target

    Finally, enable and start the service:

    sudo systemctl daemon-reload
    sudo systemctl enable --now ipset-persistent.service
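
    One caveat, based on the unit definition above: because of the ConditionFileNotEmpty directive, the service will not start until /etc/ipset.rules exists and is non-empty. Seed the file once from the sets currently in memory, then restart the service:

    sudo bash -c 'ipset save > /etc/ipset.rules'
    sudo systemctl restart ipset-persistent.service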

    How Does It Work in Practice?

    The entire system is an automated chain of events that works in the background to protect your server from attacks. Here is the flow of information and actions:

    1. Attack Response (Fail2ban):
    • Someone tries to break into the server (e.g., by repeatedly entering the wrong password via SSH).
    • Fail2ban, monitoring the system’s logs, detects this pattern and records the ban in its own log (/var/log/fail2ban.log).
    • It immediately adds the attacker’s IP address to a temporary firewall rule, blocking their access for a specified time.
    2. Permanent Banning (Script and Cron):
    • Every hour (as set in cron), the system runs the update-blacklist.sh script.
    • The script reads the Fail2ban logs, finds all addresses that have been banned (lines containing “Ban”), and then compares them with the existing local blacklist (/etc/fail2ban/blacklist.local).
    • It creates a unique list of all banned addresses.
    • It then creates temporary ipset sets (blacklist_tmp and blacklist_v6_tmp) and adds all addresses from the unique list to them.
    • It performs an ipset swap operation, which atomically replaces the old, active sets with the new, updated ones.
    • UFW, thanks to the previously defined rules, immediately starts blocking the new addresses that have appeared in the updated ipset sets.
    3. Persistence After Reboot (systemd Service):
    • Ipset’s operation is volatile—the sets only exist in memory. The ipset-persistent.service solves this problem.
    • Before shutdown/reboot: systemd runs the ExecStop=/sbin/ipset save -f /etc/ipset.rules command. This saves the current state of all ipset sets to a file on the disk.
    • After power-on/reboot: systemd runs the ExecStart command, which restores the sets. It reads all blocked addresses from the /etc/ipset.rules file and automatically recreates the ipset sets in memory.

    Thanks to this, even if the server is rebooted, the IP blacklist remains intact, and protection is active from the first moments after the system starts.

    Summary and Verification

    The system you have built is a fully automated, multi-layered protection mechanism. Attackers are temporarily banned by Fail2ban, and their addresses are automatically added to a permanent blacklist, which is instantly blocked by UFW and Ipset. The systemd service ensures that the blacklist survives server reboots, protecting against repeat offenders permanently. To verify its operation, you can use the following commands:

    sudo ufw status verbose
    sudo ipset list blacklist
    sudo ipset list blacklist_v6
    sudo systemctl status ipset-persistent.service

    How to Create a Reliable IP Whitelist in UFW and Ipset

    Introduction: Why a Whitelist is Crucial

    When configuring advanced firewall rules, especially those that automatically block IP addresses (like in systems with Fail2ban), there is a risk of accidentally blocking yourself or key services. A whitelist is a mechanism that acts like a VIP pass for your firewall—IP addresses on this list will always have access, regardless of other, more restrictive blocking rules.

    This guide will show you, step-by-step, how to create a robust and persistent whitelist using UFW (Uncomplicated Firewall) and ipset. As an example, we will use the placeholder 111.222.333.444; note that this is deliberately not a valid IPv4 address (octets cannot exceed 255), so substitute your own trusted address when running the commands.

    Step 1: Create a Dedicated Ipset Set for the Whitelist

    The first step is to create a separate “container” for our trusted IP addresses. Using ipset is much more efficient than adding many individual rules to iptables.

    Open a terminal and enter the following command:

    sudo ipset create whitelist hash:ip

    What did we do?

    • ipset create: The command to create a new set.
    • whitelist: The name of our set. It’s short and unambiguous.
    • hash:ip: The type of set. hash:ip is optimised for storing and very quickly looking up single IPv4 addresses.
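
    A hash:ip set holds individual addresses; if you ever need to trust an entire network range, a companion set of type hash:net is the natural fit. A sketch, where the name whitelist_nets is just an illustrative choice:

    sudo ipset create whitelist_nets hash:net
    sudo ipset add whitelist_nets 192.168.1.0/24

    Such a set would need its own ACCEPT rule in before.rules, analogous to the one added in Step 3 below.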

    Step 2: Add a Trusted IP Address

    Now that we have the container ready, let’s add our example trusted IP address to it.

    sudo ipset add whitelist 111.222.333.444

    You can repeat this command for every address you want to add to the whitelist. To check the contents of the list, use the command:

    sudo ipset list whitelist

    Step 3: Modify the Firewall – Giving Priority to the Whitelist

    This is the most important step. We need to modify the UFW rules so that connections from addresses on the whitelist are accepted immediately, before the firewall starts processing any blocking rules (including those from the ipset blacklist or Fail2ban).

    Open the before.rules configuration file. This is the file where rules processed before the main UFW rules are located.

    sudo nano /etc/ufw/before.rules

    Go to the beginning of the file and find the *filter section. Just below the :ufw-before-input [0:0] line, add our new snippet. Placing it at the very top ensures it will be processed first.

    *filter
    :ufw-before-input [0:0]
    # Rule for the whitelist (ipset) ALWAYS HAS PRIORITY
    # Accept any traffic from IP addresses in the 'whitelist' set
    -A ufw-before-input -m set --match-set whitelist src -j ACCEPT

    • -A ufw-before-input: We add the rule to the ufw-before-input chain.
    • -m set --match-set whitelist src: Condition: if the source (src) IP address matches the whitelist set…
    • -j ACCEPT: Action: “immediately accept (ACCEPT) the packet and stop processing further rules for this packet.”

    Save the file and reload UFW:

    sudo ufw reload

    From this point on, any connection from the address 111.222.333.444 will be accepted immediately.
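
    If you have also deployed the blacklist rules from the previous guide, the top of /etc/ufw/before.rules should look roughly like this; the order matters, because the first matching rule wins:

    *filter
    :ufw-before-input [0:0]
    # Whitelist first: trusted addresses are always accepted
    -A ufw-before-input -m set --match-set whitelist src -j ACCEPT
    # Permanent blacklist is processed afterwards
    -A ufw-before-input -m set --match-set blacklist src -j DROP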

    Step 4: Ensuring Whitelist Persistence

    Ipset sets are stored in memory and disappear after a server reboot. To make our whitelist persistent, we need to ensure it is automatically loaded every time the system starts. We will use our previously created ipset-persistent.service for this.

    Update the systemd service to “teach” it about the existence of the new whitelist set.

    sudo nano /etc/systemd/system/ipset-persistent.service

    Find the ExecStart line and add the create command for whitelist. If you already have other sets, simply add whitelist to the line. An example of an updated line:

    ExecStart=/bin/bash -c "/sbin/ipset -exist create whitelist hash:ip; /sbin/ipset -exist create blacklist hash:ip; /sbin/ipset -exist create blacklist_v6 hash:net family inet6; /sbin/ipset -exist restore < /etc/ipset.rules"

    Reload the systemd configuration:

    sudo systemctl daemon-reload

    Save the current state of all sets to the file. This command will overwrite the old /etc/ipset.rules file with a new version that includes information about your whitelist.

    sudo bash -c 'ipset save > /etc/ipset.rules'

    (Running the redirection inside a root shell matters: a plain sudo ipset save > /etc/ipset.rules would perform the redirect as your unprivileged user and can fail with “Permission denied”.)

    Restart the service to ensure it is running with the new configuration:

    sudo systemctl restart ipset-persistent.service

    Summary

    Congratulations! You have created a solid and reliable whitelist mechanism. With it, you can securely manage your server, confident that trusted IP addresses like 111.222.333.444 will never be accidentally blocked. Remember to only add fully trusted addresses to this list, such as your home or office IP address.

    How to Effectively Block IP Addresses and Subnets on a Linux Server

    Blocking single IP addresses is easy, but what if attackers use multiple addresses from the same network? Manually banning each one is inefficient and time-consuming.

    In this article, you will learn how to use ipset and iptables to effectively block entire subnets, automating the process and saving valuable time.

    Why is Blocking Entire Subnets Better?

    Many attacks, especially brute-force types, are carried out from multiple IP addresses belonging to the same operator or from the same pool of addresses (subnet). Blocking just one of them is like patching a small hole in a large dam—the rest of the traffic can still get through.

    Instead, you can block an entire subnet, for example, 45.148.10.0/24. This notation means you are blocking 256 addresses at once, which is much more effective.

    Script for Automatic Subnet Blocking

    To automate the process, you can use the following bash script. This script is interactive—it asks you to provide the subnet to block, then adds it to an ipset list and saves it to a file, making the block persistent.

    Let’s analyse the script step-by-step:

    #!/bin/bash

    # The name of the ipset list to which subnets will be added
    BLACKLIST_NAME="blacklist_nets"
    # The file where blocked subnets will be appended
    BLACKLIST_FILE="/etc/fail2ban/blacklist_net.local"

    # 1. Create the blacklist file if it doesn't exist
    touch "$BLACKLIST_FILE"

    # 2. Check if the ipset list already exists. If not, create it.
    # Using "hash:net" allows for storing subnets, which is key.
    if ! sudo ipset list "$BLACKLIST_NAME" >/dev/null 2>&1; then
        sudo ipset create "$BLACKLIST_NAME" hash:net maxelem 65536
    fi

    # 3. Loop to prompt the user for subnets to block.
    # The loop ends when the user types "exit".
    while true; do
        read -r -p "Enter the subnet address to block (e.g., 192.168.1.0/24) or type 'exit': " subnet
        if [ "$subnet" == "exit" ]; then
            break
        elif [[ "$subnet" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}\/[0-9]{1,2}$ ]]; then
            # Check if the subnet is not already in the file to avoid duplicates
            if ! grep -q "^$subnet$" "$BLACKLIST_FILE"; then
                echo "$subnet" | sudo tee -a "$BLACKLIST_FILE" > /dev/null
                # Add the subnet to the ipset list
                sudo ipset add "$BLACKLIST_NAME" "$subnet"
                echo "Subnet $subnet added."
            else
                echo "Subnet $subnet is already on the list."
            fi
        else
            # Inform the user that the entered format is incorrect
            echo "Error: Invalid format. Please provide the address in 'X.X.X.X/Y' format."
        fi
    done

    # 4. Add a rule in iptables that blocks all traffic from addresses on the ipset list.
    # This ensures the rule is added only once.
    if ! sudo iptables -C INPUT -m set --match-set "$BLACKLIST_NAME" src -j DROP >/dev/null 2>&1; then
        sudo iptables -I INPUT -m set --match-set "$BLACKLIST_NAME" src -j DROP
    fi

    # 5. Save the iptables rules to survive a reboot.
    # This part checks which tool the system uses.
    if command -v netfilter-persistent &> /dev/null; then
        sudo netfilter-persistent save
    elif command -v service &> /dev/null && service iptables status >/dev/null 2>&1; then
        sudo service iptables save
    fi

    echo "Script finished. The '$BLACKLIST_NAME' list has been updated, and the iptables rules are active."

    How to Use the Script

    1. Save the script: Save the code above into a file, e.g., block_nets.sh.
    2. Give permissions: Make sure the file has execute permissions: chmod +x block_nets.sh.
    3. Run the script: Execute the script with root privileges: sudo ./block_nets.sh.
    4. Provide subnets: The script will prompt you to enter subnet addresses. Simply type them in the X.X.X.X/Y format and press Enter. When you are finished, type exit.

    Ensuring Persistence After a Server Reboot

    Ipset sets are stored in RAM by default and disappear after a server restart. For the blocked addresses to remain active, you must use a systemd service that will load them at system startup.

    If you already have such a service (e.g., ipset-persistent.service), you must update it to include the new blacklist_nets list.

    1. Edit the service file: Open your service’s configuration file.
      sudo nano /etc/systemd/system/ipset-persistent.service
    2. Update the ExecStart line: Find the ExecStart line and add the create command for the blacklist_nets set. An example updated ExecStart line should look like this (including previous sets):
      ExecStart=/bin/bash -c "/sbin/ipset -exist create whitelist hash:ip; /sbin/ipset -exist create blacklist hash:ip; /sbin/ipset -exist create blacklist_v6 hash:net family inet6; /sbin/ipset -exist create blacklist_nets hash:net; /sbin/ipset -exist restore < /etc/ipset.rules"
    3. Reload the systemd configuration:
      sudo systemctl daemon-reload
    4. Save the current state of all sets to the file: This command will overwrite the old /etc/ipset.rules file with a new version that contains information about all your lists, including blacklist_nets.
      sudo bash -c 'ipset save > /etc/ipset.rules'
    5. Restart the service:
      sudo systemctl restart ipset-persistent.service

    With this method, you can simply and efficiently manage your server’s security, effectively blocking entire subnets that show suspicious activity, and be sure that these rules will remain active after every reboot.

  • Wazuh on Your Own Server: Digital Sovereignty at the Cost of Complexity

    Wazuh on Your Own Server: Digital Sovereignty at the Cost of Complexity

    Faced with the rising costs of commercial solutions and escalating cyber threats, the free security platform Wazuh is gaining popularity as a powerful alternative. However, the decision to self-host it on one’s own servers represents a fundamental trade-off: organisations gain unprecedented control over their data and system, but in return, they must contend with significant technical complexity, hidden operational costs, and full responsibility for their own security. This report analyses for whom this path is a strategic advantage and for whom it may prove to be a costly trap.

    Introduction – The Democratisation of Cybersecurity in an Era of Growing Threats

    The contemporary digital landscape is characterised by a paradox: while threats are becoming increasingly advanced and widespread, the costs of professional defence tools remain an insurmountable barrier for many organisations. Industry reports paint a grim picture, pointing to a sharp rise in ransomware attacks, which are evolving from data encryption to outright blackmail, and the ever-wider use of artificial intelligence by cybercriminals to automate and scale attacks. In this challenging environment, solutions like Wazuh are emerging as a response to the growing demand for accessible, yet powerful, tools to protect IT infrastructure.

    Wazuh is defined as a free, open-source security platform that unifies the capabilities of two key technologies: XDR (Extended Detection and Response) and SIEM (Security Information and Event Management). Its primary goal is to protect digital assets regardless of where they operate—from traditional on-premises servers in a local data centre, through virtual environments, to dynamic containers and distributed resources in the public cloud.

    The rise in Wazuh’s popularity is directly linked to the business model of dominant players in the SIEM market, such as Splunk. Their pricing, often based on the volume of data processed, can generate astronomical costs for growing companies, making advanced security a luxury. Wazuh, being free, eliminates this licensing barrier, which makes it particularly attractive to small and medium-sized enterprises (SMEs), public institutions, non-profit organisations, and all entities with limited budgets but who cannot afford to compromise on security.

    The emergence of such a powerful, free tool signals a fundamental shift in the cybersecurity market. One could speak of a democratisation of advanced defence mechanisms. Traditionally, SIEM/XDR-class platforms were the domain of large corporations with dedicated Security Operations Centres (SOCs) and substantial budgets. Meanwhile, cybercriminals do not limit their activities to the largest targets; SMEs are equally, and sometimes even more, vulnerable to attacks. Wazuh fills this critical gap, giving smaller organisations access to functionalities that were, until recently, beyond their financial reach. This represents a paradigm shift, where access to robust digital defence is no longer solely dependent on purchasing power but begins to depend on technical competence and the strategic decision to invest in a team.

    image 122

    To fully understand Wazuh’s unique position, it is worth comparing it with key players in the market.

    Table 1: Positioning Wazuh Against the Competition

    • Cost Model
      Wazuh: Open-source software, free. Paid options include technical support and a managed cloud service (SaaS).
      Splunk: Commercial. Licensing is based mainly on the daily volume of data processed, which can lead to high costs at a large scale.
      Elastic Security: “Open core” model. Basic functions are free; advanced ones (e.g., machine learning) are available in paid subscriptions. Prices are based on resources, not data volume.

    • Main Functionalities
      Wazuh: Integrated XDR and SIEM. Strong emphasis on endpoint security (FIM, vulnerability detection, configuration assessment) and log analysis.
      Splunk: A leader in log analysis and SIEM. An extremely powerful query language (SPL) and broad analytical capabilities. Considered the standard in large SOCs.
      Elastic Security: An integrated security platform (SIEM + endpoint protection) built on the powerful Elasticsearch search engine. High flexibility and scalability.

    • Deployment Options
      Wazuh: Self-hosting (On-Premises / Private Cloud) or the official Wazuh Cloud service (SaaS).
      Splunk: Self-hosting (On-Premises) or the Splunk Cloud service (SaaS).
      Elastic Security: Self-hosting (On-Premises) or the Elastic Cloud service (SaaS).

    • Target Audience
      Wazuh: SMEs, organisations with technical expertise, entities with strict data sovereignty requirements, security enthusiasts.
      Splunk: Large enterprises, mature Security Operations Centres (SOCs), organisations with large security budgets and a need for advanced analytics.
      Elastic Security: Organisations seeking a flexible, scalable platform, often with an existing Elastic ecosystem. Development and DevOps teams.

    This comparison clearly shows that Wazuh is not a simple clone of commercial solutions. Its strength lies in the specific niche it occupies: it offers enterprise-class functionalities without licensing costs, in exchange requiring greater technical involvement from the user and the assumption of full responsibility for implementation and maintenance.

    Anatomy of a Defender – How Does the Wazuh Architecture Work?

    image 123

    Understanding the technical foundations of Wazuh is crucial for assessing the real complexity and potential challenges associated with its self-hosted deployment. At first glance, the architecture is elegant and logical; however, its scalability, one of its greatest advantages, simultaneously becomes its greatest operational challenge in a self-hosted model.

    The Agent-Server Model: The Eyes and Ears of the System

    At the core of the Wazuh architecture is a model based on an agent-server relationship. A lightweight, multi-platform Wazuh agent is installed on every monitored system—be it a Linux server, a Windows workstation, a Mac computer, or even cloud instances. The agent runs in the background, consuming minimal system resources, and its task is to continuously collect telemetry data. It gathers system and application logs, monitors the integrity of critical files, scans for vulnerabilities, inventories installed software and running processes, and detects intrusion attempts. All this data is then securely transmitted in near real-time to the central component—the Wazuh server.

    Central Components: The Brain of the Operation

    A Wazuh deployment, even in its simplest form, consists of three key central components that together form a complete analytical system.

    1. Wazuh Server: This is the heart of the entire system. It receives data sent by all registered agents. Its main task is to process this stream of information. The server uses advanced decoders to normalise and structure logs from various sources and then passes them through a powerful analytical engine. This engine, based on a predefined and configurable set of rules, correlates events and identifies suspicious activities, security policy violations, or Indicators of Compromise (IoCs). When an event or series of events matches a rule with a sufficiently high priority, the server generates a security alert.
    2. Wazuh Indexer: This is a specialised and highly scalable database, designed for the rapid indexing, storage, and searching of vast amounts of data. Technologically, the Wazuh Indexer is a fork of the OpenSearch project, which in turn was created from the Elasticsearch source code. All events collected by the server (both those that generated an alert and those that did not) and the alerts themselves are sent to the indexer. This allows security analysts to search through terabytes of historical data in seconds for traces of an attack, which is fundamental for threat hunting and forensic analysis processes.
    3. Wazuh Dashboard: This is the user interface for the entire platform, implemented as a web application. Like the indexer, it is based on the OpenSearch Dashboards project (formerly known as Kibana). The dashboard allows for the visualisation of data in the form of charts, tables, and maps, browsing and analysing alerts, managing agent and server configurations, and generating compliance reports. It is here that analysts spend most of their time, monitoring the security posture of the entire organisation.

    Security and Scalability of the Architecture

    A key aspect to emphasise is the security of the platform itself. Communication between the agent and the server occurs by default over port 1514/TCP and is protected by AES encryption (with a 256-bit key). Each agent must be registered and authenticated before the server will accept data from it. This ensures the confidentiality and integrity of the transmitted logs, preventing them from being intercepted or modified in transit.

    The Wazuh architecture was designed with scalability in mind. For small deployments, such as home labs or Proof of Concept tests, all three central components can be installed on a single, sufficiently powerful machine using a simplified installation script. However, in production environments monitoring hundreds or thousands of endpoints, such an approach quickly becomes inadequate. The official documentation and user experiences unequivocally indicate that to ensure performance and High Availability, it is necessary to implement a distributed architecture. This means separating the Wazuh server, indexer, and dashboard onto separate hosts. Furthermore, to handle the enormous volume of data and ensure resilience to failures, both the server and indexer components can be configured as multi-node clusters.

    It is at this point that the fundamental challenge of self-hosting becomes apparent. While an “all-in-one” installation is relatively simple, designing, implementing, and maintaining a distributed, multi-node Wazuh cluster is an extremely complex task. It requires deep knowledge of Linux systems administration, networking, and, above all, OpenSearch cluster management. The administrator must take care of aspects such as the correct replication and allocation of shards (index fragments), load balancing between nodes, configuring disaster recovery mechanisms, regularly creating backups, and planning updates for the entire technology stack. The decision to deploy Wazuh on a large scale in a self-hosted model is therefore not a one-time installation act. It is a commitment to the continuous management of a complex, distributed system, whose cost and complexity grow non-linearly with the scale of operations.

    The Strategic Decision – Full Control on Your Own Server versus the Convenience of the Cloud

    The choice of Wazuh deployment model—self-hosting on one’s own infrastructure (on-premises) versus using a ready-made cloud service (SaaS)—is one of the most important strategic decisions facing any organisation considering this platform. This is not merely a technical choice, but a fundamental decision concerning resource allocation, risk acceptance, and business priorities. An analysis of both approaches reveals a profound trade-off between absolute control and operational convenience.

    The Case for Self-Hosting: The Fortress of Data Sovereignty

    Organisations that decide to self-deploy and maintain Wazuh on their own servers are primarily driven by the desire for maximum control and independence. In this model, it is they, not an external provider, who define every aspect of the system’s operation—from hardware configuration, through data storage and retention policies, to the finest details of analytical rules. The open-source nature of Wazuh gives them an additional, powerful advantage: the ability to modify and adapt the platform to unique, often non-standard needs, which is impossible with closed, commercial solutions.

    However, the main driving force for many companies, especially in Europe, is the concept of data sovereignty. This is not just a buzzword, but a hard legal and strategic requirement. Data sovereignty means that digital data is subject to the laws and jurisdiction of the country in which it is physically stored and processed. In the context of stringent regulations such as Europe’s GDPR, the American HIPAA for medical data, or the PCI DSS standard for the payment card industry, keeping sensitive logs and security incident data within one’s own, controlled data centre is often the simplest and most secure way to ensure compliance.

    This choice also has a geopolitical dimension. Edward Snowden’s revelations about the PRISM programme run by the US NSA made the world aware that data stored in the clouds of American tech giants could be subject to access requests from US government agencies under laws such as the CLOUD Act. For many European companies, public institutions, or entities in the defence industry, the risk that their operational data and security logs could be made available to a foreign government is unacceptable. Self-hosting Wazuh in a local data centre, within the European Union, completely eliminates this risk, ensuring full digital sovereignty.

    The Reality of Self-Hosting: Hidden Costs and Responsibility

    The promise of free software is tempting, but the reality of a self-hosted deployment quickly puts the concept of “free” to the test. An analysis of the Total Cost of Ownership (TCO) reveals a series of hidden expenses that go far beyond the zero cost of the licence.

    • Capital Expenditure (CapEx): At the outset, the organisation must make significant investments in physical infrastructure. This includes purchasing powerful servers (with large amounts of RAM and fast processors), disk arrays capable of storing terabytes of logs, and networking components. Costs associated with providing appropriate server room conditions, such as uninterruptible power supplies (UPS), air conditioning, and physical access control systems, must also be considered.
    • Operational Expenditure (OpEx): This is where the largest, often underestimated, expenses lie. Firstly, the ongoing electricity and cooling bills. Secondly, and most importantly, personnel costs. Wazuh is not a “set it and forget it” system. As numerous users report, it requires constant attention, tuning, and maintenance. The default configuration can generate tens of thousands of alerts per day, leading to “alert fatigue” and rendering the system useless. To prevent this, a qualified security analyst or engineer is needed to constantly fine-tune rules and decoders, eliminate false positives, and develop the platform. For larger, distributed deployments, maintaining system stability can become a full-time job. One experienced user bluntly stated, “I’m losing my mind having to fix Wazuh every single day.” According to an analysis cited by GitHub, the total cost of a self-hosted solution can be up to 5.25 times higher than its cloud equivalent.

    Moreover, in the self-hosted model, the entire responsibility for security rests on the organisation’s shoulders. This includes not only protection against external attacks but also regular backups, testing disaster recovery procedures, and bearing the full consequences (financial and reputational) in the event of a successful breach and data leak.

    The Cloud Alternative: Convenience as a Service (SaaS)

    For organisations that want to leverage the power of Wazuh but are not ready to take on the challenges of self-hosting, there is an official alternative: Wazuh Cloud. This is a Software as a Service (SaaS) model, where the provider (the company Wazuh) takes on the entire burden of managing the server infrastructure, and the client pays a monthly or annual subscription for a ready-to-use service.

    The advantages of this approach are clear:

    • Lower Barrier to Entry and Predictable Costs: The subscription model eliminates the need for large initial hardware investments (CapEx) and converts them into a predictable, monthly operational cost (OpEx), which is often lower in the short and medium term.
    • Reduced Operational Burden: Issues such as server maintenance, patch installation, software updates, scaling resources in response to growing load, and ensuring high availability are entirely the provider’s responsibility. This frees up the internal IT team to focus on strategic tasks rather than “firefighting.”
    • Access to Expert Knowledge: Cloud clients benefit from the knowledge and experience of Wazuh engineers who manage hundreds of deployments daily. This guarantees optimal configuration and platform stability.

    Of course, convenience comes at a price. The main disadvantage is a partial loss of control over the system and data. The organisation must trust the security policies and procedures of the provider. Most importantly, depending on the location of the Wazuh Cloud data centres, the same data sovereignty issues that the self-hosted model avoids may arise.

    Ultimately, the choice between self-hosting and the cloud is not an assessment of which option is “better” in an absolute sense. It is a strategic allocation of risk and resources. The self-hosted model is a conscious acceptance of operational risk (failures, configuration errors, staff shortages) in exchange for minimising the risk associated with data sovereignty and third-party control. In contrast, the cloud model is a transfer of operational risk to the provider in exchange for accepting the risk associated with entrusting data and potential legal-geopolitical implications. For a financial sector company in the EU, the risk of a GDPR breach may be much higher than the risk of a server failure, which strongly inclines them towards self-hosting. For a dynamic tech start-up without regulated data, the cost of hiring a dedicated specialist and the operational risk may be unacceptable, making the cloud the obvious choice.

    Table 2: Decision Analysis: Self-Hosting vs. Wazuh Cloud

    • Total Cost of Ownership (TCO)
      Self-Hosting (On-Premises): High initial cost (hardware, CapEx). Significant, often unpredictable operational costs (personnel, energy, OpEx). Potentially lower in the long term at a large scale and with constant utilisation.
      Wazuh Cloud (SaaS): Low initial cost (no CapEx). Predictable, recurring subscription fees (OpEx). Usually more cost-effective in the short and medium term. Potentially higher in the long run.

    • Control and Customisation
      Self-Hosting (On-Premises): Absolute control over hardware, software, data, and configuration. Ability to modify source code and deeply integrate with existing systems.
      Wazuh Cloud (SaaS): Limited control. Configuration within the options provided by the supplier. No ability to modify source code or access the underlying infrastructure.

    • Security and Responsibility
      Self-Hosting (On-Premises): Full responsibility for physical and digital security, backups, disaster recovery, and regulatory compliance rests with the organisation.
      Wazuh Cloud (SaaS): Shared responsibility. The provider is responsible for the security of the cloud infrastructure. The organisation is responsible for configuring security policies and managing access.

    • Deployment and Maintenance
      Self-Hosting (On-Premises): Complex and time-consuming deployment, especially in a distributed architecture. Requires continuous maintenance, monitoring, updating, and tuning by qualified personnel.
      Wazuh Cloud (SaaS): Quick and simple deployment (service activation). Maintenance, updates, and ensuring availability are entirely the provider’s responsibility, minimising the burden on the internal IT team.

    • Scalability
      Self-Hosting (On-Premises): Scalability is possible but requires careful planning, purchase of additional hardware, and manual reconfiguration of the cluster. It can be a slow and costly process.
      Wazuh Cloud (SaaS): High flexibility and scalability. Resources (computing power, disk space) can be dynamically increased or decreased depending on needs, often with a few clicks.

    • Data Sovereignty
      Self-Hosting (On-Premises): Full data sovereignty. The organisation has 100% control over the physical location of its data, which facilitates compliance with local legal and regulatory requirements (e.g., GDPR).
      Wazuh Cloud (SaaS): Dependent on the location of the provider’s data centres. May pose challenges related to GDPR compliance if data is stored outside the EU. Potential risk of access on demand by foreign governments.

    Voices from the Battlefield – A Balanced Analysis of Expert and User Opinions

    A theoretical analysis of a platform’s capabilities and architecture is one thing, but its true value is verified in the daily work of security analysts and system administrators. The voices of users from around the world, from small businesses to large enterprises, paint a nuanced picture of Wazuh—a tool that is incredibly powerful, but also demanding. An analysis of opinions gathered from industry portals such as Gartner, G2, Reddit, and specialist forums allows us to identify both its greatest advantages and its most serious challenges.

    The Praise – What Works Brilliantly?

    Several key strengths that attract organisations to Wazuh are repeatedly mentioned in reviews and case studies.

    • Cost as a Game-Changer: For many users, the fundamental advantage is the lack of licensing fees. One information security manager stated succinctly: “It costs me nothing.” This financial accessibility is seen as crucial, especially for smaller entities. Wazuh is often described as a “great, out-of-the-box SOC solution for small to medium businesses” that could not otherwise afford this type of technology.
    • Powerful, Built-in Functionalities: Users regularly praise specific modules that deliver immediate value. File Integrity Monitoring (FIM) and Vulnerability Detection are at the forefront. One reviewer described them as the “biggest advantages” of the platform. FIM is key to detecting unauthorised changes to critical system files, which can indicate a successful attack, while the vulnerability module automatically scans systems for known, unpatched software. The platform’s ability to support compliance with regulations such as HIPAA or PCI DSS is also a frequently highlighted asset, allowing organisations to verify their security posture with a few clicks.
    • Flexibility and Customisation: The open nature of Wazuh is seen as a huge advantage by technical teams. The ability to customise rules, write their own decoders, and integrate with other tools gives a sense of complete control. “I personally love the flexibility of Wazuh, as a system administrator I can think of any use case and I know I’ll be able to leverage Wazuh to pull the logs and create the alerts I need,” wrote Joanne Scott, a lead administrator at one of the companies using the platform.

    The Criticism – Where Do the Challenges Lie?

    Equally numerous and consistent are the voices pointing to significant difficulties and challenges that must be considered before deciding on deployment.

    • Complexity and a Steep Learning Curve: This is the most frequently raised issue. Even experienced security specialists admit that the platform is not intuitive. One expert described it as having a “steep learning curve for newcomers.” Another user noted that “the initial installation and configuration can be a bit complicated, especially for users without much experience in SIEM systems.” This confirms that Wazuh requires dedicated time for learning and experimentation.
    • The Need for Tuning and “Alert Fatigue”: This is probably the biggest operational challenge. Users agree that the default, “out-of-the-box” configuration of Wazuh generates a huge amount of noise—low-priority alerts that flood analysts and make it impossible to detect real threats. One team reported receiving “25,000 to 50,000 low-level alerts per day” from just two monitored endpoints. Without an intensive and, importantly, continuous process of tuning rules, disabling irrelevant alerts, and creating custom ones tailored to the specific environment, the system is practically useless. One of the more blunt comments on a Reddit forum stated that “out of the box it’s kind of shitty.”
    • Performance and Stability at Scale: While Wazuh performs well in small and medium-sized environments, deployments involving hundreds or thousands of agents can encounter serious stability problems. In one dramatic post on a Google Groups forum, an administrator managing 175 agents described daily problems with agents disconnecting and server services hanging, forcing him to restart the entire infrastructure daily. This shows that scaling Wazuh requires not only more powerful hardware but also deep knowledge of optimising its components.
    • Documentation and Support for Different Systems: Although Wazuh has extensive online documentation, some users find it insufficient for more complex problems. There are also complaints that the predefined decoders (pieces of code responsible for parsing logs) work great for Windows systems but are often outdated or incomplete for other platforms, including popular network devices. This forces administrators to search for unofficial, community-created solutions on platforms like GitHub, which introduces an additional element of risk and uncertainty.

    An analysis of these starkly different opinions leads to a key conclusion. Wazuh should not be seen as a ready-to-use product that can simply be “switched on.” It is rather a powerful security framework—a set of advanced tools and capabilities from which a qualified team must build an effective defence system. Its final value depends 90% on the quality of the implementation, configuration, and competence of the team, and only 10% on the software itself. The users who succeed are those who talk about “configuring,” “customising,” and “integrating.” Those who encounter problems are often those who expected a ready-made solution and were overwhelmed by the default configuration. The story of one expert who, during a simulated attack on a default Wazuh installation, “didn’t catch a single thing” is the best proof of this. An investment in a self-hosted Wazuh is really an investment in the people who will manage it.

    Consequences of the Choice – Risk and Reward in the Open-Source Ecosystem

    The decision to base critical security infrastructure on a self-hosted, open-source solution like Wazuh goes beyond a simple technical assessment of the tool itself. It is a strategic immersion into the broader ecosystem of Open Source Software (OSS), which brings with it both enormous benefits and serious, often underestimated, risks.

    The Ubiquity and Hidden Risks of Open-Source Software

    Open-source software has become the foundation of the modern digital economy. According to the 2025 “Open Source Security and Risk Analysis” (OSSRA) report, as many as 97% of commercial applications contain OSS components. They form the backbone of almost every system, from operating systems to libraries used in web applications. However, this ubiquity has its dark side. The same report reveals alarming statistics:

    • 86% of the applications studied contained at least one vulnerability in the open-source components they used.
    • 91% of applications contained components that were outdated and had newer, more secure versions available.
    • 81% of applications contained high or critical risk vulnerabilities, many of which already had publicly available patches.

    One of the biggest challenges is the problem of transitive dependencies. This means that a library a developer consciously adds to a project itself depends on dozens of other libraries, which in turn depend on others. This creates a complex and difficult-to-trace chain of dependencies, meaning organisations often have no idea exactly which components are running in their systems and what risks they carry. This is the heart of the software supply chain security problem.

    By choosing to self-host Wazuh, an organisation takes on full responsibility for managing not only the platform itself but its entire technology stack. This includes the operating system it runs on, the web server, and, above all, key components like the Wazuh Indexer (OpenSearch) and its numerous dependencies. This means it is necessary to track security bulletins for all these elements and react immediately to newly discovered vulnerabilities.

    The Advantages of the Open-Source Model: Transparency and the Power of Community

    In opposition to these risks, however, stand fundamental advantages that make the open-source model so attractive, especially in the field of security.

    • Transparency and Trust: In the case of commercial, closed-source solutions (“black boxes”), the user must fully trust the manufacturer’s declarations regarding security. In the open-source model, the source code is publicly available. This provides the opportunity to conduct an independent security audit and verify that the software does not contain hidden backdoors or serious flaws. This transparency builds fundamental trust, which is invaluable in the context of systems designed to protect a company’s most valuable assets.
    • The Power of Community: Wazuh boasts one of the largest and most active communities in the open-source security world. Users have numerous support channels at their disposal, such as the official Slack, GitHub forums, a dedicated subreddit, and Google Groups. It is there, in the heat of real-world problems, that custom decoders, innovative rules, and solutions to problems not found in the official documentation are created. This collective wisdom is an invaluable resource, especially for teams facing unusual challenges.
    • Avoiding Vendor Lock-in: By choosing a commercial solution, an organisation becomes dependent on a single vendor—their product development strategy, pricing policy, and software lifecycle. If the vendor decides to raise prices, end support for a product, or go bankrupt, the client is left with a serious problem. Open source provides freedom. An organisation can use the software indefinitely, modify and develop it, and even use the services of another company specialising in support for that solution if they are not satisfied with the official support.

    This duality of the open-source nature leads to a deeper conclusion. The decision to self-host Wazuh fundamentally changes the organisation’s role in the security ecosystem. It ceases to be merely a passive consumer of a ready-made security product and becomes an active manager of software supply chain risk. When a company buys a commercial SIEM, it pays the vendor to take responsibility for managing the risk associated with the components from which its product is built. It is the vendor who must patch vulnerabilities in libraries, update dependencies, and guarantee the security of the entire stack. By choosing the free, self-hosted Wazuh, the organisation consciously (or not) takes on all this responsibility itself. To do this in a mature way, it is no longer enough to just know how to configure rules in Wazuh. It becomes necessary to implement advanced software management practices, such as Software Composition Analysis (SCA) to identify all components and their vulnerabilities, and to maintain an up-to-date “Software Bill of Materials” (SBOM) for the entire infrastructure. This significantly raises the bar for competency requirements and shows that the decision to self-host has deep, structural consequences for the entire IT and security department.
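
    To make this concrete, here is a minimal sketch of what such an SCA/SBOM workflow can look like, using the third-party open-source tools syft and grype (neither is part of Wazuh, and the image name below is illustrative):

    # Generate an SBOM (Software Bill of Materials) for a container image
    # from the stack, e.g. the Wazuh indexer:
    syft wazuh/wazuh-indexer:latest -o spdx-json > indexer-sbom.json

    # Scan the SBOM against known-vulnerability databases:
    grype sbom:./indexer-sbom.json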

    The Verdict – Who is Self-Hosted Wazuh For?

    The analysis of the Wazuh platform in a self-hosted model leads to an unequivocal conclusion: it is a solution with enormous potential, but burdened with equally great responsibility. The key trade-off that runs through every aspect of this technology can be summarised as follows: self-hosted Wazuh offers unparalleled control, absolute data sovereignty, and zero licensing costs, but in return requires significant, often underestimated, investments in hardware and, above all, in highly qualified personnel capable of managing a complex and demanding system that requires constant attention.

    This is not a solution for everyone. Attempting to implement it without the appropriate resources and awareness of its nature is a straight path to frustration, a false sense of security, and ultimately, project failure.

    Profile of the Ideal Candidate

    Self-hosted Wazuh is the optimal, and often the only right, choice for organisations that meet most of the following criteria:

    • They have a mature and competent technical team: They have an internal security and IT team (or the budget to hire/train one) that is not afraid of working with the command line, writing scripts, analysing logs at a low level, and managing a complex Linux infrastructure.
    • They have strict data sovereignty requirements: They operate in highly regulated industries (financial, medical, insurance), in public administration, or in the defence sector, where laws (e.g., GDPR) or internal policies categorically require that sensitive data never leaves physically controlled infrastructure.
    • They operate at a large scale where licensing costs become a barrier: They are large enough that the licensing costs of commercial SIEM systems, which increase with data volume, become prohibitive. In such a case, investing in a dedicated team to manage a free solution becomes economically justified over a period of several years.
    • They understand they are implementing a framework, not a finished product: They accept the fact that Wazuh is a set of powerful building blocks, not a ready-made house. They are prepared for a long-term, iterative process of tuning, customising, and improving the system to fully match the specifics of their environment and risk profile.
    • They have a need for deep customisation: Their security requirements are so unique that standard, commercial solutions cannot meet them, and the ability to modify the source code and create custom integrations is a key value.

    Questions for Self-Assessment

    For all other organisations, especially smaller ones with limited human resources and without strict sovereignty requirements, a much safer and more cost-effective solution will likely be to use the Wazuh Cloud service or another commercial SIEM/XDR solution.

    Before making the final, momentous decision, every technical leader and business manager should ask themselves and their team a series of honest questions:

    1. Have we realistically assessed the Total Cost of Ownership (TCO)? Does our budget account not only for servers but also for the full-time equivalents of specialists who will manage this platform 24/7, including their salaries, training, and the time needed to learn?
    2. Do we have the necessary expertise in our team? Do we have people capable of advanced rule tuning, managing a distributed cluster, diagnosing performance issues, and responding to failures in the middle of the night? If not, are we prepared to invest in their recruitment and development?
    3. What is our biggest risk? Are we more concerned about operational risk (system failure, human error, inadequate monitoring) or regulatory and geopolitical risk (breach of data sovereignty, third-party access)? How does the answer to this question influence our decision?
    4. Are we ready for full responsibility? Do we understand that by choosing self-hosting, we are taking responsibility not only for the configuration of Wazuh but for the security of the entire software supply chain on which it is based, including the regular patching of all its components?

    Only an honest answer to these questions will allow you to avoid a costly mistake and make a choice that will genuinely strengthen your organisation’s cybersecurity, rather than creating an illusion of it.

    Integrating Logs from Docker Applications with Wazuh SIEM

    In modern IT environments, containerisation using Docker has become the standard. It enables the rapid deployment and scaling of applications but also introduces new challenges in security monitoring. By default, logs generated by applications running in containers are isolated from the host system, which complicates their analysis by SIEM systems like Wazuh.

    In this post, we will show you how to break down this barrier. We will guide you step-by-step through the configuration process that will allow the Wazuh agent to read, analyse, and generate alerts from the logs of any application running in a Docker container. We will use the password manager Vaultwarden as a practical example.

    The Challenge: Why is Accessing Docker Logs Difficult?

    Docker containers have their own isolated file systems. Applications inside them most often send their logs to “standard output” (stdout/stderr), which is captured by Docker’s logging mechanism. The Wazuh agent, running on the host system, does not have default access to this stream or to the container’s internal files.

    To enable monitoring, we must make the application logs visible to the Wazuh agent. The best and cleanest way to do this is to configure the container to write its logs to a file and then share that file externally using a Docker volume.
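
    You can see this isolation for yourself: the container’s stdout stream is only reachable through the Docker CLI/API, and the JSON files behind it are not a stable interface for log collectors. A quick check, assuming a container named vaultwarden:

    # The application log stream is captured by Docker, not written to a normal file:
    docker logs --tail 5 vaultwarden

    # Under the default json-file driver the data lives in container-specific,
    # rotating files that the Wazuh agent cannot sensibly monitor:
    sudo ls /var/lib/docker/containers/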

    Step 1: Exposing Application Logs Outside the Container

    Our goal is to make the application’s log file appear in the host server’s file system. We will achieve this by modifying the docker-compose.yml file.

    1. Configure the application to log to a file: Many Docker images allow you to define the path to a log file using an environment variable. In the case of Vaultwarden, this is LOG_FILE.
    2. Map a volume: Create a mapping between a directory on the host server and a directory inside the container where the logs are saved.

    Here is an example of what a fragment of the docker-compose.yml file for Vaultwarden with the correct logging configuration might look like:

    version: "3"

    services:
      vaultwarden:
        image: vaultwarden/server:latest
        container_name: vaultwarden
        restart: unless-stopped
        volumes:
          # Volume for application data (database, attachments, etc.)
          - ./data:/data
        ports:
          - "8080:80"
        environment:
          # This variable instructs the application to write logs to a file inside the container
          - LOG_FILE=/data/vaultwarden.log

    What happened here?

    • LOG_FILE=/data/vaultwarden.log: We are telling the application to create a vaultwarden.log file in the /data directory inside the container.
    • ./data:/data: We are mapping the /data directory from the container to a data subdirectory in the location where the docker-compose.yml file is located (on the host).

    After saving the changes and restarting the container (docker-compose down && docker-compose up -d), the log file will be available on the server at a path like /opt/vaultwarden/data/vaultwarden.log.
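
    Before moving on, it is worth confirming that the file really is being written on the host (the path below assumes the docker-compose.yml lives in /opt/vaultwarden; log in to the application once to generate a few lines):

    sudo tail -n 5 /opt/vaultwarden/data/vaultwarden.log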

    Step 2: Configuring the Wazuh Agent to Monitor the File

    Now that the logs are accessible on the host, we need to instruct the Wazuh agent to read them.

    Open the agent’s configuration file:

    sudo nano /var/ossec/etc/ossec.conf

    Add the following block within the <ossec_config> section:

    <localfile>
      <location>/opt/vaultwarden/data/vaultwarden.log</location>
      <log_format>syslog</log_format>
    </localfile>

    Restart the agent to apply the changes:

    sudo systemctl restart wazuh-agent

    From now on, every new line in the vaultwarden.log file will be sent to the Wazuh manager.
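
    To confirm the agent has picked up the new <localfile> entry, check its own log after the restart; on a healthy agent an entry similar to “Analyzing file” should appear (exact wording may differ between Wazuh versions):

    sudo grep "vaultwarden.log" /var/ossec/logs/ossec.log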

    Step 3: Translating Logs into the Language of Wazuh (Decoders)

    The Wazuh manager is now receiving raw log lines, but it doesn’t know how to interpret them. We need to create decoders that will “teach” it to extract key information, such as the attacker’s IP address or the username.

    On the Wazuh manager server, edit the local decoders file:

    sudo nano /var/ossec/etc/decoders/local_decoder.xml

    Add the following decoders:

    <!-- Decoder for Vaultwarden logs -->
    <decoder name="vaultwarden">
      <prematch>vaultwarden::api::identity</prematch>
    </decoder>

    <!-- Decoder for failed login attempts in Vaultwarden -->
    <decoder name="vaultwarden-failed-login">
      <parent>vaultwarden</parent>
      <prematch>Username or password is incorrect. Try again. IP: </prematch>
      <regex>IP: (\S+)\. Username: (\S+)\.$</regex>
      <order>srcip, user</order>
    </decoder>

    Step 4: Creating Rules and Generating Alerts

    Once Wazuh can understand the logs, we can create rules that will generate alerts.

    On the manager server, edit the local rules file:

    sudo nano /var/ossec/etc/rules/local_rules.xml

    Add the following rule group:

    <group name="vaultwarden,">
      <rule id="100105" level="5">
        <decoded_as>vaultwarden</decoded_as>
        <description>Vaultwarden: Failed login attempt for user $(user) from IP address: $(srcip).</description>
        <group>authentication_failed,</group>
      </rule>

      <rule id="100106" level="10" frequency="6" timeframe="120">
        <if_matched_sid>100105</if_matched_sid>
        <description>Vaultwarden: Multiple failed login attempts (possible brute-force attack) from IP address: $(srcip).</description>
        <mitre>
          <id>T1110</id>
        </mitre>
        <group>authentication_failures,</group>
      </rule>
    </group>

    Note: Ensure that the rule id is unique and does not appear anywhere else in the local_rules.xml file. Change it if necessary.

    Step 5: Restart and Verification

    Finally, restart the Wazuh manager to load the new decoders and rules:

    sudo systemctl restart wazuh-manager

    To test the configuration, make several failed login attempts to your Vaultwarden application. After a short while, you should see level 5 alerts in the Wazuh dashboard for each attempt, and after exceeding the threshold (6 attempts in 120 seconds), a critical level 10 alert indicating a brute-force attack.
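
    You can also verify the decoders and rules without touching the application, using the wazuh-logtest utility on the manager. Paste a sample log line into the interactive session; the line below is only an example of the Vaultwarden format, which may differ between versions:

    sudo /var/ossec/bin/wazuh-logtest
    # Example line to paste into the session:
    # [2025-01-01 12:00:00.000][vaultwarden::api::identity][ERROR] Username or password is incorrect. Try again. IP: 203.0.113.10. Username: admin@example.com.
    # The output should show srcip/user being extracted and rule 100105 matching.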

    Summary

    Integrating logs from applications running in Docker containers with the Wazuh system is a key element in building a comprehensive security monitoring system. The scheme presented above—exposing logs to the host via a volume and then analysing them with custom decoders and rules—is a universal approach that you can apply to virtually any application, not just Vaultwarden. This gives you full visibility of events across your entire infrastructure, regardless of the technology it runs on.

  • Ubuntu Pro: More Than Just a Regular System. A Comprehensive Guide to Services and Benefits

    Ubuntu Pro: More Than Just a Regular System. A Comprehensive Guide to Services and Benefits

    Canonical, the company behind the world’s most popular Linux distribution, offers an extended subscription called Ubuntu Pro. This service, available for free for individual users on up to five machines, elevates the standard Ubuntu experience to the level of corporate security, compliance, and extended technical support. What exactly does this offer include, and is it worth using?

    Ubuntu Pro is the answer to the growing demands for cybersecurity and stability of operating systems, both in commercial and home environments. The subscription integrates a range of advanced services that were previously reserved mainly for large enterprises, making them available to a wide audience. A key benefit is the extension of the system’s life cycle (LTS) from 5 to 10 years, which provides critical security updates for thousands of software packages.

    A Detailed Review of the Services Offered with Ubuntu Pro

    To fully understand the value of the subscription, you should look at its individual components. After activating Pro, the user gains access to a services panel that can be freely enabled and disabled depending on their needs.

    1. ESM-Infra & ESM-Apps: Ten Years of Peace of Mind

    The core of the Pro offering is the Expanded Security Maintenance (ESM) service, divided into two pillars:

    • esm-infra (Infrastructure): Guarantees security patches for over 2,300 packages from the Ubuntu main repository for 10 years. This means the operating system and its key components are protected against newly discovered vulnerabilities (CVEs) for much longer than in the standard LTS version.
    • esm-apps (Applications): Extends protection to over 23,000 packages from the community-supported universe repository. This is a huge advantage, as many popular applications, programming libraries, and tools we install every day come from there. Thanks to esm-apps, they also receive critical security updates for a decade.

    In practice, this means that a production server or workstation with an LTS version of the system can run safely and stably for 10 years without the need for a major system upgrade.
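
    If you want to see what ESM actually covers on a specific machine, the Pro client can report it. A quick check (assuming a recent ubuntu-advantage-tools package providing the pro command):

    pro security-status
    # Summarises how many installed packages come from main and universe,
    # and which of them are eligible for esm-infra / esm-apps updates.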

    2. Livepatch: Kernel Updates Without a Restart

    The Canonical Livepatch service is one of the most appreciated tools in environments requiring maximum uptime. It allows the installation of critical and high-risk security patches for the Linux kernel while it is running, without the need to reboot the computer. For server administrators running key services, this is a game-changing feature – it eliminates downtime and allows for an immediate response to threats.

    End of server restarts. The Livepatch service revolutionises Linux updates

    Updating the operating system’s kernel without having to reboot the machine is becoming the standard in environments requiring continuous availability. The Canonical Livepatch service allows critical security patches to be installed in real-time, eliminating downtime and revolutionising the work of system administrators.

    In a digital world where every minute of service unavailability can generate enormous losses, planned downtime for system updates is becoming an ever greater challenge. The answer to this problem is the Livepatch technology, offered by Canonical, the creators of the popular Ubuntu distribution. It allows for the deployment of the most important Linux kernel security patches without the need to restart the server.

    How does Livepatch work?

    The service runs in the background, monitoring for available security updates marked as critical or high priority. When such a patch is released, Livepatch applies it directly to the running kernel. This process is invisible to users and applications, which can operate without any interruptions.
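
    Enabling and checking Livepatch on a Pro-attached machine takes two commands (the status output format may vary between client versions):

    sudo pro enable livepatch
    canonical-livepatch status
    # "patchState: applied" (or "nothing-to-apply") indicates the running
    # kernel is fully patched without a reboot.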

    “For administrators managing a fleet of servers on which a company’s business depends, this is a game-changing feature,” a cybersecurity expert comments. “Instead of planning maintenance windows in the middle of the night and risking complications, we can respond instantly to newly discovered threats, maintaining one hundred percent business continuity.”

    Who benefits most?

    This solution is particularly valuable in sectors such as finance, e-commerce, telecommunications, and healthcare, where systems must operate 24/7. With Livepatch, companies can meet rigorous service level agreements (SLAs) while maintaining the highest standard of security.

    Eliminating the need to restart not only saves time but also minimises the risk associated with restarting complex application environments.

    Technology such as Canonical Livepatch sets a new direction in IT infrastructure management. It shifts the focus from reactive problem-solving to proactive, continuous system protection. In an age of growing cyber threats, the ability to instantly patch vulnerabilities, without affecting service availability, is no longer a convenience, but a necessity.

    3. Landscape: Central Management of a Fleet of Systems

    Landscape is a powerful tool for managing and administering multiple Ubuntu systems from a single, central dashboard. It enables remote updates, machine status monitoring, user and permission management, and task automation. Although its functionality may be limited in the free plan, in commercial environments it can save administrators hundreds of hours of work.

    Landscape: How to Master a Fleet of Ubuntu Systems from One Place?

    In today’s IT environments, where the number of servers and workstations can reach hundreds or even thousands, manually managing each system separately is not only inefficient but virtually impossible. Canonical, the company behind the most popular Linux distribution – Ubuntu, provides a solution to this problem: Landscape. It’s a powerful tool that allows administrators to centrally manage an entire fleet of machines, saving time and minimising the risk of errors.

    What is Landscape?

    Landscape is a system management platform that acts as a central command centre for all Ubuntu machines in your organisation. Regardless of whether they are physical servers in a server room, virtual machines in the cloud, or employees’ desktop computers, Landscape enables remote monitoring, management, and automation of key administrative tasks from a single, clear web browser.

    The main goal of the tool is to simplify and automate repetitive tasks that consume most of administrators’ time. Instead of logging into each server separately to perform updates, you can do so for an entire group of machines with a few clicks.

    Key Features in Practice

    The strength of Landscape lies in its versatility. The most important functions include:

    • Remote Updates and Package Management: Landscape allows for the mass deployment of security and software updates on all connected systems. An administrator can create update profiles for different groups of servers (e.g., production, test) and schedule their installation at a convenient time, minimising the risk of downtime.
    • Real-time Monitoring and Alerts: The platform continuously monitors key system parameters, such as processor load, RAM usage, disk space availability, and component temperature. If predefined thresholds are exceeded, the system automatically sends alerts, allowing for a quick response before a problem escalates into a serious failure.
    • User and Permission Management: Creating, modifying, and deleting user accounts on multiple machines simultaneously becomes trivially simple. Landscape enables central management of permissions, which significantly increases the level of security and facilitates audits.
    • Task Automation: One of the most powerful features is the ability to remotely run scripts on any number of machines. This allows you to automate almost any task – from routine backups and the installation of specific software to comprehensive configuration audits.
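
    As an illustration, enrolling a machine into a self-hosted Landscape server is a matter of installing and configuring the client; the hostname below is hypothetical, and “standalone” is the account name conventionally used for on-premises installations:

    sudo apt install landscape-client
    sudo landscape-config --computer-title "web-01" \
      --account-name standalone \
      --url https://landscape.example.com/message-system \
      --ping-url http://landscape.example.com/ping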

    Free Plan vs. Commercial Environments

    Canonical offers Landscape on a subscription basis, but also provides a free “Landscape On-Premises” plan that allows you to manage up to 10 machines at no cost. This is an excellent option for small businesses, enthusiasts, or for testing purposes. Although the functionality in this plan may be limited compared to the full commercial versions, it provides a solid insight into the platform’s capabilities.

    However, it is in large commercial environments that Landscape shows its true power. For companies managing dozens or hundreds of servers, investing in a license quickly pays for itself. Reducing the time needed for routine tasks from days to minutes translates into real financial savings and allows administrators to focus on more strategic projects. Experts estimate that implementing central management can save hundreds of hours of work per year.

    Landscape is an indispensable tool for any organisation that takes the management of its Ubuntu-based infrastructure seriously. Centralisation, automation, and proactive monitoring are key elements that not only increase efficiency and security but also allow for scaling operations without a proportional increase in costs and human resources. In an age of digital transformation, effective management of a fleet of systems is no longer a luxury, but a necessity.

    4. Real-time Kernel: Real-time Precision

    For specific applications, such as industrial automation, robotics, telecommunications, or stock trading systems, predictability and determinism are crucial. The Real-time Kernel is a special version of the Ubuntu kernel with integrated PREEMPT_RT patches, which minimises delays and guarantees that the highest priority tasks are executed within strictly defined time frames.

    In a world where machine decisions must be made in fractions of a second, standard operating systems are often unable to meet strict timing requirements. The answer to these challenges is the real-time operating system kernel (RTOS). Ubuntu, one of the most popular Linux distributions, is entering this highly specialised market with a new product: the Real-time Kernel.

    What is it and why is it important?

    The Real-time Kernel is a special version of the Ubuntu kernel in which a set of patches called PREEMPT_RT have been implemented. Their main task is to modify how the kernel manages tasks, so that the highest priority processes can pre-empt (interrupt) lower-priority ones almost immediately. In practice, this eliminates unpredictable delays (so-called latency) and guarantees that critical operations will be executed within a strictly defined, repeatable time window.

    “The Ubuntu real-time kernel provides industrial-grade performance and resilience for software-defined manufacturing, monitoring, and operational technologies,” said Mark Shuttleworth, CEO of Canonical.

    For sectors such as industrial automation, this means that PLC controllers on the assembly line can process data with absolute precision, ensuring continuity and integrity of production. In robotics, from assembly arms to autonomous vehicles, timing determinism is crucial for safety and smooth movement. Similarly, in telecommunications, especially in the context of 5G networks, the infrastructure must handle huge amounts of data with ultra-low latency, which is a necessary condition for service reliability. Stock trading systems, where milliseconds decide on transactions worth millions, also belong to the group of beneficiaries of this technology.

    How does it work? Technical context

    The PREEMPT_RT patches, developed for years by the Linux community, transform a standard kernel into a fully pre-emptible one. Mechanisms such as spinlocks (locks that protect against simultaneous access to data), which in a traditional kernel cannot be interrupted, become pre-emptible in the RT version. In addition, hardware interrupt handlers are transformed into threads with a specific priority, which allows for more precise management of processor time.

    Thanks to these changes, the system is able to guarantee that a high-priority task will gain access to resources in a predictable, short time, regardless of the system’s load by other, less important processes.

    The integration of PREEMPT_RT with the official Ubuntu kernel (available as part of the Ubuntu Pro subscription) is a significant step towards the democratisation of real-time systems. This simplifies the deployment of advanced solutions in industry, lowering the entry barrier for companies that until now had to rely on niche, often closed and expensive RTOS systems. The availability of a stable and supported real-time kernel in a popular operating system can accelerate innovation in the fields of the Internet of Things (IoT), autonomous vehicles, and smart factories, where precision and reliability are not an option but a necessity.
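
    On an Ubuntu Pro-attached machine, the real-time kernel is enabled like any other Pro service; note that it replaces the generic kernel and requires a reboot:

    sudo pro enable realtime-kernel
    # After rebooting, the running kernel should identify itself as PREEMPT_RT:
    uname -v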

    5. USG (Ubuntu Security Guide): Auditing and Security Hardening

    USG is a tool for automating the processes of system hardening and auditing for compliance with rigorous security standards, such as CIS Benchmarks or DISA-STIG. Instead of manually configuring hundreds of system settings, an administrator can use USG to automatically apply recommended policies and generate a compliance report.

    In an age of growing cyber threats and increasingly stringent compliance requirements, system administrators face the challenge of manually configuring hundreds of settings to secure IT infrastructure. Canonical, the company behind the popular Linux distribution, offers the Ubuntu Security Guide (USG) tool, which automates the processes of system hardening and auditing, ensuring compliance with key security standards, such as CIS Benchmarks and DISA-STIG.

    What is the Ubuntu Security Guide and how does it work?

    The Ubuntu Security Guide is an advanced command-line tool, available as part of the Ubuntu Pro subscription. Its main goal is to simplify and automate the tedious tasks associated with securing Ubuntu operating systems. Instead of manually editing configuration files, changing permissions, and verifying policies, administrators can use ready-made security profiles.

    USG uses the industry-recognised OpenSCAP tool (an implementation of the Security Content Automation Protocol, SCAP) as its backend, which ensures the consistency and reliability of the audits performed. The process is simple and is based on two key commands:

    • usg audit [profile] – Scans the system for compliance with the selected profile (e.g., cis_level1_server) and generates a detailed report in HTML format. This report indicates which security rules are met and which require intervention.
    • usg fix [profile] – Automatically applies configuration changes to adapt the system to the recommendations contained in the profile.

    As Canonical emphasises in its official documentation, USG was designed to “simplify the DISA-STIG hardening process by leveraging automation.”
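
    In practice, a typical USG session on a Pro-attached server looks like the sketch below (the profile name follows the cis_level1_server example mentioned above; report locations may differ between versions):

    sudo pro enable usg
    sudo apt update && sudo apt install usg
    sudo usg audit cis_level1_server   # scans and writes an HTML compliance report
    sudo usg fix cis_level1_server     # applies the profile's recommendations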

    Compliance with CIS and DISA-STIG at Your Fingertips

    For many organisations, especially in the public, financial, and defence sectors, compliance with international security standards is not just good practice but a legal and contractual obligation. CIS Benchmarks, developed by the Center for Internet Security, and DISA-STIG (Security Technical Implementation Guides), required by the US Department of Defence, are collections of hundreds of detailed configuration guidelines.

    Manually implementing these standards is extremely time-consuming and prone to errors. USG addresses this problem by providing predefined profiles that map these complex requirements to specific, automated actions. Example configurations managed by USG include:

    • Password policies: Enforcing appropriate password length, complexity, and expiration period.
    • Firewall configuration: Blocking unused ports and restricting access to network services.
    • SSH security: Enforcing key-based authentication and disabling root account login.
    • File system: Setting restrictive mounting options, such as noexec and nosuid on critical partitions.
    • Deactivation of unnecessary services: Disabling unnecessary daemons and services to minimise the attack surface.

    The ability to customise profiles using so-called “tailoring files” allows administrators to flexibly implement policies, taking into account the specific needs of their environment, without losing compliance with the general standard.

    Consequences of Non-Compliance and the Role of Automation

    Ignoring standards such as CIS or DISA-STIG carries serious consequences. Apart from the obvious increase in the risk of a successful cyberattack, organisations expose themselves to severe financial penalties, loss of certification, and serious reputational damage. Non-compliance can lead to the loss of key contracts, especially in the government sector.

    Security experts agree that compliance automation tools are crucial in modern IT management. They allow not only for a one-time implementation of policies but also for continuous monitoring and maintenance of the desired security state in dynamically changing environments.

    The Ubuntu Security Guide is a response to the growing complexity in the field of cybersecurity and regulations. By shifting the burden of manual configuration to an automated and repeatable process, USG allows administrators to save time, minimise the risk of human error, and provide measurable proof of compliance with global standards. In an era where security is the foundation of digital trust, tools like USG are becoming an indispensable part of the arsenal of every IT professional managing Ubuntu-based infrastructure.

    6. Anbox Cloud: Android in the Cloud at Scale

    Anbox Cloud is a platform that allows you to run the Android system in cloud containers. This is a solution aimed mainly at mobile application developers, companies in the gaming industry (cloud gaming), or automotive (infotainment systems). It enables mass application testing, process automation, and streaming of Android applications with ultra-low latency.

    How to Install and Configure Ubuntu Pro? A Step-by-Step Guide

    Activating Ubuntu Pro is simple and takes only a few minutes.

    Requirements:

    • Ubuntu LTS version (e.g., 18.04, 20.04, 22.04, 24.04).
    • Access to an account with sudo privileges.
    • An Ubuntu One account (which can be created for free).

    Step 1: Get your subscription token

    1. Go to the ubuntu.com/pro website and log in to your Ubuntu One account.
    2. You will be automatically redirected to your Ubuntu Pro dashboard.
    3. In the dashboard, you will find a free personal token. Copy it.

    Step 2: Connect your system to Ubuntu Pro

    Open a terminal on your computer and execute the command below, substituting your copied token for [YOUR_TOKEN]:

    sudo pro attach [YOUR_TOKEN]

    The system will connect to Canonical’s servers and automatically enable default services, such as esm-infra and livepatch.

    Step 3: Manage services

    You can check the status of your services at any time with the command:

    pro status --all

    You will see a list of all available services along with information on whether they are enabled or disabled.

    To enable a specific service, use the enable command. For example, to activate esm-apps:

    sudo pro enable esm-apps


    Similarly, to disable a service, use the disable command:

    sudo pro disable landscape

    Alternative: Configuration via a graphical interface

    On Ubuntu Desktop systems, you can also manage your subscription through a graphical interface. Open the “Software & Updates” application, go to the “Ubuntu Pro” tab, and follow the instructions to activate the subscription using your token.


    Summary

    Ubuntu Pro is a powerful set of tools that significantly increases the level of security, stability, and management capabilities of the Ubuntu system. Thanks to the generous free subscription offer for individual users, everyone can now take advantage of features that until recently were the domain of corporations. Whether you are a developer, a small server administrator, or simply a conscious user who cares about long-term support, activating Ubuntu Pro is a step that is definitely worth considering.

  • Windows 10 Reaches End-of-Life. Which System to Choose in 2025?

    Windows 10 Reaches End-of-Life. Which System to Choose in 2025?

    The Dawn of a New Computing Era – Navigating the Windows 10 End-of-Life

    The 14th of October 2025 Deadline: What This Really Means for Your Device

    Windows 10 support will officially end on the 14th of October 2025. This date marks the end of Microsoft’s provision of ongoing maintenance and protection for this operating system. After this critical date, Microsoft will no longer provide:

    • Technical support for any issues. This means there will be no official technical help or troubleshooting support from Microsoft.
    • Software updates, which include performance enhancements, bug fixes, and compatibility improvements. The system will become static in terms of its core functionality.
    • Most importantly, security updates or patches will no longer be issued. This is the most significant consequence for user security.

    It’s important to understand that a Windows 10 computer will continue to function after this date. It won’t suddenly stop working or become useless. However, continuing to operate it without security patches carries serious risks.

    Microsoft 365 app support on Windows 10 will also end on the 14th of October 2025. While these applications will still work, Microsoft strongly recommends upgrading to Windows 11 to avoid performance and reliability issues over time. It should be noted that Microsoft will continue to provide security updates for Microsoft 365 on Windows 10 for an additional three years, until the 10th of October 2028. Support for non-subscription versions of Office (2016, 2019) will also end on the 14th of October 2025, on all operating systems. Office 2021 and 2024 (including LTSC versions) will still work on Windows 10 but will no longer be officially supported.

    The Risks of Staying on an Old System: Why an Unsupported OS is a Liability

    Remaining on an unsupported operating system brings with it a number of serious risks that can have far-reaching consequences for security, performance, and compliance.

    • Security Risks: The Biggest Concerns: Without regular security updates from Microsoft, Windows 10 systems will become increasingly vulnerable to cyberattacks. Hackers actively seek and exploit newly discovered vulnerabilities in outdated systems that will no longer be patched after support ends. This can lead to a myriad of security issues, including unauthorised access to sensitive data, ransomware attacks, and breaches of confidential financial or customer information.

    The risk of an unsupported operating system is not a sudden, immediate failure, but a gradually increasing exposure, growing over time as new vulnerabilities are found and left unpatched. Every new vulnerability discovered globally that affects Windows 10 will remain an open door for attackers on unsupported systems. The security posture of an unsupported Windows 10 PC is therefore not static; it deteriorates continuously as more vulnerabilities become public knowledge and are integrated into attacker tools. While some users might believe that “common sense,” a firewall, and an antivirus are sufficient for “a few years,” this approach fails to account for the dynamic and escalating nature of cyber threats. Users who choose to remain on Windows 10 without the ESU programme aren’t just risking a single, isolated attack; they are exposing themselves to an ever-growing and increasingly dangerous attack surface. Even with cautious user behaviour, the sheer number of unpatched vulnerabilities will eventually make such a system an easy target for malicious actors, drastically increasing the probability of a successful attack over time.

    • Software Incompatibility and Performance Issues: As the broader tech ecosystem progresses, software developers will inevitably shift their focus to Windows 11 and newer operating systems, leaving Windows 10 behind. This will, over time, cause a number of problems for Windows 10 users:
    • Slower Performance: The lack of ongoing updates and optimisations can cause the system to slow down, use resources inefficiently, and experience an overall decrease in performance.
    • Application Crashes: Critical business tools or popular consumer applications that rely on modern system architectures or APIs may cease to function correctly, or at all, hindering day-to-day tasks.
    • Limited Vendor Support: IT and software vendors are likely to prioritise newer systems like Windows 11, making it difficult and potentially more expensive to find support for Windows 10 issues.
    • Hardware Upgrade Pressure: Businesses and individuals may face additional challenges if their systems no longer meet the hardware requirements for newer software, forcing them into costly upgrades or replacements.
    • Compliance and Regulatory Risks (Especially for Businesses): For industries subject to specific security and compliance regulations (e.g., healthcare, finance, government), staying on an unsupported operating system can pose significant risks. Many regulatory frameworks explicitly require companies to use supported, up-to-date software to ensure adequate data protection and security measures. Continuing to use Windows 10 past the end-of-life date could put an organisation at risk of failing audits, which could result in hefty fines, penalties, and even loss of certification.

    Delaying the transition to a new operating system does not necessarily save money; in fact, it can lead to significantly higher costs in the long run. The costs of waiting include emergency upgrades, potential downtime, and unplanned hardware replacements, and they extend well beyond the direct costs of the update itself. Delaying the transition doesn’t avoid costs; it merely defers them, often with significant multipliers due to urgency, disruption, and unforeseen consequences. For businesses, the cost extends far beyond direct financial penalties: a security breach or non-compliance incident on an unsupported OS can lead to severe reputational damage, loss of customer trust, legal liability, and long-term operational disruptions that are often far more expensive and difficult to recover from than a planned, proactive upgrade. Delaying the transition is a false economy, a deferral of inevitable costs that are likely to be compounded by unexpected crises, legal repercussions, and reputational damage. Proactive planning and investment now can prevent far greater losses in the future.

    Upgrading to Windows 11 – A Smooth Transition

    Windows Update

    For many users, the most straightforward and recommended path will be to upgrade to Windows 11, Microsoft’s latest operating system. This option provides continuity in a familiar Windows ecosystem while offering expanded features, enhanced security, and long-term support directly from Microsoft.

    Is Your PC Ready for Windows 11? Demystifying the System Requirements

    To upgrade directly from an existing Windows 10 installation, your device must be running Windows 10, version 2004 or later, and have the 14 September 2021 security update or later installed. These are preconditions for the upgrade process itself.

    Minimum Hardware Requirements for Windows 11: Microsoft has established specific hardware baselines to ensure that Windows 11 delivers a consistent and secure experience. Your computer must meet or exceed the following specifications:

    • Processor: 1 gigahertz (GHz) or faster with two or more cores on a compatible 64-bit processor or System on a Chip (SoC).
    • RAM: 4 gigabytes (GB) or more.
    • Storage: A storage device of 64 GB or greater. Note that additional storage may be required over time for updates and specific features.
    • System Firmware: UEFI, with Secure Boot capability. This refers to the modern firmware interface that replaces the older BIOS.
    • TPM: Trusted Platform Module (TPM) version 2.0. This is a cryptographic processor that enhances security.
    • Graphics Card: Compatible with DirectX 12 or later with WDDM 2.0 driver.
    • Display: A high-definition (720p) display that is greater than 9 inches diagonally, with 8 bits per colour channel.
    • Internet Connection and Microsoft Account: Required for Windows 11 Home to complete the initial device setup on first use, and generally essential for updates and certain features.

    Key Requirements Nuances (TPM and Secure Boot): These two requirements are often the most common sticking points for users with otherwise capable hardware.

    Many PCs shipped within the last 5 years are technically capable of supporting Trusted Platform Module version 2.0 (TPM 2.0), but it may be disabled by default in the UEFI BIOS settings. This is particularly true for retail PC motherboards used by individuals who build their own computers. Secure Boot is an important security feature designed to prevent malicious software from loading during the computer’s startup. Most modern computers are Secure Boot capable, but similar to TPM, there may be settings that make the PC appear not to be Secure Boot capable. These settings can often be changed within the computer’s firmware (BIOS).

    The initial “incompatible” message from Microsoft’s PC Health Check app can be misleading, potentially leading users to believe they need to buy a new computer when their existing one is fully capable. TPM 2.0 and Secure Boot are often features that can simply be enabled on existing hardware: most PCs shipped within the last five years support TPM 2.0 but may not be configured to use it, and most modern computers are Secure Boot capable even when their current settings make them appear not to be. Educating users on how to check and enable these critical BIOS/UEFI settings is therefore essential for a smooth, cost-effective, and environmentally friendly transition to Windows 11, preventing unnecessary hardware waste.

    Unlocking Windows 11: A Guide to Checking and Enabling Compatibility

    Microsoft provides the PC Health Check app to assess your device’s readiness for Windows 11. This application will indicate if your system meets the minimum requirements.

    How to Check Your TPM 2.0 Status:

    • Press the Windows key + R to open the Run dialogue box, then type “tpm.msc” (without quotes) and select OK.
    • If a message appears saying “Compatible TPM not found,” your PC may have the TPM disabled. You’ll need to enable it in the BIOS.
    • If a TPM is ready for use, check “Specification Version” under the “TPM Manufacturer Information” section to see if it’s version 2.0. If it is lower than 2.0, the device does not meet the Windows 11 requirements.

    How to Enable TPM and Secure Boot: These settings are managed via the UEFI BIOS (the computer’s firmware). The exact steps and labels vary depending on the device manufacturer, but the general method of access is as follows:

    • Go to Settings > Update & Security > Recovery and select Restart now under the “Advanced startup” section.
    • On the next screen, choose Troubleshoot > Advanced options > UEFI Firmware Settings > Restart to apply changes.
    • In the UEFI BIOS, these settings are sometimes located in a submenu called “Advanced,” “Security,” or “Trusted Computing.”
    • The option to enable TPM may be labelled as “Security Device,” “Security Device Support,” “TPM State,” “AMD fTPM switch,” “AMD PSP fTPM,” “Intel PTT,” or “Intel Platform Trust Technology.”
    • To enable Secure Boot, you will typically need to switch your computer’s boot mode from the legacy BIOS mode (also known as “CSM”) to UEFI (Unified Extensible Firmware Interface).

    Beyond the Basics: Key Windows 11 Features and Benefits for Different Users

    Windows 11 is not just a security update; it introduces a range of new features and enhancements designed to boost productivity, improve the gaming experience, and provide a better overall user experience.

    • Productivity and UI Enhancements:
    • Redesigned Shell: Windows 11 features a fresh, modern visual design influenced by elements of the cancelled Windows 10X project. This includes a centred Start menu, a separate “Widgets” panel replacing the old Live Tiles, and new window management features.
    • Snap Layouts: This feature allows users to easily utilise available desktop space by opening apps in pre-configured layouts that intelligently adjust to the screen size and dimensions, speeding up workflow by an average of 50%.
    • Desktops: Users can create separate virtual desktops for different projects or work streams and instantly switch between them from the taskbar, which helps with organisation.
    • Microsoft Teams Integration: The Microsoft Teams collaboration platform is deeply integrated into the Windows 11 UI, accessible directly from the taskbar. This simplifies communication compared to Windows 10, where setup was more difficult. Skype is no longer included by default.
    • Live Captions: A system-wide feature that allows users to enable real-time live captions for videos and online meetings.
    • Improved Microsoft Store: The Microsoft Store has been redesigned, allowing developers to distribute Win32 applications, Progressive Web Applications (PWAs), and other packaging technologies. Microsoft also plans to allow third-party app stores (such as the Epic Games Store) to distribute their clients.
    • Android App Integration: A brand-new feature for Windows, enabling native integration of Android apps into the taskbar and UI via the new Microsoft Store. Users can access around 500,000 apps from the Amazon Appstore, including popular titles such as Disney Plus, TikTok, and Netflix.
    • Seamless Redocking: When docking or undocking from an external display, Windows 11 remembers how apps were arranged, providing a smooth transition back to your preferred layout.
    • Voice Typing/Voice Access: While voice typing is available on both systems, Windows 11 introduces comprehensive Voice Access for system navigation.
    • Digital Pen Experience: Offers an enhanced writing experience for users with digital pens.
    • Gaming Enhancements: Windows 11 includes gaming technologies from the Xbox Series X and Series S consoles, aiming to set a new standard in PC gaming.
    • DirectStorage: A unique feature that significantly reduces game loading times by allowing game data to be streamed directly from an NVMe SSD to the graphics card, bypassing CPU bottlenecks. This allows for faster gameplay and more detailed, expansive game worlds. It should be noted that Microsoft has confirmed DirectStorage will also be available for Windows 10, but NVMe SSDs are key to its benefits.
    • Auto HDR: Automatically adds High Dynamic Range (HDR) enhancements to games built on DirectX 11 or later, improving contrast and colour accuracy for a more immersive visual experience on HDR monitors.
    • Xbox Game Pass Integration: The Xbox app is deeply integrated into Windows 11, providing easy access to the extensive game library for Game Pass subscribers.
    • Game Mode: The updated Game Mode in Windows 11 optimises performance by concentrating system resources on the game, reducing the utilisation of background applications to free up CPU for better performance.
    • DirectX 12 Ultimate: Provides a visual uplift for games with features like ray tracing for realistic lighting, variable-rate shading for better performance, and mesh shaders for more complex scenes.
    • Security and Performance Improvements:
    • Enhanced Security: Windows 11 features enhanced security protocols, including more secure and reliable connection methods, advanced network security (encryption, firewall protection), and built-in Virtual Private Network (VPN) protocols. It supports Wi-Fi 6, WPA3, encrypted DNS, and advanced Bluetooth connections.
    • TPM 2.0: Windows 11 includes enhanced security by leveraging the Trusted Platform Module (TPM) 2.0, an important building block for security-related features such as Windows Hello and BitLocker.
    • Windows Hello: Provides a secure and convenient sign-in, replacing passwords with stronger authentication methods based on a PIN or biometrics (face or fingerprint recognition).
    • Smart App Control: This feature provides an extra layer of security, only allowing reputable applications to be installed on the Windows 11 PC.
    • Increased Speed and Efficiency: Windows 11 is designed to better process information in the background, leading to a smoother overall user experience. Less powerful devices (with less RAM or limited CPU power) might even feel a noticeable increase in performance.
    • Faster Wake-Up: It claims a faster wake-up from sleep mode.
    • Smaller Update Sizes: Windows 11 cumulative updates are smaller and are processed in the background.
    • Latest Support: As the newest version, Windows 11 benefits from continuous development, including monthly bug fixes, new storage alerts, and feature improvements like Windows Spotlight. This ensures the device remains fully protected and open to future upgrades.

    Windows 11 is presented as more than just an incremental upgrade; it is a platform designed for a “hybrid world” and offers “impressive improvements” that “accelerate device performance.” The integration of Android applications, new widgets, advanced security features, and next-generation gaming technologies like Auto HDR and DirectStorage (even if DirectStorage is coming to Windows 10, the full package is in Windows 11), collectively paint a picture of an operating system that is being actively developed with future computing trends in mind. Its continuous updates and development cement its position as a long-term supported platform within the Microsoft ecosystem. For users who want to leverage the latest technologies, integrate their mobile experiences, benefit from ongoing feature development, or simply ensure their system remains current and secure for the foreseeable future, upgrading to Windows 11 is a clear strategic choice. It represents an investment in future productivity, entertainment, and security, not just a necessary reaction to the Windows 10 end-of-life.

    Upgrade Considerations: Performance on Older Hardware, Changes in User Experience

    While Windows 11 is optimised for performance and may even speed up less powerful devices, it’s important to manage expectations. An older PC that just meets the minimum requirements may not deliver the same “accelerated user experience” as a brand new device designed for Windows 11.

    Users should also be prepared for changes to the user interface and workflow. While many find the new design “simple” and “clean,” critics have pointed to changes like the limitations in customising the taskbar and the difficulty in changing default apps as potential steps backwards from Windows 10. A period of adjustment to the new layout and navigation should be expected.

    Table: Windows 11 vs. Windows 10: Key Feature Upgrades

    Feature Category | Windows 10 Status/Description | Windows 11 Upgrade/Description
    User Interface | Traditional Start menu, Live Tiles | Centred Start menu, Widgets panel, new Snap Layouts
    Security | Basic security, no TPM 2.0 requirement | TPM 2.0 requirement, Windows Hello, Smart App Control, improved network protocols
    Gaming | Limited gaming features, no native DirectStorage | DirectStorage (requires NVMe SSD), Auto HDR, enhanced Game Mode, Xbox Game Pass integration, DirectX 12 Ultimate
    App Compatibility | No native Android app integration | Native Android app integration via the Microsoft Store
    Collaboration | Teams app as a separate install, more difficult setup | Deep Microsoft Teams integration with the taskbar
    Performance | Standard background process management | Better background processing, potential performance boost on less powerful devices, faster wake-up
    Support | Support ends 14 October 2025 | Continuous support, monthly bug fixes, new features

    Exploring Alternatives for Incompatible Hardware (and Beyond)

    image 117

    For users whose current hardware doesn’t meet the strict Windows 11 requirements, or for those simply looking for a different computing experience, there are several viable and attractive alternatives. These options can breathe new life into older machines, offer different philosophies on privacy and customisation, or cater to specific professional needs.

    The Extended Security Updates (ESU) Programme: A Short-Term Fix

    What ESU Offers and Its Critical Limitations: The Windows 10 Extended Security Updates (ESU) programme is designed to provide customers with an option to continue receiving security updates for their Windows 10 PCs after the end-of-support date. Specifically, it delivers “critical and important security updates” as defined by the Microsoft Security Response Center (MSRC) for Windows 10 version 22H2 devices. This programme aims to mitigate the immediate risk of malware and cybersecurity attacks for those not yet ready to upgrade.

    • Critical Limitations: It’s important to understand that ESU does not provide full continuation of support for Windows 10. It explicitly excludes:
    • New features.
    • Non-security, customer-requested updates.
    • Design change requests.
    • General technical support. Support is only provided for issues directly related to ESU licence activation, installation, and any regressions caused by ESU itself.

    Cost and Programme Duration: The ESU programme is a paid service. For individual consumers, Microsoft offers a few sign-up options:

    • At no extra cost if you sync your PC’s settings to a Microsoft account.
    • Redeeming 1,000 Microsoft Rewards points.
    • A one-time purchase of $30 (or local currency equivalent) plus applicable tax.

    All of these sign-up options provide extended security updates until the 13th of October 2026, and you can enrol in the programme at any time before that date. A single ESU licence can be used on up to 10 devices.

    The ESU programme is presented as an “option to extend usage” or “extra time before transitioning to Windows 11.” It delivers security updates only, with no new features, non-security updates, or general technical support. This means that while the immediate security risk is mitigated, the underlying issues of software incompatibility, missing performance optimisation, and declining vendor support (detailed earlier) will persist and likely worsen over time. The operating system becomes a stagnant, patched version of Windows 10, increasingly incompatible with modern software and hardware. The ESU programme is therefore a temporary fix, not a sustainable long-term solution. It is best suited to users who genuinely need a short grace period (up to one year) to save for new hardware, plan a more extensive migration, or manage a critical business transition. It should not be viewed as a strategy for using Windows 10 indefinitely, as it merely defers the inevitable move to a fully supported and evolving operating system.

    Embracing the Open Road: Linux Distributions

    image 118

    For many users with Windows 11 incompatible hardware, or for those looking for greater control, privacy, and performance, Linux offers a robust and diverse ecosystem of operating systems.

    Why Linux? Advantages for Performance, Security, and Customisation.

    • Free and Open Source: The vast majority of Linux distributions are completely free, and nearly all their components are open source. This fosters transparency, community development, and eliminates licensing fees.
    • Performance on Older Hardware: A significant advantage of many Linux distributions is their ability to run efficiently on older computers with limited RAM or slower processors. They are often streamlined to consume fewer resources than Windows, effectively “resurrecting” seemingly obsolete machines and making them feel snappy.
    • Security: Linux generally boasts a strong security posture due to its open-source nature (allowing for widespread inspection and rapid patching), robust permission systems, and a smaller target for malware compared to Windows.
    • Customisation: Linux offers unparalleled customisation options for the user interface, desktop environment, and overall workflow, allowing users to precisely tailor their computing experience to their preferences.
    • Stability and Reliability: Many distributions are known for being “dependable” and requiring “very little maintenance,” benefiting from the robustness of their underlying Linux architecture.
    • Community Support: The Linux community is vast, active, and generally welcoming, offering extensive online resources, forums, and willing assistance for new users.
    • Dual Boot Option: Users can easily install Linux alongside their Windows or macOS system, creating a dual-boot setup that allows them to choose the operating system to use at each startup. This is ideal for testing or for users who need access to both environments.

    Choosing Your Linux Companion: Tailored Recommendations for Every User.

    • For Windows Converts and Daily Use:
    • Linux Mint (XFCE Edition): This distribution has long been a favourite among Windows converts due to its traditional desktop layout. It is designed to be straightforward and intuitive, making users feel “at home” quickly. Linux Mint includes all the essentials out-of-the-box, such as a web browser, media player, and office suite, making it ready to use without extensive setup. It is described as very user-friendly, highly customisable, and “incredibly fast.”
    • Zorin OS Lite: Zorin OS Lite stands out for its balance of performance and aesthetics. It has a polished interface that closely resembles Windows, making the transition easy for former Windows users. Even on older systems (up to 15 years old), Zorin OS Lite provides a surprisingly modern experience without taxing system resources. It comes with essential apps and offers “Windows app support,” allowing users to run many Windows applications.
    • For Gamers and Power Users:
    • Pop!_OS: Aimed at STEM professionals and creators, Pop!_OS also provides an “amazing gaming experience.” Key features include “Hybrid Graphics” (allowing users to switch between battery-saving and high-power GPU modes, or to run individual apps on GPU power) and strong out-of-the-box support for popular gaming platforms like Steam, Lutris, and GameHub. It offers a simple and colourful layout.
    • Fedora (Workstation/Games Lab): Fedora Workstation (with GNOME) is the flagship edition, and Fedora also offers “Labs,” such as the “Games” Lab, which is a collection and showcase of games available in Fedora. Fedora tends to keep its kernel and graphics drivers very up to date, which is a significant advantage for gaming performance and compatibility. AMD graphics cards are typically “plug-and-play” on modern Linux distributions like Fedora. While Nvidia cards require “a bit of work,” most major distributions, including Fedora, provide straightforward ways to install Nvidia drivers directly from their software centres.
    • General Linux Gaming: Gaming on Linux has “infinitely improved” since 2017. Most Linux distributions now perform great for gaming as long as you install Steam and other launchers like Heroic, which leverage compatibility layers like Proton/Proton-GE. Users report being able to play “everything from old Win95 or DOS games all the way up to the latest releases.”
    • For Reviving Older Hardware (Low-spec PCs):
    • Puppy Linux: Designed to be extremely small, fast, and portable, Puppy Linux often runs entirely from RAM, allowing it to boot quickly and operate smoothly even on machines that seem hopelessly outdated. Despite its small size, it includes a complete set of applications for browsing, word processing, and media playback.
    • AntiX Linux: A no-frills distribution specifically designed for low-spec hardware. It is based on Debian but strips away the heavier desktop environments in favour of extremely lightweight window managers (such as IceWM and Fluxbox), keeping resource usage incredibly low (often under 200 MB of idle RAM). Despite its minimalism, AntiX remains surprisingly powerful and stable for daily tasks.
    • Other Lightweight Options: Linux Lite, Bodhi Linux, LXLE Linux, Tiny Core Linux, and Peppermint OS are also mentioned as excellent choices for older or low-spec hardware.

    Software Ecosystem: Office Suites, Creative Tools, and Running Windows Apps.

    • Office Suites:
    • LibreOffice: This is the most popular free and open-source office suite available for Linux. It is designed to be compatible with Microsoft Office/365 files, handling popular formats such as .doc, .docx, .xls, .xlsx, .ppt, and .pptx.
    • Compatibility Nuances: While it is generally compatible with simple documents, users should be aware that the “translation” between LibreOffice’s Open Document Format and Microsoft’s Office Open XML format is not always perfect. This can lead to imperfections, especially with complex formatting, macros, or documents that are exchanged and modified multiple times. Installing the Microsoft Core Fonts on Linux significantly improves fidelity (see the example after this list). For critical documents, you can first test LibreOffice on Windows or use the web version of Microsoft 365 to double-check compatibility before sharing.
    • Creative Tools:
    • GIMP (GNU Image Manipulation Program): A powerful, free, and open-source raster graphics editor (often considered an alternative to Adobe Photoshop). GIMP provides advanced tools for high-quality photo manipulation, retouching, image restoration, creative compositions, and graphic design elements like icons. It is cross-platform, available for Windows, macOS, and Linux.
    • Inkscape: A powerful, free, and open-source vector graphics editor (similar to Adobe Illustrator). Inkscape specialises in creating scalable graphics, making it ideal for tasks like logo creation, intricate illustrations, and vector-based designs where precision and quality-lossless scalability are paramount. It is also cross-platform.
    • Running Windows Applications (Gaming and General Software):
    • Wine (Wine Is Not an Emulator): A foundational compatibility layer that allows Windows software (including many older games and general applications) to run directly on Linux-based operating systems.
    • Proton: Developed by Valve in collaboration with CodeWeavers, Proton is a specialised compatibility layer built on a patched version of Wine. It is specifically designed to improve the performance and compatibility of Windows video games on Linux, integrating key libraries like DXVK (for translating Direct3D 9, 10, 11 to Vulkan) and VKD3D-Proton (for translating Direct3D 12 to Vulkan). Proton is officially distributed via the Steam client as “Steam Play.”
    • ProtonDB: An unofficial community website that crowdsources and displays data on the compatibility of various game titles with Proton, providing a rating scale from “Borked” (doesn’t work) to “Platinum” (works perfectly).
    • Proton’s Advantages over Pure Wine: Proton is a “tested distribution of Wine and its libraries,” offering a “nice overlay” that helps configure everything to “just work” for many games. It automatically handles dependencies and leverages performance-enhancing translation layers.
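
    As flagged in the compatibility notes above, installing the Microsoft Core Fonts closes one of the most common fidelity gaps when opening Office documents in LibreOffice. A minimal sketch for Debian/Ubuntu-based systems (package names differ on other distributions, and this package lives in the contrib/multiverse repository):

    # Install the Microsoft Core Fonts (Debian/Ubuntu family)
    sudo apt update
    sudo apt install ttf-mscorefonts-installer
    # Refresh the font cache so running applications pick the fonts up
    fc-cache -f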

    Historically, Linux was largely dismissed as a viable gaming platform. Taken together, however, the points above paint a picture of a dramatically improved and increasingly competitive Linux gaming environment; as noted earlier, nearly all Linux distros have become “infinitely better at gaming since 2017.” This improvement is directly tied to Valve’s significant investment in Proton, which has been a game-changer for running Windows titles. The emergence of gaming-focused distributions like Pop!_OS and Fedora’s “Games” Lab, along with the active community around ProtonDB, signals a deliberate and successful effort to make Linux a strong contender for gamers. It’s no longer just about getting games to run, but about achieving an “amazing gaming experience” with easy, great performance. For gamers with Windows 11 incompatible hardware, Linux is no longer a last resort but a genuinely competitive, and often superior, alternative for many titles, especially for those willing to engage with the community and learn a few new tools. This shift is a significant development, challenging the long-held belief that Windows is the only gaming OS.

    Key Linux Caveats:

    • Learning Curve: While distributions like Linux Mint and Zorin OS Lite are designed to be friendly for Windows converts, users completely new to Linux should still expect an initial learning curve. This often means getting used to package managers, file systems, and a different approach to installing software (see the short example after this list).
    • Hardware Driver Support: Modern Linux distributions have vastly improved hardware detection and driver support (e.g., AMD graphics cards are often plug-and-play, and Nvidia drivers are easily available via software tools). However, very new or niche hardware components may still require manual driver installation or troubleshooting, which can be a barrier for less technical users.
    • Gaming Anti-Cheat Limitations: A significant drawback for multiplayer gaming is that games using kernel-level anti-cheat software will typically not work on Linux. The creators of such anti-cheat systems are often unwilling to ship the user-level variants that would run on Linux, citing concerns about cheating. Games like Apex Legends have removed Linux support for this reason. This is a critical limitation for users whose primary gameplay involves such titles.
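
    To illustrate the package-manager point from the first caveat: rather than downloading installers from websites, Linux software is typically installed from a curated repository with a single command. A hedged sketch, with vlc standing in for any package:

    # Debian/Ubuntu family (Linux Mint, Zorin OS)
    sudo apt update && sudo apt install vlc
    # Fedora family
    sudo dnf install vlc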

    The entire success and rapid evolution of Linux as a viable desktop OS, particularly in areas like gaming (Proton, DXVK, VKD3D-Proton), is largely attributable to its open, community-driven development model, often amplified by corporate support (e.g., Valve’s investment in Proton). Unlike Windows’s centralised, proprietary development, Linux benefits from a distributed network of developers, which allows for rapid iteration, specialised forks (like Proton GE), and direct community feedback (like ProtonDB). This model fosters tremendous flexibility and often bleeding-edge performance, as developers can quickly address issues and adopt new technologies. However, it also means that support for highly proprietary or deeply integrated features (such as kernel-level anti-cheat) depends on the willingness of external, often profit-driven, developers to adapt their software, leading to the limitation mentioned above. Users embracing Linux are entering a dynamic, evolving ecosystem that offers unparalleled flexibility, privacy, and often superior performance on older hardware. It comes with an implicit understanding that while much works out of the box, specific challenges (such as certain proprietary software or anti-cheat) may require a degree of self-reliance, engagement with community resources, or acceptance of limitations. This highlights a fundamental philosophical difference in operating system development and support compared to the traditional proprietary model.

    Table: Recommended Linux Distributions for Different User Profiles

    User Profile | Recommended Distributions | Key Advantages | Key Caveats/Limitations
    Windows Converts / Daily Use | Linux Mint (XFCE Edition), Zorin OS Lite | Friendly interface, out-of-the-box apps, Windows app support (Zorin), snappy performance | Initial learning curve; Zorin OS Lite has a more polished interface than some other lightweight distros
    Gamer / Power User | Pop!_OS, Fedora (Workstation/Games Lab) | Gaming optimisations (Hybrid Graphics, Steam/Lutris/GameHub), up-to-date kernel/drivers, AMD plug-and-play | Anti-cheat issues in some online games; Nvidia driver installation may require “a bit of work”
    Reviving Older Hardware / Low-Spec PC | Puppy Linux, AntiX Linux, Linux Lite, Bodhi Linux, LXLE Linux, Tiny Core Linux, Peppermint OS | Low resource usage, snappy performance even on very old hardware; Puppy Linux runs from RAM; AntiX is minimalist | More minimalist UI, may require more technical knowledge for setup

    Cloud-Powered Rebirth: ChromeOS Flex

    image 119

    ChromeOS Flex is Google’s solution for transforming older Windows, Mac, or Linux devices into secure, cloud-based machines, offering many of the features available on native ChromeOS devices. It is particularly appealing for organisations and individuals looking to extend the life of existing hardware while benefiting from a modern, secure, and easy-to-manage operating system.

    Transforming an Old PC into a Secure, Cloud-Based Device.

    ChromeOS Flex allows you to install a lightweight, cloud-focused operating system on a variety of existing devices, including older Windows and Mac PCs. This can effectively “resurrect” older machines, making them run significantly faster and more responsively than they would with an outdated or resource-heavy operating system. It provides a familiar, simple, and web-centric computing experience that leverages Google’s cloud services.

    System Requirements and Installation Process.

    Minimum Requirements for ChromeOS Flex: While ChromeOS Flex can run on uncertified devices, Google does not guarantee performance, functionality, or stability on such systems. For an optimal experience, ensure your device meets the following minimum requirements:

    • Architecture: Intel or AMD x86-64-bit compatible device (it will not run on 32-bit CPUs).
    • RAM: 4 GB.
    • Internal Storage: 16 GB.
    • Bootable from USB: The system must be capable of booting from a USB drive.
    • BIOS: Full administrator access to the BIOS is required, as changes may need to be made to boot from the USB installer.
    • Processor and Graphics: Components manufactured before 2010 may result in a poor experience. Specifically, Intel GMA 500, 600, 3600, and 3650 graphics chipsets do not meet ChromeOS Flex performance standards.

    Installation Process: The ChromeOS Flex installation process typically involves two main steps:

    • Creating a USB Installer: You will need a USB drive of 8 GB or more (all of its contents will be erased). The recommended method is the “Chromebook Recovery Utility” Chrome browser extension on a ChromeOS, Windows, or Mac device. Alternatively, you can download the installer image directly from Google and write it with a tool such as the dd command-line utility on Linux (see the sketch after this list).
    • Booting and Installation: Boot the target device using the USB installer you created. You can choose to either install ChromeOS Flex permanently to the device’s internal storage or temporarily run it directly from the USB installer to test compatibility and performance.
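
    If you take the dd route on Linux, the sketch below shows the idea. The image file name and the /dev/sdX device path are placeholders; identify the correct device with lsblk first, because dd will overwrite the target without asking:

    # Identify the USB drive (e.g., /dev/sdb) before writing anything
    lsblk
    # Write the downloaded ChromeOS Flex image to the USB drive
    sudo dd if=chromeos_flex.bin of=/dev/sdX bs=4M status=progress conv=fsync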

    Benefits: Robust Security, Simplicity, and Performance on Lower-Spec Hardware.

    • Robust Security: ChromeOS Flex inherits many of ChromeOS’s strong security features, making it a highly secure option for older hardware:
    • Read-Only OS: The operating system is read-only, meaning it cannot run traditional executable files (.exe, etc.), which are common hiding places for viruses and ransomware. This reduces the attack surface significantly.
    • Sandboxing: The system’s architecture is segmented, with each webpage and app running in a confined, isolated environment. This ensures that malicious apps and files are always isolated and cannot access other parts of the device or data.
    • Automatic Updates: ChromeOS Flex receives full updates every 4 weeks and minor security fixes every 2-3 weeks. These updates operate automatically and in the background, ensuring constant protection against the latest threats without impacting user productivity.
    • Data Encryption: User data is automatically encrypted at rest and in transit, protecting it from unauthorised access even if the device is lost or stolen.
    • UEFI Secure Boot Support: While ChromeOS Flex devices do not contain a Google security chip, their bootloader has been checked and approved by Microsoft to optionally support UEFI Secure Boot. This can maintain the same boot security as Windows devices, preventing unknown third-party operating systems from being run.
    • Simplicity and Performance: ChromeOS Flex provides a streamlined, minimalist, and intuitive user experience. Its “cloud-first” design means it relies less on local processing power, allowing it to perform exceptionally well and fast even on older, low-spec hardware. This makes it an excellent choice for users focused primarily on web browsing, cloud-based productivity, and lightweight computing tasks.

    Limitations: Offline Capabilities, App Ecosystem, and Hardware-Level Security Nuances.

    While ChromeOS Flex offers many advantages, it’s important to be aware of its limitations, especially compared to a full ChromeOS or traditional desktop operating systems:

    • Offline Capabilities: As a cloud-focused OS, extensive offline work can be limited without specific web applications that support offline functionality.
    • App Ecosystem:
    • Google Play and Android Apps: Unlike full ChromeOS devices, ChromeOS Flex has limited support for Google Play and Android apps. Only some Android VPN apps can be deployed. This means the vast ecosystem of Android apps is largely unavailable.
    • Windows Virtual Machines (Parallels Desktop): ChromeOS Flex does not support running Windows virtual machines using Parallels Desktop.
    • Linux Development Environment: Support for the Linux development environment in ChromeOS Flex varies depending on the specific device model.
    • Hardware-Level Security Nuances:
    • No Google Security Chip/Verified Boot: ChromeOS Flex devices do not contain a Google security chip, which means the full ChromeOS “verified boot” procedure (a hardware-based security check) is not available. While UEFI Secure Boot is an alternative, it “cannot provide the security guarantees of ChromeOS Verified Boot.”
    • Firmware Updates: Unlike native ChromeOS devices, ChromeOS Flex devices do not automatically manage and update their BIOS or UEFI firmware. These updates must be supplied by the original equipment manufacturer (OEM) of the device and manually managed by device administrators.
    • TPM and Encryption: While ChromeOS Flex automatically encrypts user data, not all ChromeOS Flex devices have a supported Trusted Platform Module (TPM) to protect the encryption keys at a hardware level. Without a supported TPM, the data is still encrypted but may be more susceptible to attack. Check the certified models list to confirm whether your device has a supported TPM.

    ChromeOS Flex is presented as a highly secure alternative to an unsupported Windows 10, boasting features like a read-only OS, sandboxing, and automatic updates. However, it also details several security features that are either missing or limited compared to a native ChromeOS device: the lack of a Google security chip, the absence of a full ChromeOS Verified Boot (relying instead on the less robust UEFI Secure Boot), and the inconsistent presence of a supported TPM. This implies that while Flex offers significant security improvements over an unpatched Windows 10, it doesn’t achieve the top-tier, hardware-level security found in purpose-built Chromebooks. Users should be aware of this trade-off, understanding that while their older hardware gets a new lease of life and better protection, it won’t have the identical level of security as a newer, dedicated ChromeOS device.

    General Best Practices for Operating System Migration

    Regardless of the path you choose, the process of migrating an operating system requires careful planning and adherence to best practices to minimise risk and ensure a smooth transition.

    Data Backup

    Before any operating system change, including an upgrade or clean installation, it is crucial to perform a full system image backup. Data is vulnerable to unforeseen complications during the upgrade process, making a preventative backup a sound choice. This safeguards critical files, applications, and personalised settings, ensuring a smooth transition and the ability to restore your digital environment in the event of unexpected issues.

    You should use disk imaging technology, not just file copying. Operating systems like Windows are complex, and some data (e.g., passwords, preferences, app settings) lives outside regular files. A full disk image copies every bit of data, including files, folders, programmes, patches, preferences, settings, and the entire operating system, enabling a complete restoration of the system and applications if the migration goes wrong. Remember to account for hidden partitions that may contain important system-restore data.
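
    As a minimal illustration of the difference between file copying and disk imaging, the hedged sketch below images an entire drive, hidden partitions included, to a compressed file. The device and destination paths are assumptions; run it from a live USB so the source disk is not mounted while being read:

    # Image the whole disk: every partition, boot sector and recovery area
    sudo dd if=/dev/sda bs=4M status=progress | gzip > /mnt/backup/disk-image.img.gz
    # Restore later by reversing the pipe:
    # gunzip -c /mnt/backup/disk-image.img.gz | sudo dd of=/dev/sda bs=4M

    Dedicated imaging tools such as Clonezilla automate the same idea, adding verification and per-partition handling.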

    Software Compatibility Check

    Prior to migration, you should thoroughly check that all the applications and software you use are compatible with the new operating system. Incompatibility can lead to data loss, corruption, or inaccuracies, affecting the new system’s reliability and integrity. It is recommended to perform compatibility tests in a sandbox environment or a virtual machine to identify potential issues before the actual migration. The testing should cover various hardware configurations, software, and networks to ensure smooth operation.

    Driver Considerations

    A clean installation of an operating system will remove all drivers from the computer. While modern operating systems have enough generic drivers to get a basic system up and running, they will lack the specialised hardware drivers needed to run newer network cards, 3D graphics, and other components. It is recommended to have the drivers for key components like your network cards (Wi-Fi and/or wired) ready so that after the OS installation you can connect to the Internet to download the rest of the drivers. Drivers should be downloaded from the official websites of the hardware manufacturers to avoid performance issues or malware infections.
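
    On Linux, one simple precaution is to record which network hardware you have, and which kernel drivers it currently uses, before wiping the disk, so you know exactly what to fetch if anything is missing afterwards. A hedged sketch:

    # PCI network devices and the kernel driver each one uses
    lspci -k | grep -A3 -i network
    # USB Wi-Fi adapters appear here instead
    lsusb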

    Phased Approach and Testing

    An effective migration strategy should include a phased approach, breaking the process down into manageable stages. Each phase should have clearly defined goals and a rollback strategy in case issues arise. Before the migration, thorough testing should be conducted to identify potential problems and adjust configurations. After the migration, intensive monitoring and “hyper-care” support are essential to resolve any issues quickly and ensure the system stabilises in the new environment.

    User Training (for Organisations)

    For organisations, deployment preparation should include providing contextual training for end-users to quickly familiarise employees with the new systems and tasks. Creating IT sandbox environments for new applications can provide hands-on training for end-users, enabling employees to learn by doing without the risks of using live software.

    Conclusion

    The end of support for Windows 10 on the 14th of October 2025 represents an unavoidable turning point for all users. Continuing to use an unsupported operating system brings serious and escalating risks to security, performance, and compliance, which will only worsen over time. Delaying the migration decision is not a saving, but a deferral of costs that could be significantly higher in the event of unplanned outages or security breaches.

    For the majority of users whose hardware meets the minimum requirements, the most logical and future-proof solution is to upgrade to Windows 11. This system not only offers continuity within the familiar Microsoft environment but also provides significant improvements in user interface, productivity (e.g., Snap Layouts, Teams integration), gaming features (DirectStorage, Auto HDR), and most importantly, security (TPM 2.0, Smart App Control). Many PCs that initially seem incompatible can be made ready for Windows 11 through simple setting changes in the BIOS/UEFI, avoiding unnecessary spending on new hardware. Windows 11 is an investment in long-term stability, performance, and access to the latest technologies.

    For those whose hardware doesn’t meet the Windows 11 requirements, or for users seeking alternative experiences, other equally valuable paths are available. The Extended Security Updates (ESU) programme for Windows 10 offers short-term security protection until October 2026, but this is only a temporary fix that does not address software compatibility issues and the lack of new features.

    Linux distributions provide a robust and flexible alternative, capable of breathing new life into older hardware. They offer high performance, unmatched customisation, strong security, and a rich ecosystem of free software (e.g., LibreOffice, GIMP, Inkscape). Thanks to the development of Proton, Linux has also become a surprisingly competitive gaming platform, although certain limitations (e.g., kernel-level anti-cheat) still exist. Distributions such as Linux Mint and Zorin OS Lite are ideal for those transitioning from Windows, while Pop!_OS and Fedora will cater to the needs of gamers and advanced users.

    ChromeOS Flex is another option that allows you to transform older computers into lightweight, secure, and cloud-based devices. This is an excellent solution for users who value simplicity, speed, and solid security, although it comes with certain limitations regarding offline capabilities and Android app access.

    Regardless of the choice, a proactive approach is key. Any migration should be preceded by a complete data backup, a thorough software compatibility check, and preparation of the necessary drivers. Adopting a phased approach with testing before and after the migration will minimise the risk of disruptions.

    The end of support for Windows 10 is not just the end of an era, but also an opportunity to modernise, optimise, and adapt your computing environment to individual needs and the challenges of the future. Making an informed choice of operating system in 2025 is crucial for your computer’s security, performance, and usability for years to come.

  • From Zero to Karaoke Hero: A Complete Guide to Synchronised Lyrics in Jellyfin, TrueNAS, and Finamp

    From Zero to Karaoke Hero: A Complete Guide to Synchronised Lyrics in Jellyfin, TrueNAS, and Finamp

    Imagine a music service that not only stores your entire collection in lossless quality but also displays synchronised lyrics in real time, turning every listening session into a karaoke singalong. What’s more, it’s completely yours – free from adverts, subscriptions, and algorithmic tracking. This isn’t a futuristic vision but a completely achievable reality thanks to the power of open-source software. We can confirm: the karaoke feature works perfectly in both the Jellyfin web interface and the Finamp app on an iPhone. This comprehensive guide will walk you through every stage of building such a system – from installation to creating your own lyric files.

    In an era dominated by streaming giants, a movement of “self-hosting” enthusiasts is growing – people who independently manage their own data and services. Instead of entrusting their digital identity to corporations, they build their own private clouds, media servers, and much more. This article is the essence of that philosophy, showing you how to reclaim control over your music and enrich it with features you’d be hard-pressed to find with the competition.

    The Architecture of Your Private Spotify: Key Components

    Before we delve into the configuration, we need to understand the foundations on which we’ll build our music centre. Success depends on the harmonious collaboration of three key elements.

    • Jellyfin (Server Version 10.9+): This is the brain of the entire operation. Jellyfin is a free media server that catalogues and serves your music files. Version 10.9 was revolutionary, introducing a standardised, server-managed approach to handling song lyrics. This means all the “heavy lifting” involved in sourcing and processing lyrics happens on the server, and the client applications simply consume the ready-to-use data.
    • TrueNAS SCALE: This is a reliable and powerful operating system for your home server (NAS). Built on Linux, it offers official support for running applications like Jellyfin in isolated containers, which guarantees stability, security, and order in the system.
    • Finamp and Jellyfin App (Mobile Clients): These are your windows to the world of music. Finamp, especially in its redesigned beta version, is a favourite among iPhone users as it eliminates the problem of music stopping when the screen goes dark and handles displaying lyrics perfectly. Equally important, the latest versions of the official Jellyfin app also flawlessly support the synchronised lyrics feature.

    The Magic of Synchronised Lyrics: The Anatomy of an .lrc File

    Confirming that the karaoke feature works is exciting. To make full use of it, you need to understand where this “magic” comes from. The effect of highlighting text in perfect synchronisation with the music depends on the format of the downloaded file. The heart of this mechanism is a simple text file with an .lrc extension.

    • Synchronised Lyrics (.lrc/.elrc): This is the Holy Grail for karaoke fans. These files contain not only the song’s words but also precise time markers for each line.
    • Unsynchronised Lyrics (.txt): This is a simpler form, containing plain text. In this case, the application will simply scroll through it smoothly as the song plays, without highlighting individual verses.

    The structure of an .lrc file is incredibly simple. Each line of text is preceded by a time marker in the format [minutes:seconds.hundredths of a second].

    Example of an .lrc file structure:

    [ar: Song Artist]

    [ti: Song Title]

    [al: Album]

    [00:15.50]The first line of text appears after 15.5 seconds.

    [00:19.25]The second line of text comes in after 19.25 seconds.

    Become a Lyric Creator: How to Create Your Own .lrc File

    What if the LrcLib plugin can’t find the lyrics for your favourite niche song? Don’t worry! You can very easily create one yourself.

    1. Get the lyrics: Find the song’s words online and copy them.
    2. Open a text editor: Use any simple editor, like Notepad (Windows) or TextEdit (macOS).
    3. Synchronise with the music: Play the song and pause it at the beginning of each line to note the exact time (minutes and seconds).
    4. Format the file: Before each line of text, add the noted time in square brackets, e.g., [01:23.45]. The more accurate the hundredths of a second, the smoother the effect.
    5. Save the file: This is the most important step. Save the file in the same folder as the audio file, giving it the exact same name but changing the extension to .lrc.

    If the music file is: My Super Song.flac

    The lyric file must be called: My Super Song.lrc

    After saving the file, all you need to do is re-scan your library in Jellyfin, and the server will automatically detect and link your manually created lyrics with the song.
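
    If you hand-craft lyrics for a larger collection, a small script helps you spot which tracks still lack a companion file. A hedged sketch; the library path is a placeholder, and it only checks .flac files:

    # Report audio files without a sidecar .lrc file
    find "/mnt/pool/media/music" -type f -name "*.flac" | while read -r f; do
      [ -f "${f%.flac}.lrc" ] || echo "Missing lyrics: $f"
    done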

    Server Configuration – A Step-by-Step Guide

    1. Installing the LrcLib Plugin

    • In the Jellyfin dashboard, go to Plugins > Catalogue.
    • Search for and install the official “LrcLib” plugin from the Jellyfin repository. Avoid the outdated jellyfin-lyrics-plugin by Felitendo, which is no longer being developed and may cause errors.
    • Be sure to restart the Jellyfin server for the changes to take effect.
    image 116

    2. The Most Important Library Configuration

    This is a crucial, though unintuitive, step. By default, Jellyfin hides downloaded lyrics in a metadata folder, creating a “black box”. We’ll change this to have full control.

    • Go to Dashboard > Libraries…
    • Find your music library, click the menu (three dots), and select Manage library.
    • In the library settings, tick the option “Save lyrics to media folders”.

    3. Starting the Process

    • Go to Dashboard > Scheduled tasks.
    • Find the task “Download missing lyrics” and run it manually. Set a schedule (e.g., daily) so that newly added music is processed automatically.
    • Once the task is finished, run a new scan of the music library.

    When Something Goes Wrong – Advanced Troubleshooting

    Even with an ideal configuration, you might encounter problems. Here’s how to deal with them.

    • First Rule of Diagnosis: Always check if the lyrics are visible in the Jellyfin web interface in a browser before you start looking for problems in the mobile app. If they’re not on the server, the problem isn’t on the client side.
    • Troubleshooting Scanning and Metadata Refreshing: Sometimes, due to the specifics of the system’s operation, the Jellyfin user interface or database doesn’t refresh immediately after the first scan. This manifests as the lyrics still not being visible despite having completed all the steps. The solution is to run a second scan, this time selecting the more detailed “Search for missing metadata” option for the given library. This extra step often forces the system to re-analyse the folders and register the new .lrc files.
    • The Ultimate Weapon – Manual Cache Cleaning: The most persistent problem is Jellyfin’s aggressive metadata caching. The system creates an internal copy of the lyrics and often refuses to update it, even if the source .lrc file is changed. A simple refresh from the interface can be unreliable. The only 100% effective method is to manually delete the cached file from the server’s file system.
    1. Locate the Jellyfin configuration path on your TrueNAS server (e.g., /mnt/pool/ix-applications/jellyfin/config).
    2. Launch the Shell in the TrueNAS interface.
    3. Navigate to the lyrics cache directory using the cd command and your path: cd /mnt/pool/ix-applications/jellyfin/config/metadata/lyrics.
    4. Find and delete the problematic file using the find command. Replace "Song Title.lrc" with the actual file name: find . -type f -name "Song Title.lrc" -print -delete. The -print flag displays the file before it is deleted.
    5. In the Jellyfin interface, for the given song, select Refresh metadata with the Search for missing metadata mode. Jellyfin, forced into action, will download and process the lyrics anew.
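
    Put together, the purge boils down to the short sequence below, using the configuration path and placeholder song title from the steps above:

    # Remove a cached lyrics entry so Jellyfin is forced to fetch it anew
    cd /mnt/pool/ix-applications/jellyfin/config/metadata/lyrics
    find . -type f -name "Song Title.lrc" -print -delete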

    Long-Term Maintenance and Next Steps

    • Tagging Hygiene: The effectiveness of downloading lyrics depends on the quality of your music’s metadata. Use tools like MusicBrainz Picard to ensure your files have accurate and consistent tags.
    • Backups: Regularly back up your entire Jellyfin configuration folder (e.g., /mnt/pool/ix-applications/jellyfin/config) to protect your settings, metadata, and user data in case of failure (a short sketch follows this list).
    • External Access: Once you’ve mastered local streaming, the natural next step is to configure secure access from outside your home using Nginx Proxy Manager.
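
    A minimal sketch of such a backup, assuming the configuration path above and a hypothetical /mnt/pool/backups destination dataset:

    # Archive the Jellyfin configuration folder under a date-stamped name
    tar -czf "/mnt/pool/backups/jellyfin-config-$(date +%F).tar.gz" \
      -C /mnt/pool/ix-applications/jellyfin config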

    Congratulations! You’ve just built a fully functional, private, and significantly more powerful equivalent of commercial streaming services, with a sensational karaoke feature that will liven up any party or solo evening with headphones.

  • OpenLiteSpeed (OLS) with Redis. Fast Cache for WordPress Sites.

    OpenLiteSpeed (OLS) with Redis. Fast Cache for WordPress Sites.

    Managing a web server requires an understanding of the components that make up its architecture. Each element plays a crucial role in delivering content to users quickly and reliably. This article provides an in-depth analysis of a modern server configuration based on OpenLiteSpeed (OLS), explaining its fundamental mechanisms, its collaboration with the Redis caching system, and its methods of communication with external applications.

    OpenLiteSpeed (OLS) – The System’s Core

    The foundation of every website is the web server—the software responsible for receiving HTTP requests from browsers and returning the appropriate resources, such as HTML files, CSS, JavaScript, or images.

    What is OpenLiteSpeed?

    OpenLiteSpeed (OLS) is a high-performance, lightweight, open-source web server developed by LiteSpeed Technologies. Its key advantage over traditional servers, such as Apache in its default configuration, is its event-driven architecture.

    • Process-based model (e.g., Apache prefork): A separate process or thread is created for each simultaneous connection. This model is simple, but with high traffic, it leads to significant consumption of RAM and CPU resources, as each process, even if inactive, reserves resources.
    • Event-driven model (OpenLiteSpeed, Nginx): A single server worker process can handle hundreds or thousands of connections simultaneously. It uses non-blocking I/O operations and an event loop to manage requests. When a process is waiting for an operation (e.g., reading from a disk), it doesn’t block but instead moves on to handle another connection. This architecture provides much better scalability and lower resource consumption.

    Key Features of OpenLiteSpeed

    OLS offers a set of features that make it a powerful and flexible tool:

    • Graphical Administrative Interface (WebAdmin GUI): OLS has a built-in, browser-accessible admin panel that allows you to configure all aspects of the server—from virtual hosts and PHP settings to security rules—without needing to directly edit configuration files.
    • Built-in Caching Module (LSCache): One of OLS’s most important features is LSCache, an advanced and highly configurable full-page cache mechanism. When combined with dedicated plugins for CMS systems (e.g., WordPress), LSCache stores fully rendered HTML pages in memory. When the next request for the same page arrives, the server delivers it directly from the cache, completely bypassing the execution of PHP code and database queries.
    • Support for Modern Protocols (HTTP/3): OLS natively supports the latest network protocols, including HTTP/3 (based on QUIC). This provides lower latency and better performance, especially on unstable mobile connections.
    • Compatibility with Apache Rules: OLS can interpret mod_rewrite directives from .htaccess files, which is a standard in the Apache ecosystem. This significantly simplifies the migration process for existing applications without the need to rewrite complex URL rewriting rules.

    Redis – In-Memory Data Accelerator

    Caching is a fundamental optimisation technique that involves storing the results of costly operations in a faster access medium. In the context of web applications, Redis is one of the most popular tools for this task.

    What is Redis?

    Redis (REmote Dictionary Server) is an in-memory data structure store, most often used as a key-value database, cache, or message broker. Its power comes from storing all data in RAM, not on a hard drive. Accessing RAM is orders of magnitude faster than accessing SSDs or HDDs, as it’s a purely electronic operation that bypasses slower I/O interfaces.

    In a typical web application, Redis acts as an object cache. It stores the results of database queries, fragments of rendered HTML code, or complex PHP objects that are expensive to regenerate.
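
    The key-value idea is easy to see from the command line. A hedged sketch using redis-cli; the key name and payload are invented, and EX sets an expiry in seconds, which is how stale cache entries age out:

    # Cache a rendered fragment for five minutes
    redis-cli SET post:42:rendered "<article>...</article>" EX 300
    # A later request reads it straight back from RAM
    redis-cli GET post:42:rendered
    # How long until it expires?
    redis-cli TTL post:42:rendered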

    How Do OpenLiteSpeed and Redis Collaborate?

    The LSCache and Redis caching mechanisms don’t exclude each other; rather, they complement each other perfectly, creating a multi-layered optimisation strategy.

    Request flow (simplified):

    1. A user sends a request for a dynamic page (e.g., a blog post).
    2. OpenLiteSpeed receives the request. The first step is to check the LSCache.
      • LSCache Hit: If an up-to-date, fully rendered version of the page is in the LSCache, OLS returns it immediately. The process ends here. This is the fastest possible scenario.
      • LSCache Miss: If the page is not in the cache, OLS forwards the request to the appropriate external application (e.g., a PHP interpreter) to generate it.
    3. The PHP application begins building the page. To do this, it needs to fetch data from the database (e.g., MySQL).
    4. Before PHP executes costly database queries, it first checks the Redis object cache.
      • Redis Hit: If the required data (e.g., SQL query results) are in Redis, they are returned instantly. PHP uses this data to build the page, bypassing communication with the database.
      • Redis Miss: If the data is not in the cache, PHP executes the database queries, fetches the results, and then saves them to Redis for future requests.
    5. PHP finishes generating the HTML page and returns it to OpenLiteSpeed.
    6. OLS sends the page to the user and, at the same time, saves it to the LSCache so that subsequent requests can be served much faster.

    This two-tiered strategy ensures that both the first and subsequent visits to a page are maximally optimised. LSCache eliminates the need to run PHP, while Redis drastically speeds up the page generation process itself when necessary.
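
    You can observe the first tier of this strategy from outside the server: with the LiteSpeed Cache plugin active, responses should carry an X-LiteSpeed-Cache header. A hedged check (the domain is a placeholder):

    # The first request warms the cache; repeating it should report a hit
    curl -sI https://example.com/ | grep -i x-litespeed-cache
    # Expected on a warm cache: x-litespeed-cache: hit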

    Delegating Tasks – External Applications in OLS

    Modern web servers are optimised to handle network connections and deliver static files (images, CSS). The execution of application code (dynamic content) is delegated to specialised external programmes. This division of responsibilities increases stability and security.

    OpenLiteSpeed manages these programmes through the External Applications system. The most important types are described below:

    • LSAPI Application (LiteSpeed SAPI App): The most efficient and recommended method of communication with PHP, Python, or Ruby applications. LSAPI is a proprietary, optimised protocol that minimises communication overhead between the server and the application interpreter.
    • FastCGI Application: A more universal, standard protocol for communicating with external application processes. This is a good solution for applications that don’t support LSAPI. It works on a similar principle to LSAPI (by maintaining permanent worker processes), but with slightly more protocol overhead.
    • Web Server (Proxy): This type configures OLS to act as a reverse proxy. OLS receives a request from the client and then forwards it in its entirety to another server running in the background (the “backend”), e.g., an application server written in Node.js, Java, or Go. This is crucial for building microservices-based architectures.
    • CGI Application: The historical and slowest method. A new application process is launched for each request and is closed after returning a response. Due to the huge performance overhead, it’s only used for older applications that don’t support newer protocols.

    OLS routes traffic to the appropriate application using Script Handlers, which map file extensions (e.g., .php) to a specific application, or Contexts, which map URL paths (e.g., /api/) to a proxy type application.

    Communication Language – A Comparison of SAPI Architectures

    SAPI (Server Application Programming Interface) is an interface that defines how a web server communicates with an application interpreter (e.g., PHP). The choice of SAPI implementation has a fundamental impact on the performance and stability of the entire system.

    The Evolution of SAPI

    1. CGI (Common Gateway Interface): The first standard. Stable, but inefficient due to launching a new process for each request.
    2. Embedded Module (e.g., mod_php in Apache): The PHP interpreter is loaded directly into the server process. This provides very fast communication, but at the cost of stability (a PHP crash causes the server to crash) and security.
    3. FastCGI: A compromise between performance and stability. It maintains a pool of independent, long-running PHP processes, which eliminates the cost of constantly launching them. Communication takes place via a socket, which provides isolation from the web server.
    4. LSAPI (LiteSpeed SAPI): An evolution of the FastCGI model. It uses the same architecture with separate processes, but the communication protocol itself was designed from scratch to minimise overhead, which translates to even higher performance than standard FastCGI.

    SAPI Architecture Comparison Table

    Feature | CGI | Embedded Module (mod_php) | FastCGI | LiteSpeed SAPI (LSAPI)
    Process Model | New process per request | Shared process with server | Permanent external processes | Permanent external processes
    Performance | Low | Very high | High | Highest
    Stability / Isolation | Excellent | Low | High | High
    Resource Consumption | Very high | Moderate | Low | Very low
    Overhead | High (process launch) | Minimal (shared memory) | Moderate (protocol) | Low (optimised protocol)
    Main Advantage | Full isolation | Communication speed | Balanced performance & stability | Optimised performance & stability
    Main Disadvantage | Very low performance | Instability, security issues | More complex configuration | Technology specific to LiteSpeed

    Comparison of Communication Sockets

    Communication between processes (e.g., OLS and Redis, or OLS and PHP processes) occurs via sockets. The choice of socket type affects performance.

    Feature | TCP/IP Socket (on localhost) | Unix Domain Socket (UDS)
    Addressing | IP Address + Port (e.g., 127.0.0.1:6379) | File path (e.g., /var/run/redis.sock)
    Scope | Same machine or over a network | Same machine only (IPC)
    Performance (locally) | Lower | Higher
    Overhead | Higher (goes through the network stack) | Minimal (bypasses the network stack)
    Security Model | Firewall rules | File system permissions

    For local communication, UDS is a more efficient solution because it bypasses the entire operating system network stack, which reduces latency and CPU overhead. This is why it’s preferred in optimised configurations for connections between OLS, Redis, and LSAPI processes.
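
    In practice, switching Redis from TCP to a Unix domain socket is a small change in redis.conf. An illustrative excerpt; the socket path is an assumption, and port 0 additionally disables TCP listening when no remote client needs access:

    # redis.conf excerpt
    unixsocket /var/run/redis/redis.sock
    unixsocketperm 770
    port 0

    # Verify the socket responds:
    redis-cli -s /var/run/redis/redis.sock ping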

    Practical Implementation and Management

    To translate theory into practice, let’s analyse a real server configuration for the virtual host solutionsinc.co.uk.

    5.1 Analysis of the solutionsinc.co.uk Configuration Example

    1. External App Definition:
      • In the “External App” panel, a LiteSpeed SAPI App named solutionsinc.co.uk has been defined. This is the central configuration point for handling the dynamic content of the site.
      • Address: UDS://tmp/lshttpd/solutionsinc.co.uk.sock. This line is crucial. It informs OLS that a Unix Domain Socket (UDS) will be used to communicate with the PHP application, not a TCP/IP network socket. The .sock file at this path is the physical endpoint of this efficient communication channel.
      • Command: /usr/local/lsws/lsphp84/bin/lsphp. This is the direct path to the executable file of the LiteSpeed PHP interpreter, version 8.4. OLS knows it should run this specific programme to process scripts.
      • Other parameters, such as LSAPI_CHILDREN = 50 and memory limits, are used for precise resource and performance management of the PHP process pool.
    2. Linking with PHP Files (Script Handler):
      • The application definition alone isn’t enough. In the “Script Handler” panel, we tell OLS when to use it.
      • For the .php suffix (extension), LiteSpeed SAPI is set as the handler.
      • [VHost Level]: solutionsinc.co.uk is chosen as the Handler Name, which directly points to the application defined in the previous step.
      • Conclusion: From now on, every request for a file with the .php extension on this site will be passed through the UDS socket to one of the lsphp84 processes.
    image 114
    image 115

    This configuration is an excellent example of an optimised and secure environment: OLS handles the connections, while dedicated, isolated lsphp84 processes execute the application code, communicating through the fastest available channel—a Unix domain socket.
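
    A quick way to confirm this wiring on the server itself is to check that the socket file exists and that the worker pool is alive. A hedged sketch using the paths from the example:

    # The UDS endpoint OLS created for this virtual host
    ls -l /tmp/lshttpd/solutionsinc.co.uk.sock
    # The pool of lsphp worker processes serving LSAPI requests
    ps -C lsphp -o pid,etime,cmd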

    5.2 Managing Unix Domain Sockets (.sock) and Troubleshooting

    The .sock file, as seen in the solutionsinc.co.uk.sock example, isn’t a regular file. It’s a special file in Unix systems that acts as an endpoint for inter-process communication (IPC). Instead of communicating through the network layer (even locally), processes can write to and read data directly from this file, which is much faster.

    When OpenLiteSpeed launches an external LSAPI application, it creates such a socket file. The PHP processes listen on this socket for incoming requests from OLS.

    Practical tip: A ‘Stubborn’ .sock file

    Sometimes, after making changes to the PHP configuration (e.g., modifying the php.ini file or installing a new extension) and restarting the OpenLiteSpeed server (lsws), the changes may not be visible on the site. This happens because the lsphp processes may not have been correctly restarted with the server, and OLS is still communicating with the old processes through the existing, “old” .sock file.

    In such a situation, when a standard restart doesn’t help, an effective solution is to:

    1. Stop the OpenLiteSpeed server.
    2. Manually delete the relevant .sock file, for example, using the terminal command: rm /tmp/lshttpd/solutionsinc.co.uk.sock
    3. Restart the OpenLiteSpeed server.

    After the restart, OLS will not find an existing socket file and will be forced to create a new one. More importantly, it will launch a new pool of lsphp processes that load the fresh configuration from the php.ini file. Deleting the .sock file acts as a hard reset of the communication channel between the server and the PHP application, guaranteeing that all components are initialised from scratch with the current settings.
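
    Condensed into commands, the reset looks like the sketch below. The lsws service name matches a typical systemd installation of OpenLiteSpeed; if your system doesn’t use systemd, /usr/local/lsws/bin/lswsctrl stop and start do the same job:

    # Hard-reset the communication channel between OLS and PHP
    sudo systemctl stop lsws
    sudo rm -f /tmp/lshttpd/solutionsinc.co.uk.sock
    sudo systemctl start lsws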

    Summary

    The server configuration presented is a precisely designed system in which each element plays a vital role.

    • OpenLiteSpeed acts as an efficient, event-driven core, managing connections.
    • LSCache provides instant delivery of pages from the full-page cache.
    • Redis acts as an object cache, drastically accelerating the generation of dynamic content when needed.
    • LSAPI UDS creates optimised communication channels, minimising overhead and latency.

    An understanding of these dependencies allows for informed server management and optimisation to achieve maximum performance and reliability.