Category: Security

  • A Digital Noah’s Ark: How UrBackup and TrueNAS Protect Your Data

    A Digital Noah’s Ark: How UrBackup and TrueNAS Protect Your Data

    In today’s digital world, our lives—both personal and professional—are recorded as data. From family photos to crucial company databases, losing this information can be catastrophic. Despite this, many individuals and businesses still treat backups as an afterthought. Here is a complete guide to building a powerful, automated, and secure backup system using free tools: UrBackup, TrueNAS Scale, and Tailscale.

    Why You Need a Plan B: The Importance of Backups

    Imagine one morning your laptop’s hard drive refuses to work. Or the server that runs your business falls victim to a ransomware attack, and all your files are encrypted. These aren’t scenarios from science-fiction films but everyday reality. Hardware failure, human error, malicious software, or even theft—the threats are numerous.

    A backup is your insurance policy. It’s the only way to quickly and painlessly recover valuable data in the event of a disaster, minimising downtime and financial loss. Without it, rebuilding lost information is often impossible or astronomically expensive.

    The Golden Rule: The 3-2-1 Strategy

    In the world of data security, there is a simple but incredibly effective principle known as the 3-2-1 strategy. It states that you should have:

    • THREE copies of your data (the original and two backups).
    • On TWO different storage media (e.g., a disk in your computer and a disk in a NAS server).
    • ONE copy in a different location (off-site), in case of fire, flood, or theft at your main location.

    Having three copies of your data drastically reduces the risk of losing them all simultaneously. If one drive fails, you have a second. If the entire office is destroyed, you have a copy in the cloud or at home.

    A Common Misconception: Why RAID is NOT a Backup

    Many NAS server users mistakenly believe that a RAID configuration absolves them of the responsibility to make backups. This is a dangerous error.

    RAID (Redundant Array of Independent Disks) is a technology that provides redundancy and high availability, not data security. It protects against the physical failure of a hard drive. Depending on the configuration (e.g., RAID 1, RAID 5, RAID 6, or their RAID-Z equivalents in TrueNAS), the array can survive the failure of one or even two drives at the same time, allowing them to be replaced without data loss or system downtime.

    However, RAID will not protect you from:

    • Human error: An accidental file deletion is instantly replicated across all drives in the array.
    • Ransomware attack: Encrypted files are immediately synchronised across all drives.
    • Power failure or RAID controller failure: This can lead to the corruption of the entire array.
    • Theft or natural disaster: Losing the entire device means losing all the data.

    Remember: Redundancy protects against hardware failure; a backup protects against data loss.

    Your Backup Hub: UrBackup on TrueNAS Scale

    Creating a robust backup system doesn’t have to involve expensive subscriptions. An ideal solution is to combine the TrueNAS Scale operating system with the UrBackup application.

    • TrueNAS Scale: A powerful, free operating system for building NAS servers. It is based on Linux and offers advanced features such as the ZFS file system and support for containerised applications.
    • UrBackup: An open-source, client-server software for creating backups. It is extremely efficient and flexible, allowing for the backup of both individual files and entire disk images.

    The TrueNAS Protective Shield: ZFS Snapshots

    One of the most powerful features of TrueNAS, resulting from its use of the ZFS file system, is snapshots. A snapshot is an instantly created, read-only image of the entire file system at a specific moment. It works like freezing your data in time.

    Why is this so important in the context of ransomware?

    When ransomware attacks and encrypts files on a network share, these changes affect the “live” version of the data. However, previously taken snapshots remain untouched and unchanged because they are inherently read-only. In the event of an attack, you can restore the entire dataset to its pre-infection state in seconds, completely nullifying the effects of the attack.

    You can configure TrueNAS to automatically create snapshots (e.g., every hour) and retain them for a specified period. This creates an additional, incredibly powerful layer of protection that perfectly complements the backups performed by UrBackup.
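
    For reference, the same snapshot operations are available from the TrueNAS shell with standard ZFS commands. Below is a minimal sketch, assuming a pool named YourPool with a backups dataset (in practice, the GUI's Periodic Snapshot Tasks are the more convenient way to automate this):

    # Create a read-only, point-in-time snapshot of the backup dataset
    zfs snapshot YourPool/backups@manual-2024-05-01
    # List the snapshots that exist for that dataset
    zfs list -t snapshot -r YourPool/backups
    # Roll the dataset back to a snapshot (this discards any changes made after it)
    zfs rollback YourPool/backups@manual-2024-05-01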

    Advantages and Disadvantages of the Solution

    Advantages:

    • Full control and privacy: Your data is stored on your own hardware.
    • No licence fees: The software is completely free.
    • Exceptional efficiency: Incremental backups save space and time.
    • Flexibility: Supports Windows, macOS, Linux, physical servers, and VPS.
    • Disk image backups: Ability to perform a “bare-metal restore” of an entire system.

    Disadvantages:

    • Requires your own hardware: You need to have a NAS server.
    • Initial configuration: Requires some technical knowledge.
    • Full responsibility: The user is responsible for the security and operation of the server.

    Step-by-Step: Installation and Configuration

    1. Installing UrBackup on TrueNAS Scale

    1. Log in to the TrueNAS web interface.
    2. Navigate to the Apps section.
    3. Search for the UrBackup application and click Install.
    4. In the most important configuration step, you must specify the path where backups will be stored (e.g., /mnt/YourPool/backups).
    5. After the installation is complete, start the application and go to its web interface.

    2. Basic Server Configuration

    In the UrBackup interface, go to Settings. The most important initial options are:

    • Path to backup storage: This should already be set during installation.
    • Backup intervals: Set how often incremental backups (e.g., every few hours) and full backups (e.g., every few weeks) should be performed.
    • Email settings: Configure email notifications to receive reports on the status of your backups.

    3. Installing the Client on Computers

    The process of adding a computer to the backup system consists of two stages: registering it on the server and installing the software on the client machine.

    a) Adding a new client on the server:

    1. In the UrBackup interface, go to the Status tab.
    2. Click the blue + Add new client button.
    3. Select the option Add new internet/active client. This is recommended as it works both on the local network and over the internet (e.g., via Tailscale).
    4. Enter a unique name for the new client (e.g., “Annas-Laptop” or “Web-Server”) and click Add client.

    b) Installing the software on the client machine:

    1. After adding the client on the server, while still on the Status tab, you will see buttons to Download client for Windows and Download client for Linux.
    2. Click the appropriate button and select the name of the client you just added from the drop-down list.
    3. Download the prepared installation file (.exe or .sh). It is already fully configured to connect to your server.
    4. Run the installer on the client computer and follow the instructions.

    After a few minutes, the new client should connect to the server and appear on the list with an “Online” status, ready for its first backup.
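
    On a Linux client you can additionally confirm the connection from the command line with the bundled client tool; a minimal sketch, assuming the installer placed urbackupclientctl on the PATH:

    # Query the local UrBackup client service for its connection and backup status
    sudo urbackupclientctl status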

    Security Above All: Tailscale Enters the Scene

    How can you securely back up computers located outside your local network? The ideal solution is Tailscale. It creates a secure, private network (a mesh VPN) between all your devices, regardless of their location.

    Why use Tailscale with UrBackup?

    • Simplicity: Installation and configuration take minutes.
    • “Zero Trust” Security: Every connection is end-to-end encrypted.
    • Stable IP Addresses: Each device receives a static IP address from the 100.x.x.x range, which does not change even when the device moves to a different physical location.
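
    For reference, joining a Linux machine (a VPS or the UrBackup host) to your tailnet usually takes only a few commands; a minimal sketch using Tailscale's official install script:

    # Install Tailscale via the official convenience script
    curl -fsSL https://tailscale.com/install.sh | sh
    # Authenticate this device and join your tailnet
    sudo tailscale up
    # Print the stable 100.x.x.x address assigned to this device
    tailscale ip -4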

    What to do if the IP address changes?

    If for some reason you need to change the IP address of the UrBackup server (e.g., after switching from another VPN to Tailscale), the procedure is simple:

    1. Update the address on the UrBackup server: In Settings -> Internet/Active Clients, enter the new, correct server address (e.g., urbackup://100.x.x.x).
    2. Download the updated installer: On the Status tab, click Download client, select the offline client from the list, and download a new installation script for it.
    3. Run the installer on the client: Running the new installer will automatically update the configuration on the client machine.

    Managing and Monitoring Backups

    The UrBackup interface provides all the necessary tools to supervise the system.

    • Status: The main dashboard showing a list of all clients, their online/offline status, and the status of their last backup.
    • Activities: A live view of currently running operations, such as file indexing or data transfer.
    • Backups: A list of all completed backups for each client, with the ability to browse and restore files.
    • Logs: A detailed record of all events, errors, and warnings—an invaluable tool for diagnosing problems.
    • Statistics: Charts and tables showing disk space usage by individual clients over time.

    Backing Up Databases: Do It Right!

    Never back up a database by simply copying its files from the disk while the service is running! This risks creating an inconsistent copy that will be useless. The correct method is to perform a “dump” using tools like mysqldump or mariadb-dump.

    Method 1: All Databases to a Single File

    A simple approach, ideal for small environments.

    Command: mysqldump --all-databases -u [user] -p[password] > /path/to/backup/all_databases.sql

    Method 2: Each Database to a Separate File (Recommended)

    A more flexible solution. The script below will automatically save each database to a separate, compressed file. It should be run periodically (e.g., via cron) just before the scheduled backup by UrBackup.

    #!/bin/bash
    set -o pipefail  # make the $? check after the dump reflect a mysqldump failure, not just gzip's exit code
    
    # Configuration
    BACKUP_DIR="/var/backups/mysql"
    DB_USER="root"
    DB_PASS="YourSuperSecretPassword"
    
    # Check if user and password are provided
    if [ -z "$DB_USER" ] || [ -z "$DB_PASS" ]; then
        echo "Error: DB_USER or DB_PASS variables are not set in the script."
        exit 1
    fi
    
    # Create backup directory if it doesn't exist
    mkdir -p "$BACKUP_DIR"
    
    # Get a list of all databases, excluding system databases
    DATABASES=$(mysql -u "$DB_USER" -p"$DB_PASS" -e "SHOW DATABASES;" | tr -d " " | grep -vE "(Database|information_schema|performance_schema|mysql|sys)")
    
    # Loop through each database
    for db in $DATABASES; do
        echo "Dumping database: $db"
        # Perform the dump and compress on the fly
        mysqldump -u "$DB_USER" -p"$DB_PASS" --databases "$db" | gzip > "$BACKUP_DIR/$db-$(date +%Y-%m-%d).sql.gz"
        if [ $? -eq 0 ]; then
            echo "Dump of database $db completed successfully."
        else
            echo "Error during dump of database $db."
        fi
    done
    
    # Optional: Remove old backups (older than 7 days)
    find "$BACKUP_DIR" -type f -name "*.sql.gz" -mtime +7 -exec rm {} \;
    
    echo "Backup process for all databases has finished."
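
    Assuming the script is saved as, for example, /usr/local/bin/mysql-backup.sh (the path is illustrative) and made executable, a root cron entry can run it shortly before UrBackup's scheduled backup window; a minimal sketch:

    # Edit root's crontab with: sudo crontab -e
    # m  h  dom mon dow  command
    30 1 * * * /usr/local/bin/mysql-backup.sh >> /var/log/mysql-backup.log 2>&1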
    

    Your Digital Fortress

    Having a solid, automated backup strategy is not a luxury but an absolute necessity. Combining the power of TrueNAS Scale with its ZFS snapshots, the flexibility of UrBackup, and the security of Tailscale allows you to build a multi-layered, enterprise-class defence system at zero software cost.

    It is an investment of time that provides priceless peace of mind. Remember, however, that no system is entirely maintenance-free. Regularly monitoring logs, checking email reports, and, most importantly, periodically performing test restores of your files are the final, crucial elements that turn a good backup system into an impregnable fortress protecting your most valuable asset—your data.

  • WireGuard on TrueNAS Scale: How to Build a Secure and Efficient Bridge Between Your Local Network and VPS Servers

    WireGuard on TrueNAS Scale: How to Build a Secure and Efficient Bridge Between Your Local Network and VPS Servers

    In today’s digital world, where remote work and distributed infrastructure are becoming the norm, secure access to network resources is not so much a luxury as an absolute necessity. Virtual Private Networks (VPNs) have long been the answer to these needs, yet traditional solutions can be complicated and slow. Enter WireGuard—a modern VPN protocol that is revolutionising the way we think about secure tunnels. Combined with the power of the TrueNAS Scale system and the simplicity of the WG-Easy application, we can create an exceptionally efficient and easy-to-manage solution.

    This article is a comprehensive guide that will walk you through the process of configuring a secure WireGuard VPN tunnel step by step. We will connect a TrueNAS Scale server, running on your home or company network, with a fleet of public VPS servers. Our goal is to create intelligent “split-tunnel” communication, ensuring that only necessary traffic is routed through the VPN, thereby maintaining maximum internet connection performance.

    What Is WireGuard and Why Is It a Game-Changer?

    Before we delve into the technical configuration, it’s worth understanding why WireGuard is gaining such immense popularity. Designed from the ground up with simplicity and performance in mind, it represents a breath of fresh air compared to older, more cumbersome protocols like OpenVPN or IPsec.

    The main advantages of WireGuard include:

    • Minimalism and Simplicity: The WireGuard source code consists of just a few thousand lines, in contrast to the hundreds of thousands for its competitors. This not only facilitates security audits but also significantly reduces the potential attack surface.
    • Unmatched Performance: By operating at the kernel level of the operating system and utilising modern cryptography, WireGuard offers significantly higher transfer speeds and lower latency. In practice, this means smoother access to files and services.
    • Modern Cryptography: WireGuard uses the latest, proven cryptographic algorithms such as ChaCha20, Poly1305, Curve25519, BLAKE2s, and SipHash24, ensuring the highest level of security.
    • Ease of Configuration: The model, based on the exchange of public keys similar to SSH, is far more intuitive than the complicated certificate management found in other VPN systems.

    The Power of TrueNAS Scale and the Convenience of WG-Easy

    TrueNAS Scale is a modern, free operating system for building network-attached storage (NAS) servers, based on the solid foundations of Linux. Its greatest advantage is its support for containerised applications (Docker/Kubernetes), which allows for easy expansion of its functionality. Running a WireGuard server directly on a device that is already operating 24/7 and storing our data is an extremely energy- and cost-effective solution.

    This is where the WG-Easy application comes in—a graphical user interface that transforms the process of managing a WireGuard server from editing configuration files in a terminal to simple clicks in a web browser. Thanks to WG-Easy, we can create profiles for new devices in moments, generate their configurations, and monitor the status of connections.

    Step 1: Designing the Network Architecture – The Foundation of Stability

    Before we launch any software, we must create a solid plan. Correctly designing the topology and IP addressing is the key to a stable and secure solution.

    The “Hub-and-Spoke” Model: Your Command Centre

    Our network will operate based on a “hub-and-spoke” model.

    • Hub: The central point (server) of our network will be TrueNAS Scale. All other devices will connect to it.
    • Spokes: Our VPS servers will be the clients (peers), or the “spokes” connected to the central hub.

    In this model, all communication flows through the TrueNAS server by default. This means that for one VPS to communicate with another, the traffic must pass through the central hub.

    To avoid chaos, we will create a dedicated subnet for our virtual network. In this guide, we will use 10.8.0.0/24.

    Device Role | Host Identifier | VPN IP Address
    Server (Hub) | TrueNAS-Scale | 10.8.0.1
    Client 1 (Spoke) | VPS1 | 10.8.0.2
    Client 2 (Spoke) | VPS2 | 10.8.0.3
    Client 3 (Spoke) | VPS3 | 10.8.0.4

    The Fundamental Rule: One Client, One Identity

    A tempting thought arises: is it possible to create a single configuration file for all VPS servers? Absolutely not. This would be a breach of a fundamental WireGuard security principle. Identity in this network is not based on a username and password, but on a unique pair of cryptographic keys. Using the same configuration on multiple machines is like giving the same house key to many different people—the server would be unable to distinguish between them, which would lead to routing chaos and a security breakdown.

    Step 2: Prerequisite – Opening the Gateway to the World

    The most common pitfall when configuring a home server is forgetting about the router. Your TrueNAS server is on a local area network (LAN) and has a private IP address (e.g., 192.168.0.13), which makes it invisible from the internet. For the VPS servers to connect to it, you must configure port forwarding on your router.

    You need to create a rule that directs packets arriving from the internet on a specific port straight to your TrueNAS server.

    • Protocol: UDP (WireGuard uses UDP exclusively)
    • External Port: 51820 (the standard WireGuard port)
    • Internal IP Address: The IP address of your TrueNAS server on the LAN
    • Internal Port: 51820

    Without this rule, your VPN server will never work.

    Step 3: Hub Configuration – Launching the Server on TrueNAS

    Launch the WG-Easy application on your TrueNAS server. The configuration process boils down to creating a separate profile for each client (each VPS server).

    Click “New” and fill in the form for the first VPS, paying special attention to the fields below:

    Field Name in WG-Easy | Example Value (for VPS1) | Explanation
    Name | VPS1-Public | A readable label to help you identify the client.
    IPv4 Address | 10.8.0.2 | A unique IP address for this VPS within the VPN, according to our plan.
    Allowed IPs | 192.168.0.0/24, 10.8.0.0/24 | This is the heart of the “split-tunnel” configuration. It tells the client (VPS) that only traffic to your local network (LAN) and to other devices on the VPN should be sent through the tunnel. All other traffic (e.g., to Google) will take the standard route.
    Server Allowed IPs | 10.8.0.2/32 | A critical security setting. It informs the TrueNAS server to only accept packets from this specific client from its assigned IP address. The /32 mask prevents IP spoofing.
    Persistent Keepalive | 25 | An instruction for the client to send a small “keep-alive” packet every 25 seconds. This is necessary to prevent the connection from being terminated by routers and firewalls along the way.

    After filling in the fields, save the configuration. Repeat this process for each subsequent VPS server, remembering to assign them consecutive IP addresses (10.8.0.3, 10.8.0.4, etc.).

    Once you save the profile, WG-Easy will generate a .conf configuration file for you. Treat this file like a password—it contains the client’s private key! Download it and prepare to upload it to the VPS server.
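
    For orientation, a client file generated by WG-Easy typically resembles the sketch below; the keys are placeholders, the endpoint is an example, and the exact file may contain additional fields (such as DNS) depending on your WG-Easy settings:

    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.8.0.2/24

    [Peer]
    PublicKey = <server-public-key>
    PresharedKey = <optional-preshared-key>
    AllowedIPs = 192.168.0.0/24, 10.8.0.0/24
    PersistentKeepalive = 25
    Endpoint = <your-public-ip-or-ddns-name>:51820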

    Step 4: Spoke Configuration – Activating Clients on the VPS Servers

    Now it’s time to bring our “spokes” to life. Assuming your VPS servers are running Linux (e.g., Debian/Ubuntu), the process is very straightforward.

    1. Install WireGuard tools:
      sudo apt update && sudo apt install wireguard-tools -y
    2. Upload and secure the configuration file: Copy the previously downloaded wg0.conf file to the /etc/wireguard/ directory on the VPS server. Then, change its permissions so that only the administrator can read it:
      # On the VPS server:
      sudo mv /path/to/your/wg0.conf /etc/wireguard/wg0.conf
      sudo chmod 600 /etc/wireguard/wg0.conf
    3. Start the tunnel: Use a simple command to activate the connection. The interface name (wg0) is derived from the configuration file name.
      sudo wg-quick up wg0
    4. Ensure automatic start-up: To have the VPN tunnel start automatically after every server reboot, enable the corresponding system service:
      sudo systemctl enable wg-quick@wg0.service

    Repeat these steps on each VPS server, using the unique configuration file generated for each one.

    Step 5: Verification and Diagnostics – Checking if Everything Works

    After completing the configuration, it’s time for the final test.

    Checking the Connection Status

    On both the TrueNAS server and each VPS, execute the command:

    sudo wg show

    Look for two key pieces of information in the output:

    • latest handshake: This should show a recent time (e.g., “a few seconds ago”). This is proof that the client and server have successfully connected.
    • transfer: received and sent values greater than zero indicate that data is actually flowing through the tunnel.
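
    For orientation, the output of sudo wg show on the hub typically resembles the following (keys, addresses, and figures here are purely illustrative):

    interface: wg0
      public key: <server-public-key>
      private key: (hidden)
      listening port: 51820

    peer: <client-public-key>
      endpoint: 203.0.113.45:51820
      allowed ips: 10.8.0.2/32
      latest handshake: 42 seconds ago
      transfer: 1.25 MiB received, 4.80 MiB sent
      persistent keepalive: every 25 seconds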

    The Final Test: Validating the “Split-Tunnel”

    This is the test that will confirm we have achieved our main goal. Log in to one of the VPS servers and perform the following tests:

    1. Test connectivity within the VPN: Try to ping the TrueNAS server using its VPN and LAN addresses.
      ping 10.8.0.1       # VPN address of the TrueNAS server
      ping 192.168.0.13  # LAN address of the TrueNAS server (use your own)

      If you receive replies, it means that traffic to your local network is being correctly routed through the tunnel.
    2. Test the path to the internet: Use the traceroute tool to check the route packets take to a public website.
      traceroute google.com

      The result of this command is crucial. The first “hop” on the route must be the default gateway address of your VPS hosting provider, not the address of your VPN server (10.8.0.1). If this is the case—congratulations! Your “split-tunnel” configuration is working perfectly.

    Troubleshooting Common Problems

    • No “handshake”: The most common cause is a connection issue. Double-check the UDP port 51820 forwarding configuration on your router, as well as any firewalls in the path (on TrueNAS, on the VPS, and in your cloud provider’s panel).
    • There is a “handshake”, but ping doesn’t work: The problem usually lies in the Allowed IPs configuration. Ensure the server has the correct client VPN address entered (e.g., 10.8.0.2/32), and the client has the networks it’s trying to reach in its configuration (e.g., 192.168.0.0/24).
    • All traffic is going through the VPN (full-tunnel): This means that in the client’s configuration file, under the [Peer] section, the Allowed IPs field is set to 0.0.0.0/0. Correct this setting in the WG-Easy interface, download the new configuration file, and update it on the client.

    Creating your own secure and efficient VPN server based on TrueNAS Scale and WireGuard is well within reach. It is a powerful solution that not only enhances security but also gives you complete control over your network infrastructure.

  • Nginx Proxy Manager on TrueNAS Scale: Installation, Configuration, and Troubleshooting

    Nginx Proxy Manager on TrueNAS Scale: Installation, Configuration, and Troubleshooting

    Section 1: Introduction: Simplifying Home Lab Access with Nginx Proxy Manager on TrueNAS Scale

    Modern home labs have evolved from simple setups into complex ecosystems running dozens of services, from media servers like Plex or Jellyfin, to home automation systems such as Home Assistant, to personal clouds and password managers. Managing access to each of these services, each operating on a unique combination of an IP address and port number, quickly becomes impractical, inconvenient, and, most importantly, insecure. Exposing multiple ports to the outside world increases the attack surface and complicates maintaining a consistent security policy.

    The solution to this problem, employed for years in corporate environments, is the implementation of a central gateway or a single point of entry for all incoming traffic. In networking terminology, this role is fulfilled by a reverse proxy. This is an intermediary server that receives all requests from clients and then, based on the domain name, directs them to the appropriate service running on the internal network. Such an architecture not only simplifies access, allowing the use of easy-to-remember addresses (e.g., jellyfin.mydomain.co.uk instead of 192.168.1.50:8096), but also forms a key component of a security strategy.

    In this context, two technologies are gaining particular popularity among enthusiasts: TrueNAS Scale and Nginx Proxy Manager. TrueNAS Scale, based on the Debian Linux system, has transformed the traditional NAS (Network Attached Storage) device into a powerful, hyper-converged infrastructure (HCI) platform, capable of natively running containerised applications and virtual machines. In turn, Nginx Proxy Manager (NPM) is a tool that democratises reverse proxy technology. It provides a user-friendly, graphical interface for the powerful but complex-to-configure Nginx server, making advanced features, such as automatic SSL certificate management, accessible without needing to edit configuration files from the command line.

    This article provides a comprehensive overview of the process of deploying Nginx Proxy Manager on the TrueNAS Scale platform. The aim is not only to present “how-to” instructions but, above all, to explain why each step is necessary. The analysis will begin with an in-depth discussion of both technologies and their interactions. Then, a detailed installation process will be carried out, considering platform-specific challenges and their solutions, including the well-known issue of the application getting stuck in the “Deploying” state. Subsequently, using the practical example of a Jellyfin media server, the configuration of a proxy host will be demonstrated, along with advanced security options. The report will conclude with a summary of the benefits and suggest further steps to fully leverage the potential of this powerful duo.

    Nginx Proxy Manager Login Page

    Section 2: Tool Analysis: Nginx Proxy Manager and the TrueNAS Scale Application Ecosystem

    Understanding the fundamental principles of how Nginx Proxy Manager works and the architecture in which it is deployed—the TrueNAS Scale application system—is crucial for successful installation, effective configuration, and, most importantly, efficient troubleshooting. These two components, though designed to work together, each have their own unique characteristics, the ignorance of which is the most common cause of failure.

    Subsection 2.1: Understanding Nginx Proxy Manager (NPM)

    At the core of NPM’s functionality lies the concept of a reverse proxy, which is fundamental to modern network architecture. Understanding how it works allows one to appreciate the value that NPM brings.

    Definition and Functions of a Reverse Proxy

    A reverse proxy is a server that acts as an intermediary on the server side. Unlike a traditional (forward) proxy, which acts on behalf of the client, a reverse proxy acts on behalf of the server (or a group of servers). It receives requests from clients on the internet and forwards them to the appropriate servers on the local network that actually store the content. To an external client, the reverse proxy is the only visible point of contact; the internal network structure remains hidden.

    The key benefits of this solution are:

    • Security: Hiding the internal network topology and the actual IP addresses of application servers significantly hinders direct attacks on these services.
    • Centralised SSL/TLS Management (SSL Termination): Instead of configuring SSL certificates on each of a dozen application servers, you can manage them in one place—on the reverse proxy. Traffic encryption and decryption (SSL Termination) occurs at the proxy server, which offloads the backend servers.
    • Load Balancing: In more advanced scenarios, a reverse proxy can distribute traffic among multiple identical application servers, ensuring high availability and service scalability.
    • Simplified Access: It allows access to multiple services through standard ports 80 (HTTP) and 443 (HTTPS) using different subdomains, eliminating the need to remember and open multiple ports.

    NPM as a Management Layer

    It should be emphasised that Nginx Proxy Manager is not a new web server competing with Nginx. It is a management application, built on the open-source Nginx, which serves as a graphical user interface (GUI) for its reverse proxy functions. Instead of manually editing complex Nginx configuration files, the user can perform the same operations with a few clicks in an intuitive web interface.

    The main features that have contributed to NPM’s popularity are:

    • Graphical User Interface: Based on the Tabler framework, the interface is clear and easy to use, which drastically lowers the entry barrier for users who are not Nginx experts.
    • SSL Automation: Built-in integration with Let’s Encrypt allows for the automatic, free generation of SSL certificates and their periodic renewal. This is one of the most important and appreciated features.
    • Docker-based Deployment: NPM is distributed as a ready-to-use Docker image, which makes its installation on any platform that supports containers extremely simple.
    • Access Management: The tool offers features for creating Access Control Lists (ACLs) and managing users with different permission levels, allowing for granular control over access to individual services.

    Comparison: NPM vs. Traditional Nginx

    The choice between Nginx Proxy Manager and manual Nginx configuration is a classic trade-off between simplicity and flexibility. The table below outlines the key differences between these two approaches.

    Aspect | Nginx Proxy Manager | Traditional Nginx
    Management Interface | Graphical User Interface (GUI) simplifying configuration. | Command Line Interface (CLI) and editing text files; requires technical knowledge.
    SSL Configuration | Fully automated generation and renewal of Let’s Encrypt certificates. | Manual configuration using tools like Certbot; greater control.
    Learning Curve | Low; ideal for beginners and hobbyists. | Steep; requires understanding of Nginx directives and web server architecture.
    Flexibility | Limited to features available in the GUI; advanced rules can be difficult to implement. | Full flexibility and the ability to create highly customised, complex configurations.
    Scalability / Target User | Ideal for home labs, small to medium deployments. Hobbyist, small business owner, home lab user. | A better choice for large-scale, high-load corporate environments. Systems administrator, DevOps engineer, developer.

    This table clearly shows that NPM is a tool strategically tailored to the needs of its target audience—home lab enthusiasts. These users consciously sacrifice some advanced flexibility for the significant benefits of ease of use and speed of deployment.

    Nginx Proxy Manager Dashboard

    Subsection 2.2: Application Architecture in TrueNAS Scale

    To understand why installing NPM on TrueNAS Scale can encounter specific problems, it is necessary to know how this platform manages applications. It is not a typical Docker environment.

    Foundations: Linux and Hyper-convergence

    A key architectural change in TrueNAS Scale compared to its predecessor, TrueNAS CORE, was the switch from the FreeBSD operating system to Debian, a Linux distribution. This decision opened the door to native support for technologies that have dominated the cloud and containerisation world, primarily Docker containers and KVM-based virtualisation. As a result, TrueNAS Scale became a hyper-converged platform, combining storage, computing, and virtualisation functions.

    The Application System

    Applications are distributed through Catalogs, which function as repositories. These catalogs are further divided into so-called “trains,” which define the stability and source of the applications:

    • stable: The default train for official, iXsystems-tested applications.
    • enterprise: Applications verified for business use.
    • community: Applications created and maintained by the community. This is where Nginx Proxy Manager is located by default.
    • test: Applications in the development phase.

    NPM’s inclusion in the community catalog means that while it is easily accessible, its technical support relies on the community, not directly on the manufacturer of TrueNAS.

    Storage Management for Applications

    Before any application can be installed, TrueNAS Scale requires the user to specify a ZFS pool that will be dedicated to storing application data. When an application is installed, its data (configuration, databases, etc.) must be saved somewhere persistently. TrueNAS Scale offers several options here, but the default and recommended for simplicity is ixVolume.

    ixVolume is a special type of volume that automatically creates a dedicated, system-managed ZFS dataset within the selected application pool. This dataset is isolated, and the system assigns it very specific permissions. By default, the owner of this dataset becomes the system user apps with a user ID (UID) of 568 and a group ID (GID) of 568. The running application container also operates with the permissions of this very user.

    This is the crux of the problem. The standard Docker image for Nginx Proxy Manager contains startup scripts (e.g., those from Certbot, the certificate handling tool) that, on first run, attempt to change the owner (chown) of data directories, such as /data or /etc/letsencrypt, to ensure they have the correct permissions. When the NPM container starts within the sandboxed TrueNAS application environment, its startup script, running as the unprivileged apps user (UID 568), tries to execute the chown operation on the ixVolume. This operation fails because the apps user is not the owner of the parent directories and does not have permission to change the owner of files on a volume managed by K3s. This permission error causes the container’s startup script to halt, and the container itself never reaches the “running” state, which manifests in the TrueNAS Scale interface as an endless “Deploying” status.
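
    This can be observed directly from the TrueNAS shell; a quick, illustrative check of the volume's numeric ownership (the exact path under the applications pool varies between releases):

    # Expect the NPM ixVolume datasets to be owned by UID/GID 568, the unprivileged 'apps' user
    ls -ln /mnt/YourPool/ix-applications/releases/nginx-proxy-manager/volumes/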

    Section 3: Installing and Configuring Nginx Proxy Manager on TrueNAS Scale

    The process of installing Nginx Proxy Manager on TrueNAS Scale is straightforward, provided that attention is paid to a few key configuration parameters that are often a source of problems. The following step-by-step instructions will guide you through this process, highlighting the critical decisions that need to be made.

    Step 1: Preparing TrueNAS Scale

    Before proceeding with the installation of any application, you must ensure that the application service in TrueNAS Scale is configured correctly.

    1. Log in to the TrueNAS Scale web interface.
    2. Navigate to the Apps section.
    3. If the service is not yet configured, the system will prompt you to select a ZFS pool to be used for storing all application data. Select the appropriate pool and save the settings. After a moment, the service status should change to “Running”.

    Step 2: Finding the Application

    Nginx Proxy Manager is available in the official community catalog.

    1. In the Apps section, go to the Discover tab.
    2. In the search box, type nginx-proxy-manager.
    3. The application should appear in the results. Ensure it comes from the community catalog.
    4. Click the Install button to proceed to the configuration screen.

    Step 3: Key Configuration Parameters

    The installation screen presents many options. Most of them can be left with their default values, but a few sections require special attention.

    Application Name

    In the Application Name field, enter a name for the installation, for example, nginx-proxy-manager. This name will be used to identify the application in the system.

    Network Configuration

    This is the most important and most problematic stage of the configuration. By default, the TrueNAS Scale management interface uses the standard web ports: 80 for HTTP and 443 for HTTPS. Since Nginx Proxy Manager, to act as a gateway for all web traffic, should also listen on these ports, a direct conflict arises. There are two main strategies to solve this problem, each with its own set of trade-offs.

    • Strategy A (Recommended): Change TrueNAS Scale Ports
      This method is considered the “cleanest” from NPM’s perspective because it allows it to operate as it was designed.
    1. Cancel the NPM installation and go to System Settings -> General. In the GUI SSL/TLS Certificate section, change the Web Interface HTTP Port to a custom one, e.g., 880, and the Web Interface HTTPS Port to, e.g., 8443.
    2. Save the changes. From this point on, access to the TrueNAS Scale interface will be available at http://<truenas-ip-address>:880 or https://<truenas-ip-address>:8443.
    3. Return to the NPM installation and in the Network Configuration section, assign the HTTP Port to 80 and the HTTPS Port to 443.
    • Advantages: NPM runs on standard ports, which simplifies configuration and eliminates the need for port translation on the router.
    • Disadvantages: It changes the fundamental way of accessing the NAS itself. In rare cases, as noted on forums, this can cause unforeseen side effects, such as problems with SSH connections between TrueNAS systems.
    • Strategy B (Alternative): Use High Ports for NPM
      This method is less invasive to the TrueNAS configuration itself but shifts the complexity to the router level.
    1. In the NPM configuration, under the Network Configuration section, leave the TrueNAS ports unchanged and assign high, unused ports to NPM, e.g., 30080 for HTTP and 30443 for HTTPS. TrueNAS Scale reserves ports below 9000 for the system, so you should choose values above this threshold.
    2. After installing NPM, configure port forwarding on your edge router so that incoming internet traffic on port 80 is directed to port 30080 of the TrueNAS IP address, and traffic from port 443 is directed to port 30443.
    • Advantages: The TrueNAS Scale configuration remains untouched.
    • Disadvantages: Requires additional configuration on the router. Each proxied service will require explicit forwarding, which can be confusing.

    The ideal solution would be to assign a dedicated IP address on the local network to NPM (e.g., using macvlan technology), which would completely eliminate the port conflict. Unfortunately, the graphical interface of the application installer in TrueNAS Scale does not provide this option in a simple way.

    Storage Configuration

    To ensure that the NPM configuration, including created proxy hosts and SSL certificates, survives updates or application redeployments, you must configure persistent storage.

    1. In the Storage Configuration section, configure two volumes.
    2. For Nginx Proxy Manager Data Storage (path /data) and Nginx Proxy Manager Certs Storage (path /etc/letsencrypt), select the ixVolume type.
    3. Leaving these settings in place ensures that TrueNAS creates dedicated ZFS datasets for the configuration and certificates, independent of the application container itself.

    Step 4: First Run and Securing the Application

    After configuring the above parameters (and possibly applying the fixes from Section 4), click Install. After a few moments, the application should transition to the “Running” state.

    1. Access to the NPM interface is available at http://<truenas-ip-address>:PORT, where PORT is the WebUI port configured during installation (defaults to 81 inside the container but is mapped to a higher port, e.g., 30020, if the TrueNAS ports were not changed).
    2. The default login credentials are:
    • Email: admin@example.com
    • Password: changeme
    3. Upon first login, the system will immediately prompt you to change these details. This is an absolutely crucial security step and must be done immediately.

    Section 4: Troubleshooting the “Deploying” Issue: Diagnosis and Repair of Installation Errors

    One of the most frequently encountered and frustrating problems when deploying Nginx Proxy Manager on TrueNAS Scale is the situation where the application gets permanently stuck in the “Deploying” state after installation. The user waits, refreshes the page, but the status never changes to “Running”. Viewing the container logs often does not provide a clear answer. This problem is not a bug in NPM itself but, as diagnosed earlier, a symptom of a fundamental permission conflict between the generic container and the specific, secured environment in TrueNAS Scale.

    Nginx Proxy Manager Log

    Problem Description and Root Cause

    After clicking the “Install” button in the application wizard, TrueNAS Scale begins the deployment process. In the background, the Docker image is downloaded, ixVolumes are created, and the container is started with the specified configuration. The startup script inside the NPM container attempts to perform maintenance operations, including changing the owner of key directories. Because the container is running as a user with limited permissions (apps, UID 568) on a file system it does not fully control, this operation fails. The script halts its execution, and the container never signals to the system that it is ready to work. Consequently, from the perspective of the TrueNAS interface, the application remains forever in the deployment phase.

    Fortunately, thanks to the work of the community and developers, there are proven and effective solutions to this problem. Interestingly, the evolution of these solutions perfectly illustrates the dynamics of open-source software development.

    Solution 1: Using an Environment Variable (Recommended Method)

    This is the modern, precise, and most secure solution to the problem. It was introduced by the creators of the NPM container specifically in response to problems reported by users of platforms like TrueNAS Scale. Instead of escalating permissions, the container is instructed to skip the problematic step.

    To implement this solution:

    1. During the application installation (or while editing it if it has already been created and is stuck), navigate to the Application Configuration section.
    2. Find the Nginx Proxy Manager Configuration subsection and click Add next to Additional Environment Variables.
    3. Configure the new environment variable as follows:
    • Variable Name: SKIP_CERTBOT_OWNERSHIP
    • Variable Value: true
    4. Save the configuration and install or update the application.

    Adding this flag informs the Certbot startup script inside the container to skip the chown (change owner) step for its configuration files. The script proceeds, the container starts correctly and reports readiness, and the application transitions to the “Running” state. This is the recommended method for all newer versions of TrueNAS Scale (Electric Eel, Dragonfish, and later).
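
    For comparison, outside of TrueNAS the same fix amounts to setting that variable on the container itself. A minimal sketch using plain Docker (the image name is the official NPM image; the ports and host paths are illustrative):

    docker run -d --name npm \
      -e SKIP_CERTBOT_OWNERSHIP=true \
      -p 80:80 -p 81:81 -p 443:443 \
      -v "$(pwd)/data":/data \
      -v "$(pwd)/letsencrypt":/etc/letsencrypt \
      jc21/nginx-proxy-manager:latest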

    Solution 2: Changing the User to Root (Historical Method)

    This solution was the first one discovered by the community. It is a more “brute force” method that solves the problem by granting the container full permissions. Although effective, it is considered less elegant and potentially less secure from the perspective of the principle of least privilege.

    To implement this solution:

    1. During the installation or editing of the application, navigate to the User and Group Configuration section.
    2. Change the value in the User ID field from the default 568 to 0.
    3. Leave the Group ID unchanged or also set it to 0.
    4. Save the configuration and deploy the application.

    Setting the User ID to 0 causes the process inside the container to run with root user permissions. The root user has unlimited permissions, so the problematic chown operation executes flawlessly, and the container starts correctly. This method was particularly necessary in older versions of TrueNAS Scale (e.g., Dragonfish) and is documented as a working workaround. Although it still works, the environment variable method is preferred as it does not require escalating permissions for the entire container.

    Verification

    Regardless of the chosen method, after saving the changes and redeploying the application, you should observe its status in the Apps -> Installed tab. After a short while, the status should change from “Deploying” to “Running”, which means the problem has been successfully resolved and Nginx Proxy Manager is ready for configuration.

    Section 5: Practical Application: Securing a Jellyfin Media Server

    Theory and correct installation are just the beginning. The true power of Nginx Proxy Manager is revealed in practice when we start using it to manage access to our services. Jellyfin, a popular, free media server, is an excellent example to demonstrate this process, as its full functionality depends on one, often overlooked, setting in the proxy configuration. The following guide assumes that Jellyfin is already installed and running on the local network, accessible at IP_ADDRESS:PORT (e.g., 192.168.1.10:8096).

    Step 1: DNS Configuration

    Before NPM can direct traffic, the outside world needs to know where to send it.

    1. Log in to your domain’s management panel (e.g., at your domain registrar or DNS provider like Cloudflare).
    2. Create a new A record.
    3. In the Name (or Host) field, enter the subdomain that will be used to access Jellyfin (e.g., jellyfin).
    4. In the Value (or Points to) field, enter the public IP address of your home network (your router).

    Step 2: Obtaining an SSL Certificate in NPM

    Securing the connection with HTTPS is crucial. NPM makes this process trivial, especially when using the DNS Challenge method, which is more secure as it does not require opening any ports on your router.

    1. In the NPM interface, go to SSL Certificates and click Add SSL Certificate, then select Let’s Encrypt.
    2. In the Domain Names field, enter your subdomain, e.g., jellyfin.yourdomain.com. You can also generate a wildcard certificate at this stage (e.g., *.yourdomain.com), which will match all subdomains.
    3. Enable the Use a DNS Challenge option.
    4. From the DNS Provider list, select your DNS provider (e.g., Cloudflare).
    5. In the Credentials File Content field, paste the API token obtained from your DNS provider. For Cloudflare, you need to generate a token with permissions to edit the DNS zone (Zone: DNS: Edit).
    6. Accept the Let’s Encrypt terms of service and save the form. After a moment, NPM will use the API to temporarily add a TXT record in your DNS, which proves to Let’s Encrypt that you own the domain. The certificate will be generated and saved.
    Nginx Proxy Manager SSL

    Step 3: Creating a Proxy Host

    This is the heart of the configuration, where we link the domain, the certificate, and the internal service.

    1. In NPM, go to Hosts -> Proxy Hosts and click Add Proxy Host.
    2. A form with several tabs will open.

    “Details” Tab

    • Domain Names: Enter the full domain name that was configured in DNS, e.g., jellyfin.yourdomain.com.
    • Scheme: Select http, as the communication between NPM and Jellyfin on the local network is typically not encrypted.
    • Forward Hostname / IP: Enter the local IP address of the server where Jellyfin is running, e.g., 192.168.1.10.
    • Forward Port: Enter the port on which Jellyfin is listening, e.g., 8096.
    • Websocket Support: This is an absolutely critical setting. You must tick this option. Jellyfin makes extensive use of WebSocket technology for real-time communication, for example, to update playback status on the dashboard or for the Syncplay feature to work. Without WebSocket support enabled, the Jellyfin main page will load correctly, but many key features will not work, leading to difficult-to-diagnose problems.

    “SSL” Tab

    • SSL Certificate: From the drop-down list, select the certificate generated in the previous step for the Jellyfin domain.
    • Force SSL: Enable this option to automatically redirect all HTTP connections to secure HTTPS.
    • HTTP/2 Support: Enabling this option can improve page loading performance.

    After configuring both tabs, save the proxy host.

    Step 4: Testing

    After saving the configuration, Nginx will reload its settings in the background. It should now be possible to open a browser and enter the address https://jellyfin.yourdomain.com. You should see the Jellyfin login page, and the connection should be secured with an SSL certificate (a padlock icon will be visible in the address bar).
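
    A quick way to verify the proxy and the certificate from the command line (the domain is an example):

    # Fetch only the response headers over HTTPS through the proxy
    curl -I https://jellyfin.yourdomain.com
    # An HTTP 200 (or a redirect to the Jellyfin login page) with no certificate warning means NPM and SSL are working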

    Subsection 5.1: Advanced Security Hardening (Optional)

    The default configuration is fully functional, but to enhance security, you can add extra HTTP headers that instruct the browser on how to behave. To do this, edit the created proxy host and go to the Advanced tab. In the Custom Nginx Configuration field, you can paste additional directives.

    It’s worth noting that NPM has a quirk: add_header directives added directly in this field may not be applied. A safer approach is to create a Custom Location for the path / and paste the headers in its configuration field.

    The following table presents recommended security headers.

    Header | Purpose | Recommended Value | Notes
    Strict-Transport-Security | Forces the browser to communicate exclusively over HTTPS for a specified period. | add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always; | Deploy with caution. It’s wise to start with a lower max-age and remove preload until you are certain about the configuration.
    X-Frame-Options | Protects against “clickjacking” attacks by preventing the page from being embedded in an <iframe> on another site. | add_header X-Frame-Options "SAMEORIGIN" always; | SAMEORIGIN allows embedding only within the same domain.
    X-Content-Type-Options | Prevents attacks related to the browser misinterpreting MIME types (“MIME-sniffing”). | add_header X-Content-Type-Options "nosniff" always; | This is a standard and safe setting.
    Referrer-Policy | Controls what referrer information is sent during navigation. | add_header 'Referrer-Policy' 'origin-when-cross-origin'; | A good compromise between privacy and usability.
    X-XSS-Protection | A historical header intended to protect against Cross-Site Scripting (XSS) attacks. | add_header X-XSS-Protection "0" always; | The header is obsolete and can create new attack vectors. Modern browsers have better, built-in mechanisms. It is recommended to explicitly disable it (0).

    Applying these headers provides an additional layer of defence and is considered good practice in securing web applications. However, it is critical to use up-to-date recommendations, as in the case of X-XSS-Protection, where blindly copying it from older guides could weaken security.
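
    Put together, the contents of the Custom Location’s configuration field could look like the sketch below; the values are taken from the table above, with preload omitted in line with the caution noted there:

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header 'Referrer-Policy' 'origin-when-cross-origin';
    add_header X-XSS-Protection "0" always;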

    Section 6: Conclusions and Next Steps

    Combining Nginx Proxy Manager with the TrueNAS Scale platform creates an incredibly powerful and flexible environment for managing a home lab. As demonstrated in this report, this synergy allows for centralised access management, a drastic simplification of the deployment and maintenance of SSL/TLS security, and a professionalisation of the way users interact with their self-hosted services. The key to success, however, is not just blindly following instructions, but above all, understanding the fundamental principles of how both technologies work. The awareness that applications in TrueNAS Scale operate within a restrictive ecosystem is essential for effectively diagnosing and resolving specific problems, such as the “Deploying” stall error.

    Summary of Strategic Benefits

    Deploying NPM on TrueNAS Scale brings tangible benefits:

    • Centralisation and Simplicity: All incoming requests are managed from a single, intuitive panel, eliminating the chaos of multiple IP addresses and ports.
    • Enhanced Security: Automation of SSL certificates, hiding the internal network topology, and the ability to implement advanced security headers create a solid first line of defence.
    • Professional Appearance and Convenience: Using easy-to-remember, personalised subdomains (e.g., media.mydomain.co.uk) instead of technical IP addresses significantly improves the user experience.

    Recommendations and Next Steps

    After successfully deploying Nginx Proxy Manager and securing your first application, it is worth exploring its further capabilities to fully utilise the tool’s potential.

    • Explore Access Lists: NPM allows for the creation of Access Control Lists (ACLs), which can restrict access to specific proxy hosts based on the source IP address. This is an extremely useful feature for securing administrative panels. For example, you can create a rule that allows access to the TrueNAS Scale interface or the NPM panel itself only from IP addresses on the local network, blocking any access attempts from the outside.
    • Backup Strategy: The Nginx Proxy Manager configuration, stored in the ixVolume, is a critical asset. Its loss would mean having to reconfigure all proxy hosts and certificates. TrueNAS Scale offers built-in tools for automating backups. You should configure a Periodic Snapshot Task for the dataset containing the NPM application data (ix-applications/releases/nginx-proxy-manager) to regularly create snapshots of its state.
    • Securing Other Applications: The knowledge gained during the Jellyfin configuration is universal. It can now be applied to secure virtually any other web service running in your home lab, such as Home Assistant, a file server, a personal password manager (e.g., Vaultwarden, which is a Bitwarden implementation), or the AdGuard Home ad-blocking system. Remember to enable the Websocket Support option for any application that requires real-time communication.
    • Monitoring and Diagnostics: The NPM interface provides access logs and error logs for each proxy host. Regularly reviewing these logs can help in diagnosing access problems, identifying unauthorised connection attempts, and optimising the configuration.

    Mastering Nginx Proxy Manager on TrueNAS Scale is an investment that pays for itself many times over in the form of increased security, convenience, and control over your digital ecosystem. It is another step on the journey from a simple user to a conscious architect of your own home infrastructure.

  • Wazuh on Your Own Server: Digital Sovereignty at the Cost of Complexity

    Wazuh on Your Own Server: Digital Sovereignty at the Cost of Complexity

    Faced with the rising costs of commercial solutions and escalating cyber threats, the free security platform Wazuh is gaining popularity as a powerful alternative. However, the decision to self-host it on one’s own servers represents a fundamental trade-off: organisations gain unprecedented control over their data and system, but in return, they must contend with significant technical complexity, hidden operational costs, and full responsibility for their own security. This report analyses for whom this path is a strategic advantage and for whom it may prove to be a costly trap.

    Introduction – The Democratisation of Cybersecurity in an Era of Growing Threats

    The contemporary digital landscape is characterised by a paradox: while threats are becoming increasingly advanced and widespread, the costs of professional defence tools remain an insurmountable barrier for many organisations. Industry reports paint a grim picture, pointing to a sharp rise in ransomware attacks, which are evolving from data encryption to outright blackmail, and the ever-wider use of artificial intelligence by cybercriminals to automate and scale attacks. In this challenging environment, solutions like Wazuh are emerging as a response to the growing demand for accessible, yet powerful, tools to protect IT infrastructure.

    Wazuh is defined as a free, open-source security platform that unifies the capabilities of two key technologies: XDR (Extended Detection and Response) and SIEM (Security Information and Event Management). Its primary goal is to protect digital assets regardless of where they operate—from traditional on-premises servers in a local data centre, through virtual environments, to dynamic containers and distributed resources in the public cloud.

    The rise in Wazuh’s popularity is directly linked to the business model of dominant players in the SIEM market, such as Splunk. Their pricing, often based on the volume of data processed, can generate astronomical costs for growing companies, making advanced security a luxury. Wazuh, being free, eliminates this licensing barrier, which makes it particularly attractive to small and medium-sized enterprises (SMEs), public institutions, non-profit organisations, and any organisation that has a limited budget but cannot afford to compromise on security.

    The emergence of such a powerful, free tool signals a fundamental shift in the cybersecurity market. One could speak of a democratisation of advanced defence mechanisms. Traditionally, SIEM/XDR-class platforms were the domain of large corporations with dedicated Security Operations Centres (SOCs) and substantial budgets. Meanwhile, cybercriminals do not limit their activities to the largest targets; SMEs are equally, and sometimes even more, vulnerable to attacks. Wazuh fills this critical gap, giving smaller organisations access to functionalities that were, until recently, beyond their financial reach. This represents a paradigm shift, where access to robust digital defence is no longer solely dependent on purchasing power but begins to depend on technical competence and the strategic decision to invest in a team.

    image 122

    To fully understand Wazuh’s unique position, it is worth comparing it with key players in the market.

    Table 1: Positioning Wazuh Against the Competition

    Cost Model
    • Wazuh: Open-source software, free. Paid options include technical support and a managed cloud service (SaaS).
    • Splunk: Commercial. Licensing is based mainly on the daily volume of data processed, which can lead to high costs at a large scale.
    • Elastic Security: “Open core” model. Basic functions are free; advanced ones (e.g., machine learning) are available in paid subscriptions. Prices are based on resources, not data volume.

    Main Functionalities
    • Wazuh: Integrated XDR and SIEM. Strong emphasis on endpoint security (FIM, vulnerability detection, configuration assessment) and log analysis.
    • Splunk: A leader in log analysis and SIEM. An extremely powerful query language (SPL) and broad analytical capabilities. Considered the standard in large SOCs.
    • Elastic Security: An integrated security platform (SIEM + endpoint protection) built on the powerful Elasticsearch search engine. High flexibility and scalability.

    Deployment Options
    • Wazuh: Self-hosting (On-Premises / Private Cloud) or the official Wazuh Cloud service (SaaS).
    • Splunk: Self-hosting (On-Premises) or the Splunk Cloud service (SaaS).
    • Elastic Security: Self-hosting (On-Premises) or the Elastic Cloud service (SaaS).

    Target Audience
    • Wazuh: SMEs, organisations with technical expertise, entities with strict data sovereignty requirements, security enthusiasts.
    • Splunk: Large enterprises, mature Security Operations Centres (SOCs), organisations with large security budgets and a need for advanced analytics.
    • Elastic Security: Organisations seeking a flexible, scalable platform, often with an existing Elastic ecosystem. Development and DevOps teams.

    This comparison clearly shows that Wazuh is not a simple clone of commercial solutions. Its strength lies in the specific niche it occupies: it offers enterprise-class functionalities without licensing costs, in exchange requiring greater technical involvement from the user and the assumption of full responsibility for implementation and maintenance.

    Anatomy of a Defender – How Does the Wazuh Architecture Work?

    image 123

    Understanding the technical foundations of Wazuh is crucial for assessing the real complexity and potential challenges associated with its self-hosted deployment. At first glance, the architecture is elegant and logical; however, its scalability, one of its greatest advantages, simultaneously becomes its greatest operational challenge in a self-hosted model.

    The Agent-Server Model: The Eyes and Ears of the System

    At the core of the Wazuh architecture is a model based on an agent-server relationship. A lightweight, multi-platform Wazuh agent is installed on every monitored system—be it a Linux server, a Windows workstation, a Mac computer, or even cloud instances. The agent runs in the background, consuming minimal system resources, and its task is to continuously collect telemetry data. It gathers system and application logs, monitors the integrity of critical files, scans for vulnerabilities, inventories installed software and running processes, and detects intrusion attempts. All this data is then securely transmitted in near real-time to the central component—the Wazuh server.

    Central Components: The Brain of the Operation

    A Wazuh deployment, even in its simplest form, consists of three key central components that together form a complete analytical system.

    1. Wazuh Server: This is the heart of the entire system. It receives data sent by all registered agents. Its main task is to process this stream of information. The server uses advanced decoders to normalise and structure logs from various sources and then passes them through a powerful analytical engine. This engine, based on a predefined and configurable set of rules, correlates events and identifies suspicious activities, security policy violations, or Indicators of Compromise (IoCs). When an event or series of events matches a rule with a sufficiently high priority, the server generates a security alert.
    2. Wazuh Indexer: This is a specialised and highly scalable database, designed for the rapid indexing, storage, and searching of vast amounts of data. Technologically, the Wazuh Indexer is a fork of the OpenSearch project, which in turn was created from the Elasticsearch source code. All events collected by the server (both those that generated an alert and those that did not) and the alerts themselves are sent to the indexer. This allows security analysts to search through terabytes of historical data in seconds for traces of an attack, which is fundamental for threat hunting and forensic analysis processes.
    3. Wazuh Dashboard: This is the user interface for the entire platform, implemented as a web application. Like the indexer, it is based on the OpenSearch Dashboards project (formerly known as Kibana). The dashboard allows for the visualisation of data in the form of charts, tables, and maps, browsing and analysing alerts, managing agent and server configurations, and generating compliance reports. It is here that analysts spend most of their time, monitoring the security posture of the entire organisation.

    Security and Scalability of the Architecture

    A key aspect to emphasise is the security of the platform itself. Communication between the agent and the server occurs by default over port 1514/TCP and is protected by AES encryption (with a 256-bit key). Each agent must be registered and authenticated before the server will accept data from it. This ensures the confidentiality and integrity of the transmitted logs, preventing them from being intercepted or modified in transit.
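
    In practice this also means the manager has to be reachable on those ports. If the server host runs a simple host firewall such as ufw, a minimal sketch of the required openings could look like this (1514/TCP for agent events, 1515/TCP for agent enrolment; adapt it to whatever firewall tooling you actually use):

    # Allow Wazuh agents to reach the manager: event forwarding and enrolment
    sudo ufw allow 1514/tcp
    sudo ufw allow 1515/tcp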

    The Wazuh architecture was designed with scalability in mind. For small deployments, such as home labs or Proof of Concept tests, all three central components can be installed on a single, sufficiently powerful machine using a simplified installation script. However, in production environments monitoring hundreds or thousands of endpoints, such an approach quickly becomes inadequate. The official documentation and user experiences unequivocally indicate that to ensure performance and High Availability, it is necessary to implement a distributed architecture. This means separating the Wazuh server, indexer, and dashboard onto separate hosts. Furthermore, to handle the enormous volume of data and ensure resilience to failures, both the server and indexer components can be configured as multi-node clusters.
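
    For a home lab or Proof of Concept, that “all-in-one” installation is normally driven by Wazuh’s official installation script with the -a (all components) switch. The version in the URL below is only an example and changes over time, so check the current quickstart documentation before running it:

    # Download and run the all-in-one installer (verify the current version path first)
    curl -sO https://packages.wazuh.com/4.7/wazuh-install.sh
    sudo bash ./wazuh-install.sh -a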

    It is at this point that the fundamental challenge of self-hosting becomes apparent. While an “all-in-one” installation is relatively simple, designing, implementing, and maintaining a distributed, multi-node Wazuh cluster is an extremely complex task. It requires deep knowledge of Linux systems administration, networking, and, above all, OpenSearch cluster management. The administrator must take care of aspects such as the correct replication and allocation of shards (index fragments), load balancing between nodes, configuring disaster recovery mechanisms, regularly creating backups, and planning updates for the entire technology stack. The decision to deploy Wazuh on a large scale in a self-hosted model is therefore not a one-time installation act. It is a commitment to the continuous management of a complex, distributed system, whose cost and complexity grow non-linearly with the scale of operations.

    The Strategic Decision – Full Control on Your Own Server versus the Convenience of the Cloud

    The choice of Wazuh deployment model—self-hosting on one’s own infrastructure (on-premises) versus using a ready-made cloud service (SaaS)—is one of the most important strategic decisions facing any organisation considering this platform. This is not merely a technical choice, but a fundamental decision concerning resource allocation, risk acceptance, and business priorities. An analysis of both approaches reveals a profound trade-off between absolute control and operational convenience.

    The Case for Self-Hosting: The Fortress of Data Sovereignty

    Organisations that decide to self-deploy and maintain Wazuh on their own servers are primarily driven by the desire for maximum control and independence. In this model, it is they, not an external provider, who define every aspect of the system’s operation—from hardware configuration, through data storage and retention policies, to the finest details of analytical rules. The open-source nature of Wazuh gives them an additional, powerful advantage: the ability to modify and adapt the platform to unique, often non-standard needs, which is impossible with closed, commercial solutions.

    However, the main driving force for many companies, especially in Europe, is the concept of data sovereignty. This is not just a buzzword, but a hard legal and strategic requirement. Data sovereignty means that digital data is subject to the laws and jurisdiction of the country in which it is physically stored and processed. In the context of stringent regulations such as Europe’s GDPR, the American HIPAA for medical data, or the PCI DSS standard for the payment card industry, keeping sensitive logs and security incident data within one’s own, controlled data centre is often the simplest and most secure way to ensure compliance.

    This choice also has a geopolitical dimension. Edward Snowden’s revelations about the PRISM programme run by the US NSA made the world aware that data stored in the clouds of American tech giants could be subject to access requests from US government agencies under laws such as the CLOUD Act. For many European companies, public institutions, or entities in the defence industry, the risk that their operational data and security logs could be made available to a foreign government is unacceptable. Self-hosting Wazuh in a local data centre, within the European Union, completely eliminates this risk, ensuring full digital sovereignty.

    The Reality of Self-Hosting: Hidden Costs and Responsibility

    The promise of free software is tempting, but the reality of a self-hosted deployment quickly puts the concept of “free” to the test. An analysis of the Total Cost of Ownership (TCO) reveals a series of hidden expenses that go far beyond the zero cost of the licence.

    • Capital Expenditure (CapEx): At the outset, the organisation must make significant investments in physical infrastructure. This includes purchasing powerful servers (with large amounts of RAM and fast processors), disk arrays capable of storing terabytes of logs, and networking components. Costs associated with providing appropriate server room conditions, such as uninterruptible power supplies (UPS), air conditioning, and physical access control systems, must also be considered.
    • Operational Expenditure (OpEx): This is where the largest, often underestimated, expenses lie. Firstly, the ongoing electricity and cooling bills. Secondly, and most importantly, personnel costs. Wazuh is not a “set it and forget it” system. As numerous users report, it requires constant attention, tuning, and maintenance. The default configuration can generate tens of thousands of alerts per day, leading to “alert fatigue” and rendering the system useless. To prevent this, a qualified security analyst or engineer is needed to constantly fine-tune rules and decoders, eliminate false positives, and develop the platform. For larger, distributed deployments, maintaining system stability can become a full-time job. One experienced user bluntly stated, “I’m losing my mind having to fix Wazuh every single day.” According to one analysis published on GitHub, the total cost of a self-hosted solution can be up to 5.25 times higher than its cloud equivalent.

    Moreover, in the self-hosted model, the entire responsibility for security rests on the organisation’s shoulders. This includes not only protection against external attacks but also regular backups, testing disaster recovery procedures, and bearing the full consequences (financial and reputational) in the event of a successful breach and data leak.

    The Cloud Alternative: Convenience as a Service (SaaS)

    For organisations that want to leverage the power of Wazuh but are not ready to take on the challenges of self-hosting, there is an official alternative: Wazuh Cloud. This is a Software as a Service (SaaS) model, where the provider (the company Wazuh) takes on the entire burden of managing the server infrastructure, and the client pays a monthly or annual subscription for a ready-to-use service.

    The advantages of this approach are clear:

    • Lower Barrier to Entry and Predictable Costs: The subscription model eliminates the need for large initial hardware investments (CapEx) and converts them into a predictable, monthly operational cost (OpEx), which is often lower in the short and medium term.
    • Reduced Operational Burden: Issues such as server maintenance, patch installation, software updates, scaling resources in response to growing load, and ensuring high availability are entirely the provider’s responsibility. This frees up the internal IT team to focus on strategic tasks rather than “firefighting.”
    • Access to Expert Knowledge: Cloud clients benefit from the knowledge and experience of Wazuh engineers who manage hundreds of deployments daily. This guarantees optimal configuration and platform stability.

    Of course, convenience comes at a price. The main disadvantage is a partial loss of control over the system and data. The organisation must trust the security policies and procedures of the provider. Most importantly, depending on the location of the Wazuh Cloud data centres, the same data sovereignty issues that the self-hosted model avoids may arise.

    Ultimately, the choice between self-hosting and the cloud is not an assessment of which option is “better” in an absolute sense. It is a strategic allocation of risk and resources. The self-hosted model is a conscious acceptance of operational risk (failures, configuration errors, staff shortages) in exchange for minimising the risk associated with data sovereignty and third-party control. In contrast, the cloud model is a transfer of operational risk to the provider in exchange for accepting the risk associated with entrusting data and potential legal-geopolitical implications. For a financial sector company in the EU, the risk of a GDPR breach may be much higher than the risk of a server failure, which strongly inclines them towards self-hosting. For a dynamic tech start-up without regulated data, the cost of hiring a dedicated specialist and the operational risk may be unacceptable, making the cloud the obvious choice.

    Table 2: Decision Analysis: Self-Hosting vs. Wazuh Cloud

    Total Cost of Ownership (TCO)
    • Self-Hosting (On-Premises): High initial cost (hardware, CapEx). Significant, often unpredictable operational costs (personnel, energy, OpEx). Potentially lower in the long term at a large scale and with constant utilisation.
    • Wazuh Cloud (SaaS): Low initial cost (no CapEx). Predictable, recurring subscription fees (OpEx). Usually more cost-effective in the short and medium term. Potentially higher in the long run.

    Control and Customisation
    • Self-Hosting (On-Premises): Absolute control over hardware, software, data, and configuration. Ability to modify source code and deeply integrate with existing systems.
    • Wazuh Cloud (SaaS): Limited control. Configuration within the options provided by the supplier. No ability to modify source code or access the underlying infrastructure.

    Security and Responsibility
    • Self-Hosting (On-Premises): Full responsibility for physical and digital security, backups, disaster recovery, and regulatory compliance rests with the organisation.
    • Wazuh Cloud (SaaS): Shared responsibility. The provider is responsible for the security of the cloud infrastructure. The organisation is responsible for configuring security policies and managing access.

    Deployment and Maintenance
    • Self-Hosting (On-Premises): Complex and time-consuming deployment, especially in a distributed architecture. Requires continuous maintenance, monitoring, updating, and tuning by qualified personnel.
    • Wazuh Cloud (SaaS): Quick and simple deployment (service activation). Maintenance, updates, and ensuring availability are entirely the provider’s responsibility, minimising the burden on the internal IT team.

    Scalability
    • Self-Hosting (On-Premises): Scalability is possible but requires careful planning, purchase of additional hardware, and manual reconfiguration of the cluster. It can be a slow and costly process.
    • Wazuh Cloud (SaaS): High flexibility and scalability. Resources (computing power, disk space) can be dynamically increased or decreased depending on needs, often with a few clicks.

    Data Sovereignty
    • Self-Hosting (On-Premises): Full data sovereignty. The organisation has 100% control over the physical location of its data, which facilitates compliance with local legal and regulatory requirements (e.g., GDPR).
    • Wazuh Cloud (SaaS): Dependent on the location of the provider’s data centres. May pose challenges related to GDPR compliance if data is stored outside the EU. Potential risk of access on demand by foreign governments.

    Voices from the Battlefield – A Balanced Analysis of Expert and User Opinions

    A theoretical analysis of a platform’s capabilities and architecture is one thing, but its true value is verified in the daily work of security analysts and system administrators. The voices of users from around the world, from small businesses to large enterprises, paint a nuanced picture of Wazuh—a tool that is incredibly powerful, but also demanding. An analysis of opinions gathered from industry portals such as Gartner, G2, Reddit, and specialist forums allows us to identify both its greatest advantages and its most serious challenges.

    The Praise – What Works Brilliantly?

    Several key strengths that attract organisations to Wazuh are repeatedly mentioned in reviews and case studies.

    • Cost as a Game-Changer: For many users, the fundamental advantage is the lack of licensing fees. One information security manager stated succinctly: “It costs me nothing.” This financial accessibility is seen as crucial, especially for smaller entities. Wazuh is often described as a “great, out-of-the-box SOC solution for small to medium businesses” that could not otherwise afford this type of technology.
    • Powerful, Built-in Functionalities: Users regularly praise specific modules that deliver immediate value. File Integrity Monitoring (FIM) and Vulnerability Detection are at the forefront. One reviewer described them as the “biggest advantages” of the platform. FIM is key to detecting unauthorised changes to critical system files, which can indicate a successful attack, while the vulnerability module automatically scans systems for known, unpatched software. The platform’s ability to support compliance with regulations such as HIPAA or PCI DSS is also a frequently highlighted asset, allowing organisations to verify their security posture with a few clicks.
    • Flexibility and Customisation: The open nature of Wazuh is seen as a huge advantage by technical teams. The ability to customise rules, write their own decoders, and integrate with other tools gives a sense of complete control. “I personally love the flexibility of Wazuh, as a system administrator I can think of any use case and I know I’ll be able to leverage Wazuh to pull the logs and create the alerts I need,” wrote Joanne Scott, a lead administrator at one of the companies using the platform.

    The Criticism – Where Do the Challenges Lie?

    Equally numerous and consistent are the voices pointing to significant difficulties and challenges that must be considered before deciding on deployment.

    • Complexity and a Steep Learning Curve: This is the most frequently raised issue. Even experienced security specialists admit that the platform is not intuitive. One expert described it as having a “steep learning curve for newcomers.” Another user noted that “the initial installation and configuration can be a bit complicated, especially for users without much experience in SIEM systems.” This confirms that Wazuh requires dedicated time for learning and experimentation.
    • The Need for Tuning and “Alert Fatigue”: This is probably the biggest operational challenge. Users agree that the default, “out-of-the-box” configuration of Wazuh generates a huge amount of noise—low-priority alerts that flood analysts and make it impossible to detect real threats. One team reported receiving “25,000 to 50,000 low-level alerts per day” from just two monitored endpoints. Without an intensive and, importantly, continuous process of tuning rules, disabling irrelevant alerts, and creating custom ones tailored to the specific environment, the system is practically useless. One of the more blunt comments on a Reddit forum stated that “out of the box it’s kind of shitty.” (A minimal example of this kind of rule tuning follows this list.)
    • Performance and Stability at Scale: While Wazuh performs well in small and medium-sized environments, deployments involving hundreds or thousands of agents can encounter serious stability problems. In one dramatic post on a Google Groups forum, an administrator managing 175 agents described daily problems with agents disconnecting and server services hanging, forcing him to restart the entire infrastructure daily. This shows that scaling Wazuh requires not only more powerful hardware but also deep knowledge of optimising its components.
    • Documentation and Support for Different Systems: Although Wazuh has extensive online documentation, some users find it insufficient for more complex problems. There are also complaints that the predefined decoders (pieces of code responsible for parsing logs) work great for Windows systems but are often outdated or incomplete for other platforms, including popular network devices. This forces administrators to search for unofficial, community-created solutions on platforms like GitHub, which introduces an additional element of risk and uncertainty.
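
    As promised above, this kind of tuning usually means overriding built-in rules in the manager’s /var/ossec/etc/rules/local_rules.xml. The fragment below is an illustrative sketch only: the parent rule ID (5710, one of the built-in SSH authentication rules) and the internal subnet are placeholders, not a recommendation for any particular environment.

    <group name="local,tuning,">
      <!-- Silence (level 0) a noisy built-in rule, but only for events coming from a trusted internal subnet -->
      <rule id="100900" level="0">
        <if_sid>5710</if_sid>
        <srcip>10.0.0.0/8</srcip>
        <description>Suppressed: expected SSH login noise from the internal vulnerability scanner.</description>
      </rule>
    </group>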

    An analysis of these starkly different opinions leads to a key conclusion. Wazuh should not be seen as a ready-to-use product that can simply be “switched on.” It is rather a powerful security framework—a set of advanced tools and capabilities from which a qualified team must build an effective defence system. Its final value depends 90% on the quality of the implementation, configuration, and competence of the team, and only 10% on the software itself. The users who succeed are those who talk about “configuring,” “customising,” and “integrating.” Those who encounter problems are often those who expected a ready-made solution and were overwhelmed by the default configuration. The story of one expert who, during a simulated attack on a default Wazuh installation, “didn’t catch a single thing” is the best proof of this. An investment in a self-hosted Wazuh is really an investment in the people who will manage it.

    Consequences of the Choice – Risk and Reward in the Open-Source Ecosystem

    The decision to base critical security infrastructure on a self-hosted, open-source solution like Wazuh goes beyond a simple technical assessment of the tool itself. It is a strategic immersion into the broader ecosystem of Open Source Software (OSS), which brings with it both enormous benefits and serious, often underestimated, risks.

    The Ubiquity and Hidden Risks of Open-Source Software

    Open-source software has become the foundation of the modern digital economy. According to the 2025 “Open Source Security and Risk Analysis” (OSSRA) report, as many as 97% of commercial applications contain OSS components. They form the backbone of almost every system, from operating systems to libraries used in web applications. However, this ubiquity has its dark side. The same report reveals alarming statistics:

    • 86% of the applications studied contained at least one vulnerability in the open-source components they used.
    • 91% of applications contained components that were outdated and had newer, more secure versions available.
    • 81% of applications contained high or critical risk vulnerabilities, many of which already had publicly available patches.

    One of the biggest challenges is the problem of transitive dependencies. This means that a library a developer consciously adds to a project itself depends on dozens of other libraries, which in turn depend on others. This creates a complex and difficult-to-trace chain of dependencies, meaning organisations often have no idea exactly which components are running in their systems and what risks they carry. This is the heart of the software supply chain security problem.

    By choosing to self-host Wazuh, an organisation takes on full responsibility for managing not only the platform itself but its entire technology stack. This includes the operating system it runs on, the web server, and, above all, key components like the Wazuh Indexer (OpenSearch) and its numerous dependencies. This means it is necessary to track security bulletins for all these elements and react immediately to newly discovered vulnerabilities.

    The Advantages of the Open-Source Model: Transparency and the Power of Community

    In opposition to these risks, however, stand fundamental advantages that make the open-source model so attractive, especially in the field of security.

    • Transparency and Trust: In the case of commercial, closed-source solutions (“black boxes”), the user must fully trust the manufacturer’s declarations regarding security. In the open-source model, the source code is publicly available. This provides the opportunity to conduct an independent security audit and verify that the software does not contain hidden backdoors or serious flaws. This transparency builds fundamental trust, which is invaluable in the context of systems designed to protect a company’s most valuable assets.
    • The Power of Community: Wazuh boasts one of the largest and most active communities in the open-source security world. Users have numerous support channels at their disposal, such as the official Slack, GitHub forums, a dedicated subreddit, and Google Groups. It is there, in the heat of real-world problems, that custom decoders, innovative rules, and solutions to problems not found in the official documentation are created. This collective wisdom is an invaluable resource, especially for teams facing unusual challenges.
    • Avoiding Vendor Lock-in: By choosing a commercial solution, an organisation becomes dependent on a single vendor—their product development strategy, pricing policy, and software lifecycle. If the vendor decides to raise prices, end support for a product, or go bankrupt, the client is left with a serious problem. Open source provides freedom. An organisation can use the software indefinitely, modify and develop it, and even use the services of another company specialising in support for that solution if they are not satisfied with the official support.

    This duality of the open-source nature leads to a deeper conclusion. The decision to self-host Wazuh fundamentally changes the organisation’s role in the security ecosystem. It ceases to be merely a passive consumer of a ready-made security product and becomes an active manager of software supply chain risk. When a company buys a commercial SIEM, it pays the vendor to take responsibility for managing the risk associated with the components from which its product is built. It is the vendor who must patch vulnerabilities in libraries, update dependencies, and guarantee the security of the entire stack. By choosing the free, self-hosted Wazuh, the organisation consciously (or not) takes on all this responsibility itself. To do this in a mature way, it is no longer enough to just know how to configure rules in Wazuh. It becomes necessary to implement advanced software management practices, such as Software Composition Analysis (SCA) to identify all components and their vulnerabilities, and to maintain an up-to-date “Software Bill of Materials” (SBOM) for the entire infrastructure. This significantly raises the bar for competency requirements and shows that the decision to self-host has deep, structural consequences for the entire IT and security department.

    The Verdict – Who is Self-Hosted Wazuh For?

    The analysis of the Wazuh platform in a self-hosted model leads to an unequivocal conclusion: it is a solution with enormous potential, but burdened with equally great responsibility. The key trade-off that runs through every aspect of this technology can be summarised as follows: self-hosted Wazuh offers unparalleled control, absolute data sovereignty, and zero licensing costs, but in return requires significant, often underestimated, investments in hardware and, above all, in highly qualified personnel capable of managing a complex and demanding system that requires constant attention.

    This is not a solution for everyone. Attempting to implement it without the appropriate resources and awareness of its nature is a straight path to frustration, a false sense of security, and ultimately, project failure.

    Profile of the Ideal Candidate

    Self-hosted Wazuh is the optimal, and often the only right, choice for organisations that meet most of the following criteria:

    • They have a mature and competent technical team: They have an internal security and IT team (or the budget to hire/train one) that is not afraid of working with the command line, writing scripts, analysing logs at a low level, and managing a complex Linux infrastructure.
    • They have strict data sovereignty requirements: They operate in highly regulated industries (financial, medical, insurance), in public administration, or in the defence sector, where laws (e.g., GDPR) or internal policies categorically require that sensitive data never leaves physically controlled infrastructure.
    • They operate at a large scale where licensing costs become a barrier: They are large enough that the licensing costs of commercial SIEM systems, which increase with data volume, become prohibitive. In such a case, investing in a dedicated team to manage a free solution becomes economically justified over a period of several years.
    • They understand they are implementing a framework, not a finished product: They accept the fact that Wazuh is a set of powerful building blocks, not a ready-made house. They are prepared for a long-term, iterative process of tuning, customising, and improving the system to fully match the specifics of their environment and risk profile.
    • They have a need for deep customisation: Their security requirements are so unique that standard, commercial solutions cannot meet them, and the ability to modify the source code and create custom integrations is a key value.

    Questions for Self-Assessment

    For all other organisations, especially smaller ones with limited human resources and without strict sovereignty requirements, a much safer and more cost-effective solution will likely be to use the Wazuh Cloud service or another commercial SIEM/XDR solution.

    Before making the final, momentous decision, every technical leader and business manager should ask themselves and their team a series of honest questions:

    1. Have we realistically assessed the Total Cost of Ownership (TCO)? Does our budget account not only for servers but also for the full-time equivalents of specialists who will manage this platform 24/7, including their salaries, training, and the time needed to learn?
    2. Do we have the necessary expertise in our team? Do we have people capable of advanced rule tuning, managing a distributed cluster, diagnosing performance issues, and responding to failures in the middle of the night? If not, are we prepared to invest in their recruitment and development?
    3. What is our biggest risk? Are we more concerned about operational risk (system failure, human error, inadequate monitoring) or regulatory and geopolitical risk (breach of data sovereignty, third-party access)? How does the answer to this question influence our decision?
    4. Are we ready for full responsibility? Do we understand that by choosing self-hosting, we are taking responsibility not only for the configuration of Wazuh but for the security of the entire software supply chain on which it is based, including the regular patching of all its components?

    Only an honest answer to these questions will allow you to avoid a costly mistake and make a choice that will genuinely strengthen your organisation’s cybersecurity, rather than creating an illusion of it.

    Integrating Logs from Docker Applications with Wazuh SIEM

    In modern IT environments, containerisation using Docker has become the standard. It enables the rapid deployment and scaling of applications but also introduces new challenges in security monitoring. By default, logs generated by applications running in containers are isolated from the host system, which complicates their analysis by SIEM systems like Wazuh.

    In this post, we will show you how to break down this barrier. We will guide you step-by-step through the configuration process that will allow the Wazuh agent to read, analyse, and generate alerts from the logs of any application running in a Docker container. We will use the password manager Vaultwarden as a practical example.

    The Challenge: Why is Accessing Docker Logs Difficult?

    Docker containers have their own isolated file systems. Applications inside them most often send their logs to “standard output” (stdout/stderr), which is captured by Docker’s logging mechanism. The Wazuh agent, running on the host system, does not have default access to this stream or to the container’s internal files.

    To enable monitoring, we must make the application logs visible to the Wazuh agent. The best and cleanest way to do this is to configure the container to write its logs to a file and then share that file externally using a Docker volume.

    Step 1: Exposing Application Logs Outside the Container

    Our goal is to make the application’s log file appear in the host server’s file system. We will achieve this by modifying the docker-compose.yml file.

    1. Configure the application to log to a file: Many Docker images allow you to define the path to a log file using an environment variable. In the case of Vaultwarden, this is LOG_FILE.
    2. Map a volume: Create a mapping between a directory on the host server and a directory inside the container where the logs are saved.

    Here is an example of what a fragment of the docker-compose.yml file for Vaultwarden with the correct logging configuration might look like:

    version: "3"

    services:
      vaultwarden:
        image: vaultwarden/server:latest
        container_name: vaultwarden
        restart: unless-stopped
        volumes:
          # Volume for application data (database, attachments, etc.)
          - ./data:/data
        ports:
          - "8080:80"
        environment:
          # This variable instructs the application to write logs to a file inside the container
          - LOG_FILE=/data/vaultwarden.log

    What happened here?

    • LOG_FILE=/data/vaultwarden.log: We are telling the application to create a vaultwarden.log file in the /data directory inside the container.
    • ./data:/data: We are mapping the /data directory from the container to a data subdirectory in the location where the docker-compose.yml file is located (on the host).

    After saving the changes and restarting the container (docker-compose down && docker-compose up -d), the log file will be available on the server at a path like /opt/vaultwarden/data/vaultwarden.log.

    Step 2: Configuring the Wazuh Agent to Monitor the File

    Now that the logs are accessible on the host, we need to instruct the Wazuh agent to read them.

    Open the agent’s configuration file:

    sudo nano /var/ossec/etc/ossec.conf

    Add the following block within the <ossec_config> section:

    <localfile>
      <location>/opt/vaultwarden/data/vaultwarden.log</location>
      <log_format>syslog</log_format>
    </localfile>

    Restart the agent to apply the changes:

    sudo systemctl restart wazuh-agent

    From now on, every new line in the vaultwarden.log file will be sent to the Wazuh manager.
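
    Before moving on, it is worth confirming that the agent has actually picked the file up. Assuming the default installation paths, the agent’s own log should mention the new location shortly after the restart:

    # The logcollector records every file it monitors; look for the Vaultwarden entry
    sudo grep "vaultwarden.log" /var/ossec/logs/ossec.log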

    Step 3: Translating Logs into the Language of Wazuh (Decoders)

    The Wazuh manager is now receiving raw log lines, but it doesn’t know how to interpret them. We need to create decoders that will “teach” it to extract key information, such as the attacker’s IP address or the username.

    On the Wazuh manager server, edit the local decoders file:

    sudo nano /var/ossec/etc/decoders/local_decoder.xml

    Add the following decoders:

    <!-- Decoder for Vaultwarden logs -->
    <decoder name="vaultwarden">
      <prematch>vaultwarden::api::identity</prematch>
    </decoder>

    <!-- Decoder for failed login attempts in Vaultwarden -->
    <decoder name="vaultwarden-failed-login">
      <parent>vaultwarden</parent>
      <prematch>Username or password is incorrect. Try again. IP: </prematch>
      <regex>IP: (\S+)\. Username: (\S+)\.$</regex>
      <order>srcip, user</order>
    </decoder>
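
    Before restarting anything, you can try the decoders out interactively with the wazuh-logtest tool on the manager. The sample line below is only an approximation of a real Vaultwarden entry, included to show the workflow; the output should tell you which decoder matched and which fields were extracted.

    sudo /var/ossec/bin/wazuh-logtest
    # Paste a sample log line at the prompt, for example:
    # [2024-01-01 12:00:00.000][vaultwarden::api::identity][ERROR] Username or password is incorrect. Try again. IP: 203.0.113.5. Username: admin@example.com.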

    Step 4: Creating Rules and Generating Alerts

    Once Wazuh can understand the logs, we can create rules that will generate alerts.

    On the manager server, edit the local rules file:

    sudo nano /var/ossec/etc/rules/local_rules.xml

    Add the following rule group:

    <group name="vaultwarden,">
      <rule id="100105" level="5">
        <decoded_as>vaultwarden</decoded_as>
        <description>Vaultwarden: Failed login attempt for user $(user) from IP address: $(srcip).</description>
        <group>authentication_failed,</group>
      </rule>

      <rule id="100106" level="10" frequency="6" timeframe="120">
        <if_matched_sid>100105</if_matched_sid>
        <description>Vaultwarden: Multiple failed login attempts (possible brute-force attack) from IP address: $(srcip).</description>
        <mitre>
          <id>T1110</id>
        </mitre>
        <group>authentication_failures,</group>
      </rule>
    </group>

    Note: Ensure that each rule id is unique and does not appear anywhere else in the local_rules.xml file. Custom rules should use IDs from the 100000–120000 range reserved for user-defined rules; change the IDs above if they are already taken.

    Step 5: Restart and Verification

    Finally, restart the Wazuh manager to load the new decoders and rules:

    sudo systemctl restart wazuh-manager

    To test the configuration, make several failed login attempts to your Vaultwarden application. After a short while, you should see level 5 alerts in the Wazuh dashboard for each attempt, and after exceeding the threshold (6 attempts in 120 seconds), a critical level 10 alert indicating a brute-force attack.
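
    If the dashboard is slow to refresh, the same alerts can be watched directly on the manager, assuming the default log locations:

    # Follow newly generated alerts and filter for the Vaultwarden rules
    sudo tail -f /var/ossec/logs/alerts/alerts.json | grep -i vaultwarden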

    Summary

    Integrating logs from applications running in Docker containers with the Wazuh system is a key element in building a comprehensive security monitoring system. The scheme presented above—exposing logs to the host via a volume and then analysing them with custom decoders and rules—is a universal approach that you can apply to virtually any application, not just Vaultwarden. This gives you full visibility of events across your entire infrastructure, regardless of the technology it runs on.

  • Ubuntu Pro: More Than Just a Regular System. A Comprehensive Guide to Services and Benefits

    Ubuntu Pro: More Than Just a Regular System. A Comprehensive Guide to Services and Benefits

    Canonical, the company behind the world’s most popular Linux distribution, offers an extended subscription called Ubuntu Pro. This service, available for free for individual users on up to five machines, elevates the standard Ubuntu experience to the level of corporate security, compliance, and extended technical support. What exactly does this offer include, and is it worth using?

    Ubuntu Pro is the answer to the growing demands for cybersecurity and stability of operating systems, both in commercial and home environments. The subscription integrates a range of advanced services that were previously reserved mainly for large enterprises, making them available to a wide audience. A key benefit is the extension of the system’s life cycle (LTS) from 5 to 10 years, which provides critical security updates for thousands of software packages.

    A Detailed Review of the Services Offered with Ubuntu Pro

    To fully understand the value of the subscription, you should look at its individual components. After activating Pro, the user gains access to a services panel that can be freely enabled and disabled depending on their needs.

    1. ESM-Infra & ESM-Apps: Ten Years of Peace of Mind

    The core of the Pro offering is the Expanded Security Maintenance (ESM) service, divided into two pillars:

    • esm-infra (Infrastructure): Guarantees security patches for over 2,300 packages from the Ubuntu main repository for 10 years. This means the operating system and its key components are protected against newly discovered vulnerabilities (CVEs) for much longer than in the standard LTS version.
    • esm-apps (Applications): Extends protection to over 23,000 packages from the community-supported universe repository. This is a huge advantage, as many popular applications, programming libraries, and tools we install every day come from there. Thanks to esm-apps, they also receive critical security updates for a decade.

    In practice, this means that a production server or workstation with an LTS version of the system can run safely and stably for 10 years without the need for a major system upgrade.
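
    A quick way to see what this coverage means for a particular machine is the built-in status report, which breaks installed packages down by repository and shows whether they are covered by standard updates, esm-infra, or esm-apps:

    # Summarise installed packages and their security coverage
    pro security-status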

    2. Livepatch: Kernel Updates Without a Restart

    The Canonical Livepatch service is one of the most appreciated tools in environments requiring maximum uptime. It allows the installation of critical and high-risk security patches for the Linux kernel while it is running, without the need to reboot the computer. For server administrators running key services, this is a game-changing feature – it eliminates downtime and allows for an immediate response to threats.

    End of server restarts. The Livepatch service revolutionises Linux updates

    Updating the operating system’s kernel without having to reboot the machine is becoming the standard in environments requiring continuous availability. The Canonical Livepatch service allows critical security patches to be installed in real-time, eliminating downtime and revolutionising the work of system administrators.

    In a digital world where every minute of service unavailability can generate enormous losses, planned downtime for system updates is becoming an ever greater challenge. The answer to this problem is the Livepatch technology, offered by Canonical, the creators of the popular Ubuntu distribution. It allows for the deployment of the most important Linux kernel security patches without the need to restart the server.

    How does Livepatch work?

    The service runs in the background, monitoring for available security updates marked as critical or high priority. When such a patch is released, Livepatch applies it directly to the running kernel. This process is invisible to users and applications, which can operate without any interruptions.

    “For administrators managing a fleet of servers on which a company’s business depends, this is a game-changing feature,” a cybersecurity expert comments. “Instead of planning maintenance windows in the middle of the night and risking complications, we can respond instantly to newly discovered threats, maintaining one hundred percent business continuity.”

    Who benefits most?

    This solution is particularly valuable in sectors such as finance, e-commerce, telecommunications, and healthcare, where systems must operate 24/7. With Livepatch, companies can meet rigorous service level agreements (SLAs) while maintaining the highest standard of security.

    Eliminating the need to restart not only saves time but also minimises the risk associated with restarting complex application environments.

    Technology such as Canonical Livepatch sets a new direction in IT infrastructure management. It shifts the focus from reactive problem-solving to proactive, continuous system protection. In an age of growing cyber threats, the ability to instantly patch vulnerabilities, without affecting service availability, is no longer a convenience, but a necessity.
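
    On a machine that is already attached to Ubuntu Pro, enabling and checking Livepatch comes down to two commands. This is a sketch of the typical flow; the service has to be supported by the kernel you are running:

    # Enable the Livepatch service on an attached machine
    sudo pro enable livepatch

    # Show which kernel patches are currently applied
    canonical-livepatch status --verbose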

    3. Landscape: Central Management of a Fleet of Systems

    Landscape is a powerful tool for managing and administering multiple Ubuntu systems from a single, central dashboard. It enables remote updates, machine status monitoring, user and permission management, and task automation. Although its functionality may be limited in the free plan, in commercial environments it can save administrators hundreds of hours of work.

    Landscape: How to Master a Fleet of Ubuntu Systems from One Place?

    In today’s IT environments, where the number of servers and workstations can reach hundreds or even thousands, manually managing each system separately is not only inefficient but virtually impossible. Canonical, the company behind the most popular Linux distribution – Ubuntu, provides a solution to this problem: Landscape. It’s a powerful tool that allows administrators to centrally manage an entire fleet of machines, saving time and minimising the risk of errors.

    What is Landscape?

    Landscape is a system management platform that acts as a central command centre for all Ubuntu machines in your organisation. Regardless of whether they are physical servers in a server room, virtual machines in the cloud, or employees’ desktop computers, Landscape enables remote monitoring, management, and automation of key administrative tasks from a single, clear web browser.

    The main goal of the tool is to simplify and automate repetitive tasks that consume most of administrators’ time. Instead of logging into each server separately to perform updates, you can do so for an entire group of machines with a few clicks.

    Key Features in Practice

    The strength of Landscape lies in its versatility. The most important functions include:

    • Remote Updates and Package Management: Landscape allows for the mass deployment of security and software updates on all connected systems. An administrator can create update profiles for different groups of servers (e.g., production, test) and schedule their installation at a convenient time, minimising the risk of downtime.
    • Real-time Monitoring and Alerts: The platform continuously monitors key system parameters, such as processor load, RAM usage, disk space availability, and component temperature. If predefined thresholds are exceeded, the system automatically sends alerts, allowing for a quick response before a problem escalates into a serious failure.
    • User and Permission Management: Creating, modifying, and deleting user accounts on multiple machines simultaneously becomes trivially simple. Landscape enables central management of permissions, which significantly increases the level of security and facilitates audits.
    • Task Automation: One of the most powerful features is the ability to remotely run scripts on any number of machines. This allows you to automate almost any task – from routine backups and the installation of specific software to comprehensive configuration audits.

    Free Plan vs. Commercial Environments

    Canonical offers Landscape on a subscription basis, but also provides a free “Landscape On-Premises” plan that allows you to manage up to 10 machines at no cost. This is an excellent option for small businesses, enthusiasts, or for testing purposes. Although the functionality in this plan may be limited compared to the full commercial versions, it provides a solid insight into the platform’s capabilities.

    However, it is in large commercial environments that Landscape shows its true power. For companies managing dozens or hundreds of servers, investing in a license quickly pays for itself. Reducing the time needed for routine tasks from days to minutes translates into real financial savings and allows administrators to focus on more strategic projects. Experts estimate that implementing central management can save hundreds of hours of work per year.

    Landscape is an indispensable tool for any organisation that takes the management of its Ubuntu-based infrastructure seriously. Centralisation, automation, and proactive monitoring are key elements that not only increase efficiency and security but also allow for scaling operations without a proportional increase in costs and human resources. In an age of digital transformation, effective management of a fleet of systems is no longer a luxury, but a necessity.
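
    For a self-hosted “Landscape On-Premises” instance, enrolling a machine usually amounts to installing the client package and pointing it at your own server. The computer title and hostnames below are placeholders for illustration only:

    # Install the Landscape client and register this machine with a self-hosted Landscape server
    sudo apt install landscape-client
    sudo landscape-config --computer-title "web-01" \
      --account-name standalone \
      --url https://landscape.example.com/message-system \
      --ping-url http://landscape.example.com/ping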

    4. Real-time Kernel: Real-time Precision

    For specific applications, such as industrial automation, robotics, telecommunications, or stock trading systems, predictability and determinism are crucial. The Real-time Kernel is a special version of the Ubuntu kernel with integrated PREEMPT_RT patches, which minimises delays and guarantees that the highest priority tasks are executed within strictly defined time frames.

    In a world where machine decisions must be made in fractions of a second, standard operating systems are often unable to meet strict timing requirements. The answer to these challenges is the real-time operating system kernel (RTOS). Ubuntu, one of the most popular Linux distributions, is entering this highly specialised market with a new product: the Real-time Kernel.

    What is it and why is it important?

    The Real-time Kernel is a special version of the Ubuntu kernel in which a set of patches called PREEMPT_RT have been implemented. Their main task is to modify how the kernel manages tasks, so that the highest priority processes can pre-empt (interrupt) lower-priority ones almost immediately. In practice, this eliminates unpredictable delays (so-called latency) and guarantees that critical operations will be executed within a strictly defined, repeatable time window.

    “The Ubuntu real-time kernel provides industrial-grade performance and resilience for software-defined manufacturing, monitoring, and operational technologies,” said Mark Shuttleworth, CEO of Canonical.

    For sectors such as industrial automation, this means that PLC controllers on the assembly line can process data with absolute precision, ensuring continuity and integrity of production. In robotics, from assembly arms to autonomous vehicles, timing determinism is crucial for safety and smooth movement. Similarly, in telecommunications, especially in the context of 5G networks, the infrastructure must handle huge amounts of data with ultra-low latency, which is a necessary condition for service reliability. Stock trading systems, where milliseconds decide on transactions worth millions, also belong to the group of beneficiaries of this technology.

    How does it work? Technical context

    The PREEMPT_RT patches, developed for years by the Linux community, transform a standard kernel into a fully pre-emptible one. Mechanisms such as spinlocks (locks that protect against simultaneous access to data), which in a traditional kernel cannot be interrupted, become pre-emptible in the RT version. In addition, hardware interrupt handlers are transformed into threads with a specific priority, which allows for more precise management of processor time.

    Thanks to these changes, the system is able to guarantee that a high-priority task will gain access to resources in a predictable, short time, regardless of the system’s load by other, less important processes.

    The integration of PREEMPT_RT with the official Ubuntu kernel (available as part of the Ubuntu Pro subscription) is a significant step towards the democratisation of real-time systems. This simplifies the deployment of advanced solutions in industry, lowering the entry barrier for companies that until now had to rely on niche, often closed and expensive RTOS systems. The availability of a stable and supported real-time kernel in a popular operating system can accelerate innovation in the fields of the Internet of Things (IoT), autonomous vehicles, and smart factories, where precision and reliability are not an option but a necessity.
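
    On a machine attached to Ubuntu Pro, the real-time kernel is delivered as an optional service. The sketch below assumes a supported LTS release; a reboot into the new kernel is required afterwards:

    # Enable the real-time kernel, then reboot into it
    sudo pro enable realtime-kernel
    sudo reboot

    # After rebooting, the running kernel version should mention PREEMPT_RT
    uname -a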

    5. USG (Ubuntu Security Guide): Auditing and Security Hardening

    USG is a tool for automating the processes of system hardening and auditing for compliance with rigorous security standards, such as CIS Benchmarks or DISA-STIG. Instead of manually configuring hundreds of system settings, an administrator can use USG to automatically apply recommended policies and generate a compliance report.

    In an age of growing cyber threats and increasingly stringent compliance requirements, system administrators face the challenge of manually configuring hundreds of settings to secure IT infrastructure. Canonical, the company behind the popular Linux distribution, offers the Ubuntu Security Guide (USG) tool, which automates the processes of system hardening and auditing, ensuring compliance with key security standards, such as CIS Benchmarks and DISA-STIG.

    What is the Ubuntu Security Guide and how does it work?

    The Ubuntu Security Guide is an advanced command-line tool, available as part of the Ubuntu Pro subscription. Its main goal is to simplify and automate the tedious tasks associated with securing Ubuntu operating systems. Instead of manually editing configuration files, changing permissions, and verifying policies, administrators can use ready-made security profiles.

    USG uses the industry-recognised OpenSCAP (Security Content Automation Protocol) tool as its backend, which ensures the consistency and reliability of the audits performed. The process is simple and is based on two key commands:

    • usg audit [profile] – Scans the system for compliance with the selected profile (e.g., cis_level1_server) and generates a detailed report in HTML format. This report indicates which security rules are met and which require intervention.
    • usg fix [profile] – Automatically applies configuration changes to adapt the system to the recommendations contained in the profile.
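
    Putting the two commands above into a first run, and assuming the machine is already attached to Ubuntu Pro, the workflow might look like this:

    # USG is distributed via Ubuntu Pro: enable the service, install the tool, then audit
    sudo pro enable usg
    sudo apt install usg
    sudo usg audit cis_level1_server

    # Apply the profile’s recommendations (review the audit report first)
    sudo usg fix cis_level1_server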

    As Canonical emphasises in its official documentation, USG was designed to “simplify the DISA-STIG hardening process by leveraging automation.”

    Compliance with CIS and DISA-STIG at Your Fingertips

    For many organisations, especially in the public, financial, and defence sectors, compliance with international security standards is not just good practice but a legal and contractual obligation. CIS Benchmarks, developed by the Center for Internet Security, and DISA-STIG (Security Technical Implementation Guides), required by the US Department of Defence, are collections of hundreds of detailed configuration guidelines.

    Manually implementing these standards is extremely time-consuming and prone to errors. USG addresses this problem by providing predefined profiles that map these complex requirements to specific, automated actions. Example configurations managed by USG include:

    • Password policies: Enforcing appropriate password length, complexity, and expiration period.
    • Firewall configuration: Blocking unused ports and restricting access to network services.
    • SSH security: Enforcing key-based authentication and disabling root account login.
    • File system: Setting restrictive mounting options, such as noexec and nosuid on critical partitions.
    • Deactivation of unnecessary services: Disabling unnecessary daemons and services to minimise the attack surface.

    The ability to customise profiles using so-called “tailoring files” allows administrators to flexibly implement policies, taking into account the specific needs of their environment, without losing compliance with the general standard.
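
    As a hedged sketch of that workflow (command names as documented for recent USG releases; verify against your version), a tailoring file is generated from a base profile, edited to enable or disable individual rules, and then passed back to the audit and fix commands:

    # Generate an editable tailoring file based on the CIS Level 1 server profile
    sudo usg generate-tailoring cis_level1_server tailoring.xml

    # After editing tailoring.xml, audit and remediate using the customised rule set
    sudo usg audit --tailoring-file tailoring.xml
    sudo usg fix --tailoring-file tailoring.xml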

    Consequences of Non-Compliance and the Role of Automation

    Ignoring standards such as CIS or DISA-STIG carries serious consequences. Apart from the obvious increase in the risk of a successful cyberattack, organisations expose themselves to severe financial penalties, loss of certification, and serious reputational damage. Non-compliance can lead to the loss of key contracts, especially in the government sector.

    Security experts agree that compliance automation tools are crucial in modern IT management. They allow not only for a one-time implementation of policies but also for continuous monitoring and maintenance of the desired security state in dynamically changing environments.

    The Ubuntu Security Guide is a response to the growing complexity in the field of cybersecurity and regulations. By shifting the burden of manual configuration to an automated and repeatable process, USG allows administrators to save time, minimise the risk of human error, and provide measurable proof of compliance with global standards. In an era where security is the foundation of digital trust, tools like USG are becoming an indispensable part of the arsenal of every IT professional managing Ubuntu-based infrastructure.

    6. Anbox Cloud: Android in the Cloud at Scale

    Anbox Cloud is a platform that allows you to run the Android system in cloud containers. This is a solution aimed mainly at mobile application developers, companies in the gaming industry (cloud gaming), or automotive (infotainment systems). It enables mass application testing, process automation, and streaming of Android applications with ultra-low latency.

    How to Install and Configure Ubuntu Pro? A Step-by-Step Guide

    Activating Ubuntu Pro is simple and takes only a few minutes.

    Requirements:

    • Ubuntu LTS version (e.g., 18.04, 20.04, 22.04, 24.04).
    • Access to an account with sudo privileges.
    • An Ubuntu One account (which can be created for free).

    Step 1: Get your subscription token

    1. Go to the ubuntu.com/pro website and log in to your Ubuntu One account.
    2. You will be automatically redirected to your Ubuntu Pro dashboard.
    3. In the dashboard, you will find a free personal token. Copy it.

    Step 2: Connect your system to Ubuntu Pro

    Open a terminal on your computer and execute the command below, pasting the copied string into the place of [YOUR_TOKEN]:

    sudo pro attach [YOUR_TOKEN]

    The system will connect to Canonical’s servers and automatically enable default services, such as esm-infra and livepatch.

    Step 3: Manage services

    You can check the status of your services at any time with the command:

    pro status --all

    You will see a list of all available services along with information on whether they are enabled or disabled.

    To enable a specific service, use the enable command. For example, to activate esm-apps:

    sudo pro enable esm-apps

    image 120

    Similarly, to disable a service, use the disable command:

    sudo pro disable landscape

    Alternative: Configuration via a graphical interface

    On Ubuntu Desktop systems, you can also manage your subscription through a graphical interface. Open the “Software & Updates” application, go to the “Ubuntu Pro” tab, and follow the instructions to activate the subscription using your token.

    image 121

    Summary

    Ubuntu Pro is a powerful set of tools that significantly increases the level of security, stability, and management capabilities of the Ubuntu system. Thanks to the generous free subscription offer for individual users, everyone can now take advantage of features that until recently were the domain of corporations. Whether you are a developer, a small server administrator, or simply a conscious user who cares about long-term support, activating Ubuntu Pro is a step that is definitely worth considering.

  • Windows 10 Reaches End-of-Life. Which System to Choose in 2025?

    Windows 10 Reaches End-of-Life. Which System to Choose in 2025?

    The Dawn of a New Computing Era – Navigating the Windows 10 End-of-Life

    The 14th of October 2025 Deadline: What This Really Means for Your Device

    Windows 10 support will officially end on the 14th of October 2025. This date marks the end of Microsoft’s provision of ongoing maintenance and protection for this operating system. After this critical date, Microsoft will no longer provide:

    • Technical support for any issues. This means there will be no official technical help or troubleshooting support from Microsoft.
    • Software updates, which include performance enhancements, bug fixes, and compatibility improvements. The system will become static in terms of its core functionality.
    • Most importantly, security updates or patches will no longer be issued. This is the most significant consequence for user security.

    It’s important to understand that a Windows 10 computer will continue to function after this date. It won’t suddenly stop working or become useless. However, continuing to operate it without security patches carries serious risks.

    Microsoft 365 app support on Windows 10 will also end on the 14th of October 2025. While these applications will still work, Microsoft strongly recommends upgrading to Windows 11 to avoid performance and reliability issues over time. It should be noted that Microsoft will continue to provide security updates for Microsoft 365 on Windows 10 for an additional three years, until the 10th of October 2028. Support for non-subscription versions of Office (2016, 2019) will also end on the 14th of October 2025, on all operating systems. Office 2021 and 2024 (including LTSC versions) will still work on Windows 10 but will no longer be officially supported.

    The Risks of Staying on an Old System: Why an Unsupported OS is a Liability

    Remaining on an unsupported operating system brings with it a number of serious risks that can have far-reaching consequences for security, performance, and compliance.

    • Security Risks: The Biggest Concerns: Without regular security updates from Microsoft, Windows 10 systems will become increasingly vulnerable to cyberattacks. Hackers actively seek and exploit newly discovered vulnerabilities in outdated systems that will no longer be patched after support ends. This can lead to a myriad of security issues, including unauthorised access to sensitive data, ransomware attacks, and breaches of confidential financial or customer information.

    The risk of an unsupported operating system is not a sudden, immediate failure, but a gradually increasing exposure. This risk will grow over time as security vulnerabilities are found and not patched. This means that every new vulnerability discovered globally that affects Windows 10 will remain an open door for attackers on unsupported systems. The risk profile of an unsupported Windows 10 PC is not static; its security posture declines steadily as more zero-day vulnerabilities become public knowledge and are integrated into attacker tools. While some users might believe that “common sense,” a firewall, and an antivirus are sufficient for “a few years,” this approach fails to account for the dynamic and escalating nature of cyber threats. Users who choose to remain on Windows 10 without the ESU programme aren’t just risking a single, isolated attack. They are exposing themselves to an ever-growing and increasingly dangerous attack surface. This means that even with cautious user behaviour, the sheer number of unpatched vulnerabilities will eventually make their system an easy target for malicious actors, drastically increasing the probability of a successful attack over time.

    • Software Incompatibility and Performance Issues: As the broader tech ecosystem progresses, software developers will inevitably shift their focus to Windows 11 and newer operating systems, leaving Windows 10 behind. This will, over time, cause a number of problems for Windows 10 users:
    • Slower Performance: The lack of ongoing updates and optimisations can cause the system to slow down, use resources inefficiently, and experience an overall decrease in performance.
    • Application Crashes: Critical business tools or popular consumer applications that rely on modern system architectures or APIs may cease to function correctly, or at all, hindering day-to-day tasks.
    • Limited Vendor Support: IT and software vendors are likely to prioritise newer systems like Windows 11, making it difficult and potentially more expensive to find support for Windows 10 issues.
    • Hardware Upgrade Pressure: Businesses and individuals may face additional challenges if their systems no longer meet the hardware requirements for newer software, forcing them into costly upgrades or replacements.
    • Compliance and Regulatory Risks (Especially for Businesses): For industries subject to specific security and compliance regulations (e.g., healthcare, finance, government), staying on an unsupported operating system can pose significant risks. Many regulatory frameworks explicitly require companies to use supported, up-to-date software to ensure adequate data protection and security measures. Continuing to use Windows 10 past the end-of-life date could put an organisation at risk of failing audits, which could result in hefty fines, penalties, and even loss of certification.

    Delaying the transition to a new operating system does not necessarily mean saving money. In fact, doing so can lead to significantly higher costs in the long run: emergency upgrades, potential downtime, and unplanned hardware replacements all carry financial consequences that extend well beyond the direct cost of the update itself. Delaying the transition doesn’t save money; it merely defers costs, often with significant multipliers due to urgency, disruption, and unforeseen consequences. For businesses, the cost extends far beyond direct financial penalties. A security breach or non-compliance due to an unsupported OS can lead to severe reputational damage, loss of customer trust, legal liability, and long-term operational disruptions that are often far more expensive and difficult to recover from than a planned, proactive upgrade. Delaying the transition is a false economy. It is a deferral of inevitable costs, which are likely to be compounded by unexpected crises, legal repercussions, and reputational damage. Proactively planning and investing now can prevent far greater, unquantifiable losses in the future.

    Upgrading to Windows 11 – A Smooth Transition

    Windows Update

    For many users, the most straightforward and recommended path will be to upgrade to Windows 11, Microsoft’s latest operating system. This option provides continuity in a familiar Windows ecosystem while offering expanded features, enhanced security, and long-term support directly from Microsoft.

    Is Your PC Ready for Windows 11? Demystifying the System Requirements

    To upgrade directly from an existing Windows 10 installation, your device must be running Windows 10, version 2004 or later, and have the 14 September 2021 security update or later installed. These are preconditions for the upgrade process itself.

    Minimum Hardware Requirements for Windows 11: Microsoft has established specific hardware baselines to ensure that Windows 11 delivers a consistent and secure experience. Your computer must meet or exceed the following specifications:

    • Processor: 1 gigahertz (GHz) or faster with two or more cores on a compatible 64-bit processor or System on a Chip (SoC).
    • RAM: 4 gigabytes (GB) or more.
    • Storage: A storage device of 64 GB or greater. Note that additional storage may be required over time for updates and specific features.
    • System Firmware: UEFI, with Secure Boot capability. This refers to the modern firmware interface that replaces the older BIOS.
    • TPM: Trusted Platform Module (TPM) version 2.0. This is a cryptographic processor that enhances security.
    • Graphics Card: Compatible with DirectX 12 or later with WDDM 2.0 driver.
    • Display: A high-definition (720p) display that is greater than 9 inches diagonally, with 8 bits per colour channel.
    • Internet Connection and Microsoft Account: Required for Windows 11 Home to complete the initial device setup on first use, and generally essential for updates and certain features.

    Key Requirements Nuances (TPM and Secure Boot): These two requirements are often the most common sticking points for users with otherwise capable hardware.

    Many PCs shipped within the last 5 years are technically capable of supporting Trusted Platform Module version 2.0 (TPM 2.0), but it may be disabled by default in the UEFI BIOS settings. This is particularly true for retail PC motherboards used by individuals who build their own computers. Secure Boot is an important security feature designed to prevent malicious software from loading during the computer’s startup. Most modern computers are Secure Boot capable, but similar to TPM, there may be settings that make the PC appear not to be Secure Boot capable. These settings can often be changed within the computer’s firmware (BIOS).

    The initial “incompatible” message from Microsoft’s PC Health Check app can be misleading for many users, potentially leading them to believe they need to buy a new computer when their existing one is fully capable. In reality, TPM 2.0 and Secure Boot are often features that can be enabled on existing hardware but are simply turned off by default: most PCs shipped within the last five years are capable of supporting TPM 2.0, even if they are not configured to use it, and most modern computers are Secure Boot capable, even though certain firmware settings can make a PC appear not to be. Educating users on how to check and enable these critical BIOS/UEFI settings is extremely important for a smooth, cost-effective, and environmentally friendly transition to Windows 11, preventing unnecessary hardware waste.

    Unlocking Windows 11: A Guide to Checking and Enabling Compatibility

    Microsoft provides the PC Health Check app to assess your device’s readiness for Windows 11. This application will indicate if your system meets the minimum requirements.

    How to Check Your TPM 2.0 Status:

    • Press the Windows key + R to open the Run dialogue box, then type “tpm.msc” (without quotes) and select OK.
    • If a message appears saying “Compatible TPM not found,” your PC may have the TPM disabled. You’ll need to enable it in the BIOS.
    • If a TPM is ready for use, check “Specification Version” under the “TPM Manufacturer Information” section to see if it’s version 2.0. If it is lower than 2.0, the device does not meet the Windows 11 requirements.

    How to Enable TPM and Secure Boot: These settings are managed via the UEFI BIOS (the computer’s firmware). The exact steps and labels vary depending on the device manufacturer, but the general method of access is as follows:

    • Go to Settings > Update & Security > Recovery and select Restart now under the “Advanced startup” section.
    • On the next screen, choose Troubleshoot > Advanced options > UEFI Firmware Settings > Restart to apply changes.
    • In the UEFI BIOS, these settings are sometimes located in a submenu called “Advanced,” “Security,” or “Trusted Computing.”
    • The option to enable TPM may be labelled as “Security Device,” “Security Device Support,” “TPM State,” “AMD fTPM switch,” “AMD PSP fTPM,” “Intel PTT,” or “Intel Platform Trust Technology.”
    • To enable Secure Boot, you will typically need to switch your computer’s boot mode from “Legacy” BIOS (also known as “CSM” mode) to UEFI (Unified Extensible Firmware Interface).

    Beyond the Basics: Key Windows 11 Features and Benefits for Different Users

    Windows 11 is not just a security update; it introduces a range of new features and enhancements designed to boost productivity, improve the gaming experience, and provide a better overall user experience.

    • Productivity and UI Enhancements:
    • Redesigned Shell: Windows 11 features a fresh, modern visual design influenced by elements of the cancelled Windows 10X project. This includes a centred Start menu, a separate “Widgets” panel replacing the old Live Tiles, and new window management features.
    • Snap Layouts: This feature allows users to easily utilise available desktop space by opening apps in pre-configured layouts that intelligently adjust to the screen size and dimensions, speeding up workflow by an average of 50%.
    • Desktops: Users can create separate virtual desktops for different projects or work streams and instantly switch between them from the taskbar, which helps with organisation.
    • Microsoft Teams Integration: The Microsoft Teams collaboration platform is deeply integrated into the Windows 11 UI, accessible directly from the taskbar. This simplifies communication compared to Windows 10, where setup was more difficult. Skype is no longer included by default.
    • Live Captions: A system-wide feature that allows users to enable real-time live captions for videos and online meetings.
    • Improved Microsoft Store: The Microsoft Store has been redesigned, allowing developers to distribute Win32 applications, Progressive Web Applications (PWAs), and other packaging technologies. Microsoft also plans to allow third-party app stores (such as the Epic Games Store) to distribute their clients.
    • Android App Integration: A brand-new feature for Windows, enabling native integration of Android apps into the taskbar and UI via the new Microsoft Store. Users can access around 500,000 apps from the Amazon Appstore, including popular titles such as Disney Plus, TikTok, and Netflix.
    • Seamless Redocking: When docking or undocking from an external display, Windows 11 remembers how apps were arranged, providing a smooth transition back to your preferred layout.
    • Voice Typing/Voice Access: While voice typing is available on both systems, Windows 11 introduces comprehensive Voice Access for system navigation.
    • Digital Pen Experience: Offers an enhanced writing experience for users with digital pens.
    • Gaming Enhancements: Windows 11 includes gaming technologies from the Xbox Series X and Series S consoles, aiming to set a new standard in PC gaming.
    • DirectStorage: A unique feature that significantly reduces game loading times by allowing game data to be streamed directly from an NVMe SSD to the graphics card, bypassing CPU bottlenecks. This allows for faster gameplay and more detailed, expansive game worlds. It should be noted that Microsoft has confirmed DirectStorage will also be available for Windows 10, but NVMe SSDs are key to its benefits.
    • Auto HDR: Automatically adds High Dynamic Range (HDR) enhancements to games built on DirectX 11 or later, improving contrast and colour accuracy for a more immersive visual experience on HDR monitors.
    • Xbox Game Pass Integration: The Xbox app is deeply integrated into Windows 11, providing easy access to the extensive game library for Game Pass subscribers.
    • Game Mode: The updated Game Mode in Windows 11 optimises performance by concentrating system resources on the game, reducing the utilisation of background applications to free up CPU for better performance.
    • DirectX 12 Ultimate: Provides a visual uplift for games with features like ray tracing for realistic lighting, variable-rate shading for better performance, and mesh shaders for more complex scenes.
    • Security and Performance Improvements:
    • Enhanced Security: Windows 11 features enhanced security protocols, including more secure and reliable connection methods, advanced network security (encryption, firewall protection), and built-in Virtual Private Network (VPN) protocols. It supports Wi-Fi 6, WPA3, encrypted DNS, and advanced Bluetooth connections.
    • TPM 2.0: Windows 11 includes enhanced security by leveraging the Trusted Platform Module (TPM) 2.0, an important building block for security-related features such as Windows Hello and BitLocker.
    • Windows Hello: Provides a secure and convenient sign-in, replacing passwords with stronger authentication methods based on a PIN or biometrics (face or fingerprint recognition).
    • Smart App Control: This feature provides an extra layer of security by only allowing trusted, reputable applications to run on the Windows 11 PC.
    • Increased Speed and Efficiency: Windows 11 is designed to better process information in the background, leading to a smoother overall user experience. Less powerful devices (with less RAM or limited CPU power) might even feel a noticeable increase in performance.
    • Faster Wake-Up: Windows 11 also promises a faster wake-up from sleep mode.
    • Smaller Update Sizes: Feature and quality updates are smaller and are applied in the background, reducing disruption.
    • Latest Support: As the newest version, Windows 11 benefits from continuous development, including monthly bug fixes, new storage alerts, and feature improvements like Windows Spotlight. This ensures the device remains fully protected and open to future upgrades.

    Windows 11 is presented as more than just an incremental upgrade; it is a platform designed for a “hybrid world” and offers “impressive improvements” that “accelerate device performance.” The integration of Android applications, new widgets, advanced security features, and next-generation gaming technologies like Auto HDR and DirectStorage (even if DirectStorage is coming to Windows 10, the full package is in Windows 11), collectively paint a picture of an operating system that is being actively developed with future computing trends in mind. Its continuous updates and development cement its position as a long-term supported platform within the Microsoft ecosystem. For users who want to leverage the latest technologies, integrate their mobile experiences, benefit from ongoing feature development, or simply ensure their system remains current and secure for the foreseeable future, upgrading to Windows 11 is a clear strategic choice. It represents an investment in future productivity, entertainment, and security, not just a necessary reaction to the Windows 10 end-of-life.

    Upgrade Considerations: Performance on Older Hardware, Changes in User Experience

    While Windows 11 is optimised for performance and may even speed up less powerful devices, it’s important to manage expectations. An older PC that just meets the minimum requirements may not deliver the same “accelerated user experience” as a brand new device designed for Windows 11.

    Users should also be prepared for changes to the user interface and workflow. While many find the new design “simple” and “clean,” critics have pointed to changes like the limitations in customising the taskbar and the difficulty in changing default apps as potential steps backwards from Windows 10. A period of adjustment to the new layout and navigation should be expected.

    Table: Windows 11 vs. Windows 10: Key Feature Upgrades

    Feature Category | Windows 10 | Windows 11
    User Interface | Traditional Start menu, Live Tiles | Centred Start menu, Widgets panel, new Snap Layouts
    Security | Basic security, no TPM 2.0 requirement | TPM 2.0 requirement, Windows Hello, Smart App Control, improved network protocols
    Gaming | Limited gaming features, no native DirectStorage | DirectStorage (requires NVMe SSD), Auto HDR, enhanced Game Mode, Xbox Game Pass integration, DirectX 12 Ultimate
    App Compatibility | No native Android app integration | Native Android app integration via the Microsoft Store
    Collaboration | Teams app as a separate install, more difficult setup | Deep Microsoft Teams integration with the taskbar
    Performance | Standard background process management | Better background processing, potential performance boost on less powerful devices, faster wake-up
    Support | Support ends 14 October 2025 | Continuous support, monthly bug fixes, new features

    Exploring Alternatives for Incompatible Hardware (and Beyond)

    image 117

    For users whose current hardware doesn’t meet the strict Windows 11 requirements, or for those simply looking for a different computing experience, there are several viable and attractive alternatives. These options can breathe new life into older machines, offer different philosophies on privacy and customisation, or cater to specific professional needs.

    The Extended Security Updates (ESU) Programme: A Short-Term Fix

    What ESU Offers and Its Critical Limitations: The Windows 10 Extended Security Updates (ESU) programme is designed to provide customers with an option to continue receiving security updates for their Windows 10 PCs after the end-of-support date. Specifically, it delivers “critical and important security updates” as defined by the Microsoft Security Response Center (MSRC) for Windows 10 version 22H2 devices. This programme aims to mitigate the immediate risk of malware and cybersecurity attacks for those not yet ready to upgrade.

    • Critical Limitations: It’s important to understand that ESU does not provide full continuation of support for Windows 10. It explicitly excludes:
    • New features.
    • Non-security, customer-requested updates.
    • Design change requests.
    • General technical support. Support is only provided for issues directly related to ESU licence activation, installation, and any regressions caused by ESU itself.

    Cost and Programme Duration: The ESU programme is a paid service. For individual consumers, Microsoft offers a few sign-up options:

    • At no extra cost if you sync your PC’s settings to a Microsoft account.
    • Cashing in 1,000 Microsoft Rewards points.
    • A one-time purchase of $30 (or local currency equivalent) plus applicable tax.

    All of these sign-up options provide extended security updates until the 13th of October 2026. You can sign up for the ESU programme at any time until its official end on the 13th of October 2026. A single ESU licence can be used on up to 10 devices.

    The ESU programme is presented as an “option to extend usage” or “extra time before transitioning to Windows 11,” but it only delivers security updates and offers no new features, non-security updates, or general technical support. This means that while the immediate security risk is mitigated, the underlying issues with software incompatibility, lack of performance optimisation, and declining vendor support described earlier will persist and likely worsen over time. The operating system becomes a stagnant, patched version of Windows 10, increasingly incompatible with modern software and hardware. The ESU programme is therefore a temporary fix, not a sustainable long-term solution. It is best suited for users who truly need a short grace period (up to one year) to save up for new hardware, plan a more extensive migration, or manage a critical business transition. It should not be viewed as a viable strategy to indefinitely continue using Windows 10, as it merely defers the inevitable need to move to a fully supported and evolving operating system.

    Embracing the Open Road: Linux Distributions

    image 118

    For many users with Windows 11 incompatible hardware, or for those looking for greater control, privacy, and performance, Linux offers a robust and diverse ecosystem of operating systems.

    Why Linux? Advantages for Performance, Security, and Customisation.

    • Free and Open Source: The vast majority of Linux distributions are completely free, and nearly all their components are open source. This fosters transparency, community development, and eliminates licensing fees.
    • Performance on Older Hardware: A significant advantage of many Linux distributions is their ability to run efficiently on older computers with limited RAM or slower processors. They are often streamlined to consume fewer resources than Windows, effectively “resurrecting” seemingly obsolete machines and making them feel snappy.
    • Security: Linux generally boasts a strong security posture due to its open-source nature (allowing for widespread inspection and rapid patching), robust permission systems, and a smaller target for malware compared to Windows.
    • Customisation: Linux offers unparalleled customisation options for the user interface, desktop environment, and overall workflow, allowing users to precisely tailor their computing experience to their preferences.
    • Stability and Reliability: Many distributions are known for being “dependable” and requiring “very little maintenance,” benefiting from the robustness of their underlying Linux architecture.
    • Community Support: The Linux community is vast, active, and generally welcoming, offering extensive online resources, forums, and willing assistance for new users.
    • Dual Boot Option: Users can easily install Linux alongside their Windows or macOS system, creating a dual-boot setup that allows them to choose the operating system to use at each startup. This is ideal for testing or for users who need access to both environments.

    Choosing Your Linux Companion: Tailored Recommendations for Every User.

    • For Windows Converts and Daily Use:
    • Linux Mint (XFCE Edition): This distribution has long been a favourite among Windows converts due to its traditional desktop layout. It is designed to be straightforward and intuitive, making users feel “at home” quickly. Linux Mint includes all the essentials out-of-the-box, such as a web browser, media player, and office suite, making it ready to use without extensive setup. It is described as very user-friendly, highly customisable, and “incredibly fast.”
    • Zorin OS Lite: Zorin OS Lite stands out for its balance of performance and aesthetics. It has a polished interface that closely resembles Windows, making the transition easy for former Windows users. Even on systems as old as 15 years, Zorin OS Lite provides a surprisingly modern experience without taxing system resources. It comes with essential apps and offers “Windows app support,” allowing users to run many Windows applications.
    • For Gamers and Power Users:
    • Pop!_OS: Promoted for STEM professionals and creators, Pop!_OS also provides an “amazing gaming experience.” Key features include “Hybrid Graphics” (allowing users to switch between battery-saving and high-power GPU modes or run individual apps on GPU power) and strong, out-of-the-box support for popular gaming platforms like Steam, Lutris, and GameHub. It offers a simple and colourful layout.
    • Fedora (Workstation/Games Lab): Fedora Workstation (with GNOME) is the flagship edition, and Fedora also offers “Labs,” such as the “Games” Lab, which is a collection and showcase of games available in Fedora. Fedora tends to keep its kernel and graphics drivers very up to date, which is a significant advantage for gaming performance and compatibility. AMD graphics cards are typically “plug-and-play” on modern Linux distributions like Fedora. While Nvidia cards require “a bit of work,” most major distributions, including Fedora, provide straightforward ways to install Nvidia drivers directly from their software centres.
    • General Linux Gaming: Gaming on Linux has “infinitely improved” since 2017. Most Linux distributions now perform great for gaming as long as you install Steam and other launchers like Heroic, which leverage compatibility layers like Proton/Proton-GE (a minimal installation sketch follows after this list). Users report being able to play “everything from old Win95 or DOS games all the way up to the latest releases.”
    • For Reviving Older Hardware (Low-spec PCs):
    • Puppy Linux: Designed to be extremely small, fast, and portable, Puppy Linux often runs entirely from RAM, allowing it to boot quickly and operate smoothly even on machines that seem hopelessly outdated. Despite its small size, it includes a complete set of applications for browsing, word processing, and media playback.
    • AntiX Linux: A no-frills distribution specifically designed for low-spec hardware. It is based on Debian but strips away the heavier desktop environments in favour of extremely lightweight window managers (such as IceWM and Fluxbox), keeping resource usage incredibly low (often under 200 MB of idle RAM). Despite its minimalism, AntiX remains surprisingly powerful and stable for daily tasks.
    • Other Lightweight Options: Linux Lite, Bodhi Linux, LXLE Linux, Tiny Core Linux, and Peppermint OS are also mentioned as excellent choices for older or low-spec hardware.
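
    To make the gaming setup mentioned above concrete, here is a minimal sketch for an Ubuntu-family distribution. The package name and the Flathub application ID are assumptions that may differ on your distribution, and Flatpak with the Flathub remote is assumed to be configured.

    # Install Steam from the distribution repositories (package name may vary, e.g. steam or steam-installer)
    sudo apt install steam

    # Install the Heroic Games Launcher from Flathub
    flatpak install flathub com.heroicgameslauncher.hgl

    Proton itself is then switched on from Steam’s settings (the Compatibility / Steam Play section), after which many Windows titles can be installed and run much like native ones.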

    Software Ecosystem: Office Suites, Creative Tools, and Running Windows Apps.

    • Office Suites:
    • LibreOffice: This is the most popular free and open-source office suite available for Linux. It is designed to be compatible with Microsoft Office/365 files, handling popular formats such as .doc, .docx, .xls, .xlsx, .ppt, and .pptx.
    • Compatibility Nuances: While it is generally compatible with simple documents, users should be aware that the “translation” between LibreOffice’s Open Document Format and Microsoft’s Office Open XML format is not always perfect. This can lead to imperfections, especially with complex formatting, macros, or when documents are exchanged and modified multiple times. Installing the Microsoft Core Fonts on Linux can significantly improve compatibility (example installation commands follow after this list). For critical documents, users can first test LibreOffice on Windows or use the web version of Microsoft 365 to double-check compatibility before sharing.
    • Creative Tools:
    • GIMP (GNU Image Manipulation Program): A powerful, free, and open-source raster graphics editor (often considered an alternative to Adobe Photoshop). GIMP provides advanced tools for high-quality photo manipulation, retouching, image restoration, creative compositions, and graphic design elements like icons. It is cross-platform, available for Windows, macOS, and Linux.
    • Inkscape: A powerful, free, and open-source vector graphics editor (similar to Adobe Illustrator). Inkscape specialises in creating scalable graphics, making it ideal for tasks like logo creation, intricate illustrations, and vector-based designs where precision and quality-lossless scalability are paramount. It is also cross-platform.
    • Running Windows Applications (Gaming and General Software):
    • Wine (Wine Is Not an Emulator): A foundational compatibility layer that allows Windows software (including many older games and general applications) to run directly on Linux-based operating systems.
    • Proton: Developed by Valve in collaboration with CodeWeavers, Proton is a specialised compatibility layer built on a patched version of Wine. It is specifically designed to improve the performance and compatibility of Windows video games on Linux, integrating key libraries like DXVK (for translating Direct3D 9, 10, 11 to Vulkan) and VKD3D-Proton (for translating Direct3D 12 to Vulkan). Proton is officially distributed via the Steam client as “Steam Play.”
    • ProtonDB: An unofficial community website that crowdsources and displays data on the compatibility of various game titles with Proton, providing a rating scale from “Borked” (doesn’t work) to “Platinum” (works perfectly).
    • Proton’s Advantages over Pure Wine: Proton is a “tested distribution of Wine and its libraries,” offering a “nice overlay” that helps configure everything to “just work” for many games. It automatically handles dependencies and leverages performance-enhancing translation layers.
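
    Tying the points above together, on an Ubuntu- or Debian-based system the office suite, fonts, creative tools, and Wine can typically be installed straight from the standard repositories. The package names below are assumptions based on Ubuntu/Debian archives: the Microsoft Core Fonts installer lives in the multiverse/contrib component and asks you to accept an EULA, and Wine may be packaged as wine, wine64, or winehq-stable depending on the repository.

    # Office suite plus the Microsoft Core Fonts for better .docx/.xlsx fidelity
    sudo apt install libreoffice ttf-mscorefonts-installer

    # Creative tools: raster (GIMP) and vector (Inkscape) editors
    sudo apt install gimp inkscape

    # Wine compatibility layer for many Windows applications
    sudo apt install wine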

    Historically, Linux has largely been dismissed as a viable platform for gaming. Today, however, the picture is of a dramatically improved and increasingly competitive Linux gaming environment: nearly all Linux distributions have become vastly better at gaming since 2017. This improvement is directly tied to Valve’s significant investment in Proton, which has been a game-changer for running Windows titles. The emergence of gaming-focused distributions like Pop!_OS and Fedora’s “Games” Lab, along with an active community around ProtonDB, signals a deliberate and successful effort to make Linux a strong contender for gamers. It’s no longer just about getting games to run but about achieving an “amazing gaming experience” with easy, great performance. For gamers with Windows 11 incompatible hardware, Linux is no longer a last resort but a genuinely competitive and often superior alternative for many titles, especially for those willing to engage with the community and learn a few new tools. This shift is a significant development, challenging the long-held belief that Windows is the only gaming OS.

    Key Linux Caveats:

    • Learning Curve: While distributions like Linux Mint and Zorin OS Lite are designed to be friendly for Windows converts, there can still be an initial learning curve for users completely new to the Linux environment. This often involves understanding package managers, file systems, and different approaches to software installation.
    • Hardware Driver Support: Modern Linux distributions have vastly improved hardware detection and driver support (e.g., AMD graphics cards are often plug-and-play, and Nvidia drivers are easily available via software tools). However, very new or niche hardware components may still require manual driver installation or troubleshooting, which can be a barrier for less technical users.
    • Gaming Anti-Cheat Limitations: A significant drawback for multiplayer gaming is that any game that implements kernel-level anti-cheat software will typically not work on Linux. This is because the creators of such anti-cheat systems are often unwilling to support Linux with user-level anti-cheat, citing concerns about preventing cheating. Games like Apex Legends have removed Linux support for this reason. This is a critical limitation for users whose primary gameplay involves such titles.

    The entire success and rapid evolution of Linux as a viable desktop OS, particularly in areas like gaming (Proton, DXVK, VKD3D-Proton), is largely attributable to its open, community-driven development model, often amplified by corporate support (e.g., Valve’s investment in Proton). Unlike Windows’s centralised, proprietary development, Linux benefits from a distributed network of developers, which allows for rapid iteration, specialised forks (like Proton GE), and direct feedback from the community (like ProtonDB). This model fosters tremendous flexibility and often bleeding-edge performance, as developers can quickly address issues and implement new technologies. However, it also means that support for highly proprietary or deeply integrated features (such as kernel-level anti-cheat) depends on the willingness of external, often profit-driven, developers to adapt their software, leading to the anti-cheat limitation described above. Users embracing Linux are entering a dynamic, evolving ecosystem that offers unparalleled flexibility, privacy, and often superior performance on older hardware. However, it comes with an implicit understanding that while much is delivered out-of-the-box, specific challenges (such as certain proprietary software or anti-cheat) may require a degree of self-reliance, engagement with community resources, or acceptance of limitations. This highlights a fundamental philosophical difference in operating system development and support compared to the traditional proprietary model.

    Table: Recommended Linux Distributions for Different User Profiles

    User Profile | Recommended Distributions | Key Advantages | Key Caveats/Limitations
    Windows Converts / Daily Use | Linux Mint (XFCE Edition), Zorin OS Lite | Friendly interface, out-of-the-box apps, Windows app support (Zorin), snappy performance | Initial learning curve; Zorin OS Lite has a more polished interface than some other lightweight distros
    Gamer / Power User | Pop!_OS, Fedora (Workstation/Games Lab) | Gaming optimisations (Hybrid Graphics, Steam/Lutris/GameHub), up-to-date kernel/drivers, AMD plug-and-play | Anti-cheat issues in some online games; Nvidia driver installation may require “a bit of work”
    Reviving Older Hardware / Low-Spec PC | Puppy Linux, AntiX Linux, Linux Lite, Bodhi Linux, LXLE Linux, Tiny Core Linux, Peppermint OS | Low resource usage, snappy performance even on very old hardware, Puppy Linux runs from RAM, AntiX is minimalist | More minimalist UI, may require more technical knowledge for setup

    Cloud-Powered Rebirth: ChromeOS Flex

    image 119

    ChromeOS Flex is Google’s solution for transforming older Windows, Mac, or Linux devices into secure, cloud-based machines, offering many of the features available on native ChromeOS devices. It is particularly appealing for organisations and individuals looking to extend the life of existing hardware while benefiting from a modern, secure, and easy-to-manage operating system.

    Transforming an Old PC into a Secure, Cloud-Based Device.

    ChromeOS Flex allows you to install a lightweight, cloud-focused operating system on a variety of existing devices, including older Windows and Mac PCs. This can effectively “resurrect” older machines, making them run significantly faster and more responsively than they would with an outdated or resource-heavy operating system. It provides a familiar, simple, and web-centric computing experience that leverages Google’s cloud services.

    System Requirements and Installation Process.

    Minimum Requirements for ChromeOS Flex: While ChromeOS Flex can run on uncertified devices, Google does not guarantee performance, functionality, or stability on such systems. For an optimal experience, ensure your device meets the following minimum requirements:

    • Architecture: Intel or AMD x86-64-bit compatible device (it will not run on 32-bit CPUs).
    • RAM: 4 GB.
    • Internal Storage: 16 GB.
    • Bootable from USB: The system must be capable of booting from a USB drive.
    • BIOS: Full administrator access to the BIOS is required, as changes may need to be made to boot from the USB installer.
    • Processor and Graphics: Components manufactured before 2010 may result in a poor experience. Specifically, Intel GMA 500, 600, 3600, and 3650 graphics chipsets do not meet ChromeOS Flex performance standards.

    Installation Process: The ChromeOS Flex installation process typically involves two main steps:

    • Creating a USB Installer: You will need a USB drive of 8 GB or more (all contents will be erased). The recommended method is to use the “Chromebook Recovery Utility” Chrome browser extension on a ChromeOS, Windows, or Mac device. Alternatively, you can download the installer image directly from Google and use a tool such as the dd command-line utility on Linux (see the example after these steps).
    • Booting and Installation: Boot the target device using the USB installer you created. You can choose to either install ChromeOS Flex permanently to the device’s internal storage or temporarily run it directly from the USB installer to test compatibility and performance.
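
    For the dd route mentioned in the first step, a minimal sketch is shown below. The image filename and /dev/sdX are placeholders; writing to the wrong device will destroy its data, so always confirm the target with lsblk first.

    # Identify the USB drive (e.g. /dev/sdX) before writing anything
    lsblk

    # Write the downloaded ChromeOS Flex installer image to the USB drive (erases everything on it)
    sudo dd if=chromeos_flex_installer.bin of=/dev/sdX bs=4M status=progress conv=fsync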

    Benefits: Robust Security, Simplicity, and Performance on Lower-Spec Hardware.

    • Robust Security: ChromeOS Flex inherits many of ChromeOS’s strong security features, making it a highly secure option for older hardware:
    • Read-Only OS: The core operating system is read-only, and ChromeOS Flex does not run traditional Windows executable files (.exe and the like), which are a common vehicle for viruses and ransomware. This significantly reduces the attack surface.
    • Sandboxing: The system’s architecture is segmented, with each webpage and app running in a confined, isolated environment. This ensures that malicious apps and files are always isolated and cannot access other parts of the device or data.
    • Automatic Updates: ChromeOS Flex receives full updates every 4 weeks and minor security fixes every 2-3 weeks. These updates operate automatically and in the background, ensuring constant protection against the latest threats without impacting user productivity.
    • Data Encryption: User data is automatically encrypted at rest and in transit, protecting it from unauthorised access even if the device is lost or stolen.
    • UEFI Secure Boot Support: While ChromeOS Flex devices do not contain a Google security chip, their bootloader has been checked and approved by Microsoft to optionally support UEFI Secure Boot. This can maintain the same boot security as Windows devices, preventing unknown third-party operating systems from being run.
    • Simplicity and Performance: ChromeOS Flex provides a streamlined, minimalist, and intuitive user experience. Its “cloud-first” design means it relies less on local processing power, allowing it to perform exceptionally well and fast even on older, low-spec hardware. This makes it an excellent choice for users focused primarily on web browsing, cloud-based productivity, and lightweight computing tasks.

    Limitations: Offline Capabilities, App Ecosystem, and Hardware-Level Security Nuances.

    While ChromeOS Flex offers many advantages, it’s important to be aware of its limitations, especially compared to a full ChromeOS or traditional desktop operating systems:

    • Offline Capabilities: As a cloud-focused OS, extensive offline work can be limited without specific web applications that support offline functionality.
    • App Ecosystem:
    • Google Play and Android Apps: Unlike full ChromeOS devices, ChromeOS Flex has limited support for Google Play and Android apps. Only some Android VPN apps can be deployed. This means the vast ecosystem of Android apps is largely unavailable.
    • Windows Virtual Machines (Parallels Desktop): ChromeOS Flex does not support running Windows virtual machines using Parallels Desktop.
    • Linux Development Environment: Support for the Linux development environment in ChromeOS Flex varies depending on the specific device model.
    • Hardware-Level Security Nuances:
    • No Google Security Chip/Verified Boot: ChromeOS Flex devices do not contain a Google security chip, which means the full ChromeOS “verified boot” procedure (a hardware-based security check) is not available. While UEFI Secure Boot is an alternative, it “cannot provide the security guarantees of ChromeOS Verified Boot.”
    • Firmware Updates: Unlike native ChromeOS devices, ChromeOS Flex devices do not automatically manage and update their BIOS or UEFI firmware. These updates must be supplied by the original equipment manufacturer (OEM) of the device and manually managed by device administrators.
    • TPM and Encryption: While ChromeOS Flex automatically encrypts user data, not all ChromeOS Flex devices have a supported Trusted Platform Module (TPM) to protect the encryption keys at a hardware level. Without a supported TPM, the data is still encrypted but may be more susceptible to attack. Users should check the certified models list to confirm TPM support.

    ChromeOS Flex is presented as a highly secure alternative to an unsupported Windows 10, boasting features like a read-only OS, sandboxing, and automatic updates. However, it also details several security features that are either missing or limited compared to a native ChromeOS device: the lack of a Google security chip, the absence of a full ChromeOS Verified Boot (relying instead on the less robust UEFI Secure Boot), and the inconsistent presence of a supported TPM. This implies that while Flex offers significant security improvements over an unpatched Windows 10, it doesn’t achieve the top-tier, hardware-level security found in purpose-built Chromebooks. Users should be aware of this trade-off, understanding that while their older hardware gets a new lease of life and better protection, it won’t have the identical level of security as a newer, dedicated ChromeOS device.

    General Best Practices for Operating System Migration

    Regardless of the path you choose, the process of migrating an operating system requires careful planning and adherence to best practices to minimise risk and ensure a smooth transition.

    Data Backup

    Before any operating system change, including an upgrade or clean installation, it is crucial to perform a full system image backup. Data is vulnerable to unforeseen complications during the upgrade process, making a preventative backup a sound choice. This safeguards critical files, applications, and personalised settings, ensuring a smooth transition and the ability to restore your digital environment in the event of unexpected issues.

    You should use disk imaging technology, not just file copying. Operating systems like Windows are complex, and some data (e.g., passwords, preferences, app settings) exists outside of regular files. A full disk image copies every bit of data, including files, folders, programs, patches, preferences, settings, and the entire operating system, enabling a complete restoration of the system and applications if the move to a new operating system goes wrong. You should also remember to account for hidden partitions that may contain important system restore data.
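
    As one hedged illustration, a full-disk image can be taken with standard Linux tools from a live USB environment (so the source disk is not in use); the device names and paths below are placeholders, and dedicated imaging tools such as Clonezilla are usually more convenient.

    # Identify the source disk and the mounted backup destination
    lsblk

    # Image the whole source disk (assumed here to be /dev/sda) to a file on external storage
    sudo dd if=/dev/sda of=/mnt/backup/disk-image.img bs=4M status=progress conv=fsync

    # Or compress on the fly to save space
    sudo dd if=/dev/sda bs=4M status=progress | gzip -c > /mnt/backup/disk-image.img.gz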

    Software Compatibility Check

    Prior to migration, you should thoroughly check that all the applications and software you use are compatible with the new operating system. Incompatibility can lead to data loss, corruption, or inaccuracies, affecting the new system’s reliability and integrity. It is recommended to perform compatibility tests in a sandbox environment or a virtual machine to identify potential issues before the actual migration. The testing should cover various hardware configurations, software, and networks to ensure smooth operation.
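
    One quick way to trial a candidate system before committing is to boot its installation ISO in a throwaway virtual machine. The sketch below uses QEMU/KVM on an Ubuntu-family host and assumes a downloaded ISO named candidate-os.iso; VirtualBox or GNOME Boxes work equally well.

    # Install QEMU for x86 systems
    sudo apt install qemu-system-x86

    # Boot the ISO in a temporary VM with 4 GB of RAM and hardware virtualisation
    qemu-system-x86_64 -enable-kvm -m 4096 -cdrom candidate-os.iso -boot d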

    Driver Considerations

    A clean installation of an operating system will remove all drivers from the computer. While modern operating systems have enough generic drivers to get a basic system up and running, they will lack the specialised hardware drivers needed to run newer network cards, 3D graphics, and other components. It is recommended to have the drivers for key components like your network cards (Wi-Fi and/or wired) ready so that after the OS installation you can connect to the Internet to download the rest of the drivers. Drivers should be downloaded from the official websites of the hardware manufacturers to avoid performance issues or malware infections.
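
    If the destination system is Linux, a simple way to record which network and graphics hardware you have, and which kernel drivers are currently in use, is sketched below (standard pciutils/usbutils commands; the output will of course vary by machine).

    # List PCI devices with their kernel drivers, filtered to network and graphics controllers
    lspci -nnk | grep -iA3 'network\|ethernet\|vga\|3d'

    # List USB adapters (e.g. USB Wi-Fi dongles)
    lsusb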

    Phased Approach and Testing

    An effective migration strategy should include a phased approach, breaking the process down into manageable stages. Each phase should have clearly defined goals and a rollback strategy in case issues arise. Before the migration, thorough testing should be conducted to identify potential problems and adjust configurations. After the migration, intensive monitoring and “hyper-care” support are essential to resolve any issues quickly and ensure the system stabilises in the new environment.

    User Training (for Organisations)

    For organisations, deployment preparation should include providing contextual training for end-users to quickly familiarise employees with the new systems and tasks. Creating IT sandbox environments for new applications can provide hands-on training for end-users, enabling employees to learn by doing without the risks of using live software.

    Conclusion

    The end of support for Windows 10 on the 14th of October 2025 represents an unavoidable turning point for all users. Continuing to use an unsupported operating system brings serious and escalating risks to security, performance, and compliance, which will only worsen over time. Delaying the migration decision is not a saving, but a deferral of costs that could be significantly higher in the event of unplanned outages or security breaches.

    For the majority of users whose hardware meets the minimum requirements, the most logical and future-proof solution is to upgrade to Windows 11. This system not only offers continuity within the familiar Microsoft environment but also provides significant improvements in user interface, productivity (e.g., Snap Layouts, Teams integration), gaming features (DirectStorage, Auto HDR), and most importantly, security (TPM 2.0, Smart App Control). Many PCs that initially seem incompatible can be made ready for Windows 11 through simple setting changes in the BIOS/UEFI, avoiding unnecessary spending on new hardware. Windows 11 is an investment in long-term stability, performance, and access to the latest technologies.

    For those whose hardware doesn’t meet the Windows 11 requirements, or for users seeking alternative experiences, other equally valuable paths are available. The Extended Security Updates (ESU) programme for Windows 10 offers short-term security protection until October 2026, but this is only a temporary fix that does not address software compatibility issues and the lack of new features.

    Linux distributions provide a robust and flexible alternative, capable of breathing new life into older hardware. They offer high performance, unmatched customisation, strong security, and a rich ecosystem of free software (e.g., LibreOffice, GIMP, Inkscape). Thanks to the development of Proton, Linux has also become a surprisingly competitive gaming platform, although certain limitations (e.g., kernel-level anti-cheat) still exist. Distributions such as Linux Mint and Zorin OS Lite are ideal for those transitioning from Windows, while Pop!_OS and Fedora will cater to the needs of gamers and advanced users.

    ChromeOS Flex is another option that allows you to transform older computers into lightweight, secure, and cloud-based devices. This is an excellent solution for users who value simplicity, speed, and solid security, although it comes with certain limitations regarding offline capabilities and Android app access.

    Regardless of the choice, a proactive approach is key. Any migration should be preceded by a complete data backup, a thorough software compatibility check, and preparation of the necessary drivers. Adopting a phased approach with testing before and after the migration will minimise the risk of disruptions.

    The end of support for Windows 10 is not just the end of an era, but also an opportunity to modernise, optimise, and adapt your computing environment to individual needs and the challenges of the future. Making an informed choice of operating system in 2025 is crucial for your computer’s security, performance, and usability for years to come.

  • Build Your Own Fort Knox: The Complete Guide to Vaultwarden on a Private VPS

    Build Your Own Fort Knox: The Complete Guide to Vaultwarden on a Private VPS

    Introduction: Your Passwords Are Weak – Change That and Protect Yourself from Criminals

    Have you ever been treated by a website like a rookie in a basic training camp? “Your password is weak. Very weak.” The system claims you can’t use it because it’s under 20 characters, doesn’t contain a lowercase or uppercase letter, 5 special characters and 3 digits. And on top of that, it can’t be a dictionary word. Or, even worse, you’ve fallen into a loop of absurdity: you type in a password you are absolutely convinced is correct. The system says it’s not. You request a reset. You get a code you have to enter in 16 seconds, 3 of which have already passed. You type a new password. “You cannot use a password that is your current password.” The one the system just rejected. It’s a digital comedy of errors that no one finds funny.

    This daily struggle with authentication systems drives us to the brink of despair. It gets to the point where, like in a certain anecdote, the only way to meet security requirements is to change the name of your cat to “CapitalK97Yslash&7”. This is funny until we realise that our digital lives are based on similarly outlandish and impossible-to-remember constructions. The problem is that human memory is fallible. Even seemingly simple passwords, like “ODORF”, which a father once set as the admin password, can slip your mind at the least opportune moment, leading to blocked access to the family computer.

    In the face of these difficulties, many of us take shortcuts. We use the same, easy-to-remember passwords across dozens of services. We create simple patterns, like the name of a building with zeros instead of the letter “O”, which in one doctor’s office protected patient data and was known by 18 people. Such practices are an open invitation to cybercriminals. The problem, however, doesn’t lie solely in our laziness. It’s the systems with terrible user interfaces and frustrating requirements that actively discourage us from caring about security. Since current methods fail, there must be a better way. A way that is secure and convenient, and doesn’t require memorising 64-character passwords.

    Digital Safe on Steroids: Why a Password Manager is Your New Best Mate

    Before we dive into the world of self-hosting, it’s crucial to understand why a dedicated password manager is a fundamental tool for anyone who navigates the internet. It’s a solution that fundamentally changes the user’s relationship with digital security – from an antagonistic fight to a symbiotic partnership. Instead of being a problem, passwords become something that works in the background, without our effort.

    One Ring to Rule Them All (One Master Password)

    The basic concept of a password manager is brilliant in its simplicity: you only need to remember one, very strong master password (or, even better, a long phrase). This password acts as the key to an encrypted safe (called a “vault”), which stores all your other credentials. No more memorising dozens of logins.

    An Unbreakable Generator

    The greatest weakness of human passwords is their predictability. Password managers eliminate this problem with a built-in random password generator. Want a 100-character password made up of a random string of letters, numbers, and special characters? With a single click, the generator creates a long, complicated, and completely random password, such as X@Ln@x9J@&u@5n##BhfRe5^67gFdr. The difference in security between “Kitty123!” and such a random string of characters is astronomical – it’s like comparing a plywood door to the vault door of a bank.

    Convenience and Productivity (Autofill)

    Security that makes life difficult is rarely used. That’s why password managers focus on convenience. Their most important function is autofilling login forms in browsers and applications. When you visit a bank’s website, the manager automatically detects the login fields and offers to fill them with your saved data. This not only saves time but also eliminates the risk of typos. These minutes saved each day add up, genuinely increasing productivity.

    Device Syncing

    Your digital world isn’t limited to one device. A password manager ensures you have access to your vault from anywhere – on your laptop at work, on your tablet at home, and on your smartphone while travelling. All your data is synchronised, so a password saved on one device is immediately available on the others.

    Protection Against Phishing and Attacks

    Password managers offer subtle but powerful protection against phishing. The autofill function is tied to the specific URL of a website. If a cybercriminal sends you a link to a fake bank website that looks identical to the real one, the password manager won’t offer to autofill, because the URL will be different. This is an immediate warning sign. It also protects against “credential stuffing” attacks, where hackers test passwords stolen from one service on dozens of others. With a password manager, you can easily create a separate password for each website, bank, social media portal, email account, and so on. Even if someone steals data from Facebook, if you used that password exclusively for Facebook, the criminals won’t be able to log in to your bank or any other service with it.

    Security Audit

    Modern password managers act as a personal security auditor. They regularly scan your vault for weak, reused, or compromised passwords that have appeared in public data breaches. This allows you to proactively react and change threatened credentials.

    By automating the most difficult tasks – creating and remembering unique, strong passwords – a password manager removes the cognitive load and frustration. As a result, applying the best security practices becomes effortless, leading to a dramatic increase in your overall level of protection.

    Introducing Vaultwarden: Bitwarden for DIYers with a Heart for Privacy

    Now that we know what a powerful tool a password manager is, it’s time to choose the right one. There are many players on the market, but for privacy enthusiasts and DIYers, one project stands out in particular: Vaultwarden.

    Vaultwarden is an unofficial but fully functional server implementation of the popular Bitwarden password manager. It was written from scratch in the Rust programming language, and its main goal was to create an alternative that is incredibly lightweight and efficient. While the official, self-hosted version of Bitwarden requires 11 separate Docker containers to run and has significant hardware requirements, Vaultwarden runs in one neat container and consumes minimal resources. This means you can easily run it on a cheap mini-computer like a Raspberry Pi, an old laptop, or the smallest virtual machine in the cloud.
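
    To give a sense of how little is involved, here is a minimal sketch of starting Vaultwarden as a single container. The data directory, host port, and container name below are illustrative, not values prescribed by this guide:

    docker run -d \
      --name vaultwarden \
      --restart unless-stopped \
      -v /opt/vaultwarden/data:/data \
      -p 8080:80 \
      vaultwarden/server:latest

    The same result can be achieved with Docker Compose, which is the approach used later in this article when configuring the admin panel.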

    Most importantly, Vaultwarden is fully compatible with all official Bitwarden client applications – browser plugins, desktop applications, and mobile apps for Android and iOS. This means you get a polished and convenient user interface while maintaining full control over your server.

    However, the real “icing on the cake” and the reason the self-hosting community has fallen in love with Vaultwarden is the fact that it unlocks all of Bitwarden’s premium features for free. Choosing Vaultwarden is not just about saving money, but a conscious decision that perfectly fits the ethos of independence and control. It’s not a “worse substitute”, but for many conscious users, simply a better choice, because its features and distribution model are fully aligned with the values of the open-source world.

    The table below shows what you get by choosing Vaultwarden.

    Feature | Bitwarden (Free Plan) | Bitwarden (Premium Plan, approx. $10/year) | Vaultwarden (Self-hosted)
    --- | --- | --- | ---
    Unlimited passwords & devices | Yes | Yes | Yes
    Secure sharing (2 users) | Yes | Yes | Yes
    Basic 2FA (TOTP, Email) | Yes | Yes | Yes
    Advanced 2FA (YubiKey, FIDO2) | No | Yes | Yes
    Integrated Authenticator (TOTP) | No | Yes | Yes
    File attachments (up to 1GB) | No | Yes | Yes
    Emergency Access | No | Yes | Yes
    Vault health reports | No | Yes | Yes
    Additional users (e.g. for family) | No | No | Yes

    Of course, this freedom comes with responsibility. Vaultwarden is a community project, which means there is no official technical support. In case of problems, you rely on documentation and help from other users on forums. There may also be a short delay in compatibility after major updates to official Bitwarden clients before Vaultwarden developers adapt the code. You are your own administrator – that’s the price for complete control.

    The Power of Self-Hosting

    The decision to use Vaultwarden is inseparably linked to a broader concept: self-hosting. It’s an idea that shifts the paradigm from being a passive consumer of digital services to being their active owner. This is a fundamental change in the balance of power between the user and the technology provider.

    Full Data Control – Digital Sovereignty

    The main and most important advantage of self-hosting is absolute control over your own data. When you use a cloud service, your passwords, notes, and other sensitive information are stored on servers belonging to a corporation. In the case of self-hosting, your password vault physically resides on hardware that you control – whether it’s a server at home or a rented virtual machine. No one else has access to it. You are the guardian of your data, which is the essence of digital sovereignty.

    No More Vendor Lock-in

    By using cloud services, you are dependent on their provider. A company can raise prices, change its terms of service, limit functionality, or even go bankrupt, leaving you with a data migration problem. Self-hosting frees you from this “ecosystem lock-in.” Your service works for as long as you want, on your terms.

    Privacy

    In today’s digital economy, data is the new oil. Providers of free services often earn money by analysing user data, selling it to advertisers, or using it to train artificial intelligence models. When you self-host services, this problem disappears. Your data is not a commodity. You set the rules and you can be sure that no one is looking at your information for commercial purposes.

    Long-Term Savings

    The subscription model has become the standard in the software world. Although a single fee may seem low, the sum of annual costs for all services can be significant. Self-hosting requires an initial investment in hardware (you can often use an old computer or a cheap Raspberry Pi) and is associated with electricity costs, but it eliminates recurring subscription fees. In the long run, it is a much more economical solution.

    Customisation and Learning Opportunities

    Self-hosting is not only about practical benefits, but also a fantastic opportunity to learn and grow. It gives you full flexibility in configuring and customising services to your own specific needs. It is a satisfying journey that allows you to better understand how the technologies we use every day work.

    For a person concerned about the state of privacy on the internet, self-hosting is not a technical curiosity. It’s a logical and necessary step to regain control over your digital life.

    An Impenetrable Fortress: How a VPN Creates a Private Bridge to Your Password Vault

    Self-hosting Vaultwarden gives you control over your data, but how do you ensure secure access to it from outside your home? The simplest solution seems to be exposing the service to a public IP address and securing it with a so-called reverse proxy (e.g., Nginx Proxy Manager). This is a popular and good solution, but it has one drawback: your service is visible to the entire world. This means it is constantly being scanned by bots for vulnerabilities and weaknesses.

    However, there is a much more secure architecture that changes the security model from “defending the fortress” to “hiding the fortress”. It involves placing Vaultwarden behind a VPN server.

    What is a VPN and how does it work?

    A VPN, or Virtual Private Network, creates a secure, encrypted “tunnel” through the public internet. When your laptop or smartphone connects to your home VPN server (e.g., using the popular and modern WireGuard protocol), it virtually becomes part of your home local network. All communication is encrypted and invisible to anyone else, including your internet service provider or the operator of the public Wi-Fi network in a café.

    “VPN-Only” Architecture

    In this configuration, the server running Vaultwarden has no ports open to the public internet. From the perspective of the global network, it is completely invisible. The only publicly accessible element is the VPN server, which listens on one specific port.

    To access your password vault, you must first connect to the VPN server. After successful authorisation, your device is “inside” your private network and can freely communicate with the Vaultwarden server, just as if both devices were standing next to each other.
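
    To give you an idea of what this looks like in practice, a WireGuard server configuration for such a setup can be as small as the sketch below. The addresses, port, and key placeholders are examples only and must be replaced with your own values:

    [Interface]
    # The VPN server's address inside the tunnel
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    # Your laptop or phone
    PublicKey = <client-public-key>
    AllowedIPs = 10.8.0.2/32

    With the tunnel up, Vaultwarden only needs to be reachable on the server’s private address (here 10.8.0.1), so nothing of it is exposed to the public internet.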

    Layers of Security

    This approach creates three powerful layers of protection:

    1. Invisibility: This is the most important advantage. Cybercriminals and automated scanners cannot attack a service they cannot see. By eliminating the public access point to Vaultwarden, you reduce the attack surface by over 99%.
    2. VPN Encryption: All communication between your device and the server is protected by strong VPN encryption. This is an additional layer of security, independent of the HTTPS encryption used by the Vaultwarden application itself.
    3. Bitwarden End-to-End Encryption: Even in the extremely unlikely scenario that someone manages to break through the VPN security and listen in on network traffic, your vault data remains secure. It is protected by end-to-end encryption (E2EE), which means it is encrypted on your device using your master password before it is even sent to the server. An attacker would only see a useless, encrypted “blob” of data.

    For the hobbyist administrator, this is a huge simplification. Instead of worrying about securing every single hosted application, you focus on maintaining the security of one, solid entry point – the VPN server. This makes advanced security achievable without having to be a cybersecurity expert.
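
    In practice, that single entry point can be enforced with a simple host firewall that rejects everything except the VPN port. A minimal sketch, assuming ufw and WireGuard listening on UDP port 51820:

    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow 51820/udp
    # keep SSH reachable if you manage the host remotely (or move it inside the VPN)
    sudo ufw allow 22/tcp
    sudo ufw enable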

    More Than You Think: What You Can Store in Your Vaultwarden Vault

    The true power of Vaultwarden extends far beyond storing passwords for websites. Thanks to its flexible structure and support for various data types, it can become your single, trusted “source of truth” for practically any sensitive information in your life. It’s not a password manager, it’s a secret manager.

    Standard Data Types

    Vaultwarden, just like Bitwarden, offers several predefined entry types to help you organise your data:

    • Logins: The obvious foundation – they store usernames, passwords, and also codes for two-factor authentication (TOTP). When it comes to TOTP, however, I strongly advise against keeping the codes in the same application as your logins and passwords – I’ll explain why in a moment.
    • Cards: A secure place for credit and debit card details. This makes online shopping easier, eliminating the need to manually enter card numbers and CVV codes.
    • Identities: Used to store personal data such as full name, addresses (billing, shipping), phone numbers, and email addresses. Ideal for quickly filling out registration forms.
    • Secure Notes: An encrypted text field for any information you want to protect.

    Creative Uses of Secure Notes and Custom Fields

    The real magic begins when we start creatively using secure notes, custom fields, and – crucially – file attachments (a premium feature in Bitwarden that is free in Vaultwarden). Your vault can become a digital “survival pack”, containing:

    • Software license keys: No more searching through old emails for your Windows or Office key.
    • Wi-Fi network passwords: Store passwords for your home network, work network, or a friend’s network.
    • Hardware information: Serial numbers, purchase dates, and warranty information for your electronics – invaluable in case of a breakdown or theft.
    • Medical and insurance data: Policy numbers, contact details for your insurer, a list of medications you take.
    • Answers to “security questions”: Instead of providing real data (which can often be found on the internet), generate random answers to questions like “What was your mother’s maiden name?” and save them in the manager.
    • Document data: Passport numbers, ID card numbers, driving license numbers.
    • Hardware configurations: Notes on the configuration of your router, home server, or other network devices.
    • Encrypted attachments: This is a game-changer. You can securely store scans of your most important documents: passport, birth certificate, employment contracts, and even your will. In case of a fire, flood, or theft, you have instant access to digital copies.

    Comparing this to the popular but dangerous practice of keeping passwords in a notes app (even an encrypted one), the advantage of Vaultwarden is crushing. Notes apps do not offer browser integration, a password generator, a security audit, or phishing protection. They are simply a digital notepad, while Vaultwarden is a specialised, fortified fortress.

    Magic at Your Fingertips: Browser Plugins and Mobile Apps

    All this powerful, secure server infrastructure would be useless if using it every day were cumbersome. Fortunately, the ecosystem of Bitwarden clients makes interacting with your private Vaultwarden server smooth, intuitive, and practically invisible. It is this seamless client integration that is the bridge between advanced security and everyday convenience.

    Configuration for Self-hosting: The First Step

    Before you start, you must tell each client application where your server is located. This is a crucial step. In both the browser plugin and the mobile app, before logging in, you need to go into the settings (usually under the cogwheel icon) and in the “Server URL” or “Self-hosted environment” field, enter the address of your Vaultwarden instance (e.g., https://your.domain.com). Remember that for this to work from outside your home, you must first configure your subdomain, or be connected to the VPN server.

    Browser Plugins: Your Personal Assistant

    The Bitwarden plugin, which you will use to connect to your Vaultwarden server (available for Edge, Chrome, Firefox, Safari, and others), is the command centre in your browser.

    • Autofill in practice: When you go to a login page, a small Bitwarden icon will appear on the form fields, and the plugin’s icon in the toolbar will show the number of credentials saved for that site. Clicking on it allows you to fill in the login and password with one motion.
    • Password generator at hand: When creating a new account, you can click the plugin icon, go to the generator, create a strong password, and immediately paste it into the appropriate fields on the site.
    • Automatic saving: When you log in to a site using credentials that you don’t yet have in your vault, the plugin will display a discreet bar at the top of the screen asking if you want to save them.
    • Full access to the vault: From the plugin, you can view and edit all your entries, copy passwords, 2FA codes, and also manage folders without having to open a separate website.

    Mobile Apps (Android & iOS): Security in Your Pocket

    Bitwarden mobile apps transfer all functionality to smartphones, integrating deeply with the operating system.

    • Biometric login: Instead of typing a long master password every time, you can unlock your vault with your fingerprint or a face scan (Face ID).
    • Integration with the autofill system: Both Android and iOS allow you to set Bitwarden as the default autofill service. This means that when you open a banking app, Instagram, or any other app that requires a login, a suggestion to fill in the data directly from your vault will appear above the keyboard.
    • Offline access: Your encrypted vault is also stored locally on the device. This means you have access to it even without an internet connection (and without a VPN connection). You can view and copy passwords. Synchronisation with the server will happen automatically as soon as you regain a connection.

    After the initial effort of configuring the server, daily use becomes pure pleasure. All the complexity of the backend – the server, containers, VPN – disappears, and you only experience the convenience of logging in with a single click or a tap of your finger. This is the ultimate reward for taking back control.

    Storing TOTP Codes Directly in Vaultwarden and Why It’s a Bad Idea

    One of the tempting premium features that Vaultwarden provides for free is the ability to store two-factor authentication (TOTP) codes directly in the same entry as the login and password. At first glance, this seems incredibly convenient – all the data needed to log in is in one place. The browser plugin can automatically fill in not only the password but also copy the current 2FA code to the clipboard, shortening the entire process to a few clicks. No more reaching for your phone and rewriting six digits under time pressure.

    However, this convenience comes at a price, and that price is the weakening of the fundamental principle on which two-factor authentication is based. The idea of 2FA is to combine two different types of security: something you know (your password) and something you have (your phone with the code-generating app). By storing both of these elements in the same digital safe, which is Vaultwarden, you reduce them to a single category: things you know (or can find out by breaking the master password). This creates a single point of failure. If an attacker manages to get your master password to the manager in any way, they get immediate access to both authentication factors. The security barrier that was supposed to require compromising two separate systems is reduced to one.

    Therefore, although storing TOTP codes in a password manager is still much better than not using 2FA at all, from the point of view of maximum security, it is recommended to use a separate, dedicated application for this purpose (such as Aegis Authenticator, Authy, or Google Authenticator) installed on another device – most often a smartphone. This way, even if your password vault is compromised, your accounts will still be protected by a second, physically separate layer of security.

    Configuring the Admin Panel

    Regardless of whether you are the captain of a Docker container ship or a traditionalist who nurtures system services, at some point you will want to look behind the scenes of your Vaultwarden. This is what the admin panel is for – a secret command centre from which you can manage users, view diagnostics, and configure global server settings. By default, however, it is disabled, because like any good fortress, it doesn’t open its gates to just anyone. If you try to open the panel anyway, you will be greeted with an error message:

    “The admin panel is disabled, please configure the ‘ADMIN_TOKEN’ variable to enable it”

    To activate it, you must set a special “key” – the administrator token.

    Scenario 1: Docker Lord

    If you ran Vaultwarden using Docker Compose (which is the most popular and convenient method), you set the admin panel key using the ADMIN_TOKEN environment variable. However, for security reasons, you should not use plain, open text there. Instead, you generate a secure Argon2 hash for the chosen password, which significantly increases the level of protection.

    Here is the complete and correct process:

    1. Generate the password hash. First, come up with a strong password that you will use to log in to the admin panel. Then, using the terminal on the server, execute the command built into Vaultwarden to create its secure hash:
    docker exec -it vaultwarden /vaultwarden hash

    After entering the password twice, copy the entire generated string that starts with $argon2id$.

    2. Update the docker-compose.yml file. Now add the prepared hash to docker-compose.yml. There are two critical rules here:
    • Every dollar sign $ in the hash must be doubled (e.g., $argon2id$ becomes $$argon2id$$) to avoid errors in Docker Compose.
    • To double the dollar signs automatically, you can pipe the hash through sed:
    echo '$argon2id$v=1...REMAINDER_OF_TOKEN' | sed 's#\$#\$\$#g'

    The value of ADMIN_TOKEN must not be wrapped in apostrophes or quotes.

    Correct configuration:

    services:
      vaultwarden:
        image: vaultwarden/server:latest
        container_name: vaultwarden
        restart: unless-stopped
        volumes:
          - ./data:/data
        ports:
          - "8080:80"
        environment:
          # Example of a hashed and prepared token:
          - ADMIN_TOKEN=$$argon2id$$v=19$$.....
    
    3. Apply the changes and log in. After saving the file, stop and recreate the container with the commands:

    docker-compose down
    docker-compose up -d

    Your admin panel, available at https://your.domain.com/admin, will now ask for a password. To log in, type the password you chose in the first step, not the generated hash.

    Scenario 2: Traditionalist with a system service (systemd)

    If you decided to install Vaultwarden as a native system service, for example using systemd, the configuration looks a bit different, but the idea remains the same. Instead of the docker-compose.yml file, environment variables are most often stored in a dedicated configuration file. This is usually an .env file or similar, which is pointed to by the service file.

    For example, you can create a file /etc/vaultwarden.env and put your token in it:

    ADMIN_TOKEN=your_other_very_secure_token

    Then you must make sure that the vaultwarden.service unit file (usually located in /etc/systemd/system/) contains a line that loads this file with variables: EnvironmentFile=/etc/vaultwarden.env. After making the changes, you must reload the systemd daemon configuration (sudo systemctl daemon-reload), and then restart the Vaultwarden service itself (sudo systemctl restart vaultwarden). From now on, the admin panel at https://your.domain.com/admin will be active and secured with your new, shiny token.
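
    A minimal sketch of that wiring, assuming the binary is installed at /usr/bin/vaultwarden and runs as a dedicated vaultwarden user (adjust the paths and user to your actual installation), could look like this inside the unit file:

    [Service]
    User=vaultwarden
    EnvironmentFile=/etc/vaultwarden.env
    ExecStart=/usr/bin/vaultwarden

    Since the token sits in /etc/vaultwarden.env in plain text, it is also worth restricting access to that file, for example with sudo chmod 600 /etc/vaultwarden.env.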

    Summary: Why Vaultwarden on a VPN Server is Your Personal Fort Knox

    We have analysed the journey from the frustration of weak passwords to building your own digital fortress. The solution presented here is based on three powerful pillars that in synergy create a system far superior to the sum of its parts:

    1. The Power of a Password Manager: It frees you from the obligation of creating and remembering dozens of complicated passwords. It provides convenience with autofill and strength with randomly generated, unique credentials for each service.
    2. The Control of Self-Hosting: It gives you absolute sovereignty over your most valuable data. You are the owner, administrator, and guardian of your digital safe, free from corporate regulations, subscriptions, and privacy concerns.
    3. The Invisibility of a VPN: It elevates security to the highest level, making your service invisible to the public internet. Instead of building ever-higher walls around a visible fortress, you simply hide it from the sight of potential attackers.

    The combination of Vaultwarden, with its lightness and free premium features, and an architecture based on a VPN, creates a solution that is not only more secure and private than most commercial cloud services but also extremely flexible and satisfying to manage.

    It’s true, it requires some effort and a willingness to learn. But the reward is priceless: regaining full control over your digital security and privacy. It’s time to stop changing your cat’s name. It’s time to build your own Fort Knox.

  • WordPress and the “A scheduled event has failed” error

    WordPress and the “A scheduled event has failed” error

    Every WordPress site administrator knows that feeling. You log into the dashboard, and there’s a message waiting for you: “A scheduled event has failed”. Your heart stops for a moment. Is the site down? Is it a serious crash?

    Calm down! Before you start to panic, take a deep breath. This error, although it sounds serious, rarely means disaster. Most often, it’s simply a signal that the internal task scheduling mechanism in WordPress isn’t working optimally.

    In this article, we’ll explain what this error is, why it appears, and how to fix it professionally in various server configurations.

    What is WP-Cron?

    WordPress needs to perform cyclical background tasks: publishing scheduled posts, creating backups, or scanning the site for viruses (as in the case of the wf_scan_monitor error from the Wordfence plugin). To handle these operations, it uses a built-in mechanism called WP-Cron.

    The problem is that WP-Cron is not a real cron daemon known from Unix systems. It’s a “pseudo-cron” that has a fundamental flaw: it only runs when someone visits your website.

    • On sites with low traffic: If no one visits the site, tasks aren’t performed on time, which leads to errors.
    • On sites with high traffic: WP-Cron is called on every page load, which generates unnecessary server load.

    In both cases, the solution is the same: disable the built-in WP-Cron and replace it with a stable, system-level cron job.

    Scenario 1: A Single WordPress Site

    This is the most basic and common configuration. The solution is simple and comes down to two steps.

    Step 1: Disable the built-in WP-Cron mechanism

    Edit the wp-config.php file in your site’s main directory and add the following line:

    define('DISABLE_WP_CRON', true);

    Step 2: Configure a system cron

    Log into your server via SSH and type crontab -e to edit the list of system tasks. Then, add one of the following lines, which will properly call the WordPress cron mechanism every 5 minutes.

    • wget method: */5 * * * * wget -q -O - "https://yourdomain.co.uk/wp-cron.php?doing_wp_cron" >/dev/null 2>&1
    • curl method: */5 * * * * curl -s "https://yourdomain.co.uk/wp-cron.php?doing_wp_cron" >/dev/null 2>&1

    Remember to replace yourdomain.co.uk with your actual address. From now on, tasks will be executed regularly, regardless of site traffic.
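
    If WP-CLI is available on the server, you can achieve the same effect without an HTTP request by running the due events directly through PHP. A sketch, assuming the wp binary is in the PATH and the site lives in /var/www/yourdomain.co.uk:

    */5 * * * * cd /var/www/yourdomain.co.uk && wp cron event run --due-now >/dev/null 2>&1

    Add this line to the crontab of the user who owns the site files rather than root’s, so that file permissions stay intact.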

    Scenario 2: Multiple Sites on a Standard Server

    If you manage multiple sites, adding a separate line in crontab for each one is impractical and difficult to maintain. A better solution is to create a single script that will automatically find all WordPress installations and run their tasks.

    Step 1: Create the script file

    Create a file, e.g., /usr/local/bin/run_all_wp_crons.sh, and paste the following content into it. The script searches the /var/www/ directory for wp-config.php files.

    #!/bin/bash
    #
    # Script to run cron jobs for all WordPress sites
    # Intended for a standard layout where the sites live under /var/www/
    SITES_ROOT="/var/www/"

    # Path to the PHP interpreter (may need to be adjusted)
    PHP_EXECUTABLE="/usr/bin/php"

    # Logging (optional, but useful for debugging)
    LOG_FILE="/var/log/wp_cron_runner.log"

    echo "Starting cron jobs: $(date)" >> "$LOG_FILE"

    # Find all wp-config.php files and run wp-cron.php for them
    find "$SITES_ROOT" -name "wp-config.php" -print0 | while IFS= read -r -d '' config_file; do

        # Extract the directory where WordPress is located
        WP_DIR=$(dirname "$config_file")

        # Determine the owner of the installation from its wp-config.php file
        WP_USER=$(stat -c '%U' "$config_file")

        if [ -z "$WP_USER" ]; then
            echo "-> WARNING: Failed to determine user for: $WP_DIR" >> "$LOG_FILE"
            continue
        fi

        # Check if the wp-cron.php file exists in this directory
        if [ -f "$WP_DIR/wp-cron.php" ]; then
            echo "-> Running cron for: $WP_DIR as user: $WP_USER" >> "$LOG_FILE"
            # Run wp-cron.php using PHP CLI, switching to the correct user
            su -s /bin/sh "$WP_USER" -c "(cd '$WP_DIR' && '$PHP_EXECUTABLE' wp-cron.php)"
        else
            echo "-> WARNING: Found wp-config, but no wp-cron.php in: $WP_DIR" >> "$LOG_FILE"
        fi
    done

    echo "Finished: $(date)" >> "$LOG_FILE"
    echo "---" >> "$LOG_FILE"

    Step 2: Grant the script execution permissions

    chmod +x /usr/local/bin/run_all_wp_crons.sh

    Step 3: Create a single cron job to manage everything

    Now your crontab can contain just one line:

    */5 * * * * /usr/local/bin/run_all_wp_crons.sh >/dev/null 2>&1
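
    Because the script appends to /var/log/wp_cron_runner.log on every run, that file will grow indefinitely. A minimal sketch of a rotation rule, assuming logrotate is installed (as it is on most distributions), saved as /etc/logrotate.d/wp-cron-runner:

    /var/log/wp_cron_runner.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
    }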

    Scenario 3: Multiple Sites with ISPConfig Panel

    The ISPConfig panel uses a specific directory structure with symlinks, e.g., /var/www/yourdomain.co.uk points to /var/www/clients/client1/web1/. Using the script above could cause tasks to be executed twice.

    To avoid this, you need to modify the script to search only the target clients directory.

    Step 1: Create a script optimised for ISPConfig

    Create the file /usr/local/bin/run_ispconfig_crons.sh. Note the change in the SITES_ROOT variable.

    #!/bin/bash
    # Script to run cron jobs for all WordPress sites
    # Optimised for ISPConfig directory structure
    # We only search the directory with the actual site files
    SITES_ROOT="/var/www/clients/"

    # Path to the PHP interpreter
    PHP_EXECUTABLE="/usr/bin/php"

    # Optional log file to track progress
    LOG_FILE="/var/log/wp_cron_runner.log"

    echo "Starting cron jobs (ISPConfig): $(date)" >> "$LOG_FILE"

    find "$SITES_ROOT" -name "wp-config.php" -print0 | while IFS= read -r -d '' config_file; do
        WP_DIR=$(dirname "$config_file")
        if [ -f "$WP_DIR/wp-cron.php" ]; then
            echo "-> Running cron for: $WP_DIR" >> "$LOG_FILE"
            (cd "$WP_DIR" && "$PHP_EXECUTABLE" wp-cron.php)
        fi
    done

    echo "Finished: $(date)" >> "$LOG_FILE"
    echo "---" >> "$LOG_FILE"

    Steps 2 and 3 are analogous to Scenario 2: give the script execution permissions (chmod +x) and add a single line to crontab -e, pointing to the new script file.

    Summary

    The “A scheduled event has failed” error is not a reason to panic, but rather an invitation to improve your infrastructure. It’s a chance to move from the unreliable, built-in WordPress mechanism to a solid, professional system solution that guarantees stability and performance.

    Regardless of your server configuration, you now have the tools to sleep soundly, knowing that your scheduled tasks are running like clockwork.

  • Error: Change ARI Username Password FreePBX Asterisk

    Error: Change ARI Username Password FreePBX Asterisk

    In early 2023, the number of attacks on FreePBX Asterisk systems increased. The vulnerability exploited by hackers is the ARI interface. To gain access to the ARI interface, an attacker must know not only the ARI username and password but also the login details for the FreePBX administrative interface. This is why it is so important to use strong, hard-to-crack passwords. In the new version of FreePBX, we are shown the error: Change ARI Username Password.

    The ARI user and its password are created during the FreePBX installation. The username consists of about 15 random characters, and the password of about 30 random characters. The developers of FreePBX discovered that, on some systems, the generated username and password are not unique.

    This does not look like an error in Asterisk or FreePBX itself, so their versions are irrelevant here. If there has been a leak of ARI data, the hacker can gain access to our FreePBX system regardless of its version.

    How to get rid of the “Change ARI Username Password” error

    To patch the security hole, we must create a new ARI user and a new password for it. To create a new ARI user, log in to your FreePBX system and enter the command:

    fwconsole rpc "ari.create_user('RANDOM_CHARACTERS', 'RANDOM_PASSWORD')"

    In place of RANDOM_CHARACTERS, enter 15 random alphanumeric characters – this will be the new ARI username. Then create a new password for it with the command:

    fwconsole rpc "ari.change_password('RANDOM_CHARACTERS', 'RANDOM_PASSWORD')"

    In place of RANDOM_PASSWORD, enter 30 random alphanumeric characters. Next, we need to reload the settings with the command:

    fwconsole reload

    Finally, all you have to do is restart FreePBX with the command:

    fwconsole restart

    After the restart, the “Change ARI Username Password” error message should disappear.
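
    If you need a quick way to produce those random strings, the following one-liners (a sketch using standard Linux tools available on the FreePBX server) will do the job:

    # 15 random alphanumeric characters for the new ARI username
    tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 15; echo
    # 30 random alphanumeric characters for the new ARI password
    tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 30; echo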

    Summary

    FreePBX is an extremely secure system. However, even the most secure system will be vulnerable to hacking if easy-to-crack passwords are used and the configuration is incorrect.

  • Phishing: Why Companies Fall Victim to Their Attacks

    Phishing: Why Companies Fall Victim to Their Attacks

    Over time, phishing attacks have become increasingly sophisticated. Thanks to public information campaigns on television, more people have become aware of the threats and know what to look out for after receiving messages from uncertain sources. However, criminals are also constantly adapting their attack methods to changing circumstances, and many people still fall victim to these types of attacks, losing confidential or private data, and even their savings. The matter is even more serious when we talk about phishing attacks on companies that hold confidential data of their customers in their databases. How is it possible that companies still fall victim to such attacks?

    Lack of Employee Security Training

    Training employees is a cost to the company, so in the name of saving money many business owners give up on training their staff in cyber-security. Untrained employees do not know how to avoid constantly evolving security threats. Without proper training, they may not realise how serious a threat phishing poses to the company, how to detect it, or how to protect themselves against it. A lack of employee training can end in disaster for the enterprise and cause the company many problems, including legal ones. Training is essential simply to recognise a phishing message in the first place.

    As long as business owners ignore the problem of a lack of employee training, companies will continue to fall victim to phishing attacks. The cost incurred for training in cyber-crime can pay for itself in the future, and ignoring this type of threat can come back to haunt them.

    The problem affects small companies to an even greater extent than large enterprises. Large companies can usually allocate more funds for training, since the cost per employee is lower than in firms with only a handful of staff, and their IT infrastructure is generally much better protected against cyber-attacks than that of small businesses.

    Money

    Cyber-criminals make money from phishing attacks. Often, large sums of money. Obtaining confidential data, for example, login details for banking websites, from unsuspecting employees is much easier than hacking directly into the banks’ websites. That is why, despite the passage of time, phishing attacks are still going strong. New, ever more sophisticated methods of phishing attacks are constantly emerging.

    Cyber-criminals are often able to invest considerable funds in purchasing software and hardware to carry out these types of attacks. This, combined with the unawareness of untrained company employees, means that hundreds of thousands of data-phishing sites are detected each year. According to the Anti-Phishing Working Group, over a million phishing attacks were detected in the first quarter of 2022, and in March 2022 alone over 384,000 data-phishing sites were discovered. This is a serious problem for private individuals and an even bigger one for companies.

    Careless Employees

    Sometimes it is not the company itself that is responsible for falling victim to phishing, but the carelessness and negligence of individual employees, even when appropriate training has been provided. Clicking on links and entering confidential data on websites without thinking can result in the leakage of login data. Any employee with access to websites at work can fall victim to phishing.

    Easy Access to Software for Criminals

    In the past, only a handful of hackers in the world had the skills to write software to carry out effective phishing attacks. Today, in the age of the ubiquitous internet, with the right amount of cash, criminals are able to easily acquire professional tools and software to carry out phishing attacks. That is why the number of these attacks is growing year on year.

    Companies Are Looking for Savings

    Recent years, 2020-2022 (the coronavirus pandemic, high energy prices), have not been easy for entrepreneurs. It is no wonder, then, that companies looking for savings are tightening their belts and giving up on employee training. However, saving on a company’s cyber-security can come back to haunt them in the future.

    Summary

    The problem of phishing attacks, especially on companies, is growing year on year, and the methods used are becoming ever more sophisticated. Taking care of the security of company data and the confidential data of clients is therefore essential, which is why professional security training for office employees matters so much. Such training is offered by many companies, such as Network Masters or Securitum, both online and in-person. It is equally important to properly secure the company’s IT infrastructure itself: a good quality firewall can automatically detect and block many types of attacks on company computer systems, including phishing.