In an era where our digital lives are scattered across the servers of tech giants, a growing number of people are seeking digital sovereignty. They want to decide where their most valuable data is stored, from family photos to confidential documents. The answer to this need is Nextcloud, a powerful open-source platform that allows you to create your own fully functional equivalent of Google Drive or Dropbox. When combined with a robust data storage system like TrueNAS, it becomes the foundation of a truly private cloud. Let’s walk through the process, from the initial decision to synchronising the final file.
The Foundation: A Solid Installation
Choosing a platform to host Nextcloud is crucial. TrueNAS SCALE, based on Linux, offers a solid environment for running applications in isolated containers, ensuring both security and stability. The installation process, though automated, presents the administrator with several important questions that will define the capabilities of the future cloud.
The first step is to enhance the basic installation with additional packages. These are not random add-ons, but tools that will breathe life into your stored files:
ffmpeg: A digital translator for video and audio files. Without it, your library of holiday films would be just a collection of silent icons. With it, Nextcloud generates thumbnails and previews, allowing you to quickly see the content.
libreoffice: Enables the generation of previews for office documents. Essential for glancing at the contents of a .docx or .xlsx file without having to download it.
ocrmypdf & Tesseract: A duo that transforms static scans into intelligent, searchable documents. After adding a language pack—in our case, the crucial Polish one—the system automatically recognises text in PDF files, turning Nextcloud into a powerful document archive.
smbclient: A bridge to the Windows world. It allows you to connect existing network shares to Nextcloud, integrating the cloud with the rest of your home infrastructure.
Each of these choices is an investment in future functionality. It is equally important to ensure the system runs like a well-oiled machine. This is where the background job mechanism, known as Cron, comes into play. Setting it to run cyclically every 5 minutes (*/5 * * * *) is an industry standard, guaranteeing that notifications arrive on time and temporary files are regularly cleared.
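In the TrueNAS app this schedule is simply typed into the Cron configuration field, but for reference, on a bare-metal Nextcloud install the equivalent entry in the web-server user's crontab (added with crontab -u www-data -e) would look roughly like the sketch below; the www-data user and the /var/www/nextcloud path are assumptions that differ from system to system:
*/5 * * * * php -f /var/www/nextcloud/cron.php   # run Nextcloud background jobs every 5 minutes
Remember to switch Background jobs to "Cron" under Administration settings -> Basic settings so Nextcloud actually relies on this job rather than on page loads.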
Configuration: The Digital Fortress and Its Address
After installing the basic components, it’s time to configure the network and data storage. This is where we decide how our cloud will be visible to the world and where our data will physically reside.
For most home applications, the default network settings are sufficient. However, the key element is security. Accessing the cloud via the unsecured http:// protocol is like leaving the door to a vault wide open. The solution is to enable HTTPS encryption by assigning an SSL certificate. TrueNAS offers simple tools for generating both self-signed certificates (ideal for testing on a local network) and fully trusted certificates from Let’s Encrypt, if you own a domain.
SSL Configuration for Nextcloud on TrueNAS using Cloudflare and Nginx Proxy Manager
Securing your Nextcloud instance with an SSL certificate is absolutely crucial. Not only does it protect your data in transit, but it also builds trust and enables the use of many client applications that require an encrypted HTTPS connection. In this guide, we will show you how to easily configure a completely free and automatically renewing SSL certificate for your domain using the powerful combination of Cloudflare and Nginx Proxy Manager.
Initial Assumptions
Before we begin, let’s ensure you have the following ready:
A working instance of Nextcloud on TrueNAS, accessible via a local IP address and port (e.g., 192.168.1.50:30027).
The Nginx Proxy Manager application installed and running on TrueNAS.
Your own registered domain (e.g., mydomain.com).
A free account on Cloudflare, with your domain connected to it.
Step 1: DNS Configuration in Cloudflare
The first step is to point the subdomain you want to use for Nextcloud (e.g., cloud.mydomain.com) to your home network’s public IP address.
Log in to your Cloudflare dashboard and select your domain.
Navigate to the DNS -> Records tab.
Click Add record and create a new A record:
Type: A
Name: Enter the subdomain name, e.g., cloud.
IPv4 address: Enter your network’s public IP address.
Proxy status: Turn off the orange cloud (set to DNS only). This is crucial while generating the certificate so that Nginx Proxy Manager can verify the domain without issue. After a successful configuration, we can re-enable the Cloudflare proxy.
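If you prefer the command line, the same A record can be created through Cloudflare’s API; a minimal sketch, in which $ZONE_ID and $CF_API_TOKEN are placeholders for your zone ID and an API token with DNS edit permission, and the IP address is an example:
curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"type":"A","name":"cloud.mydomain.com","content":"203.0.113.10","proxied":false}'   # proxied:false = "DNS only"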
Step 2: Creating a Proxy Host in Nginx Proxy Manager
Now that the domain points to our server, it’s time to configure Nginx Proxy Manager to manage the traffic and the SSL certificate.
Log in to the Nginx Proxy Manager web interface.
Go to Hosts -> Proxy Hosts and click Add Proxy Host.
Fill in the details in the Details tab:
Domain Names: Enter the full name of your subdomain, e.g., cloud.mydomain.com.
Scheme: http
Forward Hostname/IP: Enter the local IP address of your Nextcloud application, e.g., 192.168.1.50.
Forward Port: Enter the port your Nextcloud is listening on, e.g., 30027.
Tick the Block Common Exploits option to increase security.
Navigate to the SSL tab:
From the SSL Certificate dropdown list, select “Request a new SSL Certificate”.
Enable the Force SSL option. This will automatically redirect all traffic from HTTP to secure HTTPS.
Enable HTTP/2 Support for better performance.
Accept the Let’s Encrypt Terms of Service by ticking “I Agree to the Let’s Encrypt Terms of Service”.
Click the Save button.
At this point, Nginx Proxy Manager will connect to the Let’s Encrypt servers, automatically perform the verification of your domain, and if everything proceeds successfully, it will download and install the SSL certificate.
Step 3: Verification and Final Steps
After a few moments, you should be able to access your domain https://cloud.mydomain.com in your browser. If everything has been configured correctly, you will see the Nextcloud login page with a green padlock in the address bar, indicating that your connection is fully encrypted.
The final step is to return to your Cloudflare dashboard (Step 1) and enable the orange cloud (Proxied) for your DNS record. This will give you an additional layer of protection and performance offered by the Cloudflare CDN.
Congratulations! Your Nextcloud instance is now secure and accessible from anywhere in the world under your own professional-looking domain.
The next pillar is data storage. The default option, ixVolume, allows the TrueNAS system to automatically manage dedicated spaces for application files, user data, and the database. This approach ensures order and security. The temptation to mount an entire data pool as “additional storage” is great, but it is a path to nowhere: it leads to organisational chaos and potential security vulnerabilities. A much better practice is to mount only specific, existing datasets, such as media or music.
Even with the best configuration, an obstacle may appear. The most common one is the “Access through untrusted domain” message. This is not an error, but a testament to Nextcloud’s commitment to security. The system demands that we explicitly declare which addresses (IPs or domains) we will use to connect to it. The solution requires some detective work: finding the config.php file and adding the trusted addresses to it. In newer versions of TrueNAS, this file is often hidden in a non-standard location, such as /mnt/.ix-apps/, which requires patience and familiarity with the system console.
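A minimal sketch of that detective work from the TrueNAS shell; the exact dataset path varies between TrueNAS versions, so locate the file first rather than assuming it, and treat the paths below as placeholders:
# Locate the Nextcloud config.php (the path differs between TrueNAS releases)
sudo find /mnt -name config.php -path "*nextcloud*" 2>/dev/null
# Then add every address you use to reach Nextcloud to the trusted_domains array, for example:
#   'trusted_domains' => array (
#     0 => '192.168.1.50:30027',
#     1 => 'cloud.mydomain.com',
#   ),
sudo nano /path/found/by/the/command/above/config.php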
The Gateway to the Cloud: Synchronisation at Your Fingertips
Once the server is ready, it’s time to open the doors to it from our devices. Nextcloud offers clients for all popular platforms: from desktop computers to smartphones. In the world of Linux, we face a choice: download the universal AppImage file directly from the creators or use the modern Flatpak package system.
Although AppImage offers simplicity and portability, Flatpak wins in daily use. It provides full system integration, automatic updates, and, most importantly, runs in an isolated environment (sandbox), which significantly increases the level of security.
The client authorisation process is a model of the modern approach. Instead of entering a password directly into the application, we are redirected to a browser, where we log in on our own trusted site. After a successful login, the server sends a special token back to the application, which authorises the connection. It’s simple, fast, and secure.
The final step is to decide what to synchronise. We can choose to fully synchronise all data or, if disk space is limited, select only the most important folders. After clicking “Connect,” the magic happens—files from the server begin to flow to our local drive, and an icon in the system tray informs us of the progress.
Configuring the Nextcloud Desktop Client
After installing the Nextcloud client application on your computer, the next step is to connect it to your account on the server. This process is simple and secure, as it uses your web browser for authorisation, meaning your password is not entered directly into the application.
Step 1: Initiating the Connection and Authorising in the Browser
When you launch the client for the first time, you will be prompted to enter the address of your Nextcloud server (e.g., https://cloud.mydomain.com).
After entering it, the application will automatically open a new tab in your default web browser.
You will see a screen asking you to connect to your account. This is a security mechanism that informs you that an application (in this case, Desktop Client – Linux) is trying to access your account.
Click the blue “Log in” button to continue.
You will then be redirected to the standard Nextcloud login window. Enter your username (or email) and password, just as you do when logging in through the website.
After a successful login, Nextcloud will confirm that the authorisation was successful and the client has been successfully connected to your account.
You can now close this browser window and return to the client application.
Step 2: Local Synchronisation Settings
The desktop application will now display the final configuration screen, where you can define how files should be synchronised. Pay attention to the following options:
Remote Account: Ensure the account name and server address are correct.
Local Folder: By default, the client will create a Nextcloud folder in your home directory. You can choose a different location by clicking “Choose different folder”.
Sync Options:
Synchronize everything from server: The default and recommended option, which will download all files and folders from the server.
Choose what to sync: Allows for selective synchronisation. You can choose only the folders you want to have on your computer.
After making your selection, click the “Connect” button.
Step 3: Completion and Working with the Client
That’s it! Your client is now configured. The initial synchronisation process will begin, and its progress will be visible in the main application window and via the icon in the system tray.
In the main application window, you can now view recent activity, server notifications, and manually force a synchronisation by clicking “Sync now”. From now on, any file you add or modify in the local Nextcloud folder will be automatically synchronised with the server and other connected devices.
More Than Just Files: An Ecosystem of Applications
The true power of Nextcloud lies not just in file synchronisation. It lies in its ecosystem, which allows you to transform a simple data storage into a comprehensive platform for work and communication. The built-in app store offers hundreds of free extensions. Here are a few worth installing right from the start:
Nextcloud Office: Thanks to integration with Collabora Online or ONLYOFFICE, Nextcloud gains the ability to edit text documents, spreadsheets, and presentations in real-time, becoming a viable alternative to Google Docs or Microsoft 365.
Deck: A simple but powerful project management tool in the style of Kanban boards. Ideal for organising personal tasks and teamwork.
Calendar & Contacts: A fully-fledged calendar and address book with the ability to synchronise via standard CalDAV and CardDAV protocols.
Photos: Much more than a simple photo viewer. The application can automatically categorise images based on recognised objects, create albums, and display photos on a map.
Notes: A minimalist application for creating and synchronising notes in Markdown format.
Installing and configuring your own Nextcloud is a journey that requires attention and making a few key decisions. However, the reward is priceless: full control over your own data, independence from external providers, and a platform that you can shape and expand as you wish. This is not just technology—it is a manifesto of digital freedom.
The modern internet is a battlefield for our attention, and adverts have become the primary ammunition. This is felt particularly acutely on smartphones, where intrusive banners and pop-up windows can effectively discourage you from browsing content. However, there is an effective and comprehensive solution that allows you to create your own protective shield, not only on your home network but on any device, wherever you are.
The Problem: Digital Clutter and Loss of Privacy
Anyone who has tried to read an article on a smartphone is familiar with this scenario: the content is regularly interrupted by adverts that take up a significant portion of the screen, slow down the page’s loading time, and consume precious mobile data. While this problem is irritating on desktop computers, on smaller screens it becomes a serious barrier to accessing information.
Traditional browser plug-ins solve the problem only partially and on a single device. They don’t protect us in mobile apps, on Smart TVs, or on games consoles. What’s worse, ubiquitous tracking scripts collect data about our activity, creating detailed marketing profiles.
The Solution: Centralised Management with AdGuard Home
The answer is AdGuard Home—software that acts as a DNS server, filtering traffic at a network-wide level. By installing it on a home server, such as the popular TrueNAS, we gain a central point of control over all devices connected to our network.
Installation and configuration of AdGuard Home on TrueNAS are straightforward thanks to its Apps system. A key step during installation is to tick the “Host Network” option. This allows AdGuard Home to see the real IP addresses of the devices on your network, enabling precise monitoring and management of clients in the admin panel. Without this option, all queries would appear to originate from the server’s single IP address.
After installation, the crucial step is to direct DNS queries from all devices to the address of our AdGuard server. This can be achieved in several ways, but thanks to Tailscale, the process becomes incredibly simple.
Traditional Methods vs. The Tailscale Approach
In a conventional approach, to direct traffic to AdGuard Home, we would need to change the DNS addresses in our router’s settings. When this isn’t possible (which is often the case with equipment from an internet service provider), the alternative is to configure AdGuard Home as a DHCP server, which will automatically assign the correct DNS address to devices (this requires disabling the DHCP server on the router). The last resort is to change the DNS manually on every device in the house. It must be stressed, however, that all these methods work only within the local network and are completely ineffective for mobile devices using cellular data away from home.
However, if we plan to use Tailscale for protection outside the home, we can also use it to configure the local network. This is an incredibly elegant solution: if we install the Tailscale client on all our devices (computers, phones) and set our AdGuard server’s DNS address in its admin panel, enabling the “Override local DNS” option, we don’t need to make any changes to the router or manually on individual devices. Tailscale will automatically force every device in our virtual network to use AdGuard, regardless of which physical network it is connected to.
AdGuard Home Features: Much More Than Ad Blocking
Protection against Malware: Automatically blocks access to sites known for phishing, malware, and scams.
Parental Controls: Allows you to block sites with adult content, an invaluable feature in homes with children.
Filter Customisation: We can use ready-made, regularly updated filter lists or add our own rules.
Detailed Statistics: The panel shows which queries are being blocked, which devices are most active, and which domains are generating the most traffic.
For advanced users, the ability to manage clients is particularly useful. Each device on the network can be given a friendly name (e.g., “Anna-Laptop,” “Tom-Phone”) and assigned individual filtering rules. In my case, for VPS servers that do not require ad blocking, I have set default DNS servers (e.g., 1.1.1.1 and 8.8.8.8), so their traffic is ignored by the AdGuard filters.
The Challenge: Blocking Adverts Beyond the Home Network
While protection on the local network is already a powerful tool, true freedom from adverts comes when we can use it away from home. By default, when a smartphone connects to a mobile network, it loses contact with the home AdGuard server. Attempting to expose a DNS server to the public internet by forwarding ports on your router is not only dangerous but also ineffective. Most mobile operating systems, like Android and iOS, do not allow changing the DNS server for mobile connections, making such a solution impossible. This is where Tailscale comes to the rescue.
Tailscale: Your Private Network, Anywhere
Tailscale is a service based on the WireGuard protocol that creates a secure, virtual private network (a “Tailnet”) between your devices. Regardless of where they are, computers, servers, and phones can communicate with each other as if they were on the same local network.
Installing Tailscale on TrueNAS and on mobile devices is swift and straightforward. After logging in with the same account, all devices see each other in the Tailscale admin panel. To combine the power of both tools, you need to follow these key steps:
In the Tailscale admin panel, under the DNS tab, enable the Override local DNS option.
As the global DNS server, enter the IP address of our TrueNAS server within the Tailnet (e.g., 100.x.x.x).
With this configuration, all DNS traffic from our phone, even when it’s using a 5G network on the other side of the country, is sent through a secure tunnel to the Tailscale server on TrueNAS and then processed by AdGuard Home. The result? Adverts, trackers, and malicious sites are blocked on your phone, anytime and anywhere.
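A quick way to confirm the whole chain, run from any device in your Tailnet (a laptop is easiest); doubleclick.net is used here simply as a well-known ad-serving domain, and the 0.0.0.0 answer assumes AdGuard Home’s default blocking mode:
tailscale status            # the TrueNAS node should be listed and connected
nslookup doubleclick.net
# With "Override local DNS" enabled, the query is answered via Tailscale's resolver and
# forwarded to AdGuard Home; a blocked domain typically resolves to 0.0.0.0.
# The query should also appear in the AdGuard Home query log a moment later.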
Advanced Tailscale Features: Subnet Routes and Exit Node
Tailscale offers two powerful features that further extend the capabilities of our network:
Subnet routes: This allows you to share your entire home LAN (e.g., 192.168.1.0/24) with devices on your Tailnet. After configuring your TrueNAS server as a “subnet router,” your phone, while away from home, can access not only the server itself but also your printer, IP camera, or other devices on the local network, just as if you were at home.
Exit node: This feature turns your home server into a fully-fledged VPN server. Once activated, all internet traffic from your Tailnet (not just DNS queries) is tunnelled through your home internet connection. This is the perfect solution when using untrusted public Wi-Fi networks (e.g., in a hotel or at an airport), as all your traffic is encrypted and protected. If your home server is in the UK, you also gain a UK IP address while abroad.
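On a plain Linux host these two features map to Tailscale CLI flags; on TrueNAS the Tailscale app exposes the same options in its configuration form, so treat this as an illustrative sketch only (192.168.1.0/24 is the example LAN used above):
# Advertise the home LAN and offer this machine as an exit node
sudo tailscale up --advertise-routes=192.168.1.0/24 --advertise-exit-node
# On a generic Linux box you would also need IP forwarding enabled (net.ipv4.ip_forward=1).
# Finally, approve the advertised subnet route and the exit node for this machine in the
# Tailscale admin console, then select the exit node on your phone or laptop when needed.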
Checking the Effectiveness of Ad Blocking
To find out how effective your ad-blocking filters are, you can visit https://adblock.turtlecute.org/. There, you will see what types of adverts are being blocked and which are still being displayed. This will help you to fine-tune your filter lists in AdGuard Home.
Summary: Advantages and Disadvantages
Creating such a system is an investment of time, but the benefits are invaluable.
Advantages:
Complete and Unified Protection: Blocks adverts and threats on all devices, on any network, with minimal configuration.
Centralised Management: A single place to configure rules for the entire household.
Increased Privacy and Security: Reduces tracking and encrypts traffic on public networks.
Performance: Faster page loading and lower mobile data consumption.
Disadvantages:
Requires a Server: Needs a 24/7 device like a TrueNAS server to be running.
Dependency on Home Connection: The speed of DNS responses and bandwidth (in Exit Node mode) outside the home depends on your internet’s upload speed.
The combination of AdGuard Home and Tailscale is a powerful tool for anyone who values a clean, fast, and secure internet. It is a declaration of digital independence that places control back into the hands of the user, away from advertising corporations.
In today’s digital world, where remote work and distributed infrastructure are becoming the norm, secure access to network resources is not so much a luxury as an absolute necessity. Virtual Private Networks (VPNs) have long been the answer to these needs, yet traditional solutions can be complicated and slow. Enter WireGuard—a modern VPN protocol that is revolutionising the way we think about secure tunnels. Combined with the power of the TrueNAS Scale system and the simplicity of the WG-Easy application, we can create an exceptionally efficient and easy-to-manage solution.
This article is a comprehensive guide that will walk you through the process of configuring a secure WireGuard VPN tunnel step by step. We will connect a TrueNAS Scale server, running on your home or company network, with a fleet of public VPS servers. Our goal is to create intelligent “split-tunnel” communication, ensuring that only necessary traffic is routed through the VPN, thereby maintaining maximum internet connection performance.
What Is WireGuard and Why Is It a Game-Changer?
Before we delve into the technical configuration, it’s worth understanding why WireGuard is gaining such immense popularity. Designed from the ground up with simplicity and performance in mind, it represents a breath of fresh air compared to older, more cumbersome protocols like OpenVPN or IPsec.
The main advantages of WireGuard include:
Minimalism and Simplicity: The WireGuard source code consists of just a few thousand lines, in contrast to the hundreds of thousands for its competitors. This not only facilitates security audits but also significantly reduces the potential attack surface.
Unmatched Performance: By operating at the kernel level of the operating system and utilising modern cryptography, WireGuard offers significantly higher transfer speeds and lower latency. In practice, this means smoother access to files and services.
Modern Cryptography: WireGuard uses the latest, proven cryptographic algorithms such as ChaCha20, Poly1305, Curve25519, BLAKE2s, and SipHash24, ensuring the highest level of security.
Ease of Configuration: The model, based on the exchange of public keys similar to SSH, is far more intuitive than the complicated certificate management found in other VPN systems.
The Power of TrueNAS Scale and the Convenience of WG-Easy
TrueNAS Scale is a modern, free operating system for building network-attached storage (NAS) servers, based on the solid foundations of Linux. Its greatest advantage is its support for containerised applications (Docker/Kubernetes), which allows for easy expansion of its functionality. Running a WireGuard server directly on a device that is already operating 24/7 and storing our data is an extremely energy- and cost-effective solution.
This is where the WG-Easy application comes in—a graphical user interface that transforms the process of managing a WireGuard server from editing configuration files in a terminal to simple clicks in a web browser. Thanks to WG-Easy, we can create profiles for new devices in moments, generate their configurations, and monitor the status of connections.
Step 1: Designing the Network Architecture – The Foundation of Stability
Before we launch any software, we must create a solid plan. Correctly designing the topology and IP addressing is the key to a stable and secure solution.
The “Hub-and-Spoke” Model: Your Command Centre
Our network will operate based on a “hub-and-spoke” model.
Hub: The central point (server) of our network will be TrueNAS Scale. All other devices will connect to it.
Spokes: Our VPS servers will be the clients (peers), or the “spokes” connected to the central hub.
In this model, all communication flows through the TrueNAS server by default. This means that for one VPS to communicate with another, the traffic must pass through the central hub.
To avoid chaos, we will create a dedicated subnet for our virtual network. In this guide, we will use 10.8.0.0/24.
The address plan (device role, host identifier, VPN IP address):
Server (Hub): TrueNAS-Scale, 10.8.0.1
Client 1 (Spoke): VPS1, 10.8.0.2
Client 2 (Spoke): VPS2, 10.8.0.3
Client 3 (Spoke): VPS3, 10.8.0.4
The Fundamental Rule: One Client, One Identity
A tempting thought arises: is it possible to create a single configuration file for all VPS servers? Absolutely not. This would be a breach of a fundamental WireGuard security principle. Identity in this network is not based on a username and password, but on a unique pair of cryptographic keys. Using the same configuration on multiple machines is like giving the same house key to many different people—the server would be unable to distinguish between them, which would lead to routing chaos and a security breakdown.
Step 2: Prerequisite – Opening the Gateway to the World
The most common pitfall when configuring a home server is forgetting about the router. Your TrueNAS server is on a local area network (LAN) and has a private IP address (e.g., 192.168.0.13), which makes it invisible from the internet. For the VPS servers to connect to it, you must configure port forwarding on your router.
You need to create a rule that directs packets arriving from the internet on a specific port straight to your TrueNAS server.
Protocol: UDP (WireGuard uses UDP exclusively)
External Port: 51820 (the standard WireGuard port)
Internal IP Address: The IP address of your TrueNAS server on the LAN
Internal Port: 51820
Without this rule, your VPN server will never work.
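Before blaming the router, two quick checks from the TrueNAS shell can help; a hedged sketch, since ifconfig.me is just one of several public “what is my IP” services and the second command only shows a result if WG-Easy runs with host networking:
curl -4 https://ifconfig.me        # your network's public IP address, i.e. the endpoint clients will connect to
sudo ss -ulnp | grep 51820         # confirms something is bound to UDP 51820 on the host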
Step 3: Hub Configuration – Launching the Server on TrueNAS
Launch the WG-Easy application on your TrueNAS server. The configuration process boils down to creating a separate profile for each client (each VPS server).
Click “New” and fill in the form for the first VPS, paying special attention to the fields below:
The example values below are for VPS1:
Name: VPS1-Public. A readable label to help you identify the client.
IPv4 Address: 10.8.0.2. A unique IP address for this VPS within the VPN, according to our plan.
Allowed IPs: 192.168.0.0/24, 10.8.0.0/24. This is the heart of the “split-tunnel” configuration. It tells the client (VPS) that only traffic to your local network (LAN) and to other devices on the VPN should be sent through the tunnel. All other traffic (e.g., to Google) will take the standard route.
Server Allowed IPs: 10.8.0.2/32. A critical security setting. It informs the TrueNAS server to only accept packets from this specific client from its assigned IP address. The /32 mask prevents IP spoofing.
Persistent Keepalive: 25. An instruction for the client to send a small “keep-alive” packet every 25 seconds. This is necessary to prevent the connection from being terminated by routers and firewalls along the way.
After filling in the fields, save the configuration. Repeat this process for each subsequent VPS server, remembering to assign them consecutive IP addresses (10.8.0.3, 10.8.0.4, etc.).
Once you save the profile, WG-Easy will generate a .conf configuration file for you. Treat this file like a password—it contains the client’s private key! Download it and prepare to upload it to the VPS server.
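For orientation, the downloaded configuration for VPS1 will look roughly like the sketch below; the keys are generated by WG-Easy and shown here as placeholders, and the endpoint is your public IP address (or DDNS name) with the forwarded port:
[Interface]
PrivateKey = <client-private-key-generated-by-WG-Easy>
Address = 10.8.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = <your-public-ip-or-ddns>:51820
AllowedIPs = 192.168.0.0/24, 10.8.0.0/24
PersistentKeepalive = 25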
Step 4: Spoke Configuration – Activating Clients on the VPS Servers
Now it’s time to bring our “spokes” to life. Assuming your VPS servers are running Linux (e.g., Debian/Ubuntu), the process is very straightforward.
Upload and secure the configuration file: Copy the previously downloaded wg0.conf file to the /etc/wireguard/ directory on the VPS server. Then, change its permissions so that only the administrator can read it:
# On the VPS server:
sudo mv /path/to/your/wg0.conf /etc/wireguard/wg0.conf
sudo chmod 600 /etc/wireguard/wg0.conf
Start the tunnel: Use a simple command to activate the connection. The interface name (wg0) is derived from the configuration file name.
sudo wg-quick up wg0
Ensure automatic start-up: To have the VPN tunnel start automatically after every server reboot, enable the corresponding system service:
sudo systemctl enable wg-quick@wg0.service
Repeat these steps on each VPS server, using the unique configuration file generated for each one.
Step 5: Verification and Diagnostics – Checking if Everything Works
After completing the configuration, it’s time for the final test.
Checking the Connection Status
On both the TrueNAS server and each VPS, execute the command:
sudo wg show
Look for two key pieces of information in the output:
latest handshake: This should show a recent time (e.g., “a few seconds ago”). This is proof that the client and server have successfully connected.
transfer: received and sent values greater than zero indicate that data is actually flowing through the tunnel.
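Healthy output looks roughly like this (keys abbreviated, values will naturally differ):
interface: wg0
  public key: <server-public-key>
  listening port: 51820

peer: <client-public-key>
  endpoint: <vps-public-ip>:51820
  allowed ips: 10.8.0.2/32
  latest handshake: 35 seconds ago
  transfer: 1.18 MiB received, 3.42 MiB sent
  persistent keepalive: every 25 seconds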
The Final Test: Validating the “Split-Tunnel”
This is the test that will confirm we have achieved our main goal. Log in to one of the VPS servers and perform the following tests:
Test connectivity within the VPN: Try to ping the TrueNAS server using its VPN and LAN addresses.
ping 10.8.0.1        # VPN address of the TrueNAS server
ping 192.168.0.13    # LAN address of the TrueNAS server (use your own)
If you receive replies, it means that traffic to your local network is being correctly routed through the tunnel.
Test the path to the internet: Use the traceroute tool to check the route packets take to a public website.
traceroute google.com
The result of this command is crucial. The first “hop” on the route must be the default gateway address of your VPS hosting provider, not the address of your VPN server (10.8.0.1). If this is the case—congratulations! Your “split-tunnel” configuration is working perfectly.
Troubleshooting Common Problems
No “handshake”: The most common cause is a connection issue. Double-check the UDP port 51820 forwarding configuration on your router, as well as any firewalls in the path (on TrueNAS, on the VPS, and in your cloud provider’s panel).
There is a “handshake”, but ping doesn’t work: The problem usually lies in the Allowed IPs configuration. Ensure the server has the correct client VPN address entered (e.g., 10.8.0.2/32), and the client has the networks it’s trying to reach in its configuration (e.g., 192.168.0.0/24).
All traffic is going through the VPN (full-tunnel): This means that in the client’s configuration file, under the [Peer] section, the Allowed IPs field is set to 0.0.0.0/0. Correct this setting in the WG-Easy interface, download the new configuration file, and update it on the client.
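A few standard commands that help narrow down which of these you are hitting (run them on the relevant machine):
sudo wg show                          # is the peer listed, and is there a recent handshake?
sudo systemctl status wg-quick@wg0    # did the tunnel come up cleanly on the VPS?
sudo journalctl -u wg-quick@wg0 -e    # configuration and routing errors end up here
ip route                              # are routes for 10.8.0.0/24 and 192.168.0.0/24 present on the client?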
Creating your own secure and efficient VPN server based on TrueNAS Scale and WireGuard is well within reach. It is a powerful solution that not only enhances security but also gives you complete control over your network infrastructure.
Section 1: Introduction: Simplifying Home Lab Access with Nginx Proxy Manager on TrueNAS Scale
Modern home labs have evolved from simple setups into complex ecosystems running dozens of services, from media servers like Plex or Jellyfin, to home automation systems such as Home Assistant, to personal clouds and password managers. Managing access to each of these services, each operating on a unique combination of an IP address and port number, quickly becomes impractical, inconvenient, and, most importantly, insecure. Exposing multiple ports to the outside world increases the attack surface and complicates maintaining a consistent security policy.
The solution to this problem, employed for years in corporate environments, is the implementation of a central gateway or a single point of entry for all incoming traffic. In networking terminology, this role is fulfilled by a reverse proxy. This is an intermediary server that receives all requests from clients and then, based on the domain name, directs them to the appropriate service running on the internal network. Such an architecture not only simplifies access, allowing the use of easy-to-remember addresses (e.g., jellyfin.mydomain.co.uk instead of 192.168.1.50:8096), but also forms a key component of a security strategy.
In this context, two technologies are gaining particular popularity among enthusiasts: TrueNAS Scale and Nginx Proxy Manager. TrueNAS Scale, based on the Debian Linux system, has transformed the traditional NAS (Network Attached Storage) device into a powerful, hyper-converged infrastructure (HCI) platform, capable of natively running containerised applications and virtual machines. In turn, Nginx Proxy Manager (NPM) is a tool that democratises reverse proxy technology. It provides a user-friendly, graphical interface for the powerful but complex-to-configure Nginx server, making advanced features, such as automatic SSL certificate management, accessible without needing to edit configuration files from the command line.
This article provides a comprehensive overview of the process of deploying Nginx Proxy Manager on the TrueNAS Scale platform. The aim is not only to present “how-to” instructions but, above all, to explain why each step is necessary. The analysis will begin with an in-depth discussion of both technologies and their interactions. Then, a detailed installation process will be carried out, considering platform-specific challenges and their solutions, including the well-known issue of the application getting stuck in the “Deploying” state. Subsequently, using the practical example of a Jellyfin media server, the configuration of a proxy host will be demonstrated, along with advanced security options. The report will conclude with a summary of the benefits and suggest further steps to fully leverage the potential of this powerful duo.
Section 2: Tool Analysis: Nginx Proxy Manager and the TrueNAS Scale Application Ecosystem
Understanding the fundamental principles of how Nginx Proxy Manager works and the architecture in which it is deployed—the TrueNAS Scale application system—is crucial for successful installation, effective configuration, and, most importantly, efficient troubleshooting. These two components, though designed to work together, each have their own unique characteristics, the ignorance of which is the most common cause of failure.
At the core of NPM’s functionality lies the concept of a reverse proxy, which is fundamental to modern network architecture. Understanding how it works allows one to appreciate the value that NPM brings.
Definition and Functions of a Reverse Proxy
A reverse proxy is a server that acts as an intermediary on the server side. Unlike a traditional (forward) proxy, which acts on behalf of the client, a reverse proxy acts on behalf of the server (or a group of servers). It receives requests from clients on the internet and forwards them to the appropriate servers on the local network that actually store the content. To an external client, the reverse proxy is the only visible point of contact; the internal network structure remains hidden.
The key benefits of this solution are:
Security: Hiding the internal network topology and the actual IP addresses of application servers significantly hinders direct attacks on these services.
Centralised SSL/TLS Management (SSL Termination): Instead of configuring SSL certificates on each of a dozen application servers, you can manage them in one place—on the reverse proxy. Traffic encryption and decryption (SSL Termination) occurs at the proxy server, which offloads the backend servers.
Load Balancing: In more advanced scenarios, a reverse proxy can distribute traffic among multiple identical application servers, ensuring high availability and service scalability.
Simplified Access: It allows access to multiple services through standard ports 80 (HTTP) and 443 (HTTPS) using different subdomains, eliminating the need to remember and open multiple ports.
NPM as a Management Layer
It should be emphasised that Nginx Proxy Manager is not a new web server competing with Nginx. It is a management application, built on the open-source Nginx, which serves as a graphical user interface (GUI) for its reverse proxy functions. Instead of manually editing complex Nginx configuration files, the user can perform the same operations with a few clicks in an intuitive web interface.
The main features that have contributed to NPM’s popularity are:
Graphical User Interface: Based on the Tabler framework, the interface is clear and easy to use, which drastically lowers the entry barrier for users who are not Nginx experts.
SSL Automation: Built-in integration with Let’s Encrypt allows for the automatic, free generation of SSL certificates and their periodic renewal. This is one of the most important and appreciated features.
Docker-based Deployment: NPM is distributed as a ready-to-use Docker image, which makes its installation on any platform that supports containers extremely simple.
Access Management: The tool offers features for creating Access Control Lists (ACLs) and managing users with different permission levels, allowing for granular control over access to individual services.
Comparison: NPM vs. Traditional Nginx
The choice between Nginx Proxy Manager and manual Nginx configuration is a classic trade-off between simplicity and flexibility. The comparison below outlines the key differences between the two approaches.
Management Interface: Nginx Proxy Manager provides a graphical user interface (GUI) that simplifies configuration; traditional Nginx relies on the command line (CLI) and editing text files, which requires technical knowledge.
SSL Configuration: Nginx Proxy Manager fully automates the generation and renewal of Let’s Encrypt certificates; with traditional Nginx this is configured manually using tools like Certbot, which offers greater control.
Learning Curve: Low for Nginx Proxy Manager, making it ideal for beginners and hobbyists; steep for traditional Nginx, which requires an understanding of Nginx directives and web server architecture.
Flexibility: Nginx Proxy Manager is limited to the features available in the GUI, and advanced rules can be difficult to implement; traditional Nginx offers full flexibility and the ability to create highly customised, complex configurations.
Scalability / Target User: Nginx Proxy Manager is ideal for home labs and small to medium deployments (the hobbyist, small business owner, or home lab user); traditional Nginx is the better choice for large-scale, high-load corporate environments (the systems administrator, DevOps engineer, or developer).
This comparison clearly shows that NPM is a tool strategically tailored to the needs of its target audience: home lab enthusiasts. These users consciously sacrifice some advanced flexibility for the significant benefits of ease of use and speed of deployment.
Subsection 2.2: Application Architecture in TrueNAS Scale
To understand why installing NPM on TrueNAS Scale can encounter specific problems, it is necessary to know how this platform manages applications. It is not a typical Docker environment.
Foundations: Linux and Hyper-convergence
A key architectural change in TrueNAS Scale compared to its predecessor, TrueNAS CORE, was the switch from the FreeBSD operating system to Debian, a Linux distribution. This decision opened the door to native support for technologies that have dominated the cloud and containerisation world, primarily Docker containers and KVM-based virtualisation. As a result, TrueNAS Scale became a hyper-converged platform, combining storage, computing, and virtualisation functions.
The Application System
Applications are distributed through Catalogs, which function as repositories. These catalogs are further divided into so-called “trains,” which define the stability and source of the applications:
stable: The default train for official, iXsystems-tested applications.
enterprise: Applications verified for business use.
community: Applications created and maintained by the community. This is where Nginx Proxy Manager is located by default.
test: Applications in the development phase.
NPM’s inclusion in the community catalog means that while it is easily accessible, its technical support relies on the community, not directly on the manufacturer of TrueNAS.
Storage Management for Applications
Before any application can be installed, TrueNAS Scale requires the user to specify a ZFS pool that will be dedicated to storing application data. When an application is installed, its data (configuration, databases, etc.) must be saved somewhere persistently. TrueNAS Scale offers several options here, but the default and recommended for simplicity is ixVolume.
ixVolume is a special type of volume that automatically creates a dedicated, system-managed ZFS dataset within the selected application pool. This dataset is isolated, and the system assigns it very specific permissions. By default, the owner of this dataset becomes the system user apps with a user ID (UID) of 568 and a group ID (GID) of 568. The running application container also operates with the permissions of this very user.
This is the crux of the problem. The standard Docker image for Nginx Proxy Manager contains startup scripts (e.g., those from Certbot, the certificate handling tool) that, on first run, attempt to change the owner (chown) of data directories, such as /data or /etc/letsencrypt, to ensure they have the correct permissions. When the NPM container starts within the sandboxed TrueNAS application environment, its startup script, running as the unprivileged apps user (UID 568), tries to execute the chown operation on the ixVolume. This operation fails because the apps user is not the owner of the parent directories and does not have permission to change the owner of files on a volume managed by K3s. This permission error causes the container’s startup script to halt, and the container itself never reaches the “running” state, which manifests in the TrueNAS Scale interface as an endless “Deploying” status.
Section 3: Installing and Configuring Nginx Proxy Manager on TrueNAS Scale
The process of installing Nginx Proxy Manager on TrueNAS Scale is straightforward, provided that attention is paid to a few key configuration parameters that are often a source of problems. The following step-by-step instructions will guide you through this process, highlighting the critical decisions that need to be made.
Step 1: Preparing TrueNAS Scale
Before proceeding with the installation of any application, you must ensure that the application service in TrueNAS Scale is configured correctly.
Log in to the TrueNAS Scale web interface.
Navigate to the Apps section.
If the service is not yet configured, the system will prompt you to select a ZFS pool to be used for storing all application data. Select the appropriate pool and save the settings. After a moment, the service status should change to “Running”.
Step 2: Finding the Application
Nginx Proxy Manager is available in the official community catalog.
In the Apps section, go to the Discover tab.
In the search box, type nginx-proxy-manager.
The application should appear in the results. Ensure it comes from the community catalog.
Click the Install button to proceed to the configuration screen.
Step 3: Key Configuration Parameters
The installation screen presents many options. Most of them can be left with their default values, but a few sections require special attention.
Application Name
In the Application Name field, enter a name for the installation, for example, nginx-proxy-manager. This name will be used to identify the application in the system.
Network Configuration
This is the most important and most problematic stage of the configuration. By default, the TrueNAS Scale management interface uses the standard web ports: 80 for HTTP and 443 for HTTPS. Since Nginx Proxy Manager, to act as a gateway for all web traffic, should also listen on these ports, a direct conflict arises. There are two main strategies to solve this problem, each with its own set of trade-offs.
Strategy A (Recommended): Change TrueNAS Scale Ports This method is considered the “cleanest” from NPM’s perspective because it allows it to operate as it was designed.
Cancel the NPM installation and go to System Settings -> General. In the GUI SSL/TLS Certificate section, change the Web Interface HTTP Port to a custom one, e.g., 880, and the Web Interface HTTPS Port to, e.g., 8443.
Save the changes. From this point on, access to the TrueNAS Scale interface will be available at http://<truenas-ip-address>:880 or https://<truenas-ip-address>:8443.
Return to the NPM installation and in the Network Configuration section, assign the HTTP Port to 80 and the HTTPS Port to 443.
Advantages: NPM runs on standard ports, which simplifies configuration and eliminates the need for port translation on the router.
Disadvantages: It changes the fundamental way of accessing the NAS itself. In rare cases, as noted on forums, this can cause unforeseen side effects, such as problems with SSH connections between TrueNAS systems.
Strategy B (Alternative): Use High Ports for NPM This method is less invasive to the TrueNAS configuration itself but shifts the complexity to the router level.
In the NPM configuration, under the Network Configuration section, leave the TrueNAS ports unchanged and assign high, unused ports to NPM, e.g., 30080 for HTTP and 30443 for HTTPS. TrueNAS Scale reserves ports below 9000 for the system, so you should choose values above this threshold.
After installing NPM, configure port forwarding on your edge router so that incoming internet traffic on port 80 is directed to port 30080 of the TrueNAS IP address, and traffic from port 443 is directed to port 30443.
Advantages: The TrueNAS Scale configuration remains untouched.
Disadvantages: Requires additional configuration on the router. Each proxied service will require explicit forwarding, which can be confusing.
The ideal solution would be to assign a dedicated IP address on the local network to NPM (e.g., using macvlan technology), which would completely eliminate the port conflict. Unfortunately, the graphical interface of the application installer in TrueNAS Scale does not provide this option in a simple way.
Storage Configuration
To ensure that the NPM configuration, including created proxy hosts and SSL certificates, survives updates or application redeployments, you must configure persistent storage.
In the Storage Configuration section, configure two volumes.
For Nginx Proxy Manager Data Storage (path /data) and Nginx Proxy Manager Certs Storage (path /etc/letsencrypt), select the ixVolume type.
Leaving these settings will ensure that TrueNAS creates dedicated ZFS datasets for the configuration and certificates, which will be independent of the application container itself.
Step 4: First Run and Securing the Application
After configuring the above parameters (and possibly applying the fixes from Section 4), click Install. After a few moments, the application should transition to the “Running” state.
Access to the NPM interface is available at http://<truenas-ip-address>:PORT, where PORT is the WebUI port configured during installation (defaults to 81 inside the container but is mapped to a higher port, e.g., 30020, if the TrueNAS ports were not changed).
The default login credentials are:
Email: admin@example.com
Password: changeme
Upon first login, the system will immediately prompt you to change these details. This is an absolutely crucial security step and must be done immediately.
Section 4: Troubleshooting the “Deploying” Issue: Diagnosis and Repair of Installation Errors
One of the most frequently encountered and frustrating problems when deploying Nginx Proxy Manager on TrueNAS Scale is the situation where the application gets permanently stuck in the “Deploying” state after installation. The user waits, refreshes the page, but the status never changes to “Running”. Viewing the container logs often does not provide a clear answer. This problem is not a bug in NPM itself but, as diagnosed earlier, a symptom of a fundamental permission conflict between the generic container and the specific, secured environment in TrueNAS Scale.
Problem Description and Root Cause
After clicking the “Install” button in the application wizard, TrueNAS Scale begins the deployment process. In the background, the Docker image is downloaded, ixVolumes are created, and the container is started with the specified configuration. The startup script inside the NPM container attempts to perform maintenance operations, including changing the owner of key directories. Because the container is running as a user with limited permissions (apps, UID 568) on a file system it does not fully control, this operation fails. The script halts its execution, and the container never signals to the system that it is ready to work. Consequently, from the perspective of the TrueNAS interface, the application remains forever in the deployment phase.
Fortunately, thanks to the work of the community and developers, there are proven and effective solutions to this problem. Interestingly, the evolution of these solutions perfectly illustrates the dynamics of open-source software development.
Solution 1: Using an Environment Variable (Recommended Method)
This is the modern, precise, and most secure solution to the problem. It was introduced by the creators of the NPM container specifically in response to problems reported by users of platforms like TrueNAS Scale. Instead of escalating permissions, the container is instructed to skip the problematic step.
To implement this solution:
During the application installation (or while editing it if it has already been created and is stuck), navigate to the Application Configuration section.
Find the Nginx Proxy Manager Configuration subsection and click Add next to Additional Environment Variables.
Configure the new environment variable as follows:
Variable Name: SKIP_CERTBOT_OWNERSHIP
Variable Value: true
Save the configuration and install or update the application.
Adding this flag informs the Certbot startup script inside the container to skip the chown (change owner) step for its configuration files. The script proceeds, the container starts correctly and reports readiness, and the application transitions to the “Running” state. This is the recommended method for all newer versions of TrueNAS Scale (Electric Eel, Dragonfish, and later).
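On TrueNAS Scale releases that use Docker as the application backend (Electric Eel and later), you can double-check from the system shell that the variable actually reached the container; the container name below is a placeholder for whatever docker ps reports for your NPM app:
sudo docker ps --format '{{.Names}}' | grep -i nginx
sudo docker exec <npm-container-name> printenv SKIP_CERTBOT_OWNERSHIP   # should print: true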
Solution 2: Changing the User to Root (Historical Method)
This solution was the first one discovered by the community. It is a more “brute force” method that solves the problem by granting the container full permissions. Although effective, it is considered less elegant and potentially less secure from the perspective of the principle of least privilege.
To implement this solution:
During the installation or editing of the application, navigate to the User and Group Configuration section.
Change the value in the User ID field from the default 568 to 0.
Leave the Group ID unchanged or also set it to 0.
Save the configuration and deploy the application.
Setting the User ID to 0 causes the process inside the container to run with root user permissions. The root user has unlimited permissions, so the problematic chown operation executes flawlessly, and the container starts correctly. This method was particularly necessary in older versions of TrueNAS Scale (e.g., Dragonfish) and is documented as a working workaround. Although it still works, the environment variable method is preferred as it does not require escalating permissions for the entire container.
Verification
Regardless of the chosen method, after saving the changes and redeploying the application, you should observe its status in the Apps -> Installed tab. After a short while, the status should change from “Deploying” to “Running”, which means the problem has been successfully resolved and Nginx Proxy Manager is ready for configuration.
Section 5: Practical Application: Securing a Jellyfin Media Server
Theory and correct installation are just the beginning. The true power of Nginx Proxy Manager is revealed in practice when we start using it to manage access to our services. Jellyfin, a popular, free media server, is an excellent example to demonstrate this process, as its full functionality depends on one, often overlooked, setting in the proxy configuration. The following guide assumes that Jellyfin is already installed and running on the local network, accessible at IP_ADDRESS:PORT (e.g., 192.168.1.10:8096).
Step 1: DNS Configuration
Before NPM can direct traffic, the outside world needs to know where to send it.
Log in to your domain’s management panel (e.g., at your domain registrar or DNS provider like Cloudflare).
Create a new A record.
In the Name (or Host) field, enter the subdomain that will be used to access Jellyfin (e.g., jellyfin).
In the Value (or Points to) field, enter the public IP address of your home network (your router).
Step 2: Obtaining an SSL Certificate in NPM
Securing the connection with HTTPS is crucial. NPM makes this process trivial, especially when using the DNS Challenge method, which is more secure as it does not require opening any ports on your router.
In the NPM interface, go to SSL Certificates and click Add SSL Certificate, then select Let’s Encrypt.
In the Domain Names field, enter your subdomain, e.g., jellyfin.yourdomain.com. You can also generate a wildcard certificate at this stage (e.g., *.yourdomain.com), which will match all subdomains.
Enable the Use a DNS Challenge option.
From the DNS Provider list, select your DNS provider (e.g., Cloudflare).
In the Credentials File Content field, paste the API token obtained from your DNS provider. For Cloudflare, you need to generate a token with permissions to edit the DNS zone (Zone: DNS: Edit).
Accept the Let’s Encrypt terms of service and save the form. After a moment, NPM will use the API to temporarily add a TXT record in your DNS, which proves to Let’s Encrypt that you own the domain. The certificate will be generated and saved.
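NPM drives Let’s Encrypt DNS challenges through Certbot’s DNS plugins, so for Cloudflare the Credentials File Content field typically expects a single line in the format sketched below; the token itself is a placeholder you generate in the Cloudflare dashboard:
# Cloudflare API token with Zone:DNS:Edit permission
dns_cloudflare_api_token = <your-cloudflare-api-token>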
Step 3: Creating a Proxy Host
This is the heart of the configuration, where we link the domain, the certificate, and the internal service.
In NPM, go to Hosts -> Proxy Hosts and click Add Proxy Host.
A form with several tabs will open.
“Details” Tab
Domain Names: Enter the full domain name that was configured in DNS, e.g., jellyfin.yourdomain.com.
Scheme: Select http, as the communication between NPM and Jellyfin on the local network is typically not encrypted.
Forward Hostname / IP: Enter the local IP address of the server where Jellyfin is running, e.g., 192.168.1.10.
Forward Port: Enter the port on which Jellyfin is listening, e.g., 8096.
Websocket Support: This is an absolutely critical setting. You must tick this option. Jellyfin makes extensive use of WebSocket technology for real-time communication, for example, to update playback status on the dashboard or for the Syncplay feature to work. Without WebSocket support enabled, the Jellyfin main page will load correctly, but many key features will not work, leading to difficult-to-diagnose problems.
“SSL” Tab
SSL Certificate: From the drop-down list, select the certificate generated in the previous step for the Jellyfin domain.
Force SSL: Enable this option to automatically redirect all HTTP connections to secure HTTPS.
HTTP/2 Support: Enabling this option can improve page loading performance.
After configuring both tabs, save the proxy host.
Step 4: Testing
After saving the configuration, Nginx will reload its settings in the background. It should now be possible to open a browser and enter the address https://jellyfin.yourdomain.com. You should see the Jellyfin login page, and the connection should be secured with an SSL certificate (a padlock icon will be visible in the address bar).
The default configuration is fully functional, but to enhance security, you can add extra HTTP headers that instruct the browser on how to behave. To do this, edit the created proxy host and go to the Advanced tab. In the Custom Nginx Configuration field, you can paste additional directives.
It’s worth noting that NPM has a quirk: add_header directives added directly in this field may not be applied. A safer approach is to create a Custom Location for the path / and paste the headers in its configuration field.
The following table presents recommended security headers.
| Header | Purpose | Recommended Value | Notes |
| --- | --- | --- | --- |
| Strict-Transport-Security | Forces the browser to communicate exclusively over HTTPS for a specified period. | add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; | Only enable this once HTTPS works reliably for the domain and its subdomains; browsers will refuse plain HTTP for the entire max-age period. |
| X-XSS-Protection | A historical header intended to protect against Cross-Site Scripting (XSS) attacks. | add_header X-XSS-Protection "0" always; | The header is obsolete and can create new attack vectors. Modern browsers have better, built-in mechanisms. It is recommended to explicitly disable it (0). |
Applying these headers provides an additional layer of defence and is considered good practice in securing web applications. However, it is critical to use up-to-date recommendations, as in the case of X-XSS-Protection, where blindly copying it from older guides could weaken security.
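As a practical example, assuming the Custom Location approach described above for the path /, the configuration field could contain the two directives from the table:
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-XSS-Protection "0" always;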
Section 6: Conclusions and Next Steps
Combining Nginx Proxy Manager with the TrueNAS Scale platform creates an incredibly powerful and flexible environment for managing a home lab. As demonstrated in this report, this synergy allows for centralised access management, a drastic simplification of the deployment and maintenance of SSL/TLS security, and a professionalisation of the way users interact with their self-hosted services. The key to success, however, is not just blindly following instructions, but above all, understanding the fundamental principles of how both technologies work. The awareness that applications in TrueNAS Scale operate within a restrictive ecosystem is essential for effectively diagnosing and resolving specific problems, such as the “Deploying” stall error.
Summary of Strategic Benefits
Deploying NPM on TrueNAS Scale brings tangible benefits:
Centralisation and Simplicity: All incoming requests are managed from a single, intuitive panel, eliminating the chaos of multiple IP addresses and ports.
Enhanced Security: Automation of SSL certificates, hiding the internal network topology, and the ability to implement advanced security headers create a solid first line of defence.
Professional Appearance and Convenience: Using easy-to-remember, personalised subdomains (e.g., media.mydomain.co.uk) instead of technical IP addresses significantly improves the user experience.
Recommendations and Next Steps
After successfully deploying Nginx Proxy Manager and securing your first application, it is worth exploring its further capabilities to fully utilise the tool’s potential.
Explore Access Lists: NPM allows for the creation of Access Control Lists (ACLs), which can restrict access to specific proxy hosts based on the source IP address. This is an extremely useful feature for securing administrative panels. For example, you can create a rule that allows access to the TrueNAS Scale interface or the NPM panel itself only from IP addresses on the local network, blocking any access attempts from the outside.
Backup Strategy: The Nginx Proxy Manager configuration, stored in the ixVolume, is a critical asset. Its loss would mean having to reconfigure all proxy hosts and certificates. TrueNAS Scale offers built-in tools for automating backups. You should configure a Periodic Snapshot Task for the dataset containing the NPM application data (ix-applications/releases/nginx-proxy-manager) to regularly create snapshots of its state.
Securing Other Applications: The knowledge gained during the Jellyfin configuration is universal. It can now be applied to secure virtually any other web service running in your home lab, such as Home Assistant, a file server, a personal password manager (e.g., Vaultwarden, which is a Bitwarden implementation), or the AdGuard Home ad-blocking system. Remember to enable the Websocket Support option for any application that requires real-time communication.
Monitoring and Diagnostics: The NPM interface provides access logs and error logs for each proxy host. Regularly reviewing these logs can help in diagnosing access problems, identifying unauthorised connection attempts, and optimising the configuration.
Mastering Nginx Proxy Manager on TrueNAS Scale is an investment that pays for itself many times over in the form of increased security, convenience, and control over your digital ecosystem. It is another step on the journey from a simple user to a conscious architect of your own home infrastructure.
A Permanent IP Blacklist with Fail2ban, UFW, and Ipset
Introduction: Beyond Temporary Protection
In the digital world, where server attacks are a daily occurrence, merely reacting is not enough. Although tools like Fail2ban provide a basic line of defence, their temporary blocks leave a loophole—persistent attackers can return and try again after the ban expires. This article provides a detailed guide to building a fully automated, two-layer system that turns ephemeral bans into permanent, global blocks. The combination of Fail2ban, UFW, and the powerful Ipset tool creates a mechanism that permanently protects your server from known repeat offenders.
Layer One: Reaction with Fail2ban
At the start of every attack is Fail2ban. This daemon monitors log files (e.g., sshd.log, apache.log) for patterns indicating break-in attempts, such as multiple failed login attempts. When it detects such activity, it immediately blocks the attacker’s IP address by adding it to the firewall rules for a defined period (e.g., 10 minutes, 30 days). This is an effective but short-term response.
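As an illustration only, a minimal jail.local entry for SSH might look like the sketch below; the values are assumptions to adapt to your own policy:
[sshd]
enabled = true
# number of failed attempts before a ban is triggered
maxretry = 5
# window in which those attempts must occur
findtime = 10m
# how long the temporary ban lasts
bantime = 30d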
Layer Two: Persistence with UFW and Ipset
For a ban to become permanent, we need a more robust, centralised method of managing IP addresses. This is where UFW and Ipset come in.
What is Ipset?
Ipset is a Linux kernel extension that allows you to manage sets of IP addresses, networks, or ports. It is a much more efficient solution than adding thousands of individual rules to a firewall. Instead, the firewall can refer to an entire set with a single rule.
Ipset Installation and Configuration
The first step is to install Ipset on your system. We use standard package managers for this.
sudo apt update
sudo apt install ipset
Next, we create two sets: blacklist for IPv4 addresses and blacklist_v6 for IPv6.
The hashsize and maxelem parameters control the initial size of the hash and the maximum number of entries, which is important for performance.
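The creation commands are sketched below; the set names match those used throughout this guide, while the hashsize and maxelem values are example figures you can adjust:
# IPv4 blacklist: stores single addresses
sudo ipset create blacklist hash:ip hashsize 4096 maxelem 65536
# IPv6 blacklist: the hash:net type with the inet6 family also accepts single IPv6 addresses
sudo ipset create blacklist_v6 hash:net family inet6 hashsize 4096 maxelem 65536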
Integrating Ipset with the UFW Firewall
For UFW to start using our sets, we must add the appropriate commands to its rules. We edit the UFW configuration files, adding rules that block traffic originating from addresses contained in our Ipset sets. For IPv4, we edit /etc/ufw/before.rules:
sudo nano /etc/ufw/before.rules
Immediately after *filter and :ufw-before-input [0:0], add:
# Rules for the permanent blacklist (ipset)
# Block any incoming traffic from IP addresses in the 'blacklist' set (IPv4)
-A ufw-before-input -m set --match-set blacklist src -j DROP
For IPv6, we edit /etc/ufw/before6.rules:
sudo nano /etc/ufw/before6.rules
Immediately after *filter and :ufw6-before-input [0:0], add:
# Rules for the permanent blacklist (ipset) IPv6
# Block any incoming traffic from IP addresses in the 'blacklist_v6' set
-A ufw6-before-input -m set --match-set blacklist_v6 src -j DROP
After adding the rules, we reload UFW for them to take effect:
sudo ufw reload
Script for Automatic Blacklist Updates
The core of the system is a script that acts as a bridge between Fail2ban and Ipset. Its job is to collect banned addresses, ensure they are unique, and synchronise them with the Ipset sets.
Create the script file:
sudo nano /usr/local/bin/update-blacklist.sh
Below is the content of the script. It works in several steps:
Creates a temporary, unique list of IP addresses from Fail2ban logs and the existing blacklist.
Creates temporary Ipset sets.
Reads addresses from the unique list and adds them to the appropriate temporary sets (distinguishing between IPv4 and IPv6).
Atomically swaps the old Ipset sets with the new, temporary ones, minimising the risk of protection gaps.
Destroys the old, temporary sets.
Returns a summary of the number of blocked addresses.
#!/bin/bash

# Set and file names as used throughout this guide
IPSET_NAME_V4="blacklist"
IPSET_NAME_V6="blacklist_v6"
BLACKLIST_FILE="/etc/fail2ban/blacklist.local"
touch "$BLACKLIST_FILE"

# Create a unique list of banned IPs from the log and the existing blacklist file
(grep 'Ban' /var/log/fail2ban.log | awk '{print $(NF)}' && cat "$BLACKLIST_FILE") | sort -u > "$BLACKLIST_FILE.tmp"
mv "$BLACKLIST_FILE.tmp" "$BLACKLIST_FILE"

# Create temporary sets (--exist avoids errors if they were left over from a previous run)
sudo ipset create "${IPSET_NAME_V4}_tmp" hash:ip --exist
sudo ipset create "${IPSET_NAME_V6}_tmp" hash:net family inet6 --exist

# Add IPs to the temporary sets (IPv6 addresses contain a colon)
while IFS= read -r ip; do
  if [[ "$ip" == *":"* ]]; then
    sudo ipset add "${IPSET_NAME_V6}_tmp" "$ip"
  else
    sudo ipset add "${IPSET_NAME_V4}_tmp" "$ip"
  fi
done < "$BLACKLIST_FILE"

# Atomically swap the temporary sets with the active ones
sudo ipset swap "${IPSET_NAME_V4}_tmp" "$IPSET_NAME_V4"
sudo ipset swap "${IPSET_NAME_V6}_tmp" "$IPSET_NAME_V6"

# Destroy the temporary sets (after the swap they hold the old entries)
sudo ipset destroy "${IPSET_NAME_V4}_tmp"
sudo ipset destroy "${IPSET_NAME_V6}_tmp"

# Report how many unique addresses are currently blocked
echo "Blacklist updated: $(wc -l < "$BLACKLIST_FILE") unique addresses blocked."
After creating the script, give it execute permissions:
sudo chmod +x /usr/local/bin/update-blacklist.sh
Automation and Persistence After a Reboot
To run the script without intervention, we use a cron schedule. Open the crontab editor for the root user and add a rule to run the script every hour:
sudo crontab -e
Add this line:
0 * * * * /usr/local/bin/update-blacklist.sh
Or to run it once a day at 6 a.m.:
0 6 * * * /usr/local/bin/update-blacklist.sh
The final, crucial step is to ensure the Ipset sets survive a reboot, as they are stored in RAM by default. We create a systemd service that will save their state before the server shuts down and load it again on startup.
sudo nano /etc/systemd/system/ipset-persistent.service

[Unit]
Description=Saves and restores ipset sets on boot/shutdown
Before=network-pre.target
ConditionFileNotEmpty=/etc/ipset.rules

[Service]
Type=oneshot
RemainAfterExit=yes
# Load the saved sets at startup
ExecStart=/sbin/ipset restore -f /etc/ipset.rules
# Save the current state of all sets before shutdown or reboot
ExecStop=/sbin/ipset save -f /etc/ipset.rules

[Install]
WantedBy=multi-user.target

After saving the file, reload systemd and enable the service so it runs at every boot:

sudo systemctl daemon-reload
sudo systemctl enable ipset-persistent.service
The entire system is an automated chain of events that works in the background to protect your server from attacks. Here is the flow of information and actions:
Attack Response (Fail2ban):
Someone tries to break into the server (e.g., by repeatedly entering the wrong password via SSH).
Fail2ban, which monitors the system logs, detects this pattern and records the ban in its own log file (/var/log/fail2ban.log).
It immediately adds the attacker’s IP address to a temporary firewall rule, blocking their access for a specified time.
Permanent Banning (Script and Cron):
Every hour (as set in cron), the system runs the update-blacklist.sh script.
The script reads the Fail2ban logs, finds all addresses that have been banned (lines containing “Ban”), and then compares them with the existing local blacklist (/etc/fail2ban/blacklist.local).
It creates a unique list of all banned addresses.
It then creates temporary ipset sets (blacklist_tmp and blacklist_v6_tmp) and adds all addresses from the unique list to them.
It performs an ipset swap operation, which atomically replaces the old, active sets with the new, updated ones.
UFW, thanks to the previously defined rules, immediately starts blocking the new addresses that have appeared in the updated ipset sets.
Persistence After Reboot (systemd Service):
Ipset’s operation is volatile—the sets only exist in memory. The ipset-persistent.service solves this problem.
Before shutdown/reboot: systemd runs the ExecStop=/sbin/ipset save -f /etc/ipset.rules command. This saves the current state of all ipset sets to a file on the disk.
After power-on/reboot: systemd runs the ExecStart command, which restores the sets. It reads all blocked addresses from the /etc/ipset.rules file and automatically recreates the ipset sets in memory.
Thanks to this, even if the server is rebooted, the IP blacklist remains intact, and protection is active from the first moments after the system starts.
Summary and Verification
The system you have built is a fully automated, multi-layered protection mechanism. Attackers are temporarily banned by Fail2ban, and their addresses are automatically added to a permanent blacklist, which is instantly blocked by UFW and Ipset. The systemd service ensures that the blacklist survives server reboots, protecting against repeat offenders permanently. To verify its operation, you can use the following commands:
sudo ufw status verbose
sudo ipset list blacklist
sudo ipset list blacklist_v6
sudo systemctl status ipset-persistent.service
How to Create a Reliable IP Whitelist in UFW and Ipset
Introduction: Why a Whitelist is Crucial
When configuring advanced firewall rules, especially those that automatically block IP addresses (like in systems with Fail2ban), there is a risk of accidentally blocking yourself or key services. A whitelist is a mechanism that acts like a VIP pass for your firewall—IP addresses on this list will always have access, regardless of other, more restrictive blocking rules.
This guide will show you, step-by-step, how to create a robust and persistent whitelist using UFW (Uncomplicated Firewall) and ipset. As an example, we will use the placeholder address 203.0.113.44 (from the documentation range), which we want to add as trusted; substitute your own address here.
Step 1: Create a Dedicated Ipset Set for the Whitelist
The first step is to create a separate “container” for our trusted IP addresses. Using ipset is much more efficient than adding many individual rules to iptables.
Open a terminal and enter the following command:
sudo ipset create whitelist hash:ip
What did we do?
ipset create: The command to create a new set.
whitelist: The name of our set. It’s short and unambiguous.
hash:ip: The type of set. hash:ip is optimised for storing and very quickly looking up single IPv4 addresses.
Step 2: Add a Trusted IP Address
Now that we have the container ready, let’s add our example trusted IP address to it.
sudo ipset add whitelist 203.0.113.44
You can repeat this command for every address you want to add to the whitelist. To check the contents of the list, use the command:
sudo ipset list whitelist
Step 3: Modify the Firewall – Giving Priority to the Whitelist
This is the most important step. We need to modify the UFW rules so that connections from addresses on the whitelist are accepted immediately, before the firewall starts processing any blocking rules (including those from the ipset blacklist or Fail2ban).
Open the before.rules configuration file. This is the file where rules processed before the main UFW rules are located.
sudo nano /etc/ufw/before.rules
Go to the beginning of the file and find the *filter section. Just below the :ufw-before-input [0:0] line, add our new snippet. Placing it at the very top ensures it will be processed first.
*filter
:ufw-before-input [0:0]
# Rule for the whitelist (ipset) ALWAYS HAS PRIORITY
# Accept any traffic from IP addresses in the 'whitelist' set
-A ufw-before-input -m set --match-set whitelist src -j ACCEPT
-A ufw-before-input: We add the rule to the ufw-before-input chain.
-m set --match-set whitelist src: Condition: if the source (src) IP address matches the whitelist set…
-j ACCEPT: Action: “immediately accept (ACCEPT) the packet and stop processing further rules for this packet.”
Save the file and reload UFW:
sudo ufw reload
From this point on, any connection from the address 203.0.113.44 will be accepted immediately.
Step 4: Ensuring Whitelist Persistence
Ipset sets are stored in memory and disappear after a server reboot. To make our whitelist persistent, we need to ensure it is automatically loaded every time the system starts. We will use our previously created ipset-persistent.service for this.
Update the systemd service to “teach” it about the existence of the new whitelist set.
Find the ExecStart line and add the create command for whitelist. If you already have other sets, simply add whitelist to the line. An example of an updated line:
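Assuming the blacklist sets created earlier in this guide, the updated line might look like the sketch below; the restore is placed first here so it cannot conflict with sets that already exist, and each create uses --exist so it is harmless if the set is already defined in the rules file. After editing the unit file, run sudo systemctl daemon-reload so systemd picks up the change.
ExecStart=/bin/bash -c "/sbin/ipset restore -f /etc/ipset.rules; /sbin/ipset create whitelist hash:ip --exist; /sbin/ipset create blacklist hash:ip --exist; /sbin/ipset create blacklist_v6 hash:net family inet6 --exist"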
Save the current state of all sets to the file. This command will overwrite the old /etc/ipset.rules file with a new version that includes information about your whitelist.
sudo ipset save > /etc/ipset.rules
Restart the service to ensure it is running with the new configuration:
sudo systemctl restart ipset-persistent.service
Summary
Congratulations! You have created a solid and reliable whitelist mechanism. With it, you can securely manage your server, confident that trusted IP addresses like 203.0.113.44 will never be accidentally blocked. Remember to only add fully trusted addresses to this list, such as your home or office IP address.
How to Effectively Block IP Addresses and Subnets on a Linux Server
Blocking single IP addresses is easy, but what if attackers use multiple addresses from the same network? Manually banning each one is inefficient and time-consuming.
In this article, you will learn how to use ipset and iptables to effectively block entire subnets, automating the process and saving valuable time.
Why is Blocking Entire Subnets Better?
Many attacks, especially brute-force types, are carried out from multiple IP addresses belonging to the same operator or from the same pool of addresses (subnet). Blocking just one of them is like patching a small hole in a large dam—the rest of the traffic can still get through.
Instead, you can block an entire subnet, for example, 45.148.10.0/24. This notation means you are blocking 256 addresses at once, which is much more effective.
Script for Automatic Subnet Blocking
To automate the process, you can use the following bash script. This script is interactive—it asks you to provide the subnet to block, then adds it to an ipset list and saves it to a file, making the block persistent.
Let’s analyse the script step-by-step:
#!/bin/bash
# The name of the ipset list to which subnets will be added
BLACKLIST_NAME="blacklist_nets"
# The file where blocked subnets will be appended
BLACKLIST_FILE="/etc/fail2ban/blacklist_net.local"

# 1. Create the blacklist file if it doesn't exist
touch "$BLACKLIST_FILE"

# 2. Check if the ipset list already exists. If not, create it.
# Using "hash:net" allows for storing subnets, which is key.
if ! sudo ipset list $BLACKLIST_NAME >/dev/null 2>&1; then
  sudo ipset create $BLACKLIST_NAME hash:net maxelem 65536
fi

# 3. Loop to prompt the user for subnets to block.
# The loop ends when the user types "exit".
while true; do
  read -p "Enter the subnet address to block (e.g., 192.168.1.0/24) or type 'exit': " subnet
  if [ "$subnet" == "exit" ]; then
    break
  elif [[ "$subnet" =~ ^([0-9]{1,3}\.){3}[0-9]{1,3}\/[0-9]{1,2}$ ]]; then
    # Check if the subnet is not already in the file to avoid duplicates
    if ! grep -q "^$subnet$" "$BLACKLIST_FILE"; then
      echo "$subnet" | sudo tee -a "$BLACKLIST_FILE" > /dev/null
      # Add the subnet to the ipset list
      sudo ipset add $BLACKLIST_NAME $subnet
      echo "Subnet $subnet added."
    else
      echo "Subnet $subnet is already on the list."
    fi
  else
    # The entered format is incorrect
    echo "Error: Invalid format. Please provide the address in 'X.X.X.X/Y' format."
  fi
done

# 4. Add a rule in iptables that blocks all traffic from addresses on the ipset list.
# This ensures the rule is added only once.
if ! sudo iptables -C INPUT -m set --match-set $BLACKLIST_NAME src -j DROP >/dev/null 2>&1; then
  sudo iptables -I INPUT -m set --match-set $BLACKLIST_NAME src -j DROP
fi

# 5. Save the iptables rules to survive a reboot.
# This part checks which tool the system uses.
if command -v netfilter-persistent &> /dev/null; then
  sudo netfilter-persistent save
elif command -v service &> /dev/null && service iptables status >/dev/null 2>&1; then
  sudo service iptables save
fi

echo "Script finished. The '$BLACKLIST_NAME' list has been updated, and the iptables rules are active."
How to Use the Script
Save the script: Save the code above into a file, e.g., block_nets.sh.
Give permissions: Make sure the file has execute permissions: chmod +x block_nets.sh.
Run the script: Execute the script with root privileges: sudo ./block_nets.sh.
Provide subnets: The script will prompt you to enter subnet addresses. Simply type them in the X.X.X.X/Y format and press Enter. When you are finished, type exit.
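After the script finishes, you can check that the subnets landed in the set and that the blocking rule is active; a quick verification sketch, assuming the default list name from the script:
# Show the contents of the blacklist_nets set
sudo ipset list blacklist_nets
# Confirm that the DROP rule referencing the set is present in the INPUT chain
sudo iptables -L INPUT -n -v | grep blacklist_nets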
Ensuring Persistence After a Server Reboot
Ipset sets are stored in RAM by default and disappear after a server restart. For the blocked addresses to remain active, you must use a systemd service that will load them at system startup.
If you already have such a service (e.g., ipset-persistent.service), you must update it to include the new blacklist_nets list.
Edit the service file: Open your service’s configuration file. sudo nano /etc/systemd/system/ipset-persistent.service
Update the ExecStart line: Find the ExecStart line and add the create command for the blacklist_nets set. An example updated ExecStart line should look like this (including previous sets):
ExecStart=/bin/bash -c "/sbin/ipset create whitelist hash:ip --exist; /sbin/ipset create blacklist hash:ip --exist; /sbin/ipset create blacklist_v6 hash:net family inet6 --exist; /sbin/ipset create blacklist_nets hash:net --exist; /sbin/ipset restore -f /etc/ipset.rules"
Reload the systemd configuration: sudo systemctl daemon-reload
Save the current state of all sets to the file: This command will overwrite the old /etc/ipset.rules file with a new version that contains information about all your lists, including blacklist_nets. sudo ipset save > /etc/ipset.rules
Restart the service: sudo systemctl restart ipset-persistent.service
With this method, you can simply and efficiently manage your server’s security, effectively blocking entire subnets that show suspicious activity, and be sure that these rules will remain active after every reboot.
Faced with the rising costs of commercial solutions and escalating cyber threats, the free security platform Wazuh is gaining popularity as a powerful alternative. However, the decision to self-host it on one’s own servers represents a fundamental trade-off: organisations gain unprecedented control over their data and system, but in return, they must contend with significant technical complexity, hidden operational costs, and full responsibility for their own security. This report analyses for whom this path is a strategic advantage and for whom it may prove to be a costly trap.
Introduction – The Democratisation of Cybersecurity in an Era of Growing Threats
The contemporary digital landscape is characterised by a paradox: while threats are becoming increasingly advanced and widespread, the costs of professional defence tools remain an insurmountable barrier for many organisations. Industry reports paint a grim picture, pointing to a sharp rise in ransomware attacks, which are evolving from data encryption to outright blackmail, and the ever-wider use of artificial intelligence by cybercriminals to automate and scale attacks. In this challenging environment, solutions like Wazuh are emerging as a response to the growing demand for accessible, yet powerful, tools to protect IT infrastructure.
Wazuh is defined as a free, open-source security platform that unifies the capabilities of two key technologies: XDR (Extended Detection and Response) and SIEM (Security Information and Event Management). Its primary goal is to protect digital assets regardless of where they operate—from traditional on-premises servers in a local data centre, through virtual environments, to dynamic containers and distributed resources in the public cloud.
The rise in Wazuh’s popularity is directly linked to the business model of dominant players in the SIEM market, such as Splunk. Their pricing, often based on the volume of data processed, can generate astronomical costs for growing companies, making advanced security a luxury. Wazuh, being free, eliminates this licensing barrier, which makes it particularly attractive to small and medium-sized enterprises (SMEs), public institutions, non-profit organisations, and all entities with limited budgets but who cannot afford to compromise on security.
The emergence of such a powerful, free tool signals a fundamental shift in the cybersecurity market. One could speak of a democratisation of advanced defence mechanisms. Traditionally, SIEM/XDR-class platforms were the domain of large corporations with dedicated Security Operations Centres (SOCs) and substantial budgets. Meanwhile, cybercriminals do not limit their activities to the largest targets; SMEs are equally, and sometimes even more, vulnerable to attacks. Wazuh fills this critical gap, giving smaller organisations access to functionalities that were, until recently, beyond their financial reach. This represents a paradigm shift, where access to robust digital defence is no longer solely dependent on purchasing power but begins to depend on technical competence and the strategic decision to invest in a team.
To fully understand Wazuh’s unique position, it is worth comparing it with key players in the market.
Table 1: Positioning Wazuh Against the Competition
| Criterion | Wazuh | Splunk | Elastic Security |
| --- | --- | --- | --- |
| Cost Model | Open-source software, free. Paid options include technical support and a managed cloud service (SaaS). | Commercial. Licensing is based mainly on the daily volume of data processed, which can lead to high costs at a large scale. | “Open core” model. Basic functions are free, advanced ones (e.g., machine learning) are available in paid subscriptions. Prices are based on resources, not data volume. |
| Main Functionalities | Integrated XDR and SIEM. Strong emphasis on endpoint security (FIM, vulnerability detection, configuration assessment) and log analysis. | A leader in log analysis and SIEM. An extremely powerful query language (SPL) and broad analytical capabilities. Considered the standard in large SOCs. | An integrated security platform (SIEM + endpoint protection) built on the powerful Elasticsearch search engine. High flexibility and scalability. |
| Deployment Options | Self-hosting (On-Premises / Private Cloud) or the official Wazuh Cloud service (SaaS). | Self-hosting (On-Premises) or the Splunk Cloud service (SaaS). | Self-hosting (On-Premises) or the Elastic Cloud service (SaaS). |
| Target Audience | SMEs, organisations with technical expertise, entities with strict data sovereignty requirements, security enthusiasts. | Large enterprises, mature Security Operations Centres (SOCs), organisations with large security budgets and a need for advanced analytics. | Organisations seeking a flexible, scalable platform, often with an existing Elastic ecosystem. Development and DevOps teams. |
This comparison clearly shows that Wazuh is not a simple clone of commercial solutions. Its strength lies in the specific niche it occupies: it offers enterprise-class functionalities without licensing costs, in exchange requiring greater technical involvement from the user and the assumption of full responsibility for implementation and maintenance.
Anatomy of a Defender – How Does the Wazuh Architecture Work?
Understanding the technical foundations of Wazuh is crucial for assessing the real complexity and potential challenges associated with its self-hosted deployment. At first glance, the architecture is elegant and logical; however, its scalability, one of its greatest advantages, simultaneously becomes its greatest operational challenge in a self-hosted model.
The Agent-Server Model: The Eyes and Ears of the System
At the core of the Wazuh architecture is a model based on an agent-server relationship. A lightweight, multi-platform Wazuh agent is installed on every monitored system—be it a Linux server, a Windows workstation, a Mac computer, or even cloud instances. The agent runs in the background, consuming minimal system resources, and its task is to continuously collect telemetry data. It gathers system and application logs, monitors the integrity of critical files, scans for vulnerabilities, inventories installed software and running processes, and detects intrusion attempts. All this data is then securely transmitted in near real-time to the central component—the Wazuh server.
Central Components: The Brain of the Operation
A Wazuh deployment, even in its simplest form, consists of three key central components that together form a complete analytical system.
Wazuh Server: This is the heart of the entire system. It receives data sent by all registered agents. Its main task is to process this stream of information. The server uses advanced decoders to normalise and structure logs from various sources and then passes them through a powerful analytical engine. This engine, based on a predefined and configurable set of rules, correlates events and identifies suspicious activities, security policy violations, or Indicators of Compromise (IoCs). When an event or series of events matches a rule with a sufficiently high priority, the server generates a security alert.
Wazuh Indexer: This is a specialised and highly scalable database, designed for the rapid indexing, storage, and searching of vast amounts of data. Technologically, the Wazuh Indexer is a fork of the OpenSearch project, which in turn was created from the Elasticsearch source code. All events collected by the server (both those that generated an alert and those that did not) and the alerts themselves are sent to the indexer. This allows security analysts to search through terabytes of historical data in seconds for traces of an attack, which is fundamental for threat hunting and forensic analysis processes.
Wazuh Dashboard: This is the user interface for the entire platform, implemented as a web application. Like the indexer, it is based on the OpenSearch Dashboards project (formerly known as Kibana). The dashboard allows for the visualisation of data in the form of charts, tables, and maps, browsing and analysing alerts, managing agent and server configurations, and generating compliance reports. It is here that analysts spend most of their time, monitoring the security posture of the entire organisation.
Security and Scalability of the Architecture
A key aspect to emphasise is the security of the platform itself. Communication between the agent and the server occurs by default over port 1514/TCP and is protected by AES encryption (with a 256-bit key). Each agent must be registered and authenticated before the server will accept data from it. This ensures the confidentiality and integrity of the transmitted logs, preventing them from being intercepted or modified in transit.
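For orientation, this connection is defined on the agent side in its ossec.conf file; a minimal sketch, with a placeholder manager address, looks like this:
<ossec_config>
  <client>
    <server>
      <!-- Placeholder address of the Wazuh server -->
      <address>192.168.1.50</address>
      <port>1514</port>
      <protocol>tcp</protocol>
    </server>
  </client>
</ossec_config>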
The Wazuh architecture was designed with scalability in mind. For small deployments, such as home labs or Proof of Concept tests, all three central components can be installed on a single, sufficiently powerful machine using a simplified installation script. However, in production environments monitoring hundreds or thousands of endpoints, such an approach quickly becomes inadequate. The official documentation and user experiences unequivocally indicate that to ensure performance and High Availability, it is necessary to implement a distributed architecture. This means separating the Wazuh server, indexer, and dashboard onto separate hosts. Furthermore, to handle the enormous volume of data and ensure resilience to failures, both the server and indexer components can be configured as multi-node clusters.
It is at this point that the fundamental challenge of self-hosting becomes apparent. While an “all-in-one” installation is relatively simple, designing, implementing, and maintaining a distributed, multi-node Wazuh cluster is an extremely complex task. It requires deep knowledge of Linux systems administration, networking, and, above all, OpenSearch cluster management. The administrator must take care of aspects such as the correct replication and allocation of shards (index fragments), load balancing between nodes, configuring disaster recovery mechanisms, regularly creating backups, and planning updates for the entire technology stack. The decision to deploy Wazuh on a large scale in a self-hosted model is therefore not a one-time installation act. It is a commitment to the continuous management of a complex, distributed system, whose cost and complexity grow non-linearly with the scale of operations.
The Strategic Decision – Full Control on Your Own Server versus the Convenience of the Cloud
The choice of Wazuh deployment model—self-hosting on one’s own infrastructure (on-premises) versus using a ready-made cloud service (SaaS)—is one of the most important strategic decisions facing any organisation considering this platform. This is not merely a technical choice, but a fundamental decision concerning resource allocation, risk acceptance, and business priorities. An analysis of both approaches reveals a profound trade-off between absolute control and operational convenience.
The Case for Self-Hosting: The Fortress of Data Sovereignty
Organisations that decide to self-deploy and maintain Wazuh on their own servers are primarily driven by the desire for maximum control and independence. In this model, it is they, not an external provider, who define every aspect of the system’s operation—from hardware configuration, through data storage and retention policies, to the finest details of analytical rules. The open-source nature of Wazuh gives them an additional, powerful advantage: the ability to modify and adapt the platform to unique, often non-standard needs, which is impossible with closed, commercial solutions.
However, the main driving force for many companies, especially in Europe, is the concept of data sovereignty. This is not just a buzzword, but a hard legal and strategic requirement. Data sovereignty means that digital data is subject to the laws and jurisdiction of the country in which it is physically stored and processed. In the context of stringent regulations such as Europe’s GDPR, the American HIPAA for medical data, or the PCI DSS standard for the payment card industry, keeping sensitive logs and security incident data within one’s own, controlled data centre is often the simplest and most secure way to ensure compliance.
This choice also has a geopolitical dimension. Edward Snowden’s revelations about the PRISM programme run by the US NSA made the world aware that data stored in the clouds of American tech giants could be subject to access requests from US government agencies under laws such as the CLOUD Act. For many European companies, public institutions, or entities in the defence industry, the risk that their operational data and security logs could be made available to a foreign government is unacceptable. Self-hosting Wazuh in a local data centre, within the European Union, completely eliminates this risk, ensuring full digital sovereignty.
The Reality of Self-Hosting: Hidden Costs and Responsibility
The promise of free software is tempting, but the reality of a self-hosted deployment quickly puts the concept of “free” to the test. An analysis of the Total Cost of Ownership (TCO) reveals a series of hidden expenses that go far beyond the zero cost of the licence.
Capital Expenditure (CapEx): At the outset, the organisation must make significant investments in physical infrastructure. This includes purchasing powerful servers (with large amounts of RAM and fast processors), disk arrays capable of storing terabytes of logs, and networking components. Costs associated with providing appropriate server room conditions, such as uninterruptible power supplies (UPS), air conditioning, and physical access control systems, must also be considered.
Operational Expenditure (OpEx): This is where the largest, often underestimated, expenses lie. Firstly, the ongoing electricity and cooling bills. Secondly, and most importantly, personnel costs. Wazuh is not a “set it and forget it” system. As numerous users report, it requires constant attention, tuning, and maintenance. The default configuration can generate tens of thousands of alerts per day, leading to “alert fatigue” and rendering the system useless. To prevent this, a qualified security analyst or engineer is needed to constantly fine-tune rules and decoders, eliminate false positives, and develop the platform. For larger, distributed deployments, maintaining system stability can become a full-time job. One experienced user bluntly stated, “I’m losing my mind having to fix Wazuh every single day.” According to an analysis cited by GitHub, the total cost of a self-hosted solution can be up to 5.25 times higher than its cloud equivalent.
Moreover, in the self-hosted model, the entire responsibility for security rests on the organisation’s shoulders. This includes not only protection against external attacks but also regular backups, testing disaster recovery procedures, and bearing the full consequences (financial and reputational) in the event of a successful breach and data leak.
The Cloud Alternative: Convenience as a Service (SaaS)
For organisations that want to leverage the power of Wazuh but are not ready to take on the challenges of self-hosting, there is an official alternative: Wazuh Cloud. This is a Software as a Service (SaaS) model, where the provider (the company Wazuh) takes on the entire burden of managing the server infrastructure, and the client pays a monthly or annual subscription for a ready-to-use service.
The advantages of this approach are clear:
Lower Barrier to Entry and Predictable Costs: The subscription model eliminates the need for large initial hardware investments (CapEx) and converts them into a predictable, monthly operational cost (OpEx), which is often lower in the short and medium term.
Reduced Operational Burden: Issues such as server maintenance, patch installation, software updates, scaling resources in response to growing load, and ensuring high availability are entirely the provider’s responsibility. This frees up the internal IT team to focus on strategic tasks rather than “firefighting.”
Access to Expert Knowledge: Cloud clients benefit from the knowledge and experience of Wazuh engineers who manage hundreds of deployments daily. This guarantees optimal configuration and platform stability.
Of course, convenience comes at a price. The main disadvantage is a partial loss of control over the system and data. The organisation must trust the security policies and procedures of the provider. Most importantly, depending on the location of the Wazuh Cloud data centres, the same data sovereignty issues that the self-hosted model avoids may arise.
Ultimately, the choice between self-hosting and the cloud is not an assessment of which option is “better” in an absolute sense. It is a strategic allocation of risk and resources. The self-hosted model is a conscious acceptance of operational risk (failures, configuration errors, staff shortages) in exchange for minimising the risk associated with data sovereignty and third-party control. In contrast, the cloud model is a transfer of operational risk to the provider in exchange for accepting the risk associated with entrusting data and potential legal-geopolitical implications. For a financial sector company in the EU, the risk of a GDPR breach may be much higher than the risk of a server failure, which strongly inclines them towards self-hosting. For a dynamic tech start-up without regulated data, the cost of hiring a dedicated specialist and the operational risk may be unacceptable, making the cloud the obvious choice.
Table 2: Decision Analysis: Self-Hosting vs. Wazuh Cloud
| Criterion | Self-Hosting (On-Premises) | Wazuh Cloud (SaaS) |
| --- | --- | --- |
| Total Cost of Ownership (TCO) | High initial cost (hardware, CapEx). Significant, often unpredictable operational costs (personnel, energy, OpEx). Potentially lower in the long term at a large scale and with constant utilisation. | Low initial cost (no CapEx). Predictable, recurring subscription fees (OpEx). Usually more cost-effective in the short and medium term. Potentially higher in the long run. |
| Control and Customisation | Absolute control over hardware, software, data, and configuration. Ability to modify source code and deeply integrate with existing systems. | Limited control. Configuration within the options provided by the supplier. No ability to modify source code or access the underlying infrastructure. |
| Security and Responsibility | Full responsibility for physical and digital security, backups, disaster recovery, and regulatory compliance rests with the organisation. | Shared responsibility. The provider is responsible for the security of the cloud infrastructure. The organisation is responsible for configuring security policies and managing access. |
| Deployment and Maintenance | Complex and time-consuming deployment, especially in a distributed architecture. Requires continuous maintenance, monitoring, updating, and tuning by qualified personnel. | Quick and simple deployment (service activation). Maintenance, updates, and ensuring availability are entirely the provider’s responsibility, minimising the burden on the internal IT team. |
| Scalability | Scalability is possible but requires careful planning, purchase of additional hardware, and manual reconfiguration of the cluster. It can be a slow and costly process. | High flexibility and scalability. Resources (computing power, disk space) can be dynamically increased or decreased depending on needs, often with a few clicks. |
| Data Sovereignty | Full data sovereignty. The organisation has 100% control over the physical location of its data, which facilitates compliance with local legal and regulatory requirements (e.g., GDPR). | Dependent on the location of the provider’s data centres. May pose challenges related to GDPR compliance if data is stored outside the EU. Potential risk of access on demand by foreign governments. |
Voices from the Battlefield – A Balanced Analysis of Expert and User Opinions
A theoretical analysis of a platform’s capabilities and architecture is one thing, but its true value is verified in the daily work of security analysts and system administrators. The voices of users from around the world, from small businesses to large enterprises, paint a nuanced picture of Wazuh—a tool that is incredibly powerful, but also demanding. An analysis of opinions gathered from industry portals such as Gartner, G2, Reddit, and specialist forums allows us to identify both its greatest advantages and its most serious challenges.
The Praise – What Works Brilliantly?
Several key strengths that attract organisations to Wazuh are repeatedly mentioned in reviews and case studies.
Cost as a Game-Changer: For many users, the fundamental advantage is the lack of licensing fees. One information security manager stated succinctly: “It costs me nothing.” This financial accessibility is seen as crucial, especially for smaller entities. Wazuh is often described as a “great, out-of-the-box SOC solution for small to medium businesses” that could not otherwise afford this type of technology.
Powerful, Built-in Functionalities: Users regularly praise specific modules that deliver immediate value. File Integrity Monitoring (FIM) and Vulnerability Detection are at the forefront. One reviewer described them as the “biggest advantages” of the platform. FIM is key to detecting unauthorised changes to critical system files, which can indicate a successful attack, while the vulnerability module automatically scans systems for known, unpatched software. The platform’s ability to support compliance with regulations such as HIPAA or PCI DSS is also a frequently highlighted asset, allowing organisations to verify their security posture with a few clicks.
Flexibility and Customisation: The open nature of Wazuh is seen as a huge advantage by technical teams. The ability to customise rules, write their own decoders, and integrate with other tools gives a sense of complete control. “I personally love the flexibility of Wazuh, as a system administrator I can think of any use case and I know I’ll be able to leverage Wazuh to pull the logs and create the alerts I need,” wrote Joanne Scott, a lead administrator at one of the companies using the platform.
The Criticism – Where Do the Challenges Lie?
Equally numerous and consistent are the voices pointing to significant difficulties and challenges that must be considered before deciding on deployment.
Complexity and a Steep Learning Curve: This is the most frequently raised issue. Even experienced security specialists admit that the platform is not intuitive. One expert described it as having a “steep learning curve for newcomers.” Another user noted that “the initial installation and configuration can be a bit complicated, especially for users without much experience in SIEM systems.” This confirms that Wazuh requires dedicated time for learning and experimentation.
The Need for Tuning and “Alert Fatigue”: This is probably the biggest operational challenge. Users agree that the default, “out-of-the-box” configuration of Wazuh generates a huge amount of noise—low-priority alerts that flood analysts and make it impossible to detect real threats. One team reported receiving “25,000 to 50,000 low-level alerts per day” from just two monitored endpoints. Without an intensive and, importantly, continuous process of tuning rules, disabling irrelevant alerts, and creating custom ones tailored to the specific environment, the system is practically useless. One of the more blunt comments on a Reddit forum stated that “out of the box it’s kind of shitty.”
Performance and Stability at Scale: While Wazuh performs well in small and medium-sized environments, deployments involving hundreds or thousands of agents can encounter serious stability problems. In one dramatic post on a Google Groups forum, an administrator managing 175 agents described daily problems with agents disconnecting and server services hanging, forcing him to restart the entire infrastructure daily. This shows that scaling Wazuh requires not only more powerful hardware but also deep knowledge of optimising its components.
Documentation and Support for Different Systems: Although Wazuh has extensive online documentation, some users find it insufficient for more complex problems. There are also complaints that the predefined decoders (pieces of code responsible for parsing logs) work great for Windows systems but are often outdated or incomplete for other platforms, including popular network devices. This forces administrators to search for unofficial, community-created solutions on platforms like GitHub, which introduces an additional element of risk and uncertainty.
An analysis of these starkly different opinions leads to a key conclusion. Wazuh should not be seen as a ready-to-use product that can simply be “switched on.” It is rather a powerful security framework—a set of advanced tools and capabilities from which a qualified team must build an effective defence system. Its final value depends 90% on the quality of the implementation, configuration, and competence of the team, and only 10% on the software itself. The users who succeed are those who talk about “configuring,” “customising,” and “integrating.” Those who encounter problems are often those who expected a ready-made solution and were overwhelmed by the default configuration. The story of one expert who, during a simulated attack on a default Wazuh installation, “didn’t catch a single thing” is the best proof of this. An investment in a self-hosted Wazuh is really an investment in the people who will manage it.
Consequences of the Choice – Risk and Reward in the Open-Source Ecosystem
The decision to base critical security infrastructure on a self-hosted, open-source solution like Wazuh goes beyond a simple technical assessment of the tool itself. It is a strategic immersion into the broader ecosystem of Open Source Software (OSS), which brings with it both enormous benefits and serious, often underestimated, risks.
The Ubiquity and Hidden Risks of Open-Source Software
Open-source software has become the foundation of the modern digital economy. According to the 2025 “Open Source Security and Risk Analysis” (OSSRA) report, as many as 97% of commercial applications contain OSS components. They form the backbone of almost every system, from operating systems to libraries used in web applications. However, this ubiquity has its dark side. The same report reveals alarming statistics:
86% of the applications studied contained at least one vulnerability in the open-source components they used.
91% of applications contained components that were outdated and had newer, more secure versions available.
81% of applications contained high or critical risk vulnerabilities, many of which already had publicly available patches.
One of the biggest challenges is the problem of transitive dependencies. This means that a library a developer consciously adds to a project itself depends on dozens of other libraries, which in turn depend on others. This creates a complex and difficult-to-trace chain of dependencies, meaning organisations often have no idea exactly which components are running in their systems and what risks they carry. This is the heart of the software supply chain security problem.
By choosing to self-host Wazuh, an organisation takes on full responsibility for managing not only the platform itself but its entire technology stack. This includes the operating system it runs on, the web server, and, above all, key components like the Wazuh Indexer (OpenSearch) and its numerous dependencies. This means it is necessary to track security bulletins for all these elements and react immediately to newly discovered vulnerabilities.
The Advantages of the Open-Source Model: Transparency and the Power of Community
In opposition to these risks, however, stand fundamental advantages that make the open-source model so attractive, especially in the field of security.
Transparency and Trust: In the case of commercial, closed-source solutions (“black boxes”), the user must fully trust the manufacturer’s declarations regarding security. In the open-source model, the source code is publicly available. This provides the opportunity to conduct an independent security audit and verify that the software does not contain hidden backdoors or serious flaws. This transparency builds fundamental trust, which is invaluable in the context of systems designed to protect a company’s most valuable assets.
The Power of Community: Wazuh boasts one of the largest and most active communities in the open-source security world. Users have numerous support channels at their disposal, such as the official Slack, GitHub forums, a dedicated subreddit, and Google Groups. It is there, in the heat of real-world problems, that custom decoders, innovative rules, and solutions to problems not found in the official documentation are created. This collective wisdom is an invaluable resource, especially for teams facing unusual challenges.
Avoiding Vendor Lock-in: By choosing a commercial solution, an organisation becomes dependent on a single vendor—their product development strategy, pricing policy, and software lifecycle. If the vendor decides to raise prices, end support for a product, or go bankrupt, the client is left with a serious problem. Open source provides freedom. An organisation can use the software indefinitely, modify and develop it, and even use the services of another company specialising in support for that solution if they are not satisfied with the official support.
This duality of the open-source nature leads to a deeper conclusion. The decision to self-host Wazuh fundamentally changes the organisation’s role in the security ecosystem. It ceases to be merely a passive consumer of a ready-made security product and becomes an active manager of software supply chain risk. When a company buys a commercial SIEM, it pays the vendor to take responsibility for managing the risk associated with the components from which its product is built. It is the vendor who must patch vulnerabilities in libraries, update dependencies, and guarantee the security of the entire stack. By choosing the free, self-hosted Wazuh, the organisation consciously (or not) takes on all this responsibility itself. To do this in a mature way, it is no longer enough to just know how to configure rules in Wazuh. It becomes necessary to implement advanced software management practices, such as Software Composition Analysis (SCA) to identify all components and their vulnerabilities, and to maintain an up-to-date “Software Bill of Materials” (SBOM) for the entire infrastructure. This significantly raises the bar for competency requirements and shows that the decision to self-host has deep, structural consequences for the entire IT and security department.
The Verdict – Who is Self-Hosted Wazuh For?
The analysis of the Wazuh platform in a self-hosted model leads to an unequivocal conclusion: it is a solution with enormous potential, but burdened with equally great responsibility. The key trade-off that runs through every aspect of this technology can be summarised as follows: self-hosted Wazuh offers unparalleled control, absolute data sovereignty, and zero licensing costs, but in return requires significant, often underestimated, investments in hardware and, above all, in highly qualified personnel capable of managing a complex and demanding system that requires constant attention.
This is not a solution for everyone. Attempting to implement it without the appropriate resources and awareness of its nature is a straight path to frustration, a false sense of security, and ultimately, project failure.
Profile of the Ideal Candidate
Self-hosted Wazuh is the optimal, and often the only right, choice for organisations that meet most of the following criteria:
They have a mature and competent technical team: They have an internal security and IT team (or the budget to hire/train one) that is not afraid of working with the command line, writing scripts, analysing logs at a low level, and managing a complex Linux infrastructure.
They have strict data sovereignty requirements: They operate in highly regulated industries (financial, medical, insurance), in public administration, or in the defence sector, where laws (e.g., GDPR) or internal policies categorically require that sensitive data never leaves physically controlled infrastructure.
They operate at a large scale where licensing costs become a barrier: They are large enough that the licensing costs of commercial SIEM systems, which increase with data volume, become prohibitive. In such a case, investing in a dedicated team to manage a free solution becomes economically justified over a period of several years.
They understand they are implementing a framework, not a finished product: They accept the fact that Wazuh is a set of powerful building blocks, not a ready-made house. They are prepared for a long-term, iterative process of tuning, customising, and improving the system to fully match the specifics of their environment and risk profile.
They have a need for deep customisation: Their security requirements are so unique that standard, commercial solutions cannot meet them, and the ability to modify the source code and create custom integrations is a key value.
Questions for Self-Assessment
For all other organisations, especially smaller ones with limited human resources and without strict sovereignty requirements, a much safer and more cost-effective solution will likely be to use the Wazuh Cloud service or another commercial SIEM/XDR solution.
Before making the final, momentous decision, every technical leader and business manager should ask themselves and their team a series of honest questions:
Have we realistically assessed the Total Cost of Ownership (TCO)? Does our budget account not only for servers but also for the full-time equivalents of specialists who will manage this platform 24/7, including their salaries, training, and the time needed to learn?
Do we have the necessary expertise in our team? Do we have people capable of advanced rule tuning, managing a distributed cluster, diagnosing performance issues, and responding to failures in the middle of the night? If not, are we prepared to invest in their recruitment and development?
What is our biggest risk? Are we more concerned about operational risk (system failure, human error, inadequate monitoring) or regulatory and geopolitical risk (breach of data sovereignty, third-party access)? How does the answer to this question influence our decision?
Are we ready for full responsibility? Do we understand that by choosing self-hosting, we are taking responsibility not only for the configuration of Wazuh but for the security of the entire software supply chain on which it is based, including the regular patching of all its components?
Only an honest answer to these questions will allow you to avoid a costly mistake and make a choice that will genuinely strengthen your organisation’s cybersecurity, rather than creating an illusion of it.
Integrating Logs from Docker Applications with Wazuh SIEM
In modern IT environments, containerisation using Docker has become the standard. It enables the rapid deployment and scaling of applications but also introduces new challenges in security monitoring. By default, logs generated by applications running in containers are isolated from the host system, which complicates their analysis by SIEM systems like Wazuh.
In this post, we will show you how to break down this barrier. We will guide you step-by-step through the configuration process that will allow the Wazuh agent to read, analyse, and generate alerts from the logs of any application running in a Docker container. We will use the password manager Vaultwarden as a practical example.
The Challenge: Why is Accessing Docker Logs Difficult?
Docker containers have their own isolated file systems. Applications inside them most often send their logs to “standard output” (stdout/stderr), which is captured by Docker’s logging mechanism. The Wazuh agent, running on the host system, does not have default access to this stream or to the container’s internal files.
To enable monitoring, we must make the application logs visible to the Wazuh agent. The best and cleanest way to do this is to configure the container to write its logs to a file and then share that file externally using a Docker volume.
Step 1: Exposing Application Logs Outside the Container
Our goal is to make the application’s log file appear in the host server’s file system. We will achieve this by modifying the docker-compose.yml file.
Configure the application to log to a file: Many Docker images allow you to define the path to a log file using an environment variable. In the case of Vaultwarden, this is LOG_FILE.
Map a volume: Create a mapping between a directory on the host server and a directory inside the container where the logs are saved.
Here is an example of what a fragment of the docker-compose.yml file for Vaultwarden with the correct logging configuration might look like:
version: "3"

services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    volumes:
      # Volume for application data (database, attachments, etc.)
      - ./data:/data
    ports:
      - "8080:80"
    environment:
      # This variable instructs the application to write logs to a file inside the container
      - LOG_FILE=/data/vaultwarden.log
What happened here?
LOG_FILE=/data/vaultwarden.log: We are telling the application to create a vaultwarden.log file in the /data directory inside the container.
./data:/data: We are mapping the /data directory from the container to a data subdirectory in the location where the docker-compose.yml file is located (on the host).
After saving the changes and restarting the container (docker-compose down && docker-compose up -d), the log file will be available on the server at a path like /opt/vaultwarden/data/vaultwarden.log.
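Before involving Wazuh at all, it is worth confirming that the log actually lands on the host. Assuming the example path above, you can simply follow the file while you generate some activity (for instance a failed login) in Vaultwarden:

tail -f /opt/vaultwarden/data/vaultwarden.log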
Step 2: Configuring the Wazuh Agent to Monitor the File
Now that the logs are accessible on the host, we need to instruct the Wazuh agent to read them.
Open the agent’s configuration file:
sudo nano /var/ossec/etc/ossec.conf
Add the following block within the <ossec_config> section:
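A minimal localfile entry for this scenario, assuming the host path from Step 1, might look like this:

<!-- Monitor the Vaultwarden log file exposed via the Docker volume -->
<localfile>
  <log_format>syslog</log_format>
  <location>/opt/vaultwarden/data/vaultwarden.log</location>
</localfile>

After saving the file, restart the agent (sudo systemctl restart wazuh-agent) so that it picks up the new configuration.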
From now on, every new line in the vaultwarden.log file will be sent to the Wazuh manager.
Step 3: Translating Logs into the Language of Wazuh (Decoders)
The Wazuh manager is now receiving raw log lines, but it doesn’t know how to interpret them. We need to create decoders that will “teach” it to extract key information, such as the attacker’s IP address or the username.
On the Wazuh manager server, edit the local decoders file:
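On a default Wazuh installation this is /var/ossec/etc/decoders/local_decoder.xml, so for example:

sudo nano /var/ossec/etc/decoders/local_decoder.xml

Add the following decoders to it: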
<!-- Decoder for Vaultwarden logs -->
<decoder name="vaultwarden">
  <prematch>vaultwarden::api::identity</prematch>
</decoder>

<!-- Decoder for failed login attempts in Vaultwarden -->
<decoder name="vaultwarden-failed-login">
  <parent>vaultwarden</parent>
  <prematch>Username or password is incorrect. Try again. IP: </prematch>
  <regex>IP: (\S+)\. Username: (\S+)\.$</regex>
  <order>srcip, user</order>
</decoder>
Step 4: Creating Rules and Generating Alerts
Once Wazuh can understand the logs, we can create rules that will generate alerts.
On the manager server, edit the local rules file:
sudo nano /var/ossec/etc/rules/local_rules.xml
Add the following rule group:
<group name="vaultwarden,">
  <rule id="100105" level="5">
    <decoded_as>vaultwarden</decoded_as>
    <description>Vaultwarden: Failed login attempt for user $(user) from IP address: $(srcip).</description>
    <group>authentication_failed,</group>
  </rule>
</group>
Note: Ensure that the rule id is unique and does not appear anywhere else in the local_rules.xml file. Change it if necessary.
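Step 5 below also mentions a critical, level 10 alert after 6 failed attempts within 120 seconds. That behaviour comes from a second, correlating rule placed inside the same <group> block; here is a minimal sketch, assuming the ID 100106 is also free in your local_rules.xml:

  <rule id="100106" level="10" frequency="6" timeframe="120">
    <if_matched_sid>100105</if_matched_sid>
    <same_source_ip />
    <description>Vaultwarden: Possible brute-force attack from IP address: $(srcip).</description>
    <group>authentication_failures,</group>
  </rule>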
Step 5: Restart and Verification
Finally, restart the Wazuh manager to load the new decoders and rules:
sudo systemctl restart wazuh-manager
To test the configuration, make several failed login attempts to your Vaultwarden application. After a short while, you should see level 5 alerts in the Wazuh dashboard for each attempt, and after exceeding the threshold (6 attempts in 120 seconds), a critical level 10 alert indicating a brute-force attack.
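If you prefer the command line to the dashboard, you can also watch the alerts arrive on the manager itself while testing, since they are written to the manager's alerts log:

sudo tail -f /var/ossec/logs/alerts/alerts.log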
Summary
Integrating logs from applications running in Docker containers with the Wazuh system is a key element in building a comprehensive security monitoring system. The scheme presented above—exposing logs to the host via a volume and then analysing them with custom decoders and rules—is a universal approach that you can apply to virtually any application, not just Vaultwarden. This gives you full visibility of events across your entire infrastructure, regardless of the technology it runs on.
Imagine a music service that not only stores your entire collection in lossless quality but also displays synchronised lyrics in real time, turning every listening session into a karaoke singalong. What’s more, it’s completely yours – free from adverts, subscriptions, and algorithmic tracking. This isn’t a futuristic vision but a completely achievable reality thanks to the power of open-source software. We can confirm: the karaoke feature works perfectly in both the Jellyfin web interface and the Finamp app on an iPhone. This comprehensive guide will walk you through every stage of building such a system – from installation to creating your own lyric files.
In an era dominated by streaming giants, a movement of “self-hosting” enthusiasts is growing – people who independently manage their own data and services. Instead of entrusting their digital identity to corporations, they build their own private clouds, media servers, and much more. This article is the essence of that philosophy, showing you how to reclaim control over your music and enrich it with features you’d be hard-pressed to find with the competition.
The Architecture of Your Private Spotify: Key Components
Before we delve into the configuration, we need to understand the foundations on which we’ll build our music centre. Success depends on the harmonious collaboration of three key elements.
Jellyfin (Server Version 10.9+): This is the brain of the entire operation. Jellyfin is a free media server that catalogues and serves your music files. Version 10.9 was revolutionary, introducing a standardised, server-managed approach to handling song lyrics. This means all the “heavy lifting” involved in sourcing and processing lyrics happens on the server, and the client applications simply consume the ready-to-use data.
TrueNAS SCALE: This is a reliable and powerful operating system for your home server (NAS). Built on Linux, it offers official support for running applications like Jellyfin in isolated containers, which guarantees stability, security, and order in the system.
Finamp and Jellyfin App (Mobile Clients): These are your windows to the world of music. Finamp, especially in its redesigned beta version, is a favourite among iPhone users as it eliminates the problem of music stopping when the screen goes dark and handles displaying lyrics perfectly. Equally important, the latest versions of the official Jellyfin app also flawlessly support the synchronised lyrics feature.
The Magic of Synchronised Lyrics: The Anatomy of an .lrc File
Confirming that the karaoke feature works is exciting. To make full use of it, you need to understand where this “magic” comes from. The effect of highlighting text in perfect synchronisation with the music depends on the format of the downloaded file. The heart of this mechanism is a simple text file with an .lrc extension.
Synchronised Lyrics (.lrc/.elrc): This is the Holy Grail for karaoke fans. These files contain not only the song’s words but also precise time markers for each line.
Unsynchronised Lyrics (.txt): This is a simpler form, containing plain text. In this case, the application will simply scroll through it smoothly as the song plays, without highlighting individual verses.
The structure of an .lrc file is incredibly simple. Each line of text is preceded by a time marker in the format [minutes:seconds.hundredths of a second].
Example of an .lrc file structure:
[ar: Song Artist]
[ti: Song Title]
[al: Album]
[00:15.50]The first line of text appears after 15.5 seconds.
[00:19.25]The second line of text comes in after 19.25 seconds.
Become a Lyric Creator: How to Create Your Own .lrc File
What if the LrcLib plugin can’t find the lyrics for your favourite niche song? Don’t worry! You can very easily create one yourself.
Get the lyrics: Find the song’s words online and copy them.
Open a text editor: Use any simple editor, like Notepad (Windows) or TextEdit (macOS).
Synchronise with the music: Play the song and pause it at the beginning of each line to note the exact time (minutes and seconds); a small helper script for this is sketched at the end of this section.
Format the file: Before each line of text, add the noted time in square brackets, e.g., [01:23.45]. The more accurate the hundredths of a second, the smoother the effect.
Save the file: This is the most important step. Save the file in the same folder as the audio file, giving it the exact same name but changing the extension to .lrc.
If the music file is: My Super Song.flac
The lyric file must be called: My Super Song.lrc
After saving the file, all you need to do is re-scan your library in Jellyfin, and the server will automatically detect and link your manually created lyrics with the song.
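Noting the times by hand works, but it gets tedious for a long song. If you are comfortable with a terminal, here is a small, purely optional helper sketch in Python (the script and file names are just examples, not part of Jellyfin): it reads the plain lyrics from a text file, lets you press Enter at the start of each line while the song plays in any player, and writes the timestamped .lrc file.

# lrc_helper.py: minimal .lrc timestamping helper (an illustrative sketch)
# Usage: python3 lrc_helper.py lyrics.txt "My Super Song.lrc"
import sys
import time

def make_lrc(lyrics_path: str, output_path: str) -> None:
    # Read the plain lyrics, skipping empty lines
    with open(lyrics_path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]

    input("Press Enter exactly when the song starts playing...")
    start = time.monotonic()

    stamped = []
    for line in lines:
        input(f"Press Enter when this line begins: {line}")
        elapsed = time.monotonic() - start
        minutes, seconds = divmod(elapsed, 60)
        # Produces markers like [00:15.50], matching the format shown above
        stamped.append(f"[{int(minutes):02d}:{seconds:05.2f}]{line}")

    with open(output_path, "w", encoding="utf-8") as f:
        f.write("\n".join(stamped) + "\n")

if __name__ == "__main__":
    make_lrc(sys.argv[1], sys.argv[2])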
Server Configuration – A Step-by-Step Guide
1. Installing the LrcLib Plugin
In the Jellyfin dashboard, go to Plugins > Catalogue.
Search for and install the official “LrcLib” plugin from the Jellyfin repository. Avoid the outdated jellyfin-lyrics-plugin by Felitendo, which is no longer being developed and may cause errors.
Be sure to restart the Jellyfin server for the changes to take effect.
2. The Most Important Library Configuration
This is a crucial, though unintuitive, step. By default, Jellyfin hides downloaded lyrics in a metadata folder, creating a “black box”. We’ll change this to have full control.
Go to Dashboard > Libraries…
Find your music library, click the menu (three dots), and select Manage library.
In the library settings, tick the option “Save lyrics to media folders”.
3. Starting the Process
Go to Dashboard > Scheduled tasks.
Find the task “Download missing lyrics” and run it manually. Set a schedule (e.g., daily) so that newly added music is processed automatically.
Once the task is finished, run a new scan of the music library.
When Something Goes Wrong – Advanced Troubleshooting
Even with an ideal configuration, you might encounter problems. Here’s how to deal with them.
First Rule of Diagnosis: Always check if the lyrics are visible in the Jellyfin web interface in a browser before you start looking for problems in the mobile app. If they’re not on the server, the problem isn’t on the client side.
Troubleshooting Scanning and Metadata Refreshing: Sometimes, due to the specifics of the system’s operation, the Jellyfin user interface or database doesn’t refresh immediately after the first scan. This manifests as the lyrics still not being visible despite having completed all the steps. The solution is to run a second scan, this time selecting the more detailed “Search for missing metadata” option for the given library. This extra step often forces the system to re-analyse the folders and register the new .lrc files.
The Ultimate Weapon – Manual Cache Cleaning: The most persistent problem is Jellyfin’s aggressive metadata caching. The system creates an internal copy of the lyrics and often refuses to update it, even if the source .lrc file is changed. A simple refresh from the interface can be unreliable. The only 100% effective method is to manually delete the cached file from the server’s file system.
Locate the Jellyfin configuration path on your TrueNAS server (e.g., /mnt/pool/ix-applications/jellyfin/config).
Launch the Shell in the TrueNAS interface.
Navigate to the lyrics cache directory using the cd command and your path: cd /mnt/pool/ix-applications/jellyfin/config/metadata/lyrics.
Find and delete the problematic file using the find command. Replace "Song Title.lrc" with the actual file name: find . -type f -name "Song Title.lrc" -print -delete. The -print flag will display the file before it's deleted.
In the Jellyfin interface, for the given song, select Refresh metadata with the Search for missing metadata mode. Jellyfin, forced into action, will download and process the lyrics anew.
Long-Term Maintenance and Next Steps
Tagging Hygiene: The effectiveness of downloading lyrics depends on the quality of your music’s metadata. Use tools like MusicBrainz Picard to ensure your files have accurate and consistent tags.
Backups: Regularly back up your entire Jellyfin configuration folder (e.g., /mnt/pool/ix-applications/jellyfin/config) to protect your settings, metadata, and user data in case of failure.
External Access: Once you’ve mastered local streaming, the natural next step is to configure secure access from outside your home using Nginx Proxy Manager.
Congratulations! You’ve just built a fully functional, private, and significantly more powerful equivalent of commercial streaming services, with a sensational karaoke feature that will liven up any party or solo evening with headphones.
Introduction: Your Passwords Are Weak – Change That and Protect Yourself from Criminals
Have you ever been treated by a website like a rookie in a basic training camp? “Your password is weak. Very weak.” The system claims you can’t use it because it’s under 20 characters and doesn’t contain a lowercase letter, an uppercase letter, 5 special characters and 3 digits. And on top of that, it can’t be a dictionary word. Or, even worse, you’ve fallen into a loop of absurdity: you type in a password you are absolutely convinced is correct. The system says it’s not. You request a reset. You get a code you have to enter in 16 seconds, 3 of which have already passed. You type a new password. “You cannot use a password that is your current password.” The one the system just rejected. It’s a digital comedy of errors that no one finds funny.
This daily struggle with authentication systems drives us to the brink of despair. It gets to the point where, like in a certain anecdote, the only way to meet security requirements is to change the name of your cat to “CapitalK97Yslash&7”. This is funny until we realise that our digital lives are based on similarly outlandish and impossible-to-remember constructions. The problem is that human memory is fallible. Even seemingly simple passwords, like “ODORF”, which a father once set as the admin password, can slip your mind at the least opportune moment, leading to blocked access to the family computer.
In the face of these difficulties, many of us take shortcuts. We use the same, easy-to-remember passwords across dozens of services. We create simple patterns, like the name of a building with zeros instead of the letter “O”, which in one doctor’s office protected patient data and was known by 18 people. Such practices are an open invitation to cybercriminals. The problem, however, doesn’t lie solely in our laziness. It’s the systems with terrible user interfaces and frustrating requirements that actively discourage us from caring about security. Since current methods fail, there must be a better way. A way that is both secure, convenient, and doesn’t require memorising 64-character passwords.
Digital Safe on Steroids: Why a Password Manager is Your New Best Mate
Before we dive into the world of self-hosting, it’s crucial to understand why a dedicated password manager is a fundamental tool for anyone who navigates the internet. It’s a solution that fundamentally changes the user’s relationship with digital security – from an antagonistic fight to a symbiotic partnership. Instead of being a problem, passwords become something that works in the background, without our effort.
One Ring to Rule Them All (One Master Password)
The basic concept of a password manager is brilliant in its simplicity: you only need to remember one, very strong master password (or, even better, a long phrase). This password acts as the key to an encrypted safe (called a “vault”), which stores all your other credentials. No more memorising dozens of logins.
An Unbreakable Generator
The greatest weakness of human passwords is their predictability. Password managers eliminate this problem with a built-in random password generator. Want a 100-character password made of a random string of letters, numbers and special characters? With a single click, it can create a long, complicated, and completely random password, such as X@Ln@x9J@&u@5n##BhfRe5^67gFdr. The difference in security between “Kitty123!” and such a random string of characters is astronomical – it’s like comparing a plywood door to the vault door of a bank.
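If you like the command line, the official Bitwarden CLI (bw), which also talks to a self-hosted Vaultwarden server, exposes the same generator. A quick, illustrative example using its documented flags:

# Generate a 100-character password with uppercase, lowercase, numbers and special characters
bw generate --uppercase --lowercase --number --special --length 100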
Convenience and Productivity (Autofill)
Security that makes life difficult is rarely used. That’s why password managers focus on convenience. Their most important function is autofilling login forms in browsers and applications. When you visit a bank’s website, the manager automatically detects the login fields and offers to fill them with your saved data. This not only saves time but also eliminates the risk of typos. These minutes saved each day add up, genuinely increasing productivity.
Device Syncing
Your digital world isn’t limited to one device. A password manager ensures you have access to your vault from anywhere – on your laptop at work, on your tablet at home, and on your smartphone while travelling. All your data is synchronised, so a password saved on one device is immediately available on the others.
Protection Against Phishing and Attacks
Password managers offer a subtle but powerful protection against phishing. The autofill function is tied to the specific URL of a website. If a cybercriminal sends you a link to a fake bank website that looks identical to the real one, the password manager won’t offer to autofill, because the URL will be different. This is an immediate warning sign. It also protects against “credential stuffing” attacks, where hackers test passwords stolen from one service on dozens of others. With a password manager, you can easily create separate passwords for each website, each bank, each social media portal, email account, etc. Even if someone steals data from Facebook, if you used that password exclusively for Facebook, the criminals won’t be able to log in to your bank or other services or portals with it.
Security Audit
Modern password managers act as a personal security auditor. They regularly scan your vault for weak, reused, or compromised passwords that have appeared in public data breaches. This allows you to proactively react and change threatened credentials.
By automating the most difficult tasks – creating and remembering unique, strong passwords – a password manager removes the cognitive load and frustration. As a result, applying the best security practices becomes effortless, leading to a dramatic increase in your overall level of protection.
Introducing Vaultwarden: Bitwarden for DIYers with a Heart for Privacy
Now that we know what a powerful tool a password manager is, it’s time to choose the right one. There are many players on the market, but for privacy enthusiasts and DIYers, one project stands out in particular: Vaultwarden.
Vaultwarden is an unofficial but fully functional server implementation of the popular Bitwarden password manager. It was written from scratch in the Rust programming language, and its main goal was to create an alternative that is incredibly lightweight and efficient. While the official, self-hosted version of Bitwarden requires 11 separate Docker containers to run and has significant hardware requirements, Vaultwarden runs in one neat container and consumes minimal resources. This means you can easily run it on a cheap mini-computer like a Raspberry Pi, an old laptop, or the smallest virtual machine in the cloud.
Most importantly, Vaultwarden is fully compatible with all official Bitwarden client applications – browser plugins, desktop applications, and mobile apps for Android and iOS. This means you get a polished and convenient user interface while maintaining full control over your server.
However, the real “icing on the cake” and the reason the self-hosting community has fallen in love with Vaultwarden is the fact that it unlocks all of Bitwarden’s premium features for free. Choosing Vaultwarden is not just about saving money, but a conscious decision that perfectly fits the ethos of independence and control. It’s not a “worse substitute”, but for many conscious users, simply a better choice, because its features and distribution model are fully aligned with the values of the open-source world.
The table below shows what you get by choosing Vaultwarden.
| Feature | Bitwarden (Free Plan) | Bitwarden (Premium Plan, ~$10/year) | Vaultwarden (Self-hosted) |
| --- | --- | --- | --- |
| Unlimited passwords & devices | Yes | Yes | Yes |
| Secure sharing (2 users) | Yes | Yes | Yes |
| Basic 2FA (TOTP, Email) | Yes | Yes | Yes |
| Advanced 2FA (YubiKey, FIDO2) | No | Yes | Yes |
| Integrated Authenticator (TOTP) | No | Yes | Yes |
| File attachments (up to 1GB) | No | Yes | Yes |
| Emergency Access | No | Yes | Yes |
| Vault health reports | No | Yes | Yes |
| Additional users (e.g. for family) | No | No | Yes |
Of course, this freedom comes with responsibility. Vaultwarden is a community project, which means there is no official technical support. In case of problems, you rely on documentation and help from other users on forums. There may also be a short delay in compatibility after major updates to official Bitwarden clients before Vaultwarden developers adapt the code. You are your own administrator – that’s the price for complete control.
The Power of Self-Hosting
The decision to use Vaultwarden is inseparably linked to a broader concept: self-hosting. It’s an idea that shifts the paradigm from being a passive consumer of digital services to being their active owner. This is a fundamental change in the balance of power between the user and the technology provider.
Full Data Control – Digital Sovereignty
The main and most important advantage of self-hosting is absolute control over your own data. When you use a cloud service, your passwords, notes, and other sensitive information are stored on servers belonging to a corporation. In the case of self-hosting, your password vault physically resides on hardware that you control – whether it’s a server at home or a rented virtual machine. No one else has access to it. You are the guardian of your data, which is the essence of digital sovereignty.
No More Vendor Lock-in
By using cloud services, you are dependent on their provider. A company can raise prices, change its terms of service, limit functionality, or even go bankrupt, leaving you with a data migration problem. Self-hosting frees you from this “ecosystem lock-in.” Your service works for as long as you want, on your terms.
Privacy
In today’s digital economy, data is the new oil. Providers of free services often earn money by analysing user data, selling it to advertisers, or using it to train artificial intelligence models. When you self-host services, this problem disappears. Your data is not a commodity. You set the rules and you can be sure that no one is looking at your information for commercial purposes.
Long-Term Savings
The subscription model has become the standard in the software world. Although a single fee may seem low, the sum of annual costs for all services can be significant. Self-hosting requires an initial investment in hardware (you can often use an old computer or a cheap Raspberry Pi) and is associated with electricity costs, but it eliminates recurring subscription fees. In the long run, it is a much more economical solution.
Customisation and Learning Opportunities
Self-hosting is not only about practical benefits, but also a fantastic opportunity to learn and grow. It gives you full flexibility in configuring and customising services to your own specific needs. It is a satisfying journey that allows you to better understand how the technologies we use every day work.
For a person concerned about the state of privacy on the internet, self-hosting is not a technical curiosity. It’s a logical and necessary step to regain control over your digital life.
An Impenetrable Fortress: How a VPN Creates a Private Bridge to Your Password Vault
Self-hosting Vaultwarden gives you control over your data, but how do you ensure secure access to it from outside your home? The simplest solution seems to be exposing the service to a public IP address and securing it with a so-called reverse proxy (e.g., Nginx Proxy Manager). This is a popular and good solution, but it has one drawback: your service is visible to the entire world. This means it is constantly being scanned by bots for vulnerabilities and weaknesses.
However, there is a much more secure architecture that changes the security model from “defending the fortress” to “hiding the fortress”. It involves placing Vaultwarden behind a VPN server.
What is a VPN and how does it work?
A VPN, or Virtual Private Network, creates a secure, encrypted “tunnel” through the public internet. When your laptop or smartphone connects to your home VPN server (e.g., using the popular and modern WireGuard protocol), it virtually becomes part of your home local network. All communication is encrypted and invisible to anyone else, including your internet service provider or the operator of the public Wi-Fi network in a café.
“VPN-Only” Architecture
In this configuration, the server running Vaultwarden has no ports open to the public internet. From the perspective of the global network, it is completely invisible. The only publicly accessible element is the VPN server, which listens on one specific port.
To access your password vault, you must first connect to the VPN server. After successful authorisation, your device is “inside” your private network and can freely communicate with the Vaultwarden server, just as if both devices were standing next to each other.
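To make this concrete, here is a minimal, purely illustrative WireGuard sketch of such a setup; the keys, addresses, port and domain are placeholders, and Vaultwarden itself would listen only on the private tunnel or LAN address:

# /etc/wireguard/wg0.conf on the VPN host: UDP 51820 is the only port exposed to the internet
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# Your laptop or phone
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32

# Client configuration (laptop or phone): only the private 10.8.0.0/24 subnet is routed through the tunnel
[Interface]
Address = 10.8.0.2/24
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = your.domain.com:51820
AllowedIPs = 10.8.0.0/24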
Layers of Security
This approach creates three powerful layers of protection:
Invisibility: This is the most important advantage. Cybercriminals and automated scanners cannot attack a service they cannot see. By eliminating the public access point to Vaultwarden, you reduce the attack surface by over 99%.
VPN Encryption: All communication between your device and the server is protected by strong VPN encryption. This is an additional layer of security, independent of the HTTPS encryption used by the Vaultwarden application itself.
Bitwarden End-to-End Encryption: Even in the extremely unlikely scenario that someone manages to break through the VPN security and listen in on network traffic, your vault data remains secure. It is protected by end-to-end encryption (E2EE), which means it is encrypted on your device using your master password before it is even sent to the server. An attacker would only see a useless, encrypted “blob” of data.
For the hobbyist administrator, this is a huge simplification. Instead of worrying about securing every single hosted application, you focus on maintaining the security of one, solid entry point – the VPN server. This makes advanced security achievable without having to be a cybersecurity expert.
More Than You Think: What You Can Store in Your Vaultwarden Vault
The true power of Vaultwarden extends far beyond storing passwords for websites. Thanks to its flexible structure and support for various data types, it can become your single, trusted “source of truth” for practically any sensitive information in your life. It’s not a password manager, it’s a secret manager.
Standard Data Types
Vaultwarden, just like Bitwarden, offers several predefined entry types to help you organise your data:
Logins: The obvious foundation – they store usernames, passwords, and also codes for two-factor authentication (TOTP). That said, when it comes to TOTP, I am a strong opponent of keeping those codes in the same application as logins and passwords; I’ll explain why in a moment.
Cards: A secure place for credit and debit card details. This makes online shopping easier, eliminating the need to manually enter card numbers and CVV codes.
Identities: Used to store personal data such as full name, addresses (billing, shipping), phone numbers, and email addresses. Ideal for quickly filling out registration forms.
Secure Notes: An encrypted text field for any information you want to protect.
Creative Uses of Secure Notes and Custom Fields
The real magic begins when we start creatively using secure notes, custom fields, and – crucially – file attachments (a premium feature in Bitwarden that is free in Vaultwarden). Your vault can become a digital “survival pack”, containing:
Software license keys: No more searching through old emails for your Windows or Office key.
Wi-Fi network passwords: Store passwords for your home network, work network, or a friend’s network.
Hardware information: Serial numbers, purchase dates, and warranty information for your electronics – invaluable in case of a breakdown or theft.
Medical and insurance data: Policy numbers, contact details for your insurer, a list of medications you take.
Answers to “security questions”: Instead of providing real data (which can often be found on the internet), generate random answers to questions like “What was your mother’s maiden name?” and save them in the manager.
Document data: Passport numbers, ID card numbers, driving license numbers.
Hardware configurations: Notes on the configuration of your router, home server, or other network devices.
Encrypted attachments: This is a game-changer. You can securely store scans of your most important documents: passport, birth certificate, employment contracts, and even your will. In case of a fire, flood, or theft, you have instant access to digital copies.
Comparing this to the popular but dangerous practice of keeping passwords in a notes app (even an encrypted one), the advantage of Vaultwarden is crushing. Notes apps do not offer browser integration, a password generator, a security audit, or phishing protection. They are simply a digital notepad, while Vaultwarden is a specialised, fortified fortress.
Magic at Your Fingertips: Browser Plugins and Mobile Apps
All this powerful, secure server infrastructure would be useless if using it every day were cumbersome. Fortunately, the ecosystem of Bitwarden clients makes interacting with your private Vaultwarden server smooth, intuitive, and practically invisible. It is this seamless client integration that is the bridge between advanced security and everyday convenience.
Configuration for Self-hosting: The First Step
Before you start, you must tell each client application where your server is located. This is a crucial step. In both the browser plugin and the mobile app, before logging in, you need to go into the settings (usually under the cogwheel icon) and in the “Server URL” or “Self-hosted environment” field, enter the address of your Vaultwarden instance. Remember that for this to work from outside your home, you must first configure your subdomain or be connected to the VPN server.
Browser Plugins: Your Personal Assistant
The Bitwarden plugin, which you will use to connect to your Vaultwarden server (for Edge, Chrome, Firefox, Safari, and others) is the command centre in your browser.
Autofill in practice: When you go to a login page, a small Bitwarden icon will appear on the form fields, and the plugin’s icon in the toolbar will show the number of credentials saved for that site. Clicking on it allows you to fill in the login and password with one motion.
Password generator at hand: When creating a new account, you can click the plugin icon, go to the generator, create a strong password, and immediately paste it into the appropriate fields on the site.
Automatic saving: When you log in to a site using credentials that you don’t yet have in your vault, the plugin will display a discreet bar at the top of the screen asking if you want to save them.
Full access to the vault: From the plugin, you can view and edit all your entries, copy passwords, 2FA codes, and also manage folders without having to open a separate website.
Mobile Apps (Android & iOS): Security in Your Pocket
Bitwarden mobile apps transfer all functionality to smartphones, integrating deeply with the operating system.
Biometric login: Instead of typing a long master password every time, you can unlock your vault with your fingerprint or a face scan (Face ID).
Integration with the autofill system: Both Android and iOS allow you to set Bitwarden as the default autofill service. This means that when you open a banking app, Instagram, or any other app that requires a login, a suggestion to fill in the data directly from your vault will appear above the keyboard.
Offline access: Your encrypted vault is also stored locally on the device. This means you have access to it even without an internet connection (and without a VPN connection). You can view and copy passwords. Synchronisation with the server will happen automatically as soon as you regain a connection.
After the initial effort of configuring the server, daily use becomes pure pleasure. All the complexity of the backend – the server, containers, VPN – disappears, and you only experience the convenience of logging in with a single click or a tap of your finger. This is the ultimate reward for taking back control.
Storing TOTP Codes Directly in Vaultwarden and Why It’s a Bad Idea
One of the tempting premium features that Vaultwarden provides for free is the ability to store two-factor authentication (TOTP) codes directly in the same entry as the login and password. At first glance, this seems incredibly convenient – all the data needed to log in is in one place. The browser plugin can automatically fill in not only the password but also copy the current 2FA code to the clipboard, shortening the entire process to a few clicks. No more reaching for your phone and rewriting six digits under time pressure.
However, this convenience comes at a price, and that price is the weakening of the fundamental principle on which two-factor authentication is based. The idea of 2FA is to combine two different types of security: something you know (your password) and something you have (your phone with the code-generating app). By storing both of these elements in the same digital safe, which is Vaultwarden, you reduce them to a single category: things you know (or can find out by breaking the master password). This creates a single point of failure. If an attacker manages to get your master password to the manager in any way, they get immediate access to both authentication factors. The security barrier that was supposed to require compromising two separate systems is reduced to one.
Therefore, although storing TOTP codes in a password manager is still much better than not using 2FA at all, from the point of view of maximum security, it is recommended to use a separate, dedicated application for this purpose (such as Aegis Authenticator, Authy, or Google Authenticator) installed on another device – most often a smartphone. This way, even if your password vault is compromised, your accounts will still be protected by a second, physically separate layer of security.
Configuring the Admin Panel
Regardless of whether you are the captain of a Docker container ship or a traditionalist who nurtures system services, at some point you will want to look behind the scenes of your Vaultwarden. This is what the admin panel is for – a secret command centre from which you can manage users, view diagnostics, and configure global server settings. By default, however, it is disabled, because like any good fortress, it doesn’t open its gates to just anyone. And after attempting to enter the panel, you will get an error message:
“The admin panel is disabled, please configure the ‘ADMIN_TOKEN’ variable to enable it”
To activate it, you must set a special “key” – the administrator token.
Scenario 1: Docker Lord
If you ran Vaultwarden using Docker Compose (which is the most popular and convenient method), you set the admin panel key using the ADMIN_TOKEN environment variable. However, for security reasons, you should not use plain, open text there. Instead, you generate a secure Argon2 hash for the chosen password, which significantly increases the level of protection.
Here is the complete and correct process:
Generate the Password Hash: First, come up with a strong password that you will use to log in to the admin panel. Then, using the terminal on the server, execute the command built into Vaultwarden to create its secure hash:
docker exec -it vaultwarden /vaultwarden hash
After entering the password twice, copy the entire generated string that starts with $argon2id$.
Update the docker-compose.yml file: Now add the prepared hash to the docker-compose.yml file. There are two critical rules here:
Every dollar sign $ in the hash must be doubled (e.g., $argon2id$ becomes $$argon2id$$) to avoid errors in Docker Compose.
To double the dollar signs automatically, you can pipe the hash through sed:
echo '$argon2id$v=1...REMAINDER_OF_TOKEN' | sed 's#\$#\$\$#g'
The value of ADMIN_TOKEN must not be wrapped in any apostrophes or quotation marks.
Correct configuration:
services:
vaultwarden:
image: vaultwarden/server:latest
container_name: vaultwarden
restart: unless-stopped
volumes:
- ./data:/data
ports:
- "8080:80"
environment:
# Example of a hashed and prepared token:
- ADMIN_TOKEN=$$argon2id$$v=19$.....
Apply Changes and Log In: After saving the file, stop and rebuild the container with the command:
docker-compose down && docker-compose up -d
Your admin panel, available at https://your.domain.com/admin, will now ask for a password. To log in, type the password you chose in the first step, not the generated hash.
Scenario 2: Traditionalist with a system service (systemd)
If you decided to install Vaultwarden as a native system service, for example using systemd, the configuration looks a bit different, but the idea remains the same. Instead of the docker-compose.yml file, environment variables are most often stored in a dedicated configuration file. This is usually an .env file or similar, which is pointed to by the service file.
For example, you can create a file /etc/vaultwarden.env and put your token in it:
ADMIN_TOKEN=your_other_very_secure_token
Then you must make sure that the vaultwarden.service service file (usually located in /etc/systemd/system/) contains a line that loads this file with variables: EnvironmentFile=/etc/vaultwarden.env. After making the changes, you must reload the systemd daemon configuration (sudo systemctl daemon-reload), and then restart the Vaultwarden service itself (sudo systemctl restart vaultwarden). From now on, the admin panel at https://your.domain.com/admin will be active and secured with your new, shiny token.
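For orientation, the relevant fragment of such a unit file might look roughly like this (a sketch only; the binary path, user, and data directory depend on how you installed Vaultwarden):

# /etc/systemd/system/vaultwarden.service (fragment)
[Service]
User=vaultwarden
# Loads ADMIN_TOKEN and any other variables defined in the env file
EnvironmentFile=/etc/vaultwarden.env
ExecStart=/usr/bin/vaultwarden
WorkingDirectory=/var/lib/vaultwarden
Restart=on-failure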
Summary: Why Vaultwarden on a VPN Server is Your Personal Fort Knox
We have analysed the journey from the frustration of weak passwords to building your own digital fortress. The solution presented here is based on three powerful pillars that in synergy create a system far superior to the sum of its parts:
The Power of a Password Manager: It frees you from the obligation of creating and remembering dozens of complicated passwords. It provides convenience with autofill and strength with randomly generated, unique credentials for each service.
The Control of Self-Hosting: It gives you absolute sovereignty over your most valuable data. You are the owner, administrator, and guardian of your digital safe, free from corporate regulations, subscriptions, and privacy concerns.
The Invisibility of a VPN: It elevates security to the highest level, making your service invisible to the public internet. Instead of building ever-higher walls around a visible fortress, you simply hide it from the sight of potential attackers.
The combination of Vaultwarden, with its lightness and free premium features, and an architecture based on a VPN, creates a solution that is not only more secure and private than most commercial cloud services but also extremely flexible and satisfying to manage.
It’s true, it requires some effort and a willingness to learn. But the reward is priceless: regaining full control over your digital security and privacy. It’s time to stop changing your cat’s name. It’s time to build your own Fort Knox.